
Ensuring Responsible and Ethical Use of AI Technology


In this part of his Pharmaceutical Executive video interview, Brice Challamel, Vice President, AI Products & Platforms, Moderna, discusses how Moderna is ensuring responsible and ethical use of AI technology within its operations.

With increasing reliance on AI for tasks like data analysis and decision support, how is Moderna ensuring responsible and ethical use of AI technology within its operations?

This is a great question, and one that we always put first in the order of priorities. We are not in charge of designing the technology itself. We either use APIs or use products that were designed by other companies, which we call hyperscalers, and they have their own responsibility. Ours is mainly focused on how we use it: good usage. So, the first thing is that we published a code of conduct, which, by the way, is a public code of conduct; you will find it on Moderna.com in our corporate policies. It is an evolving document; we are learning along the way what the right code of conduct should be. But it is a combination of, of course, what is legal, but also what is desirable in the responsible way that we use AI. For instance, we want to make sure that we respect human integrity and that we respect human diversity in every part of the company's life. That is vital to us, especially as a life science company. We aim to provide universal products that can serve people and save lives in every part of society and in every country in the world. We have an even greater desire and ambition on diversity than any other company that I know of. It is quintessential to us. And AI can come with baked-in biases inherited from its training datasets; this is no new topic. It is part of the trade. So how we use AI, how thoughtful we are about AI, is the buffer between that layer of training, that corpus and dataset with its inherent biases, and the way that we leverage it with a constant mindset toward respect of human integrity and respect of human diversity.

There's a series of principles that are extended and layered in the way that we build our code of conduct, and then translated into a more granular user policy, which is something that you have to read, understand, learn, and demonstrate you've understood and learned before you get access to AI products here. So even though we make them universally accessible, they require training before you're granted access: the user policy part of this is to understand what you should be doing and what you should not be doing with the AI we're giving you, because with that power comes responsibility. At an even greater managerial level, we've talked about the code of conduct and the user policy; the third level is governance, granular governance of use cases. If you're doing a GPT for yourself that helps you on a day-to-day basis, fine. If you're doing a GPT for your team, then you want your manager to approve that GPT, and the team to have their own say in how the GPT is built and organized, and to look at the way that you keep it up to date. And if you're creating a GPT that is going to be business critical and is going to have an impact on the company, then we're going to have a governance of that GPT. We don't want to do this for the thousands of GPTs; that would be like using a sledgehammer to drive in a nail, right? It doesn't make sense.
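
Challamel doesn't describe any tooling behind these tiers, but the escalating approvals he lays out map naturally to a simple access rule. Below is a minimal sketch of that idea; the names (`Scope`, `required_approvals`, and the approval labels) are illustrative assumptions, not Moderna's actual process or code:

```python
from enum import Enum

class Scope(Enum):
    PERSONAL = "personal"                     # a GPT you build for yourself
    TEAM = "team"                             # a GPT shared with your team
    BUSINESS_CRITICAL = "business_critical"   # a GPT the company depends on

def required_approvals(scope: Scope) -> list[str]:
    """Map a GPT's scope to the governance it needs, per the three
    tiers described above. All labels here are hypothetical."""
    if scope is Scope.PERSONAL:
        # Personal experimentation is free once the user has completed
        # the code-of-conduct / user-policy training.
        return []
    if scope is Scope.TEAM:
        # Team GPTs need the manager's sign-off and the team's input
        # on how the GPT is built and kept up to date.
        return ["manager_approval", "team_review"]
    # Business-critical GPTs get full product-style governance.
    return ["manager_approval", "team_review", "governance_board_review"]

print(required_approvals(Scope.TEAM))  # ['manager_approval', 'team_review']
```

The point of the tiering is the one he makes explicitly: full governance is reserved for the few GPTs that matter to the company, so the thousands of personal ones stay friction-free.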

But we want to be mindful of the couple dozen GPTs that have the potential to become critical in the future to who we are and how we work as a company, and provide the right level of governance for those: how they are designed, how they are trained, how they are updated, who owns them, and when we need to update them. Because those are products. We call ChatGPT a product, and it's true, but an AI agent is also a product. That's why my title is about products and platforms: you can also think of ChatGPT as a platform that delivers products, and each AI agent is a product. Then we have to apply the product mindset to it and be as demanding of that product as we would be of any other piece of technology in our company. But we can't do this for everything that anyone does; we need to be mindful of priorities, and also give people a sense of experimentation, of freedom on their own personal use cases. So we are learning as we go; there is still a lot of research and thoughtfulness that goes into this. But at those three levels, the code of conduct, the user policy, and the granular governance, we are learning every day what makes sense, what keeps us safe, what keeps the company safe, and how we can make sure that our increasing reliance on AI, which is an obvious momentum that I don't see slowing in any foreseeable future, always happens in a safe learning environment, and that we're always making the best of it. So this is no small topic, and it's very important to us: reducing the risk with AI and staying safe with it, both in the understanding that we have of it in our ways of working and in the usage that we make of it in the different parts of the company.
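
He frames each business-critical agent as a product with an owner, a design, training sources, and an update cycle. Purely as illustration, one way to make that concrete is a small metadata record; every field name and the 90-day review cadence below are assumptions for the sketch, not anything Moderna has published:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AgentProductRecord:
    """Product-style metadata for a governed AI agent: who owns it,
    how it was designed and trained, and when it must be reviewed.
    The schema is hypothetical."""
    name: str
    owner: str                        # accountable product owner
    design_doc: str                   # how the agent is designed
    training_sources: list[str] = field(default_factory=list)
    last_updated: date = field(default_factory=date.today)
    review_interval_days: int = 90    # assumed review cadence

    def review_due(self, today: date) -> bool:
        """True when the agent is overdue for a governance review."""
        return (today - self.last_updated).days >= self.review_interval_days

rec = AgentProductRecord(name="example-critical-gpt",
                         owner="ai-products-team",
                         design_doc="(placeholder link)")
print(rec.review_due(date.today()))   # False right after creation
```

Treating the record as the unit of governance is what lets the "product mindset" he describes scale: ownership, training provenance, and review dates become explicit fields rather than tribal knowledge.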
