
Pharmaceutical Executive: June 2025, Volume 45, Issue 5

Rethinking Audience Modeling: AI, Regulations, & More

The intersection of artificial intelligence and direct-to-consumer advertising presents unprecedented opportunities and evolving challenges as regulations and technologies reshape the biopharma landscape.

Luk Arbuckle, Global AI Practice Leader, IQVIA Applied AI Science

David Reim, Senior Director, Product & Strategy, IQVIA Digital

Key Takeaways

  • Responsible AI use in healthcare marketing is essential as marketers increasingly use AI to create targeted audiences for direct-to-consumer advertising, especially when handling sensitive health data.
  • Evolving regulations and rising consumer privacy concerns demand stricter data protection, transparency, and ethical oversight—marketers must navigate complex legal landscapes, including inconsistent state-level privacy laws and greater FTC scrutiny.
  • New AI-driven methods such as federated learning and synthetic data modeling offer a safer, more secure way to build effective marketing audiences without exposing raw health data, setting a stronger foundation for trustworthy and compliant audience targeting.

Marketers are racing to harness artificial intelligence (AI) in direct-to-consumer (DTC) healthcare advertising, but who’s pausing to ask whether we’re doing it responsibly? With regulations evolving rapidly, ensuring responsible practices is more important than ever.

DTC advertising is a powerful and proven tool for driving awareness and encouraging education and engagement to improve health and wellness among relevant consumers. Foundational to the success of DTC is the creation of audiences that are likely to be interested in these messages and act on them. However, the creation of consumer audiences must be done in a manner that complies with US law and regulation, including consumer protection, privacy, and AI safety/security.

Recently, the use of AI to create consumer audiences has been driving the need for additional protections on both the data and the insights derived from it. As AI becomes more integral to DTC advertising, it’s poised to help marketers meet expectations for consumer protection and fairness while at the same time driving more effective audience creation.

Complex challenges to DTC advertising

Key challenges to DTC advertising include:

  • Consumer protection
  • Shifting regulations
  • Technological changes

These challenges present particular risks for healthcare marketers. This is especially true if personal health data is being used in audience modeling. While the use of this data has resulted in audiences that are more receptive to a particular campaign, the inherent sensitivity of this data has significantly increased concerns about fair and transparent practices, thereby pushing consumer protection to the top of marketers’ agendas.

Laws to protect consumer data, especially highly personal areas such as health data, are evolving quickly. In the US, a growing and uneven patchwork of AI and privacy regulations has sprung up as state governments move to ensure consumers’ information is protected in the absence of federal laws. Many of these state laws classify health data as sensitive personal information that requires extra layers of protection.

The regulatory environment is also creating conflicting pressures on healthcare marketers. While the FDA has documented compelling evidence of DTC advertising’s positive impact, the US Federal Trade Commission (FTC) is tightening its oversight, especially on sensitive health data. The Commission takes action against companies it believes are engaging in “unfair practices.”

Any enforcement action by the FTC or state regulators puts a company’s reputation at risk, and can also bring significant fines and required changes to business practices. For healthcare marketing, unfair practices could include:

  • Insufficient de-identification
  • Improperly sharing healthcare information with third parties
  • Unethical or deceptive data practices
  • Inadequate data protection
  • Misalignment between privacy policy and data practices

Technology continues to present both challenges and opportunities. Marketers need to navigate ongoing changes to third-party cookie policies, implement effective controls to separate data, and harness advances in machine learning. The technologies underpinning DTC advertising, especially in digital channels, require continuous evaluation against external expectations and the risk tolerance of the companies involved.

These competing forces are driving a new, more rigorous thinking around responsible data use in DTC advertising, including the vital process of audience creation. Because of these challenges, better ways of meeting transparency expectations are needed.

Need for transparency, accuracy, and trustworthiness

Healthcare marketers need to prioritize data protection while ensuring the right messaging reaches audiences that will benefit from it the most. Marketers have adapted to privacy demands with robust de-identification methods that remove identifying elements, but the industry lacks widespread adoption of standardized practices. This creates an opportunity for more advanced, responsible approaches. As AI and other emerging technologies reshape the landscape, more sophisticated strategies are needed to balance data protection with responsible use.

Because of these challenges, audience modeling needs a new approach. Given the advances in analytics and machine learning, we need to build consumer audiences with transparency, accuracy, and trustworthiness while developing solutions that embrace responsible, secure, and fair data practices. Establishing a foundation for trustworthy data use is vital to this effort.

A transformative approach to audience modeling

How do we create effective, responsible marketing campaigns that help people in their health journeys? Applied AI—including machine learning, natural language processing, and generative AI—offers a compelling answer. AI can transform audience development while meeting high consumer protection standards. By using only what’s necessary and transforming data into abstracted insights, we reduce unnecessary exposure and build systems that are inherently more secure, resilient, and resistant to misuse.

This transformation can be achieved through a technique that transforms health data into a compressed representation known as an “embedding space”: a highly abstract environment in which trends and relationships are preserved but the original data is no longer visible or accessible (see Figure 1).

Think of it as a map that shows only the key patterns and connections without revealing the individual details behind them. In this space, synthetic trends are generated, which are patterns that reflect the underlying health behaviors without exposing individual-level details. This enables responsible and trustworthy use of data in audience modeling.
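The idea above can be sketched in a few lines of code. This is a minimal, illustrative example only: it stands in a simple PCA projection for the learned embedding, and generates synthetic trends by sampling the distribution of the embedding space rather than replaying any individual record. The data, dimensions, and method are all hypothetical, not the authors' production technique.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical de-identified health features (rows = records, cols = attributes).
X = rng.normal(size=(500, 20))

# Build an embedding space (PCA here as an illustrative stand-in for a learned encoder).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 5                                # compressed dimensionality of the embedding space
Z = Xc @ Vt[:k].T                    # embeddings: trends preserved, raw rows abstracted away

# Generate synthetic trends by sampling the distribution of the embedding space,
# not by copying or replaying any individual-level record.
mu, cov = Z.mean(axis=0), np.cov(Z, rowvar=False)
synthetic_trends = rng.multivariate_normal(mu, cov, size=200)

print(synthetic_trends.shape)        # (200, 5)
```

Downstream audience modeling would then consume only `synthetic_trends`, never the original matrix `X`.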

In contrast to older approaches that relied on centralized sensitive data, this method uses a federated design that always keeps health and consumer data separate. In one environment, health data is transformed into synthetic health trends. In another, consumer data is transformed into synthetic consumer trends. Only these synthetic trends—already abstracted and secure—are brought together in a separate environment for audience modeling. This approach protects sensitive data and ensures that high-quality audiences can be built without ever exposing the raw information behind them. This process sets a strong foundation for responsible and trustworthy audience creation.
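The federated separation described above can be sketched as follows, under the same illustrative assumptions as before: each environment abstracts its own data into synthetic trends locally, and only those abstracted outputs are combined in a separate modeling environment. All names, shapes, and the PCA-plus-sampling synthesis are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def synthesize(data, k, rng):
    """Abstract a dataset into k-dimensional synthetic trends inside its own environment."""
    centered = data - data.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    z = centered @ Vt[:k].T
    mu, cov = z.mean(axis=0), np.cov(z, rowvar=False)
    return rng.multivariate_normal(mu, cov, size=len(data))  # synthetic trends, not raw rows

# Environment A: health data never leaves; only synthetic health trends do.
health_trends = synthesize(rng.normal(size=(300, 12)), k=4, rng=rng)

# Environment B: consumer data never leaves; only synthetic consumer trends do.
consumer_trends = synthesize(rng.normal(size=(300, 8)), k=4, rng=rng)

# Separate modeling environment: only the abstracted trends are ever brought together.
modeling_inputs = np.hstack([health_trends, consumer_trends])
print(modeling_inputs.shape)  # (300, 8)
```

The key design point is that `synthesize` runs where the data lives; the modeling environment receives only its output.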

Two critical AI concepts—safety and security—are essential for responsible and trustworthy healthcare marketing. AI safety refers to the responsible design and use of AI systems in ways that minimize harm and align with intended, ethical uses. AI security focuses on protecting data and systems from unauthorized access or misuse. Companies can voluntarily adopt safe AI practices to promote accountability, but guardrails—including technical, organizational, and policy-based controls—are needed to enforce appropriate use, especially as risk increases. The higher the risk, the stronger and more layered these safeguards should be.
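One way to make the "higher risk, stronger safeguards" principle operational is a simple policy check that maps risk tiers to required guardrails. The tier names and control names below are illustrative assumptions, not a standard taxonomy.

```python
# Hypothetical mapping of risk tiers to required guardrails (technical,
# organizational, and policy-based controls layer up as risk increases).
REQUIRED_CONTROLS = {
    "low": {"access_logging"},
    "medium": {"access_logging", "human_review"},
    "high": {"access_logging", "human_review", "ethics_board_signoff"},
}

def missing_controls(risk_tier: str, controls_in_place: set) -> set:
    """Return the guardrails still needed before a use case at this tier may proceed."""
    return REQUIRED_CONTROLS[risk_tier] - controls_in_place

# A high-risk use case with only logging in place still needs two more layers.
print(missing_controls("high", {"access_logging"}))
```

A check like this would typically gate deployment pipelines, with an ethics board owning the high-risk tier.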

Companies need to monitor their overall efforts and AI practices through structured operations and ethical oversight, especially as formal standards and governance mechanisms are still maturing. Ethics boards can be particularly effective in guiding responsible data use, helping to ensure safeguards are upheld and consumer confidence maintained.

Adherence to these principles is indispensable for building trust with patients, demonstrating that health data is used responsibly and transparently to improve health outcomes and wellness.

Looking forward: The potential of AI to shape digital marketing

The intersection of AI and DTC advertising presents unprecedented opportunities and evolving challenges as regulations and technologies reshape the landscape. The path forward requires transparency, responsible practices, and innovative audience modeling to maintain trust and drive meaningful engagement. Those who prioritize trust and innovation will shape healthcare marketing’s future.

Luk Arbuckle is Global AI Practice Leader, IQVIA Applied AI Science; and David Reim is Senior Director, Product & Strategy, IQVIA Digital.

