
AI in Healthcare Devices and the Challenge of Data Privacy – with Dr. Ankur Sharma at Bayer

In healthcare, patient data is the basis of diagnosis, treatment, and trust. With digital health systems and AI tools becoming central to care delivery, healthcare providers collect exponentially more sensitive patient data. This data explosion has also expanded the attack surface, with countries adopting different frameworks and approaches to managing healthcare data privacy and security.

The major global data privacy frameworks (GDPR in Europe, HIPAA and CCPA in the U.S., the APEC Privacy Framework in Asia-Pacific, and POPIA in South Africa) share the goal of safeguarding patient data, but differ in their enforcement, scope, and technological maturity.

A 2025 paper in the Digital Health Journal explains that these disparities result in fragmented compliance practices, inconsistent breach responses, and challenges in cross-border data sharing.

The paper also highlights that limited IT infrastructure and semantic incompatibilities further weaken security systems. As a result, healthcare organizations struggle to maintain security, interoperability, and patient trust simultaneously, a challenge compounded by the rise of AI and digital health technologies that increase data volume and complexity.

This challenge becomes even more complex when third-party vendors are involved. Hospitals and clinics often rely on external companies for electronic health records, cloud storage, analytics, and AI solutions, each of which introduces additional layers of data access, processing, and potential vulnerability.

According to the Q3 2025 statistics published by the HIPAA Journal, business associates (third-party vendors, including AI developers) were responsible for 12 of the reported breaches, affecting 88,141 individuals in August 2025 alone, highlighting the critical role of third parties in data breach exposure.

On a recent episode of the ‘AI in Business’ podcast, Emerj Editorial Director Matthew DeMello sat down with Dr. Ankur Sharma, Head of Medical Affairs, Medical Devices and Digital Radiology at Bayer, to discuss the challenges and opportunities of AI adoption in healthcare, covering regulatory frameworks, data privacy and governance, and clinical trust.

This conversation highlights two critical insights healthcare organizations must consider to adopt and scale AI:

  • Standardize governance to unlock safe AI collaboration: Unifying data governance and regulations to enable secure sharing and accelerate generative AI (GenAI) adoption in healthcare.
  • Bridge reimbursement gaps for scalable AI adoption: Improving model transparency and creating reimbursement models that reward diagnostic and efficiency gains, to speed up AI adoption in healthcare.

Listen to the full episode below:

Guest: Dr. Ankur Sharma, Head of Medical Affairs, Medical Devices and Digital Radiology at Bayer

Expertise: Clinical Research, Medical Devices, Medicine

Brief Recognition: Ankur leads the Medical Affairs Capability Cluster at Bayer, overseeing all medical devices and software-as-a-medical-device (SaMD), including AI-driven solutions. He is a medical professional with extensive experience across the medical device lifecycle, from development to post-launch. He also brings a strong background in clinical research. Ankur holds a degree in Mechanical Engineering, a Bachelor of Medicine and Surgery, and has pursued advanced studies in Clinical Design and Research at the University of California, San Diego.

Standardize Governance to Unlock Safe AI Collaboration

Ankur begins by explaining the practical challenges of data management. Within a healthcare system, strict rules govern the access and sharing of patient information. But even within a single institution, data is spread across multiple systems, each managed by different vendors obligated to protect it.

The problem compounds because AI tools are often developed by third-party companies outside the healthcare institution. Getting all these separate entities (hospitals, data system providers, and AI developers) to collaborate and share data safely is a major obstacle.
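To make that coordination problem concrete, here is a minimal sketch (purely illustrative; the entities, purposes, and policy rules are hypothetical, not drawn from Bayer or any real institution) of why multi-party data sharing stalls: each party enforces its own policy layer, and a request proceeds only when every layer independently agrees.

```python
from dataclasses import dataclass

# Hypothetical sketch: each entity that touches patient data (hospital,
# records vendor, AI developer) enforces its own access policy, and a
# request succeeds only if every independent layer permits it.

@dataclass
class DataRequest:
    requester: str      # e.g., an external AI developer
    purpose: str        # e.g., "direct_care" or "model_training"
    deidentified: bool  # whether patient identifiers have been removed

def hospital_policy(req: DataRequest) -> bool:
    # Hospital allows identified data only for direct patient care.
    return req.deidentified or req.purpose == "direct_care"

def records_vendor_policy(req: DataRequest) -> bool:
    # The records vendor contractually limits secondary use to de-identified data.
    return req.purpose == "direct_care" or req.deidentified

def ai_developer_policy(req: DataRequest) -> bool:
    # The AI developer's compliance team accepts only de-identified data.
    return req.deidentified

def can_share(req: DataRequest) -> bool:
    """Data flows only when every policy layer agrees, which is why a single
    overly cautious layer can block an otherwise legitimate collaboration."""
    return all(policy(req) for policy in
               (hospital_policy, records_vendor_policy, ai_developer_policy))

if __name__ == "__main__":
    print(can_share(DataRequest("ai_dev", "model_training", deidentified=True)))   # True
    print(can_share(DataRequest("ai_dev", "model_training", deidentified=False)))  # False
```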

Dr. Sharma points out that, in the absence of clear, standardized regulations, healthcare institutions often err on the side of caution by being overly restrictive with data access. While this approach ensures safety, it can also limit the potential of AI tools to assist physicians and enhance care.

He feels strongly that AI governance in healthcare is currently fragmented, with different approaches taken across institutions: some create their own internal governance boards, while others rely on external vendors or advisory input to define how data can be used safely and securely.

Ankur then explains that AI regulation in healthcare varies globally, depending on where the technology is deployed. In the U.S., the FDA oversees AI systems classified as Software as a Medical Device (SaMD), the same regulatory framework applied to physical medical devices such as implants and diagnostic equipment.

In Europe, oversight is provided by the EU AI Act and various notified bodies, while other countries have their own systems.

He clarifies that SaMD tools are those designed to directly impact clinical decisions or patient outcomes, for example, AI that helps physicians diagnose diseases or predict risks based on patient data. These are regulated because their outputs influence clinical actions.

In contrast, non-regulated AI tools are those used for patient support or administrative assistance, like an AI that translates a radiology report into plain language for a patient to understand. These don’t directly affect clinical outcomes, so they currently fall outside strict regulatory oversight.

Ankur notes that, to his knowledge, there are currently no generative large language models classified as Software as a Medical Device (SaMD). He explains that the U.S. SaMD space today consists entirely of predictive models, where a given input reliably produces a corresponding, predictable output.

“As far as I’m aware, there are no generative LLMs that are in the SaMD space. The SaMD space currently in the U.S. consists of all predictive models, meaning that if we provide an input, we can be certain it will yield a prediction based on that input, and it will always work that way.

There’s a set structure to input and output for predictive models. It’s not creating its own content. And these are really where we are right now in the regulated space. There is still some challenge on the evolution of what we’re going to see from notified bodies like the FDA or the agency around how they want to regulate the use of these generative models in healthcare, but we don’t have absolute clarity in the U.S. on that. Especially in Europe, with the EU AI Act, there is some of this that’s starting to happen.”

–Dr. Ankur Sharma, Head of Medical Affairs, Medical Devices and Digital Radiology at Bayer

He suggests that this emerging framework may eventually pave the way for the first generation of GenAI tools to be recognized and regulated as medical devices.

Bridge Reimbursement Gaps for Scalable AI Adoption

Ankur contrasts how AI functions in clinical research versus real-world patient care. In clinical research, operations take place within a tightly controlled environment; variables are limited, allowing researchers to clearly trace outcomes to specific causes. But in real-world practice, those guardrails disappear. Patient care involves numerous variables, making it harder to monitor and validate AI outputs as precisely as in a trial setting.

He explains that, with predictive models, physicians can easily interpret and trust the results because the relationship between inputs and outputs is fixed. For instance, if a model receives input A, it should yield a predictable output B. If the result differs, the physician can still assess what went wrong and adjust accordingly.

However, GenAI models don’t offer that same transparency. Physicians can’t see how the system arrived at its output, which makes it difficult to assess the accuracy or reliability of its clinical decisions. This opacity poses a challenge for ensuring safety and accountability in patient care.
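The distinction Ankur draws can be illustrated with a small sketch (simplified, assumed logic; the risk rule, weights, and phrasings below are hypothetical, not any real clinical model): a predictive model is a fixed, inspectable mapping from input to output, while a generative model samples its output, so the same input can produce different text on each call.

```python
import random

# Hypothetical illustration (not a real clinical model): a predictive
# SaMD-style tool is a fixed mapping from input to output.
def predictive_risk_model(age: int, biomarker: float) -> str:
    """Deterministic rule: identical inputs always yield the identical output,
    so a physician can trace any result back to the inputs that produced it."""
    score = 0.02 * age + 0.5 * biomarker  # fixed, inspectable weighting
    return "high-risk" if score > 2.0 else "low-risk"

# A generative model, by contrast, samples its output. This stand-in mimics
# an LLM producing patient-friendly text.
def generative_summary(report: str) -> str:
    """Non-deterministic: repeated calls on the same input may differ, and the
    path from input to output is not visible to the clinician."""
    phrasings = [
        f"Summary: {report} No acute findings.",
        f"In brief: {report} Findings appear unremarkable.",
        f"Patient note: {report} Nothing urgent was seen.",
    ]
    return random.choice(phrasings)

if __name__ == "__main__":
    # Same input, same answer, every run: the fixed structure Ankur describes.
    assert predictive_risk_model(60, 1.9) == predictive_risk_model(60, 1.9)
    print(predictive_risk_model(60, 1.9))  # always "high-risk" for this input

    # Same input, potentially different text on each call.
    for _ in range(3):
        print(generative_summary("Chest X-ray reviewed."))
```

The assertion on the predictive rule holds on every run, while the generative stand-in can legitimately vary, which is the auditability gap regulators are still working through.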

Ankur adds that, until these issues are resolved, GenAI’s most immediate use cases will likely remain in non-regulated areas, focusing on tools that improve efficiency rather than directly influencing diagnosis or treatment.

These include applications that help doctors write reports more efficiently or assist patients in better understanding their care. Over time, he expects more advanced versions of these tools to evolve toward regulated use cases, supporting diagnostic or treatment planning once safety, transparency, and reliability standards are established.

Ankur also highlights that one of the biggest challenges in healthcare AI adoption is reimbursement. Many AI tools, he explains, are not currently reimbursed by insurers or healthcare systems. Without a clear reimbursement structure, hospitals and clinics have little financial incentive to adopt these technologies, slowing their integration into everyday care.

He notes that traditional reimbursement models are outcome-based, meaning payments are tied to measurable improvements in patient outcomes. However, many AI tools don’t directly produce an outcome; they assist with diagnosis, planning, or workflow efficiency. These contributions still create value, but they don’t fit neatly into existing reimbursement frameworks.

Although the U.S. government has recently begun exploring new reimbursement pathways for AI in healthcare, signaling a potential shift in how these tools are funded, the specifics remain unclear. This development suggests that policymakers are starting to recognize the role AI can play in improving clinical efficiency and care delivery.

He concludes that if these reimbursement pathways become standardized, they could significantly accelerate the adoption of AI across the healthcare system. Since healthcare has historically been slow to adopt digital technologies, this shift could mark a major turning point, paving the way for the broader use of digital and AI-powered tools that ultimately benefit both physicians and patients.
