Beyond the Algorithm: Navigating the Statistical Turn in MedTech
Most software is built on a promise of predictability. You write a line of code, you define an input, and you receive a deterministic result. But as Ashkan Rasooli explains in a recent conversation on the Global Medical Device Podcast, artificial intelligence has effectively broken that contract. We are moving away from the deterministic and toward the statistical, a shift that carries profound implications for safety, regulation, and the very definition of a medical device.
In the episode titled "How Artificial Intelligence is Impacting the MedTech Industry," Rasooli, an expert in quality management systems and biomedical engineering, joins host Tim Purcell to unpack why the industry is currently caught between massive excitement and legitimate regulatory anxiety. The core of the problem is that while we understand the technology underlying AI, the sheer scale of modern models makes it nearly impossible to predict a specific outcome when the system is presented with new data.
The Architecture of Uncertainty
Rasooli clarifies that the word "intelligence" in AI invites a projection of human-like qualities that creates unnecessary fear. Instead, he suggests viewing AI as a "statistical machine." Whether it is Software as a Medical Device (SaMD) or software embedded within a device, these systems are essentially decision-making or generation engines that learn statistical relationships between inputs and outputs.
This statistical nature is both AI's greatest strength and its most persistent shadow. It allows for the identification of patterns in mountains of data that no human could ever process, but it also creates a "black box" effect that challenges traditional validation methods.
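The deterministic-versus-statistical contrast can be made concrete with a toy sketch. Everything here is hypothetical and deliberately simplified (a frequency counter standing in for a real model): a classic rule always returns the same answer, while a "statistical machine" only knows what its training data showed and degrades to a guess on unseen inputs.

```python
# Toy contrast (hypothetical, not a medical algorithm): a deterministic rule
# versus a minimal "statistical machine" that learns input-output frequencies.

def deterministic_dose_check(dose_mg: float) -> bool:
    """A classic rule: same input, same output, every time."""
    return dose_mg <= 50.0

class FrequencyModel:
    """A minimal statistical model: it only knows what its data showed."""
    def __init__(self):
        self.counts = {}  # feature -> (negative count, positive count)

    def fit(self, samples):
        for feature, label in samples:
            neg, pos = self.counts.get(feature, (0, 0))
            self.counts[feature] = (neg + (label == 0), pos + (label == 1))

    def predict_proba(self, feature) -> float:
        """Estimated probability of a positive label; 0.5 when unseen."""
        neg, pos = self.counts.get(feature, (0, 0))
        total = neg + pos
        return pos / total if total else 0.5

model = FrequencyModel()
model.fit([("lesion_type_a", 1), ("lesion_type_a", 1), ("lesion_type_b", 0)])

print(deterministic_dose_check(40.0))        # always True for 40.0
print(model.predict_proba("lesion_type_a"))  # learned from data: 1.0
print(model.predict_proba("lesion_type_c"))  # never seen: falls back to 0.5
```

The last line is the "black box" problem in miniature: the model's answer for novel data is a statistical artifact of what it was fed, not a guaranteed behavior you can trace to a requirement.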
The Bias Trap: When Correlation Mimics Causation
One of the most sobering segments of the discussion centers on how AI can amplify existing societal biases. Because AI models do not understand the world—they only understand the data they are fed—they often fail to distinguish between correlation and causation.
Rasooli highlights several critical examples where this failure becomes a matter of life and death:
- Dermatology and Skin Color: AI trained primarily on lighter skin tones may deprioritize darker skin in cancer detection because it views the lower historical frequency of reported cases in the data as a statistical decrease in risk, leading to dangerous false negatives.
- Recidivism and Pulse Oximetry: Drawing parallels to historical data biases in the justice system or the known inaccuracies of pulse oximeters on darker skin, the episode warns that AI doesn't just reflect bias; it scales it.
- Clinical Investigation Diversity: The standard for training AI must be as rigorous as the standard for clinical trial diversity. If the training pool is skewed, the medical outcome will be skewed.
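The dermatology example above can be reduced to a few lines of code. This is a hypothetical illustration, not any real device's algorithm: a toy classifier estimates "risk" purely from label frequency per group, so when one group is barely represented in the training pool, the model reads low reported frequency as low risk.

```python
# Hypothetical sketch of training-set skew. A toy model scores risk from raw
# label frequency, so under-sampling a group looks identical to lower risk.

from collections import Counter

def train(samples):
    """Count positives per group (a stand-in for a real learned model)."""
    pos, total = Counter(), Counter()
    for group, label in samples:
        total[group] += 1
        pos[group] += label
    return {g: pos[g] / total[g] for g in total}

# Skewed data: lighter skin tones heavily sampled, darker tones barely present.
training = ([("lighter", 1)] * 40 + [("lighter", 0)] * 60 +
            [("darker", 1)] * 1 + [("darker", 0)] * 9)

risk = train(training)
print(risk["lighter"])  # 0.4 -- pattern estimated from 100 examples
print(risk["darker"])   # 0.1 -- artifact of under-sampling, not true risk
```

The model is not "wrong" on its own terms; it faithfully reproduces the correlation in its data. That is exactly why the training pool must meet the same diversity bar as a clinical trial population.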
From Wellness to Radiology: The Adoption Curve
Despite the risks, AI adoption in MedTech is following a recognizable pattern: companies start in lower-risk categories before moving into high-stakes diagnostics. Currently, radiology dominates the landscape, accounting for roughly 87% of FDA-cleared AI devices as of mid-2023. These tools act primarily as "clinician aids," highlighting anomalies for a human doctor to review rather than offering autonomous diagnoses.
We are also seeing a rise in the "Wellness-to-Medical" pipeline. Companies like January AI or Eight Sleep use machine learning to track glucose responses or optimize sleep cycles. By starting in the wellness space, these companies can refine their algorithms and build massive datasets before eventually seeking formal FDA clearance for specific medical claims.
The Golden Nugget
"AI is a statistical machine. It has failed and it will fail, just as any statistical machine will. Because of that, the need for upfront design review of the data and the model is of even greater importance."
Operational Efficiency and Staged Adoption
For those not yet building AI-enabled devices, Rasooli points out that AI is already revolutionizing operations. From Greenlight Guru's own AI risk tools that scan the FDA's MAUDE database to predictive maintenance on manufacturing lines, the efficiency gains are tangible.
However, the advice for integrating these tools into a Quality Management System (QMS) is clear: Staged Adoption.
- Phase One: Use AI as a suggestion engine while maintaining 100% human review of all outputs.
- Phase Two: Move to a sampling-based inspection once confidence in the model grows.
- Phase Three: Fully integrate the tool into the workflow only after the statistical reliability is proven through consistent verification and validation.
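The three phases above amount to a human-review policy that tightens or relaxes with demonstrated reliability. Here is a minimal sketch; the phase numbers, the 20% sampling rate, and the function names are all hypothetical choices a real QMS procedure would define and justify.

```python
# Sketch of the staged-adoption review policy described above. The sampling
# rate and phase gating are hypothetical examples, not prescribed values.

import random

def review_fraction(phase: int, sample_rate: float = 0.2) -> float:
    """Fraction of AI outputs a human must review in each adoption phase."""
    if phase == 1:
        return 1.0          # Phase one: 100% human review of all outputs
    if phase == 2:
        return sample_rate  # Phase two: sampling-based inspection
    return 0.0              # Phase three: fully integrated after proven V&V

def needs_human_review(phase: int, rng: random.Random,
                       sample_rate: float = 0.2) -> bool:
    """Decide per-output whether to route this AI result to a human."""
    return rng.random() < review_fraction(phase, sample_rate)

print(review_fraction(1))  # 1.0
print(review_fraction(2))  # 0.2
print(review_fraction(3))  # 0.0
```

The key design point is that moving between phases is an explicit, evidence-gated decision (backed by verification and validation records), never an automatic drift toward less oversight.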
As the industry awaits more harmonized global frameworks, such as the finalized EU AI Act, the responsibility falls on manufacturers to treat data with the same scrutiny they apply to physical components. The goal isn't just to make devices smarter; it is to ensure that their intelligence remains grounded in safety and clinical reality.