Validating AI in Software as a Medical Device (SaMD): Meeting MDR, GDPR, and EU AI Act Requirements
16.02.2026 · 7 min read

As artificial intelligence transforms healthcare, more companies are adding AI to medical devices to enhance diagnostics, personalize treatments, and increase patient engagement. With AI playing a larger role in clinical decision-making, it is now essential for businesses to prioritize validation and regulatory compliance when integrating AI into their products.

DataArt recently hosted a webinar to explore the validation of AI-enabled medical devices for the European market and provide practical guidance on aligning with MDR, GDPR, and the EU AI Act.

The panel of experts, moderated by Varvara Bogdanova, Innovations Manager, HCLS, DataArt, included:

  • Ian Sutcliffe, Principal SA HCLS Compliance & Medical Devices, AWS
  • Andrey Sorokin, AI Expert & Solution Architect, DataArt
  • Sara Jaworska (Juszczyk), Quality & Regulatory Affairs Senior Manager, DataArt

This article summarizes the key insights from the webinar.

Importance of AI Validation in Medical Devices

AI validation is essential for several reasons that significantly impact patient safety, technology reliability, and business success.

Clinically, the influence of AI on medical decisions is profound. Errors in AI algorithms can lead to incorrect diagnoses or treatment recommendations, directly affecting patient health.

Technically, as AI constantly evolves, it must be regularly tested for various parameters, such as robustness, bias, and real-world performance. This way, we understand what to expect from AI devices and what can potentially go wrong.

From a business perspective, a lack of proper validation can lead to product rejections, recalls, and a significant loss of consumer trust. Recent reports link the growing number of AI devices on the market to an increase in recalls, particularly among devices that did not undergo thorough validation in the first place.

Biggest Challenges in Developing and Validating AI-Powered Devices

The development and validation of AI-based medical devices come with a number of complex challenges.

Regulatory Expectations

AI in healthcare is advancing rapidly, but regulatory guidance often lags behind, creating uncertainty for manufacturers. Standards also differ across regions, setting distinct and sometimes conflicting requirements.

Ensuring Data Quality and Continuous Evidence

High-quality, representative datasets are essential for clinical-grade AI, as poor data can lead to bias or unreliable outcomes. Companies must demonstrate lawful processing, clinical relevance, and strict governance of data throughout its lifecycle. Regulatory authorities require validation results and documentation to always reflect the current AI system version, tracking changes as models are retrained or updated.

Managing the Full AI Lifecycle in a Compliant Way

Building AI-powered medical devices requires robust lifecycle management, including data acquisition, preprocessing, training, rigorous validation with diverse datasets, secure deployment, and post-market monitoring.

I believe that many organizations underestimate how operationally demanding it is to maintain the full AI lifecycle in compliance. There are several very large groups of requirements covering medical software development, AI, cybersecurity, and data governance and privacy.

Sara Jaworska (Juszczyk)

The Role of Cloud Services in Enabling Compliance

Cloud services can help teams develop, validate, and monitor AI-based medical devices more efficiently in a number of areas.

Automating Quality Controls and Documentation

One of the main challenges in regulated industries is the burden of manual documentation and reporting. Cloud services offer built-in automation for many quality controls and compliance checks, greatly reducing the need for manual paperwork.

Facilitating Advanced Testing

Regulatory requirements for testing are evolving with the types of AI being used. For deterministic systems, automated tests were often sufficient. For non-deterministic systems, such as large language models, cloud platforms enable new forms of testing, like incorporating “human-in-the-loop” validation or using one model to judge the outputs of another (an LLM as a judge). Automated testing frameworks available in the cloud allow teams to efficiently adapt to regulatory expectations.
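The "LLM as a judge" pattern mentioned above can be sketched roughly as follows. This is a minimal illustration, not a production validation harness: the `judge_response` function here is a deterministic stub standing in for what would, in a real setup, be a prompt to a separate judge model returning a rubric-based score, and the sample data is invented for the example.

```python
def judge_response(question: str, answer: str, reference: str) -> int:
    """Placeholder judge: score 1 if the answer contains the reference fact,
    else 0. In practice this would be a call to a separate judge LLM that
    applies a scoring rubric to the candidate model's output."""
    return 1 if reference.lower() in answer.lower() else 0


def evaluate(samples, threshold=0.9):
    """Run the judge over a validation set; failed samples are collected
    for human-in-the-loop review rather than silently discarded."""
    scores, failures = [], []
    for s in samples:
        score = judge_response(s["question"], s["answer"], s["reference"])
        scores.append(score)
        if score == 0:
            failures.append(s)  # route to human reviewer
    pass_rate = sum(scores) / len(scores)
    return pass_rate, failures, pass_rate >= threshold


# Hypothetical validation samples (invented for illustration):
samples = [
    {"question": "Maximum daily dose of drug X?",
     "answer": "The maximum is 40 mg per day.", "reference": "40 mg"},
    {"question": "Is drug X safe in pregnancy?",
     "answer": "Yes, it is always safe.", "reference": "contraindicated"},
]
rate, failures, passed = evaluate(samples, threshold=0.9)
```

The key design point is that the automated judge only triages: anything it flags goes to a human expert, which is what keeps the loop compatible with human-in-the-loop expectations.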

Embracing and Managing Change

AI models and the data they are trained on change rapidly. Cloud services enable teams to embrace change through robust automation of version control, retraining, and model validation processes. Features such as predetermined change control plans, recommended by regulators, become feasible and manageable in the cloud.

Some businesses try to control change by locking their AI models, but with LLMs continually evolving and real-world data constantly shifting, this approach is risky. I recommend that companies embrace change and proactively build it into their planning. This requires experts who understand both technology and compliance, and they are hard to find. That’s why I’m really glad we have partners like DataArt.

Ian Sutcliffe
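The idea of a predetermined change control plan can be made concrete by tying every model update to a structured, auditable record. The sketch below is illustrative only, not a regulatory template; the field names, version numbers, and values are invented for the example.

```python
import hashlib
import json
from dataclasses import dataclass, asdict


@dataclass
class ChangeRecord:
    """One entry in a change-control log, binding validation evidence
    to the exact model version and training data it was produced for."""
    model_version: str
    training_data_hash: str   # fingerprint of the training-set snapshot
    validation_accuracy: float
    approved_by: str
    approved_on: str


def fingerprint(dataset_bytes: bytes) -> str:
    """Short SHA-256 fingerprint so the record identifies the dataset
    without storing it."""
    return hashlib.sha256(dataset_bytes).hexdigest()[:12]


# Hypothetical entry for a retrained model:
record = ChangeRecord(
    model_version="2.1.0",
    training_data_hash=fingerprint(b"...training data snapshot..."),
    validation_accuracy=0.947,
    approved_by="QA lead",
    approved_on="2026-02-01",
)
log_entry = json.dumps(asdict(record))
```

Because each retraining produces a new record with a new data fingerprint, validation documentation always points at the exact system version in production, which is what regulators ask for.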

Good AI Evidence: How Much Data Is Enough

A recurring question in AI validation for medical devices is: What qualifies as convincing evidence that an AI system is safe, effective, and trustworthy? The answer is evolving and context-dependent.

If someone asks how much data is enough to validate AI, the honest answer is that there is no fixed number. And, more importantly, regulators don’t expect one; instead, they anticipate a justified, risk-based rationale.

Andrey Sorokin

AI validation typically starts with a few hundred well-curated samples to catch major failures, but as coverage expands to clinical language, edge cases, and patient variability, thousands of samples may be needed. The exact amount of data depends on the complexity, intended clinical use, and risk profile of the AI system. Metrics like accuracy and recall are statistical estimates; a tighter confidence interval typically requires more data. Reliable validation is not just technical: it also needs expert review, careful annotation, and sometimes AI-assisted evaluation with human oversight.
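The link between sample size and confidence interval can be seen with a standard statistical tool, the Wilson score interval for a proportion. The numbers below are invented for illustration: the same observed 95% accuracy yields a much wider interval from 200 samples than from 2,000.

```python
import math


def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion,
    e.g., observed accuracy on a validation set."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half


# Same 95% observed accuracy, different validation-set sizes:
lo_small, hi_small = wilson_interval(190, 200)     # 200 samples
lo_large, hi_large = wilson_interval(1900, 2000)   # 2,000 samples
```

With 200 samples the plausible true accuracy spans roughly 0.91 to 0.97; with 2,000 it narrows to roughly 0.94 to 0.96. This is the quantitative form of the risk-based rationale regulators expect: the higher the clinical risk, the tighter the interval needs to be, and therefore the more data is required.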

From a regulatory point of view, evidence must be comprehensive, up-to-date, and reflect the current state of the system. This requires updates after model retraining, data changes, or new risk findings. Documentation, risk management, and performance monitoring should align with overlapping frameworks like MDR, GDPR, and the EU AI Act. Ultimately, AI validation is an ongoing process: regular updates help ensure medical devices remain safe, effective, and compliant throughout their lifecycle.

DataArt and AWS: Powerful Collaboration

The partnership between DataArt and Amazon Web Services provides businesses with a powerful framework for validating AI in medical devices.

Amazon Web Services offers cloud-based tools such as Bedrock and SageMaker to automate model evaluation, implement compliance guardrails, and enable continuous monitoring for bias, toxicity, robustness, and more. These solutions help organizations meet regulatory expectations for risk mitigation and evidence generation.

DataArt brings deep regulatory and technical expertise, helping integrate these tools into workflows designed for MDR, GDPR, and EU AI Act compliance. This collaboration simplifies the process of creating audit-ready documentation, aligning automated validation with international standards for medical software development.

Together, DataArt and Amazon Web Services empower organizations to deliver safer, more effective, and compliant AI-powered medical devices.

Conclusion

Validating AI in medical devices requires risk-based planning, robust testing, ongoing monitoring, and alignment with complex regulations such as MDR, GDPR, and the EU AI Act. DataArt and AWS together offer a powerful combination of cloud technologies and expert regulatory guidance, enabling organizations to validate, monitor, and continually improve their AI-powered medical devices with confidence.

You can watch the full webinar and get more insights on AI validation in digital products here:

Contact us if you need help with designing, validating, and deploying AI-powered medical devices that meet MDR, GDPR, and the EU AI Act requirements.
