
What the TGA’s New AI & Software Regulations Mean for Medical Devices

Taylor Esser

September 19, 2025

Artificial intelligence (AI) is no longer a futuristic concept in healthcare; it is already here, shaping the way clinicians diagnose, monitor, and treat patients. From image recognition in radiology to predictive analytics in intensive care units, AI has the potential to transform patient outcomes and streamline medical workflows.

However, with this potential comes significant responsibility. In Australia, the Therapeutic Goods Administration (TGA) plays a central role in regulating AI when it is used in medical device software. 

The TGA’s approach is rooted in ensuring that any AI solution supplied in the country is safe, effective, and backed by transparent evidence. For MedTech companies, this means that innovative software solutions must be carefully assessed against regulatory standards before entering the market.

This blog breaks down the TGA’s latest guidance, explaining when AI is regulated as a medical device, how these devices are evaluated, and what manufacturers should do to stay compliant.

When AI Is Considered a Medical Device

Not all AI-enabled tools fall under the TGA’s jurisdiction. The dividing line comes down to the intended purpose. If AI is designed or marketed for medical use, it is generally regulated as a medical device.

According to the TGA, AI is considered a medical device if it is intended to:

  • Diagnose, prevent, monitor, predict, or treat disease
  • Provide prognosis information
  • Alleviate or compensate for an injury or disability
  • Investigate anatomy or physiology
  • Control or support conception

This definition captures a wide range of applications. For example, a melanoma-triage app that helps patients identify suspicious moles falls under regulation. 

Similarly, cloud-based analytics that predict a patient’s deterioration, or a chatbot that recommends treatments, are also regulated. Even newer technologies, such as large language models (LLMs) adapted to summarize clinical findings or AI tools for radiology analysis, meet the threshold for regulation.

By contrast, software that simply organizes appointments, manages billing, or performs other non-clinical functions typically falls outside TGA oversight.

 


How AI Medical Devices Are Regulated

Once AI is deemed a medical device, it must go through the same legal and regulatory pathway as other software-based medical devices in Australia, including inclusion in the Australian Register of Therapeutic Goods (ARTG) before supply.

Importantly, the hosting location of the software or its infrastructure is irrelevant: whether based in Sydney or the U.S., if the product is supplied in Australia, it is subject to TGA oversight. In 2021, the TGA introduced significant software reforms that clarified classification rules, exclusions, and documentation requirements.

These reforms provide clearer boundaries for developers, but they also raise the stakes: MedTech innovators must take care to apply the correct classification and prepare robust supporting evidence.

To help manufacturers, the TGA offers the “Is my software regulated?” flowchart, a practical tool that guides companies through boundary cases and classification decisions. Using this resource early can help avoid costly missteps later in the regulatory process.

Generative AI and Chatbots

The rapid rise of generative AI has introduced new complexities for regulators. Chatbots and large language models are increasingly embedded into healthcare solutions, assisting with clinical documentation, summarizing patient data, or even suggesting treatment options.

Under TGA guidance, these tools are regulated as medical devices if they have a medical purpose. This means that if a company integrates an LLM into a clinical decision support product, that company becomes the manufacturer in the eyes of the TGA, with all associated obligations.

However, not all generative AI use cases trigger regulation. If a product uses generative AI purely for administrative or non-medical tasks, such as drafting general correspondence or assisting with scheduling, it is typically outside the TGA’s scope. 

This distinction underscores the importance of carefully defining and documenting the intended use of an AI product.

Evidence Requirements for AI/ML Software

Perhaps the most challenging part of compliance lies in the evidence requirements. Because AI and machine learning (ML) models can evolve and adapt over time, regulators expect detailed documentation across the full lifecycle of the product.

Manufacturers must provide a clear description of their AI model, including algorithm design and its role in the intended purpose. Evidence should also include a rationale for training and testing datasets, with transparency about dataset size, populations represented (age, sex, ethnicity), and relevance to Australian clinical practice. 

This ensures that the model is both valid and generalizable. Beyond technical performance, companies must address AI-specific risks such as bias, overfitting, and model drift. 

For example, an AI trained primarily on data from one population group may underperform when applied to another. The TGA expects manufacturers to anticipate these risks and build mitigation strategies into their documentation.

Clinical evidence is required at a level proportionate to the device’s risk, and human factors must also be considered. For instance, over-reliance on an AI tool could create safety risks if clinicians are not adequately informed of its limitations.

Finally, post-market monitoring is crucial. Because AI models may degrade over time as real-world conditions shift, manufacturers must track performance, detect drift, and adjust models as needed.

To provide consistency, the TGA aligns its expectations with the IMDRF Good Machine Learning Practice (GMLP) principles, which cover everything from design and data management to labeling, transparency, and lifecycle monitoring.

The Role of Synthetic Data

One area of growing interest is the use of synthetic data to support development and validation. The TGA acknowledges that synthetic datasets can be valuable in cases where real-world data is limited, such as rare diseases or scenarios where privacy protections restrict data access.

However, synthetic data is not a blanket solution. For most applications, it cannot replace clinical data when validating a product’s performance.

Instead, it can act as a supplement, helping expand datasets and improve model diversity, provided that the method of generation and justification for use are clearly explained.

Oversight Beyond TGA

It is important to note that AI in healthcare is not solely the TGA’s responsibility. The agency works closely with other government bodies to provide whole-of-government oversight.

For example, the Australian Commission on Safety and Quality in Health Care (ACSQHC) partners with TGA to support safety in digital health. At the same time, the Department of Industry, Science and Resources contributes expertise on responsible AI initiatives that go beyond the medical context.

This coordinated approach reflects the reality that AI in healthcare intersects with broader issues such as patient safety, data privacy, and ethics.

What You Need to Do: Practical Checklist

For MedTech companies, compliance may feel daunting, but the TGA provides clear steps to follow. 

Key actions include:

  1. Review the flowchart “Is my software regulated?” to confirm whether your product qualifies as a medical device.
  2. Classify your device correctly and prepare supporting evidence, including Essential Principles compliance, risk management, and GMLP documentation.
  3. Prepare labeling and transparency content to explain intended use, limitations, and performance across sub-populations.
  4. Submit your product for ARTG inclusion via the appropriate pathway. If there is uncertainty, consider requesting a pre-submission meeting with TGA.
  5. Maintain ongoing post-market monitoring and change management. Notify the TGA of significant updates, such as model retraining or functional changes.
  6. Check advertising compliance, ensuring that marketing to healthcare professionals and consumers follows TGA standards and avoids overstated claims.

Special Case: Digital Scribes

One area of frequent confusion is the regulation of digital scribes. If the tool only performs transcription or translation (for example, converting speech to text without interpretation), it is not regulated as a medical device.

However, if the same software begins analyzing or interpreting clinical conversations, such as generating diagnoses or treatment recommendations, it crosses into regulated territory. In that case, it must meet all requirements, including ARTG inclusion.

Compliance and Reporting

The TGA has made clear that it is actively monitoring the advertising and supply of unregistered software-as-a-medical-device (SaMD) and AI products. Enforcement is likely to increase as AI adoption grows.

Clinicians and consumers are also encouraged to verify ARTG status before using AI-enabled tools. Meanwhile, companies must remain vigilant about privacy and cybersecurity obligations, both of which are tied closely to regulatory compliance in Australia.

Conclusion

AI presents enormous opportunities to improve healthcare delivery, but with those opportunities come significant regulatory responsibilities. For MedTech companies, success depends on understanding when AI is regulated, preparing comprehensive evidence, and ensuring ARTG inclusion before supply.

By aligning early with TGA requirements and embedding good machine learning practices into the development lifecycle, companies can not only avoid compliance pitfalls but also build trust with clinicians, patients, and regulators alike.

Ultimately, compliance is not just a legal hurdle; it is the foundation for delivering safe, effective, and innovative AI solutions that truly benefit Australian healthcare.
