On 16 February 2026, TIC Council hosted a high-level session titled “Embedding Trust in Innovation: AI Governance and Quality Infrastructure for Growth” as part of the India AI Impact Summit. Held in cooperation with the AIQI Consortium, the event brought together Conformity Assessment Bodies (CABs), Accreditation Bodies (ABs), Standards Developing Organisations (SDOs), the AI industry, and policymakers to explore the responsible and trusted uptake of Artificial Intelligence (AI) within Quality Infrastructure (QI). 

During the session, TIC Council launched its new paper, “Quality Infrastructure Framework for the Digitalised World,” which provides a strategic pathway to ensure QI remains fit for purpose in an era of AI, digitalisation, and evolving regulatory requirements. 


Key Takeaways 

AI: Operational and Delivering Value  

The summit highlighted that AI is no longer theoretical within QI; it is already operational, particularly in preparatory, analytical, and decision-support stages. CABs increasingly use AI to enhance efficiency, consistency, and risk prioritisation across inspection, testing, and certification workflows. These advances give industry and manufacturers more structured data, better compliance management, and faster services, thereby strengthening their global competitiveness. 

Governance as the Foundation of Trust  

Participants stressed that trust depends on governance, not on technology alone. A core conclusion of the roundtable was that AI supports QI expert judgement but does not replace it. Robust governance frameworks, incorporating human oversight, traceability, auditability, and accountability, remain essential, especially in key decision-making processes and high-risk use cases. 

Addressing Challenges and Policy Outcomes  

The roundtable identified several obstacles to the full adoption of AI, including regulatory ambiguity, unclear accreditation expectations, and gaps between existing standards and real-world deployment. To address these, the session outlined key policy-relevant outcomes: 

  • Legal Clarity: Policymakers must provide clear guidance on the acceptable use of AI within conformity assessment and accreditation frameworks. 
  • International Alignment: Harmonising standards and accreditation practices is essential to prevent fragmentation and preserve cross-border recognition. 
  • Digitalisation of QI: Modernising QI processes is necessary to maintain transparency, efficiency, and global interoperability. 

The session concluded that AI strengthens Quality Infrastructure only when trust, legal certainty, and international alignment remain central. Modernising QI is essential to ensure that innovation drives competitiveness, supports MSMEs, and reinforces confidence in global digital markets. 