Generative AI and Auditor Independence: A Shifting Landscape in India’s Regulated Sectors
Authored by: Tarun Kher, Partner, Risk Advisory Services, BDO India
As Artificial Intelligence (AI) tools become increasingly embedded in financial operations, India's regulators and auditors face a new challenge: preserving independence in a tech-driven world. Consider a risk analyst at a financial institution who, hoping to finish the day's work faster, tried something small: he uploaded a client dataset into a free chatbot, just to see if it could help spot anomalies. What started as curiosity quickly turned into a compliance nightmare. Once that data left the bank's secure firewall, the bank's control over it was gone.
Generative AI is no longer a futuristic idea; it is already part of business as usual. Surveys suggest that roughly four in ten auditors are using AI tools in some capacity, and a similar proportion of the remaining auditors expect to start using such tools within a year. At this pace, AI could be central to the profession by 2026. The problem is that adoption is running far ahead of governance. Regulators have begun to step in. The Reserve Bank of India, in its latest Financial Stability Report, noted that while most banks are experimenting with AI, very few have robust oversight in place.
To bridge that gap, RBI rolled out the FREE-AI framework, built on fairness, responsibility, ethics, and explainability. The message is clear: regulators are no longer watching from the sidelines. In June 2025, the Securities and Exchange Board of India (SEBI) issued a consultation paper built around five guiding principles, including transparency, model governance, and stronger data safeguards.
Just a few months earlier, in March, the Insurance Regulatory and Development Authority of India (IRDAI) had already raised the bar with new cybersecurity norms, from six-hour breach reporting to mandatory 180-day log retention. Across the board, the debate has shifted: AI is here to stay, and the real question is how it is controlled. The risks are not theoretical. At one mid-sized private bank, internal auditors found that relationship managers had been using free summarisation apps to draft investment notes.
In doing so, snippets of customer KYC data, including PAN numbers, slipped outside the bank’s firewall. Regulators didn’t intervene, but remediation and investigation costs still ran into crores. Audit committees have taken note. It’s no longer enough for management to simply confirm that AI tools exist. Boards now ask: Who is using them? For what purpose? Under what safeguards? This is shifting the conversation on auditor independence.
Traditionally, auditors kept a distance from the systems they reviewed. But if both management and auditors are relying on the same AI platforms, those boundaries get blurred. The tough question follows: Can independence be maintained in such an environment? Globally, similar questions are surfacing. The EU’s AI Act, signed in 2024, places strict oversight on “High-Risk” AI applications.
India is moving in that direction too, through the proposed Digital India Act, NITI Aayog’s AI roadmap, and MeitY’s Responsible AI guidelines. Across all of these, the common themes are transparency, consent, and accountability—principles that matter the most in regulated sectors like banking, insurance, and healthcare.
Data-privacy and internal audits are now increasingly performed with licensed AI tools, which are widely deployed to identify the personally identifiable information (PII) of data principals such as customers, vendors and employees. These tools are configured to collect PII and archive it into separate encrypted folders, with a unique identification number (UIN) assigned to each data principal. Express consent to store the PII is mapped by the tool against each UIN to ensure compliance, and a report is generated from this mapping as internal audit evidence.
Further, when a data principal asks the Data Protection Officer to exercise their rights, the AI tool can use the UIN to execute the requested changes while creating an audit trail for internal audit review. Appointing a Chief AI Ethics Officer may still be years away for most firms, but smaller steps, such as mapping AI usage, putting review protocols in place, and working hand in hand with information security, are achievable today.
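The UIN-based workflow described above can be sketched in a few lines of code. This is a minimal illustration, not any vendor's actual tool: the class and method names (`PIIRegistry`, `register`, `erase`) are assumptions, the UIN format is arbitrary, and encryption of the archived records is omitted for brevity.

```python
import uuid
from dataclasses import dataclass

@dataclass
class PIIRecord:
    uin: str
    pii: dict       # e.g. {"pan": "XXXXX0000X"} — illustrative fields only
    consent: bool   # express consent on file for this data principal

class PIIRegistry:
    """Hypothetical registry mirroring the UIN/consent/audit-trail flow."""

    def __init__(self):
        self._records: dict[str, PIIRecord] = {}
        self.audit_trail: list[str] = []

    def register(self, pii: dict, consent: bool) -> str:
        """Archive PII under a fresh UIN and log the event."""
        uin = uuid.uuid4().hex[:12]
        self._records[uin] = PIIRecord(uin, pii, consent)
        self.audit_trail.append(f"REGISTER {uin} consent={consent}")
        return uin

    def consent_report(self) -> dict[str, bool]:
        """Internal-audit evidence: consent status mapped against each UIN."""
        return {uin: rec.consent for uin, rec in self._records.items()}

    def erase(self, uin: str) -> bool:
        """Data-principal rights request (e.g. erasure), with audit trail."""
        if uin in self._records:
            del self._records[uin]
            self.audit_trail.append(f"ERASE {uin}")
            return True
        self.audit_trail.append(f"ERASE-FAILED {uin}")
        return False

# Usage: register one data principal, report consent, then erase on request.
registry = PIIRegistry()
uin = registry.register({"pan": "XXXXX0000X"}, consent=True)
print(registry.consent_report())  # consent flag keyed by UIN
print(registry.erase(uin))        # erasure succeeds and is logged
```

The point of the sketch is the design choice the article highlights: every action is keyed to the UIN and appended to an audit trail, so internal audit can review both the consent mapping and the handling of rights requests without touching the raw PII.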
These are areas where audit can take the lead rather than wait for rules to catch up. The bottom line: trust in AI won’t come automatically. It has to be built through governance, transparency, and independence. For India’s regulated sectors, generative AI is more than a new tool—it is a test of institutional integrity. Auditors who step up now will not only protect their organisations but also help define what independence means in an AI-driven future.
Disclaimer: The views, thoughts and opinions expressed in this article are solely the author's and are not representative of the author's employer/organisation.