The ChatGPT Revolution is Coming for Compliance
Author: Vrinda Khurjekar, Senior Director – Cloud Consulting
Business heavyweights from Goldman Sachs to Apple – many of which have banned employees from using ChatGPT – have joined calls to rein in generative AI development over fears of misinformation, the displacement of human labor, and cybersecurity risks. Any tool that can undermine data privacy, leak source code, and compromise compliance and regulatory performance through “automated decision-making” is the last thing many business leaders want to see in the pipeline.
The US regulatory community has already pricked up its ears.
Washington introduced two new bills in June 2023 that directly address generative AI concerns across industries, mainly by requiring human oversight and transparency. It seems we are in a perpetual rush to adapt our laws and guardrails to new technologies that generate novel risks, while taking care not to squelch innovation.
From a client perspective, compliance officers and providers are already straining to keep multiple moving regulatory targets in focus, from data privacy, digital assets, and cybersecurity to fraud and defamation.
But, will generative AI tools change the way compliance teams do their own jobs too?
Is regulatory compliance yet another use case that generative AI is poised to revolutionize?
Generative AI Is Not Only For Writing Research
Many are already aware that generative AI is automating work across industries that deal in information and content, from law to education and advertising. In corporate America, it is principally being leveraged for customer service and employee productivity.
Gartner predicts generative AI will have a great impact by augmenting core processes with AI models in sectors like pharmaceutical, manufacturing, architecture, engineering, automotive, aerospace, defense, medical, and energy. Regulatory frameworks in healthcare, energy, transportation, and trade are among the many that will need to assimilate evolving generative AI regulatory guidelines. Compliance teams need all the help they can get.
Generative AI As A Compliance ‘Interpreter’
PwC proposes that “a nimble, collaborative, regulatory-and-response approach is emerging with generative AI, requiring, perhaps, a major adjustment for compliance officers.”
The top compliance use case we have observed is, unsurprisingly, summarizing compliance requirements across the huge sets of regulatory documents out there. Compliance teams are using generative AI as an interpreter of sorts.
If, for example, a fintech is looking to partner with a broker-dealer and set up compliance with the recently proposed SEC cybersecurity regulations, it can use an AI tool to go through the fine print of the guideline manuals and swiftly surface the five or ten steps the company needs to take to become or remain compliant.
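To make that concrete, here is a minimal sketch of such a summarization step in Python, assuming the OpenAI Python SDK with an API key in the environment; the model name, the prompt wording, and the pre-extracted file sec_cyber_rule.txt are illustrative assumptions, not a description of any particular vendor’s tool.

# Minimal sketch: ask an LLM to distill a regulatory document into actionable steps.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set in the environment.
# The model name and "sec_cyber_rule.txt" are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()

def summarize_compliance_steps(regulation_text: str, company_context: str) -> str:
    """Return a numbered list of compliance steps extracted from regulation_text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a compliance analyst. Summarize regulations as numbered, "
                        "actionable steps and flag anything ambiguous for human review."},
            {"role": "user",
             "content": f"Company context: {company_context}\n\n"
                        f"Regulation text:\n{regulation_text}\n\n"
                        "List the 5-10 steps this company must take to become or remain compliant."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("sec_cyber_rule.txt") as f:  # regulation text extracted ahead of time
        rule_text = f.read()
    print(summarize_compliance_steps(rule_text, "fintech partnering with a broker-dealer"))

The point is not the specific API but the workflow: a human still reads the output, and the tool only shortens the path to a first draft of the checklist.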
Another internal use case boosting productivity and efficiency within organizations is a knowledge base that lets employees retrieve basic compliance information without going through a long review process.
If a sales or IT executive is signing a contract with a new vendor or customer and needs to check it against internal guidelines to ensure compliance with HIPAA or ISO 27001, for example, some teams are using AI tools for quick answers. At this stage, this cannot be done for all types of questions; AI can only check a contract against basic items, like limitations of liability in case of a data breach. If the tool cannot give the green light, it can push the item to compliance for a manual review.
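Here is a minimal sketch of that green-light-or-escalate pattern, again assuming the OpenAI Python SDK; the checklist items and the escalate_to_compliance stand-in are hypothetical and would map to an organization’s own policies and ticketing system.

# Sketch of a "green light or escalate" contract check, for illustration only.
# Assumes the OpenAI Python SDK; the checklist and escalation hook are hypothetical.
from openai import OpenAI

client = OpenAI()

CHECKLIST = [
    "Limitation of liability explicitly covers data-breach scenarios",
    "Vendor commits to breach notification within an agreed window",
    "Data-handling terms reference HIPAA / ISO 27001 obligations",
]

def escalate_to_compliance(clause: str, reason: str) -> None:
    # Stand-in for opening a ticket in whatever review system the organization uses.
    print(f"Escalating for manual review ({reason}):\n{clause[:200]}...")

def check_clause(clause: str) -> None:
    prompt = (
        "Answer YES only if the following contract clause clearly satisfies every item "
        "on this checklist; otherwise answer NO.\n"
        + "\n".join(f"- {item}" for item in CHECKLIST)
        + f"\n\nClause:\n{clause}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    verdict = response.choices[0].message.content.strip().upper()
    if verdict.startswith("YES"):
        print("Green light: the basic checklist items appear to be covered.")
    else:
        # Anything short of an unambiguous YES goes to a human reviewer.
        escalate_to_compliance(clause, "model could not give a clear green light")

The design choice worth noting is the default: the model is only trusted to say yes to narrow, pre-agreed items, and everything else falls through to a person.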
AI Helping Humans Be Better Resources
Oracle recently announced plans to incorporate generative AI into its human resources platform.
McKinsey is touting generative AI’s enormous potential to automate human capital processes, from facilitating skills-based hiring and personalized communications with applicants to collating feedback for performance reviews. The C-suite can use it to decide which policy updates are required when bringing the workforce back to the office or staying hybrid. Hybrid work brings with it regulatory issues in cybersecurity and workers’ compensation, among other things. An HR leader who may not have state and local workers’ compensation laws front of mind could run scenarios through an AI tool trained on workers’ compensation and other labor-related rules.
Will AI Replace Compliance Professionals?
Can generative AI handle subjectivity, nuance, and gray areas in compliance? As of now, it certainly cannot. Its current uses are largely high-level summarization, where the models are reliable because they perform no operations on top of the raw data they are given. As soon as an operation is needed on top of that data, whether mathematical or logical, today’s models fall short.
Generative AI technology cannot yet be a single source of truth for functions like financial transaction monitoring or know-your-customer and must be supported by human intervention and audit. Where arithmetic data validation is required, adding, subtracting, or multiplying figures and drawing a cohesive conclusion from them, the models are not there yet.
Regulators Will Leverage Generative AI Too
And let’s not ignore the strong possibility that regulatory agencies and other enforcement bodies will tap into this technology to find a better way not only of assessing companies but also of evaluating whether their current standards serve the intended purpose. Regulators may decide they need a more seamless way of managing compliance rather than making it administratively cumbersome to determine whether an organization is compliant. Instead of engaging in today’s unwieldy back-and-forth oversight mechanisms, they could conceivably use AI tools to run a scan across all of an organization’s publicly available APIs and instantly determine whether the company meets ISO, GDPR, or HIPAA standards.
And that’s it, that’s your audit!
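As a toy illustration only, and emphatically not a real audit, the sketch below shows what the mechanics of an automated external scan might look like, assuming the requests library and a hypothetical list of published endpoints; checking a few transport-security basics is nowhere near demonstrating ISO, GDPR, or HIPAA compliance.

# Toy illustration of an automated external scan, NOT a real compliance audit.
# Assumes the requests library (pip install requests); the endpoint list is hypothetical.
import requests

PUBLIC_ENDPOINTS = [
    "https://api.example.com/v1/health",
    "https://api.example.com/v1/customers",
]

def scan_endpoint(url: str) -> list:
    """Return a list of findings for a single published endpoint."""
    findings = []
    if not url.startswith("https://"):
        findings.append("endpoint is not served over TLS")
    response = requests.get(url, timeout=10)
    if "Strict-Transport-Security" not in response.headers:
        findings.append("missing Strict-Transport-Security header")
    if response.status_code == 200 and "customers" in url:
        findings.append("customer-data endpoint responds without authentication")
    return findings

if __name__ == "__main__":
    for endpoint in PUBLIC_ENDPOINTS:
        for finding in scan_endpoint(endpoint) or ["no findings from this toy scan"]:
            print(f"{endpoint}: {finding}")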
The Dawning Of The Age Of AI
As of now, generative AI can flag simple compliance items swiftly, but it cannot make it to the next step: intelligent decisions. However, we expect massive progress toward intelligent decision-making within the next two years, propelled by a burst of investment and market opportunities. Make no mistake, this technology is advancing rapidly before our eyes. After Microsoft Azure integrated ChatGPT, we were seeing 8 to 10 updates a day to these models’ configurations for different sets of customers.
We see parallels between the early development of EDI (electronic data interchange, the standard format that replaced paper-based business documents) in the 1990s and today’s prospect of eliminating paper altogether in most sectors, including regulatory compliance practices.
But, compliance professionals will not themselves be digitized. Instead, they will become more involved in product architecture decisions rather than just executing a checklist of items that some regulatory body created. They will get involved earlier and in a more consultative capacity.
Compliance Leaders Will Head Upstream
Because generative AI tech is moving at such a breakneck pace, compliance officers will have to be constantly evolving with the technology and regulations. A checklist created today is not going to be relevant ten years down the line. Compliance officers will, in fact, be more crucial decision-makers because the volume of threats is going to be greater. They will be more proactive instead of reactive and become more valuable for upstream decision-making.
Overall, most industries are moving slowly in integrating compliance processes with generative AI (and with AI in general). Soon, compliance officers will be on the hunt for AI tools, seeking to make their jobs easier and to keep up with the Joneses. They may find resistance from up the chain of command, especially from leaders demanding that any new AI tool demonstrate robust ROI. However, trimming time and costs may not be the right metric for an early-stage technology.
Initially, the integration of generative AI may actually require additional compliance workers to validate the models. The ROI of AI in compliance will prove to be a longer-tail proposition, but staying ahead of the competition is ROI in its own right. As Bill Gates said, the age of AI has begun, and “entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.”
Vrinda Khurjekar is Senior Director, AMER at Searce – a leading AI consultancy that helps some of today’s leading organizations implement AI across their businesses.