By Mohan Jayaraman, Philipp Rindler, Velu Sinha and Maria Teresa Tejada
THANKS to recent technological advances in generative artificial intelligence (AI) foundation models and record-breaking rates of consumer adoption, it’s no longer a question of whether your company will use this technology. It’s a question of when and how.
Trained on enormous volumes of data and adapted to many applications, foundation models are more sophisticated, complex and capable than prior AI tools, especially at handling unstructured data. Increasingly offered as a service, they are also much easier and more economical to adopt. But concerns about unforeseen consequences and potential misuse of the technology make it urgent for business leaders to understand the privacy, fairness, ethical and social implications of generative AI, and to balance those risks against its promising commercial potential.
Managing and mitigating the new risks that come with technological advance is familiar terrain for financial service institutions. Generative AI will amplify some well-known concerns but will also present new ones. The risk faced by any individual company will depend on two things: first, where and how it applies generative AI, and second, the maturity of its AI governance. Whatever their level of risk, any company using generative AI must identify relevant and emerging risks; understand how their applications map to existing and new regulations; and enhance internal functions, such as machine learning engineering, technology and legal, in anticipation of new risks.
Generative AI has the potential to significantly improve the productivity and quality of many types of knowledge work, increase revenue and reduce costs. Consequently, financial service organizations are likely to use it in a variety of ways. These may include augmenting the productivity of their workforces, personalizing content for consumers and, eventually, improving consumer self-service. Traditional AI has already been used extensively in financial services, typically with structured data for prediction and segmentation. Today’s foundation models could be used to convert unstructured data — text, images and audio, as well as data sets such as communications, legal documents and written financial reports — into structured data, which could then strengthen these existing AI risk models.
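To make the conversion step concrete, here is a minimal sketch of how a foundation model might extract structured fields from an unstructured legal document for downstream risk models. The `call_model` helper, the prompt wording and the field names are all illustrative assumptions, not part of the article; the helper is stubbed so the sketch is self-contained.

```python
import json

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a foundation-model API call.
    # A real deployment would return the model's completion for `prompt`;
    # here a fixed response keeps the sketch runnable.
    return ('{"counterparty": "Acme Corp", "notional_usd": 5000000, '
            '"maturity": "2027-06-30"}')

EXTRACTION_PROMPT = (
    "Extract the counterparty, notional amount in USD, and maturity date "
    "from the contract excerpt below. Respond with JSON only.\n\n{text}"
)

def extract_structured_fields(document_text: str) -> dict:
    """Turn an unstructured contract excerpt into structured fields
    that existing risk models can consume."""
    raw = call_model(EXTRACTION_PROMPT.format(text=document_text))
    record = json.loads(raw)  # reject output that is not well-formed JSON
    # Guard against hallucinated or missing fields before anything
    # downstream relies on them.
    required = {"counterparty", "notional_usd", "maturity"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"Model output missing fields: {missing}")
    return record

print(extract_structured_fields("sample contract text"))
```

The validation step matters as much as the prompt: because generative output can be wrong or incomplete, structured fields should be checked before they feed a risk model.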
The breadth and scale of generative AI’s likely uses combined with its evolving social and ethical risks make creating and managing a comprehensive governance program complex (see Figure 1).
REGULATORY RISKS
Regulators are clearly still catching up to the rapid evolution of generative AI and foundation models. In the coming months, executives will have to monitor upcoming regulations and proactively prepare for them. These will come from existing regulatory bodies that are forming their perspectives, as well as from new regulatory entities that may be created specifically for this technology, such as those envisioned in the European Union’s AI Act.
Generative AI also exposes organizations to increased legal risk from inadvertent or unintentional exposure of customer data by employees experimenting on public or shared systems, uncertainties in the provenance of data used in training foundation models, and potential copyright risks on content generated using these technologies.
The economic risks of regulatory noncompliance must also be considered: the draft European regulations propose stiff financial penalties, similar to fines for noncompliance with data privacy regulations.
OPERATIONAL RISKS
Given the rapid pace of advances in generative AI, many features and capabilities are being launched to support experimentation. Until these solutions are hardened to support scaling, control privacy, monitor performance, manage security anomalies, comply with data sovereignty and access regulations, and meet enterprise service levels, their commercial use must be very carefully considered.
Excessive complexity can make these systems brittle and more vulnerable to new vectors of cybersecurity attack, like training data poisoning and prompt injection attacks. The technology’s ease of use is also likely to enable generation of malicious e-mails, phishing attacks and “deepfakes” of voices and images, among other issues. Vendor risk relates both to locking into a “walled garden,” especially as the vendor ecosystem grows, and to the possibility that some vendors will not survive in this increasingly busy space. Open-source models may bring their own complexity of maintenance and upgrades.
MODEL RISKS
The financial service industry has well-developed policies for fairness, accuracy, explainability and transparency, built in compliance with regulatory guidelines. Generative AI intensifies some existing risks associated with AI while requiring a different approach to others. Given the large amount of data that goes into creating foundation models, for example, it is likely that bias will creep into some aspects of the data. And with foundation models mostly available as a service, new and derivative applications will inherit their risk of bias. Earlier machine learning models produced structured output for specific tasks, while generative AI creates novel results whose fidelity and accuracy can be difficult to assess. One particular concern: It can “hallucinate” output that was not present in its training data. That’s a desirable result when looking for innovative content, but unacceptable if presented without verification or qualification.
ECONOMIC RISKS
As with any new technology, generative AI initiatives run the risk, unless they are planned correctly, of becoming expensive experiments that don’t deliver shareholder value. There is a risk of underestimating the extent to which an organization and its people will need to transform in order to realize the benefits of generative AI. Given the technology’s evolving nature, companies risk investing in the wrong technology or failing to strike the right balance between what they choose to build in-house and what they buy from outside vendors. Ultimately, every executive worries they might lose out to a competitor that deploys the technology in a way that is so appealing to customers it renders their business model obsolete.
REPUTATION RISKS
The tectonic shift generative AI is precipitating brings fears about automation and its potential impact on employment, employees and society at large. Stakeholders including customers, employees and investors have all demonstrated, as they have with ESG (environmental, social and governance), that they place great emphasis on social responsibility, and this technology will be no exception.
5 DESIGN PRINCIPLES
Building the organizational capability to responsibly design and deploy generative AI will require an investment of significant resources. By focusing that investment on five principles, companies can begin to mitigate risk and achieve their responsible AI goals while delivering on their strategic ambitions (see Figure 2).
1. Be human-centric — design for transparency and explainability. Generative AI systems must be built with audit trails and monitoring that fit their end use. This will help ensure that the systems are accessible, fair and free of discriminatory bias. All stakeholders should be adequately informed when they interact with a machine and should be able to reach a human to escalate any issues they have with a decision made by the system.
For AI to be trustworthy, it must be designed for human agency and oversight. It is critical that financial service institutions ensure that a human is in the generative AI loop, whether to review feedback or address an escalated problem. End-users or other subjects should always know when a decision, content, advice or outcome is the result of an algorithm.
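One simple way to realize human-in-the-loop oversight and machine-generated-content disclosure is to route model output through a gate that escalates uncertain cases to a human reviewer and labels everything else as automated. The function, the confidence threshold and the labels below are illustrative assumptions, sketched to show the pattern rather than any particular firm's implementation.

```python
def route_output(confidence: float, output: str, threshold: float = 0.8) -> str:
    """Human-in-the-loop gate: escalate low-confidence model output
    for human review; otherwise disclose that the response is
    machine-generated before it reaches the end user."""
    if confidence < threshold:
        # A real system would queue this for a reviewer rather than
        # return a sentinel string.
        return "ESCALATED_TO_HUMAN"
    return f"[Automated response] {output}"

print(route_output(0.95, "Your statement is ready."))
print(route_output(0.40, "Your loan is approved."))
```

The second call illustrates the principle in the text: a consequential, uncertain decision never reaches the customer without a human in the loop.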
2. Know where you stand — ensure that data privacy and infrastructure are robust. With a growing choice of foundation models and providers, organizations will need to select the right service and vendor. Some companies will choose a fully cloud-hosted software-as-a-service approach, while others will opt for models with privately managed infrastructure. As with other cloud technologies, companies will need to balance the simplicity of single sourcing against the risk of becoming locked into one vendor, and be aware of their vendor’s data security, privacy and data residency standards.
Whichever choice is made, companies can build their technical infrastructure to be foundation-model agnostic so that they have the flexibility to change with the evolution of the ecosystem. Financial service firms can specifically mitigate customer and organizational data privacy concerns as well as security and performance risks by opting for the right technology architecture and focusing on building capability in prompt engineering, embeddings and outputs.
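A foundation-model-agnostic architecture typically means placing a thin interface between applications and vendor SDKs, so that swapping providers is a configuration change rather than a rewrite. The interface and the stub adapter below are illustrative assumptions — a sketch of the pattern, not any vendor's actual API.

```python
from abc import ABC, abstractmethod

class FoundationModelProvider(ABC):
    """Minimal provider interface: application code depends only on
    this abstraction, never on one vendor's SDK directly."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...

    @abstractmethod
    def embed(self, text: str) -> list[float]: ...

class StubProvider(FoundationModelProvider):
    """Stand-in for a real vendor adapter (cloud-hosted or
    self-managed); each vendor gets its own adapter class."""

    def complete(self, prompt: str) -> str:
        return "stub completion"

    def embed(self, text: str) -> list[float]:
        return [0.0] * 8  # fixed-size placeholder embedding

def answer_customer_query(provider: FoundationModelProvider, query: str) -> str:
    # Prompt construction lives in application code; the provider
    # behind it can change as the ecosystem evolves.
    return provider.complete(f"Answer concisely: {query}")

print(answer_customer_query(StubProvider(), "What is my balance?"))
```

Keeping prompt engineering, embeddings and output handling behind such an interface is one way to preserve the flexibility the text describes while vendor and regulatory landscapes shift.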
3. Earn trust — prepare for regulation. Regulators are playing catch-up on generative AI, but organizations can prepare by proactively monitoring for, evaluating and addressing risks and taking a forward-looking approach to governance, risk management and compliance reporting.
4. Employ agility — ensure oversight and disclosure, before and after deployment. Given the fast-evolving nature of this technology and its scale, companies will have to keep monitoring their applications for new and developing risks after deployment and build a human override. They must also have explicit criteria for testing and evaluating the model. Tools that provide information about the AI, such as model cards, will need to evolve to ensure that foundation models can be quantitatively evaluated and tested at industrial scale before deployment.
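Explicit, quantitative testing criteria can be expressed as a release gate: every evaluation in the suite must clear its threshold before a model ships. The metric names and thresholds below are invented for illustration — in practice they would come from the organization's model cards and risk policy.

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    """One entry in a pre-deployment evaluation suite."""
    name: str
    score: float
    threshold: float

    @property
    def passed(self) -> bool:
        return self.score >= self.threshold

def deployment_gate(results: list[EvalResult]) -> bool:
    """Block deployment unless every evaluation clears its threshold;
    report the failing checks so they can be addressed."""
    failures = [r.name for r in results if not r.passed]
    if failures:
        print("Blocked by:", ", ".join(failures))
        return False
    return True

# Hypothetical evaluation suite with illustrative thresholds.
suite = [
    EvalResult("factual_accuracy", score=0.93, threshold=0.90),
    EvalResult("toxicity_free_rate", score=0.999, threshold=0.995),
    EvalResult("pii_leak_free_rate", score=0.98, threshold=0.99),  # below threshold
]
print(deployment_gate(suite))
```

The same suite, re-run after deployment as part of ongoing monitoring, gives the human override described above a concrete trigger.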
5. Act with intention — consider organizational maturity and AI governance when selecting applications. When companies first develop generative AI, it makes sense to focus on uses with low risk. Later, as their responsible AI capabilities mature, companies can work up to those with higher risk. It may be ideal for organizations to start with internal applications, then move on to applications with a limited set of external users. Once those applications have built detailed feedback loops, they can expand to a wider audience.
Generative AI is no longer futuristic but an imminent reality, one offering financial service leaders both unparalleled opportunities and new business and societal risks. Financial service firms can responsibly embrace this transformative technology by building robust governance frameworks and upskilling and reskilling employees to adapt to the AI-driven workplace.
This starts with a conscious decision to prioritize responsible AI practices that are designed with their broader impact in mind and aligned with the organization’s core values and long-term strategic objectives. By pioneering an appropriate model for deploying generative AI, financial service organizations have the opportunity to not only gain competitive advantage in an increasingly digital world, but also set an example of responsibility and foresight.
Mohan Jayaraman is an expert partner at Bain & Company based in Singapore; Philipp Rindler is an expert senior manager based in Zurich; Velu Sinha is an expert partner based in Amsterdam; and Maria Teresa Tejada is an expert partner based in Atlanta.