Governance Playbook
A guide to using generative AI in board management
Understand which board management tasks are best augmented by AI, allowing you to free up time for strategic work
Make use of the starter guide for involving AI in your board management activities
Get a checklist of essential security and data considerations when using AI as part of your board management efforts
The recent surge in conversations about artificial intelligence (AI) may create the impression that it is a new technology. AI has, however, been in development since the 1950s, when it was primarily the domain of scientists and mathematicians working on research projects.
Within 15 years, researchers had turned to natural language processing – the line of work that ultimately produced the large language models (LLMs) now forming the foundation of conversational, generative AI. Applications such as ChatGPT have made AI mainstream and demonstrated that the technology can be applied to far more than numerical datasets.
Across many industries, businesses are applying AI to their products, using it to improve their processes and exploring how else they can leverage it in their organizations. The IBM Global AI Adoption Index 2022 notes that 35 percent of companies report using AI in their business and a further 42 percent are exploring it. The governance arena is no exception: boards are uniquely positioned to create policies around AI use at work and data privacy protections, and to determine how their companies can leverage AI to meet business needs.
Corporate governance professionals should also consider how AI can make board meetings, votes and compliance requirements more efficient.
The demands on boards have never been greater, the complexity of the issues they address continues to grow and the flow of information and data can be overwhelming. Viewed through that prism, AI is an invaluable tool for board members and other governance professionals, as well as a real competitive advantage. Imagine a world where new board member onboarding takes days, historical data is automatically researched and presented, and data is integrated with third-party resources. Board members become more effective and efficient. AI will open up a world of possibilities that allows boards and the governance professionals who serve them to focus on the work that matters most.
In this guide, you will learn about use-cases for AI in corporate governance, how to use AI securely, how to evaluate what solution is best for your company, and more. We hope you find it helpful as we all strive to adopt this exciting technology in a way that is meaningful, safe and secure.
By starting from the top, boards of directors can become leaders in the use of AI – and be better informed about the issues that introducing AI at their companies will require them to address.
Board management requires meticulous attention to meeting schedules, required participants and appropriate notifications for both the board and its committees. Instead of manually scheduling four, six or eight recurring meetings, an AI tool can create them all from a single typed or spoken prompt: ‘Schedule four meetings on the third Thursday of each quarter and invite the board of directors’. A great AI tool can take this even further, answering ‘Who has RSVP’d?’ or acting on ‘Send a reminder for the upcoming board meeting’.
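Behind the scenes, a prompt like ‘the third Thursday of each quarter’ reduces to simple date arithmetic. The sketch below is illustrative only – the function name and year are our own assumptions, not any vendor’s API – but it shows how a tool might resolve that phrase into concrete dates:

```python
from datetime import date, timedelta

def third_thursday(year: int, month: int) -> date:
    """Return the third Thursday of the given month."""
    first = date(year, month, 1)
    # weekday(): Monday = 0 ... Thursday = 3
    days_to_thursday = (3 - first.weekday()) % 7   # offset to the first Thursday
    return first + timedelta(days=days_to_thursday + 14)  # two weeks later

# Quarterly board meetings for 2025: one in each quarter's opening month
meetings = [third_thursday(2025, month) for month in (1, 4, 7, 10)]
```

From there, the tool would create the calendar events and invite the board; follow-up prompts such as ‘Who has RSVP’d?’ become simple lookups against those events.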
The process of drafting and finalizing meeting minutes is laborious, yet crucial to the meeting cycle and overall governance program. Human oversight will always be a part of creating the final and approved minutes document. But AI can record meetings (with settings for purging), create a transcript and build a draft based on a company’s unique preferences before a human even starts the editing process.
AI can process enormous amounts of data. This has a variety of uses, from litigation research and answering stockholder questions about historical company information to surfacing key dates buried in large volumes of material.
A solution with access to your calendar as well as your charters and by-laws can quickly identify when the board or a committee falls short of the required number of members or meetings.
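At its core, such a check is a comparison between what the charters require and what the calendar and roster show. A minimal sketch, using entirely made-up committee names and thresholds (a real tool would extract these from your governing documents):

```python
def compliance_gaps(requirements: dict, actual: dict) -> list[str]:
    """Flag committees that fall short of their charter minimums."""
    gaps = []
    for name, req in requirements.items():
        state = actual.get(name, {})
        if state.get("members", 0) < req["min_members"]:
            gaps.append(f"{name}: needs at least {req['min_members']} members")
        if state.get("meetings", 0) < req["min_meetings"]:
            gaps.append(f"{name}: needs at least {req['min_meetings']} meetings per year")
    return gaps

# Hypothetical charter requirements vs. what the calendar and roster show
requirements = {"Audit": {"min_members": 3, "min_meetings": 4}}
actual = {"Audit": {"members": 2, "meetings": 4}}
```

The value an AI layer adds on top of a check like this is reading the requirements out of unstructured charter text in the first place, rather than having someone encode them by hand.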
AI can also provide notifications in advance of upcoming openings on the board or committees. This awareness can allow for strategic recruitment for replacements based on the company’s current skills matrix and upcoming corporate direction.
The ability to summarize large amounts of language-based data is a key advantage of LLM-based AI. Board work requires the review of vast amounts of material, including reports, proposals and policies. Summarizing this information shouldn’t substitute for preparation, but it can serve as a prioritization tool for preparatory work.
Beyond summarization, AI tools can answer specific queries about a dataset or group of documents. For example, you may be able to ask whether a particular activity undertaken in your organization complies with an internal policy. AI can give you an answer near-instantly, where a human lawyer might take hours to trawl through the documents. It can also compare two different policies and flag any key differences.
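The comparison step, at least, is well within reach of ordinary tooling. As an illustration, Python’s built-in difflib can surface the textual differences between two policy versions (the policy snippets below are invented); an AI layer would then explain those differences in plain language:

```python
import difflib

policy_v1 = """Expenses over $500 require CFO approval.
Receipts must be submitted within 30 days."""

policy_v2 = """Expenses over $1,000 require CFO approval.
Receipts must be submitted within 30 days."""

# Lines prefixed with '-' appear only in v1; '+' only in v2
diff = list(difflib.unified_diff(
    policy_v1.splitlines(), policy_v2.splitlines(),
    fromfile="policy_v1", tofile="policy_v2", lineterm=""))
```

A plain diff only shows *what* changed; the promise of an LLM here is explaining *whether* the change matters – for instance, that the approval threshold doubled.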
‘The whole legal department is trying to brainstorm to find ways to use AI to make our lives easier and allow us to solve some ‘big company issues’. One way we would like to use it is to process the minutes we take for our board and board committee meetings.
'We would like to use AI to look at our agenda and draft skeleton minutes, then go in and fill the blanks with what was actually discussed. I know our corporate secretary is very interested in doing that for all our board meetings, too. It’s a very simple time-saver.
‘Something else we’re very interested in is using AI to boost our historic company research. We are a large energy company and have a long history, so sometimes shareholder inquiries mean that we will have to pull up records from the 1940s – or sometimes even earlier.
'In the past, we would have to pull documents out of storage or have to track them down. Now that our minutes have been scanned and are searchable, I’m hoping to plug in our governance databases and use AI to find the exact data we need.
‘It would be great if we could have access to all of the databases across the company, as well as our subsidiaries. For example, we would like to link up our human resources database so we can track when officers and/or directors of subsidiaries change jobs or move around the company to other jurisdictions.
‘Another use could be to formulate how capital contributions and dividends are sent throughout our many subsidiaries. You can imagine that more than 900 active subsidiaries, capital contributions and dividends can create a lot of paperwork. We would really love some kind of AI to map out the chain of ownership for us, because it can take an awful lot of time to do manually.
‘It can be tough to juggle privacy rules with AI, and confidentiality is certainly a concern. We don’t share our databases with anybody as there is some very sensitive data included there that has to remain confidential. We rely on our IT group to have those safeguards in place and to manage all of our subsidiaries. That group also works very closely with vendors and vets everything: we can’t partner up with someone without thorough security checks.
‘It can be a challenge to educate your internal partners about what you really want from AI – it can be so obvious to you what you want, but hard to visualize for someone outside the department.
'It’s also vital to prioritize tasks you want to invest time and effort in: sometimes the easier tasks will be more valuable to automate, but the bigger or more complicated jobs will benefit more.’
Beyond the very important issue of data security, there are many other concerns around the full integration of AI into board management processes that need addressing.
Unsurprisingly, AI is not built to handle pen-and-paper data. Any information you want to feed into it must be digitized and cleansed – meaning any unnecessary data has been removed – before it can be input into an AI ecosystem. In-house teams will want to ensure that such data is available and organized for that purpose.
Many jurisdictions are still developing regulatory frameworks to govern how companies use AI. The US does not currently have comprehensive AI regulation, but several guidelines are in place, including the Equal Employment Opportunity Commission’s technical assistance document and the SEC and FINRA frameworks.
The White House has also issued a white paper called ‘The blueprint for an AI bill of rights’ that may guide federal policy. In the EU, the European Parliament’s Artificial Intelligence Act holds developers, providers and users responsible for the safe implementation of the technology. Keeping abreast of these recommendations and expectations is important.
If your organization has an existing AI framework, policy or code of conduct, make sure you are fully familiar with it before implementing any form of AI. Previous IT policies and guidelines may also have been updated since the advent of generative AI. Staying current on all IT policies as AI usage grows is critical to avoiding costly and risky behaviors.
For the uninitiated, AI can be a scary topic. Address the issue by raising case studies with colleagues – and board members – around how generative AI might safely be used to support their work. You may offer to support that knowledge with training or primers with IT security teams to make for a comprehensive approach.
Because AI systems depend on both their underlying data and the algorithms built on it, errors in either – or both – can produce bias, mistakes and unintended outcomes. For example, information may be skewed by the way it is obtained or used, and algorithms may be biased due to erroneous assumptions in the machine learning process. It is important to put measures in place to reduce, or negate, such biases.
If a generative AI platform is unsure, it may fill in the gaps, sometimes fabricating content outright in an attempt to complete the request. This is why it’s crucial to supervise and verify the output of generative AI software before relying on it.
Human oversight of AI can help mitigate those biases and other challenges that may come down the pike. Regularly reviewing any automated decision-making is crucial, and any deviations from the technology’s intended purpose should be flagged within a reasonable timeframe.
Of course, generative AI is still very much evolving and the rate of change is likely only to increase as the technology matures. You must stay abreast of any new technologies that may blend with AI – and be able to assess and govern them – in order to succeed.
‘We sometimes have to walk our C-suite and senior management teams through any suggestions we have in this area because they have such a visceral reaction to AI, but I explain to them that it’s been around for a long time and they’ve been using it without knowing it – even their Outlook suggested responses are powered by AI.
‘I work alongside my chief strategy officer and our head of IT on all things AI – we’re in lockstep. We’ve set out rules and obligations and we’re currently in the process of coming up with a company-wide policy. That will be a living document and will get updated as we make progress. For example, at the moment we are not using it as part of any client-owned products we produce, to keep them secure. I’m lucky that they feel the same way about it as I do.
‘At one point everyone wanted to ban it; I told them that instead we need to use it intelligently and become AI curators. That means we pick out a few credible uses of the technology and have rules around those uses.
'It’s like being a good parent: if you just say no, people will find a way to do it anyway and – more importantly – they’ll learn about it from second-hand or third-hand sources and have a less healthy relationship with it. Banning AI outright is not doing a service to my company: my job is to defend my organization but also help it operate better.
‘It’s all about recognizing each use-case and placing limits around each one. There are definitely data privacy concerns because generative AI sucks up existing information to learn better and use the patterns it sees.
'It might even take in sensitive information so we put in place guardrails, limits around time and place, and we’re onto the next use. If you’re using it for copywriting, you have to fact-check and maybe only use it to generate ideas. For lawyers, there are all sorts of famous cases you cannot use AI for, but for other things it might be appropriate.
‘There are basically three buckets of concern around what we are doing: data privacy, copyright and quality control. Beyond that, you need to curate what you’re doing, but it’s never a firm ‘no’.
‘That frees me up to ask AI to help with pretty much anything around the boardroom. I’ll ask it to examine our compensation, or our structure and rules, or to put together an agenda for meetings. I use it to help get our research done, too, but I also like to see what AI thinks about my version of a write-up.
'I use it as a thinking partner sometimes: lawyers can be quite isolated in our day-to-day work and I don’t always have time to check in with the team about something.’
Are the prompts that are entered saved and used to train the AI model? Understand whether your inputs and those of other users will affect the long-term capabilities of the AI tool.
Does this AI solution have access to my data and that of others? Understand how AI tools use your own data and that of others using the platform to determine the exposure risk.
What is your AI built on? Is there a connection to the open internet? It’s important to know the difference between public and private AI, as this has implications for data security and access to non-public material information.
How will my data be protected? Make sure contracts state a specific length of time your data will be kept for and understand the mechanisms that are protecting your data.
What type of security certifications do you have? Learn about the security certifications of the technologies and servers holding your data.