IR Playbook
Artificial intelligence and investor relations:
A roadmap for successful and responsible implementation
Where does the rapidly evolving world of artificial intelligence (AI) fit into the world of investor relations? What are the key concerns IROs have when it comes to getting started on AI and where does the community see the biggest benefits from the technology’s adoption by IR?
This guide, built off the back of a series of in-depth workshops with 47 investor relations professionals in Toronto, New York and London, gives you a direct insight into how AI – which arguably entered the business thought process with the launch of ChatGPT and other generative AI tools – can be folded into your IR toolbox.
We look at the potential and the challenges involved in incorporating this rapidly evolving technology into your IR program and hear from global investor relations professionals about how they are using AI today.
Notified and IR Magazine have created a hands-on guide that is specific to IR, offering a view on what your peers are doing with AI, how you can get started with the technology and where it can best fit into your investor relations program.
Pulling data from in-workshop surveys, polls and exercises, we have gathered qualitative and quantitative information into checklists, tips and advice you can use to start taking action on AI today.
– Series of workshops in Toronto, New York and London
– More than 60 contributors took part in surveys, discussions and exercises that explored IR-specific themes around AI
– Our working group members* included:
A recurring theme in discussions around the AI opportunity in IR is whether artificial intelligence will replace the IRO, said Erik Carlson, COO and CFO at Notified, talking to workshop delegates in New York. ‘What I say is, No, AI is not going to replace you, but someone who can use AI better than you might,’ he noted.
Addressing the second of three dedicated AI for IR workshops, Carlson continued: ‘We have an obligation, in this room, to figure out how to leverage the tools that are available to us. Over the last 15 to 20 years, we’ve seen personal computing, mobile devices, the internet. AI is the next generation of that innovation curve.’
In Toronto, Nimesh Davé, president at Notified, talked about how the move toward the cloud had paved the way for the deep pools of data AI tools can now mine. ‘Now we’re at a point where all that data has moved into these very large, aggregated pools, and we’re teaching machines to ask the right questions of that data,’ he noted.
He stressed the importance of ‘teaching’ AI, and how that drives differences between public and private tools. ‘The most important thing about artificial intelligence is that it’s not artificial intelligence,’ said Davé. ‘It’s just somebody teaching the system the intelligent questions to ask.’ And IROs can do that on a private system to create a tool that works for them.
The opportunity for investor relations – what Carlson noted as ‘typically one of the only functions that sees across all of an organization’ – is all about freeing up time for professionals to do what they do best: messaging and building relationships.
That’s the key in private AI, said Davé: ‘You can ask it questions and you can teach it the way you want it to behave based on your code of conduct, your ethics and your company code of integrity. You say, Do not violate these principles, and it won’t.’
‘Having data at your fingertips, in a searchable format, becomes almost like having an assistant,’ Carlson said. ‘That is going to make each IRO so much more efficient, particularly for IR teams of one or two, where you spend a tremendous amount of time researching, gathering information and working with other departments to get the answers you need.
‘Imagine that data being served to you, leaving you to work on messaging, to work with your executives. We’re in early innings but that’s where the opportunity is. That is the art of the possible.’
Because AI – and generative AI in particular – is so much in the news, it can feel all-encompassing, like something that is already deeply embedded in business. The reality for IR, however, is that the majority of investor relations professionals are still either experimenting with or learning about AI: more than 80 percent of IROs say they are tinkering with and testing AI uses, or trying to increase their knowledge of AI and its application to their roles.
At least 11 percent of IROs have already started using AI in their roles regularly, while around 8 percent remain reluctant, taking a ‘wait and see’ approach.
What does this mean for you and the decisions you’re making around AI? It is useful to know not just what AI might be able to do for IR but also how your peers are approaching the tools available.
Having that information allows you to make more informed decisions about what’s appropriate but also provides you with the information necessary to get management buy-in: knowing how your peers or competitors are investing in AI is a galvanizing force in driving understanding of how your own organization should be using it.
At present, the majority of IR teams are taking an interest in the power of AI. And while you don’t need to be a pioneer, it will be beneficial to learn more about the technology and its applications to ensure you remain competitive with your peers.
With two events complete, conversations concluded and questions from the IR community shared, the London event kicked off with a brief Q&A in which Steven Wade, head of content at IR Magazine, put questions from key themes to Erik Carlson, COO and CFO at Notified.
It starts with education, education, education. You have to understand the limits, as well as the capabilities, of the tools you’re using. A common thread as we’ve done these sessions is that far too few companies, at this point, have AI use policies. Practically, the best thing IROs could do is to go back to their general counsel, go back to their IT team and ask: how do we put together an AI use policy that allows employees within our enterprise to start experimenting with this technology?
It’s about creating the parameters that allow people to experiment in a safe way, to educate them on the types of data that can go into public tools versus private tools. Then it’s on technology providers like Notified, and similar competitors, to make sure we’re providing enterprise-grade, secure tools that leverage AI capabilities that are purpose-built for the IR use-case. It’s one of those use-cases that is incredibly sensitive because of the amount of materially non-public information.
[When Notified created its own AI use policy], the number one thing we found is that people didn’t understand, within our organization, the difference between public and private AI tools. Providing a list of approved tools, the types of data people should be using it for and, probably more importantly, accepted use-cases, was incredibly helpful. We educated the organization and then were very specific on the types of training available – essentially a governance program that you need to go on in order to be able to unlock access to these tools, with quarterly refreshers.
I think the one guarantee with regulation is that it’s always delayed. We’ve been talking about ESG regulation for years, at least in the US – Europe is further ahead. In the US, we are still two years out from required disclosure for ESG so I think there will be a substantial delay in any AI regulation – as there always is.
But I do think we may get to a point where, if we’re talking about materials such as an earnings script or other disclosures, you are going to have to disclose whether you’ve leveraged AI to actually create those materials. We’re not there yet, of course.
In the conversations we’ve had in different cities across the world, there seems to be a tremendous desire within the IR role to eliminate some of the very manual data-synthesis work that happens on a quarterly and annual basis, so that IR professionals can focus on driving messaging and tonality, on making sure they’re telling the value proposition and really getting that corporate story out to investors and stakeholders alike.
The biggest opportunity, and it’s still very much early innings, is figuring out the different components of the IR workflow where there’s opportunity to shortcut some of the work that is incredibly labor-intensive. The earnings cycle is a great example: we see an opportunity to leverage AI tools that are already publicly available but within a private environment, to shorten the length of the earnings prep time from 12 to six weeks – essentially allowing us to give time back to the IR department.
The ways in which AI can be put to work for investor relations are increasing as the technology is applied to new tasks. We asked workshop attendees where they found AI most useful – and where they think it is easiest to implement.
While many IROs have questions and concerns around AI – more on those and how to mitigate those concerns later – there are many ways in which AI is being efficiently folded into the IR toolkit today. Here we have pulled together a selection of quotes from Toronto, New York and London setting out how IROs are already using AI in their work.
‘I work for a really small shop – I wear many hats and I don’t have a lot of time. Finding graphics people is very hard and they’re very expensive so I started using a tool called Beautiful AI. There is a really good library of images but you can also upload your own. When you’re formatting, it keeps everything consistent. There are different templates you can use so you can whip up a deck really quickly and it looks very, very professional. From a cost-benefit point of view, it saves me a lot of time but also money because I’m paying about $112 a year for this tool.’
‘We’re going to try Microsoft Copilot for the deck. We think it could be helpful for a first draft of the slides for an earnings deck, giving us a starting point that includes a visualization of the data as well.’
‘ChatGPT isn’t real-time but Google Bard is – it’s tied to Google’s search engine. Because it’s indexing Google News and other search results, you can use it to ask for key themes in a specific industry. I’ll use that as a checklist for what I’m not thinking about. Nine times out of 10, there’s one or two things on the list that prompt new ideas.’
‘We do challenge analysts on their assumptions. AI could help with that. It could look at previous scripts, it could look at what you said, it could look at the models and it could pick out the areas where those analysts have been particularly bullish, or particularly bearish, for you to go away and challenge the analyst.’
‘AI is such a great idea, everyone’s talking about it. But how do we actually turn the plug on and start using it daily? For me, summaries and peer monitoring work. We use it to monitor what peers have said during their investor calls.’
‘You can use it internally when running through investor notes as you prep for a Q&A: you’ve had all those meetings and now you can ask, What themes are lots of people asking us about?’
‘ChatGPT can load 10-K or 8-K filings from competitor companies and synthesize data really quickly. With a very specific set of parameters, it can then summarize notes. You could ask, for example, for highlights of how capital expense has changed, quarter over quarter.’
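The quarter-over-quarter comparison described here is, at its core, simple arithmetic that an AI tool automates across many filings. As a minimal Python sketch of the underlying calculation (all capex figures below are hypothetical, not from any filing):

```python
# Hypothetical quarterly capital expenditure figures ($m),
# e.g. extracted from a competitor's 10-K/10-Q filings
capex = {"Q1": 42.0, "Q2": 45.5, "Q3": 44.1, "Q4": 50.3}

def qoq_changes(series):
    """Return quarter-over-quarter percentage changes for ordered quarterly figures."""
    quarters = list(series)  # dicts preserve insertion order in Python 3.7+
    return {
        f"{prev}->{curr}": round((series[curr] - series[prev]) / series[prev] * 100, 1)
        for prev, curr in zip(quarters, quarters[1:])
    }

print(qoq_changes(capex))
```

The value an AI layer adds is not the arithmetic but the extraction step: pulling the right figures out of long, unstructured filings – which is exactly why the output still needs to be verified against the source documents.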
‘If, for example, I want to find investors that are interested in a specific company, I’ll ask AI to look for investors focused on that industry, or peers in that stage of growth perhaps. The output might be a combination of family offices, brokerage firms, institutional investors, individuals, with their profiles, company email and contact information. Because we’re at the stage of AI where it’s not 100 percent trustworthy, I have to verify every piece of information but it’s a great help. Now I have a list of investors I can go after. Without AI, it would take days or weeks to compile a list like this. Even though I have to verify the data from AI, it is much faster.’
‘When we work on the earnings press release, we help the CEO or CFO write their quote. ChatGPT does that very well because you want to share the sentiment of excitement, but you don’t want to overshare: you are following disclosure rules. Then we use our team analyst for a more human touch.’
‘You can use AI to figure out a better way to say what you just wrote. And this really comes down to how you are going to prompt the generative AI tool to give you what you want, to work out how to say something with more optimism, or more conservatively, with a more professional tone, for example. Natural language tools are really good for massaging different types of language and running lines from scripts.’
‘At the beginning of a meeting, a lot of our European investors right now say they’re recording the meeting (you can opt out, of course). But being able to use a tool such as Otter AI allows you to be more engaged in the conversation. Similarly, when you’re at a conference with management, you might be in the middle of taking notes when someone pings a question to you. Recording – and using AI for notetaking – just allows you to be more engaged.’
‘You can use AI to compare two ESG reports. You ask, Please show us the disclosures this company is making, compare the difference and show me where the gaps are in my report.’
Do an audit of the skills you have around you. Rather than thinking about IR in general, think about your team and the skills it has. If you already have a fantastic copywriter in your team, maybe you don’t need to use AI to produce copy. But if you don’t have a junior analyst, for example, maybe that’s where AI can fill a gap. You should be asking: what solutions can be provided by AI?
‘I envision a point where each of us has tools in different areas: one looking for trends in the broader industry, pulling in industry data and macro trends, for example. And then another area, which is absolutely locked down private, where the team is working on materials and able to leverage dedicated IR engines.’
Investor relations professionals at the events in Toronto, New York and London were asked to complete a prioritization task: essentially charting which tools – available today or expected in the future – would have the biggest impact on their roles. They also ranked those that would be easiest to implement.
‘We had real differentiation in some of the scoring [on our table]. I think it’s about the quality of the service you already get. One example was shareholder ID, an area where the quality I get is very, very good: you’d get it and you’d probably need to do half an hour just checking it over. It was accurate and it was brilliant. So I scored that very, very low because, for me, AI wouldn’t add anything. Whereas others said they actually need to do a lot of work and that it was really hard. For some it was their highest-scoring [tool]. It’s about the capabilities you already have.’
It’s important to know what your peers are doing with AI. But if you don’t understand the technology and how to use it, it is difficult to make decisions on where it will best serve your needs. Education is cited as a key challenge for IR professionals when it comes to AI adoption.
What is public vs private AI?
This is a crucial differentiator for IR applications of AI because of the sensitive nature of a lot of the material involved. Public AI – such as ChatGPT – does what it says on the tin: it is public. Whatever information you feed into a public AI tool may be used to train the model and could become accessible to others.
Private AI, on the other hand, is a closed-door system that maintains the integrity of your proprietary data – and its privacy. This is where the future of AI will really benefit IR: when you have an AI tool that is trained on your own company’s data and your IR needs.
General or strong AI vs shallow or weak AI
Although these terms might sound like one is better than the other, they simply refer to the use-cases of different tools. Shallow or weak AI has a pre-designated job, like that used in facial recognition or driverless cars, for example. General or strong AI – which is flexible and uses a range of sources – is not set to any specific skill.
Prompt engineering
Some delegates at the AI workshops noted overly generic results when they tried ChatGPT for press release writing, for example. This is where prompt engineering comes in: the better you get at directing AI, the better your results will be. Prompts are the questions you ask your AI, and by being very specific and allowing it to learn, you can develop a ‘persona’ for your tools that works the way you want it to – and gives you the output you need.
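As a toy illustration of how specificity changes a prompt, here is a minimal Python sketch that assembles a ‘persona’ prompt from reusable ingredients. All wording, the function name and its parameters are hypothetical – not from any particular AI tool:

```python
def build_prompt(task, persona=None, tone=None, constraints=()):
    """Assemble a specific prompt from a task plus optional persona, tone and rules."""
    parts = []
    if persona:
        parts.append(f"You are {persona}.")
    parts.append(task)
    if tone:
        parts.append(f"Write in a {tone} tone.")
    for rule in constraints:
        parts.append(f"Constraint: {rule}.")
    return " ".join(parts)

# A generic prompt – likely to produce the generic output delegates complained about
generic = build_prompt("Write a press release about our Q3 results.")

# A specific prompt that encodes persona, tone and disclosure constraints
specific = build_prompt(
    "Write a press release about our Q3 results.",
    persona="an IR professional at a US-listed mid-cap software company",
    tone="professional, measured",
    constraints=[
        "use only figures provided in the attached data",
        "do not speculate beyond disclosed guidance",
    ],
)
```

The same persona and constraint text can be reused across tasks, which is effectively how a consistent ‘persona’ for your tools is built up over time.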
A really good rule of thumb is that unless your enterprise has a single sign-on into an AI tool, it’s a pretty safe bet that it’s public
The two biggest challenges to greater AI adoption among IROs are around data security and the reliability of data, and each of these should be addressed by a company AI use policy, which we’ll look at further down. It is interesting to note that the AI learning curve – understanding how to use it and when – is a top-three barrier to adopting the technology.
You will get the most out of AI by really understanding the way the technology works, and prompt engineering is a great place to start. By being very specific in the things you ask and the tasks you put to an AI tool, you will get a better output. You should also remember that AI remembers: it grows on what you feed it, allowing you to ‘train’ it to your needs and essentially build a persona that fits into your IR team.
Given the role that investor relations plays in communication, both externally to the Street and internally to the board and senior management – and IRO access to and use of non-public, material information – it is understandable that IR professionals raise concerns around safety and ethics when it comes to AI.
There are different ways you can mitigate concerns around the use of AI. The following is a table of potential challenges related to safety and ethics, with potential solutions.
Privacy & copyright
– Outline allowable data inputs in data policy
– Check/amend data policy in contracts
– Check provider’s data history

Human impact
– Identify tasks that are human-critical
– Plan for team career development

Transparency & traceability
– Create internal oversight of the data processes, verify facts and assure quality

Bias
– Check outputs for bias

Duty of care to shareholders
– Understand how investors use AI and make content machine-readable

Authenticity
– Consider disclosing when AI is used
– Check for nuance
Human touch
Something that came up a number of times at the workshops is the idea that AI needs a human monitor. There are certainly cases of ‘hallucination’ in AI output – some notorious – but even without dramatically false outputs, it remains crucial that you fact-check anything you have used AI for.
Then there is the human touch aspect. Some tasks don’t require personality, some are arguably better without it but if you are using AI in a creative process, it might need a bit of humanity. That said, with prompt engineering you can also direct AI to different styles and personas.
As a public company, the margin for error is zero
To use all this information that we have at our fingertips, that we’re so careful of, we will need to be hand-in-hand with our chief security officer, our head of IT and, in many cases, our head of legal
An essential first step to developing that AI investor relations assistant is a use policy. While the majority of IROs are exploring use-cases for AI, more than 65 percent of firms do not have an AI use policy in place. Almost a third have a company-wide AI policy, while just 5 percent have an IR-specific AI policy. But this is where your company and your IR team set out how they will use AI responsibly and ethically, where they set out crucial issues such as who can use AI and how, and where employees learn they cannot upload company documents into ChatGPT.
If your company does not have an AI use policy, you’d better write one – and soon – before a product manager in your organization loads corporate data into the public domain. Go to ChatGPT and ask it to write one for you
Working group
As well as asking about an AI use policy, we asked IROs whether their firm had established an AI working group. Around three quarters had not, though almost a fifth of those said they were in the process of setting one up. The remaining quarter of the companies at our workshops had already established an AI steering group.
Although AI isn’t used by the majority of IR professionals yet, the number of existing use-cases actively shared at our workshops points to the myriad ways the technology is already serving as an IR assistant for teams around the world.
As one delegate pointed out, however, we are still in the phase where we are ‘exploring the ability of AI’. This means everything needs to be fact-checked and human oversight remains vital.
IROs raised concerns around the validity of outputs, questioned whether their data could be trusted with AI and noted how a lack of education around the technology served as a barrier.
With the right checks and balances, an essential AI use policy and education around artificial intelligence, these challenges can be overcome and IROs can gain confidence in AI – as well as get vital management buy-in to invest in time- and money-saving tools.