An investor wake-up call on artificial intelligence

Could anxiety about generative AI prompt a broader rethink of whether technology companies really count as ethical investments?
Shares of educational tech company Chegg nearly halved after it admitted earlier this week that some students had been turning towards artificial intelligence chatbot ChatGPT rather than its human tutors.
Louise Piffaut, head of ESG integration for equities at Aviva Investors, told me yesterday that this crisis could “wake up” investors who have until now “misunderstood” how exposed technology companies are to AI.
Some people might suspect their internet provider of excessive surveillance when a spookily well-targeted advert pops up, Piffaut says, “but that’s probably where it stops for the average retail investor”. But in reality, she adds, “the different forms of artificial intelligence including biometrics, Siri and Alexa . . . all have ESG risks that have been largely under-appreciated”.
From a climate perspective, AI will probably drive tech company energy usage (which is already sky-high) even higher.
Then there are human rights issues, such as racial bias in facial recognition and the potential for mass surveillance, which the European parliament, among others, has been preparing to crack down on. Geoffrey Hinton, the 75-year-old British scientist seen as the godfather of modern AI, told the New York Times this week he had quit Google, in part because it’s “hard to see how you can prevent the bad actors from using it for bad things”.
These ethical issues come as companies are scrambling to adapt. Chegg, for example, said it was now “embracing [generative AI] aggressively”, tackling the threat by launching its own service that lets students talk to an AI-powered chatbot directly. Read on for my story on how one AI-focused data company thinks the latest iteration of the technology could transform the way clients think about risk.
Plus we explore another ambiguous challenge: the questions that Saudi Arabia poses for ESG investing. And don’t miss our new Moral Money video on the bumper profits that have been rolling into the coffers of the oil supermajors. (Kenza Bryan)
The next FT Moral Money Forum report will dig deep into carbon markets, and we want to hear from you. Do you favour carbon taxes? Can voluntary markets overcome quality concerns? Will compliance markets do more to push business to decarbonise? Share your thoughts here.
Artificial intelligence is actually quite retro in the ESG world, according to Alexandra Mihailescu Cichon, chief commercial officer at RepRisk, the Zurich-based due diligence company that has been scouring the web for evidence of corporate wrongdoing since 2007.
Back then, AI was mostly thought of as a time-saving exercise. Like many others, the company created a giant web-scraping tool that mimicked the ability of human analysts to quickly read a news article or NGO report and understand the strength of claims about a toxic oil spill or a dispute with a trade union.
This tool has grown to scan half a million documents every day in 23 languages. It discards the least credible or relevant claims and alerts analysts in Zurich, Toronto or Manila to the ones that could be worth reviewing.
More recently the company has taken things a step further with machine learning, feeding 16 years of human feedback back into the system to teach it which sources to trust and how to interpret them. RepRisk’s clients include Standard Chartered and ING, among more than 80 banks, as well as insurers, private equity players, hedge funds and asset managers.
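For readers curious about the mechanics, the snippet below is a minimal, hypothetical sketch of what feedback-trained screening of this kind can look like: a text classifier fitted to past analyst decisions, then used to decide whether a new document is worth flagging. The documents, labels and threshold are invented for illustration and do not reflect RepRisk’s actual system or data.

```python
# Hypothetical sketch: training a relevance filter on past analyst decisions.
# This is NOT RepRisk's system; the texts, labels and 0.5 threshold are invented
# purely to illustrate the general idea of feedback-trained screening.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy history of documents and whether a human analyst judged them actionable (1) or not (0)
past_documents = [
    "Regulator fines mining group over repeated tailings dam failures",
    "Union accuses supplier of withholding wages at textile plant",
    "Company announces sponsorship of local football tournament",
    "Blog post speculates, without sources, about a possible oil spill",
]
analyst_labels = [1, 1, 0, 0]

# Fit a simple text classifier that mimics past analyst judgements
screen = make_pipeline(TfidfVectorizer(), LogisticRegression())
screen.fit(past_documents, analyst_labels)

# Score an incoming document and only alert analysts above a confidence threshold
incoming = "NGO report documents chemical discharge near protected wetland"
probability_actionable = screen.predict_proba([incoming])[0][1]
if probability_actionable > 0.5:  # threshold chosen arbitrarily for illustration
    print(f"Flag for analyst review (score {probability_actionable:.2f})")
```

In practice such a filter would be trained on millions of labelled examples and combined with source-credibility signals, but the loop is the same one the company describes: human judgments feed back into the model.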
“We need to be able to read between the lines when a company is criticised,” Mihailescu Cichon told Moral Money. “We feel strongly about the need for curation. The information needs to be actionable.” For example, for cultural reasons, Japanese-language sources are likely to couch criticism in gentler terms than those from the US — something a simple translation tool might struggle to pick up.
Prediction is the next frontier. RepRisk has cross-referenced the location of “environmentally sensitive sites” with millions of its own risk reports in the hope of flagging possible future issues. Extractive projects within 1km of these sites faced a 77 per cent higher risk of public criticism for environmental incidents between 2007 and 2022 compared with those 30km away, according to RepRisk research. For private companies, which tend to face less scrutiny, risk increased by 27 per cent. In both cases proximity to a Unesco world heritage site raised the risk even further.
Knowing all this could give investors a better chance of getting out early, Mihailescu Cichon argues. The controversial East African Crude Oil Pipeline, for example, set to run from Uganda to the Tanzanian coast, is within 1km of 33 environmentally sensitive sites, according to RepRisk data.
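As a rough illustration of the kind of proximity screen described above, here is a minimal sketch that flags projects sitting within 1km of an environmentally sensitive site using a great-circle distance check. The site names and coordinates are invented, and RepRisk’s actual cross-referencing methodology is not public, so treat this as illustrative only.

```python
# Hypothetical sketch of a proximity screen: flag projects within 1km of an
# environmentally sensitive site. Names and coordinates are invented; this is
# not RepRisk's methodology or data.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius of roughly 6,371km

sensitive_sites = [
    ("Example wetland", 0.4162, 33.2026),        # illustrative coordinates only
    ("Example heritage area", -2.3333, 34.8333),
]
projects = [
    ("Project A", 0.4180, 33.2040),
    ("Project B", -6.8235, 39.2695),
]

for project_name, plat, plon in projects:
    nearby = [
        site for site, slat, slon in sensitive_sites
        if haversine_km(plat, plon, slat, slon) <= 1.0
    ]
    if nearby:
        print(f"{project_name}: within 1km of {', '.join(nearby)} - elevated ESG risk")
```

A real screen would use site boundaries from geospatial databases rather than single points, but the principle is the same one RepRisk’s research describes: distance to sensitive areas as a leading indicator of reputational risk.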
Not all agree that a predictive approach to ESG is helpful. “The industry hates black boxes,” Aviva’s Piffaut told me. “If you have an AI-led score that you cannot explain because it’s full of code, and a model you don’t understand because you’re not a data engineer, it doesn’t drive the transparency we need right now.” 
Fadi Zaher, head of index funds at UK-based Legal & General Investment Management, warned that the lack of regulation for ESG data means there may not be much point to predictive AI for the moment. “If you’re feeding it poor quality information it will spit out poorer quality output.”
In the case of the pipeline, it is hard to imagine any fund manager missing this risk. Still, some, such as the insurance broker Marsh McLennan, chose to back it even as it was publicly shunned by major banks and insurers. Putting biodiversity risk into quantifiable terms could go a long way with investors. (Kenza Bryan)
The Milken Institute conference taking place in Los Angeles this week has featured extensive debate about AI — along with hot topics such as the American debt ceiling, banking stress, war in Ukraine and deteriorating US-China relations.
Another issue sparking unexpected interest is the extraordinary ambition now emerging from Saudi Arabia.
This creates a conundrum for global businesses that aspire to uphold ethical standards. As Saudi government and business leaders have argued at Milken, the Kingdom has outlined a bold ambition to shift its economy away from its current reliance on fossil fuels with a so-called “Vision 2030” plan of reforms. After investing $300bn in new infrastructure last year, the government plans to invest another $3tn (yes, trillion) before 2030. And to support this, it is overhauling its capital markets, introducing green energy projects, unleashing mass digitisation and even embracing some social reforms, such as a dramatic increase in the proportion of women in the workplace. Hooray.
These plans create potentially lucrative opportunities for investors in general, and the focus on green energy should excite ESG enthusiasts in particular. But the Kingdom has an infamously bad human rights record, epitomised by the gruesome murder of the journalist Jamal Khashoggi. And while ESG investors have sometimes downplayed such human rights issues when chasing green projects, the speed at which western businesses were forced to pull out of Russia last year has left many nervous about half-understood political risks. “Some investors simply won’t put Saudi projects into an ESG basket,” one asset manager told me in Milken.
The Saudi delegation says this is unfair: it told investors in Milken that events such as Khashoggi’s murder were isolated incidents that will not be repeated. “It is in the past,” the mantra went. Hopefully so. But if these bold investment plans — and green pledges — keep rolling out, the Kingdom will present an interesting test case for how to define ESG. We will be watching closely. (Gillian Tett)
The world’s largest sovereign wealth fund thinks regulators are not moving fast enough on AI ethics, so it has decided to step in, report Richard Milne and Katie Martin from Oslo.
If you’re one of a handful of people who have still not read the FT Magazine’s blockbuster piece on “the race to God-like AI”, now could be a good time.
Join us in London or online for the third annual Moral Money Summit Europe on May 23-24. Leading investors, corporates and policymakers will come together to discuss what needs to happen next to unlock a more sustainable, equitable and inclusive economy.
