By Bob Walker, Strategic Consultant at Millani
The world of business and investment is rapidly incorporating artificial intelligence, machine learning, and natural language processing into strategy and operations. The power and flexibility of these tools have made them irresistible for a large number of commercial applications. They promise to transform business, and our lives. “AI is the new electricity,” says Andrew Ng, founder of the Google Brain deep learning project.
Responsible investing is no different. We are seeing a proliferation of research tools promising to harvest decision-useful insights from the massive flows of unstructured data available on the internet and to reduce the repetitive ‘grunt work’ of ESG research and analysis. These tools come in the form of ESG assessments, media monitoring, headline-risk assessment and incident reporting, among others.
But as with all new tools, the risks of AI are becoming better understood, gaining public profile and attracting both investor and regulatory attention.
Often considered ‘clean’, AI has a significant environmental footprint: its computational infrastructure places heavy demands on energy and water. The impact on land is also severe, given the technology's reliance on a range of rare earth minerals, the majority of which are mined in China. If the technology sector goes unchecked and unmitigated, it risks dangerous working conditions, labor exploitation and severe environmental impacts.
The social and political risks of AI are also becoming better understood. Gender and racial biases in datasets, algorithms that steer users toward extreme social media content, invasions of privacy, the fundamental lack of consent for personal data extraction, facial and speech recognition, and public surveillance: all have the potential to create unintended consequences. Law enforcement's widespread adoption of Clearview's inaccurate facial recognition algorithm, built on photos scraped from social media sites such as Facebook, is one such example.
But as these technologies develop and are increasingly deployed, investors are taking action. Federated Hermes has been engaging companies on AI risks for several years and has set out “Investors’ Expectations on Responsible AI and Data Governance.” More recently, an investor group is collaborating on a human rights approach with a focus on the big five technology companies (Amazon, Apple, Facebook, Google, and Microsoft). Led by the Council on Ethics at the Swedish National Pension Funds, its expectations are set out in “Tech Giants and Human Rights: Investor Expectations.” Another investor group is coalescing around the International Corporate Governance Network's guidance on dual-class shares, a structure tech founders commonly use to retain control and fend off shareholder concerns.
Canadian investors are also jumping in and taking charge. They may have a unique responsibility to do so. Many of the most significant advances in AI originated in Canada, with major research efforts at the University of Toronto and the University of Montreal. Canada is positioning itself as a world leader in the space, developing a pan-Canadian AI strategy with work advancing rapidly at the Canadian Institute for Advanced Research (CIFAR).
This suggests a new strategy for engagement: going beyond bilateral and collaborative shareholder-corporate dialogues to an industry-to-industry effort to better understand the risks, establish appropriate regulation, and ensure AI is infused with the values we want, rather than the ones that emerge by accident. Intentionality will be key.