As ESG regulation expands, businesses are grappling with increasingly stringent compliance requirements, say Steven Farmer (Partner), Scott Morton (Counsel), Iris Karaman and Kate Chan (Associates) at Pillsbury Law.
Frameworks like the EU Corporate Sustainability Reporting Directive (CSRD), the EU Corporate Sustainability Due Diligence Directive (CSDDD), the EU Deforestation Regulation (EUDR), and UK equivalents impose extensive obligations on companies, including obligations to conduct due diligence, disclose emissions, and verify the environmental and human rights impacts of their operations and supply chains.
At the heart of these obligations lies a crucial challenge: collecting, verifying, and processing vast amounts of data across complex, global supply chains. Generative artificial intelligence (AI) has emerged as a critical tool here, uniquely positioned to process and analyse the massive volumes of data required to comply with ESG regulations. However, using AI to meet these requirements comes with legal and practical considerations.
The data challenge: a global supply chain burden
Global supply chains are notoriously complex, making the task of ensuring compliance daunting. Where companies are required to provide detailed, transparent sustainability disclosures, including information about environmental and social impacts along the entire supply chain, this often involves gathering data not just from direct suppliers but also from indirect parties. For businesses sourcing from multiple suppliers — who themselves source from others — this task is not only labour-intensive but, in some cases, virtually impossible to execute manually.
The challenges are also significant in the UK, where the Environment Act 2021 and various emissions disclosure regimes require companies to evaluate their environmental impacts, such as carbon emissions, biodiversity risks, and deforestation, often spanning multiple tiers of suppliers and partners.
The potential of AI in ESG compliance
AI offers transformative potential for addressing these challenges, particularly for data collection, verification, and risk analysis.
Amongst other use cases, AI can process vast datasets, flagging potential human rights violations or environmental risks by analysing supplier reports, public data, and even satellite imagery. For companies subject to CSRD and other emissions disclosure regimes, AI systems can also quantify carbon footprints by analysing energy usage, production patterns, and transportation data.
By cross-referencing supplier certifications, financial data, and geolocation inputs, AI can help verify supply chain data integrity — a critical requirement under the EUDR. Finally, AI systems can identify emerging risks and trends, allowing businesses to address issues proactively rather than reactively.
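To make concrete the kind of emissions arithmetic such systems automate at scale, here is a minimal sketch in Python. The emission factor values and the activity categories are illustrative assumptions only, not official figures; a production system would draw on published factor databases and far richer supplier data.

```python
# Minimal sketch: estimating a carbon footprint from supplier activity data.
# Emission factors below are hypothetical placeholders for illustration.

EMISSION_FACTORS = {           # kg CO2e per unit of activity (assumed values)
    "electricity_kwh": 0.23,   # per kWh of electricity consumed
    "road_freight_tkm": 0.11,  # per tonne-kilometre of road freight
}

def estimate_footprint(activity: dict[str, float]) -> float:
    """Sum each activity volume multiplied by its emission factor (kg CO2e)."""
    return sum(EMISSION_FACTORS[kind] * volume
               for kind, volume in activity.items())

# Example: one supplier's reported annual activity
supplier_activity = {"electricity_kwh": 120_000, "road_freight_tkm": 45_000}
total = estimate_footprint(supplier_activity)
print(f"Estimated footprint: {total / 1000:.1f} t CO2e")
```

The arithmetic itself is trivial; the value AI adds lies in extracting and normalising the activity figures from thousands of inconsistent supplier reports before a calculation like this can run.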
AI’s potential significance is further underscored by a growing recognition that digital tools and technology are essential for achieving ESG compliance. For example, the CSDDD highlights the importance of leveraging advanced tools, which could include AI, satellite imagery, and blockchain, to streamline due diligence and reduce costs. However, the legislation also cautions against over-reliance on these technologies, emphasising the need for robust human oversight.
The risks of AI-driven compliance
While AI provides substantial advantages, it also introduces significant risks. A prominent concern is the “black box” problem, where AI outputs are not easily explainable because the user has little or no knowledge of the system’s internal workings. Where companies are required by law to demonstrate how they identified and mitigated ESG risks, EU and UK regulators generally expect them to justify their methodologies and verify the accuracy of their findings — tasks that become difficult when AI processes are opaque.
Another challenge lies in the quality of the input data. AI systems rely heavily on accurate, consistent data to produce reliable outputs. Supply chain data, however, is often fragmented and inconsistent, increasing the risk of errors. If an AI system incorrectly analyses supplier emissions data under emissions reporting regimes, the company could face regulatory penalties or reputational damage for submitting inaccurate or incomplete disclosures, potentially giving rise to greenwashing liability risks.
AI systems that produce results reflecting and perpetuating human biases could further contribute to such liability risks. Finally, AI systems are limited in their ability to account for qualitative aspects of ESG compliance, such as the broader societal implications of corporate practices. The CSDDD requires businesses to assess their human rights impacts, which demands contextual and cultural understanding perhaps beyond the current state of technology.
A strategic approach
To maximise the benefits of AI while mitigating risks, businesses should adopt a strategic and legally informed approach. This includes:
• Validate AI-generated outputs through safeguards such as independent audits or human review processes, since certain legislation explicitly requires companies to disclose their methodologies and ensure the accuracy of their ESG assessments. Robust documentation of methodologies and findings is also essential for demonstrating compliance.
• Invest in improving the quality and consistency of supply chain data, as AI is only as reliable as the data it processes. This might involve integrating AI with other technologies like blockchain to enhance data integrity.
• Maintain transparency about AI usage – both internally for governance purposes and externally for stakeholder accountability. This could include disclosing how AI is integrated into processes, the specific roles it plays, its known limitations, and measures taken to mitigate associated risks.
• Foster collaboration between legal, technical, and operational teams. Lawyers can advise on regulatory requirements, while technical experts can tailor AI systems to meet them.
• Train staff at all levels on appropriate and responsible AI usage, with a particular focus on understanding its limitations and potential pitfalls. This training should aim to reduce over-reliance on AI outputs, fostering a culture of accountability and ownership. Employees should be encouraged to critically evaluate AI-generated outputs and verify their accuracy, supported by organisational policies, such as an acceptable use policy, to reinforce these practices.
• Audit the performance of AI systems to detect anomalies, model drift, biases or potential failures before they become critical.
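As an illustration of what a routine performance audit might check, the sketch below compares the average risk score an AI system produces in production against the scores observed when the model was validated. The 10% tolerance and the example figures are illustrative assumptions; a real audit programme would use formal statistical tests and domain-specific thresholds.

```python
# Minimal sketch of a periodic model audit: flag drift when the average
# risk score of recent AI outputs moves materially away from a baseline.
from statistics import mean

def drift_detected(baseline: list[float], recent: list[float],
                   tolerance: float = 0.10) -> bool:
    """Return True if the recent mean shifts by more than `tolerance`
    (as a fraction of the baseline mean) — a cue to escalate for human review."""
    base = mean(baseline)
    return abs(mean(recent) - base) / base > tolerance

baseline_scores = [0.42, 0.38, 0.45, 0.40, 0.41]  # scores at validation time
recent_scores = [0.55, 0.61, 0.58, 0.57, 0.60]    # scores in production
print("Escalate for review:", drift_detected(baseline_scores, recent_scores))
```

A check this simple catches only gross shifts, but it shows the principle: drift detection is cheap to automate, and its output should trigger human review rather than automated correction.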
The intersection of AI and ESG compliance offers both opportunities and challenges. For businesses managing global supply chains, AI can transform how data is collected, verified, and analysed, making it an indispensable tool for meeting the demands of these regulations.
However, AI must be deployed responsibly, with robust safeguards to ensure transparency, accuracy, and accountability. By balancing innovation with compliance, businesses can not only mitigate regulatory risks but also position themselves as leaders in sustainable governance, meeting the demands of a world increasingly focused on ESG accountability.