By Denise Carpenter, ICD.D

In the modern Boardroom, it is essential to increase your Board’s visibility into, and fluency with, Artificial Intelligence (AI). It’s just good governance.

Before diving in from a Board perspective, it’s important to define AI. The AI systems used by businesses today are highly efficient and powerful prediction engines. They generate predictions from data sets selected by engineers and subject matter experts, who use that data to train algorithms that are, in turn, optimized toward goals articulated, most often, by those same developers. However, the engineers designing these systems are not typically tasked with building in guardrails for effective use, or with ensuring that the systems operate within the law and corporate strategy. That’s where Boards come in.

There are several angles from which a Board, its committees and individual directors can approach AI. First, a Board should view this matter from a traditional compliance, strategic planning, legal and business risk perspective. Second, a Board can approach AI governance through an environmental, social and governance (ESG) lens. The ESG community is increasingly making the case that technologies such as AI need to be a key consideration in a Board’s governance portfolio, especially in relation to civil liberties, workforce, and social justice issues. Finally, a Board should consider how AI in society at large will impact enterprise activity.

The reality is that many Boards are just beginning to assess the impact of AI on their businesses. In assessing AI’s impact at the Board level, I have found the following four pillars useful:

  1. AI is more than just an issue for the technology team. AI’s impact resonates across the organization and implicates those managing all business functions, including legal, marketing and human resources.
  2. AI is a complex system. AI is a system comprising the technology itself, the data upon which it runs and the human teams who manage it.
  3. C-level leadership and governors are responsible for AI oversight. AI systems are highly complex and contextual. Strategic guidance and management are essential for fully integrated and trusted AI systems.
  4. AI is dynamic. By definition, AI is in a constant state of evolution. As such, its oversight and accountability must similarly evolve.

Why the Board?

The considerations outlined previously suggest the need for strong oversight, but why is the Board the right party to provide it?

The reasons seem straightforward. First, strategy and risk are among the key areas of Board oversight. Second, AI is increasingly a critical tool in advancing and supporting strategy, but it carries risk. Thus, AI ethics oversight by the Board is both appropriate and critical. The Board’s oversight of AI must set the right tone and establish the guiding principles for an ethical framework for AI adoption, one the company can operationalize to achieve responsible and ethical outcomes.

AI and Board Oversight

Today, Board oversight must include requirements for corporate- and use-case-level AI policies. These policies should set out where AI systems will be used and what checks and balances will be put in place, and should establish standards for robust and safe operation. Additionally, these policies must be underpinned by practical processes, compliance structures and a strong culture.

It is clear that Boards play a critical role in how AI is used in business. The question is: how will your Board approach AI oversight? For example, Boards are commonly faced with the difficult questions, “Should we use AI in this way?” and “What is our duty to understand how that use is consistent with all of our other beliefs, missions and strategic objectives?”

When starting to assess AI at the Board level, I recommend Boards ask themselves the following questions:

  • What is the AI opportunity for our business? What are the benefits and risks?
  • How can AI impact our business now and in the future? How should it?
  • What is our AI strategy?
  • What is our approach to AI governance?
  • How are we, or how will we be, driving trust in our company’s use of AI?
  • What are our principles and/or framework to deploy and use AI in a responsible way?
  • Have we communicated these internally and externally?
  • How are they being embedded in AI initiatives across the business?
  • Who oversees the use of AI? Does that person or group have adequate and appropriately skilled resources?
  • What guardrails have we established to address the challenges associated with ethics and governance of AI?
  • Are our uses of AI appropriate? How do we know?
  • Are they achieving the desired results?
  • Has the use of AI created, or will it create, unanticipated risks, including ethical challenges?
  • Do we have adequate skill sets on the Board or management team to properly oversee the use of AI? Do we need to seek out director candidates with relevant skill sets?
  • How are we collaborating with our ecosystem of business partners, suppliers, customers, regulators, and other constituents to align on approaches to trustworthy AI?
  • How will we ensure our AI oversight is evergreen: up to date, relevant and appropriate for evolving technology?

Implementing an AI Ethical Framework

By implementing an ethical framework for AI, organizations create a common language. This framework should articulate your values, management system and the measures that will sustain trust and ensure data integrity among all internal and external stakeholders.

Like human intelligence and behaviour, a critical component of AI is information, or data. Without data, decisions would be made and actions taken arbitrarily, without any logical basis. And because the data that feeds AI is selected and shaped by people, a number of ethical risks are innate to AI.

Here are just a few of the ethical and governance challenges associated with the use of AI:

  • Multiple definitions of AI and related terms across the organization and its ecosystem
  • Limited focus or alignment of AI to the company’s mission and values
  • AI utilized for narrow objectives, without considering the broader aperture of how it can change the business for the better
  • AI developed in an ad-hoc manner, with limited standards or guardrails
  • Data for AI design and development acquired or used without any checks or testing for bias or authenticity
  • AI developed with a sole or primary focus on improving efficiency
  • Outcomes of AI systems not necessarily monitored for alignment with intended objectives

A common framework and lens, applied consistently across the enterprise to the governance and management of AI risks, can allow for faster and more consistent adoption of AI.

Areas of Ethical Risk

From the Board’s perspective, some primary areas of ethical risk may include:

  • Fairness: Will the use of AI result in discriminatory outcomes? Do AI systems use datasets that contain real-world bias, and are they susceptible to learning, amplifying and propagating that bias at digital speed and scale?
  • Transparency and Explainability: Are AI systems and algorithms open to inspection, and are the resulting decisions fully explainable? How is the use of AI that leverages individual data communicated and explained to impacted individuals before, during and after business interactions?
  • Responsibility and Accountability: Who is ultimately accountable for unintended outcomes of AI? Does the organization have a structured mechanism to recognize and acknowledge unintended outcomes, and to identify who is accountable for the problem and who is responsible for making things right?
  • Robustness and Reliability: What measures of reliability and consistency do AI systems need to meet before being put into use? What are the processes for handling inconsistencies and unintended outcomes?
  • Privacy and Trust: Can our employees, customers and stakeholders trust their interactions with our AI systems, and do they? Can they trust our use of AI data? Does the AI system generate insights and actions for individuals that they do not expect, raising concerns and questions of trust and propriety?
  • Safety and Security: What systems do we have in place to ensure that our AI results help to maintain or increase safety? How frequently are they tested for errors in controlled environments? Have risks to human life and to social and economic systems been identified and mitigated? What is in place to ensure these risks do not materialize again?

It is worth noting that areas of AI risk may be rooted in perception as much as in reality. For example, even if an AI system is operating well, the perception that it is generating unfair or unreliable outcomes can be as damaging as actual unfair or unreliable outcomes. And that raises the big question: how does one define “operating well”?

Conclusion

As with many other aspects of technology, AI is becoming indispensable to companies focused on long-term growth and value generation. However, AI is also increasing the impact of data risks and generating new ones, such as risks from unintended consequences. That’s why it is essential to ensure that appropriate Board oversight and guidance are in place to help identify, assess and manage these risks.

For a Board newly approaching these issues, it can be difficult to know where to start. With the background, ethical framework and questions in this article, however, Boards will be well prepared to begin their AI oversight journey.

About Denise Carpenter ICD.D

Denise is a collaborative, engaging Board Chair, Board Director and Executive Coach.

Known for her calm and measured approach, she is respected as a strategic, politically savvy and socially sensitive thought leader. Denise brings a broad perspective, ESG expertise and an understanding of corporate expectations to the Boardroom table.

Denise is nationally recognized as a mentor and advocate of Diversity, Equity, and Inclusion. She is a recognized speaker on diversity of thought, governance as a business enabler and stakeholder relations.