The White House announced investments and actions to put the Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (AI RMF 1.0) to the test. Tackling AI risk head-on, the Biden administration engaged with Alphabet, Anthropic, Microsoft, and OpenAI, with an additional focus on the impact of generative AI. In addition, the Department of Justice, Federal Trade Commission (FTC), Consumer Financial Protection Bureau (CFPB), and Equal Employment Opportunity Commission established AI principles advocating robust data collection and analysis to mitigate bias and discrimination.
Conversations with AI leaders show that AI governance is in its early days, yet the tsunami is coming, and the impact will be felt in all enterprises. Leaders must be prepared because they are accountable for how their organization uses AI. In the US, 17 states and the District of Columbia have pending AI legislation, with AI task forces reviewing existing laws on cyberattacks, surveillance, privacy, and discrimination, as well as the potential impacts of AI. The time is now for enterprise AI governance to ensure:
- Evaluations of embedded AI in applications and platforms. 51% of data and analytics decision-makers buy applications with embedded AI capabilities, and 45% leverage pre-trained AI models. Enterprises need AI policies to test for effectiveness, responsibility, and business and data processing risks. When a software-as-a-service offering with embedded AI conflicts with enterprise policies, vendors must demonstrate how they can move models on-premises, allow capabilities to be switched off, and release updates.
- Controls around IP use and infringement. Foundation models and generative AI expose enterprises to entitlement and IP violations. US courts have held that only humans, not AI, can create IP, a position the Supreme Court recently left standing. Other countries, such as Australia, have similar laws. Enterprises need a comprehensive understanding of data sources; a process for validating training data, algorithms, and code; and automated controls to avoid IP violations.
- Product safety standards on AI. AI leaders, like Alphabet’s Sundar Pichai, have called for regulation rather than proactively addressing AI risk themselves, allowing harmful propaganda and misinformation to proliferate. The EU AI Act attempts to counteract that trend by extending product safety regulations to AI use. In the US, the CFPB and FTC are examining existing product safety, libel, and consumer protection laws. Legal teams must prepare for regulatory compliance and potential class-action lawsuits as enterprise AI capabilities come under regulatory scrutiny.
- Inclusiveness as part of AI ethics. AI ethics that do not consider inclusivity are incomplete. With more black-box machine-learning models, such as large language models and neural nets, organizations will struggle to ensure that model behavior does not violate civil or human rights laws. Enterprises must take action to minimize bias in training data and model outcomes and recognize that conversations about AI and ethics must involve a broad set of stakeholders.
- Data integrity and observability. Enterprises need to be able to trace and explain their data. New York State has a regulation under review that requires disclosure of data sources and any use of synthetic data. While most organizations track data sources and observe AI once a model is in production, data governance must extend to data science processes and data sourcing to proactively address data transparency and usage rights throughout the AI lifecycle.
As regulators and courts start to scrutinize the use of AI, enterprises need to quickly build AI governance as a bulwark against risk. Expecting data science and AI teams to tackle AI governance alone is a recipe for failure. AI governance will require enterprisewide cooperation, including CEOs, leadership teams, and business stakeholders, to build effective processes and policies.
The original article by Forrester’s vice president and principal analyst Michele Goetz and analyst Rowan Curran is here.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends. Image credit: iStockphoto/style-photography