President Biden issued a sweeping executive order establishing a national framework for the development and use of AI technology in the United States.
The wide-ranging order aims to promote innovation in AI while also managing risks in areas like bias, privacy, and security.
Steps range from regulatory oversight for healthcare AI to attracting tech talent to government service.
For chief risk officers and chief compliance officers in both the public and private sectors, there is much to consider. Here are ten essential takeaways to guide your thinking.
1. Testing AI Safety
The order advocates rigorous testing of AI systems before deployment to identify risks, flaws, and harmful biases. Techniques like “red teaming”, where dedicated teams probe for vulnerabilities, are encouraged. For AI with potential national security risks, specialized government labs will assess capabilities and guardrails to prevent threats. Testing and evaluations aim to verify that systems function correctly and that risks are mitigated before release.
2. Monitoring AI Risks
Managing AI risks is seen as an ongoing process, not a one-time step before deployment. The order calls for continuously monitoring and evaluating AI systems once in use to detect emerging issues. This covers monitoring for discriminatory impacts on different population groups. The goal is to ensure AI maintains safety and performs equitably over time. Detected problems can then be addressed through improvements and other corrective actions.
3. Following Risk Management Frameworks
The order directs government adoption of the NIST AI Risk Management Framework. This invites a comprehensive lifecycle view for managing AI risks, and other frameworks may be incorporated over time. Following standardized frameworks promotes consistent identification, assessment and mitigation of risks across both government and industry. It provides proven processes for risk management tailored to AI’s unique considerations.
4. Vetting Procured AI
Government agencies procuring AI systems and services from vendors are advised to evaluate claims of effectiveness and embedded risk mitigation capabilities carefully. Independent evaluation provides objective assessment rather than relying on vendor marketing. Documentation and oversight requirements ensure that procured AI meets safety, fairness and other criteria.
5. Privacy Enhancing Technologies
The order advocates using privacy-enhancing technologies (PETs) to safeguard personal data and manage privacy risks exacerbated by AI. PETs are a category of tools that minimize exposure of sensitive data during AI modeling and use. Their application is intended to mitigate improper access or disclosure of private information. Overall, PETs provide technical guardrails to prevent AI privacy harm.
6. Regulatory Oversight
Federal regulators are directed to monitor AI risks and impacts for sectors like healthcare, finance, transportation, and education. The guidance provided to industry aims to ensure consumer protections keep pace with AI-enabled products and services. Rulemaking or emphasis on existing requirements may address risks like discrimination and fraud. The goal is to protect patients, passengers, financial consumers and students from potential downsides as AI is deployed in critical areas.
7. Coordinating Government AI Risk Management
The order requires government agencies to appoint Chief AI Officers and create governance boards to coordinate AI policies and risk management. Central coordination aims to ensure consistent identification and mitigation of AI risks across government. It also facilitates sharing of best practices and lessons learned. With many agencies adopting AI, centralized oversight and collaboration will enable taking a systematic, government-wide view of managing risks responsibly.
8. Preventing Anticompetitive Risks
Regulators are advised to use their authorities to prevent misuse of AI that disadvantages competitors or reduces market competition. Dominant firms controlling key assets could use AI to exploit their position absent oversight unlawfully. The order warns that concentration risks could limit innovation and choice. Enforcement actions aim to stop collusion and promote access to AI for entrepreneurs and small businesses.
9. Safeguarding Civil Rights
The Justice Department and other Federal agencies are directed to use their existing authorities to prevent discriminatory abuses of AI that violate civil rights laws and Constitutional protections. Oversight will monitor criminal justice, benefits programs, hiring practices and other areas where AI risks marginalizing vulnerable groups. Actions aim to avoid unlawful discrimination based on race, disability, and other protected characteristics. Algorithms must uphold civil liberties and protections. Accountability measures for developers and users will enforce these rights.
10. International AI Risk Principles
Global cooperation that brings other nations together to manage AI risks is encouraged. Through bilateral and multilateral engagement, the order seeks to establish norms, standards and policies that ensure AI is developed and used responsibly worldwide. The U.S. will lead the development of a framework for accountability to mitigate cross-border AI risks. Collaborative efforts aim to prevent authoritarian misuse and build consensus around principles that prevent discrimination, respect rights, and promote safety.