Navigating the New Frontier of AI Safety and the White House’s Executive Order

By Diego Rosenfeld - November 15, 2023

Until now, AI development has been governed largely by voluntary self-regulation. In response to public and White House pressure, major AI companies like Microsoft, Anthropic, Google, and OpenAI have formed an industry group named the Frontier Model Forum. Their goal? To ensure the “safe and responsible development” of advanced AI models. These ‘frontier models’ exceed existing models in size, capability, and versatility. Examples include Anthropic’s Claude 2, OpenAI’s GPT-4, and Google’s upcoming Gemini. The initiative is a positive step, but critics argue that the Forum lacks concrete commitments and measurable outcomes. Moreover, the $10 million AI Safety Fund the Forum has established seems small given the billions of dollars spent annually to develop ever more advanced models.

In a similar vein, OpenAI has launched a Preparedness Team. This team is dedicated to monitoring and guarding against a range of catastrophic risks, such as:

  • Individualized persuasion
  • Cybersecurity threats
  • Chemical, biological, radiological, and nuclear (CBRN) dangers
  • Autonomous replication and adaptation (ARA)

Measuring AI Transparency

These actions signal a move towards addressing risks more transparently. Complementing these efforts, Stanford University has introduced the Foundation Model Transparency Index. This index evaluates top AI models on 100 indicators of transparency, providing a comprehensive overview for developers and implementers of AI.
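
To make this concrete, an indicator-based index like this ultimately reduces to scoring each model on the share of indicators it satisfies. The minimal Python sketch below illustrates that idea; the indicator names and values are hypothetical stand-ins, not the index’s actual 100 indicators or Stanford’s scoring methodology.

```python
# Minimal sketch of an indicator-based transparency score, in the spirit of
# the Foundation Model Transparency Index. The indicators below are
# hypothetical examples, not the index's actual 100 indicators.

def transparency_score(indicators: dict[str, bool]) -> float:
    """Percentage of transparency indicators a model satisfies (0-100)."""
    if not indicators:
        return 0.0
    return 100 * sum(indicators.values()) / len(indicators)

example_model = {
    "training_data_sources_disclosed": True,   # hypothetical indicator
    "compute_usage_reported": False,           # hypothetical indicator
    "model_card_published": True,              # hypothetical indicator
    "downstream_use_policy_published": True,   # hypothetical indicator
}

print(f"Transparency score: {transparency_score(example_model):.0f}/100")
# -> Transparency score: 75/100
```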

Building safe AI systems varies by model but typically involves extensive testing and fine-tuning, including techniques such as reinforcement learning from human feedback (RLHF). This broader effort, known as AI alignment, aims to steer AI systems toward human goals, preferences, and ethical values. For instance, OpenAI spent six months testing and aligning GPT-4 before its release. According to OpenAI, GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5.
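
To give a rough sense of what RLHF involves under the hood, the sketch below shows only its first ingredient: training a reward model on pairs of responses that human raters have ranked, so that preferred responses score higher. The tensors, dimensions, and data are toy stand-ins, not any lab’s actual training setup.

```python
# Toy sketch of the reward-model step in RLHF: learn a scorer that rates
# human-preferred ("chosen") responses above dispreferred ("rejected") ones.
# Random vectors stand in for the response embeddings a real language model
# would produce; this is illustrative, not a production implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
dim = 16
true_preference = torch.randn(dim)   # hidden trait raters "prefer" (toy)
reward_model = nn.Linear(dim, 1)     # scores a response embedding
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

for step in range(200):
    base = torch.randn(32, dim)              # a batch of response pairs
    chosen = base + 0.5 * true_preference    # preferred responses carry the trait
    rejected = base - 0.5 * true_preference  # dispreferred ones lack it

    margin = reward_model(chosen) - reward_model(rejected)
    loss = -F.logsigmoid(margin).mean()      # Bradley-Terry pairwise loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In full RLHF, the learned reward then drives a reinforcement learning step (commonly PPO) that fine-tunes the language model itself toward higher-reward behavior.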

Advancing AI Governance: Key Features of the New White House Executive Order

The recent White House Executive Order on AI marks a significant shift. It moves from self-regulation to a standards-based, regulated approach. This Order builds on previous voluntary guidelines, providing a clearer framework for AI safety. Its key elements include:

  • Establishment of the White House AI Council: This Council, created within the Executive Office of the President, coordinates AI policies across federal agencies. It underscores AI’s importance in government policy and strategy.
  • AI Trustworthiness and Risk Management: Focuses on ensuring AI systems are reliable, safe, and trustworthy, a priority that extends beyond government to every organization deploying AI.
  • Data Management Best Practices: Emphasizes best practices in handling and securing data, addressing widespread concerns about data privacy and security.
  • International AI Development and Deployment: Calls for a playbook and research agenda for developing and deploying AI worldwide, accounting for labor market impacts and risk mitigation.
  • AI Risks to Critical Infrastructure: Stresses managing AI risks to critical infrastructure, including developing response strategies for cross-border risks.
  • Reporting and Framework Compliance: Mandates reporting to the President within 180 days on compliance with the NIST AI Risk Management Framework and U.S. National Standards Strategy.
  • Composition and Operational Flexibility of the AI Council: Details the members of the Council and the Chair’s authority to create subgroups, offering insight into the governance structure.
  • General Provisions: Discusses the Order’s limitations, including its non-interference with departmental authorities and its legal scope.

While Executive Orders can be overturned by future administrations, this one is a vital step in ensuring transparency and the safety of AI models.

Why It Matters

For CEOs and business leaders, understanding and adapting to these AI developments is critical. The evolving landscape of AI regulation directly impacts corporate strategy, risk management, and innovation pathways. As AI becomes more integrated into business operations, CEOs need to engage proactively with these changes: not only complying with emerging regulations, but also fostering a culture of ethical AI use within their organizations. Practical strategies include investing in AI literacy programs for employees, establishing internal ethical AI guidelines, and creating cross-functional teams to oversee AI deployment. Engaging with external experts and joining industry forums can also provide valuable insights and keep a company at the forefront of responsible AI practices.

By taking these steps, you can ensure that your company not only complies with the new regulations but also leverages AI responsibly and competitively in the ever-evolving digital landscape.

Diego Rosenfeld is a principal in RSM’s Boston office, serving as the national go-to-market leader for managed IT services (MITS) and a member of the RSM managed technology services leadership team. As go-to-market leader, Diego oversees MITS product strategy and regional and market-based client engagement teams. He works hand in hand with RSM industry teams to develop managed services that integrate our rich capabilities into scalable, industry-relevant offerings.
