CEOs Consider Biden’s AI Executive Order

December 20, 2023

Much has been written about Artificial Intelligence – how it is transforming business processes, improving worker productivity, increasing efficiency, addressing unexpected disruptions, and potentially wreaking havoc in the hands of cyber criminals. Whether you welcome AI as a revolutionary technology, fear its seemingly infinite powers, or fall somewhere in between, many agree that some level of AI regulation should be implemented. President Biden agrees, and in October he issued a “sweeping” Executive Order designed to make AI safer and more accountable. CEOs are paying attention to the new AI Executive Order and considering measures to accommodate its mandates.

AI Executive Order Mandates

The AI Executive Order directs many government agencies to develop AI regulations in their respective areas during 2024. The Order also encourages open development of AI technology innovations, including those focused on new AI security tools.

Covered in the Executive Order are topics such as:

  • How AI-generated code can be exploited and what measures organizations can take to reduce their risks,
  • The creation of a registry of AI models,
  • What events would trigger requirements presented in the Executive Order,
  • Large foundational AI models,
  • AI safety,
  • Civil liberties and algorithmic bias,
  • How the Executive Order will affect AI-generated code,
  • How the Executive Order’s bioweapons provisions will affect organizations,
  • Cybersecurity and AI tools, and
  • What organizations already using open source AI applications and models should do differently as a result of the AI Executive Order.

Reaction to What’s Not Discussed in the AI Executive Order

Many organizations, and particularly their technology teams, are concerned about the Executive Order’s generally ambiguous language and, moreover, about what it does not cover. For example, it says nothing about how the Order will apply to open source AI models, nor does it indicate whether open source projects, such as those creating new foundational models, are exempt. It also does not clearly explain whether every model requiring more than a certain amount of computational resources must report its results, or whether that requirement applies only to models vulnerable to abusive and/or fraudulent activity.

Take a Proactive Approach

Keep in mind that the Executive Order is not a law; however, it is likely to serve as the initial framework for legislation governing the responsible use of AI. For this reason, executives and their technology leaders (CIOs, CTOs, CSOs, etc.) should begin proactively evaluating their use of AI in the context of cybersecurity, privacy, and the protection of proprietary data, and should begin implementing measures to mitigate risk and ensure the responsible use of AI.