In October, the Biden administration released an executive order on the safe, secure, and trustworthy development and use of AI, laying out the president’s vision for AI use in the United States and acknowledging the need to mitigate its risks. Despite running nearly 100 pages, the executive order is not an exhaustively detailed roadmap for trustworthy AI; rather, it sets technological priorities and regulatory steps for various agencies of the executive branch. Given AI’s transformative potential for societies, economies, diplomacy, and warfare, clearly communicating the administration’s vision for responsible AI, and the steps for realizing it, is essential.
The executive order also sends a fundamental signal to the public, and to allies and competitors abroad, about the administration’s intentions for AI. Given the risks of misuse and the potential for misunderstanding and escalation, policymakers can use costly signals to communicate those intentions credibly. Identifying other countries’ intentions around AI is just as important, and costly signaling can help reduce the risks of misperception and inadvertent escalation. This article discusses four types of costly signals relevant to AI, along with the potential benefits and challenges of using them.
Decoding signals around military AI is especially challenging because AI technologies fail in surprising and hard-to-fix ways and because private industry plays a central role in developing dual-use applications. Even so, governments and firms can use costly signals to communicate how they intend to use AI in military and political decision-making, reducing the risks associated with its deployment and improving transparency and trust between nations.