
International Agreement Formed to Prioritize Security in Artificial Intelligence Development

The United States, along with Britain and over a dozen other countries, announced a groundbreaking international agreement aimed at ensuring the safety of artificial intelligence (AI) systems. The agreement emphasizes the importance of creating AI systems that are “secure by design” and provides recommendations for companies involved in AI development and deployment.

In a comprehensive 20-page document released on Sunday, the participating countries agreed that companies should prioritize the safety of customers and the public when designing and using AI.

While the agreement is non-binding, it outlines crucial guidelines, including the need to monitor AI systems for potential abuse, protect data from tampering, and vet software suppliers.

Jen Easterly, the director of the U.S. Cybersecurity and Infrastructure Security Agency, highlighted the significance of multiple countries coming together to emphasize the security aspects of AI systems.

She stated, “This is the first time that we have seen an affirmation that these capabilities should not just be about cool features…but about security from the design phase.”

This agreement is part of a series of efforts by governments worldwide to influence the development of AI. While few of these initiatives carry legal weight, they reflect the growing recognition of the impact of AI on various industries and society as a whole.

In addition to the United States and Britain, countries such as Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore have also signed the agreement.

The framework addresses concerns related to the potential hijacking of AI technology by hackers and recommends procedures such as conducting security testing before releasing AI models. However, it does not delve into complex issues surrounding the appropriate uses of AI or the collection of data used to train AI models.

The increasing prevalence of AI has raised significant concerns regarding its potential misuse, including disruptions to the democratic process, heightened fraud risks, and significant job losses. European countries have taken the lead in AI regulations, with lawmakers drafting rules to govern its development.

France, Germany, and Italy have recently reached an agreement supporting “mandatory self-regulation through codes of conduct” for foundation AI models with broad applications.

While the Biden administration has been pushing for AI regulation in the United States, progress on effective legislation has been slow amid political polarization. In October, the White House issued a new executive order aimed at addressing AI risks, protecting consumers, workers, and minority groups, and enhancing national security.

This international agreement represents a significant step towards ensuring that AI systems prioritize security and safety, setting a precedent for future collaborations in the global AI community.


Malik is a writer covering news and current events, offering detailed and insightful perspectives on the latest developments.