In this case, however, the response will need to be immediate: humans can still navigate a broken traffic light relatively well, but a driverless car will run a now “invisible” stop sign before the human passengers have a chance to intervene. This response plan may also require expanded partnerships and information-sharing agreements with other entities, such as the companies controlling the technology. Further, the response plan will require training and coordination so that officers are equipped to recognize that seemingly harmless graffiti or vandalism may actually be an attack, and then know to activate the appropriate response plan. There are other scenarios in which intrusion detection will be significantly more difficult. As previously discussed, many AI systems are being deployed on edge devices that can fall into an attacker’s hands.
Just as not all uses of AI are “good,” not all AI attacks are “bad.” While AI in a Western context is largely viewed as a positive force in society, in many other contexts it is employed to more nefarious ends. Countries like China and other oppressive regimes use AI as a way to track, control, and intimidate their citizens. As a result, “attacks” on these systems, from a US-based policy view of promoting human rights and free expression, would not be an “attack” in a negative sense of the word. Instead, these AI “attacks” would become a source of protection capable of promoting safety and freedom in the face of oppressive AI systems instituted by the state.
The proposed regulations were put forward as part of the Artificial Intelligence Act, which was first introduced in April 2021. Although no laws yet exist in the United States for regulating AI, there are an increasing number of guidelines and frameworks to help provide direction on how to develop so-called ethical AI. One of the most detailed was recently unveiled by the Government Accountability Office. Called the AI Accountability Framework for Federal Agencies, it provides guidance for agencies that are building, selecting, or implementing AI systems. Such systems can perform classification tasks as well as use historical data to make predictions about the future.
At the same time, my Administration will promote responsible uses of AI that protect consumers, raise the quality of goods and services, lower their prices, or expand selection and availability. As nations enact laws designed to protect data, IBM is focused on helping clients address their local requirements while continuing to drive innovation. For example, IBM Cloud offers a variety of capabilities and services that can help clients address the foundational pillars of sovereign cloud, including data sovereignty, operational sovereignty, and digital sovereignty. Effectively regulating the use of frontier AI, intervening as close as possible to the harm, can address many of the relevant challenges.
As a result, while it may have been necessary to make the balloons actually look like tanks to fool a human, only a few stray marks or subtle changes to a handful of pixels in an image are needed to fool an AI system. This vulnerability is due to inherent limitations in state-of-the-art AI methods that leave them open to a devastating set of attacks that are as insidious as they are dangerous. Whether it’s causing a car to careen through a red light, deceiving a drone searching for enemy activity on a reconnaissance mission, or subverting content filters to post terrorist recruiting propaganda on social networks, the danger is serious, widespread, and already here. This report seeks to give policymakers, politicians, industry leaders, and the cybersecurity community an understanding of this emerging problem, identify which areas of society are most immediately vulnerable, and set forth policies that can be adopted to find security in this important new era.
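Why can a few subtle pixel changes flip a model's prediction? A minimal sketch, using a hypothetical linear classifier with invented weights and a fast-gradient-sign-style perturbation, shows the mechanism: when many pixels each contribute a little evidence, an attacker who nudges every pixel slightly in the worst-case direction can swing the total score dramatically.

```python
# Toy illustration of why small per-pixel changes can flip an AI system's
# prediction. The "classifier" is a hypothetical linear model; its weights
# and the input image are invented for this sketch.

def predict(weights, pixels):
    """Classifier score: >= 0 means the model 'sees' a stop sign."""
    return sum(w * p for w, p in zip(weights, pixels))

def fgsm_perturb(weights, pixels, epsilon):
    """Fast-gradient-sign-style attack: move every pixel by at most epsilon,
    each in the direction that lowers the score. For a linear model the
    gradient with respect to each pixel is simply its weight."""
    return [p - epsilon * (1 if w > 0 else -1) for w, p in zip(weights, pixels)]

# 400 "pixels" whose positive and negative evidence nearly cancels out.
weights = [0.1 if i % 2 == 0 else -0.09 for i in range(400)]
pixels = [0.5] * 400  # pixel intensities on a 0..1 scale

clean_score = predict(weights, pixels)            # positive: stop sign detected
adv_pixels = fgsm_perturb(weights, pixels, 0.06)  # each pixel moves by only 0.06
adv_score = predict(weights, adv_pixels)          # negative: stop sign now "invisible"

print(clean_score, adv_score)
```

The per-pixel change is bounded by 0.06 on a 0-to-1 scale, yet the score swings from positive to negative; in high-dimensional images this effect is what lets near-imperceptible perturbations defeat real classifiers.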
But there could be dangers and downsides to AI as well, a fact that those who work with the technology are increasingly aware of. The results of a survey of over 600 software developers from across the public and private sectors, many of them tasked with working on projects involving AI, were released this week by Bitwarden and Propeller Insights. A full 78% of the survey respondents said that the use of generative AI would make security more challenging. In fact, 38% said that AI would become the top threat to cybersecurity over the next five years, which proved to be the most popular answer.
Together, we will navigate a changing regulatory landscape while building a future where AI is safe, secure, and trusted. Within 365 days, the Secretary of Commerce, through the Director of NIST, is tasked with creating guidelines for agencies to evaluate the efficacy of differential-privacy-guarantee protections, including those related to AI. The EO also addresses the importance of ensuring fair competition in AI markets: agency heads are tasked with using their authority to prevent anti-competitive practices. Additionally, the memorandum will direct actions to counter potential threats from adversaries and foreign actors using AI systems that may jeopardize U.S. security.
Our work with policymakers and standards organizations, such as NIST, contributes to evolving regulatory frameworks. We recently highlighted SAIF’s role in securing AI systems, aligning with White House AI commitments. The interagency council’s membership shall include, at minimum, the heads of the agencies identified in 31 U.S.C. 901(b), the Director of National Intelligence, and other agencies as identified by the Chair. Until agencies designate their permanent Chief AI Officers consistent with the guidance described in subsection 10.1(b) of this section, they shall be represented on the interagency council by an appropriate official at the Assistant Secretary level or equivalent, as determined by the head of each agency. (i) Propose regulations that require United States IaaS Providers to submit a report to the Secretary of Commerce when a foreign person transacts with that United States IaaS Provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity (a “training run”). Such reports shall include, at a minimum, the identity of the foreign person and the existence of any training run of an AI model meeting the criteria set forth in this section, or other criteria defined by the Secretary in regulations, as well as any additional information identified by the Secretary.
However, as with any other project, AI adoption poses challenges that the public sector must overcome. Governments can start with pilot projects while, at the same time, passing legislation that facilitates sustainable AI adoption in the long run. Microsoft has developed a tool named Cyber Signals, which actively tracks 140+ threat groups and 40+ nation-state actors across 20 countries.
One recent study found that the safety filters in one of Meta’s open-sourced models could be removed with less than $200 worth of technical resources. While we believe that open sourcing of non-frontier AI models is currently an important public good, open sourcing frontier AI models should be approached with great restraint. The capabilities of frontier AI models are not reliably predictable and are often difficult to fully understand even after intensive testing. It took nine months after GPT-3 was widely available to the research community before the effectiveness of chain-of-thought prompting—where the model is simply asked to “think step-by-step”—was discovered.
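The chain-of-thought effect described above comes entirely from how the prompt is phrased, which is part of why such capabilities go undiscovered for months. A minimal sketch, with an invented question and no particular model API assumed, showing that the two prompting styles differ by a single appended instruction:

```python
# Sketch of direct vs. chain-of-thought prompting. The question is invented;
# either string could be sent to any instruction-following language model.

question = "A warehouse holds 240 crates. 3 trucks each remove 35 crates. How many remain?"

# Direct prompting: ask for the answer immediately.
direct_prompt = f"Q: {question}\nA:"

# Chain-of-thought prompting: the only change is one added instruction
# asking the model to reason step by step before answering.
cot_prompt = f"Q: {question}\nA: Let's think step by step."

print(direct_prompt)
print(cot_prompt)
```

That a one-line wording change can meaningfully alter model behavior illustrates the broader point: a frontier model's full capability surface is not knowable from the weights alone.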
AI research is underway in the fields of intelligence collection and analysis, logistics, cyber operations, information operations, command and control, and in a variety of semiautonomous and autonomous vehicles. Already, AI has been incorporated into military operations in Iraq and Syria.
AI governance is needed in the current era of digital technologies for several reasons. Chief among them are ethical concerns: AI technologies have the potential to affect individuals and society in significant ways, such as through privacy violations, discrimination, and safety risks.
Machine learning can leverage large amounts of administrative data to improve the functioning of public administration, particularly in policy domains where the volume of tasks is large and data are abundant but human resources are constrained.
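As one concrete, entirely hypothetical illustration of learning from administrative records: a nearest-neighbor sketch that flags incoming permit applications for review by comparing them to historical cases. The features, values, and labels below are invented for the example.

```python
# Hypothetical sketch: triaging permit applications with a 1-nearest-neighbor
# rule over historical administrative records. All data here is invented.

historical = [
    # (days_pending, documents_missing) -> flagged for manual review?
    ((2, 0), False),
    ((30, 2), True),
    ((5, 1), False),
    ((45, 3), True),
]

def needs_review(features):
    """Label a new case with the outcome of its most similar historical case."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(historical, key=lambda record: sq_dist(record[0], features))
    return nearest[1]

print(needs_review((40, 2)))  # resembles the long-pending, flagged cases
print(needs_review((3, 0)))   # resembles the routine, unflagged cases
```

Even this trivial rule shows the appeal in high-volume domains: the prediction cost per case is negligible, so scarce staff time can be concentrated on the cases the model flags.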
Some of the key challenges regulators and companies will have to contend with include addressing ethical concerns (bias and discrimination), limiting misuse, managing data privacy and copyright protection, and ensuring the transparency and explainability of complex algorithms.