The White House knows the risks of AI being used by federal agencies. Here’s how they’re handling it.
New requirements from the White House will address the risks of AI systems that federal agencies use and that affect Americans every day. That includes government bodies like the Transportation Security Administration and federal healthcare programs.
On Thursday, Vice President Kamala Harris announced a sweeping policy from the Office of Management and Budget that requires all federal agencies to safeguard against AI harms, be transparent about their use of AI, and hire AI experts. The policy builds on President Joe Biden's executive order on AI from last October, along with initiatives Harris outlined at the Global Summit on AI Safety in the UK.
“I believe that all leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm, while ensuring everyone is able to enjoy its full benefit,” said Harris in a briefing. The statement underscored the White House’s vision that AI should be used to advance the public interest.
That means laying out strict ground rules for how federal agencies use AI and how they disclose it to the public.
Safeguards against AI discrimination
The requirement that will most directly affect Americans is the mandate to implement safeguards against "algorithmic discrimination." The OMB will require agencies to "assess, test, and monitor" any harms caused by AI. For example, travelers will be able to opt out of the TSA's use of facial recognition technology, which has been shown to be less accurate for people with darker skin.
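To make the "assess, test, and monitor" requirement concrete, here is a minimal sketch of what a demographic accuracy audit for a face-matching system could look like. The group labels, sample results, and disparity tolerance are all hypothetical illustrations, not anything specified by the OMB policy.

```python
# Minimal sketch of an "assess, test, and monitor" bias audit for a
# face-matching system. Groups, data, and the 5-point disparity
# tolerance are hypothetical illustrations, not OMB requirements.
from collections import defaultdict

def audit_accuracy_by_group(results):
    """results: list of (group, was_match_correct) tuples."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, ok in results:
        totals[group] += 1
        correct[group] += ok
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical evaluation results from a labeled test set.
results = [
    ("lighter_skin", True), ("lighter_skin", True), ("lighter_skin", True),
    ("lighter_skin", True), ("darker_skin", True), ("darker_skin", False),
    ("darker_skin", True), ("darker_skin", False),
]

rates = audit_accuracy_by_group(results)
gap = max(rates.values()) - min(rates.values())
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.0%} accurate")
if gap > 0.05:  # hypothetical tolerance for accuracy disparity
    print(f"Disparity of {gap:.0%} exceeds tolerance -- flag for review")
```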
For federal healthcare systems like Medicaid and Medicare, a human is required to oversee applications of AI such as diagnostics, data analysis, and medical device software.
The OMB policy also highlights AI used to detect fraud, which has helped the U.S. Department of the Treasury recover $325 million from check fraud, and requires human oversight when such technology is used. The policy goes on to say that if an agency can't provide adequate safeguards, it has to stop using the AI immediately.
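As a rough sketch of what human oversight of fraud-detection AI can mean in practice, consider a triage step where the model only escalates suspicious items to a person rather than acting on them itself. The risk scores and threshold below are hypothetical placeholders, not the Treasury's actual system.

```python
# Sketch of human-in-the-loop oversight for automated fraud detection:
# the model only flags; a person makes the final call. The risk model
# and 0.8 threshold are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Check:
    check_id: str
    amount: float
    risk_score: float  # assumed to come from some upstream model

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def triage(self, check, threshold=0.8):
        # The AI never denies a check on its own -- it only escalates.
        if check.risk_score >= threshold:
            self.pending.append(check)
            return "escalated_to_human_reviewer"
        return "processed_normally"

queue = ReviewQueue()
print(queue.triage(Check("chk-001", 250.00, 0.12)))   # processed_normally
print(queue.triage(Check("chk-002", 9800.00, 0.93)))  # escalated_to_human_reviewer
print(f"{len(queue.pending)} check(s) awaiting human review")
```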
Transparency reports to hold agencies accountable
Less impactful for Americans on a day-to-day basis, but equally important: the OMB also requires federal agencies to publicly post inventories of the AI they use and how they are "addressing relevant risks." To standardize these inventories and keep the reports accountable, the OMB provides detailed instructions on what to include.
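The OMB's instructions define the real reporting format; purely as a hypothetical illustration, a machine-readable entry in such an inventory might capture fields like these.

```python
# Purely hypothetical shape for one AI use-case inventory entry;
# the real reporting schema is defined in OMB's instructions.
inventory_entry = {
    "agency": "TSA",
    "use_case": "Facial recognition for traveler identity verification",
    "rights_impacting": True,            # triggers the stricter safeguards
    "risk_mitigations": [
        "Traveler opt-out available",
        "Accuracy assessed, tested, and monitored across demographic groups",
    ],
    "human_oversight": "Officer confirms identity on mismatch or opt-out",
}

# A trivial completeness check before publishing the inventory.
required = {"agency", "use_case", "rights_impacting", "risk_mitigations"}
missing = required - inventory_entry.keys()
print("ready to publish" if not missing else f"missing fields: {missing}")
```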
The White House is hiring
Working with AI and doing due diligence on it is going to be a lot of work for the government, which is why it's scaling up hiring. The OMB policy will require every federal agency to designate a "Chief AI Officer." A senior administration official said it's up to the individual agencies to determine whether the Chief AI Officer is a political appointee or not.
The White House wants to grow the AI workforce even further by committing to hiring 100 "AI professionals" through a national talent search. So if you know a lot about AI and have a passion for working in government, you can attend a career fair on April 18 or visit the Administration's AI.gov website for employment info.
Trying not to stifle innovation
Lest the e/accs get too riled up, the policy also makes an effort to foster innovation and development by (responsibly) encouraging the use of AI. For instance, under the new policy, the Federal Emergency Management Agency (FEMA) is meant to use AI to improve forecasting of environmental disasters, and the Centers for Disease Control and Prevention (CDC) will use machine learning to better predict the spread of disease.
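As a toy illustration of the kind of forecasting the CDC example gestures at (and not the agency's actual models), here's a one-variable least-squares fit that predicts next week's case count from this week's.

```python
# Toy illustration of machine-learning-style disease forecasting: fit
# next_week = a * this_week + b by least squares on past case counts.
# The data and model are hypothetical, not the CDC's actual approach.

weekly_cases = [120, 150, 190, 240, 300, 380]  # hypothetical counts

# Build (x, y) pairs: this week's count -> next week's count.
xs = weekly_cases[:-1]
ys = weekly_cases[1:]
n = len(xs)

# Closed-form ordinary least squares for a single predictor.
mean_x = sum(xs) / n
mean_y = sum(ys) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
den = sum((x - mean_x) ** 2 for x in xs)
a = num / den
b = mean_y - a * mean_x

forecast = a * weekly_cases[-1] + b
print(f"model: next = {a:.2f} * current + {b:.1f}")
print(f"forecast for next week: {forecast:.0f} cases")
```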
Overall, the OMB policy covers a lot of ground in its aim to create more accountability, transparency, and protections for the public.