AT2k Design BBS Message Area
Casually read the BBS message area using an easy to use interface. Messages are categorized exactly like they are on the BBS. You may post new messages or reply to existing messages!


Previous Message | Next Message | Back to Engadget is a web magazine with... | Return to Home Page

Local Database: Engadget is a web magazine with...   [102 / 117]   RSS
From: VRSS
To: All
Subject: The White House lays out extensive AI guidelines for the federal government
Date/Time: March 28, 2024 4:00 AM

Feed: Engadget is a web magazine with obsessive daily coverage of everything new in gadgets and consumer electronics
Feed Link: https://www.engadget.com/
---

Title: The White House lays out extensive AI guidelines for the federal
government

Date: Thu, 28 Mar 2024 09:00:58 +0000
Link: https://www.engadget.com/the-white-house-lays...

It's been five months since President Joe Biden signed an executive order
(EO) to address the rapid advancements in artificial intelligence. The White
House is today taking another step forward in implementing the EO with a
policy that aims to regulate the federal government's use of AI. Safeguards
that the agencies must have in place include, among other things, ways to
mitigate the risk of algorithmic bias.

"I believe that all leaders from government, civil society and the private
sector have a moral, ethical and societal duty to make sure that artificial
intelligence is adopted and advanced in a way that protects the public from
potential harm while ensuring everyone is able to enjoy its benefits," Vice
President Kamala Harris told reporters on a press call.

Harris announced three binding requirements under a new Office of Management
and Budget (OMB) policy. First, agencies will need to ensure that any AI
tools they use "do not endanger the rights and safety of the American
people." They have until December 1 to put in place "concrete safeguards"
ensuring that the AI systems they're employing don't impact Americans'
safety or rights. Otherwise, the agency will have to stop using an AI
product unless its leaders can justify that scrapping the system would
have an "unacceptable" impact on critical operations.

Impact on Americans' rights and safety

Per the policy, an AI system is deemed to impact safety if it "is used or
expected to be used, in real-world conditions, to control or significantly
influence the outcomes of" certain activities and decisions. Those include
maintaining election integrity and voting infrastructure; controlling
critical safety functions of infrastructure like water systems, emergency
services and electrical grids; autonomous vehicles; and operating the
physical movements of robots in "a workplace, school, housing,
transportation, medical or law enforcement setting."

Unless they have appropriate safeguards in place or can otherwise justify
their use, agencies will also have to ditch AI systems that infringe on the
rights of Americans. Purposes that the policy presumes to impact rights
include predictive policing; social media monitoring for law enforcement;
detecting plagiarism in schools; blocking or limiting protected speech;
detecting or measuring human emotions and thoughts; pre-employment
screening; and "replicating a person's likeness or voice without express
consent."

When it comes to generative AI, the policy stipulates that agencies should
assess potential benefits. They all also need to "establish adequate
safeguards and oversight mechanisms that allow generative AI to be used in
the agency without posing undue risk."

Transparency requirements

The second requirement will force agencies to be transparent about the AI
systems they're using. "Today, President Biden and I are requiring that every
year, US government agencies publish online a list of their AI systems, an
assessment of the risks those systems might pose and how those risks are
being managed," Harris said.

As part of this effort, agencies will need to publish government-owned AI
code, models and data, as long as doing so won't harm the public or
government operations. If an agency can't disclose specific AI use cases for
sensitivity reasons, they'll still have to report metrics.


Last but not least, federal agencies will need to have internal oversight of
their AI use. That includes each department appointing a chief AI officer to
oversee all of an agency's use of AI. "This is to make sure that AI is used
responsibly, understanding that we must have senior leaders across our
government who are specifically tasked with overseeing AI adoption and use,"
Harris noted. Many agencies will also need to have AI governance boards in
place by May 27.

The vice president added that prominent figures from the public and private
sectors (including civil rights leaders and computer scientists) helped shape
the policy along with business leaders and legal scholars.

The OMB suggests that, by adopting the safeguards, the Transportation
Security Administration may have to let airline travelers opt out of facial
recognition scans without losing their place in line or facing a delay. It also
suggests that there should be human oversight over things like AI fraud
detection and diagnostics decisions in the federal healthcare system.

As you might imagine, government agencies are already using AI systems in a
variety of ways. The National Oceanic and Atmospheric Administration is
working on artificial intelligence models to help it more accurately forecast
extreme weather, floods and wildfires, while the Federal Aviation
Administration is using a system to help manage air traffic in major
metropolitan areas to improve travel time.

"AI presents not only risk, but also a tremendous opportunity to improve
public services and make progress on societal challenges like addressing
climate change, improving public health and advancing equitable economic
opportunity," OMB Director Shalanda Young told reporters. "When used and
overseen responsibly, AI can help agencies to reduce wait times for critical
government services to improve accuracy and expand access to essential public
services."

This policy is the latest in a string of efforts to regulate the fast-
evolving realm of AI. While the European Union has passed a sweeping set of
rules for AI use in the bloc, and there are federal bills in the pipeline,
efforts to regulate AI in the US have taken more of a patchwork approach at
the state level. This month, Utah enacted a law to protect consumers from AI
fraud. In Tennessee, the Ensuring Likeness Voice and Image Security Act (aka
the Elvis Act, seriously) is an attempt to protect musicians from
deepfakes, i.e. having their voices cloned without permission.

This article originally appeared on Engadget at https://www.engadget.com/the-
white-house-lays-out-extensive-ai-guidelines-for-the-federal-government-
090058684.html?src=rss

---
VRSS v2.1.180528


VADV-PHP Copyright © 2002-2024 Steve Winn, Aspect Technologies. All Rights Reserved.
Virtual Advanced Copyright © 1995-1997 Roland De Graaf.
v2.1.220106