MESSAGE
DATE | 2016-11-06
FROM | Ruben Safir
SUBJECT | [Hangout-NYLXS] AI and Ethics - look who came to protect us all
http://www.nytimes.com/2016/09/02/technology/artificial-intelligence-ethics.html
SAN FRANCISCO — For years, science-fiction moviemakers have been making
us fear the bad things that artificially intelligent machines might do
to their human creators. But for the next decade or two, our biggest
concern is more likely to be that robots will take away our jobs or bump
into us on the highway.
Now five of the world’s largest tech companies are trying to create a
standard of ethics around the creation of artificial intelligence. While
science fiction has focused on the existential threat of A.I. to humans,
researchers at Google’s parent company, Alphabet, and those from Amazon,
Facebook, IBM and Microsoft have been meeting to discuss more tangible
issues, such as the impact of A.I. on jobs, transportation and even warfare.
Tech companies have long overpromised what artificially intelligent
machines can do. In recent years, however, the A.I. field has made rapid
advances in a range of areas, from self-driving cars and machines that
understand speech, like Amazon’s Echo device, to a new generation of
weapons systems that threaten to automate combat.
The specifics of what the industry group will do or say — even its name
— have yet to be hashed out. But the basic intention is clear: to ensure
that A.I. research is focused on benefiting people, not hurting them,
according to four people involved in the creation of the industry
partnership who are not authorized to speak about it publicly.
The importance of the industry effort is underscored in a report issued
on Thursday by a Stanford University group funded by Eric Horvitz, a
Microsoft researcher who is one of the executives in the industry
discussions. The Stanford project, called the One Hundred Year Study on
Artificial Intelligence, lays out a plan to produce a detailed report on
the impact of A.I. on society every five years for the next century.
One main concern for people in the tech industry is that regulators could
jump in to create rules around their A.I. work. So they are trying to
create a framework for a self-policing organization, though it is not
clear yet how that will function.
“We’re not saying that there should be no regulation,” said Peter Stone,
a computer scientist at the University of Texas at Austin and one of the
authors of the Stanford report. “We’re saying that there is a right way
and a wrong way.”
While the tech industry is known for being competitive, there have been
instances when companies have worked together when it was in their best
interests. In the 1990s, for example, tech companies agreed on a
standard method for encrypting e-commerce transactions, laying the
groundwork for two decades of growth in internet business.
The authors of the Stanford report, which is titled “Artificial
Intelligence and Life in 2030,” argue that it will be impossible to
regulate A.I. “The study panel’s consensus is that attempts to regulate
A.I. in general would be misguided, since there is no clear definition
of A.I. (it isn’t any one thing), and the risks and considerations are
very different in different domains,” the report says.
[Photo: From left, Jeff Bezos of Amazon, Virginia Rometty of IBM, Satya
Nadella of Microsoft, Sundar Pichai of Google, and Mark Zuckerberg of
Facebook. Credit: Eric Risberg/Associated Press]
One recommendation in the report is to raise the awareness of and
expertise about artificial intelligence at all levels of government, Dr.
Stone said. It also calls for increased public and private spending on A.I.
“There is a role for government and we respect that,” said David Kenny,
general manager for IBM’s Watson artificial intelligence division. The
challenge, he said, is “a lot of times policies lag the technologies.”
A memorandum is being circulated among the five companies with a
tentative plan to announce the new organization in the middle of
September. One of the unresolved issues is that Google DeepMind, an
Alphabet subsidiary, has asked to participate separately, according to a
person involved in the negotiations.
The A.I. industry group is modeled on a similar human rights effort
known as the Global Network Initiative, in which corporations and
nongovernmental organizations are focused on freedom of expression and
privacy rights, according to someone briefed by the industry organizers
but not authorized to speak about it publicly.
Separately, Reid Hoffman, a founder of LinkedIn who has a background in
artificial intelligence, is in discussions with the Massachusetts
Institute of Technology Media Lab to fund a project exploring the social
and economic effects of artificial intelligence.
Both the M.I.T. effort and the industry partnership are trying to link
technology advances more closely to social and economic policy issues.
The M.I.T. group has been discussing the idea of designing new A.I. and
robotic systems with “society in the loop.”
The phrase is a reference to the long-running debate about designing
computer and robotic systems that still require interaction with humans.
For example, the Pentagon has recently begun articulating a military
strategy that calls for using A.I. in which humans continue to control
killing decisions, rather than delegating that responsibility to machines.
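As a rough illustration of that human-in-the-loop idea, here is a minimal
Python sketch (every name in it is hypothetical, invented for illustration;
it is not any real system's API): an automated policy may propose actions,
but anything irreversible is held until a person explicitly approves it.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    confidence: float   # the model's confidence in its own recommendation
    irreversible: bool  # anything that cannot be undone once executed

def automated_policy(sensor_reading: float) -> ProposedAction:
    """Stand-in for a model that recommends an action from input data."""
    if sensor_reading > 0.9:
        return ProposedAction("engage countermeasure", sensor_reading, True)
    return ProposedAction("log and continue monitoring", sensor_reading, False)

def human_approves(action: ProposedAction) -> bool:
    """The human gate: a person reviews the proposal and confirms it."""
    answer = input(f"Approve '{action.description}' "
                   f"(confidence {action.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: ProposedAction) -> None:
    print(f"Executing: {action.description}")

if __name__ == "__main__":
    proposal = automated_policy(sensor_reading=0.95)
    # Reversible, low-stakes actions may proceed automatically;
    # irreversible ones always require a human decision.
    if not proposal.irreversible or human_approves(proposal):
        execute(proposal)
    else:
        print("Action vetoed by human operator; nothing executed.")

The point of the pattern is exactly the one the Pentagon strategy makes:
the machine recommends, but responsibility for consequential decisions
stays with a person.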
“The key thing that I would point out is computer scientists have not
been good at interacting with the social scientists and the
philosophers,” said Joichi Ito, the director of the M.I.T. Media Lab and a
member of the board of directors of The New York Times. “What we want to
do is support and reinforce the social scientists who are doing research
which will play a role in setting policies.”
The Stanford report attempts to define the issues that citizens of a
typical North American city will face as computers and robotic systems
mimic human capabilities. The authors explore eight aspects of
modern life, including health care, education, entertainment and
employment, but specifically do not look at the issue of warfare. They
said that military A.I. applications were outside their current scope
and expertise, but they did not rule out focusing on weapons in the future.
The report also does not consider the belief of some computer
specialists about the possibility of a “singularity” that might lead to
machines that are more intelligent and possibly threaten humans.
“It was a conscious decision not to give credence to this in the
report,” Dr. Stone said.
--
So many immigrant groups have swept through our town
that Brooklyn, like Atlantis, reaches mythological
proportions in the mind of the world - RI Safir 1998
http://www.mrbrklyn.com
DRM is THEFT - We are the STAKEHOLDERS - RI Safir 2002
http://www.nylxs.com - Leadership Development in Free Software
http://www2.mrbrklyn.com/resources - Unpublished Archive
http://www.coinhangout.com - coins!
http://www.brooklyn-living.com
Being so tracked is for FARM ANIMALS and extermination camps,
but incompatible with living as a free human being. -RI Safir 2013
_______________________________________________
hangout mailing list
hangout-at-nylxs.com
http://www.nylxs.com/