The Benefits of the Creation of IEEE Standards Concerning Ethical Artificial Intelligence

Overview
The purpose of this report is to inform the IEEE standards group of the benefits of
creating standards for ethical artificial intelligence. Artificial intelligence, though a critical
technology of the future, is subject to no existing government regulations controlling its creation
or use. Further, the government is aware of the problems artificial intelligence poses, but does not
plan to design regulations around general intelligence, choosing instead to focus on specific
problems such as security, privacy, and safety (Agrawal et al.). This approach is short-sighted, as
many of the major concerns regarding artificial intelligence stem from the fact that it will
eventually have human-like intelligence. This report explores the idea of creating standards for
general artificial intelligence, and specifically for its ethics.
The IEEE standards group has a history of making recommendations on best-practice
engineering principles (IEEE-SA), and as such is the best place to develop standards in this area,
given that the government is unwilling to do so. This document presents to the group why this
project is important to pursue now rather than leaving it as a problem for the future. It will
explain why artificial intelligence raises ethical problems, describe how creating standards would
address them, acknowledge the drawbacks of creating standards, and give examples of general
artificial intelligence guidelines created by other groups.

Background
One potentially critical application of artificial intelligence is training a system to search
a database of criminals and detect suspicious activity in camera footage. Stopping terrorist
attacks in their tracks or finding criminals on the run would become much easier, simply because
the total number of "eyes" watching would increase dramatically without increasing the number
of humans paid to watch.
Unfortunately, this kind of surveillance comes at a cost. Software that identifies humans
by their facial features in a picture is inaccurate, and it is notably less accurate for dark-skinned
individuals and women (Hao). This raises the question: is it possible that someone is being
tagged as suspicious purely because of how they look rather than because of what they are
doing? The ethical concerns regarding this issue are numerous: is it acceptable for the world to
be in a surveillance state, where the authorities are watching for crimes at all times; does the
technology carry inherent bias; and is it harmful to the population?
The problems raised by artificial intelligence surveillance make clear that some standards
are necessary, since no government regulations control the creation of artificial intelligence.
When companies, researchers, and programmers decide for themselves how their artificial
intelligence will make decisions, an ethically flawed technology can become a mainstream one
(Davies). In his article "Program Good Ethics into Artificial Intelligence", Davies also argues
that the first superintelligent artificial intelligence will be difficult to control, so it is necessary to
build good ethics into it; this, too, is a situation that standards could remedy. Google's creation of
a company code of ethical principles for artificial intelligence (Hao) is well-intentioned, but a
company governing itself on ethical principles is not the best solution to this problem, as other
concerns such as financial growth may take precedence over ethics. Ethical standards could
address this situation as well.
A related problem is that the creators of an artificial intelligence may not even be aware
that it is ethically flawed, having unintentionally introduced bias into the technology based on
their own knowledge and surroundings. The chance of this going unnoticed by co-workers, who
generally have similar backgrounds, is high. Standards could rectify this by prompting artificial
intelligence researchers to think about their work from a different perspective, or otherwise to
reflect more on what they have created.

Creating Guidelines
The creation of ethical guidelines for artificial intelligence is one concrete way to address
these problems. A set of guidelines can act as an industry standard that all workers in the
artificial intelligence field strive to follow without any oversight. Guidelines will not be hard-and-fast
rules that artificial intelligence developers must follow; they will be guiding principles to give
them a sense of what they should do in morally uncertain situations. The goal will be for
programmers to read these guidelines and have them in mind when developing artificial
intelligence solutions.
This report suggests creating recommended guidelines for those working in the artificial
intelligence field. These guidelines would reflect the "current state-of-the-art in the application
of engineering principles", a type of project that the IEEE Standards Association currently
supports (IEEE-SA). IEEE is the best place to create these standards because of its well-respected
position in industry and academia and its rigorous process for creating standards.
A set of guidelines is not a perfect solution to the problem of adding ethics to artificial
intelligence, but it is a simple step that can make a difference right now. The logistics of
overseeing every organization creating artificial intelligence would be too great a challenge to
overcome: oversight would require extensive funding and would face the problem that artificial
intelligence researchers may not be willing to open their work to outside review.
To help create the guidelines, it is critical that artificial intelligence researchers
participate in the conversation. Artificial intelligence researcher Sabine Hauert suggests that
experts need to join and shape the conversation around artificial intelligence (Nature). Having
experts engage in the creation of guidelines is critical because they would be the most affected
by them. Similarly, researcher Jim Davies suggests that the larger artificial intelligence
community needs to work together on the first superintelligent artificial intelligence to ensure
that it is benevolent (Davies). Here, too, it is critical that the community be part of the solution.

Benefits of Creating Guidelines
With this project, the primary goal is to make sure that any artificial intelligence that gets
created follows an ethical code. This is a very difficult problem to solve, as there is no way to
simply oversee and review all artificial intelligence technologies, since they are likely the
property of private companies or universities. According to artificial intelligence researcher Jim
Davies, the most important part of developing the first superintelligent technology is to make
sure it has "goals, values and ethical codes" (Davies). Standards can contribute by ensuring that
developers have these goals and values in mind when creating the technology.
Artificial intelligence researcher Stuart Russell discusses the problem of artificial
intelligence technologies becoming lethal autonomous weapons systems (LAWS). He says,
"LAWS could violate fundamental principles of human dignity by allowing machines to choose
whom to kill" (Nature). This is an ethical problem that guidelines could address, for example by
stating that no artificial intelligence technology may be designed to harm humans. Russell also
asserts that creating LAWS would be possible by combining the artificial intelligence and
robotics technology of today (Nature). Given this, it is clear that artificial intelligence guidelines
need to be created today.
Artificial intelligence researcher Russ Altman notes that artificial intelligence has the
potential to "accelerate scientific discovery in biology and medicine, and to transform health
care" (Nature). That artificial intelligence can be used beneficially is another reason to create
guidelines: it is important to make clear that it should only be used to benefit humans and the
world. One example Altman considers beneficial is artificial intelligence that can
be used to more accurately simulate a "virtual cohort of patients", such that a system could
diagnose humans, give them a treatment plan, and weigh the different possible outcomes of each
treatment. Altman also states that he is concerned that artificial intelligence technologies could
be expensive and create larger disparities in American healthcare. This problem could be
addressed by a standard stating that artificial intelligence technologies must benefit all people
fairly.

Challenges of Creating Guidelines
There are several challenges and problems with creating guidelines relating to ethics. In
general, there is the problem that the biases of the people creating the guidelines may shape
their content. This problem is mitigated by the IEEE standards process, a main reason why
IEEE would be the best place to develop these standards. The first step in the process involves
creating a Working Group to write the standards; this group would have numerous contributors
from the artificial intelligence industry along with standards experts from outside it. During the
writing and editing of the guidelines, any extreme biases would likely become apparent. Further,
the public review process and the Standards Review Committee would also help mitigate bias.
Another concern with this project is that standards may have no effect on the people who
read them. A study from North Carolina State University suggested that having engineers read a
code of ethics had no effect on their actions (Hao). Ultimately, this is a problem for all standards:
enforcement is difficult, and anyone can choose to ignore them. However, as with all previous
standards, it is still worthwhile to develop them; if they change the behavior of even a small
subset of people, that is preferable to not developing the standards at all.

Example Guidelines
Many people have proposed sets of guidelines to control artificial intelligence
technologies. As the IEEE standards for ethical artificial intelligence are being developed, it is
necessary to research and draw ideas from these guidelines.
One group that has created artificial intelligence guidelines is Google. CEO Sundar
Pichai published a document containing guidelines that all artificial intelligence technologies
created by the company should follow. The first guideline that Pichai suggests is to be sure that
all artificial intelligence technologies are socially beneficial. He says, "Advances in AI will have
transformative impacts in a wide range of fields, including healthcare, security, energy,
transportation, manufacturing, and entertainment" (Pichai). All of these potential uses for
artificial intelligence are areas where technological advancement would benefit everyone
involved. In the form of a broad ethical standard, it could be said that artificial intelligence
technologies shall be clearly beneficial to all.
Another guideline that Pichai suggests is that artificial intelligence should be accountable
to people. The goal is to make sure artificial intelligence will always "respond to human
direction and control" (Pichai). A standard could help prevent the unlikely possibility of an
artificial intelligence technology becoming more powerful than it was intended to be. In the
form of an ethical standard: artificial intelligence shall prioritize human needs over its own.
The Future of Life Institute created a list of artificial intelligence principles developed by
several hundred experts in the field. They broke their principles down into three areas: research
issues, ethical values, and long-term issues. For the first, the guidelines suggest that any
intelligence created should be "beneficial intelligence" and not "undirected intelligence", and
that there should be "no corner-cutting on safety standards" (Future of Life Institute). These
guidelines can be adopted into standards to ensure that the conditions under which artificial
intelligence is developed will produce an ethical technology. In terms of ethical values, the
Future of Life Institute suggests that "Designers and builders of advanced AI systems are
stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and
opportunity to shape those implications" (Future of Life Institute). This is a way to make
artificial intelligence researchers accountable for the actions of their technology; introducing a
guideline of this nature would make researchers weigh their choices more heavily when building
the technology. Finally, in terms of long-term issues, the Future of Life Institute suggests that
since artificial intelligence can have such a large effect on human life, it "should be planned for
and managed with commensurate care and resources" (Future of Life Institute). This suggests a
guideline that artificial intelligence technologies shall be built with care and with regard for the
humans who may use them in the future. Overall, the Future of Life Institute is a valuable
resource to consider when creating the IEEE standard for ethical artificial intelligence, as its
principles reflect the opinions of many researchers and scientists working in the field.
Artificial intelligence will be an integral part of many mainstream technologies in the
future. One potential standard concerns technologies that artificial intelligence should not be
part of. Weapons are one example: robots deciding to harm humans is a situation that can be
avoided by a standard stating that artificial intelligence cannot be part of any technology that
harms humans. A less extreme example is an artificial intelligence continuously consuming the
world's resources: Jim Davies suggests this could be a problem if a technology sought a certain
goal that required a resource and let nothing get in its way, harming humanity by failing to
consider the problems with its goal (Davies). Pichai also lists several artificial intelligence
applications that Google will not pursue, including technologies that harm or injure people,
technologies that invade privacy, and technologies that violate human rights. As part of a
standard concerning ethical artificial intelligence, it is necessary to consider which applications
of artificial intelligence are unethical and should not be pursued by anyone.

Conclusion
Creating standards for artificial intelligence ethics is a necessary step to take as more
critical artificial intelligence technologies are developed. Currently, the government has limited
plans to regulate artificial intelligence, none of which include controlling general artificial
intelligence policy. The IEEE standards group, which creates engineering standards in this
industry, should develop ethical standards for artificial intelligence. These standards are
imperative to make sure artificial intelligence technologies handle all situations in an acceptable
way. Potential standards could ensure that robots do not harm the human race and that the
technology benefits all people fairly. As the technology of the future, artificial intelligence
should follow societal ethical principles; one way to achieve this is to create standards.
Works Cited
“IEEE-SA - Types & Nature Of Projects.” IEEE-SA - The IEEE Standards Association - Home,
IEEE, standards.ieee.org/develop/projtype.html.

Hao, Karen. “Establishing an AI Code of Ethics Will Be Harder than People Think.” MIT
Technology Review, MIT Technology Review, 21 Oct. 2018,
www.technologyreview.com/s/612318/establishing-an-ai-code-of-ethics-will-be-harder-than-people-think/.
Accessed 8 Feb. 2019.

"Ethics of artificial intelligence: four leading researchers share their concerns and solutions for
reducing societal risks from intelligent machines." Nature, vol. 521, no. 7553, 2015, p. 415+.
Expanded Academic ASAP,
http://link.galegroup.com.ezproxy.neu.edu/apps/doc/A415563136/EAIM?u=mlin_b_northest&si
d=EAIM&xid=8ca4ff34. Accessed 7 Feb. 2019.

Smith, Rob. “5 Core Principles to Keep AI Ethical.” World Economic Forum, 19 Apr. 2018,
www.weforum.org/agenda/2018/04/keep-calm-and-make-ai-ethical/. Accessed 8 Feb. 2019.

Davies, Jim. “Program Good Ethics into Artificial Intelligence.” Nature, 2016,
doi:10.1038/538291a.

Agrawal, Ajay, et al. “The Obama Administration's Roadmap for AI Policy.” Harvard Business
Review, Harvard Business Publishing, 21 Sept. 2017,
hbr.org/2016/12/the-obama-administrations-roadmap-for-ai-policy.

Pichai, Sundar. “AI at Google: Our Principles.” Google, Google, 7 June 2018,
blog.google/technology/ai/ai-principles/.

“AI Principles.” Future of Life Institute, Future of Life Institute, 2017,
futureoflife.org/ai-principles/.