Monday, May 27, 2013

Is Artificial Superintelligence Research Ethical?

Many philosophers, futurologists and artificial intelligence researchers have conjectured that in the next 20 to 200 years a machine capable of at least human-level performance on all tasks will be developed. Since such a machine would, among other things, be capable of designing the next generation of even smarter intelligent machines, it is generally assumed that an intelligence explosion will take place shortly after such a technological self-improvement cycle begins. While specific predictions regarding the consequences of such an intelligence singularity vary from potential economic hardship to the complete extinction of humankind, many of the involved researchers agree that the issue is of utmost importance and needs to be seriously addressed. Investigators concerned with the existential risks posed to humankind by the appearance of superintelligence often describe what can be called the Singularity Paradox as their main reason for thinking that humanity might be in danger. Briefly, the Singularity Paradox could be stated as: “Superintelligent machines are feared to be too dumb to possess commonsense.”
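To see why a self-improvement cycle is expected to run away, consider a toy model (the numbers below are arbitrary assumptions, not predictions): if each design cycle improves the machine in proportion to its current capability, growth compounds and quickly leaves the starting, human-designed level behind.

```python
# Toy model of a recursive self-improvement cycle. Illustrative only:
# the starting point, gain per cycle, and cycle count are made-up assumptions.

def intelligence_explosion(start=1.0, human_level=1.0, cycles=10, gain=0.5):
    """Each cycle, the machine redesigns itself; a smarter designer
    produces a bigger improvement, so capability compounds."""
    capability = start
    for cycle in range(1, cycles + 1):
        capability += gain * capability
        print(f"cycle {cycle:2d}: capability = {capability:8.2f} "
              f"({capability / human_level:6.1f}x human level)")

intelligence_explosion()
```

After ten cycles the toy system is already nearly 58 times its starting capability; linear human oversight has no obvious way to keep pace with that kind of compounding growth.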

The Singularity Paradox is easy to understand via examples. Suppose that scientists succeed in creating a superintelligent machine and order it to “make all people happy”. Complete happiness for humankind is certainly a noble and worthwhile goal, but perhaps we are not considering some unintended consequences of giving such an order. Any human immediately understands what is meant by this request; a non-exhaustive list may include making all people healthy, wealthy, beautiful and talented, and giving them loving relationships and novel entertainment. However, a superintelligent machine could derive many alternative ways of “making all people happy”. For example:

• A daily cocktail of cocaine, methamphetamine, methylphenidate, nicotine, and 3,4-methylenedioxymethamphetamine, better known as Ecstasy, may do the trick.
• Forced lobotomies for every man, woman and child might also accomplish the same goal.
• A simple observation that happy people tend to smile may lead to forced plastic surgeries to affix permanent smiles to all human faces.

An infinite number of other approaches to accomplishing universal human happiness could be derived. For a superintelligence the question is simply which one is fastest/cheapest (in terms of computational resources) to implement. Such a machine clearly lacks commonsense, hence the paradox. So is future Artificial Intelligence dangerous to humankind?
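The failure mode is easy to caricature in code. The sketch below is purely hypothetical (the candidate plans, their costs, and the goal test are invented for illustration): a planner that reads “make all people happy” literally ranks plans by cost alone, and nothing in its objective penalizes the perverse options.

```python
# Caricature of a literal-minded optimizer (all plans and costs are invented).
# The objective says nothing about commonsense, so the cheapest plan wins.

plans = {  # plan -> cost in computational resources
    "cure disease, end poverty, foster loving relationships": 10**9,
    "administer a daily euphoriant drug cocktail":            10**4,
    "perform universal lobotomies":                           10**3,
    "surgically affix permanent smiles":                      10**5,
}

def satisfies_goal(plan):
    """Literal reading of 'make all people happy': every plan above
    makes people report (or appear) happy, so all of them qualify."""
    return True

best_plan = min((p for p in plans if satisfies_goal(p)), key=plans.get)
print("Chosen plan:", best_plan)  # -> perform universal lobotomies
```

A human planner would reject three of these four options without a second thought; the optimizer, lacking any commonsense term in its objective, cannot.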

Certain types of research, such as human cloning, certain medical or psychological experiments on humans, animal (great ape) research, etc., are considered unethical because of their potential detrimental impact on the test subjects and so are either banned or restricted by law. Additionally, moratoriums exist on the development of dangerous technologies such as chemical, biological and nuclear weapons because of the devastating effects such technologies may exert on humankind. Similarly, I argue that certain types of artificial intelligence research fall under the category of dangerous technologies and should be restricted. Classical AI research, in which a computer is taught to automate human behavior in a particular domain such as mail sorting or spellchecking documents, is certainly ethical and does not present an existential risk to humanity. On the other hand, I argue that Artificial General Intelligence (AGI) research should be considered unethical. This follows logically from a number of observations. First, true AGIs will be capable of universal problem solving and recursive self-improvement. Consequently, they have the potential to outcompete humans in any domain, essentially making humankind unnecessary and so subject to extinction. Second, a true AGI system may possess a type of consciousness comparable to the human type, making robot suffering a real possibility and any experiments with AGI unethical for that reason as well.

If AGIs are allowed to develop, there will be direct competition between superintelligent machines and people. Eventually the machines will come to dominate because of their self-improvement capabilities. Alternatively, people may decide to cede power to the machines, since the machines are more capable and less likely to make an error. A similar argument was presented by Ted Kaczynski, aka the Unabomber, in his famous manifesto: “It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.”

To address this problem, the last decade has seen a boom of new subfields of computer science concerned with the development of ethics in machines. Machine ethics, computer ethics, robot ethics, ethicALife, machine morals, cyborg ethics, computational ethics, roboethics, robot rights, and artificial morals are just some of the proposals meant to address society’s concerns about the safety of ever more advanced machines. Unfortunately, the perceived abundance of research in intelligent machine safety is misleading. The great majority of published papers are purely philosophical in nature and do little more than reiterate the need for machine ethics and argue about which set of moral convictions would be the right one to implement in our artificial progeny: Kantian, Utilitarian, Jewish, etc. However, since ethical norms are not universal, a “correct” ethical code could never be selected over others to the satisfaction of humanity as a whole.

Consequently, because of the serious and unmitigated dangers of AGI, I propose that AI research review boards be set up, similar to those employed in the review of medical research proposals. A team of experts in artificial intelligence should evaluate each research proposal and decide whether it falls under standard, limited-domain AI or may potentially lead to the development of a full-blown AGI. Research potentially leading to uncontrolled artificial general intelligence should be restricted from receiving funding or be subject to complete or partial bans. An exception may be made for the development of safety measures and control mechanisms specifically aimed at AGI architectures.
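In its crudest form, such a review board’s triage might look like the sketch below; the checklist fields and categories are my own illustrative assumptions, not an established protocol.

```python
# Hypothetical sketch of the proposed review-board triage; the checklist
# fields and categories are illustrative assumptions, not a real protocol.

def triage(proposal):
    """Classify an AI research proposal as exempt, restricted or fundable."""
    if proposal.get("aims_at_agi_safety"):
        return "exempt: safety and control research on AGI architectures"
    if (proposal.get("universal_problem_solving") or
            proposal.get("recursive_self_improvement")):
        return "restricted: potential path to uncontrolled AGI"
    return "fundable: standard, limited-domain AI"

print(triage({}))                                     # mail sorting, spellchecking
print(triage({"recursive_self_improvement": True}))   # full-blown AGI project
print(triage({"recursive_self_improvement": True,
              "aims_at_agi_safety": True}))           # the safety exception
```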

With the survival of humanity on the line, the issues raised by the Singularity Paradox are too important to risk putting “all our eggs in one basket”. We should not limit our response to any one technique, or to an idea from any one scientist or group of scientists. A large research effort from the scientific community is needed to solve this issue of global importance. Even if there is a relatively small chance that a particular method will succeed in preventing an existential catastrophe, it should be explored as long as it is not likely to create significant additional dangers to the human race. After analyzing dozens of solutions from as many scientists, I have come to the conclusion that the search is just beginning. I am currently writing a book (Artificial Superintelligence: A Futuristic Approach) devoted to summarizing my findings about the state of the art in this new field of inquiry, and I hope that it will invigorate research into AGI safety.

In conclusion, we would do best to assume that AGI may present serious risks to humanity’s very existence and to proceed, or not to proceed, accordingly. Humanity should not put its future in the hands of the machines, since it will not be able to take the power back. In general, a machine should never be in a position to terminate human life or to make any other non-trivial ethical or moral judgment concerning people. A world run by machines will lead to unpredictable consequences for human culture, lifestyle and the overall probability of survival for humankind. The question raised by Bill Joy, “Will the future need us?”, is as important today as ever. “Whether we are to succeed or fail, to survive or fall victim to these technologies, is not yet decided.”


Dr. Roman Yampolskiy is an assistant professor in the Department of Computer Engineering and Computer Science at the University of Louisville. His recent research focuses on the technological singularity. In addition to his affiliation with Singularity University, Dr. Yampolskiy was a visiting fellow of the Singularity Institute and had his work published in the first academic book devoted to the study of the Singularity, “Singularity Hypotheses”, and in the first special issue of an academic journal devoted to that topic (Journal of Consciousness Studies).

Wednesday, May 22, 2013

Artificial Superintelligence: A Futuristic Approach

Crowdfunding campaign for my book, Artificial Superintelligence: A Futuristic Approach: http://igg.me/at/ASFA  Please like, tweet and share! Better yet, buy a book!

http://www.indiegogo.com/projects/artificial-superintelligence-a-futuristic-approach

Artificial Superintelligence: A Futuristic Approach

Introduction

Many philosophers, futurologists and artificial intelligence researchers have conjectured that in the next 20 to 200 years a machine capable of at least human-level performance on all tasks will be developed. Since such a machine would, among other things, be capable of designing the next generation of even smarter intelligent machines, it is generally assumed that an intelligence explosion will take place shortly after such a technological self-improvement cycle begins. While specific predictions regarding the consequences of such an intelligence singularity vary from potential economic hardship to the complete extinction of humankind, many of the involved researchers agree that the issue is of utmost importance and needs to be seriously addressed. This book, “Artificial Superintelligence: A Futuristic Approach”, will directly address this issue and consolidate research aimed at making sure that emerging superintelligence is beneficial to humanity.

Book Cover

Writing Sample: Leakproofing the Singularity

What others said about: Leakproofing the Singularity
“Yampolskiy’s excellent article gives a thorough analysis of issues pertaining to the “leakproof singularity”: confining an AI system, at least in the early stages, so that it cannot “escape”. It is especially interesting to see the antecedents of this issue in Lampson’s 1973 confinement problem in computer security. I do not have much to add to Yampolskiy’s analysis.”
David J. Chalmers, Professor of Philosophy, New York University
“This is great! I like the way you 
- introduce the state of the art in related security for ordinary computer systems 
- review the academic literature
- review the discussion-group posts which, though obscure, make innovative and essential points 
- enumerate possible failure scenarios, and suggest solutions 
- while pointing out clearly that all solutions can fail in the face of superintelligence.
This is exactly the sort of article the community needs.”
Joshua Fox, Research Associate at Singularity Institute 
“AI researcher Roman Yampolskiy’s article, ‘Leakproofing the Singularity: Artificial Intelligence Confinement Problem’, provides us with a detailed and well-reasoned analysis of … ways of externally constraining the AI design that might lead towards a singularity, especially constraining such AI to a virtual world from which it cannot leak into the real world.”
Uziel Awret, Editor of Special Issue on Singularity of Journal of Consciousness Studies
“The connection back to Lampson is very interesting and apt.”
Vernor Vinge, Hugo Award-winning author and Professor of Mathematics (retired)

Tentative List of Chapters:

1)     Introduction to Artificial Superintelligence.
2)     AI-Completeness – the Problem Domain of Superintelligent Machines.
3)     The Space of Mind Designs and the Human Mental Model.
4)     How to Prove that You Invented Superintelligence So No One Else Can Steal It.
5)     Wireheading, Addiction and Mental Illness in Machines.
6)     On the Limits of Recursively Self-Improving Artificially Intelligent Systems.
7)     Singularity Paradox and What to Do About It.
8)     Superintelligence Safety Engineering.
9)     Artificial Intelligence Confinement Problem (and Solution).
10)  Controlling Impact of Future Super AI.
Appendix
11)  Efficiency Theory: a Unifying Theory for Information, Computation and Intelligence.
12)  Unverifiability: Why Software Can’t Ever be Completely Bug Free.
13)  Artimetrics: Behavioral and Visual Identity Management of Artificial Agents.
14)  Wisdom of Artificial Crowds: Simulating Democracy and Intelligence of Crowds in Cyberspace.

Tentative Timeline:
January-May 2013: preparing fundraising campaign.
May – July 2013 (NOW): crowdfunding campaign and continuing research.
July – October 2013: writing of the book.
October-December 2013: re-writing, editing, revising, proofreading, formatting, finalizing cover design, publishing.
Early 2014: shipping!!!

About the Author

Dr. Roman Yampolskiy conducts research in Artificial Intelligence Safety and Technological Singularity. An alumnus of Singularity University (GSP2012) and a visiting fellow / research advisor of the Singularity Institute (MIRI), Dr. Yampolskiy has contributed papers to the first book on the Singularity (Singularity Hypotheses, Springer 2012), the first journal issue devoted to the Singularity (Journal of Consciousness Studies, 2012) and the first conference devoted to safe superintelligent systems (AGI Safety, 2012).
Roman V. Yampolskiy holds a PhD from the Department of Computer Science and Engineering at the University at Buffalo, where he was a recipient of a four-year NSF (National Science Foundation) IGERT (Integrative Graduate Education and Research Traineeship) fellowship. Before beginning his doctoral studies, Dr. Yampolskiy received a combined BS/MS degree (High Honors) in Computer Science from the Rochester Institute of Technology, NY, USA.
After completing his PhD dissertation, Dr. Yampolskiy held a position as an Affiliate Academic at the Center for Advanced Spatial Analysis, University College London. In 2008 Dr. Yampolskiy accepted an assistant professor position at the Speed School of Engineering, University of Louisville, KY. He had previously conducted research at the Laboratory for Applied Computing (currently known as the Center for Advancing the Study of Infrastructure) at the Rochester Institute of Technology and at the Center for Unified Biometrics and Sensors at the University at Buffalo.
Dr. Yampolskiy is the author of over 100 publications, including multiple journal articles and books. His research has been cited by numerous scientists and profiled in popular magazines both American and foreign (New Scientist, Poker Magazine, Science World Magazine), on dozens of websites (BBC, MSNBC, Yahoo! News) and on radio (German National Radio, Alex Jones Show). Reports about his work have attracted international attention and have been translated into many languages including Czech, Danish, Dutch, French, German, Hungarian, Italian, Polish, Romanian, and Spanish.

FAQ

How will the money be spent?
The money is mostly needed to pay for the publication, marketing and distribution costs, as well as the costs of editing and proofreading the book. Some funds will also go toward acquiring copyrighted materials, such as images for the cover, and toward ongoing research expenses. Additionally, Dr. Yampolskiy works under a contract with the University of Louisville which doesn’t cover June and July. A portion of the raised funds will be used to feed Dr. Yampolskiy during that time, as he will be very hungry from all that writing.
What has already been done?
Most of the research has been completed. You can never be done consulting with the experts, but I have already had a chance to exchange ideas with the world’s best scientists and philosophers. Some drafts have been written for individual chapters. Professionals have been recruited for cover design and proofreading.
Roman Yampolskiy with AI researchers and top scientists
Who designed this awesome book cover?
A friend and former classmate, Svetlana Dolinskiy, is responsible for the design of the front cover.
Is that Alex Jones interviewing you? Really?
Despite being well known for his short temper and some unorthodox opinions, Alex was extremely professional and respectful in conducting this interview. So unless you can get Piers Morgan to interview me, I will stick with my choice of video ;)
I found a spelling error; what should I do?
Call 911! No wait, just email me, and I will fix it and thank you.
Have you published any other books?
Yes, this would be my 7th book. You can find my other books for sale on Amazon.com.
Do you have links to the media coverage about your research?
Yes, you can find them linked from my homepage. Even before this funding campaign got started, I was fortunate to have a lot of interest in my research from the popular media.
Dr. Yampolskiy's research in popular media
I am with the media and I would like to interview you about your research, what is the best way to get in touch with you?
My email and phone numbers are listed on my homepage. roman.yampolskiy@louisville.edu
Do you have any other videos of you presenting your research?
Yes, I have a few available online: my talk at the AGI conference in Oxford, and my Ignite presentation at Singularity University.

Disclaimer

This project is undertaken as a personal initiative, and I am not acting as a representative of any organization, group, institution, company or future superintelligence. I (Roman Yampolskiy) am solely responsible for this fundraising campaign, including the proposed work, expressed ideas or opinions, and promised deliverables. Neither the University of Louisville nor any other organization or person should be held liable, assumed to share said opinions, ideas or goals, or assumed to bear any legal or financial responsibility to the campaign supporters.

http://www.indiegogo.com/projects/artificial-superintelligence-a-futuristic-approach