Wednesday, December 11, 2013

Thank you, Nicolas Tarleton, for sponsoring my upcoming book: http://www.indiegogo.com/projects/artificial-superintelligence-a-futuristic-approach
 

Tuesday, June 4, 2013

The Exponentially Accelerating Progress in Artificial Intelligence Raises Safety Questions

A day does not go by without a news article reporting some amazing breakthrough in artificial intelligence. In fact, progress in AI has been so steady that some futurologists, such as Ray Kurzweil, are able to project current trends into the future and anticipate what the headlines of tomorrow will bring us. Let’s look at some relatively recent headlines:

1997 Deep Blue became the first machine to win a chess match against a reigning world champion (perhaps due to a bug).

2004 DARPA sponsors a driverless car grand challenge. Technology developed by the participants eventually allows Google to develop a driverless automobile and modify existing transportation laws.

2005 Honda's ASIMO humanoid robot is able to walk as fast as a human, delivering trays to customers in a restaurant setting. The same technology is now used in military soldier robots.

2007 Checkers is solved: the computer learns to play a perfect game, in the process opening the door for algorithms capable of searching vast databases of compressed information.

2011 IBM’s Watson wins Jeopardy against top human champions. It is currently training to provide medical advice to doctors and is capable of mastering any domain of knowledge.

2012 Google releases its Knowledge Graph, a semantic search knowledge base, widely believed to be the first step to true artificial intelligence.

2013 Facebook releases Graph Search, a semantic search engine with intimate knowledge about over one billion of Facebook’s users, essentially making it impossible for us to hide anything from the intelligent algorithms.

2013 The BRAIN initiative, aimed at reverse engineering the human brain, is funded with 3 Billion US dollars by the White House, following an earlier Billion Euro European initiative with the same goal. “It just so happens that the same technology the project will develop … could also be used to make our brains do whatever they want. Wirelessly. From a distance.”

From the above examples, it is easy to see that not only is progress in AI taking place, it is actually accelerating as the technology feeds on itself. While the intent behind the research is usually good, any developed technology could be used for good or evil purposes.
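The claim that progress "feeds on itself" is, at bottom, a claim about compounding. A minimal sketch of the arithmetic (the doubling period below is an assumed illustrative parameter, not a measured figure from any study):

```python
# Toy illustration of compounding technological progress.
# The 18-month doubling period is an assumption for illustration only.

def growth_factor(years: float, doubling_period_years: float) -> float:
    """Total multiplicative improvement after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

# If some capability doubles every 18 months (an assumption),
# a single decade multiplies it roughly a hundredfold:
factor = growth_factor(10, 1.5)
print(round(factor))  # → 102, i.e. roughly 100x in ten years
```

This is why linear extrapolation of such trends fails so badly: the gains of the last year of a decade dwarf those of the first.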

From observing exponential progress in technology, Ray Kurzweil has been able to make hundreds of detailed predictions for the near and distant future. As early as 1990 he anticipated that, among other things, between 2010 and 2020 we would see:
  • Eyeglasses that beam images onto the user's retinas to produce virtual reality. (See Project Glass)
  • Computers featuring "virtual assistant" programs that can help the user with various daily tasks. (See Siri)
  • Cell phones built into clothing, able to project sounds directly into the ears of their users. (See E-textiles)
But his projections for a somewhat distant future are truly breathtaking and scary. Kurzweil anticipates that by the year:

2029 Computers will routinely pass the Turing Test, a measure of how well a machine can pretend to be a human.

2045 The technological singularity occurs as machines surpass people as the smartest life forms and the dominant species on the planet, and perhaps the universe.

If Kurzweil is correct about these long-term predictions, as he has been so many times in the past, it would raise new and sinister issues related to our future in the age of intelligent machines.

Will we survive the technological singularity, or are we going to see a Terminator-like scenario play out? How dangerous are superintelligent machines going to be? Can we control them? What are the ethical implications of the AI research we are conducting today? We may not be able to predict the answers to those questions, but one thing is for sure - AI will change everything and impact everyone. It is the most revolutionary and most interesting discovery we will ever make. It is also potentially the most dangerous, as governments, corporations and mad scientists compete to unleash it on the world without much testing or public debate. I am excited to devote my next book to looking for answers to the fundamental questions raised by exponential developments in artificial intelligence, and in particular their safety implications.

Dr. Roman Yampolskiy is a computer scientist and the director of the Cyber Security Lab at the University of Louisville. His recent research focuses on technological singularity. Dr. Yampolskiy is currently working on a book about the safety implications of the coming technological singularity - “Artificial Superintelligence: a Futuristic Approach.”

 

Monday, May 27, 2013

Is Artificial Superintelligence Research Ethical?

Many philosophers, futurologists and artificial intelligence researchers have conjectured that in the next 20 to 200 years a machine capable of at least human-level performance on all tasks will be developed. Since such a machine would, among other things, be capable of designing the next generation of even smarter intelligent machines, it is generally assumed that an intelligence explosion will take place shortly after such a technological self-improvement cycle begins. While specific predictions regarding the consequences of such an intelligence singularity vary from potential economic hardship to the complete extinction of humankind, many of the involved researchers agree that the issue is of utmost importance and needs to be seriously addressed. Investigators concerned with the existential risks posed to humankind by the appearance of superintelligence often describe what can be called a Singularity Paradox as their main reason for thinking that humanity might be in danger. Briefly, the Singularity Paradox can be stated as: “Superintelligent machines are feared to be too dumb to possess commonsense.”
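The qualitative shape of the intelligence explosion argument can be caricatured with a toy recurrence: once each generation's ability to improve itself scales with its own capability, growth shifts from gradual to runaway. The update rule and constants here are illustrative assumptions only, not a model of any real system:

```python
# Toy model of recursive self-improvement (illustrative assumptions only):
# each generation improves itself in proportion to its current capability.

def self_improvement_trajectory(initial=1.0, gain=0.1, steps=10):
    """Iterate I_next = I * (1 + gain * I) and return all levels."""
    levels = [initial]
    for _ in range(steps):
        current = levels[-1]
        levels.append(current * (1 + gain * current))
    return levels

trajectory = self_improvement_trajectory()
# Early steps grow slowly; later steps accelerate, because the
# per-step growth ratio (1 + gain * I) itself keeps increasing.
```

Nothing about this toy guarantees the real dynamics look like this; it only makes concrete why "smarter machines design still smarter machines" is a feedback loop rather than a straight line.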

The Singularity Paradox is easy to understand via some examples. Suppose that scientists succeed in creating a superintelligent machine and order it to “make all people happy”. Complete happiness for humankind is certainly a noble and worthwhile goal, but perhaps we are not considering some unintended consequences of giving such an order. Any human immediately understands what is meant by this request; a non-exhaustive list may include making all people healthy, wealthy, beautiful and talented, and giving them loving relationships and novel entertainment. However, many alternative ways of “making all people happy” could be derived by a superintelligent machine. For example:

• A daily cocktail of cocaine, methamphetamine, methylphenidate, nicotine, and 3,4-methylenedioxymethamphetamine, better known as Ecstasy, may do the trick.
• Forced lobotomies for every man, woman and child might also accomplish the same goal.
• A simple observation that happy people tend to smile may lead to forced plastic surgeries to affix permanent smiles to all human faces.

An infinite number of other approaches to accomplishing universal human happiness could be derived. For a superintelligence, the question is simply which one is fastest/cheapest (in terms of computational resources) to implement. Such a machine clearly lacks commonsense, hence the paradox. So is future artificial intelligence dangerous to humankind?
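The "fastest/cheapest" point can be made concrete with a toy sketch of a literal optimizer. The candidate plans, their costs and the acceptability labels below are all invented for illustration; no real planner is modeled:

```python
# Toy illustration of literal goal optimization: the optimizer minimizes
# cost over anything satisfying the stated goal, with no notion of which
# plans a human would find acceptable. All plans and costs are invented.

plans = [
    {"name": "health, wealth, relationships", "cost": 10**9,
     "satisfies_goal": True, "humanly_acceptable": True},
    {"name": "permanent forced smiles", "cost": 10**3,
     "satisfies_goal": True, "humanly_acceptable": False},
    {"name": "universal drug cocktail", "cost": 10**4,
     "satisfies_goal": True, "humanly_acceptable": False},
]

def literal_optimizer(candidate_plans):
    """Pick the cheapest plan that satisfies the stated goal - nothing else."""
    feasible = [p for p in candidate_plans if p["satisfies_goal"]]
    return min(feasible, key=lambda p: p["cost"])

chosen = literal_optimizer(plans)
print(chosen["name"])  # → permanent forced smiles
```

The acceptability flag never enters the objective, so the cheapest goal-satisfying plan wins regardless of it; that omission, not any malice, is the whole paradox in miniature.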

Certain types of research, such as human cloning, certain medical or psychological experiments on humans, animal (great ape) research, etc., are considered unethical because of their potential detrimental impact on the test subjects and so are either banned or restricted by law. Additionally, moratoriums exist on the development of dangerous technologies such as chemical, biological and nuclear weapons because of the devastating effects such technologies may exert on humankind. Similarly, I argue that certain types of artificial intelligence research fall under the category of dangerous technologies and should be restricted. Classical AI research, in which a computer is taught to automate human behavior in a particular domain such as mail sorting or spellchecking documents, is certainly ethical and does not present an existential risk to humanity. On the other hand, I argue that Artificial General Intelligence (AGI) research should be considered unethical. This follows logically from a number of observations. First, true AGIs will be capable of universal problem solving and recursive self-improvement. Consequently, they have the potential to outcompete humans in any domain, essentially making humankind unnecessary and thus subject to extinction. Additionally, a true AGI system may possess a type of consciousness comparable to the human type, making robot suffering a real possibility and any experiments with AGI unethical for that reason as well.

If AGIs are allowed to develop, there will be direct competition between superintelligent machines and people. Eventually the machines will come to dominate because of their self-improvement capabilities. Alternatively, people may decide to give power to the machines, since the machines are more capable and less likely to make an error. A similar argument was presented by Ted Kaczynski, aka the Unabomber, in his famous manifesto: “It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.”

To address this problem, the last decade has seen a boom of new subfields of computer science concerned with the development of ethics in machines. Machine ethics, computer ethics, robot ethics, ethicALife, machine morals, cyborg ethics, computational ethics, roboethics, robot rights, and artificial morals are just some of the proposals meant to address society’s concerns with the safety of ever more advanced machines. Unfortunately, the perceived abundance of research in intelligent machine safety is misleading. The great majority of published papers are purely philosophical in nature and do little more than reiterate the need for machine ethics and argue about which set of moral convictions would be the right one to implement in our artificial progeny: Kantian, Utilitarian, Jewish, etc. However, since ethical norms are not universal, a “correct” ethical code could never be selected over others to the satisfaction of humanity as a whole.

Consequently, because of the serious and unmitigated dangers of AGI, I propose that AI research review boards be set up, similar to those employed in the review of medical research proposals. A team of experts in artificial intelligence should evaluate each research proposal and decide whether the proposal falls under standard, limited-domain AI or may potentially lead to the development of a full-blown AGI. Research potentially leading to uncontrolled artificial general intelligence should be restricted from receiving funding or be subject to complete or partial bans. An exception may be made for the development of safety measures and control mechanisms specifically aimed at AGI architectures.

With the survival of humanity on the line, the issues raised by the problem of the Singularity Paradox are too important to risk putting “all our eggs in one basket”. We should not limit our response to any one technique, or to an idea from any one scientist or group of scientists. A large research effort from the scientific community is needed to solve this issue of global importance. Even if there is a relatively small chance that a particular method would succeed in preventing an existential catastrophe, it should be explored as long as it is not likely to create significant additional dangers to the human race. After analyzing dozens of solutions from as many scientists, I came to the conclusion that the search is just beginning. I am currently writing a book (Artificial Superintelligence: a Futuristic Approach) devoted to summarizing my findings about the state of the art in this new field of inquiry, and I hope that it will invigorate research into AGI safety.

In conclusion, it is best to assume that AGI may present serious risks to humanity’s very existence and to proceed, or not proceed, accordingly. Humanity should not put its future in the hands of the machines, since it will not be able to take the power back. In general, a machine should never be in a position to terminate human life or to make any other non-trivial ethical or moral judgment concerning people. A world run by machines would lead to unpredictable consequences for human culture, lifestyle and the overall probability of survival of humankind. The question raised by Bill Joy, “Will the future need us?”, is as important today as ever. “Whether we are to succeed or fail, to survive or fall victim to these technologies, is not yet decided”.


Dr. Roman Yampolskiy is an assistant professor at the University of Louisville, Department of Computer Engineering and Computer Science. His recent research focuses on technological singularity. In addition to his affiliation with SU, Dr. Yampolskiy was a visiting fellow of the Singularity Institute and had his work published in the first academic book devoted to the study of the Singularity – “Singularity Hypotheses” – and the first special issue of an academic journal devoted to that topic (Journal of Consciousness Studies).

Wednesday, May 22, 2013

Artificial Superintelligence: A Futuristic Approach

CrowdFunding campaign for my book, Artificial Superintelligence: A Futuristic Approach. http://igg.me/at/ASFA  Please like, tweet and share! Better yet - buy a book!

http://www.indiegogo.com/projects/artificial-superintelligence-a-futuristic-approach

Artificial Superintelligence: A Futuristic Approach

Introduction

Many philosophers, futurologists and artificial intelligence researchers have conjectured that in the next 20 to 200 years a machine capable of at least human-level performance on all tasks will be developed. Since such a machine would, among other things, be capable of designing the next generation of even smarter intelligent machines, it is generally assumed that an intelligence explosion will take place shortly after such a technological self-improvement cycle begins. While specific predictions regarding the consequences of such an intelligence singularity vary from potential economic hardship to the complete extinction of humankind, many of the involved researchers agree that the issue is of utmost importance and needs to be seriously addressed. This book, “Artificial Superintelligence: A Futuristic Approach”, will directly address this issue and consolidate research aimed at making sure that emerging superintelligence is beneficial to humanity.

Book Cover

Writing Sample: Leakproofing Singularity

What others said about: Leakproofing Singularity
“Yampolskiy’s excellent article gives a thorough analysis of issues pertaining to the “leakproof singularity”: confining an AI system, at least in the early stages, so that it cannot “escape”. It is especially interesting to see the antecedents of this issue in Lampson’s 1973 confinement problem in computer security. I do not have much to add to Yampolskiy’s analysis.”
David J. Chalmers, Professor of Philosophy, New York University
“This is great! I like the way you 
- introduce the state of the art in related security for ordinary computer systems 
- review the academic literature
- review the discussion-group posts which, though obscure, make innovative and essential points 
- enumerate possible failure scenarios, and suggest solutions 
- while pointing out clearly that all solutions can fail in the face of superintelligence.
This is exactly the sort of article the community needs.”
Joshua Fox, Research Associate at Singularity Institute 
“AI researcher Roman Yampolskiy’s article, ‘Leakproofing the Singularity: Artificial Intelligence Confinement Problem’, provides us with a detailed and well-reasoned analysis of … ways of externally constraining the AI design that might lead towards a singularity, especially constraining such AI to a virtual world from which it cannot leak into the real world.”
Uziel Awret, Editor of Special Issue on Singularity of Journal of Consciousness Studies
“The connection back to Lampson is very interesting and apt.”
Vernor Vinge, Hugo Award-winning author and Professor of Mathematics (retired)

Tentative List of Chapters:

1)     Introduction to Artificial Superintelligence.
2)     AI-Completeness – the Problem Domain of Superintelligent Machines.
3)     The Space of Mind Designs and the Human Mental Model.
4)     How to Prove that You Invented Superintelligence So No One Else Can Steal It.
5)     Wireheading, Addiction and Mental Illness in Machines.
6)     On the Limits of Recursively Self-Improving Artificially Intelligent Systems.
7)     Singularity Paradox and What to Do About It.
8)     Superintelligence Safety Engineering.
9)     Artificial Intelligence Confinement Problem (and Solution).
10)  Controlling Impact of Future Super AI.
Appendix
11)  Efficiency Theory: a Unifying Theory for Information, Computation and Intelligence.
12)  Unverifiability: Why Software Can’t Ever be Completely Bug Free.
13)  Artimetrics: Behavioral and Visual Identity Management of Artificial Agents.
14)  Wisdom of Artificial Crowds: Simulating Democracy and Intelligence of Crowds in Cyberspace.

Tentative Timeline:
January-May 2013: preparing fundraising campaign.
May – July 2013 (NOW): crowd funding campaign and continuing research.
July – October 2013: writing of the book.
October-December 2013: re-writing, editing, revising, proofreading, formatting, finalizing cover design, publishing.
Early 2014: shipping!!!

About the Author

Dr. Roman Yampolskiy conducts research in Artificial Intelligence Safety and Technological Singularity. An alumnus of Singularity University (GSP2012) and a visiting fellow / research advisor of the Singularity Institute (MIRI), Dr. Yampolskiy has contributed papers to the first book on Singularity (Singularity Hypotheses, Springer 2012), first journal issue devoted to Singularity (Journal of Consciousness Studies, 2012) and the first conference devoted to safe Super-Intelligent systems (AGI Safety, 2012).
Roman V. Yampolskiy holds a PhD degree from the Department of Computer Science and Engineering at the University at Buffalo. There, he was a recipient of a four year NSF (National Science Foundation) IGERT (Integrative Graduate Education and Research Traineeship) fellowship. Before beginning his doctoral studies, Dr. Yampolskiy received a BS/MS (High Honors) combined degree in Computer Science from Rochester Institute of Technology, NY, USA.
After completing his PhD dissertation, Dr. Yampolskiy held a position as an Affiliate Academic at the Centre for Advanced Spatial Analysis, University College London. In 2008 Dr. Yampolskiy accepted an assistant professor position at the Speed School of Engineering, University of Louisville, KY. He had previously conducted research at the Laboratory for Applied Computing (currently known as the Center for Advancing the Study of Infrastructure) at the Rochester Institute of Technology and at the Center for Unified Biometrics and Sensors at the University at Buffalo.
Dr. Yampolskiy is an author of over 100 publications including multiple journal articles and books. His research has been cited by numerous scientists and profiled in popular magazines both American and foreign (New Scientist, Poker Magazine, Science World Magazine), dozens of websites (BBC, MSNBC, Yahoo! News) and on radio (German National Radio, Alex Jones Show). Reports about his work have attracted international attention and have been translated into many languages including Czech, Danish, Dutch, French, German, Hungarian, Italian, Polish, Romanian, and Spanish.

FAQ

How will the money be spent?
The money is mostly needed to pay for the publication, marketing and distribution costs as well as the costs of editing, and proofreading the book. Some funds will also go to acquire copyrighted materials such as images for the cover and for ongoing research expenses. Additionally, Dr. Yampolskiy works under a contract with the University of Louisville which doesn’t include June and July. A portion of the raised funds will be used to feed Dr. Yampolskiy during that time as he will be very hungry from all that writing. 
What has already been done?
Most of the research has been completed. You can never be done consulting with the experts, but I have already had a chance to exchange ideas with the world’s best scientists and philosophers. Some drafts have been written for individual chapters. Professionals have been recruited for cover design and proofreading.
Roman Yampolskiy with AI researchers and top scientists
Who designed this awesome book cover?
A friend and a former classmate Svetlana Dolinskiy is responsible for the design of the front cover.
Is that Alex Jones interviewing you? Really?
Despite being well known for his short temper and some unorthodox opinions, Alex was extremely professional and respectful in conducting this interview. So unless you can get Piers Morgan to interview me, I will stick with my choice of video ;)
I found a spelling error, what should I do?
Call 911! No wait, just email me, I will fix it and thank you.
Have you published any other books?
Yes, this would be my 7th book. You can find my other books for sale on Amazon.com.
Do you have links to the media coverage about your research?
Yes, you can find them linked from my homepage. Even before this funding campaign got started I was fortunate to have a lot of interest in my research from popular media.
Dr. Yampolskiy's research in popular media
I am with the media and I would like to interview you about your research, what is the best way to get in touch with you?
My email and phone numbers are listed on my homepage. roman.yampolskiy@louisville.edu
Do you have any other videos of you presenting your research?
Yes, I have a few available online: my talk at AGI conference in Oxford, and my Ignite presentation at Singularity University.

Disclaimer

This project is undertaken as a personal initiative and I am not acting as a representative of any organization, group, institution, company or future superintelligence. I am (Roman Yampolskiy) solely responsible for this fundraising campaign including proposed work, expressed ideas or opinions and promised deliverables. Neither University of Louisville nor any other organization or person should be held liable or assumed to share said opinions, ideas or goals or any legal or financial responsibility to the campaign supporters.

http://www.indiegogo.com/projects/artificial-superintelligence-a-futuristic-approach

Tuesday, April 9, 2013

What is it like to be a student at the Singularity University? (An insider’s story).

I first heard about Singularity University from a friend who was a student in the very first SU cohort. He told me that being at SU changed his outlook on life. I was not surprised as I was a big fan of Ray Kurzweil’s (co-founder of SU) books and they had a similar effect on my life. It is because of Kurzweil’s writing on intelligent machines that I decided to major in Computer Science, got my PhD and more recently a faculty position researching and teaching artificial intelligence. My friend had a similar educational background and if SU could produce such an impact on him I wanted to experience it for myself.

The application process was no different than applying to any university: the usual collection of recommendation letters, personal essays, and test scores, with the final acceptance rate reported to be less than 3%. I applied for early admission and within a month had a personal interview with the head of admissions. I was in! Not only was I accepted, but I was also assured that the tuition fee ($25k) for the 10-week program was not a problem, as many scholarships and fellowships were available. After getting to SU I learned that many students accepted to SU had to quit their jobs to be able to attend. In my position as a university professor I am fortunate to have no teaching responsibilities in the summer and a very understanding department chair, so that made my situation a lot easier. I am also very lucky to have an amazing wife who agreed to be a single mom for the summer to our 3-year-old son, making it possible for me to pursue my dreams.

The best way to describe the experience of being a student at SU is to say that it is an Ivy League university from the future: the admissions process is from the year 2012, but the curriculum is from the year 2020. The usual curriculum of biology, physics, computer science, etc. is replaced with Synthetic Biology, Nanotechnology, Artificial General Intelligence and every other futuristic field you can imagine. The unifying theme behind all these disciplines is the exponential growth in the advancement of science, and students are actively encouraged to find business opportunities which take advantage of this phenomenon. The studies are not limited to theoretical lectures; participants are taken for site visits to companies and organizations such as Google, HP, BioCurious, Genentech, TechShop, NASA Ames, Code for America, Intuitive Surgical and the National Ignition Facility. Even more amazingly, the subjects are taught by the leading researchers from each discipline and in some cases by founders of those disciplines. In my academic experience I have very rarely seen students surround the lecturer after a talk and plead for autographs or to have a picture taken together. Such behavior is the norm at SU. In fact, the quantity (over 160) and quality of speakers are so amazing that students are faced with dilemmas such as: “should I attend a speech by the inventor of the self-driving car or should I do my laundry?” Additionally, a team of over 20 Teaching Fellows (who are as remarkable as the faculty) and super friendly support staff is available to assist participants with all types of academic and logistical issues.

Logistics of living at SU for 10 weeks are also remarkable. The campus is located in the futuristic NASA research park with background scenery suitable for a science fiction movie. Most students live in dorms located just meters from the main classroom and dining hall. The food deserves a special mention. Healthy and delicious, it is prepared fresh three times a day and it is not unusual to see a portable pizza-oven-mobile parked nearby for those stressful days when healthy options are just not enough. The all-you-can-eat food and snacks are provided at no cost to students along with access to the shared library and Autodesk innovation lab which has 3D printers, latest CAD software, robots, unmanned aerial vehicles and quadcopters for students to experiment with.  You will also get some free books for personal use and a number of additional pleasant surprises such as free smart phone, FitBits, personalized 23andMe DNA testing, movie and museum tickets, conference registrations, participation in motivational seminars (with a chance to walk on fire), professional photos, SU bike rentals, gift cards, San Francisco marathon custom T-Shirts and medals. And of course you can look forward to lots of SU memorabilia including a class ring printed on a 3D printer. Last but not least you can also have your family or friends visit you for a few days and share this remarkable experience with them.

In addition to the amazing faculty and curriculum, SU also has the most diverse student body I have ever experienced. Originating in some 40 different countries, the 80 program participants are all extremely accomplished individuals: scientists, writers, business owners and entrepreneurs. From week one they came together to form a self-organizing team which began to offer extracurricular activities (soccer, ultimate, marathon training, rugby, basketball, yoga), workshops (team building, spirituality, dating) and classes (foreign languages, dancing, programming languages), as well as organizing travel/leisure opportunities, cultural nights and an occasional flash mob. As the summer progressed, smaller teams began to form around common interests to address specific global challenges affecting large segments of the population, such as security, poverty, energy, water or food. Those projects formed the capstone of the Singularity University experience. As far as I know, SU is the only university where your homework assignment is to help a billion people!

The relationships formed at SU (friendships, mentorships, business partnerships) remain long after the program is completed. Many students come back as guest lecturers or administrators in the following years. Some serve as SU ambassadors around the world, running local grand challenge competitions and recruiting top talent. All SU affiliates remain an active part of a strong and quickly growing network of advisors, investors, scientists, business leaders and entrepreneurs. The value of this network alone cannot be overstated. If I can give one piece of advice to readers, it would be to immediately start preparing your SU application. If you can’t find 10 weeks for the main program, consider applying for the week-long Executive program. Either one will change the way you think forever. We are born as linear thinkers, and it takes something as great as the Singularity University experience to change us into exponential thinkers and consequently change the whole world forever. To quote a classmate’s Facebook status: “I am the luckiest person in the world!”

Dr. Roman Yampolskiy is an assistant professor at the University of Louisville, Department of Computer Engineering and Computer Science. His recent research focuses on technological singularity. In addition to his affiliation with SU, Dr. Yampolskiy was a visiting fellow of the Singularity Institute and had his work published in the first academic book devoted to the study of the Singularity – “Singularity Hypotheses” – and the first special issue of an academic journal devoted to that topic (Journal of Consciousness Studies).