As the golden age of artificial intelligence advances, many are left wondering whether it could lead to a nightmare scenario like that of Skynet in the movie The Terminator, in which machines decide of their own volition to unleash destruction upon mankind. Just as the technological advancements of the industrial era made workers in many industries leery of machines rendering their labor obsolete, advancements in artificial intelligence are now leading people to question the safety of mankind itself. Utilizing these technologies of the future will require extensive research by unaffiliated parties to ensure that they perpetually work in the interests of mankind, both through regulation and through mandating “kill switches”: mechanisms for easily disabling artificial intelligence should things go awry.
In 2012, Nick Bostrom, professor at Oxford University and director of the Future of Humanity Institute, and Vincent Müller developed a questionnaire and sent it to the world’s leading experts in artificial intelligence, asking when they believed artificial intelligence would cross over into artificial superintelligence (ASI): the ability for machines to think, learn, and reason at a vastly greater rate than humans. Half of the respondents estimated that superintelligence, or “human-level machine intelligence,” would be developed between 2040 and 2050, and nine in ten estimated it would be achieved by 2075. When the AI researchers were asked to assign probabilities to the overall long-run impact of ASI on humanity, the mean values were 24% “extremely good,” 28% “good,” 17% “neutral,” 13% “bad,” and 18% “extremely bad” (existential catastrophe) (Müller and Bostrom 2014).
There is no shortage of Hollywood plotlines about what can happen when artificial superintelligence goes awry, and many great thinkers of our time and tech industry giants are urging caution. Tesla Motors and SpaceX founder and CEO Elon Musk has described the threat as potentially more dangerous than nuclear bombs (Musk). Famed theoretical physicist Stephen Hawking said, “The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not know which” (Hawking, Stephen).
Hawking went on to explain that the real threat of AI is not that it will act out of malice, but out of great competence. He used the following analogy to put the risk of advanced AI into further perspective: “You’re probably not an evil ant-hater who steps on ants out of malice, but if you are in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.” (Hawking, Stephen)
The general consensus in the scientific and technological communities is that artificial intelligence must continue to be developed to advance humankind. However, there is a strong degree of apprehension among industry leaders about the development of advanced artificial intelligence, which they believe could be a double-edged sword, bringing sweeping economic and social changes that lead to unrest.
The key to exploring ASI is for nations to thoroughly fund research into its effects on humankind, to formulate regulations based on that research to narrow the scope of how it can be used, and to require a human override of all ASI to ensure that it perpetually operates within its parameters and toward the advancement of civilization. The real philosophical lessons of ASI may have less to do with humans teaching machines how to think than with machines teaching humans how to think at a depth previously thought impossible. ASI offers many great benefits to humankind, and potentially greater threats if not utilized properly.
Works Cited
Müller, Vincent C., and Nick Bostrom. “Future Progress in Artificial Intelligence: A Survey of Expert Opinion.” Synthese Library, 2014, pp. 9-13.
Musk, Elon. “The State of Innovation.” Interview by Walter Isaacson. Vanity Fair, Apr. 2016, https://www.vanityfair.com/news/2017/03/elon-musk-billion-dollar-crusade-to-stop-ai-space-x. Accessed 24 Nov. 2017.
Hawking, Stephen. “Implications of AI for Human Civilization.” Keynote Address, Center for the Future of Intelligence Conference, 16 Oct. 2016, University of Cambridge, Cambridge, UK.
