Is humanity really doomed by the growing ubiquity of artificial intelligence? According to several AI believers, humans should not fear artificial intelligence but should instead look at its positive impact on the way people manage their lives.
Despite artificial intelligence's growing benefits and its use in addressing some of society's major problems, some experts remain wary of its existential risks to mankind. This article offers a glimpse of AI's terrifying and astonishing future through the eyes of some of the world's smartest minds: scientists, philosophers and entrepreneurs, as well as known AI advocates.
Adam Coates is the director of Baidu Research's Silicon Valley AI Lab. According to Coates, the future of artificial intelligence should not be feared because there are still many things in the world that machine learning algorithms simply cannot learn on their own.
Coates also added that the development of artificial intelligence and deep learning systems built on neural network technology should not be seen as an effort to rival human intelligence. Instead, machine learning allows people to create systems that can make decisions without relying on explicitly programmed rules, Information Week notes.
Bill Gates is the 60-year-old co-founder of Microsoft, the world's largest PC software company, and the world's richest man. Gates sees near-term, low-intelligence AI as a positive tool for replacing labor, but he also worries that "superintelligent" systems might become a threat, Time reveals.
Elon Musk is the 44-year-old co-founder, CEO and product architect of Tesla Motors and the founder, CEO and CTO of SpaceX. Despite being an artificial intelligence investor, Musk has spoken out against AI, calling it the biggest existential threat to the survival of humanity.
Musk also stressed that his investments in artificial intelligence research are meant to keep a close watch on what is going on rather than to generate a return on capital. According to The Guardian, Musk likened the rise of AI to "summoning the demon" and called for more regulatory oversight at the national and international levels.
Michio Kaku is a 69-year-old futurist and theoretical physicist who calls artificial intelligence an end-of-the-century problem. He adds that most of the fears dominant in pop culture are premature, saying the most advanced AI-driven robots have the intelligence of a "retarded lobotomized cockroach," Reverb Press reports.
Nick Bostrom is a 43-year-old University of Oxford philosopher known for his contributions to the debate on artificial intelligence development. In his book "Superintelligence: Paths, Dangers, Strategies," Bostrom warns that the rise of AI could turn dark and that the sci-fi scenario of intelligent machines taking over the world could become a reality, leaving behind a society of technological magnificence with nobody to benefit from it, FT.com shares.
Ray Kurzweil is a 68-year-old futurist, computer scientist and Google's director of engineering who is optimistic that human-level artificial intelligence will be achieved by 2029. In his Time article, Kurzweil writes that the most significant way to keep AI safe is to work on human governance and social institutions.
Sam Altman is a 31-year-old venture capitalist, programmer, Y Combinator president and co-chairman of OpenAI. He acknowledges the possibility that his artificial intelligence project will surpass human intelligence but stresses that making it available and accessible to everyone will limit its existential risks, as per Wired.
Stephen Hawking is a 74-year-old English theoretical physicist and research director of the University of Cambridge's Centre for Theoretical Cosmology. According to Hawking, artificial intelligence could be both phenomenal and cataclysmic; he warns that AI's explosive growth could be the last event in human history if its risks are not avoided, The Independent reports.
Do you agree with what the experts said about artificial intelligence? Sound off below and follow Parent Herald for more news and updates.