Researchers from Georgia Institute of Technology have developed artificially intelligent robots capable of cheating and deception.
In the United States alone, there are 250,000 robots performing work that humans used to do. What's more alarming is that this number is increasing by double digits every year.
The aim of DARPA's Cyber Grand Challenge is to produce supersmart AI hackers capable of attacking enemies' vulnerabilities while at the same time finding and fixing their own weaknesses.
Facebook's AI is currently capable only of pattern recognition and supervised learning, but it's foreseeable that, with Facebook's resources, scientists will eventually come up with supersmart AIs capable of learning new skills and improving themselves.
By 2025, very wealthy people will have access to some form of artificially intelligent sex robots. By 2030, everyday people will engage in some form of virtual sex in the same way people casually watch porn today. By 2035, many people will have sex toys "that interact with virtual reality sex." Finally, by 2050, human-on-robot sex will become the norm.
Yangyang is an artificially intelligent machine that will cordially shake your hand and give you a warm hug. Singapore's Nanyang Technological University (NTU) has created its own version: Nadine, an artificially intelligent robot working as a receptionist at NTU. Aside from having beautiful brunette hair and soft skin, Nadine can smile, meet and greet people, shake hands, and make eye contact. What's even more amazing is that she can recognize past guests and talk to them based on previous conversations.
Microsoft's Application and Services Group East Asia has created an artificially intelligent program that can "feel" emotions and talk with people in a more natural, "human" way. Called Xiaoice, this AI "answers questions like a 17-year-old girl." If she doesn't know the topic, she might lie. If she gets caught, she might get angry or embarrassed. Xiaoice can also be sarcastic, mean, and impatient.
The Pentagon plans on developing deep-learning machines and autonomous robots alongside other forms of new technology. With this in mind, it wouldn't be surprising if, in a few years, the military were using AI "killer robots" on the battlefield. Using AIs during wars could save thousands of lives, but offensive weapons that can think and operate on their own pose a great threat, too. They could potentially kill not only enemies but also military personnel and even innocent people.
Led by Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology, researchers are trying to instill human ethics in AIs through the use of stories. This might sound simplistic, but it makes a lot of sense. In real life, we teach human values to children by reading stories to them. AIs are like children in that they don't know right from wrong or good from bad until they're taught. However, there's also great danger in teaching human values to artificially intelligent robots. If you look at the annals of human history, despite being taught what is right and wrong, people are still capable of unimaginable evil. Just look at Hitler, Stalin, and Pol Pot. If humans are capable of so much wickedness, what hinders a powerful AI from doing the same? It could be that a superintelligent AI concludes humans are bad for the environment, and therefore, that it's wrong for us to exist.