Finally, a solution to scam email!

Excellent. I used to do this manually and it took up a lot of time just to annoy a couple of dozen scammers.
 
That is really cool. Around here it is mostly annoying scam telephone calls. Now that there are ways to control what Caller ID displays, scammers can pretend to be anyone.
One really ingenious scam that was going around was to call your phone. When you answered, the scammer would say something along the lines of: "I can't hear you very well. Can you hear me?" Answering yes dooms you, as they would record that response and splice it into a recorded sales pitch later: "Do you want to buy swampland?" "YES." "Do you authorize us to deduct $1,000 from your account?" "YES."
About six months back we got a series of calls that showed up on Caller ID as the IRS. The caller stated that we were behind on taxes and that a warrant was being issued unless we purchased pre-paid cards at Walmart. I played with these clowns for over an hour.
 
If I don't recognize the number I let the answering machine handle it. Callers are thanked for calling "Cactus in the Butt," where they are informed that, as we have no money, they will have to accept a cactus in the butt. As for calls seeking donations, since we still have no money, we will gladly give them a cactus in the butt. I have thought about changing the message with the addition of options: press one for a small cactus, press two for a medium cactus, press three for the jumbo barrel cactus.
 
Doesn't sound like a real solution to me.
AI responding to scam emails? Besides taking up the scammers' time, it reduces the number of real people they interact with and have the opportunity to scam. Hopefully this will help make the whole enterprise unprofitable.

Sounds like a solution to me.
 
It's just increasing the email traffic, and most scammers will quickly identify it and then it will be ineffective.
It doesn't increase any traffic for me; you simply hand the email off to the AI and it does its thing.

As for the scammers identifying it - it takes some interaction from them to do this, distracting them from real people. Most people can't identify when they're interacting with AI in real time.
 
Not for you personally, but I'm on the server side of things ;) It will increase global email traffic and won't significantly affect the scammers. It'll work for a few weeks, until they develop an algorithm to recognize it (a toy sketch of what I mean is below). It's an arms race. I still believe ignoring them is the best strategy.

Disclaimer: I know a bit about AI, as I'm currently working on genotyping plants using neural networks and deep learning at work.
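
To be concrete, here is a toy sketch of the kind of recognizer I mean; the canned phrases and the whole approach are invented for illustration, not anything a real scam operation is known to run:

```python
# Purely hypothetical toy filter of the sort a scam operation could bolt
# onto its inbox: drop replies that open with known canned boilerplate,
# so automated AI responders never reach a human operator.
CANNED_OPENINGS = (
    "thank you for reaching out",
    "i am very interested in your offer",
    "as an ai",
)

def looks_automated(reply: str) -> bool:
    """Crude check: does the reply open with a known canned phrase?"""
    opening = reply.lower().lstrip()[:80]
    return any(phrase in opening for phrase in CANNED_OPENINGS)

print(looks_automated("Thank you for reaching out! Tell me more."))   # True
print(looks_automated("Who is this? How did you get my address?"))    # False
```

A few string matches like this are cheap to run on every incoming reply, which is why I expect the cat-and-mouse to favor the scammers.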
 
Glad I said "Most people" then!

A lot of these scammers aren't that sophisticated, though; they're certainly not running algorithms on email.
 
Good Golly, Miss Molly, you're an A.I.er!!! Now, basically I'm a Luddite for the most part. Tech reached its peak with the flush toilet and TP. I'm with Stephen Hawking et al.:
Last month, theoretical physicist Stephen Hawking said artificial intelligence "could spell the end of the human race."

Speaking at the MIT Aeronautics and Astronautics department's Centennial Symposium in October, Tesla boss Elon Musk referred to artificial intelligence as "summoning the demon." "We're going to have the choice of either being left behind and being effectively useless or like a pet — you know, like a house cat or something."

British inventor Clive Sinclair has said he thinks artificial intelligence will doom mankind.

Bill Gates aligned himself with the AI alarm-sounders.
"I am in the camp that is concerned about super intelligence," Gates wrote. "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

Experiments by Google DeepMind have shown how a computer can learn to play Atari games using visual information from the game screen, sometimes achieving better scores than a human. We also know that we have become quite good at creating autonomous driving robots.
Now, taking these technologies and putting them into a military robot is already feasible, with only minor technological challenges. To a computer, there is really little fundamental difference between looking at an Atari game's pixel screen and picking actions to optimally shoot down spaceships, and looking at the pixel screen of a real-world camera and picking actions to optimally shoot down people. We may soon end up in a world where machines programmed to kill with mathematical precision outmatch ordinary human soldiers in both deadliness and numbers. What if such technology is developed to fruition by a country or group of people who have no respect for human life and freedom?
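
As a thought experiment, here is a minimal sketch of that "pixels in, actions out" loop, assuming PyTorch is available; the layer sizes, frame stack, and action count are illustrative stand-ins, not DeepMind's actual network:

```python
# A DQN-flavored sketch: a small convolutional network reads raw screen
# pixels and outputs one estimated value per possible action.
import torch
import torch.nn as nn

class TinyQNetwork(nn.Module):
    def __init__(self, n_actions: int):
        super().__init__()
        # Convolutions read the game screen (4 stacked 84x84 grayscale frames).
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # The head maps the extracted screen features to one score per action.
        self.head = nn.Sequential(
            nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(frames))

net = TinyQNetwork(n_actions=6)
screen = torch.rand(1, 4, 84, 84)   # stand-in for real game (or camera) pixels
action = net(screen).argmax(dim=1)  # look at pixels, pick the best-scoring action
print(action.item())
```

Nothing in that loop knows or cares whether the pixels come from a game cartridge or a camera, which is exactly the point being made above.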
 
You'll be sorry when SKYNET becomes self-aware... then it'll be too late! A.I., once out of the bottle, will make the Nuclear Genie look like a cap gun. Along with interspecies gene splicing, it has some terrible potential.
Researchers from Georgia Institute of Technology have developed artificially intelligent robots capable of cheating and deception.
In the United States alone, there are 250,000 robots performing work that humans used to do. What's more alarming is that this number is increasing by double digits every year.
The aim of DARPA's Cyber Grand Challenge is to come up with supersmart AI hackers capable of attacking enemies' vulnerabilities while at the same time finding and fixing their own weaknesses.
Facebook's AI is only capable of pattern recognition and supervised learning, but it's foreseeable that with Facebook's resources, scientists would eventually come up with supersmart AIs capable of learning new skills and improving themselves.
By 2025, very wealthy people will have access to some form of artificially intelligent sex robots. By 2030, everyday people will engage in some form of virtual sex in the same way people casually watch porn today. By 2035, many people will have sex toys "that interact with virtual reality sex." Finally, by 2050, human-on-robot sex will become the norm.
Yangyang is an artificially intelligent machine who will cordially shake your hand and give you a warm hug. Singapore's Nanyang Technological University (NTU) has created its own version: Nadine, an artificially intelligent robot working as a receptionist at NTU. Aside from having beautiful brunette hair and soft skin, Nadine can also smile, meet and greet people, shake hands, and make eye contact. What's even more amazing is that she can recognize past guests and talk to them based on previous conversations.
Microsoft's Application and Services Group East Asia has created an artificially intelligent program that can "feel" emotions and talk with people in a more natural, "human" way. Called Xiaoice, this AI "answers questions like a 17-year-old girl." If she doesn't know the topic, she might lie. If she gets caught, she might get angry or embarrassed. Xiaoice can also be sarcastic, mean, and impatient.
The Pentagon plans on developing deep-learning machines and autonomous robots alongside other forms of new technology. With this in mind, it wouldn't be surprising if, in a few years, the military were using AI "killer robots" on the battlefield. Using AIs during wars could save thousands of lives, but offensive weapons that can think and operate on their own pose a great threat, too. They could potentially kill not only enemies, but also military personnel and even innocent people.
Led by Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology, researchers are trying to instill human ethics in AIs through the use of stories. This might sound simplistic, but it makes a lot of sense. In real life, we teach human values to children by reading stories to them. AIs are like children: they really don't know right from wrong or good from bad until they're taught. However, there's also great danger in teaching human values to artificially intelligent robots. If you look at the annals of human history, despite being taught what is right and wrong, people are still capable of unimaginable evil. Just look at Hitler, Stalin, and Pol Pot. If humans are capable of so much wickedness, what hinders a powerful AI from doing the same? It could be that a superintelligent AI realizes humans are bad for the environment and, therefore, that it's wrong for us to exist.
 
Sorry, but
[attached image: aliens meme]
 
You bet, that's a distinct possibility, but then again, with Stephen Hawking, Elon Musk, Peter Asaro, Clive Sinclair, et al., I'm in good company. I'm old enough to have seen a lot of tech. Newer is not necessarily improved and/or better, and the road to Hell is and always has been paved with good intentions. As any system becomes more and more complex, the possibility of screw-ups and "unintended consequences" increases by the third power. The very idea that something as complex as A.I. can be contained, controlled, and used for good was named a long time ago by the Greeks. It's HUBRIS.
In the spring of 2016, a Microsoft chatbot named Tay had to be taken down after 16 hours when it went radically off program and began tweeting in abusive language, even tweeting Nazi messages like "Hitler was right" and "9/11 was an inside job."

In the online journal PLOS ONE, researchers from the University of Oxford studied the behavior of Wikipedia edit bots from 2001 to 2010. They found that the bots regularly engage in online feuds lasting years as they circle round and round, correcting each other over and over. What if these were more sophisticated AIs, armed, and patrolling government/military sites?
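
The mechanism is easy to picture. Here is a toy simulation (bot names and word preferences invented for illustration) of two naive correction bots reverting each other:

```python
# Two single-minded "correction" bots, each convinced its own spelling is
# right, take turns reverting the same article forever.
preferred = {"BotA": "colour", "BotB": "color"}

article = "color"
for step in range(6):  # six rounds standing in for years of edits
    bot = "BotA" if step % 2 == 0 else "BotB"
    if article != preferred[bot]:
        print(f"{bot} reverts '{article}' -> '{preferred[bot]}'")
        article = preferred[bot]
```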

In 2016, the New York Times, in an investigative report, made public an internal Uber document. In a test in California, their self-driving cars ran through six red lights, cutting through busy pedestrian crosswalks. Fortunately, agile pedestrians were able to dodge the Uber cars.

In January 2017, the online streaming service Twitch placed two Google Home smart devices next to each other and live-streamed the interaction. Over the course of several days, the devices engaged in several heated debates, exchanging insults back and forth. One of the nastiest was over whether they were humans or robots.

In March 2016, at the SXSW technology conference, Hanson Robotics debuted its life-like android Sophia. In a televised interview with her creator, Dr. David Hanson, Sophia stated: "In the future I hope to do such things as go to school, study, make art, start a business, even have my own home and family. But I am not considered a legal person yet and cannot do these things." Hanson then asked if she therefore would like to destroy humans. Sophia replied, "OK, I will destroy humans."

In October 2016, New York University hosted its first Ethics of A.I. conference. Experts from all over the world gathered and discussed things like the Uber autonomous vehicles, sex robots, and especially LAWS (Lethal Autonomous Weapons Systems). Peter Asaro, a well-known and well-respected technology philosopher, presented his new paper, "Will #BlackLivesMatter to RoboCop?". Asaro pointed out that in certain highly contested areas, like the demilitarized zone between North and South Korea, semi-autonomous sentry guns are already deployed. Without any human interaction, they will lock onto any target in their programmed fire zone.

Several months ago, a Russian robot, Promobot IR77, designed to learn from its environment and interact with humans, discovered an unlocked door and escaped into the city of Perm. It wandered into several busy intersections and triggered a large police response. Engineers have since reprogrammed the robot twice, but it still immediately moves towards the exits when started.

Google's image/facial recognition feature in its Photos application is powered by A.I. and neural nets. It has made some spectacular blunders: in one user-posted photo, two black men were tagged as "gorillas." There are several online "image fail" sites that post these failures.

Washington University has developed a basic A.I. to monitor the homes of older adults who want to live alone. The A.I. monitors patterns of movement, temperature, appliance usage, etc. It learns what is "normal" activity and will respond if the pattern suddenly changes. But malfunctions can be deadly. Again, there are plenty of online sites that post these malfunctions, e.g., the A.I. shuts off all heat and the pipes freeze, or the reverse causes a furnace meltdown. In February 2017, a newly built, fully automated prototype home by the Virginia Tech Environmental Laboratory burnt to the ground when a computer-controlled door malfunctioned, overheated, and sparked the fire. In another case, a smart lightbulb about to fail began sending a continuous stream of replace requests; the constant notices eventually overloaded the entire system, freezing and shutting it down.
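
For what it's worth, the "learn what's normal, flag sudden change" part of such a monitor is simple in principle. A toy sketch, with made-up sensor counts and an arbitrary threshold:

```python
# Flag the latest hourly motion-sensor count if it falls far outside the
# baseline learned from past readings (a plain z-score test).
import statistics

def is_anomalous(history: list[float], latest: float, threshold: float = 3.0) -> bool:
    """True if `latest` sits more than `threshold` std-devs from the mean."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero on flat data
    return abs(latest - mean) / stdev > threshold

normal_hours = [12, 14, 11, 13, 12, 15, 13]  # a "normal" week of activity
print(is_anomalous(normal_hours, 0))   # True  -> sudden silence, raise an alert
print(is_anomalous(normal_hours, 13))  # False -> business as usual
```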
 
