ChatGPT faces defamation lawsuits after making up stories about people


It seems to me that, if we cannot even define our own intelligence, designing machine-learning programs might not be a good idea. If we cannot even project the possible outcomes, should we pull the trigger?

I'm not worried about Watson, or Deep Blue, etc. Those are algorithmic rather than thoughtful. What happens when you produce software that can put two and two together, metaphorically? What happens when AI learns to lie?

Intelligence comes in so many frameworks, as you point out, that we cannot even understand the various intelligent species which populate our globe naturally. Is there really a need to create an artificial intelligence that we may not understand? Shit, our own intelligence seems to be screwing this world up six ways to Sunday.

I think this is certainly a point where we should go slow.

Currently it would only be able to install itself if it were programmed/told to.
Yes, but the real question would be whether it was programmed to do so by someone other than the user downloading it...
I actually am not worried about AI taking over the world. I am thinking more along the lines of all the things that I could/would do with it if I were an evil person or of a criminal bent - THAT is what is truly worrying for the near future.
Well, weapon systems would be a greater concern, because they're designed to destroy.