ChatGPT faces defamation lawsuits after making up stories about people


DerAdlerIstGelandet

Private Chemtrail Disperser
It seems to me that, if we cannot even define our own intelligence, designing machine-learning programs might not be a good idea. If we cannot even project the possible outcomes, should we pull the trigger?

I'm not worried about Watson, or Deep Blue, etc. Those are algorithmic rather than thoughtful. What happens when you produce software that can put two and two together, metaphorically? What happens when AI learns to lie?

Intelligence comes in so many frameworks, as you point out, that we cannot even understand the various intelligent species that naturally populate our globe. Is there really a need to create an artificial intelligence that we may not understand? Shit, our own intelligence seems to be screwing this world up six ways to Sunday.

I think this is certainly a point where we should go slow.

