ChatGPT faces defamation lawsuits after making up stories about people


ThomasP
Well, so it has begun.

ChatGPT has made up defamatory stories about at least two people so far.

One is a mayor in Australia who was a whistleblower in a bank-backed bribery scheme. Unfortunately, ChatGPT claims the mayor served time in prison for bribery, which did not happen; the mayor was never even charged with a crime.

Another instance is that of a college law professor, whom ChatGPT claims sexually harassed his students. ChatGPT claimed the harassment occurred on a trip to Alaska with his students, but the professor has never been to Alaska, or gone on a trip with his students. ChatGPT cited a nonexistent Washington Post article as evidence.
 
The problem is that these things are natural-language software, so they 'look' intelligent, but they are also what is now called 'trained', which in practice means programmed.

AI is a term bandied about with ease these days, but all this 'AI' cannot actually think. It can only respond according to how it is trained. How many times do we suddenly get that lightbulb moment, the solution to a problem we were grappling with some other time? AI doesn't have the connections, or even the bulb.

That is why relying on programmable pretend intelligence doesn't go well.
 
Who gets sued?
I guess that would be OpenAI, the company behind the chatbot.

"AI is a term bandied about with ease these days, but all this 'AI' cannot actually think. It can only respond according to how it is trained."
It's not so black-and-white. Deep learning is indeed trained, but not programmed in the strict sense. And you would need to define "thinking" first to make such a statement.

If thinking is collecting information, seeing the relations in it, making the correct links, and even defining new links and conclusions from that data, then deep learning does indeed "think".

Make no mistake, this is a technology that will revolutionize the way we live and work. It already outperforms humans at some tasks.
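
To illustrate the trained-versus-programmed distinction, here is a minimal sketch in Python. It is a toy with one learned parameter, nothing like a real deep-learning system with millions of them, but it shows the difference in kind: in the first function a human writes the rule, in the second the rule is derived from example data.

```python
# Toy sketch: "programmed" vs "trained" (illustrative only; real
# deep-learning systems learn millions of parameters, not one).

# Programmed: a human writes the rule explicitly.
def is_positive_programmed(x: float) -> bool:
    return x > 5.0  # threshold chosen by the programmer

# Trained: the rule (here, a single threshold) is derived from examples.
def train_threshold(examples: list[tuple[float, bool]]) -> float:
    positives = [x for x, label in examples if label]
    negatives = [x for x, label in examples if not label]
    # Place the threshold midway between the two classes.
    return (max(negatives) + min(positives)) / 2

data = [(1.0, False), (2.0, False), (8.0, True), (9.0, True)]
threshold = train_threshold(data)  # 5.0, learned from the data
print(7.0 > threshold)             # True: the behaviour came from examples
```

Change the training data and the behaviour changes with it, with no programmer editing the rule. That is the sense in which "trained" is not just "programmed".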
 
The only areas where performance is better are things like search engines, because data can be pattern-matched faster than a human can do it. See the sketch below.

The data itself is the result of human thinking. That is what I meant. As for designing new things, that requires ideas and thought processes which the human brain has but a computer does not.

Using something like ChatGPT to generate an answer and having it come out so wrong is the end result of that lack of thought processes.
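
As a sense of the scale involved, here is a toy sketch, not a search-engine implementation: scanning a couple of million words for a pattern, which a machine does in a fraction of a second and a human would need hours for.

```python
import re
import time

# Toy illustration of machine-speed pattern matching: scan a large
# synthetic "corpus" for a word. Not a search engine, just scale.
corpus = "the quick brown fox jumps over the lazy dog " * 200_000  # ~1.8M words

start = time.perf_counter()
matches = re.findall(r"\bfox\b", corpus)
elapsed = time.perf_counter() - start

print(f"{len(matches)} matches in {elapsed:.3f} s")  # typically well under a second
```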
 
I'm still not sure I fully agree with you. I've been using Copilot, an AI tool that helps programmers. It does not blindly copy code that it knows, but adapts the code to fit my situation, and even goes so far as to change the code to match my programming style. So in a way it creates new code and adapts to the situation. Granted, it uses the knowledge it was trained on, but that is not so different from ourselves, since we draw on our experience as well. The creational aspect of its function is still quite limited compared to us, but that will improve over time.

Now back to the original question. I do not believe those faults stem from whether the system can "think" or not. It combined data and came up with an answer, pretty much like we would do ourselves, and, like many of us, it came to a wrong answer because the input data might have been false. The worrying thing is that it came up with a lie: the non-existent article.
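
That failure mode follows from how these models generate text: each word is sampled from a probability distribution learned from training data, with no step that checks the result against reality. A toy sketch in Python (the vocabulary and probabilities here are made up for illustration, not anything from OpenAI's actual model):

```python
import random

# Toy next-word model: probabilities are invented for illustration.
# A real LLM does the same thing at vastly larger scale: it picks a
# plausible continuation, with no built-in check that the claim is true.
next_word = {
    "reported":   {"by": 0.9, "that": 0.1},
    "by":         {"the": 1.0},
    "the":        {"Washington": 0.6, "Guardian": 0.4},
    "Washington": {"Post": 1.0},
}

def generate(word: str, steps: int = 4) -> str:
    out = [word]
    for _ in range(steps):
        options = next_word.get(out[-1])
        if not options:
            break
        words, probs = zip(*options.items())
        out.append(random.choices(words, weights=probs)[0])
    return " ".join(out)

# A typical output: "reported by the Washington Post" -- a fluent,
# plausible-sounding attribution, whether or not such an article exists.
print(generate("reported"))
```

Nothing in that loop distinguishes a real Washington Post article from a fabricated one; "plausible" and "true" are simply not the same test.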
 
It's still all a case of human input determining the level of 'intelligent response'.

The interesting part to me, as far as programming goes, is that we don't need to put too much effort into achieving full AI, as we already have the real thing.

As to combining information and coming up with something, it still isn't real 'I' or even 'AI', as dissemination isn't present.

As far as all that goes, I am more for better use of automation, to free the human brain for more creation.
 
"The worrying thing is that it came up with a lie: the non-existent article."

We're accustomed to legal protections allowing a victim to take a person or a company to court. If it were a person making a false or defamatory remark, they could be sued. You can't sue a machine. The problem here is that AI/ML companies may claim they had no knowledge of how their system/tool might evolve or how it might reach a given conclusion, and hence that they are not culpable.

I suspect there will be some interesting machinations ahead as we, collectively, try to work out where legal lines are drawn with this stuff.
 
"It's still all a case of human input determining the level of 'intelligent response'."
Maybe I still don't fully grasp what you are trying to say here, but isn't that the way most people respond? They respond out of what they have learned (which is always biased, just like with DL), and we usually redo what we have always done, or whatever makes our audience respond favorably. I see many people accepting things from the internet as fact.

"As far as all that goes, I am more for better use of automation, to free the human brain for more creation."
I agree, but I'm not sure that is how this will play out. I could see new programmers, fresh from school, not being able to understand their own code because they learned to lean on AI too much.
 
"Isn't that the way most people respond? They respond out of what they have learned (which is always biased, just like with DL)."
I think we are talking at cross purposes here. I was referring to the human ability to create new things from nothing rather than responding to an input. That is creative intelligence. There is also our ability to take totally disparate information and connect it to create something entirely new. Probably just a different take on it, is all.

"I could see new programmers, fresh from school, not being able to understand their own code because they learned to lean on AI too much."
Yes, and that is already the case with many, as they use libraries without any understanding of how the code works. I see it in the coding cycle, where debugging takes up the major proportion of the time and work rather than actual coding. Shame really.
 
Hey Siri, how can global warming be stopped?

Siri: OK, I have an idea. If we . . . and maybe we could call it Skynet, and it will . . .

NOOOOOOOOOOOOOOOOOOOOOOO. . . (<-- doppler sound effect)
 
More seriously, the idea of robots and AI learning to lie, or otherwise becoming inimical to humanity, was conceived of in the early 1900s. For any of the inventors of ChatGPT not to realize that it might, or would, end up lying and fabricating stories and information, they would have to have so few functioning brain cells that they would not be capable of developing ChatGPT in the first place.
 
"More seriously, the idea of robots and AI learning to lie, or otherwise becoming inimical to humanity, was conceived of in the early 1900s."
One of Microsoft's first adventures into AI chatbots, back in 2016, was a Twitter bot that learned from the reactions it got in order to improve itself. It lasted less than a day before becoming so racist that it had to be taken offline.
 
