Sounds like it is ready to start posting here.
"Who gets sued?"

I guess that would be OpenAI, the company behind the chatbot.
"The problem is these things are natural language software, so they 'look' intelligent, but they are also what is now called 'trained', which means programmed."

It's not so black-and-white. Deep learning is indeed trained, but by definition it is not programmed. And you would first have to define "thinking" to make such a statement.
AI is a term bandied about with ease these days, but all this 'AI' cannot actually think. It can only respond according to how it was trained. How many times do we suddenly get that lightbulb moment, the solution to a problem we were grappling with at some other time? AI doesn't have those connections, or even the bulb.
That is why relying on programmable pretend intelligence doesn't go well.
Now back to the original question. I do not believe those faults stem from whether the system can "think" or not. It combined data and came up with an answer, pretty much like we would ourselves, and, like many of us, it arrived at a wrong answer because the input data may have been false. The worrying thing is that it came up with a lie: the nonexistent article.
"It's still all a case of human input giving the level of 'intelligent response'."

Maybe I still don't fully grasp what you want to say here, but isn't that how most people respond? They respond based on what they have learned (which is always biased, just like with DL), and we usually redo what we always do, or whatever makes our audience respond favorably. I see many people accepting things from the internet as fact.
"As far as all that goes, I am more for better use of automation to allow humans to use the brain for more creation."

I agree, but I'm not sure that will be the case here. I could see new programmers, fresh from school, being unable to understand their own code because they learned to lean on AI too much.
"Maybe I still don't fully grasp what you want to say here, but isn't that how most people respond? They respond based on what they have learned (which is always biased, just like with DL), and we usually redo what we always do, or whatever makes our audience respond favorably. I see many people accepting things from the internet as fact."

I think we are on crossed paths here. I was referring to the human ability to create new things from nothing, rather than that.
"I agree, but I'm not sure that will be the case here. I could see new programmers, fresh from school, being unable to understand their own code because they learned to lean on AI too much."

Yes, and that is already the case for many, as they use libraries without any understanding of how the code works.
One of Microsoft's first adventures into AI chatbots, back in 2016, was a Twitter bot that learned from the reactions it got in order to improve itself. It lasted less than a day before it became so racist that it had to be taken offline.

More seriously, the idea of robots and AI learning to lie, or otherwise becoming inimical to humanity, was conceived of in the early 1900s. For any of the inventors of ChatGPT not to realize that it might end up lying or fabricating stories and information, they would have to have so few functioning brain cells that they would not be capable of developing ChatGPT in the first place.
Anyone remember "Westworld"?