ChatGPT faces defamation lawsuits after making up stories about people


"They" would be the streaming service I think. They would hugely benefit. Copyright is of no consequence.

They could even provide the song-making as a service. The client answers a couple of questions and they get their personal music playlist generated for them.

Copyright is a huge issue. Why have Hollywood companies and Disney fought so hard for decades to lengthen copyright terms? Because of the protection it provides for those businesses to exclusively earn revenue from their works. Back in 1920, copyright in the U.S. lasted for a single 28-year term, with the possibility of a second 28-year term if it was applied for, yielding a maximum of 56 years of protection. Now, copyright lasts for the life of the author plus 70 years, or, in the case of work-for-hire or anonymous works, 95 years from first publication or 120 years from creation, whichever expires first.
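Just to make the term arithmetic concrete, here's a rough Python sketch comparing the old 1909 Act maximum with the current rules. The function name and its simplified inputs are hypothetical, and it only covers post-1978 works; the real calculation has many more special cases (renewals, pre-1978 publication dates, restored foreign works, and so on).

```python
def us_copyright_expiry(author_death_year=None, publication_year=None,
                        creation_year=None, work_for_hire=False):
    """Rough sketch of current U.S. copyright terms (post-1978 works only)."""
    if work_for_hire or author_death_year is None:
        # Work for hire / anonymous: 95 years from first publication or
        # 120 years from creation, whichever expires first.
        candidates = []
        if publication_year is not None:
            candidates.append(publication_year + 95)
        if creation_year is not None:
            candidates.append(creation_year + 120)
        return min(candidates)
    # Individual author: life of the author plus 70 years.
    return author_death_year + 70

# Under the 1909 Act, a 1920 work got one 28-year term plus an optional
# 28-year renewal, for a maximum of 56 years of protection:
print(1920 + 28 + 28)  # expires 1976 at the latest

# A hypothetical work-for-hire created in 2020 and published in 2023:
print(us_copyright_expiry(publication_year=2023, creation_year=2020,
                          work_for_hire=True))  # min(2118, 2140) -> 2118
```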

Without copyright protection, a work is essentially in the public domain, meaning anyone can use it freely and without penalty. If some streaming service puts up an AI-made song, anyone else can stream it, use it in a video, rebroadcast it themselves, or even sell copies of it, and the streaming service would have no recourse.

If copyright didn't matter, then music companies would not have gone so hard after Napster and its follow-ups. The companies would not complain and fight against piracy so much.
 
Even highly tested and controlled technology can go awry - self-driving cars that run over pedestrians and drive into emergency vehicles are operating completely outside of their programming parameters.

I'd call that bad coding and testing of the self-driving software.
 
Meh, I'm skeptical of such prognostication. Nothing that currently exists comes close to actual artificial intelligence.

Personally, I tend to think it's people suffering from a Frankenstein complex. Shelley's story casts a long shadow in the collective memory.

It goes a lot deeper than that in Western culture -- all the way back to the Garden of Eden.

I think assuming everything is going to be hunky-dory with new technologies is probably not the best idea in many cases. It's not that it will go wrong, or even that it's likely to go wrong, but that the consequence if it does could be staggering; so it behooves us to tread cautiously.

The overwhelming majority of commercial flights are completed safely, thousands every day in America alone. The odds are better than the odds you face driving to work. But because hundreds of people die if a jetliner crashes, we have a strong regulatory regime governing their design, testing, and operation.
 
Currently it would only be able to install itself if it was programmed/told to.

I actually am not worried about AI taking over the world. I am thinking more along the line of all the things that I could/would do with it if I were an evil person or of a criminal bent - THAT is what is truly worrying for the near future.
 
Right. The conclusion I draw is that we should probably think more before going whole-hog. Y'know, that whole Law of Unintended Consequences thing. It really is a bitch.

I don't necessarily disagree, but there is nothing at present in available AI programs which even remotely approximates actual intelligence. Heck, we can't even exactly define what constitutes intelligence — look at the varying methods used to ascertain the intelligence level of various animal species. But at least animals have self-awareness and act independently of human input.

Was Watson intelligent when it won Jeopardy! back in 2011? Or was it just very good at understanding plain English language and rapidly searching through data?
 

I'm curious to see how the copyright aspect turns out. While at present the U.S. law is clear on AI-generated material not being copyrightable, this is apparently not the case in some other countries. How will different copyright aspects in different countries affect AI-created works? Will the well-established U.S. legal precedent for human authorship get overturned?
 

It seems to me that, if we cannot even define our own intelligence, designing machine-learning programs might not be a good idea. If we cannot even project the possible outcomes, should we pull the trigger?

I'm not worried about Watson, or Deep Blue, etc. Those are algorithmic rather than thoughtful. What happens when you produce software that can put two and two together, metaphorically? What happens when AI learns to lie?

Intelligence comes in so many forms, as you point out, that we cannot even understand the various intelligent species which naturally populate our globe. Is there really a need to create an artificial intelligence that we may not understand? Shit, our own intelligence seems to be screwing this world up six ways to Sunday.

I think this is certainly a point where we should go slow.
 

Again, I don't disagree. Perhaps we disagree on just how close we are to reaching that point?
 

It may be that we don't agree on how much thought should be put into it. I think we should think long and hard about this, myself; and we should be very reluctant to hand decisions over to machines.

How close we are to that point, I don't know. But I think we ought to be careful about how close we ever get to it at all, because it strikes me that there might be a tipping-point no one can predict.
 
