"They" would be the streaming service I think. They would hugely benefit. Copyright is of no consequence.
They could even provide the song-making as a service. The client answers a couple of questions and they get their personal music playlist generated for them.
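As a rough illustration of that "song-making as a service" idea, here is a minimal sketch: the client answers a few questions and gets a generated playlist back. Everything here (the question set, the `generate_playlist` function) is invented for the example; a real service would call a generative-music model where the stub composes placeholder titles.

```python
# Hypothetical questionnaire-to-playlist service sketch.
# QUESTIONS and generate_playlist are invented names for illustration only.

QUESTIONS = {
    "mood": ["calm", "upbeat", "dark"],
    "tempo": ["slow", "medium", "fast"],
    "era": ["80s", "90s", "modern"],
}

def generate_playlist(answers: dict, length: int = 5) -> list:
    """Turn questionnaire answers into placeholder track titles.

    A real service would invoke a generative-music model here;
    this stub only composes descriptive names from the answers.
    """
    style = "{mood} {tempo} {era}".format(**answers)
    return ["Track {} ({})".format(i + 1, style) for i in range(length)]

playlist = generate_playlist({"mood": "calm", "tempo": "slow", "era": "90s"})
print(playlist[0])  # "Track 1 (calm slow 90s)"
```

The point is only that the client-facing part is trivial; all the hard (and legally murky) work sits behind the model call.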
Even highly tested and controlled technology can go awry: self-driving cars running over pedestrians and driving into emergency vehicles are completely outside the programming parameters.
I'd call that bad coding and testing of the self-driving software.

It isn't. You assume programmers can foresee everything, but they can't. Most of the work of a developer is debugging. There will always be bugs.
Meh, I'm skeptical of such prognostication. There's nothing existing currently which comes close to actual artificial intelligence.
Personally, I tend to think it's people suffering from Frankenstein complex. Shelley's story casts a long shadow in the collective memory.
There will always be bugs.
I think assuming everything is going to be hunky-dory with new technologies is probably not the best idea in many cases.
That is a frightening thought. I have heard that some apps have in fact incorporated the ChatGPT service into their designs. This obviously presents its own issues.
To be honest, I'm curious: could ChatGPT install itself on other computers?
Sure, but the reverse is also true: assuming the worst will happen with a new technology seems needlessly alarmist.
Right. The conclusion I draw is that we should probably think more before going whole-hog. Y'know, that whole Law of Unintended Consequences thing. It really is a bitch.
I actually am not worried about AI taking over the world. I am thinking more along the line of all the things that I could/would do with it if I were an evil person or of a criminal bent - THAT is what is truly worrying for the near future.
I don't necessarily disagree, but there is nothing at present in available AI programs which even remotely approximates actual intelligence. Heck, we can't even exactly define what constitutes intelligence — look at the varying methods used to ascertain the intelligence level of various animal species. But at least animals have self-awareness and act independently of human input.
Was Watson intelligent when it won Jeopardy! back in 2011? Or was it just very good at understanding plain English language and rapidly searching through data?
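To make the "rapidly searching through data" side of that question concrete, here is a toy sketch of purely algorithmic question answering: score stored facts by word overlap with the question and return the answer attached to the best match. This is an illustration of mechanical lookup, not how Watson actually worked, and the facts and function names are invented for the example.

```python
# Toy "search through data" question answering: no understanding,
# just word-overlap scoring against a tiny store of canned facts.

FACTS = {
    "Toronto is the largest city in Canada.": "Toronto",
    "Chicago sits on Lake Michigan in the United States.": "Chicago",
}

def answer(question: str) -> str:
    """Return the answer attached to the fact with the best word overlap."""
    q_words = set(question.lower().split())

    def overlap(fact: str) -> int:
        # Count how many words the question shares with this fact.
        return len(q_words & set(fact.lower().split()))

    best_fact = max(FACTS, key=overlap)
    return FACTS[best_fact]

print(answer("Which city is the largest in Canada?"))  # "Toronto"
```

Scale the fact store up by a few billion entries and add much better language parsing, and you get something Jeopardy!-competitive without anything most people would call thought.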
It seems to me that, if we cannot even define our own intelligence, designing machine-learning programs might not be a good idea. If we cannot even project the possible outcomes, should we pull the trigger?
I'm not worried about Watson, or Deep Blue, etc. Those are algorithmic rather than thoughtful. What happens when you produce software that can put two and two together, metaphorically? What happens when AI learns to lie?
Intelligence comes in so many frameworks, as you point out, that we cannot even understand the various intelligent species which populate our globe naturally. Is there really a need to create an artificial intelligence that we may not understand? Shit, our own intelligence seems to be screwing this world up six ways to Sunday.
I think this is certainly a point where we should go slow.
Again, I don't disagree. Perhaps we disagree on just how close we are to reaching that point?
What happens when AI learns to lie?

We already have that. It's called politicians.
Probably true, but a lot of the decisions made seem to be the opposite. Still, when votes count....

Love the humor, but their intelligence is, unfortunately, very natural.