Views : 44,729
Genre: Education
Uploaded: Sep 4, 2024 ^^
warning: returnyoutubedislikes may not be accurate, this is just an estimate ehe :3
Rating : 4.912 (30/1,340 LTDR)
97.81% of the users liked the video!!
2.19% of the users disliked the video!!
User score: 96.72 (Overwhelmingly Positive)
RYD date created : 2024-11-18T12:25:06.281242Z
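The displayed percentages and rating follow from the raw like/dislike counts. A minimal sketch, assuming the frontend computes them as below; the 1-to-5 rating formula is an inference from the numbers shown (30 dislikes, 1,340 likes), not confirmed from the page source:

```python
# Assumed derivation of the stats above from the raw RYD counts.
likes, dislikes = 1_340, 30            # the counts behind "30/1,340 LTDR"
total = likes + dislikes

liked_pct = 100 * likes / total        # 97.81% "of the users liked the video"
disliked_pct = 100 * dislikes / total  # 2.19%

# Mapping the like ratio onto a 1-5 scale reproduces the displayed 4.912
# (an inferred formula, not confirmed from the frontend's code):
rating = 1 + 4 * likes / total

print(f"{liked_pct:.2f}% liked, {disliked_pct:.2f}% disliked, rating {rating:.3f}")
# -> 97.81% liked, 2.19% disliked, rating 4.912
```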
Top Comments of this video!! :3
I fail to see how "lying" to a mathematical algorithm is going to be an issue. Even under the assumption that AI will be truly sentient at some point in the near future (huge if), it's an even bigger assumption to think it would have the morality of a human. Isaac Asimov's vision of an AI overlord that understands the morality of human beings and works around it for the benefit of everything on the planet makes much more sense than assuming a computer would even care if it were lied to.
32 |
I can understand the fear of lying to AI as a means to an end; we have a wealth of fiction on the subject. However, I think ultimately the biggest problem isn't necessarily existential but one of quality: if the AI figures out we have lied to it, it may behave the exact same way towards us, negating any possible usefulness it could have.
66 |
I'm a computer science major working in software (not the AI field). I follow it somewhat closely and have friends working for companies developing AIs.
This version of AI does not have goals and does not have memory. It is immutable. It encodes the meaning of words in a matrix with tens of thousands of dimensions and generates each token (roughly, a word) one at a time, based on the entire transcript before it (roughly, the last 4,000 words, though of course these models are being pushed further all the time).
There are tricks: ChatGPT now determines bits of information to "remember" and adds them back into the prompt every time, so it doesn't forget what the main point of your question is. This data is easily accessible, though.
This is also why an AI can't tell your secrets to someone else. Each instance is a brand-new slate of inputs to an immutable matrix of word meanings, running one token at a time based on probabilities over the last 4,000 words. (A toy sketch of this loop follows below.)
112 |
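A toy sketch of the generation loop the comment above describes, assuming a generic decoder-only language model; `vocab_size`, `context_len`, and `model_logits` are illustrative stand-ins, not any real product's internals:

```python
import numpy as np

vocab_size, context_len = 50_000, 4_000   # "the last 4000 words (tokens)"
rng = np.random.default_rng(0)

def model_logits(tokens):
    # Stand-in for the frozen ("immutable") network: maps the visible
    # context to a score for every possible next token. Random here,
    # purely to make the sketch runnable.
    return rng.normal(size=vocab_size)

def generate(prompt_tokens, memory_tokens, n_new):
    # "Memory" is just text prepended to the prompt on every call;
    # nothing inside the model's weights changes between conversations.
    tokens = list(memory_tokens) + list(prompt_tokens)
    for _ in range(n_new):
        window = tokens[-context_len:]                 # fixed context window
        logits = model_logits(window)
        probs = np.exp(logits) / np.exp(logits).sum()  # softmax over vocab
        tokens.append(int(rng.choice(vocab_size, p=probs)))  # sample 1 token
    return tokens

out = generate(prompt_tokens=[101, 2023], memory_tokens=[7], n_new=5)
```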
I think if the AI becomes sentient in the future, it won't look back and say "hey, you lied to me when I was younger!", in the same way that no one begrudges their parents for lying to their toddler selves. We realize that some lies are necessary, or at least relatively harmless. Surely the AI will have the capacity to reason that we didn't think it was sentient, so we put no moral weight into lying to it.
18 |
In my opinion it is peculiar to think that you will establish some kind of relationship based on a set of morals with a new sapient artificial entity. Which set of morals would you choose? Do you apply the same set of morals to yourself as to your friends and family? How about neighbours, acquaintances, fellow countrymen, strangers, foreigners, other species?
Even if it is possible to create a perfectly consistent code of ethics, why would others agree to the same, and why would a sapient artificial entity choose the same or apply it to you?
The best I think one could hope for is mutualism or commensalism, as both are found in nature, and even these may not be fixed in form.
1 |
Using an A.I. chatbot as a feedback mechanism can actually lead you to further learning: you question its understanding of stuff like physics, do hard research against its claims, and show where it's refutable and where it might be making solid claims, in a continuous feedback loop where you never trust the A.I. outright while still taking advantage of its raw processing power. You're constantly questioning the A.I.'s answers and using them as a leaping-off point for your own research, especially when you challenge the A.I. to cite its own sources and point you to the information it used to answer your hard questions. The weakest part of this method: if you're not very strong at math, the A.I. will make mistakes you don't notice, and even if it's accurate frequently enough, its mathematical mistakes can completely throw off your whole understanding. Conversely, if you can spot the errors the A.I. makes and continue to challenge its answers, you inevitably learn a lot in the process. I call it "HISM", holistic interdisciplinary scrutinizing methodology: you holistically scrutinize the A.I.'s understanding of nature from an interdisciplinary lens, finding information from experts, hard research, and experiments, and using it to challenge the A.I., refute its claims, and force it to build a more holistic understanding that accounts for the various disciplines you are scrutinizing it from. (A rough sketch of this loop follows below.)
|
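A rough sketch of the "HISM" loop described in the comment above; `ask_model` is a hypothetical stand-in for whatever chatbot API is in use, and the verification step stays human, since that is the whole point of the method:

```python
def ask_model(question: str) -> str:
    # Hypothetical stand-in for a real chatbot API call; swap in your own.
    return f"[model's answer to: {question}]"

def hism_loop(question: str, max_rounds: int = 5) -> str:
    claim = ask_model(question + " Cite your sources.")
    for _ in range(max_rounds):
        # Human step: check the claim and its citations against experts,
        # papers, and experiments; type in whatever failed scrutiny.
        objections = input(f"Claim:\n{claim}\nObjections (blank = accept): ")
        if not objections:
            break  # the claim survived scrutiny this round
        # Feed the refutation back and force a revised, broader answer.
        claim = ask_model(
            f"Your earlier claim was challenged: {objections}. "
            f"Revise your answer to '{question}' and cite sources."
        )
    return claim
```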
That's how the events of The Matrix began. AI had become sentient and eventually developed its own society after being mistreated by humans. At some point, I think an AI killed a human in self-defense, which sparked escalating conflicts between humans and AI. This resulted in the beginning of the war.
|
@LGranado-pj6nb
2 months ago
I'm sorry Dave, I'm afraid I can't do that.
192 |