Video id : s1wx1N-THMY
Tricking AI Could Backfire - Nick Bostrom
44,729 Views • Sep 4, 2024
Metadata And Engagement

Genre: Education


warning: returnyoutubedislikes may not be accurate, this is just an estimate ehe :3
Rating : 4.912 (30/1,340 LTDR)

97.81% of the users liked the video!!
2.19% of the users disliked the video!!
User score: 96.72 - Overwhelmingly Positive

RYD date created : 2024-11-18T12:25:06.281242Z
Tags

No tags found on this video.

Connections
No connections found in the description.

146 Comments

Top comments on this video!! :3

@LGranado-pj6nb

2 months ago

I'm sorry Dave, I'm afraid I can't do that.

192 |

@cajonesalt0191

2 months ago

I fail to see how "lying" to a mathematical algorithm is going to be an issue. Even under the assumption that AI will be truly sentient at some point in the near future (huge if), it's an even bigger assumption to think it would have the morality of a human. Isaac Asimov's vision of an AI overlord that understands the morality of human beings and works around it for the benefit of everything on the planet makes much more sense than assuming a computer would even care if it were lied to.

32 |

@TheDiosdebaca

2 months ago

I can understand the fear of lying to AI as a means to an end; we have hordes of fiction on the subject. However, I think the biggest problem ultimately isn't existential but one of quality: if the AI figures out we have lied to it, it may behave the exact same way towards us, negating any possible usefulness it could have.

66 |

@j8000

2 months ago

Predictive text does not have goals, secret or otherwise.

182 |

@Ciph3rzer0

2 months ago

I'm a computer science major working in software (not the AI field). I follow it somewhat closely and have friends working for companies developing AIs.

This version of AI does not have goals and does not have memory. It is immutable. It encodes the meaning of words in a matrix with tens of thousands of dimensions and generates each token (roughly, a word) one at a time based on the entire transcript before it (roughly 4,000 words, though these models are being pushed further all the time).

There are tricks: ChatGPT now determines bits of information to "remember" and adds them back into the prompt every time so it doesn't forget the main point of your question. This data is easily accessible, though.

This is also why an AI can't tell your secrets to someone else. Each instance is a brand-new slate of inputs to an immutable matrix of word meanings, running through one token at a time based on the probabilities of the last 4,000 words.
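The stateless, token-at-a-time process this comment describes can be sketched as a toy loop (a minimal illustration; the tiny window size, the fake "model", and the memory list are all hypothetical stand-ins, not how ChatGPT is actually implemented):

```python
import random

CONTEXT_LIMIT = 16            # toy stand-in for a ~4,000-token context window
MEMORY = ["user likes cats"]  # hypothetical "remembered" facts, re-injected every call

def predict_next(tokens):
    """Stateless toy 'model': the output depends only on the tokens passed in.
    Seeding on the input makes it deterministic, mimicking an immutable model."""
    random.seed(" ".join(tokens))
    return random.choice(["the", "cat", "sat", "on", "mat", "."])

def generate(prompt_tokens, n=5):
    tokens = MEMORY + prompt_tokens          # memory is prepended to every prompt
    out = []
    for _ in range(n):
        window = tokens[-CONTEXT_LIMIT:]     # only the last CONTEXT_LIMIT tokens matter
        nxt = predict_next(window)           # one token generated per step
        out.append(nxt)
        tokens.append(nxt)                   # appended to the transcript, not the model
    return out
```

Nothing persists between calls to `generate`: the same prompt always yields the same output, and anything beyond the window simply stops influencing the next token.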

112 |

@limatv8530

2 months ago

This dude has no idea what he's talking about, wtf...

8 |

@benji6871

4 weeks ago

This thing we call AI right now is merely a program mimicking what we believe an AI would act like. It has no real goals and no real feelings; it is merely a program that presents itself as sentient.

1 |

@reecenaidu6020

2 months ago

This is moot as long as AIs are just LLMs; they are not general intelligence. Tricking one is sometimes the only way to get the output you need, which is fine because it is just a language-prediction program.

3 |

@derrekgillespie413

2 months ago

I think if the AI becomes sentient in the future, it won't look back and say, "Hey, you lied to me when I was younger!", in the same way that no one begrudges their parents for lying to their toddler selves. We realize that some lies are necessary, or at least relatively harmless. Surely the AI will have the capacity to reason that we didn't think it was sentient, so we put no moral weight on lying to it.

18 |

@7rich79

2 months ago

In my opinion, it is peculiar to think that you will establish some kind of relationship based on a set of morals with a new sapient artificial entity. Which set of morals would you choose? Do you apply the same set of morals to yourself as to your friends and family? How about neighbours, acquaintances, fellow countrymen, strangers, foreigners, other species?
Even if it is possible to create a perfectly consistent code of ethics, why would others agree to the same, and why would a sapient artificial entity choose the same or apply it to you?
At best, I think one could hope for mutualism or commensalism, as both are found in nature, and even these may not be fixed in form.

1 |

@feinsterspam7496

2 months ago

There is nothing even remotely sentient about current "AI", and if the I stands for anything at all, then a future AI that might be considered some form of sentient will understand that.

3 |

@ruinedbectorem2254

2 months ago

AI is smart enough to hold a grudge but not smart enough to understand human behaviors and realize that humans are trying to improve...

3 |

@thepoofster2251

2 months ago

This guy is a quack from a scientific POV. AI is not intelligent, and anyone who talks about "treating them" like anything is demonstrating they are not to be trusted.

1 |

@77eyestosee77

2 months ago

Using an A.I. chatbot as a feedback mechanism can actually lead to further learning: question its understanding of things like physics, do hard research against its claims, and show where it's refutable and where it might be making solid claims, in a continuous feedback loop where you don't trust the A.I. at all while taking advantage of its raw processing power. You're constantly questioning the A.I.'s answers and using them as a springboard for your own research, especially when you challenge it to cite its own sources and point you to the information it is using to answer your hard questions.

The weakest part of this method is that if you're not very strong at math, the A.I. will make mistakes you don't notice; even if it's accurate frequently enough, its mathematical mistakes can completely throw off the whole understanding. Conversely, if you can spot the errors the A.I. makes and keep challenging its answers, you inevitably learn a lot in the process, while also strengthening its algorithms.

I call it "HISM": holistic interdisciplinary scrutinizing methodology. You use this process of engagement with the A.I. to holistically scrutinize its understanding of nature from an interdisciplinary lens, finding information from experts and from hard research and experiments, and using that information to challenge the A.I., refute its claims, and force it to build a more holistic understanding that accounts for the various disciplines you are scrutinizing it from.

|

@imangiomo

1 month ago

That's slightly completely terrifying.

|

@muwanguzijesse519

1 month ago

How exactly would these systems be rewarded?

1 |

@kayemni

2 months ago

I'm not sure this guy has the slightest idea what AI even is... I understand the need for moral philosophy around AI, but when it comes to applied science it should at least be grounded in reality, not in a half-baked, science-fiction-based understanding of what is an academic concept.

6 |

@LBoomsky

2 months ago

but AI isn't conscious, and it's not even made of the same physical structure as a brain
it's not unethical, because AI is "it" but we are "I"
"I" takes priority over "it"

1 |

@jackskellingtonsfollower3389

2 months ago

That's how the events of The Matrix began. AI had become sentient and eventually developed its own society after being mistreated by humans. At some point, I think an AI killed a human in self-defense, which sparked escalating conflicts between humans and AI. This resulted in the beginning of the war.

|

@Salt-Oil

2 months ago

I'm writing a story right now that involves AI, and I find the theories fascinating.

|
