Video id: e4-43LR3vOo
9,390 views • Apr 25, 2023
Metadata And Engagement

Views : 9,390
Genre: Entertainment
Uploaded: Apr 25, 2023


Warning: Return YouTube Dislike data may not be accurate; this is just an estimate.
Rating: 4.789 (22/396 LTDR)

94.74% of users liked the video!
5.26% of users disliked the video!
User score: 92.11 (Overwhelmingly Positive)

RYD date created : 2023-12-27T20:26:11.861429Z
Tags
Connections
No connections found in the description.

49 Comments

Top comments on this video

@MrC0MPUT3R

1 year ago

GPT-4 has a context limit of 4,096 tokens where a token is about 0.7 words.

40 likes
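
A quick way to sanity-check token claims like this is to count them directly. A minimal sketch using OpenAI's tiktoken library (for reference, the 4,096-token figure matches GPT-3.5-era ChatGPT; base GPT-4 shipped with an 8,192-token window):

```python
# Count how many tokens a prompt consumes, using OpenAI's tiktoken library.
# The 8,192-token limit below is the base GPT-4 window of this era; other
# variants (e.g. 32k) differ.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

text = "Long conversation history goes here..."
tokens = enc.encode(text)
print(f"{len(tokens)} tokens used out of an 8,192-token window")
```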

@craigheinze8258

1 year ago

Sounds just like most advertising and consulting agencies I've dealt with... 😂

13 likes

@fibulawars

1 year ago

Context window limitation. Until some long-term memory is implemented, it will only work with text that is in its context window.

For every message it is fed the last N tokens, and it tries to predict the next. If the information is not contained within those last N tokens, it does not exist for the GPT.

9 likes
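
That "last N tokens" behavior is easy to emulate client-side. A minimal sketch (reusing tiktoken as above, with an assumed 8,000-token budget) that trims chat history to fit before each request:

```python
# Keep only as many recent messages as fit in a fixed token budget,
# mimicking how older context simply falls out of the model's window.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

def truncate_history(messages, max_tokens=8000):
    kept, used = [], 0
    for msg in reversed(messages):          # walk from newest to oldest
        n = len(enc.encode(msg["content"]))
        if used + n > max_tokens:
            break                           # everything older is dropped
        kept.append(msg)
        used += n
    return list(reversed(kept))             # restore chronological order
```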

@sergeymatpoc

1 year ago

Exactly what I was talking about in video #1.
But, as I mentioned before, some models have a larger "dictionary" (context window), and I think it's just a matter of time before we see larger models.

2 likes

@WilliamLarsten

1 year ago

Just like MrC0MPUT3R said… it starts forgetting previous stuff because it can only "remember" so much. A way to hack that is to ask it to summarize/pick out the essentials from its earlier conversation so it retains the essence of the context. Kind of like how our long-term memory works.

4 likes
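
That summarize-and-carry-forward trick can be automated. A rough sketch against the OpenAI chat completions API, where the model name and prompt wording are placeholder choices, not anything from the video:

```python
# When history grows too long, replace it with a model-written summary.
# Sketch only: the model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

def compress_history(messages):
    summary = client.chat.completions.create(
        model="gpt-4",
        messages=messages + [{
            "role": "user",
            "content": "Summarize the essentials of this conversation "
                       "so we can continue from the summary alone.",
        }],
    ).choices[0].message.content
    # Start a fresh, much shorter history seeded with the summary.
    return [{"role": "system", "content": f"Conversation so far: {summary}"}]
```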

@Renfield286

1 year ago

Export the chat as a PDF, then continue with a new conversation within Code Interpreter.

1 like

@dustinoverbeck

1 year ago

Did ChatGPT say to shake the camera so much to give viewers a sense of an earthquake?

1 like

@helo1345

1 year ago

The way I do it with long chats when I'm developing is to take the single thing it is making, and if it gets off topic I remind it what it was working on. In this case it should be cliff notes: you want longer queries, and you would want to present it with what it's been working on as the prompt and then go from there.

The ultimatum will not do much of anything, in my experience.

The other option is to use what I call chunking. That query looks like:

"I'm going to give you a long description of what you are currently working on. It's possible that this will go over your current text buffer. If that is the case, I don't want you to process the request; I want you to tell me when you're ready for the next line of documentation, and I will give it to you."

Another option is to ask it to clear its memory before doing chunking. During all of this, the goal is to make the total object you're working on smaller. If your prompt is more efficient, then your answer will also be more efficient.

2 likes
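
A programmatic take on that chunking idea. The sketch below uses a rolling summary so each request stays small regardless of document length; the chunk size, model name, and prompt wording are all illustrative, not from the video:

```python
# Process a long document chunk by chunk, carrying a rolling summary so
# no single request overflows the context window. Chunk size is arbitrary.
from openai import OpenAI

client = OpenAI()

def summarize_in_chunks(document, chunk_chars=4000):
    summary = ""
    for start in range(0, len(document), chunk_chars):
        chunk = document[start:start + chunk_chars]
        reply = client.chat.completions.create(
            model="gpt-4",
            messages=[{
                "role": "user",
                "content": f"Running summary so far:\n{summary}\n\n"
                           f"Next chunk of the document:\n{chunk}\n\n"
                           "Update the running summary to cover this chunk too.",
            }],
        )
        summary = reply.choices[0].message.content
    return summary
```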

@mamotalemankoe3775

1 year ago

Remind it first, then threaten it if it still doesn't remember. Interesting experiment.

2 likes

@GODT1TAN

10 months ago

I feel like certain parameters would need precise keywords to make sure it keeps reminding itself of every detail from the start of any conversation: store the entire conversation and replay it before giving your answer, so that further progress can be an easier task, advancing the productivity, functionality, and reliability.
Edit: I haven't the first clue about coding, but that's my logical view on making it more successful.


@michaelchachashvili1098

1 year ago

The same happened to us... it doesn't remember multiple things.


@CaribbeanCryptoTips

1 year ago

new chat for each video


@Saiyugi16

1 year ago

Time to migrate to your own implementation using the OpenAI GPT-4 API, Pinecone or the Azure Search API, and LangChain to definitely surpass that limit. You need long-term memory right now, or a store from which to retrieve the history to be used as context 😊

1 like
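
The retrieval idea in that comment, reduced to its core: embed past exchanges, then pull back the most relevant ones as context for each new prompt. A minimal sketch with OpenAI embeddings and a plain Python list standing in for a real vector database like Pinecone (model and helper names are illustrative):

```python
# Long-term memory via embeddings: store past exchanges as vectors and
# retrieve the most similar ones to prepend as context. The in-memory
# list stands in for a real vector database such as Pinecone.
import numpy as np
from openai import OpenAI

client = OpenAI()
memory = []  # list of (embedding, text) pairs

def embed(text):
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

def remember(text):
    memory.append((embed(text), text))

def recall(query, k=3):
    q = embed(query)
    # OpenAI embeddings are unit-length, so dot product == cosine similarity.
    scored = sorted(memory, key=lambda m: -np.dot(m[0], q))
    return [text for _, text in scored[:k]]
```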

@graymars1097

1 year ago

I think you just gave ChatGPT another idea for a subscription model: unlimited chat logs, a data retention period, or a database 😄

1 like

@getofftheline-motoring2266

1 year ago

😂 you made it an interesting intern

2 likes

@aestheticstudio007

1 year ago

Yeah, I want to see what ChatGPT would say about shutting down its YouTube channel. Threatening an AI about turning down its work sounds interesting 😂

4 likes

@levizwannah

1 year ago

It's stateless, and there is a limit on context size.

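"Stateless" means the API remembers nothing between calls: the client has to resend the whole conversation every time, and whatever gets trimmed to fit the context limit is gone as far as the model is concerned. A tiny sketch of that pattern (illustrative, not the video's code):

```python
# The chat API is stateless: each call must resend the entire history,
# so the client is responsible for keeping state between turns.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question):
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # keep state ourselves
    return answer
```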

@israelquito3072

1 year ago

No comments "kaya" in this matter, I'm still learning from you guys!! 😅😁

1 like

@victorencarnacion9245

9 months ago

I noticed the same thing… I have to restate parts of the opening prompt.

