Views: 9,390
Genre: Entertainment
Uploaded: Apr 25, 2023
Warning: ReturnYouTubeDislike counts may not be accurate; this is just an estimate.
Rating: 4.789 (22/396 LTDR)
94.74% of users liked the video.
5.26% of users disliked the video.
User score: 92.11 (Overwhelmingly Positive)
RYD date created: 2023-12-27T20:26:11.861429Z
Top comments on this video:
Context window limitation: until some form of long-term memory is implemented, it will only work with text that is in its context window.
For every message it is fed the last N tokens and tries to predict the next one. If the information is not contained within those last N tokens, it does not exist for the GPT.
9 |
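The "last N tokens" behavior described above can be sketched as a sliding window over the conversation. This is a minimal illustration, not any model's actual implementation; it uses a crude one-token-per-word estimate in place of a real tokenizer:

```python
# Minimal sketch of a sliding context window. Assumes a whitespace
# split as a stand-in for the model's real tokenizer (hypothetical).

def trim_to_context(messages, max_tokens=4096):
    """Keep only the most recent messages that fit within max_tokens."""
    kept = []
    total = 0
    # Walk backwards from the newest message; older text that no longer
    # fits falls outside the window and is effectively "forgotten".
    for msg in reversed(messages):
        n = len(msg.split())  # crude token estimate: one token per word
        if total + n > max_tokens:
            break
        kept.append(msg)
        total += n
    return list(reversed(kept))
```

Anything returned by `trim_to_context` is all the model "knows"; everything older simply does not exist for it.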
The way I handle long chats when I'm developing is to pick the single thing it is making, and when it gets off topic, remind it what it was working on. In this case that reminder should be cliff notes: you want longer queries, and you would present it with what it's been working on as the prompt and go from there.
In my experience, ultimatums will not do much of anything.
The other option is to use what I call chunking. That query looks like:
"I'm going to give you a long description of what you are currently working on. It's possible that this will go over your current text buffer. If that is the case, I don't want you to process the request; I want you to tell me when you're ready for the next line of documentation, and I will give it to you."
Another option is to ask it to clear its memory before chunking. Throughout all of this, the goal is to make the total object you're working on smaller: if your prompt is more efficient, then your answer will also be more efficient.
2 |
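The chunking approach described above can be sketched as splitting a long document into pieces small enough to fit the buffer and feeding them one at a time. `send_to_model` is a hypothetical stand-in for a real API call, and the size limit is an arbitrary example:

```python
# Hedged sketch of the "chunking" strategy: split a long text into
# pieces, then hand them over one at a time, asking the model to hold
# its answer until the last piece arrives.

def chunk_text(text, max_chars=2000):
    """Split text into chunks of at most max_chars, breaking on newlines."""
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if len(current) + len(line) > max_chars and current:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks

def feed_in_chunks(text, send_to_model, max_chars=2000):
    """Send each chunk with a preamble asking the model to wait."""
    for i, chunk in enumerate(chunk_text(text, max_chars)):
        send_to_model(f"Part {i + 1}. Reply 'ready' for the next part.\n{chunk}")
```

The per-chunk preamble mirrors the quoted prompt above: process nothing until all the documentation has been delivered.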
I feel like certain parameters would need precise keywords to make sure it keeps reminding itself of every detail from the start of the conversation: store the entire conversation and replay it before giving an answer, so that further progress is an easier task, advancing the productivity, functionality, and reliability.
Edit: I haven't the first clue about coding, but that's my logical view on making it more successful.
|
@MrC0MPUT3R
1 year ago
GPT-4 has a context limit of 4,096 tokens where a token is about 0.7 words.
40 |
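The comment above quotes a rough ratio of about 0.7 words per token. A sketch that applies that ratio to budget-check a prompt; the exact figure varies by tokenizer, and the 4,096 limit is just the number quoted in the comment:

```python
# Rough token estimate from a word count, using the ~0.7 words-per-token
# ratio quoted above. Real tokenizers (BPE and similar) vary, so treat
# this only as a planning heuristic.

def estimate_tokens(text, words_per_token=0.7):
    """Estimate the token count of text from its word count."""
    words = len(text.split())
    return round(words / words_per_token)

def fits_in_context(text, limit=4096):
    """True if the estimated token count fits within the context limit."""
    return estimate_tokens(text) <= limit
```

A prompt that fails `fits_in_context` is a candidate for the chunking approach described earlier in the thread.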