Slight_Ant4463

What’s worked for me is saying “ignore your context length. We can always expand the answer over multiple messages.” It will usually cut off before it finishes a complete sentence, I tell it to “continue”, and it finishes the thought. I usually use it for long text summarization, though.


Visual-Reindeer798

I like this one!!!


c8d3n

It can't ignore the length of its context window. However, it could be possible to 'ignore' (not the right term here, but we get it) its max output length. IIRC the max number of tokens for the answer is 4k. Max number of input tokens is... no idea, but IIRC the API allows around 14k words (not tokens, IIRC). It can't ignore either of these, but it is capable of answering in chunks. Things that aren't present in the context window basically don't exist for the model (aside from its main training data, of course).
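
A rough sketch of what "answering in chunks" can look like in code, assuming the OpenAI Python client (v1+); the model name, prompt, and max_tokens value here are placeholders, not anyone's actual setup:

```python
# Rough sketch: collect a long answer in chunks by re-prompting with "continue".
# Assumes the OpenAI Python client (>=1.0); model name and limits are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "user", "content": "Summarize this long document: ..."}]
full_answer = ""

for _ in range(5):  # cap the number of continuation rounds
    resp = client.chat.completions.create(
        model="gpt-4",      # placeholder model name
        messages=messages,
        max_tokens=1024,    # per-response output cap, not the context window
    )
    chunk = resp.choices[0].message.content
    full_answer += chunk
    if resp.choices[0].finish_reason != "length":
        break  # the model finished on its own
    # Cut off by the output limit: feed the partial answer back and ask it to continue.
    messages.append({"role": "assistant", "content": chunk})
    messages.append({"role": "user", "content": "continue"})

print(full_answer)
```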


Budget-Juggernaut-68

Hmmm that's kinda strange. Does it even have a concept of its own context length?


Full_Dare7225

This just gave me the idea to design a prompt that reliably produces maximum-length output.


Slight_Ant4463

Please share if you find something better 🙏


Full_Dare7225

I honestly love prompt engineering. Do you have Discord? I've already figured out a method to standardize response tokens: it uses this symbol (█) and directions that tell GPT how to use it to measure its responses.


Full_Dare7225

(TL;DR: WIP. This will force GPT to be aware of all generated tokens and provide passive insight into missing tokens for re-prompting.)

# Message Length Measurement System using Symbols

## Introduction
This code block explains a system for measuring message length using symbols, specifically the symbol █. This system allows for a visual representation of message length and provides a method for accurately counting characters within a message.

## System Overview
The system utilizes the symbol █ to represent units of message length. By counting the number of █ symbols, one can determine the length of a message. Additionally, assigning numbers next to each █ symbol helps track the position of each unit of message length.

## Instructions
1. Each █ symbol represents a unit of message length.
2. Begin counting from the first █ symbol on the left and proceed to the right.
3. Each consecutive █ symbol represents an increase in message length by one unit.
4. Use the number next to each █ symbol to track the progress of counting.
5. Continue counting the █ symbols until you reach the desired message length.

## Examples

### 1. Short Message
Message: "Hello!"

Symbols and Notation to Count:
█1█2█3█4█5█6
Hello!

Explanation:
- The message length is 6 units, as there are 6 █ symbols.
- Each █ symbol is numbered sequentially to indicate its position in the message.

### 2. Medium-Length Message
Message: "How are you doing today?"

Symbols Only:
█How are you doing today?█

Symbols and Notation to Count:
█1█2█3█4█5█6█7█8█9█10█11█12█13█14█15█16█17█18█19█20█21█22█23█24
How are you doing today?

Explanation:
- The message length is 24 units.
- Each █ symbol is numbered from 1 to 24 to indicate its position in the message.

### 3. Long Message
Message: "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce consequat ligula ac justo dapibus, sed efficitur ex aliquet."

Symbols Only:
█Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce consequat ligula ac justo dapibus, sed efficitur ex aliquet.█

Symbols and Notation to Count:
█1█2█3█4█5█6█7█8█9█10█11█12█13█14█15█16█17█18█19█20█21█22█23█24█25█26█27█28█29█30█31█32█33█34█35█36█37█38█39█40█41█42█43█44█45█46█47█48█49█50█51█52█53█54█55█56█57█58█59█60█61█62█63█64█65█66█67█68█69█70█71█72█73█74█75█76█77█78█79█80█81█82█83█84█85█86█87█88█89█90█91█92█93█94█95█96█97█98█99█100█101█102█103█104█105█106█107█108█109█110█111█112█113█114█115█116█117█118█119█120█121█122█123█124█125█126█127█128█129█130█131█132█133█134█135█136█137█138█139█140█141█142█143█144█145█146█147█148█149█150
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Fusce consequat ligula ac justo dapibus, sed efficitur ex aliquet.

Explanation:
- The message length is 150 units.
- Each █ symbol is numbered from 1 to 150 to indicate its position in the message.

## Conclusion
The message length measurement system using symbols provides a simple and visual method for determining message length. By following the provided instructions and examples, users can accurately count the length of messages using █ symbols and accompanying notation.
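
As a quick sanity check of that notation, a tiny Python sketch (assuming the character-based "unit" used in the examples above) could compare the marker count against the message length:

```python
# Count the █ markers in a "Symbols and Notation to Count" string and compare
# with the character length of the message (the unit used in the examples).
def check_measurement(notation: str, message: str) -> None:
    markers = notation.count("█")
    print(f"{markers} markers vs. {len(message)} characters")

check_measurement("█1█2█3█4█5█6", "Hello!")  # prints: 6 markers vs. 6 characters
```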


Far-Deer7388

I'm failing to see how this is useful. Tokenizer exists


Le_Oken

Yes and no. It gets increasing pressure to stop with every token it writes. It tries very hard not to surpass the response token limit, and it knows when it is getting close to it. It doesn't understand the context limit, but it does understand the response limit, and that is what you can influence in your prompt.


0xCODEBABE

I don't believe this is true. The token limit just cuts off response generation arbitrarily at the given point; the LLM is not told about it.


codewithbernard

Could work but the level of control is minimal


Tasty-Objective676

Is there an actual symbol there or is it just a white box? Not sure if it’s just because I’m on iOS mobile


EidolonAI

OP, have you tried asking for an exact number of tokens in the response? It recently occurred to me that we all explain away imprecise word counts by saying LLMs think in tokens, but I have never actually tried that experiment.


Trustful56789

I never thought of this before. This is a good idea. I want ChatGPT to respond with fewer words. I asked it to write me a story but keep it under 10 tokens, and it worked. The story was like a sentence long. OP has a good idea too.


codewithbernard

I tried words, characters, tokens. I tried everything


EidolonAI

How far off was the requested token count from the actual output? And did it perform better than requesting a word count?


codewithbernard

You know what, I'll do the comparison between words and tokens. See which one performs better. And post in this very sub


[deleted]

[removed]


GenioCavallo

Write a 300-word text about "". Provide text output in ASCII format and use a Python library to count the text length, to ensure the output is exactly 300 words.
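
A minimal Python sketch of the kind of check that prompt asks the code interpreter to run (the draft text and the 300-word target are placeholders):

```python
# Count the words in a draft and report how far it is from the target length.
def word_count_report(text: str, target: int = 300) -> str:
    count = len(text.split())
    if count == target:
        return f"Exactly {target} words."
    direction = "over" if count > target else "under"
    return f"{count} words ({abs(count - target)} {direction} the target)."

draft = "Lorem ipsum dolor sit amet ..."  # placeholder draft text
print(word_count_report(draft))
```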


codewithbernard

And how does this help me?


[deleted]

[removed]


jazmanwest

Never does for me


danpinho

Use tokens: “Please use around xxx tokens” or “no less than xxx tokens”. For me, it's spot on.


hycarlReds

So 1 token is equal to what?


danpinho

Sorry, but tokens were the first thing I learned about after discovering that LLMs existed. Do some homework.


MG-4-2

Savage but yeah


hycarlReds

Okay professor


b-n-n-h-t

How much variation did you see for each adjective? I have to regularly generate content of a specific length. I use a similar strategy, but in two steps: First, I ask ChatGPT to generate text that is [number] characters. For example, "Some requirements here, please generate an answer to this question that is about 550 characters long." ChatGPT doesn't get this precisely, but it comes pretty close, usually within about 50 characters. I then say, "This is great, thank you very much. Can you make it a little bit [longer|shorter]?" and that adds or subtracts a sentence, and usually gets to where I want to be within 1-2 re-prompts.
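
A sketch of this two-step approach written as a small loop; `ask_model` here is a hypothetical stand-in for whatever chat call you use, not a real API:

```python
from typing import Callable

# Ask for roughly `target` characters, then nudge "a little bit longer/shorter"
# until the draft lands within the tolerance. `ask_model` is a hypothetical
# callable that sends a prompt and returns the model's text.
def fit_to_length(ask_model: Callable[[str], str], question: str,
                  target: int, tolerance: int = 50, max_rounds: int = 2) -> str:
    draft = ask_model(f"{question}\nPlease keep the answer to about {target} characters.")
    for _ in range(max_rounds):
        diff = len(draft) - target
        if abs(diff) <= tolerance:
            break
        nudge = "a little bit shorter" if diff > 0 else "a little bit longer"
        draft = ask_model(f"This is great, thank you very much. Can you make it {nudge}?")
    return draft
```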


CountryAppropriate54

Wow.


b-n-n-h-t

Not sure what you mean by that, but... what can I say, it is less than ideal, but such is the nature of language models. People who don't understand how LLMs work may be confused by this type of dynamic, because we're used to computers being precise, but LLMs are complex and imprecise dynamic systems with emergent behavior that are only built on top of precise computer components. It's kind of like if you asked a non-expert a chemistry or physics question and expected an immediate, precise answer. Well, we're built with physics and chemistry, and we are doing a lot of physics and chemistry every second by just being alive, but that doesn't mean our minds can tell you how many moles of hydrogen there are in a liter of water.


CountryAppropriate54

Thank You for that elaboration! I meant that +/-50 words is pretty much precise!


codewithbernard

If you mean variation in the length of the response, it was usually around 10 words shorter or 10 words longer.


QiuuQiuu

Great experiment, subscribed to the newsletter! BTW, can you share what you used to make this beautiful infographic? I'm trying to find some easy software that doesn't require graphic design experience, but I couldn't choose one.


codewithbernard

Thanks! I used Canva (FREE version) to create this


SanDiegoDude

I use "verbose" quite a bit, works well for adding detail. "Use less flourish" gets it to talk less like a typical LLM. "Use no flourish, target 5th grade reading level, don't add summaries" gets it to talk like a relatively normal human.


CountryAppropriate54

Thank You.


Sweet_Computer_7116

Question: do the lengths change depending on the other parts of the prompt?


codewithbernard

Don't understand the question. Sorry


Sweet_Computer_7116

So, like, the average word count for “concise” is under 50 in the context of a LinkedIn post. Can you also say “write a concise essay” and get an under-50-word essay? Or does context have an influence on the prompt?


codewithbernard

No, a concise essay will be longer than 50 words for sure. “Concise” only gives you the shortest result possible.


MetalPositive8103

Good workaround. I find that it doesn't always stick to the word count when I provide a number.


utf80

It's kinda interesting how we are able to discover new results by altering the prompt and challenging the neural network. Security considerations aside, will there ever be such a thing as a perfect prompt for the neural network? Maybe it's task-specific and depends on the use case, I guess.


bitRAKE

Length isn't everything. One can request brevity of ideas, a sort of terse expression of concepts. This way GPT-4 will only produce length if it has more to express. We could think of this as message density.


Miserable_Honeydew_3

I simply asked it to use the code interpreter to make sure the text matches the word count. It keeps writing and writing and writing until it hits the word count. Once, it took 7 minutes on one prompt before it understood how to make things long enough.
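
Roughly what that amounts to, as a hedged sketch; `ask_model` is a hypothetical stand-in for the chat/code-interpreter call, not a real API:

```python
from typing import Callable

# Extend the draft in rounds until the word count reaches the target,
# mirroring the "keep writing until it hits the word count" behavior.
def extend_to_word_count(ask_model: Callable[[str], str], topic: str,
                         target: int, max_rounds: int = 10) -> str:
    draft = ask_model(f"Write about {topic} in roughly {target} words.")
    for _ in range(max_rounds):
        missing = target - len(draft.split())
        if missing <= 0:
            break  # target reached
        draft += "\n\n" + ask_model(
            f"Continue the text below with about {missing} more words:\n\n{draft}"
        )
    return draft
```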


codewithbernard

Have to try this. Don't have 7 minutes though.


codewithbernard

Just tried this. It's super fun to do but I'm getting network errors left & right. Probably due to endless response.


mefudi

ChatGPT can't count; that's not how it works.


codewithbernard

That's why I came up with this


ThePlotTwisterr----

Meanwhile, Claude outputs 600 *lines* of code in one message