When querying the /completions API several times with a temperature of 0, I still observe some differences in the responses. Those differences are usually subtle but can be large for more complicated prompts. See the example below for reproduction.
Code
import requests
import json

completions = []
for i in range(3):
    print(i)
    res = requests.post(
        "https://api.openai.com/v1/completions",
        json={
            "prompt": "A logical",
            "model": "text-davinci-003",
            "temperature": 0.0,
            "max_tokens": 200,
        },
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    completions.append(json.loads(res.text)["choices"][0])

for c1 in completions:
    for c2 in completions:
        if c1["text"] != c2["text"]:
            print(c1["text"])
            print(c2["text"])
            print()

sam_nabla:
"prompt": "A logical",
@sam_nabla, do you expect the same completion every time, even at temp 0, for a prompt like the one above from a probabilistic LLM?
Yes, I do.
I interpret the temperature as the temperature of the softmax layer used for sampling (the last layer of the transformer); a temperature of 0 basically turns the sampling into an argmax: the next token chosen is the one with the highest score. So I expect the output to be deterministic.
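To make that reading concrete, here is a tiny sketch (illustration only, not the API's internals) of a temperature-scaled softmax over a toy vocabulary; as the temperature goes to 0 the distribution collapses onto the argmax:

import numpy as np

logits = np.array([2.0, 1.9, 0.5])             # toy scores for three tokens

def sample_probs(logits, temperature):
    if temperature == 0:                       # limit case: pure argmax
        p = np.zeros_like(logits)
        p[np.argmax(logits)] = 1.0
        return p
    z = logits / temperature
    z = z - z.max()                            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

for t in (1.0, 0.5, 0.1, 0.0):
    print(t, sample_probs(logits, t).round(3))
# As t -> 0, the probability mass concentrates on the single highest-scoring token.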
But maybe that’s not exactly what temperature means for this API?
sam_nabla:
But maybe that’s not exactly what temperature means for this API?
Hi @sam_nabla
I think you are right!
I ran your prompt 10 times at temp 0 and got the same completion 10 times in a row.
Testing
Ran it with these parameters: Model: text-davinci-003, Temperature: 0, Max Tokens: 1024, Completion Reason: stop
AgusPG February 24, 2023, 2:05pm
This is a very interesting question that has been around for some time. In my view, the most comprehensive answer was given here: A question on determinism
Even though your hypothesis is right in principle, @sam_nabla, it doesn’t hold empirically. Even with a greedy decoding strategy, small discrepancies in floating-point operations lead to divergent generations. In simpler terms: when the top two tokens have very similar log-probs, there is a non-zero probability of choosing the less probable one, due to the finite number of digits available for multiplying and storing the probabilities.
It should also be noted that, because decoding is autoregressive, once a different token has been picked the whole generated sequence will diverge, since that choice affects the probability of every subsequent token.
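As a toy illustration of the floating-point point (a sketch, not how the API actually computes logits): two tokens whose logits are the same sum of products, accumulated in a different order, can round to slightly different float32 values, which is enough to flip the greedy pick.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden state and an output-embedding row for "token A".
h = rng.normal(size=4096).astype(np.float32)
w = rng.normal(size=4096).astype(np.float32)

# "Token B" uses exactly the same weight/activation pairs, only permuted,
# so its logit is identical to token A's in exact real arithmetic.
perm = rng.permutation(4096)
logit_a = np.dot(h, w)                        # products summed in one order
logit_b = np.dot(h[perm], w[perm])            # same products, summed in another order

print(logit_a, logit_b, logit_a == logit_b)   # the float32 sums usually differ slightly
print("greedy pick:", "token A" if logit_a >= logit_b else "token B")
# Once a single position picks a different token, every later token is conditioned
# on it, so the whole completion can diverge from there.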
Hope that helps
Thanks @AgusPG for the link, it’s a very clear answer to my question.
This is one of those rare situations where you may also like to play with top_p.
It reduces the pool of candidate tokens and redistributes the probability mass over the remaining choices, which may make a difference in your case.
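For reference, here is a minimal sketch of what top_p (nucleus) filtering does to a toy next-token distribution. The numbers are made up for illustration; the API applies this server-side via the top_p parameter.

import numpy as np

probs = np.array([0.50, 0.30, 0.15, 0.05])    # toy next-token probabilities
top_p = 0.7                                   # hypothetical value, for illustration

order = np.argsort(probs)[::-1]               # tokens sorted by probability, descending
cum = np.cumsum(probs[order])
keep = order[: np.searchsorted(cum, top_p) + 1]  # smallest prefix whose mass reaches top_p

filtered = np.zeros_like(probs)
filtered[keep] = probs[keep]
filtered /= filtered.sum()                    # renormalise over the kept tokens
print(filtered)                               # roughly [0.625 0.375 0. 0.]: pool cut to the top two tokens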