Basic Concepts: Prompting Basics
Posted: Wed Feb 05, 2025 10:10 am
Model setup
When working with prompts, you interact with the large language model either directly or through an API. By configuring a few parameters, you can get different results from the same prompt.
Temperature: In simple terms, the lower the temperature value, the more deterministic the model's output. Increasing this parameter makes the model's responses more random, that is, more diverse or creative, because it effectively increases the weight given to other possible tokens. In practice, for fact-based tasks such as question answering, set a lower temperature so the model returns factual, concise answers. For creative tasks such as poetry generation, raise the temperature appropriately.
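To make this concrete, here is a minimal sketch in plain Python of how temperature reshapes a softmax distribution over next-token logits. The logit values are made up for illustration; real models compute them internally.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # Divide logits by the temperature before the softmax:
    # a low temperature sharpens the distribution (more deterministic),
    # a high temperature flattens it (more random/diverse).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token logits
cold = softmax_with_temperature(logits, temperature=0.2)
hot = softmax_with_temperature(logits, temperature=2.0)
```

At temperature 0.2 the top token takes nearly all of the probability mass, while at temperature 2.0 the mass is spread much more evenly across the candidates.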
Top_p: Similarly, top_p (used together with temperature in a technique called nucleus sampling) controls how deterministic the model's answers are. If you want factually accurate answers, set this value low. If you want more diverse answers, set it high.
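The nucleus-sampling idea can also be sketched in a few lines of Python: keep only the smallest set of tokens whose cumulative probability reaches top_p, then renormalize and sample from that set. The probability values below are invented for illustration.

```python
def top_p_filter(probs, top_p=0.9):
    # Sort token indices by probability, highest first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    # Keep the smallest prefix whose cumulative probability >= top_p.
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # Renormalize over the kept tokens; sampling is restricted to them.
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

probs = [0.5, 0.3, 0.15, 0.05]  # hypothetical next-token probabilities
filtered = top_p_filter(probs, top_p=0.8)
```

With top_p=0.8, only the two most likely tokens survive the filter; a lower top_p narrows the candidate set further, which is why low values give more focused, factual answers.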
Generally speaking, it is sufficient to change one of the parameters; there is no need to adjust both at the same time.
Before we look at some basic examples, please note that the final results generated may vary depending on the version of the large language model used.