Understanding how our LLM works
In the previous sections, we covered the core concepts behind LLM technology. In this section, building on that knowledge, we will explain how NsfwGPT.ai operates at a lower level.
What happens when you send a message (prompt)?
As mentioned previously, an LLM is stateless (it does not retain any data between requests), so we store each message you send (the user prompt) and the chatbot's reply in a database. When you continue the chat later, we retrieve your chat history with that chatbot from the database and concatenate it into a single complete text that we send to the LLM. This makes the LLM aware of the previous conversation, so it can generate replies based on that context.
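The retrieve-and-concatenate step can be sketched as a small function. This is an illustrative assumption, not our actual implementation: the function name, the shape of the history records, and the `{{user}}` placeholder are hypothetical, while the `<USER>`/`</END_USER>` and `<BOT>`/`</END_BOT>` tags and the `{{char}}` placeholder come from the template described on this page.

```python
def build_prompt(history, new_message, char_name="{{char}}", user_name="{{user}}"):
    """Concatenate stored chat turns into the single text sent to the LLM.

    `history` is assumed to be a list of completed exchanges, e.g.
    [{"user": "...", "bot": "..."}]. Names and structure are illustrative.
    """
    parts = []
    for turn in history:
        parts.append(f"<USER>{user_name}: {turn['user']}</END_USER>")
        parts.append(f"<BOT>{char_name}: {turn['bot']}</END_BOT>")
    # Append the new message, then an *unclosed* <BOT> tag:
    # the model is expected to complete it up to </END_BOT>.
    parts.append(f"<USER>{user_name}: {new_message}</END_USER>")
    parts.append(f"<BOT>{char_name}:")
    return "\n".join(parts)

history = [{"user": "Hi!", "bot": "Hello there."}]
prompt = build_prompt(history, "How are you doing today?")
print(prompt)
```

Note that the final line of the prompt is deliberately left open; the stored database rows only ever contain completed `<USER>`/`<BOT>` pairs.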
Prompt Template
Once we retrieve the chat records, we concatenate them into a format the LLM can understand. The format is approximately as follows:
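The original page showed the template itself; a plausible sketch, assuming only the tag names and the `{{char}}` placeholder described on this page (the `{{user}}` placeholder and the example messages are illustrative assumptions):

```
<USER>{{user}}: Hi!</END_USER>
<BOT>{{char}}: Hello there.</END_BOT>
<USER>{{user}}: What are you up to?</END_USER>
<BOT>{{char}}: Just thinking about you.</END_BOT>
```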
As shown above, each round of conversation you have with the chatbot inserts a new `<USER></END_USER>` and `<BOT></END_BOT>` pair into the complete history record.
When you send a new message (prompt), such as "How are you doing today?", we append the following instruction at the end:
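The original page showed this appended fragment; a plausible sketch, assuming the `{{user}}` placeholder by analogy with the `{{char}}` placeholder mentioned below:

```
<USER>{{user}}: How are you doing today?</END_USER>
<BOT>{{char}}:
```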
You may notice that the final <BOT> tag is NOT complete. This is intentional: we need the LLM to complete the content of this tag, which is the reply itself. In other words, when we send <BOT>{{char}}:, the LLM knows it should generate the text that follows, stopping once it emits </END_BOT> to signal that the reply is complete.