Conversation chain
This is the most fundamental conversation chain. It enables you to chat with large language models.
It consists of the following three key components.
The LLM is the core of any AI application. The model receives message inputs, fills them into the prompt as parameters, and outputs the result. You can use models from several different providers:
OpenAI: provides models such as gpt-3.5-turbo and gpt-4. By clicking More operations, you can adjust advanced parameters such as temperature, completion length, or streaming.
We will continuously integrate models from other providers, so stay tuned.
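To make these parameters concrete, here is a minimal sketch of a direct call to an OpenAI chat model using the OpenAI Python SDK. The model name, prompt text, and parameter values are illustrative; the SDK reads the API key from the OPENAI_API_KEY environment variable.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7,   # sampling randomness
    max_tokens=256,    # upper bound on completion length
    stream=False,      # set True to stream tokens as they arrive
)
print(response.choices[0].message.content)
```

Higher temperature values make the output more varied, while a lower value makes it more deterministic; max_tokens caps how long the completion can be.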
A prompt is a natural-language instruction used to guide or inspire an AI model to complete a specific task.
You can take courses on prompt engineering to improve the effectiveness of your prompts.
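As a rough illustration of how message inputs are filled into a prompt as parameters, here is a minimal sketch in Python. The template text and the {question} parameter name are purely illustrative, not part of the product.

```python
# A simple prompt template: the user's message input fills the {question} parameter.
PROMPT_TEMPLATE = (
    "You are a helpful assistant for customer support.\n"
    "Answer the following question concisely:\n"
    "{question}"
)

def build_prompt(question: str) -> str:
    """Insert the incoming message into the prompt template."""
    return PROMPT_TEMPLATE.format(question=question)

print(build_prompt("How do I reset my password?"))
```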
The model itself does not retain state between calls, so many applications (for example, chatbots) need to track previous interactions with the model as part of the interface.
For this, you can add memory when setting up the Conversation chain. Memory lets you chat with the AI as if it remembers previous conversations. For example:
Human: hi i am bob
AI: Hello Bob! It's nice to meet you. How can I assist you today?
Human: what's my name?
AI: Your name is Bob, as you mentioned earlier.
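Conceptually, memory works by replaying the accumulated conversation history to the model on every call. The sketch below reproduces a conversation like the one above using the OpenAI Python SDK; the model name and system prompt are illustrative assumptions, and a real Conversation chain may manage history differently.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "memory" is simply this growing list of messages,
# sent back to the model on every call.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("hi i am bob"))
print(chat("what's my name?"))  # answerable only because the earlier turns are replayed
```

Without the history list, the second question could not be answered, because each call to the model starts from a blank state.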