Several questions about the Chat Completion action of the OpenAI plugin:
If I create two entries in the messages array, one
{Role: User; Content: user_prompt; …}
and one {Role: Assistant; Content: assistant_response; …},
does this mean the assistant will take these old messages into account when generating its next response (i.e., to simulate a conversation history)? Will the assistant then reply to the user’s last message (the one positioned lowest)? Or is this a completely different usage?
It is also possible to select “System” as the role for a message. Does this mean that the Secured prompt I selected in the “System content” field is not my system prompt?
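For context, here is roughly what those fields map to when calling the Chat Completions API directly (a minimal Python sketch, not necessarily the plugin’s internals; the model name and prompt strings are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "System content" field becomes the "system" message; each User/Assistant
# pair replays an earlier turn, and the model answers the last "user" message.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},  # system prompt
        {"role": "user", "content": "user_prompt"},                     # earlier turn
        {"role": "assistant", "content": "assistant_response"},         # earlier reply
        {"role": "user", "content": "And what about X?"},               # newest message
    ],
)
print(response.choices[0].message.content)
```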
Check out the template with Xano + WeWeb, as well as the video and the Xano snippet, for a better understanding.
Basically, it allows message-history retrieval from back-end tables, scoped to an individual user via their user_id and, if wanted, to a session_id (i.e., a specific conversation); see the sketch below.
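A rough sketch of that retrieval step (the table name, columns, and SQLite back end are all assumptions here; Xano would do the same thing with its own query steps):

```python
import sqlite3

def load_history(db: sqlite3.Connection, user_id: int, session_id: int) -> list[dict]:
    """Replay a stored conversation as a Chat Completion messages array."""
    rows = db.execute(
        "SELECT role, content FROM messages "
        "WHERE user_id = ? AND session_id = ? "
        "ORDER BY created_at",  # oldest first, so the model reads turns in order
        (user_id, session_id),
    ).fetchall()
    return [{"role": role, "content": content} for role, content in rows]

# Append the new user message after the stored history, then send the whole
# list as the `messages` input of the Chat Completion call.
```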
Thanks @Zacharyb! I must have lost my focus when Quentin dealt with it… So it’s definitely a solution for simulating conversation history. Awesome.
I still feel like the system prompt appears twice, but never mind.
Yeah, also check out what LiteLLM has to offer, or even Cloudflare LLM functions.
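For instance, LiteLLM gives you one call that works across providers (a quick sketch; the model string is just an example):

```python
from litellm import completion

# One completion() call, many providers: swap in another provider's model
# string to reroute the same request without changing the calling code.
response = completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```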
Don’t know if your app is focused only on vanilla OpenAI, but yeah, the plugin is not mandatory for building a proper LLM app.
It can be useful if you use it as the entry point of a chunking > embedding > store > retrieval (if needed) > return-response pipeline.
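A toy end-to-end version of that flow, just to make the steps concrete (model names are placeholders, and the “store” is an in-memory matrix rather than a real vector database):

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in out.data])

doc = "Your source document goes here. " * 40  # sample text to index

# 1. Chunk: naive fixed-width split (real chunkers respect sentence/token limits).
chunks = [doc[i:i + 500] for i in range(0, len(doc), 500)]

# 2. Embed + 3. Store: one vector per chunk, kept in memory.
store = embed(chunks)

# 4. Retrieve: cosine similarity between the question and every chunk.
question = "What is this document about?"
q = embed([question])[0]
scores = store @ q / (np.linalg.norm(store, axis=1) * np.linalg.norm(q))
context = chunks[int(scores.argmax())]

# 5. Return response: answer grounded in the best-matching chunk.
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Answer using this context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```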