OpenAI Chat Completion Action

Hello everyone,

Several questions about the Chat Completion action of the OpenAI plugin:
If I add two message objects:

  • one {Role: User; Content: user_prompt; …}
  • and one {Role: Assistant; Content: assistant_response; …},

does this mean that the assistant will take these old messages into account when generating its next response (e.g., to simulate a conversation history)? Will the assistant then reply to the user’s last message (the one positioned lowest)? Or is this a completely different usage?

It is also possible to select “System” as a message role. Does this mean that the Secured prompt I selected in the “System content” field is not my system prompt?

Thank you!

It’s a completely new conversation every time. You have to re-feed the old messages if you want the AI to remember them…
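To make that concrete, here is a minimal sketch of what “re-feeding” looks like, using a plain fetch against the Chat Completions endpoint (outside the plugin; the model name and all message contents are placeholders). Every request carries the full history, and the model replies to the last user message:

```typescript
// The Chat Completions API is stateless: each request must carry the
// whole conversation. All message contents below are placeholders.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

const messages: ChatMessage[] = [
  { role: "system", content: "You are a helpful assistant." }, // system prompt, sent once per request
  { role: "user", content: "Earlier user message" },           // past turn, re-fed
  { role: "assistant", content: "Earlier assistant reply" },   // past turn, re-fed
  { role: "user", content: "The new question" },               // the model answers this one
];

async function chat(msgs: ChatMessage[]): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "gpt-4o-mini", messages: msgs }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // reply to the last user message
}
```

Note the single system message at the top: presumably the plugin’s “System content” field is simply injected as that first entry, which would also explain why adding your own “System” message can make the system prompt appear twice.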

Thank you @raelyn! So what is the purpose of the messages that can be added to the request? What are the use cases?

Check out the Xano + WeWeb template, the video, and the Xano snippet for a better understanding.

Basically, it allows message history retrieval from backend tables. History can be stored per user (by their user_id) and, if wanted, per session_id (i.e., a specific conversation). See the sketch below.
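A rough sketch of that retrieval pattern (the endpoint URL and the table fields user_id, session_id, role, content, created_at are hypothetical; adapt them to your actual Xano table):

```typescript
// Hypothetical backend table: one row per message, keyed by user and session.
type StoredMessage = { role: "user" | "assistant"; content: string; created_at: number };

async function loadHistory(userId: string, sessionId: string): Promise<StoredMessage[]> {
  // Hypothetical Xano endpoint filtering the messages table by user and session
  const res = await fetch(
    `https://your-workspace.xano.io/api:example/messages?user_id=${userId}&session_id=${sessionId}`
  );
  return res.json();
}

// Rebuild the messages array for the next Chat Completion request
async function buildMessages(userId: string, sessionId: string, newPrompt: string) {
  const history = await loadHistory(userId, sessionId);
  return [
    { role: "system", content: "You are a helpful assistant." }, // placeholder system prompt
    // Oldest first, so the model reads the conversation in order
    ...history
      .sort((a, b) => a.created_at - b.created_at)
      .map(({ role, content }) => ({ role, content })),
    { role: "user", content: newPrompt }, // the turn the model should answer
  ];
}
```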

Thanks @Zacharyb! I must have lost my focus when Quentin dealt with it… So it’s definitely a solution for simulating conversation history. Awesome.
I still feel like the system prompt appears twice, but never mind :thinking:

Yeah, also check out what LiteLLM has to offer, or even Cloudflare LLM functions.
I don’t know if your app is focused only on vanilla OpenAI, but the plugin is not mandatory to build a proper LLM app.
It can be useful if you use it as one request in a chunking > embedding > store > retrieval (if needed) > return response pipeline, as sketched below.
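For context, a minimal sketch of that chunking > embedding > store > retrieval flow (in-memory store, cosine similarity, naive fixed-size chunking; the embedding model and chunk size are assumptions, not a recommended setup):

```typescript
// Minimal RAG sketch: chunk -> embed -> store -> retrieve.
// In-memory store and naive chunking; use a real vector DB in production.
type Chunk = { text: string; embedding: number[] };

const store: Chunk[] = [];

async function embed(input: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "text-embedding-3-small", input }),
  });
  const data = await res.json();
  return data.data[0].embedding;
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Chunking + embedding + storing
async function ingest(doc: string, size = 500): Promise<void> {
  for (let i = 0; i < doc.length; i += size) {
    const text = doc.slice(i, i + size);
    store.push({ text, embedding: await embed(text) });
  }
}

// Retrieval: top-k chunks most similar to the query
async function retrieve(query: string, k = 3): Promise<string[]> {
  const q = await embed(query);
  return [...store]
    .sort((a, b) => cosine(b.embedding, q) - cosine(a.embedding, q))
    .slice(0, k)
    .map((c) => c.text);
}

// To return a response: prepend the retrieved chunks as context and call
// the Chat Completions endpoint with the user's question, as in the
// earlier sketch.
```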
