Connect ChatGPT API

Hey all!

Are there any resources on how to connect to the ChatGPT API (and on how to send requests + display the output)?

I’ve done this with other tools, but I’m struggling to figure it out here.

Thanks!

There are no current resources for this as far as I know.

Where are you stuck?

Workflows with REST API requests + liberal use of variables are what I’d focus on.

I tinkered with this a few months back.

I used a custom component for mine. I installed NodeJS (version 16.x, because the latest versions of NodeJS didn’t work with WeWeb last I checked), then npm-installed openai following the API instructions on their website.

When you get to your wwElement.vue, use something like this:

<template>
  <div class="my-element">
    <input v-model="input_text" id="user-input" autocomplete="off">
    <button @click="sendText" id="sendButton">
      Request
    </button>
    <input v-model="generated_text" id="gpt-result" type="hidden">
  </div>
</template>

<script>
import { Configuration, OpenAIApi } from 'openai';

export default {
  props: {
    content: { type: Object, required: true },
  },
  data() {
    return {
      input_text: '',
      generated_text: null,
    };
  },
  methods: {
    async sendText() {
      // Initialize the OpenAI API client with the key bound in the editor
      const configuration = new Configuration({
        apiKey: this.content.apiKey,
      });
      const openai = new OpenAIApi(configuration);
      // Generate text from the user's input using the GPT-3 completions endpoint
      const response = await openai.createCompletion({
        model: 'text-davinci-003',
        prompt: this.input_text,
        temperature: 0,
        max_tokens: 150,
      });
      // Store the generated text on the component's data property
      this.generated_text = response.data.choices[0].text.trim();
    },
  },
};
</script>

Your component in the weweb editor should look like this:
(screenshot of the component in the WeWeb editor)

Not editable inside the editor, but good enough for a start!

In this case the API key is bindable inside the editor; you’ll have to create the appropriate ww-config.js file for that purpose. The WeWeb dev documents helped me achieve the above :slight_smile:
Hopefully that should steer you in the right direction.

There is a big disclaimer at the top of the readme of the library:

Important note: this library is meant for server-side usage only, as using it in client-side browser code will expose your secret API key.

And also from the openai docs:

Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps). Production requests must be routed through your own backend server where your API key can be securely loaded from an environment variable or key management service.

1 Like

A chatbot is actually what I’m trying to create!

I’ve done it before with other tools, but I was using the follow-along guides as a crutch, so I’m having difficulty just getting started, if that makes sense.

I guess a good starting point is:

  • How do I connect to the API?
  • How do I send prompt requests?
  • How do I display the output?

@dolirama yes of course - the way I have it set up to play around with for now is to have Xano send the API key through my user authentication, which I then bind to the custom component for it to use. Is that alright?

From the docs it’s crystal clear that calls to the API must be made from your backend.

@aeynaud you are still sending your API key to your users, so they can now use it to make any call to OpenAI on your behalf without using your app. A custom element is not the tool you need here.

@shrek the option “make this request through a server” in WeWeb is only meant as a workaround for APIs that don’t handle CORS. All the request settings are still in your WeWeb app and are transmitted to the browser, so your API key is exposed. You should use a dedicated backend.

1 Like

I think I got it - does this look accurate?

  1. Receive input from the user in the WeWeb app
  2. Send the input to the backend (in my case, Xano)
  3. Send the request to the ChatGPT API through Xano
  4. Send the output back to the WeWeb app

Or is there a “better” way of doing it?
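For reference, the front-end half of those four steps can be sketched like this (the endpoint path and response field are assumptions about whatever backend you use — the key point is that the browser only ever calls your own backend):

```javascript
// Hypothetical front-end helper: send the user's input to your own backend,
// which holds the OpenAI key and forwards the request server-side.
async function askChatbot(userInput, endpoint = '/api/chat') {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: userInput }),
  });
  if (!res.ok) throw new Error(`Backend returned ${res.status}`);
  const data = await res.json();
  return data.reply; // whatever field your backend uses for the AI's answer
}
```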

3 Likes

That is indeed how it should be done :slight_smile:

1 Like

Thank you @dorilama :slight_smile:

1 Like

Hi @shrek here is a snippet that should give everything you need to do the following:

  1. Start a new conversation -
    Inputs:
    (Conversation Title) kinda irrelevant, just so you can name the conversation.
    (System Prompt) this is the first prompt to chatGPT which helps train / set context for the role of the conversation.

  2. Continue conversation -
    Inputs:
    Conversation ID (As created in Step 1)
    Content (your message to send)

  3. Summarize conversation -
    Inputs
    Conversation ID

Includes 2 data tables
Conversations
Messages

Environment Variable:
OpenAPI Key - just add your key in and your APIs should be ready to go.

Let me know if you have any questions. Please note I haven’t tidied this snippet up as I was speed-building at the time, so please adjust/delete as necessary.

Edit:
I’ve just noticed I missed an API to pull a conversation into your front end (all the messages within a conversation).
You can set this up with the following configuration, noting the sorting used to ensure the messages are displayed in the correct order.

You could then set this API up as a collection.

Your workflow for having a conversation is then quite simple: when the user has typed their message, add a workflow to your send button that posts to the continue-conversation API. Once that completes, refresh the collection, which will return the new response received from ChatGPT.
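That send-button workflow might look roughly like this in code (the endpoint path, field names, and the collection-refresh hook are assumptions for illustration, not the snippet’s actual API):

```javascript
// Hypothetical send-button handler: post the user's message to the
// "continue conversation" endpoint, then refresh the messages collection
// so the new AI reply appears in the front end.
async function sendMessage(conversationId, content, refreshCollection) {
  await fetch('/api/conversations/continue', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ conversation_id: conversationId, content }),
  });
  // Once the backend has stored the reply, refetch the bound collection.
  await refreshCollection();
}
```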

1 Like

Very cool project!

FYI, we are working with @Locky to offer a Xano + WeWeb template that enables you to build your own chatbot.

Would love to know what everyone here would like to see in the template and/or what use cases you’re looking into so we can try to make things as fun and easy for you as possible :slight_smile:

Can’t promise we’ll be able to include everything but we can try! :smile:

1 Like

I found this video by the makers on building a backend with Xano and OpenAI’s API. It can be useful for seeing how it works, and that’s what I’ll be doing instead :sweat_smile:
Build a Backend using the OpenAI (GPT-3) API - YouTube

1 Like

Totally, this video will be more informative than my snippet. Just to note, I work at Xano as well, so I can also help with getting the provided snippet working :slight_smile:

Something to note: I set my snippet up specifically to be used as a chatbot, and it references the GPT-3.5 model, which I believe differs from the video tutorial.

The reason is that the ChatGPT API works by constructing a conversation from multiple messages as an array, whereas the example in the video asks a single question and receives a response, without contextual awareness of previous messages.

Something to consider based on the functionality you are looking to produce.
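To make that difference concrete, here’s a sketch of the messages array the chat completions API expects. The `role`/`content` field names follow OpenAI’s chat API; the shape of the stored history is an assumption:

```javascript
// Builds a chat completions payload: the entire conversation travels with
// every request, which is how the model keeps context between turns.
function buildChatPayload(systemPrompt, history, newUserMessage) {
  return {
    model: 'gpt-3.5-turbo',
    messages: [
      { role: 'system', content: systemPrompt }, // sets the bot's role/context
      ...history,                                // prior user/assistant turns
      { role: 'user', content: newUserMessage }, // the new message
    ],
  };
}

// Example: the second question keeps context from the first exchange.
const payload = buildChatPayload(
  'You are a helpful support bot.',
  [
    { role: 'user', content: 'What is WeWeb?' },
    { role: 'assistant', content: 'A no-code front-end builder.' },
  ],
  'Does it work with Xano?'
);
```

This is why the snippet stores messages in a table per conversation: each new request replays the accumulated history.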

2 Likes

Adalo has a nice interface that lets you add variables into a variety of OpenAI requests. There might be elements of it that could spark ideas for a WeWeb implementation. OpenAI: Prompt Completion in Your Apps | Integrations | Adalo: Build Apps without Code #nocode - YouTube

1 Like

Thanks for sharing!

I think you will enjoy what we have in store for you very much :slight_smile:

1 Like

The OpenAI plugin is now available :slight_smile:

It addresses the security issues mentioned earlier in the conversation.

To clarify, when you use the OpenAI plugin in WeWeb:

  • the API key you use to configure the plugin will not be visible in the user’s browser
  • the content of the secure prompts you configure in the plugin will not be visible either

In the example below, we made a request to the OpenAI’s chat completion API. You can see in the payload that the questions of the user and the answers from the AI are visible in a messages variable. That’s completely fine since the user already knows this.

But if you’re familiar with the OpenAI docs, you might notice that the payload doesn’t show the first prompt that tells gpt-4 what personality and instructions the AI should follow. That’s because we configured that in a secret prompt at plugin level. The only thing the user can see in the browser is the id of that secure prompt but that’s not helpful to them.
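As an illustration only (these field names are assumptions, not the plugin’s actual payload shape), the browser-visible part of such a request might look like:

```javascript
// Roughly what the user could see in their browser's network tab:
// the conversation itself is visible, but the secret system prompt is
// replaced by an opaque id that resolves only on WeWeb's side.
const visiblePayload = {
  messages: [
    { role: 'user', content: "What's your refund policy?" },
    { role: 'assistant', content: 'Refunds are available within 30 days.' },
  ],
  securePromptId: 'prompt_abc123', // hypothetical field: the secret prompt's id
};
```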

Hopefully this clarifies things. If not, please let us know so we can make our user docs clearer.

7 Likes