Tutorial: Streaming Responses from OpenAI to WeWeb via Xano

Hey

I thought this might be helpful to anyone trying to set up streaming responses to WeWeb from OpenAI via Xano.
Check out the tutorial video here: https://youtu.be/fJUWLTwR024

Do let me know your comments, guys.

Thanks


Hey! How do you stream the Xano response to WeWeb via a REST API call in WeWeb? Or must we use the Xano plugin?

If you are streaming from Xano, why would you use the REST API plugin instead of the Xano plugin? Curious to know that…

Anyway, I don’t think the REST API plugin is equipped for streaming right now. We will need to go through the Xano plugin.

Ah, I see. :slight_smile:

Following the lambda code you have in the video, I was able to get the line breaks to show up in the rich text editor in WeWeb by turning them into `<br>`. However, I also have subheaders in my OpenAI streaming output, and the subheaders don’t show. Is there any way to get them to render? All I see is "## " in the rich text editor, but it doesn’t actually create an `<h2>` tag. Hope that makes sense…
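For reference, the line-break conversion described above can be sketched as something like this (the function name is mine, not from the video):

```javascript
// Hypothetical helper: replace newlines in a streamed chunk with <br>
// so the rich text editor renders the line breaks.
function newlinesToBr(chunk) {
  return chunk.replace(/\n/g, '<br>');
}
```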

Without looking into what Xano is actually responding with, I’m not sure I can debug it.
Can you post exactly what Xano outputs?

It’s just regular text output from OpenAI!

When I try to stream the same output over, I think Xano is parsing the data in a way that ruins the formatting, but I’m not sure how to fix it.

This is how it looks in WeWeb (without changing the `\n` to `<br>` as you suggested):

When I implemented your code to change all `\n` to `<br>`, the line breaks render correctly, but then my ## subheaders stop showing up. They appear literally as ## instead of being converted to an `<h2>` heading. I hope this makes sense…

When I concatenate everything and save it to my database, I can see that the paragraphing is retained. But this is not reflected in the streamed chunks.

How it normally shows up (without streaming or manipulation):

But when I implemented the same thing as in your video, the subheaders end up looking like this:

The output looks like Markdown.
ChatGPT uses it most of the time.
I am not 100% sure about the best way to deal with it.

But if I were you, I would look into a Markdown conversion lambda function,
or a Markdown converter on the front end with a JS function.

Something like this in WeWeb as a function:

function convertToMarkup(input) {
  let finalHtml = input;

  // Convert headings (# through ######), anchored to the start of a line
  // so a stray # mid-sentence is not turned into a heading
  finalHtml = finalHtml.replace(/^(#{1,6})\s+(.+)$/gm, (match, hashes, text) => {
    return `<h${hashes.length}>${text}</h${hashes.length}>`;
  });

  // Convert bold text first, so ** is consumed before the italic rule runs
  finalHtml = finalHtml.replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>');

  // Convert italic text
  finalHtml = finalHtml.replace(/\*(.+?)\*/g, '<em>$1</em>');

  // Convert links: [label](url)
  finalHtml = finalHtml.replace(/\[(.+?)\]\((.+?)\)/g, '<a href="$2">$1</a>');

  return finalHtml;
}
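One possible gotcha worth mentioning (my guess, not confirmed in this thread): if the `\n` → `<br>` replacement runs before the heading conversion, a line-start anchored pattern can no longer find the `## ` at the start of a line, so it stays literal. A minimal sketch of doing the conversions in the safer order:

```javascript
// Sketch: convert headings first, replace newlines last, so the
// line-start anchor (^ with the m flag) can still match "## " lines.
function markdownToHtml(input) {
  let html = input;
  html = html.replace(/^(#{1,6})\s+(.+)$/gm, (m, hashes, text) =>
    `<h${hashes.length}>${text}</h${hashes.length}>`);
  html = html.replace(/\n/g, '<br>'); // newlines last
  return html;
}
```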

Thanks! That was really helpful.

Do you know how to save the entire message from OpenAI into Xano?

Currently I’m trying to concatenate them all together to get the whole message output so I can save it.

However, it’s running into a problem where it uses too much memory in Xano and crashes the whole endpoint (probably from concatenating too many times)…

Is there a way to access the entire output in one go without having to keep concatenating everything to get the final value? Hope this makes sense! :sweat_smile::smiling_face_with_tear:
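For anyone hitting the same wall, one common pattern (a sketch on my part, not something from the video, and the handler names are assumptions): collect the chunks in an array and join them once at the end, instead of rebuilding the string on every chunk, since repeated string concatenation copies the whole accumulated string each time.

```javascript
// Sketch: accumulate streamed chunks and join once when the stream ends.
const chunks = [];

function onChunk(chunk) {
  chunks.push(chunk); // cheap per-chunk append, no string copying
}

function onStreamEnd() {
  // single join pass; save this full message to the database once
  return chunks.join('');
}
```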