WeWeb Copilot and AI: Rate limit exceeded. Try again tomorrow!

Hello, I have not used Copilot and AI recently, but I'm getting "Rate limit exceeded. Try again tomorrow!"

Does it need any type of configuration? I’m on a yearly Scale subscription.
I was able to use it only one time when it was first announced but haven’t been able to since then.

Thank you

It's not you. OpenAI must feel like WeWeb has used its fair share of resources.

I see… is there a way to use my own OpenAI API key in Copilot and the formula AI, so it's not drawing from a shared pool?

I love the Copilot. Over the last few weeks it has become a truly relevant part of my development process in WeWeb. I think it is one of the most valuable AI additions in no-code software today.

Over the last three days the rate limit has been permanently exceeded, no matter what time of day it is.

Can you please fix that or allow injection of my own OpenAI key? That would be greatly appreciated :pray:

Kind regards
Marc

I logged a bug about this two weeks ago. It’s still in ‘New’ status, which doesn’t bode well for a speedy resolution…

I just had this happen as well… I wish I could use my own API key lol. I guess I’m stuck opening another thread asking a question the Copilot could’ve answered.

Hey there :wave:

We are well aware of the issue and have been looking into it but there’s no easy resolution unfortunately. The tech team (product and devs together) are trying to figure out what the best approach would be.

Allowing you to add your own API key would definitely be an amazing solution and we hope we can make it happen, but it's also the one that would require the most dev work, so we're not sure when we can prioritize it.

We’ll keep you posted!

I'm guessing you've vectorized your docs and use them to power the context window for OpenAI? Any chance you guys wanna open source the training set?

Could be a fun experiment to set up your own WeWeb chatbot with Supabase and WeWeb :rofl:
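For anyone curious what the retrieval half of such a bot looks like, here's a minimal sketch, assuming the doc chunks have already been embedded somewhere. The file names and the tiny 3-dimensional "embeddings" below are toy placeholders, not real model output:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k_chunks(query_embedding, doc_embeddings, k=2):
    # Rank pre-embedded doc chunks by similarity to the query embedding
    # and return the k best; these would be stuffed into the LLM prompt.
    scored = sorted(
        doc_embeddings.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [chunk for chunk, _ in scored[:k]]

# Toy 3-dimensional "embeddings" standing in for real model output.
docs = {
    "formulas.md": [0.9, 0.1, 0.0],
    "workflows.md": [0.1, 0.9, 0.0],
    "styling.md": [0.0, 0.1, 0.9],
}
print(top_k_chunks([0.8, 0.2, 0.0], docs, k=1))  # ['formulas.md']
```

With pgvector in Supabase, the `sorted` call would be replaced by a `<=>` distance query so the ranking happens in the database instead of in Python.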

Haha it could be a nice idea indeed!

We have a small Python server that handles the LLM vectorization, which we intend to switch over to a vector DB. Maybe we'll share it after we've done that :slight_smile:

Hey @Quentin @Joyce! I wanted to check in on this.

I have been using WeWeb for the first time over the last few days, and I was having good luck with the AI generator.

I, too, have just run into the "Rate limit exceeded" error, truthfully after not very many attempts.

I’m moving from Bubble and up until this, the app has been great!

However, I would say that for the last hour or so, trying to use the builder without the AI has honestly been very difficult, particularly understanding exactly how to format the formulas.

Is there a reason we are rate limited? It shouldn't be a big deal: with a card on file with OpenAI, they allow up to 60,000 requests per minute. How do we remove the rate limiting? Otherwise, it's very tough.

Thanks - Jon

Hi @Jonny :wave:

Thanks for reporting it! The tech team will look into it.

The reason we rate limit per user / per month is because the cost per request is high and we’d like to keep it a free feature for everybody :slight_smile:

***** tokens.

Requests per minute is closer to 3,500, or as low as 200, depending on the model.

Requests and tokens are two different things. And my guess is they're using GPT-4, if we're getting rate limited.
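The distinction can be sketched with a couple of lines of Python. The limit numbers below are hypothetical, not OpenAI's actual values for any specific model or tier:

```python
def within_limits(requests_this_minute, tokens_this_minute,
                  rpm_limit=200, tpm_limit=40_000):
    # A call can pass the requests-per-minute check and still be
    # rejected on tokens, so the two limits must be tracked separately.
    return requests_this_minute < rpm_limit and tokens_this_minute < tpm_limit

print(within_limits(10, 50_000))  # False: few requests, but token budget blown
print(within_limits(10, 1_000))   # True: under both limits
```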

If you are doing this on a per-user basis, maybe show the total tokens a user has left for the day after each chat transaction?

It would be a less jarring experience than a "rate limit exceeded" error. Or even clean that language up, like: "You've used up all your free chat tokens for the day. Check back in X hours or visit the community for support."
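A rough sketch of how that friendlier message could be generated. The function name, wording, and limits here are made up for illustration, not anything WeWeb actually ships:

```python
def quota_message(requests_used, daily_limit, hours_until_reset):
    # Friendlier wording than a bare "Rate limit exceeded" error.
    remaining = max(daily_limit - requests_used, 0)
    if remaining > 0:
        return f"You have {remaining} free Copilot requests left today."
    return (
        f"You've used up all {daily_limit} free Copilot requests for today. "
        f"Check back in {hours_until_reset} hours or visit the community for support."
    )

print(quota_message(3, 10, 6))   # still under the limit
print(quota_message(10, 10, 6))  # limit reached
```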

Yeah, agreed! We just created the product ticket to add this info so it doesn't take people by surprise.

We’re also looking to increase the limits.

We requested an increase too :rofl: the limits on GPT-4 are ridiculous.

:shushing_face: :zipper_mouth_face: We actually use a hopper of keys and pick which one to use at random to help avoid the issue.
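For the curious, that key-rotation trick can be sketched in a few lines. The class name and key values are hypothetical, and a real version would presumably also track per-key cooldowns after 429 responses:

```python
import random

class KeyHopper:
    # Pick an API key at random per request so the load (and the
    # rate-limit pressure) is spread across several accounts.
    def __init__(self, keys):
        self.keys = list(keys)

    def pick(self):
        return random.choice(self.keys)

hopper = KeyHopper(["sk-key-1", "sk-key-2", "sk-key-3"])
key = hopper.pick()
# The chosen key is then attached to the outgoing request, e.g.:
# headers = {"Authorization": f"Bearer {key}"}
print(key in hopper.keys)  # True
```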

Sorry, I meant the 3,500 requests per minute, which is a ton lol.

Anyways, @Joyce, totally get it. Obviously, the easy solution is probably to provide a certain number of free requests on the free plan and have the user add their own API key beyond that. What you ideally don't want is to have no solution for the AI and inhibit people from building.

Personally, I'm happy to upgrade to get more GPT credits to use in the AI builder, even on a smaller plan.

Thoughts?

Yeah, you summarized exactly what we’re thinking :slight_smile:

I built my own.
I started from scratch, but basically my model uses documentation from WeWeb, plus other data source extensions, to extend the efficiency and possibilities even further.

pgvector 0.5.0 launched in Supabase two days ago. I just want to try reducing dimensions and better managing collections, vectors, and indexes, with good structured and unstructured data sources.

It will be a ChatGPT-like UI in WeWeb.

But since I'm solo, working on many areas from design to LLMOps, and haven't raised funding yet, it's hard to find time to focus on it right now. I need to push the next update of QreamUI first!

I'm using LangChain as the framework, with GPT-4 and OpenAI functions.
Currently experimenting with an open-source model that has very, very good performance.
Anyways!
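Purely as an illustration of the dimension-reduction idea mentioned above (this is a naive random projection, not what pgvector itself does, and the dimensions are just example values):

```python
import random

def random_projection_matrix(in_dim, out_dim, seed=0):
    # Fixed-seed Gaussian random projection; the same matrix must be
    # reused for every vector so distances stay roughly comparable.
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(in_dim)] for _ in range(out_dim)]

def project(vec, matrix):
    # Map a high-dimensional embedding down to len(matrix) dimensions.
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

# e.g. shrink a 1536-dim embedding to 256 dims before storing it.
matrix = random_projection_matrix(in_dim=1536, out_dim=256)
small = project([0.01] * 1536, matrix)
print(len(small))  # 256
```

Smaller vectors mean smaller indexes and faster similarity search, at the cost of some retrieval accuracy.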

Does WeWeb already have a solution for this? I'm getting this message: "Rate limit exceeded. Try again tomorrow!" @Joyce

Hi @Brendon :wave:

If this is the message you get when trying to use WeWeb Copilot, it means you exceeded the number of requests you can make to WeWeb Copilot in a day.