Hello, I haven't used the Copilot AI recently, but I'm getting "Rate limit exceeded. Try again tomorrow!"
Does it need any type of configuration? I’m on a yearly Scale subscription.
I was able to use it only one time when it was first announced but haven’t been able to since then.
I love the Copilot. Over the last few weeks it has become a truly relevant part of my development process in WeWeb. I think it is one of the most valuable AI additions in NoCode software today.
For the last three days the rate limit has been permanently exceeded, no matter what time of day it is.
Can you please fix that or allow me to plug in my own OpenAI key? That would be greatly appreciated.
I just had this happen as well… I wish I could use my own API key lol. I guess I’m stuck opening another thread asking a question the Copilot could’ve answered.
We are well aware of the issue and have been looking into it, but unfortunately there's no easy resolution. The tech team (product and devs together) is trying to figure out what the best approach would be.
Allowing you to add your own API key would definitely be an amazing solution, and we hope we can make it happen, but it's also the one that would require the most dev work, so I'm not sure when we can prioritize it.
I have been using WeWeb for the first time over the last few days, and I was having good luck with the AI generator.
I, too, have just run into the "rate limit exceeded" error, truthfully after not very many attempts.
I'm moving from Bubble, and up until this the app has been great!
However, I would say that for the last hour or so, trying to use the builder without the AI has honestly been very difficult, especially understanding exactly how to format the formulas.
Is there a reason we are rate limited? It shouldn't be a big deal; with a card on file, OpenAI allows 60,000 requests per minute. How do we remove the rate limiting? Otherwise, it's very tough.
If you are doing this on a per-user basis, maybe show the total tokens a user has left for the day after each chat transaction?
That would be a less jarring experience than a "rate limit exceeded" error. Or even clean up that language, e.g. "You've used up all your free chat tokens for the day; check back in X hours or visit the community for support."
Sorry, I meant 3,500 requests per minute, which is a ton lol.
Anyways, @Joyce, totally get it. Obviously, the easy solution is probably to provide a certain number of free requests on the free plan and let the user add their own API key; what you ideally don't want is to have no solution for the AI and inhibit people from building.
Personally, I'm happy to upgrade to get more GPT credits to use in the AI builder, even on a smaller plan.
I built my own.
I started from scratch, but basically my model uses the WeWeb documentation, plus other data source extensions, to extend its efficiency and possibilities even further.
pgvector 0.5.0 launched in Supabase two days ago; I just want to try reducing dimensions and better managing the collection's vectors and indexes, with a good data source built from both structured and unstructured content.
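Roughly the kind of setup I have in mind (just a sketch; the table and column names are placeholders, and I'm assuming 1536-dim OpenAI embeddings). The part I want to try is the HNSW index type that pgvector 0.5.0 adds:

```sql
-- Enable pgvector and store doc chunks alongside their embeddings.
create extension if not exists vector;

create table doc_chunks (
  id bigserial primary key,
  source text,              -- e.g. WeWeb docs vs. other data source extensions
  content text,
  embedding vector(1536)    -- assumed: OpenAI text-embedding-ada-002 dimensions
);

-- pgvector 0.5.0 adds HNSW indexes for approximate nearest-neighbour search.
create index on doc_chunks using hnsw (embedding vector_cosine_ops);
```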
It will be a ChatGPT-like UI in WeWeb.
But since I'm solo, working on many areas from design to LLMOps, without any funding raised yet, it's hard to find time to focus on it right now. I need to push the next update of QreamUI first!
I'm using LangChain as the framework, with GPT-4 and OpenAI functions.
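For anyone curious, this is roughly the shape of the retrieval side (a sketch only: it assumes a 2023-era LangChain API, a Supabase pgvector table exposed through a match_documents function, and placeholder table/env-var names; the OpenAI functions part sits on top of this and isn't shown):

```python
import os

from supabase import create_client
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SupabaseVectorStore
from langchain.chains import RetrievalQA

# Supabase client for the project that holds the pgvector table.
supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_SERVICE_KEY"])

# Vector store over the embedded doc chunks (table/RPC names are assumptions).
store = SupabaseVectorStore(
    client=supabase,
    embedding=OpenAIEmbeddings(),
    table_name="doc_chunks",
    query_name="match_documents",
)

# GPT-4 answers questions grounded in the retrieved chunks.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-4", temperature=0),
    retriever=store.as_retriever(search_kwargs={"k": 4}),
)

print(qa.run("How do I bind a collection to a repeated item in WeWeb?"))
```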
I'm currently experimenting with an open source model that has very, very good performance.
Anyways