Copilot absolutely needs to not have a rate limit

Hello WeWeb team :wave:

First of all, massive congrats on your amazing no-code tool.
It’s so much fun to use and it feels like magic.

Now, I just wanted to tell you guys, you really really need to find a way to let us ask more questions to copilot. I have absolutely 0 coding knowledge and this AI was GOLD for the time I was able to ask it stuff.

I WANT to pay for more tokens, I don’t care how much it costs, I just want to have the option to continue to talk to it.

By the way, 2 issues that need fixing:

  1. There was no warning that a limit existed before I hit it… it would be nice to have one.
  2. It is super buggy, and every time it fails to answer a question, my guess is that it deducts tokens anyway.

I love you guys and I want you to continue growing.
I know you are very close to your community and I know you will read this post.

Thanks :+1:
Will


Hi @WilliamB :wave:

100% agree with this :slight_smile:

AI integration in WeWeb is one of our 2024 priorities. Revamping how WeWeb Copilot works will be part of this effort. We are exploring a few different options to increase / remove rate limits, including allowing you to add your own API key.


Thanks for your answer @Joyce. I really hope to see this implemented soon because this feature is one of your best.

Cheers!

Hey @WilliamB, thanks for the great feedback. I’d love to get more feedback on your Copilot usage through a quick call, if you’re up for it. If so, feel free to write me at raphael@weweb.io. Thanks!

Hi @Raphael, with great pleasure. I’m writing to you asap.

Hello! Just wondering which LLM you use and what the overall LLMOps setup looks like?
I imagine it’s something like GPT-3.5, given the rate limiting?
IMO, this kind of use case is part code interpreter, part doc retrieval through a vector DB (btw: HNSW, pgvector, something else?), with in-context answers.

Because I think it’s total overkill if it all goes through GPT, imo (and I’m not just talking about Groq possibilities). It could instead be a broader LLM app that picks model X or Y based on the task at hand, so for smaller, less complex queries it could choose a smaller model, even something like Phi-3, or Haiku, which is very powerful too.
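To make the routing idea concrete, here is a minimal sketch of task-based model selection. The model names, thresholds, and the complexity heuristic are purely illustrative assumptions on my part, not anything WeWeb actually uses:

```python
# Hypothetical model router: send simple queries to a cheap model,
# complex ones to a stronger (pricier) model.
# All names and thresholds here are illustrative assumptions.

def estimate_complexity(prompt: str) -> int:
    """Crude heuristic: longer prompts and certain keywords
    suggest a harder task."""
    score = len(prompt.split())
    if any(kw in prompt.lower() for kw in ("workflow", "formula", "debug")):
        score += 50
    return score

def route_model(prompt: str) -> str:
    """Pick a model tier based on the estimated complexity."""
    score = estimate_complexity(prompt)
    if score < 20:
        return "phi-3-mini"        # small, cheap model for trivial questions
    elif score < 60:
        return "claude-3-haiku"    # fast mid-tier model
    return "gpt-4"                 # strongest option for hard tasks

print(route_model("What does this button do?"))   # short, no keywords
print(route_model("Please debug this workflow"))  # keyword bumps the score
```

In a real setup, the heuristic would more likely be a small classifier or the provider’s own routing layer, but the cost-saving principle is the same: don’t pay top-tier prices for bottom-tier questions.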

Anyway, I’m more than aware of the costs that can be behind this, but I also know it’s something that could be optimized, even financially, on your end.