I wanted to share a quick note on the new AI Assistant feature. Like many of you, I was genuinely excited when it launched – the potential is huge, and it’s clearly a big step forward for WeWeb.
But now that I’ve been using it more regularly in my daily workflows, I have to admit I’m running into some pretty consistent issues. Sometimes it simply doesn’t launch. Other times, it gives up on very basic tasks that previously worked well – especially formula generation. In fact, I feel like the assistant used to be more helpful before the big AI release. At least it tried, whereas now it just refuses to answer and goes into error mode.
Also, one thing that feels like a missed opportunity: the assistant doesn’t seem to leverage the collections already bound in WeWeb (Xano, for example); instead, it creates its own variables. Using the existing bindings would make it so much more powerful and context-aware.
It’s a bit frustrating, especially given that I’m on the highest pricing tier – I expected the AI features to be more stable and usable.
Curious to hear your thoughts. Are you seeing similar patterns? Do you feel it’s improving, or are you also noticing regressions?
I’m hopeful it will get better, but right now, reliability feels like a critical gap – especially in a market where no-code tools with strong AI integration are multiplying fast.
Totally agree. It feels like it was better when it was only generating formulas; now it refuses to work most of the time, and the rare times it did work, the results were unusable. I completely stopped using it after all those tries because it feels like a waste of time.
Overall, I agree. I think it’s being fine-tuned constantly and new versions are shipping rapidly. But some of the previous versions of the AI were more reliable in terms of usefulness and responsiveness.
Now I experience the same thing you described: it may not do anything, or it may do something completely unexpected. And you just can’t stop it until it’s done. That’s frustrating.
So it becomes more reasonable to do most things manually, without AI. In the end, it takes less time than debugging the AI’s output.
Thanks for your message, @Totzy! You’re not the only one experiencing these issues with the new AI assistant; this tends to happen more often with larger projects.
The problems you mentioned are mainly due to how the AI currently handles the application’s context. Right now, we’re sending too much context, which can cause errors or lead to incorrect results.
Our team is actively working on this, and we’re aiming to deploy context management improvements by Wednesday the 9th of July. Formula generation should be much more reliable after that, and you should see fewer errors.
We don’t expect everything to be perfect right away, but it will be a significant improvement. We’ll keep iterating, so please continue sharing your feedback.
Our goal is to bring the AI generation to the level of “pure” AI-gen tools like Lovable or Bolt, while giving users full control over what’s generated through the no-code editor.
It’s a complex technical challenge, but we’re making steady progress every week, and it’s now our top priority.
Thanks again for your patience, and please keep the feedback coming!
Hello @yma, thanks for asking. We’re not tied to a single model. Our AI architecture leverages multiple models for different tasks, and it’s easy for us to switch between them. We regularly benchmark performance with internal evaluations, and so far Claude stands out as the clear leader for generating UI and logic in WeWeb.