Let me start by saying that I have enjoyed using the AI tools in general; however, I want to highlight some things that I believe could be improved.
Governance: Whilst the AI tool is useful and works fine most of the time, I have no visibility into how many tokens are being used at any particular time. If I want to see how many tokens I have left, I have to go back to the dashboard and look at the workspace. Perhaps a token counter could be added in the AI UI, near the toggle that switches between the AI and traditional editors.
Token usage: My plan affords me 10 million AI tokens per month, but using them is like throwing money into a black hole. I have no idea whether the task I ask the AI to complete will use 10 tokens or 1 million, so I don’t know how to shape my requests to get the best bang for my buck. Is there any way to give us visibility into how many tokens a request will use before we hit the submit button?
AI Commenting: WeWeb is a low-code solution, but the AI raises the bar in complexity for someone like me. When I ask it to create something, it almost invariably uses some JS or a formula that makes everything more complicated and goes against the ethos of low-code. (In my current project it created a formula for the menu that I can’t figure out.) When I ask Gemini to code something up for me, it adds a whole bunch of comments clearly explaining what it is doing. Could we get something similar happening in WeWeb?
Thanks so much for taking the time to share your feedback on this.
Re governance: 100% agree.
Re token usage:
Ha! That’s a great idea! I’m not sure how we would go about it, because I don’t think we can predict how much a request will cost before actually making it, but I love the idea. I’ll share it with the team.
Re AI commenting: yeah, definitely! I think the team already started working on this. In fact, I believe it was pushed to production yesterday. Can you check? This is an example of an AI-generated formula from one of my projects:
Oh yes! Even just the input tokens would be great, because at least we would know how much data is being sent to the AI. Sometimes we may be accidentally sending things we don’t want to.
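For what it’s worth, a rough input-token estimate is possible client-side even without the provider’s real tokenizer. A minimal sketch, assuming roughly 4 characters per token (a common rule of thumb for English text, not WeWeb’s or Claude’s actual billing ratio):

```typescript
// Rough input-token estimate, shown before a prompt is submitted.
// The 4-chars-per-token ratio is an assumption (a common average for
// English text); actual billing depends on the provider's tokenizer.
const CHARS_PER_TOKEN = 4;

function estimateInputTokens(prompt: string, context: string = ""): number {
  const totalChars = prompt.length + context.length;
  return Math.ceil(totalChars / CHARS_PER_TOKEN);
}

// A 120-character prompt comes out to ~30 tokens under this heuristic.
console.log(estimateInputTokens("a".repeat(120)));
```

Even a coarse counter like this, wired to the prompt box plus whatever workspace context gets attached, would reveal when far more data is being sent than expected.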
Yes, or even just a “Thinking Mode” on/off switch. I have a hard time understanding why such a quick thing hasn’t been implemented, since it requires no training, etc.
Or even be able to choose 3.5 Haiku, if everything is run through Claude models.
Again, sorry if I’m repeating myself, but for simple tasks a thinking mode is on average less efficient than a non-thinking model, according to several studies.