WeWeb AI - My thoughts and suggestions for AWS Bedrock

Hello! Just a few thoughts (with English corrections so it's readable, ahah)

First off, it’s great to see WeWeb is on AWS Bedrock. That’s a solid strategic move that simplifies the tech stack. For what it’s worth, Claude is also a very capable model for coding, so we’re in a good position.

With that foundation, here’s a quick list of expert recommendations to elevate the implementation:

  • Enhance user-side transparency: Give users clear visibility into which model is being used, how it’s being used, and a simple way to track their credit costs.
  • Activate Guardrails: Implementing Bedrock Guardrails should be standard practice to enforce responsible AI policies from the start (quick sketch after this list).
  • Leverage cost-effective models: The current prompt routing (no Sonnet 4) is a smart trade-off. For most tasks, Sonnet 3.5 and Haiku are the cost-effective workhorses.
  • Orchestrate complex tasks: Implement a routing layer to manage multi-step workflows. This would allow for flexibly integrating the best model for the job, whether it’s Sonnet 4 for heavy lifting or specialized embedding and rerank models.
  • Implement caching & long-term memory: Leverage the native AWS toolset to build a robust caching layer and long-term memory, which is crucial for reducing latency and managing costs (see the prompt caching sketch below).
  • Establish clear AI governance: This is key for getting explicit user agreement on data usage. It not only builds trust but also allows WeWeb to unlock the full potential of its AI features.
  • Build a knowledge base: Setting up a dedicated knowledge base with a solid Retrieval-Augmented Generation (RAG) pipeline is a game-changer for agent accuracy (see the last sketch below).
  • Optimize AI spend: The current AI cost is unsustainable. Aggressively optimizing this spend has to be a critical focus.
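
To make the Guardrails point concrete, here's a minimal sketch of what attaching a guardrail to a Claude call could look like through the Bedrock Converse API with boto3. The region, model ID, and guardrail identifier/version are just examples, not what WeWeb actually runs:

```python
import boto3

# Bedrock runtime client (example region)
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[
        {"role": "user", "content": [{"text": "Generate a signup form layout."}]},
    ],
    # Placeholder guardrail: swap in the ID/version of a guardrail created in your account
    guardrailConfig={
        "guardrailIdentifier": "my-guardrail-id",
        "guardrailVersion": "1",
    },
)

# The guardrail filters both the prompt and the model output before you ever see them
print(response["output"]["message"]["content"][0]["text"])
```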
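
For the caching side, Bedrock's Converse API supports prompt caching through cachePoint blocks, so a long, stable system prompt (component docs, style rules, etc.) is only fully billed on the first request. Rough sketch, assuming a model and region where prompt caching is available; the model ID and prompt are placeholders:

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Imagine this is several thousand tokens of stable instructions/docs
system_prompt = "You are the WeWeb AI copilot. Here are the component docs: ..."

response = client.converse(
    # Example model ID: check which models support prompt caching in your region
    modelId="anthropic.claude-3-5-haiku-20241022-v1:0",
    # Everything before the cachePoint can be cached and reused across requests
    system=[
        {"text": system_prompt},
        {"cachePoint": {"type": "default"}},
    ],
    messages=[{"role": "user", "content": [{"text": "Add a login page to my app."}]}],
)

# Usage reports cache read/write token counts once caching kicks in
print(response["usage"])
```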
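
And for the knowledge base point, Bedrock Knowledge Bases come with a retrieve-and-generate call that does the RAG loop (retrieve, ground, answer) in a single request. Again just a sketch; the knowledge base ID and model ARN are placeholders for whatever WeWeb would set up:

```python
import boto3

# Knowledge Base queries go through the agent runtime client
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "How do I bind a collection to a repeated element?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            # Placeholders: your own KB ID and the model used to generate answers
            "knowledgeBaseId": "WEWEB-DOCS-KB-ID",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0",
        },
    },
)

# Grounded answer plus citations back to the retrieved chunks
print(response["output"]["text"])
```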

…and that’s all I’ve got, I’m starting to get sleepy


Hi Zach, thank you for your feedback!

Would you mind creating a feedback ticket so our dev team can sit down and analyse it?

Thanks :folded_hands:
