Hey, WeWeb Community!
I’m excited to share a showcase project that demonstrates how Groq’s conversational AI models can be integrated into WeWeb components, as seen in the app.qredence.ai interface. It shows how WeWeb’s visual development platform can be used to build a customized chat interface that talks to multiple models through a REST API.
Try it out (no account needed): https://app.qredence.ai
Key Features:
- WeWeb-Powered Interface: The entire chat interface is built with WeWeb’s visual development platform, demonstrating its capabilities for building complex user interfaces.
- Groq Integration: The app talks to Groq’s Large Language Models (LLMs) through its REST API, showcasing the power of combining WeWeb with external services (see the sketch after this list).
- Model Switching: Easily switch between models during the same conversation while keeping the context of past messages. Supported models include llama3-70b-8192, llama3-8b-8192, mixtral-8x7b-32768, and gemma-7b-it.
- Local-Only Experience: Everything stays local; nothing is saved to a database.
- No Account Required: You don’t need to create an account or provide an API key (although there are some limitations with the plugin).
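For anyone curious what the call behind the scenes roughly looks like, here is a minimal standalone sketch (not the actual WeWeb workflow, which is configured visually): it posts to Groq’s OpenAI-compatible chat completions endpoint and shows how reusing the same messages array with a different model name keeps the conversation context. The `askGroq` helper, the `GROQ_API_KEY` environment variable, and the error handling are illustrative placeholders.

```typescript
// Minimal sketch of a Groq chat completion call via its OpenAI-compatible REST API.
// GROQ_API_KEY and askGroq are placeholders; the showcase app drives this kind of
// request from a WeWeb workflow rather than standalone code.

type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

async function askGroq(model: string, history: ChatMessage[]): Promise<string> {
  const response = await fetch("https://api.groq.com/openai/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.GROQ_API_KEY}`, // assumption: key comes from an env variable
    },
    body: JSON.stringify({
      model,             // e.g. "llama3-70b-8192" or "mixtral-8x7b-32768"
      messages: history, // the full conversation so far, so context survives a model switch
    }),
  });

  if (!response.ok) {
    throw new Error(`Groq API error: ${response.status}`);
  }

  const data = await response.json();
  return data.choices[0].message.content as string;
}

// Switching models mid-conversation: keep one history array and just change the model name.
async function demo(): Promise<void> {
  const history: ChatMessage[] = [
    { role: "user", content: "Summarize WeWeb in one sentence." },
  ];
  const first = await askGroq("llama3-8b-8192", history);
  history.push({ role: "assistant", content: first });
  history.push({ role: "user", content: "Now make it even shorter." });
  const second = await askGroq("mixtral-8x7b-32768", history); // context is preserved
  console.log(second);
}
```

Because the API is stateless, “maintaining context” simply means sending the accumulated messages array on every request, whichever model is currently selected.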
Please note that streaming responses are not implemented in this example, as the responses are already fast. Additionally, conversation history is not included, as this project is primarily a demonstration.
While there are still a few bugs to fix and optimizations to make, I plan to share the raw code files and the project code. Once the marketplace is available, I intend to share it there for free.
Unfortunately, due to the limitations of my current plan and the lack of a staging domain, this UI will likely be replaced by another one after May 24th.
Feel free to try it out yourself: https://app.qredence.ai
Let me know if you have any other questions or feedback!