Best way to implement transformers.js in WeWeb?

Hi all!

I am trying to implement an AI chatbot using the amazing transformers.js library by Xenova. Here’s a good video explaining how that works.

Clearly this requires custom JS to work in WeWeb, but I can't figure out the best way to set it up. A page-level workflow to load the model, then an "on click" workflow to actually run the inference? A custom HTML element? Custom code?

For reference, this is what typical code would look like (taken from an example in the documentation). I would like to trigger this on a button click, get the user message from an input field, and then display the result by binding it to a text element.

        import { env, pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers';

        // Set environment configurations
        env.useBrowserCache = false;
        env.allowLocalModels = false;

        // Asynchronously initialize and use the pipeline
        (async () => {
            const generator = await pipeline(
              'text-generation',
              'Xenova/TinyLlama-1.1B-Chat-v1.0'
            );

            // Define the list of messages
            const messages = [
              { "role": "system", "content": "You are a friendly assistant." },
              { "role": "user", "content": "Explain thermodynamics in simple terms." },
            ];

            // Construct the prompt
            const prompt = generator.tokenizer.apply_chat_template(messages, {
              tokenize: false, add_generation_prompt: true,
            });

            // Generate a response
            const result = await generator(prompt, {
              max_new_tokens: 256,
              temperature: 0.7,
              do_sample: true,
              top_k: 50,
            });

            console.log('Successful response generation:', result);
        })();
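
In case it helps clarify what I'm after, my current idea is to split this into two custom JS actions: one in a page-load workflow that downloads the model once, and one in the button's "on click" workflow that runs the inference and returns the text. Something roughly like the sketch below. This is only a guess at how it could fit into WeWeb, not working code: I'm assuming custom code actions run as async functions (so `await` and dynamic `import()` are allowed), that `window` can be used to keep the loaded pipeline between actions, and that the value returned from the on-click action can be stored in a variable and bound to a text element. `window.chatGenerator` and `userMessage` are placeholder names I made up.

        // Action 1 – page-load workflow: load the model once and keep it on window.
        // Using dynamic import() on the assumption that a custom code action is not
        // an ES module, so a top-level `import` statement wouldn't work there.
        const { pipeline, env } = await import('https://cdn.jsdelivr.net/npm/@xenova/transformers');

        env.allowLocalModels = false;

        // Reuse the pipeline if it was already loaded, otherwise download it.
        window.chatGenerator = window.chatGenerator
          ?? await pipeline('text-generation', 'Xenova/TinyLlama-1.1B-Chat-v1.0');

And then the on-click action:

        // Action 2 – "on click" workflow: build the prompt from the input value and generate.
        // `userMessage` stands for the text read from the input element (placeholder name).
        const generator = window.chatGenerator;

        const messages = [
          { role: 'system', content: 'You are a friendly assistant.' },
          { role: 'user', content: userMessage },
        ];

        // Build the chat prompt from the messages.
        const prompt = generator.tokenizer.apply_chat_template(messages, {
          tokenize: false,
          add_generation_prompt: true,
        });

        // Generate a response.
        const result = await generator(prompt, {
          max_new_tokens: 256,
          temperature: 0.7,
          do_sample: true,
          top_k: 50,
        });

        // Return the generated text so the workflow can store it in a variable
        // bound to the text element.
        return result[0].generated_text;

Does splitting it up like this make sense, or is a custom HTML element / custom component the better route here?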

Any help is appreciated, thanks!