Create a simpler web-workers example #380

Closed
flatsiedatsie opened this issue Apr 23, 2024 · 2 comments

Comments

@flatsiedatsie

I love WebLLM, but I have to admit it hasn't been easy for me to integrate it into my project.

This is because the provided examples seem to assume that developers will write code that will:

  1. fully integrate with and rely on WebLLM as a core component
  2. use webpack or something similar in their build process

However, this creates a lot of hurdles for less sophisticated developers like me.

For example, I just built the get-started-web-worker example.

I then looked in the HTML file and... it's empty. There is only a link to /get_started.b7a05eb9.js, which has everything bundled inside it, including all the code I would like to have outside of that file.

How do I change any settings now? It seems I have to edit the source in the example directory and call the packager again? If so, that's slow and forces me into a way of working I don't enjoy.

Perhaps there could be an object at the window scope that allows me to interact with WebLLM? Or even better, what if WebLLM only built a worker script, with a clear and documented way to address that worker?

Ideally:

  • I would want to be able to send the worker a message telling it to switch to a different model, and it would return updates about the (down)load progress of the new model.
  • I could send it a message with a prompt and temperature, and it would return updates on its progress.
  • I could tell the model to clear its conversation history, or send it a message that restores that history from my main application.
  • I could tell WebLLM to unload itself, to free up memory when the user wants to switch to an LLM hosted on another 'runner', like Wllama. Or simply kill the worker.
  • Etc.

With such a design I could iterate my code quickly, switch between versions of WebLLM, and have a nice abstraction layer. A rough sketch of what I mean follows below.
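Roughly what I'm imagining (every file name, message type and field below is made up to illustrate the idea, it is not an existing WebLLM API):

// Hypothetical worker script and message protocol, purely for illustration
const llmWorker = new Worker('webllm_worker.js');

// Commands the page would send
llmWorker.postMessage({ type: 'load_model', model_id: 'Llama-3-8B-Instruct-q4f32_1' });
llmWorker.postMessage({ type: 'chat', messages: [{ role: 'user', content: 'Hello' }], temperature: 0.7 });
llmWorker.postMessage({ type: 'clear_history' });
llmWorker.postMessage({ type: 'unload' }); // or simply llmWorker.terminate();

// Updates the worker would send back
llmWorker.onmessage = (event) => {
	switch (event.data.type) {
		case 'download_progress': /* show (down)load progress of the new model */ break;
		case 'chat_chunk':        /* append the streamed delta to my own UI */ break;
		case 'chat_complete':     /* final message plus runtime stats */ break;
	}
};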

I have already integrated an older version of WebLLM in my project, but because I'm more of a designer and not a real developer that was a huge struggle. I resorted to hacking extra code into /get_started.b7a05eb9.js that calls functions I created on the window object. And I made the script copy the chatUI object to window.chatUI so that I could more easily manipulate everything from the actual UI, while simply hiding the WebLLM chat UI using CSS. I'm sure all of this is making your skin crawl just reading it.

My project (can't wait to show you more) also integrates Transformers.js, and with that project it's been a piece of cake to give commands to workers and receive updates back. I've wrapped some of those workers in promises, which works great.
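For reference, this is roughly the promise wrapper I use around those Transformers.js workers (the worker and the 'translate' task here are placeholders from my own code, not part of the Transformers.js API):

// Minimal sketch: wrap one worker request/response pair in a promise.
// The worker is expected to echo the id back in its reply, as { id, result } or { id, error }.
function askWorker(worker, payload) {
	return new Promise((resolve, reject) => {
		const id = crypto.randomUUID(); // correlate the reply with this request
		const onMessage = (event) => {
			if (event.data.id !== id) return; // not a reply to this request
			worker.removeEventListener('message', onMessage);
			event.data.error ? reject(event.data.error) : resolve(event.data.result);
		};
		worker.addEventListener('message', onMessage);
		worker.postMessage({ id, ...payload });
	});
}

// Usage: const result = await askWorker(translationWorker, { task: 'translate', text: 'Hallo wereld' });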

In summary, it would rock if there could be an example that is easier to build on for beginners like me.

Sneak preview of my project:
[screenshot: sneak_preview]
It's 100% browser based. The racecar icons indicate WebLLM models, while the others are run by llama-cpp-wasm (soon to be upgraded to wllama). Voice recognition, voice generation, translation and music generation are handled by Transformers.js. Oh, and I've also hacked Web-SD into it.

@flatsiedatsie
Author

I've created another small hack; perhaps it's useful to others too.

In the packaged script, search for mainStreaming. Replace that part with:

/**
 * Chat completion (OpenAI style) with streaming, where delta is sent while generating the response.
 */
	let my_webllm = {};
	my_webllm['engine'] = null;

	my_webllm['initProgressCallback'] = (report) => {
		console.log("WebLLM: init report: ", report);
	};
	my_webllm['initCompleteCallback'] = () => {
		console.log("WebLLM: init complete");
	};
	my_webllm['chunkCallback'] = (chunk, message_so_far, addition) => {
		console.log("WebLLM: chunk callback: chunk, message_so_far, addition: ", chunk, message_so_far, addition);
	};
	my_webllm['completeCallback'] = (message) => {
		console.log("WebLLM: complete callback: message: ", message);
	};
	my_webllm['statsCallback'] = (stats) => {
		console.log("WebLLM: stats callback: stats: ", stats);
	};
	
	my_webllm['loadModel'] = async function(selectedModel) {
		if(typeof selectedModel != 'string'){
			console.error("WebLLM: no valid model string provided");
			return; // don't try to create the engine without a model id
		}
		my_webllm['engine'] = await _webLlm.CreateWebWorkerEngine(new Worker(require("b16cbe164a5b9742")), selectedModel, {
			initProgressCallback: my_webllm.initProgressCallback
		});
		my_webllm.initCompleteCallback();
	}
		
	my_webllm['setInitProgressCallback'] = async function(initProgressCallback) {
		if(typeof initProgressCallback === 'function'){
			my_webllm['initProgressCallback'] = initProgressCallback;
		}
		else{
			console.error("WebLLM: no valid initProgressCallback provided");
		}
	}
	my_webllm['setInitCompleteCallback'] = async function(initCompleteCallback) {
		if(typeof initCompleteCallback === 'function'){
			my_webllm['initCompleteCallback'] = initCompleteCallback;
		}
		else{
			console.error("WebLLM: no valid initCompleteCallback provided");
		}
	}
	my_webllm['setChunkCallback'] = async function(chunkCallback) {
		if(typeof chunkCallback === 'function'){
			my_webllm['chunkCallback'] = chunkCallback;
		}
		else{
			console.error("WebLLM: no valid chunkCallback provided");
		}
	}
	my_webllm['setCompleteCallback'] = async function(completeCallback) {
		if(typeof completeCallback === 'function'){
			my_webllm['completeCallback'] = completeCallback;
		}
		else{
			console.error("WebLLM: no valid completeCallback provided");
		}
	}
	my_webllm['setStatsCallback'] = async function(statsCallback) {
		if(typeof statsCallback === 'function'){
			my_webllm['statsCallback'] = statsCallback;
		}
		else{
			console.error("WebLLM: no valid statsCallback provided");
		}
	}
	
	
	my_webllm['doChat'] = async function(request) {
		if(my_webllm.engine == null){
			console.error("WebLLM: aborting, engine has not been started yet");
			return false;
		}
		if(typeof request != 'undefined' && request != null && typeof request.messages != 'undefined'){
			const asyncChunkGenerator = await my_webllm.engine.chat.completions.create(request);
			let message = "";
			for await (const chunk of asyncChunkGenerator){
				//console.log("WebLLM: doChat: chunk: ", chunk);
				if (chunk.choices[0].delta.content){ // Last chunk has undefined content
					message += chunk.choices[0].delta.content;
				}
				my_webllm['chunkCallback'](chunk, message, chunk.choices[0].delta.content);
				setLabel("generate-label", message);
				// engine.interruptGenerate();  // works with interrupt as well
			}

			const final_message = await my_webllm.engine.getMessage();
			my_webllm.completeCallback(final_message);
			console.log("WebLLM: Final message:\n", final_message); // the concatenated message

			let stats = await my_webllm.engine.runtimeStatsText();
			my_webllm['statsCallback'](stats);
			//console.log("WebLLM: stats: ", stats);
		}
		else{
			console.error("WebLLM: no valid prompt message provided");
		}
	}
	window.my_webllm = my_webllm;
	console.log("You can now use window.my_webllm: ", window.my_webllm);

// Run one of the functions below
// mainNonStreaming();
//mainStreaming();

Then, in the getStarted.html file, replace the end with this:

    <script src="/get_started.b7a05eb9.js" defer=""></script>
	<script>
		
		const request = {
	        stream: true,
	        messages: [
	            {
	                "role": "system",
	                "content": "You are a helpful, respectful and honest assistant. Be as happy as you can when speaking please. "
	            },
	            {
	                "role": "user",
	                "content": "Provide me three US states."
	            },
	            {
	                "role": "assistant",
	                "content": "California, New York, Pennsylvania."
	            },
	            {
	                "role": "user",
	                "content": "Two more please!"
	            }
	        ],
	        temperature: 1.5,
	        max_gen_len: 256
	    };
		
		window.onload = init;
		async function init() {
			console.log("window.my_webllm: ", window.my_webllm);
			if(window.my_webllm){
				await window.my_webllm.loadModel('Llama-3-8B-Instruct-q4f32_1');
				await window.my_webllm.doChat(request);
			}
			
		}

	</script>
</body></html>
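If you want to render the streamed text yourself instead of relying on the default console.log callbacks, the setters defined in the hack above can be called from the page before doChat. For example, the init function could become something like this (the 'output' element id is just an example; it's not part of the example page):

		async function init() {
			if(window.my_webllm){
				// Show the streamed text in my own element instead of only logging it
				window.my_webllm.setChunkCallback((chunk, message_so_far, addition) => {
					document.getElementById('output').innerText = message_so_far;
				});
				await window.my_webllm.loadModel('Llama-3-8B-Instruct-q4f32_1');
				await window.my_webllm.doChat(request);
			}
		}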

@flatsiedatsie
Author

Solved now that there's finally a great JavaScript file that can simply be loaded in. Loving it.
