Here is a very simple web component that can have a conversation with a local ollama instance, or that can easily be configured to use any other model LangChain supports. (If you use a provider that needs an API key, you'll end up exposing that key to the world, so having the browser talk to it directly is probably not a great idea.)
Chat Component
Here's the base component. It assumes that there's one conversation per page, and it keeps the query and response as attributes. When a new one is created, it passes the previous completions to the backing model to give it some memory.
generate-response.js
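The original listing isn't included here, so this is only a sketch of what generate-response.js might look like; the element name, helper names, and attribute names are my assumptions, not necessarily the author's code.

```javascript
// Flatten earlier <generate-response> elements into the (role, text) pairs
// that LangChain chat models accept as message history.
function buildHistory(elements) {
  return elements.flatMap((el) => [
    ["human", el.getAttribute("query") ?? ""],
    ["ai", el.getAttribute("response") ?? ""],
  ]);
}

// Guard so the module also loads outside a browser (e.g. in tests).
const Base = typeof HTMLElement !== "undefined" ? HTMLElement : class {};

class GenerateResponse extends Base {
  async connectedCallback() {
    const query = this.getAttribute("query");
    // Every earlier completed turn on the page becomes the model's memory.
    const previous = [...document.querySelectorAll("generate-response")]
      .filter((el) => el !== this && el.hasAttribute("response"));
    const messages = [...buildHistory(previous), ["human", query]];

    // Swapping providers means constructing a different LangChain chat model.
    const { ChatOllama } = await import("@langchain/ollama");
    const model = new ChatOllama({
      model: this.getAttribute("model") ?? "mistral",
    });
    const result = await model.invoke(messages);
    this.setAttribute("response", result.content);
    this.textContent = result.content;
  }
}

if (typeof customElements !== "undefined") {
  customElements.define("generate-response", GenerateResponse);
}
```

Keeping the query and response as attributes means the DOM itself is the conversation store: `buildHistory` just reads the page back.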
LLM Selector
Right now we can choose between two different LLM providers, but it's easy to add more. I'm playing with ollama and OpenAI's version. This simply makes a tab group for the two.
llm-selector.js
Ollama Model Selection
For ollama, we pull down a list of installed models and let the user decide; mistral is the default.
ollama-models.js
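Again a sketch rather than the original ollama-models.js: ollama's REST API does list installed models at `GET /api/tags` (returning `{ models: [{ name, ... }] }`), but the element and helper names here are assumptions.

```javascript
const DEFAULT_MODEL = "mistral";

// Pure helper: turn the /api/tags payload into <option> markup,
// preselecting the default model.
function toOptions(tags, selected = DEFAULT_MODEL) {
  return tags.models
    .map((m) =>
      `<option${m.name.startsWith(selected) ? " selected" : ""}>${m.name}</option>`)
    .join("");
}

// Guard so the module also loads outside a browser.
const Base = typeof HTMLElement !== "undefined" ? HTMLElement : class {};

class OllamaModels extends Base {
  async connectedCallback() {
    // ollama serves on port 11434 by default.
    const res = await fetch("http://localhost:11434/api/tags");
    this.innerHTML = `<select>${toOptions(await res.json())}</select>`;
  }
}

if (typeof customElements !== "undefined") {
  customElements.define("ollama-models", OllamaModels);
}
```

`startsWith` rather than equality lets the bare name `mistral` match the installed tag `mistral:latest`.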
OpenAI Component
Mainly we need to get the API key, which we store in local storage for future ease of use. Once you enter it, it stays in your browser.
openai-models.js
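A sketch of openai-models.js under the same caveat: the storage key name and element names are assumptions. The pure helpers take the storage object as a parameter so the logic works with any `Storage`-like object.

```javascript
const KEY_NAME = "openai-api-key"; // assumed localStorage key

function loadKey(storage) {
  return storage.getItem(KEY_NAME) ?? "";
}

function saveKey(storage, value) {
  storage.setItem(KEY_NAME, value);
}

// Guard so the module also loads outside a browser.
const Base = typeof HTMLElement !== "undefined" ? HTMLElement : class {};

class OpenaiModels extends Base {
  connectedCallback() {
    // The key never leaves the browser: it lives in localStorage and is
    // only sent with requests to OpenAI itself.
    this.innerHTML =
      `<input type="password" placeholder="OpenAI API key" value="${loadKey(localStorage)}">`;
    this.querySelector("input").addEventListener("change", (event) => {
      saveKey(localStorage, event.target.value);
    });
  }
}

if (typeof customElements !== "undefined") {
  customElements.define("openai-models", OpenaiModels);
}
```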
HTML & JavaScript
And the framework that all this hangs off of:
main.js
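The glue code isn't shown either, so here is one plausible shape for main.js: on form submit, read the current provider and model from the selector elements and append a new turn. The element ids and helper names are assumptions.

```javascript
// Pure helper: the attributes a new <generate-response> turn needs.
// Only the ollama provider carries a model attribute.
function turnAttributes(query, provider, model) {
  return provider === "ollama"
    ? { query, provider, model }
    : { query, provider };
}

function onSubmit(event) {
  event.preventDefault();
  const form = event.target;
  const provider =
    document.querySelector("llm-selector")?.getAttribute("selected") ?? "ollama";
  const model =
    document.querySelector("ollama-models select")?.value ?? "mistral";
  const turn = document.createElement("generate-response");
  const attrs = turnAttributes(form.query.value, provider, model);
  for (const [name, value] of Object.entries(attrs)) {
    turn.setAttribute(name, value);
  }
  document.querySelector("#conversation").append(turn);
  form.reset();
}

// Guard so the module also loads outside a browser.
if (typeof document !== "undefined") {
  document.querySelector("#chat-form")?.addEventListener("submit", onSubmit);
}
```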
And then wire it all together:
index.html
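With the listing missing, an index.html along these lines would do the wiring; the element names match the components above, but the ids and layout are my guesses.

```html
<!doctype html>
<html>
  <head>
    <title>Chat</title>
    <script type="module" src="main.js"></script>
  </head>
  <body>
    <llm-selector></llm-selector>
    <ollama-models></ollama-models>
    <openai-models></openai-models>
    <div id="conversation"></div>
    <form id="chat-form">
      <input name="query" placeholder="Ask something">
      <button>Send</button>
    </form>
  </body>
</html>
```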
Boilerplate
package.json
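The original package.json isn't reproduced here; something like the following would cover the LangChain provider packages the components use, with a bundler for the dev server. The exact package versions and the choice of vite are illustrative, not from the original.

```json
{
  "name": "chat-component",
  "private": true,
  "type": "module",
  "scripts": {
    "dev": "vite"
  },
  "dependencies": {
    "@langchain/core": "^0.3.0",
    "@langchain/ollama": "^0.2.0",
    "@langchain/openai": "^0.4.0"
  },
  "devDependencies": {
    "vite": "^5.0.0"
  }
}
```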
Run with
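The command was elided; assuming the package.json defines a `dev` script for the bundler, it would be something like:

```
npm install
npm run dev
```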