Learn how to build a hybrid AI app that runs both locally and on a server using WebLLM, and deploy it to Koyeb
This guide shows how to use Continue with Ollama, a self-hosted AI solution, to run Mistral's Codestral model on Koyeb GPUs