Ollama tutorials
Discover how to build, deploy, and run Ollama applications in production on Koyeb, the fastest way to deploy applications globally.

Nuno Bispo
Use Ollama to Test Multiple Code Generation Models With Koyeb Sandboxes
This tutorial walks through how to build a complete pipeline that creates isolated environments, generates code, and executes that code safely.
Jan 20, 2026
28 min read

Édouard Bonlieu
Use Continue, Ollama, Codestral, and Koyeb GPUs to Build a Custom AI Code Assistant
This guide shows how to use Continue with Ollama, a self-hosted AI solution, to run the Mistral Codestral model on Koyeb GPUs.
Jun 24, 2024
4 min read
