Koyeb Launch Week: Round 2
Welcome to our second launch week! After the success of our very first Launch Week in June, we’re back with another batch of announcements!
Last Launch Week, we released:
- Autoscaling in GA
- GPUs in public preview & access to H100, A100, and more
- Volumes in technical preview
- AWS Regions on Koyeb
- Koyeb Startup Program
We had such a blast rolling out so many new features and announcements at once that we decided to do it again! So here we are, gearing up for our second launch week just three months later.
If you follow our changelog updates on the Koyeb Community, you’ve seen what we’ve been working on and delivering since then. You might even have a few good guesses about what we have in store for this second edition of launch week.
We can’t wait to share everything the team has been working on with you next week! We’ll be updating this post with a recap of everything we share during Launch Week #2.
Monday: New Dashboard - Build, Run, and Scale Apps in Minutes
To kick off Launch Week Round 2, we announced our new dashboard!
The new dashboard makes it easier than ever to build, run, and scale your apps in minutes with a simple and elegant interface. The new dashboard is designed to help you get started quickly and easily, so you can focus on building your app, not managing infrastructure.
Want to put the new control panel to the test? Check out our latest tutorial showcasing how to deploy Portkey Gateway to Koyeb and start streamlining requests to 200+ LLMs.
Tuesday: New Networking Stack
We revamped our networking stack to give your AI workloads, full-stack applications, APIs, and databases faster deployments, more bandwidth, and lower latency!
When you deploy on the platform, you get advanced capabilities out-of-the-box including automatic load-balancing, fully encrypted private networking, built-in observability, auto-healing, and automatic service discovery.
Want to see the new networking stack in action? Check out our latest one-click apps for deploying Ollama and Open WebUI to run a private ChatGPT.
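Once an Ollama Service is deployed, you can talk to it over its public URL using Ollama's standard `/api/generate` endpoint. Here's a minimal sketch using only the Python standard library; the `OLLAMA_URL` value and the `llama3` model name are placeholders you'd swap for your own deployment:

```python
import json
import urllib.request

# Hypothetical public URL of a deployed Ollama Service; replace with your own.
OLLAMA_URL = "https://example-ollama.koyeb.app"

def build_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually call the service (requires a running Ollama instance):
# with urllib.request.urlopen(build_generate_request("llama3", "Hello!")) as resp:
#     print(json.loads(resp.read())["response"])
```

Setting `"stream": False` asks Ollama to return the full completion in a single JSON response instead of a stream of chunks, which keeps the client code simple.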
Special Event #1 - Intel AI Summit - September 17 - Paris
If you’re in Paris next Tuesday, join us for Intel AI Summit: Bringing AI Everywhere. 🇫🇷
We’ll be there discussing how we’re bringing AI everywhere with high-performance infrastructure. Hope to see you there!
Wednesday: Paris and Tokyo Regions in GA
We are excited to announce that our Paris and Tokyo regions are now generally available! You can now deploy your applications closer to your users in Europe and Asia, ensuring lower latency and faster response times.
With these regions, you can now run your workloads next to your users around the world, providing a better experience for your customers.
Looking for something to deploy in Paris or Tokyo? Check out our tutorial on [deploying ComfyUI, ComfyUI Manager, and Flux](/tutorials/using-comfyui-and-flux-to-generate-high-quality-images-on-koyeb). Flux is just one of many advanced image generation AI models you can use in your workflow with ComfyUI.
Thursday: AWS Regions in Public Preview
We are thrilled to announce the public preview of AWS Regions on Koyeb! You can now deploy your applications on AWS infrastructure with the simplicity and flexibility of Koyeb.
Looking for a simpler way to run inference on your self-hosted AI models? Deploy vLLM in one click. vLLM is a Python library that functions as a hosted LLM inference platform. With vLLM, you can download models from Hugging Face and run them on your own infrastructure.
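vLLM's server exposes an OpenAI-compatible API, so querying a deployed instance is just an HTTP POST to `/v1/chat/completions`. Below is a minimal sketch; the `VLLM_URL` value and the model name are hypothetical placeholders for your own Service URL and whichever Hugging Face model you serve:

```python
import json
import urllib.request

# Hypothetical endpoint of a vLLM Service deployed on Koyeb; replace with yours.
VLLM_URL = "https://example-vllm.koyeb.app/v1/chat/completions"

def build_chat_payload(model: str, user_message: str) -> dict:
    """Build an OpenAI-compatible chat completion payload for vLLM."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": 128,
    }

# To send it (requires a running vLLM server):
# req = urllib.request.Request(
#     VLLM_URL,
#     data=json.dumps(build_chat_payload("mistralai/Mistral-7B-Instruct-v0.2", "Hi")).encode(),
#     headers={"Content-Type": "application/json"},
#     method="POST",
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the payload follows the OpenAI chat format, existing OpenAI client libraries can also point at a vLLM deployment by overriding their base URL.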
Special Event #2 - AI Camp in Paris w/ Koyeb and Weaviate
If you’re in Paris next Thursday, join us for Building with AI: Navigate Scaling 🇫🇷, an AI Camp meetup that we are co-organizing with Weaviate.
Friday: Volumes in Public Preview
We are thrilled to announce the public preview of Volumes! Volumes on Koyeb are blazing-fast NVMe SSDs you can use to persist data across deployments.
Want to build production-ready LLM applications? LlamaIndex is a data framework that makes it simple to build production-ready applications from your data using LLMs. Providing an entire suite of packages and classes for loading, indexing, querying, and evaluating data, LlamaIndex specializes in context augmentation so that you can safely and reliably optimize your queries with custom data.
Our one-click application lets you deploy LlamaIndex on high-performance infrastructure in seconds.
What’s next?
Our launch week is just a few days away! We can’t wait to update you with all the exciting news that we’ve been working on behind the scenes!
If you want to know what’s up ahead, our roadmap is full of exciting features, new locations, and more. By the way, if there is a feature you’d like to see on the platform, request it on our feature request platform and vote for it to track its progress.
Follow us on X @gokoyeb and on Koyeb's LinkedIn to stay tuned for more updates, content, and announcements about your favorite serverless platform! 🚀