Firecracker MicroVMs: Lightweight Virtualization for Containers and Serverless Workloads

January 27, 2021

Édouard Bonlieu
@edouardb_

Yann Léger
@yann_eu

Alisdair Broshar
@alisdairbroshar

Deciding whether to run applications in containers or virtual machines used to entail analyzing which trade-offs you could accept in exchange for certain advantages. With Firecracker, we can leverage the benefits of both technologies. In this blog post, we are going to talk about why exactly Firecracker is setting the serverless computing world on fire and what you need to know about this emerging technology.

Table of Contents

  1. What is Firecracker?
  2. Advantages of Firecracker
  3. How Firecracker Works
  4. Additional Reading on MicroVMs and Firecracker

What is Firecracker?

Firecracker is a virtualization technology that was built to enable multi-tenant workloads on a single server. With Firecracker, different function and container workloads can share the computing resources of a single server in a secure manner, making both workloads and the underlying infrastructure more efficient. While it originated at AWS and builds on crosvm, the Virtual Machine Monitor from Google's Chromium OS, Firecracker has become an open-source project under the Apache License Version 2.0 that addresses different needs than crosvm, namely optimizing serverless computing infrastructure.

Firecracker, a new type of lightweight virtualization

Before Firecracker, developers had to choose between the security and isolation guaranteed by traditional VM setups and the speed and density offered by container technology.

Firecracker delivers the best of these two worlds. Developers get the isolation and security guaranteed by traditional VMs and bare-metal instances as well as the density and speed offered by container technology.

Based on KVM

Firecracker is a lightweight virtual machine monitor (VMM) that uses Linux kernel-based virtual machines (KVM) to provision and manage lightweight virtual machines (VMs), also known as microVMs. These microVMs combine the isolation and security offered by full virtualization solutions with the speed and density provided by container technology.
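Since Firecracker sits on top of KVM, the host has to expose the KVM device to the user running the VMM. A minimal sketch of checking for that prerequisite before trying to boot a microVM:

```shell
# Firecracker needs read/write access to /dev/kvm on the host.
# This only checks the device is present and accessible; it does not
# verify that nested virtualization or specific CPU features are enabled.
if [ -r /dev/kvm ] && [ -w /dev/kvm ]; then
  echo "KVM is available"
else
  echo "KVM is not available"
fi
```

On hosts without KVM (for example, many CI runners or nested environments with virtualization disabled), Firecracker cannot start microVMs at all, so this is worth checking first.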

Developers can run containers and function-based workloads in these isolated and high-performing microVMs, and serverless providers can run thousands of these microVMs more efficiently on physical servers.

From a compatibility perspective, Firecracker supports Linux as both host and guest OS, as well as OSv guests, a unikernel system specialized for cloud computing. The technology is designed to be processor agnostic, with support for Intel, AMD, and ARM processors already implemented.

Designed for Efficiency in the Serverless World

Firecracker was developed by AWS to improve the efficiency of their serverless offerings.

The first generation of serverless computing services ran each workload on a separate virtual machine in order to ensure isolation and security. As serverless computing rose in popularity, cloud providers looked for more efficient ways to distribute computing resources without compromising the security and isolation that virtualization provided.

Firecracker was developed as a way to allocate computing resources more efficiently by reducing the overhead incurred by dedicated virtual machines. It has since evolved into an open-source project that powers serverless functions and is fully integrated into the existing container ecosystem, thanks to its conformance to OCI standards.

If you want to learn more about Firecracker, check out its Charter and GitHub repository.

Firecracker Is Not Designed for General-Purpose Workloads

There are some misconceptions about what Firecracker enables and is designed for. Let's clarify things straight away:

  • Firecracker is not a container orchestration tool like Kubernetes. It does not automate the management, deployment, and scaling of containerized applications.
  • It differs from traditional, general-purpose VMMs like QEMU. Firecracker was designed to run serverless workloads efficiently and deliberately offers little flexibility; general-purpose VMMs like QEMU, on the other hand, can emulate a large variety of devices to support all kinds of workloads.
  • It is not a container runtime like containerd or CRI-O, which manages the complete lifecycle of containers.

Advantages of Firecracker

  • Security: Workloads are protected and attack surfaces are greatly reduced with Firecracker, meaning vulnerabilities in one tenant's workload cannot affect or harm another tenant's.
  • Fast Boot-Up Times: Thanks to its minimalist design and low overhead, Firecracker boasts boot times of around 100 milliseconds, even for concurrent and heavy workloads.
  • Efficient Allocation of Resources: Firecracker soft-allocates resources, meaning they are shared in a flexible and efficient manner.
  • Compatibility: Firecracker supports Linux as host and guest OS, as well as OSv guests.

How Firecracker Works

Firecracker was designed to run multi-tenant workloads with configurable vCPU and memory, fast boot times, low performance overhead, and security in mind. Thanks to its lightweight design, a single server can quickly boot and host up to thousands of microVMs.

Firecracker was adapted from Google's open-source VMM, Chromium OS's Virtual Machine Monitor, also known as crosvm, stripping out much of crosvm's code in order to meet its design goal of being a lightweight VMM.

Like crosvm, Firecracker is written in Rust. It boots the Linux kernel directly, runs in user space, and uses KVM to create microVMs.
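As an illustrative sketch of that flow (paths, socket location, and sizes below are placeholders, not part of any standard setup), a microVM is created by starting the Firecracker process with an API socket and then configuring it over that socket before issuing an InstanceStart action:

```shell
# Start Firecracker in user space, exposing its REST API on a Unix socket.
firecracker --api-sock /tmp/firecracker.sock &

# Configure the machine: 2 vCPUs and 512 MiB of memory.
curl --unix-socket /tmp/firecracker.sock -X PUT "http://localhost/machine-config" \
  -H "Content-Type: application/json" \
  -d '{"vcpu_count": 2, "mem_size_mib": 512}'

# Point the microVM at an uncompressed Linux kernel image (placeholder path).
curl --unix-socket /tmp/firecracker.sock -X PUT "http://localhost/boot-source" \
  -H "Content-Type: application/json" \
  -d '{"kernel_image_path": "./vmlinux", "boot_args": "console=ttyS0 reboot=k panic=1"}'

# Attach a root filesystem (placeholder path to an ext4 image).
curl --unix-socket /tmp/firecracker.sock -X PUT "http://localhost/drives/rootfs" \
  -H "Content-Type: application/json" \
  -d '{"drive_id": "rootfs", "path_on_host": "./rootfs.ext4", "is_root_device": true, "is_read_only": false}'

# Boot the microVM.
curl --unix-socket /tmp/firecracker.sock -X PUT "http://localhost/actions" \
  -H "Content-Type: application/json" \
  -d '{"action_type": "InstanceStart"}'
```

This sketch assumes a Firecracker binary on the PATH, KVM available on the host, and a kernel image and rootfs you provide yourself; consult the project's API specification for the authoritative request schemas.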

Since Firecracker was designed to optimize serverless computing, its creators dropped a lot of features that are found in more general-purpose VMMs like QEMU and crosvm. Some of the lower-level features dropped include various I/O devices, a full-blown network interface, a BIOS, and CPU instruction emulation. With its minimalist design, Firecracker's attack surface is reduced to a minimum, its lower overhead allows for higher levels of density, and its startup times are faster.

Thanks to Firecracker's RESTful API, developers can control the different processes of Firecracker such as its rate limiter and metadata service. While the rate limiter allocates storage and network resources for both regular and burst activity levels, the metadata service is how configuration information is securely shared between the guest and host OS.
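As a sketch of those two features (all values and paths below are illustrative placeholders), a drive can be attached with a token-bucket rate limiter, and configuration data can be pushed into the microVM metadata service (MMDS) for the guest to read:

```shell
# Attach a secondary drive whose bandwidth is capped with a token bucket:
# a bucket of ~10 MiB refilled every 100 ms (illustrative numbers).
curl --unix-socket /tmp/firecracker.sock -X PUT "http://localhost/drives/scratch" \
  -H "Content-Type: application/json" \
  -d '{
        "drive_id": "scratch",
        "path_on_host": "./scratch.ext4",
        "is_root_device": false,
        "is_read_only": false,
        "rate_limiter": {
          "bandwidth": {"size": 10485760, "refill_time": 100}
        }
      }'

# Store metadata in MMDS; the guest can later fetch it over the
# metadata service's network endpoint without direct host access.
curl --unix-socket /tmp/firecracker.sock -X PUT "http://localhost/mmds" \
  -H "Content-Type: application/json" \
  -d '{"app": {"env": "production", "region": "par1"}}'
```

These requests assume a Firecracker process already listening on the socket from the earlier configuration steps; the exact rate-limiter fields (burst sizes, ops limits) are documented in Firecracker's API specification.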

Additional Reading on MicroVMs and Firecracker

Dive into the resources below if you want to learn more about Firecracker and microVMs.

The Future of Firecracker

While still in its early days, Firecracker is poised to change the way the world runs containerized workloads. Firecracker's security and isolation features, combined with the performance it offers, make it a game-changing technology and open up new opportunities for everyone deploying containerized workloads at scale.

Even though the ecosystem and community around Firecracker are still taking shape, we can expect to see more and more products and platforms emerge around it in the coming months.

We'd be super excited to get your feedback on this post and hear your opinion about Firecracker. If you have anything to add or comment to improve this post, feel free to join us on Slack or ping us on Twitter.

Go Serverless with Koyeb

We hope that after reading this blog post you understand why we are so excited about the opportunities Firecracker brings to the cloud computing industry. Make sure to keep your eyes open for future blog posts related to Firecracker.

Koyeb is a developer-friendly serverless platform to deploy applications globally. See the benefits of going serverless and get started with a free account today!

Here are some useful resources to get you started:

  • Koyeb Documentation: Learn everything you need to know about using Koyeb.
  • Koyeb Tutorials: Discover guides and tutorials on common Koyeb use cases and get inspired to create your own!
  • Koyeb Community Slack Channel: Join the community chat to stay in the loop about our latest feature announcements, exchange ideas with other developers, and ask our engineering teams whatever questions you may have about going serverless.