FLUX.1 Kontext [dev]
Deploy FLUX.1 Kontext [dev] behind a dedicated API endpoint on Koyeb GPU for high-performance, low-latency, and efficient inference.
Deploy the FLUX.1 Kontext [dev] model on Koyeb’s high-performance cloud infrastructure.
With one click, get a dedicated GPU-powered inference endpoint ready to handle requests with built-in autoscaling and scale-to-zero.
Get up to $200 in credit to get started!
Overview of FLUX.1 Kontext [dev]
FLUX.1 Kontext [dev] is a 12-billion-parameter model designed for editing images based on text instructions. Ideal for researchers and non-commercial users, it excels at image editing tasks requiring precise local and global edits.
FLUX.1 Kontext [dev] is served with the vLLM inference engine, which is optimized for high-throughput, low-latency model serving.
The default GPU for running this model is the NVIDIA RTX A6000 instance type. You are free to adjust the GPU instance type to fit your workload requirements.
Quickstart
The FLUX.1 Kontext [dev] one-click model is served using the vLLM engine. vLLM is an advanced inference engine designed for high-throughput and low-latency model serving. Optimized for large language models, it provides efficient performance and compatibility with the OpenAI API.
After you deploy the FLUX.1 Kontext [dev] model, copy the Koyeb App public URL (similar to https://<YOUR_DOMAIN_PREFIX>.koyeb.app) and create a simple Python file (for example, main.py) with the following content to start interacting with the model.
import base64
from io import BytesIO

import httpx
from PIL import Image

KOYEB_URL = "https://<YOUR_DOMAIN_PREFIX>.koyeb.app"


def b64_to_pil(base64_string):
    """
    Convert a Base64 string to a PIL Image.

    :param base64_string: Base64 encoded image string
    :return: PIL Image object
    """
    # Remove the data URI header if present
    if base64_string.startswith("data:image"):
        base64_string = base64_string.split(",")[1]

    # Decode the Base64 string
    image_data = base64.b64decode(base64_string)

    # Create a PIL Image from the decoded binary data
    return Image.open(BytesIO(image_data))


payload = {
    "prompt": "Add a hat to the cat",
    "input_image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png",
}

# Call the model prediction endpoint
res = httpx.post(
    f"{KOYEB_URL}/predict",
    json=payload,
    timeout=60.0,
)

# Get the first output image from the JSON response
output = res.json()["images"][0]

# Convert the Base64 model output to an image and save it to disk
img = b64_to_pil(output)
img.save("output_image.png")
The snippet above showcases how to interact with the FLUX.1 Kontext [dev] model to edit an image from a text prompt and save it to disk.
Take care to replace the KOYEB_URL value in the snippet with your Koyeb App public URL.
Executing the Python script generates an image and saves it to disk:
python main.py
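The snippet above assumes the request always succeeds. In practice, you may want to check the HTTP status and handle network errors before decoding the response, especially since a service with scale-to-zero enabled can take longer to answer the first request after an idle period. The sketch below is a minimal example built on the same /predict endpoint and response shape shown above; the longer timeout value and the printed messages are illustrative assumptions, not values required by the endpoint.

import httpx

KOYEB_URL = "https://<YOUR_DOMAIN_PREFIX>.koyeb.app"

payload = {
    "prompt": "Add a hat to the cat",
    "input_image_url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png",
}

try:
    # Allow extra time in case the service is scaling up from zero (assumed value)
    res = httpx.post(f"{KOYEB_URL}/predict", json=payload, timeout=120.0)
    # Raise an exception for 4xx/5xx responses instead of parsing an error body
    res.raise_for_status()
except httpx.HTTPStatusError as exc:
    print(f"The endpoint returned an error: {exc.response.status_code} {exc.response.text}")
except httpx.RequestError as exc:
    print(f"The request failed before a response was received: {exc}")
else:
    images = res.json().get("images", [])
    print(f"Received {len(images)} image(s) from the endpoint")

Once the call succeeds, you can reuse the b64_to_pil helper from the main snippet to decode the returned images and save them to disk.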