
Qwen-Image-Edit

Deploy Qwen-Image-Edit with FastAPI on Koyeb GPU for high-performance, low-latency, and efficient image editing.

Deploy Qwen-Image-Edit on Koyeb’s high-performance cloud infrastructure.

With one click, get a dedicated GPU-powered inference endpoint ready to handle requests with built-in autoscaling and Scale-to-Zero.

Deploy Qwen-Image-Edit for free

Get up to $200 in credit to get started!


Overview of Qwen-Image-Edit

Qwen-Image-Edit is the image editing version of Qwen-Image. Built upon the 20B Qwen-Image model, Qwen-Image-Edit extends Qwen-Image's unique text rendering capabilities to image editing tasks, enabling precise text editing. Furthermore, Qwen-Image-Edit simultaneously feeds the input image into Qwen2.5-VL (for visual semantic control) and the VAE Encoder (for visual appearance control), supporting both semantic and appearance editing.

Qwen-Image-Edit is served using the FastAPI web framework.

The default GPU for running this model is the NVIDIA A100 instance type. You are free to adjust the GPU instance type to fit your workload requirements.

Quickstart

The Qwen-Image-Edit one-click model is powered by FastAPI. FastAPI is a modern, high-performance web framework for building APIs with Python, based on standard Python type hints.

After you deploy the model, copy the Koyeb App public URL similar to https://<YOUR_DOMAIN_PREFIX>.koyeb.app and create a simple Python file with the following content to start interacting with the model. Due to FastAPI's automatic interactive documentation, you can easily view the API docs for this model at https://<YOUR_DOMAIN_PREFIX>.koyeb.app/docs.

import base64
import requests

url = "https://<YOUR_DOMAIN_PREFIX>.koyeb.app/predict"

payload = {
    "prompt": "Change the background to be a field full of grass and daisies.",
    "img_url": "https://upload.wikimedia.org/wikipedia/commons/1/1f/Oryctolagus_cuniculus_Rcdo.jpg",
    "num_inference_steps": 28,
    "seed": 0,
    "width": 1024,
    "height": 1024,
    "num_images_per_prompt": 1
}

response = requests.post(url, json=payload)

if response.status_code == 200:
    data = response.json()

    # Decode the first image
    img_b64 = data["images"][0].split(",")[1]
    img_bytes = base64.b64decode(img_b64)

    with open("result.jpg", "wb") as f:
        f.write(img_bytes)

    print("Saved result.jpg")
else:
    print("Error:", response.status_code, response.text[:400])

The previous code snippet uses the Python requests library to call the predict endpoint, which returns a base64-encoded image.

Take care to replace the url value in the snippet with your Koyeb App public URL.
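Because the service is a standard FastAPI app, it also serves a machine-readable OpenAPI schema at /openapi.json alongside the interactive docs at /docs. A minimal sketch for listing the available endpoints once you have replaced the placeholder with your Koyeb App domain:

```python
import requests

# FastAPI serves the OpenAPI schema at /openapi.json by default,
# in addition to the interactive documentation at /docs.
base_url = "https://<YOUR_DOMAIN_PREFIX>.koyeb.app"
schema_url = f"{base_url}/openapi.json"

# Uncomment once the placeholder URL is replaced with your App URL:
# schema = requests.get(schema_url).json()
# for path, methods in schema["paths"].items():
#     print(path, list(methods))
```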

Securing the inference endpoint

To ensure that only authenticated requests are processed, we recommend setting up an API key to secure your inference endpoint. Follow these steps to configure the API key:

  1. Generate a strong, unique API key to use for authentication.
  2. Navigate to your Koyeb Service settings.
  3. Add a new environment variable named FAST_API_KEY and set its value to your secret API key.
  4. Save the changes and redeploy to update the service.

Once the service is updated, all requests to the inference endpoint will require the API key.

When making requests, ensure the API key is included in the headers.
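As a sketch, an authenticated request might look like the following. Note this is a hypothetical example: the exact header the service expects depends on how the FastAPI app validates the key; a Bearer-style Authorization header is a common convention and is assumed here, so check the /docs page of your deployment for the exact scheme.

```python
import os
import requests

# Read the API key from the environment rather than hard-coding it.
API_KEY = os.environ.get("FAST_API_KEY", "<YOUR_API_KEY>")
url = "https://<YOUR_DOMAIN_PREFIX>.koyeb.app/predict"

# Assumption: a Bearer-style Authorization header; adjust to match
# how your deployed app actually checks the key.
headers = {"Authorization": f"Bearer {API_KEY}"}
payload = {
    "prompt": "Change the background to be a field full of grass and daisies.",
    "img_url": "https://upload.wikimedia.org/wikipedia/commons/1/1f/Oryctolagus_cuniculus_Rcdo.jpg",
}

# Uncomment once the placeholder URL and key are filled in:
# response = requests.post(url, json=payload, headers=headers)
```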

Deploy AI apps to production in minutes

Koyeb is a developer-friendly serverless platform to deploy apps globally. No-ops, servers, or infrastructure management.