FastAPI + Docker for Quick Python APIs
Add lightning-fast Python endpoints to your Laravel app using FastAPI and Docker.
Have you ever been working on a Laravel project when you suddenly need to expose a tiny Python endpoint—maybe to serve an ML model, process some data, or just run a quick script? You could wrestle with PHP bindings or hack together a CLI call, but there’s a cleaner, faster way: spin up a microservice with FastAPI and Docker. 🚀
Imagine this: your Laravel app doing its thing, not even knowing there’s a Python service running alongside it. Meanwhile, a lean FastAPI container serves an AI model, handles async tasks, or just answers a quick /ping to prove it’s alive. Let’s build that today—in under 10 minutes.
Why FastAPI + Docker
Separation of concerns.
We keep our Laravel code in PHP, and all Python-specific logic (whether it’s ML inference, data transformation, or simple utilities) in a standalone service. No more composer vs. pip drama.
Blazing performance & auto-docs.
FastAPI (powered by Uvicorn) delivers near–Node.js speeds, full async support, and instantly generates OpenAPI/Swagger docs—so we spend less time wiring and more time coding.
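As a taste of how little code that takes, here is a minimal FastAPI app (a sketch, separate from the skeleton we’ll use below) with a single async endpoint; the interactive /docs page comes for free:
# minimal sketch (not part of the skeleton below)
from fastapi import FastAPI

app = FastAPI()

@app.get("/ping")
async def ping():
    # async def lets Uvicorn's event loop serve other requests while this one awaits I/O
    return {"status": "ok"}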
Portable everywhere.
Docker wraps the entire environment—Python version, dependencies, configuration—into a single image. It runs the same on your laptop, in CI, or on Kubernetes. Zero surprises.
What we’ll build in this tutorial
As a base, we’ll use Labrodev’s Fast API Skeleton.
It gives us, as an example:
A single POST /bmi endpoint to calculate Body Mass Index (BMI)
Docker support via a straightforward Dockerfile
A Makefile for common build/run/stop/logs/restart commands
Instructions for local dev and testing
Project structure
fast-api-skeleton/
├── app/
│   ├── main.py          # FastAPI application
│   └── schemas.py       # Pydantic models
├── Dockerfile           # Container build instructions
├── Makefile             # Docker lifecycle commands
├── requirements.txt     # Python dependencies
└── README.md            # Project documentation
Requirements
Python 3.9+
Docker (for containerized deployment)
(Optional) curl or any HTTP client for testing
Running locally
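(Optional) Create and activate a virtual environment first so dependencies stay isolated; this is standard Python practice rather than a skeleton requirement:
python -m venv .venv
source .venv/bin/activate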
Install dependencies:
pip install --no-cache-dir -r requirements.txt
Start the FastAPI server:
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
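Once the server is up, FastAPI’s auto-generated interactive docs are available at http://localhost:8000/docs (Swagger UI) and http://localhost:8000/redoc (ReDoc), with no extra configuration.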
Running with Docker
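The image is built from the skeleton’s Dockerfile. As a rough sketch of what such a file typically looks like (the skeleton’s actual Dockerfile may differ; note that Uvicorn listens on port 80 inside the container, which matches the -p 8000:80 mapping used below):
# Dockerfile (illustrative sketch; check the repository for the real one)
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app ./app
# serve on port 80 inside the container (mapped to 8000 on the host below)
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]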
Build the Docker image:
docker build -t fast-api-skeleton-app .
Run the container:
docker run -d --name fast-api-skeleton -p 8000:80 fast-api-skeleton-app
View logs:
docker logs -f fast-api-skeleton
Stop and remove the container:
make stop
make rm
You can check the Makefile for the full set of command aliases:
build: # Build the Docker image.
	docker build -t fast-api-skeleton-app .

run: # Run the container (detached).
	docker run -d --name fast-api-skeleton -p 8000:80 fast-api-skeleton-app

stop: # Stop the running container.
	docker stop fast-api-skeleton

rm: # Remove the stopped container.
	docker rm fast-api-skeleton

logs: # Follow container logs.
	docker logs -f fast-api-skeleton

restart: # Rebuild and restart the container.
	make stop
	make rm
	make build
	make run
Test example
So, after successfully installing and starting the container, let’s try out the functionality.
The example in the skeleton is a tiny API with a single POST /bmi endpoint. Its goal is to calculate Body Mass Index from the incoming parameters: name, weight, and height.
Let’s look inside app/main.py:
# app/main.py
from fastapi import FastAPI, HTTPException

from .schemas import InputData

app = FastAPI()


@app.post("/bmi")
def calculate_bmi(input_data: InputData):
    # unpack
    w = input_data.weight
    h = input_data.height

    # safety check (the schema could also enforce this with Pydantic's gt=0)
    if h <= 0:
        raise HTTPException(status_code=400, detail="Height must be > 0")

    # BMI formula
    bmi = w / (h * h)
    bmi_rounded = round(bmi, 1)

    return {
        "name": input_data.name,
        "bmi": bmi_rounded,
        "category": interpret_bmi(bmi)
    }


def interpret_bmi(bmi: float) -> str:
    if bmi < 18.5:
        return "Underweight"
    elif bmi < 25.0:
        return "Normal weight"
    elif bmi < 30.0:
        return "Overweight"
    else:
        return "Obesity"
Here in main.py we can see the definition of a POST method on the /bmi endpoint, and inside it the logic that calculates the BMI and interprets the result. The incoming parameters are described by the InputData class, which is imported from and defined in schemas.py.
And our request schema in app/schemas.py:
# app/schemas.py
from pydantic import BaseModel


class InputData(BaseModel):
    name: str
    weight: float
    height: float
In the InputData class we describe our request body parameters and their data types.
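If we wanted Pydantic itself to reject zero or negative values (so the manual check in main.py becomes redundant), we could tighten the schema with field constraints; this is a sketch, not necessarily how the skeleton defines it:
from pydantic import BaseModel, Field


class InputData(BaseModel):
    name: str
    weight: float = Field(..., gt=0)  # kilograms, must be positive
    height: float = Field(..., gt=0)  # metres, must be positive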
With this in place, a POST to /bmi with:
{
    "name": "Alice",
    "weight": 70.0,
    "height": 1.75
}
Returns:
{
    "name": "Alice",
    "bmi": 22.9,
    "category": "Normal weight"
}
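The same request can be sent from a short Python script instead of curl; here is a sketch using the requests library, assuming the service is exposed on localhost:8000 as configured above:
import requests

payload = {"name": "Alice", "weight": 70.0, "height": 1.75}
response = requests.post("http://localhost:8000/bmi", json=payload)
print(response.json())  # {'name': 'Alice', 'bmi': 22.9, 'category': 'Normal weight'}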
This skeleton lets you quickly stand up tiny endpoints that send your input straight to an ML model and return just the results you need. You can then call these endpoints from anywhere in your system—your Laravel app, background jobs, or other services—for seamless, Python-powered inference.
Conclusion
We’ve seen how to spin up a tiny, Docker-ready FastAPI service in minutes—complete with a real /bmi endpoint, auto-generated docs, and a Makefile workflow. That BMI calculator is just a stand-in: we can easily swap in a TensorFlow or PyTorch model by:
Saving our trained model (e.g. models/my_model.pkl).
Mounting or copying it into the container.
Adding a new endpoint that loads the model and returns its predictions (see the sketch after this list).
Defining matching Pydantic schemas in app/schemas.py.
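As a rough illustration of those steps, here is a sketch of an inference endpoint. The model path, input schema, and the use of a pickled scikit-learn-style model are all assumptions for the example, not part of the skeleton:
# sketch: an illustrative inference endpoint (model path and schema are hypothetical)
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# hypothetical path from the steps above; load the model once at startup
with open("models/my_model.pkl", "rb") as f:
    model = pickle.load(f)


class PredictionInput(BaseModel):
    features: list[float]  # hypothetical input shape; depends on your model


@app.post("/predict")
def predict(data: PredictionInput):
    # assumes the model returns a single numeric prediction per sample
    prediction = model.predict([data.features])[0]
    return {"prediction": float(prediction)}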
From our Laravel application, we simply make an HTTP call—no messy PHP bindings required. In this setup, FastAPI becomes the bridge between our business logic (Laravel) and any AI model we’ve built. Tiny, focused, and lightning-fast. 🎉
Let’s give our Laravel projects that Python-powered sidekick!
Thanks for reading! Subscribe to Labrodev substack and let’s keep in touch!
Petro from Labrodev.
Picture used in preview credits:
Unsplash, @rocua18