This article demonstrates how to run a Python API on Docker using the FastAPI framework. We’ll also take advantage of FastAPI’s built-in OpenAPI documentation feature.
Considerations
The code presented here runs on Python 3.11. On the host we’ll set up a virtual environment to cache the requirements rather than installing them globally; inside the Docker container we’ll install them globally, since we’re root in there and the container already provides isolation. The host venv is a bit of an overkill, but it lets you run this Python microservice API directly from the host as well. I shouldn’t need to specify this, but you mustn’t do this in production.
Python API Service
The FastAPI framework allows for quick deployment of APIs. In the interest of separation of concerns, let’s place our environment loader (i.e. the code that reads from the .env file) in its own class and load it when needed (./environment.py):
import os

from dotenv import load_dotenv


class LoadEnv:
    def __init__(self) -> None:
        load_dotenv()
        self.app_name = os.environ.get("app_name")
        self.app_description = os.environ.get("app_description")
        self.app_version = os.environ.get("app_version", "1")
        self.app_port = int(os.environ.get("app_port", 8089))
        self.app_debug = os.environ.get("app_debug", "false").lower() == "true"
        # Used by the Redis-backed cache further down.
        self.redis_host = os.environ.get("redis_host", "localhost")
        self.redis_port = int(os.environ.get("redis_port", 6379))
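Note that os.environ.get only ever returns strings, so boolean-looking values such as app_debug need explicit coercion. One way to centralise that is a small helper; this is a sketch (the env_bool name is my own, not part of any library):

```python
import os

def env_bool(name: str, default: bool = False) -> bool:
    # Environment variables are always strings; map common truthy spellings.
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "yes", "on"}

os.environ["app_debug"] = "true"
print(env_bool("app_debug"))           # True
print(env_bool("missing_flag_xyz"))    # False
```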
Now consider a file service.py containing the following:
from fastapi import FastAPI

from environment import LoadEnv

env = LoadEnv()

app = FastAPI(
    title=env.app_name,
    description=env.app_description,
    version=env.app_version,
)


@app.get("/")
def home():
    return f"API {env.app_name} v{env.app_version}"


@app.get("/ping")
def pong():
    return "pong"
With that in place, FastAPI automatically serves interactive OpenAPI documentation at /docs. We’ve also employed a few libraries which are not loaded by default in Python, so we’ll need to define our requirements list for runtime.
Python requirements
It is an industry standard to define the requirements for your Python application in a requirements.txt file in the root of your project. For a simple FastAPI app we would need the following (plus uvicorn, the ASGI server that will actually run the app):
fastapi
python-dotenv
uvicorn
redis
So Redis is a requirement…
Cache / Storage with Redis
Let’s write a simple client in Python to connect to a Redis server, configured via environment variables. I’m calling this class Cache rather than something Redis-specific, in an attempt to decouple my API from Redis (although true decoupling involves more than naming). I’ll keep it in its own file, ./cache.py:
import redis

from environment import LoadEnv


class Cache:
    def __init__(self):
        self.env = LoadEnv()
        # A direct, single connection to Redis.
        self.instance = redis.Redis(
            host=self.env.redis_host,
            port=self.env.redis_port,
        )
        # A pooled client: connections are shared and reused across calls.
        self.pool = redis.ConnectionPool(
            host=self.env.redis_host,
            port=self.env.redis_port,
            db=0,
        )
        self.client = redis.Redis(connection_pool=self.pool)

    def get_instance(self):
        return self.instance

    def get_client(self):
        return self.client
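The point of the ConnectionPool is that many requests share a bounded set of connections instead of opening a fresh socket per call. The idea can be sketched with a stdlib stand-in; this is an illustration of the principle, not redis-py’s actual implementation (FakeConnection and SimplePool are invented names):

```python
import queue

class FakeConnection:
    """Stand-in for a Redis socket connection."""
    _created = 0
    def __init__(self):
        FakeConnection._created += 1
        self.id = FakeConnection._created

class SimplePool:
    """Tiny object pool: hand out idle connections, create up to max_size."""
    def __init__(self, max_size: int = 2):
        self._idle: queue.Queue = queue.Queue()
        self._max = max_size
        self._made = 0

    def acquire(self) -> FakeConnection:
        try:
            return self._idle.get_nowait()   # reuse an idle connection
        except queue.Empty:
            if self._made < self._max:
                self._made += 1
                return FakeConnection()      # grow the pool up to max_size
            return self._idle.get()          # otherwise block until one frees up

    def release(self, conn: FakeConnection) -> None:
        self._idle.put(conn)

pool = SimplePool(max_size=2)
a = pool.acquire()
pool.release(a)
b = pool.acquire()
print(a is b)                    # True: the released connection is reused
print(FakeConnection._created)   # 1
```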
Then we can implement a new API endpoint (in service.py) that returns the status of our caching implementation, in this case Redis. Note that it expects a status key to exist in Redis; you can seed one with redis-cli set status healthy.
from fastapi import HTTPException

from cache import Cache


@app.get("/cache")
def cache_check():
    caching = Cache().get_instance()
    status = caching.get("status")
    cache_status = status.decode() if status else "unknown"
    if cache_status == "healthy":
        return "OK"
    raise HTTPException(status_code=503, detail=f"status: {cache_status}")
ENV
Your environment variables so far are (note that redis_port must match the port Redis listens on inside the Docker network, i.e. 6379):
app_name="fastAPI microService"
app_description="I wrote an API in Python"
app_port=8089
app_debug=true
redis_port=6379
redis_host="redis"
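For the curious, what load_dotenv does under the hood is conceptually simple: parse KEY=VALUE lines from the .env file and export them into os.environ. A minimal stdlib illustration (not python-dotenv’s real implementation, which handles many more edge cases):

```python
import os
import tempfile

# A sample .env written to a temporary file for the demo.
env_text = 'app_name="fastAPI microService"\napp_port=8089\n'

with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write(env_text)
    path = fh.name

# Parse KEY=VALUE lines and export them, skipping blanks and comments.
with open(path) as fh:
    for line in fh:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ[key.strip()] = value.strip().strip('"')

print(os.environ["app_name"])   # fastAPI microService
print(os.environ["app_port"])   # 8089
```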
Docker compose
Let’s put the two services together in a docker-compose.yml
file:
version: "3"
services:
  microservice:
    image: python:3.11
    container_name: microservice-api
    restart: unless-stopped
    working_dir: /app
    # Install the requirements and start the app when the container boots.
    # Fine for a demo; bake a proper image for anything beyond that.
    command: >
      sh -c "pip install --no-cache-dir -r requirements.txt &&
             uvicorn service:app --host 0.0.0.0 --port 8089"
    ports:
      - ${app_port:-8089}:8089
    volumes:
      - .:/app/.
    logging:
      options:
        max-size: "1m"
        max-file: "1"
  redis:
    image: redis:latest
    container_name: microservice-cache
    command: ["redis-server", "--appendonly", "yes"]
    restart: unless-stopped
    ports:
      - 127.0.0.1:${redis_port:-6379}:6379
    environment:
      - TZ=Europe/London
    volumes:
      - redis-data:/data

volumes:
  redis-data:
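If you want the API container to wait until Redis actually accepts connections (rather than merely having been started), compose supports a healthcheck combined with a depends_on condition. A sketch of the fragment you would merge into the file above (tune the intervals to taste):

```yaml
services:
  redis:
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
  microservice:
    depends_on:
      redis:
        condition: service_healthy
```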
Other considerations
Catch-all exception handling
Add this block to your service.py file:
from fastapi import HTTPException, Request
from fastapi.responses import JSONResponse


@app.exception_handler(HTTPException)
async def http_exception_handler(request: Request, exc: HTTPException):
    return JSONResponse(
        status_code=exc.status_code,
        content={
            "status_code": exc.status_code,
            "success": False,
            "message": exc.detail,
            "response": [],
        },
    )
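The benefit is that every HTTP error now shares one JSON envelope, which clients can parse uniformly. A tiny stdlib demonstration of the shape the handler produces (error_envelope is an illustrative helper, not a FastAPI API):

```python
import json

def error_envelope(status_code: int, detail: str) -> str:
    # Mirrors the handler above: one uniform JSON shape for every HTTP error.
    return json.dumps({
        "status_code": status_code,
        "success": False,
        "message": detail,
        "response": [],
    })

print(error_envelope(404, "item not found"))
# {"status_code": 404, "success": false, "message": "item not found", "response": []}
```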
Bonus feature
Put it all together in a Bash script (call it run.sh) so that when you onboard new devs they can just pull down the repository, run the script, and get hacking:
#!/bin/bash
python -m venv .venv && \
source .venv/bin/activate && \
.venv/bin/pip3 install --no-cache-dir --upgrade pip && \
.venv/bin/pip3 install --no-cache-dir --upgrade -r requirements.txt
docker compose pull
docker compose up -d --force-recreate --build --remove-orphans
docker ps
Remember to chmod +x run.sh so you can then run it with ./run.sh.
Conclusions
I personally want to avoid clogging the root of my repositories with lots of files, so I’ll put the Python code in a src directory, for instance (or app, or api, or something relevant to its purpose). Then I would place any Dockerfiles in a separate docker directory, and so forth with separate directories for any services I might have defined in my docker-compose.yml.
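Under those conventions, one possible layout (file and directory names are illustrative):

```text
.
├── docker/
│   └── Dockerfile
├── src/
│   ├── cache.py
│   ├── environment.py
│   └── service.py
├── .env
├── docker-compose.yml
├── requirements.txt
└── run.sh
```

If you move the Python code into src, remember to adjust the volume mount and the uvicorn module path in docker-compose.yml accordingly.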