Running an HTTPS Python Server on EC2 in 5 Minutes

No Docker needed. Just the simplest setup you can get for launching your app and getting your dopamine hit.

pancy
4 min read · Dec 30, 2020

I have grown very wary of all the extra layers of technology (I’m looking at you, Docker) that have compounded around launching a piece of software on the web. Many will frown upon you if you don’t containerize your app in 2021, but the truth is that for most use cases, containers are just a waste of time and headspace compared to doing it the old-school way.

Here I’m going to walk you through setting up a running HTTPS API server in Python using FastAPI (trust me, Flask is so 2008) and the Caddy web server (a total replacement for NGINX that gives you automatic HTTPS, saving you from the pain of setting up TLS certificates).

🤷 Tip: Hold off if you are tempted to Dockerize your app for your first Product Hunt launch or Hacker News show-and-tell. You’re distracted.

Setting Up

  1. Launch a t2.micro EC2 instance on AWS. I’ll be using Amazon Linux 2 in this example, but you can stick to Ubuntu if you like.
  2. Download the .pem key file for the instance so you can SSH into it.
  3. In the instance’s security group, edit the inbound rules to open ports 80 and 443 to the public, like this:
Screenshot: EC2 inbound rules opening port 80 (HTTP) and port 443 (HTTPS) to the public.
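
The exact layout varies with the console version, but the inbound rules typically look something like this:

|  Type  | Protocol | Port range |     Source      |
|========|==========|============|=================|
| HTTP   | TCP      | 80         | 0.0.0.0/0, ::/0 |
| HTTPS  | TCP      | 443        | 0.0.0.0/0, ::/0 |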

4. SSH into the instance with:

ssh -i my-key.pem ec2-user@<your-instance-public-dns-name>

💁‍♀️ Your EC2 instance’s public DNS name usually ends with .compute.amazonaws.com. Note that ec2-user is currently the default user name for Amazon Linux 2 instances.

5. Install the Caddy web server with:

sudo yum install yum-plugin-copr
sudo yum copr enable @caddy/caddy
sudo yum install caddy

This installation might start the Caddy server process in the background right away. To be sure it isn’t running yet, run caddy stop.
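
To confirm the binary is installed and on your PATH, you can print its version:

caddy version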

6. Optionally, you may need to grant the non-root caddy binary the capability to bind to low-numbered ports. This allows the Caddy web server to bind to ports 80 and 443.

sudo setcap CAP_NET_BIND_SERVICE=+eip /usr/bin/caddy
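
If you want to double-check that the capability stuck, getcap (part of libcap) should list it, and you should see cap_net_bind_service in the output:

getcap /usr/bin/caddy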

7. Create a file named Caddyfile anywhere on your instance with the following content:

mystartup.io

reverse_proxy localhost:8000 {
    header_up X-Forwarded-For {http.reverse_proxy.downstream.hostport}
    header_up Host {http.reverse_proxy.upstream.hostport}
}

This Caddyfile is the equivalent of nginx.conf in NGINX, albeit simpler. It basically forwards traffic from ports 80 and 443 to your Python server running on port 8000.

Replace mystartup.io with the custom domain name you wish to use. If you are happy to use the EC2 public DNS name, just replace it with localhost.
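
Before launching, you can also ask Caddy to parse and validate the file (run this from the directory that contains it):

caddy validate --config Caddyfile --adapter caddyfile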

Custom Domain

If you wish to use a custom domain, an extra step here is to go to your domain’s DNS provider (you know, GoDaddy, Namecheap, and the like) and add an A record pointing the domain’s host to your EC2 instance’s public IP. Most often it will look like this:

| Record Type | Host |      IP Address       |
|=============|======|=======================|
|      A      |  @   | <your-ec2-ip-address> |
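
Once the record has propagated, a quick lookup should return your EC2 public IP (dig may need a sudo yum install bind-utils on the instance first):

dig +short mystartup.io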

Write Code

Now we are going to write a minimal server in Python. You can either write it locally and copy it to your remote EC2 instance with scp, or write it directly on the instance. Here I assume the latter (in which case installing a nicer editor like Emacs or Vim helps).

You can just clone it from my repo.

Install the following dependencies from this requirements.txt file:

fastapi
uvicorn
uvloop
gunicorn
httptools
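
How you install these is up to you. Here is a minimal sketch assuming Amazon Linux 2 and the system Python (a virtualenv works just as well; package names may differ on Ubuntu):

sudo yum install -y python3 python3-pip
pip3 install --user -r requirements.txt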

Then write or copy the code into a main.py. This server listens on port 8000 by default and returns a JSON response containing a greeting with the name provided by the client in the query parameter, defaulting to “John”.

# main.py
from typing import Optional

from fastapi import FastAPI

app = FastAPI(title="My greeting server")

@app.get("/api/greet")
async def greet(name: Optional[str] = None):
    if name is None:
        name = "John"
    return {"greeting": f"Hello, {name}!"}

Run the server with uvicorn main:app --reload for development with automatic reloading. In production, however, you can use

gunicorn -w 1 -k uvicorn.workers.UvicornWorker main:app

The app should listen indefinitely on localhost:8000.
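
From a second SSH session (or after backgrounding Gunicorn, for example with its --daemon flag), you can sanity-check the server with curl:

curl "http://localhost:8000/api/greet?name=Pan"
{"greeting":"Hello, Pan!"}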

Launch 🚀

Still in your SSH session on the EC2 instance, cd to the directory where your Caddyfile is located, then run caddy run (which stays in the foreground) or caddy start (which runs it in the background).

Now your app server is launched to the world at https://mystartup.io. Try visiting https://mystartup.io/api/greet?name=Pan and see the awesome JSON response.
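
The same check works from your own terminal; without the query parameter you should get the default name back:

curl https://mystartup.io/api/greet
{"greeting":"Hello, John!"}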

Bonus: Gracefully Reloading the Server

When you need to make an update to your Python server code, simply pull the update or make the change on your instance, then run

kill -HUP <master-pid>

where <master-pid> is the Gunicorn master process ID you get from running lsof -i:8000. This gracefully restarts the workers so they serve the latest code.
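
For example, you can list the candidates and pick the parent (the worker’s PPID column points at the master):

lsof -i :8000
ps -ef | grep gunicorn
kill -HUP <master-pid>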

Written by pancy

I’m interested in Web3 and machine learning, and helping ambitious people. I like programming in OCaml and Rust. I angel invest sometimes.
