Hosting Web Applications with nginx

I own a DigitalOcean droplet (a virtualized server) that hosts some small web applications I use in my daily life, e.g. a journal and an app to track expenses. The droplet has a fixed IP address that it can be reached at, but being forced to type a cryptic IP:port combination just to log in is not a great way to access the apps.
The IP and port number are technical details that one should not have to know to access an application. It would be much nicer to reach the apps via URLs that are easy to remember and type, for example app1.example.com or app2.example.com (I will use example.com as a placeholder for the real domain throughout). On top of that, the applications should not have to know how and where they are deployed; they should still be able to use relative URLs in their code – this can be a problem depending on how we set things up, because the browser enforces the same-origin policy. We also want https to encrypt communication with the apps, and we definitely do not want people to be greeted by a “this page is insecure” warning or anything similar, so we will want a valid certificate from Let's Encrypt.

Assuming we have registered a domain named example.com, can extend its DNS entries, and have two apps named app1 (exposed on port 5001) and app2 (exposed on port 5002) running on a server with a fixed IP, the following describes how to set up that server to expose the applications via custom subdomains using nginx and make it all work as desired.

Setup DNS Entries

We need to create DNS A records that point a subdomain at our application server, basically one for each app:

  • app1.example.com → &lt;server IP&gt;
  • app2.example.com → &lt;server IP&gt;

Yes, both just point to the same server: we cannot point a DNS record directly at a port, DNS does not support that. We will have to distinguish the apps on the server instead to do the actual routing. Creating these entries can take up to 48 hours to have an effect, but in my case it was much faster and they were available after a couple of minutes.
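If your DNS provider lets you edit the zone file directly, the two records above could look like this (BIND-style syntax; example.com and the IP 203.0.113.10 are placeholders for your own domain and server address):

```
; A records pointing both app subdomains at the same server
app1.example.com.   3600  IN  A  203.0.113.10
app2.example.com.   3600  IN  A  203.0.113.10
```

You can check propagation with `dig +short app1.example.com`, which should print the server IP once the records are live.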

Now we ssh into the server and continue.

Create an SSL Certificate

To be able to use https to serve our applications we need a certificate. Fortunately, these days we can get one for free from Let's Encrypt. The details are all explained on the Let's Encrypt website; as a quick recap, we install a tool named certbot and use it to temporarily expose information on our server that the Let's Encrypt service validates, so that we demonstrate ownership/control of the server at the given domain(s). In our case, we need the certificate to work for multiple subdomains. At the end of this month (February 2018), Let's Encrypt will offer wildcard certificates for this (a certificate valid for *.example.com), but right now we need to create a certificate listing the subdomains explicitly ourselves.

$ certbot certonly -d apps.example.com -d app1.example.com -d app2.example.com

You may wonder why we specify three domains here: the first one is used as the name of the certificate's directory, so ideally use something that describes what the certificate is valid for. This will only work, though, if we also have a DNS entry for it, so make sure to create one using your DNS provider's tools as described above:

  • apps.example.com → &lt;server IP&gt;

This is not strictly needed, but I think it is nicer than having a certificate named after only one of the included subdomains. certbot will ask for a method to demonstrate ownership: choosing option 2 will spin up a local webserver, so make sure no other running server (like nginx, apache or similar) is blocking the default http port 80. Shut them down if needed.

You should get a folder with certificate and key files at the default location, in my case on Ubuntu that was:

$ ls /etc/letsencrypt/live/apps.example.com/

The important files for us here are fullchain.pem (the certificate plus intermediate chain) and privkey.pem. It is recommended to not move the files and just let them stay there. We are going to have to tell nginx, and possibly any app that wants to use https, where they are later.
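One thing to plan for: Let's Encrypt certificates are only valid for 90 days, so renewal should be automated. A crontab entry along these lines is one way to do it (the schedule and hooks are my assumptions; since we obtained the certificate with the standalone method, nginx has to release port 80 during renewal):

```
# Weekly renewal check; certbot only renews certificates close to expiry.
# nginx is stopped so certbot's standalone webserver can bind port 80.
0 3 * * 1  certbot renew --quiet --pre-hook "systemctl stop nginx" --post-hook "systemctl start nginx"
```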

Setup nginx

First we install nginx:

$ sudo apt-get update
$ sudo apt-get install nginx

Then we create a new config file and softlink to enable it:

$ touch /etc/nginx/sites-available/apps.conf
$ ln -s /etc/nginx/sites-available/apps.conf /etc/nginx/sites-enabled/apps.conf

Open that file in your favorite editor and add configuration as below. A few notes on how it is set up: we create one server block per app, and nginx chooses the block automatically by matching the server_name against the unique subdomain in the request. The exact configuration of course depends on the nature of the application; I am going to add a very simple and a slightly more complex example. We also have two default servers: one for http that simply redirects to https, the other for https, loading the certificate we created above.

For convenience, we create a short snippet that imports the SSL certificate and key:

$ touch /etc/nginx/snippets/letsencrypt.conf

Add the following text with your favorite editor:

ssl_certificate /etc/letsencrypt/live/apps.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/apps.example.com/privkey.pem;

We will be able to reuse that in any nginx configuration and it may save us a couple of redundant lines here and there. The rest of the nginx configuration is explained in inline comments; once done editing, validate it with nginx -t and apply it with systemctl reload nginx:

# Redirect http requests to https. No http allowed at all here.
server {
  listen 80 default_server;
  listen [::]:80 default_server;
  # Tell the client that this has moved permanently.
  return 301 https://$host$request_uri;
}

# Apply our certificate to incoming https requests.
server {
  listen 443 ssl default_server;
  listen [::]:443 ssl default_server;
  include snippets/letsencrypt.conf;
}

# App 1: Assuming this is a very simple app running e.g. a Flask WSGI
#   application at http://localhost:5001, all we do is proxy all requests
#   to the local process. Since it is running on a local port, there is no
#   need to encrypt and we just use http in the proxy.
server {
  listen 443 ssl;
  server_name app1.example.com;
  include snippets/letsencrypt.conf;

  location / {
    # Not using a trailing slash or path will map the incoming
    # request paths 1:1 to the new host:port.
    proxy_pass http://localhost:5001;
  }
}

# App 2: This app is slightly more complicated, because it consists of a
#   frontend that is served as static files and an API that we also want
#   to be accessible without that frontend, which is why we want it to
#   use https as well.
server {
  listen 443 ssl;
  server_name app2.example.com;
  include snippets/letsencrypt.conf;

  # This app supports image uploads, so raise the default body size limit.
  client_max_body_size 50M;

  location /api {
    # Make sure the proxy is properly resolved via headers - if this is
    # not done, AJAX requests would try to use the localhost:5002 url
    # below and fail.
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;

    # If we don't set this, https requests run into a timeout because
    # nginx does not use a keep-alive connection to the upstream.
    proxy_http_version 1.1;
    proxy_set_header Connection "";

    # Note the https! To make this work, the app must be hosted using
    # e.g. an application server like gunicorn that supports https and
    # must be given the same cert/key files as this server block.
    proxy_pass https://localhost:5002/api;
  }

  # Our frontend is a javascript framework that has been built and
  # bundled up into a folder containing an index.html as well as
  # minified .css and .js files. We just serve them as static files
  # here.
  location / {
    root /opt/www/dist;
    error_page 404 =200 /index.html;
  }
}

Here is an example of how app2 (assuming it is a Python app) could be hosted using gunicorn, defined in a systemd unit file so that the app is managed by systemd and e.g. restarted after a reboot:

[Unit]
Description=App 2
After=network.target

[Service]
ExecStart=/opt/.virtualenvs/app2/bin/gunicorn app2.wsgi:application \
        --workers 4 \
        --bind 127.0.0.1:5002 \
        --log-level debug \
        --certfile /etc/letsencrypt/live/apps.example.com/fullchain.pem \
        --keyfile /etc/letsencrypt/live/apps.example.com/privkey.pem \
        --access-logfile /var/log/apps/app2/access.log \
        --error-logfile /var/log/apps/app2/error.log

[Install]
WantedBy=multi-user.target


Make sure to place it where systemd can find it, enable and reload systemd to make it work.

$ # Place here: /etc/systemd/system/app2.service
$ systemctl enable app2.service
$ systemctl daemon-reload
$ systemctl restart app2
$ systemctl status app2
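For completeness, the app2.wsgi:application referenced in the unit file just has to be a WSGI callable. A minimal, standard-library-only sketch could look like this (the module layout and response body are hypothetical; a real app would use Flask, Django or similar):

```python
# app2/wsgi.py - a minimal WSGI callable that gunicorn can serve.
# Note that gunicorn (not this module) terminates https, using the
# cert/key files passed in the unit file above.

def application(environ, start_response):
    # nginx proxies /api requests here 1:1, so PATH_INFO still starts
    # with /api and relative URLs in the app keep working.
    body = ("app2 saw path: " + environ.get("PATH_INFO", "/")).encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

With this in place, requesting https://127.0.0.1:5002/api on the server should echo the request path back.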

The applications should now be accessible in any browser at https://app1.example.com and https://app2.example.com, all working with relative URLs and without showing any warnings to the user. This way we get very convenient URLs for accessing our custom software from any computer, and we only have to own (and pay for!) a single domain instead of several.

Two things worth mentioning: I am using subdomains and proxy_pass here, as opposed to sub-URLs with a redirect or URL rewrite in nginx, because a) redirects would change the URL and port the user sees in the browser, and b) rewrites have issues with applications that need to work with relative URLs when doing AJAX requests etc., since those requests end up at the wrong path. That could be fixed by adding more location rules to the server block, but it would mean putting API-specific logic into the nginx configuration – not a great idea in my opinion.
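For contrast, the sub-URL variant I decided against would look roughly like this hypothetical fragment:

```nginx
# Serving app1 under a path prefix instead of a subdomain:
location /app1/ {
    # The trailing slash makes nginx strip the /app1/ prefix before
    # proxying, so the app sees clean paths - but any relative URL the
    # app emits now resolves against the wrong base path in the browser.
    proxy_pass http://localhost:5001/;
}
```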

