I own a DigitalOcean droplet (a virtualized server) that hosts some small web applications I use in my daily life, e.g. a journal and an app to track expenses. The droplet has a fixed IP that it can be reached from, but being forced to type in a cryptic URL like 220.127.116.11:8888/ to log in is not a great way to access the apps.
The IP and port number are technical details that one should not have to know to access an application. It would be much nicer to reach the apps via a URL that is easy to remember and type, for example myapp.mydomain.com or mydomain.com/myapp. On top of that, the applications should not have to know how and where they are deployed; they should still be able to use relative URLs in their code. This can be a problem depending on how we set things up, because the browser follows the same-origin policy. We also want https to encrypt communication with the apps, and we definitely do not want people to be greeted by a “this page is insecure” warning or anything similar, so we will want to use a valid certificate from letsencrypt.
Assume we registered a domain named domain.com and can extend its DNS entries, and that we have two apps named app1 and app2 running on a server reachable at 18.104.22.168, each exposed on its own local port (app2 on port 5002). The following is a description of how to set up that server to expose these applications via custom subdomains using nginx and make it all work as desired.
Setup DNS Entries
We need to create DNS A records that point a subdomain to our application server, so basically one for each app:
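In your DNS provider's panel, the two entries might look like this (the TTL is just an example value):

```
Type    Hostname            Value            TTL
A       app1.domain.com     18.104.22.168    3600
A       app2.domain.com     18.104.22.168    3600
```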
Yes, both just point to the same server; we cannot point directly to a port, since DNS does not support that. We will instead have to distinguish the apps on the server itself to do the actual routing. Creating these entries can take up to 48 hours to have an effect, but in my case it was much faster and available after a couple of minutes.
ssh to your server and continue.
Create an SSL Certificate
To be able to use https to serve our applications we need a certificate. Fortunately these days we can get that for free from letsencrypt.org. The details are all explained on https://certbot.eff.org; as a quick recap, we install a tool named certbot and use it to temporarily expose information on our server to be validated by the letsencrypt service, so that we demonstrate ownership/control of the server at the given domain(s). In our case, we need the certificate to work for multiple subdomains: at the end of this month (February 2018), letsencrypt will offer wildcard certificates for this (a certificate valid for *.domain.com), but right now we need to create a certificate by passing the explicit subdomains ourselves.
$ certbot certonly -d apps.domain.com,app1.domain.com,app2.domain.com
You may wonder why we specify 3 domains here: the first one will be used as the filename, so ideally use something that describes what the certificate is valid for. This will only work, though, if we also have a DNS entry for it, so make sure to create one using your DNS provider's tools as described above:
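For example, one extra A record pointing the certificate's primary name at the same server:

```
Type    Hostname            Value            TTL
A       apps.domain.com     18.104.22.168    3600
```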
This is not needed, but I think it is nicer than having a certificate file named after only one of the included subdomains. The certbot will ask for a method to demonstrate ownership: choosing option 2) will spin up a local webserver, so make sure no other running server (like nginx, apache or similar) is blocking the default http port 80. Shut them down if needed.
You should get a folder with certificate and key files at the default location; in my case on Ubuntu that was /etc/letsencrypt/live/apps.domain.com/. The important files for us here are fullchain.pem and privkey.pem. It is recommended to not move the files and just let them stay there. We are going to have to tell nginx, and possibly any app that wants to use https, where they are later.
Setup nginx
First we install nginx:
$ sudo apt-get update
$ sudo apt-get install nginx
Then we create a new config file and softlink to enable it:
$ touch /etc/nginx/sites-available/apps.conf
$ ln -s /etc/nginx/sites-available/apps.conf /etc/nginx/sites-enabled/apps.conf
Open that file in your favorite editor and add configuration as below. A few notes on how it is set up: we are creating one server block per app, which will be chosen automatically based on the unique URL used as the server_name that we get by using a subdomain. The exact configuration of course depends on the nature of the application; I am going to add a very simple and a slightly more complex example. We also have two other default servers, one for http that simply redirects to https, the other being used for https and adding the certificate that we created earlier.
For convenience, we create a short snippet that imports the SSL certificate and key:
$ touch /etc/nginx/snippets/ssl-apps.domain.com.conf
Add the following text with your favorite editor:
ssl_certificate /etc/letsencrypt/live/apps.domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/apps.domain.com/privkey.pem;
We will be able to reuse that in any nginx configuration, and it may save us a couple of redundant lines here and there. The rest of the nginx configuration is explained as inline comments:
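A sketch of what /etc/nginx/sites-available/apps.conf could look like. Note that app1's upstream port (5001 here) and the exact proxy headers are assumptions; adapt them to your setup:

```nginx
# Default http server: redirect all plain-http traffic to https.
server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

# Default https server: catches requests for unknown hostnames.
server {
    listen 443 ssl default_server;
    server_name _;
    include snippets/ssl-apps.domain.com.conf;
    return 404;
}

# app1: a simple app speaking plain http locally.
# The port 5001 is an assumption -- use whatever port app1 listens on.
server {
    listen 443 ssl;
    server_name app1.domain.com;
    include snippets/ssl-apps.domain.com.conf;

    location / {
        proxy_pass http://127.0.0.1:5001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# app2: gunicorn serves https itself on port 5002 (see the unit file),
# so we proxy to it via https.
server {
    listen 443 ssl;
    server_name app2.domain.com;
    include snippets/ssl-apps.domain.com.conf;

    location / {
        proxy_pass https://127.0.0.1:5002;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```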
Here is an example of how app2 (assuming it is a Python app) could be hosted using gunicorn, defined in a systemd unit file so that the app is managed and e.g. restarted by systemd after reboot:
[Unit]
Description=App 2

[Service]
User=root
WorkingDirectory=/opt/apps/app2
ExecStart=/opt/.virtualenvs/app2/bin/gunicorn app2.wsgi:application \
    --workers 4 \
    --bind 0.0.0.0:5002 \
    --log-level debug \
    --certfile /etc/letsencrypt/live/apps.domain.com/fullchain.pem \
    --keyfile /etc/letsencrypt/live/apps.domain.com/privkey.pem \
    --access-logfile /var/log/apps/app2/access.log \
    --error-logfile /var/log/apps/app2/error.log

[Install]
WantedBy=multi-user.target
Alias=app2.service
Make sure to place it where systemd can find it, enable and reload systemd to make it work.
$ # Place here: /etc/systemd/system/app2.service
$ systemctl enable app2.service
$ systemctl daemon-reload
$ systemctl restart app2
$ systemctl status app2
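For completeness, the app2.wsgi:application callable referenced in the unit file could be as simple as this minimal WSGI sketch (a hypothetical stand-in; the real module is of course your own application code):

```python
# app2/wsgi.py -- a minimal WSGI application, standing in for the real app.
def application(environ, start_response):
    # A WSGI app receives the request environment and a start_response
    # callable, sends the status line and headers, and returns the body
    # as an iterable of bytes.
    body = b"Hello from app2"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

gunicorn imports this module and calls `application` for every request; any WSGI framework (Django, Flask, ...) exposes an equivalent callable.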
The applications should now be accessible in any browser at https://app1.domain.com and https://app2.domain.com, all working with relative URLs and without showing any warnings to the user. This way we can have very convenient URLs to access our custom software on any computer and we only have to own (and pay for!) a single domain instead of multiple ones.
Two things worth mentioning: I am using subdomains and proxy_pass here as opposed to sub-URLs with a redirect or URL rewrite in nginx, because a) redirects would change the URL and port that the user sees in the browser, and b) rewrites have issues with applications that need to work with relative URLs when doing AJAX requests etc., since those requests will end up at the wrong path. This could be fixed by adding more location rules to the server block, but that would be putting API-specific logic into the nginx script, which is not a great idea in my opinion.
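The relative-URL problem from b) can be illustrated with Python's urljoin (the URLs are made up for illustration):

```python
from urllib.parse import urljoin

# An app served under a sub-URL: the page lives below /myapp/ ...
page = "https://mydomain.com/myapp/index.html"
# ... but a root-relative request ("/api/data") resolves against the
# domain root, losing the /myapp prefix and hitting the wrong path:
print(urljoin(page, "/api/data"))  # -> https://mydomain.com/api/data

# With a dedicated subdomain there is no prefix to lose:
page = "https://app1.domain.com/index.html"
print(urljoin(page, "/api/data"))  # -> https://app1.domain.com/api/data
```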