How to Unbundle nginx from Omnibus GitLab for Serving Multiple Websites

Omnibus GitLab is a software package (or software stack) that allows you to easily install and run GitLab on your Linode. This guide walks you through the process of installing and setting up your own nginx server on a typical Omnibus installation. Using the method outlined here, you are not forced to use Omnibus’s default settings, and can create as many virtual hosts as you need for hosting multiple websites and apps on the same server as your GitLab.

Preconfigured software stacks can present challenges to those who need to customize specific settings. If you require more control over your installation, consider installing GitLab from source. This application stack can also benefit from large amounts of disk space, so consider using our Block Storage service with this setup.

Before You Begin

  1. Familiarize yourself with Linode’s Getting Started guide and complete the steps for setting your Linode’s hostname and timezone.
  2. Complete the sections of our Securing Your Server guide to create a standard user account, harden SSH access and remove unnecessary network services.
  3. This guide has been tested with Ubuntu 14.04 LTS and 16.04 LTS. Some commands will be slightly different for each version, so be sure to read each step carefully for version-specific instructions.
  4. Update your system:
    sudo apt-get update && sudo apt-get upgrade
    

Note

This guide is written for a non-root user. Commands that require elevated privileges are prefixed with sudo. If you’re not familiar with the sudo command, visit our Users and Groups guide for more information.

Install Omnibus GitLab

If you’re already running an Omnibus GitLab environment, upgrade to the newest version and proceed to the next section, Unbundle nginx from Omnibus GitLab. If you’re installing GitLab for the first time, continue with the steps in this section.

Note that nginx cannot be disabled in older versions of GitLab Community Edition (CE). If you currently have an older version of GitLab CE installed, we recommend that you upgrade incrementally to avoid issues.

  1. Install the dependencies:
    sudo apt-get install curl openssh-server ca-certificates postfix
    
  2. While installing Postfix, you’ll be asked to configure a few basic settings. On the first ncurses screen, select Internet Site as the mail configuration. On the second screen, enter your fully qualified domain name (FQDN). This will be used to send email to users when configuring new accounts and resetting passwords. The rest of the mail options will be configured automatically.
  3. Add the GitLab CE repository and install the gitlab-ce package:
    curl -sS https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
    sudo apt-get install gitlab-ce
    

    You can view the contents of the script in its entirety on the GitLab website if you’re hesitant to run it sight-unseen. The GitLab downloads page also contains alternative download methods if you’re still not comfortable running their script.
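Once the package is installed, a quick status check confirms the Omnibus services came up. This is an optional convenience sketch of our own, not part of the official install steps; it assumes the package placed gitlab-ctl on your PATH and simply reports if it didn’t:

```shell
# Optional post-install sanity check (gitlab-ctl ships with gitlab-ce).
if command -v gitlab-ctl >/dev/null 2>&1; then
    sudo gitlab-ctl status || true   # each service should report "run:"
    gitlab_state="gitlab-ctl present"
else
    gitlab_state="gitlab-ctl not found -- the gitlab-ce install may have failed"
fi
echo "$gitlab_state"
```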

Unbundle nginx from Omnibus GitLab

  1. To unbundle nginx from GitLab, we’ll need to disable the version included in the Omnibus package. Add the following lines to /etc/gitlab/gitlab.rb:
    /etc/gitlab/gitlab.rb
    # Unbundle nginx from Omnibus GitLab
    nginx['enable'] = false
    # Set your Nginx's username
    web_server['external_users'] = ['www-data']
  2. Reconfigure GitLab to apply the changes:
    sudo gitlab-ctl reconfigure
    

For more information on how to customize Omnibus nginx, visit the official nginx documentation.
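To confirm the change took effect, you can check that gitlab-ctl no longer reports an nginx service after the reconfigure. This is a quick guarded check of our own, not an official verification step:

```shell
# Check whether the bundled nginx is still listed among Omnibus services.
if command -v gitlab-ctl >/dev/null 2>&1; then
    if sudo gitlab-ctl status 2>/dev/null | grep -q nginx; then
        nginx_state="bundled nginx still enabled -- re-check /etc/gitlab/gitlab.rb"
    else
        nginx_state="bundled nginx disabled"
    fi
else
    nginx_state="gitlab-ctl not found on this machine"
fi
echo "$nginx_state"
```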

Install Ruby, Passenger, and nginx

Now that GitLab’s bundled nginx has been disabled, the next step is to install and configure the web server from scratch.

  1. Since GitLab is written in Ruby, install Ruby on your system:
    sudo apt-get install ruby
    sudo gem install rubygems-update
    sudo update_rubygems
    
  2. We’ll also need to install Phusion Passenger, a web application server for Ruby. Install Phusion Passenger’s PGP key:
    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 561F9B9CAC40B2F7
    
  3. Add Passenger’s APT repository by adding the following lines to /etc/apt/sources.list.d/passenger.list:
    /etc/apt/sources.list.d/passenger.list
    deb https://oss-binaries.phusionpassenger.com/apt/passenger trusty main

    Note

    If you’re using Ubuntu 16.04, replace trusty with xenial in the line above.
  4. Update your package repositories:
    sudo apt-get update
    
  5. Install Passenger and nginx:
    sudo apt-get install nginx-extras passenger
    
  6. Enable the new Passenger module by uncommenting the include /etc/nginx/passenger.conf; line in the /etc/nginx/nginx.conf file:
    /etc/nginx/nginx.conf
    include /etc/nginx/passenger.conf;
  7. Finally, restart nginx. On Ubuntu 14.04:
    sudo service nginx restart
    

    On Ubuntu 16.04:

    sudo systemctl restart nginx
    

For further information, please refer to Installing Passenger + nginx on Ubuntu 14.04 LTS (with APT).
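Passenger ships a helper that checks its own nginx integration, which catches a missed passenger.conf include early. The snippet below is a sketch that assumes the passenger package placed passenger-config on your PATH, and reports if it didn’t:

```shell
# Validate the Passenger + nginx integration, if Passenger is installed.
if command -v passenger-config >/dev/null 2>&1; then
    sudo passenger-config validate-install --auto || true
    passenger_check="validation attempted"
else
    passenger_check="passenger-config not found; install the passenger package first"
fi
echo "$passenger_check"
```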

Create a New Virtual Host

In this section, we’ll create a new virtual host to serve GitLab. Since we’ve unbundled nginx, we’ll also be able to configure other virtual hosts for other websites and apps.

  1. Copy the default virtual host file to a new virtual host file, replacing example.com with your virtual host:
    sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/example.com
    
  2. Edit your new virtual host file to match the following, replacing example.com with your own hostname; don’t forget to add this domain to your DNS:
    /etc/nginx/sites-available/example.com
    upstream gitlab-workhorse {
      server unix:/var/opt/gitlab/gitlab-workhorse/socket;
    }
    
    ## Normal HTTP host
    server {
      ## Either remove "default_server" from the listen line below,
      ## or delete the /etc/nginx/sites-enabled/default file. This will cause gitlab
      ## to be served if you visit any address that your server responds to, eg.
      ## the ip address of the server (http://x.x.x.x/)
      listen 0.0.0.0:80 default_server;
      listen [::]:80 default_server;
      server_name YOUR_SERVER_FQDN; ## Replace this with something like gitlab.example.com
      server_tokens off; ## Don't show the nginx version number, a security best practice
      root /opt/gitlab/embedded/service/gitlab-rails/public;
    
      ## See app/controllers/application_controller.rb for headers set
    
      ## Individual nginx logs for this GitLab vhost
      access_log  /var/log/nginx/gitlab_access.log;
      error_log   /var/log/nginx/gitlab_error.log;
    
      location / {
        client_max_body_size 0;
        gzip off;
    
        ## https://github.com/gitlabhq/gitlabhq/issues/694
        ## Some requests take more than 30 seconds.
        proxy_read_timeout      300;
        proxy_connect_timeout   300;
        proxy_redirect          off;
    
        proxy_http_version 1.1;
    
        proxy_set_header    Host                $http_host;
        proxy_set_header    X-Real-IP           $remote_addr;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto   $scheme;
    
        proxy_pass http://gitlab-workhorse;
      }
    }
  3. Enable your new virtual host by symbolically linking it to sites-enabled (change example.com):
    sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
    
  4. Restart nginx to load your changes. On Ubuntu 14.04:
    sudo service nginx restart
    

    On Ubuntu 16.04:

    sudo systemctl restart nginx
    
  5. Since nginx needs to access GitLab, add the www-data user to the gitlab-www group:
    sudo usermod -aG gitlab-www www-data
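    At this point, it’s worth syntax-checking the nginx configuration and confirming the group change took effect. The following is a guarded convenience sketch, not part of the original steps; it only reports if nginx isn’t installed on the machine:

```shell
# Syntax-check the enabled vhosts and confirm www-data's groups.
if command -v nginx >/dev/null 2>&1; then
    sudo nginx -t || true    # reports any syntax error in the vhost files
    id www-data || true      # should now include the gitlab-www group
    vhost_check="checks attempted"
else
    vhost_check="nginx not installed on this machine"
fi
echo "$vhost_check"
```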
    

Congratulations! You have turned a default Omnibus GitLab server into a multi-purpose one. To serve additional websites and apps with your newly unbundled nginx server, create additional virtual hosts following the steps above and configure them to your needs. For more information, please refer to our guide on how to configure nginx.
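As an illustration of what an additional virtual host might look like, here is a minimal static-site server block. The hostname and paths are hypothetical placeholders, not values from this guide:

```nginx
# Hypothetical second vhost: a plain static site alongside GitLab.
# Save as /etc/nginx/sites-available/blog.example.com and symlink it
# into sites-enabled, just as in step 3 above.
server {
    listen 80;
    server_name blog.example.com;            # placeholder hostname
    root /var/www/blog.example.com/public;   # placeholder docroot

    access_log /var/log/nginx/blog_access.log;
    error_log  /var/log/nginx/blog_error.log;

    location / {
        try_files $uri $uri/ =404;
    }
}
```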

How To Back Up, Restore, and Migrate a MongoDB Database on Ubuntu

MongoDB is one of the most popular NoSQL database engines. It is famous for being scalable, powerful, reliable and easy to use. In this article we’ll show you how to back up, restore, and migrate your MongoDB databases.

Importing and exporting a database means dealing with data in a human-readable format, compatible with other software products. In contrast, the backup and restore operations create or use MongoDB-specific binary data, which preserves not only the consistency and integrity of your data but also its specific MongoDB attributes. Thus, for migration it’s usually preferable to use backup and restore, as long as the source and target systems are compatible.

Prerequisites

Before following this tutorial, please make sure you complete the following prerequisites:

Except where otherwise noted, all of the commands in this tutorial that require root privileges should be run as a non-root user with sudo privileges.

Understanding the Basics

Before continuing with this article, some basic understanding of the matter is needed. If you have experience with popular relational database systems such as MySQL, you may find some similarities when working with MongoDB.

The first thing you should know is that MongoDB uses json and bson (binary json) formats for storing its information. Json is the human-readable format which is perfect for exporting and, eventually, importing your data. You can further manage your exported data with any tool which supports json, including a simple text editor.

An example json document looks like this:

Example of json Format
{"address":[
    {"building":"1007", "street":"Park Ave"},
    {"building":"1008", "street":"New Ave"},
]}

Json is very convenient to work with, but it does not support all the data types available in bson. This means there will be a so-called ‘loss of fidelity’ of information if you use json. For backing up and restoring, it’s better to use the binary bson.

Second, you don’t have to worry about explicitly creating a MongoDB database. If the database you specify for import doesn’t already exist, it is automatically created. The same goes for the structure of collections (the equivalent of database tables): in contrast to other database engines, MongoDB creates the structure automatically upon the first document (database row) insert.
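To see implicit creation in action, you can insert a document into a database that doesn’t exist yet. The sketch below assumes a local mongod plus either the mongosh or legacy mongo shell, and simply reports if neither is available:

```shell
# Inserting into a nonexistent collection creates the database and the
# collection in one step -- no CREATE DATABASE / CREATE TABLE needed.
MONGO_SHELL=$(command -v mongosh || command -v mongo || echo "")
if [ -n "$MONGO_SHELL" ]; then
    "$MONGO_SHELL" --quiet --eval \
        'db.getSiblingDB("newdb").restaurants.insertOne({building: "1007", street: "Park Ave"})' \
        || true
    mongo_check="insert attempted"
else
    mongo_check="no mongo shell found; skipping the demonstration"
fi
echo "$mongo_check"
```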

Third, in MongoDB reading or inserting large amounts of data, such as for the tasks of this article, can be resource intensive and consume much of the CPU, memory, and disk space. This is something critical considering that MongoDB is frequently used for large databases and Big Data. The simplest solution to this problem is to run the exports and backups during the night or during non-peak hours.

Fourth, information consistency could be problematic if you have a busy MongoDB server where the information changes during the database export or backup process. There is no simple solution to this problem, but at the end of this article, you will see recommendations to further read about replication.

While you can use the import and export functions to back up and restore your data, there are better ways to ensure the full integrity of your MongoDB databases. To back up your data, use the command mongodump. For restoring, use mongorestore. Let’s see how they work.

Backing Up a MongoDB Database

Let’s cover backing up your MongoDB database first.

An important argument to mongodump is --db, which specifies the name of the database you want to back up. If you don’t specify a database name, mongodump backs up all of your databases. The second important argument is --out, which specifies the directory in which the data will be dumped. Let’s take an example: backing up the newdb database and storing it in the /var/backups/mongobackups directory. Ideally, we’ll have each of our backups in a directory with the current date, like /var/backups/mongobackups/01-20-16 (20th January 2016). First, let’s create the directory /var/backups/mongobackups with the command:

  • sudo mkdir /var/backups/mongobackups

Then our backup command should look like this:

  • sudo mongodump --db newdb --out /var/backups/mongobackups/`date +"%m-%d-%y"`

A successfully executed backup will have an output such as:

Output of mongodump
2016-01-20T10:11:57.685-0500    writing newdb.restaurants to /var/backups/mongobackups/01-20-16/newdb/restaurants.bson
2016-01-20T10:11:57.907-0500    writing newdb.restaurants metadata to /var/backups/mongobackups/01-20-16/newdb/restaurants.metadata.json
2016-01-20T10:11:57.911-0500    done dumping newdb.restaurants (25359 documents)
2016-01-20T10:11:57.911-0500    writing newdb.system.indexes to /var/backups/mongobackups/01-20-16/newdb/system.indexes.bson

Note that in the above directory path we have used date +"%m-%d-%y", which gets the current date automatically. This will allow us to keep the backups inside the directory /var/backups/mongobackups/01-20-16/. This is especially convenient when we automate the backups.

At this point you have a complete backup of the newdb database in the directory /var/backups/mongobackups/01-20-16/newdb/. This backup has everything needed to restore newdb properly and preserve its so-called “fidelity”.

As a general rule, you should make regular backups, such as on a daily basis, preferably during a time when the server is least loaded. Thus, you can set the mongodump command as a cron job so that it runs regularly, e.g. every day at 03:03 AM. To accomplish this, open crontab, cron’s editor:

  • sudo crontab -e

Note that when you run sudo crontab you will be editing the cron jobs for the root user. This is recommended because if you set the crons for your user, they might not be executed properly, especially if your sudo profile requires password verification.

Inside the crontab prompt insert the following mongodump command:

Crontab window
3 3 * * * mongodump --out /var/backups/mongobackups/`date +"%m-%d-%y"`

In the above command we are omitting the --db argument on purpose because typically you will want to have all of your databases backed up.

Depending on your MongoDB database sizes you may soon run out of disk space with too many backups. That’s why it’s also recommended to clean the old backups regularly or to compress them. For example, to delete all the backups older than 7 days you can use the following bash command:

  • find /var/backups/mongobackups/ -mtime +7 -exec rm -rf {} \;

Similar to the previous mongodump command, this one can also be added as a cron job. It should run just before you start the next backup, e.g. at 03:01 AM. For this purpose, open crontab again:

  • sudo crontab -e

After that insert the following line:

Crontab window
1 3 * * * find /var/backups/mongobackups/ -mtime +7 -exec rm -rf {} \;

Completing all the tasks in this step will ensure a good backup solution for your MongoDB databases.
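The two cron entries above can also be combined into a single script, which makes the schedule easier to reason about. The sketch below wraps the dump and the cleanup in a function so it can be dry-run against a scratch directory first; the paths and the 7-day retention window are assumptions you should adjust:

```shell
# Nightly backup plus pruning in one place (a sketch, not official tooling).
run_backup() {
    root="$1"
    dir="$root/$(date +"%m-%d-%y")"   # dated directory, as in the guide
    mkdir -p "$dir"
    # Dump all databases when mongodump is available on this machine.
    if command -v mongodump >/dev/null 2>&1; then
        mongodump --out "$dir" || true
    fi
    # Prune backup directories older than 7 days.
    find "$root" -mindepth 1 -maxdepth 1 -mtime +7 -exec rm -rf {} \;
    echo "$dir"
}

# Dry run against a scratch directory instead of /var/backups/mongobackups:
latest=$(run_backup "$(mktemp -d)")
echo "latest backup directory: $latest"
```

Scheduled from root’s crontab, a single line such as 3 3 * * * /usr/local/bin/mongo-backup.sh (a hypothetical path) then replaces the two separate entries.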

Restoring and Migrating a MongoDB Database

By restoring your MongoDB database from a previous backup (such as one from the previous step) you will be able to have the exact copy of your MongoDB information taken at a certain time, including all the indexes and data types. This is especially useful when you want to migrate your MongoDB databases. For restoring MongoDB we’ll be using the command mongorestore which works with the binary backup produced by mongodump.

Let’s continue our examples with the newdb database and see how we can restore it from the previously taken backup. As arguments we’ll specify first the name of the database with the --db argument. Then with --drop we’ll make sure that the target database is first dropped so that the backup is restored in a clean database. As a final argument we’ll specify the directory of the last backup /var/backups/mongobackups/01-20-16/newdb/. So the whole command will look like this (replace with the date of the backup you wish to restore):

  • sudo mongorestore --db newdb --drop /var/backups/mongobackups/01-20-16/newdb/

A successful execution will show the following output:

Output of mongorestore
2016-01-20T10:44:47.876-0500    building a list of collections to restore from /var/backups/mongobackups/01-20-16/newdb/ dir
2016-01-20T10:44:47.908-0500    reading metadata file from /var/backups/mongobackups/01-20-16/newdb/restaurants.metadata.json
2016-01-20T10:44:47.909-0500    restoring newdb.restaurants from file /var/backups/mongobackups/01-20-16/newdb/restaurants.bson
2016-01-20T10:44:48.591-0500    restoring indexes for collection newdb.restaurants from metadata
2016-01-20T10:44:48.592-0500    finished restoring newdb.restaurants (25359 documents)
2016-01-20T10:44:48.592-0500    done

In the above case we are restoring the data on the same server where the backup has been created. If you wish to migrate the data to another server and use the same technique, you should just copy the backup directory, which is /var/backups/mongobackups/01-20-16/newdb/ in our case, to the other server.

Conclusion

This article has introduced you to the essentials of managing your MongoDB data in terms of backing up, restoring, and migrating databases. You can continue further reading on How To Set Up a Scalable MongoDB Database in which MongoDB replication is explained.

Replication is not only useful for scalability, but it’s also important for the current topics. Replication allows you to continue running your MongoDB service uninterrupted from a slave MongoDB server while you are restoring the master one from a failure. Part of the replication is also the operations log (oplog), which records all the operations that modify your data. You can use this log, just as you would use the binary log in MySQL, to restore your data after the last backup has taken place. Recall that backups usually take place during the night, and if you decide to restore a backup in the evening you will be missing all the updates since the last backup.

from DigitalOcean