Setting up a game server using Nakama on Google Compute Engine, part 2

· by Steve · Read in about 12 min · (2413 Words)

Part 1 of this blog series talked about what Nakama is, and why I chose it over other options for running leaderboards in our very first game, Washed Up!. Now, let’s get down to the nitty gritty of actually setting it up.

A caveat

The service you’ll have at the end of following this post is only really suitable for testing. There are additional steps required to make the service more secure and resilient, which you’ll absolutely want to complete before going to production.

Because these are long posts, I’m going to cover securing and backing up your server in part 3 of this series, and concentrate on just getting you running in this post. Don’t take this post as a recommended final configuration!

Creating the project and VM

The first step is to set up an account on Google Cloud Platform. If you already have an account, great - but if not, the good news is that as a new customer you’ll get $300 of credit to play with for 12 months. That’s likely to mean your first year is free with the services we’re going to use.

After signing up, you’ll want to create a project for this service. If you’ve just signed up you’ll be in “My First Project”, so:

  1. Click on the project drop-down in the header
  2. In the dialog which pops up, click “+”
  3. Follow the new project wizard through to the end

Next, you want to create a Compute Engine VM instance.

  1. Make sure your new project is selected in the top bar; you may have to wait a minute for it to be selectable
  2. Click on the hamburger menu in the top left
  3. Select Compute Engine, this will open the VM instances list (empty)
  4. Click the Create button in the helper box
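If you prefer the command line, the same VM can be created with the Cloud SDK’s gcloud tool. This is just a sketch — the instance name, zone and machine type below are placeholders to adjust to taste:

```shell
# Hypothetical instance name and zone — substitute your own.
# Creates an f1-micro Debian 9 instance with a 10GB boot disk.
gcloud compute instances create nakama-server \
    --zone=us-central1-a \
    --machine-type=f1-micro \
    --image-family=debian-9 \
    --image-project=debian-cloud \
    --boot-disk-size=10GB
```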

The settings for this VM are ultimately up to you, but if you’re just experimenting, select “micro” in the Machine Type section. You can actually get one micro instance for free, which is nice, subject to some limitations:

  • The VM must be in us-east1, us-west1, or us-central1
  • You must have registered a billing account with an attached credit card

That’s not a bad deal. In my experience the “micro” instance will run Nakama just fine for testing, although after a while GCE may start warning you that your instance is overloaded. That’s mostly down to memory pressure; it’ll still work fine for evaluation. I’ve upgraded mine to a “small” instance, which takes the pressure off but obviously starts eating into your $300 credit. I’ll let you decide; I haven’t stress tested Nakama heavily yet. Upgrading later is fairly easy (I started on micro), so there’s no reason not to start modestly.
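For reference, that later upgrade can also be done from the command line (the instance needs to be stopped first). Instance name and zone here are placeholders:

```shell
# Stop the instance, change its machine type, then start it again
gcloud compute instances stop nakama-server --zone=us-central1-a
gcloud compute instances set-machine-type nakama-server \
    --zone=us-central1-a --machine-type=g1-small
gcloud compute instances start nakama-server --zone=us-central1-a
```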

Other VM settings

The other VM settings are fine as default, which at the time of writing is:

  • 10GB Persistent Disk with Debian 9 (Stretch)
  • Default access scopes (we’ll tweak that in a later post when adding backups)
  • No access for HTTP/HTTPS (we won’t be using ports 80/443 & will configure firewall separately)

So just name your instance and pick a region and you’re golden.

Getting a fixed external IP

By default your VM will be assigned an “ephemeral” external IP address. This means it can change between reconfigurations of your VM - it usually won’t, but there’s no guarantee. For our purposes, we want a predictable IP, so:

  • Click on your VM, then “Edit” at the top of the page
  • Under Network Interfaces, click the edit button
  • In the rollout, under “External IP”, click the drop down then “Create IP address”
  • Name it something like “nakama-external-ip”, then confirm

Make a note of the new IP you’re assigned; you’ll need it later.
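The equivalent with gcloud, if you prefer — the region name here is an assumption:

```shell
# Reserve a static regional address, then print the IP it was given
gcloud compute addresses create nakama-external-ip --region=us-central1
gcloud compute addresses describe nakama-external-ip \
    --region=us-central1 --format='get(address)'
```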

Configuring SSH access

For almost everything we do from here, we need SSH access. If you’re not familiar with using SSH to log into servers with keys yet, I suggest taking a diversion into this tutorial. SSH access to a GCE VM works the same way as any other server.

You’re supposed to be able to attach SSH keys to your Google account using the SDK tools and get automatic access to your VMs via 3rd party tools that way, but I couldn’t get it to work. You can also use the Google SDK via gcloud compute ssh, but I like my own tools. So we’re going to do it the semi-old-fashioned way and just configure SSH ourselves. I kind of prefer it that way to “magic” anyway, although I’m sure it gets tedious if you have to do it a lot.

Edited 23rd March 2018 I originally advised just editing ~/.ssh/authorized_keys yourself, but discovered later that if you do this, Google can sometimes stomp over it if you use some of their other tools. So I’ve edited this so it plays nicely with their way instead.

  • Firstly, you need to download the Google Cloud SDK
  • The installer will probably run gcloud init, but if not, open a console and do that
  • Log in to your Google account to set everything up and pick your project
  • In an editor of your choice, create a file with a single line: [USERNAME]:[SSHKEY] [USERNAME]
  • Your [USERNAME] is most likely the first part of the email address you sign in with
  • The [SSHKEY] part is your public key, it will look like ssh-rsa XXXX.... Include all of it, but skip any comment you’ve added to the end, since Google wants [USERNAME] at the end there
  • Save the file, then run gcloud compute project-info add-metadata --metadata-from-file ssh-keys=[FILENAME]
  • That will add your SSH key to both the project metadata and the ~/.ssh/authorized_keys file on your VM.

You can now use the SSH client of your choice to connect using your usual SSH key. The advantage of doing it this way vs editing ~/.ssh/authorized_keys manually is that if any operation causes Google to regenerate keys (e.g. using their own web SSH tools), your changes are not wiped out by accident.
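Put together, the whole sequence looks something like this — the username and (truncated) key below are made up purely for illustration:

```shell
# Hypothetical user "steve" with a truncated public key, for illustration only
echo 'steve:ssh-rsa AAAAB3NzaC1yc2E... steve' > ssh-keys.txt
gcloud compute project-info add-metadata \
    --metadata-from-file ssh-keys=ssh-keys.txt
```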

Installing Cockroach DB

Nakama uses CockroachDB as its default database: a multi-node SQL database written in Go. We’re just going to set up a single node for simplicity.

  • SSH in to your instance
  • Grab the latest Cockroach Linux x64 build, at the time of writing:
    • wget -qO- https://binaries.cockroachdb.com/cockroach-v1.1.6.linux-amd64.tgz | tar xvz
  • Install it using sudo; it’s Go so it’s just one binary!
    • sudo cp -i cockroach-v1.1.6.linux-amd64/cockroach /usr/local/bin
  • Quickly test its startup
    • sudo cockroach start --insecure
    • Ctrl+C to exit once you see it start successfully
    • sudo rm -rf cockroach-data - this is just temp data created by that test
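As a sanity check before going further, you can also confirm the binary is on your PATH and runs at all:

```shell
# Prints the CockroachDB build version details
cockroach version
```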

Configuring Cockroach DB

So the database runs, but we want it to start automatically on boot, store its data in a known location (we’ll choose /var/lib/cockroach-store), and restart if it crashes. First, let’s create a systemd service to start and monitor it.

  • sudo vim /etc/systemd/system/cockroach.service

    [Unit]
    Description=CockroachDB server
    ConditionPathExists=/usr/local/bin/cockroach
    Wants=network.target
    After=network.target
    
    [Service]
    ExecStart=/usr/local/bin/cockroach start --insecure --store=attrs=ssd,path=/var/lib/cockroach-store
    Restart=always
    RestartSec=3
    TimeoutSec=6
    LimitNOFILE=1048576:1048576
    LimitNPROC=1048576:1048576
    
    [Install]
    WantedBy=multi-user.target
    
  • sudo mkdir /etc/systemd/system/cockroach.service.d

  • sudo vim /etc/systemd/system/cockroach.service.d/30-envvars.conf

    [Service]
    # Needed to workaround issue - cockroachdb/cockroach#12675
    Environment="COCKROACH_METRICS_SAMPLE_INTERVAL=1000h"
    
  • Start service immediately

    • sudo systemctl start cockroach
  • Check started OK:

    • ps aux | grep cockroach
  • Enable service auto-startup on reboot

    • sudo systemctl enable cockroach
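One caveat: depending on your systemd version, freshly created unit files aren’t always picked up immediately. If systemctl start complains that the unit can’t be found, reload the daemon and retry:

```shell
# Make systemd re-scan its unit files, then try the start again
sudo systemctl daemon-reload
sudo systemctl start cockroach
```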

So now you have a running database which will start automatically, and will be restarted if it falls over for any reason other than being stopped gracefully using sudo systemctl stop.

Wait wait, it’s warning about “Insecure mode!”

Yes, for the moment we’re going to run the database in insecure mode for brevity. In part 3 I’ll explain how to secure Cockroach to get rid of this warning, but it’s a bunch more steps and this post is long enough as it is. For the moment you’re fairly safe, since the DB is not exposed on any port that is allowed through the firewall, but rest assured we will fix this later.

Installing Nakama

Installing Nakama is quite similar to the database; just find the latest Linux x64 build of Nakama then:

  • mkdir nakama-1.4.0; cd nakama-1.4.0
  • wget -qO- https://github.com/heroiclabs/nakama/releases/download/v1.4.0/nakama-1.4.0-linux-amd64.tar.gz | tar xvz
  • sudo cp nakama /usr/local/bin/

Once again, being Go, Nakama is just a single binary. Don’t you just love that? 🙂 Make sure the Cockroach database is currently running, then initialise the database by running:

  • sudo nakama migrate up

Configuring Nakama

As with Cockroach, we want to control where Nakama stores its data, how it starts up, and which options are enabled. Let’s do that now.

Firstly let’s set up a configuration file, so we don’t need to pass loads of options to Nakama as we expand our configuration:

  • sudo mkdir /etc/nakama; sudo vim /etc/nakama/nakama-config.yml

    name: nakama-gce-1
    data_dir: /var/lib/nakama-data
    
    socket:
        server_key: "MAKE UP OR GENERATE A SERVER KEY"
        port: 7350
    
    session:
        # 6h token expiry
        token_expiry_ms: 21600000
        encryption_key: "MAKE UP OR GENERATE AN ENCRYPTION KEY"
        udp_key: "ENTER OR GENERATE A KEY WHICH IS 32 CHARS"
    
    runtime:
        http_key: "MAKE UP OR GENERATE A SERVER KEY"
    

You can use a different name for the server if you want.

You’ll notice you need to make up or generate a few keys here. I recommend using a random password generator, and making them all different. You actually don’t need all of them for our configuration, but it stops the startup warning about missing keys. Apart from the UDP key, which must be 32 characters, they can be any length. The server_key will be used in your client code too, so make sure you make a note of that somewhere secure.
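One easy way to generate these on the server itself is openssl, which comes preinstalled on Debian. For instance:

```shell
# 16 random bytes as hex = exactly 32 characters, ideal for the UDP key
openssl rand -hex 16
# Any reasonable length works for the other keys, e.g. 24 bytes base64-encoded
openssl rand -base64 24
```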

Also notice the token_expiry_ms value. When you authenticate with Nakama you receive a session token for use with subsequent operations, and by default those expire in 60s, which is far too short for a real application. The Nakama developers tell me that’s the default to force people to make sure their clients cope with session expiry properly 😅 Nevertheless, it’s best to configure this now. Just make sure your client code does deal with session expiry, k?

You can tell from the above that Nakama is going to be available on port 7350 once we fire it up.

There will be other changes to make to this file in part 3 when we secure the service over SSL, and deal with the Cockroach DB running in secure mode, but for now this will do.

Nakama service configuration

Next, let’s make the Nakama service start automatically and be monitored like the database:

  • sudo vim /etc/systemd/system/nakama.service

    [Unit]
    Description=Nakama server
    ConditionPathExists=/usr/local/bin/nakama
    Requires=cockroach.service
    Wants=network.target
    After=network.target cockroach.service
    
    [Service]
    ExecStart=/usr/local/bin/nakama --config /etc/nakama/nakama-config.yml
    Restart=always
    RestartSec=3
    TimeoutSec=6
    LimitNOFILE=1048576:1048576
    LimitNPROC=1048576:1048576
    
    [Install]
    WantedBy=multi-user.target
    

This looks a lot like the Cockroach service file, except for the addition of the Requires=cockroach.service line, which means if you start Nakama it automatically starts Cockroach if it’s not running, and if you shut down Cockroach it will also gracefully stop Nakama first. Restarts will similarly order themselves nicely.

The only other thing of note is how we’ve pointed Nakama at the nakama-config.yml we created earlier.

Nakama modules

Next we’ll want to create a folder which we can place server-side scripts into. Nakama is quite nice in that even though the main server binary is fixed, you can use Lua to run code on the server side, for setup and for hooks / custom behaviours if you want.

As an example, let’s set things up so that when Nakama starts, it creates a few leaderboards ready to be populated:

  • sudo mkdir -p /var/lib/nakama-data/modules
  • sudo vim /var/lib/nakama-data/modules/leaderboard_setup.lua

    local nk = require("nakama")
    
    local function create_or_reuse_leaderboard(id, metadata)
        local reset = nil -- no automatic leaderboard reset schedule
        -- use pcall so creation doesn't raise an error if the leaderboard already exists
        local res, err = pcall(nk.leaderboard_create, id, "desc", reset, metadata, false)
        if (not res and not string.find(err, "in use")) then
            nk.logger_error(("Leaderboard create failed: %q"):format(err))
        end
    end
    
    local easy = {
        difficulty = "Easy"
    }
    local normal = {
        difficulty = "Normal"
    }
    local hard = {
        difficulty = "Hard"
    }
    
    -- these are just random GUIDs; the client just needs to use the same IDs
    create_or_reuse_leaderboard("f7649637-4ea8-4c0f-b5aa-30284398e4d3", easy)
    create_or_reuse_leaderboard("d6a24a07-4f22-4f28-970f-d3f5fb88a653", normal)
    create_or_reuse_leaderboard("1b2857e3-7cb9-4770-a617-cd5703a57925", hard)
    

There are loads of things you can do with Lua scripts in Nakama, but this is the main one I’ve needed so far.

Fire it up!

  • Start it right now:
    • sudo systemctl start nakama
  • Check it’s running OK:
    • tail /var/lib/nakama-data/log/nakama-gce-1.log
  • Turn on automatic start:
    • sudo systemctl enable nakama

You should now have a working Nakama server! Of course, you can’t currently get to it anywhere other than locally. So, let’s open a crack in the firewall so you can actually use it.

Open the Firewall

First we need to tag our VM so we can refer to it in firewall rules. So back in the Google Cloud Platform interface:

  • Go to your VM again via Menu > Compute Engine > Your VM
  • Click Edit at the top
  • In “Network tags”, add “nakama-server” then Save

Next, let’s set up that firewall rule:

  • Click on menu > VPC network > Firewall rules
  • Click “Create Firewall Rule” at the top
  • Call it “allow-nakama” or similar
  • Direction = Ingress, Action on Match = Allow
  • Targets = Specified target tags, and add “nakama-server” to the tag list
  • Source filter = IP Ranges, IP range = 0.0.0.0/0
  • Protocols and ports = Specified protocols and ports, add “tcp:7350” to port list
  • Save
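The same rule can be created from the command line; the rule name and tag here match the ones used above:

```shell
# Allow TCP 7350 from anywhere to instances tagged nakama-server
gcloud compute firewall-rules create allow-nakama \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:7350 \
    --target-tags=nakama-server \
    --source-ranges=0.0.0.0/0
```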

Your server should now be available! You can now connect any of the several client options to the external IP on port 7350. How to do that is a subject for the next post.
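A quick way to verify the port is reachable from your own machine is netcat — substitute the external IP you reserved earlier (203.0.113.10 below is just a placeholder):

```shell
# -z: just check the port is open, don't send data; -v: report the result
nc -zv 203.0.113.10 7350
```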

Taking a configuration snapshot

This is optional, but Compute Engine has a nice feature where you can take snapshots of the persistent disks to put a stake in the ground. You get 5GB of free snapshots and while the server is new it’s quite a good way to back up your configuration.

They’re available under menu > Compute Engine > Snapshots. I took 5 while I was configuring my instance, the first of which was 500MB, but the other 4 since then have been less than that put together, because snapshots are incremental. For ongoing data backups I prefer less of a sledgehammer approach but it’s nice to have a baseline.
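Snapshots can also be taken from the command line; the disk name, snapshot name and zone below are placeholders (the boot disk’s name usually matches the instance name):

```shell
# Snapshot the VM's boot disk as a configuration baseline
gcloud compute disks snapshot nakama-server \
    --snapshot-names=nakama-baseline \
    --zone=us-central1-a
```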

Next Steps

At this point you can start experimenting in a development environment with Nakama. In future parts I will cover:

  • Running Cockroach DB in secure mode
  • How to secure the Nakama service with SSL
  • Periodic database backups to Google Cloud Storage
  • Accessing the Nakama and Cockroach dashboards via SSH tunnelling

I hope this series is useful so far! Let me know any feedback via Twitter.