Cloud Compute does not have a predefined image for NixOS, which is the distribution we’re going to install on the instance.
I’ve created a simple NixOS system configuration and terraform script (see the code on GitHub here) that builds that system configuration, uploads it to Cloud Compute, and sets up an f1-micro instance using the image. f1-micro is free for 1 instance per month on the Google Cloud free tier.
The config for the system itself is quite small:
{
  imports = [ <nixpkgs/nixos/modules/virtualisation/google-compute-image.nix> ];
  users.users.root.openssh.authorizedKeys.keyFiles = [ ~/.ssh/id_rsa.pub ];
}
This config imports the base Cloud Compute image from nixpkgs, which sets up some sane defaults. It then makes sure we are able to log in as root after the instance is created, by copying over the current user’s public key and adding it to root’s list of authorized SSH keys.
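The terraform script in the repo takes care of building and uploading this image, but if you want to build it by hand first (for example, to check that the config evaluates), something like the following sketch should work; it assumes the google-compute-image module exposes a system.build.googleComputeImage output, and that the config above is saved as configuration.nix:

let
  # hypothetical wrapper expression (e.g. image.nix); evaluates the config above
  nixos = import <nixpkgs/nixos> { configuration = ./configuration.nix; };
in
  # the GCE disk image derivation defined by the imported module (assumption)
  nixos.config.system.build.googleComputeImage

Running nix-build on that file should leave a result symlink containing the image that the terraform script uploads.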
So, to generate the image and provision a Compute instance, clone the repo and run terraform apply. The terraform script is configured to take inputs for various things that may change from installation to installation, like the user name and the location of the credentials file (learn how to generate the credentials file in the terraform docs).
Terraform is smart: when you run terraform apply, it checks the latest state of all of the resources you have defined and gets them from their current state to the desired state. You can also use terraform plan to see what changes terraform will make before it makes them.
Now that NixOS is successfully running on a Cloud Compute instance, we can get started provisioning it with NixOps. NixOps lets you use the same ideas and configurations you use for configuring a NixOS system to deploy applications and services to remote systems, virtual machines, and cloud providers.
The NixOps intro is worth reading if you haven’t read it already and plan to use NixOps to deploy your applications. It goes over the benefits and features of NixOps, and gives a high-level overview of why you might want to use it.
From the project you wish to deploy (I have a sample project for hosting containers), define a NixOS system config that runs your application. I recommend putting this config in one file (say system.nix; in my sample project it’s called server.nix), and making another file called ops.nix that will be loaded into NixOps. The structure of the ops.nix file is an attribute set where the keys are hostnames/IPs and the values are NixOS system configurations.
Take the ops.nix from my container-host project, for example:
{
  "nix.kylesferrazza.com" = import ./server.nix;
}
This file just says to NixOps: “build the system configuration in server.nix and deploy it to the machine at nix.kylesferrazza.com”.
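By default, the attribute name doubles as the address NixOps connects to. If you’d rather give the machine a short name, the value can also be an inline NixOS module that sets NixOps’ deployment.targetHost option explicitly; here’s a sketch with a hypothetical machine name:

{
  # "webserver" is just a label for the machine inside NixOps;
  # deployment.targetHost tells NixOps where to actually connect
  webserver = { config, pkgs, ... }: {
    imports = [ ./server.nix ];
    deployment.targetHost = "nix.kylesferrazza.com";
  };
}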
Now, go ahead and create the NixOps deployment by running nixops create -d my-deployment-name ops.nix. Once the deployment is created, deploy (or redeploy) with nixops deploy -d my-deployment-name. NixOps will replace root’s SSH keys by default, which is usually okay since you can still use nixops ssh -d my-deployment-name hostname to SSH into the machine, but I make sure to keep the trick from above in server.nix so I can still use SSH as usual (or set up multiple NixOps deployments to the same machine):
users.users.root.openssh.authorizedKeys.keyFiles = [ ~/.ssh/id_rsa.pub ];
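For reference, here’s a minimal sketch of what a server.nix along these lines could look like; the nginx service is just a hypothetical stand-in for whatever application you’re actually deploying:

{ config, pkgs, ... }:
{
  imports = [ <nixpkgs/nixos/modules/virtualisation/google-compute-image.nix> ];

  # keep plain `ssh root@...` working alongside `nixops ssh`
  users.users.root.openssh.authorizedKeys.keyFiles = [ ~/.ssh/id_rsa.pub ];

  # hypothetical placeholder for the application or containers you host
  services.nginx.enable = true;
  networking.firewall.allowedTCPPorts = [ 80 ];
}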
If all went well, you should hopefully be all set up using NixOS and NixOps on the free tier of Google Cloud Compute!