Guide: Self-host Bitwarden on Google Cloud for Free*

This write-up is a product of my quest to self-host Bitwarden on a free-tier cloud product. Following these instructions, you will end up with a self-hosted Bitwarden password manager with all the benefits of running it in the cloud. *The only way this might not be free is if you exceed the 1 GB of egress or have any egress to China or Australia; in this guide I cover best practices for avoiding that kind of traffic to keep this as free as possible. The end product is a Github repo (link below). The documentation in that repo should be enough to get going if you’re not new to projects like this, whereas the text below has a bit more detail if you need it.

Github Repository for bitwarden_gcloud

Update July 2020: I added a new script and section for rebooting the host VM when updates have been made to the OS, ensuring that the host system stays patched and secure against n-day vulnerabilities.

Update June 2020: I added fail2ban to block brute-force attacks on the webvault, a countryblock container for country-wide blocking to avoid egress charges to China and Australia (or to block any other countries you choose), and a watchtower container as suggested by this comment, thanks ldw.

Update May 2020: Originally I used a home-grown dynamic DNS script for Cloudflare, but it won’t be as well supported as ddclient, so I swapped out my script for linuxserver’s ddclient container. My original Cloudflare DDNS script can be found here.

I’ve been meaning to self-host Bitwarden for some time. If you don’t know about Bitwarden, it’s an open-source password manager that you can host yourself, which puts my encrypted password data even more firmly in my control. While I have a home server, I want to limit its exposure to the public Internet - any service you expose to the Internet can become a pivot point into the rest of your internal network.

I saw that Google Cloud offers an ‘always free’ tier of their Compute Engine. Will one shared core and 614 MB of memory be enough for Bitwarden? According to the system requirements, Bitwarden requires 2 GB of RAM, and reports in its Github issue tracker say that even that is not enough. I went through the trouble of trying it out anyway and it failed spectacularly: the install script couldn’t even finish. There is, however, a lightweight alternative: Bitwarden RS. It’s written in Rust and is an ideal candidate for a micro instance.


  • Bitwarden self-hosted
  • Automatic https certificate management through Caddy 2 proxy
  • Dynamic DNS updates through ddclient
  • Blocking brute-force attempts with fail2ban
  • Country-wide blocking through iptables and ipset
  • Container images kept up-to-date with watchtower


Before you start, ensure you have the following:

  1. A Google Cloud account with billing set up (so they can bill you if you use their non-free services)
  2. A DNS provider that is supported by ddclient for dynamic DNS support; a list of supported DNS services can be seen here. Note: not all DDNS providers are supported by LetsEncrypt, YMMV

Step 1: Set up a new VM

At the time of writing, Google offers one free Google Compute Engine f1-micro instance with the following specifications:

* Region (one of):
 * Oregon: us-west1
 * Iowa: us-central1
 * South Carolina: us-east1
* 30 GB-months of standard persistent disk (HDD)
* 5 GB-months of snapshot storage in the following regions:
 * Oregon: us-west1
 * Iowa: us-central1
 * South Carolina: us-east1
 * Taiwan: asia-east1
 * Belgium: europe-west1
* 1 GB of network egress per month from North America to all region destinations (excluding China and Australia)
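To put that 1 GB egress cap in perspective: Bitwarden clients sync small encrypted payloads, so ordinary use should land far below it. A rough back-of-envelope check (the sync count and per-sync size below are my assumptions, not measurements):

```shell
# Rough estimate of monthly sync egress (assumed numbers, not measurements)
SYNCS_PER_DAY=50        # generous guess across all your devices
KB_PER_SYNC=50          # a vault sync payload is on the order of tens of KB
echo "$(( SYNCS_PER_DAY * KB_PER_SYNC * 30 / 1024 )) MB"    # well under the 1 GB cap
```

Large attachment downloads or frequent full vault exports change the math, but routine syncing is nowhere near the cap.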

To get started, go to Google Compute Engine (after doing the necessary setup of creating a project and providing billing info if necessary - don’t worry, this will cost exactly $0.00 each month if done correctly) and open a Cloud Shell. You can create the instance manually, but the Cloud Shell makes everything easier. In the Cloud Shell (a small icon in the upper-right corner of your Google Cloud console), the following command will build the properly spec’d machine:

$ gcloud compute instances create bitwarden \
    --machine-type f1-micro \
    --zone us-central1-a \
    --image-project cos-cloud \
    --image-family cos-stable \
    --boot-disk-size=30GB \
    --tags http-server,https-server \
    --scopes compute-rw

You can change the zone if you’d like, however only some zones offer the f1-micro machine type. The tags open the firewall for HTTP and HTTPS (HTTP is required later). I’m using the maximum free HDD size because it apparently gives higher IOPS, and it maximizes the amount of encrypted attachments I can store.

I am using the stable Container Optimized OS (COS) for several reasons, primarily:

  1. It’s optimized for Docker containers, with less overhead consuming RAM
  2. It’s secure by default - security updates are automatically installed and security is locked down by default

CoreOS was also a contender, but it used more memory at idle in my limited testing.

Important: Close the Cloud Shell and continue in the VM instance’s SSH shell by selecting the instance in the Google Cloud Console and clicking the SSH button.

Step 2: Pull and Configure Project

Enter an SSH shell on the new VM instance by clicking the instance’s SSH button. Once you’re in the new shell, clone this repo in your home directory:

$ cd
$ git clone
$ cd bitwarden_gcloud

Before you can start everything up, you need to set up the docker-compose alias by running the utilities/ script (you can read more about why this is necessary here). The script just writes the alias to ~/.bash_alias and includes it in ~/.bashrc:
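In essence, the script installs an alias that runs docker-compose itself as a container, since Container-Optimized OS ships no package manager. A sketch of what ends up in ~/.bash_alias:

```shell
# Run docker-compose out of the docker/compose image; the Docker socket and the
# current directory are mounted so compose can manage containers and read files here
alias docker-compose='docker run --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$PWD:$PWD" \
    -w="$PWD" \
    docker/compose'
```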

$ sh utilities/
$ source ~/.bashrc
$ docker-compose --version
docker-compose version 1.25.5, build 8a1c60f

.env file

I provide .env.template, which should be copied to .env and filled out. Most of your configuration is done in .env, and the file is self-documented. It is a collection of environment variables that are read by docker-compose and passed into their respective containers.
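For example, assuming a variable named DOMAIN exists in the template (the variable name here is illustrative - use whatever .env.template actually defines):

```shell
# Stand-in for the real template so this snippet is self-contained;
# in the repo you would simply run: cp .env.template .env
printf 'DOMAIN=\nSIGNUPS_ALLOWED=true\n' > .env.template
cp .env.template .env
# Set your own domain, then confirm the value docker-compose will read
sed -i 's|^DOMAIN=.*|' .env
grep '^DOMAIN=' .env
```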

Configure fail2ban (optional)

fail2ban stops brute-force attempts at your vault. It will ban an IP address for a length of time (6 hours by default in this configuration) after a number of failed attempts (5 by default). You may change these options in the file fail2ban/jail.d/jail.local:

bantime = 6h <- how long to enforce the ip ban
maxretry = 5  <- number of times to retry until a ban occurs

This will work out of the box - no fail2ban configuration is needed unless you want e-mail alerts of bans. To enable those, enter the SMTP settings in .env, then follow the instructions in fail2ban/jail.d/jail.local: uncomment and fill in destemail and sender, and uncomment the action_mwl action in the bitwarden and bitwarden-admin jails in the same file.
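With alerts enabled, the relevant pieces of fail2ban/jail.d/jail.local end up looking something like this (these are standard fail2ban options; the addresses are placeholders):

```ini
[DEFAULT]
bantime  = 6h
maxretry = 5
destemail = you@example.com
sender = fail2ban@example.com

[bitwarden]
# action_mwl bans and also mails the whois report and matching log lines
action = %(action_mwl)s
```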

Configure Country-wide Blocking (optional)

The countryblock container will block IP addresses from countries specified in .env under COUNTRIES. China, Hong Kong, and Australia (CN, HK, AU) are blocked by default because Google Cloud charges for egress to those countries even under the free tier. You may add any country you like to that list, or clear it out entirely if you don’t want to block those countries. Be aware, however, that if you do, you’ll probably be charged for any traffic to those countries, even from bots or crawlers.

This country-wide blocklist will be updated daily at midnight, but you can change the COUNTRYBLOCK_SCHEDULE variable in .env to suit your needs.
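For example, in .env (COUNTRIES takes two-letter ISO country codes and COUNTRYBLOCK_SCHEDULE is standard cron syntax; the extra country below is just an illustration):

```
# Block the defaults plus, e.g., Russia
COUNTRIES=CN HK AU RU
# Refresh the blocklist weekly on Sundays at 02:00 instead of daily at midnight
COUNTRYBLOCK_SCHEDULE=0 2 * * 0
```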

These block-lists are pulled from on each update.

Configure Automatic Rebooting After Updates (optional)

Container-Optimized OS will automatically update itself, but the update is only applied after a reboot. To ensure that you are running the most current operating system software, you can set a startup script that waits until an update has been applied and then schedules a reboot.

Before you start, ensure you have compute-rw scope for your bitwarden compute vm. If you used the gcloud command above, it includes that scope. If not, go to your Google Cloud console and edit the “Cloud API access scopes” to have “Compute Engine” show “Read Write”. You need to shut down your compute vm in order to change this.

Modify Reboot Script

Before adding the startup script to Google metadata, modify the script to set your local timezone and the time to schedule reboots: set the TZ= and TIME= variables in utilities/ By default the script will schedule reboots for 06:00 UTC.
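For example, near the top of the startup script in utilities/ (TZ takes a tz database name; these values are illustrative):

```shell
# Schedule pending-update reboots for 03:00 US Central time rather than 06:00 UTC
TZ='America/Chicago'
TIME='03:00'
```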

Add Startup Script to Metadata

From within your compute vm console, type the command toolbox. This command will download the latest toolbox container if necessary and then drop you into a shell that has the gcloud tool you need to use. Whenever you’re in toolbox, typing the exit command will return you to your compute vm.

From within toolbox, find the utilities folder within bitwarden_gcloud. toolbox mounts the host filesystem under /media/root, so go there to find the folder. It will likely be in /media/root/home/<google account name>/bitwarden_gcloud/utilities - cd to that folder.

Next, use gcloud to add the script to your vm’s boot script metadata with the add-metadata command:

gcloud compute instances add-metadata <instance> --metadata-from-file 

If you have forgotten your instance name, look at the Google Cloud Compute console, or find it from within toolbox with the command gcloud compute instances list.

Confirm Startup Script

You can confirm that your startup script has been added in your instance details under “Custom metadata” on the Compute Engine Console.

Next, restart your vm with the command $ sudo reboot. Once your vm has rebooted, you can confirm that the startup script was run with the command:

$ sudo journalctl -u google-startup-scripts.service

You should see something like these lines in the log:

-- Reboot --
Jul 16 18:44:10 bitwarden systemd[1]: Starting Google Compute Engine Startup Scripts...
Jul 16 18:44:10 bitwarden startup-script[388]: INFO Starting startup scripts.
Jul 16 18:44:10 bitwarden startup-script[388]: INFO Found startup-script in metadata.

Now the script will wait until a reboot is pending and then schedule a reboot for the time configured in the script.

If necessary you can run the startup script manually with the command $ sudo google_metadata_script_runner --script-type startup --debug, and get the status of automatic updates with the command $ sudo update_engine_client --status.

Step 3: Start Services with docker-compose

Use docker-compose to get the containers started:

$ docker-compose up

Normally you’d include a -d, as in $ docker-compose up -d, however the first time it’s nice to watch the initial startup. You should see the caddy service attempt to use ACME to auto-negotiate a Let’s Encrypt SSL cert, for example. It will fail because you don’t have DNS properly set up yet, which is fine - it will keep trying.

If you need to open another SSH session to continue, do that from the Google Cloud Console.

Step 4: Configure Dynamic DNS

DDNS is optional in the sense that you can manually set your DNS record to your ephemeral address, but I don’t know how often Google assigns you a new one. Furthermore, LetsEncrypt has problems with some DDNS providers, so having a full DNS provider like Cloudflare may be necessary.

Google charges for static IPs, but their ephemeral IPs are free.

Before you can get an SSL cert issued by Caddy/LetsEncrypt, you need a DNS record that points to your Google Cloud vm. You’ll notice in your logs that Caddy/LetsEncrypt will keep trying with the ACME protocol.

Dynamic DNS is supported using ddclient through the ddclient docker container. The ddclient container provides a configuration file at ddns/ddclient.conf that you must edit to work with your particular DNS provider. Their GitHub repo here contains documentation on configuring ddclient and the ddclient.conf file.

Note: ddclient.conf is placed in the ddns/ directory by the ddns container the first time it runs, and any changes made to this configuration file are automatically read in by the ddns container - no need to stop and start it; you will see this in the logs.

Cloudflare Instructions

Since I use Cloudflare, I can provide more detail about this step. For other DNS providers, you’re on your own but the documentation for ddclient is pretty helpful.

Edit ddns/ddclient.conf and add the following lines:

use=web,, web-skip='IP Address' # found after IP Address
zone=<your cloudflare site / base URL / e.g.>
login=<your e-mail>
<your bitwarden site subdomain / e.g.>

Newer commits to ddclient support API tokens, which are a better choice than a global key, but those commits haven’t made their way into a full ddclient release, so they haven’t been pulled into the container.
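Putting the pieces together, a complete ddns/ddclient.conf for Cloudflare looks roughly like this (protocol, zone, login, and password are standard ddclient directives; the values are placeholders):

```
use=web
protocol=cloudflare
zone=example.com
login=you@example.com
password=<your Cloudflare Global API Key>
bitwarden.example.com
```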

Step 5: Start Using Bitwarden

If everything is running properly (the logs will tell you if it isn’t), you can point your browser at the address of your new Google Cloud Bitwarden vm and start using Bitwarden! Depending on which bootstrapping method you chose in .env (the /admin page or open sign-up), you can create your new account and get going!


You should now have a free self-hosted instance of Bitwarden that survives server reboots with an OS that gets timely security patches automatically.

There’s plenty of tweaking and optimization possible - feel free to make this yours. Many resources went into building this guide, and several are listed below. Feel free to comment with any optimizations or issues that you run across.




Thanks for the guide, I followed your project from the beginning, even before you introduced ddclient, and it works perfectly for me. I have two pieces of feedback: first, I had to make some changes to the script for it to work for me; second, I added a new container to the project so that the images can be updated automatically, though this is really up to personal preference - some people may not like it that way.

Changes I have made are pasted below:

[email protected]:~/bitwarden_gcloud$ git diff utilities/ 
diff --git a/utilities/ b/utilities/
index 13afdf6..3f71e75 100644
--- a/utilities/
+++ b/utilities/
@@ -1,15 +1,15 @@
 #!/usr/bin/env sh
 # Write the docker-compose alias to ~/.bash_alias
-ALIAS=$'alias docker-compose=\'docker run --rm \
+ALIAS="alias docker-compose='docker run --rm \
     -v /var/run/docker.sock:/var/run/docker.sock \
     -v "$PWD:$PWD" \
     -w="$PWD" \
-    docker/compose\''
-echo -e "$ALIAS" >> ~/.bash_alias
+    docker/compose'"
+echo "$ALIAS" >> ~/.bash_alias
 # Include ~/.bash_alias in ~/.bashrc
 ALIAS_INCLUDE='if [[ -f ~/.bash_alias ]] ; then \n    . ~/.bash_alias \nfi'
-echo -e "$ALIAS_INCLUDE" >> ~/.bashrc
+echo "$ALIAS_INCLUDE" >> ~/.bashrc
 . ~/.bashrc
And the watchtower service I added to docker-compose.yml:

  watchtower:
    # Watchtower will pull down your new image, gracefully shut down your existing container
    # and restart it with the same options that were used when it was deployed initially
    image: containrrr/watchtower
    restart: always
    container_name: watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - TZ
      - WATCHTOWER_SCHEDULE=0 0 3 ? * 1
    depends_on:
      - fail2ban

Thanks for the feedback. I rolled your watchtower suggestion into my latest revision - I think it’s a great addition considering how long these services will be running; regular updates will help keep them secure without any intervention.


Is ddclient automatically installed when I clone the repo? It doesn’t seem to be working at the moment. When I try to use any ddclient commands, e.g. sudo ddclient -daemon=0 -debug -verbose -noquiet, it returns command not found.


ddclient is in the container. Start it up, then edit ddns/ddclient.conf and the changes will be applied automatically. $ docker logs ddns will show you the logs if you’re having trouble.


I’m stuck on the .env file step. I’m not sure what you mean by copying to .env, or what to do with the .env file once it’s edited. It also appears not to have cloned over - perhaps this is by design.

I really appreciate your work on these steps, but it looks to be a bit ahead of my current skillset. Perhaps I’ll come back when my skills have improved.


The repository provides a .env.template file that you are supposed to copy to .env and then fill out as it directs you. Files whose names are prefixed with a . are hidden, so use the command ls -a to see them.
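In shell terms (the touch is a stand-in so the snippet runs anywhere; the repo already ships .env.template):

```shell
touch .env.template          # stand-in; the repo provides this file
cp .env.template .env        # copy to the name docker-compose actually reads
ls -a | grep '^\.env'        # -a reveals dotfiles that plain ls hides
```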
