This write-up is a product of my quest to self-host Bitwarden on a free-tier cloud product. Following these instructions, you should end up with a self-hosted Bitwarden password manager with all the benefits of running it in the cloud. The only way this might not be free is if you exceed the 1 GB of monthly egress or have any egress to China or Australia. In this guide I talk about best practices for avoiding that kind of traffic to keep this as free as possible.
The end product is a GitHub repo (link below). The
readme.md found in that repo should be enough to get going if you’re not new to projects like this, whereas the text below has a bit more detail if you need it.
Update May 2021 I implemented a backup solution, check out Configure Bitwarden Backups for details.
Update July 2020 I added a new script and section for rebooting the host vm when updates have been made to the OS, ensuring that the host system stays patched and secure against n-day vulnerabilities.
Update June 2020 I added
fail2ban to block brute-force attacks on the webvault and country-wide blocking to avoid egress charges to China and Australia, or to block other countries that you might want. I added
countryblock to block entire countries, and I also added a
watchtower container as suggested by this comment, thanks ldw.
Update May 2020 Originally I used a home-grown dynamic DNS script for Cloudflare, but it won’t be as well supported as ddclient, so I swapped my script out for linuxserver’s ddclient container. My original Cloudflare DDNS script can be found here.
I’ve been meaning to self-host Bitwarden for some time. If you don’t know about Bitwarden, it’s a password manager that’s open source and allows you to host it yourself! Now my encrypted password data is even more in my control. While I have a home server, I want to limit its exposure to the public Internet. Any service you expose to the Internet can become a pivot point to the rest of your internal network.
I saw that Google Cloud offers an ‘always free’ tier of their Compute Engine. Will one shared core and 614 MB of memory be enough for Bitwarden? According to the system requirements, Bitwarden requires 2GB of RAM, but reports in its GitHub issue tracker say that even that is not enough. I went through the trouble of trying it out anyway and it failed spectacularly; the install script couldn’t even finish. There is, however, a lightweight alternative: Bitwarden RS. It’s written in Rust and is an ideal candidate for a micro instance.
- Bitwarden self-hosted
- Automatic https certificate management through Caddy 2 proxy
- Dynamic DNS updates through ddclient
- Blocking brute-force attempts with fail2ban
- Country-wide blocking through iptables and ipset
- Container images kept up-to-date with watchtower
Before you start, ensure you have the following:
- A Google Cloud account with billing set up (so they can bill you if you use their non-free services)
- A DNS provider that is supported by
ddclient for dynamic DNS support; a list of supported DNS services can be seen here. Note: not all DDNS providers are supported by Let’s Encrypt, YMMV
Step 1: Set up a new VM
At the time of writing, Google offers one free Google Compute Engine f1-micro instance with the following specifications:
- Region:
  - Oregon: us-west1
  - Iowa: us-central1
  - South Carolina: us-east1
- 30 GB-months HDD
- 5 GB-month snapshot storage in the following regions:
  - Oregon: us-west1
  - Iowa: us-central1
  - South Carolina: us-east1
  - Taiwan: asia-east1
  - Belgium: europe-west1
- 1 GB network egress from North America to all region destinations (excluding China and Australia) per month
To get started, go to Google Compute Engine (after doing all the necessary setup of creating a project, and providing billing info if necessary - don’t worry, this will cost exactly $0.00 each month if done correctly) and open a Cloud Shell. You can create the instance manually, but the Cloud Shell makes everything easier. In the Cloud Shell (a small icon in the upper right corner of your Google Cloud console), the following command will build the properly spec’d machine:
$ gcloud compute instances create bitwarden \
    --machine-type f1-micro \
    --zone us-central1-a \
    --image-project cos-cloud \
    --image-family cos-stable \
    --boot-disk-size=30GB \
    --tags http-server,https-server \
    --scopes compute-rw
You can change the zone if you’d like; however, only some zones have the f1-micro machine-type available. The tags open up the firewall for HTTP and HTTPS (HTTP is required later). I’m using the maximum free HDD because apparently I get higher IOPS, and it will maximize the number of encrypted attachments I can store.
I am using the stable Container Optimized OS (COS) for several reasons, primarily:
- It’s optimized for Docker containers, so there’s less overhead consuming RAM
- It’s secure by default - security updates are automatically installed and security is locked down by default
CoreOS was also a contender but it used more memory at idle in my limited testing.
Important: Close the Cloud Shell and continue in the vm instance SSH shell by selecting the instance in the Google Cloud Console and clicking the SSH button.
Step 2: Pull and Configure Project
Enter a SSH shell on the new vm instance by clicking the instance’s
SSH button. Once you’re in the new shell, clone this repo in your home directory:
$ cd
$ git clone https://github.com/dadatuputi/bitwarden_gcloud.git
$ cd bitwarden_gcloud
Before you can start everything up, you need to set up the docker-compose alias by running the
utilities/install-alias.sh script (you can read more about why this is necessary here). The script just writes the alias to
~/.bash_alias and includes it in ~/.bashrc:
$ sh utilities/install-alias.sh
$ source ~/.bashrc
$ docker-compose --version
docker-compose version 1.25.5, build 8a1c60f
Next, copy
.env.template to
.env and fill it out. Most of your configuration is done in
.env and is self-documented. This file is a collection of environment variables that are read by
docker-compose and passed into their respective containers.
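For orientation, a filled-out .env might look roughly like this. Everything here except COUNTRIES and COUNTRYBLOCK_SCHEDULE (both covered later in this guide) is an illustrative name or value, not necessarily what .env.template actually contains:

```shell
# Illustrative .env excerpt - variable names other than COUNTRIES and
# COUNTRYBLOCK_SCHEDULE are examples only; the self-documented
# .env.template in the repo is authoritative.
DOMAIN=bw.example.com            # hostname Caddy should obtain a cert for
SMTP_HOST=smtp.example.com       # mail settings, reused by backup/fail2ban alerts
SMTP_FROM=bitwarden@example.com
COUNTRIES="CN HK AU"             # countries for countryblock (see below)
COUNTRYBLOCK_SCHEDULE="0 0 * * *"
```

docker-compose reads a .env file in the project directory automatically, so no exporting is needed.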
Configure Bitwarden Backups (optional)
There is rudimentary support for backups provided by default and configured for the most part in the
.env file. Look for the
Bitwarden Backup Options section.
When enabled, backups will run on a regular interval (daily at midnight by default) and keep 30 days (default) of backups in the
bitwarden/backups directory. The script will back up the following resources (based on this documentation):
- db.sqlite3 - encrypted database
- bitwarden/attachments - attachments directory
- bitwarden/sends - sends directory
- config.json - file with configuration settings (if it exists)
- rsa_key* - keys for logged in users
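Conceptually, a local backup pass just archives those resources into a timestamped tarball and prunes archives past the retention window. A minimal sketch of the idea (illustrative only; the repo’s backup script is the real implementation, and the function name and paths here are assumptions):

```shell
#!/bin/sh
# Sketch of the local backup idea - not the repo's actual backup.sh.
# Archives the resources listed above into a timestamped tarball, then
# deletes archives older than the retention window.
backup_local() {
    backup_dir="bitwarden/backups"   # matches the directory mentioned above
    keep_days=30                     # default retention from the text

    mkdir -p "$backup_dir"
    stamp=$(date +%Y%m%d-%H%M%S)

    # Archive the database, attachments, and sends (assumes they exist).
    tar czf "$backup_dir/backup-$stamp.tar.gz" \
        bitwarden/db.sqlite3 bitwarden/attachments bitwarden/sends

    # Prune archives older than the retention window.
    find "$backup_dir" -name 'backup-*.tar.gz' -mtime +"$keep_days" -delete
}
```

Run something like this from the repository root so the relative bitwarden/ paths resolve.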
There are three backup methods:
- local - backup to the local directory only on the designated interval. You may want to use this if you have your own method in mind to synchronize the backups elsewhere
- email - send a compressed backup to your e-mail on the designated interval
- rclone - synchronize the entire backup directory to a cloud storage service
Bitwarden Local Backups
This is the simplest method and will just maintain a directory of backups and optionally email you when the job is complete.
Bitwarden Email Backups
This backup method uses the SMTP settings provided to Bitwarden, so ensure that those variables are populated with correct values. The email defaults provide a daily gzipped backup to your e-mail. This backs up the attachments and sends directories as well, so the archive could get quite large and may not be suitable for users who make heavy use of attachments and sends.
Bitwarden Rclone Backups
This method is more powerful and a better option for users with large backups. To configure rclone, either provide a working configuration file at
bitwarden/rclone.conf or create one using the following command from your gcloud shell while bitwarden is running:
sudo docker exec -it bitwarden ash -c 'rclone config --config $BACKUP_RCLONE_CONF'
Follow the instructions at Rclone Remote Setup; rclone will guide you through the configuration steps. You will likely need to download rclone on a host with a GUI; however, rclone does not require installation, so this step is easier than it sounds.
Your backup should run at the next scheduled interval; however, you may test it from the Google Cloud Shell with the following command, replacing
<local|email|rclone> with the backup method you would like to test:
sudo docker exec -it bitwarden ash /backup.sh <local|email|rclone>
Look at the log files if you run into issues, and ensure that the appropriate environment variables are set correctly.
Configure fail2ban (optional)
fail2ban stops brute-force attempts at your vault. It will ban an IP address for a length of time (6 hours by default in this configuration) after a number of failed attempts (5 by default). You may change these options in the fail2ban jail configuration:
bantime = 6h   <- how long to enforce the ip ban
maxretry = 5   <- number of times to retry until a ban occurs
This will work out of the box - no
fail2ban configuration is needed unless you want e-mail alerts of bans. To enable them, enter the SMTP settings in
.env, then follow the instructions in
fail2ban/jail.d/jail.local: uncomment and fill in
sender, and uncomment the
action_mwl action in the
bitwarden-admin jails in the same file.
Configure Country-wide Blocking (optional)
The countryblock container will block IP addresses from the countries specified in
COUNTRIES. China, Hong Kong, and Australia (CN, HK, AU) are blocked by default because Google Cloud charges for egress to those countries even under the free tier. You may add any country you like to that list, or clear it out entirely if you don’t want to block those countries. Be aware, however, that you’ll probably be charged for any traffic to those countries, even from bots or crawlers.
This country-wide blocklist will be updated daily at midnight, but you can change the
COUNTRYBLOCK_SCHEDULE variable in
.env to suit your needs.
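Assuming COUNTRYBLOCK_SCHEDULE takes a standard five-field cron expression (minute, hour, day of month, month, day of week), a couple of illustrative values:

```shell
# In .env - illustrative schedules, assuming standard five-field cron syntax
COUNTRYBLOCK_SCHEDULE="0 0 * * *"      # daily at midnight (the stated default)
#COUNTRYBLOCK_SCHEDULE="0 */6 * * *"   # alternative: refresh every six hours
```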
These block-lists are pulled from www.ipdeny.com on each update.
Configure Automatic Rebooting After Updates (optional)
Container-Optimized OS will automatically update itself, but the update will only be applied after a reboot. To ensure that you are running the most current operating system software, you can set a boot script that waits until an update has been applied and then schedules a reboot.
Before you start, ensure you have
compute-rw scope for your bitwarden compute vm. If you used the
gcloud command above, it includes that scope. If not, go to your Google Cloud console and edit the “Cloud API access scopes” to have “Compute Engine” show “Read Write”. You need to shut down your compute vm in order to change this.
Modify Reboot Script
Before adding the startup script to Google metadata, modify the script to set your local timezone and the time to schedule reboots: set the
TIME= variables in
utilities/reboot-on-update.sh. By default the script will schedule reboots for 06:00 UTC.
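The scheduling logic such a script needs boils down to “sleep until the configured time, then reboot”. A sketch of that calculation, assuming GNU date (the seconds_until helper is made up for illustration, and the commented update check is an assumption, not the repo’s actual code):

```shell
#!/bin/sh
# Sketch of the scheduling idea behind a reboot-on-update script.
TIME="06:00"   # assumption: mirrors the TIME= variable mentioned above

seconds_until() {
    # Seconds from now until the next occurrence of HH:MM given in $1
    # (uses GNU date's -d relative date parsing).
    now=$(date +%s)
    target=$(date -d "today $1" +%s)
    if [ "$target" -le "$now" ]; then
        target=$(date -d "tomorrow $1" +%s)
    fi
    echo $((target - now))
}

# In a real script, something along these lines would follow (illustrative):
#   update_engine_client --status | grep -q NEED_REBOOT && \
#       sleep "$(seconds_until "$TIME")" && sudo reboot
```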
Add Startup Script to Metadata
From within your compute vm console, type the command
toolbox. This command will download the latest
toolbox container if necessary and then drop you into a shell that has the
gcloud tool you need to use. Whenever you’re in
toolbox, typing the
exit command will return you to your compute vm.
Within
toolbox, find the
utilities folder of the repository you cloned.
toolbox mounts the host filesystem under
/media/root, so go there to find the folder. It will likely be in
/media/root/home/<google account name>/bitwarden_gcloud/utilities -
cd to that folder.
Use
gcloud to add the
reboot-on-update.sh script to your vm’s boot script metadata with the following command:
gcloud compute instances add-metadata <instance> --metadata-from-file startup-script=reboot-on-update.sh
If you have forgotten your instance name, look at the Google Cloud Compute console or find it with the command:
# gcloud compute instances list
Confirm Startup Script
You can confirm that your startup script has been added in your instance details under “Custom metadata” on the Compute Engine Console.
Next, restart your vm with the command
$ sudo reboot. Once your vm has rebooted, you can confirm that the startup script was run with the command:
$ sudo journalctl -u google-startup-scripts.service
You should see something like these lines in the log:
-- Reboot -- Jul 16 18:44:10 bitwarden systemd: Starting Google Compute Engine Startup Scripts... Jul 16 18:44:10 bitwarden startup-script: INFO Starting startup scripts. Jul 16 18:44:10 bitwarden startup-script: INFO Found startup-script in metadata.
Now the script will wait until a reboot is pending and then schedule a reboot for the time configured in the script.
If necessary you can run the startup script manually with the command
$ sudo google_metadata_script_runner --script-type startup --debug, and get the status of automatic updates with the command
$ sudo update_engine_client --status.
Step 3: Start Services with docker-compose
Use
docker-compose to get the containers started:
$ docker-compose up
Normally, you’d include a
-d, as in
$ docker-compose up -d, but the first time it’s nice to watch the initial startup. You should see the
caddy service attempt to use ACME to auto-negotiate a Let’s Encrypt SSL cert, for example. It will fail because you don’t have DNS properly set up yet, which is fine. It will keep trying.
If you need to open another SSH session to continue, do that from the Google Cloud Console.
Step 4: Configure Dynamic DNS
DDNS is optional in the sense that you can manually set your DNS record to your ephemeral address, but I don’t know how often Google assigns you a new address. Furthermore, Let’s Encrypt has problems with some DDNS providers, so having a full DNS provider like Cloudflare may be necessary.
Google charges for static IPs, but their ephemeral IPs are free.
Before you can get an SSL cert issued by Caddy/LetsEncrypt, you need a DNS record that points to your Google Cloud vm. You’ll notice in your logs that Caddy/LetsEncrypt will keep trying with the ACME protocol.
Dynamic DNS is supported using ddclient through the ddclient docker container. The ddclient container provides a configuration file at
ddns/ddclient.conf that you must edit to work with your particular DNS provider. Their GitHub repo here contains documentation on configuring
ddclient. The
ddclient.conf file is placed in the
ddns/ directory by the ddns container the first time it runs, and any changes made to this configuration file are read automatically by the ddns container - no need to stop and start the container; you will see this reflected in the logs.
Since I use Cloudflare, I can provide more detail about this step. For other DNS providers, you’re on your own but the documentation for
ddclient is pretty helpful.
Edit
ddns/ddclient.conf and add the following lines:
use=web, web=checkip.dyndns.org/, web-skip='IP Address' # found after IP Address
protocol=cloudflare
zone=<your cloudflare site / base URL / e.g. example.com>
ttl=0
login=<your e-mail>
password=<GLOBAL API KEY FOUND UNDER [MY PROFILE] -> [API TOKENS] IN CLOUDFLARE>
<your bitwarden site subdomain / e.g. bw.example.com>
Newer commits to
ddclient support API tokens which are a better choice than a global key, but those commits haven’t made their way into a full
ddclient release, so they haven’t been pulled into the container.
Step 5: Start Using Bitwarden
If everything is running properly (the logs will tell you when it isn’t), you can use your browser to visit the address that points to your new Google Cloud Bitwarden vm and start using Bitwarden! Depending on which bootstrapping method you chose in
.env (whether you use the
/admin page or have open sign-up enabled), you can create your new account and get going!
If you run into issues, such as containers not starting, the following commands will be helpful:
- docker ps - this will show which containers are running, or whether one of them has failed
- docker-compose logs <container name> - this will show the recent logs for the named container (or all containers if you omit the name) and is very useful in troubleshooting
You should now have a free self-hosted instance of Bitwarden that survives server reboots with an OS that gets timely security patches automatically.
There’s plenty of tweaking and optimization possible, feel free to make this yours. There were many resources that I used to build this guide, many of them listed below. Feel free to comment with any optimizations or issues that you run across.