Guide: Self-host Bitwarden on Google Cloud for Free*
This write-up is a product of my quest to self-host Bitwarden on a free-tier cloud product. Following these instructions, you should end up with a self-hosted Bitwarden password manager with all the benefits of running it in the cloud. *The only way this might not be free is if you exceed the 1 GB egress allowance or have any egress to China or Australia. In this guide I cover best practices for avoiding that kind of traffic to keep this as free as possible.
The end product is a Github repo (link below). The readme.md found in that repo should be enough to get going if you’re not new to projects like this, whereas the text below has a bit more detail if you need it.
Github Repository for bitwarden_gcloud
Update May 2021 I implemented a backup solution; check out Configure Bitwarden Backups for details.
Update July 2020 I added a new script and section for rebooting the host vm when updates have been made to the OS, ensuring that the host system stays patched and secure against n-day vulnerabilities.
Update June 2020 I added fail2ban to block brute-force attacks on the webvault, and country-wide blocking to avoid egress charges to China and Australia (or to block other countries that you might want). I added countryblock to block entire countries, and I also added a watchtower container as suggested by this comment, thanks ldw.
Update May 2020 Originally I used a home-grown dynamic dns script for Cloudflare, but it won’t be as well supported as ddclient, so I swapped out my script for linuxserver’s ddclient container. My original Cloudflare DDNS script can be found here.
I’ve been meaning to self-host Bitwarden for some time. If you don’t know about Bitwarden, it’s a password manager that’s open source and allows you to host it yourself! Now my encrypted password data is even more under my control. While I have a home server, I want to limit its exposure to the public Internet. Any service you expose to the Internet can become a pivot point into the rest of your internal network.
I saw that Google Cloud offers an ‘always free’ tier of their Compute Engine. Will one shared core and 614 MB of memory be enough for Bitwarden? According to the system requirements, Bitwarden requires 2GB of RAM, but reports in its Github issue tracker say that even that is not enough. I went through the trouble of trying it out anyway, and it failed spectacularly; the install script couldn’t even finish. There is, however, a lightweight alternative: Bitwarden RS. It’s written in Rust and is an ideal candidate for a micro instance.
Features
- Bitwarden self-hosted
- Automatic https certificate management through Caddy 2 proxy
- Dynamic DNS updates through ddclient
- Blocking brute-force attempts with fail2ban
- Country-wide blocking through iptables and ipset
- Container images kept up-to-date with watchtower
Prerequisites
Before you start, ensure you have the following:
- A Google Cloud account with billing set up (so they can bill you if you use their non-free services)
- A DNS provider that is supported by ddclient for dynamic DNS support; a list of supported DNS services can be seen here. Note: not all DDNS providers are supported by LetsEncrypt, YMMV
Step 1: Set up a new VM
At the time of writing this, Google offers one free Google Compute Engine f1-micro instance with the following specifications:
* Region:
  * Oregon: us-west1
  * Iowa: us-central1
  * South Carolina: us-east1
* 30 GB-months HDD
* 5 GB-months snapshot storage in the following regions:
  * Oregon: us-west1
  * Iowa: us-central1
  * South Carolina: us-east1
  * Taiwan: asia-east1
  * Belgium: europe-west1
* 1 GB network egress from North America to all region destinations (excluding China and Australia) per month
To get started, go to Google Compute Engine (after doing all the necessary setup of creating a project and providing billing info if necessary - don’t worry, this will cost exactly $0.00 each month if done correctly) and open a Cloud Shell. You can create the instance manually, but the Cloud Shell makes everything easier. In the Cloud Shell (a small icon in the upper right corner of your Google Cloud console), the following command will build the properly spec’d machine:
$ gcloud compute instances create bitwarden \
--machine-type f1-micro \
--zone us-central1-a \
--image-project cos-cloud \
--image-family cos-stable \
--boot-disk-size=30GB \
--tags http-server,https-server \
--scopes compute-rw
You can change the zone if you’d like; however, only some zones have the f1-micro machine-type available. The tags open up the firewall for HTTP and HTTPS (HTTP is required later). I’m using the maximum free HDD size because it apparently provides higher IOPS, and it lets me maximize the amount of encrypted attachments I can store.
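If you want to see which zones offer the f1-micro machine type before picking one, one quick check (a sketch using the gcloud CLI, which is already available in Cloud Shell) is:
$ gcloud compute machine-types list --filter="name=f1-micro" --format="value(zone)"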
I am using the stable Container Optimized OS (COS) for several reasons, primarily:
- It’s optimized for Docker containers - less overhead to consume RAM
- It’s secure by default - security updates are automatically installed and security is locked down by default
CoreOS was also a contender but it used more memory at idle in my limited testing.
Important: Close the Cloud Shell and continue in the vm instance’s SSH shell by selecting the instance in the Google Cloud Console and clicking the SSH button.
Step 2: Pull and Configure Project
Enter an SSH shell on the new vm instance by clicking the instance’s SSH button. Once you’re in the new shell, clone this repo in your home directory:
$ cd
$ git clone https://github.com/dadatuputi/bitwarden_gcloud.git
$ cd bitwarden_gcloud
Before you can start everything up, you need to set up the docker-compose alias by running the utilities/install-alias.sh script (you can read more about why this is necessary here). The script just writes the alias to ~/.bash_alias and includes it in ~/.bashrc:
$ sh utilities/install-alias.sh
$ source ~/.bashrc
$ docker-compose --version
docker-compose version 1.25.5, build 8a1c60f
.env file
I provide .env.template which should be copied to .env and filled out. Most of your configuration is done in .env and is self-documented. This file is a collection of environment variables that are read by docker-compose and passed into their respective containers.
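As a rough illustration of the format (DOMAIN, SMTP_PORT, and SMTP_SSL come up later in this guide’s comments; SMTP_HOST and the values shown are assumed examples - .env.template is the authoritative list):
DOMAIN=bw.example.com
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_SSL=true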
Configure Bitwarden Backups (optional)
There is rudimentary support for backups provided by default and configured for the most part in the .env file. Look for the Bitwarden Backup Options section.
When enabled, the backup will run on a regular interval (daily at midnight by default) and keep 30 days (default) of backups in the bitwarden/backups directory. The script will back up the following resources (based on this documentation):
- db.sqlite3 - encrypted database
- bitwarden/attachments - attachments directory
- bitwarden/sends - sends directory
- config.json - file with configuration settings (if it exists)
- rsa_key* - keys for logged in users
There are three backup methods:
- local - backup to the local directory only on the designated interval. You may want to use this if you have your own backup method in mind to synchronize bitwarden/backups
- email - email the latest backup
- rclone - synchronize the entire backup directory to a cloud storage service
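As an illustrative sketch only - these variable names are hypothetical stand-ins for whatever the Bitwarden Backup Options section of your .env actually defines:
BACKUP=rclone # hypothetical: local, email, or rclone
BACKUP_SCHEDULE=0 0 * * * # hypothetical: daily at midnight
BACKUP_DAYS=30 # hypothetical: days of backups to keep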
Bitwarden Local Backups
This is the simplest method and will just maintain a directory of backups and optionally email you when the job is complete.
Bitwarden Email Backups
This backup method uses the SMTP settings provided to Bitwarden, so ensure that those variables are populated with correct values. The email default values provide a daily gzipped backup to your e-mail. This backs up the attachments and sends folders, so it could get quite large and may not be suitable for users who use attachments and sends.
Bitwarden Rclone Backups
This method is more powerful and a better option for users with large backups. To configure rclone, either provide a working configuration file at bitwarden/rclone.conf or create one using the following command from your gcloud shell while bitwarden is running:
sudo docker exec -it bitwarden ash -c 'rclone config --config $BACKUP_RCLONE_CONF'
Follow the instructions at Rclone Remote Setup; rclone will guide you through the configuration steps. You will likely need to download rclone on a host with a GUI, but rclone does not require installation, so this step is easier than it sounds.
Testing Backup
Your backup should run at the next cron job; however, you may test it from the Google Cloud Shell with the following command, replacing <local|email|rclone> with the backup method you would like to test:
sudo docker exec -it bitwarden ash /backup.sh <local|email|rclone>
Look at the log files if you run into issues, and ensure that the appropriate environment variables are set correctly.
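For example, one way to tail the bitwarden container’s recent output:
$ sudo docker logs --tail 50 bitwarden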
Configure fail2ban (optional)
fail2ban stops brute-force attempts at your vault. It will ban an IP address for a length of time (6 hours by default in this configuration) after a number of failed attempts (5 by default). You may change these options in the file fail2ban/jail.d/jail.local:
bantime = 6h <- how long to enforce the ip ban
maxretry = 5 <- number of times to retry until a ban occurs
This will work out of the box - no fail2ban configuration is needed unless you want e-mail alerts of bans. To enable these, enter the SMTP settings in .env, then follow the instructions in fail2ban/jail.d/jail.local: uncomment and fill in destemail and sender, and uncomment the action_mwl action in the bitwarden and bitwarden-admin jails in the same file.
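A sketch of what those settings might look like once uncommented (the addresses are placeholders, and the exact layout of jail.local may differ):
destemail = you@example.com
sender = fail2ban@example.com
action = %(action_mwl)s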
Configure Country-wide Blocking (optional)
The countryblock container will block IP addresses from countries specified in .env under COUNTRIES. China, Hong Kong, and Australia (CN, HK, AU) are blocked by default because Google Cloud will charge egress to those countries under the free tier. You may add any country you like to that list, or clear it out entirely if you don’t want to block those countries. Be aware, however, that you’ll probably be charged for any traffic to those countries, even from bots or crawlers.
This country-wide blocklist is updated daily at midnight, but you can change the COUNTRYBLOCK_SCHEDULE variable in .env to suit your needs.
These block-lists are pulled from www.ipdeny.com on each update.
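For example (the list separator and cron-style schedule shown here are assumptions - check the comments in .env for the exact syntax):
COUNTRIES=CN HK AU
COUNTRYBLOCK_SCHEDULE=0 0 * * *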
Configure Automatic Rebooting After Updates (optional)
Container-Optimized OS will automatically update itself, but the update will only be applied after a reboot. In order to ensure that you are using the most current operating system software, you can set a boot script that waits until an update has been applied and then schedules a reboot.
Before you start, ensure you have the compute-rw scope for your bitwarden compute vm. If you used the gcloud command above, it includes that scope. If not, go to your Google Cloud console and edit the “Cloud API access scopes” to have “Compute Engine” show “Read Write”. You need to shut down your compute vm in order to change this.
Modify Reboot Script
Before adding the startup script to Google metadata, modify the script to set your local timezone and the time to schedule reboots: set the TZ= and TIME= variables in utilities/reboot-on-update.sh. By default the script will schedule reboots for 06:00 UTC.
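For example, to schedule reboots for 03:00 US Eastern time (values are illustrative; see the comments in reboot-on-update.sh for the expected format):
TZ=America/New_York
TIME=03:00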
Add Startup Script to Metadata
From within your compute vm console, type the command toolbox. This command will download the latest toolbox container if necessary and then drop you into a shell that has the gcloud tool you need. Whenever you’re in toolbox, typing the exit command will return you to your compute vm.
From within toolbox, find the utilities folder within bitwarden_gcloud. toolbox mounts the host filesystem under /media/root, so go there to find the folder. It will likely be in /media/root/home/<google account name>/bitwarden_gcloud/utilities - cd to that folder.
Next, use gcloud to add the reboot-on-update.sh script to your vm’s boot script metadata with the add-metadata command:
gcloud compute instances add-metadata <instance> --metadata-from-file startup-script=reboot-on-update.sh
If you have forgotten your instance name, look at the Google Cloud Compute console or find it from toolbox with the gcloud command:
# gcloud compute instances list
Confirm Startup Script
You can confirm that your startup script has been added in your instance details under “Custom metadata” on the Compute Engine Console.
Next, restart your vm with the command $ sudo reboot. Once your vm has rebooted, you can confirm that the startup script was run with the command:
$ sudo journalctl -u google-startup-scripts.service
You should see something like these lines in the log:
-- Reboot --
Jul 16 18:44:10 bitwarden systemd[1]: Starting Google Compute Engine Startup Scripts...
Jul 16 18:44:10 bitwarden startup-script[388]: INFO Starting startup scripts.
Jul 16 18:44:10 bitwarden startup-script[388]: INFO Found startup-script in metadata.
Now the script will wait until a reboot is pending and then schedule a reboot for the time configured in the script.
If necessary you can run the startup script manually and get the status of automatic updates with the following commands:
$ sudo google_metadata_script_runner --script-type startup --debug
$ sudo update_engine_client --status
Step 3: Start Services with docker-compose
Use docker-compose to get the containers started:
$ docker-compose up
Normally you’d include a -d, as in $ docker-compose up -d; however, the first time it’s nice to see the initial startup. You should see the caddy service attempt to use ACME to auto-negotiate a Let’s Encrypt SSL cert, for example. It will fail because you don’t have DNS properly set up yet, which is fine - it will keep trying.
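Once everything looks healthy, stop the containers with Ctrl+C and bring them back up in detached mode:
$ docker-compose up -d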
If you need to open another SSH session to continue, do that from the Google Cloud Console.
Step 4: Configure Dynamic DNS
DDNS is optional in the sense that you can manually set your DNS record to your ephemeral address, but I don’t know how often Google gives you a new address. Furthermore, LetsEncrypt has a problem with some DDNS providers, so having a real DNS provider like Cloudflare may be necessary.
Google charges for static IPs, but their ephemeral IPs are free.
Before you can get an SSL cert issued by Caddy/LetsEncrypt, you need a DNS record that points to your Google Cloud vm. You’ll notice in your logs that Caddy/LetsEncrypt will keep trying with the ACME protocol.
Dynamic DNS is supported using ddclient through the ddclient docker container. The ddclient container provides a configuration file at ddns/ddclient.conf that you must edit to work with your particular DNS provider. Their GitHub repo here contains documentation on configuring ddclient and the ddclient.conf file.
Note: ddclient.conf is placed in the ddns/ directory by the ddns container when it is run for the first time. Any changes made to this configuration file will automatically be read in by the ddns container - no need to stop and start the container; you will see this shown in the logs.
Cloudflare Instructions
Since I use Cloudflare, I can provide more detail about this step. For other DNS providers you’re on your own, but the documentation for ddclient is pretty helpful.
Edit ddns/ddclient.conf and add the following lines:
use=web, web=checkip.dyndns.org/, web-skip='IP Address' # found after IP Address
protocol=cloudflare
zone=<your cloudflare site / base URL / e.g. example.com>
ttl=0
login=<your e-mail>
password=<GLOBAL API KEY FOUND UNDER [MY PROFILE]-> [API TOKENS] IN CLOUDFLARE>
<your bitwarden site subdomain / e.g. bw.example.com>
Newer commits to ddclient support API tokens, which are a better choice than a global key, but those commits haven’t made their way into a full ddclient release, so they haven’t been pulled into the container.
Step 5: Start Using Bitwarden
If everything is running properly (the logs will tell you when it isn’t), you can use your browser to visit the address that points to your new Google Cloud Bitwarden vm and start using Bitwarden! Depending on which bootstrapping method you chose in .env (whether you use the /admin page or have open sign-up enabled), you can create your new account and get going!
Troubleshooting
If you run into issues, such as containers not starting, the following commands will be helpful:
- docker ps - this will show what containers are running, or if one of them has failed
- docker-compose logs <container name> - this will show the recent logs for the named container (or all containers if you omit the name) and is very useful in troubleshooting
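For example, to check whether everything came up and then inspect the proxy service (the container several commenters below mention when startup fails):
$ docker ps
$ docker-compose logs proxy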
Conclusion
You should now have a free self-hosted instance of Bitwarden that survives server reboots with an OS that gets timely security patches automatically.
There’s plenty of tweaking and optimization possible; feel free to make this yours. There were many resources that I used to build this guide, many of them listed below. Feel free to comment with any optimizations or issues that you run across.
Comments
ldw
Thanks for the guide, I followed your project from the beginning, even before you introduced ddclient. It works perfectly for me. I have two pieces of feedback: one is on the install-alias.sh script, which I had to change a bit for it to work for me; the other is that I added a new container to the project so that the images can be updated automatically, but this is really up to personal preference - some people may not like it that way. The changes I have made are pasted below:
bradford
Thanks for the feedback. I rolled your watchtower suggestion into my latest revision, I think it’s a great addition considering how long these services will be running - having regular updates will help keep them secure without any intervention.
Brandon
Is ddclient automatically installed when I clone the repo? It doesn’t seem to be working at the moment. When I try to use any ddclient commands, i.e. sudo ddclient -daemon=0 -debug -verbose -noquiet, it returns command not found.
bradford
ddclient is in the container. Start it up, then edit ddns/ddclient.conf and the changes will be automatically applied. $ docker logs ddns will show you the logs if you’re having trouble.
Ryan
I’m stuck on #env-file. Not sure what you mean by copying to .env with the .env file once it’s edited. It also appears to not have cloned over - perhaps this is by design.
I really appreciate your work with these steps, but it looks to be a bit ahead of my current skillset. Perhaps I’ll come back when my skills are more upgraded.
Bradford
The repository provides a .env.template file that you are supposed to copy to .env and then complete/fill out as it directs you. Files with names prefixed with a . are hidden, so use the command ls -a to see hidden files.
Aleksei
Hello. AFAIR one fixed IP per account is free now, so you don’t need the ‘ddclient’ part
Bradford
Nice! Do you have a source for that? I can’t find anything on it, and the documentation still says there’s a charge.
Richard
Hi, I followed all your steps to the letter, including the ddclient where I am getting ddns working like a charm. The only problem, when I try to access Bitwarden via http or https, I go nowhere. I have tried by FQDN or by IP. Same results. What am I doing wrong? or what step am I missing? Thanks.
Bradford
I would check your logs - try docker ps -a to see a list of the running docker containers, and then docker logs <container name> to view the logs for that container. I would suspect that the caddy/proxy and Bitwarden containers would be the issue.
Richard
Actually I kept troubleshooting it all day yesterday and got it to work. Somehow I was missing the “Domain=” entry in the .env file. I just filled it in and had to “docker-compose up -d” to activate it.
The only problem I have now is I am not able to get SMTP working. I have tried to use Google SMTP -> no dice. I even tried another SMTP relay -> still no dice.
Can you please give details about the 4-5 fields? Especially the “SMTP_PORT=” and “SMTP_SSL=” fields. What is the difference between them? Am I supposed to put a port in each, or what exactly? Which port? I am mainly confused about those 2 entries. I am using port 587. Should I put it in both those variables? Thanks in advance.
Bradford
I should improve the comment for the SMTP fields in the .env.template file. Gmail settings are here. Here’s some details for each field:
Richard
Thanks. Working like a charm. I have been using Google SMTP in other places for years. Somehow the docker container was not accepting port 465, even after setting “SMTP_SSL=true”. As soon as I switched to 587, I started to receive emails. So maybe it’s worth mentioning for anyone who wants to use Gmail SMTP to integrate it with your configuration.
Canh
I ran into something that could be similar to your situation. Check your VM details under the Firewalls section and ensure Allow HTTP and HTTPS are checked. Otherwise, stop the VM and edit those settings.
For some reason the network tags as suggested didn’t seem to apply these changes.
Koen
I had the same issue as Canh, HTTP(S) not being allowed. For those of you that want to check, here’s where that’s located (found in the Google Cloud help docs):
Andrew Reid
Barring the deployment, will your script work outside of GCP (say on AWS, OCI or a generic VPS running Ubuntu 20.04 minimal)?
Bradford
Yes Andrew, the docker-compose and environment variables in .env should be all you need, just ignore the Google Cloud specific settings. You may need to substitute them for the appropriate AWS/etc steps though. If you do get it going on AWS, drop a note here or a pull request.
Richard
I am editing “fail2ban/jail.d/jail.local”, but the changes do not seem to be picked up. I even did “docker restart”. I even did “docker-compose up -d --build”. I even rebooted my server.
$ docker ps -a
CONTAINER ID   IMAGE                       COMMAND                  CREATED        STATUS                 PORTS              NAMES
0b69e1b49b5d   docker/compose              "sh /usr/local/bin/d…"   2 hours ago    Up 2 hours                                bold_varahamihira
ccfc26f73dad   bitwardenrs/server:alpine   "/start.sh"              3 hours ago    Up 2 hours (healthy)   80/tcp, 3012/tcp   bitwarden
c288cbe874dd   crazymax/fail2ban:latest    "/entrypoint.sh fail…"   23 hours ago   Up 2 hours (healthy)                      fail2ban
As you can see, “docker ps” shows that it was restarted 2 hours ago, but it was created 23 hours ago. I guess I need to re-create it for the changes to get picked up.
What am I doing wrong?
Thanks.
Ben
Hi, I love this project, thanks for all the time you put in to document everything!
I was wondering if you’d considered building in a backup system to automate encrypted database backups that could be sent offsite, perhaps via email. This could be helpful in scenarios where the owner forgets or is unable to back up through the bitwarden web interface, or the server is lost or deleted, etc.
Bradford
That’s very strange. I have been running since this was published with nothing like that happening. Before you try to fix it, it might be a good idea to clone your vm disk so that you can restore it if necessary.
After that, I would try stopping it and starting it again. Look at the “Serial port 1 (console)” logs to see if you see any indication of what might be going wrong with it.
If you sync Bitwarden to an app or a browser extension, you might still be able to export the data since it caches it. I tested with mine after unplugging from the network; the Chrome Bitwarden extension allowed me to authenticate and export a .json backup of my passwords. Good luck.
airmarshall
This has been working great for over a month. Today it became unresponsive and started timing out. I checked the console and the VM was still running, but SSH was unable to connect. I ended up having to “Stop” the machine.
On restart it’s now been bouncing off 100%+ CPU for over an hour, SSH is still not available (although it doesn’t timeout or fail). The URL times out and bitwarden is unable to connect to the server.
Any ideas what it’s doing and how I can get in?
Unfortunately, as it wasn’t specified in the guide, I hadn’t got round to ‘backing up’ my bitwarden data…
Ben
I had exactly the same problem. Restarting the VM did not even allow me to SSH. I did manage to get the machine working again after removing the startup-script, but then bitwarden was not accessible via web or app.
I also tried “docker-compose down” followed by “docker-compose up”, with the “up” command failing on the “proxy” service.
After I tried your solution (“sudo reboot” followed by the “down” and then “up” command) my bitwarden service was successfully restored as well.
I did not try to add the startup-script.
airmarshall
Thanks for the reply.
This morning CPU usage was back to normal and SSH was available. I tarred the bitwarden_gcloud folder and downloaded it, then I cloned the disk as you suggested.
Then I did “docker-compose down” followed by “docker-compose up”, the “up” command failed on the “proxy” service, complaining the port was already in use by another container.
“sudo reboot” followed by the “down” and then “up” command resulted in a successful restoration of service.
Very strange.
I’m not sure I understand the workflow of the OS update system. When checking status I get:
[0210/101506.424285:INFO:update_engine_client.cc(501)] Querying Update Engine status…
CURRENT_OP=UPDATE_STATUS_IDLE
IS_ENTERPRISE_ROLLBACK=false
IS_INSTALL=false
LAST_CHECKED_TIME=1612951427
NEW_SIZE=0
NEW_VERSION=0.0.0.0
PROGRESS=0.0
WILL_POWERWASH_AFTER_REBOOT=false
Does that make sense and is it normal?
D
Changing the ddclient image to ghcr.io/linuxserver/ddclient fixes the ‘line 3’ error
Gary
Thanks a lot for the guide.
I am also using cloudflare, with the config copied from your post and replaced with my domain’s info, something like:
use=web, web=checkip.dyndns.org/, web-skip='IP Address' # found after IP Address
protocol=cloudflare
zone=
ttl=0
login=
password=
mail2rst
Thanks for your guidance. Is it possible to elaborate on this mechanism for DuckDNS DDNS? Cloudflare & LetsEncrypt are very picky about running their services for Freenom domains (*.tk, .ml, .ga, .cf), and using the free DuckDNS service avoids buying a new domain, so it would help many people. I am a newbie with docker, but from my reading this seems possible: we can change the Caddyfile from tls {$EMAIL} to tls duckdns {DuckDNS API}. We are using the standard caddy docker image, but for this we would have to add a DuckDNS plugin to the image, as someone did for Cloudflare. The first portion of the Dockerfile (Caddy Builder) is going above my head. Thanks. https://github.com/dani-garcia/bitwarden_rs/wiki/Caddy-2.x-with-Cloudflare-DNS
mail2rst
I have received SSL certificates from ZeroSSL, so now I have two files (cert.pem & key.pem), each having about 1000 random characters. Now I want to add my manual certificate to our caddy container - basically moving the Caddyfile to “tls {$SSLCERTIFICATE} {$SSLKEY}” instead of tls {$EMAIL}. What is the best practice to add my custom certificate to the system? Would you suggest declaring two variables, $SSLCERTIFICATE & $SSLKEY, and putting their values directly in the .env file? Or should I create the two files cert.pem & key.pem, save them in the caddy/data folder directly, then comment out the email option in the Caddyfile and activate the custom certificate option instead of tls {$EMAIL}:
tls caddy/data/cert.pem caddy/data/key.pem
Shawn
Thanks a lot for your write-up! I too use Cloudflare but for domains that I own. Therefore I have my own certs and can’t quite figure out how to get the LetsEncrypt info out of the build completely.
I know it should be jumping out at me, but alas, after two days, I can’t and Chrome doesn’t like conflicting certs.
Zhen
I have the error with the gcloud cmd step:
root@bitwarden:/media/root/home/<google account name>/bitwarden_gcloud/utilities# gcloud compute instances add-metadata bitwarden --metadata-from-file startup-script=reboot-on-update.sh
Did you mean zone [us-west2-a] for instance: [bitwarden] (Y/n)? Y
ERROR: (gcloud.compute.instances.add-metadata) Could not fetch resource:
Anything that I missed?
Piotr
hi, it seems that currently there’s a version incompatibility:
Version in “./docker-compose.yml” is unsupported. You might be seeing this error because you’re using the wrong Compose file version. Either specify a supported version (e.g. “2.2” or “3.3”) and place your service definitions under the services key, or omit the version key and place your service definitions at the root of the file to use version 1.
Piotr
I changed the docker compose version in the yml file from 4 to 3.1 and it worked.
Calvin
How do we go about updating this? I tried doing a docker-compose pull and it seems to have broken it, but I don’t see any errors.
Scott
Thank you for this guide. Any possibility you might consider expanding it to optionally add floccus for cross-browser bookmark sync to the same free Google Compute Engine f1-micro instance?
biju
Mine is all set up correctly except fail2ban is blocking the docker IP instead of the real IP. How can I change it to block the real IP instead of the docker IP?
Gabe
Thanks for this detailed guide. I’m using Cloudflare as well. Any thoughts about:
JaBo
I also ran into an issue with the setup of the VM instance where I had to edit the VM details to allow for HTTP(s) traffic. Check your VM details under the Firewalls section and ensure Allow HTTP and HTTPS are checked. Otherwise stop and edit those settings.
For some reason the network tags as suggested didn’t seem to apply these changes.
Once I did this I was in business. It may be worth adding to the setup info for the instance, or possibly updating the creation command to do it by default.
daniel
I’m facing some weird problems. The script is stopping at different places and just will not continue.
I’m using an .ml domain and cloudflare, so I guess that’s the reason.
Ninad Phadke
This was awesome! I did face an issue after a month or so where the disk essentially was overwhelmed by the logs that were getting created.
For now, I added these lines to each of the services in docker-compose.yml. I hope this does the trick.
nulluser
Hi. I got an error. “ERROR: for proxy Cannot start service proxy: driver failed programming external connectivity on endpoint proxy (13e2acebedfe3956714500defaeeb3f22f519703fb6d4ff4ef57df18505dec60): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use” Could you please help me?
Giang
Recently the cloud compute engine agreement changed; it seems to me it has added a cost to create a compute engine. The SKU description for it is “Network Inter Zone Egress”. Is my setup wrong or did they really change how they charge?
Gary Goforth
One update is to use the E2-micro VM, which is now the new Free Tier VM. Google will start charging for the F1-Micro VM starting Aug 31, 2021.
Philippe
Anyone with the URL is able to create an account. How do I limit this? Is there a way to approve account creation? Also, is there a way to include a captcha in the webpage to prevent mass creation?
David Robarts
I used the following lines in ddclient.conf to get the VM external IP address information:
Norm
This was up and working for me for a few months, even after the migration to the newer VM type a few weeks ago. So first I’d like to say thanks for a great guide, this has been super helpful and useful for me.
But I just noticed a couple days ago that my local clients aren’t syncing. I checked the GC console and saw that the VM was shut down. I logged into it, and it had me using a different user (same username as before but with “_gmail_com” appended). The old user was there and all the bitwarden_gcloud repo files were there, but I couldn’t even su to that user anymore. I moved the files to the new user’s home dir, fixed perms, and started everything up again. Now when I hit the web site I get an SSL error (NET::ERR_CERT_AUTHORITY_INVALID).
I don’t see anything in the caddy/proxy logs that looks wrong or like any kind of error at all, and in bitwarden_gcloud/caddy/data/caddy/certificates/ I have a valid cert from letsencrypt. I’ve even removed the cert files and restarted and caddy gets new certs issued. I don’t understand why caddy isn’t serving up the legit letsencrypt certs it’s getting?
Norm
I should note that the one negative-looking log message for caddy was that it wasn’t super happy about the format of the Caddyfile. I ran “caddy fmt” and used the resulting output, although the only changes to it were whitespace.
Also, when I say it’s not serving up the letsencrypt cert, it is serving up a self-signed cert, with just the external IP address of the VM as its domain. If I enter https:// in my browser and then "accept the security risks [from a cert with an unknown issuer] and continue", I get a Forbidden error.
Also, it seems that ever since I started the VM back up, I am able to sync with the Android app (or at least, it's not giving me errors), but the MacOS desktop client still fails to sync.
Shawn Egan
So, I shut down the server normally, changed to the e2-micro, booted back up, and the bitwarden directory is gone. Thoughts?
Colin
Thanks for your work on this. FYI. Although getting the DDNS configured correctly was a pain, I eventually got everything working well except for the backup. “line 130: sqlite3: not found”. Either I didn’t follow the directions closely or sqlite isn’t automatically configured any longer with this process.
Niklas Schlögel
First of all: Thank you for this guide and repo. I have issues with Let’s Encrypt though. I am also using Cloudflare (ddclient worked so far) and want to use auto-encryption. Ports 80 and 443 are enabled in the GCP firewall settings and I don’t know what I am missing. I get errors like "no solvers available for remaining challenges" or "problem":{"type":"urn:ietf:params:acme:error:unauthorized","title":"","detail":"Invalid response from http://mydomain.com/.well-known/acme-challenge/VDR0-FCd9XsImnjZbCA75xz9bBUsk-1woEYUzKHQVnc [2606:4700:3033::ac43:914e]: \"<!DOCTYPE html>\n\n<!--[if IE 7]> <html class=\"no-js \"","instance":"","subproblems":[]}} and finally some rate limiting error message. Please be so kind and help me, somebody. I see that recent questions got ignored, but I thought I’d give it a shot.
S.Oliver
Hi, thanks for this wonderful instruction. It works for me, but I have a really strange behaviour: when the system has been running for ~1 day, it freezes. This means Bitwarden is not accessible and the compute engine also cannot be accessed by SSH. In the Google Cloud console, the CPU and IOPS remain at a really high level. (I’m not an expert in cloud stuff like monitoring.) Any ideas?
Bradford
Hi Oliver - are you on the latest ‘free tier’ product (was f1-micro, now e2-micro)? Run docker ps and make sure you don’t have more docker containers running than you should. Beyond that, I’m not sure. Take a look at the CPU history. As a reference, my CPU hovers around 20%, spiking up to 30% occasionally. As a last resort, you could blow it away and start over with a backup of your bitwarden database. Good luck.
Paul
I went through these instructions (thank you, by the way), but I still can’t manage to get it to work. It appears to be running but it’s not connecting. HTTP and HTTPS are allowed in the firewall and I can SSH. Neither connecting via hostname nor via IP works.
Is there anything I can check or look for?
Thanks
Bradford
I would look at the docker logs - once you start it all up, try docker compose logs -f and see what sorts of errors pop up. I suspect Caddy might have some issues, or the vm isn’t configured correctly to allow traffic in.
Paul
Thanks.
I have it working now although I’m not entirely sure what I did to get it working.
I now have to get working on backups. I’ve implemented a static IP with google and am using Cloudflare’s firewall to limit access to my country.
Vlad B
There is a copy/paste error in backup.sh for inclusion of the config file; it should be: FILES="$FILES $([ -f config.json ] && echo config.json)"
Ridho
Hi, thank you for the repository!
I have a question regarding the ddns: if I stop and start the VM, will the ddns automatically detect the new IP, or do we need to restart the docker container? Thanks!
Dave Peck
Hi. Thanks for all your hard work with this project.
I’ve noticed that Bitwarden now has an official “Unified” installation (currently in beta) that has very lightweight system requirements (min 200MB RAM and 1GB storage).
More information here … https://bitwarden.com/help/install-and-deploy-unified-beta/
I would have thought that this would run on the Google Cloud free tier. Would it be possible for you to take a look at the documentation and comment on the possibility of using your project to run the Bitwarden Unified release?
I don’t think that I have the necessary technical expertise to attempt this myself so would greatly appreciate your thoughts on the matter.
crespire
I added this to my notes, but I think it might be worth mentioning in this guide for those who might not be technically inclined.
Restore Backups
In order to restore backups, download or provide the archive file and unarchive it in the /bitwarden folder. Because compose mounts the vaultwarden /data folder from {$PWD}/bitwarden, we use this folder to restore data.
AirMarshall
Google Chrome extension error now occurring when attempting to log in.
The error message “Cannot read properties of null (reading ‘iterations’)” appears and the client doesn’t allow me to log in.
I have tried rebooting the instance with no improvement. Logged-in instances seem to still work.
Googling suggests that ensuring one is on the latest version of vaultwarden sorts it. I assume I am, but how can I check this?
Bradford
I have no issues with the Firefox plugin, and I just logged in to the Edge version (I don’t have Chrome installed) with no issues. Here is the version information from the Edge plugin, assuming it’s the same as Chrome:
AirMarshall
How can I check my vaultwarden instance to ensure it’s fully up to date?
I will investigate the client with other browsers and report back.