GCP Infrastructure Setup
Introduction
This page guides you through the infrastructure setup process for Google Cloud Platform (GCP). Command-line experience is highly recommended.
Prerequisites
git
A GitHub repository (to hold your fork of the userwise_self_hosted repository)
Instructions
Pre-Infrastructure Setup
Provide your AWS IAM information: we use the Account Identifier & Username to construct an ARN, which is added to our AWS IAM policy and allows you to assume the self-hosted-dep-access-role. This enables temporary access to our base container image repository & our Helm chart repository.
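For reference, assuming a cross-account role with the AWS CLI generally looks like the sketch below. The role ARN is a placeholder (not UserWise's actual account), and the repository's scripts may perform this step for you.
# Sketch only: the account ID and ARN below are placeholders.
aws sts assume-role \
  --role-arn arn:aws:iam::<userwise_account_id>:role/self-hosted-dep-access-role \
  --role-session-name self-hosted-setup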
Mirror the upstream repository into your own fork, then add a read-only upstream remote for pulling updates:
git clone --bare git@github.com:UserWise/userwise_self_hosted.git
cd userwise_self_hosted.git
git push --mirror git@github.com:<your_username>/userwise_self_hosted.git
cd .. && rm -rf userwise_self_hosted.git
git clone git@github.com:<your_username>/userwise_self_hosted.git
cd userwise_self_hosted
git remote add upstream git@github.com:UserWise/userwise_self_hosted.git
git remote set-url --push upstream DISABLE
# if you want to fetch updates, you can run these commands together:
git fetch upstream
git rebase upstream/main
./bin/install_dependencies
This script installs the following dependencies:
atlas (MongoDB Atlas CLI)
aws (AWS CLI, v2+)
helm (Kubernetes Helm CLI tool)
jq (JSON CLI tool)
kubectl (Kubernetes controller CLI tool)
terraform (HashiCorp's Terraform infrastructure automation CLI tool)
homebrew (macOS package installer; macOS clients only)
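To confirm the core tools are available on your PATH, you can check their versions, for example:
aws --version
helm version
jq --version
kubectl version --client
terraform -version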
Infrastructure Setup
Open the infra.tfvars.json file and update any necessary infrastructure configuration. Any changes made to this file will be applied when you run ./bin/terraform apply.
Use caution when accepting infrastructure changes from ./bin/terraform apply. Some changes can cause resources to be deleted! Resources that support protection from accidental termination have that protection configured when they are first created.
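For illustration, this kind of protection typically looks like the following in Terraform. This is a sketch only, not the module's actual configuration, and the resource shown is hypothetical:
# Hypothetical resource demonstrating Terraform deletion protection; not UserWise's actual config.
resource "google_compute_network" "example" {
  name = "example-network"

  lifecycle {
    prevent_destroy = true   # Terraform will refuse to plan a destroy of this resource
  }
}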
Hosting multiple clusters? Read This!
Each cluster's configuration should be stored in a separate repository, or at least a separate directory. Do not share configuration in the same directory! Doing so can cause shared resources to be accidentally deleted.
Also: every named resource MUST have a unique name. This, again, reduces the risk of accidental resource deletion.
In addition to the values in infra.tfvars.json, you must provide some secrets on each deploy: mongo_atlas_private_key, mongo_password, & psql_password.
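How you supply these depends on the ./bin/terraform wrapper; assuming it passes variables through to Terraform and the Terraform variable names match the secret names above (both assumptions), exporting them as TF_VAR_* environment variables is one standard approach:
export TF_VAR_mongo_atlas_private_key="<your_atlas_private_key>"
export TF_VAR_mongo_password="<your_mongo_password>"
export TF_VAR_psql_password="<your_psql_password>"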
Run the targeted applies below to create the base networking resources first, then plan and apply the rest of the infrastructure:
./bin/terraform apply -target=module.gcp_cluster_hosting.google_compute_network.network
./bin/terraform apply -target=module.gcp_cluster_hosting.google_compute_global_address.private_ip_address
./bin/terraform apply -target=module.gcp_cluster_hosting.google_service_networking_connection.vpc_connection
./bin/terraform apply -target=module.aws_required.module.vpc
./bin/terraform plan
./bin/terraform apply
If this fails with an oauth2 "Invalid Grant" error, try running: gcloud auth application-default login
Atlas Networking Container already exists:
If you encounter this error during the ./bin/terraform apply command, you will need to update ./infra-config/gcp_cluster_hosting/atlas-peering.tf.
The resource "mongodbatlas_networking_container" "container" block should be updated to use a data source instead (namely, changing resource → data and updating the arguments).
Be sure to update all references of mongodbatlas_networking_container.container → data.mongodbatlas_networking_container.container.
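A sketch of that edit is shown below. The lookup arguments are illustrative only (the exact fields depend on your provider version and existing configuration), so adapt them to what is already in atlas-peering.tf:
# Before: Terraform tries to create a container that already exists in Atlas.
# resource "mongodbatlas_networking_container" "container" { ... }

# After: read the existing container instead of creating it.
# NOTE: the arguments below are placeholders; use the lookup fields your provider expects.
data "mongodbatlas_networking_container" "container" {
  project_id   = var.atlas_project_id        # hypothetical variable name
  container_id = "<existing_container_id>"
}

# Then update references, e.g. mongodbatlas_networking_container.container.id
# becomes data.mongodbatlas_networking_container.container.id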
Run ./bin/edit_credentials. This will create two new files: credentials.yml.enc and master.key.
The credentials.yml.enc file IS SAFE to commit to version control. If you want to store these encrypted credentials within your repository, you may need to update the .gitignore file. The credentials.yml.enc file MUST be present for all deployments.
The master.key file IS NOT SAFE to commit to version control. This key should be backed up and manually placed on any device that needs access to the credentials.yml.enc file or needs to run a deployment. Access to the master.key grants access to all credentials! This file MUST be present for all deployments.
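For example, if you choose to commit the encrypted credentials, the flow might look like this (whether master.key is already listed in your .gitignore is an assumption; adjust to your repository):
echo "master.key" >> .gitignore           # keep the key itself out of version control
git add .gitignore credentials.yml.enc    # the encrypted file is safe to commit
git commit -m "Store encrypted credentials"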
Deployment
./bin/deploy
Run the command above and wait for the userwise-frontend Kubernetes service to become available.
Post-Deployment
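Before opening a console, you can confirm the rollout finished. Assuming the ./bin/kubectl wrapper forwards its arguments to kubectl, something like the following works:
./bin/kubectl rollout status deploy/userwise-app-frontend
./bin/kubectl get pods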
Open a shell in the frontend deployment and start a Rails console to create your first company and user:
./bin/kubectl exec -it deploy/userwise-app-frontend -- /bin/sh
rails c
irb > company = Company.create(name: "My Company Name")
irb > User.create(email: "myemail@ourdomain.com", password: "mypassword", confirmed_at: DateTime.now.utc, company: company)
Finally, visit https://subdomain.ourdomain.com & log in!