This container bundles the latest Google Cloud SDK along with all of its modules. It is an easy way to run commands against your cloud instances and apps and to switch between projects, zones, and regions!
Furthermore, you can schedule your commands with cron in order to manage the cloud!
In short, you can run Google Cloud SDK commands against your cloud projects with this container, just by executing:
$ docker run -it --rm \
-e "GCLOUD_ACCOUNT=$(base64 auth.json)" \
-e "GCLOUD_ACCOUNT_EMAIL=useraccount@developer.gserviceaccount.com" \
-e "CLOUDSDK_CORE_PROJECT=example-project" \
-e "CLOUDSDK_COMPUTE_ZONE=europe-west1-b" \
-e "CLOUDSDK_COMPUTE_REGION=europe-west1" \
blacklabelops/gcloud \
gcloud compute instances list
This lists the instances inside the specified cloud project. Note: the auth credentials are read from the file auth.json.
You can even set up a cron schedule and manage the cloud!
$ docker run --rm \
-v $(pwd)/logs/:/logs \
-e "GCLOUD_ACCOUNT=$(base64 auth.json)" \
-e "GCLOUD_ACCOUNT_EMAIL=useraccount@developer.gserviceaccount.com" \
-e "GCLOUD_CRON=$(base64 example-crontab.txt)" \
blacklabelops/gcloud
$ cat logs/gcloud.log
This starts the schedule and writes to the local logs folder. The cron schedule is defined inside the file example-crontab.txt.
Typical use cases:
- Managing backups by pushing files to Cloud Storage buckets.
- Restoring backups from Cloud Storage buckets into containers.
- Executing long-running file transfers.
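As a sketch of the restore use case (the bucket name example-backups, the archive name backup.tar.gz, and the target folder are placeholders, not part of this image):

```shell
# Restore sketch: pull an archive from a Cloud Storage bucket and
# unpack it into a mounted volume. Bucket and file names are placeholders.
docker run --rm \
  -v $(pwd)/restore/:/restore \
  -e "GCLOUD_ACCOUNT=$(base64 auth.json)" \
  -e "GCLOUD_ACCOUNT_EMAIL=useraccount@developer.gserviceaccount.com" \
  blacklabelops/gcloud \
  bash -c "gsutil cp gs://example-backups/backup.tar.gz /tmp/ && tar -xzf /tmp/backup.tar.gz -C /restore"
```

The restored files end up in the local ./restore folder.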
You can only run commands against existing cloud projects!
Documentation can be found here: Creating & Deleting Projects
You will also have to activate APIs manually before you can use them!
Documentation can be found here: Activating & Deactivating APIs
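As an example, an API can also be enabled from inside an authenticated container (the project id example-project is a placeholder, and this assumes the services command group is available in your SDK version):

```shell
# Enable the Compute Engine API for a project (hypothetical project id).
gcloud services enable compute.googleapis.com --project example-project
```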
There are two ways to authenticate the gcloud tools and execute gcloud commands. Both ways need a Google Cloud OAuth Service Account file. This is documented here: Service Account Authentication.
You can now mount the file into your container and execute commands like this:
$ docker run -it --rm \
-v $(pwd)/auth.json:/auth.json \
-e "GCLOUD_ACCOUNT_FILE=/auth.json" \
-e "GCLOUD_ACCOUNT_EMAIL=useraccount@developer.gserviceaccount.com" \
blacklabelops/gcloud \
bash
$ gcloud compute instances list
This opens a bash console inside the container; the second command is then executed inside the authenticated container. This works with both JSON and P12 key files.
You can also Base64-encode the authentication file and pass it in an environment variable. This works perfectly for long-running stand-alone containers.
$ docker run -it --rm \
-e "GCLOUD_ACCOUNT=$(base64 auth.json)" \
-e "GCLOUD_ACCOUNT_EMAIL=useraccount@developer.gserviceaccount.com" \
blacklabelops/gcloud \
bash
$ gcloud compute instances list
This opens a bash console inside the container; the second command is then executed inside the authenticated container. This works with both JSON and P12 key files.
Set your default Google Project by defining the CLOUDSDK_CORE_PROJECT environment variable.
$ docker run -it --rm \
-e "GCLOUD_ACCOUNT=$(base64 auth.json)" \
-e "GCLOUD_ACCOUNT_EMAIL=useraccount@developer.gserviceaccount.com" \
-e "CLOUDSDK_CORE_PROJECT=example-project" \
blacklabelops/gcloud \
bash
$ gcloud compute instances list
This runs all commands against the project example-project.
Set your default Google Project Zone and Region with the environment variables CLOUDSDK_COMPUTE_ZONE and CLOUDSDK_COMPUTE_REGION.
The documentation can be found here: Regions & Zones
Example:
$ docker run -it --rm \
-e "GCLOUD_ACCOUNT=$(base64 auth.json)" \
-e "GCLOUD_ACCOUNT_EMAIL=useraccount@developer.gserviceaccount.com" \
-e "CLOUDSDK_CORE_PROJECT=example-project" \
-e "CLOUDSDK_COMPUTE_ZONE=europe-west1-b" \
-e "CLOUDSDK_COMPUTE_REGION=europe-west1" \
blacklabelops/gcloud \
bash
$ gcloud compute zones list
$ gcloud compute regions describe ${CLOUDSDK_COMPUTE_REGION}
This sets your region and zone to Belgium. More details appear with the describe command.
This container can manage gcloud instances using cron. The crontab can be mounted or simply converted into a base64 string and configured inside the container over environment variables.
A working example crontab can be found here: example-crontab.txt
Please note that for commands triggered by cron, the environment variables have to be configured inside the crontab. See the example file for details.
Also note that if you need to include your own scripts, you just have to extend this container.
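A minimal crontab sketch along those lines (the schedule, project, and zone are assumptions for illustration; the real reference is example-crontab.txt):

```
# Environment variables must be set inside the crontab itself,
# because cron jobs do not inherit the container's shell environment.
CLOUDSDK_CORE_PROJECT=example-project
CLOUDSDK_COMPUTE_ZONE=europe-west1-b
# Every night at 02:30: list instances and append the output to the log.
30 2 * * * gcloud compute instances list >> /logs/gcloud.log 2>&1
```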
Mounting a crontab:
$ docker run -d \
-v $(pwd)/example-crontab.txt:/example-crontab.txt \
-v $(pwd)/logs/:/logs \
-e "GCLOUD_CRONFILE=/example-crontab.txt" \
-e "GCLOUD_ACCOUNT=$(base64 auth.json)" \
-e "GCLOUD_ACCOUNT_EMAIL=useraccount@developer.gserviceaccount.com" \
blacklabelops/gcloud
The GCLOUD_CRONFILE environment variable tells the entry script where to find the crontab.
Using a Base64 encoded crontab:
$ docker run -d \
-v $(pwd)/logs/:/logs \
-e "GCLOUD_ACCOUNT=$(base64 auth.json)" \
-e "GCLOUD_ACCOUNT_EMAIL=useraccount@developer.gserviceaccount.com" \
-e "GCLOUD_CRON=$(base64 example-crontab.txt)" \
blacklabelops/gcloud
The authentication file and crontab are encoded on the fly.
This container does not write a logfile by default. That is considered bad practice, as logs should be accessed with the command docker logs. Still, there are use cases where you want additional log files, e.g. relaying logs to Loggly (Loggly Homepage). A logging routine has been added for this; it is activated by defining a logfile.
Environment Variable: LOG_FILE
Example for a separate volume with a logfile:
$ docker run -d \
--name gcloudcron \
-v $(pwd)/logs/:/gcloudlogs \
-e "LOG_FILE=/gcloudlogs/cron.log" \
-e "GCLOUD_ACCOUNT=$(base64 auth.json)" \
-e "GCLOUD_ACCOUNT_EMAIL=useraccount@developer.gserviceaccount.com" \
-e "GCLOUD_CRON=$(base64 example-crontab.txt)" \
blacklabelops/gcloud
You can watch the log by typing cat ./logs/cron.log.
Now let's hook up the container with the Loggly side-car container and relay the log to Loggly! The full documentation of the Loggly container can be found here: blacklabelops/loggly
$ docker run -d \
--volumes-from gcloudcron \
-e "LOGS_DIRECTORIES=/gcloudlogs" \
-e "LOGGLY_TOKEN=412e12ee-12e12e1-12e12e-12e12e" \
-e "LOGGLY_TAG=gcloudlog" \
--name gcloudloggly \
blacklabelops/loggly
Note: You need a valid Loggly Customer Key in order to log to Loggly.
First run the Jenkins example container:
$ docker run -d -p 8090:8080 --name jenkins_jenkins_1 blacklabelops/jenkins
This will pull the image and start the latest Jenkins on port 8090.
Instant backup of the jenkins volume using a run-once container:
$ docker run \
--volumes-from jenkins_jenkins_1 \
-v $(pwd)/backups/:/backups \
-v $(pwd)/logs/:/logs \
-e "GCLOUD_ACCOUNT=$(base64 auth.json)" \
-e "GCLOUD_ACCOUNT_EMAIL=useraccount@developer.gserviceaccount.com" \
blacklabelops/gcloud \
bash -c "cd /jenkins/ && tar -czvf /backups/JenkinsBackup$(date +%Y-%m-%d-%H-%M-%S).tar.gz * && gsutil rsync /backups gs://jenkinsbackups"
Note: You need a Cloud Storage bucket named jenkinsbackups to make this work.
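If the bucket does not exist yet, one way to create it is with gsutil from inside an authenticated container (bucket name taken from the example above):

```shell
# Create the backup bucket once; gsutil uses the container's authentication.
gsutil mb gs://jenkinsbackups
```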
Now the cron example with a prefixed schedule:
$ docker run \
--volumes-from jenkins_jenkins_1 \
-v $(pwd)/backups/:/backups \
-v $(pwd)/logs/:/logs \
-e "GCLOUD_ACCOUNT=$(base64 auth.json)" \
-e "GCLOUD_ACCOUNT_EMAIL=useraccount@developer.gserviceaccount.com" \
-e "GCLOUD_CRON=$(base64 example-crontab.backup.txt)" \
blacklabelops/gcloud
This backs up Jenkins on the cron schedule.
Leave a message and ask questions on Hipchat: blacklabelops/hipchat