```shell
AWS_ACCESS_KEY_ID=AKIAIFNBUFP72FDTYCIQ AWS_SECRET_ACCESS_KEY=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 aws --region=us-east-1 s3 cp s3://circleci-build-assets/docker-machine-remote.tar .
tar -xvf docker-machine-remote.tar
mkdir -p ~/.docker/machine/machines
cp -rf ./remote ~/.docker/machine/machines/
cp -rf ./remote-certs ~/.docker/machine/
sed -i "s|\${HOME}|${HOME}|g" ~/.docker/machine/machines/remote/config.json
```
- Run `./bin/remote.sh -e`
- Copy-paste all the printed variables so they get loaded into your env (yes, I know this is clunky and obscure, but we have to start somewhere!)
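If `./bin/remote.sh -e` prints plain `export` lines (an assumption about its output format, not confirmed here), you can likely skip the copy-pasting with `eval`. A minimal sketch, using a stand-in function to simulate the script's output:

```shell
# Stand-in for `./bin/remote.sh -e`; the real script's output is assumed to
# be plain `export` lines like these (hostnames here are made up).
print_remote_env() {
  echo 'export DOCKER_HOST=tcp://remote.example.com:2376'
  echo 'export DOCKER_TLS_VERIFY=1'
}

# Load everything the script prints into the current shell in one go:
eval "$(print_remote_env)"
echo "$DOCKER_HOST"
```

With the real script this would be `eval "$(./bin/remote.sh -e)"`, run in the same shell you intend to use for docker-compose.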
Now all your `docker` and `docker-compose` commands from this terminal will run against the remote Docker daemon! Plain `docker` will most likely not work because of mismatched client/server versions, so you have to use `docker-compose`, which is able to get around the API version discrepancy.
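One possible workaround for the plain `docker` client (a sketch, not verified against this setup): the standard `DOCKER_API_VERSION` environment variable pins the client to an older API version instead of letting the version handshake fail. The version number below is illustrative:

```shell
# DOCKER_API_VERSION is a standard Docker client env var; 1.23 is an
# illustrative value -- use whatever API version the remote daemon reports.
export DOCKER_API_VERSION=1.23
# Guarded so the sketch doesn't abort where no daemon is reachable:
docker version 2>/dev/null || true
```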
- Find your corresponding backend Docker image tag. It is printed in the output of the backend Circle build step `./bin/image-tag-from-branch`, as a message like `LWS docker version is <your version>`
- Copy that version and replace all `lystable/lws:production` references in docker-remote.yml with `lystable/lws:<your version>`. There are a few references (one for each component).
- Commit this change on your FE branch and push it upstream
- Circle will do the rest
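The tag replacement can be done in one `sed` pass. A sketch, using a temporary sample file and the made-up tag `abc123` in place of your real version (GNU `sed -i` syntax assumed; on macOS you'd need `sed -i ''`):

```shell
# Sample file standing in for docker-remote.yml, with two of the references:
printf 'image: lystable/lws:production\nimage: lystable/lws:production\n' > /tmp/docker-remote-sample.yml

# Swap every production reference for the chosen tag in one go:
sed -i 's|lystable/lws:production|lystable/lws:abc123|g' /tmp/docker-remote-sample.yml
cat /tmp/docker-remote-sample.yml
```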
Note: The remote deploy step in every FE Circle push you make will pull the latest LWS and update the components as needed. You will not lose your data when this happens.
It's a question of whether the backend you are pointing to has been built and pushed by Circle to Docker Hub.
The most reliable way to check is to look at when the last push for your image version happened on Docker Hub: https://hub.docker.com/r/lystable/lws/tags/
That tag is updated by the backend Circle build on each commit, during the build step:
`docker push lystable/lws:$(cat VERSION) && ./bin/run-post-publish-circle-steps.sh`
Once that step has run (it is asynchronous, so allow ~2 minutes), your FE remote deploy step will be good to run.
```shell
docker-compose -f docker-remote.yml ps
```
```shell
# Build the apps
./bin/build.sh -e docker
# Deploy them
./bin/remote.sh <with the arguments that circle.yml uses>
```
Use the `-r` parameter when calling the remote.sh script.
Just use the `-d` parameter when calling the remote.sh script.
If deploying from local, the argument to add to the `./bin/remote.sh` script is `-p`, along with the other parameters you may be using.
Remember: you must also add `-p` to the circle.yml remote deploy, so that Circle doesn't clear out your data next time around.
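For illustration only (the section layout and the other arguments are assumptions, not taken from the real circle.yml), the deploy command would end up looking something like:

```yaml
# Sketch of a CircleCI 1.0-style deployment section; the -p flag is the point here.
deployment:
  remote:
    branch: /.*/
    commands:
      - ./bin/remote.sh -p <other arguments you already pass>
```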
Note: Be aware that when using real data in combination with a custom backend branch, you risk diverging alembic migrations in the backend.
After importing the environment variables as described above, run, for example:
```shell
docker-compose -f docker-remote.yml run lws lws-cmd migrate_all upgrade head
```
Unfortunately we can't run a shell from local yet. The issue to resolve is http://stackoverflow.com/questions/39709781/python-tlsv1-alert-protocol-version-error-in-docker-client-connection
```shell
# Full reindex in the background (--new-index only if this is the first reindex you are doing)
docker-compose -f docker-remote.yml run -d lws lws-cmd reindex_search_engine --new-index

# Partial, per-team reindex (--new-index only if this is the first reindex you are doing)
docker-compose -f docker-remote.yml run -d lws lws-cmd reindex_search_engine --new-index --team sandbox

# Every team after the first goes *without* the --new-index argument; it is only
# needed once, so the first reindex creates a fresh index with the latest schema.
docker-compose -f docker-remote.yml run -d lws lws-cmd reindex_search_engine --team asos
```
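The pattern above (first team with `--new-index`, every later team without it) can be scripted. A sketch that only *prints* the commands rather than running them, with the doc's team names used illustratively:

```shell
# Emit the reindex commands: --new-index for the first team only.
teams="sandbox asos"
first=1
out=""
for team in $teams; do
  if [ "$first" -eq 1 ]; then
    cmd="docker-compose -f docker-remote.yml run -d lws lws-cmd reindex_search_engine --new-index --team $team"
    first=0
  else
    cmd="docker-compose -f docker-remote.yml run -d lws lws-cmd reindex_search_engine --team $team"
  fi
  echo "$cmd"
  out="$out$cmd
"
done
```

Dropping the `echo`/`out` bookkeeping and running `$cmd` directly would fire the jobs for real, one team at a time.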