The goal of this guide is to run a local version of the console connected to a 4.0 OpenShift cluster created with the installer.
Follow the steps here to set up the installer via libvirtd.
Some tips that you may find helpful:
- Obtain the Pull Secret from Step 4 of the OpenShift Install Developer Preview. This requires an openshift.com account. Beneath Step 4 there should be a link to download the secret and one to copy it. Copying the secret provides a CLI-safe format, which can be used in the installer prompt. I recommend downloading it and storing the file somewhere safe, NOT in the `installer` repo! Set the `OPENSHIFT_INSTALL_PULL_SECRET_PATH` environment variable to the path of the downloaded file. This saves you from having to copy the secret every time.
- When running `iptables` and subsequent commands using the IP address `192.168.124.1`, you should instead use the IP address printed by `ip -4 a show dev virbr0` or `virsh --connect qemu:///system net-dumpxml default`. The howto explains to use these commands rather than the provided default, but it's worth emphasizing, since the default they chose isn't the one my peers or I have encountered.
- Sometimes the installer will hang or fail halfway through downloading or moving the cluster OS image. When this happens, I clear the cache by deleting the `~/.cache/openshift-install/` directory so the failed pull doesn't impact the next run of the installer.
- Each time the installer fails, you need to destroy the cluster resources and any metadata that was created. The following commands delete ALL libvirt resources, so be careful! Running `./bin/openshift-install destroy cluster` works well if the cluster actually came up; otherwise, run `./scripts/maintenance/virsh-cleanup.sh` to remove everything from libvirt. Also, run `git clean -fd` to remove any metadata created in your git directory.
- When running the installer, it will do some preliminary setup via Terraform and print `Apply complete! Resources: 9 added, 0 changed, 0 destroyed.` on successful setup. It will then produce logs to track your cluster as it begins to start up. However, the install command might print `Killed` after some time and exit. From what I've seen, this only means the installer gave up waiting; as long as the `Apply complete!` message was printed, the installer was simply waiting for the bootstrap node to finish. You can view the bootstrap node's progress via the suggestions in the Exploring Your Cluster section. Patience is important: from personal experience, it may take 30 minutes or more for the cluster to be ready after the bootstrap node completes. If the cluster isn't ready by then, it might be worth waiting a little longer.
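For the IP-address tip above, the gateway address can also be pulled out of the `ip` output programmatically. Here's a sketch; the sample line (and its `192.168.126.1` address) is hypothetical, so on a real machine you would pipe the live command output instead:

```shell
# On a real machine, use the live output:
#   ip -4 a show dev virbr0 | awk '/inet /{print $2}' | cut -d/ -f1
# Hypothetical sample of what `ip -4 a show dev virbr0` prints:
sample='    inet 192.168.126.1/24 brd 192.168.126.255 scope global virbr0'

# Take the address/prefix field and strip the CIDR suffix to get the bare IP.
libvirt_ip=$(printf '%s\n' "$sample" | awk '/inet /{print $2}' | cut -d/ -f1)
echo "$libvirt_ip"
```

Use the resulting address wherever the howto's `iptables` commands mention `192.168.124.1`.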
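The failure-cleanup steps above can be collected into one small helper. This is only a sketch: it assumes it is run from the root of the installer repo, and the `DRY_RUN` guard is my addition (it defaults to printing the commands rather than running them, since `git clean -fd` and the libvirt sweep are destructive):

```shell
# Hypothetical cleanup helper for a failed install run. Run from the root of
# the installer repo. DRY_RUN=1 (the default here) only prints the commands;
# set DRY_RUN=0 to actually execute them.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Destroy the cluster if it came up; otherwise sweep all libvirt resources.
run ./bin/openshift-install destroy cluster || run ./scripts/maintenance/virsh-cleanup.sh
# Remove installer metadata from the git working tree.
run git clean -fd
# Clear the image cache so a failed pull doesn't poison the next run.
run rm -rf "$HOME/.cache/openshift-install/"
```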
Once you have a cluster running via libvirt, make sure you've set the `KUBECONFIG` environment variable to point to the credentials for your cluster (default path provided):

```
export KUBECONFIG=$GOPATH/src/github.com/openshift/installer/auth/kubeconfig
```
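Before starting the console, it can save a confusing failure to confirm the variable actually points at a readable file. A minimal sanity check (my addition, not part of the installer or console repos):

```shell
# Hypothetical sanity check: make sure KUBECONFIG is set and points at a
# readable file before starting the console bridge.
if [ -n "$KUBECONFIG" ] && [ -r "$KUBECONFIG" ]; then
    echo "using kubeconfig at $KUBECONFIG"
else
    echo "KUBECONFIG is unset or unreadable: '$KUBECONFIG'" >&2
fi
```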
Then, follow the installer steps for Native Kubernetes:

```
source ./contrib/environment.sh
./bin/bridge
```
The console should now be running at `localhost:9000`.
If you don't have privileges to view anything in the console, look at the username in the top right of the console (mine was set to `system:serviceaccount:kube-system:default`) and run the following command, replacing `$USERNAME` with your assigned username:

```
oc adm policy add-cluster-role-to-user cluster-admin $USERNAME
```
This should give you access to view all resources on the cluster.
The `oc-environment.sh` script doesn't work here because it requires a user token, but you will be logged in as `system:admin`, which doesn't use token authentication.