Lightweight compared to using a VM for each application
Uses cgroups to restrict resource allocation of each container
Docker
Docker originally used LXC containers; it now ships its own runtime (libcontainer / containerd)
Docker containers solve compatibility issues for different components running on different OSes
Docker shares the host's kernel, so we can run the same docker containers on different flavours of Linux-kernel-based OSes like Red Hat, Ubuntu, Debian, etc.
Docker packages containers and ships them to run on any OS we want
Public docker registry - dockerhub
Container vs image
Image is a template which is used to create one or more containers
Containers are running instances of an image that are isolated and have their own environment and set of processes
A container stays in the running state as long as some process or service inside it is running; as soon as that process dies or finishes, the container stops
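A quick illustration of the lifecycle (container names `web` and `hello` are made up; assumes a working docker install):

```shell
# nginx keeps a foreground process running, so the container stays up
docker run -d --name web nginx
docker ps     # "web" shows as running

# A container whose main process finishes stops immediately
docker run --name hello ubuntu echo "hi"
docker ps -a  # "hello" shows status Exited (0)
```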
Create your own image
Create a Dockerfile with all the instructions. Image builds follow a layered architecture - each instruction in the file builds one layer, so if one instruction changes, that layer and all layers below it are rebuilt, while all layers above it are used from cache.
The cache is maintained by docker itself while building an image, so if we re-build the same image again it is very fast - unchanged layers come straight from the cache
Below are the contents of a sample Dockerfile to build a simple Python Flask app. Such a file can be created by doing the steps manually first and then adding the commands to the Dockerfile
FROM ubuntu # Base image
RUN apt-get update
RUN apt-get install -y python3 python3-pip
RUN pip install flask
RUN pip install flask-mysql
COPY . /opt/source-code
ENTRYPOINT FLASK_APP=/opt/source-code/app.py flask run # Command to run when the image is started as a container
Storage drivers are responsible for handling all storage(image and other files in containers) in docker, some drivers are AUFS, ZFS, Device mapper, etc. Docker automatically selects storage driver based on the OS
CMD vs entrypoint
CMD accepts the whole command with its arguments - the image below takes ubuntu as the base image, sleeps for 5 seconds when a container is started, then exits
FROM ubuntu
CMD sleep 5 # JSON form also works -> CMD ["sleep", "5"]
ENTRYPOINT takes only the command; its args are passed after the image name when running the container. Here sleep is the program name, so we can run docker run <image-name> 10 to make it sleep for 10 secs
FROM ubuntu
ENTRYPOINT ["sleep"]
If ENTRYPOINT is used and we don't specify required args, docker run will fail; to provide a default we can add a CMD. Here 5 is treated as the default value
FROM ubuntu
ENTRYPOINT ["sleep"]
CMD ["5"]
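Assuming the image above is built with a hypothetical tag `my-sleeper`, the cases play out like this (needs docker to actually run):

```shell
docker build -t my-sleeper .
docker run my-sleeper                           # runs: sleep 5   (CMD default)
docker run my-sleeper 10                        # runs: sleep 10  (arg replaces CMD)
docker run --entrypoint sleep2.0 my-sleeper 10  # runs: sleep2.0 10
```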
ENTRYPOINT can also be overridden using the option below
docker run --entrypoint sleep2.0 <image-name> 10
Build docker image
docker build . -t <image-name>
docker build . # Takes the *Dockerfile* in the current directory and builds an image with an ID (no tag)
When working with many containers we would have to run multiple docker run commands, which is not very friendly; to solve this we create a docker-compose.yml file and add the images and other details to it
redis:
  image: redis
db:
  image: postgres:9.4
...
Now to run all containers, we can run below command
docker-compose up
We can also build images using docker-compose with the build key, providing the path of the directory containing the Dockerfile. Such a compose file will first build the vote app image with some name and then use it
Compose file format versions 2 and 3 are now used (v1 is deprecated). They provide various other options and simplify the syntax - e.g. links are not required from version 2 onwards, since all containers on the default compose network can reach each other by name
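A sketch of a version-3 compose file using a build key (service names and the `./vote` path are made up for illustration):

```yaml
version: "3"
services:
  vote:
    build: ./vote       # Build an image from the Dockerfile in ./vote, then use it
    ports:
      - "5000:80"
  redis:
    image: redis        # Pull a prebuilt image
  db:
    image: postgres:9.4
```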
All the data/files that a container creates get deleted when the container is deleted, so to save the data we use volumes
File system (volumes)
Docker stores all data at /var/lib/docker
If we want to persist data, we must create a volume so that even if the container dies, the data is retained
# Create a data volume in file system at path /var/lib/docker/volumes/data_volume
docker volume create data_volume
# Will mount mysql db to /var/lib/docker/volumes/data_volume - this is called volume mount
docker run -v data_volume:/var/lib/mysql mysql
# We can also provide some other location on machine - this is called bind mount
docker run -v /root/data/mysql:/var/lib/mysql mysql
# Using the -v option is the older way; the newer way is --mount with explicit options
docker run --mount type=bind,source=/root/data/mysql,target=/var/lib/mysql mysql
Volumes are handled by volume driver plugins; the default plugin is local (creates volumes under /var/lib/docker/volumes). Other plugins include Azure File Storage, VMware vSphere Storage, etc. While running a docker container, we can specify the volume driver using the --volume-driver option
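For example, with a third-party volume driver plugin (rexray/ebs here, which provisions an AWS EBS volume; the volume name `ebs-vol` is illustrative):

```shell
docker run -it \
  --name mysql \
  --volume-driver rexray/ebs \
  --mount src=ebs-vol,target=/var/lib/mysql \
  mysql
```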
Docker networking
Docker has internal DNS server which runs on 127.0.0.11. This DNS server can resolve IPs by container names, it keeps mapping from container name to IP of that container
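Note that name resolution via this embedded DNS works on user-defined networks; on the default bridge, containers can only reach each other by IP. A sketch (network and container names are made up; needs docker to run):

```shell
docker network create my-net                 # User-defined bridge network
docker run -d --name web --network my-net nginx
docker run --rm --network my-net alpine ping -c 1 web  # "web" resolves via 127.0.0.11
```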
Docker provides 3 built-in network drivers
Bridge: This is the default network to which all containers are attached
None: If we assign the none network to a container, it becomes isolated and is not reachable by other containers
Host: The container shares the host's network stack directly - no isolation between host and container ports
docker network ls # List all networks created
# Create a new network
docker network create --driver bridge --subnet 182.18.0.1/24 --gateway 182.18.0.1 wp-mysql-network
# Create a container and assign network driver *none* to it
docker run --name <container-name> --network none <image>
Network namespaces
Containers are separated from the underlying host using namespaces, for isolation from other processes and containers
Host networking config like the routing table and ARP table is also isolated from containers; a container can create its own virtual interfaces and routing tables. To create a new n/w namespace in linux, we can use
ip netns add red # Add n/w namespace
ip netns # List n/w namespaces
# List interfaces in a namespace
ip netns exec red ip link
ip -n red link
ip netns exec red arp # ARP table in n/w namespace
ip netns exec red route # Route table in n/w namespace
Similar to hosts, we can setup networking b/w 2 namespaces also
ip link add veth-red type veth peer name veth-blue
ip link set veth-red netns red
ip link set veth-blue netns blue
ip -n red addr add 192.168.15.1/24 dev veth-red
ip -n blue addr add 192.168.15.2/24 dev veth-blue
ip -n red link set veth-red up
ip -n blue link set veth-blue up
ip netns exec red ping 192.168.15.2 # Ping from red to blue ns
We can also create internal network using tools like linux bridge, open vswitch, ...
# Using linux bridge
ip link add v-net-0 type bridge
ip link
ip link set dev v-net-0 up
# Now connect 3 ns using bridge
ip -n red link del veth-red # Delete current connection
ip link add veth-red type veth peer name veth-red-br
ip link add veth-blue type veth peer name veth-blue-br
ip link set veth-red netns red
ip link set veth-red-br master v-net-0
ip link set veth-blue netns blue
ip link set veth-blue-br master v-net-0
ip -n red addr add 192.168.15.1/24 dev veth-red
ip -n blue addr add 192.168.15.2/24 dev veth-blue
ip -n red link set veth-red up
ip -n blue link set veth-blue up
# From the host we won't be able to ping the namespace IPs; to enable it
# we can assign an IP to the bridge which connects the namespaces
ip addr add 192.168.15.5/24 dev v-net-0
# To allow a namespace to reach other hosts, we need to add 2 things:
# 1. Route entry in that namespace's routing table
ip netns exec blue ip route add 192.168.1.0/24 via 192.168.15.5
# 2. NAT gateway
iptables -t nat -A POSTROUTING -s 192.168.15.0/24 -j MASQUERADE
# To allow external hosts to reach the namespace, we can add port forwarding
iptables -t nat -A PREROUTING -p tcp --dport 80 --to-destination 192.168.15.2:80 -j DNAT
Docker networking cont...
We can run container with various networking options
none: Container becomes isolated; it won't be able to talk to other containers or to the host it runs on
host: Container shares the host's network, so if the container binds to a port, that port is reachable directly from the host, and no other container can bind to the same port
bridge: Docker creates a bridge and this bridge is used for interacting with containers. This is the default option
docker run --network none nginx
docker run --network host nginx
docker run nginx # Bridge (default)
# List docker networks - we see a n/w named `bridge`, but on the host it is created with the name `docker0`
# (check with `ip link`; docker adds it like `ip link add docker0 type bridge`)
docker network ls
Port forwarding: if a container hosts an app on port 80, that port is accessible from inside the container but not from outside it; with port forwarding we can access the app using the host IP and the mapped port
# Maps port 80 of the container to 8080 of the host
# Now use host-ip (NOT container IP) with port 8080 to access the container application
docker run -p 8080:80 nginx
# Docker does this using IP NAT rules - whatever comes at host-ip:8080 forward it to container-ip:80
iptables \
-t nat \
-A DOCKER \
-j DNAT \
--dport 8080 \
--to-destination 172.17.0.3:80 # ContainerIP:Port
# This can be viewed by listing the iptables NAT rules
iptables -nvL -t nat
Container networking interface (CNI)
Container runtimes (like docker and rkt) and orchestration tools (like k8s, mesos) all have to implement networking in more or less the same way, so instead of each one implementing the same thing, this part is taken out and provided as plugins
For independence between container runtimes/orchestration tools and plugins, both follow a set of standards called CNI - any compliant plugin works with any compliant runtime
When a container runtime/orchestration tool creates a new namespace, it calls the plugin with the relevant params to create the n/w configuration
Example of plugins - weaveworks, flannel, calico, vmware nsx
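A CNI plugin is just an executable that the runtime invokes with environment variables (CNI_COMMAND=ADD/DEL, CNI_CONTAINERID, CNI_NETNS, ...) and a network config on stdin. A minimal config for the reference bridge plugin might look like this (names and subnet are illustrative):

```json
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "v-net-0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "192.168.15.0/24"
  }
}
```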
Container orchestration tools
Docker swarm: Easy to setup but lacks few advanced features
Mesos: Supports lot of things but hard to setup
Kubernetes(k8s): Just right
Commands
Run
# Run docker container from image; this first checks if the image is present locally, otherwise pulls it from the docker registry
# Runs container in *attached* mode - as a foreground process
# No image tag is specified, so the latest version will be used
docker run redis
docker run redis:4.0 # Run version 4.0 of redis
docker run --name <container-name> <image-name> # Run container with given name
docker run -d redis # Run in detached mode - background mode
docker attach <container-name|id> # Attach to container running in detached mode
# Run container in interactive mode - to print some message and accept some input from stdin
# -i -> Interactive mode
# -t -> Attached to the terminal
docker run -it <image-name>
# Logs us into a docker container - centos in the example below, with bash as the command to run.
# We can run any command in that *centos* container; once we *exit*, the container stops
docker run -it centos bash
# Specify command to run inside the *ubuntu* container; this runs an ubuntu container, sleeps for 5 seconds,
# then exits as we haven't attached any terminal to it (no -it)
docker run ubuntu sleep 5
# Port mapping - when a container runs, it gets an IP. If some application runs on port 8080, we can just
# hit CONTAINER_IP:8080 to access the application, but only from the host. From other machines this container IP
# won't be accessible, so we map the container port to a host port and access the app at HOST_IP:HOST_PORT
docker run -p HOST_PORT:CONTAINER_PORT <image-name>
# Volume mapping - when a container is deleted, all its data is deleted too. If we want data to persist even
# after the container is deleted, we can map/mount a container path to a host path.
# Below command mounts container path "/var/lib/mysql" to host path "/opt/datadir"; all data will persist in "/opt/datadir"
# even if the container is deleted
docker run -v /opt/datadir:/var/lib/mysql mysql
docker run -v /opt/datadir:/var/lib/mysql -u root mysql # Run with another user *root*
# Set environment variable in container
docker run -e VAR_NAME=VALUE <image>
# Link myapp with another container so that myapp is able to access redis (or any other container)
docker run --name <myapp> --link <redis-container-name> <myapp-image:version>
# Access docker container running on some other host
docker -H=remote-docker-engine-ip:2375 run nginx
docker run --cpus=.5 ubuntu # Don't take more than 50% of CPU at any given time
docker run --memory=100m ubuntu # Don't take more than 100mb of memory at any given time
# Run a command on a running container - like see contents of the /etc/hosts file
docker exec <container-name|id> cat /etc/hosts
Get container details
docker ps # List all running containers
docker ps -a # List all containers
docker inspect <container-name|id> # Get more details of a container like container IP, env vars, ...
docker logs <container-name|id> # Get container logs
docker stop <container-names|ids> # Stop a running container
docker rm <container-names|ids> # Remove a stopped container permanently (use -f for a running one)
docker images # List of images present on our host with details
docker rmi <image-name|image-id> # Remove image that we no longer need; remove all containers using this image first
docker rmi -f <image-name|image-id> # Forceful
docker pull <image-name> # Only pull the image from the docker registry, don't run it now
docker history <image-name> # Details of all instructions/layers in the image like size, ...