EDIT: I have since taken these instructions and put them on the Kubernetes GitHub repo:
Kubernetes GitHub
End EDIT ******************
These are my notes on how to get started evaluating a Fedora / Docker / kubernetes environment. I'm going to start with two hosts. Both will run Fedora rawhide. The goal is to stand up both hosts with kubernetes / Docker and use kubernetes to orchestrate the deployment of a couple of simple applications. Derek Carr has already put together a great tutorial on getting a kubernetes environment up using vagrant. However, that process is quite automated and I need to set it all up from scratch.
Install Fedora rawhide using the instructions from here. I just downloaded the boot.iso file and used KVM to deploy the Fedora rawhide hosts. My host names are fed{1,2}.
The kubernetes package provides four services: apiserver, controller-manager, kubelet, and proxy. These services are managed by systemd unit files. We will break the services up between the hosts. The first host, fed1, will be the kubernetes master and will run the apiserver and controller-manager. The remaining host, fed2, will be the minion and run the kubelet, proxy, and docker.
This is all changing rapidly, so if you walk through this and see any errors or something that needs to be updated, please let me know via comments below.
So let's get started.
Hosts:
fed1 = 10.x.x.241
fed2 = 10.x.x.240
Versions (Check the kubernetes / etcd version after installing the packages):
# cat /etc/redhat-release
Fedora release 22 (Rawhide)
# rpm -q etcd kubernetes
etcd-0.4.5-11.fc22.x86_64
kubernetes-0-0.0.8.gitc78206d.fc22.x86_64
1. Enable the copr repos on all hosts. Colin Walters has already built the appropriate etcd / kubernetes packages for rawhide. You can see the copr repo here.
# yum -y install dnf dnf-plugins-core
# dnf copr enable walters/atomic-next
# yum repolist walters-atomic-next/x86_64
Loaded plugins: langpacks
repo id repo name status
walters-atomic-next/x86_64 Copr repo for atomic-next owned by walters 37
repolist: 37
2. Install kubernetes on all hosts - fed{1,2}. This will also pull in etcd.
# yum -y install kubernetes
3. Pick a host and explore the packages.
# rpm -qi kubernetes
# rpm -qc kubernetes
# rpm -ql kubernetes
# rpm -ql etcd
# rpm -qi etcd
4. Configure fed1.
Export the etcd and kube master variables so the services know where to go.
# export KUBE_ETCD_SERVERS=10.x.x.241
# export KUBE_MASTER=10.x.x.241
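One caveat worth knowing: systemd does not inherit variables exported from an interactive shell, so the $KUBE_* references in the unit files below will not expand unless the units define them. One way to handle this (my own addition, not from the package; values taken from the hosts above) is to add Environment= lines to the [Service] section of the units that need them:

```ini
[Service]
# Assumed values, matching the master IP used throughout this walkthrough.
Environment=KUBE_ETCD_SERVERS=10.x.x.241
Environment=KUBE_MASTER=10.x.x.241
```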
These are my service files for the apiserver, etcd, and controller-manager. They have been changed from what was distributed with the package. Copy them to /etc/systemd/system/, using the -Z option to maintain the proper SELinux context on them. We will change the files in /etc/systemd/system, leaving the ones in /usr/lib/systemd/system the same.
# cp -Z /usr/lib/systemd/system/kubernetes-apiserver.service /etc/systemd/system/.
# cp -Z /usr/lib/systemd/system/kubernetes-controller-manager.service /etc/systemd/system/.
# cp -Z /usr/lib/systemd/system/etcd.service /etc/systemd/system/.
# cat /etc/systemd/system/kubernetes-apiserver.service
[Unit]
Description=Kubernetes API Server
[Service]
ExecStart=/usr/bin/kubernetes-apiserver --logtostderr=true -etcd_servers=http://localhost:4001 -address=127.0.0.1 -port=8080 -machines=10.x.x.240
[Install]
WantedBy=multi-user.target
# cat /etc/systemd/system/kubernetes-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
[Service]
ExecStart=/usr/bin/kubernetes-controller-manager --logtostderr=true --etcd_servers=$KUBE_ETCD_SERVERS --master=$KUBE_MASTER
[Install]
WantedBy=multi-user.target
# cat /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
[Service]
Type=simple
# etcd logs to the journal directly, suppress double logging
StandardOutput=null
WorkingDirectory=/var/lib/etcd
ExecStart=/usr/bin/etcd
[Install]
WantedBy=multi-user.target
Start the appropriate services on fed1.
# systemctl daemon-reload
# systemctl restart etcd
# systemctl status etcd
# systemctl enable etcd
# systemctl restart kubernetes-apiserver.service
# systemctl status kubernetes-apiserver.service
# systemctl enable kubernetes-apiserver.service
# systemctl restart kubernetes-controller-manager
# systemctl status kubernetes-controller-manager
# systemctl enable kubernetes-controller-manager
Test etcd on the master (fed1) and make sure it's working.
# curl -L http://127.0.0.1:4001/v2/keys/mykey -XPUT -d value="this is awesome"
# curl -L http://127.0.0.1:4001/v2/keys/mykey
# curl -L http://127.0.0.1:4001/version
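If you want to script that check rather than eyeball the output, the stored value can be pulled out of the GET response with a one-liner. The response below is illustrative of etcd's v2 keys API (the exact fields your version returns may differ slightly):

```shell
# A sample etcd v2 GET response (illustrative), and extracting the stored value.
RESP='{"action":"get","node":{"key":"/mykey","value":"this is awesome","modifiedIndex":3,"createdIndex":3}}'
# Parse the JSON and print just node.value.
VALUE=$(echo "$RESP" | python3 -c 'import sys, json; print(json.load(sys.stdin)["node"]["value"])')
echo "$VALUE"   # this is awesome
```

In practice you would replace the canned RESP with `RESP=$(curl -s -L http://127.0.0.1:4001/v2/keys/mykey)`.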
I got those examples from the CoreOS GitHub page. Open up the ports for etcd and the kubernetes API server on the master (fed1).
# firewall-cmd --permanent --zone=public --add-port=4001/tcp
# firewall-cmd --zone=public --add-port=4001/tcp
# firewall-cmd --permanent --zone=public --add-port=8080/tcp
# firewall-cmd --zone=public --add-port=8080/tcp
Take a look at what ports the services are running on.
# netstat -tulnp
5. Configure fed2.
These are my service files for the kubelet and proxy. They have been changed from what was distributed with the package.
Copy the unit files to /etc/systemd/system/. and make edits there. Don't modify the unit files in /usr/lib/systemd/system/.
# cp -Z /usr/lib/systemd/system/kubernetes-kubelet.service /etc/systemd/system/.
# cp -Z /usr/lib/systemd/system/kubernetes-proxy.service /etc/systemd/system/.
# cat /etc/systemd/system/kubernetes-kubelet.service
[Unit]
Description=Kubernetes Kubelet
[Service]
ExecStart=/usr/bin/kubernetes-kubelet --logtostderr=true -etcd_servers=http://10.x.x.241:4001 -address=10.x.x.240 -hostname_override=10.x.x.240
[Install]
WantedBy=multi-user.target
# cat /etc/systemd/system/kubernetes-proxy.service
[Unit]
Description=Kubernetes Proxy
[Service]
ExecStart=/usr/bin/kubernetes-proxy --logtostderr=true -etcd_servers=http://10.x.x.241:4001
[Install]
WantedBy=multi-user.target
Start the appropriate services on fed2.
# systemctl daemon-reload
# systemctl enable kubernetes-proxy.service
# systemctl restart kubernetes-proxy.service
# systemctl status kubernetes-proxy.service
# systemctl enable kubernetes-kubelet.service
# systemctl restart kubernetes-kubelet.service
# systemctl status kubernetes-kubelet.service
# systemctl restart docker
# systemctl status docker
# systemctl enable docker
Take a look at what ports the services are running on.
# netstat -tulnp
Open up the port for the kubernetes kubelet server on the minion (fed2).
# firewall-cmd --permanent --zone=public --add-port=10250/tcp
# firewall-cmd --zone=public --add-port=10250/tcp
Now the two servers are set up to kick off a sample application. In this case, we'll deploy a web server to fed2. Start off by making a file in root's home directory on fed1 called apache.json with the following contents:
# cat apache.json
{
  "id": "apache",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "apache-1",
      "containers": [{
        "name": "master",
        "image": "fedora/apache",
        "ports": [{
          "containerPort": 80,
          "hostPort": 80
        }]
      }]
    }
  },
  "labels": {
    "name": "apache"
  }
}
This JSON file describes the attributes of the application environment; for example, it gives the pod an "id", "name", "ports", and "image". Since the fedora/apache image doesn't exist in our environment yet, it will be pulled down automatically as part of the deployment process. I have seen errors, though, where kubernetes was looking for a cached image; in that case, a manual "docker pull fedora/apache" seemed to resolve it. For more information about which options can go in the schema, check out the docs on the kubernetes GitHub page.
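A malformed pod file tends to produce fairly opaque errors from the API server, so it can be worth validating the JSON locally first. This is my own habit, not part of the kubernetes tooling; the pod definition is reproduced here so the snippet is self-contained:

```shell
# apache.json as defined above, written out so this check can run on its own.
cat > apache.json <<'EOF'
{
  "id": "apache",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "apache-1",
      "containers": [{
        "name": "master",
        "image": "fedora/apache",
        "ports": [{"containerPort": 80, "hostPort": 80}]
      }]
    }
  },
  "labels": {"name": "apache"}
}
EOF

# Fail fast on malformed JSON before handing the file to kubecfg.
# json.tool exits non-zero (and points at the error) if the file doesn't parse.
python3 -m json.tool < apache.json > /dev/null && echo "apache.json parses cleanly"
```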
Now, deploy the fedora/apache image via the apache.json file.
# /usr/bin/kubernetes-kubecfg -c apache.json create pods
You can monitor progress of the operations with these commands:
On the master (fed1) -
# journalctl -f -xn -u kubernetes-apiserver -u etcd -u kubernetes-kubelet -u docker
On the minion (fed2) -
# journalctl -f -xn -u kubernetes-kubelet.service -u kubernetes-proxy -u docker
This is what a successful expected result should look like:
# /usr/bin/kubernetes-kubecfg -c apache.json create pods
I0730 15:13:48.535653 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:14:08.538052 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:14:28.539936 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:14:48.542192 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:15:08.543649 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:15:28.545475 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:15:48.547008 27880 request.go:220] Waiting for completion of /operations/8
I0730 15:16:08.548512 27880 request.go:220] Waiting for completion of /operations/8
Name Image(s) Host Labels
---------- ---------- ---------- ----------
apache fedora/apache / name=apache
After the pod is deployed, you can also list the pod.
# /usr/bin/kubernetes-kubecfg list pods
Name Image(s) Host Labels
---------- ---------- ---------- ----------
apache fedora/apache 10.x.x.240/ name=apache
redis-master-2 dockerfile/redis 10.x.x.240/ name=redis-master
You can get even more information about the pod like this.
# /usr/bin/kubernetes-kubecfg -json get pods/apache
Finally, on the minion (fed2), check that the service is available, running, and functioning.
# docker images | grep fedora
fedora/apache latest 6927a389deb6 10 weeks ago 450.6 MB
# docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d5871fc9af31 fedora/apache:latest /run-apache.sh 9 minutes ago Up 9 minutes k8s--master--apache--8d060183
# curl http://localhost
Apache
To delete the container.
/usr/bin/kubernetes-kubecfg -h http://127.0.0.1:8080 delete /pods/apache
That's it. Of course, this just scratches the surface. I recommend you head off to the kubernetes GitHub page and follow the guestbook example. It's a bit more complicated but should expose you to more functionality.
You can play around with other Fedora images by building from Fedora Dockerfiles. Check here at Github.