
Service Mesh: Installing Istio 1.0 and a Very Simple Walkthrough


Introduction to Istio

Istio provides a complete solution for microservice applications, meeting their diverse requirements through behavioral monitoring and fine-grained control over the entire service network.

Istio offers a very simple way to add networking capabilities such as monitoring, load balancing, and service-to-service authentication, without requiring any changes to service code.

Environment Setup

This installation is deployed on three bare-metal machines running Ubuntu 16.04 LTS.

Kubernetes Role   RAM   CPUs      IP Address
Master            16G   8 Cores   10.20.0.154
Node1             16G   8 Cores   10.20.0.164
Node2             16G   8 Cores   10.20.0.174

Here we use kubeadm to install Kubernetes (version 1.11); you can refer to the deployment guide on the official website.

Install the latest Kubernetes version, Docker, and other package dependencies:

$apt-get update && apt-get install -y apt-transport-https curl
$curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo -E apt-key add -
$cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF

$sudo apt-get update
$sudo apt-get install -y docker.io kubelet kubeadm kubectl

Kubernetes v1.8+ requires system swap to be disabled. If you would rather keep swap enabled, you need to adjust the kubelet configuration instead; here we turn swap off with:

$swapoff -a && sysctl -w vm.swappiness=0
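
One caveat worth noting: swapoff only lasts until the next reboot. A common companion step (my addition, not part of the original post) is to comment out the swap entry in /etc/fstab so the setting persists. A sketch, run here against a sample copy rather than the real /etc/fstab:

```shell
# Sketch: comment out swap entries so swap stays off after reboot.
# We operate on a sample copy; on a real node the target is /etc/fstab.
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 /         ext4 defaults 0 1
/swapfile      none      swap sw       0 0
EOF
# Prefix every line containing a swap entry with '#'
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab.sample
cat /tmp/fstab.sample
```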

Enable and start the Docker daemon:

$systemctl enable docker && systemctl start docker

Pass bridged IPv4 traffic to iptables:

$cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$sysctl -p /etc/sysctl.d/k8s.conf

Initialize the Kubernetes cluster with kubeadm on the master node:

$sudo kubeadm init --pod-network-cidr=192.168.0.0/16

This produces output like the following; we use it to join the other nodes to the cluster.

You can now join any number of machines by running the following on each node
as root:

kubeadm join 172.24.0.3:6443 --token i67sjb.0nvjxbldwuh342of --discovery-token-ca-cert-hash sha256:aa23e1e7a4d55d06fbdf34fa2a1c703dd7e7cfff735b0b0fe800b4335aff68b5

On the other nodes, join the cluster with:

$kubeadm join 172.24.0.3:6443 --token i67sjb.0nvjxbldwuh342of --discovery-token-ca-cert-hash sha256:aa23e1e7a4d55d06fbdf34fa2a1c703dd7e7cfff735b0b0fe800b4335aff68b5

Set up the kube config on the master node:

$mkdir -p $HOME/.kube
$sudo -H cp /etc/kubernetes/admin.conf $HOME/.kube/config
$sudo -H chown $(id -u):$(id -g) $HOME/.kube/config

Install a Kubernetes CNI plugin on the master; here we use Calico:

$kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
$kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

Once all the pods are up, running kubectl on the master shows every node in the Ready state:

$kubectl get pod -n kube-system
NAME                             READY     STATUS    RESTARTS   AGE
calico-node-cgtxh                2/2       Running   0          1m
calico-node-qjrbm                2/2       Running   0          1m
calico-node-v59b2                2/2       Running   0          2m
coredns-78fcdf6894-dz9fs         1/1       Running   0          4m
coredns-78fcdf6894-mn6k8         1/1       Running   0          4m
etcd-master                      1/1       Running   0          3m
kube-apiserver-master            1/1       Running   0          3m
kube-controller-manager-master   1/1       Running   0          3m
kube-proxy-5xj2l                 1/1       Running   0          4m
kube-proxy-bh7wb                 1/1       Running   0          1m
kube-proxy-jqpqg                 1/1       Running   0          1m
kube-scheduler-master            1/1       Running   0          3m
$kubectl get node
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    4m        v1.11.1
node-1    Ready     <none>    2m        v1.11.1
node-2    Ready     <none>    2m        v1.11.1

Download and install Helm on the master node:

$wget https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
$tar zxvf helm-v2.9.1-linux-amd64.tar.gz
$mv linux-amd64/helm /usr/bin

Create a Tiller service account for Helm, bind it to the cluster-admin role, and finally initialize Helm:

$kubectl create serviceaccount tiller --namespace kube-system
$cat <<EOF | kubectl create -f -
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
EOF
$helm init  --service-account tiller
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Once this completes, verify with kubectl:
$kubectl get pod,svc  -l app=helm -n kube-system
NAME                                READY     STATUS    RESTARTS   AGE
pod/tiller-deploy-759cb9df9-b7n7j   1/1       Running   0          4m

NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
service/tiller-deploy   ClusterIP   10.107.71.110   <none>        44134/TCP   4m

Installing Istio

Download Istio via the official script and install the istioctl binary:

$curl -L https://git.io/getLatestIstio | sh -
$cd istio-1.0.0/
$cp bin/istioctl /usr/bin/

With Helm versions earlier than 2.10.0, Istio's CRDs still have to be installed manually:

$kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
$kubectl apply -f install/kubernetes/helm/istio/charts/certmanager/templates/crds.yaml

Check that Istio was installed successfully with kubectl:

$kubectl get pod -n istio-system
NAME                                        READY     STATUS    RESTARTS   AGE
istio-citadel-7d8f9748c5-zd4vb              1/1       Running   0          4m
istio-egressgateway-676c8546c5-fq55b        1/1       Running   0          4m
istio-galley-5669f7c9b-q98ld                1/1       Running   0          4m
istio-ingressgateway-5475685bbb-5jfwv       1/1       Running   0          4m
istio-pilot-5795d6d695-2vfq9                2/2       Running   0          4m
istio-policy-7f945bf487-brtxn               2/2       Running   0          4m
istio-sidecar-injector-d96cd9459-ws647      1/1       Running   0          4m
istio-statsd-prom-bridge-549d687fd9-mb99f   1/1       Running   0          4m
istio-telemetry-6c587bdbc4-tzblq            2/2       Running   0          4m
prometheus-6ffc56584f-xcpsj                 1/1       Running   0          4m


$kubectl get svc -n istio-system
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                        AGE
istio-citadel              ClusterIP   10.106.166.113   <none>        8060/TCP,9093/TCP                                                              5m
istio-egressgateway        NodePort    10.100.183.7     <none>        80:31607/TCP,443:31053/TCP                                                     5m
istio-galley               ClusterIP   10.109.104.146   <none>        443/TCP,9093/TCP                                                               5m
istio-ingressgateway       NodePort    10.101.66.117    <none>        80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:32388/TCP,8060:32100/TCP,15030:30847/TCP,15031:32749/TCP   5m
istio-pilot                ClusterIP   10.102.202.205   <none>        15010/TCP,15011/TCP,8080/TCP,9093/TCP                                          5m
istio-policy               ClusterIP   10.97.181.32     <none>        9091/TCP,15004/TCP,9093/TCP                                                    5m
istio-sidecar-injector     ClusterIP   10.96.165.139    <none>        443/TCP                                                                        5m
istio-statsd-prom-bridge   ClusterIP   10.101.82.72     <none>        9102/TCP,9125/UDP                                                              5m
istio-telemetry            ClusterIP   10.108.94.224    <none>        9091/TCP,15004/TCP,9093/TCP,42422/TCP                                          5m
prometheus                 ClusterIP   10.104.3.226     <none>        9090/TCP                                                                       5m

Example: Bookinfo Application

Here we run through the official Istio sample: the Bookinfo Application.

$kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/platform/kube/bookinfo.yaml)
service/details created
deployment.extensions/details-v1 created
service/ratings created
deployment.extensions/ratings-v1 created
service/reviews created
deployment.extensions/reviews-v1 created
deployment.extensions/reviews-v2 created
deployment.extensions/reviews-v3 created
service/productpage created
deployment.extensions/productpage-v1 created

Confirm the deployed pods and services with kubectl:

$kubectl get pod
NAME                              READY     STATUS    RESTARTS   AGE
details-v1-fc9649d9c-tqbcn        2/2       Running   0          1m
productpage-v1-58845c779c-2lqxg   2/2       Running   0          27m
ratings-v1-6cc485c997-zqvqt       2/2       Running   0          1m
reviews-v1-76987687b7-hfrn2       2/2       Running   0          1m
reviews-v2-86749dcd5-ffmvs        2/2       Running   0          1m
reviews-v3-7f4746b959-zr4ml       2/2       Running   0          1m
$kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.101.208.129   <none>        9080/TCP   1m
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    27m
productpage   ClusterIP   10.101.11.13     <none>        9080/TCP   1m
ratings       ClusterIP   10.105.132.197   <none>        9080/TCP   1m
reviews       ClusterIP   10.103.199.76    <none>        9080/TCP   1m

Create a Gateway so the application can be reached from outside the cluster:

$kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
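
For reference, bookinfo-gateway.yaml defines roughly the following two resources (an abbreviated sketch of the Istio 1.0 sample; consult the file itself for the exact contents): a Gateway that binds Istio's default ingress gateway on port 80, and a VirtualService that routes paths such as /productpage to the productpage service:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    route:
    - destination:
        host: productpage
        port:
          number: 9080
```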

Next, access the service from a browser at http://<node or master ip>:31380/productpage; the page shown below should appear.

Bookinfo Application web page

Keep refreshing the page and you will see the stars change, cycling from no stars ==> black stars ==> red stars, corresponding to the three versions of the reviews pod. The default load-balancing policy is round robin.

Apply the destination rules; at this point traffic is still round-robin:

$kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
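
The destination-rule-all.yaml manifest defines named subsets keyed on each version label; its reviews entry looks roughly like this (abbreviated sketch):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
```

These subsets are what the VirtualService rules in the following sections refer to when steering traffic to a specific version.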

Intelligent Routing

Istio provides intelligent routing; here we demonstrate how to use Istio to manage traffic across services.

Routing by version

$kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
virtualservice.networking.istio.io/productpage created
virtualservice.networking.istio.io/reviews created
virtualservice.networking.istio.io/ratings created
virtualservice.networking.istio.io/details created

Back in the browser, no matter how many times we refresh the Bookinfo page, no stars ever appear, because every request is now routed to reviews v1.
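
The reviews entry in virtual-service-all-v1.yaml pins all traffic to the v1 subset, roughly (abbreviated sketch):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
```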

Routing by user

$kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
virtualservice.networking.istio.io/reviews created

Back in the browser, click the Sign in button at the top right and log in with jason as both the username and password. Once logged in, every refresh shows black stars.

After logging out, refreshing the page shows no stars at all.

This is because once we are logged in as jason, all requests are routed to reviews v2.
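
virtual-service-reviews-test-v2.yaml achieves this with a header match on the end-user identity, roughly (abbreviated sketch):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
```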

Fault injection

Sometimes our code contains bugs; fault injection lets us flush out these latent bugs.

Bookinfo includes an HTTP delay example: to test the Bookinfo microservices, we inject a seven-second delay between reviews:v2 and ratings for user jason. The test uncovers a bug that was deliberately planted in Bookinfo.

If you have already deleted the routing rules from the previous steps, re-apply them with:

$kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
$kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml

Inject an HTTP delay with the following command, which adds a seven-second delay between reviews:v2 and ratings for user jason:

$kubectl apply -f samples/bookinfo/networking/virtual-service-ratings-test-delay.yaml

Now accessing the page throws an error: by stretching the time the service-to-service call takes, the injected fault exposes a latency bug.
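
The delay is configured in virtual-service-ratings-test-delay.yaml as an HTTP fault on the ratings route, roughly (abbreviated sketch):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    fault:
      delay:
        percent: 100        # delay every matching request
        fixedDelay: 7s      # by seven seconds
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1
```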

Bookinfo Application web page after injecting the delay

Conclusion

After playing with Istio, I find its features extremely powerful, but the architecture is complex: when something goes wrong, it is hard for operators and developers to pinpoint where the problem originates. Istio is only at version 1.0, though; once it matures it should be a beast. I will keep following service mesh topics.

The companies backing Istio are formidable; watching where this project goes next is a great direction for learning!


Meng Ze Li
WRITTEN BY
Meng Ze Li
Kubernetes / DevOps / Backend

What's on this Page