Sunday, July 14, 2019

UBUNTU18+KVM+VAGRANT+KUBERNETES 14)ELASTICSEARCH+KIBANA+FLUENTD


This post shows how to deploy Elasticsearch + Kibana + Fluentd on a Kubernetes cluster.


Reference)
 https://mherman.org/blog/logging-in-kubernetes-with-elasticsearch-Kibana-fluentd/
 https://www.elastic.co/

Importantly, the Elasticsearch version should match the Kibana version.
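Once both are running (steps 1 and 2 below), a quick way to confirm the versions match is to query each status endpoint. This is a minimal sketch assuming the NodePorts used later in this post (32335 for Elasticsearch, 32336 for Kibana) and the node IP 10.1.0.3:

# Elasticsearch reports its version on the root endpoint.
curl -s http://10.1.0.3:32335 | grep '"number"'

# Kibana reports its version in its status API.
curl -s http://10.1.0.3:32336/api/status | grep -o '"number":"[^"]*"'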


1. Elasticsearch 7.2 deployment.

 1) Storage provisioning


   oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$cat storageClaimForEsearch.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storage-claim-for-esearch
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi

  oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$kb create -f storageClaimForEsearch.yaml
persistentvolumeclaim/storage-claim-for-esearch created
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$kb get pvc | grep esearch
storage-claim-for-esearch            Bound    pvc-ee2ed84f-7799-4d6a-bf47-c80ef87ed17a   3Gi        RWX            managed-nfs-storage   11s
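The claim is bound by the NFS dynamic provisioner set up earlier in this series. If you want to see the backing PersistentVolume, you can list PVs and filter on the claim name (a hedged example; kb is the kubectl alias used throughout this post, and the PV name will differ on your cluster):

kb get pv | grep storage-claim-for-esearch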


 2) Creating the service.

oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$cat esearch-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  type: NodePort
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200
    nodePort: 32335
    protocol: TCP
    name: http

 oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$kb create -f esearch-svc.yaml
service/elasticsearch created
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$kb describe svc elasticsearch
Name:                     elasticsearch
Namespace:                default
Labels:                   app=elasticsearch
Annotations:              <none>
Selector:                 app=elasticsearch
Type:                     NodePort
IP:                       10.96.27.197
Port:                     http  9200/TCP
TargetPort:               9200/TCP
NodePort:                 http  32335/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
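Note that Endpoints shows <none> here because the service was described before the deployment in the next step existed. Once the Elasticsearch pod is running, the same check should show the pod IP, for example:

kb get endpoints elasticsearch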
 

 3) Creating the deployment.

oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$cat esearch-dp.yaml
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
        name: elasticsearch
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
          protocol: TCP
          name: http
        resources:
          limits:
            cpu: 500m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 2Gi
        volumeMounts:
        - name: esearch-persistent-storage
          mountPath: /usr/share/data
      volumes:
      - name: esearch-persistent-storage
        persistentVolumeClaim:
          #claimName: wp-pv-claim
          claimName: storage-claim-for-esearch

oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$kb create -f esearch-dp.yaml
deployment.apps/elasticsearch created



oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$kb get po | grep elast
elasticsearch-5b94d7fdc7-w68ck            1/1     Running   0          32s

4) Test

oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$curl http://10.1.0.3:32335
{
  "name" : "elasticsearch-5b94d7fdc7-w68ck",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "jgSKzSewSkCXMxX14vzFRw",
  "version" : {
    "number" : "7.2.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "508c38a",
    "build_date" : "2019-06-20T15:54:18.811730Z",
    "build_snapshot" : false,
    "lucene_version" : "8.0.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
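Beyond the root endpoint, the cluster health API is a handy sanity check. With discovery.type set to single-node, a status of green (or yellow once replicated indices exist) is expected:

curl "http://10.1.0.3:32335/_cluster/health?pretty"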


2. Kibana 7.2 deployment

  1) Storage provisioning.

oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$cat storageClaimForKibana.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storage-claim-for-kibana
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3Gi

 oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$kb create -f storageClaimForKibana.yaml
persistentvolumeclaim/storage-claim-for-kibana created

oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$kb get pvc | grep -i kiba
storage-claim-for-kibana             Bound    pvc-be52077f-c88d-4168-93d3-abfe231d2230   3Gi        RWX            managed-nfs-storage   13s



 2) Creating the deployment and service.

 oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$cat kibana.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kibana
spec:
  selector:
    matchLabels:
      run: kibana
  template:
    metadata:
      labels:
        run: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.2.0
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        - name: XPACK_SECURITY_ENABLED
          value: "true"
        ports:
        - containerPort: 5601
          name: http
          protocol: TCP
        volumeMounts:
        - name: kibana-persistent-storage
          mountPath: /usr/share/kibana/data
      volumes:
      - name: kibana-persistent-storage
        persistentVolumeClaim:
          claimName: storage-claim-for-kibana


---

apiVersion: v1
kind: Service
metadata:
  name: kibana
  labels:
    service: kibana
spec:
  type: NodePort
  selector:
    run: kibana
  ports:
  - port: 5601
    nodePort: 32336
    targetPort: 5601

oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$kb create -f kibana.yaml
deployment.extensions/kibana created
service/kibana created
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$kb get po | grep kibana
kibana-8b6d4d4f-xll6s                     1/1     Running   0          9s
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch$kb get svc kibana
NAME     TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kibana   NodePort   10.104.166.123   <none>        5601:32336/TCP   16s

# We can connect to Kibana on any node's port 32336, as shown below.
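Before opening a browser, you can check from the command line that Kibana is up and has connected to Elasticsearch via its status API (a quick sketch, again assuming node IP 10.1.0.3; the overall state should be "green"):

curl -s http://10.1.0.3:32336/api/status | grep -o '"state":"[^"]*"'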





3. Fluentd deployment

 1) Provisioning the ServiceAccount, ClusterRole, and ClusterRoleBinding

oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch/daemonset$cat flun-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: fluentd
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system

oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch/daemonset$kb create -f flun-rbac.yaml
serviceaccount/fluentd created
clusterrole.rbac.authorization.k8s.io/fluentd created
clusterrolebinding.rbac.authorization.k8s.io/fluentd created

 oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch/daemonset$kb get sa -n kube-system | grep fluen
fluentd                              1         39s
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch/daemonset$kb get clusterrole -n kube-system | grep fluentd
fluentd                                                                93s

oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch/daemonset$kb get clusterrolebinding -n kube-system | grep fluentd
fluentd                                                113s
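As an optional check (not part of the original transcript), kubectl can verify that the fluentd service account is actually allowed to read pods and namespaces; both commands should answer "yes":

kb auth can-i list pods --as=system:serviceaccount:kube-system:fluentd
kb auth can-i watch namespaces --as=system:serviceaccount:kube-system:fluentd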

 2) Provisioning the DaemonSet

oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch/daemonset$cat flun-elas.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.4-debian-elasticsearch-1
        env:
          - name:  FLUENT_ELASTICSEARCH_HOST
            #value: "elasticsearch.logging"
            value: "elasticsearch.default"
          - name:  FLUENT_ELASTICSEARCH_PORT
            value: "9200"
          - name: FLUENT_ELASTICSEARCH_SCHEME
            value: "http"
          - name: FLUENT_UID
            value: "0"
          - name: FLUENTD_SYSTEMD_CONF
            value: "disable"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

 oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch/daemonset$kb create -f flun-elas.yaml
daemonset.extensions/fluentd created
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch/daemonset$kb get po -n kube-system | grep fluent
fluentd-l7q56                        1/1     Running   0          5s
fluentd-lktg5                        1/1     Running   0          5s
fluentd-nwpvd                        1/1     Running   0          5s
fluentd-pv5lg                        1/1     Running   0          5s
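To confirm that fluentd is actually shipping logs to Elasticsearch, check one pod's log for the connection message (a hedged check; the exact wording can vary between fluentd-kubernetes-daemonset versions):

kb logs -n kube-system fluentd-l7q56 | grep -i "Connection opened to Elasticsearch"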

4. Test with Kibana 7.2 - getting WordPress logs.

# On my k8s cluster, WordPress pods are already running. Finally, I am going to show how to find the WordPress pod logs in Kibana.

oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/esearch/daemonset$kb get po | grep wordpress
wordpress-6bb56544f9-gfh2w                1/1     Running   5          31h
wordpress-6bb56544f9-sz4xz                1/1     Running   5          31h
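Before switching to the Kibana UI, you can confirm from the command line that fluentd has started creating indices and that WordPress log lines are in them. This is a rough sketch: by default the daemonset writes to logstash-YYYY.MM.DD indices, and the query below assumes the wordpress pods carry an app=wordpress label picked up by the kubernetes metadata filter.

# List the indices created by fluentd.
curl "http://10.1.0.3:32335/_cat/indices?v" | grep logstash

# Search the logstash indices for entries coming from the wordpress pods.
curl "http://10.1.0.3:32335/logstash-*/_search?q=kubernetes.labels.app:wordpress&size=1&pretty"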

# Please see the clip below.






Conclusion)
When it comes to centralized logging and analysis, the Elasticsearch + Kibana + Fluentd combination is widely used.

This method is especially useful on bare-metal clusters or in other cloud environments.



Thanks for reading.
