This post is based on the URL below and shows how to deploy a Grafana monitoring service using Prometheus.
https://medium.com/faun/production-grade-kubernetes-monitoring-using-prometheus-78144b835b60
The difference between this post and the one above is that this one uses NodePort services; other than that, it is almost the same.
My k8s environment: one master and three worker nodes, running K8s v1.15.0 as shown below.
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$kb get nodes
NAME          STATUS   ROLES    AGE     VERSION
kubemaster    Ready    master   6d22h   v1.15.0
kubeworker1   Ready    <none>   6d22h   v1.15.0
kubeworker2   Ready    <none>   6d22h   v1.15.0
kubeworker3   Ready    <none>   6d22h   v1.15.0
Prereqs):
1. I have 16 GB of RAM and an 8-thread CPU (4 physical cores): enough hardware resources.
2. I have already set up k8s on KVM (the host is Ubuntu 18.04, but the guest k8s nodes are CentOS 7).
3. Familiarity with basic k8s commands such as kubectl -h (see the note on the kb alias after the clone output below).
4. Metrics server setup: https://wnapdlf.blogspot.com/2019/07/ubuntu18kvmvagrantkubernetes-10metrics.html
5. Clone the repo:
ex)
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring$git clone https://github.com/Thakurvaibhav/k8s
Cloning into 'k8s'...
remote: Enumerating objects: 45, done.
remote: Counting objects: 100% (45/45), done.
remote: Compressing objects: 100% (32/32), done.
remote: Total 384 (delta 24), reused 29 (delta 12), pack-reused 339
Receiving objects: 100% (384/384), 119.78 KiB | 310.00 KiB/s, done.
Resolving deltas: 100% (159/159), done.
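#Note: kb used throughout the transcripts is a shell alias for kubectl, e.g. defined in ~/.bashrc like this:
alias kb=kubectl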
1. Deploying Alertmanager
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s$mkdir monitoring
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s$cd monitoring/
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$ls
alertmanager dashboards grafana ingress.yaml kube-state-metrics node-exporter prometheus README.md
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$kb create -f alertmanager/
namespace/monitoring created
configmap/alertmanager created
deployment.extensions/alertmanager created
service/alertmanager created
#Check!
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$kb get po -n monitoring
NAME                            READY   STATUS    RESTARTS   AGE
alertmanager-778df66fbc-7wjnk   1/1     Running   0          41s
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$kb get svc -n monitoring
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
alertmanager   ClusterIP   10.109.120.38   <none>        9093/TCP   50s
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$kb get configmap -n monitoring
NAME           DATA   AGE
alertmanager   1      105s
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$kb get svc -n monitoring
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
alertmanager   ClusterIP   10.109.120.38   <none>        9093/TCP   5m52s
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$kb describe svc alertmanager -n monitoring
Name:              alertmanager
Namespace:         monitoring
Labels:            name=alertmanager
Annotations:       prometheus.io/path: /metrics
                   prometheus.io/scrape: true
Selector:          app=alertmanager
Type:              ClusterIP
IP:                10.109.120.38
Port:              alertmanager  9093/TCP
TargetPort:        9093/TCP
Endpoints:         10.244.2.97:9093
Session Affinity:  None
Events:            <none>
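#Aside: if you only need temporary access to a ClusterIP service, kubectl port-forward works without editing the service (a sketch; run it from a machine with kubectl access, then browse http://localhost:9093 while it runs):
kb port-forward svc/alertmanager -n monitoring 9093:9093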
#To see what this service is serving, I changed it to a NodePort service.
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$vi alertmanager/alertmanager-service.yaml
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$cat alertmanager/alertmanager-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
    # cloud.google.com/load-balancer-type: "Internal"
  labels:
    name: alertmanager
  name: alertmanager
  namespace: monitoring
spec:
  selector:
    app: alertmanager
  # type: LoadBalancer
  type: NodePort
  ports:
  - name: alertmanager
    protocol: TCP
    port: 9093
    targetPort: 9093
    nodePort: 32350
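#Note: a fixed nodePort must fall within the cluster's NodePort range, 30000-32767 by default (configurable via the kube-apiserver --service-node-port-range flag).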
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$kb apply -f alertmanager/alertmanager-service.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service/alertmanager configured
#NodePort check.
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$telnet 10.1.0.3 32350
Trying 10.1.0.3...
Connected to 10.1.0.3.
Escape character is '^]'.
^C^]
telnet> q
Connection closed.
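#Besides the raw TCP check, Alertmanager exposes a health endpoint you can curl over the NodePort (a sketch, reusing the node IP from the telnet test; a healthy instance answers HTTP 200):
curl -i http://10.1.0.3:32350/-/healthy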
2. Deploying Prometheus
Before deploying, I need to change the storage yaml because I am using the NFS dynamic provisioner.
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring/prometheus$kb describe sc
Name:                  managed-nfs-storage
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           fuseim.pri/ifs
Parameters:            archiveOnDelete=false
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/mon/k8s/monitoring$vi prometheus/03-prometheus-storage.yaml
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$cat prometheus/03-prometheus-storage.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: prometheus-claim
  namespace: monitoring
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
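#Side note: the volume.beta.kubernetes.io/storage-class annotation still works on v1.15 but is deprecated; the same claim written with the non-deprecated spec.storageClassName field would look like this (a sketch):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: prometheus-claim
  namespace: monitoring
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi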
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/mon/k8s/monitoring$kb get pvc -n monitoring
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
prometheus-claim   Bound    pvc-4fa27aca-b832-4de4-ade2-842d0a784541   1Gi        RWO            managed-nfs-storage   3m57s
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$kb describe pvc prometheus-claim
Error from server (NotFound): persistentvolumeclaims "prometheus-claim" not found
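#Oops, I forgot the -n monitoring flag; retrying with the namespace: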
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/mon/k8s/monitoring$kb describe pvc prometheus-claim -n monitoring
Name:          prometheus-claim
Namespace:     monitoring
StorageClass:  managed-nfs-storage
Status:        Bound
Volume:        pvc-4fa27aca-b832-4de4-ade2-842d0a784541
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-class: managed-nfs-storage
               volume.beta.kubernetes.io/storage-provisioner: fuseim.pri/ifs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    prometheus-deployment-789f95d57d-fz29l
Events:
  Type    Reason                 Age    From                                                                                          Message
  ----    ------                 ----   ----                                                                                          -------
  Normal  ExternalProvisioning   4m13s  persistentvolume-controller                                                                   waiting for a volume to be created, either by external provisioner "fuseim.pri/ifs" or manually created by system administrator
  Normal  Provisioning           4m13s  fuseim.pri/ifs_nfs-client-provisioner-78665db465-f9xtp_ac4d67ca-aa5d-11e9-8d3e-b232254cc3e3   External provisioner is provisioning volume for claim "monitoring/prometheus-claim"
  Normal  ProvisioningSucceeded  4m13s  fuseim.pri/ifs_nfs-client-provisioner-78665db465-f9xtp_ac4d67ca-aa5d-11e9-8d3e-b232254cc3e3   Successfully provisioned volume pvc-4fa27aca-b832-4de4-ade2-842d0a784541
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$ls prometheus/
00-prometheus-rbac.yaml 02-prometheus-rules.yaml prometheus-deployment.yaml
01-prometheus-configmap.yaml 03-prometheus-storage.yaml prometheus-service.yaml
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/mon/k8s/monitoring$kb create -f prometheus/
serviceaccount/monitoring created
clusterrolebinding.rbac.authorization.k8s.io/monitoring created
configmap/prometheus-server-conf created
configmap/prometheus-rules created
persistentvolumeclaim/prometheus-claim created
deployment.extensions/prometheus-deployment created
service/prometheus-service created
Error from server (AlreadyExists): error when creating "prometheus/00-prometheus-rbac.yaml": namespaces "monitoring" already exists
Error from server (AlreadyExists): error when creating "prometheus/01-prometheus-configmap.yaml": namespaces "monitoring" already exists
Error from server (AlreadyExists): error when creating "prometheus/02-prometheus-rules.yaml": namespaces "monitoring" already exists
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/mon/k8s/monitoring$
#The errors occur because the monitoring namespace was already created along with Alertmanager, so they don't matter.
#To check what prometheus-service is serving, I changed it to NodePort as well.
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring/prometheus$vi prometheus-service.yaml
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring/prometheus$cat prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: "true"
    # cloud.google.com/load-balancer-type: "Internal"
  name: prometheus-service
  namespace: monitoring
  labels:
    name: prometheus
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
  - name: prometheus
    port: 8080
    targetPort: prometheus
    nodePort: 32351
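#Note: targetPort: prometheus is a named port, so it resolves to the containerPort named prometheus in the pod spec of prometheus-deployment.yaml, roughly like this (a sketch assuming Prometheus's default listen port 9090; check the actual deployment file):
        ports:
        - name: prometheus
          containerPort: 9090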
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring/prometheus$kb apply -f prometheus-service.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
service/prometheus-service configured
#Port check
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring/prometheus$telnet 10.1.0.4 32351
Trying 10.1.0.4...
Connected to 10.1.0.4.
Escape character is '^]'.
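#Prometheus also answers health checks over the NodePort (a sketch, reusing the node IP from the telnet test; a healthy server replies HTTP 200 with "Prometheus is Healthy."):
curl -i http://10.1.0.4:32351/-/healthy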
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring/prometheus$kubectl get po -l app=prometheus-server -n monitoring
NAME                                     READY   STATUS    RESTARTS   AGE
prometheus-deployment-789f95d57d-fz29l   1/1     Running   0          16m
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring/prometheus$kb get svc prometheus-service -n monitoring
NAME                 TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
prometheus-service   NodePort   10.104.17.137   <none>        8080:32351/TCP   18m
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring/prometheus$kb get configmap -n monitoring
NAME                     DATA   AGE
alertmanager             1      41m
prometheus-rules         1      19m
prometheus-server-conf   1      19m
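#To see the scrape configuration Prometheus is actually running with, you can dump the configmap (a sketch):
kb get configmap prometheus-server-conf -n monitoring -o yaml | less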
3. Deploying Kube-State-Metrics
Before deploying kube-state-metrics, I also changed its service to use NodePort.
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$vi kube-state-metrics/kube-state-metrics.yaml
#Omitted lines...
---
apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: monitoring
  labels:
    k8s-app: kube-state-metrics
  annotations:
    prometheus.io/scrape: 'true'
spec:
  type: NodePort
  ports:
  - name: http-metrics
    port: 8080
    targetPort: http-metrics
    protocol: TCP
  - name: telemetry
    port: 8081
    targetPort: telemetry
    protocol: TCP
    nodePort: 32352
  selector:
    k8s-app: kube-state-metrics
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$kb create -f kube-state-metrics/
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
rolebinding.rbac.authorization.k8s.io/kube-state-metrics created
role.rbac.authorization.k8s.io/kube-state-metrics-resizer created
serviceaccount/kube-state-metrics created
deployment.apps/kube-state-metrics created
service/kube-state-metrics created
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$nc -zv 10.1.0.5 32352
Connection to 10.1.0.5 32352 port [tcp/*] succeeded!
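#A quick peek at what kube-state-metrics exposes (a sketch; 10.1.0.5 is the node IP from the nc check, and per the service above nodePort 32352 maps to the telemetry port 8081, which serves the exporter's own /metrics):
curl -s http://10.1.0.5:32352/metrics | head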
4. Deploying Grafana
#To use NodePort.
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$vi grafana/grafana-service.yaml
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$cat grafana/grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: grafana
    # cloud.google.com/load-balancer-type: "Internal"
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 32353
  selector:
    k8s-app: grafana
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$kb create -f grafana/
namespace/monitoring created
deployment.extensions/grafana created
service/grafana created
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$nc -zv 10.1.0.3 32353
Connection to 10.1.0.3 32353 port [tcp/*] succeeded!
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/monitoring/k8s/monitoring$
#Now open Grafana at http://<any-node-ip>:32353 (per the NodePort service above) and configure the datasource like below:
Name: DS_Prometheus
Type: Prometheus
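#The datasource can also be created through Grafana's HTTP API instead of the UI (a sketch; it assumes Grafana's default admin/admin credentials and points at prometheus-service via its in-cluster DNS name on port 8080):
curl -s -u admin:admin -H 'Content-Type: application/json' \
  -X POST http://10.1.0.3:32353/api/datasources \
  -d '{"name":"DS_Prometheus","type":"prometheus","access":"proxy","url":"http://prometheus-service.monitoring.svc.cluster.local:8080"}'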
#To monitor the k8s cluster using Prometheus, copy the URL below to the clipboard and import it as a dashboard in Grafana.
https://raw.githubusercontent.com/ohyoungjooung2/u18kvk8s/master/mon/k8s/monitoring/dashboards/Kubernetes%20cluster%20monitoring%20(via%20Prometheus).json
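#You can also fetch the dashboard JSON locally first and paste its contents into Grafana's dashboard Import screen (a sketch):
curl -sL -o k8s-cluster-dashboard.json 'https://raw.githubusercontent.com/ohyoungjooung2/u18kvk8s/master/mon/k8s/monitoring/dashboards/Kubernetes%20cluster%20monitoring%20(via%20Prometheus).json'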
Conclusion)
Deploying Grafana for monitoring is easier than ever once you know how to deploy it on a k8s cluster, thanks to the work of many good people.
Thanks for reading!