Saturday, December 28, 2019

K8s (Kubernetes 1.17) provisioning with Vagrant and KVM (libvirt).

Prerequisites).
In this blog post, I will show how to provision k8s 1.17 with Vagrant and some shell scripts.

You will need an Ubuntu 18.04 host with Vagrant and a working KVM environment.

*First of all, we should install Vagrant and the vagrant-libvirt plugin.


oyj@controller:~$ vagrant plugin list
==> vagrant: A new version of Vagrant is available: 2.2.6 (installed version: 2.2.5)!
==> vagrant: To upgrade visit: https://www.vagrantup.com/downloads.html

vagrant-libvirt (0.0.45, global)

#If the vagrant-libvirt plugin is not installed yet, install it like below.
oyj@controller:~$ vagrant plugin install vagrant-libvirt
Installing the 'vagrant-libvirt' plugin. This can take a few minutes...
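
#If the plugin build fails, the libvirt headers and daemon are probably missing.
#On Ubuntu 18.04 something like the following usually fixes it (package names may vary by release):
oyj@controller:~$ sudo apt-get install qemu-kvm libvirt-daemon-system libvirt-clients libvirt-dev ebtables dnsmasq-base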


*Second, we should clone the git repo that I created recently.

oyj@controller:~$ git clone https://github.com/ohyoungjooung2/u18kvk8s.git
Cloning into 'u18kvk8s'...
remote: Enumerating objects: 216, done.
remote: Counting objects: 100% (216/216), done.
remote: Compressing objects: 100% (173/173), done.
remote: Total 216 (delta 36), reused 207 (delta 32), pack-reused 0
Receiving objects: 100% (216/216), 894.49 KiB | 942.00 KiB/s, done.
Resolving deltas: 100% (36/36), done.

*Lastly, just execute setup.sh.

oyj@controller:~$ cd u18kvk8s/k8s/
oyj@controller:~/u18kvk8s/k8s$ bash setup.sh
Deleting previous id_rsa
 Generating ssh key for automatic provisioning
 copy id_rsa.pub to pub_key
 up master first
Bringing machine 'kubemaster' up with 'libvirt' provider...
==> kubemaster: Checking if box 'centos/7' version '1905.1' is up to date...
==> kubemaster: Creating image (snapshot of base box volume).
==> kubemaster: Creating domain with the following settings...
==> kubemaster:  -- Name:              k8s_kubemaster
==> kubemaster:  -- Domain type:       kvm
==> kubemaster:  -- Cpus:              2
==> kubemaster:  -- Feature:           acpi
==> kubemaster:  -- Feature:           apic
==> kubemaster:  -- Feature:           pae
==> kubemaster:  -- Memory:            2048M
==> kubemaster:  -- Management MAC:   
==> kubemaster:  -- Loader:           
==> kubemaster:  -- Nvram:            
==> kubemaster:  -- Base box:          centos/7
==> kubemaster:  -- Storage pool:      default
==> kubemaster:  -- Image:             /var/lib/libvirt/images/k8s_kubemaster.img (41G)
==> kubemaster:  -- Volume Cache:      default
==> kubemaster:  -- Kernel:           
==> kubemaster:  -- Initrd:           
==> kubemaster:  -- Graphics Type:     vnc
==> kubemaster:  -- Graphics Port:     -1
==> kubemaster:  -- Graphics IP:       127.0.0.1
==> kubemaster:  -- Graphics Password: Not defined
==> kubemaster:  -- Video Type:        cirrus
==> kubemaster:  -- Video VRAM:        9216
==> kubemaster:  -- Sound Type:   
==> kubemaster:  -- Keymap:            en-us
==> kubemaster:  -- TPM Path:         
==> kubemaster:  -- INPUT:             type=mouse, bus=ps2
==> kubemaster: Creating shared folders metadata...
==> kubemaster: Starting domain.
==> kubemaster: Waiting for domain to get an IP address...
==> kubemaster: Waiting for SSH to become available...
    kubemaster:


=============== output omitted (too long) ====================

Password authentication is disabled to avoid man-in-the-middle attacks.
Keyboard-interactive authentication is disabled to avoid man-in-the-middle attacks.
admin_init.log                                100% 4605     7.0MB/s   00:00   
scp admin_init.log success
Wait until kubemaster is ready to accept nodes to join

kubeworker2: Warning: Permanently added '10.1.0.2' (ECDSA) to the list of known hosts.
    kubeworker2:  Node will join with master 10.1.0.2
    kubeworker2: [preflight] Running pre-flight checks
    kubeworker2: W1228 11:31:10.356519    6787 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
    kubeworker2: [preflight] Reading configuration from the cluster...
    kubeworker2: [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    kubeworker2: [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
    kubeworker2: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    kubeworker2: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    kubeworker2: [kubelet-start] Starting the kubelet
    kubeworker2: [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    kubeworker2:
    kubeworker2: This node has joined the cluster:
    kubeworker2: * Certificate signing request was sent to apiserver and a response was received.
    kubeworker2: * The Kubelet was informed of the new secure connection details.
    kubeworker2:
    kubeworker2: Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
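
For reference, this is roughly what setup.sh does, inferred from its own output above (a minimal sketch; the real script in the repo is the source of truth):

#sketch of setup.sh, inferred from its output
rm -f id_rsa id_rsa.pub                 #delete the previous key pair
ssh-keygen -t rsa -N '' -f id_rsa       #generate an ssh key for automatic provisioning
cp id_rsa.pub pub_key                   #make the public key available to the guests
vagrant up kubemaster                   #bring the master up first
#wait until the master is ready to accept nodes, fetch the join info (admin_init.log),
#then bring the workers up so they can join
vagrant up kubeworker1 kubeworker2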



* And we can check by logging in to the kubemaster node.

oyj@controller:~/u18kvk8s/k8s$ vagrant ssh kubemaster
Last login: Sat Dec 28 11:20:50 2019
[vagrant@kubemaster ~]$ kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
kubemaster    Ready    master   14m     v1.17.0
kubeworker1   Ready    <none>   5m34s   v1.17.0
kubeworker2   Ready    <none>   2m48s   v1.17.0
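
As an extra check (these commands are not from the original run), a throwaway deployment confirms the workers can actually schedule pods:

[vagrant@kubemaster ~]$ kubectl create deployment hello --image=nginx
[vagrant@kubemaster ~]$ kubectl get po -o wide
[vagrant@kubemaster ~]$ kubectl delete deployment hello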



Conclusion).
To sum up, we can create a k8s environment with Vagrant and some shell scripts. Easy!..^^
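
When the cluster is no longer needed, Vagrant can tear all three VMs down in one go:

oyj@controller:~/u18kvk8s/k8s$ vagrant destroy -f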

Thanks for reading.



Saturday, September 14, 2019

Ubuntu18.04+ansible 2.10.0.dev0+Openstack(Stein) 1. Retrieving openstack image facts with a prompt.

I've set up OpenStack (two nodes: one is a controller and the other is a compute node).
And I want to retrieve the image facts first, with auth testing, like below.

Also, I installed ansible 2.10.0.dev0.


First, we should install "openstacksdk" with the pip command like below.

oyj@controller:~/ansible/openstack$ pip install openstacksdk --user
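
#To confirm the SDK is visible to the Python that ansible uses, a quick check (not part of the original session):
oyj@controller:~/ansible/openstack$ pip show openstacksdk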


#Create a playbook like below.
oyj@controller:~/ansible/openstack$ cat os_imags_facts.yaml
---
- hosts: localhost
  vars_prompt:
    - name: openstack_admin_password
      prompt: "What is openstack admin password?"
      private: yes

  tasks:
    - name: get openstack images
      os_image_facts:
        auth:
         auth_url: http://controller:5000/v3
         username: admin
         password: "{{ openstack_admin_password }}"
         project_name: admin
      delegate_to: localhost
       
    - name: show images name only
      debug:
          msg:  "image_name: {{ openstack_image[item].name}}"

      loop: "{{ range(0,openstack_image|length)|list }}"

#And execute with ansible-playbook.
#I have no inventory; I do not need one for this playbook.




oyj@controller:~/ansible/openstack$ ansible-playbook os_imags_facts.yaml 
 [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

What is openstack admin password?: 

PLAY [localhost] **************************************************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************************************************
ok: [localhost]

TASK [get openstack images] ***************************************************************************************************************************************
ok: [localhost -> localhost]

TASK [show images name only] **************************************************************************************************************************************
ok: [localhost] => (item=0) => {
    "msg": "image_name: u16-server"
}
ok: [localhost] => (item=1) => {
    "msg": "image_name: c7"
}

PLAY RECAP ********************************************************************************************************************************************************

localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   



As a result, I have two images: c7 and u16-server.
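
As a side note, the index-based loop above can be written more simply by looping over the list itself; this variant should print the same names:

    - name: show images name only
      debug:
        msg: "image_name: {{ item.name }}"
      loop: "{{ openstack_image }}"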

==
Thanks for reading.



Tuesday, July 23, 2019

UBUNTU18+KVM+VAGRANT+K8S 16) AUTOSCALE - HORIZONTAL - CPU BASED - HPA - JMETER


This post shows how to autoscale your pods based on CPU metrics.
Hope it helps!


I used the JMeter guide below to stress the wordpress pods (two) so that they scale out.
https://jmeter.apache.org/usermanual/build-web-test-plan.html


1. Two wordpress pods.

 oyj@Workstation-oyj-X555QG ~/apache-jmeter-5.1.1/bin$kubectl get po | grep wordpress
wordpress-6c7f4d4874-bd257                1/1     Running   1          2d21h
wordpress-6c7f4d4874-kxj5s                1/1     Running   1          2d21h
oyj@Workstation-oyj-X555QG ~/apache-jmeter-5.1.1/bin$

2. metrics-server for autoscaling (https://wnapdlf.blogspot.com/2019/07/ubuntu18kvmvagrantkubernetes-10metrics.html). Note that kb below is just my shell alias for kubectl.
oyj@Workstation-oyj-X555QG ~/apache-jmeter-5.1.1/bin$kb get po -n kube-system  | grep metrics
metrics-server-75cb7fd5d7-tq7l8      1/1     Running   14         10d
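
With metrics-server running, live usage can also be checked directly (not from the original session):

oyj@Workstation-oyj-X555QG ~$kubectl top po -l app=wordpress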


3. wordpress deploy yaml (the resources request is important; if no resource request exists, autoscaling on CPU percentage is not possible)
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/wordpress$cat wp-dp.yaml
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        resources:
          requests:
            cpu: 100m
        env:
        - name: WORDPRESS_DB_HOST
          #Previous mariadb-master svc that I already created!
          value: mariadb-master
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: storage-claim-for-wp


#https://wnapdlf.blogspot.com/2019/07/ubuntu18kvmvagrantkubernetes-8.html


4. Create an autoscaler on the wordpress deployment.
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/wordpress$kb autoscale deployment wordpress --cpu-percent=50 --min=2 --max=5
horizontalpodautoscaler.autoscaling/wordpress autoscaled
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/wordpress$kb get hpa
NAME        REFERENCE              TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
wordpress   Deployment/wordpress   <unknown>/50%   2         5         0          4s

oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/wordpress$kb describe hpa
Name:                                                  wordpress
Namespace:                                             default
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Wed, 24 Jul 2019 04:55:27 +0900
Reference:                                             Deployment/wordpress
Metrics:                                               ( current / target )
  resource cpu on pods  (as a percentage of request):  1% (1m) / 50%
Min replicas:                                          2
Max replicas:                                          5
Deployment pods:                                       2 current / 2 desired
Conditions:
  Type            Status  Reason               Message
  ----            ------  ------               -------
  AbleToScale     True    ScaleDownStabilized  recent recommendations were higher than current one, applying the highest recent recommendation
  ScalingActive   True    ValidMetricFound     the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
  ScalingLimited  False   DesiredWithinRange   the desired count is within the acceptable range
Events:           <none>
oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/wordpress$kb get hpa
NAME        REFERENCE              TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
wordpress   Deployment/wordpress   1%/50%    2         5         2          27s
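
For reference, the same autoscaler can be written as a manifest instead of the kubectl autoscale one-liner; a sketch using the autoscaling/v1 API:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: wordpress
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wordpress
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50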


5. JMeter setting



oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/wordpress$watch kubectl get hpa

oyj@Workstation-oyj-X555QG ~/u18kvk8s/k8s/wordpress$kubectl get po -l app=wordpress






 Let's stress it like in the clip below.
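
If you prefer to skip the GUI, the same test plan can run headless; a sketch, where wp-stress.jmx is a hypothetical saved test plan pointing at the wordpress service:

oyj@Workstation-oyj-X555QG ~/apache-jmeter-5.1.1/bin$./jmeter -n -t wp-stress.jmx -l results.jtl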



As a result, the pod count and CPU usage increase.



As CPU usage goes back down, the pod count will return to its original value (2).


According to "https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/":

"Roughly speaking, HPA will increase and decrease the number of replicas (via the deployment) to maintain an average CPU utilization across all Pods of 50% (since each pod requests 200 milli-cores by kubectl run, this means average CPU usage of 100 milli-cores)."

In my case each wordpress pod requests 100m (see the deployment above), so the 50% target works out to an average of about 50 milli-cores per pod before the HPA scales out.



Thanks for reading.