In this post, the following configuration is explained:
1. Docker registry server setup.
2. Ansible key setup on each k8s worker node.
3. Jenkins setup on a server with sufficient resources (Docker installed).
4. Creating a Java Spring Boot program with Jenkins.
5. Deploying into the k8s cluster with a 'Jenkinsfile' (Jenkins pipeline) and deliver.sh.
Prerequisites)
Docker installed on the Jenkins node (server). Basic Linux commands.
A k8s cluster.
https://wnapdlf.blogspot.com/
UBUNTU18+KVM+VAGRANT+K8S 1)INSTALL KVM
UBUNTU18+KVM+VAGRANT+K8S 2)INSTALL VAGRANT
UBUNTU18+KVM+VAGRANT+K8S 3)INSTALL K8S
UBUNTU18+KVM+VAGRANT+KUBERNETES 4)ADD NODE K8S
1. Docker registry server setup.
[vagrant@dockeregiserver docker_repo]$ cat oreq.sh
openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout ./domain.key \
  -x509 -days 365 -out ./domain.crt
[vagrant@dockeregiserver docker_repo]$ bash oreq.sh
Generating a 4096 bit RSA private key
.......................................................................................................................................................++
..................................................................................++
writing new private key to './domain.key'
-----
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:KR
State or Province Name (full name) []:SEOUL
Locality Name (eg, city) [Default City]:Ytttt
Organization Name (eg, company) [Default Company Ltd]:SKY
Organizational Unit Name (eg, section) []:NET
Common Name (eg, your name or your server's hostname) []:10.1.0.7:3333
Email Address []:OHYOUNGJOOUNG@GMAIL.COM
[vagrant@dockeregiserver docker_repo]$ sudo vi --cmd "set nu" /etc/pki/tls/openssl.cnf
219 [ v3_req ]
220
221 # Extensions to add to a certificate request
222
223 basicConstraints = CA:FALSE
224 keyUsage = nonRepudiation, digitalSignature, keyEncipherment
225
226 [ v3_ca ]
# Below line added
227 subjectAltName=IP:10.1.0.7
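On OpenSSL 1.1.1 and later, the same certificate can be produced in one shot, without editing openssl.cnf, by passing the SAN on the command line — a sketch; the subject values below just mirror the interactive prompts above:

```shell
# One-shot self-signed cert with a subjectAltName; no openssl.cnf edit needed.
# (-addext requires OpenSSL >= 1.1.1; -subj skips the interactive prompts.)
openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout ./domain.key \
  -x509 -days 365 -out ./domain.crt \
  -subj "/C=KR/ST=SEOUL/L=Ytttt/O=SKY/OU=NET/CN=10.1.0.7" \
  -addext "subjectAltName=IP:10.1.0.7"
```

`openssl x509 -in domain.crt -noout -text` should then show the IP in the X509v3 Subject Alternative Name section.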
[vagrant@dockeregiserver docker_repo]$ sudo mkdir -p /etc/docker/certs.d/10.1.0.7\:3333
[vagrant@dockeregiserver docker_repo]$ sudo cp domain.crt /etc/docker/certs.d/10.1.0.7\:3333/ca.crt
[vagrant@dockeregiserver docker_repo]$ sudo mkdir /certs
[vagrant@dockeregiserver docker_repo]$ sudo cp domain.crt domain.key /certs/
[vagrant@dockeregiserver docker_repo]$ vi --cmd "set nu" regi_docker.sh
 1 #!/usr/bin/env bash
 2 docker run -d \
 3   --restart=always \
 4   --name registry \
 5   -v /certs:/certs \
 6   -v /regi:/var/lib/registry \
 7   -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
 8   -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
 9   -p 3333:5000 \
10   registry:2
[vagrant@dockeregiserver docker_repo]$ docker ps
CONTAINER ID   IMAGE                                    COMMAND                  CREATED         STATUS        PORTS                    NAMES
f7581f630dac   registry:2                               "/entrypoint.sh /etc…"   3 seconds ago   Up 1 second   0.0.0.0:3333->5000/tcp   registry
38829acb5576   itsthenetwork/nfs-server-alpine:latest   "/usr/bin/nfsd.sh"       2 days ago      Up 3 hours                             nfs
[vagrant@dockeregiserver docker_repo]$ telnet 10.1.0.7 3333
Trying 10.1.0.7...
Connected to 10.1.0.7.
Escape character is '^]'.
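Beyond the telnet port check, the registry's HTTP API can be probed directly with curl, trusting our self-signed CA — a quick sanity check, assuming the address and cert paths used above:

```shell
# List the repositories the registry knows about; a freshly started,
# empty registry answers with {"repositories":[]}
curl --cacert /certs/domain.crt https://10.1.0.7:3333/v2/_catalog
```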
# From another host
root@oyj-X555QG:~# mkdir -p /etc/docker/certs.d/10.1.0.7:3333/
# Copy ca.crt from dockeregiserver (the Docker registry server) into this directory; paste-in or scp both work.
root@oyj-X555QG:/etc/docker/certs.d/10.1.0.7:3333# vi ca.crt
root@oyj-X555QG:/etc/docker/certs.d/10.1.0.7:3333# pwd
/etc/docker/certs.d/10.1.0.7:3333
root@oyj-X555QG:/etc/docker/certs.d/10.1.0.7:3333# file ca.crt
ca.crt: PEM certificate
# With md5sum we can check whether the two copies really are the same file.
root@oyj-X555QG:/etc/docker/certs.d/10.1.0.7:3333# md5sum ca.crt
11fa3edd1d516bdccea8e7cb881cb3e8  ca.crt
root@oyj-X555QG:/etc/docker/certs.d/10.1.0.7:3333#
[vagrant@dockeregiserver docker_repo]$ md5sum /etc/docker/certs.d/10.1.0.7\:3333/ca.crt
11fa3edd1d516bdccea8e7cb881cb3e8  /etc/docker/certs.d/10.1.0.7:3333/ca.crt
# Testing
root@oyj-X555QG:/etc/docker/certs.d/10.1.0.7:3333# docker pull ohyoungjooung2/tomcat9.0.10-on-alpinelinux38
Using default tag: latest
latest: Pulling from ohyoungjooung2/tomcat9.0.10-on-alpinelinux38
8e3ba11ec2a2: Pull complete
5ff7fb08c2b4: Pull complete
27628e0a0b63: Pull complete
bd514ad2a5be: Pull complete
e4d6bfd7da2f: Pull complete
56f83caba94f: Pull complete
99ccc0a3d73e: Pull complete
Digest: sha256:47ac1af38cb51df023b079561776e5067e96efbc069cf8cfa256428f298b9c22
Status: Downloaded newer image for ohyoungjooung2/tomcat9.0.10-on-alpinelinux38:latest
root@oyj-X555QG:/etc/docker/certs.d/10.1.0.7:3333#
root@oyj-X555QG:/etc/docker/certs.d/10.1.0.7:3333# docker images | grep tomcat
ohyoungjooung2/tomcat9.0.10-on-alpinelinux38   latest   828f122577cf   11 months ago   123MB
root@oyj-X555QG:/etc/docker/certs.d/10.1.0.7:3333# docker tag 828f122577cf 10.1.0.7:3333/oyjtomcat9010alpinelinux38
root@oyj-X555QG:/etc/docker/certs.d/10.1.0.7:3333# docker push 10.1.0.7:3333/oyjtomcat9010alpinelinux38
The push refers to repository [10.1.0.7:3333/oyjtomcat9010alpinelinux38]
root@oyj-X555QG:/etc/ssl# docker push 10.1.0.7:3333/oyjtomcat9010alpinelinux38
The push refers to repository [10.1.0.7:3333/oyjtomcat9010alpinelinux38]
13a114eb7f66: Pushed
59368bda0938: Pushed
66163e896aa8: Pushed
14518fdd0f24: Pushed
d78a29c991b4: Pushed
4c5904eebc7d: Pushed
73046094a9b8: Pushed
latest: digest: sha256:ec9729daf0b3285dffe1d05b0721b3bbfada9bf5b4e02dcc85498d93da49e700 size: 1790
# The Docker registry server setup is finally complete.
2. Ansible key setup on each k8s worker node.
1) Check that every server answers an ansible ping.
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s$ ansible kuber -m ping
10.1.0.4 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
10.1.0.5 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
10.1.0.2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
10.1.0.3 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
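The nodeDockerRepo.yaml playbook run below is not listed in the post; judging from the task names in its output, its effect can be reproduced with two ad-hoc commands — a sketch, assuming the same `kuber` inventory group and that ca.crt sits in the current directory:

```shell
# Create the per-registry cert directory on every worker node...
ansible kuber -b -m file -a 'path=/etc/docker/certs.d/10.1.0.7:3333/ state=directory'
# ...and push the registry's CA cert to each node.
ansible kuber -b -m copy -a 'src=ca.crt dest=/etc/docker/certs.d/10.1.0.7:3333/ca.crt'
```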
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ ansible-playbook nodeDockerRepo.yaml

PLAY [kuber] *********************************************************************

TASK [make directory /etc/docker/certs.d/10.1.0.7:3333/] *************************
changed: [10.1.0.3]
changed: [10.1.0.4]
changed: [10.1.0.5]
changed: [10.1.0.2]
TASK [debug] *********************************************************************
ok: [10.1.0.2] => {
    "result": {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python"
        },
        "changed": true,
        "diff": {
            "after": {
                "mode": "0600",
                "path": "/etc/docker/certs.d/10.1.0.7:3333/",
                "state": "directory"
            },
            "before": {
                "mode": "0755",
                "path": "/etc/docker/certs.d/10.1.0.7:3333/",
                "state": "absent"
            }
        },
        "failed": false,
        "gid": 0,
        "group": "root",
        "mode": "0600",
        "owner": "root",
        "path": "/etc/docker/certs.d/10.1.0.7:3333/",
        "secontext": "unconfined_u:object_r:etc_t:s0",
        "size": 6,
        "state": "directory",
        "uid": 0
    }
}
ok: [10.1.0.4] => { ...identical to the 10.1.0.2 result... }
ok: [10.1.0.3] => { ...identical to the 10.1.0.2 result... }
ok: [10.1.0.5] => { ...identical, except "secontext": "unconfined_u:object_r:container_config_t:s0"... }
TASK [copy /etc/docker/certs.d/10.1.0.7:3333/ca.crt TO /etc/docker/certs.d/10.1.0.7:3333/ca.crt] ***
changed: [10.1.0.4]
changed: [10.1.0.3]
changed: [10.1.0.5]
changed: [10.1.0.2]
TASK [debug] *********************************************************************
ok: [10.1.0.2] => {
    "result": {
        "changed": true,
        "checksum": "2292e6b05d6db31f214ab928f0dc1954ea982aef",
        "dest": "/etc/docker/certs.d/10.1.0.7:3333/ca.crt",
        "diff": [],
        "failed": false,
        "gid": 0,
        "group": "root",
        "md5sum": "2c8e71e2899f995d80143b36f5035149",
        "mode": "0644",
        "owner": "root",
        "secontext": "system_u:object_r:cert_t:s0",
        "size": 2151,
        "src": "/home/vagrant/.ansible/tmp/ansible-tmp-1562754588.17-56209436777603/source",
        "state": "file",
        "uid": 0
    }
}
ok: [10.1.0.3] => { ...identical result, different tmp src path... }
ok: [10.1.0.4] => { ...identical result, different tmp src path... }
ok: [10.1.0.5] => { ...identical result, different tmp src path... }
PLAY RECAP ***********************************************************************
10.1.0.2 : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.1.0.3 : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.1.0.4 : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
10.1.0.5 : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ ansible kuber -m shell -a 'docker pull 10.1.0.7:3333/oyjtomcat9010alpinelinux38' -b
10.1.0.4 | CHANGED | rc=0 >>
Using default tag: latest
latest: Pulling from oyjtomcat9010alpinelinux38
8e3ba11ec2a2: Pulling fs layer
4d5d63added5: Pulling fs layer
d898c2f78889: Pulling fs layer
7cd97174e94a: Pulling fs layer
236806b696c0: Pulling fs layer
844a8b743a33: Pulling fs layer
a925a7c5ec70: Pulling fs layer
844a8b743a33: Pull complete
a925a7c5ec70: Pull complete
Digest: sha256:ec9729daf0b3285dffe1d05b0721b3bbfada9bf5b4e02dcc85498d93da49e700
Status: Downloaded newer image for 10.1.0.7:3333/oyjtomcat9010alpinelinux38:latest
10.1.0.5 | CHANGED | rc=0 >>
Using default tag: latest
latest: Pulling from oyjtomcat9010alpinelinux38
8e3ba11ec2a2: Pulling fs layer
844a8b743a33: Pulling fs layer
a925a7c5ec70: Pulling fs layer
7cd97174e94a: Waiting
236806b696c0: Waiting
844a8b743a33: Waiting
Digest: sha256:ec9729daf0b3285dffe1d05b0721b3bbfada9bf5b4e02dcc85498d93da49e700
Status: Downloaded newer image for 10.1.0.7:3333/oyjtomcat9010alpinelinux38:latest
10.1.0.3 | CHANGED | rc=0 >>
Using default tag: latest
latest: Pulling from oyjtomcat9010alpinelinux38
8e3ba11ec2a2: Pulling fs layer
4d5d63added5: Pulling fs layer
d898c2f78889: Pulling fs layer
7cd97174e94a: Pulling fs layer
236806b696c0: Pulling fs layer
7cd97174e94a: Download complete
8e3ba11ec2a2: Pull complete
4d5d63added5: Pull complete
...
Digest: sha256:ec9729daf0b3285dffe1d05b0721b3bbfada9bf5b4e02dcc85498d93da49e700
Status: Downloaded newer image for 10.1.0.7:3333/oyjtomcat9010alpinelinux38:latest
10.1.0.2 | CHANGED | rc=0 >>
Using default tag: latest
latest: Pulling from oyjtomcat9010alpinelinux38
8e3ba11ec2a2: Pulling fs layer
4d5d63added5: Pulling fs layer
d898c2f78889: Pulling fs layer
7cd97174e94a: Pulling fs layer
236806b696c0: Pulling fs layer
Digest: sha256:ec9729daf0b3285dffe1d05b0721b3bbfada9bf5b4e02dcc85498d93da49e700
Status: Downloaded newer image for 10.1.0.7:3333/oyjtomcat9010alpinelinux38:latest
# One ansible command pulls the image on every node — ansible really does feel like magic! ^^;
3. Jenkins setup on a server with sufficient resources (Docker installed).
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ cat jenkinsDockerInst.yaml
---
- hosts: "{{ var_host | default('kuber') }}"
  gather_facts: no
  become: yes
  vars:
    OWNER: "oyj"
    HOMEDIR: "/home/oyj"
    JENKINS_HOME: "{{ HOMEDIR }}/jenkins"
    JENKINS_DOWN: "{{ JENKINS_HOME }}/downloads"
  tasks:
    - name: "{{ JENKINS_HOME }} DIR CHECK IF EXISTS"
      command: "ls {{ JENKINS_HOME }}"
      #path: "{{ JENKINS_HOME }}"
      ignore_errors: yes
      register: jhome_chk
    - debug:
        var: jhome_chk
    #- meta: end_play
    - name: "make dir {{ JENKINS_HOME }}"
      file:
        path: "{{ JENKINS_HOME }}"
        state: directory
        owner: "{{ OWNER }}"
        group: "{{ OWNER }}"
        mode: 0755
      when: jhome_chk.rc != 0
      register: result
    - debug:
        #var: result.diff.before.state
        var: result
    - name: "{{ JENKINS_DOWN }} DIR CHECK IF EXISTS"
      command: "ls {{ JENKINS_DOWN }}"
      #path: "{{ JENKINS_DOWN }}"
      ignore_errors: yes
      register: jhome_chk_down
    - debug:
        var: jhome_chk_down
    - name: "make dir {{ JENKINS_DOWN }}"
      file:
        path: "{{ JENKINS_DOWN }}"
        state: directory
        owner: "{{ OWNER }}"
        group: "{{ OWNER }}"
        mode: 0755
      when: jhome_chk_down.rc != 0
      register: result
    - name: "Launch jenkins:lts container"
      command: "docker run --restart=always -d -p 8081:8080 -p 50000:50000 -v {{ JENKINS_HOME }}:/var/jenkins_home -v {{ JENKINS_DOWN }}:/var/jenkins_home/downloads -v /var/run/docker.sock:/var/run/docker.sock jenkinsci/blueocean"
    - debug:
        var: result
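A side note on the playbook above: the `ls`-plus-`ignore_errors` existence check works, but Ansible's stat module reports existence without a red "fatal" line in the output; a minimal ad-hoc sketch with the same path:

```shell
# .stat.exists in the output is true/false; the task never fails either way.
ansible localhost -m stat -a 'path=/home/oyj/jenkins'
```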
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ ansible-playbook --extra-vars "var_host=localhost" jenkinsDockerInst.yaml -vv
ansible-playbook 2.8.1
  config file = /home/oyj/ansible/ansible.cfg
  configured module search path = [u'/home/oyj/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /home/oyj/.local/lib/python2.7/site-packages/ansible
  executable location = /home/oyj/.local/bin/ansible-playbook
  python version = 2.7.15+ (default, Nov 27 2018, 23:36:35) [GCC 7.3.0]
Using /home/oyj/ansible/ansible.cfg as config file

PLAYBOOK: jenkinsDockerInst.yaml *************************************************
1 plays in jenkinsDockerInst.yaml

PLAY [localhost] *****************************************************************
META: ran handlers

TASK [/home/oyj/jenkins DIR CHECK IF EXISTS] *************************************
task path: /home/oyj/INSTALL/u18kvk8s/k8s/cicd/jenkinsDockerInst.yaml:14
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["ls", "/home/oyj/jenkins"], "delta": "0:00:00.005033", "end": "2019-07-11 02:34:06.580639", "msg": "non-zero return code", "rc": 2, "start": "2019-07-11 02:34:06.575606", "stderr": "ls: cannot access '/home/oyj/jenkins': No such file or directory", "stderr_lines": ["ls: cannot access '/home/oyj/jenkins': No such file or directory"], "stdout": "", "stdout_lines": []}
...ignoring
TASK [debug] *********************************************************************
task path: /home/oyj/INSTALL/u18kvk8s/k8s/cicd/jenkinsDockerInst.yaml:20
ok: [localhost] => {
    "jhome_chk": {
        "changed": true,
        "cmd": ["ls", "/home/oyj/jenkins"],
        "delta": "0:00:00.005033",
        "end": "2019-07-11 02:34:06.580639",
        "failed": true,
        "msg": "non-zero return code",
        "rc": 2,
        "start": "2019-07-11 02:34:06.575606",
        "stderr": "ls: cannot access '/home/oyj/jenkins': No such file or directory",
        "stderr_lines": ["ls: cannot access '/home/oyj/jenkins': No such file or directory"],
        "stdout": "",
        "stdout_lines": []
    }
}
TASK [make dir /home/oyj/jenkins] ************************************************
task path: /home/oyj/INSTALL/u18kvk8s/k8s/cicd/jenkinsDockerInst.yaml:25
changed: [localhost] => {"changed": true, "gid": 1000, "group": "oyj", "mode": "0755", "owner": "oyj", "path": "/home/oyj/jenkins", "size": 4096, "state": "directory", "uid": 1000}

TASK [debug] *********************************************************************
task path: /home/oyj/INSTALL/u18kvk8s/k8s/cicd/jenkinsDockerInst.yaml:36
ok: [localhost] => {
    "result": {
        "changed": true,
        "diff": {
            "after": {
                "group": 1000,
                "owner": 1000,
                "path": "/home/oyj/jenkins",
                "state": "directory"
            },
            "before": {
                "group": 0,
                "owner": 0,
                "path": "/home/oyj/jenkins",
                "state": "absent"
            }
        },
        "failed": false,
        "gid": 1000,
        "group": "oyj",
        "mode": "0755",
        "owner": "oyj",
        "path": "/home/oyj/jenkins",
        "size": 4096,
        "state": "directory",
        "uid": 1000
    }
}
TASK [/home/oyj/jenkins/downloads DIR CHECK IF EXISTS] ***************************
task path: /home/oyj/INSTALL/u18kvk8s/k8s/cicd/jenkinsDockerInst.yaml:42
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["ls", "/home/oyj/jenkins/downloads"], "delta": "0:00:00.004995", "end": "2019-07-11 02:34:07.505288", "msg": "non-zero return code", "rc": 2, "start": "2019-07-11 02:34:07.500293", "stderr": "ls: cannot access '/home/oyj/jenkins/downloads': No such file or directory", "stderr_lines": ["ls: cannot access '/home/oyj/jenkins/downloads': No such file or directory"], "stdout": "", "stdout_lines": []}
...ignoring
TASK [debug] *********************************************************************
task path: /home/oyj/INSTALL/u18kvk8s/k8s/cicd/jenkinsDockerInst.yaml:47
ok: [localhost] => {
    "jhome_chk_down": {
        "changed": true,
        "cmd": ["ls", "/home/oyj/jenkins/downloads"],
        "delta": "0:00:00.004995",
        "end": "2019-07-11 02:34:07.505288",
        "failed": true,
        "msg": "non-zero return code",
        "rc": 2,
        "start": "2019-07-11 02:34:07.500293",
        "stderr": "ls: cannot access '/home/oyj/jenkins/downloads': No such file or directory",
        "stderr_lines": ["ls: cannot access '/home/oyj/jenkins/downloads': No such file or directory"],
        "stdout": "",
        "stdout_lines": []
    }
}
TASK [make dir /home/oyj/jenkins/downloads] **************************************
task path: /home/oyj/INSTALL/u18kvk8s/k8s/cicd/jenkinsDockerInst.yaml:50
changed: [localhost] => {"changed": true, "gid": 1000, "group": "oyj", "mode": "0755", "owner": "oyj", "path": "/home/oyj/jenkins/downloads", "size": 4096, "state": "directory", "uid": 1000}
TASK [Launch jenkins:lts container] **********************************************
task path: /home/oyj/INSTALL/u18kvk8s/k8s/cicd/jenkinsDockerInst.yaml:61
changed: [localhost] => {"changed": true, "cmd": ["docker", "run", "--restart=always", "-d", "-p", "8081:8080", "-p", "50000:50000", "-v", "/home/oyj/jenkins:/var/jenkins_home", "-v", "/home/oyj/jenkins/downloads:/var/jenkins_home/downloads", "-v", "/var/run/docker.sock:/var/run/docker.sock", "jenkinsci/blueocean"], "delta": "0:00:48.974017", "end": "2019-07-11 02:34:57.077516", "rc": 0, "start": "2019-07-11 …", "stdout": "ae0655f7997450f4a5d108bea2b59821c320e6aaff78c491812a3ea679b36ebb", "stdout_lines": ["ae0655f7997450f4a5d108bea2b59821c320e6aaff78c491812a3ea679b36ebb"]}
TASK [debug] *********************************************************************
task path: /home/oyj/INSTALL/u18kvk8s/k8s/cicd/jenkinsDockerInst.yaml:74
ok: [localhost] => {
    "result": {
        "changed": true,
        "diff": {
            "after": {
                "group": 1000,
                "owner": 1000,
                "path": "/home/oyj/jenkins/downloads",
                "state": "directory"
            },
            "before": {
                "group": 0,
                "owner": 0,
                "path": "/home/oyj/jenkins/downloads",
                "state": "absent"
            }
        },
        "failed": false,
        "gid": 1000,
        "group": "oyj",
        "mode": "0755",
        "owner": "oyj",
        "path": "/home/oyj/jenkins/downloads",
        "size": 4096,
        "state": "directory",
        "uid": 1000
    }
}
META: ran handlers
META: ran handlers

PLAY RECAP ***********************************************************************
localhost : ok=9 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=2
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED              STATUS              PORTS                                              NAMES
ae0655f79974   jenkinsci/blueocean   "/sbin/tini -- /usr/…"   About a minute ago   Up About a minute   0.0.0.0:50000->50000/tcp, 0.0.0.0:8081->8080/tcp   inspiring_hamilton
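To finish the setup in the browser (http://localhost:8081), Jenkins asks for the initial admin password; it can be read out of the mounted Jenkins home — the container id below is the one from `docker ps` above:

```shell
# The setup wizard's unlock key lives under JENKINS_HOME/secrets
docker exec ae0655f79974 cat /var/jenkins_home/secrets/initialAdminPassword
# or directly from the bind mount on the host:
cat /home/oyj/jenkins/secrets/initialAdminPassword
```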
4. Create the Spring Boot actuator sample from https://start.spring.io/.
# Jenkins is installed on Ubuntu 18.xx.
oyj@oyj-X555QG:~/Downloads/actuator-sample$ sudo apt install -y openjdk-8-jdk
Reading package lists... Done
oyj@oyj-X555QG:~/Downloads/actuator-sample$ echo "export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/" >> ~/.bashrc
oyj@oyj-X555QG:~/Downloads/actuator-sample$ source ~/.bashrc
oyj@oyj-X555QG:~/Downloads/actuator-sample$ echo $JAVA_HOME
/usr/lib/jvm/java-8-openjdk-amd64/
oyj@oyj-X555QG:~/Downloads/actuator-sample$ ./mvnw spring-boot:run
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.6.RELEASE)
2019-07-11 03:28:03.581  INFO 2204 --- [           main] c.e.a.ActuatorSampleApplication          : Starting ActuatorSampleApplication on oyj-X555QG with PID 2204 (/home/oyj/Downloads/actuator-sample/target/classes started by oyj in /home/oyj/Downloads/actuator-sample)
2019-07-11 03:28:03.588  INFO 2204 --- [           main] c.e.a.ActuatorSampleApplication          : No active profile set, falling back to default profiles: default
2019-07-11 03:28:07.223  INFO 2204 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)
2019-07-11 03:28:07.286  INFO 2204 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2019-07-11 03:28:07.287  INFO 2204 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet engine: [Apache Tomcat/9.0.21]
2019-07-11 03:28:07.489  INFO 2204 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2019-07-11 03:28:07.490  INFO 2204 --- [           main] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 3788 ms
2019-07-11 03:28:08.957  INFO 2204 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2019-07-11 03:28:10.269  INFO 2204 --- [           main] o.s.b.a.e.web.EndpointLinksResolver      : Exposing 2 endpoint(s) beneath base path '/actuator'
2019-07-11 03:28:10.584  INFO 2204 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2019-07-11 03:28:10.591  INFO 2204 --- [           main] c.e.a.ActuatorSampleApplication          : Started ActuatorSampleApplication in 8.429 seconds (JVM running for 48.732)
^C2019-07-11 03:28:14.315  INFO 2204 --- [       Thread-4] o.s.s.concurrent.ThreadPoolTaskExecutor  : Shutting down ExecutorService 'applicationTaskExecutor'
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
oyj@oyj-X555QG:~/Downloads/actuator-sample$ ./mvnw spring-boot:run
[INFO] Scanning for projects...
[INFO]
[INFO] --------------------< com.example:actuator-sample >---------------------
[INFO] Building actuator-sample 0.0.1-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] >>> spring-boot-maven-plugin:2.1.6.RELEASE:run (default-cli) > test-compile @ actuator-sample >>>
[INFO]
[INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ actuator-sample ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 0 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ actuator-sample ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ actuator-sample ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/oyj/Downloads/actuator-sample/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ actuator-sample ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] <<< spring-boot-maven-plugin:2.1.6.RELEASE:run (default-cli) < test-compile @ actuator-sample <<<
[INFO]
[INFO]
[INFO] --- spring-boot-maven-plugin:2.1.6.RELEASE:run (default-cli) @ actuator-sample ---
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.6.RELEASE)
2019-07-11 03:29:17.605  INFO 3235 --- [           main] c.e.a.ActuatorSampleApplication          : Starting
2019-07-11 03:29:25.227  INFO 3235 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2019-07-11 03:29:25.243  INFO 3235 --- [           main] c.e.a.ActuatorSampleApplication          : Started ActuatorSampleApplication in 9.067 seconds (JVM running for 16.803)
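Since the startup log reports two endpoints beneath '/actuator', they can be verified from another shell while the app is running; on Spring Boot 2.x the two exposed by default are health and info, and health typically answers with a small JSON status:

```shell
# Default actuator endpoints on Spring Boot 2.x
curl http://localhost:8080/actuator/health
curl http://localhost:8080/actuator/info
```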
oyj@oyj-X555QG:~/Downloads/actuator-sample/src/main/resources/static$ vi index.html
oyj@oyj-X555QG:~/Downloads/actuator-sample/src/main/resources/static$ cat index.html
<html>
<head></head>
<body>
<h1><center>Hello!...Spring boot!</h1>
</body>
oyj@oyj-X555QG:~/Downloads/test$ ./mvnw spring-boot:run
oyj@oyj-X555QG:~/Desktop$ curl http://localhost:8080/
<html>
<head></head>
<body>
<h1><center>Hello!...Spring boot!</h1>
</body>
oyj@oyj-X555QG:~/Desktop$
oyj@oyj-X555QG:~/Downloads/actuator-sample$ vi Dockerfile
oyj@oyj-X555QG:~/Downloads/actuator-sample$ cat Dockerfile
FROM openjdk:8u111-jdk-alpine
VOLUME /tmp
ADD /target/actuator-sample-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
oyj@oyj-X555QG:~/Downloads/actuator-sample$ sudo apt install maven -y
oyj@oyj-X555QG:~/Downloads/actuator-sample$ mvn package -B
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-utils/3.2.0/plexus-utils-3.2.0.jar (263 kB at 193 kB/s)
[INFO] Building jar: /home/oyj/Downloads/actuator-sample/target/actuator-sample-0.0.1-SNAPSHOT.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:2.1.6.RELEASE:repackage (repackage) @ actuator-sample ---
[INFO] Replacing main artifact with repackaged archive
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 40.141 s
[INFO] Finished at: 2019-07-11T04:10:45+09:00
[INFO] ------------------------------------------------------------------------
oyj@oyj-X555QG:~/Downloads/actuator-sample$ vi Dockerfile
oyj@oyj-X555QG:~/Downloads/actuator-sample$ cat Dockerfile
FROM openjdk:8u111-jdk-alpine
VOLUME /tmp
ADD /target/actuator-sample-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
oyj@oyj-X555QG:~/Downloads/actuator-sample$ docker build . -t 10.1.0.7:3333/actuator-sample:1.0
Sending build context to Docker daemon  18.45MB
Step 1/4 : FROM openjdk:8u111-jdk-alpine
 ---> 3fd9dd82815c
Step 2/4 : VOLUME /tmp
 ---> Running in 17975446653d
Removing intermediate container 17975446653d
 ---> 18991f8a4a94
Step 3/4 : ADD /target/actuator-sample-0.0.1-SNAPSHOT.jar app.jar
 ---> 81d4244cdde7
Step 4/4 : ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
 ---> Running in e65c02483739
Removing intermediate container e65c02483739
 ---> b97a0a9c8d57
Successfully built b97a0a9c8d57
Successfully tagged 10.1.0.7:3333/actuator-sample:1.0
oyj@oyj-X555QG:~/Downloads/actuator-sample$ docker images | grep actu
10.1.0.7:3333/actuator-sample   1.0   b97a0a9c8d57   25 seconds ago   163MB
oyj@oyj-X555QG:~/Downloads/actuator-sample$ docker push 10.1.0.7:3333/actuator-sample:1.0
The push refers to repository [10.1.0.7:3333/actuator-sample]
bbda2ef8a2c8: Pushed
a1e7033f082e: Pushed
78075328e0da: Pushed
9f8566ee5135: Pushed
1.0: digest: sha256:b04b731e42774911d65c0b8e6287fe6c7cff6c914c64776bc36629dc42cd9c2a size: 1159
oyj@oyj-X555QG:~/Downloads/actuator-sample$ vi ~/ansible/hosts
oyj@oyj-X555QG:~/Downloads/actuator-sample$ ansible 10.1.0.7 -m ping
10.1.0.7 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
oyj@oyj-X555QG:~/Downloads/actuator-sample$ cat ~/ansible/hosts
[kuber]
10.1.0.2
10.1.0.3
10.1.0.4
10.1.0.5
10.1.0.7
[kuber:vars]
ansible_private_key_file=~/INSTALL/u18kvk8s/k8s/id_rsa
ansible_ssh_user=vagrant
# My registry server is the dockeregiserver I installed earlier; I could confirm that the docker image was indeed uploaded to the registry server.
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s$ vagrant ssh dockeregiserver
Last login: Wed Jul 10 19:19:08 2019 from 10.1.0.1
[vagrant@dockeregiserver ~]$ ls
docker_repo
[vagrant@dockeregiserver ~]$ ls /regi/
docker
[vagrant@dockeregiserver ~]$ cd /regi/docker/registry/v2/
[vagrant@dockeregiserver v2]$ ls
blobs  repositories
[vagrant@dockeregiserver v2]$ cd repositories/
[vagrant@dockeregiserver repositories]$ ls
actuator-sample  oyjtomcat9010alpinelinux38
4. Java Spring Boot program creation with Jenkins.
# Create a GitHub repo.
oyj@oyj-X555QG:~/actuator-sample$ git commit -m "first commit actuator-sample"
[master (root-commit) ae8ca6a] first commit actuator-sample
13 files changed, 680 insertions(+)
create mode 100644 .gitignore
create mode 100644 .mvn/wrapper/MavenWrapperDownloader.java
create mode 100644 .mvn/wrapper/maven-wrapper.jar
create mode 100644 .mvn/wrapper/maven-wrapper.properties
create mode 100644 Dockerfile
create mode 100644 README
create mode 100755 mvnw
create mode 100644 mvnw.cmd
create mode 100644 pom.xml
create mode 100644 src/main/java/com/example/actuatorsample/ActuatorSampleApplication.java
create mode 100644 src/main/resources/application.properties
create mode 100644 src/main/resources/static/index.html
create mode 100644 src/test/java/com/example/actuatorsample/ActuatorSampleApplicationTests.java
oyj@oyj-X555QG:~/actuator-sample$ git remote add origin git@github.com:ohyoungjooung2/actuator-sample.git
oyj@oyj-X555QG:~/actuator-sample$ git push -u origin master
Counting objects: 30, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (19/19), done.
Writing objects: 100% (30/30), 50.03 KiB | 10.00 MiB/s, done.
Total 30 (delta 0), reused 0 (delta 0)
To github.com:ohyoungjooung2/actuator-sample.git
 * [new branch]      master -> master
Branch 'master' set up to track remote branch 'master' from 'origin'.
# Jenkins pipeline setting.
Click Apply, then Save!
This Jenkins runs inside a Docker container but needs to run docker itself, so the jenkins user must be a member of the group that owns the Docker socket.
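Whether the jenkins user already belongs to the right group can be checked with a small sketch like the following (`in_group` is my own helper, assuming the usual /etc/group format; it is not from the original post):

```shell
# Hypothetical check: is user $1 listed as a member of group $2
# in a group file ($3, defaulting to /etc/group)?
in_group() {
  local user=$1 group=$2 file=${3:-/etc/group}
  # field 4 of the group entry is the comma-separated member list
  grep -E "^${group}:" "$file" | cut -d: -f4 | tr ',' '\n' | grep -qx "$user"
}
```

For example, `in_group jenkins docker && echo ok` inside the container.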
$ docker exec -it -u root ae0655f79974 bash
bash-4.4# ls
bin  dev  etc  home  lib  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
bash-4.4# usermod -a -G docker jenkins
bash: usermod: command not found
bash-4.4# usermod
bash: usermod: command not found
bash-4.4# which ls
/bin/ls
bash-4.4# vi /etc/group
bash-4.4# vi /etc/group # docker:888:jenkins,ping
bash-4.4# exit
# As you can see, the Docker socket is owned by group ping, so the jenkins user must be in the ping group. (The image is Alpine-based, which is why usermod is missing; editing /etc/group directly works here.)
bash-4.4# ls -l /var/run/docker.sock
srw-rw---- 1 root ping 0 Jul 10 16:40 /var/run/docker.sock
bash-4.4# cat /etc/group | grep ping
ping:x:999:jenkins
# Restart Jenkins.
oyj@oyj-X555QG:~/jenkins/users/oyj_766991882811301787$ docker restart ae0655f79974
5. Deploy into the k8s cluster with 'Jenkinsfile' (Jenkins pipeline).
1) Install the mvn tool on the Jenkins server: Manage Jenkins -> Global Tool Configuration section.
2) Create the Jenkinsfile pipeline and deliver.sh (docker build and push onto the private registry server).
oyj@oyj-X555QG:~/actuator-sample$ cat Jenkinsfile
pipeline {
    agent any
    tools {
        maven 'maven 3.6'
    }
    options {
        skipStagesAfterUnstable()
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
            post {
                always {
                    junit 'target/surefire-reports/*.xml'
                }
            }
        }
    }
    post {
        always {
            sh 'chmod 755 ./deliver.sh'
            sh './deliver.sh'
        }
    }
}
oyj@oyj-X555QG:~/actuator-sample$ cat deliver.sh
#!/usr/bin/env bash
echo 'TestingTesting'
docker build . -t 10.1.0.7:3333/actuator:1.1; docker images | grep actuator
echo 'Pushing'
docker push 10.1.0.7:3333/actuator:1.1; echo $?; echo test
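Note that deliver.sh only echoes `$?` after the push and carries on regardless. If you would rather have the Jenkins stage fail fast on a bad push, a generic wrapper (my own sketch, not part of the post's script) could guard each step:

```shell
# Hypothetical fail-fast wrapper: run a command, report on stderr and
# return non-zero if it fails, so the pipeline stage goes red at once.
run_or_die() {
  if ! "$@"; then
    echo "command failed: $*" >&2
    return 1
  fi
}
```

For example: `run_or_die docker push 10.1.0.7:3333/actuator:1.1 || exit 1`.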
oyj@oyj-X555QG:~/actuator-sample$ git commit -a -m "Jenkinsfile renewed"
[master 3269bf6] Jenkinsfile renewed
1 file changed, 11 insertions(+), 11 deletions(-)
oyj@oyj-X555QG:~/actuator-sample$ git push
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 360 bytes | 360.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To github.com:ohyoungjooung2/actuator-sample.git
   0d4444d..3269bf6  master -> master
Click Build Now! (63 is the build number.)
The console output should end with lines like the following:
Sending build context to Docker daemon 69.44MB
Step 1/4 : FROM openjdk:8u111-jdk-alpine
---> 3fd9dd82815c
Step 2/4 : VOLUME /tmp
---> Running in 8da8308ed9ec
Removing intermediate container 8da8308ed9ec
---> 5f00001cec33
Step 3/4 : ADD /target/actuator-sample-0.0.1-SNAPSHOT.jar app.jar
---> e0538a358456
Step 4/4 : ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
---> Running in 9416d64d1c7b
Removing intermediate container 9416d64d1c7b
---> 76d4a6f4df34
Successfully built 76d4a6f4df34
Successfully tagged 10.1.0.7:3333/actuator:1.1
10.1.0.7:3333/actuator 1.1 76d4a6f4df34 Less than a second ago 163MB
Pushing
The push refers to repository [10.1.0.7:3333/actuator]
f74a869a9bca: Preparing
a1e7033f082e: Preparing
78075328e0da: Preparing
9f8566ee5135: Preparing
78075328e0da: Layer already exists
a1e7033f082e: Layer already exists
9f8566ee5135: Layer already exists
f74a869a9bca: Pushed
1.1: digest: sha256:86c86fc8ffa5c214b469b405b90de94aefa442b10d8dd15c1593bc63e706bf12 size: 1159
0
test
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
The build and push were successful!
# Docker pull test.
[vagrant@kubeworker1 ~]$ docker pull 10.1.0.7:3333/actuator:1.1
1.1: Pulling from actuator
53478ce18e19: Pull complete
d1c225ed7c34: Pull complete
887f300163b6: Pull complete
fbfbf8ea8bec: Pull complete
Digest: sha256:86c86fc8ffa5c214b469b405b90de94aefa442b10d8dd15c1593bc63e706bf12
Status: Downloaded newer image for 10.1.0.7:3333/actuator:1.1
# Deploy into the k8s cluster:
1) Generate secrets.
2) Write deployment.yaml for the actuator.
3) Create a service (NodePort).
1) Generating secrets
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ alias | grep kb
alias kb='kubectl' # kb is the same as kubectl, via a shell alias
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ kb create secret docker-registry regcred --docker-server=10.1.0.7:3333 --docker-username=tester --docker-password=StrongPassA
secret/regcred created
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ kubectl describe secrets/regcred
Name:         regcred
Namespace:    default
Labels:       <none>
Annotations:  <none>
Type:  kubernetes.io/dockerconfigjson
Data
====
.dockerconfigjson:  108 bytes
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ kubectl get secret regcred --output=yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyIxMC4xLjAuNzozMzMzIjp7InVzZXJuYW1lIjoidGVzdGVyIiwicGFzc3dvcmQiOiJTdHJvbmdQYXNzQSIsImF1dGgiOiJkR1Z6ZEdWeU9sTjBjbTl1WjFCaGMzTkIifX19
kind: Secret
metadata:
  creationTimestamp: "2019-07-11T16:47:51Z"
  name: regcred
  namespace: default
  resourceVersion: "542416"
  selfLink: /api/v1/namespaces/default/secrets/regcred
  uid: 39a49291-68ec-4996-b05d-b05f6fc0325c
type: kubernetes.io/dockerconfigjson
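The `.dockerconfigjson` field is just base64-encoded JSON; decoding it shows exactly what the secret stores (which is also why such secrets should be handled as sensitive data):

```shell
# Decode the .dockerconfigjson payload from the secret above.
echo 'eyJhdXRocyI6eyIxMC4xLjAuNzozMzMzIjp7InVzZXJuYW1lIjoidGVzdGVyIiwicGFzc3dvcmQiOiJTdHJvbmdQYXNzQSIsImF1dGgiOiJkR1Z6ZEdWeU9sTjBjbTl1WjFCaGMzTkIifX19' | base64 -d
# → {"auths":{"10.1.0.7:3333":{"username":"tester","password":"StrongPassA","auth":"dGVzdGVyOlN0cm9uZ1Bhc3NB"}}}
```

The inner `auth` value is itself base64 of `tester:StrongPassA`.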
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ vi actuator-dp.yaml
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ cat actuator-dp.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: actuator-sample
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: actuator-sample
    spec:
      containers:
      - name: actuator-sample
        image: 10.1.0.7:3333/actuator:1.1
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: regcred
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ kb create -f actuator-dp.yaml
deployment.extensions/actuator-sample created
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ kb get po
NAME                                      READY   STATUS              RESTARTS   AGE
actuator-sample-76b585f7f4-z5lcn          0/1     ContainerCreating   0          3s
mariadb-master-0                          1/1     Running             29         6d22h
nfs-client-provisioner-78665db465-h98vr   1/1     Running             12         3d9h
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ kb describe po actuator-sample-76b585f7f4-z5lcn
Name:      actuator-sample-76b585f7f4-z5lcn
Namespace: default
//// ...Omitted... ////
Events:
  Type    Reason     Age  From                  Message
  ----    ------     ---  ----                  -------
  Normal  Scheduled  11s  default-scheduler     Successfully assigned default/actuator-sample-76b585f7f4-z5lcn to kubeworker2
  Normal  Pulling    10s  kubelet, kubeworker2  Pulling image "10.1.0.7:3333/actuator:1.1"
  Normal  Pulled     1s   kubelet, kubeworker2  Successfully pulled image "10.1.0.7:3333/actuator:1.1"
  Normal  Created    1s   kubelet, kubeworker2  Created container actuator-sample
  Normal  Started    1s   kubelet, kubeworker2  Started container actuator-sample
# Good, regcred is working.
# Create a service to connect to the actuator.
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ cat actuator-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: actuator-sample
  labels:
    app: actuator-sample
spec:
  selector:
    app: actuator-sample
  type: NodePort
  ports:
  - port: 8073
    nodePort: 32338
    targetPort: 8080
    protocol: TCP
    name: http
# Traffic hitting nodePort 32338 on any node is forwarded to the pods' targetPort 8080.
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ kb create -f actuator-svc.yaml
service/actuator-sample created
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ telnet 10.1.0.3 32338
Trying 10.1.0.3...
Connected to 10.1.0.3.
Escape character is '^]'.
# NodePort test with browsers (10.1.0.2, 10.1.0.3, ... all nodes OK).
# Everything looks fine. Now delete the svc, deployment, and pod; they will be applied and deployed by Jenkins instead.
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s/cicd$ kb delete -f actuator-dp.yaml -f actuator-svc.yaml
deployment.extensions "actuator-sample" deleted
service "actuator-sample" deleted
# 10.1.0.2 is the kubemaster. After uploading the k8s .yaml files there, I will execute kubectl.
oyj@oyj-X555QG:~/jenkins/workspace$ cp /home/oyj/INSTALL/u18kvk8s/k8s/id_rsa ./
oyj@oyj-X555QG:~/jenkins/workspace$ ssh -i id_rsa vagrant@10.1.0.2
Last login: Thu Jul 11 16:53:01 2019 from 10.1.0.1
[vagrant@kubemaster ~]$ ls
aggre-ca.crt  aggre-client.crt  aggre-client.key  mariadb-master  mariadb-master.tar.gz  mariadb-slave  wordpress
[vagrant@kubemaster ~]$ exit
logout
Connection to 10.1.0.2 closed.
oyj@oyj-X555QG:~/jenkins/workspace$ ls
actuator-sample  actuator-sample@2  actuator-sample@2@tmp  actuator-sample@tmp  id_rsa
oyj@oyj-X555QG:~/jenkins/workspace$ mv id_rsa actuator-sample/
oyj@oyj-X555QG:~/actuator-sample$ ls
actuator-dp.yaml  actuator-svc.yaml  deliver.sh  Dockerfile  HELP.md  Jenkinsfile  mvnw  mvnw.cmd  pom.xml  README  src  target
# In the Jenkins Docker container, we must be able to execute the 'kubectl' command.
# My host laptop can already run kubectl get po to see the k8s pods:
oyj@oyj-X555QG:~$ kb get po
NAME                                      READY   STATUS    RESTARTS   AGE
mariadb-master-0                          1/1     Running   29         6d23h
nfs-client-provisioner-78665db465-h98vr   1/1     Running   12         3d10h
oyj@oyj-X555QG:~$
# So I need to execute kubectl from the container to deploy the actuator-sample app onto the k8s cluster.
# To do that, I run sshd on my laptop, accessible only from the 172.17.0.2 container.
# Check the IP on the jenkins Docker container; my host IP is 172.17.0.1.
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s$ docker exec -it -u root ae0655f79974 bash
bash-4.4# ifconfig
eth0  Link encap:Ethernet  HWaddr 02:42:AC:11:00:02
      inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
# So a setup like the one below should work.
root@oyj-X555QG:/etc/ssh# apt install openssh-server -y
root@oyj-X555QG:/etc/ssh# vi sshd_config
# $OpenBSD: sshd_config,v 1.101 2017/03/14 07:19:07 djm Exp $

# This is the sshd server system-wide configuration file. See
# sshd_config(5) for more information.

# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin

# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented. Uncommented options override the
# default value.

Port 22
#AddressFamily any
ListenAddress 172.17.0.1
#ListenAddress ::
root@oyj-X555QG:/etc/ssh# systemctl restart sshd
..
root@oyj-X555QG:/etc/ssh# netstat -tpln | grep sshd
tcp  0  0  172.17.0.1:22  0.0.0.0:*  LISTEN  18000/sshd
# I connect to the master k8s server using the key below.
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s$ pwd
/home/oyj/INSTALL/u18kvk8s/k8s
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s$ ls id_rsa
id_rsa
# I am going to use port forwarding like below.
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s$ ssh -i id_rsa -L 172.17.0.1:9999:10.1.0.2:22 vagrant@10.1.0.2
# Now I must be able to log in to the k8s master server from the container.
oyj@oyj-X555QG:~/INSTALL/u18kvk8s/k8s$ cp id_rsa /home/oyj/jenkins/
bash-4.4$ ls -l id_rsa
-rw------- 1 jenkins jenkins 1679 Jul 11 19:21 id_rsa
bash-4.4$ ssh -i id_rsa vagrant@172.17.0.1 -p 9999
Last login: Thu Jul 11 19:19:37 2019 from 10.1.0.2
[vagrant@kubemaster ~]$ exit
logout
Connection to 172.17.0.1 closed.
bash-4.4$ ssh -i id_rsa vagrant@172.17.0.1 -p 9999 'kubectl get nodes'
NAME          STATUS   ROLES    AGE   VERSION
kubemaster    Ready    master   12d   v1.15.0
kubeworker1   Ready    <none>   12d   v1.15.0
kubeworker2   Ready    <none>   12d   v1.15.0
kubeworker3   Ready    <none>   9d    v1.15.0
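Since every remote call goes through the same tunnel, it could be wrapped once. This `rkubectl` helper is my own sketch (the transport is injectable via a hypothetical `KUBECTL_RUNNER` variable purely so it can be exercised without ssh):

```shell
# Hypothetical wrapper: run kubectl on the master through the local
# port-forward; KUBECTL_RUNNER lets tests substitute the transport.
rkubectl() {
  local runner=${KUBECTL_RUNNER:-"ssh -i $HOME/id_rsa -p 9999 vagrant@172.17.0.1"}
  $runner "/usr/bin/kubectl $*"
}
```

With the tunnel up, `rkubectl get nodes` would behave like the ssh one-liner above.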
### Now I can adapt this setup into the Jenkins pipeline (Jenkinsfile and shell script). ###
oyj@oyj-X555QG:~/actuator-sample$ cat Jenkinsfile
pipeline {
    agent any
    tools {
        maven 'maven 3.6'
    }
    options {
        skipStagesAfterUnstable()
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
        stage('Test') {
            steps {
                sh 'mvn test'
            }
            post {
                always {
                    junit 'target/surefire-reports/*.xml'
                }
            }
        }
    }
    post {
        always {
            sh 'chmod 755 ./deliver.sh'
            sh './deliver.sh'
        }
    }
}
oyj@oyj-X555QG:~/actuator-sample$ cat deliver.sh
#!/usr/bin/env bash
echo 'TestingTesting'
docker build . -t 10.1.0.7:3333/actuator:1.1; docker images | grep actuator
echo 'Pushing'
docker push 10.1.0.7:3333/actuator:1.1; echo $?; echo test
echo "Deployment of actuator sample"
if [[ -e actuator-dp.yaml ]]
then
  scp -i $HOME/id_rsa -P 9999 actuator-dp.yaml vagrant@172.17.0.1:/home/vagrant/
  ssh -i $HOME/id_rsa -p 9999 vagrant@172.17.0.1 '/usr/bin/kubectl create -f actuator-dp.yaml'
else
  echo "actuator-dp.yaml not exists"
fi
echo "Deployment of actuator sample"
if [[ -e actuator-svc.yaml ]]
then
  scp -i $HOME/id_rsa -P 9999 actuator-svc.yaml vagrant@172.17.0.1:/home/vagrant/
  ssh -i $HOME/id_rsa -p 9999 vagrant@172.17.0.1 '/usr/bin/kubectl create -f actuator-svc.yaml'
else
  echo "actuator-svc.yaml not exists"
fi
CHK(){
  if [[ $? -eq 0 ]]
  then
    echo "Success!"
    exit 0
  fi
}
while true
do
  ssh -i $HOME/id_rsa -p 9999 vagrant@172.17.0.1 'curl http://10.1.0.3:32338'
  CHK
  ssh -i $HOME/id_rsa -p 9999 vagrant@172.17.0.1 'curl http://10.1.0.4:32338'
  CHK
  ssh -i $HOME/id_rsa -p 9999 vagrant@172.17.0.1 'curl http://10.1.0.4:32338'
  CHK
done
ssh -i $HOME/id_rsa -p 9999 vagrant@172.17.0.1 'curl http://10.1.0.3:32338'
ssh -i $HOME/id_rsa -p 9999 vagrant@172.17.0.1 'curl http://10.1.0.4:32338'
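The `while`/`CHK` loop in deliver.sh keeps probing the NodePort until one curl succeeds, then exits the whole script. The same idea can be written with the probe command injected and a bounded number of tries, so it terminates even when nothing ever answers (function and names here are mine, a sketch rather than the post's script):

```shell
# Hypothetical bounded retry: run the probe up to $1 times; print
# "Success!" and return 0 on the first success, otherwise return 1.
probe_until_ok() {
  local tries=$1 i
  shift
  for ((i = 1; i <= tries; i++)); do
    if "$@"; then
      echo "Success!"
      return 0
    fi
  done
  return 1
}
```

Mirroring deliver.sh, it would be called as `probe_until_ok 10 ssh -i "$HOME/id_rsa" -p 9999 vagrant@172.17.0.1 'curl http://10.1.0.3:32338'`.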
oyj@oyj-X555QG:~/actuator-sample$ git commit -a -m "After portfowrding modifi..onto deliver.sh"
[master 59f013d] After portfowrding modifi..onto deliver.sh
1 file changed, 4 insertions(+), 6 deletions(-)
oyj@oyj-X555QG:~/actuator-sample$ git push
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 378 bytes | 378.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To github.com:ohyoungjooung2/actuator-sample.git
   018ff57..59f013d  master -> master
<head></head>
<body>
<h1><center>Hello!...Spring boot!</h1>
</body>
100 75 100 75 0 0 91 0 --:--:-- --:--:-- --:--:-- 91
100 75 100 75 0 0 91 0 --:--:-- --:--:-- --:--:-- 91
Success!
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
Conclusion)
In this article, I showed how to deploy a Java Maven-based (Spring Boot) app into k8s.
If I had enough resources on my k8s cluster, I could have managed the Jenkins server inside the cluster itself. Since I don't, I used my local PC's power to run the Jenkins server. If I had installed Jenkins directly instead of Docker-based, deployment could have been easier; still, it let me revive my port-forwarding skills in this Docker-based Jenkins situation.
Docker-based Jenkins can be somewhat cumbersome, I think. But it might also be more secure to use Docker-based Jenkins and port forwarding to deploy apps into k8s.
Yeah, it is a double-edged sword, as always.
Thanks for reading!