Configure a Jenkins pipeline on Kubernetes with GitHub and Slack

Prerequisites

  • This setup uses a free IBM Cloud account.

    Install the IBM Cloud command-line interface (CLI) on your workstation.

  • On a local Mac, use Docker Desktop.

    Also create a Docker Hub account.

  • Install the Kubernetes CLI (kubectl) on your Mac
  • Install a Git Client.

    Sign up for a GitHub account.

  • Create a Slack account.

Key Procedure

  • Set the KUBECONFIG environment variable to point at the cloud cluster.

  • Verify that you can connect to the cluster.
    kubectl version --short

    Client Version: v1.16.1
    Server Version: v1.14.9+IKS

  • Persist jenkins_home. Because this is a single-node cluster, the PV type chosen here is hostPath.

    kubectl apply -f jenkins-pv.yaml
    kubectl apply -f jenkins-pvc.yaml
    kubectl apply -f jenkins-deployment.yaml
    kubectl apply -f jenkins-service.yaml
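
    As a rough sketch of what jenkins-pv.yaml and jenkins-pvc.yaml might look like for a single-node hostPath setup (the names, storage size, and host path below are assumptions for illustration, not taken from the actual files):

    ```yaml
    # jenkins-pv.yaml (hypothetical sketch for a single-node cluster)
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: jenkins-pv
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce
      hostPath:
        path: /data/jenkins_home   # assumed path on the node
    ---
    # jenkins-pvc.yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: jenkins-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
    ```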

  • Get the Jenkins dashboard service address
    export EXTERNAL_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')

    export NODE_PORT=30100

    echo $EXTERNAL_IP:$NODE_PORT

    184.172.229.55:30100
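
    The fixed NODE_PORT above implies a NodePort service; a minimal jenkins-service.yaml consistent with port 30100 might look like this (the selector label and ports are assumptions):

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: jenkins
    spec:
      type: NodePort
      selector:
        app: jenkins         # assumed pod label
      ports:
        - port: 8080         # Jenkins UI port inside the cluster
          targetPort: 8080
          nodePort: 30100    # matches NODE_PORT above
    ```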

  • Get the default Jenkins admin password

    kubectl logs $(kubectl get pods --selector=app=jenkins -o=jsonpath='{.items[0].metadata.name}') jenkins

  • Configure credentials: github, dockerhub, kubeconfig, slack-notification

  • Install plugins: Slack Notification and Kubernetes CLI

  • Configure Jenkins Slack Notification: mainly fill in Workspace and Credential. Default channel / member id can be left empty; it can be specified in the Jenkinsfile instead, for example:

    success { slackSend(channel: "#ok", message: "pluckhuang/podinfo:${env.BUILD_NUMBER} Pipeline is successfully completed.")}

Reference and resource

k8s in action summary ~3

Chapter 10
Key points:

  • Give replicated pods individual storage
  • Provide a stable identity to a pod
  • Create a StatefulSet and a corresponding headless governing Service
  • Scale and update a StatefulSet
  • Discover other members of the StatefulSet through DNS
  • Connect to other members through their host names
  • Forcibly delete stateful pods
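
The points above can be sketched as a minimal StatefulSet with its headless governing Service (names, image, and sizes here are illustrative, loosely following the book's kubia examples):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  clusterIP: None        # headless: DNS resolves to the pod IPs directly
  selector:
    app: kubia
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kubia
spec:
  serviceName: kubia     # the governing headless Service
  replicas: 2
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
        - name: kubia
          image: luksa/kubia-pet
          ports:
            - containerPort: 8080
  volumeClaimTemplates:  # gives each replicated pod its own storage
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Mi
```

Each pod gets a stable name (kubia-0, kubia-1) resolvable through the headless Service, e.g. kubia-0.kubia.default.svc.cluster.local.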

···

  • What does "stateful" mean? It means having a stable identity: name, IP, storage.
  • DNS SRV records: similar in spirit to load balancing, except that here the (headless) service maps each pod name to its own IP.

Chapter 14:
QoS classes:

  • BestEffort
  • Burstable
  • Guaranteed

QoS class is short for Quality of Service class. It determines how pods are treated under resource pressure; for example, BestEffort pods may get no CPU time at all and will be the first ones killed when memory needs to be freed for other pods.

Resource requests and limits determine a pod's QoS class, and the QoS class determines which pods get killed first.
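
A hedged sketch of how the classes come about (the numbers are illustrative): the requests and limits set on each container decide the pod's QoS class.

```yaml
# Container resources fragment (illustrative values)
# Guaranteed: requests == limits for every container
resources:
  requests:
    cpu: 200m
    memory: 100Mi
  limits:
    cpu: 200m
    memory: 100Mi
# Burstable: requests set lower than limits (or only requests set)
# BestEffort: no requests or limits at all
```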


Chapter 18

This chapter mainly covers how to use Helm. Helm is similar to yum or apt, except that it is a package manager built on top of Kubernetes.

  • A small issue encountered while trying out Helm:

    helm install happy-panda stable/mariadb
    helm uninstall happy-panda
    Running helm install happy-panda stable/mariadb again fails, because the uninstall does not delete the related PVC.

k8s deployment vs statefulset

k8s in action summary ~2

Chapter 7

There are several ways to pass parameters to containers: parameterizing CMD, ENV environment variables, and ConfigMaps.

With kubectl create configmap --from-file, the source can be a single file, a JSON file, an entire directory, or literal values.
Using a whole directory for one application's configuration is, in my opinion, more flexible.

Passing a ConfigMap to a container:
  • As environment variables: envFrom imports all entries as environment variables
  • As command-line arguments: args
  • As a configMap volume: the volume loads the ConfigMap entries, which are then mounted into the container. When a ConfigMap is mounted as a directory, updates to the ConfigMap are synced into the container; individually mounted files are not. A workaround is to mount the directory and then symlink the files.
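
The configMap volume option can be sketched as follows (the image, mount path, and ConfigMap name are hypothetical):

```yaml
# Pod spec fragment: mounting a ConfigMap as a volume
spec:
  containers:
    - name: app
      image: example/app        # assumed image
      volumeMounts:
        - name: config
          mountPath: /etc/app   # whole-directory mount: updates are synced
  volumes:
    - name: config
      configMap:
        name: app-config        # assumed ConfigMap name
```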

Chapter 8: obtaining environment information through the API

Chapter 9: Deployments

The focus is rolling updates.

ReplicationController, obsolete: kubectl rolling-update kubia-v1 kubia-v2 --image=luksa/kubia:v2. It was deprecated mainly because kubectl performs the update client-side and imperatively (not declaratively), so an interruption can easily leave things stuck in an intermediate state.

StrategyType: update strategies
  • The Recreate strategy causes all old pods to be deleted before the new ones are created. Use this strategy when your application doesn’t support running multiple versions in parallel and requires the old version to be stopped completely before the new one is started. This strategy does involve a short period of time when your app becomes completely unavailable.

  • The RollingUpdate strategy, on the other hand, removes old pods one by one, while adding new ones at the same time, keeping the application available throughout the whole process, and ensuring there’s no drop in its capacity to handle requests. This is the default strategy. The upper and lower limits for the number of pods above or below the desired replica count are configurable. You should use this strategy only when your app can handle running both the old and new version at the same time.

Command summary:
kubectl set image deployment kubia nodejs=pluckhuang/kubia:v2
kubectl rollout status deployment kubia
kubectl rollout undo deployment kubia
kubectl rollout history deployment kubia
kubectl patch deployment kubia -p '{"spec": {"minReadySeconds": 10}}'
RollingUpdate strategy properties:
  • maxSurge
  • maxUnavailable
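
The two properties can be sketched in a Deployment fragment (the values below are illustrative, not from any file in this setup):

```yaml
# Deployment spec fragment
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 pod above the desired replica count
      maxUnavailable: 0    # never drop below the desired replica count
```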

A canary release is a technique for minimizing the risk of rolling out a bad version of an application and it affecting all your users. Instead of rolling out the new version to everyone, you replace only one or a small number of old pods with new ones. This way only a small number of users will initially hit the new version.

Deployment minReadySeconds property:
  • Defaults to 0 (the Pod is considered available as soon as it is ready).
  • A pod is ready when the readiness probes of all its containers return success.
  • In other words, after a pod becomes ready, it is only considered available once minReadySeconds has elapsed.
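
A sketch of how minReadySeconds combines with a readiness probe (image, path, and port are hypothetical):

```yaml
# Deployment spec fragment (illustrative)
spec:
  minReadySeconds: 10          # wait 10s after ready before counting as available
  template:
    spec:
      containers:
        - name: app
          image: example/app   # assumed image
          readinessProbe:
            httpGet:
              path: /
              port: 8080
```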

k8s in action summary ~1

Chapter 3:

  • label: can be used to tag pods and nodes
  • namespace: the default namespace is default

Chapter 4: replication and other controllers

livenessProbe:

  • httpGet: after a number of failed probes the container is restarted, keeping the service available.

selector

  • nodeSelector
  • matchLabels
  • matchExpressions

resource

  • ReplicationController
  • ReplicaSet: preferred over ReplicationController, mainly because of its more expressive label matching:
    matchLabels
    matchExpressions
  • DaemonSet:DaemonSets are meant to run system services, which usually need to run even on unschedulable nodes.
  • Job
    completions
    parallelism
    activeDeadlineSeconds
  • CronJob: scheduled jobs
    startingDeadlineSeconds
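
The Job fields listed above fit together roughly like this (the name, image, and values are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job              # hypothetical name
spec:
  completions: 5               # run 5 pods to completion in total
  parallelism: 2               # at most 2 running at a time
  activeDeadlineSeconds: 600   # fail the job if it runs longer than this
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: main
          image: example/batch # assumed image
```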

Chapter 5

Service resources:

  • NodePort, LoadBalancer, Ingress // an Ingress configuration acts roughly like a gateway
  • types of readiness probes :Unlike liveness probes, if a container fails the readiness check, it won’t be killed or restarted. This is an important distinction between liveness and readiness probes.
  • headless service: the service name resolves to the individual pod endpoints
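
A minimal Ingress sketch for the gateway role mentioned above (host, names, and ports are hypothetical; the modern networking.k8s.io/v1 API is used here, whereas the book predates it):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # hypothetical
spec:
  rules:
    - host: app.example.com    # assumed host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app      # assumed Service name
                port:
                  number: 80
```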

Chapter 6

volume:

  • emptyDir

    Because the volume’s lifetime is tied to that of the pod, the volume’s contents are lost when the pod is deleted.

  • gitRepo/hostPath

    both the gitRepo and emptyDir volumes’ contents get deleted when a pod is torn down, whereas a hostPath volume’s contents don’t. If a pod is deleted and the next pod uses a hostPath volume pointing to the same path on the host, the new pod will see whatever was left behind by the previous pod, but only if it’s scheduled to the same node as the first pod.

  • PersistentVolume/PersistentVolumeClaim(pv/pvc):

The pv/pvc workflow:

  • Create the persistent storage resource (the PersistentVolume)
  • Create a claim (PersistentVolumeClaim) requesting that resource
  • The user creates a pod that declares the claim; Kubernetes handles managing and binding the resources

PersistentVolumes, like cluster Nodes, don’t belong to any namespace, unlike pods and PersistentVolumeClaims.

pv -> persistentVolumeReclaimPolicy, what happens after the corresponding claim is deleted:

  • Retain: the PV cannot be claimed again unless it is manually deleted and recreated. The files in the volume are not wiped.
  • Recycle: the PV can be claimed again, but the files in the volume are wiped.
  • Delete: the PV is deleted automatically along with the claim. The files in the volume are not wiped.
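
A sketch of where the reclaim policy lives (name, size, and path are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv             # hypothetical
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain   # or Recycle / Delete
  hostPath:
    path: /tmp/example-pv      # assumed path
```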

Build your own MySQL Docker image

Dockerfile

FROM mysql:5.7.21

ENV \
MYSQL_ROOT_PASSWORD=huangjb \
MYSQL_DATABASE=mgrass \
MYSQL_USER=mgrass \
MYSQL_PASSWORD=mgrass

COPY fordocker.sql /docker-entrypoint-initdb.d/fordocker.sql

EXPOSE 3306/tcp

fordocker.sql

CREATE TABLE t_warehourse (
id int(11) NOT NULL AUTO_INCREMENT,
name varchar(16) NOT NULL,
remark varchar(64) NOT NULL,
enabled tinyint(1) NOT NULL,
created datetime(6) NOT NULL,
updated datetime(6) NOT NULL,
PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT character set UTF8mb4 collate utf8mb4_bin;

run.sh

docker run --name testmysql -v "$PWD/data":/var/lib/mysql -p 3306:3306 -d mymysql

Points:

  • MySQL stores its data in /var/lib/mysql, which is usually bind-mounted for persistence, so the data survives even if the container exits.
  • Inside the container, logs go to standard output; view them with docker logs testmysql.

Diagnosing ImagePullBackOff or ErrImagePull when running under kubectl.

kubia-replica.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: kubia1
spec:
  replicas: 1
  selector:
    app: kubia1
  template:
    metadata:
      labels:
        app: kubia1
    spec:
      containers:
      - name: kubia
        # image: /pluckhuang/kubia:0.0.1   # error here: missing registry prefix
        image: docker.io/pluckhuang/kubia:0.0.1
        ports:
        - containerPort: 8080
          protocol: TCP
      imagePullSecrets:    # and add this
      - name: dockerhub-secret