- Deployments and ReplicationControllers are meant for stateless usage and are rather lightweight. StatefulSets are used when state has to be persisted. ... So if your application is stateful or if you want to deploy stateful storage on top of Kubernetes use a StatefulSet.
- So for everyday use, Deployments and StatefulSets are the only controllers you need.
- StatefulSets also support rolling upgrades: see "Exploring Upgrade Strategies for Stateful Sets in Kubernetes".
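A minimal sketch of a StatefulSet with a rolling update strategy (the names and image are hypothetical):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kubia-stateful
spec:
  serviceName: kubia          # headless service governing the set
  replicas: 3
  updateStrategy:
    type: RollingUpdate       # the default; OnDelete is the alternative
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia:v2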
k8s in action summary ~2
Chapter 7
There are several ways to pass parameters to a container: parameterizing CMD, ENV environment variables, and ConfigMaps.
kubectl create configmap --from-file: the source can be a single file, a JSON file, a whole directory, or literal values.
Using a whole directory for one application's configuration is, in my opinion, the most flexible option.
Passing a ConfigMap to a container (see the sketch after this list):
- As environment variables: envFrom pulls in all entries as environment variables
- As command-line arguments: via args
- As a configMap volume: load the ConfigMap into a volume, then mount the config into the container. Changes to a ConfigMap mounted as a directory are synced into the container; changes to an individually mounted file are not. The workaround is to mount the directory and then symlink the files.
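A minimal sketch of the directory-to-ConfigMap-volume flow (names and paths are hypothetical):

kubectl create configmap app-config --from-file=config-dir/

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/app-config   # whole-directory mount: ConfigMap updates are synced
  volumes:
  - name: config
    configMap:
      name: app-config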
Chapter 8: accessing pod and environment metadata from the application via the API (the Downward API)
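A small sketch of exposing pod metadata through environment variables (the variable names are hypothetical):

env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: CPU_REQUEST
  valueFrom:
    resourceFieldRef:
      resource: requests.cpu
      divisor: 1m              # report the CPU request in millicores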
Chapter 9: Deployments
The focus is rolling upgrades.
ReplicationController-based rolling updates are obsolete: kubectl rolling-update kubia-v1 kubia-v2 --image=luksa/kubia:v2. They were deprecated mainly because kubectl performs the update client-side and imperatively rather than declaratively, so an interruption easily leaves things stuck in an intermediate state.
strategy.type: the update strategy
- The Recreate strategy causes all old pods to be deleted before the new ones are created. Use this strategy when your application doesn't support running multiple versions in parallel and requires the old version to be stopped completely before the new one is started. This strategy does involve a short period of time when your app becomes completely unavailable.
- The RollingUpdate strategy, on the other hand, removes old pods one by one, while adding new ones at the same time, keeping the application available throughout the whole process, and ensuring there's no drop in its capacity to handle requests. This is the default strategy. The upper and lower limits for the number of pods above or below the desired replica count are configurable. You should use this strategy only when your app can handle running both the old and new version at the same time.
Command summary:
kubectl set image deployment kubia nodejs=pluckhuang/kubia:v2
kubectl rollout status deployment kubia
kubectl rollout undo deployment kubia
kubectl rollout history deployment kubia
kubectl patch deployment kubia -p '{"spec": {"minReadySeconds": 10}}'
RollingUpdate strategy properties (see the sketch below):
- maxSurge: how many pods may exist above the desired replica count
- maxUnavailable: how many pods may be unavailable relative to the desired count
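A sketch of how these land in the Deployment spec; the values are illustrative:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most one pod above the desired replica count
      maxUnavailable: 0     # never drop below the desired replica count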
A canary release is a technique for minimizing the risk of rolling out a bad version of an application and it affecting all your users. Instead of rolling out the new version to everyone, you replace only one or a small number of old pods with new ones. This way only a small number of users will initially hit the new version.
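One way to approximate a canary with a Deployment is to trigger the update and pause the rollout after the first new pod comes up (the image tag here is hypothetical):

kubectl set image deployment kubia nodejs=pluckhuang/kubia:v3
kubectl rollout pause deployment kubia    # one new pod serves a fraction of traffic
kubectl rollout resume deployment kubia   # looks good: finish the rollout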
Notes on the Deployment minReadySeconds property:
- This defaults to 0 (the Pod will be considered available as soon as it is ready).
- A pod is ready when the readiness probes of all its containers return success.
- In other words, after the pod becomes ready it must remain ready for another minReadySeconds before it counts as available.
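Combined with a readiness probe, minReadySeconds can stop a bad version from replacing all the pods; a sketch (image and port are hypothetical):

spec:
  minReadySeconds: 10
  template:
    spec:
      containers:
      - name: nodejs
        image: pluckhuang/kubia:v3
        readinessProbe:
          periodSeconds: 1
          httpGet:
            path: /
            port: 8080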
k8s in action summary ~1
Chapter 3:
- label: can be used to tag pods and nodes
- namespace: namespaces group resources; the default is default
Chapter 4: replication and other controllers
livenessProbe:
- httpGet: after several failed probes the container is restarted, keeping the service available.
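A liveness probe sketch (the endpoint and port are hypothetical):

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15   # give the app time to start before the first probe
  failureThreshold: 3       # restart the container after 3 consecutive failures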
selector
- nodeSelector
- matchLabels
- matchExpressions
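A matchExpressions sketch (label keys and values are hypothetical); the valid operators are In, NotIn, Exists, and DoesNotExist:

selector:
  matchExpressions:
  - key: app
    operator: In
    values:
    - kubia
  - key: env
    operator: NotIn
    values:
    - prod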
resource
- ReplicationController
- ReplicaSet: preferred over ReplicationController, mainly because its label matching is more expressive:
matchLabels
matchExpressions
- DaemonSet: DaemonSets are meant to run system services, which usually need to run even on unschedulable nodes.
- Job (see the sketch after this list)
completions
parallelism
activeDeadlineSeconds
- CronJob: scheduled, recurring jobs
startingDeadlineSeconds
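A sketch tying the Job fields together (image and command are hypothetical):

apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  completions: 5               # run 5 pods to completion overall
  parallelism: 2               # at most 2 running at once
  activeDeadlineSeconds: 600   # fail the Job if it runs longer than this
  template:
    spec:
      restartPolicy: OnFailure # Jobs may not use the default Always
      containers:
      - name: main
        image: busybox
        command: ["sleep", "20"]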
Chapter 5
Service resources:
- NodePort, LoadBalancer, Ingress // an Ingress acts roughly like a gateway in front of services
- types of readiness probes: Unlike liveness probes, if a container fails the readiness check, it won't be killed or restarted. This is an important distinction between liveness and readiness probes.
- headless service: a DNS lookup on the service name returns the backing pod endpoints directly
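A headless service is an ordinary Service with clusterIP set to None; DNS then resolves the service name to the pods' IPs instead of a single cluster IP. A sketch (names are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: kubia-headless
spec:
  clusterIP: None        # this is what makes the service headless
  selector:
    app: kubia
  ports:
  - port: 80
    targetPort: 8080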
Chapter 6
volume:
- emptyDir
Because the volume’s lifetime is tied to that of the pod, the volume’s contents are lost when the pod is deleted.
- gitRepo/hostPath
both the gitRepo and emptyDir volumes’ contents get deleted when a pod is torn down, whereas a hostPath volume’s contents don’t. If a pod is deleted and the next pod uses a hostPath volume pointing to the same path on the host, the new pod will see whatever was left behind by the previous pod, but only if it’s scheduled to the same node as the first pod.
- PersistentVolume/PersistentVolumeClaim (pv/pvc):
The pv/pvc workflow (see the sketch at the end of this section):
- Create the PersistentVolume resource backing the persistent storage.
- Create a claim (PersistentVolumeClaim) requesting that storage.
- The user creates a pod that references the claim; managing and binding the storage is handled by Kubernetes.
PersistentVolumes, like cluster Nodes, don't belong to any namespace, unlike pods and PersistentVolumeClaims.
pv -> persistentVolumeReclaimPolicy, what happens to the PV after its claim is deleted:
- Retain: the PV cannot be claimed again unless it is manually deleted and recreated; the files on the volume are kept.
- Recycle: the PV can be claimed again, but the files on the volume are wiped.
- Delete: the PV is deleted automatically along with the claim; the files on the volume are not wiped.
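A sketch of the whole pv/pvc flow, including the reclaim policy (names, sizes, and the hostPath backing are hypothetical; hostPath is only suitable for single-node experiments):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /tmp/pv-demo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  resources:
    requests:
      storage: 1Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: ""     # bind to a pre-provisioned PV instead of dynamic provisioning
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-demo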