Build your own MySQL Docker image

Dockerfile

FROM mysql:5.7.21

# Root password, plus a database and user created on first start
ENV \
MYSQL_ROOT_PASSWORD=huangjb \
MYSQL_DATABASE=mgrass \
MYSQL_USER=mgrass \
MYSQL_PASSWORD=mgrass

# Any .sql file in this directory is executed against MYSQL_DATABASE on first start
COPY fordocker.sql /docker-entrypoint-initdb.d/fordocker.sql

EXPOSE 3306/tcp

fordocker.sql

CREATE TABLE t_warehourse (
  id int(11) NOT NULL AUTO_INCREMENT,
  name varchar(16) NOT NULL,
  remark varchar(64) NOT NULL,
  enabled tinyint(1) NOT NULL,
  created datetime(6) NOT NULL,
  updated datetime(6) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin;

run.sh

docker run --name testmysql -v "$PWD/data":/var/lib/mysql -p 3306:3306 -d mymysql
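
run.sh assumes an image named mymysql has already been built from the Dockerfile above. A minimal sketch of that build step, run from the directory that contains the Dockerfile and fordocker.sql:

docker build -t mymysql .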

Key points:

  • MySQL stores its data files under /var/lib/mysql; this path is usually bind-mounted, so the data survives even if the container exits.
  • Logs inside the container go to standard output; view them with docker logs testmysql.
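
To verify that the init script ran, one quick check is a sketch like the following; it uses the mysql client shipped inside the mysql:5.7.21 image and the credentials from the Dockerfile:

docker exec -it testmysql mysql -umgrass -pmgrass mgrass -e 'SHOW TABLES;'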

What is ‘Site Reliability Engineering’?

Main goals:

The main goals of Site Reliability Engineering (SRE) are to create scalable and highly reliable software systems. According to Ben Treynor, founder of Google's Site Reliability Team, SRE is "what happens when a software engineer is tasked with what used to be called operations."[1]

SRE is also associated with practices such as automating manual tasks, continuous integration, and continuous delivery.

SREs, being developers themselves, will naturally bring solutions that help remove the barriers between development teams and operations teams.

Difference between DevOps and SRE:

SRE and DevOps share the same foundational principles. SRE is viewed by many (as cited in the Google SRE book) as a "specific implementation of DevOps with some idiosyncratic extensions."

(from Wikipedia)

Diagnosing ImagePullBackOff / ErrImagePull when running kubectl

kubia-relica.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: kubia1
spec:
  replicas: 1
  selector:
    app: kubia1
  template:
    metadata:
      labels:
        app: kubia1
    spec:
      containers:
      - name: kubia
        # image: /pluckhuang/kubia:0.0.1    # error here: the leading "/" makes this an invalid image reference
        image: docker.io/pluckhuang/kubia:0.0.1
        ports:
        - containerPort: 8080
          protocol: TCP
      imagePullSecrets:    # add this so the private image can be pulled with Docker Hub credentials
      - name: dockerhub-secret
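
The template above pulls a private image through dockerhub-secret, so that secret must already exist in the target namespace. A sketch of the usual diagnosis and fix, with placeholder Docker Hub credentials:

# inspect the pod events to confirm it is an image-pull failure
kubectl describe pod -l app=kubia1

# create the pull secret referenced by imagePullSecrets
kubectl create secret docker-registry dockerhub-secret \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<dockerhub-username> \
  --docker-password=<dockerhub-password> \
  --docker-email=<email>

# re-apply the controller after fixing the image reference
kubectl apply -f kubia-relica.yaml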

Init an etcd cluster

  • 3 hosts:
    myvm1: 192.168.99.100,
    myvm2: 192.168.99.101,
    myvm3: 192.168.99.103

  • myvm1: run

./etcd \
 --name myvm1 \
 --data-dir /var/lib/etcd \
 --initial-advertise-peer-urls http://192.168.99.100:2380 \
 --listen-peer-urls http://192.168.99.100:2380 \
 --listen-client-urls http://192.168.99.100:2379,http://127.0.0.1:2379 \
 --advertise-client-urls http://192.168.99.100:2379 \
 --initial-cluster-token etcd-cluster-1 \
 --initial-cluster 'myvm1=http://192.168.99.100:2380,myvm2=http://192.168.99.101:2380,myvm3=http://192.168.99.103:2380' \
 --initial-cluster-state new \
 --heartbeat-interval 1000 \
 --election-timeout 5000 \
 --enable-pprof --logger=zap --log-outputs=stderr
  • myvm2: run
./etcd \
 --name myvm2 \
 --data-dir /var/lib/etcd \
 --initial-advertise-peer-urls http://192.168.99.101:2380 \
 --listen-peer-urls http://192.168.99.101:2380 \
 --listen-client-urls http://192.168.99.101:2379,http://127.0.0.1:2379 \
 --advertise-client-urls http://192.168.99.101:2379 \
 --initial-cluster-token etcd-cluster-1 \
 --initial-cluster 'myvm1=http://192.168.99.100:2380,myvm2=http://192.168.99.101:2380,myvm3=http://192.168.99.103:2380' \
 --initial-cluster-state new \
 --heartbeat-interval 1000 \
 --election-timeout 5000 \
 --enable-pprof --logger=zap --log-outputs=stderr
  • myvm3: run
 ./etcd \
 --name myvm3 \
 --data-dir /var/lib/etcd \
 --initial-advertise-peer-urls http://192.168.99.103:2380 \
 --listen-peer-urls http://192.168.99.103:2380 \
 --listen-client-urls http://192.168.99.103:2379,http://127.0.0.1:2379 \
 --advertise-client-urls http://192.168.99.103:2379 \
 --initial-cluster-token etcd-cluster-1 \
 --initial-cluster 'myvm1=http://192.168.99.100:2380,myvm2=http://192.168.99.101:2380,myvm3=http://192.168.99.103:2380' \
 --initial-cluster-state new \
 --heartbeat-interval 1000 \
 --election-timeout 5000 \
 --enable-pprof --logger=zap --log-outputs=stderr
  • check the member list (see the etcd doc; a sketch is shown after this list)
  • note: the member list output alone does not tell you which member is the leader

  • check the leader with:
    ./etcdctl -w table --endpoints=192.168.99.100:2379,192.168.99.101:2379,192.168.99.103:2379 endpoint status

  • reference: the etcd clustering doc and its Procfile example
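
A sketch of the member-list check mentioned above, assuming etcdctl v3 from the same etcd release and the endpoints used earlier:

ETCDCTL_API=3 ./etcdctl -w table \
 --endpoints=192.168.99.100:2379,192.168.99.101:2379,192.168.99.103:2379 \
 member list

# optional: confirm that every endpoint responds
ETCDCTL_API=3 ./etcdctl \
 --endpoints=192.168.99.100:2379,192.168.99.101:2379,192.168.99.103:2379 \
 endpoint health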