 
  


        Cloud Native              ,        .        ,    .        ,            .       ,     ,          .





 

 





      70 (76) :

* cloud providers: Google Cloud Platform, Amazon Web Services, Microsoft Azure;

* console utilities: cat, sed, npm, node, exit, curl, kill, dockerd, ps, sudo, grep, git, cd, mkdir, rm, rmdir, mongos, python, df, eval, ip, mongo, netstat, oc, pgrep, ping, pip, pstree, systemctl, top, uname, VirtualBox, which, sleep, wget, tar, unzip, ls, virsh, egrep, cp, mv, chmod, ifconfig, kvm, minishift;

*  : NGINX, MinIO, HAProxy, Docker, Consul, Vagrant, Ansible, kvm;

*  DevOps: Jenkins, GitLab CI, BASH, PHP, Micro Kubernetes, kubectl, Velero, Helm, "http  ";

* Traefik, Kubernetes, Envoy, Istio, OpenShift, OKD, Rancher;

* programming languages: PHP, NodeJS, Python, Golang.





  

Limoncelli ( "The Practice of Cloud System Administration"),    Google Inc, ,  2010            .

* 1985-1994    ( )    ,  

   

* 1995-2000    -,

* 2000-2003

* 2003-2010

* 2010-2019



    ,   , ,    2       ,   2 .  ,       . ,     .

,   2000-2003 ,   ,    :

*   ;

*    ;

*  OpenSource ,       ,     ;

*   ;

*       ;

*            .

   ,    20032010 :

*         (power-location),         ;

*   ;

*    .

   Amazon  2010      .                                 .        , ,    ,                 ,   .

  

  ,   Linux,   - Windows,           ,            .     - Windows -  : 1C, Directum       Windows,   ,   ,      .   Windows        ,   DevOps    ,       .        , ,         HTML.       BOM   ,   Windows: "\n\r"  "\n"). BOM   ,        ,      ,          ,  Linux          .     GIT      ,     .

  Front .   ,  ,  JS (JavaScript), HTML  CSS  .              PHP       cms.  ,       ,    ,           HAML. HAML     HTML,   : ,  ,        .     ,  HTML   HTML.  MS Windows          IDE,       IDE WEBStorm.  CSS  , ,             LESS,      SASS     ,    RUBY   , ,     .   JS  CoffeScript.          ( HTML   ).

          "JS ",   SPA (Single Page Application,    ),    JS,    (Galp, Grunt),    NodeJS  ,    .         Linux,      BASH   Windows           ,     IDE . ,  WEB    MS Windows  MacOS,    UNIX ,     BASH.

Docker    

,           ,      ,      (   ,   ,  ,      ) , ,  .       (  ), ,  .    ,  , ,   .            .       /bin    .      ,   ,     ,       ,     ,    ,          ,    .

    ,   :

*    30%    ,   ,    ,   .      VT-X,         ,       .         ,     (VirtualBox, VMVare,  )       ,     .

*      ,         ,     ,     ,    ,   .

*        ,     ,  ,       .  ,      ,   ,      ,   . ,       ,  ,      . ,     10,      10  ,  3     30.

    WEB   Docker.        Docker  Linux,          FreeBSD,    Windows 10 Professional,       Linux,     Windows.        (  ,    ),    .       MS Windows,     RedHut,  Debian,    ,    ,   (       )   . ,    WEB ,        ,          ( Docker)   , ,    .     ,  ,  .

         Docker

       Docker,  ,    Docker    ,    ,       .            ,    ,   , ,   ,        .     Linux , ,    ,              .    6   .

Containerization goes back to chroot in 1979, which isolated a process's view of the filesystem. Namespaces later isolated the other resources, giving each container its own process tree starting from pid 1 and its own network stack, while CGroups limit and account for resources such as CPU and memory. All of this is built into the Linux kernel, which is why Docker containers are so lightweight. Docker initially built on LXC, which combines namespaces and CGroups, and adds layered images on top of it using UnionFS (UFS).

Docker   

 Docker   ,      Linux,       ,      .

A container image is much smaller than a full distribution image because it contains no kernel of its own. For example, the Docker image of Debian is about 125 MB, while its ISO is about 290 MB. Inside a container the kernel is the host's: check it with uname -a or cat /proc/version, and the distribution with cat /etc/issue

Docker        Dockerfile,      ,         . ,        ,    .   ,      Docker commit,      ,       Dockerfile  Docker history   .     ,    ,   :    .

 Docker     Image,        Dockerfile.           ,          .         ,    ,      .    99%    ,        ,          ,  ,     .             () .           ,          .  ,    ,     ,   ,          .

Docker caches image layers and reuses them across builds. The cache can be bypassed entirely with docker build --no-cache=true, which forces every layer to be rebuilt. A layer is also invalidated when files copied by an ADD instruction change. Images share layers: if one image is built on NGINX and another on MySQL, and both in turn are built on Ubuntu 14.04, the Ubuntu layers are stored once; the layers of an image can be inspected with docker history. An image is limited to 127 layers. Note that layer caching keys on the instruction text, not on its effect: git clone --branch v1 and git clone --branch v2 produce different layers, but re-running an unchanged git clone instruction reuses the cached layer even if the remote repository has changed.

Docker can also limit the resources available to a container, for example memory (the -m flag) and CPU shares (the -c flag), so a misbehaving container does not starve its neighbours.

                 .

  

  ,      ,   ,       Docker.      ,    1.13           .

A container is started with docker run name_image and removed with docker rm -f id_container. For experiments it is convenient to run docker run -ti name_image bash and work inside the container, exiting with Ctrl+D; that stops the container but does not delete it, unless it was started with the --rm flag. Running containers are listed with docker ps, and all containers, including stopped ones, with docker ps -a. Stopped containers are removed in bulk with docker container prune, available since version 1.13, or with docker rm $(docker ps -q -f status=exited). Leftover stopped containers keep their names reserved and their layers on disk, so it is worth cleaning them up regularly.
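The cleanup commands above, collected as a short sketch (image and container names are placeholders):

```shell
docker ps                                   # running containers
docker ps -a                                # all containers, including stopped ones
docker rm $(docker ps -q -f status=exited)  # remove all exited containers
docker container prune                      # the same, built in since 1.13
```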

Images are removed with docker rmi name_image; if containers still reference the image, Docker refuses and prints an error. Since version 1.13 unused images can be removed in bulk with docker image prune -a. An image is normally described by a Dockerfile and can be rebuilt at any time with docker build, so deleting it loses nothing; the steps that produced an existing image can be inspected with docker history name_image. Images created with docker commit have no Dockerfile, so their history is the only record of how they were built.

Intermediate (dangling) layers left behind by rebuilds are removed with docker image prune.

Data that must outlive the container is stored in volumes. A host directory can be mounted with docker run -v /page_host:/page_container name_image, or an anonymous volume created with docker run -v /page_container name_image. Unused (dangling) volumes are removed with docker volume prune.

Everything unused at once, stopped containers, dangling images and unused networks, is removed with docker system prune. Disk usage is summarised with docker system df and shown in detail with docker system df -v.
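A typical inspection-then-cleanup session, as a sketch:

```shell
docker system df     # summary: space used by images, containers, volumes
docker system df -v  # the same, per object
docker system prune  # remove stopped containers, dangling images, unused networks
```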

For multi-container environments it is more convenient to use Docker Compose: docker-compose up starts the environment and docker-compose down -v removes it together with its volumes. The containers are described declaratively in docker-compose.yml, so what was started is also recorded there, and a subsequent up recreates only what changed.

     Docker         ,         .

    

       ,     .         .     ,     ,           ,       .  ,      ,           . , ,   ,            QA      -,          ,          .    DevOps,        ,         ,       .  ,         ,      ,     ,     ,   ,           .

  :   ,   

,      .   ,    :        ,      ,     ,         ,    ,  ,    .       , ,     .  ,  ,                :

*   ,      ,    ,      ,       .    ,          ,        .       .

*               ,     .       .    MakeFile,       .

*          ,   ,        .        .     :  ,  ,           .

*   Docker Hub  WEB     . , ,      .

The devicemapper storage driver limits a container's base filesystem to 10G by default; the limit is raised with dm.basesize, and dm.min_free_space=5% keeps a reserve of free space. Both are set in /etc/docker/daemon.json:

{
  "storage-opts": [
    "dm.basesize=50G",
    "dm.min_free_space=5%"
  ]
}



  ,     :

* -m 256m limits the container's memory (here to 256 MB);

* -c 512 sets the container's relative CPU share (the default weight is 1024);

* --cpuset-cpus="0,1" pins the container to specific CPU cores.
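The flags combine in a single run; an illustrative invocation (container name and command are placeholders):

```shell
# cap memory at 256 MB, halve the CPU weight, pin to cores 0 and 1
docker run -d -m 256m -c 512 --cpuset-cpus="0,1" --name limited ubuntu sleep 3000
```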

   

  , , ,          , ,   .         ,   .      .

A container's process writes its logs to stdout, where Docker collects them; they are viewed with docker logs name_container, or only the last lines with docker logs name_container | tail.

In a cluster, logs are shipped off the nodes by a collector such as Fluentd and stored centrally, typically in ElasticSearch; by default Docker writes logs in JSON. The usual viewer on top of Elastic is Kibana.

A container lives as long as the process specified in the Dockerfile with CMD, for example npm run; when that process exits, the container stops.

Where images are stored:

* the public registry Docker Hub (http://hub.docker.com);

* a private registry deployed on your own infrastructure, for example from the official registry image.
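Running a private registry locally is a one-liner with the official registry image; pushing into it requires retagging with the registry's address (the ubuntu image here is only an example):

```shell
docker run -d -p 5000:5000 --name registry registry:2  # local registry on port 5000
docker tag ubuntu localhost:5000/ubuntu                # retag with the registry address
docker push localhost:5000/ubuntu                      # push into the private registry
```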

Docker      

Build tooling does not have to live in the final image: the build can run in a disposable container that mounts the sources, which keeps the production image small. For a JavaScript project this means running NodeJS with a bundler such as WebPack in a throwaway container, for example: docker run -it --rm -v $(pwd):/app node-build. For repeatable builds and tests it is convenient to keep separate compose files, such as docker-compose-build.yml and docker-compose-test.yml, and run them with docker-compose -f ./docker-compose-build.yml up.

  

      Docker. , ,    .  VNC, SSH      Docker, ,      . ,   ,     Docker,   Docker   Docker    ,    Docker         Docker Engine.     Docker Machine  Docker REST API,        . ,         SSL- .    ,    ,     ,           ,      Docker     ,    .

By default the Docker client talks to dockerd over a Unix socket (/var/run/docker.sock), which is not reachable over the network. The socket can be queried directly with curl, starting from curl 7.40: curl --unix-socket /var/run/docker.sock http://localhost/v1.24/containers/json; older distributions such as CentOS may ship an older curl. To expose the daemon over the network, stop the service with systemctl stop docker and start the daemon listening on a TCP port: dockerd -H tcp://0.0.0.0:9002 & (port 9002 is chosen here arbitrarily). Now docker ps works against the remote host as docker -H 5.23.52.111:9002 ps or docker -H tcp://geocode1.essch.ru:9002 ps. By default Docker uses port 2375 for http and 2376 for https. To avoid passing the host on every call, it can be put into an environment variable:

export DOCKER_HOST=5.23.52.111:9002

docker ps

docker info

unset DOCKER_HOST



  export      :  .       =.              Dockerd.      Docker Engine (Docker  )   Docker Machine.        Docker-machine.   :

base=https://github.com/docker/machine/releases/download/v0.14.0 &&

curl -L $base/docker-machine-$(uname -s)-$(uname -m) >/usr/local/bin/docker-machine &&

chmod +x /usr/local/bin/docker-machine



  

With manual deployment one starts the infrastructure containers and builds and runs the application by hand, roughly: docker run mysql, docker run nginx, then docker build -t myapp .; docker run --name myapp -p 80:80 myapp bash. Every change means retyping the commands, so the natural next step is to script them.

      ,         ().   ,   ,    ,   ,   docker start myapp,      ,     ,   ,    ,     :

if docker ps | grep -q myapp
then
  docker start myapp
else
  if ! docker images | grep -q myimage
  then
    docker build -t myimage .
  fi
  docker run -d --name myapp -p 80:80 myimage bash
fi



.      ,   :

if docker ps | grep -q myapp
then
  docker rm -f myapp
fi
if ! docker images | grep -q myimage
then
  docker build -t myimage .
fi
docker run -d --name myapp -p 80:80 myimage bash



. ,    ,  ,    , ,  Dockerfile ,           .    ,     ,   ()     ,   , ,   Docker run       . ,   ,  ,     ,         . ,  ,         , -    .   :     ,         ?       ,      .     : Docker-compose       :

#docker-compose
version: '3'
services:
  myapp:
    container_name: myapp
    image: myimages
    ports:
      - 80:80
    build: .



Now the environment is started with docker-compose up -d and recreated with docker-compose down; docker-compose up -d. Since docker-compose tracks its own containers by name, restarts do not conflict with leftovers.

,       ,    .  ,      :

#docker-compose
version: '3'
services:
  mysql:
    image: mysql
  nginx:
    image: nginx
    ports:
      - 80:80
  myapp:
    container_name: myapp
    build: .
    depends_on:
      - mysql
    image: myimages
    links:
      - mysql:db
      - nginx:nginx



Now the myapp container reaches the mysql and nginx containers by the names db and nginx regardless of where they run, and is started after mysql (depends_on).

Service Discovery

           ,            Service Discovery. ,     , ,               ,    Consul, ETCD  ZooKeeper.   Consul     :     ,      ,    (ZooKeeper   , ,       ,  ),          (Consul  Go, ZooKeeper  Java)       ,  , , ClickHouse (   ZooKeeper).

Consul keeps its data in a replicated key-value store; the server agents elect a leader (Master) among themselves, so the loss of a single node does not lose data. Consul ships as a single binary and can be downloaded from https://www.consul.io/downloads.html:

wget https://releases.hashicorp.com/consul/1.3.0/consul_1.3.0_linux_amd64.zip -O consul.zip

unzip consul.zip

rm -f consul.zip



    , ,  master consul -server -ui,    slave consul -server -ui  consul -server -ui .    Consul,    master,     ,   Consul    ,     ,  .     consul members:

consul members



      :

curl -X PUT -d 'value1' .....:8500/v1/kv/group1/key1

curl -s .....:8500/v1/kv/group1/key1

curl -s .....:8500/v1/kv/group1/key1

curl -s .....:8500/v1/kv/group1/key1



Telemetry options are described at https://www.consul.io/docs/agent/options.html#telemetry, see also https://medium.com/southbridge/monitoring-consul-with-statsd-exporter-and-prometheus-bad8bee3961b

  ,          IP-  172.17.0.2:

essh@kubernetes-master:~$ mkdir consul && cd $_



essh@kubernetes-master:~/consul$ docker run -d --name=dev-consul -e CONSUL_BIND_INTERFACE=eth0 consul

Unable to find image 'consul:latest' locally

latest: Pulling from library/consul

e7c96db7181b: Pull complete

3404d2df15cb: Pull complete

1b2797650ac6: Pull complete

42eaf145982e: Pull complete

cef844389e8c: Pull complete

bc7449359c58: Pull complete

Digest: sha256:94cdbd83f24ec406da2b5d300a112c14cf1091bed8d6abd49609e6fe3c23f181

Status: Downloaded newer image for consul:latest

c6079f82500a41f878d2c513cf37d45ecadd3fc40998cd35020c604eb5f934a1



essh@kubernetes-master:~/consul$ docker inspect dev-consul | jq ' .[] | .NetworkSettings.Networks.bridge.IPAddress'

"172.17.0.4"



essh@kubernetes-master:~/consul$ docker run -d --name=consul_follower_1 -e CONSUL_BIND_INTERFACE=eth0 consul agent -dev -join=172.17.0.4

8ec88680bc632bef93eb9607612ed7f7f539de9f305c22a7d5a23b9ddf8c4b3e



essh@kubernetes-master:~/consul$ docker run -d --name=consul_follower_2 -e CONSUL_BIND_INTERFACE=eth0 consul agent -dev -join=172.17.0.4

babd31d7c5640845003a221d725ce0a1ff83f9827f839781372b1fcc629009cb



essh@kubernetes-master:~/consul$ docker exec -t dev-consul consul members

Node Address Status Type Build Protocol DC Segment

53cd8748f031 172.17.0.5:8301 left server 1.6.1 2 dc1 < all>

8ec88680bc63 172.17.0.5:8301 alive server 1.6.1 2 dc1 < all>

babd31d7c564 172.17.0.6:8301 alive server 1.6.1 2 dc1 < all>



essh@kubernetes-master:~/consul$ curl -X PUT -d 'value1' 172.17.0.4:8500/v1/kv/group1/key1

true



essh@kubernetes-master:~/consul$ curl $(docker inspect dev-consul | jq -r ' .[] | .NetworkSettings.Networks.bridge.IPAddress'):8500/v1/kv/group1/key1

[

{

"LockIndex": 0,

"Key": "group1/key1",

"Flags": 0,

"Value": "dmFsdWUx",

"CreateIndex": 277,

"ModifyIndex": 277

}

]



essh@kubernetes-master:~/consul$ firefox $(docker inspect dev-consul | jq -r ' .[] | .NetworkSettings.Networks.bridge.IPAddress'):8500/ui



      ,     .

dockerd -H fd:// --cluster-store=consul://192.168.1.6:8500 --cluster-advertise=eth0:2376

* --cluster-store points the daemon at the key-value store holding the cluster state;

* --cluster-advertise sets the interface and port advertised to the other daemons.



docker network create --driver overlay --subnet 192.168.10.0/24 demo-network

docker network ls



 

          ,    : Docker Swarm  Google Kubernetes       . Docker Swarm ,    Docker  ,    (),  Kubernetes    ,      (,    Volume),            ( ,  ).

,         .    ,            :

*    ,   ( )   ;

*       ;

*    ,    ;

*  ,      ;

*      ;

*      , ,       ,  , ,         ;

*   , ,   ,    .

     Docker Swarm  Kubernetes.      ,     Docker (Kubernetes   RKT  Containerd),       -    Kubernetes  POD.  Docker Swarm,  Kubernetes     IP      ,      localhost,  ,     Docker Swarm,       , Kubernetes       POD.   Kubernetes    ,  ,      ,    .

      (Overlay Network)             .      ,       TCP/IP                  ,     TCP/IP     .   , ,    ,        ,        ,      ,            IP    (, 10.0.0.1),      IP  ,   ,       .        IP  ,              ,        ID / POD.        ,     IP .     ,   Docker Swarm   ,   Kubernetes        ,    .        ,      Kubernetes  (Service).   ,    Service,      , ,  Deployment.       IP- (   )   ,         DNS , ,   ,    my_service,        : curl my_service;.     ,       IP     (,  ,  )      , IP   DNS      ,    ,       .

External traffic enters the cluster through Ingress, a rule set that maps incoming requests to internal services; on Linux this is backed by address translation (iptables) from the node's external IP to the cluster-internal IPs. Kubernetes provides the Ingress resource alongside the LoadBalancer and NodePort service types; Ingress works at the HTTP level as an application router, can terminate TLS/HTTPS, and integrates with GCP and AWS load balancers.

Kubernetes       Google  Borg,   Omega,         .    :

* POD   POD;

* ReplicaSet, Deployment   POD;

* DaemonSet       ;

* services (what exposes a set of PODs): ClusterIP (an internal address, reachable only inside the cluster), NodePort (opens the same port from the 30000-32767 range on every node and forwards it to the PODs), LoadBalancer (a NodePort plus an external IP from the cloud provider's balancer, e.g. AWS or GCP), HostPort (opens a specific port, e.g. 9200, only on the node where the POD runs) and HostNetwork (the POD shares the node's network namespace).

The control plane consists of kube-apiserver, kube-scheduler and kube-controller-manager. On the worker nodes run:

* kubelet, which supervises the containers on its node, polls kube-apiserver for the desired state and reports back the actual one.

* cAdvisor, which collects container metrics.

    ,     AVS .       Docker  Docker-machine,  ,      . Docker-machine      Docker ,         VirtualBox      . ,  ,          Docker ,              master-,              . ,  Docker-machine   :

docker-machine create --driver virtualbox --virtualbox-cpu-count "2" --virtualbox-memory "2048" --virtualbox-disk-size "20000" swarm-node-1

docker-machine env swarm-node-1 // tcp://192.168.99.100:2376

eval $(docker-machine env swarm-node-1)



  :

docker-machine create --driver virtualbox --virtualbox-cpu-count "2" --virtualbox-memory "2048" --virtualbox-disk-size "20000" swarm-node-2

docker-machine env swarm-node-2

eval $(docker-machine env swarm-node-2)



  :

docker-machine create --driver virtualbox --virtualbox-cpu-count "2" --virtualbox-memory "2048" --virtualbox-disk-size "20000" swarm-node-3

eval $(docker-machine env swarm-node-3)



               ():

docker-machine ssh swarm-node-1

docker swarm init --advertise-addr 192.168.99.100:2377

docker node ls //  

docker swarm join-token worker



   ,   ,        docker swarm join-token manager  docker swarm join-token worker.

     ()      Docker swarm join token  192.168.99.100:2377,    ,    ,      .      docker node info

 docker swarm init   ,  ,          ,        ,      , , docker swarm join token  192.168.99.100:2377.      SSH  docker-machine SSH name_node   .

     bridge,   .       ,        ,     ip    ,             .   ,      roundrobin,      .     overlay   DNS           .  :

docker network create --driver overlay --subnet 10.10.1.0/24 --opt encrypted services



      .     ,  ,         .         replicas,     ,    .  ,    ,      (    )      -p,  Server Discovery (  ,   ip-, )    .

docker service create -p 80:80 --name busybox --replicas 2 --network services busybox sleep 3000



   docker service ls,       docker service ps busybox    wget -O- 10.10.1.2.     ,          (  ),            ,    ,         ,    ,     .

Docker Swarm has built-in Ingress load balancing: every node of the cluster accepts connections on the published port 80 and routes them to one of the service's replicas, so the service is reachable through any node.

       ,     ,     :

docker service create --mount type=bind,src=...,dst=.... --name=.... ..... #

docker service create --mount type=volume,src=...,dst=.... --name=.... ..... #



    Docker-compose,    ().    Docker-compose   Docker stack,     :                ,  .          .  , :

docker stack deploy -c docker-compose.yml test_stack



docker service update --label-add foo=bar Nginx
docker service update --label-rm foo Nginx
docker service update --publish-rm 80 Nginx
docker node update --availability=drain swarm-node-3

Docker Swarm

$ sudo docker pull swarm

$ sudo docker run --rm swarm create



A secret is created with docker secret create test_secret, attached to a service with docker service create --secret test_secret, and read inside the container from /run/secrets/test_secret. Examples: hello-check-cobbalt; pipeline: TravisCI -> Jenkins -> config -> https://www.youtube.com/watch?v=UgUuF_qZmWc https://www.youtube.com/watch?v=6uVgR9WPjYM

Docker   

    :      .  ,             ,   ,          ,      ,    ,     ,    ( ) ,        .              .

 :

*     .

*          .

* configuration management systems, such as Chef, Puppet, Ansible and Salt, which prepare the environment on the nodes.

*  ()   ,       . : Google Kubernetes, Apache Mesos, Hashicorp Nomad, Docker Swarm mode  YARN,   .   : Flocker (https://github.com/ClusterHQ/Flocker/), Helios (https://github.com/spotify/helios/).

   Docker-swarm.       Kubernetes (Kubernetes)  Mesos,             ,      ,           .       ,    ,  Google, Twitter  : Nomad, Scheduling, Scalling, Upgrades, Service Descovery,      .         Kubernetes,        ,       ,  Mesos    ,      .

 Kubernetes    :

* MiniKube      ,       ;

* kubeadm;

* kops;

* Kubernetes-Ansible;

* microKubernetes;

* OKD;

* MicroK8s.
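For a local single-node cluster, MiniKube is the quickest path; a minimal session (assuming minikube and kubectl are installed) might look like:

```shell
minikube start        # create and start the local cluster
kubectl get nodes     # the single node should report Ready
kubectl get pods -A   # system PODs across all namespaces
minikube stop
```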

     

KubeSai   Kubernetes



    POD,   YML-  Docker-compose.   POD,      :      YML-     .  ,  POD:

# test_pod.yml
# kubectl create -f test_pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: test
      image: debian

   :

# test_replica_controller.yml
# kubectl create -f test_replica_controller.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: test
          image: debian



    service ( )  LoadBalancer,     ClasterIP  Node Port:

apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: web



Overlay networks and cluster add-ons (the set depends on the installation): Contiv, Flannel, GCE networking, Linux bridging, Calico, Kube-DNS, SkyDNS.
#configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: config_name
data:

   Docker-swarm     Kubernetes,      NGINX :

#secrets
apiVersion: v1
kind: Secret
metadata:
  name: test_secret
data:
  password: ....



     POD,      POD:

....
volumes:
  - name: secret-volume      # illustrative name
    secret:
      secretName: test_secret





 Kubernetes   Volumes:

* emptyDir;

* hostPath;

* gcePersistentDisk, a disk in Google Cloud;

* awsElasticBlockStore, a disk in Amazon AWS.

volumeMounts:
  - name: app
    mountPath: ""
volumes:
  - name: app
    hostPath:
      ....



  UI: Dashbord UI

 :

* Main metrics   ;

* Logs collect   ;

* Scheduled JOBs;

* Authentication;

* Federation    -;

* Helm   ,  Docker Hub.

https://www.youtube.com/watch?v=FvlwBWvI-Zg

 Docker

Docker      RKT.

 Linux,     PID=1,    NameSpace,      ,   , ,       .         ,         ,        systemd   .    : localhost  ,          localhost        ,     POD Kubernetes.

An interactive container is started with the -it flags and exited with Ctrl+D; without the --rm flag the stopped container remains and has to be removed manually. To refer to a container by a predictable name rather than an auto-generated one, pass --name name_container. For example:

docker run --rm -it --name name_container ubuntu bash

  CLI Docker    ,     .  :

* docker run launches a container;

* docker ps lists containers;

* docker rm removes a container;

* docker build builds an image;

* docker images lists images;

* docker rmi removes an image.

   ,           ,    "Docker run"   "Docker container",   25   19  Docker.  ,    ,        .       . ,     -   ,       .    :

 :

docker run -d --name name_container ubuntu bash

  :

docker rm -f name_container

  :

docker ps -a

  :

docker ps

    :

docker stats

   :

docker top {name_container}

Enter a running container with sh (alpine-based images ship sh instead of BASH):

docker exec -it name_container sh

    :

docker image prune

  :

docker rmi $(docker images -f "dangling=true" -q)

 :

docker images

    dir  Dockerfile:

docker build -t docker_user/name_image dir

 :

docker rmi docker_user/name_image

  Docker hub:

docker login

   (    ,    )   Docker hub:

docker push docker_user/name_image:latest

    https://niqdev.github.io/devops/docker/.

 Docker Machine    :

   VirtualBox

docker-machine create name_virtual_system

   generic

docker-machine create -d generic name_virtual_system

  :

docker-machine ls

  :

docker-machine stop name_virtual_system

   :

docker-machine start name_virtual_system

  :

docker-machine rm name_virtual_system

   :

eval "$(docker-machine env name_virtual_system)"

 Docker   :

eval $(docker-machine env -u)

  SSH:

docker-machine ssh name_virtual_system

   :

exit

Run a command (here sleep 10) on the machine:

docker-machine ssh name_virtual_system 'sleep 10'

    BASH:

docker-machine ssh dev 'bash -c "sleep 10 && echo 1"'

  dir   :

docker-machine scp -r /dir name_virtual_system:/dir

     :

curl $(docker-machine ip name_virtual_system):9000

  9005    9005  

docker-machine ssh name_virtual_system -f -N -L 9005:0.0.0.0:9007

 :

docker swarm init

     EXPOSE:

essh@kubernetes-master:~/mongo-rs$ docker run --name redis -p 6379 -d redis

f3916da35b6ba5cd393c21d5305002b78c32b089a6cc01e3e2425930c9310cba

essh@kubernetes-master:~/mongo-rs$ docker ps | grep redis

f3916da35b6b redis"docker-entrypoint.s" 8seconds ago Up 6 seconds 0.0.0.0:32769->6379/tcp redis

essh@kubernetes-master:~/mongo-rs$ docker port reids

Error: No such container: reids

essh@kubernetes-master:~/mongo-rs$ docker port redis

6379/tcp -> 0.0.0.0:32769

essh@kubernetes-master:~/mongo-rs$ docker port redis 6379

0.0.0.0:32769

Layer caching also matters inside a Dockerfile. If the whole application is copied before installing dependencies, any source change invalidates the install step:

COPY ./ /src/app

WORKDIR /src/app

RUN npm install

Copying the dependency manifest first keeps the install layer cached while the sources change:

COPY ./package.json /src/app/package.json

WORKDIR /src/app

RUN npm install



COPY . /src/app
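The two-step copy above can be assembled into a complete Dockerfile sketch; the node base image and the npm start command are assumptions for illustration:

```dockerfile
# assumed layout: package.json at the repo root, sources alongside it
FROM node:7
WORKDIR /src/app
# dependency layer: cached until package.json changes
COPY ./package.json /src/app/package.json
RUN npm install
# source layer: invalidated on every code change, npm install stays cached
COPY . /src/app
CMD ["npm", "start"]
```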

    node:7-onbuild:

$ cat Dockerfile

FROM node:7-onbuild

EXPOSE 3000

$ docker build .

Files that are not needed in the image, such as the Dockerfile itself, .git and node_modules (the modules are reinstalled inside the container anyway), should be excluded from the build context by listing them in a .dockerignore file.
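A minimal .dockerignore for the case above (the exact entries depend on the project):

```text
.git
node_modules
Dockerfile
*.log
```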

v /config

docker cp config.conf name_container:/config/

     :

essh@kubernetes-master:~/mongo-rs$ docker ps -q | docker stats

CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS

c8222b91737e mongo-rs_slave_1 19.83% 44.12MiB / 15.55GiB 0.28% 54.5kB / 78.8kB 12.7MB / 5.42MB 31

aa12810d16f5 mongo-rs_backup_1 0.81% 44.64MiB / 15.55GiB 0.28% 12.7kB / 0B 24.6kB / 4.83MB 26

7537c906a7ef mongo-rs_master_1 20.09% 47.67MiB / 15.55GiB 0.30% 140kB / 70.7kB 19.2MB / 7.5MB 57

f3916da35b6b redis 0.15% 3.043MiB / 15.55GiB 0.02% 13.2kB / 0B 2.97MB / 0B 4

f97e0697db61 node_api 0.00% 65.52MiB / 15.55GiB 0.41% 862kB / 8.23kB 137MB / 24.6kB 20

8c0d1adc9b9c portainer 0.00% 8.859MiB / 15.55GiB 0.06% 102kB / 3.87MB 57.8MB / 122MB 20

6018b7e3d9cd node_payin 0.00% 9.297MiB / 15.55GiB 0.06% 222kB / 3.04kB 82.4MB / 24.6kB 11

^C

    :

**      ,  ,   , ,     'NPM i'      ;

*         ,                , ,    ,       . code-as-a-service: 12  (12factor.net)

* Codebase: one codebase tracked in revision control, many deploys;

* Dependencies: explicitly declare and isolate dependencies;

* Config: store configuration in the environment, not in the code;

* Backing services: treat backing services (databases, queues, caches) as attached resources accessed over an API;

* Processes: execute the app as stateless processes, keeping persistent data in backing services;

*        .

* CI/CD: code control (git) -> build (Jenkins, GitLab) -> release (Docker, Jenkins) -> deploy (Helm, Kubernetes).

* Config      , , docker-compose.yml;

* Port binding: the service is self-contained and exports its port; in a Dockerfile the port is declared with EXPOSE PORT, and the -P flag maps it to a random host port.

* Env       ,    ,        , , docker-compose.yml

* Logs      , , ELK,    ,   Docker  .

 Dockerd:

essh@kubernetes-master:~/mongo-rs$ ps aux | grep dockerd

root 6345 1.1 0.7 3257968 123640 ? Ssl 05 76:11 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

essh 16650 0.0 0.0 21536 1036 pts/6 S+ 23:37 0:00 grep --color=auto dockerd

essh@kubernetes-master:~/mongo-rs$ pgrep dockerd

6345

essh@kubernetes-master:~/mongo-rs$ pstree -c -p -A $(pgrep dockerd)

dockerd(6345)-+-docker-proxy(720)-+-{docker-proxy}(721)

| |-{docker-proxy}(722)

| |-{docker-proxy}(723)

| |-{docker-proxy}(724)

| |-{docker-proxy}(725)

| |-{docker-proxy}(726)

| |-{docker-proxy}(727)

| `-{docker-proxy}(728)

|-docker-proxy(7794)-+-{docker-proxy}(7808)



Docker-File:

* Group related commands (apt-get, pip and similar) into a single RUN instruction, so they form one layer and their cache is invalidated together;

* Clean package-manager caches (for example the APT cache) in the same RUN instruction that created them, otherwise the files stay in the layer and inflate the image;

* Copy the dependency manifests before the application sources, so that the dependency-installation layer stays cached while the sources change:

ADD ./app/package.json /app

RUN npm install

ADD ./app /app



 Docker

** Rocket (rkt) is a container engine from CoreOS (now part of Red Hat), with an emphasis on security.

** Hyper-V is an environment for running Docker on Windows, based on lightweight virtual machines rather than shared-kernel isolation.

 Docker    ,     ,      ,   RKT,    containerd:

* CRI-O is an OpenSource project that implements the Kubernetes CRI (Container Runtime Interface), following the Runtime Specification (github.com/opencontainers/runtime-spec) and the Image Specification (github.com/opencontainers/image-spec). Unlike Docker, CRI-O 1.0 supported Kubernetes from version 1.7 in 2017 and is used in MiniKube and Kubic. Its CLI is Podman, which largely mirrors the Docker commands but has no built-in orchestration (Docker Swarm); it ships with Linux Fedora.

* CRI (Container Runtime Interface, Kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-Kubernetes/) defines the interface between Kubernetes and the container runtime (Executor, Supervisor, Metadata, Content, Snapshot, Events and Metrics) on Linux hosts.

** CNI (Container Networking Interface) covers container networking.

Portainer

    Portainer:

essh@kubernetes-master:~/microKubernetes$ cat << EOF > docker-compose.monitoring.yml
version: '2'
services:
  portainer:
    image: portainer/portainer
    command: -H unix:///var/run/docker.sock
    restart: always
    ports:
      - 9000:9000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./portainer_data:/data
EOF



essh@kubernetes-master:~/microKubernetes$ docker-compose -f docker-compose.monitoring.yml up -d



   Prometheus

Monitoring ranges from collecting metrics and alerting (for example via SaaS such as PagerDuty) to anomaly prediction with machine learning, known as AIOps (Artificial Intelligence for IT Operations, https://www.gartner.com/en/information-technology/glossary/aiops-artificial-intelligence-operations).

   :

*  ;

*   ;

* ;

* .

       :



*  ( , , Kubernetes, ), ;

*  ( , ,  ), ;

* - (  ,  ).

    :

*  (), ;

*  (), ;

* IAOps (, ).

              .  

For logs there is the ELK stack and SaaS error trackers such as Sentry; for distributed tracing across microservices, Jaeger and Zipkin.

      ,         , ,

 ,   ,     ,       

 ,          GIT .        , 

        ,      .  Sentry

  ,         ,    

,  .

     :

*    Cloud: Azure Monitoring, Amazon CloudWatch, Google Cloud Monitoring

*        SaaS: DataDog, NewRelic

* CloudNative: Prometheus

*    OnPremis: Zabbix



Zabbix   1998     OpenSource    GPL  2001.   ,  :

 - ,    ,    .     

 ,     .  

     ,   , , ,    . 

  :



    ,      Zabbix 

HTTP  Zabbix    http, , 

SNMP       

IPMI       , ,  

In 2019, Gartner's quadrant for monitoring systems of this class included:

** Dynatrace;

** Cisco (AppDynamics);

** New Relic;

** Broadcom (CA Technologies);

** Riverbed  Microsoft;

** IBM;

** Oracle;

** SolarWinds;

** Micro Focus;

** ManageEngine  Tingyun.

   :

** Correlsense;

** Datadog;

** Elastic;

** Honeycomb;

** Instant;

** Jennifer Soft;

** Light Step;

** Nastel Technologies;

** SignalFx;

** Splunk;

** Sysdig.

     Docker ,    (,    )   ()   .       docker logs name_container.     Docker  "    "      .       less  tail.        ,           ,     vi.      

Docker stores container layers with a size limit; when a container's writable layer approaches the limit (by default the devicemapper base size), writes fail. The limit can be raised by restarting the daemon with a different base size: stop it with service docker stop (all containers stop) and start it with service docker start, or run the daemon directly as /bin/dockerd --storage-opt dm.basesize=50G --storage-opt

 Container   ,   ,             .     .    , , Zabbix, Graphite, Prometheus, Nagios, InfluxData, OkMeter, DataDog, Bosum, Sensu  ,     Zabbix  Prometheus (). ,  ,      ,       (   SSH   ), ,      ,    ,   .     :        ,    ,      ,   ,     .    Zabbix  Prometheus               ,    . Zabbix       ,     ,       ,        ,   .      ,      Docker,   ,   Kubernetes,       ,               ,    Prometheus  Service Discovery   Kubernetes       (namespace),  (service)    (POD),     Grafana  .  Kubernetes,   The News Stack 2017 Kubernetes UserExperience,   63% ,       .

Basic metrics cover hardware resources (CPU, RAM, disk) and the applications themselves. In Kubernetes, Heapster-based pipelines used to be common but are now considered non-core. Possible bundles:

* cAdvisor + Heapster + InfluxDB

* cAdvisor + collectd + Heapster

* cAdvisor + Prometheus

* snapd + Heapster

* snapd + SNAP cluster-level agent

* Sysdig

      .    OpenSource,      .       :  ,    ,   ,  ,     .   ,   ,       .    InfluxDB,     ,    .         ,    .      ,        ,    ,      .   ,  pull- ,  Prometheus.         ,                , :

cpu_usage: 2

cpu_usage{app: myapp} : 2

Prometheus   ,    2012,   2016     CNCF (Cloud Native Computing Foundation). Prometheus  :

* TSDB (Time Series Database), a store optimized for time-ordered metrics; it is embedded in Prometheus rather than run as a separate service, and keeps only a limited retention window, so long-term history requires a remote backend.

* Service Discovery: in Kubernetes, Prometheus discovers PODs through the API and scrapes them, for example on port 9121 over TCP.

* Grafana (a separate product, included by default in most bundles), the UI with dashboards that queries Prometheus via PromQL.

Prometheus itself only scrapes and stores metrics; the metrics are exposed by exporters, which collect system- or application-specific values and publish them over HTTP for Prometheus to scrape. For example, NodeExporter exposes machine metrics (CPU, memory, network) with names prefixed node_*; to use it, point Prometheus at the NodeExporter URL. By default NodeExporter is reachable on localhost on port 9256. Typical exporters:

** node_exporter exposes host metrics (CPU, Memory, Network);

** snmp_exporter    SNMP;

** mysqld_exporter     MySQL;

** consul_exporter     Consul;

** graphite_exporter     Graphite;

** memcached_exporter     Memcached;

** haproxy_exporter    HAProxy;

** CAdvisor   ;

** process-exporter     ;

** metrics-server: CPU, Memory, File-descriptors, Disks;

** cAdvisor  a Docker daemon metrics  containers monitoring;

** kube-state-metrics  deployments, PODs, nodes.
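Hooking one of these exporters into Prometheus is a small scrape configuration; a minimal sketch for node_exporter, assuming it runs locally on its default port 9100:

```yaml
# prometheus.yml (fragment): scrape node_exporter every 15 seconds
scrape_configs:
  - job_name: node
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:9100']
```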

Prometheus     (https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write),     TSDB  Prometheus  Weave Works Cortex,    ,       Prometheus:

remote_write:

url: "http://localhost:9000/receive"

     .     www.katacoda.com/courses/istio/deploy-istio-on-kubernetes   .  Prometheus       9090:

controlplane $ kubectl -n istio-system get svc prometheus

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

prometheus ClusterIP 10.99.70.170 < none> 9090/TCP 6m59s

To open the UI, forward the WEB port 9090: https://2886795314-9090-ollie08.environments.katacoda.com/graph. Queries are written in PromQL (Prometheus query language), which plays the role of InfluxQL in InfluxDB or SQL in TimescaleDB. Let's query the number of CPU cores: enter machine_cpu_cores in the field and press Execute. Metric names can be explored by typing a prefix, for example machine_cpu_cores or node_cpu_cores. Each result carries a set of labels identifying the node it came from.

machine_cpu_cores{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",instance="controlplane",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="controlplane",kubernetes_io_os="linux"} 2

machine_cpu_cores{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",instance="node01",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="node01",kubernetes_io_os="linux"} 2

   MEMORY     machine_memory_bytes       (  ):

machine_memory_bytes{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",instance="controlplane",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="controlplane",kubernetes_io_os="linux"} 2096992256

machine_memory_bytes{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",instance="node01",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="node01",kubernetes_io_os="linux"} 4092948480

For readability, PromQL can convert the values to GB: machine_memory_bytes / 1000 / 1000 / 1000

{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",instance="controlplane",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="controlplane",kubernetes_io_os="linux"} 2.096992256

{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",instance="node01",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="node01",kubernetes_io_os="linux"} 4.09294848
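The division PromQL performs here is ordinary arithmetic; as a sanity check, the same conversion of the two machine_memory_bytes values can be done in the shell (awk handles the floating-point division):

```shell
# convert the raw machine_memory_bytes values to gigabytes
for bytes in 2096992256 4092948480; do
    echo "$bytes" | awk '{ printf "%.9f\n", $1 / 1000 / 1000 / 1000 }'
done
```

The output matches the PromQL result above: 2.096992256 and 4.092948480.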

Besides machine_memory_bytes there is container_memory_usage_bytes, the memory actually used by each container. The result is a long list; here are the first three entries:

container_memory_usage_bytes{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",container="POD",container_name="POD",id="/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e619e5dc53ed9efcef63f5fe1d7ee71.slice/docker-b6549e892baa8687e4e98a106024b5c31a4af077d7c5544af03a3c72ec8997e0.scope",image="k8s.gcr.io/pause:3.1",instance="controlplane",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="controlplane",kubernetes_io_os="linux",name="k8s_POD_etcd-controlplane_kube-system_0e619e5dc53ed9efcef63f5fe1d7ee71_0",namespace="kube-system",pod="etcd-controlplane",pod_name="etcd-controlplane"} 45056

container_memory_usage_bytes{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",container="POD",container_name="POD",id="/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a815a40_f2de_11ea_88d2_0242ac110032.slice/docker-76711789af076c8f2331d8212dad4c044d263c5cc3fa333347921bd6de7950a4.scope",image="k8s.gcr.io/pause:3.1",instance="controlplane",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="controlplane",kubernetes_io_os="linux",name="k8s_POD_kube-proxy-nhzhn_kube-system_5a815a40-f2de-11ea-88d2-0242ac110032_0",namespace="kube-system",pod="kube-proxy-nhzhn",pod_name="kube-proxy-nhzhn"} 45056

container_memory_usage_bytes{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",container="POD",container_name="POD",id="/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6473aeea_f2de_11ea_88d2_0242ac110032.slice/docker-24ef0e898e1bb7dec9854b67291171aa9c5715d7683f53bdfc2cef49a19744fe.scope",image="k8s.gcr.io/pause:3.1",instance="node01",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="node01",kubernetes_io_os="linux",name="k8s_POD_kube-proxy-6v49x_kube-system_6473aeea-f2de-11ea-88d2-0242ac110032_0",namespace="kube-system",pod="kube-proxy-6v49x",pod_name="kube-proxy-6v49x"} 835584

To see the usage of a single container, filter by label: container_memory_usage_bytes{container_name="prometheus"}

container_memory_usage_bytes{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",container="prometheus",container_name="prometheus",id="/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaf4e833_f2de_11ea_88d2_0242ac110032.slice/docker-b314fb5c4ce8894f872f05bdd524b4b7d6ce5415aeb3fb91d6048441c47584a6.scope",image="sha256:b82ef1f3aa072922c657dd2b2c6b59ec0ac88e69c447998291066e1f67e741d8",instance="node01",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="node01",kubernetes_io_os="linux",name="k8s_prometheus_prometheus-5b77b7d695-knf44_istio-system_eaf4e833-f2de-11ea-88d2-0242ac110032_0",namespace="istio-system",pod="prometheus-5b77b7d695-knf44",pod_name="prometheus-5b77b7d695-knf44"}



283443200

The same in megabytes: container_memory_usage_bytes{container_name="prometheus"} / 1000 / 1000

{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",container="prometheus",container_name="prometheus",id="/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaf4e833_f2de_11ea_88d2_0242ac110032.slice/docker-b314fb5c4ce8894f872f05bdd524b4b7d6ce5415aeb3fb91d6048441c47584a6.scope",image="sha256:b82ef1f3aa072922c657dd2b2c6b59ec0ac88e69c447998291066e1f67e741d8",instance="node01",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="node01",kubernetes_io_os="linux",name="k8s_prometheus_prometheus-5b77b7d695-knf44_istio-system_eaf4e833-f2de-11ea-88d2-0242ac110032_0",namespace="istio-system",pod="prometheus-5b77b7d695-knf44",pod_name="prometheus-5b77b7d695-knf44"}



286.18752

Several labels can be combined in one filter: container_memory_usage_bytes{container_name="prometheus", instance="node01"}

{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",container="prometheus",container_name="prometheus",id="/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaf4e833_f2de_11ea_88d2_0242ac110032.slice/docker-b314fb5c4ce8894f872f05bdd524b4b7d6ce5415aeb3fb91d6048441c47584a6.scope",image="sha256:b82ef1f3aa072922c657dd2b2c6b59ec0ac88e69c447998291066e1f67e741d8",instance="node01",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="node01",kubernetes_io_os="linux",name="k8s_prometheus_prometheus-5b77b7d695-knf44_istio-system_eaf4e833-f2de-11ea-88d2-0242ac110032_0",namespace="istio-system",pod="prometheus-5b77b7d695-knf44",pod_name="prometheus-5b77b7d695-knf44"}



289.890304

A filter on a node that does not exist returns nothing: container_memory_usage_bytes{container_name="prometheus", instance="node02"}

no data

Aggregation functions work across all matching series, e.g. the total, maximum and minimum container memory usage in gigabytes: sum(container_memory_usage_bytes) / 1000 / 1000 / 1000

{} 22.812798976

max(container_memory_usage_bytes) / 1000 / 1000 / 1000

{} 3.6422983679999996

min(container_memory_usage_bytes) / 1000 / 1000 / 1000

{} 0

Aggregations can also be grouped by a label, here by instance: max(container_memory_usage_bytes) by (instance) / 1000 / 1000 / 1000

{instance="controlplane"} 1.641836544

{instance="node01"} 3.6622745599999997
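What `max(...) by (instance)` does — keep the largest value per distinct label value — can be sketched outside Prometheus; below, a few made-up (instance, bytes) samples are grouped with awk:

```shell
# emulate max(container_memory_usage_bytes) by (instance): one maximum
# per instance label, converted to gigabytes
printf '%s\n' \
    'controlplane 1641836544' \
    'node01 3662274560' \
    'node01 835584' |
awk '{ if ($2 > max[$1]) max[$1] = $2 }
     END { for (i in max) printf "%s %.9f\n", i, max[i] / 1e9 }' | sort
```

This prints one line per instance (controlplane 1.641836544, node01 3.662274560), mirroring the two-series result Prometheus returns.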

Expressions can combine metrics and be filtered by the result, e.g. containers whose mapped-file memory exceeds 80% of their total usage: container_memory_mapped_file / container_memory_usage_bytes * 100 > 80

{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",container="POD",container_name="POD",id="/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode45f10af1ae684722cbd74cb11807900.slice/docker-5cb2f2083fbc467b8b394b27b69686d309f951450bcb910d509572aea9922806.scope",image="k8s.gcr.io/pause:3.1",instance="controlplane",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="controlplane",kubernetes_io_os="linux",name="k8s_POD_kube-controller-manager-controlplane_kube-system_e45f10af1ae684722cbd74cb11807900_0",namespace="kube-system",pod="kube-controller-manager-controlplane",pod_name="kube-controller-manager-controlplane"}



80.52631578947368

Disk space is reported by container_fs_limit_bytes, the size of the filesystem a container sees; here are the first two of many entries:

container_fs_limit_bytes{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",container="POD",container_name="POD",device="/dev/vda1",id="/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e619e5dc53ed9efcef63f5fe1d7ee71.slice/docker-b6549e892baa8687e4e98a106024b5c31a4af077d7c5544af03a3c72ec8997e0.scope",image="k8s.gcr.io/pause:3.1",instance="controlplane",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="controlplane",kubernetes_io_os="linux",name="k8s_POD_etcd-controlplane_kube-system_0e619e5dc53ed9efcef63f5fe1d7ee71_0",namespace="kube-system",pod="etcd-controlplane",pod_name="etcd-controlplane"}



253741748224



container_fs_limit_bytes{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",container="POD",container_name="POD",device="/dev/vda1",id="/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a815a40_f2de_11ea_88d2_0242ac110032.slice/docker-76711789af076c8f2331d8212dad4c044d263c5cc3fa333347921bd6de7950a4.scope",image="k8s.gcr.io/pause:3.1",instance="controlplane",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="controlplane",kubernetes_io_os="linux",name="k8s_POD_kube-proxy-nhzhn_kube-system_5a815a40-f2de-11ea-88d2-0242ac110032_0",namespace="kube-system",pod="kube-proxy-nhzhn",pod_name="kube-proxy-nhzhn"}



253741748224



To get the sizes in gigabytes: "container_fs_limit_bytes{device="tmpfs"} / 1000 / 1000 / 1000"

{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",device="tmpfs",id="/",instance="controlplane",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="controlplane",kubernetes_io_os="linux"} 0.209702912

{beta_kubernetes_io_arch="amd64",beta_kubernetes_io_os="linux",device="tmpfs",id="/",instance="node01",job="kubernetes-cadvisor",kubernetes_io_arch="amd64",kubernetes_io_hostname="node01",kubernetes_io_os="linux"} 0.409296896

And to see the real disk rather than the in-memory tmpfs, invert the device filter: "min(container_fs_limit_bytes{device!="tmpfs"} / 1000 / 1000 / 1000)"

{} 253.74174822400002

Besides gauges, which can go up and down, there are counters, which only grow; by convention their names end in "_total". A counter's raw value is rarely interesting by itself; what matters is its rate of change, computed as rate(name_metric_total[time]). The time window is written with a unit suffix: seconds "s", e.g. 40s, 60s, or minutes "m", e.g. 2m, 5m. For accurate results the window should be a multiple of the exporter's scrape interval, otherwise the rate gets distorted.
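What rate() computes can be shown on two counter samples taken 60 seconds apart (the values are invented for illustration): the per-second rate is simply the increase divided by the window length:

```shell
# two samples of a hypothetical _total counter, 60 seconds apart
v1=1000; t1=0
v2=1600; t2=60
# per-second rate over the window, as rate(metric_total[60s]) would report
awk -v v1=$v1 -v v2=$v2 -v t1=$t1 -v t2=$t2 \
    'BEGIN { printf "%.1f\n", (v2 - v1) / (t2 - t1) }'
```

600 extra events over 60 seconds give 10.0 events per second; Prometheus additionally handles counter resets, which this sketch ignores.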

To see every metric a target exposes, request its /metrics endpoint:

controlplane $ curl https://2886795314-9090-ollie08.environments.katacoda.com/metrics 2>/dev/null | head

# HELP go_gc_duration_seconds A summary of the GC invocation durations.

# TYPE go_gc_duration_seconds summary

go_gc_duration_seconds{quantile="0"} 3.536e-05

go_gc_duration_seconds{quantile="0.25"} 7.5348e-05

go_gc_duration_seconds{quantile="0.5"} 0.000163193

go_gc_duration_seconds{quantile="0.75"} 0.001391603

go_gc_duration_seconds{quantile="1"} 0.246707852

go_gc_duration_seconds_sum 0.388611299

go_gc_duration_seconds_count 74

# HELP go_goroutines Number of goroutines that currently exist.
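The summary type seen here also exposes _sum and _count series; dividing one by the other gives the average observation, in this case the mean GC pause over the 74 collections reported above:

```shell
# mean GC pause = go_gc_duration_seconds_sum / go_gc_duration_seconds_count
awk 'BEGIN { printf "%.9f\n", 0.388611299 / 74 }'
```

That is about 5.25 milliseconds per garbage collection.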

Visualization: Prometheus and Grafana

Before wiring up full metric collection, let's run a standalone Prometheus in a container:

essh@kubernetes-master:~$ docker run -d --net=host --name prometheus prom/prometheus

09416fc74bf8b54a35609a1954236e686f8f6dfc598f7e05fa12234f287070ab



essh@kubernetes-master:~$ docker ps -f name=prometheus

CONTAINER ID IMAGE NAMES

09416fc74bf8 prom/prometheus prometheus

The UI is now available locally:

essh@kubernetes-master:~$ firefox localhost:9090

Prometheus already exposes metrics about itself, such as go_gc_duration_seconds{quantile="0"}:

essh@kubernetes-master:~$ curl localhost:9090/metrics 2>/dev/null | head -n 4

# HELP go_gc_duration_seconds A summary of the GC invocation durations.

# TYPE go_gc_duration_seconds summary

go_gc_duration_seconds{quantile="0"} 1.0097e-05

go_gc_duration_seconds{quantile="0.25"} 1.7841e-05

In the UI at localhost:9090 switch to the Graph tab. The query field has an "insert metrics at cursor" dropdown that lists the same metrics served at localhost:9090/metrics, for example go_gc_duration_seconds. Choose go_gc_duration_seconds and press Execute. The Console tab shows the values:

go_gc_duration_seconds{instance="localhost:9090",job="prometheus",quantile="0"} 0.000009186
go_gc_duration_seconds{instance="localhost:9090",job="prometheus",quantile="0.25"} 0.000012056
go_gc_duration_seconds{instance="localhost:9090",job="prometheus",quantile="0.5"} 0.000023256
go_gc_duration_seconds{instance="localhost:9090",job="prometheus",quantile="0.75"} 0.000068848
go_gc_duration_seconds{instance="localhost:9090",job="prometheus",quantile="1"} 0.00021869

Switching to the Graph tab renders the same series as a chart.

Out of the box Prometheus has only metrics about itself: go_*, net_*, process_*, prometheus_*, promhttp_*, scrape_* and up. The Docker daemon can also export metrics in Prometheus format, on port 9323:

essh@kubernetes-master:~$ curl http://localhost:9323/metrics 2>/dev/null | head -n 20

# HELP builder_builds_failed_total Number of failed image builds

# TYPE builder_builds_failed_total counter

builder_builds_failed_total{reason="build_canceled"} 0

builder_builds_failed_total{reason="build_target_not_reachable_error"} 0

builder_builds_failed_total{reason="command_not_supported_error"} 0

builder_builds_failed_total{reason="dockerfile_empty_error"} 0

builder_builds_failed_total{reason="dockerfile_syntax_error"} 0

builder_builds_failed_total{reason="error_processing_commands_error"} 0

builder_builds_failed_total{reason="missing_onbuild_arguments_error"} 0

builder_builds_failed_total{reason="unknown_instruction_error"} 0

# HELP builder_builds_triggered_total Number of triggered image builds

# TYPE builder_builds_triggered_total counter

builder_builds_triggered_total 0

# HELP engine_daemon_container_actions_seconds The number of seconds it takes to process each container action

# TYPE engine_daemon_container_actions_seconds histogram

engine_daemon_container_actions_seconds_bucket{action="changes",le="0.005"} 1

engine_daemon_container_actions_seconds_bucket{action="changes",le="0.01"} 1

engine_daemon_container_actions_seconds_bucket{action="changes",le="0.025"} 1

engine_daemon_container_actions_seconds_bucket{action="changes",le="0.05"} 1

engine_daemon_container_actions_seconds_bucket{action="changes",le="0.1"} 1

This endpoint is not enabled by default; to turn it on, add the metrics address to the Docker daemon configuration and restart the daemon:

essh@kubernetes-master:~$ sudo chmod a+w /etc/docker/daemon.json

essh@kubernetes-master:~$ echo '{ "metrics-addr" : "127.0.0.1:9323", "experimental" :true }' |jq -M -f /dev/null > /etc/docker/daemon.json

essh@kubernetes-master:~$ cat /etc/docker/daemon.json

{

"metrics-addr": "127.0.0.1:9323",

"experimental": true

}



essh@kubernetes-master:~$ systemctl restart docker



So far Prometheus scrapes only itself. To collect metrics of the host machine itself (CPU, memory, disks), run node-exporter in a container, giving it read access to the host's /proc, /sys and root filesystem:

essh@kubernetes-master:~$ docker run -d \
    -v "/proc:/host/proc" \
    -v "/sys:/host/sys" \
    -v "/:/rootfs" \
    --net="host" \
    --name=explorer \
    quay.io/prometheus/node-exporter:v0.13.0 \
    -collector.procfs /host/proc \
    -collector.sysfs /host/sys \
    -collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"

1faf800c878447e6110f26aa3c61718f5e7276f93023ab4ed5bc1e782bf39d56



The exporter now serves host metrics at localhost:9100. Let's point Prometheus at all three targets with a config file:

essh@kubernetes-master:~$ mkdir prometheus && cd $_



essh@kubernetes-master:~/prometheus$ cat << EOF > ./prometheus.yml
global:
  scrape_interval: 1s
  evaluation_interval: 1s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['127.0.0.1:9090', '127.0.0.1:9100', '127.0.0.1:9323']
        labels:
          group: 'prometheus'
EOF



essh@kubernetes-master:~/prometheus$ docker rm -f prometheus

prometheus



essh@kubernetes-master:~/prometheus$ docker run \
    -d \
    --net=host \
    --restart always \
    --name prometheus \
    -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus

7dd991397d43597ded6be388f73583386dab3d527f5278b7e16403e7ea633eef



essh@kubernetes-master:~/prometheus$ docker ps \
    -f name=prometheus

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

7dd991397d43 prom/prometheus "/bin/prometheus c" 53 seconds ago Up 53 seconds prometheus

The node exporter alone provides 1702 metrics:

essh@kubernetes-master:~/prometheus$ curl http://localhost:9100/metrics | grep -v '#' |wc -l

1702
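The pipeline above simply drops the # HELP / # TYPE comment lines and counts the remaining samples; the same logic on a tiny inline /metrics-style fragment:

```shell
# count metric samples, skipping the # HELP / # TYPE comment lines
printf '%s\n' \
    '# HELP go_goroutines Number of goroutines that currently exist.' \
    '# TYPE go_goroutines gauge' \
    'go_goroutines 92' \
    'node_memory_Active 1234567' |
grep -v '#' | wc -l
```

Two sample lines remain after the comments are filtered out.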

Prometheus ships simple predefined consoles for node metrics such as node_memory_Active; they are served at:

http://localhost:9090/consoles/node.html

http://localhost:9090/consoles/node-cpu.html

For proper dashboards, use Grafana. Run it in a container:

essh@kubernetes-master:~/prometheus$ docker run \
    -d \
    --name=grafana \
    --net=host \
    grafana/grafana

Unable to find image 'grafana/grafana:latest' locally

latest: Pulling from grafana/grafana

9d48c3bd43c5: Already exists

df58635243b1: Pull complete

09b2e1de003c: Pull complete

f21b6d64aaf0: Pull complete

719d3f6b4656: Pull complete

d18fca935678: Pull complete

7c7f1ccbce63: Pull complete

Digest: sha256:a10521576058f40427306fcb5be48138c77ea7c55ede24327381211e653f478a

Status: Downloaded newer image for grafana/grafana:latest

6f9ca05c7efb2f5cd8437ddcb4c708515707dbed12eaa417c2dca111d7cb17dc



essh@kubernetes-master:~/prometheus$ firefox localhost:3000

Log in with the user admin and password admin; on first login you will be asked to set a new password.

In Grafana, after changing the admin password, add a data source: choose the Prometheus type, enter the URL localhost:9090, leave the remaining settings at their defaults (or adjust the access mode), and press Save and Test; Grafana should confirm that the Prometheus data source works.

In a real installation, accounts and passwords should not be managed by hand in every tool; they are usually delegated to a central directory service such as Microsoft Active Directory.

The Dashboard tab offers ready-made boards; via New Dashboard pick, for example, Prometheus 2.0 Stats and you will immediately see live graphs:

    "+"  "Dashboard",   .     ,  ,     ,          .     ,  ,  ,        .  Prometheus

  :

essh@kubernetes-master:~/prometheus$ wget \

https://raw.githubusercontent.com/grafana/grafana/master/devenv/docker/ha_test/docker-compose.yaml



--2019-10-30 07:29:52--  https://raw.githubusercontent.com/grafana/grafana/master/devenv/docker/ha_test/docker-compose.yaml

Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.112.133

Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.112.133|:443... connected.

HTTP request sent, awaiting response... 200 OK

Length: 2996 (2,9K) [text/plain]

Saving to: 'docker-compose.yaml'

docker-compose.yaml 100%[=========>] 2,93K --.-KB/s in 0s

2019-10-30 07:29:52 (23,4 MB/s) - 'docker-compose.yaml' saved [2996/2996]

Collecting metrics from an application

So far we have looked at infrastructure metrics that Prometheus and the exporters gather on their own; now let's export metrics from an application, taking a NodeJS service as the example. First initialize a NodeJS project:

vagrant@ubuntu:~$ mkdir nodejs && cd $_



vagrant@ubuntu:~/nodejs$ npm init

This utility will walk you through creating a package.json file.

It only covers the most common items, and tries to guess sensible defaults.



See `npm help json` for definitive documentation on these fields

and exactly what they do.



Use `npm install <pkg> --save` afterwards to install a package and

save it as a dependency in the package.json file.



name: (nodejs)

version: (1.0.0)

description:

entry point: (index.js)

test command:

git repository:

keywords:

author: ESSch

license: (ISC)

About to write to /home/vagrant/nodejs/package.json:



{

"name": "nodejs",

"version": "1.0.0",

"description": "",

"main": "index.js",

"scripts": {

"test": "echo \"Error: no test specified\" && exit 1"

},

"author": "ESSch",

"license": "ISC"

}



Is this ok? (yes) yes

Now write the WEB server itself. First install the framework:

vagrant@ubuntu:~/nodejs$ npm install Express --save

npm WARN deprecated Express@3.0.1: Package unsupported. Please use the express package (all lowercase) instead.

nodejs@1.0.0 /home/vagrant/nodejs

└── Express@3.0.1

npm WARN nodejs@1.0.0 No description

npm WARN nodejs@1.0.0 No repository field.



vagrant@ubuntu:~/nodejs$ cat << EOF > index.js

const express = require('express');

const app = express();

app.get('/healt', function (req, res) {

res.send({status: "Healt"});

});

app.listen(9999, () => {

console.log({status: "start"});

});

EOF



vagrant@ubuntu:~/nodejs$ node index.js &

[1] 18963

vagrant@ubuntu:~/nodejs$ { status: 'start' }



vagrant@ubuntu:~/nodejs$ curl localhost:9999/healt

{"status":"Healt"}

The service responds; the next step would be to expose Prometheus metrics from it using the Prometheus client library for NodeJS.

A single Prometheus server stores its data locally, which limits retention and makes a global view over several servers difficult. Thanos addresses this: it runs as a side-car next to each Prometheus (much like Istio attaches side-cars to services), exposes one query API over all of them, and can also be installed with a Helm chart. A sketch of running the side-car, pointing it at the local Prometheus and its config:

docker run --rm quay.io/thanos/thanos:v0.7.0 --help

docker run -d --net=host --rm \
    -v $(pwd)/prometheus0_eu1.yml:/etc/prometheus/prometheus.yml \
    --name prometheus-0-sidecar-eu1 \
    -u root \
    quay.io/thanos/thanos:v0.7.0 \
    sidecar \
    --http-address 0.0.0.0:19090 \
    --grpc-address 0.0.0.0:19190 \
    --reloader.config-file /etc/prometheus/prometheus.yml \
    --prometheus.url http://127.0.0.1:9090

Notifications are an important part of monitoring. Alerts are defined as PromQL expressions that Prometheus evaluates periodically; when an expression starts holding (returning data), Prometheus fires the alert to Alertmanager, which routes it to a notification channel such as email or Slack.

For example, the metric "up" is 0 or 1 depending on whether a target answers; the following rule fires when an instance is down for more than one minute:

groups:
  - name: example
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 1m

When up stays at 0 for one minute, Prometheus sends the InstanceDown alert to Alertmanager, which decides what to do with it. Let's configure it to deliver this alert by email. A minimal Alertmanager config:

global:
  smtp_smarthost: 'localhost:25'
  smtp_from: 'youraddress@example.org'

route:
  receiver: example-email

receivers:
  - name: example-email
    email_configs:
      - to: 'youraddress@example.org'

Alertmanager supports many delivery channels, from email to messengers. Here it talks plain SMTP (Simple Mail Transfer Protocol), so for mail to actually be sent, a mail transfer agent such as sendmail must be listening on localhost:25 next to Alertmanager.

Log collection and analysis

Full-text search over logs is commonly built on the OpenSource engine Apache Lucene. Two mature search servers are built on top of it: Solr and Elasticsearch; their capabilities largely overlap, so the choice is mostly about ecosystem. Around ElasticSearch several standard stacks have formed: ELK (Elasticsearch(Apache Lucene), Logstash, Kibana), EFK (Elasticsearch, Fluentd, Kibana), and packaged products such as GrayLog2. Both GrayLog2 and the stacks (ELK/EFK) can be installed by hand, but in Kubernetes it is usually easier via a Helm chart, e.g. for EFK:



helm install efk-stack stable/elastic-stack --set logstash.enabled=false --set fluentd.enabled=true --set fluentd-elastics

Alternatively, there is a logging stack from the Prometheus ecosystem: PLG, that is Promtail (the shipping agent) plus Loki (a log store designed along the lines of Prometheus) plus Grafana.

A brief comparison of ElasticSearch and Solr (both built on Lucene):

Elastic:

** open source core with commercial extensions (licensing caveats);

** rich query interface: REST-full JSON DSL, plus SQL (commercial);

*** Full-text search;

*** Real-time index;

*** aggregations;

*** filtering with Elastic FQ;

*** autocompletion (commercial);

*** geo search;

*** result highlighting;

*** analytics over search results;

** built on Lucene;

** Parent-child (JOIN);

** Scalable natively;

** on the market since 2010;

Solr:

** OpenSource;

** supports JOIN between collections;

*** Full-text search;

*** Real-time index;

*** faceted search;

*** spatial (geo) search;

*** document parsing: Word, PDF and others;

*** result highlighting;

*** caching: query caches;

** built on Lucene;

** JSON join;

** Scalable: Solr Cloud (sharding) && ZooKeeper (coordination);

** on the market since 2004.



Logs are mostly needed for debugging and incident analysis: it is important not just to write them, but to collect them in one place and be able to search and analyze them there. A typical source is, for example, NGINX's access_log, which records every request with its status, origin and timing. The de facto standard for centralized logging is the ELK stack: Logstash collects and transforms the logs, Elasticsearch stores and indexes them, Kibana searches and visualizes them. Nowadays the stack is more often called Elastic Stack, since Logstash is frequently replaced by lighter shippers such as Fluentd or Rsyslog, and Kibana can be swapped for Grafana. Note the difference in focus: Kibana is stronger at searching and analyzing logs, Grafana at dashboards over numeric metrics, for example together with cAdvisor for container monitoring. The ELK stack is not limited to logs: it works just as well for any other mass of semi-structured documents.



Elasticsearch stores documents as JSON. A document need not follow a fixed schema (Elasticsearch is schema-free), but consistent fields and types make search and aggregation far more effective. There are two ways to get well-formed JSON into it: have the source produce structured records directly — NGINX, for instance, can be configured to write its logs in JSON — or take whatever format the source emits and convert it with Logstash. Logstash can parse many input formats (JSON, XML, plain text with patterns), enrich records with additional fields, and normalize them, so that heterogeneous sources end up in one consistent index. Parsing free text with patterns is fragile and costs CPU, so where possible it is better to configure the source itself, e.g. NGINX, to log JSON natively.



Logs also have to be delivered from the machines where they appear to where they are stored; the common shippers are Logstash, Filebeat and Fluentd. Logstash is part of Elastic Stack and is often run together with ELK in Docker. It is Java (JRuby) based and heavy, so running a full instance on every node is wasteful; typically a lightweight Filebeat reads the files and forwards them, possibly over telnet-like protocols, to a central Logstash, which parses the records and writes them to Elastic Search. For container logs and system sources (such as MySQL logs) Fluentd is popular: in Kubernetes it is installed with a Helm chart as a per-node agent that picks up all container output.



Old indices have to be cleaned up; the Curator utility deletes indices from ElasticSearch by age or size on a schedule.



Let's look closer at the delivery agents: logstash, fluentd, filebeat and others.

fluentd is less demanding than Logstash. Its configuration lives in /etc/td-agent/td-agent.conf and consists of sections:

** match describes what to do with matching data;

** include pulls in other configuration files;

** system holds system-wide settings.

Logstash is much more than a forwarder: it is a full processing pipeline. It is started with a configuration file, e.g. bin/logstash agent -f /env/conf/my.conf, where inputs, filters and outputs are declared. Since running full Logstash on every source machine is heavy, the lightweight Logstash Forwarder (formerly Lumberjack) can ship logs over the lumberjack protocol to a central Logstash server. For network-level collection, e.g. of MySQL traffic, there is Packetbeat
(https://www.8host.com/blog/sbor-metrik-infrastruktury-s-pomoshhyu-packetbeat-i-elk-v-ubuntu-14-04/).

Logstash filters transform events on the way through:

** grok extracts fields from unstructured text by patterns, effectively turning a log line into JSON;

** date parses timestamps out of the message and sets the event time;

** kv splits key=value pairs into fields;

** mutate renames, removes and rewrites fields, for example replacing "/" with "_";

** multiline glues multi-line records (such as Java stack traces) into one event.

For example, to parse messages of the form "01.01.2021 INFO 1" arriving in the "message" field:

filter {
  grok {
    type => "my_log"
    match => ["message", "%{MYDATE:date} %{WORD:loglevel} ${ID.id.int}"]
  }
}

Here ${ID.id.int} means: apply the pattern named ID, put the result into the field id, and cast it to the type int.
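Outside Logstash the same idea — named groups pulled out of a flat line — can be mimicked with sed; a sketch for the "01.01.2021 INFO 1" example (the field names date, loglevel and id are illustrative):

```shell
# split "date level id" log lines into key=value pairs, roughly what the
# grok filter above emits as JSON fields
echo '01.01.2021 INFO 1' |
sed -E 's/^([0-9.]+) ([A-Z]+) ([0-9]+)$/date=\1 loglevel=\2 id=\3/'
```

Grok does essentially this, except its patterns are named and reusable and the result lands in the event as typed fields.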

The Output section decides where events go: to the console ("Stdout"), to a file ("File"), over HTTP as JSON to a REST API ("Elasticsearch"), or by mail ("Email"). Outputs can be conditional on fields set earlier in filter, for example:

output {
  if [type] == "Info" {
    elasticsearch {
      host => localhost
      index => "log-%{+YYYY.MM.dd}"
    }
  }
}



Unlike a general-purpose SQL database, Elasticsearch is built for search and analytics over large volumes of append-only data. It belongs to the NoSQL family and trades strict consistency for speed and horizontal scaling, which fits logs well: a delayed record is tolerable, slow search is not. Its standard WEB UI is Kibana (an AngularJS application). Writing each day into its own index, as the log-%{+YYYY.MM.dd} template above does, keeps indices small and makes retention trivial: old data is removed by dropping whole indices, which is far cheaper than deleting individual documents.

Once records start arriving in Elasticsearch, create an index pattern in Kibana, e.g. log-*, choose the fields of interest, and build dashboards on them.

A fuller Elasticsearch output section for Logstash:

output {
  if [type] == "Info" {
    elasticsearch {
      cluster => elasticsearch
      action => "create"
      hosts => ["localhost:9200"]
      index => "log-%{+YYYY.MM.dd}"
      document_type => ....
      document_id => "%{id}"
    }
  }
}



ElasticSearch itself accepts documents over a JSON REST API, so an application can write to it directly; Logstash earns its place when logs first need to be normalized into JSON. For text parsing it ships a library of ready-made grok patterns, e.g. %{IP:client}; the full list is at https://github.com/elastic/logstash/tree/v1.1.9/patterns. Community patterns exist for common formats too, e.g. for NGINX: https://github.com/zooniverse/static/blob/master/logstash Nginx.conf. A good write-up in Russian: https://habr.com/post/165059/.

ElasticSearch is a NoSQL database, so it gives no transactional guarantees, and a record may be indexed with a delay or, in failure scenarios, lost. That must be kept in mind when logging events that matter. One useful technique (used, for example, by Serilog in the .NET world) is to store an EventType — a hash of the message template — which lets identical events be grouped and anomalies spotted; adding a sequential ID to the events makes gaps, and thus lost records, detectable.

Check that ElasticSearch is healthy (details: https://habr.com/post/280488/): curl -X GET localhost:9200

sudo sysctl -w vm.max_map_count=262144

$ curl 'localhost:9200/_cat/indices?v'

health status index uuid pri rep docs.count docs.deleted store.size pri.store.size

green open graylog_0 h2NICPMTQlqQRZhfkvsXRw 4 0 0 0 1kb 1kb

green open .kibana_1 iMJl7vyOTuu1eG8DlWl1OQ 1 0 3 0 11.9kb 11.9kb

yellow open indexname le87KQZwT22lFll8LSRdjw 5 1 1 0 4.5kb 4.5kb

yellow open db i6I2DmplQ7O40AUzyA-a6A 5 1 0 0 1.2kb 1.2kb

Create a document of type post with id 1 in the index blog: curl -X PUT "$ES_URL/blog/post/1?pretty" -d'

Working with ElasticSearch

Above we deployed the full ELK stack: ElasticSearch for storage and search, Logstash for processing, Kibana for the UI, with Filebeat shipping logs into Logstash. Logstash, however, is only one possible entry point: data can also be pushed to ElasticSearch directly as JSON over its API.

Before loading data it is worth checking that ElasticSearch answers and experimenting with requests; Kibana's Dev Tools console is convenient for that. It is also worth deciding the document structure in advance: which fields will be searched, which aggregated, which merely displayed in views. Although ElasticSearch accepts documents of any shape, a thought-out mapping with correct field types makes queries both simpler and much faster, while an uncontrolled flow of ad hoc fields bloats the index; so normalize documents before they are written, not after.

In other words, even though ElasticSearch is a NoSQL store and will take arbitrary documents, structuring them up front is what makes search and analytics on them effective.

ElasticSearch is driven entirely over HTTP with the verbs GET, PUT and DELETE; from BASH on linux the natural client is Curl:

# create a document (the index and the type are created implicitly)

curl -XPUT mydb/mytable/1 -d'{

....

}'



# fetch the document by id

curl -XGET mydb/mytable/1



# search

curl -XGET mydb/_search -d'{

"query": {

"match": {

"name": "my"

}

}

}'



# delete the index

curl -XDELETE mydb



Clouds as infrastructure: Google Cloud and Amazon AWS

So far we have deployed to our own machines or plain VPS hosts. A cloud is more than rented hardware: it is an ecosystem of managed services around the computing resources, consumed in the SaaS (Software as a Service) manner through a WEB console or an API. The practical gains are elasticity and pay-per-use: capacity can be added for a traffic spike and released afterwards, which a fixed fleet of VPS machines cannot do; networking, storage, balancers, build agents and other DevOps plumbing come as ready services; and much of the operational burden — hardware failures, datacenter power and cooling, base software updates — becomes the provider's problem rather than yours. The price is dependence on the provider and its billing, so resources must be watched and unused ones deleted promptly.

To stay portable and avoid vendor lock-in, Kubernetes gives a uniform API on top of any provider: the same manifests run on Amazon AWS, Google Cloud, Microsoft Azure, or locally in MiniKube.

We'll use Google Cloud here. In 2018 it offered a free trial ($300 of credit for a year), which is plenty for the experiments below. Resource limits are controlled through quotas (IAM & admin > Quotas); trial accounts have modest defaults, which protects against accidentally burning through the budget, and quota increases can be requested when genuinely needed.

Register at cloud.google.com; the console then lives at console.cloud.google.com, where a billing account with a card must be attached. The free tier gives $300 of credit for 365 days (charges are drawn from the credit first).

Clouds also cover mobile backends as a service (MBaaS, Mobile Backend as a Service); the main offerings are Google Firebase, AWS Mobile and Azure Mobile.

Google App Engine

Building a WEB cluster

In the console, check the quotas (IAM & admin > Quotas): on the trial account Static IP addresses is 1, which limits how many services can get dedicated external addresses. The Kubernetes Engine section manages Kubernetes clusters, and Marketplace offers ready configurations, for example a 2-node NGINX. We'll create such a cluster and give it a dedicated IP address.

Marketplace: choose Kubernetes: NGINX and create a standard-cluster for NGINX, picking CPU and RAM, 2 or 3 nodes and a Kubernetes version (1.11.3 here; clusters can be created from 1.10). The cluster appears in Kubernetes Engine, where it is managed. Day-to-day work is done with the kubectl utility; command references: https://kubernetes.io/docs/reference/kubectl/overview/ and the cheat sheet https://gist.github.com/ipedrazas/95391ffd88190bea94ca188d3d2c1cbe.

Creating the cluster from the command line:

A cluster lives inside a project, so first create and select a project:

NAME_PROJECT=bitrix_12345;

NAME_CLUSTER=bitrix;



gcloud projects create $NAME_PROJECT --name $NAME_CLUSTER;

gcloud config set project $NAME_PROJECT;

gcloud projects list;

Key parameters: the zone must always be given (--zone), the boot disk is at least 10Gb, and machine types are listed at https://cloud.google.com/compute/docs/machine-types. With everything else left at defaults, a cluster is created with:

gcloud container clusters create $NAME_CLUSTER --zone europe-north1-a

The operation takes a few minutes. When you are done experimenting, delete the project so that billing stops:

gcloud projects delete $NAME_PROJECT;

Now the same with the parameters spelled out explicitly:

$ gcloud container clusters create mycluster \
    --machine-type=n1-standard-1 --disk-size=10GB --image-type ubuntu \
    --scopes compute-rw,gke-default \
    --machine-type=custom-1-1024 \
    --cluster-version=1.11 --enable-autoupgrade \
    --num-nodes=1 --enable-autoscaling --min-nodes=1 --max-nodes=2 \
    --zone europe-north1-a

The --enable-autorepair flag tells GKE to recreate broken nodes automatically. At the time of writing Kubernetes 1.11 was current and clusters could be created starting from 1.10; a patch release can be pinned exactly, e.g. --cluster-version=1.11.4-gke.12, or the major line given as --cluster-version=1.11 together with --enable-autoupgrade. Node count and autoscaling are set with --num-nodes=1 --enable-autoscaling --min-nodes=1 --max-nodes=2.

Pick the machine type deliberately. The default n1-standart-1 node carries about 3.5Gb of memory, so a three-node cluster has roughly 10.5Gb. Not all of it is available to your applications: every node runs system PODs that Kubernetes itself needs, such as kube-dns-548976df6c-mlljx resolving names inside the cluster, and they take a noticeable share. A bare NGINX container fits in 1Gb (1024Mb), while a LAMP (Apache MySQL PHP) stack wants more. In practice roughly 1Gb per node goes to the system, so plan with only the remainder for workloads; note also that memory cannot be pooled across nodes — a POD that needs 2Gb must find 2Gb free on a single node, and leftover scraps of 256Mb (0.25Gb) scattered over the nodes do not add up to a usable 1Gb. Counted this way, two such nodes leave only about 2.5Gb truly usable, while three give 10.5Gb of raw capacity; which is enough depends on what is deployed.
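The back-of-the-envelope capacity math is easy to script; the figures below (3.5Gb per n1-standart-1 node, roughly 1Gb of it eaten by system PODs) are the rough estimates used in the text, not exact GKE numbers:

```shell
# usable memory for application PODs = nodes * (node RAM - system overhead)
nodes=3; node_gb=3.5; system_gb=1
awk -v n=$nodes -v ram=$node_gb -v sys=$system_gb \
    'BEGIN { printf "total=%.1fGb usable=%.1fGb\n", n * ram, n * (ram - sys) }'
```

Three such nodes give 10.5Gb of raw and about 7.5Gb of usable memory; remember that a single POD still has to fit entirely on one node.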

Now check the cluster and set up kubectl access; get-credentials writes the connection settings to ${HOME}/.kube/config:

$ gcloud container clusters get-credentials b --zone europe-north1-a --project essch

$ kubectl port-forward Nginxlamp-74c8b5b7f-d2rsg 8080:8080

Forwarding from 127.0.0.1:8080 -> 8080

Forwarding from [::1]:8080 -> 8080

$ google-chrome http://localhost:8080 # does not work in Google Shell

$ kubectl expose deployment Nginxlamp --type="LoadBalancer" --port=8080

The kubectl utility is installed together with gcloud; if it is missing, install it with gcloud components install kubectl, though that alone does not verify the cluster is reachable.

The POD here is the front-end of our application; it was created through a Deployment, which keeps the declared number of replicas running.

To survive load spikes without manual intervention, rely on autoscaling: --enable-autoscaling --min-nodes=1 --max-nodes=2.

Creating a cluster in GCP from the Cloud Shell

A cluster can be created in two ways: through the Google Cloud Platform web UI or with the gcloud CLI. In the UI, open Kubernetes Engine and create a cluster, e.g. 2 CPU in europe-north-1 (the region closest to me), with a recent Kubernetes version. Further work happens in the built-in Cloud Shell. The same via the CLI is reproducible and scriptable:

gcloud container clusters create mycluster --zone europe-north1-a

After a few minutes the cluster is up with 3 nodes, visible both as a cluster and as ordinary compute instances:

esschtolts@cloudshell:~ (essch)$ gcloud container clusters list --filter=name=mycluster

NAME LOCATION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS

mycluster europe-north1-a 35.228.37.100 n1-standard-1 1.10.9-gke.5 3 RUNNING



esschtolts@cloudshell:~ (essch)$ gcloud compute instances list

NAME MACHINE_TYPE EXTERNAL_IP STATUS

gke-mycluster-default-pool-43710ef9-0168 n1-standard-1 35.228.73.217 RUNNING

gke-mycluster-default-pool-43710ef9-39ck n1-standard-1 35.228.75.47 RUNNING

gke-mycluster-default-pool-43710ef9-g76k n1-standard-1 35.228.117.209 RUNNING

Check the projects and fetch credentials for the cluster:

esschtolts@cloudshell:~ (essch)$ gcloud projects list

PROJECT_ID NAME PROJECT_NUMBER

agile-aleph-203917 My First Project 546748042692

essch app 283762935665



esschtolts@cloudshell:~ (essch)$ gcloud container clusters get-credentials mycluster \
    --zone europe-north1-a \
    --project essch

Fetching cluster endpoint and auth data.

kubeconfig entry generated for mycluster.

The cluster is empty so far:

esschtolts@cloudshell:~ (essch)$ kubectl get pods

No resources found.

Create three NGINX replicas:

esschtolts@cloudshell:~ (essch)$ kubectl run Nginx --image=Nginx --replicas=3

deployment.apps "Nginx" created

Check their state:

esschtolts@cloudshell:~ (essch)$ kubectl get deployments --selector=run=Nginx

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE

Nginx 3 3 3 3 14s



esschtolts@cloudshell:~ (essch)$ kubectl get pods --selector=run=Nginx

NAME READY STATUS RESTARTS AGE

Nginx-65899c769f-9whdx 1/1 Running 0 43s

Nginx-65899c769f-szwtd 1/1 Running 0 43s

Nginx-65899c769f-zs6g5 1/1 Running 0 43s

And make sure the PODs are spread across the nodes:

esschtolts@cloudshell:~ (essch)$ kubectl describe pod Nginx-65899c769f-9whdx | grep Node:

Node: gke-mycluster-default-pool-43710ef9-g76k/10.166.0.5

esschtolts@cloudshell:~ (essch)$ kubectl describe pod Nginx-65899c769f-szwtd | grep Node:

Node: gke-mycluster-default-pool-43710ef9-39ck/10.166.0.4

esschtolts@cloudshell:~ (essch)$ kubectl describe pod Nginx-65899c769f-zs6g5 | grep Node:

Node: gke-mycluster-default-pool-43710ef9-g76k/10.166.0.5

Now put a load balancer in front of them:

esschtolts@cloudshell:~ (essch)$ kubectl expose deployment Nginx --type="LoadBalancer" --port=80

service "Nginx" exposed

The external IP is allocated asynchronously; right after creation it is still pending:



esschtolts@cloudshell:~ (essch)$ kubectl get svc --selector=run=Nginx

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

Nginx LoadBalancer 10.27.245.187 <pending> 80:31621/TCP 11s



esschtolts@cloudshell:~ (essch)$ sleep 60;



esschtolts@cloudshell:~ (essch)$ kubectl get svc --selector=run=Nginx

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

Nginx LoadBalancer 10.27.245.187 35.228.212.163 80:31621/TCP 1m

Check that it answers:

esschtolts@cloudshell:~ (essch)$ curl 35.228.212.163:80 2>/dev/null | grep h1

<h1>Welcome to Nginx!</h1>

To check the balancing, save the POD names into variables (the -o jsonpath output format is a Go template, see https://golang.org/pkg/text/template/#pkg-overview):

esschtolts@cloudshell:~ (essch)$ pod1=$(kubectl get pods -o jsonpath={.items[0].metadata.name});

esschtolts@cloudshell:~ (essch)$ pod2=$(kubectl get pods -o jsonpath={.items[1].metadata.name});

esschtolts@cloudshell:~ (essch)$ pod3=$(kubectl get pods -o jsonpath={.items[2].metadata.name});

esschtolts@cloudshell:~ (essch)$ echo $pod1 $pod2 $pod3

Nginx-65899c769f-9whdx Nginx-65899c769f-szwtd Nginx-65899c769f-zs6g5

Write a distinct page into each POD, then query the balancer repeatedly and see which POD answers:

esschtolts@cloudshell:~ (essch)$ echo 1 > test.html;

esschtolts@cloudshell:~ (essch)$ kubectl cp test.html ${pod1}:/usr/share/Nginx/html/index.html

esschtolts@cloudshell:~ (essch)$ echo 2 > test.html;

esschtolts@cloudshell:~ (essch)$ kubectl cp test.html ${pod2}:/usr/share/Nginx/html/index.html

esschtolts@cloudshell:~ (essch)$ echo 3 > test.html;

esschtolts@cloudshell:~ (essch)$ kubectl cp test.html ${pod3}:/usr/share/Nginx/html/index.html



esschtolts@cloudshell:~ (essch)$ curl 35.228.212.163:80 && curl 35.228.212.163:80 && curl 35.228.212.163:80

3

2

1



esschtolts@cloudshell:~ (essch)$ curl 35.228.212.163:80 && curl 35.228.212.163:80 && curl 35.228.212.163:80

3

1

1

Check fault tolerance by killing one POD:

esschtolts@cloudshell:~ (essch)$ kubectl delete pod ${pod1} && kubectl get pods && sleep 10 && kubectl get pods

pod "Nginx-65899c769f-9whdx" deleted

NAME READY STATUS RESTARTS AGE

Nginx-65899c769f-42rd5 0/1 ContainerCreating 0 1s

Nginx-65899c769f-9whdx 0/1 Terminating 0 54m

Nginx-65899c769f-szwtd 1/1 Running 0 54m

Nginx-65899c769f-zs6g5 1/1 Running 0 54m

NAME READY STATUS RESTARTS AGE

Nginx-65899c769f-42rd5 1/1 Running 0 12s

Nginx-65899c769f-szwtd 1/1 Running 0 55m

Nginx-65899c769f-zs6g5 1/1 Running 0 55m

Kubernetes (through the ReplicaSet behind the Deployment) noticed that the POD count dropped and started a replacement. When you are done, delete the cluster so that the nodes stop being billed:

esschtolts@cloudshell:~ (essch)$ gcloud container clusters delete mycluster --zone europe-north1-a;

The following clusters will be deleted.

[mycluster] in [europe-north1-a]

Do you want to continue (Y/n)? Y

Deleting cluster mycluster...done.

Deleted [https://container.googleapis.com/v1/projects/essch/zones/europe-north1-a/clusters/mycluster].

esschtolts@cloudshell:~ (essch)$ gcloud container clusters list --filter=name=mycluster

So with just the run and expose commands we created a cluster of containers behind a balancer, and we can open the balancer's IP address in a browser and see the NGINX welcome page. Next, let's move from imperative commands to a declarative description.

  

     ,     ,  ,   .   ,       ,               ,             .                   .          YAML     ,          .          , ,      .

esschtolts@cloudshell:~ (essch)$ kubectl get deployment/nginx -o yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-12-16T10:23:26Z
  generation: 1
  labels:
    run: nginx
  name: nginx
  namespace: default
  resourceVersion: "1612985"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx
  uid: 9fb3ad6a-011c-11e9-bfaa-42010aa60088
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-12-16T10:23:26Z
    lastUpdateTime: 2018-12-16T10:23:26Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2018-12-16T10:23:26Z
    lastUpdateTime: 2018-12-16T10:23:28Z
    message: ReplicaSet "nginx-64f497f8fd" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

Most of this is service information filled in by the cluster; to write a manifest of our own we only need the essential fields:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx

Outside of Kubernetes, a similar group of containers can be started directly from gcloud instance templates:

gcloud services enable compute.googleapis.com --project=${PROJECT}

gcloud beta compute instance-templates create-with-container ${TEMPLATE} \
    --machine-type=custom-1-4096 \
    --image-family=cos-stable \
    --image-project=cos-cloud \
    --container-image=gcr.io/kuar-demo/kuard-amd64:1 \
    --container-restart-policy=always \
    --preemptible \
    --region=${REGION} \
    --project=${PROJECT}

gcloud compute instance-groups managed create ${TEMPLATE} \
    --base-instance-name=${TEMPLATE} \
    --template=${TEMPLATE} \
    --size=${CLONES} \
    --region=${REGION} \
    --project=${PROJECT}

  

Now let's deploy something closer to a real application: a LAMP container in a Deployment, with a LoadBalancer service in front of it:

esschtolts@cloudshell:~/bitrix (essch)$ cat deploymnet.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxlamp
spec:
  selector:
    matchLabels:
      app: lamp
  replicas: 1
  template:
    metadata:
      labels:
        app: lamp
    spec:
      containers:
      - name: lamp
        image: mattrayner/lamp:latest-1604-php5
        ports:
        - containerPort: 80



esschtolts@cloudshell:~/bitrix (essch)$ cat loadbalancer.yaml

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  ports:
  - name: front
    port: 80
    targetPort: 80
  selector:
    app: lamp



esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods

NAME READY STATUS RESTARTS AGE

nginxlamp-7fb6fdd47b-jttl8 2/2 Running 0 3m



esschtolts@cloudshell:~/bitrix (essch)$ kubectl get svc

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

frontend LoadBalancer 10.55.242.137 35.228.73.217 80:32701/TCP,8080:32568/TCP 4m

kubernetes ClusterIP 10.55.240.1 <none> 443/TCP 48m
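Right after creation the balancer's EXTERNAL-IP is reported as `<pending>` for a while, so scripts usually poll until a real address appears. A minimal sketch of such a wait loop (kubectl is stubbed here with the address from the session above so the snippet runs anywhere):

```shell
#!/bin/sh
# Wait-loop sketch: poll the service until the external IP is assigned.
# The stub stands in for the real kubectl call shown in the comment.
kubectl() { echo "35.228.73.217"; }
ip=""
while [ -z "$ip" ] || [ "$ip" = "<pending>" ]; do
  ip=$(kubectl get svc frontend -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  # a real loop would sleep a few seconds between attempts
done
echo "external IP: $ip"
```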

Usually several environments are needed, for example Production and Develop, and they must be isolated from each other so that the PODs of one environment do not see or interfere with the PODs of another. Kubernetes provides this isolation with namespaces. Unless another one is specified, a POD is created in the namespace default, while system PODs live in their own namespaces:

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get namespace

NAME STATUS AGE

default Active 5h

kube-public Active 5h

kube-system Active



esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods --namespace=kube-system

NAME READY STATUS RESTARTS AGE

event-exporter-v0.2.3-85644fcdf-tdt7h 2/2 Running 0 5h

fluentd-gcp-scaler-697b966945-bkqrm 1/1 Running 0 5h

fluentd-gcp-v3.1.0-xgtw9 2/2 Running 0 5h

heapster-v1.6.0-beta.1-5649d6ddc6-p549d 3/3 Running 0 5h

kube-dns-548976df6c-8lvp6 4/4 Running 0 5h

kube-dns-548976df6c-mcctq 4/4 Running 0 5h

kube-dns-autoscaler-67c97c87fb-zzl9w 1/1 Running 0 5h

kube-proxy-gke-bitrix-default-pool-38fa77e9-0wdx 1/1 Running 0 5h

kube-proxy-gke-bitrix-default-pool-38fa77e9-wvrf 1/1 Running 0 5h

l7-default-backend-5bc54cfb57-6qk4l 1/1 Running 0 5h

metrics-server-v0.2.1-fd596d746-g452c 2/2 Running 0 5h



esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods --namespace=default

NAME READY STATUS RESTARTS AGE

nginxlamp-b5dcb7546-g8j5r 1/1 Running 0 4h

Let's create our own namespace:

esschtolts@cloudshell:~/bitrix (essch)$ cat namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: development
  labels:
    name: development



esschtolts@cloudshell:~ (essch)$ kubectl create -f namespace.yaml

namespace "development" created



esschtolts@cloudshell:~/bitrix (essch)$ kubectl get namespace --show-labels

NAME STATUS AGE LABELS

default Active 5h <none>

development Active 16m name=development

kube-public Active 5h <none>

kube-system Active 5h <none>

Specifying the namespace in every kubectl command quickly becomes tedious, and forgetting it in a pipeline can be destructive: the same get pods, create or delete command then acts on a different set of Deployments and Services than intended. A namespace can instead be fixed in a context. The kubectl configuration lives in $HOME/.kube/config and can be inspected with kubectl config view. Notice the context of our cluster (a default one is created automatically):

- context:
    cluster: gke_essch_europe-north1-a_bitrix
    user: gke_essch_europe-north1-a_bitrix
  name: gke_essch_europe-north1-a_bitrix

The same entry can be read with JSONPath:

esschtolts@cloudshell:~/bitrix (essch)$ kubectl config view -o jsonpath='{.contexts[4]}'

{gke_essch_europe-north1-a_bitrix {gke_essch_europe-north1-a_bitrix gke_essch_europe-north1-a_bitrix []}}

       :

esschtolts@cloudshell:~ (essch)$ kubectl config set-context dev \
> --namespace=development \
> --cluster=gke_essch_europe-north1-a_bitrix \
> --user=gke_essch_europe-north1-a_bitrix

Context "dev" modified.




A new context appears in the configuration:

- context:
    cluster: gke_essch_europe-north1-a_bitrix
    namespace: development
    user: gke_essch_europe-north1-a_bitrix
  name: dev

Now switch to it:

esschtolts@cloudshell:~ (essch)$ kubectl config use-context dev

Switched to context "dev".



esschtolts@cloudshell:~ (essch)$ kubectl config current-context

dev



esschtolts@cloudshell:~ (essch)$ kubectl get pods

No resources found.



esschtolts@cloudshell:~ (essch)$ kubectl get pods --namespace=default

NAME READY STATUS RESTARTS AGE

nginxlamp-b5dcb7546-krkm2 1/1 Running 0 10h

Alternatively, the namespace of the current context can be changed in place:

esschtolts@cloudshell:~/bitrix (essch)$ kubectl config set-context $(kubectl config current-context) --namespace=development

Context "gke_essch_europe-north1-a_bitrix" modified.
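The one-liner above is worth wrapping in a tiny helper; `kns` below is a made-up name for such a wrapper. In this sketch kubectl is stubbed so the snippet can run anywhere; with a real kubectl the stub is simply removed:

```shell
#!/bin/sh
# `kns <namespace>` changes only the namespace of the current context.
# The stub echoes the command it would run (and answers current-context
# with the cluster name from the session above).
kubectl() {
  if [ "$1 $2" = "config current-context" ]; then
    echo "gke_essch_europe-north1-a_bitrix"
  else
    echo "kubectl $*"
  fi
}
kns() {
  kubectl config set-context "$(kubectl config current-context)" --namespace="$1"
}
cmd=$(kns development)
echo "$cmd"
```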

Now resources are created in the dev context (there is no need to pass --namespace=development), while resources in the default namespace must be addressed explicitly (--namespace=default):

esschtolts@cloudshell:~ (essch)$ cd bitrix/



esschtolts@cloudshell:~/bitrix (essch)$ kubectl create -f deploymnet.yaml -f loadbalancer.yaml

deployment.apps "nginxlamp" created

service "frontend" created



esschtolts@cloudshell:~/bitrix (essch)$ kubectl delete -f deploymnet.yaml -f loadbalancer.yaml --namespace=default

deployment.apps "nginxlamp" deleted

service "frontend" deleted



esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods

NAME READY STATUS RESTARTS AGE

nginxlamp-b5dcb7546-8sl2f 1/1 Running 0 1m

Let's check the result through the external IP address of the balancer:

esschtolts@cloudshell:~/bitrix (essch)$ curl $(kubectl get -f loadbalancer.yaml -o json \
| jq -r .status.loadBalancer.ingress[0].ip) 2>/dev/null | grep '<h2>'

<h2>Welcome to <a href="https://github.com/mattrayner/docker-lamp" target="_blank">Docker-Lamp a.k.a mattrayner/lamp</a></h2>



        ,       .     ( )      .htaccess,         /app. ,   ,   POD          (  Bitrix):

    ,     . ,  ,      ,   POD,               ,    ,         POD,  ,     POD,   POD    ,     .    ,        ,       POD,         ,    -      .  ,  POD     .   ,      Dockerfile:

esschtolts@cloudshell:~/bitrix (essch)$ cat Dockerfile

FROM mattrayner/lamp:latest-1604-php5

MAINTAINER ESSch <ESSchtolts@yandex.ru>

RUN cd /app/ && ( \

wget https://www.1c-bitrix.ru/download/small_business_encode.tar.gz \

&& tar -xf small_business_encode.tar.gz \

&& sed -i '5i php_value short_open_tag 1' .htaccess \

&& chmod -R 0777 . \

&& sed -i 's/#php_value display_errors 1/php_value display_errors 1/' .htaccess \

&& sed -i '5i php_value opcache.revalidate_freq 0' .htaccess \

&& sed -i 's/#php_flag default_charset UTF-8/php_flag default_charset UTF-8/' .htaccess \

) && cd ..;

EXPOSE 80 3306

CMD ["/run.sh"]



esschtolts@cloudshell:~/bitrix (essch)$ docker build -t essch/app:0.12 . |grep Successfully

Successfully built f76e656dac53

Successfully tagged essch/app:0.12



esschtolts@cloudshell:~/bitrix (essch)$ docker image push essch/app | grep digest

0.12: digest: sha256:75c92396afacefdd5a3fb2024634a4c06e584e2a1674a866fa72f8430b19ff69 size: 11309
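The sed edits baked into the Dockerfile can be checked locally before building the image. A sketch (GNU sed assumed; the .htaccess contents below are illustrative stand-ins, only the two commented directives mirror the real file):

```shell
#!/bin/sh
# Simulated .htaccess: four header lines plus the two commented directives
# that the Dockerfile uncomments.
cat > .htaccess.test << 'EOF'
Options -Indexes
ErrorDocument 404 /404.php
RewriteEngine On
RewriteBase /
#php_value display_errors 1
#php_flag default_charset UTF-8
EOF
sed -i '5i php_value short_open_tag 1' .htaccess.test
sed -i 's/#php_value display_errors 1/php_value display_errors 1/' .htaccess.test
sed -i 's/#php_flag default_charset UTF-8/php_flag default_charset UTF-8/' .htaccess.test
n=$(grep -c '^php_' .htaccess.test)   # all three directives active, none commented
echo "$n"
rm -f .htaccess.test
```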



esschtolts@cloudshell:~/bitrix (essch)$ cat deploymnet.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxlamp
  namespace: development
spec:
  selector:
    matchLabels:
      app: lamp
  replicas: 1
  template:
    metadata:
      labels:
        app: lamp
    spec:
      containers:
      - name: lamp
        image: essch/app:0.12
        ports:
        - containerPort: 80



esschtolts@cloudshell:~/bitrix (essch)$ IMAGE=essch/app:0.12 kubectl create -f deploymnet.yaml

deployment.apps "nginxlamp" created



esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods -l app=lamp

NAME READY STATUS RESTARTS AGE

nginxlamp-55f8cd8dbc-mk9nk 1/1 Running 0 5m



esschtolts@cloudshell:~/bitrix (essch)$ kubectl exec nginxlamp-55f8cd8dbc-mk9nk -- ls /app/

index.php

  ,   ,       , ,                app.           ,  (      ,        )   ,      ,           .

           POD  ,          ,      , ,  ,     .       init ,       ,     ,   ,         (     ).        POD  InitContainer,         init     .       volume    InitContainer     .    InitContainer,    ,  .       ,      ,         :

esschtolts@cloudshell:~/bitrix (essch)$ cat deploymnet.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxlamp
  namespace: development
spec:
  selector:
    matchLabels:
      app: lamp
  replicas: 1
  template:
    metadata:
      labels:
        app: lamp
    spec:
      initContainers:
      - name: init
        image: ubuntu
        command:
        - /bin/bash
        - -c
        - |
          cd /app
          apt-get update && apt-get install -y wget
          wget https://www.1c-bitrix.ru/download/small_business_encode.tar.gz
          tar -xf small_business_encode.tar.gz
          sed -i '5i php_value short_open_tag 1' .htaccess
          chmod -R 0777 .
          sed -i 's/#php_value display_errors 1/php_value display_errors 1/' .htaccess
          sed -i '5i php_value opcache.revalidate_freq 0' .htaccess
          sed -i 's/#php_flag default_charset UTF-8/php_flag default_charset UTF-8/' .htaccess
        volumeMounts:
        - name: app
          mountPath: /app
      containers:
      - name: lamp
        image: essch/app:0.12
        ports:
        - containerPort: 80
        volumeMounts:
        - name: app
          mountPath: /app
      volumes:
      - name: app
        emptyDir: {}

The progress of POD initialization can be watched with watch kubectl get events, and the init container's output can be read with kubectl logs {ID_CONTAINER} -c init, for example:

kubectl logs $(kubectl get pods -l app=lamp -o json | jq ".items[0].metadata.name" | sed 's/"//g') -c init

To make initialization faster, it is worth switching to a smaller base image, for example alpine:3.5:

esschtolts@cloudshell:~ (essch)$ docker pull alpine 1>/dev/null

esschtolts@cloudshell:~ (essch)$ docker pull ubuntu 1>/dev/null

esschtolts@cloudshell:~ (essch)$ docker images

REPOSITORY TAG IMAGE ID CREATED SIZE

ubuntu latest 93fd78260bd1 4 weeks ago 86.2MB

alpine latest 196d12cf6ab1 3 months ago 4.41MB

The init container then becomes:

image: alpine:3.5
command:
- /bin/sh
- -c
- |
  cd /app
  apk update && apk add wget && rm -rf /var/cache/apk/*
  wget https://www.1c-bitrix.ru/download/small_business_encode.tar.gz
  tar -xf small_business_encode.tar.gz
  rm -f small_business_encode.tar.gz
  sed -i '5i php_value short_open_tag 1' .htaccess
  sed -i 's/#php_value display_errors 1/php_value display_errors 1/' .htaccess
  sed -i '5i php_value opcache.revalidate_freq 0' .htaccess
  sed -i 's/#php_flag default_charset UTF-8/php_flag default_charset UTF-8/' .htaccess
  chmod -R 0777 .
volumeMounts:

If git is also needed, there are ready-made Alpine-based images such as axeclbr/git or golang:1-alpine.

    

A container exists while its main process runs: when the process exits, the container stops. Let's check this with sleep:

vagrant@ubuntu:~$ sudo docker pull ubuntu > /dev/null

vagrant@ubuntu:~$ sudo docker run -d ubuntu sleep 60

0bd80651c6f97167b27f4e8df675780a14bd9e0a5c3f8e5e8340a98fc351bc64



vagrant@ubuntu:~$ sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES

0bd80651c6f9 ubuntu "sleep 60" 15 seconds ago Up 12 seconds distracted_kalam



vagrant@ubuntu:~$ sleep 60



vagrant@ubuntu:~$ sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES



vagrant@ubuntu:~$ sudo docker ps -a | grep ubuntu

0bd80651c6f9 ubuntu "sleep 60" 4 minutes ago Exited (0) 3 minutes ago distracted_kalam

The container disappeared from the list of running containers, but it is still in the full list with status Exited. Docker can restart it automatically with a restart policy:

vagrant@ubuntu:~$ sudo docker run -d --restart=always ubuntu sleep 10

c3bc2d2e37a68636080898417f5b7468adc73a022588ae073bdb3a5bba270165



vagrant@ubuntu:~$ sleep 30



vagrant@ubuntu:~$ sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS

c3bc2d2e37a6 ubuntu "sleep 10" 46 seconds ago Up 1 second

The container keeps being restarted each time it exits. But if a container is hopelessly broken, endless restarts only waste resources, so the number of restart attempts can be limited:

vagrant@ubuntu:~$ sudo docker run -d --restart=on-failure:3 ubuntu sleep 10

056c4fc6986a13936e5270585e0dc1782a9246f01d6243dd247cb03b7789de1c

vagrant@ubuntu:~$ sleep 10

vagrant@ubuntu:~$ sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

c3bc2d2e37a6 ubuntu "sleep 10" 9 minutes ago Up 2 seconds keen_sinoussi

vagrant@ubuntu:~$ sleep 10

vagrant@ubuntu:~$ sleep 10

vagrant@ubuntu:~$ sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

c3bc2d2e37a6 ubuntu "sleep 10" 10 minutes ago Up 9 seconds keen_sinoussi

vagrant@ubuntu:~$ sleep 10

vagrant@ubuntu:~$ sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

c3bc2d2e37a6 ubuntu "sleep 10" 10 minutes ago Up 2 seconds keen_sinoussi

A running process, however, does not guarantee a working service: the process may hang or stop answering requests. Docker lets us describe a health check for the container, for example an HTTP request to a URL:

docker run --rm -d \
    --name=elasticsearch \
    --health-cmd="curl --silent --fail localhost:9200/_cluster/health || exit 1" \
    --health-interval=5s \
    --health-retries=12 \
    --health-timeout=20s \
    {image}

The container then carries a health status: starting at first, then healthy, or unhealthy after the given number of failed retries. Let's reproduce the unhealthy case:

vagrant@ubuntu:~$ sudo docker run \
    -d --name healt \
    --health-timeout=0s \
    --health-interval=5s \
    --health-retries=3 \
    --health-cmd="ls /halth" \
    ubuntu bash -c 'sleep 1000'

c0041a8d973e74fe8c96a81b6f48f96756002485c74e51a1bd4b3bc9be0d9ec5



vagrant@ubuntu:~$ sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

c0041a8d973e ubuntu "bash -c 'sleep 1000'" 4 seconds ago Up 3 seconds (health: starting) healt



vagrant@ubuntu:~$ sleep 20

vagrant@ubuntu:~$ sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

c0041a8d973e ubuntu "bash -c 'sleep 1000'" 38 seconds ago Up 37 seconds (unhealthy) healt



vagrant@ubuntu:~$ sudo docker rm -f healt

healt

When the check succeeds, the container is marked healthy:

vagrant@ubuntu:~$ sudo docker run \
    -d --name healt \
    --health-timeout=0s \
    --health-interval=5s \
    --health-retries=3 \
    --health-cmd="ls /halth" \
    ubuntu bash -c 'touch /halth && sleep 1000'



vagrant@ubuntu:~$ sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

160820d11933 ubuntu "bash -c 'touch /hal" 4 seconds ago Up 2 seconds (health: starting) healt

vagrant@ubuntu:~$ sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

160820d11933 ubuntu "bash -c 'touch /hal" 6 seconds ago Up 5 seconds (healthy) healt



vagrant@ubuntu:~$ sudo docker rm -f healt

healt

And the status changes back to unhealthy when the marker file disappears:

vagrant@ubuntu:~$ sudo docker run \
    -d --name healt \
    --health-timeout=0s \
    --health-interval=5s \
    --health-retries=3 \
    --health-cmd="ls /halth" \
    ubuntu bash -c 'touch /halth && sleep 60 && rm -f /halth && sleep 60'



vagrant@ubuntu:~$ sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

8ec3a4abf74b ubuntu "bash -c 'touch /hal" 7 seconds ago Up 5 seconds (health: starting) healt

vagrant@ubuntu:~$ sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

8ec3a4abf74b ubuntu "bash -c 'touch /hal" 24 seconds ago Up 22 seconds (healthy) healt

vagrant@ubuntu:~$ sleep 60

vagrant@ubuntu:~$ sudo docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

8ec3a4abf74b ubuntu "bash -c 'touch /hal" About a minute ago Up About a minute (unhealthy) healt
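The semantics behind --health-interval and --health-retries boil down to a poll loop that marks the container unhealthy only after several consecutive failures. A standalone sketch of that logic (the `ls` check mirrors the --health-cmd="ls /halth" example; the file name and counts are illustrative):

```shell
#!/bin/sh
# Health-check loop sketch: run the check each tick, mark unhealthy only
# after `retries` consecutive failures. The marker file is deliberately
# absent, so every check fails.
check_file=./halth
retries=3
failures=0
status=starting
rm -f "$check_file"
i=0
while [ $i -lt 4 ]; do
  if ls "$check_file" >/dev/null 2>&1; then
    failures=0
    status=healthy
  else
    failures=$((failures + 1))
    [ $failures -ge $retries ] && status=unhealthy
  fi
  i=$((i + 1))
done
echo "$status"
```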

Kubernetes provides its own checks (kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/): a liveness probe restarts the container when it fails, and a readiness probe removes the container from load balancing until it starts succeeding. A probe can be a shell command, a TCP connection, or an HTTP request; HTTP probes are preferable to exec probes, which spawn a new process in the container on every check. Let's try them on www.katacoda.com/courses/kubernetes/playground:

controlplane $ cat << EOF > liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness
spec:
  containers:
  - name: healtcheck
    image: alpine:3.5
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 10; rm -rf /tmp/healthy; sleep 60
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 15
      periodSeconds: 5
EOF



controlplane $ kubectl create -f liveness.yaml

pod/liveness created



controlplane $ kubectl get pods

NAME READY STATUS RESTARTS AGE

liveness 1/1 Running 2 2m11s



controlplane $ kubectl describe pod/liveness | tail -n 10

Type Reason Age From Message

    

Normal Scheduled 2m37s default-scheduler Successfully assigned default/liveness to node01

Normal Pulling 2m33s kubelet, node01 Pulling image "alpine:3.5"

Normal Pulled 2m30s kubelet, node01 Successfully pulled image "alpine:3.5"

Normal Created 33s (x3 over 2m30s) kubelet, node01 Created container healtcheck

Normal Started 33s (x3 over 2m30s) kubelet, node01 Started container healtcheck

Normal Pulled 33s (x2 over 93s) kubelet, node01 Container image "alpine:3.5" already present on machine

Warning Unhealthy 3s (x9 over 2m13s) kubelet, node01 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory

Normal Killing 3s (x3 over 2m3s) kubelet, node01 Container healtcheck failed liveness probe, will be restarted

As you can see, the container is repeatedly restarted. Let's increase the period during which it stays healthy:

controlplane $ cat << EOF > liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness
spec:
  containers:
  - name: healtcheck
    image: alpine:3.5
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 60
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 15
      periodSeconds: 5
EOF



controlplane $ kubectl create -f liveness.yaml

pod/liveness created



controlplane $ kubectl get pods

NAME READY STATUS RESTARTS AGE

liveness 1/1 Running 2 2m53s



controlplane $ kubectl describe pod/liveness | tail -n 15

SecretName: default-token-9v5mb

Optional: false

QoS Class: BestEffort

Node-Selectors: <none>

Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s

node.kubernetes.io/unreachable:NoExecute for 300s

Events:

Type Reason Age From Message

    

Normal Scheduled 3m44s default-scheduler Successfully assigned default/liveness to node01

Normal Pulled 68s (x3 over 3m35s) kubelet, node01 Container image "alpine:3.5" already present on machine

Normal Created 68s (x3 over 3m35s) kubelet, node01 Created container healtcheck

Normal Started 68s (x3 over 3m34s) kubelet, node01 Started container healtcheck

Warning Unhealthy 23s (x9 over 3m3s) kubelet, node01 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory

Normal Killing 23s (x3 over 2m53s) kubelet, node01 Container healtcheck failed liveness probe, will be restarted

The same failed cat /tmp/healthy checks and the resulting restarts can also be seen in the event log:

controlplane $ kubectl get events



controlplane $ kubectl get events | grep pod/liveness

13m Normal Scheduled pod/liveness Successfully assigned default/liveness to node01

13m Normal Pulling pod/liveness Pulling image "alpine:3.5"

13m Normal Pulled pod/liveness Successfully pulled image "alpine:3.5"

10m Normal Created pod/liveness Created container healtcheck

10m Normal Started pod/liveness Started container healtcheck

10m Warning Unhealthy pod/liveness Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory

10m Normal Killing pod/liveness Container healtcheck failed liveness probe, will be restarted

10m Normal Pulled pod/liveness Container image "alpine:3.5" already present on machine

8m32s Normal Scheduled pod/liveness Successfully assigned default/liveness to node01

4m41s Normal Pulled pod/liveness Container image "alpine:3.5" already present on machine

4m41s Normal Created pod/liveness Created container healtcheck

4m41s Normal Started pod/liveness Started container healtcheck

2m51s Warning Unhealthy pod/liveness Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory

5m11s Normal Killing pod/liveness Container healtcheck failed liveness probe, will be restarted

Now let's look at the readiness probe. It does not restart the container; instead, while the check fails, the container is simply excluded from load balancing:

controlplane $ cat << EOF > readiness.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: readiness
spec:
  replicas: 2
  selector:
    matchLabels:
      app: readiness
  template:
    metadata:
      labels:
        app: readiness
    spec:
      containers:
      - name: readiness
        image: python
        args:
        - /bin/sh
        - -c
        - sleep 15 && (hostname > health) && python -m http.server 9000
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 1
          periodSeconds: 5
EOF



controlplane $ kubectl create -f readiness.yaml

deployment.apps/readiness created



controlplane $ kubectl get pods

NAME READY STATUS RESTARTS AGE

readiness-fd8d996dd-cfsdb 0/1 ContainerCreating 0 7s

readiness-fd8d996dd-sj8pl 0/1 ContainerCreating 0 7s



controlplane $ kubectl get pods

NAME READY STATUS RESTARTS AGE

readiness-fd8d996dd-cfsdb 0/1 Running 0 6m29s

readiness-fd8d996dd-sj8pl 0/1 Running 0 6m29s



controlplane $ kubectl exec -it readiness-fd8d996dd-cfsdb -- curl localhost:9000/health

readiness-fd8d996dd-cfsdb

The containers are up and serving, but they are never marked ready. Let's expose them anyway:

controlplane $ kubectl expose deploy readiness \
    --type=LoadBalancer \
    --name=readiness \
    --port=9000 \
    --target-port=9000

service/readiness exposed



controlplane $ kubectl get svc readiness

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

readiness LoadBalancer 10.98.36.51 <pending> 9000:32355/TCP 98s



controlplane $ curl localhost:9000



controlplane $ for i in {1..5}; do curl $IP:9000/health; done

1

2

3

4

5

The replicas never become ready and eventually end up in CrashLoopBackOff; the marker file can be manipulated inside a container to change its state:

controlplane $ kubectl get pods

NAME READY STATUS RESTARTS AGE

readiness-5dd64c6c79-9vq62 0/1 CrashLoopBackOff 6 15m

readiness-5dd64c6c79-sblvl 0/1 CrashLoopBackOff 6 15m



kubectl exec -it .... -c .... bash -c "rm -f healt"



controlplane $ for i in {1..5}; do echo $i; done

1

2

3

4

5



controlplane $ kubectl delete deploy readiness

deployment.apps "readiness" deleted

For experiments like this it is convenient to pack the whole lifecycle into a single command:

(hostname > health) && (python -m http.server 9000 &) && sleep 60 && rm health && sleep 60 && (hostname > health) && sleep 6000

/bin/sh -c sleep 60 && (python -m http.server 9000 &) && PID=$! && sleep 60 && kill -9 $PID
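The pattern in these one-liners is always the same: the process creates a marker file when it is ready and removes it to make the probe fail. A minimal standalone sketch of that marker-file probe (file name is illustrative):

```shell
#!/bin/sh
# probe() mimics the `cat <marker>` exec probe: ok while the file exists,
# fail otherwise.
marker=./healthy.marker
rm -f "$marker"
probe() { cat "$marker" >/dev/null 2>&1 && echo ok || echo fail; }
r1=$(probe)          # not ready yet
touch "$marker"
r2=$(probe)          # ready
rm -f "$marker"
r3=$(probe)          # failing again
echo "$r1 $r2 $r3"
```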

 ,   Running        Dockerfile   ,    CMD,        Command. ,  ,     ,     (         ),      ,        ,   ,            . ,     Feils,      .     ,              ,      ,     ()    .       : readinessProbe  livenessProbe,      HTTP         .

esschtolts@cloudshell:~/bitrix (essch)$ cat health_check.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: healtcheck
  name: healtcheck
spec:
  containers:
  - name: healtcheck
    image: alpine:3.5
    args:
    - /bin/sh
    - -c
    - sleep 12; touch /tmp/healthy; sleep 10; rm -rf /tmp/healthy; sleep 60
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 15
      periodSeconds: 5

The readiness check starts after 5 seconds and repeats every 5 seconds; it begins to pass once the marker file appears (after about 12 seconds), and the POD becomes ready. The livenessProbe starts after 15 seconds; when the file is removed (after about 22 seconds) it starts failing, and Kubernetes restarts the container.
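The timing in this manifest can be simulated without a cluster: the marker file exists roughly between t=12s and t=22s, and an exec probe succeeds only while the file is present. A sketch in fast time (no real sleeps; the sample instants are illustrative):

```shell
#!/bin/sh
# file_present(t) models `sleep 12; touch ...; sleep 10; rm ...`:
# the marker exists for 12 <= t < 22.
file_present() { [ "$1" -ge 12 ] && [ "$1" -lt 22 ]; }
results=""
for t in 5 10 15 20 25; do
  if file_present "$t"; then r=ok; else r=fail; fi
  results="$results $r"
  echo "t=${t}s probe=$r"
done
results=${results# }
```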

esschtolts@cloudshell:~/bitrix (essch)$ kubectl create -f health_check.yaml && sleep 4 && kubectl get pods && sleep 10 && kubectl get pods && sleep 10 && kubectl get pods

pod "liveness-exec" created

NAME READY STATUS RESTARTS AGE

liveness-exec 0/1 Running 0 5s

NAME READY STATUS RESTARTS AGE

liveness-exec 0/1 Running 0 15s

NAME READY STATUS RESTARTS AGE

liveness-exec 1/1 Running 0 26s

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods

NAME READY STATUS RESTARTS AGE

liveness-exec 0/1 Running 0 53s

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods

NAME READY STATUS RESTARTS AGE

liveness-exec 0/1 Running 0 1m

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods

NAME READY STATUS RESTARTS AGE

liveness-exec 1/1 Running 1 1m

Kubernetes also provides a startup probe: until it succeeds, the readiness and liveness probes are not run, which is useful for containers that take a long time to start. Let's try it with Python on www.katacoda.com/courses/kubernetes/playground. Probes can use TCP, EXEC and HTTP checks; HTTP is generally preferable to EXEC, which spawns a process in the container on every check. First make sure the cluster is recent enough, since startupProbe needs a current version (https://www.katacoda.com/courses/kubernetes/playground):

controlplane $ kubectl version --short

Client Version: v1.18.0

Server Version: v1.18.0



cat << EOF > job.yaml
apiVersion: v1
kind: Pod
metadata:
  name: healt
spec:
  containers:
  - name: python
    image: python
    command: ['sh', '-c', 'sleep 60 && (echo "work" > health) && sleep 60 && python -m http.server 9000']
    readinessProbe:
      httpGet:
        path: /health
        port: 9000
      initialDelaySeconds: 3
      periodSeconds: 3
    livenessProbe:
      httpGet:
        path: /health
        port: 9000
      initialDelaySeconds: 3
      periodSeconds: 3
    startupProbe:
      exec:
        command:
        - cat
        - /health
      initialDelaySeconds: 3
      periodSeconds: 3
  restartPolicy: OnFailure
EOF



controlplane $ kubectl create -f job.yaml

pod/healt created



controlplane $ kubectl get pods # the container is starting

NAME READY STATUS RESTARTS AGE

healt 0/1 Running 0 11s



controlplane $ sleep 30 && kubectl get pods # the container is running, but not ready yet

NAME READY STATUS RESTARTS AGE

healt 0/1 Running 0 51s



controlplane $ sleep 60 && kubectl get pods

NAME READY STATUS RESTARTS AGE

healt 0/1 Running 1 116s



controlplane $ kubectl delete -f job.yaml

pod "healt" deleted

   

Ready-made examples of probes can be found in the bookinfo demo used by Istio itself: https://github.com/istio/istio/tree/master/samples/bookinfo. It can be tried out at www.katacoda.com/courses/istio/deploy-istio-on-kubernetes.

 

,   Kubernetes      UI-,        .    OpenShift,      .      Google  Kubernetes  ,      Google Cloud Platform. ,    ,   Open Shift  Rancher,         .  , ,    .

 , ,       API,     Mail. Cloud,     Open Shift. ,   ,   "  "  API     Terraform. ,    Kubernetes,     ,     ,     (, , ).                  .    ( Kubernetes  kubectl apply -f name_config .yml,   Hashicorp Terraform  terraform apply)       ,     , ,         ,     ,   ,     , ,     POD    ,      POD      . ,               gcloud   Google Cloud Platform (GCP),   ,                  Terraform,   GCP.

Terraform     ,             ,      :

** CFN;

** Puppet;

** Chef;

** Ansible;

**  AWS API, Kubernetes API;

* IasC: Terraform      (  120 ,     ),     ,   : CloudFormation  Amazon WEB Service, Azure Resource Manager  Microsoft Azure, Google Cloud Deployment Manager  Google Cloud Engine.

CloudFormation ties you to Amazon, while Terraform supports many providers through plugins (https://www.terraform.io/docs/providers/index.html) and is not limited to one cloud, so AWS and GCE can be managed together. Terraform is developed by Hashicorp, written in Go, and distributed as a single binary; let's install it on Linux:

(agile-aleph-203917)$ wget https://releases.hashicorp.com/terraform/0.11.13/terraform_0.11.13_linux_amd64.zip

(agile-aleph-203917)$ unzip terraform_0.11.13_linux_amd64.zip -d .

(agile-aleph-203917)$ rm -f terraform_0.11.13_linux_amd64.zip

(agile-aleph-203917)$ ./terraform version

Terraform v0.11.13

Instead of writing everything by hand, ready-made modules from the registry can be used (https://registry.terraform.io/browse?offset=27&provider=google), for example through the Terragrunt wrapper (https://davidbegin.github.io/terragrunt/):

terragrunt = {

terraform {

source = "terraform-aws-modules/"

}

dependencies {

path = ["../network"]

}

}

name = ""

ami = ""

instance_type = "t3.large"

     (AWS, GCE, .    ) ,     , ,         ,    (,   )   .  ,         (IaC,   ),      pipeline CI/CD (, , ,      ).  CI/CD            .    ,      ,       , ,      BASH-    Conditions ( )   .

Terraform reads its configuration from files with the .tf extension, written in HashiCorp Configuration Language (HCL), or from .tf.json files in JSON format. All files in the working directory are merged into a single configuration, so providers, variables and resources can be split across files.

Let's try Terraform with GitHub, which only requires an API token. The token is generated in the WEB interface: Settings -> Developer settings -> Personal access tokens -> Generate new token. Put it into a variable and reference it from the provider:

(agile-aleph-203917)$ ls *.tf

main.tf variables.tf



$ cat variables.tf

variable "github_token" {
  default = "630bc9696d0b2f4ce164b1cabb118eaaa1909838"
}

$ cat main.tf

provider "github" {
  token = "${var.github_token}"
}
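Hard-coding a token in variables.tf is unsafe if the file ends up in version control. Terraform also reads any variable `x` from an environment variable named `TF_VAR_x`, so the token can be supplied at run time instead. A sketch (terraform itself is not invoked here; the token value is a dummy for illustration):

```shell
#!/bin/sh
# Terraform maps TF_VAR_github_token to var.github_token, so the default
# in variables.tf can be dropped entirely.
TF_VAR_github_token="dummy-token-for-illustration"
export TF_VAR_github_token
# ./terraform apply   # would pick up var.github_token from the environment
echo "$TF_VAR_github_token"
```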



(agile-aleph-203917)$ ./terraform init

(agile-aleph-203917)$ ./terraform apply



Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Now let's create an organization: Settings -> Organizations -> New organization -> Create organization. The Terraform GitHub provider's repository resource is described at www.terraform.io/docs/providers/github/r/repository.html; let's use it to create a repository:

(agile-aleph-203917)$ cat main.tf

provider "github" {
  token = "${var.github_token}"
}

resource "github_repository" "terraform_repo" {
  name = "terraform-repo"
  description = "my terraform repo"
  auto_init = true
}

Run apply; Terraform asks for the organization name and shows the plan it is about to execute:

(agile-aleph-203917)$ ./terraform apply

provider.github.organization

The GitHub organization name to manage.



Enter a value: essch2



An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:

+ create



Terraform will perform the following actions:



+ github_repository.terraform_repo

id:<computed>

allow_merge_commit: "true"

allow_rebase_merge: "true"

allow_squash_merge: "true"

archived: "false"

auto_init: "true"

default_branch: <computed>

description: "my terraform repo"

etag: <computed>

full_name: <computed>

git_clone_url: <computed>

html_url: <computed>

http_clone_url: <computed>

name: "terraform-repo"

ssh_clone_url: <computed>

svn_url: <computed>



Plan: 1 to add, 0 to change, 0 to destroy.



Do you want to perform these actions?

Terraform will perform the actions described above.

Only 'yes' will be accepted to approve.



Enter a value: yes



github_repository.terraform_repo: Creating

allow_merge_commit: "" => "true"

allow_rebase_merge: "" => "true"

allow_squash_merge: "" => "true"

archived: "" => "false"

auto_init: "" => "true"

default_branch: "" => "<computed>"

description: "" => "my terraform repo"

etag: "" => "<computed>"

full_name: "" => "<computed>"

git_clone_url: "" => "<computed>"

html_url: "" => "<computed>"

http_clone_url: "" => "<computed>"

name: "" => "terraform-repo"

ssh_clone_url: "" => "<computed>"

svn_url: "" => "<computed>"

github_repository.terraform_repo: Creation complete after 4s (ID: terraform-repo)



Apply complete! Resources: 1 added, 0 changed, 0 destroyed

The terraform-repo repository now appears in the WEB interface. Running Terraform apply again changes nothing, since the stored state already matches the configuration:

(agile-aleph-203917)$ ./terraform apply

provider.github.organization

The GITHub organization name to manage.

Enter a value: essch2

github_repository.terraform_repo: Refreshing state (ID: terraform-repo)

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

So the configuration and the state agree. If we change the repository name in the configuration, Terraform will have to destroy the old repository and create a new one; the planned actions can be previewed without applying them with ./terraform plan. Let's rename the repository:

(agile-aleph-203917)$ cat main.tf

provider "github" {
  token = "${var.github_token}"
}

resource "github_repository" "terraform_repo" {
  name = "terraform-repo2"
  description = "my terraform repo"
  auto_init = true
}



(agile-aleph-203917)$ ./terraform plan

provider.github.organization

The GITHub organization name to manage.



Enter a value: essch



Refreshing Terraform state in-memory prior to plan

The refreshed state will be used to calculate this plan, but will not be

persisted to local or remote state storage.



github_repository.terraform_repo: Refreshing state (ID: terraform-repo)



-



An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:

+ create



Terraform will perform the following actions:



+ github_repository.terraform_repo

id:<computed>

allow_merge_commit: "true"

allow_rebase_merge: "true"

allow_squash_merge: "true"

archived: "false"

auto_init: "true"

default_branch: <computed>

description: "my terraform repo"

etag: <computed>

full_name: <computed>

git_clone_url: <computed>

html_url: <computed>

http_clone_url: <computed>

name: "terraform-repo2"

ssh_clone_url: <computed>

svn_url: <computed>



Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

esschtolts@cloudshell:~/terraform (agile-aleph-203917)$ ./terraform apply

provider.github.organization

The GITHub organization name to manage.

Enter a value: essch2

github_repository.terraform_repo: Refreshing state (ID: terraform-repo)

An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:

/+ destroy and then create replacement

Terraform will perform the following actions:

/+ github_repository.terraform_repo (new resource required)

id:"terraform-repo" => <computed> (forces new resource)

allow_merge_commit: "true" => "true"

allow_rebase_merge: "true" => "true"

allow_squash_merge: "true" => "true"

archived: "false" => "false"

auto_init: "true" => "true"

default_branch: "master" => <computed>

description: "my terraform repo" => "my terraform repo"

etag: "W/\"a92e0b300d8c8d2c869e5f271da6c2ab\"" => <computed>

full_name: "essch2/terraform-repo" => <computed>

git_clone_url: "git://github.com/essch2/terraform-repo.git" => <computed>

html_url: "https://github.com/essch2/terraform-repo" => <computed>

http_clone_url: "https://github.com/essch2/terraform-repo.git" => <computed>

name: "terraform-repo" => "terraform-repo2" (forces new resource)

ssh_clone_url: "git@github.com:essch2/terraform-repo.git" => <computed>

svn_url: "https://github.com/essch2/terraform-repo" => <computed>

Plan: 1 to add, 0 to change, 1 to destroy.

Do you want to perform these actions?

Terraform will perform the actions described above.

Only 'yes' will be accepted to approve.

Enter a value: yes

github_repository.terraform_repo: Destroying (ID: terraform-repo)

github_repository.terraform_repo: Destruction complete after 0s

github_repository.terraform_repo: Creating

allow_merge_commit: "" => "true"

allow_rebase_merge: "" => "true"

allow_squash_merge: "" => "true"

archived: "" => "false"

auto_init: "" => "true"

default_branch: "" => "<computed>"

description: "" => "my terraform repo"

etag: "" => "<computed>"

full_name: "" => "<computed>"

git_clone_url: "" => "<computed>"

html_url: "" => "<computed>"

http_clone_url: "" => "<computed>"

name: "" => "terraform-repo2"

ssh_clone_url: "" => "<computed>"

svn_url: "" => "<computed>"

github_repository.terraform_repo: Creation complete after 5s (ID: terraform-repo2)

Apply complete! Resources: 1 added, 0 changed, 1 destroyed.

Notice that Terraform did not rename the repository: since the name forces a new resource, it destroyed the old repository and created a new one, so any content stored in it would have been lost. Values we don't want to keep in the configuration, such as the access token, can instead be passed on the command line after removing the variables file:

(agile-aleph-203917)$ rm variables.tf

(agile-aleph-203917)$ sed -i 's/terraform-repo2/terraform-repo3/' main.tf

./terraform apply -var="github_token=f7602b82e02efcbae7fc915c16eeee518280cf2a"

Creating infrastructure in GCP with Terraform

Each cloud platform exposes its services through an API. In the GCP web console we could click everything together by hand, but those actions would not be reproducible; describing the infrastructure declaratively lets us recreate it, review it, and keep it under version control.

There are also more specialized tools built on top of the cloud APIs, such as KOPS. KOPS installs a Kubernetes cluster on GCP, AWS or Azure; like Kubectl it is a single binary that creates infrastructure declaratively from YAML files, much the way Kubectl creates PODs from manifests. Unlike Terraform, however, it is limited to clusters and is not a general-purpose IaC tool.

Before Terraform can manage our GCP project it needs credentials. In the console open IAM & admin > Service accounts, create a service account with the required roles (a separate account keeps the rights minimal), create a JSON key for it and download it as key.json. The provider itself is described in the documentation at www.terraform.io/docs/providers/google/index.html:

(agil7e-aleph-20391)$ cat main.tf

provider "google" {

credentials = "${file("key.json")}"

project = "agile-aleph-203917"

region = "us-central1"

}

resource "google_compute_instance" "terraform" {

name = "terraform"

machine_type = "n1-standard-1"

zone = "us-central1-a"

boot_disk {

initialize_params {

image = "debian-cloud/debian-9"

}

}

network_interface {

network = "default"

}

}
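To see the machine's address right in the terminal instead of the web console, an output could be added next to the resource — a sketch, using the same 0.11-era attribute path that appears in the apply log below:

```hcl
output "instance_internal_ip" {
  value = "${google_compute_instance.terraform.network_interface.0.network_ip}"
}
```

After apply, running terraform output prints the value.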

Check the active account:

(agile-aleph-203917)$ gcloud auth list

Credentialed Accounts

ACTIVE ACCOUNT

* esschtolts@gmail.com

To set the active account, run:

$ gcloud config set account `ACCOUNT`



Select the current project (note: its ID, not its display name) and check that Terraform initializes:

$ gcloud config set project agil7e-aleph-20391;

(agil7e-aleph-20391)$ ./terraform init | grep success

Terraform has been successfully initialized!

After creating the service-account key in the web console and saving it as key.json next to the configuration, we run apply and Terraform creates the machine:

machine_type: "" => "n1-standard-1"

metadata_fingerprint: "" => "<computed>"

name: "" => "terraform"

network_interface.#: "" => "1"

network_interface.0.address: "" => "<computed>"

network_interface.0.name: "" => "<computed>"

network_interface.0.network: "" => "default"

network_interface.0.network_ip: "" => "<computed>"

network_interface.0.network: "" => "default"

project: "" => "<computed>"

scheduling.#: "" => "<computed>"

self_link: "" => "<computed>"

tags_fingerprint: "" => "<computed>"

zone: "" => "us-central1-a"

google_compute_instance.terraform: Still creating (10s elapsed)

google_compute_instance.terraform: Still creating (20s elapsed)

google_compute_instance.terraform: Still creating (30s elapsed)

google_compute_instance.terraform: Still creating (40s elapsed)

google_compute_instance.terraform: Creation complete after 40s (ID: terraform)



Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The virtual machine has been created, which can be verified in the web console. To remove it, we delete the resource from the configuration and run apply again:

~/terraform (agil7e-aleph-20391)$ ./terraform apply

google_compute_instance.terraform: Refreshing state (ID: terraform)

An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:

destroy

Terraform will perform the following actions:

google_compute_instance.terraform

Plan: 0 to add, 0 to change, 1 to destroy.

Do you want to perform these actions?

Terraform will perform the actions described above.

Only 'yes' will be accepted to approve.

Enter a value: yes

google_compute_instance.terraform: Destroying (ID: terraform)

google_compute_instance.terraform: Still destroying (ID: terraform, 10s elapsed)

google_compute_instance.terraform: Still destroying (ID: terraform, 20s elapsed)

google_compute_instance.terraform: Still destroying (ID: terraform, 30s elapsed)

google_compute_instance.terraform: Still destroying (ID: terraform, 40s elapsed)

google_compute_instance.terraform: Still destroying (ID: terraform, 50s elapsed)

google_compute_instance.terraform: Still destroying (ID: terraform, 1m0s elapsed)

google_compute_instance.terraform: Still destroying (ID: terraform, 1m10s elapsed)

google_compute_instance.terraform: Still destroying (ID: terraform, 1m20s elapsed)

google_compute_instance.terraform: Still destroying (ID: terraform, 1m30s elapsed)

google_compute_instance.terraform: Still destroying (ID: terraform, 1m40s elapsed)

google_compute_instance.terraform: Still destroying (ID: terraform, 1m50s elapsed)

google_compute_instance.terraform: Still destroying (ID: terraform, 2m0s elapsed)

google_compute_instance.terraform: Still destroying (ID: terraform, 2m10s elapsed)

google_compute_instance.terraform: Still destroying (ID: terraform, 2m20s elapsed)

google_compute_instance.terraform: Destruction complete after 2m30s

Apply complete! Resources: 0 added, 0 changed, 1 destroyed.

Creating infrastructure in AWS

To keep the AWS configuration separate, we move the GCP files into their own folder and create a directory for AWS:

esschtolts@cloudshell:~/terraform (agil7e-aleph-20391)$ mkdir gcp

esschtolts@cloudshell:~/terraform (agil7e-aleph-20391)$ mv main.tf gcp/main.tf

esschtolts@cloudshell:~/terraform (agil7e-aleph-20391)$ mkdir aws

esschtolts@cloudshell:~/terraform (agil7e-aleph-20391)$ cd aws

A Role is needed so that services can act on our behalf — in our case, so that AWS EKS can manage EC2 instances for the cluster. AWS is very granular about rights: every action on every service can be allowed or denied, and individual permissions are grouped into policies (Policies). Policies can be attached to a user, to a group, or to a role that a service then assumes; rights can be managed both in the web console (IAM) and through the API. For the cluster we need the AmazonEKSClusterPolicy and AmazonEKSServicePolicy policies, which grant access to AWS EC2 (instances), AWS ELB (Elastic Load Balancer, load balancing) and AWS KMS (Key Management Service, key encryption), as well as to CloudWatch Logs (logging), Route 53 (DNS) and IAM (roles). A step-by-step guide to creating the IAM role is at https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html#create-service-role.

For fault tolerance a Kubernetes cluster should not depend on a single data center, so its nodes are spread across availability zones — isolated data centers within one region connected by low-latency links. Regions themselves are geographic; for example, US-east-1 is US East (N. Virginia) and US-east-2 is US East (Ohio). We will create the EKS cluster subnets in the US-east zones.

A VPC is also needed: it isolates the cluster's network from the outside, if only for security reasons.

Following the documentation at www.terraform.io/docs/providers/aws/r/eks_cluster.html, we describe the cluster:

esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ cat main.tf

provider "aws" {

access_key = "${var.token}"

secret_key = "${var.key}"

region = "us-east-1"

}



# Params



variable "token" {

default = ""

}

variable "key" {

default = ""

}



# EKS



resource "aws_eks_cluster" "example" {

enabled_cluster_log_types = ["api", "audit"]

name = "example"

role_arn = "arn:aws:iam::177510963163:role/ServiceRoleForAmazonEKS2"



vpc_config {

subnet_ids = ["${aws_subnet.subnet_1.id}", "${aws_subnet.subnet_2.id}"]

}

}



output "endpoint" {

value = "${aws_eks_cluster.example.endpoint}"

}



output "kubeconfig-certificate-authority-data" {

value = "${aws_eks_cluster.example.certificate_authority.0.data}"

}



# Role



data "aws_iam_policy_document" "eks-role-policy" {

statement {

actions = ["sts:AssumeRole"]



principals {

type = "Service"

identifiers = ["eks.amazonaws.com"]

}

}

}



resource "aws_iam_role" "tf_role" {

name = "tf_role"

assume_role_policy = "${data.aws_iam_policy_document.eks-role-policy.json}"

tags = {

tag-key = "tag-value"

}

}



resource "aws_iam_role_policy_attachment" "attach-cluster" {

role = "tf_role"

policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"

}



resource "aws_iam_role_policy_attachment" "attach-service" {

role = "tf_role"

policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"

}



# Subnet



resource "aws_subnet" "subnet_1" {

vpc_id = "${aws_vpc.main.id}"

cidr_block = "10.0.1.0/24"

availability_zone = "us-east-1a"



tags = {

Name = "Main"

}

}



resource "aws_subnet" "subnet_2" {

vpc_id = "${aws_vpc.main.id}"

cidr_block = "10.0.2.0/24"

availability_zone = "us-east-1b"



tags = {

Name = "Main"

}

}



resource "aws_vpc" "main" {

cidr_block = "10.0.0.0/16"

}
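As a sanity check on the addressing above, the two subnets must fit inside the VPC's 10.0.0.0/16 block without overlapping each other; Python's standard library can verify the arithmetic:

```python
import ipaddress

# The VPC spans 10.0.0.0/16; each aws_subnet carves a /24 out of it.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnet_1 = ipaddress.ip_network("10.0.1.0/24")
subnet_2 = ipaddress.ip_network("10.0.2.0/24")

print(subnet_1.subnet_of(vpc))      # True: subnet_1 lies inside the VPC
print(subnet_2.subnet_of(vpc))      # True: subnet_2 lies inside the VPC
print(subnet_1.overlaps(subnet_2))  # False: the subnets do not collide
print(subnet_1.num_addresses)       # 256 addresses per /24
```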

After about 9 minutes 44 seconds we get a working Kubernetes cluster:

esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ ./../terraform apply -var="token=AKIAJ4SYCNH2XVSHNN3A" -var="key=huEWRslEluynCXBspsul3AkKlinAlR9+MoU1ViY7"

And we delete it (the deletion took 10 minutes 23 seconds):

esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ ./../terraform destroy -var="token=AKIAJ4SYCNH2XVSHNN3A" -var="key=huEWRslEluynCXBspsul3AkKlinAlR9+MoU1ViY7"

Destroy complete! Resources: 7 destroyed.

CI/CD in AWS

Amazon (aws.amazon.com/ru/devops/) provides a set of DevOps services, among them:

* AWS CodePipeline — lets us assemble a delivery pipeline out of several services, so that every change passes automatically from commit through build, test and deploy.

* AWS CodeBuild — builds the project and runs its tests on managed build machines that scale with demand, so builds do not wait in a queue.

* AWS CodeDeploy — rolls the built release out to the target environments.

* AWS CodeStar — bootstraps a project, tying the repository, pipeline and dashboards together in one place.

Working with the S3 storage

A bucket's contents can be listed and synchronized to a local folder with the AWS CLI:

aws s3 ls s3://name_backet
aws s3 sync s3://name_backet name_fonder --exclude "*.tmp" # copy the bucket into a local folder, skipping temporary files

Now let's check that our AWS configuration initializes:

esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ ./../terraform init | grep success

Terraform has been successfully initialized!

The access keys for AWS can be created in the web console: in the account menu choose My account, then My Security Credentials, expand Access Keys and click Create New Access Key. With them we create the EKS (Elastic Kubernetes Service) cluster:

esschtolts@cloudshell:~/terraform/aws (agile-aleph-203917)$ ./../terraform apply \
-var="token=AKIAJ4SYCNH2XVSHNN3A" -var="key=huEWRslEluynCXBspsul3AkKlinAlR9+MoU1ViY7"

And remove everything:

$ ../terraform destroy

A cluster in GCP

A GKE cluster; its node pool can be described as a separate resource:

resource "google_container_cluster" "primary" {

name = "tf"

location = "us-central1"

}

$ cat main.tf # pinning the Terraform version

terraform {

required_version = "> 0.10.0"

}

terraform {

backend "s3" {

bucket = "foo-terraform"

key = "bucket/terraform.tfstate"

region = "us-east-1"

encrypt = "true"

}

}

$ cat cloud.tf # the provider (Hetzner Cloud, following the article linked below)

provider "hcloud" {

token = "${var.hcloud_token}"

}



$ cat variables.tf # the variables used by the configuration

variable "hcloud_token" {}



$ cat instances.tf # the servers themselves

resource "hcloud_server" "server" { ....



$ terraform import aws_acm_certificate.cert arn:aws:acm:eu-central-1:123456789012:certificate/7e7a28d2-163f-4b8f-b9cd-822f96c08d6a

$ terraform init # download the providers

$ terraform plan # preview the changes

$ terraform apply # apply the changes

A digression — an example of resolving a port conflict when starting a container:

essh@kubernetes-master:~/graylog$ sudo docker run --name graylog --link graylog_mongo:mongo --link graylog_elasticsearch:elasticsearch \
-p 9000:9000 -p 12201:12201 -p 1514:1514 \
-e GRAYLOG_HTTP_EXTERNAL_URI="http://127.0.0.1:9000/" \
-d graylog/graylog:3.0

0f21f39192440d9a8be96890f624c1d409883f2e350ead58a5c4ce0e91e54c9d

docker: Error response from daemon: driver failed programming external connectivity on endpoint graylog (714a6083b878e2737bd4d4577d1157504e261c03cb503b6394cb844466fb4781): Bind for 0.0.0.0:9000 failed: port is already allocated.

essh@kubernetes-master:~/graylog$ sudo netstat -nlp | grep 9000

tcp6 0 0 :::9000 :::* LISTEN 2505/docker-proxy

essh@kubernetes-master:~/graylog$ docker rm graylog

graylog

essh@kubernetes-master:~/graylog$ sudo docker run --name graylog --link graylog_mongo:mongo --link graylog_elasticsearch:elasticsearch \
-p 9001:9000 -p 12201:12201 -p 1514:1514 \
-e GRAYLOG_HTTP_EXTERNAL_URI="http://127.0.0.1:9001/" \
-d graylog/graylog:3.0

e5aefd6d630a935887f494550513d46e54947f897e4a64b0703d8f7094562875

https://blog.maddevs.io/terrafom-hetzner-a2f22534514b

Each provider's resources can also be kept in a directory of its own:

$ cat aws/provider.tf

provider "aws" {

region = "us-west-1"

}

resource "aws_instance" "my_ec2" {

ami = "${data.aws_ami.ubuntu.id}"

instance_type = "t2.micro"

}



$ cd aws

$ aws configure

$ terraform init

$ terraform apply -auto-approve

$ cd ..

provider "aws" {

region = "us-west-1"

}

resource "aws_sqs_queue" "terraform_queue" {

name = "terraform-queue"

delay_seconds = 90

max_message_size = 2048

message_retention_seconds = 86400

receive_wait_time_seconds = 10

}

data "aws_route53_zone" "vuejs_phalcon" {

name = "test.com."

private_zone = true

}

resource "aws_route53_record" "www" {

zone_id = "${data.aws_route53_zone.vuejs_phalcon.zone_id}"

name = "www.${data.aws_route53_zone.vuejs_phalcon.name}"

type = "A"

ttl = "300"

records = ["10.0.0.1"]

}

resource "aws_elasticsearch_domain" "example" {

domain_name = "example"

elasticsearch_version = "1.5"

cluster_config {

instance_type = "r4.large.elasticsearch"

}

snapshot_options {

automated_snapshot_start_hour = 23

}

}

resource "aws_eks_cluster" "eks_vuejs_phalcon" {

name = "eks_vuejs_phalcon"

role_arn = "${aws_iam_role.eks_vuejs_phalcon.arn}"



vpc_config {

subnet_ids = ["${aws_subnet.eks_vuejs_phalcon.id}", "${aws_subnet.example2.id}"]

}

}

output "endpoint" {

value = "${aws_eks_cluster.eks_vuejs_phalcon.endpoint}"

}

output "kubeconfig-certificate-authority-data" {

value = "${aws_eks_cluster.eks_vuejs_phalcon.certificate_authority.0.data}"

}

provider "google" {

credentials = "${file("account.json")}"

project = "my-project-id"

region = "us-central1"

}

resource "google_container_cluster" "primary" {

name = "my-gke-cluster"

location = "us-central1"

remove_default_node_pool = true

initial_node_count = 1

master_auth {

username = ""

password = ""

}

}

output "client_certificate" {

value = "${google_container_cluster.primary.master_auth.0.client_certificate}"

}

output "client_key" {

value = "${google_container_cluster.primary.master_auth.0.client_key}"

}

output "cluster_ca_certificate" {

value = "${google_container_cluster.primary.master_auth.0.cluster_ca_certificate}"

}

$ cat deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: phalcon-vuejs
  namespace: development
spec:
  selector:
    matchLabels:
      app: vuejs
  replicas: 1
  template:
    metadata:
      labels:
        app: vuejs
    spec:
      initContainers:
      - name: vuejs-build
        image: vuejs/ci
        volumeMounts:
        - name: app
          mountPath: /app/public
        command:
        - /bin/bash
        - -c
        - |
          cd /app/public
          git clone essch/vuejs_phalcon:1.0 .
          npm test
          npm run build
      containers:
      - name: healthcheck
        image: mileschou/phalcon:7.2-cli
        args:
        - /bin/sh
        - -c
        - cd /usr/src/app && git clone essch/app_phalcon:1.0 && touch /tmp/healthy && sleep 10 && php script.php
        readinessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 5
          periodSeconds: 5
        livenessProbe:
          exec:
            command:
            - cat
            - /tmp/healthy
          initialDelaySeconds: 15
          periodSeconds: 5
      volumes:
      - name: app
        emptyDir: {}

Here we created an AWS EC2 virtual machine without specifying keys in the configuration, because the AWS API takes the credentials from the environment in which Terraform is run.

As you can see, the configuration stays declarative regardless of provider: we describe the desired state, and Terraform works out the required actions itself.

Now the network:

resource "aws_vpc" "my_vpc" {

cidr_block = "190.160.0.0/16"

instance_tenancy = "default"

}

resource "aws_subnet" "my_subnet" {

vpc_id = "${aws_vpc.my_vpc.id}"

cidr_block = "190.160.1.0/24"

}

$ cat gce/provider.tf

provider "google" {

credentials = "${file("account.json")}"

project = "my-project-id"

region = "us-central1"

}

resource "google_compute_instance" "default" {

name = "test"

machine_type = "n1-standard-1"

zone = "us-central1-a"

}

$ cd gce

$ terraform init

$ terraform apply

$ cd ..

To share the state, for example within a team, it can be stored centrally in an AWS S3 bucket (access to which should, of course, be restricted) by declaring a backend:

terraform {

backend "s3" {

bucket = "tfstate"

key = "terraform.tfstate"

region = "us-east-2"

}

}
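The terraform.tfstate file that such a backend stores is plain JSON, so it can be inspected with ordinary tools. A hedged sketch of listing recorded resources — the embedded state document below is synthetic, in the pre-0.12 layout:

```python
import json

# A synthetic state document in the pre-0.12 layout: modules -> resources map.
state_doc = """
{
  "version": 3,
  "modules": [
    {
      "path": ["root"],
      "resources": {
        "aws_sqs_queue.terraform_queue": {"type": "aws_sqs_queue"},
        "aws_route53_record.www": {"type": "aws_route53_record"}
      }
    }
  ]
}
"""

def resource_addresses(doc: str):
    """List resource addresses recorded in a state document."""
    state = json.loads(doc)
    names = []
    for module in state.get("modules", []):
        names.extend(sorted(module.get("resources", {})))
    return names

print(resource_addresses(state_doc))  # ['aws_route53_record.www', 'aws_sqs_queue.terraform_queue']
```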

provider "kubernetes" {

host = "https://104.196.242.174"

username = "ClusterMaster"

password = "MindTheGap"

}



resource "kubernetes_pod" "my_pod" {

spec {

container {

image = "nginx:1.7.9"

name = "nginx"

port {

container_port = 80

}

}

}

}

The basic commands:

terraform init # download the providers used in the configuration and initialize the backend

terraform validate # check the configuration syntax

terraform plan # preview the actions: what will be created, changed and destroyed,

# without touching the real infrastructure, so the consequences can be reviewed first.

terraform apply #  

Before applying, Terraform asks for explicit confirmation of the plan.

$ which aws

$ aws configure # https://www.youtube.com/watch?v=IxA1IPypzHs



$ cat aws.tf

# https://www.terraform.io/docs/providers/aws/r/instance.html

resource "aws_instance" "ec2instance" {

ami = "${var.ami}"

instance_type = "t2.micro"

}



resource "aws_security_group" "instance_gc" {



}



$ cat run.sh

export AWS_ACCESS_KEY_ID="anaccesskey"

export AWS_SECRET_ACCESS_KEY="asecretkey"

export AWS_DEFAULT_REGION="us-west-2"

terraform plan

terraform apply



$ cat gce.tf # https://www.terraform.io/docs/providers/google/index.html#

# Google Cloud Platform Provider



provider "google" {

credentials = "${file("account.json")}"

project = "phalcon"

region = "us-central1"

}



#https://www.terraform.io/docs/providers/google/r/app_engine_application.html

resource "google_project" "my_project" {

name = "My Project"

project_id = "your-project-id"

org_id = "1234567"

}



resource "google_app_engine_application" "app" {

project = "${google_project.my_project.project_id}"

location_id = "us-central"

}



# google_compute_instance

resource "google_compute_instance" "default" {

name = "test"

machine_type = "n1-standard-1"

zone = "us-central1-a"



tags = ["foo", "bar"]



boot_disk {

initialize_params {

image = "debian-cloud/debian-9"

}

}



// Local SSD disk

scratch_disk {

}



network_interface {

network = "default"



access_config {

// Ephemeral IP

}

}



metadata = {

foo = "bar"

}



metadata_startup_script = "echo hi > /test.txt"



service_account {

scopes = ["userinfo-email", "compute-ro", "storage-ro"]

}

}



There is also the external data source, which lets us call an arbitrary program, for example a BASH or Python script:

data "external" "python3" {

program = ["python3"]

}
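The contract of the external data source is simple: Terraform writes its query map to the program's stdin as JSON and expects a flat JSON object of strings back on stdout. A minimal sketch of such a program's logic in Python — the name key and the -cluster suffix are made up for illustration:

```python
import json

def handle_query(raw: str) -> str:
    """Model of an external-data-source program: JSON in, flat JSON of strings out."""
    query = json.loads(raw)                    # Terraform's "query" map arrives as JSON
    name = query.get("name", "default")
    result = {"full_name": name + "-cluster"}  # every value must be a string
    return json.dumps(result)

print(handle_query('{"name": "node"}'))  # {"full_name": "node-cluster"}
```

In a real program the query would be read from sys.stdin and the result written to sys.stdout.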

Building a cluster in GCP with Terraform

Now let's assemble a cluster with Terraform in GCP step by step. For it I created a separate project, node-cluster, using GCE instances. In the console we create a service account for the project: IAM & admin > Service accounts > create, grant it the necessary roles and create a JSON key, which we download and save as kubernetes_key.JSON:

eSSH@Kubernetes-master:~/node-cluster$ cp ~/Downloads/node-cluster-243923-bbec410e0a83.JSON ./kubernetes_key.JSON

Download Terraform:

essh@kubernetes-master:~/node-cluster$ wget https://releases.hashicorp.com/terraform/0.12.2/terraform_0.12.2_linux_amd64.zip >/dev/null 2>/dev/null

essh@kubernetes-master:~/node-cluster$ unzip terraform_0.12.2_linux_amd64.zip && rm -f terraform_0.12.2_linux_amd64.zip

Archive: terraform_0.12.2_linux_amd64.zip

inflating: terraform

essh@kubernetes-master:~/node-cluster$ ./terraform version

Terraform v0.12.2



First, let's check that the GCE provider initializes with an "empty" configuration:

essh@kubernetes-master:~/node-cluster$ cat main.tf

provider "google" {

credentials = "${file("kubernetes_key.json")}"

project = "node-cluster"

region = "us-central1"

}

essh@kubernetes-master:~/node-cluster$ ./terraform init



Initializing the backend



Initializing provider plugins

Checking for available provider plugins

Downloading plugin for provider "google" (terraform-providers/google) 2.8.0



The following providers do not have any version constraints in configuration,

so the latest version was installed.



To prevent automatic upgrades to new major versions that may contain breaking

changes, it is recommended to add version = "..." constraints to the

corresponding provider blocks in configuration, with the constraint strings

suggested below.



* provider.google: version = "~> 2.8"



Terraform has been successfully initialized!



You may now begin working with Terraform. Try running "terraform plan" to see

any changes that are required for your infrastructure. All Terraform commands

should now work.



If you ever set or change modules or backend configuration for Terraform,

rerun this command to reinitialize your working directory. If you forget, other

commands will detect it and remind you to do so if necessary.

Now let's add a virtual machine:

essh@kubernetes-master:~/node-cluster$ cat main.tf

provider "google" {

credentials = "${file("kubernetes_key.json")}"

project = "node-cluster-243923"

region = "europe-north1"

}

resource "google_compute_instance" "cluster" {

name = "cluster"

zone = "europe-north1-a"

machine_type = "f1-micro"



boot_disk {

initialize_params {

image = "debian-cloud/debian-9"

}

}



network_interface {

network = "default"

access_config {}

}

}



essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply



An execution plan has been generated and is shown below.

Resource actions are indicated with the following symbols:

+ create



Terraform will perform the following actions:



# google_compute_instance.cluster will be created

+ resource "google_compute_instance" "cluster" {

+ can_ip_forward = false

+ cpu_platform = (known after apply)

+ deletion_protection = false

+ guest_accelerator = (known after apply)

+ id = (known after apply)

+ instance_id = (known after apply)

+ label_fingerprint = (known after apply)

+ machine_type = "f1-micro"

+ metadata_fingerprint = (known after apply)

+ name = "cluster"

+ project = (known after apply)

+ self_link = (known after apply)

+ tags_fingerprint = (known after apply)

+ zone = "europe-north1-a"



+ boot_disk {

+ auto_delete = true

+ device_name = (known after apply)

+ disk_encryption_key_sha256 = (known after apply)

+ source = (known after apply)



+ initialize_params {

+ image = "debian-cloud/debian-9"

+ size = (known after apply)

+ type = (known after apply)

}

}



+ network_interface {

+ address = (known after apply)

+ name = (known after apply)

+ network = "default"

+ network_ip = (known after apply)

+ subnetwork = (known after apply)

+ subnetwork_project = (known after apply)



+ access_config {

+ assigned_nat_ip = (known after apply)

+ nat_ip = (known after apply)

+ network_tier = (known after apply)

}

}



+ scheduling {

+ automatic_restart = (known after apply)

+ on_host_maintenance = (known after apply)

+ preemptible = (known after apply)



+ node_affinities {

+ key = (known after apply)

+ operator = (known after apply)

+ values = (known after apply)

}

}

}



Plan: 1 to add, 0 to change, 0 to destroy.



Do you want to perform these actions?

Terraform will perform the actions described above.

Only 'yes' will be accepted to approve.



Enter a value: yes



google_compute_instance.cluster: Creating

google_compute_instance.cluster: Still creating [10s elapsed]

google_compute_instance.cluster: Creation complete after 11s [id=cluster]



Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

To connect to the machine we assign it a static IP address and generate an SSH key:

essh@kubernetes-master:~/node-cluster$ ssh-keygen -f node-cluster

Generating public/private rsa key pair.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in node-cluster.

Your public key has been saved in node-cluster.pub.

The key fingerprint is:

SHA256:vUhDe7FOzykE5BSLOIhE7Xt9o+AwgM4ZKOCW4nsLG58 essh@kubernetes-master

The key's randomart image is:

+---[RSA 2048]----+

|.o. +. |

|o. o . = . |

|* + o . = . |

|=* . . . +o |

|B + . .S * |

| = + o o X + . |

| o. = . + = + |

| .= . . |

| ..E. |

+----[SHA256]-----+

essh@kubernetes-master:~/node-cluster$ ls node-cluster.pub

node-cluster.pub

essh@kubernetes-master:~/node-cluster$ cat main.tf

provider "google" {

credentials = "${file("kubernetes_key.json")}"

project = "node-cluster-243923"

region = "europe-north1"

}



resource "google_compute_address" "static-ip-address" {

name = "static-ip-address"

}



resource "google_compute_instance" "cluster" {

name = "cluster"

zone = "europe-north1-a"

machine_type = "f1-micro"



boot_disk {

initialize_params {

image = "debian-cloud/debian-9"

}

}



metadata = {

ssh-keys = "essh:${file("./node-cluster.pub")}"

}



network_interface {

network = "default"

access_config {

nat_ip = "${google_compute_address.static-ip-address.address}"

}

}

}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

Check SSH access to it:

essh@kubernetes-master:~/node-cluster$ ssh -i ./node-cluster essh@35.228.82.222

The authenticity of host '35.228.82.222 (35.228.82.222)' can't be established.

ECDSA key fingerprint is SHA256:o7ykujZp46IF+eu7SaIwXOlRRApiTY1YtXQzsGwO18A.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '35.228.82.222' (ECDSA) to the list of known hosts.

Linux cluster 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1+deb9u2 (2019-05-13) x86_64



The programs included with the Debian GNU/Linux system are free software;

the exact distribution terms for each program are described in the

individual files in /usr/share/doc/*/copyright.



Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent

permitted by applicable law.

essh@cluster:~$ ls

essh@cluster:~$ exit

logout

Connection to 35.228.82.222 closed.

Install the Google Cloud SDK (gcloud):

essh@kubernetes-master:~/node-cluster$ curl https://sdk.cloud.google.com | bash

essh@kubernetes-master:~/node-cluster$ exec -l $SHELL

essh@kubernetes-master:~/node-cluster$ gcloud init

Choose the project:

You are logged in as: [esschtolts@gmail.com].



Pick cloud project to use:

[1] agile-aleph-203917

[2] node-cluster-243923

[3] essch

[4] Create a new project

Please enter numeric choice or text value (must exactly match list

item):

Please enter a value between 1 and 4, or a value present in the list: 2



Your current project has been set to: [node-cluster-243923].

Choose the default zone:

[50] europe-north1-a

Did not print [12] options.

Too many options [62]. Enter "list" at prompt to print choices fully.

Please enter numeric choice or text value (must exactly match list

item):

Please enter a value between 1 and 62, or a value present in the list: 50

essh@kubernetes-master:~/node-cluster$ PROJECT_ID="node-cluster-243923"

essh@kubernetes-master:~/node-cluster$ echo $PROJECT_ID

node-cluster-243923

essh@kubernetes-master:~/node-cluster$ export GOOGLE_APPLICATION_CREDENTIALS=$HOME/node-cluster/kubernetes_key.json

essh@kubernetes-master:~/node-cluster$ sudo docker-machine create --driver google --google-project $PROJECT_ID vm01

sudo GOOGLE_APPLICATION_CREDENTIALS=$HOME/node-cluster/kubernetes_key.json docker-machine create --driver google --google-project $PROJECT_ID vm01

// https://docs.docker.com/machine/drivers/gce/

// https://github.com/docker/machine/issues/4722

essh@kubernetes-master:~/node-cluster$ gcloud config list

[compute]

region = europe-north1

zone = europe-north1-a

[core]

account = esschtolts@gmail.com

disable_usage_reporting = False

project = node-cluster-243923



Your active configuration is: [default]



Let's add copying a file to the machine and running commands on it over SSH:

essh@kubernetes-master:~/node-cluster$ cat main.tf

provider "google" {

credentials = "${file("kubernetes_key.json")}"

project = "node-cluster-243923"

region = "europe-north1"

}



resource "google_compute_address" "static-ip-address" {

name = "static-ip-address"

}



resource "google_compute_instance" "cluster" {

name = "cluster"

zone = "europe-north1-a"

machine_type = "f1-micro"



boot_disk {

initialize_params {

image = "debian-cloud/debian-9"

}

}



metadata = {

ssh-keys = "essh:${file("./node-cluster.pub")}"

}



network_interface {

network = "default"

access_config {

nat_ip = "${google_compute_address.static-ip-address.address}"

}

}

}



resource "null_resource" "cluster" {



triggers = {

cluster_instance_ids = "${join(",", google_compute_instance.cluster.*.id)}"

}



connection {

host = "${google_compute_address.static-ip-address.address}"

type = "ssh"

user = "essh"

timeout = "2m"

private_key = "${file("~/node-cluster/node-cluster")}"

# agent = "false"

}



provisioner "file" {

source = "client.js"

destination = "~/client.js"

}



provisioner "remote-exec" {

inline = [

"cd ~ && echo 1 > test.txt"

]

}

}



essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

google_compute_address.static-ip-address: Creating

google_compute_address.static-ip-address: Creation complete after 5s [id=node-cluster-243923/europe-north1/static-ip-address]

google_compute_instance.cluster: Creating

google_compute_instance.cluster: Still creating [10s elapsed]

google_compute_instance.cluster: Creation complete after 12s [id=cluster]

null_resource.cluster: Creating

null_resource.cluster: Provisioning with 'file'

null_resource.cluster: Provisioning with 'remote-exec'

null_resource.cluster (remote-exec): Connecting to remote host via SSH

null_resource.cluster (remote-exec): Host: 35.228.82.222

null_resource.cluster (remote-exec): User: essh

null_resource.cluster (remote-exec): Password: false

null_resource.cluster (remote-exec): Private key: true

null_resource.cluster (remote-exec): Certificate: false

null_resource.cluster (remote-exec): SSH Agent: false

null_resource.cluster (remote-exec): Checking Host Key: false

null_resource.cluster (remote-exec): Connected!

null_resource.cluster: Creation complete after 7s [id=816586071607403364]



Apply complete! Resources: 3 added, 0 changed, 0 destroyed.



esschtolts@cluster:~$ ls /home/essh/

client.js test.txt



Running sudo ./terraform destroy removes the whole stack again:

[sudo] password for essh:

google_compute_address.static-ip-address: Refreshing state [id=node-cluster-243923/europe-north1/static-ip-address]

google_compute_instance.cluster: Refreshing state [id=cluster]

null_resource.cluster: Refreshing state [id=816586071607403364]



Enter a value: yes



null_resource.cluster: Destroying [id=816586071607403364]

null_resource.cluster: Destruction complete after 0s

google_compute_instance.cluster: Destroying [id=cluster]

google_compute_instance.cluster: Still destroying [id=cluster, 10s elapsed]

google_compute_instance.cluster: Still destroying [id=cluster, 20s elapsed]

google_compute_instance.cluster: Destruction complete after 27s

google_compute_address.static-ip-address: Destroying [id=node-cluster-243923/europe-north1/static-ip-address]

google_compute_address.static-ip-address: Destruction complete after 8s

Everything works together: the machine is created, the files are delivered and the commands are executed, and the whole environment can be destroyed and recreated at any moment. Now let's move from a single machine to a managed cluster:

A Kubernetes cluster

To create a managed Kubernetes cluster, a single resource is enough:

essh@kubernetes-master:~/node-cluster/Kubernetes$ cat main.tf

provider "google" {

credentials = "${file("../kubernetes_key.json")}"

project = "node-cluster-243923"

region = "europe-north1"

}



resource "google_container_cluster" "node-ks" {

name = "node-ks"

location = "europe-north1-a"

initial_node_count = 3

}

essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform init

essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform apply

The cluster was created in 2:15. If instead of the single zone europe-north1-a we specify the region europe-north1, nodes are created in each of its zones europe-north1-a, europe-north1-b and europe-north1-c, which took 3:13. To spread the nodes across zones while keeping control of their number, we keep the cluster in one zone and list the additional node locations explicitly: europe-north1-a, europe-north1-b, europe-north1-c:

provider "google" {

credentials = "${file("../kubernetes_key.json")}"

project = "node-cluster-243923"

region = "europe-north1"

}



resource "google_container_cluster" "node-ks" {

name = "node-ks"

location = "europe-north1-a"

node_locations = ["europe-north1-b", "europe-north1-c"]

initial_node_count = 1

}

Now let's add autoscaling of the nodes themselves: by default Kubernetes schedules PODs only onto existing nodes and cannot add new ones when capacity runs out. For that we describe a node pool that is allowed to grow from 1 to 2 nodes per zone (we pay per node):

essh@kubernetes-master:~/node-cluster/Kubernetes$ cat main.tf

provider "google" {

credentials = "${file("../kubernetes_key.json")}"

project = "node-cluster-243923"

region = "europe-north1"

}



resource "google_container_cluster" "node-ks" {

name = "node-ks"

location = "europe-north1-a"

node_locations = ["europe-north1-b", "europe-north1-c"]

initial_node_count = 1

}



resource "google_container_node_pool" "node-ks-pool" {

name = "node-ks-pool"

cluster = "${google_container_cluster.node-ks.name}"

location = "europe-north1-a"

node_count = "1"



node_config {

machine_type = "n1-standard-1"

}



autoscaling {

min_node_count = 1

max_node_count = 2

}

}

Let's check that the cluster is up and its master endpoint answers by IP:

essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud container clusters list

NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS

node-ks europe-north1-a 1.12.8-gke.6 35.228.20.35 n1-standard-1 1.12.8-gke.6 6 RECONCILING



essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud container clusters describe node-ks | grep '^endpoint'

endpoint: 35.228.20.35



essh@kubernetes-master:~/node-cluster/Kubernetes$ ping 35.228.20.35 -c 2

PING 35.228.20.35 (35.228.20.35) 56(84) bytes of data.

64 bytes from 35.228.20.35: icmp_seq=1 ttl=59 time=8.33 ms

64 bytes from 35.228.20.35: icmp_seq=2 ttl=59 time=7.09 ms



--- 35.228.20.35 ping statistics ---

2 packets transmitted, 2 received, 0% packet loss, time 1001ms

rtt min/avg/max/mdev = 7.094/7.714/8.334/0.620 ms

To turn this configuration into a reusable module, the hard-coded values such as the project name have to be replaced with parameters. A variable is referenced as var.name_value and interpolated inside strings JS-style as ${var.name_value}; built-in values such as path.root are also available.

essh@kubernetes-master:~/node-cluster/Kubernetes$ cat variables.tf

variable "region" {

default = "europe-north1"

}



variable "project_name" {

type = string

default = ""

}



variable "gce_key" {

default = "./kubernetes_key.json"

}



variable "node_count_zone" {

default = 1

}
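With these variables declared, the module's main.tf can consume them in place of the hard-coded values. A minimal sketch, assuming the same provider and cluster resources as above (the exact use of node_count_zone is my assumption):

```hcl
provider "google" {
  credentials = file(var.gce_key)
  project     = var.project_name
  region      = var.region
}

resource "google_container_cluster" "node-ks" {
  name               = "node-ks"
  location           = "${var.region}-a"
  node_locations     = ["${var.region}-b", "${var.region}-c"]
  initial_node_count = var.node_count_zone
}
```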

A variable can be set at invocation time with the -var flag, for example: sudo ./terraform apply -var="project_name=node-cluster-243923". Don't forget to copy the key into the module folder:

essh@kubernetes-master:~/node-cluster/Kubernetes$ cp ../kubernetes_key.json .

essh@kubernetes-master:~/node-cluster/Kubernetes$ sudo ../terraform apply -var="project_name=node-cluster-243923"

To use this configuration as a module, go up to the parent folder, reference the module by its relative path and pass the project name into it:

essh@kubernetes-master:~/node-cluster/Kubernetes$ cd ..

essh@kubernetes-master:~/node-cluster$ cat main.tf

module "Kubernetes" {

source = "./Kubernetes"

project_name = "node-cluster-243923"

}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

Let's share the module with the community by publishing it to GitHub, tagging a release and keeping the state files and keys out of the repository:

essh@kubernetes-master:~/node-cluster/Kubernetes$ git init

Initialized empty GIT repository in /home/essh/node-cluster/Kubernetes/.git/

essh@kubernetes-master:~/node-cluster/Kubernetes$ echo "terraform.tfstate" >> .gitignore

essh@kubernetes-master:~/node-cluster/Kubernetes$ echo "terraform.tfstate.backup" >> .gitignore

essh@kubernetes-master:~/node-cluster/Kubernetes$ echo ".terraform/" >> .gitignore

essh@kubernetes-master:~/node-cluster/Kubernetes$ rm -f kubernetes_key.json

essh@kubernetes-master:~/node-cluster/Kubernetes$ git remote add origin https://github.com/ESSch/terraform-google-kubernetes.git

essh@kubernetes-master:~/node-cluster/Kubernetes$ git add .

essh@kubernetes-master:~/node-cluster/Kubernetes$ git commit -m 'create a k8s Terraform module'

[master (root-commit) 4f73c64] create a Kubernetes Terraform module

3 files changed, 48 insertions(+)

create mode 100644 .gitignore

create mode 100644 main.tf

create mode 100644 variables.tf

essh@kubernetes-master:~/node-cluster/Kubernetes$ git push -u origin master

essh@kubernetes-master:~/node-cluster/Kubernetes$ git tag -a v0.0.2 -m 'publish'

essh@kubernetes-master:~/node-cluster/Kubernetes$ git push origin v0.0.2

After connecting the repository on https://registry.terraform.io (sign in with GitHub; the repository name must follow the terraform-PROVIDER-NAME convention, hence terraform-google-kubernetes), the module can be consumed by its registry address and version:

essh@kubernetes-master:~/node-cluster$ cat main.tf

module "kubernetes" {

# source = "./Kubernetes"

source = "ESSch/kubernetes/google"

version = "0.0.2"



project_name = "node-cluster-243923"

}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform init

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

On this run I hit the error ZONE_RESOURCE_POOL_EXHAUSTED "does not have enough resources available to fulfill the request. Try a different zone, or try again later", meaning the chosen zone had temporarily run out of machines of the requested type. Since the region is now parameterized, it is enough to switch to region = "europe-west2"; after changing the provider's region, Terraform has to be re-initialized with ./terraform init before ./terraform apply will work again. Note also how providers interact with modules: a provider declared inside a module works while the module lives in its own folder, but a module should not carry the provider and the credentials itself — it should receive an already configured provider from the root. That separation lets every part of the infrastructure live in its own folder-module — the cluster in Kubernetes, the application in nodejs — each developed and applied independently, while the root main.tf wires them together. It also means the application module does not care that the cluster happens to be Google's: later we could point it at any other Kubernetes without touching the application. So let's move the provider out of the module to the root configuration and add an application module alongside the cluster module:

essh@kubernetes-master:~/node-cluster$ ls nodejs/

main.tf



essh@kubernetes-master:~/node-cluster$ cat main.tf

//module "kubernetes" {

// source = "ESSch/kubernetes/google"

// version = "0.0.2"

//

// project_name = "node-cluster-243923"

// region = "europe-west2"

//}



provider "google" {

credentials = "${file("./kubernetes_key.json")}"

project = "node-cluster-243923"

region = "europe-west2"

}



module "Kubernetes" {

source = "./Kubernetes"

project_name = "node-cluster-243923"

region = "europe-west2"

}



module "nodejs" {

source = "./nodejs"

}

essh@kubernetes-master:~/node-cluster$ sudo ./terraform init

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

We will hand the cluster's connection parameters from the Kubernetes module to the application module through outputs:

essh@kubernetes-master:~/node-cluster$ cat Kubernetes/outputs.tf

output "endpoint" {

value = google_container_cluster.node-ks.endpoint

sensitive = true

}



output "name" {

value = google_container_cluster.node-ks.name

sensitive = true

}



output "cluster_ca_certificate" {

value = base64decode(google_container_cluster.node-ks.master_auth.0.cluster_ca_certificate)

}

essh@kubernetes-master:~/node-cluster$ cat main.tf

//module "kubernetes" {

// source = "ESSch/kubernetes/google"

// version = "0.0.2"

//

// project_name = "node-cluster-243923"

// region = "europe-west2"

//}



provider "google" {

credentials = file("./kubernetes_key.json")

project = "node-cluster-243923"

region = "europe-west2"

}



module "Kubernetes" {

source = "./Kubernetes"

project_name = "node-cluster-243923"

region = "europe-west2"

}



module "nodejs" {

source = "./nodejs"

endpoint = module.Kubernetes.endpoint

cluster_ca_certificate = module.Kubernetes.cluster_ca_certificate

}

essh@kubernetes-master:~/node-cluster$ cat nodejs/variable.tf

variable "endpoint" {}



variable "cluster_ca_certificate" {}

As the application we will start with NGINX, replacing its default page with one that reports the container's hostname, so we can see which replica answers. The official image's Dockerfile ends with CMD ["nginx", "-g", "daemon off;"], which is equivalent to running nginx -g 'daemon off;' inside the container. To customize the page we override the command with a BASH one-liner that writes $HOSTNAME into the index page before starting NGINX in the foreground. First, let's test the idea locally:

essh@kubernetes-master:~/node-cluster$ sudo docker run -it --rm nginx:1.17.0 which nginx

/usr/sbin/nginx

sudo docker run -it --rm -p 8333:80 nginx:1.17.0 /bin/bash -c "echo \$HOSTNAME > /usr/share/nginx/html/index2.html && /usr/sbin/nginx -g 'daemon off;'"

Now let's reproduce this in the cluster. The kubernetes provider connects to our cluster using the endpoint, an access token and the CA certificate; the application itself is a Deployment of three replicas with a Service in front of them:

essh@kubernetes-master:~/node-cluster$ cat nodejs/main.tf

terraform {

required_version = ">= 0.12.0"

}



data "google_client_config" "default" {}



provider "kubernetes" {

host = var.endpoint



token = data.google_client_config.default.access_token

cluster_ca_certificate = var.cluster_ca_certificate



load_config_file = false

}



essh@kubernetes-master:~/node-cluster$ cat nodejs/main.tf

resource "kubernetes_deployment" "nodejs" {

metadata {

name = "terraform-nodejs"

labels = {

app = "NodeJS"

}

}

spec {

replicas = 3

selector {

match_labels = {

app = "NodeJS"

}

}

template {

metadata {

labels = {

app = "NodeJS"

}

}

spec {

container {

image = "nginx:1.17.0"

name = "node-js"

command = ["/bin/bash"]

args = ["-c", "echo $HOSTNAME > /usr/share/nginx/html/index.html && /usr/sbin/nginx -g 'daemon off;'"]

}

}

}

}

}



resource "kubernetes_service" "nodejs" {

metadata {

name = "terraform-nodejs"

}

spec {

selector = {

app = kubernetes_deployment.nodejs.metadata.0.labels.app

}

port {

port = 80

target_port = var.target_port

}



type = "LoadBalancer"

}

Let's verify the deployment with kubectl, first fetching the cluster credentials via gcloud:

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply



essh@kubernetes-master:~/node-cluster$ gcloud container clusters get-credentials node-ks --region europe-west2-a

Fetching cluster endpoint and auth data.

kubeconfig entry generated for node-ks.



essh@kubernetes-master:~/node-cluster$ kubectl get deployments -o wide

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR

terraform-nodejs 3 3 3 3 25m node-js nginx:1.17.0 app=NodeJS



essh@kubernetes-master:~/node-cluster$ kubectl get pods -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE

terraform-nodejs-6bd565dc6c-8768b 1/1 Running 0 4m45s 10.4.3.15 gke-node-ks-node-ks-pool-07115c5b-bw15 <none>

terraform-nodejs-6bd565dc6c-hr5vg 1/1 Running 0 4m42s 10.4.5.13 gke-node-ks-node-ks-pool-27e2e52c-9q5b <none>

terraform-nodejs-6bd565dc6c-mm7lh 1/1 Running 0 4m43s 10.4.2.6 gke-node-ks-default-pool-2dc50760-757p <none>



esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ docker ps | grep node-js_terraform

152e3c0ed940 719cd2e3ed04

"/bin/bash -c 'ech…" 8 minutes ago Up 8 minutes

k8s_node-js_terraform-nodejs-6bd565dc6c-8768b_default_7a87ae4a-9379-11e9-a78e-42010a9a0114_0



esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ docker exec -it 152e3c0ed940 cat /usr/share/nginx/html/index.html

terraform-nodejs-6bd565dc6c-8768b



esschtolts@gke-node-ks-node-ks-pool-27e2e52c-9q5b ~ $ docker exec -it c282135be446 cat /usr/share/nginx/html/index.html

terraform-nodejs-6bd565dc6c-hr5vg



esschtolts@gke-node-ks-default-pool-2dc50760-757p ~ $ docker exec -it 8d1cf9ef44e6 cat /usr/share/nginx/html/index.html

terraform-nodejs-6bd565dc6c-mm7lh



esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.4.2.6

terraform-nodejs-6bd565dc6c-mm7lh

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.4.5.13

terraform-nodejs-6bd565dc6c-hr5vg

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.4.3.15

terraform-nodejs-6bd565dc6c-8768b

Each POD now serves a page with its own name, and the Service balances requests between the replicas selected by the labels in its spec. We connected to the nodes through the SSH web console in the GCP Compute Engine section. Inside the cluster the service is also reachable by name — curl terraform-nodejs:80 — because Kubernetes provides internal DNS for services. From outside, the address appears in the EXTERNAL-IP column of kubectl get service, or in the web console under GCP > Kubernetes Engine > Services:

essh@kubernetes-master:~/node-cluster$ kubectl get service

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

kubernetes ClusterIP 10.7.240.1 <none> 443/TCP 6h58m

terraform-nodejs LoadBalancer 10.7.246.234 35.197.220.103 80:32085/TCP 5m27s



esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234

terraform-nodejs-6bd565dc6c-mm7lh

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234

terraform-nodejs-6bd565dc6c-mm7lh

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234

terraform-nodejs-6bd565dc6c-hr5vg

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234

terraform-nodejs-6bd565dc6c-hr5vg

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234

terraform-nodejs-6bd565dc6c-8768b

esschtolts@gke-node-ks-node-ks-pool-07115c5b-bw15 ~ $ curl 10.7.246.234

terraform-nodejs-6bd565dc6c-mm7lh



essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103

terraform-nodejs-6bd565dc6c-mm7lh

essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103

terraform-nodejs-6bd565dc6c-mm7lh

essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103

terraform-nodejs-6bd565dc6c-8768b

essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103

terraform-nodejs-6bd565dc6c-hr5vg

essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103

terraform-nodejs-6bd565dc6c-8768b

essh@kubernetes-master:~/node-cluster$ curl 35.197.220.103

terraform-nodejs-6bd565dc6c-mm7lh

Now let's swap NGINX for a NodeJS application, first testing the image and the start command locally:

essh@kubernetes-master:~/node-cluster$ sudo ./terraform destroy

essh@kubernetes-master:~/node-cluster$ sudo ./terraform apply

essh@kubernetes-master:~/node-cluster$ sudo docker run -it --rm node:12 which node

/usr/local/bin/node

sudo docker run -it --rm -p 8222:80 node:12 /bin/bash -c 'cd /usr/src/ && git clone https://github.com/fhinkel/nodejs-hello-world.git &&

/usr/local/bin/node /usr/src/nodejs-hello-world/index.js'

firefox http://localhost:8222

The container block of the Deployment then becomes:

container {

image = "node:12"

name = "node-js"

command = ["/bin/bash"]

args = [

"-c",

"cd /usr/src/ && git clone https://github.com/fhinkel/nodejs-hello-world.git && /usr/local/bin/node /usr/src/nodejs-hello-world/index.js"

]

}

After renaming the Kubernetes module I got an error on apply — the resources recorded in the state no longer had a matching provider configuration — so the orphaned resources have to be dropped from the state by hand:

essh@kubernetes-master:~/node-cluster$ ./terraform apply



Error: Provider configuration not present



essh@kubernetes-master:~/node-cluster$ ./terraform state list

data.google_client_config.default

module.Kubernetes.google_container_cluster.node-ks

module.Kubernetes.google_container_node_pool.node-ks-pool

module.nodejs.kubernetes_deployment.nodejs

module.nodejs.kubernetes_service.nodejs



essh@kubernetes-master:~/node-cluster$ ./terraform state rm module.nodejs.kubernetes_deployment.nodejs

Removed module.nodejs.kubernetes_deployment.nodejs

Successfully removed 1 resource instance(s).

essh@kubernetes-master:~/node-cluster$ ./terraform state rm module.nodejs.kubernetes_service.nodejs

Removed module.nodejs.kubernetes_service.nodejs

Successfully removed 1 resource instance(s).



essh@kubernetes-master:~/node-cluster$ ./terraform apply

module.Kubernetes.google_container_cluster.node-ks: Refreshing state [id=node-ks]

module.Kubernetes.google_container_node_pool.node-ks-pool: Refreshing state [id=europe-west2-a/node-ks/node-ks-pool]



Apply complete! Resources: 0 added, 0 changed, 0 destroyed.



Controlling Terraform's execution order

A worked GKE continuous-deployment example can be found at https://codelabs.developers.google.com/codelabs/cloud-builder-gke-continuous-deploy/index.html#0. Keep execution order in mind: ./terraform destroy removes resources in the reverse order of creation, but only those recorded in the state. By default Terraform creates independent resources in parallel, and the Google API throttles requests (roughly 10 per minute), so heavy runs may need -parallelism=1. Terraform considers our Kubernetes resources (the Deployment and the service) independent of the node pool (node-pull), so it may try to create the PODs before the nodes exist, which fails; ./terraform apply -parallelism=1 only slows the request rate and does not fix the ordering. One workaround is to apply in two steps, targeting one piece at a time, for example ./terraform apply -target=module.nodejs.kubernetes_deployment.nodejs. A cleaner way is to create an implicit dependency by actually referencing var.endpoint in the dependent configuration, for example:

locals {

app = kubernetes_deployment.nodejs.metadata.0.labels.app

}

An explicit dependency can also be declared with depends_on = [var.endpoint] or depends_on = [kubernetes_deployment.nodejs].
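As a minimal sketch of the explicit form (the null_resource is hypothetical, added only to show the syntax):

```hcl
# Created strictly after the Deployment exists
resource "null_resource" "after_nodejs" {
  depends_on = [kubernetes_deployment.nodejs]
}
```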

Sometimes apply fails with Error: Get https://35.197.228.3/api/v1: dial tcp 35.197.228.3:443: connect: connection refused — this usually means the cached access token has expired (it lives 60 minutes, 3600 seconds) and the provider has to re-authenticate before the next run.

So far we have only overridden the command of stock images; a real application needs its own image with the code baked in. Let's write a minimal NodeJS server that reports its host, describe an image for it and build it:

essh@kubernetes-master:~/node-cluster$ cat app/server.js

const http = require('http');

const server = http.createServer(function(request, response) {

response.writeHead(200, {"Content-Type": "text/plain"});

response.end(`Nodejs_cluster is working! My host is ${process.env.HOSTNAME}`);

});



server.listen(80);



essh@kubernetes-master:~/node-cluster$ cat Dockerfile

FROM node:12

WORKDIR /usr/src/

ADD ./app /usr/src/



RUN npm install



EXPOSE 3000

ENTRYPOINT ["node", "server.js"]



essh@kubernetes-master:~/node-cluster$ sudo docker image build -t nodejs_cluster .

Sending build context to Docker daemon 257.4MB

Step 1/6 : FROM node:12

> b074182f4154

Step 2/6 : WORKDIR /usr/src/

> Using cache

> 06666b54afba

Step 3/6 : ADD ./app /usr/src/

> Using cache

> 13fa01953b4a

Step 4/6 : RUN npm install

> Using cache

> dd074632659c

Step 5/6 : EXPOSE 3000

> Using cache

> ba3b7745b8e3

Step 6/6 : ENTRYPOINT ["node", "server.js"]

> Using cache

> a957fa7a1efa

Successfully built a957fa7a1efa

Successfully tagged nodejs_cluster:latest



essh@kubernetes-master:~/node-cluster$ sudo docker images | grep nodejs_cluster

nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB



For the cluster to pull the image we need a registry. We will use GCP's Container Registry rather than Docker Hub: tag the image with the registry path, let gcloud configure Docker's credentials, and push:

essh@kubernetes-master:~/node-cluster$ PROJECT_ID="node-cluster-243923"

essh@kubernetes-master:~/node-cluster$ IMAGE_ID="nodejs_cluster"

essh@kubernetes-master:~/node-cluster$ sudo docker tag $IMAGE_ID:latest gcr.io/$PROJECT_ID/$IMAGE_ID:latest

essh@kubernetes-master:~/node-cluster$ sudo docker images | grep nodejs_cluster

nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB

gcr.io/node-cluster-243923/nodejs_cluster latest a957fa7a1efa 26 minutes ago 906MB



essh@kubernetes-master:~/node-cluster$ gcloud auth configure-docker

gcloud credential helpers already registered correctly.

essh@kubernetes-master:~/node-cluster$ docker push gcr.io/$PROJECT_ID/$IMAGE_ID:latest

The push refers to repository [gcr.io/node-cluster-243923/nodejs_cluster]

194f3d074f36: Pushed

b91e71cc9778: Pushed

640fdb25c9d7: Layer already exists

b0b300677afe: Layer already exists

5667af297e60: Layer already exists

84d0c4b192e8: Layer already exists

a637c551a0da: Layer already exists

2c8d31157b81: Layer already exists

7b76d801397d: Layer already exists

f32868cde90b: Layer already exists

0db06dff9d9a: Layer already exists

latest: digest: sha256:912938003a93c53b7c8f806cded3f9bffae7b5553b9350c75791ff7acd1dad0b size: 2629



essh@kubernetes-master:~/node-cluster$ gcloud container images list

NAME

gcr.io/node-cluster-243923/nodejs_cluster

Only listing images in gcr.io/node-cluster-243923. Use repository to list images in other repositories.

The images are also visible in the web console: GCP > Container Registry > Images. Now let's point the Deployment at our image. A POD does not re-pull an image whose tag has not changed, and we are using latest, so simply re-applying would not roll out a new build; Terraform, for its part, sees no change in the configuration and does nothing. To force recreation, a resource can be marked with ./terraform taint ${NAME_SERVICE}, after which it shows up as to-be-replaced in ./terraform plan; alternatively, destroy and recreate just that resource with ./terraform destroy -target=${NAME_SERVICE} followed by ./terraform apply, finding its address via ./terraform state list:

essh@kubernetes-master:~/node-cluster$ ./terraform state list

data.google_client_config.default

module.kubernetes.google_container_cluster.node-ks

module.kubernetes.google_container_node_pool.node-ks-pool

module.Nginx.kubernetes_deployment.nodejs

module.Nginx.kubernetes_service.nodejs



essh@kubernetes-master:~/node-cluster$ ./terraform destroy -target=module.nodejs.kubernetes_deployment.nodejs



essh@kubernetes-master:~/node-cluster$ ./terraform apply

The container block now references our image:

container {

image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"

name = "node-js"

}

And let's check the balancing (the service received a new external IP after the re-creation):

essh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lqg48essh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lqg48essh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lhgsnessh@kubernetes-master:~/node-cluster$ curl http://35.246.85.138:80

Nodejs_cluster is working! My host is terraform-nodejs-997fd5c9c-lqg48essh@kubernetes-master:~/node-cluster$

Now let's automate the image builds with Google Cloud Build (free within the free-tier limits), triggered from Cloud Source Repositories (also part of the Google Cloud Platform Free Tier). In the web console: Google Cloud Platform > Menu > Tools > Cloud Build > Triggers > enable the Cloud Build API > Get started > Create repository; the repository then appears under Google Cloud Platform > Menu > Tools > Source Repositories (Cloud Source Repositories):

essh@kubernetes-master:~/node-cluster$ cd app/

essh@kubernetes-master:~/node-cluster/app$ ls

server.js

essh@kubernetes-master:~/node-cluster/app$ mv ./server.js ../

essh@kubernetes-master:~/node-cluster/app$ gcloud source repos clone nodejs --project=node-cluster-243923

Cloning into '/home/essh/node-cluster/app/nodejs'

warning: You appear to have cloned an empty repository.

Project [node-cluster-243923] repository [nodejs] was cloned to [/home/essh/node-cluster/app/nodejs].

essh@kubernetes-master:~/node-cluster/app$ ls -a

. .. nodejs

essh@kubernetes-master:~/node-cluster/app$ ls nodejs/

essh@kubernetes-master:~/node-cluster/app$ ls -a nodejs/

. .. .git

essh@kubernetes-master:~/node-cluster/app$ cd nodejs/

essh@kubernetes-master:~/node-cluster/app/nodejs$ mv ../../server.js .

essh@kubernetes-master:~/node-cluster/app/nodejs$ git add server.js

essh@kubernetes-master:~/node-cluster/app/nodejs$ git commit -m 'test server'

[master (root-commit) 46dd957] test server

1 file changed, 7 insertions(+)

create mode 100644 server.js



essh@kubernetes-master:~/node-cluster/app/nodejs$ git push -u origin master

Counting objects: 3, done.

Delta compression using up to 8 threads.

Compressing objects: 100% (2/2), done.

Writing objects: 100% (3/3), 408 bytes | 408.00 KiB/s, done.

Total 3 (delta 0), reused 0 (delta 0)

To https://source.developers.google.com/p/node-cluster-243923/r/nodejs

* [new branch] master > master

Branch 'master' set up to track remote branch 'master' from 'origin'.

Next, create a build trigger: GCP > Cloud Build > Triggers > Create trigger > source: Cloud Source Repository > nodejs. I set it to fire on pushed tags and to build the repository's Dockerfile into the image gcr.io/node-cluster-243923/nodejs:$SHORT_SHA, with a 60-second timeout. Add the Dockerfile to the repository and push a tagged commit:
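The same build can also be described declaratively in the repository; a hypothetical cloudbuild.yaml equivalent of this UI-configured trigger might look as follows (the chapter itself sticks to the console configuration):

```yaml
# Hypothetical equivalent of the Dockerfile trigger set up in the console
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/nodejs:$SHORT_SHA', '.']
images:
  - 'gcr.io/$PROJECT_ID/nodejs:$SHORT_SHA'
timeout: '60s'
```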

essh@kubernetes-master:~/node-cluster/app/nodejs$ cp ../../Dockerfile .

essh@kubernetes-master:~/node-cluster/app/nodejs$ git add Dockerfile


essh@kubernetes-master:~/node-cluster/app/nodejs$ git commit -m 'add Dockerfile'

essh@kubernetes-master:~/node-cluster/app/nodejs$ git remote -v

origin https://source.developers.google.com/p/node-cluster-243923/r/nodejs (fetch)

origin https://source.developers.google.com/p/node-cluster-243923/r/nodejs (push)

essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin master

Counting objects: 3, done.

Delta compression using up to 8 threads.

Compressing objects: 100% (3/3), done.

Writing objects: 100% (3/3), 380 bytes | 380.00 KiB/s, done.

Total 3 (delta 0), reused 0 (delta 0)

To https://source.developers.google.com/p/node-cluster-243923/r/nodejs

46dd957..b86c01d master > master

essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag

essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag -a v0.0.1 -m 'test to run'

essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin v0.0.1

Counting objects: 1, done.

Writing objects: 100% (1/1), 161 bytes | 161.00 KiB/s, done.

Total 1 (delta 0), reused 0 (delta 0)

To https://source.developers.google.com/p/node-cluster-243923/r/nodejs

* [new tag] v0.0.1 > v0.0.1

After waiting for the build to finish, check that a new image repository has appeared in the Container Registry with our tags:

essh@kubernetes-master:~/node-cluster/app/nodejs$ gcloud container images list

NAME

gcr.io/node-cluster-243923/nodejs

gcr.io/node-cluster-243923/nodejs_cluster

Only listing images in gcr.io/node-cluster-243923. Use repository to list images in other repositories.

Let's verify the whole pipeline by making a change — adding the missing trailing newline to the server's response — and releasing it as a new tag:

essh@kubernetes-master:~/node-cluster/app/nodejs$ sed -i 's/HOSTNAME\}/HOSTNAME\}\n/' server.js

essh@kubernetes-master:~/node-cluster/app/nodejs$ git add server.js

essh@kubernetes-master:~/node-cluster/app/nodejs$ git commit -m 'fix'

[master 230d67e] fix

1 file changed, 2 insertions(+), 1 deletion(-)

essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin master

Counting objects: 3, done.

Delta compression using up to 8 threads.

Compressing objects: 100% (3/3), done.

Writing objects: 100% (3/3), 304 bytes | 304.00 KiB/s, done.

Total 3 (delta 1), reused 0 (delta 0)

remote: Resolving deltas: 100% (1/1)

To https://source.developers.google.com/p/node-cluster-243923/r/nodejs

b86c01d..230d67e master > master

essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag -a v0.0.2 -m 'fix'

essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin v0.0.2

Counting objects: 1, done.

Writing objects: 100% (1/1), 158 bytes | 158.00 KiB/s, done.

Total 1 (delta 0), reused 0 (delta 0)

To https://source.developers.google.com/p/node-cluster-243923/r/nodejs

* [new tag] v0.0.2 > v0.0.2

essh@kubernetes-master:~/node-cluster/app/nodejs$ sleep 60



essh@kubernetes-master:~/node-cluster/app/nodejs$ gcloud builds list

ID CREATE_TIME DURATION SOURCE IMAGES STATUS

2b024d7e-87a9-4d2a-980b-4e7c108c5fad 2019-06-22T17:13:14+00:00 28S nodejs@v0.0.2 gcr.io/node-cluster-243923/nodejs:v0.0.2 SUCCESS

6b4ae6ff-2f4a-481b-9f4e-219fafb5d572 2019-06-22T16:57:11+00:00 29S nodejs@v0.0.1 gcr.io/node-cluster-243923/nodejs:v0.0.1 SUCCESS

e50df082-31a4-463b-abb2-d0f72fbf62cb 2019-06-22T16:56:48+00:00 29S nodejs@v0.0.1 gcr.io/node-cluster-243923/nodejs:v0.0.1 SUCCESS

essh@kubernetes-master:~/node-cluster/app/nodejs$ git tag -a latest -m 'fix'

essh@kubernetes-master:~/node-cluster/app/nodejs$ git push origin latest

Counting objects: 1, done.

Writing objects: 100% (1/1), 156 bytes | 156.00 KiB/s, done.

Total 1 (delta 0), reused 0 (delta 0)

To https://source.developers.google.com/p/node-cluster-243923/r/nodejs

* [new tag] latest > latest

essh@kubernetes-master:~/node-cluster/app/nodejs$ cd ../..



Creating a production environment in Terraform

In real projects, development and production must be isolated, so that a mistake in dev cannot take production down; in GCP the natural boundary is a separate project with its own service account. Create a production project and, under GCP > IAM & Admin > Service accounts, a dedicated account for it, downloading its key. Our current configuration looks like this:

essh@kubernetes-master:~/node-cluster$ cat main.tf

provider "google" {

credentials = file("./kubernetes_key.json")

project = "node-cluster-243923"

region = "europe-west2"

}



module "kubernetes" {

source = "./Kubernetes"

}



data "google_client_config" "default" {}



module "Nginx" {

source = "./nodejs"

image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"

endpoint = module.kubernetes.endpoint

access_token = data.google_client_config.default.access_token

cluster_ca_certificate = module.kubernetes.cluster_ca_certificate

}



essh@kubernetes-master:~/node-cluster$ gcloud config list project

[core]

project = node-cluster-243923



Your active configuration is: [default]



essh@kubernetes-master:~/node-cluster$ gcloud config set project node-cluster-243923

Updated property [core/project].



essh@kubernetes-master:~/node-cluster$ gcloud compute instances list

NAME ZONE INTERNAL_IP EXTERNAL_IP STATUS

gke-node-ks-default-pool-2e5073d4-csmg europe-north1-a 10.166.0.2 35.228.96.97 RUNNING

gke-node-ks-node-ks-pool-ccbaf5c6-4xgc europe-north1-a 10.166.15.233 35.228.82.222 RUNNING

gke-node-ks-default-pool-72a6d4a3-ldzg europe-north1-b 10.166.15.231 35.228.143.7 RUNNING

gke-node-ks-node-ks-pool-9ee6a401-ngfn europe-north1-b 10.166.15.234 35.228.129.224 RUNNING

gke-node-ks-default-pool-d370036c-kbg6 europe-north1-c 10.166.15.232 35.228.117.98 RUNNING

gke-node-ks-node-ks-pool-d7b09e63-q8r2 europe-north1-c 10.166.15.235 35.228.85.157 RUNNING

Switching gcloud to the new project, we can see that it is still empty:

essh@kubernetes-master:~/node-cluster$ gcloud config set project node-cluster-prod-244519

Updated property [core/project].

essh@kubernetes-master:~/node-cluster$ gcloud config list project

[core]

project = node-cluster-prod-244519



Your active configuration is: [default]

essh@kubernetes-master:~/node-cluster$ gcloud compute instances list

Listed 0 items.

As you can see, nothing from node-cluster-243923 is visible — the projects are fully isolated. For Terraform to manage the new project, its service account must be granted the necessary roles under IAM & Admin (rights, SSH keys and the like can also be managed by Terraform itself). Keep in mind that broad rights make automation convenient but dangerous: Terraform is infrastructure-as-code, and an error in that code, or a leaked key, can destroy an entire environment. Since each environment keeps its own state (storing the state elsewhere, for example in a Google bucket, is a separate topic), each environment gets its own folder. Let's build the dev stand:
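As a side note, per-environment state could be kept out of the working folders entirely, for example in a Google Cloud Storage bucket via Terraform's gcs backend — a sketch with an invented bucket name:

```hcl
terraform {
  backend "gcs" {
    bucket = "node-cluster-tfstate" # hypothetical bucket, must exist beforehand
    prefix = "dev"                  # separate state per environment
  }
}
```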

essh@kubernetes-master:~/node-cluster$ ./terraform destroy

essh@kubernetes-master:~/node-cluster$ mkdir dev

essh@kubernetes-master:~/node-cluster$ cd dev/

essh@kubernetes-master:~/node-cluster/dev$ gcloud config set project node-cluster-243923

Updated property [core/project].

essh@kubernetes-master:~/node-cluster/dev$ gcloud config list project

[core]

project = node-cluster-243923



Your active configuration is: [default]

essh@kubernetes-master:~/node-cluster/dev$ cp ../kubernetes_key.json ../main.tf .

essh@kubernetes-master:~/node-cluster/dev$ cat main.tf

provider "google" {

alias = "dev"

credentials = file("./kubernetes_key.json")

project = "node-cluster-243923"

region = "europe-west2"

}



module "kubernetes_dev" {

source = "../Kubernetes"

node_pull = false

providers = {

google = google.dev

}

}



data "google_client_config" "default" {}



module "Nginx" {

source = "../nodejs"

providers = {

google = google.dev

}

image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"

endpoint = module.kubernetes_dev.endpoint

access_token = data.google_client_config.default.access_token

cluster_ca_certificate = module.kubernetes_dev.cluster_ca_certificate

}



essh@kubernetes-master:~/node-cluster/dev$ ../terraform init

essh@kubernetes-master:~/node-cluster/dev$ ../terraform apply

essh@kubernetes-master:~/node-cluster/dev$ gcloud compute instances list

NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS

gke-node-ks-default-pool-71afadb8-4t39 europe-north1-a n1-standard-1 10.166.0.60 35.228.96.97 RUNNING

gke-node-ks-node-ks-pool-134dada1-3cdf europe-north1-a n1-standard-1 10.166.0.61 35.228.117.98 RUNNING

gke-node-ks-node-ks-pool-134dada1-c476 europe-north1-a n1-standard-1 10.166.15.194 35.228.82.222 RUNNING



essh@kubernetes-master:~/node-cluster/dev$ gcloud container clusters get-credentials node-ks

Fetching cluster endpoint and auth data.

kubeconfig entry generated for node-ks.



essh@kubernetes-master:~/node-cluster/dev$ kubectl get pods -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE

terraform-nodejs-6fd8498cb5-29dzx 1/1 Running 0 2m57s 10.12.3.2 gke-node-ks-node-ks-pool-134dada1-c476 <none>

terraform-nodejs-6fd8498cb5-jcbj6 0/1 Pending 0 2m58s <none> <none> <none>

terraform-nodejs-6fd8498cb5-lvfjf 1/1 Running 0 2m58s 10.12.1.3 gke-node-ks-node-ks-pool-134dada1-3cdf <none>

As you can see, one POD did not fit: Kubernetes reserves part of each node's capacity for itself and will not overcommit, and the default pool is occupied by the system PODs. If we set remove_default_node_pool to true, the default pool is deleted and the system PODs move onto our application nodes, taking resources away from the application PODs — even fewer of them fit, and the rest stay Pending until capacity appears:
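In the cluster resource the flag looks like this — a minimal sketch (in the full configuration it would presumably be toggled through a module variable such as node_pull):

```hcl
resource "google_container_cluster" "node-ks" {
  name               = "node-ks"
  location           = "europe-north1-a"
  initial_node_count = 1
  # delete the bootstrap pool; system PODs then move to our own pool
  remove_default_node_pool = true
}
```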

essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud compute instances list

NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS

gke-node-ks-node-ks-pool-495b75fa-08q2 europe-north1-a n1-standard-1 10.166.0.57 35.228.117.98 RUNNING

gke-node-ks-node-ks-pool-495b75fa-wsf5 europe-north1-a n1-standard-1 10.166.0.59 35.228.96.97 RUNNING



essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud container clusters get-credentials node-ks

Fetching cluster endpoint and auth data.

kubeconfig entry generated for node-ks.



essh@kubernetes-master:~/node-cluster/Kubernetes$ kubectl get pods -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE

terraform-nodejs-6fd8498cb5-97svs 1/1 Running 0 14m 10.12.2.2 gke-node-ks-node-ks-pool-495b75fa-wsf5 <none>

terraform-nodejs-6fd8498cb5-d9zkr 0/1 Pending 0 14m <none> <none> <none>

terraform-nodejs-6fd8498cb5-phk8x 0/1 Pending 0 14m <none> <none> <none>

For production, let's create a separate project and work with it from the command line:

essh@kubernetes-master:~/node-cluster/dev$ gcloud auth login

essh@kubernetes-master:~/node-cluster/dev$ gcloud projects create node-cluster-prod3

Create in progress for [https://cloudresourcemanager.googleapis.com/v1/projects/node-cluster-prod3].

Waiting for [operations/cp.7153345484959140898] to finish...done.

(Running gcloud under a service account is covered at https://medium.com/@pnatraj/how-to-run-gcloud-command-line-using-a-service-account-f39043d515b9)

essh@kubernetes-master:~/node-cluster$ gcloud auth application-default login

essh@kubernetes-master:~/node-cluster$ cp ~/Downloads/node-cluster-prod-244519-6fd863dd4d38.json ./kubernetes_prod.json

essh@kubernetes-master:~/node-cluster$ echo "kubernetes_prod.json" >> .gitignore

essh@kubernetes-master:~/node-cluster$ gcloud iam service-accounts list

NAME EMAIL DISABLED

Compute Engine default service account 1008874319751-compute@developer.gserviceaccount.com False

terraform-prod terraform-prod@node-cluster-prod-244519.iam.gserviceaccount.com False



essh@kubernetes-master:~/node-cluster$ gcloud projects list | grep node-cluster

node-cluster-243923 node-cluster 26345118671

node-cluster-prod-244519 node-cluster-prod 1008874319751

Now the prod stand:

essh@kubernetes-master:~/node-cluster$ mkdir prod

essh@kubernetes-master:~/node-cluster$ cd prod/

essh@kubernetes-master:~/node-cluster/prod$ cp ../main.tf ../kubernetes_prod_key.json .

essh@kubernetes-master:~/node-cluster/prod$ gcloud config set project node-cluster-prod-244519

Updated property [core/project].




This concludes the free introductory fragment.

The full version of the book is available at https://www.litres.ru/evgeniy-sergeevich-shtolc/oblachnaya-ekosistema/ (payment by Visa, MasterCard, Maestro, PayPal, WebMoney, QIWI and other methods).


