Links: OpenGitOps
Installation: see the Get Started documentation.
On a fresh machine (e.g. Ubuntu in VMware on Ubuntu), don't forget to install Docker Engine and run the post-install steps (add your user to the docker group).
```
minikube start
# or, with addons enabled at startup:
minikube start --addons=dashboard --addons=metrics-server --addons="ingress" --addons="ingress-dns"
minikube status
```

```
minikube service helloworld
```

```
minikube addons list
minikube addons enable dashboard
minikube addons enable metrics-server
minikube addons enable ingress
```
Run it:
```
minikube dashboard
minikube stop
```
Start a pod:
```
$ kubectl run --image=nginx web   # imperative way
```
Declarative way, in a file web-declarative.yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-declarative
  annotations:
    site: blog
spec:
  containers:
  - name: web
    image: nginx:1.17.1
```
```
kubectl apply -f web-declarative.yaml
```
Imperative:
```
kubectl apply -f green.yaml
kubectl expose pod green --port 8080 --name blue-green
```
```
kubectl delete service blue-green
```
Declarative:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: blue-green
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: blue-green
```
```
$ kubectl apply -f blue-green.yaml
```
Port 80 is the port exposed by the service, and targetPort 8080 is the pod's internal port.
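To check the mapping end to end, one can launch a throwaway curl pod (a sketch, not from the original notes; it assumes a pod labelled app: blue-green is listening on 8080 and that the cluster can pull curlimages/curl):

```shell
# Hit the Service on its exposed port 80; the Service forwards
# the request to targetPort 8080 on a matching pod.
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s http://blue-green:80/
```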
```
kubectl get nodes
kubectl get all
```
```
kubectl create -f helloworld.yaml
kubectl expose deployment helloworld --type=NodePort
```
```
kubectl get deployment
kubectl get deployment/helloworld -o yaml
kubectl get service/helloworld -o yaml
```
```
kubectl describe pod/helloworld
kubectl get replicaset
```
Create a service and a deployment (from two different files):
```
kubectl create -f helloworld-deployment.yml
kubectl create -f helloworld-service.yml
```
At build time, in the metadata → labels section:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: helloworld
  labels:
    env: production
    author: anyauthor
    application_type: ui
    release-version: "1.0"
spec:
  containers:
  - name: helloworld
    image: anynamespace/anyimage:latest
```
```
kubectl get pods --show-labels
```
At runtime, to add a label, say app=helloworldapp, run:
```
kubectl label pod/helloworld app=helloworldapp --overwrite
```
To remove a label, append - to its name:
```
kubectl label pod/helloworld app-
```
Filter:
```
kubectl get pods --selector env=production
kubectl get pods --selector dev-lead=jim,env!=production
kubectl get pods -l 'release-version in (1.0,2.0)'
kubectl get pods -l 'release-version notin (1.0,2.0)' --show-labels
```
Use a label selector to delete:
```
kubectl delete pods -l dev-lead=jim
```
```yaml
readinessProbe:
  # Length of time to wait for a pod to initialize
  # after pod startup, before applying health checking
  initialDelaySeconds: 5
  # Amount of time to wait before timing out
  timeoutSeconds: 1
  # Probe for http
  httpGet:
    # Path to probe
    path: /
    # Port to probe
    port: 80
livenessProbe:
  # Length of time to wait for a pod to initialize
  # after pod startup, before applying health checking
  initialDelaySeconds: 5
  # Amount of time to wait before timing out
  timeoutSeconds: 1
  # Probe for http
  httpGet:
    # Path to probe
    path: /
    # Port to probe
    port: 80
```
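For context, these probe blocks sit under a container in the pod spec. A minimal sketch of a complete Pod embedding them (the pod name and nginx image are assumptions, not from the original notes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-probed        # assumed name
spec:
  containers:
  - name: web
    image: nginx:1.17.1   # assumed image
    ports:
    - containerPort: 80
    readinessProbe:
      initialDelaySeconds: 5
      timeoutSeconds: 1
      httpGet:
        path: /
        port: 80
    livenessProbe:
      initialDelaySeconds: 5
      timeoutSeconds: 1
      httpGet:
        path: /
        port: 80
```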
Guestbook
```yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e  # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis-slave
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: frontend
  labels:
    app: guestbook
spec:
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # comment or delete the following line if you want to use a LoadBalancer
  type: NodePort
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
```
Say we have a new version blue:
```
kubectl set image deployment/navbar-deployment helloworld=anynamespace/helloworld:blue
```
Rollback:
```
kubectl rollout undo deployment/navbar-deployment [--to-revision=revision]
```
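To find the revision number to pass to --to-revision, list the rollout history first (same deployment name as above):

```shell
# List recorded revisions of the deployment
kubectl rollout history deployment/navbar-deployment
# Watch the rollout (or rollback) complete
kubectl rollout status deployment/navbar-deployment
```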
```
kubectl create configmap logger --from-literal=log_level=debug
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logreader-dynamic
  labels:
    app: logreader-dynamic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logreader-dynamic
  template:
    metadata:
      labels:
        app: logreader-dynamic
    spec:
      containers:
      - name: logreader
        image: anynamespace/reader:latest
        env:
        - name: log_level
          valueFrom:
            configMapKeyRef:
              name: logger     # Read from a configmap called logger
              key: log_level   # Read the key called log_level
```
```
kubectl get configmap/logger -o yaml
```
```
kubectl create secret generic apikey --from-literal=api_key=1234567890
kubectl get secrets
kubectl get secret apikey -o yaml
```
```yaml
env:
- name: api_key
  valueFrom:
    secretKeyRef:
      name: apikey
      key: api_key
```
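This env fragment goes under a container spec, just like the configMapKeyRef example above. A minimal sketch of a Deployment consuming the secret (the deployment name, busybox image, and echo command are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apireader          # assumed name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apireader
  template:
    metadata:
      labels:
        app: apireader
    spec:
      containers:
      - name: apireader
        image: busybox     # assumed image
        command: ["sh", "-c", "echo $api_key; sleep 3600"]
        env:
        - name: api_key
          valueFrom:
            secretKeyRef:
              name: apikey # the secret created above
              key: api_key
```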
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: finalcountdown
spec:
  template:
    metadata:
      name: finalcountdown
    spec:
      containers:
      - name: counter
        image: busybox
        command:
        - /bin/sh
        - -c
        - "for i in 9 8 7 6 5 4 3 2 1; do echo $i; done"
      restartPolicy: Never # could also be Always or OnFailure
```
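The same Job can be run on a schedule as a CronJob. A sketch (the every-minute schedule and resource name are assumptions; clusters older than 1.21 need batch/v1beta1):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: finalcountdown-cron
spec:
  schedule: "*/1 * * * *"   # every minute (assumed)
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: counter
            image: busybox
            command:
            - /bin/sh
            - -c
            - "for i in 9 8 7 6 5 4 3 2 1; do echo $i; done"
          restartPolicy: Never
```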
```
kubectl get cronjobs
kubectl get jobs
```
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemonset
  namespace: default
  labels:
    k8s-app: example-daemonset
spec:
  selector:
    matchLabels:
      name: example-daemonset
  template:
    metadata:
      labels:
        name: example-daemonset
    spec:
      #nodeSelector: minikube # Specify if you want to run on specific nodes
      containers:
      - name: example-daemonset
        image: busybox
        args:
        - /bin/sh
        - -c
        - date; sleep 1000
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
      terminationGracePeriodSeconds: 30
```
```
kubectl get daemonsets
```
```
kubectl get namespaces
```
```
kubectl create namespace [namespacename]
kubectl delete namespace [namespacename]
```
To deploy a resource in a specific namespace, simply add -n [namespacename].
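For example, reusing the manifest from earlier in these notes (the "test" namespace name is an arbitrary assumption):

```shell
kubectl create namespace test
kubectl apply -f web-declarative.yaml -n test
kubectl get pods -n test
# Optionally make it the default namespace for the current context:
kubectl config set-context --current --namespace=test
```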
Types of users:
Authentication Modules:
```
--client-ca-file=FILENAME
--token-auth-file=FILE_WITH_TOKENS
```
Authorization Modules:
```
kubectl logs podname
```
```
microk8s kubectl describe secret -n kube-system microk8s-dashboard-token
```
Create alias:
```
alias kubectl='microk8s kubectl'
```
Set up Ingress on Minikube with the NGINX Ingress Controller
```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example.com
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: blue-green.example.com
    http:
      paths:
      - path: /blue
        backend:
          serviceName: blue
          servicePort: 80
      - path: /green
        backend:
          serviceName: green
          servicePort: 80
  - host: nginx.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
```
In the file /etc/systemd/system/minikube.service, or using systemctl edit --force --full minikube.service:
```ini
[Unit]
Description=minikube

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/minikube start
ExecStop=/usr/local/bin/minikube stop
User=username
Group=docker

[Install]
WantedBy=multi-user.target
```
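After saving the unit, reload systemd and enable it at boot (standard systemctl invocations):

```shell
sudo systemctl daemon-reload
sudo systemctl enable --now minikube.service
systemctl status minikube.service
```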
```
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl -n argocd get deployment
```
```
argocd version
```
An error appears because the service port is not exposed.
```
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "NodePort"}}'
kubectl -n argocd get service
```
```
NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
...
argocd-server   NodePort   10.152.183.68   <none>        80:31458/TCP,443:30452/TCP   15m
```
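With the service switched to NodePort, the UI is reachable on the node's IP at the mapped ports shown above. To get the URL and the initial admin password (standard Argo CD commands; the secret is generated at install time):

```shell
# URL(s) of the argocd-server service (minikube helper)
minikube service argocd-server -n argocd --url
# Retrieve the initial admin password
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d; echo
```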