March 07, 2020

Host list

Ansible 18.163.102.197/172.31.34.153
k8s-01 18.163.35.70/172.31.43.3
k8s-02 18.162.148.167/172.31.37.84
k8s-03 18.163.103.104/172.31.37.22

Enabling root login and password authentication on Amazon EC2 hosts (both are disabled by default)

sudo sed -i 's/^\#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo systemctl restart sshd

Check the Ansible version on the local host

[root@ip-172-31-34-153 ~]# ansible --version
ansible 2.9.5
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /bin/ansible
  python version = 2.7.5 (default, Oct 30 2018, 23:45:53) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
[root@ip-172-31-34-153 ~]#

Disable strict host key checking on the local host

[root@ip-172-31-34-153 ~]# vi /etc/ssh/ssh_config
StrictHostKeyChecking no
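
If editing ssh_config is not desired, the same behavior can be requested per connection instead; a minimal sketch, using one of the hosts listed above:

ssh -o StrictHostKeyChecking=no root@18.163.35.70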

Generate a key pair and distribute the public key to the remote hosts

[root@ip-172-31-34-153 ~]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Gj5nl42xywRn0/s9hjBeACErGJWjQhfoDuEDT2yjYfE root@ip-172-31-34-153.ap-east-1.compute.internal
The key's randomart image is:
+---[RSA 2048]----+
| oooo... ..      |
|++*.oo  o.       |
|*B.E....  .      |
|o=..  .    o     |
|o o   . S = o    |
| .   . o + X o   |
|      + o B * .  |
|       + + o o + |
|          o   o o|
+----[SHA256]-----+
[root@ip-172-31-34-153 ~]# cat .ssh/id_rsa.pub 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC29DSROHgwWlucHoL/B+S/4Rd1KsVEbYLmM4p0+Ptx4NjGooEhrnNjIhpKmPNI5zvGtganSia2A7Vsp5Y+IVOgThRjzptQQzmbEloIqv6SsJRDyrUQIPV9dv3jv5pvbtAN0D5rh1AATPh0FNBtnkvm6HLowjueKdE6pBiq74NTPc5jfDuvwq2S5s4Ztnw9NsTuIlIiC7STCfuDo7NoxRVl+QumD12tW52CPd4ZjA4vg4v7xr/BF/rRxdFuG6+740s2kO1EZNaUOoi99qMLQiScOK+SLw+/tN66EmZC0uMeYlDiZZ1VsLb2MMd11CJDWSZ9SZbd1dHQbXywUbj0tRQF root@ip-172-31-34-153.ap-east-1.compute.internal
[root@ip-172-31-34-153 ~]#

Distribute the public key

[root@ip-172-31-34-153 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@18.163.35.70
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@18.163.35.70's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@18.163.35.70'"
and check to make sure that only the key(s) you wanted were added.

[root@ip-172-31-34-153 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@18.162.148.167
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@18.162.148.167's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@18.162.148.167'"
and check to make sure that only the key(s) you wanted were added.

[root@ip-172-31-34-153 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@18.163.103.104
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@18.163.103.104's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@18.163.103.104'"
and check to make sure that only the key(s) you wanted were added.

[root@ip-172-31-34-153 ~]#

Configure the Ansible inventory

[root@ip-172-31-34-153 ~]# mkdir kube-cluster
[root@ip-172-31-34-153 ~]# cd kube-cluster/
[root@ip-172-31-34-153 kube-cluster]# vi hosts
[masters]
master ansible_host=18.163.35.70 ansible_user=root

[workers]
worker1 ansible_host=18.162.148.167 ansible_user=root
worker2 ansible_host=18.163.103.104 ansible_user=root
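
Before running any playbook, connectivity to this inventory can be verified with Ansible's ping module; every host should answer with "pong" if the SSH keys and inventory are correct:

ansible -i ./hosts all -m ping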

Prepare the base-environment playbook (k8s-01/k8s-02/k8s-03)

[root@ip-172-31-34-153 kube-cluster]# vi kube-dependencies.yaml
- hosts: all
  become: yes
  tasks:
   - name: Install yum utils
     yum:
       name: yum-utils
       state: latest

   - name: Install device-mapper-persistent-data
     yum:
       name: device-mapper-persistent-data
       state: latest

   - name: Install lvm2
     yum:
       name: lvm2
       state: latest

   - name: Add Docker repo
     get_url:
       url: https://download.docker.com/linux/centos/docker-ce.repo
       dest: /etc/yum.repos.d/docker-ce.repo

   - name: install Docker
     yum:
       name: docker-ce
       state: latest
       update_cache: true

   - name: start Docker
     service:
       name: docker
       state: started
       enabled: yes

   - name: disable SELinux
     command: setenforce 0

   - name: disable SELinux on reboot
     selinux:
       state: disabled

   - name: ensure net.bridge.bridge-nf-call-ip6tables is set to 1
     sysctl:
      name: net.bridge.bridge-nf-call-ip6tables
      value: 1
      state: present

   - name: ensure net.bridge.bridge-nf-call-iptables is set to 1
     sysctl:
      name: net.bridge.bridge-nf-call-iptables
      value: 1
      state: present

   - name: add Kubernetes' YUM repository
     yum_repository:
      name: Kubernetes
      description: Kubernetes YUM repository
      baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
      gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
      gpgcheck: yes

   - name: install kubelet
     yum:
        name: kubelet-1.17.3
        state: present
        update_cache: true

   - name: install kubeadm
     yum:
        name: kubeadm-1.17.3
        state: present

   - name: start kubelet
     service:
       name: kubelet
       enabled: yes
       state: started

- hosts: master
  become: yes
  tasks:
   - name: install kubectl
     yum:
        name: kubectl-1.17.3
        state: present
        allow_downgrade: yes
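
Before the actual run, the playbook can be checked for YAML and syntax problems without touching any hosts; a minimal sketch:

ansible-playbook -i ./hosts kube-dependencies.yaml --syntax-check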

Run the playbook

[root@ip-172-31-34-153 kube-cluster]# ansible-playbook -i ./hosts kube-dependencies.yaml

PLAY [all] *****************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [worker1]
ok: [worker2]
ok: [master]

TASK [Install yum utils] ***************************************************************************************
changed: [worker1]
changed: [master]
changed: [worker2]

TASK [Install device-mapper-persistent-data] *******************************************************************
changed: [worker1]
changed: [worker2]
changed: [master]

TASK [Install lvm2] ********************************************************************************************
changed: [worker2]
changed: [worker1]
changed: [master]

TASK [Add Docker repo] *****************************************************************************************
changed: [worker2]
changed: [worker1]
changed: [master]

TASK [install Docker] ******************************************************************************************
changed: [worker1]
changed: [worker2]
changed: [master]

TASK [start Docker] ********************************************************************************************
changed: [worker1]
changed: [master]
changed: [worker2]

TASK [disable SELinux] *****************************************************************************************
changed: [worker2]
changed: [worker1]
changed: [master]

TASK [disable SELinux on reboot] *******************************************************************************
[WARNING]: SELinux state change will take effect next reboot
changed: [worker2]
changed: [worker1]
changed: [master]

TASK [ensure net.bridge.bridge-nf-call-ip6tables is set to 1] **************************************************
[WARNING]: The value 1 (type int) in a string field was converted to u'1' (type string). If this does not look
like what you expect, quote the entire value to ensure it does not change.
changed: [worker2]
changed: [worker1]
changed: [master]

TASK [ensure net.bridge.bridge-nf-call-iptables is set to 1] ***************************************************
changed: [worker1]
changed: [worker2]
changed: [master]

TASK [add Kubernetes' YUM repository] **************************************************************************
changed: [worker2]
changed: [worker1]
changed: [master]

TASK [install kubelet] *****************************************************************************************
changed: [worker1]
changed: [worker2]
changed: [master]

TASK [install kubeadm] *****************************************************************************************
changed: [worker1]
changed: [worker2]
changed: [master]

TASK [start kubelet] *******************************************************************************************
changed: [worker2]
changed: [worker1]
changed: [master]

PLAY [master] **************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [master]

TASK [install kubectl] *****************************************************************************************
ok: [master]

PLAY RECAP *****************************************************************************************************
master : ok=17 changed=14 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
worker1 : ok=15 changed=14 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
worker2 : ok=15 changed=14 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

[root@ip-172-31-34-153 kube-cluster]#

Prepare the master-node playbook (k8s-01)

[root@ip-172-31-34-153 kube-cluster]# vi master.yaml
- hosts: master
  become: yes
  tasks:
    - name: initialize the cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: create .kube directory
      become: yes
      become_user: centos
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/centos/.kube/config
        remote_src: yes
        owner: centos

    - name: install Pod network
      become: yes
      become_user: centos
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml >> pod_network_setup.txt
      args:
        chdir: $HOME
        creates: pod_network_setup.txt

Run the playbook

[root@ip-172-31-34-153 kube-cluster]# ansible-playbook -i ./hosts master.yaml

PLAY [master] **************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [master]

TASK [initialize the cluster] **********************************************************************************
ok: [master]

TASK [create .kube directory] **********************************************************************************
[WARNING]: Module remote_tmp /home/centos/.ansible/tmp did not exist and was created with a mode of 0700, this
may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct
permissions manually
changed: [master]

TASK [copy admin.conf to user's kube config] *******************************************************************
changed: [master]

TASK [install Pod network] *************************************************************************************
changed: [master]

PLAY RECAP *****************************************************************************************************
master : ok=5 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

[root@ip-172-31-34-153 kube-cluster]#

Verify the Kubernetes cluster and master-node status as the unprivileged user centos

[centos@k8s-01 ~]$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
k8s-01   Ready    master   155m   v1.17.3
[centos@k8s-01 ~]$

Prepare the worker-node playbook (k8s-02/k8s-03)

[root@ip-172-31-34-153 kube-cluster]# vi workers.yaml
- hosts: master
  become: yes
  gather_facts: false
  tasks:
    - name: get join command
      shell: kubeadm token create --print-join-command
      register: join_command_raw

    - name: set join command
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"


- hosts: workers
  become: yes
  tasks:
    - name: join cluster
      shell: "{{ hostvars['master'].join_command }} --ignore-preflight-errors all  >> node_joined.txt"
      args:
        chdir: $HOME
        creates: node_joined.txt

Run the playbook

[root@ip-172-31-34-153 kube-cluster]# ansible-playbook -i hosts workers.yaml

PLAY [master] **************************************************************************************************

TASK [get join command] ****************************************************************************************
changed: [master]

TASK [set join command] ****************************************************************************************
ok: [master]

PLAY [workers] *************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************
ok: [worker2]
ok: [worker1]

TASK [join cluster] ********************************************************************************************
changed: [worker2]
changed: [worker1]

PLAY RECAP *****************************************************************************************************
master : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
worker1 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 
worker2 : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

[root@ip-172-31-34-153 kube-cluster]#

Verify the cluster status

[centos@k8s-01 ~]$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
k8s-01   Ready    master   159m   v1.17.3
k8s-02   Ready    <none>   41s    v1.17.3
k8s-03   Ready    <none>   41s    v1.17.3
[centos@k8s-01 ~]$

Deploy a containerized application to test the cluster

[centos@k8s-01 ~]$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[centos@k8s-01 ~]$ 

[centos@k8s-01 ~]$ kubectl expose deploy nginx --port 80 --target-port 80 --type NodePort
service/nginx exposed
[centos@k8s-01 ~]$ 

[centos@k8s-01 ~]$ kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        179m
nginx        NodePort    10.109.120.31   <none>        80:30596/TCP   15s
[centos@k8s-01 ~]$

Access it from a browser
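
The same check can be done from the command line with curl against any node's public IP and the NodePort assigned above (30596 in this run); on EC2 the NodePort must also be allowed by the security group. A minimal sketch with a placeholder address:

curl http://<node-public-ip>:30596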

Delete the deployed containerized application

[centos@k8s-01 ~]$ kubectl delete service nginx
service "nginx" deleted
[centos@k8s-01 ~]$ kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3h4m
[centos@k8s-01 ~]$ 

[centos@k8s-01 ~]$ kubectl delete deployment nginx
deployment.apps "nginx" deleted
[centos@k8s-01 ~]$ kubectl get deployments
No resources found in default namespace.
[centos@k8s-01 ~]$

Details of the client command-line configuration file (PKI-based server and client certificate verification)

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01ETXdOekE1TVRNeU1Wb1hEVE13TURNd05UQTVNVE15TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTHl4Ckx2M25ESzZHaDgxN1pjWmpqUVV5em13RlVvdzZhZDV1T1Jabzg2Q0tsNW52RnF3VjRYL2E4OGx2S1grc2xqWDkKSDZGR2Y2bm1uM2JMTnlXWWEreThGcllUMHBQR2x3aG5qWE1WSkJlUW9SS2NiK2hySERPZlNGZ0xsZjQ0TWR1VwpPd3Vmb2VTYnJpL3hoZ0ExMXhqbStmVGJNV3ZkNkZVM0h6ZW9WeEtsdVJNcmJVL0YySHFVN0R1ZEV6dUNQUWFsCk1OOUxiblZJcUtwREp5VzhmODY1V29MUHJlWjhMZkZqMVQvMXl2ZEk1dkJwTFBKc0NZUndLdndSTEhZajAzTHMKRVA5QlpuRkhNRDYwV3RuZXc4bkdaRjJkWTdIRHZRa1V2M2hoemtVMXRLa3BncWhvM2tCUytoUUNwUEpLMzZLMgplOG9aT2NrTDJsYjJzTmpBck84Q0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJTnFSMXYwUGVPKy9TR05OcXN6S2MzNHk5RGkKVjA5dFZoemRRNEV6aGtoM0ZOS0RRMDZ0VTNPcUw2dzREano2SnlwSW9IaGVsTXVxVmJtY0V5WE9WNzYwZ0hPRQpJaWJ0ZlJhcVdPMVc2RXE0NklQbjEwZkFWNzRwNjhVcWdQdjkra0RSb2grYWhobFJFTGJJdTJNcjAzNHBjcWduClZSK01lWGZ6Q0VvalF3dzd0ZVJGNnpMTCtQa0duNHI4L24rNFNGUjJIK2lDVCtiVzNxZWdCYi9MWWwyTmNMTHMKVDEvcnROZnFTaEIyV2dYbXZKUkl2YXRIWWtUdUZCN1IwZ0pkQUJJWXdkSGlpbVN4TkdDK05WRzIzL3BDdmRKUApFcjFPd2xuWFBMSStiOHpXNDNEanVjd0pPdTY0alZEVmduNUpJUDZqNjRuYnN2eC9VSkIvOUZNK0lVST0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://172.31.43.3:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJY0trbWVQMXNaZnd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBek1EY3dPVEV6TWpGYUZ3MHlNVEF6TURjd09URXpNak5hTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXhTbFZmb1IyVTQ2UGdCbzAKR2kyL2NROFEzVldCcVpaNzRwM3cxZ1pDS2dzaUhya3RGWTdrTm52Y3hLNXVPRjZIN1YxS0JrYmRUNXZvVlZ2YQpFRlY3TU5RZUZ6RDEzWkFKK2dOVFN5RFUrY21qT2xnQW1xMktZeHdKbTNBNUdnNFRSbVpUN01mS3FxMVc4V2lxClZlWkY1cnViUkdpb3Z0WWR5L3BHUEs1b0dJaWtpd2w0QU9SMXFGRG80ejR3SmtyMEd5OUxSSzhNZ0RkeEhrSk0KQklrZ2QrbnFpODBGZUpLM2JzWTBjUG9LYk9QbEx4Vm9XQW5iUWEyNjVqYXBQbitNdEpKWkdRelFwYXhranE5RApvek1Pa3pnV0dQMFZKcC9CUXFINGI5NTFXaUFpNTMwbVlvVTVRUDJwaFR6amtUbG1PQlErd3hoZDNKaU9TdjUwCkVmdkdHUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFIdSt5MjRxa2F0Y21rZkJYRUtrUXg1SUdvNm9Ud0FIcnRqdQo5SUw1MTZ4cVZPMlUvblAwM3hqbHBVdkFSR1dSU3czRjZsanNkUTM5VS9DMGQ2SVNGT0t4K2VkMFE5MVptYW03CnNib0liaXJSeDdVa3ErdThoS3dRK1Zad1Z0akdLUWYwclB2STFkb2drcHJldkF2Myt3OUdld3p5Y0NqemxIbE0KU09pdFdYYkdpdzBoWmk3a25lYmdMQVEvdkVVSlFrNFNVK21oMTJIaVNZY0R2WlJOZkJOUzNONnpPMnZXUGFrcwpFMVIvZ1BBTmlMMllTSXpnQVAwSyszTzJGVzc1SndLa3dXUlNEM1NIZWQxbTZIYlVGcTlBUEdWOXB1eHJTZXJoCkF0T2QzbTdIUnRCS3Q1L29ZaUNva1NBRjZIR1hJcCtEYTFBMFZQRkU0YlVkQjl5MUlHWT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeFNsVmZvUjJVNDZQZ0JvMEdpMi9jUThRM1ZXQnFaWjc0cDN3MWdaQ0tnc2lIcmt0CkZZN2tObnZjeEs1dU9GNkg3VjFLQmtiZFQ1dm9WVnZhRUZWN01OUWVGekQxM1pBSitnTlRTeURVK2Ntak9sZ0EKbXEyS1l4d0ptM0E1R2c0VFJtWlQ3TWZLcXExVzhXaXFWZVpGNXJ1YlJHaW92dFlkeS9wR1BLNW9HSWlraXdsNApBT1IxcUZEbzR6NHdKa3IwR3k5TFJLOE1nRGR4SGtKTUJJa2dkK25xaTgwRmVKSzNic1kwY1BvS2JPUGxMeFZvCldBbmJRYTI2NWphcFBuK010SkpaR1F6UXBheGtqcTlEb3pNT2t6Z1dHUDBWSnAvQlFxSDRiOTUxV2lBaTUzMG0KWW9VNVFQMnBoVHpqa1RsbU9CUSt3eGhkM0ppT1N2NTBFZnZHR1FJREFRQUJBb0lCQVFDQ2s1bDNyU3JncyszKwpIVnljYWVmOGJNbnlqSXJQVWtiQ0UzQkpqdU9MRE15UUpIdmpaenRsaWlyd1o4Vy90M3Uyaks1VjhlRG90SXp1CjIySlVwd2hya2xCTGM3V2lBNTlYNFpQc2tkWDdpTHQrRElKNTdxMVVibUUrZk5pVWxQWFhEalpPL3hNT2JyYkMKTTF0OGdJR1RDblVPblhJRTBiSHlRZEw2cFZkenh3Ri9EeFNNTy9zOGxLOEh3K0RzT0xxU3FPbHoyOUpuYk9CeAp1aEMzK3VMalc4Rmpsblh6K25JQWRaWFZoRkp0dG43a1dkak1jZXkyTGZCc1NZbGZlWlhZaTRGTE8xbmNPWGpuCkYwLzNhU2g0UmtPeXZvZDZRSEVxTmFnS0ZPOUZqd29hQzRmWkxLQjBrTG16UlZYa1BiR2lDRXB1N1ozSEw0c3UKaFRaYTNUekJBb0dCQU8zMXlBWDVtYTR2U3FlK2V5eEx2L201WEhtb2QweDhXNUFkcU51ZzNuRjdUSE4zMXppbQpmYVBwTjd4R2lwcXNwMVlGQzBheC9FZDNJYW1RcWVSRlRtTHgrRmttb3NNSThBbUV2U0EvL0JTVWVhYTUzeWtwCkt1NXEzNFBWWW5OSXZpcWpTM1ZITERGckw5MlUzNnVBTk9uMTJwZUw3ek1kOXVOT0srNlV3L20xQW9HQkFOUWIKd0g0RWRUbVAwS2V5V0hmYlBheFhxSVJqV2xLeFhHTU5JcnRVZWNLQ0NadWFTNnE5TFYxWk5KZkMyamN3TFRKMApDMVB2RkNjWjAwRUFScGlkS2lYL0ZaQzloRHZ6TkpsUnRseGs0aGVZVUVoa0lQL1RtcVUvTWZhSEhBREhlbDNCCkNPL1BuUnU5Y3g0NmwxZjBOcm5XRVJoa2J5TTJ4Mzc1ek5xb0tJbFZBb0dBUzhxKy9QZzFOTCtuWGFwVC9SWGIKZmFUR2laRlkvaW1WMkY4NkMwby96NUZnRmw4VFU5M2pvck9EcHhvb3gzODZoVEZ5R0FCVXhFWnptRmlWWkRtVwo3L2oyQ3g4OU5EWENqcVdTdjVUaHE0Um5BdTJzNEtWV0lUNDFGdjUrTHczNlZBWlM0SFhjNDVpcVZEODR4cDA5ClBVK3JZaDJXQUlnSXZQbUhFS1NkandrQ2dZQm53dHU3eWZwK21qZjhrV1p0MjdhajVJM3ZsWnJOOFMyODF1UXkKdC9TSWpveWNyakp0NS9XVlFOcFZrMkNrdHRDbGFkZFF6QmdUdUxKN2plTDdMWWM4NXpocGdneDZOMU4zM1YxVQpmWldNN1ZuNHorTEV3NE5YYXo3SjF2Wi8reFdGWDdVN2UxamtCUjJYb0JvQlVOcWt0bS9PZXZOVFNxejFGTVorCkFOMHpzUUtCZ1FDaDROSlEvVjhhc3prOURnZ2F5bnZ1Z2JWWVg1R0lFNGRSRng3Z3dXek5BckI0V1pUODVHeDgKSzByN3BLdTJsYmh2OFE1UU9GdFFhS0JwcCtjb1g2a3cvbTJZdWdYeVdiREpScEY4ODJXbkQzYWhvbW10WTlXZgpOWmJkeGRXNk8xZ1dURTg1ODV3YW5uOWFZR3g5Q21xNDJ4Sk9SaURPakFZWWEyR3phTHI2SHc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
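
The embedded client certificate can be decoded and inspected to confirm its subject and expiry; a minimal sketch run on the master as the centos user (assumes GNU base64 and openssl are available):

grep 'client-certificate-data' ~/.kube/config | awk '{print $2}' | base64 -d | openssl x509 -noout -subject -enddate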

Basic commands and options of the kubectl command-line tool

[centos@k8s-01 ~]$ kubectl 
kubectl controls the Kubernetes cluster manager.

 Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/

Basic Commands (Beginner):
  create         Create a resource from a file or from stdin.
  expose         Take a replication controller, service, deployment or pod and expose it as a new
Kubernetes Service
  run            Run a particular image on the cluster
  set            Set specific features on objects

Basic Commands (Intermediate):
  explain        Documentation of resources
  get            Display one or many resources
  edit           Edit a resource on the server
  delete         Delete resources by filenames, stdin, resources and names, or by resources and
label selector

Deploy Commands:
  rollout        Manage the rollout of a resource
  scale          Set a new size for a Deployment, ReplicaSet or Replication Controller
  autoscale      Auto-scale a Deployment, ReplicaSet, or ReplicationController

Cluster Management Commands:
  certificate    Modify certificate resources.
  cluster-info   Display cluster info
  top            Display Resource (CPU/Memory/Storage) usage.
  cordon         Mark node as unschedulable
  uncordon       Mark node as schedulable
  drain          Drain node in preparation for maintenance
  taint          Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe       Show details of a specific resource or group of resources
  logs           Print the logs for a container in a pod
  attach         Attach to a running container
  exec           Execute a command in a container
  port-forward   Forward one or more local ports to a pod
  proxy          Run a proxy to the Kubernetes API server
  cp             Copy files and directories to and from containers.
  auth           Inspect authorization

Advanced Commands:
  diff           Diff live version against would-be applied version
  apply          Apply a configuration to a resource by filename or stdin
  patch          Update field(s) of a resource using strategic merge patch
  replace        Replace a resource by filename or stdin
  wait           Experimental: Wait for a specific condition on one or many resources.
  convert        Convert config files between different API versions
  kustomize      Build a kustomization target from a directory or a remote url.

Settings Commands:
  label          Update the labels on a resource
  annotate       Update the annotations on a resource
  completion     Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  api-resources  Print the supported API resources on the server
  api-versions   Print the supported API versions on the server, in the form of "group/version"
  config         Modify kubeconfig files
  plugin         Provides utilities for interacting with plugins.
  version        Print the client and server version information

Usage:
  kubectl [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
[centos@k8s-01 ~]$

Reference

https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-centos-7
March 05, 2020

Ports that must be open for cluster communication and their purposes

TCP 2377: cluster management communication
TCP/UDP 7946: communication among nodes in the cluster
UDP 4789: overlay network traffic
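
On CentOS hosts running firewalld, the ports above can be opened as follows; this is only a sketch, and on EC2 the security group must allow them as well:

firewall-cmd --permanent --add-port=2377/tcp
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload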

Create the Swarm cluster on the initial manager node

[root@server1 ~]# docker swarm init --advertise-addr 172.31.37.77
Swarm initialized: current node (51coubi9fdbxhggwk240jhd2o) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-03vu2g8soxmz0i75mbcp6k447zj5snxkibqnox4vgu7fbtu5mf-0qtsru4h7knuu9kf2ptwdpuns 172.31.37.77:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

[root@server1 ~]#

Check the current status of the cluster

[root@server1 ~]# docker info
 Swarm: active
  NodeID: 51coubi9fdbxhggwk240jhd2o
  Is Manager: true
  ClusterID: xtiv1qnkuegsvuog4yw7t6i61
  Managers: 1
  Nodes: 1
  Default Address Pool: 10.0.0.0/8  
  SubnetSize: 24
  Data Path Port: 4789
  Orchestration:
   Task History Retention Limit: 5
[root@server1 ~]#

List the current nodes in the cluster

[root@server1 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
51coubi9fdbxhggwk240jhd2o *   server1             Ready               Active              Leader              19.03.7
[root@server1 ~]#

Query the cluster join command (on the manager node)

[root@server1 ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-03vu2g8soxmz0i75mbcp6k447zj5snxkibqnox4vgu7fbtu5mf-0qtsru4h7knuu9kf2ptwdpuns 172.31.37.77:2377

[root@server1 ~]#

Join the remaining nodes to the cluster (worker nodes)

[root@server2 ~]# docker swarm join --token SWMTKN-1-03vu2g8soxmz0i75mbcp6k447zj5snxkibqnox4vgu7fbtu5mf-0qtsru4h7knuu9kf2ptwdpuns 172.31.37.77:2377
This node joined a swarm as a worker.
[root@server2 ~]#

[root@server3 ~]# docker swarm join --token SWMTKN-1-03vu2g8soxmz0i75mbcp6k447zj5snxkibqnox4vgu7fbtu5mf-0qtsru4h7knuu9kf2ptwdpuns 172.31.37.77:2377
This node joined a swarm as a worker.
[root@server3 ~]#

[root@server4 ~]# docker swarm join --token SWMTKN-1-03vu2g8soxmz0i75mbcp6k447zj5snxkibqnox4vgu7fbtu5mf-0qtsru4h7knuu9kf2ptwdpuns 172.31.37.77:2377
This node joined a swarm as a worker.
[root@server4 ~]#

[root@server5 ~]# docker swarm join --token SWMTKN-1-03vu2g8soxmz0i75mbcp6k447zj5snxkibqnox4vgu7fbtu5mf-0qtsru4h7knuu9kf2ptwdpuns 172.31.37.77:2377
This node joined a swarm as a worker.
[root@server5 ~]#

Check the status of the cluster nodes

[root@server1 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
51coubi9fdbxhggwk240jhd2o *   server1             Ready               Active              Leader              19.03.7
pd0jygj230x1sw1e0h3cee9zt     server2             Ready               Active                                  19.03.7
avjy5atb5t31f7qp1wg91jcgf     server3             Ready               Active                                  19.03.7
z0tear836razrlw1ce39nkiyi     server4             Ready               Active                                  19.03.7
2mda4v4c09o646ia8i1b9i1em     server5             Ready               Active                                  19.03.7
[root@server1 ~]#

Run a service

[root@server1 ~]# docker service create --replicas 1 --name helloworld alpine ping docker.com
1fxvvocwsi8my58ttcnhg089i
overall progress: 1 out of 1 tasks 
1/1: running [==================================================>] 
verify: Service converged 
[root@server1 ~]#

Explanation of the service parameters (a published-port variant is sketched after this list):

docker service create: create a service
--replicas 1: the number of running instances
--name helloworld: the service name
alpine ping docker.com: the image to use and the command to run
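
For a service that needs to be reachable from outside the cluster, a published port can be added. A hypothetical variant of the command above; nginx and port 8080 are just examples:

docker service create --replicas 3 --name web --publish 8080:80 nginx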

Check the service's running state

[root@server1 ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
1fxvvocwsi8m        helloworld          replicated          1/1                 alpine:latest       
[root@server1 ~]# 

Inspect the service details

[root@server1 ~]# docker service inspect --pretty helloworld

ID:             1fxvvocwsi8my58ttcnhg089i
Name:           helloworld
Service Mode:   Replicated
 Replicas:      1
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:         alpine:latest@sha256:ab00606a42621fb68f2ed6ad3c88be54397f981a7b70a79db3d1172b11c4367d
 Args:          ping docker.com 
 Init:          false
Resources:
Endpoint Mode:  vip

[root@server1 ~]#

See which nodes are running the service's containers

[root@server1 ~]# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
s32zfioxk1bg        helloworld.1        alpine:latest       server1             Running             Running 9 minutes ago                       
[root@server1 ~]#

View the container details on a node that is running the service

[root@server1 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
690df1c6470f        alpine:latest       "ping docker.com"   11 minutes ago      Up 11 minutes                           helloworld.1.s32zfioxk1bgzlbcqjz9c0a0l
[root@server1 ~]#

Scale the number of container instances maintained by the service

[root@server1 ~]# docker service scale helloworld=5
helloworld scaled to 5
overall progress: 5 out of 5 tasks 
1/5: running [==================================================>] 
2/5: running [==================================================>] 
3/5: running [==================================================>] 
4/5: running [==================================================>] 
5/5: running [==================================================>] 
verify: Service converged 
[root@server1 ~]#

View the container placement after scaling

[root@server1 ~]# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
s32zfioxk1bg        helloworld.1        alpine:latest       server1             Running             Running 15 minutes ago                       
gp211qp0cwf1        helloworld.2        alpine:latest       server4             Running             Running 52 seconds ago                       
j7542eiyux9x        helloworld.3        alpine:latest       server5             Running             Running 52 seconds ago                       
l8bl51j3ak3n        helloworld.4        alpine:latest       server2             Running             Running 52 seconds ago                       
jvvfztivyvqw        helloworld.5        alpine:latest       server3             Running             Running 52 seconds ago                       
[root@server1 ~]#

Remove the service (on the manager node)

[root@server1 ~]# docker service rm helloworld
helloworld
[root@server1 ~]# docker service inspect helloworld
[]
Status: Error: no such service: helloworld, Code: 1
[root@server1 ~]#
March 04, 2020

Set the hostname and disable SELinux on host k8s-1

[root@ip-172-31-37-25 ~]# hostnamectl set-hostname k8s-1
[root@ip-172-31-37-25 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
[root@ip-172-31-37-25 ~]# init 6

Install the Docker service on host k8s-1

[root@k8s-1 ~]# yum -y install yum-utils
[root@k8s-1 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@k8s-1 ~]#

Check the currently available docker-ce version

[root@k8s-1 ~]# yum info docker-ce
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: d36uatko69830t.cloudfront.net
 * extras: d36uatko69830t.cloudfront.net
 * updates: d36uatko69830t.cloudfront.net
docker-ce-stable                                                                   | 3.5 kB  00:00:00     
(1/2): docker-ce-stable/x86_64/primary_db                                          |  40 kB  00:00:00     
(2/2): docker-ce-stable/x86_64/updateinfo                                          |   55 B  00:00:00     
Available Packages
Name        : docker-ce
Arch        : x86_64
Epoch       : 3
Version     : 19.03.7
Release     : 3.el7
Size        : 25 M
Repo        : docker-ce-stable/x86_64
Summary     : The open-source application container engine
URL         : https://www.docker.com
License     : ASL 2.0
Description : Docker is a product for you to build, ship and run any application as a
            : lightweight container.
            : 
            : Docker containers are both hardware-agnostic and platform-agnostic. This means
            : they can run anywhere, from your laptop to the largest cloud compute instance and
            : everything in between - and they don't require you to use a particular
            : language, framework or packaging system. That makes them great building blocks
            : for deploying and scaling web apps, databases, and backend services without
            : depending on a particular stack or provider.

[root@k8s-1 ~]#

Install the Docker service

[root@k8s-1 ~]# yum -y install docker-ce

Modify the Docker daemon configuration file

[root@k8s-1 ~]# mkdir -p /etc/docker
[root@k8s-1 ~]# cat > /etc/docker/daemon.json <<EOF
> {
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2",
>   "storage-opts": [
>     "overlay2.override_kernel_check=true"
>   ]
> }
> EOF
[root@k8s-1 ~]#

Enable and start the service

[root@k8s-1 ~]# mkdir -p /etc/systemd/system/docker.service.d
[root@k8s-1 ~]# 
[root@k8s-1 ~]# systemctl daemon-reload
[root@k8s-1 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8s-1 ~]# systemctl start docker
[root@k8s-1 ~]#
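
Since daemon.json sets native.cgroupdriver=systemd, it is worth confirming that the running daemon picked it up; a quick check (the reported value should be systemd):

docker info | grep -i 'cgroup driver'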

Adjust kernel parameters

[root@k8s-1 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@k8s-1 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
[root@k8s-1 ~]#

Add the Kubernetes repository and install the packages

[root@k8s-1 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> exclude=kube*
> EOF
[root@k8s-1 ~]#

[root@k8s-1 ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

[root@k8s-1 ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-1 ~]#

----

Set the hostname and disable SELinux on host k8s-2

[root@ip-172-31-45-40 ~]# hostnamectl set-hostname k8s-2
[root@ip-172-31-45-40 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
[root@ip-172-31-45-40 ~]# init 6

Install the Docker service on host k8s-2

[root@k8s-2 ~]# yum -y install yum-utils
[root@k8s-2 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@k8s-2 ~]#

Install the Docker service

[root@k8s-2 ~]# yum -y install docker-ce

Modify the Docker daemon configuration file

[root@k8s-2 ~]# mkdir -p /etc/docker
[root@k8s-2 ~]# cat > /etc/docker/daemon.json <<EOF
> {
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2",
>   "storage-opts": [
>     "overlay2.override_kernel_check=true"
>   ]
> }
> EOF
[root@k8s-2 ~]#

Enable and start the service

[root@k8s-2 ~]# mkdir -p /etc/systemd/system/docker.service.d
[root@k8s-2 ~]# 
[root@k8s-2 ~]# systemctl daemon-reload
[root@k8s-2 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8s-2 ~]# systemctl start docker
[root@k8s-2 ~]#

Adjust kernel parameters

[root@k8s-2 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@k8s-2 ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
[root@k8s-2 ~]#

Add the Kubernetes repository and install the packages

[root@k8s-2 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> exclude=kube*
> EOF
[root@k8s-2 ~]# 

[root@k8s-2 ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

[root@k8s-2 ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-2 ~]#

----

Initialize the master node

[root@k8s-1 ~]# kubeadm init --apiserver-advertise-address=172.31.37.25 --pod-network-cidr=10.244.0.0/16
W0304 12:53:01.696504    4233 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0304 12:53:01.696701    4233 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "k8s-1" could not be reached
        [WARNING Hostname]: hostname "k8s-1": lookup k8s-1 on 172.31.0.2:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.37.25]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-1 localhost] and IPs [172.31.37.25 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-1 localhost] and IPs [172.31.37.25 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0304 12:53:27.750650    4233 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0304 12:53:27.751872    4233 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 15.504674 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: w13kyt.okovvot763i4tnbm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.31.37.25:6443 --token w13kyt.okovvot763i4tnbm \
    --discovery-token-ca-cert-hash sha256:16a89254e6f2df256954d3dccb24aadbc8ad3f40b3f806cf53d67b715a5284c8 
[root@k8s-1 ~]#

Configure the local kubectl command-line environment

[root@k8s-1 ~]# mkdir $HOME/.kube
[root@k8s-1 ~]# cp /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-1 ~]#
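
Alternatively, the root user can point kubectl at the admin kubeconfig through an environment variable instead of copying it; a minimal sketch:

export KUBECONFIG=/etc/kubernetes/admin.conf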

Download the Flannel configuration file

[root@k8s-1 ~]# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 14416  100 14416    0     0  32134      0 --:--:-- --:--:-- --:--:-- 32250
[root@k8s-1 ~]#

Deploy the Pod network (Flannel)

[root@k8s-1 ~]# kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@k8s-1 ~]#

Check the cluster component status

[root@k8s-1 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
[root@k8s-1 ~]# 

----

Join node k8s-2 to the cluster

[root@k8s-2 ~]# kubeadm join 172.31.37.25:6443 --token w13kyt.okovvot763i4tnbm \
>--discovery-token-ca-cert-hash sha256:16a89254e6f2df256954d3dccb24aadbc8ad3f40b3f806cf53d67b715a5284c8
W0304 13:00:11.341421    4312 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "k8s-2" could not be reached
        [WARNING Hostname]: hostname "k8s-2": lookup k8s-2 on 172.31.0.2:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-2 ~]#

Check the cluster node status from master node k8s-1

[root@k8s-1 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
k8s-1   Ready    master   7m58s   v1.17.3
k8s-2   Ready    <none>   86s     v1.17.3
[root@k8s-1 ~]# 
May 28, 2019

https://github.com/goharbor/harbor/releases

Support for OpenID Connect
OpenID Connect (OIDC) is an authentication layer on top of OAuth 2.0, allowing Harbor to verify the identity of users based on the authentication performed by an external authorization server or identity provider.

Robot accounts
Robot accounts can be configured to provide administrators with a token that can be granted appropriate permissions for pulling or pushing images. Harbor users can continue operating Harbor using their enterprise SSO credentials, and use robot accounts for CI/CD systems that perform Docker client commands.
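
In a CI/CD pipeline such a token would typically be used for a non-interactive docker login; a sketch, assuming a robot account named ci was created under a project (Harbor prefixes robot account names with "robot$") and with hub.example.com as a placeholder registry host:

docker login -u 'robot$ci' -p <robot-account-token> hub.example.com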

Replication advancements
The new replication feature allows you to replicate your Harbor repositories to and from non-Harbor registries. Harbor 1.8 expands on the Harbor-to-Harbor replication feature, adding the ability to replicate resources between Harbor and Docker Hub, Docker Registry, and Huawei Registry. This is enabled through both push and pull mode replication.

Additional Features
Harbor 1.8 brings numerous other capabilities for both administrators and end users:

Health check API, showing detailed status and health of all Harbor components.
Support for defining cron-based scheduled tasks in the Harbor UI. Administrators can now use cron strings to define the schedule of a job. Scan, garbage collection and replication jobs are all supported.
API explorer integration. End users can now explore and trigger Harbor’s API via the swagger UI nested inside Harbor’s UI.
Introduces a new "master" role for projects, with permissions greater than developer and less than project admin.
Introduces harbor.yml as the replacement for harbor.cfg and refactors the prepare script to provide more flexibility in the docker-compose-based installation process.
Enhancement of the Job Service engine to include webhook events, additional APIs for automation, and numerous bug fixes to improve the stability of the service.
Docker Registry upgraded to v2.7.1.

May 24, 2019

An enterprise-grade open-source registry service

https://github.com/goharbor/harbor

Features

Cloud native registry: With support for both container images and Helm charts, Harbor serves as registry for cloud native environments like container runtimes and orchestration platforms.
Role based access control: Users and repositories are organized via 'projects' and a user can have different permission for images under a project.
Policy based image replication: Images can be replicated (synchronized) between multiple registry instances based on policies with multiple filters (repository, tag and label). Harbor will auto-retry to replicate if it encounters any errors. Great for load balancing, high availability, multi-datacenter, hybrid and multi-cloud scenarios.
Vulnerability Scanning: Harbor scans images regularly and warns users of vulnerabilities.
LDAP/AD support: Harbor integrates with existing enterprise LDAP/AD for user authentication and management, and supports importing LDAP groups into Harbor and assigning proper project roles to them.
Image deletion & garbage collection: Images can be deleted and their space can be recycled.
Notary: Image authenticity can be ensured.
Graphical user portal: User can easily browse, search repositories and manage projects.
Auditing: All the operations to the repositories are tracked.
RESTful API: RESTful APIs for most administrative operations, easy to integrate with external systems.
Easy deployment: Provide both an online and offline installer.

Installation and configuration requirements

Software: on a Linux host, Docker 17.03.0-ce+ and docker-compose 1.18.0+.
Hardware: at least 2 CPU cores and 4 GB of RAM.
Open ports: TCP 80/443/4443.

Download the offline installer

https://storage.googleapis.com/harbor-releases/release-1.8.0/harbor-offline-installer-v1.8.0.tgz

[root@iZj6cehstgjoj3qav88fidZ ~]# curl -O https://storage.googleapis.com/harbor-releases/release-1.8.0/harbor-offline-installer-v1.8.0.tgz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 527M 100 527M 0 0 12.3M 0 0:00:42 0:00:42 --:--:-- 12.2M
[root@iZj6cehstgjoj3qav88fidZ ~]#

Install the latest version of Docker

Add the repository

[root@iZj6cehstgjoj3qav88fidZ ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

[root@iZj6cehstgjoj3qav88fidZ ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@iZj6cehstgjoj3qav88fidZ ~]#

[root@iZj6cehstgjoj3qav88fidZ ~]# yum install docker-ce docker-ce-cli containerd.io

Start the service

[root@iZj6cehstgjoj3qav88fidZ ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@iZj6cehstgjoj3qav88fidZ ~]# systemctl start docker
[root@iZj6cehstgjoj3qav88fidZ ~]#

Install the latest version of docker-compose

Download

[root@iZj6cehstgjoj3qav88fidZ ~]# sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 617 0 617 0 0 567 0 --:--:-- 0:00:01 --:--:-- 567
100 15.4M 100 15.4M 0 0 2972k 0 0:00:05 0:00:05 --:--:-- 4764k
[root@iZj6cehstgjoj3qav88fidZ ~]#

Set execute permission

[root@iZj6cehstgjoj3qav88fidZ ~]# chmod +x /usr/local/bin/docker-compose
[root@iZj6cehstgjoj3qav88fidZ ~]# ll /usr/local/bin/docker-compose
-rwxr-xr-x 1 root root 16154160 May 27 18:00 /usr/local/bin/docker-compose
[root@iZj6cehstgjoj3qav88fidZ ~]#
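
A quick check that the binary is executable and on the PATH:

docker-compose --version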

Install OpenSSL and create self-signed certificates

https://github.com/goharbor/harbor/blob/master/docs/configure_https.md
Install OpenSSL

[root@iZj6cehstgjoj3qav88fidZ ~]# yum -y install openssl

Generate the CA certificate and key

[root@iZj6cehstgjoj3qav88fidZ ~]# openssl genrsa -out ca.key 4096
Generating RSA private key, 4096 bit long modulus
..........................++
................++
e is 65537 (0x10001)
[root@iZj6cehstgjoj3qav88fidZ ~]#

[root@iZj6cehstgjoj3qav88fidZ ~]# openssl req -x509 -new -nodes -sha512 -days 3650 \
> -subj "/C=TW/ST=Taipei/L=Taipei/O=example/OU=Personal/CN=yourdomain.com" \
> -key ca.key \
> -out ca.crt
[root@iZj6cehstgjoj3qav88fidZ ~]#

Generate the server certificate and key
1) Generate the key

[root@iZj6cehstgjoj3qav88fidZ ~]# openssl genrsa -out hub.licensebox.cn.key 4096 
Generating RSA private key, 4096 bit long modulus
.......................................................................................................++
..................................................................................................++
e is 65537 (0x10001)
[root@iZj6cehstgjoj3qav88fidZ ~]#

2) Generate the CSR

[root@iZj6cehstgjoj3qav88fidZ ~]# openssl req -sha512 -new \
> -subj "/C=TW/ST=Taipei/L=Taipei/O=example/OU=Personal/CN=hub.licensebox.cn" \
> -key hub.licensebox.cn.key \
> -out hub.licensebox.cn.csr
[root@iZj6cehstgjoj3qav88fidZ ~]# ls
ca.crt ca.key harbor-offline-installer-v1.8.0.tgz hub.licensebox.cn.csr hub.licensebox.cn.key
[root@iZj6cehstgjoj3qav88fidZ ~]#

3) Generate the certificate

[root@iZj6cehstgjoj3qav88fidZ ~]# openssl x509 -req -sha512 -days 3650 \
> -CA ca.crt -CAkey ca.key -CAcreateserial \
> -in hub.licensebox.cn.csr \
> -out hub.licensebox.cn.crt
Signature ok
subject=/C=TW/ST=Taipei/L=Taipei/O=example/OU=Personal/CN=hub.licensebox.cn
Getting CA Private Key
[root@iZj6cehstgjoj3qav88fidZ ~]# ls
ca.crt ca.key ca.srl harbor-offline-installer-v1.8.0.tgz hub.licensebox.cn.crt hub.licensebox.cn.csr hub.licensebox.cn.key
[root@iZj6cehstgjoj3qav88fidZ ~]#

Convert the certificate format (crt -> cert)

[root@iZj6cehstgjoj3qav88fidZ ~]# openssl x509 -inform PEM -in hub.licensebox.cn.crt -out hub.licensebox.cn.cert
[root@iZj6cehstgjoj3qav88fidZ ~]# ls
ca.crt ca.key ca.srl harbor-offline-installer-v1.8.0.tgz hub.licensebox.cn.cert hub.licensebox.cn.crt hub.licensebox.cn.csr hub.licensebox.cn.key
[root@iZj6cehstgjoj3qav88fidZ ~]#

Deploy the certificate files to the local Docker configuration directory

[root@iZj6cehstgjoj3qav88fidZ ~]# mkdir -p /etc/docker/certs.d/hub.licensebox.cn/
[root@iZj6cehstgjoj3qav88fidZ ~]# cp ca.crt /etc/docker/certs.d/hub.licensebox.cn/
[root@iZj6cehstgjoj3qav88fidZ ~]# cp hub.licensebox.cn.cert /etc/docker/certs.d/hub.licensebox.cn/
[root@iZj6cehstgjoj3qav88fidZ ~]# cp hub.licensebox.cn.key /etc/docker/certs.d/hub.licensebox.cn/

Extract the installation package and edit the configuration file

[root@iZj6cehstgjoj3qav88fidZ ~]# tar xzf harbor-offline-installer-v1.8.0.tgz 
[root@iZj6cehstgjoj3qav88fidZ ~]# ls
ca.crt ca.key ca.srl harbor harbor-offline-installer-v1.8.0.tgz hub.licensebox.cn.cert hub.licensebox.cn.crt hub.licensebox.cn.csr hub.licensebox.cn.key
[root@iZj6cehstgjoj3qav88fidZ ~]# cd harbor
[root@iZj6cehstgjoj3qav88fidZ harbor]# ls
harbor.v1.8.0.tar.gz harbor.yml install.sh LICENSE prepare
[root@iZj6cehstgjoj3qav88fidZ harbor]#

Edit the configuration file

Change the hostname

hostname: reg.mydomain.com
hostname: hub.licensebox.cn

Enable port 443 and configure the certificates

# https related config
# https:
# # https port for harbor, default is 443
# port: 443
# # The path of cert and key files for nginx
# certificate: /your/certificate/path
# private_key: /your/private/key/path

# https related config
https:
# # https port for harbor, default is 443
port: 443
# # The path of cert and key files for nginx
certificate: /root/hub.licensebox.cn.crt
private_key: /root/hub.licensebox.cn.key

Change the initial admin password

harbor_admin_password: Harbor12345
harbor_admin_password: Harbor20190527

Initial database password (left unchanged here; it should be changed for production use)

# Harbor DB configuration
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123

Generate the configuration files

[root@iZj6cehstgjoj3qav88fidZ harbor]# ./prepare 
prepare base dir is set to /root/harbor
Unable to find image 'goharbor/prepare:v1.8.0' locally
v1.8.0: Pulling from goharbor/prepare
4e360eca2e60: Pull complete 
cd3c4c42d48b: Pull complete 
647e8efbf475: Pull complete 
79272af3d010: Pull complete 
1aec5eb71578: Pull complete 
1a29af1ecc2c: Pull complete 
e382cb5c1ecb: Pull complete 
Digest: sha256:c590164ae2c54e360642b1174c8ad90306b05ca0582f02f35889346c113e555d
Status: Downloaded newer image for goharbor/prepare:v1.8.0
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /secret/keys/secretkey
Generated certificate, key file: /secret/core/private_key.pem, cert file: /secret/registry/root.crt
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir
[root@iZj6cehstgjoj3qav88fidZ harbor]#

Run the installation script

[root@iZj6cehstgjoj3qav88fidZ harbor]# ./install.sh

[Step 0]: checking installation environment ...

Note: docker version: 18.09.6

Note: docker-compose version: 1.24.0

[Step 1]: loading Harbor images ...
1d4a1da12c02: Loading layer [==================================================>] 50.51MB/50.51MB
8eb1a006f3b0: Loading layer [==================================================>] 3.584kB/3.584kB
41b6f75847f4: Loading layer [==================================================>] 3.072kB/3.072kB
ec9bd6e4d4e8: Loading layer [==================================================>] 2.56kB/2.56kB
6d852bb664c2: Loading layer [==================================================>] 3.072kB/3.072kB
0e4ed2b5a5b8: Loading layer [==================================================>] 3.584kB/3.584kB
8dfb2b644f30: Loading layer [==================================================>] 12.29kB/12.29kB
Loaded image: goharbor/harbor-log:v1.8.0
d8c53538042b: Loading layer [==================================================>] 63.34MB/63.34MB
1b5fb7ee22e0: Loading layer [==================================================>] 47.96MB/47.96MB
a8bdca5e9d71: Loading layer [==================================================>] 6.656kB/6.656kB
f7cec940b52c: Loading layer [==================================================>] 2.048kB/2.048kB
301a4a2af7db: Loading layer [==================================================>] 7.68kB/7.68kB
e588e1e3a775: Loading layer [==================================================>] 2.56kB/2.56kB
539f28a5d0ea: Loading layer [==================================================>] 2.56kB/2.56kB
8b4a72241226: Loading layer [==================================================>] 2.56kB/2.56kB
Loaded image: goharbor/harbor-db:v1.8.0
c88db349fb2f: Loading layer [==================================================>] 8.972MB/8.972MB
1f2d4d72bba2: Loading layer [==================================================>] 35.77MB/35.77MB
dddbcf598df5: Loading layer [==================================================>] 2.048kB/2.048kB
0ced476c2d9c: Loading layer [==================================================>] 3.072kB/3.072kB
af24eb0bf40b: Loading layer [==================================================>] 35.77MB/35.77MB
Loaded image: goharbor/chartmuseum-photon:v0.8.1-v1.8.0
Loaded image: goharbor/prepare:v1.8.0
257ebcc1c9c4: Loading layer [==================================================>] 8.967MB/8.967MB
7579d3c94fca: Loading layer [==================================================>] 38.68MB/38.68MB
323611f7dd17: Loading layer [==================================================>] 38.68MB/38.68MB
Loaded image: goharbor/harbor-jobservice:v1.8.0
587a5757a7f6: Loading layer [==================================================>] 3.548MB/3.548MB
Loaded image: goharbor/nginx-photon:v1.8.0
a61ab2060e6e: Loading layer [==================================================>] 8.967MB/8.967MB
25359ae00f57: Loading layer [==================================================>] 5.143MB/5.143MB
610a1668f8bf: Loading layer [==================================================>] 15.13MB/15.13MB
db2252abd9e0: Loading layer [==================================================>] 26.47MB/26.47MB
4f406312560b: Loading layer [==================================================>] 22.02kB/22.02kB
1cee0947e5a7: Loading layer [==================================================>] 3.072kB/3.072kB
48db2b9b0752: Loading layer [==================================================>] 46.74MB/46.74MB
Loaded image: goharbor/notary-server-photon:v0.6.1-v1.8.0
aaf447150765: Loading layer [==================================================>] 113MB/113MB
6835441e1a1d: Loading layer [==================================================>] 10.94MB/10.94MB
9f4739e3a532: Loading layer [==================================================>] 2.048kB/2.048kB
928f489135f0: Loading layer [==================================================>] 48.13kB/48.13kB
1495a1a09ada: Loading layer [==================================================>] 3.072kB/3.072kB
1a5f5b141717: Loading layer [==================================================>] 10.99MB/10.99MB
Loaded image: goharbor/clair-photon:v2.0.8-v1.8.0
66006ea937c6: Loading layer [==================================================>] 337.8MB/337.8MB
d272ba122880: Loading layer [==================================================>] 106.5kB/106.5kB
Loaded image: goharbor/harbor-migrator:v1.8.0
05bc5efb1724: Loading layer [==================================================>] 8.967MB/8.967MB
af3a6f89469a: Loading layer [==================================================>] 46.85MB/46.85MB
452d238b3e48: Loading layer [==================================================>] 5.632kB/5.632kB
36e1cb2d6ffa: Loading layer [==================================================>] 27.14kB/27.14kB
5385ffb8451e: Loading layer [==================================================>] 46.85MB/46.85MB
Loaded image: goharbor/harbor-core:v1.8.0
268091c30a67: Loading layer [==================================================>] 71.66MB/71.66MB
4433bcd802e7: Loading layer [==================================================>] 3.072kB/3.072kB
420b26399278: Loading layer [==================================================>] 59.9kB/59.9kB
8864c4b9ac3d: Loading layer [==================================================>] 61.95kB/61.95kB
Loaded image: goharbor/redis-photon:v1.8.0
63645c97bf5d: Loading layer [==================================================>] 8.968MB/8.968MB
ccb295818ad9: Loading layer [==================================================>] 3.072kB/3.072kB
1ec2d1eefa8f: Loading layer [==================================================>] 2.56kB/2.56kB
b88acf0f9f5f: Loading layer [==================================================>] 20.1MB/20.1MB
0e7375de12e6: Loading layer [==================================================>] 20.1MB/20.1MB
Loaded image: goharbor/registry-photon:v2.7.1-patch-2819-v1.8.0
444b0c8bfeee: Loading layer [==================================================>] 3.548MB/3.548MB
ed0415346760: Loading layer [==================================================>] 6.568MB/6.568MB
572bd51089e0: Loading layer [==================================================>] 160.8kB/160.8kB
1410c2919a92: Loading layer [==================================================>] 215kB/215kB
8ecdca210598: Loading layer [==================================================>] 3.584kB/3.584kB
Loaded image: goharbor/harbor-portal:v1.8.0
7fb66591fb58: Loading layer [==================================================>] 8.968MB/8.968MB
42ec4a6394bf: Loading layer [==================================================>] 3.072kB/3.072kB
be6c2180cb57: Loading layer [==================================================>] 20.1MB/20.1MB
d956d9e974c5: Loading layer [==================================================>] 3.072kB/3.072kB
e2e0b4f17ad8: Loading layer [==================================================>] 7.465MB/7.465MB
7e29d670afe9: Loading layer [==================================================>] 27.56MB/27.56MB
Loaded image: goharbor/harbor-registryctl:v1.8.0
453732ea69d4: Loading layer [==================================================>] 13.72MB/13.72MB
c985f3824f33: Loading layer [==================================================>] 26.47MB/26.47MB
76eaa2763221: Loading layer [==================================================>] 22.02kB/22.02kB
0ef55a752948: Loading layer [==================================================>] 3.072kB/3.072kB
c5749b90723d: Loading layer [==================================================>] 45.33MB/45.33MB
Loaded image: goharbor/notary-signer-photon:v0.6.1-v1.8.0


[Step 2]: preparing environment ...
prepare base dir is set to /root/harbor
Clearing the configuration file: /config/log/logrotate.conf
Clearing the configuration file: /config/cert/server.key
Clearing the configuration file: /config/cert/server.crt
Clearing the configuration file: /config/db/env
Clearing the configuration file: /config/registryctl/env
Clearing the configuration file: /config/registryctl/config.yml
Clearing the configuration file: /config/registry/config.yml
Clearing the configuration file: /config/jobservice/env
Clearing the configuration file: /config/jobservice/config.yml
Clearing the configuration file: /config/nginx/nginx.conf
Clearing the configuration file: /config/core/app.conf
Clearing the configuration file: /config/core/env
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
loaded secret from file: /secret/keys/secretkey
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

[Step 3]: starting Harbor ...
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating registryctl ... done
Creating harbor-db ... done
Creating redis ... done
Creating registry ... done
Creating harbor-core ... done
Creating harbor-jobservice ... done
Creating harbor-portal ... done
Creating nginx ... done

✔ ----Harbor has been installed and started successfully.----

Now you should be able to visit the admin portal at https://hub.licensebox.cn. 
For more details, please visit https://github.com/goharbor/harbor .

[root@iZj6cehstgjoj3qav88fidZ harbor]#
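
A quick verification from the command line: list the Harbor containers from the harbor directory (where docker-compose.yml was generated) and try logging in with the admin account, assuming hub.licensebox.cn resolves to this host:

docker-compose ps
docker login hub.licensebox.cn -u admin -p Harbor20190527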

After the installer reports that Harbor has been installed and started, log in to the web portal (the browser will show an SSL certificate trust warning because the certificate is signed by our own CA)

Login page

Home page: a project (image repository) named library has been created by default

Local data storage path of the Harbor services