Kubernetes 1.14.1 with MetalLB v0.7.3: Assigning IP Addresses to LoadBalancer Services, and Multi-Domain Path Forwarding with Nginx-Ingress-Controller 0.24


Table of Contents

1. Layer 2 Address Allocation with MetalLB 0.7.3
1.1. Cluster Addresses
1.2. Install MetalLB
1.3. Configure MetalLB
1.4. Deploy the echoserver Service to Test MetalLB IP Allocation
2. Installing Nginx-Ingress-Controller 0.24.1
2.1. Deploy the Ingress Controller
2.2. Create the Ingress Service
3. Verifying the Nginx-Ingress-Controller
3.1. Deploy the Whoami Service
3.2. Deploy the Nginx Service
3.3. Create Ingress Routing Rules
3.4. Check That the Rules Took Effect
4. Further Reading

  I initially tried to set up an Ingress controller on my Kubernetes cluster, and even after reading many articles and the official documentation I still had trouble getting Ingress to work.
  After several attempts, though, I managed to build an nginx-ingress-controller that forwards external HTTP and HTTPS traffic to the services inside my cluster. I am writing the process down in the hope that it helps you.
  

1. Layer 2 Address Allocation with MetalLB 0.7.3

  We will deploy MetalLB in the cluster and use layer 2 mode to assign IP addresses to load balancers. The examples below assume you already have a running Kubernetes cluster. The nice thing about layer 2 mode is that it needs no special network hardware at all; it works on any Ethernet network.
  Check the Kubernetes version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

  

1.1. Cluster Addresses

  We assume the cluster sits on the 192.168.11.0/24 network and that the main router hands out DHCP addresses in the range 192.168.11.100-192.168.11.150.
  We need to set aside a separate block of IP space for MetalLB services; we will use 192.168.11.240-192.168.11.250.
  If your cluster does not use the same addressing, substitute the appropriate address range throughout the rest of this tutorial.
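
  A quick sanity check before settling on a range (assuming you have shell access to a machine on the same LAN; the probed address below is just one example from the pool):

# List the node addresses; the MetalLB pool must not overlap with these or with the router's DHCP range
$ kubectl get nodes -o wide

# Optionally probe a candidate address; if nothing answers, it is likely free
$ ping -c 1 192.168.11.240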
  

1.2. Install MetalLB

  Once running, MetalLB consists of two components: the controller and the speaker.
  Next we install MetalLB from a manifest. Create and edit the file metallb.yaml with the following content:

apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
  labels:
    app: metallb
---

apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: metallb-system
  name: controller
  labels:
    app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: metallb-system
  name: speaker
  labels:
    app: metallb

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metallb-system:controller
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
  resources: ["services/status"]
  verbs: ["update"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metallb-system:speaker
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: metallb-system
  name: config-watcher
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create"]
---

## Role bindings
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metallb-system:controller
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: controller
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metallb-system:speaker
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: speaker
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:speaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: metallb-system
  name: config-watcher
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: controller
- kind: ServiceAccount
  name: speaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: config-watcher
---
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  namespace: metallb-system
  name: speaker
  labels:
    app: metallb
    component: speaker
spec:
  selector:
    matchLabels:
      app: metallb
      component: speaker
  template:
    metadata:
      labels:
        app: metallb
        component: speaker
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "7472"
    spec:
      serviceAccountName: speaker
      terminationGracePeriodSeconds: 0
      hostNetwork: true
      containers:
      - name: speaker
        image: metallb/speaker:v0.7.3
        imagePullPolicy: IfNotPresent
        args:
        - --port=7472
        - --config=config
        env:
        - name: METALLB_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        ports:
        - name: monitoring
          containerPort: 7472
        resources:
          limits:
            cpu: 100m
            memory: 100Mi

        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - all
            add:
            - net_raw

---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  namespace: metallb-system
  name: controller
  labels:
    app: metallb
    component: controller
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: metallb
      component: controller
  template:
    metadata:
      labels:
        app: metallb
        component: controller
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "7472"
    spec:
      serviceAccountName: controller
      terminationGracePeriodSeconds: 0
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534 # nobody
      containers:
      - name: controller
        image: metallb/controller:v0.7.3
        imagePullPolicy: IfNotPresent
        args:
        - --port=7472
        - --config=config
        ports:
        - name: monitoring
          containerPort: 7472
        resources:
          limits:
            cpu: 100m
            memory: 100Mi

        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all
          readOnlyRootFilesystem: true

---

  
  Start the installation with the command below. Once installed, MetalLB can read and write the Kubernetes objects it needs; the manifest also creates a number of resources, most of them related to access control.

$ kubectl apply -f metallb.yaml

The metallb.yaml manifest can also be fetched from: https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml

  
  Run kubectl get pods -n metallb-system and you should see the MetalLB pods running:

$ kubectl get pods -n metallb-system
NAME                         READY   STATUS    RESTARTS   AGE
controller-cd8657667-k6rrj   1/1     Running   0          47m
speaker-2g98v                1/1     Running   0          47m
speaker-t8hjh                1/1     Running   0          47m

  

1.3. Configure MetalLB

  Create the file layer2-config.yaml to configure MetalLB, with the following content:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: my-ip-space
      protocol: layer2
      addresses:
      - 192.168.11.240-192.168.11.250

If your cluster uses different IP addresses, change the address range in this configuration before applying it.

  
  MetalLB's configuration is a standard Kubernetes ConfigMap in the metallb-system namespace.
  It contains two pieces of information: which IP addresses MetalLB may hand out, and which protocol to use.
  In this configuration we tell MetalLB to use layer 2 mode (protocol: layer2) and to allocate addresses from the range 192.168.11.240-192.168.11.250.
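  As an aside (not needed for this tutorial), a MetalLB 0.7.x config can also define more than one pool, and a pool can be marked so its addresses are only handed out when explicitly requested. The second pool below is purely hypothetical:

    address-pools:
    - name: my-ip-space
      protocol: layer2
      addresses:
      - 192.168.11.240-192.168.11.250
    - name: reserved            # hypothetical extra pool
      protocol: layer2
      auto-assign: false        # addresses are only used when a Service explicitly asks for one
      addresses:
      - 192.168.11.251/32
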
  Next, apply the configuration:

$ kubectl apply -f layer2-config.yaml
configmap/config configured

  
  The configuration takes effect within a few seconds of being created. Run kubectl logs -l component=speaker -n metallb-system to see the log output:

$ kubectl logs -l component=speaker -n metallb-system
{"caller":"main.go:159","event":"startUpdate","msg":"start of service update","service":"kube-system/monitoring-influxdb","ts":"2019-04-23T02:21:00.941323921Z"}
{"caller":"main.go:163","event":"endUpdate","msg":"end of service update","service":"kube-system/monitoring-influxdb","ts":"2019-04-23T02:21:00.941380492Z"}
{"caller":"main.go:159","event":"startUpdate","msg":"start of service update","service":"default/kubernetes","ts":"2019-04-23T02:21:00.941427886Z"}
{"caller":"main.go:163","event":"endUpdate","msg":"end of service update","service":"default/kubernetes","ts":"2019-04-23T02:21:00.941489295Z"}
{"caller":"main.go:159","event":"startUpdate","msg":"start of service update","service":"kube-system/monitoring-grafana","ts":"2019-04-23T02:21:00.941553646Z"}
{"caller":"main.go:163","event":"endUpdate","msg":"end of service update","service":"kube-system/monitoring-grafana","ts":"2019-04-23T02:21:00.94161185Z"}
{"caller":"main.go:159","event":"startUpdate","msg":"start of service update","service":"kube-system/heapster","ts":"2019-04-23T02:21:00.941687545Z"}
{"caller":"main.go:163","event":"endUpdate","msg":"end of service update","service":"kube-system/heapster","ts":"2019-04-23T02:21:00.941745979Z"}
{"caller":"main.go:159","event":"startUpdate","msg":"start of service update","service":"kube-system/kube-dns","ts":"2019-04-23T02:21:00.9418122Z"}
{"caller":"main.go:163","event":"endUpdate","msg":"end of service update","service":"kube-system/kube-dns","ts":"2019-04-23T02:21:00.941875351Z"}
{"caller":"main.go:159","event":"startUpdate","msg":"start of service update","service":"kube-system/kubernetes-dashboard","ts":"2019-04-23T02:21:00.944627844Z"}
{"caller":"main.go:163","event":"endUpdate","msg":"end of service update","service":"kube-system/kubernetes-dashboard","ts":"2019-04-23T02:21:00.944687644Z"}
{"caller":"main.go:159","event":"startUpdate","msg":"start of service update","service":"kube-system/monitoring-influxdb","ts":"2019-04-23T02:21:00.944755841Z"}
{"caller":"main.go:163","event":"endUpdate","msg":"end of service update","service":"kube-system/monitoring-influxdb","ts":"2019-04-23T02:21:00.944841058Z"}
{"caller":"main.go:159","event":"startUpdate","msg":"start of service update","service":"kube-system/heapster","ts":"2019-04-23T02:21:00.944922977Z"}
{"caller":"main.go:163","event":"endUpdate","msg":"end of service update","service":"kube-system/heapster","ts":"2019-04-23T02:21:00.945037038Z"}
{"caller":"main.go:159","event":"startUpdate","msg":"start of service update","service":"kube-system/kube-dns","ts":"2019-04-23T02:21:00.945107242Z"}
{"caller":"main.go:163","event":"endUpdate","msg":"end of service update","service":"kube-system/kube-dns","ts":"2019-04-23T02:21:00.94518682Z"}
{"caller":"main.go:159","event":"startUpdate","msg":"start of service update","service":"default/kubernetes","ts":"2019-04-23T02:21:00.945257006Z"}
{"caller":"main.go:163","event":"endUpdate","msg":"end of service update","service":"default/kubernetes","ts":"2019-04-23T02:21:00.945331629Z"}

  The speaker has loaded the configuration but is not doing anything yet, because there are no LoadBalancer services in the cluster.
  

1.4. Deploy the echoserver Service to Test MetalLB IP Allocation

  Create and edit the configuration file echoserver.yaml with the following content:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoserver-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: echoserver
        group: mshk.top
    spec:
      containers:
      - image: "gcr.io/kubernetes-e2e-test-images/echoserver:2.1"
        imagePullPolicy: IfNotPresent
        name: echoserver-container
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echoserver-svc
spec:
  selector:
    app: echoserver
    group: mshk.top
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      name: http

  
  Run the following command to install and deploy the echoserver service:

$ kubectl apply -f echoserver.yaml
deployment.extensions/echoserver-deployment created
service/echoserver-svc created

  
  Check the service's deployment status:

$ kubectl get svc
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
echoserver-svc   ClusterIP   10.96.172.113   <none>        80/TCP    52s
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP   3d16h

  
  Change the echoserver service type to LoadBalancer and look at the services again; MetalLB has assigned 192.168.11.240, the first address of the range we configured earlier.

$ kubectl patch svc echoserver-svc -p '{"spec":{"type": "LoadBalancer"}}'
service/echoserver-svc patched
$ kubectl get svc
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
echoserver-svc   LoadBalancer   10.96.172.113   192.168.11.240   80:31123/TCP   3m52s
kubernetes       ClusterIP      10.96.0.1       <none>           443/TCP        3d16h

  
  Test the service with curl 192.168.11.240; you should see something like this:

$ curl 192.168.11.240


Hostname: echoserver-deployment-687b8499bd-lbg7d

Pod Information:
    -no pod information available-

Server values:
    server_version=nginx: 1.12.2 - lua: 10010

Request Information:
    client_address=10.244.0.0
    method=GET
    real path=/
    query=
    request_version=1.1
    request_scheme=http
    request_uri=http://192.168.11.240:8080/

Request Headers:
    accept=*/*
    host=192.168.11.240
    user-agent=curl/7.29.0

Request Body:
    -no body in request-
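
  As a side note, MetalLB also honors the standard spec.loadBalancerIP field, so a Service can request a specific address from the pool instead of taking the next free one (192.168.11.242 below is just an example from our range):

$ kubectl patch svc echoserver-svc -p '{"spec":{"type": "LoadBalancer", "loadBalancerIP": "192.168.11.242"}}'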

  

2. Installing Nginx-Ingress-Controller 0.24.1

  The Ingress Controller talks to the Kubernetes API and watches for changes to Ingress rules in the cluster. When a rule changes, it reads the rule (which maps domain names to Services), generates a piece of Nginx configuration, and writes it into the nginx-ingress-controller pod. That pod runs an Nginx server; the controller writes the generated configuration to /etc/nginx/nginx.conf and then reloads Nginx so it takes effect. This is how per-domain routing and dynamic updates are achieved.
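
  In other words, what the controller watches are ordinary Ingress resources. A minimal, purely illustrative rule mapping one host to one Service looks like this (the real rules for this tutorial are built in section 3.3):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress          # hypothetical, for illustration only
spec:
  rules:
  - host: example.mshk.top
    http:
      paths:
      - path: /
        backend:
          serviceName: example-svc
          servicePort: 80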

  The Ingress Controller's working architecture looks roughly like this (diagram omitted).
  

2.1. Deploy the Ingress Controller

  Create and edit the file ingress-nginx-controller.yaml with the following content:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

---

  
  Deploy the Nginx-Ingress-Controller with the following command:

$ kubectl apply -f ingress-nginx-controller.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created

The image is fairly large; you can pre-pull it with docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1 before deploying.

  

2.2. Create the Ingress Service

  Next, create the Service for ingress-nginx. Create and edit the file ingress-nginx-service.yaml with the following content:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
    group: com.mshk
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

  
  Deploy and start the ingress-nginx Service:

$ kubectl apply -f ingress-nginx-service.yaml
service/ingress-nginx created

  
  Check the Service status; MetalLB has assigned us another IP, 192.168.11.241:

$ kubectl get svc -n ingress-nginx
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
ingress-nginx   LoadBalancer   10.107.78.226   192.168.11.241   80:31566/TCP   4m27s

  
  Describe the ingress-nginx Service and you should see something like this:

$ kubectl describe svc ingress-nginx -n ingress-nginx
Name:                     ingress-nginx
Namespace:                ingress-nginx
Labels:                   app=ingress-nginx
                          group=com.mshk
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"ingress-nginx","group":"com.mshk"},"name":"ingress-nginx...
Selector:                 app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
Type:                     LoadBalancer
IP:                       10.105.102.158
LoadBalancer Ingress:     192.168.11.241
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30658/TCP
Endpoints:                10.244.1.24:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason       Age   From                Message
  ----    ------       ----  ----                -------
  Normal  IPAllocated  32s   metallb-controller  Assigned IP "192.168.11.241"
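
  Note that this Service only exposes port 80, even though the controller pod also listens on containerPort 443. If you want HTTPS traffic to reach the controller as well, you could extend the Service's ports list along these lines (a sketch only; TLS certificates still have to be configured separately, for example via the tls section of an Ingress):

  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443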

  

3. Verifying the Nginx-Ingress-Controller

3.1. Deploy the Whoami Service

  Create and edit the configuration file whoami.yaml with the following content:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: whoami-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: whoami
        group: mshk.top
    spec:
      containers:
      - image: idoall/whoami
        name: whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-svc
spec:
  selector:
    app: whoami
    group: mshk.top
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

  
  Run the following command to install and deploy the whoami service:

$ kubectl apply -f whoami.yaml
deployment.extensions/whoami-deployment created
service/whoami-svc created

  

3.2. Deploy the Nginx Service

  Create and edit the configuration file nginx.yaml with the following content:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
        group: mshk.top
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
    group: mshk.top
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

  
  Run the following command to install and deploy the nginx service:

$ kubectl apply -f nginx.yaml
deployment.extensions/nginx-deployment created
service/nginx-svc created

  
  Check all the deployments with the following command:

$ kubectl get deployments
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
echoserver-deployment   2/2     2            2           3h28m
nginx-deployment        2/2     2            2           23m
whoami-deployment       2/2     2            2           60m

  

3.3. Create Ingress Routing Rules

  We will set up the following routing rules:

  • rulesa.mshk.top goes directly to the whoami service
  • rulesb.mshk.top/whoami goes to the whoami service
  • rulesb.mshk.top/nginx goes to the nginx service

  
  Create the rules file ingress-nginx-rules.yaml with the following content:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-mshk-top
  annotations: 
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  namespace: default
spec:
  rules:
  - host: rulesa.mshk.top
    http:
      paths:
      - backend:
          serviceName: whoami-svc
          servicePort: 80
        path: /?(.*)
  - host: rulesb.mshk.top
    http:
      paths:
      - path: /whoami/?(.*)
        backend:
          serviceName: whoami-svc
          servicePort: 80
      - path: /nginx/?(.*)
        backend:
          serviceName: nginx-svc
          servicePort: 80

I hit a few pitfalls here: the configurations from some online articles simply did not forward traffic. It turns out the Ingress Nginx rewrite configuration changed starting with version 0.22.0; see the ingress-nginx rewrite documentation for more options.
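
For comparison, articles written for controllers older than 0.22.0 typically used a fixed rewrite target with plain path prefixes, roughly like the fragment below. With 0.22.0 and later (including the 0.24.1 used here) the rewrite target must reference a capture group, which is why the manifest above uses /$1 together with /whoami/?(.*)-style paths:

# Pre-0.22.0 style, shown only for contrast; do not mix this with the manifest above
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: rulesb.mshk.top
    http:
      paths:
      - path: /whoami
        backend:
          serviceName: whoami-svc
          servicePort: 80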

  
  Apply the routing rules:

$ kubectl apply -f ingress-nginx-rules.yaml
ingress.extensions/ingress-mshk-top created

  
  List the routing rules:

$ kubectl get ing
NAME               HOSTS                             ADDRESS   PORTS   AGE
ingress-mshk-top   rulesa.mshk.top,rulesb.mshk.top             80      38s

  
  Show the details of the rules:

$ kubectl describe ing ingress-mshk-top
Name:             ingress-mshk-top
Namespace:        default
Address:          192.168.11.241
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host             Path  Backends
  ----             ----  --------
  rulesa.mshk.top
                   /?(.*)   whoami-svc:80 (10.244.1.28:80,10.244.2.28:80)
  rulesb.mshk.top
                   /whoami/?(.*)   whoami-svc:80 (10.244.1.28:80,10.244.2.28:80)
                   /nginx/?(.*)    nginx-svc:80 (10.244.1.29:80,10.244.2.29:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/rewrite-target":"/$1"},"name":"ingress-mshk-top","namespace":"default"},"spec":{"rules":[{"host":"rulesa.mshk.top","http":{"paths":[{"backend":{"serviceName":"whoami-svc","servicePort":80},"path":"/?(.*)"}]}},{"host":"rulesb.mshk.top","http":{"paths":[{"backend":{"serviceName":"whoami-svc","servicePort":80},"path":"/whoami/?(.*)"},{"backend":{"serviceName":"nginx-svc","servicePort":80},"path":"/nginx/?(.*)"}]}}]}}

  nginx.ingress.kubernetes.io/rewrite-target:  /$1
Events:                                        <none>

  

3.4. Check That the Rules Took Effect

  Find the name of the nginx-ingress-controller pod and inspect the nginx configuration inside it; the one-liner below does both, and you should see output similar to this:

$ INGRESSNGINXNAME=`kubectl get pods --all-namespaces | grep nginx-ingress-controller | awk '{print $2}'`;kubectl -n ingress-nginx exec $INGRESSNGINXNAME -- cat /etc/nginx/nginx.conf
...
## start server rulesa.mshk.top
    server {
        server_name rulesa.mshk.top ;

        listen 80;

        set $proxy_upstream_name "-";
        set $pass_access_scheme $scheme;
        set $pass_server_port $server_port;
        set $best_http_host $http_host;
        set $pass_port $pass_server_port;

        location ~* "^/" {

            set $namespace      "default";
            set $ingress_name   "ingress-mshk-top";
            set $service_name   "whoami-svc";
            set $service_port   "80";
            set $location_path  "/";
...
    ## start server rulesb.mshk.top
    server {
        server_name rulesb.mshk.top ;

        listen 80;

        set $proxy_upstream_name "-";
        set $pass_access_scheme $scheme;
        set $pass_server_port $server_port;
        set $best_http_host $http_host;
        set $pass_port $pass_server_port;

        location ~* "^/whoami" {

            set $namespace      "default";
            set $ingress_name   "ingress-mshk-top";
            set $service_name   "whoami-svc";
            set $service_port   "80";
            set $location_path  "/whoami";
...
        location ~* "^/nginx" {

            set $namespace      "default";
            set $ingress_name   "ingress-mshk-top";
            set $service_name   "nginx-svc";
            set $service_port   "80";
            set $location_path  "/nginx";
...

  
  The nginx-ingress-controller logs can be viewed with the following command:

$ INGRESSNGINXNAME=`kubectl get pods --all-namespaces | grep nginx-ingress-controller | awk '{print $2}'`;kubectl logs $INGRESSNGINXNAME -n ingress-nginx

  
  In /etc/hosts, point rulesa.mshk.top and rulesb.mshk.top to 192.168.11.241:

...
192.168.11.241 rulesa.mshk.top
192.168.11.241 rulesb.mshk.top

  
  With the hosts entries in place, requests to the three rules return the expected responses (screenshot omitted).
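
  You can also verify the rules from the command line once the hosts entries are in place (the exact response bodies depend on the whoami and nginx images):

$ curl http://rulesa.mshk.top/
$ curl http://rulesb.mshk.top/whoami/
$ curl http://rulesb.mshk.top/nginx/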

4. Further Reading

  
ingress-nginx-examples

Introduction to the Kubernetes Ingress Controller and a highly available setup (in Chinese)

Setting up Nginx Ingress on Kubernetes

Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?

  

  I hope this article is helpful to you. Thank you for your support and for reading my blog.
  


Author: 迦壹
Original post: Kubernetes 1.14.1 with MetalLB v0.7.3: Assigning IP Addresses to LoadBalancer Services, and Multi-Domain Path Forwarding with Nginx-Ingress-Controller 0.24
Reprint notice: Reprinting is allowed, but you must clearly indicate the original source, author, and copyright notice with a hyperlink. Thank you!

