I. Overview
What is an Ingress controller in Kubernetes?
When you run applications inside a Kubernetes cluster, you need to give external users a way to reach those applications from outside the cluster. Kubernetes provides an object called Ingress that lets you define rules for accessing services in the cluster; it is the most effective way to expose multiple services running inside the cluster to the outside world through a stable IP address.
An Ingress controller is an application deployed inside the cluster that interprets the rules defined in Ingress resources. The Ingress controller translates those rules into configuration instructions for a load-balancing application integrated with the cluster. The load balancer can be a software application running inside the Kubernetes cluster or a hardware appliance running outside it.
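As a point of reference, a minimal Ingress object (independent of any particular controller, shown here with the current networking.k8s.io/v1 API; the hostname and service name are placeholders) looks roughly like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com        # requests for this hostname...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc   # ...are routed to this Service
                port:
                  number: 80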
What is the Citrix ADC ingress controller?
Citrix provides an implementation of the Kubernetes ingress controller that manages and routes traffic into a Kubernetes cluster using Citrix ADC (Citrix ADC CPX, VPX, or MPX).
With the Citrix ADC ingress controller (CIC) you can configure Citrix ADC CPX, VPX, or MPX according to the ingress rules and integrate your Citrix ADC with the Kubernetes environment.
II. Integration scripts
1. Deploy the RBAC manifest that grants the CIC service account permission to operate on cluster resources.
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vpx-ingress-k8s-role
rules:
  - apiGroups: [""]
    resources: ["endpoints", "ingresses", "pods", "secrets", "nodes", "routes", "namespaces", "configmaps"]
    verbs: ["get", "list", "watch"]
  # services/status is needed to update the loadbalancer IP in service status
  # for integrating services of type LoadBalancer with external-dns
  - apiGroups: [""]
    resources: ["services/status"]
    verbs: ["patch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create"]
  - apiGroups: ["extensions"]
    resources: ["ingresses", "ingresses/status"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses", "ingresses/status", "ingressclasses"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["citrix.com"]
    resources: ["rewritepolicies", "authpolicies", "ratelimits", "listeners", "httproutes", "continuousdeployments", "apigatewaypolicies", "wafs", "bots"]
    verbs: ["get", "list", "watch", "create", "delete", "patch"]
  - apiGroups: ["citrix.com"]
    resources: ["rewritepolicies/status", "continuousdeployments/status", "authpolicies/status", "ratelimits/status", "listeners/status", "httproutes/status", "wafs/status", "apigatewaypolicies/status", "bots/status"]
    verbs: ["get", "list", "patch"]
  - apiGroups: ["citrix.com"]
    resources: ["vips"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: ["route.openshift.io"]
    resources: ["routes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["crd.projectcalico.org"]
    resources: ["ipamblocks"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vpx-ingress-k8s-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: vpx-ingress-k8s-role
subjects:
  - kind: ServiceAccount
    name: vpx-ingress-k8s-role
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vpx-ingress-k8s-role
  namespace: kube-system
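Assuming the manifest above is saved as rbac.yaml (the file name is arbitrary), it can be applied and the created objects checked with:

kubectl apply -f rbac.yaml
kubectl get clusterrole vpx-ingress-k8s-role
kubectl get clusterrolebinding vpx-ingress-k8s-role
kubectl get serviceaccount vpx-ingress-k8s-role -n kube-system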
2. Deploy the CIC YAML to integrate with and access the external ADC appliance. Based on the business requirements, two admin partitions, k8s1 and k8s2, are carved out on the ADC, and each partition is associated with one of the Kubernetes clusters.
Note ⚠:
A. The "NS_ENABLE_MONITORING" parameter must additionally be set to NO; it defaults to YES. With the default value the CIC addresses the default partition of the appliance, and only after setting it to NO can the CIC be associated with an admin partition.
B. Besides associating the relevant user when the partition is allocated (the user's permission level must be partition admin), you also have to switch into the corresponding partition, e.g. k8s1, and configure the account and permissions the CIC uses to log in there; this can be the same account that was set when the partition resources were allocated. (Otherwise describing the CIC pod keeps reporting errors such as "error user and password".)
C. Although the login address in the YAML is labelled NSIP, the CIC can log in to the partition through a SNIP address as well. If the username and password are passed in as parameters, remember to create the corresponding username and password in the Kubernetes cluster, for example: kubectl create secret generic nsvpxlogin --from-literal=username='k8s1' --from-literal=password='k8s1' (see the sketch after these notes).
D. If you need the ADC to address endpoint IPs (pod IPs) directly, you must also enable the corresponding CIC feature "POD_IPS_FOR_SERVICEGROUP_MEMBERS" by setting it to TRUE. The ADC then binds the real pod IPs, so the members of the service group on the ADC are the actual pod IPs. By default Citrix ADC monitors the NodePort IPs, following the native Kubernetes design.
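Expanding on note C, the login secret for the k8s1 partition user could be created and checked as follows (the credentials shown are placeholders for the lab only):

kubectl create secret generic nsvpxlogin \
  --from-literal=username='k8s1' \
  --from-literal=password='k8s1'
kubectl get secret nsvpxlogin -o yaml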
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cic-vpx
spec:
  selector:
    matchLabels:
      app: cic-vpx
  replicas: 1
  template:
    metadata:
      name: cic-vpx
      labels:
        app: cic-vpx
    spec:
      serviceAccountName: vpx-ingress-k8s-role
      containers:
        - name: cic-vpx
          image: "quay.io/citrix/citrix-k8s-ingress-controller:1.17.13"
          env:
            # Set NetScaler NSIP/SNIP; use the SNIP in case of HA (management access has to be enabled)
            - name: "NS_IP"
              value: "10.105.158.148"
            - name: "NS_PROTOCOL"
              value: "HTTP"
            - name: "NS_PORT"
              value: "80"
            # Required when the CIC works against a NetScaler admin partition (see note A)
            - name: "NS_ENABLE_MONITORING"
              value: "no"
            # Expose pod IPs as service group members (see note D)
            - name: "POD_IPS_FOR_SERVICEGROUP_MEMBERS"
              value: "true"
            # Set username for Nitro
            - name: "NS_USER"
              valueFrom:
                secretKeyRef:
                  name: nsvpxlogin
                  key: username
            # Set user password for Nitro
            - name: "NS_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: nsvpxlogin
                  key: password
            # Accept the Citrix end user license agreement
            - name: "EULA"
              value: "yes"
          args:
            - --ingress-classes
              vpx
            - --feature-node-watch
              true
            - --ipam
              citrix-ipam-controller
          imagePullPolicy: Always
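After applying the deployment (file name assumed to be cic-vpx.yaml), a quick way to confirm that the CIC pod is running and able to log in to the partition is to check its status and logs:

kubectl apply -f cic-vpx.yaml
kubectl get pods -l app=cic-vpx
kubectl logs deployment/cic-vpx --tail=50

Login failures such as the "error user and password" message mentioned in note B show up in these logs.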
3. Deploy the service YAML to provide L4 load balancing for the application. In my test environment the VS (virtual server) IP is specified directly through an annotation; alternatively the IPAM controller can be deployed to allocate VS IPs automatically. Choose whichever fits your requirements.
Other L4 features can be enabled through annotations; for details see https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/configure/annotations/#service-annotations
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  annotations:
    service.citrix.com/frontend-ip: '10.1.9.1'
    service.citrix.com/ipam-range: "yellow"
    service.citrix.com/service-type-0: 'tcp'
    service.citrix.com/lbmethod-0: '{"lbmethod":"LEASTCONNECTION", "persistenceType":"SOURCEIP"}'
spec:
  type: LoadBalancer
  ports:
    - name: web-svc
      port: 80
      targetPort: 80
  selector:
    app: web
---
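Once the service has been applied, the CIC patches the service status (this is what the services/status rule in the RBAC manifest is for), so the VIP taken from the frontend-ip annotation, or allocated by IPAM, should appear in the EXTERNAL-IP column:

kubectl get svc nginx-svc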
IPAM-CRD yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: vips.citrix.com
spec:
  group: citrix.com
  version: v1
  names:
    kind: vip
    plural: vips
    singular: vip
  scope: Namespaced
  subresources:
    status: {}
  additionalPrinterColumns:
    - name: Status
      type: string
      description: "Current Status of the CRD"
      JSONPath: .status.state
    - name: Message
      type: string
      description: "Status Message"
      JSONPath: .status.status_message
    - name: VIP
      type: string
      JSONPath: .spec.ipaddress
    - name: Age
      type: date
      JSONPath: .metadata.creationTimestamp
  validation:
    openAPIV3Schema:
      properties:
        spec:
          properties:
            ipaddress:
              type: string
            name:
              type: string
            kind:
              type: string
              enum: ["service", "ingress", "endpoint"]
            description:
              type: string
IPAM yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: citrix-ipam-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: citrix-ipam-controller
rules:
  - apiGroups:
      - citrix.com
    resources:
      - vips
    verbs:
      - '*'
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - '*'
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: citrix-ipam-controller
subjects:
  - kind: ServiceAccount
    name: citrix-ipam-controller
    namespace: kube-system
roleRef:
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
  name: citrix-ipam-controller
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: citrix-ipam-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: citrix-ipam-controller
  replicas: 1
  template:
    metadata:
      labels:
        app: citrix-ipam-controller
    spec:
      serviceAccountName: citrix-ipam-controller
      containers:
        - name: citrix-ipam-controller
          image: quay.io/citrix/citrix-ipam-controller:latest
          env:
            # The IPAM controller takes the environment variable VIP_RANGE.
            # IPs in these ranges are used to assign VIP values.
            - name: "VIP_RANGE"
              value: '[{"yellow": ["10.1.9.1-10.1.9.7"]}, {"green": ["10.1.8.1-10.1.8.7"]}]'
            # The IPAM controller can also be restricted to specific namespaces through the
            # environment variable VIP_NAMESPACES, which expects a set of namespaces
            # passed as a space-separated string.
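When IPAM is used instead of a fixed frontend-ip, the allocated addresses are recorded as vip custom resources (defined by the CRD above) and can be inspected with the following commands (<vip-name> is a placeholder):

kubectl get vips.citrix.com --all-namespaces
kubectl describe vips.citrix.com <vip-name>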
4. Deploy the ingress YAML to provide L7 application load balancing. On Citrix ADC this is implemented through the content switching feature: traffic first hits the CS virtual server and, once a rule matches, is forwarded to the corresponding LB virtual server.
Other L7 features can be enabled through annotations; for details see https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/configure/annotations/#ingress-annotations
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-vpx
  annotations:
    kubernetes.io/ingress.class: "vpx"
    ingress.citrix.com/frontend-ip: "10.1.19.2"
    ingress.citrix.com/insecure-termination: "allow"
    ingress.citrix.com/insecure-port: "80"
    ingress.citrix.com/lbvserver: '{"web-svc":{"lbmethod":"ROUNDROBIN"}}'
spec:
  rules:
    - host: test.citrix.com
      http:
        paths:
          - path:
            backend:
              serviceName: web-svc
              servicePort: 80
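Because content switching matches on the host name, the rule above can be exercised from any client that can reach the frontend IP by supplying the Host header explicitly (IP and hostname taken from the manifest):

curl -v -H "Host: test.citrix.com" http://10.1.19.2/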
5. Deploy the TLS certificate YAML to provide HTTPS application load balancing, or create the secret in the Kubernetes cluster with the command kubectl create secret tls NAME --cert=path/to/cert/file --key=path/to/key/file [--dry-run=server|client|none]. The TLS certificate then has to be referenced from the ingress.
apiVersion: v1
kind: Secret
metadata:
  name: citrix-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
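For a lab setup, a self-signed certificate can be generated with openssl and turned into the secret in one step instead of pasting base64 strings into the YAML (file names and the CN are only examples):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=test.citrix.com"
kubectl create secret tls citrix-tls --cert=tls.crt --key=tls.key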
The ingress then references the secret:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-vpx
  annotations:
    kubernetes.io/ingress.class: "vpx"
    ingress.citrix.com/frontend-ip: "10.21.219.2"
    ingress.citrix.com/secure-port: "443"
    ingress.citrix.com/lbvserver: '{"web-svc":{"lbmethod":"ROUNDROBIN"}}'
spec:
  tls:
    - secretName: citrix-tls
  rules:
    - host: test.citrix.com
      http:
        paths:
          - path:
            backend:
              serviceName: web-svc
              servicePort: 80
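The HTTPS virtual server can then be tested from a client by resolving the test hostname to the frontend IP, for example:

curl -vk --resolve test.citrix.com:443:10.21.219.2 https://test.citrix.com/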
III. Verification
L4 load balancing
1. Deploy the RBAC
2. Deploy the CIC with admin partition recognition enabled
3. Deploy the service
4. Deploy the ingress. By leaving host unconfigured and associating it with the service configured above, the endpoint members are recognized; for pure L4 load balancing the ingress is not required. During testing I have not yet found a way to make the newer CIC versions recognize endpoint members directly just by adjusting parameters.
5. Check whether the configuration has been pushed down to the ADC successfully and whether the endpoint member information is picked up for the service, as sketched below.
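As a rough cross-check (the service group name on the ADC is generated by the CIC, so it is not spelled out here), the endpoint IPs known to Kubernetes can be compared with the service group members shown on the appliance:

kubectl get endpoints nginx-svc -o wide      # on the Kubernetes cluster; use the backend service referenced by the ingress if applicable
show serviceGroup                            # on the Citrix ADC CLI, inside the k8s1 partition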
6. Verify that the test application can be accessed
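For the L4 case the simplest check is to call the VIP that was assigned to the service (10.1.9.1 in the example above) from a client outside the cluster:

curl http://10.1.9.1/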