OAuth2-proxy pod keeps crashing when used with Keycloak in oidc mode on Kubernetes

#nginx #kubernetes #keycloak #oauth2-proxy

Question:

I am trying to run a minimal oauth2-proxy sample with Keycloak. I used the oauth2-proxy k8s example, which uses dex, as the basis for my Keycloak sample. The problem is that I can't seem to get the proxy working:

 # kubectl get pods                     
NAME                            READY   STATUS             RESTARTS   AGE
httpbin-774999875d-zbczh        1/1     Running            0          2m49s
keycloak-758d7c758-27pgh        1/1     Running            0          2m49s
oauth2-proxy-5875dd67db-8qwqn   0/1     CrashLoopBackOff   2          2m49s
 

The logs indicate a network error:

 # kubectl logs oauth2-proxy-5875dd67db-8qwqn
[2021/09/22 08:14:56] [main.go:54] Get "http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration": dial tcp 127.0.0.1:80: connect: connection refused
 

However, I believe I have configured the ingress correctly.
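
To double-check, the same discovery URL can be fetched from a throwaway pod inside the cluster; if the problem is in-cluster resolution, this fails with the same "connection refused" the proxy logs (a rough sketch; curlimages/curl is just an example image that ships curl):

 # One-off pod: hit the OIDC discovery endpoint from inside the cluster
kubectl run curl-debug --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -sv http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration
 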

Steps to reproduce

  1. Set up the cluster:
 #Create kind cluster
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/kind-cluster.yaml
kind create cluster --name oauth2-proxy --config kind-cluster.yaml
#Setup dns
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/kubernetes/custom-dns.yaml
kubectl apply -f custom-dns.yaml
kubectl -n kube-system rollout restart deployment/coredns
kubectl -n kube-system rollout status --timeout 5m deployment/coredns
#Setup ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
kubectl --namespace ingress-nginx rollout status --timeout 5m deployment/ingress-nginx-controller
#Deploy
#import keycloak master realm
wget https://raw.githubusercontent.com/oauth2-proxy/oauth2-proxy/master/contrib/local-environment/keycloak/master-realm.json 
kubectl create configmap keycloak-import-config --from-file=master-realm.json=master-realm.json
 
  2. Deploy the test application. My deployment.yaml file:
 ###############oauth2-proxy#############
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    name: oauth2-proxy
  name: oauth2-proxy
spec:
  replicas: 1
  selector:
      matchLabels:
        name: oauth2-proxy
  template:
    metadata:
      labels:
        name: oauth2-proxy
    spec:
      containers:
        - args:
            - --provider=oidc
            - --oidc-issuer-url=http://keycloak.localtest.me/auth/realms/master
            - --upstream=file:///dev/null
            - --client-id=oauth2-proxy
            - --client-secret=72341b6d-7065-4518-a0e4-50ee15025608
            - --cookie-secret=x-1vrrMhC-886ITuz8ySNw==
            - --email-domain=*
            - --scope=openid profile email users
            - --cookie-domain=.localtest.me
            - --whitelist-domain=.localtest.me
            - --pass-authorization-header=true
            - --pass-access-token=true
            - --pass-user-headers=true
            - --set-authorization-header=true
            - --set-xauthrequest=true
            - --cookie-refresh=1m
            - --cookie-expire=30m
            - --http-address=0.0.0.0:4180
          image: quay.io/oauth2-proxy/oauth2-proxy:latest
          # image: "quay.io/pusher/oauth2_proxy:v5.1.0"
          name: oauth2-proxy
          ports:
            - containerPort: 4180
              name: http
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /ping
              port: http
              scheme: HTTP
            initialDelaySeconds: 0
            timeoutSeconds: 1
          readinessProbe:
            httpGet:
              path: /ping
              port: http
              scheme: HTTP
            initialDelaySeconds: 0
            timeoutSeconds: 1
            successThreshold: 1
            periodSeconds: 10
          resources:
            {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
spec:
  type: ClusterIP
  ports:
    - port: 4180
      targetPort: 4180
      name: http
  selector:
    name: oauth2-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      large_client_header_buffers 4 32k;
spec:
  rules:
    - host: oauth2-proxy.localtest.me
      http:
        paths:
          - path: /
            backend:
              serviceName: oauth2-proxy
              servicePort: 4180
---
# ######################httpbin##################
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      name: httpbin
  template:
    metadata:
      labels:
        name: httpbin
    spec:
      containers:
        - image: kennethreitz/httpbin:latest
          name: httpbin
          resources: {}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
      hostname: httpbin
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin-svc
  labels:
    app: httpbin
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    name: httpbin
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: httpbin
  labels:
    name: httpbin
  annotations:
    nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User,X-Auth-Request-Email
    nginx.ingress.kubernetes.io/auth-signin: http://oauth2-proxy.localtest.me/oauth2/start
    nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.localtest.me/oauth2/auth
spec:
  rules:
    - host: httpbin.localtest.me
      http:
        paths:
          - path: /
            backend:
              serviceName: httpbin-svc
              servicePort: 80
---
# ######################keycloak#############
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: keycloak
  name: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - args:
            - -Dkeycloak.migration.action=import
            - -Dkeycloak.migration.provider=singleFile
            - -Dkeycloak.migration.file=/etc/keycloak_import/master-realm.json
            - -Dkeycloak.migration.strategy=IGNORE_EXISTING
          env:
            - name: KEYCLOAK_PASSWORD
              value: password
            - name: KEYCLOAK_USER
              value: admin@example.com
            - name: KEYCLOAK_HOSTNAME
              value: keycloak.localtest.me
            - name: PROXY_ADDRESS_FORWARDING
              value: "true"
          image: quay.io/keycloak/keycloak:15.0.2
          # image: jboss/keycloak:10.0.0
          name: keycloak
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          readinessProbe:
            httpGet:
              path: /auth/realms/master
              port: 8080
          volumeMounts:
            - mountPath: /etc/keycloak_import
              name: keycloak-config
      hostname: keycloak
      volumes:
      - configMap:
          defaultMode: 420
          name: keycloak-import-config
        name: keycloak-config
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak-svc
  labels:
    app: keycloak
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
  - name: http
    targetPort: http
    port: 8080
  selector:
    app: keycloak
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: keycloak
spec:
  tls:
    - hosts:
      - "keycloak.localtest.me"
  rules:
  - host: "keycloak.localtest.me"
    http:
      paths:
      - path: /
        backend:
          serviceName: keycloak-svc
          servicePort: 8080
---

 
 # kubectl apply -f deployment.yaml
 
  3. Configure /etc/hosts on the development machine to include the localtest.me domains:
 127.0.0.1       oauth2-proxy.localtest.me
127.0.0.1       keycloak.localtest.me
127.0.0.1       httpbin.localtest.me
127.0.0.1       localhost
 

Note that I can reach http://keycloak.localtest.me/auth/realms/master/.well-known/openid-configuration from my host browser without any problem. It looks like the oauth2-proxy pod cannot reach the service through the ingress. Any help here would be much appreciated.
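
The /etc/hosts entries above only affect the host; pods resolve names through the cluster's CoreDNS. To see what the name resolves to from inside the cluster (a sketch; tutum/dnsutils is just an example image that ships nslookup):

 # Compare in-cluster resolution with the host's
kubectl run dnsutils --rm -it --restart=Never --image=tutum/dnsutils --command -- \
  nslookup keycloak.localtest.me
 

Since localtest.me and all of its subdomains publicly resolve to 127.0.0.1, the pod ends up dialing itself, which matches the "dial tcp 127.0.0.1:80: connect: connection refused" in the log above.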

Answer #1:

It turned out that I had to add keycloak to custom-dns.yaml. The original file only carries a hosts entry for dex:

 apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        hosts {
            10.244.0.1 dex.localtest.me. # <----Configured for dex
            10.244.0.1 oauth2-proxy.localtest.me
            fallthrough
        }
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
 

I added keycloak as shown below, pointing it at the same address the dex example already uses for the ingress:

 apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        hosts {
            10.244.0.1 keycloak.localtest.me
            10.244.0.1 oauth2-proxy.localtest.me
            fallthrough
        }
    }
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
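 

After changing the ConfigMap, CoreDNS has to pick up the new hosts entry; the rollout commands from the setup step can be reused, and the lookup repeated to verify (same assumptions as the sketch above):

 # Re-apply the DNS config and restart CoreDNS
kubectl apply -f custom-dns.yaml
kubectl -n kube-system rollout restart deployment/coredns
kubectl -n kube-system rollout status --timeout 5m deployment/coredns
# keycloak.localtest.me should now resolve to 10.244.0.1 inside the cluster
kubectl run dnsutils --rm -it --restart=Never --image=tutum/dnsutils --command -- \
  nslookup keycloak.localtest.me
 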