Unable to deploy pods on managedNodeGroups in EKS

I set up a cluster in EKS with a managed node group. I can attach the CSI driver to the cluster and create a StorageClass and a PersistentVolumeClaim (sketched below), but whenever I try to apply the Deployment, the pods never get scheduled onto the specified nodes.
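
For reference, a minimal sketch of the storage objects involved (the claim name pvc1 and the ebs.csi.aws.com provisioner appear in the pod and node descriptions below; the StorageClass name and requested size are placeholders, not the actual manifests):

# Sketch only: ebs-sc and 4Gi are placeholders; pvc1 and the
# ebs.csi.aws.com provisioner are taken from the outputs below.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi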

The Deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: voldeployment
  labels:
    app: my-app
spec:
  replicas: 2
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                - key: role
                  operator: In
                  values:
                    - general
                - key: disktype
                  operator: In
                  values:
                    - ssd
                - key: alpha.eksctl.io/nodegroup-name
                  operator: Exists
      containers:
        - name: ubuntu-ctr
          image: ubuntu:latest
          command:
          - /bin/bash
          - -c
          - sleep 60m
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: pvc1

This is what the pods look like:

Name:           voldeployment-695bb94474-d6ptq
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=my-app
                pod-template-hash=695bb94474
Annotations:    kubernetes.io/psp: eks.privileged
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/voldeployment-695bb94474
Containers:
  ubuntu-ctr:
    Image:      ubuntu:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/bash
      -c
      sleep 60m
    Environment:  <none>
    Mounts:
      /data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xskpn (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc1
    ReadOnly:   false
  default-token-xskpn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xskpn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  73s (x4 over 4m12s)  default-scheduler  0/2 nodes are available: 2 Too many pods.

and the node looks like this:

Name:               ip-192-168-100-146.ec2.internal
Roles:              <none>
Labels:             alpha.eksctl.io/cluster-name=Example-cluster
                    alpha.eksctl.io/nodegroup-name=managed-ng-1
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=t2.micro
                    beta.kubernetes.io/os=linux
                    eks.amazonaws.com/capacityType=ON_DEMAND
                    eks.amazonaws.com/nodegroup=managed-ng-1
                    eks.amazonaws.com/nodegroup-image=ami-090a8b0372a148572
                    eks.amazonaws.com/sourceLaunchTemplateId=lt-01a95a53f5379e6c6
                    eks.amazonaws.com/sourceLaunchTemplateVersion=1
                    failure-domain.beta.kubernetes.io/region=us-east-1
                    failure-domain.beta.kubernetes.io/zone=us-east-1f
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=ip-192-168-100-146.ec2.internal
                    kubernetes.io/os=linux
                    node.kubernetes.io/instance-type=t2.micro
                    role=general
                    topology.ebs.csi.aws.com/zone=us-east-1f
                    topology.kubernetes.io/region=us-east-1
                    topology.kubernetes.io/zone=us-east-1f
Annotations:        csi.volume.kubernetes.io/nodeid: {"ebs.csi.aws.com":"i-0b551804ff7079b95"}
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 08 Feb 2021 16:02:14 +0530
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  ip-192-168-100-146.ec2.internal
  AcquireTime:     <unset>
  RenewTime:       Mon, 08 Feb 2021 17:08:07 +0530
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Mon, 08 Feb 2021 17:05:26 +0530   Mon, 08 Feb 2021 16:02:13 +0530   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 08 Feb 2021 17:05:26 +0530   Mon, 08 Feb 2021 16:02:13 +0530   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Mon, 08 Feb 2021 17:05:26 +0530   Mon, 08 Feb 2021 16:02:13 +0530   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Mon, 08 Feb 2021 17:05:26 +0530   Mon, 08 Feb 2021 16:02:44 +0530   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:   192.168.100.146
  Hostname:     ip-192-168-100-146.ec2.internal
  InternalDNS:  ip-192-168-100-146.ec2.internal
Capacity:
  attachable-volumes-aws-ebs:  39
  cpu:                         1
  ephemeral-storage:           8376300Ki
  hugepages-2Mi:               0
  memory:                      1006900Ki
  pods:                        4
Allocatable:
  attachable-volumes-aws-ebs:  39
  cpu:                         940m
  ephemeral-storage:           6645856244
  hugepages-2Mi:               0
  memory:                      598324Ki
  pods:                        4
System Info:
  Machine ID:                 4ec92546167f44ed87b3bccf78d597bb
  System UUID:                EC247791-4550-81B2-5CE7-0B5ACD5E96C5
  Boot ID:                    15d14b07-6e8e-48ad-9e6f-71225d16ea3c
  Kernel Version:             4.14.209-160.339.amzn2.x86_64
  OS Image:                   Amazon Linux 2
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.6
  Kubelet Version:            v1.18.9-eks-d1db3c
  Kube-Proxy Version:         v1.18.9-eks-d1db3c
ProviderID:                   aws:///us-east-1f/i-0b551804ff7079b95
Non-terminated Pods:          (4 in total)
  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
  kube-system                 aws-node-8vv8q             10m (1%)      0 (0%)      0 (0%)           0 (0%)         66m
  kube-system                 coredns-c79dcb98c-nk4vv    100m (10%)    0 (0%)      70Mi (11%)       170Mi (29%)    71m
  kube-system                 ebs-csi-node-x87sv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         63m
  kube-system                 kube-proxy-2fp4k           100m (10%)    0 (0%)      0 (0%)           0 (0%)         66m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                    Requests    Limits
  --------                    --------    ------
  cpu                         210m (22%)  0 (0%)
  memory                      70Mi (11%)  170Mi (29%)
  ephemeral-storage           0 (0%)      0 (0%)
  hugepages-2Mi               0 (0%)      0 (0%)
  attachable-volumes-aws-ebs  0           0
Events:                       <none>

Any ideas what could be causing this problem?


asked by Varun Nair, 08.02.2021


Answers (1)


According to the AWS documentation on IP addresses per network interface per instance type, a t2.micro has only 2 network interfaces and 2 IPv4 addresses per interface. With the VPC CNI this works out to 2 * (2 - 1) + 2 = 4 pods per node, which matches the pods: 4 capacity in the node description above, and all four slots are already taken by the kube-system pods (aws-node, coredns, ebs-csi-node, kube-proxy), hence the "Too many pods" scheduling error.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI

AWS EKS has a per-node limit on how many pods can be scheduled: https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt

If you want, you can work around this limit: https://medium.com/@swazza85/dealing-with-pod-dedensity-limitations-on-eks-worker-nodes-137a12c8b218
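
Alternatively, simply moving to a larger instance type raises the per-node limit (eni-max-pods.txt lists 11 pods for t3.small versus 4 for t2.micro). A rough eksctl sketch, where the node group name, instance type, and capacity are only illustrative and the labels are copied from the Deployment's nodeAffinity:

# Illustrative only: the node group name, instance type and capacity are
# assumptions; the cluster name and region come from the node labels above.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: Example-cluster
  region: us-east-1
managedNodeGroups:
  - name: managed-ng-2          # placeholder name
    instanceType: t3.small      # 11 pods per node per eni-max-pods.txt
    desiredCapacity: 2
    labels:
      role: general
      disktype: ssd             # matches the Deployment's required nodeAffinity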

answered by Harsh Manvar, 08.02.2021