Continuous restarts of the zammad-0 pod

Infos:

  • Used Zammad version: 5.0.3
  • Used Zammad installation type: helm chart
  • Operating system: macOS 11.6
  • Browser + version: Brave and Safari

Expected behavior:

  • Pod initializing and running without restarts

Actual behavior:

  • The pod initializes and then restarts randomly (around 20 restarts per day)

Steps to reproduce the behavior:

  • Install the helm chart in k8s with the values.yaml example below
  • The following pods come up: zammad-master-0, zammad-memcached
  • The pod zammad-0 stays in the Pending state for hours
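
As a minimal sketch of how the Pending state can be diagnosed (assuming the release runs in the default namespace; add -n <namespace> otherwise):

# scheduling events for the stuck pod (e.g. unbound PVC, insufficient resources)
kubectl describe pod zammad-0
# recent cluster events, which usually name the reason a pod cannot be scheduled
kubectl get events --sort-by=.metadata.creationTimestamp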

values.yaml

image:
  repository: zammad/zammad-docker-compose
  tag: 5.0.3-14
  pullPolicy: IfNotPresent
  imagePullSecrets: []
    # - name: "image-pull-secret"

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-internal"
    nginx.ingress.kubernetes.io/use-regex: "true"
  hosts:
    - host: <masked>
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls:
   - secretName: zammad-tls-secret
     hosts:
       - <masked>

secrets:
  autowizard:
    useExisting: false
    secretKey: autowizard
    secretName: autowizard
  elasticsearch:
    useExisting: false
    secretKey: password
    secretName: elastic-credentials
  postgresql:
    useExisting: false
    secretKey: postgresql-pass
    secretName: postgresql-pass
  redis:
    useExisting: false
    secretKey: redis-password
    secretName: redis-pass

zammadConfig:
  elasticsearch:
    # enable/disable elasticsearch chart dependency
    enabled: true
    # host env var is only used when zammadConfig.elasticsearch.enabled is false
    host: <masked>
    initialisation: true
    pass: ""
    port: 9200
    reindex: true
    schema: http
    user: ""
  memcached:
    # enable/disable memcached chart dependency
    enabled: true
    # host env var is only used when zammadConfig.memcached.enabled is false
    host: zammad-memcached
    port: 11211
  nginx:
    extraHeaders: []
      # - 'HeaderName "Header Value"'
    websocketExtraHeaders: []
      # - 'HeaderName "Header Value"'
    livenessProbe:
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 30
      successThreshold: 1
      failureThreshold: 5
      timeoutSeconds: 5
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /
        port: 8080
      initialDelaySeconds: 30
      successThreshold: 1
      failureThreshold: 5
      timeoutSeconds: 5
      periodSeconds: 10
    resources:
      requests:
        cpu: 50m
        memory: 128Mi
      limits:
        cpu: 100m
        memory: 128Mi
  postgresql:
    # enable/disable postgresql chart dependency
    enabled: false
    db: <masked>
    # host env var is only used when postgresql.enabled is false
    host: <masked>
    port: <masked>
    # needs to be the same as the postgresql.postgresqlUsername
    user: <masked>
    # needs to be the same as the postgresql.postgresqlPassword
    pass: <masked>
  railsserver:
    livenessProbe:
      httpGet:
        path: /
        port: 3000
      initialDelaySeconds: 30
      successThreshold: 1
      failureThreshold: 5
      timeoutSeconds: 5
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /
        port: 3000
      initialDelaySeconds: 30
      successThreshold: 1
      failureThreshold: 5
      timeoutSeconds: 5
      periodSeconds: 10
    resources: {}
      # requests:
      #   cpu: 100m
      #   memory: 3072Mi
      # limits:
      #   cpu: 2
      #   memory: 3072Mi
    trustedProxies: "['127.0.0.1', '::1']"
    webConcurrency: 0
  redis:
    # enable/disable redis chart dependency
    enabled: false
    host: <masked>
    # needs to be the same as the redis.auth.password
    pass: <masked>
    port: <masked>
  scheduler:
    resources:
      requests:
        cpu: 400m
        memory: 2048Mi
      limits:
        cpu: 2
        memory: 2048Mi
  websocket:
    livenessProbe:
      tcpSocket:
        port: 6042
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      failureThreshold: 5
      timeoutSeconds: 5
    readinessProbe:
      tcpSocket:
        port: 6042
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      failureThreshold: 5
      timeoutSeconds: 5
    resources:
      requests:
        cpu: 100m
        memory: 1028Mi
      limits:
        cpu: 800m
        memory: 1028Mi

# additional environment vars added to all zammad services
extraEnv: []
  # - name: FOO_BAR
  #   value: "foobar"

# autowizard config
# if a token is used the URL has to look like: http://zammad/#getting_started/auto_wizard/your_token_here
autoWizard:
  enabled: false
  # string with the autowizard config as json
  # config: |
  #   {
  #     "Token": "secret_zammad_autowizard_token",
  #     "TextModuleLocale": {
  #       "Locale": "en-us"
  #     },
  #     "Users": [
  #       {
  #         "login": "email@example.org",
  #         "firstname": "Zammad",
  #         "lastname": "Admin",
  #         "email": "email@example.org",
  #         "organization": "ZammadTest",
  #         "password": "YourPassword"
  #       }
  #     ],
  #     "Settings": [
  #       {
  #         "name": "product_name",
  #         "value": "ZammadTestSystem"
  #       },
  #       {
  #         "name": "system_online_service",
  #         "value": true
  #       }
  #     ],
  #     "Organizations": [
  #       {
  #         "name": "ZammadTest"
  #       }
  #     ]
  #   }

podAnnotations: {}
  # my-annotation: "value"

volumePermissions:
  enabled: true
  image:
    repository: alpine
    tag: "3.14"
    pullPolicy: IfNotPresent

# Configuration for persistence
persistence:
  enabled: true
  ## A manually managed Persistent Volume and Claim
  ## If defined, PVC must be created manually before volume will be bound
  ## The value is evaluated as a template, so, for example, the name can depend on .Release or .Chart
  ##
  # existingClaim:
  accessModes:
    - ReadWriteOnce
  storageClass: <masked>
  size: 10Gi
  annotations: {}

nodeSelector: {}
tolerations: []
affinity: {}

# service account configurations
serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

# RBAC configuration for scoping resources
# Role binding is used for accessing the pod security policy configured
# below
rbac:
  # Control whether RBAC resources are created
  create: false

# Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/
podSecurityPolicy:
  enabled: false
  # Create PSP
  create: false
  # Annotations to add to PSP. Only applicable if create is true
  annotations: {}
  # The name of the PSP to use. Only applicable if create is false
  name: ""

# dependency charts config

# Settings for the elasticsearch subchart
elasticsearch:
  image: "zammad/zammad-docker-compose"
  imageTag: "zammad-elasticsearch-5.0.3-14"
  clusterName: zammad
  replicas: 1
  # Workaround to get helm test to work in GitHub action CI
  # the [ES chart](https://github.com/elastic/helm-charts/tree/master/elasticsearch)
  # default would be: "wait_for_status=green&timeout=1s"
  # see: <https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html#request-params>
  clusterHealthCheckParams: "timeout=1s"
  resources: {}
    # requests:
    #   cpu: "100m"
    #   memory: "2Gi"
    # limits:
    #   cpu: "1000m"
    #   memory: "2Gi"
  initResources: {}
    # limits:
    #   cpu: "25m"
    #   # memory: "128Mi"
    # requests:
    #   cpu: "25m"
    #   memory: "128Mi"
  sidecarResources: {}
    # limits:
    #   cpu: "25m"
    #   # memory: "128Mi"
    # requests:
    #   cpu: "25m"
    #   memory: "128Mi"

# settings for the memcached subchart
memcached:
  replicaCount: 1
  resources: {}
    # requests:
    #   cpu: 50m
    #   memory: 64Mi
    # limits:
    #   cpu: 100m
    #   memory: 128Mi

# settings for the postgres subchart
postgresql:
  postgresqlUsername: <masked>
  postgresqlPassword: <masked>
  postgresqlDatabase: <masked>
  resources: {}
    # requests:
    #   cpu: 250m
    #   memory: 256Mi
    # limits:
    #   cpu: 500m
    #   memory: 512Mi

# settings for the redis subchart
redis:
  architecture: standalone
  auth:
    password: <masked>
  master:
    resources: {}
    # limits:
    #   cpu: 250m
    #   memory: 256Mi
    # requests:
    #   cpu: 250m
    #   memory: 256Mi

Hello,

Can you provide more details regarding your environment (EKS, GKE)? Also, can you share the logs for zammad-0 and a pod describe so that I can check and help?

Bilel

Any chance the zammad-0 pod is the one which executes the zammad-init or backup? I think the init is supposed to just init and exit, and then start again.
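
As a sketch (assuming the default namespace; these field names are standard Kubernetes, not chart-specific), this can be checked directly on the pod:

# names of the init containers defined on the pod
kubectl get pod zammad-0 -o jsonpath='{.spec.initContainers[*].name}'
# state of each init container, including the last termination reason
kubectl get pod zammad-0 -o jsonpath='{.status.initContainerStatuses[*].state}'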

Hello,

Thank you for the quick reply. The issue with the Pending state of the zammad-0 pod was resolved by disabling Elasticsearch indexing on start. Once zammad-0 was up and running, we ran the indexing at a later time; it took several hours and completed successfully.
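
For anyone hitting the same problem: judging by the values.yaml above, this presumably amounts to turning off the Elasticsearch initialisation/reindex flags before installing, roughly:

zammadConfig:
  elasticsearch:
    enabled: true
    # skip index initialisation and reindexing during startup;
    # rebuild the search index manually once the pod is stable
    initialisation: false
    reindex: false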

However, the issue with random restarts persists. It took a while to catch the actual log output that appears right before a restart:

2022-02-18T14:34:19+01:00 from /opt/zammad/lib/sessions.rb:579:in `new'

2022-02-18T14:34:19+01:00 from /opt/zammad/lib/sessions.rb:579:in `thread_client'

2022-02-18T14:34:19+01:00 from /opt/zammad/lib/sessions.rb:534:in `block (3 levels) in jobs'

2022-02-18T14:34:19+01:00 /usr/local/bundle/gems/hiredis-0.6.3/lib/hiredis/ext/connection.rb:19:in `read': Resource temporarily unavailable (Errno::EAGAIN)

2022-02-18T14:34:19+01:00 from /usr/local/bundle/gems/hiredis-0.6.3/lib/hiredis/ext/connection.rb:19:in `read'

2022-02-18T14:34:19+01:00 from /usr/local/bundle/gems/redis-4.4.0/lib/redis/connection/hiredis.rb:55:in `read'
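
For reference, such pre-restart output can usually be recovered from the previous container instance; a minimal sketch, assuming the default namespace:

# logs of the previous (crashed) instance of every container in the pod
kubectl logs zammad-0 --previous --all-containers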

It looks like Redis is not responding. We are using a local instance of Redis configured in the values.yaml file, so I guess we can exclude a network issue. We would prefer to use our external Redis for this, but I couldn’t find a way to configure it in values.yaml with a username and password (both are required for our external Redis connection).

That’s because Zammad does not support it as of now; the documented environment variables hint at that indirectly:

https://docs.zammad.org/en/latest/appendix/configure-env-vars.html
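
In other words, only the host, port and password of an external Redis can be set through the chart (no username), as in the zammadConfig.redis section above; the host and password below are hypothetical placeholders:

zammadConfig:
  redis:
    # disable the bundled redis subchart and point Zammad at an external instance
    enabled: false
    host: redis.internal.example
    port: 6379
    # password-only auth; a Redis ACL username cannot be supplied here
    pass: "your-redis-password"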

There’s also a feature request for this topic:

This topic was automatically closed 120 days after the last reply. New replies are no longer allowed.