Running with gitlab-runner 15.8.0 (12335144)
  on gitlab-runner-gitlab-runner-5c5d8dfd84-fc2jj x3T1Qxkg, system ID: r_0t1aE5p8nfLN
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-runners
Using Kubernetes executor with image registry.rshbdev.ru/appfarm/infra/images/kube-client-apps:8.21.3 ...
Using attach strategy to execute scripts...
Preparing environment
00:14
Waiting for pod gitlab-runners/runner-x3t1qxkg-project-54353-concurrent-0tdvvk to be running, status is Pending
Waiting for pod gitlab-runners/runner-x3t1qxkg-project-54353-concurrent-0tdvvk to be running, status is Pending
	ContainersNotReady: "containers with unready status: [build helper]"
	ContainersNotReady: "containers with unready status: [build helper]"
Running on runner-x3t1qxkg-project-54353-concurrent-0tdvvk via gitlab-runner-gitlab-runner-5c5d8dfd84-fc2jj...
Getting source from Git repository
00:03
$ git config --global --add url."https://gitlab-ci-token:${CI_JOB_TOKEN}@${CI_SERVER_HOST}/".insteadOf "https://${CI_SERVER_HOST}" # collapsed multi-line command
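The `insteadOf` rewrite above makes every `https` fetch of the GitLab host authenticate with the job token. A side-effect-free sketch of the same rule, using dummy token/host values and an isolated config file (all three are assumptions, not the job's real values; `GIT_CONFIG_GLOBAL` needs git 2.32+):

```shell
# Write to an isolated config file instead of the real ~/.gitconfig.
export GIT_CONFIG_GLOBAL=$(mktemp)
CI_JOB_TOKEN=dummy-token
CI_SERVER_HOST=gitlab.example.com

# Same rewrite rule the job installs: any URL starting with
# https://gitlab.example.com is rewritten to the token-bearing URL.
git config --global --add \
  url."https://gitlab-ci-token:${CI_JOB_TOKEN}@${CI_SERVER_HOST}/".insteadOf \
  "https://${CI_SERVER_HOST}"

# Look the rule up again to confirm it was stored.
git config --get \
  url."https://gitlab-ci-token:dummy-token@gitlab.example.com/".insteadOf
# prints https://gitlab.example.com
```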
Fetching changes with git depth set to 20...
Initialized empty Git repository in /builds/rshbintech/crft/ckof/ownfin/backend/prd/owf-ms-prd-refer/.git/
Created fresh repository.
Checking out 26a3d0de as stress...
Skipping Git submodules setup
Downloading artifacts
00:01
Downloading artifacts for devsecops_antivirus_scan (21724596)...
Downloading artifacts from coordinator... ok        id=21724596 responseStatus=200 OK token=64_Jid2L
Downloading artifacts for dockerfilegen (21724597)...
Downloading artifacts from coordinator... ok        id=21724597 responseStatus=200 OK token=64_Jid2L
Downloading artifacts for build (21724598)...
Downloading artifacts from coordinator... ok        id=21724598 responseStatus=200 OK token=64_Jid2L
Downloading artifacts for unit (21724600)...
Downloading artifacts from coordinator... ok        id=21724600 responseStatus=200 OK token=64_Jid2L
Executing "step_script" stage of the job script
00:47
$ ( umask 0077; mkdir -p ~/.kube && echo "$KUBECONFIG_COMBINED" | base64 -d > ~/.kube/config )
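The subshell above matters: `umask 0077` applies only inside it, so the decoded kubeconfig lands with mode 0600 without disturbing the rest of the job. A sketch with a dummy payload and a temp directory (both assumptions; the real job decodes `$KUBECONFIG_COMBINED` into `~/.kube/config`; `stat -c` is GNU stat):

```shell
KUBEDIR=$(mktemp -d)                      # stand-in for ~/.kube
PAYLOAD=$(printf 'apiVersion: v1\nkind: Config\n' | base64)

# umask 0077 inside the subshell => the file is created 0600,
# appropriate for a file carrying cluster credentials.
( umask 0077; echo "$PAYLOAD" | base64 -d > "$KUBEDIR/config" )

stat -c %a "$KUBEDIR/config"              # prints 600
```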
$ if [[ ${IS_SENSITIVE_SYSTEM} = true ]]; then export K8S_CLUSTERS=$K8S_CLUSTERS_SENSITIVE; fi
$ echo 'IS_SENSITIVE_SYSTEM:' $IS_SENSITIVE_SYSTEM
IS_SENSITIVE_SYSTEM: true
$ echo 'TARGET_CLUSTERS:' $K8S_CLUSTERS
TARGET_CLUSTERS: rcsdstbl
$ set -x
++ echo '$ for CLUSTER in $K8S_CLUSTERS; do'
$ for CLUSTER in $K8S_CLUSTERS; do
++ for CLUSTER in $K8S_CLUSTERS
++ echo '$ export CLUSTER=$CLUSTER'
$ export CLUSTER=$CLUSTER
++ export CLUSTER=rcsdstbl
++ CLUSTER=rcsdstbl
++ echo '$ if [[ ${CI_ENVIRONMENT_SLUG} = production ]]; then export NAMESPACE=isys-${ISYS_NAME}; else export NAMESPACE=isys-${ISYS_NAME}-${CI_ENVIRONMENT_SLUG}; fi'
$ if [[ ${CI_ENVIRONMENT_SLUG} = production ]]; then export NAMESPACE=isys-${ISYS_NAME}; else export NAMESPACE=isys-${ISYS_NAME}-${CI_ENVIRONMENT_SLUG}; fi
++ [[ stress = production ]]
++ export NAMESPACE=isys-ownfin-stress
++ NAMESPACE=isys-ownfin-stress
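The namespace rule traced above can be replayed in isolation: non-production environments get the slug as a suffix. Using the values visible in this log:

```shell
ISYS_NAME=ownfin
CI_ENVIRONMENT_SLUG=stress    # this job deploys the "stress" environment

# production => isys-<isys>; everything else => isys-<isys>-<slug>
if [[ ${CI_ENVIRONMENT_SLUG} = production ]]; then
  NAMESPACE=isys-${ISYS_NAME}
else
  NAMESPACE=isys-${ISYS_NAME}-${CI_ENVIRONMENT_SLUG}
fi

echo "$NAMESPACE"             # prints isys-ownfin-stress
```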
++ echo '$ read -r REVISION STATUS < <(helm --kube-context ${CLUSTER:-default} -n $NAMESPACE history $CI_PROJECT_NAME | tail -1 | cut -f1,3) || true'
$ read -r REVISION STATUS < <(helm --kube-context ${CLUSTER:-default} -n $NAMESPACE history $CI_PROJECT_NAME | tail -1 | cut -f1,3) || true
++ read -r REVISION STATUS
+++ helm --kube-context rcsdstbl -n isys-ownfin-stress history owf-ms-prd-refer
+++ tail -1
+++ cut -f1,3
++ echo '$ if [[ "$STATUS" =~ "pending" ]]; then helm --kube-context ${CLUSTER:-default} -n $NAMESPACE rollback $CI_PROJECT_NAME $REVISION || true; fi'
$ if [[ "$STATUS" =~ "pending" ]]; then helm --kube-context ${CLUSTER:-default} -n $NAMESPACE rollback $CI_PROJECT_NAME $REVISION || true; fi
++ [[ deployed =~ pending ]]
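The guard evaluated above (`[[ deployed =~ pending ]]`) is the usual recover-a-stuck-release pattern: take the last row of `helm history`, keep the revision and status columns, and roll back only when the release sits in a `pending-*` state. A sketch against a fabricated history table (the tab-separated column layout is assumed to match helm's; real output has more columns):

```shell
# Fabricated `helm history` output: columns REVISION, UPDATED, STATUS.
HISTORY=$'REVISION\tUPDATED\tSTATUS\n4\t2024-01-01\tsuperseded\n5\t2024-01-02\tdeployed'

# Same parsing as the job: last row, fields 1 and 3.
read -r REVISION STATUS < <(printf '%s\n' "$HISTORY" | tail -1 | cut -f1,3)

# Roll back only if the release is stuck mid-operation
# (pending-install / pending-upgrade / pending-rollback).
if [[ "$STATUS" =~ "pending" ]]; then
  echo "would run: helm rollback <release> $REVISION"
else
  echo "status $STATUS: nothing to do"
fi
```

Here the last row reports `deployed`, so the branch is skipped, exactly as in the trace above.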
++ echo '$ helmfile ${HELMFILE_DEFAULT_NAMESPACE:+--namespace $HELMFILE_DEFAULT_NAMESPACE} --environment ${CLUSTER:-default} -f deploy/helmfile.yaml --log-level info apply --suppress-secrets'
$ helmfile ${HELMFILE_DEFAULT_NAMESPACE:+--namespace $HELMFILE_DEFAULT_NAMESPACE} --environment ${CLUSTER:-default} -f deploy/helmfile.yaml --log-level info apply --suppress-secrets
++ helmfile --environment rcsdstbl -f deploy/helmfile.yaml --log-level info apply --suppress-secrets
== /usr/local/link/helmfile: Initialize base deploy hierarchy in /builds/rshbintech/crft/ckof/ownfin/backend/prd/owf-ms-prd-refer
no matches for path: envs/stress/rcsdstbl/helmfile.yaml.gotmpl
Adding repo rshb-charts https://nexus.rshbdev.ru/repository/charts/
"rshb-charts" has been added to your repositories
Comparing release=vault-secrets-owf-ms-prd-refer, chart=rshb-charts/raw
Comparing release=psvc-owf-ms-prd-refer, chart=rshb-charts/raw
isys-ownfin-stress, owf-ms-prd-refer, PlatformService (production.platform.ckpr.integrations.rshbintech.ru) has changed:
  # Source: raw/templates/resources.yaml
  apiVersion: production.platform.ckpr.integrations.rshbintech.ru/v1
  kind: PlatformService
  metadata:
    labels:
      app: raw
      chart: raw-0.2.3-rshb.1.0.0
      ci.build.image.app.farm/name: maven
      ci.build.image.app.farm/version: 3.9.12-eclipse-temurin-21-rshb.0.2.0
      ci.kubeclientapps.app.farm/version: 8.21.3
      ci.runtime.image.app.farm/name: jre
      ci.runtime.image.app.farm/version: 21.0.9_10-jre-jammy-rshb.1.0.0
      ci.service.app.farm/lang: java
      ci.service.app.farm/side: backend
-     commitShortSHA: 792ed0ab
+     commitShortSHA: 26a3d0de
      heritage: Helm
      release: psvc-owf-ms-prd-refer
      version: stress
    name: owf-ms-prd-refer
  spec:
    cpuLimits: 800m
    cpuRequests: 200m
    description: Referral Program microservice
    disasterRecovery:
      zones:
      - rumsk1
      - rumsk2
    environment: STRESS
    informationSystemId: ownfin-stress
    istioSidecarSettings:
      limits:
        cpu: 600m
        memory: 400Mi
      requests:
        cpu: 200m
        memory: 100Mi
    name: Referral Program microservice
    projectPath: /rshbintech/crft/ckof/ownfin/backend/prd/owf-ms-prd-refer
    publicDomain: owf-ms-prd-refer
    publishAPI: true
    ramLimits: 2Gi
    ramRequests: 512Mi
    replicaCount: 1
    serviceReference:
      name: owf-ms-prd-refer
      namespace: isys-ownfin-stress
    side: backend
    updateStrategy: RollingUpdate
Comparing release=platform-database-owf-ms-prd-refer, chart=rshb-charts/raw
Comparing release=owf-ms-prd-refer-grafana-dashboard, chart=rshb-charts/grafana-dashboards
Comparing release=links-owf-ms-prd-refer, chart=rshb-charts/raw
Comparing release=owf-ms-prd-refer-rumsk1, chart=rshb-charts/base
isys-ownfin-stress, owf-ms-prd-refer-rumsk1, Deployment (apps) has changed:
  # Source: base/templates/workload.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: owf-ms-prd-refer-rumsk1
    labels:
      app: owf-ms-prd-refer
      fullname: owf-ms-prd-refer-rumsk1
      chart: base-1.14.2
      release: owf-ms-prd-refer-rumsk1
      heritage: Helm
      isys: "ownfin-stress"
      psvc: "owf-ms-prd-refer"
      version: "stress"    
      workload.topology.app.farm/zone: "rumsk1"
    annotations:
-     platform.ckpr.integrations.rshbintech.ru/gitlab-commit-sha: 792ed0ab
-     platform.ckpr.integrations.rshbintech.ru/gitlab-pipeline-id: "3057016"
+     platform.ckpr.integrations.rshbintech.ru/gitlab-commit-sha: 26a3d0de
+     platform.ckpr.integrations.rshbintech.ru/gitlab-pipeline-id: "3116357"
  spec:
    strategy:
      type: RollingUpdate
    replicas: 1
    selector:
      matchLabels:
        app: owf-ms-prd-refer
        fullname: owf-ms-prd-refer-rumsk1
        release: owf-ms-prd-refer-rumsk1
    template:
      metadata:
        labels:
          app: owf-ms-prd-refer
          fullname: owf-ms-prd-refer-rumsk1
          chart: base-1.14.2
          release: owf-ms-prd-refer-rumsk1
          heritage: Helm
          isys: "ownfin-stress"
          psvc: "owf-ms-prd-refer"
          version: "stress"        
          workload.topology.app.farm/zone: "rumsk1"
        annotations:
          checksum/configMapsEnv: "042cb77b85b7837eca44b51f177040b5b7ebdee4f65e9b79cd179309480b3f54"
          checksum/secretEnv: "01276d692e202ea1b91576e86f398b905877c4218376452b800f8c997881477d"
-         ci/commithash: "792ed0ab"
+         ci/commithash: "26a3d0de"
          inject.istio.io/templates: "sidecar,custom"
          prometheus.io/path: "/metrics"
          prometheus.io/port: "8080"
          prometheus.io/scrape: "true"
          sidecar.istio.io/proxyCPU: "200m"
          sidecar.istio.io/proxyCPULimit: "600m"
          sidecar.istio.io/proxyMemory: "100Mi"
          sidecar.istio.io/proxyMemoryLimit: "400Mi"
          sidecar.istio.io/userVolume: "[{\"name\": \"wasmfilters-dir\",\"configMap\": {\"name\": \"ownfin-stress-envoy-filters\"}}]"
          sidecar.istio.io/userVolumeMount: "[{\"mountPath\":\"/var/local/lib/wasm-filters\",\"name\":\"wasmfilters-dir\"}]"
      spec:
        serviceAccountName: default
        nodeSelector:
          node-role.kubernetes.io/wload: ""
          workload.topology.app.farm/zone: rumsk1
        priorityClassName: rumsk1
        containers:
          - name: app
            image: registry.rshbdev.ru/rshbintech/crft/ckof/ownfin/backend/prd/owf-ms-prd-refer:stress
            imagePullPolicy: Always
            envFrom:
              - configMapRef:
                  name: owf-ms-prd-refer-rumsk1-app-cm-env
              - secretRef:
                  name: owf-ms-prd-refer-rumsk1-app-secret-env
            resources:
              limits:
                cpu: 800m
                memory: 2Gi
              requests:
                cpu: 200m
                memory: 512Mi
            ports:
            livenessProbe:
              failureThreshold: 5
              httpGet:
                path: /health/liveness
                port: 8080
              initialDelaySeconds: 240
              periodSeconds: 5
              successThreshold: 1
              timeoutSeconds: 1
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: /health/readiness
                port: 8080
              initialDelaySeconds: 240
              periodSeconds: 5
              successThreshold: 1
              timeoutSeconds: 1
            securityContext: 
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
              privileged: false
              procMount: Default
              readOnlyRootFilesystem: false
              runAsGroup: 1001
              runAsNonRoot: true
              runAsUser: 1001
        securityContext: 
          fsGroup: 1001
          fsGroupChangePolicy: OnRootMismatch
          runAsGroup: 1001
          runAsNonRoot: true
          runAsUser: 1001
        tolerations: 
          - effect: NoSchedule
            key: node-role.kubernetes.io/wload
            operator: Exists
          - effect: NoSchedule
            key: workload.topology.app.farm/zone
            operator: Exists
        hostNetwork: false
        volumes:
Comparing release=owf-ms-prd-refer-rumsk2, chart=rshb-charts/base
isys-ownfin-stress, owf-ms-prd-refer-rumsk2, Deployment (apps) has changed:
  # Source: base/templates/workload.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: owf-ms-prd-refer-rumsk2
    labels:
      app: owf-ms-prd-refer
      fullname: owf-ms-prd-refer-rumsk2
      chart: base-1.14.2
      release: owf-ms-prd-refer-rumsk2
      heritage: Helm
      isys: "ownfin-stress"
      psvc: "owf-ms-prd-refer"
      version: "stress"    
      workload.topology.app.farm/zone: "rumsk2"
    annotations:
-     platform.ckpr.integrations.rshbintech.ru/gitlab-commit-sha: 792ed0ab
-     platform.ckpr.integrations.rshbintech.ru/gitlab-pipeline-id: "3057016"
+     platform.ckpr.integrations.rshbintech.ru/gitlab-commit-sha: 26a3d0de
+     platform.ckpr.integrations.rshbintech.ru/gitlab-pipeline-id: "3116357"
  spec:
    strategy:
      type: RollingUpdate
    replicas: 1
    selector:
      matchLabels:
        app: owf-ms-prd-refer
        fullname: owf-ms-prd-refer-rumsk2
        release: owf-ms-prd-refer-rumsk2
    template:
      metadata:
        labels:
          app: owf-ms-prd-refer
          fullname: owf-ms-prd-refer-rumsk2
          chart: base-1.14.2
          release: owf-ms-prd-refer-rumsk2
          heritage: Helm
          isys: "ownfin-stress"
          psvc: "owf-ms-prd-refer"
          version: "stress"        
          workload.topology.app.farm/zone: "rumsk2"
        annotations:
          checksum/configMapsEnv: "d4840626966bc97fa4aa09e8ea38ba43c46fb1260090fcaee66e56634b9d7ac5"
          checksum/secretEnv: "0d7ad9c244e39e54bbb9b4f6beab5f3e17bb506e213859f47bed1917f884430f"
-         ci/commithash: "792ed0ab"
+         ci/commithash: "26a3d0de"
          inject.istio.io/templates: "sidecar,custom"
          prometheus.io/path: "/metrics"
          prometheus.io/port: "8080"
          prometheus.io/scrape: "true"
          sidecar.istio.io/proxyCPU: "200m"
          sidecar.istio.io/proxyCPULimit: "600m"
          sidecar.istio.io/proxyMemory: "100Mi"
          sidecar.istio.io/proxyMemoryLimit: "400Mi"
          sidecar.istio.io/userVolume: "[{\"name\": \"wasmfilters-dir\",\"configMap\": {\"name\": \"ownfin-stress-envoy-filters\"}}]"
          sidecar.istio.io/userVolumeMount: "[{\"mountPath\":\"/var/local/lib/wasm-filters\",\"name\":\"wasmfilters-dir\"}]"
      spec:
        serviceAccountName: default
        nodeSelector:
          node-role.kubernetes.io/wload: ""
          workload.topology.app.farm/zone: rumsk2
        priorityClassName: rumsk2
        containers:
          - name: app
            image: registry.rshbdev.ru/rshbintech/crft/ckof/ownfin/backend/prd/owf-ms-prd-refer:stress
            imagePullPolicy: Always
            envFrom:
              - configMapRef:
                  name: owf-ms-prd-refer-rumsk2-app-cm-env
              - secretRef:
                  name: owf-ms-prd-refer-rumsk2-app-secret-env
            resources:
              limits:
                cpu: 800m
                memory: 2Gi
              requests:
                cpu: 200m
                memory: 512Mi
            ports:
            livenessProbe:
              failureThreshold: 5
              httpGet:
                path: /health/liveness
                port: 8080
              initialDelaySeconds: 240
              periodSeconds: 5
              successThreshold: 1
              timeoutSeconds: 1
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: /health/readiness
                port: 8080
              initialDelaySeconds: 240
              periodSeconds: 5
              successThreshold: 1
              timeoutSeconds: 1
            securityContext: 
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
              privileged: false
              procMount: Default
              readOnlyRootFilesystem: false
              runAsGroup: 1001
              runAsNonRoot: true
              runAsUser: 1001
        securityContext: 
          fsGroup: 1001
          fsGroupChangePolicy: OnRootMismatch
          runAsGroup: 1001
          runAsNonRoot: true
          runAsUser: 1001
        tolerations: 
          - effect: NoSchedule
            key: node-role.kubernetes.io/wload
            operator: Exists
          - effect: NoSchedule
            key: workload.topology.app.farm/zone
            operator: Exists
        hostNetwork: false
        volumes:
Comparing release=owf-ms-prd-refer, chart=rshb-charts/raw
Listing releases matching ^pjob-owf-ms-prd-refer$
Listing releases matching ^exsvc-owf-ms-prd-refer$
in deploy/helmfile.yaml: command "/usr/local/link/helm" exited with non-zero status:
PATH:
  /usr/local/link/helm
ARGS:
  0: helm (4 bytes)
  1: --kube-context (14 bytes)
  2: rcsdstbl (8 bytes)
  3: list (4 bytes)
  4: --filter (8 bytes)
  5: ^exsvc-owf-ms-prd-refer$ (24 bytes)
  6: --kube-context (14 bytes)
  7: rcsdstbl (8 bytes)
  8: --namespace (11 bytes)
  9: isys-ownfin-stress (18 bytes)
  10: --uninstalling (14 bytes)
  11: --deployed (10 bytes)
  12: --failed (8 bytes)
  13: --pending (9 bytes)
ERROR:
  exit status 1
EXIT STATUS
  1
STDERR:
  Error: Kubernetes cluster unreachable: Get "https://100.79.64.1:6443/version": dial tcp 100.79.64.1:6443: i/o timeout
COMBINED OUTPUT:
  Error: Kubernetes cluster unreachable: Get "https://100.79.64.1:6443/version": dial tcp 100.79.64.1:6443: i/o timeout
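The failure is network-level, not Helm-level: the runner could not reach the rcsdstbl API server at 100.79.64.1:6443 within the dial timeout. A quick reachability probe, runnable from the runner pod (endpoint taken from the log above; the unauthenticated `/version` path responds on most clusters, and `-k` skips TLS verification):

```shell
API=https://100.79.64.1:6443

# --max-time bounds the probe; curl exits non-zero on a timeout,
# mirroring the "dial tcp ... i/o timeout" in the error above.
if curl -ks --max-time 5 "$API/version" >/dev/null; then
  STATE=reachable
else
  STATE=unreachable
fi
echo "API server: $STATE"
```

If this prints `unreachable` from the runner namespace but works elsewhere, the usual suspects are a NetworkPolicy, egress firewall rules, or a stale server address in the combined kubeconfig.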
Uploading artifacts for failed job
00:01
Uploading artifacts...
deploy.env: found 1 matching artifact files and directories 
Uploading artifacts as "dotenv" to coordinator... 201 Created  id=21746589 responseStatus=201 Created token=64_Jid2L
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: command terminated with exit code 1