Implement Traffic Mirroring for Inference Services Using ACK Gateway with Inference Extension

Last updated: 2025-04-22 07:22:01

In addition to intelligent load balancing for inference services, the ACK Gateway with Inference Extension component also supports traffic mirroring of inference requests. When you deploy a new inference model in a production environment, you can mirror a copy of production traffic to it and evaluate how the new model performs, making sure its performance and stability meet your requirements before you formally bring it online. This topic describes how to use ACK Gateway with Inference Extension to mirror inference request traffic.

Important

Before reading this topic, make sure you are familiar with the concepts of InferencePool and InferenceModel.

Prerequisites

Note

The image used in this topic requires more than 16 GiB of GPU memory; the actual usable memory of the T4 GPU type (16 GiB) is not enough to start this application. Therefore, the A10 GPU type is recommended for ACK clusters, and the 8GPU B type is recommended for ACS GPU compute.

In addition, because LLM images are large, we recommend that you transfer the image to Container Registry (ACR) in advance and pull it over the internal network. Pulling directly from the public internet depends on the bandwidth of the cluster's EIP and can involve a long wait.
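
A minimal sketch of such a transfer is shown below; the ACR endpoints and <your-namespace> are placeholders that you should replace with your own ACR instance, region, and namespace:

    # Pull the demo image used in this topic from the public registry.
    docker pull registry-cn-hangzhou.ack.aliyuncs.com/ack-demo/llama2-with-lora:v0.2
    # Retag and push it to your own ACR repository (placeholder address).
    docker tag registry-cn-hangzhou.ack.aliyuncs.com/ack-demo/llama2-with-lora:v0.2 \
      registry.cn-hangzhou.aliyuncs.com/<your-namespace>/llama2-with-lora:v0.2
    docker push registry.cn-hangzhou.aliyuncs.com/<your-namespace>/llama2-with-lora:v0.2
    # In the cluster, reference the image through the VPC (internal) endpoint, e.g.
    # registry-vpc.cn-hangzhou.aliyuncs.com/<your-namespace>/llama2-with-lora:v0.2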

Workflow

The example in this topic deploys the following resources:

  • Two inference services, vllm-llama2-7b-pool and vllm-llama2-7b-pool-1 (APP and APP1 in the following diagram).

  • A gateway whose Service type is ClusterIP.

  • An HTTPRoute resource that defines the traffic forwarding and mirroring rules.

  • An InferencePool and the corresponding InferenceModel resource, which enable intelligent load balancing for APP.

  • A plain Service that fronts APP1. (Intelligent load balancing is currently not supported for mirrored traffic, so a plain Service is required.)

  • A sleep application that serves as the test client.

The following diagram illustrates the traffic mirroring workflow.

(Figure: traffic mirroring workflow)

  • The client accesses the gateway, and the HTTPRoute identifies production traffic based on its path prefix matching rule.

  • When the rule matches:

    • Production traffic is forwarded to the corresponding InferencePool as usual and, after intelligent load balancing, reaches the backend APP.

    • The rule's RequestMirror filter sends a copy of the request to the specified Service, which then forwards the mirrored traffic to the backend APP1.

  • Both backend APP and APP1 return their responses normally, but the gateway processes only the response returned from the InferencePool and ignores the response from the mirror service; the client sees only the result of the primary service.
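
The mirroring itself is declared on the HTTPRoute rule. The relevant excerpt from the route deployed in step 3 of the procedure below looks like this:

    # Excerpt from the HTTPRoute in step 3: matched production traffic goes to
    # the InferencePool, while the RequestMirror filter copies each request to
    # the plain Service fronting APP1.
    backendRefs:
    - group: inference.networking.x-k8s.io
      kind: InferencePool
      name: vllm-llama2-7b-pool
      weight: 1
    filters:
    - type: RequestMirror
      requestMirror:
        backendRef:
          kind: Service
          name: vllm-llama2-7b-pool-1
          port: 8000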

Procedure

  1. Deploy the sample inference services vllm-llama2-7b-pool and vllm-llama2-7b-pool-1.

    This step provides only the YAML for vllm-llama2-7b-pool. The configuration of vllm-llama2-7b-pool-1 differs from vllm-llama2-7b-pool only in the resource name; modify the corresponding fields in the YAML before deploying it (one way to do this is sketched after the YAML).


    # =============================================================
    # inference_app.yaml
    # Chat-template ConfigMap plus a vLLM Deployment serving Llama 2
    # with multiple LoRA adapters.
    # =============================================================
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: chat-template
    data:
      llama-2-chat.jinja: |
        {% if messages[0]['role'] == 'system' %}
          {% set system_message = '<<SYS>>\n' + messages[0]['content'] | trim + '\n<</SYS>>\n\n' %}
          {% set messages = messages[1:] %}
        {% else %}
            {% set system_message = '' %}
        {% endif %}
    
        {% for message in messages %}
            {% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}
                {{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}
            {% endif %}
    
            {% if loop.index0 == 0 %}
                {% set content = system_message + message['content'] %}
            {% else %}
                {% set content = message['content'] %}
            {% endif %}
            {% if message['role'] == 'user' %}
                {{ bos_token + '[INST] ' + content | trim + ' [/INST]' }}
            {% elif message['role'] == 'assistant' %}
                {{ ' ' + content | trim + ' ' + eos_token }}
            {% endif %}
        {% endfor %}
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: vllm-llama2-7b-pool
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: vllm-llama2-7b-pool
      template:
        metadata:
          annotations:
            prometheus.io/path: /metrics
            prometheus.io/port: '8000'
            prometheus.io/scrape: 'true'
          labels:
            app: vllm-llama2-7b-pool
        spec:
          containers:
            - name: lora
              image: "registry-cn-hangzhou.ack.aliyuncs.com/ack-demo/llama2-with-lora:v0.2"
              imagePullPolicy: IfNotPresent
              command: ["python3", "-m", "vllm.entrypoints.openai.api_server"]
              args:
              - "--model"
              - "/model/llama2"
              - "--tensor-parallel-size"
              - "1"
              - "--port"
              - "8000"
              - '--gpu_memory_utilization'
              - '0.8'
              - "--enable-lora"
              - "--max-loras"
              - "4"
              - "--max-cpu-loras"
              - "12"
              - "--lora-modules"
              - 'sql-lora=/adapters/yard1/llama-2-7b-sql-lora-test_0'
              - 'sql-lora-1=/adapters/yard1/llama-2-7b-sql-lora-test_1'
              - 'sql-lora-2=/adapters/yard1/llama-2-7b-sql-lora-test_2'
              - 'sql-lora-3=/adapters/yard1/llama-2-7b-sql-lora-test_3'
              - 'sql-lora-4=/adapters/yard1/llama-2-7b-sql-lora-test_4'
              - 'tweet-summary=/adapters/vineetsharma/qlora-adapter-Llama-2-7b-hf-TweetSumm_0'
              - 'tweet-summary-1=/adapters/vineetsharma/qlora-adapter-Llama-2-7b-hf-TweetSumm_1'
              - 'tweet-summary-2=/adapters/vineetsharma/qlora-adapter-Llama-2-7b-hf-TweetSumm_2'
              - 'tweet-summary-3=/adapters/vineetsharma/qlora-adapter-Llama-2-7b-hf-TweetSumm_3'
              - 'tweet-summary-4=/adapters/vineetsharma/qlora-adapter-Llama-2-7b-hf-TweetSumm_4'
              - '--chat-template'
              - '/etc/vllm/llama-2-chat.jinja'
              env:
                - name: PORT
                  value: "8000"
              ports:
                - containerPort: 8000
                  name: http
                  protocol: TCP
              livenessProbe:
                failureThreshold: 2400
                httpGet:
                  path: /health
                  port: http
                  scheme: HTTP
                initialDelaySeconds: 5
                periodSeconds: 5
                successThreshold: 1
                timeoutSeconds: 1
              readinessProbe:
                failureThreshold: 6000
                httpGet:
                  path: /health
                  port: http
                  scheme: HTTP
                initialDelaySeconds: 5
                periodSeconds: 5
                successThreshold: 1
                timeoutSeconds: 1
              resources:
                limits:
                  nvidia.com/gpu: 1
                requests:
                  nvidia.com/gpu: 1
              volumeMounts:
                - mountPath: /data
                  name: data
                - mountPath: /dev/shm
                  name: shm
                - mountPath: /etc/vllm
                  name: chat-template
          restartPolicy: Always
          schedulerName: default-scheduler
          terminationGracePeriodSeconds: 30
          volumes:
            - name: data
              emptyDir: {}
            - name: shm
              emptyDir:
                medium: Memory
            - name: chat-template
              configMap:
                name: chat-template
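
    Because the two manifests differ only in the name, one way to derive and deploy vllm-llama2-7b-pool-1 is a simple substitution (a sketch, assuming you saved the manifest above as inference_app.yaml):

    # Deploy the first application.
    kubectl apply -f inference_app.yaml
    # Derive the second application by renaming every occurrence of the pool name.
    # Re-applying the unchanged chat-template ConfigMap is harmless (idempotent).
    sed 's/vllm-llama2-7b-pool/vllm-llama2-7b-pool-1/g' inference_app.yaml | kubectl apply -f -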
  2. Deploy the InferencePool and InferenceModel resources, along with the Service for the vllm-llama2-7b-pool-1 application.

    # =============================================================
    # inference_rules.yaml
    # InferencePool/InferenceModel for intelligent load balancing of
    # APP, plus a plain ClusterIP Service as the mirror target (APP1).
    # =============================================================
    apiVersion: inference.networking.x-k8s.io/v1alpha2
    kind: InferencePool
    metadata:
      name: vllm-llama2-7b-pool
    spec:
      targetPortNumber: 8000
      selector:
        app: vllm-llama2-7b-pool
      extensionRef:
        name: inference-gateway-ext-proc
    ---
    apiVersion: inference.networking.x-k8s.io/v1alpha2
    kind: InferenceModel
    metadata:
      name: inferencemodel-sample
    spec:
      modelName: /model/llama2
      criticality: Critical
      poolRef:
        group: inference.networking.x-k8s.io
        kind: InferencePool
        name: vllm-llama2-7b-pool
      targetModels:
      - name: /model/llama2
        weight: 100
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: vllm-llama2-7b-pool-1
    spec:
      selector:
        app: vllm-llama2-7b-pool-1
      ports:
      - protocol: TCP
        port: 8000
        targetPort: 8000
      type: ClusterIP
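
    After applying the manifest, you can confirm that the resources were created (a quick check, assuming the CRDs were installed with the component; output columns may vary by version):

    kubectl get inferencepool vllm-llama2-7b-pool
    kubectl get inferencemodel inferencemodel-sample
    kubectl get service vllm-llama2-7b-pool-1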
  3. Deploy the Gateway and HTTPRoute.

    The gateway's Service type is ClusterIP, which is reachable only from within the cluster. You can change it to LoadBalancer based on your needs.
    # =============================================================
    # gateway.yaml
    # GatewayClass + ClusterIP Gateway (Envoy) + HTTPRoute whose
    # RequestMirror filter copies traffic to vllm-llama2-7b-pool-1.
    # =============================================================
    kind: GatewayClass
    apiVersion: gateway.networking.k8s.io/v1
    metadata:
      name: example-gateway-class
      labels:
        example: http-routing
    spec:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      labels:
        example: http-routing
      name: example-gateway
      namespace: default
    spec:
      gatewayClassName: example-gateway-class
      infrastructure:
        parametersRef:
          group: gateway.envoyproxy.io
          kind: EnvoyProxy
          name: custom-proxy-config
      listeners:
      - allowedRoutes:
          namespaces:
            from: Same
        name: http
        port: 80
        protocol: HTTP
    ---
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyProxy
    metadata:
      name: custom-proxy-config
      namespace: default
    spec:
      provider:
        type: Kubernetes
        kubernetes:
          envoyService:
            type: ClusterIP
    ---
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: mirror-route
      labels:
        example: http-routing
    spec:
      parentRefs:
        - name: example-gateway
      hostnames:
        - "example.com"
      rules:
        - matches:
            - path:
                type: PathPrefix
                value: /
          backendRefs:
          - group: inference.networking.x-k8s.io
            kind: InferencePool
            name: vllm-llama2-7b-pool
            weight: 1
          filters:
          - type: RequestMirror
            requestMirror:
              backendRef:
                kind: Service
                name: vllm-llama2-7b-pool-1
                port: 8000
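
    After the resources are applied, a quick sanity check is to confirm that the gateway has been programmed and the route is attached:

    kubectl get gateway example-gateway
    kubectl get httproute mirror-route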
  4. Deploy the sleep application.

    # =============================================================
    # sleep.yaml
    # Minimal curl-capable client Pod used to send test requests.
    # =============================================================
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: sleep
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: sleep
      labels:
        app: sleep
        service: sleep
    spec:
      ports:
      - port: 80
        name: http
      selector:
        app: sleep
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sleep
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: sleep
      template:
        metadata:
          labels:
            app: sleep
        spec:
          terminationGracePeriodSeconds: 0
          serviceAccountName: sleep
          containers:
          - name: sleep
            image:  registry-cn-hangzhou.ack.aliyuncs.com/ack-demo/curl:asm-sleep
            command: ["/bin/sleep", "infinity"]
            imagePullPolicy: IfNotPresent
            volumeMounts:
            - mountPath: /etc/sleep/tls
              name: secret-volume
          volumes:
          - name: secret-volume
            secret:
              secretName: sleep-secret
              optional: true
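
    Optionally, wait for the sleep Deployment to become ready before sending test requests:

    kubectl wait --for=condition=Available deployment/sleep --timeout=120s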
  5. Verify traffic mirroring.

    1. Get the gateway address.

      export GATEWAY_ADDRESS=$(kubectl get gateway/example-gateway -o jsonpath='{.status.addresses[0].value}')
    2. Send a test request.

      kubectl exec deployment/sleep -it -- curl -X POST ${GATEWAY_ADDRESS}/v1/chat/completions -H 'Content-Type: application/json' -H "host: example.com" -d '{
          "model": "/model/llama2",
          "max_completion_tokens": 100,
          "temperature": 0,
          "messages": [
            {
              "role": "user",
              "content": "introduce yourself"
            }
          ]
      }'

      Expected output:

      {"id":"chatcmpl-eb67bf29-1f87-4e29-8c3e-a83f3c74cd87","object":"chat.completion","created":1745207283,"model":"/model/llama2","choices":[{"index":0,"message":{"role":"assistant","content":"\n         [INST] I'm a [/INST]\n\n         [INST] I'm a [/INST]\n\n         [INST] I'm a [/INST]\n\n         [INST] I'm a [/INST]\n\n         [INST] I'm a [/INST]\n\n         [INST] I'm a [/INST]\n\n         [INST] I'm a [/INST]\n\n        ","tool_calls":[]},"logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":15,"total_tokens":115,"completion_tokens":100,"prompt_tokens_details":null},"prompt_logprobs":null}%
    3. View the application logs.

      echo "original logs↓↓↓" && kubectl logs deployments/vllm-llama2-7b-pool | grep /v1/chat/completions | grep OK
      echo "mirror logs↓↓↓" && kubectl logs deployments/vllm-llama2-7b-pool-1 | grep /v1/chat/completions | grep OK

      Expected output:

      original logs↓↓↓
      INFO:     10.2.14.146:39478 - "POST /v1/chat/completions HTTP/1.1" 200 OK
      INFO:     10.2.14.146:60660 - "POST /v1/chat/completions HTTP/1.1" 200 OK
      mirror logs↓↓↓
      INFO:     10.2.14.146:39742 - "POST /v1/chat/completions HTTP/1.1" 200 OK
      INFO:     10.2.14.146:59976 - "POST /v1/chat/completions HTTP/1.1" 200 OK

      As you can see, both vllm-llama2-7b-pool and vllm-llama2-7b-pool-1 received the request, which confirms that traffic mirroring is in effect.
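
(Optional) When you are finished, you can clean up the example resources as follows (a sketch; adjust the names if you changed them when deploying):

    kubectl delete httproute mirror-route
    kubectl delete gateway example-gateway
    kubectl delete envoyproxy custom-proxy-config
    kubectl delete gatewayclass example-gateway-class
    kubectl delete inferencemodel inferencemodel-sample
    kubectl delete inferencepool vllm-llama2-7b-pool
    kubectl delete deployment vllm-llama2-7b-pool vllm-llama2-7b-pool-1 sleep
    kubectl delete service vllm-llama2-7b-pool-1 sleep
    kubectl delete configmap chat-template
    kubectl delete serviceaccount sleep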
