The Gateway with Inference Extension component can emit metrics and logs for generative AI requests according to the OpenTelemetry Gen AI Semantic Conventions. This topic describes how to configure Gateway with Inference Extension to emit these metrics and logs.
This topic requires Gateway with Inference Extension 1.4.0 or later.
Background information
The OpenTelemetry Gen AI Semantic Conventions are a set of standardized semantic conventions for monitoring and tracing generative AI workloads, such as large language models (LLMs), text generation, and image generation. They unify the metrics, logs, and traces of generative AI requests so that the data can be analyzed and troubleshot across systems. The core goals of the conventions are:
Standardized data collection:
Define common attributes for generative AI requests, such as the model name, input and output token counts, and configuration parameters.
End-to-end tracing:
Correlate generative AI requests with trace data from other systems, such as databases and API gateways.
Unified analysis and monitoring:
Use standardized labels so that tools such as Prometheus and Grafana can aggregate and visualize the data.
Prerequisites
Gateway with Inference Extension 1.4.0 or later is installed, with the Gateway API inference extension option selected. For the operation entry point, see Install the component.
The mock-vllm application is deployed. (An optional verification sketch follows this list.)
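Before you continue, you can optionally verify that the sample resources are in place. The following sketch assumes the mock-vllm Deployment and the mock-route HTTPRoute were created in the default namespace, as in the mock-vllm topic; adjust the names and namespace if yours differ.
# Optional sanity check: confirm the sample backend and route exist
# (the resource names here are assumptions based on the mock-vllm topic).
kubectl get deployment mock-vllm -n default
kubectl get httproute mock-route -n default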
Configure observability data output
Deploy the generative AI observability plugin
The Gateway with Inference Extension component works together with the gen-ai-telemetry observability plugin to emit observability data. The gen-ai-telemetry plugin is distributed as an image and does not follow a fixed release schedule. See the gen-ai-telemetry plugin release notes for the latest image version. Run the following command to deploy the plugin:
kubectl apply -f - <<EOF
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyExtensionPolicy
metadata:
  name: ack-gateway-llm-telemetry
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: mock-route
  wasm:
  - name: llm-telemetry
    rootID: ack-gateway-extension
    code:
      type: Image
      image:
        url: registry-cn-hangzhou.ack.aliyuncs.com/acs/gen-ai-telemetry-wasmplugin:g2ad0869-aliyun
EOF
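After the policy is created, you can optionally confirm that it has been accepted by the gateway controller. This is a sketch; the exact layout of the status section may vary across Envoy Gateway versions.
# Optional: inspect the policy status and confirm it reports an Accepted condition.
kubectl get envoyextensionpolicy ack-gateway-llm-telemetry -o yaml | grep -A 4 'conditions:'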
The gen-ai-telemetry plugin image can be pulled over the internal network. If your cluster cannot pull images over the public network, change the image address to the VPC endpoint of the corresponding region. For example, if your cluster is in the China (Beijing) region, you can use registry-cn-beijing-vpc.ack.aliyuncs.com/acs/gen-ai-telemetry-wasmplugin:{image_tag} to pull the image.
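If you need the VPC endpoint for another region, the address follows the same pattern as the example above. The snippet below is only an illustration; substitute your cluster's region ID and the image tag you intend to use.
# Illustrative only: compose the VPC-internal image address for a given region.
REGION_ID=cn-beijing          # replace with your cluster's region ID
IMAGE_TAG=g2ad0869-aliyun     # replace with the desired plugin image tag
echo "registry-${REGION_ID}-vpc.ack.aliyuncs.com/acs/gen-ai-telemetry-wasmplugin:${IMAGE_TAG}"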
Configure the gateway metrics tag rules
When you deploy the mock-vllm application, an EnvoyProxy resource named custom-proxy-config is created along with it. To have the gateway emit metrics, add the metrics tag rules to this resource.
Edit the EnvoyProxy resource.
kubectl edit envoyproxy custom-proxy-config
Update the spec.bootstrap section of custom-proxy-config with the following content.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default
spec:
  bootstrap:
    type: JSONPatch
    jsonPatches:
    - op: add
      path: /stats_config
      value:
        stats_tags:
        - tag_name: gen_ai.operation.name
          regex: "(\\|gen_ai.operation.name=([^|]*))"
        - tag_name: gen_ai.system
          regex: "(\\|gen_ai.system=([^|]*))"
        - tag_name: gen_ai.token.type
          regex: "(\\|gen_ai.token.type=([^|]*))"
        - tag_name: gen_ai.request.model
          regex: "(\\|gen_ai.request.model=([^|]*))"
        - tag_name: gen_ai.response.model
          regex: "(\\|gen_ai.response.model=([^|]*))"
        - tag_name: gen_ai.error.type
          regex: "(\\|gen_ai.error.type=([^|]*))"
        - tag_name: server.port
          regex: "(\\|server.port=([^|]*))"
        - tag_name: server.address
          regex: "(\\|server.address=([^|]*))"
After you save and exit, the configuration takes effect immediately, and the gateway can emit generative AI metrics.
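You can optionally verify that the JSONPatch is now part of the EnvoyProxy resource. This is a sketch; it assumes custom-proxy-config is in the default namespace, as shown in the YAML above.
# Optional: list the tag names configured in the stats_config JSONPatch.
kubectl get envoyproxy custom-proxy-config -n default -o jsonpath='{.spec.bootstrap.jsonPatches[0].value.stats_tags[*].tag_name}{"\n"}'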
Configure log output
Emitting gateway access logs also requires modifying the EnvoyProxy resource. You can append log fields based on your needs.
Edit the EnvoyProxy resource.
kubectl edit envoyproxy custom-proxy-config
Update the spec.telemetry section of custom-proxy-config with the following content.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default
spec:
  telemetry:
    accessLog:
      disable: false
      settings:
      - sinks:
        - type: File
          file:
            path: /dev/stdout
        format:
          type: JSON
          json:
            # Default access log fields
            start_time: "%START_TIME%"
            method: "%REQ(:METHOD)%"
            x-envoy-origin-path: "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%"
            protocol: "%PROTOCOL%"
            response_code: "%RESPONSE_CODE%"
            response_flags: "%RESPONSE_FLAGS%"
            response_code_details: "%RESPONSE_CODE_DETAILS%"
            connection_termination_details: "%CONNECTION_TERMINATION_DETAILS%"
            upstream_transport_failure_reason: "%UPSTREAM_TRANSPORT_FAILURE_REASON%"
            bytes_received: "%BYTES_RECEIVED%"
            bytes_sent: "%BYTES_SENT%"
            duration: "%DURATION%"
            x-envoy-upstream-service-time: "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%"
            x-forwarded-for: "%REQ(X-FORWARDED-FOR)%"
            user-agent: "%REQ(USER-AGENT)%"
            x-request-id: "%REQ(X-REQUEST-ID)%"
            ":authority": "%REQ(:AUTHORITY)%"
            upstream_host: "%UPSTREAM_HOST%"
            upstream_cluster: "%UPSTREAM_CLUSTER%"
            upstream_local_address: "%UPSTREAM_LOCAL_ADDRESS%"
            downstream_local_address: "%DOWNSTREAM_LOCAL_ADDRESS%"
            downstream_remote_address: "%DOWNSTREAM_REMOTE_ADDRESS%"
            requested_server_name: "%REQUESTED_SERVER_NAME%"
            route_name: "%ROUTE_NAME%"
            # Generative AI request fields added by the gen-ai-telemetry plugin
            gen_ai.operation.name: "%FILTER_STATE(wasm.gen_ai.operation.name:PLAIN)%"
            gen_ai.system: "%FILTER_STATE(wasm.gen_ai.system:PLAIN)%"
            gen_ai.request.model: "%FILTER_STATE(wasm.gen_ai.request.model:PLAIN)%"
            gen_ai.response.model: "%FILTER_STATE(wasm.gen_ai.response.model:PLAIN)%"
            gen_ai.error.type: "%FILTER_STATE(wasm.gen_ai.error.type:PLAIN)%"
            gen_ai.prompt.tokens: "%FILTER_STATE(wasm.gen_ai.prompt.tokens:PLAIN)%"
            gen_ai.completion.tokens: "%FILTER_STATE(wasm.gen_ai.completion.tokens:PLAIN)%"
            gen_ai.server.time_per_output_token: "%FILTER_STATE(wasm.gen_ai.server.time_per_output_token:PLAIN)%"
            gen_ai.server.time_to_first_token: "%FILTER_STATE(wasm.gen_ai.server.time_to_first_token:PLAIN)%"
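As with the metrics configuration, you can optionally confirm that the access log settings were applied. A sketch, assuming the resource is in the default namespace:
# Optional: print the access log format type configured on the EnvoyProxy resource (expected: JSON).
kubectl get envoyproxy custom-proxy-config -n default -o jsonpath='{.spec.telemetry.accessLog.settings[0].format.type}{"\n"}'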
Send test requests
Perform the steps in Send a test request multiple times to generate observability data for the gateway.
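If you only need a quick way to generate traffic, the following sketch infers a request from the route and the access log shown later in this topic. The gateway name (mock-gateway), the Host header (example.com), and the request body are assumptions based on the mock-vllm topic; follow the steps in Send a test request for the authoritative commands.
# Sketch: send a few chat completion requests through the gateway.
# The gateway name and address lookup below are assumptions; adjust them to your environment.
export GATEWAY_IP=$(kubectl get gateway mock-gateway -o jsonpath='{.status.addresses[0].value}')
for i in $(seq 1 5); do
  curl -s "http://${GATEWAY_IP}/v1/chat/completions" \
    -H "Host: example.com" \
    -H "Content-Type: application/json" \
    -d '{"model": "mock", "messages": [{"role": "user", "content": "Hello"}]}'
  echo
done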
View observability data
Obtain the name of the gateway workload.
export GATEWAY_DEPLOYMENT=$(kubectl -n envoy-gateway-system get deployment -l gateway.envoyproxy.io/owning-gateway-name=mock-gateway -o jsonpath='{.items[0].metadata.name}')
echo $GATEWAY_DEPLOYMENT
Forward the gateway admin port to your local machine.
kubectl -n envoy-gateway-system port-forward deployments/$GATEWAY_DEPLOYMENT 19000:19000
Open another terminal window and retrieve the gateway metrics.
curl -s localhost:19000/stats/prometheus | grep gen_ai
Expected output:
# TYPE gen_ai_client_operation_duration histogram
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="0.5"} 0
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="1"} 0
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="5"} 9
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="10"} 9
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="25"} 14
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="50"} 16
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="100"} 16
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="250"} 16
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="500"} 16
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="1000"} 16
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="2500"} 16
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="5000"} 16
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="10000"} 16
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="30000"} 16
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="60000"} 16
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="300000"} 16
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="600000"} 16
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="1800000"} 16
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="3600000"} 16
gen_ai_client_operation_duration_bucket{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000",le="+Inf"} 16
gen_ai_client_operation_duration_sum{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000"} 140.9499999999999886313162278384
gen_ai_client_operation_duration_count{gen_ai_operation_name="chat",gen_ai_system="example.com",gen_ai_request_model="mock",gen_ai_response_model="mock",gen_ai_error_type="",server_port="8000",server_address="10.3.0.9:8000"} 16
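As a simple example of working with these histogram series, the sketch below derives the average operation duration from the _sum and _count samples returned by the admin endpoint. It assumes the port-forward from the previous step is still running and that the output follows the Prometheus text format shown above.
# Sketch: average gen AI operation duration = histogram sum / histogram count.
curl -s localhost:19000/stats/prometheus | awk '
  /^gen_ai_client_operation_duration_sum/   {sum = $NF}
  /^gen_ai_client_operation_duration_count/ {count = $NF}
  END {if (count > 0) printf "average operation duration: %.2f\n", sum / count}'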
View the access log.
kubectl -n envoy-gateway-system logs deployments/$GATEWAY_DEPLOYMENT | tail -1 | jq
Expected output:
Defaulted container "envoy" out of: envoy, shutdown-manager
{
  ":authority": "example.com",
  "bytes_received": 184,
  "bytes_sent": 355,
  "connection_termination_details": null,
  "downstream_local_address": "10.3.0.38:10080",
  "downstream_remote_address": "10.3.15.252:45492",
  "duration": 2,
  "gen_ai.completion.tokens": "76",
  "gen_ai.error.type": "",
  "gen_ai.operation.name": "chat",
  "gen_ai.prompt.tokens": "18",
  "gen_ai.request.model": "mock",
  "gen_ai.response.model": "mock",
  "gen_ai.server.time_per_output_token": "0",
  "gen_ai.server.time_to_first_token": "2",
  "gen_ai.system": "example.com",
  "method": "POST",
  "protocol": "HTTP/1.1",
  "requested_server_name": null,
  "response_code": 200,
  "response_code_details": "via_upstream",
  "response_flags": "-",
  "route_name": "httproute/default/mock-route/rule/0/match/0/*",
  "start_time": "2025-05-28T06:13:31.190Z",
  "upstream_cluster": "httproute/default/mock-route/rule/0/backend/0",
  "upstream_host": "10.3.0.9:8000",
  "upstream_local_address": "10.3.0.38:33370",
  "upstream_transport_failure_reason": null,
  "user-agent": "curl/8.8.0",
  "x-envoy-origin-path": "/v1/chat/completions",
  "x-envoy-upstream-service-time": null,
  "x-forwarded-for": "10.3.15.252",
  "x-request-id": "0e67d734-aca7-4c80-bda3-79641cd63e2c"
}
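To pull out just the generative AI fields from the access log, you can filter and project the JSON entries. A sketch, assuming jq is installed locally and the JSON log format configured earlier in this topic:
# Sketch: print the request model, prompt tokens, and completion tokens for each gen AI request.
kubectl -n envoy-gateway-system logs deployments/$GATEWAY_DEPLOYMENT \
  | grep '"gen_ai.operation.name"' \
  | jq -r '[."gen_ai.request.model", ."gen_ai.prompt.tokens", ."gen_ai.completion.tokens"] | @tsv'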
For descriptions of the metrics and log fields, see OpenTelemetry Gen AI Semantic Conventions.
gen-ai-telemetry plugin release notes
Image tag | Release date | Description |
g2ad0869-aliyun | May 2025 | Supports monitoring metrics and enhanced logging for generative AI requests. |