Configure buckets for Histogram metrics

Service Mesh (ASM) can collect metrics of different types, such as Histogram and Counter, into Prometheus. Histogram is an important Prometheus data type for collecting and analyzing the distribution of values, and is especially useful for measuring request durations, response sizes, and similar metrics. A Histogram records a set of observed values together with their distribution: in addition to a total count and a total sum, it lets you define multiple buckets that count how many observations fall into each value range. This topic describes how to configure the buckets of Histogram metrics in ASM.
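
For reference, a Histogram with buckets [1,5,10] is exposed in the Prometheus text format as one cumulative _bucket series per upper bound (le), plus _sum and _count series. The sample below is illustrative only; the metric name is made up:

    # Illustrative Prometheus exposition of a Histogram with buckets [1,5,10].
    # Each _bucket series is cumulative: le="5" counts all observations <= 5.
    example_duration_ms_bucket{le="1"} 3
    example_duration_ms_bucket{le="5"} 7
    example_duration_ms_bucket{le="10"} 9
    example_duration_ms_bucket{le="+Inf"} 10
    example_duration_ms_sum 42.5
    example_duration_ms_count 10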

Prerequisites

A cluster has been created and added to an ASM instance, and the ASM instance version is 1.19 or later. For details, see Add a cluster to an ASM instance.

Configure metric buckets by using an annotation

ASM supports configuring metric buckets at the workload level. You can configure the buckets of specific Histogram metrics by adding the sidecar.istio.io/statsHistogramBuckets annotation to the Pods of a deployed application.
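
The annotation can also be set declaratively in the workload's Pod template, so that it is present when the sidecar is injected. The following is a minimal sketch; the Deployment name, labels, and bucket values are illustrative:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app                # illustrative name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
          annotations:
            # The value is a JSON string that maps a metric-name prefix
            # to its list of bucket upper bounds.
            sidecar.istio.io/statsHistogramBuckets: '{"cluster.xds-grpc":[1,5,10]}'
        spec:
          containers:
          - name: my-app
            image: registry-cn-hangzhou.ack.aliyuncs.com/ack-demo/httpbin:0.1.0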

You can configure the following Histogram metrics through this annotation.

Istio metrics:

  • istiocustom.istio_request_duration_milliseconds
  • istiocustom.istio_request_bytes
  • istiocustom.istio_response_bytes

Envoy metrics:

  • cluster_manager
  • listener_manager
  • server
  • cluster.xds-grpc

For details about these metrics, see Istio Standard Metrics and Envoy Statistics.

The following example configures the Istio Histogram metrics and Envoy's cluster.xds-grpc Histogram metric for an application Pod, setting both buckets to [1,5,10]:

kubectl patch pod <POD_NAME> -p '{"metadata":{"annotations":{"sidecar.istio.io/statsHistogramBuckets":"{\"istiocustom\":[1,5,10],\"cluster.xds-grpc\":[1,5,10]}"}}}'
Important

Istio matches metric names by prefix. For example, configuring istiocustom applies the buckets to all Istio Histogram metrics, because their names share that prefix.
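
As an illustration of prefix matching, the following annotation value (bucket values are illustrative) would apply to both istiocustom.istio_request_duration_milliseconds and istiocustom.istio_request_bytes, because both names begin with the prefix istiocustom.istio_request, but not to istiocustom.istio_response_bytes:

    {"istiocustom.istio_request":[10,100,1000]}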

Example

The following steps show how to modify the buckets of Envoy's xds-grpc metric by adding the annotation.

Deploy the sample application

  1. Create the httpbin application with the following content. For details, see Deploy the httpbin application.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: httpbin
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: httpbin
          version: v1
      template:
        metadata:
          labels:
            app: httpbin
            version: v1
        spec:
          containers:
          - image: registry-cn-hangzhou.ack.aliyuncs.com/ack-demo/httpbin:0.1.0
            imagePullPolicy: IfNotPresent
            name: httpbin
            ports:
            - containerPort: 80
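    This walkthrough assumes that automatic sidecar injection is enabled for the namespace, so that the istio-proxy sidecar is injected into the Pod. In Istio-based meshes this is typically done by labeling the namespace; the namespace name below is illustrative:

    kubectl label namespace default istio-injection=enabled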
  2. Run the following command to check the status of the httpbin application. READY 2/2 indicates that the istio-proxy sidecar is running alongside the httpbin container.

    kubectl get pod

    Expected output:

    NAME                      READY   STATUS    RESTARTS   AGE
    httpbin-fd686xxxx         2/2     Running   0          2m16s

View and modify the current metric buckets

  1. Run the following command to view the current metric buckets of the httpbin application. The command reads the Prometheus-format stats exposed by the Envoy admin endpoint (localhost:15000) inside the istio-proxy container.

    kubectl exec -it httpbin-fd686xxxx -c istio-proxy -- curl localhost:15000/stats/prometheus | grep envoy_cluster_upstream_cx_connect_ms_bucket

    Expected output:

    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="0.5"} 10
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="1"} 10
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="5"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="10"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="25"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="50"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="100"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="250"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="500"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="1000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="2500"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="5000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="10000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="30000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="60000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="300000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="600000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="1800000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="3600000"} 11
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="+Inf"} 11

    You can see that the xds-grpc metric currently uses the default buckets [0.5,1,5,10,25,50,100,250,500,1000,2500,5000,10000,30000,60000,300000,600000,1800000,3600000].
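
    Bucket configuration affects only the _bucket series; the same histogram also exposes _sum and _count series, which you can inspect with a similar command (a sketch that reuses the Pod name from above):

    kubectl exec -it httpbin-fd686xxxx -c istio-proxy -- curl -s localhost:15000/stats/prometheus | grep -E 'envoy_cluster_upstream_cx_connect_ms_(sum|count)'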

  2. Run the following command to modify the xds-grpc metric buckets of the httpbin application. Because the annotation is added to the Deployment's Pod template, the patch triggers a rolling restart, and the new Pod picks up the configuration.

    kubectl patch deployment httpbin -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.istio.io/statsHistogramBuckets":"{\"cluster.xds-grpc\":[1,5,10,25,50,100,250,500,1000,2500,5000,10000]}"}}}}}'
  3. Run the following command to view the Pod status. Note that a new Pod has been created.

    kubectl get pod

    Expected output:

    NAME                       READY   STATUS    RESTARTS   AGE
    httpbin-85b555xxxx-xxxxx   2/2     Running   0          2m2s
  4. Run the following command to view the current metric buckets of the httpbin application.

    kubectl exec -it httpbin-85b555xxxx-xxxxx -c istio-proxy -- curl localhost:15000/stats/prometheus | grep envoy_cluster_upstream_cx_connect_ms_bucket

    Expected output:

    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="1"} 0
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="5"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="10"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="25"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="50"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="100"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="250"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="500"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="1000"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="2500"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="5000"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="10000"} 1
    envoy_cluster_upstream_cx_connect_ms_bucket{cluster_name="xds-grpc",le="+Inf"} 1

    You can see that the buckets of the xds-grpc metric have been changed to [1,5,10,25,50,100,250,500,1000,2500,5000,10000].
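
To revert to the default buckets, remove the annotation from the Pod template, which again triggers a rolling restart. A minimal sketch using a JSON patch (in the JSON Pointer path, ~1 escapes the / in the annotation key):

    kubectl patch deployment httpbin --type=json -p '[{"op":"remove","path":"/spec/template/metadata/annotations/sidecar.istio.io~1statsHistogramBuckets"}]'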