
Configure a connection pool to implement circuit breaking

Circuit breaking is a traffic management strategy that protects a system from further damage when it fails or becomes overloaded. In traditional Java services, frameworks such as Resilience4j are used to implement circuit breaking. Istio, by contrast, provides circuit breaking at the network level, so no integration is required in each service's application code. By configuring a connection pool, you can implement circuit breaking, improve the stability and reliability of your system, and protect destination services from abnormal requests.

Prerequisites

A cluster has been added to the ASM instance.

Connection pool parameters

Before you enable circuit breaking, you must create a destination rule to configure circuit breaking for the destination service. For a description of the fields of a destination rule, see the Destination Rule CRD documentation.

The connectionPool field of a destination rule defines the parameters related to circuit breaking. The following table describes the common connection pool parameters.

| Parameter | Type | Required | Description | Default |
| --- | --- | --- | --- | --- |
| tcp.maxConnections | int32 | No | Maximum number of HTTP/1 or TCP connections to the destination host. | 2³²-1 |
| http.http1MaxPendingRequests | int32 | No | Maximum number of requests that can be queued while waiting for a ready connection from the connection pool. | 1024 |
| http.http2MaxRequests | int32 | No | Maximum number of active requests to the backend service. | 1024 |
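For reference, the following minimal sketch combines all three parameters in a single destination rule. The service name my-service and the limit values are placeholders for illustration, not part of the examples that follow.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-service
spec:
  host: my-service                    # placeholder destination service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 10            # max HTTP/1 or TCP connections to the host
      http:
        http1MaxPendingRequests: 5    # max requests queued while waiting for a connection
        http2MaxRequests: 100         # max active requests to the backend service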

In a simple scenario, such as one client and one destination service instance (in a Kubernetes environment, an instance corresponds to a pod), configuring these parameters is straightforward. In a real production environment, however, the following scenarios may occur:

  • One client instance and multiple destination service instances.

  • Multiple client instances and a single destination service instance.

  • Multiple client instances and multiple destination service instances.

In each scenario, you need to adjust these parameter values based on actual requirements, so that the connection pool can handle high load and complex environments while delivering good performance and reliability. The sections below configure the connection pool for each of these scenarios, to help you understand how the configuration constrains the client and the destination service so that you can define a circuit breaking policy suitable for your own production environment.

Example overview

This topic uses two Python scripts: one represents the destination service (the server), and the other represents the client that calls it.

  • The server script uses the Flask framework to create an application and defines an API endpoint at the /hello route. When the endpoint is accessed, the server sleeps for 5 seconds and then returns the string 'hello world!'.

    Server script:

    #! /usr/bin/env python3
    from flask import Flask
    import time
    
    app = Flask(__name__)
    
    # Sleep for 5 seconds before responding so that each request holds its
    # connection long enough to exercise the connection pool limits.
    @app.route('/hello')
    def get():
        time.sleep(5)
        return 'hello world!'
    
    if __name__ == '__main__':
        app.run(debug=True, host='0.0.0.0', port=9080, threaded=True)
  • The client script calls the server endpoint in batches of 10, that is, 10 parallel requests per batch. The script sleeps for a while before sending the next batch of 10 requests, and repeats this in an infinite loop. To ensure that, when multiple client pods are running, they all send their batches at the same time, the script uses the system clock (the 0th, 20th, and 40th second of every minute) to trigger each batch.

    Client script:

    #! /usr/bin/env python3
    import requests
    import time
    import sys
    from datetime import datetime
    import _thread
    
    def timedisplay(t):
      return t.strftime("%H:%M:%S")
    
    # Issue one GET request and log its status code and elapsed time.
    def get(url):
      try:
        stime = datetime.now()
        start = time.time()
        response = requests.get(url)
        etime = datetime.now()
        end = time.time()
        elapsed = end - start
        sys.stderr.write("Status: " + str(response.status_code) + ", Start: " + timedisplay(stime) + ", End: " + timedisplay(etime) + ", Elapsed Time: " + str(elapsed) + "\n")
        sys.stdout.flush()
      except Exception as myexception:
        sys.stderr.write("Exception: " + str(myexception) + "\n")
        sys.stdout.flush()
    
    # Give the sidecar and the server time to become ready.
    time.sleep(30)
    
    while True:
      sc = int(datetime.now().strftime('%S'))
      # Fire a batch only at the 0th, 20th, and 40th second of each minute so
      # that multiple client pods send their batches at the same time.
      time_range = [0, 20, 40]
    
      if sc not in time_range:
        time.sleep(1)
        continue
    
      sys.stderr.write("\n----------Info----------\n")
      sys.stdout.flush()
    
      # Send 10 requests in parallel.
      for i in range(10):
        _thread.start_new_thread(get, ("http://circuit-breaker-sample-server:9080/hello", ))
    
      # Sleep past the trigger second to avoid re-sending within the same batch window.
      time.sleep(2)

Deploy the sample application

  1. Create a YAML file with the following content, and then run the kubectl apply -f ${YAML file name}.yaml command to deploy the sample application.

    Sample YAML:

    ##################################################################################################
    #  circuit-breaker-sample-server services
    ##################################################################################################
    apiVersion: v1
    kind: Service
    metadata:
      name: circuit-breaker-sample-server
      labels:
        app: circuit-breaker-sample-server
        service: circuit-breaker-sample-server
    spec:
      ports:
      - port: 9080
        name: http
      selector:
        app: circuit-breaker-sample-server
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: circuit-breaker-sample-server
      labels:
        app: circuit-breaker-sample-server
        version: v1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: circuit-breaker-sample-server
          version: v1
      template:
        metadata:
          labels:
            app: circuit-breaker-sample-server
            version: v1
        spec:
          containers:
          - name: circuit-breaker-sample-server
            image: registry.cn-hangzhou.aliyuncs.com/acs/istio-samples:circuit-breaker-sample-server.v1
            imagePullPolicy: Always
            ports:
            - containerPort: 9080
    ---
    ##################################################################################################
    #  circuit-breaker-sample-client services
    ##################################################################################################
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: circuit-breaker-sample-client
      labels:
        app: circuit-breaker-sample-client
        version: v1
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: circuit-breaker-sample-client
          version: v1
      template:
        metadata:
          labels:
            app: circuit-breaker-sample-client
            version: v1
        spec:
          containers:
          - name: circuit-breaker-sample-client
            image: registry.cn-hangzhou.aliyuncs.com/acs/istio-samples:circuit-breaker-sample-client.v1
            imagePullPolicy: Always
            
  2. Run the following command to view the client and server pods.

    kubectl get po |grep circuit  

    Expected output:

    circuit-breaker-sample-client-d4f64d66d-fwrh4   2/2     Running   0             1m22s
    circuit-breaker-sample-server-6d6ddb4b-gcthv    2/2     Running   0             1m22s

In the absence of explicit destination rule limits, the server can handle the 10 concurrent client requests, so the server always responds with 200. A sample of the client logs:

----------Info----------
Status: 200, Start: 02:39:20, End: 02:39:25, Elapsed Time: 5.016539812088013
Status: 200, Start: 02:39:20, End: 02:39:25, Elapsed Time: 5.012614488601685
Status: 200, Start: 02:39:20, End: 02:39:25, Elapsed Time: 5.015984535217285
Status: 200, Start: 02:39:20, End: 02:39:25, Elapsed Time: 5.015599012374878
Status: 200, Start: 02:39:20, End: 02:39:25, Elapsed Time: 5.012874364852905
Status: 200, Start: 02:39:20, End: 02:39:25, Elapsed Time: 5.018714904785156
Status: 200, Start: 02:39:20, End: 02:39:25, Elapsed Time: 5.010422468185425
Status: 200, Start: 02:39:20, End: 02:39:25, Elapsed Time: 5.012431621551514
Status: 200, Start: 02:39:20, End: 02:39:25, Elapsed Time: 5.011001348495483
Status: 200, Start: 02:39:20, End: 02:39:25, Elapsed Time: 5.01432466506958

Configure the connection pool

To enable a circuit breaking rule with service mesh technology, you only need to define a destination rule (DestinationRule) for the destination service.

Create a destination rule with the following content. For details, see Manage Destination Rules. This destination rule limits the maximum number of TCP connections to the destination service to 5.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: circuit-breaker-sample-server
spec:
  host: circuit-breaker-sample-server
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 5

Scenario 1: one client pod and one destination service pod

  1. Start the client pod and monitor its logs.

    Restarting the client first is recommended so that the statistics are easier to read. You should see logs similar to the following:

    ----------Info----------
    Status: 200, Start: 02:49:40, End: 02:49:45, Elapsed Time: 5.0167787075042725
    Status: 200, Start: 02:49:40, End: 02:49:45, Elapsed Time: 5.011920690536499
    Status: 200, Start: 02:49:40, End: 02:49:45, Elapsed Time: 5.017078161239624
    Status: 200, Start: 02:49:40, End: 02:49:45, Elapsed Time: 5.018405437469482
    Status: 200, Start: 02:49:40, End: 02:49:45, Elapsed Time: 5.018689393997192
    Status: 200, Start: 02:49:40, End: 02:49:50, Elapsed Time: 10.018936395645142
    Status: 200, Start: 02:49:40, End: 02:49:50, Elapsed Time: 10.016417503356934
    Status: 200, Start: 02:49:40, End: 02:49:50, Elapsed Time: 10.019930601119995
    Status: 200, Start: 02:49:40, End: 02:49:50, Elapsed Time: 10.022735834121704
    Status: 200, Start: 02:49:40, End: 02:49:55, Elapsed Time: 15.02303147315979

    All requests succeed, but in each batch only 5 requests have a response time of about 5 seconds; the rest are slower (mostly 10 seconds or more). With only 5 connections available and each request holding a connection for about 5 seconds, the first 5 requests complete in about 5 seconds while the remaining requests wait in the queue for a free connection, completing in about 10 or 15 seconds. This shows that using tcp.maxConnections alone causes excess requests to queue while waiting for connections to be released. By default, the number of requests that can be queued is 2³²-1.

  2. Update the destination rule with the following content to allow only 1 pending request. For details, see Manage Destination Rules.

    To achieve true circuit breaking behavior (that is, fail fast), you must also set http.http1MaxPendingRequests to limit the number of queued requests. The default value of this parameter is 1024. Setting it to 0 falls back to the default, so it must be set to at least 1.

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: circuit-breaker-sample-server
    spec:
      host: circuit-breaker-sample-server
      trafficPolicy:
        connectionPool:
          tcp:
            maxConnections: 5
          http:
            http1MaxPendingRequests: 1
  3. Restart the client pod so that the statistics are not skewed, and continue observing the logs.

    Sample log:

    ----------Info----------
    Status: 503, Start: 02:56:40, End: 02:56:40, Elapsed Time: 0.005339622497558594
    Status: 503, Start: 02:56:40, End: 02:56:40, Elapsed Time: 0.007254838943481445
    Status: 503, Start: 02:56:40, End: 02:56:40, Elapsed Time: 0.0044133663177490234
    Status: 503, Start: 02:56:40, End: 02:56:40, Elapsed Time: 0.008964776992797852
    Status: 200, Start: 02:56:40, End: 02:56:45, Elapsed Time: 5.018309116363525
    Status: 200, Start: 02:56:40, End: 02:56:45, Elapsed Time: 5.017424821853638
    Status: 200, Start: 02:56:40, End: 02:56:45, Elapsed Time: 5.019804954528809
    Status: 200, Start: 02:56:40, End: 02:56:45, Elapsed Time: 5.01643180847168
    Status: 200, Start: 02:56:40, End: 02:56:45, Elapsed Time: 5.025975227355957
    Status: 200, Start: 02:56:40, End: 02:56:50, Elapsed Time: 10.01716136932373

    As expected, 4 requests are throttled immediately, 5 requests are sent to the destination service, and 1 request is queued.

  4. Run the following command to view the number of active connections between the client's Istio proxy and the destination service pod.

    kubectl exec $(kubectl get pod --selector app=circuit-breaker-sample-client --output jsonpath='{.items[0].metadata.name}') -c istio-proxy -- curl -X POST http://localhost:15000/clusters | grep circuit-breaker-sample-server | grep cx_active

    Expected output:

    outbound|9080||circuit-breaker-sample-server.default.svc.cluster.local::172.20.192.124:9080::cx_active::5

    The client's Istio proxy holds 5 active connections to the destination service pod.

Scenario 2: one client pod and multiple destination service pods

This section verifies whether the connection limit applies at the pod level or at the service level. Assume one client pod and three destination service pods.

  • If the limit applies at the pod level, each destination service pod allows at most 5 connections.

    In this case there should be no throttling or queueing, because the maximum number of allowed connections would be 15 (3 pods × 5 connections each). Since only 10 requests are sent at a time, all requests should succeed and return in about 5 seconds.

  • If the limit applies at the service level, at most 5 connections in total are allowed, regardless of the number of destination service pods.

    In this case, 4 requests are throttled immediately, 5 requests are sent to the destination service, and 1 request is queued.

  1. Run the following command to scale the destination service deployment out to multiple replicas (for example, 3).

    kubectl scale deployment/circuit-breaker-sample-server  --replicas=3
  2. Restart the client pod and monitor its logs.

    Sample log:

    ----------Info----------
    Status: 503, Start: 03:06:20, End: 03:06:20, Elapsed Time: 0.011791706085205078
    Status: 503, Start: 03:06:20, End: 03:06:20, Elapsed Time: 0.0032286643981933594
    Status: 503, Start: 03:06:20, End: 03:06:20, Elapsed Time: 0.012153387069702148
    Status: 503, Start: 03:06:20, End: 03:06:20, Elapsed Time: 0.011871814727783203
    Status: 200, Start: 03:06:20, End: 03:06:25, Elapsed Time: 5.012892484664917
    Status: 200, Start: 03:06:20, End: 03:06:25, Elapsed Time: 5.013102769851685
    Status: 200, Start: 03:06:20, End: 03:06:25, Elapsed Time: 5.016939163208008
    Status: 200, Start: 03:06:20, End: 03:06:25, Elapsed Time: 5.014261484146118
    Status: 200, Start: 03:06:20, End: 03:06:25, Elapsed Time: 5.01246190071106
    Status: 200, Start: 03:06:20, End: 03:06:30, Elapsed Time: 10.021712064743042

    Similar throttling and queueing still occur, which shows that increasing the number of destination service instances does not raise the client's connection limit. Therefore, the connection limit applies at the service level.

  3. After the system has run for a while, run the following command to view the number of active connections between the client's Istio proxy and the destination service pods.

    kubectl exec $(kubectl get pod --selector app=circuit-breaker-sample-client --output jsonpath='{.items[0].metadata.name}') -c istio-proxy -- curl -X POST http://localhost:15000/clusters | grep circuit-breaker-sample-server | grep cx_active

    Expected output:

    outbound|9080||circuit-breaker-sample-server.default.svc.cluster.local::172.20.192.124:9080::cx_active::2
    outbound|9080||circuit-breaker-sample-server.default.svc.cluster.local::172.20.192.158:9080::cx_active::2
    outbound|9080||circuit-breaker-sample-server.default.svc.cluster.local::172.20.192.26:9080::cx_active::2

    The client proxy holds 2 active connections to each destination service pod, 6 connections in total rather than 5. As described in the Envoy and Istio documentation, the proxy allows some elasticity in the number of connections.
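To restate this scenario's takeaway as configuration, the annotated sketch below marks where the service-level semantics come from: the limit is declared on the host, that is, the whole service, so it is not multiplied by the number of pods behind that service. The comments are illustrative and not part of the Istio schema.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: circuit-breaker-sample-server
spec:
  host: circuit-breaker-sample-server  # the limit applies to this host as a whole
  trafficPolicy:
    connectionPool:
      tcp:
        # At most 5 connections from each client proxy to the entire service,
        # regardless of how many pods back it (allowing for the proxy's slight
        # elasticity observed above).
        maxConnections: 5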

Scenario 3: multiple client pods and one destination service pod

  1. Run the following commands to scale the destination service back to 1 replica and the client to 3 replicas.

    kubectl scale deployment/circuit-breaker-sample-server --replicas=1 
    kubectl scale deployment/circuit-breaker-sample-client --replicas=3
  2. Restart the client pods and monitor their logs.

    Client logs:

    Client 1
    
    ----------Info----------
    Status: 503, Start: 03:10:40, End: 03:10:40, Elapsed Time: 0.008828878402709961
    Status: 503, Start: 03:10:40, End: 03:10:40, Elapsed Time: 0.010806798934936523
    Status: 503, Start: 03:10:40, End: 03:10:40, Elapsed Time: 0.012855291366577148
    Status: 503, Start: 03:10:40, End: 03:10:40, Elapsed Time: 0.004465818405151367
    Status: 503, Start: 03:10:40, End: 03:10:40, Elapsed Time: 0.007823944091796875
    Status: 503, Start: 03:10:40, End: 03:10:40, Elapsed Time: 0.06221342086791992
    Status: 503, Start: 03:10:40, End: 03:10:40, Elapsed Time: 0.06922149658203125
    Status: 503, Start: 03:10:40, End: 03:10:40, Elapsed Time: 0.06859922409057617
    Status: 200, Start: 03:10:40, End: 03:10:45, Elapsed Time: 5.015282392501831
    Status: 200, Start: 03:10:40, End: 03:10:50, Elapsed Time: 9.378434181213379
    
    Client 2
    
    ----------Info----------
    Status: 503, Start: 03:11:00, End: 03:11:00, Elapsed Time: 0.007795810699462891
    Status: 503, Start: 03:11:00, End: 03:11:00, Elapsed Time: 0.00595545768737793
    Status: 503, Start: 03:11:00, End: 03:11:00, Elapsed Time: 0.013380765914916992
    Status: 503, Start: 03:11:00, End: 03:11:00, Elapsed Time: 0.004278898239135742
    Status: 503, Start: 03:11:00, End: 03:11:00, Elapsed Time: 0.010999202728271484
    Status: 200, Start: 03:11:00, End: 03:11:05, Elapsed Time: 5.015426874160767
    Status: 200, Start: 03:11:00, End: 03:11:05, Elapsed Time: 5.0184690952301025
    Status: 200, Start: 03:11:00, End: 03:11:05, Elapsed Time: 5.019806146621704
    Status: 200, Start: 03:11:00, End: 03:11:05, Elapsed Time: 5.0175628662109375
    Status: 200, Start: 03:11:00, End: 03:11:05, Elapsed Time: 5.031521558761597
    
    Client 3
    
    ----------Info----------
    Status: 503, Start: 03:13:20, End: 03:13:20, Elapsed Time: 0.012019157409667969
    Status: 503, Start: 03:13:20, End: 03:13:20, Elapsed Time: 0.012546539306640625
    Status: 503, Start: 03:13:20, End: 03:13:20, Elapsed Time: 0.013760805130004883
    Status: 503, Start: 03:13:20, End: 03:13:20, Elapsed Time: 0.014089822769165039
    Status: 503, Start: 03:13:20, End: 03:13:20, Elapsed Time: 0.014792442321777344
    Status: 503, Start: 03:13:20, End: 03:13:20, Elapsed Time: 0.015463829040527344
    Status: 503, Start: 03:13:20, End: 03:13:20, Elapsed Time: 0.01661539077758789
    Status: 200, Start: 03:13:20, End: 03:13:20, Elapsed Time: 0.02904224395751953
    Status: 200, Start: 03:13:20, End: 03:13:20, Elapsed Time: 0.03912043571472168
    Status: 200, Start: 03:13:20, End: 03:13:20, Elapsed Time: 0.06436014175415039

    The number of 503 errors on each client increases. The system limits the concurrent requests from all three client pods to 5 in total.

  3. Check the client proxy logs.

    Client proxy logs:

    {"authority":"circuit-breaker-sample-server:9080","bytes_received":"0","bytes_sent":"81","downstream_local_address":"192.168.142.207:9080","downstream_remote_address":"172.20.192.31:44610","duration":"0","istio_policy_status":"-","method":"GET","path":"/hello","protocol":"HTTP/1.1","request_id":"d9d87600-cd01-421f-8a6f-dc0ee0ac8ccd","requested_server_name":"-","response_code":"503","response_flags":"UO","route_name":"default","start_time":"2023-02-28T03:14:00.095Z","trace_id":"-","upstream_cluster":"outbound|9080||circuit-breaker-sample-server.default.svc.cluster.local","upstream_host":"-","upstream_local_address":"-","upstream_service_time":"-","upstream_transport_failure_reason":"-","user_agent":"python-requests/2.21.0","x_forwarded_for":"-"}
    
    {"authority":"circuit-breaker-sample-server:9080","bytes_received":"0","bytes_sent":"81","downstream_local_address":"192.168.142.207:9080","downstream_remote_address":"172.20.192.31:43294","duration":"58","istio_policy_status":"-","method":"GET","path":"/hello","protocol":"HTTP/1.1","request_id":"931d080a-3413-4e35-91f4-0c906e7ee565","requested_server_name":"-","response_code":"503","response_flags":"URX","route_name":"default","start_time":"2023-02-28T03:12:20.995Z","trace_id":"-","upstream_cluster":"outbound|9080||circuit-breaker-sample-server.default.svc.cluster.local","upstream_host":"172.20.192.84:9080","upstream_local_address":"172.20.192.31:58742","upstream_service_time":"57","upstream_transport_failure_reason":"-","user_agent":"python-requests/2.21.0","x_forwarded_for":"-"}
    

    The logs contain two different kinds of entries for the throttled requests (503 errors), distinguished by the response_flags field, which takes the values UO and URX.

    • UO: upstream overflow (circuit breaking).

    • URX: the request was rejected because the upstream retry limit (HTTP) or the maximum connection attempts (TCP) was reached.

    The values of other fields in the logs (such as duration, upstream_host, and upstream_cluster) support a further conclusion: requests with the UO flag are throttled locally by the client proxy, while requests with the URX flag are rejected by the destination service's proxy.

  4. To verify the conclusion from the previous step, check the proxy logs on the destination service side.

    Server-side proxy logs:

    {"authority":"circuit-breaker-sample-server:9080","bytes_received":"0","bytes_sent":"81","downstream_local_address":"172.20.192.84:9080","downstream_remote_address":"172.20.192.31:59510","duration":"0","istio_policy_status":"-","method":"GET","path":"/hello","protocol":"HTTP/1.1","request_id":"7684cbb0-8f1c-44bf-b591-40c3deff6b0b","requested_server_name":"outbound_.9080_._.circuit-breaker-sample-server.default.svc.cluster.local","response_code":"503","response_flags":"UO","route_name":"default","start_time":"2023-02-28T03:14:00.095Z","trace_id":"-","upstream_cluster":"inbound|9080||","upstream_host":"-","upstream_local_address":"-","upstream_service_time":"-","upstream_transport_failure_reason":"-","user_agent":"python-requests/2.21.0","x_forwarded_for":"-"}
    {"authority":"circuit-breaker-sample-server:9080","bytes_received":"0","bytes_sent":"81","downstream_local_address":"172.20.192.84:9080","downstream_remote_address":"172.20.192.31:58218","duration":"0","istio_policy_status":"-","method":"GET","path":"/hello","protocol":"HTTP/1.1","request_id":"2aa351fa-349d-4283-a5ea-dc74ecbdff8c","requested_server_name":"outbound_.9080_._.circuit-breaker-sample-server.default.svc.cluster.local","response_code":"503","response_flags":"UO","route_name":"default","start_time":"2023-02-28T03:12:20.996Z","trace_id":"-","upstream_cluster":"inbound|9080||","upstream_host":"-","upstream_local_address":"-","upstream_service_time":"-","upstream_transport_failure_reason":"-","user_agent":"python-requests/2.21.0","x_forwarded_for":"-"}

    As expected, the destination service proxy's logs contain 503 response codes. These rejections are what produce the "response_code":"503" and "response_flags":"URX" entries in the client proxy logs.

In summary, the client proxies send requests subject to their connection limit (at most 5 connections per pod) and queue or throttle excess requests locally (UO response flag). At the start of a batch, the three client proxies can together send up to 15 concurrent requests. However, only 5 succeed, because the destination service's proxy applies the same configuration (at most 5 connections): it accepts 5 requests and throttles the rest, which appear in the client proxy logs with the URX response flag.

(Figure: schematic of this scenario, showing client proxies throttling excess requests locally with the UO flag and the destination service proxy rejecting requests beyond its limit with the URX flag.)

Scenario 4: multiple client pods and multiple destination service pods

When you increase the number of destination service replicas, the overall request success rate should increase, because each destination proxy allows 5 concurrent requests. This way, you can observe throttling on both the client proxies and the destination service proxies.

  1. Run the following commands to increase the destination service to 2 replicas and the client to 3 replicas.

    kubectl scale deployment/circuit-breaker-sample-server --replicas=2
    kubectl scale deployment/circuit-breaker-sample-client --replicas=3

    Across the three client proxies, 10 of the 30 requests generated in a batch succeed.

  2. Run the following command to increase the destination service to 3 replicas.

    kubectl scale deployment/circuit-breaker-sample-server --replicas=3

    15 requests succeed.

  3. Run the following command to increase the destination service to 4 replicas.

    kubectl scale deployment/circuit-breaker-sample-server --replicas=4

    Even after the destination service grows from 3 to 4 replicas, there are still only 15 successful requests. The limit on a client proxy applies to the entire destination service, regardless of how many replicas it has. Therefore, no matter the replica count, each client proxy can issue at most 5 concurrent requests to the destination service.

Summary

The connection pool configuration constrains the client and the destination service as follows.

| Role | Description |
| --- | --- |
| Client | Each client proxy applies the limit independently. If the limit is 100, each client proxy can have 100 outstanding requests before it applies local throttling; with N clients calling the destination service, there can be up to 100 × N outstanding requests in total. The client proxy's limit applies to the entire destination service, not to individual replicas: even if the destination service has 200 active pods, the limit is still 100. |
| Destination service | Each destination service proxy also applies the limit. If the service has 50 active pods, each pod's proxy can have at most 100 outstanding requests from client proxies before it throttles and returns 503. |
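As a closing illustration, the sketch below annotates a connection pool configuration with these semantics, reusing the limit of 100 from the table above; treat the value as a placeholder to tune for your workload.

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: circuit-breaker-sample-server
spec:
  host: circuit-breaker-sample-server
  trafficPolicy:
    connectionPool:
      tcp:
        # Applied independently by every client proxy, so N client pods can
        # together hold up to 100 * N outstanding requests; also applied by
        # each destination-side proxy, which returns 503 beyond its own 100.
        maxConnections: 100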
