This topic describes how to deploy the customized Sentry containers so that the Sentry console can read application data stored in Alibaba Cloud.
Prerequisites
This feature is currently in invitational preview. Submit a ticket to apply for access.
You have a working self-hosted Sentry environment, or are able to deploy the Sentry service.
Docker and Docker Compose are installed.
Background information
The read path of a stock Sentry deployment relies on ClickHouse for event storage and PostgreSQL for metadata. To read data from SLS instead, the following two core components must be replaced:
Component | Original behavior | Customized behavior |
Sentry Web | Reads Group, Nodestore, and other data from PostgreSQL | Redirects some of these data sources to SLS |
Snuba API | Queries event data from ClickHouse | Switches the query engine to SLS |
The customized containers change only the data read logic; the Sentry console UI and interactions are unaffected.
Architecture
After the customized containers are deployed, the Sentry console reads data as follows:
Data source split:
Data type | Storage location | Description |
Users, projects, teams | PostgreSQL | Core metadata, unchanged |
Issue aggregation (Group) | Alibaba Cloud | Error grouping and statistics |
Event details (Nodestore) | Alibaba Cloud | Full event payloads |
Event queries | Alibaba Cloud | Served through the Snuba API |
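The split above can be sketched as a simple lookup. This is purely illustrative: the routing actually lives inside the customized Sentry Web and Snuba API containers, and the names below are not real Sentry APIs.

```python
# Illustrative only -- not a real Sentry API. Shows which backend each
# kind of read is served from after the customized containers are deployed.
DATA_SOURCE = {
    "users": "postgresql",
    "projects": "postgresql",
    "teams": "postgresql",
    "groups": "sls",        # Issue aggregation (Group)
    "nodestore": "sls",     # full event payloads
    "event_query": "sls",   # served through the Snuba API
}

def backend_for(data_type: str) -> str:
    """Return the storage backend a read of `data_type` is routed to."""
    return DATA_SOURCE[data_type]

print(backend_for("projects"))   # core metadata stays in PostgreSQL
print(backend_for("nodestore"))  # event details come from SLS
```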
Step 1: Obtain the customized container images
Alibaba Cloud provides customized Sentry container images adapted for SLS.
Component | Image address |
Sentry Web (SLS customized) | sls-registry.cn-hangzhou.cr.aliyuncs.com/sentry/sentry:25.9.0-rum-251230 |
Snuba API (SLS customized) | sls-registry.cn-hangzhou.cr.aliyuncs.com/sentry/snuba:25.9.0-rum-251230 |
Step 2: Configure SLS access credentials
The customized containers read data from SLS, which requires AccessKey credentials.
Create and authorize a RAM user
Log on to the RAM console and create a RAM user.
Grant the user the following permission:
AliyunLogReadOnlyAccess (read-only access to Simple Log Service).
Create an AccessKey and record the AccessKey ID and AccessKey Secret.
Important: Use a RAM user's AccessKey; avoid using the Alibaba Cloud (root) account's AccessKey.
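When sharing logs or diagnostics, avoid exposing the full AccessKey. The helper below is a hypothetical utility (not part of Sentry or SLS) that masks a credential before it is printed:

```python
def mask_secret(value: str, visible: int = 4) -> str:
    """Keep the first `visible` characters of a credential, mask the rest."""
    if len(value) <= visible:
        return "*" * len(value)
    return value[:visible] + "*" * (len(value) - visible)

# Mask a (made-up) AccessKey ID before logging it.
print(mask_secret("LTAIexampleKeyId0000"))
```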
Step 3: Configure environment variables
Add the following settings to the .env file in the Sentry service root directory:
# RUM image configuration
SENTRY_RUM_IMAGE=sls-registry.cn-hangzhou.cr.aliyuncs.com/sentry/sentry:25.9.0-rum-251230
SNUBA_RUM_IMAGE=sls-registry.cn-hangzhou.cr.aliyuncs.com/sentry/snuba:25.9.0-rum-251230
SENTRY_RUM_BIND=9001
# SLS configuration
SLS_ENDPOINT=<endpoint>
SLS_ACCESS_KEY_ID=<your_access_key_id>
SLS_ACCESS_KEY_SECRET=<your_access_key_secret>
SLS_PROJECT=<your_sls_project>
SLS_LOGSTORE=logstore-rum
SLS_TTL=30
ENABLE_SLS_NODESTORE=true
USE_SLS_GROUP_STORAGE=true
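As a quick sanity check before starting the containers, the sketch below parses a .env-style file and reports which of the SLS variables required by this guide are missing or empty. It assumes simple KEY=VALUE lines; the helper names are illustrative, not part of any tool:

```python
# Required SLS variables from this guide's step 3.
REQUIRED = [
    "SLS_ENDPOINT", "SLS_ACCESS_KEY_ID", "SLS_ACCESS_KEY_SECRET",
    "SLS_PROJECT", "SLS_LOGSTORE",
]

def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def missing_keys(text: str) -> list:
    """Return the required keys that are absent or empty."""
    env = parse_env(text)
    return [k for k in REQUIRED if not env.get(k)]

sample = "SLS_ENDPOINT=cn-hangzhou.log.aliyuncs.com\nSLS_PROJECT=my-project\n"
print(missing_keys(sample))  # the three keys the sample leaves unset
```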
Parameter description
Parameter | Description | Example |
SLS_ENDPOINT | SLS service endpoint (domain name) | cn-chengdu.log.aliyuncs.com |
SLS_ACCESS_KEY_ID | AccessKey ID of the RAM user | - |
SLS_ACCESS_KEY_SECRET | AccessKey Secret of the RAM user | - |
SLS_PROJECT | SLS project name (obtain the project that backs your RUM service from the console) | - |
SLS_LOGSTORE | SLS Logstore name, fixed value | logstore-rum |
SLS_TTL | Number of days of data available for queries | 30 |
ENABLE_SLS_NODESTORE | Store event details (Nodestore) in SLS | true |
USE_SLS_GROUP_STORAGE | Store Issue aggregation (Group) data in SLS | true |
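Public SLS endpoints follow the pattern <region>.log.aliyuncs.com (for example, cn-chengdu.log.aliyuncs.com), and SLS_ENDPOINT expects a bare host name rather than a URL. A small illustrative check of that shape (the regex deliberately ignores less common endpoint variants):

```python
import re

# Matches bare hosts of the form "<region>.log.aliyuncs.com",
# e.g. "cn-chengdu.log.aliyuncs.com". Illustrative only.
ENDPOINT_RE = re.compile(r"^[a-z0-9-]+\.log\.aliyuncs\.com$")

def looks_like_sls_endpoint(endpoint: str) -> bool:
    # Strip an accidental scheme prefix; SLS_ENDPOINT should be a bare host.
    host = endpoint.removeprefix("https://").removeprefix("http://")
    return bool(ENDPOINT_RE.match(host))

print(looks_like_sls_endpoint("cn-chengdu.log.aliyuncs.com"))
print(looks_like_sls_endpoint("example.com"))
```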
Step 4: Create a Docker Compose override file
Create a docker-compose.override.yml file in the Sentry service root directory:
services:
  # ============================================
  # Snuba API - RUM edition
  # ============================================
  snuba-api-rum:
    restart: unless-stopped
    image: "${SNUBA_RUM_IMAGE}"
    depends_on:
      clickhouse:
        condition: service_healthy
      kafka:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      SNUBA_SETTINGS: self_hosted
      CLICKHOUSE_HOST: clickhouse
      DEFAULT_BROKERS: "kafka:9092"
      REDIS_HOST: redis
      UWSGI_MAX_REQUESTS: "10000"
      UWSGI_DISABLE_LOGGING: "true"
      SENTRY_EVENT_RETENTION_DAYS:
      # SLS configuration
      SLS_ENDPOINT: ${SLS_ENDPOINT:-cn-chengdu.log.aliyuncs.com}
      SLS_ACCESS_KEY_ID: ${SLS_ACCESS_KEY_ID:-}
      SLS_ACCESS_KEY_SECRET: ${SLS_ACCESS_KEY_SECRET:-}
      SLS_PROJECT: ${SLS_PROJECT:-}
      SLS_LOGSTORE: ${SLS_LOGSTORE:-logstore-rum}
      SLS_TTL: ${SLS_TTL:-30}
      USE_SLS_FOR_REFERRERS: sls
    healthcheck:
      test:
        - "CMD"
        - "/bin/bash"
        - "-c"
        - 'exec 3<>/dev/tcp/127.0.0.1/1218 && echo -e "GET /health HTTP/1.1\r\nhost: 127.0.0.1\r\n\r\n" >&3 && grep ok -s -m 1 <&3'
      interval: "$HEALTHCHECK_INTERVAL"
      timeout: "$HEALTHCHECK_TIMEOUT"
      retries: $HEALTHCHECK_RETRIES
      start_period: "$HEALTHCHECK_START_PERIOD"
  # ============================================
  # Web service - RUM edition
  # ============================================
  web-rum:
    restart: unless-stopped
    image: "${SENTRY_RUM_IMAGE}"
    depends_on:
      redis:
        condition: service_healthy
      kafka:
        condition: service_healthy
      pgbouncer:
        condition: service_healthy
      memcached:
        condition: service_started
      smtp:
        condition: service_started
      seaweedfs:
        condition: service_started
      snuba-api-rum:
        condition: service_healthy
      symbolicator:
        condition: service_started
    entrypoint: "/etc/sentry/entrypoint.sh"
    command: ["run", "web"]
    ulimits:
      nofile:
        soft: 4096
        hard: 4096
    environment:
      PYTHONUSERBASE: "/data/custom-packages"
      SENTRY_CONF: "/etc/sentry"
      SNUBA: "http://snuba-api-rum:1218"
      VROOM: "http://vroom:8085"
      DEFAULT_CA_BUNDLE: "/etc/ssl/certs/ca-certificates.crt"
      REQUESTS_CA_BUNDLE: "/etc/ssl/certs/ca-certificates.crt"
      GRPC_DEFAULT_SSL_ROOTS_FILE_PATH_ENV_VAR: "/etc/ssl/certs/ca-certificates.crt"
      COMPOSE_PROFILES:
      SENTRY_EVENT_RETENTION_DAYS:
      SENTRY_MAIL_HOST:
      SENTRY_MAX_EXTERNAL_SOURCEMAP_SIZE:
      # SLS configuration
      SLS_ENDPOINT: ${SLS_ENDPOINT:-cn-chengdu.log.aliyuncs.com}
      SLS_ACCESS_KEY_ID: ${SLS_ACCESS_KEY_ID:-}
      SLS_ACCESS_KEY_SECRET: ${SLS_ACCESS_KEY_SECRET:-}
      SLS_PROJECT: ${SLS_PROJECT:-}
      SLS_LOGSTORE: ${SLS_LOGSTORE:-logstore-rum}
      SLS_TTL: ${SLS_TTL:-30}
      ENABLE_SLS_NODESTORE: ${ENABLE_SLS_NODESTORE:-true}
      USE_SLS_GROUP_STORAGE: ${USE_SLS_GROUP_STORAGE:-true}
    volumes:
      - "sentry-data:/data"
      - "./sentry:/etc/sentry"
      - "./geoip:/geoip:ro"
      - "./certificates:/usr/local/share/ca-certificates:ro"
    healthcheck:
      test:
        - "CMD"
        - "/bin/bash"
        - "-c"
        - 'exec 3<>/dev/tcp/127.0.0.1/9000 && echo -e "GET /_health/ HTTP/1.1\r\nhost: 127.0.0.1\r\n\r\n" >&3 && grep ok -s -m 1 <&3'
      interval: "$HEALTHCHECK_INTERVAL"
      timeout: "$HEALTHCHECK_TIMEOUT"
      retries: $HEALTHCHECK_RETRIES
      start_period: "$HEALTHCHECK_START_PERIOD"
  # ============================================
  # Nginx - dual-port configuration
  # ============================================
  nginx:
    ports:
      - "$SENTRY_BIND:80/tcp"
      - "${SENTRY_RUM_BIND:-9001}:81/tcp"
    depends_on:
      web:
        condition: service_healthy
        restart: true
      web-rum:
        condition: service_healthy
        restart: true
      relay:
        condition: service_healthy
        restart: true
Step 5: Configure the Sentry Python file
Modify the sentry/sentry.conf.example.py file and append the following configuration at the end:
# ---------------------------------------------------------
# RUM NodeStore Configuration
# ---------------------------------------------------------
import os

# Read configuration from environment variables
_enable_sls_nodestore = os.environ.get("ENABLE_SLS_NODESTORE", "").lower() in ("true", "1", "yes")
_use_sls_group_storage = os.environ.get("USE_SLS_GROUP_STORAGE", "").lower() in ("true", "1", "yes")
_sls_endpoint = os.environ.get("SLS_ENDPOINT", "")
_sls_access_key_id = os.environ.get("SLS_ACCESS_KEY_ID", "")
_sls_access_key_secret = os.environ.get("SLS_ACCESS_KEY_SECRET", "")
_sls_project = os.environ.get("SLS_PROJECT", "")
_sls_nodestore_logstore = os.environ.get("SLS_LOGSTORE", "logstore-rum")
_sls_ttl = int(os.environ.get("SLS_TTL", "30") or 30)

if _enable_sls_nodestore and _sls_endpoint and _sls_access_key_id and _sls_access_key_secret and _sls_project:
    SENTRY_NODESTORE = "sentry_nodestore_sls.SLSNodeStorage"
    SENTRY_NODESTORE_OPTIONS = {
        "endpoint": _sls_endpoint,
        "access_key_id": _sls_access_key_id,
        "access_key_secret": _sls_access_key_secret,
        "project": _sls_project,
        "logstore": _sls_nodestore_logstore,
        "compression": True,
        "retry_attempts": 3,
        "retry_delay": 0.5,
        "default_ttl_days": _sls_ttl or None,
    }
    print("=" * 70)
    print("==> NodeStore: SLS (Direct Mode ONLY)")
    print(f"==> SLS Endpoint: {_sls_endpoint}")
    print(f"==> SLS Project: {_sls_project}")
    print(f"==> SLS Logstore: {_sls_nodestore_logstore}")
    print(f"==> TTL (days): {SENTRY_NODESTORE_OPTIONS['default_ttl_days']}")
    print("=" * 70)
else:
    print("=" * 70)
    print("==> NodeStore: Django (Database)")
    print("==> Data stored in PostgreSQL")
    if not _enable_sls_nodestore:
        print("==> Info: Set ENABLE_SLS_NODESTORE=true in .env to use SLS")
    elif not _sls_endpoint or not _sls_access_key_id:
        print("==> Warning: SLS credentials not configured in .env")
    print("=" * 70)

# ---------------------------------------------------------
# RUM Group Search Configuration
# ---------------------------------------------------------
if _use_sls_group_storage and _sls_endpoint and _sls_access_key_id and _sls_access_key_secret and _sls_project:
    SENTRY_SEARCH = "sentry.search.sls.backend.SLSGroupSearchBackend"
    print("=" * 70)
    print("==> Search Backend: SLS Group Search")
    print(f"==> Backend: {SENTRY_SEARCH}")
    print("=" * 70)
else:
    print("=" * 70)
    print("==> Search Backend: Snuba (Default)")
    print(f"==> Backend: {SENTRY_SEARCH}")
    if not _use_sls_group_storage:
        print("==> Info: Set USE_SLS_GROUP_STORAGE=true to use SLS Group Search")
    print("=" * 70)

# Clean up temporary variables
del _enable_sls_nodestore, _use_sls_group_storage, _sls_endpoint, _sls_access_key_id, _sls_access_key_secret
del _sls_project, _sls_nodestore_logstore
Step 6: Configure dual-port Nginx
Modify the nginx.conf file to add a listener for the RUM edition, so that both environments are reachable (port 80 → standard edition, port 81 → Alibaba Cloud edition):
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    reset_timedout_connection on;
    keepalive_timeout 75s;
    gzip off;
    server_tokens off;
    server_names_hash_bucket_size 64;
    types_hash_max_size 2048;
    types_hash_bucket_size 64;
    client_body_buffer_size 64k;
    client_max_body_size 100m;
    proxy_http_version 1.1;
    proxy_redirect off;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;
    proxy_busy_buffers_size 256k;
    proxy_next_upstream error timeout invalid_header http_502 http_503 non_idempotent;
    proxy_next_upstream_tries 2;
    # Docker default address pools
    set_real_ip_from 172.17.0.0/16;
    set_real_ip_from 172.18.0.0/16;
    set_real_ip_from 172.19.0.0/16;
    set_real_ip_from 172.20.0.0/14;
    set_real_ip_from 172.24.0.0/14;
    set_real_ip_from 172.28.0.0/14;
    set_real_ip_from 192.168.0.0/16;
    set_real_ip_from 10.0.0.0/8;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    proxy_set_header Connection '';
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Request-Id $request_id;
    proxy_read_timeout 30s;
    proxy_send_timeout 5s;
    upstream relay {
        server relay:3000;
        keepalive 2;
    }
    # Standard Sentry
    upstream sentry {
        server web:9000;
        keepalive 2;
    }
    # RUM Sentry
    upstream sentry_rum {
        server web-rum:9000;
        keepalive 2;
    }
    # Forward RUM data to Alibaba Cloud RUM (used only when mirrored dual-write is enabled)
    upstream rum_forwarder {
        # endpoint
        server <endpoint>:80;
        keepalive 2;
    }
    # ============================================
    # RUM forwarding path map
    # Maps the Sentry project ID to the corresponding Alibaba Cloud RUM service
    # ============================================
    map $request_uri $forwarder_path {
        # Default forwarding path (replace with your actual workspace_name and rum_service_id)
        default /rum/sentry/<workspace_name>/<rum_service_id>;
        # Examples of per-project forwarding paths:
        # "~^/api/1/envelope/" /rum/sentry/<workspace_name>/<rum_service_id_for_project_1>;
        # "~^/api/2/envelope/" /rum/sentry/<workspace_name>/<rum_service_id_for_project_2>;
    }
    # Standard Sentry - port 80
    server {
        listen 80;
        location /api/store/ {
            proxy_pass http://relay;
        }
        location ~ ^/api/[1-9]\d*/ {
            proxy_pass http://relay;
            mirror /rum_mirror;
            mirror_request_body on;
        }
        location = /rum_mirror {
            internal;
            proxy_pass http://rum_forwarder$forwarder_path$request_uri;
            # endpoint
            proxy_set_header Host <endpoint>;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Request-Id $request_id;
            proxy_set_header Connection '';
            proxy_read_timeout 5s;
            proxy_send_timeout 5s;
            proxy_connect_timeout 5s;
            proxy_ignore_client_abort on;
        }
        location ^~ /api/0/relays/ {
            proxy_pass http://relay;
        }
        location ^~ /js-sdk/ {
            root /var/www/;
            add_header Access-Control-Allow-Origin *;
        }
        location / {
            proxy_pass http://sentry;
        }
        location /_assets/ {
            proxy_pass http://sentry/_static/dist/sentry/;
            proxy_hide_header Content-Disposition;
        }
        location /_static/ {
            proxy_pass http://sentry;
            proxy_hide_header Content-Disposition;
        }
    }
    # RUM Sentry - port 81
    server {
        listen 81;
        location /api/store/ {
            proxy_pass http://relay;
        }
        location ~ ^/api/[1-9]\d*/ {
            proxy_pass http://relay;
            mirror /rum_mirror;
            mirror_request_body on;
        }
        location = /rum_mirror {
            internal;
            proxy_pass http://rum_forwarder$forwarder_path$request_uri;
            # endpoint
            proxy_set_header Host <endpoint>;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Request-Id $request_id;
            proxy_set_header Connection '';
            proxy_read_timeout 5s;
            proxy_send_timeout 5s;
            proxy_connect_timeout 5s;
            proxy_ignore_client_abort on;
        }
        location ^~ /api/0/relays/ {
            proxy_pass http://relay;
        }
        location ^~ /js-sdk/ {
            root /var/www/;
            add_header Access-Control-Allow-Origin *;
        }
        location / {
            proxy_pass http://sentry_rum;
        }
        location /_assets/ {
            proxy_pass http://sentry_rum/_static/dist/sentry/;
            proxy_hide_header Content-Disposition;
        }
        location /_static/ {
            proxy_pass http://sentry_rum;
            proxy_hide_header Content-Disposition;
        }
    }
}
Step 7: Start the services
Run the following command to start the services:
docker compose -f docker-compose.yml -f docker-compose.override.yml up -d
Check the service status:
docker compose ps
Confirm that the web-rum and snuba-api-rum containers show a status of Up (healthy).
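This health check can also be scripted. The sketch below parses sample docker compose ps --format json output (Compose v2 prints one JSON object per line; field names such as Name, State, and Health can vary between Compose versions, and the sample output here is illustrative, not captured from a real deployment):

```python
import json

# Illustrative sample of `docker compose ps --format json` output.
sample = """\
{"Name": "sentry-self-hosted-web-rum-1", "State": "running", "Health": "healthy"}
{"Name": "sentry-self-hosted-snuba-api-rum-1", "State": "running", "Health": "healthy"}
{"Name": "sentry-self-hosted-worker-1", "State": "running", "Health": ""}
"""

def unhealthy(ps_json_lines: str, services=("web-rum", "snuba-api-rum")) -> list:
    """Return names of RUM containers that are not running and healthy."""
    bad = []
    for line in ps_json_lines.splitlines():
        info = json.loads(line)
        if any(s in info["Name"] for s in services):
            if info["State"] != "running" or info["Health"] != "healthy":
                bad.append(info["Name"])
    return bad

print(unhealthy(sample))  # empty list when both RUM containers are healthy
```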
Step 8: Verify the read path
Open the RUM edition of the Sentry console:
http://<your-host>:9001
Open an existing project and view the Issues list.
Confirm that the errors and events read from SLS are displayed correctly.
If the Issues list is empty, check the following:
Whether data is being written correctly.
Whether the SLS access credentials are configured correctly.
Whether the container logs contain errors.
To view the container logs:
docker compose logs -f web-rum
docker compose logs -f snuba-api-rum
Access addresses
Environment | Address | Description |
Standard Sentry | http://<your-host>:9000 | Official image; data stored in ClickHouse |
RUM Sentry | http://<your-host>:9001 | Customized image; data stored in SLS |
FAQ
Q: Does replacing the containers with the customized versions affect Sentry console functionality?
A: The customized containers change only the data read logic; the console UI, interactions, and features are identical to the stock version.
Q: Can data be read from both ClickHouse and SLS at the same time?
A: Mixed reads are not supported in the current version. Once the customized containers are enabled, all event data is read from SLS.
Q: How do I roll back to the stock Sentry containers?
A: Stop the web-rum and snuba-api-rum services and use the standard Sentry (port 9000).
Q: How long is data retained in SLS?
A: Data is retained for 30 days by default.
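The retention arithmetic is straightforward: with SLS_TTL=30, events older than 30 days fall outside the query window. A minimal sketch:

```python
from datetime import datetime, timedelta, timezone

def retention_cutoff(ttl_days: int, now: datetime) -> datetime:
    """Oldest timestamp still queryable given an SLS_TTL of `ttl_days`."""
    return now - timedelta(days=ttl_days)

now = datetime(2025, 9, 30, tzinfo=timezone.utc)
print(retention_cutoff(30, now))  # 2025-08-31 00:00:00+00:00
```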