
CREATE TABLE

Last updated: 2020-04-02 14:15:35

This topic describes the syntax, clauses, and parameters for creating tables with DDL statements, along with basic usage.

Note:

  • DRDS does not currently support creating databases directly through DDL statements. Please log on to the DRDS console to create them; for instructions, see Create a DRDS Database.
  • DRDS supports Global Secondary Indexes (GSI), which require MySQL >= 5.7 and DRDS >= 5.4.1. For the underlying principles, see the DRDS Global Secondary Index documentation.

Syntax

  CREATE [SHADOW] TABLE [IF NOT EXISTS] tbl_name
      (create_definition, ...)
      [table_options]
      [drds_partition_options]

  create_definition:
      col_name column_definition
    | mysql_create_definition
    | [UNIQUE] GLOBAL INDEX index_name [index_type] (index_sharding_col_name,...)
        [global_secondary_index_option]
        [index_option] ...

  # Global secondary index options
  global_secondary_index_option:
      [COVERING (col_name,...)]
      [drds_partition_options]

  # Database/table sharding clause
  drds_partition_options:
      DBPARTITION BY db_partition_algorithm
      [TBPARTITION BY table_partition_algorithm [TBPARTITIONS num]]

  db_partition_algorithm:
      HASH([col_name])
    | {YYYYMM|YYYYWEEK|YYYYDD|YYYYMM_OPT|YYYYWEEK_OPT|YYYYDD_OPT}(col_name)
    | UNI_HASH(col_name)
    | RIGHT_SHIFT(col_name, n)
    | RANGE_HASH(col_name, col_name, n)

  table_partition_algorithm:
      HASH(col_name)
    | {MM|DD|WEEK|MMDD|YYYYMM|YYYYWEEK|YYYYDD|YYYYMM_OPT|YYYYWEEK_OPT|YYYYDD_OPT}(col_name)
    | UNI_HASH(col_name)
    | RIGHT_SHIFT(col_name, n)
    | RANGE_HASH(col_name, col_name, n)

  # Standard MySQL DDL syntax follows
  index_sharding_col_name:
      col_name [(length)] [ASC | DESC]

  index_option:
      KEY_BLOCK_SIZE [=] value
    | index_type
    | WITH PARSER parser_name
    | COMMENT 'string'

  index_type:
      USING {BTREE | HASH}

Note: The DRDS DDL syntax is based on MySQL syntax. The listing above mainly covers the differences; for the full syntax, see the MySQL documentation.

Sharding clauses and parameters:

  • DBPARTITION BY hash(partition_key): specifies the database shard key and the database sharding algorithm.
  • TBPARTITION BY { HASH(column) | {MM|DD|WEEK|MMDD|YYYYMM|YYYYWEEK|YYYYDD|YYYYMM_OPT|YYYYWEEK_OPT|YYYYDD_OPT}(column) } (optional): specifies how data is mapped to physical tables; defaults to the same as DBPARTITION BY.
  • TBPARTITIONS num (optional): the number of physical tables per database shard (default 1). If the table is not split into multiple tables, this field is unnecessary.
  • For details on the sharding functions, see the sharding function overview.

Global secondary index definition clauses

  • [UNIQUE] GLOBAL: defines a global secondary index; UNIQUE GLOBAL denotes a globally unique index.
  • index_name: the index name, which is also the name of the index table.
  • index_type: the type of the local index on the shard keys of the index table; for the supported range, see the MySQL documentation.
  • index_sharding_col_name,...: the index columns, which must contain exactly the full set of shard keys of the index table; for details, see the DRDS global secondary index usage documentation.
  • global_secondary_index_option: DRDS extension syntax for global secondary indexes.
    • COVERING (col_name,...): covering columns, i.e., the columns of the index table other than the index columns; the primary key and the base table's shard keys are included by default. For details, see the DRDS global secondary index usage documentation.
    • drds_partition_options: the sharding clause of the index table; for the syntax, see the "Sharding clauses and parameters" section.
  • index_option: attributes of the local index on the shard keys of the index table; for the supported range, see the MySQL documentation.

Full-link stress testing shadow table clause

  • SHADOW: creates a shadow table for full-link stress testing. The table name must be prefixed with __test_, the part after the prefix must match the name of the associated formal table, and the formal table must be created before the shadow table.

Single database, single table

Create a single-database, single-table table with no sharding.

  CREATE TABLE single_tbl(
    id bigint not null auto_increment,
    name varchar(30),
    primary key(id)
  );

View the node topology of the logical table. You can see that a single-database, single-table logical table was created in database 0 only.

  mysql> show topology from single_tbl;
  +------+------------------------------------------------------------------+------------+
  | ID | GROUP_NAME | TABLE_NAME |
  +------+------------------------------------------------------------------+------------+
  | 0 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | single_tbl |
  +------+------------------------------------------------------------------+------------+
  1 row in set (0.01 sec)

Specifying a SELECT statement

When creating a single-database, single-table table, you can also specify a (select_statement). Sharded tables do not support this.

  CREATE [SHADOW] TABLE [IF NOT EXISTS] tbl_name
    [(create_definition,...)]
    [table_options]
    [partition_options]
    select_statement

Example: create a single-database, single-table table single_tbl2, with data taken from the table single_tbl and no sharding.

  CREATE TABLE single_tbl2(
    id bigint not null auto_increment,
    name varchar(30),
    primary key(id)
  ) select * from single_tbl;

Database sharding only

Assuming 8 database shards have already been created, create a table that is sharded across databases but not into multiple tables, with databases sharded by hashing the id column.

  CREATE TABLE multi_db_single_tbl(
    id bigint not null auto_increment,
    name varchar(30),
    primary key(id)
  ) dbpartition by hash(id);

View the node topology of the logical table. You can see that one physical table was created in each database shard, i.e., only database sharding was applied.

  mysql> show topology from multi_db_single_tbl;
  +------+------------------------------------------------------------------+---------------------+
  | ID | GROUP_NAME | TABLE_NAME |
  +------+------------------------------------------------------------------+---------------------+
  | 0 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | multi_db_single_tbl |
  | 1 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | multi_db_single_tbl |
  | 2 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0002_RDS | multi_db_single_tbl |
  | 3 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0003_RDS | multi_db_single_tbl |
  | 4 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0004_RDS | multi_db_single_tbl |
  | 5 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0005_RDS | multi_db_single_tbl |
  | 6 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0006_RDS | multi_db_single_tbl |
  | 7 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | multi_db_single_tbl |
  +------+------------------------------------------------------------------+---------------------+
  8 rows in set (0.01 sec)

Database and table sharding

This section describes how to shard tables across both databases and tables using different sharding methods.

The following examples all assume that 8 database shards have already been created.

Sharding with the hash function

Create a table sharded by both database and table, with 3 physical tables per database shard. Databases are sharded by hashing the id column and tables by hashing the bid column: rows are first distributed across the database shards by the hash of id, and within each shard the data is further distributed across the 3 physical tables by the hash of bid.

  CREATE TABLE multi_db_multi_tbl(
    id bigint not null auto_increment,
    bid int,
    name varchar(30),
    primary key(id)
  ) dbpartition by hash(id) tbpartition by hash(bid) tbpartitions 3;

View the node topology of the logical table. You can see that 3 physical tables were created in each database shard.

  mysql> show topology from multi_db_multi_tbl;
  +------+------------------------------------------------------------------+-----------------------+
  | ID | GROUP_NAME | TABLE_NAME |
  +------+------------------------------------------------------------------+-----------------------+
  | 0 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | multi_db_multi_tbl_00 |
  | 1 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | multi_db_multi_tbl_01 |
  | 2 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | multi_db_multi_tbl_02 |
  | 3 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | multi_db_multi_tbl_03 |
  | 4 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | multi_db_multi_tbl_04 |
  | 5 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | multi_db_multi_tbl_05 |
  | 6 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0002_RDS | multi_db_multi_tbl_06 |
  | 7 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0002_RDS | multi_db_multi_tbl_07 |
  | 8 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0002_RDS | multi_db_multi_tbl_08 |
  | 9 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0003_RDS | multi_db_multi_tbl_09 |
  | 10 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0003_RDS | multi_db_multi_tbl_10 |
  | 11 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0003_RDS | multi_db_multi_tbl_11 |
  | 12 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0004_RDS | multi_db_multi_tbl_12 |
  | 13 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0004_RDS | multi_db_multi_tbl_13 |
  | 14 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0004_RDS | multi_db_multi_tbl_14 |
  | 15 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0005_RDS | multi_db_multi_tbl_15 |
  | 16 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0005_RDS | multi_db_multi_tbl_16 |
  | 17 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0005_RDS | multi_db_multi_tbl_17 |
  | 18 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0006_RDS | multi_db_multi_tbl_18 |
  | 19 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0006_RDS | multi_db_multi_tbl_19 |
  | 20 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0006_RDS | multi_db_multi_tbl_20 |
  | 21 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | multi_db_multi_tbl_21 |
  | 22 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | multi_db_multi_tbl_22 |
  | 23 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | multi_db_multi_tbl_23 |
  +------+------------------------------------------------------------------+-----------------------+
  24 rows in set (0.01 sec)

View the sharding rule of the logical table. Both database and table sharding use hash; the database shard key is id and the table shard key is bid.

  mysql> show rule from multi_db_multi_tbl;
  +------+--------------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  | ID | TABLE_NAME | BROADCAST | DB_PARTITION_KEY | DB_PARTITION_POLICY | DB_PARTITION_COUNT | TB_PARTITION_KEY | TB_PARTITION_POLICY | TB_PARTITION_COUNT |
  +------+--------------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  | 0 | multi_db_multi_tbl | 0 | id | hash | 8 | bid | hash | 3 |
  +------+--------------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  1 row in set (0.01 sec)
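The two-level routing above can be sketched in a few lines. As a simplification (an assumption, not the exact DRDS implementation), HASH on an integer key is modeled as value % shard_count, and the physical table suffix follows the global numbering seen in SHOW TOPOLOGY:

```python
def route(id_val, bid_val, db_count=8, tb_per_db=3):
    """Model of: dbpartition by hash(id) tbpartition by hash(bid) tbpartitions 3."""
    db_index = id_val % db_count    # which database shard (assumed: value % n)
    tb_index = bid_val % tb_per_db  # which table within that shard
    # Physical tables are numbered globally: shard 0 holds _00.._02, shard 1 _03.._05, ...
    suffix = db_index * tb_per_db + tb_index
    return db_index, "multi_db_multi_tbl_%02d" % suffix

print(route(0, 0))  # id=0, bid=0 -> shard 0, multi_db_multi_tbl_00
print(route(9, 2))  # id=9 -> shard 1; bid=2 -> third table there, multi_db_multi_tbl_05
```

The sketch only illustrates the shape of the calculation; the real hash function DRDS applies to the key is internal to the product.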

Sharding with the dual-field hash function

  • Requirements: the shard keys must be of a character or numeric type.

  • Routing: a hash is computed from the last N characters of either shard key, and routing is done by that hash value. N is the third parameter of the function. For example, with RANGE_HASH(COL1, COL2, N), COL1 is preferred and its last N characters are used for the calculation; when COL1 is absent, COL2 is used instead.

  • Use cases: scenarios that require two shard keys but query with only one of them at a time. For example, suppose a DRDS instance already has 8 physical databases, and the business has the following requirements:

  1. Shard the order table by both buyer ID and order ID.
  2. Query with only the buyer ID or only the order ID.

In this case, the order table can be created with the following DDL:

  create table test_order_tb (
    id bigint not null auto_increment,
    seller_id varchar(30) DEFAULT NULL,
    order_id varchar(30) DEFAULT NULL,
    buyer_id varchar(30) DEFAULT NULL,
    create_time datetime DEFAULT NULL,
    primary key(id)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8 dbpartition by RANGE_HASH(buyer_id, order_id, 10) tbpartition by RANGE_HASH(buyer_id, order_id, 10) tbpartitions 3;
  • Notes:
    • Neither shard key can be modified.
    • An INSERT fails if the two shard keys route to different database or table shards.
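The RANGE_HASH routing rule can be sketched as follows. The real DRDS hash function is not documented here, so CRC32 stands in for it (an assumption); what the sketch does show accurately is the key-selection rule: COL1 is preferred, COL2 is the fallback, and only the last N characters participate.

```python
import zlib

def range_hash(buyer_id, order_id, n=10, shard_count=8):
    """Model of RANGE_HASH(buyer_id, order_id, n)."""
    key = buyer_id if buyer_id is not None else order_id  # prefer buyer_id
    suffix = str(key)[-n:]                                # last n characters
    return zlib.crc32(suffix.encode()) % shard_count      # stand-in hash function

# A query by buyer_id alone and a query by order_id alone route identically,
# provided the two keys share their last n characters.
assert range_hash("buyer_0001234567", None) == range_hash(None, "order_0001234567")
```

This is also why the two keys must agree on their last N characters: otherwise the same row would route to different shards depending on which key is used, which is exactly the failed-INSERT case noted above.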

Sharding by date

Besides hash functions, you can also use the date functions MM/DD/WEEK/MMDD as the table sharding algorithm, as shown in the following five examples:

  • Create a table sharded by both database and table: databases are sharded by hashing the userId column, and tables by the actionDate column over the seven days of the week (WEEK(actionDate) computes DAY_OF_WEEK).

For example, if actionDate is 2017-02-27, a Monday, WEEK(actionDate) yields 2, so the record is stored in table shard 2 (2 % 7 = 2; the physical table in its database shard is user_log_2). If actionDate is 2017-02-26, a Sunday, WEEK(actionDate) yields 1, so the record is stored in table shard 1 (1 % 7 = 1; the physical table is user_log_1).

  CREATE TABLE user_log(
    userId int,
    name varchar(30),
    operation varchar(30),
    actionDate DATE
  ) dbpartition by hash(userId) tbpartition by WEEK(actionDate) tbpartitions 7;

View the node topology of the logical table. You can see that 7 physical tables (one per day of the week) were created in each database shard. The output below is long and has been abbreviated with ... .

  mysql> show topology from user_log;
  +------+------------------------------------------------------------------+------------+
  | ID | GROUP_NAME | TABLE_NAME |
  +------+------------------------------------------------------------------+------------+
  | 0 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log_0 |
  | 1 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log_1 |
  | 2 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log_2 |
  | 3 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log_3 |
  | 4 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log_4 |
  | 5 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log_5 |
  | 6 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log_6 |
  | 7 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log_0 |
  | 8 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log_1 |
  | 9 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log_2 |
  | 10 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log_3 |
  | 11 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log_4 |
  | 12 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log_5 |
  | 13 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log_6 |
  ...
  | 49 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log_0 |
  | 50 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log_1 |
  | 51 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log_2 |
  | 52 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log_3 |
  | 53 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log_4 |
  | 54 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log_5 |
  | 55 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log_6 |
  +------+------------------------------------------------------------------+------------+
  56 rows in set (0.01 sec)

View the sharding rule of the logical table. Databases are sharded by hash with shard key userId; tables are sharded by the time function WEEK with shard key actionDate.

  mysql> show rule from user_log;
  +------+------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  | ID | TABLE_NAME | BROADCAST | DB_PARTITION_KEY | DB_PARTITION_POLICY | DB_PARTITION_COUNT | TB_PARTITION_KEY | TB_PARTITION_POLICY | TB_PARTITION_COUNT |
  +------+------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  | 0 | user_log | 0 | userId | hash | 8 | actionDate | week | 7 |
  +------+------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  1 row in set (0.00 sec)
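The WEEK routing described above can be checked with a short sketch (a model of the documented behavior, not DRDS code): DAY_OF_WEEK counts Sunday as 1 through Saturday as 7, and the result is taken modulo tbpartitions:

```python
import datetime

def week_shard(action_date, tbpartitions=7):
    """Model of: tbpartition by WEEK(actionDate) tbpartitions 7."""
    # isoweekday() is Mon=1..Sun=7; remap to DAY_OF_WEEK with Sun=1..Sat=7
    day_of_week = action_date.isoweekday() % 7 + 1
    return "user_log_%d" % (day_of_week % tbpartitions)

print(week_shard(datetime.date(2017, 2, 27)))  # Monday -> user_log_2
print(week_shard(datetime.date(2017, 2, 26)))  # Sunday -> user_log_1
```

This reproduces both worked examples from the text: the Monday record lands in user_log_2 and the Sunday record in user_log_1.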

You can also check which physical database shard, and which physical table within it, a SQL statement is routed to for given database and table shard key values.

  • Create a table sharded by both database and table: databases are sharded by hashing the userId column, and tables by the actionDate column over the 12 months of the year (MM(actionDate) computes MONTH_OF_YEAR).

For example, if actionDate is 2017-02-27, MM(actionDate) yields 02, so the record is stored in table shard 02 (02 % 12 = 02; the physical table in its database shard is user_log2_02). If actionDate is 2016-12-27, MM(actionDate) yields 12, so the record is stored in table shard 00 (12 % 12 = 00; the physical table is user_log2_00).

  CREATE TABLE user_log2(
    userId int,
    name varchar(30),
    operation varchar(30),
    actionDate DATE
  ) dbpartition by hash(userId) tbpartition by MM(actionDate) tbpartitions 12;

View the node topology of the logical table. You can see that 12 physical tables (one per month) were created in each database shard. The output is long and has been abbreviated with ... .

  mysql> show topology from user_log2;
  +------+------------------------------------------------------------------+--------------+
  | ID | GROUP_NAME | TABLE_NAME |
  +------+------------------------------------------------------------------+--------------+
  | 0 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log2_00 |
  | 1 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log2_01 |
  | 2 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log2_02 |
  | 3 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log2_03 |
  | 4 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log2_04 |
  | 5 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log2_05 |
  | 6 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log2_06 |
  | 7 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log2_07 |
  | 8 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log2_08 |
  | 9 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log2_09 |
  | 10 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log2_10 |
  | 11 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log2_11 |
  | 12 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log2_00 |
  | 13 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log2_01 |
  | 14 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log2_02 |
  | 15 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log2_03 |
  | 16 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log2_04 |
  | 17 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log2_05 |
  | 18 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log2_06 |
  | 19 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log2_07 |
  | 20 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log2_08 |
  | 21 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log2_09 |
  | 22 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log2_10 |
  | 23 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0001_RDS | user_log2_11 |
  ...
  | 84 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log2_00 |
  | 85 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log2_01 |
  | 86 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log2_02 |
  | 87 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log2_03 |
  | 88 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log2_04 |
  | 89 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log2_05 |
  | 90 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log2_06 |
  | 91 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log2_07 |
  | 92 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log2_08 |
  | 93 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log2_09 |
  | 94 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log2_10 |
  | 95 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log2_11 |
  +------+------------------------------------------------------------------+--------------+
  96 rows in set (0.02 sec)

View the sharding rule of the logical table. Databases are sharded by hash with shard key userId; tables are sharded by the time function MM with shard key actionDate.

  mysql> show rule from user_log2;
  +------+------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  | ID | TABLE_NAME | BROADCAST | DB_PARTITION_KEY | DB_PARTITION_POLICY | DB_PARTITION_COUNT | TB_PARTITION_KEY | TB_PARTITION_POLICY | TB_PARTITION_COUNT |
  +------+------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  | 0 | user_log2 | 0 | userId | hash | 8 | actionDate | mm | 12 |
  +------+------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  1 row in set (0.00 sec)
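The MM routing can be modeled the same way (a sketch of the documented rule): MONTH_OF_YEAR modulo the table count, so December wraps around to table 00:

```python
import datetime

def mm_shard(action_date, tbpartitions=12):
    """Model of: tbpartition by MM(actionDate) tbpartitions 12."""
    return "user_log2_%02d" % (action_date.month % tbpartitions)

print(mm_shard(datetime.date(2017, 2, 27)))   # February  -> user_log2_02
print(mm_shard(datetime.date(2016, 12, 27)))  # 12 % 12=0 -> user_log2_00
```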
  • Create a table sharded by both database and table: databases are sharded by hashing the userId column, and tables over the 31 days of a month (DD(actionDate) computes DAY_OF_MONTH).

For example, if actionDate is 2017-02-27, DD(actionDate) yields 27, so the record is stored in table shard 27 (27 % 31 = 27; the physical table in its database shard is user_log3_27).

  CREATE TABLE user_log3(
    userId int,
    name varchar(30),
    operation varchar(30),
    actionDate DATE
  ) dbpartition by hash(userId) tbpartition by DD(actionDate) tbpartitions 31;

View the node topology of the logical table. You can see that 31 physical tables were created in each database shard (every month is treated as having 31 days). The output is long and has been abbreviated with ... .

  mysql> show topology from user_log3;
  +------+------------------------------------------------------------------+--------------+
  | ID | GROUP_NAME | TABLE_NAME |
  +------+------------------------------------------------------------------+--------------+
  | 0 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_00 |
  | 1 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_01 |
  | 2 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_02 |
  | 3 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_03 |
  | 4 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_04 |
  | 5 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_05 |
  | 6 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_06 |
  | 7 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_07 |
  | 8 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_08 |
  | 9 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_09 |
  | 10 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_10 |
  | 11 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_11 |
  | 12 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_12 |
  | 13 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_13 |
  | 14 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_14 |
  | 15 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_15 |
  | 16 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_16 |
  | 17 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_17 |
  | 18 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_18 |
  | 19 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_19 |
  | 20 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_20 |
  | 21 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_21 |
  | 22 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_22 |
  | 23 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_23 |
  | 24 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_24 |
  | 25 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_25 |
  | 26 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_26 |
  | 27 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_27 |
  | 28 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_28 |
  | 29 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_29 |
  | 30 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log3_30 |
  ...
  | 237 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log3_20 |
  | 238 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log3_21 |
  | 239 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log3_22 |
  | 240 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log3_23 |
  | 241 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log3_24 |
  | 242 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log3_25 |
  | 243 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log3_26 |
  | 244 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log3_27 |
  | 245 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log3_28 |
  | 246 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log3_29 |
  | 247 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log3_30 |
  +------+------------------------------------------------------------------+--------------+
  248 rows in set (0.01 sec)

View the sharding rule of the logical table. Databases are sharded by hash with shard key userId; tables are sharded by the time function DD with shard key actionDate.

  mysql> show rule from user_log3;
  +------+------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  | ID | TABLE_NAME | BROADCAST | DB_PARTITION_KEY | DB_PARTITION_POLICY | DB_PARTITION_COUNT | TB_PARTITION_KEY | TB_PARTITION_POLICY | TB_PARTITION_COUNT |
  +------+------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  | 0 | user_log3 | 0 | userId | hash | 8 | actionDate | dd | 31 |
  +------+------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  1 row in set (0.01 sec)
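DD routing follows the same pattern (a sketch of the documented rule): DAY_OF_MONTH modulo the table count, so day 31 wraps around to table 00:

```python
import datetime

def dd_shard(action_date, tbpartitions=31):
    """Model of: tbpartition by DD(actionDate) tbpartitions 31."""
    return "user_log3_%02d" % (action_date.day % tbpartitions)

print(dd_shard(datetime.date(2017, 2, 27)))  # day 27 -> user_log3_27
print(dd_shard(datetime.date(2017, 1, 31)))  # day 31 wraps to user_log3_00
```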
  • Create a table sharded by both database and table: databases are sharded by hashing the userId column, and tables over the 365 days of a year, routed to 365 physical tables (MMDD(actionDate) tbpartitions 365 computes DAY_OF_YEAR % 365).

For example, if actionDate is 2017-02-27, MMDD(actionDate) yields 58, so the record is stored in table shard 58 (the physical table in its database shard is user_log4_58).

  CREATE TABLE user_log4(
    userId int,
    name varchar(30),
    operation varchar(30),
    actionDate DATE
  ) dbpartition by hash(userId) tbpartition by MMDD(actionDate) tbpartitions 365;

View the node topology of the logical table. You can see that 365 physical tables were created in each database shard (every year is treated as having 365 days). The output is long and has been abbreviated with ... .

  mysql> show topology from user_log4;
  +------+------------------------------------------------------------------+---------------+
  | ID | GROUP_NAME | TABLE_NAME |
  +------+------------------------------------------------------------------+---------------+
  ...
  | 2896 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_341 |
  | 2897 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_342 |
  | 2898 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_343 |
  | 2899 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_344 |
  | 2900 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_345 |
  | 2901 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_346 |
  | 2902 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_347 |
  | 2903 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_348 |
  | 2904 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_349 |
  | 2905 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_350 |
  | 2906 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_351 |
  | 2907 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_352 |
  | 2908 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_353 |
  | 2909 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_354 |
  | 2910 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_355 |
  | 2911 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_356 |
  | 2912 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_357 |
  | 2913 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_358 |
  | 2914 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_359 |
  | 2915 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_360 |
  | 2916 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_361 |
  | 2917 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_362 |
  | 2918 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_363 |
  | 2919 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log4_364 |
  +------+------------------------------------------------------------------+---------------+
  2920 rows in set (0.07 sec)

View the sharding rule of the logical table. Databases are sharded by hash with shard key userId; tables are sharded by the time function MMDD with shard key actionDate.

  mysql> show rule from user_log4;
  +------+------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  | ID | TABLE_NAME | BROADCAST | DB_PARTITION_KEY | DB_PARTITION_POLICY | DB_PARTITION_COUNT | TB_PARTITION_KEY | TB_PARTITION_POLICY | TB_PARTITION_COUNT |
  +------+------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  | 0 | user_log4 | 0 | userId | hash | 8 | actionDate | mmdd | 365 |
  +------+------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  1 row in set (0.02 sec)
  • Create a table sharded by both database and table: databases are sharded by hashing the userId column, and tables over the 365 days of a year, routed to 10 physical tables (MMDD(actionDate) tbpartitions 10 computes DAY_OF_YEAR % 10).

  CREATE TABLE user_log5(
    userId int,
    name varchar(30),
    operation varchar(30),
    actionDate DATE
  ) dbpartition by hash(userId) tbpartition by MMDD(actionDate) tbpartitions 10;

View the node topology of the logical table. You can see that 10 physical tables were created in each database shard (a 365-day year routed to 10 physical tables). The output is long and has been abbreviated with ... .

  mysql> show topology from user_log5;
  +------+------------------------------------------------------------------+--------------+
  | ID | GROUP_NAME | TABLE_NAME |
  +------+------------------------------------------------------------------+--------------+
  | 0 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log5_00 |
  | 1 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log5_01 |
  | 2 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log5_02 |
  | 3 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log5_03 |
  | 4 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log5_04 |
  | 5 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log5_05 |
  | 6 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log5_06 |
  | 7 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log5_07 |
  | 8 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log5_08 |
  | 9 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0000_RDS | user_log5_09 |
  ...
  | 70 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log5_00 |
  | 71 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log5_01 |
  | 72 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log5_02 |
  | 73 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log5_03 |
  | 74 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log5_04 |
  | 75 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log5_05 |
  | 76 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log5_06 |
  | 77 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log5_07 |
  | 78 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log5_08 |
  | 79 | SANGUAN_TEST_123_1488766060743ACTJSANGUAN_TEST_123_WVVP_0007_RDS | user_log5_09 |
  +------+------------------------------------------------------------------+--------------+
  80 rows in set (0.02 sec)

Inspect the sharding rule of this logical table: the database-level sharding method is hash with userId as the sharding key, and the table-level sharding method uses the date function MMDD on the sharding key actionDate, routing data to 10 physical tables.

  mysql> show rule from user_log5;
  +------+------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  | ID | TABLE_NAME | BROADCAST | DB_PARTITION_KEY | DB_PARTITION_POLICY | DB_PARTITION_COUNT | TB_PARTITION_KEY | TB_PARTITION_POLICY | TB_PARTITION_COUNT |
  +------+------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  | 0 | user_log5 | 0 | userId | hash | 8 | actionDate | mmdd | 10 |
  +------+------------+-----------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+
  1 row in set (0.01 sec)

Using the Primary Key as the Sharding Key

When the sharding function is given no sharding column, the system uses the primary key as the sharding column by default. The following examples show how to use the primary key as the database and table sharding key.

  • Use the primary key as the database sharding key
  CREATE TABLE prmkey_tbl(
  id bigint not null auto_increment,
  name varchar(30),
  primary key(id)
  ) dbpartition by hash();
  • Use the primary key as the database and table sharding key
  CREATE TABLE prmkey_multi_tbl(
  id bigint not null auto_increment,
  name varchar(30),
  primary key(id)
  ) dbpartition by hash() tbpartition by hash() tbpartitions 3;
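With the primary key as the sharding key, a query that filters on the primary key can be routed to a single physical table. A minimal sketch against the table defined above (the literal value 1001 is illustrative):

  -- Filtering on the sharding key (here the primary key id) lets DRDS
  -- compute the target shard and push the query down to one physical table.
  SELECT name FROM prmkey_multi_tbl WHERE id = 1001;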

Broadcast Tables

The BROADCAST clause creates a broadcast table. A broadcast table is replicated to every physical database, with data kept consistent across shards by a synchronization mechanism that has second-level latency. This allows JOIN operations to be pushed down to the underlying RDS (MySQL) instances, avoiding cross-database JOINs. The SQL 优化方法 (SQL optimization) document describes in detail how to use broadcast tables for SQL optimization.

  CREATE TABLE brd_tbl(
  id bigint not null auto_increment,
  name varchar(30),
  primary key(id)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8 BROADCAST;
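To illustrate the push-down benefit, consider a hypothetical sharded table orders (not defined above) joined with the broadcast table brd_tbl; because brd_tbl exists on every physical database, each shard can perform the JOIN locally:

  -- orders is a hypothetical sharded table; brd_tbl is the broadcast table above.
  -- The JOIN can be pushed down to each RDS instance instead of crossing databases.
  SELECT o.id, b.name
  FROM orders o JOIN brd_tbl b ON o.brd_id = b.id;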

Other MySQL Table Attributes

While sharding a table, you can also specify other MySQL table attributes, for example:

  CREATE TABLE multi_db_multi_tbl(
  id bigint not null auto_increment,
  name varchar(30),
  primary key(id)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8 dbpartition by hash(id) tbpartition by hash(id) tbpartitions 3;

Global Secondary Indexes

This section describes how to define a global secondary index (GSI) at table creation time.

Defining a Global Secondary Index

Example

  CREATE TABLE t_order (
  `id` bigint(11) NOT NULL AUTO_INCREMENT,
  `order_id` varchar(20) DEFAULT NULL,
  `buyer_id` varchar(20) DEFAULT NULL,
  `seller_id` varchar(20) DEFAULT NULL,
  `order_snapshot` longtext DEFAULT NULL,
  `order_detail` longtext DEFAULT NULL,
  PRIMARY KEY (`id`),
  GLOBAL INDEX `g_i_seller`(`seller_id`) dbpartition by hash(`seller_id`)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8 dbpartition by hash(`order_id`);

In this example:

  • Primary table: t_order is sharded across databases only (not tables), hashed on the order_id column.
  • Index table: g_i_seller is sharded across databases only (not tables), hashed on the seller_id column, with no covering columns specified.
  • Index definition clause: GLOBAL INDEX `g_i_seller`(`seller_id`) dbpartition by hash(`seller_id`)

SHOW INDEX displays the index information: a local index on the sharding key order_id, plus the GSI on seller_id, id, and order_id, where seller_id is the sharding key of the index table and id and order_id are the default covering columns (the primary key and the primary table's sharding key).

Note: for the limitations and conventions of GSI, see the DRDS 全局二级索引 usage documentation; for details on SHOW INDEX, see the SHOW INDEX documentation.

  mysql> show index from t_order;
  +---------+------------+-------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+----------+---------------+
  | TABLE | NON_UNIQUE | KEY_NAME | SEQ_IN_INDEX | COLUMN_NAME | COLLATION | CARDINALITY | SUB_PART | PACKED | NULL | INDEX_TYPE | COMMENT | INDEX_COMMENT |
  +---------+------------+-------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+----------+---------------+
  | t_order | 0 | PRIMARY | 1 | id | A | 0 | NULL | NULL | | BTREE | | |
  | t_order | 1 | auto_shard_key_order_id | 1 | order_id | A | 0 | NULL | NULL | YES | BTREE | | |
  | t_order | 1 | g_i_seller | 1 | seller_id | NULL | 0 | NULL | NULL | YES | GLOBAL | INDEX | |
  | t_order | 1 | g_i_seller | 2 | id | NULL | 0 | NULL | NULL | | GLOBAL | COVERING | |
  | t_order | 1 | g_i_seller | 3 | order_id | NULL | 0 | NULL | NULL | YES | GLOBAL | COVERING | |
  +---------+------------+-------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+----------+---------------+

SHOW GLOBAL INDEX displays the GSI information on its own; see the SHOW GLOBAL INDEX documentation for details.

  mysql> show global index from t_order;
  +--------+---------+------------+------------+-------------+----------------+------------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+--------+
  | SCHEMA | TABLE | NON_UNIQUE | KEY_NAME | INDEX_NAMES | COVERING_NAMES | INDEX_TYPE | DB_PARTITION_KEY | DB_PARTITION_POLICY | DB_PARTITION_COUNT | TB_PARTITION_KEY | TB_PARTITION_POLICY | TB_PARTITION_COUNT | STATUS |
  +--------+---------+------------+------------+-------------+----------------+------------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+--------+
  | d7 | t_order | 1 | g_i_seller | seller_id | id, order_id | NULL | seller_id | HASH | 8 | | NULL | NULL | PUBLIC |
  +--------+---------+------------+------------+-------------+----------------+------------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+--------+

Inspect the structure of the index table: it contains the primary table's primary key, its database/table sharding keys, and the default covering columns. The primary key column has the AUTO_INCREMENT attribute removed, and the primary table's local indexes are not carried over.

  mysql> show create table g_i_seller;
  +------------+-----------------------------------------------------------+
  | Table | Create Table |
  +------------+-----------------------------------------------------------+
  | g_i_seller | CREATE TABLE `g_i_seller` (
  `id` bigint(11) NOT NULL,
  `order_id` varchar(20) DEFAULT NULL,
  `seller_id` varchar(20) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `auto_shard_key_seller_id` (`seller_id`) USING BTREE
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8 dbpartition by hash(`seller_id`) |
  +------------+-----------------------------------------------------------+
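A query that filters on seller_id and reads only covered columns can, in principle, be answered from the index table alone instead of scanning every shard of t_order (the literal value is illustrative):

  -- seller_id is the index table's sharding key; id and order_id are covered,
  -- so this query can be served from g_i_seller without touching t_order.
  SELECT id, order_id FROM t_order WHERE seller_id = 'S001';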

Defining a Globally Unique Index

Example

  CREATE TABLE t_order (
  `id` bigint(11) NOT NULL AUTO_INCREMENT,
  `order_id` varchar(20) DEFAULT NULL,
  `buyer_id` varchar(20) DEFAULT NULL,
  `seller_id` varchar(20) DEFAULT NULL,
  `order_snapshot` longtext DEFAULT NULL,
  `order_detail` longtext DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE GLOBAL INDEX `g_i_buyer`(`buyer_id`) COVERING(`seller_id`, `order_snapshot`)
  dbpartition by hash(`buyer_id`) tbpartition by hash(`buyer_id`) tbpartitions 3
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8 dbpartition by hash(`order_id`);

In this example:

  • Primary table: t_order is sharded across databases only (not tables), hashed on the order_id column.
  • Index table: g_i_buyer is sharded across both databases and tables, hashed on the buyer_id column for both; the covering columns are seller_id and order_snapshot.
  • Index definition clause: UNIQUE GLOBAL INDEX `g_i_buyer`(`buyer_id`) COVERING(`seller_id`, `order_snapshot`) dbpartition by hash(`buyer_id`) tbpartition by hash(`buyer_id`) tbpartitions 3

SHOW INDEX displays the index information: a local index on the sharding key order_id, plus the GSI on buyer_id, id, order_id, seller_id, and order_snapshot, where buyer_id is the sharding key of the index table, id and order_id are the default covering columns (the primary key and the primary table's sharding key), and seller_id and order_snapshot are the explicitly specified covering columns.

Note: for the limitations and conventions of GSI, see the DRDS 全局二级索引 usage documentation; for details on SHOW INDEX, see the SHOW INDEX documentation.

  mysql> show index from t_order;
  +--------------+------------+-------------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+----------+---------------+
  | TABLE | NON_UNIQUE | KEY_NAME | SEQ_IN_INDEX | COLUMN_NAME | COLLATION | CARDINALITY | SUB_PART | PACKED | NULL | INDEX_TYPE | COMMENT | INDEX_COMMENT |
  +--------------+------------+-------------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+----------+---------------+
  | t_order_dthb | 0 | PRIMARY | 1 | id | A | 0 | NULL | NULL | | BTREE | | |
  | t_order_dthb | 1 | auto_shard_key_order_id | 1 | order_id | A | 0 | NULL | NULL | YES | BTREE | | |
  | t_order | 0 | g_i_buyer | 1 | buyer_id | NULL | 0 | NULL | NULL | YES | GLOBAL | INDEX | |
  | t_order | 1 | g_i_buyer | 2 | id | NULL | 0 | NULL | NULL | | GLOBAL | COVERING | |
  | t_order | 1 | g_i_buyer | 3 | order_id | NULL | 0 | NULL | NULL | YES | GLOBAL | COVERING | |
  | t_order | 1 | g_i_buyer | 4 | seller_id | NULL | 0 | NULL | NULL | YES | GLOBAL | COVERING | |
  | t_order | 1 | g_i_buyer | 5 | order_snapshot | NULL | 0 | NULL | NULL | YES | GLOBAL | COVERING | |
  +--------------+------------+-------------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+----------+---------------+

SHOW GLOBAL INDEX displays the GSI information on its own; see the SHOW GLOBAL INDEX documentation for details.

  mysql> show global index from t_order;
  +--------+---------+------------+-----------+-------------+-----------------------------------------+------------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+--------+
  | SCHEMA | TABLE | NON_UNIQUE | KEY_NAME | INDEX_NAMES | COVERING_NAMES | INDEX_TYPE | DB_PARTITION_KEY | DB_PARTITION_POLICY | DB_PARTITION_COUNT | TB_PARTITION_KEY | TB_PARTITION_POLICY | TB_PARTITION_COUNT | STATUS |
  +--------+---------+------------+-----------+-------------+-----------------------------------------+------------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+--------+
  | d7 | t_order | 0 | g_i_buyer | buyer_id | id, order_id, seller_id, order_snapshot | NULL | buyer_id | HASH | 8 | buyer_id | HASH | 3 | PUBLIC |
  +--------+---------+------------+-----------+-------------+-----------------------------------------+------------+------------------+---------------------+--------------------+------------------+---------------------+--------------------+--------+

Inspect the structure of the index table: it contains the primary table's primary key, its database/table sharding keys, the default covering columns, and the covering columns specified in the GSI definition. The primary key column has the AUTO_INCREMENT attribute removed, and the primary table's local indexes are not carried over. For a globally unique index, a unique index is created by default on all of the index table's sharding keys to enforce the global uniqueness constraint.

  mysql> show create table g_i_buyer;
  +-----------+--------------------------------------------------------------------------------------------------------+
  | Table | Create Table |
  +-----------+--------------------------------------------------------------------------------------------------------+
  | g_i_buyer | CREATE TABLE `g_i_buyer` (
  `id` bigint(11) NOT NULL,
  `order_id` varchar(20) DEFAULT NULL,
  `buyer_id` varchar(20) DEFAULT NULL,
  `seller_id` varchar(20) DEFAULT NULL,
  `order_snapshot` longtext,
  PRIMARY KEY (`id`),
  UNIQUE KEY `auto_shard_key_buyer_id` (`buyer_id`) USING BTREE
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8 dbpartition by hash(`buyer_id`) tbpartition by hash(`buyer_id`) tbpartitions 3 |
  +-----------+--------------------------------------------------------------------------------------------------------+
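Because g_i_buyer is a unique GSI, duplicate buyer_id values are rejected even when the conflicting rows would otherwise route to different physical shards of t_order. A sketch (the literal values are illustrative):

  INSERT INTO t_order (order_id, buyer_id) VALUES ('o-1001', 'b-100');
  -- A second row with the same buyer_id is expected to fail with a
  -- duplicate-key error, enforced via the unique key on the index table:
  INSERT INTO t_order (order_id, buyer_id) VALUES ('o-2002', 'b-100');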