Deploying Seata with Docker Compose

Environment Preparation

This deployment registers the Seata server with Nacos, so a running Nacos server must be prepared before installing Seata.
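If a Nacos server is not already available, a standalone instance can be started with Docker. The command below is only a minimal sketch that assumes the official nacos/nacos-server image and its default port 8848; adjust the image tag, port, and address to match your own environment.

# Illustrative only: run a single-node Nacos instance in standalone mode
docker run -d --name nacos -e MODE=standalone -p 8848:8848 nacos/nacos-server:1.3.2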

  1. Pull the image

    docker pull seataio/seata-server:1.3.0
  2. Start a temporary container

    docker run --name temp-seata-server -p 8091:8091 seataio/seata-server:1.3.0
  3. Find the container ID

    docker ps -a

    Locate the container you just started and copy its container ID.
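    Since the temporary container was started with --name temp-seata-server, its ID can also be looked up by name, for example:

    docker ps -aqf "name=temp-seata-server"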

  4. Export the Seata configuration files

    # Create a directory to hold the configuration files
    mkdir -p /home/seata-config
    # Copy the resources directory out of the container (docker cp does not support wildcards)
    docker cp <container ID>:/seata-server/resources /home/seata-config

    Once the copy is complete, the temporary container can be removed.
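    For example:

    docker rm -f temp-seata-server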

  5. Prepare the database

    Download the Seata source code from: https://github.com/seata/seata/archive/refs/tags/v1.3.0.zip

    Locate the MySQL script under script/server/db/ in the source tree and use it to create the tables. Before creating the tables, first create a database named seata-db, as shown below.
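    For example (a minimal sketch; adjust the character set to your needs):

    CREATE DATABASE IF NOT EXISTS `seata-db` DEFAULT CHARACTER SET utf8;
    USE `seata-db`;
    -- then execute the table creation script below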

    Script file:

    -- -------------------------------- The script used when storeMode is 'db' --------------------------------
    -- the table to store GlobalSession data
    CREATE TABLE IF NOT EXISTS `global_table`
    (
    `xid` VARCHAR(128) NOT NULL,
    `transaction_id` BIGINT,
    `status` TINYINT NOT NULL,
    `application_id` VARCHAR(32),
    `transaction_service_group` VARCHAR(32),
    `transaction_name` VARCHAR(128),
    `timeout` INT,
    `begin_time` BIGINT,
    `application_data` VARCHAR(2000),
    `gmt_create` DATETIME,
    `gmt_modified` DATETIME,
    PRIMARY KEY (`xid`),
    KEY `idx_gmt_modified_status` (`gmt_modified`, `status`),
    KEY `idx_transaction_id` (`transaction_id`)
    ) ENGINE = InnoDB
    DEFAULT CHARSET = utf8;

    -- the table to store BranchSession data
    CREATE TABLE IF NOT EXISTS `branch_table`
    (
    `branch_id` BIGINT NOT NULL,
    `xid` VARCHAR(128) NOT NULL,
    `transaction_id` BIGINT,
    `resource_group_id` VARCHAR(32),
    `resource_id` VARCHAR(256),
    `branch_type` VARCHAR(8),
    `status` TINYINT,
    `client_id` VARCHAR(64),
    `application_data` VARCHAR(2000),
    `gmt_create` DATETIME(6),
    `gmt_modified` DATETIME(6),
    PRIMARY KEY (`branch_id`),
    KEY `idx_xid` (`xid`)
    ) ENGINE = InnoDB
    DEFAULT CHARSET = utf8;

    -- the table to store lock data
    CREATE TABLE IF NOT EXISTS `lock_table`
    (
    `row_key` VARCHAR(128) NOT NULL,
    `xid` VARCHAR(96),
    `transaction_id` BIGINT,
    `branch_id` BIGINT NOT NULL,
    `resource_id` VARCHAR(256),
    `table_name` VARCHAR(32),
    `pk` VARCHAR(36),
    `gmt_create` DATETIME,
    `gmt_modified` DATETIME,
    PRIMARY KEY (`row_key`),
    KEY `idx_branch_id` (`branch_id`)
    ) ENGINE = InnoDB
    DEFAULT CHARSET = utf8;

    The script above creates three tables, all of which Seata requires to coordinate transactions.

Modifying the Configuration

The configuration exported from the Docker container contains many files; we mainly need to modify two of them: file.conf and registry.conf.

file.conf: transaction log storage configuration


## transaction log store, only used in seata-server
store {
  ## store mode: file、db、redis
  mode = "db"

  ## file store property
  file {
    ## store location dir
    dir = "sessionStore"
    # branch session size , if exceeded first try compress lockkey, still exceeded throws exceptions
    maxBranchSessionSize = 16384
    # globe session size , if exceeded throws exceptions
    maxGlobalSessionSize = 512
    # file buffer size , if exceeded allocate new buffer
    fileWriteBufferCacheSize = 16384
    # when recover batch read size
    sessionReloadReadSize = 100
    # async, sync
    flushDiskMode = async
  }

  ## database store property
  db {
    ## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp)/HikariDataSource(hikari) etc.
    datasource = "druid"
    ## mysql/oracle/postgresql/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.cj.jdbc.Driver"
    url = "jdbc:mysql://172.16.3.36:3306/seata-db?useUnicode=true&characterEncoding=UTF-8&useCompression=true&rewriteBatchedStatements=true&useSSL=false&serverTimezone=Asia/Shanghai"
    user = "root"
    password = "admin123"
    minConn = 5
    maxConn = 30
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
    maxWait = 5000
  }

  ## redis store property
  redis {
    host = "127.0.0.1"
    port = "6379"
    password = ""
    database = "0"
    minConn = 1
    maxConn = 10
    queryLimit = 100
  }

}

The key settings are store.mode and store.db.

registry.conf: service registration and discovery configuration

registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "nacos"

  nacos {
    application = "seata-server"
    serverAddr = "172.16.3.2"
    group = "SEATA_GROUP"
    namespace = "b1d472f3-5672-4af3-a222-c312f157858a"
    cluster = "default"
    username = "nacos"
    password = "nacos"
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = 0
    password = ""
    cluster = "default"
    timeout = 0
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file、nacos 、apollo、zk、consul、etcd3
  type = "nacos"

  nacos {
    serverAddr = "172.16.3.2"
    namespace = "b1d472f3-5672-4af3-a222-c312f157858a"
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  apollo {
    appId = "seata-server"
    apolloMeta = "http://192.168.1.204:8801"
    namespace = "application"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}

The key settings are registry.type, registry.nacos, config.type, and config.nacos.

Initializing the Configuration Center

The purpose of this step is to have Nacos manage the configuration shared between the Seata client and server.

Edit the configuration file

Location in the source tree: script/config-center/config.txt

transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableClientBatchSendRequest=false
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
service.vgroupMapping.youzi_maill_group=default
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
store.mode=db
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.cj.jdbc.Driver
store.db.url=jdbc:mysql://172.16.3.36:3306/seata-db?useUnicode=true&characterEncoding=UTF-8&useCompression=true&rewriteBatchedStatements=true&useSSL=false&serverTimezone=Asia/Shanghai
store.db.user=root
store.db.password=admin123
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
store.redis.host=127.0.0.1
store.redis.port=6379
store.redis.maxConn=10
store.redis.minConn=1
store.redis.database=0
store.redis.password=null
store.redis.queryLimit=100
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.log.exceptionRate=100
transport.serialization=seata
transport.compressor=none
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898

The settings to focus on:

  • service.vgroupMapping.<group name>=default: configures a transaction group. The key part is the group name, which Seata uses to group transactions. If you need several groups, duplicate the line and change the group name. The same group name must also appear in each microservice's configuration (see the client-side sketch below this list). The value default is the cluster name and corresponds to registry.nacos.cluster in the Seata Server's registry.conf.
  • service.default.grouplist=<IP>:<port>: the list of Seata Server cluster addresses and ports.
  • store.db.datasource: which connection pool implementation to use.
  • store.db.dbType: which type of database to use.
  • store.db.driverClassName=com.mysql.cj.jdbc.Driver: the JDBC driver class.
  • store.db.url: the database connection URL.
  • store.db.user: the database username.
  • store.db.password: the database password.
  • store.db.globalTable: the global transaction table, matching the table created earlier.
  • store.db.branchTable: the branch transaction table, matching the table created earlier.
  • store.db.lockTable: the transaction lock table, matching the table created earlier.

The remaining settings can be left at their defaults, or adjusted as your scenario requires; see the official documentation for details.
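As noted for service.vgroupMapping above, the transaction group name must also be configured on each microservice. The snippet below is only a rough sketch of what that can look like for a client using seata-spring-boot-starter; the property names assume that starter, and the addresses and namespace are the ones used in this article, so adjust them to your environment.

seata:
  tx-service-group: youzi_maill_group        # must match service.vgroupMapping.<group name>
  registry:
    type: nacos
    nacos:
      application: seata-server
      server-addr: 172.16.3.2
      group: SEATA_GROUP
      namespace: b1d472f3-5672-4af3-a222-c312f157858a
  config:
    type: nacos
    nacos:
      server-addr: 172.16.3.2
      group: SEATA_GROUP
      namespace: b1d472f3-5672-4af3-a222-c312f157858a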

Push the configuration to Nacos

Once the configuration is written, use the script from the source tree to push it into Nacos, so that the Seata server and clients share the same configuration.

Script location in the source tree: script/config-center/nacos/nacos-config.sh

Run the command:

# run from the directory that contains the script
sh nacos-config.sh -h 172.16.3.2 -p 80 -g SEATA_GROUP -t b1d472f3-5672-4af3-a222-c312f157858a -u nacos -w nacos
  • -h: Nacos server address.
  • -p: Nacos server port.
  • -g: Nacos configuration group name.
  • -t: Nacos configuration center namespace.
  • -u: Nacos username.
  • -w: Nacos password.

Once the command finishes, you will see a stream of write logs in the console. Open the Nacos console, go to the configuration center, and refresh to see the configuration that was just written.
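Individual keys can also be checked through the Nacos Open API instead of the console. The call below is only an illustrative check, assuming the Nacos v1 API on the server address, port, group, and namespace used in the command above; it should return the value default configured for the transaction group.

curl "http://172.16.3.2:80/nacos/v1/cs/configs?dataId=service.vgroupMapping.youzi_maill_group&group=SEATA_GROUP&tenant=b1d472f3-5672-4af3-a222-c312f157858a"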

Composing and Deploying the Seata Service

Compose file

version: "3.1"

services:

  seata-server:
    image: seataio/seata-server:1.3.0
    hostname: seata-server
    container_name: seata-server
    ports:
      - 8091:8091
    environment:
      - SEATA_IP=172.16.3.3
      - SEATA_PORT=8091
    volumes:
      - /home/seata-config/resources:/seata-server/resources
    expose:
      - 8091

Pay particular attention to the SEATA_IP environment variable and the volumes entry, and fill them in according to your actual environment. SEATA_IP is the address the Seata server registers with Nacos, so it must be reachable by your microservices.

Once the compose file is written, start the service:

1
docker-compose -f <compose file path> up -d

After startup, check the container logs; as long as there are no errors, the deployment is fine.

1
docker logs -f <container ID>
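You can also confirm that seata-server has registered with Nacos, either in the service list of the Nacos console or, as an illustrative check, through the Nacos v1 naming API using the address and namespace from this article:

curl "http://172.16.3.2:80/nacos/v1/ns/instance/list?serviceName=seata-server&groupName=SEATA_GROUP&namespaceId=b1d472f3-5672-4af3-a222-c312f157858a"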
