CLUSTER RESET [HARD|SOFT]
Resets a Redis Cluster node, more or less drastically depending on the reset type, which can be HARD or SOFT. Note that the command does not work on masters that hold one or more keys; to completely reset such a master, its keys must be removed first, e.g. with FLUSHALL followed by CLUSTER RESET (see the sketch after the effects list below). Effects on the node:
1. All the other nodes in the cluster are forgotten.
2. All the assigned / open slots are reset, so the slots-to-nodes mapping is totally cleared.
3. If the node is a slave, it is turned into an empty master: its dataset is flushed.
4. Hard reset only: a new Node ID is generated.
5. Hard reset only: the currentEpoch and configEpoch variables are set to 0.
6. The new configuration is persisted on disk in the node's cluster configuration file.
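Resetting a master that still holds keys fails outright, which is why the FLUSHALL step matters. A minimal sketch of the expected interaction (the key count is illustrative, and the exact error wording may vary by Redis version):

192.168.2.205:6381> DBSIZE
(integer) 42
192.168.2.205:6381> CLUSTER RESET
(error) ERR CLUSTER RESET can't be called with master nodes containing keys
192.168.2.205:6381> FLUSHALL
OK
192.168.2.205:6381> CLUSTER RESET
OK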
This command is mainly useful to re-provision a Redis Cluster node so that it can be used in the context of a new, different cluster. It is also used extensively by the Redis Cluster testing framework to reset the state of the cluster every time a new test unit is executed.
If no reset type is specified, the default is soft.
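As a sketch of the re-provisioning workflow: hard-reset the node, then introduce it to the target cluster with CLUSTER MEET, issued against any node that is already a member of that cluster. The 192.168.2.210:7000 address below is a hypothetical node of the new cluster:

# ./redis-cli -h 192.168.2.205 -p 6381 CLUSTER RESET HARD
OK
# ./redis-cli -h 192.168.2.210 -p 7000 CLUSTER MEET 192.168.2.205 6381
OK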
Return value
Simple string reply: OK if the command was successful; otherwise an error is returned.
A concrete example:
# ./redis-cli -c -p 6381 -h 192.168.2.205
192.168.2.205:6381> FLUSHALL
OK
192.168.2.205:6381> CLUSTER RESET
OK
192.168.2.205:6381> CLUSTER RESET HARD
OK
192.168.2.205:6381> CLUSTER RESET SOFT
OK
192.168.2.205:6381> exit
# /data/lnmp/6381/bin/redis-trib.rb check 192.168.2.205:6381
>>> Performing Cluster Check (using node 192.168.2.205:6381)
M: b227de30b608e8eefeee9fdd2c633786e3b2d314 192.168.2.205:6381
slots: (0 slots) master
0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[ERR] Not all 16384 slots are covered by nodes.
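
This output is the expected consequence of the reset: 6381 forgot every other node and had its slot assignments cleared, so a check run against it sees only a single empty master and reports the missing slot coverage.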
# /data/lnmp/6381/bin/redis-trib.rb check 192.168.2.205:6382
*** WARNING: 192.168.2.205:6382 claims to be slave of unknown node ID b4ee0dac81bc99bc2e5f93888503239636e728e4.
>>> Performing Cluster Check (using node 192.168.2.205:6382)
S: fb5e9a6a2f4be5c41b166e5dac1e953be73909ba 192.168.2.205:6382
slots: (0 slots) slave
replicates b4ee0dac81bc99bc2e5f93888503239636e728e4
M: 2b9d4483cf885d796aa87518f457ba8e878cb061 192.168.2.207:6381
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 6ad8e7bdcfaed6acac134b7a07e7d291db44c307 192.168.2.207:6382
slots: (0 slots) slave
replicates e1ed22a04537e50908938ad924726cee7b9ffdad
S: 850a1ed8e723b4a80a92a1198b84689e9026a08e 192.168.2.206:6382
slots: (0 slots) slave
replicates 2b9d4483cf885d796aa87518f457ba8e878cb061
M: e1ed22a04537e50908938ad924726cee7b9ffdad 192.168.2.206:6381
slots:5461-10922 (5462 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[ERR] Not all 16384 slots are covered by nodes.
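
The WARNING is also a direct consequence of the hard reset: 6382 still replicates the Node ID that 6381 had before the reset (b4ee0dac...), but a hard reset generates a new Node ID (b227de30... above), so no live node carries the old one anymore. One minimal path to bring the node back is to re-introduce it with CLUSTER MEET and, once gossip has propagated the new ID, re-point the orphaned slave with CLUSTER REPLICATE. This is only a sketch; the slot range the old master served would still have to be re-assigned, e.g. with redis-trib.rb fix or CLUSTER ADDSLOTS:

# ./redis-cli -h 192.168.2.205 -p 6381 CLUSTER MEET 192.168.2.206 6381
OK
# ./redis-cli -h 192.168.2.205 -p 6382 CLUSTER REPLICATE b227de30b608e8eefeee9fdd2c633786e3b2d314
OK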