

Complete Steps for Adding and Removing Redis Cluster Nodes


Preface

I have been learning Redis recently and found it quite fun to play with. Today I tested adding and removing cluster nodes and reassigning slots, to get a deeper feel for how Redis Cluster works. There are a lot of steps, but they are laid out in detail, so without further ado, here is the full walkthrough.

Environment:

I tested everything on a single CentOS 6.9 host, with the Redis nodes distinguished by port number; throughout this article each Redis instance is referred to simply by its port. A minimal per-node config sketch follows the node list below.

~~~~Master Node~~~~~
172.16.32.116:7000
172.16.32.116:7001
172.16.32.116:7002
~~~~Slave Node~~~~~
172.16.32.116:8000
172.16.32.116:8001
172.16.32.116:8002
~~~~Nodes used for experimenting~~~~~
172.16.32.116:9000
172.16.32.116:9001
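
For reference, each instance runs from its own directory with a minimal cluster-mode configuration. A sketch is shown below (the directory layout and the appendonly choice are assumptions, not taken from the original setup); only the port changes from node to node:

# 7000/redis.conf  (layout assumed)
port 7000                        # each node listens on its own port
cluster-enabled yes              # run this instance in cluster mode
cluster-config-file nodes.conf   # cluster state file, written and maintained by Redis itself
cluster-node-timeout 5000        # milliseconds before a peer is considered failing
appendonly yes                   # optional AOF persistence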

1. Create the Redis cluster

Note: for more on creating a Redis cluster, see

Redis Cluster deployment and setup

# ./redis-trib.rb create --replicas 1 172.16.32.116:7000 172.16.32.116:7001 172.16.32.116:7002 172.16.32.116:8000 172.16.32.116:8001 172.16.32.116:8002
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.16.32.116:7000
172.16.32.116:7001
172.16.32.116:7002
Adding replica 172.16.32.116:8000 to 172.16.32.116:7000
Adding replica 172.16.32.116:8001 to 172.16.32.116:7001
Adding replica 172.16.32.116:8002 to 172.16.32.116:7002
M: a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000
 slots:0-5460 (5461 slots) master
M: 273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001
 slots:5461-10922 (5462 slots) master
M: 88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002
 slots:10923-16383 (5461 slots) master
S: aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000
 replicates a0b91f48e933c1f1d427c54917ce970bd25d29f8
S: a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001
 replicates 273107e5ac994d675749be0979556e761274bb93
S: 3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002
 replicates 88fe075375295b59eabe69fa1438ed7c7c314f43
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 172.16.32.116:7000)
M: a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000
 slots:0-5460 (5461 slots) master
M: 273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001
 slots:5461-10922 (5462 slots) master
M: 88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002
 slots:10923-16383 (5461 slots) master
M: aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000
 slots: (0 slots) master
 replicates a0b91f48e933c1f1d427c54917ce970bd25d29f8
M: a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001
 slots: (0 slots) master
 replicates 273107e5ac994d675749be0979556e761274bb93
M: 3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002
 slots: (0 slots) master
 replicates 88fe075375295b59eabe69fa1438ed7c7c314f43
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

2. Check the cluster status (a quick cluster info check is also sketched after the output below)

# ./redis-trib.rb check 172.16.32.116:7000
>>> Performing Cluster Check (using node 172.16.32.116:7000)
M: a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000
 slots:0-5460 (5461 slots) master
 1 additional replica(s)
M: 88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002
 slots:10923-16383 (5461 slots) master
 1 additional replica(s)
M: 273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001
 slots:5461-10922 (5462 slots) master
 1 additional replica(s)
S: 3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002
 slots: (0 slots) slave
 replicates 88fe075375295b59eabe69fa1438ed7c7c314f43
S: a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001
 slots: (0 slots) slave
 replicates 273107e5ac994d675749be0979556e761274bb93
S: aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000
 slots: (0 slots) slave
 replicates a0b91f48e933c1f1d427c54917ce970bd25d29f8
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
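
In addition to redis-trib.rb check, a quick health summary can be read from any node with CLUSTER INFO (output omitted here; the fields worth watching are cluster_state, cluster_slots_assigned and cluster_known_nodes):

# ./redis-cli -p 7000 cluster info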

~~~~~~~~~~~~~~~~~~~~~~~~~~~Adding a node~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

3. Add a new node: redis-trib.rb add-node <new node ip:port> <any existing cluster node ip:port> (a sketch of the replica variant follows the output below)

# ./redis-trib.rb add-node 172.16.32.116:9000 172.16.32.116:7000
>>> Adding node 172.16.32.116:9000 to cluster 172.16.32.116:7000
>>> Performing Cluster Check (using node 172.16.32.116:7000)
M: a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000
 slots:0-5460 (5461 slots) master
 1 additional replica(s)
M: 88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002
 slots:10923-16383 (5461 slots) master
 1 additional replica(s)
M: 273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001
 slots:5461-10922 (5462 slots) master
 1 additional replica(s)
S: 3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002
 slots: (0 slots) slave
 replicates 88fe075375295b59eabe69fa1438ed7c7c314f43
S: a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001
 slots: (0 slots) slave
 replicates 273107e5ac994d675749be0979556e761274bb93
S: aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000
 slots: (0 slots) slave
 replicates a0b91f48e933c1f1d427c54917ce970bd25d29f8
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
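
For completeness: redis-trib.rb can also add a new node directly as a replica rather than as an empty master. That path is not taken in this walkthrough, but a sketch would look like this (using 7000's ID from above as the master):

# ./redis-trib.rb add-node --slave --master-id a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:9000 172.16.32.116:7000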

4. Check the current cluster state: 9000 is an empty master

# ./redis-cli -p 9000 cluster nodes
a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000 master - 0 1505321254767 1 connected 0-5460
273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001 master - 0 1505321250759 2 connected 5461-10922
88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002 master - 0 1505321251761 3 connected 10923-16383
3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002 slave 88fe075375295b59eabe69fa1438ed7c7c314f43 0 1505321255769 3 connected
aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000 slave a0b91f48e933c1f1d427c54917ce970bd25d29f8 0 1505321253765 1 connected
a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001 slave 273107e5ac994d675749be0979556e761274bb93 0 1505321256771 2 connected

5. Assign slots to 9000. The number of slots in Redis Cluster is fixed at 16384, so slots can only be taken from other nodes and then assigned to 9000.
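
As background, a key is mapped to a slot as CRC16(key) mod 16384, and you can ask any node which slot a key would land in (the key name below is just an example):

# ./redis-cli -p 7000 cluster keyslot user:1000    (returns the slot number this key hashes to)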

# ./redis-trib.rb reshard 172.16.32.116:9000
>>> Performing Cluster Check (using node 172.16.32.116:9000)
M: 364ae8322ab2627e25b05d45b702448c74afad10 172.16.32.116:9000
 slots: (0 slots) master
 0 additional replica(s)
M: a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000
 slots:0-5460 (5461 slots) master
 1 additional replica(s)
M: 273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001
 slots:5461-10922 (5462 slots) master
 1 additional replica(s)
S: 3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002
 slots: (0 slots) slave
 replicates 88fe075375295b59eabe69fa1438ed7c7c314f43
S: aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000
 slots: (0 slots) slave
 replicates a0b91f48e933c1f1d427c54917ce970bd25d29f8
M: 88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002
 slots:10923-16383 (5461 slots) master
 1 additional replica(s)
S: a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001
 slots: (0 slots) slave
 replicates 273107e5ac994d675749be0979556e761274bb93
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 300
What is the receiving node ID? 364ae8322ab2627e25b05d45b702448c74afad10
Please enter all the source node IDs.
 Type 'all' to use all the nodes as source nodes for the hash slots.
 Type 'done' once you entered all the source nodes IDs.
Source node #1:all 
Ready to move 300 slots.
 Source nodes:
 M: a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000
 slots:0-5460 (5461 slots) master
 1 additional replica(s)
 M: 273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001
 slots:5461-10922 (5462 slots) master
 1 additional replica(s)
 M: 88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002
 slots:10923-16383 (5461 slots) master
 1 additional replica(s)
 Destination node:
 M: 364ae8322ab2627e25b05d45b702448c74afad10 172.16.32.116:9000
 slots: (0 slots) master
 0 additional replica(s)
 Resharding plan:
 Moving slot 5461 from 273107e5ac994d675749be0979556e761274bb93
 Moving slot 5469 from 273107e5ac994d675749be0979556e761274bb93
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 5461 from 172.16.32.116:7001 to 172.16.32.116:9000:

6. As you can see, 9000 has now been assigned slots 0-98, 5461-5561 and 10923-11021

# ./redis-cli -p 9000 cluster nodes
a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000 master - 0 1505324905062 1 connected 99-5460
273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001 master - 0 1505324910075 2 connected 5562-10922
364ae8322ab2627e25b05d45b702448c74afad10 172.16.32.116:9000 myself,master - 0 0 7 connected 0-98 5461-5561 10923-11021
3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002 slave 88fe075375295b59eabe69fa1438ed7c7c314f43 0 1505324908070 3 connected
aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000 slave a0b91f48e933c1f1d427c54917ce970bd25d29f8 0 1505324911077 1 connected
88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002 master - 0 1505324902057 3 connected 11022-16383
a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001 slave 273107e5ac994d675749be0979556e761274bb93 0 1505324909073 2 connected

~~~~~~~~~~~~~~~~~~~~~~~~~~~Turning 9000 into a slave~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

7. Try to make 9000 a slave of 7000. Because it still holds slots, the command fails; the slots have to be migrated away first.

# redis-cli -c -p 9000 cluster replicate a0b91f48e933c1f1d427c54917ce970bd25d29f8
(error) ERR To set a master the node must be empty and without assigned slots.

8. Deleting the node is not allowed either. In short, as long as a node holds slots, Redis will not let you delete it; manual intervention is required to rebalance those slots away before the node can be removed.

# ./redis-trib.rb del-node 172.16.32.116:9000 364ae8322ab2627e25b05d45b702448c74afad10
>>> Removing node 364ae8322ab2627e25b05d45b702448c74afad10 from cluster 172.16.32.116:9000
[ERR] Node 172.16.32.116:9000 is not empty! Reshard data away and try again.

9. Reassign 9000's slots back to 7000

# ./redis-trib.rb reshard 172.16.32.116:9000    (reassign the slots)
>>> Performing Cluster Check (using node 172.16.32.116:9000)
M: 364ae8322ab2627e25b05d45b702448c74afad10 172.16.32.116:9000
 slots:0-98,5461-5561,10923-11021 (299 slots) master
 0 additional replica(s)
M: a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000
 slots:99-5460 (5362 slots) master
 1 additional replica(s)
M: 273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001
 slots:5562-10922 (5361 slots) master
 1 additional replica(s)
S: 3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002
 slots: (0 slots) slave
 replicates 88fe075375295b59eabe69fa1438ed7c7c314f43
S: aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000
 slots: (0 slots) slave
 replicates a0b91f48e933c1f1d427c54917ce970bd25d29f8
M: 88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002
 slots:11022-16383 (5362 slots) master
 1 additional replica(s)
S: a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001
 slots: (0 slots) slave
 replicates 273107e5ac994d675749be0979556e761274bb93
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 300    (all the slots that need to be migrated off 9000)
What is the receiving node ID? a0b91f48e933c1f1d427c54917ce970bd25d29f8    (the ID of 7000)
Please enter all the source node IDs.
 Type 'all' to use all the nodes as source nodes for the hash slots.
 Type 'done' once you entered all the source nodes IDs.
Source node #1:364ae8322ab2627e25b05d45b702448c74afad10    (the ID of 9000)
Source node #2:done
Ready to move 300 slots.
 Source nodes:
 M: 364ae8322ab2627e25b05d45b702448c74afad10 172.16.32.116:9000 Source nodes
 slots:0-98,5461-5561,10923-11021 (299 slots) master
 0 additional replica(s)
 Destination node:
 M: a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000 Destination node
 slots:99-5460 (5362 slots) master
 1 additional replica(s)
 Resharding plan:
 Moving slot 0 from 364ae8322ab2627e25b05d45b702448c74afad10
.........
 Moving slot 11021 from 364ae8322ab2627e25b05d45b702448c74afad10
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 0 from 172.16.32.116:9000 to 172.16.32.116:7000:
Moving slot 1 from 172.16.32.116:9000 to 172.16.32.116:7000:
........

10. Query again: 9000 no longer holds any slots

# ./redis-cli -p 9000 cluster nodes
a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000 master - 0 1505328938056 8 connected 0-5561 10923-11021
273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001 master - 0 1505328939059 2 connected 5562-10922
364ae8322ab2627e25b05d45b702448c74afad10 172.16.32.116:9000 myself,master - 0 0 7 connected
3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002 slave 88fe075375295b59eabe69fa1438ed7c7c314f43 0 1505328936053 3 connected
aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000 slave a0b91f48e933c1f1d427c54917ce970bd25d29f8 0 1505328933046 8 connected
88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002 master - 0 1505328937054 3 connected 11022-16383
a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001 slave 273107e5ac994d675749be0979556e761274bb93 0 1505328934049 2 connected

11. Run the command again to make 9000 a slave of 7000; this time it succeeds

# redis-cli -c -p 9000 cluster replicate a0b91f48e933c1f1d427c54917ce970bd25d29f8
OK

12. Check the status: 9000 is now a slave of 7000

# ./redis-cli -p 9000 cluster nodes
a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000 master - 0 1505329564286 8 connected 0-5561 10923-11021
273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001 master - 0 1505329561281 2 connected 5562-10922
364ae8322ab2627e25b05d45b702448c74afad10 172.16.32.116:9000 myself,slave a0b91f48e933c1f1d427c54917ce970bd25d29f8 0 0 7 connected 
3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002 slave 88fe075375295b59eabe69fa1438ed7c7c314f43 0 1505329558274 3 connected
aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000 slave a0b91f48e933c1f1d427c54917ce970bd25d29f8 0 1505329554266 8 connected
88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002 master - 0 1505329563285 3 connected 11022-16383
a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001 slave 273107e5ac994d675749be0979556e761274bb93 0 1505329562283 2 connected

13. Delete node 9000; the deletion succeeds

# ./redis-trib.rb del-node 172.16.32.116:9000 364ae8322ab2627e25b05d45b702448c74afad10
>>> Removing node 364ae8322ab2627e25b05d45b702448c74afad10 from cluster 172.16.32.116:9000
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

14. Try to connect to 9000: it has been shut down and can no longer be reached

# ./redis-cli -p 9000 cluster nodes
Could not connect to Redis at 127.0.0.1:9000: Connection refused
Could not connect to Redis at 127.0.0.1:9000: Connection refused

15. Check the cluster status: 9000 is gone

# ./redis-cli -p 7000 cluster nodes
88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002 master - 0 1505329693835 3 connected 11022-16383
273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001 master - 0 1505329694837 2 connected 5562-10922
a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000 myself,master - 0 0 8 connected 0-5561 10923-11021
3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002 slave 88fe075375295b59eabe69fa1438ed7c7c314f43 0 1505329696841 6 connected
a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001 slave 273107e5ac994d675749be0979556e761274bb93 0 1505329695840 5 connected
aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000 slave a0b91f48e933c1f1d427c54917ce970bd25d29f8 0 1505329692833 8 connected

~~~~~~~~~~~~~~~~~~~~~~~~~~~Start 9000 again and notice the difference~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

16. Start 9000 again and something interesting shows up. Checked from 7000, as above, the cluster no longer contains 9000

# ./redis-cli -p 7000 cluster nodes
88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002 master - 0 1505329898241 3 connected 11022-16383
273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001 master - 0 1505329899242 2 connected 5562-10922
a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000 myself,master - 0 0 8 connected 0-5561 10923-11021
3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002 slave 88fe075375295b59eabe69fa1438ed7c7c314f43 0 1505329902249 6 connected
a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001 slave 273107e5ac994d675749be0979556e761274bb93 0 1505329901246 5 connected
aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000 slave a0b91f48e933c1f1d427c54917ce970bd25d29f8 0 1505329900244 8 connected

17. But checking from 9000 itself, you can still see information about the entire cluster.

This shows that deleting the node only removed 9000's information from the rest of the cluster. 9000's own information was not deleted; it still holds the complete cluster state, and the 9000 instance was simply shut down.

# ./redis-cli -p 9000 cluster nodes
a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000 master - 0 1505329902003 8 connected 0-5561 10923-11021
273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001 master - 0 1505329903006 2 connected 5562-10922
3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002 slave 88fe075375295b59eabe69fa1438ed7c7c314f43 0 1505329906013 3 connected
a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001 slave 273107e5ac994d675749be0979556e761274bb93 0 1505329908019 2 connected
aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000 slave a0b91f48e933c1f1d427c54917ce970bd25d29f8 0 1505329904008 8 connected
364ae8322ab2627e25b05d45b702448c74afad10 172.16.32.116:9000 myself,slave a0b91f48e933c1f1d427c54917ce970bd25d29f8 0 0 7 connected
88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002 master - 0 1505329907016 3 connected 11022-16383

18. All of 9000's cluster information is recorded in the nodes.conf file in its own directory

# more nodes.conf
a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000 master - 0 1505329544244 8 connected 0-5561 10923-11021
273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001 master - 0 1505329542241 2 connected 5562-10922
364ae8322ab2627e25b05d45b702448c74afad10 172.16.32.116:9000 myself,slave a0b91f48e933c1f1d427c54917ce970bd25d29f8 0 0 7 connected
3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002 slave 88fe075375295b59eabe69fa1438ed7c7c314f43 0 1505329541239 3 connected
aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000 slave a0b91f48e933c1f1d427c54917ce970bd25d29f8 0 1505329545246 8 connected
88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002 master - 0 1505329543242 3 connected 11022-16383
a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001 slave 273107e5ac994d675749be0979556e761274bb93 0 1505329546248 2 connected
vars currentEpoch 8 lastVoteEpoch 0

19. Looking at 7001 and the other nodes, their nodes.conf no longer contains any information about 9000

# more nodes.conf
273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001 myself,master - 0 0 2 connected 5562-10922
a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000 master - 0 1505329680312 8 connected 0-5561 10923-11021
aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000 slave a0b91f48e933c1f1d427c54917ce970bd25d29f8 0 1505329684319 8 connected
3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002 slave 88fe075375295b59eabe69fa1438ed7c7c314f43 0 1505329686321 6 connected
a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001 slave 273107e5ac994d675749be0979556e761274bb93 0 1505329685318 5 connected
88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002 master - 0 1505329683317 3 connected 11022-16383
vars currentEpoch 8 lastVoteEpoch 0

~~~~~~~~~~~~~~~~~~~~~~~~~~~Add 9000 back, along with 9001~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

20. Add 9000 and 9001 back into the cluster and start the next round of experiments.

Note: you must first remove every file in the 9000 and 9001 directories except redis.conf and restart the instances before they can join the cluster again, otherwise you will hit the error below (a cleanup sketch follows it):

[ERR] Node 172.16.32.116:9001 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
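
One way to do that cleanup is sketched below (file names and paths assume the default settings: nodes.conf for the cluster state, dump.rdb / appendonly.aof for data; adjust to your configuration):

# rm -f nodes.conf dump.rdb appendonly.aof    (run inside each node's directory; keep only redis.conf)
# redis-server ./redis.conf                   (start the instance again)

Alternatively, a still-running instance can be emptied in place with redis-cli -p 9000 flushall followed by redis-cli -p 9000 cluster reset.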

21. Join the cluster

./redis-trib.rb add-node 172.16.32.116:9000 172.16.32.116:7002
./redis-trib.rb add-node 172.16.32.116:9001 172.16.32.116:7002

22. There are now two empty master nodes

# ./redis-trib.rb check 172.16.32.116:9001
>>> Performing Cluster Check (using node 172.16.32.116:9001)
M: c4ba7a1f537ac66076791461d6af9012741fee74 172.16.32.116:9001
 slots: (0 slots) master
 0 additional replica(s)
M: dbf78b73f2ab9e37cbf31abbc2beb3d5413d5516 172.16.32.116:9000
 slots: (0 slots) master
 0 additional replica(s)
 
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

23. Reassign 100 slots to 9000. Redis is actually quite clever here: I connected to 9001, yet during the reshard it still asks which node is the receiving node and which are the source nodes.

# ./redis-trib.rb reshard 172.16.32.116:9001
>>> Performing Cluster Check (using node 172.16.32.116:9001)
......
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 100
What is the receiving node ID? dbf78b73f2ab9e37cbf31abbc2beb3d5413d5516    (the ID of 9000, the receiving node)
Please enter all the source node IDs.
 Type 'all' to use all the nodes as source nodes for the hash slots.
 Type 'done' once you entered all the source nodes IDs.
Source node #1:aeb684429d220c0fd1392574d193cc1ae7577782    (as the source node I deliberately picked 8000, a slave)
*** The specified node is not known or is not a master, please retry.    (Redis was not fooled: it noticed this node is a slave and has no slots to offer)
Source node #1:273107e5ac994d675749be0979556e761274bb93    (specify the source node as 7001 instead; now the reshard can proceed)
Source node #2:done
Ready to move 100 slots.
 Source nodes:
 M: 273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001
 slots:5562-10922 (5361 slots) master
 1 additional replica(s)
 Destination node:
 M: dbf78b73f2ab9e37cbf31abbc2beb3d5413d5516 172.16.32.116:9000
 slots: (0 slots) master
 0 additional replica(s)
 Resharding plan:
 Moving slot 5562 from 273107e5ac994d675749be0979556e761274bb93
 Moving slot 5563 from 273107e5ac994d675749be0979556e761274bb93

24. Check how the slots were distributed; Redis handles this quite smartly and flexibly.

# redis-cli -p 7001 cluster nodes
273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001 myself,master - 0 0 2 connected 5662-10922
a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000 master - 0 1505330856605 8 connected 0-5561 10923-11021
aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000 slave a0b91f48e933c1f1d427c54917ce970bd25d29f8 0 1505330853598 8 connected
3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002 slave 88fe075375295b59eabe69fa1438ed7c7c314f43 0 1505330860611 6 connected
a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001 slave 273107e5ac994d675749be0979556e761274bb93 0 1505330859608 5 connected
c4ba7a1f537ac66076791461d6af9012741fee74 172.16.32.116:9001 master - 0 1505330862615 9 connected       (still empty)
88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002 master - 0 1505330861612 3 connected 11022-16383
dbf78b73f2ab9e37cbf31abbc2beb3d5413d5516 172.16.32.116:9000 master - 0 1505330858607 10 connected 5562-5661    (took 100 slots from 7001)

25. Make 9001 a slave of 9000: redis-cli -p <slave port> cluster replicate <master ID>

# redis-cli -p 9001 cluster replicate dbf78b73f2ab9e37cbf31abbc2beb3d5413d5516
# redis-cli -p 7001 cluster nodes
273107e5ac994d675749be0979556e761274bb93 172.16.32.116:7001 myself,master - 0 0 2 connected 5662-10922
a0b91f48e933c1f1d427c54917ce970bd25d29f8 172.16.32.116:7000 master - 0 1505331457798 8 connected 0-5561 10923-11021
aeb684429d220c0fd1392574d193cc1ae7577782 172.16.32.116:8000 slave a0b91f48e933c1f1d427c54917ce970bd25d29f8 0 1505331454791 8 connected
3d27f60a1cc4d9c8f09aca928b03f0e083722d3b 172.16.32.116:8002 slave 88fe075375295b59eabe69fa1438ed7c7c314f43 0 1505331456795 6 connected
a96cad95dca2a8e1e0302bff4f835260d92e3d31 172.16.32.116:8001 slave 273107e5ac994d675749be0979556e761274bb93 0 1505331458799 10 connected
c4ba7a1f537ac66076791461d6af9012741fee74 172.16.32.116:9001 slave dbf78b73f2ab9e37cbf31abbc2beb3d5413d5516 0 1505331459801 10 connected
88fe075375295b59eabe69fa1438ed7c7c314f43 172.16.32.116:7002 master - 0 1505331455793 3 connected 11022-16383
dbf78b73f2ab9e37cbf31abbc2beb3d5413d5516 172.16.32.116:9000 master - 0 1505331453788 10 connected 5562-5661
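
As a final sanity check, running redis-trib.rb check against any node should now report that all 16384 slots are covered and that every master, including 9000, has a replica (command only; the output looks like the check in step 2):

# ./redis-trib.rb check 172.16.32.116:7000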

After all this fiddling around, the Redis operations of adding nodes, removing nodes and reassigning slots have all been tested.

There is a lot of command output in between, which may feel a bit repetitive, but it is kept so that the change after each step can be observed; that makes the article easier to follow and easier to refer back to later.
After all, I am getting older and my memory is not great: when I reread my own posts I sometimes find certain steps hard to follow again, even though every post is based on my own test results.

Summary

That is all for this article. I hope it helps with your study or work. If you have any questions, feel free to leave a comment. Thank you all for your support.

