
Overview



Redis supports the MIGRATE command to transfer a key from a source instance to a destination instance. During migration, a serialized version of the key's value is generated with the DUMP command, and the target node then executes the RESTORE command to load the data into memory. In this article, we migrate a single 800 MB key of type ZSET and compare the performance of the migration on native Redis against the same migration on an optimized build. The test environment consists of two Redis databases on the same local development machine, so the impact of the network can be ignored. Under these conditions, executing the RESTORE command on native Redis takes 163 seconds, while on the optimized Redis it takes only 27 seconds. This analysis was performed using Alibaba Cloud ApsaraDB for Redis.
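For context, this is how a key moves between instances with these commands; the host names, ports, and the key name myzset below are placeholders:

# One-step migration (the operation benchmarked in this article):
redis-cli -h src.example.com -p 6379 MIGRATE dst.example.com 6380 myzset 0 5000

# Equivalent manual steps: DUMP on the source, RESTORE on the target.
redis-cli -h src.example.com -p 6379 DUMP myzset
redis-cli -h dst.example.com -p 6380 RESTORE myzset 0 "<serialized-value>"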

1. Native Redis RESTORE performance bottleneck

Profiling the native RESTORE shows where the CPU time goes:

[Figure: CPU profile during native RESTORE]

The source code shows that MIGRATE traverses the ZSET's hash table, serializes each member and its score, and packages the result to send to the target node.

The target node then deserializes the data and rebuilds the ZSET structure, running zslInsert and dictAdd for every member. This process is time-consuming, and the rebuild cost grows with the data size.
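The rebuild loop looks roughly like this (abridged from rdbLoadObject in Redis's rdb.c, with error handling dropped):

while (zsetlen--) {
    /* Deserialize one member and its score from the payload. */
    sds sdsele = rdbGenericLoadStringObject(rdb, RDB_LOAD_SDS, NULL);
    double score;
    rdbLoadBinaryDoubleValue(rdb, &score);

    /* O(log N) expected: walk the skip list to find the insert point. */
    zskiplistNode *znode = zslInsert(zs->zsl, score, sdsele);

    /* Hash the member and probe its bucket; may trigger a rehash.
     * Note the dict key is the same sds as the zsl member, and the dict
     * value points into the node's score: one copy of the data. */
    dictAdd(zs->dict, sdsele, &znode->score);
}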

2. Method of optimization

Our analysis shows that the bottleneck is rebuilding the data model. To optimize the process, we serialize the source node's data model as a whole, structure included, and send it to the target node. The target node parses the payload, pre-constructs the memory layout, and then slots the parsed members directly into place.

Because ZSET is a fairly complicated data structure in Redis, we will briefly introduce the concepts it uses.

2.1 ZSET data structure

ZSET consists of two data structures: a hash table (dict), which maps each member to its score, and a skip list (zsl), in which all members are kept sorted by score, as shown in the figures below:

[Figures: the ZSET hash table (dict) and the skip list (zsl)]
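For reference, here are the relevant definitions from Redis's source (server.h, abridged; older Redis versions name the member field robj *obj rather than sds ele):

typedef struct zskiplistNode {
    sds ele;                          /* the member string */
    double score;
    struct zskiplistNode *backward;
    struct zskiplistLevel {
        struct zskiplistNode *forward;
        unsigned long span;           /* level-0 nodes this link skips */
    } level[];                        /* flexible array: one per level */
} zskiplistNode;

typedef struct zskiplist {
    struct zskiplistNode *header, *tail;
    unsigned long length;             /* number of nodes on level 0 */
    int level;                        /* highest level currently in use */
} zskiplist;

typedef struct zset {
    dict *dict;       /* member -> score, O(1) lookup */
    zskiplist *zsl;   /* members ordered by score, rank queries */
} zset;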

2.2 Serialize the ZSET structure model

In Redis, the dict and the zsl of a ZSET share the same memory for members and scores: both structures point at a single copy of the data. Serializing that one copy twice, once for each index structure, would make the cost considerably higher, so the serialization describes the data only once.

2.2.1 Serialize the dict model

Looking at CPU consumption, the hash table accounts for most of the time in three operations: computing the bucket index, rehashing, and key comparison. (Rehashing kicks in when the pre-allocated hash table becomes too small and the old table must be migrated into a larger one; key comparison happens while walking a bucket's chain to determine whether a key already exists.)

Based on this, the full hash table size is recorded during serialization, so the dict can be created at its final size when RESTORE executes and no rehash is ever needed.

To restore the dict structure, we deserialize each member and score, recompute the member's bucket index, and insert the entry into the table at that index. Because the members traversed from the zsl contain no key conflicts, entries that land in the same bucket can be prepended to the chain directly, eliminating the key comparison. A minimal sketch of this loading scheme follows.
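This sketch is illustrative only, not Redis's actual dict internals; the toy hash function and struct names are ours:

/* Illustrative sketch, not Redis source: load into a hash table that is
 * pre-sized to its final size, so no rehash and no key comparison. */
#include <stdio.h>
#include <stdlib.h>

typedef struct entry {
    char *member;
    double score;
    struct entry *next;       /* chain within one bucket */
} entry;

typedef struct {
    entry **buckets;
    unsigned long size;       /* power of two, fixed for the whole load */
} table_t;

/* Toy hash for the sketch; Redis itself uses SipHash. */
static unsigned long hash(const char *s) {
    unsigned long h = 5381;
    while (*s) h = ((h << 5) + h) ^ (unsigned char)*s++;
    return h;
}

/* The payload carries the final table size, so we allocate the buckets
 * once up front; nothing ever has to be rehashed during the load. */
static table_t *table_new(unsigned long size) {
    table_t *t = malloc(sizeof(*t));
    t->size = size;
    t->buckets = calloc(size, sizeof(entry *));
    return t;
}

/* Members traversed from the zsl are unique, so we can prepend to the
 * bucket chain without ever comparing keys. */
static void table_insert(table_t *t, char *member, double score) {
    unsigned long idx = hash(member) & (t->size - 1);
    entry *e = malloc(sizeof(*e));
    e->member = member;
    e->score = score;
    e->next = t->buckets[idx];
    t->buckets[idx] = e;
}

int main(void) {
    table_t *t = table_new(8);         /* size read from the payload */
    table_insert(t, "alice", 42.0);
    table_insert(t, "bob", 17.5);
    printf("loaded %lu buckets\n", t->size);
    return 0;
}

In Redis itself, the equivalent presizing is a single dictExpand() call with the recorded size before the load begins.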

2.2.2 Serialize zsl model

The zsl has a multi-level structure, as shown in the figure below.

[Figure: the multi-level structure of the zsl]

The difficulty in describing it is that the number of zskiplistNodes on each level is not known in advance. We also need to describe each node's context on every level, while keeping the format compatible.

Based on these considerations, we decided to traverse the zsl from its highest level downward. The serialized format is:
level | header span | level_len | [ span ( | member | score ... ) ]

Item            Description
level           Number of levels in the zsl
header span     The span value of the header node on this level
level_len       Total number of nodes on this level
span            The span value of a node on this level
member | score  A node appears on every level up to its height, so it may be encountered more than once during traversal; by accumulating span values we obtain its rank and can tell whether it has already been serialized. If it has, its member | score is skipped; otherwise the member and score are written. Deserialization follows the same principle.
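To make the format concrete, here is a sketch of a serializer that follows it (illustrative only, not the actual ApsaraDB implementation; buf_t and the buf_put_* helpers are hypothetical, and the rank bitmap is our own device):

/* Hypothetical rank bitmap helpers: one bit per level-0 rank. */
#define bit_test(b, i) ((b)[(i) >> 3] & (1u << ((i) & 7)))
#define bit_set(b, i)  ((b)[(i) >> 3] |= (1u << ((i) & 7)))

void serialize_zsl(zskiplist *zsl, buf_t *out, unsigned char *seen) {
    buf_put_u32(out, zsl->level);                          /* level */
    for (int lv = zsl->level - 1; lv >= 0; lv--) {
        /* First pass: count the nodes present on this level. */
        unsigned long len = 0;
        for (zskiplistNode *n = zsl->header->level[lv].forward; n != NULL;
             n = n->level[lv].forward)
            len++;
        buf_put_u64(out, zsl->header->level[lv].span);     /* header span */
        buf_put_u64(out, len);                             /* level_len */

        /* Second pass: emit each node's span; emit member|score only the
         * first time this node's rank is seen (i.e. at its highest level). */
        unsigned long rank = zsl->header->level[lv].span;  /* rank of first node */
        for (zskiplistNode *x = zsl->header->level[lv].forward; x != NULL;
             x = x->level[lv].forward) {
            buf_put_u64(out, x->level[lv].span);           /* span */
            if (!bit_test(seen, rank)) {
                buf_put_str(out, x->ele, sdslen(x->ele));  /* member */
                buf_put_double(out, x->score);             /* score */
                bit_set(seen, rank);
            }
            rank += x->level[lv].span;                     /* rank of next node */
        }
    }
}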

Conclusion

By now, the description of the ZSET data model is complete, and RESTORE runs much faster. The optimization does introduce a tradeoff, however: it consumes more bandwidth, because of the extra fields that describe each node. After optimization, the payload is about 20 MB larger than the 800 MB of data before the optimization.

ApsaraDB for Redis is a stable, reliable, and scalable database service with superb performance. It is structured on the Apsara Distributed File System and full SSD high-performance storage, and supports master-slave and cluster-based high-availability architectures. ApsaraDB for Redis offers a full range of database solutions including disaster switchover, failover, online expansion, and performance optimization. Try ApsaraDB for Redis today!
