You can increase or decrease the number of replica nodes on the Node Management tab to improve the disaster recovery capabilities of your instance. You can also enable the read/write splitting feature. After this feature is enabled, your instance automatically detects read and write requests and forwards them accordingly without requiring modifications to your business code. This feature is ideal for high-concurrency scenarios where read operations occur more frequently than write operations.
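Conceptually, read/write splitting works like command classification at the proxy layer: read commands are dispatched to read-only nodes and write commands to the primary node, while your application keeps using a single endpoint. The sketch below is a simplified illustration of that idea only, not the actual proxy implementation; the command sets, node names, and load-balancing policy are assumptions for illustration.

```python
# Simplified illustration of how a read/write splitting proxy might
# route commands. NOT the actual proxy implementation: the command
# sets, node names, and balancing policy below are assumptions.

READ_COMMANDS = {"GET", "MGET", "EXISTS", "TTL", "LRANGE", "SMEMBERS"}
WRITE_COMMANDS = {"SET", "DEL", "EXPIRE", "LPUSH", "SADD", "INCR"}

def route(command: str, read_nodes: list, primary: str) -> str:
    """Return the node that should serve the given command."""
    name = command.upper()
    if name in READ_COMMANDS:
        # Reads are spread across read-only nodes; hashing here is a
        # stand-in for the proxy's real load-balancing policy.
        return read_nodes[hash(name) % len(read_nodes)]
    # Writes (and anything unrecognized) go to the primary node.
    return primary

nodes = ["readonly-1", "readonly-2"]
print(route("GET", nodes, "primary"))  # served by a read-only node
print(route("SET", nodes, "primary"))  # served by the primary node
```

Because the classification happens on the server side, the application issues all commands to one endpoint and needs no changes to benefit from the extra read capacity.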
Prerequisites
The instance is deployed in cloud-native mode.
The instance is a Redis Open-Source Edition instance, or a Tair memory-optimized or persistent memory instance.
The memory size of the instance is 1 GB or larger.
The instance is a high availability instance.
Considerations
For dual-zone instances, we recommend that you configure at least two nodes in both the primary zone and the secondary zone:
Primary zone: one primary node and one replica or read-only node. When high availability (HA) is triggered, the system preferentially performs a failover within the same zone to avoid increased latency caused by a failover to the secondary zone.
Secondary zone: two replica or read-only nodes.
Increase or decrease the number of replica nodes
The standard architecture supports 1 to 9 replica nodes, and each shard in the cluster architecture supports 1 to 4 replica nodes.
Log on to the console and go to the Instances page. In the top navigation bar, select the region in which the instance that you want to manage resides. Then, find the instance and click the instance ID.
In the left-side navigation pane, click Node Management.
On the Node Management page, click Modify in the Operation column.
In the panel that appears, increase or decrease the number of replica nodes.
Follow the instructions to complete the payment.
After the payment is completed, the instance status changes to Changing Configuration. Wait for 1 to 5 minutes until the instance status changes to Running, which indicates that the configuration change is complete. You can view the progress on the instance details page.
Enable read/write splitting
The read/write splitting feature uses a star replication architecture in which all read-only nodes synchronize data from the primary node, resulting in low data synchronization latency.
Enabling or disabling read/write splitting causes a transient disconnection from the instance and triggers data migration in the background. Adjusting the number of read-only nodes does not cause a transient disconnection. Perform this operation during off-peak hours. Make sure that the instance does not receive a large number of write requests during the operation and that your application can automatically reconnect to the instance.
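To tolerate the transient disconnection, client code should retry failed calls and reconnect. The sketch below shows a generic retry wrapper with exponential backoff; the `fake_redis_get` function stands in for a real Redis client call, and the attempt counts and delay values are illustrative assumptions, not recommended settings.

```python
import time

def with_retry(operation, max_attempts=5, base_delay=0.1):
    """Retry a transiently failing operation, backing off
    exponentially between attempts."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Back off: 0.1 s, 0.2 s, 0.4 s, ... before reconnecting.
            time.sleep(base_delay * (2 ** attempt))

# Simulate a client call that fails twice during the transient
# disconnection and then succeeds once the instance is Running again.
attempts = {"count": 0}

def fake_redis_get():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("instance is changing configuration")
    return "value"

print(with_retry(fake_redis_get))  # prints "value" after two retries
```

Many Redis client libraries provide a comparable built-in retry or reconnect option, which is usually preferable to hand-rolled logic.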
Log on to the console and go to the Instances page. In the top navigation bar, select the region in which the instance that you want to manage resides. Then, find the instance and click the instance ID.
In the left-side navigation pane, click Node Management.
Turn on the Read/Write Splitting switch.
In the panel that appears, confirm the instance configuration and the order cost. Then, click Pay.
Note: The specifications of the new read-only nodes are the same as those of the instance.
Follow the instructions to complete the payment.
After the payment is completed, the instance status changes to Changing Configuration. Wait for 1 to 5 minutes until the instance status changes to Running, which indicates that the configuration change is complete. You can view the progress on the instance details page.
Note: If the instance is deployed across two zones, the instance provides an endpoint for the primary zone and an endpoint for the secondary zone. Both endpoints support read and write operations. Direct requests that originate in the secondary zone to the secondary-zone endpoint to achieve proximity-based access and load balancing.
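Proximity-based routing for a dual-zone instance amounts to selecting the endpoint that matches the zone the client runs in. The sketch below illustrates that selection; the zone IDs and endpoint hostnames are placeholders, not real values, and should be replaced with those shown on your instance details page.

```python
# Placeholder endpoints for a dual-zone instance; replace these with
# the actual endpoints shown on your instance details page.
ENDPOINTS = {
    "zone-a": "r-primary.redis.example.com",    # primary zone
    "zone-b": "r-secondary.redis.example.com",  # secondary zone
}

def pick_endpoint(client_zone: str) -> str:
    """Return the endpoint in the client's own zone for proximity-based
    access; fall back to the primary-zone endpoint otherwise."""
    return ENDPOINTS.get(client_zone, ENDPOINTS["zone-a"])

# Clients in the secondary zone connect to the secondary-zone endpoint.
print(pick_endpoint("zone-b"))
```

Spreading clients across both endpoints in this way keeps reads local to each zone and balances load between the two sets of nodes.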
(Optional) Adjust the number of read-only nodes.
On the Node Management page, click Modify in the Operation column to adjust the number of read-only nodes. The standard architecture supports 1 to 9 read-only nodes, and each shard in the cluster architecture supports 1 to 4 read-only nodes.