MySQL Cluster provides what is known as geographical replication: a setup in which a cluster runs in two data centers, one as Master and the other as Slave. This allows you to back up both using MySQL Cluster’s online backup functionality. Usually, when the Slave falls out of sync, you simply rebuild it as a new slave from a fresh snapshot of the Master. As an alternative, however, you can use the backup of the Slave instead. This can prove to be a faster method of reinstalling the slave, since it does not require retrieving the data across the network and thereby draining resources on the Master.
The Steps
As of MySQL Cluster 6.3.23, restoring a Slave from its own backup is easier to do. It is still considered experimental, as some switches to ndb_restore might change (e.g. -f, or --dont_ignore_systab_0). Therefore, using a direct backup from the Master is still considered best practice. Before proceeding with this solution, though, remember that for this to work, the binary logs must still be available on the Master side.
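You can check which binary log files are still present by issuing the following statement on the Master:

SHOW BINARY LOGS;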
1. First, reset the slave so that you have a clean start, using the RESET SLAVE statement with the mysql client. As an extra precaution, it’s good to restart the Slave SQL node with the --skip-slave-start option set, so that replication doesn’t start automatically.
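For example, in the mysql client on the Slave SQL node (RESET SLAVE requires the replication threads to be stopped first):

STOP SLAVE;
RESET SLAVE;

How you add --skip-slave-start depends on how you start mysqld; with mysqld_safe, for instance, something like:

mysqld_safe --skip-slave-start &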
2. Next, restore the Slave’s backups as usual using the ndb_restore tool. There is no need to use the -e (or --restore_epoch) option when restoring the meta information and data. The ndb_restore version that’s shipped starting with MySQL Cluster 6.3.23 has some new switches allowing you to include or exclude databases or tables. This is what you are going to use to restore the mysql.ndb_apply_status table. You will also be using the new experimental switch, --dont_ignore_systab_0 (or -f), to restore the data. For example, restoring the backup with ID 1 on 2 data nodes, after first restarting them with initial state (an example of such a restart follows this step):
ndb_restore -n 3 -b 1 -r /path/to/BACKUP-1 -f --include-tables='mysql.ndb_apply_status'
ndb_restore -n 4 -b 1 -r /path/to/BACKUP-1 -f --include-tables='mysql.ndb_apply_status'
You won’t have to use the -m (or --restore_meta) option because the table should be there already.
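If the data nodes still need the initial restart mentioned in the example above, one simple way to do it (run on each data node host; the exact invocation depends on your setup) is:

ndbd --initial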
3. The next step is to connect to the Slave SQL node and run the following query there:
SELECT * FROM mysql.ndb_apply_status;
+-----------+-----------------+-------------------+-----------+---------+
| server_id | epoch           | log_name          | start_pos | end_pos |
+-----------+-----------------+-------------------+-----------+---------+
|         1 | 217458489163782 | master-bin.000001 |    305572 |  305976 |
+-----------+-----------------+-------------------+-----------+---------+
The values for your server, of course, will be different.
4. Take the epoch value next to the server_id of your Master (1 in this example) and use it in the following SQL statement, which you need to execute on the Master:
SELECT SUBSTRING_INDEX(File, '/', -1) AS Master_Log_File,
       Position AS Master_Log_Pos
  FROM mysql.ndb_binlog_index
 WHERE epoch > 217458489163782
 ORDER BY epoch ASC
 LIMIT 1;
+-------------------+----------------+
| Master_Log_File   | Master_Log_Pos |
+-------------------+----------------+
| master-bin.000001 |         305976 |
+-------------------+----------------+
5. Back on the Slave, change the master coordinates using the values just retrieved from the Master. If the slave was fully reset, you’ll also have to give more options, such as the host, user, and password.
CHANGE MASTER TO MASTER_LOG_FILE='master-bin.000001', MASTER_LOG_POS=305976;
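If the connection settings were lost as well, a fuller statement might look like this (the host name and credentials are placeholders for your own values):

CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='master-bin.000001',
  MASTER_LOG_POS=305976;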
6. Now you’re ready to start the Slave. Do so with the START SLAVE statement on the Slave:
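START SLAVE;

The Slave should begin synchronizing. You can follow its progress by entering the following SQL statement on the Slave a few times, until you’re satisfied that everything is proceeding smoothly: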
SHOW SLAVE STATUS \G
If you issue this statement using the mysql client, you can use the \G statement terminator rather than a semicolon to obtain a more readable vertical layout. In the output, check that Slave_IO_Running and Slave_SQL_Running both show Yes and that Seconds_Behind_Master is shrinking toward 0.