
The Geek Diary

HowTos | Basics | Concepts


How to Restore a Cluster Slave Using its Own Backups

By admin

MySQL Cluster provides what is known as geographical replication: a setup in which a cluster runs in two data centers, one as Master and the other as Slave. This allows you to back up both using MySQL Cluster’s online backup functionality. Usually, when the Slave falls out of sync, you simply rebuild it as a new slave from a fresh snapshot of the Master. As an alternative, however, you can restore the Slave from its own backup. This can be a faster way to reinstate the slave, since it avoids pulling all the data across the network and the resource drain that entails.

The Steps

As of MySQL Cluster 6.3.23, restoring a Slave from its own backup has become easier. It is still considered experimental, as some ndb_restore switches might change (e.g. -f or --dont_ignore_systab_0), so restoring from a fresh backup of the Master remains best practice. Before proceeding with this solution, also remember that for it to work, the relevant binary logs must still be available on the Master side.
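To verify that the Master still has the binary logs you need, you can list them first (the host name and user here are assumptions; substitute your own):

```shell
mysql -h master.example.com -u repl_admin -p -e "SHOW BINARY LOGS;"
```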

1. First, reset the slave so that you start from a clean state, using the RESET SLAVE statement in the mysql client. As an extra precaution, restart the Slave SQL node with the --skip-slave-start option set, so that replication doesn’t start automatically.
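A minimal sketch of this step, assuming the SQL node is started directly with mysqld_safe (adjust the restart to however your environment manages the mysqld process):

```shell
# On the Slave SQL node: stop replication and discard the old master info.
mysql -u root -p -e "STOP SLAVE; RESET SLAVE;"

# Restart the SQL node so replication does not resume on its own.
mysqld_safe --skip-slave-start &
```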

2. Next, restore the Slave’s backups as usual using the ndb_restore tool. There is no need to use the -e (or --restore_epoch) option when restoring the meta information and data. The ndb_restore version shipped as of MySQL Cluster 6.3.23 has new switches that let you include or exclude databases or tables; this is what you will use to restore the mysql.ndb_apply_status table, together with the new experimental switch --dont_ignore_systab_0 to restore its data. For example, to restore the backup with ID 1 on 2 data nodes, after restarting them with the --initial option:

ndb_restore -n 3 -b 1 -r /path/to/BACKUP-1 -f --include-table='mysql.ndb_apply_status'
ndb_restore -n 4 -b 1 -r /path/to/BACKUP-1 -f --include-table='mysql.ndb_apply_status'

You won’t have to use the -m (or --restore_meta) option because the table should already be there.
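For reference, the preceding “as usual” restore of meta information and data might look like this on the same two data nodes (the management-server connect string is an assumption; adjust node IDs and paths for your cluster):

```shell
# First node: restore metadata (-m) and data (-r).
ndb_restore -c mgmhost -n 3 -b 1 -m -r /path/to/BACKUP-1
# Remaining node: data only.
ndb_restore -c mgmhost -n 4 -b 1 -r /path/to/BACKUP-1
```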

3. The next step is to connect to the Slave SQL node and run the following query:

SELECT * FROM mysql.ndb_apply_status;

+-----------+-----------------+-------------------+-----------+---------+
| server_id | epoch           | log_name          | start_pos | end_pos |
+-----------+-----------------+-------------------+-----------+---------+
|         1 | 217458489163782 | master-bin.000001 |    305572 |  305976 |
+-----------+-----------------+-------------------+-----------+---------+

The values for your server, of course, will be different.

4. Take the epoch value shown next to your Master’s server_id (1 in this example) and use it in the following SQL statement, which you need to execute on the Master:

SELECT SUBSTRING_INDEX(File, '/', -1) AS Master_Log_File , Position AS Master_Log_Pos
FROM mysql.ndb_binlog_index WHERE epoch > 217458489163782
ORDER BY epoch ASC LIMIT 1;

+-------------------+----------------+
| Master_Log_File   | Master_Log_Pos |
+-------------------+----------------+
| master-bin.000001 |         305976 |
+-------------------+----------------+
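The SUBSTRING_INDEX(File, '/', -1) expression simply strips the directory from the binlog path recorded in mysql.ndb_binlog_index; in shell terms it is basename(1). The example path below is an assumption:

```shell
# Example value of the File column in mysql.ndb_binlog_index.
File='/var/lib/mysql-cluster/master-bin.000001'
basename "$File"   # prints: master-bin.000001
```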

5. Back on the Slave, change the master information using the values returned by the Master. In practice you will also need to supply further options, such as the host, user and password, if they are not already configured.

CHANGE MASTER TO MASTER_LOG_FILE="master-bin.000001", MASTER_LOG_POS=305976;
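A fuller statement including connection details might look like this (the host, user and password here are placeholder assumptions, not values from this setup):

```sql
-- Placeholder connection details: substitute your own.
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='master-bin.000001',
  MASTER_LOG_POS=305976;
```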

6. Now you’re ready to start the Slave. Use the START SLAVE statement on the Slave, and it should begin synchronizing. You can follow its progress by entering the following SQL statement on the Slave a few times, until you’re satisfied that everything is proceeding smoothly:

SHOW SLAVE STATUS \G

If you issue this statement using the mysql client, you can use a \G statement terminator rather than a semicolon to obtain a more readable vertical layout.
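The fields worth watching in that output are Slave_IO_Running, Slave_SQL_Running and Seconds_Behind_Master. A quick way to extract just those from the shell (credentials are assumptions):

```shell
mysql -u root -p -e "SHOW SLAVE STATUS\G" | egrep 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
```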

Filed Under: mysql, MySQL Cluster

© 2019 · The Geek Diary