
Copying database to a fresh MySQL slave

Archived (pre-2022)

Preserved for reference only -- likely outdated. Last updated: September 2019

When setting up a fresh MySQL slave, the main database needs to be copied to it, preferably while the source MySQL server is still running (the process takes many hours).

The following procedure uses Percona XtraBackup, netcat and a few custom scripts for this task.

How it works

  • On the source server, a multi-threaded xtrabackup job is started; it reads the data files from the MySQL data directory, compresses them with qpress, and pushes them to the destination server using netcat.
  • On the destination server, netcat writes the compressed files to the destination directory.
  • Throughout the process, xtrabackup also streams the InnoDB logs (write-ahead logs) to capture any changes made to the database after the copy started.
  • When the copy is finished, the files on the destination server need to be uncompressed and the streamed InnoDB logs applied to bring the database files back into a consistent state.
  • As a last step, file ownership needs to be set to mysql; then MySQL can be started, the slave configured, and replication will start to catch up.
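In shell terms, the two halves of the stream look roughly like this. This is a sketch of what the wrapper scripts used below most likely do, not their exact contents; the destination host name is a placeholder:

```shell
# On the DESTINATION -- start the receiver first so the port is listening
# (this is essentially the command from the step-by-step section below):
nc -l 9999 | xbstream -x -C /storage/mysql_dst/

# On the SOURCE -- stream a compressed backup over netcat. innobackupex
# reads the datadir with 4 parallel threads, qpress-compresses the files
# (--compress) and emits a single xbstream on stdout:
innobackupex --stream=xbstream --compress --compress-threads=4 \
    --parallel=4 /tmp | nc dst-server.example.com 9999
```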

Step-by-step instructions

Preparing the destination server

As netcat will be running as your user, create a destination directory and give yourself ownership of it. Make sure the destination has enough free space (~4 TB):

sudo mkdir /storage/mysql_dst
sudo chown myuser /storage/mysql_dst
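The ~4 TB figure is specific to our dataset. A small guard like the following (a sketch; the path and threshold are examples) avoids discovering a full disk hours into the copy:

```shell
# check_free_space DIR REQUIRED_GB -> returns 0 if DIR's filesystem has
# at least REQUIRED_GB gigabytes available, 1 otherwise.
check_free_space() {
    dir=$1
    required_gb=$2
    # df -BG reports in whole gigabytes; strip the trailing "G"
    avail_gb=$(df -BG --output=avail "$dir" | tail -n 1 | tr -dc '0-9')
    if [ "$avail_gb" -lt "$required_gb" ]; then
        echo "Only ${avail_gb}G free in $dir, need ${required_gb}G" >&2
        return 1
    fi
    echo "OK: ${avail_gb}G free in $dir"
}

# Example: require ~4 TB before starting the copy
# check_free_space /storage/mysql_dst 4096 || exit 1
```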

Next, use screen (or tmux) to start a background session you can disconnect from without stopping any processes:

screen -S nc

will start a session that you can leave at any time by pressing CTRL-A d

screen -r nc

will bring you back to that session

Within the screen start the receiving netcat:

nc -l 9999 2>/var/tmp/nc.log | xbstream -x -C /storage/mysql_dst/ > /var/tmp/xbstream.log 2>&1

This command will not return until the copy has finished or an error occurs.
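Before kicking off the copy, it's worth confirming the listener is actually up; a quick check on the destination itself (port 9999 as above):

```shell
# Should print a LISTEN line for port 9999 once nc is running:
ss -ltn 'sport = :9999'
```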

Starting the copy on the source server

Pick a server to use as the source / donor for the copy. Do not use the master as the source except in emergencies! Copying puts a high CPU and I/O load on the source server, so preferably choose an otherwise idle server.

Again, start a background session in screen

screen -S hotcopy

and run our custom hotcopy script:

hotcopy_prd.sh [destination] /storage/mysql

Waiting for the copy to finish

The copy will take a few hours to finish. You can watch progress by looking at the /storage/mysql_dst directory on the destination server or by tailing /var/tmp/innobackupex.log on the source server.
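For example (a sketch; the du-based check simply tracks how much data has landed so far):

```shell
# On the destination: total size received so far (re-run, or wrap in `watch -n 60`)
du -sh /storage/mysql_dst

# On the source: check the tail of the backup log for warnings or the final status
tail -n 20 /var/tmp/innobackupex.log
```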

In case of problems look at the above log or at /var/tmp/nc.log or /var/tmp/xbstream.log on the destination server.

Uncompressing and "preparing" the destination

The copy is finished once the processes stop running and some text including "completed OK!" has been written to /var/tmp/innobackupex.log on the source server.

Connect to the destination and re-attach to your screen:

screen -r nc

Start the prepare script:

prepare.sh /storage/mysql_dst
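prepare.sh is one of our custom wrappers; judging by the Percona XtraBackup documentation, the underlying steps are most likely along these lines (a sketch; the memory figure is an example):

```shell
# 1. Uncompress the qpress-compressed (.qp) files in place
innobackupex --decompress /storage/mysql_dst

# 2. Apply the streamed InnoDB redo log to make the datadir consistent
innobackupex --apply-log --use-memory=4G /storage/mysql_dst

# --decompress leaves the .qp files behind; remove them afterwards
find /storage/mysql_dst -name '*.qp' -delete
```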

Cleaning up

Again, this step will run for a few hours. It too has finished once the process has stopped running and "completed OK!" has been written to /var/tmp/prepare.log.

Now file ownership has to be transferred to the mysql user and the database moved to its final location. Make sure no mysql process is running on the destination server during this step.

sudo chown -R mysql:mysql /storage/mysql_dst
sudo mv /storage/mysql /storage/mysql.old
sudo mv /storage/mysql_dst /storage/mysql

Starting mysql

systemctl start mysql

Check replication to the new slave

All information about replication is stored in the database itself. This means the new slave has the same replication configuration as the source and will thus connect to the source's master. Once we start MySQL, replication starts immediately (and at the correct binlog position).

So there is no need to configure anything on the new slave.

Just to be sure, we should verify that replication is indeed running as expected:

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Queueing master event to the relay log
                  Master_Host: dbs002.prd.fyber.com
                  Master_User: replication
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.091945
          Read_Master_Log_Pos: 84448705
               Relay_Log_File: mysql-relay-bin.000318
                Relay_Log_Pos: 84448864
        Relay_Master_Log_File: mysql-bin.091945
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table: cms_db_production.advertisement_completion_rates
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 84448705
              Relay_Log_Space: 84448975
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 27832
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 174269471
                  Master_UUID: b8382237-6b22-11e7-92cd-a0369f696050
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
1 row in set (0.01 sec)

It's important to check that both replication threads are running:

             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

A rather long lag, due to the time the copy and prepare took, is expected:

Seconds_Behind_Master: 27832

This value should start to decline soon. Once it has reached 0 the slave has caught up.
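To watch the lag shrink without eyeballing the full status output, something like this works (assuming a local mysql client with login credentials already configured):

```shell
# slave_lag: extract the Seconds_Behind_Master value from `show slave status\G`
slave_lag() {
    mysql -e 'show slave status\G' 2>/dev/null \
        | awk '/Seconds_Behind_Master/ {print $2}'
}
```

Wrapped in a loop or `watch -n 30`, this prints a single shrinking number until the slave has caught up.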

Re-enabling master-master replication

A special case is when the new slave should also become the master of the old master (master-master replication). In this case we need to configure the old master to become a slave of the new server.

Note:

  • When starting up the new server, the slave thread on the old master should not be running.
  • As long as there are no writes to the new server, it does not really matter at which binlog position we start replicating. In the beginning we will only send the master's own writes back, and they will be ignored anyway.

For this last reason, the process is rather simple:

At any point in time note the current binlog position on the new server:

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000982 |   105846 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

Use these coordinates to configure the slave on the master, then start the slave:

mysql> change master to master_host='mysql1.prd-aws.fyber.com', master_log_file='mysql-bin.000982', master_log_pos=105846;
Query OK, 0 rows affected (0.04 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

Re-parent the new server

If you want the new server to replicate from a different master than the old server did, follow the steps described here: Re-parenting a MySQL slave