Altering Large Tables in MySQL

At Avid Life Media, our dating sites produce a lot of data. Some of our MySQL databases contain tables as large as 120GB; Ashley Madison’s “recent messages” table alone has over 22,000,000 rows. Modifying large tables can be daunting, but with some planning and forethought it can be done safely, without downtime and without any replication delay.

Normally, altering a table in MySQL means a full table rebuild and swapping the old table for a new one. For any system with extensive writes, this means downtime for the duration of the ALTER. Depending on the complexity of the application, it could also mean additional downtime to stop background workers, etc.

To further complicate matters, having a master and slave configuration means that all downtime is doubled, because the ALTER statement will replicate to the slave only after it has completed on the master. While not recommended, this can be reduced by stopping replication, running the ALTERs in parallel on the master and slaves, and re-enabling replication once they have completed.
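
A sketch of that (risky) parallel shortcut, using standard MySQL replication controls (the column being added is purely illustrative):

```sql
-- On each slave: stop replication, run the ALTER locally, then resume
STOP SLAVE;
ALTER TABLE table1 ADD COLUMN field2 int(11) DEFAULT NULL;
START SLAVE;

-- Meanwhile on the master: keep the ALTER out of the binary log
-- so it does not replicate a second time
SET SESSION sql_log_bin = 0;
ALTER TABLE table1 ADD COLUMN field2 int(11) DEFAULT NULL;
SET SESSION sql_log_bin = 1;
```

Any statement that slips through while schemas differ can break replication, which is why this is not recommended.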

With tables over 10GB in size, an ALTER could take more than an hour to complete, resulting in a downtime window of about five hours (one hour for the master ALTER, one hour for the slaves, two hours of buffer for a potential rollback, plus the time to stop and start the application).

There had to be a better way…

At Avid, we use the following approach to rebuild large tables:

    • Create a new empty ‘shadow’ table with the new structure that will eventually replace the existing table
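For example, the shadow table can be cloned from the current definition and then altered while still empty (the added column `field2` is hypothetical):

```sql
-- Build the shadow table from the existing definition,
-- then apply the desired schema change while it is still empty
CREATE TABLE table1_shadow LIKE table1;
ALTER TABLE table1_shadow ADD COLUMN field2 int(11) DEFAULT NULL;
```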
    • Place BEFORE UPDATE, AFTER INSERT and AFTER DELETE triggers on the existing table to propagate any incoming changes to the shadow table
      • Since INSERT and UPDATE are handled the same way, the following stored procedure is created and called by both triggers:
CREATE PROCEDURE proc_test_table1_update( op int
                                        , a_id int(11)
                                        , a_field1 int(11) )
   INSERT INTO table1_shadow SET
      id=a_id, field1=a_field1
   ON DUPLICATE KEY UPDATE
      id=a_id, field1=a_field1;
      • AFTER INSERT trigger:
CREATE TRIGGER test_table1_insert_after_trigger AFTER INSERT ON table1 FOR EACH ROW
   CALL proc_test_table1_update(0
                               , new.id
                               , new.field1);
      • BEFORE UPDATE trigger:
CREATE TRIGGER test_table1_update_before_trigger BEFORE UPDATE ON table1 FOR EACH ROW
   CALL proc_test_table1_update(1
                               , new.id
                               , new.field1);
      • AFTER DELETE trigger:
CREATE TRIGGER test_table1_delete_after_trigger AFTER DELETE ON table1 FOR EACH ROW
   DELETE FROM table1_shadow WHERE id=old.id LIMIT 1;
  • The most important step: Execute ‘fake’ updates
    UPDATE table1 set id=id where id=1;
    UPDATE table1 set id=id where id=2;
    UPDATE table1 set id=id where id=3;

    This invokes the BEFORE UPDATE trigger, populating the shadow table without changing the existing table. The updates should be executed in batches, with a separate statement for each primary key value. This allows us to control the rate, limiting load on the servers and minimizing replication delay.
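
The batched fake updates can be sketched as a throttled loop; the procedure name, parameters and the SLEEP-based throttle below are illustrative, not our exact script:

```sql
-- Issue one fake update per primary key value, pausing after each
-- batch to limit load on the servers and replication delay
CREATE PROCEDURE proc_backfill_table1( batch_size int
                                     , pause_secs int )
BEGIN
   DECLARE cur_id int DEFAULT 0;
   DECLARE max_id int;
   SELECT MAX(id) INTO max_id FROM table1;
   WHILE cur_id <= max_id DO
      UPDATE table1 SET id=id WHERE id=cur_id;
      SET cur_id = cur_id + 1;
      IF cur_id MOD batch_size = 0 THEN
         DO SLEEP(pause_secs);   -- throttle between batches
      END IF;
   END WHILE;
END
```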

  • Swap the existing table with the shadow table via an atomic rename for both tables:
    RENAME TABLE table1 TO table1_old, table1_shadow TO table1;
  • Drop the triggers and the old table
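
Concretely, the cleanup in the final step might look like:

```sql
-- Remove the maintenance triggers and the renamed original table
DROP TRIGGER test_table1_insert_after_trigger;
DROP TRIGGER test_table1_update_before_trigger;
DROP TRIGGER test_table1_delete_after_trigger;
DROP TABLE table1_old;
```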

As an alternative to the “do it yourself” approach outlined above, there are tools available that implement something very similar, for example pt-online-schema-change from the Percona Toolkit. While these tools cover many cases and are likely to work as advertised, we are more comfortable with a simple, pre-prepared SQL script when working with our precious production data.

Some benefits of our approach:

  • Easy rollback: when the shadow table has been populated and the tables have been switched, there is still the possibility of switching the tables back. Just apply any changes that occurred on the new table since the switch was made, perhaps via the binary log.
  • Translation: You can do additional manipulations inside the triggers before inserting into or updating the shadow table. For example, the shadow table may have a new unique constraint added; the resulting conflicts can be resolved with an INSERT … ON DUPLICATE KEY UPDATE in the stored procedure.

    This allows us to fix duplicates ‘on the fly’.

  • Full control over shadow table population: To minimize replication delay and avoid overloading the servers, the update rate can be monitored in detail and throttled. If a specific time of day is known to have higher DB load, updates can be paused entirely.
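
The rollback mentioned in the first bullet is itself a single atomic rename; re-applying any changes made since the switch is a separate step:

```sql
-- Swap the tables back; table1_old still holds the original data
RENAME TABLE table1 TO table1_shadow, table1_old TO table1;
```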

So, how long does this process take? Roughly 4-5 times the estimated ALTER time. One thing to note is that a traditional ALTER’s speed can be extremely variable in a production environment, whereas row-by-row population is very consistent and predictable. As noted previously, an ALTER with an estimated one-hour duration results in five hours of planned downtime; the proposed approach can be expected to take five hours without any downtime. This is clearly better for both users and the business.

For a more complete example of the SQL involved, check out this sample.

But what if your table has 50 fields? Manually writing this code can become tedious and error prone. For this reason, we have written a generator that produces the code above. We’ll cover this generator and its use in our next article.

About Sergiy Tytarenko

Database Lead

Let me see your primary keys!

  • Kevin Thorpe Jul 20, 2012 at 07:20

    Take a look at TokuDB as well. That supports in-place non-blocking schema changes.

    Saves you having to do any of this.

  • Santiago Basulto Jul 20, 2012 at 08:08

    Nice. Back in my days you couldn’t invoke a SP from a trigger. When was it introduced?

  • Toby Nov 5, 2012 at 19:04

    Is there a race condition in the delete case during the table population step?

    1) SELECT completes in connection A (reading from source table)
    2) DELETE completes in Application
    3) TRIGGER fires in Application -> DELETE from shadow table
    4) INSERT completes in connection B (updating shadow table)

    Since there is no synchronisation that would ensure 4 occurs before 3.

    Am I missing something?

  • pravin jha Nov 20, 2013 at 11:19

    But what if your table has 50 fields? Manually writing this code can become tedious and error prone. For this reason, we have have written a generator that produces the code above. We’ll cover this generator and its use in our next article.

    Can we get the generator mentioned in article?

  • Kalyani Dec 8, 2013 at 12:00

    Does removing the databases on both the master and slave, and also removing the binlog files from both servers, have any negative impact on replication if I re-create the databases with the same name?