InnoDB, given its (relatively) recent promotion to MySQL’s default storage engine, can quickly catch an unfamiliar administrator off-guard if they’re not aware of how it works behind the scenes. With that in mind, there are a few key concepts you should be aware of if you’re working with any InnoDB tables – and if you’re running MySQL 5.6+, you are: at the very least, a few of MySQL’s system tables use InnoDB, even if the default engine is set to MyISAM in the configuration file.
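
If you’re curious which of your tables are already on InnoDB, a quick look at information_schema will tell you – a minimal example, assuming you have the privileges to query it:

    -- List every InnoDB table outside of MySQL's virtual schemas
    SELECT TABLE_SCHEMA, TABLE_NAME
      FROM information_schema.TABLES
     WHERE ENGINE = 'InnoDB'
       AND TABLE_SCHEMA NOT IN ('information_schema', 'performance_schema');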


1. Save for the transportable tablespace functionality introduced in MySQL 5.6, InnoDB table files should not be considered portable.

In MyISAM, the table file structure was fairly flexible: the engine would simply adapt when it detected new table files or database folders in the data directory. At a glance that sounds like a great feature, and in the context of MyISAM it is, but the reason it isn’t feasible in InnoDB is not that anyone is trying to tie you down or make things more difficult for you – it’s that InnoDB adheres to data consistency standards, specifically ACID compliance: Atomicity, Consistency, Isolation, and Durability.

In short, this means: operations either fully succeed or fully fail (no in-between), data integrity is enforced, concurrent operations are kept from interfering with one another, and data is stored in a way that minimizes the impact of any environmental failures.

To actually meet these standards, though, InnoDB must maintain a way of verifying the data. To accomplish this, it stores information about your InnoDB tables and their respective databases in places other than the table files themselves – most notably in its internal data dictionary, which lives in the shared ibdata tablespace. However, if something is moved out of place or changed by hand, InnoDB will notice and raise the alarm – because as far as InnoDB is concerned, the data integrity has been compromised.

While 5.6’s transportable tablespace functionality is very useful, it’s unfortunately still not as simple as dropping the table file into the database directory and firing up MySQL – though it’s certainly a big step in the right direction.

If you’re interested in taking advantage of this feature, I’d certainly suggest looking through the documentation, as it can come in handy.
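
To give you a feel for the workflow, here’s a minimal sketch of moving a single table between two 5.6+ servers – the database and table names are placeholders, and it assumes the default file-per-table layout with an identical table definition already created on the destination:

    -- On the source server: quiesce the table and have InnoDB write a .cfg metadata file
    USE mydb;
    FLUSH TABLES mytable FOR EXPORT;
    -- (while the lock is held, copy mytable.ibd and mytable.cfg out of the datadir)
    UNLOCK TABLES;

    -- On the destination server: discard the empty tablespace, copy the files in, then import
    USE mydb;
    ALTER TABLE mytable DISCARD TABLESPACE;
    -- (copy mytable.ibd and mytable.cfg into the destination database directory)
    ALTER TABLE mytable IMPORT TABLESPACE;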

 

2. Re-initializing the ibdata file is almost certainly not the answer.

There are very few situations when the InnoDB data file must be re-created to resolve an issue.

I’ve come across a number of guides and articles online recommending this, and I highly suggest not following their advice unless you are absolutely sure it is the right move for your situation. It’s often presented as a “solution to everything InnoDB-related”, which – in a way – it is.

If MySQL is crashing because of InnoDB, removing or renaming ibdata1 and the ib_logfiles (renaming is always preferred) will certainly bring MySQL back online. However, it will come back up without any of your InnoDB data – it will still see the orphaned table files sitting in the database directories and complain about them in the error log.

So if the goal is simply to get MySQL online, and you’re not concerned about any of the data involved, this will do the trick. However, it’d be a good idea to at least move the old database folders out of your data directory first.
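
If you do go down that road anyway, the file shuffle looks roughly like this – a sketch only, assuming a default /var/lib/mysql data directory, a service name that may differ on your distribution, and full copies of everything already sitting somewhere safe:

    # Stop MySQL before touching anything in the data directory
    service mysql stop

    cd /var/lib/mysql
    mkdir innodb_old

    # Rename rather than delete, so nothing is lost if you change your mind
    mv ibdata1 ib_logfile0 ib_logfile1 innodb_old/
    mv mydatabase innodb_old/    # repeat for each database folder (placeholder name)

    # InnoDB will create a fresh ibdata1 and fresh log files on startup
    service mysql start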

On the other hand, re-creating the log files (ib_logfile*) alone can actually be helpful in a few different situations, and is not quite as risky (of course, backups should always be made first – keep the original files on hand so you can put them back if you run into trouble).

For example, in some versions of MySQL, if your log files become corrupted, there’s a chance that it will cause MySQL to segfault, producing a “Signal 11” error in the error logs (signal 11 is the POSIX signal code for a generic segmentation fault, so this is not the only reason that you might run into this). In situations like this, it makes complete sense to re-initialize the ib_logfiles.

Afterwards, you’ll see log sequence number errors flood the error logs – don’t panic. That’s not a bad thing – the new logs are fresh and contain no data, and it will take a minute for InnoDB to run through the data and bring everything back up to speed. Keep in mind that the log files exist exclusively for crash recovery; their records are never used at any other time.
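
If you do need to re-initialize the log files, the safest path is a clean shutdown first, so there’s nothing left in them to lose – a rough sketch, again assuming a default data directory:

    # Ask InnoDB to do a full flush/purge on the next shutdown, then stop MySQL
    mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"
    service mysql stop

    # Move the old log files aside rather than deleting them
    cd /var/lib/mysql
    mv ib_logfile0 ib_logfile0.bak
    mv ib_logfile1 ib_logfile1.bak

    # Fresh log files will be created on startup
    service mysql start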

 

3. The innodb_force_recovery option should not be used as a way of keeping your server online.

I’ve come across too many servers where, upon investigation, it turns out that the my.cnf file has “innodb_force_recovery=6” defined unnecessarily, with the administrator not even aware of or able to recall the change – meaning it may have been enabled this way for a significant period of time.

Granted, it’s difficult to go for any length of time in this recovery mode without noticing something, given the restrictions it imposes, yet it happens often enough to be concerning.

On the surface, it may seem like a good idea to leave it this way, or in any of the other recovery modes – the important part is that your server stays online, right? The problem here is due to the way that innodb_force_recovery works, and the reason that it allows your MySQL server to start at all.

Each mode, with 1 being the lowest severity and 6 the highest, disables core components of InnoDB – the very components that help maintain the integrity and structure of your data. The higher you go, the more of the engine you’re invalidating or disabling. This feature is included solely so that you can perform dumps or retrieve other specific MySQL data for recovery, stop MySQL again, and continue with your primary recovery method.

Operating with innodb_force_recovery enabled, even at its lowest value, is just putting your data at additional risk, and potentially introducing new problems into the mix.

The general rule of thumb is that modes 1-3 can be enabled without much significant risk to your data (though stability and functionality will be affected), while modes 4 and above – especially 5 and 6 – can permanently corrupt data files and should only be used in extreme situations. I posted a guide on the forums some time ago that includes a bit more detail on the innodb_force_recovery setting, as well as the circumstances in which you should use it. That said, there’s no substitute for going through the official documentation on this topic.
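
As an illustration of the intended “get in, dump what you need, get out” usage – a sketch with placeholder paths, not a configuration to leave in place:

    # my.cnf – temporary; use the lowest value that lets mysqld start
    [mysqld]
    innodb_force_recovery = 1

Then, once MySQL is up in recovery mode:

    # Take your dumps, stop MySQL, and remove innodb_force_recovery from my.cnf
    mysqldump --all-databases --routines --triggers > /root/emergency_dump.sql
    service mysql stop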

 

4. Seeing an InnoDB crash recovery in the MySQL error logs, on its own, is a good thing.

Believe it or not, InnoDB is designed to crash. That’s a bit of an exaggeration, but its components function the way they do because there are fail-safes in place to save them if things go awry. One of the reasons InnoDB’s performance is so highly regarded is its ability to perform sequential I/O, which in InnoDB’s case requires that writes be delayed so they can be grouped and re-ordered before being flushed to disk.

The problem is that when you delay writes like this – with changed data still sitting in memory rather than on disk – you introduce the risk that a system crash will wipe out anything that had not yet been written.

To solve this, InnoDB has the redo log files. By default, InnoDB stores two of these in your data directory, named ib_logfile0 and ib_logfile1 (both the number of log files and their size can be adjusted via startup parameters in my.cnf, as documented in the MySQL reference manual).
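
The relevant parameters look something like this – the values are only illustrative, and on older MySQL versions changing the log file size requires a clean shutdown and moving the existing ib_logfiles out of the way first (5.6.8+ handles the resize automatically):

    # my.cnf – redo log sizing (illustrative values only)
    [mysqld]
    innodb_log_files_in_group = 2
    innodb_log_file_size      = 256M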

InnoDB treats these as if they were a single, concatenated file functioning as a circular buffer: records are written sequentially, and once the end is reached, writing wraps back around to the beginning and overwrites records whose changes have already been flushed to disk. This makes the size of these files very relevant to the recovery procedure, its capabilities, and its performance.

When changes are made, a corresponding record is created within the log files, which includes a log sequence number that InnoDB tracks. In the event of a crash, when MySQL attempts to start back up, InnoDB can determine whether any changes existed that had not been flushed to disk prior to the crash, and it will automatically recover them. This is one of the ways that InnoDB upholds the “durability” part of the ACID concept.
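
If you’re curious, you can watch these sequence numbers yourself – the LOG section of the standard InnoDB status output shows the current log sequence number, how far it has been flushed, and the last checkpoint:

    -- Look for "Log sequence number", "Log flushed up to", and "Last checkpoint at"
    -- in the LOG section of the output
    SHOW ENGINE INNODB STATUS\G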

With that in mind, if you see crash recoveries occurring – again, don’t panic. As long as everything came online, and there are no other more severe errors indicated, you could be A-OK. However, if on the other hand you’re seeing frequent, repeated crash recoveries, or if the crash recoveries are the result of MySQL being unexpectedly terminated and restarting frequently, then you could have another problem on your hands, at which point it would be a good idea to examine the system logs to try and determine what’s going on.
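
When you do dig into the system logs, the usual first suspect on Linux is the kernel’s out-of-memory killer terminating mysqld – a quick place to start looking (log paths vary by distribution):

    # Look for signs that the kernel killed mysqld (OOM killer, segfaults, etc.)
    dmesg | grep -i -E 'oom|killed process|mysqld'
    grep -i -E 'oom|mysqld' /var/log/messages    # or /var/log/syslog on Debian/Ubuntu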

These are just a handful of the big issues I’ve run into while troubleshooting the many InnoDB-related server problems that have presented themselves over the past few years. Many of them can be easily avoided, saving you far more pain and frustration than a simple corrupt page or data dictionary error ever would. Know any admins out there who are prone to InnoDB trouble? Spread the word and pass this along so we can fight the misinformation that still readily shows up in typical InnoDB Google results.

If you have any questions about anything you run into, or if you have anything to add to the discussion, feel free to leave a comment for us here!


This post was originally published on September 8, 2014 and was updated on August 11, 2017.