Heya! We recently embarked on an epic project to bring some of our ancient databases into utf8_unicode_ci and noticed a minor issue. When you convert a table's collation, either from the table's own Options page or from the Bulk Table Editor that you can open by right-clicking a database, there's a set of circumstances that can cause the conversion to silently fail.
As far as we can tell, a table will fail to convert (without throwing any errors) IF the data is set to be converted (the 'Convert data' toggle in the Options panel, or the "convert to charset" checkbox in the bulk editor) AND no rows in the table contain any string data. Even if a table has string columns, the converter effectively "skips" it when no row actually holds a string value yet. For tables with no string columns at all, conversion never succeeds, even when they contain data. It's especially baffling when you're doing this from a table's Options panel, because the collation repeatedly springs back to its original default without any explanation.
This can easily be circumvented by running a second conversion with the "convert to charset" checkbox unchecked, which DOES successfully convert empty or stringless tables. That's what makes me think this is a bug rather than me failing to understand the underlying logic. Not a huge deal, but it confused us for about fifteen minutes, and it has slowed down my process a little bit!
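For anyone else hitting this: my guess (an assumption on my part; I haven't read the tool's source) is that the two modes map onto MySQL's two ALTER TABLE forms, which would look roughly like this:

```sql
-- With "convert data" checked: rewrites existing column data into the
-- new charset as well as changing the table defaults.
-- (my_table is a placeholder name.)
ALTER TABLE my_table CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;

-- With "convert data" unchecked: changes only the table's default
-- charset/collation; existing column data is left untouched.
ALTER TABLE my_table CHARACTER SET utf8 COLLATE utf8_unicode_ci;
```

If that's right, running the second form by hand is an equivalent stopgap for empty or stringless tables until the bug is fixed.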
Thanks so much for all your work!