Problem importing SQL file using BigDump

[expired user #8607] posted 9 years ago in Import/Export
Hi, I exported database tables to SQL files from my local computer and tried to import them into the database on my server using BigDump.

Small tables work fine, but for larger ones, like one with 10,738 records, I cannot ...
BigDump stops with the following error message:

"Stopped at the line 127.
At this place the current query includes more than 100 dump lines. That can happen if your dump file was created by some tool which doesn't place a semicolon followed by a linebreak at the end of each query, or if your dump contains extended inserts or very long procedure definitions. Please read the BigDump usage notes for more infos. Ask for our support services in order to handle dump files containing extended inserts."
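For illustration, an "extended insert" is one INSERT statement carrying many rows, while a plain dump writes one row per statement; a minimal sketch with a made-up table name:

    -- extended insert: many rows, a single semicolon at the very end
    INSERT INTO mytable (id, name) VALUES (1, 'a'), (2, 'b'), (3, 'c');

    -- plain inserts: one row per statement, one semicolon per line
    INSERT INTO mytable (id, name) VALUES (1, 'a');
    INSERT INTO mytable (id, name) VALUES (2, 'b');
    INSERT INTO mytable (id, name) VALUES (3, 'c');

BigDump apparently expects the plain form, or at least queries that span fewer lines than its limit.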

Is there a way to set how many rows are included in each query when I export to SQL files using HeidiSQL?
If so, how and where do I do it?

/Stefan
jfalch posted 9 years ago
No, there isn't. You could use the mysqldump tool (part of the MySQL server distribution) with the --skip-extended-insert option to do this.
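For example, assuming a database named mydb and a user named youruser (both placeholders):

    mysqldump --skip-extended-insert -u youruser -p mydb > mydb_dump.sql

With --skip-extended-insert, mysqldump writes one INSERT per row, so no query in the dump spans more than one line.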
[expired user #8607] posted 9 years ago
Thanks for that info, but it really would be nice if that feature could be added, since I prefer to work with HeidiSQL...

I now had to use RazorSQL, where I could set it to 25 dump lines per query in the export file, and it worked like a charm to import...
ansgar posted 9 years ago
This is a problem in "BigDump", not in HeidiSQL. You should file a bug report at their tracker instead of asking whether HeidiSQL can nicely do what BigDump expects. In normal clients, extended INSERTs do not cause any problem, unless their size in bytes exceeds the server variable "max_allowed_packet".
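You can check that limit yourself with a plain query on the server, e.g.:

    SHOW VARIABLES LIKE 'max_allowed_packet';

The value is in bytes; on older MySQL 5.x servers the default is 1048576 (1 MB).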
[expired user #8607] posted 9 years ago
Well, some other similar software, like RazorSQL, does have the option to set the number of dump lines per query.
Even if it's a bug in BigDump, I still think it would be great if HeidiSQL had this feature too.

As you said, software has bugs, like BigDump, and sometimes we just need to adjust ...
I adjusted by using another program ... I would have preferred to use HeidiSQL for this also, but couldn't ...

As a user I have little or no influence on a program's functions and features; I'm just affected by bugs and limitations, and all I can do is speak out about what features I would like to see to make my work easier ;-)

Anyway, it's up to you ... Still, HeidiSQL is great, and I use it on a daily basis :-)

P.S.
Did you notice the BigDump error message points at "some tool" as the problem, while you point at "BigDump"? ;-)
ansgar posted 9 years ago
Well, to be honest, you're not the first one asking for a single-INSERT option. I guess I can add one new checkbox labeled "Multiple rows per INSERT/REPLACE", which is checked by default.
[expired user #8607] posted 9 years ago
In RazorSQL, and indeed phpMyAdmin, you can select the number of lines per query. RazorSQL defaults to 25 rows and phpMyAdmin to 50,000 rows. Both can be changed by the user ... Not sure what number you use now, but just put it as the default in a textbox and let the user change it if needed. Just my idea :-)

I hope to see this feature soon, and I think it's a smart decision by you. If you include it, HeidiSQL will become an even better total solution for DB management!
ansgar posted 9 years ago
I was planning to use a checkbox, just to switch that on or off. The length is 1M at maximum, to undercut the max_allowed_packet variable. But as I think further, people might want to limit this to a certain size without disabling it entirely. Anyway, limiting to a *number* of queries would introduce new problems with the max_allowed_packet variable. Instead, I should introduce a *size* limit in bytes.
Code modification/commit from ansgar.becker, 9 years ago, revision 9.1.0.4894
Add edit box + updown buttons for limiting the size of INSERT queries in bytes. See http://www.heidisql.com/forum.php?t=17345
ansgar posted 9 years ago
Should be working in r4894. Please test and shout if it works (or not).
[expired user #8607] posted 9 years ago
I tested exporting a smaller SQL file, and the inserts look fine.
I will try the table I had problems with before in about an hour or so and let you know how the import goes :-)
[expired user #8607] posted 9 years ago
Okay, I tested the problem table now. First I set it to '10' KB and got the same/similar problem.
Then I tried '1' KB, and now the import is running; hopefully it works.

With '1' the import is slow and will probably take a couple of hours, but so far so good. The important thing is that it's working...

Btw, with this KB setting, depending on the table there will be more or fewer rows per insert... Couldn't you make it so we set the number of rows instead of KB? For example, I would like to set it to 300 rows per INSERT query. Or would that make more trouble/work for you?
ansgar posted 9 years ago
Please read my comment above about the max_allowed_packet variable. When limiting to a number of *rows*, and your table has a huge row size of ~1 MB, a good feature of HeidiSQL's export would be rendered useless: HeidiSQL limits the *size* of a query to undercut the max_allowed_packet variable. If you send a query larger than this, you will get disconnected with the message "Server has gone away". See here for more details: http://dev.mysql.com/doc/refman/5.1/en/packet-too-large.html
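If you administrate the server yourself, another way around a "packet too large" error is to raise the limit; a sketch (value in bytes, needs the SUPER privilege, applies to new connections only):

    SET GLOBAL max_allowed_packet = 16777216;  -- 16 MB

But when exporting for a foreign server, undercutting its existing limit, as HeidiSQL does, is the safer route.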
ansgar posted 9 years ago
In other words: when you only control the number of rows, you have no control over the size of your query, so the query may fail because of its large size.
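A quick example with numbers: 300 rows of ~1 MB each would build a ~300 MB query, while 300 rows of 50 bytes stay around 15 KB, so a row count alone says nothing about whether a query fits under max_allowed_packet.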
[expired user #8607] posted 9 years ago
Okay, but then I would just lower the number of rows and try again, the same as I had to do with KB ... Either way, I see your point, and if you think this is the smart way, then so be it ...

I increased my export test (~3.2 M rows) to 5 KB and the import now finished successfully :-)

I'm happy with this new feature and thank you for adding it !!!
ansgar posted 9 years ago
Of course you could lower the number of rows, but then the one responsible for doing that is suddenly you, no longer HeidiSQL. I think most users are happy with HeidiSQL taking care of that problem. So I'll keep it as it is now.

Nice that it's working now.
[expired user #8653] posted 9 years ago
Hi, I am using HeidiSQL to manually sync between 2 MySQL servers, because I have no privileges other than SELECT on the primary server.

Since the release of r4894, the replace function has the new max insert size option. Since then, syncing speed has decreased enormously. The sync operation now takes more than 5 times as long, no matter what the max insert size value is set to.

Is there any suitable setting to get the speed back?
ansgar posted 9 years ago
Performance should be as before when you select ~1 MB per INSERT; that's what was hardcoded before. The higher you set this value, the faster the export should go. However, you will get disconnects when you import an SQL file with INSERTs bigger than the max_allowed_packet variable on the server.

When you decrease that value, the export gets slower. That's right. I'm planning to fix that by optimizing some file stream code in that dialog.
[expired user #8653] posted 9 years ago
The max_allowed_packet variable on the primary server is set to 1048576 (1 MB),
and I checked that the variable on the sync server is set to 1600000+.

Could there be a problem with the network connection between the machines?
The setup is:
The primary server is running MySQL 5.0 on Linux.
The sync server is running MariaDB 10.0.15 on Windows Server 2008,
bridged by a client on Windows XP SP3 running HeidiSQL r4898.
The only change since r4894 is the HeidiSQL version.

Thank you for the instructions; I will try changing the export size setting.
[expired user #8607] posted 9 years ago
I've been using this new feature for a while now, and all I can say is that I'm happy with it. It works like a charm, and all the problems I had before are gone :-)
[expired user #8653] posted 9 years ago
After setting the max size to 1024 KB, the performance seems to be back.
Thanks for the help :-)
