Importing a rather big .sql file...
It would be nice to have some alternative way of importing .sql files without preloading them into the SQL editor.
https://sourceforge.net/tracker/index.php?func=detail&aid=1748286&group_id=164593&atid=832350
How do you see the possibility of adding it?
EDIT: Also, what about an option for not adding the column names when exporting tables, like in old MySQL-Front? Otherwise the sql file is nearly double the size.
EDIT2: Also, the current exporting interface doesn't seem to (correctly?) show which record is currently being exported. This is a problem for tables and rows with a large amount of data (mainly BLOB fields), where someone could think that the program is frozen.
Heh, yes, it seems I'm not the only one who has run into this and thought of the same alternative...
How do you see the possibility of adding it?
Yes, it's one of the next things I'm going to implement.
Also, what about an option for not adding the column names when exporting tables, like in old MySQL-Front? Otherwise the sql file is nearly double the size.
We dropped the "Complete Inserts" option some time ago for the sake of safety. You can get into big trouble if you don't export the column names, then alter the table structure and then try to import the sql file.
If you wish to get smaller files you should simply check "Extended Inserts". (That applies to the 3.0 final. The next releases will use this extended syntax by default.)
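Just to illustrate the difference (table and data are made up for the example): complete inserts repeat the column list for every row, while the extended syntax packs several rows into one statement:
INSERT INTO users (id, name) VALUES (1, 'Alice');
INSERT INTO users (id, name) VALUES (2, 'Bob');
-- versus the extended syntax:
INSERT INTO users (id, name) VALUES (1, 'Alice'), (2, 'Bob');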
the current exporting interface doesn't seem to (correctly?) show which record is currently being exported. This is a problem for tables and rows with a large amount of data (mainly BLOB fields), where someone could think that the program is frozen.
Yes, that's a known problem when exporting large tables. No idea yet whether we'll make that nicer at some point.
Yes, it's one of the next things I'm going to implement.
It's nice to know. Thanks for your efforts.
We dropped the "Complete Inserts" option some time ago for the sake of safety. You can get into big trouble if you don't export the column names, then alter the table structure and then try to import the sql file.
If you wish to get smaller files you should simply check "Extended Inserts". (That applies to the 3.0 final. The next releases will use this extended syntax by default.)
Well, I'm aware that it's safer to add the column names, but I think it would be nice to have it as an option as well. Also, using extended inserts is giving me a lot of problems, mainly with tables with several large BLOB fields (and by large I mean fields up to 50 MB...), and HeidiSQL ends up freezing most times. Dunno if it's just me and the way I've set up one of my databases, but MySQL-Front gives me better performance when exporting.
Well, I'm aware that it's safer to add the column names, but I think it would be nice to have it as an option as well.
Hmm. I don't agree. What would be the advantage of having this option? (Besides the performance issue)
Also, using extended inserts is giving me a lot of problems, mainly with tables with several large BLOB fields (and by large I mean fields up to 50 MB...), and HeidiSQL ends up freezing most times.
You mean, if you choose to export with extended syntax, HeidiSQL performs slower than with "normal" syntax? If so, that's worth a bugtracker item, I think: http://bugs.heidisql.com/
Dunno if it's just me and the way I've set up one of my databases, but MySQL-Front gives me better performance when exporting.
I guess we're talking about the 2.5 release. Yes, that's definitely true, it performed better in many parts. But it also included many bugs, some of which led to loss of data. We're trying to make HeidiSQL as bug-free as we can. After that, performance optimisations will be implemented.
- MySQL-Front (with the "Include 'DROP TABLE' statements" option checked): it takes less than half a minute. 45.7 MB
- HeidiSQL (Database option unchecked, Table set to recreate, target compatibility set to the server used in this situation (4.1), extended inserts unchecked): takes around 2 minutes and the size is 86.6 MB.
- HeidiSQL (the same as above, plus extended inserts): it's already been running for more than 30 minutes...
Also, about the exporting functions, it would be better if you didn't show the "Wait for query to finish" popup, as constantly showing and closing it makes the process slow down a bit, but I guess that's something you already know.
1.- New importing function: very nice, I think this is an option that can offer a nice advantage over other programs and scripts. I didn't check its state before coming to work, but the last time I looked at it last night it was running rather nicely (even importing the file with the MySQL command line was giving some problems with the latest stable release).
I think you should make the minimum size a bit bigger, as 5 MB is easily surpassed by databases, or just add an option for setting the value. Also, I don't know if it's already implemented, but when the importing function fails on a query, it should be possible to retry it, modify it, abort the operation or skip that query (plus an "Always use this on error" option), with a summary of errors at the end.
2.- Exporting function: using extended inserts works now, although it is a bit slow. Also, the resulting file size is still big (86.6 MB with the database used in my previous reports). I've noticed that using Standard ANSI SQL as the target is a bit faster, plus the file size is just 44.3 MB, even smaller than MySQL-Front's. Is that normal?
And again, I think you shouldn't show and close the Status dialog constantly.
Of course, I'm not demanding anything, but just giving my opinion, and I don't feel like there is a rush for all this.
EDIT: Just noticed that when you are viewing a BLOB field with an image in it and then move to a MEMO field, the image isn't refreshed (well, in fact, if you are on a BLOB field and move to a MEMO field of another row, it keeps showing the image of that row). Don't know if that happens in previous builds. Anyway, it's a minor bug.
Thanks for your tests and opinions!
It would be nicer if I could lend a hand, heh, although I've never used Pascal or Delphi. For my projects I use .Net, VB6, AS 1, 2 & 3.0, and I've also worked a bit with Java and C.
I've just noticed that the .sql file using extended inserts is as big as the one from the previous HeidiSQL version without them, so I wonder if the dump is fine. I guess it's because of transforming each binary field byte into hexadecimal values.
If so, maybe you added lots of rows in the meantime. Or the global max_allowed_packet variable is set to a low value. Extended INSERTs are generated taking that variable into account, to ensure the file can be imported (at least on servers with the same value for max_allowed_packet). The default value is 1M, I think. You can check it by firing:
SHOW VARIABLES LIKE 'max_allowed_packet'
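And if it turns out to be too low and your server allows it, you can raise it at runtime or in the config file (just a sketch; setting it globally needs the SUPER privilege and the exact limits depend on your MySQL version):
SET GLOBAL max_allowed_packet = 16777216; -- 16M
-- or permanently in my.cnf / my.ini under [mysqld]:
-- max_allowed_packet=16M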
Oh yes, or the binary conversion to hex code could also be the cause.
Also, talking about conversions... I don't know about Delphi, but in VB6 converting a byte into its corresponding hexadecimal value is rather slow... but then VB6 isn't very good at these tasks. Is it really necessary to do it?
I'll show the relevant code here (simplified, irrelevant code stripped), so you'll probably get a feeling for how the conversion is done.
// Calling site (simplified):
if hasIrregularChars(Text) then
  Result := escAllCharacters(Text, CharSet, sql_version);

function hasIrregularChars(Text: String): Boolean;
var
  i: Integer;
  b: Byte;
begin
  Result := False;
  for i := 1 to Length(Text) do
  begin
    b := Ord(Text[i]);
    // Latin1 characters are everything except 0..31 and 127..159.
    // 9..13 are HTAB, LF, VTAB, FF, CR. We only allow 9, 10 and 13,
    // because those are the ones that we can escape in esc().
    if b in [0..8, 11..12, 14..31, 127..159] then
    begin
      Result := True;
      Break;
    end;
  end;
end;

function escAllCharacters(Text, CharSet: String; sql_version: Integer): String;
var
  i: Integer;
  s: String;
begin
  // Emit the whole value as one hex literal: '0x' followed by two hex digits per byte.
  s := '0x';
  for i := 1 to Length(Text) do
    s := s + IntToHex(Ord(Text[i]), 2);
  // Ensure correct import on servers (v4.1+) supporting multiple character sets.
  Result := '/*!40100 _' + CharSet + '*/ ' + s;
end;
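So, for example, a value containing a control byte, say the three bytes 'A', 'B' and #7 (BEL), ends up in the dump as one hex literal with a character set introducer (table name and value made up just for illustration):
INSERT INTO files (data) VALUES (/*!40100 _latin1*/ 0x414207);
The server decodes the byte stream using the given character set when importing.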
EDIT: Better to send a PM.
After clicking on Accept, the import progress continued.
I have a patch ready that makes the export of big BLOBs vastly faster. It's just waiting for a review by another developer.
... that it's a bit annoying to show the status dialog over every window
Done: http://heidisql.svn.sourceforge.net/viewvc/heidisql?view=rev&revision=791
Check out this bug report: https://sourceforge.net/tracker/?func=detail&atid=832347&aid=1757524&group_id=164593
I have a patch ready that makes the export of big BLOBs vastly faster. It's just waiting for a review by another developer.
Nice. Thanks for both updates.
If I can be of any help testing this update with any of the databases I have here at work or at home, feel free to tell me.
I know that HeidiSQL can import large sql data. But how do I do it?
My databases are 200 and 500 MB in size.
When I try to import the databases I get the message that the memory is too small, or (with the 200 MB database) it creates a few tables and then freezes / stops creating them without any failure message.
I'm sure I'm forgetting some setting or something else.
Thanks for your help
Sabine
I guess it's because of transforming each binary field byte into hexadecimal values.
Probably. Feel free to gzip the output!
Is it really necessary to do it?
Yeah.. There is no standard defining extended character set usage in .sql files. Standard Unicode BOM markers do not work because most SQL tools do not understand them.
Extended characters are therefore interpreted however your particular SQL tool sees fit. If you just use one tool, that's probably OK. If you want SQL files that will work in a year from now, and in all available tools, that's _not_ ok.
The only safe characters to use in .sql dumps are therefore ASCII, and the most compatible and safe way to encode extended characters into ASCII when dealing with SQL is to prepend the string with the intended destination character set and print all the characters as a stream of bytes. This way, the SQL dump will work from computer to computer using different character sets and from SQL tool to SQL tool using different file interpretation mechanisms.
You might rightfully ask why the entire string is encoded, instead of only the non-ASCII characters, seeing as this makes the entire string unreadable to most humans. The entire field's data is encoded because of a performance consideration; it is much faster for the server to import a bunch of streams than it is to run CONCAT(ascii, extended stream, ascii, extended stream, etc) for each field of data.
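As a rough illustration (a hypothetical table t and a latin1 value invented for the example), here is the whole-stream form versus the mixed form the server would otherwise have to evaluate:
-- whole value as one hex stream (what the dump contains):
INSERT INTO t (txt) VALUES (/*!40100 _latin1*/ 0x48656C6C6F20E42026204D6F7265);
-- mixed form, concatenated per field (slower to import):
INSERT INTO t (txt) VALUES (CONCAT('Hello ', _latin1 0xE4, ' & More'));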