Total number of bytes used per record?

BenH posted 7 years ago in General
When creating a new table and assigning the various data types and lengths, is there a way (for optimisation purposes) to tell the total space (in bytes) required for each record? I hope that makes sense... I'm new to this!
ansgar posted 7 years ago
You can query your rows in such a way:

SELECT LENGTH(col1) + LENGTH(col2) FROM table

Should deliver reliable sizes at least for text based data types such as VARCHAR and TEXT.
BenH posted 7 years ago
Many thanks for your reply although it wasn't quite what I was looking for. Could I suggest this as a feature to be added to a future version? I think it would be pretty useful.
ansgar posted 7 years ago
A feature? What should that feature look like? We talked about some calculation here, not about some GUI feature in HeidiSQL, did we?
BenH posted 7 years ago
My apologies for not being clear. I was thinking of it as a feature of the GUI.
ansgar posted 7 years ago
Well, describe it then, please.
BenH posted 7 years ago
OK, this is how it came about...
I was creating a new table with about 20 columns of all different data types, which I was trying to optimise with regard to the storage size of each record. It's the first time I have used a database, so I am not totally clear on the exact storage size of each data type. So I thought it would be useful (to me at least) to have an indication of the total storage required for the records in the table (updated as you add new columns). It would help make sure the records weren't taking up more space than they need to.
Is it clear now what I mean?
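The kind of running total described above could be sketched like this. It is only an illustration: the byte counts follow the "Data Type Storage Requirements" chapter of the MySQL manual (assuming a single-byte character set, and the pre-5.6.4 8-byte DATETIME), and all table and column names are invented.

```python
# Rough per-row storage estimator for a MySQL table definition (sketch).
# Byte sizes per the MySQL manual's "Data Type Storage Requirements";
# single-byte charset assumed. All names below are made up.

FIXED = {"TINYINT": 1, "SMALLINT": 2, "INT": 4, "BIGINT": 8,
         "FLOAT": 4, "DOUBLE": 8, "DATE": 3, "DATETIME": 8, "TIMESTAMP": 4}

def column_bytes(sql_type, length=0):
    """Worst-case bytes one value of this type can occupy."""
    t = sql_type.upper()
    if t in FIXED:
        return FIXED[t]
    if t == "CHAR":      # fixed-length string: always `length` bytes
        return length
    if t == "VARCHAR":   # payload plus a 1- or 2-byte length prefix
        return length + (1 if length <= 255 else 2)
    raise ValueError("unknown type: " + sql_type)

def row_bytes(columns):
    return sum(column_bytes(t, n) for _, t, n in columns)

# hypothetical sensor-reading table
columns = [("recorded_at", "DATETIME", 0),
           ("sensor_id",   "SMALLINT", 0),
           ("label",       "VARCHAR", 40),
           ("value",       "FLOAT", 0)]
print(row_bytes(columns))  # 8 + 2 + 41 + 4 = 55
```

Note this is a worst-case estimate: VARCHAR columns only occupy as many bytes as the stored string, and the storage engine adds its own per-row overhead on top.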
kalvaro posted 7 years ago
This must be the first time I've heard database design discussed in such terms... What are you trying to accomplish? Storing a string like "Hello" serialized as an integer if that saves a byte?

If you could provide a real life example of those optimizations you have in mind, we could probably help you point out the precise misconception.
jfalch posted 7 years ago
for what it's worth, chapter 11.5, Data Type Storage Requirements, of the MySQL manual contains the details.

however, i must side with kalvaro; size-optimizing db record structures by hand in the age of terabyte drives seems to me a strangely retro kind of hobby.

one alternative adhoc strategy that i have used myself would be:
1) make fields quite large;
2) wait until table contains a sizeable amount of data;
3) statistically analyze field usage, and optionally shrink some fields accordingly.
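Step 3 above can be sketched with a query like `SELECT MAX(LENGTH(col))` per column. Here is a minimal, self-contained demonstration using Python's built-in sqlite3 (the same query works in MySQL); the table and its contents are invented for illustration.

```python
import sqlite3

# After real data has accumulated, check how much of each generously
# sized column is actually used, then consider shrinking it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sensor (label VARCHAR(255), unit VARCHAR(255))")
con.executemany("INSERT INTO sensor VALUES (?, ?)",
                [("outdoor temperature", "degC"), ("humidity", "%")])

for col in ("label", "unit"):
    used, = con.execute(f"SELECT MAX(LENGTH({col})) FROM sensor").fetchone()
    print(col, used)   # longest value actually stored in that column
# prints: label 19 / unit 4 -> VARCHAR(32) and VARCHAR(8) would be plenty
```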
BenH posted 7 years ago
Ok, it's beginning to make sense why you guys think my question is so strange. I come from a world (programming microcontrollers) where memory is limited and optimisation is important. My mindset is to 'optimise where one can' and 'why use more than you need to'. The records (which contain a mixture of datetime, char, varchar, int and float columns) are to hold data from a remote monitoring system I have developed and hope will grow to be widely used. Each individual system can have 12 sensors, each of which could be recording 24 records per day, so it could (hopefully) end up being a very large amount of data. My thinking is to get it right now rather than have to do it again when I find out my database is taking up a lot of hard drive space on a web server.
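A back-of-envelope calculation with the figures from this post puts the volume in perspective (the ~60 bytes per row is an assumed average, not a measured one):

```python
# Growth estimate for the monitoring data described above.
sensors = 12
rows_per_day = sensors * 24          # 24 readings per sensor per day
rows_per_year = rows_per_day * 365
bytes_per_row = 60                   # assumption: rough average row size
mb_per_year = rows_per_year * bytes_per_row / 1e6
print(rows_per_year, round(mb_per_year, 1))  # 105120 rows, ~6.3 MB/year
```

So even one system running for a decade stays in the tens of megabytes, which is why hand-tuning individual column widths rarely pays off here.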
kalvaro posted 7 years ago
Large time-based datasets are good candidates for partitioning.

jeremysawesome posted 3 months ago

I've noticed that if you want the size of a certain result set, you can "export the grid". The Export dialog will inform you of the size of the result: you can see the size of either the current Selection or the Complete result set.

I've attached a screenshot to this post to illustrate. See the Row Selection section.

1 attachment(s):
  • heidi_size
