Ronan McGlue writes:
> Hi Olivier,
>
> On 28/11/2018 8:00 pm, Olivier wrote:
>> Hello,
>>
>> Is there a way to estimate the size of a mysqldump such that the estimate
>> is always larger than the real size?
>>
>> So far, I have found:
On 28.11.2018 10:00, Olivier wrote:
> Is there a way to estimate the size of a mysqldump such that the estimate
> is always larger than the real size?
keep in mind that a dump contains tons of SQL statements that do not exist
in that form in the data files
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
Hello,
Is there a way to estimate the size of a mysqldump such that the estimate is
always larger than the real size?
So far, I have found:
mysql -s -u root -e "SELECT SUM(data_length) Data_BB FROM
information_schema.tables WHERE table_s
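For reference, a fuller sketch of the kind of information_schema query being quoted. The column choices are assumptions: adding index_length pads the estimate (indexes are not written to a dump), but note a dump can still exceed data_length alone because of quoting and statement overhead, so treat this only as a heuristic bound.

```sql
-- Sketch only: sum on-disk sizes per schema as a rough upper bound for a dump.
SELECT table_schema,
       SUM(data_length + index_length) AS approx_bytes
FROM information_schema.tables
WHERE table_schema NOT IN ('information_schema', 'mysql')
GROUP BY table_schema;
```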
Not sure about the size of your dump, but have you tried setting the new
value on both the server and the client side? You can increase
max_allowed_packet up to 1G. Let us know after you have tried that, and maybe
others have another solution to share...
--
*Wagner Bianchi, +55.31.8654.9510*
Oracle ACE
Hi,
When we were trying to restore the dump file, we got an error like Got a
packet bigger than max_allowed_packet. We then increased the
max_allowed_packet variable and passed it along with the MySQL restore
command:
mysql --max_allowed_packet=128M -u username -p db_name < /path/file.sql
After increasing
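The two-sided fix being discussed might look like the following sketch; the database name, paths, and the 256M figure are placeholders, not values from the thread.

```shell
# Server side (needs SUPER privilege; resets on restart unless put in my.cnf):
mysql -u root -p -e "SET GLOBAL max_allowed_packet = 256 * 1024 * 1024"
# Client side, for the restore itself (note the leading double dash):
mysql --max_allowed_packet=256M -u username -p db_name < /path/file.sql
```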
- Original Message -
From: Rick James rja...@yahoo-inc.com
Hey Rick,
Thanks for your thoughts.
* Smells like some huge LONGTEXTs were INSERTed, then DELETEd.
Perhaps just a single one of nearly 500M.
I considered that, too; but I can see the on-disk size grow over a period.
* In InnoDB, the LONGTEXT will usually be stored separately, thereby making a
full table scan relatively efficient.
-Original Message-
From: Johan De Meersman [mailto:vegiv...@tuxera.be]
Sent: Friday, February 15, 2013 4:21 AM
To: mysql.
Subject: MyISAM table size vs actual data
it reaches a point of no return, when the queries
get slow enough that a cascade of pending connections happens until we run out
of free handles and the site just stops responding.
Now, in my understanding, the size of the file, while unusual, really shouldn't
have much bearing on the execution time
Thanks for the replies.
After examining the logs carefully, we found several devices sending SNMP
traps to the application, causing it to compose large SQL statements to
MySQL, with statements over a megabyte in size on one line. We disabled those
devices and the problems have gone away.
Thanks.
Kent
- Original Message -
From: Rick James
Sent: 10/17/12 04:50 PM
To: Kent Ho, mysql@lists.mysql.com
Hi,
I have a MySQL replication setup that has been running for over six months,
and recently we had an outage. We fixed it, brought the server back up, and
spotted something peculiar and worrying: the replication logs have been
growing in size, all of a sudden, since Tuesday 9th Oct, based on clues from
monitoring.
Subject: Unexpected gradual replication log size increase.
Johan De Meersman vegiv...@tuxera.be wrote:
- Original Message -
From: Manivannan S. manivanna...@spanservices.com
How to reduce the ibdata1 file size on both Linux and Windows
machines?
This is by design - you cannot reduce it, nor can you remove added
datafiles.
If you want to shrink the ibdata files, you must stop all connections to
the server, take a full backup, stop
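The procedure the reply is starting to describe is usually sketched as below. Paths, credentials, and log-file names are assumptions, and the backup must be verified before anything is deleted.

```shell
# Sketch only - do not run without a verified backup.
mysqldump -u root -p --all-databases --routines > /backup/full.sql
mysqladmin -u root -p shutdown
rm /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1
# Add innodb_file_per_table under [mysqld] in my.cnf, then restart and reload:
mysqld_safe &
mysql -u root -p < /backup/full.sql
```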
Thanks for the reply, but in my case the datafile is growing 1 GB per day
with only 1 DB (apart from mysql / information_schema / test), and the size
of the DB is just 600MB, where records get updated / deleted / added and on
average it stays around 600MB. The datafile has now grown to 30GB over the
past 30 days; do you have any idea how to reduce this?
Also, I am just wondering what the datafile actually contains, and why can't
As multiple people answered: yes, it matters!
There is no way to reduce the size of a single tablespace.
With file-per-table you can shrink the files with
OPTIMIZE TABLE tblname, which is in fact an ALTER TABLE
without real changes.
On 22.05.2012 11:28, Kishore Vaishnav wrote:
Right now one tablespace
Hi Reindl Harald,
Does this mean that if we have a single tablespace with file-per-table,
doing the optimization will reduce the size of the datafile? If yes, then
why is this not possible on the datafile (one single file) too?
thanks & regards,
Kishore Kumar
- Original Message -
From: Pothanaboyina Trimurthy skd.trimur...@gmail.com
hi sir,
Please keep the list in CC, others may benefit from your questions, too.
can we see any performance related improvements if we use
innodb_file_per_table other than using a single
- Original Message -
From: Reindl Harald h.rei...@thelounge.net
as multiple people said, the default of a single tablespace
is idiotic in my opinion; however, this has been well
known for years
I suppose there's a certain logic to favouring one-shot allocation and never
giving up free
Yes, there are some new features you can use to improve performance.
If you are using MySQL 5.5 and above, with file-per-table, you can enable
the Barracuda file format, which in turn provides data compression
and the dynamic row format, which will reduce IO.
For more benefits, read the docs.
On Tue, May 22,
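On a 5.5 server with file-per-table enabled, the two features mentioned might be applied as in the sketch below. The table name and block size are placeholders; on 5.5 the compressed form additionally requires innodb_file_format=Barracuda to be set globally.

```sql
-- Compressed row format (implies Barracuda):
ALTER TABLE my_table ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
-- Or just the dynamic row format, without compression:
ALTER TABLE my_table ROW_FORMAT=DYNAMIC;
```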
In regards to why the file grows large, you may wish to read some of
the posts on the MySQL Performance Blog, which has quite a bit of
information on this, such as
http://www.mysqlperformanceblog.com/2010/06/10/reasons-for-run-away-main-innodb-tablespace/
Yes, Barracuda is limited to FILE_PER_TABLE.
Yes, true, there is a CPU cost, but very little.
To gain some you have to lose some.
On Tue, May 22, 2012 at 5:07 PM, Johan De Meersman vegiv...@tuxera.be wrote:
- Original Message -
From: Reindl Harald h.rei...@thelounge.net
Subject: Re: Reducing ibdata1 file size
well, but at what price?
the problem is the DEFAULT
users with enough knowledge could easily change the default;
currently what is happening is that mostly every beginner
- Original Message -
From: Ananda Kumar anan...@gmail.com
yes, Barracuda is limited to FILE_PER_TABLE.
Ah, I didn't realise that. Thanks :-)
Yes, true, there is a CPU cost, but very little.
To gain some you have to lose some.
I've only got it enabled on a single environment, but
[OK] Key buffer size / total MyISAM indexes: 128.0M/76.4M
[OK] Key buffer hit rate: 98.6% (40M cached / 559K reads)
- Original Message -
From: Reindl Harald h.rei...@thelounge.net
95% of mysqld-installations have no problem with
innodb_file_per_table so DEFAULTS should not be for 5%
There is no problem, and there is better practice - and if your system is
I/O bound it makes sense to minimize
- Original Message -
From: Reindl Harald h.rei...@thelounge.net
interesting because i have here a dbmail-server with no CPU load and
innodb with compression enabled since 2009 (innodb plugin in the past)
Ah, this is a mixed-use server that also receives data from several Cacti
Is your system READ intensive or WRITE intensive?
If you have enabled compression for WRITE-intensive data, then the CPU cost
will be higher.
On Tue, May 22, 2012 at 5:41 PM, Johan De Meersman vegiv...@tuxera.be wrote:
- Original Message -
From: Reindl Harald h.rei...@thelounge.net
or it could be that your buffer size is too small, as MySQL is spending a
lot of CPU time compressing and uncompressing
On Tue, May 22, 2012 at 5:45 PM, Ananda Kumar anan...@gmail.com wrote:
From: Claudio Nanni claudio.na...@gmail.com
No, as already explained, it is not possible, Innodb datafiles *never* shrink.
That's been the common wisdom for a long time.
However, this just popped up on my RSS reader. I haven't even looked at it, let
alone tried it.
I'm interested in what
- Original Message -
From: Jan Steinman j...@bytesmiths.com
That's been the common wisdom for a long time.
However, this just popped up on my RSS reader. I haven't even looked
at it, let alone tried it.
In brief: convert all your tables to myisam, delete ibdatafile during a
Jan,
that's not common wisdom: InnoDB datafiles ***never*** shrink;
what's in the blog from the 22nd of May is a workaround, one of many.
If you ask me, my favourite is to use a standby instance and work on that.
Claudio
2012/5/22 Jan Steinman j...@bytesmiths.com
From: Claudio Nanni
Despite the conventional wisdom, converting to innodb_file_per_table will not
necessarily help you. It depends on your situation. If most of your growth is
in a single table, you will only have transferred the problem from the ibdata1
file to a new file. The ibdata1 file may also continue to
Okay, my mistake. I should write precisely when communicating with precise
people. :-)
What I meant was, dumping and importing is the common knowledge way of
virtually shrinking innodb files.
So, now that I've conceded the meta-argument, what do you think of the linked
procedure for reducing
Sent: Monday, May 21, 2012 6:04 AM
To: mysql@lists.mysql.com
Subject: Reducing ibdata1 file size
Hi ,
I am trying to reduce the ibdata1 data file in MySQL.
In the MySQL data directory, the ibdata1 data file keeps increasing
whenever I create a new database
the server but the data
still exists in the ibdata1 data file.
How can I reduce the ibdata1 file size on both Linux and Windows machines?
Do you have any idea how to solve this problem? Thanks for any feedback.
Thanks
Manivannan S
417672 fcm.0812.sql.gz
The files on the two hosts have the same md5sum, so why do they show
different sizes in 'du -k'?
Thanks.
du reports how much space the file takes on the disk. This number depends on
the block size of each file system.
On Aug 11, 2011 9:13 PM, Feng He short...@gmail.com wrote:
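The point above can be demonstrated without MySQL at all; the file name below is an arbitrary choice:

```shell
# A 5-byte file still occupies a full filesystem block, so du -k reports the
# allocated size while wc -c reports the byte length.
printf 'hello' > /tmp/du_demo.txt
wc -c < /tmp/du_demo.txt   # byte length: 5
du -k /tmp/du_demo.txt     # allocated kilobytes: filesystem-dependent
```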
Hello DBAs,
Though this is not exactly a MySQL problem, I think this list may
be helpful for my question.
I have dumped
Dear all,
I am researching commands with which I can monitor the size of specific
tables in different databases. I want to write a script that fetches the
sizes of the tables in the different databases daily and writes them to a
file.
Is there any way or command to achieve
Hello,
We are having issues with one of our servers sometimes hanging up, and when
attempting to shut down the DB we get cannot create thread errors.
This server has 6GB of RAM and no swap. According to some research I was
doing, I found this formula for calculating memory size:
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections =
(in your case) 384M + (64M + 2M)*1000 = 66384M
That comes directly from this old post:
http://bugs.mysql.com/bug.php?id=5656
In our case, the result is just below 6GB, and then accounting
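The arithmetic in the quoted formula can be sanity-checked in the shell; the values (in MB) are the ones from the post, not recommendations:

```shell
# Worst-case memory: key buffer plus per-connection buffers times connections
key_buffer=384; read_buffer=64; sort_buffer=2; max_connections=1000
echo $(( key_buffer + (read_buffer + sort_buffer) * max_connections ))  # 66384
```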
Hello everyone,
I actually have a database (MyISAM) which is growing very quickly (1.3
GB/hour). I would like to limit the size of the database, with a log
rotation after the size limit is reached. Do you know a way to do it?
I thought of maybe a script which would delete the oldest entries when it
reaches
Well, it wouldn't exactly limit the size of your tables, but you may want to
look into creating a partitioned table to store your data. You could define
your partition ranges to store a single day's worth of data or whatever
granularity works best for you. Then, when you need to remove older
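A sketch of that approach, with hypothetical table and partition names: one partition per day, and "rotation" done by dropping the oldest partition, which frees its space immediately.

```sql
CREATE TABLE log_entries (
  id        BIGINT NOT NULL AUTO_INCREMENT,
  logged_at DATETIME NOT NULL,
  message   TEXT,
  PRIMARY KEY (id, logged_at)     -- partition key must be in every unique key
) ENGINE=MyISAM
PARTITION BY RANGE (TO_DAYS(logged_at)) (
  PARTITION p20100624 VALUES LESS THAN (TO_DAYS('2010-06-25')),
  PARTITION p20100625 VALUES LESS THAN (TO_DAYS('2010-06-26')),
  PARTITION pmax      VALUES LESS THAN MAXVALUE
);

-- The rotation step: drop the oldest day's data in one cheap operation.
ALTER TABLE log_entries DROP PARTITION p20100624;
```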
On Fri, Jun 25, 2010 at 7:11 AM, Prabhat Kumar aim.prab...@gmail.com wrote:
In the case of MyISAM, it will grow up to the space on your data drive or
the max file size limited by the OS.
Not entirely correct. There is some kind of limit to a MyISAM file that has
to do with pointer size - I've encountered
I think you're confusing table size with database size. The original post
grouped by schema, so it appears the question concerns database size. I
don't believe MySQL imposes any limits on that. Is there a limit on the
number of tables you can have in a schema imposed by MySQL?
On Fri, Jun 25
I feel like I am missing something, because I am not able to find the
answer to this simple question.
How can I increase the size of a database?
I am using the following query to check the available space and notice
that it is time to increase.
SELECT
table_schema AS 'Db Name',
Round( Sum
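The truncated query presumably continues along the lines below; the rounding and units are assumptions:

```sql
SELECT table_schema AS 'Db Name',
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS 'Size (MB)'
FROM information_schema.tables
GROUP BY table_schema;
```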
What is the InnoDB file size that you have specified in my.cnf?
If the last file is autoextend, then it will grow to the size of the disk
space available.
regards
anandkl
On Thu, Jun 24, 2010 at 7:43 PM, Sarkis Karayan skara...@gmail.com wrote:
I feel like I am missing something, because I am
What do you mean time to increase? What tells you that?
A database's size is determined by the amount of available diskspace. If
you need more than the filesystem that it is currently on has, then you can
either move the entire schema (which is synonymous to database) to another
filesystem
There are 2 ways to check database size:
A. OS level: you can do #du -hs on the data dir; it will show the current
usage of your database at the file-system level.
B. You can also check at the database level; check the details here:
http://adminlinux.blogspot.com/2009/12/mysql-tips-calculate-database
Are there any publicly available data on how the size of some (or better
yet, many) particular real database(s) changed over time (for a longish
period of time)? How about data on how the throughput (in any interesting
terms) varied over time?
Thanks,
Mike Spreitzer
search for buffer pool size on mysqlperformanceblog.com, you
will get good advice. You should also get a copy of High Performance
MySQL, Second Edition. (I'm the lead author.) In short: ignore
advice about ratios, and ignore advice about the size of your data.
Configure the buffer pool to use
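In my.cnf terms, that advice boils down to setting the pool size directly from observation rather than from a ratio; the 4G figure below is purely illustrative:

```ini
[mysqld]
# Illustrative value only: size from the observed working set and free RAM
innodb_buffer_pool_size = 4G
```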
In infinite wisdom Machiel Richards machi...@rdc.co.za wrote:
The current Innodb buffer pool size is at 4Gb for instance, and the
innodb tables then grow to be about 8Gb in size.
InnoDB manages the pool as a list, using a least recently used (LRU) algorithm
incorporating a midpoint
Hi Guys
I just have a quick question.
I have done some research into how to determine the size of your Innodb
buffer pool.
All of the sources I used, specified that the Innodb buffer pool size
should be the same size as your database + 10%.
However, as far as I
Hi,
The first thing that comes to my mind is that it is probably the best time
to put your application server and database server on different hosts.
Having said that, in this case increasing the buffer pool size is still
advisable, as per my understanding. Your swap consumption will go up in that
case, which is not very good either. But giving only 4 GB to InnoDB is even
worse for performance. It is subjective, though. You should first check
On Sun, Apr 18, 2010 at 9:04 PM, Eric Bergen eric.ber...@gmail.com wrote:
Usually I prefer to have linux kill processes rather than excessively
swapping. I've worked on machines before that have swapped so badly
I guess you never had the OOM killer randomly shooting down your SSH daemon
on a
Google oom_adj and oom_score. You can control which process is most
likely to be killed.
On Mon, Apr 19, 2010 at 12:53 AM, Johan De Meersman vegiv...@tuxera.be wrote:
Linux will normally swap out a few pages of rarely used memory so it's
a good idea to have some swap around. 2G seems excessive though.
Usually I prefer to have linux kill processes rather than excessively
swapping. I've worked on machines before that have swapped so badly
that it took minutes
The impact of swap activity on performance is dependent on the rate at
which things are being swapped and the speed of swapping. A few pages
per second probably won't kill things but in this case it was swapping
hundreds of pages per second which killed performance. Disks are much
slower than
Correct, but when something *does* go amiss, some swap may give you the time
you need to fix things before you really go down :-)
So, yeah, a gig or two should be fine. There's also no real need for an
actual swap partition, these days - just use a swap file. Performance is
only marginally less
--- On Wed, 14/4/10, Dan Nelson dnel...@allantgroup.com wrote:
Hammerman said:
My organization has a dedicated MySQL server. The
system has 32Gb of
memory, and is running CentOS 5.3. The default
engine will be InnoDB.
Does anyone know how much space should be dedicated to
swap?
I
Hello all,
My organization has a dedicated MySQL server. The system has
32Gb of memory, and is running CentOS 5.3. The default engine will be InnoDB.
Does anyone know how much space should be dedicated to swap?
Thanks!
In the last episode (Apr 13), Joe Hammerman said:
I say zero swap, or if for some reason you
Yeah. One of the telltale signs of something amiss is excessive swap activity.
You're not going to be happy with the performance when the swap space
is actually in use heavily.
Kyong
On Tue, Apr 13, 2010 at 8:15 PM, Dan Nelson dnel...@allantgroup.com wrote:
In the last episode (Apr 13), Joe
I typed
bzr branch lp:mysql-server
and so far 986582KB have downloaded.
How large is the repo that this command downloads?
Machiel,
That is how it is supposed to work.
You assign a certain amount of memory (RAM) to it and the database engine
then manages it. It is highly desirable that this buffer is fully used, and
if the growth curve is slow, it is because it is not undersized. If you
really need more RAM for other
Thank you very much.
This now explains a lot.
From: Claudio Nanni [mailto:claudio.na...@gmail.com]
Sent: 18 December 2009 10:05 AM
To: machiel.richards
Cc: mysql@lists.mysql.com
Subject: Re: RE: Innodb buffer pool size filling up
Machiel,
That is how it is supposed
-Original Message-
From: machiel.richards [mailto:machiel.richa...@gmail.com]
Sent: Friday, December 18, 2009 12:33 AM
To: mysql@lists.mysql.com
Subject: RE: Innodb buffer pool size filling up
Good Morning all
QUOTE: We have a MySQL database where
Regards
Machiel
-Original Message-
From: Jerry Schwartz [mailto:jschwa...@the-infoshop.com]
Sent: 01 December 2009 10:04 PM
To: 'machiel.richards'; 'Claudio Nanni'
Cc: mysql@lists.mysql.com
Subject: RE: Innodb buffer pool size filling up
-Original Message-
From
To: mysql@lists.mysql.com
Subject: RE: Innodb buffer pool size filling up
Machiel:
We have a MySQL database where the
INNODB_BUFFER_POOL_SIZE
keeps on filling up.
Are you getting any errors or just noticing the buffer
pool is full?
I saw some error messages about
The size was at 2Gb and was recently changed to 3Gb during the last week of
November (around the 23rd/24th), and as of this morning it was already
sitting at 2.3Gb used.
The total database size is about 750Mb.
Regards
Machiel
From: Claudio Nanni [mailto:claudio.na
The InnoDB buffer pool usually follows a growth curve over time that
resembles a horizontal asymptote (
http://www.maecla.it/bibliotecaMatematica/go_file/MONE_BESA/grafico.gif).
This is so it can leverage all of its size!
So it should not be a problem!
Cheers
Claudio
2009/12/1 machiel.richards machiel.richa
-Original Message-
From: machiel.richards [mailto:machiel.richa...@gmail.com]
Sent: Tuesday, December 01, 2009 6:17 AM
To: 'Claudio Nanni'
Cc: mysql@lists.mysql.com
Subject: RE: Innodb buffer pool size filling up
The size was at 2Gb and was recently changed to 3Gb in size during the last
Machiel:
We have a MySQL database where the
INNODB_BUFFER_POOL_SIZE
keeps on filling up.
Are you getting any errors or just noticing the buffer
pool is full?
I saw some error messages about the buffer pool size
becoming a problem if the fsync is slow. Do you see
any more
memory I need?
Consider a simple case, a MyISAM table is 10GB in size, with 2GB
index, how much memory I need?
Thanks.
It's not the size of the table, it's the size of the index that you
need to watch. MyISAM keeps the table and index separate, so the
memory requirements can be considerably
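A sketch of how to get the figure that matters here, the total MyISAM index size, as a starting point for sizing the key buffer (rounding is an assumption):

```sql
-- Total MyISAM index megabytes across all schemas
SELECT ROUND(SUM(index_length) / 1024 / 1024, 1) AS myisam_index_mb
FROM information_schema.tables
WHERE engine = 'MyISAM';
```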
need?
Consider a simple case, a MyISAM table is 10GB in size, with 2GB
index, how much memory I need?
If by table scan you mean a full table scan with no index usage, your RAM
is irrelevant unless you have at least 10GB (enough to cache the entire
table). Anything less than that and you will have
You have stumbled across the secret. No, there is no difference at
all as the calculations suggested here confirm.
http://dev.mysql.com/doc/refman/5.1/en/storage-requirements.html
Note: as you can see in the above, CHAR data DOES take up room for its
full size, stupidly enough.
On Tue, Nov
/storage-requirements.html
Note: as you can see in the above, CHAR data DOES take up room for it's
full size, stupidly enough.
On Tue, Nov 10, 2009 at 6:37 PM, Waynn Lue waynn...@gmail.com wrote:
Hey all,
I was building a table for storing email addresses today and ran into an
issue that I
, CHAR data DOES take up room for it's
full size, stupidly enough.
On Tue, Nov 10, 2009 at 6:37 PM, Waynn Lue waynn...@gmail.com wrote:
Hey all,
I was building a table for storing email addresses today and ran into an
issue that I couldn't find an answer for using Google. If I declare