On Nov 22, 2009, at 8:54 AM, Ryan Chan wrote:
Hello,
It is commonly heard that if you have a large table (assume
MyISAM in my case), you need a lot of memory in order to keep the
key/index in memory for performance; otherwise a table scan on
disk is slow.
But how to estimate how much memory
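One way to estimate this is from the server's own accounting: `Index_length` in SHOW TABLE STATUS is the on-disk size of a MyISAM table's keys, and the key buffer should be sized to hold the hot indexes. A sketch, assuming MySQL 5.0+ and placeholder names `mydb`/`mytable`:

```sql
-- Index_length reports the on-disk size of all keys for one table
SHOW TABLE STATUS LIKE 'mytable';

-- Or sum index sizes across a whole database via information_schema:
SELECT SUM(INDEX_LENGTH) AS total_index_bytes
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND ENGINE = 'MyISAM';

-- Then size the MyISAM key buffer to cover the hot indexes
-- (256M here is only an example value; requires the SUPER privilege):
SET GLOBAL key_buffer_size = 268435456;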
- Original Message -
From: Baron Schwartz <[EMAIL PROTECTED]>
To: Dan Nelson <[EMAIL PROTECTED]>
Cc: Josh <[EMAIL PROTECTED]>; mysql@lists.mysql.com
Sent: Sunday, October 28, 2007 9:25:11 AM
Subject: Re: Table Size
Dan Nelson wrote:
> In the last episode (Oct 27), Baron Schwartz said:
>> InnoDB has the following extra things, plus some things I might forget:
>>
>> a) the primary key B-Tree
>> b) row versioning information for every row
>> c) 16k page size; each page might not be completely full
>>
>> Those are all counted towards the
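Those overheads show up in the server's own accounting. A sketch (`mydb` and `mytable` are placeholders): for InnoDB, Data_length covers the clustered primary-key B-Tree, including row versioning information and any slack in partially filled 16k pages, while Index_length covers the secondary indexes:

```sql
SELECT TABLE_NAME,
       DATA_LENGTH,   -- clustered PK B-Tree: rows + versioning + page slack
       INDEX_LENGTH,  -- secondary indexes
       DATA_FREE      -- allocated but currently unused tablespace
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable';
```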
- Original Message -
From: Baron Schwartz <[EMAIL PROTECTED]>
To: Josh <[EMAIL PROTECTED]>
Cc: mysql@lists.mysql.com
Sent: Saturday, October 27, 2007 10:17:32 AM
Subject: Re: Table Size
Josh wrote:
> Hello,
>
> I have a database that is growing at a rate of 4-5 MB per day (that number is
> getting larger as well). Not too bad but I'm trying to clean up the tables
> to minimize the amount of space they take up.
(`repID`) REFERENCES `Reports`
(`repID`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
Have you tried optimize table?
On 10/27/07, Josh <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I have a database that is growing at a rate of 4-5 MB per day (that number is
> getting larger as well). Not too bad but I'm trying to clean up the tables
> to minimize the amount of space they take up.
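For reference, the suggestion above looks like this (the table name is a placeholder):

```sql
-- Rebuilds the table and its indexes, reclaiming free space;
-- for InnoDB this is effectively a full table rebuild
OPTIMIZE TABLE mytable;

-- Compare Data_free in SHOW TABLE STATUS before and after
SHOW TABLE STATUS LIKE 'mytable';
```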
Josh wrote:
Hello,
I have a database that is growing at a rate of 4-5 MB per day (that number is
getting larger as well). Not too bad but I'm trying to clean up the tables to
minimize the amount of space they take up.
I have one particular table that has 2 columns:
rolID int(10) unsigned
repID
Ratheesh K J wrote:
Hello all,
Just wanted to know when a table should be considered for partitioning (or
for archiving).
Almost all of our tables are of InnoDB type. I am looking for an estimate rather than a
"depends on the situation" kind of answer.
We have a few of our tables which
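For reference, range partitioning (available from MySQL 5.1) looks like this; the table and column names below are made up for illustration:

```sql
-- Rows are routed to a partition by the year of the created column;
-- old partitions can later be dropped cheaply with ALTER TABLE ... DROP PARTITION
CREATE TABLE orders (
  id INT NOT NULL,
  created DATE NOT NULL
) ENGINE=InnoDB
PARTITION BY RANGE (YEAR(created)) (
  PARTITION p2006 VALUES LESS THAN (2007),
  PARTITION p2007 VALUES LESS THAN (2008),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);
```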
At 2:15 pm -0700 5/4/06, Ravi Kumar wrote:
>What command is used to check table size and database size?
For table size:
http://dev.mysql.com/doc/refman/5.0/en/show-table-status.html
I imagine (though I don't know for sure) that you can get the same info from
the information_schema database that was added in MySQL 5.0.
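Both approaches can be sketched as follows (`mydb` is a placeholder; the information_schema query assumes MySQL 5.0+):

```sql
-- Per-table sizes, one row per table:
SHOW TABLE STATUS FROM mydb;

-- Whole-database sizes from information_schema:
SELECT TABLE_SCHEMA,
       SUM(DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 AS size_mb
FROM information_schema.TABLES
GROUP BY TABLE_SCHEMA;
```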
I have multiple databases running tables with thousands of records in them. Some of my
tables have as many as 130 million records in them. Memberships and patient data can
easily run from thousands to tens of thousands of records. If you are looking into
things like DNA/Genome mapping, you can e
* nm
> Is it possible to MERGE innodb tables?
No, MERGE is for MyISAM tables only. InnoDB tables are stored in
tablespaces, so the problem with file size does not apply. You simply use
multiple tablespaces when the data outgrows the OS limits.
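In that era, "multiple tablespaces" meant adding data files to innodb_data_file_path in my.cnf; a sketch, with example file names and sizes:

```ini
# my.cnf fragment: a second InnoDB data file that autoextends,
# so the tablespace is not capped by a single OS file-size limit
[mysqld]
innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
```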
> Can't find docs on mysql.com
hm... mysql.com seems to
* NEWMEDIAPLAN
> what variable values /mysql tuning you suggest for more than 2000
> potential concurrent users and big tables.
2000 concurrent users is a lot, at least if you mean 2000 concurrent requests
to the database, as opposed to 2000 concurrent users of a web site. It is
hard to give you a
* NEWMEDIAPLAN
> how many records can i put in a mysql table.
As many as you like, pretty much. The total file size could be limited by
your OS, but this can be dealt with using MERGE tables (splitting a single
table across multiple files) or InnoDB tables (with multiple tablespaces).
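A minimal MERGE sketch (table and column names are made up): identical MyISAM tables sit behind one logical table, and inserts go to the last one:

```sql
-- The underlying tables must have identical structure
CREATE TABLE log1 (id INT NOT NULL, msg VARCHAR(255)) ENGINE=MyISAM;
CREATE TABLE log2 (id INT NOT NULL, msg VARCHAR(255)) ENGINE=MyISAM;

-- The MERGE table reads from both; INSERT_METHOD=LAST sends new rows to log2
CREATE TABLE logs (id INT NOT NULL, msg VARCHAR(255))
  ENGINE=MERGE UNION=(log1, log2) INSERT_METHOD=LAST;
```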
> i need a t
On Wednesday 15 August 2001 11:30, Dan Nelson wrote:
In the last episode (Aug 15), Nathanial Hendler said:
> I have a table that holds a lot of information. I tried to INSERT
> something into it, and received...
>
> DBD::mysql::st execute failed: The table 'fancy_big_table' is full at
> ./tom_to_mutt.pl line 156.
>
> The table is 4G in size. Th
Look into MAX_ROWS, e.g.:
ALTER TABLE mytable MAX_ROWS = 1000000000;
ryan
- Original Message -
From: "Nathanial Hendler" <[EMAIL PROTECTED]>
To: "MySQL" <[EMAIL PROTECTED]>
Sent: Wednesday, August 15, 2001 12:19 PM
Subject: Table size limitations...
>
> I have a table that holds a lot o
I'm not an SQL expert, but if FreeBSD supports > 4GB files then you should
check the "Max_data_length" property on the table you're using. You can
see it by running SHOW TABLE STATUS on the table. I believe you can use
ALTER TABLE (or options on CREATE TABLE statements) to change this value.
Hope this helps
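Putting the two suggestions together, a sketch (the AVG_ROW_LENGTH value is an assumption; `fancy_big_table` is the table from the error above):

```sql
-- Check the current cap in the Max_data_length column:
SHOW TABLE STATUS LIKE 'fancy_big_table';

-- Raise MAX_ROWS so MyISAM uses larger row pointers and the table can
-- grow past 4GB; note this rebuilds the table, which takes a while at 4GB
ALTER TABLE fancy_big_table MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 100;
```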