Olaf is right.

It is really more about query execution time, and more importantly, QUERY 
OPTIMIZATION.

Depending on how you set up your keys, your table type, and what else your 
server does, you should be able to run multiple queries on this table without 
too much of an issue.

2 BIG suggestions -- 

1) Whatever query you want to run on this table, run EXPLAIN on it first. Then 
study the results and do your optimization and key creation (rough sketch 
below).

2) QUERY_CACHE. This is where you are going to live or die. Since you said you 
will be doing a lot of SELECTs and not-so-many INSERTs or UPDATEs, the 
QUERY_CACHE is going to help out a lot here (see the second sketch below).
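
For 1), here's a rough sketch of the kind of thing I mean. The table and 
column names are made up -- adjust for your actual schema:

  -- hypothetical 99M-row lookup table, 2-3 columns
  CREATE TABLE lookup (
      id   INT UNSIGNED NOT NULL,
      code CHAR(8) NOT NULL,
      val  INT NOT NULL,
      PRIMARY KEY (id),
      KEY idx_code (code)
  ) ENGINE=MyISAM;

  -- run EXPLAIN on the exact SELECT you plan to serve
  EXPLAIN SELECT val FROM lookup WHERE code = 'ABC123XY';

In the EXPLAIN output, watch the "type" and "key" columns: "const" or "ref" 
via idx_code is what you want; "ALL" means a full scan of all 99 million 
rows, so keep adjusting your keys until that goes away.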
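
For 2), a quick way to check and size the cache. The 64 MB figure is just an 
illustration -- tune it for your box:

  -- is the cache on, and is it getting hits?
  SHOW VARIABLES LIKE 'query_cache%';
  SHOW STATUS LIKE 'Qcache%';

  -- give it some room (put the equivalent in my.cnf to make it stick)
  SET GLOBAL query_cache_size = 67108864;  -- 64 MB
  SET GLOBAL query_cache_type = 1;         -- cache all cacheable SELECTs

Keep in mind the cache throws out every stored result for a table as soon as 
that table sees an INSERT or UPDATE, which is exactly why your mostly-SELECT 
workload is the ideal case for it.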

HTH!
J.R.

----------------------------------------

From: Olaf Stein <[EMAIL PROTECTED]>
Sent: Tuesday, June 12, 2007 8:13 AM
To: [EMAIL PROTECTED], "David T. Ashley" <[EMAIL PROTECTED]>
Subject: Re: maximum number of records in a table

I guess a lot of that depends on what an acceptable query execution time
is for you.
Also, what else does the machine do, are there other databases or tables
that are queried at the same time, do you have to join other tables in for
your queries, etc?

Olaf

On 6/12/07 3:24 AM, "kalin mintchev"  wrote:

> 
> hi david..  thanks...
> 
> i've done this many times and yes, either through php, perl, python or on
> the mysql cl client. but my question here is not about doing it and insert
> times, it's more about hosting it and query times. i currently have a
> working table for the same purpose with about 1.5 million records in it,
> and the thing runs smooth on a machine that is 4 years old with 1 gig of
> ram and a 2.8 ghz processor. the thing is that now i'm talking about
> this x 100, more or less. i'm not worried about the insert times -
> this happens only once and, for a million entries, depending on what
> technique is used, it takes no longer than a few minutes.
> what i was asking basically was somebody to share experience with running
> a server with that amount of records in one table.
> 
> currently the table i have has a size of 65 mb, which x 100 is about 6500
> mb, or 6.5 gigs. which means that i have to have about 8 gigs of ram to
> successfully use a table like that. either that or cluster 2 machines with
> 4 gigs each and split the table. does this sound reasonable? is my logic
> flawed somehow?
> 
> i'll appreciate any comments on this subject ....   thanks...
> 
> 
> 
>> On 6/11/07, kalin mintchev  wrote:
>>> 
>>> hi all...
>>> 
>>> from http://dev.mysql.com/doc/refman/5.0/en/features.html:
>>> 
>>> "Handles large databases. We use MySQL Server with databases that
>>> contain
>>> 50 million records. We also know of users who use MySQL Server with
>>> 60,000
>>> tables and about 5,000,000,000 rows."
>>> 
>>> that's cool but i assume this is distributed over a few machines...
>>> 
>>> we have a new client that needs a table with 99 000 000 rows and 2-3
>>> columns.
>>> i was just wondering - if i have two dual core processors in a machine
>>> with 4 gigs of ram, is that enough to host and serve queries from a
>>> table of this size?
>>> a few tables on the same machine?
>>> more than one machine?
>>> what are the query times like?
>>> 
>>> can somebody please share some/any experience s/he has/had with managing
>>> databases/tables with that amount of records. i'd really appreciate it...
>> 
>> 
>> 99 million isn't that large of a number.
>> 
>> If you key the database properly, search times should be very modest.  I
>> can't speak for insert times, though, especially when keys are involved.
>> 
>> This kind of thing is easy enough to do in your favorite scripting
>> language.  I would just create a table with a few keys and for($i=0;
>> $i<99000000; $i++) it with random numbers.
>> 
>> If you have PHP on your system, here is some PHP code (runnable from the
>> command line) that you should be able to hack down.  It should answer your
>> immediate questions about which PHP statements to use (if you've never
>> done this from PHP before):
>> 
>> http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/standalone/dbtestpop.php?rev=1.31&content-type=text/vnd.viewcvs-markup
>> 
>> http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/phplib/usrs.inc?rev=1.11&content-type=text/vnd.viewcvs-markup
>> 
>> Near the end of it, especially if the software writes output, you should
>> get an intuitive feel for how long each INSERT is taking.
>> 
>> You can even do test queries using the barebones MySQL client ... you
>> should see interactively how long a query takes.
>> 
>> I would ALMOST do this for you, but it is just beyond the threshold of
>> what I'd do because I'm bored and watching TV.  I'm just a little curious
>> myself.  I've never messed with a table above 10,000 rows or so.
>> 
>> Dave
>> 
> 
> 
