If you are talking about a dual-core computer with 4 GB of RAM 
and at least 6 GB of swap space, you should have no problem running it on one 
computer.

MySQL is really robust and can handle quite a load, especially on Linux. If you 
are running Windows, just remember to shut down any processes you don't need, 
so that MySQL gets as much memory and CPU as possible.

Depending on the size of your queries and of their result sets, you may be able 
to keep most of them in the query cache (QC). Since you said there are only 2-3 
columns in the table, and assuming those columns hold only integers (for 
example), you could keep quite a bit of information in the QC. If they are all 
LONGTEXT, on the other hand, you will run into a lot of disk swapping.
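
A minimal sketch of how you might check and size the query cache on MySQL 5.x 
(the 64 MB figure is only an example, not a recommendation):

  -- See whether the query cache is enabled and how large it is
  SHOW VARIABLES LIKE 'query_cache%';

  -- Give it some room to work with (example figure only)
  SET GLOBAL query_cache_size = 67108864;   -- 64 MB
  SET GLOBAL query_cache_type = 1;          -- cache every cacheable SELECT

  -- Watch the hit rate and low-memory prunes under your real workload
  SHOW STATUS LIKE 'Qcache%';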

To quote Jay Pipes at the MySQL Users Conference, the default answer to your IT 
question is "IT DEPENDS".

I know for a fact that on commodity hardware (an Intel Celeron D 2.0 GHz with 
1 GB of RAM) I was able to run heavy statistical analyses (MIN, MAX, STDDEV, 
AVG, MEAN, etc.) on 130M records with 50+ columns in one table, and it 
returned the data in 80 seconds. Not bad for that old a computer. Nowadays it 
is a Quad Xeon 2.5 GHz with 4 GB of RAM, and the same query on 400M+ records 
returns in under 15 seconds.
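
For reference, the kind of query meant here is a plain aggregate pass over the 
table; the table and column names below are made up for illustration:

  SELECT MIN(val), MAX(val), AVG(val), STDDEV(val), COUNT(*)
  FROM measurements;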

It's all about query optimization and the query cache (plus the key buffer 
size if you use MyISAM, or the buffer pool size if you use InnoDB).
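
As a rough sketch of what that looks like in practice (the table, column names, 
and buffer figures below are hypothetical):

  -- 99M-row lookup table: an integer key plus a couple of small columns
  CREATE TABLE lookups (
      id  INT UNSIGNED NOT NULL,
      val INT NOT NULL,
      PRIMARY KEY (id)
  ) ENGINE=MyISAM;

  -- EXPLAIN should show the PRIMARY key being used (type const/eq_ref);
  -- "type: ALL" would mean a full scan of all 99M rows on every lookup.
  EXPLAIN SELECT val FROM lookups WHERE id = 12345;

  -- MyISAM: size the key buffer so the index fits in memory (example figure).
  -- For InnoDB the equivalent knob is innodb_buffer_pool_size, set in my.cnf.
  SET GLOBAL key_buffer_size = 268435456;   -- 256 MB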

J.R.

----------------------------------------

                                From: "kalin mintchev" <[EMAIL PROTECTED]>
Sent: Tuesday, June 12, 2007 11:20 AM
To: [EMAIL PROTECTED]
Subject: Re: maximum number of records in a table 

thanks j.r...

>
>               Olaf is right.
>
> It is really more about query execution time, and more importantly, QUERY
> OPTIMIZATION.
>
> Depending on how you set up your keys, your table type, and what else your
> server does, you should be able to run multiple queries on this table
> without too much of an issue.

are you suggesting that ONE machine can handle that load without problems?

>
> 2 BIG suggestions --
>
> 1) Whatever query you want to run on this table, run EXPLAIN. Then study
> the results and do your optimization and key creation.

ok...

> 2) QUERY_CACHE. This is where you are going to live or die. Since you said
> you will be doing a lot of SELECTs and not-so-many INSERTs or UPDATEs, the
> QUERY_CACHE is going to help out a lot here.

well, not sure about this since basically the queries will be different
every time - i mean the query itself is the same but the looked-up values
are 99 million different ones.

i'll be running some tests and will report...  might be helpful to
somebody else...

> HTH!

me too..  thanks.....

> J.R.
>
> ----------------------------------------
>
>                               From: Olaf Stein 
> Sent: Tuesday, June 12, 2007 8:13 AM
> To: [EMAIL PROTECTED]>, "David T. Ashley" 
> Subject: Re: maximum number of records in a table
>
> I guess a lot of that depends on what an acceptable query execution time
> is for you.
> Also, what else does the machine do? Are there other databases or tables
> that are queried at the same time, do you have to join other tables in
> for your queries, etc.?
>
> Olaf
>
> On 6/12/07 3:24 AM, "kalin mintchev"  wrote:
>
>>
>> hi david..  thanks...
>>
>> i've done this many times and yes, either through php, perl, python or on
>> the mysql cl client. but my question here is not about doing it and insert
>> times, it's more about hosting it and query times. i currently have a
>> working table for the same purpose with about 1.5 million records in it,
>> and the thing runs smooth on a machine that is 4 years old with 1 gig of
>> ram and a 2.8 ghz processor. the thing is that now i'm talking about
>> this x 100 times, more or less. i'm not worried about the insert times -
>> this happens only once and, for a million entries, depending on what
>> technique is used, it takes no longer than a few minutes.
>> what i was asking basically was somebody to share experience with
>> running
>> a server with that amount of records in one table.
>>
>> currently the table i have has a size of 65 MB, which x 100 is about
>> 6500 MB, or about 6.5 gigs. which means that i have to have about 8 gigs
>> of ram to successfully use a table like that. either that or cluster 2
>> machines with 4 gigs each and split the table. does this sound reasonable?
>> is my logic flawed somehow?
>>
>> i'll appreciate any comments on this subject ....   thanks...
>>
>>
>>
>>> On 6/11/07, kalin mintchev  wrote:
>>>>
>>>> hi all...
>>>>
>>>> from http://dev.mysql.com/doc/refman/5.0/en/features.html:
>>>>
>>>> "Handles large databases. We use MySQL Server with databases that
>>>> contain
>>>> 50 million records. We also know of users who use MySQL Server with
>>>> 60,000
>>>> tables and about 5,000,000,000 rows."
>>>>
>>>> that's cool but i assume this is distributed over a few machines...
>>>>
>>>> we have a new client that needs a table with 99 000 000 rows, 2 -3
>>>> columns.
>>>> i was just wondering if i have a two dual core 2 processors in a
>>>> machine
>>>> with 4 gigs of ram - is that enough to host and serve queries from a
>>>> table
>>>> of this size?
>>>> a few tables on the same machine?
>>>> more than one machine?
>>>> what are the query times like?
>>>>
>>>> can somebody please share some/any experience s/he has/had with
>>>> managing
>>>> databases/tables with that amount of records. i'd really appreciate
>>>> it...
>>>
>>>
>>> 99 million isn't that large of a number.
>>>
>>> If you key the database properly, search times should be very modest.
>>> I
>>> can't speak for insert times, though, especially when keys are
>>> involved.
>>>
>>> This kind of thing is easy enough to do in your favorite scripting
>>> language.  I would just create a table with a few keys and just
>>> for($i=0;
>>> $i<99000000; $i++) it with random numbers.
>>>
>>> If you have PHP on your system, here is some PHP code (runnable from
>>> the
>>> command line) that you should be able to hack down.  It should answer
>>> your
>>> immediate questions about which PHP statements to use (if you've never
>>> done
>>> this from PHP before):
>>>
>>> http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/standalo
>>> ne/dbtestpop.php?rev=1.31&content-type=text/vnd.viewcvs-markup
>>>
>>> http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/phplib/u
>>> srs.inc?rev=1.11&content-type=text/vnd.viewcvs-markup
>>>
>>> Near the end of it, especially if the software writes output, you
>>> should
>>> get
>>> an intuitive feel for how long each INSERT is taking.
>>>
>>> You can even do test queries using the barebones MySQL client ... you
>>> should
>>> see interactively how long a query takes.
>>>
>>> I would ALMOST do this for you, but it is just beyond the threshold of
>>> what
>>> I'd do because I'm bored and watching TV.  I'm just a little curious
>>> myself.  I've never messed with a table above 10,000 rows or so.
>>>
>>> Dave
>>>
>>
>>
>
