> I guess a lot of that depends what an acceptable query execution time for
> you is.

well...  i don't really know. 30 secs maximum?! i've never worked with
tables that huge. 3 - 5 million records is fine, but i've never worked on
a db with a table of 100 000 000 records.


> Also, what else does the machine do, are there other databases or tables
> that are queried at the same time, do you have to join other tables in for
> your queries, etc?

that would be installed on a separate machine that might run only that
project. so yes, there will be queries to other tables, but only after the
query against the 99 million row table returns.
there are no joins against the 99 million row table.

my calculation was mostly based on resources - like ram. like i mentioned
earlier, the .MYD and .MYI files together on the current table that i have -
which has about 1.2 million records - are 90 MB.
are only the .MYI files kept in ram, or both .MYD and .MYI?

multiplying 90 MB x 100 is roughly what the size of the .MYI + .MYD will
be, right? that's around 9 GB. does all of that live in ram?
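
from what i've read, only the .MYI index blocks get cached in mysql's key
buffer (key_buffer_size); the .MYD data file is read through the OS
filesystem cache, so the whole table shouldn't have to fit in ram - is that
right? here's a minimal php sketch of how i'd check the two sizes - the
table name 'bigtable', the db 'test' and the credentials are placeholders:

<?php
// sketch: report the data (.MYD) and index (.MYI) sizes of one
// MyISAM table, to help size key_buffer_size.  names are made up.
$db  = mysqli_connect("localhost", "user", "pass", "test");
$res = mysqli_query($db, "SHOW TABLE STATUS LIKE 'bigtable'");
$row = mysqli_fetch_assoc($res);
printf("data  (.MYD): %.1f MB\n", $row["Data_length"]  / 1048576);
printf("index (.MYI): %.1f MB\n", $row["Index_length"] / 1048576);
// only Index_length competes for the key buffer; .MYD reads go
// through the OS page cache.
?>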

thanks....




> Olaf
>
>
> On 6/12/07 3:24 AM, "kalin mintchev" <[EMAIL PROTECTED]> wrote:
>
>>
>> hi david..  thanks...
>>
>> i've done this many times, yes - either through php, perl, python or the
>> mysql command line client. but my question here is not about doing it and
>> insert times; it's more about hosting it and query times. i currently
>> have a working table for the same purpose with about 1.5 million records
>> in it, and the thing runs smoothly on a machine that is 4 years old with
>> 1 gig of ram and a 2.8 GHz processor. the thing is that now i'm talking
>> about this x 100, more or less. i'm not worried about the insert times -
>> that happens only once, and for a million entries, depending on the
>> technique used, it takes no longer than a few minutes.
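>>
>> by "technique" i mean something like mysql's LOAD DATA INFILE instead of
>> a million single-row inserts - a rough sketch, the file and table names
>> are made up:
>>
>> <?php
>> // sketch: bulk-load a tab-separated file in one statement
>> // instead of a million single-row INSERTs.  names invented.
>> $db = mysqli_connect("localhost", "user", "pass", "test");
>> mysqli_query($db, "LOAD DATA LOCAL INFILE '/tmp/rows.tsv'
>>                    INTO TABLE bigtable (id, val)");
>> ?>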
>> what i was asking, basically, was for somebody to share experience with
>> running a server with that amount of records in one table.
>>
>> currently the table i have has a size of 65 MB, which times 100 is about
>> 6500 MB, or 6.5 gigs. which means i'd have to have about 8 gigs of ram to
>> successfully use a table like that. either that or cluster 2 machines
>> with 4 gigs each and split the table. does this sound reasonable? is my
>> logic flawed somehow?
>>
>> i'll appreciate any comments on this subject ....   thanks...
>>
>>
>>
>>> On 6/11/07, kalin mintchev <[EMAIL PROTECTED]> wrote:
>>>>
>>>> hi all...
>>>>
>>>> from http://dev.mysql.com/doc/refman/5.0/en/features.html:
>>>>
>>>> "Handles large databases. We use MySQL Server with databases that
>>>> contain
>>>> 50 million records. We also know of users who use MySQL Server with
>>>> 60,000
>>>> tables and about 5,000,000,000 rows."
>>>>
>>>> that's cool but i assume this is distributed over a few machines...
>>>>
>>>> we have a new client that needs a table with 99 000 000 rows and 2-3
>>>> columns.
>>>> i was just wondering - if i have two dual-core processors in a machine
>>>> with 4 gigs of ram, is that enough to host and serve queries from a
>>>> table of this size?
>>>> a few tables on the same machine?
>>>> more than one machine?
>>>> what are the query times like?
>>>>
>>>> can somebody please share any experience they have had with managing
>>>> databases/tables with that amount of records? i'd really appreciate
>>>> it...
>>>
>>>
>>> 99 million isn't that large of a number.
>>>
>>> If you key the database properly, search times should be very modest.
>>> I can't speak for insert times, though, especially when keys are
>>> involved.
>>>
>>> This kind of thing is easy enough to do in your favorite scripting
>>> language.  I would just create a table with a few keys and
>>> for ($i = 0; $i < 99000000; $i++) it with random numbers.
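>>>
>>> Something along these lines - just a sketch, the table name, columns and
>>> credentials are invented, and in practice you'd batch the inserts rather
>>> than issue them one at a time:
>>>
>>> <?php
>>> // sketch: create a keyed table and fill it with random rows.
>>> $db = mysqli_connect("localhost", "user", "pass", "test");
>>> mysqli_query($db, "CREATE TABLE bigtable (
>>>     id  INT UNSIGNED NOT NULL PRIMARY KEY,
>>>     val INT NOT NULL,
>>>     KEY (val))");
>>> for ($i = 0; $i < 99000000; $i++) {
>>>     // one INSERT per row: simple, but the slow way
>>>     mysqli_query($db, "INSERT INTO bigtable VALUES ($i, " . mt_rand() . ")");
>>> }
>>> ?>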
>>>
>>> If you have PHP on your system, here is some PHP code (runnable from
>>> the command line) that you should be able to hack down.  It should
>>> answer your immediate questions about which PHP statements to use (if
>>> you've never done this from PHP before):
>>>
>>> http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/standalone/dbtestpop.php?rev=1.31&content-type=text/vnd.viewcvs-markup
>>>
>>> http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/phplib/usrs.inc?rev=1.11&content-type=text/vnd.viewcvs-markup
>>>
>>> Near the end of it, especially if the software writes output, you
>>> should get an intuitive feel for how long each INSERT is taking.
>>>
>>> You can even do test queries using the barebones MySQL client ... you
>>> should see interactively how long a query takes.
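>>>
>>> The client prints the elapsed time after every statement; from PHP you
>>> could wrap the query in microtime() calls. A sketch - the query and the
>>> connection details are invented:
>>>
>>> <?php
>>> // sketch: time one point lookup against the big table.
>>> $db = mysqli_connect("localhost", "user", "pass", "test");
>>> $t0 = microtime(true);
>>> mysqli_query($db, "SELECT val FROM bigtable WHERE id = 12345678");
>>> printf("query took %.4f sec\n", microtime(true) - $t0);
>>> ?>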
>>>
>>> I would ALMOST do this for you, but it is just beyond the threshold of
>>> what I'd do because I'm bored and watching TV.  I'm just a little
>>> curious myself.  I've never messed with a table above 10,000 rows or so.
>>>
>>> Dave
>>>
>>
>>
>
>
>


