Re: maximum number of records in a table
At 6:24 PM -0400 6/11/07, kalin mintchev wrote:

> from http://dev.mysql.com/doc/refman/5.0/en/features.html:
>
>   Handles large databases. We use MySQL Server with databases that
>   contain 50 million records. We also know of users who use MySQL
>   Server with 60,000 tables and about 5,000,000,000 rows.
>
> that's cool but i assume this is distributed over a few machines...

No.

--
Paul DuBois, MySQL Documentation Team
Madison, Wisconsin, USA
MySQL AB, www.mysql.com
RE: maximum number of records in a table
We have half a billion records in one InnoDB table on one server. Still
extremely quick. The only limit is hardware.

John Mancuso
Linux Administrator/MySQL DBA
IT Infrastructure, American Home Mortgage
w: 631-622-6382  c: 516-652-2475

-----Original Message-----
From: Paul DuBois
Sent: Tuesday, June 19, 2007 4:53 PM
Subject: Re: maximum number of records in a table

> At 6:24 PM -0400 6/11/07, kalin mintchev wrote:
>> that's cool but i assume this is distributed over a few machines...
>
> No.
Re: maximum number of records in a table
hi david.. thanks... i've done this many times, yes - through php, perl,
python, or the mysql command-line client. but my question here is not about
doing it, or about insert times; it's more about hosting it and query times.

i currently have a working table for the same purpose with about 1.5
million records in it, and the thing runs smooth on a machine that is 4
years old with 1 gig of ram and a 2.8 ghz processor. the thing is that now
i'm talking about this x 100 times, more or less.

i'm not worried about the insert times - that happens only once, and for a
million entries, depending on what technique is used, it takes no longer
than a few minutes. what i was asking, basically, was for somebody to share
experience with running a server with that amount of records in one table.

currently the table i have has a size of 65 MB, which x 100 is about 6,500
MB, or roughly 6.5 gigs. which means that i have to have about 8 gigs of
ram to successfully use a table like that - either that, or cluster 2
machines with 4 gigs each and split the table. does this sound reasonable?
is my logic flawed somehow? i'll appreciate any comments on this subject.
thanks...

> 99 million isn't that large of a number. If you key the database
> properly, search times should be very modest. I can't speak for insert
> times, though, especially when keys are involved.
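One way to sanity-check that sizing estimate is to read the on-disk
footprint straight from MySQL rather than from the file sizes. A minimal
sketch, assuming a MyISAM table; 'mytable' is a placeholder name:

    -- Data_length is roughly the .MYD file, Index_length the .MYI file
    SELECT table_name,
           data_length  / 1024 / 1024 AS data_mb,
           index_length / 1024 / 1024 AS index_mb
    FROM information_schema.tables
    WHERE table_name = 'mytable';

Multiplying data_mb + index_mb by 100 reproduces the estimate above, though
as later replies point out, only the index blocks compete for dedicated
MySQL buffer memory; the data file can stay on disk.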
Re: maximum number of records in a table
I guess a lot of that depends on what an acceptable query execution time
for you is. Also, what else does the machine do? Are there other databases
or tables that are queried at the same time? Do you have to join other
tables in for your queries? Etc.

Olaf

On 6/12/07 3:24 AM, kalin mintchev wrote:

> i currently have a working table for the same purpose with about 1.5
> million records in it [...] the thing is that now i'm talking about this
> x 100 times, more or less. [...] does this sound reasonable? is my logic
> flawed somehow?
Re: maximum number of records in a table
Olaf is right. It is really more about query execution time and, more
importantly, QUERY OPTIMIZATION. Depending on how you set up your keys,
your table type, and what else your server does, you should be able to run
multiple queries on this table without too much of an issue.

2 BIG suggestions:

1) Whatever query you want to run on this table, run EXPLAIN. Then study
the results and do your optimization and key creation.

2) QUERY_CACHE. This is where you are going to live or die. Since you said
you will be doing a lot of SELECTs and not-so-many INSERTs or UPDATEs, the
QUERY_CACHE is going to help out a lot here.

HTH!

J.R.

From: Olaf Stein
Sent: Tuesday, June 12, 2007 8:13 AM
Subject: Re: maximum number of records in a table

> I guess a lot of that depends on what an acceptable query execution time
> for you is. Also, what else does the machine do, are there other
> databases or tables that are queried at the same time, do you have to
> join other tables in for your queries, etc?
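A minimal sketch of both suggestions above, against a hypothetical table
(names and sizes are illustrative, not from the thread):

    -- 1) check the plan: the 'key' and 'rows' columns should show an index
    --    being used and rows near 1, not a scan of all 99M rows
    EXPLAIN SELECT val FROM lookup_ids WHERE id = 5893786;

    -- 2) confirm the query cache is enabled and give it room to work
    SHOW VARIABLES LIKE 'query_cache%';
    SET GLOBAL query_cache_size = 67108864;  -- 64 MB, an arbitrary example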
Re: maximum number of records in a table
> I guess a lot of that depends on what an acceptable query execution time
> for you is.

well... i don't really know. 30 secs maximum?! i've never worked with such
huge tables. 3 - 5 million records is fine, but i've never worked on a db
with a table with 100 000 000 records.

> Also, what else does the machine do, are there other databases or tables
> that are queried at the same time, do you have to join other tables in
> for your queries, etc?

that would be installed on a separate machine that might run only that
project. so yes, there will be queries to other tables, but only after the
result from the 99 million table returns. there are no join requests with
the 99 m table.

my calculation was mostly based on resources - like ram. like i mentioned
earlier, the .MYD and .MYI files together on the current one that i have -
which has about 1.2 million records - are 90 MB. are the .MYI files kept in
ram, or both .MYD and .MYI? multiplying 90 MB x 100 is what the size of the
.MYI + .MYD will be, right? is that all living in ram?

thanks
Re: maximum number of records in a table
thanks j.r...

> Olaf is right. It is really more about query execution time and, more
> importantly, QUERY OPTIMIZATION. Depending on how you set up your keys,
> your table type, and what else your server does, you should be able to
> run multiple queries on this table without too much of an issue.

are you suggesting that ONE machine can handle that load without problems?

> 1) Whatever query you want to run on this table, run EXPLAIN. Then study
> the results and do your optimization and key creation.

ok...

> 2) QUERY_CACHE. This is where you are going to live or die. Since you
> said you will be doing a lot of SELECTs and not-so-many INSERTs or
> UPDATEs, the QUERY_CACHE is going to help out a lot here.

well.. not sure about this one, since basically the queries will be
different every time - i mean, the query itself is the same, but the
looked-up values are 99 million different ones. i'll be running some tests
and will report... might be helpful to somebody else...

> HTH!

me too.. thanks.
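The doubt is well founded: the MySQL query cache matches on the exact
statement text, so queries that differ only in a literal are cached as
separate entries, and 99 million distinct lookup values would mostly miss.
A sketch for confirming this during the tests (table name hypothetical):

    SELECT val FROM lookup_ids WHERE id = 5893786;  -- cached under this text
    SELECT val FROM lookup_ids WHERE id = 5893787;  -- different text: a miss
    SHOW STATUS LIKE 'Qcache_hits';     -- stays flat if every value is new
    SHOW STATUS LIKE 'Qcache_inserts';  -- climbs once per distinct statement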
Re: maximum number of records in a table
On Tue, Jun 12, 2007 at 11:09:41AM -0400, kalin mintchev wrote:

> well... i don't really know. 30 secs maximum?! i've never worked with
> such huge tables. 3 - 5 million records is fine, but i've never worked on
> a db with a table with 100 000 000 records.

I wouldn't worry about it. I have a table here with nearly a billion rows,
and it fetches based on the primary key in 0.00 sec.

> are the .MYI files kept in ram, or both .MYD and .MYI? multiplying 90 MB
> x 100 is what the size of the .MYI + .MYD will be, right? is that all
> living in ram?

Neither of them is kept in RAM. You want a reasonable amount of memory for
buffers, query caching, etc., but you don't need to hold any of the files
in memory.

My main concern with large tables is disk space, especially if you're ever
going to want to do ALTER TABLE, when you will need enough free space to
hold the existing and new copies of the table on disk at the same time.
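For reference, a sketch of the kind of primary-key fetch described, against
a hypothetical table. A B-tree lookup touches only a handful of index
pages, which is why row count barely matters:

    -- point lookup resolved entirely through the primary key;
    -- index depth grows with log(rows), so a billion-row table is only a
    -- page or two deeper than a million-row one
    SELECT val FROM lookup_ids WHERE id = 5893786;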
Re: maximum number of records in a table
You should be fine. 100 000 000 is not that much. Just make sure you set
the right keys on the table (and then query by them) and, even more
importantly, set caches and buffers to utilize your RAM appropriately.

Olaf

On 6/12/07 11:09 AM, kalin mintchev wrote:

> my calculation was mostly based on resources - like ram. [...] are the
> .MYI files kept in ram, or both .MYD and .MYI? [...] is that all living
> in ram?
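A sketch of the cache/buffer side of that advice (the values are
illustrative; pick sizes to fit the 4 GB box):

    -- MyISAM caches only index (.MYI) blocks in the key buffer;
    -- data (.MYD) blocks rely on the OS filesystem cache
    SHOW VARIABLES LIKE 'key_buffer_size';
    SET GLOBAL key_buffer_size = 1073741824;  -- e.g. 1 GB

    -- for InnoDB the equivalent knob is innodb_buffer_pool_size, which is
    -- not dynamic in MySQL 5.0: set it in my.cnf and restart
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';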
Re: maximum number of records in a table
If you are talking about a dual-core computer with 4 GB of RAM and at least
6 GB of swap space, you should have no problems running it on one computer.
MySQL is really robust and can handle quite a load, especially on Linux. If
you are running Windows, just remember to remove all the processes that you
don't need, to give MySQL the most memory and availability.

Depending on the size of your queries and their results, you may be able to
keep most of them in the query cache (QC). Since you said there are only
2-3 columns in the table, and if those columns hold only integers (for
example), you could keep quite a bit of information in the QC. If they are
all LONGTEXT, however, you will run into a lot of disk swapping.

To quote Jay Pipes at the MySQL Users Conference, the default answer to
your IT question is IT DEPENDS. I know for a fact that I was using
commodity hardware (Intel Celeron D 2.0 GHz, 1 GB RAM) and was able to run
heavy statistical analyses (MIN, MAX, STDDEV, AVG, etc.) on 130M records
with 50+ columns in 1 table, and that was returning the data in 80 seconds.
Not bad for that old of a computer. Nowadays it's a Quad Xeon 2.5 GHz with
4 GB RAM, and the same query on 400M+ records returns in less than 15
seconds. It's all about query optimization and the caches - the query
cache, plus the key buffer for MyISAM or the buffer pool if you use InnoDB.

J.R.

From: kalin mintchev
Sent: Tuesday, June 12, 2007 11:20 AM
Subject: Re: maximum number of records in a table

> are you suggesting that ONE machine can handle that load without
> problems? [...] basically the queries will be different every time - i
> mean, the query itself is the same, but the looked-up values are 99
> million different ones.
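For context, a sketch of the kind of statistical pass described above
(table and column names are hypothetical; MySQL spells the standard
deviation function STDDEV). Unlike the indexed point lookups discussed
elsewhere in the thread, this scans the whole column, so buffer sizes and
disk speed dominate the run time:

    SELECT MIN(val), MAX(val), AVG(val), STDDEV(val)
    FROM lookup_ids;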
Re: maximum number of records in a table
> You should be fine. 100 000 000 is not that much. Just make sure you set
> the right keys on the table (and then query by them) and, even more
> importantly, set caches and buffers to utilize your RAM appropriately.

thanks.
maximum number of records in a table
hi all...

from http://dev.mysql.com/doc/refman/5.0/en/features.html:

  Handles large databases. We use MySQL Server with databases that contain
  50 million records. We also know of users who use MySQL Server with
  60,000 tables and about 5,000,000,000 rows.

that's cool but i assume this is distributed over a few machines...

we have a new client that needs a table with 99 000 000 rows and 2-3
columns. i was just wondering: if i have two dual-core processors in a
machine with 4 gigs of ram, is that enough to host and serve queries from a
table of this size? a few tables on the same machine? more than one
machine? what are the query times like?

can somebody please share some/any experience s/he has/had with managing
databases/tables with that amount of records? i'd really appreciate it...

thanks a lot
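For concreteness, a minimal sketch of the kind of table being asked about;
the names, types, and storage engine are assumptions, not from the thread:

    CREATE TABLE lookup_ids (
        id  BIGINT UNSIGNED NOT NULL,  -- the unique id each query checks
        val INT UNSIGNED    NOT NULL,  -- small payload column
        PRIMARY KEY (id)               -- the key every lookup will use
    ) ENGINE=MyISAM;

With two integer columns, 99 million rows come to a few GB of data plus a
comparable index, which frames the RAM discussion elsewhere in the thread.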
Re: maximum number of records in a table
> The answer depends upon the actual queries and/or how much data is being
> returned.

there is ALWAYS only one record found/returned per query. it's like looking
up one unique id - like checking if the number 5893786 is there... a unique
id...

> trying to answer only based upon row count is NONSENSICAL, IMO.
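A sketch of that single-id existence check (table and column names
hypothetical); with a unique or primary key on id, it returns one small row
of output regardless of table size:

    -- 1 if the id exists, 0 if not
    SELECT EXISTS(SELECT 1 FROM lookup_ids WHERE id = 5893786)
        AS id_is_present;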
Re: maximum number of records in a table
On 6/11/07, kalin mintchev wrote:

> we have a new client that needs a table with 99 000 000 rows and 2-3
> columns. i was just wondering: if i have two dual-core processors in a
> machine with 4 gigs of ram, is that enough to host and serve queries from
> a table of this size? [...] what are the query times like?

99 million isn't that large of a number. If you key the database properly,
search times should be very modest. I can't speak for insert times, though,
especially when keys are involved.

This kind of thing is easy enough to do in your favorite scripting
language. I would just create a table with a few keys and just
for($i=0; $i<99000000; $i++) it with random numbers.

If you have PHP on your system, here is some PHP code (runnable from the
command line) that you should be able to hack down. It should answer your
immediate questions about which PHP statements to use (if you've never done
this from PHP before):

http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/standalone/dbtestpop.php?rev=1.31&content-type=text/vnd.viewcvs-markup
http://gpl.e3ft.com/vcvsgpl01/viewcvs.cgi/gpl01/webprojs/fboprime/sw/phplib/usrs.inc?rev=1.11&content-type=text/vnd.viewcvs-markup

Near the end of it, especially if the software writes output, you should
get an intuitive feel for how long each INSERT is taking. You can also do
test queries using the barebones MySQL client ... you should see
interactively how long a query takes.

I would ALMOST do this for you, but it is just beyond the threshold of what
I'd do because I'm bored and watching TV. I'm just a little curious myself.
I've never messed with a table above 10,000 rows or so.

Dave
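Dave's links show the PHP version; the same populate-with-random-numbers
loop can also be sketched entirely server-side as a MySQL 5.0 stored
procedure. The procedure and table names are hypothetical; timing a 1M-row
run and extrapolating gives a feel for the full 99M insert:

    DELIMITER //
    CREATE PROCEDURE populate_test(IN n INT)
    BEGIN
        DECLARE i INT DEFAULT 0;
        WHILE i < n DO
            -- sequential unique ids, random payload values
            INSERT INTO lookup_ids (id, val)
            VALUES (i, FLOOR(RAND() * 1000000));
            SET i = i + 1;
        END WHILE;
    END //
    DELIMITER ;

    CALL populate_test(1000000);  -- time this, then extrapolate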