Re: Can MySQL handle 120 million records?

2002-12-22 Thread Jeremy Zawodny
On Fri, Dec 20, 2002 at 09:01:21PM -0800, JamesD wrote:
 Jeremy,
 
 mySql has no brand recognition compared to Oracle, Sybase, MS, IBM.

Well, it doesn't have as much, that's for sure.  But to say it has
none is an over-simplification.

 that is why there is not enough confidence.
 people buy things because they trust and have confidence in the brand.

It's a sad day when confidence is built by a company's PR budget
rather than the product's track record.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 3.23.51: up 7 days, processed 265,533,509 queries (430/sec. avg)





Re: Can MySQL handle 120 million records?

2002-12-22 Thread Michael T. Babcock
Michael She wrote:


Gravity is a great analogy.  It works with databases too.  People 
are confident in gravity because it is an observable fact of our 
planet.  For millennia people have experienced gravity and have grown 
accustomed to it.  The same can be said of DB2 and Oracle.  People 
have been using them for years, hence the comfort level with these 
products.


People have been using databases for years.  Some of them were produced 
by Oracle Corp or IBM.  However, many of the people entering or currently 
in the database market have _not_ directly used Oracle or IBM DB2. 
They have used products built on those engines and have a certain level 
of faith in those engines, but they also have to consider the DBA's 
involvement, any support contracts with Oracle or IBM that kept the 
software running as it was, and Oracle's and IBM's tendencies to recommend 
specific (very high-end and fault-tolerant) hardware.

MySQL is another iteration of the database engine by another group of 
people.  This group of people may or may not be 'new' to database 
design, just as the people currently working on Oracle 10 (or X?) may be 
freshmen in college (for all I know, but I highly doubt it).  People 
have faith in Oracle or IBM because they have chosen to have that faith, 
often on the basis of their high marketing profiles, not on the basis of 
hard facts or evidence.

I'm not saying that Oracle and IBM don't make good DB products.  They 
certainly make some of the best software in the world, but don't have 
faith in any software product just because you've heard its name a lot. 
OpenBSD and Linux were helping run the majority of the Internet long 
before most people had heard either name.

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






Re: Can MySQL handle 120 million records?

2002-12-22 Thread Michael T. Babcock
Jeremy Zawodny wrote:


It's a sad day when confidence is built by a company's PR budget
rather than the product's track record.
 


You mean like Microsoft?

Oh, sorry to bring that up ... :-)

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd. ... sql ... for this one :)
http://www.fibrespeed.net/~mbabcock







Re: Can MySQL handle 120 million records?

2002-12-22 Thread Jeremy Zawodny
On Sun, Dec 22, 2002 at 10:43:49PM -0500, Michael T. Babcock wrote:
 Jeremy Zawodny wrote:
 
 It's a sad day when confidence is built by a company's PR budget
 rather than the product's track record.
   
 
 
 You mean like Microsoft?

Yeah, they are one of the worst offenders. :-)

 Oh, sorry to bring that up ... :-)

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 3.23.51: up 7 days, processed 294,584,169 queries (427/sec. avg)





RE: Can MySQL handle 120 million records?

2002-12-21 Thread JamesD
Jeremy,

if i run the command

show tables; on some database
it returns

5 rows in set (0.02 sec)

and if, on the same database i run
the command

select count(*) from 'table';
it returns

+----------+
| count(*) |
+----------+
| 47226322 |
+----------+
1 row in set (0.00 sec)

now, I believe it's reasonable to assume
a CPU must spend more time to count 47 million records
than to count 5 tables.  This is why I believe
that many common selects are built in to MySQL in the code
somewhere.  Make sense?  It's a good idea actually: building
in commonly run selects when the process is idling and stuffing the
results into variables that can be returned in an instant saves
us all time and money...

Jim


-Original Message-
From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 20, 2002 3:28 PM
To: JamesD
Cc: [EMAIL PROTECTED]
Subject: Re: Can MySQL handle 120 million records?


On Wed, Dec 18, 2002 at 08:05:46PM -0800, JamesD wrote:

 i like mySQL, but it has a long way to go to gain the level of
 'confidence' that oracle, db2 or mssql or sybase have when it comes
 to frontline mission critical stuff.

Can you explain why?

 I think it will in time...thats why i stick with it. besides,
 confidence is often just a synonym for 'knowledge'

Really?  I tend to see it more like confidence builds with experience
(and therefore time).  You don't need to know a lot about something to become
confident in its operation.  Many of us don't *really* understand how
gravity works, but we're quite confident in it.

 select count(*) from x (0.00 seconds)

 if you dig into the source, you will probably find this common
 select is built in and running all the time,

 and we are all just getting a pre-filled variable
 returned from the method  select count(*) from (x) 

Please find it.  I'll bet you money that it's not.

 I hope someone can prove me wrong...

I think the burden of proof is on you.

Jeremy
--
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 3.23.51: up 5 days, processed 219,909,773 queries (441/sec. avg)





RE: Can MySQL handle 120 million records?

2002-12-21 Thread Dean Harding
It's just that the number of rows in a table is stored directly in the
.MYD file (I believe) so a query like select count(*) from table will
just return that value, rather than counting all the rows.  It's not
that the functions are built-in, rather they're *optimized* - a
totally different thing.
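
For what it's worth, the difference is easy to see from the mysql client on a
MyISAM table (the table and column names below are only placeholders):

  -- no WHERE clause: answered from the stored row count, effectively instant
  SELECT COUNT(*) FROM big_table;

  -- any WHERE clause makes MySQL actually examine rows (or an index), so the
  -- time grows with the number of rows scanned
  SELECT COUNT(*) FROM big_table WHERE entry_date >= '2002-12-01';

The shortcut also doesn't apply to InnoDB tables, which don't keep an exact
row count.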

Dean.

 -Original Message-
 From: JamesD [mailto:[EMAIL PROTECTED]]
 Sent: Saturday, 21 December 2002 7:33 pm
 To: [EMAIL PROTECTED]
 Subject: RE: Can MySQL handle 120 million records?
 
 Jeremy,
 
 if i run the command
 
 show tables; on some database
 it returns
 
 5 rows in set (0.02 sec)
 
 and if, on the same database i run
 the command
 
 select count(*) from 'table';
 it returns
 
 +----------+
 | count(*) |
 +----------+
 | 47226322 |
 +----------+
 1 row in set (0.00 sec)
 
 now, i believe its reasonable to assume
 a CPU must spend more time to count 47 million records,
 than to count 5 tables. This is why I believe
 that many common selects are built-in to mysql in the code
 somewhere. make sense? Its a good idea actually, that building
 in commonly run selects when the process is idling and stuffing
 results into variables that can be called in an instant, saves
 us all time and money...
 
 Jim
 
 
 -Original Message-
 From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]]
 Sent: Friday, December 20, 2002 3:28 PM
 To: JamesD
 Cc: [EMAIL PROTECTED]
 Subject: Re: Can MySQL handle 120 million records?
 
 
 On Wed, Dec 18, 2002 at 08:05:46PM -0800, JamesD wrote:
 
  i like mySQL, but it has a long way to go to gain the level of
  'confidence' that oracle, db2 or mssql or sybase have when it comes
  to frontline mission critical stuff.
 
 Can you explain why?
 
  I think it will in time...thats why i stick with it. besides,
  confidence is often just a synonym for 'knowledge'
 
 Really?  I tend to see it more like confidence builds with experience
 (and therefore time).  You don't need to know a lot about something to become
 confident in its operation.  Many of us don't *really* understand how
 gravity works, but we're quite confident in it.
 
  select count(*) from x (0.00 seconds)
 
  if you dig into the source, you will probably find this common
  select is built in and running all the time,
 
  and we are all just getting a pre-filled variable
  returned from the method  select count(*) from (x) 
 
 Please find it.  I'll bet you money that it's not.
 
  I hope someone can prove me wrong...
 
 I think the burden of proof is on you.
 
 Jeremy
 --
 Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
 [EMAIL PROTECTED]  |  http://jeremy.zawodny.com/
 
 MySQL 3.23.51: up 5 days, processed 219,909,773 queries (441/sec. avg)
 




RE: Can MySQL handle 120 million records?

2002-12-21 Thread JamesD
Thanks Dean.

-Original Message-
From: Dean Harding [mailto:[EMAIL PROTECTED]]
Sent: Saturday, December 21, 2002 12:46 AM
To: 'JamesD'; [EMAIL PROTECTED]
Subject: RE: Can MySQL handle 120 million records?


It's just that the number of rows in a table is stored directly in the
.MYD file (I believe) so a query like select count(*) from table will
just return that value, rather than counting all the rows.  It's not
that the functions are built-in, rather they're *optimized* - a
totally different thing.

Dean.

 -Original Message-
 From: JamesD [mailto:[EMAIL PROTECTED]]
 Sent: Saturday, 21 December 2002 7:33 pm
 To: [EMAIL PROTECTED]
 Subject: RE: Can MySQL handle 120 million records?

 Jeremy,

 if i run the command

 show tables; on some database
 it returns

 5 rows in set (0.02 sec)

 and if, on the same database i run
 the command

 select count(*) from 'table';
 it returns

 +----------+
 | count(*) |
 +----------+
 | 47226322 |
 +----------+
 1 row in set (0.00 sec)

 now, i believe its reasonable to assume
 a CPU must spend more time to count 47 million records,
 than to count 5 tables. This is why I believe
 that many common selects are built-in to mysql in the code
 somewhere. make sense? Its a good idea actually, that building
 in commonly run selects when the process is idling and stuffing
 results into variables that can be called in an instant, saves
 us all time and money...

 Jim


 -Original Message-
 From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]]
 Sent: Friday, December 20, 2002 3:28 PM
 To: JamesD
 Cc: [EMAIL PROTECTED]
 Subject: Re: Can MySQL handle 120 million records?


 On Wed, Dec 18, 2002 at 08:05:46PM -0800, JamesD wrote:
 
  i like mySQL, but it has a long way to go to gain the level of
  'confidence' that oracle, db2 or mssql or sybase have when it comes
  to frontline mission critical stuff.

 Can you explain why?

  I think it will in time...thats why i stick with it. besides,
  confidence is often just a synonym for 'knowledge'

 Really?  I tend to see it more like confidence builds with experience
 (and therefore time).  You don't need to know a lot about something to become
 confident in its operation.  Many of us don't *really* understand how
 gravity works, but we're quite confident in it.

  select count(*) from x (0.00 seconds)
 
  if you dig into the source, you will probably find this common
  select is built in and running all the time,
 
  and we are all just getting a pre-filled variable
  returned from the method  select count(*) from (x) 

 Please find it.  I'll bet you money that it's not.

  I hope someone can prove me wrong...

 I think the burden of proof is on you.

 Jeremy
 --
 Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
 [EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

 MySQL 3.23.51: up 5 days, processed 219,909,773 queries (441/sec. avg)





Re: Can MySQL handle 120 million records?

2002-12-20 Thread Jeremy Zawodny
On Wed, Dec 18, 2002 at 08:05:46PM -0800, JamesD wrote:

 i like mySQL, but it has a long way to go to gain the level of
 'confidence' that oracle, db2 or mssql or sybase have when it comes
 to frontline mission critical stuff.

Can you explain why?

 I think it will in time...thats why i stick with it. besides,
 confidence is often just a synonym for 'knowledge'

Really?  I tend to see it more like confidence builds with experience
(and therefore time).  You don't need to know a lot about something to become
confident in its operation.  Many of us don't *really* understand how
gravity works, but we're quite confident in it.

 select count(*) from x (0.00 seconds)
 
 if you dig into the source, you will probably find this common
 select is built in and running all the time,

 and we are all just getting a pre-filled variable
 returned from the method  select count(*) from (x) 

Please find it.  I'll bet you money that it's not.
 
 I hope someone can prove me wrong...

I think the burden of proof is on you.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 3.23.51: up 5 days, processed 219,909,773 queries (441/sec. avg)





Re: Can MySQL handle 120 million records?

2002-12-20 Thread Michael She
At 03:28 PM 12/20/2002 -0800, Jeremy Zawodny wrote:


 I think it will in time...thats why i stick with it. besides,
 confidence is often just a synonym for 'knowledge'

Really?  I tend to see it more like confidence builds with experience
(and therefore time).  You don't need to know a lot about something to become
confident in its operation.  Many of us don't *really* understand how
gravity works, but we're quite confident in it.


Gravity is a great analogy.  It works with databases too.  People are 
confident in gravity because it is an observable fact of our planet.  For 
millennia people have experienced gravity and have grown accustomed to 
it.  The same can be said of DB2 and Oracle.  People have been using them for 
years, hence the comfort level with these products.

MySQL can be analogized to a new phenomenon.  People wonder what it is, what 
it does, and how it operates.  Over time, MySQL will gain the confidence 
and acceptance of lay-people and become a kind of gravity itself.
--
Michael She  : [EMAIL PROTECTED]
Mobile   : (519) 589-7309
WWW Homepage : http://www.binaryio.com/





RE: Can MySQL handle 120 million records?

2002-12-20 Thread JamesD
Jeremy,

MySQL has no brand recognition compared to Oracle, Sybase, MS, IBM.

That is why there is not enough confidence.
People buy things because they trust and have confidence in the brand.

As long as the brand owner can keep the product performing to the level
of expectations embodied in the brand, then the brand
will hold its position.

MS-SQL and Oracle etc. will hold their position in the front line
for decades to come, because they can afford to and because they want to,

and I think MySQL can do just fine anyway in its own way...

Jim




-Original Message-
From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]]
Sent: Friday, December 20, 2002 3:28 PM
To: JamesD
Cc: [EMAIL PROTECTED]
Subject: Re: Can MySQL handle 120 million records?


On Wed, Dec 18, 2002 at 08:05:46PM -0800, JamesD wrote:

 i like mySQL, but it has a long way to go to gain the level of
 'confidence' that oracle, db2 or mssql or sybase have when it comes
 to frontline mission critical stuff.

Can you explain why?

 I think it will in time...thats why i stick with it. besides,
 confidence is often just a synonym for 'knowledge'

Really?  I tend to see it more like confidence builds with experience
(and therefore time).  You don't need to know a lot about something to become
confident in its operation.  Many of us don't *really* understand how
gravity works, but we're quite confident in it.

 select count(*) from x (0.00 seconds)

 if you dig into the source, you will probably find this common
 select is built in and running all the time,

 and we are all just getting a pre-filled variable
 returned from the method  select count(*) from (x) 

Please find it.  I'll bet you money that it's not.

 I hope someone can prove me wrong...

I think the burden of proof is on you.

Jeremy
--
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 3.23.51: up 5 days, processed 219,909,773 queries (441/sec. avg)





Re: Can MySQL handle 120 million records?

2002-12-19 Thread Csongor Fagyal
MySQL may be new compared to Oracle, for example, but many other 
in-use DBs are in fact fairly new designs.  They just happen to be 
written by large companies you recognize every day.

Any ideas about PostgreSQL vs. MySQL? I have always preferred MySQL 
because of the speed, but I have heard that Postgres has also improved 
over time.  Does anybody have some experience they are willing to share?  How much slower 
is Postgres?

- Cs.





Re: Can MySQL handle 120 million records? - Ok, If you guys really can handle tens of millions records, you have to help me to enjoy MySQL too :-)

2002-12-19 Thread Brent Baisley
Why isn't Seq_ID an unsigned int? The primary key should always be 
something generated by the system that has no other significance than 
being a primary key. If there actually is a seq_id piece of data that 
has some other significance, I wouldn't use it to link all your data. 
Then you don't have to have a compound index (Seq_ID, Homolog_PID) in 
your NewSequence_Homolog table.

I can understand why you may need to use text for the Comment, but 
would char(255) be good enough for the title? Then you could split 
Comment out to a separate table so you can get fixed-length records. If 
comments should always be displayed with the data, then perhaps put 
a flag field in the table to indicate there are comments. Then comments 
can be viewed individually.
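
A rough sketch of that kind of restructuring (column and table names here are
only illustrative, not the poster's actual schema) might look like:

  CREATE TABLE NewSequence
  (
  id          int unsigned NOT NULL auto_increment,  -- surrogate primary key
  Seq_ID      varchar(50)  NOT NULL,                 -- the old identifier, kept as ordinary data
  Seq_Title   char(255)    NOT NULL,                 -- fixed length instead of text
  has_comment tinyint      NOT NULL default 0,       -- flag: comments live in their own table
  PRIMARY KEY (id),
  UNIQUE (Seq_ID)
  );

  CREATE TABLE NewSequenceComment
  (
  id      int unsigned NOT NULL,    -- matches NewSequence.id
  Comment text,
  PRIMARY KEY (id)
  );

With the big variable-length columns moved out, the main table gets
fixed-length rows, and the homolog table can link on the 4-byte id instead of
a varchar(50).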

Your simple left join does seem to be taking quite a long time. I 
wouldn't consider myself an expert in MySQL, but I would think that your 
index key length of 50 has to be slowing things down. The left join 
example you have is also scanning an entire table of 2676711 records. I 
don't know how big the table is in disk space, but I would guess the 
entire table is not cached in RAM.

A few weeks back I did read something about a company that was doing 
modeling on grain falling in a silo or something very complex like fluid 
dynamics. They were having severe performance issue where it would take 
10 hours to model something. They used all the various Unix tools to 
determine where the bottleneck was (disk, memory, or CPU). It was RAM 
and disk I/O (due to low RAM) that was slowing things down. They started 
adding disks for scratch areas and virtual memory and made sure there 
were no hot disks. They knocked about two hours off of the time. They 
then added a whole bunch of RAM so the entire table could be loaded into 
RAM and got things down to something like 17 minutes.
Regardless of whether you are using Oracle or MySQL, you are still 
limited by the hardware you are running it on. Finding what the 
bottleneck is (disk, memory, cpu, or network) is the key.

On Wednesday, December 18, 2002, at 04:17 PM, Qunfeng Dong wrote:

This NewSequence table is used to track some general
info about sequence. Notice I have to use text
datatype to describe Comment and Seq_Title fields;
therefore I have to use varchar for other string
fields. In addition, the Seq_ID is not numerical.
BTW, I found indexing on Seq_Type. Organism which are
very repeative still helps with accessing. This table
has 2676711 rows.


--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577






RE: Can MySQL handle 120 million records?

2002-12-19 Thread John Griffin
Hi Guys,

I'm a lurker on this list but I have decided to come out of my shell for a moment. In a 
previous job I was the Oracle DBA for my development team. We had a persistent problem 
with Oracle corrupting the development database. Oracle had no idea what the problem 
was even after I sent them a copy of the data files. I eventually traced the problem 
to a flaky memory chip in the development database server. We replaced the chip, 
reinstalled Oracle and the problem went away.

Now, the lessons learned from this story:

No database server software can account for all possible conditions. Regardless of the 
database, you will have an event at some point in time that causes database 
corruption. It's like death and taxes. It's going to happen. Deal with it. Create a 
good backup strategy and a good disaster recovery plan, and practice doing restores at 
least once a month. Expecting the software to save you from all situations is just a 
bad idea.
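
(For MySQL that can be as simple as a scheduled mysqldump plus a periodic test
restore onto a scratch machine; the paths and host name below are just an
example.)

  # nightly dump of every database
  mysqldump --opt --all-databases > /backup/all-20021219.sql

  # once a month, prove the backup is usable by loading it somewhere harmless
  mysql -h scratch-host < /backup/all-20021219.sql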

John Griffin

-Original Message-
From: Michael She [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 3:36 PM
To: Michael T. Babcock
Cc: [EMAIL PROTECTED]; Muruganandam
Subject: Re: Can MySQL handle 120 million records?


I guess you can say I'm a follower.  Other DB systems have been in use for 
years, so their reliability has been generally proven through use.  It's 
good to know that a lot of people have had success with MySQL, but 
considering MySQL is the newcomer, I'm still a little tepid!


At 01:22 PM 12/18/2002 -0500, Michael T. Babcock wrote:
Michael She wrote:

2. Some of the comments in the mySQL manual... people losing data doing 
routine stuff like table optimizations, adding keys, etc.  If a database 
is reliable, things like that shouldn't happen.  Comments like those in 
the MySQL manual scared me.


1) Do you believe this doesn't ever happen with other 'enterprise' level 
DB systems?
2) What do you think Microsoft's pages would look like if they allowed 
arbitrary user comments (read their newsgroups ...)
3) Those reports should be filed as bugs to be fixed, not added as 
comments in the manual.

-- 
Michael She  : [EMAIL PROTECTED]
Mobile   : (519) 589-7309
WWW Homepage : http://www.binaryio.com/






Re: Re: Can MySQL handle 120 million records?

2002-12-19 Thread Michael Bacarella
 I'm a lurker on this list but I have decided to come out of my shell for a moment. In 
a previous job I was the Oracle DBA for my development team. We had a persistent 
problem with Oracle corrupting the development database. Oracle had no idea what the 
problem was even after I sent them a copy of the data files. I eventually traced the 
problem to a flaky memory chip in the development database server. We replaced the 
chip, reinstalled Oracle and the problem went away.

I propose an addition to the MySQL manual in order
to promote general good sys adminlyness.

A good way to weed out these problems early and with
minimal grief is by stress testing a server before
it is ever purposed.
 
Most people try to develop their own stress test schemes. Don't
bother, there's a very good one out there called CTCS. I believe
VA Linux developed it in-house to stress test servers that they
had just built for customers.
 
After running CTCS on 7 x86 servers we recently received (NOT
from VA Linux), we discovered 2 of them had bad RAM, and one of
them had a faulty RAID controller(!).  The disturbing part is
these machines appeared perfectly functional, even held up
under high load for weeks sometimes before crashing. If I hadn't
found CTCS when I did our agony probably would have been far
more prolonged.
 
We learned our lesson. Now it's policy that machines must
stand 48 hours of CTCS before being put into any role at all.

Presumably your vendors do this too, but it's a good idea to
do it again after UPS hands you the box, for reasons that
should be obvious.
 
-- 
Michael Bacarella  | Netgraft Corp
   | 545 Eighth Ave #401
 Systems Analysis  | New York, NY 10018
Technical Support  | 212 946-1038 | 917 670-6982
 Managed Services  | http://netgraft.com/






Thanks! Re: Can MySQL handle 120 million records? - Ok, If you guys really can handle tens of millions records, you have to help me to enjoy MySQL too :-)

2002-12-19 Thread Qunfeng Dong
Well, thanks to all of you for your great help! I am able to
speed up the query {select count(*) from NEW_Sequence
s left join NEW_Sequence_Homolog h on s.Seq_ID =
h.Seq_ID;} from 1 min 52.61 sec down to 20.62 sec now.
The only thing I changed so far was the Seq_ID from
type varchar to bigint.  The Seq_ID was not all
numerical for the different types of sequences, but I
managed to assign numerical codes to the
non-numerical ones now.
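
(The change itself is just an ALTER per table; shown here only to illustrate
what is described above, using the table names from the earlier post, and
assuming the non-numerical IDs have already been mapped to numbers.)

  ALTER TABLE NewSequence         MODIFY Seq_ID bigint NOT NULL;
  ALTER TABLE NewSequence_Homolog MODIFY Seq_ID bigint NOT NULL;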

Qunfeng

  CREATE TABLE NewSequence
  (
  Seq_ID      varchar(50) NOT NULL,
  GenBank_Acc varchar(10),
  Organism    varchar(50) NOT NULL,
  Seq_Type    enum('EST','GSS','EST Contig','EST Singlet',
                   'GSS Contig','GSS Singlet','GSS Plasmid Contig',
                   'Protein') NOT NULL,
  Seq_Length  int  NOT NULL,
  Seq_Title   text NOT NULL,
  Comment     text,
  Entry_Date  date NOT NULL,
  PRIMARY KEY (Seq_ID),
  UNIQUE  (GenBank_Acc),
  INDEX (Seq_Type),
  INDEX (Organism)
  );
 
  This NewSequence table is used to track some general
  info about sequences.  Notice I have to use the text
  datatype to describe the Comment and Seq_Title fields;
  therefore I have to use varchar for other string
  fields.  In addition, the Seq_ID is not numerical.
  BTW, I found indexing on Seq_Type and Organism, which
  are very repetitive, still helps with access.  This
  table has 2676711 rows.
 
 
  CREATE TABLE NewSequence_Homolog
  (
  Seq_ID          varchar(50) NOT NULL,
  Homolog_PID     int NOT NULL,
  Homolog_Desc    varchar(50) NOT NULL,
  Homolog_Species varchar(50),
  PRIMARY KEY (Seq_ID, Homolog_PID)
  );
 
  This NewSequence_Homolog table is to track which
  protein sequences (homolog) are similar to the
  sequence I store in the NewSequence table.  This
  table has 997654 rows.

  mysql> select count(*) from NewSequence s left join
  NewSequence_Homolog h on s.Seq_ID = h.Seq_ID;
  +----------+
  | count(*) |
  +----------+
  |  3292029 |
  +----------+
  1 row in set (1 min 30.50 sec)
 
  So a simple left join took about 1 min and a half.
  First, is this slow or am I too picky?

  This is the Explain.
  mysql> explain select count(*) from NewSequence s left
  join NewSequence_Homolog h on s.Seq_ID = h.Seq_ID;

  +-------+-------+---------------+---------+---------+----------+---------+-------------+
  | table | type  | possible_keys | key     | key_len | ref      | rows    | Extra       |
  +-------+-------+---------------+---------+---------+----------+---------+-------------+
  | s     | index | NULL          | PRIMARY |      50 | NULL     | 2676711 | Using index |
  | h     | ref   | PRIMARY       | PRIMARY |      50 | s.Seq_ID |    9976 | Using index |
  +-------+-------+---------------+---------+---------+----------+---------+-------------+
 
 
  I am running MySQL 3.23.49 on RedHat Linux 7.3 on a
  dedicated server with 4 GB memory.  The only setting I
  changed is to copy the my-huge.cnf into /etc/my.cnf.
 
  Qunfeng
 
  --- Michael T. Babcock [EMAIL PROTECTED] wrote:
   Qunfeng Dong wrote:
  
   not-so-good performance (join on tables much smaller
   than yours takes minutes even using index) and I seem
   to read all the docs I could find on the web about how
   to optimize but they are not working for me (I am
  
   Have you stored a slow query log to run them through
   'explain' and see why they're slow?  Do you want to
   post some of them here so we can suggest what might
   be done to make them faster?
 
=== message truncated ===






Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael She
I agree.  MySQL is a great database, but I wouldn't call it enterprise 
grade.  Considering that the database is used to store billing 
information... one has to be weary about losing all the records due to a 
bug or deficiency in MySQL.  I was searching through some of the MySQL help 
documentation, and there have been more than a few examples in the comments 
where people have lost data due to wacky functions on databases greater 
than a couple of GBs...


At 08:43 PM 12/17/2002 -0800, Qunfeng Dong wrote:
I am not sure. Does anyone know any real examples of
mysql handling huge database and still perform well? I
am having problems with the performance with the MySQL
Left join recently. A big table (about 2.5 million
records) left join a small table (about 350K records)
takes generally 2 mins to finish. I check the
explain and primary key index on the small table was
indeed used for the joining. My system is Redhat Linux
7.3 with 4 GB memory. I also tried replacing the
default my.cnf with my-huge.cnf. It didn't help at
all.

Another thing, with some linux system, there is a size
limit for file. MySQL seems to store each of its table
as single file. You need to choose a file system
without that limit.

Qunfeng Dong
--- B.G. Mahesh [EMAIL PROTECTED]
wrote:

 hi

 We are evaluating few databases for developing an
 application with
 following specs,

 1.OS not very important. Leaning towards Linux

 2.Currently the database has about 5 million
 records but it will grow
 to 120 million records.

 3.The tables will have billing information for a
 telecom company.
 Nothing complex.

 4.Back office staff will use the data in the
 database to create
 invoices to be sent to customers. This data is not
 connected to the
 live telecom system [e.g. switches etc]. We get the
 data every day
 from the telecom company.

 5.Staff may perform queries on the database to get
 reports like
 busiest hour of the day etc etc. I don't see too
 many concurrent
 users using the system, however the system needs to
 be stable.

 6.Need to create excel, pdf files from the data in
 the database. This
 I think has nothing to do with the database, however
 this is a requirement.

 7.Needless to say, good security is a must which
 will also be built
 into the front end application.

 We are considering the following databases,

 1.MYSQL
 2.Postgres
 3.Oracle
 4.MSQL

 If MYSQL or Postgres can do the job I prefer not to
 spend the money on
 Oracle/MSQL. However, if Oracle/MSQL are required
 for getting good
 reports and scalability, so be it. We will use
 Oracle/MSQL.

 Any pointers/advice is appreciated


 --
 --
 B.G. Mahesh
 mailto:[EMAIL PROTECTED]
 http://www.indiainfo.com/
 India's first ISO certified portal



--
Michael She  : [EMAIL PROTECTED]
Mobile   : (519) 589-7309
WWW Homepage : http://www.binaryio.com/






Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael She
Being paranoid...

Have you ever lost data with MySQL before?  Is it reliable?  I have no 
problems using MySQL as a lightweight database for simple chores, but I'm a 
bit wary about putting it into a mission critical environment.


At 10:19 PM 12/17/2002 -0800, Jeremy Zawodny wrote:
On Wed, Dec 18, 2002 at 11:55:32AM -0500, Muruganandam wrote:
 Dear Jeremy,
 Can you send me the hardware details of your DB engine server...

The hardware is about 2 years old now.  The master is a dual P3 866
with 2GB RAM and 6 36GB SCSI disks.  Some of the slaves are a little
faster (dual P3 1GHz).


--
Michael She  : [EMAIL PROTECTED]
Mobile   : (519) 589-7309
WWW Homepage : http://www.binaryio.com/







Re: Can MySQL handle 120 million records?

2002-12-18 Thread David T-G
Jim, et al --

...and then JamesD said...
% 
% I've read limits are based on the filesize your OS can handle,
% the HDD size, memory, how fast your RISC

Yeah, these make sense, but that means that they're not really mysql
limits but hardware or OS limits.  I suspect that there's some value
like 2^32 or 2^64 records as a limit or some such, and the practical
answer is that it's the box and not the software that will limit you,
but I don't have the internal details.
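
For what it's worth, with MyISAM the ceiling people usually hit first is the
default 4GB data-file pointer size rather than a record count; it can be
inspected and raised per table (the table name below is just a placeholder):

  SHOW TABLE STATUS LIKE 'big_table';    -- look at the Max_data_length column

  -- tell MyISAM to use bigger row pointers for this table
  ALTER TABLE big_table MAX_ROWS=1000000000 AVG_ROW_LENGTH=100;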


% or CISC processors are...and how the stars are aligned...

*grin*


% 
% i think 4 billion records will need some horses pullin...

Oh, sure; I definitely agree.  That wasn't the question, though.


% 8 x 2.4 XEON/2 at least. I've run simple queries on

Maybe more than that, depending on how big the records are and how much
they're being accessed.  But I specifically wasn't talking about HW/OS
limitations because he wasn't :-)


% 80 million records and it takes minutes with a gig of RAM
% and 800 pentium III. each minute in front of a computer is
% like dog years... 1 = 7

*grin*  Too true...


% 
% 
% Jim


HTH  HAND  Happy Holidays

mysql query,
:-D
-- 
David T-G  * There is too much animal courage in 
(play) [EMAIL PROTECTED] * society and not sufficient moral courage.
(work) [EMAIL PROTECTED]  -- Mary Baker Eddy, Science and Health
http://www.justpickone.org/davidtg/Shpx gur Pbzzhavpngvbaf Qrprapl Npg!






RE: Can MySQL handle 120 million records?

2002-12-18 Thread Peter Vertes
Hi,

I've been using MySQL intercompany for a while now with great results.  Even 
the diehard MSSQL people are amazed at how fast it can be at times.  One of the things 
I use it for is to store syslog events in it.  I wrote a backend that parses a syslog 
file as data is being written into it and does multiple things with each syslog entry 
depending what the entry contains.  When I'm done with it the syslog entry goes into a 
MySQL database where I can store the data and let the operations team access it 
through a PHP enabled webpage to see either what is going on in the system real-time 
or be able to do queries about certain hosts, processes or show some stats (what 
happened to machine x on date y and what processes were running on it, etc...).
The MySQL database is being hosted on a Dell Precision 540 workstation box.  
It's a P4 1.7GHz Xeon with 512MB of ram and a 40GB IDE disc running Windows 2000 
Server.  That MySQL database is also being used for other things (nothing too 
intensive) and I muck around with it also and use it as a test db.  The machine also 
handles webserving chores and runs backup chores and other operations related tasks.
The database only holds about 1 month's worth of data in it; the rest we don't 
really need but we keep around for a while outside of the db zipped up.  As of when 
I'm writing this there were about 18.7 million entries in that table:

mysql> select count(*) from notifications;
+----------+
| count(*) |
+----------+
| 18711190 |
+----------+
1 row in set (0.00 sec)

All these entries have been accumulated from December 1, 2002 till present day:

mysql> select distinct syslogdate from notifications order by syslogdate;
+------------+
| syslogdate |
+------------+
| 2002-12-01 |
| 2002-12-02 |
| 2002-12-03 |
| 2002-12-04 |
| 2002-12-05 |
| 2002-12-06 |
| 2002-12-07 |
| 2002-12-08 |
| 2002-12-09 |
| 2002-12-10 |
| 2002-12-11 |
| 2002-12-12 |
| 2002-12-13 |
| 2002-12-14 |
| 2002-12-15 |
| 2002-12-16 |
| 2002-12-17 |
| 2002-12-18 |
+------------+
18 rows in set (12.95 sec)

Notice it took almost 13 seconds to complete that last query.  I tried this on 
a MSSQL server and after 2 minutes I turned the query off.  That kind of performance 
was unacceptable for a webapp that uses a database that does real time queries.  I'm 
quite happy with the performance of MySQL and I just love to see the MSSQL guys 
retreat when I show off how fast some queries can be (they always strike back with 
transactional stuff, blah, blah, blah :)  Anyway, I would suggest you use Linux for 
your dbserver with some kind of journaling file system.  I would go with ReiserFS 
because if memory serves correctly it can handle files up to 4 terabytes but you might 
want to double check since I'm quite forgetful with facts like that :)  I would also 
recommend the fastest SCSI drives you can find.  When I do queries in any 10 million+ 
database I barely get any CPU activity but I get A LOT of disk activity and I think 
this IDE drive is holding MySQL back.  When I have time I'm thinking about moving this 
database/webapp beast onto a SCSI Linux box and see how well it performs.  I think 
you'll be very pleased with the performance you'll get out of MySQL.
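
(One thing that would probably help that 13-second DISTINCT query, if it isn't
there already, is an index on the date column; table and column names as shown
in the output above.)

  ALTER TABLE notifications ADD INDEX (syslogdate);

  -- with the index in place, MySQL can read the much smaller index instead of
  -- the full rows, which removes most of the disk I/O described above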

-Pete

P.S.: Thanks again MySQL team :)





Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael T. Babcock
Qunfeng Dong wrote:


Another thing, with some linux system, there is a size
limit for file. MySQL seems to store each of its table
as single file. You need to choose a file system
without that limit. 


Just use InnoDB tables for these files and you won't have a problem 
AFAIK; you can have multiple 2G files that are used to create one big 
table if you like (any InnoDB people want to comment on actual limits?)
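
(The multiple-file layout is configured with innodb_data_file_path in my.cnf;
a minimal example, with sizes and paths chosen arbitrarily:)

  [mysqld]
  innodb_data_home_dir = /var/lib/mysql/
  innodb_data_file_path = ibdata1:2000M;ibdata2:2000M:autoextend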

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael T. Babcock
Michael She wrote:


I agree.  MySQL is a great database, but I wouldn't call it enterprise 
grade.  Considering that the database is used to store billing 
information... one has to be weary about losing all the records due to 
a bug or deficiency in


Besides actual additional features (management software, SNMP support, 
stored-procedures and the like), what would lead you to decide that 
MySQL isn't ready for the big time, assuming a site didn't need the 
above.

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock






Re: Can MySQL handle 120 million records?

2002-12-18 Thread Jeremy Zawodny
On Wed, Dec 18, 2002 at 02:38:31AM -0500, Michael She wrote:
 Being paranoid...
 
 Have you ever lost data with MySQL before?

No.

 Is it reliable.

Yes.  It doesn't crash and doesn't lose data.  If it did either, we'd
never have used it this much.

 I have no problems using MySQL as a lightweight database for simple
 chores, but I'm a bit weary about putting into a mission critical
 environment.

Why, exactly?

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 3.23.51: up 3 days, processed 136,569,878 queries (457/sec. avg)





Re: Can MySQL handle 120 million records?

2002-12-18 Thread Jeremy Zawodny
On Wed, Dec 18, 2002 at 02:37:07AM -0500, Michael She wrote:

 I agree.  MySQL is a great database, but I wouldn't call it
 enterprise grade.

Since you haven't told us what enterprise grade means to you, that
doesn't tell us much.  What is it lacking to become enterprise grade
in your mind?

 Considering that the database is used to store billing
 information... one has to be weary about losing all the records due
 to a bug or deficiency in MySQL.

That's true of any database server, right?  It's not really a
MySQL-specific complaint.

 I was searching through some of the MySQL help documentation, and
 there have been more than a few examples in the comments where
 people have lost data due to wacky functions on databases greater
 than a couple of GBs...

Imagine what you'd read if Oracle was open enough to allow comments in
their on-line docs.  Seriously.  I've heard pretty nasty stories about
Oracle, Microsoft SQL Server, and so on.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 3.23.51: up 3 days, processed 136,618,914 queries (457/sec. avg)





Re: Can MySQL handle 120 million records?

2002-12-18 Thread Jocelyn Fournier
Hi,

I'm using MySQL on a database with 134 million rows (10.9 GB) (some
tables contain more than 40 million rows) under quite high stress
(about 500 queries/sec avg), using HEAP, MyISAM and InnoDB tables.
I never experienced any losses, *even with MySQL-4.1* (yes, I'm currently
using 4.1 on this production server ;)).
So for me MySQL is ready for a mission critical environment :)

Regards,
  Jocelyn
- Original Message -
From: Jeremy Zawodny [EMAIL PROTECTED]
To: Michael She [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; Muruganandam [EMAIL PROTECTED]
Sent: Wednesday, December 18, 2002 4:06 PM
Subject: Re: Can MySQL handle 120 million records?


 On Wed, Dec 18, 2002 at 02:38:31AM -0500, Michael She wrote:
  Being paranoid...
 
  Have you ever lost data with MySQL before?

 No.

  Is it reliable.

 Yes.  It doesn't crash and doesn't lose data.  If it did either, we'd
 never have used it this much.

  I have no problems using MySQL as a lightweight database for simple
  chores, but I'm a bit weary about putting into a mission critical
  environment.

 Why, exactly?

 Jeremy
 --
 Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
 [EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

 MySQL 3.23.51: up 3 days, processed 136,569,878 queries (457/sec. avg)





RE: Can MySQL handle 120 million records?

2002-12-18 Thread Joe Stump
Without trying to sound like a troll or a rant I'd like to chime in on the
side of Jeremy.

I've worked with MySQL on sites that serve up over a million hits a day. We
hit the 2gb file limit in Linux (NOT a MySQL problem) and moved to Solaris
without incident.

A friend of mine had over a billion rows in a few of his tables
(statistical data mostly).

As Jeremy points out all DB's have their problems, shortcomings, etc. If you
have specific complaints fill out a feature request, if you've got problems
fill out a bug report, but don't knock MySQL as
not-worthy-of-enterprise-status because it doesn't *work* like Oracle, etc.

Overall, in my many experiences, it is more than sufficient for web apps.

--Joe


--
Joe Stump [EMAIL PROTECTED]
http://www.joestump.net


-Original Message-
From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 8:09 AM
To: Michael She
Cc: Qunfeng Dong; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: Can MySQL handle 120 million records?


On Wed, Dec 18, 2002 at 02:37:07AM -0500, Michael She wrote:

 I agree.  MySQL is a great database, but I wouldn't call it
 enterprise grade.

Since you haven't told us what enterprise grade means to you, that
doesn't tell us much.  What is it lacking to become enterprise grade
in your mind?

 Considering that the database is used to store billing
 information... one has to be weary about losing all the records due
 to a bug or deficiency in MySQL.

That's true of any database server, right?  It's not really a
MySQL-specific complaint.

 I was searching through some of the MySQL help documentation, and
 there have been more than a few examples in the comments where
 people have lost data due to wacky functions on databases greater
 than a couple of GBs...

Imagine what you'd read if Oracle was open enough to allow comments in
their on-line docs.  Seriously.  I've heard pretty nasty stories about
Oracle, Microsoft SQL Server, and so on.

Jeremy
--
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 3.23.51: up 3 days, processed 136,618,914 queries (457/sec. avg)





Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael She
At 08:06 AM 12/18/2002 -0800, Jeremy Zawodny wrote:



 I have no problems using MySQL as a lightweight database for simple
 chores, but I'm a bit weary about putting into a mission critical
 environment.

Why, exactly?



Mainly for 2 reasons:

1. MySQL hasn't been proven yet in the corporate environment
2. Some of the comments in the mySQL manual... people losing data doing 
routine stuff like table optimizations, adding keys, etc.  If a database is 
reliable, things like that shouldn't happen.  Comments like those in the 
MySQL manual scared me.
--
Michael She  : [EMAIL PROTECTED]
Mobile   : (519) 589-7309
WWW Homepage : http://www.binaryio.com/






RE: Can MySQL handle 120 million records? - Impressive! How do you guys do that?

2002-12-18 Thread Qunfeng Dong
I am very encouraged to hear all these success
stories.  I do want to stick to MySQL (we are using it
to develop a biology database).  But I am indeed seeing
not-so-good performance (joins on tables much smaller
than yours take minutes even using indexes), and I seem
to have read all the docs I could find on the web about how
to optimize, but they are not working for me (I am
going to order Jeremy Zawodny's Advanced MySQL and
see if I am missing anything).  Am I one of the few who
are encountering these problems?  What are your secrets to
successfully running such large databases with MySQL?  How
much time have you spent fine-tuning the performance?

Qunfeng

--- Peter Vertes [EMAIL PROTECTED] wrote:
 Hi,
 
   I've been using MySQL intercompany for a while now
 with great results.  Even the diehard MSSQL people
 are amazed at how fast it can be at time.  One of
 the things I use it for is to store syslog events in
 it.  I wrote a backend that parses a syslog file as
 data is being written into it and does multiple
 things with each syslog entry depending what the
 entry contains.  When I'm done with it the syslog
 entry goes into a MySQL database where I can store
 the data and let the operations team access it
 through a PHP enabled webpage to see either what is
 going on in the system real-time of be able to do
 queries about certain hosts, processes or show some
 stats (what happened to machine x on date y and what
 processes were running on it, etc...).
   The MySQL database is being hosted on a Dell
 Precisions 540 workstation box.  It's a P4 1.7GHz
 Xeon with 512MB of ram and a 40GB IDE disc running
 Windows 2000 Server.  That MySQL database is also
 being used for other things (nothing too intensive)
 and I muck around with it also and use it as a test
 db.  The machine also handles webserving chores and
 runs backup chores and other operations related
 tasks.
   The database only holds about 1 months worth of
 data in it, the rest we don't really need but we
 keep around for a while outside of the db zipped up.
  As of when I'm writing this there were about 18.7
 million entries in that table:
 
 mysql select count(*) from notifications;
 +----------+
 | count(*) |
 +----------+
 | 18711190 |
 +----------+
 1 row in set (0.00 sec)
 
 All these entries have been accumulated from
 December 1, 2002 till present day:
 
 mysql select distinct syslogdate from notifications
 order by syslogdate;
 +------------+
 | syslogdate |
 +------------+
 | 2002-12-01 |
 | 2002-12-02 |
 | 2002-12-03 |
 | 2002-12-04 |
 | 2002-12-05 |
 | 2002-12-06 |
 | 2002-12-07 |
 | 2002-12-08 |
 | 2002-12-09 |
 | 2002-12-10 |
 | 2002-12-11 |
 | 2002-12-12 |
 | 2002-12-13 |
 | 2002-12-14 |
 | 2002-12-15 |
 | 2002-12-16 |
 | 2002-12-17 |
 | 2002-12-18 |
 ++
 18 rows in set (12.95 sec)
 
   Notice it took almost 13 seconds to complete that
 last query.  I tried this on a MSSQL server and
 after 2 minutes I turned the query off.  That kind
 of performance was unacceptable for a webapp that
 uses a database that does real time queries.  I'm
 quite happy with the performance of MySQL and I just
 love to see the MSSQL guys retreat when I show off
 how fast some queries can be (they always strike
 back with transactional stuff, blah, blah, blah :) 
 Anyway, I would suggest you use Linux for your
 dbserver with some kind of journaling file system. 
 I would go with ReiserFS because if memory serves
 correctly it can handle files up to 4 terabytes but
 you might want to double check since I'm quite
 forgetful with facts like that :)  I would also
 recommend the fastest SCSI drives you can find. 
 When I do queries in any 10 million+ database I
 barely get any CPU activity but I get A LOT of disk
 activity and I think this IDE drive is holding MySQL
 back.  When I have time I'm thinking about moving
 this database/webapp beast onto a SCSI Linux box and
 see how well it performs.  I think you'll be very
 pleased with the performance you'll get out of
 MySQL.
 
 -Pete
 
 P.S.: Thanks again MySQL team :)
 

-
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list
 archive)
 
 To request this thread, e-mail
 [EMAIL PROTECTED]
 To unsubscribe, e-mail

[EMAIL PROTECTED]
 Trouble unsubscribing? Try:
 http://lists.mysql.com/php/unsubscribe.php
 



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael T. Babcock

Joe Stump wrote:


As Jeremy points out all DB's have their problems, shortcomings, etc. If you
have specific complaints fill out a feature request, if you've got problems
fill out a bug report, but don't knock MySQL as
 


There's a nice point on the MySQL site somewhere that if you really want 
a feature, you could always call them and pay for the development of 
that feature.

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock



-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



Re: Can MySQL handle 120 million records?

2002-12-18 Thread Jocelyn Fournier
Previous hardware was a dual PIII-733 with 786 MB of RAM and 1 SCSI drive,
under Linux (kernel 2.4.18).
It worked fine, with some occasional slowdowns, mainly because of the hard
drive.
Now the server is a dual Athlon MP 2200+ with 2 GB of RAM and Maxtor Atlas
10K3 SCSI 320 drives (RAID-5) (still kernel 2.4.18).
The load average is roughly 0.7.

Regards,
  Jocelyn

- Original Message -
From: W. D. [EMAIL PROTECTED]
To: Jocelyn Fournier [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; Michael She [EMAIL PROTECTED];
[EMAIL PROTECTED]; Muruganandam [EMAIL PROTECTED]
Sent: Wednesday, December 18, 2002 5:38 PM
Subject: Re: Can MySQL handle 120 million records?




At 10:40 12/18/2002, Jocelyn Fournier wrote:
Hi,

I'm using MySQL on a database with 134 million rows (10.9 GB) (some
tables contain more than 40 million rows) under quite high stress
(about 500 queries/sec avg). (using HEAP, MyISAM and InnoDB tables)
I never experienced any losses, *even with MySQL-4.1* (yes, I'm currently
using 4.1 on this production server ;)).
So for me MySQL is ready for a mission critical environment :)

Wow!  What kind of hardware?  What OS?

Start Here to Find It Fast!© -
http://www.US-Webmasters.com/best-start-page/






-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Can MySQL handle 120 million records?

2002-12-18 Thread David Brodbeck


 -Original Message-
 From: Joe Stump [mailto:[EMAIL PROTECTED]]

 We hit the 2gb file limit in Linux (NOT a MySQL problem) and 
 moved to Solaris without incident.

This appears to have been largely fixed in Linux, too, if you use a recent
kernel and glibc.  I recently tried creating a 3 gigabyte file on one of my
ext2fs partitions and it worked fine.  I'm not sure what the new limit is.
The system is RedHat 7.0 with all the current bugfix updates (including
glibc 2.2.4), plus a custom-compiled 2.4.17 kernel.

This has been discussed a lot on the samba mailing list.  It seems to still
be a problem with some distributions and not with others, and no one's quite
sure what the deciding factor is.  It's easy enough to test by dd'ing a few
billion bytes from /dev/zero into a file, though.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael T. Babcock

Michael She wrote:


2. Some of the comments in the mySQL manual... people losing data 
doing routine stuff like table optimizations, adding keys, etc.  If a 
database is reliable, things like that shouldn't happen.  Comments 
like those in the MySQL manual scared me.


1) Do you believe this doesn't ever happen with other 'enterprise' level 
DB systems?
2) What do you think Microsoft's pages would look like if they allowed 
arbitrary user comments (read their newsgroups ...)
3) Those reports should be filed as bugs to be fixed, not added as 
comments in the manual.

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock



-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



Re: Can MySQL handle 120 million records?

2002-12-18 Thread Jeremy Zawodny
On Wed, Dec 18, 2002 at 12:16:00PM -0500, Michael She wrote:
 At 08:06 AM 12/18/2002 -0800, Jeremy Zawodny wrote:
 
 1. MySQL hasn't been proven yet in the corporate environment

You mean in your corporate environment?

It works well in ours. :-) And we use it to store data that we serve
to millions of users daily.

 2. Some of the comments in the mySQL manual... people losing data
 doing routine stuff like table optimizations, adding keys, etc.  If
 a database is reliable, things like that shouldn't happen.  Comments
 like those in the MySQL manual scared me.

You can find horror stories about any database product.  You just need
to know where to look or who to ask.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 3.23.51: up 3 days, processed 141,006,515 queries (459/sec. avg)

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael Bacarella
   I have no problems using MySQL as a lightweight database for simple
   chores, but I'm a bit wary about putting it into a mission critical
   environment.

 1. MySQL hasn't been proven yet in the corporate environment

We run a periodic billing system backed with MySQL, in addition to
the rest of a web site that is hammered with approximately 300 million
hits per month. These all go against a single MySQL instance running on
modest x86 hardware.

 2. Some of the comments in the mySQL manual... people losing data doing 
 routine stuff like table optimizations, adding keys, etc.  If a database is 
 reliable, things like that shouldn't happen.  Comments like those in the 
 MySQL manual scared me.

We've never lost data. Our database server has crashed hard from OS failures
and suffered plenty of unclean shutdowns. MySQL/InnoDB always recovers
perfectly.

-- 
Michael Bacarella  | Netgraft Corp
   | 545 Eighth Ave #401
 Systems Analysis  | New York, NY 10018
Technical Support  | 212 946-1038 | 917 670-6982
 Managed Services  | http://netgraft.com/


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Can MySQL handle 120 million records?

2002-12-18 Thread Joe Stump
1. MySQL hasn't been proven yet in the corporate environment

Is Yahoo! proven enough? Seriously, how many large corporations have to use
a DB in order for it to be proven? Is Access proven because every
company on the planet uses it at some level?

2. Some of the comments in the mySQL manual... people losing data doing
routine stuff like table optimizations, adding keys, etc.  If a database is
reliable, things like that shouldn't happen.  Comments like those in the
MySQL manual scared me.

Like previous posters have pointed out. If given the same freedom within
Oracle's online documentation you'd have to believe there would be horror
stories outlining loss of data.

The main questions you have to ask yourself are as follows:

- What type of application am I programming?
- What type of features do I need in my DB?
- How much data am I storing?
- What type of data am I storing?
- How much traffic will this database see?
- How much money can I spend?

If you're storing massive amounts of video and sound within the actual DB
then maybe Oracle is the choice for you. If you need seamless integration
with VB then MS SQL might be the best candidate. If all you need is data and
speed then any DB on the planet would work (if your traffic was low enough
XML in flat files would work). We aren't saying MySQL is the best DB for
every task, but what we are saying is MySQL is proven to be more than
adequate for all sorts of tasks - it just depends on which task you need
your DB to perform very well in.

--Joe

--
Joe Stump [EMAIL PROTECTED]
http://www.joestump.net



1. MySQL hasn't been proven yet in the corporate environment
2. Some of the comments in the mySQL manual... people losing data doing
routine stuff like table optimizations, adding keys, etc.  If a database is
reliable, things like that shouldn't happen.  Comments like those in the
MySQL manual scared me.
--
Michael She  : [EMAIL PROTECTED]
Mobile   : (519) 589-7309
WWW Homepage : http://www.binaryio.com/



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail
[EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-18 Thread Jocelyn Fournier
Hi,

I assume you are speaking about this comment :

+---------------------+----------+----------+----------------------+
| Table               | Op       | Msg_type | Msg_text             |
+---------------------+----------+----------+----------------------+
| database.table_name | optimize | error    | 28 when fixing table |
| database.table_name | optimize | status   | Operation failed     |
+---------------------+----------+----------+----------------------+
2 rows in set (40.91 sec)

I typed it in again, thinking it had done some sort
of rollback or perhaps it was just a machine glitch,
but then I typed it in again and got the following

mysql optimize table table_name;
+---------------------+----------+----------+--------------------------------------------------+
| Table               | Op       | Msg_type | Msg_text                                         |
+---------------------+----------+----------+--------------------------------------------------+
| database.table_name | optimize | error    | Can't open file: 'table_name.MYD'. (errno: 144)  |
+---------------------+----------+----------+--------------------------------------------------+

and lo, all my data is lost... thank god for
mysqldump.

Well, error 28 means there is no space left on the device.
When you run an OPTIMIZE TABLE statement, MySQL locks the main table and
recreates the index file in parallel.
As MySQL failed to recreate the index file, the table was marked as crashed
(errno: 144), but in no case were data lost (the data file is not altered
during an optimize):

He just has to execute a REPAIR TABLE statement to get all his records
back.

Take a look at what happens during an optimize of the following :

Before OPTIMIZE TABLE searchmainhardwarefr8 :

[root@forum] /home/mysql/Hardwarefr l searchmainhardwarefr8.*
19:28:52
-rw-rw----    1 mysql    mysql    27589205 Dec 18 19:25 searchmainhardwarefr8.MYD
-rw-rw----    1 mysql    mysql    16257024 Dec 18 19:25 searchmainhardwarefr8.MYI
-rw-rw----    1 mysql    mysql        8596 Oct 18 17:03 searchmainhardwarefr8.frm

During OPTIMIZE TABLE searchmainhardwarefr8 :

[root@forum] /home/mysql/Hardwarefr l searchmainhardwarefr8.*
19:29:21
-rw-rw----    1 mysql    mysql    27589205 Dec 18 19:25 searchmainhardwarefr8.MYD
-rw-rw----    1 mysql    mysql    16257024 Dec 18 19:25 searchmainhardwarefr8.MYI
-rw-rw----    1 mysql    mysql     6696960 Dec 18 19:29 searchmainhardwarefr8.TMM
-rw-rw----    1 mysql    mysql        8596 Oct 18 17:03 searchmainhardwarefr8.frm

After OPTIMIZE TABLE searchmainhardwarefr8 :

[root@forum] /home/mysql/Hardwarefr l searchmainhardwarefr8.*
19:29:22
-rw-rw----    1 mysql    mysql    27589205 Dec 18 19:25 searchmainhardwarefr8.MYD
-rw-rw----    1 mysql    mysql    16257024 Dec 18 19:29 searchmainhardwarefr8.MYI
-rw-rw----    1 mysql    mysql        8596 Oct 18 17:03 searchmainhardwarefr8.frm


As you can see, only the MYI file (index file) has changed, the data file
remains untouched.
Since you can completely recreate the MYI using the MYD file, there is no
possibility of data loss, even if the hard disk is full.
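
For the person who hit that error, a minimal recovery sketch (the table name
below is just the placeholder used in the error messages above) would be,
once some disk space has been freed:

CHECK TABLE table_name;    # confirms the table is marked as crashed (errno 144)
REPAIR TABLE table_name;   # rebuilds the .MYI index file from the untouched .MYD data file

The same rebuild can also be done offline with the myisamchk utility while
the server is stopped.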


Regards,
  Jocelyn


- Original Message -
From: Michael She [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; Muruganandam [EMAIL PROTECTED]
Sent: Wednesday, December 18, 2002 5:16 PM
Subject: Re: Can MySQL handle 120 million records?


 At 08:06 AM 12/18/2002 -0800, Jeremy Zawodny wrote:


   I have no problems using MySQL as a lightweight database for simple
   chores, but I'm a bit wary about putting it into a mission critical
   environment.
 
 Why, exactly?


 Mainly for 2 reasons:

 1. MySQL hasn't been proven yet in the corporate environment
 2. Some of the comments in the mySQL manual... people losing data doing
 routine stuff like table optimizations, adding keys, etc.  If a database
is
 reliable, things like that shouldn't happen.  Comments like those in the
 MySQL manual scared me.
 --
 Michael She  : [EMAIL PROTECTED]
 Mobile   : (519) 589-7309
 WWW Homepage : http://www.binaryio.com/



 -
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail
[EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php






-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-18 Thread Gerald Jensen
Joe is right ... we switched from another SQL server to MySQL in 1999, and
have never looked back.

MySQL has been rock solid for our applications, the MySQL development team
is great to work with, and our customers like it.

Gerald Jensen

- Original Message -
From: Joe Stump [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; Michael She [EMAIL PROTECTED]
Cc: Qunfeng Dong [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Wednesday, December 18, 2002 1:55 PM
Subject: RE: Can MySQL handle 120 million records?


 Without trying to sound like a troll or a rant I'd like to chime in on the
 side of Jeremy.

 I've worked with MySQL on sites that serve up over a million hits a day.
We
 hit the 2gb file limit in Linux (NOT a MySQL problem) and moved to Solaris
 without incident.

 A friend of mine had over a billion rows in a few of his tables
 (statistical data mostly).

 As Jeremy points out all DB's have their problems, shortcomings, etc. If
you
 have specific complaints fill out a feature request, if you've got
problems
 fill out a bug report, but don't knock MySQL as
 not-worthy-of-enterprise-status because it doesn't *work* like Oracle,
etc.

 Overall, in my many experiences, it is more than sufficient for web apps.

 --Joe


 --
 Joe Stump [EMAIL PROTECTED]
 http://www.joestump.net


 -Original Message-
 From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]]
 Sent: Wednesday, December 18, 2002 8:09 AM
 To: Michael She
 Cc: Qunfeng Dong; [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Subject: Re: Can MySQL handle 120 million records?


 On Wed, Dec 18, 2002 at 02:37:07AM -0500, Michael She wrote:
 
  I agree.  MySQL is a great database, but I wouldn't call it
  enterprise grade.

 Since you haven't told us what enterprise grade means to you, that
 doesn't tell us much.  What is it lacking to become enterprise grade
 in your mind?

  Considering that the database is used to store billing
  information... one has to be wary about losing all the records due
  to a bug or deficiency in MySQL.

 That's true of any database server, right?  It's not really a
 MySQL-specific complaint.

  I was searching through some of the MySQL help documentation, and
  there have been more than a few examples in the comments where
  people have lost data due to wacky functions on databases greater
  than a couple of GBs...

 Imagine what you'd read if Oracle was open enough to allow comments in
 their on-line docs.  Seriously.  I've heard pretty nasty stories about
 Oracle, Microsoft SQL Server, and so on.

 Jeremy
 --
 Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
 [EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

 MySQL 3.23.51: up 3 days, processed 136,618,914 queries (457/sec. avg)

 -
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail
 [EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



 -
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail
[EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php





-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Can MySQL handle 120 million records?

2002-12-18 Thread Greg_Cope
 From: Michael She [mailto:[EMAIL PROTECTED]]
   I have no problems using MySQL as a lightweight database 
 for simple
   chores, but I'm a bit wary about putting it into a mission critical
   environment.
 
 Why, exactly?
 
 
 Mainly for 2 reasons:
 
 1. MySQL hasn't been proven yet in the corporate environment

Works for me and my past and present clients (I am a contractor), and it
would appear others.

 2. Some of the comments in the mySQL manual... people losing 
 data doing 
 routine stuff like table optimizations, adding keys, etc.  If 
 a database is 
 reliable, things like that shouldn't happen.  Comments like 
 those in the 
 MySQL manual scared me.

Cannot really comment about this, but like others I have never lost any data
that I could blame MySQL for.

I suggest you try it ...

Greg Cope
JITC


 






-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-18 Thread Csongor Fagyal
W. D. wrote:


At 10:40 12/18/2002, Jocelyn Fournier wrote:
 

Hi,

I'm using MySQL on a database with 134 million rows (10.9 GB) (some
tables contain more than 40 million rows) under quite high stress
(about 500 queries/sec avg). (using HEAP, MyISAM and InnoDB tables)
I never experienced any losses, *even with MySQL-4.1* (yes, I'm currently
using 4.1 on this production server ;)).
So for me MySQL is ready for a mission critical environment :)
   


Wow!  What kind of hardware?  What OS?


We have a magnitude smaller DB, with around 5M rows (0.5GB data).

STATUS:
Threads: 15  Questions: 171147358  Slow queries: 2388  Opens: 22690  
Flush tables: 1  Open tables: 119 Queries per second avg: 68.198

The thing is, it runs on a commercial P3 server (733MHz) with 512M RAM. 
It is 3.23.53a on RedHat 7.3 (or 7.1, I don't remember :-)), 
with one 60GB 7200rpm IDE HDD. We are peaking at around 300-500 queries/sec 
(30% of those are inserts or updates). This machine also runs a very 
busy Apache server, yet the load is under 1.0 most of the time (goes up 
to 3-4 when we do the batch processing). So MySQL can be very efficient 
- we have not encountered any data loss whatsoever in the last few 
months. (But that is MyISAM - we had performance problems with InnoDB).

What you need to have is a _good_ install, and then MySQL is superb. But 
to have a good install is not as easy as it sounds.

- Csongor


-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



Re: Can MySQL handle 120 million records?

2002-12-18 Thread Paul DuBois
At 10:10 -0500 12/18/02, Michael T. Babcock wrote:

Qunfeng Dong wrote:


Another thing: with some Linux systems, there is a size
limit for files. MySQL seems to store each of its tables
as a single file. You need to choose a file system
without that limit.


Just use InnoDB tables for these files and you won't have a problem 
AFAIK; you can have multiple 2G files that are used to create one 
big table if you like (any InnoDB people want to comment on actual 
limits?)

Check out:

http://www.innodb.com/ibman.html#InnoDB_restrictions

And take a look at the last few items in the list, pertaining to page
size, max number of pages, etc.
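
On the multiple-2G-files point above, a hedged sketch of the relevant my.cnf
lines (the directory and sizes here are invented purely for illustration):

[mysqld]
innodb_data_home_dir = /var/lib/mysql/
innodb_data_file_path = ibdata1:2000M;ibdata2:2000M;ibdata3:2000M:autoextend

InnoDB then spreads every table's pages across that shared tablespace, so a
single table is not bound by any per-file size limit of the underlying
filesystem.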



--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock


sql, query

-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records? - Impressive! How do you guys do that?

2002-12-18 Thread R. Hannes Niedner
On 12/18/02 9:48 AM, Qunfeng Dong [EMAIL PROTECTED] wrote:

 But I am indeed seeing
 not-so-good performance (join on tables much smaller
 than yours takes minutes even using index) and I seem
 to read all the docs I could find on the web about how
 to optimize but they are not working for me

Why don't you just post the table structures and the join query that you
have trouble with? There are enough expert here on this list who are happy
to help you further optimize your database if possible.

/h

MySQL, TABLE


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records? - Impressive! How do you guys do that?

2002-12-18 Thread Michael T. Babcock

Qunfeng Dong wrote:


not-so-good performance (join on tables much smaller
than yours takes minutes even using index) and I seem
to read all the docs I could find on the web about how
to optimize but they are not working for me (I am



Have you stored a slow query log to run them through 'explain' and see 
why they're slow?  Do you want to post some of them here so we can 
suggest what might be done to make them faster?
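
For reference, a minimal sketch of capturing that log (the file name and
threshold here are only examples, not recommendations):

# my.cnf, [mysqld] section
log-slow-queries = /var/log/mysql/slow.log
long_query_time  = 5

Once a few offenders have accumulated, running each through EXPLAIN SELECT
... shows which indexes (if any) are actually being used.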

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock



-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



Re: Can MySQL handle 120 million records?

2002-12-18 Thread W. D.


At 10:40 12/18/2002, Jocelyn Fournier wrote:
Hi,

I'm using MySQL on a database with 134 million rows (10.9 GB) (some
tables contain more than 40 million rows) under quite high stress
(about 500 queries/sec avg). (using HEAP, MyISAM and InnoDB tables)
I never experienced any losses, *even with MySQL-4.1* (yes, I'm currently
using 4.1 on this production server ;)).
So for me MySQL is ready for a mission critical environment :)

Wow!  What kind of hardware?  What OS?

Start Here to Find It Fast!© - http://www.US-Webmasters.com/best-start-page/


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-18 Thread Paul DuBois
At 14:28 -0500 12/18/02, Michael T. Babcock wrote:

Paul DuBois wrote:


And take a look at the last few items in the list, pertaining to page
size, max number of pages, etc.



4 billion * 16kB = max table size = 64TB

Correct?  Sounds pretty serious ;-)


That's what it looks like to me!




--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock


sql, query

-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-18 Thread W. D.
At 13:05 12/18/2002, Csongor Fagyal, wrote:
What you need to have is a _good_ install, and then MySQL is superb. But 
to have a good install is not as easy as it sounds.

Can you list the elements of a good install?



Start Here to Find It Fast!© - http://www.US-Webmasters.com/best-start-page/


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael T. Babcock

Michael Bacarella wrote:


We've never lost data. Our database server has crashed hard from OS failures
and suffered plenty of unclean shutdowns. MySQL/InnoDB always recovers
perfectly.
 


Running a slave off-site tops off crash recovery almost 100%.  We run a 
backup of our clients' data to multiple (unfortunately) mysql instances 
on our server over SSH links to their sites.  If their sites were dead 
for some reason or actually lost data, we have it all safe and live.
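
For anyone who has not set up such a slave, a rough sketch of the moving
parts (host, user, password and log file name are placeholders, and the
exact statement spelling differs slightly between 3.23 and 4.0):

# master my.cnf: enable log-bin and give the server a unique server-id
# slave my.cnf: give it a different server-id, then on the slave run:
CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='master-bin.001',
    MASTER_LOG_POS=4;
SLAVE START;          -- START SLAVE on newer versions
SHOW SLAVE STATUS;    -- verify replication is actually running

Once the slave has caught up it doubles as a live, off-site copy of the data.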

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock



-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael T. Babcock

Joe Stump wrote:


Like previous posters have pointed out. If given the same freedom within
Oracle's online documentation you'd have to believe there would be horror
stories outlining loss of data.
 


The most significant factor I've ever seen in people liking Oracle for 
their sites is the speed with which an Oracle DBA will show up and fix 
everything for them.  As I understand it, the MySQL team gives quite 
good response times as well, and you get to deal with the actual 
programmers if you pay for the privilege (and even on this list).

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock



-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



RE: Can MySQL handle 120 million records?

2002-12-18 Thread Wayne Lewis
Use iostat -x while the query is running. You are likely I/O bound doing a
table scan on the protected (BIG) table.

There has been a lot of discussion about RAM and CPU on this thread regarding
performance, but nothing regarding disk I/O. If you're going to put tens of
millions of records in a database and expect fast random access to them
(i.e. not just reading the more recently inserted records), then you need
lots of disks and a good volume manager. Don't try to compensate for lack of
disk with lots of RAM. Eventually the DB will be too big to budget for that
RAM.

Plan the disk requirements before anything else. A good rule of thumb is
that a 10K SCSI disk can do about 200 random I/O per second. You can verify
your disks using bonnie (http://www.textuality.com/bonnie/) or the like
(make sure you are looking at RANDOM I/O not sequential I/O).

Next compute the rate of I/O needed. If you are using InnoDB (which you
should, otherwise MyISAM's coarse-grained locking will introduce
non-linearities into the performance that are too hard to model) then you
know I/O is done in 16K pages. How many such pages need to be read per
second? Look at all your queries and determine the access plans. Focus on
the ones that will dominate I/O requirements. Understanding how InnoDB lays
out data is critical for modeling. Each secondary index is in its own B-Tree
with leaves containing the value of the primary key. The data itself is in
another B-Tree keyed off the primary key. (Very similar to Oracle w/ index
organized tables.)

Unless you can benefit from clustering around the primary key, it is safest to
assume one I/O per leaf item (secondary or primary) as the tables and
indexes get large. For back-of-the-envelope calculations it's okay to ignore
the non-leaf pages and assume they are cached.

Hence for a table with primary and secondary keys named PK and SK
respectively:
SELECT * FROM FOO WHERE PK = ? = 1 I/O
SELECT * FROM FOO WHERE SK = ? = 2 I/O
SELECT FOO.* FROM FOO, BAR WHERE FOO.SK = BAR.PK AND BAR.SK = ? = 3 I/O
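
A quick way to check that the optimizer is really delivering those access
paths (keeping the hypothetical FOO/BAR/PK/SK names from above) is EXPLAIN:

EXPLAIN SELECT * FROM FOO WHERE SK = 42;
-- expect type 'ref' on the SK index; a 'type: ALL' row means a table scan,
-- and the per-query I/O estimates above no longer apply

Getting the plan right first makes the rest of the arithmetic meaningful.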

Now take all the queries and determine the number of their occurrences for
some unit of work. That unit of work should correspond to some externally
observable event, like user logins. Say you have 3 queries in
the system like this:

Query   # I/O   Occurrences Per Unit Work
  Q1      3               3
  Q2      4               2
  Q3      1               3

Now you can say that on average it takes 20 I/Os per unit of work. With one
disk you can do 200/20 = 10 Units of work / second / disk.

To ensure that performance scales with disks, stripe across the disks. This
is where volume management becomes key: MySQL/InnoDB fills each datafile
sequentially, so don't just put each datafile on its own disk; that will just
create hotspots and you will be bound by the performance of a single disk.

This is the general idea of planning with any database. The previous only
covers equality matches via an index, but you can figure out the difference for
table scans (consider how many rows fit into a page) and range scans
(consider the effect of the primary key as a cluster index).

-Wayne


-Original Message-


I am not sure. Does anyone know any real examples of
MySQL handling a huge database and still performing
well? I have been having problems with the performance
of the MySQL LEFT JOIN recently. A big table (about
2.5 million records) left joined with a small table
(about 350K records) generally takes 2 mins to finish.
I checked the explain output, and the primary key
index on the small table was indeed used for the join.
My system is RedHat Linux 7.3 with 4 GB memory. I also
tried replacing the default my.cnf with my-huge.cnf.
It didn't help at all.

Another thing: with some Linux systems, there is a size
limit for files. MySQL seems to store each of its tables
as a single file. You need to choose a file system
without that limit.

Qunfeng Dong
--- B.G. Mahesh [EMAIL PROTECTED]
wrote:

 hi

 We are evaluating few databases for developing an
 application with
 following specs,

 1.OS not very important. Leaning towards Linux

 2.Currently the database has about 5 million
 records but it will grow
 to 120 million records.

 3.The tables will have billing information for a
 telecom company.
 Nothing complex.

 4.Back office staff will use the data in the
 database to create
 invoices to be sent to customers. This data is not
 connected to the
 live telecom system [e.g. switches etc]. We get the
 data every day
 from the telecom company.

 5.Staff may perform queries on the database to get
 reports like
 busiest hour of the day etc etc. I don't see too
 many concurrent
 users using the system, however the system needs to
 be stable.

 6.Need to create excel, pdf files from the data in
 the database. This
 I think has nothing to do with the database, however
 this is a requirement.

 7.Needless to say, good security is a must which
 will also be built
 into the front end application.

 We are considering the following databases,

 1.MYSQL
 

RE: Can MySQL handle 120 million records?

2002-12-18 Thread Adam Nelson
The only thing wrong with MySQL is what it doesn't do.
Everything it does do, it does fantastically.

 -Original Message-
 From: Michael T. Babcock [mailto:[EMAIL PROTECTED]] 
 Sent: Wednesday, December 18, 2002 10:12 AM
 To: Michael She
 Cc: Qunfeng Dong; [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Subject: Re: Can MySQL handle 120 million records?
 
 
 Michael She wrote:
 
  I agree.  MySQL is a great database, but I wouldn't call it 
 enterprise 
  grade.  Considering that the database is used to store billing 
  information... one has to be wary about losing all the 
 records due to 
  a bug or deficiency in
 
 
 Besides actual additional features (management software, SNMP 
 support, 
 stored-procedures and the like), what would lead you to decide that 
 MySQL isn't ready for the big time, assuming a site didn't need the 
 above?
 
 -- 
 Michael T. Babcock
 C.T.O., FibreSpeed Ltd.
 http://www.fibrespeed.net/~mbabcock
 
 
 


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael She
I guess you can say I'm a follower.  Other DB systems have been in use for 
years, so their reliability has been generally proven through use.  It's 
good to know that a lot of people have had success with MySQL, but 
considering MySQL is the newcomer, I'm still a little tepid!


At 01:22 PM 12/18/2002 -0500, Michael T. Babcock wrote:
Michael She wrote:


2. Some of the comments in the mySQL manual... people losing data doing 
routine stuff like table optimizations, adding keys, etc.  If a database 
is reliable, things like that shouldn't happen.  Comments like those in 
the MySQL manual scared me.


1) Do you believe this doesn't ever happen with other 'enterprise' level 
DB systems?
2) What do you think Microsoft's pages would look like if they allowed 
arbitrary user comments (read their newsgroups ...)
3) Those reports should be filed as bugs to be fixed, not added as 
comments in the manual.

--
Michael She  : [EMAIL PROTECTED]
Mobile   : (519) 589-7309
WWW Homepage : http://www.binaryio.com/


-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-18 Thread Michael T. Babcock

Michael She wrote:


I guess you can say I'm a follower.  Other DB systems have been in use 
for years, so their reliability has been generally proven through 
use.  It's good to know that a lot of people have had success with 
MySQL, but considering MySQL is the newcomer, I'm still a little tepid!


MySQL may be new compared to Oracle, for example, but many other in-use 
DBs are in fact fairly new designs.  They just happen to be written by* 
large companies you recognize every day.

*written by  = written by programmers that were hired by and/or written 
by a company that was purchased by ...

--
Michael T. Babcock
C.T.O., FibreSpeed Ltd.
http://www.fibrespeed.net/~mbabcock



-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



Re: Can MySQL handle 120 million records? - Ok, If you guys really can handle tens of millions records, you have to help me to enjoy MySQL too :-)

2002-12-18 Thread Qunfeng Dong
Boy, you guys are die-hard MySQL fans :-) I think
your strong defense convinced us MySQL can handle 120
million records. But I know there are some ordinary
users out there like me who are not experts at tuning
MySQL performance (they did send me private emails
saying they encountered a similar slow join problem).
So please help us to keep the faith.

We are trying to develop a simple biology database to
maintain some DNA Sequence information. My problem is
coming from the following two tables:

CREATE TABLE NewSequence
(
Seq_ID          varchar(50)     NOT NULL,
GenBank_Acc     varchar(10),
Organism        varchar(50)     NOT NULL,
Seq_Type        enum('EST','GSS','EST Contig','EST Singlet','GSS Contig',
                     'GSS Singlet','GSS Plasmid Contig','Protein') NOT NULL,
Seq_Length      int             NOT NULL,
Seq_Title       text            NOT NULL,
Comment         text,
Entry_Date      date            NOT NULL,
PRIMARY KEY (Seq_ID),
UNIQUE  (GenBank_Acc),
INDEX (Seq_Type),
INDEX (Organism)
);

CREATE TABLE NewSequence_Homolog
(
Seq_ID  varchar(50) NOT NULL,
Homolog_PID int NOT NULL,
Homolog_Desc    varchar(50)     NOT NULL,
Homolog_Species varchar(50),
PRIMARY KEY (Seq_ID, Homolog_PID)
);






--- Michael T. Babcock [EMAIL PROTECTED]
wrote:
 Qunfeng Dong wrote:
 
 not-so-good performance (join on tables much
 smaller
 than yours takes minutes even using index) and I
 seem
 to read all the docs I could find on the web about
 how
 to optimize but they are not working for me (I am
 
 
 Have you stored a slow query log to run them through
 'explain' and see 
 why they're slow?  Do you want to post some of them
 here so we can 
 suggest what might be done to make them faster?
 
 -- 
 Michael T. Babcock
 C.T.O., FibreSpeed Ltd.
 http://www.fibrespeed.net/~mbabcock
 
 



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records? - Ok, If you guys really can handle tens of millions records, you have to help me to enjoy MySQL too :-)

2002-12-18 Thread Qunfeng Dong
Boy, you guys are die-hard MySQL fans :-) I think
your strong defense convinced us MySQL can handle 120
million records :-) But I know there are some
ordinary users out there like me who are not experts
at tuning MySQL performance (they did send me private
emails saying they encountered a similar slow join
problem). So please help us to keep the faith.

We are trying to develop a simple biology database to
maintain some DNA Sequence information. My problem is
coming from the following two tables:

CREATE TABLE NewSequence
(
Seq_ID          varchar(50)     NOT NULL,
GenBank_Acc     varchar(10),
Organism        varchar(50)     NOT NULL,
Seq_Type        enum('EST','GSS','EST Contig','EST Singlet','GSS Contig',
                     'GSS Singlet','GSS Plasmid Contig','Protein') NOT NULL,
Seq_Length      int             NOT NULL,
Seq_Title       text            NOT NULL,
Comment         text,
Entry_Date      date            NOT NULL,
PRIMARY KEY (Seq_ID),
UNIQUE  (GenBank_Acc),
INDEX (Seq_Type),
INDEX (Organism)
);

This NewSequence table is used to track some general
info about each sequence. Notice I have to use the
text datatype for the Comment and Seq_Title fields;
therefore I have to use varchar for the other string
fields. In addition, the Seq_ID is not numerical.
BTW, I found that indexing on Seq_Type and Organism,
which are very repetitive, still helps with access.
This table has 2676711 rows.


CREATE TABLE NewSequence_Homolog
(
Seq_ID  varchar(50) NOT NULL,
Homolog_PID int NOT NULL,
Homolog_Desc    varchar(50)     NOT NULL,
Homolog_Species varchar(50),
PRIMARY KEY (Seq_ID, Homolog_PID)
);

This NewSequence_Homolog table is used to track which
protein sequences (homologs) are similar to the
sequences I store in the NewSequence table. This table
has 997654 rows.

mysql select count(*) from NewSequence s left join
NewSequence_Homolog h on s.Seq_ID = h.Seq_ID;
+--+
| count(*) |
+--+
|  3292029 |
+--+
1 row in set (1 min 30.50 sec)

So a simple left join took about a minute and a half.
First, is this slow, or am I too picky?

This is the Explain.
mysql explain select count(*) from NewSequence s left
join NewSequence_Homolog h on s.Seq_ID = h.Seq_ID;
+-------+-------+---------------+---------+---------+----------+---------+-------------+
| table | type  | possible_keys | key     | key_len | ref      | rows    | Extra       |
+-------+-------+---------------+---------+---------+----------+---------+-------------+
| s     | index | NULL          | PRIMARY |      50 | NULL     | 2676711 | Using index |
| h     | ref   | PRIMARY       | PRIMARY |      50 | s.Seq_ID |    9976 | Using index |
+-------+-------+---------------+---------+---------+----------+---------+-------------+


I am running MySQL 3.23.49 on RedHat Linux 7.3 on a
dedicated server with 4 GB of memory. The only change
I made was to copy my-huge.cnf to /etc/my.cnf.
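
For a mostly-MyISAM workload on a machine with that much memory, the usual
first knobs are the buffer sizes in my.cnf; a purely illustrative sketch
(these numbers are not recommendations):

[mysqld]
set-variable = key_buffer=512M     # cache for MyISAM index blocks
set-variable = table_cache=256
set-variable = sort_buffer=4M
set-variable = record_buffer=2M

my-huge.cnf already sets reasonable starting points, though, so for the join
above the bigger win is likely to be schema-side (shorter keys), as later
replies suggest.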

Qunfeng

--- Michael T. Babcock [EMAIL PROTECTED]
wrote:
 Qunfeng Dong wrote:
 
 not-so-good performance (join on tables much
 smaller
 than yours takes minutes even using index) and I
 seem
 to read all the docs I could find on the web about
 how
 to optimize but they are not working for me (I am
 
 
 Have you stored a slow query log to run them through
 'explain' and see 
 why they're slow?  Do you want to post some of them
 here so we can 
 suggest what might be done to make them faster?
 
 -- 
 Michael T. Babcock
 C.T.O., FibreSpeed Ltd.
 http://www.fibrespeed.net/~mbabcock
 
 



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-18 Thread Aaron Clausen
On Wed, 18 Dec 2002, Gerald Jensen wrote:

 Joe is right ... we switched from another SQL server to MySQL in 1999, and
 have never looked back.

 MySQL has been rock solid for our applications, the MySQL development team
 is great to work with, and our customers like it.


That's been my experience as well.  We have an in-house billing system which
I originally wrote to work with MS-Access.  I converted the whole thing over
to MySQL via ODBC in June 2000, and it has worked flawlessly ever since.  We
run it under Win2k, though I'm seriously thinking of moving the database
server over to Linux in the next six months.  But MySQL has been rock solid.
I have lost no data, save through my own stupidity, at any point.  I would
recommend it without reservations.

-- 
Aaron Clausen


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-18 Thread Csongor Fagyal



What you need to have is a _good_ install, and then MySQL is superb. But 
to have a good install is not as easy as it sounds.
   


Can you list the elements of a good install?
 

Well...

One which does not make mysqld hang once in every hour (or minute).

Seriously speaking, this is what I meant. (I have previously posted a 
message on this list about this problem I am still facing.) A good 
install is one which works - if it works for a day, it will work 
basically forever. Usually it is even easy to install MySQL. However, if 
you start to twist it and/or you have some sort of incompatibility (say, 
in your gcc or some library you don't even know about), you are 
screwed. (At least I am.)

Currently I have two MySQL servers running on the same machine, on different 
ports, under different users, etc... I just could not set this up using 
one binary (it looks like some paths are getting compiled into the 
binaries) - even when it worked, it crashed after a few minutes. 
Right now I have an RPM and a source install - this way the two MySQL 
instances are not messing each other up (but both of them are randomly 
crashing). I think this has something to do with the fact that I am using 
RedHat 8... I will compile 3.23.54a tomorrow (and freshen the RPM... brrr!) to 
see what develops.

- Cs.



-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



Re: Can MySQL handle 120 million records? - Ok, If you guys really can handle tens of millions records, you have to help me to enjoy MySQL too :-)

2002-12-18 Thread Jocelyn Fournier
Hi,

I think you'd better add a unique ID, defined as an int and corresponding to
each Seq_ID, to both tables, and then do the join on this ID rather than on
Seq_ID (a join on varchar is far from the fastest solution :)) (unless Seq_ID
could be converted into an int directly?)

(but it takes time, even for me (dual Athlon MP 2200+):


mysql SELECT COUNT(*) FROM searchmainhardwarefr7 LEFT JOIN
searchjoinhardwarefr7 ON
searchjoinhardwarefr7.numreponse=searchmainhardwarefr7.numreponse;
+--+
| COUNT(*) |
+--+
| 39396361 |
+--+
1 row in set (3 min 23.15 sec)

mysql EXPLAIN SELECT COUNT(*) FROM searchmainhardwarefr7 LEFT JOIN
searchjoinhardwarefr7 ON
searchjoinhardwarefr7.numreponse=searchmainhardwarefr7.numreponse;
+----+-------------+-----------------------+--------+---------------+------------+---------+----------------------------------+----------+-------------+
| id | select_type | table                 | type   | possible_keys | key        | key_len | ref                              | rows     | Extra       |
+----+-------------+-----------------------+--------+---------------+------------+---------+----------------------------------+----------+-------------+
|  1 | SIMPLE      | searchmainhardwarefr7 | index  | NULL          | numreponse |       4 | NULL                             | 39396576 | Using index |
|  1 | SIMPLE      | searchjoinhardwarefr7 | eq_ref | numreponse    | numreponse |       4 | searchmainhardwarefr7.numreponse |        1 | Using index |
+----+-------------+-----------------------+--------+---------------+------------+---------+----------------------------------+----------+-------------+
)


Regards,
  Jocelyn
- Original Message -
From: Qunfeng Dong [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, December 18, 2002 9:17 PM
Subject: Re: Can MySQL handle 120 million records? - Ok, If you guys really
can handle tens of millions records, you have to help me to enjoy MySQL too
:-)


 Boy, you guys are die-hard MySQL fans :-) I think your
 strong defending convinced us MySQL can handle 120
 million records :-) But I know some ordinary users out
 there like me who are not experts on tuning the MySQL
 performance (they did send me private emails saying
 they encountered the similar slow join problem). So
 please help us to keep the faith.

 We are trying to develop a simple biology database to
 maintain some DNA Sequence information. My problem is
 coming from the following two tables:

 CREATE TABLE NewSequence
 (
 Seq_ID          varchar(50)     NOT NULL,
 GenBank_Acc     varchar(10),
 Organism        varchar(50)     NOT NULL,
 Seq_Type        enum('EST','GSS','EST Contig','EST Singlet','GSS Contig',
                      'GSS Singlet','GSS Plasmid Contig','Protein') NOT NULL,
 Seq_Length      int             NOT NULL,
 Seq_Title       text            NOT NULL,
 Comment         text,
 Entry_Date      date            NOT NULL,
 PRIMARY KEY (Seq_ID),
 UNIQUE  (GenBank_Acc),
 INDEX (Seq_Type),
 INDEX (Organism)
 );

 This NewSequence table is used to track some general
 info about sequence. Notice I have to use text
 datatype to describe Comment and Seq_Title fields;
 therefore I have to use varchar for other string
 fields. In addition, the Seq_ID is not numerical.
 BTW, I found indexing on Seq_Type. Organism which are
 very repeative still helps with accessing. This table
 has 2676711 rows.


 CREATE TABLE NewSequence_Homolog
 (
 Seq_ID  varchar(50) NOT NULL,
 Homolog_PID int NOT NULL,
 Homolog_Desc    varchar(50)     NOT NULL,
 Homolog_Species varchar(50),
 PRIMARY KEY (Seq_ID, Homolog_PID)
 );

 This NewSequence_Homolog table is to track which
 protein sequences (homolog) are similar to the
 sequence I store in the NewSequence table. This table
 has 997654 rows.

 mysql select count(*) from NewSequence s left join
 NewSequence_Homolog h on s.Seq_ID = h.Seq_ID;
 +--+
 | count(*) |
 +--+
 |  3292029 |
 +--+
 1 row in set (1 min 30.50 sec)

 So a simple left join took about 1 min and half.
 First, is this slow or I am too picky?

 This is the Explain.
 mysql explain select count(*) from NewSequence s left
 join NewSequence_Homolog h on s.Seq_ID = h.Seq_ID;

 +-------+-------+---------------+---------+---------+----------+---------+-------------+
 | table | type  | possible_keys | key     | key_len | ref      | rows    | Extra       |
 +-------+-------+---------------+---------+---------+----------+---------+-------------+
 | s     | index | NULL          | PRIMARY |      50 | NULL     | 2676711 | Using index |
 | h     | ref   | PRIMARY       | PRIMARY |      50 | s.Seq_ID |    9976 | Using index |
 +-------+-------+---------------+---------+---------+----------+---------+-------------+


 I am running MySQL 3.23.49 on RedHat linux 7.3 on a
 dedicated server with 4 GB memory. The only setting I
 changed is to copy

RE: Can MySQL handle 120 million records?

2002-12-18 Thread RBRoa
I am proud to be a user of the rock-solid MySQL database engine; I use it as
my backend data store for everything from traffic analysis to subscriber
administration. I have never experienced downtime due to a bug. So if you're
planning to use this open source engine as your billing handler, think it
through and decide what works best for you. And if you're convinced of the
solidity of this engine, then I think you're better off moving everything
over to it.



R R--

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records? - Ok, If you guys really can handle tens of millions records, you have to help me to enjoy MySQL too :-)

2002-12-18 Thread Ryan Fox

- Original Message -
From: Qunfeng Dong [EMAIL PROTECTED]

 We are trying to develop a simple biology database to
 maintain some DNA Sequence information. My problem is
 coming from the following two tables:
snip

Making indexes smaller will help.  Does it need to be varchar(50)?
Also, I'd consider creating a numeric auto_increment primary key on your
NewSequence table, and using it to relate the 2 tables together.  It may
make for some more complex SQL statements to describe the relationship, but
you'll gain the time back in performance.  Consider using the following.

CREATE TABLE NewSequence
(
id              int(11)         not null auto_increment,
Seq_ID          varchar(50)     NOT NULL,
GenBank_Acc     varchar(10),
Organism        varchar(50)     NOT NULL,
Seq_Type        enum('EST','GSS','EST Contig','EST Singlet','GSS Contig',
                     'GSS Singlet','GSS Plasmid Contig','Protein') NOT NULL,
Seq_Length      int             NOT NULL,
Seq_Title       text            NOT NULL,
Comment         text,
Entry_Date      date            NOT NULL,
PRIMARY KEY (id),
UNIQUE (Seq_ID),
UNIQUE  (GenBank_Acc),
INDEX (Seq_Type),
INDEX (Organism)
);

CREATE TABLE NewSequence_Homolog
(
id  int(11) NOT NULL,
Homolog_PID int NOT NULL,
Homolog_Desc    varchar(50)     NOT NULL,
Homolog_Species varchar(50),
PRIMARY KEY (id, Homolog_PID)
);

This would make your example query:
select count(*) from NewSequence s left join NewSequence_Homolog h on s.id =
h.id;
And this would run much quicker: instead of searching through 50-character
keys for each table, it would only have to compare small integer keys (an
INT is only 4 bytes).
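
One hedged sketch of migrating existing rows onto that surrogate key,
assuming the new tables above are created next to the old ones (the Old*
names below are purely illustrative placeholders for the current tables):

INSERT INTO NewSequence (Seq_ID, GenBank_Acc, Organism, Seq_Type, Seq_Length,
                         Seq_Title, Comment, Entry_Date)
SELECT Seq_ID, GenBank_Acc, Organism, Seq_Type, Seq_Length,
       Seq_Title, Comment, Entry_Date
FROM OldNewSequence;                      -- id is filled in by auto_increment

INSERT INTO NewSequence_Homolog (id, Homolog_PID, Homolog_Desc, Homolog_Species)
SELECT n.id, o.Homolog_PID, o.Homolog_Desc, o.Homolog_Species
FROM OldNewSequence_Homolog o, NewSequence n
WHERE o.Seq_ID = n.Seq_ID;

After that, the join in the example query runs entirely on the integer id
columns.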

Ryan

sql, query and stuff


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Can MySQL handle 120 million records?

2002-12-18 Thread JamesD
I like MySQL, but it has a long way to go to
gain the level of 'confidence' that Oracle, DB2, MSSQL or Sybase
have when it comes to frontline mission
critical stuff. I think it will in time... that's why I stick with
it. Besides, confidence is often just a synonym for 'knowledge'.

select count(*) from x (0.00 seconds)

If you dig into the source, you will probably find this
common select is built in and running all the time,

and we are all just getting a pre-filled variable
returned from the method "select count(*) from (x)".

I hope someone can prove me wrong...
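
That guess is close to the truth for MyISAM, though it is metadata rather
than a cached query: MyISAM keeps the exact row count in the table header,
so COUNT(*) with no WHERE clause is answered without touching any rows. A
small illustration (the table and column names are only examples):

SELECT COUNT(*) FROM x;                 -- read from the stored row count: effectively instant
SELECT COUNT(*) FROM x WHERE col > 0;   -- must scan rows or an index, so it takes real time

InnoDB keeps no such count, which is one reason the same statement behaves
very differently there.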

I agree, in many common queries it is a heck of a lot faster
than MSSQL.

Jim

-Original Message-
From: Peter Vertes [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 18, 2002 6:47 AM
To: [EMAIL PROTECTED]
Subject: RE: Can MySQL handle 120 million records?


Hi,

I've been using MySQL intercompany for a while now with great results.
Even the diehard MSSQL people are amazed at how fast it can be at times.  One
of the things I use it for is to store syslog events in it.  I wrote a
backend that parses a syslog file as data is being written into it and does
multiple things with each syslog entry depending what the entry contains.
When I'm done with it the syslog entry goes into a MySQL database where I
can store the data and let the operations team access it through a PHP
enabled webpage to see either what is going on in the system real-time or be
able to do queries about certain hosts, processes or show some stats (what
happened to machine x on date y and what processes were running on it,
etc...).
The MySQL database is being hosted on a Dell Precisions 540 workstation
box.  It's a P4 1.7GHz Xeon with 512MB of ram and a 40GB IDE disc running
Windows 2000 Server.  That MySQL database is also being used for other
things (nothing too intensive) and I muck around with it also and use it as
a test db.  The machine also handles webserving chores and runs backup
chores and other operations related tasks.
The database only holds about 1 month's worth of data in it, the rest we
don't really need but we keep around for a while outside of the db zipped
up.  As of when I'm writing this there were about 18.7 million entries in
that table:

mysql> select count(*) from notifications;
+--+
| count(*) |
+--+
| 18711190 |
+--+
1 row in set (0.00 sec)

All these entries have been accumulated from December 1, 2002 till present
day:

mysql> select distinct syslogdate from notifications order by syslogdate;
++
| syslogdate |
++
| 2002-12-01 |
| 2002-12-02 |
| 2002-12-03 |
| 2002-12-04 |
| 2002-12-05 |
| 2002-12-06 |
| 2002-12-07 |
| 2002-12-08 |
| 2002-12-09 |
| 2002-12-10 |
| 2002-12-11 |
| 2002-12-12 |
| 2002-12-13 |
| 2002-12-14 |
| 2002-12-15 |
| 2002-12-16 |
| 2002-12-17 |
| 2002-12-18 |
++
18 rows in set (12.95 sec)

Notice it took almost 13 seconds to complete that last query.  I tried this
on an MSSQL server and after 2 minutes I turned the query off.  That kind of
performance was unacceptable for a webapp that does real-time queries against
the database.  I'm quite happy with the performance of MySQL, and I just love
to see the MSSQL guys retreat when I show off how fast some queries can be
(they always strike back with transactional stuff, blah, blah, blah :)
Anyway, I would suggest you use Linux for your DB server with some kind of
journaling file system.  I would go with ReiserFS because, if memory serves
correctly, it can handle files up to 4 terabytes, but you might want to
double-check since I'm quite forgetful with facts like that :)  I would also
recommend the fastest SCSI drives you can find.  When I do queries on any
10-million+ row table I barely get any CPU activity but I get A LOT of disk
activity, and I think this IDE drive is holding MySQL back.  When I have time
I'm thinking about moving this database/webapp beast onto a SCSI Linux box to
see how well it performs.  I think you'll be very pleased with the
performance you'll get out of MySQL.
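
(One hedged suggestion: if syslogdate isn't already indexed, that 13-second
DISTINCT query most likely has to scan all 18.7 million rows.  An index on
the column should let MySQL resolve it from the index alone; the index name
below is made up:)

ALTER TABLE notifications ADD INDEX idx_syslogdate (syslogdate);

-- with the index in place this should no longer need a full table scan:
SELECT DISTINCT syslogdate FROM notifications ORDER BY syslogdate;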

-Pete

P.S.: Thanks again MySQL team :)

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-17 Thread Jeremy Zawodny
On Wed, Dec 18, 2002 at 08:10:41AM +, B.G. Mahesh wrote:
 
 If MYSQL or Postgres can do the job I prefer not to spend the money on
 Oracle/MSQL. However, if Oracle/MSQL are required for getting good
 reports and scalability, so be it. We will use Oracle/MSQL.

MySQL will have no problem with 120 million records, as long as you
have decent hardware.
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 3.23.51: up 2 days, processed 107,599,733 queries (426/sec. avg)

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-17 Thread B.G. Mahesh
+ On Wed, Dec 18, 2002 at 08:10:41AM +, B.G. Mahesh wrote:
+  
+  If MYSQL or Postgres can do the job I prefer not to spend the money on
+  Oracle/MSQL. However, if Oracle/MSQL are required for getting good
+  reports and scalability, so be it. We will use Oracle/MSQL.
+ 
+ MySQL will have no problem with 120 million records, as long as you
+ have decent hardware.

hi

Thanks for the quick response. My question may not be very valid, but is
there an upper limit on the number of records it can handle?

Hardware-wise, I guess if I have at least 512MB RAM and a P4 Intel motherboard
I should be fine. Isn't that right?


-- 
-- 
B.G. Mahesh
mailto:[EMAIL PROTECTED]
http://www.indiainfo.com/
India's first ISO certified portal

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-17 Thread David T-G
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

BG --

...and then B.G. Mahesh said...
% 
...
% 
% Thanks for the quick response. My question may not be very valid..but is
% there a upper limit on the number of records it can handle?

I'll leave this to the experts.  I'm almost certain there must be, though
it's probably something like 4 billion or some such.
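
(If I remember right, the practical ceiling is usually the table's data file
size rather than a hard row count; MyISAM's default 4-byte pointers top out
around 4GB of data per table.  A table can be created with hints so MySQL
uses larger pointers; the table below is made up purely to show the syntax:)

CREATE TABLE big_example
(
    id      int NOT NULL AUTO_INCREMENT,
    payload varchar(100),
    PRIMARY KEY (id)
) MAX_ROWS=200000000 AVG_ROW_LENGTH=128;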


% 
% Hardware wise, I guess if I have atleast 512MB RAM, P4 Intel motherboard I should
% be fine. Isn't it?

There was actually a fair bit of discussion about this (including the
merits of various chips, their capabilities, and their speeds) on this
list a bit back, and it's come up other times before; see the list
archives for details.  Basically it comes down to

  - figure out where your bottleneck will lie
- RAM? disk I/O? CPU power? floating point math? 'net bandwidth?

  - build a system that caters to those bottlenecks

You haven't said how large the records are, how many people will need to
hit it at once, how much math will be involved, and what sort of peak
load it will have to handle (it might sit basically idle for three and a
half weeks and then be slammed generating invoices), but my loose guess
is that you should get a dual-PIII or dual-Athlon box with 512M - 1024M
of RAM and some good SCSI disks you can RAID.  It won't be a $500 desktop,
but it probably shouldn't cost you more than $2k to build a quite nice
box (but don't forget to build a second for redundancy, perhaps as a
slave server, since this is business and time is money).
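
(For the slave-server idea, a rough sketch of what the MySQL 3.23 replication
settings look like; the hostname and credentials are made up, and the
replication chapter of the manual has the real details:)

# master's my.cnf
[mysqld]
log-bin
server-id=1

# slave's my.cnf
[mysqld]
server-id=2
master-host=master.example.com
master-user=repl
master-password=secret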


HTH  HAND  Happy Holidays

mysql query,
:-D
- -- 
David T-G  * There is too much animal courage in 
(play) [EMAIL PROTECTED] * society and not sufficient moral courage.
(work) [EMAIL PROTECTED]  -- Mary Baker Eddy, Science and Health
http://www.justpickone.org/davidtg/Shpx gur Pbzzhavpngvbaf Qrprapl Npg!

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.0.7 (FreeBSD)

iD8DBQE9//n/Gb7uCXufRwARAoZFAJ0W6SxXCR8pXD0bT4aS54lqlp1WiACgmMJk
GbUxlpjbdzQ7kkEk8OOHgD4=
=YzNx
-END PGP SIGNATURE-

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-17 Thread Qunfeng Dong
I am not sure. Does anyone know any real examples of
MySQL handling a huge database and still performing
well? I have been having problems with the performance
of a MySQL LEFT JOIN recently. A big table (about 2.5
million records) left joined to a small table (about
350K records) generally takes 2 minutes to finish. I
checked the EXPLAIN output, and the primary key index
on the small table was indeed used for the join. My
system is Red Hat Linux 7.3 with 4 GB memory. I also
tried replacing the default my.cnf with my-huge.cnf.
It didn't help at all.
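
(For what it's worth, this is roughly the kind of check being described; the
table and column names are made up. EXPLAIN shows whether the probe into the
small table really uses its primary key, and refreshing the index statistics
sometimes changes the plan:)

EXPLAIN SELECT count(*)
FROM big_table b LEFT JOIN small_table s ON b.ref_id = s.id;

-- 'eq_ref' or 'ref' on the small table means its key is used for each row;
-- 'ALL' on either table means a full scan of that table.
ANALYZE TABLE big_table;
ANALYZE TABLE small_table;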

Another thing: on some Linux systems there is a file
size limit. MySQL seems to store each of its tables as
a single data file, so you need to choose a file
system without that limit.

Qunfeng Dong
--- B.G. Mahesh [EMAIL PROTECTED]
wrote:
 
 hi
 
 We are evaluating few databases for developing an
 application with
 following specs,
 
 1.OS not very important. Leaning towards Linux
 
 2.Currently the database has about 5 million
 records but it will grow
 to 120 million records.
 
 3.The tables will have billing information for a
 telecom company.
 Nothing complex.
 
 4.Back office staff will use the data in the
 database to create
 invoices to be sent to customers. This data is not
 connected to the
 live telecom system [e.g. switches etc]. We get the
 data every day
 from the telecom company.
 
 5.Staff may perform queries on the database to get
 reports like
 busiest hour of the day etc etc. I don't see too
 many concurrent
 users using the system, however the system needs to
 be stable.
 
 6.Need to create excel, pdf files from the data in
 the database. This
 I think has nothing to do with the database, however
 this is a requirement.
 
 7.Needless to say, good security is a must which
 will also be built
 into the front end application.
 
 We are considering the following databases,
 
 1.MYSQL
 2.Postgres
 3.Oracle
 4.MSQL
 
 If MYSQL or Postgres can do the job I prefer not to
 spend the money on
 Oracle/MSQL. However, if Oracle/MSQL are required
 for getting good
 reports and scalability, so be it. We will use
 Oracle/MSQL.
 
 Any pointers/advice is appreciated
 
 
 -- 
 -- 
 B.G. Mahesh
 mailto:[EMAIL PROTECTED]
 http://www.indiainfo.com/
 India's first ISO certified portal
 

-
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list
 archive)
 
 To request this thread, e-mail
 [EMAIL PROTECTED]
 To unsubscribe, e-mail

[EMAIL PROTECTED]
 Trouble unsubscribing? Try:
 http://lists.mysql.com/php/unsubscribe.php
 


__
Do you Yahoo!?
Yahoo! Mail Plus - Powerful. Affordable. Sign up now.
http://mailplus.yahoo.com

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Can MySQL handle 120 million records?

2002-12-17 Thread JamesD
I've read the limits are based on the file size your OS can handle,
the HDD size, memory, how fast your RISC
or CISC processors are... and how the stars are aligned...

I think 4 billion records will need some horses pulling...
8 x 2.4 XEON/2 at least. I've run simple queries on
80 million records and it takes minutes with a gig of RAM
and an 800MHz Pentium III. Each minute in front of a computer is
like dog years... 1 = 7


Jim

-Original Message-
From: David T-G [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 17, 2002 8:31 PM
To: mysql users
Cc: B.G. Mahesh
Subject: Re: Can MySQL handle 120 million records?


-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

BG --

...and then B.G. Mahesh said...
%
...
%
% Thanks for the quick response. My question may not be very valid..but is
% there a upper limit on the number of records it can handle?

I'll leave this to the experts.  I'm almost certain there must be, though
it's probably something like 4 billion or some such.


%
% Hardware wise, I guess if I have atleast 512MB RAM, P4 Intel motherboard I
should
% be fine. Isn't it?

There was actually a fair bit of discussion about this (including the
merits of various chips, their capabilities, and their speeds) on this
list a bit back, and it's come up other times before; see the list
archives for details.  Basically it comes down to

  - figure out where your bottleneck will lie
- RAM? disk I/O? CPU power? floating point math? 'net bandwidth?

  - build a system that caters to those bottlenecks

You haven't said how large the records are, how many people will need to
hit it at once, how much math will be involved, and what sort of peak
load it will have to handle (it might sit basically idle for three and a
half weeks and then be slammed generating invoices), but my loose guess
is that you should get a dual-PIII or dual-Athlon box with 512M - 1024M
of RAM and some good SCSI disks you can RAID.  It won't be a $500 desktop,
but it probably shouldn't cost you more than $2k to build a quite nice
box (but don't forget to build a second for redundancy, perhaps as a
slave server, since this is business and time is money).


HTH  HAND  Happy Holidays

mysql query,
:-D
- --
David T-G  * There is too much animal courage in
(play) [EMAIL PROTECTED] * society and not sufficient moral courage.
(work) [EMAIL PROTECTED]  -- Mary Baker Eddy, Science and Health
http://www.justpickone.org/davidtg/Shpx gur Pbzzhavpngvbaf Qrprapl Npg!

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.0.7 (FreeBSD)

iD8DBQE9//n/Gb7uCXufRwARAoZFAJ0W6SxXCR8pXD0bT4aS54lqlp1WiACgmMJk
GbUxlpjbdzQ7kkEk8OOHgD4=
=YzNx
-END PGP SIGNATURE-

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-17 Thread Jeremy Zawodny
On Tue, Dec 17, 2002 at 08:43:38PM -0800, Qunfeng Dong wrote:

 I am not sure. Does anyone know any real examples of mysql handling
 huge database and still perform well?

Our largest table is about 340 million rows now.  It grows by roughly
100,000 records per business day.  Performance is quite good so far.   
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 3.23.51: up 2 days, processed 110,127,603 queries (425/sec. avg)

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-17 Thread Muruganandam
Dear Jeremy,
Can you send me the hardware details of your DB engine server...

thanks.
muruganandam g
Jeremy Zawodny wrote:

 On Tue, Dec 17, 2002 at 08:43:38PM -0800, Qunfeng Dong wrote:

  I am not sure. Does anyone know any real examples of mysql handling
  huge database and still perform well?

 Our largest table is about 340 million rows now.  It grows by roughly
 100,000 records per business day.  Performance is quite good so far.
 --
 Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
 [EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

 MySQL 3.23.51: up 2 days, processed 110,127,603 queries (425/sec. avg)

 -
 Before posting, please check:
http://www.mysql.com/manual.php   (the manual)
http://lists.mysql.com/   (the list archive)

 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail 
[EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Can MySQL handle 120 million records?

2002-12-17 Thread Jeremy Zawodny
On Wed, Dec 18, 2002 at 11:55:32AM -0500, Muruganandam wrote:
 Dear Jeremy,
 Can you send me the hardware details of your DB engine server...

The hardware is about 2 years old now.  The master is a dual P3 866
with 2GB RAM and 6 36GB SCSI disks.  Some of the slaves are a little
faster (dual P3 1GHz).

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

MySQL 3.23.51: up 3 days, processed 111,804,443 queries (424/sec. avg)

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php