Re: Debugging mysql limits

2008-03-05 Thread Thufir
On Tue, 04 Mar 2008 08:18:08 -0500, Phil wrote:

 Just inheritance from an old design that has passed its limits.

Just checking :)

I was talking to someone about redundancy in a table, and he was like,
that's good though, because there are multiple (blah, blah, blah)... but
it does screw up some queries! When I asked what the primary key was
going to be for the new table(s), he mentioned that when the db was
initially designed they didn't know about primary keys! As if PKs
are a fad...

-Thufir





Re: Debugging mysql limits

2008-03-04 Thread Thufir
On Thu, 28 Feb 2008 11:19:40 -0500, Phil wrote:

 I have 50 plus tables lets call them A_USER, B_USER, C_USER etc which I
 daily refresh with updated (and sometimes new) data.
 
 I insert the data into a temporary table using LOAD DATA INFILE. This
 works great and is very fast.


May I ask why you have fifty-plus tables with, apparently, the same
schema? Why not have one table with an extra user column?
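
For illustration, a minimal sketch of what that consolidation might look
like; every name below is hypothetical, since the real schema isn't shown
in the thread:

-- One table replaces A_USER, B_USER, C_USER, ...; the extra source
-- column records what used to be encoded in the table name.
CREATE TABLE ALL_USER (
    source  CHAR(1)      NOT NULL,   -- 'A', 'B', 'C', ...
    user_id INT UNSIGNED NOT NULL,   -- assumed per-table key
    col1    INT          NOT NULL DEFAULT 0,
    PRIMARY KEY (source, user_id)    -- the old key, widened by source
) ENGINE=MyISAM;

-- Queries that used to pick a table now pick rows:
SELECT col1 FROM ALL_USER WHERE source = 'A';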



-Thufir





Re: Debugging mysql limits

2008-03-04 Thread Phil
Just inheritance from an old design that has passed its limits.

I actually have a development version which does just that, but there is a
lot of work to convert many PHP scripts and SQL statements to include the
new column. It's some way from live though, so the problem I outlined still
exists.

Phil

On Tue, Mar 4, 2008 at 4:03 AM, Thufir [EMAIL PROTECTED] wrote:

 On Thu, 28 Feb 2008 11:19:40 -0500, Phil wrote:

  I have 50-plus tables (let's call them A_USER, B_USER, C_USER, etc.) which
  I refresh daily with updated (and sometimes new) data.
 
  I insert the data into a temporary table using LOAD DATA INFILE. This
  works great and is very fast.


 May I ask why you have fifty-plus tables with, apparently, the same
 schema? Why not have one table with an extra user column?



 -Thufir





Re: Debugging mysql limits

2008-02-29 Thread Phil
Just a little more info on this.

I tried setting all of this up on a home server with, as far as I can see,
more or less identical specs, the exception being that it's a 64-bit Linux
build rather than 32-bit.

The same INSERT ... ON DUPLICATE KEY UPDATE takes 3 minutes.

I spent all day yesterday trying to figure out what limits are being hit
without success.

Would certainly appreciate any pointers to look at...

Phil

On Thu, Feb 28, 2008 at 11:19 AM, Phil [EMAIL PROTECTED] wrote:

 I'm trying to figure out which limits I'm hitting on some inserts.

 I have 50-plus tables (let's call them A_USER, B_USER, C_USER, etc.) which
 I refresh daily with updated (and sometimes new) data.

 I insert the data into a temporary table using LOAD DATA INFILE. This
 works great and is very fast.

 Then I do an

 INSERT INTO A_USER (SELECT col1, col2, col3, ..., col20, 0, 0, 0, ...
 FROM A_TEMP) ON DUPLICATE KEY UPDATE col1 = A_TEMP.col1, col2 = ...

 The sizes in the tables range from 500 entries up to 750,000.

 Two of them, in the 200,000 range, take 2-3 minutes to complete; the
 largest, at 750,000, takes over an hour.

 A sampling of my my.cnf file:

 old_passwords=1
 max_connections = 50
 max_user_connections = 50
 table_cache=2000
 open_files_limit=4000
 log-slow-queries = /var/log/mysql-slow.log
 long_query_time = 12
 log-queries-not-using-indexes
 thread_cache_size = 100
 query_cache_size = 64M
 key_buffer_size = 512M
 join_buffer_size = 24M
 sort_buffer_size = 64M
 read_buffer_size = 4M
 tmp_table_size = 64M
 max_heap_table_size = 64M

 There is 2GB of RAM in the server, which I would gladly increase if I knew
 I could tweak these settings to fix this.

 Any ideas what I should do to figure out what is causing it?

 Regards

 Phil




Debugging mysql limits

2008-02-28 Thread Phil
I'm trying to figure out which limits I'm hitting on some inserts.

I have 50-plus tables (let's call them A_USER, B_USER, C_USER, etc.) which I
refresh daily with updated (and sometimes new) data.

I insert the data into a temporary table using LOAD DATA INFILE. This works
great and is very fast.

Then I do an

INSERT INTO A_USER (SELECT col1, col2, col3, ..., col20, 0, 0, 0, ...
FROM A_TEMP) ON DUPLICATE KEY UPDATE col1 = A_TEMP.col1, col2 = ...
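
Spelled out runnably, that pattern is roughly the following sketch; the
real column list is elided above, so col1..col3 and the choice of col1 as
the unique key are assumptions:

-- Staging-table upsert: load into a scratch copy, then merge.
CREATE TABLE A_TEMP LIKE A_USER;
LOAD DATA INFILE '/tmp/a_user.dat' INTO TABLE A_TEMP;

INSERT INTO A_USER (col1, col2, col3)
SELECT col1, col2, col3 FROM A_TEMP
ON DUPLICATE KEY UPDATE
    col2 = A_TEMP.col2,   -- refresh existing rows in place
    col3 = A_TEMP.col3;

DROP TABLE A_TEMP;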

The sizes in the tables range from 500 entries up to 750,000.

Two of them, in the 200,000 range, take 2-3 minutes to complete; the
largest, at 750,000, takes over an hour.

A sampling of my my.cnf file:

old_passwords=1
max_connections = 50
max_user_connections = 50
table_cache=2000
open_files_limit=4000
log-slow-queries = /var/log/mysql-slow.log
long_query_time = 12
log-queries-not-using-indexes
thread_cache_size = 100
query_cache_size = 64M
key_buffer_size = 512M
join_buffer_size = 24M
sort_buffer_size = 64M
read_buffer_size = 4M
tmp_table_size = 64M
max_heap_table_size = 64M

There is 2GB of RAM in the server, which I would gladly increase if I knew
I could tweak these settings to fix this.

Any ideas what I should do to figure out what is causing it?

Regards

Phil
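
A note on that config: sort_buffer_size, join_buffer_size and
read_buffer_size are allocated per connection, so the values above can in
the worst case ask for far more than 2GB across 50 connections. A few
stock probes that can show where the hour goes while the statement runs
(SHOW GLOBAL STATUS is MySQL 5.0+ syntax; plain SHOW STATUS works earlier):

SHOW PROCESSLIST;                         -- what state is the INSERT in?
SHOW GLOBAL STATUS LIKE 'Key_%';          -- key cache pressure
SHOW GLOBAL STATUS LIKE 'Created_tmp%';   -- on-disk temporary tables
SHOW TABLE STATUS LIKE 'A_USER';          -- Index_length vs key_buffer_size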


Re: mysql limits

2007-02-07 Thread kalin mintchev


 Search speeds and CPU usage with MyISAM are quite good. I tried InnoDB, and
 insert speeds were far too slow because of its row locking versus MyISAM's
 table locking. Some people have been able to fine-tune InnoDB, but it
 requires even more RAM because InnoDB works best when the entire table fits
 into memory.

thanks...



 Mike








Re: mysql limits

2007-02-05 Thread kalin mintchev

thanks... my question was more like IF mysql can handle that amount of
records - about 100 million... and if it's just a question of cpu power
and memory?


 Hi,

 A limit for the table can be set when you create the table itself:
 the MAX_ROWS and AVG_ROW_LENGTH options together bound the table size
 (rows x average row length).

 MAX_ROWS limits the maximum number of rows in that table. AVG_ROW_LENGTH
 gives the expected length of a row; that length may be taken up by a
 single column or spread across all the columns.

 Thanks
 ViSolve DB Team.
 - Original Message -
 From: kalin mintchev [EMAIL PROTECTED]
 To: mysql@lists.mysql.com
 Sent: Monday, February 05, 2007 9:14 AM
 Subject: mysql limits


 hi all...

 i just wanted to ask here if somebody has experience in pushing the mysql
 limits... i might have a job that needs to have a table (or a few tables)
 holding about 100 million records. that's a lot of records... is there
 any limitation of some kind that wouldn't allow mysql to handle that kind
 of volume, or does it all depend on memory and cpu? and how are the searches
 - speed and otherwise - affected by such numbers?

 thanks













Re: mysql limits

2007-02-05 Thread ViSolve DB Team

Hi,

It can handle that. You can extend the file size as well; the file size
limit depends on the OS. Obviously, performance depends on both processor
speed and memory. Table optimization and indexing will improve performance.

Thanks
ViSolve DB Team
- Original Message - 
From: kalin mintchev [EMAIL PROTECTED]

To: ViSolve DB Team [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Monday, February 05, 2007 4:07 PM
Subject: Re: mysql limits




thanks... my question was more like IF mysql can handle that amount of
records - about 100 million... and if it's just a question of cpu power
and memory?



Hi,

A limit for the table can be set when you create the table itself:
the MAX_ROWS and AVG_ROW_LENGTH options together bound the table size
(rows x average row length).

MAX_ROWS limits the maximum number of rows in that table. AVG_ROW_LENGTH
gives the expected length of a row; that length may be taken up by a
single column or spread across all the columns.

Thanks
ViSolve DB Team.
- Original Message -
From: kalin mintchev [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Monday, February 05, 2007 9:14 AM
Subject: mysql limits



hi all...

i just wanted to ask here if somebody has experience in pushing the mysql
limits... i might have a job that needs to have a table (or a few tables)
holding about 100 million records. that's a lot of records... is there
any limitation of some kind that wouldn't allow mysql to handle that kind
of volume, or does it all depend on memory and cpu? and how are the searches
- speed and otherwise - affected by such numbers?

thanks




















Re: mysql limits

2007-02-05 Thread mos

At 09:44 PM 2/4/2007, kalin mintchev wrote:

hi all...

i just wanted to ask here if somebody has experience in pushing the mysql
limits... i might have a job that needs to have a table (or a few tables)
holding about 100 million records. that's a lot of records... is there
any limitation of some kind that wouldn't allow mysql to handle that kind
of volume, or does it all depend on memory and cpu? and how are the searches
- speed and otherwise - affected by such numbers?

thanks


Put as much memory in the machine as possible. Building indexes for a table
of that size will consume a lot of memory, and if you don't have enough,
the index build will be done on the hard disk, where it is 100x slower.
I've had 100M-row tables without too much problem. However, when I tried
500M rows, the indexes could not be built (it took days) because I had too
little RAM.
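
A common MyISAM tactic to keep the index build off the disk path is to
defer index maintenance during the load and give the rebuild a large sort
buffer; a sketch, with the table name and sizes purely illustrative:

SET SESSION myisam_sort_buffer_size = 512*1024*1024;
ALTER TABLE big_table DISABLE KEYS;   -- suspend non-unique index updates
LOAD DATA INFILE '/tmp/big.dat' INTO TABLE big_table;
ALTER TABLE big_table ENABLE KEYS;    -- rebuild those indexes by sorting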


Mike 





Re: mysql limits

2007-02-05 Thread kalin mintchev

 Put as much memory in the machine as possible. Building indexes for a table
 of that size will consume a lot of memory, and if you don't have enough,
 the index build will be done on the hard disk, where it is 100x slower.
 I've had 100M-row tables without too much problem. However, when I tried
 500M rows, the indexes could not be built (it took days) because I had too
 little RAM.

thanks... would you please be more specific about too little RAM? what
amount of memory is enough for the 500M rows? what about search speeds? cpu?
also, what kind of tables did you use?

thanks







 Mike









Re: mysql limits

2007-02-05 Thread mos

At 12:18 PM 2/5/2007, kalin mintchev wrote:


 Put as much memory in the machine as possible. Building indexes for a table
 of that size will consume a lot of memory, and if you don't have enough,
 the index build will be done on the hard disk, where it is 100x slower.
 I've had 100M-row tables without too much problem. However, when I tried
 500M rows, the indexes could not be built (it took days) because I had too
 little RAM.

thanks... would you please be more specific about too little RAM?


I had only 1GB on a Windows XP box. I was able to put it up to 3GB, and it
sped things up quite a bit.




what amount of memory is enough for the 500M rows?


You need enough memory to hold the entire index in memory.
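
Two stock statements make that check concrete (table name assumed):

SHOW TABLE STATUS LIKE 'big_table';      -- Index_length = index bytes on disk
SHOW VARIABLES LIKE 'key_buffer_size';   -- MyISAM key cache to compare against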


what about search speeds? cpu?
also what kind of tables did you use?


Search speeds and CPU usage with MyISAM are quite good. I tried InnoDB, and
insert speeds were far too slow because of its row locking versus MyISAM's
table locking. Some people have been able to fine-tune InnoDB, but it
requires even more RAM because InnoDB works best when the entire table fits
into memory.
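
For reference, the InnoDB knob usually meant here is the buffer pool; a
my.cnf sketch in the same style as the config earlier in the thread, with
purely illustrative sizes:

innodb_buffer_pool_size = 1G           # size toward the working set
innodb_flush_log_at_trx_commit = 2     # relax log flushing for faster inserts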


Mike




mysql limits

2007-02-04 Thread kalin mintchev
hi all...

i just wanted to ask here if somebody has experience in pushing the mysql
limits... i might have a job that needs to have a table (or a few tables)
holding about 100 million records. that's a lot of records... is there
any limitation of some kind that wouldn't allow mysql to handle that kind
of volume, or does it all depend on memory and cpu? and how are the searches
- speed and otherwise - affected by such numbers?

thanks





Re: mysql limits

2007-02-04 Thread ViSolve DB Team

Hi,

A limit for the table can be set when you create the table itself:
the MAX_ROWS and AVG_ROW_LENGTH options together bound the table size
(rows x average row length).

MAX_ROWS limits the maximum number of rows in that table. AVG_ROW_LENGTH
gives the expected length of a row; that length may be taken up by a
single column or spread across all the columns.
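
For instance, with illustrative numbers:

-- Size a MyISAM table for ~100M rows of ~200 bytes each.
CREATE TABLE big_table (
    id  INT UNSIGNED NOT NULL,
    val VARCHAR(200) NOT NULL,
    PRIMARY KEY (id)
) ENGINE=MyISAM MAX_ROWS=100000000 AVG_ROW_LENGTH=200;

-- Both options can also be changed later (this rebuilds the table):
ALTER TABLE big_table MAX_ROWS=200000000 AVG_ROW_LENGTH=200;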


Thanks
ViSolve DB Team.
- Original Message - 
From: kalin mintchev [EMAIL PROTECTED]

To: mysql@lists.mysql.com
Sent: Monday, February 05, 2007 9:14 AM
Subject: mysql limits



hi all...

i just wanted to ask here if somebody has experience in pushing the mysql
limits... i might have a job that needs to have a table (or a few tables)
holding about 100 million records. that's a lot of records... is there
any limitation of some kind that wouldn't allow mysql to handle that kind
of volume, or does it all depend on memory and cpu? and how are the searches
- speed and otherwise - affected by such numbers?

thanks










Re: MySQL limits.

2004-05-21 Thread Ken Menzel
Hi,
Signal 11 can also indicate hardware problems on BSD. Also, FreeBSD
might get you more answers quickly, as there are more of us running
FreeBSD with MySQL for some reason. We run FreeBSD with MySQL/LinuxThreads
on 4.9 and 5.2, and both work just fine.

Ken
- Original Message - 
From: RV Tec [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, May 18, 2004 9:28 AM
Subject: MySQL limits.


 Folks,

 I have a couple of questions that I could not find the answer to
 in the MySQL docs or list archives. Hope you guys can help me.

 We have a database with approximately 135 tables (MyISAM).
 Most of them are small, but we have 5 tables with 8.000.000
 records. And that number is set to increase by at least 1.000.000
 records per month (by the end of the year, the growth rate
 might surpass 2.000.000 records/month). So, today our database
 size is 6GB.

 The server handles about 35-40 concurrent connections. We have
 a lot of table locks, but that does not seem to be a problem.
 Most of the time it works really well.

 From time to time (2 weeks of uptime or so), we have to face a
 Signal 11 crash (which is pretty scary, since we have to run a
 myisamchk that takes us offline for at least 1 hour). We
 believe this signal 11 is related to the MySQL server load
 (since we have changed OSes and hardware -- RAM mostly).

 Our server is one P4 3GHz, 2GB RAM (400MHz), SCSI Ultra160
 36GB disks (database only) running on OpenBSD 3.5. We are
 aware that OpenBSD might not be the best OS for this
 application... at first, it was chosen for its security. Now
 we are looking (if that helps) at an OS with LinuxThreads
 (FreeBSD perhaps?).

 The fact is that we are running MySQL on a dedicated server
 that keeps the load between 0.5 and 1.5. CPU definitely is not
 a problem. The memory could be a problem... our key_buffer is
 set to 384M, according to the recommendations in my-huge.cnf.
 So, it seems we have a lot of free memory. We have already
 tried to increase key_buffer (along with the other settings),
 but it does not seem to hurt or to improve our performance
 (although the memory use increases).

 To track down this signal 11, we have just compiled MySQL with
 debug and returned to the original my-huge.cnf recommendations.
 Now it seems we are running on an overclocked 486 66MHz.

 Is there any way to prevent this signal 11 from happening, or
 is it a message that we have exceeded MySQL's capability?

 Is MySQL able to handle such load with no problems/turbulence
 at all? If so, what would be the best hardware/OS
 configuration?

 What is the largest DB known to the MySQL community?

 If needed, I can provide dmesg output, the MySQL error log, compile
 options, and some database statistics.

 Thanks a lot for your help!

 Best regards,
 RV Tec








MySQL limits.

2004-05-18 Thread RV Tec
Folks,

I have a couple of questions that I could not find the answer to
in the MySQL docs or list archives. Hope you guys can help me.

We have a database with approximately 135 tables (MyISAM).
Most of them are small, but we have 5 tables with 8.000.000
records. And that number is set to increase by at least 1.000.000
records per month (by the end of the year, the growth rate
might surpass 2.000.000 records/month). So, today our database
size is 6GB.

The server handles about 35-40 concurrent connections. We have
a lot of table locks, but that does not seem to be a problem.
Most of the time it works really well.

From time to time (2 weeks of uptime or so), we have to face a
Signal 11 crash (which is pretty scary, since we have to run a
myisamchk that takes us offline for at least 1 hour). We
believe this signal 11 is related to the MySQL server load
(since we have changed OSes and hardware -- RAM mostly).
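
As an aside, the offline myisamchk pass has in-server equivalents that let
mysqld stay up during the rebuild; REPAIR TABLE is a swap-in suggestion
here, not something from the original post (table name assumed):

CHECK TABLE big_table;    -- verify the MyISAM data and index files
REPAIR TABLE big_table;   -- rebuild them; locks this table, server stays up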

Our server is one P4 3GHz, 2GB RAM (400MHz), SCSI Ultra160
36GB disks (database only) running on OpenBSD 3.5. We are
aware that OpenBSD might not be the best OS for this
application... at first, it was chosen for its security. Now
we are looking (if that helps) at an OS with LinuxThreads
(FreeBSD perhaps?).

The fact is that we are running MySQL on a dedicated server
that keeps the load between 0.5 and 1.5. CPU definitely is not
a problem. The memory could be a problem... our key_buffer is
set to 384M, according to the recommendations in my-huge.cnf.
So, it seems we have a lot of free memory. We have already
tried to increase key_buffer (along with the other settings),
but it does not seem to hurt or to improve our performance
(although the memory use increases).
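
Whether that key_buffer is actually short can be read off two stock
counters:

SHOW VARIABLES LIKE 'key_buffer_size';
SHOW STATUS LIKE 'Key_read%';
-- Key_reads / Key_read_requests is the key cache miss rate; if it is
-- tiny (well under 1%), a bigger key_buffer will not help.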

To track down this signal 11, we have just compiled MySQL with
debug and returned to the original my-huge.cnf recommendations.
Now it seems we are running on an overclocked 486 66MHz.

Is there any way to prevent this signal 11 from happening, or
is it a message that we have exceeded MySQL's capability?

Is MySQL able to handle such load with no problems/turbulence
at all? If so, what would be the best hardware/OS
configuration?

What is the largest DB known to the MySQL community?

If needed, I can provide dmesg output, the MySQL error log, compile
options, and some database statistics.

Thanks a lot for your help!

Best regards,
RV Tec




Re: MySQL limits.

2004-05-18 Thread Tim Cutts
On 18 May 2004, at 2:28 pm, RV Tec wrote:
 Is MySQL able to handle such load with no problems/turbulence
 at all? If so, what would be the best hardware/OS configuration?
 What is the largest DB known to the MySQL community?
We regularly run databases with around 200 GB of data per instance, and 
up to 1000 simultaneous clients.  Admittedly on slightly beefier 
machines than yours - usually 4-way AlphaServers running Tru64.

You didn't say what version of MySQL you were using.
Tim


Re: MySQL limits.

2004-05-18 Thread RV Tec
Folks, Tim,

Oops! Forgot to mention that... we are running MySQL 4.0.18.

Thanks a lot!

Best regards,
RV Tec

On Tue, 18 May 2004, Tim Cutts wrote:


 On 18 May 2004, at 2:28 pm, RV Tec wrote:

 
  Is MySQL able to handle such load with no problems/turbulence
  at all? If so, what would be the best hardware/OS
  configuration?
 
  What is the largest DB known to the MySQL community?
 

 We regularly run databases with around 200 GB of data per instance, and
 up to 1000 simultaneous clients.  Admittedly on slightly beefier
 machines than yours - usually 4-way AlphaServers running Tru64.

 You didn't say what version of MySQL you were using?

 Tim






RE: MySQL limits.

2004-05-18 Thread Donny Simonton
Let's see if I can give you some ideas.

 -Original Message-
 From: RV Tec [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, May 18, 2004 8:28 AM
 To: [EMAIL PROTECTED]
 Subject: MySQL limits.
 
 We have a database with approximately 135 tables (MyISAM).
 Most of them are small, but we have 5 tables with 8.000.000
 records. And that number is set to increase by at least 1.000.000
 records per month (by the end of the year, the growth rate
 might surpass 2.000.000 records/month). So, today our database
 size is 6GB.

That's an average size for most applications.

 
 The server handles about 35-40 concurrent connections. We have
 a lot of table locks, but that does not seem to be a problem.
 Most of the time it works really well.

Table locks, in my opinion, are bad, especially with 35 concurrent
connections. On one of my servers we currently have 1498 threads running,
we are averaging 2044.431 queries per second, and have had 1 slow query for
the past month (I restarted MySQL on the wrong box by accident). But I would
still consider these numbers to be nothing compared to some others around
here.
 

 From time to time (2 weeks of uptime or so), we have to face a
 Signal 11 crash (which is pretty scary, since we have to run a
 myisamchk that takes us offline for at least 1 hour). We
 believe this signal 11 is related to the MySQL server load
 (since we have changed OSes and hardware -- RAM mostly).

What does it say in the MySQL error log when this happens? MySQL will
usually dump the reason out in the error log, and it's pretty easy to solve
after that. Have you considered using the binary version of MySQL instead
of compiling from source?

 
 Our server is one P4 3GHz, 2GB RAM (400MHz), SCSI Ultra160
 36GB disks (database only) running on OpenBSD 3.5. We are
 aware that OpenBSD might not be the best OS for this
 application... at first, it was chosen for its security. Now
 we are looking (if that helps) at an OS with LinuxThreads
 (FreeBSD perhaps?).

Sorry, can't help you with BSD.  Linux for me all of the way.

 
 The fact is that we are running MySQL on a dedicated server
 that keeps the load between 0.5 and 1.5. CPU definitely is not
 a problem. The memory could be a problem... our key_buffer is
 set to 384M, according to the recommendations in my-huge.cnf.
 So, it seems we have a lot of free memory. We have already
 tried to increase key_buffer (along with the other settings),
 but it does not seem to hurt or to improve our performance
 (although the memory use increases).

384M for key_buffer is probably fine with 2 gigs of memory. Some will say
that you can go up to 1/2 of the memory, but I like to stay around 400M
myself. It really varies based on what you are doing. We had to do a
lot of testing of our application to find the right number.


 
 To track down this signal 11, we have just compiled MySQL with
 debug and returned to the original my-huge.cnf recommendations.
 Now it seems we are running on an overclocked 486 66MHz.
That's what debug does.  Use the binary, that's my recommendation.

 
 Is there any way to prevent this signal 11 from happening, or
 is it a message that we have exceeded MySQL's capability?

Exceeded MySQL's capability?  I don't think you have scratched the surface
yet.  Error messages are just that, an error of some type.  Without knowing
the version of MySQL you are running, it's even harder to know.


 
 Is MySQL able to handle such load with no problems/turbulence
 at all? If so, what would be the best hardware/OS
 configuration?

For me, I buy dual-processor Xeons with hyperthreading, 2 or 4 gigs of
memory, Fedora Linux, and an RPM install of MySQL 4.1.1 (4.1.2 is getting
close!), plus Apache 2.x and PHP. I install Apache and PHP on all of our
servers no matter what, because you never know when you need them. I know
many people will tell you to buy Opterons; we just haven't bought one yet,
since our vendor of choice doesn't offer them.


 
 What is the largest DB known to the MySQL community?

I've heard that Cox Communications is fairly large, at least according to
this:
http://www.mysql.com/news-and-events/press-release/release_2003_21.html

It says theirs is about 600 gigs.  But I am sure there are larger ones
around.

On one server we have about 170 gigs right now of databases.


Donny

 
 If needed, I can provide dmesg output, the MySQL error log, compile
 options, and some database statistics.
 
 Thanks a lot for your help!
 
 Best regards,
 RV Tec
 
 







MySQL Limits... stretched!

2003-03-12 Thread Ahmed S K Anis
Hi,
I need to set a variable limit on a MySQL table's file size (average row
length * number of rows). When we insert data into the table using JDBC,
I should get a unique JDBC exception (so that I can trigger an archive).
Is this possible in MySQL?

I notice that during creation of the table I can give such options, but I
need to change them too often.
Please help me here.

Anis
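
One way to get what's being asked, assuming a MyISAM table: the MAX_ROWS
and AVG_ROW_LENGTH options discussed in the threads above can be changed
after creation with ALTER TABLE, and once the resulting bound is hit,
inserts fail with error 1114 (ER_RECORD_FILE_FULL, "The table is full"),
which the JDBC driver surfaces as an SQLException. Names and numbers here
are illustrative:

-- Re-bound the table whenever needed; this can be run repeatedly.
ALTER TABLE archive_candidate MAX_ROWS=1000000 AVG_ROW_LENGTH=128;
-- A later INSERT past the bound fails with error code 1114, catchable
-- on the JDBC side via SQLException.getErrorCode().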


 






mysql - limits?

2001-09-25 Thread Cael Mahold

Hi,
How can I check the performance of my MySQL db?

Would a MySQL DB work well on an offline machine to store a
huge amount of data (up to 2.000.000 measurements) and generate
time-controlled output?

I am a newbie at using MySQL with that big an amount of data, so
it would be nice if one of you could tell me the major problems I
could run into, things I have to look out for, or the hardware
required to handle this.
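
For scale, 2.000.000 rows is modest for MySQL; the main thing to watch is
an index on whatever column drives the time-controlled output. A sketch
with assumed names:

-- Hypothetical measurement table; the index keeps time-range scans cheap.
CREATE TABLE measurement (
    taken_at DATETIME NOT NULL,
    value    DOUBLE   NOT NULL,
    INDEX (taken_at)
);

-- Time-window query for the generated output; EXPLAIN should show
-- the taken_at index being used.
SELECT taken_at, value FROM measurement
WHERE taken_at BETWEEN '2001-09-24 00:00:00' AND '2001-09-25 00:00:00';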

thanx in advance
bye
Rüdi
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 [EMAIL PROTECTED]  /  [EMAIL PROTECTED]
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
