InnoDB: Error: tablespace size stored in header

2002-11-13 Thread Shakeel Sorathia
Hi, we're using Innodb here and I just upped the number of datafiles 
that innodb was using.  When I did a show table status I noticed that I 
didn't get all the space that I had added.  When I looked at the error 
log, I received the following error:

InnoDB: Error: tablespace size stored in header is 96 pages, but
InnoDB: the sum of data file sizes is 1152000 pages


Can anyone help me out here?  BTW, we recompiled innodb to use 64 KB 
pages rather than the default of 16 KB.  Here is the my.cnf we are using...

Thanks!

--shak

innodb_data_file_path = 
ibdata1:4000M;ibdata2:4000M;ibdata3:4000M;ibdata4:4000M;ibdata5:4000M;ibdata6:4000M;ibdata7:4000M;ibdata8:4000M;ibdata9:4000M;ibdata10:4000M;ibdata11:4000M;ibdata12:4000M;ibdata13:4000M;ibdata14:4000M;ibdata15:4000M;ibdata16:4000M;ibdata17:4000M;ibdata18:4000M
innodb_flush_log_at_trx_commit=0
set-variable = innodb_buffer_pool_size=2048M
innodb_data_home_dir = /opt/mysql/data/
innodb_log_group_home_dir = /opt/mysql/data/
innodb_log_arch_dir = /opt/mysql/data/
set-variable = innodb_log_files_in_group=3
set-variable = innodb_log_file_size=128M
set-variable = innodb_log_buffer_size=192M
innodb_log_archive=0
innodb_fast_shutdown=1
innodb_flush_method=nosync
set-variable = innodb_additional_mem_pool_size=128M
set-variable = innodb_file_io_threads=4
set-variable = innodb_lock_wait_timeout=50
set-variable = innodb_thread_concurrency=12


-
Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



Re: innodb bug

2002-06-27 Thread Shakeel Sorathia

Heikki,

Thanks for the patch.  As I'm going on vacation tomorrow, I'll give it a 
try next week and let you know if I find anything.

--shak

Heikki Tuuri wrote:

Hi!

It turned out that the bug indeed was connected with a 32-bit signed integer
wrap-over if the buffer pool on a 32-bit computer is bigger than 2 GB.

The following patch may fix the problem, but better test first if there are
also other similar wrap-overs which I did not notice.

Best regards,

Heikki
Innobase Oy

--- 1.4/innobase/include/buf0buf.ic Tue Dec  4 16:14:52 2001
+++ 1.5/innobase/include/buf0buf.ic Wed Jun 26 21:42:32 2002
@@ -209,7 +209,7 @@

  ut_ad((ulint)ptr >= (ulint)frame_zero);

- block = buf_pool_get_nth_block(buf_pool, (ptr - frame_zero)
+ block = buf_pool_get_nth_block(buf_pool, ((ulint)(ptr - frame_zero))
   >> UNIV_PAGE_SIZE_SHIFT);
  ut_a(block >= buf_pool->blocks);
  ut_a(block < buf_pool->blocks + buf_pool->max_size);
@@ -236,7 +236,7 @@

  ut_ad((ulint)ptr >= (ulint)frame_zero);

- block = buf_pool_get_nth_block(buf_pool, (ptr - frame_zero)
+ block = buf_pool_get_nth_block(buf_pool, ((ulint)(ptr - frame_zero))
   >> UNIV_PAGE_SIZE_SHIFT);
  ut_a(block >= buf_pool->blocks);
  ut_a(block < buf_pool->blocks + buf_pool->max_size);

- Original Message -
From: Heikki Tuuri [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, June 26, 2002 4:42 PM
Subject: Re: innodb bug


  

Shakeel,

this may be something with 32-bit unsigned integer / signed integer
arithmetic. I assume mysqld runs in the 32-bit mode?

Are you able to compile mysqld yourself? You could add the following to line
214 of mysql/innobase/include/buf0buf.ic:

...
if (block < buf_pool->blocks) {
    printf("Values %lu, %lu, %lu, %lu, %lu, %lu\n",
        (ulint)(ptr - frame_zero),
        (ulint)((ptr - frame_zero) >> UNIV_PAGE_SIZE_SHIFT),
        (ulint)block, (ulint)(buf_pool->blocks),
        (ulint)ptr, (ulint)frame_zero);
}
...

Regards,

Heikki
Innobase Oy

- Original Message -
From: Shakeel Sorathia [EMAIL PROTECTED]
Newsgroups: mailing.database.mysql
Sent: Wednesday, June 26, 2002 1:19 AM
Subject: innodb bug




I've been having a problem with innodb lately.  We just upgraded one of
our machines to have 4 GB of ram in it.  However, whenever I make the
innodb_buffer_pool_size greater than 2048M, it crashes with the
following in the error log.  It's 3.23.51 running on a Solaris 8
Ultrasparc II machine with 4 GB ram.  Is the limit 2gb of ram, or is
there something that I'm doing wrong?  Thanks for the help!

--shak

020625 12:57:14  mysqld started
InnoDB: Assertion failure in thread 1 in file ../include/buf0buf.ic line 214
InnoDB: We intentionally generate a memory trap.
InnoDB: Send a detailed bug report to [EMAIL PROTECTED]
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail

key_buffer_size=8388600
record_buffer=131072
sort_buffer=1048568
max_used_connections=0
max_connections=1024
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (record_buffer + sort_buffer)*max_connections = 1187831 K
bytes of memory
Hope that's ok, if not, decrease some variables in the equation

020625 12:57:54  mysqld ended

--
  Shakeel Sorathia
Systems Administrator
   (213) 739-5348







-- 
  Shakeel Sorathia
Systems Administrator
   (213) 739-5348







Re: innodb bug

2002-06-26 Thread Shakeel Sorathia

Ah, that makes sense.  So it could simply be a matter of 
telling the compiler that the type is unsigned.

--shak

Chuck Simmons wrote:

 Bert --

 Your problem is not the same as Shakeel's.  For you, the database is 
 saying that it couldn't allocate memory.  For Shakeel, it is saying 
 that an assert failed.  At about line 213, there is a right shift (X 
 >> Y) that is occurring.  The behavior of a right shift is different 
 depending on whether the value being shifted is signed or unsigned.  
 The value is supposed to be unsigned, but the programmers forgot to 
 tell the compiler.  This effectively means that mysql cannot allocate 
 more than 2GB of ram.

 
block = buf_pool_get_nth_block(buf_pool, (ptr - frame_zero)
    >> UNIV_PAGE_SIZE_SHIFT);
ut_a(block >= buf_pool->blocks);
 

 Chuck


 Bert VdB wrote:

 Hi,

 I'm sort of glad we're not the only one having this problem.
 Yesterday we had kind of the same error message on a Solaris 8 
 machine with
 512Mb of ram.
 Our buffer_pool_size was set to 250Mb, because the other 250Mb is 
 used by
 the orion-web-server.

 Today I will perform crash-tests on another machine and try to find 
 out the
 problem.

 Fyi, our error log:
 =
 /opt/nusphere/mysql-max-3.23.49-sun-solaris2.8-sparc/bin/mysqld: 
 ready for
 connections
 mysqld got signal 10;
 This could be because you hit a bug. It is also possible that this 
 binary
 or one of the libraries it was linked against is corrupt, improperly 
 built,
 or misconfigured. This error can also be caused by malfunctioning 
 hardware.
 We will try our best to scrape up some info that will hopefully help
 diagnose
 the problem, but since we have already crashed, something is definitely
 wrong
 and this may fail

 key_buffer_size=8388600
 record_buffer=131072
 sort_buffer=2097144
 max_used_connections=16
 max_connections=100
 threads_connected=3
 It is possible that mysqld could use up to key_buffer_size + 
 (record_buffer + sort_buffer)*max_connections = 225791 K
 bytes of memory
 Hope that's ok, if not, decrease some variables in the equation

 020625 15:39:58  mysqld restarted
 020625 15:40:34  InnoDB: Database was not shut down normally.
 InnoDB: Starting recovery from log files...
 InnoDB: Starting log scan based on checkpoint at
 InnoDB: log sequence number 0 272046313
 InnoDB: Fatal error: cannot allocate 2310548 bytes of
 InnoDB: memory with malloc! Total allocated memory
 InnoDB: by InnoDB 334012166 bytes. Operating system errno: 11
 InnoDB: Cannot continue operation!
 InnoDB: Check if you should increase the swap file or
 InnoDB: ulimits of your operating system.
 InnoDB: On FreeBSD check you have compiled the OS with
 InnoDB: a big enough maximum process size.
 020625 15:40:37  mysqld ended
 


 -Original Message-
 From: Shakeel Sorathia [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, June 25, 2002 21:01
 To: [EMAIL PROTECTED]
 Subject: innodb bug


 I've been having a problem with innodb lately.  We just upgraded one 
 of our machines to have 4 GB of ram in it.  However, whenever I make 
 the innodb_buffer_pool_size greater than 2048M, it crashes with the 
 following in the error log.  It's 3.23.51 running on a Solaris 8 
 Ultrasparc II machine with 4 GB ram.  Is the limit 2gb of ram, or is 
 there something that I'm doing wrong?  Thanks for the help!

 --shak

 020625 12:57:14  mysqld started
 InnoDB: Assertion failure in thread 1 in file ../include/buf0buf.ic 
 line 214
 InnoDB: We intentionally generate a memory trap.
 InnoDB: Send a detailed bug report to [EMAIL PROTECTED]
 mysqld got signal 11;
 This could be because you hit a bug. It is also possible that this 
 binary
 or one of the libraries it was linked against is corrupt, improperly 
 built,
 or misconfigured. This error can also be caused by malfunctioning 
 hardware.
 We will try our best to scrape up some info that will hopefully help 
 diagnose
 the problem, but since we have already crashed, something is 
 definitely wrong
 and this may fail

 key_buffer_size=8388600
 record_buffer=131072
 sort_buffer=1048568
 max_used_connections=0
 max_connections=1024
 threads_connected=0
 It is possible that mysqld could use up to
 key_buffer_size + (record_buffer + sort_buffer)*max_connections = 
 1187831 K
 bytes of memory
 Hope that's ok, if not, decrease some variables in the equation

 020625 12:57:54  mysqld ended

  









mysql binding to a port

2002-06-06 Thread Shakeel Sorathia

We have a situation that I was wondering if anyone had an answer to.  We 
have multiple mysql boxes being used in random order for load balancing 
and fault tolerance.  Basically the app chooses one of the machines at 
random, and if it can't get a connection, it tries another.  

Here is the situation that happens to us.  When we bring one mysql down, 
no big deal, any request that would have been destined for that machine 
goes to another machine.  This part works well.  The problem occurs when 
we start up mysql.  Mysql binds to the port, then it takes about 15 
seconds for innodb to start up and get ready to serve requests. 
The problem is that in those 15 seconds a few hundred connections have 
queued up to mysql.  When innodb is ready to start serving data, all of 
these queries hit the database and we have a backlog of requests to serve. 
It normally takes quite a bit of time before it is able to recover.  

My question is this.  Is there a way to have innodb get ready and do 
everything it needs to do to serve data before mysql binds to the port? 
That way, until the database is ready to serve requests, incoming clients 
cannot get a connection.

Thanks...

--shak

-- 
  Shakeel Sorathia
Systems Administrator
   (213) 739-5348







Re: CPU intensive query

2002-02-26 Thread Shakeel Sorathia
 time:

| 18823 | webs | localhost.localdomain | webs_aptrate | Query | 1 | Sending data | select count(*) from aptreviews, aptcomplexes where aptreviews.complex_id = aptcomplexes.complex_id |
| 18867 | webs | localhost.localdomain | webs_aptrate | Query | 1 | Sending data | select count(*) from aptreviews, aptcomplexes where aptreviews.complex_id = aptcomplexes.complex_id |

The tables that are used are somewhat large:

mysql select count(*) from aptreviews;
+--+
| count(*) |
+--+
|15263 |
+--+
1 row in set (0.00 sec)

mysql select count(*) from aptcomplexes;
+--+
| count(*) |
+--+
|35395 |
+--+
1 row in set (0.00 sec)

Any ideas what might be causing this?

Here's the version:
[root@s2 tauren]# mysql -V
mysql  Ver 11.15 Distrib 3.23.40, for pc-linux-gnu (i686)

--
Michael Bacarella  | 545 Eighth Ave #401
   | New York, NY 10018
Systems Analysis  Support | [EMAIL PROTECTED]
Managed Services   | 212 946-1038





-- 
  Shakeel Sorathia
Systems Administrator
   (626) 660-3502








Re: 3.23.47 compile problems with sun's forte compiler

2002-01-03 Thread Shakeel Sorathia

Thanks for the help on this, it seems to be working now.  One question 
though, wouldn't configure be able to take care of this?

--shak

Michael Widenius wrote:

Hi!

Sinisa == Sinisa Milivojevic [EMAIL PROTECTED] writes:


Sinisa Shakeel Sorathia writes:

I just downloaded the 3.23.47 source for mysql, and I tried to
compile it using Sun's Forte Compiler (6.2) however when trying to
build libmysql/hash.c I got the following error.


cut

-DPIC -o hash.o
hash.c, line 189: reference to static variable hash_key in inline extern function
hash.c, line 229: cannot recover from previous errors
cc: acomp failed for hash.c
*** Error code 1


I took a look at the file and noticed the inline byte* line.  I
compared that with the hash.c from the 3.23.44 build and noticed
that the .44 build didn't have the inline there, so I took it out
and it's gotten past that part.  Not sure if this is a bug in
configure, compile, code, or what.

Anyone know if this was the right thing to do, or if there is a fix
for it?

--shak

-- Shakeel Sorathia Systems Administrator (626) 660-3502


Sinisa #undef _FORTREC_ in config files and it should work.

Sinisa accidentally got this wrong.

The fix is to add -D_FORTREC_ to your CFLAGS when compiling MySQL.

If this doesn't work, please email me and I will try to help you fix
this.

Regards,
Monty


-- 
  Shakeel Sorathia
Systems Administrator
   (626) 660-3502








3.23.47 compile problems with sun's forte compiler

2002-01-02 Thread Shakeel Sorathia

I just downloaded the 3.23.47 source for mysql, and I tried to compile 
it using Sun's Forte Compiler (6.2)  however when trying to build 
libmysql/hash.c I got the following error.

/opt/SUNWspro/bin/cc -DDEFAULT_CHARSET_HOME=\/opt/mysql/3.23.44\ 
-DDATADIR=\/opt/mysql/3.23.44/var\ 
-DSHAREDIR=\/opt/mysql/3.23.44/share/mysql\ -DUNDEF_THREADS_HACK 
-DDONT_USE_RAID -I./../include -I../include -I./.. -I.. -I.. -O 
-DDBUG_OFF -Xa -dalign -fns -fsimple=2 -fsingle -ftrap=%none -nofstore 
-xbuiltin=%all -xlibmil -xO5 -xO4 -xtarget=ultra2 -xstrconst -mt 
-DHAVE_CURSES_H -I/opt/tmp/mysql-3.23.47/include -DHAVE_RWLOCK_T -c 
hash.c  -KPIC -DPIC -o hash.o
hash.c, line 189: reference to static variable hash_key in inline 
extern function
hash.c, line 229: cannot recover from previous errors
cc: acomp failed for hash.c
*** Error code 1


I took a look at the file and noticed the inline byte*  line.  I 
compared that with the hash.c from the 3.23.44 build and noticed that 
the .44 build didn't have the inline there, so I took it out and it's 
gotten past that part.  Not sure if this is a bug in configure, compile, 
code, or what.

Anyone know if this was the right thing to do, or if there is a fix for it?

--shak

-- 
  Shakeel Sorathia
Systems Administrator
   (626) 660-3502








Re: SELECT'ing only 1st matching row

2001-12-13 Thread Shakeel Sorathia

I believe what you want is to put a LIMIT clause at the end of your SQL 
query.

ie  SELECT ... LIMIT 1

That will return only one record that matches your query.
hope this helps...

--shak

Steve Osborne wrote:

Is there a way to SELECT only the first matching row of a query?  I would
like to allow the registration of identical products (unique serial numbers
/ owned by one user), without the user having to re-enter all their data.
I've set up a page that allows them to just enter their login info (username
and password) and the serial number of the product they want to register.
What I need to do is grab their product preferences to be duplicated in the
new record, however if they have more than one registered product, it throws
off my plan.  I only need one record to get the values that I need. (I know
that duplicating values is not proper database form, however I need to allow
the user to change their preferences on each owned product.)

Any advice,

Steve Osborne
Database Programmer
Chinook Multimedia Inc.
[EMAIL PROTECTED]




-- 
  Shakeel Sorathia
Systems Administrator
   (626) 660-3502








Re: Mysql in NFS

2001-12-12 Thread Shakeel Sorathia

For mysql, if your datafiles will not fit in ram, I would highly 
recommend not putting it on nfs.  Mysql doesn't have any data caching, 
so every query will have to go through the network to get the data.  If, 
however, you do have enough ram on the machine to store all the 
datafiles in memory, then we have not seen this problem.  But the other 
points that Matthew brought up about network connectivity still apply.

OT, Matthew, we've got oracle running over NFS on NetAPP filers and 
we've had extremely good performance.  Of course we have 100Meg or GigE 
between them, but the only problems we ever have is when we have queries 
that do full table scans.  Anyways just my couple pennies..

--shak

Matthew Darcy wrote:

I have done oracle on NFS and it is not really the best option due to NFS
locking.

ie a poor network or if the NFS server drops, or the NIS/NIS+ (assuming you
are using automount maps) dies this will hold your development/production up
no end. Also oracle's table locking (not sure if mysql has this) causes
problems over NFS.

 The only time I have seen it work ok was on a veritas cluster using the
 Oracle/NFS export as a failover, and it worked badly: the machine failed over
 quite a few times and picked up, but the machines had to be very powerful as
 there were tons of rollbacks and non-committed transactions, and non-bound
 variables over NFS were so slow.

I am sure you could do it but it is not wise.

 I have stored Oracle binaries on NFS so that clients could access oracle and
 manage it over NFS but never had good performance keeping the data on NFS.

Matt.


-Original Message-
From: Shen, Lei (CIT) [mailto:[EMAIL PROTECTED]]
Sent: 12 December 2001 14:28
To: [EMAIL PROTECTED]
Subject: Mysql in NFS



Hi! Does anyone have experience building a mysql database on a network file
system? And php? Can you give me some information? Thank you

-Original Message-
From: Marek Kustka [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, December 12, 2001 8:23 AM
To: [EMAIL PROTECTED]
Subject: Embedded MySQL server  the outside world


Hi folks,

does the embedded server tcp-listen to the outside world, i.e. could it
be used by another app, or perhaps be accessed by the same app
using ODBC?

OR

is MySQL C API the only way to control it?

Thanks, Marek




-- 
  Shakeel Sorathia
Systems Administrator
   (626) 660-3502








Re: Kill Thread

2001-12-10 Thread Shakeel Sorathia

try using mysqladmin with a grep command, or redirect its output to a 
file.  something like this should do it.

mysqladmin -u user -p processlist | grep delete > deleteprocess.out

then less the file and you will be able to scroll through the output.

hope this helps..

--shak

Karl J. Stubsjoen wrote:

Hello,

I need to kill a thread.  I've issued a command which has locked a table
(delete from table where id = 9) was the command and it is taking a
very very long time.

Now I'd like to kill that thread.  However, I can't read the thread ID
because the ID scrolls out of view in my little Win98 dos window when I
issue:  Show Processlist.

So, any suggestions?

We are running mysql on a linux server (ver 4.0???) and connect from a Win98
MySql emulation.

Thanks

Karl
www.excelbus.com
..opportunity knocking..




-- 
  Shakeel Sorathia
Systems Administrator
   (626) 660-3502








Re: Cluster?

2001-12-06 Thread Shakeel Sorathia

I can tell you how we solved this problem.  

we have a 24/7 op where we need to constantly pull data from mysql 
machines.  This data can be updated at any time by our tools, but the 
live servers will only be reading data.  So we have one machine which is 
connected to our tools box, this is where all of our updates go, then we 
use one way mysql replication to the 2 machines we have in the 
datacenter.  Our app picks one of these at random, if it can't get a 
connection, it will try the other one.  We find that we actually get 
pretty good load balancing on these machines.  We've also been able to 
turn off one machine to do upgrades and such, and found that we don't 
suffer any downtime.  btw, the machines that are in prod are Sun E250's 
but the machine that is connected to the tools is a linux box.  I don't 
see any problems doing this with linux hardware, we just prefer sun's 
for our prod db's.

hope this helps..

--shak

Barry Roomberg wrote:

I'm trying to determine the correct way of
dealing with a high availability situation,
which might also be a high performance (spread queries
across multiple systems) situation.

I've googled for MySQL cluster, and found one
project that seems to have been last touched a year ago.

1)  All data is readonly.  No write requirement except 
monthly load.  

2) We'd rather spend money on hardware and MySQL support
than Oracle or some other commercial database.  Hardware
and MySQL is scalable at a reasonable price, Oracle is not.

3) This will be a 24/7/365 environment, so there can't be
any single points of failure.

4) I already have the back end disk, so new purchase there
is not an option.  Same with network gear and fibre channels.
Systems may be purchased, but we'd prefer to reuse current until
we have to get more.

5) I can have dual net cards and dual fibre-channel cards
in each system, along with 2 net switches and 2 FC switches,
to allow full matrix connectivity.

6) I've already proven that the data can be served by MySQL
in a rapid fashion, now I need to prove it can be continuous.

1st Goal: Setup 2 systems that share data and allow
either to satisfy read requests.  Seems like a simple
replication to 2 box scenario, with application level
handling to go to the live box if 1 is down.  Are there
any preferred projects that handle the director portion
of this?  I assume I will have total data duplication on each
system, ie: they won't be reading the same disk.

2nd goal:  Setup a cluster of systems sharing the same
disk (it is a fibrechannel back end).  I ASSUME I can
read/only mount to a BUNCH of independent systems, which
will then serve queries.  Simple?  Seems so.  Can MySQL
open a database read-only of a truly read-only file system?
The actual performance the disk is capable of providing
is far more than MySQL reads for a query, so I should be
able to stack a few systems against it before it degrades.

There will not be a regular SQL connection to these systems,
there will be an application specific deamon handling requests
via network pipes, so I don't need to deal with client side 
issues (yet).

Feel free to point me in a FAQ/Doc direction, I love to read.

Thanks




-- 
  Shakeel Sorathia
Systems Administrator
   (626) 660-3502



