RE: problem with DBI connection interaction with sub-process

2003-06-09 Thread Robin Keech
You could try forking before making a connection to the database.

A forked child will share the same resources as its parent (I think), and the DBI could be
getting the child's exit signal and closing the connection.  However, any resources you
create after the fork will be your own.
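
Something like this is what I mean (untested sketch; the DSN and credentials
are made up):

    #!/usr/bin/perl -w
    use strict;
    use DBI;

    # Fork *before* connecting, so parent and child each own their own
    # connection and neither can close the other's.
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {                       # child
        my $dbh = DBI->connect("DBI:mysql:test", "user", "pass")
            or die $DBI::errstr;
        # ... child's work on its own connection ...
        $dbh->disconnect;
        exit 0;
    }

    # parent: this connection (and any temporary tables on it) is
    # untouched when the child exits
    my $dbh = DBI->connect("DBI:mysql:test", "user", "pass")
        or die $DBI::errstr;
    waitpid($pid, 0);

If you have to fork after connecting, DBI's InactiveDestroy attribute (set
$dbh->{InactiveDestroy} = 1 in the child) is the usual way to stop the child's
exit from closing the parent's handle.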

Hope it helps (bit of a guess really),

Robin Keech
Java Developer
Synectics Ltd


-Original Message-
From: Martin Waite [mailto:[EMAIL PROTECTED] 
Sent: 09 June 2003 12:10
To: MySQL List
Subject: problem with DBI connection interaction with sub-process


snip

What seems to happen is that when the child exits, the
DBI connection to the database is lost and automatically
reconnected (how?).   Obviously this blows away my temporary
tables.

Does anyone know how to avoid this ?

snip

regards,
Martin


 





RE: performance help required. (2G - 4G Ram upgrade)

2002-02-12 Thread Robin Keech

Hi,

We use 
Red Hat Linux release 7.1 (Seawolf)
Kernel 2.4.3 on a 2-processor i686

Only around 5000 of the rows match out of the 58,000,000.

As I understand it, we do have single indexes on each of the columns.

Given the replies below, I feel that mysql is not getting the optimisation
correct.

Do I need to analyse the key distribution or something to give MySQL more up
to date information?
If so, can I do that without locking the table / taking the database down?

When I use USE INDEX(log_id_idx) it still ignores me and does what it wants.
Is that right?
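
A sketch of both, assuming a connected DBI handle $dbh.  ANALYZE TABLE (in the
3.23 series, for MyISAM tables) refreshes the key-distribution statistics,
though it read-locks the table while it runs, so writes will queue; and USE
INDEX only narrows the optimiser's choice of indexes rather than forbidding a
table scan:

    # refresh the statistics the optimiser bases its choice on
    $dbh->do("ANALYZE TABLE e3_id_chain");

    # USE INDEX restricts which indexes are *considered*; the optimiser
    # may still pick a full scan.  (FORCE INDEX, which overrides the
    # cost estimate, only arrived in the 4.0 series.)
    my $sth = $dbh->prepare(q{
        SELECT e3_id, prev_e3_id, log_id
          FROM e3_id_chain USE INDEX (log_id_idx)
         WHERE prev_e3_id IS NOT NULL AND log_id IS NOT NULL
    });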

Thanks again for your help,

Robin

-Original Message-
From: Almar van Pel [mailto:[EMAIL PROTECTED]]
Sent: 05 February 2002 00:02
To: [EMAIL PROTECTED]; Robin Keech
Subject: RE: performance help required. (2G - 4G Ram upgrade)


Hi,

Your problem is that your query uses all of the rows in the table. That
means that the indexes on the table are incorrect.

You should try putting a single index on prev_e3_id and a single index on
the other table. It'll probably improve your performance a lot.

BTW. Schedule some time for maintenance periodically to optimize these heavy
tables.

Regards,

Almar van Pel

-Original Message-
From: Jeremy Zawodny [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, 5 February 2002 0:04
To: Robin Keech
CC: '[EMAIL PROTECTED]'
Subject: Re: performance help required. (2G - 4G Ram upgrade)


On Mon, Feb 04, 2002 at 05:41:46PM -, Robin Keech wrote:
 (Please reply directly, I am only on the digest list, thanks)

 I have an SQL statement that is taking far too long to run, around
 11 minutes (while locking inserts/updates).

 We have installed some more RAM, and I need help with my.cnf and the
 troublesome SQL.

 Can anyone suggest some config values to best utilise the extra RAM
 I have just fitted?  We have gone from 2G to 4G

On what operating system?

 The table in question is very large,
 ie
 e3_id_chain.MYD = 3.7G ~ 53 million records
 e3_id_chain.MYI  = 3.2G

 These are my current settings

 # The MySQL server
 [mysqld]
 port= 3306
 socket  = /tmp/mysql.sock
 skip-locking
 set-variable= key_buffer=512M
 set-variable= max_allowed_packet=2M
 set-variable= max_connections=500
 set-variable= table_cache=512
 set-variable= sort_buffer=2M
 set-variable= record_buffer=2M
 set-variable= thread_cache=8
 set-variable= wait_timeout=1800
 # Try number of CPU's*2 for thread_concurrency
 set-variable= thread_concurrency=4
 set-variable= myisam_sort_buffer_size=64M
 log-bin
 low-priority-updates

 When doing an SQL select it takes 11 minutes

 mysql> explain SELECT e3_id, prev_e3_id, log_id FROM e3_id_chain WHERE
 prev_e3_id IS NOT NULL AND log_id IS NOT NULL;

 +-------------+------+---------------------------+------+---------+------+----------+------------+
 | table       | type | possible_keys             | key  | key_len | ref  | rows     | Extra      |
 +-------------+------+---------------------------+------+---------+------+----------+------------+
 | e3_id_chain | ALL  | prev_e3_id_idx,log_id_idx | NULL | NULL    | NULL | 58179805 | where used |
 +-------------+------+---------------------------+------+---------+------+----------+------------+
 1 row in set (0.00 sec)

 But it's not using the index???

Correct.  And that key_buffer won't help much.

How many of the rows actually match your query?

 Can anyone explain this behaviour, and suggest a solution?

MySQL will decide not to use an index in cases when it believes there
is a faster way, such as a full table scan.  This is often the case
when a significant percentage of the rows are likely to match.

Jeremy
--
Jeremy D. Zawodny, [EMAIL PROTECTED]
Technical Yahoo - Yahoo Finance
Desk: (408) 349-7878   Fax: (408) 349-5454   Cell: (408) 685-5936

MySQL 3.23.41-max: up 0 days, processed 22,771,714 queries (385/sec. avg)








performance help required. (2G - 4G Ram upgrade)

2002-02-04 Thread Robin Keech

(Please reply directly, I am only on the digest list, thanks)

I have an SQL statement that is taking far too long to run, around 11
minutes (while locking inserts/updates).

We have installed some more RAM, and I need help with my.cnf and the
troublesome SQL.

Can anyone suggest some config values to best utilise the extra RAM I have
just fitted?
We have gone from 2G to 4G

The table in question is very large,
ie 
e3_id_chain.MYD = 3.7G ~ 53 million records
e3_id_chain.MYI  = 3.2G

These are my current settings

# The MySQL server
[mysqld]
port= 3306
socket  = /tmp/mysql.sock
skip-locking
set-variable= key_buffer=512M
set-variable= max_allowed_packet=2M
set-variable= max_connections=500
set-variable= table_cache=512
set-variable= sort_buffer=2M
set-variable= record_buffer=2M
set-variable= thread_cache=8
set-variable= wait_timeout=1800
# Try number of CPU's*2 for thread_concurrency
set-variable= thread_concurrency=4
set-variable= myisam_sort_buffer_size=64M
log-bin
low-priority-updates

When doing an SQL select it takes 11 minutes

mysql> explain SELECT e3_id, prev_e3_id, log_id FROM e3_id_chain WHERE
prev_e3_id IS NOT NULL AND log_id IS NOT NULL;
+-------------+------+---------------------------+------+---------+------+----------+------------+
| table       | type | possible_keys             | key  | key_len | ref  | rows     | Extra      |
+-------------+------+---------------------------+------+---------+------+----------+------------+
| e3_id_chain | ALL  | prev_e3_id_idx,log_id_idx | NULL | NULL    | NULL | 58179805 | where used |
+-------------+------+---------------------------+------+---------+------+----------+------------+
1 row in set (0.00 sec)

But it's not using the index???

mysql> desc e3_id_chain;
+------------+------------------+------+-----+---------+-------+
| Field      | Type             | Null | Key | Default | Extra |
+------------+------------------+------+-----+---------+-------+
| e3_id      | varchar(80)      |      | MUL |         |       |
| prev_e3_id | varchar(80)      | YES  | MUL | NULL    |       |
| log_id     | int(10) unsigned | YES  | MUL | NULL    |       |
+------------+------------------+------+-----+---------+-------+
3 rows in set (0.00 sec)

Even when trying to force it to use an index, it still looks like it doesn't:


mysql> explain SELECT e3_id, prev_e3_id, log_id FROM e3_id_chain USE INDEX
(log_id_idx) WHERE prev_e3_id IS NOT NULL AND log_id IS NOT NULL;
+-------------+------+---------------------------+------+---------+------+----------+------------+
| table       | type | possible_keys             | key  | key_len | ref  | rows     | Extra      |
+-------------+------+---------------------------+------+---------+------+----------+------------+
| e3_id_chain | ALL  | prev_e3_id_idx,log_id_idx | NULL | NULL    | NULL | 58179810 | where used |
+-------------+------+---------------------------+------+---------+------+----------+------------+
1 row in set (0.00 sec)


Can anyone explain this behaviour, and suggest a solution?


Thanks

Robin






RE: C API - mysql_store_result segfaulting

2001-10-29 Thread Robin Keech

Hi Carsten,

I have just had a similar experience, which Jorge from MySQL support helped
me sort out (successfully) - here is his advice to me.

  
  Hi!
  
  Which compiler / version are you using ?
  
  If it is any from 2.96 series, you'd be better off using any in the 2.95
  or 3 series, as 2.96 is still buggy.
  
  Another question, was the libmysqlclient you're using downloaded from
  us, or was it the one shipped with the Red Hat CD ?
  
  If it's the one from the Red Hat CD, then it's the same thing: a buggy
  compiler!
  
  Regards,
  
  Jorge


Robin.

-Original Message-
From: Carsten H. Pedersen [mailto:[EMAIL PROTECTED]]
Sent: 27 October 2001 23:07
To: mysql mailing list
Subject: C API - mysql_store_result segfaulting


I'm having some trouble moving a C application from
a test machine to production. The program, which uses
the small wrapper around mysql_query() shown below, runs 
flawlessly on the test machine (hours of runtime; tens 
of thousands if not a million+ queries).

On the production machine, the program will execute exactly
10 queries, then segfault. The 10 queries which are executed 
before the fault are a mix of INSERT, DELETE, and SELECTs 
which return both empty and non-empty sets. The last query 
is a SELECT which returns (well -- is *supposed* to return)
an empty set.

The problem is twofold: The call to mysql_field_count() 
returns 1 (should be 0) and upon reaching the 
mysql_store_result() function call, the program segfaults.

I have tried switching around the two statements (as per the
examples in the docs), to no avail.

Both machines run RH Linux 7.1, and use GCC 2.96-85
Test machine: 
- MySQL 3.23.39-1 RPM from mysql.com
- glibc-2.2-12
- kernel v. 2.2 (as included in RH 7.1)

Production machine:
- MySQL 3.23.43-1 RPM from mysql.com
- glibc-2.2.4-19
- updated kernel (2.4)

I've been trying all kinds of up- and downgrades of gcc
and the MySQL packages, different levels of compiler
optimizations, etc., but can't seem to get around this
problem. If anyone has any hint as to what is wrong, I
would appreciate hearing from you.

[code snippet below]

/ Carsten

#include <stdio.h>
#include <stdlib.h>
#include <mysql.h>

extern MYSQL *mysql;   /* global connection handle, set up elsewhere */

MYSQL_RES *mysql_query_e(char *query, char *message, int lineno) {
  MYSQL_RES *resultset;

  // Do the query
  if (mysql_query(mysql, query)) {
    printf("\n"
           "   ERROR: Error when querying database on line %d.\n"
           "   --\n%s\n\nQuery:\n %s\n\nError message:\n%d: %s\n",
           lineno, message, query, mysql_errno(mysql),
           mysql_error(mysql));
    fflush(stdout);
    exit(2);
  }

  // Check whether we should expect any data
  if (mysql_field_count(mysql)) {
    // On the 11th query...
    // ... this line prints 1 where it should say 0:
    printf("%d fields in result\n", mysql_field_count(mysql));
    // ... and the following line segfaults:
    resultset = mysql_store_result(mysql);
    if (NULL == resultset) {
      printf("Error: Could not get result for query on line %d\n"
             "--\nExpected %d columns\n"
             "%s\n\nQuery:\n %s\n\nError message:\n%d:\n%s\n",
             lineno, mysql_field_count(mysql),
             message, query, mysql_errno(mysql),
             mysql_error(mysql));
      fflush(stdout);
      exit(2);
    }
    // yes, there is data. Return the result to the caller
    return resultset;
  }
  // No data, no columns. That's fine
  return NULL;
}








RE: repost: Too many open files

2001-10-03 Thread Robin Keech

Thanks for all the replies. They have all helped me solve the actual
problem, which was fairly obvious once you know it, as always.

I was starting the mysql server from a prompt as user mysql, not root.  Only
root has the privileges to up the open file limit of a bash session.
This explained the setrlimit warning in the error log.  Once I started the
server as root then this warning went away, and I presume the open file
limit was raised to what was requested.

Again, thanks for all the help.

Robin

  -Original Message-
 From: Adams, Bill TQO [mailto:[EMAIL PROTECTED]] 
 Sent: 02 October 2001 19:08
 To:   Colin Faber
 Cc:   Robin Keech; [EMAIL PROTECTED]
 Subject:  Re: repost: Too many open files
 
 Colin Faber wrote:
 
  it sounds like you've run out of file descriptors, I suggest
  rebuilding your kernel to handle more.
 
  In a bsd kernel you can do this simply by upping the maximum number of
  users allowed to access the machine at any given time.
 
 Or in Linux (in my rc.local):
 
 echo "Set max files to 32768"
 echo 32768 > /proc/sys/fs/file-max
 
 --Bill
 






repost: Too many open files

2001-10-02 Thread Robin Keech

Hi,

Any help would be greatly appreciated as I have a production server that is
struggling.
(I only get the digest list, so could you copy me on any replies -
thankyou).

My error log show this...

011002  9:16:11  Error in accept: Too many open files
011002  9:36:43  /usr/local/mysql/libexec/mysqld: Shutdown Complete

011002 09:36:43  mysqld ended

011002 09:36:52  mysqld started
011002  9:36:52  Warning: setrlimit couldn't increase number of open files
to more than 1024
011002  9:36:52  Warning: Changed limits: max_connections: 500  table_cache:
257
/usr/local/mysql/libexec/mysqld: ready for connections


I am getting the too many open files error.


What does the setrlimit log line mean?
How many connections can I have before reaching 'too many connections'?
What can I do about it?

I have looked in docs.

open_files_limit 
   If this is not 0, then mysqld will use this value to reserve file
descriptors to use with setrlimit(). If this value is 0 then mysqld will
reserve max_connections*5 or max_connections + table_cache*2 (whichever is
larger) number of files. You should try increasing this if mysqld gives you
the error 'Too many open files'. 

mine is 0, so it should use quite a big number (max_connections*5 = 500*5 =
2500) - is this too big?
Should I set open_files_limit directly to something smaller?

I have ...

/proc/sys/fs/file-max = 8192
/proc/sys/fs/file-nr = 2097 1411 8192

my.cnf
# The MySQL server
[mysqld]
port= 3306
socket  = /tmp/mysql.sock
skip-locking
set-variable= key_buffer=384M
set-variable= max_allowed_packet=2M
set-variable= max_connections=500
set-variable= table_cache=512
set-variable= sort_buffer=2M
set-variable= record_buffer=2M
set-variable= thread_cache=8
set-variable= wait_timeout=1800
# Try number of CPU's*2 for thread_concurrency
set-variable= thread_concurrency=4
set-variable= myisam_sort_buffer_size=64M
log-bin
low-priority-updates
binlog-ignore-db=e3
server-id   = 1






Rolling over on sequence number?

2001-09-13 Thread Robin Keech

I have a requirement for a sequence-number table from which I can reserve a
block of sequences (so auto increment is not suitable).

This is my proposed table structure

create table SEQNO (
   SEQNO INT UNSIGNED NOT NULL DEFAULT 0,
   SESSION VARCHAR(10),
   LOCK_TIMEOUT TIMESTAMP);

My blocks will go up in 100s, using SQL like

update SEQNO set SEQNO = SEQNO + 100;

My question is, is there any way to get the database to roll over the int
value when it gets to 4,294,967,295?

In my tests it goes up to the max value and stays there.  I could do it
programmatically, but it would be so much nicer if the database rolled over
the value.

for example:

  4,294,967,200
+           100
---------------
              4

(4,294,967,300 is 4 more than 2^32 = 4,294,967,296, so the wrapped value is 4)


Any ideas?

I have read the manual, and the MySQL book, regarding sequences, but could not
find anything relevant.
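
One way to get the wrap is to make it explicit in the UPDATE itself (untested
sketch, assuming a connected DBI handle $dbh; 4294967296 is 2**32, the range
of INT UNSIGNED, and MySQL should evaluate the expression in 64-bit BIGINT
arithmetic rather than clipping at the column's maximum):

    # reserve a block of 100, wrapping at 2**32 by hand
    $dbh->do(q{UPDATE SEQNO SET SEQNO = (SEQNO + 100) % 4294967296});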

Thanks

Robin





Too many open files

2001-09-10 Thread Robin Keech

My Mysqld stopped accepting connections with

Error log:

010910 15:34:19  Error in accept: Too many open files
010910 15:49:02  /usr/local/mysql/libexec/mysqld: Normal shutdown

when trying to connect I got:

ERROR 2013: Lost connection to MySQL server during query


changed max_connections and restarted server
got error.log :

010910 15:49:04  /usr/local/mysql/libexec/mysqld: Shutdown Complete

010910 15:49:04  mysqld ended

010910 15:49:15  mysqld started
010910 15:49:15  Warning: setrlimit couldn't increase number of open files
to more than 1024
010910 15:49:15  Warning: Changed limits: max_connections: 300  table_cache:
357
/usr/local/mysql/libexec/mysqld: ready for connections


Documentation states:
open_files_limit 
If this is not 0, then mysqld will use this value to reserve file
descriptors to use with setrlimit(). If this value is 0 then mysqld will
reserve max_connections*5 or max_connections + table_cache*2 (whichever is
larger) number of files. You should try increasing this if mysqld gives you
the error 'Too many open files'. 

Can I increase setrlimit's limit to more than 1024, or is that the ceiling on
my OS/hardware?

(I thought about dropping the value of wait_timeout from 8 hours to 4 (or 2)
hours, as this would disconnect idle clients.  I have numerous apache
clients, all with many children hanging onto a connection.  Any thoughts
about this before I do it?)

Thanks,

Robin



Running Redhat Linux 7.1 
on intel
Mysql  version  3.23.37-log  

Some Variables:
| open_files_limit     | 0   |
| max_connections      | 300 |
| max_user_connections | 0   |
| max_tmp_tables       | 32  |





bin log question

2001-07-03 Thread Robin Keech

Using the old update logs, I could look inside them and use the contents to
help debug dynamic SQL statements.

How do I do the same with the binary log?
I have tried 
mysqlbinlog hostname.001
but get
ERROR: Could not read entry at offset 5 : Error in log format or
read error 

The documentation says:
mysqlbinlog hostname-bin.[0-9]* | mysql
However, I do not want to pipe the output to mysql, just to STDOUT so that I
can examine it.

Is there any way of doing this?

Other things I have tried

[mysql@data]$ mysqlbinlog --help 
Usage: mysqlbinlog [options] log-files
Options:
-s,--short-form just show the queries, no extra info
-o,--offset=N   skip the first N entries
-h,--host=serverget the binlog from server
-P,--port=port  use port to connect to the remote server
-u,--user=username  connect to the remote server as username
-p,--password=password  use this password to connect to remote server
-j,--position=N start reading the binlog at position N
-?,--help   this message

[mysql@data]$ mysqlbinlog -s host-bin.001 
ERROR: Could not read entry at offset 5 : Error in log format or read error

[mysql@data]$ mysqlbinlog -s -j 6 host-bin.001 
ERROR: failed on my_fseek()

[mysql@data]$ mysqlbinlog -j 10 host-bin.001 
ERROR: failed on my_fseek()

[mysql@data]$ mysqlbinlog --offset=100 host-bin.001 
ERROR: Could not read entry at offset 5 : Error in log format or read error

[mysql@data]$ mysqlbinlog --offset=100 -j 110 host-bin.001 
ERROR: failed on my_fseek()

Thanks in advance,

Robin Keech
www.dialogue.co.uk

Dialogue Communications (Estd. 1994) is a world leader in the development
and implementation of mobile data and internet messaging solutions. The
company's technology bridges the gap between the fixed and mobile worlds -
delivering carrier class platforms to both service providers and enterprise
customers around the globe.







RE: Having success with 20 million rows

2001-05-10 Thread Robin Keech

We have a large database with two tables of around 31.5 million rows each.
When I did an alter table, again to modify a column, it took 17 hours.

I noticed that the table was still readable and writes were blocked.
However, during the ALTER table MySQL repaired the table using a keysort.
This seemed to lock the table for reading as well.

As you can't know beforehand that MySQL will try a repair, don't count on
the table staying available.

Robin

-Original Message-
From: Paul DuBois [mailto:[EMAIL PROTECTED]]
Sent: 10 May 2001 05:00
To: Christian Jaeger; David J. Potter; [EMAIL PROTECTED]
Subject: Re: Having success with 20 million rows


At 11:00 PM +0200 5/9/01, Christian Jaeger wrote:
Was the table inaccessible during these 4 days? Or is mysql able to 
still write to the table while it's being altered?

Those are not mutually exclusive possibilities. :-)

According to the manual, reads can be done on the original table while
the new table is being constructed. Updates are stalled and then applied
to the new table after the alteration.

http://www.mysql.com/doc/A/L/ALTER_TABLE.html


Christian.

At 12:08 Uhr -0700 9.5.2001, David J. Potter wrote:
occur very fast.  We use both Windows and Linux.  Adding a column is the
only action that is slow (it took 4 days once to add a column)



-- 
Paul DuBois, [EMAIL PROTECTED]





RE: Problem using SELECT INTO DUMPFILE

2001-05-09 Thread Robin Keech

Use:
SELECT * INTO OUTFILE 'filename' ...
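
Something like this, via DBI (untested sketch; the path is made up, and note
that the file is written on the *server* host).  As far as I know, INTO
DUMPFILE only arrived in the 3.23 series, which would explain the syntax error
from your 3.22.32 server:

    $dbh->do(q{
        SELECT resume INTO OUTFILE '/tmp/resume.txt'
          FROM apps WHERE appid = 23
    });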

-Original Message-
From: Bill Moran [mailto:[EMAIL PROTECTED]]
Sent: 09 May 2001 00:44
To: [EMAIL PROTECTED]
Subject: Problem using SELECT INTO DUMPFILE


I get an error stating: you have an error in your SQL syntax near
'dumpfile '/data/file.doc''

The command I'm trying to execute is:
select resume from apps where appid=23 into dumpfile '/data/file.doc'
The server and client are on two different computers. 

I'm using client version 3.23.36 on machine redfs
Server version 3.22.32 on machine redsql

Do I need to upgrade the server before this will be possible? I
understand that SELECT INTO DUMPFILE must put the file on the local
machine, but does that mean when running the client on redfs, I'm trying
to put the file on redsql? (just thought of this, but it would be weird
to get that particular error if that were the case) That wouldn't work,
since there is no /data directory on redsql. If that's the problem, I
suppose I'll have to establish a NFS mount.

Any pointers are welcome. Please keep me in the CC box as I'm not
currently subscribed to this list.

TIA,
Bill





RE: tty beeps uncontrollably when hitting CR after mysqld

2001-05-03 Thread Robin Keech

It's just a quick thought, and may not be relevant at all, but ...

If you have a tail running on the mysql log file while something is working
with blobs (binary objects), sometimes the bell character turns up in the
output and causes your system to beep.  It worried me until I realised what
was happening.
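
If it is that, a quick filter keeps the tail usable (sketch; the log path is
made up):

    tail -f /var/log/mysql.log | perl -pe 's/\a//g'

(\a is the BEL character in a Perl regex.)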

May not be much help, sorry.

Robin

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
Sent: 02 May 2001 15:18
To: [EMAIL PROTECTED]
Subject: tty beeps uncontrollably when hitting CR after mysqld


Description:
This happens whether launching mysqld directly or via safe_mysqld.
Following the instructions carefully (with the & backgrounder at the
end), I get:

/usr/local/libexec/mysqld: ready for connections

but the prompt does not return. If you then hit ENTER on your keyboard,
it begins to beep uncontrollably (using Win2K Telnet). Quitting ALL
applications doesn't even fix it, and Restart will not work. I'm forced
to power down my PC or else annoy everyone else in the entire building!

Perhaps the daemon isn't backgrounding properly.
How-To-Repeat:

Fix:
Unknown. I'm hoping it doesn't present problems when I put it into
my startup script.

Submitter-Id:  submitter ID
Originator:Dave Kitabjian
Organization:
 
MySQL support: none 
Synopsis:  
Severity:  unknown
Priority:  
Category:  mysql
Class: sw-bug 
Release:   mysql-3.22.27 (Source distribution)

Environment:

System: FreeBSD ldap1.netcarrier.net 3.5-STABLE FreeBSD 3.5-STABLE #0: Thu
Jul  6 13:30:20 EDT 2000
[EMAIL PROTECTED]:/usr/src/sys/compile/LB1  i386


Some paths:  /usr/bin/perl /usr/bin/make /usr/bin/gcc /usr/bin/cc
GCC: gcc version 2.7.2.3
Compilation info: CC='gcc'  CFLAGS=''  CXX='gcc'  CXXFLAGS=''  LDFLAGS=''
Configure command: ./configure  --with-unix-socket-path=/var/tmp/mysql.sock
--with-low-memory --with-mit-threads=yes
Perl: This is perl, version 5.005_03 built for i386-freebsd





RE: Off Topic Perl Mysql modules install problem

2001-03-07 Thread Robin Keech

It's looking for the z compression library (zlib).  To fix it, do this:

1) perl Makefile.PL
2) cd mysql
3) vi Makefile
4) search for the LDLOADLIBS line and add -lz to the end
5) cd ..
6) make
7) make test
8) make install
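
Steps 3 and 4 can also be done with a one-liner, run inside the mysql
subdirectory (sketch; assumes the generated Makefile has an LDLOADLIBS = line):

    perl -i -pe 's/^(LDLOADLIBS\s*=.*)$/$1 -lz/' Makefile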


Robin

-Original Message-
From: Curtis Maurand [mailto:[EMAIL PROTECTED]]
Sent: 06 March 2001 16:55
To: MySQL List
Subject: Off Topic Perl Mysql modules install problem


Hello,
  I'm now lost.  I cannot figure out what the make file is looking for.
Anyone have any ideas?

LD_RUN_PATH="/usr/lib/mysql:/lib:/usr/lib:/usr/lib/gcc-lib/i386-redhat-linux/2.96"
cc -o ../blib/arch/auto/DBD/mysql/mysql.so -shared -L/usr/local/lib dbdimp.o
mysql.o -L/usr/lib/mysql -L/usr/lib/mysql -lmysqlclient -lm -lz
-L/usr/lib/gcc-lib/i386-redhat-linux/2.96 -lgcc
/usr/bin/ld: cannot find -lz
collect2: ld returned 1 exit status
make[1]: *** [../blib/arch/auto/DBD/mysql/mysql.so] Error 1
make[1]: Leaving directory `/home/curtis/Msql-Mysql-modules-1.2215/mysql'
make: *** [subdirs] Error 2






RE: Do index in MySQL is case insensitive?

2001-03-05 Thread Robin Keech

Hi,

Do you mean case insensitive when you do a LIKE?
A LIKE is case insensitive; to make it sensitive, do a LIKE BINARY 'abc',
or use CLIKE (from memory, might be wrong).
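
A sketch of the difference through DBI (untested; the table and column are
made up):

    # default comparison: matches 'abc', 'ABC', 'Abc', ...
    my $any_case = $dbh->selectall_arrayref(
        q{SELECT name FROM t WHERE name LIKE 'abc%'});

    # BINARY forces a byte-wise, case-sensitive comparison
    my $exact = $dbh->selectall_arrayref(
        q{SELECT name FROM t WHERE name LIKE BINARY 'abc%'});

Declaring the column itself with the BINARY attribute (e.g. VARCHAR(80) BINARY)
should make the index and all comparisons on it case sensitive.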

Robin

-Original Message-
From: Carfield Yim [mailto:[EMAIL PROTECTED]]
Sent: 02 March 2001 10:42
To: '[EMAIL PROTECTED]'
Subject: Do index in MySQL is case insensitive?


I have heard from somebody that MySQL indexes are case insensitive. If this is
true, how can I set them to be case sensitive?





RE: Slow Select count(*) - Second Post

2001-02-20 Thread Robin Keech

Thanks for your response,

What benefit would that give me?  Would a combined index be faster?  I have
the three columns indexed anyway, and do not really want to dedicate more of
my index file to a duplication; it is getting to the 1G mark already, and I
have a 2G limit for any one file.

I have some more information that may prove useful...

If I do a select count(*) with the date set to 2001-02-10 then the soc_date
index is used and the query runs really quickly

mysql> explain select count(*) from log where queue_id = 5 and soc_date =
'2001-02-10';
+-------+------+--------------------+--------------+---------+-------+-------+------------+
| table | type | possible_keys      | key          | key_len | ref   | rows  | Extra      |
+-------+------+--------------------+--------------+---------+-------+-------+------------+
| log   | ref  | soc_date_idx,q_idx | soc_date_idx | 4       | const | 42558 | where used |
+-------+------+--------------------+--------------+---------+-------+-------+------------+
1 row in set (0.00 sec)


However, if I do a select count(*) with the date set to 2001-02-11 then the
q_idx is used and the query runs very slowly.

mysql> explain select count(*) from log where queue_id = 5 and soc_date =
'2001-02-11';
+-------+------+--------------------+-------+---------+-------+-------+------------+
| table | type | possible_keys      | key   | key_len | ref   | rows  | Extra      |
+-------+------+--------------------+-------+---------+-------+-------+------------+
| log   | ref  | soc_date_idx,q_idx | q_idx | 5       | const | 58410 | where used |
+-------+------+--------------------+-------+---------+-------+-------+------------+
1 row in set (0.00 sec)


Here are the real life results:

mysql> select count(*) from log where queue_id = 5 and soc_date =
'2001-02-10';
+----------+
| count(*) |
+----------+
|       11 |
+----------+
1 row in set (0.89 sec)

mysql> select count(*) from log where queue_id = 5 and soc_date =
'2001-02-11';
+----------+
| count(*) |
+----------+
|       17 |
+----------+
1 row in set (1 min 52.85 sec)


Here are the results of the explains if the queue_id is taken out of the
where.  The rows estimate is fairly similar.

mysql> explain select count(*) from log where soc_date = '2001-02-11';
+-------+------+---------------+--------------+---------+-------+-------+-------------------------+
| table | type | possible_keys | key          | key_len | ref   | rows  | Extra                   |
+-------+------+---------------+--------------+---------+-------+-------+-------------------------+
| log   | ref  | soc_date_idx  | soc_date_idx | 4       | const | 60410 | where used; Using index |
+-------+------+---------------+--------------+---------+-------+-------+-------------------------+
1 row in set (0.00 sec)


mysql> explain select count(*) from log where soc_date = '2001-02-10';
+-------+------+---------------+--------------+---------+-------+-------+-------------------------+
| table | type | possible_keys | key          | key_len | ref   | rows  | Extra                   |
+-------+------+---------------+--------------+---------+-------+-------+-------------------------+
| log   | ref  | soc_date_idx  | soc_date_idx | 4       | const | 42564 | where used; Using index |
+-------+------+---------------+--------------+---------+-------+-------+-------------------------+
1 row in set (0.00 sec)


Is it just tipping the balance and making the optimiser choose a different
index?
If so, why does the select against the queue_index take so long?

Thanks for your help in advance,

Robin

-Original Message-
From: Patrick FICHE [mailto:[EMAIL PROTECTED]]
Sent: 20 February 2001 14:26
To: Robin Keech
Subject: RE: Slow Select count(*) - Second Post


Hi,

What do you think of creating a combined index containing, for example,
(soc_date, server_id, queue_id) in this order?
It depends of course on your application, as the index choice is usually one
of the most difficult tasks when designing a database.
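
A sketch of that index via DBI (untested; the index name is made up, and since
ALTER TABLE rebuilds the table, it is one for a maintenance window):

    $dbh->do(q{
        ALTER TABLE log
          ADD INDEX date_srv_q_idx (soc_date, server_id, queue_id)
    });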

Patrick






RE: Slow Select count(*) - Second Post

2001-02-20 Thread Robin Keech


Thanks for your replies.

Patrick: I will try the combined index, and see if it takes too much disk
space.  Most queries return within 2 seconds, which is acceptable for me.
Therefore I'm not too concerned about the speed increase in using a combined
index as such.

What I am concerned about is the fact the mysql is so slow when it chooses
the q_idx.  If Jarmo is correct then the q_idx is not a very good index and
the optimiser has got it wrong.  It has chosen an index based upon the
number of rows returned and not paid any attention to the 'quality' of the
index.  I'll try dropping the q_idx and see what that gives me.

Is there any way to re-write SQL to force the optimiser to use a certain
index?

Thanks again

Robin

-Original Message-
From: Jarmo Paavilainen [mailto:[EMAIL PROTECTED]]
Sent: 20 February 2001 15:50
To: Robin Keech
Subject: SV: Slow Select count(*) - Second Post


Hi,

I'm maybe just stupid, but here I go:

The question is, why does MySQL use "soc_date_idx" in the first query and
then "q_idx" in the second one?

It might be that "q_idx" (that is, the "queue_id" field) has a lot of identical
values in it and "soc_date_idx" does not, and thereby "soc_date_idx" is
better at limiting the query than "q_idx".

If you can, kill the "q_idx" index and try the queries again.

That's all I could think of; let me know if it helped.

// Jarmo

...
 Can anyone please explain the following?
...
 mysql select count(server_id) from log where server_id = 1 and queue_id =
1 and soc_date = '2001-02-18';
...
 mysql select count(server_id) from log where server_id = 1 and queue_id =
5 and soc_date = '2001-02-18';





RE: Slow Select count(*) - Second Post

2001-02-20 Thread Robin Keech

Thanks for all your help.

This has got the query time down to 5 seconds from 2 minutes!!

Much appreciated everyone.

Robin

-Original Message-
From: Tibor Simko [mailto:[EMAIL PROTECTED]]
Sent: 20 February 2001 16:46
To: Robin Keech
Cc: '[EMAIL PROTECTED]'
Subject: Re: Slow Select count(*) - Second Post


Hello

 Is there any way to re-write SQL to force the optimiser to use a certain
 index?

SELECT foo FROM bar USE INDEX (baz) WHERE ...

See the manual at http://www.mysql.com/doc/J/O/JOIN.html.
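
Applied to the slow query from this thread, that would be something like
(sketch, via DBI with a connected $dbh):

    my ($n) = $dbh->selectrow_array(q{
        SELECT COUNT(*) FROM log USE INDEX (soc_date_idx)
         WHERE queue_id = 5 AND soc_date = '2001-02-11'
    });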

cheers
-- 
TS





RE: Innobase + transactions & row level locks will be released in a few days in MySQL 3.23.3x

2001-02-19 Thread Robin Keech



On a related note, how mature is the Innobase code? Is it mostly new?
Or, like NuSphere's Gemini, is it older code which has been adapted to
fit in with MySQL's table handler interface?

Thanks!

Jeremy

Hi,

I am also very interested in finding out more about Innobase - code
maturity, company background, etc.
I have asked around colleagues, but haven't found out much.

Does anyone know of any English language sites where this information can be
found?

My company uses MySQL and the next project will need transactional support.
My bosses are not comfortable about using MySQL for financial type
transactions at the moment (fair enough), but I need some information to
encourage MySQL's continued use.

Also, any news on the Gemini project would be very welcome, ie progress,
how it compares/contrasts to Innobase.

What about BDB? It seems to me that the Berkeley stuff isn't as hardcore as
the other two.  Is it up to the job of coping with critical financial
transactions?

Any information would be gratefully received,

Robin 






Help with slow select count(*)

2001-02-19 Thread Robin Keech

Hi,

Can anyone please explain the following?

I do two select count(*)'s from a big table (21,000,000+ rows, over 6.5 Gb).
All columns in the where are indexed.  The first select is very quick, the
second very slow.

The ONLY difference is the value of one column variable (queue_id), which is
an INT UNSIGNED;

Here is the quick one .

mysql> select count(server_id) from log where server_id = 1 and queue_id = 1
and soc_date = '2001-02-18';
+----------+
| count(*) |
+----------+
|     1703 |
+----------+
1 row in set (2.05 sec)

And here is the slow one ..

mysql> select count(server_id) from log where server_id = 1 and queue_id = 5
and soc_date = '2001-02-18';
+----------+
| count(*) |
+----------+
|        5 |
+----------+
1 row in set (4 min 9.44 sec)

As you can see all columns are indexed:

mysql> show index from log;
+-------+------------+---------------+--------------+--------------+-----------+-------------+----------+--------+---------+
| Table | Non_unique | Key_name      | Seq_in_index | Column_name  | Collation | Cardinality | Sub_part | Packed | Comment |
+-------+------------+---------------+--------------+--------------+-----------+-------------+----------+--------+---------+
| log   |          1 | log_id_idx    |            1 | log_id       | A         |        NULL | NULL     | NULL   | NULL    |
| log   |          1 | interface_idx |            1 | interface_id | A         |     1836325 | NULL     | NULL   | NULL    |
| log   |          1 | soc_date_idx  |            1 | soc_date     | A         |         427 | NULL     | NULL   | NULL    |
| log   |          1 | server_idx    |            1 | server_id    | A         |          12 | NULL     | NULL   | NULL    |
| log   |          1 | q_idx         |            1 | queue_id     | A         |         185 | NULL     | NULL   | NULL    |
+-------+------------+---------------+--------------+--------------+-----------+-------------+----------+--------+---------+
5 rows in set (0.00 sec)

An explain between the two seems to indicate that the query that is quick is
trawling through more records than the slow one?! I'm confused as to what
the explain is trying to tell me.

mysql> explain select count(server_id) from log where server_id = 1 and
queue_id = 1 and soc_date = '2001-02-18';
+-------+------+-------------------------------+--------------+---------+-------+--------+------------+
| table | type | possible_keys                 | key          | key_len | ref   | rows   | Extra      |
+-------+------+-------------------------------+--------------+---------+-------+--------+------------+
| log   | ref  | soc_date_idx,server_idx,q_idx | soc_date_idx | 4       | const | 337190 | where used |
+-------+------+-------------------------------+--------------+---------+-------+--------+------------+
1 row in set (0.00 sec)

mysql> explain select count(server_id) from log where server_id = 1 and
queue_id = 5 and soc_date = '2001-02-18';
+-------+------+-------------------------------+-------+---------+-------+-------+------------+
| table | type | possible_keys                 | key   | key_len | ref   | rows  | Extra      |
+-------+------+-------------------------------+-------+---------+-------+-------+------------+
| log   | ref  | soc_date_idx,server_idx,q_idx | q_idx | 5       | const | 58437 | where used |
+-------+------+-------------------------------+-------+---------+-------+-------+------------+
1 row in set (35.08 sec)

mysql> show variables;
+--------------------------+---------------------------------------+
| Variable_name            | Value                                 |
+--------------------------+---------------------------------------+
| ansi_mode                | OFF
| back_log                 | 50
| basedir                  | /usr/local/mysql/
| bdb_cache_size           | 8388600
| bdb_home                 | /home/mysql/data/
| bdb_logdir               |
| bdb_tmpdir               | /home/mysql/data/tmp/
| character_set            | latin1
| character_sets           | latin1 dec8 dos german1 hp8 koi8_ru latin2 swe7 usa7 cp1251 danish hebrew win1251 estonia hungarian koi8_ukr
| connect_timeout          | 5
| concurrent_insert        | ON
| datadir                  | /home/mysql/data/
| delay_key_write          | ON
| delayed_insert_limit     | 100
| delayed_insert_timeout   | 300
| delayed_queue_size       | 1000
| join_buffer_size         | 131072
| flush                    | OFF
| flush_time               | 0
| init_file                |
| interactive_timeout      | 28800
| key_buffer_size          | 402649088
| language                 | /usr/local/mysql/share/mysql/english/
| locked_in_memory         | OFF
| log                      | OFF
| log_update               | ON
| log_bin                  | OFF
| log_slave_updates        | OFF
| long_query_time          | 10
| low_priority_updates     | OFF
| lower_case_table_names   | 0
| max_allowed_packet       | 1047552
| max_connections          | 100
| max_connect_errors       | 10
| max_delayed_threads      | 20
| max_heap_table_size      | 16777216
| max_join_size            | 4294967295
| max_sort_length

RE: Update: Status of MySQL 4.0/Innobase/trx & row level locks

2001-02-05 Thread Robin Keech

I'm confused.

Am I right in thinking that the NuSphere Gemini project and this announcement
are linked?  Is the Innobase code going to provide the transactional support
for MySQL v.4?  If not, what's the difference between this and NuSphere's
work?

My company will definitely need transactions soon, and the more information
I have, the more of a case I can make for sticking with MySQL.

Can someone please clarify the situation for me.

Thanks

Robin

-Original Message-
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: 02 February 2001 16:14
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Update: Status of MySQL 4.0/Innobase/trx & row level locks


Hi, readers of the mailing list!

I promised to report to the mailing list how the interfacing of MySQL
and the Innobase engine progresses. Innobase will provide transactions,
rollback, row level locking, and Oracle-style non-locking consistent read
to MySQL. It currently runs on Intel Linux and NT.
Regards,

--snip

Heikki Tuuri
Innobase Oy
Helsinki, Finland

