problem with BDB table, error -30996

2002-02-22 Thread Geoffrey Soh

Hi,

Has anyone got the following error on a BDB table :

read_const: Got error -30996 when reading table

We did some load testing on a particular BDB table and got this when we
tried to run 20 concurrent queries on the table.  The query in question
works fine when the database instance is not loaded.

Does anyone have a list of BDB error codes handy?  I've searched the web
and the mailing list archives, and it seems no one ever responded to the
same question about what this error means in previous postings :(
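For reference, the negative error numbers that BDB tables pass up to MySQL
come straight from Berkeley DB's db.h.  A minimal lookup sketch, assuming
the Berkeley DB 3.2.x headers bundled with MySQL-Max of this era (the
numbering shifted between Berkeley DB releases, so verify against the db.h
your binary was actually built with):

```python
# Error codes as #defined in Berkeley DB 3.2.x db.h (an assumption --
# confirm against the db.h your mysqld was compiled with).
BDB_ERRORS = {
    -30999: "DB_INCOMPLETE",       # cache flush unable to complete
    -30998: "DB_KEYEMPTY",         # key/data deleted or never created
    -30997: "DB_KEYEXIST",         # the key/data pair already exists
    -30996: "DB_LOCK_DEADLOCK",    # deadlock detected; operation rolled back
    -30995: "DB_LOCK_NOTGRANTED",  # lock unavailable
    -30991: "DB_NOTFOUND",         # key/data pair not found
    -30988: "DB_RUNRECOVERY",      # panic: run database recovery
}

def bdb_strerror(code):
    """Best-effort symbolic name for a BDB error number."""
    return BDB_ERRORS.get(code, "unknown (check db.h)")

print(bdb_strerror(-30996))
```

If that mapping holds, -30996 would be DB_LOCK_DEADLOCK, which fits the
symptom described: the error only appears under 20 concurrent queries,
which is exactly when deadlocks start to surface.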

Any help appreciated.  Thanks!

Cheers,
Geoffrey
__
Geoffrey Soh, Software Architect
Ufinity - http://www.ufinity.com
Leading Enterprise Access Management Software!
9 Scotts Road, Pacific Plaza, #06-01, Singapore 228210
Tel   : +65 830-0341
Fax  : +65 737-0213
__


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




replication for BDB tables

2002-01-25 Thread Geoffrey Soh

Hi,

With all the recent talk about replication, I'm just wondering whether
anyone else out there has experience replicating BDB tables?

My experience so far has been that the slave instance of MySQL has been
crashing once in a while, when applying perfectly legitimate DELETE queries
on the replicated BDB tables.

MySQL then restarts after the crash, but not without spewing various BDB
log sequence errors into the log while trying to recover.

On restarting, the same query causes an error on the affected table again,
and the only way to get things going again is to convert BDB -> MyISAM ->
BDB, then SLAVE START.

This of course does not bode well for a production system where the slave
acts as a hot backup, just in case :)

Just sharing my experience so far :)  Anyone else got anything to share?

Some other questions :

1. Does anyone know how BDB recovery behaves in such cases, i.e. table
crashes, log sequence errors, etc.?

2. What happens to BDB tables on shutdown/startup, i.e. is the transaction
log flushed?

Any comments welcome!  Thanks.

Cheers,
Geoffrey






bdb table crashes?

2002-01-21 Thread Geoffrey Soh
, the values shown above may be invalid

 The manual page at http://www.mysql.com/doc/C/r/Crashing.html contains
 information that should help you find out what is causing the crash

 Number of processes running now: 0
 020121 16:04:41  mysqld restarted
 /home/mysql/mysql-lite/libexec/mysqld: ready for connections


Cheers,
Geoffrey






RE: Altering InnoDB tables

2002-01-15 Thread Geoffrey Soh

Hi,

I also experienced the same problem converting a table with over 2 million
rows from MyISAM to InnoDB.  It took 3 days!  I had highlighted the problem
to Heikki (thanks, Heikki, for the earlier response :)).

In my case there was no UNIQUE KEY on the table, just an index on one of
the varchars.  However, common to the schemas in both your email and mine
was a primary key that happened to be an auto_increment :

SubscriberID int(11) NOT NULL auto_increment,
PRIMARY KEY (SubscriberID),

msgid bigint(20) unsigned NOT NULL auto_increment,
PRIMARY KEY (msgid)

Could this be a problem?  Anyway, to get around this long conversion
process, I performed the conversion on a table on a slave of the master
MySQL instance, then let replication keep the new InnoDB table up to date.
On the next scheduled downtime of that particular service, I switched the
replicated InnoDB table over to the master and moved the original MyISAM
table to the slave.  I'm happy to report that I saw the same performance
improvements with InnoDB and its row-level locking.  Needless to say, the
users of the service are a lot happier :)

Next thing would be to work on another table, but this one is about 100+ GB
so it might take a while :)

Any comments?

Cheers,
Geoffrey



 -Original Message-
 From: John Kemp [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, January 15, 2002 11:58 PM
 To: Heikki Tuuri
 Cc: [EMAIL PROTECTED]
 Subject: Re: Altering InnoDB tables


 Heikki,

 I didn't configure the log that big, but the buffer pool is using 1GB of
 memory, and yes, the UNIQUE index is definitely causing a lot of disk I/O!

 I will be very happy to see your change, and thanks for (as usual) being
 so quick to make changes to an already very welcome addition to MySQL.

 I'll add this - we're running several tables with 5-10 million rows, and
 one logging table with 56 million rows, on a web-based system used by 30
 people at our company to edit our newsletters (which are stored in the
 database), plus almost 1 million email subscribers who receive 3
 database-driven emails every week and use our similarly data-driven
 website.  So MySQL is handling a lot of roughly simultaneous requests.
 Row-level locking has improved performance massively and made my internal
 and external customers happier, which makes my day better (fewer
 complaints!)

 Thanks again,

 John Kemp
 Director, Software Development
 Streetmail

 Heikki Tuuri wrote:

  John,
 
  did you configure innodb_buffer_pool_size and the InnoDB log files to be
  big?  Look at the online manual for the recommended sizes.

  If the table is very big, then UNIQUE secondary keys will cause a lot of
  disk i/o, because a disk read is required to check the constraint for each
  row, and the insert buffer cannot be used.

  Hmm... a fix would be to disable UNIQUE checking during the conversion.  I
  could add an option where you can switch the UNIQUE checking off in my.cnf.
  That option would be used during big conversion procedures, if you already
  know that the data satisfies the constraint.

  Ok, I will introduce the my.cnf option

  innodb_no_unique_checks

  in 3.23.4x.  Thank you for bringing this up :).
 
  About Robert Ricci's question: SHOW TABLE STATUS FROM ... reports FOREIGN
  KEY constraints in the table comment section.
 
  Best regards,
 
  Heikki Tuuri
  Innobase Oy
  ---
  Order technical MySQL/InnoDB support at https://order.mysql.com/
  See http://www.innodb.com for the online manual and latest news on InnoDB
 
 
 
  John Kemp wrote in message ...
 
 I am also in the process of altering tables from MyISAM to INNODB. I
 have one big problem - a very large table ( 5 million rows) with 5
 indices/indexes on it, including a couple that are UNIQUE but with NULL
 values allowed. Creating the last of these indices is taking an absurd
 amount of time on my test system, which makes it impossible for me to
 recommend this change for production as our systems would be down for
 many days.
 
 Here are the steps I took (note that after each failure, I recreated the
 original data structure)
 
 1) I tried altering the table directly to be Innodb. I stopped this
 after 4 days, when all it said in the process list was 'copying to tmp
 table' - after 4 days!
 2) Then I tried mysqldumping the table structure, changing MyISAM ->
 InnoDB, dropping the table, and running the mysqldump output back into the
 database.  The INSERTs ran horribly slowly - something like 50,000
 inserts in 4 hours.  That extrapolates to a query that would run for
 several weeks on 5 million rows, so I stopped it again.
 3) Then I tried

RE: Redhat 7.2 Linux Maximum Database/Table Size

2002-01-14 Thread Geoffrey Soh

 On Fri, Jan 11, 2002 at 08:51:33AM +0100, Dr. Frank Ullrich wrote:
  Benjamin,
  can you also grow MyISAM tables to such sizes?

 You can.

I understand that the RAID option can help break the 2GB/4GB barrier, esp.
on Linux machines.

But how do you surpass the Max_data_length restriction of 4294967295 bytes
on a RAIDed table?  Do you increase MAX_ROWS on such a table?  If so,
would this affect the performance of a large table, e.g. one above 50 GB?

Without changing MAX_ROWS, it seems that MySQL will still restrict the
table size to 4GB, even with RAID_CHUNKS and RAID_CHUNKSIZE set to e.g. 50
and 256?
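The 4294967295 figure is just the limit of a 4-byte data-file pointer.  A
worked check of how MyISAM's pointer size bounds table size (standard
MyISAM behaviour: the table can never hold more data than the pointer can
address, however many RAID chunk files the data is spread over; a large
MAX_ROWS at CREATE TABLE time makes MySQL pick a wider pointer):

```python
# MyISAM addresses row data through a data-file pointer; a table can hold
# at most 2**(8 * pointer_bytes) - 1 bytes of data, regardless of how the
# underlying file is split into RAID chunks.
def myisam_max_data_length(pointer_bytes):
    return 2 ** (8 * pointer_bytes) - 1

for n in (4, 5, 6):
    print(n, myisam_max_data_length(n))
# 4-byte pointer (the default of this era): 4294967295 bytes, i.e. ~4 GB.
# A 6-byte pointer allows ~256 TB.
```

So, under that reading, RAID_CHUNKS/RAID_CHUNKSIZE only get the file sizes
under the OS limit; raising the pointer width via MAX_ROWS is what would
lift the 4 GB Max_data_length itself.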

Anyone out there tweaked these settings before and what was the outcome?
Thanks.

Cheers,
Geoffrey









RE: Redhat 7.2 Linux Maximum Database/Table Size

2002-01-14 Thread Geoffrey Soh

Hi,

Sorry for being unclear :)  I was talking about the --with-raid compile-time
option in MySQL that lets you create tables with the RAID_TYPE, RAID_CHUNKS
and RAID_CHUNKSIZE options, allowing a table to span multiple data files,
each file staying below the OS size limit.  Thanks for the response.

Cheers,
Geoffrey



 -Original Message-
 From: Benjamin Arai [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, January 15, 2002 1:52 AM
 To: Geoffrey Soh
 Cc: [EMAIL PROTECTED]
 Subject: RE: Redhat 7.2 Linux Maximum Database/Table Size


 You don't understand.  You need to use an operating system with a
 filesystem that lifts the 2 GB limit.  In every Linux distribution I have
 used, if the OS has lifted the limit then the programs are usually fixed
 to use the new file size capabilities.

 RAID doesn't help with the limit at all, because the limit imposed by the
 OS is a file size limit, not a partition or drive limit.

 Increase MAX_ROWS as you see appropriate, but that is almost never the
 problem in file size issues like the one you are having.

 RAID doesn't really help table performance either, because in almost all
 cases the bottleneck is the drives' access time.  RAIDing drives doesn't
 improve access time, so you are most likely not going to see any
 performance increase from a RAID system unless you change to drives with
 lower access times.

 Benjamin Arai
 [EMAIL PROTECTED]

 On Mon, 14 Jan 2002, Geoffrey Soh wrote:

   On Fri, Jan 11, 2002 at 08:51:33AM +0100, Dr. Frank Ullrich wrote:
Benjamin,
can you also grow MyISAM tables to such sizes?
  
   You can.
 
  I understand that the RAID option can help break the 2GB/4GB
 barrier, esp.
  on Linux machines.
 
  But how do you surpass the Max_data_length restriction of
 4294967295 bytes
  on a RAIDED table?  do you increase max_rows on such a table?  if so,
  would this affect the performance of a large table e.g. above 50GB?
 
  Without changing max_rows it seems that MySQL will still
 restrict the table
  size to 4GB, even with raid_chunks and raid_chunksize set to e.g. 50 and
  256?
 
  Anyone out there tweaked these settings before and what was the outcome?
  Thanks.
 
  Cheers,
  Geoffrey

 --
 --
 
 Benjamin Arai
 [EMAIL PROTECTED]
 --
 







problem with BDB tables

2001-12-27 Thread Geoffrey Soh

Hi,

Just wondering if anyone has received the following error code before on a
BDB table :

ERROR 1030: Got error -30991 from table handler

I have a BDB table consisting of 4 varchar and 2 unsigned bigint columns,
with about 30K+ rows.  This table is replicated to a slave.  Both master
and slave run a binary compiled from the source distribution with the BDB
and raid options.

I got the above error in the slave's log and then replication stopped.  The
query in question was just a simple UPDATE that had been applied
successfully on the master.  Further attempts to get replication going
again by applying the query manually on the slave failed.  However, SELECTs
on the same row always succeed.
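Following the same Berkeley DB 3.2.x db.h numbering assumed earlier in this
archive (again an assumption; confirm against the headers your binary was
built from), -30991 can be decoded like this:

```python
# In Berkeley DB 3.2.x db.h, -30991 is DB_NOTFOUND: the requested
# key/data pair does not exist.  (Assumption: the server links that BDB
# version; verify against its db.h.)
DB_NOTFOUND = -30991

def explain(code):
    if code == DB_NOTFOUND:
        return "DB_NOTFOUND: key/data pair not found"
    return "unknown; consult db.h"

print(explain(-30991))
```

If that reading is right, a DB_NOTFOUND on an UPDATE that a SELECT can
still satisfy would be consistent with the slave table's indexes being out
of sync with its data, which would also explain why only a full rebuild
gets replication going again.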

Can anyone enlighten me on what the above error means and how to go about
recovering the data?  I already have a snapshot handy to resync the slave,
but the above error makes me concerned.

Any help appreciated.  Thanks!

Cheers,
Geoffrey

