crash recovery

2012-07-12 Thread madu...@gmail.com
Hi,

Lately I got this message in my error log file; I need more details on why the DB
was restarted!

120711 19:42:06  mysqld restarted
120711 19:42:06 [Warning] Asked for 196608 thread stack, but got 126976
120711 19:42:07  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
120711 19:42:07  InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 4 4031987031.
InnoDB: Doing recovery: scanned up to log sequence number 4 4031987031
InnoDB: Last MySQL binlog file position 0 78062804, file name
./sdc-bin.97
120711 19:42:07  InnoDB: Flushing modified pages from the buffer pool...
120711 19:42:07  InnoDB: Started; log sequence number 4 4031987031
/usr/sbin/mysqld: ready for connections.
Version: '4.1.14-standard-log'  socket: '/var/lib/mysql/mysql.sock'  port:
3306  MySQL Community Edition - Standard (GPL)
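The excerpt above only shows the restart itself; the cause (a crash, a kill by the OS, a
manual restart) is usually visible just before it in the error log or in the system log.
A few hedged checks, assuming typical Linux paths (adjust them to your setup):

  grep -B20 "mysqld restarted" /var/lib/mysql/`hostname`.err | less   # context just before the restart
  grep -iE "out of memory|oom|killed process" /var/log/messages       # was mysqld killed by the kernel?
  dmesg | tail -50                                                    # any hardware or kernel trouble?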

-mad


Crash Recovery Problem

2008-02-08 Thread Grant Limberg
I've got MySQL 5.0.55a running on a Windows XP system.  All tables in the
active database on the system are InnoDB.  The startup log says that InnoDB
recovery completed and that connections are available.  Meanwhile, mysqld is
writing to the HDD at around 2MB/sec according to Process Explorer.
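One way to check whether those background writes are simply InnoDB still flushing the
buffer pool after recovery, rather than a hung query, is to watch the InnoDB status while
it happens (not from the thread; SHOW INNODB STATUS is the 5.0 syntax):

  mysql -u root -p -e "SHOW INNODB STATUS\G"    # look at "Modified db pages" and the pending I/O counters
  mysql -u root -p -e "SHOW PROCESSLIST"        # is the later query stuck on a lock or on disk I/O?
  # if "Modified db pages" keeps falling while the disk stays busy, post-recovery flushing is still running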

I can connect to the server fine until I run a query against the database.
Then no other connections can issue any command whatsoever.  The mysql
client just freezes up, and I can't make any new connections to the server
either.  After a period of time, mysqld-nt.exe crashes and one thread is
left running, still writing to the drive at approx 2MB/sec.

Does anyone out there have any idea what's going wrong here?  I'm stumped.

-- 
Grant Limberg
[EMAIL PROTECTED]


InnoDB Crash RECOVERY HELP (Urgent)

2006-09-21 Thread Sayed Hadi Rastgou Haghi

Dear all,
our DB server crashed, and when I try to start MySQL with
/etc/init.d/mysql start
I get these lines in my error log:

060921 13:00:14  mysqld started
060921 13:00:14  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
060921 13:00:14  InnoDB: Starting log scan based on checkpoint at
InnoDB: log sequence number 3 3546172175.
InnoDB: Error: tried to read 65536 bytes at offset 0 2173440.
InnoDB: Was only able to read 54784.
InnoDB: Fatal error: cannot read from file. OS error number 17.
060921 13:01:24  InnoDB: Assertion failure in thread 3086943936 in file
os0file.c line 2107
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/mysql/en/Forcing_recovery.html
InnoDB: about forcing recovery.
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose
the problem, but since we have already crashed, something is definitely
wrong
and this may fail.

key_buffer_size=0
read_buffer_size=536866816
max_used_connections=0
max_connections=550
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections =
2094947 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd=(nil)
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0xbffe3d4c, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x814cbfb
0x8d28b8
0x1
0x8355aed
0x835c659
0x835ce73
0x829ba01
0x81d3af3
0x81c5cb2
0x815028a
0x773e33
0x80e0c71
New value of fp=(nil) failed sanity check, terminating stack trace!
Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html and
follow instructions on how to resolve the stack trace. Resolved
stack trace is much more helpful in diagnosing the problem, so please do
resolve it
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.
060921 13:01:24  mysqld ended

When I add skip-innodb to my.cnf, the server starts up, but my InnoDB tables cannot
be accessed.

How can I get the MySQL server started again?

--
Sincerely,
Hadi Rastgou


Re: InnoDB Crash RECOVERY HELP (Urgent)

2006-09-21 Thread Eric Bergen

The error message says to go to
http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html to learn
how to set the different recovery options for innodb.
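For completeness, what that page boils down to is starting the server with
innodb_force_recovery set to a low value, dumping whatever is readable, and rebuilding.
A minimal sketch (paths are examples, and the level actually needed depends on the damage):

  cat >> /etc/my.cnf <<'EOF'
  [mysqld]
  innodb_force_recovery = 1    # start at 1; only raise it step by step (up to 6) if startup still fails
  EOF
  /etc/init.d/mysql start
  mysqldump --all-databases > /root/salvage.sql   # salvage the data while the server is up
  /etc/init.d/mysql stop
  # then remove innodb_force_recovery again, recreate the tablespace if necessary, and reload the dump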


--
Eric Bergen
[EMAIL PROTECTED]
http://www.provenscaling.com




InnoDB crash recovery and innodb_support_xa, sync_binlog

2006-07-04 Thread HIROSE, Masaaki
Hello,

I have some questions about InnoDB crash recovery.

Q: I understand the transaction and write-to-disk sequence to be as in the
   following figure. Is this correct?

w/
  innodb_flush_log_at_trx_commit = 1
  innodb_support_xa = on
  sync_binlog = 1
  skip-innodb_doublewrite


BEGIN

INSERT \
   +-- store  -- binlog_cache (memory)
   +-- fsync? -- innodb_log_file (disk)
(A) ==
COMMIT \
   +-- fdatasync -- binlog (disk)
(B) ==|
   +-- (prepare) fsync? -- innodb_log_file (disk)
(C) ==|
   +-- (commit)  fsync? -- innodb_log_file (disk)
(D) ==



Q: How does InnoDB crash recovery behave if the OS crashes suddenly at (A)..(D)?

  (A) rolled back using innodb_log_file.

  (B) rolled back using innodb_log_file, and the INSERT is removed from the binlog.

  (C) rolled back using innodb_log_file, and the INSERT is removed from the binlog.

  (D) rolled forward using innodb_log_file.


Q: If sync_binlog = 0, what happens?

  If the binlog has not yet been synced to disk ...

  (B),(C) Does mysqld fail to remove the INSERT from the binlog during the
  crash recovery sequence?

  (D) Does the INSERT vanish from the binlog? If so, do the table data on a
  replicated master and slave diverge? (The master has the INSERTed row but
  the slave does not.)


Q: If innodb_support_xa = off, what happens?

  (B),(C) When rolling back using innodb_log_file, does mysqld remove the
  INSERT from the binlog or not?


Q: What is the best setting for crash recovery?

  I suppose innodb_support_xa = on and sync_binlog = 1 is the best
  setting.

  But mysqld is MUCH slower when innodb_support_xa and/or sync_binlog
  are enabled.

  xa=on + sync_binlog=0 is 2 times slower than xa=off + sync_binlog=0.
  xa=on + sync_binlog=1 is 4..7 times slower than xa=off + sync_binlog=0.

  # This benchmark was done with a disk that has a 128MB write cache and a
  # battery backup unit, but xa=on + sync_binlog=1 is still much slower...
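For reference, the durable combination described above corresponds to a my.cnf section
roughly like this (a sketch that only restates the settings named in the post; the heredoc
path is an example):

  cat >> /etc/my.cnf <<'EOF'
  [mysqld]
  innodb_flush_log_at_trx_commit = 1   # fsync the InnoDB log at every commit
  innodb_support_xa              = 1   # keep the binlog and InnoDB in step via two-phase commit
  sync_binlog                    = 1   # fsync the binlog after every write
  EOF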


Any answer or advice would be appreciated.

-- 
HIROSE, Masaaki




InnoDB crash - recovery

2003-08-14 Thread Mads Jørgensen
Greetings all.

I have a bit of a problem here: a database I'm administering was somehow corrupted,
and I'm unable to recover it in any way. Is there any way at all to recover a corrupt
InnoDB database? (I read on innodb.com that it is impossible, but I hope it is not.)

When I run a query against any InnoDB table in the database, MySQL crashes with the
following stack trace and errors.

I'm running a GNU/Linux system with MySQL 4.0.13 from Debian unstable.

Error: trying to access field 4294967295 in rec
030807 13:53:24  InnoDB: Assertion failure in thread 180234 in file rem0rec.c line 111
InnoDB: Failing assertion: 0
...
thd=0x86e3990
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0xbe7fe898, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x8102bc3
0x401ad75a
0x82b9a60
0x8230d50
0x822e42c
0x816952f
0x8169c84
0x816bf6a
0x816c2be
0x815e77f
0x8178c60
0x810f8e8
0x8112a15
0x810db3d
0x810d6cc
0x810d059
0x401a7d53
0x4038a3f7
New value of fp=(nil) failed sanity check, terminating stack trace!
...
0x8102bc3 mysql_binlog_send__FP3THDPcUxUs + 1419
0x401ad75a _end + 936375294
0x82b9a60 _tr_flush_block + 640
0x8230d50 page_cur_delete_rec + 5780
0x822e42c page_copy_rec_list_end_to_created_page + 392
0x816952f yyparse + 3855
0x8169c84 yylex + 1572
0x816bf6a opt_search_plan_for_table + 742
0x816c2be opt_search_plan_for_table + 1594
0x815e77f row_upd_clust_step + 431
0x8178c60 btr_compress + 3852
0x810f8e8 srv_master_thread + 172
0x8112a15 innobase_start_or_create_for_mysql + 1297
0x810db3d srv_sprintf_innodb_monitor + 425
0x810d6cc srv_suspend_mysql_thread + 1372
0x810d059 srv_table_reserve_slot_for_mysql + 473
0x401a7d53 _end + 936352247
0x4038a3f7 _end + 938328219
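The resolved names in that trace look implausible for an assertion in rem0rec.c, which
usually means the symbol file did not match the running binary. Re-resolving the numeric
trace against the exact mysqld looks roughly like this (a sketch; paths are examples, and
the binary must not be stripped):

  nm -n /usr/sbin/mysqld > /tmp/mysqld.sym                      # symbol table sorted by address
  resolve_stack_dump -s /tmp/mysqld.sym -n /tmp/numeric.stack   # numeric.stack holds the 0x... lines saved from the error log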




InnoDB crash recovery

2003-03-31 Thread Murad Nayal


Hello,

from the InnoDB documentation

To be able to recover your InnoDB database to the present from the
binary backup described above, you have to run your MySQL database with
the general logging and log archiving of MySQL switched on. Here by the
general logging we mean the logging mechanism of the MySQL server which
is independent of InnoDB logs.

http://www.vanderouderaa.nl/MySQL/doc/en/Backing_up.html

What 'general logging and log archiving' does this paragraph refer to?
Is it the new binary logging? These binlog files are very large, and I'd rather
not activate the binary log if I don't have to. Are they important for
crash recovery (a server crash) in general, or for InnoDB in particular?
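For context, binary logging (which is what point-in-time recovery relies on in practice)
is switched on in my.cnf roughly like this; a sketch with an example path, and old logs
can be pruned with PURGE MASTER LOGS once backups cover them:

  cat >> /etc/my.cnf <<'EOF'
  [mysqld]
  log-bin = /var/lib/mysql/binlog    # base name for the binary log files
  EOF
  # after a restart, point-in-time recovery = restore the binary backup, then replay the
  # binlogs written since it, e.g.:  mysqlbinlog /var/lib/mysql/binlog.0* | mysql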

many thanks

Murad




Re: Hoping for Crash recovery Miracle

2001-09-18 Thread Terje Kristensen

You might be able to do a few select * from mubo_boekhoud_boekhoud where ...
queries to dump out most of the data in the table. Then you can drop and
recreate the table and import the data.
It's worth a try anyway.
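A concrete way to try that suggestion (a sketch: the 'id' column and its ranges are
invented, and the database name 'mubo' is only guessed from the path in the post):

  mysqldump --where="id <  2000" mubo mubo_boekhoud_boekhoud > part1.sql
  mysqldump --where="id >= 2000" mubo mubo_boekhoud_boekhoud > part2.sql
  # narrow the ranges around whatever rows trigger the error, then reload the parts
  # that dumped cleanly into a freshly recreated table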

Terje K
 -Original Message-
 From: David Bouw [mailto:[EMAIL PROTECTED]]
 Sent: 18 September 2001 01:53
 To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Fw: Hoping for Crash recovery Miracle









Re: Hoping for Crash recovery Miracle

2001-09-18 Thread David Bouw

Hi Dean,

Thanks for the reply.

I did indeed also consider the possibility that I reversed the parameters, though I must
say that all the other tables in the directory (luckily) still worked.

But reading your words, it looks to me as if this is not an easy (perhaps impossible) table
to repair.

This really was bad luck: the machine the database is on makes a backup every night at
1:00.

I started working on Monday morning, continued in the afternoon and evening, and it was
about 12:15 at night when I made the mistake. If it had been 50 minutes later, I would
probably have had a very recent backup.
I even make hourly rotating backups of some (important) tables, but these tables weren't
included in the script.. :-(

Well, OK. A hard way to learn that Unix always does exactly what you tell it, without
letting you play the OK/Cancel button game.

I do have the raw data, but it consists of about 1000 lines of text for a small accountancy
program written in PHP that uses MySQL to store the data. Those 1000 records had all just
been booked correctly by hand...  :-)

Luckily the second time you do something it goes quicker: I restarted this morning at 9:00
and had finished the work by 14:00.

Luckily the damage was not that great, and I learned once again to respect Unix
commands.. :-)

Thanks for the help.

Bye Bye
David


 Most likely you reversed the source and destination parameters in the tar
 command and stepped on your table(s). Usually, if this was indeed the cause
 of the problem, other tables in the directory would also be damaged, if any
 type of file globbing was used. I don't suppose you have a backup of your
 original raw data?


 DG

 -Original Message-
 From: David Bouw [mailto:[EMAIL PROTECTED]]
 Sent: Monday, September 17, 2001 6:17 PM
 To: [EMAIL PROTECTED]
 Subject: Hoping for Crash recovery Miracle



Hoping for Crash recovery Miracle

2001-09-17 Thread David Bouw

Hi there, 

I would really appreciate any help someone can give on the following awful problem.

I have been working all day on filling a table with data (small records).
Because the table contains some binary data, someone asked me how big it would get,
so I went into the directory and ran a 'tar cfvz' command to pack the data into a
file and see how big that file would be.

I am not sure what I did wrong, but I got an error which I didn't look at very
carefully.

After this it looked as if I had damaged the table I worked on all day.

The table consists of 4440 records. At the moment I can still see this number when
I do a:
select count(*) from tablename...  (So far so good.)

When I do a select * from this table I get the following error:
Warning: MySQL: Unable to save result set in
/www/websites/mubo/login/phpmyadmin/sql.php3 on line 65
(This is an error from the PHP script doing the query, but the 'MySQL: Unable to
save result set' part probably comes from MySQL.)

The three table files look as follows:
2.8M -rw-r-1 root root 2.7M Sep 18 00:37 mubo_boekhoud_boekhoud.ISD
140k -rw-r-1 root root 135k Sep 18 00:37 mubo_boekhoud_boekhoud.ISM
 12k -rw-r-1 root root 8.6k Sep 18 00:37 mubo_boekhoud_boekhoud.frm

When I run isamchk mubo_boekhoud_boekhoud I get the following:

[root@jaguar test]# isamchk mubo_boekhoud_boekhoud 
Checking ISAM file: mubo_boekhoud_boekhoud
Data records:4440   Deleted blocks:   0
- check file-size
isamchk: warning: Size of datafile is: 2878557   Should be: 403794
- check delete-chain
- check index reference
- check data record references index: 1
- check data record references index: 2
- check data record references index: 3
- check record links
isamchk: error: Wrong bytesec: 31-139-8 at linkstart: 0
ISAM-table 'mubo_boekhoud_boekhoud' is corrupted
Fix it using switch -r or -o

What are my options? If I could only get the data into a readable form, I could at
least get back some of it.
The .ISD file should be OK; is it possible to regenerate the .ISM file?

What can I try? If anyone thinks I still have some options, I would very much appreciate
hearing about them.

I did try an isamchk -o command; this was the result:

[root@jaguar test]# isamchk -o mubo_boekhoud_boekhoud
- recovering ISAM-table 'mubo_boekhoud_boekhoud.ISM'
Data records: 4440
Wrong bytesec:  31-139-  8 at  0; Skipped
Data records: 0
[root@jaguar test]# 

The .ISD file is then truncated to 0 bytes, and I am left with a table with 0 records...
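For anyone who hits this later: copy the table files before letting isamchk rewrite
anything, and try the quick repair first, which is meant to rebuild only the index file
rather than the datafile. A sketch, not a guarantee it copes with this corruption:

  cp mubo_boekhoud_boekhoud.ISD mubo_boekhoud_boekhoud.ISD.bak
  cp mubo_boekhoud_boekhoud.ISM mubo_boekhoud_boekhoud.ISM.bak
  isamchk -r -q mubo_boekhoud_boekhoud    # -q should leave the .ISD alone and rebuild the .ISM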

Thank you very much for any help.

With kind regards
David .. 








Crash Recovery

2001-09-12 Thread Kevin Kotun


I have a table with over 26 million records that experiences a high volume of traffic.

Recently, query results became a little screwy; upon investigation it was found that 
the table and the index were fouled up.

The results of an isamchk on the table yielded:

Checking ISAM file: /usr8/hydrology/measurement
Data records: 27034726   Deleted blocks:   0
- check file-size
./bin/isamchk: warning: Size of datafile is: 11 Should be: 3554138
- check delete-chain
- check index reference
- check data record references index: 1
./bin/isamchk: error: Found key at page 1024 that points to record outside datafile
ISAM-table '/usr8/hydrology/measurement' is corrupted
Fix it using switch -r or -o

and on the index yielded:

Data records: 27034726   Deleted blocks:   0
- check file-size
./bin/isamchk: warning: Size of datafile is: 11 Should be: 3554138
- check delete-chain
- check index reference
- check data record references index: 1
./bin/isamchk: error: Found key at page 1024 that points to record outside datafile
ISAM-table '/usr8/hydrology/measurement.ISM' is corrupted
Fix it using switch -r or -o


Trying to fix the table as suggested results in the datafile being reduced to a file
size of 0 bytes.  Before the fix it is over 4 GB.

I have tried several different options for isamchk, including --quick, which does
not kill the data file but still does not work.

So the question is:

Since isamchk can detect the 27 million records, what is my best hope for getting
them all out?

A plain select * returns garbage.

Any thoughts would be appreciated.

I can get about 90% of the records out with carefully crafted where clauses,
but I am a little concerned about their integrity.  What is the probability
that the integrity of these records can be trusted?  They look OK, and a few random
checks that I have done show no problems.
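One hedged way to pull the salvageable rows into a fresh table and sanity-check the total.
The database name 'hydrology' is only guessed from the path, and the 'id' split point is
invented; use whatever ranges already select cleanly:

  mysql -e "CREATE TABLE measurement_salvage SELECT * FROM measurement WHERE id <= 13000000" hydrology
  mysql -e "INSERT INTO measurement_salvage SELECT * FROM measurement WHERE id >  13000000" hydrology
  mysql -e "SELECT COUNT(*) FROM measurement_salvage" hydrology   # compare against the 27,034,726 isamchk reports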

thanks to all.

Kevin Kotun, P.E.
Hydrologist
Everglades National Park









Re: Large table crash recovery

2001-05-31 Thread Aaron Williams



 Greetings list,

 I have a semi-large (~10 GB) MyISAM table; when I run a select query
 against it, I get errno 145. So I run myisamchk on the table. However,
 when I run myisamchk it tells me that the table has about 20 million data
 records and then proceeds to count through them. But when it reaches 20
 million it doesn't stop and just keeps counting up. The longest run I let
 go reached 26 million before it was aborted.

 I have run the myisamchk with two different options

 the first: myisamchk --fast -r table
 the second: myisamchk -q -r table

 I received no errors from myisamchk on either run as it went through;
 it just doesn't stop when it reaches the number of data records
 it says it has.

 I tried running myisamchk --safe-recover and got the following
 message:

 [server /usr/local/var/my_db ]#  myisamchk --safe-recover my_table
 - recovering (with keycache) MyISAM-table 'my_table.MYI'
 Data records: 19424783
 myisamchk: error: Can't create new tempfile: 'my_table.TMD'
 MyISAM-table 'my_table' is not fixed because of errors
 Try fixing it by using the --safe-recover (-o) or the --force (-f) option


 Running it with the -f option yields this:

 [server /usr/local/var/my_db ]# myisamchk -f my_table
 Checking MyISAM file: my_table
 Data records: 19424783   Deleted blocks:   0
 myisamchk: warning: Table is marked as crashed and last repair failed
 - check file-size
 myisamchk: warning: Size of indexfile is: 2917998592  Should be: 1024
 - check key delete-chain
 - check record delete-chain
 - check index reference
 - check data record references index: 1
 myisamchk: error: Can't read indexpage from filepos: -1
 - recovering (with keycache) MyISAM-table 'my_table.MYI'


 It then starts to count through the records. Since each check takes
 about 14 hours, I do not know if the -f option will stop at 19 million
 or not.



 Can any of you gurus offer me any advice on where to take it from here?
 Are there other myisamchk command-line options that I am not aware of? It just
 seems to me that if --safe-recover tells you to use --safe-recover
 because it didn't work, things aren't good.

 _Any_ advice is appreciated.
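The "Can't create new tempfile: 'my_table.TMD'" failure is often just disk space, or a
stale .TMD left behind by the earlier aborted repair. A hedged sketch of what to check
(the datadir path is taken from the post; /bigdisk/tmp is an example):

  df -h /usr/local/var/my_db                       # room for a temp copy of a ~10 GB table?
  ls -l /usr/local/var/my_db/my_table.TMD          # a leftover .TMD from a failed repair blocks new ones
  rm /usr/local/var/my_db/my_table.TMD             # only once no repair is running
  myisamchk --safe-recover --tmpdir=/bigdisk/tmp /usr/local/var/my_db/my_table   # put the temp file where there is space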



