Re: [vox-tech] GL apps not running, NVIDIA X11 module

2003-05-29 Thread Mark K. Kim
So I got gears working.  It turns out you have to link -lglut before -lGL.
Apparently the configure.in wasn't set up correctly.
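For reference, the working link line looks roughly like this (the exact
libraries the gears demo needs are from memory, so treat it as a sketch;
the point is that -lglut has to come before -lGL):

gcc -o gears gears.o -lglut -lGLU -lGL -lm

GNU ld resolves libraries left to right, so when -lGL comes first the GL
symbols that libglut itself needs are never resolved and the link fails.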

Still, SDLgears isn't working.  I'm linking SDL first, then -lGL after it
(no GLUT or GLU involved for SDLgears).  Now I need to figure out how to
fix that... anyone have ideas?

-Mark


On Wed, 28 May 2003, Mark K. Kim wrote:

> I used NVIDIA's installer program, which did install its own libglx X11
> module.  It also installed libGLcore, but not as an X11 module.  I read on
> Google Groups that NVIDIA's driver automatically loads the libGLcore
> library if it needs it (though that didn't stop me from trying to force X
> to load it, which resulted in X refusing to load it due to unresolved
> symbols).
>
> Can you compile the following program for me on your NVIDIA system?:
>
>http://www.libsdl.org/opengl/SDLgears-1.0.2.tar.gz
>
> If it works, then it's probably something on my system and I'll look in
> that direction.  If not, I'll look in a more general direction.  Thanks!!
>
> Oh, I do have an NVIDIA motherboard (yes, an NVIDIA motherboard!) requiring
> the nForce chipset modules I got off of NVIDIA's website.  They're required
> to get the network, X11, and sound card working.  Yeah, I need NVIDIA's
> nvidia module *and* the kernel module to get X working.  I wonder if that
> has anything to do with it?  Anybody else have the nForce motherboard
> chipset?
>
> -Mark
>
>
> On Wed, 28 May 2003, ME wrote:
>
> > For my NVidia setup on my laptop, I have RtCW working with the GL
> > stuff. In order to make this work, I not only installed the proprietary
> > NVidia kernel module, but also installed the GLX "stuff" (separate
> > download) from nvidia.
> >
> > Did you get the GLX/GL source of the same version as the kernel modules
> > for NVidia and X?
> >
> > Install both, then restart X.  There may be mods needed in XF86Config to
> > make it use the GLX stuff that comes from NVidia, but I will need to check
> > my laptop when I get back to work.
> >
> > -ME
> >
> >
> > Mark K. Kim said:
> > > Hello,
> > >
> > > So I tried running Quake 2, which ran fine except that it wouldn't let me
> > > use either the GLX or SDL GL video drivers.  I also tried to compile `gears`
> > > and `SDLgears` from http://www.libsdl.org/opengl/SDLgears-1.0.2.tar.gz, and
> > > they won't run (they bring up a broken image, as if they tried to draw the
> > > first frame but drew only part of it, then froze).
> > >
> > > But `glxgears` and the 3D screen savers work perfectly fine.  Also,
> > > "planets", a wireframe drawing test program from the OpenGL book, works
> > > perfectly fine.
> > >
> > > Does anyone know why these three apps (Quake2, gears, SDLgears) won't run
> > > on my system?  I'm running Debian Stable with NVIDIA's X11 drivers.
> > > Thanks in advance!
> > >
> > > -Mark
> > >
> > > PS: I read on google groups that I should disable the GLcore and DRI
> > > modules because NVIDIA's drivers have their own thing.  I did but it
> > > doesn't help.
> > >
> > > --
> > > Mark K. Kim
> > > http://www.cbreak.org/
> > > PGP key available upon request.
> > >
> >
>
> --
> Mark K. Kim
> http://www.cbreak.org/
> PGP key available upon request.
>
>

-- 
Mark K. Kim
http://www.cbreak.org/
PGP key available upon request.



Re: [vox-tech] Data Recovery

2003-05-29 Thread Mark K. Kim
You should always feel free to look into professional services, but, out
of curiosity, what do you mean by "crashed"?  Did you dunk it in water,
hit it with a sledgehammer, put it in a fire, etc.?  If not, it may still be
recoverable via software using much cheaper techniques.  I think the
professional guys also charge by the number of files or the amount of data
they recover, not by hard drive count.

Again, you should always feel free to look into professional services, but
I thought I'd give you a heads-up so you know some of your options in case
you weren't already aware...

-Mark


On Wed, 28 May 2003, Larry Ozeran wrote:

> I hope this topic is appropriate to this forum since it is not Linux specific.
>
> I must sheepishly admit that my backup procedures have been inadequate and
> my laptop hard disk has crashed. 8-(
>
> Does anyone have experience with any data recovery services locally (or
> elsewhere)?  My search has only found one in Placerville and one in Irvine,
> and the rest out of state.  They are not inexpensive, and from those data
> recovery companies with whom I have spoken, I gather that if the first
> service can't get the data, it may be lost forever.  I have not been able to
> find any reviews of these services, so any (favorable) personal experiences
> would be appreciated.
>
> Thanks,
>
> Larry
>

-- 
Mark K. Kim
http://www.cbreak.org/
PGP key available upon request.



[vox-tech] Data Recovery

2003-05-29 Thread Larry Ozeran
I hope this topic is appropriate to this forum since it is not Linux specific.

I must sheepishly admit that my backup procedures have been inadequate and
my laptop hard disk has crashed. 8-(

Does anyone have experience with any data recovery services locally (or
elsewhere)?  My search has only found one in Placerville and one in Irvine,
and the rest out of state.  They are not inexpensive, and from those data
recovery companies with whom I have spoken, I gather that if the first
service can't get the data, it may be lost forever.  I have not been able to
find any reviews of these services, so any (favorable) personal experiences
would be appreciated.

Thanks,

Larry 



Re: [vox-tech] GL apps not running, NVIDIA X11 module

2003-05-29 Thread Mark K. Kim
I used NVIDIA's installer program, which did install its own libglx X11
module.  It also installed libGLcore, but not as an X11 module.  I read on
Google Groups that NVIDIA's driver automatically loads the libGLcore
library if it needs it (though that didn't stop me from trying to force X
to load it, which resulted in X refusing to load it due to unresolved
symbols).

Can you compile the following program for me on your NVIDIA system?:

   http://www.libsdl.org/opengl/SDLgears-1.0.2.tar.gz

If it works, then it's probably something on my system and I'll look in
that direction.  If not, I'll look in a more general direction.  Thanks!!

Oh, I do have an NVIDIA motherboard (yes, an NVIDIA motherboard!) requiring
the nForce chipset modules I got off of NVIDIA's website.  They're required
to get the network, X11, and sound card working.  Yeah, I need NVIDIA's
nvidia module *and* the kernel module to get X working.  I wonder if that
has anything to do with it?  Anybody else have the nForce motherboard
chipset?

-Mark


On Wed, 28 May 2003, ME wrote:

> For my NVidia setup on my laptop, I have RtCW working with the GL
> stuff. In order to make this work, I not only installed the proprietary
> NVidia kernel module, but also installed the GLX "stuff" (separate
> download) from nvidia.
>
> Did you get the GLX/GL source of the same version as the kernel modules
> for NVidia and X?
>
> Install both, then restart X.  There may be mods needed in XF86Config to
> make it use the GLX stuff that comes from NVidia, but I will need to check
> my laptop when I get back to work.
>
> -ME
>
>
> Mark K. Kim said:
> > Hello,
> >
> > So I tried running Quake 2, which ran fine except that it wouldn't let me
> > use either the GLX or SDL GL video drivers.  I also tried to compile `gears`
> > and `SDLgears` from http://www.libsdl.org/opengl/SDLgears-1.0.2.tar.gz, and
> > they won't run (they bring up a broken image, as if they tried to draw the
> > first frame but drew only part of it, then froze).
> >
> > But `glxgears` and the 3D screen savers work perfectly fine.  Also,
> > "planets", a wireframe drawing test program from the OpenGL book, works
> > perfectly fine.
> >
> > Does anyone know why these three apps (Quake2, gears, SDLgears) won't run
> > on my system?  I'm running Debian Stable with NVIDIA's X11 drivers.
> > Thanks in advance!
> >
> > -Mark
> >
> > PS: I read on google groups that I should disable the GLcore and DRI
> > modules because NVIDIA's drivers have their own thing.  I did but it
> > doesn't help.
> >
> > --
> > Mark K. Kim
> > http://www.cbreak.org/
> > PGP key available upon request.
> >
>

-- 
Mark K. Kim
http://www.cbreak.org/
PGP key available upon request.




Re: [vox-tech] GL apps not running, NVIDIA X11 module

2003-05-29 Thread ME
For my NVidia setup on my laptop, I have RtCW working with the GL
stuff. In order to make this work, I not only installed the proprietary
NVidia kernel module, but also installed the GLX "stuff" (separate
download) from nvidia.

Did you get the GLX/GL source of the same version as the kernel modules
for NVidia and X?

Install both, then restart X.  There may be mods needed in XF86Config to
make it use the GLX stuff that comes from NVidia, but I will need to check
my laptop when I get back to work.
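If memory serves, the usual edits are in the Module and Device sections of
XF86Config (XF86Config-4 on some distributions); roughly the following, but
double-check against the README that ships with NVIDIA's driver:

Section "Module"
    Load  "glx"       # NVIDIA's own libglx
    # Load "dri"      # NVIDIA's GL doesn't use DRI...
    # Load "GLcore"   # ...or the stock GLcore module
EndSection

Section "Device"
    Driver "nvidia"   # instead of "nv"
EndSection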

-ME


Mark K. Kim said:
> Hello,
>
> So I tried running Quake 2, which ran fine except that it wouldn't let me
> use either the GLX or SDL GL video drivers.  I also tried to compile `gears`
> and `SDLgears` from http://www.libsdl.org/opengl/SDLgears-1.0.2.tar.gz, and
> they won't run (they bring up a broken image, as if they tried to draw the
> first frame but drew only part of it, then froze).
>
> But `glxgears` and the 3D screen savers work perfectly fine.  Also,
> "planets", a wireframe drawing test program from the OpenGL book, works
> perfectly fine.
>
> Does anyone know why these three apps (Quake2, gears, SDLgears) won't run
> on my system?  I'm running Debian Stable with NVIDIA's X11 drivers.
> Thanks in advance!
>
> -Mark
>
> PS: I read on google groups that I should disable the GLcore and DRI
> modules because NVIDIA's drivers have their own thing.  I did but it
> doesn't help.
>
> --
> Mark K. Kim
> http://www.cbreak.org/
> PGP key available upon request.
>



[vox-tech] GL apps not running, NVIDIA X11 module

2003-05-29 Thread Mark K. Kim
Hello,

So I tried running Quake 2, which ran fine except that it wouldn't let me
use either the GLX or SDL GL video drivers.  I also tried to compile `gears`
and `SDLgears` from http://www.libsdl.org/opengl/SDLgears-1.0.2.tar.gz, and
they won't run (they bring up a broken image, as if they tried to draw the
first frame but drew only part of it, then froze).

But `glxgears` and the 3D screen savers work perfectly fine.  Also,
"planets", a wireframe drawing test program from the OpenGL book, works
perfectly fine.

Does anyone know why these three apps (Quake2, gears, SDLgears) won't run
on my system?  I'm running Debian Stable with NVIDIA's X11 drivers.
Thanks in advance!

-Mark

PS: I read on google groups that I should disable the GLcore and DRI
modules because NVIDIA's drivers have their own thing.  I did but it
doesn't help.

-- 
Mark K. Kim
http://www.cbreak.org/
PGP key available upon request.



Re: [vox-tech] Zaurus wireless compactflash card

2003-05-29 Thread Gabriel Rosa
On Wed, May 28, 2003 at 04:25:26PM -0700, Foo Lim wrote:
> There is no way to test it without setting up one's own access point, is 
> there?
> 

You can get a few (2+) nodes together using ad-hoc mode. Or just find a public
hotspot to test it out with.
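With wireless-tools, something like this on each node should do it (the
interface name, ESSID, and addresses here are just examples):

iwconfig eth1 mode Ad-Hoc essid testnet channel 6
ifconfig eth1 10.0.0.1 netmask 255.255.255.0   # use 10.0.0.2 on the other node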

-Gabe


Re: [vox-tech] Zaurus wireless compactflash card

2003-05-29 Thread Foo Lim
On Wed, 28 May 2003, Gabriel Rosa wrote:

> I have a D-Link DCF-660W that I use with my ipaq (running familiar
> 0.6.1), and it works very well. It's supposed to be pretty low power,
> but it's hard to gauge without having another card.
> 
> It uses the orinoco_cs driver, and with the backlight on I get about
> 2 hours of use out of the ipaq with the card plugged in (normally about
> 4 hours).
> 
> The D-Link is also physically small, and not T-shaped like some of the
> Linksys (I think) ones I've seen.
> 
> -Gabe

Thanks for the feedback.

The Linksys WCF12 is the newer model in the WCF series.  It's not T-shaped 
anymore, so it doesn't get in the way of the stylus.  One review of the 
WCF12 commended its range.

There is no way to test it without setting up one's own access point, is 
there?

FL



Re: [vox-tech] Zaurus wireless compactflash card

2003-05-29 Thread Gabriel Rosa
On Wed, May 28, 2003 at 04:08:55PM -0700, Foo Lim wrote:
> Hi all,
> 
> Does anyone have any recommendations on wireless compactflash cards for
> the Zaurus?  I bought an 802.11b card for $30 after rebate (Linksys WCF12),
> but I haven't tested it out yet, since I still need to set up a wireless
> access point.  There's a special for a D-Link DCF-660W for $20 after
> rebate this week.  Does anyone have any personal preference?
> 

I have a D-Link DCF-660W that I use with my ipaq (running familiar 0.6.1), and
it works very well. It's supposed to be pretty low power, but it's hard to
gauge without having another card.

It uses the orinoco_cs driver, and with the backlight on I get about 2 hours
of use out of the ipaq with the card plugged in (normally about 4 hours).

The D-Link is also physically small, and not T-shaped like some of the
Linksys (I think) ones I've seen.

-Gabe


[vox-tech] Zaurus wireless compactflash card

2003-05-29 Thread Foo Lim
Hi all,

Does anyone have any recommendations on wireless compactflash cards for
the Zaurus?  I bought an 802.11b card for $30 after rebate (Linksys WCF12),
but I haven't tested it out yet, since I still need to set up a wireless
access point.  There's a special for a D-Link DCF-660W for $20 after
rebate this week.  Does anyone have any personal preference?

This site:
http://www.zaurus.com/dev/support/peripherals.htm
lists the WCF12 as supported.  I've searched for the D-Link card, and that 
is also supported, so it just comes down to personal experience (range, 
power consumption, etc.)

TIA,
FL



Re: [vox-tech] Linux Block Layer is Lame (it retries too much)

2003-05-29 Thread Mike Simons

On Wed, May 28, 2003 at 02:58:30PM -0700, Michael Wenk wrote:
> In this link you have the following proc settings:
>
> echo file_readahead:0 > /proc/ide/hdc/settings
> echo breada_readahead:0 > /proc/ide/hdc/settings

*AHH* ... I tried a handful of ways to set items in this file and
couldn't get anything that worked...  Since I found that some items could
be changed via hdparm (breada, for instance, is affected by -a), and I
didn't think the others applied to the problem, I didn't look further...
  I've always wanted to play with the "current_speed" and "acoustic"
values of my drives... *:)
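For anyone else poking at this: IIRC the settings file lists
name/value/min/max/mode columns, and writes use the same name:value form
as above, so something like the following should work, assuming the kernel
and the drive actually expose the setting:

cat /proc/ide/hdc/settings
echo acoustic:128 > /proc/ide/hdc/settings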

-- 
GPG key: http://simons-clan.com/~msimons/gpg/msimons.asc
Fingerprint: 524D A726 77CB 62C9 4D56  8109 E10C 249F B7FA ACBE



Re: [vox-tech] Linux Block Layer is Lame (it retries too much)

2003-05-29 Thread Michael Wenk
On Wednesday 28 May 2003 12:09 pm, Mike Simons wrote:
> On Wed, May 28, 2003 at 11:31:56AM -0700, Jeff Newmiller wrote:
> > On Tue, 27 May 2003, Mike Simons wrote:
> > >   Last week I was having problems with the ide layer... it retries
> > > way too many times.  I was trying to read 512 byte blocks from a dying
> > > /dev/hda (using dd_rescue which calls pread), for each bad sector the
> > > kernel would try 8 times,
>
> [...]
>
> > >   Even better because the process is inside a system call, it is not
> > > killable and so there is no practical way to speed up the process.
> >
> > It should be open to termination between the time the read system call
> > returns and the write system call starts.
>
>   Yes, it was "killable" in that you could ^C or send a signal with kill,
> after waiting 10 minutes the kernel would finish retrying and the
> process would exit cleanly on the signal.
>
>   I meant there was no way to abort the 8 sector read attempt.
>
> > > - How does a 1 sector read get expanded to an 8 sector chunk?
> >
> > I don't know.  But I suspect it has to do with the "natural" way files
> > are read in... by "mmap"ing them to pages in RAM.  i386 memory managers
> > usually use 4k pages... ergo, 8 x 512B sectors.
> >
> > Some of this behavior may be due to the algorithms in dd_rescue.
>
>   Nah... dd_rescue is certainly not the cause.  It is a very simple
> program that reads blocks of a size you can specify on the command line.
>
>   It has the concept of a "soft block size" which it uses to quickly cover
> the good sections of disk, and a "hard block size" which it uses to
> slowly walk the bad sections of disk.  By default it will use the soft
> size until a read error happens, it will then drop to the hard block
> size and read until it travels a few "soft" block sizes without errors.
>   I realize I was not explicit enough, but I had set the "soft" and
> "hard" block sizes to 512 bytes, which, because soft and hard are the
> same, prevents dd_rescue from retrying the read of any bad blocks...
>
> > > - Any other ideas on how to pull the disk blocks?
> >
> > Not easy ones. (Build your own device driver that doesn't use mmap.)
>
>   Michael Wenk suggested using O_DIRECT on the open call, which is
> an excellent idea.  This is what the Oracle people mentioned at their
> Clustering Filesystem talk.  I have one more failing hard drive around
> which I'm going to try that on...


The other thing I was looking at was an ioctl call for BLKRASET, or BLKFRASET.  
I googled this and came up with an interesting link on my first shot.  

http://www.linuxtv.org/mailinglists/vdr/2002/04-2002/msg00061.html


In this link you have the following proc settings: 

echo file_readahead:0 > /proc/ide/hdc/settings
echo breada_readahead:0 > /proc/ide/hdc/settings

Maybe this or the ioctl will help (or possibly O_DIRECT).
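Something along these lines (an untested sketch; the device name is just an
example) should zero the readahead on the block device via that ioctl:

/* untested sketch: drop the block-device readahead with BLKRASET
 * before trying to pull data off the bad disk */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>                  /* BLKRASET */

int main(void)
{
    int fd = open("/dev/hda", O_RDONLY);   /* adjust the device */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (ioctl(fd, BLKRASET, 0UL) < 0)      /* readahead = 0 sectors */
        perror("ioctl BLKRASET");
    close(fd);
    return 0;
}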

Mike


-- 
[EMAIL PROTECTED]
Mike Wenk



Re: [vox-tech] Linux Block Layer is Lame (it retries too much)

2003-05-29 Thread Jeff Newmiller
On Wed, 28 May 2003, Mike Simons wrote:

> On Wed, May 28, 2003 at 11:31:56AM -0700, Jeff Newmiller wrote:
> > On Tue, 27 May 2003, Mike Simons wrote:
> > >   Last week I was having problems with the ide layer... it retries
> > > way too many times.  I was trying to read 512 byte blocks from a dying
> > > /dev/hda (using dd_rescue which calls pread), for each bad sector the
> > > kernel would try 8 times, 

[...]

> > > - Any other ideas on how to pull the disk blocks?
> > 
> > Not easy ones. (Build your own device driver that doesn't use mmap.)
> 
>   Michael Wenk suggested using O_DIRECT on the open call, which is
> an excellent idea.  This is what the Oracle people mentioned at their
> Clustering Filesystem talk.  I have one more failing hard drive around
> which I'm going to try that on...

Kudos to Michael.  I had no idea such a flag existed (it isn't in my
Debian manpage, nor in "Linux Application Development").

[...]

---
Jeff NewmillerThe .   .  Go Live...
DCN:<[EMAIL PROTECTED]>Basics: ##.#.   ##.#.  Live Go...
  Live:   OO#.. Dead: OO#..  Playing
Research Engineer (Solar/BatteriesO.O#.   #.O#.  with
/Software/Embedded Controllers)   .OO#.   .OO#.  rocks...2k
---



Re: [vox-tech] Linux Block Layer is Lame (it retries too much)

2003-05-29 Thread Mike Simons

On Wed, May 28, 2003 at 11:31:56AM -0700, Jeff Newmiller wrote:
> On Tue, 27 May 2003, Mike Simons wrote:
> >   Last week I was having problems with the ide layer... it retries
> > way too many times.  I was trying to read 512 byte blocks from a dying
> > /dev/hda (using dd_rescue which calls pread), for each bad sector the
> > kernel would try 8 times,
[...]
> >   Even better because the process is inside a system call, it is not
> > killable and so there is no practical way to speed up the process.
>
> It should be open to termination between the time the read system call
> returns and the write system call starts.

  Yes, it was "killable" in that you could ^C or send a signal with kill;
after waiting 10 minutes the kernel would finish retrying and the
process would exit cleanly on the signal.

  I meant there was no way to abort the 8 sector read attempt.


> > - How does a 1 sector read get expanded to an 8 sector chunk?
>
> I don't know.  But I suspect it has to do with the "natural" way files are
> read in... by "mmap"ing them to pages in RAM.  i386 memory managers
> usually use 4k pages... ergo, 8 x 512B sectors.
>
> Some of this behavior may be due to the algorithms in dd_rescue.

  Nah... dd_rescue is certainly not the cause.  It is a very simple
program that reads blocks of a size you can specify on the command line.

  It has the concept of a "soft block size" which it uses to quickly cover
the good sections of disk, and a "hard block size" which it uses to
slowly walk the bad sections of disk.  By default it will use the soft
size until a read error happens; it will then drop to the hard block
size and read until it travels a few "soft" block sizes without errors.
  I realize I was not explicit enough, but I had set the "soft" and
"hard" block sizes to 512 bytes, which, because soft and hard are the
same, prevents dd_rescue from retrying the read of any bad blocks...

> > - Any other ideas on how to pull the disk blocks?
>
> Not easy ones. (Build your own device driver that doesn't use mmap.)

  Michael Wenk suggested using O_DIRECT on the open call, which is
an excellent idea.  This is what the Oracle people mentioned at their
Clustering Filesystem talk.  I have one more failing hard drive around
which I'm going to try that on...
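Roughly what I have in mind (an untested sketch; the device and sector
number are made up, and O_DIRECT only bypasses the page cache -- the driver
can still retry on its own -- but it should keep a 512-byte request from
being inflated into an 8-sector chunk):

#define _GNU_SOURCE                    /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    void *buf;
    off_t sector = 12345;              /* hypothetical bad sector */
    ssize_t n;
    int fd;

    /* O_DIRECT wants the buffer, offset, and length sector-aligned */
    if (posix_memalign(&buf, 512, 512) != 0)
        return 1;

    fd = open("/dev/hda", O_RDONLY | O_DIRECT);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    n = pread(fd, buf, 512, sector * 512);
    if (n < 0)
        perror("pread");               /* report the failure; no page-cache readahead involved */

    close(fd);
    free(buf);
    return 0;
}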


> >   I was using a custom Knoppix boot floppy and a standard Knoppix CD to
> > boot a laptop with the bad drive, NFS mounting a local machine, where I
> > was dd_rescue sending the blocks that could be read.
>
> I had a similar experience a few weeks ago... dd would fail at certain
> areas on the disk, so I would use the skip option for dd to pick up after
> the dead spots.  (I didn't know about dd_rescue.) Nevertheless, the
> process was too slow, so I pulled the disk and simply replaced it.

  The slowness is really due to how many times the kernel retries; it
only takes a few seconds for the kernel to know the block is bad...

  If you haven't already returned that drive, you may be able to get
most of the filesystem off of it... all of the 4 or 5 failing drives
I've tried pulling data off have provided a working filesystem (if
you ignore that last one, where the transfer didn't finish in time).


> Slick.  I was using netcat.

  What I used to do was attach a good drive to the system, use the
Debian install floppies to boot the system, then mount a floppy disk
with the junk I needed (dd_rescue) to pull the image.

  Knoppix as the rescue system works much more nicely.  If you want to tweak
the kernel Knoppix uses, create a "boot floppy" from the Knoppix CD
(which is meant to let the disc boot on machines with non-bootable
CD-ROM drives).  The .img is a DOS filesystem in which you can replace
the vmlinuz image with one of your own making.
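Roughly like this (the image and kernel paths are only examples):

mount -o loop boot.img /mnt/floppy
cp /usr/src/linux/arch/i386/boot/bzImage /mnt/floppy/vmlinuz
umount /mnt/floppy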

  In order to be an NFS client in Knoppix you will need to start two
local services... nfs-common and portmap (see previous post on Knoppix).
The Knoppix images support NFSv3, which results in a dramatically faster
transfer rate... but you will need the nfs-kernel-server package on
the server side to support that... over 100 Mbit Ethernet I was getting
something like 11 MiB per second.
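From the Knoppix side that works out to roughly this (the server name and
export path are made up):

/etc/init.d/portmap start
/etc/init.d/nfs-common start
mount -t nfs -o nfsvers=3 server:/rescue /mnt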
  I would still recommend putting the target drive in the machine with
the source bad drive if you can, because with dma mode on you should be
able to get about 40 or 50 MiB/s ... in the good sections of disk.

  By using an NFS server I was easily able to reboot a few times and still
keep all the log files from the mirroring, plus scripts to minimize how much
I needed to type to get things going again... "ddr" or "ddr -s 10289.0k".
I'll send the ddr script I was using if you are interested...

-- 
GPG key: http://simons-clan.com/~msimons/gpg/msimons.asc
Fingerprint: 524D A726 77CB 62C9 4D56  8109 E10C 249F B7FA ACBE


Re: [vox-tech] Linux Block Layer is Lame (it retries too much)

2003-05-29 Thread Jeff Newmiller
On Tue, 27 May 2003, Mike Simons wrote:

>   Last week I was having problems with the ide layer... basically it retries
> way too many times.  I was trying to read 512 byte blocks from a dying
> /dev/hda (using dd_rescue which calls pread), for each bad sector the
> kernel would try 8 times, at time 4 and 8 it would reset the IDE bus
> (turning off things like DMA mode), and for every other failed attempt it 
> would seek to track 0...
> 
>   If that wasn't bad enough, for some reason the kernel was often trying
> to fetch 8 sectors worth of information for a single sector read.  The 
> 8 sector "chunk" being fetched was somehow related to the modulus of
> the actual sector being requested, so if you had an 8 sector bad region...
>   So if you requested any one of the 8 bad sectors from a chunk, each
> of the 8 would have 8 read attempts made... 64 read attempts all will
> fail before you can even move to the next sector, when you request the 
> next bad sector the process would begin again... even if you wanted the 
> last of the 8 sector chunk.
> 
>   Normally that is good... it's a best effort attempt to read a disk.
> However I had hundreds of bad sectors on this drive and just wanted as
> much of the filesystem as possible and didn't have days to wait.
> 
>   One bizarre thing is that this 8 sector chunk read didn't always happen;
> it appears that if one of the 8 sectors was good, the other 7 would only
> be tried one batch of times.
> 
> 
>   I found that linux/include/linux/ide.h has the following three defines:
> ===
> /*
>  * Probably not wise to fiddle with these
>  */
> #define ERROR_MAX   8   /* Max read/write errors per sector */
> #define ERROR_RESET 3   /* Reset controller every 4th retry */
> #define ERROR_RECAL 1   /* Recalibrate every 2nd retry */
> ===
> 
>   These settings are not configurable via /proc or sysctl... so
> I changed them and recompiled such that only 3 attempts would be made
> on any given block and no resets or re-calibrations were done.  Still,
> reading *each* sector in a bad 8-sector chunk was taking 100 seconds
> (about 14 minutes to move to the next chunk of 8 sectors).
> 
>   Even better because the process is inside a system call, it is not
> killable and so there is no practical way to speed up the process.

It should be open to termination between the time the read system call
returns and the write system call starts.

>   I still do not know what is causing the 8 sector "chunk" to be read.
> It seems that sys_pread calls mm/filemap.c: generic_file_read ->
> do_generic_file_read, which seems like it might be expanding the request
> size based on some readahead parameters; it figures out what
> "max_readahead" is by calling get_max_readahead on the inode.
> 
>   I tried most everything with hdparm and fiddled with sysctl 
> (vm/max-readahead and vm/min-readahead)... but there was no change in 
> behavior.  I tried obvious things like "hdparm -m 0 -a 0 -A 0 -m 0 -P 0",
> I also tried 1's, all with no noticeable effect.
> 
> I want to be more ready next time...
> 
> - How does a 1 sector read get expanded to an 8 sector chunk?

I don't know.  But I suspect it has to do with the "natural" way files are
read in... by "mmap"ing them to pages in RAM.  i386 memory managers
usually use 4k pages... ergo, 8 x 512B sectors.

Some of this behavior may be due to the algorithms in dd_rescue.

> - How this chunk reading behavior can be turned off 
>   (via command line or custom kernel patch)?

Dunno.

> - Any other ideas on how to pull the disk blocks?

Not easy ones. (Build your own device driver that doesn't use mmap.)

> 
> Thanks,
>   Mike Simons
> 
> 
>   Basically I spent 20 hours trying to read from a failing drive, and I
> got about half way through the drive before time was up.  Of the 30 million
> sectors I read, only about 500 were bad.  It is likely that the NTFS
> filesystem on the drive would have been recoverable if the pull had
> finished... because I could read-only mount the filesystem from the
> drive itself, but what I have of the image has an error at mount time.

I had a similar experience a few weeks ago... dd would fail at certain
areas on the disk, so I would use the skip option for dd to pick up after
the dead spots.  (I didn't know about dd_rescue.) Nevertheless, the
process was too slow, so I pulled the disk and simply replaced it.

>   I was using a custom Knoppix boot floppy and a standard Knoppix CD to 
> boot a laptop with the bad drive, NFS mounting a local machine, where I 
> was dd_rescue sending the blocks that could be read. 

Slick.  I was using netcat.

---
Jeff NewmillerThe .   .  Go Live...
DCN:<[EMAIL PROTECTED]>Basics: ##.#.   ##.#.  Live Go...
  Live:   OO#.. Dead: OO#..  Playing
Research Engineer (Solar/BatteriesO.O#.   #.O#.  wit