On Sun, Apr 20, 2008 at 09:35:32PM +0200, Louis V. Lambrecht wrote:
> Yeah! Got a 500Gig eSATA mounted, 6 slices. The problem is not how
> to address the drive, the problem is to backup all that data. That
> is, eventually, 4 gig per DVD, or XFS, or a cluster. My main database
> I can't live wi
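The disc count behind that backup complaint can be worked out (a rough estimate, assuming "4 gig per DVD" refers to nominal single-layer 4.7 GB discs):

```python
# Rough estimate of discs needed to back up the drive described above.
# Assumptions: "500Gig" means 500 decimal gigabytes, and each DVD holds
# the nominal single-layer capacity of 4.7 GB.
import math

data_bytes = 500 * 10**9       # the 500 GB eSATA drive
dvd_bytes = 4.7 * 10**9        # nominal single-layer DVD capacity

discs = math.ceil(data_bytes / dvd_bytes)
print(discs)                   # -> 107 discs
```

Which makes the point: at roughly a hundred discs per full backup, optical media stops being a practical answer.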
On Sun, Apr 20, 2008 at 03:35:13PM -0400, Chris Zakelj wrote:
> Matthew Weigel wrote:
>> Chris Zakelj wrote:
>>
>>> ... I'm wondering if thought is being given on how to make the physical
>>> size (not filesystem... I totally understand why those should be kept
>>> small) limitation of http://ww
On Sun, 2008-04-20 at 22:53 -0500, Matthew Weigel wrote:
> David Gwynne wrote:
>
> > solaris suffers from this problem. you cant use big disks with 32bit
> > solaris kernels.
>
> For UFS, at least, but doesn't ZFS on i386 (not amd64) scale?
The filesystem yes, but the block addressing no. I had to
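The ceiling being described can be sketched numerically (an assumption-flagged sketch: block numbers held in a signed 32-bit integer, as with the classic daddr_t type, and 512-byte sectors):

```python
# Why 32-bit block addressing, not the filesystem, caps usable disk size.
# Assumptions: block numbers are stored in a signed 32-bit integer
# (the classic daddr_t) and sectors are 512 bytes.
SECTOR_BYTES = 512
MAX_BLOCKS = 2**31             # largest count a signed 32-bit int can address

limit_bytes = MAX_BLOCKS * SECTOR_BYTES
print(limit_bytes // 2**40, "TiB")   # -> 1 TiB, however well the filesystem scales
```

So a filesystem like ZFS can scale far beyond this, but the kernel's block layer still can't hand it sectors past the 1 TiB mark.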
On 2008-04-21, Siegbert Marschall <[EMAIL PROTECTED]> wrote:
> i think there are some companies out there having collected a lot
> more smart-data than we do, wonder what they do with it... ;)
in the case of Google, they wrote a paper, "Failure Trends in a
Large Disk Drive Population" (Pinheiro, We
On 21/04/2008, at 1:53 PM, Matthew Weigel wrote:
David Gwynne wrote:
solaris suffers from this problem. you cant use big disks with
32bit solaris kernels.
For UFS, at least, but doesn't ZFS on i386 (not amd64) scale?
this is a block layer problem, nothing to do with the filesystems. if
David Gwynne wrote:
solaris suffers from this problem. you cant use big disks with 32bit
solaris kernels.
For UFS, at least, but doesn't ZFS on i386 (not amd64) scale?
--
Matthew Weigel
hacker
unique & idempot.ent
* Siegbert Marschall <[EMAIL PROTECTED]> [2008-04-21 02:38:10]:
> Hello,
>
> >
> > I'm curious how much more failure in the new "perpendicular" drives
> > you are seeing. I can certainly see various drive makers pushing
> > capacity irrespective of reliability. Germane to this case, some
> > of
Hello,
>
> I'm curious how much more failure in the new "perpendicular" drives
> you are seeing. I can certainly see various drive makers pushing
> capacity irrespective of reliability. Germane to this case, some
> of them reduce the reserve storage for bad sectors for that extra
> storage. Tis
On 21/04/2008, at 4:46 AM, Matthew Weigel wrote:
Chris Zakelj wrote:
a non-issue on 64-bit platforms
Whether a system is 64-bit or not isn't very relevant to this -
that mostly establishes what the memory address space is, *not* the
size of integers that can be used by the system.
solar
Matthew Weigel wrote:
Chris Zakelj wrote:
... I'm wondering if thought is being given on how to make the
physical size (not filesystem... I totally understand why those
should be kept small) limitation of
http://www.openbsd.org/faq/faq14.html#LargeDrive
http://www.openbsd.org/43.html
"New F
Chris Zakelj wrote:
Travers Buda wrote:
I can certainly see various drive makers pushing capacity
irrespective of reliability. Germane to this case, some of them
reduce the reserve storage for bad sectors for that extra storage.
Going along with this, on a recent trip to my local computer
Chris Zakelj wrote:
... I'm wondering if
thought is being given on how to make the physical size (not
filesystem... I totally understand why those should be kept small)
limitation of http://www.openbsd.org/faq/faq14.html#LargeDrive
http://www.openbsd.org/43.html
"New Functionality:
...
o T
Travers Buda wrote:
I can certainly see various drive makers pushing capacity
irrespective of reliability. Germane to this case, some of them
reduce the reserve storage for bad sectors for that extra storage.
Going along with this, on a recent trip to my local computer megastore,
I notice
* Siegbert Marschall <[EMAIL PROTECTED]> [2008-04-20 11:19:31]:
> Hello,
>
> > I don't know if anyone brought this up, and I hate to state the
> > obvious, but if you're getting bad blocks then the hard drive has
> > exhausted its ability to deal with them on its own and should be
> > replaced.
Hello,
> I don't know if anyone brought this up, and I hate to state the
> obvious, but if you're getting bad blocks then the hard drive has
> exhausted its ability to deal with them on its own and should be
> replaced. Otherwise you'll see data loss/corruption and a higher
> probability of a tot
On 19/04/2008, ropers <[EMAIL PROTECTED]> wrote:
> On 18/04/2008, Calomel <[EMAIL PROTECTED]> wrote:
> > Ropers,
> >
> > You can find the badblocks utility prepackaged in "e2fsprogs".
>
>
> THANK YOU! :) I had wondered why I couldn't find badblocks among
> OpenBSD's packages. This explains it.
On 2008-04-19, ropers <[EMAIL PROTECTED]> wrote:
> Looking at the package contents (
> http://www.openbsd.org/4.2_packages/i386/e2fsprogs-1.27p5.tgz-contents.html
> ), I've also figured out how to search for stuff like this in the
> future:
>
> http://www.google.ie/search?q=badblocks+inurl%3Aopenbs
* ropers <[EMAIL PROTECTED]> [2008-04-19 02:19:18]:
> On 18/04/2008, Calomel <[EMAIL PROTECTED]> wrote:
> > Ropers,
> >
> > You can find the badblocks utility prepackaged in "e2fsprogs".
>
> THANK YOU! :) I had wondered why I couldn't find badblocks among
> OpenBSD's packages. This explains it.
On 18/04/2008, Calomel <[EMAIL PROTECTED]> wrote:
> Ropers,
>
> You can find the badblocks utility prepackaged in "e2fsprogs".
THANK YOU! :) I had wondered why I couldn't find badblocks among
OpenBSD's packages. This explains it. I will say in my defense ;-)
that badblocks is not ext2-specific, s
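The badblocks invocations being discussed can be sketched as follows (device name rsd0c is an illustrative placeholder; the demonstration below runs against a small image file so the sketch is self-contained, since badblocks will also scan a plain file):

```shell
# Typical badblocks modes from the e2fsprogs package (a sketch; replace
# the placeholder device /dev/rsd0c with your own raw disk device):
#   badblocks -sv  /dev/rsd0c    # read-only scan, show progress
#   badblocks -nsv /dev/rsd0c    # non-destructive read-write test
#   badblocks -wsv /dev/rsd0c    # DESTRUCTIVE write test (wipes all data)
#
# Self-contained demonstration against a scratch image file:
dd if=/dev/zero of=/tmp/bb-demo.img bs=1k count=64 2>/dev/null
if command -v badblocks >/dev/null 2>&1; then
    badblocks -sv /tmp/bb-demo.img && echo "scan clean"
else
    echo "badblocks not installed; it ships in the e2fsprogs package"
fi
```

As the thread notes, none of this is ext2-specific: badblocks only reads and writes raw blocks, so it works on any device node regardless of what filesystem, if any, is on it.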
Jon Simola wrote:
Not claiming to be an optimal solution (dd is faster), but does a
read pass across the entire partition: $ sudo md5 /dev/rwd0c MD5
(/dev/rwd0c) = a85c2c67475f983a98007fd9a47378b7
I think part of what he wanted about badblocks is that it does a
non-destructive write test as we
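The whole-disk read pass mentioned above can be sketched with dd (shown here against a scratch file so the sketch is self-contained; against a real disk the input would be the raw device, e.g. /dev/rwd0c as in the md5 example):

```shell
# A raw read pass: dd reports any read error it hits, which is the
# point of reading the whole device even though the output is discarded.
# Against a real disk this would be:
#   dd if=/dev/rwd0c of=/dev/null bs=64k
# Demonstrated on a 1 MiB scratch file here:
dd if=/dev/zero of=/tmp/readpass.img bs=64k count=16 2>/dev/null
dd if=/tmp/readpass.img of=/dev/null bs=64k 2>/dev/null && echo "read pass clean"
```

Note this only exercises reads; unlike badblocks' non-destructive mode, it cannot surface sectors that fail only on write.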
On 4/18/08, ropers <[EMAIL PROTECTED]> wrote:
> Sometimes I find myself in need of a disk checking utility that can
> check both disks with known *and unknown* filesystems, and/or that can
> check even currently unpartitioned space on a disk.
Not claiming to be an optimal solution (dd is faster)
Ropers,
You can find the badblocks utility prepackaged in "e2fsprogs".
Hope this helps,
BadBlocks Hard Drive Validation and/or Destructive Wipe
http://calomel.org/badblocks_wipe.html
--
Calomel @ http://calomel.org
Open Source Research and Reference
On Fri, Apr 18, 2008 at 08:44:27P
Sometimes I find myself in need of a disk checking utility that can
check both disks with known *and unknown* filesystems, and/or that can
check even currently unpartitioned space on a disk.
There exists such a program for Linux, called badblocks:
http://www.linuxmanpages.com/man8/badblocks.8.php