On 4/14/2012 5:00 AM, Ed W wrote:
> On 14/04/2012 04:48, Stan Hoeppner wrote:
>> On 4/13/2012 10:31 AM, Ed W wrote:
>>
>>> You mean those "answers" like:
>>> "you need to read 'those' articles again"
>>>
>>> Referring to some unknown and hard to find previous emails is not the
>>> same as answering?
On 4/14/2012 5:04 AM, Jan-Frode Myklebust wrote:
> On Fri, Apr 13, 2012 at 07:33:19AM -0500, Stan Hoeppner wrote:
>>>
>>> What I meant wasn't the drive throwing uncorrectable read errors but
>>> the drives are returning different data that each thinks is correct, or
>>> both may have sent the correct data but one of the set got corrupted
>>> on the fly.
On 14/04/2012 04:31, Stan Hoeppner wrote:
On 4/13/2012 10:31 AM, Ed W wrote:
On 13/04/2012 13:33, Stan Hoeppner wrote:
In closing, I'll simply say this: If hardware, whether a mobo-down SATA
chip, or a $100K SGI SAN RAID controller, allowed silent data corruption
at rest or in transmission to occur, there would be no storage industry
On Fri, Apr 13, 2012 at 07:33:19AM -0500, Stan Hoeppner wrote:
> >
> > What I meant wasn't the drive throwing uncorrectable read errors but
> > the drives are returning different data that each thinks is correct, or
> > both may have sent the correct data but one of the set got corrupted
> > on the fly.
On 14/04/2012 04:48, Stan Hoeppner wrote:
On 4/13/2012 10:31 AM, Ed W wrote:
You mean those "answers" like:
"you need to read 'those' articles again"
Referring to some unknown and hard to find previous emails is not the
same as answering?
No, referring to this:
On 4/12/2012 5:58 AM, Ed W wrote:
On 4/13/2012 10:31 AM, Ed W wrote:
> You mean those "answers" like:
> "you need to read 'those' articles again"
>
> Referring to some unknown and hard to find previous emails is not the
> same as answering?
No, referring to this:
On 4/12/2012 5:58 AM, Ed W wrote:
> The claim by ZFS/BTRFS authors and others is that data silently "bit
> rots" on its own. The claim is therefore that you can have a raid1 pair
> where neither drive reports a hardware failure, but each gives you
> different data?
On 4/13/2012 10:31 AM, Ed W wrote:
> On 13/04/2012 13:33, Stan Hoeppner wrote:
>> In closing, I'll simply say this: If hardware, whether a mobo-down SATA
>> chip, or a $100K SGI SAN RAID controller, allowed silent data corruption
>> at rest or in transmission to occur, there would be no storage industry
On Fri, 13 Apr 2012, Ed W wrote:
On 13/04/2012 13:33, Stan Hoeppner wrote:
What I meant wasn't the drive throwing uncorrectable read errors but
the drives are returning different data that each thinks is correct, or
both may have sent the correct data but one of the set got corrupted
on the fly.
On 13/04/2012 13:33, Stan Hoeppner wrote:
What I meant wasn't the drive throwing uncorrectable read errors but
the drives are returning different data that each thinks is correct, or
both may have sent the correct data but one of the set got corrupted
on the fly. After reading the articles posted,
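The scenario described here, both mirror halves readable but disagreeing, can be demonstrated directly. A minimal sketch follows, assuming hypothetical member devices /dev/sda2 and /dev/sdb2, root access, and a quiesced (read-only) array; it is an illustration, not a supported md tool.

#!/usr/bin/env python3
# Sketch: compare the two members of an md RAID1 pair chunk by chunk
# to find places where the mirrors silently disagree. Device names are
# assumptions; run only against a quiesced array, as root.
import hashlib

DEV_A = "/dev/sda2"   # first mirror member (assumption)
DEV_B = "/dev/sdb2"   # second mirror member (assumption)
CHUNK = 1024 * 1024   # compare 1 MiB at a time

def digests(path):
    with open(path, "rb") as f:
        offset = 0
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            yield offset, hashlib.sha256(chunk).hexdigest()
            offset += len(chunk)

for (off, d_a), (_, d_b) in zip(digests(DEV_A), digests(DEV_B)):
    if d_a != d_b:
        # Neither drive reported an error, yet the copies differ;
        # without an independent checksum we cannot say which is good.
        print(f"mirrors diverge at byte offset {off}")

Finding a divergence tells you the mirrors disagree, but, as noted above, not which copy is the accurate one.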
On 13/04/2012 06:29, Stan Hoeppner wrote:
On 4/12/2012 5:58 AM, Ed W wrote:
The claim by ZFS/BTRFS authors and others is that data silently "bit
rots" on it's own. The claim is therefore that you can have a raid1 pair
where neither drive reports a hardware failure, but each gives you
different
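What the ZFS/BTRFS design adds to this picture is a checksum stored apart from the data, which can arbitrate between two disagreeing copies. A conceptual sketch of that idea, not actual filesystem code:

import hashlib

def read_with_repair(copy_a: bytes, copy_b: bytes, stored_sum: str) -> bytes:
    """Return whichever mirror copy matches the independently stored checksum."""
    for candidate in (copy_a, copy_b):
        if hashlib.sha256(candidate).hexdigest() == stored_sum:
            # A real implementation would also rewrite the bad copy
            # from the good one ("self-healing").
            return candidate
    raise IOError("both copies fail the checksum: unrecoverable block")

# Example: one copy suffers a silent single-bit flip; neither "drive" errors.
good = b"Return-Path: <user@example.com>\n"
bad = bytearray(good); bad[0] ^= 0x01
stored = hashlib.sha256(good).hexdigest()
assert read_with_repair(bytes(bad), good, stored) == good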
On 4/13/2012 8:12 AM, Jim Lawson wrote:
> On 04/13/2012 08:33 AM, Stan Hoeppner wrote:
>>> What I meant wasn't the drive throwing uncorrectable read errors but
>>> the drives are returning different data that each thinks is correct, or
>>> both may have sent the correct data but one of the set got corrupted
>>> on the fly.
On 13/04/2012 13:21, Timo Sirainen wrote:
On 13.4.2012, at 15.17, Ed W wrote:
On 13/04/2012 12:51, Timo Sirainen wrote:
- Use the checksums to assist with replication speed/efficiency (dsync or
custom imap commands)
It would be of some use with dbox index rebuilding. I don't think it would help
with dsync.
On 04/13/2012 08:33 AM, Stan Hoeppner wrote:
>> What I meant wasn't the drive throwing uncorrectable read errors but
>> the drives are returning different data that each thinks is correct, or
>> both may have sent the correct data but one of the set got corrupted
>> on the fly. After reading the articles posted,
On 4/13/2012 1:12 AM, Emmanuel Noobadmin wrote:
> On 4/12/12, Stan Hoeppner wrote:
>> On 4/11/2012 9:23 PM, Emmanuel Noobadmin wrote:
>>> I suppose the controller could throw an error if
>>> the two drives returned data that didn't agree with each other, but it
>>> wouldn't know which is the accurate copy, and that wouldn't protect the
>>> integrity of the data.
On 13.4.2012, at 15.17, Ed W wrote:
> On 13/04/2012 12:51, Timo Sirainen wrote:
>>> - Use the checksums to assist with replication speed/efficiency (dsync or
>>> custom imap commands)
>> It would be of some use with dbox index rebuilding. I don't think it would
>> help with dsync.
> ..
>>> - File RFCs for new imap features along the "lemonade" lines
On 13/04/2012 12:51, Timo Sirainen wrote:
- Use the checksums to assist with replication speed/efficiency (dsync or
custom imap commands)
It would be of some use with dbox index rebuilding. I don't think it would help
with dsync.
..
- File RFCs for new imap features along the "lemonade" lines
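Ed's checksum-replication suggestion amounts to exchanging per-message digests and transferring only what differs. A rough sketch of the idea; the maildir paths are hypothetical and this is not how dsync is actually implemented:

#!/usr/bin/env python3
# Sketch: use per-message checksums to decide what needs replicating,
# instead of comparing full message bodies. Illustration only.
import hashlib, os

def manifest(maildir: str) -> dict:
    """Map filename -> SHA-256 of content for every message in cur/."""
    sums = {}
    cur = os.path.join(maildir, "cur")
    for name in os.listdir(cur):
        with open(os.path.join(cur, name), "rb") as f:
            sums[name] = hashlib.sha256(f.read()).hexdigest()
    return sums

local = manifest("/var/mail/user/Maildir")      # assumption
remote = manifest("/mnt/replica/user/Maildir")  # assumption

# Only messages missing or differing on the replica need transfer.
to_send = [n for n, s in local.items() if remote.get(n) != s]
print(f"{len(to_send)} of {len(local)} messages need replication")

In a real replicator the remote manifest would of course arrive over the wire rather than from a mounted path.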
On 12.4.2012, at 15.10, Ed W wrote:
> On 12/04/2012 12:09, Timo Sirainen wrote:
>> On 12.4.2012, at 13.58, Ed W wrote:
>>
>>> The claim by ZFS/BTRFS authors and others is that data silently "bit rots"
>>> on its own. The claim is therefore that you can have a raid1 pair where
>>> neither drive reports a hardware failure, but each gives you different data?
On 4/12/12, Stan Hoeppner wrote:
> On 4/11/2012 9:23 PM, Emmanuel Noobadmin wrote:
>> I suppose the controller could throw an error if
>> the two drives returned data that didn't agree with each other, but it
>> wouldn't know which is the accurate copy, and that wouldn't protect the
>> integrity of the data.
On 4/12/2012 5:58 AM, Ed W wrote:
> The claim by ZFS/BTRFS authors and others is that data silently "bit
> rots" on it's own. The claim is therefore that you can have a raid1 pair
> where neither drive reports a hardware failure, but each gives you
> different data?
You need to read those articles again.
Hi there,
> I have to say - I haven't actually seen this happen... Do any of your
> big mailstore contacts observe this, eg rackspace, etc?
Just to throw into the discussion that with (silent) data corruption,
not only "the disk" is involved but many other parts of your system.
So perhaps you w
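Since corruption can creep in anywhere along that path (RAM, HBA, cable, firmware, filesystem), one defence is an application-level digest computed at delivery and re-verified at read time, so a mismatch is caught no matter which layer caused it. A minimal sketch; the sidecar-file convention is invented for illustration and is not a Dovecot feature:

#!/usr/bin/env python3
# Sketch of end-to-end verification: record a digest when a message is
# delivered, re-check it when the message is read. A mismatch flags
# corruption anywhere in the path, not just on the disk.
import hashlib

def deliver(path: str, data: bytes) -> None:
    with open(path, "wb") as f:
        f.write(data)
    with open(path + ".sha256", "w") as f:
        f.write(hashlib.sha256(data).hexdigest())

def read_verified(path: str) -> bytes:
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".sha256") as f:
        if hashlib.sha256(data).hexdigest() != f.read().strip():
            raise IOError(f"silent corruption detected in {path}")
    return data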
On 12/04/2012 12:09, Timo Sirainen wrote:
On 12.4.2012, at 13.58, Ed W wrote:
The claim by ZFS/BTRFS authors and others is that data silently "bit rots" on
its own. The claim is therefore that you can have a raid1 pair where neither drive
reports a hardware failure, but each gives you different data?
On 12/04/2012 02:18, Stan Hoeppner wrote:
On 4/11/2012 11:50 AM, Ed W wrote:
Re XFS. Have you been watching BTRFS recently?
I will concede that despite the authors considering it production ready
I won't be using it for my servers just yet. However, on single-disk
benchmarks it performs fairly similarly to XFS and in certain cases
(multi-threaded performance)
On 12.4.2012, at 13.58, Ed W wrote:
> The claim by ZFS/BTRFS authors and others is that data silently "bit rots" on
> its own. The claim is therefore that you can have a raid1 pair where neither
> drive reports a hardware failure, but each gives you different data?
That's one reason why I plan
On 12/04/2012 11:20, Stan Hoeppner wrote:
On 4/11/2012 9:23 PM, Emmanuel Noobadmin wrote:
On 4/12/12, Stan Hoeppner wrote:
On 4/11/2012 11:50 AM, Ed W wrote:
One of the snags of md RAID1 vs RAID6 is the lack of checksumming in the
event of bad blocks. (I'm not sure what actually happens when md
scrubbing finds a bad sector with raid1..?)
On 4/11/2012 9:23 PM, Emmanuel Noobadmin wrote:
> On 4/12/12, Stan Hoeppner wrote:
>> On 4/11/2012 11:50 AM, Ed W wrote:
>>> One of the snags of md RAID1 vs RAID6 is the lack of checksumming in the
>>> event of bad blocks. (I'm not sure what actually happens when md
>>> scrubbing finds a bad sector with raid1..?)
On 4/12/12, Stan Hoeppner wrote:
> On 4/11/2012 11:50 AM, Ed W wrote:
>> One of the snags of md RAID1 vs RAID6 is the lack of checksumming in the
>> event of bad blocks. (I'm not sure what actually happens when md
>> scrubbing finds a bad sector with raid1..?). For low performance
>> requirements I have become paranoid and been using RAID
On 4/11/2012 11:50 AM, Ed W wrote:
> Re XFS. Have you been watching BTRFS recently?
>
> I will concede that despite the authors considering it production ready
> I won't be using it for my servers just yet. However, on single-disk
> benchmarks it performs fairly similarly to XFS and in certain cases
> (multi-threaded performance)
On 4/10/2012 5:22 AM, Adrian Minta wrote:
> On 04/10/12 08:00, Stan Hoeppner wrote:
>> Interestingly, I designed a COTS server back in January to handle at
>> least 5k concurrent IMAP users, using best of breed components. If you
>> or someone there has the necessary hardware skills, you could assemble
>> this system and simply use it for NFS instead
On 2012-04-11 4:48 PM, Adrian Minta wrote:
On 04/11/12 19:50, Ed W wrote:
One of the snags of md RAID1 vs RAID6 is the lack of checksumming in
the event of bad blocks. (I'm not sure what actually happens when md
scrubbing finds a bad sector with raid1..?). For low performance
requirements I have become paranoid and been using RAID
On 04/11/12 19:50, Ed W wrote:
...
One of the snags of md RAID1 vs RAID6 is the lack of checksumming in
the event of bad blocks. (I'm not sure what actually happens when md
scrubbing finds a bad sector with raid1..?). For low performance
requirements I have become paranoid and been using RAID
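To Ed's parenthetical question: md exposes scrubbing through sysfs. Writing "check" to sync_action runs a read-and-compare pass and mismatch_cnt reports any disagreement; a subsequent "repair" pass on RAID1 simply rewrites the other members from the first healthy one, since md has no checksum to decide which copy was actually correct. A small sketch, assuming a hypothetical /dev/md0 and root access:

#!/usr/bin/env python3
# Sketch: drive an md "check" scrub through sysfs and report mismatches.
# Array name is an assumption; needs root.
import time

MD = "/sys/block/md0/md"   # hypothetical array

with open(f"{MD}/sync_action", "w") as f:
    f.write("check\n")     # start a consistency scrub

# Poll until the scrub finishes (sync_action returns to "idle").
while True:
    with open(f"{MD}/sync_action") as f:
        if f.read().strip() == "idle":
            break
    time.sleep(10)

with open(f"{MD}/mismatch_cnt") as f:
    print("mismatched sectors found by the scrub:", f.read().strip())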
Re XFS. Have you been watching BTRFS recently?
I will concede that despite the authors considering it production ready
I won't be using it for my servers just yet. However, on single-disk
benchmarks it performs fairly similarly to XFS and in certain cases
(multi-threaded performance) c
On 4/10/2012 1:09 AM, Emmanuel Noobadmin wrote:
> On 4/10/12, Stan Hoeppner wrote:
>> SuperMicro H8SGL G34 mobo w/dual Intel GbE, 2GHz 8-core Opteron
>> 32GB Kingston REG ECC DDR3, LSI 9280-4i4e, Intel 24 port SAS expander
>> 20 x 1TB WD RE4 Enterprise 7.2K SATA2 drives
>> NORCO RPC-4220 4U 20 hot-swap bay chassis
On 04/10/12 08:00, Stan Hoeppner wrote:
Interestingly, I designed a COTS server back in January to handle at
least 5k concurrent IMAP users, using best of breed components. If you
or someone there has the necessary hardware skills, you could assemble
this system and simply use it for NFS instead
On 4/10/12, Stan Hoeppner wrote:
>> So I have to make do with OTS commodity parts and free software for
>> the most parts.
>
> OTS meaning you build your own systems from components? Too few in the
> business realm do so today. :(
For the in-house stuff and budget customers, yes; in fact both the
On 4/9/2012 2:15 PM, Emmanuel Noobadmin wrote:
> Unfortunately, for the usual kind of customers we have here, spending that
> kind of budget isn't justifiable. The only reason we're providing
> email services is because customers wanted freebies and they felt
> there was no reason why we couldn't give th
On 4/9/12, Stan Hoeppner wrote:
> So it seems you have two courses of action:
> 1. Identify individual current choke points and add individual systems
> and storage to eliminate those choke points.
>
> 2. Analyze your entire workflow and all systems, identifying all choke
> points, then design a
On 4/7/2012 9:43 AM, Emmanuel Noobadmin wrote:
> On 4/7/12, Stan Hoeppner wrote:
>
> Firstly, thanks for the comprehensive reply. :)
>
>> I'll assume "networked storage nodes" means NFS, not FC/iSCSI SAN, in
>> which case you'd have said "SAN".
>
> I haven't decided on that but it would either be NFS or iSCSI over
> Gigabit.
On 4/7/2012 3:45 PM, Robin wrote:
>
>> Putting XFS on a single RAID1 pair, as you seem to be describing above
>> for the multiple "thin" node case, and hitting one node with parallel
>> writes to multiple user mail dirs, you'll get less performance than
>> EXT3/4 on that mirror pair--possibly less than half, depending on the
>> size of the disks
Putting XFS on a single RAID1 pair, as you seem to be describing above
for the multiple "thin" node case, and hitting one node with parallel
writes to multiple user mail dirs, you'll get less performance than
EXT3/4 on that mirror pair--possibly less than half, depending on the
size of the disks
On 4/7/12, Stan Hoeppner wrote:
Firstly, thanks for the comprehensive reply. :)
> I'll assume "networked storage nodes" means NFS, not FC/iSCSI SAN, in
> which case you'd have said "SAN".
I haven't decided on that but it would either be NFS or iSCSI over
Gigabit. I don't exactly get a big budget
On 4/5/2012 3:02 PM, Emmanuel Noobadmin wrote:
Hi Emmanuel,
> I'm trying to improve the setup of our Dovecot/Exim mail servers to
> handle increasingly huge accounts (everybody thinks it's
> infinitely growing storage like Gmail and stores everything forever in
> their email accounts) by