Thanks for your observations.

HOWEVER, I didn't pose the question "How do I architect the HA and storage and
everything for an email system?" Our site, like many other data centers, has HA
standards and politics and all this other baggage that may lead a design to a
certain point. Thus our answer ...
On Fri, 30 Nov 2007, Vincent Fox wrote:
... reformatted ...
> We will be using Cyrus to store mail on 2540 arrays.
>
> We have chosen to build 5-disk RAID-5 LUNs in 2 arrays which are
> both connected to the same host, and mirror and stripe the LUNs. So a
> ZFS RAID-10 set composed of 4 LUNs. Multi-pathing also in use for
> redundancy.
> > That depends upon exactly what effect turning off the
> > ZFS cache-flush mechanism has.
>
> The only difference is that ZFS won't send a
> SYNCHRONIZE CACHE command at the end of a transaction
> group (or ZIL write). It doesn't change the actual
> read or write commands (which are always sent as ordinary writes).

Bill, you have a long-winded way of saying "I don't
know". But thanks for elucidating the possibilities.
Hmmm - I didn't mean to be *quite* as noncommittal as that suggests: I was
trying to say (without intending to offend) "FOR GOD'S SAKE, MAN: TURN IT BACK
ON!", and explaining why (i.e. ...).
> That depends upon exactly what effect turning off the
> ZFS cache-flush mechanism has.
The only difference is that ZFS won't send a SYNCHRONIZE CACHE command at the
end of a transaction group (or ZIL write). It doesn't change the actual read or
write commands (which are always sent as ordinary writes).
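For context, the flush-on-transaction-group behavior under discussion is
governed by a kernel tunable on builds that have it; this is a sketch of the
/etc/system setting, assuming a release that supports the zfs_nocacheflush
tunable (older releases may lack it or spell it differently):

```shell
# /etc/system fragment - takes effect on the next reboot.
# Only even arguably safe when EVERY device in the pool sits behind a
# non-volatile (battery-backed) write cache; otherwise leave it alone.
set zfs:zfs_nocacheflush = 1
```

Removing the line (or setting it back to 0) and rebooting restores the default
SYNCHRONIZE CACHE behavior that Bill is arguing should stay on.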
Bill, you have a long-winded way of saying "I don't know". But thanks for
elucidating the possibilities.
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/z
> I think the point of dual battery-backed controllers is
> that data should never be lost. Am I wrong?
That depends upon exactly what effect turning off the ZFS cache-flush mechanism
has. If all data is still sent to the controllers as 'normal' disk writes and
they have no concept of, say, ...
> From Neil's comment in the blog entry that you
> referenced, that sounds *very* dicey (at least by
> comparison with the level of redundancy that you've
> built into the rest of your system) - even if you
> have rock-solid UPSs (which have still been known to
> fail). Allowing a disk to lie to h
> We are running Solaris 10u4 - is the log option in
> there?
Someone more familiar with the specifics of the ZFS releases will have to
answer that.
>
> If this ZIL disk also goes dead, what is the failure
> mode and recovery option then?
The ZIL should at a minimum be mirrored. But since that
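On releases new enough to support separate log devices, attaching the ZIL as a
mirror is one command; a sketch with hypothetical pool and device names:

```shell
# Add a mirrored separate log (slog) to an existing pool.
# 'tank', 'c2t0d0' and 'c2t1d0' are placeholders, not real device names.
zpool add tank log mirror c2t0d0 c2t1d0

# Confirm the "logs" vdev appears in the pool layout:
zpool status tank
```

With the mirror, losing one log device leaves the pool running on the
survivor; as I understand it, on these early releases a dead unmirrored log
device could not simply be removed from the pool, hence the recommendation.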
> Sounds good so far: lots of small files in a largish
> system with presumably significant access parallelism
> makes RAID-Z a non-starter, but RAID-5 should be OK,
> especially if the workload is read-dominated. ZFS
> might aggregate small writes such that their
> performance would be good as well.
> On Dec 1, 2007 7:15 AM, Vincent Fox wrote:
>
> Any reason why you are using a mirror of raid-5
> lun's?
>
> I can understand that perhaps you want ZFS to be in
> control of
> rebuilding broken vdev's, if anything should go wrong
> ... but
> rebuilding RAID-5's seems a little over the top.
Because the
> Hi Bill,
> ...
> > lots of small files in a largish system with presumably
> > significant access parallelism makes RAID-Z a non-starter,
>
> Why does "lots of small files in a largish system with presumably
> significant access parallelism" make RAID-Z a non-starter?
>
> thanks,
> max
Every ZFS block in a RAID-Z group is striped across all of the disks in the
group, so every small random read has to touch every disk: the group as a
whole delivers roughly the random-read IOPS of a single drive.
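The arithmetic behind that, as a back-of-envelope shell sketch (the 150 IOPS
per spindle figure is an assumption for illustration, not a measurement):

```shell
#!/bin/sh
per_disk=150   # assumed small random reads per second for one spindle
disks=10       # total spindles in the comparison

# RAID-Z: every block is spread over the whole group, so each small
# random read busies every spindle - the group behaves like one disk:
echo $((per_disk))

# Five 2-way mirrors: each small read is satisfied by a single spindle,
# and either half of a mirror can serve reads:
echo $((per_disk * disks))
```

Same ten spindles, roughly a factor-of-ten difference in concurrent
small-read throughput, which is why the access-parallelism point matters.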
Hi Bill,
can you guess? wrote:
>> We will be using Cyrus to store mail on 2540 arrays.
>>
>> We have chosen to build 5-disk RAID-5 LUNs in 2
>> arrays which are both connected to the same host, and
>> mirror and stripe the LUNs. So a ZFS RAID-10 set
>> composed of 4 LUNs. Multi-pathing also in use for redundancy.
> Any reason why you are using a mirror of raid-5
> lun's?
Some people aren't willing to run the risk of a double failure - especially
when recovery from a single failure may take a long time. E.g., if you've
created a disaster-tolerant configuration that separates your two arrays and a
fire c
> We will be using Cyrus to store mail on 2540 arrays.
>
> We have chosen to build 5-disk RAID-5 LUNs in 2
> arrays which are both connected to the same host, and
> mirror and stripe the LUNs. So a ZFS RAID-10 set
> composed of 4 LUNs. Multi-pathing also in use for
> redundancy.
Sounds good so far: lots of small files in a largish system with presumably
significant access parallelism makes RAID-Z a non-starter, but RAID-5 should be
OK, especially if the workload is read-dominated. ZFS might aggregate small
writes such that their performance would be good as well.
We will be using Cyrus to store mail on 2540 arrays.

We have chosen to build 5-disk RAID-5 LUNs in 2 arrays which are both connected
to the same host, and mirror and stripe the LUNs. So a ZFS RAID-10 set composed
of 4 LUNs. Multi-pathing also in use for redundancy.

My question is: any guidance on ...
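For concreteness, the layout described above might be created like this; a
sketch only - 'mailpool' and the LUN device names are hypothetical, and the
real multipathed names would come from the 2540s:

```shell
# Four RAID-5 LUNs (two per 2540), mirrored across arrays and striped
# by ZFS: each mirror pairs one LUN from each array, so an entire array
# can fail without losing the pool.
zpool create mailpool \
    mirror array1_lun0 array2_lun0 \
    mirror array1_lun1 array2_lun1
```

Pairing LUNs across the two arrays (rather than within one) is what makes the
mirror, not just the RAID-5, survive an array-level failure.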