>The problem with fully automated systems for remote replication is
>that they are fully automated. This opens you up to a set of failure modes
>that you may want to avoid, such as replication of data that you don't
>want to replicate. This is why most replication is used to support disaster
>recovery
Matt Beebe wrote:
> But what happens to the secondary server? Specifically to its bit-for-bit
> copy of Drive #2... presumably it is still good, but ZFS will offline that
> disk on the primary server, replicate the metadata, and when/if I "promote"
> the secondary server, it will also be running
Let me drag this thread kicking and screaming back to ZFS...
Use case:
- We need an NFS server that can be replicated to another building to
handle both scheduled powerdowns and unplanned outages. For scheduled
powerdowns we'd want to fail over a week in advance, and fail back some
time later.
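For the scheduled case, an AVS-style failover/failback would look roughly
like this. The pool name is hypothetical and this is only a sketch of the
usual SNDR sequence, not a tested procedure:

  # planned failover, run before the powerdown:
  zpool export tank     # on the primary NFS server
  sndradm -l            # drop the SNDR sets into logging mode
  zpool import tank     # on the secondary; serve NFS from here
  # failback once the primary building is back:
  sndradm -u -r         # reverse update sync, secondary -> primary
  zpool export tank     # on the secondary, once the sync completes
  zpool import tank     # back on the primary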
Erast Benson wrote:
>> Uh, no, DRBD addresses only replication. Linux-HA (aka Heartbeat)
>> addresses availability. They can be an integrated solution and are to
>> some degree intended that way, so I have no idea where your opinion
>> is coming from.
>>
>
> Because in my opinion DRBD takes s
We are currently working on a Solaris/ZFS based central file system to
replace the DCE/DFS-based implementation we have had in place for over 10
years. One of the features of our previous implementation was that access
to files regardless of method (CIFS, AFP, HTTP, FTP, etc.) was completely
controlled
I ran into an odd problem importing a zpool while testing avs. I was
trying to simulate a drive failure, break SNDR replication, and then
import the pool on the secondary. To simulate the drive failure, I just
offlined one of the disks in the RAIDZ set.
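For reference, the sequence being tested looks roughly like this; the pool
and device names are hypothetical, and the sndradm step is only a sketch of
putting the SNDR set into logging mode:

  zpool offline tank c1t2d0   # on the primary: simulate the failed drive
  sndradm -l                  # break replication: drop into logging mode
  zpool import -f tank        # on the secondary: force-import the pool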
correction below...
Richard Elling wrote:
> Haiou Fu (Kevin) wrote:
>> The closest thing I can find is:
>> http://bugs.opensolaris.org/view_bug.do?bug_id=6421958
> Look at the man page section on zfs(1m) for -R and -I option explanations.
> http://docs.sun.com/app/docs/doc/819-2240/zfs-1m?a=view
On Wed, Sep 10, 2008 at 16:56, Matt Beebe <[EMAIL PROTECTED]> wrote:
> So how 'bout it, hardware vendors? When can we get a PCIe (x8) SAS/SATA
> controller with an x4 internal port and an x4 external port and 512MB of
> battery-backed cache for about $250?? :) Heck, I'd take SATA-only if I could get
Haiou Fu (Kevin) wrote:
> The closest thing I can find is:
> http://bugs.opensolaris.org/view_bug.do?bug_id=6421958
>
Look at the man page section on zfs(1m) for -R and -I option explanations.
http://docs.sun.com/app/docs/doc/819-2240/zfs-1m?a=view
> But just like it says: "Incremental + recursive will be a bit trickier,
> because how do you specify the multiple source and dest snaps?"
The closest thing I can find is:
http://bugs.opensolaris.org/view_bug.do?bug_id=6421958
But just like it says: "Incremental + recursive will be a bit trickier,
because how do you specify the multiple source and dest snaps?"
Let me clarify this more:
Without "send -r" I need to do something like
Can you explain more about "zfs send -l"? I know "zfs send -i", but I didn't
know there was a "-l" option. In which release is this option available?
Thanks!
> I'm guessing one of the reasons you wanted a non-RAID controller with
> a write cache was so that if the controller failed, and the exact same
> model wasn't available to replace it, most of your pool would still be
> readable with any random controller, modulo risk of corruption
Just to clarify a few items... consider a setup where we desire to use AVS to
replicate the ZFS pool on a 4 drive server to like hardware. The 4 drives are
set up as RAIDZ.
If we lose a drive (say #2) in the primary server, RAIDZ will take over, and
our data will still be "available", but the ar
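To make that setup concrete, a rough sketch with hypothetical host, disk,
and bitmap names; the sndradm line follows the SNDR enable form and would be
repeated once per disk, each with its own bitmap volume:

  # RAID-Z pool over four local slices on the primary
  zpool create tank raidz c1t0d0s0 c1t1d0s0 c1t2d0s0 c1t3d0s0
  # replicate one slice to the like-hardware secondary
  sndradm -e primaryhost /dev/rdsk/c1t0d0s0 /dev/rdsk/c2t0d0s0 \
      secondaryhost /dev/rdsk/c1t0d0s0 /dev/rdsk/c2t0d0s0 ip sync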
Haiou Fu (Kevin) wrote:
> I wonder if there are any equivalent commands in zfs to dump all its
> associated snapshots at maximum efficiency (only the changed data blocks
> among all snapshots)? I know you can just "zfs send" all snapshots but each
> one is like a full dump and if you use "zfs send -i" it is hard to maintain
I wonder if there are any equivalent commands in zfs to dump all its associated
snapshots at maximum efficiency (only the changed data blocks among all
snapshots)? I know you can just "zfs send" all snapshots but each one is like
a full dump and if you use "zfs send -i" it is hard to maintain
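The bookkeeping being described with plain -i looks roughly like this; the
names are hypothetical, and every filesystem needs its own chain of matching
snapshot pairs:

  zfs send -i tank/fs@snap1 tank/fs@snap2 > fs_1to2.dump
  zfs send -i tank/fs@snap2 tank/fs@snap3 > fs_2to3.dump
  # ...and so on, once per adjacent snapshot pair, per filesystem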
On Wed, 10 Sep 2008, Keith Bierman wrote:
>> written at once, 512KB needs to be erased at once. This means that
>> write performance to an empty device will seem initially pretty good,
>> but then it will start to suffer as 512KB regions need to be erased to
>> make space for more writes.
>
> Tha
On Wed, 2008-09-10 at 14:36 -0400, Maurice Volaski wrote:
> A disadvantage, however, is that Sun StorageTek Availability Suite
> (AVS), the DRBD equivalent in OpenSolaris, is much less flexible than
> DRBD. For example, AVS is intended to replicate in one direction,
> from a primary to a secondary
On Wed, 10 Sep 2008, Keith Bierman wrote:
>> ...
>> That is reasonable. It adds to product cost and size though.
>> Super-capacitors are not super-small.
>>
> True, but for enterprise class devices they are sufficiently small. Laptops
> will have a largish battery and won't need the caps ;> Des
On Sep 10, 2008, at 12:37 PM, Bob Friesenhahn wrote:
> On Wed, 10 Sep 2008, Keith Bierman wrote:
>
>>> written at once, 512KB needs to be erased at once. This means that
>>> write performance to an empty device will seem initially pretty
>>> good,
>>> but then it will start to suffer as 512KB regions need to be erased to
>>> make space for more writes.
>I'd like to know where the *real* advantages of Nexenta/ZFS (i.e.
>ZFS/StorageTek) over DRBD/Heartbeat are.
The main advantage of OpenSolaris is native ZFS, the many advantages
of which are well described in many places, such as
http://opensolaris.org/os/community/zfs/docs/zfs_last.pdf.
A disadvantage, however, is that Sun StorageTek Availability Suite (AVS),
the DRBD equivalent in OpenSolaris, is much less flexible than DRBD. For
example, AVS is intended to replicate in one direction, from a primary to
a secondary
Bob Friesenhahn wrote:
> On Wed, 10 Sep 2008, Al Hopper wrote:
>> Interesting flash technology overview and SSD review here:
>> http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403
>> and another review here:
>> http://www.tomshardware.com/reviews/Intel-x25-m-SSD,2012.html
On Sep 10, 2008, at 11:40 AM, Bob Friesenhahn wrote:
>
> Write performance to SSDs is not all it is cracked up to be. Buried
> in the AnandTech writeup, there is mention that while 4K can be
> written at once, 512KB needs to be erased at once. This means that
> write performance to an empty device will seem initially pretty good,
> but then it will start to suffer as 512KB regions need to be erased
> to make space for more writes.
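Rough arithmetic behind that claim: a 512KB erase block spans 512/4 = 128 of
the 4KB pages, so once the device has no pre-erased blocks left, a single
4KB rewrite can force an entire 512KB block to be read, erased, and written
back, on the order of 100x write amplification in the worst case.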
Well, obviously, it's a Linux vs. OpenSolaris question. The most serious
advantage of OpenSolaris is ZFS and its enterprise-level storage stack.
Linux is just not there yet.
On Wed, 2008-09-10 at 14:51 +0200, Axel Schmalowsky wrote:
> Hello list,
>
> I hope someone can help me on this topic.
>
> I'd like to know where the *real* advantages of Nexenta/ZFS (i.e.
> ZFS/StorageTek) over DRBD/Heartbeat are.
On Wed, 10 Sep 2008, Al Hopper wrote:
> Interesting flash technology overview and SSD review here:
>
> http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403
> and another review here:
> http://www.tomshardware.com/reviews/Intel-x25-m-SSD,2012.html
These seem like regurgitations of the sa
On Sat, 6 Sep 2008, Sean McGrath wrote:
> The sfw project's bit has what's needed here, the libsunwrap.a src etc:
> http://www.opensolaris.org/os/project/sfwnv/
Thanks for the pointer, I was able to pull out the libsunwrap.a source code
and use it to compile the bundled samba source from S10U5
Interesting flash technology overview and SSD review here:
http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403
and another review here:
http://www.tomshardware.com/reviews/Intel-x25-m-SSD,2012.html
Regards,
--
Al Hopper, Logical Approach Inc, Plano, TX [EMAIL PROTECTED]
On Wed, Sep 10, 2008 at 5:57 AM, W. Wayne Liauh <[EMAIL PROTECTED]> wrote:
>> I'm a fan of ZFS since I've read about it last year.
>>
>> Now I'm on the way to build a home fileserver and I'm
>> thinking to go with Opensolaris and eventually ZFS!!
>
> This seems to be a good candidate to build a home ZFS server:
Hello list,
I hope someone can help me on this topic.
I'd like to know where the *real* advantages of Nexenta/ZFS (i.e.
ZFS/StorageTek) over DRBD/Heartbeat are.
I'm pretty new to this topic and hence do not have enough experience to judge
their respective advantages/disadvantages reasonably.
Any
On Wed, Sep 10, 2008 at 03:57:13AM -0700, W. Wayne Liauh wrote:
> This seems to be a good candidate to build a home ZFS server:
>
> http://tinyurl.com/msi-so
>
> It's cheap, low power, fan-less; the only concern is the Realtek 8111C NIC.
> According to a Sun Blogger, there is no Solaris driver:
> I'm a fan of ZFS since I've read about it last year.
>
> Now I'm on the way to build a home fileserver and I'm
> thinking to go with Opensolaris and eventually ZFS!!
This seems to be a good candidate to build a home ZFS server:
http://tinyurl.com/msi-so
It's cheap, low power, fan-less; the only concern is the Realtek 8111C NIC.
Tuomas Leikola wrote:
> On Mon, Sep 8, 2008 at 8:35 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
>>ps> iSCSI with respect to write barriers?
>>
>> +1.
>>
>> Does anyone even know of a good way to actually test it? So far it
>> seems the only way to know if your OS is breaking write barriers is
On 09.09.08 19:32, Richard Elling wrote:
> Ralf Ramge wrote:
>> Richard Elling wrote:
>>>> Yes, you're right. But sadly, in the mentioned scenario of having
>>>> replaced an entire drive, the entire disk is rewritten by ZFS.
>>> No, this is not true. ZFS only resilvers data.
>> Okay, I see we
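The scenario under discussion, with hypothetical pool and device names;
the point is that the resilver copies only allocated blocks, not the
whole disk:

  zpool replace tank c1t2d0 c1t4d0   # swap in the new device; resilver starts
  zpool status tank                  # watch resilver progress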