I've asked before, but I don't think anyone answered.
Do you know of any eSATA-based methods of using ZFS right now? My hope
was to pick up one or two 4-port eSATA cards that have port
multiplier support (enabling 5 drives per port, or 20 drives per
card) - but it does not look like Solaris
Richard Elling wrote:
> You might consider some of the mobos with 6 SATA ports, but in any case,
> the chipset is somewhat important. There is pretty good support with
> Solaris for NVidia NForce series.
I had a look for some 6x SATA(2) boards. However they generally have more
things attached to
Dave Sneddon wrote:
For the hardware I was looking at these specs (note that these are all listed in
Australian dollars, so please convert from US/whatever first before you say "Why
don't you get this part cheaper at XXX dollars"). Affordability is my main
concern.
I don't want to spend too much. The o
Torrey McMahon wrote:
Richard Elling wrote:
Good question. If you consider that mechanical wear-out is what ultimately
causes many failure modes, then the argument can be made that a spun-down
disk should last longer. The problem is that there are failure modes which
are triggered by a spin-up.
Ben Rockwood wrote:
Jim Dunham wrote:
Robert,
Hello Ben,
Monday, February 5, 2007, 9:17:01 AM, you wrote:
BR> I've been playing with replication of a ZFS Zpool using the
BR> recently released AVS. I'm pleased with things, but just
BR> replicating the data is only part of the problem. The bi
Hi all,
So I am new here (both using Solaris and also posting on this forum) and I
need some advice.
I plan to set up a machine as a Network Storage Server, and I
just want some of your recommendations and opinions on how to go about
this.
I do a lot of video editing of DV files a
I've got a similar setup in a small ISP I'm helping out.
Right now, they use the AOE driver for Linux on both client and server
(they don't use a CoRaid box, rather a standard Linux device). It
actually works quite well, including failover and hot-migration, when
used in conjunction with EVMS. Rem
Robert Milkowski wrote:
I haven't tried it but what if you mounted ro via loopback into a zone
/zones/myzone01/root/.zfs is loop mounted in RO to /zones/myzone01/.zfs
That is so wrong. ;)
Besides just being evil, I doubt it'd work. And if it does, it probably
shouldn't.
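(For reference, the lofs mount being joked about here would look roughly like
# mount -F lofs -o ro /zones/myzone01/.zfs /zones/myzone01/root/.zfs
i.e. exposing the dataset's .zfs directory read-only inside the zone's root;
untested, and apparently ill-advised.)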
On Mon, 5 Feb 2007, Kevin Abbey wrote:
> Hi,
>
> I'd like to consider using the coraid products with solaris and ZFS but
> I need them to work with x86_64 on generic opteron/amd compatible
> hardware. Currently the AOE driver is beta for sparc only. I am
^^^
AOE
Hi,
I'd like to consider using the coraid products with solaris and ZFS but
I need them to work with x86_64 on generic opteron/amd compatible
hardware. Currently the AOE driver is beta for sparc only. I am
planning to use the ZFS file system so the raid hardware in the coraid
device wil
Jim Dunham wrote:
Robert,
Hello Ben,
Monday, February 5, 2007, 9:17:01 AM, you wrote:
BR> I've been playing with replication of a ZFS Zpool using the
BR> recently released AVS. I'm pleased with things, but just
BR> replicating the data is only part of the problem. The big
BR> question is: ca
On Feb 5, 2007, at 7:57 AM, Robert Milkowski wrote:
I haven't tried it but what if you mounted ro via loopback into a zone
/zones/myzone01/root/.zfs is loop mounted in RO to /zones/myzone01/.zfs
I've tried something similar but found out that vfstab is evaluated
prior to zpool import, so
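(One way around that ordering, at least in theory, is to put the lofs mount in
the zone configuration rather than vfstab, since zones boot after the pools are
imported. A rough, untested sketch, reusing the paths from the example above:
# zonecfg -z myzone01
zonecfg:myzone01> add fs
zonecfg:myzone01:fs> set dir=/.zfs
zonecfg:myzone01:fs> set special=/zones/myzone01/.zfs
zonecfg:myzone01:fs> set type=lofs
zonecfg:myzone01:fs> add options ro
zonecfg:myzone01:fs> end
zonecfg:myzone01> commit
The zone name and mount point are only illustrative.)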
Gary,
Thanks for the information on these kernel-type patches.
David
On Mon, 2007-02-05 at 11:33 -0600, Gary Mills wrote:
> On Mon, Feb 05, 2007 at 09:20:49AM -0800, David W. Smith wrote:
> >
> > Also, has anyone had problems installing 118855-36 with smpatch? I had
> > issues, and ended up ha
> > That is, when a zfs legacy filesystem is mounted in
> > read-only mode, and then remounted read/write,
> > atime updates are off:
> >
> > # zfs create -o mountpoint=legacy files/foobar
> >
> > # mount -F zfs -o ro files/foobar /mnt
> >
> > # zfs get atime files/foobar
> > NAME PROP
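(The remaining step in that reproduction would presumably be the remount back
to read/write, something along the lines of
# mount -F zfs -o remount,rw files/foobar /mnt
after which, per the description above, atime updates are off even though the
filesystem is writable again.)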
On Mon, Feb 05, 2007 at 09:20:49AM -0800, David W. Smith wrote:
>
> Also, has anyone had problems installing 118855-36 with smpatch? I had
> issues, and ended up having to install it with patchadd.
Apparently, this patch, and probably all future kernel patches,
can't be applied with smpatch. Th
Hi
118855-36 is marked interactive and is not installable by automation, or
at least should not be installed by smpatch.
If you look in the patchpro.download.directory (from "smpatch get"),
under the dir cache (if I remember correctly),
you will see a current.zip (possibly with a time stamp a
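Roughly, the manual route is then (the cache directory is whatever
patchpro.download.directory reports on your system):
# smpatch get | grep patchpro.download.directory
# cd <that directory>/cache
# unzip <the downloaded 118855-36 zip>
# patchadd ./118855-36
smpatch get and patchadd are standard; the exact file name under cache varies.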
I'm pretty sure I have a service plan, but smpatch is not returning me
the 124205 patch. I'm currently running Solaris 10, update 2.
Also, has anyone had problems installing 118855-36 with smpatch? I had
issues, and ended up having to install it with patchadd.
David
On Mon, 2007-02-05 at 08:5
Jürgen Keil wrote:
I have my /usr filesystem configured as a zfs filesystem,
using a legacy mountpoint. I noticed that the system boots
with atime updates temporarily turned off (and doesn't record
file accesses in the /usr filesystem):
# df -h /usr
Filesystem size used avail cap
On 2/5/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Casper,
Monday, February 5, 2007, 2:32:49 PM, you wrote:
>>Hello zfs-discuss,
>>
>> I've patched a U2 system to 118855-36. Several ZFS-related bug IDs
>> should be covered between -19 and -36, like HotSpare support.
>>
>> However desp
Robert,
Hello Ben,
Monday, February 5, 2007, 9:17:01 AM, you wrote:
BR> I've been playing with replication of a ZFS Zpool using the
BR> recently released AVS. I'm pleased with things, but just
BR> replicating the data is only part of the problem. The big
BR> question is: can I have a zpool op
/* Warning : soapbox speech ahead */
>
> Something here is broken.
>
As a rule, don't trust smpatch. Don't trust the freeware pca either.
Either one may or may not include patches that you don't need, or they
may list patches you do need, or seem to need, but once you apply them
you find your s
Hello Casper,
Monday, February 5, 2007, 2:41:28 PM, you wrote:
>>Looks like 124205-04 is needed.
>>While I can see it on SunSolve, smpatch doesn't show it.
>>
>>Also, many ZFS bugs listed in 124205-04 are also listed in 118855-36, while
>>it looks like only 124205-04 is actually covering them and p
Robert Milkowski wrote:
Hello Casper,
Monday, February 5, 2007, 2:32:49 PM, you wrote:
Hello zfs-discuss,
I've patched a U2 system to 118855-36. Several ZFS-related bug IDs
should be covered between -19 and -36, like HotSpare support.
However, despite -36 being installed, 'zpool upgrade' still
Robert Milkowski wrote:
Hello Robert,
Monday, February 5, 2007, 2:26:57 PM, you wrote:
RM> Hello zfs-discuss,
RM> I've patched a U2 system to 118855-36. Several ZFS-related bug IDs
RM> should be covered between -19 and -36, like HotSpare support.
RM> However, despite -36 being installed, 'zpool
Hello Casper,
Monday, February 5, 2007, 2:32:49 PM, you wrote:
>>Hello zfs-discuss,
>>
>> I've patched a U2 system to 118855-36. Several ZFS-related bug IDs
>> should be covered between -19 and -36, like HotSpare support.
>>
>> However, despite -36 being installed, 'zpool upgrade' still claims only
>>
>Looks like 124205-04 is needed.
>While I can see it on SunSolve, smpatch doesn't show it.
>
>Also, many ZFS bugs listed in 124205-04 are also listed in 118855-36, while
>it looks like only 124205-04 actually covers them and provides
>the necessary binaries.
>
>Something is messed up with -36.
Som
Hello Robert,
Monday, February 5, 2007, 2:26:57 PM, you wrote:
RM> Hello zfs-discuss,
RM> I've patched a U2 system to 118855-36. Several ZFS-related bug IDs
RM> should be covered between -19 and -36, like HotSpare support.
RM> However, despite -36 being installed, 'zpool upgrade' still claims onl
>Hello zfs-discuss,
>
> I've patched a U2 system to 118855-36. Several ZFS-related bug IDs
> should be covered between -19 and -36, like HotSpare support.
>
> However, despite -36 being installed, 'zpool upgrade' still claims only
> v1 and v2 support. Also there's no zfs promote, etc.
>
> /kernel/drv
Hello zfs-discuss,
I've patched a U2 system to 118855-36. Several ZFS-related bug IDs
should be covered between -19 and -36, like HotSpare support.
However, despite -36 being installed, 'zpool upgrade' still claims only
v1 and v2 support. Also there's no zfs promote, etc.
/kernel/drv/zfs is da
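(For anyone comparing notes, the quick checks here are something like
# showrev -p | grep 118855
# zpool upgrade -v
where -v lists every on-disk pool version the installed bits actually support.)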
Btw, in case that gets lost amid my devil's advocacy:
A happy +1 from me for the proposal !
FrankH.
On Mon, 5 Feb 2007, Jim Dunham wrote:
Frank,
On Fri, 2 Feb 2007, Torrey McMahon wrote:
Jason J. W. Williams wrote:
Hi Jim,
Thank you very much for the heads up. Unfortunately, we need the
write-cache enabled for the application I was thinking of combining
this with. Sounds like SNDR and ZFS
Hello Ben,
Monday, February 5, 2007, 11:03:37 AM, you wrote:
BR> Is there an existing RFE for, what I'll wrongly call,
BR> "recursively visible snapshots"? That is, .zfs in directories other than
the dataset root.
BR> Frankly, I don't need it available in all directories, although
BR> it'd be
Hello Ben,
Monday, February 5, 2007, 9:17:01 AM, you wrote:
BR> I've been playing with replication of a ZFS Zpool using the
BR> recently released AVS. I'm pleased with things, but just
BR> replicating the data is only part of the problem. The big
BR> question is: can I have a zpool open in 2 pl
Frank,
On Fri, 2 Feb 2007, Torrey McMahon wrote:
Jason J. W. Williams wrote:
Hi Jim,
Thank you very much for the heads up. Unfortunately, we need the
write-cache enabled for the application I was thinking of combining
this with. Sounds like SNDR and ZFS need some more soak time together
befor
Ben,
I've been playing with replication of a ZFS Zpool using the recently released AVS. I'm pleased with things, but just replicating the data is only part of the problem. The big question is: can I have a zpool open in 2 places?
No. The ability to have a zpool open in two places would req
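(The usual pattern with this kind of replication is active/passive: only one
node has the pool imported at a time, and failover looks roughly like
node1# zpool export tank
node2# zpool import -f tank
where the export is skipped if node1 has actually died, and -f forces the
import of a pool that was not cleanly exported. Pool and node names are just
examples.)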
I have my /usr filesystem configured as a zfs filesystem,
using a legacy mountpoint. I noticed that the system boots
with atime updates temporarily turned off (and doesn't record
file accesses in the /usr filesystem):
# df -h /usr
Filesystem size used avail capacity Mounted on
fil
Is there an existing RFE for, what I'll wrongly call, "recursively visible
snapshots"? That is, .zfs in directories other than the dataset root.
Frankly, I don't need it available in all directories, although it'd be nice,
but I do have a need for making it visible 1 dir down from the dataset
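(Today the workaround is to walk back up to the dataset root, e.g. with
made-up names:
# ls /tank/home/.zfs/snapshot/nightly/ben/
rather than the hoped-for /tank/home/ben/.zfs/snapshot/nightly/.)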
Hi,
Artem: Thanks. And yes, Peter S. is a great actor!
Christian Mueller wrote:
> who is peter stormare? (sorry, i'm from old europe...)
as usual, Wikipedia knows it:
http://en.wikipedia.org/wiki/Peter_Stormare
and he's european too :). Great actor, great movies. I particularly like
Constant
On Fri, 2 Feb 2007, Torrey McMahon wrote:
Jason J. W. Williams wrote:
Hi Jim,
Thank you very much for the heads up. Unfortunately, we need the
write-cache enabled for the application I was thinking of combining
this with. Sounds like SNDR and ZFS need some more soak time together
before you ca
I've been playing with replication of a ZFS Zpool using the recently released
AVS. I'm pleased with things, but just replicating the data is only part of
the problem. The big question is: can I have a zpool open in 2 places?
What I really want is a Zpool on node1 open and writable (productio