heya,
I'm attempting to set up a ZFS share to be served via Samba. I originally tried
to use NFSv4, but hit a bump in the form of c*appy Windows client support
(Hummingbird Maestro requires hclnfsd to be installed, which wouldn't run
properly on Sol10).
Anyway, I'm sorry if this is
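For concreteness, a minimal sketch of the setup being attempted, assuming
the Samba bundled with Solaris 10; the pool/dataset names, share path, and
config location are assumptions and may differ on your install:

  # create a dataset to export (names are examples)
  zfs create tank/share
  zfs set mountpoint=/export/share tank/share

  # then add a share stanza to the bundled Samba config
  # (/etc/sfw/smb.conf on a stock Sol10 install):
  #
  #   [share]
  #       path = /export/share
  #       read only = no

  # and start the service
  svcadm enable network/samba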
Ian Collins wrote:
> Thanks for the heads up.
>
> I'm building a new file server at the moment and I'd like to make sure I
> can migrate to ZFS boot when it arrives.
>
> My current plan is to create a pool on 4 500GB drives and throw in a
> small boot drive.
>
> Will I be able to drop the boot drive and move / over to the pool when
> ZFS boot arrives?

Yes, the initial release of bootable ZFS has restrictions on the root
pool: no concatenation or RAIDZ, only a single-device pool or a mirrored
configuration. This is mainly due to limitations on how many disks the
firmware can access at boot time.

Lin
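For illustration, the allowed and disallowed root-pool shapes Lin describes
might look like this (pool and device names are hypothetical; boot disks
also typically need SMI-labeled slices rather than whole disks):

  # supported: a single-device or mirrored root pool
  zpool create rootpool mirror c0t0d0s0 c0t1d0s0

  # not supported for the root pool: raidz or plain concatenation
  # zpool create rootpool raidz c0t0d0 c0t1d0 c0t2d0
  # zpool create rootpool c0t0d0 c0t1d0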
Francois Dion wrote:
On Thu, 2007-03-08 at 14:22 -0800, Darren Dunham wrote:
> This thread from a year ago suggests that at least the first round of
> ZFS root pools will have restrictions that are not necessary on other
> pools (like no concatenation or RAIDZ).
>
> http://www.opensolaris.org/jive/thread.jspa?threadID
Any details on the use case?
Such an option will clearly make any filesystem just crawl on so many
common operations. So it's rather interesting to know who/what is ready
to sacrifice so much performance, and in exchange for what?
On 8 Mar 07, at 21:19, Bruce Shaw wrote:
> Would a forcesync flag be something of interest to the community?
it's an absolute necessity
On 3/8/07, Roch Bourbonnais <[EMAIL PROTECTED]> wrote:
On 8 Mar 07, at 20:08, Selim Daoud wrote:
> robert,
> this applies only if you have full control of the application, for sure,
> but how do you do it if you don't own the application? Can you
> mount zfs with a forcedirectio flag?
Lori Alt wrote:
> The latest on when the updated zfsboot support will
> go into Nevada is either build 61 or 62. We are
> making some final fixes and getting tests run. We
> are aiming for 61, but we might just miss it. In
> that case, we should be putting back into 62.
Thanks for the heads up.
> Would a forcesync flag be something of interest to the community?
Yes.
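If such a flag were adopted, usage might look like the sketch below; the
'forcesync' property name is purely hypothetical and does not exist in the
ZFS discussed here:

  # hypothetical property: force all writes on a dataset to be synchronous
  zfs set forcesync=on tank/data

  # without it, only applications that open their files with O_DSYNC
  # (or call fsync themselves) get synchronous semantics on ZFS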
On 8 Mar 07, at 20:08, Selim Daoud wrote:
> robert,
> this applies only if you have full control of the application, for sure,
> but how do you do it if you don't own the application? Can you
> mount zfs with a forcedirectio flag?
> selim

UFS directio and O_DSYNC are different things.
Would a forcesync flag be something of interest to the community?
Hello Selim,
Thursday, March 8, 2007, 8:08:50 PM, you wrote:
SD> robert,
SD> this applies only if you have full control of the application, for sure,
SD> but how do you do it if you don't own the application? Can you
SD> mount zfs with a forcedirectio flag?
No.
--
Best regards,
Robert
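For context, forcedirectio is a UFS mount option; nothing equivalent exists
as a ZFS mount or dataset option. The device and mount point below are made
up:

  # UFS: bypass the page cache for this mount
  mount -F ufs -o forcedirectio /dev/dsk/c0t1d0s6 /data

  # ZFS: no such option; neither 'zfs mount' nor 'zfs set' accepts
  # any directio setting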
robert,
this applies only if you have full control of the application, for sure,
but how do you do it if you don't own the application? Can you
mount zfs with a forcedirectio flag?
selim
On 3/8/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Manoj,
Thursday, March 8, 2007, 7:10:57 AM,
Hello Matt,
Wednesday, March 7, 2007, 7:31:14 PM, you wrote:
MB> So it sounds like the consensus is that I should not worry about
MB> using slices with ZFS, and the swap best practice doesn't really
MB> apply to my situation of a 4-disk x4200.
MB> So in summary (please confirm), this is what we are s
Hello Manoj,
Thursday, March 8, 2007, 7:10:57 AM, you wrote:
MJ> Ayaz Anjum wrote:
>> 2. with zfs mounted on one cluster node, I created a file and kept
>> updating it every second; then I removed the FC cable, and the writes
>> still continued to the file system. After 10 seconds I have put
Manoj Joseph writes:
> Matt B wrote:
> > Any thoughts on the best practice points I am raising? It disturbs me
> > that it would make a statement like "don't use slices for
> > production".
>
> ZFS turns on the disk's write cache if you give it the entire disk to
> manage. It is good for
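A sketch of the difference Manoj describes (device names are hypothetical):
given a whole disk, ZFS labels it and enables the write cache; given a
slice, it cannot assume exclusive ownership of the disk, so the cache is
left alone.

  # whole disk: ZFS controls the device and enables its write cache
  zpool create tank c1t0d0

  # slice: other consumers may share the disk, so ZFS does not
  # enable the write cache
  zpool create tank c1t0d0s0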