RM:
> I do not understand - why, in some cases with smaller blocks, could
> writing a block twice actually be faster than writing it once every time?
> I definitely am missing something here...
In addition to what Neil said, I want to add that
when an application O_DSYNC write covers only parts o
Robert Milkowski writes:
> Hello Neil,
>
> Thursday, August 10, 2006, 7:02:58 PM, you wrote:
>
> NP> Robert Milkowski wrote:
> >> Hello Matthew,
> >>
> >> Thursday, August 10, 2006, 6:55:41 PM, you wrote:
> >>
> >> MA> On Thu, Aug 10, 2006 at 06:50:45PM +0200, Robert Milkowski wrote:
Hi there
Has any consideration been given to this feature...?
I would also agree that this will not only be a "testing" feature, but will
find its way into production.
It would probably work on the same principle as swap -a and swap -d ;) Just a
little bit more complex.
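A rough sketch of that analogy, using an example pool and disk name; adding a
device already works with zpool add, while the removal side is the purely
hypothetical part:

  # growing a pool online, roughly the counterpart of swap -a
  zpool add tank c1t2d0
  # a hypothetical counterpart of swap -d would evacuate the data first,
  # e.g. something like: zpool remove tank c1t2d0 (not available today)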
Darren:
> > With all of the talk about performance problems due to
> > ZFS doing a sync to force the drives to commit to data
> > being on disk, how much of a benefit is this - especially
> > for NFS?
I would not call those things problems; it's more like setting
proper expectations.
My unde
Hi,
I'm looking at moving two UFS quota-ed filesystems to ZFS under
Solaris 10 release 6/06, and the quota issue is gnarly.
One filesystem is user home directories and I'm aiming towards the
"one zfs filesystem per user" model, attempting to use Casper
Dik's auto_home script for on-the-fly zfs f
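For reference, a minimal sketch of that per-user model, with hypothetical pool
and user names:

  # one filesystem per user, each carrying its own quota
  zfs create home/alice
  zfs set quota=2g home/alice
  zfs create home/bob
  zfs set quota=2g home/bob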
On Fri, Aug 11, 2006 at 02:47:19AM -0700, Louwtjie Burger wrote:
> Has any consideration been given to this feature...?
Yes, this is on our radar. We have some ideas about how to implement
it, but it will probably be at least 6 months until it is ready. We
have several higher-priority tasks to
On Aug 9, 2006, at 8:18 AM, Roch wrote:
So while I'm feeling optimistic :-) we really ought to be
able to do this in two I/O operations. If we have, say, 500K
of data to write (including all of the metadata), we should
be able to allocate a contiguous 500K block on disk and
Thanks for replying (I thought nobody would bother.)
So, if I understand correctly, I won't give up ANYTHING available in
EVMS, LVM, or Linux RAID by going to ZFS and RAID-Z. Right?
Following up on earlier mail, here's a proposal for create-time
properties. As usual, any feedback or suggestions are welcome.
For those curious about the implementation, this finds its way all the
way down to the create callback, so that we can pick out true
create-time properties (e.g. volblocks
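By way of illustration, a sketch of what create-time properties might look
like from the command line; the exact option syntax here is an assumption,
not the final interface:

  # set properties atomically at creation, including ones that can only
  # be chosen at create time (option letter is assumed)
  zfs create -o compression=on -o mountpoint=/export/web tank/www
  zfs create -o volblocksize=8k -V 10g tank/vol1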
No, there are some features we haven't implemented, that may or may not
be available in other RAID solutions. In particular:
- ZFS storage pool cannot be 'shrunk', i.e. removing an entire toplevel
device (mirror, RAID group, etc). Devices can be removed by attaching
and detaching to existing
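To illustrate the attach/detach path mentioned above (device names are
examples):

  # migrate data off c1t0d0 by mirroring it onto c1t1d0, then detach
  zpool attach tank c1t0d0 c1t1d0   # forms a mirror and resilvers
  zpool detach tank c1t0d0          # drop the old half of the mirror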
Just wanted to point this out --
I have a large web tree that used to have UFS user quotas on it. I converted
to ZFS using
the model that each user has their own ZFS filesystem quota instead. I worked
around some
NFS/automounter issues, and it now seems to be working fine.
Except now I ha
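One common shape of that workaround, assuming an automounter wildcard map and
per-filesystem NFS sharing (server and path names are hypothetical):

  # share the whole user hierarchy over NFS; children inherit the setting
  zfs set sharenfs=on webpool/users
  # /etc/auto_home wildcard entry so each user mounts their own filesystem
  #   *   webserver:/export/users/&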
Greetings
I have used zfs raidz for a while, and the question arose: is it possible to
expand a raidz with additional disks?
The answer I got was: the pool yes, but the raidz "group" no.
So here is a very high-level idea for you, which you may already know.
I'm not a detail-level expert on zfs, so there might be "trivial" things here
for you.
So
On Fri, Aug 11, 2006 at 11:04:06AM -0500, Anton Rang wrote:
> >Once the data blocks are on disk we have the information
> >necessary to update the indirect blocks iteratively up to
> >the ueberblock. Those are the smaller I/Os; I guess that
> >because of ditto blocks they go to phy
On Aug 11, 2006, at 12:38 PM, Jonathan Adams wrote:
The problem is that you don't know the actual *contents* of the
parent block
until *all* of its children have been written to their final
locations.
(This is because the block pointer's value depends on the final
location)
But I know whe
Leon Koll wrote:
On 8/11/06, eric kustarz <[EMAIL PROTECTED]> wrote:
Leon Koll wrote:
> <...>
>
>> So having 4 pools isn't a recommended config - i would destroy
those 4
>> pools and just create 1 RAID-0 pool:
>> #zpool create sfsrocks c4t00173801014Bd0 c4t00173801014Cd0
>> c4t00173
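For completeness, the full command follows this pattern; the remaining device
names are cut off above, so placeholders stand in for them:

  # one striped (RAID-0) pool across all four LUNs
  zpool create sfsrocks c4t00173801014Bd0 c4t00173801014Cd0 <lun3> <lun4>
  zpool status sfsrocks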
Just a data point -- our netapp filer actually creates additional raid groups
that are added to the greater pool when you "add disks", much as zfs does now.
They aren't simply used to expand the one large raid group of the volume.
I've been meaning to rebuild the whole thing to get use of
On Fri, Aug 11, 2006 at 10:02:41AM -0700, Brad Plecs wrote:
> There doesn't appear to be a way to move zfspool/www and its
> decendants en masse to a new machine with those quotas intact. I have
> to script the recreation of all of the descendant filesystems by hand.
Yep, you need
6421959 want
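Until that RFE is available, the scripted recreation might look roughly like
this; the file name is an example and the loop is only a sketch:

  # on the old machine: record each descendant filesystem and its quota
  zfs list -H -r -o name,quota zfspool/www > fslist.txt
  # on the new machine: rebuild the hierarchy and quotas, then copy data
  while read name quota; do
    zfs create "$name"
    zfs set quota="$quota" "$name"
  done < fslist.txt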
What about the Asus M2N-SLI Deluxe motherboard? It has 7 SATA ports,
supports ECC memory, socket AM2, generally looks very attractive for
my home storage server. Except that it, and the nvidia nForce 570-SLI
it's built on, don't seem to be on the HCL. I'm hoping that's just
"yet", not reported
> Just a data point -- our netapp filer actually creates additional raid
> groups that are added to the greater pool when you "add disks", much
> as zfs does now. They aren't simply used to expand the one large raid
> group of the volume. I've been meaning to rebuild the whole thing to
> get use
Hey sorry if this is really basic, but I just started evaluating Solaris 10.
Hated it at first but I'm sure that was just Windows withdrawal. The more I
play the more I like.
Just started with Solaris 10 for x86 and testing out ZFS for perhaps a home
server.
I have 4 SATA drives installed in m
Hi All,
Sun Fire V440
Solaris 10
Solaris Resource Manager
Customer wrote the following:
I have a v490 with 4 zones:
tsunami:/#->zoneadm list -iv
  ID NAME       STATUS     PATH
   0 global     running    /
   4 fmstage    running    /fmstage
  12 fmprod     running    /fmprod
  15 fmtest     running    /fmtest
fmtest has a pool assigned to
On 8/11/06, eric kustarz <[EMAIL PROTECTED]> wrote:
Leon Koll wrote:
> On 8/11/06, eric kustarz <[EMAIL PROTECTED]> wrote:
>
>> Leon Koll wrote:
>>
>> > <...>
>> >
>> >> So having 4 pools isn't a recommended config - i would destroy
>> those 4
>> >> pools and just create 1 RAID-0 pool:
>> >> #zp
After I saw that ZFS performance (when the box isn't stuck) is about 3
times lower than UFS/VxFS, I understood I should wait for the official
Solaris 11 release before using ZFS.
I don't believe that it's possible to do some magic with my setup and
increase the ZFS performance 3 times. Correct me if I'm wrong.
That's the default, I think, but you can use 'vol add -g' to add disks to an
existing RAID group. This is fairly new functionality (V6.2 I think). ZFS will
probably not take so long to add this feature. :-)
This is a great question for the Solaris forum at NVidia.
http://www.nvnews.net/vbulletin/forumdisplay.php?f=45
My experience has been that NVidia does a pretty good job keeping the
NForce software compatible with the hardware going forward. For Solaris,
pre-NForce4 is a little spotty, but that
Irma Garcia wrote:
Hi All,
Sun Fire V440
Solaris 10
Solaris Resource Manager
Customer wrote the following:
I have a v490 with 4 zones:
tsunami:/#->zoneadm list -iv
  ID NAME       STATUS     PATH
   0 global     running    /
   4 fmstage    running    /fmstage
  12 fmprod     running    /fmprod
  15 fmtest     running    /fmtest
fmtest has
On August 11, 2006 10:31:50 AM -0400 "Jeff A. Earickson" <[EMAIL PROTECTED]>
wrote:
Suggestions please?
Ideally you'd be able to move to mailboxes in $HOME instead of /var/mail.
-frank
I looked into backing up ZFS and quite honestly I can't say I am convinced
about its usefulness here when compared to the traditional ufsdump/restore.
While snapshots are nice, they can never substitute for offline backups. And
although you can keep quite a few snapshots lying about, it will consume
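For context, getting a snapshot onto offline media generally goes through
zfs send; the dataset and file names here are examples:

  # take a snapshot and stream it to a file (or a tape device) offline
  zfs snapshot tank/home@backup-20060811
  zfs send tank/home@backup-20060811 > /backup/tank-home-20060811.zfs
  # restore later with:
  #   zfs receive tank/home-restored < /backup/tank-home-20060811.zfs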
Follow-up: it looks to me like prstat displays the portion of the system's
physical memory in use by the processes in that zone.
How much memory does that system have? Something seems amiss, as a V490 can hold
up to 32GB, and prstat is showing 163GB of physical memory just for fmtest.
Irma
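A quick cross-check along the lines of this follow-up (both are standard
Solaris tools):

  # total physical memory installed in the box
  prtconf | grep Memory
  # per-zone process and memory summary, refreshed every 5 seconds
  prstat -Z 5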
On August 11, 2006 5:25:11 PM -0700 Peter Looyenga <[EMAIL PROTECTED]> wrote:
I looked into backing up ZFS and quite honestly I can't say I am convinced
about its usefulness
here when compared to the traditional ufsdump/restore. While snapshots are nice,
they can never
substitute for offline backup
On 8/11/06, Irma Garcia <[EMAIL PROTECTED]> wrote:
ZONEID  NPROC   SIZE    RSS MEMORY      TIME   CPU ZONE
    15    188   169G   163G   100%   0:46:00   48% fmtest
     0     54   708M   175M   0.1%   2:23:40  0.1% global
    12     27   112M    51M   0.0%   0:02:48  0.0% fmprod
     4     27   281M    66M   0.0%   0:14:13  0.0% fmstage
Questions?
Does the 100% memory usage on
On 8/11/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote:
This is a great question for the Solaris forum at NVidia.
http://www.nvnews.net/vbulletin/forumdisplay.php?f=45
Thanks, I have asked there.
My experience has been that NVidia does a pretty good job keeping the
NForce software compati