On Mon, 29 Jun 2009, Carsten Aulbert wrote:
s11 console login: root
Password:
Last login: Mon Jun 29 10:37:47 on console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
s11:~# zpool export atlashome
s11:~# ls -l /atlashome
/atlashome: No such file or directory
s11:~# zpool import
On 28 Jun 2009, at 11:22, Daniel J. Priem wrote:
snip
Snapshots are significantly faster as well. My average transfer speed
went from about 15MB/sec to over 40MB/sec. I imagine that 40MB/sec is
now a limitation of the CPU, as I can see SSH maxing out a single
core
on the quad cores.
Maybe
Nils Goroll wrote:
Hi,
I just noticed that Mark Shellenbaum has replied to the same question in
a thread ACL not being inherited correctly on zfs-discuss.
Sorry for the noise.
Out of curiosity, I would still be interested in answers to this question:
Is there a reason why inheritable
On Mon, 22 Jun 2009, Ross wrote:
All seemed well, I replaced the faulty drive, imported the pool again, and
kicked off the repair with:
# zpool replace zfspool c1t1d0
What build are you running? Between builds 105 and 113 inclusive there's
a bug in the resilver code which causes it to miss
Thomas Fili wrote:
Hi @all,
With ZFS it's recommended to create a new filesystem, for example one for each user,
to give them a home directory.
So far, so good. The homes should be under tank/export/home/staff and my
intention is to restrict the ACL rights so that only the user themselves can access
their own
Kyle McDonald wrote:
Hi all,
I'm setting up a new fileserver, and while I'm not planning on enabling
CIFS right away, I know I will in the future.
I know there are several ZFS properties or attributes that affect how
CIFS behaves. I seem to recall that at least one of those needs to be
set
Andrew Watkins wrote:
[I did post this in NFS, but I think it should be here]
I am playing with ACLs on an snv_114 (and Storage 7110) system and I have
noticed that strange things are happening to ACLs, or am I doing something
wrong?
When you create a new sub-directory or file the ACL's seem to
On Mon, 15 Jun 2009, Todd Stansell wrote:
Any thoughts on how this can be done? I do have other systems I can use
to test this procedure, but ideally it would not introduce any downtime,
but that can be arranged if necessary.
I think the only work-around is to re-promote 'data', destroy the
I'm interested in the snapshot and cloning side of ZFS.
Whilst I'm happy with the way that this works at the share level i.e.
snapshot share
clone share to a new share
nfs mount (or somesuch) to the cloned share
I'm wondering whether it's possible to snap / clone at the file / directory
level
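Snapshots and clones only operate at the dataset level, so per-file granularity has to be approximated by copying the file back out of a snapshot or clone. A minimal sketch (dataset and path names here are examples, not from the thread):

```shell
# Take a point-in-time snapshot of the whole dataset
zfs snapshot tank/share@before-edit
# Files can be copied straight out of the read-only snapshot directory...
cp /tank/share/.zfs/snapshot/before-edit/some/file /tank/share/some/file
# ...or, if a writable copy of the tree is needed, clone the snapshot
zfs clone tank/share@before-edit tank/restore
```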
Hi Jim,
See if 'zpool history' gives you what you're looking for.
Regards,
markm
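For reference, a sketch of what that looks like (pool name is an example):

```shell
zpool history tank      # every zpool/zfs command run against the pool, timestamped
zpool history -l tank   # long form adds the user, hostname, and zone (newer builds)
```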
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Fri, 29 May 2009, Rich Teer wrote:
zpool attach dpool c1t0d0 c2t0d0
zpool attach dpool c1t1d0 c2t1d0
zpool attach dpool c1t2d0 c2t2d0
These should all be zpool add dpool mirror {disk1} {disk2}, but yes. I
recommend trying this out using files instead of disks beforehand so you
get a
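The dry run with file-backed vdevs might look like this (sizes and paths are examples):

```shell
# Create scratch backing files; no real disks are touched
mkfile 128m /var/tmp/d0 /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
zpool create testpool /var/tmp/d0 /var/tmp/d1    # two-disk stripe
zpool attach testpool /var/tmp/d0 /var/tmp/d2    # turn the first vdev into a mirror
zpool attach testpool /var/tmp/d1 /var/tmp/d3    # and the second
zpool status testpool                            # verify the mirror layout
zpool destroy testpool; rm /var/tmp/d[0-3]       # clean up
```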
On Thu, 21 May 2009, Nandini Mocherla wrote:
Then I booted into failsafe mode of 101a and then tried to run the
following command as given in luactivate output.
Yeah, that's a known bug in the luactivate output. CR 6722845
# mount -F zfs /dev/dsk/c1t2d0s0 /mnt
cannot open
On Thu, 21 May 2009, Ian Collins wrote:
I'm trying to use zfs send/receive to replicate the root pool of a system and
I can't think of a way to stop the received copy attempting to mount the
filesystem over the root of the destination pool.
If you're using build 107 or later, there's a
Drew Balfour wrote:
I have OSol 2009.06 (b111a), and I'm not sure I'm getting this ZFS ACL
thing:
%whoami
abalfour
% ls -V file
--+ 1 abalfour root 1474560 May 11 18:43 file
owner@:-w--d--A-W-C--:---:deny
according to that ACL I shouldn't be able to write
abalf...@gmail.com wrote:
On May 21, 2009 11:08am, Mark Shellenbaum mark.shellenb...@sun.com wrote:
Nope, the owner always has the ability to fix broken permissions on
files. Otherwise the owner would be locked out of their own files.
Nuts; That's what I was trying to do; lock owners
On Thu, 7 May 2009, Mike Gerdts wrote:
Perhaps you have changed the configuration of the array since the last
reconfiguration boot. If you run devfsadm then run format, does it
see more disks?
Another thing to check is to see if the controller has a jbod mode as
opposed to passthrough.
On Fri, 17 Apr 2009, Mark J Musante wrote:
The dependency is based on the names.
I should clarify what I mean by that. There are actually two dependencies
here: one is based on dataset names, and one is based on snapshots and
clones.
If there are two datasets, pool/foo and pool/foo/bar
On Thu, 9 Apr 2009, shyamali.chakrava...@sun.com wrote:
Hi All,
I have corefile where we see NULL pointer de-reference PANIC as we have sent
(deliberately) NULL pointer for return value.
vdev_disk_io_start()
error = ldi_ioctl(dvd->vd_lh, zio->io_cmd,
On Fri, 10 Apr 2009, Patrick Skerrett wrote:
degradation) when these write bursts come in, and if I could buffer them
even for 60 seconds, it would make everything much smoother.
ZFS already batches up writes into a transaction group, which currently
happens every 30 seconds. Have you
On Fri, 27 Mar 2009, Alec Muffett wrote:
The inability to create more than 1 clone at a time (ie: in separate
TXGs) is something which has hampered me (and several projects on which
I have worked) for some years, now.
Hi Alec,
Does CR 6475257 cover what you're looking for?
Regards,
markm
On Tue, 17 Mar 2009, Neal Pollack wrote:
Can anyone share some instructions for setting up the rpool mirror of
the boot disks during the Solaris Nevada (SXCE) install?
You'll need to use the text-based installer, and in there you choose the two
bootable disks instead of just one.
On 17 Mar, 2009, at 16.21, Bryan Allen wrote:
Then mirror the VTOC from the first (zfsroot) disk to the second:
# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
# zpool attach -f rpool c1t0d0s0 c1t1d0s0
# zpool status -v
And then you'll still need to run installgrub to put
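The sentence is cut off, but on x86 the usual final step is putting the boot blocks on the new mirror half with installgrub (device name follows the example above):

```shell
# Put GRUB stage1/stage2 on the newly attached half of the root mirror
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
```

On SPARC the equivalent is installboot with the ZFS bootblk.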
whereas
user:joe:write_data/execute:deny
user:joe:read_data/write_data:allow
would deny joe the ability to execute or write_data, but joe could
still read the files data.
Once a bit has been denied only a privilege subsystem override can give
you that ability.
-Mark
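Because ACEs are evaluated top-down and the first matching entry wins, the order in which the entries are added matters. A sketch of building the ACL above (file name is an example):

```shell
# A+ inserts at the top of the ACL, so the deny entry is evaluated first
chmod A+user:joe:write_data/execute:deny /tank/file
# A1+ inserts at index 1, after the deny entry
chmod A1+user:joe:read_data/write_data:allow /tank/file
ls -V /tank/file   # the deny ACE is listed (and evaluated) before the allow
```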
the creation mode you will need to inherit at
least one of the abstract ACEs (owner@, group@ or everyone@). Those are
the ACEs that affect the mode of the file.
-Mark
Hi Steven,
Try doing 'zfs list -t all'. This is a change that went in late last year
to list only datasets unless snapshots were explicitly requested.
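For example:

```shell
zfs list                  # datasets only (the new default)
zfs list -t snapshot      # snapshots only
zfs list -t all           # both together
zfs list -r -t all tank   # recurse through one pool (pool name is an example)
```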
On Fri, 6 Mar 2009, Steven Sim wrote:
Gurus;
I am using OpenSolaris 2008.11 snv_101b_rc2 X86
Prior to this I was using SXCE built 91
On Fri, 6 Mar 2009, Blake wrote:
I have savecore enabled, but it doesn't look like the machine is dumping
core as it should - that is, I don't think it's a panic - I suspect
interrupt handling.
Then when you say you had a machine crash, what did you mean?
Did you look in /var/crash/* to see
On Fri, 6 Mar 2009, Blake wrote:
I have savecore enabled, but nothing in /var/crash:
root@filer:~# savecore -v
savecore: dump already processed
root@filer:~# ls /var/crash/filer/
root@filer:~#
OK, just to ask the dumb questions: is dumpadm configured for
/var/crash/filer? Is the dump zvol
On Thu, 5 Mar 2009, Blake wrote:
I had a 2008.11 machine crash while moving a 700gb file from one machine
to another using cp. I looked for an existing bug for this, but found
nothing.
Has anyone else seen behavior like this? I wanted to check before
filing a bug.
Have you got a copy of
Hi Harry,
I doubt it too. Try here to be sure (no need to install, unzip in a folder
and just run).
CPUID http://www.cpuid.com/
Check the processor features when you run the app. I hope that helps.
/Mark :-)
2009/2/26 Tim t...@tcsac.net
Then you would be looking for AMD-V extensions. VT
On Fri, 13 Feb 2009, Tony Marshall wrote:
How would i obtain the current setting for the vdev_cache from a
production system? We are looking at trying to tune ZFS for better
performance with respect to oracle databases, however before we start
changing settings via the /etc/system file we
Handojo wrote:
hando...@opensolaris:~# zpool add rpool c4d0
Two problems: first, the command needed is 'zpool attach', because
you're making a mirror. 'zpool add' is for extending stripes, and
currently stripes are not supported as root pools.
The second problem is that when the drive is
On 02/02/09 08:55, Mark Shellenbaum wrote:
The time has come to review the current Contributor and Core
contributor
grants for ZFS. Since all of the ZFS core contributors grants are
set
to expire on 02-24-2009 we need to renew the members that are still
contributing at core contributor
to both Contributor and Core contributor levels.
First the current list of Core contributors:
Bill Moore (billm)
Cindy Swearingen (cindys)
Lori M. Alt (lalt)
Mark Shellenbaum (marks)
Mark Maybee (maybee)
Matthew A. Ahrens (ahrens)
Neil V. Perrin (perrin)
Jeff Bonwick (bonwick)
Eric Schrock (eschrock
To set the mountpoint back to default, use 'zfs inherit mountpoint dataset'
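A quick illustration (dataset name is an example):

```shell
zfs get mountpoint tank/home     # SOURCE column shows 'local' for an explicit setting
zfs inherit mountpoint tank/home # discard the local value
zfs get mountpoint tank/home     # SOURCE now reads 'default' (or 'inherited from ...')
```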
--
This message posted from opensolaris.org
On Fri, 30 Jan 2009, Frank Cusack wrote:
so, is there a way to tell zfs not to perform the mounts for data2? or
another way i can replicate the pool on the same host, without exporting
the original pool?
There is not a way to do that currently, but I know it's coming down the
road.
Hi Pål,
CR 6420274 covers the -p part of your question. As far as kstats go, we only
have them in the arc and the vdev read-ahead cache.
Regards,
markm
On Fri, 30 Jan 2009, Ed Kaczmarek wrote:
And/or step me thru the required mdb/kdb/whatever it's called stack
trace dump command sequence after booting with -kd
Dan Mick's got a good guide on his blog:
http://blogs.sun.com/dmick/entry/diagnosing_kernel_hangs_panics_with
Regards,
markm
be 0700 or something similar.
-Mark
On Wed, 28 Jan 2009, Richard Elling wrote:
Orvar Korvar wrote:
I have 5 terabyte discs in a raidz1. Could I add one SSD drive in a
similar vein? Would it be easy to do?
Yes.
To be specific, you use the 'cache' argument to zpool, as in:
zpool create pool ... cache cache-device
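For instance, with the five-disk raidz1 described above (device names are examples):

```shell
# Create the pool with the SSD as an L2ARC cache device from the start...
zpool create pool raidz1 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 cache c1t0d0
# ...or bolt it onto an existing pool
zpool add pool cache c1t0d0
zpool iostat -v pool   # the cache device appears in its own section
```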
@:full_set:fd:allow /tank/public
then all cifs created files will have full access and anyone can
read,write,delete,...
-Mark
On Thu, 22 Jan 2009, Al Slater wrote:
Mounting root on rpool/ROOT/Sol11_b105 with filesystem type zfs is not
supported
This line is coming from svm, which leads me to believe that the zfs
boot blocks were not properly installed by live upgrade.
You can try doing this by hand, with the
directly. The CIFS administration guide should have examples
of using sharemgr.
-Mark
Hi Amy,
This is a known problem with ZFS and live upgrade. I believe the docs for
s10u6 discourage the config you show here. A patch should be ready some
time next month with a fix for this.
On Fri, 16 Jan 2009, amy.r...@tufts.edu wrote:
I've installed an s10u6 machine with no UFS
On Fri, 16 Jan 2009, amy.r...@tufts.edu wrote:
mmusante This is a known problem with ZFS and live upgrade. I believe the
mmusante docs for s10u6 discourage the config you show here. A patch should
mmusante be ready some time next month with a fix for this.
Do you happen to have a bugid
Ian Collins wrote:
satya wrote:
Any idea if we can use pax command to backup ZFS acls? will -p option of
pax utility do the trick?
pax should, according to
http://docs.sun.com/app/docs/doc/819-5461/gbchx?a=view
pax isn't ACL aware. It does handle extended attributes, though.
Here
that this is intended behavior to comply with POSIX. As the
author of the thread mentioned, I would like to see an inheritance mode that
completely ignores POSIX. The thread ends with Mark Shellenbaum commenting
that he will fasttrack the behavior that many people want. It is not clear
to me
I'm exporting/importing a zpool from a sun 4200 running Solaris 10 10/08
s10x_u6wos_07b X86
to a t2000 running Solaris 10 10/08 s10s_u6wos_07b SPARC. Neither one is yet
patched,
but I didn't see anything obvious on sunsolve for recent updates.
The filesystem contains symbolic links. I made a
The best you can do right now is mirroring. During the install, choose
more than one hard drive and zfs will create a mirror configuration.
Support for raidz and/or striping is for a future project.
On Fri, 19 Dec 2008, iman habibi wrote:
Hello All
I'm new to the Solaris 10 ZFS structure. My
the
forums are such an efficient format. I guess the new directive should
be RTFF.
On 14-Dec-08, at 2:09 PM, Bob Netherton wrote:
Jeff Bonwick wrote:
On Sat, Dec 13, 2008 at 04:44:10PM -0800, Mark Dornfeld wrote:
I have installed Solaris 10 on a ZFS filesystem that is not
mirrored
I have installed Solaris 10 on a ZFS filesystem that is not mirrored. Since I
have an identical disk in the machine, I'd like to add that disk to the
existing pool as a mirror. Can this be done, and if so, how do I do it?
Thanks
Vahid Moghaddasi wrote:
Hi all,
We have this problem of losing permission and ownership of the raw zfs
devices when the pool is moved from one system to another.
The owner is an application account and each time we failover to another
machine, have to set the permission and owner manually
Brian Cameron wrote:
Mark Others:
I think you may have misunderstood what people were suggesting. They
weren't suggesting changing the mode of the file, but using chmod(1M) to
add/modify ZFS ACLs on the device file.
chmod A+user:gdm:rwx:allow file
See chmod(1M) or the zfs admin guide
You should probably make sure that you just don't keep continually
adding the same entry over and over again to the ACL. With NFSv4 ACLs
you can insert the same entry multiple times and if you keep doing it
long enough you will eventually get an error back when you reach the
ACE limit on
Mark Shellenbaum wrote:
You should probably make sure that you just don't keep continually
adding the same entry over and over again to the ACL. With NFSv4 ACLs
you can insert the same entry multiple times and if you keep doing it
long enough you will eventually get an error back when you
On Tue, 9 Dec 2008, Tim Haley wrote:
ludelete doesn't handle this any better than beadm destroy does, it
fails for the same reasons. lucreate does not promote the clone it
creates when a new BE is spawned, either.
Live upgrade's luactivate command is meant to promote the BE during init 6
On Tue, 9 Dec 2008, elaine ashton wrote:
Thanks! That'd be great as I have an snv_79 system that doesn't exhibit
this behaviour so I'll assume that this has been added in sometime
between that release and 101a?
According to the CR, the putback went into build 66.
external link:
On Tue, 9 Dec 2008, Elaine Ashton wrote:
If I fdisk 2 disks to have EFI partitions and label them with the
appropriate partition beginning at sector 34 and then give them to ZFS
for a pool, ZFS would appear to change the beginning sector to 256.
Right. This is done deliberately so that we
ACL's seemed a good solution since it leaves the overall ownership
and permissions of the device the same, but just adds the gdm user as
having permission to access the device as needed. Is there any way to
get this same sort of behavior when using ZFS.
I think you may have
is a
bit mask, and it is possible for a file system to support multiple ACL
flavors.
Here is an example of pathconf() as used in acl_strip(3sec)
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libsec/common/aclutils.c#390
-Mark
On Mon, 17 Nov 2008, Vincent Boisard wrote:
#zpool create pool1 c1d1s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1d1s0 overlaps with /dev/dsk/c1d1s2
That's CR 6419310.
Regards,
markm
Just to try this out, I created a 9g zpool and a 5g volume in that zpool.
Then I used dd to write to every block of the volume.
Taking a snapshot of the volume at that point attempts to reserve an
additional 5g, which fails.
With 1g volumes we see it in action:
bash-3.00# zpool create tank
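The session is cut off above; a reconstruction of the 1g experiment under the same assumptions (file-backed pool, all names invented) would be:

```shell
mkfile 9g /var/tmp/tankfile
zpool create tank /var/tmp/tankfile
zfs create -V 1g tank/vol                # the volume carries a 1g refreservation
dd if=/dev/zero of=/dev/zvol/rdsk/tank/vol bs=1024k   # dirty every block
zfs snapshot tank/vol@full               # must reserve another 1g for overwrites
zfs list -o name,used,refer,refreservation -r tank
```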
I beg to differ.
# cat /etc/release
Solaris 10 10/08 s10s_u6wos_07b SPARC
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 27 October 2008
# lustatus
Boot
for the quick responses, eh?
Mark
On Thu, 6 Nov 2008, Chris Ridd wrote:
I probably need to downgrade a machine from 10u5 to 10u3. The zpool on
u5 is a v4 pool, and AIUI 10u3 only supports up to v3 pools.
The only difference between a v4 pool and a v3 pool is that v4 added
history ('zpool history pool'). I would expect a v3
Hi Michael,
Did you try doing an export/import of tank?
On Thu, 6 Nov 2008, Michael Schuster wrote:
all,
I've gotten myself into a fix I don't know how to resolve (and I can't
reboot the machine, it's a build server we share):
$ zfs list -r tank/schuster
NAME
a few months ago to deal with out of
range timestamps.
PSARC 2008/508 allow touch/settime to fix out of range timestamps
6709455 settime should be able to manipulate files with wrong timestamp
-Mark
On Tue, Sep 9, 2008 at 7:56 AM, Mark Shellenbaum
[EMAIL PROTECTED] wrote:
David Bartley wrote:
On Tue, Sep 9, 2008 at 11:43 AM, Mark Shellenbaum
[EMAIL PROTECTED] wrote:
David Bartley wrote:
Hello,
We're repeatedly seeing a kernel panic on our disk server. We've been
unable to determine
is able to translate this to NFSv4 ACLs, you may have luck.
So it might be better to use something like:
# cd /dir1
# find . -print -depth | cpio -Ppdm /dir2
[dir1 on UFS; dir2 on ZFS]
ufsrestore will translate a UFS ACL to a ZFS ACL. Its all handled in
the acl_set() interface.
-Mark
in the pool?
Thank for your answer,
C Rouhassia
One of the simplest ways to see how the pool hierarchy works is to use
zdb -vvv pool and then peruse the output.
-Mark
in notepad though and putting the digest and filename
on one line worked a treat.
Sorry this went a little OT / top-posted.
Sent from my iPod
Mark.
On 25 Oct 2008, at 06:01, Johan Hartzenberg [EMAIL PROTECTED]
wrote:
On Sat, Oct 25, 2008 at 6:59 AM, Johan Hartzenberg
[EMAIL PROTECTED
So this is where I stand. I'd like to ask zfs-discuss if they've seen any
ZIL/Replay style bugs associated with u3/u5 x86? Again, I'm confident in my
hardware, and /var/adm/messages is showing no warnings/errors.
Are you absolutely sure the hardware is OK? Is there another disk you can
On Tue, 30 Sep 2008, Ram Sharma wrote:
Hi,
can anyone please tell me what is the maximum number of files that can
be there in one folder in Solaris with the ZFS file system?
By folder, I assume you mean directory and not, say, pool. In any case,
the 'limit' is 2^48, but that's effectively no
On Sat, 27 Sep 2008, Marcin Woźniak wrote:
After successful upgrade from snv_95 to snv_98 ( ufs boot - zfs boot).
After luactive new BE with zfs. I am not able to ludelete old BE with
ufs. problem is, I think that zfs boot is /rpool/boot/grub.
This is due to a bug in the /usr/lib/lu/lulib
On Tue, 30 Sep 2008, Ian Collins wrote:
Mark J Musante wrote:
On Sat, 27 Sep 2008, Marcin Woźniak wrote:
After successful upgrade from snv_95 to snv_98 ( ufs boot - zfs
boot). After luactive new BE with zfs. I am not able to ludelete old
BE with ufs. problem is, I think that zfs boot
Hi Glenn,
Where is it hanging? Could you provide a stack trace? It's possible
that it's just a bug and not a configuration issue.
On 18 Sep, 2008, at 16.12, Glenn Lagasse wrote:
I had a disk that contained a zpool. For reasons that we won't go in
to, that disk had zero's written all
On 13 Sep 2008, at 08:33, Guido [EMAIL PROTECTED] wrote:
Hi all,
after installing OpenSolaris 2008.05 in VirtualBox I've created a
ZFS Root Mirror by:
zfs attach rpool Disk B
and it works like a charm. Now I tried to restore the rpool from the
worst Case
Scenario: The Disk the
domain SID
table.
-Mark
David Bartley wrote:
On Tue, Sep 9, 2008 at 11:43 AM, Mark Shellenbaum
[EMAIL PROTECTED] wrote:
David Bartley wrote:
Hello,
We're repeatedly seeing a kernel panic on our disk server. We've been
unable to determine exactly how to reproduce it, but it seems to occur
fairly frequently (a few
On Mon, 8 Sep 2008, jan damborsky wrote:
Is there any way to release dump ZFS volume after it was activated by
dumpadm(1M) command ?
Try 'dumpadm -d swap' to point the dump to the swap device.
Regards,
markm
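If the goal is to free a dedicated dump zvol, the full sequence might be (the zvol name is an assumption):

```shell
dumpadm                  # show the current dump device and savecore directory
dumpadm -d swap          # repoint the dump device at the swap device
zfs destroy rpool/dump   # the dedicated dump zvol can now be released
```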
have been referring to:
6343667 scrub/resilver has to start over when a snapshot is taken
-Mark
On 3 Sep 2008, at 05:20, F. Wessels [EMAIL PROTECTED] wrote:
Hi,
can anybody describe the correct procedure to replace a disk (in a
working OK state) with a another disk without degrading my pool?
This command ought to do the trick:
zpool replace pool old-disk new-disk
The type of pool
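The replacement resilvers onto the new disk while the old one stays online, so redundancy is preserved throughout. With example device names:

```shell
zpool replace tank c1t3d0 c2t3d0   # new disk resilvers; old one detaches when done
zpool status tank                  # watch resilver progress; pool remains ONLINE
```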
On Mon, 1 Sep 2008, Gavin Maltby wrote:
I'd like to be able to utter cmdlines such as
$ zfs set readonly=on .
$ zfs snapshot [EMAIL PROTECTED]
with '.' interpreted to mean the dataset corresponding to the current
working directory.
Sounds like it would be a useful RFE.
This would
On Thu, 28 Aug 2008, Paul Floyd wrote:
Does anyone have a pointer to a howto for doing a liveupgrade such that
I can convert the SXCE 94 UFS BE to ZFS (and liveupgrade to SXCE 96
while I'm at it) if this is possible? Searching with google shows a lot
of blogs that describe the early
Ian Collins wrote:
Mark Shellenbaum wrote:
Ian Collins wrote:
I have a pretty standard ZFS boot AMD64 desktop. A the moment, most ZFS
related commands are hanging (can't be killed) . Running 'truss share'
the last few lines I see are:
Can you provide a kernel thread list report? You can
Ian Collins wrote:
Mark Shellenbaum wrote:
Paul B. Henson wrote:
Are the libsec undocumented interfaces likely to remain the same when the
acl_t structure changes? They will still require adding the prototypes to
my code so the compiler knows what to make of them, but less chance of
breakage
::threadlist -v
-Mark
feedback...
We are currently investigating adding more functionality to libsec to
provide many of the things you desire. We will have iterators, editing
capabilities and so on.
-Mark
to determine if a file system supports POSIX draft ACLs or NFSv4 ACL and
then uses the native acl(2) syscall to retrieve the data.
-Mark
Paul B. Henson wrote:
On Fri, 15 Aug 2008, Mark Shellenbaum wrote:
The layout of the acl_t will likely change in the not too distant future.
[...]
of the ACL, but they aren't documented interfaces, such as acl_data()
which will return you the pointer to the array of ace_t's and acl_cnt
then you will get errors if you
attempt to set inheritance flags on files. That problem has been fixed
in snv_95.
-Mark
?
There isn't an easy way to do exactly what you want.
That's unfortunate :(
How do I go about requesting a feature like this?
You can open an RFE via:
http://www.opensolaris.org/bug/report.jspa
-Mark
it.
$ ls -% all file
-Mark
If you have already performed the install I'm not sure if there is an
easy way
to have the compression affect the current files so that you can regain some
of the currently used space. Maybe someone else has an idea on that.
-Mark D.
W. Wayne Liauh wrote:
Is it possible to compress a root
about yourself (or someone else)
at the bottom of the page.
Check it out, get involved, and I hope to see you there! It will be a
blast. More details later...
-- mark
On Fri, 25 Jul 2008, Alan Burlison wrote:
Enda O'Connor wrote:
probably
6722767 lucreate did not add new BE to menu.lst ( or grub )
Yeah, I found that bug, added a CR, and bumped the priority.
Unfortunately there's no analysis or workaround in the bug, so I've no
idea what the real problem
datasets and mount
them on the dataset tank/fs and all of its descendants.
See the zfs admin guide for example of the zfs delegated admin model.
Your pool must be version 8 or greater for delegated administration support.
-Mark
On Wed, 23 Jul 2008, [EMAIL PROTECTED] wrote:
Rainer,
Sorry for your trouble.
I'm updating the installboot example in the ZFS Admin Guide with the
-F zfs syntax now. We'll fix the installboot man page as well.
Mark, I don't have an x86 system to test right now, can you send me
On Tue, 22 Jul 2008, Rainer Orth wrote:
I just wanted to attach a second mirror to a ZFS root pool on an Ultra
1/170E running snv_93.
I've followed the workarounds for CR 6680633 and 6680633 from the ZFS
Admin Guide, but booting from the newly attached mirror fails like so:
I think you're
Martin Gisch wrote:
I noticed an oddity on my 2008.05 box today.
Created a new zfs file system that I was planning to nfs share out to an old
FreeBSD box, after I put sharenfs=on for it, I noticed there was a bunch of
others shared too:
-bash-3.2# dfshares -F nfs
RESOURCE
On Thu, 2008-07-10 at 11:42 +0100, Darren J Moffat wrote:
I regularly create new zfs filesystems or snapshots and I find it
annoying that I have to type the full dataset name in all of those cases.
I propose we allow zfs(1) to infer the part of the dataset name upto the
current working