On Mon, 24 Sep 2007, Michael Schuster wrote:
> I recently started seeing zfs chattiness at boot time: "reading zfs config"
> and something like "mounting zfs filesystems (n/n)".
This was added recently because ZFS can take a while to mount large
configs. Consoles would appear to freeze after the
On Wed, 19 Sep 2007, Mike Gerdts wrote:
> The rather consistent answer is that zoneadm clone will not do zfs until
> live upgrade does zfs. Since there is a new project in the works (Snap
> Upgrade) that is very much targeted at environments that use zfs, I
> would be surprised to see zfs support
On Mon, 17 Sep 2007, Robert Milkowski wrote:
> If you do 'zpool create -f test A B C spare D E' and D or E contains a UFS
> filesystem, then despite -f the zpool command will complain that there is a
> UFS file system on D.
This was fixed recently in build 73. See CR 6573276.
Regards,
markm
Thanks
Mark
Hey all again,
Looking into a few other options. How about InfiniBand? It would give us more
bandwidth, but will it increase complexity/price? Any thoughts?
Cheers
Mark
functioning drive in the loop can be the reporting one sometimes. It's
just a quirk of that storage unit.
These days devices will usually have an individual internal FC-AL loop
to each drive to alleviate this sort of problem.
Cheers,
Mark.
> Hi all,
>
> yesterday we had a
at by
using IPMP the bandwidth is increased because traffic is shared across all the
network cards. Is this true?
Thanks again for all your help
Cheers
Mark
Does anybody see a problem with this?
Also, I know this isn't ZFS, but is there any upper limit on file size with Samba?
Thanks for all your help.
Mark
Hey,
I will submit it. However, does OpenSolaris have a separate HCL, or do I just
use the Solaris one?
Cheers
Mark
to buy it all for
around AUS$350: CPU, mobo, RAM and everything. I've tried it with a few Solaris
distros and it's worked fine and been rather fast.
Cheers
Mark
t it really. Is this
correct?
Thanks again for all your help.
Cheers
Mark
. They will be connected by Gigabit Ethernet. So my
question is: how do I mirror one RAID-Z array across the network to the other?
Thanks for all your help
Mark.
e system image
(see TEST section). Here are some questions:
* Am I missing a command or something?
* Is there support for lofiadm in a more recent version of ZFS?
* Or is there any way to safely mount a file system image?
Thanks for your help.
Regards
Mark
GOOD NEWS
It looks as if the z
s. Plus
I'm not sure I've tuned the file systems for Oracle block sizes.
Depending on your solution, that probably isn't an issue for you.
We like the ability to do ZFS snapshots and clones; we can copy an
entire DB setup and create a clone in about ten seconds. Before it
remove the LUNs and free up the SE6140 arrays so their owners
can begin to use them.
At the moment, once a device is in a zpool, it's stuck there. That's a
problem. What sort of time frame are we looking at until it's possible
to remove LU
On Tue, 17 Jul 2007, Kwang-Hyun Baek wrote:
> # uname -a
> SunOS solaris-devx 5.11 opensol-20070713 i86pc i386 i86pc
>
> ===
> What's more interesting is that the ZFS version shows that it's 8. Does it
> even exist?
Yes, 8 was created to support
On Mon, 16 Jul 2007, Kwang-Hyun Baek wrote:
> Is there any way to fix this? I actually tried to destroy the pool and
> try to create a new one, but it doesn't let me. Whenever I try, I get
> the following error:
>
> [EMAIL PROTECTED]:/var/crash# zpool create -f pool c0d0s5
> internal error: No s
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote:
> NAME       STATE     READ WRITE CKSUM
> pool       UNKNOWN      0     0     0
>   c0d0s5   UNKNOWN      0     0     0
>   c0d0s6   UNKNOWN      0     0     0
>   c0d0s4   UNKNOWN      0     0     0
>
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote:
> zpool list
>
> it shows my pool with health UNKNOWN
That means it's already imported. What's the output of 'zpool status'?
Regards,
markm
On Fri, 13 Jul 2007, Kwang-Hyun Baek wrote:
> zpool import pool (my pool is named 'pool') returns
>
> cannot import 'pool': no such pool available
What does 'zpool import' by itself show you? It should give you a list of
available pools to import.
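For example, 'zpool import' with no arguments lists importable pools, and you
can then import one by name or by the numeric id it prints (use -f only if the
pool looks like it is still in use elsewhere):
# zpool import
# zpool import pool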
Regards,
markm
On Wed, 27 Jun 2007, Jürgen Keil wrote:
> Yep, I just tried it, and it refuses to "zpool import" the newer pool,
> telling me about the incompatible version. So I guess the pool format
> isn't the correct explanation for Dick Davies' (number9) problem.
Have you tried creating the poo
On Tue, 26 Jun 2007 [EMAIL PROTECTED] wrote:
> In particular zfs sometimes takes a while to return from zfs snapshot -r
> tank/[EMAIL PROTECTED] in the case where there are a great many iscsi shared
> volumes underneath. A little progress feedback would go a long way.
This would certainly be nice
Nicolas Williams wrote:
Couldn't wait for ZFS delegation, so I cobbled something together; see
attachment.
Nico
The *real* ZFS delegation code was integrated into Nevada this morning.
I've placed a little overview in my blog.
http://blogs.sun.com/mark
On Tue, 19 Jun 2007, John Brewer wrote:
> bash-3.00# zpool import
> pool: zones
> id: 4567711835620380868
> state: ONLINE
> status: The pool is formatted using an older on-disk version.
> action: The pool can be imported using its name or numeric identifier, though
> some features w
nteresting limitations,
beyond those that pertain to ZFS or SVM in isolation.
As a practical measure, you should probably not be duplicating levels of
striping or redundancy in both layers.
--Mark
On Tue, 12 Jun 2007, Tim Cook wrote:
> This pool should have 7 drives total, which it does, but for some reason
> c4d0 is displayed twice. Once as online (which it is), and once as
> unavail (which it is not).
What's the name of the 7th drive? Did you take all the drives from the
old system and
On Mon, 11 Jun 2007, Rick Mann wrote:
> ZFS Readonly implemntation is loaded!
Is that a copy-n-paste error, or is that typo in the actual output?
Regards,
markm
A patent lawyer could give his *opinion*, but actual infringement
would likely have to be determined in court. Before that happens,
most corporations end up signing a cross license agreement and not
actually answering the question of infringement.
-- mark
Chad Lewis wrote:
With US patent laws
On Fri, 1 Jun 2007, Ben Bressler wrote:
> When I do the zfs send | ssh zfs recv part, the file system (folder) is
> getting created, but none of the data that I have in my snapshot is
> sent. I can browse on the source machine to view the snapshot data
> pool/.zfs/snapshot/snap-name and I see the
On Fri, 1 Jun 2007, John Plocher wrote:
> This seems especially true when there is closure on actions - the set of
> zfs snapshot foo/[EMAIL PROTECTED]
> zfs destroy foo/[EMAIL PROTECTED]
> commands is (except for debugging zfs itself) a noop
Note that if you use the recursive snap
On Fri, 1 Jun 2007, Krzys wrote:
> bash-3.00# zpool replace mypool c1t2d0 emcpower0a
> bash-3.00# zpool status
>pool: mypool
> state: ONLINE
> status: One or more devices is currently being resilvered. The pool will
> continue to function, possibly in a degraded state.
> action: Wa
C/2006/465
http://www.opensolaris.org/jive/thread.jspa?messageID=47766
-Mark
http://ask.slashdot.org/article.pl?sid=07/05/30/0135218&from=rss
On Fri, 25 May 2007, Ben Rockwood wrote:
> May 25 23:32:59 summer unix: [ID 836849 kern.notice]
> May 25 23:32:59 summer ^Mpanic[cpu1]/thread=1bf2e740:
> May 25 23:32:59 summer genunix: [ID 335743 kern.notice] BAD TRAP: type=e (#pf
> Page fault) rp=ff00232c3a80 addr=490 occurred in mo
I have a T2000 with an 11/06 release of Solaris 10 installed. I had created a
zpool with one LUN in it. Due to an apparent incompatibility with our HBAs and
switches, I unplugged the fibre cables from the server's HBAs. Obviously this is
a dev server ;)
Two days later an admin logs in on the se
On Tue, 15 May 2007, Trevor Watson wrote:
> I don't suppose that it has anything to do with the flag being "wm"
> instead of "wu" on your second drive does it? Maybe if the driver thinks
> slice 2 is writeable, it treats it as a valid slice?
If the slice doesn't take up the *entire* disk, then it
On Mon, 14 May 2007, Alec Muffett wrote:
> I suspect the proper thing to do would be to build the six new large
> disks into a new RAID-Z vdev, add it as a mirror of the older,
> smaller-disk RAID-Z vdev, rezilver to zynchronize them, and then break
> the mirror.
The 'zpool replace' command is a
On Fri, 11 May 2007, Jason J. W. Williams wrote:
> Is it possible (or even technically feasible) for zfs to have a "destroy
> to" feature? Basically destroy any snapshot older than a certain date?
Sorta-kinda. You can use 'zfs get' to get the creation time of a
snapshot. If you give it -p, it'l
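For instance, something along these lines would approximate a "destroy
snapshots older than N days" (a sketch only; the dataset name and the 30-day
cutoff are made up, and -p makes the creation time come out in seconds since
the epoch):
cutoff=`perl -e 'print time - 30*24*60*60'`
for snap in `zfs list -H -t snapshot -o name | grep '^tank/home@'`; do
    ctime=`zfs get -H -p -o value creation "$snap"`
    [ "$ctime" -lt "$cutoff" ] && echo zfs destroy "$snap"   # drop the echo once you trust it
done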
On Thu, 10 May 2007, Bruce Shaw wrote:
>
> I don't have enough disk to do clones and I haven't figured out how to
> mount snapshots directly.
Maybe I'm misunderstanding what you're saying, but 'zfs clone' is exactly
the way to mount a snapshot. Creating a clone uses up a negligible amount
of disk
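For instance (names are only illustrative):
# zfs clone tank/data@monday tank/monday-view
# ls /tank/monday-view
# zfs destroy tank/monday-view        (when you're done with it)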
On Wed, 9 May 2007, Anantha N. Srirama wrote:
> However, the poor performance of the destroy is still valid. It is quite
> possible that we might create another clone for reasons beyond my
> original reason.
There are a few open bugs against destroy. It sounds like you may be
running into 650962
On 8 May, 2007, at 22.51, Cyril Plisko wrote:
So I quickly hacked together a script which defines the necessary
complete clauses (yes I am a tcsh user). After playing with it
for a while I decided to share it with the community in the hope that
it may be improved/extended and be a useful tool in day
Regards,
Mark V. Dalton
I'm hoping that this is simpler than I think it is. :-)
We routinely clone our boot disks using a fairly simple script that:
1) Copies the source disk's partition layout to the target disk using
prtvtoc, fmthard and installboot.
2) Using a list, runs newfs against the
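For reference, the label-copying part of step 1 usually boils down to something
like this on SPARC (disk names here are placeholders):
# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0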
On Thu, 26 Apr 2007, Ben Miller wrote:
> I just rebooted this host this morning and the same thing happened again. I
> have the core file from zfs.
>
> [ Apr 26 07:47:01 Executing start method ("/lib/svc/method/nfs-server start")
> ]
> Assertion failed: pclose(fp) == 0, file ../common/libzfs_mo
Gavin Maltby wrote:
Hi,
Is it expected that if I have filesystem tank/foo and tank/foo/bar
(mounted under /tank) then in order to be able to browse via
/net down into tank/foo/bar I need to have group/other permissions
on /tank/foo open?
You are running into bug:
4697677 permissions of underl
On Tue, 24 Apr 2007, Darren J Moffat wrote:
> There are obvious other places that would really benefit but I think
> having them as separate datasets really depends on what the machine is
> doing. For example /var/apache if you really are a webserver, but then
> why not go one better and split ou
On Mon, 23 Apr 2007, Eric Schrock wrote:
> On Mon, Apr 23, 2007 at 11:48:53AM -0700, Lyle Merdan wrote:
> > So If I send a snapshot of a filesystem to a receive command like this:
> > zfs send tank/[EMAIL PROTECTED] | zfs receive backup/jump
> >
> > In order to get compression turned on, am I corr
On Mon, 23 Apr 2007, Lyle Merdan wrote:
> So If I send a snapshot of a filesystem to a receive command like this:
> zfs send tank/[EMAIL PROTECTED] | zfs receive backup/jump
>
> In order to get compression turned on, am I correct in my thought that I
> need to start the send/receive and then in a
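One way to get that effect (a sketch, not necessarily what was suggested later
in this thread; the send-side names are placeholders) is to set compression on
the destination's parent before the receive, so the new dataset inherits it and
blocks are compressed as they are written:
# zfs set compression=on backup
# zfs send tank/fs@snap | zfs receive backup/jump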
Ah, ok. Nice to see this is going to happen.
-Mark
VFS-level
changes?
As one of the leaders of the FUSE project, I'd really like to see a
general file-system community, one which encompasses all Solaris
filesystems (including ZFS and UFS).
Cheers,
-Mark
Thanks,
Rich
On Thu, 19 Apr 2007, Mario Goebbels wrote:
> Is it possible to gracefully and permanently remove a vdev from a pool
> without data loss?
Is this what you're looking for?
http://bugs.opensolaris.org/view_bug.do?bug_id=4852783
If so, the answer is 'not yet'.
Regards,
markm
Once the pool is exported
there should be no ZIL records to replay on import, so modifying the
lr_setattr_t size isn't really critical, is it?
What are your suggestions?
I am currently working on adding a number of the BSD flags into ZFS.
I will be placing them
On Thu, 12 Apr 2007, Simon wrote:
> I'm installing Oracle 9i on Solaris 10 11/06 (update 3). I created some
> ZFS volumes which will be used for Oracle data files, as:
Have you tried using SVM volumes? I ask, because SVM does the same thing:
soft-link to /devices
If it works for SVM and not for ZFS,
On Wed, 11 Apr 2007, Constantin Gonzalez Schmitz wrote:
> So, instead of detaching, would unplugging, then detaching work?
I don't see why that couldn't work. The original question of striped
mirrors would be similar:
- zpool create tank mirror mirror
- {physically move and to new box}
ugh, thanks for exploring this and isolating the problem. We will look
into what is going on (wrong) here. I have filed bug:
6545015 RAID-Z resilver broken
to track this problem.
-Mark
Marco van Lienen wrote:
On Sat, Apr 07, 2007 at 05:05:18PM -0500, in a galaxy far far away, Chris
On Tue, 10 Apr 2007, Rich Teer wrote:
> I have a pool called tank/home/foo and I want to rename it to
> tank/home/bar. What's the best way to do this (the zfs and zpool man
> pages don't have a "rename" option)?
In fact, there is a rename option for zfs:
# zfs create tank/home
# zfs create tank
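Presumably the example continues along these lines (names follow the question):
# zfs create tank/home/foo
# zfs rename tank/home/foo tank/home/bar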
t is the required unmount (also required in a rename op). Has
there been recent activity in the Sunday-1 snapshot (like a backup or
'find' perhaps)? This will cause the unmount to proceed very slowly.
-Mark
vdev ... right?
Yes, although it depends on the nature of the write failure. If the
write failed because the device is no longer available, ZFS will not
continue to try different blocks.
-Mark
On Tue, 10 Apr 2007, Constantin Gonzalez wrote:
> Has anybody tried it yet with a striped mirror? What if the pool is
> composed out of two mirrors? Can I attach devices to both mirrors, let
> them resilver, then detach them and import the pool from those?
You'd want to export them, not detach th
On Tue, 10 Apr 2007, Martin Girard wrote:
> Is it possible to make my zpool redundant by adding a new disk in the pool
> and making it a mirror with the initial disk?
Sure, by using zpool attach:
# mkfile 64m /tmp/foo /tmp/bar
# zpool create tank /tmp/foo
# zpool status
pool: tank
state: ONLI
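The step the example is building up to would be something like:
# zpool attach tank /tmp/foo /tmp/bar
# zpool status tank
After the attach, tank shows up as a two-way mirror and resilvers onto /tmp/bar.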
ommited txg_id?
No.
-Mark
Frederic Payet - Availability Services wrote:
Hi gurus,
When creating some small files in a ZFS directory, the number of used blocks is
not what would be expected:
hinano# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
pool2       702K  16.5G  26.5K  /pool2
pool2/new
Robert Thurlow wrote:
Richard Elling wrote:
Peter Eriksson wrote:
ufsdump/ufsrestore doesn't restore the ACLs so that doesn't work,
same with rsync.
ufsrestore obviously won't work on ZFS.
ufsrestore works fine; it only reads from a 'ufsdump' format medium and
writes through generic files
Carson Gaspar wrote:
Mark Shellenbaum wrote:
Can you post the full ACL on the directory and on the file you are
being allowed to delete?
Simple test:
carson:gandalf 2 $ uname -a
SunOS gandalf.taltos.org 5.10 Generic_125101-02 i86pc i386 i86pc
carson:gandalf 0 $ mkdir foo
carson:gandalf 0
y suspicion that this is related.
I would suspect you are seeing:
6541829 zfs delete permissions are not working correctly.
That bug is fixed in Nevada, and will be in update 4.
-Mark
sty suspicion that this is related.
Can you post the full ACL on the directory and on the file you are being
allowed to delete?
-Mark
On Wed, 28 Mar 2007, prasad wrote:
> We create iso images of our product in the following way (high-level):
>
> # mkfile 3g /isoimages/myiso
> # lofiadm -a /isoimages/myiso
> /dev/lofi/1
> # newfs /dev/rlofi/1
> # mount /dev/lofi/1 /mnt
> # cd /mnt; zcat /product/myproduct.tar.Z | tar xf -
How bi
On Tue, 27 Mar 2007, Łukasz wrote:
> >Out of curiosity, what is the timing difference between a userland script
> >and performing the operations in the kernel?
>
> Operation takes 15 - 20 seconds
>
> In kernel it takes ( time in ms ):
[between 2.5 and 14.5 seconds]
Very nice improvem
On Tue, 27 Mar 2007, Łukasz wrote:
> zfs send then would:
> 1. create replicate snapshot if it does not exist
> 2. send data
> 3. wait 10 seconds
> 4. rename snapshot to replicate_previous ( destroy previous if exists )
> 5. goto 1.
>
> All snapshot operations are done in kernel - it wor
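A rough userland equivalent of that loop, for comparison (host and dataset
names are invented, and the very first pass would need a full rather than
incremental send):
while :; do
    zfs snapshot tank/vol@replicate
    zfs send -i @replicate_previous tank/vol@replicate | \
        ssh backuphost zfs receive -F tank/vol
    sleep 10
    zfs destroy tank/vol@replicate_previous
    zfs rename tank/vol@replicate tank/vol@replicate_previous
done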
snv_62
On Fri, 23 Mar 2007, Rich Teer wrote:
Date: Fri, 23 Mar 2007 11:41:21 -0700 (PDT)
From: Rich Teer <[EMAIL PROTECTED]>
To: Adam Leventhal <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] gzip compression support
On Fri, 23 Mar 2007, Adam Leventhal wrote:
Peter Tribble wrote:
On 3/23/07, Mark Shellenbaum <[EMAIL PROTECTED]> wrote:
Peter Tribble wrote:
> What exactly is the POSIX compliance requirement here?
>
The ignoring of a user's umask.
Where in POSIX does it specify the interaction of ACLs and a
user's umask?
Let me
Peter Tribble wrote:
On 3/23/07, Mark Shellenbaum <[EMAIL PROTECTED]> wrote:
The original plan was to allow the inheritance of owner/group/other
permissions. Unfortunately, during ARC reviews we were forced to remove
that functionality, due to POSIX compliance and security concerns.
iance and security concerns.
We can look into alternatives to provide a way to force the creation of
directory trees with a specified set of permissions.
-Mark
.test
# chmod A+group:::fd:allow dir.test
create files and directories under dir.test.
This should allow anyone in the desired group to read/write all
files, and the passthrough of aclmode stops chmod(2) from prepending
deny entries.
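The property itself would be set with something like (assuming the dataset is
tank/fs):
# zfs set aclmode=passthrough tank/fs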
-Mark
rce_file_create_mode=0660'
(like for samba shares) property would be even better - no need to fight
with ACLs...
That would be bad. That would mean that every file in a file system
would be forced to be created with forced set of permissions.
-Mark
Jens Elkner wrote:
Hi,
2) On zfs
- e.g. as root do:
cp -P -r -p /dir /pool1/zfsdir
# cp: Insufficient memory to save acl entry
I will open a bug on that.
cp -r -p /dir /pool1/zfsdir
# cp: Insufficient memory to save acl entry
find dir | cpio -pu
Al Hopper wrote:
On Fri, 16 Mar 2007, Mark Shellenbaum wrote:
Darren J Moffat wrote:
Is there a timeline for when we should expect the integration of the
user delegation functionality? I'm desperately waiting for it and I
keep seeing new functionality that was approved after it integr
ntegrated. The sharetab
code is going through final testing now.
-Mark
a explicit actions with mdb(1):
# mdb -kw
> arc::print -a c_max
c_max =
> /Z
In the current OpenSolaris Nevada bits, and in s10u4, you can use
the system variable 'zfs_arc_max' to set the maximum ARC size. Just
set this in /etc/system.
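For example, to cap the ARC at 1GB (the value is only illustrative; it takes
effect after a reboot), add:
set zfs_arc_max = 0x40000000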
-Mark
Erik Vanden Meersch wrote:
Co
high degree of fragmentation in the pool.
-Mark
Robert Milkowski wrote:
Hi.
T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1.
The command 'zpool export f3-2' has been hung for 30 minutes now and is still going.
Nothing else is running on the server. I can see one CPU being 100% in SYS l
out the IO, but we have seen instances where
this never seems to happen.
-Mark
Jason J. W. Williams wrote:
Hi Mark,
That does help tremendously. How does ZFS decide which zio cache to
use? I apologize if this has already been addressed somewhere.
The ARC caches data blocks in the zio_buf_xxx() cache that matches
the block size. For example, dnode data is stored on disk
Al Hopper wrote:
On Wed, 10 Jan 2007, Mark Maybee wrote:
Jason J. W. Williams wrote:
Hi Robert,
Thank you! Holy mackerel! That's a lot of memory. With that type of a
calculation my 4GB arc_max setting is still in the danger zone on a
Thumper. I wonder if any of the ZFS developers could
Jason J. W. Williams wrote:
Hi Robert,
Thank you! Holy mackerel! That's a lot of memory. With that type of a
calculation my 4GB arc_max setting is still in the danger zone on a
Thumper. I wonder if any of the ZFS developers could shed some light
on the calculation?
In a worst-case scenario, Ro
perty to passthrough, then the deny entries won't be
inserted. This is discussed in zfs(1M). Look at the description
for aclmode. There is a companion property aclinherit which controls
inheritance behavior.
-Mark
vault:/pool/home/wcerich/sample#ls -al
total 12
drwxr-xr-x 2 roo
Tomas Ögren wrote:
On 05 January, 2007 - Mark Maybee sent me these 1,5K bytes:
So it looks like this data does not include ::kmastat info from *after*
you reset arc_reduce_dnlc_percent. Can I get that?
Yeah, attached. (although about 18 hours after the others)
Excellent, this confirms #3
. You are tying the ARC's hands
here, so it has no ability to reduce its size.
Number 3 is the most difficult issue. We are looking into that at the
moment as well.
-Mark
Tomas Ögren wrote:
On 05 January, 2007 - Mark Maybee sent me these 0,8K bytes:
Thomas,
This could be fragmentation in
Thomas,
This could be fragmentation in the meta-data caches. Could you
print out the results of ::kmastat?
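e.g. from the running system:
# echo ::kmastat | mdb -k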
-Mark
Tomas Ögren wrote:
On 05 January, 2007 - Robert Milkowski sent me these 3,8K bytes:
Hello Tomas,
I saw the same behavior here when ncsize was increased from default.
Try with
Ah yes! Thank you Casper. I knew this looked familiar! :-)
Yes, this is almost certainly what is happening here. The
bug was introduced in build 51 and fixed in build 54.
[EMAIL PROTECTED] wrote:
Hmmm, so there is lots of evictable cache here (mostly in the MFU
part of the cache)... could yo
Hmmm, so there is lots of evictable cache here (mostly in the MFU
part of the cache)... could you make your core file available?
I would like to take a look at it.
-Mark
Tomas Ögren wrote:
On 03 January, 2007 - Mark Maybee sent me these 5,0K bytes:
Tomas,
There are a couple of things going
desired size down to c_min (64MB), it's
actually still consuming ~800MB in the hung kernel. This is odd.
The bulk of this space is in the 32K and 64K data caches. Could
you print out the contents of ARC_anon, ARC_mru, ARC_mfu, ARC_mru_ghost,
and ARC_mfu_ghost?
-Mark
Tomas Ögren wrote:
Hello.
Having
mask:rwx
other:r-x
I want to have the same ACL on a ZFS filesystem. How do I accomplish that?
Assuming you already have /home/users/ahege/incoming set to mode 0755
then the following should do the trick.
# chmod A+user:nobody:rwx:allow /home/users/ahege/incoming
-Mark
ks that have been overwritten will show up as corrupted (bad
checksums).
-Mark
ed caching data in compressed form, but have not
really explored the idea fully yet.
-Mark
oing to be fixed any time soon, perhaps we need a better
workaround:
Anyone internal working on this?
Yes. But it's going to be a few months.
-Mark
nt of available space.
This number may be useful as some sort of upper bound, but no more than
that.
-Mark
ite on ...) -- need to reallocate writes
and
6322646 ZFS should gracefully handle all devices failing (when writing)
These bugs are actively being worked on, but it will probably be a while
before fixes appear.
-Mark
I'm extremely surprised that this kind of bug can make it into a Solaris
rele
Peter Tribble wrote:
On 10/27/06, Mark Shellenbaum <[EMAIL PROTECTED]> wrote:
Peter Tribble wrote:
> Make everything be group writeable.
>
> % chmod A+group@:rwxp:fd:allow a
>
You can't use the a
Peter Tribble wrote:
On 10/24/06, Mark Shellenbaum <[EMAIL PROTECTED]> wrote:
Chris Gerhard wrote:
>
> I want a file system that is shared by the group. Everything in
the file
> system writable by the group no
This is:
6483887 without direct management, arc ghost lists can run amok
The fix I have in mind is to control the ghost lists as part of
the arc_buf_hdr_t allocations. If you want to test out my fix,
I can send you some diffs...
-Mark
Juergen Keil wrote:
Jürgen Keil writes:
> > ZFS 1