Vincent Fox wrote:
Let's say you are paranoid and have built a pool with 40+ disks in a Thumper.
Is there a way to set metadata copies=3 manually?
After having built RAIDZ2 sets with 7-9 disks and then pooled these together,
it just seems like a little bit of extra insurance to increase
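(For reference, metadata ditto copies are managed automatically; the only related administrator knob, on builds that have it, is the per-dataset 'copies' property, which controls additional copies of user data. A hypothetical example, with a placeholder dataset name:
# zfs set copies=3 mypool/data
# zfs get copies mypool/data
)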
Tom,
Can you confirm that you are running Solaris 10? If so, then logging a
support call is the appropriate thing so that you can get the complete
set of patches to address the issues seen on x4500. Here are the patches
and IDRs that you will need:
Patch Set:
125370-06 x86 Fault Manager,
J Duff wrote:
Under what circumstances would the BP_IDENTITY of zio->io_bp not equal the
BP_IDENTITY of zio->io_orig_bp?
Duff
Krzys wrote:
hello folks, I am running Solaris 10 U3 and I have a small problem that I don't
know how to fix...
I had a pool of two drives:
bash-3.00# zpool status
pool: mypool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
Darren J Moffat wrote:
Mark Maybee wrote:
wait Block all I/O access until the device connectivity
is recovered and the errors are cleared. This is the
default behavior.
It isn't clear from the case material but I assume that reads that can
be
The latest ZFS patches for Solaris 10 are now available:
120011-14 - SunOS 5.10: kernel patch
120012-14 - SunOS 5.10_x86: kernel patch
ZFS Pool Version available with patches = 4
These patches will provide access to all of the latest features and bug
fixes:
Features:
PSARC 2006/288 zpool
Bernhard,
Here are the solaris 10 patches:
120011-14 - SunOS 5.10: kernel patch
120012-14 - SunOS 5.10_x86: kernel patch
See http://www.opensolaris.org/jive/thread.jspa?threadID=39951&tstart=0
for more info.
Thanks,
George
Bernhard Holzer wrote:
Hi,
this parameter (zfs_nocacheflush) is
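For reference, on kernels that include this tunable it is normally set in /etc/system and takes effect after a reboot; a sketch (only appropriate when the storage has non-volatile write cache):
set zfs:zfs_nocacheflush = 1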
Bhaskar,
In the code below, dbuf_find() will return with db_mtx locked which will
prevent dbuf_do_evict() from proceeding as it will block waiting on this
mutex:
dbuf_do_evict(void *private)
{
	if (!MUTEX_HELD(db->db_mtx))
		mutex_enter(db->db_mtx);
Thanks,
George
Bhaskar
Bhaskar,
In your scenario the mutex would be held by the other thread in
dbuf_hold_impl(). So the caller of dbuf_do_evict() would not own the
mutex and would have to acquire it. It's quite possible that the
eviction thread could have acquired the mutex earlier in the call path.
ZFS Fans,
Here's a list of features that we are proposing for Solaris 10u5. Keep
in mind that this is subject to change.
Features:
PSARC 2007/142 zfs rename -r
PSARC 2007/171 ZFS Separate Intent Log
PSARC 2007/197 ZFS hotplug
PSARC 2007/199 zfs {create,clone,rename} -p
PSARC 2007/283 FMA for
You need to install patch 120011-14. After you reboot you will be able
to run 'zpool upgrade -a' to upgrade to the latest version.
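Roughly, the sequence after applying the patch and rebooting looks like this:
# zpool upgrade       (reports the version the software supports and what each pool runs)
# zpool upgrade -a    (upgrades all pools to the latest supported version)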
Thanks,
George
sunnie wrote:
Hey, guys
Since the current zfs software only supports ZFS pool version 3, what should I
do to upgrade the zfs software or package?
Ben,
Much of this code has been revamped as a result of:
6514331 in-memory delete queue is not needed
Although this may not fix your issue it would be good to try this test
with more recent bits.
Thanks,
George
Ben Miller wrote:
Hate to re-open something from a year ago, but we just had
Darren J Moffat wrote:
I'm looking for some guidance on when it is appropriate to increase the
SPA and ZFS version numbers.
Currently for ZFS Crypto I've only increased the SPA version number and
I'm using spa_version() in a few places - probably a few more are needed
though.
What
Darren J Moffat wrote:
George Wilson wrote:
Darren J Moffat wrote:
Mark Maybee wrote:
Darren J Moffat wrote:
For an encrypted dataset it is possible that by the time we arrive
in zio_write() [ zio_write_encrypt() ], when we look up which
key is needed to encrypt this data, that key
The on-disk format for s10u4 will be version 4. This is equivalent to
Opensolaris build 62.
Thanks,
George
David Evans wrote:
As the release date Solaris 10 Update 4 approaches (hope, hope), I was
wondering if someone could comment on which versions of opensolaris ZFS will
seamlessly work
I'm planning on putting back the changes to ZFS into Opensolaris in
upcoming weeks. This will still require a manual step as the changes
required in the sd driver are still under development.
The ultimate plan is to have the entire process totally automated.
If you have more questions, feel
Darren J Moffat wrote:
I've been hoping to use elements of the blkptr_t as the initialisation
vector (IV) for the AES crypto algorithms - specifically the offset and
blk_birth. When do these get filled in?
This happens during the allocate phase of the pipeline (either
zio_dva_allocate or
This fix plus the fix for '6495013 Loops and recursion in
metaslab_ff_alloc can kill performance, even on a pool with lots of free
data' will greatly help your situation.
Both of these fixes will be in Solaris 10 update 4.
Thanks,
George
Łukasz wrote:
I have a huge problem with ZFS pool
Darren J Moffat wrote:
Is it possible to have dataset properties that are managed using the
dsl_prop_set() / dsl_prop_get() interfaces but that aren't made available
via zfs(1)? In fact, I probably don't want them in userland at all.
You can set the pd_visible field in the zfs_prop_table[] to
Darren J Moffat wrote:
George Wilson wrote:
Darren J Moffat wrote:
Is it possible to have dataset properties that are managed using the
dsl_prop_set() / dsl_prop_get() interfaces but that aren't made available
via zfs(1)? In fact, I probably don't want them in userland at all
David Smith wrote:
I was wondering if anyone had a script to parse the 'zpool status -v' output
into a more machine-readable format?
Thanks,
David
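There is no parseable-output flag for 'zpool status' in these releases, but a rough awk sketch along these lines is a common starting point (it assumes the default NAME STATE READ WRITE CKSUM column layout and keeps only the per-device lines):
zpool status -v | awk 'NF == 5 && $3 ~ /^[0-9]+$/ { printf("%s,%s,%s,%s,%s\n", $1, $2, $3, $4, $5) }'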
Peter,
Can you send the 'zpool status -x' output after your reboot? I suspect
that the pool error is occurring early in the boot and later the devices
are all available and the pool is brought into an online state.
Take a look at:
6401126 ZFS DE should verify that diagnosis is still valid
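If the devices really are present by the time you look, something along these lines usually clears it (device name is a placeholder):
# zpool online mtf c0t1d0
# zpool status mtf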
Peter Goodman wrote:
# zpool status -x
pool: mtf
state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
see:
Gino,
Can you send me the corefile from the zpool command? This looks like a
case where we can't open the device for some reason. Are you using a
multi-pathing solution other than MPXIO?
Thanks,
George
Gino wrote:
Today we lost another zpool!
Fortunately it was only a backup repository.
Gino,
Were you able to recover by setting zfs_recover?
Thanks,
George
Gino wrote:
Hi All,
here is another kind of kernel panic caused by ZFS that we found.
I have dumps if needed.
#zpool import
pool: zpool8
id: 7382567111495567914
state: ONLINE
status: The pool is formatted using an
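For reference, zfs_recover is normally enabled from /etc/system before retrying the import; a last-resort sketch (the assertion-relaxing aok variable is usually set alongside it):
set aok = 1
set zfs:zfs_recover = 1
Reboot, then try 'zpool import' again.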
William D. Hathaway wrote:
I'm running Nevada build 60 inside VMware; it is a test rig with no data of value.
SunOS b60 5.11 snv_60 i86pc i386 i86pc
I wanted to check out the FMA handling of a serious zpool error, so I did the
following:
2007-04-07.08:46:31 zpool create tank mirror c0d1 c1d1
.
Can you share a little of your background and interest in the project with
this list? We can provide better answers to your questions if we
understand what you're trying to accomplish.
Thanks,
George Wilson
IBM LTC Security Development
Thanks,
George Wilson
IBM LTC Security Development
it's in that state.
Dunno what to capture. Anything interesting in /var/log/security? I
didn't think to look there last time.
-- ljk
Thanks,
George Wilson
IBM LTC Security Development
Ihsan,
If you are running Solaris 10 then you are probably hitting:
6456939 sd_send_scsi_SYNCHRONIZE_CACHE_biodone() can issue TUR which
calls biowait() and deadlock/hangs host
This was fixed in opensolaris (build 48) but a patch is not yet
available for Solaris 10.
Thanks,
George
Ihsan
for this patch, Paul. Especially given the router
fun Loulwa had, this should prove useful.
Thanks,
George Wilson
IBM LTC Security Development
Now that Solaris 10 11/06 is available, I wanted to post the complete list of
ZFS features and bug fixes that were included in that release. I'm also
including the necessary patches for anyone wanting to get all the ZFS features
and fixes via patches (NOTE: later patch revision may already be
storage-disk wrote:
Hi there
I have 3 questions regarding zfs.
1. what are zfs packages?
SUNWzfsr, SUNWzfskr, and SUNWzfsu. Note that ZFS has dependencies on
other components of Solaris, so installing just the packages is not
supported.
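A quick way to confirm they are installed on a standard system:
# pkginfo SUNWzfsr SUNWzfskr SUNWzfsu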
2. what services need to be started in order for
Derek,
I don't think 'zpool attach/detach' is what you want as it will always
result in a complete resilver.
Your best bet is to export and re-import the pool after moving
devices. You might also try to 'zpool offline' the device, move it and
then 'zpool online' it. This should force a
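Spelled out, the two approaches look roughly like this (pool name is a placeholder):
# zpool export mypool
(physically move the devices)
# zpool import mypool
or, per device:
# zpool offline mypool c1t53d0
(move the disk)
# zpool online mypool c1t53d0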
Siegfried,
Can you provide the panic string that you are seeing? We should be able
to pull out the persistent error log information from the corefile. You
can take a look at the spa_get_errlog() function as a starting point.
Additionally, you can look at the corefile using mdb and take a look at
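As a sketch, examining the crash dump would look something like this (the dump file names follow the usual savecore numbering):
# mdb unix.0 vmcore.0
> ::status
> ::spa -v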
Derek,
Have you tried doing a 'zpool replace poolname c1t53d0 c2t53d0'? I'm not
sure if this will work but worth a shot. You may still end up with a
complete resilver.
Thanks,
George
Derek E. Lewis wrote:
On Thu, 28 Dec 2006, George Wilson wrote:
Your best bet is to export and re-import
Bill,
If you want to find the file associated with the corruption you could do
a 'find /u01 -inum 4741362' or use the output of 'zdb -d u01' to
find the object associated with that ID.
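For example (the object number is the one reported in the error):
# find /u01 -inum 4741362 -print
or, on builds where zdb accepts an object number, something like:
# zdb -dddd u01 4741362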
Thanks,
George
Bill Casale wrote:
Please reply directly to me. Seeing the message below.
Is it possible
it. Perhaps we can document this as a special case of
control not being enforced by an open if we have to.
Thanks,
George Wilson
IBM LTC Security Development
I would like an AD group Unix Admins to have root group membership.
How does one accomplish this?
Thanks,
George
There seem to be hundreds of ways to skin this cat, but I can't seem to
find anyone who describes a complete process to make it work. I am using
Fedora 5 and the latest build of Samba 3.0. My end goal is to have the
samba server be a member of the windows 2003 domain and AD users be able
to ssh
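A minimal sketch of the usual approach, assuming Kerberos and DNS already point at the domain controller (realm, workgroup, and idmap ranges are placeholders):
[global]
    security = ads
    realm = EXAMPLE.COM
    workgroup = EXAMPLE
    winbind use default domain = yes
    idmap uid = 10000-20000
    idmap gid = 10000-20000
Then join the domain and verify the AD users are visible:
# net ads join -U Administrator
# wbinfo -u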
with enforcing=0 initially to avoid init panicking the system. Once the filesystem was relabeled, I rebooted in enforcing mode without problems. I can even log in, both on the console and via ssh, in enforcing mode.
Thanks,
George Wilson
IBM LTC Security Development
Klaus Weidner [EMAIL PROTECTED]
Klaus Weidner
Stuart,
Can you send the output of 'zpool status -v' from both nodes?
Thanks,
George
Stuart Low wrote:
Nada.
[EMAIL PROTECTED] ~]$ zpool export -f ax150s
cannot open 'ax150s': no such pool
[EMAIL PROTECTED] ~]$
I wonder if it's possible to force the pool to be marked as inactive? Ideally
Stuart,
Given that the pool was imported on both nodes simultaneously, it may have
been corrupted beyond repair. I'm assuming the problem on the other node is
also a system panic? If so, can you send the panic string from that node?
Thanks,
George
Stuart Low wrote:
I thought that might work too but having
Stuart,
Issuing a 'zpool import' will show all the pools which are accessible
for import and that's why you are seeing them. The fact that a forced
import results in a panic is indicative of pool corruption that
resulted from being imported on more than one host.
Thanks,
George
A fix for this should be integrated shortly.
Thanks,
George
Michael Schuster - Sun Microsystems wrote:
Robert Milkowski wrote:
Hello Michael,
Wednesday, August 23, 2006, 12:49:28 PM, you wrote:
MSSM Roch wrote:
MSSM I sent this output offline to Roch, here's the essential ones
and
Neal,
This is not fixed yet. Your best bet is to run a replicated pool.
Thanks,
George
Neal Miskin wrote:
Hi Dana
It is ZFS bug 6322646; a flaw.
Is this fixed in a patch yet?
nelly_bo
Robert,
One of your disks is not responding. I've been trying to track down why
the scsi command is not being timed out but for now check out each of
the devices to make sure they are healthy.
BTW, if you capture a corefile let me know.
Thanks,
George
Robert Milkowski wrote:
Hi.
S10U2 +
Robert Milkowski wrote:
Hello George,
Thursday, August 24, 2006, 5:48:08 PM, you wrote:
GW Robert,
GW One of your disks is not responding. I've been trying to track down why
GW the scsi command is not being timed out but for now check out each of
GW the devices to make sure they are
Roch wrote:
Dick Davies writes:
On 22/08/06, Bill Moore [EMAIL PROTECTED] wrote:
On Mon, Aug 21, 2006 at 02:40:40PM -0700, Anton B. Rang wrote:
Yes, ZFS uses this command very frequently. However, it only does this
if the whole disk is under the control of ZFS, I believe; so a
Frank,
The SC 3.2 beta may be closed, but I'm forwarding your request to Eric
Redmond.
Thanks,
George
Frank Cusack wrote:
On August 10, 2006 6:04:38 PM -0700 eric kustarz [EMAIL PROTECTED]
wrote:
If you're doing HA-ZFS (which is SunCluster 3.2 - only available in
beta right now),
Is the
I believe this is what you're hitting:
6456888 zpool attach leads to memory exhaustion and system hang
We are currently looking at fixing this so stay tuned.
Thanks,
George
Daniel Rock wrote:
Joseph Mocker wrote:
Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS
SVM
Ricardo,
We have also discovered this bug:
6453172 ztest turns into a sloth due to massive arc_min_prefetch_lifespan
I believe Neil has a fix in the works.
Thanks,
George
Ricardo Correia wrote:
Hi,
I've received a bug report in zfs-fuse that doesn't seem to be specific to
the
port.
Luke,
You can run 'zpool upgrade' to see what on-disk version you are capable
of running. If you have the latest features then you should be running
version 3:
hadji-2# zpool upgrade
This system is currently running ZFS version 3.
Unfortunately this won't tell you if you are running the
Leon,
Looking at the corefile doesn't really show much from the zfs side. It
looks like you were having problems with your SAN though:
/scsi_vhci/[EMAIL PROTECTED] (ssd5) offline
/scsi_vhci/[EMAIL PROTECTED] (ssd5) multipath status: failed, path
/[EMAIL PROTECTED],70/SUNW,[EMAIL
5.8_x86 5.9_x86 5.10_x86: Live Upgrade Patch
Thanks,
George
George Wilson wrote:
Dave,
I'm copying the zfs-discuss alias on this as well...
It's possible that not all necessary patches have been installed or they
may be hitting CR# 6428258. If you reboot the zone, does it continue to
end up
Dave,
I'm copying the zfs-discuss alias on this as well...
It's possible that not all necessary patches have been installed or they
may be hitting CR# 6428258. If you reboot the zone, does it continue to
end up in maintenance mode? Also do you know if the necessary ZFS/Zones
patches have been
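To see which service is actually failing inside the zone, something like this helps (zone name is a placeholder):
# zlogin myzone svcs -xv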
We have putback a significant number of fixes and features from
OpenSolaris into what will become Solaris 10 11/06. For reference here's
the list:
Features:
PSARC 2006/223 ZFS Hot Spares
6405966 Hot Spare support in ZFS
PSARC 2006/303 ZFS Clone Promotion
6276916 support for clone swap
PSARC
Rainer,
This will hopefully go into build 06 of s10u3. It's on my list... :-)
Thanks,
George
Rainer Orth wrote:
George Wilson [EMAIL PROTECTED] writes:
We have putback a significant number of fixes and features from
OpenSolaris into what will become Solaris 10 11/06. For reference here's
I forgot to highlight that RAIDZ2 (a.k.a RAID-6) is also in this wad:
6417978 double parity RAID-Z a.k.a. RAID6
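For anyone wanting to try it once the bits are installed, creating a double-parity pool looks like this (device names are placeholders):
# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0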
Thanks,
George
George Wilson wrote:
We have putback a significant number of fixes and features from
OpenSolaris into what will become Solaris 10 11/06. For reference here's
Grant,
Expect patches late September or so. Once available I'll post the patch
information.
Thanks,
George
grant beattie wrote:
On Mon, Jul 31, 2006 at 11:51:09AM -0400, George Wilson wrote:
We have putback a significant number of fixes and features from
OpenSolaris into what will become
that we have 800 servers, 30,000
users, 140 million lines of ASCII per day all fitting in a 2u T2000 box!
thanks
sean
George Wilson wrote:
Sean,
Sorry for the delay getting back to you.
You can do a 'zpool upgrade' to see what version of the on-disk format
your pool is currently running
Robert,
The patches will be available sometime late September. This may be a
week or so before s10u3 actually releases.
Thanks,
George
Robert Milkowski wrote:
Hello eric,
Thursday, July 27, 2006, 4:34:16 AM, you wrote:
ek Robert Milkowski wrote:
Hello George,
Wednesday, July 26, 2006,
Linda Knippers [EMAIL PROTECTED] wrote on 07/14/2006 13:38:02:
George Wilson wrote:
[EMAIL PROTECTED] wrote on 07/14/2006 12:37:29:
On Fri, Jul 14, 2006 at 01:17:28PM -0400, Daniel J Walsh wrote:
Internal Red Hat people are interested if we can do this another way
without
upgrade does so that it happens as part of the promotion.
A best practice would be to keep the application data and the config/logging
data separate. This would avoid the need for this feature.
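As a sketch of that layout (pool and dataset names are placeholders):
# zfs create tank/app
# zfs create tank/app/config
# zfs create tank/app/logs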
Thanks,
George
Darren J Moffat wrote:
George Wilson wrote:
Matt,
This is really cool! One thing that I
Eric,
I've done the same thing with about a dozen domains. Using
private IPs on the back end, I just added the extra IPs to the NIC in network
properties and then mapped the NAT in the PIX.
George
From: Eric Wilson [EMAIL PROTECTED]
Subject: [IMail Forum] NAT Support for IMAIL