': snapshot is busy
Extending the hold mechanism to filesystems and volumes would be quite nice.
Mike
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
vfs.zfs.txg.synctime_ms: 1000
vfs.zfs.txg.timeout: 5
On Thu, Jul 19, 2012 at 8:47 PM, John Martin john.m.mar...@oracle.com wrote:
On 07/19/12 19:27, Jim Klimov wrote:
However, if the test file was written in 128K blocks and then
is rewritten with 64K blocks, then Bob's answer is probably
$ du -k 1gig
0 1gig
--
Mike Gerdts
http://mgerdts.blogspot.com/
happen):
If processing in interrupt context (use intrstat) is dominating cpu
usage, you may be able to use pcitool to cause the device generating
all of those expensive interrupts to be moved to another CPU.
--
Mike Gerdts
http://mgerdts.blogspot.com
(such as
https://pkg.oracle.com/solaris/support), I suggest reading
https://forums.oracle.com/forums/thread.jspa?threadID=2380689&tstart=15
before updating to SRU 6 (SRU 5 is fine, however). The fix for the
problem mentioned in that forums thread should show up in an upcoming
SRU via CR 7157313.
--
Mike
2012/3/26 ольга крыжановская olga.kryzhanov...@gmail.com:
How can I test if a file on ZFS has holes, i.e. is a sparse file,
using the C api?
See SEEK_HOLE in lseek(2).
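A minimal sketch of that approach (the helper name is mine, not from the thread): a file is sparse when the first hole reported by SEEK_HOLE lands before end of file, because a filesystem without hole support reports only a single virtual hole at EOF.

```c
#define _GNU_SOURCE /* for SEEK_HOLE on Linux; on Solaris it is in <unistd.h> */
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Returns 1 if the file has a hole before EOF, 0 if dense, -1 on error
 * (including ENXIO for an empty file, or EINVAL when the kernel
 * predates SEEK_HOLE entirely). */
int is_sparse(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    struct stat st;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return -1;
    }

    /* A filesystem with no hole support reports one virtual hole at EOF,
     * so a hole offset below st_size means genuinely unallocated blocks. */
    off_t hole = lseek(fd, 0, SEEK_HOLE);
    close(fd);
    if (hole < 0)
        return -1;
    return hole < st.st_size;
}
```

Note the result is filesystem-dependent: a filesystem that stores holes but does not report them will make every file look dense.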
--
Mike Gerdts
http://mgerdts.blogspot.com/
[ 1332804325.889143166 ]
bsz=131072 blks=32 fs=zfs
Notice that it says it has 32 512-byte blocks.
The mechanism you suggest does work for every other file system that
I've tried it on.
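For reference, the comparison under discussion can be sketched as follows (the function name is mine; st_blocks is defined in 512-byte units regardless of the filesystem's own block size, and on ZFS the allocated count can lag behind recent writes until the transaction group syncs):

```c
#include <sys/stat.h>
#include <stdio.h>

/* Print a file's apparent size next to its allocated bytes.
 * st_blocks counts 512-byte units, so allocated = st_blocks * 512. */
int report_usage(const char *path)
{
    struct stat st;
    if (stat(path, &st) < 0) {
        perror(path);
        return -1;
    }
    printf("%s: apparent=%lld allocated=%lld\n", path,
           (long long)st.st_size, (long long)st.st_blocks * 512);
    return 0;
}
```

On ZFS a freshly rewritten file can show allocated sizes that look wrong (even 0) until the next txg sync, which is why the 32-block figure above is misleading.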
--
Mike Gerdts
http://mgerdts.blogspot.com/
/dev/chassis//SYS/SASBP/HDD1/disk disk c0t5000CCA012B68AC8d0
The text in the left column represents text that should be printed on
the corresponding disk slots.
--
Mike Gerdts
http://mgerdts.blogspot.com/
mount -o mountpoint=/mnt/rpool/var rpool/ROOT/solaris/var
--
Mike Gerdts
http://mgerdts.blogspot.com/
--
Mike Gerdts
http://mgerdts.blogspot.com/
were issued. I'd never do that in
production without some form of I/O fencing in place.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish
swhitef...@yahoo.com wrote:
# zpool import -f tank
http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/
I encourage you to open a support case and ask for an escalation on CR 7056738.
--
Mike Gerdts
http://mgerdts.blogspot.com
of:
zlogin z1c1 init 0
zoneadm -z z1c1 detach
zfs rename rpool/zones/z1c1 rpool/new/z1c1
zonecfg -z z1c1 'set zonepath=/new/z1c1'
zoneadm -z z1c1 attach
zoneadm -z z1c1 boot
--
Mike Gerdts
http://mgerdts.blogspot.com/
. This is created in
 * a special directory, $EXTEND, at the root of the shared file
 * system. To hide this directory prepend a '.' (dot).
 */
--
Mike Gerdts
http://mgerdts.blogspot.com/
--
Mike Gerdts
http://mgerdts.blogspot.com/
follow-ups should probably go to Oracle Support or zones-discuss.
Your problems are not related to zfs.
--
Mike Gerdts
http://mgerdts.blogspot.com/
--
Mike Gerdts
http://mgerdts.blogspot.com/
are the underlying file
system, zfs vs ufs. Any thoughts to speed up the backup of the Sun 7000 NFS
mount?
Thank you.
Mike MacNeil
Global IT Infrastructure
4281 Harvester Rd.
Burlington, ON L7L 5M4
Canada
Phone: 905 632 2999 ext.2920
Fax: 905 632 2055
Email
to Canada as well without issue)
Why use USB? You will get much better performance/throughput on eSATA
(if you have good drivers, of course). I use their sil3124 eSATA
controller on FreeBSD as well as a number of PM units and they work great.
---Mike
--
---
Mike Tancsa, tel
On 1/31/2011 4:19 PM, Mike Tancsa wrote:
On 1/31/2011 3:14 PM, Cindy Swearingen wrote:
Hi Mike,
Yes, this is looking much better.
Some combination of removing corrupted files indicated in the zpool
status -v output, running zpool scrub and then zpool clear should
resolve the corruption
On 1/29/2011 6:18 PM, Richard Elling wrote:
On Jan 29, 2011, at 12:58 PM, Mike Tancsa wrote:
On 1/29/2011 12:57 PM, Richard Elling wrote:
0(offsite)# zpool status
pool: tank1
state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
replicas
On 1/31/2011 3:14 PM, Cindy Swearingen wrote:
Hi Mike,
Yes, this is looking much better.
Some combination of removing corrupted files indicated in the zpool
status -v output, running zpool scrub and then zpool clear should
resolve the corruption, but it depends on how bad the corruption
On 1/30/2011 12:39 AM, Richard Elling wrote:
Hmmm, doesn't look good on any of the drives.
I'm not sure of the way BSD enumerates devices. Some clever person thought
that hiding the partition or slice would be useful. I don't find it useful.
On a Solaris
system, ZFS can show a disk
)#
---Mike
for backups
of backups in a DR site
---Mike
On 1/29/2011 6:18 PM, Richard Elling wrote:
0(offsite)#
The next step is to run zdb -l and look for all 4 labels. Something like:
zdb -l /dev/ada2
If all 4 labels exist for each drive and appear intact, then look more closely
at how the OS locates the vdevs. If you can't solve the
Hi,
I am using FreeBSD 8.2 and went to add 4 new disks today to expand my
offsite storage. All was working fine for about 20min and then the new
drive cage started to fail. Silly me for assuming new hardware would be
fine :(
The new drive cage started to fail, it hung the server and the
I am trying to bring in my zpool from build 121 into build 134 and every time I
do a zpool import the system crashes.
I have read other posts on this and have tried setting zfs_recover = 1 and
aok = 1 in /etc/system. I have used mdb to verify that they are in the kernel,
but the system still
On Wed, Oct 27, 2010 at 9:27 AM, bhanu prakash bhanu.sys...@gmail.com wrote:
Hi Mike,
Thanks for the information...
Actually, the requirement is like this. Please let me know whether it meets
the requirement below or not.
Question:
The SAN team will assign the new LUNs on EMC DMX4
.
Perhaps this belongs somewhere other than zfs-discuss - it has nothing
to do with zfs.
--
Mike Gerdts
http://mgerdts.blogspot.com/
that you are comfortable that the zone data moved over ok...
zfs destroy -r oldpool/zones
Again, verify the procedure works on a test/lab/whatever box before
trying it for real.
--
Mike Gerdts
http://mgerdts.blogspot.com/
For posterity, I'd like to point out the following:
neel's original arcstat.pl uses a crude scaling routine that results in a large
loss of precision as numbers cross from kilobytes to megabytes to gigabytes.
The 1G reported arc size case described here, could actually be anywhere
between
Hello Christian,
Thanks for bringing this to my attention. I believe I've fixed the rounding
error in the latest version.
http://github.com/mharsch/arcstat
--
This message posted from opensolaris.org
przemol,
Thanks for the feedback. I had incorrectly assumed that any machine running
the script would have L2ARC implemented (which is not the case with Solaris
10). I've added a check for this that allows the script to work on non-L2ARC
machines as long as you don't specify L2ARC stats on
On Mon, Sep 27, 2010 at 6:23 AM, Robert Milkowski mi...@task.gda.pl wrote:
snip
Also see http://www.symantec.com/connect/virtualstoreserver
And
http://blog.scottlowe.org/2008/12/03/2031-enhancements-to-netapp-cloning-technology/
--
Mike Gerdts
http://mgerdts.blogspot.com
On 9/23/2010 at 12:38 PM Erik Trimble wrote:
| [snip]
|If you don't really care about ultra-low-power, then there's
absolutely
|no excuse not to buy a USED server-class machine which is 1- or 2-
|generations back. They're dirt cheap, readily available,
| [snip]
=
Anyone have
it affect that data? The zpool consists of 8 SAN
LUNs.
Thanks
mike
On Thu, Sep 16, 2010 at 08:15:53AM -0700, Rich Teer wrote:
On Thu, 16 Sep 2010, Erik Ableson wrote:
OpenSolaris snv129
Hmm, SXCE snv_130 here. Did you have to do any server-side tuning
(e.g., allowing remote connections), or did it just work out of the
box? I know that Sendmail needs
On Wed, Sep 15, 2010 at 12:08:20PM -0700, Nabil wrote:
any resolution to this issue? I'm experiencing the same annoying
lockd thing with mac osx 10.6 clients. I am at pool ver 14, fs ver
3. Would somehow going back to the earlier 8/2 setup make things
better?
As noted in the earlier
, to be released on Tuesday, is based on b146 or later.
--
Mike Gerdts
http://mgerdts.blogspot.com/
With this in place, I would imagine a next step is for zfs to issue
TRIM commands as zil entries have been committed to the data disks.
--
Mike Gerdts
http://mgerdts.blogspot.com/
Update: version 3.2.5 out now, with changes to better support snv_134:
http://forums.halcyoninc.com/showthread.php?t=368
If you've downloaded v3.2.4 and are on 09/06, there is no reason to upgrade.
Regards,
mike.k...@halcyoninc.com
Hi zfs user,
Is the beta free? for how long? if not how much for 5 machines?
Everything on our web site (including the beta) runs for 30 days with the
baked-in license. After 30 days it will stop collecting fresh numbers, unless
you add a license key, or a demo extension file from the sales
Bump this up. Anyone?
What I would really like to know is why PCIe RAID controller cards cost
more than an entire motherboard with processor. Some cards can cost over
$1,000, and for what?
can I mount my first drive that has
opensolaris installed ?
To list the zpools it can see:
zpool import
To import one called rpool at an alternate root:
zpool import -R /mnt rpool
--
Mike Gerdts
http://mgerdts.blogspot.com/
On 8/13/2010 at 8:56 PM Eric D. Mudama wrote:
|On Fri, Aug 13 at 19:06, Frank Cusack wrote:
|Interesting POV, and I agree. Most of the many distributions of
|OpenSolaris had very little value-add. Nexenta was the most
interesting
|and why should Oracle enable them to build a business at their
I am trying to give a general user permissions to create zfs filesystems in the
rpool.
zpool set delegation=on rpool
zfs allow user create rpool
both run without any issues.
zfs allow rpool reports the user does have create permissions.
zfs create rpool/test
cannot create rpool/test :
Thanks, adding mount did allow me to create it, but it does not allow me to
create the mountpoint.
On Mon, Jul 26, 2010 at 1:27 AM, Garrett D'Amore garr...@nexenta.com wrote:
On Sun, 2010-07-25 at 21:39 -0500, Mike Gerdts wrote:
On Sun, Jul 25, 2010 at 8:50 PM, Garrett D'Amore garr...@nexenta.com wrote:
On Sun, 2010-07-25 at 17:53 -0400, Saxon, Will wrote:
I think there may be very good
On Mon, Jul 26, 2010 at 2:56 PM, Miles Nordin car...@ivy.net wrote:
mg == Mike Gerdts mger...@gmail.com writes:
mg it is rather common to have multiple 1 Gb links to
mg servers going to disparate switches so as to provide
mg resilience in the face of switch failures
there was an option to load balance using
a round robin hashing algorithm. When pushing high network loads this
may cause performance problems with reassembly.
--
Mike Gerdts
http://mgerdts.blogspot.com/
, otherwise the checksum will get changed by the archiving
agent.
What is the likelihood that the same data is re-written to the file?
If that is unlikely, it looks as though znode_t's z_seq may be useful.
While it isn't a checksum, it seems to be incremented on every file
change.
--
Mike Gerdts
of reads that didn't turn into physical I/Os.
--
Mike Gerdts
http://mgerdts.blogspot.com/
with options
like:
netperf -H $host -t TCP_RR -r 32768 -l 30
That is speculation based on reading
http://www.netperf.org/netperf/training/Netperf.html. Someone else
(perhaps on networking or performance lists) may have better tests to
run.
--
Mike Gerdts
http://mgerdts.blogspot.com
block size? How does 32 KB compare
to the block size on the relevant zfs filesystem or zvol? Are blocks
aligned at the various layers?
http://blogs.sun.com/dlutz/entry/partition_alignment_guidelines_for_unified
--
Mike Gerdts
http://mgerdts.blogspot.com
' outputs on a 5 min interval and a 'zpool iostat -v 30 5' which would help
visualize the I/O behavior.
Regards,
Mike
http://blog.laspina.ca/
I haven't tried it yet, but supposedly this will back up/restore the
COMSTAR config:
$ svccfg export -a stmf > comstar.bak.${DATE}
If you ever need to restore the configuration, you can attach the
storage and run an import:
$ svccfg import comstar.bak.${DATE}
- Mike
On 6/28/10, bso
of engineering where group projects were common
and CAD, EDA, and simulation tools could generate big files very
quickly.
--
Mike Gerdts
http://mgerdts.blogspot.com/
I would have
written it for the last decade or so...
--
Mike Gerdts
http://mgerdts.blogspot.com/
there are
two blocks that are identical, thus confounding deduplication as well.
--
Mike Gerdts
http://mgerdts.blogspot.com/
that the idle time reaches 0 or the process' latency
column is more than a few tenths of a percent, you are probably short
on CPU.
It could also be that interrupts are stealing cycles from rsync.
Placing it in a processor set with interrupts disabled in that
processor set may help.
--
Mike Gerdts
Sorry, turned on html mode to avoid gmail's line wrapping.
On Mon, May 31, 2010 at 4:58 PM, Sandon Van Ness san...@van-ness.comwrote:
On 05/31/2010 02:52 PM, Mike Gerdts wrote:
On Mon, May 31, 2010 at 4:32 PM, Sandon Van Ness san...@van-ness.com
wrote:
On 05/31/2010 01:51 PM, Bob
that have been swapped out are never
scheduled) then those pages will not consume RAM.
The best thing to do with processes that can be swapped out forever is
to not run them.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Thu, Apr 22, 2010 at 12:40:37PM -0700, Rich Teer wrote:
On Thu, 22 Apr 2010, Tomas Ögren wrote:
Copying via terminal (and cp) works.
Interesting: if I copy a file *which has no extended attributes* using cp in
a terminal, it works fine. If I try to cp a file that has EA (to the same
On Thu, Apr 22, 2010 at 01:54:26PM -0700, Rich Teer wrote:
On Thu, 22 Apr 2010, Mike Mackovitch wrote:
Hi Mike,
So, it looks like you need to investigate why the client isn't
getting responses from the server's lockd.
This is usually caused by a firewall or NAT getting in the way
I would appreciate if somebody can clarify a few points.
I am doing some random WRITES (100% writes, 100% random) testing and observe
that ARC grows way beyond the hard limit during the test. The hard limit is
set 512 MB via /etc/system and I see the size going up to 1 GB - how come is it
I am trying to see how ZFS behaves under resource starvation - corner cases in
embedded environments. I see some very strange behavior. Any help/explanation
would really be appreciated.
My current setup is :
OpenSolaris 111b (iSCSI seems to be broken in 132 - unable to get multiple
systems will
millions of files with relatively few changes.
--
Mike Gerdts
http://mgerdts.blogspot.com/
the NDMP_CONFIG_GET_BUTYPE_INFO request.
http://www.ndmp.org/download/sdk_v4/draft-skardal-ndmp4-04.txt
It seems pretty clear from this that an NDMP data stream can contain
most anything and is dependent on the device being backed up.
--
Mike Gerdts
http://mgerdts.blogspot.com
zfs send | zfs receive.
--
Mike Gerdts
http://mgerdts.blogspot.com/
/.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Mon, Jan 25, 2010 at 2:32 AM, Kjetil Torgrim Homme
kjeti...@linpro.no wrote:
Mike Gerdts mger...@gmail.com writes:
John Hoogerdijk wrote:
Is there a way to zero out unused blocks in a pool? I'm looking for
ways to shrink the size of an opensolaris virtualbox VM and using the
compact
On Sat, Jan 23, 2010 at 11:55 AM, John Hoogerdijk
john.hoogerd...@sun.com wrote:
Mike Gerdts wrote:
On Fri, Jan 22, 2010 at 1:00 PM, John Hoogerdijk
john.hoogerd...@sun.com wrote:
Is there a way to zero out unused blocks in a pool? I'm looking for ways
to
shrink the size
and
to be able to extract your data after you no longer use netbackup.
--
Mike Gerdts
http://mgerdts.blogspot.com/
should be able to just use mkfile or dd
if=/dev/zero ... to create a file that consumes most of the free
space then delete that file. Certainly it is not an ideal solution,
but seems quite likely to be effective.
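A hedged sketch of that approach in C (the helper name and byte cap are mine, for illustration; note that on a pool with compression or dedup enabled the zero blocks may never be allocated on disk, which defeats the purpose, and filling a pool completely is best avoided on a live system):

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Write zero-filled blocks to a scratch file until ENOSPC or max_bytes,
 * then unlink it. Returns bytes written, or -1 on unexpected error.
 * max_bytes is a safety cap; pass a huge value to scrub most free space. */
long long zero_free_space(const char *scratch, long long max_bytes)
{
    static char buf[1 << 20];          /* 1 MiB of zeros (static => zeroed) */
    int fd = open(scratch, O_WRONLY | O_CREAT | O_EXCL, 0600);
    if (fd < 0)
        return -1;

    long long written = 0;
    while (written < max_bytes) {
        size_t want = sizeof buf;
        if (max_bytes - written < (long long)want)
            want = (size_t)(max_bytes - written);
        ssize_t n = write(fd, buf, want);
        if (n < 0) {
            if (errno == ENOSPC)       /* free space exhausted: done */
                break;
            written = -1;              /* some other failure */
            break;
        }
        written += n;
    }
    if (written > 0)
        fsync(fd);                     /* push the zeros to the backing store */
    close(fd);
    unlink(scratch);                   /* give the space back */
    return written;
}
```

After this, compacting the VirtualBox image should reclaim the zeroed regions.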
--
Mike Gerdts
http://mgerdts.blogspot.com
32 Jan 22 04:13 sha256
-rw-r--r-- 1 428411 Jan 22 04:14 sha256.Z
-rw-r--r-- 1 321846 Jan 22 04:14 sha256.bz2
-rw-r--r-- 1 320068 Jan 22 04:14 sha256.gz
--
Mike Gerdts
http://mgerdts.blogspot.com/
I use zfs send/recv in the enterprise and in smaller environments all the
time and it is excellent.
Have a look at how awesome the functionality is in this example.
http://blog.laspina.ca/ubiquitous/provisioning_disaster_recovery_with_zfs
Regards,
Mike
in the successor to flash archives. This initial proposal seems
to imply using the same mechanism for a system image backup (instead
of just system provisioning).
http://mail.opensolaris.org/pipermail/caiman-discuss/2010-January/015909.html
--
Mike Gerdts
http://mgerdts.blogspot.com
a pkg image create
followed by *multiple* pkg install invocations. No checksum errors
pop up there.
--
Mike Gerdts
http://mgerdts.blogspot.com/
, but the fix might be to remove the option.
This unsupported feature is supported with the use of Sun Ops Center
2.5 when a zone is put on a NAS Storage Library.
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Fri, Jan 8, 2010 at 5:28 AM, Frank Batschulat (Home)
frank.batschu...@sun.com wrote:
[snip]
Hey Mike, you're not the only victim of these strange CHKSUM errors, I hit
the same during my slightely different testing, where I'm NFS mounting an
entire, pre-existing remote file living
On Fri, Jan 8, 2010 at 9:11 AM, Mike Gerdts mger...@gmail.com wrote:
I've seen similar errors on Solaris 10 in the primary domain and on a
M4000. Unfortunately Solaris 10 doesn't show the checksums in the
ereport. There I noticed a mixture between read errors and checksum
errors - and lots
On Fri, Jan 8, 2010 at 12:28 PM, Torrey McMahon tmcmah...@yahoo.com wrote:
On 1/8/2010 10:04 AM, James Carlson wrote:
Mike Gerdts wrote:
This unsupported feature is supported with the use of Sun Ops Center
2.5 when a zone is put on a NAS Storage Library.
Ah, ok. I didn't know
[removed zones-discuss after sending heads-up that the conversation
will continue at zfs-discuss]
On Mon, Jan 4, 2010 at 5:16 PM, Cindy Swearingen
cindy.swearin...@sun.com wrote:
Hi Mike,
It is difficult to comment on the root cause of this failure since
the several interactions
in a mirror setup.
Any help would be appreciated.
Thanks,
Mikko
--
Mikko Lammi | l...@lmmz.net | http://www.lmmz.net
--
Mike
Thanks for the response, Marion. I'm glad that I'm not the only one. :)
Message was edited by: mijohnst
.
--
Mike Gerdts
http://mgerdts.blogspot.com/
rocket scientists. :)
--
Mike Gerdts
http://mgerdts.blogspot.com/
Just thought I would let you all know that I followed what Alex suggested along
with what many of you pointed out and it worked! Here are the steps I followed:
1. Break root drive mirror
2. zpool export filesystem
3. run the command to start MPIOX and reboot the machine
4. zpool import
I'm just wondering what some of you might do with your systems.
We have an EMC Clariion unit that I connect several sun machines to. I allow
the EMC to do it's hardware raid5 for several luns and then I stripe them
together. I considered using raidz and just configuring the EMC as a JBOD,
errors: No known data errors
r...@soltrain19# zlogin osol uptime
5:31pm up 1 min(s), 0 users, load average: 0.69, 0.38, 0.52
--
Mike Gerdts
http://mgerdts.blogspot.com/
On Tue, Dec 22, 2009 at 8:02 PM, Mike Gerdts mger...@gmail.com wrote:
I've been playing around with zones on NFS a bit and have run into
what looks to be a pretty bad snag - ZFS keeps seeing read and/or
checksum errors. This exists with S10u8 and OpenSolaris dev build
snv_129. This is likely
compressratio.
If I disable compression and enable dedup, does it count deduplicated
blocks of zeros toward the dedupratio?
--
Mike Gerdts
http://mgerdts.blogspot.com/
Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Model: Hitachi HTS5425 Revision: Serial No: 080804BB6300HCG Size:
160.04GB 160039305216 bytes
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0
...
That /should/ be printed on the disk somewhere.
--
Mike Gerdts
http
be done.
On Wed, Dec 9, 2009 at 5:16 AM, Alexander J. Maidak ajmai...@mchsi.comwrote:
On Tue, 2009-12-08 at 09:15 -0800, Mike wrote:
I had a system that I was testing zfs on using EMC LUNs to create a
striped zpool without using the multi-pathing software PowerPath. Of course
a storage
Alex, thanks for the info. You made my heart stop a little when reading your
problem with PowerPath, but MPxIO seems like it might be a good option for me.
I'll will try that as well although I have not used it before. Thank you!
I had a system that I was testing zfs on using EMC LUNs to create a striped
zpool without using the multi-pathing software PowerPath. Of course a storage
emergency came up, so I lent this storage out for temp storage and we're still
using it. I'd like to add PowerPath to take advantage of the
Thanks Cindys for your input... I love your fear example too, but lucky for me
I have 10 years before I have to worry about that and hopefully we'll all be in
hovering bumper cars by then.
It looks like I'm going to have to create another test system and try the
recommendations given here... and
I'm sure it's been asked a thousand times, but is there any prospect of being
able to remove a vdev from a pool anytime soon?
Thanks!
--
Mike