On 5 January 2011 13:26, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
One comment about etiquette though:
I'll certainly bear your comments in mind in future, however I'm not
sure what happened to the subject, as I used the interface at
Hi Edward,
Thank you for the feedback. All makes sense.
To clarify, yes, I snapshotted the VM within ESXi, not the filesystems within
the pool. Unfortunately, because of my misunderstanding of how ESXi
snapshotting works, I'm now left without the option of investigating whether
the replaced
Hi,
I have some strange goings-on with my VM of Solaris Express 11, and I
hope someone can help.
It shares out other virtual machine files for use in ESXi 4.0 (it,
too, runs in there)
I had two disks inside the VM - one for rpool and one for 'vmpool'.
All was fine.
vmpool has some deduped data.
Another "hang on zpool import" thread, I'm afraid, because I don't seem to have
observed any great successes in the others and I hope there's a way of saving
my data ...
In March, using OpenSolaris build 134, I created a zpool, some zfs filesystems,
enabled dedup on them, moved content into them
Absolutely spot on George. The import with -N took seconds.
Working on the assumption that esx_prod is the one with the problem, I bumped
that to the bottom of the list. Each mount was done in a second:
# zfs mount zp
# zfs mount zp/nfs
# zfs mount zp/nfs/esx_dev
# zfs mount zp/nfs/esx_hedgehog
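For the archives, the import step those mounts followed was just the pool import with mounting disabled (-N), as George suggested:

```shell
# -N imports the pool but skips mounting every filesystem, so a single
# problem dataset can't wedge 'zpool import' itself; the datasets are
# then mounted one at a time, with the suspect one left until last.
zpool import -N zp
```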
That's a good idea, thanks. I get the feeling the remainder won't be zero,
which will back up the misalignment theory. After a bit more digging, it seems
the problem is just an NTFS issue and can be addressed irrespective of
underlying storage system.
I think I'm going to try the process in
Please excuse my pitiful example. :-)
I meant to say *less* overlap between virtual machines, as clearly
block AABB occurs in both.
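A toy way to picture it (plain shell, nothing ZFS-specific): four logical blocks across the two machines, with AABB shared, dedupe down to three on disk:

```shell
# Two toy "VM images" of two blocks each, sharing block AABB.
# Dedup keeps one copy of each distinct block: 4 logical -> 3 stored.
printf '%s\n' AABB CCDD AABB EEFF | sort -u | wc -l
```

That prints 3: the three distinct blocks actually stored.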
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Chris Murray
Sent: 18 March 2010 18
I'm trying to import a pool into b132 which once had dedup enabled, after the
machine was shut down with an init 5.
However, the import hangs the whole machine and I eventually get kicked off my
SSH sessions. As it's a VM, I can see that processor usage jumps up to near
100% very quickly, and
-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Chris Murray
Sent: 16 December 2009 17:19
To: Cyril Plisko; Andrey Kuzmin
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Troubleshooting dedup performance
So if the ZFS checksum is set to fletcher4, troubles are due to the
calculation of two different checksums?
Thanks,
Chris
-----Original Message-----
From: cyril.pli...@gmail.com [mailto:cyril.pli...@gmail.com] On Behalf
Of Cyril Plisko
Sent: 16 December 2009 17:09
To: Andrey Kuzmin
Cc: Chris Murray; zfs-discuss@opensolaris.org
Subject: Re: [zfs
I knew it would be something simple!! :-)
Now 3.63TB, as expected, and no need to export and import either! Thanks
Richard, that's done the trick.
Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Cheers, I did try that, but still got the same total on import - 2.73TB
I even thought I might have just made a mistake with the numbers, so I made a
sort of 'quarter scale model' in VMware and OSOL 2009.06, with 3x250G and
1x187G. That gave me a size of 744GB, which is *approx* 1/4 of what I
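For what it's worth, the quarter-scale number fits the theory that every raidz member gets counted at the smallest disk's size (call it ~186G usable of the 187G drive, an assumption on my part) and that zpool list reports raw capacity including parity:

```shell
# 4 raidz members, each counted at the smallest member's ~186G:
echo $((4 * 186))   # 744, matching the 744GB the model pool reported
```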
I've had an interesting time with this over the past few days ...
After the resilver completed, I had the message "no known data errors" in a
zpool status.
I guess the title of my post should have been "how permanent are permanent
errors?". Now, I don't know whether the action of completing the
Ok, the resilver has been restarted a number of times over the past few days
due to two main issues - a drive disconnecting itself, and power failure. I
think my troubles are 100% down to these environmental factors, but would like
some confidence that after the resilver has completed, if it
I can flesh this out with detail if needed, but a brief chain of events is:
1. RAIDZ1 zpool with drives A, B, C, D (I don't have access to see original
drive names)
2. New disk E. Replaced A with E.
3. Part way through resilver, drive D was 'removed'
4. 700+ persistent errors detected, and lots
Thanks David. Maybe I misunderstand how a replace works? When I added disk E,
and used 'zpool replace [A] [E]' (still can't remember those drive names), I
thought that disk A would still be part of the pool, and read from in order to
build the contents of disk E? Sort of like a safer way of
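For reference, that is how replace behaves: disk A and disk E are joined in a temporary 'replacing' vdev, and A keeps serving reads until the resilver onto E completes. With placeholder names:

```shell
# 'zpool replace' attaches the new disk alongside the old one in a
# temporary "replacing" vdev; the old disk stays in the pool and is
# read from until the resilver completes, then detaches automatically.
# (tank, c1t1d0 and c1t5d0 are placeholder names.)
zpool replace tank c1t1d0 c1t5d0
zpool status tank   # shows a "replacing" group holding both disks
```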
Nico, what is a zero-link file, and how would I go about finding whether I have
one? You'll have to bear with me, I'm afraid, as I'm still building my Solaris
knowledge at the minute - I was brought up on Windows. I use Solaris for my
storage needs now though, and slowly improving on my
That looks like it indeed. Output of zdb -

    Object  lvl   iblk   dblk   lsize   asize   type
         9    5    16K     8K    150G   14.0G   ZFS plain file
                                  264   bonus   ZFS znode
    path    ???<object#9>
Thanks for the help in clearing this up - satisfies my
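For anyone landing here from a search: output in that shape comes from a verbose zdb object dump, along these lines (pool/dataset name is a placeholder; object 9 is from this thread):

```shell
# Dump object 9 of the dataset in verbose mode; an unlinked (zero-link)
# file shows a path of ???<object#9> instead of a real path.
zdb -dddd pool/dataset 9
```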
I don't have quotas set, so I think I'll have to put this down to some sort of
bug. I'm on SXCE 105 at the minute, ZFS version is 3, but zpool is version 13
(could be 14 if I upgrade). I don't have everything backed-up so won't do a
zpool upgrade just at the minute. I think when SXCE 120 is
Accidentally posted the below earlier against ZFS Code, rather than ZFS
Discuss.
My ESXi box now uses ZFS filesystems which have been shared over NFS. Spotted
something odd this afternoon - a filesystem which I thought didn't have any
files in it, weighs in at 14GB. Before I start deleting
Thanks Tim. Results are below:
# zfs list -t snapshot -r zp/nfs/esx_temp
no datasets available
# zfs get refquota,refreservation,quota,reservation zp/nfs/esx_temp
NAME             PROPERTY        VALUE   SOURCE
zp/nfs/esx_temp  refquota        none    default
zp/nfs/esx_temp  refreservation
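One more thing worth checking here: `zfs list -o space` breaks the usage down by cause, which should narrow down where 14GB is hiding:

```shell
# Split the dataset's usage into snapshots (USEDSNAP), live data
# (USEDDS), refreservation (USEDREFRESERV) and children (USEDCHILD):
zfs list -o space zp/nfs/esx_temp
```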
Hello,
Hopefully a quick and easy permissions problem here, but I'm stumped and
quickly reached the end of my Unix knowledge.
I have a ZFS filesystem called fs/itunes on pool zp. In it, the iTunes
music folder contained a load of other folders - one for each artist.
During a resilver
The plot thickens ... I had a brainwave and tried accessing a 'missing' folder
with the following on Windows:
explorer \\mammoth\itunes\iTunes music\Dubfire
I can open files within it and can rename them too. So .. still looks like a
permissions problem to me, but in what way, I'm not quite
Thanks Mark. I ran the script and found references in the output to 'aclmode'
and 'aclinherit'. I had in the back of my mind that I've had to mess on with
ZFS ACLs in the past, aside from using chmod with the usual numeric values.
That's given me something to go on. I'll post to cifs-discuss
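For reference, the checks involved are along these lines (assuming the default mountpoint; zp/fs/itunes as in this thread):

```shell
# How the dataset maps chmod changes onto ACLs, and how ACLs inherit:
zfs get aclmode,aclinherit zp/fs/itunes

# Full NFSv4 ACL on a misbehaving folder (-V is the compact
# one-line-per-entry form; -d shows the directory itself):
ls -Vd "/zp/fs/itunes/iTunes music/Dubfire"
```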
Ok, used the development 2008.11 (b95) livecd earlier this morning to import
the pool, and it worked fine. I then rebooted back into Nexenta and all is
well. Many thanks for the help guys!
Chris
This message posted from opensolaris.org
Hi all,
I can confirm that this is fixed too. I ran into the exact same issue yesterday
after destroying a clone:
http://www.opensolaris.org/jive/thread.jspa?threadID=70459&tstart=0
I used the b95-based 2008.11 development livecd this morning and the pool is
now back up and running again after a
Hi all,
I have a RAID-Z zpool made up of 4 x SATA drives running on Nexenta 1.0.1
(OpenSolaris b85 kernel). It has on it some ZFS filesystems and a few volumes
that are shared to various windows boxes over iSCSI. On one particular iSCSI
volume, I discovered that I had mistakenly deleted some
Ah-ha! That certainly looks like the same issue Miles - well spotted! As it
happens, the zdb command failed with "out of memory -- generating core dump"
whereas all four dd's completed successfully.
I'm downloading snv96 right now - I'll install in the morning and post my
results both here, and
That's a good point - I'll try snv94 if I can get my hands on it - any idea
where the download for it is? I've been going round in circles and all I can
come up with are the variants of snv96 - CD, DVD (2 images), DVD (single
image). Maybe that's a sign I should give up for the night!
Chris
This process should work, but make sure you don't swap any cables around while
you replace a drive, or you'll run into the situation described in the
following thread, as I did:
http://www.opensolaris.org/jive/thread.jspa?threadID=48483&tstart=0
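To make that concrete (device names are placeholders; the point is to swap only the one drive and leave the rest cabled exactly as they were):

```shell
zpool offline tank c1t2d0    # take the outgoing disk out of service
# power down; swap ONLY that physical drive, same bay and cable; power up
zpool replace tank c1t2d0    # resilver onto the new disk in that slot
zpool status tank            # watch until the resilver completes
```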
Chris
About that issue, please check my post in:
http://www.opensolaris.org/jive/thread.jspa?threadID=48483tstart=0
Thanks - when I originally tried to replace the first drive, my intention was
to:
1. Move solaris box and drives
2. Power up to test it still works
3. Power down
4. Replace drive.
I
with a new WD
drive. Once the scrub completes, What do I do?
Many thanks,
Chris Murray
Thanks for the help guys - unfortunately the only hardware at my disposal just
at the minute is all 32 bit, so I'll just have to wait a while and fork out on
some 64-bit kit before I get the drives. I'm a home user so I'm glad I didn't
buy the drives and discover I couldn't use them without
Hi all,
I am experiencing an issue when trying to set up a large ZFS volume in
OpenSolaris build 74 and the same problem in Nexenta alpha 7. I have looked on
Google for the error and have found zero (yes, ZERO) results, so I'm quite
surprised! Please can someone help?
I am setting up a test