Hi, I have a ZFS pool backed by iSCSI volumes, and lately the filesystems have been disappearing a lot; the only thing that rectifies it is rebooting the machine.
Running zfs list, I don't get a list of the filesystems on the pool.
Running zpool status, I do get a list of the pool and the disks behind it.
I'm running
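A few things worth capturing the next time it wedges, before rebooting (a diagnostic sketch; "tank" stands in for the actual pool name):

# zpool status -v tank      (pool and vdev state, plus any logged errors)
# zpool list tank           (does the pool itself still report?)
# iscsiadm list target      (are the iSCSI sessions still up?)
# fmdump -eV | tail         (recent fault-management error events)

If zfs list hangs while zpool status still answers, the difference between those outputs, plus any iSCSI session drops in fmdump, is the first thing people will ask for.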
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of TianHong Zhao
There seem to be a few threads about zpool hangs; do we have a
workaround to resolve the hang issue without rebooting?
In my case, I have a pool with disks from external
Thanks for the reply.
This sounds like a serious issue if we have to reboot the machine in such cases; I'm
wondering if anybody is working on this.
BTW, the zpool failmode is set to continue, in my test case.
Tianhong Zhao
-----Original Message-----
From: Edward Ned Harvey
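For reference, the failmode setting can be checked and changed on a live pool (a minimal sketch; tank is a placeholder pool name):

# zpool get failmode tank
# zpool set failmode=continue tank

With failmode=continue, reads from the remaining healthy devices keep being served and new writes return EIO; failmode=wait (the default) blocks all I/O until the device returns, which looks exactly like a hang.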
On Tue, May 3, 2011 19:39, Rich Teer wrote:
I'm playing around with nearline backups using zfs send | zfs recv.
A full backup made this way takes quite a lot of time, so I was
wondering: after the initial copy, would using an incremental send
(zfs send -i) make the process much quicker because
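For what it's worth, the basic shape of the incremental cycle looks like this (a sketch; tank/fs and backup/fs are placeholder dataset names):

# zfs snapshot tank/fs@mon
# zfs send tank/fs@mon | zfs recv backup/fs                    (initial full copy)
# zfs snapshot tank/fs@tue
# zfs send -i tank/fs@mon tank/fs@tue | zfs recv backup/fs     (incremental)

The incremental stream contains only blocks changed between the two snapshots, so it is usually far smaller than the full stream; note the earlier snapshot must still exist on both sides, and the destination must be unmodified since it (or received with -F).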
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Frank Van Damme
Another dedup question. I just installed an SSD disk as L2ARC. This
is a backup server with 6 GB RAM (i.e., I don't often read the same data
again); basically it has a large
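One way to see whether the L2ARC is actually earning its keep is the arcstats kstats (a sketch; these are the standard arcstats counter names):

# kstat -p zfs::arcstats:l2_size
# kstat -p zfs::arcstats:l2_hits
# kstat -p zfs::arcstats:l2_misses

l2_size shows how much has been fed onto the SSD; if l2_hits stays near zero on a backup workload that rarely re-reads data, the L2ARC may not be buying much.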
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Rich Teer
Also related to this is a performance question. My initial test involved
copying a 50 MB ZFS file system to a new disk, which took 2.5 minutes
to complete. That strikes me as
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Rich Teer
Not such a silly question. :-) The USB1 port was indeed the source of
much of the bottleneck. The same 50 MB file system took only 8 seconds
to copy when I plugged the drive
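The numbers line up with the bus limits: USB 1.1 full speed is 12 Mbit/s, about 1.5 MB/s before protocol overhead, and 50 MB in 2.5 minutes works out to roughly 0.33 MB/s, squarely in USB1 territory; 50 MB in 8 seconds is about 6 MB/s, which is plausible for a faster port plus small-file overhead.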
There are a number of threads (this one[1] for example) that describe
memory requirements for deduplication. They're pretty high.
I'm trying to get a better understanding... on our NetApps we use 4K
block sizes with their post-process deduplication and get pretty good
dedupe ratios for VM
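On the ZFS side, the knobs for a comparable test are per-dataset properties (a sketch; tank/vm is a placeholder dataset):

# zfs set dedup=on tank/vm
# zfs set recordsize=4k tank/vm

Two caveats: recordsize only applies to files written after the change, and ZFS dedup is inline rather than post-process, so the DDT has to be consulted on every write, which is where the memory requirements discussed in this thread come from.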
On Wed, 4 May 2011, Edward Ned Harvey wrote:
I suspect you're using a junky 1G slow-as-dirt usb thumb drive.
Nope--unless an Iomega Prestige Desktop Hard Drive (containing a
Hitachi 7,200 RPM hard drive with 32 MB of cache) counts as a slow-as-dirt
USB thumb drive!
--
Rich Teer, Publisher
On Wed, 4 May 2011, Edward Ned Harvey wrote:
4G is also lightweight, unless you're not doing much of anything. No dedup,
no L2ARC, just simple pushing bits around. No services running... Just ssh
Yep, that's right. This is a repurposed workstation for use in my home network.
I don't
We have an X4540 running Solaris 11 Express snv_151a that has developed an
issue where its write performance is absolutely abysmal; even touching a file
takes over five seconds, both locally and remotely.
/pool1/data# time touch foo
real    0m5.305s
user    0m0.001s
sys     0m0.004s
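To see where those five seconds go, timestamped syscall tracing is a cheap first step (a sketch using stock Solaris tools):

# truss -d touch foo

truss -d prefixes each system call with a timestamp, so the one call that eats ~5 seconds stands out immediately and tells you whether the time is spent in the open/creat on the pool or somewhere else.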
On 5/4/2011 9:57 AM, Ray Van Dolson wrote:
There are a number of threads (this one[1] for example) that describe
memory requirements for deduplication. They're pretty high.
I'm trying to get a better understanding... on our NetApps we use 4K
block sizes with their post-process deduplication
On Wed, May 4 at 12:21, Adam Serediuk wrote:
Both iostat and zpool iostat show very little to zero load on the devices even
while blocking.
Any suggestions on avenues of approach for troubleshooting?
is 'iostat -en' error free?
--
Eric D. Mudama
edmud...@bounceswoosh.org
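For reference, the error summary is a one-liner (a sketch; device names will differ per system):

# iostat -en

The four leading columns are s/w (software), h/w (hardware), trn (transport) and tot (total) error counts per device; non-zero trn counts would point at the transport or cabling rather than ZFS itself.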
iostat doesn't show any high service times, and fsstat also shows low
throughput. Occasionally I can generate enough load that you do see some very
high asvc_t, but when that occurs the pool is performing as expected. As a
precaution I just added two extra drives to the zpool in case zfs was
On Tue, May 3, 2011 at 11:42 PM, Peter Jeremy
peter.jer...@alcatel-lucent.com wrote:
- Is the source pool heavily fragmented with lots of small files?
Peter,
We have some servers holding Xen VMs, and the setup was created to have a
default VM from which the others would be cloned, so the space
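For anyone following along, the golden-image pattern is just snapshot plus clone (a sketch; dataset names are placeholders):

# zfs snapshot tank/vm-gold@base
# zfs clone tank/vm-gold@base tank/vm-01

Clones share unmodified blocks with the origin snapshot, so each VM initially costs almost nothing; the flip side is that an incremental send of a heavily cloned, heavily written dataset can end up walking a lot of scattered changed blocks.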
On Wed, May 04, 2011 at 12:29:06PM -0700, Erik Trimble wrote:
On 5/4/2011 9:57 AM, Ray Van Dolson wrote:
There are a number of threads (this one[1] for example) that describe
memory requirements for deduplication. They're pretty high.
I'm trying to get a better understanding... on our
On Wed, May 4, 2011 at 12:29 PM, Erik Trimble erik.trim...@oracle.com wrote:
I suspect that NetApp does the following to limit their resource
usage: they presume the presence of some sort of cache that can be
dedicated to the DDT (and, since they also control the hardware, they can
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: Friday, April 29, 2011 12:49 AM
The lower bound of ARC size is c_min
# kstat -p zfs::arcstats:c_min
I see there is another character in the plot: c_max
c_max seems to be 80% of system RAM (at least on my systems).
I assume
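Both bounds can be read the same way, and c_max can be capped persistently if the ARC is crowding out other consumers (a sketch; the 4 GB value below is only an example):

# kstat -p zfs::arcstats:c_min
# kstat -p zfs::arcstats:c_max

In /etc/system:
set zfs:zfs_arc_max = 0x100000000

A reboot is required for /etc/system changes to take effect.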
Dedup is disabled (confirmed to be). Doing some digging, it looks like this is a
very similar issue to
http://forums.oracle.com/forums/thread.jspa?threadID=2200577&tstart=0.
On May 4, 2011, at 2:26 PM, Garrett D'Amore wrote:
My first thought is dedup... perhaps you've got dedup enabled and the
On 5/4/2011 2:54 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 12:29:06PM -0700, Erik Trimble wrote:
(2) Block size: a 4k block size will yield better dedup than a 128k
block size, presuming reasonable data turnover. This is inherent, as
any single bit change in a block will make it
On Wed, May 04, 2011 at 02:55:55PM -0700, Brandon High wrote:
On Wed, May 4, 2011 at 12:29 PM, Erik Trimble erik.trim...@oracle.com wrote:
I suspect that NetApp does the following to limit their resource
usage: they presume the presence of some sort of cache that can be
dedicated
On Wed, May 04, 2011 at 03:49:12PM -0700, Erik Trimble wrote:
On 5/4/2011 2:54 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 12:29:06PM -0700, Erik Trimble wrote:
(2) Block size: a 4k block size will yield better dedup than a 128k
block size, presuming reasonable data turnover. This
On May 4, 2011, at 4:16 PM, Victor Latushkin wrote:
Try
echo metaslab_debug/W1 | mdb -kw
If it does not help, reset it back to zero
echo metaslab_debug/W0 | mdb -kw
That appears to have resolved the issue! Within seconds of making the change,
performance has increased by an order of
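If that tunable needs to survive a reboot, the usual /etc/system route should work, assuming metaslab_debug is a variable in the zfs module on this build (worth verifying first with: echo metaslab_debug/D | mdb -k):

set zfs:metaslab_debug = 1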
On 5/4/2011 4:14 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 02:55:55PM -0700, Brandon High wrote:
On Wed, May 4, 2011 at 12:29 PM, Erik Trimble erik.trim...@oracle.com wrote:
I suspect that NetApp does the following to limit their resource
usage: they presume the presence of
On Wed, May 4, 2011 at 6:36 PM, Erik Trimble erik.trim...@oracle.com wrote:
On 5/4/2011 4:14 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 02:55:55PM -0700, Brandon High wrote:
On Wed, May 4, 2011 at 12:29 PM, Erik Trimble erik.trim...@oracle.com wrote:
I suspect that NetApp
On 5/4/2011 4:17 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 03:49:12PM -0700, Erik Trimble wrote:
On 5/4/2011 2:54 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 12:29:06PM -0700, Erik Trimble wrote:
(2) Block size: a 4k block size will yield better dedup than a 128k
block size,
On 5/4/2011 4:44 PM, Tim Cook wrote:
On Wed, May 4, 2011 at 6:36 PM, Erik Trimble erik.trim...@oracle.com wrote:
On 5/4/2011 4:14 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 02:55:55PM -0700, Brandon High wrote:
On Wed, May 4,
On Wed, May 04, 2011 at 04:51:36PM -0700, Erik Trimble wrote:
On 5/4/2011 4:44 PM, Tim Cook wrote:
On Wed, May 4, 2011 at 6:36 PM, Erik Trimble erik.trim...@oracle.com
wrote:
On 5/4/2011 4:14 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 02:55:55PM
On Wed, May 4, 2011 at 2:25 PM, Giovanni Tirloni gtirl...@sysdroid.com wrote:
The problem we've started seeing is that a zfs send -i is taking hours to
send a very small amount of data (e.g. 20 GB in 6 hours), while a full zfs send
transfers everything faster than the incremental (40-70 MB/s).
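One way to split the problem is to time the send side by itself, taking the network and the receiving pool out of the picture (a sketch; snapshot names are placeholders):

# time zfs send -i tank/fs@a tank/fs@b > /dev/null

If that alone takes hours, the cost is in walking the changed blocks on the source pool (fragmentation, lots of small files); if it's fast, look at the pipe and the receive side instead.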
On Wed, May 4, 2011 at 4:36 PM, Erik Trimble erik.trim...@oracle.com wrote:
If so, I'm almost certain NetApp is doing post-write dedup. That way, the
strictly controlled max FlexVol size helps with keeping the resource limits
down, as it will be able to round-robin the post-write dedup to each
On Wed, May 4, 2011 at 6:51 PM, Erik Trimble erik.trim...@oracle.com wrote:
On 5/4/2011 4:44 PM, Tim Cook wrote:
On Wed, May 4, 2011 at 6:36 PM, Erik Trimble erik.trim...@oracle.com wrote:
On 5/4/2011 4:14 PM, Ray Van Dolson wrote:
On Wed, May 04, 2011 at 02:55:55PM -0700, Brandon High
On 5/4/2011 5:11 PM, Brandon High wrote:
On Wed, May 4, 2011 at 4:36 PM, Erik Trimble erik.trim...@oracle.com wrote:
If so, I'm almost certain NetApp is doing post-write dedup. That way, the
strictly controlled max FlexVol size helps with keeping the resource limits
down, as it will be able to
On 05/03/11 22:45, Rich Teer wrote:
True, but the SB1000 only supports 2GB of RAM IIRC! I'll soon be
Actually, you can get up to 16 GB of RAM in an SB1000 (or SB2000). The 4 GB
DIMMs are most likely not too common; however, the 1 GB and 2 GB DIMMs seem
to be common. At one time Dataram and maybe
This is a summary of a much longer discussion, "Dedup and L2ARC memory
requirements (again)".
Sorry, even this summary is long. But the results vary enormously based on
individual usage, so any rule-of-thumb metric that has been bouncing
around on the internet is simply not sufficient. You need to go
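As a rough illustration of why the numbers get big, assuming ~320 bytes of core per DDT entry (the exact figure varies by build):

1 TiB of unique data at   4 KiB/block = 2^28 entries x 320 B ~= 80 GiB of DDT
1 TiB of unique data at 128 KiB/block = 2^23 entries x 320 B ~= 2.5 GiB of DDT

which is why the 4K-block NetApp comparison upthread is so much more memory-hungry on ZFS than the default 128K recordsize.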
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Erik Trimble
ZFS's problem is that it needs ALL the resources for EACH pool ALL the
time, and can't really share them well if it expects to keep performance
from tanking... (no pun intended)
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ray Van Dolson
Are any of you out there using dedupe ZFS file systems to store VMware
VMDK (or any VM tech. really)? Curious what recordsize you use and
what your hardware specs /
On Wed, May 4, 2011 at 10:15 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Erik Trimble
ZFS's problem is that it needs ALL the resources for EACH pool ALL
On Wed, May 4, 2011 at 10:23 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ray Van Dolson
Are any of you out there using dedupe ZFS file systems to store
Good summary, Ned. A couple of minor corrections.
On 5/4/2011 7:56 PM, Edward Ned Harvey wrote:
This is a summary of a much longer discussion Dedup and L2ARC memory
requirements (again)
Sorry even this summary is long. But the results vary enormously based on
individual usage, so any rule of
From: Tim Cook [mailto:t...@cook.ms]
ZFS's problem is that it needs ALL the resources for EACH pool ALL the
time, and can't really share them well if it expects to keep performance
from tanking... (no pun intended)
That's true, but on the flipside, if you don't have adequate resources
From: Tim Cook [mailto:t...@cook.ms]
That's patently false. VM images are the absolute best use-case for dedup
outside of backup workloads. I'm not sure who told you/where you got the
idea that VM images are not ripe for dedup, but it's wrong.
Well, I got that idea from this list. I said