On Wed, May 05, 2010 at 11:32:23PM -0400, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Robert Milkowski
if you can disable the ZIL and compare the performance to when it is enabled,
it will give you an estimate of
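For context, the usual way to disable the ZIL for such a test at the time was
the global zil_disable tunable (removed later by PSARC/2010/108, discussed
below). A minimal sketch, for benchmarking only:

  # Turn the ZIL off globally -- affects ALL pools; acknowledged synchronous
  # writes can be lost on a crash, so never do this on production data.
  $ echo zil_disable/W0t1 | mdb -kw
  # ... re-run the workload, compare throughput ...
  $ echo zil_disable/W0t0 | mdb -kw
  # Or persistently across reboots via /etc/system:
  #   set zfs:zil_disable = 1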
On Thu, May 6, 2010 at 2:06 AM, Richard Jahnel rich...@ellipseinc.com wrote:
I've googled this for a bit, but can't seem to find the answer.
What does compression bring to the party that dedupe doesn't cover already?
Compression will reduce the storage requirements for non-duplicate data.
As
On 6 maj 2010, at 08.17, Pasi Kärkkäinen wrote:
On Wed, May 05, 2010 at 11:32:23PM -0400, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Robert Milkowski
if you can disable the ZIL and compare the performance to
I'm unable to snapshot a dataset, receiving the error "dataset is
busy". Google and some bug reports suggest it's from a ZIL that hasn't
been completely replayed, and that mounting and unmounting the dataset
will fix it. Which is great, except it's a zvol.
Any other way to fix it? There's no data
On Thu, May 6, 2010 at 1:31 AM, Brandon High bh...@freaks.com wrote:
Any other way to fix it? There's no data in the zvol that I can't
easily reproduce if it needs to be destroyed.
I did a rollback to the most recent snapshot, which seems to have worked.
-B
--
Brandon High : bh...@freaks.com
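For reference, the rollback described above is a one-liner (dataset and
snapshot names illustrative):

  $ zfs rollback tank/myvol@latest      # discard changes since the snapshot
  $ zfs rollback -r tank/myvol@older    # -r also destroys more recent snapshots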
With the put back of:
[PSARC/2010/108] zil synchronicity
zfs datasets now have a new 'sync' property to control synchronous behaviour.
The zil_disable tunable to turn synchronous requests into asynchronous
requests (disable the ZIL) has been removed. For systems that use that switch
on upgrade
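For anyone trying it: sync is an ordinary per-dataset property with three
values (dataset name illustrative; value names per PSARC/2010/108):

  $ zfs set sync=standard tank/fs   # default: POSIX-style synchronous semantics
  $ zfs set sync=always tank/fs     # every transaction committed synchronously
  $ zfs set sync=disabled tank/fs   # sync requests ignored (old zil_disable behaviour)
  $ zfs get sync tank/fs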
Please find this thread for further info about this topic :
http://www.opensolaris.org/jive/thread.jspa?threadID=120824&start=0&tstart=0
In short, ZFS doesn't support thin reclamation today, although there is an RFE
open to implement it somewhere in the future.
Regards,
sendai
Based on the comments, some people say nay, some say yea, so I decided
to give it a spin and see how I get on.
To make my mirror bootable I followed instructions posted here :
http://www.taiter.com/blog/2009/04/opensolaris-200811-adding-disk.html
I plan to do a quick write up myself of my
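For those who don't want to chase the link, the procedure boils down to
roughly the following (device names illustrative; installgrub is the x86
step, SPARC uses installboot instead):

  $ zpool attach rpool c0t0d0s0 c0t1d0s0   # attach the second disk to the root pool
  $ zpool status rpool                     # wait for the resilver to finish
  $ installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0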
On Thu, May 06, 2010 at 11:28:37AM +0100, Robert Milkowski wrote:
With the put back of:
[PSARC/2010/108] zil synchronicity
zfs datasets now have a new 'sync' property to control synchronous
behaviour.
The zil_disable tunable to turn synchronous requests into asynchronous
requests
On 06/05/2010 12:24, Pawel Jakub Dawidek wrote:
I read that this property is not inherited and I can't see why.
If what I read is up-to-date, could you tell why?
It is inherited, this changed as a result of the PSARC review.
--
Darren J Moffat
On 5/05/10 10:42 PM, Bruno Sousa wrote:
Hi all,
I have faced yet another kernel panic that seems to be related to the mpt
driver.
This time I was trying to add a new disk to a running system (snv_134)
and this new disk was not being detected... following a tip, I ran
lsitool to reset the bus and
On Wed, 2010-05-05 at 09:45 -0600, Evan Layton wrote:
No, that doesn't appear to be an EFI label. So it appears that ZFS
is seeing something there that it's interpreting as an EFI label.
Then the command to set the bootfs property is failing due to that.
To restate the problem, the BE can't be
On 06/05/2010 12:24, Pawel Jakub Dawidek wrote:
I read that this property is not inherited and I can't see why.
If what I read is up-to-date, could you tell why?
It is inherited. Sorry for the confusion, but there was a discussion about
whether it should or should not be inherited, then we propose
On 06/05/2010 13:12, Robert Milkowski wrote:
On 06/05/2010 12:24, Pawel Jakub Dawidek wrote:
I read that this property is not inherited and I can't see why.
If what I read is up-to-date, could you tell why?
It is inherited. Sorry for the confusion, but there was a discussion about
whether it should or
On Thu, May 06, 2010 at 01:15:41PM +0100, Robert Milkowski wrote:
On 06/05/2010 13:12, Robert Milkowski wrote:
On 06/05/2010 12:24, Pawel Jakub Dawidek wrote:
I read that this property is not inherited and I can't see why.
If what I read is up-to-date, could you tell why?
It is inherited.
From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
In neither case do you have data or filesystem corruption.
ZFS probably is still OK, since it's designed to handle this (?),
but the data can't be OK if you lose 30 secs of writes.. 30 secs of
writes
that have been ack'd as done to the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ragnar Sundblad
But if you have an application, protocol and/or user that demands
or expects persistent storage, disabling the ZIL of course could be fatal
in case of a crash. Examples are mail
On May 6, 2010, at 8:34 AM, Edward Ned Harvey solar...@nedharvey.com
wrote:
From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
In neither case do you have data or filesystem corruption.
ZFS probably is still OK, since it's designed to handle this (?),
but the data can't be OK if you lose 30
On Wed, 5 May 2010, Edward Ned Harvey wrote:
In the L2ARC (cache) there is no ability to mirror, because cache device
removal has always been supported. You can't mirror a cache device, because
you don't need it.
How do you know that I don't need it? The ability seems useful to me.
Bob
--
On 06 May, 2010 - Bob Friesenhahn sent me these 0,6K bytes:
On Wed, 5 May 2010, Edward Ned Harvey wrote:
In the L2ARC (cache) there is no ability to mirror, because cache device
removal has always been supported. You can't mirror a cache device, because
you don't need it.
How do you know
Hi all,
It seems like the market has yet another type of SSD device, this time a
USB 3.0 portable SSD device by OCZ.
Going on the specs, it seems to me that if this device has a good price
it might be quite useful for caching purposes on ZFS-based storage.
Take a look at
On 06/05/2010 15:31, Tomas Ögren wrote:
On 06 May, 2010 - Bob Friesenhahn sent me these 0,6K bytes:
On Wed, 5 May 2010, Edward Ned Harvey wrote:
In the L2ARC (cache) there is no ability to mirror, because cache device
removal has always been supported. You can't mirror a cache
On Wed, May 5, 2010 at 8:47 PM, Michael Sullivan
michael.p.sulli...@mac.com wrote:
While it explains how to implement these, there is no information regarding
failure of a device in a striped L2ARC set of SSDs. I have been hard-pressed
to find this information anywhere, short of testing it
Everyone,
Thanks for the help. I really appreciate it.
Well, I actually walked through the source code with an associate today and
we worked out how things behave.
It appears that the L2ARC is just assigned in round-robin fashion. If a device
goes offline, then it goes to
Hi Michael,
What makes you think striping the SSDs would be faster than round-robin?
-marc
On Thu, May 6, 2010 at 1:09 PM, Michael Sullivan michael.p.sulli...@mac.com
wrote:
Everyone,
Thanks for the help. I really appreciate it.
Well, I actually walked through the source code with an
Hi Marc,
Well, if you are striping over multiple devices then your I/O should be spread
over the devices and you should be reading them all simultaneously rather than
just accessing a single device. Traditional striping would give 1/n
performance improvement rather than 1/1 where n is the
This is interesting, but what about iSCSI volumes for virtual machines?
Compress or de-dupe? Assuming the virtual machine was made from a clone of the
original iSCSI or a master iSCSI volume.
Does anyone have any real-world data on this? I would think the iSCSI volumes
would diverge quite a bit
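For anyone wanting real numbers, both properties can be set per zvol, which
makes a side-by-side comparison easy (names and size illustrative):

  $ zfs create -V 20g -o compression=on tank/vm-compress
  $ zfs create -V 20g -o dedup=on tank/vm-dedup
  $ zfs get compressratio,used tank/vm-compress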
On Thu, May 6, 2010 at 1:18 AM, Edward Ned Harvey solar...@nedharvey.comwrote:
From the information I've been reading about the loss of a ZIL device,
What the heck? Didn't I just answer that question?
I know I said this is answered in the ZFS Best Practices Guide.
Hi.
How can I get this info?
Thanks.
On Fri, 7 May 2010, Michael Sullivan wrote:
Well, if you are striping over multiple devices then your I/O should be spread
over the devices and you should be reading them all simultaneously rather than
just accessing a single device. Traditional striping would give 1/n performance
improvement
Hi Bob,
You can review the latest Solaris 10 and OpenSolaris release dates here:
http://www.oracle.com/ocom/groups/public/@ocom/documents/webcontent/059542.pdf
Solaris 10 release, CY2010
OpenSolaris release, 1st half CY2010
Thanks,
Cindy
On 05/05/10 18:03, Bob Friesenhahn wrote:
On Wed, 5
On Thu, May 6, 2010 at 11:08 AM, Michael Sullivan
michael.p.sulli...@mac.com wrote:
The round-robin access I am referring to is the way the L2ARC vdevs appear
to be accessed. So, any given object will be taken from a single device
rather than from several devices simultaneously, thereby
On Thu, May 6, 2010 at 11:31 AM, eXeC001er execoo...@gmail.com wrote:
How can i get this info?
$ man zpool
$ zpool list
NAME   SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
rpool  111G   15.5G  95.5G  13%  1.00x  ONLINE  -
tank   7.25T  3.16T  4.09T  43%  1.12x  ONLINE  -
$ zpool get
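As an aside, the pool-wide ratio behind that DEDUP column is also exposed as
a property (pool name from the listing above):

  $ zpool get dedupratio tank
  NAME  PROPERTY    VALUE  SOURCE
  tank  dedupratio  1.12x  -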
On 06/05/2010 19:08, Michael Sullivan wrote:
Hi Marc,
Well, if you are striping over multiple devices then your I/O should be
spread over the devices and you should be reading them all
simultaneously rather than just accessing a single device.
Traditional striping would give 1/n performance
On Fri, 2010-05-07 at 03:10 +0900, Michael Sullivan wrote:
This is interesting, but what about iSCSI volumes for virtual machines?
Compress or de-dupe? Assuming the virtual machine was made from a clone of
the original iSCSI or a master iSCSI volume.
Does anyone have any real world data
On 5/6/10 5:28 AM, Robert Milkowski wrote:
sync=disabled
Synchronous requests are disabled. File system transactions
only commit to stable storage on the next DMU transaction group
commit which can be many seconds.
Is there a way (short of DTrace) to write() some data and get notified
when
On Thu, May 06, 2010 at 03:30:05PM -0500, Wes Felter wrote:
On 5/6/10 5:28 AM, Robert Milkowski wrote:
sync=disabled
Synchronous requests are disabled. File system transactions
only commit to stable storage on the next DMU transaction group
commit which can be many seconds.
Is there a
Hi--
Even though the dedup property can be set on a file system basis,
dedup space usage is accounted for at the pool level by using the
zpool list command.
My non-expert opinion is that it would be nearly impossible to report
space usage for dedup and non-dedup file systems at the file system
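If pool-level numbers are enough, zdb can also dump the dedup table
statistics (pool name illustrative; zdb output is diagnostic and not a
stable interface):

  $ zdb -D tank    # DDT summary and overall dedup ratio
  $ zdb -DD tank   # adds a histogram of block reference counts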
On May 6, 2010, at 11:08 AM, Michael Sullivan wrote:
Well, if you are striping over multiple devices then your I/O should be spread
over the devices and you should be reading them all simultaneously rather
than just accessing a single device. Traditional striping would give 1/n
performance
On 06/05/2010 21:45, Nicolas Williams wrote:
On Thu, May 06, 2010 at 03:30:05PM -0500, Wes Felter wrote:
On 5/6/10 5:28 AM, Robert Milkowski wrote:
sync=disabled
Synchronous requests are disabled. File system transactions
only commit to stable storage on the next DMU transaction
Hi Gary,
I would not remove this line in /etc/system.
We have been combating this bug for a while now on our ZFS file system
running JES Commsuite 7.
I would be interested in finding out how you were able to pinpoint the
problem.
We seem to have no worries with the system currently, but
On Fri, May 7, 2010 at 4:57 AM, Brandon High bh...@freaks.com wrote:
I believe that the L2ARC behaves the same as a pool with multiple
top-level vdevs. It's not typical striping, where every write goes to
all devices. Writes may go to only one device, or may avoid a device
entirely while using
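To make that concrete: cache devices are added as independent devices, not
mirrors, and a dead one just means those reads fall through to the main pool
(device names illustrative):

  $ zpool add tank cache c2t0d0 c2t1d0   # two independent L2ARC devices
  $ zpool iostat -v tank 5               # watch per-device cache activity
  # 'zpool add tank cache mirror ...' is rejected; cache vdevs can't be mirrored.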