Re: [zfs-discuss] ZFS Distro Advice

2013-03-05 Thread Robert Milkowski
) builds > with checksum=sha256 and compression!=off. AFAIK, Solaris ZFS will COW > the blocks even if their content is identical to what's already there, > causing the snapshots to diverge. > > See https://www.illumos.org/issues/3236 for details. > This is in

Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Robert Milkowski
> > Robert Milkowski wrote: > > > > Solaris 11.1 (free for non-prod use). > > > > But a ticking bomb if you use a cache device. It's been fixed in SRU (although this is only for customers with a support contract - still, will be in 11.2 as well). Then, I

Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Robert Milkowski
Solaris 11.1 (free for non-prod use). From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tiernan OToole Sent: 25 February 2013 14:58 To: zfs-discuss@opensolaris.org Subject: [zfs-discuss] ZFS Distro Advice Good morning all. My home NA

Re: [zfs-discuss] RFE: Un-dedup for unique blocks

2013-01-29 Thread Robert Milkowski
nd not in open, while if Oracle does it they are bad? Isn't it at least a little bit being hypocritical? (bashing Oracle and doing sort of the same) -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensol

Re: [zfs-discuss] RFE: Un-dedup for unique blocks

2013-01-29 Thread Robert Milkowski
> > It also has a lot of performance improvements and general bug fixes > in > > the Solaris 11.1 release. > > Performance improvements such as? Dedup'ed ARC for one. 0 block automatically "dedup'ed" in-memory. Improvements to ZIL performance. Zero-copy

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2013-01-04 Thread Robert Milkowski
bug fixes by Oracle that Illumos is not getting (lack of resource, limited usage, etc.). -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-04 Thread Robert Milkowski
> Personally, I'd recommend putting a standard Solaris fdisk > partition on the drive and creating the two slices under that. Why? In most cases giving zfs an entire disk is the best option. I wouldn't bother with any manual partitioning. -- Robert Milkowski http://mi
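The whole-disk advice above can be sketched as follows (pool and device names are made up for illustration; with no slice suffix such as `s0`, ZFS labels the disk itself and can enable the drive's write cache):

```shell
# Give ZFS the entire disk - no fdisk partition, no slices:
zpool create tank c0t1d0

# The manual-slicing alternative being argued against would look like:
#   zpool create tank c0t1d0s0
```

The difference matters because on a whole disk ZFS knows no other consumer shares the device, so it can safely turn on the disk's write cache.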

Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-26 Thread Robert Milkowski
- 24x 2.5" disks in front, another 2x 2.5" in rear, Sandy Bridge as well. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-22 Thread Robert Milkowski
contract though. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] zvol access rights - chown zvol on reboot / startup / boot

2012-11-16 Thread Robert Milkowski
No, there isn't any other way to do it currently. The SMF approach is probably the best option for the time being. I think that there should be a couple of other properties for zvols where permissions could be stated. Best regards, Robert Milkowski http://milek.blogspot.com From: zfs-di
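The SMF workaround mentioned above boils down to a start method that re-applies ownership to the zvol device nodes after each boot. A minimal sketch (pool, volume, and user names are hypothetical):

```shell
#!/bin/sh
# Start method for a hypothetical svc:/site/zvol-perms service:
# zvol device nodes are recreated at boot, so ownership set with chown
# does not survive a reboot and must be reapplied here.
chown oracle:dba /dev/zvol/rdsk/tank/oradata
chown oracle:dba /dev/zvol/dsk/tank/oradata
```

The script would be wired in as the `start/exec` method of a simple transient SMF service so it runs once the devices exist.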

Re: [zfs-discuss] ARC de-allocation with large ram

2012-10-22 Thread Robert Milkowski
set to 1 after the cache size is decreased, and if it stays that way. The fix is in one of the SRUs and I think it should be in 11.1 I don't know if it was fixed in Illumos or even if Illumos was affected by this at all. -- Robert Milkowski http://milek.blogspot.com > -Original

Re: [zfs-discuss] encfs on top of zfs

2012-07-31 Thread Robert Milkowski
dup because you will shrink the average record > size and balloon the memory usage). Can you expand a little bit more here? Dedup+compression works pretty well actually (not counting "standard" problems with current dedup - compression or no

Re: [zfs-discuss] NFS asynchronous writes being written to ZIL

2012-06-14 Thread Robert Milkowski
nly sync writes will go to zil right away (and not always; see logbias, etc.) and to arc to be committed later to a pool when txg closes. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Resilver restarting several times

2012-05-11 Thread Robert Milkowski
night I rebooted the machine into single-user mode, to rule out > zones, crontabs and networked abusers, but I still get resilvering resets > every > now and then, about once an hour. > > I'm now trying a run with all zfs datasets unmounted, hope that helps > somew

Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-11 Thread Robert Milkowski
own/HDD19/disk ONLINE 0 0 0 /dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD17/disk ONLINE 0 0 0 /dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD15/disk ONLINE 0 0 0 errors: No known data errors Best regards, Robert

Re: [zfs-discuss] cluster vs nfs (was: Re: ZFS on Linux vs FreeBSD)

2012-04-25 Thread Robert Milkowski
And he will still need an underlying filesystem like ZFS for them :) > -Original Message- > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Nico Williams > Sent: 25 April 2012 20:32 > To: Paul Archer > Cc: ZFS-Discuss mailing list >

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-29 Thread Robert Milkowski
referring to dedup efficiency which with lower recordsize values should improve dedup ratios (although it will require more memory for ddt). From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Brad Diggs Sent: 29 December 2011 15:55 To: Robert

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-16 Thread Robert Milkowski
p, however in pre Solaris 11 GA (and in Illumos) you would end up with 2x copies of blocks in ARC cache, while in S11 GA ARC will keep only 1 copy of all blocks. This can make a big difference if there are even more than just 2x files being dedupped and you need arc memory to cache other data as well. -- Robert Milkowski

Re: [zfs-discuss] zfs sync=disabled property

2011-11-11 Thread Robert Milkowski
> disk. This behavior is what makes NFS over ZFS slow without a slog: NFS does > everything O_SYNC by default, No, it doesn't. However, VMware by default issues all writes as SYNC.

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-10 Thread Robert Milkowski
. But I have enough memory and such a workload that I see little physical reads going on. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-08 Thread Robert Milkowski
On 01/ 7/11 09:02 PM, Pawel Jakub Dawidek wrote: On Fri, Jan 07, 2011 at 07:33:53PM +, Robert Milkowski wrote: Now what if block B is a meta-data block? Metadata is not deduplicated. Good point but then it depends on a perspective. What if you you are storing lots of VMDKs? One

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-07 Thread Robert Milkowski
at dedup or not all the other possible cases of data corruption are there anyway, adding yet another one might or might not be acceptable. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-06 Thread Robert Milkowski
cting duplicate blocks. I don't believe that fletcher is still allowed for dedup - right now it is only sha256. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] A few questions

2011-01-04 Thread Robert Milkowski
On 01/ 4/11 11:35 PM, Robert Milkowski wrote: On 01/ 3/11 04:28 PM, Richard Elling wrote: On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote: On 12/26/10 05:40 AM, Tim Cook wrote: On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling <richard.ell...@gmail.com> wrote: The

Re: [zfs-discuss] A few questions

2011-01-04 Thread Robert Milkowski
On 01/ 3/11 04:28 PM, Richard Elling wrote: On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote: On 12/26/10 05:40 AM, Tim Cook wrote: On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling <richard.ell...@gmail.com> wrote: There are more people outside of Oracle developing f

Re: [zfs-discuss] A few questions

2011-01-03 Thread Robert Milkowski
dates bi-weekly out of Sun. Nexenta spending hundreds of man-hours on a GUI and userland apps isn't work on ZFS. Exactly my observation as well. I haven't seen any ZFS related development happening at Illumos or Nexenta, at least not yet. -- Robert

Re: [zfs-discuss] ZFS ... open source moving forward?

2010-12-11 Thread Robert Milkowski
9 Oct 2010 at src.opensolaris.org they are still old versions from August, at least the ones I checked. See http://src.opensolaris.org/source/history/onnv/onnv-gate/usr/src/uts/common/fs/zfs/ the mercurial gate doesn't have any updates either. Best regards, Robert

Re: [zfs-discuss] Increase Volume Size

2010-12-07 Thread Robert Milkowski
On 07/12/2010 23:54, Tony MacDoodle wrote: Is is possible to expand the size of a ZFS volume? It was created with the following command: zfs create -V 20G ldomspool/test see man page for zfs, section about volsize property. Best regards, Robert Milkowski http://milek.blogspot.com
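The volsize approach the reply points at can be sketched directly (growing the 20G volume from the question; shrinking a volume is far riskier and should be avoided):

```shell
# The volume as created in the thread:
zfs create -V 20G ldomspool/test

# Growing it is a single property change; the consumer of the volume
# (e.g. the guest OS) must then be told to use the new size:
zfs set volsize=40G ldomspool/test
zfs get volsize ldomspool/test
```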

Re: [zfs-discuss] RAID-Z/mirror hybrid allocator

2010-11-22 Thread Robert Milkowski
On 18/11/2010 17:53, Cindy Swearingen wrote: Markus, Let me correct/expand this: 1. If you create a RAIDZ pool on OS 11 Express (b151a), you will have some mirrored metadata. This feature integrated into b148 and the pool version is 29. This is the part I mixed up. 2. If you have an existing R

Re: [zfs-discuss] zfs send|recv and inherited recordsize

2010-10-04 Thread Robert Milkowski
any files, it just dumps data into the underlying objects. --matt On Mon, Oct 4, 2010 at 11:20 AM, Robert Milkowski wrote: Hi, I thought that if I use zfs send snap | zfs recv if on a receiving side the recordsize property is set to different value it will be honored. But it doesn't

[zfs-discuss] zfs send|recv and inherited recordsize

2010-10-04 Thread Robert Milkowski
m2/m1 [ZPL], ID 1110, cr_txg 33537, 2.03M, 6 objects Object lvl iblk dblk dsize lsize %full type 6 2 16K 32K 1.00M 1M 100.00 ZFS plain file Now it is fine. -- Robert Milkowski http://milek.blogspot.com

[zfs-discuss] file level clones

2010-09-27 Thread Robert Milkowski
"Cloning from byte ranges in one file to another is also supported, allowing large files to be more efficiently manipulated like standard rope (http://en.wikipedia.org/wiki/Rope_%28computer_science%29) data structures."

Re: [zfs-discuss] 'sync' properties and write operations.

2010-08-28 Thread Robert Milkowski
in sync mode: system write file in sync or async mode? async The sync property takes effect immediately for all new writes even if a file was open before the property was changed. -- Robert Milkowski http://milek.blogspot.com

[zfs-discuss] zfs set readonly=on does not entirely go into read-only mode

2010-08-27 Thread Robert Milkowski
ehave this way and it should be considered as a bug. What do you think? ps. I tested it on S10u8 and snv_134. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] iScsi slow

2010-08-03 Thread Robert Milkowski
't remember if it offered or not an ability to manipulate zvol's WCE flag but if it didn't then you can do it anyway as it is a zvol property. For an example see http://milek.blogspot.com/2010/02/zvols-write-cache.html -- Robert Milkowski http://mil

Re: [zfs-discuss] iScsi slow

2010-08-03 Thread Robert Milkowski
recent build you have zfs set sync={disabled|default|always} which also works with zvols. So you do have a control over how it is supposed to behave and to make it nice it is even on per zvol basis. It is just that the default is synchronous. -- Robert Milko
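The per-dataset sync control described above can be sketched as follows. Note the middle value is spelled `standard` in shipped builds; the pool/volume name is made up:

```shell
# Every write treated as synchronous (goes through the ZIL):
zfs set sync=always tank/vol1

# Default behavior: honor whatever semantics the client requests:
zfs set sync=standard tank/vol1

# Never wait for stable storage - fast, but recent writes can be lost
# on power failure or crash:
zfs set sync=disabled tank/vol1
```

Because sync is an ordinary dataset property, it can be set differently on each zvol, which is the per-zvol control the message refers to.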

[zfs-discuss] Fwd: Read-only ZFS pools [PSARC/2010/306 FastTrack timeout 08/06/2010]

2010-07-30 Thread Robert Milkowski
fyi Original Message Subject:Read-only ZFS pools [PSARC/2010/306 FastTrack timeout 08/06/2010] Date: Fri, 30 Jul 2010 14:08:38 -0600 From: Tim Haley To: psarc-...@sun.com CC: zfs-t...@sun.com I am sponsoring the following fast-track for George Wilson.

[zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]

2010-07-28 Thread Robert Milkowski
fyi -- Robert Milkowski http://milek.blogspot.com Original Message Subject:zpool import despite missing log [PSARC/2010/292 Self Review] Date: Mon, 26 Jul 2010 08:38:22 -0600 From: Tim Haley To: psarc-...@sun.com CC: zfs-t...@sun.com I am sponsoring

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 perfomrance comparision

2010-07-22 Thread Robert Milkowski
On 22/07/2010 03:25, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Robert Milkowski I had a quick look at your results a moment ago. The problem is that you used a server with 4GB of RAM + a raid card

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 perfomrance comparision

2010-07-21 Thread Robert Milkowski
hough but it might be that a stripe size was not matched to ZFS recordsize and iozone block size in this case. The issue with raid-z and random reads is that as cache hit ratio goes down to 0 the IOPS approaches IOPS of a single drive. For a little bit more information see http://blogs.sun.

Re: [zfs-discuss] Debunking the dedup memory myth

2010-07-20 Thread Robert Milkowski
"compress" the file much better than a compression. Also please note that you can use both: compression and dedup at the same time. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Robert Milkowski
han a regression. Are you sure it is not a debug vs. non-debug issue? -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] carrying on [was: Legality and the future of zfs...]

2010-07-19 Thread Robert Milkowski
outdone, they've stopped other OS releases as well. Surely, this is a temporary situation. AFAIK the dev OSOL releases are still being produced - they haven't been made public since b134 though. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-19 Thread Robert Milkowski
(async or sync) to be written synchronously. ps. still, I'm not saying it would make ZFS ACID. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
ndom reads. http://blogs.sun.com/roch/entry/when_to_and_not_to -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
performance as a much greater number of disk drives in RAID-10 configuration and if you don't need much space it could make sense. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
On 24/06/2010 14:32, Ross Walker wrote: On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote: On 23/06/2010 18:50, Adam Leventhal wrote: Does it mean that for dataset used for databases and similar environments where basically all blocks have fixed size and there is no other data

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
On 23/06/2010 19:29, Ross Walker wrote: On Jun 23, 2010, at 1:48 PM, Robert Milkowski wrote: 128GB. Does it mean that for dataset used for databases and similar environments where basically all blocks have fixed size and there is no other data all parity information will end-up on one

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
smaller writes to metadata that will distribute parity. What is the total width of your raidz1 stripe? 4x disks, 16KB recordsize, 128GB file, random read with 16KB block. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-23 Thread Robert Milkowski
big of a file are you making? RAID-Z does not explicitly do the parity distribution that RAID-5 does. Instead, it relies on non-uniform stripe widths to distribute IOPS. Adam On Jun 18, 2010, at 7:26 AM, Robert Milkowski wrote: Hi, zpool create test raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0

Re: [zfs-discuss] Question : Sun Storage 7000 dedup ratio per share

2010-06-18 Thread Robert Milkowski
dedup enabled in a pool you can't really get a dedup ratio per share. -- Robert Milkowski http://milek.blogspot.com

[zfs-discuss] raid-z - not even iops distribution

2010-06-18 Thread Robert Milkowski
rather expect all of them to get about the same number of iops. Any idea why? -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-18 Thread Robert Milkowski
lly intend to get it integrated into ON? Because if you do then I think that getting Nexenta guys expanding on it would be better for everyone instead of having them reinventing the wheel... -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] At what level does the “zfs ” directory exist?

2010-06-17 Thread Robert Milkowski
. Previous Versions should work even if you have one large filesystem with all users' homes as directories within. What Solaris/OpenSolaris version did you try for the 5k test? -- Robert Milkowski

Re: [zfs-discuss] At what level does the “zfs ” directory exist?

2010-06-16 Thread Robert Milkowski
? It maps the snapshots so Windows can access them via "previous versions" from Explorer's context menu. btw: the CIFS service supports Windows Shadow Copies out-of-the-box. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] OCZ Devena line of enterprise SSD

2010-06-16 Thread Robert Milkowski
whole point of having L2ARC is to serve high random read iops from RAM and L2ARC device instead of disk drives in a main pool. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Scrub issues

2010-06-14 Thread Robert Milkowski
full priority. Is this problem known to the developers? Will it be addressed? http://sparcv9.blogspot.com/2010/06/slower-zfs-scrubsresilver-on-way.html http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6494473 -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-11 Thread Robert Milkowski
On 11/06/2010 10:58, Andrey Kuzmin wrote: On Fri, Jun 11, 2010 at 1:26 PM, Robert Milkowski <mi...@task.gda.pl> wrote: On 11/06/2010 09:22, sensille wrote: Andrey Kuzmin wrote: On Fri, Jun 11, 2010 at 1:54 AM, Richard Elling <ri

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-11 Thread Robert Milkowski
cely coalesce these IOs and do sequential writes with large blocks. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-11 Thread Robert Milkowski
port is nothing unusual and has been the case for at least several years. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Robert Milkowski
On 10/06/2010 15:39, Andrey Kuzmin wrote: On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski <mi...@task.gda.pl> wrote: On 21/10/2009 03:54, Bob Friesenhahn wrote: I would be interested to know how many IOPS an OS like Solaris is able to push through a sing

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Robert Milkowski
0 IOPS to a single SAS port. It also scales well - I did run above dd's over 4x SAS ports at the same time and it scaled linearly by achieving well over 400k IOPS. hw used: x4270, 2x Intel X5570 2.93GHz, 4x SAS SG-PCIE8SAS-E-Z (fw. 1.27.3.0), connected to F5100. -- Robert Milkowski

Re: [zfs-discuss] [zones-discuss] ZFS ARC cache issue

2010-06-04 Thread Robert Milkowski
: why do you need to do this at all? Isn't the ZFS ARC supposed to release memory when the system is under pressure? Is that mechanism not working well in some cases ... ? My understanding is that if kmem gets heavily fragmented ZFS won't be able to give back much memory.

Re: [zfs-discuss] Odd dump volume panic

2010-05-17 Thread Robert Milkowski
s/zvol.c#1785) - but zfs send|recv should replicate it I think. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] osol monitoring question

2010-05-10 Thread Robert Milkowski
are very useful at times. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
On 06/05/2010 21:45, Nicolas Williams wrote: On Thu, May 06, 2010 at 03:30:05PM -0500, Wes Felter wrote: On 5/6/10 5:28 AM, Robert Milkowski wrote: sync=disabled Synchronous requests are disabled. File system transactions only commit to stable storage on the next DMU transaction

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Robert Milkowski
would probably decrease performance and would invalidate all blocks if only a single l2arc device would die. Additionally having each block only on one l2arc device allows to read from all of l2arc devices at the same time. -- Robert Milkowski http://milek.blogspo

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Robert Milkowski
ce failover in a cluster L2ARC will be kept warm. Then the only thing which might affect L2 performance considerably would be a L2ARC device failure... -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
On 06/05/2010 13:12, Robert Milkowski wrote: On 06/05/2010 12:24, Pawel Jakub Dawidek wrote: I read that this property is not inherited and I can't see why. If what I read is up-to-date, could you tell why? It is inherited. Sorry for the confusion but there was a discussion if it shou

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
opose that it shouldn't but it was changed again during a PSARC review that it should. And I did a copy'n'paste here. Again, sorry for the confusion. -- Robert Milkowski http://milek.blogspot.com

[zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
nformation on it you might look at http://milek.blogspot.com/2010/05/zfs-synchronous-vs-asynchronous-io.html -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] ZIL behavior on import

2010-05-05 Thread Robert Milkowski
fails prior to completing a series of writes and I reboot using a failsafe (i.e. install disc), will the log be replayed after a zpool import -f ? yes -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Performance of the ZIL

2010-05-04 Thread Robert Milkowski
when it is off it will give you an estimate of what's the absolute maximum performance increase (if any) by having a dedicated ZIL device. -- Robert Milkowski http://milek.blogspot.com
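On builds that have the per-dataset sync property, the estimation trick described above can be sketched like this (dataset name is hypothetical; test systems only, since sync=disabled can lose recent writes on power failure):

```shell
# 1. Note the current setting and baseline the workload:
zfs get sync tank/nfs

# 2. Disable synchronous semantics and re-run the workload; the resulting
#    throughput is the ceiling a dedicated slog could ever reach:
zfs set sync=disabled tank/nfs

# 3. Restore the original behavior afterwards:
zfs set sync=standard tank/nfs
```

If the workload barely improves in step 2, a dedicated ZIL device will not help it either.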

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-05-04 Thread Robert Milkowski
0 zil synchronicity No promise on date, but it will bubble to the top eventually. So everyone knows - it has been integrated into snv_140 :) -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Compellant announces zNAS

2010-04-29 Thread Robert Milkowski
ution*. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Performance drop during scrub?

2010-04-29 Thread Robert Milkowski
s no room for improvement here. All I'm saying is that it is not as easy a problem as it seems. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-26 Thread Robert Milkowski
ch means it couldn't discover it. does 'zpool import' (no other options) list the pool? -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-26 Thread Robert Milkowski
(and do so with -R). That way you can easily script it so import happens after your disks are available. -- Robert Milkowski http://milek.blogspot.com
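The scripted-import flow suggested above might look like this once the EBS volumes are attached (pool name and altroot are hypothetical):

```shell
# With no arguments, zpool import only lists pools it can discover:
zpool import

# Import with an altroot; -R also implies cachefile=none, so the pool is
# not recorded in /etc/zfs/zpool.cache and won't be auto-imported at boot:
zpool import -R /a mypool
```

Keeping the pool out of the cache file is what lets the boot proceed cleanly when the EBS devices are not yet attached.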

Re: [zfs-discuss] ZFS Pool, what happen when disk failure

2010-04-25 Thread Robert Milkowski
. Then you can "zpool import" I think requiring the -f or -F, and reboot again normal. I just did a test on Solaris 10/09 - and system came up properly, entirely on its own, with a failed pool. zpool status showed the pool as unavailable (as I removed an underlying device) which is fi

Re: [zfs-discuss] ZFS Pool, what happen when disk failure

2010-04-24 Thread Robert Milkowski
. You will need to power cycle. The system won't boot up again; you'll have to The system should boot-up properly even if some pools are not accessible (except rpool of course). If it is not the case then there is a bug - last time I checked it worked perfectly fine. -- Robert

Re: [zfs-discuss] Benchmarking Methodologies

2010-04-24 Thread Robert Milkowski
u can also find some benchmarks with sysbench + mysql or oracle. I don't remember if I posted or not some of my results but I'm pretty sure you can find others. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs & ec2]

2010-04-23 Thread Robert Milkowski
attach EBS. That way Solaris won't automatically try to import the pool and your scripts will do it once disks are available. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Benchmarking Methodologies

2010-04-21 Thread Robert Milkowski
size for database vs. default, atime off vs. on, lzjb, gzip, ssd). Also comparison of benchmark results with all default zfs settings compared to whatever setting you did which gave you the best result. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Double slash in mountpoint

2010-04-21 Thread Robert Milkowski
but it suggests that it had nothing to do with a double slash - rather some process (your shell?) had an open file within the mountpoint. But supplying -f you forced zfs to unmount it anyway. -- Robert Milkowski http://milek.blogspot.com On 21/04/2010 06:16, Ryan John wrote: Thanks. That

Re: [zfs-discuss] Is file cloning anywhere on ZFS roadmap

2010-04-21 Thread Robert Milkowski
without going through the process of actually copying the blocks, but just duplicating its meta data like NetApp does? I don't know about file cloning but why not put each VM on top of a zvol - then you can clone a zvol? -- Robert Milkowski http://milek.blogspo
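The zvol-per-VM approach suggested above gives cheap copy-on-write copies from one golden image (pool and volume names are made up):

```shell
# Build one golden image, then snapshot it:
zfs create -V 20G tank/goldimage        # install the guest OS onto this zvol once
zfs snapshot tank/goldimage@v1

# Each VM gets a clone; unmodified blocks are shared with the snapshot,
# so a new VM costs almost no space up front:
zfs clone tank/goldimage@v1 tank/vm01
zfs clone tank/goldimage@v1 tank/vm02
```

The clones diverge only as each guest writes, which is the "duplicate the metadata, not the blocks" behavior the question asks about, just at zvol rather than file granularity.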

Re: [zfs-discuss] casesensitivity mixed and CIFS

2010-04-14 Thread Robert Milkowski
, while accessing \\filer\arch\myfolder\myfile.txt works. Any ideas? We are running snv_130. you are not using Samba daemon, are you? -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-07 Thread Robert Milkowski
normal reboots zfs won't read data from slog. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-07 Thread Robert Milkowski
letely die as well. Other than that you are fine even with unmirrored slog device. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] To slice, or not to slice

2010-04-03 Thread Robert Milkowski
ris is doing more or less for some time now. look in the archives of this mailing list for more information. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Question about large pools

2010-04-03 Thread Robert Milkowski
fine. So for example - on x4540 servers try to avoid creating a pool with a single RAID-Z3 group made of 44 disks, rather create 4 RAID-Z2 groups each made of 11 disks all of them in a single pool. -- Robert Milkowski http://milek.blogspot.com
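The recommended layout above, four 11-disk RAID-Z2 top-level vdevs in one pool rather than a single 44-disk RAID-Z3, might be created like this (device names are illustrative, not actual x4540 paths):

```shell
# Four raidz2 vdevs of 11 disks each; ZFS stripes across all four,
# so random-read IOPS scale with the number of vdevs, not just disks:
zpool create tank \
  raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 \
  raidz2 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
  raidz2 c3t6d0 c3t7d0 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t0d0 \
  raidz2 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0 c5t7d0 c6t0d0 c6t1d0 c6t2d0 c6t3d0
```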

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-02 Thread Robert Milkowski
On 02/04/2010 16:04, casper@sun.com wrote: sync() is actually *async* and returning from sync() says nothing about to clarify - in case of ZFS sync() is actually synchronous. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] can't destroy snapshot

2010-04-01 Thread Robert Milkowski
the pool, resume the resource group and enable the storage resource The other approach is to keep a pool under a cluster management but eventually suspend a resource group so there won't be any unexpected failovers (but it really depends on circumstances and what you are t

Re: [zfs-discuss] can't destroy snapshot

2010-04-01 Thread Robert Milkowski
s are part of a cluster both of them have a full access to shared storage and you can force zpool import on both nodes at the same time. When you think about it you need actually such behavior for RAC to work on raw devices or real cluster volumes or filesystems, etc. -- Robert Milkowski http://mil

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Robert Milkowski
you can export a share as sync (default) or async while on Solaris you can't really currently force a NFS server to start working in an async mode. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-04-01 Thread Robert Milkowski
sfy a race condition for the sake of internal consistency. Applications which need to know their next commands will not begin until after the previous sync write was committed to disk. ROTFL!!! I think you should explain it even further for Casper :) :) :) :) :) :) :) -- Robert Milk

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Robert Milkowski
e thing is well-documented. I double checked the documentation and you're right - the default has changed to sync. I haven't found in which RH version it happened but it doesn't really matter. So yes, I was wrong - the current default it seems to be sync on L

Re: [zfs-discuss] bit-flipping in RAM...

2010-03-31 Thread Robert Milkowski
On 31/03/2010 16:44, Bob Friesenhahn wrote: On Wed, 31 Mar 2010, Robert Milkowski wrote: or there might be an extra zpool level (or system wide) property to enable checking checksums on every access from ARC - there will be a significant performance impact but then it might be acceptable for

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Robert Milkowski
Unless you are talking about doing regular snapshots and making sure that application is consistent while doing so - for example putting all Oracle tablespaces in a hot backup mode and taking a snapshot... otherwise it doesn't really make sense. -- Robert Milkowski http://mil

Re: [zfs-discuss] Sun Flash Accelerator F20 numbers

2010-03-31 Thread Robert Milkowski
need to re-import a database or recover lots of files over NFS - your service is down and disabling ZIL makes a recovery MUCH faster. Then there are cases when leaving the ZIL disabled is acceptable as well. -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] bit-flipping in RAM...

2010-03-31 Thread Robert Milkowski
ld cause a significant performance problem. or there might be an extra zpool level (or system wide) property to enable checking checksums on every access from ARC - there will be a significant performance impact but then it might be acceptable for really paranoid folks especially with modern ha
