Re: [zfs-discuss] ZFS Distro Advice

2013-03-05 Thread Robert Milkowski
) builds > with checksum=sha256 and compression!=off. AFAIK, Solaris ZFS will COW > the blocks even if their content is identical to what's already there, > causing the snapshots to diverge. > > See https://www.illumos.org/issues/3236 for details. > This is in
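A minimal sketch of the dataset settings the nop-write change in illumos #3236 keys on (a strong checksum plus compression, as the message above notes); the dataset name is illustrative:

  # nop-write can skip COW of rewrites whose content is identical
  zfs set checksum=sha256 tank/backup
  zfs set compression=on tank/backup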

Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Robert Milkowski
> > Robert Milkowski wrote: > > > > Solaris 11.1 (free for non-prod use). > > > > But a ticking bomb if you use a cache device. It's been fixed in SRU (although this is only for customers with a support contract - still, will be in 11.2 as well). Then, I

Re: [zfs-discuss] ZFS Distro Advice

2013-02-26 Thread Robert Milkowski
Solaris 11.1 (free for non-prod use). From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tiernan OToole Sent: 25 February 2013 14:58 To: zfs-discuss@opensolaris.org Subject: [zfs-discuss] ZFS Distro Advice Good morning all. My home NA

Re: [zfs-discuss] RFE: Un-dedup for unique blocks

2013-01-29 Thread Robert Milkowski
nd not in open, while if Oracle does it they are bad? Isn't it at least a little bit being hypocritical? (bashing Oracle and doing sort of the same) -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensol

Re: [zfs-discuss] RFE: Un-dedup for unique blocks

2013-01-29 Thread Robert Milkowski
> > It also has a lot of performance improvements and general bug fixes > in > > the Solaris 11.1 release. > > Performance improvements such as? Dedup'ed ARC for one. 0 block automatically "dedup'ed" in-memory. Improvements to ZIL performance. Zero-copy

Re: [zfs-discuss] Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)

2013-01-04 Thread Robert Milkowski
bug fixes by Oracle that Illumos is not getting (lack of resource, limited usage, etc.). -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] poor CIFS and NFS performance

2013-01-04 Thread Robert Milkowski
> Personally, I'd recommend putting a standard Solaris fdisk > partition on the drive and creating the two slices under that. Why? In most cases giving zfs an entire disk is the best option. I wouldn't bother with any manual partitioning. -- Robert Milkowski http://mi
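A minimal sketch of the whole-disk approach recommended above, as opposed to manual fdisk/slice partitioning; pool and device names are illustrative:

  # hand the entire disk to ZFS; no fdisk partitions or slices needed
  zpool create tank c5t0d0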

Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-26 Thread Robert Milkowski
- 24x 2.5" disks in front, another 2x 2.5" in rear, Sandy Bridge as well. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS Appliance as a general-purpose server question

2012-11-22 Thread Robert Milkowski
contract though. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] zvol access rights - chown zvol on reboot / startup / boot

2012-11-16 Thread Robert Milkowski
No, there isn't any other way to do it currently. The SMF approach is probably the best option for the time being. I think that there should be a couple of other properties for zvol where permissions could be stated. Best regards, Robert Milkowski http://milek.blogspot.com From: zfs-di
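A sketch of the SMF approach mentioned above: a start method for a transient service that restores zvol device ownership after boot. The service, user and dataset names are hypothetical:

  #!/bin/sh
  # start method for e.g. svc:/site/zvol-perms:default (hypothetical service)
  chown oracle:dba /dev/zvol/rdsk/tank/oradata01
  chmod 660 /dev/zvol/rdsk/tank/oradata01
  exit 0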

Re: [zfs-discuss] ARC de-allocation with large ram

2012-10-22 Thread Robert Milkowski
set to 1 after the cache size is decreased, and if it stays that way. The fix is in one of the SRUs and I think it should be in 11.1 I don't know if it was fixed in Illumos or even if Illumos was affected by this at all. -- Robert Milkowski http://milek.blogspot.com > -Original

Re: [zfs-discuss] encfs on top of zfs

2012-07-31 Thread Robert Milkowski
dup because you will shrink the average record > size and balloon the memory usage). Can you expand a little bit more here? Dedup+compression works pretty well actually (not counting "standard" problems with current dedup - compression or no

Re: [zfs-discuss] NFS asynchronous writes being written to ZIL

2012-06-14 Thread Robert Milkowski
nly sync writes will go to zil right away (and not always, see logbias, etc.) and to arc to be committed later to a pool when txg closes. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://ma

Re: [zfs-discuss] Resilver restarting several times

2012-05-11 Thread Robert Milkowski
night I rebooted the machine into single-user mode, to rule out > zones, crontabs and networked abusers, but I still get resilvering resets > every > now and then, about once an hour. > > I'm now trying a run with all zfs datasets unmounted, hope that helps > somew

Re: [zfs-discuss] ZFS performance on LSI 9240-8i?

2012-05-11 Thread Robert Milkowski
own/HDD19/disk ONLINE 0 0 0 /dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD17/disk ONLINE 0 0 0 /dev/chassis/SUN-FIRE-X4270-M2-SERVER.unknown/HDD15/disk ONLINE 0 0 0 errors: No known data errors Best regards, Robert

Re: [zfs-discuss] cluster vs nfs (was: Re: ZFS on Linux vs FreeBSD)

2012-04-25 Thread Robert Milkowski
And he will still need an underlying filesystem like ZFS for them :) > -Original Message- > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- > boun...@opensolaris.org] On Behalf Of Nico Williams > Sent: 25 April 2012 20:32 > To: Paul Archer > Cc: ZFS-Discuss mailing list >

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-29 Thread Robert Milkowski
referring to dedup efficiency which with lower recordsize values should improve dedup ratios (although it will require more memory for ddt). From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Brad Diggs Sent: 29 December 2011 15:55 To: Robert

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-16 Thread Robert Milkowski
p, however in pre Solaris 11 GA (and in Illumos) you would end up with 2x copies of blocks in ARC cache, while in S11 GA ARC will keep only 1 copy of all blocks. This can make a big difference if there are even more than just 2x files being dedupped and you need arc memory to cache other data as well. -- Robert Milkowski ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] zfs sync=disabled property

2011-11-11 Thread Robert Milkowski
> disk. This behavior is what makes NFS over ZFS slow without a slog: NFS does > everything O_SYNC by default, No, it doesn't. However VMWare by default issues all writes as SYNC. ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail

Re: [zfs-discuss] File contents changed with no ZFS error

2011-10-27 Thread Robert Watzlavick
Just to close out the discussion, I wasn't able to prove any issues with ZFS. The files that were changed all seem to have plausible scenarios. I've moved my external USB drive backups over to ZFS directly connected to the file server and it's all working fine. Thanks for everyone's help! -

[zfs-discuss] unsubscribe

2011-10-25 Thread Chen, Robert(Xiaoliang)
unsubscribe ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] File contents changed with no ZFS error

2011-10-24 Thread Robert Watzlavick
On Oct 24, 2011, at 9:42, Edward Ned Harvey wrote: > > I would suggest finding a way to connect the external disks directly to the > ZFS server, and start using zfs send instead. > Since these were my offsite backups I was using Truecrypt which drove the use of ext3 and Linux. Also I wanted

Re: [zfs-discuss] File contents changed with no ZFS error

2011-10-23 Thread Robert Watzlavick
On 10/22/2011 04:14 PM, Mark Sandrock wrote: Why don't you see which byte differs, and how it does? Maybe that would suggest the "failure mode". Is it the same byte data in all affected files, for instance? Mark I found something interesting with the .ppt file. Apparently, just opening a .ppt

Re: [zfs-discuss] File contents changed with no ZFS error

2011-10-22 Thread Robert Watzlavick
can still be applied? -Bob > On Oct 22, 2011, at 9:27 AM, Robert Watzlavick wrote: > >> I've noticed something strange over the past few months with four files on >> my raidz. Here's the setup: >> OpenSolaris snv_111b >> ZFS Pool version 14 >> AMD-b

Re: [zfs-discuss] File contents changed with no ZFS error

2011-10-22 Thread Robert Watzlavick
On Oct 22, 2011, at 13:14, Edward Ned Harvey wrote: >> > How can you outrule the possibility of "something changed the file." > Intentionally, not as a form of filesystem corruption. I suppose that's possible but seems unlikely. One byte on a file changed on the disk with no corresponding chan

[zfs-discuss] File contents changed with no ZFS error

2011-10-22 Thread Robert Watzlavick
I've noticed something strange over the past few months with four files on my raidz. Here's the setup: OpenSolaris snv_111b ZFS Pool version 14 AMD-based server with ECC RAM. 5 ST3500630AS 500 GB SATA drives (4 active plus spare) in raidz1 The other day, I observed what appears to be undetected

[zfs-discuss] Solaris Express server name broadcast

2011-03-07 Thread Robert Soubie
'll certainly find out, in due time, how to have a ZFS server using smb shares broadcast its name on the network. Thanks again for being willing to help. Amitiés, Robert PS: since cross-posting seems to be the rage these days, I'll copy that to the zfs-discuss list, in case a noble sou

Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Robert Hartzell
On Mar 4, 2011, at 10:46 AM, Cindy Swearingen wrote: > Hi Robert, > > We integrated some fixes that allowed you to replace disks of equivalent > sizes, but 40 MB is probably beyond that window. > > Yes, you can do #2 below and the pool size will be adjusted down to the >

Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Robert Hartzell
On Mar 4, 2011, at 11:19 AM, Eric D. Mudama wrote: > On Fri, Mar 4 at 9:22, Robert Hartzell wrote: >> In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz >> storage pool and then shelved the other two for spares. One of the disks >> failed last nig

Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Robert Hartzell
On Mar 4, 2011, at 11:46 AM, Cindy Swearingen wrote: > Robert, > > Which Solaris release is this? > > Thanks, > > Cindy > Solaris 11 express 2010.11 -- Robert Hartzell b...@rwhartzell.net RwHartzell.Net, Inc. ___

Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Robert Hartzell
On Mar 4, 2011, at 10:01 AM, Tim Cook wrote: > > > On Fri, Mar 4, 2011 at 10:22 AM, Robert Hartzell wrote: > In 2007 I bought 6 WD1600JS 160GB sata disks and used 4 to create a raidz > storage pool and then shelved the other two for spares. One of the disks > failed la

[zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Robert Hartzell
#2 is possible would I still be able to use the last still shelved disk as a spare? If #2 is possible I would probably recreate the zpool as raidz2 instead of the current raidz1. Any info/comments would be greatly appreciated. Robert -- Robert Hartzell b...@rwhartzell.net

Re: [zfs-discuss] Sil3124 Sata controller for ZFS on Sparc OpenSolaris Nevada b130

2011-02-08 Thread Robert Soubie
Le 08/02/2011 07:10, Jerry Kemp a écrit : As part of a small home project, I have purchased a SIL3124 hba in hopes of attaching an external drive/drive enclosure via eSATA. The host in question is an old Sun Netra T1 currently running OpenSolaris Nevada b130. The card in question is this Sil

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-10 Thread Robert Milkowski
. But I have enough memory and such a workload that I see little physical reads going on. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-08 Thread Robert Milkowski
On 01/ 7/11 09:02 PM, Pawel Jakub Dawidek wrote: On Fri, Jan 07, 2011 at 07:33:53PM +, Robert Milkowski wrote: Now what if block B is a meta-data block? Metadata is not deduplicated. Good point but then it depends on a perspective. What if you are storing lots of VMDKs? One

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-07 Thread Robert Milkowski
at dedup or not all the other possible cases of data corruption are there anyway, adding yet another one might or might not be acceptable. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://

Re: [zfs-discuss] (Fletcher+Verification) versus (Sha256+No Verification)

2011-01-06 Thread Robert Milkowski
cting duplicate blocks. I don't believe that fletcher is still allowed for dedup - right now it is only sha256. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailma

Re: [zfs-discuss] A few questions

2011-01-04 Thread Robert Milkowski
On 01/ 4/11 11:35 PM, Robert Milkowski wrote: On 01/ 3/11 04:28 PM, Richard Elling wrote: On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote: On 12/26/10 05:40 AM, Tim Cook wrote: On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling wrote: The

Re: [zfs-discuss] A few questions

2011-01-04 Thread Robert Milkowski
On 01/ 3/11 04:28 PM, Richard Elling wrote: On Jan 3, 2011, at 5:08 AM, Robert Milkowski wrote: On 12/26/10 05:40 AM, Tim Cook wrote: On Sat, Dec 25, 2010 at 11:23 PM, Richard Elling wrote: There are more people outside of Oracle developing f

Re: [zfs-discuss] What are .$EXTEND directories?

2011-01-04 Thread Robert Soubie
Le 04/01/2011 08:24, Alan Wright a écrit : Those objects are created automatically when you share a dataset over SMB to support remote ZFS user/group quota management from the Windows desktop. The dot in .$EXTEND is to make the directory less intrusive on Solaris. There is no Solaris or ZFS func

Re: [zfs-discuss] A few questions

2011-01-03 Thread Robert Milkowski
dates bi-weekly out of Sun. Nexenta spending hundreds of man-hours on a GUI and userland apps isn't work on ZFS. Exactly my observation as well. I haven't seen any ZFS related development happening at Illumos or Nexenta, at least not yet. -- Robert

Re: [zfs-discuss] ZFS ... open source moving forward?

2010-12-13 Thread Robert Soubie
Le 13/12/2010 01:56, Tim Cook a écrit : Yes, only the USA, which is where all relevant companies in this discussion do business. On a mailing list centered around a company founded in and doing business in the USA. So what exactly is your point? Don't you forget that these companies also do mu

Re: [zfs-discuss] ZFS ... open source moving forward?

2010-12-11 Thread Robert Milkowski
9 Oct 2010 at src.opensolaris.org they are still old versions from August, at least the ones I checked. See http://src.opensolaris.org/source/history/onnv/onnv-gate/usr/src/uts/common/fs/zfs/ the mercurial gate doesn't have any updates either. Best regards, Robert

Re: [zfs-discuss] Increase Volume Size

2010-12-07 Thread Robert Milkowski
On 07/12/2010 23:54, Tony MacDoodle wrote: Is is possible to expand the size of a ZFS volume? It was created with the following command: zfs create -V 20G ldomspool/test see man page for zfs, section about volsize property. Best regards, Robert Milkowski http://milek.blogspot.com
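The volsize property mentioned above can be changed after creation; a minimal sketch, with the new size purely illustrative:

  # grow the 20G volume created above
  zfs set volsize=40G ldomspool/test
  zfs get volsize ldomspool/test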

[zfs-discuss] ZFS imported into GRUB

2010-12-02 Thread Robert Millan
t the code covered by them can be used freely. If you intend for your code to be free for all users, always use the latest version of the GPL. -- Robert Millan ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] RAID-Z/mirror hybrid allocator

2010-11-22 Thread Robert Milkowski
On 18/11/2010 17:53, Cindy Swearingen wrote: Markus, Let me correct/expand this: 1. If you create a RAIDZ pool on OS 11 Express (b151a), you will have some mirrored metadata. This feature integrated into b148 and the pool version is 29. This is the part I mixed up. 2. If you have an existing R

Re: [zfs-discuss] zfs send|recv and inherited recordsize

2010-10-04 Thread Robert Milkowski
any files, it just dumps data into the underlying objects. --matt On Mon, Oct 4, 2010 at 11:20 AM, Robert Milkowski wrote: Hi, I thought that if I use zfs send snap | zfs recv and on the receiving side the recordsize property is set to a different value, it will be honored. But it doesn't

[zfs-discuss] zfs send|recv and inherited recordsize

2010-10-04 Thread Robert Milkowski
m2/m1 [ZPL], ID 1110, cr_txg 33537, 2.03M, 6 objects Object lvl iblk dblk dsize lsize %full type 6 2 16K 32K 1.00M 1M 100.00 ZFS plain file Now it is fine. -- Robert Milkowski http://milek.blogspot.com ___ zfs-di
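A minimal sketch of the workaround implied above: recordsize only applies to newly written blocks, so after setting it on the receiving dataset the files have to be rewritten (which is what the local copy did). The file name is illustrative:

  zfs set recordsize=32K m2/m1
  # rewrite an existing file so it picks up the new record size
  cp /m2/m1/file /m2/m1/file.tmp && mv /m2/m1/file.tmp /m2/m1/file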

[zfs-discuss] file level clones

2010-09-27 Thread Robert Milkowski
Cloning from byte ranges in one file to another is also supported, allowing large files to be more efficiently manipulated like standard rope <http://en.wikipedia.org/wiki/Rope_%28computer_science%29> data structures."

Re: [zfs-discuss] Proper procedure when device names have changed

2010-09-13 Thread Robert Mustacchi
shuffled with either a kernel or udev upgrade. Robert On 9/13/10 10:31 AM, LaoTsao 老曹 wrote: try export and import the zpool On 9/13/2010 1:26 PM, Brian wrote: I am running zfs-fuse on an Ubuntu 10.04 box. I have a dual mirrored pool: mirror sdd sde mirror sdf sdg Recently the device names sh

Re: [zfs-discuss] Terrible ZFS performance on a Dell 1850 w/ PERC 4e/Si (Sol10U6)

2010-08-31 Thread Robert Loper
m DVD/Jumpstart you should see 2 disks and just do a ZFS 2 disk mirror for rpool. Hope this helps... - Robert Loper -- Forwarded message -- From: Andrei Ghimus To: zfs-discuss@opensolaris.org Date: Mon, 30 Aug 2010 11:05:27 PDT Subject: Re: [zfs-discuss] Terrible ZFS performance on a

Re: [zfs-discuss] 'sync' properties and write operations.

2010-08-28 Thread Robert Milkowski
in sync mode: system write file in sync or async mode? async The sync property takes an effect immediately for all new writes even if a file was open before the property was changed. -- Robert Milkowski http://milek.blogspot.com ___ zfs-disc

[zfs-discuss] zfs set readonly=on does not entirely go into read-only mode

2010-08-27 Thread Robert Milkowski
ehave this way and it should be considered as a bug. What do you think? ps. I tested it on S10u8 and snv_134. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/ma

Re: [zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread Robert Hartzell
On 08/16/10 10:38 PM, George Wilson wrote: Robert Hartzell wrote: On 08/16/10 07:47 PM, George Wilson wrote: The root filesystem on the root pool is set to 'canmount=noauto' so you need to manually mount it first using 'zfs mount '. Then run 'zfs mount -a'. -
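Putting the pieces of this thread together, a sketch of importing a root pool under an alternate root; the pool and BE names are from the thread, the mount point is an assumption:

  zpool import -f -R /mnt bertha
  # the root filesystem is canmount=noauto, so mount it explicitly first
  zfs mount bertha/ROOT/snv_134
  # then mount the remaining datasets
  zfs mount -a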

Re: [zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread Robert Hartzell
y and "zfs mount -a" failed I guess because the first command failed. -- Robert Hartzell b...@rwhartzell.net RwHartzell.Net ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread Robert Hartzell
On 08/16/10 07:39 PM, Mark Musante wrote: On 16 Aug 2010, at 22:30, Robert Hartzell wrote: cd /mnt ; ls bertha export var ls bertha boot etc where is the rest of the file systems and data? By default, root filesystems are not mounted. Try doing a "zfs mount bertha/ROOT/snv_134"

[zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread Robert Hartzell
legacy bertha/zones/bz2/ROOT/zbe 821M 126G 821M legacy cd /mnt ; ls bertha export var ls bertha boot etc where is the rest of the file systems and data? -- Robert Hartzell b...@rwhartzell.net RwHartzell.Net ___ zfs-discuss mailing list

Re: [zfs-discuss] iScsi slow

2010-08-03 Thread Robert Milkowski
't remember if it offered or not an ability to manipulate zvol's WCE flag but if it didn't then you can do it anyway as it is a zvol property. For an example see http://milek.blogspot.com/2010/02/zvols-write-cache.html -- Robert Milkowski http://mil

Re: [zfs-discuss] iScsi slow

2010-08-03 Thread Robert Milkowski
recent build you have zfs set sync={disabled|default|always} which also works with zvols. So you do have a control over how it is supposed to behave and to make it nice it is even on per zvol basis. It is just that the default is synchronous. -- Robert Milko
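A minimal example of the per-zvol control described above; the zvol name is illustrative:

  zfs set sync=always tank/iscsivol      # force synchronous semantics for the zvol
  zfs set sync=disabled tank/iscsivol    # or treat all writes as asynchronous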

[zfs-discuss] Fwd: Read-only ZFS pools [PSARC/2010/306 FastTrack timeout 08/06/2010]

2010-07-30 Thread Robert Milkowski
fyi Original Message Subject:Read-only ZFS pools [PSARC/2010/306 FastTrack timeout 08/06/2010] Date: Fri, 30 Jul 2010 14:08:38 -0600 From: Tim Haley To: psarc-...@sun.com CC: zfs-t...@sun.com I am sponsoring the following fast-track for George Wilson.

[zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292 Self Review]

2010-07-28 Thread Robert Milkowski
fyi -- Robert Milkowski http://milek.blogspot.com Original Message Subject:zpool import despite missing log [PSARC/2010/292 Self Review] Date: Mon, 26 Jul 2010 08:38:22 -0600 From: Tim Haley To: psarc-...@sun.com CC: zfs-t...@sun.com I am sponsoring
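For reference, a sketch of what this case enables in builds that include it; the pool name is illustrative:

  # import a pool whose separate log device is missing, accepting the loss
  # of any uncommitted log records
  zpool import -m tank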

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 perfomrance comparision

2010-07-22 Thread Robert Milkowski
On 22/07/2010 03:25, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Robert Milkowski I had a quick look at your results a moment ago. The problem is that you used a server with 4GB of RAM + a raid card

Re: [zfs-discuss] zfs raidz1 and traditional raid 5 perfomrance comparision

2010-07-21 Thread Robert Milkowski
hough but it might be that a stripe size was not matched to ZFS recordsize and iozone block size in this case. The issue with raid-z and random reads is that as cache hit ratio goes down to 0 the IOPS approaches IOPS of a single drive. For a little bit more information see http://blogs.sun.
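One way to keep those sizes consistent in such a test, assuming iozone is the load generator; values and names are illustrative:

  zfs set recordsize=16k tank/bench
  # write the file, then run random read/write with a matching 16k request size
  iozone -i 0 -i 2 -r 16k -s 8g -f /tank/bench/iozone.tmp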

[zfs-discuss] zpool import issue

2010-07-20 Thread Robert Hofmann
'disk' id=0 guid=13726396776693410521 path='/dev/dsk/c5t600601608D642400B78DD7589A5DDF11d0s2' devid='id1,s...@n600601608d642400b78dd7589a5ddf11/c' phys_path='/scsi_vhci/s...@g600601608d642400b78dd7589a5ddf11:c' whole_disk=0 metaslab_array=26 metaslab_shift=23

Re: [zfs-discuss] Debunking the dedup memory myth

2010-07-20 Thread Robert Milkowski
"compress" the file much better than a compression. Also please note that you can use both: compression and dedup at the same time. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

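As noted above, the two properties are independent and can be combined on one dataset; the dataset name is illustrative:

  zfs set compression=on tank/data
  zfs set dedup=on tank/data
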
Re: [zfs-discuss] zpool throughput: snv 134 vs 138 vs 143

2010-07-20 Thread Robert Milkowski
han a regression. Are you sure it is not a debug vs. non-debug issue? -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] carrying on [was: Legality and the future of zfs...]

2010-07-19 Thread Robert Milkowski
outdone, they've stopped other OS releases as well. Surely, this is a temporary situation. AFAIK the dev OSOL releases are still being produced - they haven't been made public since b134 though. -- Robert Milkowski http://milek.blogspot.com _

Re: [zfs-discuss] Legality and the future of zfs...

2010-07-19 Thread Robert Milkowski
(async or sync) to be written synchronously. ps. still, I'm not saying it would made ZFS ACID. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinf

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
ndom reads. http://blogs.sun.com/roch/entry/when_to_and_not_to -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
performance as a much greater number of disk drives in RAID-10 configuration and if you don't need much space it could make sense. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
On 24/06/2010 14:32, Ross Walker wrote: On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote: On 23/06/2010 18:50, Adam Leventhal wrote: Does it mean that for dataset used for databases and similar environments where basically all blocks have fixed size and there is no other data

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
On 23/06/2010 19:29, Ross Walker wrote: On Jun 23, 2010, at 1:48 PM, Robert Milkowski wrote: 128GB. Does it mean that for dataset used for databases and similar environments where basically all blocks have fixed size and there is no other data all parity information will end-up on one

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-24 Thread Robert Milkowski
smaller writes to metadata that will distribute parity. What is the total width of your raidz1 stripe? 4x disks, 16KB recordsize, 128GB file, random read with 16KB block. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs

Re: [zfs-discuss] raid-z - not even iops distribution

2010-06-23 Thread Robert Milkowski
128GB. Does it mean that for dataset used for databases and similar environments where basically all blocks have fixed size and there is no other data all parity information will end-up on one (z1) or two (z2) specific disks? On 23/06/2010 17:51, Adam Leventhal wrote: Hey Robert, How

Re: [zfs-discuss] Question : Sun Storage 7000 dedup ratio per share

2010-06-18 Thread Robert Milkowski
dedup enabled in a pool you can't really get a dedup ratio per share. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
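The ratio that is available is pool-wide; a minimal example, pool name illustrative:

  zpool get dedupratio tank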

[zfs-discuss] raid-z - not even iops distribution

2010-06-18 Thread Robert Milkowski
rather expect all of them to get about the same number of iops. Any idea why? -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-18 Thread Robert Milkowski
lly intend to get it integrated into ON? Because if you do then I think that getting Nexenta guys expanding on it would be better for everyone instead of having them reinventing the wheel... -- Robert Milkowski http://milek.blogspot.com ___ zfs-discu

Re: [zfs-discuss] At what level does the “zfs ” directory exist?

2010-06-17 Thread Robert Milkowski
. Previous Versions should work even if you have one large filesystem with all users' homes as directories within. What Solaris/OpenSolaris version did you try for the 5k test? -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list

Re: [zfs-discuss] At what level does the “zfs ” directory exist?

2010-06-16 Thread Robert Milkowski
? It maps the snapshots so Windows can access them via "previous versions" from the Explorer's context menu. btw: the CIFS service supports Windows Shadow Copies out-of-the-box. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discus

Re: [zfs-discuss] OCZ Devena line of enterprise SSD

2010-06-16 Thread Robert Milkowski
whole point of having L2ARC is to serve high random read iops from RAM and L2ARC device instead of disk drives in a main pool. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensola

Re: [zfs-discuss] Scrub issues

2010-06-14 Thread Robert Milkowski
full priority. Is this problem known to the developers? Will it be addressed? http://sparcv9.blogspot.com/2010/06/slower-zfs-scrubsresilver-on-way.html http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6494473 -- Robert Milkowski http://milek.blogspot.com

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-11 Thread Robert Milkowski
On 11/06/2010 10:58, Andrey Kuzmin wrote: On Fri, Jun 11, 2010 at 1:26 PM, Robert Milkowski wrote: On 11/06/2010 09:22, sensille wrote: Andrey Kuzmin wrote: On Fri, Jun 11, 2010 at 1:54 AM, Richard Elling

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-11 Thread Robert Milkowski
cely coalesce these IOs and do a sequential writes with large blocks. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-11 Thread Robert Milkowski
port is nothing unusual and has been the case for at least several years. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Robert Milkowski
On 10/06/2010 15:39, Andrey Kuzmin wrote: On Thu, Jun 10, 2010 at 6:06 PM, Robert Milkowski wrote: On 21/10/2009 03:54, Bob Friesenhahn wrote: I would be interested to know how many IOPS an OS like Solaris is able to push through a sing

Re: [zfs-discuss] Sun Flash Accelerator F20

2010-06-10 Thread Robert Milkowski
0 IOPS to a single SAS port. It also scales well - I did run above dd's over 4x SAS ports at the same time and it scaled linearly by achieving well over 400k IOPS. hw used: x4270, 2x Intel X5570 2.93GHz, 4x SAS SG-PCIE8SAS-E-Z (fw. 1.27.3.0), connected to F5100. -- Robert Milkowski
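A sketch of the kind of dd load described above; device paths and block size are illustrative, and iostat is only one way to watch the resulting per-device IOPS:

  # one small-block reader per device, run in parallel
  for d in c1t0d0 c1t1d0 c1t2d0 c1t3d0; do
      dd if=/dev/rdsk/${d}s0 of=/dev/null bs=8k &
  done
  iostat -xnz 1 10    # observe per-device IOPS while the readers run
  wait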

Re: [zfs-discuss] [zones-discuss] ZFS ARC cache issue

2010-06-04 Thread Robert Milkowski
: why do you need to do this at all? Isn't the ZFS ARC supposed to release memory when the system is under pressure? Is that mechanism not working well in some cases ... ? My understanding is that if kmem gets heavily fragmented ZFS won't be able to give back much memory.

Re: [zfs-discuss] Odd dump volume panic

2010-05-17 Thread Robert Milkowski
s/zvol.c#1785) - but zfs send|recv should replicate it I think. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] osol monitoring question

2010-05-10 Thread Robert Milkowski
are very useful at times. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
On 06/05/2010 21:45, Nicolas Williams wrote: On Thu, May 06, 2010 at 03:30:05PM -0500, Wes Felter wrote: On 5/6/10 5:28 AM, Robert Milkowski wrote: sync=disabled Synchronous requests are disabled. File system transactions only commit to stable storage on the next DMU transaction

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Robert Milkowski
would probably decrease performance and would invalidate all blocks if only a single l2arc device would die. Additionally having each block only on one l2arc device allows to read from all of l2arc devices at the same time. -- Robert Milkowski http://milek.blogspo

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Robert Milkowski
ce failover in a cluster L2ARC will be kept warm. Then the only thing which might affect L2 performance considerably would be a L2ARC device failure... -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-dis

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
On 06/05/2010 13:12, Robert Milkowski wrote: On 06/05/2010 12:24, Pawel Jakub Dawidek wrote: I read that this property is not inherited and I can't see why. If what I read is up-to-date, could you tell why? It is inherited. Sorry for the confusion but there was a discussion if it shou

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
opose that it shouldn't but it was changed again during a PSARC review that it should. And I did a copy'n'paste here. Again, sorry for the confusion. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@

[zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
nformation on it you might look at http://milek.blogspot.com/2010/05/zfs-synchronous-vs-asynchronous-io.html -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
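For comparison, the old global tunable next to its per-dataset replacement; the dataset name is illustrative:

  # old, global, now removed: set zfs:zil_disable = 1 in /etc/system
  # new, per dataset:
  zfs set sync=disabled tank/scratch
  zfs get sync tank/scratch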

Re: [zfs-discuss] ZIL behavior on import

2010-05-05 Thread Robert Milkowski
fails prior to completing a series of writes and I reboot using a failsafe (i.e. install disc), will the log be replayed after a zpool import -f ? yes -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss

Re: [zfs-discuss] Performance of the ZIL

2010-05-04 Thread Robert Milkowski
when it is off it will give you an estimate of what's the absolute maximum performance increase (if any) by having a dedicated ZIL device. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolari

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-05-04 Thread Robert Milkowski
0 zil synchronicity No promise on date, but it will bubble to the top eventually. So everyone knows - it has been integrated into snv_140 :) -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] Compellant announces zNAS

2010-04-29 Thread Robert Milkowski
ution*. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Performance drop during scrub?

2010-04-29 Thread Robert Milkowski
s no room for improvement here. All I'm saying is that it is not as easy problem as it seems. -- Robert Milkowski http://milek.blogspot.com ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
