Re: [zfs-discuss] best migration path from Solaris 10
On Sun, Mar 20, 2011 at 01:54:54PM +0700, Fajar A. Nugraha wrote: On Sun, Mar 20, 2011 at 4:05 AM, Pawel Jakub Dawidek p...@freebsd.org wrote: On Fri, Mar 18, 2011 at 06:22:01PM -0700, Garrett D'Amore wrote: Newer versions of FreeBSD have newer ZFS code.

Yes, we are at v28 at this point (the latest open-source version).

That said, ZFS on FreeBSD is kind of a 2nd-class citizen still. [...]

That's actually not true. There are more FreeBSD committers working on ZFS than on UFS.

How is the performance of ZFS under FreeBSD? Is it comparable to that in Solaris, or still slower due to some needed compatibility layer?

This compatibility layer is just a bunch of ugly defines, etc., to allow for fewer code modifications. It introduces no overhead. I made a performance comparison between FreeBSD 9 with ZFSv28 and Solaris 11 Express, but I don't think the Solaris license allows me to publish the results. Believe me, though, the results were very surprising :)

-- Pawel Jakub Dawidek http://www.wheelsystems.com FreeBSD committer http://www.FreeBSD.org Am I Evil? Yes, I Am! http://yomoli.com

___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] best migration path from Solaris 10
On 03/23/11 09:07 AM, Pawel Jakub Dawidek wrote: [...] I made a performance comparison between FreeBSD 9 with ZFSv28 and Solaris 11 Express, but I don't think the Solaris license allows me to publish the results. Believe me, though, the results were very surprising :)

You can compare OpenIndiana oi_148 (and oi_148a with Illumos) and publish comparisons. I think Phoronix.com already did comparisons of ZFS on several platforms against other (Linux) file systems without breaking a sweat.
Re: [zfs-discuss] best migration path from Solaris 10
On Wed, Mar 23, 2011 at 3:50 PM, Nikola M. minik...@gmail.com wrote: I think Phoronix.com already did comparisons with ZFS under several platforms and other (Linux) file systems without breaking a sweat.

With a single-disk configuration, no less (er, more) ;) You may want to check this instead: http://www.zfsbuild.com/
Re: [zfs-discuss] best migration path from Solaris 10
OpenIndiana and others (e.g. Benunix) are distributions that actively support full desktop workstations based on the Illumos base. It is true that the storage server application is a popular one and so has supporters, both commercial and otherwise. ZFS is amazing and quite rightly stands out; it works even better when used with zones, crossbow, dtrace, etc., so it's obvious why it's a focus and often seems the only priority. However, it isn't the only interest, by a long shot.

The SFE package repositories have many packages available to install for when the binary packages aren't up to date. OpenIndiana is hard at work trying to build bigger binary repositories with more apps and newer versions. A simple 'pkg install APPLICATION' is the aim for the majority of main applications. Is it not moving fast enough, or missing the packages you need? Well, that's the beauty of Open Source: we welcome and have systems to help newcomers add and update the packages and applications they want, so we all benefit. Ultimately I'd (and I'm sure many would) like to have binary repositories on a level similar to Debian's, with stable and faster-changing repos and support for many different applications, but that requires a lot of work and manpower.

Bye, Deano

-----Original Message----- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Fajar A. Nugraha Sent: 23 March 2011 01:09 To: Jeff Bacon Cc: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] best migration path from Solaris 10

On Wed, Mar 23, 2011 at 7:33 AM, Jeff Bacon ba...@walleyesoftware.com wrote: I've also started conversations with Pogo about offering an OpenIndiana based workstation, which might be another option if you prefer more of

Sometimes I'm left wondering if anyone uses the non-Oracle versions for anything but file storage... ? 
Seeing that userland programs for *Solaris and derivatives (GUI, daemons, tools, etc.) are usually behind bleeding-edge Linux distros (e.g. Ubuntu), with no particular dedicated team working on improvement there, I'm guessing the answer is "highly unlikely".

-- Fajar
Re: [zfs-discuss] best migration path from Solaris 10
On 3/23/2011 6:14 AM, Deano wrote: OpenIndiana and others (e.g. Benunix) are distributions that actively support full desktop workstations based on the Illumos base. [...]

Honestly (and I say this from purely personal preferences and bias, not any official statement), I see the long-term future of Solaris (and Illumos-based distros) as the new engine for appliances, supplanting Linux and the *BSDs in that space. For a lot of reasons, Solaris has a long list of very superior functionality that makes it very appealing for appliance makers. Right now, we see that in two areas: ZFS for storage, and high scalability for DBs (the various Oracle ExaData stuff). I'm expecting to see a whole raft of things start to show up: JVM container systems (Run Your App Server in SUPERMAN MODE!), online backup devices, firewall appliances, spam and mail filter systems, intrusion detection systems, maybe even software routers, etc. It's here that I think Solaris' strengths can beat its competitors, and where its weaknesses aren't significant.

Sadly, I think Solaris' future as a general-purpose OS is likely finished. Of course, that's just my reading of the tea leaves...

-- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA
Re: [zfs-discuss] best migration path from Solaris 10
On Wed, Mar 23, 2011 at 9:27 AM, Erik Trimble erik.trim...@oracle.com wrote: For a lot of reasons, Solaris has a long list of very superior functionality that makes it very appealing for appliance makers. [...] Sadly, I think Solaris' future as a general-purpose OS is likely finished.

It has been a long time since I thought that Solaris made a good workstation, SunRays notwithstanding. The JDS spin of GNOME was an attempt to get back into the workstation space, but IMHO it was not really a player. Solaris' strengths have been on the server side, and some of the very serious innovation in Solaris 10 really solidified that position (ZFS, dtrace, SMF, FMD, etc.). With this as the starting point, it is easy to see how packaging Solaris into an appliance is appealing.

While I am mostly a Solaris admin, my desktop runs Linux and has for over 5 years. The strength of the desktop tools consistently available on Linux as part of the distribution was what converted me over. Back in 1996 I had a dual-CPU SPARC20 running OpenLook/OpenWindows as my desktop and it was fantastic, but times change.

-- {1-2-3-4-5-6-7-} Paul Kraus - Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ ) - Sound Coordinator, Schenectady Light Opera Company ( http://www.sloctheater.org/ ) - Technical Advisor, RPI Players
[zfs-discuss] Any use for extra drives?
Hi ladies and gents, I've got a new Solaris 10 development box with a ZFS mirrored root using 500G drives. I've also got several extra 320G drives, and I'm wondering if there's any way I can use these to good advantage in this box. I've got enough storage for my needs with the 500G pool. At this point I would be looking for a way to speed things up if possible, or add redundancy if necessary, but I understand I can't use these smaller drives to stripe the root pool, so what would you suggest? Thanks.
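One hedged option, not taken from any reply in the thread: since the smaller drives can't be added to the root mirror, they could form a second, independent pool for scratch space or on-box backups. The device names and dataset names below are placeholders, not from the original post; check format(1M) for the real device names.

```
# Device names are hypothetical -- substitute your own from format(1M).
# Create a second, mirrored pool from two of the 320G drives:
zpool create scratch mirror c1t2d0 c1t3d0

# A dataset there can host build areas, ISOs, or temporary data:
zfs create -o compression=on scratch/builds

# The extra pool could also hold copies of root-pool snapshots
# (dataset names here are illustrative):
zfs snapshot rpool/export/home@backup
zfs send rpool/export/home@backup | zfs receive scratch/home-backup
```

This doesn't speed up the root pool itself, but it does add a failure domain for backups and keeps heavy scratch I/O off the root mirror.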
[zfs-discuss] ZFS and standard backup programs
OK, I know this is only tangentially related to ZFS, but we're desperate and I thought someone might have a clue or idea of what kind of thing to look for. Also, this issue is holding up widespread adoption of ZFS at our shop. It's making the powers-that-be balk a little - understandably. If we can't back up stuff on ZFS, we can't really use it.

We have a ZFS filesystem that's guarded by the Vormetric encryption product to prevent unauthorized users from reading it. Our backup software, HP's Data Protector, refuses to back up this dataset even though it runs as a user with privileges to read the files. When we guard a ZFS dataset with Vormetric, we get the alerts below in HP DP and the data is not backed up. Any suggestions at all are welcome. Note that, yes - files in similarly protected directories on UFS file systems do get backed up correctly. So it has *something* to do with ZFS.

[Warning] From: v...@hostname.ourdomain.com /directoryname Time: 3/23/2011 3:02:25 AM /directoryname Directory is a mount point to a different filesystem. Backed up as empty directory.

[Minor] From: v...@hostname.ourdomain.com /directoryname Time: 3/23/2011 3:02:25 AM [81:84] /directoryname Cannot read ACLs: ([89] Operation not applicable).

-- Learn more about Merchant Link at www.merchantlink.com. THIS MESSAGE IS CONFIDENTIAL. This e-mail message and any attachments are proprietary and confidential information intended only for the use of the recipient(s) named above. If you are not the intended recipient, you may not print, distribute, or copy this message or any attachments. If you have received this communication in error, please notify the sender by return e-mail and delete this message and any attachments from your computer.
Re: [zfs-discuss] ZFS and standard backup programs
On 23/03/11 12:13 PM, Linder, Doug wrote: OK, I know this is only tangentially related to ZFS, but we're desperate and I thought someone might have a clue or idea of what kind of thing to look for. [...] Note that, yes - files in similarly protected directories on UFS file systems do get backed up correctly. So it has *something* to do with ZFS.

Wouldn't this firstly be a question for the vendor of Vormetric?

--Toby
Re: [zfs-discuss] ZFS and standard backup programs
Toby Thain wrote: Wouldn't this firstly be a question for the vendor of Vormetric?

Yes, and we've asked. Alas, they haven't been able to help so far. For all we know it might be a bug in Data Protector, too. But we do know for sure that it works with UFS but not ZFS. [...]
Re: [zfs-discuss] ZFS and standard backup programs
On Wed, March 23, 2011 13:31, Linder, Doug wrote: Yes, and we've asked. Alas, they haven't been able to help so far. For all we know it might be a bug in Data Protector, too. But we do know for sure that it works with UFS but not ZFS. [...]

Kick off a backup of the dataset(s) in question, and run truss(1) on the processes in question to see what they're doing. dtrace(1M) would be another option, and you could limit the tracing to only file system operations (as opposed to every system call).

The first message looks like it's tripping up on the fact that each dataset is treated as a different mount point / file system (in the df(1M) sense). You may have to specify each dataset independently. For the second, it may be that the software is calling acl(2) or acl_get(3SEC) and doesn't support the new NFSv4-style structures that are coming back.
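A rough sketch of the truss approach suggested above. The process name and PID are illustrative, not from the thread; you'd first have to identify the actual Data Protector agent process with ps(1).

```
# Find the backup agent's PID first (the process name is site-specific;
# "omni" is just a guess at a Data Protector-related name):
ps -ef | grep -i omni

# Attach to it, follow forked children (-f), and watch file-system and
# ACL-related syscalls; 12345 is a placeholder PID:
truss -f -t open,open64,stat64,acl,facl -p 12345 2>&1 | grep -i acl
```

The syscall names that fail, and their errno values, should narrow down whether it's the mount-point detection or the ACL call that DP is choking on.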
Re: [zfs-discuss] ZFS and standard backup programs
On 03/24/11 07:28 AM, David Magda wrote: [...] [Minor] /directoryname Cannot read ACLs: ([89] Operation not applicable). [...] For the second, it may be that the software is calling acl(2) or acl_get(3SEC) and doesn't support the new NFSv4-style structures that are coming back.

Error 89 (ENOSYS) is returned by (f)acl_get if the file system does not support ACLs. Again, truss or dtrace should show which function is being called and on which file.

-- Ian.
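As a sketch of the dtrace alternative: a one-liner like the following could show acl-family syscalls that return errno 89. The execname predicate is a placeholder; substitute the real Data Protector agent process name.

```
# "BACKUP_AGENT" is a placeholder for the actual agent process name
dtrace -n '
  syscall::acl:return,syscall::facl:return
  /execname == "BACKUP_AGENT" && errno == 89/
  {
      printf("%s failed: errno %d (ENOSYS)", probefunc, errno);
  }'
```

Seeing which syscall fires (and on which file, via a companion entry probe capturing arg0) would confirm whether it is the ACL path that trips the backup.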
Re: [zfs-discuss] A resilver record?
On Mon, Mar 21, 2011 at 3:45 PM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote: Our main backups storage server has 3x 8-drive raidz2 vdevs. Was replacing the 500 GB drives in one vdev with 1 TB drives. The last 2 drives took just under 300 hours each. :( The first couple drives took approx 150 hours each, and then it just started taking longer and longer for each drive.

That's strange indeed. I just replaced 21 drives (seven 2TB drives in each of three raidz2 vdevs) with 3TB ones, and resilver times were quite stable, until the last replace, which was a bit faster. Have you checked 'iostat -en'? If one (or more) of the drives is having I/O errors, that may slow down the whole pool.

We have production servers with 9 vdevs (mirrored) doing zfs send daily to backup servers with 7 vdevs (each a 3-disk raidz1). Some backup servers that receive datasets with lots of small files (email/web) keep getting worse resilver times.

# zpool status
  pool: backup
 state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: resilver in progress for 646h13m, 100.00% done, 0h0m to go
config:

        NAME           STATE     READ WRITE CKSUM
        backup         DEGRADED     0     0     0
          raidz1-0     ONLINE       0     0     0
            c4t2d0     ONLINE       0     0     0
            c4t3d0     ONLINE       0     0     0
            c4t4d0     ONLINE       0     0     0
          raidz1-1     ONLINE       0     0     0
            c4t5d0     ONLINE       0     0     0
            c4t6d0     ONLINE       0     0     0
            c4t7d0     ONLINE       0     0     0
          raidz1-2     DEGRADED     0     0     0
            c4t8d0     ONLINE       0     0     0
            spare-1    DEGRADED     0     0  216M
              c4t9d0   REMOVED      0     0     0
              c4t1d0   ONLINE       0     0     0  874G resilvered
            c4t10d0    ONLINE       0     0     0
          raidz1-3     ONLINE       0     0     0
            c4t11d0    ONLINE       0     0     0
            c4t12d0    ONLINE       0     0     0
            c4t13d0    ONLINE       0     0     0
          raidz1-4     ONLINE       0     0     0
            c4t14d0    ONLINE       0     0     0
            c4t15d0    ONLINE       0     0     0
            c4t16d0    ONLINE       0     0     0
          raidz1-5     ONLINE       0     0     0
            c4t17d0    ONLINE       0     0     0
            c4t18d0    ONLINE       0     0     0
            c4t19d0    ONLINE       0     0     0
          raidz1-6     ONLINE       0     0     0
            c4t20d0    ONLINE       0     0     0
            c4t21d0    ONLINE       0     0     0
            c4t22d0    ONLINE       0     0     0
        spares
          c4t1d0       INUSE     currently in use

# zpool list backup
NAME     SIZE   USED  AVAIL  CAP  HEALTH    ALTROOT
backup  19.0T  18.7T   315G  98%  DEGRADED  -

Even though the pool is at 98% utilization, that's usually not a problem when the production server is sending datasets which hold VM images. Here we seem to be clearly maxing out the IOPS of the disks in the raidz1-2 vdev. It seems logical to go back to mirrors for this kind of workload (lots of small files, nothing sequential).

What I cannot explain is why c4t1d0 is doing lots of reads besides the expected ones. It seems to be holding back the resilver, while I would expect only c4t8d0 and c4t10d0 to be reading. I do not understand the ZFS internals that make this happen. Can anyone explain that? The server is doing nothing but the resilver (not even receiving new zfs sends).

By the way, since this is OpenSolaris 2009.06, there is a nasty bug where, if I enable fmd, it records billions of checksum errors until the disk is full (so I've had to disable it while the resilver is running). 
# iostat -xn 1 | egrep '(c4t(8|10|1)d0|r/s)'
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   35.2   14.9  907.9  135.8  0.0  0.4    0.1    8.6   1  12 c4t1d0
   44.7    4.0  997.6   78.3  0.0  0.3    0.1    5.8   1  10 c4t8d0
   44.8    4.0  997.6   78.3  0.0  0.3    0.1    5.8   1  10 c4t10d0
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   98.6   46.9 2628.2   52.7  0.0  1.3    0.2    8.6   2  39 c4t1d0
  146.5    0.0 2739.2    0.0  0.0  0.8    0.1    5.1   2  25 c4t8d0
  144.5    0.0 2805.9    0.0  0.0  0.7    0.1    5.1   2  26 c4t10d0
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  108.6   45.7 2809.1   50.7  0.0  1.1    0.1    6.9   2  35 c4t1d0
  146.2    0.0 2624.2    0.0  0.0  0.3    0.1    2.3   1  18 c4t8d0
  149.2    0.0 2737.0    0.0  0.0  0.3    0.1    2.3   1  16 c4t10d0
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  113.0   23.0 3226.9   28.0
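A small filter over that kind of output can make the imbalance jump out. This sketch inlines a few sample rows so it is self-contained; on the live system one would pipe `iostat -xn 1` in instead of the here-document. The 30% threshold is arbitrary.

```shell
#!/bin/sh
# Flag devices whose %b (percent-busy, column 10) in `iostat -xn` output
# exceeds a threshold. Header lines are skipped because their first
# field is not numeric. Column 11 is the device name.
awk '$1 ~ /^[0-9]/ && $10 > 30 { print $11, "busy:", $10 "%" }' <<'EOF'
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   98.6   46.9 2628.2   52.7  0.0  1.3    0.2    8.6   2  39 c4t1d0
  146.5    0.0 2739.2    0.0  0.0  0.8    0.1    5.1   2  25 c4t8d0
  144.5    0.0 2805.9    0.0  0.0  0.7    0.1    5.1   2  26 c4t10d0
EOF
# Prints: c4t1d0 busy: 39%
```

With the sample data above this singles out c4t1d0, matching the observation that the spare is markedly busier than its raidz1-2 peers.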