Re: [zfs-discuss] Heavy writes freezing system
On Tue, 16 Jan 2007, Rainer Heilke wrote:

> Greetings, everyone. We are having issues with some Oracle databases on
> ZFS. We would appreciate any useful feedback you can provide.

You didn't give any details of the system (configuration) on which the DB
runs... Not even SPARC or x86/AMD64... ??

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
           Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Governing Board (OGB) Member - Feb 2006
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
[zfs-discuss] Re: Heavy writes freezing system
> What hardware is used? Sparc? x86 32-bit? x86 64-bit? How much RAM is
> installed? Which version of the OS?

Sorry, this is happening on two systems (test and production). They're both
Solaris 10, Update 2. Test is a V880 with 8 CPUs and 32GB; production is an
E2900 with 12 dual-core CPUs and 48GB.

> Did you already try to monitor kernel memory usage while writing to ZFS?
> Maybe the kernel is running out of free memory? (I have bugs like 6483887
> in mind; without direct management, ARC ghost lists can run amok.)

We haven't seen serious kernel memory usage that I know of (I'll be
honest--I came into this problem late).

> For a live system:
>
>   echo ::kmastat | mdb -k
>   echo ::memstat | mdb -k

I can try this if the DBA group is willing to do another test, thanks.

> In case you've got a crash dump for the hung system, you can try the same
> ::kmastat and ::memstat commands using the kernel crash dumps saved in
> directory /var/crash/`hostname`:
>
>   # cd /var/crash/`hostname`
>   # mdb -k unix.1 vmcore.1
>   ::memstat
>   ::kmastat

The system doesn't actually crash. It also doesn't freeze _completely_.
While I call it a freeze (best name for it), it actually just slows down
incredibly. It's like the whole system bogs down like molasses in January.
Things happen, but very slowly.

Rainer

This message posted from opensolaris.org
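One way to catch a slow kernel-memory leak during a write test is to sample the suggested ::memstat output on an interval and log it. A minimal sketch, assuming root access and a writable /var/tmp (the log path and 60-second interval are made up for illustration; ::memstat can take a while on large-memory boxes, so keep the interval generous):

```shell
# Hypothetical watcher: append a timestamped ::memstat snapshot to a
# log every 60 seconds while the DBA test runs.
LOG=/var/tmp/kmem-watch.log
while true; do
    date >> "$LOG"
    echo ::memstat | mdb -k >> "$LOG" 2>&1
    sleep 60
done
```

Comparing the first and last snapshots after a "freeze" should show whether kernel/anon pages grew at the expense of free memory.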
Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ?
Hi Torrey,

The MD21 entries were removed from the /etc/format.dat file in the Solaris
10 release, although the controller itself was EOL'd long before this
release. However, the entries are not removed upon upgrade from a previous
release, which is this bug:

http://bugs.opensolaris.org/view_bug.do?bug_id=5023396

Cindy

Torrey McMahon wrote:
> Richard Elling wrote:
>> Gael wrote:
>>> jumps8002:/etc/apache2 #cat /etc/release
>>>   Solaris 10 11/06 s10s_u3wos_10 SPARC
>>>   Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
>>>   Use is subject to license terms.
>>>   Assembled 14 November 2006
>>>
>>> The file is a little bit too long to flood the list with, so here's a
>>> quick grep:
>>>
>>> jumps8002:/etc/apache2 #cat /etc/format.dat | grep MD21
>>> # This is the list of supported disks for the Emulex MD21 controller.
>>> : ctlr = MD21 \
>>> : ctlr = MD21 \
>>> : ctlr = MD21 \
>>> # This is the list of partition tables for the Emulex MD21 controller.
>>> : disk = Micropolis 1355 : ctlr = MD21 \
>>> : disk = Micropolis 1355 : ctlr = MD21 \
>>> : disk = Toshiba MK 156F : ctlr = MD21 \
>>> : disk = Micropolis 1558 : ctlr = MD21 \
>>> : disk = Micropolis 1558 : ctlr = MD21 \
>>
>> As I thought. That /etc/format.dat probably didn't come from Solaris 10,
>> or at least I don't see those entries in NV. FWIW, the Micropolis 1355
>> is a 141 MByte (!) ESDI disk. The MD21 is an ESDI-to-SCSI converter.
>
> Maybe it's time to clean that file up? Do we even need it anymore?
Re: [zfs-discuss] Mounting a ZFS clone
On Tue, Jan 16, 2007 at 01:28:04PM -0800, Eric Kustarz wrote:
> Albert Chin wrote:
>> On Mon, Jan 15, 2007 at 10:55:23AM -0600, Albert Chin wrote:
>>> I have no hands-on experience with ZFS but have a question. If the file
>>> server running ZFS exports the ZFS file system via NFS to clients,
>>> based on previous messages on this list, it is not possible for an NFS
>>> client to mount this NFS-exported ZFS file system on multiple
>>> directories on the NFS client. At least, I thought I read this
>>> somewhere.
>>
>> Is the above possible? I don't see why it should not be.
>
> Yes, you can mount multiple *filesystems* via NFS.

And the fact that the file systems on the remote server are ZFS is
irrelevant?

--
albert chin ([EMAIL PROTECTED])
[zfs-discuss] Re: Re: Heavy writes freezing system
Rainer Heilke,

> You have 1/4 of the amount of memory that the E2900 system is capable of
> (192GB, I think).

Yep. The server does not hold the application (three-tier architecture), so
this is the standard build we bought. The memory has not indicated any
problems. All errors point to write issues.

> Secondly, output from fsstat(1M) could be helpful. Run this command over
> time and check to see if the values change.

Thanks. I'll pass this along to the person doing the testing. He's been
doing some measuring, but I'm not sure if fsstat was one of them.

Rainer
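For the person running the test, a minimal fsstat(1M) invocation would look something like the following; the 5-second interval is an arbitrary choice, and the exact flags should be checked against the local man page:

```shell
# Sample aggregate ZFS file-system activity every 5 seconds.
# "zfs" selects the fstype; the trailing number is the interval.
fsstat zfs 5
```

If the per-interval write counts collapse to near zero during a "freeze" while the processes are still on-CPU, that points at the I/O path rather than the applications.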
Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ?
All,

And on that one, big mea culpa: the wanboot.conf install file used the
Solaris 9 miniroot to load that Solaris 10 U3 machine, explaining why the
MD21 lines appeared on it... (Last time I play lazy admin and don't
refresh the whole wanboot config before loading Solaris 10...)

On the other hand, MPxIO and ZFS appear to work great with that Hitachi
array. The only concern as of today is that people are asking how to
simulate the dlnkmgr view -drv output with MPxIO. Any ideas?

Regards
Gael

On 1/16/07, Cindy Swearingen [EMAIL PROTECTED] wrote:
> Hi Torrey,
>
> The MD21 entries were removed from the /etc/format.dat file in the
> Solaris 10 release, although the controller itself was EOL'd long before
> this release. However, the entries are not removed upon upgrade from a
> previous release, which is this bug:
>
> http://bugs.opensolaris.org/view_bug.do?bug_id=5023396
>
> Cindy
--
Gael
Re[2]: [zfs-discuss] Mounting a ZFS clone
Hello Albert,

Tuesday, January 16, 2007, 11:26:04 PM, you wrote:

AC> And the fact that the file systems on the remote server are ZFS is
AC> irrelevant?

Yep.

--
Best regards,
Robert                          mailto:[EMAIL PROTECTED]
                                http://milek.blogspot.com
Re: [zfs-discuss] Extremely poor ZFS perf and other observations
Anantha N. Srirama wrote:
> I'm observing the following behavior in our environment (Sol10U2, E2900,
> 24x96, 2x2Gbps, ...)

In general, I would recommend upgrading to s10u3 (if you can).

> - I've a compressed ZFS filesystem where I'm creating a large tar file.
> I notice that the tar process is running fine (accumulating CPU, truss
> shows writes, ...) but for whatever reason the timestamp on the file
> doesn't change, nor does the file size. The same is true for 'zpool
> list' output; the usage numbers don't change for minutes at a time.
>
> - I started a tar job to the compressed ZFS filesystem, reading from
> another compressed ZFS filesystem. At the same time I started copying
> files from another ZFS filesystem (same pool, same attributes) to a
> remote server (GigE connection) using scp, writing to a UFS filesystem.
> Guess what? My scp over the wire beat the pants off of the local ZFS
> tar session writing to a 2x2Gbps SAN and EMC disks!

Can you send the actual command you ran (is the tar job creating a tar
archive or extracting files)? Is this only a problem when compression is
turned on? If so, I suspect it's this bug:

6460622 zio_nowait() doesn't live up to its name
http://bugs.opensolaris.org/view_bug.do?bug_id=6460622

which should be putback very shortly.

eric
Re: [zfs-discuss] Heavy writes freezing system
Hello Rainer,

Tuesday, January 16, 2007, 5:02:01 PM, you wrote:

RH> scenario. Due to the number of files, UFS was not an option. Since
RH> the environment is going to RAC in six months, upgrading Veritas
RH> did not seem like a justifiable option, with the (mistaken?)
RH> belief ZFS performance would be more than adequate.

What do you mean by UFS wasn't an option due to the number of files? Also,
do you have any tunables set in the system? Can you send 'zpool status'
output? (raidz, mirror, ...?)

When the DBAs do clones, do you mean that just doing 'zfs clone ...' gives
you a big performance problem? Or maybe just before, when you do
'zfs snapshot' first? How much free space is left in the pool? Do you have
sar data from when the problems occurred? Any paging in the system?

And one piece of advice: before any more testing, I would definitely
upgrade/reinstall the system to U3 when it comes to ZFS.

--
Best regards,
Robert                          mailto:[EMAIL PROTECTED]
                                http://milek.blogspot.com
Re[2]: [zfs-discuss] Remote Replication
Hello Matthew,

Thursday, January 4, 2007, 12:11:26 AM, you wrote:

MA> There's also a number of areas where performance could be improved,
MA> which hopefully I'll be able to get to soon.

Any update? I would definitely be interested in speeding up the zfs
send/recv process.

MA> When doing remote replication, more memory will help, because then we
MA> will be able to keep more of the recent changes cached in memory and
MA> not have to read them off disk.

I was looking at the code some time ago and it looks like asynchronous and
continuous replication would be quite easy to implement. Actually, one of
my developers is looking into it right now. We're thinking about creating
another ioctl and then, similar to how it's done right now, sending
transactions to a specified file descriptor: loop in the kernel with some
sleep (2x txg_time?) and send all transactions. Maybe we're missing
something, but if not, it should be really easy to implement. That way one
could easily set up continuous replication between two file systems and do
the snapshotting separately to get point-in-time copies.

If we get there, then we would like to create a userland tool to actually
manage all the replications, etc. That way we should get most (all)
transactions from memory once we're synced up on the source host, plus a
really recent backup.

--
Best regards,
Robert                          mailto:[EMAIL PROTECTED]
                                http://milek.blogspot.com
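Until something like that in-kernel loop exists, the idea can be approximated in userland with the existing zfs send -i / zfs recv. A rough sketch; the dataset names, the "remote" host, and the 10-second interval are invented for illustration, and error handling is omitted:

```shell
# One-time full seed of the target, then an incremental loop.
SRC=tank/fs          # source dataset (assumed name)
DST=tank/backup      # target dataset on the remote host (assumed name)
PREV=base
zfs snapshot "$SRC@$PREV"
zfs send "$SRC@$PREV" | ssh remote zfs recv -F "$DST"

# Near-continuous replication: snapshot, send the delta, drop the
# old snapshot, repeat.
while sleep 10; do
    NOW=repl-$(date +%s)
    zfs snapshot "$SRC@$NOW"
    zfs send -i "@$PREV" "$SRC@$NOW" | ssh remote zfs recv "$DST"
    zfs destroy "$SRC@$PREV"
    PREV=$NOW
done
```

With a short enough interval, the incremental stream is usually still in the ARC, which gets part of the "send from memory" benefit without any kernel changes.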
[zfs-discuss] Bathing ape hoody Bathing ape bape hoodie lil wayne BBC
[u]Bathing ape hoody Bathing ape bape hoodie lil wayne BBC [/u] [nobr]bBathing ape hoody/b Bape hoody bathing ape hoody clothing clothes a href=http://wholesale-distributors-dropship-suppliers-sources.com;img src=http://wholesale-distributors-dropship-suppliers-sources.com/01hoodie.jpg; border=0 height=101 width=142/a Bape a href=http://wholesale-distributors-dropship-suppliers-sources.com;Bathing ape Hoodies/ab US font face=Arial size=2SIZES:/font [nobr]LIL WAYNE bape hoodies bape hoody bathing ape hoodies SHIPPING WORLDWIDE $6 flat rate a href=http://wholesale-distributors-dropship-suppliers-sources.c om/aa href=http://wholesale-distributors-dropship-suppliers-sources.com;/aa href=http://wholesale-distributors-dropship-suppliers-sources.com;http://wholesale-distributors-dropship-suppliers-sources.com/a a href=http://wholesale-distributors-dropship-suppliers-source s.com/01hoodie .jpg bape hoody/a bape bape bape bape hoodies bape hoodies bape hoodies bape shoes bape shoes bape shoes bape clothing bape clothing bape clothing bape sta bape sta bape sta ape bape ape bape ape bape bape hoody bape hoody bape hoody bape jacket bape jacket bape jacket bape sta shoes bape sta shoes bape sta shoes bape hoodie bape hoodie bape hoodie authentic bape hoodies authentic bape hoodies authentic bape hoodies bape store bape store bape store bape cheap hoodies bape cheap hoodies bape cheap hoodies bape clothing line bape clothing line bape clothing line bape layout bape layout bape layout bathing ape bape bathing ape bape bathing ape bape authentic bape shoes authentic bape shoes authentic bape shoes bape ape clothing bape ape clothing bape ape clothing ape bape shoes ape bape shoes ape bape shoes bape cartoon character bape cartoon character bape cartoon character bape sweater bape sweater bape sweater bape clothes bape clothes bape clothes bape layout myspace bape layout myspace bape layout myspace bape jeans bape jeans bape jeans bape sneaker bape sneaker bape sneaker bape 
cartoon character create bape cartoon character create bape cartoon character create bape sta hoodies bape sta hoodies bape sta hoodies agnes bape burberry cdg comme lv nike not prada supreme agnes bape burberry cdg comme lv nike not prada supreme agnes bape burberry cdg comme lv nike not prada supreme bape shirt bape shirt bape shirt bape star bape star bape star ape bape jacket ape bape jacket ape bape jacket bape stas bape stas bape stas bape hoodies wholesale bape hoodies wholesale bape hoodies wholesale wholesale bape wholesale bape wholesale bape fake bape fake bape fake bape bape cartoon bape cartoon bape cartoon ape bape sta ape bape sta ape bape sta bape t shirt bape t shirt bape t shirt ape bape bathing jacket ape bape bathing jacket ape bape bathing jacket bape belt bape belt bape belt ape bape hoodies ape bape hoodies ape bape hoodies authentic bape hoody authentic bape hoody authentic bape hoody ape bape sta shoes ape bape sta shoes ape bape sta shoes bape talk bape talk bape talk bape wallpaper bape wallpaper bape wallpaper authentic bape authentic bape authentic bape authentic bape hoodies wholesale authentic bape hoodies wholesale authentic bape hoodies wholesale bape man clothing bape man clothing bape man clothing bape coat bape coat bape coat authentic bape jacket authentic bape jacket authentic bape jacket bape watch bape watch bape watch baby milo bape baby milo bape baby milo bape bape hoodys bape hoodys bape hoodys bape cheap hoody bape cheap hoody bape cheap hoody bape character bape character bape character bape cheap jacket bape cheap jacket bape cheap jacket bape hoodies jacket sweater bape hoodies jacket sweater bape hoodies jacket sweater bape cartoon make bape cartoon make bape cartoon make buy bape clothing buy bape clothing buy bape clothing bathing bape bathing bape bathing bape bathing ape bape sta bathing ape bape sta bathing ape bape sta bape camo bape camo bape camo bape sweatshirt bape sweatshirt bape sweatshirt woman bape 
woman bape woman bape bape cheap hoodies jacket bape cheap hoodies jacket bape cheap hoodies jacket bape hoodies sweater bape hoodies sweater bape hoodies sweater bape logo bape logo bape logo bape clothing line nigo bape clothing line nigo bape clothing line nigo milo bape milo bape milo bape bbc bape bbc bape bbc bape man bape man bape man bape cheap bape cheap bape cheap bape background bape background bape background bape authentic bape sneaker authentic bape sneaker authentic bape sneaker bape ice cream bape ice cream bape ice cream baby bape clothing milo baby bape clothing milo baby bape clothing milo bape ape.com bape ape.com bape ape.com ape bape n ape bape n ape bape n bape kick bape kick bape kick bape cartoon character creator bape cartoon character
Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ?
This command is often used to identify the physical disk linked to the LUN
(iLU), to allow the SAN team to deallocate/identify the right physical
disk(s), etc.:

vsmd8008:/root #/opt/DynamicLinkManager/bin/dlnkmgr view -drv
PathID  HDevName                Device   LDEV
00      c4t50060E8004572420d71  ssd2     USP.0022308.10B1
01      c4t50060E8004572420d0   ssd143   USP.0022308.106A

I tried to play with the mpathadm command but didn't find anything close.

Regards
Gael

On 1/16/07, Torrey McMahon [EMAIL PROTECTED] wrote:
> What does that view show?
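For what it's worth, the closest MPxIO equivalents I'm aware of are the mpathadm list/show subcommands; a sketch (the device path below is taken from the dlnkmgr output above and may need the right slice suffix on your system):

```shell
# Enumerate MPxIO-managed logical units, then show the path details
# (initiator/target port pairs, path states) for one of them.
mpathadm list lu
mpathadm show lu /dev/rdsk/c4t50060E8004572420d71s2
```

Note that mpathadm shows paths and ports but not the array-side LDEV number the way dlnkmgr does, so mapping back to a physical disk may still need luxadm or the array management tools.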
Re: [zfs-discuss] Re: Eliminating double path with ZFS's volume manager
Hi Philip,

I'm not an expert, so I'm afraid I don't know what to tell you. I'd call
Apple Support and see what they say. As horrid as they are at enterprise
support, they may be the best ones to clarify whether multipathing is
available without Xsan.

Best Regards,
Jason

On 1/16/07, Philip Mötteli [EMAIL PROTECTED] wrote:
>> Looks like it's got a half-way decent multipath design:
>> http://docs.info.apple.com/article.html?path=Xsan/1.1/en/c3xs12.html
>
> Great, but that is with Xsan. If I don't exchange our Hitachi with an
> Xsan, I don't have this 'cvadmin'.