Re: [OpenIndiana-discuss] Slow performance of guests in virtualbox on OI?
To clarify: this was all somewhat subjective. To wit: remote desktop to a Windows guest, move the mouse around, click on the Start menu and/or program icons, do various activities. Under vbox/vdi this was laggy and 'jumpy'. I tried the zvol trick and it was better, but still not great (and it has the disadvantage of my needing to hack around on the command line to create new guests and make sure the zvols are chowned properly on reboot, etc.). I know it's not a perfect benchmark, but I also tried running CrystalDiskMark and got about 140 MB/sec read with vbox/vdi and about 200 MB/sec with vbox/zvol.

The main thing is: I can guarantee that if the lag/jumpiness doesn't go away, this will be a non-starter with my wife (I wasn't real happy either, to be honest...). I shut down the host and pulled one of the mirrored disks in case I changed my mind, reinstalled ESXi on the SSD, and did a fresh Win7 install on the exact same data pool (only now OI 151a7 is again the virtualized SAN/NAS, as originally, with the HBA passed in via VT-d). I ran the same subjective experiments, and everything runs as smoothly as I remembered.

I like hacking around on the command line as much as the next guy, but there has to be a payoff for it, and the various experiments I have tried (KVM on Debian [aka Proxmox], vbox on OI/vdi, vbox on OI/zvol) just haven't met that criterion. OTOH, I have nothing but praise for OI when it's acting as a storage back-end...
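For reference, a minimal sketch of the 'zvol trick' described above; the pool/dataset names, size, and vbox user are hypothetical:

    # create a zvol to back the guest's disk
    zfs create -V 40g tank/vm/win7-disk0
    # VirtualBox runs as an unprivileged user, and zvol device nodes
    # revert to root ownership on reboot -- hence a boot-time script
    # (rc script or SMF method) along these lines:
    chown vboxuser /dev/zvol/rdsk/tank/vm/win7-disk0
    chown vboxuser /dev/zvol/dsk/tank/vm/win7-disk0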
Re: [OpenIndiana-discuss] Slow performance of guests in virtualbox on OI?
On 12/4/2012 11:10 AM, Edward Ned Harvey (openindiana) wrote:
> From: Dan Swartzendruber [mailto:dswa...@druber.com]
>> So I have an OI151a7 box. Latest vbox is installed with several guests.
> Make sure you have guest additions installed in each of the guests. Make sure you don't give all the CPUs to any single guest. In my experience, I give each guest half the CPUs; then it load-balances pretty well. If I have 8 cores and I'm running 4-8 guests, I give them all 4 cores. So I am overloading the CPUs, but as long as they're not all assigned to a single guest, they seem to load-balance well. VirtualBox disk performance is poor if you're using vdi (IMHO). So instead, wrap a raw device (zvol) in a vmdk file. It works much better.

All guests have guest additions. Nothing special was done for CPU usage (default of 1 CPU). I'm wondering about the vdi vs. zvol. Let me give that a try...
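A minimal sketch of the zvol-in-a-vmdk wrapper being suggested, using VirtualBox's createrawvmdk internal command; the VM name, controller name, and paths are hypothetical:

    # wrap the raw zvol device in a thin vmdk descriptor
    VBoxManage internalcommands createrawvmdk \
        -filename /vbox/win7-disk0.vmdk \
        -rawdisk /dev/zvol/rdsk/tank/vm/win7-disk0
    # attach the wrapper to the guest like any other disk image
    VBoxManage storageattach win7 --storagectl SATA \
        --port 0 --device 0 --type hdd --medium /vbox/win7-disk0.vmdk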
Re: [OpenIndiana-discuss] Slow performance of guests in virtualbox on OI?
Might be worth a look indeed. FYI, this is a Supermicro X9SCL-F motherboard with a Xeon E3-1230 CPU (4-core, 3.2 GHz) - fairly state of the art, but not bleeding edge, AFAIK...

-Original Message-
From: Jonathan Adams [mailto:t12nsloo...@gmail.com]
Sent: Tuesday, December 04, 2012 9:11 AM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] Slow performance of guests in virtualbox on OI?

On 4 December 2012 13:58, Jim Klimov wrote:
> On 2012-12-04 14:53, Dan Swartzendruber wrote:
>>> IMHO try KVM...
>> Believe it or not, that occurred to me too - but comments I've seen gave me the feeling it was still 'bleeding edge'...
> At the very least, this would require particular features from recent Intel CPUs. On the statistical average, it is more probable that it won't even start on your rig than the other way around. But it does not hurt to try (at least if the VBoxes are halted first - I have no idea how several hypervisors would interact within one host).

You actually can't run VirtualBox on a machine with KVM installed ... we use KVM to run a Windows Server (with an RDP server in it) and Linux servers, and it's not bleeding-edge Intel CPUs that we're using ... we're on "Dell PowerEdge T310"s with only the bog-standard 2.4 GHz CPUs; you just have to remember to turn on all the virtualization stuff in the BIOS ... might be worth a look.

PS. Make sure you have a large swap area; KVM wants as much swap as the memory you grant to the KVM instance.

Jon
Re: [OpenIndiana-discuss] Slow performance of guests in virtualbox on OI?
Thanks for the suggestions! #1 and #2 are new to me - I came from ESXi, where you just set the VMs up and the bridging works automatically. I had read old articles for vbox about having to do this by hand, but the new version seemed to 'just work', so it didn't occur to me there might be issues there. No idea on #3. On #4, yes, I did install the extension pack and guest additions for everyone. On #5, nothing in the logs. I'm pretty sure there is no memory pressure on the guests - neither one is using more than 1 GB of the 2 GB assigned when this happens. It's frustrating, because it is not horribly slow, just noticeably laggier than under ESXi.

-Original Message-
From: Jim Klimov [mailto:jimkli...@cos.ru]
Sent: Tuesday, December 04, 2012 7:01 AM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] Slow performance of guests in virtualbox on OI?

Hi, I don't think I've had substantial lags in VBoxes, but it may be a lucky case of hardware - and that's not only the different layers of virtualization support in CPUs. In particular, recent list mails suggested that server-grade NICs with buffers and networking-task offload features might implement things like Crossbow VNICs, with their unique MAC addresses, in hardware rather than with CPU emulation and brute promiscuous mode. Likewise, I believe it might be possible to find hardware that lags due to HW or (lack of) drivers in illumos, and you might have had bad luck with that on a particular motherboard. Thus, you might have better luck with separate NICs; usually those with the e1000g driver pose almost no problems (during OS upgrades they forget the jumbo-frame settings, if those were used, due to overwriting of /kernel/drv/e1000g.conf).

To verify that you hit VBox-intrinsic problems rather than something else, can you:

1) Set up an etherstub on the host and attach the VM VNICs to that? (See the dladm sketch at the end of this message.) This should rule out any hardware in your networking between VMs and host, leaving the VM engine performance and limitations. If this works better for you than the equivalent setup on hardware, you can organize routing/NAT via the host GZ or a dedicated LZ with an exclusive IP stack attached to this etherstub.

2) You did set up the MAC addressing for bridged VM interfaces properly, right? (You have a dedicated host VNIC per VM NIC, and the unique MAC addresses of the two are equal.) I have not really worked with non-bridged interfaces (VBox NAT, VBox host-only), so I can't comment on performance of those.

3) Recent VBox might add optimizations like "memory ballooning" and memory deduplication. I wonder if these happen to you, and whether they are troublemakers or not...

4) Do you use the proprietary extension pack on top of the GPL VBox software, and do you add guest additions to your VMs' OSes? Usually these should help speed up your VM experience.

5) Look in the VBox logs (stored under the VMs' folders) to see if the engine complains about anything unusual that it can't initialize or use, etc.

6) Scroll over the VirtualBox on Solaris forum, and perhaps ask some questions there too :)

On another hand, I don't think I've stressed my systems that much. Basically, for an interactive VM I've only used my laptop in the past few months, and when there were tests with a dual-booted Win7 (on its dedicated partition, where it could native-boot from too), performance felt acceptable. It did take longer to boot and halt than with a native boot, but after I added RAM (and about 3 GB of the 8 GB available got dedicated to the Windows VM), performance during work (word processing, browsing) was not really discernible from a native boot with 4 GB of HW RAM before. Windows is memory-greedy, however, so anything around 2 GB lagged even on HW native boots for Win7 and Vista. For virtualization purposes you might also need to provision much swap space (at least as much as your VMs are configured to use as their vRAM), and disk performance/cache for the backing stores might matter - though you do say that is blazing fast.
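A minimal sketch of the etherstub test from item 1, using dladm; the stub and VNIC names are examples:

    # a virtual switch that exists purely in software
    dladm create-etherstub stub0
    # one VNIC per VM NIC, all on the same stub
    dladm create-vnic -l stub0 vnic0
    dladm create-vnic -l stub0 vnic1
    # point each guest's bridged adapter at its vnicN; guest-to-guest
    # traffic then never touches the physical NIC or its driver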
Re: [OpenIndiana-discuss] Slow performance of guests in virtualbox on OI?
-Original Message-
From: Jorge Palma [mailto:jpal...@gmail.com]
Sent: Tuesday, December 04, 2012 6:31 AM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] Slow performance of guests in virtualbox on OI?

IMHO try KVM...

On 03/12/2012 22:29, "Dan Swartzendruber" wrote:
> Believe it or not, that occurred to me too - but comments I've seen gave me the feeling it was still 'bleeding edge'...
[OpenIndiana-discuss] Slow performance of guests in virtualbox on OI?
So I have an OI151a7 box. The latest vbox is installed with several guests. Performance of a couple of Windows 7 VMs seems kind of jerky. I started playing around with iperf and such to see if the network was the issue. Between guests on the same host, and guests <===> host, I can barely crack 1 Gbit/sec (and in some cases am barely hitting 300 Mbit/sec). I know it's not the HW, since when I had ESXi on the same host, guests were able to hit 3-5 Gbit/sec internally. I am wondering if this is just a virtualbox issue?

Disk I/O from the guests seems fine - I run something like CrystalDiskMark from a Win7 guest and can exceed 130 MB/sec both ways. But if I launch a browser window, it can take 4-5 seconds for it to appear, and clicking on links, likewise. The host is not remotely loaded down (for example, right now, with 4 guests running, the load factor is 0.94).

I am trying to get my telecommuter VM and my wife's (both 32-bit XP) migrated to Windows 7, and if it's going to be that laggy and slow, this will not fly with her (nor with me, for that matter!). The previous incarnation was ESXi on the host, with OI151a7 as a VM and the HBA passed in via VMDirectPath (PCI passthrough); the OI VM was on a small local SSD with ESXi. Performance absolutely rocked. This is a major step down. Even if I go back to the original plan, I'm still using OI as the NAS/SAN, but I'd rather get off ESXi (if only because there are monitoring functions I cannot run on the host...). Any thoughts/ideas welcome...
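For reference, a minimal sketch of the kind of iperf test being described, assuming the classic iperf2 client/server pair; the address is an example:

    # on the OI host (or one guest):
    iperf -s
    # on another guest:
    iperf -c 10.0.0.4 -t 30 -P 4
    # -P 4 runs four parallel streams, which usually gets closer to the
    # virtual NIC's real ceiling than a single stream does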
Re: [OpenIndiana-discuss] ZFS High-Availability and Sync Replication
I haven't had much more luck than you...

-Original Message-
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
Sent: Thursday, November 15, 2012 9:07 AM
To: Discussion list for OpenIndiana
Cc: Dan Swartzendruber; z...@lists.illumos.org
Subject: Re: [OpenIndiana-discuss] ZFS High-Availability and Sync Replication

On 11/15/2012 12:48 PM, Dan Swartzendruber wrote:
> How sophisticated does it need to be? I do 5-min dataset-based replication to a remote pool using zrep, but that's all I use it for - a backup...

Well, it's more a question of mapping out the landscape of available tools. Async replication with no automatic failover is easy enough to do using periodic point-in-time snapshots, as you write. I was hoping there'd be something more akin to DRBD or suchlike, i.e. some cluster-aware logic behind it, something that can automatically switch over sharing services (or something I can use to implement such an automatic switchover, such as corosync/pacemaker), etc.

In general, I've identified these possible HA/replication scenarios:

1) Shared-nothing nodes, i.e. separate heads & JBODs
   a) periodic async replication: run zrep/zynk/whatever every few minutes/seconds to send over the diffs between snapshots
   b) AVS for continuous sync/async replication
2) Shared-storage nodes, i.e. separate heads with shared JBODs

In case 1 I need to handle two things:
   A) shipping the deltas over to the other node
   B) ensuring fail-over in case one node goes down (i.e. promote the slave to a master, enable file-sharing services, take over IPs, and possibly reverse the replication flow)

In case 2 I only need to handle the fail-over, as the data is the same, but I need to handle it with very high reliability - a split brain in this case would be catastrophic (perhaps prevented by doing a STONITH on the other node's PDUs).

I could code this myself, but then I suspect I'd be reinventing the wheel. This problem certainly isn't unique to me, and there's a good chance somebody already took care of it. Sadly, though, my Google searches haven't been very fruitful so far.

Cheers,
-- Saso
Re: [OpenIndiana-discuss] ZFS High-Availability and Sync Replication
How sophisticated does it need to be? I do 5-min dataset-based replication to a remote pool using zrep, but that's all I use it for - a backup...

-Original Message-
From: Sašo Kiselkov [mailto:skiselkov...@gmail.com]
Sent: Thursday, November 15, 2012 8:40 AM
To: Discussion list for OpenIndiana
Subject: [OpenIndiana-discuss] ZFS High-Availability and Sync Replication

I've been looking around the net lately for high-availability and sync-replication solutions for ZFS and came up pretty dry - it seems like all the jazz is happening on Linux with corosync/pacemaker and DRBD. I found a couple of tools, such as AVS and OHAC, but these seem rather unmaintained, so it got me wondering what others use for ZFS clustering, HA and sync replication. Can somebody please point me in the right direction?

Cheers,
-- Saso
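A minimal sketch of the periodic snapshot-shipping that zrep automates; the pool, dataset, and host names are examples:

    # one-time full seed of the remote side
    zfs snapshot tank/data@rep1
    zfs send tank/data@rep1 | ssh backup zfs recv -F backup/data
    # every 5 minutes thereafter: snapshot, then send only the delta
    zfs snapshot tank/data@rep2
    zfs send -i tank/data@rep1 tank/data@rep2 | ssh backup zfs recv backup/data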
Re: [OpenIndiana-discuss] Gnome desktop system commands don't work?
On 11/13/2012 3:39 PM, Udo Grabowski (IMK) wrote:
> On 11/13/12 08:49 PM, Dan Swartzendruber wrote:
>> I did google for this extensively, but only found things that don't quite match. I have a brand-new oi151a7 install with a gnome desktop. I click on System => Administration => Users & Groups, expecting to get a prompt for a password. In the margin at the bottom of the screen I can see 'Starting Users & Groups', followed a few seconds later by... nothing.
> You have to enable the 'rad' service.

Already enabled (by default).
Re: [OpenIndiana-discuss] Gnome desktop system commands don't work?
On 11/13/2012 3:25 PM, Peter Tribble wrote:
> On Tue, Nov 13, 2012 at 8:10 PM, Dan Swartzendruber wrote:
>>> I think there were reports of GNOME GUI root-password prompts ignoring the password defined during OS installation (from Live Media), either if the pass includes certain characters, or always. Redefining the root password (even to the same text) after booting into the installed OS solved the issue for those posters. May have to do with PAM integrations taking place (as is the known case with CIFS-compatible passwords) or something like that...
>> BINGO! Thank you Jim! I did 'passwd root' and changed it to the current one, but that apparently doesn't help my existing gdm session. Since I am not yet running the 2-3 VMs headless, I'm afraid to disconnect the session lest I hose them. I'll do that tonight when everyone is off...
> If it's the known password bug, then it's because the root password is initially expired. Fixing it should be effective immediately. However, what you would normally have got in that case is it prompting for the root password and refusing to accept it. Another possibility is to run 'users-admin' from a terminal window and see if it generates anything useful in terms of output. (It's likely to generate some meaningless chatter anyway.)

Dunno, but before, any such command got no graphical output. After I refreshed the root password and fired up a new VNC session, I *do* get the password prompt. I'm guessing there might have been a 'bad password' rejection before, but it never made it to the screen?
Re: [OpenIndiana-discuss] Gnome desktop system commands don't work?
> I think there were reports of GNOME GUI root-password prompts ignoring the password defined during OS installation (from Live Media), either if the pass includes certain characters, or always. Redefining the root password (even to the same text) after booting into the installed OS solved the issue for those posters. May have to do with PAM integrations taking place (as is the known case with CIFS-compatible passwords) or something like that...

BINGO! Thank you Jim! I did 'passwd root' and changed it to the current one, but that apparently doesn't help my existing gdm session. Since I am not yet running the 2-3 VMs headless, I'm afraid to disconnect the session lest I hose them. I'll do that tonight when everyone is off...
[OpenIndiana-discuss] Gnome desktop system commands don't work?
I did google for this extensively, but only found things that don't quite match. I have a brand-new oi151a7 install with a gnome desktop. When I click on System => Administration => Users & Groups, I expect to get a prompt for a password. In the margin at the bottom of the screen I can see 'Starting Users & Groups', followed a few seconds later by... nothing. I have found related complaints where the user is prompted but their password is rejected; I get no prompt at all - it's as if I never tried. And it's not just that command - I tried several others off of System, with equal (lack of) success. I'm not that experienced with gnome, so this is probably something silly, but what?
Re: [OpenIndiana-discuss] An interesting experience with sata drives on an HBA
On 11/12/2012 11:44 AM, Rich wrote:
> The 830 has a known problem where all the drives report the same WWN. So if you put them in the same SAS topology...

Ah, good to know, thanks Rich!
[OpenIndiana-discuss] An interesting experience with sata drives on an HBA
I have an M1015 (a rebadged LSI HBA with two 8087 connectors). One of them connects to a 3.5" JBOD chassis with 4 SAS nearline drives (the tank pool). I also have two Samsung 830 SSDs (SATA) connected to the 2nd port on the HBA with a forward breakout cable. Works just fine. Apparently some SATA drives (WD Blues at least) *do* seem to have a WWN assigned. Others (like the SSDs) don't - I infer the HBA fakes up WWNs for them.

So I spent several hours trying to salvage my virtualized OI install to move it to the bare-metal server. The first hurdle was a hang at boot. Booting with '-kv' showed the last message being 'audio0 is /pseudo/audio0' and nothing after that. I did try 'touch /reconfigure' via the live CD, but that didn't help. Not sure what else to do, and google was not real helpful. I really hoped OI wouldn't be as finicky as Windows, but alas :( I decided to do a fresh install and fix up stuff later.

So I do a live CD install to get the GUI desktop so I can run virtualbox. It comes up just fine, except one of the two SSDs shows as faulted. After much experimentation I gave up for the night (and I apologize that I forgot to save the output from the format command - it was 1:30 AM by that point and I was exhausted). Anyway, here is the output from the virtualized system (back on that now):

    4. c5t5000C50041BD3E85d0
       /pci@0,0/pci15ad,7a0@15/pci1000,3040@0/iport@f/disk@w5000c50041bd3e85,0
    5. c5t5000C50041BD703Dd0
       /pci@0,0/pci15ad,7a0@15/pci1000,3040@0/iport@f/disk@w5000c50041bd703d,0

Note the '@w5000' stuff? In my fail case, it was like 'c65000xxx' and 'c75000x' because the two SSDs were on different SATA connectors of the breakout cable. Here is the gotcha, though: the part *after* the 'c6' or 'c7' was identical! And it was identical because the part after the '@w5000' was identical! I can only infer this is some kind of glitch where the HBA got confused and assigned a duplicate WWN to the two SATA SSDs, and this led OI to think they were in fact the same device, so trying to add the 2nd SSD to the pool would get a message that it is already in the pool. Confused the heck out of me until I noticed they were duplicates :) Even if I could jigger the HBA into 'fixing' this, there's no guarantee it wouldn't bork later. For now, I put the two SSDs on motherboard SATA ports, so they get the canonical cXtYdZ names. It's odd that OI seemed to think they were the same device, but different. Hmmm...
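A quick way to compare the WWN-derived device names, using the stock format(1M) listing; piping from /dev/null just makes it print the disk list and exit:

    # list disks and their device paths, then quit
    echo | format
    # the trailing '@w5000...' component of each path is the WWN the HBA
    # reported; two disks sharing that suffix is the symptom described above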
Re: [OpenIndiana-discuss] OpenIndiana desktop sharing?
On 11/9/2012 10:10 AM, Arhipkin Ilya wrote:
> Check out the article I wrote on the integrated server management of OI: http://www.web.arhipkin.com/Desktop_Sharing_en.html

I did all that and get a black screen. On the other hand, thinking about this some more, I'm not sure this is the best approach. Since I'm running headless, I'd rather not have to log in on the main console to allow 'desktop sharing'. I think I'd be better off running a 2nd VNC session entirely. Just need to figure out how to do that :)
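A minimal sketch of the second-session approach, assuming the xvnc-inetd SMF service shipped with OI's X11 packages (verify the FMRI with svcs on your install):

    # serve a fresh X session per VNC connection on port 5900
    svcadm enable svc:/application/x11/xvnc-inetd:default
    svcs xvnc-inetd          # confirm it is online
    # then point tightvnc at host:5900 from the Windows box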
[OpenIndiana-discuss] OpenIndiana desktop sharing?
I am NOT an X11 expert by any means, so I always just hope 'things will just work'... I found an OI blog entry that referred to sharing the OI desktop, since the server runs headless (note: I am doing this experimentation on another server, not the real one - just practicing :) ). I clicked on the desktop-sharing menu item and selected the right checkboxes. It then popped up a window having to do with a new keyring or somesuch - I haven't seen that again, even though I have enabled/disabled multiple times.

Anyway, the issue: I fire up tightvnc from my Windows 7 box and give it the IP of the OI server. It pops up a new window that is totally black, and I'm never prompted for a password. After a few minutes, the session times out and goes away. Google has not been very helpful :( Any thoughts much appreciated... Here is the article that looked closest, but didn't resolve it for me: http://broken.net/openindiana/how-enable-or-install-xvnc-on-openindiana-148/
Re: [OpenIndiana-discuss] "bad mutex" crash?
Rich fixed the account for me (thanks Rich!)

-Original Message-
From: Jason Matthews [mailto:ja...@broken.net]
Sent: Wednesday, October 31, 2012 5:01 AM
To: 'Discussion list for OpenIndiana'
Subject: Re: [OpenIndiana-discuss] "bad mutex" crash?

I will take a shot in the dark here. I imagine the request will time out and be deleted in a few days. Then you could try again -- but I am just speculating. If Jesus was a web developer, that's how he would do it.

j.

-Original Message-
From: Dan Swartzendruber [mailto:dswa...@druber.com]
Sent: Tuesday, October 30, 2012 9:56 PM
To: 'Discussion list for OpenIndiana'
Subject: Re: [OpenIndiana-discuss] "bad mutex" crash?

Wonderful fail on the illumos site. I went to the page to sign up so I could submit issues. When the activation mail arrived, I somehow deleted it by mistake. Guess what? Apparently there is no way to request a new one. I *can* click on the link to change my password, but that doesn't help, because the account has not been activated, so I'm stuck. Fail 2 is the apparent lack of any email address ("contact us") anywhere I can see on the top-level page. Seriously? I guess I will now create an email alias and a different user name and see if *that* works. Sigh...
Re: [OpenIndiana-discuss] "bad mutex" crash?
Wonderful fail on the illumos site. I went to the page to sign up so I could submit issues. When the activation mail arrived, I somehow deleted it by mistake. Guess what? Apparently there is no way to request a new one. I *can* click on the link to change my password, but that doesn't help, because the account has not been activated, so I'm stuck. Fail 2 is the apparent lack of any email address ("contact us") anywhere I can see on the top-level page. Seriously? I guess I will now create an email alias and a different user name and see if *that* works. Sigh...
Re: [OpenIndiana-discuss] "bad mutex" crash?
Well, definitely ticket time. I seem to be able to make this happen at will - shut down once via ESXi ACPI and once via 'init 6'; kernel panic both times...

-Original Message-
From: Richard Lowe [mailto:richl...@richlowe.net]
Sent: Monday, October 29, 2012 10:13 PM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] "bad mutex" crash?

Ruled out what I was wondering. I'd file an illumos bug describing the configuration, etc., and make the crash dump available somehow.

-- Rich
Re: [OpenIndiana-discuss] "bad mutex" crash?
Okay, thanks.

-Original Message-
From: Richard Lowe [mailto:richl...@richlowe.net]
Sent: Monday, October 29, 2012 10:13 PM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] "bad mutex" crash?

Ruled out what I was wondering. I'd file an illumos bug describing the configuration, etc., and make the crash dump available somehow.

-- Rich
Re: [OpenIndiana-discuss] "bad mutex" crash?
Oh, anyway, no VM snapshots for this VM anyway, since it has a passed-in HBA and therefore VM snapshots are disabled...

-Original Message-
From: Jim Klimov [mailto:jimkli...@cos.ru]
Sent: Monday, October 29, 2012 5:33 PM
To: openindiana-discuss@openindiana.org
Subject: Re: [OpenIndiana-discuss] "bad mutex" crash?

2012-10-28 23:55, Dan Swartzendruber wrote:
> fill). I guess I can create a 32GB virtual disk and move the rpool to it, but it seems like kind of a waste just for the off chance I get a kernel panic. Are there any ways to dodge this, or do I need to just bite the bullet?

I think you can configure dumpadm to use another pool for dumps. You can certainly use another pool for the swap area and free up space on rpool - if rpool/dump must indeed be on rpool. Quite likely you can also expand (live or offline) your VM's hard disk and then autoexpand your rpool, though that would likely require merging and destruction of older VM (hypervisor's) snapshots.

HTH, //Jim
Re: [OpenIndiana-discuss] "bad mutex" crash?
I'm thinking of migrating to a bigger root-pool vmdk. In normal circumstances the dump zvol will be empty, so a Veeam backup should be able to squeeze the dump down to a smaller file. Thanks!

-Original Message-
From: Jim Klimov [mailto:jimkli...@cos.ru]
Sent: Monday, October 29, 2012 5:33 PM
To: openindiana-discuss@openindiana.org
Subject: Re: [OpenIndiana-discuss] "bad mutex" crash?

2012-10-28 23:55, Dan Swartzendruber wrote:
> fill). I guess I can create a 32GB virtual disk and move the rpool to it, but it seems like kind of a waste just for the off chance I get a kernel panic. Are there any ways to dodge this, or do I need to just bite the bullet?

I think you can configure dumpadm to use another pool for dumps. You can certainly use another pool for the swap area and free up space on rpool - if rpool/dump must indeed be on rpool. Quite likely you can also expand (live or offline) your VM's hard disk and then autoexpand your rpool, though that would likely require merging and destruction of older VM (hypervisor's) snapshots.

HTH, //Jim
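A minimal sketch of pointing the dump device at another pool, per the suggestion above; the pool name and size are examples:

    # dedicated dump zvol on the data pool instead of rpool
    zfs create -V 16g tank/dump
    dumpadm -d /dev/zvol/dsk/tank/dump
    # dumpadm prints the new configuration and it takes effect immediately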
Re: [OpenIndiana-discuss] How to disable local/remote login, still allowing access to smb share?
WRT /bin/false, I ran into such an exception: I installed freeradius on my ubuntu main server so my astaro gateway could authenticate people. They already had accounts on that host for email - all of them using /bin/false. I naively tried to use the freeradius "unix password" plugin (not the right name, but the gist is accurate). freeradius would reject auth attempts due to 'invalid shell'. I ended up using the pam plugin and all was well...

-Original Message-
From: Jan Owoc [mailto:jso...@gmail.com]
Sent: Monday, October 29, 2012 11:24 AM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] How to disable local/remote login, still allowing access to smb share?

Hi Dmitry,

On Mon, Oct 29, 2012 at 9:17 AM, Dmitry Kozhinov wrote:
> I am still a newbie to UNIX administration. Please advise. After setting up a storage server (a number of smb shares, as described at http://wiki.openindiana.org/oi/Using+OpenIndiana+as+a+storage+server), I ended up having a number of users on my system, each one needing only to access an smb share from a Windows client machine. How do I prevent these usernames/passwords from being used to log in locally or remotely to the server, and only use them to access smb shares?

I'm not a professional UNIX administrator, but the way I've seen it done is to set the logon shell for those users to "/bin/false". An alternative is "/usr/bin/passwd", so they can't get a logon shell, but they can "log on" to change their password. There are some things for which /bin/false doesn't work, but it might be enough for your needs [1].

[1] http://www.semicomplete.com/articles/ssh-security/

Jan
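A minimal sketch of the shell-based lockout Jan describes; the username is hypothetical:

    # deny interactive logins but leave the account usable for smb
    usermod -s /bin/false dmitry
    # or allow password changes only:
    usermod -s /usr/bin/passwd dmitry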
Re: [OpenIndiana-discuss] "bad mutex" crash?
On a related topic: this is my backup OI VM. It mainly just receives 5-minute snaps as a replica of the main ESXi datastore share. Both VMs have fairly small root "disks", but I noticed I didn't have an rpool/dump on the main OI VM, so I went to create it, and dumpadm bitched me out for having insufficient space. The reason is that the backup OI has 8GB of RAM (which is fine, since it is 'write only'), but the main OI has 20GB, since I want a decent hit rate in ARC (as well as enough RAM to allow the two 128GB SSDs to fill). I guess I can create a 32GB virtual disk and move the rpool to it, but it seems like kind of a waste just for the off chance I get a kernel panic. Are there any ways to dodge this, or do I need to just bite the bullet?
Re: [OpenIndiana-discuss] "bad mutex" crash?
Ah, thanks. I did in fact need to create /var/crash/openindiana2 (the hostname is openindiana2).

root@nas2:/var/crash/openindiana2# mdb unix.0 vmcore.0
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc apix scsi_vhci zfs sd mpt ip hook neti sockfs arp usba stmf stmf_sbd fctl md lofs random idm nfs crypto cpc fcp fcip ufs logindmux ptm sppp nsmb smbsrv ]
> f7f9a9c5::whatis
f7f9a9c5 is f7f9a000+9c5, freed from the heaptext vmem arena:
ADDR             TYPE  START            END              SIZE   THREAD  TIMESTAMP
ff0194e9e930     FREE  f7f9a000         f7f9d000         12288

(doesn't look real useful?)

-Original Message-
From: Richard Lowe [mailto:richl...@richlowe.net]
Sent: Sunday, October 28, 2012 3:20 PM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] "bad mutex" crash?

Sorry, you have to run savecore twice. The first:

# savecore

should write out the vmdump.0. The second:

# savecore -vf vmdump.0

will extract it.

-- Rich
Re: [OpenIndiana-discuss] "bad mutex" crash?
I note that the fmdump output references /var/crash, but there is no such directory on my system?
Re: [OpenIndiana-discuss] "bad mutex" crash?
Hmmm. Doing that yields:

root@nas2:~# savecore -vf vmdump.0 -d .
savecore: stat("vmdump.0"): No such file or directory
savecore: open("vmdump.0"): No such file or directory

-Original Message-
From: Richard Lowe [mailto:richl...@richlowe.net]
Sent: Sunday, October 28, 2012 2:51 PM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] "bad mutex" crash?

If you save the dump (savecore), extract it (savecore -vf vmdump.0 -d .), and run mdb (mdb unix.0 vmcore.0), what does:

f7f9a9c5::whatis

say? The stack looks reasonable enough that I'm wondering if you have a module loaded that someone has stripped which is actually at fault, rather than it being our fault and a damaged stack. If it says it's a 3rd-party module, I'd suggest reporting a bug to wherever it came from. Otherwise, please file an illumos bug and make the crash dump available (over http, ideally)?

-- Rich
[OpenIndiana-discuss] "bad mutex" crash?
I've got a virtualized OI151a7 running under ESXi 5.1: a small root-pool VM disk, and two SATA disks passed in as RDMs. I wanted to back it up to Amazon Glacier, so I removed the two RDMs before doing the backup. The problem happened when I was shutting down the OI VM to be able to edit its config. I clicked on the shutdown-VM option in the vSphere menu for the VM and almost instantly saw the panic. I believe ESXi does this kind of stuff under the sheets using the VMware tools to simulate an ACPI shutdown. I do have a crash dump if anyone wants to look at it. The fmdump info:

root@nas2:~# fmdump -Vp -u e78c0b4e-63ee-ef79-b964-974ca7eca6a0
TIME                       UUID                                 SUNW-MSG-ID
Oct 28 2012 14:34:13.78610 e78c0b4e-63ee-ef79-b964-974ca7eca6a0 SUNOS-8000-KL

  TIME                 CLASS                                         ENA
  Oct 28 14:34:03.7006 ireport.os.sunos.panic.dump_pending_on_device 0x

nvlist version: 0
        version = 0x0
        class = list.suspect
        uuid = e78c0b4e-63ee-ef79-b964-974ca7eca6a0
        code = SUNOS-8000-KL
        diag-time = 1351449253 766125
        de = fmd:///module/software-diagnosis
        fault-list-sz = 0x1
        fault-list = (array of embedded nvlists)
        (start fault-list[0])
        nvlist version: 0
                version = 0x0
                class = defect.sunos.kernel.panic
                certainty = 0x64
                asru = sw:///:path=/var/crash/openindiana2/.e78c0b4e-63ee-ef79-b964-974ca7eca6a0
                resource = sw:///:path=/var/crash/openindiana2/.e78c0b4e-63ee-ef79-b964-974ca7eca6a0
                savecore-succcess = 0
                os-instance-uuid = e78c0b4e-63ee-ef79-b964-974ca7eca6a0
                panicstr = mutex_enter: bad mutex, lp=c0391e78 owner=ff019cb008a0 thread=ff00072cac40
                panicstack = unix:mutex_panic+73 () | unix:mutex_vector_enter+446 () | genunix:cv_timedwait_hires+fd () | genunix:cv_timedwait_sig_hires+336 () | genunix:cv_timedwait_sig+4c () | f7f9a9c5 () | unix:thread_start+8 ()
                crashtime = 1351448035
                panic-time = October 28, 2012 02:13:55 PM EDT
        (end fault-list[0])
        fault-status = 0x1
        severity = Major
        __ttl = 0x1
        __tod = 0x508d7aa5 0x2edaef20
Re: [OpenIndiana-discuss] Slow ssh login?
Truly funny. So I start banging on this problem with truss, and I notice that at some point sshd forks off a shell to run 'locale -a', and that this takes several seconds longer on the slow sshd host than on the other one. So I try this:

time locale -a > /dev/null

It takes a fraction of a second on host A and several seconds on host B. I mutter a few choice expletives and start wondering if there is some kind of performance b0rkage on host B, so I start groveling through the zfs properties for rpool on each host. Ready for the punchline? I was only able to upgrade the RAM assigned to the OI VM on the main hypervisor a few weeks ago, and until then RAM (and therefore ARC) was tight, so (drumroll) I had set the primary and secondary cache to 'off' on rpool. I set both properties back to 'all' and run the locale test again. No change (and no surprise). Run it again (the locale files should now be in ARC), and... it runs like greased lightning. Fire up an ssh session to host B: no more slow logins. Boy, that was fun :)
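For reference, a sketch of the properties involved; 'none' is the value that actually disables caching (the valid settings are all, none, and metadata):

    # check what rpool is currently allowed to cache
    zfs get primarycache,secondarycache rpool
    # re-enable ARC and L2ARC caching for the whole pool
    zfs set primarycache=all rpool
    zfs set secondarycache=all rpool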
Re: [OpenIndiana-discuss] Slow ssh login?
It gets odder and odder. I tried disabling sshd on both hosts and running it with '-ddd' to debug. Here is the slow one:

Connection from 10.0.0.1 port 40262
debug1: Client protocol version 2.0; client software version OpenSSH_5.9p1 Debian-5ubuntu1
debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-Sun_SSH_1.5
debug2: Waiting for monitor
monitor debug1: list_hostkey_types: ssh-rsa,ssh-dss   (5 second delay)
monitor debug2: Monitor pid 16727, unprivileged child pid 16729
monitor debug1: reading the context from the child
debug2: Monitor signalled readiness
debug1: use_engine is 'yes'
debug1: pkcs11 engine initialized, now setting it as default for RSA, DSA, and symmetric ciphers
debug1: pkcs11 engine initialization complete
debug1: list_hostkey_types: ssh-rsa,ssh-dss   (10 second delay)
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha1,diffie-hellman-gr...

(everything after here is copacetic)

And now the fast one:

Connection from 10.0.0.1 port 39510
debug1: Client protocol version 2.0; client software version OpenSSH_5.9p1 Debian-5ubuntu1
debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-Sun_SSH_1.5
debug2: Waiting for monitor
monitor debug1: list_hostkey_types: ssh-rsa,ssh-dss
monitor debug2: Monitor pid 13757, unprivileged child pid 13758
debug2: Monitor signalled readiness
debug1: use_engine is 'yes'
debug1: pkcs11 engine initialized, now setting it as default for RSA, DSA, and symmetric ciphers
debug1: pkcs11 engine initialization complete
debug1: list_hostkey_types: ssh-rsa,ssh-dss
monitor debug1: reading the context from the child
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha1,diffie-hellman-gro...

No delay at all after either 'list_hostkey_types' message. The only difference I see is that the 'reading the context from the child' happens later in the fast case.
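For anyone reproducing this, a sketch of the side-by-side debug run, assuming SunSSH's daemon path on OI; the port number is arbitrary, so the real service can stay up:

    # on each OI host, run a one-off debug instance on a spare port
    /usr/lib/ssh/sshd -ddd -p 2222
    # from the client, in another window:
    ssh -vvv -p 2222 user@host
    # compare where the two hosts' -ddd output stalls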
[OpenIndiana-discuss] Slow ssh login?
Hi, all. I've got an issue that is bugging me. I've got an OI 151a7 VM, and ssh to it takes 15 seconds or so before I get a prompt. It's not the usual reverse-DNS or GSSAPI stuff, since my backup node is also OI 151a7 and it responds instantly to the ssh request. Google has not turned up anything useful except for the usual suspects, which are innocent in this case. The only hint I can see is if I give '-v' on the client:

OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to nas [10.0.0.4] port 22.
debug1: Connection established.
debug1: identity file /home/dswartz/.ssh/id_rsa type -1
debug1: identity file /home/dswartz/.ssh/id_rsa-cert type -1
debug1: identity file /home/dswartz/.ssh/id_dsa type -1
debug1: identity file /home/dswartz/.ssh/id_dsa-cert type -1
debug1: identity file /home/dswartz/.ssh/id_ecdsa type -1
debug1: identity file /home/dswartz/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version Sun_SSH_1.5
debug1: no match: Sun_SSH_1.5
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
debug1: SSH2_MSG_KEXINIT sent

(the delay is here)

debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5 none
debug1: kex: client->server aes128-ctr hmac-md5 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Server host key: RSA 8c:78:a0:17:6b:17:1b:bf:83:69:a3:bf:59:df:18:07
debug1: Host 'nas' is known and matches the RSA host key.
debug1: Found key in /home/dswartz/.ssh/known_hosts:9
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password,keyboard-interactive
debug1: Next authentication method: publickey
debug1: Trying private key: /home/dswartz/.ssh/id_rsa
debug1: Trying private key: /home/dswartz/.ssh/id_dsa
debug1: Trying private key: /home/dswartz/.ssh/id_ecdsa
debug1: Next authentication method: keyboard-interactive

Any thoughts on where to look? It's got to be something that differs between the two OI hosts, but offhand I'm not sure where. Thanks...
Re: [OpenIndiana-discuss] Zfs stability
+1. What the previous poster is missing is this: it's entirely possible for sectors on a disk to go bad, and if you haven't read them in a while, you might not notice. Then, say, the other disk (in a mirror, for example) dies entirely. You are dismayed to realize your redundant disk configuration has lost data for you anyway.

-Original Message-
From: Doug Hughes [mailto:d...@will.to]
Sent: Friday, October 12, 2012 4:42 PM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] Zfs stability

Yes, you should do a scrub, and no, there isn't much risk to it. A scrub scans your disks for bits that have gone stale or the like. You should do it; we do a scrub once per week.

On Fri, Oct 12, 2012 at 3:55 PM, Roel_D wrote:
> Being on the list and reading all the ZFS problem and question posts makes me a little scared. I have 4 Sun X4140 servers running in the field for 4 years now, and they all have ZFS mirrors (2x HD). They are running Solaris 10, and 1 is running Solaris 11. I also have some other servers running OI, also with ZFS. The Solaris servers N E V E R had any ZFS scrub. I didn't even know such a thing existed ;-) Since it has all worked flawlessly for years, I am a huge Solaris/OI fan. But how stable are things nowadays? Does one need to do a scrub? Or a resilver? How come I see so much ZFS trouble?
>
> Kind regards,
> The out-side
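A minimal sketch of the weekly scrub routine Doug describes; the pool name and schedule are examples:

    # kick off a scrub, then check on its progress
    zpool scrub tank
    zpool status tank
    # or from root's crontab, every Sunday at 2am:
    # 0 2 * * 0 /usr/sbin/zpool scrub tank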
Re: [OpenIndiana-discuss] Replacing both disks in a mirror set
Good point on split vs. detach. Unfortunately, this particular misinformation seems widespread :(

-Original Message-
From: Richard Elling [mailto:richard.ell...@richardelling.com]
Sent: Monday, October 08, 2012 8:39 PM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] Replacing both disks in a mirror set

On Oct 8, 2012, at 4:07 PM, Martin Bochnig wrote:

> Maurilio,
>
> At first a reminder: never ever detach a disk before you have a third disk that has already completed resilvering. The term "detach" is misleading, because it detaches the disk from the pool. Afterwards you cannot access the disk's previous contents anymore. Your "detached" half of a mirror can neither be imported, nor mounted, and not even rescued (unlike a disk from a "zpool destroy"ed pool). If I ever mentally recover from a zfs-encryption-caused 2TB (or 3 years!) data loss, then I may offer an implementation with less ambiguous naming to illumos.
>
> "zpool detach" suggests that you could still use this disk as a reserve backup copy of the pool you were detaching it from.

No it doesn't -- there is no documentation that suggests this usage.

> And that you could simply "zpool attach" it again, in case the other disk would die.

You are confusing the zpool detach and zpool split commands.
-- richard

> Unfortunately, this is not the case. Well, you can of course attach it again, like any new or empty disk - but only if you have enough replicas, and that's not what one wanted if one fell into this misunderstanding trap. And there are no warnings in the zpool/zfs man pages.
>
> What you want: zpool replace <pool> <old-disk> <new-disk>
>
> But last weekend I lost the 7 years of trust that I had in ZFS, because an Oracle Solaris 11/11 x86 box with an encrypted and gzip-9 compressed mirror cannot be accessed anymore after VirtualBox forced me to remove power from the host machine. Since then, a 1:1 mirror of 2TB disks cannot be mounted anymore. It always ends in a kernel panic due to a page fault in aes:aes_decrypt_contiguous_blocks.
>
> Well: TITANIC IS UNSINKABLE! The problem is that scrub doesn't find an error, and so has nothing to auto-repair. Even zpool attach successfully completes a resilver, but the newly resilvered disk contains the same error. Be aware that ZFS is not free of bugs. If it stays like that (I contacted some folks for help), then my trust in ZFS is destroyed; it VAPORIZED 3 years of my work and life.
>
> So, back to your question. To be as cautious as possible, what I would do in your case:
>
> 0.) zpool offline the disk to be replaced
> 1.) Physically remove this disc (important, because I have seen cases where zfs forgets that you offlined a vdev after a reboot)
> 2.) AFTER (!IMPORTANT!) you physically disconnected the disc to be replaced, "zpool detach" it, or alternatively use "zpool replace <pool> <old-disk> <new-disk>"
> 3.) Depending on whether you did detach or replace in step 2.), "zpool attach" the new disk, or omit this step if you took "zpool replace" in step 2.)
>
> NEVER TRUST ZFS TOO MUCH. What I do from now on: for each 1:1 mirror that I have, I will take a third disk, resilver it, offline and physically disconnect it, and store it in a secure place. Because if you have as much bad luck as I had last weekend, ZFS replicates the data corruption too, and then you could have 1000 discs mirrored - they would all contain the corruption. For this reason, you are only on the safe side if you physically disconnect a third copy!
>
> Good luck!
> %martin
>
> On 10/8/12, Maurilio Longo wrote:
>> Dan Swartzendruber wrote:
>>> I'm not understanding your problem. If you add a 3rd temporary disk, wait for it to resilver, then replace c1t5d0, let the new disk resilver, then detach the temporary disk, you will never have less than 2 up-to-date disks in the mirror. What am I missing?
>>
>> Dan, you're right, I was trying to find a way to "move" the new disk into the failing disk's bay instead of simply replacing the failing one :)
>> Thanks for the advice!
>> Maurilio.
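Since split vs. detach is the crux here, a short sketch of the difference; pool and disk names are hypothetical:

    # NOT a backup: the detached disk cannot be imported on its own
    zpool detach rpool c1t1d0
    # a backup: split carves one side of each mirror into a new,
    # separately importable pool
    zpool split rpool rpool-backup c1t1d0
    zpool import rpool-backup    # verify, then export and shelve the disk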
Re: [OpenIndiana-discuss] Replacing both disks in a mirror set
Wow, Martin, that's a shocker. I've been doing exactly this to 'backup' my rpool :(
Re: [OpenIndiana-discuss] Replacing both disks in a mirror set
I'm not understanding your problem. If you add a 3rd temporary disk, wait for it to resilver, then replace c11t5d0, let the new disk resilver, then detach the temporary disk, you will never have less than 2 up-to-date disks in the mirror. What am I missing? (A sketch of the full sequence follows at the end of this message.)

-Original Message-
From: Maurilio Longo [mailto:maurilio.lo...@libero.it]
Sent: Monday, October 08, 2012 8:58 AM
To: Discussion list for OpenIndiana
Subject: [OpenIndiana-discuss] Replacing both disks in a mirror set

Hi all, I have a zpool on an oi_147 host system which is made up of 3 mirror sets:

tank
  mirror-0  c11t5d0  c11t4d0
  mirror-1  c11t3d0  c11t2d0
  mirror-2  c11t1d0  c11t0d0

Both c11t5d0 and c11t4d0 (SATA 1Tb disks, ST31000528AS) are developing errors - both disks have around one hundred pending sectors - and I'm getting nervous :) I'd like to add a third disk to mirror-0 so that I can let it resilver without decreasing parity (as replacing one disk would) and increasing my overall risk of losing the whole zpool. A simple

zpool attach tank c11t5d0 c12t0d0

should be enough to make mirror-0 a three-disk mirror set. The problem, for me at least, arises here: how can I remove/replace disks so that I end up with the new disk (c12t0d0) in c11t5d0's (or c11t4d0's) disk bay, without powering off the system? From googling around it seems that zpool offline cannot be used to replace a disk, and if I remove, for example, c11t5d0 with

zpool detach tank c11t5d0

then when I move c12t0d0 to the place (disk bay) where c11t5d0 was, I fear I'll have to let it resilver from the beginning, which leaves me without a mirror for at least three days. Is there some way to solve this without exporting the pool, powering off the host system, moving c12t0d0 into c11t5d0's bay, and then restarting the system and importing the pool again?

Thanks.
Maurilio.
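A sketch of the attach-then-replace sequence suggested above; c13t0d0 stands in for the replacement disk once it is installed in the failing disk's bay, and is hypothetical:

    # 1) attach a temporary third disk and wait for the resilver
    zpool attach tank c11t5d0 c12t0d0
    zpool status tank            # wait for "resilver completed"
    # 2) replace the failing disk with its newly-installed successor
    zpool replace tank c11t5d0 c13t0d0
    # 3) once mirror-0 is healthy again, drop the temporary disk
    zpool detach tank c12t0d0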
Re: [OpenIndiana-discuss] Raid type selection for large # of ssds
LOL, good point Bob :)

-Original Message-
From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
Sent: Saturday, October 06, 2012 4:46 PM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] Raid type selection for large # of ssds

On Fri, 5 Oct 2012, Dan Swartzendruber wrote:
> a 24-drive raidz2 is a really bad idea. you will get one drive's IOPS.

Everyone who has commented thus far seems to have missed that this fellow is only using SSDs for his pool (no rotating rust), so drive seek time is not an issue. It is still true that running more SSDs in parallel should improve available IOPS, though.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Re: [OpenIndiana-discuss] Raid type selection for large # of ssds
On 10/5/2012 12:39 PM, Grant Albitz wrote:
> I feel bad asking this question because I generally know what raid type to pick. I am about to configure 24 256-gig SSD drives in a ZFS/COMSTAR deployment. This will serve as the datastore for a VMware deployment. Does anyone know what raid level would be best? I know the workload will determine a lot, but obviously there is varying workload across a VMware environment. Since we are talking about SSDs, I don't see a particular reason not to create 1 big zfs pool, with the exception that I know people generally try to keep the drive count from getting out of control. Raid 10 seems like a waste of space with little benefit in performance in this case, so I am leaning towards raidz2, but wanted to get everyone's input. The datastore will host a file server and an Exchange server for about 50 users. The environment is all 10G, and they have solid-state drives in all desktops, so essentially that is the reason for such a large SSD deployment for a small # of users. There seem to be varying opinions, especially when you factor in trying to keep writes low for SSDs.

A 24-drive raidz2 is a really bad idea: you will get one drive's worth of IOPS. You said raid10 is a waste? That depends on your workload - keep in mind that for read-heavy random IOPS, raid10 is extremely good. If you insist on some raidz* flavor, I would do something like 3 8-drive raidz2 vdevs, so you get 3 vdevs' worth of random IOPS...
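A minimal sketch of the 3x8 raidz2 layout suggested above; the pool and device names are examples:

    # three 8-disk raidz2 vdevs: random I/O is striped across three
    # vdevs instead of queuing behind a single 24-wide one
    zpool create datastore \
        raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
        raidz2 c0t8d0 c0t9d0 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0 c0t15d0 \
        raidz2 c0t16d0 c0t17d0 c0t18d0 c0t19d0 c0t20d0 c0t21d0 c0t22d0 c0t23d0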
Re: [OpenIndiana-discuss] XStreamOS distro available
On the theory that it's something with my Win7 workstation (it hangs at exactly 332MB every time, no matter which mirror I use), I'm going to try to wget it directly to my ESXi server.
Re: [OpenIndiana-discuss] XStreamOS distro available
Yeah, I tried a couple of mirrors. Must be something weird at this end. I need to poke some more...

-Original Message-
From: Jan Owoc [mailto:jso...@gmail.com]
Sent: Wednesday, September 19, 2012 7:41 PM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] XStreamOS distro available

Not sure if you tried this, but maybe the mirror you selected is having issues. Try a different one.

Jan

On Wed, Sep 19, 2012 at 5:25 PM, Dan Swartzendruber wrote:
> I don't know. I tried a couple. Maybe something at my end. I'll try again later, I guess...
>
>> Very strange, it's on Sourceforge mirrors
Re: [OpenIndiana-discuss] XStreamOS distro available
I don't know. I tried a couple. Maybe something at my end. I'll try again later, I guess...

-Original Message-
From: Gabriele Bulfon [mailto:gbul...@sonicle.com]
Sent: Wednesday, September 19, 2012 6:51 PM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] XStreamOS distro available

Very strange, it's on Sourceforge mirrors

Sent from my iPad

On 19 Sep 2012, at 22:37, "Dan Swartzendruber" wrote:

> Hmmm, I thought of DL'ing it to take a look. I've tried 3-4 times, and each time the download hangs at 332MB. Dunno why, but it's not that important to me, so maybe later...
>
> -Original Message-
> From: ken mays [mailto:maybird1...@yahoo.com]
> Sent: Wednesday, September 19, 2012 1:01 PM
>
> Gabriele,
>
> Very good server distro. A lightweight window manager or desktop environment is not an issue. A solid and 'stable' server distro is what the community needs for most small and mid-size businesses. Since we can use products like JWM 2.1.0 with XStreamOS, it is good enough for now...
>
> ~ Ken Mays
>
> From: Gabriele Bulfon
> Sent: Wednesday, September 19, 2012 11:36 AM
>
> Server only, at the moment. No desktop, at the moment.
>
> From: Apostolos Syropoulos
> Date: 19 September 2012, 17:46 CEST
>
>>> Hi, anyone interested may download the iso of our XStreamOS distro here: https://sourceforge.net/projects/xstreamos/  We would like to know if you find it nice as a development distro for the illumos kernel, or for any other use.
>> Is this a desktop system or some server thing only?
>> A.S.
>> --
>> Apostolos Syropoulos
>> Xanthi, Greece
Re: [OpenIndiana-discuss] XStreamOS distro available
Hmmm, I thought of DL'ing to take a look. I've tried 3-4 times, and each time the download hangs at 332MB. Dunno why, but it's not that important to me, so maybe later... -Original Message- From: ken mays [mailto:maybird1...@yahoo.com] Sent: Wednesday, September 19, 2012 1:01 PM To: Discussion list for OpenIndiana Subject: Re: [OpenIndiana-discuss] XStreamOS distro available Gabriele, Very good server distro. Lightweight window manager or desktop environment is not an issue. A solid and 'stable' server distro is what the community needs for most small and mid-size businesses. Since we can use products like JWM 2.1.0 with XStreamOS, it is good enough for now... ~ Ken Mays From: Gabriele Bulfon To: Discussion list for OpenIndiana Sent: Wednesday, September 19, 2012 11:36 AM Subject: Re: [OpenIndiana-discuss] XStreamOS distro available Server only, at the moment. No desktop, at the moment. -- Da: Apostolos Syropoulos A: Discussion list for OpenIndiana Data: 19 settembre 2012 17.46.14 CEST Oggetto: Re: [OpenIndiana-discuss] XStreamOS distro available Hi, anyone interested may donwload the iso of our XStreamOS distro here: https://sourceforge.net/projects/xstreamos/ We would like to know if you feel it nice as a development distro for the illumos kernel, or any other use. Is this a desktop system or some server thing only? A.S. -- Apostolos Syropoulos Xanthi, Greece ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Interesting question about L2ARC
At the moment, 20GB. Here is the hit/miss info from later in arc_summary.pl:

CACHE HITS BY DATA TYPE:
        Demand Data:            45%    8649309
        Prefetch Data:           1%     232747
        Demand Metadata:        31%    5979043
        Prefetch Metadata:      22%    4286704

CACHE MISSES BY DATA TYPE:
        Demand Data:            56%     996300
        Prefetch Data:           8%     140398
        Demand Metadata:        35%     615346
        Prefetch Metadata:       0%       1196

-Original Message- From: Sašo Kiselkov [mailto:skiselkov...@gmail.com] Sent: Tuesday, September 11, 2012 9:31 AM To: Dan Swartzendruber Cc: z...@lists.illumos.org; 'Discussion list for OpenIndiana' Subject: Re: [OpenIndiana-discuss] Interesting question about L2ARC On 09/11/2012 03:27 PM, Dan Swartzendruber wrote: > > Saso, I think you might be on to something here. This is a handful of > VM's, none doing anything all that disk intensive, so any disk I/O > will tend to be somewhat random. I'm not so worried about premature > death of the device, since it's a brand-new device, and we're not > talking hundreds of terabytes per month or anything. My pool is > definitely NOT throughput stressed (rarely exceeding 20-30 MB/sec at > most), so latency is more of a priority to me. I'll give this a try, thanks! > What is your ARC size? Chances are your dataset fits entirely in the ARC. Just a quick guess. Cheers, -- Saso ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Interesting question about L2ARC
(snipped my OP) At first glance it's hard to tell why your l2arc is failing to fill up, but my suspicion is that it has something to do with your workload. As a recap, here's how the l2arc works:

* there is a feed thread (l2arc_feed_thread) that periodically scans the end of the MRU/MFU lists of the ARC in order to capture buffers before they are evicted and write them to l2arc
* l2arc by default only caches non-prefetch data (i.e. random reads), since it is primarily a tool to lower random access latency, not increase linear throughput (the main pool is assumed to be faster than l2arc in bulk read volume)

It is somewhat suspicious that your l2arc only contains 9GB of data. Run the following command to check your l2arc growth in response to your workloads:

# while sleep 2; do echo ---; kstat -m zfs -n arcstats | grep l2 ; done

Look for the l2_size parameter. If your workload's random-access portion fits entirely into the ARC, then l2arc isn't going to do you any good. If you do want to cache prefetched data in the l2arc as well (because your l2arc devices have cumulatively higher throughput than your main pool), try setting l2arc_noprefetch=0:

# echo l2arc_noprefetch/W0t0 | mdb -kw

Be advised, though, that this might start an unending heavy writing frenzy to your l2arc devices, which might burn through your l2arc device's flash write cycles much faster than you'd want.

*** Saso, I think you might be on to something here. This is a handful of VM's, none doing anything all that disk intensive, so any disk I/O will tend to be somewhat random. I'm not so worried about premature death of the device, since it's a brand-new device, and we're not talking hundreds of terabytes per month or anything. My pool is definitely NOT throughput stressed (rarely exceeding 20-30 MB/sec at most), so latency is more of a priority to me. I'll give this a try, thanks! ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
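For the record, the mdb -kw tweak above only lasts until reboot. If it turns out to help, the persistent equivalent - assuming the stock illumos tunable name - is an /etc/system entry:

set zfs:l2arc_noprefetch = 0

followed by a reboot; the mdb form stays handy for flipping it live while testing.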
[OpenIndiana-discuss] Interesting question about L2ARC
I got a 256GB Crucial M4 to use for L2ARC for my OI box. I added it to the tank pool and let it warm for a day or so. By that point, 'zpool iostat -v' said the cache device had about 9GB of data, but (and this is what has me puzzled) kstat showed ZERO l2_hits. That's right, zero.

kstat | egrep "(l2_hits|l2_misses)"
        l2_hits         0
        l2_misses       1143249

The box has 20GB of RAM (it's actually a virtual machine on an ESXi host.) The datastore for the VMs is about 256GB. My first thought was everything is hitting in ARC, but that is clearly not the case, since it WAS gradually filling up the cache device. Maybe it's possible that every single miss is never ever being re-read, but that seems unlikely, no? If the l2_hits was a small number, I'd think it just wasn't giving me any bang for the buck, but zero sounds suspiciously like some kind of bug/mis-configuration. primarycache and secondarycache are both set to all. arc stats via arc_summary.pl:

ARC Efficency:
        Cache Access Total:      12324974
        Cache Hit Ratio:   87%   10826363   [Defined State for buffer]
        Cache Miss Ratio:  12%    1498611   [Undefined State for Buffer]
        REAL Hit Ratio:    68%    8469470   [MRU/MFU Hits Only]
        Data Demand Efficiency:    85%
        Data Prefetch Efficiency:  59%

For the moment, I gave up and moved the SSD back to being my windows7 drive, where it does make a difference :) I'd be willing to shell out for another SSD, but only if I can gain some benefit from it. Any thoughts would be appreciated (if this is too esoteric for the OI list, I can try the zfs discussion list - I am starting here because of common platform with the rest of the audience...) ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
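For anyone retracing this, two quick sanity checks before blaming the device are the dataset caching properties and the raw l2 counters (pool name as in the post; kstat operands per the illumos arcstats kstat):

# zfs get primarycache,secondarycache tank
# kstat -p zfs:0:arcstats:l2_size zfs:0:arcstats:l2_hits

If l2_size keeps growing while l2_hits stays pinned at zero, the warm data is either all being served out of ARC or simply never re-read, which is the question the rest of the thread chases.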
Re: [OpenIndiana-discuss] sata hba choice
On 8/24/2012 12:08 PM, Rich wrote: You know, you'd think so. There's lots of opinions on SAS expanders, and the general consensus seems to be "if you can avoid the complication, do so". The only complaints I've heard are about putting SATA drives on SAS expanders. Even that is not very clear - lots of he said/she said stories out there... ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] sata hba choice
On 8/24/2012 11:51 AM, Rich wrote: I believe the -8i also ships with IR firmware OOTB, but flashing is no more complicated than the M1015. A little caveat here: if you haven't read the right articles on the right forums, it seems the application that has to be run to reflash this can be cantankerous about which motherboards it will run on (believe it or not). ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] sata hba choice
I went the m1015 route. Flashing was a bit tricky but was worth it (IMO) to get rid of the raid stack and possible complications. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] SSDs for ZFS Pool?
AFAIK, TRIM doesn't work with any flavor of ZFS yet. Also, if read IOPS is important, I'd prefer raid10 to raidz*. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
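To make the raid10 suggestion concrete: a striped-mirror pool is just multiple mirror vdevs in one zpool create, e.g. (hypothetical device names):

# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

Each read can be satisfied by either side of a mirror, which is where the read-IOPS advantage over raidz* comes from.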
Re: [OpenIndiana-discuss] ZFS and AVS guru $500
Lol -- Sent from my Android phone with K-9 Mail. Please excuse my brevity. Jason Matthews wrote: are you missing a zero to the left of the decimal place? j. Sent from Jasons' hand held On Jul 23, 2012, at 8:57 PM, "John T. Bittner" wrote: > Subject: ZFS and AVS guru > > I am working on setting up 2 SAN's to replicate via AVS. > The 2 sans I build have 15 drives SAS drives + 2 Cache SSD's and 2 Log SSD's. > OS drives are also SSD's and are mirrored. > Units are running current version of Openindiana with AVS installed. > Our environment we run comstar fiber channel targets with ISCSI backup. > I have conflicting reports on if active / active is possible but if not > active / passive will do. > > I need someone that has done this before and is familiar with this type of > setup. > $500.00 for the work, to include 1 hour of training on the system, how to > monitor the replication, failover and failback. > > The need to get this done is a rush, so you must be available in the next day > or so. > > Anyone interested please email me direct at j...@xaccel.net > > Thanks > > John Bittner > Xaccel Networks. > > > > > >_ > OpenIndiana-discuss mailing list > OpenIndiana-discuss@openindiana.org > http://openindiana.org/mailman/listinfo/openindiana-discuss _ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Dovecot documentation woes.
Hans I feel your pain. Particularly given there are two very different implementations: dovecot 1.1 and 2.x :( I am running postfix+dovecot on a ubuntu server, and fortunately, the packagers implemented a combo package that pulls in and tweaks everything so it pretty much just works. -Original Message- From: Hans J. Albertsson [mailto:hans.j.alberts...@branneriet.se] Sent: Saturday, July 21, 2012 11:38 AM To: openindiana-discuss@openindiana.org Subject: [OpenIndiana-discuss] Dovecot documentation woes. I got the postfix installation running smoothly, and I went on to dovecot. And problems began: maybe I actually AM stupid, but I hope not. Still, I cannot seem to find a dovecot tutorial that actually A: works B: is in agreement with the current dovecot distro (especially: the dovecot conf file/dir structure is nothing like what is assumed in the HowTos) Also, they seem all generally of low quality, and seem not to have been proofread by anyone and certainly not vetted for use by absolute beginners. Postfix provided a very well executed couple of straightforward simple setups, which enabled me to learn a lot in a short time. Trying to follow the dovecot HowTos often leaves you confused: like at the start it goes on about one specific layout, and in the middle somewhere something subtly changes, so you suddenly have no idea where to put stuff. And you're left wondering: am I supposed to create this, or will it happen by automagical means? (I've come across both..) Does anyone here know of a simple and straightforward instruction on how to set something simple up for dovecot, dovecot SASL and lmtp with postfix. Something that has been proofread. If someone were to publish a book on Dovecot, I'd put my money down immediately. Is there one? On 2012-07-16 14:31, openindiana-discuss-requ...@openindiana.org wrote: > Message: 6 > Date: Mon, 16 Jul 2012 13:58:04 +0200 > From: "Hans J. Albertsson" > To:openindiana-discuss@openindiana.org > Subject: [OpenIndiana-discuss] Help, please! Starting to use postfix > in an OI151_a5 environment, from scratch > Message-ID:<500401cc.3010...@branneriet.se> > Content-Type: text/plain; charset=ISO-8859-1; format=flowed > > I am in the process of starting up a mailserver for about a thousand > mail users. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
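One thing that takes some of the sting out of the 1.x-vs-2.x confusion: dovecot 2.x can dump its effective, non-default configuration with

# doveconf -n

(the rough 1.x equivalent is 'dovecot -n'), which makes it much easier to tell which HowTo's layout a given install actually matches.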
Re: [OpenIndiana-discuss] Usefulness of prefetch?
thanks, saso. i will try that out... most of the I/O is random in nature, and read-heavy, since it is feeding an ESXi datastore on behalf of 6 or so VMs... -Original Message- From: Sašo Kiselkov [mailto:skiselkov...@gmail.com] Sent: Monday, July 09, 2012 3:58 AM To: Discussion list for OpenIndiana Cc: Dan Swartzendruber Subject: Re: [OpenIndiana-discuss] Usefulness of prefetch? On 07/09/2012 07:21 AM, Dan Swartzendruber wrote: > Unless I am misunderstanding the above, we are almost never hitting on > prefetched data, and barely ever on prefetched metadata. Given that, is > there even a reason to leave prefetch on? I mean, it does generate extra > reads, no? My experience with prefetch has been a nuanced one. I frequently use ZFS for media streaming, which means I have dozens or hundreds of parallel readers, each reading linearly. If I have lots of RAM, I leave prefetch on, since it is quite effective at detecting these linear access patterns (even though they're being issued by many applications, essentially resulting in near-random request ordering). However, with less RAM, I see prefetch over-stressing the I/O subsystem, since it frequently prefetches too much data, fills up the prefetch buffers and immediately dumps them (thus amplifying my read load significantly). That being said, I would recommend you try another script, arcstat.pl - that is a bit clearer and it gives you stats immediately, sort of in a vmstat fashion, and post your results here. Cheers, -- Saso ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
[OpenIndiana-discuss] Usefulness of prefetch?
My file/san server has 8GB RAM (not extendable). I've tweaked the arc settings to give all but 256MB to arc. Here is what arc_summary shows after several days to let the cache get hot:

System Memory:
        Physical RAM:  8180 MB
        Free Memory :   347 MB
        LotsFree:       127 MB

ZFS Tunables (/etc/system):

ARC Size:
        Current Size:            2786 MB (arcsize)
        Target Size (Adaptive):  6227 MB (c)
        Min Size (Hard Limit):    894 MB (zfs_arc_min)
        Max Size (Hard Limit):   7156 MB (zfs_arc_max)

ARC Size Breakdown:
        Most Recently Used Cache Size:    38%  2426 MB (p)
        Most Frequently Used Cache Size:  61%  3801 MB (c-p)

ARC Efficency:
        Cache Access Total:      43671308
        Cache Hit Ratio:   61%   26878124   [Defined State for buffer]
        Cache Miss Ratio:  38%   16793184   [Undefined State for Buffer]
        REAL Hit Ratio:    60%   26585766   [MRU/MFU Hits Only]
        Data Demand Efficiency:    68%
        Data Prefetch Efficiency:   1%

CACHE HITS BY CACHE LIST:
        Anon:                        --%   Counter Rolled.
        Most Recently Used:          56%   15311834 (mru)        [ Return Customer ]
        Most Frequently Used:        41%   11273932 (mfu)        [ Frequent Customer ]
        Most Recently Used Ghost:     6%    1848670 (mru_ghost)  [ Return Customer Evicted, Now Back ]
        Most Frequently Used Ghost:  18%    4977067 (mfu_ghost)  [ Frequent Customer Evicted, Now Back ]

CACHE HITS BY DATA TYPE:
        Demand Data:        76%   20624325
        Prefetch Data:       0%      50886
        Demand Metadata:    22%    5917690
        Prefetch Metadata:   1%     285223

CACHE MISSES BY DATA TYPE:
        Demand Data:        56%    9537239
        Prefetch Data:      20%    3503843
        Demand Metadata:    18%    3180817
        Prefetch Metadata:   3%     571285

Unless I am misunderstanding the above, we are almost never hitting on prefetched data, and barely ever on prefetched metadata. Given that, is there even a reason to leave prefetch on? I mean, it does generate extra reads, no? ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
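If the conclusion is "turn it off and measure", the knob - assuming the stock illumos tunable - is zfs_prefetch_disable. Live:

# echo zfs_prefetch_disable/W0t1 | mdb -kw

or persistently via 'set zfs:zfs_prefetch_disable = 1' in /etc/system, then watch whether the demand hit rates above actually change.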
Re: [OpenIndiana-discuss] zpool upgrade in OI151a5?
Ah, that makes a lot of sense. I DL'ed and read the paper. Looks very nice. Thanks! -Original Message- From: Sašo Kiselkov [mailto:skiselkov...@gmail.com] Sent: Thursday, July 05, 2012 2:41 AM To: Discussion list for OpenIndiana Cc: Dan Swartzendruber Subject: Re: [OpenIndiana-discuss] zpool upgrade in OI151a5? On 07/05/2012 04:02 AM, Dan Swartzendruber wrote: > Okay, thanks. Is there a reason the pool version is not in the property > anymore? That's a bit confusing :( The reason is that "version" is a one-dimensional number allowing for only a single controlling authority (Sun) who defines what a version means (in terms of features). So instead, Illumos decided to abandon it (setting it to an arbitrarily high number to avoid conflicts with intervening Oracle ZFS releases which continue to use the "version" property) and instead using the so-called "feature-flags". See http://blog.delphix.com/csiden/files/2012/01/ZFS_Feature_Flags.pdf for a more detailed description. Cheers, -- Saso ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
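On a feature-flags pool the per-feature state shows up as pool properties, so in place of a single version number you can inspect something like:

# zpool get all rpool1 | grep feature@
# zpool upgrade -v

the latter listing what the running software supports (at least on illumos builds from around this time).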
Re: [OpenIndiana-discuss] zpool upgrade in OI151a5?
Okay, thanks. Is there a reason the pool version is not in the property anymore? That's a bit confusing :( -Original Message- From: Richard Lowe [mailto:richl...@richlowe.net] Sent: Wednesday, July 04, 2012 9:55 PM To: Discussion list for OpenIndiana Subject: Re: [OpenIndiana-discuss] zpool upgrade in OI151a5? On Wed, Jul 4, 2012 at 9:49 PM, Dan Swartzendruber wrote: > I downloaded a PDF on the new zfs feature flag stuff. I'm not sure what > 5000 means, but I'm not worried now. I don't think I'll upgrade the data > pool yet though. 5000 is an arbitrarily high number, to provide separation between the two versioning schemes. -- Rich ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] zpool upgrade in OI151a5?
I downloaded a PDF on the new zfs feature flag stuff. I'm not sure what 5000 means, but I'm not worried now. I don't think I'll upgrade the data pool yet though. -Original Message- From: Dan Swartzendruber [mailto:dswa...@druber.com] Sent: Wednesday, July 04, 2012 9:41 PM To: 'Discussion list for OpenIndiana' Subject: Re: [OpenIndiana-discuss] zpool upgrade in OI151a5? Oh never mind, I didn't read the release notes carefully enough. I wonder what pool version 5000 means though :) ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] zpool upgrade in OI151a5?
Oh never mind, I didn't read the release notes carefully enough. I wonder what pool version 5000 means though :) ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
[OpenIndiana-discuss] zpool upgrade in OI151a5?
So I did the pkg image-update to go to 151a5. I grabbed off a copy of the root pool first. Booted and activated and all seemed well. I happened to do 'zpool status' and saw these:

root@openindiana:~# zpool status
  pool: rpool1
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the pool will no longer be accessible on older software versions.
  scan: scrub repaired 0 in 0h1m with 0 errors on Wed Jul 4 17:08:43 2012
config:

        NAME                       STATE   READ WRITE CKSUM
        rpool1                     ONLINE     0     0     0
          c2t50014EE2AEDF73CEd0s0  ONLINE     0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the pool will no longer be accessible on older software versions.
  scan: scrub repaired 0 in 2h17m with 0 errors on Wed Jul 4 14:34:45 2012
config:

        NAME                       STATE   READ WRITE CKSUM
        tank                       ONLINE     0     0     0
          mirror-0                 ONLINE     0     0     0
            c2t5000C50041AB0C47d0  ONLINE     0     0     0
            c2t5000C50041BD3E87d0  ONLINE     0     0     0
          mirror-1                 ONLINE     0     0     0
            c2t50014EE206CED4ECd0  ONLINE     0     0     0
            c2t50014EE25C240034d0  ONLINE     0     0     0
          mirror-2                 ONLINE     0     0     0
            c2t50014EE204411A53d0  ONLINE     0     0     0
            c2t50014EE2AEDF7498d0  ONLINE     0     0     0
        cache
          c2t500A0751033D0AA2d0    ONLINE     0     0     0

errors: No known data errors

I updated rpool1 first, and was dismayed to see the following:

Successfully upgraded 'rpool1' from version 28 to version 5000

Say what? I then did 'zpool get version' and got:

root@openindiana:~# zpool get version
NAME    PROPERTY  VALUE  SOURCE
rpool1  version   -      default
tank    version   28     local

So tank still explicitly lists the pool version as 28, but the root pool is showing '-'? Should I fall back to my backup pool image? Any ideas what happened? ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] OI and chassis/jbod support?
Okay, this may not be the most elegant solution, but... My LSI HBA in IT mode presents all drives, SAS or SATA, with a GUID, so I hacked up the script to remove the prtconf/sas2ircu serial number crap and just go by the guid. Now, I am seeing:

0:01:07  c2t5000C5000D3C80FFd0  ST9500430SS        500.0G  Ready (RDY)  rpool1: stripped
0:02:00  c2t5000C50041AB0C47d0  ST1000NM0001      1000.2G  Ready (RDY)  tank: mirror-0
0:02:03  c2t500A0751033D0AA2d0  M4-CT256M4SSD2     256.1G  Ready (RDY)  tank: cache
0:02:05  c2t50014EE25C240034d0  WDC WD10EALX-009  1000.2G  Ready (RDY)  tank: mirror-1
0:02:06  c2t50014EE206CED4ECd0  WDC WD10EALX-009  1000.2G  Ready (RDY)  tank: mirror-1
0:02:09  c2t50014EE2AEDF7498d0  WDC WD6400AAKS-0   640.1G  Ready (RDY)  tank: mirror-2
0:02:10  c2t50014EE204411A53d0  WDC WD6400AAKS-0   640.1G  Ready (RDY)  tank: mirror-2
0:02:11  c2t5000C50041BD3E87d0  ST1000NM0001      1000.2G  Ready (RDY)  tank: mirror-0

Drives : 8
Total Capacity : 6.0T

which is now 100% correct, as far as I can see. Well, aside from the typo for rpool1 where it says 'stripped', I assume 'striped' was what was intended? ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] OI and chassis/jbod support?
This is a bit annoying. I am replacing two of the 600GB SATA drives (WDC) with 1TB Seagate SAS drives. diskmap.py finds neither of the 1TB SAS drives, nor the 500GB SAS boot drive. After much hair pulling, I discover the problem is that sas2ircu and prtconf do not have the same serial number, and not the '-' in the middle like diskmap.py tries to mangle. example: sas2ircu gives me Z1N1LX05 for one of the drives, whereas prtconf gives Z1N1LX05C2424AUR. This seems to be true for all 3 SAS drives. So the hack I want to add to diskmap.py is to allow the leading string match, but my python skills are 0 - having to learn this as I go :( -Original Message- From: Rich [mailto:rerc...@acm.jhu.edu] Sent: Monday, July 02, 2012 1:45 PM To: Discussion list for OpenIndiana Subject: Re: [OpenIndiana-discuss] OI and chassis/jbod support? diskmap.py is a publicly available script written by someone else whose name escapes me ATM which is useful for this. I also have my own scripts which aren't in any state for anyone else to look at ATM which do similar things. - Rich On Mon, Jul 2, 2012 at 1:42 PM, Lucas Van Tol wrote: > > Some of the sg_ses and sg_vpd tools from the sg3_utils package can sort of give that information if you have an SES compatible backplane/expander. > It stops working when disks die though; so you would want to record any mappings somewhere for later reference. (Or just look for the blank spot in the map later on...) > > I have been working on some scripts; but they are still in a very 'ugly' state. > sg_vpd will give you a 'sas address' for any disk. sg_ses --page=0xa /dev/es/ses??? will list SAS addresses for each slot; along with some other stuff. > Match those up and you have a map of slots / disks. > > I dump the output from the following script for each 'active' disk into a file to make a map. > I've been working on a script to blink LED's as well; but it involves sending binary blobs to sg_ses; and I'm not entirely sure it wouldn't brick another brand of expander... > > ## > #! /bin/bash > ##Usage:: Print SES device; Element # for a given disk. > UDISK=$1 > DISK=$1 > > DISKSAS=$(sg_vpd --page=di /dev/rdsk/$DISK | grep 0x | tail -2|head -1 | awk '{print $1}') > DISKSIZE=$(sg_readcap /dev/rdsk/$DISK | grep size | awk '{print $3}') > echo $DISKSAS | grep 'x' >> /dev/null > if [ $? -eq 0 ] ; then ##Found a SAS address... > SESDEVICE=$(for SESDEV in `ls /dev/es`; do sg_ses --page=0xa /dev/es/$SESDEV | grep -i $DISKSAS >> /dev/null; if [ $? -eq 0 ] ; then echo $SESDEV; fi; done) > echo $SESDEVICE| grep 's' >> /dev/null > if [ $? -eq 0 ] ; then ##Found a ses device... > > SLOT='NUL' > idex=0 > sg_ses --page=0xa /dev/es/$SESDEVICE | while read line > do > if [[ "$line" =~ "element" ]] ; then > idex=$(echo "$line" | awk '{print $3}') > fi > if [[ "$line" =~ "$DISKSAS" ]] ; then > SLOT=$idex > echo "Disk $UDISK $DISKSAS at $SLOT on $SESDEVICE size $DISKSIZE" > fi > done > else > echo "Disk $UDISK $DISKSAS at NUL on NUL size $DISKSIZE ##No SES detected" > fi > else > echo "Disk $UDISK NUL at NUL on NUL size $DISKSIZE ##No SAS detected" > fi > > > > > > > -Lucas Van Tol > >> From: dswa...@druber.com >> To: openindiana-discuss@openindiana.org >> Date: Sun, 1 Jul 2012 19:14:01 -0400 >> Subject: [OpenIndiana-discuss] OI and chassis/jbod support? >> >> >> >> During my brief experiment with S11, I was pleasantly surprised at the >> croinfo app, which shows you which disks are in which enclosure/slot. Is >> there anything like that for OI? If not, any idea what doing this would >> entail? 
>> >> ___ >> OpenIndiana-discuss mailing list >> OpenIndiana-discuss@openindiana.org >> http://openindiana.org/mailman/listinfo/openindiana-discuss > > ___ > OpenIndiana-discuss mailing list > OpenIndiana-discuss@openindiana.org > http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
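Before patching anything, the prefix relationship is easy to sanity-check straight from the two tools (controller index 0 assumed; adjust to taste):

# sas2ircu 0 display | grep -i serial
# prtconf -v | grep -i serial

and the diskmap.py change then amounts to matching "long serial starts with short serial" instead of strict equality.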
Re: [OpenIndiana-discuss] OI and chassis/jbod support?
True enough. Honestly though, I've got a small number of disks, I just don't want to have to be bothered figuring out which drive is where if something is dying :) -Original Message- From: Lucas Van Tol [mailto:catsey...@hotmail.com] Sent: Monday, July 02, 2012 4:30 PM To: openindiana-discuss@openindiana.org Subject: Re: [OpenIndiana-discuss] OI and chassis/jbod support? I think the only reason to use mine would be if you didn't like/couldn't use sas2ircu for some reason (used by diskmap.py). I didn't have much luck turning on locator LED's on failed drives using sas2ircu; which is why I had used sg_ses and sg_vpd instead. -Lucas Van Tol > Date: Mon, 2 Jul 2012 15:29:00 -0400 > From: dswa...@druber.com > To: openindiana-discuss@openindiana.org > Subject: Re: [OpenIndiana-discuss] OI and chassis/jbod support? > > > Lucas, thanks for the sg utils tip - I think I will go with the > diskmap.py Rich suggested - it worked pretty much out of the box, so I > needn't hack on anything else for now... > > ___ > OpenIndiana-discuss mailing list > OpenIndiana-discuss@openindiana.org > http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] OI and chassis/jbod support?
Lucas, thanks for the sg utils tip - I think I will go with the diskmap.py Rich suggested - it worked pretty much out of the box, so I needn't hack on anything else for now... ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] OI and chassis/jbod support?
On 7/2/2012 1:45 PM, Rich wrote: diskmap.py is a publicly available script written by someone else whose name escapes me ATM which is useful for this. Rich, I got diskmap.py downloaded and tweaked to work on my system. Looks good so far. The rackables enclosure has 16 slots in a 4x4, numbering from 0 (top left) across and then down, so bottom right is 15. I got this after discover&disks:

Diskmap - openindiana> disks
0:02:01  c2t50014EE2AEDF73CEd0  WDC WD6400AAKS-0   640.1G  Ready (RDY)  tank: mirror-0
0:02:02  c2t50014EE2AF872299d0  WDC WD6400AAKS-0   640.1G  Ready (RDY)  tank: mirror-0
0:02:05  c2t50014EE25C240034d0  WDC WD10EALX-009  1000.2G  Ready (RDY)  tank: mirror-1
0:02:06  c2t50014EE206CED4ECd0  WDC WD10EALX-009  1000.2G  Ready (RDY)  tank: mirror-1
0:02:09  c2t50014EE2AEDF7498d0  WDC WD6400AAKS-0   640.1G  Ready (RDY)  tank: mirror-2
0:02:10  c2t50014EE204411A53d0  WDC WD6400AAKS-0   640.1G  Ready (RDY)  tank: mirror-2

The 6 tank drives are in the middle of the enclosure, taking up a 2x3 sub-matrix. By the chart I worked up, that is slots 1, 2, 5, 6, 9 and 10. You can see the controller number is 0, the enclosure number is 2 and the disks are 1, 2, 5, 6, 9 and 10 :) I did see this during probe:

Diskmap - openindiana> discover
Warning : Got the serial 9SP00QNTS932ULZ4 from prtconf, but can't find it in disk detected by sas2ircu (disk removed/not on backplane ?)
Warning : Got the disk /dev/rdsk/c2t5000C5000D3C80FFd0 from zpool status, but can't find it in disk detected by sas2ircu (disk removed ?)

Note the drive it is bitching about is the 500GB SAS root pool drive, which is NOT in the enclosure, but plugged into a forward breakout cable on the 2nd port of the LSI controller. The only reason for this is it is a 2.5 inch drive (the tank drives are all 3.5 inch). I got an icy dock 2.5/3.5 adapter but it only takes SATA drives :( So it is slated to take the Crucial M4 256GB SSD to be L2ARC. I would like to get the root drive moved to the enclosure, but it is not a high priority. Thanks! ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
[OpenIndiana-discuss] OI and chassis/jbod support?
During my brief experiment with S11, I was pleasantly surprised at the croinfo app, which shows you which disks are in which enclosure/slot. Is there anything like that for OI? If not, any idea what doing this would entail? ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
[OpenIndiana-discuss] quasi-hang hot-plugging sata drive?
OI 151a4. 3x2 mirrored data pool with single SAS drive for root. Hot-plugged a 1TB sata drive and started backing up to it using zfs send | zfs recv. That was chugging along nicely. I then hot-plugged a 160GB sata 2.5 inch drive in a 2.5/3.5 adapter to make sure it will fit correctly when I get my l2arc SSD. I ran devfsadm -Cv to make sure everything was clean. That command hung. Unplugged the drive and no change. Eventually things came back and I got a prompt, but it seems like any command I type that does disk activity (like zfs list) also hangs. While trying to figure *that* out, my wife comes downstairs and asks me 'what the is going on? my telecommuting session is totally gone!' I had to power-cycle the box to get it back. Even worse, I kept getting errors about 'zfs mount -a' not working because one of the windows shares directories was not empty, so the whole mount process would blow up. I finally had to 'zfs destroy' that share to recover. I kept seeing messages about:

Jun 27 09:16:09 nas devfsadmd[402]: [ID 937045 daemon.error] failed to lookup dev name for /pci@0,0/pci8086,2779@1/pci1000,3040@0/iport@f/disk@w5000cca59bf478a0,0

I did 'devfsadm -Cv' and it purged a bunch of crap and now I am back up. I am guessing it did NOT like the 160GB drive I plugged in? This was NOT a fun way to start the day :( Any thoughts on what went wrong? ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
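For what it's worth, the conservative way to hot-plug on illumos is to go through cfgadm rather than bare insertion - a sketch, assuming the bay hangs off an onboard controller that shows up as sata0:

# cfgadm -al
# cfgadm -c configure sata0/2

(and 'cfgadm -c unconfigure sata0/2' before pulling a drive), which at least gives the framework a chance to fail politely instead of wedging devfsadm.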
Re: [OpenIndiana-discuss] Garrett D'Amore on using SATA drives in a SASsystem
On 6/26/2012 1:15 PM, Richard Elling wrote: On Jun 26, 2012, at 6:29 AM, Dan Swartzendruber wrote: Keep in mind this is almost 2 yrs old, though. I seem to recall a thread here or there that has pinned the SATA toxicity issues to an mpt driver bug or somesuch? Not really. Search for other OSes and their tales of woe. In some cases, a bad SATA drive can make the machine fail at POST, well before an OS is loaded. Best results for SATA is direct-connect: no expanders, no extenders. Next best is SATA with a good-quality SATA/SAS interposer. Known to be a poor mix: both SATA and SAS devices sharing an expander. NB, we have seen changes in HBA, expander, and disk firmware all contribute to happiness or sadness when SATA devices are used in SAS fabrics (STP). At one time, we tried to keep a list of combinations known for happiness, but maintaining that list proved to be impractical, due to the constant churn and unavailability of disk firmware patches from various vendors. Also, do not assume that you will be able to get a firmware upgrade, even if it exists :-( -- richard To clarify: on hardocp a few months back, there was a discussion on this whole issue and there was a cryptic reference to Oracle having supposedly root-caused a software/driver bug that was causing a specific problem. I wish I could remember anything more specific than that... ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Garrett D'Amore on using SATA drives in a SASsystem
Keep in mind this is almost 2 yrs old, though. I seem to recall a thread here or there that has pinned the SATA toxicity issues to an mpt driver bug or somesuch? -Original Message- From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net] Sent: Tuesday, June 26, 2012 9:02 AM To: Discussion list for OpenIndiana Subject: [OpenIndiana-discuss] Garrett D'Amore on using SATA drives in a SASsystem Hi all Seems Garrett D'Amore from Nexenta has a few things to say about using SATA drives in SAS systems http://gdamore.blogspot.no/2010/08/why-sas-sata-is-not-such-great-idea.html The digest is "Just don't do it" Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 98013356 r...@karlsbakk.net http://blogg.karlsbakk.net/ GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt -- In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms of xenotype etymology. In most cases, adequate and relevant synonyms exist in Norwegian. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Diagnosis help needed
On 6/25/2012 3:31 PM, michelle wrote: I did a hard reset and moved the drive to another channel. The fault followed the drive so I'm certain it is the drive, as people have said. The thing that bugs me is that this ZFS fault locked up the OS - and that's a real concern. I think I'm going to need to have a hard think about my options and possibly leave OI for FreeNAS, Nexenta or Schillix. Given that Nexenta has basically the same underlying OS as OI, you may be in for a disappointment if you think this will help you. Can't speak to Schillix. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] AUTO: Eamon Roque ist außer Haus (Rückkehr am 02.07.2012)
On 6/22/2012 11:06 AM, Eamon Roque wrote: I am out of the office until 02.07.2012. From 02.07.2012 I will be available again. Note: This is an automatic reply to your message "OpenIndiana-discuss Digest, Vol 23, Issue 27" sent on 20.06.2012 00:28:38. It is the only notification you will receive while this person is away. Uh, okay, sure. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] mapping target number to disk?
Thanks, Jim! The WWN would be good enough, since all my drives (in the data pool anyway) have the WWN printed on the top :) -Original Message- From: Jim Klimov [mailto:j...@cos.ru] Sent: Monday, June 11, 2012 3:08 PM To: Discussion list for OpenIndiana Cc: Dan Swartzendruber Subject: Re: [OpenIndiana-discuss] mapping target number to disk? 2012-06-11 17:10, Dan Swartzendruber wrote: > I've been seeing a ton of messages like: > > Jun 11 08:59:08 nas Log info 0x31120303 received for target 9. > Jun 11 08:59:08 nas scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc > Jun 11 08:59:08 nas scsi: [ID 365881 kern.info] /pci@0,0/pci8086,27d0@1c/pci1000,3040@0 (mpt_sas10):

Actually, re-reading that original post, I have some more ideas :) Methods 1 and 2 can help you find the OS device name that it is complaining about. Method 3 can help find the serial number of the disk, which should help to physically locate it.

1) Run the "format" command to list the drives. You might see the /pci... device string there, i.e.:

# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
  0. c0t0d0 /pci@0,0/pci1022,7458@1/pci11ab,11ab@1/disk@0,0
  1. c0t1d0 /pci@0,0/pci1022,7458@1/pci11ab,11ab@1/disk@1,0

2) Run a listing of /dev/dsk and grep for the device path, i.e.:

# ls -la /dev/dsk | grep pci@0,0/pci1022,7458@1/pci11ab,11ab@1/disk@1,0
lrwxrwxrwx 1 root root 63 Jul 14 2009 c0t1d0 -> ../../devices/pci@0,0/pci1022,7458@1/pci11ab,11ab@1/disk@1,0:wd
lrwxrwxrwx 1 root root 62 Jul 14 2009 c0t1d0p0 -> ../../devices/pci@0,0/pci1022,7458@1/pci11ab,11ab@1/disk@1,0:q
...
lrwxrwxrwx 1 root root 62 Jul 14 2009 c0t1d0s9 -> ../../devices/pci@0,0/pci1022,7458@1/pci11ab,11ab@1/disk@1,0:j

3) Search the /var/adm/messages* logs for the device path, this may yield the hardware details (like smartctl might). If your log rotation works, this may need to be done soon after boot, or after a pool import. Example:

# cat /var/adm/messages* | ggrep -A12 -B1 pci@0,0/pci1022,7458@1/pci11ab,11ab@1/disk@1,0
May 16 03:11:48 thumper scsi: [ID 583861 kern.info] sd10 at marvell88sx0: target 1 lun 0
May 16 03:11:48 thumper genunix: [ID 936769 kern.info] sd10 is /pci@0,0/pci1022,7458@1/pci11ab,11ab@1/disk@1,0
May 16 03:11:48 thumper genunix: [ID 408114 kern.info] /pci@0,0/pci1022,7458@1/pci11ab,11ab@1/disk@1,0 (sd10) online
May 16 03:11:48 thumper sata: [ID 663010 kern.info] /pci@0,0/pci1022,7458@1/pci11ab,11ab@1 :
May 16 03:11:48 thumper sata: [ID 761595 kern.info] SATA disk device at port 2
May 16 03:11:48 thumper sata: [ID 846691 kern.info] model SEAGATE ST32500NSSUN250G 0743B590GG
May 16 03:11:48 thumper sata: [ID 693010 kern.info] firmware 3AZQ
May 16 03:11:48 thumper sata: [ID 163988 kern.info] serial number 5QE590GG
May 16 03:11:48 thumper sata: [ID 594940 kern.info] supported features:
May 16 03:11:48 thumper sata: [ID 981177 kern.info] 48-bit LBA, DMA, Native Command Queueing, SMART, SMART self-test
May 16 03:11:48 thumper sata: [ID 643337 kern.info] SATA Gen2 signaling speed (3.0Gbps)
May 16 03:11:48 thumper sata: [ID 349649 kern.info] Supported queue depth 32
May 16 03:11:48 thumper sata: [ID 349649 kern.info] capacity = 488390625 sectors
...

HTH, //Jim Klimov ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] mapping target number to disk?
Hmmm, nothing obvious leaps out. Looking at output from the sasinfo command:

sasinfo target-port -v
Target Port SAS Address: 50014ee204411a53
  Type: SATA Device
  HBA Port Name: /dev/cfg/c13
  Expander Device SAS Address: None (Failed to Get Attached Port)
Target Port SAS Address: 50014ee0abcee0a9
  Type: SATA Device
  HBA Port Name: /dev/cfg/c6
  Expander Device SAS Address: None (Failed to Get Attached Port)
Target Port SAS Address: 50014ee206ced4ec
  Type: SATA Device
  HBA Port Name: /dev/cfg/c3
  Expander Device SAS Address: None (Failed to Get Attached Port)
Target Port SAS Address: 5000c5000d3c80fd
  Type: SAS Device
  HBA Port Name: /dev/cfg/c10
  Expander Device SAS Address: None (Failed to Get Attached Port)
Target Port SAS Address: 50014ee2af872299
  Type: SATA Device
  HBA Port Name: /dev/cfg/c9
  Expander Device SAS Address: None (Failed to Get Attached Port)
Target Port SAS Address: 5000cca00b078f2d
  Type: SAS Device
  HBA Port Name: /dev/cfg/c8
  Expander Device SAS Address: None (Failed to Get Attached Port)
Target Port SAS Address: 500a07510324d633
  Type: SATA Device
  HBA Port Name: /dev/cfg/c5
  Expander Device SAS Address: None (Failed to Get Attached Port)
Target Port SAS Address: 50014ee25c240034
  Type: SATA Device
  HBA Port Name: /dev/cfg/c2
  Expander Device SAS Address: None (Failed to Get Attached Port)
Target Port SAS Address: 50014ee2aedf73ce
  Type: SATA Device
  HBA Port Name: /dev/cfg/c1
  Expander Device SAS Address: None (Failed to Get Attached Port)

Is the integer after /dev/cfg/c in these strings the target number? If not, how do I figure this out? ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
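One mapping hint (hedged - this is from poking at a similar box, not chapter and verse): the cN in /dev/cfg is an attachment-point name, not the mpt_sas target number. The entries are symlinks, though, so

# ls -l /dev/cfg

shows the underlying /devices path for each, which can then be matched against the pci@... path and the "target N" in the kernel messages.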
Re: [OpenIndiana-discuss] mapping target number to disk?
Ah, I think I know what happened. It didn't seem to want me to execute it while it was on that cifs shared dataset. No idea why. I copied it to a system directory and it runs fine now. Go figure. Now I need to figure out how to use it to figure out what "Target 9" is... -Original Message- From: Rich [mailto:rerc...@acm.jhu.edu] Sent: Monday, June 11, 2012 10:42 AM To: Discussion list for OpenIndiana Subject: Re: [OpenIndiana-discuss] mapping target number to disk? I added that because I have a folder with the Win32/Linux x86/Solaris x86 binaries all in it; it should be the same file you have. I'm wondering if there's any problems with executing binaries from the FS or user you were? - Rich On Mon, Jun 11, 2012 at 10:40 AM, wrote: > > The program I was trying to run didn't have the SUNOS on the end. > Unfortunately, my VPN to home seems to be down, so I cannot check now. > > > ___ > OpenIndiana-discuss mailing list > OpenIndiana-discuss@openindiana.org > http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] mapping target number to disk?
thanks, i got it. unfortunately, the solaris x86 executable doesn't seem to run under openindiana. Here is the result:

root@nas:/tank/windows/dswartz# file sas2ircu
sas2ircu: ELF 32-bit LSB executable 80386 Version 1, dynamically linked, stripped
root@nas:/tank/windows/dswartz# ./sas2ircu
Killed

Maybe I am dense, but I am not sure I see the point of having kernel messages identifying a specific HW item that is impossible (or at the very least extremely difficult and/or non-intuitive) to chase back to the specific device that is having issues. Ugh, not how I wanted to start Monday :( -Original Message- From: Jim Klimov [mailto:jimkli...@cos.ru] Sent: Monday, June 11, 2012 9:20 AM To: Discussion list for OpenIndiana Cc: Dan Swartzendruber Subject: Re: [OpenIndiana-discuss] mapping target number to disk? 2012-06-11 17:10, Dan Swartzendruber wrote: > > How do I map 'target 9' to a drive? Google was not remotely helpful - the > only thread I found recommended downloading lsiutil (which I can't find > anywhere). I installed sasinfo, which prints a lot of info, but nothing > obviously useful to me. Any help appreciated... There were some references in zfs-discuss list last month, I think the URL should be like this (I did not check myself): = http://www.lsi.com/channel/products/storagecomponents/Pages/LSISAS9211-8i.aspx select * SUPPORT & DOWNLOADS download SAS2IRCU_P13 = HTH, //Jim Klimov ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
[OpenIndiana-discuss] mapping target number to disk?
I've been seeing a ton of messages like:

Jun 11 08:59:08 nas Log info 0x31120303 received for target 9.
Jun 11 08:59:08 nas scsi_status=0x0, ioc_status=0x804b, scsi_state=0xc
Jun 11 08:59:08 nas scsi: [ID 365881 kern.info] /pci@0,0/pci8086,27d0@1c/pci1000,3040@0 (mpt_sas10):

root pool is a single sata drive:

NAME                        STATE   READ WRITE CKSUM
rpool                       ONLINE     0     0     0
  c10t5000C5000D3C80FDd0s0  ONLINE     0     0     0

tank pool is 6 sata drives with a sas and ssd as cache:

NAME                        STATE   READ WRITE CKSUM
tank                        ONLINE     0     0     0
  mirror-0                  ONLINE     0     0     0
    c9t50014EE2AF872299d0   ONLINE     0     0     0
    c1t50014EE2AEDF73CEd0   ONLINE     0     0     0
  mirror-1                  ONLINE     0     0     0
    c3t50014EE206CED4ECd0   ONLINE     0     0     0
    c2t50014EE25C240034d0   ONLINE     0     0     0
  mirror-2                  ONLINE     0     0     0
    c13t50014EE204411A53d0  ONLINE     0     0     0
    c6t50014EE0ABCEE0A9d0   ONLINE     0     0     0
  cache
    c5t500A07510324D633d0   ONLINE     0     0     0
    c8t5000CCA00B078F2Dd0   ONLINE     0     0     0

How do I map 'target 9' to a drive? Google was not remotely helpful - the only thread I found recommended downloading lsiutil (which I can't find anywhere). I installed sasinfo, which prints a lot of info, but nothing obviously useful to me. Any help appreciated... ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] What happens when a ZIL drive dies?
On 6/4/2012 1:15 PM, Richard Elling wrote: On Jun 4, 2012, at 10:06 AM, Dan Swartzendruber wrote: On 6/4/2012 11:56 AM, Richard Elling wrote: On Jun 4, 2012, at 8:24 AM, Nick Hall wrote: For NFS workloads, the ZIL implements the synchronous semantics between the NFS server and client. The best way to get better performance is to have the client run in async mode when possible (Solaris clients do this automatically, and have for a very long time, Linux... not so much). The risk is that the server unexpectedly reboots and the synchronous writes from the client are lost. In that case, the client thinks data is written, but it is not. The server is happy either way... it is the client that is sad. The most annoying client in this respect is ESXi, which insists on doing sync operations. I understand the logic there - unlike, say, an application which can decide to do async operations, ESXi is using NFS as the backing store for virtual disks, so when a client (windows, linux, whatever) does disk writes to virtualized SCSI controller (frex), the guest may be doing writes on behalf of a journalized filesystem which is doing writes in a specific order, possibly even with write barriers. In that case, cheating and forcing the writes to be asynchronous (say by 'sync=disabled') can in fact cause guest filesystem corruption. AIUI, ESXi has its own, native NFS implementation and I've never seen it do async writes. -- richard Right, for the reasons just explained. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] What happens when a ZIL drive dies?
On 6/4/2012 11:56 AM, Richard Elling wrote: On Jun 4, 2012, at 8:24 AM, Nick Hall wrote: For NFS workloads, the ZIL implements the synchronous semantics between the NFS server and client. The best way to get better performance is to have the client run in async mode when possible (Solaris clients do this automatically, and have for a very long time, Linux... not so much). The risk is that the server unexpectedly reboots and the synchronous writes from the client are lost. In that case, the client thinks data is written, but it is not. The server is happy either way... it is the client that is sad. The most annoying client in this respect is ESXi, which insists on doing sync operations. I understand the logic there - unlike, say, an application which can decide to do async operations, ESXi is using NFS as the backing store for virtual disks, so when a client (windows, linux, whatever) does disk writes to virtualized SCSI controller (frex), the guest may be doing writes on behalf of a journalized filesystem which is doing writes in a specific order, possibly even with write barriers. In that case, cheating and forcing the writes to be asynchronous (say by 'sync=disabled') can in fact cause guest filesystem corruption. I can't afford a high quality SSD to reduce latency, so I made an informed decision to disable sync mode. The mitigation for me is that I do zfs snapshots every night of the ESXi datastore, so the worst case is losing a day's work. Given this is for a home/soho setup, and given that the openindiana SAN is on a hefty UPS, I'm willing to take the chance. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
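For completeness, the relevant settings are per-dataset, so the trade-off can be confined to the ESXi datastore - a sketch with a hypothetical dataset name:

# zfs set sync=disabled tank/esxi
# zfs snapshot tank/esxi@nightly-20120604

with the snapshot line driven from cron (remembering to escape % if the date stamp is generated inside the crontab entry).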
Re: [OpenIndiana-discuss] OT postfix v.s Qmail
On 4/24/2012 12:43 PM, Gary Gendel wrote: Dan, I've been using qmail since the end of the 80's Yes, greylisting is a powerful tool. I get that with spamdyke for qmail. Spamdyke and mailfront were the two biggest reasons that I stayed with qmail so long. I saw two greylisting packages for postfix when I was doing my searching. I'm convinced that I want to go to postfix, but it looks like it will be a painful transition. I've got two chains. One from port 25 that uses spamdyke, and one from port 587 that uses mailfront (to give me SSL/TLS and authorization). Since they both converge at the qmail-queue process that handles the delivery portion, it's a nice clean path to delivery. I may try to use postfix to replace one first so I can really thresh out the issues. Gary, have you visited the postfix site? There are howtos for a bunch of these issues. It really shouldn't need to be painful or tricky. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] OT postfix v.s Qmail
I am a long-time postfix user. The single biggest winner is greylisting. As I recall, there are a couple of greylist packages you can plug into postfix and it just works. ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
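As a sketch of how little glue this needs (assuming the postgrey policy daemon at its usual 127.0.0.1:10023), the postfix side is one clause in main.cf:

smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service inet:127.0.0.1:10023

after which a 'postfix reload' picks it up.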
Re: [OpenIndiana-discuss] OMNIOS
Looks interesting. I dl'ed it and set it up in an ESXi VM and am playing with it. Definitely seems snappy. -Original Message- From: Richard Elling [mailto:richard.ell...@richardelling.com] Sent: Monday, April 23, 2012 11:23 AM To: Discussion list for OpenIndiana Subject: Re: [OpenIndiana-discuss] OMNIOS On Apr 23, 2012, at 6:27 AM, paolo marcheschi wrote: > HI > > I see that there is a variant of opensolaris known as Omnios: No, it is an illumos distribution. > > http://omnios.omniti.com/ > > Is that related with Openindiana ?, Are there any advantages with it ? It is designed for the server market, not the desktop market. -- richard -- ZFS Performance and Training richard.ell...@richardelling.com +1-760-896-4422 ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Using OI as a combined storage serverandvirtual server enviroment
I would argue that running a GUI on top of OI so that you can run VMs under virtualbox carries the same complexity (if I'm wrong, where is the extra layer?) Also, I wasn't aware of your HW limitations - AFAIK you never stated them. I don't think either solution is better or more complicated - nor did I say so - just an alternative. -Original Message- From: Mats Taraldsvik [mailto:mats.taralds...@gmail.com] Sent: Monday, April 23, 2012 3:12 AM To: openindiana-discuss@openindiana.org Subject: Re: [OpenIndiana-discuss] Using OI as a combined storage serverandvirtual server enviroment Thanks. Doesn't the ESXi solution introduce another layer of complexity, though? That is, if my proposed solution with OI as the main OS and "Storage server" combined, with the vms on top of OI (and each vm with access to the "storage server"), isn't hard to set up/maintain than the ESXi solution. I should also note that I need to buy new hw. I have a LSI SAS1068E controller (reflashed Intel branded) which does not work with my current mainboard (only a single pcie x16 port which is reserved for a graphics card). Regards, Mats On 04/20/2012 02:51 PM, Dan Swartzendruber wrote: > Depending on your cpu&motherboard, an alternative 'all in one' server is to > run ESXi on it, and virtualize OI. A key requirement for good performance > is that cpu/motherboard support vt-d (pci pass-through.) the only trick is > that you need two disk controllers - one that you pass through, and one to > run a single (small& cheap) disk for the local datastore that OI would live > on. Be aware though that this is not officially supported by VMWare, so you > need to make sure you are using HW that people know works. > > -Original Message- > From: Mike La Spina [mailto:mike.lasp...@laspina.ca] > Sent: Friday, April 20, 2012 8:41 AM > To: Discussion list for OpenIndiana > Subject: Re: [OpenIndiana-discuss] Using OI as a combined storage > serverandvirtual server enviroment > > Hi, > > You can do all of that and much more. Here is one example. > > http://blog.laspina.ca/ubiquitous/provisioning_disaster_recovery_with_zf > s > > Regards, > Mike > > http://blog.laspina.ca/ > > > -Original Message- > From: Mats Taraldsvik [mailto:mats.taralds...@gmail.com] > Sent: Friday, April 20, 2012 5:31 AM > To: openindiana-discuss@openindiana.org > Subject: [OpenIndiana-discuss] Using OI as a combined storage server > andvirtual server enviroment > > Hi, > > I'm currently using EON ZFS as a storage server, but would like to be > able to host a couple of virtual machines as well -- using only a single > server. > > Will I, using OI, be able to share a zpool between the virtual machines > (created with KVM or another OI-supported technology, and stored in the > same or a different zpool), either directly,through nfs or iscsi? > > Regards, > Mats Taraldsvik > > ___ > OpenIndiana-discuss mailing list > OpenIndiana-discuss@openindiana.org > http://openindiana.org/mailman/listinfo/openindiana-discuss > > ___ > OpenIndiana-discuss mailing list > OpenIndiana-discuss@openindiana.org > http://openindiana.org/mailman/listinfo/openindiana-discuss > > > ___ > OpenIndiana-discuss mailing list > OpenIndiana-discuss@openindiana.org > http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Using OI as a combined storage serverandvirtual server enviroment
Depending on your CPU & motherboard, an alternative 'all in one' server is to run ESXi on it and virtualize OI. A key requirement for good performance is that the CPU/motherboard support VT-d (PCI pass-through). The only trick is that you need two disk controllers - one that you pass through, and one to run a single (small & cheap) disk for the local datastore that OI would live on. Be aware though that this is not officially supported by VMware, so you need to make sure you are using HW that people know works.

-Original Message-
From: Mike La Spina [mailto:mike.lasp...@laspina.ca]
Sent: Friday, April 20, 2012 8:41 AM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] Using OI as a combined storage server and virtual server environment

Hi,

You can do all of that and much more. Here is one example.

http://blog.laspina.ca/ubiquitous/provisioning_disaster_recovery_with_zfs

Regards,
Mike

http://blog.laspina.ca/

-Original Message-
From: Mats Taraldsvik [mailto:mats.taralds...@gmail.com]
Sent: Friday, April 20, 2012 5:31 AM
To: openindiana-discuss@openindiana.org
Subject: [OpenIndiana-discuss] Using OI as a combined storage server and virtual server environment

Hi,

I'm currently using EON ZFS as a storage server, but would like to be able to host a couple of virtual machines as well -- using only a single server.

Will I, using OI, be able to share a zpool between the virtual machines (created with KVM or another OI-supported technology, and stored in the same or a different zpool), either directly, through NFS or iSCSI?

Regards,
Mats Taraldsvik

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
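(To make Mats' last question concrete: both sharing paths are short on the OI side. A minimal sketch, assuming a pool named 'tank' and a guest network of 192.168.1.0/24 - the names and addresses are illustrative, not from the thread:)

    # NFS: create a dataset for VM storage and export it
    zfs create tank/vmstore
    zfs set sharenfs=rw=@192.168.1.0/24,root=@192.168.1.0/24 tank/vmstore

    # iSCSI: carve a zvol and expose it as a COMSTAR LUN
    zfs create -V 100G tank/vmdisk1
    stmfadm create-lu /dev/zvol/rdsk/tank/vmdisk1
    # (then create a target with itadm and map the LU with stmfadm add-view)

Either way the data stays on ZFS, so snapshots and clones of guest disks come for free.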
[OpenIndiana-discuss] Changing grub defaults?
So I have a GUI install which I switched to text mode by disabling the gdm service. Unfortunately, it still boots in GUI mode. I see what I want to do, which is to change this:

title openindiana-1
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/openindiana-1
splashimage /boot/splashimage.xpm
foreground FF
background A8A8A8
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=graphics
module$ /platform/i86pc/$ISADIR/boot_archive

I want to delete the foreground, background, and splashimage lines, and also remove ',console=graphics'. I presumably want to do this for all entries. Here is what I am having trouble finding out: where/how does beadm figure out what the defaults should be when a new BE is created? I've googled and done 'man beadm', 'man bootadm', etc. with no joy. Any help appreciated...

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
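(For the hand-edit half of this, a sketch of the cleanup being described, assuming the menu lives at the usual /rpool/boot/grub/menu.lst and that GNU sed is on the PATH - neither is stated in the thread, so adjust to taste:)

    # back up the menu before touching it
    cp /rpool/boot/grub/menu.lst /rpool/boot/grub/menu.lst.bak

    # drop the splashimage/foreground/background lines from every entry
    sed -i -e '/^splashimage /d' -e '/^foreground /d' -e '/^background /d' \
        /rpool/boot/grub/menu.lst

    # strip ,console=graphics from the kernel$ lines
    sed -i 's/,console=graphics//' /rpool/boot/grub/menu.lst

(This fixes existing entries but doesn't answer the real question of where beadm gets its template for new BEs.)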
Re: [OpenIndiana-discuss] messages are not posting to list
On 4/3/2012 10:42 AM, Richard Heil wrote:
> messages are not posting to list from me. let me know

I just got one.

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
[OpenIndiana-discuss] Need a PCI e-sata card for OI151a (SOLVED)
I kept poking around on eBay and came up with a brand-new Intel S3000AH motherboard. I can re-use the RAM and the processors. It has 1 PCI-E x8, 1 PCI-E x8 (x4 electrical), and one PCI-E x4 (x1 electrical), so even if the motherboard SATA ports don't work right for hot-plug, I can put *something* in the (de facto) x4 slot. $50 was a pretty reasonable price.

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] spurious degraded multipath messages?
On 3/28/2012 4:31 PM, Jason Matthews wrote:
> you plugged a consumer sata disk into an enterprise sas controller and it told you your sata disk doesn't have two data ports connected to the controller. i suspect you knew this already :-)

Yes, quite. My only concern was with the alarming choice of DEGRADED :)

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] spurious degraded multipath messages?
Cool, thanks.

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] spurious degraded multipath messages?
On 3/28/2012 2:40 PM, Richard Elling wrote:
> On Mar 28, 2012, at 11:24 AM, Dan Swartzendruber wrote:
>> On 3/28/2012 1:38 PM, Richard Elling wrote:
>>> On Mar 28, 2012, at 9:52 AM, Dan Swartzendruber wrote:
>>>> So I have an M1015 and it works fine. I noticed the other day I hotplugged a Crucial M4 into the last free port on the HBA, and later noticed in the dmesg output:
>>>> Mar 27 17:55:40 nas genunix: [ID 483743 kern.info] /scsi_vhci/disk@g500a07510324d633 (sd9) multipath status: degraded: path 8 mpt_sas9/disk@w500a07510324d633,0 is online
>>>> I don't actually have any multipathing. Is this spurious?
>>> No
>> Let me rephrase. Is it spurious in my case? Your second comment seems to imply it is?
> No, it is not spurious. It is informational.
>>>> Is there something I can tweak to get rid of stuff like this?
>>> Here's a bad idea: configure syslog to ignore kernel info messages. Better idea: consider yourself informed and be happy ;-)
>>> -- richard
>> See above? I tend to choose the "better idea"

Heh. Bad choice of words on my part then. I meant spurious as far as indicating an issue in my configuration. I assume there is no issue, since I do not multipath?

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
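(For completeness, Richard's "bad idea" is a one-line change on a stock OI box - a sketch, assuming the default /etc/syslog.conf still routes kern.debug to /var/adm/messages; check your own file first, and remember the separator must be tabs:)

    # /etc/syslog.conf - raising kern.debug to kern.notice drops kern.info
    *.err;kern.notice;daemon.notice;mail.crit	/var/adm/messages

    # then bounce syslog
    svcadm restart svc:/system/system-log:default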
Re: [OpenIndiana-discuss] spurious degraded multipath messages?
On 3/28/2012 1:38 PM, Richard Elling wrote:
> On Mar 28, 2012, at 9:52 AM, Dan Swartzendruber wrote:
>> So I have an M1015 and it works fine. I noticed the other day I hotplugged a Crucial M4 into the last free port on the HBA, and later noticed in the dmesg output:
>> Mar 27 17:55:40 nas genunix: [ID 483743 kern.info] /scsi_vhci/disk@g500a07510324d633 (sd9) multipath status: degraded: path 8 mpt_sas9/disk@w500a07510324d633,0 is online
>> I don't actually have any multipathing. Is this spurious?
> No

Let me rephrase. Is it spurious in my case? Your second comment seems to imply it is?

>> Is there something I can tweak to get rid of stuff like this?
> Here's a bad idea: configure syslog to ignore kernel info messages. Better idea: consider yourself informed and be happy ;-)
> -- richard

See above?

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
[OpenIndiana-discuss] spurious degraded multipath messages?
So I have an M1015 and it works fine. I noticed the other day I hotplugged a Crucial M4 into the last free port on the HBA, and later noticed in the dmesg output:

Mar 27 17:55:40 nas genunix: [ID 483743 kern.info] /scsi_vhci/disk@g500a07510324d633 (sd9) multipath status: degraded: path 8 mpt_sas9/disk@w500a07510324d633,0 is online

I don't actually have any multipathing. Is this spurious? Is there something I can tweak to get rid of stuff like this? I ask because of the other mpxio thread I've been reading, which reminded me :)

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Need a PCI e-sata card for OI151a
On 3/27/2012 3:06 PM, Jason Matthews wrote:
> Sorry I wasn't more helpful. btw, "log device" implies ZIL, as in intent log.

I'm aware of that :(

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Need a PCI e-sata card for OI151a
Already flashed to IT firmware and working fine. Unfortunately, I am on a budget constraint (SOHO setup), and the mobo+cpu+8GB was already sitting around handy. My plan for the M4 was L2ARC, not ZIL. I think I'm going to get the cheap Rosewill and use that for the e-sata drive - as I said, this is only running one consumer-quality SATA drive occasionally, so if it's not great, I can live with that. I may keep trolling eBay looking for a used 16-port card like the one you referenced. Thanks!

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
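(Since the cache-vs-log mixup keeps surfacing in this thread: at the command line the difference is just the vdev type the device is added as. An illustrative sketch, with a made-up device name:)

    # use the M4 as L2ARC (read cache)
    zpool add tank cache c4t5d0

    # versus using it as a separate intent log (ZIL/slog)
    zpool add tank log c4t5d0

    # watch how either device is being used
    zpool iostat -v tank 5

L2ARC helps read-heavy working sets that spill out of RAM; a slog only helps synchronous writes - which is why the distinction matters for an SSD like the M4.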
Re: [OpenIndiana-discuss] Need a PCI e-sata card for OI151a
On 3/27/2012 2:02 PM, Jason Matthews wrote:
> e-SATA shares the same electrical specification as SATA. The primary difference is in the cable shielding specification. I would consider using an LSI card and an adapter to go from the non/lightly shielded internal SATA cable to the heavily shielded e-SATA cable. I am not aware of anyone who makes one with a repeater to maintain super high signal quality, but it should work w/o such a device (Supermicro does it all the time ;-], whereas Intel doesn't). If you get an 8-port card you could dump your onboard ports altogether and get the hot-plug you deserve.
> For 1068E-based products you'll want the SAS3081-R -- while it comes with the RAID firmware, it is designed for internal ports. The plain SAS3801 has two four-port external plugs, which you don't want. You should be able to flash the SAS3081-R with the IT firmware or simply not configure RAID on it, in which case the drives are exposed as JBODs by default.

Jason, I appreciate the input, but as I said, I'm already slot-limited. My mobo only has one PCIe slot, and that is occupied by the 8-port M1015 HBA :(

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Need a PCI e-sata card for OI151a
On 3/27/2012 12:51 PM, Russell Hansen wrote:
> A quick glance at the motherboard manual for what you have indicates the probability that your on-board SATA ports aren't using AHCI. Likely some compatibility mode meant for those poor souls that may have needed Win9x when that BIOS was basically coded.
> http://www.supermicro.com/manuals/motherboard/3000/MNL-0889.pdf Page 4-4
> In the BIOS, check to see if the SATA Controller Mode is set to Compatible. You will need to change it to Enhanced. (*Note: The Enhanced mode is supported by the Windows 2000 OS or a later version. :-p ) From there the SATA RAID mode should default to Disabled. The SATA AHCI mode you will want to change to Enabled. AHCI mode should give you hot-swap capabilities as well as let you use your SSD with some sanity.

Must be a BIOS bug then. I am pretty sure (will confirm tonite) that the SATA controller is set to Enhanced. My other mobo, the X9SCL-F that runs ESXi, has TWO settings for each port: one for whether to enable AHCI and one for whether to enable hot-plug. I kid you not. If I can't figure this out, I may just get the $14 Rosewill SiI card - it's only for occasional backup to a WD Blue consumer SATA drive.

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Need a PCI e-sata card for OI151a
Gary, thanks. I guess I will keep looking :(

-Original Message-
From: Gary Gendel [mailto:g...@genashor.com]
Sent: Tuesday, March 27, 2012 9:36 AM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] Need a PCI e-sata card for OI151a

Dan,

I can't give you specifics, but I went through a lot of pain early on until I found cards from LSI that worked really well. At the time, the best supported chipset was from Marvell. The Silicon Image chipsets worked unreliably and needed firmware reflashing to turn off RAID support. I contributed to a few fixes in the Silicon Image drivers, but the code was not completely finished the last time I looked. I would stay away from sata multiplexers.

Since then, I believe that the Intel chipset has gotten the most attention but I have never tried them.

Gary

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
[OpenIndiana-discuss] Need a PCI e-sata card for OI151a
Here's my situation: M1015 with 6 SATA drives for pool tank. The 7th port has a 15K 73GB SAS drive as a cache device. The 8th port is currently connected to the e-sata connector on the front panel of the case for monthly backups. A 160GB SATA drive sits on one of the 4 motherboard SATA ports (Supermicro PDSMi+).

I have a 64GB Crucial M4 I want to use as a log device, but plugging it into the motherboard seems to only yield SATA1 speed (in addition to the fact that OI apparently refuses to even go to the grub menu and just hangs - sigh...). Even when I didn't have that issue (e.g. just switched from Nexenta back to OI), I discovered the motherboard ports apparently do NOT support hot-plug, so swapping the M4 and the e-sata connector is a no-go (unless I want to have to boot with the e-sata drive plugged in and turned on - LOL).

My motherboard only has one PCIe slot (x8), which is where the M1015 resides. So the plan is to get a PCI card with an e-sata connector. I've found a couple of cheap Rosewill cards on newegg that indicate the SiI3512 chipset, but I'm having trouble finding out if that is supported or not. Any help appreciated!

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] Fwd: poor zfs compression ratio
Not to nitpick, but dedup isn't really compression in one significant respect: e.g. you can have 3 copies of the same data chunk and it is only stored as one (effectively a compression ratio of 3:1), even if the data in question is incompressible (due to already being compressed).

-Original Message-
From: Edward Ned Harvey [mailto:openindi...@nedharvey.com]
Sent: Wednesday, November 02, 2011 9:36 AM
To: openindiana-discuss@openindiana.org
Subject: Re: [OpenIndiana-discuss] Fwd: poor zfs compression ratio

> From: Krishna PMV [mailto:krishna@gmail.com]
> Resending as my previous email didn't get through the list. Can someone please advise how we can improve compression ratio here? Thanks!

Nothing you can do, except store the mail in compressed files like xz or whatever. All general-purpose compression algorithms rely on one basic principle: reducing or remapping repeated data patterns. If you're getting weak compression ratios, it means your data is already compressed, or not massively repeated. Either way, you're already doing the best you can do.

Note: Dedup is in fact a compression algorithm, but it's being applied pool-wide instead of at the block level, so it deserves a different name. We call it dedup instead of compression.

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
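(ZFS reports the two effects separately, which makes the nitpick easy to check on a live system - a quick sketch, with 'tank' and 'tank/mail' standing in for the real pool and dataset names:)

    # what the compression= property achieved, per dataset
    zfs get compressratio tank/mail

    # what dedup achieved, pool-wide
    zpool get dedupratio tank

A dedupratio well above 1.00x on data whose compressratio is stuck near 1.00x is exactly the "already compressed but repeated" case described above.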
Re: [OpenIndiana-discuss] cleaning out stale entries in zpool cache
5 of the 6. The reason it's only 5 is that I originally had a 5-disk raidz pool, and later decided to add a disk and go with a 3x2 mirror layout...

-Original Message-
From: Matt Connolly [mailto:matt.connolly...@gmail.com]
Sent: Tuesday, October 25, 2011 8:34 AM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] cleaning out stale entries in zpool cache

On 25/10/2011, at 9:58 AM, Dan Swartzendruber wrote:
> Sweet, that did it! The last two disks are now resilvering, but the stale raid-z1 pool is now gone. Many thanks George and Jesus!

Great to hear it's sorted. Just wondering if you can share with the list how much of the disks were clobbered with dd, in case someone else comes up against the issue in the future.

Cheers, Matt

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] cleaning out stale entries in zpool cache
Sweet, that did it! The last two disks are now resilvering, but the stale raid-z1 pool is now gone. Many thanks George and Jesus! ___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] cleaning out stale entries in zpool cache
Looking good! I have to do this in two phases. First 3 of the disks on one side of the mirrors, wait (forever LOL) to resilver, then do the other 2 disks on the other side. Here is the 'zpool import' after phase 1:

    tank                       UNAVAIL  insufficient replicas
      raidz1-0                 UNAVAIL  insufficient replicas
        /dev/label/disk1       UNAVAIL  cannot open
        c0t50014EE0ABCEE0A9d0  ONLINE
        c0t50014EE0AC01D8EDd0  ONLINE
        /dev/label/disk4       UNAVAIL  cannot open
        /dev/label/disk5       UNAVAIL  cannot open

Thanks for the help!

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
Re: [OpenIndiana-discuss] cleaning out stale entries in zpool cache
George Wilson wrote:
> Dan,
> Actually you'll need to 'dd' the end of the disk, since it's labels 2 and 3 that are still visible to the system. I would start by dd-ing the last megabyte or so of the p0 device.

That makes sense. Thanks...

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
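(A sketch of what "dd the last megabyte of p0" might look like - the device name cXtYd0 is a placeholder, the sector-count parsing assumes an EFI label whose prtvtoc output includes an "accessible sectors" line, and you really want to triple-check the device before zeroing anything:)

    DISK=/dev/rdsk/cXtYd0p0   # hypothetical; substitute the real whole-disk device

    # size of the disk in 512-byte sectors
    SECTORS=$(prtvtoc "$DISK" | awk '/accessible sectors/ {print $2}')

    # zero the last megabyte, which covers ZFS labels 2 and 3
    dd if=/dev/zero of="$DISK" bs=512 seek=$((SECTORS - 2048)) count=2048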
Re: [OpenIndiana-discuss] cleaning out stale entries in zpool cache
George Wilson wrote:
> Since these disks are mirrors in your current pool, you could detach them and then try dd-ing over the first megabyte of the disk. This should blow away the partition table. I would verify that you can no longer see the disk by using the 'zdb' command below. Once you're satisfied, you can re-attach it to your mirror.

Yeah, thanks, I was wondering about that myself. I want to do a backup first to my esata 1TB drive, to be safe. Thanks for the help... I'll post the results...

___ OpenIndiana-discuss mailing list OpenIndiana-discuss@openindiana.org http://openindiana.org/mailman/listinfo/openindiana-discuss
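(Pulling George's suggestion together as a sequence - a sketch only, with placeholder pool and device names; the zdb invocation is a guess at the elided 'zdb' command, which didn't survive the digest:)

    # 1. pull one half of a mirror out of the current pool
    zpool detach tank cXtYd0

    # 2. clobber ZFS labels 0 and 1 plus the partition table at the front
    dd if=/dev/zero of=/dev/rdsk/cXtYd0p0 bs=1024k count=1

    # 3. confirm no ZFS labels remain visible on the device
    zdb -l /dev/rdsk/cXtYd0s0

    # 4. put it back as a mirror of the surviving half
    zpool attach tank cWtZd0 cXtYd0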