Re: [OpenIndiana-discuss] zpool import issues continued - block alignment

2019-05-19 Thread Stephan Budach
Hi Rainer,

----- Original Message -----
> From: "Rainer Heilke" 
> To: openindiana-discuss@openindiana.org
> Sent: Saturday, 18 May 2019 23:34:24
> Subject: Re: [OpenIndiana-discuss] zpool import issues continued - block 
> alignment
> 
> Thank you.
> 
> On 5/18/2019 2:26 PM, Michal Nowak wrote:
> > 
> > though I didn't look in to it in detail and the warning might be
> > just a
> > symptom and not the actual problem..., perhaps this older thread is
> > relevant to your problem:
> > 
> > https://openindiana.org/pipermail/openindiana-discuss/2013-March/012373.html
> 
> I'll look into this when I get back from my meeting.
> 
> Rainer
>

does this error concern only single disks, or all disks in that pool? Also, 
what does zpool import say without actually trying to import the zpool by name? 
Usually, if a zpool is not ready for importing, zpool will list every device 
of that pool with its current status (missing, damaged…).
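
As an illustration (a schematic sketch, not output from the system in this thread), a bare `zpool import` scans attached devices and lists importable pools with per-device state, without importing anything:

```
# Run with no pool name to list importable pools and their device status:
zpool import
#   pool: tank          <- pool name (placeholder)
#  state: DEGRADED
# status: One or more devices are missing from the system.
# config:
#         tank        DEGRADED
#           mirror-0  DEGRADED
#             c1t0d0  ONLINE
#             c1t1d0  UNAVAIL  cannot open
```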

Cheers,
Stephan
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] ZFS hangs - causes host to panic

2018-02-07 Thread Stephan Budach


- Ursprüngliche Mail -
> Von: "Stephan Budach" <stephan.bud...@jvm.de>
> An: "Discussion list for OpenIndiana" <openindiana-discuss@openindiana.org>
> Gesendet: Dienstag, 16. Januar 2018 14:15:37
> Betreff: [OpenIndiana-discuss] ZFS hangs - causes host to panic
> 
> 
> Hi,
> 
> 
> I am currently putting my new NVMe servers through their paces and I
> have already experienced two panics on one of those hosts.
> After taking "forever" writing the crash dump, I found this in the
> syslog after reboot:
> 
> 
> 
> Jan 16 13:25:29 nfsvmpool09 savecore: [ID 570001 auth.error] reboot
> after panic: I/O to pool 'nvmeTank02' appears to be hung.
> Jan 16 13:25:29 nfsvmpool09 savecore: [ID 771660 auth.error] Panic
> crashdump pending on dump device but dumpadm -n in effect; run
> savecore(1M) manually to extract. Image UUID
> 995846d5-8c94-4f68-bada-e05ae5e4cb25(fault-management initiated).
> 
> 
> I ran mdb against the crash dump, but I am still a dummy at reading
> this output:
> 
> 
> 
> root@nfsvmpool09:/var/crash/nfsvmpool09# mdb unix.0 vmcore.0
> Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc
> apix scsi_vhci zfs sata sd ip hook neti sockfs arp usba fctl stmf
> stmf_sbd mm lofs i40e idm cpc crypto fcip fcp random ufs logindmux
> nsmb ptm smbsrv nfs sppp ipc ]
> > $C
> d000f5dd79d0 vpanic()
> d000f5dd7a20 vdev_deadman+0x10b(d0320fb69980)
> d000f5dd7a70 vdev_deadman+0x4a(d0333b018940)
> d000f5dd7ac0 vdev_deadman+0x4a(d03228f796c0)
> d000f5dd7af0 spa_deadman+0xad(d03229543000)
> d000f5dd7b90 cyclic_softint+0xfd(d031eac4db00, 0)
> d000f5dd7ba0 cbe_low_level+0x14()
> d000f5dd7bf0 av_dispatch_softvect+0x78(2)
> d000f5dd7c20 apix_dispatch_softint+0x35(0, 0)
> d000f5da1990 switch_sp_and_call+0x13()
> d000f5da19e0 apix_do_softint+0x6c(d000f5da1a50)
> d000f5da1a40 apix_do_interrupt+0x362(d000f5da1a50, 2)
> d000f5da1a50 _interrupt+0xba()
> d000f5da1bc0 acpi_cpu_cstate+0x11b(d031e98a43e0)
> d000f5da1bf0 cpu_acpi_idle+0x8d()
> d000f5da1c00 cpu_idle_adaptive+0x13()
> d000f5da1c20 idle+0xa7()
> d000f5da1c30 thread_start+8()
> > 
> 
> 
> Can anybody make something useful of that?
> 
> 
> Thanks,
> Stephan


I have been trying to hunt this down further, as it only seems to affect some 
NVMe SSDs, and consequently the error moves along with wherever I put those 
NVMe SSDs. What seems to happen is that, at some random point, writes to the 
NVMe SSDs stop completing, until finally the ZFS deadman timer kicks in and 
panics the host.
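
Not advice from the thread, just a hedged sketch: on illumos the deadman behavior is governed by kernel tunables that can be inspected with mdb. The names below are illumos tunables and may differ by release; treat them as assumptions to verify on your build.

```
# Inspect the deadman I/O timeout (value in milliseconds):
echo zfs_deadman_synctime_ms/D | mdb -k

# Temporarily disable deadman-induced panics while debugging a hung
# device (revert afterwards, or genuine hangs will go unnoticed):
echo zfs_deadman_enabled/W0 | mdb -kw
```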

What I was able to gather is that at that point the SSD becomes 100% busy with 
no actual transfer between the device and the host. iostat -xenM will show 
something like this:

                 extended device statistics                   ---- errors ----
   r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
   0,0    0,0    0,0    0,0  0,0  1,0    0,0    0,0   0 100   0  27   0  27 c21t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0   3   0   3 c14t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0   2   0   2 c29t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0   3   0   3 c6t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0   3   0   3 c15t1d0
   0,0    0,0    0,0    0,0  0,0  1,0    0,0    0,0   0 100   0   2   0   2 c13t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0  28   0  28 c23t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0   2   0   2 c16t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0   2   0   2 c24t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0  27   0  27 c19t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0  27   0  27 c22t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0   3   0   3 c12t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0   2   0   2 c17t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0   3   0   3 c7t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0  27   0  27 c20t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0   3   0   3 c10t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0   3   0   3 c26t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0   2   0   2 c8t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0   3   0   3 c25t1d0
   0,0    0,0    0,0    0,0  0,0  0,0    0,0    0,0   0   0   0   2   0   2 c27t1d0
1844,2    0,0   14,4    0,0  0,0  0,4    0,0    0,2   0  39   0  27   0  27 c18t1d0
[output truncated]

[OpenIndiana-discuss] How to rescan for re-inserted NVMe devices?

2018-01-31 Thread Stephan Budach
Hi all, 


I am trying to remove/insert NVMe devices using nvmeadm. nvmeadm detach 
nvmeX/Y works, but upon removing the actual SSDs, the PCI devices get retired 
and also removed. After re-inserting a different SSD, it seems that the PCI 
devices need to be rescanned somehow… 


What is the command to have the kernel scan and detect new PCI devices? 


Thanks, 
Stephan 







[OpenIndiana-discuss] ZFS hangs - causes host to panic

2018-01-16 Thread Stephan Budach

Hi, 


I am currently putting my new NVMe servers through their paces and I have 
already experienced two panics on one of those hosts. 
After taking "forever" to write the crash dump, I found this in the syslog after 
reboot: 



Jan 16 13:25:29 nfsvmpool09 savecore: [ID 570001 auth.error] reboot after 
panic: I/O to pool 'nvmeTank02' appears to be hung. 
Jan 16 13:25:29 nfsvmpool09 savecore: [ID 771660 auth.error] Panic crashdump 
pending on dump device but dumpadm -n in effect; run savecore(1M) manually to 
extract. Image UUID 995846d5-8c94-4f68-bada-e05ae5e4cb25(fault-management 
initiated). 


I ran mdb against the crash dump, but I am still a dummy at reading this 
output: 



root@nfsvmpool09:/var/crash/nfsvmpool09# mdb unix.0 vmcore.0 
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc apix 
scsi_vhci zfs sata sd ip hook neti sockfs arp usba fctl stmf stmf_sbd mm lofs 
i40e idm cpc crypto fcip fcp random ufs logindmux nsmb ptm smbsrv nfs sppp ipc 
] 
> $C 
d000f5dd79d0 vpanic() 
d000f5dd7a20 vdev_deadman+0x10b(d0320fb69980) 
d000f5dd7a70 vdev_deadman+0x4a(d0333b018940) 
d000f5dd7ac0 vdev_deadman+0x4a(d03228f796c0) 
d000f5dd7af0 spa_deadman+0xad(d03229543000) 
d000f5dd7b90 cyclic_softint+0xfd(d031eac4db00, 0) 
d000f5dd7ba0 cbe_low_level+0x14() 
d000f5dd7bf0 av_dispatch_softvect+0x78(2) 
d000f5dd7c20 apix_dispatch_softint+0x35(0, 0) 
d000f5da1990 switch_sp_and_call+0x13() 
d000f5da19e0 apix_do_softint+0x6c(d000f5da1a50) 
d000f5da1a40 apix_do_interrupt+0x362(d000f5da1a50, 2) 
d000f5da1a50 _interrupt+0xba() 
d000f5da1bc0 acpi_cpu_cstate+0x11b(d031e98a43e0) 
d000f5da1bf0 cpu_acpi_idle+0x8d() 
d000f5da1c00 cpu_idle_adaptive+0x13() 
d000f5da1c20 idle+0xa7() 
d000f5da1c30 thread_start+8() 
> 


Can anybody make something useful of that? 


Thanks, 
Stephan 


Re: [OpenIndiana-discuss] OI_Hipster doesn't register, AOC-STG-I4S/BCM57840S 10GbE SFP adaptor

2018-01-14 Thread Stephan Budach
Hi Russel,


----- Original Message -----
> From: "russell" <russ...@willows7.myzen.co.uk>
> To: "stephan budach" <stephan.bud...@jvm.de>
> Sent: Saturday, 13 January 2018 22:55:14
> Subject: [OpenIndiana-discuss] OI_Hipster doesn't register, 
> AOC-STG-I4S/BCM57840S 10GbE SFP adaptor
> 
> Hi Stephan,
> 
> On your system run "prtconf -pv" and search though the listing to
> find
> the Broadcom 57840S information that will provide you with the Node
> information.
> 
>      Node 0x2b
>      assigned-addresses:
> 82850010..bffe..0002.82850014..bffc..0002.81850018..ec00..0020
>      reg:
> 0085.....02850010....0002.02850014....0002.01850018....0020
>      compatible: 'pciex8086,105e.8086.115e.6' +
> 'pciex8086,105e.8086.115e' + 'pciex8086,105e.6' + 'pciex8086,105e' +
> 'pciexclass,02' + 'pciexclass,0200' + 'pci8086,105e.8086.115e.6'
> +
> 'pci8086,105e.8086.115e' + 'pci8086,115e' + 'pci8086,105e.6' +
> 'pci8086,105e' + 'pciclass,02' + 'pciclass,0200'
>      model:  'Ethernet controller'
>      power-consumption:  0001.0001
>      devsel-speed:  
>      interrupts:  0001
>      subsystem-vendor-id:  8086
>      subsystem-id:  115e
>      unit-address:  '0'
>      class-code:  0002
>      revision-id:  0006
>      vendor-id:  8086
>      device-id:  105e
>      name:  'pci8086,115e'
> 
> When you find the device, make a note of the name. In the example
> above
> the Intel Ethernet adapter it is pci8086,115e
> 
> You then need to edit the /etc/driver_aliases, search through until
> you
> find the entries for the Broadcom driver bnxe
> 
> bnxe "pci14e4,164e"
> bnxe "pci14e4,164f"
> bnxe "pci14e4,1650"
> bnxe "pciex14e4,164e"
> bnxe "pciex14e4,164f"
> bnxe "pciex14e4,1650"
> bnxe "pciex14e4,16a1"
> bnxe "pciex14e4,16a5"
> bnxe "pciex14e4,16a4"
> bnxe "pciex14e4,168a"
> bnxe "pciex14e4,168d"
> bnxe "pciex14e4,168e"
> bnxe "pciex14e4,16ab"
> bnxe "pciex14e4,16ae"
> bnxe "pciex14e4,1662"
> bnxe "pciex14e4,1663"
> 
> Duplicate the last entry, and replace the vendor-id,device-id
> information you found when performing the prtconf -pv
> 
> This will only work if the current Broadcom driver is compatible with
> your chipset but has not been updated to reflect support.
> If you find it works, create a support ticket to get the change
> included
> in OpenIndiana/IllumOS.
> 
> Hope that helps
> 
> Russell
> 
> 


thanks - I knew I missed something when I tried to query the installed PCI 
devices.
First off, I'll have to correct myself. The NIC installed is actually an Intel 
X710, as prtconf -pv has revealed:

Node 0xb2
acpi-namespace:  '\_SB_.PCI1.QR1A.D084'
assigned-addresses:  
c3820010..f980..0080.c382001c..fa018000..8000
reg:  
0082.....43820010....0080.4382001c....8000
compatible: 'pciex8086,1572.15d9.87e.2' + 
'pciex8086,1572.15d9.87e' + 'pciex8086,1572.2' + 'pciex8086,1572' + 
'pciexclass,02' + 'pciexclass,0200' + 'pci8086,1572.15d9.87e.2' + 
'pci8086,1572.15d9.87e' + 'pci15d9,87e' + 'pci8086,1572.2' + 'pci8086,1572' + 
'pciclass,02' + 'pciclass,0200'
model:  'Ethernet controller'
power-consumption:  0001.0001
devsel-speed:  
interrupts:  0001
subsystem-vendor-id:  15d9
subsystem-id:  087e
unit-address:  '0'
class-code:  0002
revision-id:  0002
vendor-id:  8086
device-id:  1572
name:  'pci15d9,87e'

8086:1572 resolves to an Intel X710, which is actually the card I ordered. This 
card needs the i40e driver, which hadn't been installed, and I wonder why that 
was. After installing the i40e driver manually, the card got initialized.
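
For reference, a hedged sketch of the checks involved; the package name is an assumption about the OI hipster repository, so verify it with pkg search first:

```
# Show driver bindings and confirm the device is attached to i40e:
prtconf -D | grep -i i40e

# Install the Intel X710 driver if it is missing
# (package name assumed; check with: pkg search -r i40e):
pkg install driver/network/i40e
```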


Thank you,
Stephan




[OpenIndiana-discuss] OI_Hipster doesn't register AOC-STG-I4S/BCM57840S 10GbE SFP adaptor

2018-01-12 Thread Stephan Budach
Hi, 


today, I installed OI_Hipster on my brand-new Supermicro 2028R-NR48N NVMe 
server, which also contains this network card: AOC-STG-I4S/BCM57840S. 
While I got the NVMe drives to register correctly (24 x Intel DC P4500 2TB), I 
can't get the 10GbE adaptor working; it simply doesn't show up on the PCI bus 
when running lspci, although I do see it flash by in the BIOS when the system 
boots up. 


I have reset the Supermicro's BIOS to optimal defaults, but that didn't help 
either. Does anyone have experience with this adaptor and can suggest what else 
to try? 



Thanks, 
Stephan 


Re: [OpenIndiana-discuss] smartmon for OI/Hipster

2017-10-05 Thread Stephan Budach
Gee… smartmontools are in the SFE repo…, but I didn't spot them.

Sorry for the noise, installing now…

Thanks,
stephan

----- Original Message -----
> From: "Stephan Budach" <stephan.bud...@jvm.de>
> To: "Discussion list for OpenIndiana" <openindiana-discuss@openindiana.org>
> Sent: Thursday, 5 October 2017 10:16:00
> Subject: [OpenIndiana-discuss] smartmon for OI/Hipster
> 
> Hi,
> 
> 
> I am trying to install the smartmontools on OI, but I cannot find any
> pkg to do so. Can anybody point me in the right direction as to where
> to get the package?
> 
> 
> Thanks,
> stephan
> 
> 




[OpenIndiana-discuss] smartmon for OI/Hipster

2017-10-05 Thread Stephan Budach
Hi, 


I am trying to install the smartmontools on OI, but I cannot find any pkg to do 
so. Can anybody point me in the right direction as to where to get the package? 


Thanks, 
stephan 




Re: [OpenIndiana-discuss] Trying to run Java 1.7.0.11 on OpenIndiana Hipster 2017.04

2017-09-27 Thread Stephan Budach

----- Original Message -----
> From: "Peter Tribble" <peter.trib...@gmail.com>
> To: "Discussion list for OpenIndiana" <openindiana-discuss@openindiana.org>
> Sent: Wednesday, 27 September 2017 09:57:22
> Subject: Re: [OpenIndiana-discuss] Trying to run Java 1.7.0.11 on OpenIndiana 
> Hipster 2017.04
> 
> On Wed, Sep 27, 2017 at 8:10 AM, Stephan Budach
> <stephan.bud...@jvm.de>
> wrote:
> 
> > Hi,
> >
> >
> > I am trying to replace omniOS on an RSF-1 node with OI, and I need to
> > run some old Java agent on it which requires Java 1.7.0.11, so I
> > copied over the Java folder from the previous installation and
> > tweaked the symlinks accordingly. But when running java -version,
> > I am getting this error:
> >
> >
> >
> > root@zfsha02gh79:/usr/bin# java -version
> > Error: dl failure on line 864
> > Error: failed /usr/java_1.7.0_11/jre/lib/i386/server/libjvm.so,
> > because
> > ld.so.1: java: fatal: libCrun.so.1: open failed: No such file or
> > directory
> >
> >
> > Can I get this version of Java running on OI?
> > Thanks,
> > Stephan
> >
> 
> Should work, but you'll need the old C++ runtime installed.
> 
> pkg install system/library/c++/sunpro
> 
> (as found by, eg, 'pkg search libCrun.so.1')
> 
> --
> -Peter Tribble
> http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/

Thanks Peter and Predrag,

I successfully installed openjdk 1.7.80 with runtime64 and the agent seems to 
run fine with it. I will have to verify that, and if it doesn't work properly, 
I'll take Peter's advice on installing the old C++ runtime and give it another 
shot using jre-1.7.0.11.
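
A hedged sketch of how one can track down such missing-runtime errors before deciding which package to install; the library path is taken from the error message earlier in the thread:

```
# List the shared objects libjvm.so depends on and flag unresolved ones:
ldd /usr/java_1.7.0_11/jre/lib/i386/server/libjvm.so | grep 'not found'

# Ask the package repository which package delivers the missing library
# (this is how Peter found system/library/c++/sunpro):
pkg search -r libCrun.so.1
```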

Thanks a lot,
Stephan


[OpenIndiana-discuss] Trying to run Java 1.7.0.11 on OpenIndiana Hipster 2017.04

2017-09-27 Thread Stephan Budach
Hi, 


I am trying to replace omniOS on an RSF-1 node with OI, and I need to run some 
old Java agent on it which requires Java 1.7.0.11, so I copied over the Java 
folder from the previous installation and tweaked the symlinks accordingly. But 
when running java -version, I get this error: 



root@zfsha02gh79:/usr/bin# java -version 
Error: dl failure on line 864 
Error: failed /usr/java_1.7.0_11/jre/lib/i386/server/libjvm.so, because 
ld.so.1: java: fatal: libCrun.so.1: open failed: No such file or directory 


Can I get this version of Java running on OI? 
Thanks, 
Stephan 




Re: [OpenIndiana-discuss] 12TB SATA HGST Support

2017-07-07 Thread Stephan Budach
Hi,

----- Original Message -----
> From: "Nikola M" 
> To: "Handojo via openindiana-discuss" 
> Sent: Friday, 7 July 2017 10:41:10
> Subject: Re: [OpenIndiana-discuss] 12TB SATA HGST Support
> 
> On 07/06/17 07:13 PM, Handojo via openindiana-discuss wrote:
> > Dear OI Users,
> > Anyone of you try to attach 12TB SATA HGST Drives into OI ? Does it
> > work flawlessly ?
> 
> Isn't that like a question for a hardware manufacturer? ;)
> (Yeah I know - if anyone used it on OI/illumos to say something
> etc..)
> 
> Most countries have local laws that allow returning hardware to the
> shop if you figure out that it's not what you wanted to buy, and
> getting the money back if it is returned within a few days,
> so one can try it and let us know how it worked. ;)
> 
> It's always good to use some kind of redundancy for drives if one
> values its data, so at least two drives to mirror data, or more drives
> in raidz/2/3 for a better ratio of drive count to usable space.
> These days, with illumos loader being installed by default, one can
> boot
> even from the raidz drives.
> 

Hmm… I suspect that this 12TB model is one of those new "archive" drives, isn't 
it? If so, I wonder whether support for those drives has already landed in 
illumos. If it has, they should of course be safe to use from OI's point of 
view.


Re: [OpenIndiana-discuss] Slow write speeds on HGST HUH728080AL5200 via COMSTAR target

2017-07-02 Thread Stephan Budach
Hi,

----- Original Message -----
> From: "Stephan Budach" <stephan.bud...@jvm.de>
> To: "Discussion list for OpenIndiana" <openindiana-discuss@openindiana.org>
> Sent: Sunday, 2 July 2017 10:09:37
> Subject: [OpenIndiana-discuss] Slow write speeds on HGST HUH728080AL5200 via 
> COMSTAR target
> 
> 
> Hi everyone,
> 
> 
> I have set up three Supermicro storage servers, which are running HGST
> HUH728080AL5200 drives. I wanted to export each drive as a raw LUN over
> iSCSI. After having installed the latest OI hipster and recovering
> the primary label from the backups on each disk (which were all
> bad/corrupt, for whatever reason), I went forth and used fdisk to
> partition the drives to one big partition, on which I created a LUN,
> which I then exported over iSCSI.
> 
> 
> The issue I am now facing is, that the read speeds are very good, but
> writing is really slow. The drives are HGST HUH728080AL5200 and HGST
> states the following about them:
> 
> 
> 
> 
> Interface: SAS 12Gb/s
> Capacity: (GB) 8TB
> 
> Sector Size (Variable, Bytes/sector): 4Kn: 4096, 4112, 4160, 4224
> 
> 
> Could it be that the fact that the COMSTAR LUN only advertises 512
> byte sectors leads to this issue and if yes, can I do something
> about that?
> 

So it turned out that my suspicion was right: the COMSTAR target is only able 
to export devices as advertising a 512-byte block size. The aforementioned HUH 
drives are native 4k, but the LUNs on the target server (in this case an S11 
box) ended up in a zpool with ashift=9 vdevs.

To verify that, I tweaked sd.conf on one of my new OI boxes to map all 
COMSTAR LUNs to native 4k block sizes. That did the trick on OI. 
Interestingly, the zpool that I created on my OI box using -o version=28 
imported just fine on S11, and since the vdevs were now ashift=12, the 
performance on S11 became very reasonable.
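
The sd.conf tweak mentioned above usually takes the form of an sd-config-list entry; the vendor/product strings below are placeholders and must match the LUN's INQUIRY data (vendor padded to 8 characters), so treat this as an illustrative sketch rather than a drop-in config:

```
# /kernel/drv/sd.conf (illustrative entry; identifiers are placeholders)
sd-config-list = "SUN     COMSTAR", "physical-block-size:4096";
```

After editing, the sd driver must re-read its configuration (e.g. via update_drv sd or a reboot) before newly attached LUNs pick up the 4k physical block size.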

Does anyone happen to know if it's possible to trick S11 into regarding 
COMSTAR LUNs as 4k drives, or is the detour via OI the only way to create a 
zpool from raw iSCSI LUNs against native 4k COMSTAR target drives?

Cheers,
Stephan


[OpenIndiana-discuss] Slow write speeds on HGST HUH728080AL5200 via COMSTAR target

2017-07-02 Thread Stephan Budach

Hi everyone, 


I have set up three Supermicro storage servers, which are running HGST 
HUH728080AL5200 drives. I wanted to export each drive as a raw LUN over iSCSI. 
After having installed the latest OI hipster and recovering the primary label 
from the backups on each disk (which were all bad/corrupt, for whatever 
reason), I went forth and used fdisk to partition the drives to one big 
partition, on which I created a LUN, which I then exported over iSCSI. 


The issue I am now facing is that the read speeds are very good, but writing 
is really slow. The drives are HGST HUH728080AL5200, and HGST states the 
following about them: 




Interface: SAS 12Gb/s 
Capacity: (GB) 8TB 

Sector Size (Variable, Bytes/sector): 4Kn: 4096, 4112, 4160, 4224 


Could it be that the COMSTAR LUN advertising only 512-byte sectors leads to 
this issue, and if so, can I do something about that? 


Thanks, 
Stephan 


Re: [OpenIndiana-discuss] Where is tcpd in OI

2017-04-29 Thread Stephan Budach
----- Original Message -----
> From: "Peter Tribble" <peter.trib...@gmail.com>
> To: "Discussion list for OpenIndiana" <openindiana-discuss@openindiana.org>
> Sent: Saturday, 29 April 2017 17:01:52
> Subject: Re: [OpenIndiana-discuss] Where is tcpd in OI
> 
> On Sat, Apr 29, 2017 at 3:49 PM, Stephan Budach
> <stephan.bud...@jvm.de>
> wrote:
> 
> >
> > I am trying to install check_mk_agent on OI hipster, but it's missing
> > the tcpd program, which I cannot even find using pkg search.
> > Can anyone tell me if tcpd is available in OI and, if not, whether
> > there's a substitute for it?
> >
> 
> The tcpd program comes with tcp-wrappers which is part of illumos.
> On IPS distros the following ought to work:
> 
> pkg install library/security/tcp-wrapper
> 
> --
> -Peter Tribble
> http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/

I see; tcp-wrappers wasn't available because OI hipster had advanced since I 
installed/updated my installation a couple of days ago. After I 
updated/rebooted my OI hosts, I was able to install tcpd.

Thanks, Peter!

Regards,
Stephan


[OpenIndiana-discuss] Where is tcpd in OI

2017-04-29 Thread Stephan Budach
Hi, 


I am trying to install check_mk_agent on OI hipster, but it's missing the tcpd 
program, which I cannot even find using pkg search. 
Can anyone tell me if tcpd is available in OI and, if not, whether there's a 
substitute for it? 



Thanks, 
Stephan 


Re: [OpenIndiana-discuss] How to get bios version on running OI.

2016-02-15 Thread Stephan Budach

On 16.02.16 at 05:53, Harry Putnam wrote:

Can anyone advise me on how to find the BIOS version on OI running
on an HP xw8600 with 2x Xeon hardware?



Just issue smbios on the terminal and you should get everything you want, 
including the BIOS version, board name/manufacturer and a lot more…
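
For example, to limit the output to a single structure type (a sketch; see the smbios man page for the full list of type selectors):

```
# BIOS vendor, version and release date only:
smbios -t SMB_TYPE_BIOS

# Baseboard manufacturer and product:
smbios -t SMB_TYPE_BASEBOARD
```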




Re: [OpenIndiana-discuss] how to stop a zfs resilver

2014-06-05 Thread Stephan Budach
I think the only way to stop a resilver is to detach the device that is 
currently being resilvered. In your case that would be:


zpool detach tank c1t5000CCA225C03FC0d0

I don't know of any other way to stop a resilver.

Cheers,
budy

On 05.06.14 12:05, John McEntee wrote:

I have a production system, so I should have tried this at the weekend, but it 
is a 3-way mirror across 7 vdevs (is that the right term?), see below.

I started the resilver yesterday, but it is having too much of a performance 
impact on one of the VMware Windows machines. I tried to reduce the priority 
with

echo zfs_resilver_delay/w8 | mdb -kw
echo zfs_resilver_min_time_ms/W0t100 | mdb -kw

but that did not have a big enough effect. Next I tried to stop the resilver, 
so I could run it over the weekend, by taking the disk offline, but the 
resilver is still happening. Any idea how I stop it?

Thanks

John

   pool: tank
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
 continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Wed Jun  4 16:05:13 2014
 9.23T scanned out of 14.7T at 143M/s, 11h2m to go
 1.21T resilvered, 63.00% done
config:

 NAME   STATE READ WRITE CKSUM
 tank   DEGRADED 0 0 0
   mirror-0 DEGRADED 0 0 0
 c1t5000CCA225C5244Ed0  ONLINE   0 0 0
 c1t5000CCA225C54DDDd0  ONLINE   0 0 0
 c1t5000CCA225C03FC0d0  OFFLINE  0 0 0  (resilvering)
   mirror-1 ONLINE   0 0 0
 c1t5000CCA225C50784d0  ONLINE   0 0 0
 c1t5000CCA225C5502Ed0  ONLINE   0 0 0
 c1t5000CCA225C49869d0  ONLINE   0 0 0
   mirror-2 ONLINE   0 0 0
 c1t5000CCA225C54ED8d0  ONLINE   0 0 0
 c1t5000CCA225C56814d0  ONLINE   0 0 0
 c1t5000CCA225C4E775d0  ONLINE   0 0 0
   mirror-3 ONLINE   0 0 0
 c1t5000CCA225C2ADDAd0  ONLINE   0 0 0
 c1t5000CCA225C04039d0  ONLINE   0 0 0
 c1t5000CCA225C53428d0  ONLINE   0 0 0
   mirror-4 ONLINE   0 0 0
 c1t5000CCA225C50517d0  ONLINE   0 0 0
 c1t5000CCA225C55025d0  ONLINE   0 0 0
 c1t5000CCA225C5660Dd0  ONLINE   0 0 0
   mirror-5 ONLINE   0 0 0
 c1t5000CCA225C484A3d0  ONLINE   0 0 0
 c1t5000CCA225C4824Dd0  ONLINE   0 0 0
(yes I know there is a missing disk here, that will be replaced next)
   mirror-6 ONLINE   0 0 0
 c1t5000CCA225C4E366d0  ONLINE   0 0 0
 c1t5000CCA225C54DDCd0  ONLINE   0 0 0
 c1t5000CCA225C56751d0  ONLINE   0 0 0
 logs
   c1t500A075103053202d0p2  ONLINE   0 0 0
   c1t500A07510306F9A7d0p2  ONLINE   0 0 0
 cache
   c1t500A075103053202d0p3  ONLINE   0 0 0
   c1t500A07510306F9A7d0p3  ONLINE   0 0 0

errors: No known data errors






Re: [OpenIndiana-discuss] Zpool crashes system on reboot and import

2013-12-12 Thread Stephan Budach

Hi all,

On 11.12.13 21:28, Jim Klimov wrote:

Also, that thread mentions that you may use this work around with
the read-only dataset with the pool to enable writes to the dataset
and keeping it read-only before exporting the pool. Still, yes,
budy mentions setting the dataset attribute while the pool is
imported read-only (Stephan? would you chime in with more details,
if that's really you?)

Yes, that's me. And I have to confess that I am still running with 
this zpool since then. I had a long-running SR with Oracle about this, but in 
the end I was told by Oracle engineering to re-create the zpool, which I 
refused to do.
So basically, I am still running with this workaround of setting the 
affected fs to read-only before I export the zpool.
This server is under constant load and I just don't have the time and 
resources to move all 370+ ZFS filesystems onto other storage.


To make things even worse, this error is inside the data structure of 
the ZFS fs, so zfs send/recv doesn't help here and the data would have 
to be copied manually; nasty indeed.


However, I seem to recall that Oracle Support told me this bug had been 
fixed in S11 SRU13. I am not sure, but I could look that up in 
the SR. Of course, this only prevents it from happening to filesystems that 
are not yet affected by this issue; there's currently no cure, afaik.



If you manage to reproduce this trick in command-line and if it does
indeed help (and if you want to keep using this pool i.e. to help
the developers reproduce and fix the core problem) instead of just
remaking the pool, you might build on the (unbaked as of yet) scripts
and SMF manifests here:

http://wiki.openindiana.org/oi/Advanced+-+ZFS+Pools+as+SMF+services+and+iSCSI+loopback+mounts 

I would go for that, but I don't know if that'd be of any avail, since 
I am still running S11.


This would enable you to code all this logic about setting and removing
the readonly bits from your pool around the import-as-a-service, so that
your system would gracefully import the pool, enable the dataset, etc.
and undo this upon proper shutdown.

HTH,
//Jim Klimov


Cheers,
budy



Re: [OpenIndiana-discuss] Zpool crashes system on reboot and import

2013-12-12 Thread Stephan Budach

On 12.12.13 15:18, Jim Klimov wrote:

On 2013-12-12 14:38, Stephan Budach wrote:

So basically, I am still running with this work around of setting the
affected fs to read-only, before I export the zpool.
This server is under constant load and I just don't have the time and
resources to move all 370+ ZFS fs onto another storage.


And how did you manage to set the read-only attribute the first time?
Were there any problems or tricks involved? As CJ suggested, one
wouldn't be able to do this on a pool imported read-only... did you
import it without mounts indeed?

//Jim

You surely can set the readonly attribute for a ZFS fs on a read-only 
imported zpool. Importing the zpool readonly only seems to affect the 
global setting; it seems possible to change the ZFS fs attributes 
without any issue. So the workaround was:


zpool import -o ro zpool
zfs set readonly=on zpool/zfs
zpool export zpool
zpool import zpool
zfs set readonly=off zpool/zfs

This has always worked for me and it still does.

Cheers,
budy



Re: [OpenIndiana-discuss] Zpool crashes system on reboot and import

2013-12-12 Thread Stephan Budach

On 12.12.13 17:14, Jan Owoc wrote:

On Thu, Dec 12, 2013 at 8:50 AM, CJ Keist cj.ke...@colostate.edu wrote:

Thanks, Stephan.
 What is the process or time frame for a bug fix in ZFS from Oracle making
its way down to illumos and on to OI?

 From what I understand, there isn't one. The two are developed
independently. However, features that are present in Oracle's ZFS may
have a higher priority of being independently re-implemented in OI,
but not necessarily in the same way.
That's right. Although the issue has been present in OpenSolaris and thus may 
have made it over to illumos and its ZFS heritage, Oracle resolved this 
bug later on, and thus the fix won't make it into illumos or any of its 
descendants. It also may or may not have been addressed as a side effect 
of the many fixes the folks around here have applied to ZFS.






On 11.12.13 21:28, Jim Klimov wrote:

To make things even worse, this error is inside the data structure of
the ZFS fs, so zfs send/recv doesn't help here and the data would have
to be copied manually - nasty indeed.

Is there anything stopping one from creating a new zfs on the same
pool, copying over the data manually, and then destroying the
corrupt zfs? Or is the affected zfs generally larger than the free
space on the pool?
Well, no - there isn't. In the end it's the fs itself that is corrupt and 
not the zpool. This issue doesn't have any side effect on the zpool, and 
we have since created hundreds of new filesystems on that particular zpool. 
Despite the fact that the zfs automount will panic the system, there 
is no other issue present with this fs. In the end it's not the zpool 
that crashes, but the auto-mounting…
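For anyone wanting to try Jan's suggestion, the copy-to-a-fresh-dataset route could look roughly like the sketch below. The dataset names are placeholders, the rsync flags assume a reasonably recent rsync, and the `run` helper only prints each step (a dry run); drop the `echo` when doing it for real.

```shell
# Dry-run sketch: replace a corrupt fs with a fresh dataset on the
# same pool. `run` echoes instead of executing.
run() { echo "$@"; }

run zfs create tank/projects.new
run rsync -aHAX /tank/projects/ /tank/projects.new/   # copy the data manually
run zfs destroy tank/projects                         # retire the corrupt fs
run zfs rename tank/projects.new tank/projects
```

This of course only works while the affected fs is still readable, and needs enough free space on the pool for a second copy of the data.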


Cheers,
budy



Re: [OpenIndiana-discuss] Zpool crashes system on reboot and import

2013-12-12 Thread Stephan Budach

Am 12.12.13 17:21, schrieb CJ Keist:
I am not able to set any attributes on ZFS FSes when the pool has been 
imported as read-only:


zpool import -o readonly=on data

Not sure that is different from your command: -o ro

zfs set readonly=on data/projects/ALP
cannot set property for 'data/projects/ALP': dataset is read-only
Interesting. I am on S11, so that may differ. However, I can assure you 
that this is the way I was dealing with this when I had that issue, but 
that was at least 430+ days ago.






On 12/12/13, 9:06 AM, Udo Grabowski (IMK) wrote:

On 12/12/2013 16:14, Stephan Budach wrote:

Am 12.12.13 15:18, schrieb Jim Klimov:

On 2013-12-12 14:38, Stephan Budach wrote:

So basically, I am still running with this work around of setting the
affected fs to read-only, before I export the zpool.
This server is under constant load and I just don't have the time and
resources to move all 370+ ZFS fs onto another storage.


And how did you manage to set the read-only attribute the first time?
Were there any problems or tricks involved? As CJ suggested, one
wouldn't be able to do this on a pool imported read-only... did you
import it without mounts indeed?

//Jim


You surely can set the readonly attribute for a ZFS fs on a read-only
mounted zpool. Mounting the zpool read-only only seems to affect the
global setting; it still seems to be possible to change the ZFS FS attributes
without any issue. So the workaround was…

zpool import -o ro zpool
zfs set readonly=on zpool/zfs
zpool export zpool
zpool import zpool
zfs set readonly=off zpool/zfs

This has always worked for me and it still does.


Would be interesting to know under which circumstances this problem
appears. I saw from one of the crash dumps that there was a scrub
active; could it be that this happens on servers which go down with
an active scrub on that pool and then fail to reactivate the scrub?








--
Stephan Budach
Deputy Managing Director
Jung von Matt/it-services GmbH
Glashüttenstraße 79
20357 Hamburg


Tel: +49 40-4321-1353
Fax: +49 40-4321-1114
E-Mail: stephan.bud...@jvm.de
Internet: http://www.jvm.com

Geschäftsführer: Frank Wilhelm, Stephan Budach (stellv.)
AG HH HRB 98380




Re: [OpenIndiana-discuss] Zpool crashes system on reboot and import

2013-12-12 Thread Stephan Budach

Am 12.12.13 17:36, schrieb CJ Keist:



On 12/12/13, 9:06 AM, Udo Grabowski (IMK) wrote:

On 12/12/2013 16:14, Stephan Budach wrote:

Am 12.12.13 15:18, schrieb Jim Klimov:

On 2013-12-12 14:38, Stephan Budach wrote:

So basically, I am still running with this work around of setting the
affected fs to read-only, before I export the zpool.
This server is under constant load and I just don't have the time and
resources to move all 370+ ZFS fs onto another storage.


And how did you manage to set the read-only attribute the first time?
Were there any problems or tricks involved? As CJ suggested, one
wouldn't be able to do this on a pool imported read-only... did you
import it without mounts indeed?

//Jim


You surely can set the readonly attribute for a ZFS fs on a read-only
mounted zpool. Mounting the zpool read-only only seems to affect the
global setting; it still seems to be possible to change the ZFS FS attributes
without any issue. So the workaround was…

zpool import -o ro zpool
zfs set readonly=on zpool/zfs
zpool export zpool
zpool import zpool
zfs set readonly=off zpool/zfs

This has always worked for me and it still does.


Would be interesting to know under which circumstances this problem
appears. I saw from one of the crash dumps that there was a scrub
active; could it be that this happens on servers which go down with
an active scrub on that pool and then fail to reactivate the scrub?



If you look at my post, the scrub was started after the system crashed. 
I wanted to see if a scrub might fix the issue of importing this data 
pool. But when I saw the scrub was going to take 60+ hours, I had 
to export it again and re-import it read-only so I could start migrating 
data to a new location, to keep the downtime to a minimum.


I'm not sure what caused the initial crash. I know I was working at the 
time through the web GUI of napp-it; I think my last action in the web 
GUI was to show all logical volumes.
I just had a look at my SR from 2 years ago, and I was performing a 
Solaris update back then. When I tried to unmount that zpool, this fs 
wouldn't and claimed to be busy for no apparent reason, so I finally 
forced the zpool export.


That was when this issue started on that particular fs, after the 
following reboot.





Re: [OpenIndiana-discuss] N40L rear e-sata

2012-11-11 Thread Stephan Budach

Am 11.11.12 19:09, schrieb Sašo Kiselkov:

On 11/11/2012 02:02 PM, Michelle Knight wrote:

Hi Folks,

I made a new installation of OI a couple of weeks ago, on a HP N40L
system.

At the back is an e-sata port that I want to hook up to an external
drive, but when I hook up a drive through it and issue cfgadm -lav I
can't see it on any channel.

Does anyone have any experience/advice please?

You need to reflash the BIOS to enable AHCI mode on the fifth SATA port
(the e-SATA one). Have a look at:

http://homeservershow.com/hp-microserver-n40l-build-and-bios-modification.html

Cheers,
--
Saso


+1 - flashing the N40L BIOS is a must. I did that on three units myself 
and it was really easy using a USB thumb drive. I also just crammed 16 
GB of RAM into my unit at home, which really made a big difference, 
since I am also running VBox on it. :)


You just have to be aware that if you installed OI on the disks in IDE 
mode, you won't be able to boot off the rpool once you change your 
drives from IDE to SATA, but you can overcome that using the 
instructions for repairing the rpool in the ZFS admin guide.
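That repair essentially boils down to reinstalling the boot blocks from live media. A rough sketch, assuming a GRUB-era OI install; `c0t0d0s0` is a placeholder device name (check `format` output for the real one), and the `run` helper only prints the steps instead of executing them.

```shell
# Dry-run sketch of the rpool boot-block repair after the IDE -> AHCI
# switch, per the ZFS admin guide. Run the real commands from a booted
# live/rescue environment; the device name is a placeholder.
run() { echo "$@"; }

run zpool import -f rpool
run installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
run zpool export rpool
run init 6
```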


Cheers,
budy



Re: [OpenIndiana-discuss] N40L rear e-sata

2012-11-11 Thread Stephan Budach

Am 11.11.12 20:16, schrieb Jim Klimov:

On 2012-11-11 20:00, Stephan Budach wrote:

+1  - flashing the N40L BIOS is a must. I did that on three units myself
and it was really easy using a USB thumb drive. I also just crammed 16
GB of RAM into my unit at home, which really made a big difference,
since I am also running Vbox on it. :)



Ah-hah! So it is possible! (I asked a few times with vague results)

What models of RAM did you use, and approximately what was the cost?
Is it worth it in your opinion? ;)

Did the HP BIOS complain about component compatibility or try to
refuse to accept some details? (I saw other HP servers do that)

Thanks,
//Jim

Hi Jim,

yes, it's possible, and after searching and comparing (and even one shot 
in the foot as well) I finally figured out that the HP would run with 
PC1333 modules organized as 512Mx8. I paid €107,- (VAT incl.) for both modules.


At first my box would only recognize 8 GB when I put both modules in, and 
I was only able to get it to run using one 8 GB and one of the remaining 
4 GB modules, which accounted for 12 GB in total. After hitting Google 
pretty hard, I checked the BIOS of my N40L and, just out of curiosity, 
tried disabling parity checking in the Southbridge menu of the 
modded BIOS, which suddenly seemed to turn on the second 8 GB module… 
really weird. I suspect the BIOS did not perform a full discovery 
when I inserted the two 8 GB modules, but the 16 GB have been consistently 
available throughout 10 reboots, so I deemed them operational.


As for the specs of the RAM modules I purchased, I can provide the 
information from the sticker that has been put on them; hopefully you can 
get something from it - I could also ask my dealer what exactly he 
ordered from his distributor:


CSX:
8 GB DDR3 1333MHz
512x8 Long DIMM ECC
AP_ECC1333D3_8G
XECC-D3-1333-512X8-8GB

As I said, I think that the 512x8 chip layout is key here.

And yes… I do think that it is really worth it - running 3 VMs on the 
N40L really hits the memory hard, and with more GB to spend, my 
VMs really run much better (system load has increased, which is often a 
good sign in that regard) - after all I do have 9 GB left for ZFS to 
operate on.


So, my N40L is now quite beefed up, without going too crazy… ;)

Cheers,
budy




Re: [OpenIndiana-discuss] XStreamOS distro available

2012-09-23 Thread Stephan Budach

Am 21.09.12 16:51, schrieb Gabriele Bulfon:

Hi, can you try and boot in debug mode?
When at grub screen, press e to edit, move to third row, e to edit, add  -v 
at the end, then boot this third row.
Let me know what you get.
Gabriele.


When booting into the installer in debug mode I am getting this at the 
end, before the system hangs:


Reading Intel IOMMU boot options
pci0 at root: space 0 offset 0
pci0 is /pci@0,0
PCI-device: ias@1, isa0

Afterwards the vcpus stay at 100% and I have to kill the VM.

Cheers,
budy



Re: [OpenIndiana-discuss] XStreamOS distro available

2012-09-20 Thread Stephan Budach

Am 20.09.12 11:10, schrieb Gabriele Bulfon:

We have it working on VMWare 4 and 5, community and licensed versions.
We have it working on VirtualBox, both on Windows and Solaris hosts.
We have it working on KVM for illumos, tested on an experimental version of 
XStreamOS on bare metal with KVM built on it.
I have no OracleVM to test it.
If you can do it, we would love to know the results.
Thanks,
Gabriele.


Unfortunately, XStreamOS also has issues on OracleVM. After starting up 
and (presumably loading the kernel) it just sits there and burns CPU 
cycles.

That is, the boot message appears and then it's stuck.

Cheers,
budy



Re: [OpenIndiana-discuss] ZFS Pool configuration hanging around

2011-01-17 Thread Stephan Budach

Am 17.01.11 12:30, schrieb Michelle Knight:

I'm using an external caddy to mount backup drives to. The ZFS pools are,
rather imaginatively, called backup.

However, there seems to be a phantom of an old set that was present when the
machine hung once upon a time, and I can't get rid of it. I've tried deleting
/etc/zfs/zpool.cache but no joy.

Here you can see the problem. A single drive is on c3d0p0, called backup. (I
know, there is no redundancy, I'm testing)

A zpool import shows both the valid drive but also the phantom set.

This means I've got to import the pool via the id number.

mich@jaguar:/etc/zfs# zpool import
   pool: backup
     id: 1064873577100856
  state: ONLINE
 status: The pool is formatted using an older on-disk version.
 action: The pool can be imported using its name or numeric identifier, though
         some features will not be available without an explicit 'zpool
         upgrade'.
 config:

        backup      ONLINE
          c3d0p0    ONLINE

   pool: backup
     id: 15407100200514227053
  state: FAULTED
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
         devices and try again.
         The pool may be active on another system, but can be imported using
         the '-f' flag.
    see: http://www.sun.com/msg/ZFS-8000-3C
 config:

        backup        FAULTED  corrupted data
          mirror-0    DEGRADED
            c2t4d0p0  ONLINE
            c5t0d0    UNAVAIL  cannot open


Anyone know how to get rid of this phantom set from the system please? I've
tried destroy but it obviously can't find the pool to get rid of it.

To make it worse, the drive on c2t4d0p0 was part of the faulted backup set.

Should I reboot after clearing the zpool cache file? Would that be the missing
step?

I know ... I get myself in to some really dumb situations, don't I.
I'd import the good zpool using its numeric identifier and then just 
rename it alongside the import, like this:


zpool import 1064873577100856 newName

Then export it back out and see what zpool import returns. If it still 
returns the ghost zpool, you can of course reboot the host once more and 
see if that clears things up.
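If a reboot still leaves the phantom entry, the stale ZFS labels on the leftover device can be wiped. Note this is an assumption: `zpool labelclear` only exists on later illumos builds, the device name below is a placeholder, and clearing labels is destructive, so double-check the device first. The helper only echoes the command for review.

```shell
# Dry-run sketch: wipe stale ZFS labels that make a phantom pool show
# up in `zpool import`. DESTRUCTIVE when run for real -- verify the
# device name before executing anything.
run() { echo "$@"; }

run zpool labelclear -f /dev/dsk/c2t4d0p0
```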


Cheers,
budy
