Re: [OpenIndiana-discuss] Support for LSI-2308 SAS controller in oi_151a8

2015-07-01 Thread Schweiss, Chip
Sounds exactly like a firmware problem that Seagate Constellations had.
With lsiutil you can read the temperature threshold the disks report.
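If lsiutil isn't handy, smartmontools (when installed) can read the same
drive-reported values; a rough sketch, with a placeholder device path:

smartctl -a /dev/rdsk/c0t5000C500ABCDEF01d0   # placeholder device; look for the
                                              # drive temperature / trip temperature
                                              # lines in the output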

You likely need a firmware upgrade or downgrade for the disks.

-Chip


On Wed, Jul 1, 2015 at 7:16 PM, Matt Boswell 
wrote:

> Hi all,
>
> We bit the bullet and bought a companion storage server to use for ZFS
> replication and failover. Our main box is about 3 years old and is also
> running oi_151a8. We're running into issues where the system insists (via
> console messages) that some of the disks are over-temperature, and now format shows
> only 6 of the 26 disks installed in the system.  "fmadm faulty" shows what
> I expected, that a slew of disks exceeded temperature thresholds and have
> been marked as faulty.  I find this hard to believe, as the system is
> properly cooled, in a colo rack, and the sensors on board all show normal
> temperatures via IPMI.   FWIW these disks have never been used; they are
> brand new and waiting to be added into zpools.  The controller BIOS shows
> all the disks as connected and healthy.  Just wondering if anyone has run
> across this before and what can be done about it.  The hardware is
> SuperMicro 6047R-E1R24L and the disks are WD SAS 4TB drives, with two Intel
> SSDs in the back for rpool and log devices.
> Matt
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] ashift 13?

2015-04-07 Thread Schweiss, Chip
While the SSD will address 512b sectors, it doesn't work at all like
platters.  Like ZFS, SSDs group writes together into 8k internal pages,
do copy-on-write, and garbage-collect later.

I did a bunch of iozone testing about a year ago and found very little
benefit from using larger sector sizes on the SSD.  By far the best thing
for them was to slice them into a single partition covering 70-90% of the
available capacity.  This leaves a bigger chunk of sectors free for
garbage collection, greatly improving sustained write performance.
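For example, something along these lines (device names are placeholders; the
s0 slice was created with format(1M) to cover roughly 80% of the SSD):

# the remaining ~20% of the SSD stays unallocated for garbage collection
zpool add tank log c4t5001517BB2A99E5Bd0s0      # e.g. as a slog...
zpool create fastpool c4t5001517BB2A99E5Cd0s0   # ...or as its own small pool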

This varies a lot depending on the SSD.  I did my testing on Samsung 840
Pro SSDs and the Intel S3700.  The 840 Pro benefited a lot from smaller
allocations, whereas for the S3700, which already has large
over-provisioning, it didn't make much difference.

BTW, the latest firmware on the Intel S3500 changed it to report 4K physical
sectors with 512b emulation.  It doesn't have nearly the over-provisioning of
the more expensive S3700, so 4K probably helps if its entire usable capacity
is used.  I use the S3500 for my rpool on most of my ZFS systems.

-Chip

On Tue, Apr 7, 2015 at 1:25 PM, Jason Matthews  wrote:

>
>
> On 4/7/2015 11:07 AM, Jim Klimov wrote:
>
>>
>> As for inflation - whenever you have smaller zfs allocations, such as
>> those "tails" from blocks not sized to a power of 2 thanks to compression,
>> they become a complete minimal "recordsize" block such as 4k or 8k, as native to
>> your drives, with trailing zero-padding. Some metadata blocks may also fall
>> into this category, though for larger files these are clustered in 16kb(?)
>> chunks. You also have fewer uberblocks in the fixed-size ring buffer of
>> zpool labels.
>>
>
> I am not sure tails justify the inflation. I can accept some increased
> utilization from tails but this is totally out of line.
>
> Here is a 512b system of a database master.
> root@shard035a:/home/postgres/data/base/16414# du -hs .
>  203G   .
> root@shard035a:/home/postgres/data/base/16414# ls -l |wc -l
> 4109
>
> 203GB and 4109 files.
>
> Here is the slave that I built from the master yesterday. They should be
> nearly identical.
>
> root@shard035b:/home/postgres/data/base/16414# du -hs .
>  474G   .
> root@shard035b:/home/postgres/data/base/16414# ls -l |wc -l
> 4081
>
> My feeling is there are not enough tails in 4100 files to consume 271GB of
> storage. I don't understand what is going on just yet.
>
> j.
>
>
>
>
>  On the upside, if your ssd does compression, these zeroes will in effect
>> likely count toward wear-leveling reserves ;) With hdds this is more of a
>> loss of space compared to 512-byte sectored disks. However this just becomes
>> similar to usage on other systems (ext3, ntfs), typically with 4k clusters
>> anyway. So we're told to not worry and love the a-bomb ;)
>>
>>
>
>
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Test on Sun X4170M2 between STEC Mach16 SLC/MLC and Intel DC S3700

2015-02-23 Thread Schweiss, Chip
This may be a better topic for the Illumos ZFS mailing list.

There are a couple things affecting your results here.

The SSDs  perform best when multiple threads are filling their queue.   In
your test you have a single thread.

It is my understanding the ZIL is also one thread per ZFS file system.
The latency of the SAS bus and SSD will stack up against you here.  You
should get better results across the board if you execute against several
ZFS file systems on the pool.
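A rough sketch of what I mean (pool and dataset names are made up):

for i in 1 2 3 4; do zfs create tank/ziltest$i; done
for i in 1 2 3 4; do
  /usr/gnu/bin/dd if=/dev/zero of=/tank/ziltest$i/file bs=4096 count=250000 oflag=sync &
done
wait    # each dd hits a separate ZIL, so the log device sees a deeper queue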

-Chip

On Sun, Feb 22, 2015 at 9:28 PM, Albert Chin <
openindiana-disc...@mlists.thewrittenword.com> wrote:

> I've tested three SSDs in a Sun X4170M2. This server has 8 internal
> SSD 2.5" drive bays with a Sun Storage 6 Gb SAS PCIe RAID HBA. I
> believe the chipset on the HBA is a LSI SAS2108 (according to
> http://tinyurl.com/koc6kdn).
>
> I tested by adding each SSD as a ZIL for a pool and then running the
> following command on one of the file systems in the pool:
>   $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
>
> Iostat numbers below are given by:
>   $ iostat -cnx 1
>
> $ cat /kernel/drv/sd.conf
> ...
> sd-config-list="*MACH16*","disksort:false, cache-nonvolatile:true",
>"*INTELSSD*","disksort:false, cache-nonvolatile:true,
> physical-block-size:8192";
>
> The 8k block size for the Intel S3700 comes from:
>
> http://wiki.illumos.org/display/illumos/List+of+sd-config-list+entries+for+Advanced-Format+drives
>
> Product part numbers:
>   1. STEC Mach16 SLC 100GB - M16CSD2-100UIU
>   2. STEC Mach16 MLC 200GB - M16ISD2-200UCV
>   3. Intel DC S3700 - SSDSC2BA200G301
>
> ATA-STECMACH16 M-0300-93.16GB (SLC)
>   $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
>   $ iostat -cnx 1
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b
>     0.0 1439.2    0.0 5756.9  0.0  0.8    0.0    0.6   0  84
>     0.0 1478.7    0.0 5915.0  0.0  0.9    0.0    0.6   1  86
>     0.0 1491.1    0.0 5964.2  0.0  0.9    0.0    0.6   1  88
>     0.0 1506.0    0.0 6023.8  0.0  0.9    0.0    0.6   0  89
>
> ATA-STECMACH16 M-0289-186.31GB (MLC)
>   $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
>   $ iostat -cnx 1
>     r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b
>     0.0 8079.7    0.0 32318.9  0.0  0.5    0.0    0.1   2  52
>     0.0 8229.1    0.0 32916.5  0.0  0.5    0.0    0.1   2  53
>     0.0 7368.0    0.0 29471.9  0.0  0.5    0.0    0.1   2  47
>     0.0 7318.0    0.0 29272.2  0.0  0.5    0.0    0.1   2  47
>
> ATA-INTEL SSDSC2BA20-0270-186.31GB (MLC)
>   $ /usr/gnu/bin/dd if=/dev/zero of=file bs=1 oflag=sync
>   $ iostat -cnx 1
>     r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b
>     0.0 9196.0    0.0 18392.0  0.0  0.2    0.0    0.0   1  19
>     0.0 9144.3    0.0 18288.6  0.0  0.2    0.0    0.0   1  19
>     0.0 9288.7    0.0 18575.5  0.0  0.2    0.0    0.0   1  19
>     0.0 8352.0    0.0 16704.0  0.0  0.2    0.0    0.0   1  17
>
> The STEC Mach16's are 3.0Gbps devices. The Intel SSD DC S3700 is a
> 6.0Gbps device. Just two questions:
>   1. Why don't I see double the IOPS performance between the
>  6.0Gbps device than the 3.0Gbps devices?
>   2. Why does the STEC Mach16 100GB SLC suck so badly in comparison
>  to its 200GB MLC cousin? I know that the 100GB drives won't
>  perform as well as the 200GB models but I did not expect this
>  much of a difference.
>
> Even using bs=4096 on the Intel S3700, I was hoping to see >10K IOPS,
> possibly matching the numbers from the anandtech review:
>   http://www.anandtech.com/show/6433/intel-ssd-dc-s3700-200gb-review/3
> ATA-INTEL SSDSC2BA20-0270-186.31GB (MLC)
>   $ /usr/gnu/bin/dd if=/dev/zero of=file bs=4096 oflag=sync
>   $ iostat -cnx 1
>     r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b
>     0.0 8714.5    0.0 34858.0  0.0  0.2    0.0    0.0   1  22
>     0.0 8410.9    0.0 33639.6  0.0  0.2    0.0    0.0   1  21
>     0.0 8431.1    0.0 33728.2  0.0  0.2    0.0    0.0   1  21
>     0.0 8295.0    0.0 33224.0  0.0  0.2    0.0    0.0   1  21
>     0.0 8970.1    0.0 35740.4  0.0  0.2    0.0    0.0   1  23
>
> Am I getting the best possible performance out of the Intel S3700?
>
> --
> albert chin (ch...@thewrittenword.com)
>
> ___
> openindiana-discuss mailing list
> openindiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] A ZFS related question: How successful is ZFS, really???

2015-01-12 Thread Schweiss, Chip
On Mon, Jan 12, 2015 at 8:17 AM, Andrew Gabriel <
illu...@cucumber.demon.co.uk> wrote:

>
> Since you mention Sun/Oracle, I don't see them pushing ZFS very much
> anymore, although I am aware their engineers still work on it.
>

Oracle pushes ZFS hard and aggressively.   I dare you to fill out their
contact form or download their virtual appliance demo.  Their sales people
will be calling within the hour.

We just recently went through a bidding war on an HA + DR system with 1/2
PB usable storage with many vendors, including Nexenta and Oracle.  Oracle
was price competitive with Nexenta and, in my opinion, offers a much more
polished product.

We still chose to build our own on OmniOS because we could do that for
about 1/2 the price of Oracle / Nexenta.  That's less than 1/4 the price of
3PAR/HP, IBM, or Dell/Compellent.  BTW, our OmniOS build is on the exact
same hardware Nexenta would have been on.

-Chip
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] [developer] HBA recommended except LSI and ARECA

2014-05-04 Thread Schweiss, Chip
The place that good SATA support is needed is not for spinning disk, but
for SSD.  The price delta is huge, and even consumer-grade SSDs are ideal
for L2ARC.
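e.g. (placeholder pool and device names):

zpool add tank cache c3t2d0    # losing an L2ARC device never puts pool data at risk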

-Chip
On May 4, 2014 4:43 AM, "Fred Liu"  wrote:

>
> [fred]:ok. Let's see how it goes after I get the hba.
>
> [fred]: I have got the 6805H HBA. It can recognize the SATA drives in the
> BIOS, but these drives cannot be detected in the latest illumos (SmartOS, OI)
> releases.  I haven't got a SAS drive to test. But the price delta
> between a SAS and a SATA drive of the same capacity is not small at all,
> far more than USD 30. ☹
>
>Thanks.
>
>  Fred
> ___
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] fail2ban for sshd

2014-04-24 Thread Schweiss, Chip
On Thu, Apr 24, 2014 at 5:43 AM, Gary Gendel  wrote:

> Fail2ban seems to randomly miss ssh matches.  I've been hacking at the
> filter but nothing I seem to do works.  What regex are others using that
> works? The line that should catch the ones missed is:
>
>
A much easier way to manage this is to never run sshd on port 22 when it is
exposed to the Internet.  Pick any nonstandard port and these drive-by scans
pretty much go away.
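For example (2222 is just an arbitrary pick):

# in /etc/ssh/sshd_config:
#   Port 2222
svcadm restart svc:/network/ssh:default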

-Chip
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] received unsolicited ack for DL_UNITDATA_REQ on bnx0arl_dlpi_pending

2014-04-07 Thread Schweiss, Chip
Were you able to resolve the cause of this?

I found this thread today when one of my servers started having the same
problem: hard lockups with nearly the same message on the console.

My affected server is a Supermicro 6037R-TXRF with the X9DRX+-F
motherboard.   It has 2 Intel nics.  The console message reads:

received unsolicited ack for DL_UNITDATA_REQ on igp0arl_dlpi_pending

It repeats 4 times and the system locks.

This system is completely unloaded and the problem occurs after only a few
minutes of uptime.

Thanks for any additional info you can provide.

-Chip





On Thu, Mar 21, 2013 at 3:53 PM, James Carlson wrote:

> On 03/21/13 14:14, Pico Aeterna wrote:
> > James,
> >
> > Thanks for the reply and the dtrace hint.
> >
> > # dtrace -n 'fbt::arl_dlpi_pending:return/arg1==0/{stack();}'
> > dtrace: description 'fbt::arl_dlpi_pending:return' matched 1 probe
> >
> > Pretty much this problem occurs when the system is under network load as
> > it is the NFS store for about 5-6 vms via my ESXi server.  Once I have a
> > viable dump I'll post the results.  I found it a little odd as well that
> > it was referencing bnx0 since my system is an x8DTU which uses the Intel
> > 82576 for its NIC
> >
> > I did some digging and found an old 2012 reference to this however don't
> > know if it's relevant to my issue
> >
> > http://tech.groups.yahoo.com/group/solarisx86/message/54599
>
> Seems plausible.  If so, this might be relevant:
>
> https://www.illumos.org/issues/1333
>
> --
> James Carlson 42.703N 71.076W 
>
> ___
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Powerloss protected SSDs for ...Re: Low low end server

2014-02-10 Thread Schweiss, Chip
On Mon, Feb 10, 2014 at 3:58 PM, Volker A. Brandt  wrote:

>
> This thread started out as a discussion of the merits of the HP N54L
> microserver for home use.  I am not really sure if a home server needs
> mirrored battery-protected SSDs.  :-)
>

I tend to agree with this.  My approach is to slice a Samsung 840 Pro,
which holds up performance really well, and do an aggressive backup cycle to
my disk pool.

Turn off sync, and forget about a ZIL unless you're running a database.
Even then, just have a snapshot policy so you have an acceptable fallback
point if power loss does bite you.

ZFS is consistent even with sync off.
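Roughly (dataset names are placeholders):

zfs set sync=disabled tank/home
zfs snapshot -r tank/home@backup-`date +%Y%m%d%H%M`    # e.g. from cron, for the fallback points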

-Chip
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Powerloss protected SSDs for ...Re: Low low end server

2014-02-10 Thread Schweiss, Chip
On Mon, Feb 10, 2014 at 5:22 AM, Hans J. Albertsson <
hans.j.alberts...@branneriet.se> wrote:

> Samsung 843




The 843, while called an enterprise SSD, does not have capacitors for
power-loss protection.

http://www.thessdreview.com/our-reviews/samsung-843/2/
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] sd.conf trouble and illumos bug #3220

2014-02-04 Thread Schweiss, Chip
I'm having the same problem.   What I can't seem to find is the correct
syntax for using a wildcard in sd.conf.   Can you provide an example?

Thanks!
-Chip


On Wed, Jan 8, 2014 at 1:55 PM, Bryan N Iotti
wrote:

> Adding a wildcard for all devices to use the 4K block size did indeed
> work and drastically improved the performance (and, I guess, life
> expectancy) of that SSD.
>
> I also added "physical-block-size=512" entries for all traditional
> 512-byte block disks connected to this machine and they are running fine.
>
> The scrub speed on the SSD (only quick test I could think of to try
> sustained read and write) went from 160MB/sec to 250MB/sec. If I use 8K it
> doesn't change as much, but seems to be more constant. System has been
> running for a couple of hours and it's nice and snappy.
>
> Should I add a line to the OI wiki about the wildcard options? I didn't
> see them there last time I looked and they can be handy for newcomers and
> the like.
>
> Bryan
>
> On Mon, 6 Jan 2014 12:49:53 -0800 (PST)
> Reginald Beardsley  wrote:
>
> > Sadly yahoo mail makes a mess of replies embedded in the text :-(
> >
> > I'll have to try the 3 TB USB drive again.  I certainly can't see any
> reason that your wildcarding scheme shouldn't work.  If there is a problem
> it's probably a bug.
> >
> > There are so many lies being told about geometry that it's hard to say
> what you should do.  At one point I wound up with a partition that was not
> 4k aligned if I started with cylinder #1, so I used #2 which was 4k
> aligned.  However, the 2k aligned partition seemed to work OK, so I'm not
> certain.  I wasn't willing to leave it that way to avoid wasting a little
> space.
> >
> > I can't think of any reason that 512B drives wouldn't work fine aligned
> to larger boundaries.   The only consequence I can see is some wasted space
> which really isn't an issue given we're talking pennies per GB.  I can't
> imagine a performance issue that could result.
> >
> > It seems to me that the default alignment rules would give less trouble
> if the alignment was to the most restrictive requirement.
> >
> > 
> > On Mon, 1/6/14, Bryan N Iotti  wrote:
> >
> >  Subject: Re: [OpenIndiana-discuss] sd.conf trouble and illumos bug #3220
> >  To: openindiana-discuss@openindiana.org
> >  Date: Monday, January 6, 2014, 12:08 PM
> >
> >  Hi Reginald,
> >
> >  Thanks for your reply.
> >
> >  In the end, I just booted from a live DVD and plugged that
> >  drive in on my laptop using a double asterisk as the sd.conf
> >  line and an 8k block size. A test pool create read back
> >  ashift=13, which should be fine.
> >
> >  Then I used format -e and fdisk to create a single Solaris
> >  slice with start and end blocks that would fall on a
> >  cylinder number that was divisible by 8 (I did it like for
> >  4K disks, if I'm wrong I'd like some input on that please).
> >
> >  Then I created the rpool in the first slice and moved the
> >  disk over to the final system, where I sent and received the
> >  zfs datasets. Change menu.lst, install grub and I was done.
> >  Working fine so far (160MB/sec sustained on a scrub, normal
> >  wsvc_time and asvc_time in iostat).
> >
> >  Would anyone see any problem in doing the sd.conf line
> >  backwards, I mean masking all drives as "**" with a 4K
> >  blocksize and adding entries for the other 512-byte drives?
> >  What would happen if ZFS thought a 512-byte block device
> >  used a 4K block size? Wasted space, performance issue,
> >  errors?
> >
> >  Thanks, as always, for any and all input.
> >
> >
> > Bryan
> >
> >
> >  On Sun, 5 Jan 2014 16:10:58 -0800 (PST)
> >  Reginald Beardsley 
> >  wrote:
> >
> >  > I can't comment on your particular issue which is why I
> >  didn't respond earlier.  However,  I had trouble
> >  getting a 3 TB Toshiba USB drive to work properly.  I
> >  spent a bunch of time reading the code that parses sd.conf,
> >  but finally gave up and didn't go further.
> >  >
> >  > On reflection I wonder if the problem is a failure to
> >  propagate the information about block size to ZFS correctly
> >  under certain circumstances.  I think it would be nice
> >  if one could force ashift when creating a pool.
> >  Automagic is nice, but there's often no substitute for human
> >  intelligence.  I spent a couple of days trying to
> >  persuade OI to create a properly aligned pool before I gave
> >  up.
> >  >
> >  >
> >  >
> >  >
> >  > ___
> >  > OpenIndiana-discuss mailing list
> >  > OpenIndiana-discuss@openindiana.org
> >  > http://openindiana.org/mailman/listinfo/openindiana-discuss
> >  >
> >
> >
> >  --
> >  Bryan N Iotti
> >
> >  +39 366 3708436
> >  ironsides.med...@runbox.com
> >
> >  ___
> >  OpenIndiana-discuss mailing list
> >  OpenIndiana-discuss@openindiana.org
> >  http://openindiana.org/mailman/listinfo/openindiana-discuss
> >
> >
> > __

Re: [OpenIndiana-discuss] NFS

2014-01-30 Thread Schweiss, Chip
I ran into a similar issue on OmniOS 151008j recently.

When I ran 'rpcinfo -p {nfs_server}' it returned access denied.

Restarting the rpc service fixed it:

svcadm restart svc:/network/rpc/bind:default

I don't know what put the server in that state, but it's happened only once
on the heavily used NFS server.

-Chip


On Thu, Jan 30, 2014 at 4:54 PM, Edward Ned Harvey (openindiana) <
openindi...@nedharvey.com> wrote:

> > From: Edward Ned Harvey (openindiana)
>
> It *appears* that NFSv4 is fine in both 151a7 and 151a9.
> It *appears* that NFSv3 is broken in 151a9, which was, unfortunately,
> necessary to support the ESXi client and the Ubuntu 10.04 client.
>
> ___
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] VMware

2013-08-10 Thread Schweiss, Chip
I use it to back 250 VMs on 10 ESXi hosts via NFS: 40 NL-SAS spindles,
1.4TB of L2ARC, 2 ZIL SSDs, 72GB of RAM.  Couldn't ask for a better platform
for VM storage.

-Chip

On Sat, Aug 10, 2013 at 5:11 AM, James Relph wrote:

>
> Hi all,
>
> Is anybody using Oi as a data store for VMware using NFS or iSCSI?
>
> Thanks,
>
> James.
>
> Sent from my iPhone
>
>
> ___
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] 4TB SAS drive with 4KB sectors (Advanced Format)

2013-07-31 Thread Schweiss, Chip
The 4 TB Constellation ES.3 has 512b native sectors.

If you want 4K sectors on it you can override the reported physical block
size in /kernel/drv/sd.conf.  The drive will still be 512b, but ZFS will use
ashift=12 when configuring the pool.
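A sketch of the sd.conf entry (check the exact vendor/product string your
drives report with 'iostat -En'; mine may not match yours byte for byte):

sd-config-list = "SEAGATE ST4000NM0023", "physical-block-size:4096";
# reload with:  update_drv -vf sd   (or just reboot)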

This is working for me on a pool built with these drives.

I have had very good experience with Seagate Constellations.

-Chip

On Tue, Jul 30, 2013 at 7:18 AM, Geoff Nordli  wrote:

>
> I am looking at building a new pool with 4KB sectors.
>
> Does the ST4000NM0023  Constellation ES.3 4TB SAS drive work properly?
>
> Any other suggestions for a 4TB SAS drive?
>
> thanks,
>
> Geoff
>
> ___
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] Crash when zfs unmount -f

2013-05-03 Thread Schweiss, Chip
I'm running OI 151a7 on a SuperMicro 1026-RFT+ w/ 1 CPU & 72GB ECC DDR3,
with NL-SAS disks connected via an LSI 9201-16e HBA and LSI SAS expanders.

I've seen the system dump and reboot a few times now when a zfs folder is
either forcefully unmounted or destroyed.  I believe each time this has
happened a client may have been connected via NFS.

/var/adm/messages doesn't have any information about the crash.  This most
recent time I reset the server because the dump was taking too long.
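In case it matters, the dump side can be checked and collected with something
like this (dumpadm(1M)/savecore(1M); the paths are the defaults, not
necessarily what is configured here):

dumpadm                              # shows the dump device and savecore directory
savecore -v /var/crash/`hostname`    # after reboot, writes the crash dump out for debugging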

I suspect this may be an OI or Illumos bug, but the only place I'm seeing
the crash is on the console, where most of the information has scrolled off
the screen.

How do I capture everything possible about this crash so it can be debugged?

-Chip
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] PACS DICOM server based on OI presented as a poster at CMBBE 2013!

2013-04-04 Thread Schweiss, Chip
Bryan,

I'm guessing the image was attached.  It didn't seem to make it through the
mailing list.

We have a PACS project going on in my department at Washington University
in St. Louis.   I'd love to see your poster.

Could you email it to me directly or post a link for it?

-Chip


On Thu, Apr 4, 2013 at 1:51 PM, Bryan N Iotti wrote:

> Hi all,
>
>   As I had told you a while back, the OI-based PACS server I installed in
> my University was accepted as one of the two posters I'm presenting here in
> Salt Lake City, USA, at the 11th International Symposium of Computer
> Methods in Biomechanics and Biomedical Engineering.
>
> I did my best to ensure that OI gets the visibility it deserves for this,
> since without it the system would not be running as well as it has.
>
> Here's what the poster looks like (scaled down of course, the real one is
> 850x850mm):
> PACS Poster
>
>
> Bryan
>
> ___
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>
>
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Copying a clone with zfs send/receive

2013-04-04 Thread Schweiss, Chip
Answering my own question: I have figured out the problem.

I should not be creating the clone destination first.  zfs receive figures
out that a clone is being sent and creates the clone itself.

zfs send -i sourcepool/parentfolder@origin sourcepool/clonefolder@snapshot | zfs receive -d targetpool
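(The origin snapshot does have to exist on the target before that incremental
stream is received, so the parent is sent first, e.g.:)

zfs send sourcepool/parentfolder@origin | zfs receive -d targetpool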


On Thu, Apr 4, 2013 at 8:42 AM, Schweiss, Chip  wrote:

> In the past when I've sent an entire zpool, clones would copy to the
> receiving pool just fine.
>
> I'm trying to figure out how to get a single clone to copy with zfs
> send/receive.
>
> So far the primary zfs folder and snapshots have been copied with all
> snapshots.
>
> Where I'm struggling is getting the clone to copy.
>
> I've tried several combinations of sending incremental to full streams,
> but nothing seems to work.
>
> Each time I have created the clone on the receiving pool first from the
> corresponding snapshot.
>
> Can anyone give me a clue as to how I do this?
>
> Thanks!
>  -Chip
>
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] Copying a clone with zfs send/receive

2013-04-04 Thread Schweiss, Chip
In the past when I've sent an entire zpool, clones would copy to the
receiving pool just fine.

I'm trying to figure out how to get a single clone to copy with zfs
send/receive.

So far the primary zfs folder and snapshots have been copied with all
snapshots.

Where I'm struggling is getting the clone to copy.

I've tried several combinations of sending incremental to full streams, but
nothing seems to work.

Each time I have created the clone on the receiving pool first from the
corresponding snapshot.

Can anyone give me a clue as to how I do this?

Thanks!
-Chip
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] Multiple devices appearing as one behind SAS expander

2013-03-21 Thread Schweiss, Chip
No interposer; they're connected directly in the JBOD.  So there is no
multipath available to them, but since the World Wide Name is not unique
across them, OI is seeing them as one device.
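(If it's scsi_vhci grouping them by that duplicate WWN, disabling MPxIO for
the HBA driver should split them apart again; treat this as an untested
sketch:)

stmsboot -D mpt_sas -d     # disable MPxIO for mpt_sas HBAs; needs a reboot
# or set  mpxio-disable="yes";  in /kernel/drv/mpt_sas.conf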

Curiously, how do you physically use one of those interposers?  The drive
would not be able to be fully inserted into the hotswap bay with that
attached.

-Chip

>
> > How do I break this and get OI to see these as 4 independent devices?
>
> These are SATA SSDs, I presume. Are they attached to the SAS array via
> an interposer board?
>
> http://www.lsi.com/products/storagecomponents/Pages/LSISS9252.aspx
>
> --
> Saso
>
> ___
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] Multiple devices appearing as one behind SAS expander

2013-03-21 Thread Schweiss, Chip
I have four 480GB SSDs I'm trying to re-add to my pool as L2ARC.  They
were all connected internally on the server, but now I have a dedicated SAS
JBOD for the 2.5" SSDs in the system.  The JBOD is a SuperMicro 2U with an
LSI SAS2X36 expander.

The SSDs are all showing up as a single device: c11t1200d0s0

They are incorrectly identified as the same device by mpathadm:

mpathadm list lu:

/dev/rdsk/c11t1200d0s2
Total Path Count: 4
Operational Path Count: 4

Not sure where napp-it gets its information for its controller view, but
this is how it reports them:

 c14::w5003048001d91b4e,0   connected   configured   unknown   Client
Device: /dev/dsk/c11t1200d0s0(sd75) disk-path n /devices/pci@0
,0/pci8086,340e@7/pci1000,30d0@0/iport@f000:scsi::w5003048001d91b4e,0
 c14::w5003048001d91b4f,0   connected   configured   unknown   Client
Device: /dev/dsk/c11t1200d0s0(sd75) disk-path n /devices/pci@0
,0/pci8086,340e@7/pci1000,30d0@0/iport@f000:scsi::w5003048001d91b4f,0
 c14::w5003048001d91b5a,0   connected   configured   unknown   Client
Device: /dev/dsk/c11t1200d0s0(sd75) disk-path n /devices/pci@0
,0/pci8086,340e@7/pci1000,30d0@0/iport@f000:scsi::w5003048001d91b5a,0
 c14::w5003048001d91b5b,0   connected   configured   unknown   Client
Device: /dev/dsk/c11t1200d0s0(sd75) disk-path n /devices/pci@0
,0/pci8086,340e@7/pci1000,30d0@0/iport@f000:scsi::w5003048001d91b5b,0

How do I break this and get OI to see these as 4 independent devices?

Thank you,
-Chip
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] EFI labled devices on root pools

2013-02-19 Thread Schweiss, Chip
Well that's embarrassing.

Thanks for debugging my error.  It all works now.

-Chip
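For the archives, the sequence Jim describes below boils down to roughly this
(device names are from this thread; double-check everything before running):

fdisk -B /dev/rdsk/c4t5001517972EE64A2d0p0    # one Solaris2 partition spanning the disk
# give the disk an SMI label in format -e, then copy the slice table over:
prtvtoc /dev/rdsk/c4t5001517972EE63FBd0s2 | fmthard -s - /dev/rdsk/c4t5001517972EE64A2d0s2
zpool attach -f rpool c4t5001517972EE63FBd0s0 c4t5001517972EE64A2d0s0
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4t5001517972EE64A2d0s0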

On Tue, Feb 19, 2013 at 8:52 AM, Jim Klimov  wrote:

> On 2013-02-19 15:44, Schweiss, Chip wrote:
>
>> When I attempted to add a second disk to mirror the rpool the zpool
>> command
>> reports: EFI labeled devices are not supported on root pools.
>>
>> root@hcp-dr-zfs01:~# prtvtoc /dev/rdsk/c4t5001517972EE63FBd0s2 |
>> fmthard -s - /dev/rdsk/c4t5001517972EE64A2d0s2
>> fmthard: Partition 2 specifies the full disk and is not equal
>> full size of disk.  The full disk capacity is 156273152 sectors.
>> fmthard:  New volume table of contents now in place.
>> root@hcp-dr-zfs01:~# zpool attach -f rpool c4t5001517972EE63FBd0s0
>> c4t5001517972EE64A2d0
>> cannot label 'c4t5001517972EE64A2d0': EFI labeled devices are not
>> supported on root pools.
>>
>> Seems the installer is supporting EFI labeled disks for rpool but the zpool
>> command is not.
>>
>
> If you look carefully, you're attaching a "d0" (whole disk with
> a ZFS-created EFI partitioning to map it all) to an "s0" (slice
> in Solaris SMI label in MBR partition), so you are not really
> doing the right thing.
>
> Also, AFAIK, it is a long-standing issue with only SMI/MBR label
> being supported for bootable rpools (and it is more a bootloader
> problem with GRUB vs. GRUB2 than that of the installer).
>
>
>
>> How do I work around this?
>>
>
> Redefine the second disk to MBR partitioning. Label its partition
> with Solaris2 SMI. Provide a slice to be attached into the rpool.
> Run installgrub to make the second disk bootable. Verify that the
> MBR partition for Solaris is marked active, just in case.
>
> HTH,
> //Jim
>
>
> ___
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] EFI labled devices on root pools

2013-02-19 Thread Schweiss, Chip
I just reported this bug on the issue tracker, hopefully someone here can
give me advice on how to work around this problem.

When I installed 151a7 on a new system with disks connected directly to an
LSI-9207-8i HBA, the installer correctly located the attached disks, built
the rpool, and it boots fine:

root@hcp-dr-zfs01:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

NAME                       STATE     READ WRITE CKSUM
rpool                      ONLINE       0     0     0
  c4t5001517972EE63FBd0s0  ONLINE       0     0     0

errors: No known data errors

When I attempted to add a second disk to mirror the rpool the zpool command
reports: EFI labeled devices are not supported on root pools.

root@hcp-dr-zfs01:~# prtvtoc /dev/rdsk/c4t5001517972EE63FBd0s2 |
fmthard -s - /dev/rdsk/c4t5001517972EE64A2d0s2
fmthard: Partition 2 specifies the full disk and is not equal
full size of disk.  The full disk capacity is 156273152 sectors.
fmthard:  New volume table of contents now in place.
root@hcp-dr-zfs01:~# zpool attach -f rpool c4t5001517972EE63FBd0s0
c4t5001517972EE64A2d0
cannot label 'c4t5001517972EE64A2d0': EFI labeled devices are not
supported on root pools.

Seems the installer is supporting EFI labeled disks for rpool but the zpool
command is not.

How do I work around this?

-Chip
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] OpenIndiana on EC2

2012-06-19 Thread Schweiss, Chip
I've been working from the wiki page:
http://wiki.openindiana.org/oi/Creating+OpenIndiana+EC2+image

No problem getting OI running in Xen on Debian Squeeze w/ the pv-grub.gz
built with the supplied patches.   The problem comes in when launching on
EC2.  The kernel can never mount root.

I've moved my testing to my local Xen and get the same problem anytime the
device numbers are set to 2048 and 2064:

disk = [
'file:/etc/xen/vm/oi_boot.img,2048,w','file:/etc/xen/vm/images/oi_147.img,2064,w'
]

It doesn't seem to matter which build I'm using.   I've tried 147, 148,
151a and 151a3.   This leads me to believe I'm missing something.

If I set the device numbers to 0 and 1,  OI boots fine.

While it was running this way I created the links in /dev/dsk and /dev/rdsk
and changed the device numbers back to 2048 and 2064, but it still fails to
mount root:

Booting command-list

findroot (pool_rpool,0,a)

 Filesystem type is zfs, partition type 0xbf
bootfs rpool/ROOT/openindiana
kernel$ /platform/i86xpv/kernel/$ISADIR/unix -B $ZFS-BOOTFS
loading '/platform/i86xpv/kernel/$ISADIR/unix -B $ZFS-BOOTFS' ...

'/platform/i86xpv/kernel/amd64/unix -B zfs-bootfs=rpool/85,bootpath="/xpvd/xdf@
1:a"' is loaded
module$ /platform/i86pc/$ISADIR/boot_archive
loading '/platform/i86pc/$ISADIR/boot_archive' ...

'/platform/i86pc/amd64/boot_archive' is loaded

kexec(..,'/platform/i86xpv/kernel/amd64/unix -B zfs-bootfs=rpool/85,bootpat
h="/xpvd/xdf@1:a"')
xc: error: panic: xc_dom_bzimageloader.c:556:
xc_dom_probe_bzimage_kernel: kernel is not a bzImage: Invalid kernel

close blk: backend=/local/domain/0/backend/vbd/68/2048 node=device/vbd/2048
close blk: backend=/local/domain/0/backend/vbd/68/2064 node=device/vbd/2064
v4.0.1 chgset 'unavailable'
OpenIndiana Build oi_147 64-bit

SunOS Release 5.11 - Copyright 1983-2010 Oracle and/or its affiliates.
All rights reserved. Use is subject to license terms.
NOTICE: Can not read the pool label from '/xpvd/xdf@1:a'
NOTICE: spa_import_rootpool: error 5

Cannot mount root on /xpvd/xdf@1:a fstype zfs

panic[cpu0]/thread=fbc609e0: vfs_mountroot: cannot mount root

Warning - stack not written to the dump buffer
fbcb5080 genunix:vfs_mountroot+33e  ()

fbcb50b0 genunix:main+136  ()
fbcb50c0 unix:_locore_start+7e ()

skipping system dump - no dump device configured
rebooting...


I'm guessing the links were not loaded properly.  Any help would be greatly
appreciated.
___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss