Re: [OpenIndiana-discuss] Kernel panic on hung zpool accessed via lofi

2015-09-28 Thread Jason Matthews


Sent from my iPhone

> On Sep 16, 2015, at 11:49 AM, Watson, Dan  wrote:
> 
> I've noticed that drives with a labeled WWN tend to be less error prone, and 
> only when a drive completely dies do you get the cascading bus reset that 
> kills all IO.

This is a distinction that only exists in your head.  :)

I have only 768 HGST 600GB SAS drives spinning in production at a time in a 
Hadoop environment. They have no labeled WWN and they may be the best spinning 
rust I have ever used in terms of failures. Also, when the drives do fail, they 
have so far done so in a graceful manner. 

J. 


Re: [OpenIndiana-discuss] Kernel panic on hung zpool accessed via lofi

2015-09-26 Thread Nikola M

On 09/16/15 10:42 PM, Andrew Gabriel wrote:
>> Also, which OI/illumos version is that? I read a while ago that some
>> mpt_sas bugs were fixed in illumos.
>
> Somewhere around 18 months ago IIRC, Nexenta pushed a load of fixes
> for this into their git repo. I don't think I've seen these picked up
> yet by Illumos, although maybe I missed it? The fixes were in mpt_sas
> and FMA, to more accurately determine when disks are going bad by
> pushing the timing of the SCSI commands right down to the bottom of
> the stack (so delays in the software stack are not mistaken for bad
> drives), and to have FMA better analyse and handle errors when they do
> happen.


It is strange how companies in the illumos ecosystem do not push their 
changes upstream to illumos (or push them very slowly and late), but keep 
them to themselves in their own codebases.
And if other distros want those changes, they need to port them 
themselves, possibly in different ways.


Maybe illumos has a high bar on code quality for what does or does not enter 
illumos,
given the "always stable" mantra and the absence of stable/supportable/older 
illumos releases and development branches.




Re: [OpenIndiana-discuss] Kernel panic on hung zpool accessed via lofi

2015-09-16 Thread Rich Murphey
I'm also seeing panics in 'deadman' caused by failing drives, not only SATA
drives but SAS and SATA NAS drives as well.

In my limited experience, I've found SMART stats very effective in
resolving drive issues, using the five specific metrics recommended by
Backblaze (listed below).
By eliminating drives that had non-zero values for any of these specific
metrics, I eliminated the panics (in my case, at least).

I mention this also because drives that are gradually failing can cause
intermittent hangs and lead one to suspect SAS cables, expanders, etc.
I don't want to discourage you from swapping other parts to try to resolve
issues, but rather suggest looking at the SMART metrics as well (a quick
smartctl check is sketched after the list below).

Best regards,
Rich


   - SMART 5 – Reallocated_Sector_Count.
   - SMART 187 – Reported_Uncorrectable_Errors.
   - SMART 188 – Command_Timeout.
   - SMART 197 – Current_Pending_Sector_Count.
   - SMART 198 – Offline_Uncorrectable.

https://www.backblaze.com/blog/hard-drive-smart-stats/
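
For reference, a minimal way to pull just those five attributes is with
smartctl (from smartmontools, if installed). A sketch: the device path below is
one of the disks named elsewhere in this thread, and '-d sat' is an assumption
for SATA disks sitting behind a SAS HBA, so adjust both for your setup.

    # print the five Backblaze attributes; a non-zero RAW_VALUE is the red flag
    smartctl -d sat -A /dev/rdsk/c1t50014EE0037B0FF3d0 \
        | egrep '^ *(5|187|188|197|198) '

If '-d sat' doesn't match your controller, 'smartctl --scan' may suggest a
working device type.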




On Wed, Sep 16, 2015 at 1:50 PM Watson, Dan <dan.wat...@bcferries.com>
wrote:

> I know it's not the best route to go, but for personal use on a budget, SATA
> drives on SAS expanders are much easier to achieve.  Used 3Gbit SAS trays
> with an expander can be had for $12/drive bay, while at retail there is still
> a significant price jump going from a SATA interface to SAS. And even drives
> like the new Seagate 8TB "Cloud backup" drive don't have a SAS option,
> although now that I think about it they are probably marketed more towards
> "personal cloud" devices than actual datacenter-based cloud services.
>
> Also, the newer SATA drives are much less disruptive in a SAS tray than the
> early high-capacity drives. I've noticed that drives with a labeled WWN tend
> to be less error prone, and only when a drive completely dies do you get the
> cascading bus reset that kills all IO. Just don't daisy-chain the SAS
> expanders/trays, because that seems to introduce significant errors.
>
> This is an updated fresh install of OI. I'm not using any special
> publisher, so I imagine it's somewhat out of date.
>
> I've managed to get the zpool working using the read-only import option
> mentioned previously and it seems to be working fine. I'm betting I just
> did not have enough RAM available to do dedup.
>
> Thanks!
> Dan
>
> -Original Message-
> From: Nikola M [mailto:minik...@gmail.com]
> Sent: September 16, 2015 11:25 AM
> To: Discussion list for OpenIndiana
> Subject: Re: [OpenIndiana-discuss] Kernel panic on hung zpool accessed via
> lofi
>
> On 09/11/15 08:57 PM, Watson, Dan wrote:
> > I'm using mpt_sas with SATA drives, and I _DO_ have error counters
> > climbing for some of those drives; is it probably that?
> > Any other ideas?
>
> It is generally strongly advised to use SATA disks on SATA controllers
> and SAS disks on SAS controllers, and to use a controller that can do JBOD.
>
> Also, using SAS-to-SATA multipliers, or using port multipliers at all, is
> strongly discouraged too,
> because they usually contain cheap logic that can go crazy, and then the disk
> is not under the direct control of the controller.
>
> Also, which OI/illumos version is that? I read a while ago that some
> mpt_sas bugs were fixed in illumos.
>
> The first two issues could be hardware problems, and such a config is
> usually unsupportable (I know it is not supported on SmartOS); the third
> issue could be looked into further.
>
>


Re: [OpenIndiana-discuss] Kernel panic on hung zpool accessed via lofi

2015-09-16 Thread Watson, Dan
I know it's not the best route to go, but for personal use on a budget, SATA 
drives on SAS expanders are much easier to achieve.  Used 3Gbit SAS trays with 
an expander can be had for $12/drive bay, while at retail there is still a 
significant price jump going from a SATA interface to SAS. And even drives like 
the new Seagate 8TB "Cloud backup" drive don't have a SAS option, although now 
that I think about it they are probably marketed more towards "personal cloud" 
devices than actual datacenter-based cloud services.
 
Also, the newer SATA drives are much less disruptive in a SAS tray than the 
early high-capacity drives. I've noticed that drives with a labeled WWN tend to 
be less error prone, and only when a drive completely dies do you get the 
cascading bus reset that kills all IO. Just don't daisy-chain the SAS 
expanders/trays, because that seems to introduce significant errors.

This is an updated fresh install of OI. I'm not using any special publisher, so 
I imagine it's somewhat out of date.

I've managed to get the zpool working using the read-only import option 
mentioned previously and it seems to be working fine. I'm betting I just did 
not have enough RAM available to do dedup.
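
The read-only import in question looks roughly like this. A sketch: "oldtank" 
and the /dev/lofi search path come from earlier in the thread, and the flags 
are standard zpool import options rather than the exact command line used here.

    # import without writing anything to the pool; -d points at the lofi devices
    zpool import -o readonly=on -d /dev/lofi oldtank

Because nothing gets written, the import skips log replay and the other 
on-import housekeeping, which is probably why it can succeed where a normal 
import hangs.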

Thanks!
Dan

-Original Message-
From: Nikola M [mailto:minik...@gmail.com] 
Sent: September 16, 2015 11:25 AM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss] Kernel panic on hung zpool accessed via lofi

On 09/11/15 08:57 PM, Watson, Dan wrote:
> I'm using mpt_sas with SATA drives, and I _DO_ have error counters climbing 
> for some of those drives; is it probably that?
> Any other ideas?

It is generally strongly advised to use SATA disks on SATA controllers 
and SAS disks on SAS controllers, and to use a controller that can do JBOD.

Also, using SAS-to-SATA multipliers, or using port multipliers at all, is 
strongly discouraged too,
because they usually contain cheap logic that can go crazy, and then the disk 
is not under the direct control of the controller.

Also, which OI/illumos version is that? I read a while ago that some 
mpt_sas bugs were fixed in illumos.

The first two issues could be hardware problems, and such a config is 
usually unsupportable (I know it is not supported on SmartOS); the third 
issue could be looked into further.




Re: [OpenIndiana-discuss] Kernel panic on hung zpool accessed via lofi

2015-09-14 Thread Watson, Dan
>-Original Message-
>From: Jim Klimov [mailto:jimkli...@cos.ru] 
>Sent: September 12, 2015 10:31 AM
>To: Discussion list for OpenIndiana; Watson, Dan; 
>openindiana-discuss@openindiana.org
>Subject: Re: [OpenIndiana-discuss] Kernel panic on hung zpool accessed via lofi
>
>On 11 September 2015 at 20:57:46 CEST, "Watson, Dan" <dan.wat...@bcferries.com> 
>wrote:
>>Hi all,
>>
>>I've been enjoying OI for quite a while but I'm running into a problem
>>with accessing zpool on disk image files sitting on zfs accessed via
>>lofi that I hope someone can give me a hint on.

>>I have been able to reproduce this problem several times, although it
>>has managed to complete enough to rename the original zpool.
>>
>>Has anyone else encountered this issue with lofi mounted zpools?
>>I'm using mpt_sas with SATA drives, and I _DO_ have error counters
>>climbing for some of those drives, is it probably that?
>>Any other ideas?
>>
>>I'd greatly appreciate any suggestions.
>>
>>Thanks!
>>Dan
>>
>
>From the zpool status I see it also refers to cache disks. Are those device 
>names actually available (present and not used by another pool)? Can you 
>remove them from the pool after you've imported it?
>
>Consider importing with '-N' to not automount (and autoshare) filesystems from 
>this pool, and '-R /a' or some other empty/absent altroot path to ensure lack 
>of conflicts when you do mount (this also does not add the pool into the 
>zpool.cache file for later autoimports). At least, mounting and sharing as a 
>(partially) kernel-side operation is something that might time out...
>
>Also, you might want to tune or disable the deadman timer and increase other 
>acceptable latencies (see OI wiki or other resources).
>
>How much RAM does the box have (you pay the ARC cache twice, for oldtank and 
>for the pool which hosts the dd files)? Maybe tune down primary/secondary 
>caching for the file store.
>
>How did you get into this recovery situation? Maybe oldtank is corrupted and 
>so is trying to recover during import? E.g. I had a history with a deduped 
>pool where I deleted lots of data and the kernel wanted more RAM to process 
>the delete-queue of blocks than I had, and it took dozens of panic-reboots to 
>complete (progress can be tracked with zdb).
>
>Alternatively, you can import the pool read-only to maybe avoid these recoveries 
>altogether, if you only want to retrieve the data.
>
>Jim
>
>--
>Typos courtesy of K-9 Mail on my Samsung Android

I can't remove the cache drives from the zpool, as all zpool commands seem to 
hang waiting for something; the cache devices are not available on this host 
(anymore). I'm hoping they show up as absent/degraded.

I'll try -N and/or -R /a
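
That combination would look roughly like the following. A sketch: the pool 
name, the /dev/lofi directory, and the cache device names come from the zpool 
import listing earlier in the thread.

    # import under an alternate root without mounting any datasets
    zpool import -N -R /a -d /dev/lofi oldtank

    # once imported, drop the cache devices that no longer exist on this host
    zpool remove oldtank c1t50015178F36728A3d0 c1t50015178F3672944d0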

I'll read up on how to tune the deadman timer. I've been looking at 
https://smartos.org/bugview/OS-2415, which has lots of useful things to tune.
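
The deadman knobs discussed in OS-2415 are kernel tunables. A minimal sketch, 
assuming the stock illumos names (zfs_deadman_enabled and 
zfs_deadman_synctime_ms; older builds used zfs_deadman_synctime, in seconds):

    # persistent: append to /etc/system and reboot
    echo 'set zfs:zfs_deadman_enabled = 0' >> /etc/system

    # or flip it on the running kernel with mdb instead
    echo 'zfs_deadman_enabled/W 0' | mdb -kw

Disabling the deadman only suppresses the panic; the hung I/O underneath still 
has to be tracked down.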

I ended up doing this because the original host of the zpool stopped being able 
to make it to multi-user while attached to the disk tray. With SATA disks in a 
SAS tray that usually means (to me) that one of the disks is faulty and is 
sending resets to the controller, causing the whole disk tray to reset. I tried 
identifying the faulty disk, but tested individually all the disks worked fine. 
I decided to try copying the disk images to the alternate host to try to 
recover the data. Further oddities have cropped up on the original host, so I'm 
going to try connecting the original disk tray to an alternate host.

I'll try read-only first. I was unaware there was a way to do this. I obviously 
need a ZFS refresher.

Thanks Jim!

Dan


Re: [OpenIndiana-discuss] Kernel panic on hung zpool accessed via lofi

2015-09-14 Thread Jim Klimov
On 14 September 2015 at 20:23:18 CEST, "Watson, Dan" <dan.wat...@bcferries.com> 
wrote:
>>-Original Message-
>>From: Jim Klimov [mailto:jimkli...@cos.ru] 
>>Sent: September 12, 2015 10:31 AM
>>To: Discussion list for OpenIndiana; Watson, Dan;
>openindiana-discuss@openindiana.org
>>Subject: Re: [OpenIndiana-discuss] Kernel panic on hung zpool accessed
>via lofi
>>
>>On 11 September 2015 at 20:57:46 CEST, "Watson, Dan"
>><dan.wat...@bcferries.com> wrote:
>>>Hi all,
>>>
>>>I've been enjoying OI for quite a while but I'm running into a problem
>>>with accessing zpool on disk image files sitting on zfs accessed via
>>>lofi that I hope someone can give me a hint on.
>
>>>I have been able to reproduce this problem several times, although it
>>>has managed to complete enough to rename the original zpool.
>>>
>>>Has anyone else encountered this issue with lofi mounted zpools?
>>>I'm using mpt_sas with SATA drives, and I _DO_ have error counters
>>>climbing for some of those drives, is it probably that?
>>>Any other ideas?
>>>
>>>I'd greatly appreciate any suggestions.
>>>
>>>Thanks!
>>>Dan
>>>
>>
>>From the zpool status I see it also refers to cache disks. Are those
>device names actually available (present and not used by another pool)?
>Can you remove them from the pool after you've imported it?
>>
>>Consider importing with '-N' to not automount (and autoshare)
>filesystems from this pool, and '-R /a' or some other empty/absent
>altroot path to ensure lack of conflicts when you do mount (and also
>does not add the pool into the zpool.cache file for later autoimports). At
>least, mounting and sharing as a (partially) kernel-side operation is
>something that might time out...
>>
>>Also, you might want to tune or disable the deadman timer and increase
>other acceptable latencies (see OI wiki or other resources).
>>
>>How much RAM does the box have (you pay twice the ARC cache for
>oldtank and for pool which hosts the dd files), maybe tune down
>primary/secondary caching for the files store.
>>
>>How did you get into this recovery situation? Maybe oldtank is
>corrupted and so is trying to recover during import? E.g. I had a
>history with a deduped pool where I deleted lots of data and the kernel
>wanted more RAM to process the delete-queue of blocks than I had, and
>it took dozens of panic-reboots to complete (progress can be tracked
>with zdb).
>>
>>Alternately you can import the pool read-only to maybe avoid these
>recoveries altogether if you only want to retrieve the data.
>>
>>Jim
>>
>>--
>>Typos courtesy of K-9 Mail on my Samsung Android
>
>I can't remove the cache drives from the zpool as all zpool commands
>seem to hang waiting for something but they are not available on the
>host (anymore). I'm hoping they show up as absent/degraded.
>
>I'll try -N and/or -R /a
>
>I'll read up on how to tune the deadman timer, I've been looking at
>https://smartos.org/bugview/OS-2415 and that has lots of useful things
>to tune.
>
>I ended up doing this because the original host of the zpool stopped
>being able to make it to multi-user while attached to the disk. With
>SATA disk in a SAS tray that usually means (to me) that one of the
>disks is faulty and sending resets to the controller causing the whole
>disk tray to reset. I tried identifying the faulty disk but tested
>individually all the disks worked fine. I decided to try copying the
>disk images to the alternate host to try to recover the data. Further
>oddities have cropped up on the original host so I'm going to try
>connecting the original disk tray to an alternate host.
>
>I'll try read-only first. I was unaware there was a way to do this. I
>obviously need a ZFS refresher.
>
>Thanks Jim!
>
>Dan

BTW, regarding '-N' to not mount filesystems from the imported pool - you can 
follow up with 'zfs mount -a' if/when the pool gets imported. It may be that 
one specific dataset fails to mount and/or share due to some fatal errors, 
while the others work OK. If that does bite, on the next reboot you can script 
a one-liner to mount the filesystems one by one to determine the troublemaker. ;)
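
Something along these lines, as a sketch (it assumes the pool was imported with 
-N as above and is named oldtank, per the earlier messages):

    # try each dataset in turn so the one that wedges is obvious
    for fs in $(zfs list -H -o name -r -t filesystem oldtank); do
        echo "mounting $fs"
        zfs mount "$fs" || echo "mount failed: $fs"
    done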
--
Typos courtesy of K-9 Mail on my Samsung Android



Re: [OpenIndiana-discuss] Kernel panic on hung zpool accessed via lofi

2015-09-12 Thread Jim Klimov
On 11 September 2015 at 20:57:46 CEST, "Watson, Dan"  
wrote:
>Hi all,
>
>I've been enjoying OI for quite a while but I'm running into a problem
>with accessing zpool on disk image files sitting on zfs accessed via
>lofi that I hope someone can give me a hint on.
>
>To recover data from a zpool I've copied slice 0 off of all the disks
>to a different host under /alt (zfs file system)
>root@represent:/alt# ls
>c1t50014EE0037B0FF3d0s0.dd  c1t50014EE0AE25CF55d0s0.dd 
>c1t50014EE2081874CAd0s0.dd  c1t50014EE25D6CDE92d0s0.dd 
>c1t50014EE25D6DDBC7d0s0.dd  c1t50014EE2B2C380C3d0s0.dd
>c1t50014EE0037B105Fd0s0.dd  c1t50014EE0AE25EFD1d0s0.dd 
>c1t50014EE20818C0ECd0s0.dd  c1t50014EE25D6DCF0Ed0s0.dd 
>c1t50014EE2B2C27AE2d0s0.dd  c1t50014EE6033DD776d0s0.dd
>
>I use lofiadm to access the disk images as devices because for some
>reason zfs can't access a "device" formatted vdev as a file
>root@represent:/alt# lofiadm
>Block Device File   Options
>/dev/lofi/1  /alt/c1t50014EE0037B0FF3d0s0.dd-
>/dev/lofi/2  /alt/c1t50014EE0037B105Fd0s0.dd-
>/dev/lofi/3  /alt/c1t50014EE0AE25CF55d0s0.dd-
>/dev/lofi/4  /alt/c1t50014EE0AE25EFD1d0s0.dd-
>/dev/lofi/5  /alt/c1t50014EE2081874CAd0s0.dd-
>/dev/lofi/6  /alt/c1t50014EE20818C0ECd0s0.dd-
>/dev/lofi/7  /alt/c1t50014EE25D6CDE92d0s0.dd-
>/dev/lofi/8  /alt/c1t50014EE25D6DCF0Ed0s0.dd-
>/dev/lofi/9  /alt/c1t50014EE25D6DDBC7d0s0.dd-
>/dev/lofi/10 /alt/c1t50014EE2B2C27AE2d0s0.dd-
>/dev/lofi/11 /alt/c1t50014EE2B2C380C3d0s0.dd-
>/dev/lofi/12 /alt/c1t50014EE6033DD776d0s0.dd-
>
>The zpool is identifiable
>root@represent:/alt# zpool import -d /dev/lofi
>   pool: oldtank
> id: 1346358639852818
>  state: ONLINE
> status: One or more devices are missing from the system.
> action: The pool can be imported using its name or numeric identifier.
>   see: http://illumos.org/msg/ZFS-8000-2Q
> config:
>        oldtank          ONLINE
>          raidz2-0       ONLINE
>            /dev/lofi/4  ONLINE
>            /dev/lofi/2  ONLINE
>            /dev/lofi/1  ONLINE
>            /dev/lofi/3  ONLINE
>            /dev/lofi/8  ONLINE
>            /dev/lofi/10 ONLINE
>            /dev/lofi/11 ONLINE
>            /dev/lofi/7  ONLINE
>            /dev/lofi/6  ONLINE
>            /dev/lofi/9  ONLINE
>            /dev/lofi/5  ONLINE
>            /dev/lofi/12 ONLINE
>        cache
>          c1t50015178F36728A3d0
>          c1t50015178F3672944d0
>
>And I import the zpool (this command never exits)
>root@represent:/alt# zpool import -d /dev/lofi oldtank
>
>In another window it is evident that the system has managed to add the
>zpool
>                    extended device statistics              ---- errors ----
>    r/s    w/s   Mr/s   Mw/s   wait  actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
>  101.1    0.0    1.7    0.0    0.3   2.8    2.9   27.5  28 100   0   0   0   0 lofi1
>  118.6    0.0    1.3    0.0    0.3   2.9    2.4   24.3  28 100   0   0   0   0 lofi2
>  123.8    0.0    1.0    0.0    0.3   2.9    2.7   23.3  31  94   0   0   0   0 lofi3
>  133.1    0.0    1.1    0.0    0.4   2.8    2.7   20.7  34  92   0   0   0   0 lofi4
>  144.8    0.0    1.6    0.0    0.2   2.7    1.3   18.7  17  97   0   0   0   0 lofi5
>  132.3    0.0    1.2    0.0    0.2   2.5    1.4   18.7  17  95   0   0   0   0 lofi6
>  100.3    0.0    1.0    0.0    0.2   2.7    1.9   26.6  18 100   0   0   0   0 lofi7
>  117.3    0.0    1.2    0.0    0.2   2.7    1.9   23.3  21  99   0   0   0   0 lofi8
>  142.1    0.0    1.0    0.0    0.3   2.5    1.9   17.3  26  85   0   0   0   0 lofi9
>  142.8    0.0    1.0    0.0    0.2   2.5    1.5   17.4  20  83   0   0   0   0 lofi10
>  144.1    0.0    0.9    0.0    0.3   2.7    2.0   19.0  28  96   0   0   0   0 lofi11
>  101.8    0.0    0.8    0.0    0.2   2.7    2.2   26.1  21  96   0   0   0   0 lofi12
> 1502.1    0.0   13.7    0.0 3229.1  35.3 2149.7   23.5 100 100   0   0   0   0 oldtank
>...
>  195.6    0.0    5.8    0.0    0.0   6.1    0.0   31.4   0  95   0   0   0   0 c0t50014EE25F8307D2d0
>  200.9    0.0    5.8    0.0    0.0   7.5    0.0   37.2   0  97   0   0   0   0 c0t50014EE2B4CAA6D3d0
>  200.1    0.0    5.8    0.0    0.0   7.0    0.0   35.1   0  97   0   0   0   0 c0t50014EE25F74EC15d0
>  197.9    0.0    5.9    0.0    0.0   7.2    0.0   36.2   0  96   0   0   0   0 c0t50014EE25F74DD46d0
>  198.1    0.0    5.5    0.0    0.0   6.7    0.0   34.0   0  95   0   0   0   0 c0t50014EE2B4D7C1C9d0
>  202.4    0.0    5.9    0.0    0.0   6.9    0.0   34.1   0  97   0   0   0   0 c0t50014EE2B4CA8F9Bd0
>  223.9    0.0    6.9    0.0    0.0   8.8    0.0   39.1   0 100   0   0   0   0 c0t50014EE20A2DAE1Ed0
>  201.6    0.0    5.9    0.0    0.0   6.6    0.0   32.9   0  96   0   0   0   0 c0t50014EE25F74F90Fd0

[OpenIndiana-discuss] Kernel panic on hung zpool accessed via lofi

2015-09-11 Thread Watson, Dan
Hi all,

I've been enjoying OI for quite a while, but I'm running into a problem with 
accessing a zpool on disk image files sitting on zfs, accessed via lofi, that I 
hope someone can give me a hint on.

To recover data from a zpool I've copied slice 0 off of all the disks to a 
different host under /alt (zfs file system)
root@represent:/alt# ls
c1t50014EE0037B0FF3d0s0.dd  c1t50014EE0AE25CF55d0s0.dd  
c1t50014EE2081874CAd0s0.dd  c1t50014EE25D6CDE92d0s0.dd  
c1t50014EE25D6DDBC7d0s0.dd  c1t50014EE2B2C380C3d0s0.dd
c1t50014EE0037B105Fd0s0.dd  c1t50014EE0AE25EFD1d0s0.dd  
c1t50014EE20818C0ECd0s0.dd  c1t50014EE25D6DCF0Ed0s0.dd  
c1t50014EE2B2C27AE2d0s0.dd  c1t50014EE6033DD776d0s0.dd

I use lofiadm to access the disk images as devices because, for some reason, 
zfs can't access a "device"-formatted vdev as a file:
root@represent:/alt# lofiadm
Block Device File   Options
/dev/lofi/1  /alt/c1t50014EE0037B0FF3d0s0.dd-
/dev/lofi/2  /alt/c1t50014EE0037B105Fd0s0.dd-
/dev/lofi/3  /alt/c1t50014EE0AE25CF55d0s0.dd-
/dev/lofi/4  /alt/c1t50014EE0AE25EFD1d0s0.dd-
/dev/lofi/5  /alt/c1t50014EE2081874CAd0s0.dd-
/dev/lofi/6  /alt/c1t50014EE20818C0ECd0s0.dd-
/dev/lofi/7  /alt/c1t50014EE25D6CDE92d0s0.dd-
/dev/lofi/8  /alt/c1t50014EE25D6DCF0Ed0s0.dd-
/dev/lofi/9  /alt/c1t50014EE25D6DDBC7d0s0.dd-
/dev/lofi/10 /alt/c1t50014EE2B2C27AE2d0s0.dd-
/dev/lofi/11 /alt/c1t50014EE2B2C380C3d0s0.dd-
/dev/lofi/12 /alt/c1t50014EE6033DD776d0s0.dd-
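
For reference, mappings like the ones above are created one image at a time 
with lofiadm -a; a sketch over the .dd files listed earlier:

    # map each dd image to a block device; lofiadm prints the /dev/lofi/N it creates
    for f in /alt/*.dd; do
        lofiadm -a "$f"
    done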

The zpool is identifiable
root@represent:/alt# zpool import -d /dev/lofi
   pool: oldtank
 id: 1346358639852818
  state: ONLINE
 status: One or more devices are missing from the system.
 action: The pool can be imported using its name or numeric identifier.
   see: http://illumos.org/msg/ZFS-8000-2Q
 config:
        oldtank          ONLINE
          raidz2-0       ONLINE
            /dev/lofi/4  ONLINE
            /dev/lofi/2  ONLINE
            /dev/lofi/1  ONLINE
            /dev/lofi/3  ONLINE
            /dev/lofi/8  ONLINE
            /dev/lofi/10 ONLINE
            /dev/lofi/11 ONLINE
            /dev/lofi/7  ONLINE
            /dev/lofi/6  ONLINE
            /dev/lofi/9  ONLINE
            /dev/lofi/5  ONLINE
            /dev/lofi/12 ONLINE
        cache
          c1t50015178F36728A3d0
          c1t50015178F3672944d0

And I import the zpool (this command never exits)
root@represent:/alt# zpool import -d /dev/lofi oldtank

In another window it is evident that the system has managed to add the zpool
                    extended device statistics              ---- errors ----
    r/s    w/s   Mr/s   Mw/s   wait  actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
  101.1    0.0    1.7    0.0    0.3   2.8    2.9   27.5  28 100   0   0   0   0 lofi1
  118.6    0.0    1.3    0.0    0.3   2.9    2.4   24.3  28 100   0   0   0   0 lofi2
  123.8    0.0    1.0    0.0    0.3   2.9    2.7   23.3  31  94   0   0   0   0 lofi3
  133.1    0.0    1.1    0.0    0.4   2.8    2.7   20.7  34  92   0   0   0   0 lofi4
  144.8    0.0    1.6    0.0    0.2   2.7    1.3   18.7  17  97   0   0   0   0 lofi5
  132.3    0.0    1.2    0.0    0.2   2.5    1.4   18.7  17  95   0   0   0   0 lofi6
  100.3    0.0    1.0    0.0    0.2   2.7    1.9   26.6  18 100   0   0   0   0 lofi7
  117.3    0.0    1.2    0.0    0.2   2.7    1.9   23.3  21  99   0   0   0   0 lofi8
  142.1    0.0    1.0    0.0    0.3   2.5    1.9   17.3  26  85   0   0   0   0 lofi9
  142.8    0.0    1.0    0.0    0.2   2.5    1.5   17.4  20  83   0   0   0   0 lofi10
  144.1    0.0    0.9    0.0    0.3   2.7    2.0   19.0  28  96   0   0   0   0 lofi11
  101.8    0.0    0.8    0.0    0.2   2.7    2.2   26.1  21  96   0   0   0   0 lofi12
 1502.1    0.0   13.7    0.0 3229.1  35.3 2149.7   23.5 100 100   0   0   0   0 oldtank
...
  195.6    0.0    5.8    0.0    0.0   6.1    0.0   31.4   0  95   0   0   0   0 c0t50014EE25F8307D2d0
  200.9    0.0    5.8    0.0    0.0   7.5    0.0   37.2   0  97   0   0   0   0 c0t50014EE2B4CAA6D3d0
  200.1    0.0    5.8    0.0    0.0   7.0    0.0   35.1   0  97   0   0   0   0 c0t50014EE25F74EC15d0
  197.9    0.0    5.9    0.0    0.0   7.2    0.0   36.2   0  96   0   0   0   0 c0t50014EE25F74DD46d0
  198.1    0.0    5.5    0.0    0.0   6.7    0.0   34.0   0  95   0   0   0   0 c0t50014EE2B4D7C1C9d0
  202.4    0.0    5.9    0.0    0.0   6.9    0.0   34.1   0  97   0   0   0   0 c0t50014EE2B4CA8F9Bd0
  223.9    0.0    6.9    0.0    0.0   8.8    0.0   39.1   0 100   0   0   0   0 c0t50014EE20A2DAE1Ed0
  201.6    0.0    5.9    0.0    0.0   6.6    0.0   32.9   0  96   0   0   0   0 c0t50014EE25F74F90Fd0
  210.9    0.0    6.0    0.0    0.0   8.7    0.0   41.5   0 100   0   0   0   0 