Hello, I have a strange issue.

I have a setup with a 24-disk enclosure connected via an LSI3801-R. I created 
two pools; together the pools contain 24 healthy disks. 

I disabled LUN persistence on the LSI adapter. 

When I cold-boot the server (powered off by pulling all power cables), warnings 
are shown on the console during boot: 

....
WARNING: /pci/path/ ... (mpt0): 
        wwn for target has changed
WARNING: /pci/path/ ... (mpt0): 
        wwn for target has changed
...

It seems that the disks take longer to come up than the SES enclosure-services 
device. 

SAS1068E's links are 3.0 G, 3.0 G, 3.0 G, 3.0 G, down, down, down, down

 B___T     SASAddress     PhyNum  Handle  Parent  Type
        500605b0015a5130           0001           SAS Initiator
        500605b0015a5131           0002           SAS Initiator
        500605b0015a5132           0003           SAS Initiator
        500605b0015a5133           0004           SAS Initiator
        500605b0015a5134           0005           SAS Initiator
        500605b0015a5135           0006           SAS Initiator
        500605b0015a5136           0007           SAS Initiator
        500605b0015a5137           0008           SAS Initiator
        50030480003ac2ff     0     0009    0001   Edge Expander
 0   1  50030480003ac2c4     4     000a    0009   SATA Target
 0   2  50030480003ac2c5     5     000b    0009   SATA Target
 0   3  50030480003ac2c6     6     000c    0009   SATA Target
 0   4  50030480003ac2c7     7     000d    0009   SATA Target
 0   5  50030480003ac2c8     8     000e    0009   SATA Target
 0   8  50030480003ac2c9     9     000f    0009   SATA Target
 0   6  50030480003ac2ca    10     0010    0009   SATA Target
 0   7  50030480003ac2cb    11     0011    0009   SATA Target
 0  12  50030480003ac2cc    12     0012    0009   SATA Target
 0  10  50030480003ac2cd    13     0013    0009   SATA Target
 0  11  50030480003ac2ce    14     0014    0009   SATA Target
 0   9  50030480003ac2cf    15     0015    0009   SATA Target
 0  13  50030480003ac2d0    16     0016    0009   SATA Target
 0  14  50030480003ac2d1    17     0017    0009   SATA Target
 0  15  50030480003ac2d2    18     0018    0009   SATA Target
 0  16  50030480003ac2d3    19     0019    0009   SATA Target
 0  19  50030480003ac2d4    20     001a    0009   SATA Target
 0  20  50030480003ac2d5    21     001b    0009   SATA Target
 0  17  50030480003ac2d6    22     001c    0009   SATA Target
 0  18  50030480003ac2d7    23     001d    0009   SATA Target
 0  21  50030480003ac2d8    24     001e    0009   SATA Target
 0  22  50030480003ac2d9    25     001f    0009   SATA Target
 0  23  50030480003ac2da    26     0020    0009   SATA Target
 0  24  50030480003ac2db    27     0021    0009   SATA Target
 0   0  50030480003ac2fd    36     0022    0009   SAS Initiator and Target

When I log in to the system, format shows only 22 disks, although all pools 
are healthy and contain 24 disks. 

> zpool status | grep c0 | awk '{print $1}' | sed -e "s/[[:space:]]//g" \
>   -e "s/c[0-9]*t\([0-9]*\)d[0-9]*/\1 \0/g" | sort -n

This prints one line per disk, 24 lines in total (this is OK, the pools have 24 disks).
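
(For what it's worth, a simpler way to get the same count would be something like

> zpool status | grep -c "c0t"

which should also print 24, assuming all pool disks are on controller c0 and 
nothing else in the zpool status output happens to match.)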

I did a scrub and the pools are still healthy, with all disks there.

> format
Shows only 22 disks; two disks are missing. (This is STRANGE.)

ls -al /dev/dsk/c0t8d0 says "no such file or directory"
ls -al /dev/dsk/c0t7d0 says "no such file or directory"
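
As far as I understand, the /dev/dsk entries are only symlinks into the /devices 
tree, so comparing against a disk that does show up should tell whether only the 
links are gone. Something like (c0t1d0s0 is just an example of a disk that is 
present, the slice does not matter here):

> ls -l /dev/dsk/c0t1d0s0     (a present disk: should be a symlink into /devices)
> ls -l /dev/dsk/c0t7d0s0     (one of the missing ones)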

Then I used zdb to find the physical paths. They are: 

/devices/p...@0,0/pci8086,3...@9/pci1000,3...@0/s...@7,0:a
/devices/p...@0,0/pci8086,3...@9/pci1000,3...@0/s...@8,0:a
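
(For reference, I pulled those paths roughly like this; the exact zdb options may 
differ between builds, and <poolname> is just a placeholder:

> zdb -C <poolname>

and then looked at the phys_path entries of the affected vdevs.)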

When I then issue
> ls -al /devices/p...@0,0/pci8086,3...@9/pci1000,3...@0/s...@7,0:a
it says "no such file or directory". 

> ls -al /devices/p...@0,0/pci8086,3...@9/pci1000,3...@0/
.. lots of devices ..

> ls -al /devices/p...@0,0/pci8086,3...@9/pci1000,3...@0/s...@7,0:a
brw-r----- 1 root sys 230, 640 Feb  7 13:28 
/devices/p...@0,0/pci8086,3...@9/pci1000,3...@0/s...@7,0:a

Strange ?? - a "ls" created device links. Is this normal ? 

When I run "format" now, I can see all 24 disks. 

My assumption is that only the device files are missing, while the device itself 
is still there in the kernel and in use. 
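
(My reasoning: the sd instance should still be attached in the kernel, so something 
like

> cfgadm -al | grep c0t7
> iostat -En | grep c0t7d0

should still list the disk even while the /dev/dsk link is missing. That is an 
assumption on my part; I have not double-checked it on this box.)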

How can ZFS use a device that does not exist on the system ?