Re: [zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread P-O Yliniemi

Jim Klimov skrev 2012-03-13 15:24:

2012-03-13 16:52, Hung-Sheng Tsao (LaoTsao) Ph.D wrote:

Hi,
are the disk/SAS controllers the same on both servers?


Seemingly no. I don't see the output of "format" on Server2,
but for Server1 I see that the 3TB disks are used as IDE
devices (probably with motherboard SATA-IDE emulation?),
while on Server2 the addressing looks like SAS with WWN names.


Correct, the servers are all different.
Server1 is an HP xw8400, and the disks are connected to the first four 
SATA ports (the xw8400 has both SAS and SATA ports, of which I use the 
SAS ports for the system disks).
On Server2, the disk controller used for the data disks is an LSI SAS 
9211-8i, updated with the latest IT-mode firmware (also tested with the 
original IR-mode firmware).


The output of the 'format' command on Server2 is:

AVAILABLE DISK SELECTIONS:
   0. c2t0d0 
  /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@0,0
   1. c2t1d0 
  /pci@0,0/pci8086,3410@9/pci15d9,5@0/sd@1,0
   2. c3d1 
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c4d0 
  /pci@0,0/pci-ide@1f,5/ide@0/cmdk@0,0
   4. c7t5000C5003F45CCF4d0 
  /scsi_vhci/disk@g5000c5003f45ccf4
   5. c7t5000C50044E0F0C6d0 
  /scsi_vhci/disk@g5000c50044e0f0c6
   6. c7t5000C50044E0F611d0 
  /scsi_vhci/disk@g5000c50044e0f611

Note that this is what it looks like now, not at the time I sent the 
question. The difference is that I have set up three other disks (items 
4-6) on the new server, and am currently transferring the contents from 
Server1 to this one using zfs send/receive.
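
(For reference, the transfer is just a plain recursive send/receive 
along these lines - the snapshot name and the target host/pool names 
below are placeholders, not necessarily what I actually used:)

# on Server1: snapshot everything and stream it to the new pool
zfs snapshot -r storage@migrate
zfs send -R storage@migrate | ssh server2 zfs receive -Fdu newpool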


I will probably be able to reconnect the correct disks to Server2 
tomorrow, once the data has been transferred to the new disks (at which 
point the problem is 'solved'), in case there is anything else I can do 
to try to solve it the 'right' way.
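
Things that could still be checked tomorrow before giving up on the 
'right' way (the device names below are only examples taken from the 
listings in this thread):

# read the labels through the new WWN device nodes and compare
# pool_guid and txg with the zdb -l output from Server1
zdb -l /dev/dsk/c7t5000C50044E0F316d0s0

# make zpool scan the device directory explicitly, then try importing
# the pool by its numeric id
zpool import -d /dev/dsk
zpool import -d /dev/dsk 17210091810759984780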



It may be possible that on one controller disks are used
"natively" while on another they are attached as a JBOD
or a set of RAID0 disks (so the controller's logic or its
expected layout intervenes), as recently discussed on-list?

On the HP, on a reboot, I was reminded that the 3TB disks were displayed 
as 800GB-something by the BIOS (although they are correctly identified by 
OpenIndiana and ZFS). This could be part of the problem with exporting 
and importing the pool.
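
If the BIOS is limited to 32-bit LBA addressing, the wrap-around works 
out to roughly that figure (assuming the usual 3,000,592,982,016-byte 
capacity of these 3TB drives):

2^32 sectors * 512 bytes = 2,199,023,255,552 bytes  (the 2.2TB limit)
3,000,592,982,016 - 2,199,023,255,552 = 801,569,726,464 bytes (~800GB)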



On Mar 13, 2012, at 6:10, P-O Yliniemi  wrote:


Hello,

I'm currently replacing a temporary storage server (server1) with 
the one that should be the final one (server2). To keep the data 
storage from the old one I'm attempting to import it on the new 
server. Both servers are running OpenIndiana server build 151a.


Server 1 (old)
The zpool consists of three disks in a raidz1 configuration:
# zpool status
c4d0ONLINE   0 0 0
c4d1ONLINE   0 0 0
c5d0ONLINE   0 0 0

errors: No known data errors

Output of format command gives:
# format
AVAILABLE DISK SELECTIONS:
   0. c2t1d0sec 126>
  
/pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0

   1. c4d0
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c4d1
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c5d0
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0



Server 2 (new)
I have attached the disks on the new server in the same order (which 
shouldn't matter as ZFS should locate the disks anyway)

zpool import gives:

root@backup:~# zpool import
   pool: storage
 id: 17210091810759984780
  state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

storageUNAVAIL  insufficient replicas
  raidz1-0 UNAVAIL  corrupted data
c7t5000C50044E0F316d0  ONLINE
c7t5000C50044A30193d0  ONLINE
c7t5000C50044760F6Ed0  ONLINE





[zfs-discuss] Unable to import exported zpool on a new server

2012-03-13 Thread P-O Yliniemi

Hello,

I'm currently replacing a temporary storage server (server1) with the 
one that should be the final one (server2). To keep the data storage 
from the old one I'm attempting to import it on the new server. Both 
servers are running OpenIndiana server build 151a.


Server 1 (old)
The zpool consists of three disks in a raidz1 configuration:
# zpool status
  pool: storage
 state: ONLINE
  scan: none requested
config:

NAMESTATE READ WRITE CKSUM
storage ONLINE   0 0 0
  raidz1-0  ONLINE   0 0 0
c4d0ONLINE   0 0 0
c4d1ONLINE   0 0 0
c5d0ONLINE   0 0 0

errors: No known data errors

Output of format command gives:
# format
AVAILABLE DISK SELECTIONS:
   0. c2t1d0 sec 126>

  /pci@0,0/pci8086,25e2@2/pci8086,350c@0,3/pci103c,3015@6/sd@1,0
   1. c4d0 
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
   2. c4d1 
  /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
   3. c5d0 
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
   4. c5d1 
  /pci@0,0/pci-ide@1f,2/ide@1/cmdk@1,0

(c5d1 was previously used as a hot spare, but I removed it in an attempt 
to export and import the zpool without the spare)
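
(Removing the spare was just the ordinary spare removal:)

# zpool remove storage c5d1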


# zpool export storage

# zpool list
(shows only rpool)

# zpool import
   pool: storage
 id: 17210091810759984780
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

storage ONLINE
  raidz1-0  ONLINE
c4d0ONLINE
c4d1ONLINE
c5d0ONLINE

(a check to see that it is still importable on the old server; this has 
also been verified, since I moved the disks back to the old server 
yesterday to keep it available during the night)


zdb -l output in attached files.

---

Server 2 (new)
I have attached the disks on the new server in the same order (which 
shouldn't matter as ZFS should locate the disks anyway)

zpool import gives:

root@backup:~# zpool import
   pool: storage
 id: 17210091810759984780
  state: UNAVAIL
 action: The pool cannot be imported due to damaged devices or data.
 config:

storageUNAVAIL  insufficient replicas
  raidz1-0 UNAVAIL  corrupted data
c7t5000C50044E0F316d0  ONLINE
c7t5000C50044A30193d0  ONLINE
c7t5000C50044760F6Ed0  ONLINE

The problem is that all the disks are there and online, but the pool is 
showing up as unavailable.


Any ideas on what more I can do to solve this problem?

Regards,
  PeO



# zdb -l c4d0s0

LABEL 0

version: 28
name: 'storage'
state: 0
txg: 2450439
pool_guid: 17210091810759984780
hostid: 13183520
hostname: 'backup'
top_guid: 11913540592052933027
guid: 14478395923793210190
vdev_children: 1
vdev_tree:
type: 'raidz'
id: 0
guid: 11913540592052933027
nparity: 1
metaslab_array: 31
metaslab_shift: 36
ashift: 9
asize: 9001731096576
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 14478395923793210190
path: '/dev/dsk/c4d0s0'
devid: 'id1,cmdk@AST3000DM001-9YN166=W1F07HW4/a'
phys_path: '/pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0:a'
whole_disk: 1
create_txg: 4
children[1]:
type: 'disk'
id: 1
guid: 9273576080530492359
path: '/dev/dsk/c4d1s0'
devid: 'id1,cmdk@AST3000DM001-9YN166=W1F05H2Y/a'
phys_path: '/pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0:a'
whole_disk: 1
create_txg: 4
children[2]:
type: 'disk'
id: 2
guid: 6205751126661365015
path: '/dev/dsk/c5d0s0'
devid: 'id1,cmdk@AST3000DM001-9YN166=W1F032RJ/a'
phys_path: '/pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0:a'
whole_disk: 1
create_txg: 4

LABEL 1

version: 28
name: 'storage'
state: 0
txg: 2450439
pool_guid: 17210091810759984780
hostid: 13183520
hostname: 'backup'
top_guid: 11913540592052933027
guid: 14478395923793210190
vdev_children: 1
vdev_tree:
type: 'raidz'
id: 0
guid: 11913540592052933027
nparity: 1
metaslab_array: 31
metaslab_shift: 36
ashift: 9
asize: 9001731096576
is_log: 0
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 14478395923793210190
path: '/dev/dsk/c4d0s0'
devid: 'id1,

Re: [zfs-discuss] Solaris 10u9 with zpool version 22, but no DEDUP (version 21 reserved)

2010-09-11 Thread P-O Yliniemi

Will dedup ever be supported on ZFS/Solaris?

If not, can any potential problems be avoided if I remove (transfer data 
away from) any filesystems with dedup=on?


/PeO

Prabahar Jeyaram skrev 2010-09-11 18:39:

What happens if you use zpools created with OSOL, with dedup enabled, on Solaris 10u9?


Not supported. You are on your own if you encounter any issues.

--
Prabahar.


On Sep 10, 2010, at 10:23 PM, Hans Foertsch wrote:


bash-3.00# uname -a
SunOS testxx10 5.10 Generic_142910-17 i86pc i386 i86pc

bash-3.00# zpool upgrade -v
This system is currently running ZFS pool version 22.

The following versions are supported:

VER  DESCRIPTION
---  
1   Initial ZFS version
2   Ditto blocks (replicated metadata)
3   Hot spares and double parity RAID-Z
4   zpool history
5   Compression using the gzip algorithm
6   bootfs pool property
7   Separate intent log devices
8   Delegated administration
9   refquota and refreservation properties
10  Cache devices
11  Improved scrub performance
12  Snapshot properties
13  snapused property
14  passthrough-x aclinherit
15  user/group space accounting
16  stmf property support
17  Triple-parity RAID-Z
18  Snapshot user holds
19  Log device removal
20  Compression using zle (zero-length encoding)
21  Reserved
22  Received properties

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.

This is an interesting condition...

What happens if you use zpools created with OSOL, with dedup enabled, on Solaris 10u9?

Hans Foertsch


Re: [zfs-discuss] Best usage of SSD-disk in ZFS system

2010-08-08 Thread P-O Yliniemi

 Thanks everyone for the suggestions. To summarize:

* There's no point in using SSDs for the operating system (except an 
incredibly fast boot speed, and probably slightly lower system 
temperature / power usage - but because of the rest of the components in 
the system it doesn't matter). I could probably use a USB key directly 
connected to the USB type A connector on the motherboard.
* Using a separate disk for logging might cause problems if the log 
device fails. To avoid this, keep the log on the disk pool, or mirror 
the log device (see the sketch after this list).
* An SSD as a log device would increase synchronous write speed over 
network-based sharing protocols (NFS or SMB, which will be used at the 
location of this server).
* The performance gain from adding SSD cache and log devices will die 
out (dropping below the speed of the spindle disks) once the log and 
cache disks have been filled to their maximum capacity. So in that case 
the use of SSDs for cache and log is a waste of money and drive bays.

* Use DRAM-based drives like the DDRdrive (or i-RAM?) for the ZIL.
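
A minimal sketch of the mirrored-log plus cache layout mentioned in the 
list above; the pool and device names are only placeholders:

# mirrored SLOG on two SSDs, a third SSD as L2ARC
zpool add tank log mirror c8t0d0 c8t1d0
zpool add tank cache c8t2d0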

I am still able to reorganize the use of the SSD disks (reinstall 
OpenSolaris), and even replace any one of them with 2.5" spindle disks, 
so any more suggestions are highly welcome.

I will do some more performance tests during my work day tomorrow.

Regards,
  PeO


P-O Yliniemi skrev 2010-08-06 12:44:

 Hello!

I have built an OpenSolaris / ZFS based storage system for one of our 
customers. The configuration is roughly as follows:


Motherboard/CPU: SuperMicro X7SBE / Xeon (something, sorry - can't 
remember and do not have my specification nearby)

RAM: 8GB ECC (X7SBE won't take more)
Drives for storage: 16*1.5TB Seagate ST31500341AS, connected to two 
AOC-SAT2-MV8 controllers

Drives for operating system: 2*80GB Intel X25-M (mirror)

ZFS configuration: Two vdevs, raid-z of 7+1 disks per set, striped 
together (gives a zpool with about 21TB storage space)


Disk performance: around 700-800MB/s, tested and timed with 'mkfile' 
and 'time' (a 40GB file is created in just about a minute)
I have a spare X25-M drive of 40GB to use for cache or log (or both), 
but since the disk array is a lot faster than the SSD, I cannot see the 
advantage of using it as a cache device.


Are there any advantages to using a separate log or cache device in 
this case?


Regards,
  PeO



[zfs-discuss] Best usage of SSD-disk in ZFS system

2010-08-06 Thread P-O Yliniemi

 Hello!

I have built an OpenSolaris / ZFS based storage system for one of our 
customers. The configuration is roughly as follows:


Motherboard/CPU: SuperMicro X7SBE / Xeon (something, sorry - can't 
remember and do not have my specification nearby)

RAM: 8GB ECC (X7SBE won't take more)
Drives for storage: 16*1.5TB Seagate ST31500341AS, connected to two 
AOC-SAT2-MV8 controllers

Drives for operating system: 2*80GB Intel X25-M (mirror)

ZFS configuration: Two vdevs, raid-z of 7+1 disks per set, striped 
together (gives a zpool with about 21TB storage space)


Disk performance: around 700-800MB/s, tested and timed with 'mkfile' and 
'time' (a 40GB file is created in just about a minute)
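
(The test itself was nothing more elaborate than the following, with 
the target path being whatever scratch location is mounted from the pool:)

# time mkfile 40g /storage/testfile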
I have a spare X25-M drive of 40GB to use for cache or log (or both), 
but since the disk array is a lot faster than the SSD, I cannot see the 
advantage of using it as a cache device.


Are there any advantages to using a separate log or cache device in this 
case?


Regards,
  PeO



Re: [zfs-discuss] Dedup stats per file system

2010-05-10 Thread P-O Yliniemi

Darren J Moffat skrev 2010-05-10 10:58:

On 08/05/2010 21:45, P-O Yliniemi wrote:

I have noticed that dedup is discussed a lot in this list right now..

Starting to experiment with dedup=on, I feel it would be interesting to
know exactly how efficient dedup is. The problem is that I've found
no way of checking this per file system. I have turned dedup on for a
few file systems to try it out:


You can't, because dedup is per pool, not per filesystem. Each file 
system gets to choose whether it opts in to the pool-wide dedup.


So dedup operates at the pool level rather than the file system level, 
meaning that if I have two file systems with dedup=on, they share blocks 
and checksums pool-wide?
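
In that case the only ratio available seems to be the pool-wide one; 
these are the places I know to look, using my pool name:

# pool-wide dedup ratio
zpool list storage
zpool get dedupratio storage

# dedup table (DDT) summary and histogram
zdb -DD storage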



[zfs-discuss] Dedup stats per file system

2010-05-08 Thread P-O Yliniemi

I have noticed that dedup is discussed a lot in this list right now..

Starting to experiment with dedup=on, I feel it would be interesting to 
know exactly how efficient dedup is. The problem is that I've found 
no way of checking this per file system. I have turned dedup on for a 
few file systems to try it out:


p...@opensolaris-fs:~$ zfs get all storage/virtualbox|grep dedup
storage/virtualbox  dedup  on local
p...@opensolaris-fs:~$ zfs get all storage/testshare|grep dedup
storage/testshare  dedup  on local

zfs list shows a dedup value of 2.22x, mainly because I made a few 
copies of my vdi files in the 'virtualbox' file system. From what I 
understand, this value represents the dedup efficiency for the whole zpool.


For my 'testshare' file system, the dedup value should be about 6x, 
since I just made five copies of a folder with a few hundred files.


The question is: is there any way to check/confirm the level of dedup 
efficiency per file system?


/PeO
