Re: [zfs-discuss] [OpenIndiana-discuss] Question about ZFS/CIFS

2011-08-15 Thread Roy Sigurd Karlsbakk
  We've migrated from an old samba installation to a new box with
  openindiana, and it works well, but... It seems Windows now honours
  the executable bit, so that .exe files for installing packages are
  no longer directly executable. While it is positive that Windows
  honours this bit, it breaks things when we have a software
  repository on this server.
 
  Does anyone know a way to counter this without chmod -R o+x?
 
 Does setting the aclinherit=passthrough-x zfs property on the
 filesystem help?
 
 I'm not sure, but you may still need to do a chmod -R on each
 filesystem to set the ACLs on each existing directory.

Setting aclinherit didn't help much. It seems +x isn't inherited by files, only
by dirs. I found new files are created with the correct permissions, so I just
chmod +x'ed the lot...
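
For the archives, a narrower version of the chmod workaround, limited to the
installer files instead of everything (the repository path is a placeholder):

# find /tank/repo -type f -name '*.exe' -exec chmod o+x {} \;

That keeps the execute-bit change confined to the .exe files rather than
running chmod -R o+x on the whole tree.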

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is
an elementary imperative for all pedagogues to avoid excessive use of
idioms of foreign origin. In most cases adequate and relevant synonyms
exist in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Wrong rpool used after reinstall!

2011-08-15 Thread Stu Whitefish
- Original Message -

 From: Brian Wilson brian.wil...@doit.wisc.edu
 To: zfs-discuss@opensolaris.org
 Cc: 
 Sent: Thursday, August 4, 2011 2:57:26 PM
 Subject: Re: [zfs-discuss] Wrong rpool used after reinstall!
 
 I'm curious - would it work to boot from a live CD, go to shell, and 
 deport/import/rename the old rpool, then boot normally?

Hi Brian,

No, it doesn't work. Kernel panic still happens.

  Most modern boards will boot from a live USB stick.

I think this quote was from Ian. Yes, they will, but only Solaris 11 Express and
OpenIndiana seem to have USB-bootable installers.

I installed Solaris 11 this way. You can see from the screenshot I posted that the
error still occurs.

The data is still inaccessible. I was hoping somebody from Oracle would say
something, but I haven't seen any replies.

Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Stu Whitefish
 On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish

 swhitef...@yahoo.com wrote:
  # zpool import -f tank
 
  http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/
 
 I encourage you to open a support case and ask for an escalation on CR 
 7056738.
 
 -- 
 Mike Gerdts

Hi Mike,

Unfortunately I don't have a support contract. I've been trying to set up a
development system on Solaris and learn it, and until this happened I was
pretty happy with it. Even so, I don't have supported hardware, so I couldn't
buy a contract without buying another machine, and I already have enough
machines that I can't justify the expense right now. I refuse to believe
Oracle would hold people hostage in a situation like this, but I do believe
they could generate a lot of goodwill by fixing this for me and whoever else
it happened to, and by telling us what level of Solaris 10 this is fixed at,
so it doesn't keep happening. It's a pretty serious failure, and I'm not the
only one it happened to.

It's incredible but in all the years I have been using computers I don't ever 
recall losing data due to a filesystem or OS issue.
That includes DOS, Windows, Linux, etc.

I cannot believe ZFS on Intel is so fragile that people lose hundreds of gigs
of data and that's just the way it is. There must be a way to recover this
data, and some advice on preventing it from happening again.

Thanks,
Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS raidz on top of hardware raid0

2011-08-15 Thread Tom Tang
Suppose I want to build a 100-drive storage system. Are there any
disadvantages to setting up 20 arrays of HW RAID0 (5 drives each), then setting
up a ZFS file system on these 20 virtual drives and configuring them as RAIDZ?

I understand people always say ZFS doesn't prefer HW RAID.  In this case, the
HW RAID0 is only for striping (to allow a higher data transfer rate), while the
actual RAID5 (i.e. RAIDZ) is done via ZFS, which takes care of all the
checksumming/error detection/auto-repair.  I guess this will not give up any of
the advantages of using ZFS, while I get a higher data transfer rate.
Is that actually the case?

Any suggestion or comment?  Please kindly advise.  Thanks!
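
(For concreteness, the proposed layout would be a single 20-wide raidz1 vdev
whose "disks" are themselves 5-drive hardware stripes, i.e. something like the
following, with each cXtYd0 being one RAID0 LUN and the device names made up:

# zpool create tank raidz \
    c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
    c2t10d0 c2t11d0 c2t12d0 c2t13d0 c2t14d0 c2t15d0 c2t16d0 c2t17d0 c2t18d0 c2t19d0

The JBOD alternative discussed in the replies would instead hand all 100 bare
drives to ZFS and spread them across several smaller raidz/raidz2 vdevs.)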
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-15 Thread Cooper Hubbell
Over-provisioning does not directly increase flash performance, but it allows
for greater reliability as the drive ages by improving garbage collection
efforts and reducing write amplification.  This article doesn't provide any
sources, but it explains the concept at a very basic level:
http://thessdreview.com/ssd-guides/optimization-guides/ssd-performance-loss-and-its-solution/

This thread contains quite a bit of testing and analysis regarding the
performance of several different SSDs under constant, 100% write workloads.
Some of the drives have had close to 300TiB of writes and are still kicking:
http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm
The tests were all conducted under Windows with TRIM, however, so this
isn't directly applicable to using an SSD for a ZIL.



On Fri, Aug 12, 2011 at 8:53 PM, Edward Ned Harvey 
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:

  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
  boun...@opensolaris.org] On Behalf Of Ray Van Dolson
 
  For ZIL, I
  suppose we could get the 300GB drive and overcommit to 95%!

 What kind of benefit does that offer?  I suppose, if you have a 300G drive
 and the OS can only see 30G of it, then the drive can essentially treat all
 the other 290G as having been TRIM'd implicitly, even if your OS doesn't
 support TRIM.  It is certainly conceivable this could make a big
 difference.


 Have you already tested it?  Anybody?  Or is it still just theoretical
 performance enhancement, compared to using a normal sized drive in a
 normal mode?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sudden drop in disk performance - WD20EURS 4k sectors to blame?

2011-08-15 Thread chris scott
Did you 4k align your partition table and is ashift=12?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sudden drop in disk performance - WD20EURS 4k sectors to blame?

2011-08-15 Thread Andrew Gabriel

David Wragg wrote:

I've not done anything different this time from when I created the original
(512b) pool. How would I check ashift?


For a zpool called export...

# zdb export | grep ashift
ashift: 12
^C
#

As far as I know (although I don't have any WDs), all the current 4K-sector
hard drives claim to have 512-byte sectors, so if you didn't do
anything special, you'll probably have ashift=9.
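
To check the alignment side of the question, look at where the slices start
(device name below is just an example); a slice is 4K-aligned when its first
sector is divisible by 8 (8 x 512 B = 4 KiB):

# prtvtoc /dev/rdsk/c0t1d0s0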


I would look at a zpool iostat -v to see what the IOPS rate is (you may 
have bottomed out on that), and I would also work out average transfer 
size (although that alone doesn't necessarily tell you much - a dtrace 
quantize aggregation would be better). Also check service times on the 
disks (iostat) to see if there's one which is significantly worse and 
might be going bad.
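
Concretely, with the pool name from above and an arbitrary 5-second interval:

# zpool iostat -v export 5
# iostat -xnz 5

In the iostat output, asvc_t is the average service time per device and %b the
percent busy; one disk with a much higher asvc_t than its peers is a good
candidate for the one going bad.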


--
Andrew Gabriel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

maybe try the following:
1) boot the s10u8 CD into single-user mode (when booting from the cdrom, choose
Solaris, then choose single user mode (6))

2) when asked to mount rpool, just say no
3) mkdir /tmp/mnt1 /tmp/mnt2
4) zpool import -f -R /tmp/mnt1 tank
5) zpool import -f -R /tmp/mnt2 rpool


On 8/15/2011 9:12 AM, Stu Whitefish wrote:

On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish
swhitef...@yahoo.com  wrote:

  # zpool import -f tank

  http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/

I encourage you to open a support case and ask for an escalation on CR 7056738.

--
Mike Gerdts

Hi Mike,

Unfortunately I don't have a support contract. I've been trying to set up a 
development system on Solaris and learn it.
Until this happened, I was pretty happy with it. Even so, I don't have 
supported hardware so I couldn't buy a contract
until I bought another machine and I really have enough machines so I cannot 
justify the expense right now. And I
refuse to believe Oracle would hold people hostage in a situation like this, 
but I do believe they could generate a lot of
goodwill by fixing this for me and whoever else it happened to and telling us 
what level of Solaris 10 this is fixed at so
this doesn't continue happening. It's a pretty serious failure and I'm not the 
only one who it happened to.

It's incredible but in all the years I have been using computers I don't ever 
recall losing data due to a filesystem or OS issue.
That includes DOS, Windows, Linux, etc.

I cannot believe ZFS on Intel is so fragile that people lose hundreds of gigs 
of data and that's just the way it is. There
must be a way to recover this data and some advice on preventing it from 
happening again.

Thanks,
Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
attachment: laotsao.vcf
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool replace

2011-08-15 Thread Doug Schwabauer


  
  
Help - I've got a bad disk in a zpool and need to replace it.  I've got an
extra drive that's not being used, although it's still marked like it's in a
pool.  So I need to get the xvm pool destroyed, c0t5d0 marked as available,
and replace c0t3d0 with c0t5d0.

root@kc-x4450a # zpool status -xv
  pool: vms
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-HC
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        vms         UNAVAIL      0     3     0  insufficient replicas
          c0t2d0    ONLINE       0     0     0
          c0t3d0    UNAVAIL      0     6     0  experienced I/O failures
          c0t4d0    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        vms:0x5
        vms:0xb
root@kc-x4450a # zpool replace -f vms c0t3d0 c0t5d0
cannot replace c0t3d0 with c0t5d0: pool I/O is currently suspended
root@kc-x4450a # zpool import
  pool: xvm
    id: 14176680653869308477
 state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        xvm         DEGRADED
          mirror-0  DEGRADED
            c0t4d0  FAULTED  corrupted data
            c0t5d0  ONLINE

Thanks!

-Doug

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS raidz on top of hardware raid0

2011-08-15 Thread Cindy Swearingen

D'oh. I shouldn't answer questions first thing Monday morning.

I think you should test this configuration with and without the
underlying hardware RAID.

If RAIDZ is the right redundancy level for your workload,
you might be pleasantly surprised with a RAIDZ configuration
built on the h/w raid array in JBOD mode.

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

cs

On 08/15/11 08:41, Cindy Swearingen wrote:


Hi Tom,

I think you test this configuration with and without the
underlying hardware RAID.

If RAIDZ is the right redundancy level for your workload,
you might be pleasantly surprised.

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Thanks,

Cindy

On 08/12/11 19:34, Tom Tang wrote:
Suppose I want to build a 100-drive storage system, wondering if there 
is any disadvantages for me to setup 20 arrays of HW RAID0 (5 drives 
each), then setup ZFS file system on these 20 virtual drives and 
configure them as RAIDZ?


I understand people always say ZFS doesn't prefer HW RAID.  Under this 
case, the HW RAID0 is only for stripping (allows higher data transfer 
rate), while the actual RAID5 (i.e. RAIDZ) is done via ZFS which takes 
care all the checksum/error detection/auto-repair.  I guess this will 
not affect any advantages of using ZFS, while I could get higher data 
transfer rate.  Wondering if it's the case? 
Any suggestion or comment?  Please kindly advise.  Thanks!



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Stu Whitefish


Hi. Thanks, I have tried this on update 8 and Sol 11 Express.

The import always results in a kernel panic as shown in the picture.

I did not try an alternate mountpoint though. Would it make that much 
difference?


- Original Message -
 From: Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. laot...@gmail.com
 To: zfs-discuss@opensolaris.org
 Cc: 
 Sent: Monday, August 15, 2011 3:06:20 PM
 Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
 inaccessible!
 
 may be try the following
 1)boot s10u8 cd into single user mode (when boot cdrom, choose Solaris 
 then choose single user mode(6))
 2)when ask to mount rpool just say no
 3)mkdir /tmp/mnt1 /tmp/mnt2
 4)zpool  import -f -R /tmp/mnt1 tank
 5)zpool import -f -R /tmp/mnt2 rpool
 
 
 On 8/15/2011 9:12 AM, Stu Whitefish wrote:
  On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish
  swhitef...@yahoo.com  wrote:
    # zpool import -f tank
 
   http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/
  I encourage you to open a support case and ask for an escalation on CR 
 7056738.
 
  -- 
  Mike Gerdts
  Hi Mike,
 
  Unfortunately I don't have a support contract. I've been trying to 
 set up a development system on Solaris and learn it.
  Until this happened, I was pretty happy with it. Even so, I don't have 
 supported hardware so I couldn't buy a contract
  until I bought another machine and I really have enough machines so I 
 cannot justify the expense right now. And I
  refuse to believe Oracle would hold people hostage in a situation like 
 this, but I do believe they could generate a lot of
  goodwill by fixing this for me and whoever else it happened to and telling 
 us what level of Solaris 10 this is fixed at so
  this doesn't continue happening. It's a pretty serious failure and 
 I'm not the only one who it happened to.
 
  It's incredible but in all the years I have been using computers I 
 don't ever recall losing data due to a filesystem or OS issue.
  That includes DOS, Windows, Linux, etc.
 
  I cannot believe ZFS on Intel is so fragile that people lose hundreds of 
 gigs of data and that's just the way it is. There
  must be a way to recover this data and some advice on preventing it from 
 happening again.
 
  Thanks,
  Jim
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS raidz on top of hardware raid0

2011-08-15 Thread Bob Friesenhahn

On Fri, 12 Aug 2011, Tom Tang wrote:

Suppose I want to build a 100-drive storage system, wondering if 
there is any disadvantages for me to setup 20 arrays of HW RAID0 (5 
drives each), then setup ZFS file system on these 20 virtual drives 
and configure them as RAIDZ?


The main concern would be resilver times if a drive in one of the HW 
RAID0's fails.  The resilver time would be similar to one huge disk 
drive since there would not be any useful concurrency.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool replace

2011-08-15 Thread Mark J Musante


Hi Doug,

The vms pool was created in a non-redundant way, so there is no way to 
get the data off of it unless you can put back the original c0t3d0 disk.


If you can still plug in the disk, you can always do a zpool replace on it 
afterwards.


If not, you'll need to restore from backup, preferably to a pool with 
raidz or mirroring so zfs can repair faults automatically.
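
If the disk can go back in, the sequence would be roughly as follows (an
untested sketch; the device and pool names are the ones from the status output):

# zpool clear vms                       (clear the errors and resume the suspended I/O)
# zpool status -v vms
# zpool replace -f vms c0t3d0 c0t5d0    (only if c0t5d0 is expendable; -f overrides its stale xvm label)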



On Mon, 15 Aug 2011, Doug Schwabauer wrote:


Help - I've got a bad disk in a zpool and need to replace it.  I've got an 
extra drive that's not being used, although it's still marked like it's in a 
pool. 
So I need to get the xvm pool destroyed, c0t5d0 marked as available, and 
replace c0t3d0 with c0t5d0.

root@kc-x4450a # zpool status -xv
  pool: vms
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-HC
 scrub: none requested
config:

    NAME    STATE READ WRITE CKSUM
    vms UNAVAIL  0 3 0  insufficient replicas
  c0t2d0    ONLINE   0 0 0
  c0t3d0    UNAVAIL  0 6 0  experienced I/O failures
  c0t4d0    ONLINE   0 0 0

errors: Permanent errors have been detected in the following files:

    vms:0x5
    vms:0xb
root@kc-x4450a # zpool replace -f vms c0t3d0 c0t5d0
cannot replace c0t3d0 with c0t5d0: pool I/O is currently suspended
root@kc-x4450a # zpool import
  pool: xvm
    id: 14176680653869308477
 state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices.  The
    fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

    xvm DEGRADED
  mirror-0  DEGRADED
    c0t4d0  FAULTED  corrupted data
    c0t5d0  ONLINE

Thanks!

-Doug




Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-15 Thread Ray Van Dolson
On Fri, Aug 12, 2011 at 06:53:22PM -0700, Edward Ned Harvey wrote:
  From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
  boun...@opensolaris.org] On Behalf Of Ray Van Dolson
  
  For ZIL, I
  suppose we could get the 300GB drive and overcommit to 95%!
 
 What kind of benefit does that offer?  I suppose, if you have a 300G drive
 and the OS can only see 30G of it, then the drive can essentially treat all
 the other 290G as having been TRIM'd implicitly, even if your OS doesn't
 support TRIM.  It is certainly conceivable this could make a big difference.

Perhaps this is it.  Pulled the recommendation from Intel's "Solid-State
Drive 320 Series in Server Storage Applications" whitepaper.

Section 4.1:

  A small reduction in an SSD’s usable capacity can provide a large
  increase in random write performance and endurance. 

  All Intel SSDs have more NAND capacity than what is available for
  user data. The unused capacity is called spare capacity. This area is
  reserved for internal operations.  The larger the spare capacity, the
  more efficiently the SSD can perform random write operations and the
  higher the random write performance. 

  On the Intel SSD 320 Series, the spare capacity reserved at the
  factory is 7% to 11% (depending on the SKU) of the full NAND
  capacity. For better random write performance and endurance, the
  spare capacity can be increased by reducing the usable capacity of
  the drive; this process is called over-provisioning.

 
 
 Have you already tested it?  Anybody?  Or is it still just theoretical
 performance enhancement, compared to using a normal sized drive in a
 normal mode?
 

Haven't yet tested it, but hope to shortly.
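
For reference, the usual way to over-provision without any vendor tools is to
secure-erase the drive and then give ZFS only a small slice, leaving the rest
of the NAND untouched.  Drive, slice, and pool names below are placeholders:

# format -e c3t0d0              (label the disk, create a ~20 GB slice 0, leave the rest unallocated)
# zpool add tank log c3t0d0s0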

Ray
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.



On 8/15/2011 11:25 AM, Stu Whitefish wrote:


Hi. Thanks I have tried this on update 8 and Sol 11 Express.

The import always results in a kernel panic as shown in the picture.

I did not try an alternate mountpoint though. Would it make that much 
difference?

try it



- Original Message -

From: Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.laot...@gmail.com
To: zfs-discuss@opensolaris.org
Cc:
Sent: Monday, August 15, 2011 3:06:20 PM
Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
inaccessible!

may be try the following
1)boot s10u8 cd into single user mode (when boot cdrom, choose Solaris
then choose single user mode(6))
2)when ask to mount rpool just say no
3)mkdir /tmp/mnt1 /tmp/mnt2
4)zpool  import -f -R /tmp/mnt1 tank
5)zpool import -f -R /tmp/mnt2 rpool


On 8/15/2011 9:12 AM, Stu Whitefish wrote:

  On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish
  swhitef...@yahoo.com   wrote:

# zpool import -f tank

   http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/

  I encourage you to open a support case and ask for an escalation on CR

7056738.
  -- 
  Mike Gerdts

  Hi Mike,

  Unfortunately I don't have a support contract. I've been trying to

set up a development system on Solaris and learn it.

  Until this happened, I was pretty happy with it. Even so, I don't have

supported hardware so I couldn't buy a contract

  until I bought another machine and I really have enough machines so I

cannot justify the expense right now. And I

  refuse to believe Oracle would hold people hostage in a situation like

this, but I do believe they could generate a lot of

  goodwill by fixing this for me and whoever else it happened to and telling

us what level of Solaris 10 this is fixed at so

  this doesn't continue happening. It's a pretty serious failure and

I'm not the only one who it happened to.

  It's incredible but in all the years I have been using computers I

don't ever recall losing data due to a filesystem or OS issue.

  That includes DOS, Windows, Linux, etc.

  I cannot believe ZFS on Intel is so fragile that people lose hundreds of

gigs of data and that's just the way it is. There

  must be a way to recover this data and some advice on preventing it from

happening again.

  Thanks,
  Jim
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

attachment: laotsao.vcf
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-15 Thread Richard Elling
On Aug 11, 2011, at 1:16 PM, Ray Van Dolson wrote:

 On Thu, Aug 11, 2011 at 01:10:07PM -0700, Ian Collins wrote:
  On 08/12/11 08:00 AM, Ray Van Dolson wrote:
 Are any of you using the Intel 320 as ZIL?  It's MLC based, but I
 understand its wear and performance characteristics can be bumped up
 significantly by increasing the overprovisioning to 20% (dropping
 usable capacity to 80%).
 
 A log device doesn't have to be larger than a few GB, so that shouldn't 
 be a problem.  I've found even low cost SSDs make a huge difference to 
 the NFS write performance of a pool.
 
 We've been using the X-25E (SLC-based).  It's getting hard to find, and
 since we're trying to stick to Intel drives (Nexenta certifies them),
 and Intel doesn't have a new SLC drive available until late September,
 we're hoping an overprovisioned 320 could fill the gap until then and
 perform at least as well as the X-25E.

The 320 has not yet passed qualification testing at Nexenta.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Stu Whitefish
Unfortunately this panics the same exact way. Thanks for the suggestion though.



- Original Message -
 From: Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. laot...@gmail.com
 To: zfs-discuss@opensolaris.org
 Cc: 
 Sent: Monday, August 15, 2011 3:06:20 PM
 Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
 inaccessible!
 
 may be try the following
 1)boot s10u8 cd into single user mode (when boot cdrom, choose Solaris 
 then choose single user mode(6))
 2)when ask to mount rpool just say no
 3)mkdir /tmp/mnt1 /tmp/mnt2
 4)zpool  import -f -R /tmp/mnt1 tank
 5)zpool import -f -R /tmp/mnt2 rpool

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS raidz on top of hardware raid0

2011-08-15 Thread LaoTsao
IMHO, not a good idea: if HDDs in two different RAID0 groups fail, the zpool is dead.
If possible, make each HDD its own single-drive RAID0 (i.e. JBOD) and then use ZFS to do the mirroring;
raidz or raidz2 would be my last choice.

Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On Aug 12, 2011, at 21:34, Tom Tang thomps...@supermicro.com wrote:

 Suppose I want to build a 100-drive storage system, wondering if there is any 
 disadvantages for me to setup 20 arrays of HW RAID0 (5 drives each), then 
 setup ZFS file system on these 20 virtual drives and configure them as RAIDZ?
 
 I understand people always say ZFS doesn't prefer HW RAID.  Under this case, 
 the HW RAID0 is only for stripping (allows higher data transfer rate), while 
 the actual RAID5 (i.e. RAIDZ) is done via ZFS which takes care all the 
 checksum/error detection/auto-repair.  I guess this will not affect any 
 advantages of using ZFS, while I could get higher data transfer rate.  
 Wondering if it's the case?  
 
 Any suggestion or comment?  Please kindly advise.  Thanks!
 -- 
 This message posted from opensolaris.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread LaoTsao
IIRC, if you use only the two HDDs, you can import the zpool.
Can you try the import -R with only those two HDDs attached at a time?

Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On Aug 15, 2011, at 13:42, Stu Whitefish swhitef...@yahoo.com wrote:

 Unfortunately this panics the same exact way. Thanks for the suggestion 
 though.
 
 
 
 - Original Message -
 From: Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. laot...@gmail.com
 To: zfs-discuss@opensolaris.org
 Cc: 
 Sent: Monday, August 15, 2011 3:06:20 PM
 Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
 inaccessible!
 
 may be try the following
 1)boot s10u8 cd into single user mode (when boot cdrom, choose Solaris 
 then choose single user mode(6))
 2)when ask to mount rpool just say no
 3)mkdir /tmp/mnt1 /tmp/mnt2
 4)zpool  import -f -R /tmp/mnt1 tank
 5)zpool import -f -R /tmp/mnt2 rpool
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-15 Thread David Magda
On Mon, August 15, 2011 12:25, Ray Van Dolson wrote:

 Perhaps this is it.  Pulled the recommendation from Intel's Solid-State
 Drive 320 Series in Server Storage Applications whitepaper.

 Section 4.1:
[...]
   On the Intel SSD 320 Series, the spare capacity reserved at the
   factory is 7% to 11% (depending on the SKU) of the full NAND
   capacity. For better random write performance and endurance, the
   spare capacity can be increased by reducing the usable capacity of
   the drive; this process is called over-provisioning.

So this is hard-coded at the factory, and one must 'decode' the SKU to
determine how much is set aside? Are the values for the different SKUs
documented somewhere?


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Paul Kraus
I am catching up here and wanted to see if I correctly understand the
chain of events...

1. Install system to pair of mirrored disks (c0t2d0s0 c0t3d0s0),
system works fine
2. add two more disks (c0t0d0s0 c0t1d0s0), create zpool tank, test and
determine these disks are fine
3. copy data to save to rpool (c0t2d0s0 c0t3d0s0)
3. install OS to c0t0d0s0, c0t1d0s0
4. reboot, system still boots from old rpool (c0t2d0s0 c0t3d0s0)
5. change boot device and boot from new OS (c0t0d0s0 c0t1d0s0)
6. cannot import old rpool (c0t2d0s0 c0t3d0s0) with your data

At this point could you still boot from the old rpool (c0t2d0s0 c0t3d0s0) ?

something happens and

7. cannot import old rpool (c0t2d0s0 c0t3d0s0), any attempt causes a
kernel panic, even when booted from different OS versions

Have you been using the same hardware for all of this ?

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Sound Designer: Frankenstein, A New Musical
(http://www.facebook.com/event.php?eid=123170297765140)
- Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
- Technical Advisor, RPI Players
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Stu Whitefish
I'm sorry, I don't understand this suggestion.

The pool that won't import is a mirror on two drives.



- Original Message -
 From: LaoTsao laot...@gmail.com
 To: Stu Whitefish swhitef...@yahoo.com
 Cc: zfs-discuss@opensolaris.org zfs-discuss@opensolaris.org
 Sent: Monday, August 15, 2011 5:50:08 PM
 Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
 inaccessible!
 
 iirc if you use two hdd, you can import the zpool
 can you try to import -R with only two hdd at time
 
 Sent from my iPad
 Hung-Sheng Tsao ( LaoTsao) Ph.D
 
 On Aug 15, 2011, at 13:42, Stu Whitefish swhitef...@yahoo.com wrote:
 
  Unfortunately this panics the same exact way. Thanks for the suggestion 
 though.
 
 
 
  - Original Message -
  From: Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D. 
 laot...@gmail.com
  To: zfs-discuss@opensolaris.org
  Cc: 
  Sent: Monday, August 15, 2011 3:06:20 PM
  Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
 inaccessible!
 
  may be try the following
  1)boot s10u8 cd into single user mode (when boot cdrom, choose Solaris 
  then choose single user mode(6))
  2)when ask to mount rpool just say no
  3)mkdir /tmp/mnt1 /tmp/mnt2
  4)zpool  import -f -R /tmp/mnt1 tank
  5)zpool import -f -R /tmp/mnt2 rpool
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread John D Groenveld
In message 1313431448.5331.yahoomail...@web121911.mail.ne1.yahoo.com, Stu Whitefish writes:
I'm sorry, I don't understand this suggestion.

The pool that won't import is a mirror on two drives.

Disconnect all but the two mirrored drives that you must import
and try to import from a S11X LiveUSB.

John
groenv...@acm.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Stu Whitefish
Hi Paul,

 1. Install system to pair of mirrored disks (c0t2d0s0 c0t3d0s0),

 system works fine

I don't remember at this point which disks were which, but I believe it was 0
and 1, because during the first install there were only two drives in the box
(I only had two drives at the time).

 2. add two more disks (c0t0d0s0 c0t1d0s0), create zpool tank, test and
 determine these disks are fine

Again, probably was on disks 2 and 3 but in principle, correct.

 3. copy data to save to rpool (c0t2d0s0 c0t3d0s0)

I did this in a few steps that probably don't make sense because I had only 2 
500G drives at the beginning when I did my install. Later I got two 320G and 
realized I should have the root pool on the smaller drives. But in the interim, 
I installed the new pair of 320G and moved a bunch of data onto that pool. 
After the initial installation when update 8 first came out, what happened next 
was something like:

1. I created the tank mirror on the two 320G drives and moved data from another
system onto tank. After I verified it was good I rebooted the box and
checked again, and everything was healthy: all pools were imported and mounted
correctly.

2. Then I realized I should install on the 320s and use the 500s for storage so 
I copied everything I had just put on the 320s (tank) onto the 500s (root). I 
rebooted again and verified the data on root was good, then I deleted it from 
tank.

3. I did a fresh install on the 320s (formerly tank).

4. I rebooted and it used my old root on the 500s as root, which surprised me 
but makes sense now because it was created as rpool during the very first 
install.

5. I rebooted in single user mode and tried to import the new install. It 
imported fine.

6. I don't know what happened next, but I believe after that I rebooted again to
see why Solaris didn't choose the new install; at that point the tank pool could
not be imported and I got the panic shown in the screenshot.

 3. install OS to c0t0d0s0, c0t1d0s0
 4. reboot, system still boots from old rpool (c0t2d0s0 c0t3d0s0)

Correct. At some point I read you can change the name of the pool so I imported 
rpool as tank and that much worked. At this point both pools were still good, 
and now the install was correctly called rpool and my tank was called tank.

 5. change boot device and boot from new OS (c0t0d0s0 c0t1d0s0)

That was the surprising thing. I had already changed my BIOS to boot from the 
new pool, but that didn't stop Solaris from using the old install as the root 
pool, I guess because of the name. I thought originally as long as I specified 
the correct boot device I wouldn't have any problem, but even taking the old 
rpool out of the boot sequence and specifying only the newly installed pool as 
boot devices wasn't enough.

 6. cannot import old rpool (c0t2d0s0 c0t3d0s0) with your data
 
 At this point could you still boot from the old rpool (c0t2d0s0 c0t3d0s0) ?

Yes, I could use the newly installed pool to boot from, or import it from shell 
in several versions of Solaris/Sol 11, etc. Of course now I cannot, since I 
have installed so many times over that pool trying to get the other pool 
imported.

 
 something happens and
 
 7. cannot import old rpool (c0t2d0s0 c0t3d0s0), any attempt causes a
 kernel panic, even when booted from different OS versions

Right. I have tried OpenIndiana 151 and Solaris 11 Express (latest from Oracle) 
several times each as well as 2 new installs of Update 8.

 Have you been using the same hardware for all of this ?

Yes, I have. 

Thanks for the help,

Jim


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Stu Whitefish
Given I can boot to single user mode and elect not to import or mount any 
pools, and that later I can issue an import against only the pool I need, I 
don't understand how this can help.

Still, given that nothing else seems to help I will try this and get back to 
you tomorrow.

Thanks,

Jim



- Original Message -
 From: John D Groenveld jdg...@elvis.arl.psu.edu
 To: zfs-discuss@opensolaris.org zfs-discuss@opensolaris.org
 Cc: 
 Sent: Monday, August 15, 2011 6:12:37 PM
 Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
 inaccessible!
 
 In message 1313431448.5331.yahoomail...@web121911.mail.ne1.yahoo.com, 
 Stu Whitefish writes:
 I'm sorry, I don't understand this suggestion.
 
 The pool that won't import is a mirror on two drives.
 
 Disconnect all but the two mirrored drives that you must import
 and try to import from a S11X LiveUSB.
 
 John
 groenv...@acm.org
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Alexander Lesle
Hello Stu Whitefish and List,

On August, 15 2011, 21:17 Stu Whitefish wrote in [1]:

 7. cannot import old rpool (c0t2d0s0 c0t3d0s0), any attempt causes a
 kernel panic, even when booted from different OS versions

 Right. I have tried OpenIndiana 151 and Solaris 11 Express (latest
 from Oracle) several times each as well as 2 new installs of Update 8.

If I understand you right, your primary interest is to recover the
data on the tank pool.

Have you considered booting from a Live-DVD, mounting your safe place,
and copying the data to another machine?
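
If even a plain import panics from the live environment, a read-only or
recovery-mode import may get far enough to copy the data off.  Both options
exist in Solaris 11 Express and OpenIndiana 151; /a is just an example altroot:

# zpool import -o readonly=on -R /a tank
# zpool import -F -R /a tank            (recovery mode; may discard the last few transactions)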

-- 
Best Regards
Alexander
August, 15 2011

[1] mid:1313435871.14520.yahoomail...@web121919.mail.ne1.yahoo.com


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-15 Thread Brandon High
On Thu, Aug 11, 2011 at 1:00 PM, Ray Van Dolson rvandol...@esri.com wrote:
 Are any of you using the Intel 320 as ZIL?  It's MLC based, but I
 understand its wear and performance characteristics can be bumped up
 significantly by increasing the overprovisioning to 20% (dropping
 usable capacity to 80%).

Intel recently added the 311, a small SLC-based drive for use as a
temp cache with their Z68 platform. It's limited to 20GB, but it might
be a better fit for use as a ZIL than the 320.

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-15 Thread Ray Van Dolson
On Mon, Aug 15, 2011 at 01:38:36PM -0700, Brandon High wrote:
 On Thu, Aug 11, 2011 at 1:00 PM, Ray Van Dolson rvandol...@esri.com wrote:
  Are any of you using the Intel 320 as ZIL?  It's MLC based, but I
  understand its wear and performance characteristics can be bumped up
  significantly by increasing the overprovisioning to 20% (dropping
  usable capacity to 80%).
 
 Intel recently added the 311, a small SLC-based drive for use as a
 temp cache with their Z68 platform. It's limited to 20GB, but it might
 be a better fit for use as a ZIL than the 320.
 
 -B

Looks interesting... specs are around the same as the old X-25E's.  We have
heard, however, that Intel will be announcing a true successor to their
X-25E line shortly.

Ray
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-15 Thread Edward Ned Harvey
 From: Ray Van Dolson [mailto:rvandol...@esri.com]
 Sent: Monday, August 15, 2011 12:26 PM
 
   On the Intel SSD 320 Series, the spare capacity reserved at the
   factory is 7% to 11% (depending on the SKU) of the full NAND
   capacity. For better random write performance and endurance, the
   spare capacity can be increased by reducing the usable capacity of
   the drive; this process is called over-provisioning.

I have a sneaking suspicion that you'll see the greatest performance when it's
more than 50% overprovisioned (say, 55% or so).  That will guarantee that at all
times there's plenty of unused space available for the drive to do GC on,
even though the OS never sends anything like TRIM to the drive.

Specifically, I say over 50% because of the mismatch between 8k flash pages and 4k blocks.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS not creating devices

2011-08-15 Thread zfs-dev
I am creating a custom Solaris 11 Express CD used for disaster recovery. 
I have included the necessary files on the system to run zfs commands 
without error (no apparent missing libraries or drivers). However, when 
I create a zvol, the device in /devices and the link to 
/dev/zvol/dsk/rpool do not exist. In fact /dev/zvol/dsk is completely 
empty. I am trying to determine what creates the devices and the links 
in /dev/zvol/dsk.  This is a sparc system.


I use this command to create my rpool/swap device:

# zfs create -b 8k -V 512m rpool/swap

I get a zero return code from the command.

# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
rpool 1.34G  3.55G31K  /rpool
rpool/ROOT  63K  3.55G32K  /rpool/ROOT
rpool/ROOT/solaris  31K  3.55G31K  /rpool/ROOT/solaris
rpool/dump  16K  3.55G16K  -
rpool/export   841M  3.55G32K  /rpool/export
rpool/export/home 1.03M  4.37G32K  /rpool/export/home
rpool/swap 528M  4.07G16K  -

#  ls -l /dev/zvol/dsk
total 0

#  ls -l /devices/pseudo | grep zfs
drwxr-xr-x   2 root sys0 Aug 15 23:16 zfs@0
crw-rw-rw-   1 root sys  161,  0 Aug 12 19:50 zfs@0:zfs

Any insight as to how these devices and links get created by zfs would 
be appreciated. I am pretty sure I must be missing something in the way 
of a driver or file, but truss did not point out any glaring problems.
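
One thing worth checking (a guess at the missing piece, not a confirmed fix): on
a stock system the /dev/zvol links are created by devfsadm/devfsadmd via its ZFS
link generator, not by the zfs command itself, so a hand-built miniroot may
simply be missing that step or the daemon.  Running it by hand should show
whether the plumbing is present:

# devfsadm -v -i zfs
# ls -l /dev/zvol/dsk/rpool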


Thanks
David
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss