Re: [zfs-discuss] x4540 dead HDD replacement, remains "configured".

2009-08-05 Thread Jorgen Lundman


I suspect this is what it is all about:

 # devfsadm -v
devfsadm[16283]: verbose: no devfs node or mismatched dev_t for 
/devices/p...@0,0/pci10de,3...@b/pci1000,1...@0/s...@5,0:a

[snip]

and indeed:

brw-r-   1 root sys   30, 2311 Aug  6 15:34 s...@4,0:wd
crw-r-   1 root sys   30, 2311 Aug  6 15:24 s...@4,0:wd,raw
drwxr-xr-x   2 root sys2 Aug  6 14:31 s...@5,0
drwxr-xr-x   2 root sys2 Apr 17 17:52 s...@6,0
brw-r-   1 root sys   30, 2432 Jul  6 09:50 s...@6,0:a
crw-r-   1 root sys   30, 2432 Jul  6 09:48 s...@6,0:a,raw

Perhaps because it was booted with the dead disk in place, it never 
configured the entire "sd5" mpt driver. Why the other hard-disks work I 
don't know.


I suspect the only way to fix this, is to reboot again.

Lund


Jorgen Lundman wrote:


x4540 snv_117

We lost a HDD last night, and it seemed to take out most of the bus or 
something and forced us to reboot. (We have yet to experience losing a 
disk that didn't force a reboot mind you).


So today, I'm looking at replacing the broken HDD, but no amount of work 
makes it "turn on the blue LED". After trying that for an hour, we just 
replaced the HDD anyway. But no amount of work will make it 
use/recognise it. (We tried more than one working spare HDD too).


For example:

# zpool status

  raidz1  DEGRADED 0 0 0
c5t1d0ONLINE   0 0 0
c0t5d0ONLINE   0 0 0
spare DEGRADED 0 0  285K
  c1t5d0  UNAVAIL  0 0 0  cannot open
  c4t7d0  ONLINE   0 0 0  4.13G resilvered
c2t5d0ONLINE   0 0 0
c3t5d0ONLINE   0 0 0
spares
  c4t7d0  INUSE currently in use



# zpool offline zpool1 c1t5d0

  raidz1  DEGRADED 0 0 0
c5t1d0ONLINE   0 0 0
c0t5d0ONLINE   0 0 0
spare DEGRADED 0 0  285K
  c1t5d0  OFFLINE  0 0 0
  c4t7d0  ONLINE   0 0 0  4.13G resilvered
c2t5d0ONLINE   0 0 0
c3t5d0ONLINE   0 0 0


# cfgadm -al
Ap_Id  Type Receptacle   Occupant Condition
c1 scsi-bus connectedconfigured unknown
c1::dsk/c1t5d0 disk connectedconfigured   
failed


# cfgadm -c unconfigure c1::dsk/c1t5d0
# cfgadm -al
c1::dsk/c1t5d0 disk connectedconfigured   
failed

# cfgadm -c unconfigure c1::dsk/c1t5d0
# cfgadm -c unconfigure c1::dsk/c1t5d0
# cfgadm -fc unconfigure c1::dsk/c1t5d0
# cfgadm -fc unconfigure c1::dsk/c1t5d0
# cfgadm -al
c1::dsk/c1t5d0 disk connectedconfigured   
failed


# hdadm offline slot 13
 1:5:9:   13:   17:   21:   25:   29:   33:   37:   41:   45:
c0t1  c0t5  c1t1  c1t5  c2t1  c2t5  c3t1  c3t5  c4t1  c4t5  c5t1  c5t5
^b+   ^++   ^b+   ^--   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++

# cfgadm -al
c1::dsk/c1t5d0 disk connectedconfigured   
failed


 # fmadm faulty
FRU : "HD_ID_47" 
(hc://:product-id=Sun-Fire-X4540:chassis-id=0915AMR048:server-id=x4500-10.unix:serial=9QMB024K:part=SEAGATE-ST35002NSSUN500G-09107B024K:revision=SU0D/chassis=0/bay=47/disk=0) 


  faulty

 # fmadm repair HD_ID_47
fmadm: recorded repair to HD_ID_47

 # format | grep c1t5d0
 #

 # hdadm offline slot 13
 1:5:9:   13:   17:   21:   25:   29:   33:   37:   41:   45:
c0t1  c0t5  c1t1  c1t5  c2t1  c2t5  c3t1  c3t5  c4t1  c4t5  c5t1  c5t5
^b+   ^++   ^b+   ^--   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++

 # cfgadm -al
c1::dsk/c1t5d0 disk connectedconfigured   
failed


 # ipmitool sunoem led get|grep 13
 hdd13.fail.led   | ON
 hdd13.ok2rm.led  | OFF

# zpool online zpool1 c1t5d0
warning: device 'c1t5d0' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present

# cfgadm -c disconnect c1::dsk/c1t5d0
cfgadm: Hardware specific failure: operation not supported for SCSI device


Bah, why were they changed to SCSI? Increasing the size of the hammer...


# cfgadm -x replace_device c1::sd37
Replacing SCSI device: /devices/p...@0,0/pci10de,3...@b/pci1000,1...@0/s...@5,0
This operation will suspend activity on SCSI bus: c1
Continue (yes/no)? y
SCSI bus quiesced successfully.
It is now safe to proceed with hotplug operation.
Enter y if operation is complete or n to abort (yes/no)? y

# cfgadm -al
c1::dsk/c1t5d0 disk connectedconfigured   
failed



I am fairly certain that if I reboot, it will all come back ok again. 
But I would like to believe that I should be able to replace a disk 
without rebooting on a X4540.


Any other commands I should try?

Lund



--
Jorgen

Re: [zfs-discuss] Lundman home NAS

2009-08-05 Thread Jorgen Lundman


The case is made by Chyangfun, and the model made for Mini-ITX 
motherboards is called CGN-S40X. They had 6 pcs left last I talked to 
them, and need 3 week lead for more if I understand it correctly. I need 
to finish my LCD panel work before I will open shop to sell these.


As for temperature, I have only check the server HDDs so far (on my 
wiki) but will test with green HDDs tonight.


I do not know if Solaris can retrieve the Atom chipset temperature readings.

The parts I used should be listed on my wiki.



Anon wrote:

I have the same case which I use as directed attached storage.  I never thought 
about using it with a motherboard inside.

Could you provide a complete parts list?

What sort of temperatures at the chip, chipset, and drives did you find?

Thanks!


--
Jorgen Lundman   | 
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] x4540 dead HDD replacement, remains "configured".

2009-08-05 Thread Jorgen Lundman


x4540 snv_117

We lost a HDD last night, and it seemed to take out most of the bus or 
something and forced us to reboot. (We have yet to experience losing a 
disk that didn't force a reboot mind you).


So today, I'm looking at replacing the broken HDD, but no amount of work 
makes it "turn on the blue LED". After trying that for an hour, we just 
replaced the HDD anyway. But no amount of work will make it 
use/recognise it. (We tried more than one working spare HDD too).


For example:

# zpool status

  raidz1  DEGRADED 0 0 0
c5t1d0ONLINE   0 0 0
c0t5d0ONLINE   0 0 0
spare DEGRADED 0 0  285K
  c1t5d0  UNAVAIL  0 0 0  cannot open
  c4t7d0  ONLINE   0 0 0  4.13G resilvered
c2t5d0ONLINE   0 0 0
c3t5d0ONLINE   0 0 0
spares
  c4t7d0  INUSE currently in use



# zpool offline zpool1 c1t5d0

  raidz1  DEGRADED 0 0 0
c5t1d0ONLINE   0 0 0
c0t5d0ONLINE   0 0 0
spare DEGRADED 0 0  285K
  c1t5d0  OFFLINE  0 0 0
  c4t7d0  ONLINE   0 0 0  4.13G resilvered
c2t5d0ONLINE   0 0 0
c3t5d0ONLINE   0 0 0


# cfgadm -al
Ap_Id  Type Receptacle   Occupant 
Condition
c1 scsi-bus connectedconfigured 
unknown

c1::dsk/c1t5d0 disk connectedconfigured   failed

# cfgadm -c unconfigure c1::dsk/c1t5d0
# cfgadm -al
c1::dsk/c1t5d0 disk connectedconfigured   failed
# cfgadm -c unconfigure c1::dsk/c1t5d0
# cfgadm -c unconfigure c1::dsk/c1t5d0
# cfgadm -fc unconfigure c1::dsk/c1t5d0
# cfgadm -fc unconfigure c1::dsk/c1t5d0
# cfgadm -al
c1::dsk/c1t5d0 disk connectedconfigured   failed

# hdadm offline slot 13
 1:5:9:   13:   17:   21:   25:   29:   33:   37:   41:   45:
c0t1  c0t5  c1t1  c1t5  c2t1  c2t5  c3t1  c3t5  c4t1  c4t5  c5t1  c5t5
^b+   ^++   ^b+   ^--   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++

# cfgadm -al
c1::dsk/c1t5d0 disk connectedconfigured   failed

 # fmadm faulty
FRU : "HD_ID_47" 
(hc://:product-id=Sun-Fire-X4540:chassis-id=0915AMR048:server-id=x4500-10.unix:serial=9QMB024K:part=SEAGATE-ST35002NSSUN500G-09107B024K:revision=SU0D/chassis=0/bay=47/disk=0)

  faulty

 # fmadm repair HD_ID_47
fmadm: recorded repair to HD_ID_47

 # format | grep c1t5d0
 #

 # hdadm offline slot 13
 1:5:9:   13:   17:   21:   25:   29:   33:   37:   41:   45:
c0t1  c0t5  c1t1  c1t5  c2t1  c2t5  c3t1  c3t5  c4t1  c4t5  c5t1  c5t5
^b+   ^++   ^b+   ^--   ^++   ^++   ^++   ^++   ^++   ^++   ^++   ^++

 # cfgadm -al
c1::dsk/c1t5d0 disk connectedconfigured   failed

 # ipmitool sunoem led get|grep 13
 hdd13.fail.led   | ON
 hdd13.ok2rm.led  | OFF

# zpool online zpool1 c1t5d0
warning: device 'c1t5d0' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present

# cfgadm -c disconnect c1::dsk/c1t5d0
cfgadm: Hardware specific failure: operation not supported for SCSI device


Bah, why were they changed to SCSI? Increasing the size of the hammer...


# cfgadm -x replace_device c1::sd37
Replacing SCSI device: /devices/p...@0,0/pci10de,3...@b/pci1000,1...@0/s...@5,0
This operation will suspend activity on SCSI bus: c1
Continue (yes/no)? y
SCSI bus quiesced successfully.
It is now safe to proceed with hotplug operation.
Enter y if operation is complete or n to abort (yes/no)? y

# cfgadm -al
c1::dsk/c1t5d0 disk connectedconfigured   failed


I am fairly certain that if I reboot, it will all come back ok again. 
But I would like to believe that I should be able to replace a disk 
without rebooting on a X4540.


Any other commands I should try?

Lund

--
Jorgen Lundman   | 
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Motherboard for home zfs/solaris file server

2009-08-05 Thread chris
Ok, i am ready to try.

2 last questions before I go for it:
- which version of (open)solaris for Ecc support (which seems to have been 
dropped from 200906) and general as-few-headaches-as-possible installation?

- do you think this issue with the AMD Athlon II X2 250 
http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3572&p=2&cp=4
would affect cool'n'quiet support in solaris?

thx for your insight.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Can I setting 'zil_disable' to increase ZFS/iscsi performance ?

2009-08-05 Thread Mr liu
Is there any way to increase the ZFS performance?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] limiting the ARC cache during early boot, without /etc/system

2009-08-05 Thread Sanjeev
Matt,

On Wed, Aug 05, 2009 at 07:06:06PM -0700, Matt Ingenthron wrote:
> Hi,
> 
> Other than modifying /etc/system, how can I keep the ARC cache low at boot 
> time?
> 
> Can I somehow create an SMF service and wire it in at a very low level to put 
> a fence around ZFS memory usage before other services come up?
> 
> I have a deployment scenario where I will have some reasonably large memory 
> systems (1.7GByte) on Amazon EC2 where the application I'm running needs a 
> lot of memory, is using large pages and won't use ZFS in any significant way. 
>  Therefore, I would like to limit ZFS's use of memory on the system.
> 

If ZFS is not beinng used significantly, then ARC should not grow. ARC grows
based on the usage (ie. amount of ZFS files/data accessed). Hence, if you are
sure that the ZFS usage is low, things should be fine.

Hope that helps.

Regards,
Sanjeev

> I'd followed the evil tuning guide to modify /etc/system, however I've just 
> found by corresponding with the EC2 support folks that it is not supported to 
> modify /etc/system (and it doesn't work... it keeps the system from booting).
> 
> Thanks in advance,
> 
> - Matt
> -- 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 

Sanjeev Bagewadi
Solaris RPE 
Bangalore, India
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recovering from ZFS command lock up after yanking a non-redundant drive?

2009-08-05 Thread Sanjeev
Chris,

On Wed, Aug 05, 2009 at 05:33:24AM -0700, Chris Baker wrote:
> Sanjeev
> 
> Thanks for taking an interest. Unfortunately I did have failmode=continue, 
> but I have just destroyed/recreated and double confirmed and got exactly the 
> same results.
> 
> zpool status shows both drives mirror, ONLINE, no errors
> 
> dmesg shows:
> 
> SATA device detached at port 0
> 
> cfgadm shows:
> 
> sata-portemptyunconfigured
> 
> The IO process has just hung. 
> 
> It seems to me that zfs thinks it has a drive with a really long response 
> time rather than a dead drive so no failmode processing, no mirror resilience 
> etc. Clearly something has been reported back to the kernel re the port going 
> dead but whether that came from the driver or not I wouldn't know.

Would it be possible for you to take a crashdump of the machine and point me to
it. We could try looking at where things are stuck.

Thanks and regards,
Sanjeev

-- 

Sanjeev Bagewadi
Solaris RPE 
Bangalore, India
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Ross
And along those lines, why stop at SSD's?  Get ZFS shrink working, and Sun 
could release a set of upgrade kits for x4500's and x4540's.  Kits could range 
from a couple of SSD devices to crazy specs like 40 2TB drives, and 8 SSD's.

And zpool shrink would be a key facilitator driving sales of these.  As Jordan 
says, if you can shrink your pool down, you can create space to fit the SSD 
devices.  However, shrinking the pool also allows you to upgrade the drives 
much more quickly.  

If you have a 46 disk zpool, you can't replace many disks at once, and the 
upgrade is high risk if you're running single parity raid.  Provided the pool 
isn't full however, if you can shrink it down to say 40 drives first, you can 
then upgrade in batches of 6 at once.  The zpool replace is then an operation 
between two fully working disks, and doesn't affect pool integrity at all.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Lundman home NAS

2009-08-05 Thread Anon
I have the same case which I use as directed attached storage.  I never thought 
about using it with a motherboard inside.

Could you provide a complete parts list?

What sort of temperatures at the chip, chipset, and drives did you find?

Thanks!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Live Upgrade UFS --> ZFS

2009-08-05 Thread Bill Korb
I can confirm that it is fixed in 121430-37, too.

Bill
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Elizabeth Schwartz
A lot of us have run *with * the ability to shrink because we were
using Veritas. Once you have a feature, processes tend to expand to
use it. Moving to ZFS was a good  move for many reasons but I still
missed being able to do something that used to be so easy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Martin
Bob wrote:

> Perhaps the problem is one of educating the customer
> so that they can 
> ammend their accounting practices.  Different
> business groups can 
> share the same pool if necessary.

Bob, while I don't mean to pick on you, that statement captures a major 
thinking flaw in IT when it comes to sales.

Yes, Brian should do everything possible to shape the customer's expectations; 
that's his job.

At the same time, let's face it.  If the customer thinks he needs X (whether or 
not he really does) and Brian can't get him to move away from it, Brian is 
sunk.  Here Brian sits with a potential multi-million dollar sale which is 
stuck on a missing feature, and probably other obstacles.  The truth is that 
the other obstacles are irrelevant as long as the customer can't get past 
feature X, valid or not.

So millions of dollars to Sun hang in the balance and these discussions revolve 
around whether or not the customer is planning optimally.  Imagine how much 
rapport Brian will gain when he tells this guy, "You know, if you guys just 
planned better, you wouldn't need feature X."  Brian would probably not get his 
phone calls returned after that.

You can rest assured that when the customer meets with IBM the next day, the 
IBM rep won't let the customer get away from feature X that JFS has.  The 
conversation might go like this.

Customer: You know, we are really looking at Sun and ZFS.

IBM: Of course you are, because that's a wise thing to do.  ZFS has a lot of 
exciting potential.

Customer: Huh?

IBM: ZFS has a solid base and Sun is adding features which will make it quite 
effective for your applications.

Customer: So you like ZFS?

IBM: Absolutely.  At some point it will have the features you need.  You 
mentioned you use feature X to provide the flexibility you have to continue to 
outperform your competition during this recession.  I understand Sun is working 
hard to integrate that feature, even as we speak.

Customer: Maybe we don't need feature X.

IBM: You would know more than I.  When did you last use feature X?

Customer: We used X last quarter when we scrambled to add FOO to our product 
mix so that we could beat our competition to market.

IBM: How would it have been different if feature X was unavailable?

Customer (mind racing): We would have found a way.

IBM: Of course, as innovative as your company is, you would have found a way.  
How much of a delay?

Customer (thinking through the scenarios): I don't know.

IBM: It wouldn't have impacted the rollout, would it?

Customer: I don't know.

IBM: Even if it did delay things, the delay wouldn't blow back on you, right?

Customer (sweating): I don't think so.

Imagine the land mine Brian now has to overcome when he tries to convince the 
customer that they don't need feature X, and even if they do, Sun will have it 
"real soon now."

Does anyone really think that Oracle made their money lecturing customers on 
how Table Partitions are stupid and if the customer would have planned their 
schema better, they wouldn't need them anyway?  Of course not.  People wanted 
partitions (valid or not) and Oracle delivered.

Marty
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] limiting the ARC cache during early boot, without /etc/system

2009-08-05 Thread Matt Ingenthron
Hi,

Other than modifying /etc/system, how can I keep the ARC cache low at boot time?

Can I somehow create an SMF service and wire it in at a very low level to put a 
fence around ZFS memory usage before other services come up?

I have a deployment scenario where I will have some reasonably large memory 
systems (1.7GByte) on Amazon EC2 where the application I'm running needs a lot 
of memory, is using large pages and won't use ZFS in any significant way.  
Therefore, I would like to limit ZFS's use of memory on the system.

I'd followed the evil tuning guide to modify /etc/system, however I've just 
found by corresponding with the EC2 support folks that it is not supported to 
modify /etc/system (and it doesn't work... it keeps the system from booting).

Thanks in advance,

- Matt
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] clone rpool to smaller disk

2009-08-05 Thread nawir
cindy,

You are brilliant.
I can successfully boot os after follow the steps below.
But I got some small problem
1. when I "zpool list", I saw two pools (altrpool & rpool)
I want to delete altrpool using "zpool destroy altrpool" but after I reboot it 
panic

2. I got this error message
ERROR MSG:
/usr/sbin/pmconfig: "/etc/power.conf" line 16, ufs statefile with zfs root is no
t supported
BUG ID:
http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=5da34891a4f40f33f6d0b14870e3?bug_id=6844540

thanks

STEPS TAKEN
# zpool create -f altrpool c1t1d0s0
# SNAPNAME=`date +%Y%m%d`
# zfs snapshot -r rpool/r...@$snapname
# zfs list -t snapshot
# zfs send -R rp...@$snapname | zfs recv -vFd altrpool
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk 
/dev/rdsk/c1t1d0s0
for x86 do
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
# zfs destroy rpool/r...@$snapname
# zpool export altrpool
# init 5
remove source disk (c1t0d0s0) and move target disk (c1t1d0s0) to slot0
-insert solaris10 dvd
ok boot cdrom -s
# zpool import -f altrpool rpool
# zpool set bootfs=rpool/ROOT/s10s_u7wos_08 rpool
# init 6
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [cifs-discuss] ZFS CIFS problem with Ubuntu, NFS as an alternative?

2009-08-05 Thread Alan M Wright

On 08/05/09 07:10, Mark Shellenbaum wrote:

Christian Flaig wrote:

Hello,

I got a very strange problem here, tried out many things, can't solve it.
I run a virtual machine via VirtualBox 2.2.4, with Ubuntu 9.04. 
OpenSolaris as the host is 2009-06, with snv118. Now I try to mount 
(via CIFS) a share in Ubuntu from OpenSolaris. Mounting is successful, 
I can see all files, also change directories. But I can't read the 
files! Whenever I try to copy a file, I get a "Permission denied" from 
Ubuntu. But when I mount the same share in Windows XP, I can read the 
files also. So might be an Ubuntu issue, anyone also experienced this? 
Any logs I can check/configure to find out more?
Here the permissions for the directory (tmns is the user I use for 
mounting):

dr-xr-xr-x+ 31 chrisstaff588 Aug  4 23:57 video
  user:tmns:r-x---a-R-c---:fd-:allow
 user:chris:rwxpdDaARWcCos:fd-:allow
(The "x" shouldn't be necessary, but XP seems not able to list 
subdirectories without it...)


Why do you think the "x" is unnecessary?

Alan

So I thought about using NFS instead, which should be better for an 
Unix - Unix connection anyway. But here I face another issue, which 
might be because of missing knowledge about NFS...
I share the "video" directory above with the ZFS sharenfs command, 
options are "anon=0,ro". Without "anon=0" I always get a "Permission 
denied" when I want to mount the share via NFS on Ubuntu (mounting 
with root user). But with "anon=0" I can only read the files on the 
Ubuntu side with root, the mounted directory had numerical ids for 
owner and group on the Ubuntu side.

Any clue how I can solve this?

Many thanks for your help, I'm not sure how to progress on this...

Cheers,

Chris



This is better asked on cifs-disc...@opensolaris.org

They will start out by asking you to run:

http://opensolaris.org/os/project/cifs-server/files/cifs-gendiag


  -Mark
___
cifs-discuss mailing list
cifs-disc...@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/cifs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [cifs-discuss] ZFS CIFS problem with Ubuntu, NFS as an alternative?

2009-08-05 Thread Afshin Salek

How does the permission look like on one of these files that you
have problem copying?

A network trace would also be helpful. Start the trace before you
do the mount to have a complete context, and stop it after trying
to copy a file. Don't do any extra stuff between mounting and copying
so the trace is not polluted. You can use any tool on the client or
the server to capture the traffic. If you are sending the trace also
send the name of the file with problem and the permissions on that
file.

Thanks,
Afshin

Mark Shellenbaum wrote:

Christian Flaig wrote:

Hello,

I got a very strange problem here, tried out many things, can't solve it.
I run a virtual machine via VirtualBox 2.2.4, with Ubuntu 9.04. 
OpenSolaris as the host is 2009-06, with snv118. Now I try to mount 
(via CIFS) a share in Ubuntu from OpenSolaris. Mounting is successful, 
I can see all files, also change directories. But I can't read the 
files! Whenever I try to copy a file, I get a "Permission denied" from 
Ubuntu. But when I mount the same share in Windows XP, I can read the 
files also. So might be an Ubuntu issue, anyone also experienced this? 
Any logs I can check/configure to find out more?
Here the permissions for the directory (tmns is the user I use for 
mounting):

dr-xr-xr-x+ 31 chrisstaff588 Aug  4 23:57 video
  user:tmns:r-x---a-R-c---:fd-:allow
 user:chris:rwxpdDaARWcCos:fd-:allow
(The "x" shouldn't be necessary, but XP seems not able to list 
subdirectories without it...)


So I thought about using NFS instead, which should be better for an 
Unix - Unix connection anyway. But here I face another issue, which 
might be because of missing knowledge about NFS...
I share the "video" directory above with the ZFS sharenfs command, 
options are "anon=0,ro". Without "anon=0" I always get a "Permission 
denied" when I want to mount the share via NFS on Ubuntu (mounting 
with root user). But with "anon=0" I can only read the files on the 
Ubuntu side with root, the mounted directory had numerical ids for 
owner and group on the Ubuntu side.

Any clue how I can solve this?

Many thanks for your help, I'm not sure how to progress on this...

Cheers,

Chris



This is better asked on cifs-disc...@opensolaris.org

They will start out by asking you to run:

http://opensolaris.org/os/project/cifs-server/files/cifs-gendiag


  -Mark
___
cifs-discuss mailing list
cifs-disc...@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/cifs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ?: SMI vs. EFI label and a disk's write cache

2009-08-05 Thread Steffen Weiberle

Hi Cindy, thanks for the reply...

On 08/05/09 18:55, cindy.swearin...@sun.com wrote:

Hi Steffen,

Go with a mirrored root pool is my advice with all the disk space in s0
on each disk. Simple is best and redundant simple is even better.


I will suggest that. Had already considered it. Since they may be forced 
to do a fresh load, they could turn off the HW RAID that is currently in 
place. System are T5xy0s.




I'm no write cache expert, but a few simple tests on Solaris 10 5/09,
show me that the write cache is enabled on a disk that is labeled with
an SMI label and slice when the pool is created, if the whole disk's 
capacity is in slice 0, for example. 


Was that a performance test or a status test using something like format -e?

Turns out I am trying this on an ATA drive, and format -e doesn't do 
anything there.


> However, its not enabled on my

s10u7 root pool slice, all disk space is in slice 0, but it is enabled
on my upcoming Solaris 10 root pool disk. Don't know what's up with
that.


And I don't fully follow, since 5/09 is update 7 :)

Now maybe I do. The former case is a non-root pool and the latter is a 
root pool?



If performance is a goal then go with two pools anyway so that you have
more flexibility in configuring a mirrored or RAID-Z config for the data 
pool or adding log devices (if that helps their workload) and also
provides more flexibility in management of ZFS BEs vs ZFS data in zones, 
and so on.


System has two drives, so I don't see how I/they could get more 
performance by using RAID, at least for the write side of things (and I 
don't know where the performance issue is).


My other concern with two pools on a single disk is there less 
likelihood of putting two unrelated writes close together if they are in 
different pools, not just different file systems/data sets in the same 
pool. So two pools might force considerably more head movements--across 
more of the platter.


With a root pool, you currently constrained by no RAID-Z, can't add 
add'l mirrored VDEVs, no log devices, can't be exported to another

system, and so on.


These would be internal disks. Good point about the lack of log 
devices--not sure if there might be interest or opportunity in adding 
SSD later.



The ZFS BP wiki provides more performance-related tips:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide


Ah, and I only looked on the Evil Tuning Guide one. The BP mentions the 
whole disk, however it is not clear whether that applies to the root, 
non-EFI pool, so your information is of value to me.


Steffen



Cindy

On 08/05/09 15:07, Steffen Weiberle wrote:

For Solaris 10 5/09...

There are supposed to be performance improvements if you create a 
zpool on a full disk, such as one with an EFI label. Does the same 
apply if the full disk is used with an SMI label, which is required to 
boot?


I am trying to determine the trade-off, if any, of having a single 
rpool on cXtYd0s2, if I can even do that, and improved performance 
compared to having two pools, a root pool and a separate data pool, 
for improved manageability and isolation. The data pool will have zone 
root paths on it. Customer has stated they are experiencing some 
performance limits in their application due to the disk, and if 
creating a single pool will help by enabling the write cache, that may 
be of value.


If the *current* answer is no to having ZFS turn on the write cache at 
this time, is it something that is coming in OpenSolaris or an update 
to S10?


Thanks
Steffen
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Jordan Schwartz
>Preface: yes, shrink will be cool.  But we've been running highly
available,
>mission critical datacenters for more than 50 years without shrink being
>widely available.

Agreed, and shrink IS cool, I used it to migrate VxVM volumes from direct
attached storage to slightly smaller SAN  LUNS on a solaris sparc box.  It
sure is nice to add the new storage to the volume and mirror as opposed to
copying to a new filesystem.

It will be cool when SSDs are released for my fully loaded x4540s, if I can
migrate enough users off and shrink the pool perhaps I can drop a couple of
SATA disks and then add the SSDs, all on the fly.

Perhaps Steve Martin said it best, "Let's get real small!".

Thanks,

Jordan


On Wed, Aug 5, 2009 at 12:47 PM, Richard Elling wrote:

> Preface: yes, shrink will be cool.  But we've been running highly
> available,
> mission critical datacenters for more than 50 years without shrink being
> widely available.
>
> On Aug 5, 2009, at 9:17 AM, Martin wrote:
>
>> You are the 2nd customer I've ever heard of to use shrink.
>>>
>>
>> This attitude seems to be a common theme in ZFS discussions: "No
>> enterprise uses shrink, only grow."
>>
>> Maybe.  The enterprise I work for requires that every change be reversible
>> and repeatable.  Every change requires a backout plan and that plan better
>> be fast and nondisruptive.
>>
>
> Do it exactly the same way you do it for UFS.  You've been using UFS
> for years without shrink, right?  Surely you have procedures in place :-)
>
>  Who are these enterprise admins who can honestly state that they have no
>> requirement to reverse operations?
>>
>
> Backout plans are not always simple reversals.  A well managed site will
> have procedures for rolling upgrades.
>
>  Who runs a 24x7 storage system and will look you in the eye and state,
>> "The storage decisions (parity count, number of devices in a stripe, etc.)
>> that I make today will be valid until the end of time and will NEVER need
>> nondisruptive adjustment.  Every storage decision I made in 1993 when we
>> first installed RAID is still correct and has needed no changes despite
>> changes in our business models."
>>
>> My experience is that this attitude about enterprise storage borders on
>> insane.
>>
>> Something does not compute.
>>
>
> There is more than one way to skin a cat.
>  -- richard
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Matthew Ahrens

Brian Kolaci wrote:
So Sun would see increased hardware revenue stream if they would just 
listen to the customer...  Without [pool shrink], they look for alternative 
hardware/software vendors.


Just to be clear, Sun and the ZFS team are listening to customers on this 
issue.  Pool shrink has been one of our top priorities for some time now.


It is unfortunately a very difficult problem, and will take some time to 
solve even with the application of all possible resources (including the 
majority of my time).  We are updating CR 4852783 at least once a month with 
progress reports.


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Bob Friesenhahn

On Wed, 5 Aug 2009, Richard Elling wrote:


Thanks Cindy,
This is another way to skin the cat. It works for simple volumes, too.
But there are some restrictions, which could impact the operation when a
large change in vdev size is needed. Is this planned to be backported
to Solaris 10?

CR 6844090 has more details.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6844090


A potential partial solution is to have a pool creation option where 
the tail device labels are set to a point much smaller than the device 
size rather than being written to the end of the device.  As zfs 
requires more space, the tail device labels are moved to add 
sufficient free space that storage blocks can again be efficiently 
allocated.  Since no zfs data is written beyond the tail device 
labels, the storage LUN could be truncated down to the point where the 
tail device labels are still left intact.  This seems like minimal 
impact to ZFS and no user data would need to be migrated.


If the user's usage model tends to periodically fill the whole LUN 
rather than to gradually grow, then this approach won't work.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Brian Kolaci

cindy.swearin...@sun.com wrote:

Brian,

CR 4852783 was updated again this week so you might add yourself or
your customer to continue to be updated.


Will do.  I thought I was on it, but didn't see any updates...



In the meantime, a reminder is that a mirrored ZFS configuration
is flexible in that devices can be detached (as long as the redundancy
is not compromised) or replaced as long as the replacement disk is an 
equivalent size or larger. So, you can move storage around if you need 
to in a mirrored ZFS config and until 4852783 integrates.


Yes, we're trying to push that through now (make a ZFS root).  But the case I 
was more concerned about was the back-end storage for LDom guests and 
zonepaths.  All the SAN storage coming in is already RAID on EMC or Hitachi, 
and they just move the storage around through the SAN group.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Richard Elling

On Aug 5, 2009, at 4:06 PM, cindy.swearin...@sun.com wrote:


Brian,

CR 4852783 was updated again this week so you might add yourself or
your customer to continue to be updated.

In the meantime, a reminder is that a mirrored ZFS configuration
is flexible in that devices can be detached (as long as the redundancy
is not compromised) or replaced as long as the replacement disk is  
an equivalent size or larger. So, you can move storage around if you  
need to in a mirrored ZFS config and until 4852783 integrates.


Thanks Cindy,
This is another way to skin the cat. It works for simple volumes, too.
But there are some restrictions, which could impact the operation when a
large change in vdev size is needed. Is this planned to be backported
to Solaris 10?

CR 6844090 has more details.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6844090
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Brian Kolaci

Bob Friesenhahn wrote:

On Wed, 5 Aug 2009, Brian Kolaci wrote:


I have a customer that is trying to move from VxVM/VxFS to ZFS, 
however they have this same need.  They want to save money and move to 
ZFS.  They are charged by a separate group for their SAN storage 
needs.  The business group storage needs grow and shrink over time, as 
it has done for years.  They've been on E25K's and other high power 
boxes with VxVM/VxFS as their encapsulated root disk for over a 
decade.  They are/were a big Veritas shop. They rarely ever use UFS, 
especially in production.


ZFS is a storage pool and not strictly a filesystem.  One may create 
filesystems or logical volumes out of this storage pool.  The logical 
volumes can be exported via iSCSI or FC (COMSTAR).  Filesystems may be 
exported via NFS or CIFS.  ZFS filesystems support quotas for both 
maximum consumption, and minimum space reservation.


Perhaps the problem is one of educating the customer so that they can 
ammend their accounting practices.  Different business groups can share 
the same pool if necessary.


They understand the technology very well.  Yes, ZFS is very flexible with many 
features, and most are not needed in an enterprise environment where they have 
high-end SAN storage that is shared between Sun, IBM, linux, VMWare ESX and 
Windows.  Local disk is only for the OS image.  There is no need to have an 
M9000 be a file server.  They have NAS for that.  They use SAN across the 
enterprise and it gives them the ability to fail-over to servers in other data 
centers very quickly.

Different business groups cannot share the same pool for many reasons.  Each 
business group pays for their own storage.  There are legal issues as well, and 
in fact cannot have different divisions on the same frame let alone shared 
storage.  But they're in a major virtualization push to the point that nobody 
will be allowed to be on their own physical box.  So the big push is to move to 
VMware, and we're trying to salvage as much as we can to move them to 
containers and LDoms.  That being the case, I've recommended that each virtual 
machine on either a container or LDom should be allocated their own zpool, and 
the zonepath or LDom disk image be on their own zpool.  This way when (not if) 
they need to migrate to another system, they have one pool to move over.  They 
use fixed sized LUNs, so the granularity is a 33GB LUN, which can be migrated.  
This is also the case for their clusters as well as SRDF to their COB machines.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Cindy . Swearingen

Brian,

CR 4852783 was updated again this week so you might add yourself or
your customer to continue to be updated.

In the meantime, a reminder is that a mirrored ZFS configuration
is flexible in that devices can be detached (as long as the redundancy
is not compromised) or replaced as long as the replacement disk is an 
equivalent size or larger. So, you can move storage around if you need 
to in a mirrored ZFS config and until 4852783 integrates.


cs

On 08/05/09 15:58, Brian Kolaci wrote:
I'm chiming in late, but have a mission critical need of this as well 
and posted as a non-member before.  My customer was wondering when this 
would make it into Solaris 10.  Their complete adoption depends on it.


I have a customer that is trying to move from VxVM/VxFS to ZFS, however 
they have this same need.  They want to save money and move to ZFS.  
They are charged by a separate group for their SAN storage needs.  The 
business group storage needs grow and shrink over time, as it has done 
for years.  They've been on E25K's and other high power boxes with 
VxVM/VxFS as their encapsulated root disk for over a decade.  They 
are/were a big Veritas shop.  They rarely ever use UFS, especially in 
production.


They absolutely require the shrink functionality to completely move off 
VxVM/VxFS to ZFS, and we're talking $$millions.  I think your statements 
below are from a technology standpoint, not a business standpoint.  You 
say its poor planning, which is way off the mark.  Business needs change 
daily.  It takes several weeks to provision SAN with all the approvals, 
etc. and it it takes massive planning.  That goes for increasing as well 
as decreasing their storage needs.


Richard Elling wrote:


On Aug 5, 2009, at 1:06 PM, Martin wrote:


richard wrote:


Preface: yes, shrink will be cool.  But we've been
running highly
available,
mission critical datacenters for more than 50 years
without shrink being
widely available.



I would debate that.  I remember batch windows and downtime delaying 
one's career movement.  Today we are 24x7 where an outage can kill an 
entire business



Agree.


Do it exactly the same way you do it for UFS.  You've
been using UFS
for years without shrink, right?  Surely you have
procedures in
place :-)



While I haven't taken a formal survey, everywhere I look I see JFS on 
AIX and VxFS on Solaris.  I haven't been in a production UFS shop 
this decade.



Then why are you talking on Solaris forum?  All versions of
Solaris prior to Solaris 10 10/08 only support UFS for boot.


Backout plans are not always simple reversals.  A
well managed site will
have procedures for rolling upgrades.



I agree with everything you wrote.  Today other technologies allow 
live changes to the pool, so companies use those technologies instead 
of ZFS.



... and can continue to do so. If you are looking to replace a
for-fee product with for-free, then you need to consider all
ramifications. For example, a shrink causes previously written
data to be re-written, thus exposing the system to additional
failure modes. OTOH, a model of place once and never disrupt
can provide a more reliable service. You will see the latter
"pattern" repeated often for high assurance systems.




There is more than one way to skin a cat.



Which entirely misses the point.



Many cases where people needed to shrink were due to the
inability to plan for future growth. This is compounded by the
rather simplistic interface between a logical volume and traditional
file system. ZFS allows you to dynamically grow the pool, so you
can implement a process of only adding storage as needs dictate.

Bottom line: shrink will be cool, but it is not the perfect solution for
managing changing data needs in a mission critical environment.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Brian Kolaci

Richard Elling wrote:

On Aug 5, 2009, at 2:58 PM, Brian Kolaci wrote:

I'm chiming in late, but have a mission critical need of this as well 
and posted as a non-member before.  My customer was wondering when 
this would make it into Solaris 10.  Their complete adoption depends 
on it.


I have a customer that is trying to move from VxVM/VxFS to ZFS, 
however they have this same need.  They want to save money and move to 
ZFS.  They are charged by a separate group for their SAN storage 
needs.  The business group storage needs grow and shrink over time, as 
it has done for years.  They've been on E25K's and other high power 
boxes with VxVM/VxFS as their encapsulated root disk for over a 
decade.  They are/were a big Veritas shop.  They rarely ever use UFS, 
especially in production.


They absolutely require the shrink functionality to completely move 
off VxVM/VxFS to ZFS, and we're talking $$millions.  I think your 
statements below are from a technology standpoint, not a business 
standpoint.


If you look at it from Sun's business perspective, ZFS is $$ free, so 
Sun gains

no $$ millions by replacing VxFS. Indeed, if the customer purchases VxFS
from Sun, it makes little sense for Sun to eliminate a revenue source. 
OTOH,

I'm sure if they are willing to give Sun $$ millions, it can help raise the
priority of CR 4852783.
http://bugs.opensolaris.org/view_bug.do?bug_id=4852783


They're probably on the list already, but I'll check to make sure.
What I meant by the $$ millions is that currently all Sun hardware purchases are on hold.  Deploying on 
Solaris currently means not just the hardware, but the support, required certified third-party software 
such as EMC powerpath, Veritas VxVM & VxFS, BMC monitoring, and more...  Yes, I'm still working the 
MPxIO to replace powerpath, but there's issues there too.  They will not use UFS.  Right now ZFS is OK 
for limited deployment and no production use.  Their case on ZFS is that its good for dealing with 
JBOD, but it not yet "enterprise ready" for SAN use.  Shrinking a volume is just one of a 
list of requirements to move toward "enterprise ready", however many issues have been fixed.

So Sun would see increased hardware revenue stream if they would just listen to 
the customer...  Without it, they look for alternative hardware/software 
vendors.  While this is stalled, there have been several hundred systems that 
have been flipped to competitors (and this is still going on).  So lack of this 
feature will cause $$ millions to be lost...




You say its poor planning, which is way off the mark.  Business needs 
change daily.  It takes several weeks to provision SAN with all the 
approvals, etc. and it it takes massive planning.  That goes for 
increasing as well as decreasing their storage needs.


I think you've identified the real business problem. A shrink feature in 
ZFS will
do nothing to fix this. A business who's needs change faster than their 
ability
to react has (as we say in business school) an unsustainable business 
model.

 -- richard


Yes, hence a federal bail-out.  However a shrink feature will help them to be 
able to spend more with Sun.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ?: SMI vs. EFI label and a disk's write cache

2009-08-05 Thread Cindy . Swearingen

Hi Steffen,

Go with a mirrored root pool is my advice with all the disk space in s0
on each disk. Simple is best and redundant simple is even better.

I'm no write cache expert, but a few simple tests on Solaris 10 5/09,
show me that the write cache is enabled on a disk that is labeled with
an SMI label and slice when the pool is created, if the whole disk's 
capacity is in slice 0, for example. However, its not enabled on my

s10u7 root pool slice, all disk space is in slice 0, but it is enabled
on my upcoming Solaris 10 root pool disk. Don't know what's up with
that.

If performance is a goal then go with two pools anyway so that you have
more flexibility in configuring a mirrored or RAID-Z config for the data 
pool or adding log devices (if that helps their workload) and also
provides more flexibility in management of ZFS BEs vs ZFS data in zones, 
and so on.


With a root pool, you currently constrained by no RAID-Z, can't add 
add'l mirrored VDEVs, no log devices, can't be exported to another

system, and so on.

The ZFS BP wiki provides more performance-related tips:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Cindy

On 08/05/09 15:07, Steffen Weiberle wrote:

For Solaris 10 5/09...

There are supposed to be performance improvements if you create a zpool 
on a full disk, such as one with an EFI label. Does the same apply if 
the full disk is used with an SMI label, which is required to boot?


I am trying to determine the trade-off, if any, of having a single rpool 
on cXtYd0s2, if I can even do that, and improved performance compared to 
having two pools, a root pool and a separate data pool, for improved 
manageability and isolation. The data pool will have zone root paths on 
it. Customer has stated they are experiencing some performance limits in 
their application due to the disk, and if creating a single pool will 
help by enabling the write cache, that may be of value.


If the *current* answer is no to having ZFS turn on the write cache at 
this time, is it something that is coming in OpenSolaris or an update to 
S10?


Thanks
Steffen
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Bob Friesenhahn

On Wed, 5 Aug 2009, Brian Kolaci wrote:


I have a customer that is trying to move from VxVM/VxFS to ZFS, however they 
have this same need.  They want to save money and move to ZFS.  They are 
charged by a separate group for their SAN storage needs.  The business group 
storage needs grow and shrink over time, as it has done for years.  They've 
been on E25K's and other high power boxes with VxVM/VxFS as their 
encapsulated root disk for over a decade.  They are/were a big Veritas shop. 
They rarely ever use UFS, especially in production.


ZFS is a storage pool and not strictly a filesystem.  One may create 
filesystems or logical volumes out of this storage pool.  The logical 
volumes can be exported via iSCSI or FC (COMSTAR).  Filesystems may be 
exported via NFS or CIFS.  ZFS filesystems support quotas for both 
maximum consumption, and minimum space reservation.


Perhaps the problem is one of educating the customer so that they can 
ammend their accounting practices.  Different business groups can 
share the same pool if necessary.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Richard Elling

On Aug 5, 2009, at 2:58 PM, Brian Kolaci wrote:

I'm chiming in late, but have a mission critical need of this as  
well and posted as a non-member before.  My customer was wondering  
when this would make it into Solaris 10.  Their complete adoption  
depends on it.


I have a customer that is trying to move from VxVM/VxFS to ZFS,  
however they have this same need.  They want to save money and move  
to ZFS.  They are charged by a separate group for their SAN storage  
needs.  The business group storage needs grow and shrink over time,  
as it has done for years.  They've been on E25K's and other high  
power boxes with VxVM/VxFS as their encapsulated root disk for over  
a decade.  They are/were a big Veritas shop.  They rarely ever use  
UFS, especially in production.


They absolutely require the shrink functionality to completely move  
off VxVM/VxFS to ZFS, and we're talking $$millions.  I think your  
statements below are from a technology standpoint, not a business  
standpoint.


If you look at it from Sun's business perspective, ZFS is $$ free, so  
Sun gains

no $$ millions by replacing VxFS. Indeed, if the customer purchases VxFS
from Sun, it makes little sense for Sun to eliminate a revenue source.  
OTOH,
I'm sure if they are willing to give Sun $$ millions, it can help  
raise the

priority of CR 4852783.
http://bugs.opensolaris.org/view_bug.do?bug_id=4852783

You say its poor planning, which is way off the mark.  Business  
needs change daily.  It takes several weeks to provision SAN with  
all the approvals, etc. and it it takes massive planning.  That goes  
for increasing as well as decreasing their storage needs.


I think you've identified the real business problem. A shrink feature  
in ZFS will
do nothing to fix this. A business who's needs change faster than  
their ability
to react has (as we say in business school) an unsustainable business  
model.

 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Brian Kolaci

I'm chiming in late, but have a mission critical need of this as well and 
posted as a non-member before.  My customer was wondering when this would make 
it into Solaris 10.  Their complete adoption depends on it.

I have a customer that is trying to move from VxVM/VxFS to ZFS, however they 
have this same need.  They want to save money and move to ZFS.  They are 
charged by a separate group for their SAN storage needs.  The business group 
storage needs grow and shrink over time, as it has done for years.  They've 
been on E25K's and other high power boxes with VxVM/VxFS as their encapsulated 
root disk for over a decade.  They are/were a big Veritas shop.  They rarely 
ever use UFS, especially in production.

They absolutely require the shrink functionality to completely move off 
VxVM/VxFS to ZFS, and we're talking $$millions.  I think your statements below 
are from a technology standpoint, not a business standpoint.  You say its poor 
planning, which is way off the mark.  Business needs change daily.  It takes 
several weeks to provision SAN with all the approvals, etc. and it it takes 
massive planning.  That goes for increasing as well as decreasing their storage 
needs.

Richard Elling wrote:

On Aug 5, 2009, at 1:06 PM, Martin wrote:


richard wrote:

Preface: yes, shrink will be cool.  But we've been
running highly
available,
mission critical datacenters for more than 50 years
without shrink being
widely available.


I would debate that.  I remember batch windows and downtime delaying 
one's career movement.  Today we are 24x7 where an outage can kill an 
entire business


Agree.


Do it exactly the same way you do it for UFS.  You've
been using UFS
for years without shrink, right?  Surely you have
procedures in
place :-)


While I haven't taken a formal survey, everywhere I look I see JFS on 
AIX and VxFS on Solaris.  I haven't been in a production UFS shop this 
decade.


Then why are you talking on Solaris forum?  All versions of
Solaris prior to Solaris 10 10/08 only support UFS for boot.


Backout plans are not always simple reversals.  A
well managed site will
have procedures for rolling upgrades.


I agree with everything you wrote.  Today other technologies allow 
live changes to the pool, so companies use those technologies instead 
of ZFS.


... and can continue to do so. If you are looking to replace a
for-fee product with for-free, then you need to consider all
ramifications. For example, a shrink causes previously written
data to be re-written, thus exposing the system to additional
failure modes. OTOH, a model of place once and never disrupt
can provide a more reliable service. You will see the latter
"pattern" repeated often for high assurance systems.




There is more than one way to skin a cat.


Which entirely misses the point.


Many cases where people needed to shrink were due to the
inability to plan for future growth. This is compounded by the
rather simplistic interface between a logical volume and traditional
file system. ZFS allows you to dynamically grow the pool, so you
can implement a process of only adding storage as needs dictate.

Bottom line: shrink will be cool, but it is not the perfect solution for
managing changing data needs in a mission critical environment.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] `zfs list -t filesystem` shouldn't return snapshots

2009-08-05 Thread Mark Shellenbaum

Robert Lawhead wrote:

I recently tried to post this as a bug, and received an auto-ack, but can't 
tell whether its been accepted.  Does this seem like a bug to anyone else?

Default for zfs list is now to show only filesystems.  However, a `zfs list` or 
`zfs list -t filesystem` shows filesystems AND incomplete snapshots, and
`zfs list -t snapshot` doesn't show incomplete snapshots.

Steps to Reproduce
  # start a send|receive, and DO NOT wait for it to finish...
zfs snapshot f...@bar && (zfs send f...@bar | zfs receive -F baz) &
# See where snapshot being created is reported; it will be reported
# with filesystems (wrong) and not with snapshots (wrong again).
zfs list
zfs list -t filesystem
zfs list -t snapshot

Expected Result
  Snapshot in progress should be reported with snapshots (I think) and 
definitely not with filesystems.  Necessitates filtering like '| grep -v -- %'


That was closed as a duplicate of:

6759986 zfs list shows temporary %clone when doing online zfs recv

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6759986
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] article on btrfs, comparison with zfs

2009-08-05 Thread Henk Langeveld

Roch wrote:

I don't know what 'enters the txg' exactly is but ZFS disk-block
allocation is done in the ZIO pipeline at the latest
possible time.


Thanks Roch,
I stand corrected in my assumptions.

Cheers,
Henk
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] `zfs list -t filesystem` shouldn't return snapshots

2009-08-05 Thread Robert Lawhead
I recently tried to post this as a bug, and received an auto-ack, but can't 
tell whether its been accepted.  Does this seem like a bug to anyone else?

Default for zfs list is now to show only filesystems.  However, a `zfs list` or 
`zfs list -t filesystem` shows filesystems AND incomplete snapshots, and
`zfs list -t snapshot` doesn't show incomplete snapshots.

Steps to Reproduce
  # start a send|receive, and DO NOT wait for it to finish...
zfs snapshot f...@bar && (zfs send f...@bar | zfs receive -F baz) &
# See where snapshot being created is reported; it will be reported
# with filesystems (wrong) and not with snapshots (wrong again).
zfs list
zfs list -t filesystem
zfs list -t snapshot

Expected Result
  Snapshot in progress should be reported with snapshots (I think) and 
definitely not with filesystems.  Necessitates filtering like '| grep -v -- %'
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ?: SMI vs. EFI label and a disk's write cache

2009-08-05 Thread Steffen Weiberle

For Solaris 10 5/09...

There are supposed to be performance improvements if you create a zpool 
on a full disk, such as one with an EFI label. Does the same apply if 
the full disk is used with an SMI label, which is required to boot?


I am trying to determine the trade-off, if any, of having a single rpool 
on cXtYd0s2, if I can even do that, and improved performance compared to 
having two pools, a root pool and a separate data pool, for improved 
manageability and isolation. The data pool will have zone root paths on 
it. Customer has stated they are experiencing some performance limits in 
their application due to the disk, and if creating a single pool will 
help by enabling the write cache, that may be of value.


If the *current* answer is no to having ZFS turn on the write cache at 
this time, is it something that is coming in OpenSolaris or an update to 
S10?


Thanks
Steffen
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs export and import between diferent controllers

2009-08-05 Thread miks
Problem itself happened on FreeBSD, but as I understand it's ZFS related, not 
FreeBSD.
So:
I got error when tried to migrate zfs disk between 2 different servers. After 
exporting on first, import on second one are failing with following:

Output from import pool:
#zpool import storage750
cannot import 'storage750': one or more devices is currently unavailable


Output from simple import:
#zpool import
  pool: storage750
id: 1304450798920256547
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

storage750 UNAVAIL  missing device
  ad6   ONLINE

As I understand, it's because first server have different controler, so disk is 
named da2 not ad6, as in second server.


Output from zdb:
#zdb -l  /dev/ad6

LABEL 0

version=6
name='storage750'
state=1
txg=8
pool_guid=1304450798920256547
hostid=2302370682
hostname='xx'
top_guid=2004285697880137437
guid=2004285697880137437
vdev_tree
type='disk'
id=0
guid=2004285697880137437
path='/dev/da2'
whole_disk=0
metaslab_array=14
metaslab_shift=32
ashift=9
asize=749984022528

Are there any way how to learn ZFS that drive name has changed?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Richard Elling

On Aug 5, 2009, at 1:06 PM, Martin wrote:


richard wrote:

Preface: yes, shrink will be cool.  But we've been
running highly
available,
mission critical datacenters for more than 50 years
without shrink being
widely available.


I would debate that.  I remember batch windows and downtime delaying  
one's career movement.  Today we are 24x7 where an outage can kill  
an entire business


Agree.


Do it exactly the same way you do it for UFS.  You've
been using UFS
for years without shrink, right?  Surely you have
procedures in
place :-)


While I haven't taken a formal survey, everywhere I look I see JFS  
on AIX and VxFS on Solaris.  I haven't been in a production UFS shop  
this decade.


Then why are you talking on Solaris forum?  All versions of
Solaris prior to Solaris 10 10/08 only support UFS for boot.


Backout plans are not always simple reversals.  A
well managed site will
have procedures for rolling upgrades.


I agree with everything you wrote.  Today other technologies allow  
live changes to the pool, so companies use those technologies  
instead of ZFS.


... and can continue to do so. If you are looking to replace a
for-fee product with for-free, then you need to consider all
ramifications. For example, a shrink causes previously written
data to be re-written, thus exposing the system to additional
failure modes. OTOH, a model of place once and never disrupt
can provide a more reliable service. You will see the latter
"pattern" repeated often for high assurance systems.




There is more than one way to skin a cat.


Which entirely misses the point.


Many cases where people needed to shrink were due to the
inability to plan for future growth. This is compounded by the
rather simplistic interface between a logical volume and traditional
file system. ZFS allows you to dynamically grow the pool, so you
can implement a process of only adding storage as needs dictate.

Bottom line: shrink will be cool, but it is not the perfect solution for
managing changing data needs in a mission critical environment.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-05 Thread Henrik Johansen

Joseph L. Casale wrote:

Quick snipped from zpool iostat :

  mirror 1.12G   695G  0  0  0  0
c8t12d0  -  -  0  0  0  0
c8t13d0  -  -  0  0  0  0
  c7t2d04K  29.0G  0  1.56K  0   200M
  c7t3d04K  29.0G  0  1.58K  0   202M

The disks on c7 are both Intel X25-E 


Henrik,
So the SATA discs are in the MD1000 behind the PERC 6/E and how
have you configured/attached the 2 SSD slogs and L2ARC drive? If
I understand you, you have sued 14 of the 15 slots in the MD so
I assume you have the 3 SSD's in the R905, what controller are
they running on?


The internal PERC 6/i controller - but I've had them on the PERC 6/E
during other test runs since I have a couple of spare MD1000's at hand. 


Both controllers work well with the SSD's.


Thanks!
jlc
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Jacob Ritorto
Interesting, this is the same procedure I invented (with the exception 
that the zfs send came from the net) and used to hack OpenSolaris 
2009.06 onto my home SunBlade 2000 since it couldn't do AI due to low 
OBP rev..


I'll have to rework it this way, then, which will unfortunately cause 
downtime for a multitude of dependent services, affect the entire 
universe here and make my department look inept.  As much as it stings, 
I accept that this is the price I pay for adopting a new technology. 
Acknowledge and move on.  Quite simply, if this happens too often, we 
know we've made the wrong decision on vendor/platform.


Anyway, looking forward to shrink.  Thanks for the tips.


Kyle McDonald wrote:

Kyle McDonald wrote:

Jacob Ritorto wrote:
Is this implemented in OpenSolaris 2008.11?  I'm moving move my 
filer's rpool to an ssd mirror to free up bigdisk slots currently 
used by the os and need to shrink rpool from 40GB to 15GB. (only 
using 2.7GB for the install).


  
Your best bet would be to install the new ssd drives, create a new 
pool, snapshot the exisitng pool and use ZFS send/recv to migrate the 
data to the new pool. There are docs around about how install grub and 
the boot blocks on the new devices also. After that remove (export!, 
don't destroy yet!)

the old drives, and reboot to see how it works.

If you have no problems, (and I don't think there's anything technical 
that would keep this from working,) then you're good. Otherwise put 
the old pool back in. :)


This thread dicusses basically this same thing - he had a problem along 
the way, but Cindy answered it.



Hi Nawir,

I haven't tested these steps myself, but the error message
means that you need to set this property:

# zpool set bootfs=rpool/ROOT/BE-name rpool

Cindy

On 08/05/09 03:14, nawir wrote:
Hi,

I have sol10u7 OS with 73GB HD in c1t0d0.
I want to clone it to 36GB HD

These steps below is what come in my mind
STEPS TAKEN
# zpool create -f altrpool c1t1d0s0
# zpool set listsnapshots=on rpool
# SNAPNAME=`date +%Y%m%d`
# zfs snapshot -r rpool/r...@$snapname
# zfs list -t snapshot
# zfs send -R rp...@$snapname | zfs recv -vFd altrpool
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk 
/dev/rdsk/c1t1d0s0

for x86 do
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
# zpool export altrpool
# init 5
remove source disk (c1t0d0s0) and move target disk (c1t1d0s0) to slot0
-insert solaris10 dvd
ok boot cdrom -s
# zpool import altrpool rpool
# init 0
ok boot disk1

ERROR:
Rebooting with command: boot disk1
Boot device: /p...@1c,60/s...@2/d...@1,0  File and args:
no pool_props
Evaluating:
The file just loaded does not appear to be executable.
ok

QUESTIONS:
1. what's wrong what my steps
2. any better idea

thanks 

-Kyle




 -Kyle


thx
jake
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-05 Thread Joseph L. Casale
>Quick snipped from zpool iostat :
>
>   mirror 1.12G   695G  0  0  0  0
> c8t12d0  -  -  0  0  0  0
> c8t13d0  -  -  0  0  0  0
>   c7t2d04K  29.0G  0  1.56K  0   200M
>   c7t3d04K  29.0G  0  1.58K  0   202M
>
>The disks on c7 are both Intel X25-E 

Henrik,
So the SATA discs are in the MD1000 behind the PERC 6/E and how
have you configured/attached the 2 SSD slogs and L2ARC drive? If
I understand you, you have sued 14 of the 15 slots in the MD so
I assume you have the 3 SSD's in the R905, what controller are
they running on?

Thanks!
jlc
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sol10u7: can't "zpool remove" missing hot spare

2009-08-05 Thread Kyle McDonald

Will Murnane wrote:

I'm using Solaris 10u6 updated to u7 via patches, and I have a pool
with a mirrored pair and a (shared) hot spare.  We reconfigured disks
a while ago and now the controller is c4 instead of c2.  The hot spare
was originally on c2, and apparently on rebooting it didn't get found.
 So, I looked up what the new name for the hot spare was, then added
it to the pool with "zpool add home1 spare c4t19d0".  I then tried to
remove the original name for the hot spare:

r...@box:~# zpool remove home1 c2t0d8
r...@box:~# zpool status home1
  pool: home1
 state: ONLINE
 scrub: none requested
config:

NAME STATE READ WRITE CKSUM
home1ONLINE   0 0 0
  mirror ONLINE   0 0 0
c4t17d0  ONLINE   0 0 0
c4t24d0  ONLINE   0 0 0
spares
  c2t0d8 UNAVAIL   cannot open
  c4t19d0AVAIL

errors: No known data errors

So, how can I convince the pool to release its grasp on c2t0d8?

  
Have you tried making a sparse file with mkfile in /tmp and then ZFS 
replace'ing c2t0d8 with the file, and then zfs remove'ing the file?


I don't know if it will work, but at least at the time of the remove, 
the device will exist.


 -Kyle


Thanks!
Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Martin
richard wrote:
> Preface: yes, shrink will be cool.  But we've been
> running highly  
> available,
> mission critical datacenters for more than 50 years
> without shrink being
> widely available.

I would debate that.  I remember batch windows and downtime delaying one's 
career movement.  Today we are 24x7 where an outage can kill an entire business.

> Do it exactly the same way you do it for UFS.  You've
> been using UFS
> for years without shrink, right?  Surely you have
> procedures in  
> place :-)

While I haven't taken a formal survey, everywhere I look I see JFS on AIX and 
VxFS on Solaris.  I haven't been in a production UFS shop this decade.

> Backout plans are not always simple reversals.  A
> well managed site will
> have procedures for rolling upgrades.

I agree with everything you wrote.  Today other technologies allow live changes 
to the pool, so companies use those technologies instead of ZFS.

> There is more than one way to skin a cat.

Which entirely misses the point.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Kyle McDonald

Kyle McDonald wrote:

Jacob Ritorto wrote:
Is this implemented in OpenSolaris 2008.11?  I'm moving move my 
filer's rpool to an ssd mirror to free up bigdisk slots currently 
used by the os and need to shrink rpool from 40GB to 15GB. (only 
using 2.7GB for the install).


  
Your best bet would be to install the new ssd drives, create a new 
pool, snapshot the exisitng pool and use ZFS send/recv to migrate the 
data to the new pool. There are docs around about how install grub and 
the boot blocks on the new devices also. After that remove (export!, 
don't destroy yet!)

the old drives, and reboot to see how it works.

If you have no problems, (and I don't think there's anything technical 
that would keep this from working,) then you're good. Otherwise put 
the old pool back in. :)


This thread dicusses basically this same thing - he had a problem along 
the way, but Cindy answered it.



Hi Nawir,

I haven't tested these steps myself, but the error message
means that you need to set this property:

# zpool set bootfs=rpool/ROOT/BE-name rpool

Cindy

On 08/05/09 03:14, nawir wrote:
Hi,

I have sol10u7 OS with 73GB HD in c1t0d0.
I want to clone it to 36GB HD

These steps below is what come in my mind
STEPS TAKEN
# zpool create -f altrpool c1t1d0s0
# zpool set listsnapshots=on rpool
# SNAPNAME=`date +%Y%m%d`
# zfs snapshot -r rpool/r...@$snapname
# zfs list -t snapshot
# zfs send -R rp...@$snapname | zfs recv -vFd altrpool
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk 
/dev/rdsk/c1t1d0s0

for x86 do
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
# zpool export altrpool
# init 5
remove source disk (c1t0d0s0) and move target disk (c1t1d0s0) to slot0
-insert solaris10 dvd
ok boot cdrom -s
# zpool import altrpool rpool
# init 0
ok boot disk1

ERROR:
Rebooting with command: boot disk1
Boot device: /p...@1c,60/s...@2/d...@1,0  File and args:
no pool_props
Evaluating:
The file just loaded does not appear to be executable.
ok

QUESTIONS:
1. what's wrong what my steps
2. any better idea

thanks 

-Kyle




 -Kyle


thx
jake
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Kyle McDonald

Jacob Ritorto wrote:

Is this implemented in OpenSolaris 2008.11?  I'm moving move my filer's rpool 
to an ssd mirror to free up bigdisk slots currently used by the os and need to 
shrink rpool from 40GB to 15GB. (only using 2.7GB for the install).

  
Your best bet would be to install the new ssd drives, create a new pool, 
snapshot the exisitng pool and use ZFS send/recv to migrate the data to 
the new pool. There are docs around about how install grub and the boot 
blocks on the new devices also. After that remove (export!, don't 
destroy yet!)

the old drives, and reboot to see how it works.

If you have no problems, (and I don't think there's anything technical 
that would keep this from working,) then you're good. Otherwise put the 
old pool back in. :)



 -Kyle


thx
jake
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Kyle McDonald

Martin wrote:

C,

I appreciate the feedback and like you, do not wish to start a side rant, but 
rather understand this, because it is completely counter to my experience.

Allow me to respond based on my anecdotal experience.

  

What's wrong with make a new pool.. safely copy the data. verify data
and then delete the old pool..



You missed a few steps.  The actual process would be more like the following.
1. Write up the steps and get approval from all affected parties
-- In truth, the change would not make it past step 1.
  

Maybe, but maybe not see below...

2. Make a new pool
3. Quiesce the pool and cause a TOTAL outage during steps 4 through 9
  
That's not entirely true. You can use ZFS send/recv to do the major 
first pass of  #4  (and #5 against the snapshot) Live before the total 
outage.
Then after you quiesce everything, you could use an incremental 
send/recv copy the changes since then quickly, reducing down time.


I'd probably run a second full verify anyway, but in theory, I beleive 
the ZFS checksums are used in the send/recv process to ensure that there 
isn't any corruption, so after enough positive experience, I might start 
to skip the second verify.


This should greatly reduce the length of the down time.


Everyone.

  

and then one day [months or years later] wants to shrink it...



Business needs change.  Technology changes.  The project was a pilot and 
canceled.  The extended pool didn't meet verification requirements, e,g, 
performance and the change must be backed out.
In an Enterprise, a change for performance should have been tested on 
another identical non-production system before being implemented on the 
production one.


I'd have to concur there's more useful things out there. OTOH... 



That's probably true and I have not seen the priority list.  I was merely amazed at the 
number of "Enterprises don't need this functionality" posts.

  
All that said, as a personal home user, this is a feature I'm hoping for 
all the time. :)


 -Kyle


Thanks again,
Marty
  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Richard Elling
Preface: yes, shrink will be cool.  But we've been running highly  
available,

mission critical datacenters for more than 50 years without shrink being
widely available.

On Aug 5, 2009, at 9:17 AM, Martin wrote:

You are the 2nd customer I've ever heard of to use shrink.


This attitude seems to be a common theme in ZFS discussions: "No  
enterprise uses shrink, only grow."


Maybe.  The enterprise I work for requires that every change be  
reversible and repeatable.  Every change requires a backout plan and  
that plan better be fast and nondisruptive.


Do it exactly the same way you do it for UFS.  You've been using UFS
for years without shrink, right?  Surely you have procedures in  
place :-)


Who are these enterprise admins who can honestly state that they  
have no requirement to reverse operations?


Backout plans are not always simple reversals.  A well managed site will
have procedures for rolling upgrades.

Who runs a 24x7 storage system and will look you in the eye and  
state, "The storage decisions (parity count, number of devices in a  
stripe, etc.) that I make today will be valid until the end of time  
and will NEVER need nondisruptive adjustment.  Every storage  
decision I made in 1993 when we first installed RAID is still  
correct and has needed no changes despite changes in our business  
models."


My experience is that this attitude about enterprise storage borders  
on insane.


Something does not compute.


There is more than one way to skin a cat.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sol10u7: can't "zpool remove" missing hot spare

2009-08-05 Thread Cindy . Swearingen

Hi Will,

I simulated this issue on s10u7 and then imported the pool on a
current Nevada release. The original issue remains, which is you
can't remove a spare device that no longer exists.

My sense is that the bug fix prevents the spare from getting messed
up in the first place when the device IDs change, but after the original 
device is removed, you can't remove the spare. I think the only

resolution is to put the device back and then you can remove the spare.
This was my resolution during testing.

But, in your case, the original device is renamed.

I don't think the ghost spare causes a problem except aesthetically.

I'm no expert in this error scenario so I will check with someone else
(when he gets back from vacation and then I'm on vacation).

Thanks,

Cindy

On 08/04/09 18:34, Will Murnane wrote:

On Tue, Aug 4, 2009 at 19:05,  wrote:


Hi Will,

It looks to me like you are running into this bug:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6664649

This is fixed in Nevada and a fix will also be available in an
upcoming Solaris 10 release.


That looks like exactly the problem we hit.  Thanks for Googling for me.



This doesn't help you now, unfortunately.


Would it cause problems to temporarily import the pool on an
OpenSolaris machine, remove the spare, and move it back to the Sol10
machine?  I think it'd be safe provided I don't do "zpool upgrade" or
anything like that, but I'd like to make sure.

Thanks,
Will

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recovering from ZFS command lock up after yanking a non-redundant drive?

2009-08-05 Thread roland
doesn´t solaris have the great builtin dtrace for issues like these ?

if we knew in which syscall or kernel-thread the system is stuck, we may get a 
clue...

unfortunately, i don´t have any real knowledge of solaris kernel internals or 
dtrace...
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Chris Du
> On 4-Aug-09, at 19:46 , Chris Du wrote:
> > Yes Constellation, they also have sata version.
> CA$350 is way too  
> > high. It's CA$280 for SAS and CA$235 for SATA,
> 500GB in Vancouver.
> 
> 
> Wow, that is a much better price than I've seen:
> 
> http://pricecanada.com/p.php/Seagate-Constellation-720
> 0-500GB-7200-ST9500430SS-602367/?matched_search=ST9500
> 430SS
> 
> Which retailer is that?
> 
> A.
> 
> --
> Adam Sherman
> CTO, Versature Corp.
> Tel: +1.877.498.3772 x113
> 
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discu
> ss

http://a-power.com/product-11331-624-1
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Jacob Ritorto
+1

Thanks for putting this in a real world perspective, Martin.  I'm faced with 
this exact circumstance right now (see my post to the list from earlier today). 
 Our ZFS filers are highly utilised, highly trusted components at the core of 
our enterprise and serve out OS images, mail storage, customer facing NFS 
mounts, CIFS mounts, etc. for nearly all of our critical services.  Downtime 
is, essentially, a catastrophe and won't get approval without weeks of 
painstaking social engineering..

jake
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Martin
C,

I appreciate the feedback and like you, do not wish to start a side rant, but 
rather understand this, because it is completely counter to my experience.

Allow me to respond based on my anecdotal experience.

> What's wrong with make a new pool.. safely copy the data. verify data
> and then delete the old pool..

You missed a few steps.  The actual process would be more like the following.
1. Write up the steps and get approval from all affected parties
-- In truth, the change would not make it past step 1.
2. Make a new pool
3. Quiesce the pool and cause a TOTAL outage during steps 4 through 9
4. Safely make a copy of the data
5. Verify the data
6. Export old pool
7. Import new pool
8. Restart server
9. Confirm all services are functioning correctly
10. Announce the outage has finished
11. Delete the old pool

Note step 3 and let me know which 24x7 operation would tolerate an extended 
outage (because it would last for hours or days) on a critical production 
server.

One solution is not to do this on a critical enterprise storage, and that's the 
point I am trying to make.

> Who in the enterprise just allocates a
> massive pool

Everyone.

> and then one day [months or years later] wants to shrink it...

Business needs change.  Technology changes.  The project was a pilot and 
canceled.  The extended pool didn't meet verification requirements, e,g, 
performance and the change must be backed out.  Business growth estimates are 
grossly too high and the pool needs migration to a cheaper frame in order to 
keep costs in line with revenue.  The pool was made of 40 of the largest disks 
at the time and now, 4 years later, only 10 disks are needed to accomplish the 
same thing while the 40 original disks are at EOL and no longer supported.

The list goes on and on.

> I'd have to concur there's more useful things out there. OTOH... 

That's probably true and I have not seen the priority list.  I was merely 
amazed at the number of "Enterprises don't need this functionality" posts.

Thanks again,
Marty
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] atomicity of zfs rename

2009-08-05 Thread Roman V. Shaposhnik

POSIX specification of rename(2) provides a very nice property
for building atomic transcations:


If the old argument points to the pathname of a file that is not a
directory, the new argument shall not point to the pathname of a
directory. If the link named by the new argument exists, it shall be
removed and old renamed to new. In this case, a link named new shall
remain visible to other processes throughout the renaming operation and
refer either to the file referred to by new or old before the operation
began.


It appears that zfs rename does NOT implement same semantics for
datasets, as it complains if a new dataset already exists.

Two questions: what would be the most logical way to workaround this
limitation and why was it implemented this way to begin with?

Thanks,
Roman.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recovering from ZFS command lock up after yanking a non-redundant drive?

2009-08-05 Thread Ross
Yeah, sounds just like the issues I've seen before.  I don't think you're 
likely to see a fix anytime soon, but the good news is that so far I've not 
seen anybody reporting problems with LSI 1068 based cards (and I've been 
watching for a while).

With the 1068 being used in the x4540 Thumper 2, I'd expect it to have pretty 
solid drivers :)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Bob Friesenhahn

On Wed, 5 Aug 2009, Bob Friesenhahn wrote:


Quite a few computers still come with a legacy PCI slot.  Are there PCI cards 
which act as a carrier for one or two CompactFlash devices and support system 
boot?


For example, does this product work well with OpenSolaris?  Can it 
work as a boot device for OpenSolaris?


  http://www.newegg.com/Product/Product.aspx?Item=N82E16812186075

It says that it uses a Sil0680 IDE chipset .  Four Compact Flash cards 
can be mounted to one PCI card.


It seems that this chipset does work with Solaris.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread C. Bergström

Martin wrote:

You are the 2nd customer I've ever heard of to use shrink.



This attitude seems to be a common theme in ZFS discussions: "No enterprise uses 
shrink, only grow."

Maybe.  The enterprise I work for requires that every change be reversible and 
repeatable.  Every change requires a backout plan and that plan better be fast 
and nondisruptive.

Who are these enterprise admins who can honestly state that they have no requirement to 
reverse operations?  Who runs a 24x7 storage system and will look you in the eye and 
state, "The storage decisions (parity count, number of devices in a stripe, etc.) 
that I make today will be valid until the end of time and will NEVER need nondisruptive 
adjustment.  Every storage decision I made in 1993 when we first installed RAID is still 
correct and has needed no changes despite changes in our business models."

My experience is that this attitude about enterprise storage borders on insane.
  
What's wrong with make a new pool.. safely copy the data. verify data 
and then delete the old pool..   Who in the enterprise just allocates a 
massive pool and then one day wants to shrink it...  For home nas I 
could see this being useful.. I'm not aruging there isn't a use case, 
but in terms of where my vote for time/energy of the developers goes.. 
I'd have to concur there's more useful things out there.  OTOH... 
once/if the block reallocation code is dropped (webrev?) the shrinking 
of a pool should be a lot easier.  I don't mean to go off on a side 
rant, but afaik this code is written and should have been available.  If 
we all pressured Green-bytes with an open letter it would maybe 
help..  The legal issues around this are what's holding it all up.  
@Sun people can't comment I'm sure, but this is what I speculate.


./C

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Adam Sherman

On 5-Aug-09, at 12:21 , Bob Friesenhahn wrote:
i would be VERY surprised if you couldn't fit these in there  
SOMEWHERE, the
sata to compactflash adapter i got was about 1.75 inches across and  
very
very thin, i was able to mount them side by side on top of the  
drive tray in
my machine, you can easily make a bracket...i know a guy who used  
double

sided tape! but, check out this picture


Quite a few computers still come with a legacy PCI slot.  Are there  
PCI cards which act as a carrier for one or two CompactFlash devices  
and support system boot?



That's also a good idea. Of course, my system only has a single x16  
PCI-E slot in it. :)


A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Bob Friesenhahn

On Wed, 5 Aug 2009, Thomas Burgess wrote:


i would be VERY surprised if you couldn't fit these in there SOMEWHERE, the
sata to compactflash adapter i got was about 1.75 inches across and very
very thin, i was able to mount them side by side on top of the drive tray in
my machine, you can easily make a bracket...i know a guy who used double
sided tape! but, check out this picture


Quite a few computers still come with a legacy PCI slot.  Are there 
PCI cards which act as a carrier for one or two CompactFlash devices 
and support system boot?


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Martin
> You are the 2nd customer I've ever heard of to use shrink.

This attitude seems to be a common theme in ZFS discussions: "No enterprise 
uses shrink, only grow."

Maybe.  The enterprise I work for requires that every change be reversible and 
repeatable.  Every change requires a backout plan and that plan better be fast 
and nondisruptive.

Who are these enterprise admins who can honestly state that they have no 
requirement to reverse operations?  Who runs a 24x7 storage system and will 
look you in the eye and state, "The storage decisions (parity count, number of 
devices in a stripe, etc.) that I make today will be valid until the end of 
time and will NEVER need nondisruptive adjustment.  Every storage decision I 
made in 1993 when we first installed RAID is still correct and has needed no 
changes despite changes in our business models."

My experience is that this attitude about enterprise storage borders on insane.

Something does not compute.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Adam Sherman

On 5-Aug-09, at 12:07 , Thomas Burgess wrote:
i would be VERY surprised if you couldn't fit these in there  
SOMEWHERE, the sata to compactflash adapter i got was about 1.75  
inches across and very very thin, i was able to mount them side by  
side on top of the drive tray in my machine, you can easily make a  
bracket...i know a guy who used double sided tape! but, check out  
this picturehttp://www.newegg.com/Product/Product.aspx?Item=N82E16812186051 
  most of them can be found like this, they are VERY VERY thin and  
can be mounted just about anywhere.  they don't get very hot.  I've  
used them on a few machines, opensolaris and freebsd.   I'm a big  
fan of compact flash.



What about USB sticks? Is there a difference in practice?

Thanks for the advice,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Thomas Burgess
i would be VERY surprised if you couldn't fit these in there SOMEWHERE, the
sata to compactflash adapter i got was about 1.75 inches across and very
very thin, i was able to mount them side by side on top of the drive tray in
my machine, you can easily make a bracket...i know a guy who used double
sided tape! but, check out this picture
http://www.newegg.com/Product/Product.aspx?Item=N82E16812186051  most of
them can be found like this, they are VERY VERY thin and can be mounted just
about anywhere.  they don't get very hot.  I've used them on a few machines,
opensolaris and freebsd.   I'm a big fan of compact flash.

On Wed, Aug 5, 2009 at 8:38 AM, Adam Sherman  wrote:

> On 5-Aug-09, at 0:14 , Thomas Burgess wrote:
>
>> i boot from compact flash.  it's not a big deal if you mirror it because
>> you shouldn't be booting up very often.  Also, they make these great
>> compactflash to sata adapters so if yer motherboard has 2 open sata ports
>> then you'll be golden there.
>>
>
> You are suggesting booting from a mirrored pair of CF cards? I'll have to
> wait until I see the system to know if I have room, but that's a good idea.
>
> I've got lots of unused SATA ports.
>
> Thanks,
>
>
> A.
>
> --
> Adam Sherman
> CTO, Versature Corp.
> Tel: +1.877.498.3772 x113
>
>
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] which version that the ZFS performance is better ?

2009-08-05 Thread Thomas Burgess
i think you need to give more information about your setup

On Wed, Aug 5, 2009 at 5:40 AM, Mr liu  wrote:

> 0811 or 0906 or sun solairs
>
> I read a lot of aarticles about  zfs performance .and test 0811/0906
> /nexentastor 2.0 .
>
> The write performance  is at most 60Mb/s (32k),the other only around
> 10Mb/s.
>
> I test it from comstar iscsi target and used IOMeter in windows.
>
> What shall I do , I am very very dispirited and disappointed .
>
> Please help me ,thanks
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How Virtual Box handles the IO

2009-08-05 Thread Thomas Burgess
>From what i understand, and from everything i've read by following threads
here, there are ways to do it but there is not a standardized tool yet, and
it's complicated and on a per-case basis but people who pay for support have
recovered pools.

i'm sure they are working on it, and i would imagine it would be a major
goal.

On Wed, Aug 5, 2009 at 1:23 AM, James Hess  wrote:

> So much for the "it's a consumer hardware problem" argument.
> I for one gotta count it as a major drawback of ZFS that it doesn't provide
> you a mechanism to get something of your pool back  in the manner of
> reconstruction or reversion, if a failure occurs,  where there is a metadata
> inconsistency.
>
> A policy of data integrity taken to the extreme of blocking access to good
> data is not something OS users want.
>
> Users don't put up with this sort of thing from other filesystems...  some
> sort of improvement here is sorely needed.
>
> ZFS ought to be retaining enough information and make an effort to bring
> pool metadata back to a consistent state,   even if it means  loss of data,
>  that a file may have to revert to an older state,   or a file that was
> undergoing changes  may now be unreadable,  since the log was inconsistent..
>
> even if the user should have to zpool import with a  recovery-mode  option
> or something of that nature.
>
> It beats losing a TB of data on the pool that should be otherwise intact.
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Would ZFS will bring IO when the file is VERY short-lived?

2009-08-05 Thread Bob Friesenhahn

On Tue, 4 Aug 2009, Chookiex wrote:

You know, ZFS afford a very Big buffer for write IO.
So, When we write a file, the first stage is put it to buffer. 
But, if the file is VERY short-lived? Is it bring IO to disk?

or else, it just put the meta data and data to memory, and then removed it?

This depends on timing, available memory, and if the writes are 
synchronous.  Synchronous writes are sent to disk immediately. 
Buffered writes seem to be very well buffered and small created files 
are not persisted until the next TXG sync interval and if they are 
immediately deleted it is as if they did not exist at all.  This leads 
to a huge improvement in observed performance.


% while true
do
  rm -f crap.dat
  dd if=/dev/urandom of=crap.dat count=200
  rm -f crap.dat
  sleep 1
done

I just verified this by running the above script and running a tool 
which monitors zfs read and write requests.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recovering from ZFS command lock up after yanking a non-redundant drive?

2009-08-05 Thread Chris Baker
I've left it hanging about 2 hours. I've also just learned that whatever the 
issue is it is also blocking an "init 5" shutdown. I was thinking about setting 
a watchdog with a forced reboot but that will get me nowhere if I need I reset 
button restart.

Thanks for the advice re the LSI 1068, not exactly what I was hoping to hear 
but very good info all the same.

KInd regards

Chris
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] clone rpool to smaller disk

2009-08-05 Thread Cindy . Swearingen

Hi Nawir,

I haven't tested these steps myself, but the error message
means that you need to set this property:

# zpool set bootfs=rpool/ROOT/BE-name rpool

Cindy

On 08/05/09 03:14, nawir wrote:

Hi,

I have sol10u7 OS with 73GB HD in c1t0d0.
I want to clone it to 36GB HD

These steps below is what come in my mind
STEPS TAKEN
# zpool create -f altrpool c1t1d0s0
# zpool set listsnapshots=on rpool
# SNAPNAME=`date +%Y%m%d`
# zfs snapshot -r rpool/r...@$snapname
# zfs list -t snapshot
# zfs send -R rp...@$snapname | zfs recv -vFd altrpool
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk 
/dev/rdsk/c1t1d0s0
for x86 do
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
# zpool export altrpool
# init 5
remove source disk (c1t0d0s0) and move target disk (c1t1d0s0) to slot0
-insert solaris10 dvd
ok boot cdrom -s
# zpool import altrpool rpool
# init 0
ok boot disk1

ERROR:
Rebooting with command: boot disk1
Boot device: /p...@1c,60/s...@2/d...@1,0  File and args:
no pool_props
Evaluating:
The file just loaded does not appear to be executable.
ok

QUESTIONS:
1. what's wrong what my steps
2. any better idea

thanks

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sol10u7: can't "zpool remove" missing hot spare

2009-08-05 Thread Cindy . Swearingen

Hi Will,

Since no workaround is provided in the CR, I don't know if importing on
a more recent OpenSolaris release and trying to remove it will work.

I will simulate this error, try this approach, and get back to you.

Thanks,

Cindy



On 08/04/09 18:34, Will Murnane wrote:

On Tue, Aug 4, 2009 at 19:05,  wrote:


Hi Will,

It looks to me like you are running into this bug:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6664649

This is fixed in Nevada and a fix will also be available in an
upcoming Solaris 10 release.


That looks like exactly the problem we hit.  Thanks for Googling for me.



This doesn't help you now, unfortunately.


Would it cause problems to temporarily import the pool on an
OpenSolaris machine, remove the spare, and move it back to the Sol10
machine?  I think it'd be safe provided I don't do "zpool upgrade" or
anything like that, but I'd like to make sure.

Thanks,
Will

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS CIFS problem with Ubuntu, NFS as an alternative?

2009-08-05 Thread Mark Shellenbaum

Christian Flaig wrote:

Hello,

I got a very strange problem here, tried out many things, can't solve it.
I run a virtual machine via VirtualBox 2.2.4, with Ubuntu 9.04. OpenSolaris as the host 
is 2009-06, with snv118. Now I try to mount (via CIFS) a share in Ubuntu from 
OpenSolaris. Mounting is successful, I can see all files, also change directories. But I 
can't read the files! Whenever I try to copy a file, I get a "Permission 
denied" from Ubuntu. But when I mount the same share in Windows XP, I can read the 
files also. So might be an Ubuntu issue, anyone also experienced this? Any logs I can 
check/configure to find out more?
Here the permissions for the directory (tmns is the user I use for mounting):
dr-xr-xr-x+ 31 chrisstaff588 Aug  4 23:57 video
  user:tmns:r-x---a-R-c---:fd-:allow
 user:chris:rwxpdDaARWcCos:fd-:allow
(The "x" shouldn't be necessary, but XP seems not able to list subdirectories 
without it...)

So I thought about using NFS instead, which should be better for an Unix - Unix 
connection anyway. But here I face another issue, which might be because of 
missing knowledge about NFS...
I share the "video" directory above with the ZFS sharenfs command, options are "anon=0,ro". Without 
"anon=0" I always get a "Permission denied" when I want to mount the share via NFS on Ubuntu (mounting with 
root user). But with "anon=0" I can only read the files on the Ubuntu side with root, the mounted directory had 
numerical ids for owner and group on the Ubuntu side.
Any clue how I can solve this?

Many thanks for your help, I'm not sure how to progress on this...

Cheers,

Chris



This is better asked on cifs-disc...@opensolaris.org

They will start out by asking you to run:

http://opensolaris.org/os/project/cifs-server/files/cifs-gendiag


  -Mark
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Shrinking a zpool?

2009-08-05 Thread Jacob Ritorto
Is this implemented in OpenSolaris 2008.11?  I'm moving move my filer's rpool 
to an ssd mirror to free up bigdisk slots currently used by the os and need to 
shrink rpool from 40GB to 15GB. (only using 2.7GB for the install).

thx
jake
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recovering from ZFS command lock up after yanking a non-redundant drive?

2009-08-05 Thread Ross
Just a thought, but how long have you left it?  I had problems with a failing 
drive a while back which did eventually get taken offline, but took about 20 
minutes to do so.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Slow Resilvering Performance

2009-08-05 Thread Galen
I'm still struggling with slow resilvering performance. There doesn't seem to 
be any clear bottleneck at this point.. and it's going glacially slow.

scrub: resilver in progress for 11h2m, 27.86% done, 28h35m to go

Load averages are like 0.13-0.15 range, CPU usage is <10%, the machine is doing 
nothing else. zpool iostat -v shows that read/write operations per second on 
each disk are in the single digit range. 

As these are decent 7200 RPM 3.5" disks, I know they can do more IOPS than than 
that. I've seen it before. 

Is there any way to give this process a kick in the pants and speed things up? 
Because once this resilvering is done, I need to shuffle disks again and 
resilver at least once more, if not twice... and at this rate, we're measuring 
resilvering time in days!

Here's the zpool iostat -v output:

ga...@solaribyte:~# zpool iostat -v 1
   capacity operationsbandwidth
pool used  avail   read  write   read  write
--  -  -  -  -  -  -
olympic 2.15T  2.38T 88  6  10.6M  23.5K
  raidz21.42T  2.21T 67  4  8.16M  14.9K
replacing  -  - 62  8  1.26M   107K
  c14d0  -  - 41  6  1.29M   107K
  13143843205485599815  -  -  0  0 13  2
c10d0   -  - 14  1  1.39M  2.33K
replacing  -  - 67  3  1.36M  2.84K
  c13d0  -  - 44  2  1.39M  2.56K
  c10d1  -  -  0 16 61  1.37M
replacing  -  -  8 61   176K  1.20M
  2673037112181665188  -  -  0  0  0  0
  c11d0  -  -  7 57   178K  1.20M
c8t0d0  -  - 42  1  1.39M  2.49K
c12d0   -  -  3  0   117K489
c7t0d0  -  - 42  1  1.39M  2.45K
c8t1d0  -  - 43  1  1.39M  2.36K
  raidz2 750G   178G 20  2  2.43M  8.67K
replacing  -  - 20  2  1.22M  4.44K
  c15d1  -  - 16  1  1.24M  3.27K
  8862062963069576548  -  -  0  0  0  0
replacing  -  - 20  2  1.22M  4.36K
  c16d1  -  - 16  1  1.24M  3.20K
  2970292359499355257  -  -  0  0 10  1
replacing  -  -  0  0  0  1.88K
  3106783608214265238  -  -  0  0  0  0
  348116080896813745  -  -  0  0 10  1
replacing  -  -  0 20  0  1.21M
  217004158856088173  -  -  0  0 10  1
  2711430129275390205  -  -  0  0 10  1
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Slow Resilvering Performance

2009-08-05 Thread Galen
I'm still struggling with slow resilvering performance. There doesn't seem to 
be any clear bottleneck at this point.. and it's going glacially slow.

scrub: resilver in progress for 11h2m, 27.86% done, 28h35m to go

Load averages are like 0.13-0.15 range, CPU usage is <10%, the machine is doing 
nothing else. zpool iostat -v shows that read/write operations per second on 
each disk are in the single digit range. 

As these are decent 7200 RPM 3.5" disks, I know they can do more IOPS than than 
that. I've seen it before. 

Is there any way to give this process a kick in the pants and speed things up? 
Because once this resilvering is done, I need to shuffle disks again and 
resilver at least once more, if not twice... and at this rate, we're measuring 
resilvering time in days!

Here's the zpool iostat -v output:

ga...@solaribyte:~# zpool iostat -v 1
   capacity operationsbandwidth
pool used  avail   read  write   read  write
--  -  -  -  -  -  -
olympic 2.15T  2.38T 88  6  10.6M  23.5K
  raidz21.42T  2.21T 67  4  8.16M  14.9K
replacing  -  - 62  8  1.26M   107K
  c14d0  -  - 41  6  1.29M   107K
  13143843205485599815  -  -  0  0 13  2
c10d0   -  - 14  1  1.39M  2.33K
replacing  -  - 67  3  1.36M  2.84K
  c13d0  -  - 44  2  1.39M  2.56K
  c10d1  -  -  0 16 61  1.37M
replacing  -  -  8 61   176K  1.20M
  2673037112181665188  -  -  0  0  0  0
  c11d0  -  -  7 57   178K  1.20M
c8t0d0  -  - 42  1  1.39M  2.49K
c12d0   -  -  3  0   117K489
c7t0d0  -  - 42  1  1.39M  2.45K
c8t1d0  -  - 43  1  1.39M  2.36K
  raidz2 750G   178G 20  2  2.43M  8.67K
replacing  -  - 20  2  1.22M  4.44K
  c15d1  -  - 16  1  1.24M  3.27K
  8862062963069576548  -  -  0  0  0  0
replacing  -  - 20  2  1.22M  4.36K
  c16d1  -  - 16  1  1.24M  3.20K
  2970292359499355257  -  -  0  0 10  1
replacing  -  -  0  0  0  1.88K
  3106783608214265238  -  -  0  0  0  0
  348116080896813745  -  -  0  0 10  1
replacing  -  -  0 20  0  1.21M
  217004158856088173  -  -  0  0 10  1
  2711430129275390205  -  -  0  0 10  1
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-05 Thread Henrik Johansen

Ross Walker wrote:

On Aug 5, 2009, at 2:49 AM, Henrik Johansen  wrote:


Ross Walker wrote:

On Aug 4, 2009, at 8:36 PM, Carson Gaspar  wrote:


Ross Walker wrote:

I get pretty good NFS write speeds with NVRAM (40MB/s 4k  
sequential  write). It's a Dell PERC 6/e with 512MB onboard.

...
there, dedicated slog device with NVRAM speed. It would be even   
better to have a pair of SSDs behind the NVRAM, but it's hard to   
find compatible SSDs for these controllers, Dell currently  
doesn't  even support SSDs in their RAID products :-(


Isn't the PERC 6/e just a re-branded LSI? LSI added SSD support   
recently.


Yes, but the LSI support of SSDs is on later controllers.


Sure that's not just a firmware issue ?

My PERC 6/E seems to support SSD's :
# ./MegaCli -AdpAllInfo -a2 | grep -i ssd
Enable Copyback to SSD on SMART Error   : No
Enable SSD Patrol Read  : No
Allow SSD SAS/SATA Mix in VD : No
Allow HDD/SSD Mix in VD  : No


Controller info :Versions
   
Product Name: PERC 6/E Adapter
Serial No   : 
FW Package Build: 6.0.3-0002

   Mfg. Data
   
Mfg. Date   : 06/08/07
Rework Date : 06/08/07
Revision No : Battery FRU : N/A

   Image Versions in Flash:
   
FW Version : 1.11.82-0473
BIOS Version   : NT13-2
WebBIOS Version: 1.1-32-e_11-Rel
Ctrl-R Version : 1.01-010B
Boot Block Version : 1.00.00.01-0008


I currently have 2 x Intel X25-E (32 GB) as dedicated slogs and 1 x
Intel X25-M (80 GB) for the L2ARC behind a PERC 6/i on my Dell R905
testbox.

So far there have been no problems with them.


Really?

Now you have my interest.

Two questions, did you get the X25 from Dell? Are you using it with a  
hot-swap carrier?


Knowing that these will work would be great news.


Those disks are not from Dell as they were incapable of delivering Intel
SSD's.

Just out of curiosity - do they have to be from Dell ?

I have tested the Intel SSD's on various Dell servers - they work
out-of-the-box with both their 2.5" and 3.5" trays (the 3.5" trays do
require a SATA interposer which is included with all SATA disks ordered
from them).


-Ross



--
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk from ZFS Pool

2009-08-05 Thread Ross Walker

On Aug 5, 2009, at 8:50 AM, Ketan  wrote:


How can we remove disk from zfs pool, i want to remove disk c0d3

zpool status datapool
 pool: datapool
state: ONLINE
scrub: none requested
config:

   NAMESTATE READ WRITE CKSUM
   datapoolONLINE   0 0 0
 c0d2  ONLINE   0 0 0
 c0d3  ONLINE   0 0 0


You can't in that non-redundant pool.

Copy data off, destroy and re-create.

-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-05 Thread Henrik Johansen

Ross Walker wrote:

On Aug 5, 2009, at 3:09 AM, Henrik Johansen  wrote:


Ross Walker wrote:
On Aug 4, 2009, at 10:22 PM, Bob Friesenhahn  > wrote:



On Tue, 4 Aug 2009, Ross Walker wrote:
Are you sure that it is faster than an SSD?  The data is indeed   
pushed closer to the disks, but there may be considerably more   
latency associated with getting that data into the controller   
NVRAM cache than there is into a dedicated slog SSD.


I don't see how, as the SSD is behind a controller it still must   
make it to the controller.


If you take a look at 'iostat -x' output you will see that the   
system knows about a queue for each device.  If it was any other   
way, then a slow device would slow down access to all of the  
other  devices.  If there is concern about lack of bandwidth (PCI- 
E?) to  the controller, then you can use a separate controller for  
the SSDs.


It's not bandwidth. Though with a lot of mirrors that does become  
a  concern.


Well the duplexing benefit you mention does hold true. That's a   
complex real-world scenario that would be hard to benchmark in   
production.


But easy to see the effects of.


I actually meant to say, hard to bench out of production.

Tests done by others show a considerable NFS write speed  
advantage  when using a dedicated slog SSD rather than a  
controller's NVRAM  cache.


I get pretty good NFS write speeds with NVRAM (40MB/s 4k  
sequential  write). It's a Dell PERC 6/e with 512MB onboard.


I get 47.9 MB/s (60.7 MB/s peak) here too (also with 512MB  
NVRAM),  but that is not very good when the network is good for  
100 MB/s.   With an SSD, some other folks here are getting  
essentially network  speed.


In testing with ram disks I was only able to get a max of around  
60MB/ s with 4k block sizes, with 4 outstanding.


I can do 64k blocks now and get around 115MB/s.


I just ran some filebench microbenchmarks against my 10 Gbit testbox
which is a Dell R905, 4 x 2.5 Ghz AMD Quad Core CPU's and 64 GB RAM.

My current pool is comprised of 7 mirror vdevs (SATA disks), 2 Intel
X25-E as slogs and 1 Intel X25-M for the L2ARC.

The pool is a MD1000 array attached to a PERC 6/E using 2 SAS cables.

The nic's are ixgbe based.

Here are the numbers :
Randomwrite benchmark - via 10Gbit NFS : IO Summary: 4483228 ops,  
73981.2 ops/s, (0/73981 r/w) 578.0mb/s, 44us cpu/op, 0.0ms latency


Randomread benchmark - via 10Gbit NFS :
IO Summary: 7663903 ops, 126467.4 ops/s, (126467/0 r/w) 988.0mb/s,  
5us cpu/op, 0.0ms latency


The real question is if these numbers can be trusted - I am currently
preparing new test runs with other software to be able to do a
comparison.


Yes, need to make sure it is sync io as NFS clients can still choose  
to use async and work out of their own cache.


Quick snipped from zpool iostat : 


  mirror 1.12G   695G  0  0  0  0
c8t12d0  -  -  0  0  0  0
c8t13d0  -  -  0  0  0  0
  c7t2d04K  29.0G  0  1.56K  0   200M
  c7t3d04K  29.0G  0  1.58K  0   202M

The disks on c7 are both Intel X25-E 


-Ross



--
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-05 Thread Ross Walker

On Aug 5, 2009, at 3:09 AM, Henrik Johansen  wrote:


Ross Walker wrote:
On Aug 4, 2009, at 10:22 PM, Bob Friesenhahn  > wrote:



On Tue, 4 Aug 2009, Ross Walker wrote:
Are you sure that it is faster than an SSD?  The data is indeed   
pushed closer to the disks, but there may be considerably more   
latency associated with getting that data into the controller   
NVRAM cache than there is into a dedicated slog SSD.


I don't see how, as the SSD is behind a controller it still must   
make it to the controller.


If you take a look at 'iostat -x' output you will see that the   
system knows about a queue for each device.  If it was any other   
way, then a slow device would slow down access to all of the  
other  devices.  If there is concern about lack of bandwidth (PCI- 
E?) to  the controller, then you can use a separate controller for  
the SSDs.


It's not bandwidth. Though with a lot of mirrors that does become  
a  concern.


Well the duplexing benefit you mention does hold true. That's a   
complex real-world scenario that would be hard to benchmark in   
production.


But easy to see the effects of.


I actually meant to say, hard to bench out of production.

Tests done by others show a considerable NFS write speed  
advantage  when using a dedicated slog SSD rather than a  
controller's NVRAM  cache.


I get pretty good NFS write speeds with NVRAM (40MB/s 4k  
sequential  write). It's a Dell PERC 6/e with 512MB onboard.


I get 47.9 MB/s (60.7 MB/s peak) here too (also with 512MB  
NVRAM),  but that is not very good when the network is good for  
100 MB/s.   With an SSD, some other folks here are getting  
essentially network  speed.


In testing with ram disks I was only able to get a max of around  
60MB/ s with 4k block sizes, with 4 outstanding.


I can do 64k blocks now and get around 115MB/s.


I just ran some filebench microbenchmarks against my 10 Gbit testbox
which is a Dell R905, 4 x 2.5 Ghz AMD Quad Core CPU's and 64 GB RAM.

My current pool is comprised of 7 mirror vdevs (SATA disks), 2 Intel
X25-E as slogs and 1 Intel X25-M for the L2ARC.

The pool is a MD1000 array attached to a PERC 6/E using 2 SAS cables.

The nic's are ixgbe based.

Here are the numbers :
Randomwrite benchmark - via 10Gbit NFS : IO Summary: 4483228 ops,  
73981.2 ops/s, (0/73981 r/w) 578.0mb/s, 44us cpu/op, 0.0ms latency


Randomread benchmark - via 10Gbit NFS :
IO Summary: 7663903 ops, 126467.4 ops/s, (126467/0 r/w) 988.0mb/s,  
5us cpu/op, 0.0ms latency


The real question is if these numbers can be trusted - I am currently
preparing new test runs with other software to be able to do a
comparison.


Yes, need to make sure it is sync io as NFS clients can still choose  
to use async and work out of their own cache.


-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-05 Thread Ross Walker

On Aug 5, 2009, at 2:49 AM, Henrik Johansen  wrote:


Ross Walker wrote:

On Aug 4, 2009, at 8:36 PM, Carson Gaspar  wrote:


Ross Walker wrote:

I get pretty good NFS write speeds with NVRAM (40MB/s 4k  
sequential  write). It's a Dell PERC 6/e with 512MB onboard.

...
there, dedicated slog device with NVRAM speed. It would be even   
better to have a pair of SSDs behind the NVRAM, but it's hard to   
find compatible SSDs for these controllers, Dell currently  
doesn't  even support SSDs in their RAID products :-(


Isn't the PERC 6/e just a re-branded LSI? LSI added SSD support   
recently.


Yes, but the LSI support of SSDs is on later controllers.


Sure that's not just a firmware issue ?

My PERC 6/E seems to support SSD's :
# ./MegaCli -AdpAllInfo -a2 | grep -i ssd
Enable Copyback to SSD on SMART Error   : No
Enable SSD Patrol Read  : No
Allow SSD SAS/SATA Mix in VD : No
Allow HDD/SSD Mix in VD  : No


Controller info :Versions
   
Product Name: PERC 6/E Adapter
Serial No   : 
FW Package Build: 6.0.3-0002

   Mfg. Data
   
Mfg. Date   : 06/08/07
Rework Date : 06/08/07
Revision No : Battery FRU : N/A

   Image Versions in Flash:
   
FW Version : 1.11.82-0473
BIOS Version   : NT13-2
WebBIOS Version: 1.1-32-e_11-Rel
Ctrl-R Version : 1.01-010B
Boot Block Version : 1.00.00.01-0008


I currently have 2 x Intel X25-E (32 GB) as dedicated slogs and 1 x
Intel X25-M (80 GB) for the L2ARC behind a PERC 6/i on my Dell R905
testbox.

So far there have been no problems with them.


Really?

Now you have my interest.

Two questions, did you get the X25 from Dell? Are you using it with a  
hot-swap carrier?


Knowing that these will work would be great news.

-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Remove disk from ZFS Pool

2009-08-05 Thread Andre van Eyssen

On Wed, 5 Aug 2009, Ketan wrote:


How can we remove disk from zfs pool, i want to remove disk c0d3


[snip]

Currently, you can't remove a vdev without destroying the pool.

--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Remove the zfs snapshot keeping the original volume and clone

2009-08-05 Thread Ketan
I created a snapshot and subsequent clone of a zfs volume. But now i 'm not 
able to remove the snapshot it gives me following error 

zfs destroy newpool/ldom2/zdi...@bootimg
cannot destroy 'newpool/ldom2/zdi...@bootimg': snapshot has dependent clones
use '-R' to destroy the following datasets:
newpool/ldom2/zdisk0

and if i promote the clone then the original volume becomes the dependent clone 
, is there a way to destroy just the snapshot leaving the clone and original 
volume intact ?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Remove disk from ZFS Pool

2009-08-05 Thread Ketan
How can we remove disk from zfs pool, i want to remove disk c0d3 

 zpool status datapool
  pool: datapool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
datapoolONLINE   0 0 0
  c0d2  ONLINE   0 0 0
  c0d3  ONLINE   0 0 0

I
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Adam Sherman

On 4-Aug-09, at 19:46 , Chris Du wrote:
Yes Constellation, they also have sata version. CA$350 is way too  
high. It's CA$280 for SAS and CA$235 for SATA, 500GB in Vancouver.



Wow, that is a much better price than I've seen:

http://pricecanada.com/p.php/Seagate-Constellation-7200-500GB-7200-ST9500430SS-602367/?matched_search=ST9500430SS

Which retailer is that?

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool Layout Advice Needed

2009-08-05 Thread Adam Sherman

On 5-Aug-09, at 0:14 , Thomas Burgess wrote:
i boot from compact flash.  it's not a big deal if you mirror it  
because you shouldn't be booting up very often.  Also, they make  
these great compactflash to sata adapters so if yer motherboard has  
2 open sata ports then you'll be golden there.


You are suggesting booting from a mirrored pair of CF cards? I'll have  
to wait until I see the system to know if I have room, but that's a  
good idea.


I've got lots of unused SATA ports.

Thanks,

A.

--
Adam Sherman
CTO, Versature Corp.
Tel: +1.877.498.3772 x113



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recovering from ZFS command lock up after yanking a non-redundant drive?

2009-08-05 Thread Chris Baker
Sanjeev

Thanks for taking an interest. Unfortunately I did have failmode=continue, but 
I have just destroyed/recreated and double confirmed and got exactly the same 
results.

zpool status shows both drives mirror, ONLINE, no errors

dmesg shows:

SATA device detached at port 0

cfgadm shows:

sata-portemptyunconfigured

The IO process has just hung. 

It seems to me that zfs thinks it has a drive with a really long response time 
rather than a dead drive so no failmode processing, no mirror resilience etc. 
Clearly something has been reported back to the kernel re the port going dead 
but whether that came from the driver or not I wouldn't know.

KInd regards

Chris
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS CIFS problem with Ubuntu, NFS as an alternative?

2009-08-05 Thread Christian Flaig
Little update...

I can read files (within the share) with the following ACL:
-r--r--r--+  1 chrisstaff 35 Aug  5 13:18 .txt
  user:tmns:r-x---a-R-c---:--I:allow
 user:chris:rwxpdDaARWc--s:--I:allow
  everyone@:r-a-R-c--s:---:allow

(Line with "everyone@" is added compared to the first posting.)

I thought the ZFS CIFS server would map the user used for mouting (tmns) to the 
local user tmns, and I wouldn't need the "everyone" line... Anything wrong with 
my thought?

Thanks for your help.

Chris
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS CIFS problem with Ubuntu, NFS as an alternative?

2009-08-05 Thread Christian Flaig
Hello,

I got a very strange problem here, tried out many things, can't solve it.
I run a virtual machine via VirtualBox 2.2.4, with Ubuntu 9.04. OpenSolaris as 
the host is 2009-06, with snv118. Now I try to mount (via CIFS) a share in 
Ubuntu from OpenSolaris. Mounting is successful, I can see all files, also 
change directories. But I can't read the files! Whenever I try to copy a file, 
I get a "Permission denied" from Ubuntu. But when I mount the same share in 
Windows XP, I can read the files also. So might be an Ubuntu issue, anyone also 
experienced this? Any logs I can check/configure to find out more?
Here the permissions for the directory (tmns is the user I use for mounting):
dr-xr-xr-x+ 31 chrisstaff588 Aug  4 23:57 video
  user:tmns:r-x---a-R-c---:fd-:allow
 user:chris:rwxpdDaARWcCos:fd-:allow
(The "x" shouldn't be necessary, but XP seems not able to list subdirectories 
without it...)

So I thought about using NFS instead, which should be better for an Unix - Unix 
connection anyway. But here I face another issue, which might be because of 
missing knowledge about NFS...
I share the "video" directory above with the ZFS sharenfs command, options are 
"anon=0,ro". Without "anon=0" I always get a "Permission denied" when I want to 
mount the share via NFS on Ubuntu (mounting with root user). But with "anon=0" 
I can only read the files on the Ubuntu side with root, the mounted directory 
had numerical ids for owner and group on the Ubuntu side.
Any clue how I can solve this?

Many thanks for your help, I'm not sure how to progress on this...

Cheers,

Chris
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] which version that the ZFS performance is better ?

2009-08-05 Thread Mr liu
0811 or 0906 or sun solairs 

I read a lot of aarticles about  zfs performance .and test 0811/0906 
/nexentastor 2.0 .

The write performance  is at most 60Mb/s (32k),the other only around 10Mb/s.

I test it from comstar iscsi target and used IOMeter in windows.

What shall I do , I am very very dispirited and disappointed .

Please help me ,thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] clone rpool to smaller disk

2009-08-05 Thread nawir
Hi,

I have sol10u7 OS with 73GB HD in c1t0d0.
I want to clone it to 36GB HD

These steps below is what come in my mind
STEPS TAKEN
# zpool create -f altrpool c1t1d0s0
# zpool set listsnapshots=on rpool
# SNAPNAME=`date +%Y%m%d`
# zfs snapshot -r rpool/r...@$snapname
# zfs list -t snapshot
# zfs send -R rp...@$snapname | zfs recv -vFd altrpool
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk 
/dev/rdsk/c1t1d0s0
for x86 do
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
# zpool export altrpool
# init 5
remove source disk (c1t0d0s0) and move target disk (c1t1d0s0) to slot0
-insert solaris10 dvd
ok boot cdrom -s
# zpool import altrpool rpool
# init 0
ok boot disk1

ERROR:
Rebooting with command: boot disk1
Boot device: /p...@1c,60/s...@2/d...@1,0  File and args:
no pool_props
Evaluating:
The file just loaded does not appear to be executable.
ok

QUESTIONS:
1. what's wrong what my steps
2. any better idea

thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zdb CKSUM stats vary?

2009-08-05 Thread Victor Latushkin

On 05.08.09 11:40, Tristan Ball wrote:

Can anyone tell me why successive runs of "zdb" would show very
different values for the cksum column? I had thought these counters were
"since last clear" but that doesn't appear to be the case?


zdb is not intended to be run on live pools. For a live pool you can use it with 
predictable results only on a dataset that does not change on disk, in other 
words snapshot, to dump objects in that dataset only.


running it on a live pool may produce unpredictable results depending on a pool 
activity.


victor


If I run "zdb poolname", right at the end of the output, it lists pool
statistics:

capacity   operations   bandwidth  
errors 
descriptionused avail  read write  read write  read
write cksum
data  1.46T 7.63T   117 0 7.89M 0 0
0 9
  /dev/dsk/c0t21D0230F0298d0s0  150G  781G11 0  803K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d1s0  150G  781G11 0  791K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d2s0  150G  781G11 0  803K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d3s0  150G  781G11 0  807K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d6s0  150G  781G11 0  811K 0
0 0 2
  /dev/dsk/c0t21D0230F0298d7s0  150G  781G12 0  817K 0
0 0 4
  /dev/dsk/c0t21D0230F0298d8s0  150G  781G11 0  815K 0
0 0 4
  /dev/dsk/c0t21D0230F0298d9s0  150G  781G11 0  797K 0
0 014
  /dev/dsk/c0t21D0230F0298d10s0  150G  781G11 0  822K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d11s0  150G  781G11 0  814K 0
0 0 4

If I run it again:

capacity   operations   bandwidth  
errors 
descriptionused avail  read write  read write  read
write cksum
data  1.46T 7.63T   108 0 5.72M 0 0
0 3
  /dev/dsk/c0t21D0230F0298d0s0  150G  781G10 0  583K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d1s0  150G  781G10 0  570K 0
0 019
  /dev/dsk/c0t21D0230F0298d2s0  150G  781G11 0  596K 0
0 017
  /dev/dsk/c0t21D0230F0298d3s0  150G  781G11 0  597K 0
0 0 3
  /dev/dsk/c0t21D0230F0298d6s0  150G  781G10 0  591K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d7s0  150G  781G11 0  586K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d8s0  150G  781G10 0  591K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d9s0  150G  781G10 0  569K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d10s0  150G  781G10 0  586K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d11s0  150G  781G10 0  589K 0
0 0 2

If I run "zdb -vs data" I get:

   capacity   operations   bandwidth  
errors 
descriptionused avail  read write  read write  read
write cksum
data  1.46T 7.63T70 0 4.27M 0 0
0 0
  /dev/dsk/c0t21D0230F0298d0s0  150G  781G 8 0  526K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d1s0  150G  781G 6 0  385K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d2s0  150G  781G 6 0  385K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d3s0  150G  781G 6 0  413K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d6s0  150G  781G 8 0  522K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d7s0  150G  781G 8 0  550K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d8s0  150G  781G 6 0  385K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d9s0  150G  781G 6 0  377K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d10s0  150G  781G 6 0  404K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d11s0  150G  781G 6 0  422K 0
0 0 0

A zpool status shows:

  pool: data
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool
can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
data  ONLINE   0 0 0
  c0t21D0230F0298d0   ONLINE   0 0 0
  c0t21D0230F0298d1   ONLINE   0 0 0
  c0t21D0230F0298d2   ONLINE   0 0 0
  c0t21D0230F0298d3   ONLINE   0 0 0
  c0t21D0230F0298d6   ONLINE   0 0 0
  c0t21D0230F0298d7   ONLINE   0 0 0
  c0t21D0230F0298d8   ONLINE   0 0 0
  c0t21D0230F0298d9   ONLINE   0 0 0
  c0t21D0230F0298d10  ONLINE   0 0 0
  c

[zfs-discuss] ZFS clone destroyed by rollback of it's parent filesystem... recoverable???

2009-08-05 Thread Euan Thoms
I created a clone from the most recent snapshot of a filesystem, the clone's 
parent filesystem was the same as the snapshot itself. When I did a rollback to 
a previous snapshot it erased my clone. Yes it was really stupid to keep the 
colne on the same filesystem, I was tired and wasn't thinking clearlt, new to 
this ZFS stuff. I did it in the web console gui, otherwise I would propably 
have had chance to think twice before using "zfs destroy -R ..." at command 
line.

Is there any way to recover a destroyed clone / snapshot. Is there any file 
carving / recovery tools for ZFS?

below is extract from zfs history:

2009-08-05.07:55:30 zfs snapshot data/var-...@screwed-01
2009-08-05.07:56:08 zfs clone data/var-...@screwed-01 
data/var-opt/screwed-01-clone
2009-08-05.07:56:40 zfs rollback -R -f data/var-...@patches-03

I want either data/var-...@screwed-01 or data/var-opt/screwed-01-clone

The boot environment saved me but ZFS snapshot/rollback cost me dearly, lost 
emails from 16 July onwards.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zdb CKSUM stats vary?

2009-08-05 Thread Tristan Ball
Can anyone tell me why successive runs of "zdb" would show very
different values for the cksum column? I had thought these counters were
"since last clear" but that doesn't appear to be the case?

If I run "zdb poolname", right at the end of the output, it lists pool
statistics:

capacity   operations   bandwidth  
errors 
descriptionused avail  read write  read write  read
write cksum
data  1.46T 7.63T   117 0 7.89M 0 0
0 9
  /dev/dsk/c0t21D0230F0298d0s0  150G  781G11 0  803K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d1s0  150G  781G11 0  791K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d2s0  150G  781G11 0  803K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d3s0  150G  781G11 0  807K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d6s0  150G  781G11 0  811K 0
0 0 2
  /dev/dsk/c0t21D0230F0298d7s0  150G  781G12 0  817K 0
0 0 4
  /dev/dsk/c0t21D0230F0298d8s0  150G  781G11 0  815K 0
0 0 4
  /dev/dsk/c0t21D0230F0298d9s0  150G  781G11 0  797K 0
0 014
  /dev/dsk/c0t21D0230F0298d10s0  150G  781G11 0  822K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d11s0  150G  781G11 0  814K 0
0 0 4

If I run it again:

capacity   operations   bandwidth  
errors 
descriptionused avail  read write  read write  read
write cksum
data  1.46T 7.63T   108 0 5.72M 0 0
0 3
  /dev/dsk/c0t21D0230F0298d0s0  150G  781G10 0  583K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d1s0  150G  781G10 0  570K 0
0 019
  /dev/dsk/c0t21D0230F0298d2s0  150G  781G11 0  596K 0
0 017
  /dev/dsk/c0t21D0230F0298d3s0  150G  781G11 0  597K 0
0 0 3
  /dev/dsk/c0t21D0230F0298d6s0  150G  781G10 0  591K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d7s0  150G  781G11 0  586K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d8s0  150G  781G10 0  591K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d9s0  150G  781G10 0  569K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d10s0  150G  781G10 0  586K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d11s0  150G  781G10 0  589K 0
0 0 2

If I run "zdb -vs data" I get:

   capacity   operations   bandwidth  
errors 
descriptionused avail  read write  read write  read
write cksum
data  1.46T 7.63T70 0 4.27M 0 0
0 0
  /dev/dsk/c0t21D0230F0298d0s0  150G  781G 8 0  526K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d1s0  150G  781G 6 0  385K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d2s0  150G  781G 6 0  385K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d3s0  150G  781G 6 0  413K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d6s0  150G  781G 8 0  522K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d7s0  150G  781G 8 0  550K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d8s0  150G  781G 6 0  385K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d9s0  150G  781G 6 0  377K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d10s0  150G  781G 6 0  404K 0
0 0 0
  /dev/dsk/c0t21D0230F0298d11s0  150G  781G 6 0  422K 0
0 0 0

A zpool status shows:

  pool: data
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool
can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
pool will no longer be accessible on older software versions.
 scrub: none requested
config:

NAME  STATE READ WRITE CKSUM
data  ONLINE   0 0 0
  c0t21D0230F0298d0   ONLINE   0 0 0
  c0t21D0230F0298d1   ONLINE   0 0 0
  c0t21D0230F0298d2   ONLINE   0 0 0
  c0t21D0230F0298d3   ONLINE   0 0 0
  c0t21D0230F0298d6   ONLINE   0 0 0
  c0t21D0230F0298d7   ONLINE   0 0 0
  c0t21D0230F0298d8   ONLINE   0 0 0
  c0t21D0230F0298d9   ONLINE   0 0 0
  c0t21D0230F0298d10  ONLINE   0 0 0
  c0t21D0230F0298d11  ONLINE   0 0 0



Thanks,
Tristan
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-05 Thread Henrik Johansen

Ross Walker wrote:

On Aug 4, 2009, at 10:17 PM, James Lever  wrote:



On 05/08/2009, at 11:41 AM, Ross Walker wrote:


What is your recipe for these?


There wasn't one! ;)

The drive I'm using is a Dell badged Samsung MCCOE50G5MPQ-0VAD3.


So the key is the drive needs to have the Dell badging to work?

I called my rep about getting a Dell badged SSD and he told me they  
didn't support those in MD series enclosures so therefore were  
unavailable.


If the Dell branded SSD's are Samsung's then you might want to search
the archives - if I remember correctly there were mentionings of
less-than-desired performance using them but I cannot recall the
details.



Maybe it's time for a new account rep.

-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Would ZFS will bring IO when the file is VERY short-lived?

2009-08-05 Thread Roch Bourbonnais


Le 5 août 09 à 06:06, Chookiex a écrit :


Hi All,
You know, ZFS afford a very Big buffer for write IO.
So, When we write a file, the first stage is put it to buffer.
But, if the file is VERY short-lived? Is it bring IO to disk?
or else, it just put the meta data and data to memory, and then  
removed it?





So with a workload of 'creat,write,close,unlink', I don't see ZFS or  
other filesystems issuing I/Os for the files.


-r




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




smime.p7s
Description: S/MIME cryptographic signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Pool iscsi /zfs performance in opensolaris 0906

2009-08-05 Thread Henrik Johansen

Ross Walker wrote:
On Aug 4, 2009, at 10:22 PM, Bob Friesenhahn > wrote:



On Tue, 4 Aug 2009, Ross Walker wrote:
Are you sure that it is faster than an SSD?  The data is indeed  
pushed closer to the disks, but there may be considerably more  
latency associated with getting that data into the controller  
NVRAM cache than there is into a dedicated slog SSD.


I don't see how, as the SSD is behind a controller it still must  
make it to the controller.


If you take a look at 'iostat -x' output you will see that the  
system knows about a queue for each device.  If it was any other  
way, then a slow device would slow down access to all of the other  
devices.  If there is concern about lack of bandwidth (PCI-E?) to  
the controller, then you can use a separate controller for the SSDs.


It's not bandwidth. Though with a lot of mirrors that does become a  
concern.


Well the duplexing benefit you mention does hold true. That's a  
complex real-world scenario that would be hard to benchmark in  
production.


But easy to see the effects of.


I actually meant to say, hard to bench out of production.

Tests done by others show a considerable NFS write speed advantage  
when using a dedicated slog SSD rather than a controller's NVRAM  
cache.


I get pretty good NFS write speeds with NVRAM (40MB/s 4k sequential  
write). It's a Dell PERC 6/e with 512MB onboard.


I get 47.9 MB/s (60.7 MB/s peak) here too (also with 512MB NVRAM),  
but that is not very good when the network is good for 100 MB/s.   
With an SSD, some other folks here are getting essentially network  
speed.


In testing with ram disks I was only able to get a max of around 60MB/ 
s with 4k block sizes, with 4 outstanding.


I can do 64k blocks now and get around 115MB/s.


I just ran some filebench microbenchmarks against my 10 Gbit testbox
which is a Dell R905, 4 x 2.5 Ghz AMD Quad Core CPU's and 64 GB RAM.

My current pool is comprised of 7 mirror vdevs (SATA disks), 2 Intel
X25-E as slogs and 1 Intel X25-M for the L2ARC.

The pool is a MD1000 array attached to a PERC 6/E using 2 SAS cables.

The nic's are ixgbe based.

Here are the numbers : 

Randomwrite benchmark - via 10Gbit NFS : 
IO Summary: 4483228 ops, 73981.2 ops/s, (0/73981 r/w) 578.0mb/s, 44us cpu/op, 0.0ms latency


Randomread benchmark - via 10Gbit NFS :
IO Summary: 7663903 ops, 126467.4 ops/s, (126467/0 r/w) 988.0mb/s, 5us cpu/op, 
0.0ms latency

The real question is if these numbers can be trusted - I am currently
preparing new test runs with other software to be able to do a
comparison. 

There is still bus and controller plus SSD latency. I suppose one  
could use a pair of disks as an slog mirror, enable NVRAM just for  
those and let the others do write-through with their disk caches


But this encounters the problem that when the NVRAM becomes full  
then you hit the wall of synchronous disk write performance.  With  
the SSD slog, the write log can be quite large and disk writes are  
then done in a much more efficient ordered fashion similar to non- 
sync writes.


Yes, you have a point there.

So, what SSD disks do you use?

-Ross


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


--
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss