Re: [zfs-discuss] Opensolaris with J4400 - Experiences

2009-12-14 Thread Trevor Pretty




Sorry if you got this twice, but I never saw it appear on the alias.







OK. Today I played with a J4400 connected to a Txxx server running S10 10/09.

First off, read the release notes. I spent about four hours pulling my hair out because I could not
get stmsboot to work, until we read in the release notes that 500GB SATA drives do not work!!!

Initial Setup:

A pair of dual port SAS controllers (c4 and c5)
A J4400 with 6x 1TB SATA disks

The J4400 had two controllers and these were connected to one SAS card (physical controller c4).

Test 1:

First a reboot -- -r

format shows 12 disks on c4 (each disk having two paths). If you picked the same disk via both
paths, ZFS stopped you doing stupid things because it knew the disk was already in use.

Test 2:

run stmsboot -e

format now shows six disks on controller c6, a new "virtual controller". The two internal disks are
also now on c6, and stmsboot has done the right stuff with the rpool, so I would guess you could
multi-path at a later date if you don't want to first off, but I did not test this.

stmsboot -L only showed the two internal disks, not the six in the J4400 - strange, but
we pressed on.
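
For anyone repeating this, a rough sketch of the stmsboot sequence described above (from memory -
check stmsboot(1M) first):

# stmsboot -e     # enable MPxIO multipathing; it updates the boot config and asks for a reboot
# init 6          # reboot so the new scsi_vhci ("c6"-style) device names take effect
# stmsboot -L     # list the mapping from the old per-path device names to the new multipathed names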

Test 3:

I created a zpool (two disks mirrored) using two of the new devices on c6.
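
Something like the following, with made-up c6 device names standing in for the real (much longer,
WWN-based) multipathed IDs:

# zpool create j4pool mirror c6t5000C5000D000001d0 c6t5000C5000D000002d0
# zpool status j4pool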

I created some I/O load.

I then unplugged one of the cables from the SAS card (physical c4).

Result: Nothing; everything just kept working - cool stuff!

Test 4:

I plugged the unplugged cable into the other controller (physical c5).


Result: Nothing; everything just kept working - cool stuff!

Test 5:

Being bold, I then unplugged the remaining cable from the physical c4 controller.


Result: Nothing; everything just kept working - cool stuff!

So I had gone from dual pathed on a single controller (c4) to single pathed on a
different controller (c5).



Test 6:

I added the other four drives to the zpool (plain old zfs stuff - a bit boring).


Test 7:

I plugged in four more disks.

Result: Their multipathed devices just showed up in format. I added them to the pool and also added
them as spares, all while the I/O load was still running. No noticeable stops
or glitches.
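
The grow-and-spare steps were just the usual zpool commands, roughly (again with made-up device names):

# zpool add j4pool mirror c6t5000C5000D000003d0 c6t5000C5000D000004d0   # Test 6: grow the pool
# zpool add j4pool spare c6t5000C5000D000005d0 c6t5000C5000D000006d0    # Test 7: new disks as hot spares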

Conclusion:

If you RTFM first then stmsboot does everything it is documented to do. You don't need to play with
cfgadm or anything like that, just as I said originally (below). The multi-pathing stuff is easy to
set up, and even a very rusty admin like me found it very easy.

Note: There may be patches for the 500GB SATA disks - I don't know; fortunately that's not what
I've sold - Phew!!

TTFN
Trevor











Trevor Pretty wrote:

  
  Karl
  
Don't you just use stmsboot?
  
  http://docs.sun.com/source/820-3223-14/SASMultipath.html#50511899_pgfId-1046940
  
Bruno
  
Next week I'm playing with a M3000 and a J4200 in the local NZ
distributor's lab. I had planned to just use the latest version of
S10, but if I get the time I might play with OpenSolaris as well, but I
don't think there is anything radically different between the two here.
  
From what I've read in preparation (and I stand to be corrected):
  
  
  
  * Will i be able to achieve multipath support, if i connect the 
  J4400 to 2 LSI HBA in one server, with SATA disks, or this is only 
  possible with SAS disks? This server will have OpenSolaris (any 
  release i think) . 

Disk type does not matter (see link above).

* The CAM ( StorageTek Common Array Manager ), its only for hardware 
  management of the JBOD, leaving 
  disk/volumes/zpools/luns/whatever_name management up to the server 
  operating system , correct ? 

That is my understanding; see: http://docs.sun.com/source/820-3765-11/

* Can i put some readzillas/writezillas in the j4400 along with sata 
  disks, and if so will i have any benefit  , or should i place 
  those *zillas directly into the servers disk tray? 

On the Unified Storage products they go in both: Readzillas in the server, Logzillas in the J4400. This is quite logical - if you want to move the array between hosts, all the data needs to be in the array. Read data can always be re-created, so the closer to the CPU the better. See: http://catalog.sun.com/

* Does any one has experiences with those jbods? If so, are they in 
  general solid/reliable ? 

No: But, get a support contract!

* The server will probably be a Sun x44xx series, with 32Gb ram, but 
  for the best possible performance, should i invest in more and 
  more spindles, or a couple less spindles and buy some readzillas? 
  This system will be mainly used to export some volumes over ISCSI 
  to a windows 2003 fileserver, and to hold some NFS shares. 

Check Brendan Gregg's blogs - *I think* he has done some work here, from memory.
 
 
  
  
  
  
  
  
  
  
  
  
  
Karl Katzke wrote:
  
Bruno - 

Sorry, I don't have experience with OpenSolaris, but I *do* have experience running a J4400 with Solaris 10u8. 

First off, you need a LSI HBA for the Multipath support. It won't work with any others a


Re: [zfs-discuss] Petabytes on a budget - blog

2009-12-03 Thread Trevor Pretty





Just thought I would let everybody know I saw one at a local ISP yesterday. They hadn't started
testing - the metal had only arrived the day before and they were waiting for the drives to arrive.
They had also changed the design to give it more network connectivity. I will try to find out
more as the customer progresses.


Interesting blog:
http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/







-- 



Trevor Pretty 
| Technical Account Manager
|
T: +64 9 639 0652 |
M: +64 21 666 161

Eagle Technology Group Ltd. 
Gate D, Alexandra Park, Greenlane West, Epsom

Private Bag 93211, Parnell, Auckland




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Opensolaris with J4400 - Experiences

2009-11-29 Thread Trevor Pretty




Karl

Don't you just use stmsboot?

http://docs.sun.com/source/820-3223-14/SASMultipath.html#50511899_pgfId-1046940

Bruno

Next week I'm playing with a M3000 and a J4200 in the local NZ
distributor's lab. I had planned to just use the latest version of
S10, but if I get the time I might play with OpenSolaris as well, but I
don't think there is anything radically different between the two here.

From what I've read in preparation (and I stand to be corrected):



* Will i be able to achieve multipath support, if i connect the 
  J4400 to 2 LSI HBA in one server, with SATA disks, or this is only 
  possible with SAS disks? This server will have OpenSolaris (any 
  release i think) . 

Disk type does not matter (see link above).

* The CAM ( StorageTek Common Array Manager ), its only for hardware 
  management of the JBOD, leaving 
  disk/volumes/zpools/luns/whatever_name management up to the server 
  operating system , correct ? 

That is my understanding; see: http://docs.sun.com/source/820-3765-11/

* Can i put some readzillas/writezillas in the j4400 along with sata 
  disks, and if so will i have any benefit  , or should i place 
  those *zillas directly into the servers disk tray? 

On the Unified Storage products they go in both: Readzillas in the server, Logzillas in the J4400. This is quite logical - if you want to move the array between hosts, all the data needs to be in the array. Read data can always be re-created, so the closer to the CPU the better. See: http://catalog.sun.com/

* Does any one has experiences with those jbods? If so, are they in 
  general solid/reliable ? 

No: But, get a support contract!

* The server will probably be a Sun x44xx series, with 32Gb ram, but 
  for the best possible performance, should i invest in more and 
  more spindles, or a couple less spindles and buy some readzillas? 
  This system will be mainly used to export some volumes over ISCSI 
  to a windows 2003 fileserver, and to hold some NFS shares. 

Check Brendan Gregg's blogs - *I think* he has done some work here, from memory.
 
 











Karl Katzke wrote:

  Bruno - 

Sorry, I don't have experience with OpenSolaris, but I *do* have experience running a J4400 with Solaris 10u8. 

First off, you need an LSI HBA for the multipath support. It won't work with any others as far as I know. 

I ran into problems with the multipath support because it wouldn't allow me to manage the disks with cfgadm, and it got very confused when I'd do something as silly as replace a disk, causing the disk's GUID (and therefore its address under the virtual multipath controller) to change. My take-away was that Solaris 10u8 multipath support is not ready for production environments as there are limited-to-no administration tools. This may have been fixed in recent builds of Nevada. (See a thread that started around 03Nov09 for my experiences with MPxIO.) 

At the moment, I have the J4400 split between the two controllers and simply have even numbered disks on one, and odd numbered disks on the other. Both controllers can *see* all the disks.

You are correct about the CAM software. It also updates the firmware, though, since we commoners don't seem to have access to the serial management ports on the J4400. 

I can't speak to locating the drives -- that would be something you'd have to test. I have found increases in performance on my faster and more random array; others have found exactly the opposite. 

My configuration is as follows; 
x4250
- rpool - 2x 146 gb 10k SAS
- 'hot' pool - 10x 300gb 10k SAS + 2x 32gb ZIL
j4400
- 'cold' pool - 12x 1tb 7200rpm SATA ... testing adding 2x 146gb SAS in the x4250, but haven't benchmarked yet. 

Performance on the J4400 was disappointing with just one controller to 12 disks in one RAIDZ2 and no ZIL. However, I do not know if the bottleneck was at the disk, controller, backplane, or software level... I'm too close to my deadline to do much besides randomly shotgunning different configs to see what works best! 

-K 


Karl Katzke
Systems Analyst II
TAMU - RGS



  
  

  
On 11/25/2009 at 11:13 AM, in message 4b0d65d6.4020...@epinfante.com, Bruno Sousa bso...@epinfante.com wrote:

Hello ! 
 
I'm currently using a X2200 with a LSI HBA connected to a Supermicro 
JBOD chassis, however i want to have more redundancy in the JBOD. 
So i have looked into to market, and into to the wallet, and i think 
that the Sun J4400 suits nicely to my goals. However i have some 
concerns and if anyone can give some suggestions i would trully appreciate. 
And now for my questions : 
 
* Will i be able to achieve multipath support, if i connect the 
  J4400 to 2 LSI HBA in one server, with SATA disks, or this is only 
  possible with SAS disks? This server will have OpenSolaris (any 
  release i think) . 
* The CAM ( StorageTek Common Array Manager ), its only for 

Re: [zfs-discuss] X45xx storage vs 7xxx Unified storage

2009-11-23 Thread Trevor Pretty








Len Zaifman wrote:

  Under these circumstances what advantage would a 7310 cluster over 2 X4540s backing each other up and splitting the load?
  

FISH! My wife could drive a 7310 :-)




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] flar and tar the best way to backup S10 ZFS only?

2009-11-23 Thread Trevor Pretty





I'm persuading a customer that when he goes to S10 he should use ZFS for everything. We only have
one M3000 and a J4200 connected to it. We are not talking about a massive site here with a SAN etc.
The M3000 is their "mainframe". His RTO and RPO are both about 12 hours; his business gets
difficult without the server but does not die horribly.

He currently uses ufsdump to tape each night, which is sent off site. However, "ufsrestore -i" has
saved his bacon in the past and he does not want to lose this "functionality".

A couple of questions.

flar seems to work with ZFS quite well and will back up the whole root pool - see flar(1M).

This seems to be the best way to get the equivalent of ufsrestore -r and a great way to recover in
a DR event:
http://www.sun.com/bigadmin/content/submitted/flash_archive.jsp
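
A minimal sketch of what I have in mind (pool, archive name and tape device are made up; see
flarcreate(1M)):

# zfs create tank/flar                                               # a non-root pool to hold the archives
# flarcreate -n s10-root -c /tank/flar/s10-root-`date +%Y%m%d`.flar
# cd /tank/flar && tar cvf /dev/rmt/0 .                              # then tar the archive area off to tape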

My Questions...

Q: Is there the equivalent of ufsrestore -i with flar? (flar seems to be an ugly shell script
around cpio or pax)

Q: Therefore should I have a tar of the root pool as well?

Q: There is no reason I cannot use flar on the other non root pools?

Q: Or is tar better for the non root pools?

We will have LOTS of disk space - his whole working dataset will easily fit onto an LTO4 - so can
anybody think of a good reason why you would not flar the root pool into another pool and then just
tar off this pool each night to tape? In fact we will have so much disk space (compared to now)
I expect we will be able to keep most backups on-line for quite some time.


Discuss :-)



-- 



Trevor Pretty 
| Technical Account Manager
|
T: +64 9 639 0652 |
M: +64 21 666 161

Eagle Technology Group Ltd. 
Gate D, Alexandra Park, Greenlane West, Epsom

Private Bag 93211, Parnell, Auckland




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris10 10/09 ZFS shared via CIFS?

2009-11-22 Thread Trevor Pretty






Tim Cook wrote:

  
  On Sun, Nov 22, 2009 at 4:18 PM, Trevor
Pretty trevor_pre...@eagle.co.nz
wrote:
  Team

I'm missing something? First off I normally play around with
OpenSolaris  it's been a while since I played with Solaris 10.

I'm doing all this via VirtualBox (Vista host) and I've set-up the
network (I believe) as I can ping, ssh and telnet from Vista into the
S10 virtual machine 192.168.56.101.

I've set sharesmb=on. But there seem to be none of the CIFS commands
you get in OpenSolaris, and when I point a file browser (or whatever
it's called in Windows) at \\192.168.56.101 I can't access it.

I would also expect a file name in .zfs/share like it says in the man
pages, but there is none.

What have I missed? RTFMs more than welcome :-)


Details.

bash-3.00# zfs get sharesmb sam_pool/backup
NAME             PROPERTY  VALUE  SOURCE
sam_pool/backup  sharesmb  on     local


bash-3.00# ls -al /sam_pool/backup/.zfs
total 3
dr-xr-xr-x  3 root   root  3 Aug 11 14:26 .
drwxr-xr-x  2 root   root  8 Aug 18 09:52 ..
dr-xr-xr-x  2 root   root  2 Aug 11 14:26 snapshot


bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff00
e1000g0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500 index 2
        inet 192.168.56.101 netmask ff00 broadcast 192.168.56.255
        ether 8:0:27:84:cb:f5


bash-3.00# cat /etc/release
  Solaris 10 10/09 s10x_u8wos_08a X86
Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
   Use is subject to license terms.
Assembled 16 September 2009

  
  
I thought I had heard forever ago that the native cifs implementation
wouldn't ever be put back to solaris10 due to the fact it makes
significant changes to the kernel. Maybe I'm crazy though.
  
I would think an ls would tell you if it was or not. Do you see this
output when you run a '/bin/ls -dV'?
  
root# /bin/ls -dV /
drwxr-xr-x 26 root root 35 Nov 15 10:58 /
 owner@:--:---:deny
 owner@:rwxp---A-W-Co-:---:allow
 group@:-w-p--:---:deny
 group@:r-x---:---:allow
 everyone@:-w-p---A-W-Co-:---:deny
 everyone@:r-x---a-R-c--s:---:allow

  
  
  
  
  
-- 
--Tim

Yep!

bash-3.00# /bin/ls -dV /
drwxr-xr-x 46 root root 63 Nov 23 11:41 /
 owner@:--:--:deny
 owner@:rwxp---A-W-Co-:--:allow
 group@:-w-p--:--:deny
 group@:r-x---:--:allow
 everyone@:-w-p---A-W-Co-:--:deny
 everyone@:r-x---a-R-c--s:--:allow
bash-3.00#

I think the server is in but not the client, but I can't find sharemgr either.







www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris10 10/09 ZFS shared via CIFS?

2009-11-22 Thread Trevor Pretty


OK, I've also got an S7000 simulator as a VM and it seems to have done what I 
would expect.


7000# zfs get sharesmb  pool-0/local/trevors_stuff/tlp
NAMEPROPERTY  VALUE   SOURCE
pool-0/local/trevors_stuff/tlp  sharesmb  name=trevors_stuff_tlp  
inherited from pool-0/local/trevors_stuff


7000# cd /var/ak/shares/web/export/tlp/.zfs
7000# ls shares/
trevors_stuff_tlp

It also has sharemgr which seems to be missing in S10.


Trevor Pretty wrote:

Team

I'm missing something?  First off I normally play around with 
OpenSolaris  it's been a while since I played with Solaris 10.


I'm doing all this via VirtualBox (Vista host) and I've set-up the 
network (I believe) as I can ping, ssh and telnet from Vista into the 
S10 virtual machine 192.168.56.101.


I've set sharesmb=on. But there seem to be none of the CIFS commands 
you get in OpenSolaris, and when I point a file browser (or whatever it's 
called in Windows) at \\192.168.56.101 I can't access it.


I would also expect a file name in .zfs/share like it says in the man 
pages, but there is none.


What have I missed? RTFMs more than welcome :-)


Details.

bash-3.00# zfs get sharesmb sam_pool/backup
NAME             PROPERTY  VALUE  SOURCE
sam_pool/backup  sharesmb  on     local


bash-3.00# ls -al /sam_pool/backup/.zfs
total 3
dr-xr-xr-x   3 root root   3 Aug 11 14:26 .
drwxr-xr-x   2 root root   8 Aug 18 09:52 ..
dr-xr-xr-x   2 root root   2 Aug 11 14:26 snapshot


bash-3.00# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff00
e1000g0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500 index 2
        inet 192.168.56.101 netmask ff00 broadcast 192.168.56.255
        ether 8:0:27:84:cb:f5


bash-3.00# cat /etc/release
   Solaris 10 10/09 s10x_u8wos_08a X86
   Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
   Assembled 16 September 2009




  


===
www.eagle.co.nz 


This email is confidential and may be legally privileged.
If received in error please destroy and immediately notify us.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solaris10 10/09 ZFS shared via CIFS?

2009-11-22 Thread Trevor Pretty





Thanks old friend

I was surprised to read in the S10 zfs man page that there was the option sharesmb=on.
I thought I had missed the CIFS server making it into S10 while I was not looking, but I was
quickly coming to the conclusion that the CIFS stuff was just not there, despite being tantalised
by the man pages :-).

I wish the man page only listed options that actually work! It would have saved me a couple of
hours of buggering around.

Trevor

Peter Karlsson wrote:

  Hi Trevor,

The native CIFS/SMB stuff was never backported to S10, so you would have 
to use the Samba on your S10 vm

Cheers,
Peter
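
For the record, a minimal sketch of the Samba route on S10 (the config path and SMF service name
are from memory, so check your update release):

# cat >> /etc/sfw/smb.conf <<EOF
[backup]
   path = /sam_pool/backup
   read only = no
EOF
# svcadm enable svc:/network/samba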

  




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs eradication

2009-11-10 Thread Trevor Pretty





Excuse me for mentioning it but why not just use the format command?


format(1M)

  analyze
  
Run read, write, compare tests, and data
purge. The data purge
function implements the National Computer Security Center Guide to
Understanding Data Remanence (NCSC-TG-025 version 2) Overwriting
Algorithm. See NOTES.
  
  
The NCSC-TG-025 algorithm for overwriting
meets the DoD 5200.28-M (ADP
Security Manual) Eraser Procedures specification. The NIST Guidelines
for Media Sanitization (NIST SP 800-88)
also reference this algorithm.
  


And if the disk is buggered (a very technical term), a great big hammer!
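
For a disk that still works, the purge is roughly this (it is interactive, and takes a long time
on a big disk):

# format
format> disk        # select the disk to be sanitized
format> analyze
analyze> purge      # runs the NCSC-TG-025 overwrite passes - destroys everything on the disk
analyze> quit
format> quit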


Mark A. Carlson wrote:

  
  Typically this is called "Sanitization" and could be
done as part of 
an evacuation of data from the disk in preparation for removal.
  
You would want to specify the patterns to write and the number of
passes.
  
-- mark
  
Brian Kolaci wrote:
  Hi, 

I was discussing the common practice of disk eradication used by many
firms for security. I was thinking this may be a useful feature of ZFS
to have an option to eradicate data as its removed, meaning after the
last reference/snapshot is done and a block is freed, then write the
eradication patterns back to the removed blocks. 

By any chance, has this been discussed or considered before? 

Thanks, 

Brian 
___ 
zfs-discuss mailing list 
zfs-discuss@opensolaris.org

http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

  
  
  -- 
  
  

  

 Mark A. Carlson 
Sr. Architect

Systems Group
Phone x69559 / 303-223-6139
Email mark.carl...@sun.com


  

  
  
  
  




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (home NAS) zfs and spinning down of drives

2009-11-04 Thread Trevor Pretty




Jim

You've been able to spin down drives since about Solaris 8.







http://www.sun.com/bigadmin/features/articles/disk_power_saving.jsp
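
The short version from that article, as far as I remember it (the device path is an example only):

# cat >> /etc/power.conf <<EOF
autopm              enable
device-thresholds   /dev/dsk/c1t2d0   30m
EOF
# pmconfig          # re-read /etc/power.conf; 30m = spin the disk down after 30 idle minutes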

Jim Klimov wrote:

  Hello all.

Like many others, I've come close to making a home NAS server based on 
ZFS and OpenSolaris. While this is not an enterprise solution with high IOPS 
expectation, but rather a low-power system for storing everything I have,
I plan on cramming in some 6-10 5400RPM "Green" drives with low wattage 
and high capacity, and possibly an SSD or two (or one-two spinning disks) 
for Read/Write caching/logging.

However, having all the drives spinning (with little actual usage for 99% 
of the data at any given time) will get inefficient for power bills. An apparent
solution is to use very few active devices, and idle or spin down the other disks
until their data is actually accessed - and minimize the frequency of such 
requests by efficient caching, while transparently maintaining the ease of
use of a single ZFS pool. This was all recognized, considered and discussed 
before me, but I have yet to find any definite answers on my questions below :)

I've read a number of blogs and threads on ZFS support for spinning down
unused disks, and for deferring metadata updates to a few always-active
devices. Some threads also discuss hacks to spin up drives of a ZFS pool
in parallel, to reduce latency when accessing their data initially after a
spin-down. There were also hack suggestions to keep only a few devices
requiring active power for writes, i.e. adding a mirror to a pool when its
free space is about to end, so new writes go only to a couple of new disks -
effectively making the pool a growing concat device and losing benefits
of parallel read/writes over all disks at once.

There were many answers and ideas to digest, but some questions I have 
remaining are:

1) What is the real situation now? Are such solutions still some home-made
hacks or commercial-only solutions, or did they integrate into commonly and 
freely available OpenSolaris source code and binaries?

2) Can the same SSD (single or a mirrored couple) be used for read and write
logging, i.e. L2ARC and ZIL? Is that going to be efficient anyhow? Should their
size be preallocated (i.e. as partitions on SSD), or can both L2ARC and ZIL use 
all of the free space on a shared SSD?

3) For a real-life situation, say, I'm going to watch a movie off this home NAS
over CIFS or via local XWindows session, and the movie's file size is small
enough to fit in ARC (RAM) or L2ARC (SSD). Can I set up the system in such
a manner (and using freely available software) that the idle drives of the pool
spin up, read the whole movie's file into a cache, and spin down - and for the 2
hours that the movie goes, these drives don't rotate at all, and only the cache
devices, RAM and CPU consume power?

On a counter situation, is it possible to upload a few files to such a pool so 
that they fit into the single (mirrored) active non-volatile write-cache device, 
and the larger drive sets won't spin up at all until the write cache becomes full 
and needs to spill over to disks?

Would such scenarios require special hacks and scripts, or do they already
work as I envisioned above - out of the box?

What is a typical overhead noted by home-NAS ZFS enthusiasts?
I.e. for a 4Gb movie to be prefetched and watched from cache, how large 
should the cache device be?

4) For a cheap and not blazing-fast home-user solution, the expensive SSDs
(for L2ARC and/or ZIL roles, with spun-down large disks waiting for occasional
rare requests) can consume half the monetary budget for the server. Can SSDs
be replaced by commodity USB/CF flash devices, or by dedicated spinning rust -
with a single/mirrored spindle consuming power instead of the whole dozen?

5) Some threads mentioned hierarchical storage management, such as 
SAMFS/QFS, as a means to keep recently-requested/written data on some
active devices and later destage it to rarely-spun drives emulating a tape
array, and represent the whole lot as a single POSIX filesystem. 

Is any of SAMFS/QFS (or similar solution) available for free? 

Is it needed in my case, or current ZFS implementation with HDDs+L2ARC+ZIL
covers this aspect of HSM already?

If not, can a ZFS pool with multiple datasets be created inside a HSM volume, 
so that I have the flexibility of ZFS and offline-storage capabilities of HSM?

--

Thanks for any replies, including statements that my ideas are insane or my
views are outdated ;) But constructive ones are more appreciated ;)
//Jim
  




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs mount error

2009-11-03 Thread Trevor Pretty




Ramin

I don't know, but...

Is the error not from mount, and it's /export/home that can't be created?

"mount '/export/home': failed to create mountpoint."

Have you tried mounting 'rpool/export' somewhere else, like /mnt?
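
In command form, something like this (assuming the pool itself imports OK, perhaps from failsafe
or the install media):

# zfs set mountpoint=/mnt rpool/export      # park it somewhere that can actually be created
# zfs mount rpool/export
# find /mnt -type l -ls                     # hunt down the symlink loop and remove it
# zfs set mountpoint=/export rpool/export   # then put the mountpoint back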

Ramin Moazeni wrote:

  Hello

A customer recently had a power outage.  Prior to the outage, they did a 
graceful shutdown of their system.
On power-up, the system is not coming up due to zfs errors as follows:
cannot mount 'rpool/export': Number of symbolic links encountered during 
path name traversal exceeds MAXSYMLINKS
mount '/export/home': failed to create mountpoint.

The possible cause of this might be that a symlink is created pointing 
to itself since the customer stated
that they created lots of symlink to get their env ready. However, since 
/export is not getting mounted, they
can not go back and delete/fix the symlinks.

Can someone suggest a way to fix this issue?

Thanks
Ramin Moazeni
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on multiple machines

2009-11-03 Thread Trevor Pretty




Miha 
If you do want multi-reader,
multi-writer block access (and not use iSCSI) then QFS is what you
want. 

http://www.sun.com/storage/management_software/data_management/qfs/features.xml

You can use ZFS pools as lumps of disk under SAM-QFS:
https://blogs.communication.utexas.edu/groups/techteam/weblog/5e700/ 

I successfully mocked this up on VirtualBox on my laptop for a customer.

Trevor


Darren J Moffat wrote:

  Miha Voncina wrote:
  
  
Hi,

is it possible to link multiple machines into one storage pool using zfs?

  
  
Depends what you mean by this.

Multiple machines can not import the same ZFS pool at the same time, 
doing so *will* cause corruption and ZFS tries hard to protect against 
multiple imports.

However ZFS can use iSCSI LUNs from multiple target machines for its 
disks that make up a given pool.

ZFS volumes (ZVOLS) can also be used as iSCSI targets and thus shared 
out to multiple machines.
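
For example, on builds of that vintage a ZVOL could be exported with the legacy shareiscsi
property (newer builds use COMSTAR instead); a rough sketch:

# zfs create -V 100g tank/vol1        # a 100GB ZFS volume
# zfs set shareiscsi=on tank/vol1     # export it as an iSCSI target via the legacy iscsitgt daemon
# iscsitadm list target               # confirm the target was created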

ZFS file systems can be shared over NFS and CIFS and thus shared by 
multiple machines.

ZFS pools can be used in a Sun Cluster configuration but will only 
imported into a single node of a Sun Cluster configuration at a time.

--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] dedupe is in

2009-11-03 Thread Trevor Pretty






Darren J Moffat wrote:

  Orvar Korvar wrote:
  
  
I was under the impression that you can create a new zfs dataset and turn on the dedup functionality, and copy your data to it. Or am I wrong?

  
  
you don't even have to create a new dataset just do:

# zfs set dedup=on dataset
  

But like all ZFS functions, will that not only get applied when you (re)write (old or new) data,
like compression=on?

Which leads to the question: would a scrub activate dedup?
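
My understanding (worth checking): like compression, dedup only applies to blocks written after it
is turned on, and a scrub only reads and verifies existing blocks - it does not rewrite them. To
dedup existing data you have to rewrite it, e.g.:

# zfs set dedup=on tank/data                         # only blocks written from now on are deduplicated
# zfs snapshot tank/data@mig
# zfs send tank/data@mig | zfs recv tank/data_dedup  # rewriting the data (send/recv, or a plain copy)
#                                                    # is what gets the old blocks into the dedup table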







www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] RBAC GUI (was Re: automate zpool scrub)

2009-11-01 Thread Trevor Pretty





What "root user" would that be then? "root" is just a role by default
in OpenSolaris.

Now sit down the next bit will come as a shock.

    Go to Systems - Administration - User and Groups 

    Select a user and click the properties button that un-greys

    You can give the user profiles and roles!!

I know scary stuff. Scared me when I found it the other day :-)

Although the help is not very helpful and seems to be written by
somebody in something close to, but not quite English. It also seems to
have been written by somebody who is looking at a different interface
than me, because I can't see how you are suppose to add or modify a
profile, and roles are not even mentioned. 








3.7. To create new profile

For opening the profiles window, you must press the Edit user profiles
that is inside the new users window, then press the Add button, a new
window will appear asking you for the new profile data. For creating a
new profile, you must at least provide the profile name, the default
home directory, the default shell and the default maximum/minimum
user/group ID.

If you want to replace any part of the default home directory with the
user name, you can use the $user keyword (i.e.: /home/$user).

BTW: After much hunting: Add User - Advanced tab - Edit Users Profiles - Add Profile.

And when you get through the maze to add a new profile it talks about "privileges", which seems to
be the same list as "profiles". How anybody who does not understand RBAC is supposed to use this
is beyond me.

Oh well, can't have everything. Rome was not built in a day.



Enrico Maria Crisostomo wrote:

  Glad it helped you.

As far as it concerns your observation about the root user, please
take into account that Solaris Role Based Access control lets you fine
tune privileges you grant to users: your "ZFS administrator" needs not
be root. Specifically, if you have a look at your /etc/prof_attr and
/etc/exec_attr, you'll notice that there exist two profiles: ZFS
Storage Management and ZFS File System Management:

exec_attr:ZFS File System Management:solaris:cmd:::/sbin/zfs:euid=0
exec_attr:ZFS Storage Management:solaris:cmd:::/sbin/zpool:uid=0

You can run the zfs and zpool commands from a "mortal user" account
with pfexec if such a user is associated with the corresponding
profile.
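
In other words, something like this (user name made up):

# usermod -P 'ZFS File System Management,ZFS Storage Management' fred
$ pfexec zfs create tank/home/fred      # run by fred - no root role needed
$ pfexec zpool scrub tank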

Bye,
Enrico

On Sun, Nov 1, 2009 at 9:03 PM, Vano Beridze vanua...@gmail.com wrote:
  
  
I've looked at man cron and found out that I can modify /etc/default/cron file to set PATH that is defaulted for /usr/bin for mortal users and /usr/bin:/usr/sbin for root.

I did not change /etc/default/cron file, instead I've indicated full path in my crontab file.

Ethically speaking I guess scrubbing filesystem weekly is an administrative task and it's more applicable to root user, So If I had created crontab job for root user the whole PATH problem would not arise.

Anyways it's my desktop so I'm the man and woman in here and there is no big difference what user's crontab will do the job. :)
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


  
  


  




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs code and fishworks fork

2009-10-27 Thread Trevor Pretty






Bruno Sousa wrote:

  
  
Hi,
  
I can agree that the software is the one that really has the added
value, but to my opinion allowing a stack like Fishworks to run outside
the Sun Unified Storage would lead to lower price per unit(Fishwork
license) but maybe increase revenue. Why an increase in revenues? Well,
i assume that alot of customers would buy the Fishworks to put into
they XYZ high-end server.

But in Bryan's blog.. http://blogs.sun.com/bmc/date/200811

"but one that also embedded an apt acronym: "FISH", Mike explained,
stood for "fully-integrated software and hardware" -- which is exactly
what we wanted to go build. I agreed that it captured us perfectly --
and Fishworks was born."

Bruno I agree it would be great to have this sort of BUI on
OpenSolaris, for example it makes CIFS integration in a AD/Windows shop
a breeze, even I got it to work in a couple of minutes, but this would
not be FISH. 

What the Fishworks team have shown is that Sun can make an admin GUI that is easy to use if they
have a goal. Perhaps Oracle will help, but I see more lost sales of Solaris due to it being
"difficult to manage" than any other reason. We may all not like MS Windows, but you can't say
it's not easy to use. Compare its RBAC implementation with Solaris's. One is a straightforward
tick-box GUI (admittedly not very extensible as far as I can see), the other a complete nightmare
of files that need editing with vi! Guess which one is used the most? 

OpenSolaris is getting there, but 99% of all Sun's customers never see
it as they are on Solaris 10. I recently bought a laptop just to run
OpenSolaris and most things "just work"; it's my preferred desktop at
home, but it still only does the simple stuff that Mac and Windows have
done for years. Using any of the advance features however requires a
degree in Systems Engineering. 

Ever wondered what makes Apple so successful? Apple makes FISH.






www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs inotify?

2009-10-26 Thread Trevor Pretty




Paul

Being a script hacker like you, the only kludge I can think of is a script that does something like:

ls > /tmp/foo
sleep 60     # (pick an interval)
ls > /tmp/foo.new
diff /tmp/foo /tmp/foo.new > /tmp/files_that_have_changed
mv /tmp/foo.new /tmp/foo

Or you might be able to knock something up with bart and zfs snapshots. I did write this, which
may help:

#!/bin/sh

#set -x

# Note: No implied warranty etc. applies. 
# Don't cry if it does not work. I'm an SE not a programmer!
#
###
#
# Version 29th Jan. 2009
#
# GOAL: Show what files have changed between snapshots
#
# But of course it could be any two directories!!
#
###
#

## Set some variables
#
SCRIPT_NAME=$0
FILESYSTEM=$1
SNAPSHOT=$2
FILESYSTEM_BART_FILE=/tmp/filesystem.$$
SNAPSHOT_BART_FILE=/tmp/snapshot.$$
CHANGED_FILES=/tmp/changes.$$


## Declare some commands (just in case PATH is wrong, like cron)
#
BART=/bin/bart


## Usage
# 
Usage()
{
 echo ""
 echo ""
 echo "Usage: $SCRIPT_NAME -q filesystem snapshot "
 echo ""
 echo " -q will stop all echos and just list the changes"
  echo ""
 echo "Examples"
 echo " $SCRIPT_NAME /home/fred /home/.zfs/snapshot/fred "
 echo " $SCRIPT_NAME . /home/.zfs/snapshot/fred
" 
  echo ""
 echo ""
 exit 1
}

### Main Part ###


## Check Usage
#
if [ $# -ne 2 ]; then
 Usage
fi

## Check we have different directories
#
if [ "$1" = "$2" ]; then
 Usage
fi


## Handle dot
#
if [ "$FILESYSTEM" = "." ]; then
 cd $FILESYSTEM ; FILESYSTEM=`pwd`
fi
if [ "$SNAPSHOT" = "." ]; then
 cd $SNAPSHOT ; SNAPSHOT=`pwd`
fi

## Check the filesystems exists It should be a directory
# and it should have some files
#
for FS in "$FILESYSTEM" "$SNAPSHOT"
do
 if [ ! -d "$FS" ]; then
  echo ""
  echo "ERROR file system $FS does not exist"
  echo ""
  exit 1
 fi 
 if [ X"`/bin/ls "$FS"`" = "X" ]; then
  echo ""
  echo "ERROR file system $FS seems to be empty"
  exit 1
  echo ""
 fi
done



## Create the bart files
#

echo ""
echo "Creating bart file for $FILESYSTEM can take a while.."
cd "$FILESYSTEM" ; $BART create -R .  $FILESYSTEM_BART_FILE
echo ""
echo "Creating bart file for $SNAPSHOT can take a while.."
cd "$SNAPSHOT" ; $BART create -R .  $SNAPSHOT_BART_FILE


## Compare them and report the diff
#
echo ""
echo "Changes"
echo ""
$BART compare -p $FILESYSTEM_BART_FILE $SNAPSHOT_BART_FILE | awk '{print $1}' > $CHANGED_FILES
/bin/more $CHANGED_FILES
echo ""
echo ""
echo ""

## Tidy kiwi
#
/bin/rm $FILESYSTEM_BART_FILE
/bin/rm $SNAPSHOT_BART_FILE
/bin/rm $CHANGED_FILES

exit 0





Paul Archer wrote:

  5:12pm, Cyril Plisko wrote:

  
  

  Question: Is there a facility similar to inotify that I can use to monitor a
directory structure in OpenSolaris/ZFS, such that it will block until a file
is modified (added, deleted, etc), and then pass the state along (STDOUT is
fine)? One other requirement: inotify can handle subdirectories being added
on the fly. So if you use it to monitor, for example, /data/images/incoming,
and a /data/images/incoming/100canon directory gets created, then the files
under that directory will automatically be monitored as well.
  


while there is no inotify for Solaris, there are similar technologies available.

Check port_create(3C) and gam_server(1)


  
  I can't find much on gam_server on Solaris (couldn't find too much on it at 
all, really), and port_create is apparently a system call. (I'm not a 
developer--if I can't write it in BASH, Perl, or Ruby, I can't write it.)
I appreciate the suggestions, but I need something a little more pret-a-porte.

Does anyone have any dtrace experience? I figure this could probably be done 
with dtrace, but I don't know enough about it to write a dtrace script 
(although I may learn if that turns out to be the best way to go). I was 
hoping that there'd be a script out there already, but I haven't turned up 
anything yet.

Paul
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk locating in OpenSolaris/Solaris 10

2009-10-21 Thread Trevor Pretty





have a look at this thread:-
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-September/032349.html

we discussed this a while back.



SHOUJIN WANG wrote:

  Hi there,
What I am tring to do is: Build a NAS storage server based on the following hardware architecture:
Server--SAS HBA---SAS JBOD
I plugin 2 SAS HBA cards into a X86 box, I also have 2 SAS I/O Modules on SAS JBOD. From each HBA card, I have one SAS cable which connects to SAS JBOD. 
Configured MPT successfully on server, I can see the single multipahted disks likes the following:
r...@super01:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
   0. c0t5000C5000D34BEDFd0 <SEAGATE-ST31000640SS-0001-931.51GB>
      /scsi_vhci/d...@g5000c5000d34bedf
   1. c0t5000C5000D34BF37d0 <SEAGATE-ST31000640SS-0001-931.51GB>
      /scsi_vhci/d...@g5000c5000d34bf37
   2. c0t5000C5000D34C727d0 <SEAGATE-ST31000640SS-0001-931.51GB>
      /scsi_vhci/d...@g5000c5000d34c727
   3. c0t5000C5000D34D0C7d0 <SEAGATE-ST31000640SS-0001-931.51GB>
      /scsi_vhci/d...@g5000c5000d34d0c7
   4. c0t5000C5000D34D85Bd0 <SEAGATE-ST31000640SS-0001-931.51GB>
      /scsi_vhci/d...@g5000c5000d34d85b

The problem is: if one of disks failed, I don't know how to locate the disk in chasiss. It is diffcult for failed disk replacement.

Is there any utility in opensoalris which can be used to locate/blink the failed disk(or do we have any michanism to implement the SES command in bond of SAS)? Or do we have a tool to map the multipathing device ID to the original single pathing device ID likes the following?

 c0t5000C5000D34BF37d0
   |- c2t0d0
   \- c3t0d0

Regards,
Autumn Wang.
  




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Slow reads with ZFS+NFS

2009-10-20 Thread Trevor Pretty




Gary

Were you measuring the Linux NFS write performance? It's well known
that Linux can use NFS in a very "unsafe" mode and report the write
complete when it is not all the way to safe storage. This is often
reported as Solaris having slow NFS write performance. This link does not
mention NFS v4, but you might want to check: http://nfs.sourceforge.net/

What's the write performance like between the two OpenSolaris systems?


Richard Elling wrote:

  cross-posting to nfs-discuss

On Oct 20, 2009, at 10:35 AM, Gary Gogick wrote:

  
  
Heya all,

I'm working on testing ZFS with NFS, and I could use some guidance -  
read speeds are a bit less than I expected.

Over a gig-e line, we're seeing ~30 MB/s reads on average - doesn't  
seem to matter if we're doing large numbers of small files or small  
numbers of large files, the speed seems to top out there.  We've  
disabled pre-fetching, which may be having some affect on read  
speads, but proved necessary due to severe performance issues on  
database reads with it enabled.  (Reading from the DB with pre- 
fetching enabled was taking 4-5 times as long than with it disabled.)

  
  
What is the performance when reading locally (eliminate NFS from the  
equation)?
  -- richard

  
  
Write speed seems to be fine.  Testing is showing ~95 MB/s, which  
seems pretty decent considering there's been no real network tuning  
done.

The NFS server we're testing is a Sun x4500, configured with a  
storage pool consisting of 20x 2-disk mirrors, using separate SSD  
for logging.  It's running the latest version of Nexenta Core.   
(We've also got a second x4500 in with a raidZ2 config, running  
OpenSolaris proper, showing the same issues with reads.)

We're using NFS v4 via TCP, serving various Linux clients (the  
majority are  CentOS 5.3).  Connectivity is presently provided by a  
single gigabit ethernet link; entirely conventional configuration  
(no jumbo frames/etc).

Our workload is pretty read heavy; we're serving both website assets  
and databases via NFS.  The majority of files being served are small  
( 1MB).  The databases are MySQL/InnoDB, with the data in separate  
zfs filesystems with a record size of 16k.  The website assets/etc.  
are in zfs filesystems with the default record size.  On the  
database server side of things, we've disabled InnoDB's double write  
buffer.

I'm wondering if there's any other tuning that'd be a good idea for  
ZFS in this situation, or if there's some NFS tuning that should be  
done when dealing specifically with ZFS.  Any advice would be  
greatly appreciated.

Thanks,

-- 
--
Gary Gogick
senior systems administrator  |  workhabit,inc.

// email: g...@workhabit.com  |  web: http://www.workhabit.com
// office: 866-workhabit  | fax: 919-552-9690

--
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

  
  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun Flash Accelerator F20

2009-10-20 Thread Trevor Pretty




 
Richard Elling wrote:

  

I think where we stand today, the higher-level systems questions of
redundancy tend to work against builtin cards like the F20. These
sorts of cards have been available in one form or another for more
than 20 years, and yet they still have limited market share -- not
because they are fast, but because the other limitations carry more
weight. If the stars align and redundancy above the block layer gets
more popular, then we might see this sort of functionality implemented
directly on the mobo... at which point we can revisit the notion of file
system. Previous efforts to do this (eg Virident) haven't demonstrated
stellar market movement.
  -- richard
  

Richard

You mean presto-serve :-) Putting data on local NVRAM in the server layer was a bad idea 20 years
ago for a lot of applications. The reasons haven't changed in all those years!

For those who may not have been around in the "good old days" when 1 to 16 MB of NVRAM on an
S-bus card was a good idea - or not:
http://docs.sun.com/app/docs/doc/801-7289/6i1jv4t2s?a=view

Trevor

  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Slow reads with ZFS+NFS

2009-10-20 Thread Trevor Pretty





No, it concerns the difference between reads and writes.

The write performance may be being overstated!







Ross Walker wrote:

  
But this is concerning reads not writes.
  
  
  -Ross
  
  
  
  
On Oct 20, 2009, at 4:43 PM, Trevor Pretty trevor_pre...@eagle.co.nz
wrote:
  
  
  
Gary

Were you measuring the Linux NFS write performance? It's well known
that Linux can use NFS in a very "unsafe" mode and report the write
complete when it is not all the way to safe storage. This is often
reported as Solaris having slow NFS write performance. This link does not
mention NFS v4, but you might want to check: http://nfs.sourceforge.net/

What's the write performance like between the two OpenSolaris systems?


Richard Elling wrote:

  cross-posting to nfs-discuss

On Oct 20, 2009, at 10:35 AM, Gary Gogick wrote:

  
  
Heya all,

I'm working on testing ZFS with NFS, and I could use some guidance -  
read speeds are a bit less than I expected.

Over a gig-e line, we're seeing ~30 MB/s reads on average - doesn't  
seem to matter if we're doing large numbers of small files or small  
numbers of large files, the speed seems to top out there.  We've  
disabled pre-fetching, which may be having some affect on read  
speads, but proved necessary due to severe performance issues on  
database reads with it enabled.  (Reading from the DB with pre- 
fetching enabled was taking 4-5 times as long than with it disabled.)

  
  
What is the performance when reading locally (eliminate NFS from the  
equation)?
  -- richard

  
  
Write speed seems to be fine.  Testing is showing ~95 MB/s, which  
seems pretty decent considering there's been no real network tuning  
done.

The NFS server we're testing is a Sun x4500, configured with a  
storage pool consisting of 20x 2-disk mirrors, using separate SSD  
for logging.  It's running the latest version of Nexenta Core.   
(We've also got a second x4500 in with a raidZ2 config, running  
OpenSolaris proper, showing the same issues with reads.)

We're using NFS v4 via TCP, serving various Linux clients (the  
majority are  CentOS 5.3).  Connectivity is presently provided by a  
single gigabit ethernet link; entirely conventional configuration  
(no jumbo frames/etc).

Our workload is pretty read heavy; we're serving both website assets  
and databases via NFS.  The majority of files being served are small  
( 1MB).  The databases are MySQL/InnoDB, with the data in separate  
zfs filesystems with a record size of 16k.  The website assets/etc.  
are in zfs filesystems with the default record size.  On the  
database server side of things, we've disabled InnoDB's double write  
buffer.

I'm wondering if there's any other tuning that'd be a good idea for  
ZFS in this situation, or if there's some NFS tuning that should be  
done when dealing specifically with ZFS.  Any advice would be  
greatly appreciated.

Thanks,

-- 
--
Gary Gogick
senior systems administrator  |  workhabit,inc.

// email: g...@workhabit.com  |  web: http://www.workhabit.com
// office: 866-workhabit  | fax: 919-552-9690

--
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

  
  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  



www.eagle.co.nz 

This email is confidential and may
be legally privileged. If received in error please destroy and
immediately notify us.

  
  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

  




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] fishworks on x4275?

2009-10-18 Thread Trevor Pretty




Frank

I've been looking into:-
http://www.nexenta.com/corp/index.php?option=com_content&task=blogsection&id=4&Itemid=128

Only played with a VM so far on my laptop, but it does seem to be an alternative to the Sun
product if you don't want to buy an S7000.

IMHO: Sun are missing a great opportunity by not offering a reasonable upgrade path from an
X series box to an S7000.







Trevor Pretty 
| Technical Account Manager
|
T: +64 9 639 0652 |
M: +64 21 666 161

Eagle Technology Group Ltd. 
Gate D, Alexandra Park, Greenlane West, Epsom

Private Bag 93211, Parnell, Auckland



Frank Cusack wrote:

  Apologies if this has been covered before, I couldn't find anything
in my searching.

Can the software which runs on the 7000 series servers be installed
on an x4275?

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Interesting performance comparison

2009-10-15 Thread Trevor Pretty






Sorry: Pointless and a waste of time until we get some detail!

http://fsbench.filesystems.org/papers/cheating.pdf 









Cyril Plisko wrote:

  Hello !

There is an interesting performance comparison of three popular
operating systems [1], I thought it could be of interest for people
hanging on this list.

These guys used their tool FlexTk as a benchmark engine, which is not
a benchmark tool per se, but it still provides interesting data.

The paper doesn't mention it, but AFAIU, OpenSolaris was configured
with Samba, rather than with native CIFS server.

I find their approach of avoiding any performance tweaks quite
interesting, as it gives a fair estimation of what an average user can
expect out of the box.

Anyway, would be interesting to know what people think about it.


P.S. For the sake of full disclosure I must say that I know personally
Flexense people.

[1] http://www.flexense.com/documents/nas_performance_comparison.pdf

  




www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS disk failure question

2009-10-14 Thread Trevor Pretty




Cindy

How does the SS7000 do it? 

Today I demoed pulling a disk and the spare just automatically became
part of the pool. After it was re-silvered I then pulled three more
(latest Q3 version with triple RAID-Z). I then plugged all the drives
back in (different slots) and everything was back to normal. 

Being nosey, I also had a shell running zpool status in a while
loop whilst "practising" this little stunt, but I was not watching to see
what commands the appliance was issuing. I even had brain fade and pulled
all four at once - Doh! The S7000 recovered however once I plugged the disks
back in and rebooted (sweaty palms time :-) ).
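
(For the record, the watch loop was nothing cleverer than something along
these lines; the exact interval is from memory:

  while true; do zpool status; sleep 5; done
)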

Unfortunately my borrowing time is up and it's now in a box on the way
back to my local distributor otherwise I would poke around more.

Trevor










Cindy Swearingen wrote:

  I think it is difficult to cover all the possible ways to replace
a disk with a spare.

This example in the ZFS Admin Guide didn't work for me:

http://docs.sun.com/app/docs/doc/819-5461/gcvcw?a=view

See the manual replacement example. After the zpool detach and
zpool replace operations, the spare is not removed from the
spare pool. It's in some unknown state. I'll fix this.

Cindy

On 10/14/09 15:26, Jason Frank wrote:
  
  
Thank you, that did the trick.  That's not terribly obvious from the
man page though.  The man page says it detaches the devices from a
mirror, and I had a raidz2.  Since I'm messing with production data, I
decided I wasn't going to chance it when I was reading the man page.
You might consider changing the man page, and explaining a little more
what it means, maybe even what the circumstances look like where you
might use it.

Actually, an official and easily searchable "What to do when you have
a zfs disk failure" with lots of examples would be great.  There are a
lot of attempts out there, but nothing I've found is comprehensive.

Jason

On Wed, Oct 14, 2009 at 4:23 PM, Eric Schrock eric.schr...@sun.com wrote:


  On 10/14/09 14:17, Cindy Swearingen wrote:
  
  
Hi Jason,

I think you are asking how do you tell ZFS that you want to replace the
failed disk c8t7d0 with the spare, c8t11d0?

I just tried do this on my Nevada build 124 lab system, simulating a
disk failure and using zpool replace to replace the failed disk with
the spare. The spare is now busy and it fails. This has to be a bug.

  
  You need to 'zpool detach' the original (c8t7d0).

- Eric

  
  
Another way to recover is if you have a replacement disk for c8t7d0,
like this:

1. Physically replace c8t7d0.

You might have to unconfigure the disk first. It depends
on the hardware.

2. Tell ZFS that you replaced it.

# zpool replace tank c8t7d0

3. Detach the spare.

# zpool detach tank c8t11d0

4. Clear the pool or the device specifically.

# zpool clear tank c8t7d0

Cindy

On 10/14/09 14:44, Jason Frank wrote:


  So, my Areca controller has been complaining via email of read errors for
a couple days on SATA channel 8.  The disk finally gave up last night at
17:40.  I got to say I really appreciate the Areca controller taking such
good care of me.

For some reason, I wasn't able to log into the server last night or in
the morning, probably because my home dir was on the zpool with the failed
disk (although it's a raidz2, so I don't know why that was a problem.)  So,
I went ahead and rebooted it the hard way this morning.

The reboot went OK, and I was able to get access to my home directory by
waiting about 5 minutes after authenticating.  I checked my zpool, and it
was resilvering.  But, it had only been running for a few minutes.
 Evidently, it didn't start resilvering until I rebooted it.  I would have
expected it to do that when the disk failed last night (I had set up a hot
spare disk already).

All of the zpool commands were taking minutes to complete while c8t7d0
was UNAVAIL, so I offline'd it.  When I say all, that includes iostat,
status, upgrade, just about anything non-destructive that I could try.  That
was a little odd.  Once I offlined the drive, my resilver restarted, which
surprised me.  After all, I simply changed an UNAVAIL drive to OFFLINE, in
either case, you can't use it for operations.  But no big deal there.  That
fixed the login slowness and the zpool command slowness.

The resilver completed, and now I'm left with the following zpool config.
 I'm not sure how to get things back to normal though, and I hate to do
something stupid...

r...@datasrv1:~# zpool status tank
 pool: tank
 state: DEGRADED
 scrub: scrub stopped after 0h10m with 0 errors on Wed Oct 14 15:23:06
2009
config:

   NAME   STATE READ WRITE CKSUM
   tank   DEGRADED 0 0 0
 raidz2   DEGRADED 0 0 0
   c8t0d0 ONLINE   0 0 0
   c8t1d0 ONLINE   0 0 0
   c8t2d0 ONLINE   0 0 0
   c8t3d0 ONLINE

Re: [zfs-discuss] .zfs snapshots on subdirectories?

2009-10-04 Thread Trevor Pretty




Edward

If you look at the man page:-


  snapshot
  
  
A read-only version of a file system or volume at a given point
in time. It is specified as filesys...@name or vol...@name.
  

I think you've taken volume
snapshots. I believe you need to take file system snapshots and make each
users/username a ZFS file system.
Let's play...

r...@norton:~# zpool create -f storagepool c9t5d0
r...@norton:~# zfs create storagepool/users
r...@norton:~# zfs create storagepool/users/bob
r...@norton:~# zfs create storagepool/users/dick

r...@norton:# cd /storagepool/users/bob
r...@norton:# touch foo
r...@norton:# zfs snapshot storagepool/users/b...@now
r...@norton# ls -alR /storagepool/users/bob/.zfs
/storagepool/users/bob/.zfs:
total 3
dr-xr-xr-x 4 root root 4 2009-10-05 12:09 .
drwxr-xr-x 2 root root 3 2009-10-05 12:14 ..
dr-xr-xr-x 2 root root 2 2009-10-05 12:09 shares
dr-xr-xr-x 2 root root 2 2009-10-05 12:09 snapshot
/storagepool/users/bob/.zfs/shares:
total 2
dr-xr-xr-x 2 root root 2 2009-10-05 12:09 .
dr-xr-xr-x 4 root root 4 2009-10-05 12:09 ..
/storagepool/users/bob/.zfs/snapshot:
total 2
dr-xr-xr-x 2 root root 2 2009-10-05 12:09 .
dr-xr-xr-x 4 root root 4 2009-10-05 12:09 ..
drwxr-xr-x 2 root root 3 2009-10-05 12:14 now
/storagepool/users/bob/.zfs/snapshot/now:
total 2
drwxr-xr-x 2 root root 3 2009-10-05 12:14 .
dr-xr-xr-x 3 root root 3 2009-10-05 12:09 ..
-rw-r--r-- 1 root root 0 2009-10-05 12:14 foo

If you want a .zfs in
/storagepool/users/eharvey/some/foo/dir it needs to be a separate file
system.
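
(A sketch, reusing the path from Edward's example: zfs create -p builds the
whole chain of file systems in one go, and each level then gets its own
.zfs directory:

  zfs create -p storagepool/users/eharvey/some/foo/dir
)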



Edward Ned Harvey wrote:

  
  
  

  
  Suppose I have a storagepool: /storagepool
  And I have snapshots on it.  Then I can access the snaps under
  /storagepool/.zfs/snapshots

  But is there any way to enable this within all the subdirs?  For example,
      cd /storagepool/users/eharvey/some/foo/dir
      cd .zfs

  I don’t want to create a new filesystem for every subdir.  I just want to
  automatically have the “.zfs” hidden directory available within all the
  existing subdirs, if that’s possible.

  Thanks….
  










www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] .zfs snapshots on subdirectories?

2009-10-04 Thread Trevor Pretty




OOPS just spotted you said
you don't want a FS for each sub-dir :-)



-- 





Trevor
Pretty |
Technical Account Manager
|
+64
9 639 0652 |
+64
21 666 161
Eagle
Technology Group Ltd. 
Gate
D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211,
Parnell, Auckland










www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Would ZFS work for a high-bandwidth video SAN?

2009-09-29 Thread Trevor Pretty





Or just "try and buy" the machines from Sun for ZERO DOLLARS!!! 

Like Erik said..


"Both the Thor and 7110 are available for Try-and-Buy.  Get them and test them against your workload - it's the only way to be sure (to paraphrase Ripley)."


Marc Bevand wrote:

  Richard Connamacher rich at indieimage.com writes:
  
  
I was thinking of custom building a server, which I think I can do for
around $10,000 of hardware (using 45 SATA drives and a custom enclosure),
and putting OpenSolaris on it. It's a bit of a risk compared to buying a
$30,000 server, but would be a fun experiment.

  
  
Do you have a $2k budget to perform a cheap experiment?

Because for this amount of money you can build the following server that has
10TB of usable storage capacity, and that would be roughly able to sustain
sequential reads between 500MByte/s and 1000MByte/s over NFS over a Myricom
10GbE NIC. This is my estimation. I am less sure about sequential writes:
I think this server would be capable of at least 250-500 MByte/s.

$150 - Mobo with onboard 4-port AHCI SATA controller (eg. any AMD 700
  chipset), and at least two x8 electrical PCI-E slots
$200 - Quad-core Phenom II X4 CPU + 4GB RAM
$150 - LSISAS1068E 8-port SAS/SATA HBA, PCI-E x8
$500 - Myri-10G NIC (10G-PCIE-8B-C), PCI-E x8
$1000 - 12 x 1TB SATA drives (4 on onboard AHCI, 8 on LSISAS1068E)

- It is important to choose an AMD platform because the PCI-E lanes
  will always come from the northbridge chipset which is connected
  to the CPU via an HT 3.0 link. On Intel platforms, the DMI link
  between the ICH and MCH will be a bottleneck if the mobo gives
  you PCI-E lanes from the MCH (in my experience, this is the case
  of most desktop mobos).
- Make sure you enable AHCI in the BIOS.
- Configure the 12 drives as striped raidz vdevs:
  zpool create mytank raidz d0 d1 d2 d3 d4 d5 raidz d6 d7 d8 d9 d10 d11
- Buy drives able to sustain 120-130 MByte/s of sequential reads at the
  beginning of the platter (my recommendation: Seagate 7200.12) this
  way your 4Gbit/s requirement will be met even in the worst case when
  reading from the end of the platters.

Thank me for saving you $28k :-) The above experiment would be a way
to validate some of your ideas before building a 45-drive server...

-mrb


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  










www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [ZFS-discuss] RAIDZ drive removed status

2009-09-29 Thread Trevor Pretty




David

The disk is broken! Unlike
other file systems, which would silently lose your data, ZFS has decided
that this particular disk has "persistent errors".


action: Replace the faulted device, or use 'zpool clear' to mark the device repaired.
^^

It's clear you are
unsuccessful at repairing it.

Trevor


David Stewart wrote:

  Having casually used IRIX in the past and used BeOS, Windows, and MacOS as primary OSes, last week I set up a RAIDZ NAS with four Western Digital 1.5TB drives and copied over data from my WinXP box.  All of the hardware is fresh out of the box so I did not expect any hardware problems, but when I ran zpool after a few days of uptime and copying 2.4TB of data to the system I received the following:

da...@opensolarisnas:~$ zpool status mediapool
  pool: mediapool
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
	Sufficient replicas exist for the pool to continue functioning in a
	degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
	repaired.
 scrub: none requested
config:

	NAMESTATE READ WRITE CKSUM
	mediapool   DEGRADED 0 0 0
	  raidz1DEGRADED 0 0 0
	c8t0d0  ONLINE   0 0 0
	c8t1d0  ONLINE   0 0 0
	c8t2d0  ONLINE   0 0 0
	c8t3d0  FAULTED  0 0 0  too many errors

errors: No known data errors
da...@opensolarisnas:~$

I read the Solaris documentation and it seemed to indicate that I needed to run zpool clear.

da...@opensolarisnas:~$ zpool clear mediapool

And then the fun began.  The system froze and rebooted and I was stuck in a constant reboot cycle that would get to grub and selecting “opensolaris-2” and boot process and crash.  Removing the SATA card that the RAIDZ disks were attached to would result in a successful boot.  I reinserted the card, went through a few unsuccessful reboots, and magically it booted all the way for me to log in.  I then received the following:

me...@opensolarisnas:~$ zpool status -v mediapool
  pool: mediapool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid.  Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: scrub in progress for 0h2m, 0.29% done, 16h12m to go
config:

NAMESTATE READ WRITE CKSUM
mediapool   DEGRADED 0 0 0
  raidz1DEGRADED 0 0 0
c8t0d0  ONLINE   0 0 0
c8t1d0  ONLINE   0 0 0
c8t2d0  ONLINE   0 0 0
c8t3d0  UNAVAIL  7 0 0  experienced I/O failures

errors: No known data errors
me...@opensolarisnas:~$

I shut the machine down and unplugged the power supply and removed the SATA card and reinserted it, removed each of the SATA cables individually and reinserted them, removed each of the SATA power cables and reinserted them.  Rebooted:

da...@opensolarisnas:~# zpool status -x mediapool
  pool: mediapool
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
	continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h20m, 2.68% done, 12h29m to go
config:

	NAMESTATE READ WRITE CKSUM
	mediapool   DEGRADED 0 0 0
	  raidz1DEGRADED 0 0 0
	c8t0d0  ONLINE   0 0 0
	c8t1d0  ONLINE   0 0 0
	c8t2d0  ONLINE   0 0 0
	c8t3d0  REMOVED  0 0 0

errors: No known data errors
da...@opensolarisnas:~#


The resilvering completed everything seemed fine and I shut the machine down and rebooted later and went through the same boot  crash cycle that never got me to the login screen until it finally did get me to that screen for unknown reasons.  The machine is resilvering currently with the zpool status the same as above.  What happened, why did it happen, and how can I stop it from happening again?  Does OpenSolaris believe that c8t3d0 is not connected to the SATA card?  The SATA card BIOS sees all four drives.  What is the best way for me to figure out which drive is c8t3d0?  Some operating systems will tell you which drive is which by telling you the serial number of the drive.  Does OpenSolaris do this?  If so, how?  I looked through all of the Solaris/OpenSolaris documentation re: ZFS and RAIDZ for a mention of a “removed” status for a drive in RAIDZ configuration, but could not find mention outside of mirrors having this error.  Page 231 of the OS Bible mentions 
reattaching a drive in the “removed” status from a mirror.  Does this mean physically reattaching the drive (unplugging it and replugging it in) or does it mean somehow software reattaching it?  If I run “zpool offline –t c8t3d0” and reboot and then “zpool replace mediapool c8t3d0 

Re: [zfs-discuss] True in U4? Tar and cpio...save and restore ZFS File attributes and ACLs

2009-09-29 Thread Trevor Pretty




Ray

Use this link; it's worth its weight in gold. The Google search engine
is so much better than what's available at docs.sun.com:

http://www.google.com/custom?hl=enclient=google-coopcof=S%3Ahttp%3A%2F%2Fwww.sun.com%3BCX%3ASun%2520Documentation%3BL%3Ahttp%3A%2F%2Flogos.sun.com%2Ftry%2Fimg%2Fsun_logo.gif%3BLH%3A31%3BLP%3A1%3Bq=btnG=Searchcx=014942951012728127402%3Aulblnwea12w

Simply search for: solaris
8/07 ZFS

FYI: A while back Sun decided that, rather than having complete copies of
each manual for each Solaris release, which were 99.99% the same, the
manuals would just indicate what was new.

The "What's new" is always a good place to start.

http://docs.sun.com/app/docs/doc/817-0547

Or simply run a file with ACLs through a tar and cpio pipe and see if
they survive; much quicker than reading!!

Examples from the respective man pages:

example% cd fromdir; tar
cf - .| (cd todir; tar xfBp -)
example% find . -depth -print | cpio
-pdlmv newdir

Don't forget the ACLs on ZFS
are different to UFS.
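
A minimal check, with scratch paths invented here (not actually run on U4,
so treat it as a sketch): set a trivial NFSv4-style ACE, copy through the
tar pipe, and see whether it is still listed on the copy.

  cd /tank/fromdir
  touch file.1
  chmod A+user:lp:read_data:deny file.1       # add one extra ACE
  cd /tank ; mkdir todir
  (cd fromdir; tar cf - .) | (cd todir; tar xfBp -)
  ls -v todir/file.1                          # the extra ACE should still show up if tar preserved it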

Trevor




Ray Clark wrote:

  The April 2009 "ZFS Administration Guide" states "...tar and cpio commands, to save ZFS files.  All of these utilities save and restore ZFS file attributes and ACLs.

I am running 8/07 (U4).  Was this true for the U4 verison of ZFS and the tar and cpio shipped with U4?

Also, I cannot seem to figure out how to find the ZFS admin manual applicable to U4.  Could someone please shove me in the right direction?
  










www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [ZFS-discuss] RAIDZ drive removed status

2009-09-29 Thread Trevor Pretty




David

That depends on the hardware layout. If you don't know, and you say the
data is still somewhere else...

You could.

Pull a disk out and see what happens to the pool: the one you pulled
will be highlighted as the pool loses all its replicas (zpool clear
"should" fix it when you plug it back in.)

Or.

Create a single zpool on each drive and then unplug a drive and see
which zpool dies! 

However. 

You may not have hot-plug drives, so if they have a busy light, create a
pool on each drive, write a lot of data to each disk pool one at a time,
and see which access lights flash.

Or..

Unmount (or destroy) the zpool and power off the machine. Plug in just
one drive and boot. Use format to see which drive appeared. Repeat as
needed... You can also run destructive tests using format on the
suspect drive and see what that thinks.

It is really a good
idea to know which drive is which because they are going to fail! I'm
surprised it's not on the hardware somewhere, but I tend to play with
hardware from the big three and there is always a label.
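
Two more non-destructive tricks, offered from memory rather than as gospel:
iostat -En prints the inquiry data (vendor, product, serial number) for each
device, and a harmless raw read with dd will light up one drive's activity
LED at a time, so you can find c8t3d0 by elimination.

  iostat -En                   # look for the Serial No: field against each cXtYd0
  dd if=/dev/rdsk/c8t0d0p0 of=/dev/null bs=1024k count=1000   # watch which LED flickers
  # (p0 is the x86 whole-disk node; use s2 or s0 if your disks are labelled differently)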

Warning: Others have
reported that rebooting a system with faulted or degraded ZFS pools can
be "problematic" (you :-)), so be careful not to reboot with a pool in
that state if at all possible.

Trevor

David Stewart wrote:

  How do I identify which drive it is?  I hear each drive spinning (I listened to them individually) so I can't simply select the one that is not spinning.

 David
  










www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS pool replace single disk with raidz

2009-09-27 Thread Trevor Pretty





To: ZFS Developers. 

I know we hate them, but an "Are you sure?" may have helped here, and
may be a quicker fix than waiting for 4852783
(just thinking out loud here). Could the zfs command have worked out that
c5d0 was a single disk and that attaching it to the pool would have been
dumb?
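
For what it's worth, there is already a partial safety net; hedged here
because behaviour varies by build. zpool add has a dry-run flag, and on most
releases it refuses a vdev whose replication level doesn't match the rest of
the pool unless you force it:

  zpool add -n rtank c5d0     # -n: print the configuration that would result, change nothing
  zpool add rtank c5d0        # normally warns about mismatched replication and wants -f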


Ryan Hirsch wrote:

  I have a zpool named rtank.  I accidently attached a single drive to the pool.  I am an idiot I know :D Now I want to replace this single drive with a raidz group.  Below is the pool setup and what I tried:
 

NAMESTATE READ WRITE CKSUM
rtank   ONLINE   0 0 0
 - raidz1ONLINE   0 0 0
   -- c4t0d0  ONLINE   0 0 0
   -- c4t1d0  ONLINE   0 0 0
   -- c4t2d0  ONLINE   0 0 0
   -- c4t3d0  ONLINE   0 0 0
   -- c4t4d0  ONLINE   0 0 0
   -- c4t5d0  ONLINE   0 0 0
   -- c4t6d0  ONLINE   0 0 0
   -- c4t7d0  ONLINE   0 0 0
 - raidz1ONLINE   0 0 0
   -- c3t0d0  ONLINE   0 0 0
   -- c3t1d0  ONLINE   0 0 0
   -- c3t2d0  ONLINE   0 0 0
   -- c3t3d0  ONLINE   0 0 0
   -- c3t4d0  ONLINE   0 0 0
   -- c3t5d0  ONLINE   0 0 0
  - c5d0  ONLINE   0 0 0  --- single drive in the pool not in any raidz


$ pfexec zpool replace rtank c5d0 raidz c3t6d0 c3t7d0 c3t8d0 c3t9d0 c3t10d0 c3t11d0
too many arguments

$ zpool upgrade -v
This system is currently running ZFS pool version 18.


Is what I am trying to do possible?  If so what am I doing wrong?  Thanks.
  


-- 





Trevor
Pretty |
Technical Account Manager
|
+64
9 639 0652 |
+64
21 666 161
Eagle
Technology Group Ltd. 
Gate
D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211,
Parnell, Auckland










www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs vbox and shared folders

2009-09-27 Thread Trevor Pretty




Dick

I'm 99% sure I used to do this when I had OpenSolaris as my base OS,
sharing my $HOME to an XP guest (no NFS client - Bob).

Now I use Vista as my base OS because I now work in an MS environment,
so sorry, I can't check. Are you having problems?

BTW: Thank goodness for VirtualBox when I want to do real file
manipulation, rather than windows explorer!

Trevor

dick hoogendijk wrote:

  Are there any known issues involving VirtualBox using shared folders 
from a ZFS filesystem?

  









www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OS install question

2009-09-27 Thread Trevor Pretty




Ron

That should work; it's not really different to SVM.

BTW: Did you mean?

mirrored root on c1t0d0s0/c2t0d0s0
mirrored app on c1t1d0s0/c2t1d0s0
RAID-Z across c1t0d0s7/c1t1d0s7/c2t0d0s7/c2t1d0s7

I would then make slices 0 and 7 the same on all disks using fmthard.
(BTW: I would not use 7, I would use 1 - but that's just preference.)
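
(A sketch of the fmthard step, assuming c1t0d0 already carries the slice
layout you want and the disks have SMI labels; repeat for the other disks:

  prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c2t0d0s2
)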

Remember you don't need spare
slices with ZFS root for Live Upgrade like you did with SVM.


Ron Watkins wrote:

My goal is to have a mirrored root on c1t0d0s0/c2t0d0s0, another mirrored app fs on c1t0d0s1/c2t0d0s1 and then a 3+1 RAID-5 across c1t0d0s7/c1t1d0s7/c2t0d0s7/c2t1d0s7.
I want to play with creating iSCSI target LUNs on the RAID-5 partition, so I am trying out OpenSolaris for the first time. In the past, I would use Solaris 10 with SVM to create what I need, but without iSCSI target support.
  










www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sun Flash Accelerator F20

2009-09-24 Thread Trevor Pretty





Oracle use Linux :-( 

But on the positive note have a look at this:- http://www.youtube.com/watch?v=rmrxN3GWHpM

It's Ed Zander talking to Larry and asking some great questions.

29:45 Ed asks what parts of Sun are you going to keep - all of it!

45:00 Larry's rant on Cloud Computing  "the cloud is water
vapour!"

20:00 Talks about Russell Coutts (a good kiwi bloke) and the America's
Cup, if you don't care about
anything else. Although they seem confused about who should own it,
Team New Zealand are only letting the Swiss borrow it for a while until
they lose all our top sailors, like Russell, and we win it back, once
the trimaran side show is over :-)


Oh, and back on topic: has anybody found any info on the F20? I've a
customer who wants to buy one, and on the partner portal I can't find
any real details (a Just the Facts, SunIntro, or OneStop for Partners page
would be nice).

Trevor


Enda O'Connor wrote:

  Richard Elling wrote:
  
  
On Sep 24, 2009, at 12:20 AM, James Andrewartha wrote:



  I'm surprised no-one else has posted about this - part of the Sun 
Oracle Exadata v2 is the Sun Flash Accelerator F20 PCIe card, with 48 
or 96 GB of SLC, a built-in SAS controller and a super-capacitor for 
cache protection. 
http://www.sun.com/storage/disk_systems/sss/f20/specs.xml
  

At the Exadata-2 announcement, Larry kept saying that it wasn't a disk.  
But there
was little else of a technical nature said, though John did have one to 
show.

RAC doesn't work with ZFS directly, so the details of the configuration 
should prove
interesting.

  
  
isn't exadata based on linux, so not clear where zfs comes into play, 
but I didn't see any of this oracle preso, so could be confused by all this.

Enda
  
  
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

  
  
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  










www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What does 128-bit mean

2009-09-22 Thread Trevor Pretty





http://en.wikipedia.org/wiki/ZFS

Shu Wu wrote:
Hi pals, I'm now looking into zfs source and have been
puzzled about 128-bit. It's announced that ZFS is an 128-bit file
system. But what does 128-bit mean? Does that mean the addressing
capability is 2^128? But in the source, 'zp_size' (in 'struct
znode_phys'), the file size in bytes, is defined as uint64_t. So I
guess 128-bit may be the bit width of the zpool pointer, but where is
it defined?
  
Regards,
  
Wu Shu


-- 





Trevor
Pretty |
Technical Account Manager
|
+64
9 639 0652 |
+64
21 666 161
Eagle
Technology Group Ltd. 
Gate
D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211,
Parnell, Auckland










www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] What does 128-bit mean

2009-09-22 Thread Trevor Pretty





http://blogs.sun.com/bonwick/entry/128_bit_storage_are_you


-- 





Trevor
Pretty |
Technical Account Manager
|
+64
9 639 0652 |
+64
21 666 161
Eagle
Technology Group Ltd. 
Gate
D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211,
Parnell, Auckland










www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs bug

2009-09-22 Thread Trevor Pretty




Jeremy 

You sure?

http://bugs.opensolaris.org/view_bug.do%3Bjsessionid=32d28f683e21e4b5c35832c2e707?bug_id=6883885

BTW: I only found this by hunting for one of my bugs 6428437
and changing the URL! 

I think the searching is broken - but using bugster has always been a
black art even when I worked at Sun :-)

Trevor


Jeremy Kister wrote:

  I entered CR 6883885 at bugs.opensolaris.org.

someone closed it - not reproducible.

Where do i find more information, like which planet's gravitational 
properties affect the zfs source code ??


  












www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs bug

2009-09-22 Thread Trevor Pretty




BTW

Reading your bug. 

I assumed you meant? 

zfs set mountpoint=/home/pool tank

ln -s /dev/null /home/pool

I then tried on OpenSolaris 2008.11

r...@norton:~# zfs set mountpoint=
r...@norton:~# zfs set mountpoint=/home/pool tank
r...@norton:~# zpool export tank
r...@norton:~# rm -r /home/pool
rm: cannot remove `/home/pool': No such file or directory
r...@norton:~# ln -s /dev/null /home/pool
r...@norton:~# zpool import -f tank
cannot mount 'tank': Not a directory
r...@norton:~# 

So looks fixed to me.




-- 





Trevor
Pretty |
Technical Account Manager
|
+64
9 639 0652 |
+64
21 666 161
Eagle
Technology Group Ltd. 
Gate
D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211,
Parnell, Auckland










www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs bug

2009-09-22 Thread Trevor Pretty





Of course I meant 2009.06  :-)


-- 





Trevor
Pretty |
Technical Account Manager
|
+64
9 639 0652 |
+64
21 666 161
Eagle
Technology Group Ltd. 
Gate
D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211,
Parnell, Auckland










www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs not sharing nfs shares on OSOl 2009.06?

2009-09-15 Thread Trevor Pretty




Tom

What's in the NFS server log? (svcs -x)

BTW: Why are the NFS services disabled? If it has a problem I would
have expected it to be in state maintenance.

http://docs.sun.com/app/docs/doc/819-2252/smf-5?a=view


  DISABLED
  
  
The instance is disabled. Enabling the service results in a
transition to the offline state and eventually to the online state with
all dependencies satisfied.
  


  MAINTENANCE
  
  
The instance is enabled, but not able to run. Administrative
action (through svcadm clear)
is required to move the instance out of the maintenance state. The
maintenance state might be a temporarily reached state if an
administrative operation is underway.
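
If they are simply disabled (rather than stuck in maintenance), the obvious
first step - a guess, not a diagnosis - is just to enable the server and its
dependencies, then ask SMF why anything is still unhappy:

  svcadm enable -r svc:/network/nfs/server:default
  svcs -xv nfs/server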
  


Trevor

Tom de Waal wrote:

  Hi,

I'm trying to identify why my nfs server does not work. I'm using a more 
or less core install of OSOL 2009.06 (release) and installed and 
configured a nfs server.

The issue: nfs server won't start - it can't find any filesystems in 
/etc/dfs/sharetab. the zfs file systems do have sharenfs=on property 
(infact the pool the used to be on a working NV build 100).

Some investigations that I did:
zfs create -o sharenfs=os tank1/home/nfs # just an example fs
cannot share 'tank1/home/nfs': share(1M) failed
filesystem successfully create, but not shared

sharemgr list -v
default enabled nfs
zfs enabled nfs smb


svcs -a | grep nfs
disabled   19:52:51 svc:/network/nfs/client:default
disabled   21:05:36 svc:/network/nfs/server:default
online 19:53:23 svc:/network/nfs/status:default
online 19:53:25 svc:/network/nfs/nlockmgr:default
online 19:53:25 svc:/network/nfs/mapid:default
online 19:53:30 svc:/network/nfs/rquota:default
online 21:05:24 svc:/network/nfs/cbd:default

cat /etc/dfs/sharetab is empty

sharemgr start -v -P nfs zfs
Starting group "zfs"

share
# no response

share -F nfs /tank1/home/nfs zfs
Could not share: /tank1/home/nfs: system error

pkg list | grep nfs
SUNWnfsc   0.5.11-0.111installed  
SUNWnfsckr 0.5.11-0.111installed  
SUNWnfss   0.5.11-0.111installed  

Note: I also enabled the smb server (CIFS), which works fine (and fills 
sharetab)

Any suggestion how to resolve this? Am I missing an  ips package or a file?

Regards,



Tom de Waal
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


-- 





Trevor
Pretty |
Technical Account Manager
|
+64
9 639 0652 |
+64
21 666 161
Eagle
Technology Group Ltd. 
Gate
D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211,
Parnell, Auckland










www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Which kind of ACLs does tmpfs support ?

2009-09-15 Thread Trevor Pretty




An interesting question; it takes a
few minutes to test...

http://docs.sun.com/app/docs/doc/819-2252/acl-5?l=ena=viewq=acl%285%29+
http://docs.sun.com/app/docs/doc/819-2239/chmod-1?l=ena=view

ZFS

[tp47...@norton:] df .
Filesystem size used avail capacity Mounted on
rpool/export/home/tp47565
 16G 1.2G 9.7G 11% /export/home/tp47565
[tp47...@norton:] touch file.3
[tp47...@norton:] ls -v file.3
-rw-r- 1 tp47565 staff 0 Sep 16 15:02 file.3
 0:owner@:execute:deny

1:owner@:read_data/write_data/append_data/write_xattr/write_attributes
 /write_acl/write_owner:allow
 2:group@:write_data/append_data/execute:deny
 3:group@:read_data:allow
 4:everyone@:read_data/write_data/append_data/write_xattr/execute
 /write_attributes/write_acl/write_owner:deny
 5:everyone@:read_xattr/read_attributes/read_acl/synchronize:allow
[tp47...@norton:] chmod A+user:lp:read_data:deny file.3
[tp47...@norton:] ls -v file.3 
-rw-r-+ 1 tp47565 staff 0 Sep 16 15:02 file.3
 0:user:lp:read_data:deny
 1:owner@:execute:deny

2:owner@:read_data/write_data/append_data/write_xattr/write_attributes
 /write_acl/write_owner:allow
 3:group@:write_data/append_data/execute:deny
 4:group@:read_data:allow
 5:everyone@:read_data/write_data/append_data/write_xattr/execute
 /write_attributes/write_acl/write_owner:deny
 6:everyone@:read_xattr/read_attributes/read_acl/synchronize:allow
[tp47...@norton:] 



Let's try the new ACLs on tmpfs

[tp47...@norton:] cd /tmp
[tp47...@norton:] df .
Filesystem size used avail capacity Mounted on
swap 528M 12K 528M 1% /tmp
[tp47...@norton:] grep swap /etc/vfstab 
swap  -  /tmp  tmpfs - yes -
/dev/zvol/dsk/rpool/swap -  -  swap - no -
[tp47...@norton:] 

[tp47...@norton:] touch file.3
[tp47...@norton:] ls -v file.3
-rw-r- 1 tp47565 staff 0 Sep 16 14:58 file.3
 0:user::rw-
 1:group::r--  #effective:r--
 2:mask:rwx
 3:other:---
[tp47...@norton:] 

[tp47...@norton:] chmod A+user:lp:read_data:deny file.3
chmod: ERROR: ACL type's are different
[tp47...@norton:] 

So tmpfs does not
support the new ACLs

Do I have to do the
old way as well?
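
(If anyone wants the old way checked, the POSIX-draft interface would be
exercised roughly like this - shown as a sketch only, I have not run it here:

  setfacl -m user:lp:r-- file.3
  getfacl file.3
)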


Roland Mainz wrote:

  Hi!



Does anyone know out-of-the-head whether tmpfs supports ACLs - and if
"yes" - which type(s) of ACLs (e.g. NFSv4/ZFS, old POSIX draft ACLs
etc.) are supported by tmpfs ?



Bye,
Roland

  


-- 





Trevor
Pretty |
Technical Account Manager
|
+64
9 639 0652 |
+64
21 666 161
Eagle
Technology Group Ltd. 
Gate
D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211,
Parnell, Auckland










www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Read about ZFS backup - Still confused

2009-09-03 Thread Trevor Pretty




Cork

To answer your question just use tar for everything. It's about the
best we've got. :-(

When the disk turns into a doorstop, re-install OpenSolaris/Solaris and
then tar back all your data. I keep a complete list of EVERY change I
make on any OS (including the Redmond one) so I can re-create the
machine.

And, IMHO - and I know I will get shot at for saying it, but...

One reason why I would not use ZFS root in a real live production
environment is not having the equivalent of ufsdump/ufsrestore so I
can do a bare-metal restore. ZFS root works great on my laptop, but I
know lots who still rely on ufsdump to a local tape drive for quick
bare-metal restores. The only good news is that UNIX is much tidier than
Windows, and there is very little that is not in /home (or /export/home)
that gets changed throughout the OS's life.

Unless somebody knows better...
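
(A hedged sketch of what "just use tar" looks like in practice, with made-up
paths; and, on recent enough bits, a zfs send stream is another way to get a
whole-dataset image:

  cd /export/home && tar cEf /backup/home.tar .    # E = extended headers for big files/long names

  zfs snapshot -r rpool/export@backup
  zfs send -R rpool/export@backup > /backup/export.zfs   # -R needs a newer release than U4
)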


Cork Smith wrote:

  Let me try rephrasing this. I would like the ability to restore so my system mirrors its state at the time when I backed it up given the old hard drive is now a door stop.

Cork
  












www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Petabytes on a budget - blog

2009-09-02 Thread Trevor Pretty






  

Overall, the product is what it is.  There is nothing wrong with it in the 
right situation although they have trimmed some corners that I wouldn't 
have trimmed in their place.  However, comparing it to a NetAPP or an EMC 
is to grossly misrepresent the market.  

I don't think that is what they were doing. I think they were trying
to point out they had $X budget and wanted to buy Y PB of storage, and
building their own was cheaper than buying it. No surprise there!
However, they don't show their R&D costs. I'm sure the designers
don't work for nothing, although to their credit they do share the H/W
design and have made it open source. They also mention
www.protocase.com will make them for you, so if you want to build your
own then you have no R&D costs.

I would love to know why they did not use ZFS.


  This is the equivalent of seeing 
how many USB drives you can plug in as a storage solution.  I've seen this 
done.


Julian
--
Julian King
Computer Officer, University of Cambridge, Unix Support
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  


-- 





Trevor
Pretty |+64
9 639 0652 |
+64
21 666 161
Eagle
Technology Group Ltd. 
Gate
D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211,
Parnell, Auckland










www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] 7110: Would it self upgrade the system zpool?

2009-09-02 Thread Trevor Pretty




Just Curious

The 7110 I have on loan has an old zpool version. I *assume* that's because
it's been upgraded, and it gives me the ability to downgrade. Does anybody
know whether, if I delete the old version of Amber Road, the pool would then
upgrade? (I don't want to do it, as I want to show the up/downgrade
feature.)

OS pool:-
  pool: system
 state: ONLINE
 status: The pool is formatted using an older on-disk format. The
pool can
 still be used, but some features are unavailable.

And yes, I may have invalidated my support. If you have a 7000 box,
don't ask me how to access the system like this; you can see the
warning. Remember, I have a loan box and am just being nosey - a sort of
looking under the bonnet and going "OOOHHH, an engine", but being too
scared to even pull the dip stick :-)

  You are entering the operating system shell. By confirming this action in
  the appliance shell you have agreed that THIS ACTION MAY VOID ANY SUPPORT
  AGREEMENT. If you do not agree to this -- or do not otherwise understand
  what you are doing -- you should type "exit" at the shell prompt. EVERY
  COMMAND THAT YOU EXECUTE HERE IS AUDITED, and support personnel may use
  this audit trail to substantiate invalidating your support contract. The
  operating system shell is NOT a supported mechanism for managing this
  appliance, and COMMANDS EXECUTED HERE MAY DO IRREPARABLE HARM.

  NOTHING SHOULD BE ATTEMPTED HERE BY UNTRAINED SUPPORT PERSONNEL UNDER ANY
  CIRCUMSTANCES. This appliance is a non-traditional operating system
  environment, and expertise in a traditional operating system environment
  in NO WAY constitutes training for supporting this appliance. THOSE WITH
  EXPERTISE IN OTHER SYSTEMS -- HOWEVER SUPERFICIALLY SIMILAR -- ARE MORE
  LIKELY TO MISTAKENLY EXECUTE OPERATIONS HERE THAT WILL DO IRREPARABLE
  HARM. Unless you have been explicitly trained on supporting this
  appliance via the operating system shell, you should immediately return
  to the appliance shell.

  Type "exit" now to return to the appliance shell.


Trevor














www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Status/priority of 6761786

2009-08-27 Thread Trevor Pretty




Dave

Yep, that's an RFE (Request For Enhancement); that's how things are
reported to engineers inside Sun to get things fixed. If it's an
honest-to-goodness CR (= bug) - however, it normally needs a real,
support-paying customer to have a problem to go from RFE to CR - the
"responsible engineer" evaluates it and eventually gets it fixed, or not.
When I worked at Sun I logged a lot of RFEs; only a few were accepted as
bugs and fixed.

Click on the "New Search" link and look at the type and state menus. It
gives you an idea of the states an RFE and a CR go through. It's
probably documented somewhere, but I can't find it. Part of the joy of
Sun putting out in public something most other vendors would not dream
of doing.

Oh, and it doesn't help that both RFEs and CRs are labelled "bug" at
http://bugs.opensolaris.org/

So. Looking at your RFE.

It tells you which version on Nevada it was reported against
(translating this into an Opensolaris version is easy - NOT!)

Look at "Related
Bugs  6612830
"

This will tell you the 

"Responsible
Engineer  Richard
Morris" 

and when it was fixed 

"Release Fixed  , solaris_10u6(s10u6_01) (Bug
ID:2160894)
"

Although as nothing in life is guaranteed it looks like another bug
2160894 has been identified and that's not yet on bugs.opensolaris.org 

Hope that helps.

Trevor


Dave wrote:

  Just to make sure we're looking at the same thing:

http://bugs.opensolaris.org/view_bug.do?bug_id=6761786

This is not an issue of auto snapshots. If I have a ZFS server that 
exports 300 zvols via iSCSI and I have daily snapshots retained for 14 
days, that is a total of 4200 snapshots. According to the link/bug 
report above it will take roughly 5.5 hours to import my pool (even when 
the pool is operating perfectly fine and is not degraded or faulted).

This is obviously unacceptable to anyone in an HA environment. Hopefully 
someone close to the issue can clarify.

--
Dave

Blake wrote:
  
  
I think the value of auto-snapshotting zvols is debatable.  At least,
there are not many folks who need to do this.

What I'd rather see is a default property of 'auto-snapshot=off' for zvols.

Blake

On Thu, Aug 27, 2009 at 4:29 PM, Tim Cookt...@cook.ms wrote:


  On Thu, Aug 27, 2009 at 3:24 PM, Remco Lengers re...@lengers.com wrote:
  
  
Dave,

Its logged as an RFE (Request for Enhancement) not as a CR (bug).

The status is 3-Accepted/  P1  RFE

RFE's are generally looked at in a much different way then a CR.

..Remco

  
  Seriously?  It's considered "works as designed" for a system to take 5+
hours to boot?  Wow.

--Tim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

  
  ___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  












www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] live upgrade with lots of zfs filesystems

2009-08-27 Thread Trevor Pretty




Paul

You need to exclude all the file systems that are not the "OS".

My S10 virtual machine is not booted, but from memory you can put all the
"excluded" file systems in a file and use -f.

You used to have to do this if there was a DVD in the drive, otherwise
/cdrom got copied to the new boot environment. I know this because I
logged an RFE when Live Upgrade first appeared, and it was put into
state Deferred as the workaround is to just exclude it. I think it did
get fixed, however, in a later release.
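
(From memory only, and untested here - verify the flag names against
lucreate(1M) before relying on this sketch; the BE name is made up:

  echo /export > /var/tmp/lu-exclude
  lucreate -n patchBE -f /var/tmp/lu-exclude
)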

trevor




Paul B. Henson wrote:

  Well, so I'm getting ready to install the first set of patches on my x4500
since we deployed into production, and have run into an unexpected snag.

I already knew that with about 5-6k file systems the reboot cycle was going
to be over an hour (not happy about, but knew about and planned for).

However, I went to create a new boot environment to install the patches
into, and so far that's been running for about an hour and a half :(,
which was not expected or planned for.

First, it looks like the ludefine script spent about 20 minutes iterating
through all of my zfs file systems, and then something named lupi_bebasic
ran for over an hour, and then it looks like it mounted all of my zfs
filesystems under /.alt.tmp.b-nAe.mnt, and now it looks like it is
unmounting all of them.

I hadn't noticed before, but when I went to check on my test system (with
only a handful of filesystems), but evidently when I get to the point of
using lumount to mount the boot environment for patching, it's going to
again mount all of my zfs file systems under the alternative root, and then
need to unmount them all again after I'm done patching, which is going to
add probably another hour or two.

I don't think I'm going to make my downtime window :(, and will probably
need to reschedule the patching. I never considered I might have to start
the patch process six hours before the window.

I poked around a bit, but have not come across any way to exclude zfs
filesystems not part of the boot os pool from the copy and mount process.
I'm really hoping I'm just being stupid and missing something blindingly
obvious. Given a boot pool named ospool, and a data pool named export, is
there anyway to make live upgrade completely ignore the data pool? There
is no need for my 6k user file systems to be mounted in the alternative
environment during patching. I only want the file systems in the ospool
copied, processed, and mounted.

fingers crossed Thanks...



  









www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Status/priority of 6761786

2009-08-27 Thread Trevor Pretty




Dave

This helps:- http://defect.opensolaris.org/bz/page.cgi?id=fields.html

The most common thing you will see is "Duplicate", as different people
find the same problem at different times in different ways, and when they
searched the database to see if it was "known" they could not find a bug
description that seemed to match their problem. I logged quite a few of
these :-)

The other common state is "Incomplete" typically because the submitter
has not provided enough info. for the evaluator to evaluate it.

Oh and what other company would allow you to see this data? :-
http://defect.opensolaris.org/bz/reports.cgi (Old Charts is interesting)

Trevor











www.eagle.co.nz
This email is confidential and may be legally 
privileged. If received in error please destroy and immediately notify 
us.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How to remove [alternate] cylinders from slice 9?

2009-08-20 Thread Trevor Pretty





Jeff old mate I assume you used format -e?

Have you tried swapping the label back to SMI and then back to EFI?
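From memory, in expert mode that's something like this (untested, and it
assumes the new disk is c7d1 and you don't mind losing whatever is on it):

  # format -e c7d1
  format> label
    [0] SMI Label
    [1] EFI Label
  Specify Label type[0]: 1      <- write an EFI label, discarding the old VTOC
  format> label
  Specify Label type[1]: 0      <- then back to SMI and repartition from scratch
  format> quit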

Trevor

Jeff Victor wrote:

  I am trying to mirror an existing zpool on OpenSolaris 2009.06. I think 
I need to delete two alternate cylinders...


The existing disk in the pool (c7d0s0):
Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 19453      149.02GB    (19453/0/0) 312512445
  1 unassigned    wm       0                  0       (0/0/0)             0
  2     backup    wu       0 - 19453      149.03GB    (19454/0/0) 312528510
  3 unassigned    wm       0                  0       (0/0/0)             0
  4 unassigned    wm       0                  0       (0/0/0)             0
  5 unassigned    wm       0                  0       (0/0/0)             0
  6 unassigned    wm       0                  0       (0/0/0)             0
  7 unassigned    wm       0                  0       (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 unassigned    wm       0                  0       (0/0/0)             0


The new disk, which was a zpool before I destroyed that pool:
Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                  0       (0/0/0)             0
  1 unassigned    wm       0                  0       (0/0/0)             0
  2     backup    wu       0 - 19453      149.03GB    (19454/0/0) 312528510
  3 unassigned    wm       0                  0       (0/0/0)             0
  4 unassigned    wm       0                  0       (0/0/0)             0
  5 unassigned    wm       0                  0       (0/0/0)             0
  6 unassigned    wm       0                  0       (0/0/0)             0
  7 unassigned    wm       0                  0       (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 alternates    wm       1 -     2       15.69MB    (2/0/0)         32130

Format won't let me remove the two cylinders from slice 9:
partition> 0
Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                  0       (0/0/0)             0

Enter partition id tag[unassigned]: root
Enter partition permission flags[wm]:
Enter new starting cyl[3]: 1
Enter partition size[0b, 0c, 1e, 0.00mb, 0.00gb]: 19453c

Warning: Partition overlaps alternates partition. Specify different 
start cyl.
partition> 9
`9' is not expected.

How can I delete the alternate cylinders, or otherwise mirror c7d1 to 
c7d0?  Or can I safely use c7d0s2?

Thanks,
--JeffV



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
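On Jeff's actual goal (mirroring onto c7d1): once the label is sorted out,
the usual recipe is to copy the VTOC from the good disk and attach the
matching slice. A rough, untested sketch - the pool name "rpool" is my
assumption, Jeff never names it:

  # prtvtoc /dev/rdsk/c7d0s2 | fmthard -s - /dev/rdsk/c7d1s2
  # zpool attach rpool c7d0s0 c7d1s0
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c7d1s0   <- only if this is the boot pool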
  












___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help: Advice for a NAS

2009-08-10 Thread Trevor Pretty





Let's not forget that despite the fact that us lunatic fringe use OpenSolaris on
anything we can get our hands on, Sun Microsystems' customers use Solaris to run
mission-critical environments, and adding disks in "chunks" like you have
to do in ZFS is no big deal to a commercial organisation. The data is
worth far more to most organisations than the disks. To get the
functionality Sun's customers now have with ZFS for zero dollars (with
the exception of shrinking - let's not go down that rat hole), they used to
have to pay many, many dollars to Veritas, or lots of money to
Network Appliance.

For Sun's paying customers, to quote Thomas, "the benefits of
ZFS far outweigh the limitations".

Let's not forget: UFS/VxFS, SVM/VxVM and the whole RAID industry have
many more years of development and use behind them. ZFS is still the new kid on the
block; he might not be as good as some of the old boys in the
playground, but he is creating a stir and growing up fast!
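And the "chunk" Thomas talks about below really is a one-liner; a sketch
with made-up pool and device names:

  # zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
  # zpool status tank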

Thomas Burgess wrote:
Why not just do simple mirrored vdevs? Or use cheaper 1TB
drives for the second vdev?
I don't know... it's up to you. To me the benefits of ZFS far outweigh
the limitations. Also, in my opinion, when you are expanding your
storage it's a good idea to add it in chunks like this... adding a 4-drive
vdev is the way *I* do it right now, though I use 1TB drives
because the 2TB drives aren't worth it atm.

1TB drives are around 80 bucks and 7200 rpm; 2TB drives are 250-300 and
5400 rpm... for the cost of two 2TB drives you could EASILY add vdevs of
1TB drives...
  
  On Mon, Aug 10, 2009 at 7:03 AM, Chester no-re...@opensolaris.org
wrote:
  Thanks
for the info so far. Yes, I understand that you can add more vdevs,
but at what cost? With the 2TB drives costing $300 each, I wanted to
get more or less the bare minimum and then add more drives once I
filled the capacity. I understand that raidz1 is similar to RAID5 (it
can recover from a single drive failure) and raidz2 is similar to RAID6
(recovery from up to two drive failures). Since I have four drives
now, I would leave that with single parity and probably the next time I
added a drive, I would migrate over to double parity.

In your scenario, once I fill up my storage capacity, I would need to
add another three drives; therefore dedicating two drives for parity
(one for the four disk set and one for the three disk set), which would
be similar to my plan of moving to double parity. However, what about
after that? Three drives dedicated to single parity for three
different sets? Certainly, I would get to a point where I wouldn't
want 16 drives constantly spinning and I would hope by then either
solid state disks have moved up in storage size and down in terms of
price so I could start cutting over to those.

Is there a way to expand the zpool to take advantage of the increased
size of the hardware once I add a disk on the 3ware controller? I
looked at the ZFS admin document and see an autoexpand property, but that
feature doesn't appear to be supported by OpenSolaris.
--


This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
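On Chester's autoexpand question: that property is newer than 2009.06 as
far as I know, but on later builds growing the pool after swapping in
bigger devices is roughly this (pool and device names are made up):

  # zpool set autoexpand=on tank        <- grow automatically when devices grow
  # zpool online -e tank c3t0d0         <- or expand a single device by hand
  # zpool list tank                     <- check the new size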


  
  
  














___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send/recv syntax

2009-07-28 Thread Trevor Pretty





Try send/receive to the same host (ssh localhost). I used this when
trying send/receive, as it removes the ssh-between-hosts "problems".

The on-disk format of ZFS has changed between releases - there is something
about it in the man pages, from memory - so watch the versions. Sending from
an older release to a newer one (S10 -> OpenSolaris) should be fine; it's the
other direction that needs an upgrade first, but I could be wrong!
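A minimal local test looks something like this ("tank" and "backup" are
just placeholder pool names):

  # zfs snapshot -r tank@test
  # zfs send -R tank@test | ssh localhost /usr/sbin/zfs recv -dF backup
  # zfs list -r backup                  <- the child filesystems should show up here

If that works, the problem is in the ssh/pfexec plumbing between the two
hosts rather than in send/receive itself.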

Joseph L. Casale wrote:

  
Yes, use -R on the sending side and -d on the receiving side.

  
  
I tried that first, going from Solaris 10 to osol 0906:

# zfs send -vR mypo...@snap |ssh j...@catania "pfexec /usr/sbin/zfs recv -dF mypool/somename"

didn't create any of the zfs filesystems under mypool2?

Thanks!
jlc
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  
















___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Help with setting up ZFS

2009-07-27 Thread Trevor Pretty




Brian

This is a chunk of a script I wrote. To make it go to another machine,
change the send/receive to something like the other example below.

It creates a copy of a ZFS filesystem and mounts it on the local machine
(the "do_command" function just makes my demo self-running).

Scrubbing is easy - just a cron entry!
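For example, a couple of root crontab entries like these scrub every Sunday
morning and mail the pool health afterwards (pool name "tank" is just an
example):

  0 2 * * 0 /usr/sbin/zpool scrub tank
  0 8 * * 0 /usr/sbin/zpool status -x | mailx -s "weekly zpool status" root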

Code Chunk 1

if [ -d $COPY_DIR ]; then
 echo "="
 echo "Make a copy of my current $HOME_DIR in $COPY_DIR"
 echo "="
 # Work out where $HOME_DIR and $COPY_DIR are located in ZFS
 #
 HOME_POOL=`/bin/df -h $HOME_DIR | grep $HOME_DIR | awk '{ print $1 }' | head -1`
 # This only works if /Backup is mounted and I now umount it so I can always mount /Backup/home.
 # I had problems when I used the top dir as a filesystem when rebooting after an LU.
 #COPY_POOL=`/bin/df -h $COPY_DIR | grep $COPY_DIR | awk '{ print $1 }' | head -1`
 COPY_POOL=`/usr/sbin/zfs list | grep $COPY_DIR | grep -v $HOME_DIR | awk '{ print $1 }' | head -1`
 # Use zfs send and receive
 #
 # /usr/sbin/zfs destroy -fR $COPY_POOL$HOME_DIR # It can exist!
 /usr/sbin/zfs destroy -fR $HOME_POOL@now 1>/dev/null 2>&1 # Just in case we aborted for some reason last time
 /usr/sbin/umount -f $COPY_DIR/$HOME_DIR 1>/dev/null 2>&1  # Just in case somebody is cd'ed to it
 sync
 /usr/sbin/zfs snapshot $HOME_POOL@now && \
 /usr/sbin/zfs send $HOME_POOL@now | /usr/sbin/zfs receive -F $COPY_POOL$HOME_DIR && \
 /usr/sbin/zfs destroy $HOME_POOL@now
 /usr/sbin/zfs destroy $COPY_POOL$HOME_DIR@now
 /usr/sbin/zfs umount $COPY_POOL 1>/dev/null 2>&1          # It should not be mounted
 /usr/sbin/zfs set mountpoint=none $COPY_POOL
 /usr/sbin/zfs set mountpoint=$COPY_DIR$HOME_DIR $COPY_POOL$HOME_DIR
 /usr/sbin/zfs mount $COPY_POOL$HOME_DIR
 /usr/sbin/zfs set readonly=on $COPY_POOL$HOME_DIR
 sync
 /bin/du -sk $COPY_DIR/$HOME > /tmp/email$$
fi


Code chunk 2

How I demoed send/receive

# http://blogs.sun.com/timc/entry/ssh_cheat_sheet
#
# [r...@norton:] ssh-keygen -t rsa
#   no pass phrase
# [r...@norton:] cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
#
# Edit /etc/ssh/sshd_config, change line to
#  PermitRootLogin yes
# 
# [r...@norton:] svcadm restart ssh


## Let's send the snapshot to another pool ##

echo ""
echo ""
echo "Create a new pool and send the snapshot to it to back it up"
echo ""
echo "Note: The pool could be on a remote system"
echo "I will simply use ssh to localhost"
echo ""
do_command zpool create backup_pool $DISK5
do_command zpool status backup_pool
press_return
# Note do_command does not work via the pipe so I will just use echo
# Need to setup ssh - see notes above
echo ""
echo ""
echo "-- zfs send sap_pool/PRD/sapda...@today | ssh localhost
zfs receive -F backup_pool/sapdata1"
echo ""
zfs send sap_pool/PRD/sapda...@today | ssh localhost zfs receive -F
backup_pool/sapdata1
do_command df -h /sapdata1
do_command df -h /backup_pool/sapdata1
echo ""
echo "Notice the backup is not compressed!"
echo ""
press_return
do_command ls -alR /backup_pool/sapdata1 | more

Brian wrote:

Thank you, I'll definitely implement a script to scrub the system, and have the system email me if there is a problem.
  


-- 





Trevor Pretty | Technical Account Manager | +64 9 639 0652 | +64 21 666 161
Eagle Technology Group Ltd.
Gate D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211, Parnell, Auckland












___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Niagara and ZFS compression?

2006-08-20 Thread trevor pretty
Team

During a ZFS presentation I had a question from Vernon which I could not
answer and did not find with a quick look through the archives.

Q: What's the effect (if any) of only having one Floating Point Processor
on Niagara when you turn on ZFS compression?

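One way to get a feel for it on the box itself would be something like this
(dataset and source path are made up; the default lzjb compressor is
integer-only as far as I know, so I would not expect the FPU to be the
bottleneck):

  # zfs create -o compression=on tank/comptest
  # ptime cp -r /export/testdata /tank/comptest
  # zfs get compressratio tank/comptest

Repeat with compression=off on a second dataset and compare the ptime figures.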
-- 
==
 Trevor PrettyMob: +64 21 666 161
 Systems Engineer  OS Ambassador DDI: +64 9 976 6802
 Sun Microsystems (NZ) Ltd.   Fax: +64 9 976 6877
 Level 5, 10 Viaduct Harbour Ave,
 PO Box 5766, Auckland, New Zealand
==
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss