Re: [zfs-discuss] thousands of ZFS file systems

2006-10-30 Thread Cyril Plisko

On 10/30/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:



1. rebooting the server could take several hours right now with so many file systems

   I believe this problem is being addressed right now


Well, I've done a quick test on b50 - 10K filesystems took around 5 minutes
to boot. Not bad, considering it was done on a single SATA disk. I am quite
sure S10U2 wouldn't be as quick as b50. On the other hand S10U3 may have
these fixes included.
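
For anyone who wants to reproduce a test at that scale, a minimal sketch in
plain Bourne shell (assuming an existing pool named "tank" - the pool name and
the count are just placeholders):

  # create 10,000 small filesystems
  i=1
  while [ $i -le 10000 ]; do
      zfs create tank/fs$i
      i=`expr $i + 1`
  done

  # rough proxy for the mount phase of a reboot
  zfs unmount -a
  time zfs mount -a

Note that "zfs unmount -a" / "zfs mount -a" act on all ZFS filesystems on the
box, not just the test pool, so only try this on a scratch machine.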

--
Regards,
   Cyril
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Performance Question

2006-10-30 Thread Chad Leigh -- Shire.Net LLC


On Oct 30, 2006, at 10:45 PM, David Dyer-Bennet wrote:


Also, stacking it on top of an existing RAID setup is kinda missing
the entire point!


Everyone keeps saying this, but I don't think it is missing the point
at all.  Checksumming and all the other goodies still work fine, and
you can run a ZFS mirror across two or more RAID devices for the
ultimate in reliability.  My mirror of dual RAID-6 devices with large
ECC battery-backed caches will be much more reliable than your RAID-Z,
will probably perform better, and I still get the ZFS goodness.
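
A minimal sketch of that layout, assuming the two hardware RAID-6 LUNs show
up as c2t0d0 and c3t0d0 (device names are hypothetical):

  # ZFS mirror on top of two hardware RAID-6 LUNs: ZFS still checksums and
  # self-heals from the mirror, while each LUN survives two drive failures
  # internally
  zpool create tank mirror c2t0d0 c3t0d0
  zpool status tank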


I can lose one whole RAID device (all the disks) and up to two of the
disks on the second RAID device, all at the same time, and still be
OK, fully recoverable, and still operating.


(OK, my second RAID is not yet installed, so right now my ZFS'ed
single RAID-6 is not as reliable as I would like, but the second
half, i.e. the second RAID-6, will be installed before Xmas.)


Chad

---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
chad at shire.net





smime.p7s
Description: S/MIME cryptographic signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Performance Question

2006-10-30 Thread David Dyer-Bennet

On 10/30/06, Jay Grogan <[EMAIL PROTECTED]> wrote:

Ran 3 tests using mkfile to create a 6GB file on UFS and ZFS file systems.
Command run: mkfile -v 6gb /ufs/tmpfile

Test 1 UFS mounted LUN  (2m2.373s)
Test 2 UFS mounted LUN with directio option (5m31.802s)
Test 3 ZFS LUN  (Single LUN in a pool)  (3m13.126s)

Sunfire V120
1 Qlogic 2340
Solaris 10 06/06

Attached to a Hitachi 9990 (USP). LUNs are OPEN-L's at 33.9 GB, with plenty of
cache on the HDS box; the disks are in a RAID-5.

New to ZFS, so am I missing something? The standard UFS write bested ZFS by a
minute. ZFS iostat showed about 50 MB a sec.


Do you find this surprising?  Why?  A ZFS pool has additional overhead
relative to a simple filesystem -- the metadata is duplicated, and
metadata and data blocks are checksummed.  ZFS gives higher
reliability, and better integration between the levels, but it's *not*
designed for maximizing disk performance without regard to
reliability.

Also, stacking it on top of an existing RAID setup is kinda missing
the entire point!
--
David Dyer-Bennet
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: recover zfs data from a crashed system?

2006-10-30 Thread Jason Williams
Hi Senthil,

We experienced a situation very close to this. Due to some instabilities, we 
weren't able to export the zpool safely from the distressed system (a T2000 
running SXb41). The only free system we had was an X4100, which was running S10 
6/06. Both were SAN attached. The filesystem imported like a champ onto the 
X4100 (we had to force the import since we didn't cleanly export). We had no 
corruption issues at all, verified by a full scrub. We went system to system in 
about 10 minutes. Most of that was spent re-configuring the LUN masking on the 
SAN array. I can't vouch that you'll have the same experience, but we were very 
impressed, particularly with going between different Solaris versions and different 
CPU architectures.
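
For reference, a minimal sketch of that kind of forced takeover (pool name
"tank" is hypothetical):

  zpool import            # list pools visible on the SAN LUNs
  zpool import -f tank    # force, since the old host never exported it
  zpool scrub tank        # verify checksums across the whole pool
  zpool status -v tank    # confirm the scrub finished with no errors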

Best Regards,
Jason
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Recommended Minimum Hardware for ZFS Fileserver?

2006-10-30 Thread Wes Williams
Thanks again for your input, gents. I was able to get a W1100z inexpensively 
with 1GB RAM and a 2.4 GHz Opteron... now I'll just have to manufacture my own 
drive slide rails, since Sun won't sell the darn things [no, I don't want an 80GB 
IDE drive and apple pie with that!] and I'm not paying $100 for three pairs of 
plastic strips that almost fit right.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS Performance Question

2006-10-30 Thread Jay Grogan
Ran 3 tests using mkfile to create a 6GB file on UFS and ZFS file systems.
Command run: mkfile -v 6gb /ufs/tmpfile

Test 1 UFS mounted LUN  (2m2.373s)
Test 2 UFS mounted LUN with directio option (5m31.802s)
Test 3 ZFS LUN  (Single LUN in a pool)  (3m13.126s)

Sunfire V120 
1 Qlogic 2340
Solaris 10 06/06

Attached to a Hitachi 9990 (USP). LUNs are OPEN-L's at 33.9 GB, with plenty of
cache on the HDS box; the disks are in a RAID-5.

New to ZFS, so am I missing something? The standard UFS write bested ZFS by a
minute. ZFS iostat showed about 50 MB a sec.
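
For anyone who wants to repeat the comparison, a rough sketch (device names,
mount points and pool name are hypothetical):

  # UFS on one LUN
  newfs /dev/rdsk/c2t0d0s0
  mount /dev/dsk/c2t0d0s0 /ufs
  time mkfile -v 6g /ufs/tmpfile

  # UFS again, mounted with forcedirectio
  umount /ufs
  mount -o forcedirectio /dev/dsk/c2t0d0s0 /ufs
  time mkfile -v 6g /ufs/tmpfile2

  # ZFS single-LUN pool
  zpool create tank c2t1d0
  time mkfile -v 6g /tank/tmpfile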
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Recommended Minimum Hardware for ZFS Fileserver?

2006-10-30 Thread Bart Smaalders

Wes Williams wrote:

Thanks gents for your replies.  I've used a very large-config W2100z and ZFS for
a while, but didn't know "how low can you go" for ZFS to shine, though a 64-bit
CPU seems to be the minimum performance threshold.

Now that Sun's store is [sort of] working again, I can see some X2100's with
custom configuration and a very low starting price of only $450 sans CPU,
drives, and memory.  Great!!

[b]If only we could get a basic X2100-ish designed, "custom build" priced
server from Sun that could also hold 3-5 drives internally[/b], I could see a bunch of
those being used as ZFS file servers.  This would also be a good price point for small
office and home users, since the X4100 is certainly overkill in this application, though
I wouldn't refuse one offered to me.  =)
 



I built my own, using essentially the same mobo (Tyan 2865).
The Ultra 20 is slightly different, but not enough to matter.

I put it in a case that would hold more drives and a larger
power supply, and I've got a nice home server w/ a TB of
disk (effective space 750GB).

Very simple and easy.  Right now I'm still using a single
disk for /, since I'm worried about safeguarding data, not
making sure I have max availability.

- Bart

--
Bart Smaalders  Solaris Kernel Performance
[EMAIL PROTECTED]   http://blogs.sun.com/barts
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Recommended Minimum Hardware for ZFS Fileserver?

2006-10-30 Thread Wes Williams
> Though there isn't a Sun "tower server" that fits
> your description, the Ultra-40
> can hold 4 3.5" drives (80, 250, or 500 GBytes).  You
> might actually prefer
> something designed for office use at home, rather
> than something designed for a
> data center.
>   http://www.sun.com/desktop/workstation/ultra40/specs.xml
>   -- richard
> _

Thanks for the reply Richard, though with a W2100z already I really don't need 
another powerhouse desktop machine, just something small and relatively low 
power to hide around the house for redundant I/O duty only.  

The Ultra 40 is a nice machine, but I'd be more apt to go for an Ultra 20 for 
this application, and I've already considered that route and been very happy 
with the others I've used.  I'm eyeing a W1100z at the moment; if I can get a 
nice price on it, then even though it's physically much larger than I would like 
for this application, I'll overlook that for the performance.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS thinks my 7-disk pool has imaginary disks

2006-10-30 Thread Rince
Hi all,

I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using the
following command:

# zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0

It worked fine, but I was slightly confused by the size yield (99 GB vs the
116 GB I had on my other RAID-Z1 pool of same-sized disks).

I thought one of the disks might have been to blame, so I tried swapping it
out - it turned out my replacement disk was a dud (zpool wasn't happy about
that, and eventually offlined the disk). Oh well, swap the old one back in,
no harm done.

Reboot, and ZFS informs me that I'm missing another, unrelated disk (c5t1d0
was the one I tried swapping out unsuccessfully - c5t3d0 is the one it
complained about, which I had swapped for another disk before any problems
began or any data was in the pool, with no problems - ZFS scrubbed and was
happy).

It continually claimed the device was unavailable and so the pool was in
degraded mode - attempting to replace the disk with itself yielded an error
about the disk being in use by the same pool which claimed the disk was
unavailable. Unmount the pool, same error persists; zpool replace continues
to give that error, despite repeated "zpool offline magicant c5t3d0" followed
by "zpool online" [etc].

I try exporting and re-importing the pool - the export went fine. The import
threw the confusing error which is the point of this email:

# zpool import
  pool: magicant
    id: 3232403590553596936
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

        magicant    UNAVAIL   missing device
          raidz1    ONLINE
            c5t0d0  ONLINE
            c5t1d0  ONLINE
            c5t2d0  ONLINE
            c5t3d0  ONLINE
            c5t4d0  ONLINE
            c5t5d0  ONLINE
            c5t6d0  ONLINE

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.

So, to summarize:

- 7-SCSI-disk raidz1 zpool is created
- c5t3d0 is swapped out for another disk of identical size; ZFS is happy and
  functions fine after a scrub
- c5t1d0 is swapped out for another disk of identical size (which happened to
  be a dud); Solaris didn't like that, so I put the original back in and
  rebooted
- On boot, zpool claims c5t3d0 is unavailable, while format and cfgadm both
  agree that the disk still exists and is dandy. "zpool replace magicant
  c5t3d0 c5t3d0" claims it's in use by that pool; "zpool offline magicant
  c5t3d0" followed by "zpool online magicant c5t3d0" doesn't help. "zpool
  export magicant" worked, but then "zpool import magicant" threw the above
  error.

Is this a bug, or am I missing something obvious?

snv 44, x86.

- Rich
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Recommended Minimum Hardware for ZFS Fileserver?

2006-10-30 Thread Richard Elling - PAE

Wes Williams wrote:
Thanks gents for your replies.  I've used a very large-config W2100z and ZFS
for a while, but didn't know "how low can you go" for ZFS to shine, though a 64-bit
CPU seems to be the minimum performance threshold.


Now that Sun's store is [sort of] working again, I can see some X2100's with
custom configuration and a very low starting price of only $450 sans CPU, drives, and
memory.  Great!!


[b]If only we could get a basic X2100-ish designed, "custom build" priced server
from Sun that could also hold 3-5 drives internally[/b], I could see a bunch of
those being used as ZFS file servers.  This would also be a good price point for
small office and home users, since the X4100 is certainly overkill in this application,
though I wouldn't refuse one offered to me.  =)


Though there isn't a Sun "tower server" that fits your description, the Ultra-40
can hold 4 3.5" drives (80, 250, or 500 GBytes).  You might actually prefer
something designed for office use at home, rather than something designed for a
data center.
http://www.sun.com/desktop/workstation/ultra40/specs.xml
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] thousands of ZFS file systems

2006-10-30 Thread Erblichs
Hi,

My suggestion is to direct any command output that may print
thousands of lines to a file.

I have not tried that number of FSs. So, my first
suggestion is to have a lot of physical memory installed.

The second item that I could be concerned with is
path translation going through a lot of mount points.
I think I remember that in some old code there was
a limit of 256 mount points in a path. I don't
know if it still exists.

Mitchell Erblich
-



> Rafael Friedlander wrote:
> 
> Hi,
> 
> An IT organization needs to implement a highly available file server,
> using Solaris 10, SunCluster, NFS and Samba. We are talking about
> thousands, even tens of thousands of ZFS file systems.
> 
> Is this doable? Should I expect any impact on performance or stability
> due to the fact I'll have that many mounted filesystems, with
> everything implied from that fact ('df | wc -l' with thousands of
> lines of result, for instance)?
> 
> Thanks,
> 
> Rafael.
> --
> 
> ---
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: copying a large file..

2006-10-30 Thread Matthew Ahrens

Jeremy Teo wrote:

This is the same problem described in
6343653 : want to quickly "copy" a file from a snapshot.


Actually it's a somewhat different problem.  "Copying" a file from a 
snapshot is a lot simpler than "copying" a file from a different 
filesystem.  With snapshots, things are a lot more constrained, and we 
already have the infrastructure for a filesystem referencing the same 
blocks as its snapshots.


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: [osol-discuss] Cloning a disk w/ ZFS in it

2006-10-30 Thread Asif Iqbal

On 10/30/06, Asif Iqbal <[EMAIL PROTECTED]> wrote:

On 10/20/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Asif Iqbal wrote:
> > On 10/20/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> >> Asif Iqbal wrote:
> >> > Hi
> >> >
> >> > I have a X2100 with two 74G disks. I build the OS on the first disk
> >> > with slice0 root 10G ufs, slice1 2.5G swap, slice6 25MB ufs and slice7
> >> > 62G zfs. What is the fastest way to clone it to the second disk. I
> >> > have to build 10 of those in 2 days. Once I build the disks I slam
> >> > them to the other X2100s and ship it out.
> >>
> >> if clone really means make completely identical then do this:
> >>
> >> boot off a CD or the network.
> >>
> >> dd if=/dev/dsk/  of=/dev/dsk/
> >>
> >> Where  and  are both locally attached.
> >
> > Will it catch the ZFS fs part? For some reason I thought dd is not aware
> > of ZFS
>
> dd isn't aware of ZFS, that is correct, but then dd isn't aware of UFS or
> VxFS or anything else either.  You are accessing the raw VTOC slices
> below what ZFS cares about.
>
> So yes it will include the ZFS pool.


After the dd I swapped it with the original disk, and I can log in fine.
But the zpool is complaining:

SUNW-MSG-ID: ZFS-8000-CS, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Mon Oct 30 12:11:44 MST 2006
PLATFORM: Sun Fire(TM) X2100  , CSN: 0634FU1003, HOSTNAME: host.domain.net
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 09bfd531-5ba3-ef5c-d35c-e85b379d46c7
DESC: A ZFS pool failed to open.  Refer to
http://sun.com/msg/ZFS-8000-CS for more information.
AUTO-RESPONSE: No automated response will occur.
IMPACT: The pool data is unavailable
REC-ACTION: Run 'zpool status -x' and either attach the missing device or
restore from backup.

Login incorrect
host.domain.net console login: root
Password:
Last login: Mon Oct 30 11:04:04 on console
Sun Microsystems Inc.   SunOS 5.10  Generic January 2005
# bash
bash-3.00# zpool status -x
pool: udns
state: FAULTED
status: One or more devices could not be opened.  There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        udns        UNAVAIL      0     0     0  insufficient replicas
          c1d0s7    UNAVAIL      0     0     0  cannot open



Here is how I fixed it:

zpool export udns
zpool import -f udns
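
For the archives, the whole clone-and-repair sequence was roughly the
following (device names are hypothetical; the copy is done while booted from
CD/network so neither disk is in use):

  # raw whole-disk copy from the source disk to the blank target
  dd if=/dev/rdsk/c1d0p0 of=/dev/rdsk/c2d0p0 bs=1024k

  # on the cloned box the pool can fail to open, presumably because the
  # device ID recorded in the pool configuration still refers to the
  # original disk; exporting and re-importing rebuilds it
  zpool export udns
  zpool import -f udns
  zpool status -x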



>
> --
> Darren J Moffat
>


--
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu




--
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Recommended Minimum Hardware for ZFS Fileserver?

2006-10-30 Thread Wes Williams
Thanks gents for your replies.  I've used a very large-config W2100z and ZFS
for a while, but didn't know "how low can you go" for ZFS to shine, though a
64-bit CPU seems to be the minimum performance threshold.

Now that Sun's store is [sort of] working again, I can see some X2100's with
custom configuration and a very low starting price of only $450 sans CPU,
drives, and memory.  Great!!

[b]If only we could get a basic X2100-ish designed, "custom build" priced
server from Sun that could also hold 3-5 drives internally[/b], I could see a
bunch of those being used as ZFS file servers.  This would also be a good price
point for small office and home users, since the X4100 is certainly overkill in
this application, though I wouldn't refuse one offered to me.  =)
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recommended Minimum Hardware for ZFS Fileserver?

2006-10-30 Thread Richard Elling - PAE

Wes Williams wrote:

I could use the list's help.

My goal:  Build a cheap ZFS file server with OpenSolaris on a UFS boot (for now)
10,000 rpm U320 SCSI drive, while having a ZFS pool in the same machine.  The ZFS
pool will either be a mirror or raidz setup consisting of either two or three
500GB 7,200 rpm SATA II drives.


I've been looking at building this setup in some cheap eBay rack-mount servers 
that are generally single or dual 1.0GHz Pentium III, 1Gb PC133 RAM, and I'd have 
to add the SATA II controller into a spare PCI slot.


Buy the "cheap server" for $20 + $20 shipping, or don't bother.
Throw away the P-III mobo and buy a 64-bit CPU + mobo for $100 or so.
Many of these also include SATA controllers, so you won't need to
purchase a separate SATA controller.  Add as much RAM as you can.
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: panic during recv

2006-10-30 Thread Gary Mitchell
I don't have the crashes anymore! What I did was explicitly set mountpoint=none
on the receiving pool, so that on the receiving side the filesystem is never
mounted. Now, this shouldn't make a difference. From what I saw before - and if
I've understood the documentation - when you do have the recv side mounted, then
when you do the zfs send (-i) | ... recv, the recv side unmounts, and when the
send/recv is complete the recv filesystem remounts. All I can say is that keeping
the recv side unmounted stopped the recv from causing a crash.  Got recv-crash
problems? Try it!
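
A minimal sketch of the workaround (pool, filesystem and snapshot names are
hypothetical):

  # keep the receiving filesystem unmounted so the recv never has to
  # unmount/remount it
  zfs set mountpoint=none backup/data

  # incremental replication from the sending host
  zfs send -i tank/data@snap1 tank/data@snap2 | \
      ssh backuphost zfs recv backup/data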
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Current status of a ZFS root

2006-10-30 Thread Richard Elling - PAE

[Richard removes his Sun hat...]

Ceri Davies wrote:

On Sun, Oct 29, 2006 at 12:01:45PM -0800, Richard Elling - PAE wrote:

Chris Adams wrote:
We're looking at replacing a current Linux server with a T1000 + a fiber 
channel enclosure to take advantage of ZFS. Unfortunately, the T1000 only 
has a single drive bay (!) which makes it impossible to follow our normal 
practice of mirroring the root file system; naturally the idea of using 
that big ZFS pool is appealing.
Note: the original T1000 had the single disk limit.  This was unfortunate, 
and a
sales inhibitor.  Today, you have the option of single (SATA) or dual (SAS) 
boot

disks, with hardware RAID.  See:
http://www.sun.com/servers/coolthreads/t1000/specs.xml


Good to know that this limit has been removed.  Can the original
T1000s be backfitted, or do I just need to be very careful what
I'm ordering now?


Yes, there is a part, XRA-SS2CG-73G10KZ, which has two 2.5" SAS disks
with bracket and cable.

The reason I took my Sun hat off is because the drive controller on the
T1000 motherboard is an LSI 1064-based SAS/SATA controller with 2 ports.
If you can figure out how to mount a second drive, then that will be the
hardest part of adding a drive to a single-SATA disk T1000.  Obviously,
such a modification would not be "supported" by Sun, unless you use the
XRA-SS2CG-73G10KZ.
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: [osol-discuss] Cloning a disk w/ ZFS in it

2006-10-30 Thread Asif Iqbal

On 10/20/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:

Asif Iqbal wrote:
> On 10/20/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
>> Asif Iqbal wrote:
>> > Hi
>> >
>> > I have a X2100 with two 74G disks. I build the OS on the first disk
>> > with slice0 root 10G ufs, slice1 2.5G swap, slice6 25MB ufs and slice7
>> > 62G zfs. What is the fastest way to clone it to the second disk. I
>> > have to build 10 of those in 2 days. Once I build the disks I slam
>> > them to the other X2100s and ship it out.
>>
>> if clone really means make completely identical then do this:
>>
>> boot off a CD or the network.
>>
>> dd if=/dev/dsk/  of=/dev/dsk/
>>
>> Where  and  are both locally attached.
>
> Will it catch the ZFS fs part? For some reason I thought dd is not aware
> of ZFS

dd isn't aware of ZFS, that is correct, but then dd isn't aware of UFS or
VxFS or anything else either.  You are accessing the raw VTOC slices
below what ZFS cares about.

So yes it will include the ZFS pool.



After the dd I swapped it with the original disk, and I can log in fine.
But the zpool is complaining:

SUNW-MSG-ID: ZFS-8000-CS, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Mon Oct 30 12:11:44 MST 2006
PLATFORM: Sun Fire(TM) X2100  , CSN: 0634FU1003, HOSTNAME: host.domain.net
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 09bfd531-5ba3-ef5c-d35c-e85b379d46c7
DESC: A ZFS pool failed to open.  Refer to
http://sun.com/msg/ZFS-8000-CS for more information.
AUTO-RESPONSE: No automated response will occur.
IMPACT: The pool data is unavailable
REC-ACTION: Run 'zpool status -x' and either attach the missing device or
   restore from backup.

Login incorrect
host.domain.net console login: root
Password:
Last login: Mon Oct 30 11:04:04 on console
Sun Microsystems Inc.   SunOS 5.10  Generic January 2005
# bash
bash-3.00# zpool status -x
pool: udns
state: FAULTED
status: One or more devices could not be opened.  There are insufficient
   replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
  see: http://www.sun.com/msg/ZFS-8000-D3
scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        udns        UNAVAIL      0     0     0  insufficient replicas
          c1d0s7    UNAVAIL      0     0     0  cannot open




--
Darren J Moffat




--
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recover zfs data from a crashed system?

2006-10-30 Thread senthil ramanujam

Thanks Robert, Michael.

I guess that has answered my question. I now have to do a couple
of experiments and get this under control. I will keep you posted if I
see something strange, which I hope I don't. ;o)

senthil


On 10/30/06, Michael Schuster <[EMAIL PROTECTED]> wrote:

senthil ramanujam wrote:
> Hi,
>
> I am trying to experiment with a scenario for which we would like to find a
> possible solution. Has anyone out there experienced or analyzed
> the scenario given below?

> Scenario: The system is attached to an array. The array type really
> doesn't matter, i.e., it can be a JBOD or a RAID array. Needless to
> say, ZFS is used to access the array. Note that the array is
> exclusively used to store data for the database.

> The question is: if the system crashes, can I still use the
> array (or rather the data) on a different system?


this should work by simply issuing "zpool import" on the "new" system (you may
need to add "-f"). As long as you don't reattach the crashed machine to the
storage, all should be fine.

HTH
--
Michael Schuster  +49 89 46008-2974 / x62974
visit the online support center:  http://www.sun.com/osc/

Recursion, n.: see 'Recursion'


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Recommended Minimum Hardware for ZFS Fileserver?

2006-10-30 Thread Jürgen Keil
> I've been looking at building this setup in some
> cheap eBay rack-mount servers that are generally
> single or dual 1.0GHz Pentium III, 1Gb PC133 RAM, and
> I'd have to add the SATA II controller into a spare
> PCI slot.
> 
> For maximum file system performance of the ZFS pool,
> would anyone care to offer hardware recommendations?

For maximum file system performance of the ZFS pool,
a 64-bit x86 cpu would be *much* better than a 32-bit x86 cpu.

A 32-bit CPU won't use more than ~512MB of RAM for
ZFS's ARC cache (no matter how much is installed in the
machine); a 64-bit CPU is able to use all of the
available RAM for ZFS's cache.
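
A quick way to check which case applies on a given box:

  # prints whether the running kernel is 32-bit or 64-bit; the ~512MB
  # ARC limit described above applies to the 32-bit kernel
  isainfo -kv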
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Recommended Minimum Hardware for ZFS Fileserver?

2006-10-30 Thread Robert Milkowski
Hello Wes,

Monday, October 30, 2006, 3:28:19 PM, you wrote:

WW> I could use the list's help.

WW> My goal:  Build a cheap ZFS file server with OpenSolairs on UFS
WW> boot (for now) 10,000 rpm U320 SCSI drive while having a ZFS pool
WW> in the same machine.  The ZFS pool will either be a mirror or
WW> raidz setup consisting of either two or three 500Gb 7,200 rpm SATA II 
drives.

WW> I've been looking at building this setup in some cheap eBay
WW> rack-mount servers that are generally single or dual 1.0GHz
WW> Pentium III, 1Gb PC133 RAM, and I'd have to add the SATA II
WW> controller into a spare PCI slot.

WW> For maximum file system performance of the ZFS pool, would anyone
WW> care to offer hardware recommendations?  Is this enough CPU and
WW> memory bandwidth to handle maximum realistic throughput of the
WW> SATA II drives with most of ZFS's features enabled?

WW> Thanks for any insights before I spend and find out the hard way!!

Notice that by default ZFS will use only about 512MB of memory for
caches on 32-bit hardware. You plan 1GB, which could be OK anyway (DNLC
caches, etc., so the kernel will consume 300-500MB anyway).

However, if you add more memory, ZFS won't use it.
Other than that, it will work.

When it comes to performance it depends on actual workload.

-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] thousands of ZFS file systems

2006-10-30 Thread Robert Milkowski




Hello Rafael,

Monday, October 30, 2006, 2:58:56 PM, you wrote:

> Hi,
>
> An IT organization needs to implement a highly available file server, using
> Solaris 10, SunCluster, NFS and Samba. We are talking about thousands, even
> tens of thousands of ZFS file systems.
>
> Is this doable? Should I expect any impact on performance or stability due
> to the fact I'll have that many mounted filesystems, with everything implied
> from that fact ('df | wc -l' with thousands of lines of result, for instance)?
>
> Thanks,

1. rebooting the server could take several hours right now with so many file systems
   I believe this problem is being addressed right now

2. each new fs, when mounted, consumes some memory - so you can end up with
   much of the memory consumed just by mounting file systems - something was
   done to fix this recently, but I haven't been following

3. backup - depending on the software you're going to use, it could be tricky
   (or not) to back up/restore so many file systems




-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Recommended Minimum Hardware for ZFS Fileserver?

2006-10-30 Thread Wes Williams
I could use the list's help.

My goal:  Build a cheap ZFS file server with OpenSolaris on a UFS boot (for now)
10,000 rpm U320 SCSI drive, while having a ZFS pool in the same machine.  The
ZFS pool will either be a mirror or raidz setup consisting of either two or
three 500GB 7,200 rpm SATA II drives.

I've been looking at building this setup in some cheap eBay rack-mount servers 
that are generally single or dual 1.0GHz Pentium III, 1Gb PC133 RAM, and I'd 
have to add the SATA II controller into a spare PCI slot.

For maximum file system performance of the ZFS pool, would anyone care to offer 
hardware recommendations?  Is this enough CPU and memory bandwidth to handle 
maximum realistic throughput of the SATA II drives with most of ZFS's features 
enabled?

Thanks for any insights before I spend and find out the hard way!!
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] thousands of ZFS file systems

2006-10-30 Thread Rafael Friedlander




Hi,

An IT organization needs to implement a highly available file server,
using Solaris 10, SunCluster, NFS and Samba. We are talking about
thousands, even tens of thousands of ZFS file systems.

Is this doable? Should I expect any impact on performance or stability
due to the fact I'll have that many mounted filesystems, with
everything implied from that fact ('df | wc -l' with thousands of lines
of result, for instance)?

Thanks, 

Rafael.

-- 
Rafael Friedlander
Solutions Architect
Sun Microsystems, Inc.
Phone 972 9 971-0564 (X10564)
Mobile 972 544 971-564
Fax 972 9 951-3467
Email [EMAIL PROTECTED]



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recover zfs data from a crashed system?

2006-10-30 Thread Michael Schuster

senthil ramanujam wrote:

Hi,

I am trying to experiment with a scenario for which we would like to find a
possible solution. Has anyone out there experienced or analyzed
the scenario given below?

Scenario: The system is attached to an array. The array type really
doesn't matter, i.e., it can be a JBOD or a RAID array. Needless to
say, ZFS is used to access the array. Note that the array is
exclusively used to store data for the database.

The question is: if the system crashes, can I still use the
array (or rather the data) on a different system?



this should work by simply issuing "zpool import" on the "new" system (you may 
need to add "-f"). As long as you don't reattach the crashed machine to the 
storage, all should be fine.


HTH
--
Michael Schuster  +49 89 46008-2974 / x62974
visit the online support center:  http://www.sun.com/osc/

Recursion, n.: see 'Recursion'
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recover zfs data from a crashed system?

2006-10-30 Thread Robert Milkowski
Hello senthil,

Monday, October 30, 2006, 1:12:28 PM, you wrote:

sr> Hi,

sr> I am trying to experiment with a scenario for which we would like to find a
sr> possible solution. Has anyone out there experienced or analyzed
sr> the scenario given below?

sr> Scenario: The system is attached to an array. The array type really
sr> doesn't matter, i.e., it can be a JBOD or a RAID array. Needless to
sr> say, ZFS is used to access the array. Note that the array is
sr> exclusively used to store data for the database.

sr> The question is: if the system crashes, can I still use the
sr> array (or rather the data) on a different system?

sr> Assume the system has crashed and it can't come up. Working with
sr> support to bring the system up and access the array is one way. My
sr> question is really: is it possible that the array can be detached
sr> from the failed system and attached to another system to get at the data,
sr> to reduce the downtime? To keep our discussion simpler, let's consider
sr> the target (good) system to be exactly similar to the source (failed)
sr> system.

sr> Would a ZFS snapshot or ZFS clone work? Any pointers/input would be
sr> greatly appreciated. Please don't hesitate to tell me to RTFM if it
sr> has a good solution. :o)

Of course it will work, just OOTB.
All you will have to do is manually import the pool(s) on the new system.

In case the array has a LUN masking feature, then array
reconfiguration will probably be needed. But other than that, it just works.

So, let's say you've got a SCSI JBOD connected to host A. Now host A is
down; you re-connect the JBOD to host B, do 'zpool import pool_a',
and that's it.

Now, if you do not use legacy mounts and use the sharenfs property instead
of /etc/dfs/dfstab, then you don't even have to worry about mountpoints,
fs parameters, NFS shares, etc.
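
A minimal sketch of that takeover (pool name "pool_a" as above; everything
else hypothetical):

  # on host B, after attaching the storage
  zpool import              # list pools found on the attached devices
  zpool import -f pool_a    # -f if host A went down without exporting

  # with the sharenfs property set on the filesystems, the NFS shares come
  # back automatically - no /etc/dfs/dfstab entries needed
  zfs get sharenfs pool_a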


-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] recover zfs data from a crashed system?

2006-10-30 Thread senthil ramanujam

Hi,

I am trying to experiment with a scenario for which we would like to find a
possible solution. Has anyone out there experienced or analyzed
the scenario given below?

Scenario: The system is attached to an array. The array type really
doesn't matter, i.e., it can be a JBOD or a RAID array. Needless to
say, ZFS is used to access the array. Note that the array is
exclusively used to store data for the database.

The question is: if the system crashes, can I still use the
array (or rather the data) on a different system?

Assume the system has crashed and it can't come up. Working with
support to bring the system up and access the array is one way. My
question is really: is it possible that the array can be detached
from the failed system and attached to another system to get at the data,
to reduce the downtime? To keep our discussion simpler, let's consider
the target (good) system to be exactly similar to the source (failed)
system.

Would a ZFS snapshot or ZFS clone work? Any pointers/input would be
greatly appreciated. Please don't hesitate to tell me to RTFM if it
has a good solution. :o)

thanks.

senthil
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Very high system loads with ZFS

2006-10-30 Thread Peter Guthrie
Thanks for the reply, 

I heard separately that it's fixed in snv_52; I don't know if it'll be available
as a ZFS patch or in S10U3.

Pete
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: copying a large file..

2006-10-30 Thread Jeremy Teo

This is the same problem described in
6343653 : want to quickly "copy" a file from a snapshot.

On 10/30/06, eric kustarz <[EMAIL PROTECTED]> wrote:

Pavan Reddy wrote:
> This is the time it took to move the file:
>
> The machine is an Intel P4 with 512MB RAM.
>
> bash-3.00# time mv ../share/pav.tar .
>
> real1m26.334s
> user0m0.003s
> sys 0m7.397s
>
>
> bash-3.00# ls -l pav.tar
> -rw-r--r--   1 root root 516628480 Oct 29 19:30 pav.tar
>
>
> A similar move on my Mac OS X took this much time:
>
>
> pavan-mettus-computer:~/public pavan$ time mv pav.tar.gz ./Burn\ Folder.fpbf/
>
> real0m0.006s
> user0m0.001s
> sys 0m0.004s
>
> pavan-mettus-computer:~/public/Burn Folder.fpbf pavan$ ls -l pav.tar.gz
> -rw-r--r--   1 pavan  pavan  347758518 Oct 29 19:09 pav.tar.gz
>
> NOTE: The file size here is 347MB whereas the previous one was 516MB. But
still the time taken is huge in comparison.
>
> It's an x86 machine running Nevada build 51.  More info about the disk and
pool:
> bash-3.00# zpool list
> NAME                   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
> mypool                29.2G   789M  28.5G     2%  ONLINE  -
> bash-3.00# zfs list
> NAME                USED  AVAIL  REFER  MOUNTPOINT
> mypool              789M  28.0G  24.5K  /mypool
> mypool/nas          789M  28.0G  78.8M  /export/home
> mypool/nas/pavan    710M  28.0G   710M  /export/home/pavan
> mypool/nas/rajeev  24.5K  28.0G  24.5K  /export/home/rajeev
> mypool/nas/share   24.5K  28.0G  24.5K  /export/home/share
>
> It took lots of time when I moved the file from /export/home/pavan/ to 
/export/home/share directory.

You're moving that file from one filesystem to another, so it will have
to copy all the data of the file in addition to just a few metadata
blocks.  If you mv it within the same filesystem it will be quick (as in
your OSX example).

eric

>
> I was not doing any other operation other than the move command.
>
> There are no files in that directories other than this one.
>
> It has  a 512MB Physical memory. The Mac machine has 1Gig RAM.
>
> No snapshots are taken.
>
> iostat and vmstat info:
> bash-3.00# iostat -x
>  extended device statistics
> devicer/sw/s   kr/s   kw/s wait actv  svc_t  %w  %b
> cmdk0 3.13.0  237.9  132.8  0.6  0.1  112.6   2   3
> fd0   0.00.00.00.0  0.0  0.0 3152.2   0   0
> sd0   0.00.00.00.0  0.0  0.00.0   0   0
> sd1   0.00.00.00.0  0.0  0.00.0   0   0
> bash-3.00# vmstat
>  kthr  memorypagedisk  faults  cpu
>  r b w   swap  free  re  mf pi po fr de sr cd f0 s0 s1   in   sy   cs us sy id
>  0 0 0 2083196 111668 5  25 13  3 13  0 44  6  0 -1 -0  296  276  231  1  2 97
>
>
> -Pavan
>
>
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
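
A quick way to see the difference eric describes, using the mountpoints from
the zfs list output above (file names are just illustrative):

  # within one filesystem: mv is a rename and returns almost instantly
  time mv /export/home/pavan/pav.tar /export/home/pavan/pav2.tar

  # across filesystems (pavan -> share): mv has to copy all of the data and
  # then unlink the source, so it takes as long as a full copy
  time mv /export/home/pavan/pav2.tar /export/home/share/pav.tar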




--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Current status of a ZFS root

2006-10-30 Thread Ceri Davies
On Sun, Oct 29, 2006 at 12:01:45PM -0800, Richard Elling - PAE wrote:
> Chris Adams wrote:
> >We're looking at replacing a current Linux server with a T1000 + a fiber 
> >channel enclosure to take advantage of ZFS. Unfortunately, the T1000 only 
> >has a single drive bay (!) which makes it impossible to follow our normal 
> >practice of mirroring the root file system; naturally the idea of using 
> >that big ZFS pool is appealing.
> 
> Note: the original T1000 had the single disk limit.  This was unfortunate, 
> and a
> sales inhibitor.  Today, you have the option of single (SATA) or dual (SAS) 
> boot
> disks, with hardware RAID.  See:
>   http://www.sun.com/servers/coolthreads/t1000/specs.xml

Good to know that this limit has been removed.  Can the original
T1000s be backfitted, or do I just need to be very careful what
I'm ordering now?

Ceri
-- 
That must be wonderful!  I don't understand it at all.
  -- Moliere


pgpSg5vXMreNE.pgp
Description: PGP signature
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re[2]: [zfs-discuss] Re: Re: Snapshots impact on performance

2006-10-30 Thread Robert Milkowski
Hello Jeff,

Monday, October 30, 2006, 2:03:52 AM, you wrote:

>> Nice, this is definitely pointing the finger more definitively.  Next 
>> time could you try:
>> 
>> dtrace -n '[EMAIL PROTECTED](20)] = count()}' -c 'sleep 5'
>> 
>> (just send the last 10 or so stack traces)
>> 
>> In the mean time I'll talk with our SPA experts and see if I can figure 
>> out how to fix this...

JB> By any chance is the pool fairly close to full?  The fuller it gets,
JB> the harder it becomes to find long stretches of free space.

Nope - at least 600GB free all the time in the pool.
Also it's not a quota - if I raise the quota for the file system (+100GB
for example) it doesn't help. If I remove the oldest snapshot it always
helps immediately. Sometimes I have to remove the two oldest snapshots.

-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss