Re[2]: [zfs-discuss] Can't remove corrupt file

2006-07-21 Thread Robert Milkowski
Hello Bill,

Friday, July 21, 2006, 7:31:25 AM, you wrote:

BM On Thu, Jul 20, 2006 at 03:45:54PM -0700, Jeff Bonwick wrote:
  However, we do have the advantage of always knowing when something
  is corrupted, and knowing what that particular block should have been. 
 
 We also have ditto blocks for all metadata, so that even if any block
 of ZFS metadata is destroyed, we always have another copy.
 Bill Moore describes ditto blocks in detail here:
 
 http://blogs.sun.com/roller/page/bill?entry=ditto_blocks_the_amazing_tape

BM Right.  And I should point out that if Eric had been running build 38 or
BM later, this data corruption would not have happened - it would have been
BM automatically repaired using ditto blocks (the bad block was a L2
BM indirect block - of which there would have been 2 copies).

However, something may be broken there, as I see CKSUM errors for ditto
blocks on a daily basis on two different servers (v240, T2000), and it's
hard to believe I have a hardware problem that hits only metadata
blocks. More at:

http://www.opensolaris.org/jive/thread.jspa?threadID=9846&tstart=0
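
For reference, a minimal way to keep an eye on these, assuming a pool
named "tank":

  zpool status -v tank            # per-vdev CKSUM counters, affected files
  fmdump -eV | grep -i checksum   # raw FMA ereports behind those counters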

-- 
Best regards,
 Robert  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs sucking down my memory!?

2006-07-21 Thread Casper . Dik

Bart Smaalders wrote:


 How much swap space is configured on this machine?

Zero. Is there any reason I would want to configure any swap space?


Yes.

In this particular case:

total: 213728k bytes allocated + 8896k reserved = 222624k used, 11416864k available

you have about 9MB of reserved memory, which is memory that is not
doing anything at all.

Then there is a lot of dirty data which is never used again and which
could be relegated to disk swap, if only there were some.

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] legato support

2006-07-21 Thread Rainer Orth
Anne Wong [EMAIL PROTECTED] writes:

 The EMC/Legato NetWorker (a.k.a. Sun StorEdge EBS) support for ZFS 
 NFSv4/ACLs will be in the NetWorker 7.3.2 release, currently targeted 
 for September.

Any word on equivalent support in VERITAS/Symantec NetBackup?

Rainer

-- 
-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] legato support

2006-07-21 Thread Gregory Shaw
I've been backing up ZFS with NetBackup 5.1 without issue.  I won't say it
does everything, but I am able to back up and restore individual files.

On Jul 21, 2006, at 7:08 AM, Rainer Orth wrote:

 Anne Wong [EMAIL PROTECTED] writes:

  The EMC/Legato NetWorker (a.k.a. Sun StorEdge EBS) support for ZFS
  NFSv4/ACLs will be in the NetWorker 7.3.2 release, currently targeted
  for September.

 Any word on equivalent support in VERITAS/Symantec NetBackup?

 Rainer

 -
 Rainer Orth, Faculty of Technology, Bielefeld University

-
Gregory Shaw, IT Architect
Phone: (303) 673-8273        Fax: (303) 673-8273
ITCTO Group, Sun Microsystems Inc.
1 StorageTek Drive MS 4382              [EMAIL PROTECTED] (work)
Louisville, CO 80028-4382               [EMAIL PROTECTED] (home)
"When Microsoft writes an application for Linux, I've Won." - Linus Torvalds
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] legato support

2006-07-21 Thread Rainer Orth
Gregory,

 I've been backing up ZFS with NetBackup 5.1 without issue.   I won't  
 say it does everything, but I am able to backup and restore  
 individual files.

I know: we're actually using 4.5 at the moment ;-)  My question was
specifically about ACL support.  I think the ZFS Admin Guide mentions two
CRs for this, one for Legato and another for NetBackup.

Rainer

-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can't remove corrupt file

2006-07-21 Thread Gregory Shaw
After reading the ditto blocks blog (good article, btw), an idea occurred
to me:

Since we use ditto blocks to preserve critical filesystem data, would it
be practical to add a filesystem property that would cause all files in a
filesystem to be stored as mirrored blocks?

That would allow a dual-copy behavior selectable on a filesystem boundary
even in a vdev pool.

That could be handy for those that have a little bit of critical data and
a lot of not-so-critical data.

On Jul 20, 2006, at 4:45 PM, Jeff Bonwick wrote:

  However, we do have the advantage of always knowing when something
  is corrupted, and knowing what that particular block should have been.

 We also have ditto blocks for all metadata, so that even if any block
 of ZFS metadata is destroyed, we always have another copy.
 Bill Moore describes ditto blocks in detail here:

 http://blogs.sun.com/roller/page/bill?entry=ditto_blocks_the_amazing_tape

 Jeff

-
Gregory Shaw, IT Architect
Phone: (303) 673-8273        Fax: (303) 673-8273
ITCTO Group, Sun Microsystems Inc.
1 StorageTek Drive MS 4382              [EMAIL PROTECTED] (work)
Louisville, CO 80028-4382               [EMAIL PROTECTED] (home)
"When Microsoft writes an application for Linux, I've Won." - Linus Torvalds
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re[2]: [zfs-discuss] Can't remove corrupt file

2006-07-21 Thread Robert Milkowski
Hello Gregory,

Friday, July 21, 2006, 3:22:17 PM, you wrote:

 After reading the ditto blocks blog (good article, btw), an idea
 occurred to me:

 Since we use ditto blocks to preserve critical filesystem data, would it
 be practical to add a filesystem property that would cause all files in
 a filesystem to be stored as mirrored blocks?

 That would allow a dual-copy behavior selectable on a filesystem
 boundary even in a vdev pool.

 That could be handy for those that have a little bit of critical data
 and a lot of not-so-critical data.

IIRC that's already planned.

--
Best regards,
 Robert  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] legato support

2006-07-21 Thread Luc I. Suryo

as promised: not working...

only the directories are getting created/backed up...


  zici:/export/projects/.zfs/snapshot: No full backups of this save set were
  found in the media database; performing a full backup
* zici:/export/projects/.zfs/snapshot save: Unable to read ACL
* information for `/export/projects/': Operation not applicable
* zici:/export/projects/.zfs/snapshot save: RPC error: RPC cannot
* encode arguments
* zici:/export/projects/.zfs/snapshot save: save of connecting
* directories failed
  zici: /export/projects/.zfs/snapshot level=full,  3 KB 00:00:03 10 files


so for fun I'm testing this now:

 zfs set sharenfs=ro=zici,root=zici projects
 mount -o ro,vers=4 zici:/export/projects /export/backup

then added the save-set /export/backup in Legato; we'll see what happens..

mount shows:

/export/backup on zici:/export/projects remote/read 
only/setuid/devices/largefiles/vers=4/xattr/dev=4d80008 on Fri Jul 21 08:09:45 
2006


note: /export/projects is the mount-point for the project zfs-pool

should know by tomorrow... or maybe I'll start the backup this
afternoon 

-ls

 
 Do you have ACLs you need to maintain?  Can you just specify a snapshot
 as a saveset directly?

   Well, we're not (yet) worried about the ACLs as long as we have a
   backup, using zfs send/receive of the snapshot to a single tar and
   then to tape..

  I meant, rather than tarring it up, can you just pass the snapshot mount
  point to Networker as a saveset?

  Does Networker error when you give it a ZFS filesystem or snapshot as a
  saveset (not counting ACL warnings)?

 i will have to try.. let me do this now and report tomorrow
 I'll add /export/projects/.zfs/snapshot to the save set..

 and btw: Networker 7.1.2 build 325... (we have 7.2 and 7.3 but never
 upgraded since it works ...)
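
(For reference, the send-to-tape approach mentioned above might look
roughly like this, using the pool name from this thread; the snapshot name
and staging path are placeholders:

  zfs snapshot projects@nightly
  zfs send projects@nightly > /var/tmp/projects-nightly.zfs

with the stream file then written to tape as usual.)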
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs sucking down my memory!?

2006-07-21 Thread Bart Smaalders

Joseph Mocker wrote:

Bart Smaalders wrote:



How much swap space is configured on this machine?


Zero. Is there any reason I would want to configure any swap space?

 --joe


Well, if you want to allocate 500 MB in /tmp, and your machine
has no swap, you need 500M of physical memory or the write
_will_ fail.

W/ no swap configured, every allocation in every process (any
malloc'd memory, etc.) is locked into RAM.

I just swap on a zvol w/ my ZFS root machine.

- Bart

--
Bart Smaalders  Solaris Kernel Performance
[EMAIL PROTECTED]   http://blogs.sun.com/barts
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can't remove corrupt file

2006-07-21 Thread Bill Moore
On Fri, Jul 21, 2006 at 07:22:17AM -0600, Gregory Shaw wrote:
 After reading the ditto blocks blog (good article, btw), an idea  
 occurred to me:
 
 Since we use ditto blocks to preserve critical filesystem data, would  
 it be practical to add a filesystem property that would cause all  
 files in a filesystem to be stored as mirrored blocks?

Yep, that's the plan.  I even mention it in the blog.  :)
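
For the curious, a hypothetical command line for such a per-filesystem
property (the property name and syntax here are guesses, not a committed
interface):

  zfs set copies=2 tank/important   # keep two copies of every block here
  zfs get copies tank/important

The bulky, not-so-critical datasets elsewhere in the pool would stay
single-copy.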


--Bill
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs sucking down my memory!?

2006-07-21 Thread Darren Reed

Bart Smaalders wrote:


...

I just swap on a zvol w/ my ZFS root machine.



I haven't been watching...what's the current status of using
ZFS for swap/dump?

Is a/the swap solution to use mkswap and then specify that file
in vfstab?

Darren

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs sucking down my memory!?

2006-07-21 Thread Bill Moore
On Sat, Jul 22, 2006 at 12:44:16AM +0800, Darren Reed wrote:
 Bart Smaalders wrote:
 
 I just swap on a zvol w/ my ZFS root machine.
 
 
 I haven't been watching...what's the current status of using
 ZFS for swap/dump?
 
 Is a/the swap solution to use mkswap and then specify that file
 in vfstab?

ZFS currently supports swap, but not dump.  For swap, just make a zvol
and add that to vfstab.
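
For example, a minimal sketch (pool name and size are placeholders):

  zfs create -V 2g tank/swapvol          # carve a 2 GB zvol out of the pool
  swap -a /dev/zvol/dsk/tank/swapvol     # start swapping on it now
  swap -l                                # verify

plus the matching /etc/vfstab entry so it comes back at boot:

  /dev/zvol/dsk/tank/swapvol  -  -  swap  -  no  -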


--Bill
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs sucking down my memory!?

2006-07-21 Thread Joseph Mocker

Bart Smaalders wrote:

Joseph Mocker wrote:

Bart Smaalders wrote:



How much swap space is configured on this machine?


Zero. Is there any reason I would want to configure any swap space?

 --joe


Well, if you want to allocate 500 MB in /tmp, and your machine
has no swap, you need 500M of physical memory or the write
_will_ fail.

W/ no swap configured, every allocation in every process (any
malloc'd memory, etc.) is locked into RAM.

Yep. Understood. In the interest of performance we typically run w/o
swap. Is there a way to tune the system so that swap is used only when
RAM is full? We've run w/o swap for so long (since 2.7 or 2.8) that we've
not kept up with any advances in the kernel's swapping algorithms.


I just swap on a zvol w/ my ZFS root machine.

Interesting. Doesn't ZFS have more overhead in this context than a
traditional raw partition? Though I suppose you get a better guarantee of
data integrity.


 --joe
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs sucking down my memory!?

2006-07-21 Thread Casper . Dik

We've kind of sidetracked, but yes, I do understand the limitations of
running without swap. However, in the interest of performance, I, and in
fact my whole organization, which runs about 300 servers, disable swap.
We've never had an out-of-memory problem in the past because of kernel
memory. Is that wrong? We typically can't afford to have the kernel swap
out portions of the application to disk and back.

Why do you think your performance *improves* if you don't use
swap?  It is much more likely it *deteriorates*, because your swap
accumulates stuff you do not use.

At any rate, I don't think adding swap will fix the problem I am seeing 
in that ZFS is not releasing its unused cache when applications need it. 
Adding swap might allow the kernel to move it out of memory but when the 
system needs it again it will have to swap it back in, and only 
performance suffers, no?

Well, you have decided that all application data needs to be memory
resident all of the time, but that executables don't need to be (they
are now tossed out on memory shortage) and that ZFS can use less cache
than it wants.

FWIW, here's the current ::memstat and swap output for my system. The 
reserved number is only about 46M or about 2% of RAM. Considering the 
box has 3G, I'm willing to sacrifice 2% in the interest of performance.

Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     249927              1952   64%
Anon                        34719               271    9%
Exec and libs                2415                18    1%
Page cache                   1676                13    0%
Free (cachelist)            11796                92    3%
Free (freelist)             88288               689   23%

Total                      388821              3037
Physical                   382802              2990

[EMAIL PROTECTED]: swap -s
total: 260008k bytes allocated + 47256k reserved = 307264k used, 381072k available

So there's 47MB of memory which is not used at all.  (Adding swap will
give you 47MB of additional free memory without anything being written
to disk).  Execs are also pushed out on shortfall.

There is 265 MB of anon memory and we have no clue how much of it
is used at all; a large percentage is likely unused.

But OTOH, you have sufficient memory on the freelist so there is not
much of an issue.

Casper
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs sucking down my memory!?

2006-07-21 Thread Joseph Mocker

[EMAIL PROTECTED] wrote:
We've kind of sidetracked, but yes, I do understand the limitations of
running without swap. However, in the interest of performance, I, and in
fact my whole organization, which runs about 300 servers, disable swap.
We've never had an out-of-memory problem in the past because of kernel
memory. Is that wrong? We typically can't afford to have the kernel swap
out portions of the application to disk and back.



Why do you think your performance *improves* if you don't use
swap?  It is much more likely it *deteriorates*, because your swap
accumulates stuff you do not use.

  


Are you trying to convince me that having applications/application data 
occasionally swapped out to disk is actually faster than keeping it all 
in memory?


I have another box, which I LU'd to U1 a while ago. It's actually my
primary desktop, a 2100z. After the upgrade I noticed my browser,
firefox, was running slower. It was sluggish to respond when, say, I moved
from reading my mail with thunderbird to firefox.


I looked at swap and, wait a minute, LU had switched on an inactive swap
partition I had disabled long ago.


Removed the swap partition, and now everything is quite snappy.

The question really becomes: how do I pin desirable applications in
memory while only allowing dirty memory to be shifted out to disk?


And still, regardless of the swap issue, the bigger issue is that ZFS has
about 1G of memory it won't free up for applications. Is it relying on
the existence of swap to dump those pages out? Or should it be releasing
memory itself?


 --joe

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs sucking down my memory!?

2006-07-21 Thread Rainer Orth
Bill Moore [EMAIL PROTECTED] writes:

 On Sat, Jul 22, 2006 at 12:44:16AM +0800, Darren Reed wrote:
  Bart Smaalders wrote:
  
  I just swap on a zvol w/ my ZFS root machine.
  
  
  I haven't been watching...what's the current status of using
  ZFS for swap/dump?
  
  Is a/the swap solution to use mkswap and then specify that file
  in vfstab?
 
 ZFS currently support swap, but not dump.  For swap, just make a zvol
 and add that to vfstab.

There are two caveats, though: 

* Before SXCR b43, you'll need the fix from CR 6405330 so the zvol is added
  after a reboot.  The fix hasn't been backported to S10 U2 (yet?), so it
  is equally affected.

* A Live Upgrade comments out the zvol entry in /etc/vfstab, so you (sort
  of) lose swap after an upgrade ;-(
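
  (The LU one is at least easy to recover from: un-comment the zvol line
  in /etc/vfstab, then, with a placeholder zvol path,

    swap -a /dev/zvol/dsk/tank/swapvol
    swap -l    # confirm it's back

  gets you your swap back without a reboot.)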

Rainer

-- 
-
Rainer Orth, Faculty of Technology, Bielefeld University
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs sucking down my memory!?

2006-07-21 Thread Jim Mauro


I need to read through this more thoroughly to get my head around it, but
on my first pass, what jumps out at me is that something significant
_changed_ in terms of application behavior with the introduction of ZFS.

I'm not saying that's a bad thing or a good thing, but it is an important
thing, and we should try to understand whether application behavior will,
in general, change with the introduction of ZFS, so we can advise users
accordingly.

Joe appears to have been a user of Sun systems for some time, with a lot
of experience deploying Solaris 8 and Solaris 9. He has successfully
deployed systems without physical swap, and I understand his reason for
doing so. If the introduction of Solaris 10 and ZFS means we need to
change a system parameter, such as configured swap, when transitioning
from S8 or S9, we need to understand why, and make sure we understand the
performance implications.

 Why do you think your performance *improves* if you don't use
 swap?  It is much more likely it *deteriorates* because your swap
 accumulates stuff you do not use.

I'm not sure what this is saying, but I don't think it came out right.

As I said, I need to do another pass on the information in the messages
to get a better handle on the observed behaviour, but this certainly
seems like something we should explore further.

Watch this space.

/jim

  
At any rate, I don't think adding swap will fix the problem I am seeing 
in that ZFS is not releasing its unused cache when applications need it. 
Adding swap might allow the kernel to move it out of memory but when the 
system needs it again it will have to swap it back in, and only 
performance suffers, no?



Well, you have decided that all application data needs to be memory
resident all of the time; but executables don't need to be (they
are now tossed out on memory shortage) and that ZFS can use less cache
than it wants to.

  
FWIW, here's the current ::memstat and swap output for my system. The 
reserved number is only about 46M or about 2% of RAM. Considering the 
box has 3G, I'm willing to sacrifice 2% in the interest of performance.


Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     249927              1952   64%
Anon                        34719               271    9%
Exec and libs                2415                18    1%
Page cache                   1676                13    0%
Free (cachelist)            11796                92    3%
Free (freelist)             88288               689   23%

Total                      388821              3037
Physical                   382802              2990

[EMAIL PROTECTED]: swap -s
total: 260008k bytes allocated + 47256k reserved = 307264k used, 381072k available



So there's 47MB of memory which is not used at all.  (Adding swap will
give you 47MB of additional free memory without anything being written
to disk).  Execs are also pushed out on shortfall.

There is 265 MB of anon memory and we have no clue how much of it
is used at all; a large percentage is likely unused.

But OTOH, you have sufficient memory on the freelist so there is not
much of an issue.

Casper
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
  

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs sucking down my memory!?

2006-07-21 Thread Casper . Dik

Are you trying to convince me that having applications/application data 
occasionally swapped out to disk is actually faster than keeping it all 
in memory?

Yes.  Having more memory available generally causes the
system to be faster.

I have another box, which I LU'd to U1 a while ago. Its actually my 
primary desktop, a 2100z. After the upgrade I noticed my browser, 
firefox, was running slower. It was sluggish to respond when say I moved 
from reading my mail with thunderbird to firefox.

Then that's a bug because something expunged the application when it
shouldn't have.

If you have enough memory, you should never swap.

Casper
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs sucking down my memory!?

2006-07-21 Thread Roch

I just ran:

[EMAIL PROTECTED](129): mkfile 5000M f3
Could not set length of f3: No space left on device


Which fails in anon_resvmem:

dtrace -n 'fbt::anon_resvmem:return/arg1==0/{@[stack(20)]=count()}'

  tmpfs`tmp_resv+0x50
  tmpfs`wrtmp+0x28c
  tmpfs`tmp_write+0x50
  genunix`fop_write+0x20
  genunix`write+0x270
  unix`syscall_trap32+0xcc
1

Which could then be:

4034947 anon_swap_adjust(), anon_resvmem() should call kmem_reap() if availrmem 
is low.

FixedInBuild: snv_42


But it is a best practice to run ZFS with some
swap; I actually don't know exactly why, but
possibly to account for bugs such as this one.

-r


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs sucking down my memory!?

2006-07-21 Thread Joseph Mocker

Ah ha. Interesting procedure and bug report. This is starting to make sense.
Another interesting bug report:

   6416757 zfs could still use less memory

This one is more or less the same thing I have noticed.

I guess I'll add some swap for the short term. :-(

 --joe

Roch wrote:

I just ran:

[EMAIL PROTECTED](129): mkfile 5000M f3
Could not set length of f3: No space left on device


Which fails in anon_resvmem:

dtrace -n 'fbt::anon_resvmem:return/arg1==0/{@[stack(20)]=count()}'

  tmpfs`tmp_resv+0x50
  tmpfs`wrtmp+0x28c
  tmpfs`tmp_write+0x50
  genunix`fop_write+0x20
  genunix`write+0x270
  unix`syscall_trap32+0xcc
1

Which could then be:

4034947 anon_swap_adjust(), anon_resvmem() should call kmem_reap() if availrmem 
is low.

FixedInBuild: snv_42


But it is a best practice to run ZFS with some
swap; I actually don't know exactly why, but
possibly to account for bugs such as this one.


-r


  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Good 8 or 16 port x86 PCI SATA card

2006-07-21 Thread Shannon Roddy
Hi All,

I have looked on the HCL list for Sol 10 x86 without much luck.  I am
looking for an 8- or 16-port SATA card for a JBOD Sol 10 x86 ZFS
installation.  Anyone know of one that is well supported in Sol 10?  I
am starting to do some testing with an LSI Logic 320-XLP SATA RAID card,
but so far as I can tell, it does not want to do JBOD.  For several
reasons, I would rather have ZFS handle the RAID.

Any recommendations would be appreciated.  I have a 16-bay case with
triple-redundant power supplies here that I would really like to use with
ZFS.  Unfortunately my CPU is 32-bit, so that may have to change.

Thanks,
Shannon

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Good 8 or 16 port x86 PCI SATA card

2006-07-21 Thread Casper . Dik

Hi All,

I have looked on the HCL list for Sol 10 x86 without much luck.  I am
looking for an 8- or 16-port SATA card for a JBOD Sol 10 x86 ZFS
installation.  Anyone know of one that is well supported in Sol 10?  I
am starting to do some testing with an LSI Logic 320-XLP SATA RAID card,
but so far as I can tell, it does not want to do JBOD.  For several
reasons, I would rather have ZFS handle the RAID.

Any recommendations would be appreciated.  I have a 16 bay triple
redundant PS case here that I would really like to use with ZFS.
Unfortunately my CPU is 32 bit, so that may have to change.


You probably want something like this:

http://cooldrives.stores.yahoo.net/8-channel-8-port-sata-pci-card.html

And you want PCI-X, not PCI (one 3 Gb/s SATA port can nearly saturate
plain PCI).
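
Back-of-the-envelope, for reference:

  32-bit/33 MHz PCI:  33 MHz x 4 bytes ~= 133 MB/s, shared by the whole bus
  SATA 3 Gb/s:        3 Gb/s x 8/10    ~= 300 MB/s per port (8b/10b coding)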

Casper
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Good 8 or 16 port x86 PCI SATA card

2006-07-21 Thread Al Hopper
On Fri, 21 Jul 2006, Shannon Roddy wrote:

 Hi All,

 I have looked on the HCL list for Sol 10 x86 without much luck.  I am
 looking for a 8 or 16 port SATA card for a JBOD Sol 10 x86 ZFS
 installation.  Anyone know of one that is well supported in Sol 10?  I
 am starting to do some testing with an LSI Logic 320-XLP SATA RAID card,
 but so far as I can tell, it does not want to do JBOD.  For several
 reasons, I would rather have ZFS handle the RAID.

 Any recommendations would be appreciated.  I have a 16 bay triple
 redundant PS case here that I would really like to use with ZFS.
 Unfortunately my CPU is 32 bit, so that may have to change.

The newer version of the SuperMicro 8-port card works well:

http://www.supermicro.com/products/accessories/addon/AoC-SAT2-MV8.cfm

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
OpenSolaris Governing Board (OGB) Member - Feb 2006
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Big JBOD: what would you do?

2006-07-21 Thread Lance
 This gives a nice bias towards one of the following
 configurations:
 
   - 5x(7+2), 1 hot spare, 17.5TB [corrected]
   - 4x(9+2), 2 hot spares, 18.0TB
   - 6x(5+2), 4 hot spares, 15.0TB

In addition to Eric's suggestions, I would be interested in these configs
for 46 disks:

  5 x (8+1)     1 hot spare     20.0 TB
  4 x (10+1)    2 hot spares    20.0 TB
  6 x (6+1)     4 hot spares    18.0 TB

In a few cases, we might want more space rather than 2-disk parity.  Thanks.
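
(For reference, the capacities above assume the ~500 GB drives implied by
Eric's numbers: e.g. 5 x (8+1) raidz gives 5 x 8 = 40 data disks, and
40 x 0.5 TB = 20.0 TB; the other rows work out the same way.)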
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Boot hangs

2006-07-21 Thread Stephen Hahn
* Karen Chau [EMAIL PROTECTED] [2006-07-21 19:09]:
 Hi,
 Our server is hung at boot up.  I tried boot -s; it hangs at the same place.
 
 *** SUMMARY of behavior ***
 
 Using boot, it hangs after displaying line:
 Hostname: itsm-mpk-2
 
 So tried using boot -v to show more detail. It now
 hangs after 4 more lines are displayed:
 
 - BEGIN HERE -
 px_pci1 is /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]
 PCI-device: [EMAIL PROTECTED], px_pci7
 px_pci7 is /[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]
 dump on /dev/dsk/c1t0d0s1 size 2100 MB
 Hostname: itsm-mpk-2
 pseudo-device: zfs0
 zfs0 is /pseudo/[EMAIL PROTECTED]
 pseudo-device: dtrace0
 dtrace0 is /pseudo/[EMAIL PROTECTED]
 - END HERE -
 
 I'm suspecting this might be related to ZFS.  Is there a way to disable
 ZFS at boot up??

  Use

  boot -m milestone=none

  to have startup cease prior to any services being started.  You can
  then either use svcadm milestone all, and watch startup from your
  shell, or enable services individually. 
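
  For reference, the sequence might look like this on a SPARC box (on x86
  the same -m option goes on the kernel boot line):

    ok boot -m milestone=none
    console login: root
    # svcadm milestone all    # resume normal startup, watching the console

  If ZFS really is the culprit, moving /etc/zfs/zpool.cache aside before
  resuming should keep the pools from being imported at boot; they can be
  re-imported later with zpool import.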

  - Stephen

-- 
Stephen Hahn, PhD  Solaris Kernel Development, Sun Microsystems
[EMAIL PROTECTED]  http://blogs.sun.com/sch/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss