Re: [zfs-discuss] Disabling COMMIT at NFS level, or disabling ZIL on a per-filesystem basis

2008-10-23 Thread Constantin Gonzalez
Hi,

>> - The ZIL exists on a per filesystem basis in ZFS. Is there an RFE
>>   already that asks for the ability to disable the ZIL on a per
>>   filesystem basis?
> 
> Yes: 6280630 zil synchronicity

good, thanks for the pointer!

> Though personally I've been unhappy with the exposure that zil_disable 
> has got.
> It was originally meant for debug purposes only. So providing an official
> way to make synchronous behaviour asynchronous is to me dangerous.

IMHO, the need here is to give admins control over how they want their
file servers to behave. In this particular case, the admin argues that he
knows what he's doing, that he doesn't want his NFS server to provide
stronger guarantees than a local filesystem would, and that he deserves
control over that behaviour.

Ideally, there would be an NFS option that lets customers choose whether they
want to honor COMMIT requests or not.

Disabling the ZIL on a per-filesystem basis is only the second-best solution,
but since that CR already exists, it seems the more realistic route.
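
For completeness: the only knob available today is the global, unsupported
zil_disable tunable, which affects every pool on the host. A sketch of how
it is typically set (debug use only):

  # in /etc/system, effective at next boot:
  set zfs:zil_disable = 1

  # or on a live system, assuming mdb -kw is permitted:
  echo zil_disable/W0t1 | mdb -kw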

Thanks,
Constantin


-- 
Constantin Gonzalez  Sun Microsystems GmbH, Germany
Principal Field Technologist          http://blogs.sun.com/constantin
Tel.: +49 89/4 60 08-25 91   http://google.com/search?q=constantin+gonzalez

Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] (ZFS) file corruption with HAStoragePlus

2008-10-23 Thread Armin Ollig
Good morning,

I am experiencing file corruption on a ZFS filesystem in a two-node cluster.
The filesystem holds the disk image of a VirtualBox Windows guest instance.
It is placed in one resource group together with the GDS scripts which manage
the virtual machine startup and probe:

clresourcegroup create vb1 

clresource create -t SUNW.HAStoragePlus \
-g vb1 \
-p Zpools=vb1 \
-p AffinityOn=True vb1-storage 

clresource create -g vb1 -t SUNW.gds \
 [..]
-p stop_signal=9 -p Failover_enabled=true \
-p Resource_dependencies=vb1-storage vb1-vms

After some days of operation (and many failovers), the virtual disk image is
corrupted and the ZFS filesystem no longer mounts:

Oct 23 09:56:08 siegfried EVENT-TIME: Thu Oct 23 09:56:08 CEST 2008
Oct 23 09:56:08 siegfried PLATFORM: PowerEdge 1850, CSN: 9Z7MV1J, HOSTNAME: siegfried
Oct 23 09:56:08 siegfried SOURCE: zfs-diagnosis, REV: 1.0
Oct 23 09:56:08 siegfried EVENT-ID: 3e0a4051-cd05-cce8-b0bb-c4c165cc4fcc
Oct 23 09:56:08 siegfried DESC: The number of checksum errors associated with a ZFS device
Oct 23 09:56:08 siegfried exceeded acceptable levels.  Refer to http://sun.com/msg/ZFS-8000-GH for more information.
Oct 23 09:56:08 siegfried AUTO-RESPONSE: The device has been marked as degraded.  An attempt
Oct 23 09:56:08 siegfried will be made to activate a hot spare if available.
Oct 23 09:56:08 siegfried IMPACT: Fault tolerance of the pool may be compromised.
Oct 23 09:56:08 siegfried REC-ACTION: Run 'zpool status -x' and replace the bad device.

# zpool status -xv
  pool: vb1
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:
        NAME                             STATE     READ WRITE CKSUM
        vb1                              ONLINE       0     0     0
          c4t600D02300088824BC4228807d0  ONLINE       0     0     0
errors: Permanent errors have been detected in the following files:
/vb1/vb1/vhd/vb1_vhd1.vdi


SunOS Version: 5.11 snv_97 i86pc i386 i86pc
ClusterExpress Version: 08/20/2008 (build from source)
Storage: SAN LUNs via scsi_vhci

Any suggestions?
Best wishes,
 Armin
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disabling COMMIT at NFS level, or disabling ZIL on a per-filesystem basis

2008-10-23 Thread Constantin Gonzalez
Hi,

Bob Friesenhahn wrote:
> On Wed, 22 Oct 2008, Neil Perrin wrote:
>> On 10/22/08 10:26, Constantin Gonzalez wrote:
>>> 3. Disable ZIL[1]. This is of course evil, but one customer pointed out
>>> to me that if a tar xvf were writing locally to a ZFS file system, the
>>> writes wouldn't be synchronous either, so there's no point in forcing
>>> NFS users to have a better availability experience at the expense of
>>> performance.
> 
> The conclusion reached here is quite seriously wrong and no Sun 
> employee should suggest it to a customer.  If the system writing to a 

I'm not suggesting it to any customer. Actually, I argued quite a long time
with the customer, trying to convince him that "slow but correct" is better.

The conclusion above is a conscious decision by the customer. He says that he
does not want NFS to turn every write into a synchronous write; he's happy if
all writes are asynchronous, because in this case the NFS server is a
backup-to-disk device, and if power fails he simply restarts the backup
because he has the data in multiple copies anyway.

> local filesystem reboots then the applications which were running are 
> also lost and will see the new filesystem state when they are 
> restarted.  If an NFS server spontaneously reboots, the applications 
> on the many clients are still running and the client systems are using 
> cached data.  This means that clients could do very bad things if the 
> filesystem state (as seen by NFS) is suddenly not consistent.  One of 
> the joys of NFS is that the client continues unhindered once the 
> server returns.

Yes, we're both aware of this. In this particular situation, the customer
would restart his backup job (and thus the client application) in case the
server dies.

Thanks for pointing out the difference, this is indeed an important distinction.

Cheers,
   Constantin

-- 
Constantin Gonzalez  Sun Microsystems GmbH, Germany
Principal Field Technologist          http://blogs.sun.com/constantin
Tel.: +49 89/4 60 08-25 91   http://google.com/search?q=constantin+gonzalez

Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disabling COMMIT at NFS level, or disabling ZIL on a per-filesystem basis

2008-10-23 Thread Constantin Gonzalez
Hi,

yes, using slogs is the best solution.

Meanwhile, using mirrored slogs built from other servers' UPS-backed RAM
disks seems like an interesting idea, if that is deemed reliable enough for
the purposes of the NFS server.
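
As a rough sketch, assuming the two remote RAM disks are already exported
(e.g. via iSCSI) and visible locally under hypothetical device names:

  # attach a mirrored log vdev to the pool
  zpool add tank log mirror c9t0d0 c9t1d0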

Thanks for suggesting this!

Cheers,
Constantin

Ross wrote:
> Well, it might be even more of a bodge than disabling the ZIL, but how about:
> 
> - Create a 512MB ramdisk, use that for the ZIL
> - Buy a Micro Memory nvram PCI card for £100 or so.
> - Wait 3-6 months, hopefully buy a fully supported PCI-e SSD to replace the 
> Micro Memory card.
> 
> The ramdisk isn't an ideal solution, but provided you don't export the pool 
> with it offline, it does work.  We used it as a stop gap solution for a 
> couple of weeks while waiting for a Micro Memory nvram card.
> 
> Our reasoning was that our server's on a UPS and we figured if something 
> crashed badly enough to take out something like the UPS, the motherboard, 
> etc, we'd be losing data anyway.  We just made sure we had good backups in 
> case the pool got corrupted and crossed our fingers.
> 
> The reason I say wait 3-6 months is that there's a huge amount of activity 
> with SSD's at the moment.  Sun said that they were planning to have flash 
> storage launched by Christmas, so I figure there's a fair chance that we'll 
> see some supported PCIe cards by next Spring.
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Constantin Gonzalez  Sun Microsystems GmbH, Germany
Principal Field Technologist          http://blogs.sun.com/constantin
Tel.: +49 89/4 60 08-25 91   http://google.com/search?q=constantin+gonzalez

Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disabling COMMIT at NFS level, or disabling ZIL on a per-filesystem basis

2008-10-23 Thread Bob Friesenhahn
On Thu, 23 Oct 2008, Constantin Gonzalez wrote:
>
> Yes, we're both aware of this. In this particular situation, the customer
> would restart his backup job (and thus the client application) in case the
> server dies.

So it is ok for this customer if their backup becomes silently 
corrupted and the backup software continues running?  Consider that 
some of the backup files may have missing or corrupted data in the 
middle.  Your customer is quite dedicated in that he will monitor the 
situation very well and remember to reboot the backup system, correct 
any corrupted files, and restart the backup software whenever the 
server panics and reboots.

A properly built server should be able to handle NFS writes at 
gigabit wire-speed.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disabling COMMIT at NFS level, or disabling ZIL on a per-filesystem basis

2008-10-23 Thread Constantin Gonzalez
Hi,

Bob Friesenhahn wrote:
> On Thu, 23 Oct 2008, Constantin Gonzalez wrote:
>>
>> Yes, we're both aware of this. In this particular situation, the customer
>> would restart his backup job (and thus the client application) in case 
>> the
>> server dies.
> 
> So it is ok for this customer if their backup becomes silently corrupted 
> and the backup software continues running?  Consider that some of the 
> backup files may have missing or corrupted data in the middle.  Your 
> customer is quite dedicated in that he will monitor the situation very 
> well and remember to reboot the backup system, correct any corrupted 
> files, and restart the backup software whenever the server panics and 
> reboots.

This is what the customer told me. He uses rsync and he is ok with restarting
the rsync whenever the NFS server restarts.

> A properly built server should be able to handle NFS writes at gigabit 
> wire-speed.

I'm advocating for a properly built system, believe me :).

Cheers,
Constantin

-- 
Constantin Gonzalez  Sun Microsystems GmbH, Germany
Principal Field Technologist          http://blogs.sun.com/constantin
Tel.: +49 89/4 60 08-25 91   http://google.com/search?q=constantin+gonzalez

Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
Amtsgericht Muenchen: HRB 161028
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disabling COMMIT at NFS level, or disabling ZIL on a per-filesystem basis

2008-10-23 Thread Bob Friesenhahn
On Thu, 23 Oct 2008, Constantin Gonzalez wrote:
>
> This is what the customer told me. He uses rsync and he is ok with restarting
> the rsync whenever the NFS server restarts.

Then remind your customer to tell rsync to inspect the data rather 
than trusting time stamps.  Rsync will then run quite a bit slower but 
at least it will catch a corrupted file.  There is still the problem 
that the client OS may have cached data which it thinks is correct but 
no longer matches what is on the server.  This may result in rsync 
making wrong decisions.
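
That would look something like this (paths hypothetical):

  # -c compares full-file checksums instead of trusting size and mtime
  rsync -ac /local/data/ /net/nfsserver/backup/data/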

A better approach is to run rsync on the server so that there is rsync 
to rsync communication rather than rsync to NFS.  This can result in 
far better performance, without the NFS synchronous write problem.

For my own backups, I initiate rsync on the server side and have a 
special secure rsync service set up on the clients so that the server 
sucks files from the clients.  This works very well and helps with 
administration because any error conditions will be noted in just one 
place.
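
One way to realize such a pull setup, as a sketch (module name and paths
hypothetical; the hardening details are left out here):

  # on each client, /etc/rsyncd.conf exports a read-only module
  # (served by running "rsync --daemon" on the client):
  [data]
      path = /export/data
      read only = yes

  # on the server, pull over the rsync protocol:
  rsync -a client1::data/ /backup/client1/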

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS scalability in terms of file system count (or lack thereof) in S10U6

2008-10-23 Thread Pramod Batni

On 10/23/08 08:19, Paul B. Henson wrote:
> On Tue, 21 Oct 2008, Pramod Batni wrote:
>>> Why does creating a new ZFS filesystem require enumerating all existing
>>> ones?
>>
>> This is to determine if any of the filesystems in the dataset are mounted.
>
> Ok, that leads to another question: why does creating a new ZFS filesystem
> require determining if any of the existing filesystems in the dataset are
> mounted :)? I could see checking the parent filesystems, but why the
> siblings?

I am not sure. All the checking is done as part of libshare's sa_init(),
which calls into sa_get_zfs_shares(). In any case, a bug can be filed on
this.

> Should I open a sun support call to request such a bug? I guess I should
> wait until U6 is released, I don't have support for SXCE...
>
> Thanks...

You could do that, or else I can open a bug for you citing the Nevada
build [b97] you are using.

Pramod
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (ZFS) file corruption with HAStoragePlus

2008-10-23 Thread Robert Milkowski
Hello Armin,

Thursday, October 23, 2008, 10:13:23 AM, you wrote:

AO> Good morning,

AO> I am experiencing file corruption on a ZFS filesystem in a two-node
AO> cluster. The filesystem holds the disk image of a VirtualBox Windows
AO> guest instance. It is placed in one resource group together with the
AO> GDS scripts which manage the virtual machine startup and probe:

AO> clresourcegroup create vb1 

AO> clresource create -t SUNW.HAStoragePlus \
AO> -g vb1 \
AO> -p Zpools=vb1 \
AO> -p AffinityOn=True vb1-storage 

AO> clresource create -g vb1 -t SUNW.gds \
AO>  [..]
AO> -p stop_signal=9 -p Failover_enabled=true \
AO> -p Resource_dependencies=vb1-storage vb1-vms

AO> After some days of operation (and many failovers), the virtual disk
AO> image is corrupted and the ZFS filesystem no longer mounts:

AO> Oct 23 09:56:08 siegfried EVENT-TIME: Thu Oct 23 09:56:08 CEST 2008
AO> Oct 23 09:56:08 siegfried PLATFORM: PowerEdge 1850, CSN: 9Z7MV1J, HOSTNAME: siegfried
AO> Oct 23 09:56:08 siegfried SOURCE: zfs-diagnosis, REV: 1.0
AO> Oct 23 09:56:08 siegfried EVENT-ID: 3e0a4051-cd05-cce8-b0bb-c4c165cc4fcc
AO> Oct 23 09:56:08 siegfried DESC: The number of checksum errors associated with a ZFS device
AO> Oct 23 09:56:08 siegfried exceeded acceptable levels.  Refer to http://sun.com/msg/ZFS-8000-GH for more information.
AO> Oct 23 09:56:08 siegfried AUTO-RESPONSE: The device has been marked as degraded.  An attempt
AO> Oct 23 09:56:08 siegfried will be made to activate a hot spare if available.
AO> Oct 23 09:56:08 siegfried IMPACT: Fault tolerance of the pool may be compromised.
AO> Oct 23 09:56:08 siegfried REC-ACTION: Run 'zpool status -x' and replace the bad device.

AO> # zpool status -xv
AO>   pool: vb1
AO>  state: ONLINE
AO> status: One or more devices has experienced an error resulting in data
AO> corruption.  Applications may be affected.
AO> action: Restore the file in question if possible.  Otherwise restore the
AO> entire pool from backup.
AO>    see: http://www.sun.com/msg/ZFS-8000-8A
AO>  scrub: none requested
AO> config:
AO> NAME STATE READ WRITE CKSUM
AO> vb1  ONLINE   0   0 0
AO>   c4t600D02300088824BC4228807d0  ONLINE   0   0 0
AO> errors: Permanent errors have been detected in the following files:
AO> /vb1/vb1/vhd/vb1_vhd1.vdi


AO> SunOS Version: 5.11 snv_97 i86pc i386 i86pc
AO> ClusterExpress Version: 08/20/2008 (build from source)
AO> Storage: SAN Luns via scsi_vhci

AO> Any suggestions?

If you can then try to get some kind of redundancy provided by ZFS
(mirror?). Looks like your controller/array/whatever corrupted some
data.
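
A sketch of adding that redundancy (the second LUN is a hypothetical
placeholder; attaching converts the single-disk vdev into a two-way mirror
and starts a resilver):

  zpool attach vb1 c4t600D02300088824BC4228807d0 <second-LUN>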

-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool cross mount

2008-10-23 Thread Laurent Burnotte
Hi experts,

A short question:

What happens if we have cross-zpool mounts?

Meaning:

zpool A -> should be mounted at /A
zpool B -> should be mounted at /A/B

=> Is there an automatic mechanism in ZFS during Solaris 10 boot that
prevents importing pool B (mounted at /A/B) before pool A has been imported,
or do we have to use legacy mounts and /etc/vfstab?
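
The legacy alternative I have in mind, roughly (assuming pool B's top-level
dataset is named B):

  zfs set mountpoint=legacy B

  # /etc/vfstab entry; vfstab ordering ensures /A is mounted first:
  B  -  /A/B  zfs  -  yes  -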

Regards,

Laurent


-- 
Burnotte Laurent, Sun Microsystems, System Support Engineer
Phone: (+352) 49113377




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool cross mount

2008-10-23 Thread Darren J Moffat
Laurent Burnotte wrote:
> Hi experts,
> 
> Short question
> 
> What happen if we have cross zpool mount ?
> 
> meaning :
> 
> zpool A -> should be mounted in /A
> zpool B -> should be mounted in /A/B

I have exactly that situation on my home system:

A is the boot/root pool (rpool) and B is my data pool (store).
/usr/local comes from pool B and is mounted on top of /usr, which comes
from pool A.
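
In dataset terms that layout is just (a sketch; the dataset name store/local
is assumed):

  # mount store/local at /usr/local, on top of rpool's /usr
  zfs set mountpoint=/usr/local store/local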

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool cross mount

2008-10-23 Thread Johan Hartzenberg
On Thu, Oct 23, 2008 at 4:49 PM, Laurent Burnotte
<[EMAIL PROTECTED]> wrote:

>
> => is there in zfs an automatic mechanism during solaris 10 boot that
> prevent the import of pool B ( mounted /A/B ) before trying to import A
> pool or do we have to legacy mount and file /etc/vfstab
>

This is fine if the pool from which /A is mounted is "guaranteed" to be
present, online, and have /A mounted.  Where /A is from the root pool, you
should be safe most of the time.

If not, set the canmount property of the "pool B /A/B" dataset to noauto,
otherwise it may get mounted without /A being mounted, which depending on
your situation can be a minor irritation or a serious problem.
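
A sketch, assuming pool B's top-level dataset is simply B:

  # with canmount=noauto, "zfs mount -a" at boot skips the dataset;
  # mount it explicitly once /A is up:
  zfs set canmount=noauto B
  zfs mount B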


-- 
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disabling COMMIT at NFS level, or disabling ZIL on a per-filesystem basis

2008-10-23 Thread Ross Smith
No problem.  I didn't use mirrored slogs myself, but that's certainly
a step up for reliability.

It's pretty easy to create a boot script to re-create the ramdisk and
re-attach it to the pool too.  So long as you use the same device name
for the ramdisk, you can add it back each time with a simple "zpool
replace pool ramdisk".
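
A minimal sketch of such a boot script (pool and ramdisk names hypothetical):

  #!/bin/sh
  # re-create the 512MB ramdisk and swap it in for the stale log device
  ramdiskadm -a rdzil 512m
  zpool replace tank /dev/ramdisk/rdzil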


On Thu, Oct 23, 2008 at 1:56 PM, Constantin Gonzalez
<[EMAIL PROTECTED]> wrote:
> Hi,
>
> yes, using slogs is the best solution.
>
> Meanwhile, using mirrored slogs built from other servers' UPS-backed RAM
> disks seems like an interesting idea, if that is deemed reliable enough
> for the purposes of the NFS server.
>
> Thanks for suggesting this!
>
> Cheers,
>   Constantin
>
> Ross wrote:
>>
>> Well, it might be even more of a bodge than disabling the ZIL, but how
>> about:
>>
>> - Create a 512MB ramdisk, use that for the ZIL
>> - Buy a Micro Memory nvram PCI card for £100 or so.
>> - Wait 3-6 months, hopefully buy a fully supported PCI-e SSD to replace
>> the Micro Memory card.
>>
>> The ramdisk isn't an ideal solution, but provided you don't export the
>> pool with it offline, it does work.  We used it as a stop gap solution for a
>> couple of weeks while waiting for a Micro Memory nvram card.
>>
>> Our reasoning was that our server's on a UPS and we figured if something
>> crashed badly enough to take out something like the UPS, the motherboard,
>> etc, we'd be losing data anyway.  We just made sure we had good backups in
>> case the pool got corrupted and crossed our fingers.
>>
>> The reason I say wait 3-6 months is that there's a huge amount of activity
>> with SSD's at the moment.  Sun said that they were planning to have flash
>> storage launched by Christmas, so I figure there's a fair chance that we'll
>> see some supported PCIe cards by next Spring.
>> --
>> This message posted from opensolaris.org
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
> --
> Constantin Gonzalez          Sun Microsystems GmbH, Germany
> Principal Field Technologist          http://blogs.sun.com/constantin
> Tel.: +49 89/4 60 08-25 91   http://google.com/search?q=constantin+gonzalez
>
> Sitz d. Ges.: Sun Microsystems GmbH, Sonnenallee 1, 85551 Kirchheim-Heimstetten
> Amtsgericht Muenchen: HRB 161028
> Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer
> Vorsitzender des Aufsichtsrates: Martin Haering
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-10-23 Thread Peter Bridge
I'm looking to buy some new hardware to build a home ZFS-based NAS.  I know ZFS 
can be quite CPU/mem hungry and I'd appreciate some opinions on the following 
combination:

Intel Essential Series D945GCLF2
Kingston ValueRAM DIMM 2GB PC2-5300U CL5 (DDR2-667) (KVR667D2N5/2G)

Firstly, does it sound like a reasonable combination to run OpenSolaris?

Will Solaris make use of both processors? / all cores?

Is it going to be enough power to run ZFS?

I read that ZFS prefers 64-bit, but it's not clear to me whether the above
board provides 64-bit support.

Also I already have 2 SATA II disks to throw in (using both onboard SATA II 
ports), but ideally I would like to add an OS-supported PCI SATA card to add 
maybe another 4 disks.  Any suggestions on a suitable card please?

Cheers
Peter
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS scalability in terms of file system count (or lack thereof) in S10U6

2008-10-23 Thread Paul B. Henson
On Thu, 23 Oct 2008, Pramod Batni wrote:

> On 10/23/08 08:19, Paul B. Henson wrote:
> >
> > Ok, that leads to another question, why does creating a new ZFS filesystem
> > require determining if any of the existing filesystems in the dataset are
> > mounted :)?
>
> I am not sure. All the checking is done as part of libshare's sa_init,
> which calls into sa_get_zfs_shares().

It does make a big difference whether or not sharenfs is enabled. I haven't
finished my testing, but at 5000 filesystems it takes about 30 seconds to
create a new filesystem and over 30 minutes to reboot if they are shared,
but only 7 seconds to create a filesystem and about 15 minutes to reboot if
they are not.
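
For reference, the kind of timing I'm doing is simple wall-clock measurement
along these lines (pool and filesystem names hypothetical):

  time zfs create export/user5001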

> You could do that else I can open a bug for you citing the Nevada
> build [b97] you are using.

I would greatly appreciate it if you could open the bug, I don't have an
opensolaris bugzilla account yet and you'd probably put better technical
details in it anyway :). If you do, could you please let me know the bug#
so I can refer to it once S10U6 is out and I confirm it has the same
behavior?

Thanks much...


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-10-23 Thread mike
I'm running ZFS on Nevada (b94 and b98) on two machines at home, both
with 4 GB of RAM.  One has a quad-core Intel Core 2 with ECC RAM, the other
has normal RAM and a low-power dual-core Athlon 64.  Both seem to be
working great.

On Thu, Oct 23, 2008 at 2:04 PM, Peter Bridge <[EMAIL PROTECTED]> wrote:
> I'm looking to buy some new hardware to build a home ZFS based NAS.  I know 
> ZFS can be quite CPU/mem hungry and I'd appreciate some opinions on the 
> following combination:
>
> Intel Essential Series D945GCLF2
> Kingston ValueRAM DIMM 2GB PC2-5300U CL5 (DDR2-667) (KVR667D2N5/2G)
>
> Firstly, does it sound like a reasonable combination to run OpenSolaris?
>
> Will Solaris make use of both processors? / all cores?
>
> Is it going to be enough power to run ZFS?
>
> I read that ZFS prefers 64bit, but it's not clear to me if the above board 
> will provide 64bit support.
>
> Also I already have 2 SATA II disks to throw in (using both onboard SATA II 
> ports), but ideally I would like to add an OS-supported PCI SATA card to add 
> maybe another 4 disks.  Any suggestions on a suitable card please?
>
> Cheers
> Peter
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-10-23 Thread John-Paul Drawneek
It depends on what you're doing.

I've got an AMD Sempron LE-1100 (1.9 GHz) doing NAS duty for MythTV and it
seems to do OK.

If the board you quote is what you're getting, I think it's a 64-bit chip -
the Intel site says it's an Atom 330.

Solaris should use all of its cores/threads - Intel has contributed a lot of
code to OpenSolaris, though I'm not sure the Atom work was part of it.

I think you're out of luck for PCI SATA cards - I've not seen anything good
about them.  SiI hardware is buggy, but it's got the driver support.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-10-23 Thread Chris Greer
I've been looking at this board myself for the same thing.
The blog below is about the D945GCLF, but comparing the two, it looks like
the processor is the only difference (single core vs. dual core).

http://blogs.sun.com/PotstickerGuru/entry/solaris_running_on_intel_atom
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-10-23 Thread Peter Bridge
Thanks for all the feedback.  Some follow-up questions:

If the OS will see all 4 cores, will it also make use of all 4 cores for
ZFS, i.e. is ZFS fully multi-threaded?

Is there any point in running ZFS over just two disks?  Without the extra
SATA ports I'm thinking I may have to abandon this idea.  The plan was to
use the internal IDE just for a small boot disk and CD-ROM.  I don't think
it would be a good idea to mix IDE and SATA in one zpool, agreed?

Well, I'll do some more searching; maybe there is another quad-core board
out there with 8 SATA ports, 4GB RAM support and a passively cooled north
bridge :)
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss