Re: [zfs-discuss] Verify files' checksums

2008-10-24 Thread Mark Homoky

For MD5 checksums I personally favour MD5Summer on Windows boxes.

Just watch the formatting of the checksum file if you're checking
downloads - sometimes it can be a bit picky about line breaks. I think
the file in question was the latest preview release of NetBeans 6.5
(RC1). Opening the checksum file in Notepad and putting the digest and
filename on one line worked a treat.


Sorry this went a little OT / top-posted.

Sent from my iPod

Mark.

On 25 Oct 2008, at 06:01, "Johan Hartzenberg" <[EMAIL PROTECTED]> wrote:

> On Sat, Oct 25, 2008 at 6:59 AM, Johan Hartzenberg
> <[EMAIL PROTECTED]> wrote:
>
>> On Sat, Oct 25, 2008 at 4:00 AM, Marcus Sundman <[EMAIL PROTECTED]>
>> wrote:
>>
>>> How can I verify the checksums for a specific file?
>>
>> I have a feeling you are not asking the question about ZFS hosted
>> files specifically.
>>
>> If you downloaded a file, enter
>>
>>   cksum filename
>>
>> to get the CRC checksum.
>>
>> For more types of checksum, you can use
>>
>>   digest -a md5 filename
>>
>> digest -l will list the types of checksum that the "digest" command
>> knows about.
>>
>> Cheers,
>>   _hartz
>
> Oh, one other thing:
> To check the checksums of files you've downloaded to an MS Windows
> system you need to download and install a "checksum checking"
> utility; try twocows.com
>
>   _hartz
>
> --
> Any sufficiently advanced technology is indistinguishable from magic.
>    Arthur C. Clarke
>
> My blog: http://initialprogramload.blogspot.com




Re: [zfs-discuss] Verify files' checksums

2008-10-24 Thread Johan Hartzenberg
On Sat, Oct 25, 2008 at 6:59 AM, Johan Hartzenberg <[EMAIL PROTECTED]>wrote:

>
>
> On Sat, Oct 25, 2008 at 4:00 AM, Marcus Sundman <[EMAIL PROTECTED]> wrote:
>
>> How can I verify the checksums for a specific file?
>
> I have a feeling you are not asking the question about ZFS hosted files
> specifically.
>
> If you downloaded a file, enter
>
>   cksum filename
>
> to get the CRC checksum.
>
> For more types of checksum, you can use
>
>   digest -a md5 filename
>
> digest -l will list the types of checksum that the "digest" command
> knows about.
>
> Cheers,
>   _hartz
>
>
Oh, one other thing:
To check the checksums of files you've downloaded to an MS Windows system
you need to download and install a "checksum checking" utility; try
twocows.com

  _hartz


-- 
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com


Re: [zfs-discuss] diagnosing read performance problem

2008-10-24 Thread Bob Friesenhahn
On Sat, 25 Oct 2008, Matt Harrison wrote:
> I've got a lot of video files on a zfs/cifs fileserver running SXCE. A
> little while ago the dual onboard NICs died and I had to replace them with a
> PCI 10/100 NIC. The system was fine for a couple of weeks but now the
> performance when viewing a video file from the cifs share is appalling. Videos
> stop and jerk with audio distortion.
>
> I have tried this from several client machines so I'm pretty certain it lies
> with the server but I'm unsure of the next step to find out the source of
> the problem.

Other people on this list who experienced the exact same problem 
ultimately determined that the problem was with the network card.  I 
recall that Intel NICs were the recommended solution.

Note that 100MBit is now considered to be a slow link and PCI is also 
considered to be slow.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/



Re: [zfs-discuss] Verify files' checksums

2008-10-24 Thread Johan Hartzenberg
On Sat, Oct 25, 2008 at 4:00 AM, Marcus Sundman <[EMAIL PROTECTED]> wrote:

> How can I verify the checksums for a specific file?

I have a feeling you are not asking the question about ZFS hosted files
specifically.

If you downloaded a file, enter

  cksum filename

to get the CRC checksum.

For more types of checksum, you can use

  digest -a md5 filename

digest -l will list the types of checksum that the "digest" command
knows about.
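
For example, to check a download against a published MD5 sum (a rough
sketch - this assumes the .md5 file contains just the bare hex digest,
and the filename is only illustrative):

  [ "`digest -a md5 sol-nv-b101.iso`" = "`cat sol-nv-b101.iso.md5`" ] \
      && echo OK || echo MISMATCH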

Cheers,
  _hartz




-- 
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com


Re: [zfs-discuss] Verify files' checksums

2008-10-24 Thread Richard Elling
Marcus Sundman wrote:
> How can I verify the checksums for a specific file?
>   

ZFS doesn't checksum files.  So a file does not have a checksum
to verify.  Perhaps you want to keep a digest(1) of the files?
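
(If that's the goal, a rough sh sketch - not a built-in feature - for
recording digests once and re-checking them later; filenames with
spaces would need more care:

  # record one digest per file
  for f in *; do echo "`digest -a md5 $f` $f"; done > /var/tmp/md5.list

  # later: recompute each digest and compare against the recorded one
  while read sum f; do
    [ "`digest -a md5 $f`" = "$sum" ] || echo "$f: MISMATCH"
  done < /var/tmp/md5.list

ZFS does checksum the blocks it stores, but those live in pool metadata
and are verified on read and by scrub, not exposed per file.)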
 -- richard



[zfs-discuss] Verify files' checksums

2008-10-24 Thread Marcus Sundman
How can I verify the checksums for a specific file?

- Marcus


[zfs-discuss] diagnosing read performance problem

2008-10-24 Thread Matt Harrison
Hi all,

I've got a lot of video files on a zfs/cifs fileserver running SXCE. A
little while ago the dual onboard NICs died and I had to replace them with a
PCI 10/100 NIC. The system was fine for a couple of weeks but now the
performance when viewing a video file from the cifs share is appalling. Videos
stop and jerk with audio distortion.

I have tried this from several client machines so I'm pretty certain it lies
with the server but I'm unsure of the next step to find out the source of
the problem.

Is there any tool I should be using to find out if this is a zfs, network or
other problem?

Grateful for any ideas

Thanks

Matt


[zfs-discuss] video on ZFS booting in Update 6

2008-10-24 Thread David Magda
Lori Alt gave an informative presentation (40 min.) on how ZFS booting  
works in Solaris 10 Update 6 (10/08):

http://blogs.sun.com/storage/entry/zfs_boot_in_solaris_10

The audio seems to be mono and focused on the left channel (or I'm  
having an aneurism of some kind).


Two questions that came to mind immediately:
- why was the word "safe" in quotation marks when upgrading BEs was
  being discussed?
- is there a plan to eventually remove the need for separate zvols for
  dump and swap? (I think there was a thread on this a while back)




Re: [zfs-discuss] Disabling auto-snapshot by default.

2008-10-24 Thread Tim Foster
Chris Gerhard wrote:
> Not quite.  I want the default, for all pools, imported or not, to be
> to not do this, and then turn it on where it makes sense and won't do
> harm.

Aah, I see. That's the complete opposite of what the desktop folk wanted,
then - you want opt-in instead of opt-out.

For desktops & laptops, the requirement was for there to be no required 
user action for this stuff to "just work".  For servers, the intent was 
to just have it totally off by default (that is, the SMF service would 
just be disabled)

I'm not sure there's an easy way to please everyone to be honest :-/

That said, how often do you import or create pools where this would be 
an issue?  If you're importing pools you've used on such a system 
before, then you'd already have the property set on the root dataset.

If you're constantly importing brand new pools, then yes, you've got a 
point.

cheers,
tim


Re: [zfs-discuss] Disabling auto-snapshot by default.

2008-10-24 Thread Chris Gerhard

Tim Foster wrote:

> Yep, you can do that. It uses ZFS user properties and respects
> inheritance, so you can do:
>
> # zfs set com.sun:auto-snapshot=false rpool
> # zfs set com.sun:auto-snapshot=true rpool/snapshot/this
> # zfs set com.sun:auto-snapshot=false rpool/snapshot/this/but-not-this

Not quite.  I want the default, for all pools, imported or not, to be to
not do this, and then to turn it on where it makes sense and won't do
harm.  So it would only take a snapshot if auto-snapshot was set to
"true"; if the property is not set then no snapshot.  Obviously property
inheritance would still be respected, so to turn it on you just set the
property to true at the root of the pool.



--
Chris Gerhard. __o __o __o
Systems TSC Chief Technologist_`\<,`\<,`\<,_
Sun Microsystems Limited (*)/---/---/ (*)
Phone: +44 (0) 1252 426033 (ext 26033) http://blogs.sun.com/chrisg




Re: [zfs-discuss] Disabling auto-snapshot by default.

2008-10-24 Thread Tim Foster
Chris Gerhard wrote:
> This could be my misunderstanding of the parts of this. When I disabled
> the auto-snapshot service, timeslider ended up disabled too.

Yep, time-slider depends on auto-snapshot.

> So what does the timeslider service do?

It's a service written by the desktop guys that adds a root cron job to 
monitor available disk space. The cron job fires every 15 minutes. If 
available disk space has gone below a given threshold, it starts 
destroying auto-snapshots until the available disk space exceeds that 
threshold again.

(I agree those two bits of functionality could be seen as independent of 
each other)
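
Roughly, in shell terms, the cleanup amounts to something like this
(an illustrative sketch only, not the actual time-slider code; the 80%
threshold and the zfs-auto-snap name prefix are my assumptions):

  #!/bin/sh
  # Sketch: destroy oldest auto-snapshots while a pool is over-full.
  THRESHOLD=80
  for pool in `zpool list -H -o name`; do
    while [ `zpool list -H -o capacity $pool | tr -d %` -gt $THRESHOLD ]; do
      # oldest matching snapshot first ("zfs-auto-snap" prefix assumed)
      snap=`zfs list -H -t snapshot -o name -s creation | \
          grep "^$pool.*@zfs-auto-snap" | head -1`
      [ -z "$snap" ] && break
      zfs destroy "$snap"
    done
  done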

The time slider service has an SMF property with a list of services it 
wants to enable on startup. This isn't quite the traditional SMF 
dependency relationship, and I can't remember the exact reasoning for 
doing it this way - I think it had something to do with the failure modes 
for each service. Niall would know more, I bet.

> I'd really like to be able to turn it off by default globally and then 
> turn it on in a controlled fashion. If you take snapshots of file 
> systems that are targets of zfs send | zfs receive it will prevent the 
> back ups working at all and require a lot of manual effort to recover.

Yep, you can do that. It uses ZFS user properties and respects 
inheritance, so you can do:

# zfs set com.sun:auto-snapshot=false rpool
# zfs set com.sun:auto-snapshot=true rpool/snapshot/this
# zfs set com.sun:auto-snapshot=false rpool/snapshot/this/but-not-this

So we'd get snapshots of rpool/snapshot/this and all of its child 
datasets, except the dataset "rpool/snapshot/this/but-not-this".

(I use this all the time on our build machines: I take auto snapshots of 
workspaces, but not proto areas or places where ISO images get built)
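
(You can watch the property inherit with "zfs get", which works on user
properties too:

  zfs get -r com.sun:auto-snapshot rpool

the SOURCE column shows whether each dataset's value is set locally or
inherited.)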

cheers,
tim



Re: [zfs-discuss] Disabling auto-snapshot by default.

2008-10-24 Thread Chris Gerhard

Tim Foster wrote:

> Hi Chris,
>
> Chris Gerhard wrote:
>> How can you disable the auto-snapshot service[s] by default without
>> disabling the timeslider as well, which appears to be the case if you
>> disable the smf services?
>
> Not sure I follow - time slider depends on the auto-snapshot service to
> take snapshots... (does the nautilus gui disappear if timeslider is
> disabled - if so, that sounds wrong to me)

This could be my misunderstanding of the parts of this. When I disabled 
the auto-snapshot service, timeslider ended up disabled too. So what 
does the timeslider service do?

>> Setting the property in the root pool is ok except for removable media,
>> which I don't want to have snapshots taken of in the time between
>> plugging the things in and setting the property.
>
> You'll be ok there. The service checks for/sets the property on service
> start on any pools that appear on the system: in effect, this means that
> removable media won't be snapshotted by default until the service
> refreshes. So only media that were inserted and mounted at boot would
> get snapshotted by default (unless the property was already set on those
> pools to tell the service to leave them alone)
>
> btw. I've a changeset checked in to have the service not take snapshots
> of swap & dump devices, which was broken in nv_100/101. There's a known
> bug about the service breaking for datasets with spaces in their names.
> I've got an ugly fix, but want to have a go at doing a better job of it.

I'd really like to be able to turn it off by default globally and then 
turn it on in a controlled fashion. If you take snapshots of file 
systems that are targets of zfs send | zfs receive, it will prevent the 
backups working at all and require a lot of manual effort to recover.



--
Chris Gerhard. __o __o __o
Systems TSC Chief Technologist_`\<,`\<,`\<,_
Sun Microsystems Limited (*)/---/---/ (*)
Phone: +44 (0) 1252 426033 (ext 26033) http://blogs.sun.com/chrisg




Re: [zfs-discuss] sata on sparc

2008-10-24 Thread John-Paul Drawneek
You have to buy an LSI SAS card.

Not cheap - around 100 GBP.


[zfs-discuss] sata on sparc

2008-10-24 Thread Francois Dion

On this page:

http://www.sun.com/io_technologies/sata/SATA0.html

I see:
-USB,SATA,IEEE1394 SIIG, Inc. USB 2.0 + FireWire + SATA Combo (SC-UNS012) 
Verified (Solaris 10)

Indicating that if I install an SC-UNS012 in a Solaris 10 Sparc server I would 
get a few USB 2 ports, a few IEEE1394 ports but more interestingly, a few SATA 
ports working under sparc.

But that same page states:
"Native SATA is not yet fully supported. There is no current support on 
SPARC-based systems. " 

Which is it? Does it work or doesn't it?



Re: [zfs-discuss] Tuning ZFS for Sun Java Messaging Server

2008-10-24 Thread Torrey McMahon
Richard Elling wrote:
> Adam N. Copeland wrote:
>   
>> Thanks for the replies.
>>
>> It appears the problem is that we are I/O bound. We have our SAN guy
>> looking into possibly moving us to faster spindles. In the meantime, I
>> wanted to implement whatever was possible to give us breathing room.
>> Turning off atime certainly helped, but we are definitely not completely
>> out of the drink yet.
>>
>> I also found that disabling the ZFS cache flush as per the Evil Tuning
>> Guide was a huge boon, considering we're on a battery-backed (non-Sun) SAN.
>>   
>> 
>
> Really?  Which OS version are you on?  This should have been
> fixed in Solaris 10 5/08 (it is a fix in the [s]sd driver).  Caveat: there
> may be some devices which do not properly negotiate the SYNC_NV
> bit.  In my tests, using Solaris 10 5/08, disabling the cache flush made
> zero difference.
>   

PSARC 2007/053

If I read through the code correctly: if the array doesn't respond to 
the device inquiry, you haven't made an entry in sd.conf for the array, 
and it isn't hard-coded in the sd.c table - I think there are only two 
in that state - then you'd have to disable the cache flush.
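
For reference, a sketch of what such an sd.conf entry looks like - the
vendor/product strings are placeholders that must match your array's
inquiry data (padded the way sd expects), and I believe the tunable name
is cache-nonvolatile, but check the sd(7D) for your release:

  sd-config-list = "ACME    SuperArray", "cache-nonvolatile:true";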




Re: [zfs-discuss] sata on sparc

2008-10-24 Thread Martin Winkelman
On Fri, 24 Oct 2008, Francois Dion wrote:

> On this page:
>
> http://www.sun.com/io_technologies/sata/SATA0.html
>
> I see:
> -USB,SATA,IEEE1394 SIIG, Inc. USB 2.0 + FireWire + SATA Combo (SC-UNS012) 
> Verified (Solaris 10)
>
> Indicating that if I install an SC-UNS012 in a Solaris 10 Sparc server I 
> would get a few USB 2 ports, a few IEEE1394 ports but more interestingly, a 
> few SATA ports working under sparc.
>
> But that same page states:
> "Native SATA is not yet fully supported. There is no current support on 
> SPARC-based systems. "
>
> Which is it? Does it work or doesn't it?


Solaris 10 for Sparc doesn't have a driver for the SATA chipset on this card. 
It is listed as verified for Sparc Solaris because the USB and FireWire ports 
will work on Sparc systems.


--
Martin Winkelman  -  [EMAIL PROTECTED]  -  303-272-3122
http://www.sun.com/solarisready/


Re: [zfs-discuss] Disabling auto-snapshot by default.

2008-10-24 Thread Tim Foster
Hi Chris,

Chris Gerhard wrote:
> With auto-snapshot still on are all the snapshots taken as a single 
> transaction as they would be with a recursive snapshot?

Yep, it uses "zfs snapshot -r" when it can, falling back to individual 
snapshots of the datasets it can't recursively snapshot and recursive 
snapshots of the ones it can (if that makes sense).

The logic to do that works, but is a bit process-heavy right now: I'm 
working on using more ksh builtins instead.

Of course, you can disable the snapshot-based-on-zfs-user-properties 
functionality, and go back to taking snapshots based solely on 
statically listed filesystems (see the service manifest or README file 
and check for the "zfs/fs-name" SMF property)
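
Something along these lines, for example - the FMRI instance here is
from memory, so check "svcs -a | grep auto-snapshot" on your build for
the real name:

  svccfg -s svc:/system/filesystem/zfs/auto-snapshot:frequent \
      setprop zfs/fs-name = astring: "rpool/export"
  svcadm refresh svc:/system/filesystem/zfs/auto-snapshot:frequent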

> Alas I've had to downgrade as Nautilus is not usable:

Yow.

cheers,
tim


Re: [zfs-discuss] Tuning ZFS for Sun Java Messaging Server

2008-10-24 Thread Richard Elling
Adam N. Copeland wrote:
> Thanks for the replies.
>
> It appears the problem is that we are I/O bound. We have our SAN guy
> looking into possibly moving us to faster spindles. In the meantime, I
> wanted to implement whatever was possible to give us breathing room.
> Turning off atime certainly helped, but we are definitely not completely
> out of the drink yet.
>
> I also found that disabling the ZFS cache flush as per the Evil Tuning
> Guide was a huge boon, considering we're on a battery-backed (non-Sun) SAN.
>   

Really?  Which OS version are you on?  This should have been
fixed in Solaris 10 5/08 (it is a fix in the [s]sd driver).  Caveat: there
may be some devices which do not properly negotiate the SYNC_NV
bit.  In my tests, using Solaris 10 5/08, disabling the cache flush made
zero difference.
 -- richard



Re: [zfs-discuss] Tuning ZFS for Sun Java Messaging Server

2008-10-24 Thread Torrey McMahon
You may want to ask your SAN vendor if they have a setting you can make 
to no-op the cache flush. That way you don't have to worry about the 
flush behavior if you change/add different arrays.

Adam N. Copeland wrote:
> Thanks for the replies.
>
> It appears the problem is that we are I/O bound. We have our SAN guy
> looking into possibly moving us to faster spindles. In the meantime, I
> wanted to implement whatever was possible to give us breathing room.
> Turning off atime certainly helped, but we are definitely not completely
> out of the drink yet.
>
> I also found that disabling the ZFS cache flush as per the Evil Tuning
> Guide was a huge boon, considering we're on a battery-backed (non-Sun) SAN.
>
> Thanks,
> Adam
>
> Richard Elling wrote:
>   
>> As it happens, I'm currently involved with a project doing some
>> performance
>> analysis for this... but it is currently a WIP.  Comments below.
>>
>> Robert Milkowski wrote:
>> 
>>> Hello Adam,
>>>
>>> Tuesday, October 21, 2008, 2:00:46 PM, you wrote:
>>>
>>> ANC> We're using a rather large (3.8TB) ZFS volume for our mailstores
>>> on a
>>> ANC> JMS setup. Does anybody have any tips for tuning ZFS for JMS? I'm
>>> ANC> looking for even the most obvious tips, as I am a bit of a
>>> novice. Thanks,
>>>
>>> Well, it's kind of a broad topic and it depends on a specific
>>> environment. Then do not tune for the sake of tuning - try to
>>> understand your problem first. Nevertheless you should consider
>>> things like (random order):
>>>
>>> 1. RAID level - you probably will end-up with relatively small random
>>>IOs - generally avoid RAID-Z
>>>Of course it could be that RAID-Z in your environment is perfectly
>>>fine.
>>>   
>>>   
>> There are some write latency-sensitive areas that will begin
>> to cause consternation for large loads.  Storage tuning is very
>> important in this space.  In our case, we're using a ST6540
>> array which has a decent write cache and fast back-end.
>>
>> 
>>> 2. Depending on your workload and disk subsystem ZFS's slog on SSD
>>> could help to improve performance
>>>   
>>>   
>> My experiments show that this is not the main performance
>> issue for large message volumes.
>>
>> 
>>> 3. Disable atime updates on zfs file system
>>>   
>>>   
>> Agree.  JMS doesn't use it, so it just means extra work.
>>
>> 
>>> 4. Enabling compression like lzjb in theory could help - depends on
>>> how well your data would compress and how much CPU you have left and if
>>> you are mostly I/O bound
>>>   
>>>   
>> We have not experimented with this yet, but know that some
>> of the latency-sensitive writes are files with a small number of
>> bytes, which will not compress to be less than one disk block.
>> [opportunities for cleverness are here :-)]
>>
>> There may be a benefit for the message body, but in my tests
>> we are not concentrating on that at this time.
>>
>> 
>>> 5. ZFS recordsize - probably not as in most cases when you read
>>> anything from email you will probably read entire mail anyway.
>>> Nevertheless could be easily checked with dtrace.
>>>   
>>>   
>> This does not seem to be an issue.
>>
>> 
>>> 6. IIRC JMS keeps an index/db file per mailbox - so just maybe L2ARC
>>> on large SSD would help assuming it would nicely cache these files -
>>> would need to be simulated/tested
>>>   
>>>   
>> This does not seem to be an issue, but in our testing the message
>> stores have plenty of memory, and hence, ARC size is on the order
>> of tens of GBytes.
>>
>> 
>>> 7. Disabling vdev pre-fetching in ZFS could help - see the ZFS Evil
>>> Tuning Guide
>>>   
>>>   
>> My experiments showed no benefit by disabling pre-fetch.  However,
>> there are multiple layers of pre-fetching at play when you are using an
>> array, and we haven't done a complete analysis on this yet.  It is clear
>> that we are not bandwidth limited, so prefetching may not hurt.
>>
>> 
>>> Except for #3 and maybe #7 first identify what is your problem and
>>> what are you trying to fix.
>>>
>>>   
>>>   
>> Yep.
>> -- richard
>>
>> 


Re: [zfs-discuss] Disabling auto-snapshot by default.

2008-10-24 Thread Tim Foster
Hi Chris,

Chris Gerhard wrote:
> How can you disable the auto-snapshot service[s] by default without
> disabling the timeslider as well, which appears to be the case if you
> disable the smf services?

Not sure I follow - time slider depends on the auto-snapshot service to 
take snapshots... (does the nautilus gui disappear if timeslider is 
disabled - if so, that sounds wrong to me)

> Setting the property in the root pool is ok except for removable media,
> which I don't want to have snapshots taken of in the time between
> plugging the things in and setting the property.

You'll be ok there. The service checks for/sets the property on service 
start on any pools that appear on the system: in effect, this means that 
removable media won't be snapshotted by default until the service 
refreshes. So only media that were inserted and mounted at boot would 
get snapshotted by default (unless the property was already set on those 
pools to tell the service to leave them alone)

btw. I've a changeset checked in to have the service not take snapshots 
of swap & dump devices, which was broken in nv_100/101. There's a known 
bug about the service breaking for datasets with spaces in their names. 
I've got an ugly fix, but want to have a go at doing a better job of it.

cheers,
tim


Re: [zfs-discuss] Tuning ZFS for Sun Java Messaging Server

2008-10-24 Thread Adam N. Copeland
Thanks for the replies.

It appears the problem is that we are I/O bound. We have our SAN guy
looking into possibly moving us to faster spindles. In the meantime, I
wanted to implement whatever was possible to give us breathing room.
Turning off atime certainly helped, but we are definitely not completely
out of the drink yet.

I also found that disabling the ZFS cache flush as per the Evil Tuning
Guide was a huge boon, considering we're on a battery-backed (non-Sun) SAN.
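
For anyone searching the archives later: the Evil Tuning Guide setting
being referred to is the zfs_nocacheflush tunable, set in /etc/system
and picked up at the next reboot - only safe when every device in the
pool has a non-volatile (battery-backed) write cache:

  set zfs:zfs_nocacheflush = 1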

Thanks,
Adam

Richard Elling wrote:
> As it happens, I'm currently involved with a project doing some
> performance
> analysis for this... but it is currently a WIP.  Comments below.
>
> Robert Milkowski wrote:
>> Hello Adam,
>>
>> Tuesday, October 21, 2008, 2:00:46 PM, you wrote:
>>
>> ANC> We're using a rather large (3.8TB) ZFS volume for our mailstores
>> on a
>> ANC> JMS setup. Does anybody have any tips for tuning ZFS for JMS? I'm
>> ANC> looking for even the most obvious tips, as I am a bit of a
>> novice. Thanks,
>>
>> Well, it's kind of a broad topic and it depends on a specific
>> environment. Then do not tune for the sake of tuning - try to
>> understand your problem first. Nevertheless you should consider
>> things like (random order):
>>
>> 1. RAID level - you probably will end-up with relatively small random
>>IOs - generally avoid RAID-Z
>>Of course it could be that RAID-Z in your environment is perfectly
>>fine.
>>   
>
> There are some write latency-sensitive areas that will begin
> to cause consternation for large loads.  Storage tuning is very
> important in this space.  In our case, we're using a ST6540
> array which has a decent write cache and fast back-end.
>
>> 2. Depending on your workload and disk subsystem ZFS's slog on SSD
>> could help to improve performance
>>   
>
> My experiments show that this is not the main performance
> issue for large message volumes.
>
>> 3. Disable atime updates on zfs file system
>>   
>
> Agree.  JMS doesn't use it, so it just means extra work.
>
>> 4. Enabling compression like lzjb in theory could help - depends on
>> how well your data would compress and how much CPU you have left and if
>> you are mostly I/O bound
>>   
>
> We have not experimented with this yet, but know that some
> of the latency-sensitive writes are files with a small number of
> bytes, which will not compress to be less than one disk block.
> [opportunities for cleverness are here :-)]
>
> There may be a benefit for the message body, but in my tests
> we are not concentrating on that at this time.
>
>> 5. ZFS recordsize - probably not as in most cases when you read
>> anything from email you will probably read entire mail anyway.
>> Nevertheless could be easily checked with dtrace.
>>   
>
> This does not seem to be an issue.
>
>> 6. IIRC JMS keeps an index/db file per mailbox - so just maybe L2ARC
>> on large SSD would help assuming it would nicely cache these files -
>> would need to be simulated/tested
>>   
>
> This does not seem to be an issue, but in our testing the message
> stores have plenty of memory, and hence, ARC size is on the order
> of tens of GBytes.
>
>> 7. Disabling vdev pre-fetching in ZFS could help - see the ZFS Evil
>> Tuning Guide
>>   
>
> My experiments showed no benefit by disabling pre-fetch.  However,
> there are multiple layers of pre-fetching at play when you are using an
> array, and we haven't done a complete analysis on this yet.  It is clear
> that we are not bandwidth limited, so prefetching may not hurt.
>
>>
>> Except for #3 and maybe #7 first identify what is your problem and
>> what are you trying to fix.
>>
>>   
>
>
> Yep.
> -- richard
>


Re: [zfs-discuss] Disabling auto-snapshot by default.

2008-10-24 Thread Chris Gerhard

[EMAIL PROTECTED] wrote:

> Chris,
>
> Tim Foster sent out this syntax previously:
>
> zfs set com.sun:auto-snapshot=false dataset

There is an obvious race condition here unless I can set the option when 
I import the pool, which I don't think I can. I suppose the same problem 
will occur when creating new pools.


With auto-snapshot still on, are all the snapshots taken as a single 
transaction, as they would be with a recursive snapshot?


Alas I've had to downgrade as Nautilus is not usable:

http://blogs.sun.com/chrisg/entry/brief_visit_to_build_100


--
Chris Gerhard. __o __o __o
Systems TSC Chief Technologist_`\<,`\<,`\<,_
Sun Microsystems Limited (*)/---/---/ (*)
Phone: +44 (0) 1252 426033 (ext 26033) http://blogs.sun.com/chrisg




Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-10-24 Thread Bob Friesenhahn
On Thu, 23 Oct 2008, Peter Bridge wrote:

> thanks for all the feedback.  Some followup questions:
>
> If OS will see all 4 cores, will it also make use of all 4 cores for 
> ZFS. ie is ZFS fully multi threaded?

I am not sure about the ZFS compression code but from what I can see 
ZFS is multi-threaded like the rest of Solaris.  If it was not 
multi-threaded then it would suck, and lots of Sun hardware would not 
succeed in the market.

> Well, I'll do some more searching, maybe there is another quad core 
> board out there with 8 sata ports, 4GB ram support and passive 
> cooled north bridge :)

On modern hardware, ZFS is not normally CPU limited (unless you enable 
compression or exotic checksums).  There is not really a need for lots 
of CPU cores.  AMD Athlon/Opteron dual core likely matches or exceeds 
Intel quad core for ZFS use due to a less bottlenecked memory channel.

These should be your priorities for NAS:

  * 64 bit CPU
  * ECC memory support
  * Lots of memory (>4GB if possible)
  * Enough SATA/SAS ports to satisfy storage requirements.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/



Re: [zfs-discuss] Disabling auto-snapshot by default.

2008-10-24 Thread Cindy . Swearingen
Chris,

Tim Foster sent out this syntax previously:

zfs set com.sun:auto-snapshot=false dataset

Unless I'm misunderstanding your questions, try this for the dataset
on the removable media device.

Let me know if you have any issues.

I'm tracking the auto snapshot experience...

Cindy


Chris Gerhard wrote:
> How can you disable the auto-snapshot service[s] by default without
> disabling the timeslider as well, which appears to be the case if you
> disable the smf services?
>
> Setting the property in the root pool is ok except for removable media,
> which I don't want to have snapshots taken of in the time between
> plugging the things in and setting the property.


Re: [zfs-discuss] ZFS scalability in terms of file system count (or lack thereof) in S10U6

2008-10-24 Thread Pramod Batni

> I would greatly appreciate it if you could open the bug, I don't have an
> opensolaris bugzilla account yet and you'd probably put better technical
> details in it anyway :). If you do, could you please let me know the bug#
> so I can refer to it once S10U6 is out and I confirm it has the same
> behavior?
>   

   6763592 creating zfs filesystems gets slower as the number of zfs 
filesystems increase

Pramod



[zfs-discuss] Disabling auto-snapshot by default.

2008-10-24 Thread Chris Gerhard
How can you disable the auto-snapshot service[s] by default without 
disabling the timeslider as well, which appears to be the case if you 
disable the smf services?

Setting the property in the root pool is ok except for removable media, 
which I don't want to have snapshots taken of in the time between plugging 
the things in and setting the property.


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-10-24 Thread John-Paul Drawneek
Nvidia 5-series chipsets for AMD - good chipset support.
Intel boards - the ICH9 drivers are in.


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-10-24 Thread Peter Bridge
oops, I mean 2 cores.

Anyway, I'm having second thoughts about this board now, because the 
northbridge sounds like a power hog and I was planning a passively cooled 
system.  I also went through the hardware compatibility list and can see 
what was mentioned about the lack of SATA cards for PCI.  So back to the 
drawing board.

So, any tips on a passively cooled motherboard with 4-6 SATA II ports 
that runs well with OS?


Re: [zfs-discuss] ZFS + OpenSolaris for home NAS?

2008-10-24 Thread Darren J Moffat
Peter Bridge wrote:
> thanks for all the feedback.  Some followup questions:
> 
> If OS will see all 4 cores, will it also make use of all 4 cores for ZFS. ie 
> is ZFS fully multi threaded?

Very much multithreaded just like the rest of the Solaris kernel.

-- 
Darren J Moffat