[zfs-discuss] problem ZFS / NFS from FreeBSD nfsv3 client -- periodic NFS server not resp

2006-09-25 Thread Chad Leigh
I have set up a Solaris 10 U2 06/06 system that has basic patches to the latest 
-19 kernel patch and latest zfs genesis etc as recommended.  I have set up a 
basic pool (local) and a bunch of sub-pools (local/mail, local/mail/shire.net, 
local/mail/shire.net/o, local/jailextras/shire.net/irsfl, etc). I am exporting 
these with [EMAIL PROTECTED],[EMAIL PROTECTED] and then mounting a few of these 
pools on a FreeBSD system using nfsv3.

The FreeBSD system has about 4 of my 10 or so sub-pools mounted.  2 are email imap 
account tests, 1 is generic storage, and one is a FreeBSD jail root.  FreeBSD 
mounts them using TCP:

/sbin/mount_nfs -s -i -3 -T foo-i1:/local/mail/shire.net/o/obar 
/local/2/hobbiton/local/mail/shire.net/o/obar

The systems are both directly connected to a gigabit switch using 1000btx-fdx 
and both have an MTU set at 9000.  The Solaris side is an e1000g port (the 
system has 2 bge and 2 e1000g ports all configured) and the FreeBSD is a bge 
port.

etc.

I have heard that there are some ZFS/NFS sync performance problems etc. that 
will be fixed in U3 or are already fixed in OpenSolaris.  I do not think my issue 
is related to that, though I have also seen some of it, with occasionally very 
poor write performance.

I have experienced the following issue several times since I started 
experimenting with this a few days ago.  I will periodically get "NFS server not 
responding" errors on the FreeBSD machine for one of the mounted pools; it 
will last 4-8 minutes or so and then come alive again and be fine for many 
hours.  When this happens, access to the other mounted pools still works fine, 
and logged in directly on the Solaris machine I am able to access the file 
systems (pools) just fine.

Example error message:

Sep 24 03:09:44 freebsdclient kernel: nfs server 
solzfs-i1:/local/jailextras/shire.net/irsfl: not responding
Sep 24 03:10:15 freebsdclient kernel: nfs server 
solzfs-i1:/local/jailextras/shire.net/irsfl: not responding
Sep 24 03:12:19 freebsdclient last message repeated 4 times
Sep 24 03:14:54 freebsdclient last message repeated 5 times

I would be interested in getting feedback on what might be the problem and also 
ways to track this down.  Is this a known issue?  Have others seen the NFS 
server sharing ZFS time out (but not for all pools)?  Etc.

Is there any functional difference with setting up the ZFS pools as legacy 
mounts and using a traditional share command to share them over nfs?
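For reference, the two approaches in that question look roughly like the 
following -- a minimal sketch with hypothetical dataset and client names, not 
the actual configuration in use here:

  # ZFS-managed sharing: the dataset mounts itself and the sharenfs
  # property drives the NFS export
  zfs set sharenfs=rw,root=client1 local/mail

  # legacy approach: mount and share by hand, as with any other file system
  zfs set mountpoint=legacy local/mail
  mount -F zfs local/mail /export/mail
  share -F nfs -o rw,root=client1 /export/mail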

I am mostly a Solaris noob and am happy to learn and can try anything people 
want me to test.

Thanks in advance for any comments or help.
thanks
Chad
 
 


Re: [zfs-discuss] Info on OLTP Perf

2006-09-25 Thread przemolicc
On Fri, Sep 22, 2006 at 03:38:05PM +0200, Roch wrote:
 
 
   http://blogs.sun.com/roch/entry/zfs_and_oltp

After reading this page and taking into consideration my (not so big) knowledge
of ZFS, it occurred to me that putting e.g. Oracle on both UFS+DIO _and_
ZFS would be the best solution _at_the_moment_: e.g. redo logs and undo
tablespace on ZFS (because of its COW nature, all writes to these
files go at full speed) and all the rest of the database files on UFS+DIO.
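A rough sketch of that split, with hypothetical device names and paths (not a
tested layout):

  # redo logs and undo tablespace on ZFS
  zpool create oralog c2t0d0
  zfs create oralog/redo
  zfs create oralog/undo

  # remaining datafiles on UFS mounted with direct I/O
  newfs /dev/rdsk/c3t0d0s0
  mount -F ufs -o forcedirectio /dev/dsk/c3t0d0s0 /u02/oradata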

What do you think?

przemol


Re: [zfs-discuss] Re: slow reads question...

2006-09-25 Thread Roch


Harley Gorrell writes:
  On Fri, 22 Sep 2006, [EMAIL PROTECTED] wrote:
   Are you just trying to measure ZFS's read performance here?
  
  That is what I started looking at.  We scrounged around
  and found a set of 300GB drives to replace the old ones we
  started with.  Comparing these new drives to the old ones:
  
  Old 36GB drives:
  
  | # time mkfile -v 1g zeros-1g
  | zeros-1g 1073741824 bytes
  | 
  | real    2m31.991s
  | user    0m0.007s
  | sys     0m0.923s
  
  Newer 300GB drives:
  
  | # time mkfile -v 1g zeros-1g
  | zeros-1g 1073741824 bytes
  | 
  | real    0m8.425s
  | user    0m0.010s
  | sys     0m1.809s
  
  At this point I am pretty happy.
  

It looks like on the second run you had lots more free
memory, and mkfile completed at near-memcpy speed.

Something is awry on the first pass, though. Running

zpool iostat 1

can shed some light on this. I/O will keep going after the 
mkfile completes in the second case. For the first one,
there may have been an interaction with not-yet-finished I/O loads?

-r


  I am wondering if there is something other than capacity
  and seek time which has changed between the drives.  Would a
  different SCSI command set or feature set have this dramatic a
  difference?
  
  thanks!,
  harley.



[zfs-discuss] [Fwd: RESEND: [Fwd: Queston: after installing SunMC 3.6.1 ability to view the ZFS gui has disappeared]]

2006-09-25 Thread Arlina Goce-Capiral

All,

Anyone for this?
I haven't received any information regarding this. This is my third 
attempt, and I would appreciate it if you can send me any info you have.

TIA,
Arlina

NOTE: Please email me directly as I'm not on this alias.
---BeginMessage---


I'm resending this since I haven't received anything from anybody.
I would appreciate any suggestions.

Thanks,
Arlina-
---BeginMessage---



Customer opened a case with an issue: the ZFS GUI has disappeared from 
under the Storage menu. This happened after loading SunMC 3.6.1.

More information from the customer's email below:


Yes, I just installed Solaris 10 6/06 on an Ultra 25 for testing, we'll
be using ZFS on an E2900 very soon.

I was evaluating the zfs and was looking into the SMC as well for zone
management and loaded SMC on to the Ultra 25.

When I started up the below link for ZFS, the Java Web Console started up in
its place.

The Storage menu should have ZFS underneath it, but it wasn't
there, only Solaris Container Manager.

As for the SMC installation I took the defaults except for snmp.


The disks show up via the command line; however, the problem is this: the
ZFS GUI management tool and the Sun Management Center GUI both use port
6789 as the Java Web Console.

After I installed SunMC, the ability to view the ZFS GUI has disappeared.
I thought you could do all of the above from web port 6789, but SunMC
seems to have 'overwritten' the ZFS management GUI.

Any ideas?
=

TIA,
Arlina

NOTE: Please email me directly as I'm not on this alias.


---End Message---
---End Message---


Re: [zfs-discuss] Re: slow reads question...

2006-09-25 Thread Harley Gorrell

On Mon, 25 Sep 2006, Roch wrote:

This looks like on the second run, you had lots more free
memory and mkfile completed near memcpy speed.


   Both times the system was near idle.


Something is awry on the first pass though. Then,

zpool iostat 1

can put some lights on this. IO will keep on going after the
mkfile completes in the second case. For the first one,
there may have been an interaction with not yet finished I/O loads ?


The old drives aren't in the system, but I did try this
with the new drives.  I ran mkfile -v 1g zeros-1g a couple of
times while zpool iostat -v 1 was running in another
window.  There were about seven stats like the first one below, where
it is writing to disk.  The next-to-last one is where the
bandwidth drops, as there isn't enough I/O to fill out that
second, followed by zeros of no I/O.  I didn't see any write-behind
-- once the I/O was done I didn't see more until I
started something else.

|                capacity     operations    bandwidth
| pool         used  avail   read  write   read  write
| -----------  -----  -----  -----  -----  -----  -----
| tank         26.1G  1.34T      0  1.13K      0   134M
|   raidz1     26.1G  1.34T      0  1.13K      0   134M
|     c0t1d0       -      -      0    367      0  33.6M
|     c0t2d0       -      -      0    377      0  35.5M
|     c0t3d0       -      -      0    401      0  35.0M
|     c0t4d0       -      -      0    411      0  36.0M
|     c0t5d0       -      -      0    424      0  34.9M
| -----------  -----  -----  -----  -----  -----  -----
|
|                capacity     operations    bandwidth
| pool         used  avail   read  write   read  write
| -----------  -----  -----  -----  -----  -----  -----
| tank         26.4G  1.34T      0  1.01K    560   118M
|   raidz1     26.4G  1.34T      0  1.01K    560   118M
|     c0t1d0       -      -      0    307      0  29.6M
|     c0t2d0       -      -      0    309      0  27.6M
|     c0t3d0       -      -      0    331      0  28.1M
|     c0t4d0       -      -      0    338  35.0K  27.0M
|     c0t5d0       -      -      0    338  35.0K  28.3M
| -----------  -----  -----  -----  -----  -----  -----
|
|                capacity     operations    bandwidth
| pool         used  avail   read  write   read  write
| -----------  -----  -----  -----  -----  -----  -----
| tank         26.4G  1.34T      0      0      0      0
|   raidz1     26.4G  1.34T      0      0      0      0
|     c0t1d0       -      -      0      0      0      0
|     c0t2d0       -      -      0      0      0      0
|     c0t3d0       -      -      0      0      0      0
|     c0t4d0       -      -      0      0      0      0
|     c0t5d0       -      -      0      0      0      0
| -----------  -----  -----  -----  -----  -----  -----

   As things stand now, I am happy.

   I do wonder what accounts for the improvement -- seek
time, transfer rate, disk cache, or something else?  Does
anyone have a dtrace script to measure this which they
would share?

harley.


Re: [zfs-discuss] Re: slow reads question...

2006-09-25 Thread Richard Elling - PAE

Harley Gorrell wrote:

   I do wonder what accounts for the improvement -- seek
time, transfer rate, disk cache, or something else?  Does
anyone have a dtrace script to measure this which they
would share?


You might also be seeing the effects of defect management.  As
drives get older, they tend to find and repair more defects.
This will slow the performance of the drive, though I've not
seen it to this extreme.  You might infer this from a DTrace
script which records the service time per I/O -- in which
case you may see some I/Os with much larger service times than
normal.  I would expect this to be a second-order effect.
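A minimal sketch of that kind of measurement, using the DTrace io provider
(generic, not tuned to this particular system):

  # distribution of per-device I/O service times, in microseconds
  dtrace -n '
  io:::start { start[arg0] = timestamp; }
  io:::done /start[arg0]/ {
      @svc[args[1]->dev_statname] = quantize((timestamp - start[arg0]) / 1000);
      start[arg0] = 0;
  }'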

Meanwhile, you should check to make sure you're transferring data
at the rate you think (SCSI autonegotiates data transfer rates).
If you know the model number, you can get the rotational speed
and average seek times to see if that is radically different
for the two disk types.
 -- richard



Re: [zfs-discuss] problem ZFS / NFS from FreeBSD nfsv3 client -- periodic NFS server not resp

2006-09-25 Thread eric kustarz

Chad Leigh wrote:


I have set up a Solaris 10 U2 06/06 system that has basic patches to the latest 
-19 kernel patch and latest zfs genesis etc as recommended.  I have set up a 
basic pool (local) and a bunch of sub-pools (local/mail, local/mail/shire.net, 
local/mail/shire.net/o, local/jailextras/shire.net/irsfl, etc). I am exporting 
these with [EMAIL PROTECTED],[EMAIL PROTECTED] and then mounting a few of these 
pools on a FreeBSD system using nfsv3.

The FreeBSD has about 4 of my 10 or so subpools mounted.  2 are email imap 
account tests, 1 is generic storage, and one is a FreeBSD jail root.  FreeBSD 
mounts them using TCP:

/sbin/mount_nfs -s -i -3 -T foo-i1:/local/mail/shire.net/o/obar 
/local/2/hobbiton/local/mail/shire.net/o/obar

The systems are both directly connected to a gigabit switch using 1000btx-fdx 
and both have an MTU set at 9000.  The Solaris side is an e1000g port (the 
system has 2 bge and 2 e1000g ports all configured) and the FreeBSD is a bge 
port.

etc.

I have heard that there are some ZFS/NFS sync performance problems etc that 
will be fixed in U3 or are fixed in OpenSolaris.  I do not think my issue is 
related to that.  I have also seen some of that with sometimes having pisspoor 
performance on writing.

I have experienced the following issue several times since I started 
experimenting with this a few days ago.  I periodically will get NFS server not 
responding errors on the FreeBSD machine for one of the mounted pools, and it 
will last 4-8 minutes or so and then come alive again and be fine for many 
hours.  When this happens, access to the other mounted pools still works fine 
and logged directly in to the Solaris machine I am able to access the file 
systems (pools) just fine.

Example error message:

Sep 24 03:09:44 freebsdclient kernel: nfs server 
solzfs-i1:/local/jailextras/shire.net/irsfl: not responding
Sep 24 03:10:15 freebsdclient kernel: nfs server 
solzfs-i1:/local/jailextras/shire.net/irsfl: not responding
Sep 24 03:12:19 freebsdclient last message repeated 4 times
Sep 24 03:14:54 freebsdclient last message repeated 5 times

I would be interested in getting feedback on what might be the problem and also 
ways to track this down etc.  Is this a known issue?  Have others seen the NFS 
server sharing ZFS time  out  (but not for all pools)?  Etc.
 



Could be lots of things - network partition, bad hardware, overloaded 
server, bad routers, etc.


What's the server's load like (vmstat, prstat)?  If you're banging on 
the server too hard and using up the server's resources then nfsd may 
not be able to respond to your client's requests.
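For example (the intervals and flags here are just one reasonable choice):

  # CPU/memory summary, five 5-second samples
  vmstat 5 5
  # top 10 processes by CPU, five 5-second samples
  prstat -s cpu -n 10 5 5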


You can also grab a snoop trace to see which packets are not being 
responded to.


What are clients and local apps doing to the machine?

What is your server hardware (# processors, memory) - is it 
underprovisioned for what you're doing to it?


How is the FreeBSD NFS client code -- is it robust?

Are there any disk errors on the server (iostat -E, check 
/var/adm/messages, zpool status -x)?


Is the network being flaky?

eric


Is there any functional difference with setting up the ZFS pools as legacy 
mounts and using a traditional share command to share them over nfs?

I am mostly a Solaris noob and am happy to learn and can try anything people 
want me to test.

Thanks in advance for any comments or help.
thanks
Chad


 







Re: [zfs-discuss] problem ZFS / NFS from FreeBSD nfsv3 client -- periodic NFS server not resp

2006-09-25 Thread Chad Leigh -- Shire.Net LLC


On Sep 25, 2006, at 12:18 PM, eric kustarz wrote:


Chad Leigh wrote:

I have set up a Solaris 10 U2 06/06 system that has basic patches  
to the latest -19 kernel patch and latest zfs genesis etc as  
recommended.  I have set up a basic pool (local) and a bunch of  
sub-pools (local/mail, local/mail/shire.net, local/mail/shire.net/ 
o, local/jailextras/shire.net/irsfl, etc). I am exporting these  
with [EMAIL PROTECTED],[EMAIL PROTECTED] and then mounting a few  
of these pools on a FreeBSD system using nfsv3.


The FreeBSD has about 4 of my 10 or so subpools mounted.  2 are  
email imap account tests, 1 is generic storage, and one is a  
FreeBSD jail root.  FreeBSD mounts them with using TCP


/sbin/mount_nfs -s -i -3 -T foo-i1:/local/mail/shire.net/o/obar / 
local/2/hobbiton/local/mail/shire.net/o/obar


The systems are both directly connected to a gigabit switch using  
1000btx-fdx and both have an MTU set at 9000.  The Solaris side is  
an e1000g port (the system has 2 bge and 2 e1000g ports all  
configured) and the FreeBSD is a bge port.


etc.

I have heard that there are some ZFS/NFS sync performance problems  
etc that will be fixed in U3 or are fixed in OpenSolaris.  I do  
not think my issue is related to that.  I have also seen some of  
that with sometimes having pisspoor performance on writing.


I have experienced the following issue several times since I  
started experimenting with this a few days ago.  I periodically  
will get NFS server not responding errors on the FreeBSD machine  
for one of the mounted pools, and it will last 4-8 minutes or so  
and then come alive again and be fine for many hours.  When this  
happens, access to the other mounted pools still works fine and  
logged directly in to the Solaris machine I am able to access the  
file systems (pools) just fine.


Example error message:

Sep 24 03:09:44 freebsdclient kernel: nfs server solzfs-i1:/local/ 
jailextras/shire.net/irsfl: not responding
Sep 24 03:10:15 freebsdclient kernel: nfs server solzfs-i1:/local/ 
jailextras/shire.net/irsfl: not responding

Sep 24 03:12:19 freebsdclient last message repeated 4 times
Sep 24 03:14:54 freebsdclient last message repeated 5 times

I would be interested in getting feedback on what might be the  
problem and also ways to track this down etc.  Is this a known  
issue?  Have others seen the NFS server sharing ZFS time out  
(but not for all pools)?  Etc.




Could be lots of things - network partition, bad hardware,  
overloaded server, bad routers, etc.


What's the server's load like (vmstat, prstat)?  If you're banging  
on the server too hard and using up the server's resources then  
nfsd may not be able to respond to your client's requests.


The server is not doing anything except this ZFS / NFS serving, and  
only 1 client is attached to it (the one with the problems).  prstat  
shows a load of 0.00 continually, and vmstat typically looks like:


# vmstat
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr s1 s2 -- --   in   sy   cs us sy id
 0 0 0 10640580 691412 0  1  0  0  0  0  2  0 11  0  0  421   85  120  0  0 100

#




You can also grab a snoop trace to see what packets are not being  
responded too?


If I can catch it happening.  Most of the time I am not around and I  
just see it in the logs.  Sometimes it happens when I do a df -h on  
the client for example.




What are clients and local apps doing to the machine?


Almost nothing.  No local apps are running on the server.  It is  
basically just doing ZFS and NFS.


The client has 4 mounts from ZFS,  all of them very low usage.  2  
email accounts storage (imap maildir) are mounted for testing.  Each  
receives 10-100 messages a day.  1 extra storage space is mounted and  
once a day rsync copies 2 files to it in the middle of the night --  
one around 70mb and one 7mb.  The other is being used as the root for  
a FreeBSD jail which is not being used for anything.  Just proof of  
concept.  No processes are running in the jail that are doing much of  
anything to the NFS-mounted file system -- occasional log writes.




What is your server hardware (# processors, memory) - is it  
underprovisioned for what you're doing to it?


Tyan 2892 MB with a single dual core Opteron at 2.0 GHZ.  2GB memory.

Single Areca 1130 RAID card with 1GB RAM cache.  It works very well with  
ZFS without the NFS component (it has a 9-disk RAID 6 array on it).  I  
have done lots of testing with this card and Solaris, with and without  
ZFS, and it has held up very well without any sort of IO issues  
(except the fact that it does not get a flush when the system powers  
down with init 5).  The ZFS pools are currently all on this single  
disk (to be augmented later this year when more funding comes  
through to buy more stuff).


A dual port e1000g intel server card over PCIe is the Solaris side of  
the network.




How is the freeBSD NFS client code  - robust?


I have 

[zfs-discuss] Re: [Fwd: Queston: after installing SunMC 3.6.1 ability to view the ZFS gui has disappeared]

2006-09-25 Thread Stephen Talley
Arlina,

The ZFS GUI runs within the Java Web Console, so there is no port
conflict.

My guess is that the Java Web Console was upgraded to version 3.0.x,
which breaks the ZFS GUI.  Run pkginfo SUNWmcon to verify.

The bug ID for this is:

6473968 ZFS GUI does not function under Lockhart 3.0 in s10u3

The contact for this bug is Venkata Madhabhaktula
([EMAIL PROTECTED]).

Steve

Arlina Goce-Capiral wrote:

 I'm forwarding this inquiry to this alias as well, just in case somebody
 can suggest or provide any information.

 Thank you in advance,
 Arlina-

 Date: Thu, 21 Sep 2006 11:12:02 -0600
 From: Arlina Goce-Capiral [EMAIL PROTECTED]
 Subject: Queston: after installing SunMC 3.6.1 ability to view the ZFS gui has
   disappeared
 To: [EMAIL PROTECTED], [EMAIL PROTECTED]

 Customer opened a case with an issue regarding the ability of the ZFS
 gui which disappeared
 under the menu storage. This is after loading the SunMC 3.6.1.

 More informations from customer's email below:

 
 Yes, I just installed Solaris 10 6/06 on an Ultra 25 for testing, we'll
 be using ZFS on an E2900 very soon.

 I was evaluating the zfs and was looking into the SMC as well for zone
 management and loaded SMC on to the Ultra 25.

 When I started up the below link for zfs, java web console started up in
 it's place.

 Under the menu storage should have ZFS underneath it, but it wasn't
 there, only Solaris Container Manager.

 As for the SMC installation I took the defaults except for snmp.

 The disks show up via command line, however, the problem is this. The
 ZFS gui management tool and the sun management center gui both use port
 6789 as the java web console.

 After I installed SunMC, the ability to view the zfs gui has disappeared.
 I thought you could do all the above from the web port 6789, but SunMC
 seems to have 'overwritten' the zfs management gui.

 Any ideas?
 =

 TIA,
 Arlina

 NOTE: Please email me directly as i'm not on this alias.




Re: [zfs-discuss] problem ZFS / NFS from FreeBSD nfsv3 client -- periodic NFS server not resp

2006-09-25 Thread Chad Leigh -- Shire.Net LLC


On Sep 25, 2006, at 1:15 PM, Mike Kupfer wrote:


Chad == Chad Leigh -- Shire.Net LLC [EMAIL PROTECTED] writes:


Chad On Sep 25, 2006, at 12:18 PM, eric kustarz wrote:


You can also grab a snoop trace to see what packets are not being
responded too?


Chad If I can catch it happening.  Most of the time I am not  
around and

Chad I just see it in the logs.

I've attached a hack script that runs snoop in the background and
rotates the capture files.  If you start it as (for example)

bgsnoop client server

it will save the last 6 hours of capture files between the two hosts.
If you notice a problem in the logs, you can find the corresponding
capture file and extract from it what you need.
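Mike's actual script is attached rather than reproduced here; a minimal sketch
of the same idea, with made-up directory names and rotation interval, might
look like this:

  #!/bin/sh
  # rotate snoop captures between two hosts, keeping roughly the last 6 hours
  CLIENT=$1
  SERVER=$2
  DIR=/var/tmp/bgsnoop
  mkdir -p $DIR
  while true; do
      FILE=$DIR/snoop.`date +%Y%m%d%H%M`
      snoop -q -o $FILE host $CLIENT and host $SERVER &
      PID=$!
      sleep 1800                                    # 30 minutes per file
      kill $PID
      ls -t $DIR/snoop.* | tail +13 | xargs rm -f   # prune beyond 12 files
  done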


Hi Mike

Thanks.  I set this up like so

./bgsnoop.sh -d e1000g0 freebsd-internal

since my NFS traffic is not going out the default interface.  Soon  
thereafter I caught the problem.  In looking at the snoop.trace file  
I am not sure what to look for.  There seems to be no packet headers  
or time stamps or anything -- just a lot of binary data.  What am I  
looking for?


Thanks
Chad



mike


---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
chad at shire.n






[zfs-discuss] Customer problem with zfs

2006-09-25 Thread Edward Wetmore

   Good day all. Please respond to me directly as I am not on this alias.
   I have a customer who is developing his site's implementation of zfs; 
the case came to me because he is using Solaris 10 6/06 x86 on a Sun Fire 
V40z (an x86 unit). He had no problem assembling and mounting a zfs 
volume; the command set he got worked fine. He then began 
preparation of documenting and scripting, which meant he wanted to take 
the zfs volume apart and then recreate it.
   He ran zfs destroy filesystem_name (that was all he was informed 
to do), but when he tried to build it again, the system indicated the 
file system was in use. He has found he has an /etc/zfs/zpool.cache file.
   My question is (pardon my ignorance), what steps does he need to 
take to completely eliminate evidence that he previously had a zfs file 
system so that he can then build one again?

   Thanks for your time,
   Ed Wetmore
   AltPlat/Install/OS Tech Support Engineer
   Sun Microsystems


[zfs-discuss] Good PCI controllers for Nevada?

2006-09-25 Thread Peter Baer Galvin
Just thought I'd share some recent experiences. I had an Adaptec ASH-1233 PCI 
controller (based on the Silicon Image SII0680ACL144 chip) in my Nevada build 
45 system (a white box PC based on the AMD 3200+ CPU). This system is the 
backup for my main home server. Using zfs send | rsh zfs receive to copy the 
data, I was only getting about 4MB/sec to a RAIDZ1 pool of 4 X IDE 300GB drives 
(2 on the internal IDE controller and 2 on the Adaptec). 
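For context, that copy pipeline is along these lines -- the snapshot and host
names here are made up, not the ones actually used:

  zfs snapshot tank/data@backup1
  zfs send tank/data@backup1 | rsh backuphost zfs receive backup/data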

I swapped the Adaptec for the generic CompUSA PCI Ultra ATA/133 PCI 
card (SKU 293595), which is based on the ITE IT8212F. I give you the details 
because with the same disks, just swapping controllers, I'm now moving on 
average 34MB/sec.
 
 


Re: [zfs-discuss] problem ZFS / NFS from FreeBSD nfsv3 client -- periodic NFS server not resp

2006-09-25 Thread Chad Leigh -- Shire.Net LLC


On Sep 25, 2006, at 2:49 PM, Mike Kupfer wrote:


Chad == Chad Leigh -- Shire.Net LLC [EMAIL PROTECTED] writes:


Chad There seems to be no packet headers or time stamps or  
anything --

Chad just a lot of binary data.  What am I looking for?

Use snoop -i capture_file to decode the capture file.


OK, a little snoop help is required.

I ran bgsnoop as follows:

# ./bgsnoop.sh -t a -r -d e1000g0

According to the snoop man page

     -t [ r | a | d ]    Time-stamp presentation. Time-stamps
                         are accurate to within 4 microseconds.
                         The default is for times to be presented
                         in d (delta) format (the time since
                         receiving the previous packet). Option a
                         (absolute) gives wall-clock time. Option
                         r (relative) gives time relative to the
                         first packet displayed. This can be used
                         with the -p option to display time
                         relative to any selected packet.

so -t a should show wall clock time

But my feed looks like the following, and I don't see any wall-clock  
time stamps.  I need to be able to get some sort of wall-clock time stamp  
on this so that I can know where to look in my snoop dump for  
offending issues...


  1   0.0     freebsd-internal.shire.net -> bagend-i1    NFS C ACCESS3 FH=50E5 (read,lookup,modify,extend,delete,execute)
  2   0.00045 freebsd-internal.shire.net -> bagend-i1    NFS C ACCESS3 FH=339B (read,lookup,modify,extend,delete,execute)
  3   0.00019 freebsd-internal.shire.net -> bagend-i1    NFS C LOOKUP3 FH=339B 1159219290.M400972P15189_courierlock.freebsd.shire.net
  4   0.00019 freebsd-internal.shire.net -> bagend-i1    NFS C LOOKUP3 FH=339B 1159219290.M400972P15189_courierlock.freebsd.shire.net
  5   0.00026 freebsd-internal.shire.net -> bagend-i1    NFS C CREATE3 FH=339B (UNCHECKED) 1159219290.M400972P15189_courierlock.freebsd.shire.net
  6   0.00045 freebsd-internal.shire.net -> bagend-i1    NFS C ACCESS3 FH=878C (read,lookup,modify,extend,delete,execute)
  7   0.00013 freebsd-internal.shire.net -> bagend-i1    NFS C LOOKUP3 FH=50E5 tmp
  8   0.00013 freebsd-internal.shire.net -> bagend-i1    NFS C LOOKUP3 FH=339B 1159219290.M400972P15189_courierlock.freebsd.shire.net
  9   0.00019 freebsd-internal.shire.net -> bagend-i1    NFS C ACCESS3 FH=878C (read,lookup,modify,extend,delete,execute)
 10   0.00026 freebsd-internal.shire.net -> bagend-i1    NFS C ACCESS3 FH=878C (read,lookup,modify,extend,delete,execute)
 11   0.00019 freebsd-internal.shire.net -> bagend-i1    NFS C WRITE3 FH=878C at 0 for 24 (ASYNC)
 12   0.00026 freebsd-internal.shire.net -> bagend-i1    NFS C ACCESS3 FH=878C (read,lookup,modify,extend,delete,execute)
 13   0.00013 freebsd-internal.shire.net -> bagend-i1    NFS C LOOKUP3 FH=339B courier.lock
 14   0.00013 freebsd-internal.shire.net -> bagend-i1    NFS C COMMIT3 FH=878C at 0 for 24
 15   0.00032 freebsd-internal.shire.net -> bagend-i1    NFS C LINK3 FH=878C to FH=339B courier.lock
 16   0.00026 freebsd-internal.shire.net -> bagend-i1    NFS C LOOKUP3 FH=339B 1159219290.M400972P15189_courierlock.freebsd.shire.net
 17   0.00019 freebsd-internal.shire.net -> bagend-i1    NFS C REMOVE3 FH=339B 1159219290.M400972P15189_courierlock.freebsd.shire.net
 18   0.00032 freebsd-internal.shire.net -> bagend-i1    NFS C ACCESS3 FH=339B (read,lookup,modify,extend,delete,execute)
 19   0.00019 freebsd-internal.shire.net -> bagend-i1    NFS C FSSTAT3 FH=50E5
 20   0.00019 freebsd-internal.shire.net -> bagend-i1    NFS C READDIR3 FH=339B Cookie=0 for 8192
 21   0.00026 freebsd-internal.shire.net -> bagend-i1    NFS C LOOKUP3 FH=339B courier.lock
 22   0.00019 freebsd-internal.shire.net -> bagend-i1    NFS C LOOKUP3 FH=339B 1159219290.M405999P15189_imapuid_164.freebsd.shire.net
 23   0.00026 freebsd-internal.shire.net -> bagend-i1    NFS C LOOKUP3 FH=339B 1159219290.M405999P15189_imapuid_164.freebsd.shire.net
 24   0.00013 freebsd-internal.shire.net -> bagend-i1    NFS C CREATE3 FH=339B (UNCHECKED) 1159219290.M405999P15189_imapuid_164.freebsd.shire.net
 25   0.00032 freebsd-internal.shire.net -> bagend-i1    NFS C ACCESS3 FH=868C (read,lookup,modify,extend,delete,execute)
 26   0.00013 freebsd-internal.shire.net -> bagend-i1    NFS C LOOKUP3 FH=339B 1159219290.M405999P15189_imapuid_164.freebsd.shire.net
 27   0.00013 freebsd-internal.shire.net -> bagend-i1    NFS C ACCESS3 FH=EE81 (read,lookup,modify,extend,delete,execute)
 28   0.00013 freebsd-internal.shire.net -> bagend-i1    NFS C ACCESS3 FH=EE81 (read,lookup,modify,extend,delete,execute)
 29   0.05840 freebsd-internal.shire.net -> bagend-i1    NFS C ACCESS3 FH=868C (read,lookup,modify,extend,delete,execute)
 30   0.00019 freebsd-internal.shire.net -> bagend-i1    NFS C ACCESS3 

[zfs-discuss] Re: Re: Re: low disk performance

2006-09-25 Thread Gino Ruopolo
other example:

rsyncing from/to the same zpool: 

device    r/s    w/s   Mr/s   Mw/s wait actv  svc_t  %w  %b
c6       25.0  276.5    1.3    3.8  1.9 16.5   61.1   0 135
sd44      6.0  158.3    0.3    0.4  1.9 15.5  106.2  33 100
sd45      6.0   37.1    0.3    1.1  0.0  0.3    6.5   0  10
sd46      8.0   42.1    0.4    1.1  0.0  0.4    7.3   0  15
sd47      5.0   39.1    0.3    1.1  0.0  0.3    7.3   0  10

sd44 is always at 100% busy, and performance is really, really low.
Using 3 LUNs or 4 LUNs in the zpool makes no difference.

Any suggestions?
 
 


Re: [zfs-discuss] problem ZFS / NFS from FreeBSD nfsv3 client -- periodic NFS server not resp

2006-09-25 Thread Mike Kupfer
 Chad == Chad Leigh -- Shire.Net LLC [EMAIL PROTECTED] writes:

Chad so -t a should show wall clock time

The capture file always records absolute time.  So you (just) need to
use -t a when you decode the capture file.
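For example (the capture file name here is made up):

  # decode with wall-clock timestamps, showing only NFS RPC traffic
  snoop -i /var/tmp/snoop.trace -t a rpc nfs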

Sorry for not making that clear earlier.

mike


Re: [zfs-discuss] problem ZFS / NFS from FreeBSD nfsv3 client -- periodic NFS server not resp

2006-09-25 Thread Chad Leigh -- Shire.Net LLC


On Sep 25, 2006, at 3:54 PM, Mike Kupfer wrote:


Chad == Chad Leigh -- Shire.Net LLC [EMAIL PROTECTED] writes:


Chad so -t a should show wall clock time

The capture file always records absolute time.  So you (just) need to
use -t a when you decode the capture file.

Sorry for not making the clear earlier.


OK, thanks.  Sorry for being such a noob with snoop.  I guess it is  
kind of obvious now that you would put that option on the snoop invocation  
that reads the file and outputs the human-readable version, and not on  
the one that saves things away...


This appears to be the only traffic having to do with the hanging  
server (there is lots of other traffic for other zfs pools that are  
served over nfs):


  68 15:29:27.53298 freebsd-internal.shire.net -> solaris-zfs-i1    NFS C FSSTAT3 FH=84EC
  72 15:29:28.54294 freebsd-internal.shire.net -> solaris-zfs-i1    NFS C FSSTAT3 FH=84EC (retransmit)
  73 15:29:29.54312 freebsd-internal.shire.net -> solaris-zfs-i1    NFS C FSSTAT3 FH=84EC (retransmit)
  74 15:29:31.54356 freebsd-internal.shire.net -> solaris-zfs-i1    NFS C FSSTAT3 FH=84EC (retransmit)
  75 15:29:35.54443 freebsd-internal.shire.net -> solaris-zfs-i1    NFS C FSSTAT3 FH=84EC (retransmit)
  76 15:29:43.54610 freebsd-internal.shire.net -> solaris-zfs-i1    NFS C FSSTAT3 FH=84EC (retransmit)
5890 15:29:59.55835 freebsd-internal.shire.net -> solaris-zfs-i1    NFS C FSSTAT3 FH=84EC
5993 15:30:31.56506 freebsd-internal.shire.net -> solaris-zfs-i1    NFS C FSSTAT3 FH=84EC (retransmit)
6124 15:31:35.58971 freebsd-internal.shire.net -> solaris-zfs-i1    NFS C FSSTAT3 FH=84EC (retransmit)
6346 15:32:44.23048 freebsd-internal.shire.net -> solaris-zfs-i1    NFS C FSSTAT3 FH=84EC
6347 15:32:44.23585 freebsd-internal.shire.net -> solaris-zfs-i1    NFS C FSSTAT3 FH=84EC (retransmit)
6755 15:34:40.56138 freebsd-internal.shire.net -> solaris-zfs-i1    NFS C FSSTAT3 FH=84EC


It comes alive again right about packet 6347, 15:32:44.23585, based on  
matching log entries and this snoop.


snoop does not show me the reply packets going back.  What do I need  
to do to go both ways?


Thanks
Chad




mike


---
Chad Leigh -- Shire.Net LLC
Your Web App and Email hosting provider
chad at shire.net







Re: [zfs-discuss] Customer problem with zfs

2006-09-25 Thread Wee Yeh Tan

Edward,

/etc/zfs/zpool.cache contains data pointing to the devices involved in a
zpool.  Changes to ZFS datasets are reflected in the actual zpool, so
destroying a zfs dataset should not change zpool.cache.

zfs destroy is the correct command to destroy a file system.
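For reference, a minimal sketch of the distinction, with a hypothetical pool
name:

  # destroys just one file system (dataset) inside the pool
  zfs destroy tank/myfs

  # destroys the whole pool (and drops it from /etc/zfs/zpool.cache)
  zpool destroy tank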

It will be easier to help if we know:
- the output of 'zfs list' at various stages
- the command he executed that failed.


--
Just me,
Wire ...

On 9/26/06, Edward Wetmore [EMAIL PROTECTED] wrote:

Good day all. Please respond to me directly as I am not on this alias.
I have a customer who is develping his site's implementation of zfs,
my case come to me because he is using Solaris 10 6/06 x86 on a Sun Fire
V40z (an x86 unit). He had no problem assembling and mounting a zfs
volume, the command set he got worked fine. He then then began
preperation of documenting and scripting, which meant he wanted to take
the zfs volume apart and then recreate it.
He ran zfs destroy filesystem_name (that was all he was informed
to do), but when he tried to build it again, the system indicated the
file system was in use. He has found he has a /etc/zfs/zpool.cache.
My question is (pardon my ignorance), what steps does he need to
take to completely eliminate evidence that he previously had a zfs file
system so that he can then build one again?
Thanks for your time,
Ed Wetmore
AltPlat/Install/OS Tech Support Engineer
Sun Microsystems

