Re: [zfs-discuss] Re: copying a large file..

2006-10-29 Thread eric kustarz

Pavan Reddy wrote:

This is the time it took to move the file:

The machine is an Intel P4 with 512MB RAM.


bash-3.00# time mv ../share/pav.tar .

real    1m26.334s
user    0m0.003s
sys     0m7.397s


bash-3.00# ls -l pav.tar
-rw-r--r--   1 root root 516628480 Oct 29 19:30 pav.tar


A similar move on my Mac OS X took this much time:


pavan-mettus-computer:~/public pavan$ time mv pav.tar.gz ./Burn\ Folder.fpbf/

real    0m0.006s
user    0m0.001s
sys     0m0.004s

pavan-mettus-computer:~/public/Burn Folder.fpbf pavan$ ls -l pav.tar.gz
-rw-r--r--   1 pavan  pavan  347758518 Oct 29 19:09 pav.tar.gz

NOTE: The file size here is 347MB, whereas the previous one was 516MB. Yet the time taken is still huge by comparison.


It's an x86 machine running Nevada build 51.  More info about the disk and pool:
bash-3.00# zpool list
NAME      SIZE   USED   AVAIL    CAP  HEALTH  ALTROOT
mypool   29.2G   789M   28.5G     2%  ONLINE  -
bash-3.00# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
mypool              789M  28.0G  24.5K  /mypool
mypool/nas          789M  28.0G  78.8M  /export/home
mypool/nas/pavan    710M  28.0G   710M  /export/home/pavan
mypool/nas/rajeev  24.5K  28.0G  24.5K  /export/home/rajeev
mypool/nas/share   24.5K  28.0G  24.5K  /export/home/share

It took a lot of time to move the file from /export/home/pavan/ to the
/export/home/share directory.


You're moving that file from one filesystem to another, so all of the
file's data has to be copied, not just a few metadata blocks.  If you mv
it within the same filesystem, it will be quick (as in your OS X example).
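
The difference is visible at the syscall level. A minimal check, reusing the paths from the zfs list above (the truss flags and the destination name are just one plausible choice): mv(1) first tries rename(2), which only touches metadata; when rename() fails with EXDEV across filesystem boundaries, mv falls back to a full data copy plus unlink.

truss -t rename mv /export/home/pavan/pav.tar /export/home/pavan/pav2.tar
# same dataset: one successful rename(), returns almost instantly

truss -t rename mv /export/home/pavan/pav.tar /export/home/share/
# different datasets: rename() returns Err#18 EXDEV, then mv copies the
# data block by block and unlinks the source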


eric



I was not doing any operation other than the move command.

There are no files in those directories other than this one.

It has 512MB of physical memory.  The Mac machine has 1GB of RAM.

No snapshots were taken.


iostat and vmstat info:
bash-3.00# iostat -x
                  extended device statistics
device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
cmdk0        3.1    3.0  237.9  132.8  0.6  0.1  112.6   2   3
fd0          0.0    0.0    0.0    0.0  0.0  0.0 3152.2   0   0
sd0          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd1          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
bash-3.00# vmstat
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr cd f0 s0 s1   in   sy   cs us sy id
 0 0 0 2083196 111668 5  25 13  3 13  0 44  6  0 -1 -0  296  276  231  1  2 97


-Pavan
 
 
This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




[zfs-discuss] Re: copying a large file..

2006-10-29 Thread Pavan Reddy
This is the time it took to move the file:

The machine is an Intel P4 with 512MB RAM.

bash-3.00# time mv ../share/pav.tar .

real    1m26.334s
user    0m0.003s
sys     0m7.397s


bash-3.00# ls -l pav.tar
-rw-r--r--   1 root root 516628480 Oct 29 19:30 pav.tar


A similar move on my Mac OS X took this much time:


pavan-mettus-computer:~/public pavan$ time mv pav.tar.gz ./Burn\ Folder.fpbf/

real    0m0.006s
user    0m0.001s
sys     0m0.004s

pavan-mettus-computer:~/public/Burn Folder.fpbf pavan$ ls -l pav.tar.gz
-rw-r--r--   1 pavan  pavan  347758518 Oct 29 19:09 pav.tar.gz

NOTE: The file size here is 347MB, whereas the previous one was 516MB. Yet
the time taken is still huge by comparison.

It's an x86 machine running Nevada build 51.  More info about the disk and pool:
bash-3.00# zpool list
NAME      SIZE   USED   AVAIL    CAP  HEALTH  ALTROOT
mypool   29.2G   789M   28.5G     2%  ONLINE  -
bash-3.00# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
mypool              789M  28.0G  24.5K  /mypool
mypool/nas          789M  28.0G  78.8M  /export/home
mypool/nas/pavan    710M  28.0G   710M  /export/home/pavan
mypool/nas/rajeev  24.5K  28.0G  24.5K  /export/home/rajeev
mypool/nas/share   24.5K  28.0G  24.5K  /export/home/share

It took a lot of time to move the file from /export/home/pavan/ to the
/export/home/share directory.
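
Note that in the zfs list output above, /export/home/pavan and /export/home/share are separate ZFS datasets, i.e. separate filesystems, so mv cannot use a metadata-only rename(2) and has to copy the data. One quick way to confirm that two paths live on different filesystems:

df -h /export/home/pavan /export/home/share
# different names in the Filesystem column mean a cross-filesystem mv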

I was not doing any operation other than the move command.

There are no files in those directories other than this one.

It has 512MB of physical memory.  The Mac machine has 1GB of RAM.

No snapshots were taken.

iostat and vmstat info:
bash-3.00# iostat -x
                  extended device statistics
device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
cmdk0        3.1    3.0  237.9  132.8  0.6  0.1  112.6   2   3
fd0          0.0    0.0    0.0    0.0  0.0  0.0 3152.2   0   0
sd0          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd1          0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
bash-3.00# vmstat
 kthr      memory            page            disk          faults      cpu
 r b w   swap  free  re  mf pi po fr de sr cd f0 s0 s1   in   sy   cs us sy id
 0 0 0 2083196 111668 5  25 13  3 13  0 44  6  0 -1 -0  296  276  231  1  2 97


-Pavan
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Very high system loads with ZFS

2006-10-29 Thread eric kustarz

Daniel Rock wrote:

Peter Guthrie wrote:

> So far I've seen *very* high loads twice using ZFS which does not
> happen when the same task is implemented with UFS

This is a known bug in the SPARC IDE driver.

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6421427

You could try the following workaround until a fix has been delivered:

http://groups.google.com/group/comp.unix.solaris/browse_frm/thread/bed12f2e5085e69b/bc5b8afba2b477d7?lnk=gst&q=zfs+svm&rnum=2#bc5b8afba2b477d7 



And the fix was put back a couple of days ago (it will be in snv_52).

eric

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] copying a large file..

2006-10-29 Thread Erblichs
Hi,

How much time is a "long time"?

Second, had a snapshot been taken after the file
was created?

Are the src and dst directories in the same slice?

What other work was being done at the time of the move?

Were there numerous files in the src or dst directories?

How much phys mem is in your system?

Does an equivalent move take a drastically shorter amount of time if
done right after a reboot?

Mitchell Erblich
--



Pavan Reddy wrote:
> 
> The 'mv' command took a very long time to copy a large file from one ZFS
> directory to another. The directories share the same pool and filesystem.
> I had a 385MB file in one directory and wanted to move it to a different
> directory.  It took a long time to move.  Any particular reasons?  There
> is no RAID involved.
> 
> -Pavan
> 
> 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] copying a large file..

2006-10-29 Thread Nathan Kroenert
Some details on what "a long time" means might be of help to us...

Also - Some details on what type of hardware you are using...

There are a number of known issues, as well as a number of known
physical limitations that might get in your way. (eg: using multiple
slices of the same disk in the same pool... ;)

Perhaps if you could include as much detail as possible, we might be
able to help a little more...

Don't forget to include what your expectations were for the time of the
copy (and the basis of those expectations!), versus reality...

Some vmstat and iostat -x output would also be helpful... one way to
capture them is sketched below.
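
For instance, something like this while reproducing the slow mv (the interval and the file path are only examples, reusing the path from the original post):

iostat -x 5 > /tmp/iostat.out 2>&1 &
vmstat 5 > /tmp/vmstat.out 2>&1 &
time mv /export/home/pavan/pav.tar /export/home/share/
kill %1 %2    # stop the background collectors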

Cheers,

Nathan.

On Mon, 2006-10-30 at 14:50, Pavan Reddy wrote:
> The 'mv' command took a very long time to copy a large file from one ZFS
> directory to another. The directories share the same pool and filesystem.
> I had a 385MB file in one directory and wanted to move it to a different
> directory.  It took a long time to move.  Any particular reasons?  There
> is no RAID involved.
>  
> 
> -Pavan
>  
> 
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
-- 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] copying a large file..

2006-10-29 Thread Pavan Reddy
The 'mv' command took a very long time to copy a large file from one ZFS
directory to another. The directories share the same pool and filesystem.
I had a 385MB file in one directory and wanted to move it to a different
directory.  It took a long time to move.  Any particular reasons?  There
is no RAID involved.

-Pavan
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Re: Snapshots impact on performance

2006-10-29 Thread Jeff Bonwick
> Nice, this is definitely pointing the finger more definitively.  Next 
> time could you try:
> 
> dtrace -n '[EMAIL PROTECTED](20)] = count()}' -c 'sleep 5'
> 
> (just send the last 10 or so stack traces)
> 
> In the meantime I'll talk with our SPA experts and see if I can figure
> out how to fix this...

By any chance is the pool fairly close to full?  The fuller it gets,
the harder it becomes to find long stretches of free space.

Jeff
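
A quick way to check fullness (the pool name here is hypothetical, and zdb output varies between builds, so treat this as a sketch):

zpool list          # the CAP column shows how full the pool is
zdb -m tank         # per-metaslab free space, to spot fragmentation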

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


CLARIFICATION: Re: [zfs-discuss] Current status of a ZFS root

2006-10-29 Thread Richard Elling - PAE

CLARIFICATION below.

Richard Elling - PAE wrote:

Chris Adams wrote:
We're looking at replacing a current Linux server with a T1000 + a 
fiber channel enclosure to take advantage of ZFS. Unfortunately, the 
T1000 only has a single drive bay (!) which makes it impossible to 
follow our normal practice of mirroring the root file system; 
naturally the idea of using that big ZFS pool is appealing.


Note: the original T1000 had the single-disk limit.  This was unfortunate,
and a sales inhibitor.  Today, you have the option of single (SATA) or
dual (SAS) boot disks, with hardware RAID.  See:
http://www.sun.com/servers/coolthreads/t1000/specs.xml

Is anyone actually booting ZFS in production and, if so, would you 
recommend this approach?


At this time, I would not recommend it for SPARC systems.
 -- richard


Should read:
At this time, I would not recommend using ZFS for root on SPARC systems.
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Current status of a ZFS root

2006-10-29 Thread Matt Ingenthron

Richard Elling - PAE wrote:


> Is anyone actually booting ZFS in production and, if so, would you
> recommend this approach?

> At this time, I would not recommend it for SPARC systems.

I'll note that I've been using S10U2 on a similar system (two drives)
with slices.  I currently have S10U2 with space for doing a live upgrade
when I decide to absorb an update.  Then I have other slices on the
drives that I've turned into a mirrored ZFS pool.

I use UFS with Solaris Volume Manager mirroring for the global zone
(yes, technically inferior to ZFS, but it has been in production on
systems for over a decade, so it's well understood if I get into
trouble), and put all of the production workload in zones (on ZFS
filesystems) with datasets delegated to the zones from there.
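
For reference, a rough sketch of that kind of layout (device names, metadevice numbers, and the zone name are all hypothetical, and real root mirroring needs metadb/metaroot steps omitted here):

# UFS root mirrored with SVM
metainit d10 1 1 c0t0d0s0
metainit d20 1 1 c0t1d0s0
metainit d0 -m d10              # one-way mirror first
metattach d0 d20                # attach the second submirror

# remaining slices become a mirrored ZFS pool for the zones
zpool create tank mirror c0t0d0s3 c0t1d0s3
zfs create tank/zones
zonecfg -z web 'create; set zonepath=/tank/zones/web; commit'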


I've not had any failures yet, but I've had all of the ZFS functionality
I need (mainly snapshots and compression) along with something reliable.


- Matt

--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Client Solutions, Systems Practice
http://blogs.sun.com/mingenthron/
email: [EMAIL PROTECTED] Phone: 310-242-6439



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Current status of a ZFS root

2006-10-29 Thread Richard Elling - PAE

Chris Adams wrote:
We're looking at replacing a current Linux server with a T1000 + a fiber channel enclosure 
to take advantage of ZFS. Unfortunately, the T1000 only has a single drive bay (!) which 
makes it impossible to follow our normal practice of mirroring the root file system; 
naturally the idea of using that big ZFS pool is appealing.


Note: the original T1000 had the single disk limit.  This was unfortunate, and a
sales inhibitor.  Today, you have the option of single (SATA) or dual (SAS) boot
disks, with hardware RAID.  See:
http://www.sun.com/servers/coolthreads/t1000/specs.xml


Is anyone actually booting ZFS in production and, if so, would you recommend 
this approach?


At this time, I would not recommend it for SPARC systems.
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Very high system loads with ZFS

2006-10-29 Thread Daniel Rock

Peter Guthrie wrote:

> So far I've seen *very* high loads twice using ZFS which does not
> happen when the same task is implemented with UFS

This is a known bug in the SPARC IDE driver.

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6421427

You could try the following workaround until a fix has been delivered:

http://groups.google.com/group/comp.unix.solaris/browse_frm/thread/bed12f2e5085e69b/bc5b8afba2b477d7?lnk=gst&q=zfs+svm&rnum=2#bc5b8afba2b477d7


Daniel

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Very high system loads with ZFS

2006-10-29 Thread Peter Guthrie
I have a home email/fileserver based on a Netra X1 with two 120GB disks and
2GB RAM, which I updated to S10 6/06 so that I could use ZFS for mirroring
and snapshotting a data slice. Both disks are partitioned identically:
slice 0 is a 20GB UFS root, mirrored with SVM; slice 1 is swap; slice 3 is
90GB and is a ZFS mirrored pool; slice 7 is 50MB for the SVM metadb.
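
The data-slice part of that setup would have been created with something like this (the Netra X1 device names are a guess):

zpool create data mirror c0t0d0s3 c0t2d0s3   # slice 3 of each 120GB disk
zfs create data/home
zfs snapshot data/home@backup                # the snapshotting mentioned above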

So far I've seen *very* high loads twice using ZFS which does not happen
when the same task is implemented with UFS:

1. My email is served to clients by the Dovecot IMAP daemon. Mailboxes are
stored in users' home directories (there are only 5 mailboxes, and rarely
more than one is accessed at a time). Once /home was moved to ZFS, I noticed
that as soon as the first IMAP client access started, the load (as measured
with uptime) shot up to around 15 and stayed there! It's normally around 0.5.

2. I created a zone with the storage path in a new ZFS volume. Installing
the zone took several hours, and the load again was around 15. The first
boot of this new zone took 16 hours to complete the svcadm initialisation,
during which time the load was between 12 and 15. I've just deleted the
zone and ZFS volume and recreated it on UFS; it took 30 minutes to install
(load peaked at 5) and 25 minutes to complete the first boot.

I know it's a low-spec machine, but it's been fine running S10 with UFS.
Obviously the checksumming in ZFS will add some CPU overhead (I'm not using
any compression). Are there any known problems with ZFS causing high CPU
loads?

Pete
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss