Re: disk fragmentation, 0%?

2005-08-16 Thread Lowell Gilbert
Don't top-post, please.

Lei Sun [EMAIL PROTECTED] writes:

 Then, my other question is,
 
 If the file space allocation works like Glenn said earlier, how come
 the exact same files, from two different installations using the exact
 same procedures, can end up with different fragmentation?
 
 in the atacontrol raid1 failure case, /dev/ar0s1a: ... 0.5% fragmentation
 in the new build case, /dev/ar0s1a: ... 0.0% fragmentation
 
 That doesn't seem to make a lot of sense.

There are lots of possible explanations, including (non-harmful) race
conditions.  Is there some reason you care?
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: disk fragmentation, 0%?

2005-08-16 Thread cpghost
On Sun, Aug 14, 2005 at 01:30:41PM -0700, Glenn Dawson wrote:
 From the original message:
 
 Filesystem     Size    Used   Avail Capacity  Mounted on
 /dev/ar0s1e    248M   -278K    228M    -0%    /tmp
 
 This shows that /tmp is empty.  If the reserved space was being encroached 
 upon, it would show > 100% capacity, and available bytes would go negative, 
 not bytes used.
 
 It would look something like this:
 
 Filesystem     Size    Used   Avail Capacity  Mounted on
 /dev/ad0s1a    248M    238M    -10M   105%    /
 
 I've never seen the capacity go negative before, which is why I suggested 
 someone else might know the answer.

Oops, yes, that's really weird. It's so unusual that I didn't notice
it the first time. Could that be some counter overflowing?

 -Glenn

Regards,
-cpghost.

-- 
Cordula's Web. http://www.cordula.ws/


Re: disk fragmentation, 0%?

2005-08-15 Thread Jerry McAllister
 
 Thanks for the good answers.
 
 But can anyone tell me why the capacity is going negative? and not full?
 
  Filesystem     Size    Used   Avail Capacity  Mounted on
  /dev/ar0s1e    248M   -278K    228M    -0%    /tmp

As someone mentioned, there is a FAQ on this.   You should read it.

It is going negative because you have used more than the nominal
capacity of the slice.   The nominal capacity is the total space
minus the reserved proportion (usually 8%) that is held out.
Root is able to write to that space and you have done something
that got root to write beyond the nominal space.   
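
(An illustration of my own, not from Jerry's post: assuming df computes
Capacity as used / (used + avail), with Avail being what remains after the
reserve is subtracted, a quick Python sketch shows why Capacity can climb
past 100% while Used itself stays positive.)

# Toy model of df's Used/Avail/Capacity columns on UFS, assuming
# Capacity = used / (used + avail) and Avail = total - reserve - used.
def df_columns(total_kb, used_kb, minfree=0.08):
    reserve_kb = total_kb * minfree
    avail_kb = total_kb - reserve_kb - used_kb   # goes negative once the reserve is used
    capacity = 100.0 * used_kb / (used_kb + avail_kb)
    return used_kb, avail_kb, capacity

# A 248M (253952K) partition that root has filled into the reserve:
used, avail, cap = df_columns(253952, 253952 * 0.95)
print(f"Used={used:.0f}K  Avail={avail:.0f}K  Capacity={cap:.1f}%")
# Avail comes out negative and Capacity a little over 100%; Used stays
# positive, which is why a negative Used (like the -278K quoted above)
# looks different from the usual reserve case.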

jerry

 
 Thanks a lot
 
 Lei
 
 On 8/14/05, Glenn Dawson [EMAIL PROTECTED] wrote:
  At 12:18 PM 8/14/2005, cpghost wrote:
  On Sun, Aug 14, 2005 at 12:09:19AM -0700, Glenn Dawson wrote:
2. How come /tmp is -0% in size? -278K? What had happened? as I have
never experienced this in the previous installs on the exact same
hardware.
   
Not sure about that one.  Maybe someone else has an answer.
  
  This is a FAQ.
  
  The available space is always computed after subtracting some space
  that would be only available to root (typically around 5% or 10%
  of the partition size).
  
  The default is 8%.
  
This free space is necessary to avoid internal
  fragmentation and to keep the file system going. Root may be able
  to borrow some space from this (in which case the capacity goes
  below 0%), but it is not advisable to keep the file system so full,
  so it should be only for a limited period of time.
  
  The reason for having the reserved space is to allow the functions that
  allocate space to be able to find contiguous free space.  When the disk is
  nearly full it takes longer and longer to locate contiguous space, which
  can lead to performance problems.
  
  
  In your example, you're 278K over the limit; and should delete some
  files to make space ASAP. Should /tmp fill up more, it will soon become
  inoperable.
  
   From the original message:
  
   Filesystem     Size    Used   Avail Capacity  Mounted on
   /dev/ar0s1e    248M   -278K    228M    -0%    /tmp
  
  This shows that /tmp is empty.  If the reserved space was being encroached
  upon, it would show > 100% capacity, and available bytes would go negative,
  not bytes used.
  
  It would look something like this:
  
   Filesystem     Size    Used   Avail Capacity  Mounted on
   /dev/ad0s1a    248M    238M    -10M   105%    /
  
  I've never seen the capacity go negative before, which is why I suggested
  someone else might know the answer.
  
  -Glenn
  
 
 
 



Re: disk fragmentation, 0%?

2005-08-15 Thread Freminlins
On 8/15/05, Jerry McAllister [EMAIL PROTECTED] wrote:
 

 As someone mentioned, there is a FAQ on this.   You should read it.
 
 It is going negative because you have used more than the nominal
 capacity of the slice.   The nominal capacity is the total space
 minus the reserved proportion (usually 8%) that is held out.
 Root is able to write to that space and you have done something
 that got root to write beyond the nominal space.

I'm not sure you are right in this case. I think you need to re-read
the post. I've quoted the relevant part here:
 
  Filesystem     Size    Used   Avail Capacity  Mounted on
  /dev/ar0s1e    248M   -278K    228M    -0%    /tmp

Looking at how the columns line up, I have to state that I too have
never seen this behaviour.  As an experiment I over-filled a file
system, and here are the results:

Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ad0s1f    965M    895M   -7.4M   101%    /tmp

Note capacity is not negative. So that makes three of us in this
thread who have not seen negative capacity on UFS.

I have seen negative capacity when running an old version of FreeBSD
with a very large NFS mount (not enough bits in statfs if I remember
correctly).

 jerry

Frem.


Re: disk fragmentation, 0%?

2005-08-15 Thread Lei Sun
This happened after I tested atacontrol by rebuilding the raid1.

The /tmp partition doesn't have anything in it but several empty directories.

And I have the clear-/tmp directive in rc.conf, which cleans up /tmp
every time the system boots.

So that was really weird, as it never happened this way the previous
time I rebuilt the raid1.

Thanks

Lei

On 8/15/05, Freminlins [EMAIL PROTECTED] wrote:
 On 8/15/05, Jerry McAllister [EMAIL PROTECTED] wrote:
  
 
  As someone mentioned, there is a FAQ on this.   You should read it.
 
  It is going negative because you have used more than the nominal
  capacity of the slice.   The nominal capacity is the total space
  minus the reserved proportion (usually 8%) that is held out.
  Root is able to write to that space and you have done something
  that got root to write beyond the nominal space.
 
 I'm not sure you are right in this case. I think you need to re-read
 the post. I've quoted the relevant part here:
 
   Filesystem     Size    Used   Avail Capacity  Mounted on
   /dev/ar0s1e    248M   -278K    228M    -0%    /tmp
 
 Looking at how the columns line up, I have to state that I too have
 never seen this behaviour.  As an experiment I over-filled a file
 system, and here are the results:
 
 Filesystem     Size    Used   Avail Capacity  Mounted on
 /dev/ad0s1f    965M    895M   -7.4M   101%    /tmp
 
 Note capacity is not negative. So that makes three of us in this
 thread who have not seen negative capacity on UFS.
 
 I have seen negative capacity when running an old version of FreeBSD
 with a very large NFS mount (not enough bits in statfs if I remember
 correctly).
 
  jerry
 
 Frem.



Re: disk fragmentation, 0%?

2005-08-15 Thread Kris Kennaway
On Mon, Aug 15, 2005 at 09:20:12AM -0400, Jerry McAllister wrote:
  
  Thanks for the good answers.
  
  But can anyone tell me why the capacity is going negative? and not full?
  
   Filesystem     Size    Used   Avail Capacity  Mounted on
   /dev/ar0s1e    248M   -278K    228M    -0%    /tmp
 
 As someone mentioned, there is a FAQ on this.   You should read it.
 

In fact, you're both wrong, because that's clearly not what's going on
here (capacity < 0, not capacity > 100!)

The only thing I can think of is that you have some filesystem
corruption on this partition that is confusing the stats.  Try
dropping to single-user mode and running fsck -f /tmp.

Kris




Re: disk fragmentation, 0%?

2005-08-15 Thread Lei Sun
Thanks All,

I think Kris's suggestion worked. When I was rebuilding with
atacontrol, I remember it failed once, and I had a lot of problems
trying to reboot and unmount the /tmp directory.

So after I rebuilt the array, /tmp somehow looked clean to the OS and
didn't get checked.

So somehow the stats were not showing the correct information.

I have already rebuilt the machine; all of the effects from the
atacontrol array rebuild are gone now, and it seems like everything is
back to normal.

Capacity is right, Used is right, Avail is right, and all 0.0% fragmentation.

Then, my other question is,

If the file space allocation works like Glenn said earlier, how come
the exact same files, from two different installations using the exact
same procedures, can end up with different fragmentation?

in the atacontrol raid1 failure case, /dev/ar0s1a: ... 0.5% fragmentation
in the new build case, /dev/ar0s1a: ... 0.0% fragmentation

That doesn't seem to make a lot of sense.

Thanks again

Lei

On 8/15/05, Kris Kennaway [EMAIL PROTECTED] wrote:
 On Mon, Aug 15, 2005 at 09:20:12AM -0400, Jerry McAllister wrote:
  
   Thanks for the good answers.
  
   But can anyone tell me why the capacity is going negative? and not full?
  
    Filesystem     Size    Used   Avail Capacity  Mounted on
    /dev/ar0s1e    248M   -278K    228M    -0%    /tmp
 
  As someone mentioned, there is a FAQ on this.   You should read it.
 
 
 In fact, you're both wrong, because that's clearly not what's going on
 here (capacity < 0, not capacity > 100!)
 
 The only thing I can think of is that you have some filesystem
 corruption on this partition that is confusing the stats.  Try
 dropping to single-user mode and running fsck -f /tmp.
 
 Kris
 
 



disk fragmentation, 0%?

2005-08-14 Thread Lei Sun
Hi,

I know this question has been raised a lot of times, and most people
don't think it is necessary to defragment UFS. From previous posts, I
got to know that sometimes the disk usage can be more than 100%.

But...

I got ...

/dev/ar0s1a: ... 0.5% fragmentation
/dev/ar0s1e: ... 0.0% fragmentation
/dev/ar0s1f: ... 0.0% fragmentation
/dev/ar0s1d: ... 0.1% fragmentation

Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ar0s1a    248M     53M    175M    23%    /
devfs          1.0K    1.0K      0B   100%    /dev
/dev/ar0s1e    248M   -278K    228M    -0%    /tmp
/dev/ar0s1f    221G    1.4G    202G     1%    /usr
/dev/ar0s1d    248M     30M    197M    13%    /var

My questions:
1. How do I get /dev/ar0s1a to 0.0% fragmentation the clean way, if I
really wanted to?

2. How come /tmp shows -0% capacity and -278K used? What happened? I have
never experienced this in previous installs on the exact same
hardware.

Thanks 

Lei


Re: disk fragmentation, 0%?

2005-08-14 Thread Glenn Dawson

At 11:54 PM 8/13/2005, Lei Sun wrote:

Hi,

I know this question has been raised a lot of times, and most of
people don't think it is necessary to defragment ufs, and from the
previous posts, I got to know there are sometimes, disksize can be
more than 100%

But...

I got ...

/dev/ar0s1a: ... 0.5% fragmentation
/dev/ar0s1e: ... 0.0% fragmentation
/dev/ar0s1f: ... 0.0% fragmentation
/dev/ar0s1d: ... 0.1% fragmentation

Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ar0s1a    248M     53M    175M    23%    /
devfs          1.0K    1.0K      0B   100%    /dev
/dev/ar0s1e    248M   -278K    228M    -0%    /tmp
/dev/ar0s1f    221G    1.4G    202G     1%    /usr
/dev/ar0s1d    248M     30M    197M    13%    /var

My questions:
1. How do I make /dev/ar0s1a 0.0% fragmentation the clean way? If I
really wanted to?


You don't.  The term fragmentation does not mean the same thing in
FreeBSD that it does in other OSes (i.e., Windows).


Fragmentation in FreeBSD refers to blocks that have not been fully 
allocated.  For example, if I have a file system that has 16K blocks, and 
2K fragments (think of fragments as sub-blocks if it helps), and I save an 
18K file, it will occupy 1 block and 1 fragment from the next block.  That 
second block then is said to be fragmented.
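
(A quick sketch of my own, not from Glenn's post: the same 18K example
worked out numerically, assuming 16K blocks and 2K fragments.)

# Worked version of the example above: an 18K file on a filesystem with
# 16K blocks and 2K fragments. The tail that does not fill a whole block
# goes into fragments.
import math

BLOCK = 16 * 1024
FRAG = 2 * 1024

def allocation(file_bytes):
    full_blocks = file_bytes // BLOCK
    tail = file_bytes - full_blocks * BLOCK
    frags = math.ceil(tail / FRAG)       # fragments needed for the tail
    return full_blocks, frags

print(allocation(18 * 1024))             # -> (1, 1): one block plus one fragment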


In the windows world, fragmentation refers to files which occupy 
non-contiguous groups of blocks.  For example, you might have 5 blocks in a 
row, and then have to move to another part of the disk to read the next 5 
blocks.




2. How come /tmp is -0% in size? -278K? What had happened? as I have
never experienced this in the previous installs on the exact same
hardware.


Not sure about that one.  Maybe someone else has an answer.

-Glenn



Thanks

Lei




Re: disk fragmentation, 0%?

2005-08-14 Thread cpghost
On Sun, Aug 14, 2005 at 12:09:19AM -0700, Glenn Dawson wrote:
 2. How come /tmp is -0% in size? -278K? What had happened? as I have
 never experienced this in the previous installs on the exact same
 hardware.
 
 Not sure about that one.  Maybe someone else has an answer.

This is a FAQ.

The available space is always computed after subtracting some space
that would be only available to root (typically around 5% or 10%
of the partition size). This free space is necessary to avoid internal
fragmentation and to keep the file system going. Root may be able
to borrow some space from this (in which case the capacity goes
below 0%), but it is not advisable to keep the file system so full,
so it should be only for a limited period of time.

In your example, you're 278K over the limit and should delete some
files to make space ASAP. Should /tmp fill up more, it will soon become
inoperable.

 -Glenn

-cpghost.

-- 
Cordula's Web. http://www.cordula.ws/


Re: disk fragmentation, 0%?

2005-08-14 Thread Glenn Dawson

At 12:18 PM 8/14/2005, cpghost wrote:

On Sun, Aug 14, 2005 at 12:09:19AM -0700, Glenn Dawson wrote:
 2. How come /tmp is -0% in size? -278K? What had happened? as I have
 never experienced this in the previous installs on the exact same
 hardware.

 Not sure about that one.  Maybe someone else has an answer.

This is a FAQ.

The available space is always computed after subtracting some space
that would be only available to root (typically around 5% or 10%
of the partition size).


The default is 8%.


 This free space is necessary to avoid internal
fragmentation and to keep the file system going. Root may be able
to borrow some space from this (in which case the capacity goes
below 0%), but it is not advisable to keep the file system so full,
so it should be only for a limited period of time.


The reason for having the reserved space is to allow the functions that 
allocate space to be able to find contiguous free space.  When the disk is 
nearly full it takes longer and longer to locate contiguous space, which 
can lead to performance problems.
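
(A toy sketch of that effect, my own and nothing like the real UFS
allocator: a first-fit scan of a random free/used bitmap has to examine
more and more blocks to find a contiguous free run as the fill level rises.)

import random

def scan_for_run(bitmap, run_len):
    # Return how many blocks a first-fit scan examined before finding
    # run_len contiguous free blocks (or the whole bitmap if none exist).
    run = 0
    for i, used in enumerate(bitmap):
        run = 0 if used else run + 1
        if run == run_len:
            return i + 1
    return len(bitmap)

random.seed(1)
nblocks = 100_000
for fill in (0.50, 0.90, 0.99):
    bitmap = [random.random() < fill for _ in range(nblocks)]
    print(f"{fill:.0%} full -> examined {scan_for_run(bitmap, 4)} blocks")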




In your example, you're 278K over the limit; and should delete some
files to make space ASAP. Should /tmp fill up more, it will soon become
inoperable.


From the original message:

Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ar0s1e    248M   -278K    228M    -0%    /tmp

This shows that /tmp is empty.  If the reserved space was being encroached 
upon, it would show > 100% capacity, and available bytes would go negative, 
not bytes used.


It would look something like this:

Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ad0s1a    248M    238M    -10M   105%    /

I've never seen the capacity go negative before, which is why I suggested 
someone else might know the answer.


-Glenn



Re: disk fragmentation, 0%?

2005-08-14 Thread Lei Sun
Thanks for the good answers.

But can anyone tell me why the capacity is going negative, and not full?

 Filesystem     Size    Used   Avail Capacity  Mounted on
 /dev/ar0s1e    248M   -278K    228M    -0%    /tmp

Thanks a lot

Lei

On 8/14/05, Glenn Dawson [EMAIL PROTECTED] wrote:
 At 12:18 PM 8/14/2005, cpghost wrote:
 On Sun, Aug 14, 2005 at 12:09:19AM -0700, Glenn Dawson wrote:
   2. How come /tmp is -0% in size? -278K? What had happened? as I have
   never experienced this in the previous installs on the exact same
   hardware.
  
   Not sure about that one.  Maybe someone else has an answer.
 
 This is a FAQ.
 
 The available space is always computed after subtracting some space
 that would be only available to root (typically around 5% or 10%
 of the partition size).
 
 The default is 8%.
 
   This free space is necessary to avoid internal
 fragmentation and to keep the file system going. Root may be able
 to borrow some space from this (in which case the capacity goes
 below 0%), but it is not advisable to keep the file system so full,
 so it should be only for a limited period of time.
 
 The reason for having the reserved space is to allow the functions that
 allocate space to be able to find contiguous free space.  When the disk is
 nearly full it takes longer and longer to locate contiguous space, which
 can lead to performance problems.
 
 
 In your example, you're 278K over the limit; and should delete some
 files to make space ASAP. Should /tmp fill up more, it will soon become
 inoperable.
 
  From the original message:
 
 Filesystem     Size    Used   Avail Capacity  Mounted on
 /dev/ar0s1e    248M   -278K    228M    -0%    /tmp
 
 This shows that /tmp is empty.  If the reserved space was being encroached
 upon, it would show > 100% capacity, and available bytes would go negative,
 not bytes used.
 
 It would look something like this:
 
 Filesystem     Size    Used   Avail Capacity  Mounted on
 /dev/ad0s1a    248M    238M    -10M   105%    /
 
 I've never seen the capacity go negative before, which is why I suggested
 someone else might know the answer.
 
 -Glenn
 



disk fragmentation

2005-02-01 Thread Jim Pazarena
during the boot sequence, I routinely see a % fragmentation message.
It was my understanding that fragmentation doesn't occur on a Unix
(er FreeBSD) box..
It seems that there is a concept of fragmentation from the above
message, so, is there an un-fragment utility?
Jim


Re: disk fragmentation

2005-02-01 Thread Dan Nelson
In the last episode (Feb 01), Jim Pazarena said:
 during the boot sequence, I routinely see a % fragmentation
 message.
 
 It was my understanding that fragmentation doesn't occur on a Unix
 (er FreeBSD) box..
 
 It seems that there is a concept of fragmentation from the above
 message, so, is there an un-fragment utility?

In the ffs filesystem, a file that's smaller than the default 16k
blocksize (or the last part of a file that doesn't completely fit into
a block) doesn't have to waste an entire block.  Blocks can be split
into eight 2k fragments, and small files are put in them.  The %
fragmentation is just the percentage of fragment blocks vs. the total
number of blocks.  It's more an indicator of how many small files you have
in the system than anything else.
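
(A small sketch of my own, taking the description above at face value:
the share of blocks that end in a fragment, out of all blocks. It assumes
16K blocks and is not fsck's exact arithmetic.)

BLOCK = 16 * 1024

def fragmentation(file_sizes_bytes, total_blocks):
    # A file whose size is not a multiple of the block size ends its
    # storage in a partially-filled (fragmented) block.
    fragmented = sum(1 for size in file_sizes_bytes if size % BLOCK)
    return 100.0 * fragmented / total_blocks

# 500 small 3K files on a 248M filesystem (about 15,872 blocks of 16K):
print(f"{fragmentation([3 * 1024] * 500, 248 * 1024 * 1024 // BLOCK):.1f}%")
# -> 3.2%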

-- 
Dan Nelson
[EMAIL PROTECTED]


Re: disk fragmentation

2005-02-01 Thread Jeremy Faulkner
Jim Pazarena wrote:
during the boot sequence, I routinely see a % fragmentation message.
It was my understanding that fragmentation doesn't occur on a Unix
(er FreeBSD) box..
It seems that there is a concept of fragmentation from the above
message, so, is there an un-fragment utility?
Jim
No, there is not a defragmenting program. Fragmentation is not a problem;
it is part of the normal operation of the filesystem.

Data is stored in the filesystem in blocks; if a file does not have
enough data to evenly fill all of its assigned blocks, then the last
block for the file is a fragment. The UFS filesystem will fill that
fragment with new data when data is added to the file.

Some filesystems used by a Redmond-based company do not attempt to fill
existing fragments, but simply add new blocks to the file, so a file
could end up with more than one fragmented block.

There are better descriptions of what is occurring in the archives, and 
presumably in textbooks that discuss filesystems.

--
Jeremy Faulkner [EMAIL PROTECTED]