Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread Richard Lee
This is on clean FreeBSD 8.1 RC2, amd64, with 4GB memory.

The closest I found by Googling was this:
http://forums.freebsd.org/showthread.php?t=9935

It talks about all kinds of little tweaks, but in the end, the only
thing that actually works is the stupid 1-line perl snippet that
forces the kernel to free the memory allocated to the (non-zfs) disk
cache, which shows up as "Inact"ive memory in "top."

I have a 4-disk raidz pool, but that's unlikely to matter.

Try to copy large files from non-zfs disk to zfs disk.  FreeBSD will
cache the data read from non-zfs disk in memory, and free memory will
go down.  This is as expected, obviously.

Once there's very little free memory, one would expect whatever is
more important to kick out the cached data (Inact) and make memory
available.

But when almost all of the memory is taken by the disk cache (of the
non-zfs file system), the ZFS disks start thrashing like mad and the
write throughput drops to single-digit MB/s.

I believe it should be extremely easy to duplicate.  Just plug in a
big USB drive formatted in UFS (msdosfs will likely do the same), and
copy large files from that USB drive to zfs pool.

Right after clean boot, gstat will show something like 20+MB/s
movement from USB device (da*), and occasional bursts of activity on
zpool devices at a very high rate.  Once free memory is exhausted, the
zpool devices change to constant low-speed activity, with the disks
thrashing constantly.

I tried enabling/disabling prefetch, messing with vnode counts,
zfs.vdev.min/max_pending, etc.  The only thing that works is that
stupid perl 1-liner (perl -e '$x="x"x15'), which returns the
activity to that seen right after a clean boot.  It doesn't last very
long, though, as the disk cache again consumes all the memory.
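(Aside: the one-liner works by creating a burst of anonymous-memory demand; the pager satisfies it by reclaiming Inactive pages, and when the process exits those pages land on the free list.  A toy model of that accounting, with made-up page counts and deliberately simplified logic, not actual kernel code:)

```python
# Toy model of the "memory pressure" trick described above.
# All numbers are hypothetical page counts; this is a sketch of the VM
# behaviour, not FreeBSD kernel code.

def apply_pressure(mem, demand):
    """Allocate `demand` pages: free pages first, then reclaim Inact."""
    from_free = min(demand, mem["free"])
    from_inact = min(demand - from_free, mem["inact"])
    mem["free"] -= from_free
    mem["inact"] -= from_inact   # cached file data is evicted
    return from_free + from_inact

mem = {"free": 100, "inact": 3000, "active": 900}
got = apply_pressure(mem, 2000)   # the perl process touching ~2000 pages
mem["free"] += got                # perl exits; its pages are freed
print(mem)  # {'free': 2000, 'inact': 1100, 'active': 900}
```

The net effect matches what top shows after running the one-liner: Inact shrinks and Free grows, until the next big non-ZFS read fills the cache again.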

Copying files between zfs devices doesn't seem to affect anything.

I understand the zfs subsystem has its own memory/cache management.
Can a zfs expert please comment on this?

And is there a way to force the kernel to not cache non-zfs disk data?

--rich
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-10-07 Thread Andriy Bakay

Hi All,

Do we have any new information about this issue (fixes, workarounds,
etc.)?  Any input would be highly useful.


http://lists.freebsd.org/pipermail/freebsd-stable/2010-July/057682.html

I am experiencing much the same problem on FreeBSD 8.1-RELEASE-p1, i386,
with 2GB RAM.


Thanks,
Andriy



Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread Jeremy Chadwick
On Sun, Jul 11, 2010 at 11:25:12AM -0700, Richard Lee wrote:
> This is on clean FreeBSD 8.1 RC2, amd64, with 4GB memory.
> 
> [...]
> And is there a way to force the kernel to not cache non-zfs disk data?

I believe you may be describing two separate issues:

1) ZFS using a lot of memory but not freeing it as you expect
2) Lack of disk I/O scheduler

For (1), try this in /boot/loader.conf and reboot:

# Disable UMA (uma(9)) for ZFS; amd64 was moved to exclusively use UMA
# on 2010/05/24.
# http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057162.html
vfs.zfs.zio.use_uma="0"

For (2), you may try gsched_rr:

http://svnweb.freebsd.org/viewvc/base/releng/8.1/sys/geom/sched/README?view=markup

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



RE: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread Scott Sanbeg
Using Jeremy's suggestion as follows:
1) ZFS using a lot of memory but not freeing it as you expect
For (1), try this in /boot/loader.conf and reboot:
vfs.zfs.zio.use_uma="0"

... works like a charm for me.  Thank you.

Scott

-Original Message-
From: owner-freebsd-sta...@freebsd.org
[mailto:owner-freebsd-sta...@freebsd.org] On Behalf Of Jeremy Chadwick
Sent: Sunday, July 11, 2010 1:48 PM
To: Richard Lee
Cc: freebsd-stable@freebsd.org
Subject: Re: Serious zfs slowdown when mixed with another file system
(ufs/msdosfs/etc.).

[...]



Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread Richard Lee
On Sun, Jul 11, 2010 at 01:47:57PM -0700, Jeremy Chadwick wrote:
> On Sun, Jul 11, 2010 at 11:25:12AM -0700, Richard Lee wrote:
> > [...]
> 
> I believe you may be describing two separate issues:
> 
> 1) ZFS using a lot of memory but not freeing it as you expect
> 2) Lack of disk I/O scheduler
> 
> For (1), try this in /boot/loader.conf and reboot:
> 
> # Disable UMA (uma(9)) for ZFS; amd64 was moved to exclusively use UMA
> # on 2010/05/24.
> # http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057162.html
> vfs.zfs.zio.use_uma="0"
> 
> For (2), may try gsched_rr:
> 
> http://svnweb.freebsd.org/viewvc/base/releng/8.1/sys/geom/sched/README?view=markup
> 

vfs.zfs.zio.use_uma is already 0.  It looks to be the default, as I never
touched it.  And in my case, Wired memory is stable at around 1GB.  It's
the Inact memory that takes off, but only if reading from non-zfs file
system.  Without other file systems, I can keep moving files around and
see no adverse slowdown.  I can also scp huge files from another system
into the zfs machine, and it doesn't affect memory usage (as reported by
top), nor does it affect performance.

As for gsched_rr, I don't believe this is related.  There is only ONE
access to the zfs devices (4 sata drives), which is purely a sequential
write.

The external USB HDD (UFS2) is a completely different device, and is doing
purely sequential read.  There is only one "cp" process doing anything at
all.

The FreeBSD system files aren't on either of these devices, either,
and the system disk isn't doing anything during this time.

--rich


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread Jeremy Chadwick
On Sun, Jul 11, 2010 at 02:12:13PM -0700, Richard Lee wrote:
> On Sun, Jul 11, 2010 at 01:47:57PM -0700, Jeremy Chadwick wrote:
> > [...]
> 
> vfs.zfs.zio.use_uma is already 0.  It looks to be the default, as I never
> touched it.

Okay, just checking, because the default did change at one point, as the
link in my /boot/loader.conf denotes.  Here's further confirmation (same
thread), the first confirming on i386, the second confirming on amd64:

http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057168.html
http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057239.html

> And in my case, Wired memory is stable at around 1GB.  It's
> the Inact memory that takes off, but only if reading from non-zfs file
> system.  Without other file systems, I can keep moving files around and
> see no adverse slowdown.  I can also scp huge files from another system
> into the zfs machine, and it doesn't affect memory usage (as reported by
> top), nor does it affect performance.

Let me get this straight:

The system has ZFS enabled (kernel module loaded), with a 4-disk raidz1
pool defined and used in the past (Wired being @ 1GB, due to ARC).  The
same system also has UFS2 filesystems.  The ZFS pool vdevs consist of
their own dedicated disks, and the UFS2 filesystems also have their own
disk (which appears to be USB-based).

When any sort of read I/O is done on the UFS2 filesystems, Inact
skyrockets, and as a result this impacts performance of ZFS.

If this is correct: can you remove USB from the picture and confirm the
problem still happens?  This is the first I've heard of the UFS caching
mechanism "spiraling out of control".

By the way, all the "stupid perl 1-liner" does is make a process with an
extremely large SIZE, and RES will grow as its pages are touched.

Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread Richard Lee
On Sun, Jul 11, 2010 at 02:45:46PM -0700, Jeremy Chadwick wrote:
> On Sun, Jul 11, 2010 at 02:12:13PM -0700, Richard Lee wrote:
> > [...]
> > 
> > vfs.zfs.zio.use_uma is already 0.  It looks to be the default, as I never
> > touched it.
> 
> Okay, just checking, because the default did change at one point, as the
> link in my /boot/loader.conf denotes.  Here's further confirmation (same
> thread), the first confirming on i386, the second confirming on amd64:
> 
> http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057168.html
> http://lists.freebsd.org/pipermail/freebsd-stable/2010-June/057239.html
> 
> > And in my case, Wired memory is stable at around 1GB.  It's
> > the Inact memory that takes off, but only if reading from non-zfs file
> > system.  Without other file systems, I can keep moving files around and
> > see no adverse slowdown.  I can also scp huge files from another system
> > into the zfs machine, and it doesn't affect memory usage (as reported by
> > top), nor does it affect performance.
> 
> Let me get this straight:
> 
> The system has ZFS enabled (kernel module loaded), with a 4-disk raidz1
> pool defined and used in the past (Wired being @ 1GB, due to ARC).  The
> same system also has UFS2 filesystems.  The ZFS pool vdevs consist of
> their own dedicated disks, and the UFS2 filesystems also have their own
> disk (which appears to be USB-based).

Yes, correct.

I have:
ad4 (An old 200GB SATA UFS2 main system drive)
ad8, ad10, ad12, ad14 (1TB SATA drives) part of raidz1 and nothing else
da0 is an external USB

Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread Freddie Cash
Search the archives for the -stable, -current, and -fs mailing lists
from the past 3 months.  There are patches floating around to fix
this.  The ZFS code that monitors memory pressure currently only
monitors the "free" amount, and completely ignores the "inact" and
other "not actually in use" amounts.
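(The pressure check Freddie describes can be sketched like this; the threshold and the function names here are made up for illustration, and the real logic lives in arc_reclaim_needed() in arc.c:)

```python
# Sketch of the flawed vs. proposed memory-pressure check.
# LOWMEM_THRESHOLD and all page counts are assumed values, not FreeBSD's.

LOWMEM_THRESHOLD = 512  # pages at which ZFS decides it should shrink the ARC

def arc_should_shrink_buggy(free, inact):
    # Current behaviour: only the "free" count is consulted.
    return free < LOWMEM_THRESHOLD

def arc_should_shrink_fixed(free, inact):
    # Patched idea: "inact" pages are cheap to reclaim, so count them as
    # effectively available before shrinking the ARC.
    return free + inact < LOWMEM_THRESHOLD

# Almost all RAM is sitting in the UFS cache (Inact); Free is tiny:
free, inact = 100, 700_000
print(arc_should_shrink_buggy(free, inact))  # True  -> ARC starves itself
print(arc_should_shrink_fixed(free, inact))  # False -> ARC keeps its cache
```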

-- 
Freddie Cash
fjwc...@gmail.com


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread jhell
On 07/11/2010 23:08, Freddie Cash wrote:
> Search the archives for the -stable, -current, and -fs mailing lists
> from the past 3 months.  There are patches floating around to fix
> this.  The ZFS code that monitors memory pressure currently only
> monitors the "free" amount, and completely ignores the "inact" and
> other "not actually in use" amounts.
> 

AFAIR, the patches that were around were either incomplete or
inconsistent; they were good attempts to solve the problem, but in the
end they didn't affect it in a positive or negative way.  I may be
wrong, as the problem seems to have a few variables that determine its
effect in different cases: usage, hardware mix, and implementation.

One thing I have been seeing more of is the perl hack that gets every
process on your system to swap out, freeing RAM for use by ZFS (or
whatever its intention is).

perl -e '$x = "x" x 100;'

It might be worth mentioning this in the ZFS tuning section of the
wiki for reference.


Regards & Good Luck,

-- 

 +-+-+-+-+-+
 |j|h|e|l|l|
 +-+-+-+-+-+


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-11 Thread Emil Mikulic
On Sun, Jul 11, 2010 at 02:45:46PM -0700, Jeremy Chadwick wrote:
> On Sun, Jul 11, 2010 at 02:12:13PM -0700, Richard Lee wrote:
> > vfs.zfs.zio.use_uma is already 0.  It looks to be the default, as I never
> > touched it.
> 
> Okay, just checking, because the default did change at one point

And changed back (to zero) in rev 209261:
http://article.gmane.org/gmane.os.freebsd.devel.cvs/395961



Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-12 Thread Peter Jeremy
On 2010-Jul-11 11:25:12 -0700, Richard Lee wrote:
>But when almost all of the memory is taken by disk cache (of non-zfs
>file system), ZFS disks start threshing like mad and the write
>throughput goes down in 1-digit MB/second.

It can go a lot lower than that...

Yes, this is a known problem.  The underlying problem is a disconnect
between the ZFS cache (ARC) and the VM cache used by everything else,
preventing ZFS reclaiming RAM from the VM cache.  For several months,
I was running a regular cron job that was a slightly fancier version
of the perl one-liner.

I have been using the attached arc.patch1 based on a patch written by
Artem Belevich (see http://pastebin.com/ZCkzkWcs )
for about a month.  I have had reasonable success with it (and junked
my cronjob) but have managed to wedge my system a couple of times
whilst doing zfs send|recv.  Whilst looking at that diff, I just
noticed a nasty signed/unsigned bug that could bite in low memory
conditions and have revised it to arc.patch2 (untested as yet).

Independently, Martin Matuska committed r209227
that corrects a number of ARC bugs reported on OpenSolaris.  Whilst
this patch doesn't add checks on "inactive" or "cache", some quick
checks suggest it also helps (though I need to do further checks).
See http://people.freebsd.org/~mm/patches/zfs/head-12636.patch
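(The core of the attached patch is adding backpressure thresholds to arc_reclaim_needed(): when the inactive list grows past a tunable limit, wake the pagedaemon to reap it instead of shrinking the ARC.  A rough model of that decision, simplified and with made-up numbers, not the actual C code:)

```python
# Rough model of the patch's backpressure logic in arc_reclaim_needed().
# Thresholds, page counts, and the wakeups list are illustrative only.

def arc_reclaim_needed(arc_size, arc_c_max, free, inact, bp_inactive, wakeups):
    if arc_size > arc_c_max:
        return True                 # ARC is over its cap: always shrink
    if bp_inactive and inact > bp_inactive:
        wakeups.append(inact // 2)  # ask pagedaemon to reap half of Inact
        return False                # ...instead of shrinking the ARC
    return free < 2048              # fall back to a simple free-page check

wakeups = []
# Inact has ballooned past the tunable: pagedaemon is poked, ARC is spared.
print(arc_reclaim_needed(800, 1000, 100, 600_000, 400_000, wakeups))  # False
print(wakeups)                                                        # [300000]
# With the tunable unset (0), the old free-only check fires instead.
print(arc_reclaim_needed(800, 1000, 100, 600_000, 0, wakeups))        # True
```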

-- 
Peter Jeremy




Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-12 Thread Peter Jeremy
On 2010-Jul-12 19:38:18 +1000, Peter Jeremy wrote:
>I have been using the attached arc.patch1 based on a patch written by
>Artem Belevich  (see http://pastebin.com/ZCkzkWcs )
>for about a month.  I have had reasonable success with it (and junked
>my cronjob) but have managed to wedge my system a couple of times
>whilst doing zfs send|recv.  Whilst looking at that diff, I just
>noticed a nasty signed/unsigned bug that could bite in low memory
>conditions and have revised it to arc.patch2 (untested as yet).

Let me try actually attaching those patches...  Sorry.

-- 
Peter Jeremy
Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
===
RCS file: /usr/ncvs/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c,v
retrieving revision 1.22.2.6
diff -u -r1.22.2.6 arc.c
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c	24 May 2010 20:09:40 -	1.22.2.6
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c	12 Jun 2010 21:04:13 -
@@ -183,10 +183,15 @@
 int zfs_arc_shrink_shift = 0;
 int zfs_arc_p_min_shift = 0;
 
+uint64_t zfs_arc_bp_active;
+uint64_t zfs_arc_bp_inactive;
+
 TUNABLE_QUAD("vfs.zfs.arc_max", &zfs_arc_max);
 TUNABLE_QUAD("vfs.zfs.arc_min", &zfs_arc_min);
 TUNABLE_QUAD("vfs.zfs.arc_meta_limit", &zfs_arc_meta_limit);
 TUNABLE_INT("vfs.zfs.mdcomp_disable", &zfs_mdcomp_disable);
+TUNABLE_QUAD("vfs.zfs.arc_bp_active", &zfs_arc_bp_active);
+TUNABLE_QUAD("vfs.zfs.arc_bp_inactive", &zfs_arc_bp_inactive);
 SYSCTL_DECL(_vfs_zfs);
 SYSCTL_QUAD(_vfs_zfs, OID_AUTO, arc_max, CTLFLAG_RDTUN, &zfs_arc_max, 0,
 "Maximum ARC size");
@@ -195,6 +200,11 @@
 SYSCTL_INT(_vfs_zfs, OID_AUTO, mdcomp_disable, CTLFLAG_RDTUN,
 &zfs_mdcomp_disable, 0, "Disable metadata compression");
 
+SYSCTL_QUAD(_vfs_zfs, OID_AUTO, arc_bp_active, CTLFLAG_RW|CTLFLAG_TUN, &zfs_arc_bp_active, 0,
+    "Start ARC backpressure if active memory is below this limit");
+SYSCTL_QUAD(_vfs_zfs, OID_AUTO, arc_bp_inactive, CTLFLAG_RW|CTLFLAG_TUN, &zfs_arc_bp_inactive, 0,
+    "Start ARC backpressure if inactive memory is below this limit");
+
 /*
  * Note that buffers can be in one of 6 states:
  * ARC_anon- anonymous (discussed below)
@@ -2103,7 +2113,6 @@
 }
 
 static int needfree = 0;
-
 static int
 arc_reclaim_needed(void)
 {
@@ -2112,20 +2121,58 @@
 #endif
 
 #ifdef _KERNEL
-   if (needfree)
-   return (1);
+   /* We've grown too much, */
if (arc_size > arc_c_max)
return (1);
+
+   /* Pagedaemon is stuck, let's free something right away */
+   if (vm_pageout_pages_needed)
+   return 1;
+
+   /* Check if inactive list have grown too much */
+   if ( zfs_arc_bp_inactive
+&& (ptoa((uintmax_t)cnt.v_inactive_count) > zfs_arc_bp_inactive)) {
+   /* tell pager to reap 1/2th of inactive queue*/
+   atomic_add_int(&vm_pageout_deficit, cnt.v_inactive_count/2);
+   pagedaemon_wakeup();
+   return needfree;
+   }
+
+   /* Same for active list... */
+   if ( zfs_arc_bp_active
+&& (ptoa((uintmax_t)cnt.v_active_count) > zfs_arc_bp_active)) {
+   atomic_add_int(&vm_pageout_deficit, cnt.v_active_count/2);
+   pagedaemon_wakeup();
+   return needfree;
+   }
+
+   
+   /* Old style behavior -- ARC gives up memory whenever page daemon asks.. */
+   if (needfree)
+   return 1;
+
+   /*
+ We got here either because active/inactive lists are
+ getting short or because we've been called during voluntary
+ ARC size checks. Kind of gray area...
+   */
+
+   /* If we didn't reach our minimum yet, don't rush to give memory up..*/
if (arc_size <= arc_c_min)
return (0);
 
+   /* If we're really short on memory now, give it up. */
+   if (vm_page_count_min()) {
+   return (1);
+   }
+   
/*
-* If pages are needed or we're within 2048 pages
-* of needing to page need to reclaim
+* If we're within 2048 pages of pagedaemon start, reclaim...
 */
-   if (vm_pages_needed || (vm_paging_target() > -2048))
+   if (vm_pages_needed && (vm_paging_target() > -2048))
return (1);
 
+
 #if 0
/*
 * take 'desfree' extra pages, so we reclaim sooner, rather than later
@@ -2169,8 +2216,6 @@
return (1);
 #endif
 #else
-   if (kmem_used() > (kmem_size() * 3) / 4)
-   return (1);
 #endif
 
 #else
@@ -2279,7 +2324,7 @@
if (arc_eviction_list != NULL)
arc_do_user_evicts();
 
-   if (arc_reclaim_needed()) {
+   if (needfree) {
needfree = 0;
 #ifdef _KERNEL
wakeup(&needfree);
@@ -3611,10 +3656,15 @@
 {
 #ifdef _KERNEL
uint64_t inflight_data = arc_anon->ar

Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-07-12 Thread Andriy Gapon
on 12/07/2010 12:39 Peter Jeremy said the following:
>  /*
> - * If pages are needed or we're within 2048 pages
> - * of needing to page need to reclaim
> + * If we're within 2048 pages of pagedaemon start, reclaim...
>   */
> - if (vm_pages_needed || (vm_paging_target() > -2048))
> + if (vm_pages_needed && (vm_paging_target() > -2048))

I am not sure that what the comment says is actually what the code checks,
for both the pre-patch and post-patch versions.

-- 
Andriy Gapon
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-10-07 Thread Andriy Gapon
on 07/10/2010 18:46 Andriy Bakay said the following:
> Hi All,
> 
> Do we have any new information about this issue (fixes, workarounds, etc.)?
> Any input will be highly useful.

Yes, _we_ do.  Where have you been? :-)

-- 
Andriy Gapon


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-10-07 Thread Andriy Bakay
I expressed myself incorrectly. Sorry. :-(

Do you, Andriy :-), or anybody else from the list(s), have more info on how to
fix or work around this issue?

On Thu, 07 Oct 2010 21:29:54 +0300, Andriy Gapon 
wrote:
> on 07/10/2010 18:46 Andriy Bakay said the following:
>> Hi All,
>>
>> Do we have any new information about this issue (fixes, workarounds, etc.)?
>> Any input will be highly useful.
> 
> Yes, _we_ do.  Where have you been? :-)



Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-10-07 Thread Andriy Gapon
on 07/10/2010 21:47 Andriy Bakay said the following:
> I expressed myself incorrectly. Sorry. :-(
> 
> Do you, Andriy :-), or anybody else from the list(s), have more info on how to
> fix or work around this issue?

First, I recommend trying to upgrade to a recent stable/8.

-- 
Andriy Gapon


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-10-07 Thread Andriy Bakay
Understood, but is it possible to apply only the "local" ZFS+UFS related
changes? Because STABLE will bring in all the deltas accumulated
since RELEASE, and I am really concerned about the stability of this box (which is
a router/firewall/mail server). Other people depend on it.

On Thu, 07 Oct 2010 22:20:18 +0300, Andriy Gapon 
wrote:
> on 07/10/2010 21:47 Andriy Bakay said the following:
>> I expressed myself incorrectly. Sorry. :-(
>>
>> Do you, Andriy :-), or anybody else from the list(s), have more info on how to
>> fix or work around this issue?
> 
> First, I recommend trying to upgrade to a recent stable/8.



Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-10-07 Thread Andriy Gapon
on 08/10/2010 00:04 Andriy Bakay said the following:
> Understood, but is it possible to apply only the "local" ZFS+UFS related
> changes? Because STABLE will bring in all the deltas accumulated
> since RELEASE, and I am really concerned about the stability of this box (which is
> a router/firewall/mail server). Other people depend on it.

Nothing is impossible.  But it's up to you to separate the changes you want from
the changes you don't want.

-- 
Andriy Gapon


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-10-07 Thread Andriy Bakay
OK. But how stable (production-ready) is FreeBSD-8-STABLE? What is your
opinion?

On 2010-10-07, at 18:12, Andriy Gapon  wrote:

> on 08/10/2010 00:04 Andriy Bakay said the following:
>> Understood, but is it possible to apply only the "local" ZFS+UFS related
>> changes? Because STABLE will bring in all the deltas accumulated
>> since RELEASE, and I am really concerned about the stability of this box (which is
>> a router/firewall/mail server). Other people depend on it.
> 
> Nothing is impossible.  But it's up to you to separate the changes you want
> from the changes you don't want.
> 
> -- 
> Andriy Gapon


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-10-07 Thread Andriy Gapon
on 08/10/2010 01:24 Andriy Bakay said the following:
> OK. But how stable (production-ready) is FreeBSD-8-STABLE? What is your
> opinion?

I use it all the time :-) (And head too).
In general, and this opinion is not only my own, the best FreeBSD "release" is
the latest stable branch.

-- 
Andriy Gapon


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-10-08 Thread Pete French
> OK. But how stable (production-ready) is FreeBSD-8-STABLE? What is your
> opinion?

I am running 8-STABLE from 27th September on all our production
machines (from webservers to database servers to the company mail
server) and it is fine. I am going to update again over the next
few days, as there are some ZFS fixes in there that I want - and which
may benefit you too - so I will be able to report back next
week on how a more recent version behaves.

In general, though, I have never had problems running STABLE on
production systems over the years. Of course, what I do is test it
on a single machine before rolling it out (a leaf in a webfarm,
so if it goes down it won't affect the business) but it is usually
fine. Keep an eye on the -STABLE mailing list though, as that is where
problems arise. I watch that, and also the daily commits, either here

http://www.freshbsd.org/?branch=RELENG_8&project=freebsd&committer=&module=&q=

or here

http://www.secnetix.de/olli/FreeBSD/svnews/?p=stable/8

Just to see what's going into the tree relative to what's being discussed.
It only takes a few minutes a day to monitor the mailing lists and the
commits, and the result is that we've been running STABLE for a very
long time (close to a decade, I suspect) with great success.

-pete.


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-10-09 Thread Andriy Bakay
Do you know any more convenient way (other than make buildworld, etc.) to
upgrade/update several boxes to STABLE on a regular basis? Something like
freebsd-update, or maybe some process, tips, tricks, etc.?

Thanks.



Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-10-09 Thread Adam Vande More
On Sat, Oct 9, 2010 at 9:55 AM, Andriy Bakay  wrote:

> Do you know any more convenient way (other than make buildworld, etc.) to
> upgrade/update several boxes to STABLE on a regular basis? Something like
> freebsd-update, or maybe some process, tips, tricks, etc.?
>

Can you not top-post please?

Probably the most convenient method is to NFS-boot the systems and simply
update the mfs_root image when needed.  However, getting that process worked
out is tedious.  Another method is simply to compile on one machine and use
the resulting binaries to install from.

Of course, there is always ccache and distcc to help speed up the process,
although the last time I tried, ccache wouldn't buildworld on amd64.

-- 
Adam Vande More


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-10-09 Thread Pieter de Goeje
On Saturday 09 October 2010 16:55:35 Andriy Bakay wrote:
> Do you know any more convenient way (other than make buildworld, etc.) to
> upgrade/update several boxes to STABLE on a regular basis? Something like
> freebsd-update, or maybe some process, tips, tricks, etc.?
>
> Thanks.

Here's how I do it:
1) Build server: make buildworld && make buildkernel
2) Other servers: export / via NFS

Repeat for each other server on build server:

mount boxN:/ /mnt
make installkernel DESTDIR=/mnt -DNO_FSCHG
make installworld DESTDIR=/mnt -DNO_FSCHG
umount /mnt

Note that I use a single filesystem for / and /usr. Obviously, if those are
separate filesystems, more NFS exports and mount commands are necessary.
Before the first run, all immutable flags need to be removed from the target
box, otherwise the install will fail (i.e. chflags -R noschg /).




-- 
Pieter de Goeje


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-10-09 Thread Matthew D. Fuller
On Sun, Oct 10, 2010 at 03:32:54AM +0200 I heard the voice of
Pieter de Goeje, and lo! it spake thus:
> 
> Note that I use a single filesystem for / and /usr. Obviously if
> those are separate filesystems more NFS exports and mount commands
> are necessary.  Before the first run all immutable flags need to be
> removed from the target box, otherwise the install will fail (i.e.
> chflags -R noschg /).

That's one reason I always found it easier to go the other way: export
/usr/{src,obj} from the build server and do the installworld locally
on the boxes in question.  It's just simpler that way, when everything
is in its "normal" place.


-- 
Matthew Fuller (MF4839)   |  fulle...@over-yonder.net
Systems/Network Administrator |  http://www.over-yonder.net/~fullermd/
   On the Internet, nobody can hear you scream.


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-10-10 Thread Andriy Gapon
on 09/10/2010 17:55 Andriy Bakay said the following:
> Do you know any more convenient way (other than make buildworld, etc.) to
> upgrade/update several boxes to STABLE on a regular basis? Something like
> freebsd-update, or maybe some process, tips, tricks, etc.?

More convenient? :-)
Sorry, I always use "make buildworld, etc."; it works 100% for me and I find it
very convenient.

-- 
Andriy Gapon


Re: Serious zfs slowdown when mixed with another file system (ufs/msdosfs/etc.).

2010-10-10 Thread Andriy Bakay

On 10-Oct-10, at 4:16 AM, Andriy Gapon wrote:


> More convenient? :-)
> Sorry, I always use "make buildworld, etc."; it works 100% for me and I
> find it very convenient.
>
> --
> Andriy Gapon


I meant something like freebsd-update for FreeBSD-STABLE monthly
snapshots. You are right, updating from sources works perfectly and
I used it before freebsd-update came along. But I find freebsd-update
way more convenient.


Anyway, thank you all for your tips.

--
Andriy
