ZFS v28: kernel panics while reading an extended attribute

2011-08-01 Thread Alexander Zagrebin
Hi!

It seems I've found a bug in ZFS v28 on the latest stable:
if a snapshot contains files with extended attributes, then an
attempt to read an extended attribute's value leads to a well
reproducible kernel panic.

Part of the backtrace follows:

#6  0x804bbe44 in calltrap ()
at /usr/src/sys/amd64/amd64/exception.S:228
#7  0x80950ea7 in zil_commit (zilog=0x0, foid=5795917)
at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zil.c:1497
#8  0x80979e6b in zfs_freebsd_read (ap=Variable "ap" is not available.)
at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c:622
#9  0x80979750 in zfs_getextattr (ap=0xff80dd5d8820)
at vnode_if.h:384
#10 0x8038921b in extattr_get_vp (vp=0xff0056a01588,
attrnamespace=1, attrname=0xff80dd5d89a0 "DOSATTRIB", data=Variable "data" is not available.)
at vnode_if.h:1332

It seems that the ZIL isn't available for snapshots, but zfs_freebsd_read
doesn't check for this before calling zil_commit.

The attached patch fixes this issue.

Can anybody confirm this?

-- 
Alexander Zagrebin
--- sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c.orig	2011-08-01 23:04:07.358173627 +0400
+++ sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c		2011-08-02 00:10:02.674585604 +0400
@@ -618,7 +618,8 @@ zfs_read(vnode_t *vp, uio_t *uio, int io
 	/*
 	 * If we're in FRSYNC mode, sync out this znode before reading it.
 	 */
-	if (ioflag & FRSYNC || zfsvfs->z_os->os_sync == ZFS_SYNC_ALWAYS)
+	if (zfsvfs->z_log &&
+	(ioflag & FRSYNC || zfsvfs->z_os->os_sync == ZFS_SYNC_ALWAYS))
 		zil_commit(zfsvfs->z_log, zp->z_id);
 
 	/*
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

RE: 8.1-STABLE: zfs and sendfile: problem still exists

2010-11-22 Thread Alexander Zagrebin
> -Original Message-
> From: Andriy Gapon [mailto:a...@freebsd.org] 
> Sent: Saturday, October 30, 2010 1:53 PM
> To: Artemiev Igor
> Cc: freebsd-stable@freebsd.org; freebsd...@freebsd.org; 
> Alexander Zagrebin
> Subject: Re: 8.1-STABLE: zfs and sendfile: problem still exists
> 
> 
> Heh, next try.
> 
> Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
> ===
> --- 
> sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
> (revision 214318)
> +++ 
> sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
> (working copy)
> @@ -67,6 +67,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
> 
>  /*
>   * Programming rules.
> @@ -464,7 +465,7 @@
>   uiomove_fromphys(&m, off, bytes, uio);
>   VM_OBJECT_LOCK(obj);
>   vm_page_wakeup(m);
> - } else if (m != NULL && uio->uio_segflg == UIO_NOCOPY) {
> + } else if (uio->uio_segflg == UIO_NOCOPY) {
>   /*
>* The code below is here to make sendfile(2) work
>* correctly with ZFS. As pointed out by ups@
> @@ -474,9 +475,23 @@
>*/
>   KASSERT(off == 0,
>   ("unexpected offset in mappedread for sendfile"));
> - if (vm_page_sleep_if_busy(m, FALSE, "zfsmrb"))
> + if (m != NULL && vm_page_sleep_if_busy(m, FALSE, "zfsmrb"))
>   goto again;
> - vm_page_busy(m);
> + if (m == NULL) {
> + m = vm_page_alloc(obj, OFF_TO_IDX(start),
> + VM_ALLOC_NOBUSY | VM_ALLOC_NORMAL);
> + if (m == NULL) {
> + VM_OBJECT_UNLOCK(obj);
> + VM_WAIT;
> + VM_OBJECT_LOCK(obj);
> + goto again;
> + }
> + } else {
> + vm_page_lock_queues();
> + vm_page_wire(m);
> + vm_page_unlock_queues();
> + }
> + vm_page_io_start(m);
>   VM_OBJECT_UNLOCK(obj);
>   if (dirbytes > 0) {
>   error = dmu_read_uio(os, zp->z_id, uio,
> @@ -494,7 +509,10 @@
>   VM_OBJECT_LOCK(obj);
>   if (error == 0)
>   m->valid = VM_PAGE_BITS_ALL;
> - vm_page_wakeup(m);
> + vm_page_io_finish(m);
> + vm_page_lock_queues();
> + vm_page_unwire(m, 0);
> + vm_page_unlock_queues();
>   if (error == 0) {
>   uio->uio_resid -= bytes;
>   uio->uio_offset += bytes;
> 

It seems that this patch hasn't been merged into RELENG_8.
Is there a chance it will be merged before 8.2-RELEASE?

-- 
Alexander Zagrebin



RE: 8.1-STABLE: problem with unmounting ZFS snapshots

2010-11-12 Thread Alexander Zagrebin
> Yes, this is indeed a leak introduced by importing onnv revision 9214
> and it exists in perforce as well - very easy to reproduce.
> 
> # mount -t zfs t...@t1 /mnt
> # umount /mnt (-> hang)
> 
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6604992
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6810367
> 
> This is not compatible with mounting snapshots outside 
> mounted ZFS and I
> was not able to reproduce the errors defined in 6604992 and 6810367
> (they are Solaris-specific). I suggest we comment out this code (from
> head, later MFC and p4 as well).
> 
> Patch (should work with HEAD and 8-STABLE):
> http://people.freebsd.org/~mm/patches/zfs/zfs_vfsops.c.patch
> 

The patch applied cleanly to the latest stable.
umount doesn't hang now. Thanks.

Let me ask a question...
I'm updating the source tree via csup/cvs.
Is there a way to determine the SVN revision in this case?
If not, would it be possible to add (and automatically maintain during
svn -> cvs replication) a special file in the cvs tree
(for example, /usr/src/revision) containing the current svn revision?

-- 
Alexander Zagrebin



RE: 8.1-STABLE: problem with unmounting ZFS snapshots

2010-11-12 Thread Alexander Zagrebin
Thanks for your reply!

> > 2. the umount is waiting for disk
> > #ps | egrep 'PID|umount'
> >   PID  TT  STAT  TIME COMMAND
> >   958   0  D+ 0:00,04 umount /mnt
> > # procstat -t 958
> >   PIDTID COMM TDNAME   CPU  PRI 
> STATE   WCHAN
> >   958 100731 umount   -  3  133 
> sleep   mntref
> 
> procstat -kk 

$ ps a | grep umount
86874   2- D  0:00,06 umount /mnt
90433   3  S+ 0:00,01 grep umount

$ sudo procstat -kk 86874
  PIDTID COMM TDNAME   KSTACK
86874 100731 umount   -    mi_switch+0x176
sleepq_wait+0x42 _sleep+0x317 vfs_mount_destroy+0x5a dounmount+0x4d4
unmount+0x38b syscall+0x1cf Xfast_syscall+0xe2

-- 
Alexander Zagrebin



8.1-STABLE: problem with unmounting ZFS snapshots

2010-11-12 Thread Alexander Zagrebin
I have found an issue with unmounting ZFS snapshots:
/sbin/umount "hangs" after unmounting.

The test system is i386, but I can reproduce this issue on amd64 too.

# uname -a
FreeBSD alpha.vosz.local 8.1-STABLE FreeBSD 8.1-STABLE #0: Tue Oct 19
18:47:05 MSD 2010 r...@alpha.vosz.local:/usr/obj/usr/src/sys/GENERIC
i386

How to reproduce:

# zfs snapshot pool/var@test

# zfs list -t all -r pool/var
NAMEUSED  AVAIL  REFER  MOUNTPOINT
pool/var   4,86M  2,99G  4,86M  /var
pool/var@test  0  -  4,86M  -

# mount -t zfs pool/var@test /mnt

# mount
...
pool/var@test on /mnt (zfs, local, noatime, read-only)

# umount /mnt

At this point umount hangs and it's impossible to kill it,
even with `kill -9`.

From the working console I can see that:
1. The snapshot is unmounted successfully:

# mount
pool/root on / (zfs, local)
devfs on /dev (devfs, local, multilabel)
pool/home on /home (zfs, local)
pool/tmp on /tmp (zfs, local)
pool/usr on /usr (zfs, local)
pool/usr/src on /usr/src (zfs, local)
pool/var on /var (zfs, local)

2. the umount is waiting for disk
#ps | egrep 'PID|umount'
  PID  TT  STAT  TIME COMMAND
  958   0  D+ 0:00,04 umount /mnt
# procstat -t 958
  PIDTID COMM TDNAME   CPU  PRI STATE   WCHAN
  958 100731 umount   -  3  133 sleep   mntref

Can anybody confirm this issue?
Any suggestions?

-- 
Alexander Zagrebin



RE: 8.1-STABLE: zfs and sendfile: problem still exists

2010-10-31 Thread Alexander Zagrebin
> >> I apologize for my haste, it should have been VM_ALLOC_WIRED.
> > 
> > Ok, applied and tested under some load (~1200 active connections,
> > outgoing ~80MB/s). The patch works as expected and I have noticed
> > no side effects. Just one question - should the Active memory
> > counter grow if some pages are "hot" (during multiple sendfile
> > calls on one file)?
> 
> Pages used by sendfile are marked as Inactive for faster reclamation
> on demand.

I have a question.
When we transfer a file via sendfile, the current code allocates
memory marked inactive. For example, if the file is 100 MB in size,
then 100 MB of memory will be allocated.
If we have to transfer this file again later, this memory will be used
as a cache, and no disk I/O will be required.
The memory is freed when the file is deleted or when the operating
system needs additional memory.
Have I understood correctly?
If so, I'll continue...
Such behaviour is good if the files are relatively small.
Suppose we have to transfer a large file (for example, larger than
the amount of physical memory).
While transferring, the inactive memory will grow, pressing on the ARC.
When the size of the ARC falls to its minimum (vfs.zfs.arc_min), the
inactive memory will be reused.
So, when the transfer is complete, we have:
1. No free memory.
2. The ARC is at its minimum size (which is bad).
3. The inactive memory contains only the _tail_ of the file (which is
bad too).
Now if we have to transfer this file again, then:
1. There is little (or none) of the file's data in the ARC (the ARC is
too small).
2. The inactive memory doesn't contain the head of the file.
So the file's data will be read from disk again and again...
I've also noticed that inactive memory is freed relatively slowly,
so if large files are accessed frequently, the system will run under
very suboptimal conditions.
That's just my opinion...
Can you comment this?

-- 
Alexander Zagrebin



RE: 8.1-STABLE: zfs and sendfile: problem still exists

2010-10-30 Thread Alexander Zagrebin
> >> Oh, thank you for testing - forgot another piece (VM_ALLOC_WIRE for vm_page_alloc):
> > 
> > Yep, it works. But VM_ALLOC_WIRE doesn't exist in RELENG_8,
> > therefore I slightly modified your patch:
> 
> I apologize for my haste, it should have been VM_ALLOC_WIRED.
> Here is a corrected patch:
> Index: sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
> ===
> --- 
> sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
> (revision 214318)
> +++ 
> sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vnops.c
> (working copy)
> @@ -67,6 +67,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
> 
>  /*
>   * Programming rules.
> @@ -464,7 +465,7 @@
>   uiomove_fromphys(&m, off, bytes, uio);
>   VM_OBJECT_LOCK(obj);
>   vm_page_wakeup(m);
> - } else if (m != NULL && uio->uio_segflg == UIO_NOCOPY) {
> + } else if (uio->uio_segflg == UIO_NOCOPY) {
>   /*
>* The code below is here to make sendfile(2) work
>* correctly with ZFS. As pointed out by ups@
> @@ -474,9 +475,23 @@
>*/
>   KASSERT(off == 0,
>   ("unexpected offset in mappedread for sendfile"));
> - if (vm_page_sleep_if_busy(m, FALSE, "zfsmrb"))
> + if (m != NULL && vm_page_sleep_if_busy(m, FALSE, "zfsmrb"))
>   goto again;
> - vm_page_busy(m);
> + if (m == NULL) {
> + m = vm_page_alloc(obj, OFF_TO_IDX(start),
> + VM_ALLOC_NOBUSY | VM_ALLOC_WIRED | VM_ALLOC_NORMAL);
> + if (m == NULL) {
> + VM_OBJECT_UNLOCK(obj);
> + VM_WAIT;
> + VM_OBJECT_LOCK(obj);
> + goto again;
> + }
> + } else {
> + vm_page_lock_queues();
> + vm_page_wire(m);
> + vm_page_unlock_queues();
> + }
> + vm_page_io_start(m);
>   VM_OBJECT_UNLOCK(obj);
>   if (dirbytes > 0) {
>   error = dmu_read_uio(os, zp->z_id, uio,
> @@ -494,7 +509,10 @@
>   VM_OBJECT_LOCK(obj);
>   if (error == 0)
>   m->valid = VM_PAGE_BITS_ALL;
> - vm_page_wakeup(m);
> + vm_page_io_finish(m);
> + vm_page_lock_queues();
> + vm_page_unwire(m, 0);
> + vm_page_unlock_queues();
>   if (error == 0) {
>   uio->uio_resid -= bytes;
>   uio->uio_offset += bytes;
> 

Big thanks to Andriy, Igor, and everyone who has paid attention to this
problem. I've tried this patch on a test system running under VirtualBox,
and it seems to solve the problem.
I'll try to test this patch under real conditions today.

-- 
Alexander Zagrebin



RE: 8.1-STABLE: zfs and sendfile: problem still exists

2010-10-29 Thread Alexander Zagrebin
kstat.zfs.misc.arcstats.evict_l2_eligible: 150970209280
kstat.zfs.misc.arcstats.evict_l2_ineligible: 36864
kstat.zfs.misc.arcstats.hash_elements: 92084
kstat.zfs.misc.arcstats.hash_elements_max: 168546
kstat.zfs.misc.arcstats.hash_collisions: 2062370
kstat.zfs.misc.arcstats.hash_chains: 23974
kstat.zfs.misc.arcstats.hash_chain_max: 18
kstat.zfs.misc.arcstats.p: 810895823
kstat.zfs.misc.arcstats.c: 1006632960
kstat.zfs.misc.arcstats.c_min: 125829120
kstat.zfs.misc.arcstats.c_max: 1006632960
kstat.zfs.misc.arcstats.size: 1006658848
kstat.zfs.misc.arcstats.hdr_size: 20246240
kstat.zfs.misc.arcstats.data_size: 917672960
kstat.zfs.misc.arcstats.other_size: 68739648
kstat.zfs.misc.arcstats.l2_hits: 0
kstat.zfs.misc.arcstats.l2_misses: 0
kstat.zfs.misc.arcstats.l2_feeds: 0
kstat.zfs.misc.arcstats.l2_rw_clash: 0
kstat.zfs.misc.arcstats.l2_read_bytes: 0
kstat.zfs.misc.arcstats.l2_write_bytes: 0
kstat.zfs.misc.arcstats.l2_writes_sent: 0
kstat.zfs.misc.arcstats.l2_writes_done: 0
kstat.zfs.misc.arcstats.l2_writes_error: 0
kstat.zfs.misc.arcstats.l2_writes_hdr_miss: 0
kstat.zfs.misc.arcstats.l2_evict_lock_retry: 0
kstat.zfs.misc.arcstats.l2_evict_reading: 0
kstat.zfs.misc.arcstats.l2_free_on_write: 0
kstat.zfs.misc.arcstats.l2_abort_lowmem: 0
kstat.zfs.misc.arcstats.l2_cksum_bad: 0
kstat.zfs.misc.arcstats.l2_io_error: 0
kstat.zfs.misc.arcstats.l2_size: 0
kstat.zfs.misc.arcstats.l2_hdr_size: 0
kstat.zfs.misc.arcstats.memory_throttle_count: 9
kstat.zfs.misc.arcstats.l2_write_trylock_fail: 0
kstat.zfs.misc.arcstats.l2_write_passed_headroom: 0
kstat.zfs.misc.arcstats.l2_write_spa_mismatch: 0
kstat.zfs.misc.arcstats.l2_write_in_l2: 0
kstat.zfs.misc.arcstats.l2_write_io_in_progress: 0
kstat.zfs.misc.arcstats.l2_write_not_cacheable: 30
kstat.zfs.misc.arcstats.l2_write_full: 0
kstat.zfs.misc.arcstats.l2_write_buffer_iter: 0
kstat.zfs.misc.arcstats.l2_write_pios: 0
kstat.zfs.misc.arcstats.l2_write_buffer_bytes_scanned: 0
kstat.zfs.misc.arcstats.l2_write_buffer_list_iter: 0
kstat.zfs.misc.arcstats.l2_write_buffer_list_null_iter: 0


-- 
Alexander Zagrebin



RE: 8.1-STABLE: zfs and sendfile: problem still exists

2010-10-29 Thread Alexander Zagrebin
> > I've tried the nginx with
> > disabled sendfile (the nginx.conf contains "sendfile off;"):
> > 
> > $ dd if=/dev/random of=test bs=1m count=100
> > 100+0 records in
> > 100+0 records out
> > 104857600 bytes transferred in 5.892504 secs (17795083 bytes/sec)
> > $ fetch -o /dev/null http://localhost/test
> > /dev/null 100% of  100 
> MB   41 MBps
> > $ fetch -o /dev/null http://localhost/test
> > /dev/null 100% of  100 
> MB   44 MBps
> > $ fetch -o /dev/null http://localhost/test
> > /dev/null 100% of  100 
> MB   44 MBps
> > 
> 
> I am really surprised with such a bad performance of sendfile.
> Will you be able to profile the issue further?

Yes.

> I will also try to think of some measurements.

The transfer rate is too low on the _first_ attempt only.
Further attempts demonstrate a reasonable transfer rate.
For example, nginx with "sendfile on;":

$ dd if=/dev/random of=test bs=1m count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 5.855305 secs (17908136 bytes/sec)
$ fetch -o /dev/null http://localhost/test
/dev/null   3% of  100 MB  118 kBps
13m50s^C
fetch: transfer interrupted
$ fetch -o /dev/null http://localhost/test
/dev/null 100% of  100 MB   39 MBps

If there is no access to the file for some time, then everything
repeats:
the first attempt - transfer rate is too low;
further attempts - no problems.

Can you reproduce the problem on your system?

-- 
Alexander Zagrebin



RE: 8.1-STABLE: zfs and sendfile: problem still exists

2010-10-29 Thread Alexander Zagrebin
> > I've noticed that ZFS on 8.1-STABLE still has problems with 
> sendfile.
> 
> Which svn revision, just in case?

8.1-STABLE; the source tree was last updated on 2010-10-27.

> > When accessing a file at first time the transfer speed is 
> too low, but
> > on following attempts the transfer speed is normal.
> > 
> > How to repeat:
> > 
> > $ dd if=/dev/random of=/tmp/test bs=1m count=100
> > 100+0 records in
> > 100+0 records out
> > 104857600 bytes transferred in 5.933945 secs (17670807 bytes/sec)
> > $ sudo env LC_ALL=C /usr/libexec/ftpd -D
> > 
> > The first attempt to fetch file:
> > 
> > $ fetch -o /dev/null ftp://localhost/tmp/test
> > /dev/null   1% of  100 
> MB  118 kBps
> > 14m07s^C
> > fetch: transfer interrupted
> > 
> > The transfer rate is too low (approx. 120 kBps), but any 
> subsequent attempts
> > are success:
> > 
> > $ fetch -o /dev/null ftp://localhost/tmp/test
> > /dev/null 100% of  100 
> MB   42 MBps
> > $ fetch -o /dev/null ftp://localhost/tmp/test
> > /dev/null 100% of  100 
> MB   47 MBps
> 
> Can you do an experiment with the same structure but sendfile 
> excluded?

IMHO, ftpd doesn't have an option to disable sendfile. I've tried nginx
with sendfile disabled (nginx.conf contains "sendfile off;"):

$ dd if=/dev/random of=test bs=1m count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 5.892504 secs (17795083 bytes/sec)
$ fetch -o /dev/null http://localhost/test
/dev/null 100% of  100 MB   41 MBps
$ fetch -o /dev/null http://localhost/test
/dev/null 100% of  100 MB   44 MBps
$ fetch -o /dev/null http://localhost/test
/dev/null 100% of  100 MB   44 MBps

-- 
Alexander Zagrebin



8.1-STABLE: zfs and sendfile: problem still exists

2010-10-27 Thread Alexander Zagrebin
Hi!

I've noticed that ZFS on 8.1-STABLE still has problems with sendfile.
When accessing a file for the first time the transfer speed is too low,
but on subsequent attempts the transfer speed is normal.

How to repeat:

$ dd if=/dev/random of=/tmp/test bs=1m count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 5.933945 secs (17670807 bytes/sec)
$ sudo env LC_ALL=C /usr/libexec/ftpd -D

The first attempt to fetch file:

$ fetch -o /dev/null ftp://localhost/tmp/test
/dev/null   1% of  100 MB  118 kBps
14m07s^C
fetch: transfer interrupted

The transfer rate is too low (approx. 120 kBps), but all subsequent
attempts succeed:

$ fetch -o /dev/null ftp://localhost/tmp/test
/dev/null 100% of  100 MB   42 MBps
$ fetch -o /dev/null ftp://localhost/tmp/test
/dev/null 100% of  100 MB   47 MBps
...

To reproduce, it is enough to copy the file and try again:

$ cp /tmp/test /tmp/test1
$ fetch -o /dev/null ftp://localhost/tmp/test1
/dev/null   2% of  100 MB  119 kBps
13m50s^C
fetch: transfer interrupted
$ fetch -o /dev/null ftp://localhost/tmp/test1
/dev/null 100% of  100 MB   41 MBps
$ fetch -o /dev/null ftp://localhost/tmp/test1
/dev/null 100% of  100 MB   47 MBps

...and again:

$ cp /tmp/test1 /tmp/test2
$ fetch -o /dev/null ftp://localhost/tmp/test2
/dev/null   1% of  100 MB  118 kBps
14m07s^C
fetch: transfer interrupted
$ fetch -o /dev/null ftp://localhost/tmp/test2
/dev/null 100% of  100 MB   41 MBps
$ fetch -o /dev/null ftp://localhost/tmp/test2
/dev/null 100% of  100 MB   47 MBps

I've tried ftpd and nginx with "sendfile on". The behavior is the same.
After disabling sendfile in nginx ("sendfile off") the problem is
gone.

-- 
Alexander Zagrebin



RE: upcoming 7.3-RELEASE: zfsloader doesn't support ZFS (doesn't link with libzfsboot)

2010-03-23 Thread Alexander Zagrebin
> On Tuesday 23 March 2010 3:41:06 am Alexander Zagrebin wrote:
> > I have tried to build RELENG_7_3_0_RELEASE and have noticed that
> > zfsloader really doesn't support ZFS due to incomplete Makefiles
> > (the LOADER_ZFS_SUPPORT issue).
> > Will this issue be fixed in 7.3-RELEASE?
> 
> Can you provide the output of the errors you are seeing?

There are no build errors.

IMHO, to support ZFS, the loader has to be linked with libzfsboot.
But (IMHO again) in RELENG_7_3_0_RELEASE zfsloader is built without
this library.

To build zfsloader, /usr/src/sys/boot/i386/zfsloader/Makefile contains
the following key lines:

LOADER_ZFS_SUPPORT=yes
...
.include "${.CURDIR}/../loader/Makefile"

So /usr/src/sys/boot/i386/loader/Makefile has to set the required
CFLAGS and so on, but it doesn't. It contains the following ZFS related
lines:

# Set by zfsloader Makefile
#.if ${MK_ZFS} != "no"
#CFLAGS+=   -DLOADER_ZFS_SUPPORT
#LIBZFS=${.OBJDIR}/../../zfs/libzfsboot.a
#.else
LIBZFS=
#.endif

As you can see, all the ZFS related stuff is commented out, so
"LOADER_ZFS_SUPPORT=yes" (set in /usr/src/sys/boot/i386/zfsloader/Makefile)
doesn't affect the build process.
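Judging by the commented-out block, restoring something like the following in the shared loader Makefile would honor the flag. This is an untested reconstruction based on those comments, not the committed fix:

```make
# Hypothetical reconstruction of the intended logic:
.if defined(LOADER_ZFS_SUPPORT) && ${MK_ZFS} != "no"
CFLAGS+=	-DLOADER_ZFS_SUPPORT
LIBZFS=		${.OBJDIR}/../../zfs/libzfsboot.a
.else
LIBZFS=
.endif
```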

-- 
Alexander Zagrebin




upcoming 7.3-RELEASE: zfsloader doesn't support ZFS (doesn't link with libzfsboot)

2010-03-23 Thread Alexander Zagrebin
I have tried to build RELENG_7_3_0_RELEASE and have noticed that
zfsloader really doesn't support ZFS due to incomplete Makefiles
(the LOADER_ZFS_SUPPORT issue).
Will this issue be fixed in 7.3-RELEASE?

-- 
Alexander Zagrebin



RE: 8.0-RELEASE: disk IO temporarily hangs up (ZFS or ATA related problem)

2009-12-16 Thread Alexander Zagrebin
orted.
Short self-test routine
recommended polling time:(   2) minutes.
Extended self-test routine
recommended polling time:( 255) minutes.
Conveyance self-test routine
recommended polling time:(   5) minutes.
SCT capabilities:  (0x303f) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME  FLAG VALUE WORST THRESH TYPE  UPDATED
WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate 0x002f   200   200   051Pre-fail  Always
-   0
  3 Spin_Up_Time0x0027   180   180   021Pre-fail  Always
-   5991
  4 Start_Stop_Count0x0032   100   100   000Old_age   Always
-   21
  5 Reallocated_Sector_Ct   0x0033   200   200   140Pre-fail  Always
-   0
  7 Seek_Error_Rate 0x002e   200   200   000Old_age   Always
-   0
  9 Power_On_Hours  0x0032   100   100   000Old_age   Always
-   109
 10 Spin_Retry_Count0x0032   100   253   000Old_age   Always
-   0
 11 Calibration_Retry_Count 0x0032   100   253   000Old_age   Always
-   0
 12 Power_Cycle_Count   0x0032   100   100   000Old_age   Always
-   16
192 Power-Off_Retract_Count 0x0032   200   200   000Old_age   Always
-   15
193 Load_Cycle_Count0x0032   199   199   000Old_age   Always
-   3819
194 Temperature_Celsius 0x0022   115   106   000Old_age   Always
-   35
196 Reallocated_Event_Count 0x0032   200   200   000Old_age   Always
-   0
197 Current_Pending_Sector  0x0032   200   200   000Old_age   Always
-   0
198 Offline_Uncorrectable   0x0030   200   200   000Old_age   Offline
-   0
199 UDMA_CRC_Error_Count0x0032   200   200   000Old_age   Always
-   0
200 Multi_Zone_Error_Rate   0x0008   200   200   000Old_age   Offline
-   0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_DescriptionStatus  Remaining  LifeTime(hours)
LBA_of_first_error
# 1  Conveyance offline  Completed without error   00%84
-

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
100  Not_testing
200  Not_testing
300  Not_testing
400  Not_testing
500  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
=

-- 
Alexander Zagrebin



8.0-RELEASE: disk IO temporarily hangs up (ZFS or ATA related problem)

2009-12-16 Thread Alexander Zagrebin
Hi!

I use onboard ICH7 SATA controller with two disks attached:

atapci1:  port
0x30c8-0x30cf,0x30ec-0x30ef,0x30c0-0x30c7,0x30e8-0x30eb,0x30a0-0x30af irq 19
at device 31.2 on pci0
atapci1: [ITHREAD]
ata2:  on atapci1
ata2: [ITHREAD]
ata3:  on atapci1
ata3: [ITHREAD]
ad4: 1430799MB  at ata2-master SATA150
ad6: 1430799MB  at ata3-master SATA150

The disks are used for mirrored ZFS pool.
I have noticed that the system periodically locks up on disk operations.
After approx. 10 min of very slow disk I/O (several KB/s) the speed of
disk operations returns to normal.
gstat has shown that the problem is with ad6.
For example, there is a filtered output of iostat -x 1:

extended device statistics
device r/s   w/skr/skw/s wait svc_t  %b
ad6  818.6   0.0 10840.2 0.00   0.4  34
ad6  300.6 642.0  3518.5 24830.3   50  24.8  72
ad61.0 639.363.7 17118.30  62.1  98
ad6  404.5   4.0  6837.7 4.00   0.5  18
ad6  504.5   0.0 13667.2 0.01   0.7  32
ad6  633.3   0.0 13190.3 0.01   0.7  38
ad6  416.3 384.5  8134.7 24606.20  16.3  57
ad6  538.9  76.7  9772.8  2982.2   55   2.9  40
ad6   31.9 929.5   801.0 37498.60  27.2  82
ad6  635.5   0.0 13087.1 0.01   0.6  35
ad6  579.6   0.0 16669.8 0.00   0.8  43
ad6  603.6   0.0 11697.4 0.01   0.7  40
ad6  538.0   0.0 10438.7 0.00   0.9  47
ad6   30.9 898.4   868.6 40585.40  36.6  78
ad6  653.3  86.6  8566.6   202.71   0.8  40
ad6  737.1   0.0  6429.4 0.01   0.6  42
ad6  717.1   0.0  3958.7 0.00   0.5  36
ad6 1179.5   0.0  2058.9 0.00   0.1  15
ad6 1191.2   0.0  1079.6 0.01   0.1  15
ad6  985.1   0.0  5093.9 0.00   0.2  23
ad6  761.8   0.0  9801.3 0.01   0.4  31
ad6  698.7   0.0  9215.1 0.00   0.4  30
ad6  434.2 513.9  5903.1 13658.3   48  10.2  55
ad63.0 762.8   191.2 28732.30  57.6  99
ad6   10.0   4.0   163.9 4.01   1.6   2

Before this line operations are normal.
Then the behaviour of ad6 changes (note the high average access times
and "busy" percentages significantly greater than 100):

ad60.0   0.0 0.0 0.01   0.0   0
ad61.0   0.0 0.5 0.01 1798.3 179
ad61.0   0.0 1.5 0.01 1775.4 177
ad60.0   0.0 0.0 0.01   0.0   0
ad6   10.0   0.075.2 0.01 180.3 180
ad60.0   0.0 0.0 0.01   0.0   0
ad6   83.7   0.0   862.9 0.01  21.4 179
ad60.0   0.0 0.0 0.01   0.0   0
ad61.0   0.063.7 0.01 1707.4 170
ad61.0   0.0 9.0 0.00 1791.0 178
ad6   10.9   0.0   172.2 0.02   0.2   0
ad6   24.9   0.0   553.7 0.01 143.3 179
ad60.0   0.0 0.0 0.07   0.0   0
ad62.0  23.932.4  1529.91 336.3 177
ad60.0   0.0 0.0 0.01   0.0   0
ad6   68.7   0.0  1322.8 0.01  26.3 181
ad60.0   0.0 0.0 0.01   0.0   0
ad6   27.9   0.0   193.7 0.01  61.6 172
ad61.0   0.0 2.5 0.01 1777.4 177
ad60.0   0.0 0.0 0.01   0.0   0
ad61.0   0.0 2.0 0.01 1786.9 178
ad60.0   0.0 0.0 0.01   0.0   0
ad62.0   0.0 6.5 0.01 899.4 179
ad60.0   0.0 0.0 0.01   0.0   0
ad61.0   0.0 2.0 0.01 1786.7 178
ad60.0   0.0 0.0 0.01   0.0   0

And so on for about 10 minutes.
Then disk I/O returns to normal:

ad6  139.4   0.0  8860.5 0.01   4.4  61
ad6  167.3   0.0 10528.7 0.01   3.3  55
ad6   60.8 411.5  3707.6  8574.81  19.6  87
ad6  163.4   0.0 10334.9 0.01   4.4  72
ad6  157.4   0.0  9770.7 0.01   5.0  78
ad6  108.5   0.0  6886.8 0.00   3.9  43
ad6  101.6   0.0  6381.4 0.00   2.6  27
ad6  109.6   0.0  7013.9 0.00   2.0  22
ad6  121.4   0.0  7769.7 0.00   2.4  29
ad6   92.5   0.0  5922.6 0.01   3.4  31
ad6  122.4  19.9  7833.0  1273.70   3.9  54
ad6   83.6   0.0  5349.5 0.00   3.9  33
ad65.0   0.0   318.4 0.00   8.1   4

There are no ATA error messages either in the system log or on the
console.
The manufacturer's diagnostic test passes on ad6 without any errors.
ad6 also contains a swap partition.
I have tried to run several (10..20) instances of dd, reading and
writing data from and to the swap partition simultaneously, but that
did not trigger the lockup.
So there is a probability that this problem is ZFS related.

I have been forced to switch ad6 to the offline state... :(

Any suggestions on this problem?

-- 
Alexander Zagrebin


igmp problems on 8.0-RC1

2009-10-09 Thread Alexander Zagrebin
After upgrading from 7.2-RELEASE to 8.0-RC1 I have noticed problems
with igmp.
xorp sees IGMP_V2_LEAVE_GROUP, but not IGMP_V2_MEMBERSHIP_REPORT.
igmpproxy shows the same behavior.

xorp_rtrmgr.log contains only:

[ TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_LEAVE_GROUP from 192.168.0.10 to
224.0.0.2 on vif rl0
[ TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_LEAVE_GROUP from 192.168.0.10 to
224.0.0.2 on vif rl0
[ TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_LEAVE_GROUP from 192.168.0.10 to
224.0.0.2 on vif rl0
[ TRACE xorp_igmp MLD6IGMP ] RX IGMP_V2_LEAVE_GROUP from 192.168.0.10 to
224.0.0.2 on vif rl0

but /var/log/messages (KTR and KTR_VERBOSE enabled):

kernel: cpu1 ip_mforward: delete mfc orig 192.168.0.10 group e002 ifp
0xc3e99400
kernel: cpu1 igmp_input: called w/mbuf (0xc494f900,24)
kernel: cpu1 ip_mforward: delete mfc orig 192.168.0.10 group ef20013a ifp
0xc3e99400
kernel: cpu1 igmp_input: called w/mbuf (0xc4072300,24)
kernel: cpu1 process v2 report 239.32.1.58 on ifp 0xc3e99400(rl0)
kernel: cpu1 ip_mforward: delete mfc orig 192.168.0.10 group ef20013a ifp
0xc3e99400
kernel: cpu1 igmp_input: called w/mbuf (0xc4073500,24)
kernel: cpu1 process v2 report 239.32.1.58 on ifp 0xc3e99400(rl0)
kernel: cpu1 ip_mforward: delete mfc orig 192.168.0.10 group ef20013a ifp
0xc3e99400
kernel: cpu1 igmp_input: called w/mbuf (0xc429cc00,24)
kernel: cpu1 process v2 report 239.32.1.58 on ifp 0xc3e99400(rl0)
kernel: cpu1 ip_mforward: delete mfc orig 192.168.0.10 group e002 ifp
0xc3e99400
kernel: cpu1 igmp_input: called w/mbuf (0xc4926500,24)
kernel: cpu1 ip_mforward: delete mfc orig 192.168.0.10 group ef200139 ifp
0xc3e99400
kernel: cpu1 igmp_input: called w/mbuf (0xc429fe00,24)
kernel: cpu1 process v2 report 239.32.1.57 on ifp 0xc3e99400(rl0)
kernel: cpu1 ip_mforward: delete mfc orig 192.168.0.10 group ef200139 ifp
0xc3e99400
kernel: cpu1 igmp_input: called w/mbuf (0xc4242a00,24)
kernel: cpu1 process v2 report 239.32.1.57 on ifp 0xc3e99400(rl0)
kernel: cpu1 ip_mforward: delete mfc orig 192.168.0.10 group e002 ifp
0xc3e99400
kernel: cpu1 igmp_input: called w/mbuf (0xc491f700,24)
kernel: cpu1 ip_mforward: delete mfc orig 192.168.0.10 group ef200108 ifp
0xc3e99400
...

So the kernel processes IGMP v2 reports, but the user-space daemons
don't receive them.

Any suggestions?

PS: It is not an ipfw issue (when testing, the first ipfw rule was
"allow ip from any to any").

-- 
Alexander Zagrebin



4.3-STABLE can't read files on mounted NTFS volumes

2001-05-22 Thread Alexander Zagrebin

Hi!

I cvsupped to 4.3-STABLE on 20/05/2001, and after recompiling I have
trouble accessing files on mounted NTFS volumes.
Directory listing works, but I can't view file contents.
For example, `cat /mnt/somefile.txt` returns the error
"cat: /mnt/somefile.txt: Inappropriate ioctl for device".

Does anyone know anything about this?

Alexander Zagrebin
--

