zfs panic

2009-11-02 Thread Gerrit Kühn
Hi,

I got the following panic when rebooting after a crash on 7.2-REL:

panic: solaris assert: dmu_read(os, smo->smo_object, offset, size,
entry_map) == 0 (0x5 == 0x0), file:
/usr/src/sys/modules/zfs/../../contrib/opensolaris/uts/common/fs/zfs/space_map.c,
line: 341

This seems to be the same panic as mentioned here:
http://lists.freebsd.org/pipermail/freebsd-stable/2008-July/043763.html.

However, I did not see any warnings about the ZIL. The crash leading to this
situation was probably caused by me pushing the controller card a bit too
hard (mechanically) during operation (well, so much for hot-plugging of
cards :-).
Since my pool was almost empty anyway and I needed the machine, I opted to
recreate the pool instead of trying the patches supplied by pjd@ in the
thread above.

Nevertheless, I would like to be prepared in case this happens again (and
the pool is not empty :-).
Right now I am updating the system to 8.0-RC2. Will this issue go away
with zpool v13/FreeBSD 8.0 (as suggested above)? I could not find out from
the thread above whether the suggested patches helped or whether anything
from this was committed at all. Pawel or Daniel, do you remember what the
final result was?
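
For reference, once on 8.0 the pool version can at least be checked and
raised along these lines (the pool name here is just a placeholder):

  zpool upgrade -v       # list the on-disk versions this kernel supports
  zpool upgrade          # show which imported pools are still on an older version
  zpool upgrade tank     # upgrade pool "tank" to the newest supported version

Whether that alone is enough to avoid the space_map assert is exactly what
I am unsure about.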


cu
  Gerrit


Re: Performance issues with 8.0 ZFS and sendfile/lighttpd

2009-11-02 Thread Ivan Voras

gnu...@alltel.blackberry.com wrote:
I can send in more documentation later but I am seeing severe ZFS performance issues with lighttpd. The same machine using UFS will push 1 Gbit or more, but with the same content and traffic load ZFS cannot hit 200 Mbit. UFS does around 3 megabytes/sec of I/O at 800 Mbit of network traffic, but ZFS pushes the disks into the ground with 50+ megabytes/sec of disk I/O. No compression, no atime, no checksums on ZFS, and still the same I/O levels. UFS is running with soft updates and atime on. Orders of magnitude more disk I/O... Like ZFS isn't using the cache, or isn't coalescing disk reads, or both.

Has anyone else seen this or have any recommendations? The lighttpd config remains exactly the same as well, FYI. The only difference is UFS vs. ZFS.


AFAIK, ZFS is incompatible (currently) with some advanced VM operations 
(like mmap, and I think sendfile relies on the same mechanism as mmap), 
so that could be a cause of the slowdown. Though I'm surprised you can 
only get 200 MBit/s - that's 25 MB/s and I think that even with multiple 
memcpy-ing data around the kernel you should be able to get hundreds of 
MB/s on newer hardware (which normally really can achieve tens of 
gigabytes/s of sustained memory access).


What else can you observe on your system? Do you have exceedingly high
sys times and load numbers? I'm also interested in what 10 seconds of
running 'vmstat 1' looks like on your system. Is it a bare-metal machine or
a virtual machine?
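
Something along these lines would cover most of that (the device names are
only examples), and telling lighttpd not to use sendfile is one way to test
the mmap/sendfile theory directly:

  vmstat 1 10                    # 10 seconds of VM/CPU statistics
  iostat -x -w 1 -c 10 da0 da1   # per-disk throughput and %busy
  top -S -d 2                    # sys time and which kernel threads are busy
  # to rule sendfile in or out, lighttpd 1.4.x can be switched to writev,
  # e.g. in lighttpd.conf (if I remember the directive name correctly):
  #   server.network-backend = "writev"

That is only a data-gathering sketch, of course - adjust device names and
paths to your setup.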





8.0-RC1 ZFS loader extremely slow

2009-11-02 Thread Tom Evans
Hi all

I just installed 8.0-RC1 amd64 on a six-disk GPT ZFS raidz1 (following the
guide on the wiki), but
have problems on reboot with the newly installed ZFS aware loader. The
loader runs correctly,
but incredibly slowly. It takes about 2 hours to get to the point where it
enumerates the BIOS
disks, although when it gets to that point, it does not take a long time to
enumerate each
disk. It takes about 2 minutes for each character change of the spinner!

Fully completing the loader takes somewhere between 2 and 8 hours (I got
bored watching it),
and works correctly. The loader from the memstick image works normally.

Daichi GOTO experienced something similar back in January [1], but there
didn't seem to be
any resolution to that problem. Disabling AHCI has no effect. Interestingly,
we both have P45
based motherboards.

I will try installing 8.0-RC2 tonight, and try to save verbose boot logs,
dmidecode etc. If there
is any other information I should be looking at, please let me know.
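
Roughly what I plan to collect, for the record (dmidecode comes from the
sysutils/dmidecode port):

  dmidecode > dmidecode.txt                        # motherboard/BIOS details
  pciconf -lv > pciconf.txt                        # attached devices
  echo 'boot_verbose="YES"' >> /boot/loader.conf   # verbose boot on the next reboot
  dmesg -a > verbose-boot.txt                      # save the verbose dmesg afterwards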

Cheers

Tom



[1]
http://unix.derkeiler.com/Mailing-Lists/FreeBSD/current/2009-02/msg00108.html


Re: 8.0-RC1 NFS client timeout issue

2009-11-02 Thread Olaf Seibert
On Sun 01 Nov 2009 at 17:17:15 -0500, Rick Macklem wrote:
 On Thu, 29 Oct 2009, Olaf Seibert wrote:
 
 
  Thanks, it looks like it should do the trick. I can't try it before
  monday, though.
 
 Although I think the patch does avoid sending the request on the
 partially closed connection, it doesn't fix the real problem,
 so I don't know if it is worth testing?

Well, I tested it anyway, just in case. It seems to work fine for me, so
far.

I don't see your extra RSTs either. Maybe that is because in my case the
client used a different port number for the new connection. (Usually,
this is controlled by the socket option SO_REUSEADDR from sys/socket.h.)

Here is a new packet trace. I had to cut out some packets since I forgot
to kill some (failing) mount attempts of another directory on the same
server.
(sorry again for the long lines)
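
For reference, a capture like this can be taken with something along these
lines (the interface and host names are placeholders) and then viewed in
Wireshark:

  tcpdump -i em0 -s 0 -w nfs-reconnect.pcap host nfs-server and port 2049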

No.  Time        Source          Destination     Protocol  Info
486  60.438406   xxx.xxx.31.43   xxx.xxx.16.142  NFS       V3 LOOKUP Call (Reply In 487), DH:0x61b8eb12/date
487  60.438629   xxx.xxx.16.142  xxx.xxx.31.43   NFS       V3 LOOKUP Reply (Call In 486) Error:NFS3ERR_NOENT
488  60.538796   xxx.xxx.31.43   xxx.xxx.16.142  TCP       hello-port > nfs [ACK] Seq=36477 Ack=44701 Win=8192 Len=0 TSV=228817 TSER=1575935

last real action on old connection (client port hello-port)

537  420.437763  xxx.xxx.16.142  xxx.xxx.31.43   TCP       nfs > hello-port [FIN, ACK] Seq=44701 Ack=36477 Win=49232 Len=0 TSV=1611935 TSER=228817
538  420.437805  xxx.xxx.31.43   xxx.xxx.16.142  TCP       hello-port > nfs [ACK] Seq=36477 Ack=44702 Win=8192 Len=0 TSV=588734 TSER=1611935

server ends connection

563  605.334262  xxx.xxx.31.43   xxx.xxx.16.142  TCP       hello-port > nfs [FIN, ACK] Seq=36477 Ack=44702 Win=8192 Len=0 TSV=773641 TSER=1611935

some time later, client now ends the old connection before sending its request
on a new connection (port 875)

564  605.334303  xxx.xxx.31.43   xxx.xxx.16.142  TCP       875 > nfs [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=5 TSV=773641 TSER=0
565  605.334440  xxx.xxx.16.142  xxx.xxx.31.43   TCP       nfs > hello-port [ACK] Seq=44702 Ack=36478 Win=49232 Len=0 TSV=1630424 TSER=773641
566  605.334564  xxx.xxx.16.142  xxx.xxx.31.43   TCP       nfs > 875 [SYN, ACK] Seq=0 Ack=1 Win=49232 Len=0 TSV=1630424 TSER=773641 MSS=1460 WS=0
567  605.334588  xxx.xxx.31.43   xxx.xxx.16.142  TCP       875 > nfs [ACK] Seq=1 Ack=1 Win=66592 Len=0 TSV=773641 TSER=1630424

new connection set up

568  605.334605  xxx.xxx.31.43   xxx.xxx.16.142  NFS       V3 ACCESS Call (Reply In 570), FH:0x008002a2
569  605.334828  xxx.xxx.16.142  xxx.xxx.31.43   TCP       nfs > 875 [ACK] Seq=1 Ack=141 Win=49092 Len=0 TSV=1630424 TSER=773641

and in use

 I'm hoping that the Help TCP Wizards... thread I just started
 on freebsd-current comes up with something.
 
 At least I can reproduce the problem now. (For some reason, I have
 to reboot the Solaris10 server before the problem appears for me.
 I can't think why this matters, but that's networking for you:-)

Maybe it depends on server load or something. This particular server is
a central file server at a university, so it may be under more pressure to
terminate unused connections.

 rick
-Olaf.
-- 


Re: Performance issues with 8.0 ZFS and sendfile/lighttpd

2009-11-02 Thread Miroslav Lachman

Ivan Voras wrote:

gnu...@alltel.blackberry.com wrote:

I can send in more documentation later but I am seeing severe ZFS
performance issues with lighttpd. The same machine using UFS will push
1 Gbit or more, but with the same content and traffic load ZFS cannot hit
200 Mbit. UFS does around 3 megabytes/sec of I/O at 800 Mbit of network
traffic, but ZFS pushes the disks into the ground with 50+ megabytes/sec of
disk I/O. No compression, no atime, no checksums on ZFS, and still the same
I/O levels. UFS is running with soft updates and atime on. Orders of
magnitude more disk I/O... Like ZFS isn't using the cache, or isn't
coalescing disk reads, or both.
Has anyone else seen this or have any recommendations? The lighttpd config
remains exactly the same as well, FYI. The only difference is UFS vs. ZFS.


AFAIK, ZFS is incompatible (currently) with some advanced VM operations
(like mmap, and I think sendfile relies on the same mechanism as mmap),
so that could be a cause of the slowdown. Though I'm surprised you can
only get 200 MBit/s - that's 25 MB/s and I think that even with multiple
memcpy-ing data around the kernel you should be able to get hundreds of
MB/s on newer hardware (which normally really can achieve tens of
gigabytes/s of sustained memory access).


I have a stranger issue with Lighttpd in a jail on top of ZFS. Lighttpd
is serving static content (mp3 downloads through a flash player). It runs
fine for a relatively small number of parallel clients with bandwidth of
about 30 Mbps, but after some number of clients is reached (about 50-60
parallel clients) the throughput drops down to 6 Mbps.


I can serve hundreds of clients on the same HW using Lighttpd not in a jail
and UFS2 with gjournal instead of ZFS, reaching 100 Mbps (maybe more).


I don't know if it is a ZFS or a jail issue.

Miroslav Lachman


Re: Performance issues with 8.0 ZFS and sendfile/lighttpd

2009-11-02 Thread Ivan Voras

Miroslav Lachman wrote:

Ivan Voras wrote:

gnu...@alltel.blackberry.com wrote:

I can send in more documentation later but I am seeing severe ZFS
performance issues with lighttpd. The same machine using UFS will push
1 Gbit or more, but with the same content and traffic load ZFS cannot hit
200 Mbit. UFS does around 3 megabytes/sec of I/O at 800 Mbit of network
traffic, but ZFS pushes the disks into the ground with 50+ megabytes/sec of
disk I/O. No compression, no atime, no checksums on ZFS, and still the same
I/O levels. UFS is running with soft updates and atime on. Orders of
magnitude more disk I/O... Like ZFS isn't using the cache, or isn't
coalescing disk reads, or both.
Has anyone else seen this or have any recommendations? The lighttpd config
remains exactly the same as well, FYI. The only difference is UFS vs. ZFS.


AFAIK, ZFS is incompatible (currently) with some advanced VM operations
(like mmap, and I think sendfile relies on the same mechanism as mmap),
so that could be a cause of the slowdown. Though I'm surprised you can
only get 200 MBit/s - that's 25 MB/s and I think that even with multiple
memcpy-ing data around the kernel you should be able to get hundreds of
MB/s on newer hardware (which normally really can achieve tens of
gigabytes/s of sustained memory access).


I have a stranger issue with Lighttpd in a jail on top of ZFS. Lighttpd
is serving static content (mp3 downloads through a flash player). It runs
fine for a relatively small number of parallel clients with bandwidth of
about 30 Mbps, but after some number of clients is reached (about 50-60
parallel clients) the throughput drops down to 6 Mbps.


I can serve hundreds of clients on the same HW using Lighttpd not in a jail
and UFS2 with gjournal instead of ZFS, reaching 100 Mbps (maybe more).


I don't know if it is a ZFS or a jail issue.


Do you have actual disk IO or is the vast majority of your data served 
from the caches? (actually - the same question to the OP)
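
On FreeBSD the ARC counters are exported via sysctl, so something like this
should answer it (the gstat filter / disk names are only examples):

  sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
  sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max
  gstat -f 'ad[0-9]+$'     # live per-disk %busy while the load is running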




Re: Performance issues with 8.0 ZFS and sendfile/lighttpd

2009-11-02 Thread Miroslav Lachman

Ivan Voras wrote:

Miroslav Lachman wrote:


[..]


I have a stranger issue with Lighttpd in a jail on top of ZFS.
Lighttpd is serving static content (mp3 downloads through a flash player).
It runs fine for a relatively small number of parallel clients with
bandwidth of about 30 Mbps, but after some number of clients is reached
(about 50-60 parallel clients) the throughput drops down to 6 Mbps.

I can serve hundreds of clients on the same HW using Lighttpd not in
a jail and UFS2 with gjournal instead of ZFS, reaching 100 Mbps (maybe
more).

I don't know if it is a ZFS or a jail issue.


Do you have actual disk IO or is the vast majority of your data served
from the caches? (actually - the same question to the OP)


I had the ZFS zpool as a mirror of two SATA II drives (500 GB), and at peak
iostat (or systat -vm or gstat) showed about 80 tps / 60% busy.


In the UFS case, I am using gmirrored 1 TB SATA II drives working nicely
at 160 or more tps.


Both setups use FreeBSD 7.x amd64 with a GENERIC kernel and 4 GB of RAM.

As the ZFS + Lighttpd in a jail setup was unreliable, I am no longer using it,
but if you want some more info for debugging, I can set it up again.


Miroslav Lachman


Re: Performance issues with 8.0 ZFS and sendfile/lighttpd

2009-11-02 Thread Ivan Voras

Miroslav Lachman wrote:

Ivan Voras wrote:

Miroslav Lachman wrote:


[..]


I have a stranger issue with Lighttpd in a jail on top of ZFS.
Lighttpd is serving static content (mp3 downloads through a flash player).
It runs fine for a relatively small number of parallel clients with
bandwidth of about 30 Mbps, but after some number of clients is reached
(about 50-60 parallel clients) the throughput drops down to 6 Mbps.

I can serve hundreds of clients on the same HW using Lighttpd not in
a jail and UFS2 with gjournal instead of ZFS, reaching 100 Mbps (maybe
more).

I don't know if it is a ZFS or a jail issue.


Do you have actual disk IO or is the vast majority of your data served
from the caches? (actually - the same question to the OP)


I had the ZFS zpool as a mirror of two SATA II drives (500 GB), and at peak
iostat (or systat -vm or gstat) showed about 80 tps / 60% busy.


In the UFS case, I am using gmirrored 1 TB SATA II drives working nicely
at 160 or more tps.


Both setups use FreeBSD 7.x amd64 with a GENERIC kernel and 4 GB of RAM.

As the ZFS + Lighttpd in a jail setup was unreliable, I am no longer using it,
but if you want some more info for debugging, I can set it up again.


For what it's worth, I have just set up a little test on a production
machine with three 500 GB SATA drives in RAIDZ, FreeBSD 7.2-RELEASE. The
total data set is some 2 GB in 5000 files, but the machine has only 2 GB
of RAM total, so there is some disk IO - about 40 IOPS per drive. I'm also
using Apache-worker, not lighty, and siege to benchmark with 10
concurrent users.


In this setup, the machine has no problems saturating a 100 Mbit/s link
- it's not on a LAN, but the latency is close enough, and I get ~11 MB/s.
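
Roughly how I'm driving it, in case anyone wants to reproduce (the URL list
is just an example):

  siege -c 10 -t 60S -f urls.txt    # 10 concurrent users for 60 seconds
  # urls.txt is simply one URL per line, e.g. http://testhost/files/track01.mp3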




Re: 8.0-RC1 NFS client timeout issue

2009-11-02 Thread Rick Macklem



On Mon, 2 Nov 2009, Olaf Seibert wrote:


Although I think the patch does avoid sending the request on the
partially closed connection, it doesn't fix the real problem,
so I don't know if it is worth testing?


Well, I tested it anyway, just in case. It seems to work fine for me, so
far.


Yes, I think the patch is ok, but it doesn't completely resolve the
reconnect issue. It's good to hear that it helps for your case.


I don't see your extra RSTs either. Maybe that is because in my case the
client used a different port number for the new connection. (Usually,
this is controlled by the socket option SO_REUSEADDR from sys/socket.h.)


For my packet trace, it is using different port#s. The problem is that,
for some reason, it sends the RST from the new port# instead of the port#
for the old connection just closed via soclose().

I don't know why you don't see the extra RSTs, but consider yourself
lucky, since you should be ok without them. (It may simply be that your
server isn't Solaris10 -- a different TCP stack in it.)

Do you happen to know what your server is?


I'm hoping that the Help TCP Wizards... thread I just started
on freebsd-current comes up with something.

At least I can reproduce the problem now. (For some reason, I have
to reboot the Solaris10 server before the problem appears for me.
I can't think why this matters, but that's networking for you:-)


Maybe it depends on server load or something. This particular server is
a central file server at a university, so it may be under more pressure to
terminate unused connections.


Or type of server (ie. not Solaris10). It definitely depends upon timing
in the client. (I'm about to try introducing a 1sec delay before the
soconnect() call and see if that makes the RSTs go away. Not much of a
fix, but...)

I now recall that I ran into a similar problem (although I didn't dig
into the packet traces then) when testing my Mac OS X 10 client, which
uses essentially the reconnect code from Mac OS X 10.4 Tiger. I fixed
it by adding a 1sec delay before the reconnect.

Thanks for helping with testing, rick



Re: 7.2 Stable Crash - possibly related to if_re

2009-11-02 Thread Norbert Papke
On October 31, 2009, Pyun YongHyeon wrote:
 On Fri, Oct 30, 2009 at 06:23:51PM -0700, Norbert Papke wrote:
  On October 30, 2009, Pyun YongHyeon wrote:
   On Thu, Oct 29, 2009 at 09:56:19PM -0700, Norbert Papke wrote:
This occurred shortly after scping from a VirtualBox VM to the
host. The file transfer got stuck.  The re interface stopped
working. Shortly afterwards, the host crashed.  The re interface
was used by the host, the guest was using a different NIC in bridged
mode.
   
   
FreeBSD proven.lan 7.2-STABLE FreeBSD 7.2-STABLE #5 r198666: Thu Oct
29 18:36:57 PDT 2009
   
Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address   = 0x18
  
   It looks like a NULL pointer dereference, possibly mbuf related
   one.
  
fault code  = supervisor write data, page not present
instruction pointer = 0x8:0x80d476ee
stack pointer   = 0x10:0xff878ae0
frame pointer   = 0x10:0xff878b40
code segment= base 0x0, limit 0xf, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags= interrupt enabled, resume, IOPL = 0
current process = 18 (swi5: +)
Physical memory: 8177 MB
   

   By chance, did you stop the re0 interface with ifconfig when you
   noticed the file transfer got stuck?
 
  It is possible.  I had it happen twice.  The first time I definitely
  tried to down re.  I cannot recall what I did the second time.  The
  crash dump is from the second time.

 Ok, then would you try attached patch?

I have been running with the patch for a couple of days now.  Although I can
still reproduce the problem with the file transfer, I have not been able to
reproduce the panic.  The patch appears to do what it is supposed to do.

I am going to continue to try to come up with a better test case for the 
original file transfer problem.  I no longer suspect re as a cause.
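
For the record, this is roughly how I have been looking at the dump (the
paths and the dump number are just the usual defaults, adjust as needed):

  kgdb /boot/kernel/kernel /var/crash/vmcore.0   # then 'bt' at the (kgdb) prompt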

Thank you very much for your help.

Cheers,

-- Norbert Papke.
   npa...@acm.org


http://saveournet.ca
Protecting your Internet's level playing field


Re: 7.2 Stable Crash - possibly related to if_re

2009-11-02 Thread Norbert Papke
On October 31, 2009, Pyun YongHyeon wrote:
 On Fri, Oct 30, 2009 at 06:23:51PM -0700, Norbert Papke wrote:
  On October 30, 2009, Pyun YongHyeon wrote:
   On Thu, Oct 29, 2009 at 09:56:19PM -0700, Norbert Papke wrote:
This occurred shortly after scping from a VirtualBox VM to the
host. The file transfer got stuck.  The re interface stopped
working. Shortly afterwards, the host crashed.  The re interface
was used by the host, the guest was using a different NIC in bridged
mode.
   
   
FreeBSD proven.lan 7.2-STABLE FreeBSD 7.2-STABLE #5 r198666: Thu Oct
29 18:36:57 PDT 2009
   
Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address   = 0x18
  
   It looks like a NULL pointer dereference, possibly mbuf related
   one.
  
fault code  = supervisor write data, page not present
instruction pointer = 0x8:0x80d476ee
stack pointer   = 0x10:0xff878ae0
frame pointer   = 0x10:0xff878b40
code segment= base 0x0, limit 0xf, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags= interrupt enabled, resume, IOPL = 0
current process = 18 (swi5: +)


   By chance, did you stop the re0 interface with ifconfig when you
   noticed the file transfer got stuck?
 
  It is possible.  I had it happen twice.  The first time I definitely
  tried to down re.  I cannot recall what I did the second time.  The
  crash dump is from the second time.

 Ok, then would you try attached patch?

I have been running with the patch for a couple of days.  Although I can still 
reproduce the lock-up of the network stack, I have not been able to reproduce 
the panic.  The patch does what it is supposed to do.

I will continue to try to come up with a better test case for the file 
transfer problem.  However, I no longer suspect re as a cause.

Thank you very much for your help.

Cheers,

-- Norbert Papke.
   npa...@acm.org


http://saveournet.ca
Protecting your Internet's level playing field


Re: 7.2 Stable Crash - possibly related to if_re

2009-11-02 Thread Pyun YongHyeon
On Mon, Nov 02, 2009 at 08:45:43AM -0800, Norbert Papke wrote:
 On October 31, 2009, Pyun YongHyeon wrote:
  On Fri, Oct 30, 2009 at 06:23:51PM -0700, Norbert Papke wrote:
   On October 30, 2009, Pyun YongHyeon wrote:
On Thu, Oct 29, 2009 at 09:56:19PM -0700, Norbert Papke wrote:
 This occurred shortly after scping from a VirtualBox VM to the
 host. The file transfer got stuck.  The re interface stopped
 working. Shortly afterwards, the host crashed.  The re interface
 was used by the host, the guest was using a different NIC in bridged
 mode.


 FreeBSD proven.lan 7.2-STABLE FreeBSD 7.2-STABLE #5 r198666: Thu Oct
 29 18:36:57 PDT 2009

 Fatal trap 12: page fault while in kernel mode
 cpuid = 0; apic id = 00
 fault virtual address   = 0x18
   
It looks like a NULL pointer dereference, possibly mbuf related
one.
   
 fault code  = supervisor write data, page not present
 instruction pointer = 0x8:0x80d476ee
 stack pointer   = 0x10:0xff878ae0
 frame pointer   = 0x10:0xff878b40
 code segment= base 0x0, limit 0xf, type 0x1b
 = DPL 0, pres 1, long 1, def32 0, gran 1
 processor eflags= interrupt enabled, resume, IOPL = 0
 current process = 18 (swi5: +)
 
 
By chance, did you stop the re0 interface with ifconfig when you
noticed the file transfer got stuck?
  
   It is possible.  I had it happen twice.  The first time I definitely
   tried to down re.  I cannot recall what I did the second time.  The
   crash dump is from the second time.
 
  Ok, then would you try attached patch?
 
 I have been running with the patch for a couple of days.  Although I can 
 still 
 reproduce the lock-up of the network stack, I have not been able to reproduce 
 the panic.  The patch does what it is supposed to do.
 

Thanks a lot for testing! Patch committed to HEAD (r198814).

 I will continue to try to come up with a better test case for the file 
 transfer problem.  However, I no longer suspect re as a cause.
 
 Thank you very much for your help.
 
 Cheers,
 
 -- Norbert Papke.
npa...@acm.org


Re: 8.0-RC1 ZFS loader extremely slow

2009-11-02 Thread Daichi GOTO
On Mon, 2 Nov 2009 09:53:30 +
Tom Evans tevans...@googlemail.com wrote:
 Hi all
 
 I just installed 8.0-RC1 amd64 on a six-disk GPT ZFS raidz1 (following the
 guide on the wiki), but
 have problems on reboot with the newly installed ZFS aware loader. The
 loader runs correctly,
 but incredibly slowly. It takes about 2 hours to get to the point where it
 enumerates the BIOS
 disks, although when it gets to that point, it does not take a long time to
 enumerate each
 disk. It takes about 2 minutes for each character change of the spinner!
 
 Fully completing the loader takes somewhere between 2 and 8 hours (I got
 bored watching it),
 and works correctly. The loader from the memstick image works normally.
 
 Daichi GOTO experienced something similar back in January [1], but there
 didn't seem to be
 any resolution to that problem. Disabling AHCI has no effect. Interestingly,
 we both have P45
 based motherboards.

Unfortunately, the slow loader behaviour is still present on my box.
Even on the latest 9-current/amd64, the situation is the same...

As a workaround, I have split my disk into small UFS partitions holding a
minimum distribution installed via make installworld/installkernel, plus a
ZFS partition, and I removed all kernel-module-loading configuration from
/boot/loader.conf to speed up booting. Yes, it's just a workaround, not a
fix.
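
Roughly, the shape of that change - module names here are only examples, and
the rc.conf side assumes the usual kld_list/zfs_enable knobs from rc.conf(5):

  # /boot/loader.conf -- keep it as small as possible so the loader reads less
  #zfs_load="YES"        <- removed; loaded later by rc instead
  #if_re_load="YES"      <- removed

  # /etc/rc.conf -- load the modules once the kernel is up
  zfs_enable="YES"
  kld_list="if_re"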

 I will try installing 8.0-RC2 tonight, and try to save verbose boot logs,
 dmidecode etc. If there
 is any other information I should be looking at, please let me know.
 
 Cheers
 
 Tom
 
 
 
 [1]
 http://unix.derkeiler.com/Mailing-Lists/FreeBSD/current/2009-02/msg00108.html

-- 
Daichi GOTO
CEO | ONGS Inc.
81-42-316-7945 | dai...@ongs.co.jp | http://www.ongs.co.jp
LinkedIn: http://linkedin.com/in/daichigoto