Re: IBM blade server abysmal disk write performances

2013-01-25 Thread Karim Fodil-Lemelin

Hi,

Quick follow-up on this. As I mentioned in a previous email, we have 
moved to SATA drives and the SAS drives have been shelved for now. The 
current project will be using the SATA drives, so further SAS tests have 
been postponed indefinitely.


Thanks,

Karim.

PS: I'll keep the SAS tests in my back pocket so I have a head start when 
we get around to SAS testing again.




Re: IBM blade server abysmal disk write performances

2013-01-18 Thread Karim Fodil-Lemelin

On 18/01/2013 10:16 AM, Mark Felder wrote:
On Thu, 17 Jan 2013 16:12:17 -0600, Karim Fodil-Lemelin 
fodillemlinka...@gmail.com wrote:


SAS controllers may connect to SATA devices, either directly 
connected using native SATA protocol or through SAS expanders using 
SATA Tunneled Protocol (STP).
 The system is currently set up with SATA instead of SAS. Although it 
uses the same interface and backplane connectors, and the (SATA) drives 
show up as da0 in BSD, with the SATA drive we get *much* better 
performance. I am thinking that something fancy in that SAS drive is 
not being handled correctly by the FreeBSD driver. I am planning to 
revisit the SAS drive issue at a later point (sometime next week).


Your SATA drives are connected directly, not through an interposer such 
as the LSISS9252, correct? If so, this might be the cause of your 
problems. Mixing SAS and SATA drives is known to cause serious 
performance issues for almost every 
JBOD/controller/expander/what-have-you. Change your configuration so 
there is only one protocol being spoken on the bus (SAS) by putting 
your SATA drives behind interposers which translate SAS to SATA just 
before the disk. This will solve many problems.
Not sure what you mean by this, but isn't the mpt detecting an interposer 
in this line:


mpt0: <LSILogic SAS/SATA Adapter> port 0x1000-0x10ff mem 0x9991-0x99913fff,0x9990-0x9990 irq 28 at device 0.0 on pci11

mpt0: MPI Version=1.5.20.0
mpt0: Capabilities: ( RAID-0 RAID-1E RAID-1 )
mpt0: 0 Active Volumes (2 Max)
mpt0: 0 Hidden Drive Members (14 Max)

Also, please note that the SATA speed in that same hardware setup is just 
fine. In any case I will have a look.
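
For what it's worth, here is a quick way to see how each drive is attached 
from the host's point of view (it won't identify an interposer by itself, 
but it does show which controller, bus and target every disk hangs off):

camcontrol devlist -v    # list the CAM buses and the devices attached to each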


Thanks,

Karim.


Re: IBM blade server abysmal disk write performances

2013-01-18 Thread Karim Fodil-Lemelin

On 18/01/2013 5:42 PM, Matthew Jacob wrote:
This is all turning into a bikeshed discussion. As far as I can tell, 
the basic original question was why a *SAS* (not a SATA) drive was not 
performing as well as expected based upon experiences with Linux. I 
still don't know whether reads or writes were being used for dd.


This morning, I ran a fio test with a single threaded read component 
and a multithreaded write component to see if there were differences. 
All I had connected to my MPT system were ATA drives (Seagate 500 GBs), 
and I'm remote now and won't be back until Sunday to put in one of my 
'good' SAS drives (140 GB Seagates, i.e., real 15K RPM SAS drives, not 
fat SATA drives).


The numbers were pretty much the same for both FreeBSD and Linux; in 
fact, FreeBSD was slightly faster. I won't report the exact numbers 
right now, but mention this only to note that, at least in my case, the 
difference between the OS platforms involved is negligible. This would 
rule out issues based upon different platform access methods and 
different drivers.


All of this other discussion, about WCE and whatnot, is nice, but for 
all the purpose it serves it could be moved to *-advocacy.



Thanks for the clarifications!

I did mention at some point that those were write speeds (reads were just 
fine), and that the writes were either to the filesystem or direct 
accesses to the device (again, only on SAS).
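
For reference, a fio job along the lines Matthew describes (one sequential 
reader plus several concurrent sequential writers) could look roughly like 
this; the file names, sizes and run time are illustrative only:

cat > mixed.fio <<'EOF'
[global]
; plain pread/pwrite against files in the current directory
ioengine=psync
bs=128k
size=1g
runtime=60
time_based

[seq-read]
; single-threaded read component
rw=read
numjobs=1

[seq-write]
; multi-threaded write component
rw=write
numjobs=4
EOF
fio mixed.fio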


Here is what I am planning to do next week when I get the chance:

0) Focus on the SAS tests _only_, since SATA is working as expected and 
there is nothing to report there.
1) Look carefully at how the drives are physically connected. It feels 
like if SATA works fine SAS should too, but I'll check anyway.
2) Boot verbose with boot -v and send the dmesg output; the mpt driver 
might give us a clue.
3) Run gstat -abc in a loop for the duration of the test, although I 
would think ctlstat(8) might be more interesting here, so I'll run it 
too for good measure :) (a rough sketch of 2) and 3) follows below).
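
Concretely, something like this is what I have in mind (file names are 
only placeholders):

dmesg > dmesg-verbose.txt      # after booting with -v, save the verbose probe messages

while true; do                 # sample GEOM statistics once a second during the write test
    date
    gstat -abc                 # -b prints one batch and exits, -a shows only busy providers, -c adds consumers
    sleep 1
done > gstat-sas-write.log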


Please note that in all tests write caching was enabled; I think this is 
the default with FreeBSD 9.1 GENERIC, but I'll confirm it with 
camcontrol(8).
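
For the record, this is roughly how I plan to check it (assuming the SAS 
disk shows up as da0):

camcontrol modepage da0 -m 8   # dump the SCSI caching mode page; WCE is the write cache enable bit
camcontrol tags da0 -v         # show the current tagged queueing settings for the device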


I've also seen quite a lot of 'quirks' for tagged command queueing in the 
source code (/sys/cam/scsi/scsi_xpt.c), but one in particular got my 
attention (thanks to whoever writes good comments in source code :) :


/*
 * Slow when tagged queueing is enabled. Write performance
 * steadily drops off with more and more concurrent
 * transactions.  Best sequential write performance with
 * tagged queueing turned off and write caching turned on.
 *
 * PR:  kern/10398
 * Submitted by:  Hideaki Okada hok...@isl.melco.co.jp
 * Drive:  DCAS-34330 w/ S65A firmware.
 *
 * The drive with the problem had the S65A firmware
 * revision, and has also been reported (by Stephen J.
 * Roznowski s...@home.net) for a drive with the S61A
 * firmware revision.
 *
 * Although no one has reported problems with the 2 gig
 * version of the DCAS drive, the assumption is that it
 * has the same problems as the 4 gig version.  Therefore
 * this quirk entries disables tagged queueing for all
 * DCAS drives.
 */
{ T_DIRECT, SIP_MEDIA_FIXED, "IBM", "DCAS*", "*" },
/*quirks*/0, /*mintags*/0, /*maxtags*/0

So I looked at the kern/10398 PR and got a feeling of 'deja vu'. The 
original problem was on FreeBSD 3.1, so it's most likely not that, but I 
thought I would mention it because the issue described is awfully 
familiar: basically the drive (plain SCSI back then) is slow on writes 
but fast on reads with dd. Could be a coincidence, or a ghost from the 
past, who knows...
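
If it is indeed the same class of problem, the hypothesis should be 
testable at runtime without touching the quirk table; a rough sketch 
(device name and tag counts are illustrative):

camcontrol tags da0 -N 1 -v    # allow only one outstanding transaction, effectively no queueing
# ... re-run the dd write test, then restore a deeper queue:
camcontrol tags da0 -N 32 -v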


Cheers,

Karim.


Re: IBM blade server abysmal disk write performances

2013-01-17 Thread Karim Fodil-Lemelin

On 16/01/2013 2:48 AM, Dieter BSD wrote:

Karim writes:

It is quite obvious that something is awfully slow on SAS drives,
whatever it is and regardless of OS comparison. We swapped the SAS
drives for SATA and we're seeing much higher speeds. Basically on par
with what we were expecting (roughly 300 to 400 times faster than what
we see with SAS...).

Major clue there!  According to wikipedia: "Most SAS drives provide
tagged command queuing, while most newer SATA drives provide native
command queuing" [1]

Note that the driver says "Command Queueing enabled" without
specifying which.  If the driver is trying to use SATA's NCQ but
the drive only speaks SCSI's TCQ, that could explain it. Or if
the TCQ isn't working for some other reason.

See if there are any error messages in dmesg or /var/log.
If not, perhaps the driver has extra debugging you could turn on.

Get TCQ working and make sure your partitions are aligned on
4 KiB boundaries (in case the drive actually has 4 KiB sectors),
and you should get the expected performance.

[1] http://en.wikipedia.org/wiki/Serial_attached_SCSI

Thanks for the wiki article reference; it is very interesting and 
confirms our current setup. I'm mostly thinking about this line:


SAS controllers may connect to SATA devices, either directly connected 
using native SATA protocol or through SAS expanders using SATA Tunneled 
Protocol (STP).


The system is currently set up with SATA instead of SAS. Although it uses 
the same interface and backplane connectors, and the (SATA) drives show 
up as da0 in BSD, with the SATA drive we get *much* better performance. 
I am thinking that something fancy in that SAS drive is not being 
handled correctly by the FreeBSD driver. I am planning to revisit the 
SAS drive issue at a later point (sometime next week).
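
On the 4 KiB alignment point Dieter raised, this is a quick way to sanity 
check it (a sketch; the partition type is just an example):

gpart show da0    # offsets divisible by 8 (512-byte sectors) are 4 KiB aligned
# when (re)creating a partition, 4 KiB alignment can be forced, e.g.:
# gpart add -t freebsd-ufs -a 4k da0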


Here is some trimmed and hopefully relevant information (from dmesg):

SAS drive:

mpt0: <LSILogic SAS/SATA Adapter> port 0x1000-0x10ff mem 0x9991-0x99913fff,0x9990-0x9990 irq 28 at device 0.0 on pci11

mpt0: MPI Version=1.5.20.0
mpt0: Capabilities: ( RAID-0 RAID-1E RAID-1 )
mpt0: 0 Active Volumes (2 Max)
mpt0: 0 Hidden Drive Members (14 Max)
...

da0 at mpt0 bus 0 scbus0 target 1 lun 0
da0: <IBM-ESXS HUC106030CSS60 D3A6> Fixed Direct Access SCSI-6 device
da0: 300.000MB/s transfers
da0: Command Queueing enabled
da0: 286102MB (585937500 512 byte sectors: 255H 63S/T 36472C)
...
GEOM: da0: the primary GPT table is corrupt or invalid.
GEOM: da0: using the secondary instead -- recovery strongly advised.

SATA drive:

mpt0: <LSILogic SAS/SATA Adapter> port 0x1000-0x10ff mem 0x9b91-0x9b913fff,0x9b90-0x9b90 irq 28 at device 0.0 on pci11

mpt0: MPI Version=1.5.20.0
mpt0: Capabilities: ( RAID-0 RAID-1E RAID-1 )
mpt0: 0 Active Volumes (2 Max)
mpt0: 0 Hidden Drive Members (14 Max)
...
da0 at mpt0 bus 0 scbus0 target 2 lun 0
da0: <ATA ST91000640NS SN03> Fixed Direct Access SCSI-5 device
da0: 300.000MB/s transfers
da0: Command Queueing enabled
da0: 953869MB (1953525168 512 byte sectors: 255H 63S/T 121601C)
...
GEOM: da0s1: geometry does not match label (16h,63s != 255h,63s).

Please let me know if there is anything you would like me to run on the 
FreeBSD 9.1 system to help diagnose this issue.


Thank you,

Karim.


Re: IBM blade server abysmal disk write performances

2013-01-15 Thread Karim Fodil-Lemelin

On 15/01/2013 3:03 PM, Dieter BSD wrote:

Disabling the disk's write cache is *required* for data integrity.
One op per rev means write caching is disabled and no queueing.
But dmesg claims "Command Queueing enabled", so you should be getting
more than one op per rev, and writes should be fast.
Is this dd to the raw drive, or to a filesystem (FFS? ZFS? other?)?
Are you running compression, encryption, or some other feature
that might slow things down? Also, try dd with a larger block size,
like bs=1m.

Hi,

Thanks to everyone who answered so far; here is a follow-up. dd to the 
raw drive, no compression/encryption or other such features, just a 
naive boot off a live 9.1 CD and then dd (see below). The following 
results were gathered on the FreeBSD 9.1 system:


# dd if=/dev/zero of=toto count=100
100+0 records in
100+0 records out
51200 bytes transferred in 1.057507 secs (48416 bytes/sec)

# dd if=/dev/zero of=toto count=100 bs=104
100+0 records in
100+0 records out
10400 bytes transferred in 1.209524 secs (8598 bytes/sec)

# dd if=/dev/zero of=toto count=100 bs=1024
100+0 records in
100+0 records out
102400 bytes transferred in 0.844302 secs (121284 bytes/sec)

# dd if=/dev/zero of=toto count=100 bs=10240
100+0 records in
100+0 records out
1024000 bytes transferred in 2.173532 secs (471123 bytes/sec)

# dd if=/dev/zero of=toto count=100 bs=102400
100+0 records in
100+0 records out
10240000 bytes transferred in 19.915159 secs (514181 bytes/sec)

# dd if=/dev/zero of=toto count=100
100+0 records in
100+0 records out
51200 bytes transferred in 1.070473 secs (47829 bytes/sec)

# dd if=/dev/zero of=foo count=100
100+0 records in
100+0 records out
51200 bytes transferred in 0.683736 secs (74883 bytes/sec)

# dd if=/dev/zero of=foo count=100 bs=1024
100+0 records in
100+0 records out
102400 bytes transferred in 0.682579 secs (150019 bytes/sec)

# dd if=/dev/zero of=foo count=100 bs=10240
100+0 records in
100+0 records out
1024000 bytes transferred in 2.431012 secs (421224 bytes/sec)

# dd if=/dev/zero of=foo count=100 bs=102400
100+0 records in
100+0 records out
10240000 bytes transferred in 19.963030 secs (512948 bytes/sec)

# dd if=/dev/zero of=foo count=10 bs=1024000
10+0 records in
10+0 records out
10240000 bytes transferred in 19.615134 secs (522046 bytes/sec)

# dd if=/dev/zero of=foo count=1 bs=10240000
1+0 records in
1+0 records out
10240000 bytes transferred in 19.579077 secs (523007 bytes/sec)

Best regards,

Karim.



Re: IBM blade server abysmal disk write performances

2013-01-15 Thread Karim Fodil-Lemelin

On 15/01/2013 3:55 PM, Adrian Chadd wrote:

You're only doing one IO at the end. That's just plain silly. There are
all kinds of overhead that could show up which would be amortized over
doing many IOs.

You should also realise that the raw disk IO on Linux is by default
buffered, so you're hitting the buffer cache. The results aren't going
to match, not unless you exhaust physical memory and start falling
behind on disk IO. At that point you'll see what the fuss is about.

To put it simply and give a bit more context, here is what we're 
doing:


1) Boot OS (Linux or FreeBSD in this case)
2) dd some image over to the SAS drive.
3) rinse and repeat for X times.
4) profit.

In this case if step 1) is done with Linux we get 100 times more profit. 
I was wondering if we could close the gap.
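
For what it's worth, step 2) boils down to something like the following 
(the image name is hypothetical); using a larger block size, as suggested 
earlier in the thread, is one obvious thing to try:

dd if=image.img of=/dev/da0 bs=1m    # bs=1m amortizes per-I/O overhead compared with the 512-byte default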


Karim.


Re: IBM blade server abysmal disk write performances

2013-01-15 Thread Karim Fodil-Lemelin

On 15/01/2013 4:54 PM, Wojciech Puchar wrote:


# dd if=/dev/zero of=foo count=1 bs=10240000
1+0 records in
1+0 records out
10240000 bytes transferred in 19.579077 secs (523007 bytes/sec)


You write to a file, not the device, so it will be clustered anyway by FreeBSD:

128 kB by default, more if you put options MAXPHYS=... in the kernel config 
and recompile.


Even with the hard drive write cache disabled, it should do about one 
write per revolution, but it seems to do 4 writes per second.


So probably it is not that, but a much worse failure.

Did you test read speed?

dd if=/dev/disk of=/dev/null bs=512

dd if=/dev/disk of=/dev/null bs=4k

dd if=/dev/disk of=/dev/null bs=128k

As you mentioned, the dd file tests were done on UFS and not on the raw 
device. I will get those numbers for you.


Thanks,

Karim.


Re: IBM blade server abysmal disk write performances

2013-01-15 Thread Karim Fodil-Lemelin

On 15/01/2013 4:54 PM, Wojciech Puchar wrote:


# dd if=/dev/zero of=foo count=1 bs=10240000
1+0 records in
1+0 records out
10240000 bytes transferred in 19.579077 secs (523007 bytes/sec)


You write to a file, not the device, so it will be clustered anyway by FreeBSD:

128 kB by default, more if you put options MAXPHYS=... in the kernel config 
and recompile.


Even with the hard drive write cache disabled, it should do about one 
write per revolution, but it seems to do 4 writes per second.


So probably it is not that, but a much worse failure.

Did you test read speed?

dd if=/dev/disk of=/dev/null bs=512

dd if=/dev/disk of=/dev/null bs=4k

dd if=/dev/disk of=/dev/null bs=128k

?

I'll do the read test as well but if I recall correctly it seemed pretty 
decent.


It is quite obvious that something is awfully slow on the SAS drives, 
whatever it is and regardless of the OS comparison. We swapped the SAS 
drives for SATA and we're seeing much higher speeds, basically on par 
with what we were expecting (roughly 300 to 400 times faster than what 
we see with SAS...).


I find it strange that diskinfo reports those transfer rates:

Transfer rates:
outside:   102400 kbytes in   0.685483 sec = 149384 kbytes/sec
middle:102400 kbytes in   0.747424 sec = 137004 kbytes/sec
inside:102400 kbytes in   1.051036 sec = 97428 kbytes/sec

Yet we get only a tiny fraction of those (it takes 20 seconds to 
transfer 10 MB!) when using dd. I also doubt it's dd's behavior, since 
how else can we explain the performance going up with SATA in the same test?
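
Worth noting: those diskinfo numbers come from its transfer test, which 
only reads from the disk, so a slow write path would not show up there. 
For reference, the command is (assuming the disk is da0):

diskinfo -ctv /dev/da0    # -c measures command overhead, -t sequential read transfer rates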


Unfortunately, we'll have to move on soon and we're about to write off 
SAS and use SATA instead.


Thanks,

Karim.


vm_zone corruption 4.x

2008-01-25 Thread Karim Fodil-Lemelin

Good day,

   I have stumbled into a strange problem where my FreeBSD 4.x box keeps 
crashing under network traffic load. I have enabled INVARIANTS and 
debugging and was able to gather a trace. The context here is that a 
listening connection created a syncache entry, sent a SYN-ACK, and is now 
processing the ACK it got back. Everything seems fine until it tries to 
create a new socket from the listening one; as it is about to allocate 
another TCP control block, the kernel dies :(

(kgdb) bt
#0  Debugger (msg=0xc02bd93b "panic") at ../../i386/i386/db_interface.c:321
#1  0xc016b080 in panic (fmt=0xc02e6cd9 "zone: entry not free") at 
../../kern/kern_shutdown.c:593

#2  0xc025046b in zerror () at ../../vm/vm_zone.c:547
#3  0xc02500ab in zalloci (z=0xce703180) at ../../vm/vm_zone.c:76
#4  0xc01c1809 in in_pcballoc (so=0xeef12fe0, pcbinfo=0xc034ba40, p=0x0)
   at ../../netinet/in_pcb.c:167
#5  0xc01df8f0 in tcp_attach (so=0xeef12fe0, p=0x0) at 
../../netinet/tcp_usrreq.c:1603
#6  0xc01ddbc9 in tcp_usr_attach (so=0xeef12fe0, proto=0, p=0x0) at 
../../netinet/tcp_usrreq.c:175
#7  0xc018cd1d in sonewconn3 (head=0xeedfb7c0, connstatus=2, p=0x0) at 
../../kern/uipc_socket2.c:223
#8  0xc018cc54 in sonewconn (head=0xeedfb7c0, connstatus=2) at 
../../kern/uipc_socket2.c:196

#9  0xc01dbc40 in syncache_socket (sc=0xf0f0ac80, lso=0xeedfb7c0)
   at ../../netinet/tcp_syncache.c:594
#10 0xc01dc290 in syncache_expand (inc=0xf585ac50, th=0xc61d0034, 
sop=0xf585ac48, m=0xc3774200)

   at ../../netinet/tcp_syncache.c:946
#11 0xc01d2ce7 in tcp_input (m=0xc3774200, off0=20, proto=6) at 
../../netinet/tcp_input.c:1058

#12 0xc01ca93f in ip_input (m=0xc3774200) at ../../netinet/ip_input.c:1279
#13 0xc01ca9a3 in ipintr () at ../../netinet/ip_input.c:1300
#14 0xc027e5b9 in swi_net_next ()
#15 0xc016de61 in tsleep (ident=0xce7e9700, priority=280, 
wmesg=0xc02bb3b8 "kqread", timo=3)

   at ../../kern/kern_synch.c:479
#16 0xc01616e3 in kqueue_scan (fp=0xce7f7040, maxevents=65535, 
ulistp=0x80a2000, tsp=0xf585af2c,

   p=0xed3c3d80) at ../../kern/kern_event.c:645
#17 0xc0161211 in kevent (p=0xed3c3d80, uap=0xf585af80) at 
../../kern/kern_event.c:454
#18 0xc028c33e in syscall2 (frame={tf_fs = 47, tf_es = -562495441, tf_ds 
= -1078001617,
 tf_edi = 60, tf_esi = 134881340, tf_ebp = -1077937120, tf_isp = 
-175788076,
 tf_ebx = 134852608, tf_edx = 1, tf_ecx = -1077937128, tf_eax = 
363, tf_trapno = 7,
 tf_err = 2, tf_eip = 134690428, tf_cs = 31, tf_eflags = 663, 
tf_esp = -1077937180,

 tf_ss = 47}) at ../../i386/i386/trap.c:1175
#19 0xc027d155 in Xint0x80_syscall ()

(kgdb) p *z
$2 = {zlock = {lock_data = 0}, zitems = 0x0, zfreecnt = 13945, zfreemin 
= 6, znalloc = 253356,
 zkva = 4021252096, zpagecount = 3687, zpagemax = 5120, zmax = 32768, 
ztotal = 23596, zsize = 640,
 zalloc = 1, zflags = 1, zallocflag = 1, zobj = 0xc0341c80, zname = 
0xc02cb489 "tcpcb",

 znext = 0xce703200}

Now there are a couple of strange things here, and maybe someone with more 
experience with the VM can shed some light on them.

1) I can't help but find it unusual that zitems is NULL ...
2) The sum of zfreecnt + ztotal (13945 + 23596 = 37541) is bigger than zmax (32768) ...
3) If we are in zalloci(), why is the zlock not held (0)?

What else should I be looking for here? The crash only happens after a 
certain number of items are in use (20k so far).


Thanks,

Karim.



Re: TTCP/RFC1644 problem

2004-02-11 Thread Karim Fodil-Lemelin
Hi,

   Your problem here is that your TTCP connection times out and the 
data is retransmitted (losing all the benefits of TTCP); see my other 
email for why this happens.

Karim.

Danny Braniss wrote:

Hi,
I'm running some experiments, and it seems to me that
setting net.inet.tcp.rfc1644 has the reverse effect.
With sysctl net.inet.tcp.rfc1644 = 0, the transaction uses only 6 packets
and takes less than 1 sec; setting net.inet.tcp.rfc1644 to 1 uses
8 packets and takes more than 1 sec.
with net.inet.tcp.rfc1644 = 0:
   No.  Time      Source         Destination    Protocol  Info
   1    0.000000  132.65.80.32   132.65.16.103  TCP  4105 > 255 [SYN] Seq=3300562868 Ack=0 Win=57920 Len=0
   2    0.000038  132.65.16.103  132.65.80.32   TCP  255 > 4105 [SYN, ACK] Seq=3867169834 Ack=3300562869 Win=57344 Len=0
   3    0.003137  132.65.80.32   132.65.16.103  TCP  4105 > 255 [FIN, PSH, ACK] Seq=3300562869 Ack=3867169835 Win=57920 Len=25
   4    0.003215  132.65.16.103  132.65.80.32   TCP  255 > 4105 [ACK] Seq=3867169835 Ack=3300562895 Win=57895 Len=0
   5    0.035350  132.65.16.103  132.65.80.32   TCP  255 > 4105 [FIN, PSH, ACK] Seq=3867169835 Ack=3300562895 Win=57920 Len=4
   6    0.038110  132.65.80.32   132.65.16.103  TCP  4105 > 255 [ACK] Seq=3300562895 Ack=3867169840 Win=57916 Len=0

with net.inet.tcp.rfc1644 = 1:
   No.  Time      Source         Destination    Protocol  Info
   1    0.000000  132.65.80.32   132.65.16.103  TCP  4108 > 255 [FIN, SYN, PSH] Seq=967743282 Ack=0 Win=57600 Len=25
   2    0.000036  132.65.16.103  132.65.80.32   TCP  255 > 4108 [SYN, ACK] Seq=99082279 Ack=967743283 Win=57344 Len=0
   3    0.002622  132.65.80.32   132.65.16.103  TCP  4108 > 255 [FIN, ACK] Seq=967743308 Ack=99082280 Win=57920 Len=0
   4    0.002671  132.65.16.103  132.65.80.32   TCP  255 > 4108 [ACK] Seq=99082280 Ack=967743283 Win=57920 Len=0
   5    1.201556  132.65.80.32   132.65.16.103  TCP  4108 > 255 [FIN, PSH, ACK] Seq=967743283 Ack=99082280 Win=57920 Len=25
   6    1.201609  132.65.16.103  132.65.80.32   TCP  255 > 4108 [ACK] Seq=99082280 Ack=967743309 Win=57895 Len=0
   7    1.227906  132.65.16.103  132.65.80.32   TCP  255 > 4108 [FIN, PSH, ACK] Seq=99082280 Ack=967743309 Win=57920 Len=4
   8    1.230653  132.65.80.32   132.65.16.103  TCP  4108 > 255 [ACK] Seq=967743309 Ack=99082285 Win=57916 Len=0



 
