On Fri, 20 Sep 2002, Boris Popov wrote:
On Thu, 19 Sep 2002, Jeff Roberson wrote:
Well, I haven't tested it with smbfs, but I may point out that the patch for
nwfs contains two vref()s instead of vgetref().
Ah, thanks very much. (Un?)luckily it was in debug code so it would not have been
I have a patch available at
http://www.chesapeake.net/~jroberson/vfssmp.diff that locks the majority
of the vnode fields. The namecache locking has been omitted from this
patch. The locking has been specified in vnode.h and all interlock,
syncer, and vn lock usage has been verified. Any places
Is someone going to address this? If not, I will.
Jeff
To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-current in the body of the message
As near as I can tell the panic is happening in VOP_GETATTR(). It looks
to me like it would be possible for the vnode to be recycled between the
time when it passes the vp->v_mount test at the top of the loop and the
time when vn_lock() succeeds. Shouldn't we bump the vnode reference
On Sat, 6 Jul 2002, Jeff Roberson wrote:
jeff2002/07/06 23:39:37 PDT
Modified files:
sys/tools vnode_if.awk
Log:
- Use 'options DEBUG_VFS_LOCKS' instead of the DEBUG_ALL_VFS_LOCKS
environment variable to enable the lock verification code.
Revision
On Sun, 7 Jul 2002, Don Lewis wrote:
On 7 Jul, Jeff Roberson wrote:
On Sat, 6 Jul 2002, Jeff Roberson wrote:
- Use 'options DEBUG_VFS_LOCKS' instead of the DEBUG_ALL_VFS_LOCKS
environment variable to enable the lock verification code.
If you have a crash test box I would
On Sun, 7 Jul 2002, Don Lewis wrote:
It wasn't able to successfully boot with this enabled. I'm hand-transcribing
this, so apologies for any typos:
[snip]
Debugger(c0420fe4) at Debugger+0x45
vn_rdwr(0,c6737800,c6425000,55ac,0,0,1,8,c22c7200,df241aec,c22cc0c0) at
vn_rdwr+0x18d
To those of you who provided me with UMA statistics; Thank you! The
information was enlightening. The current bucket sizes aren't as bad as I
had originally anticipated. I think I need to rework the mechanism by
which the statistics are collected to get more interesting results, but
for now
Jeff, (current included because it may be an interesting answer)
As you know I'm using UMA to allocate threads and cache them.
The constructor methods allow me to allocate threads that have been
pre-set up with thread stacks and other special items.
When they are being cached they
I got 2 panics from -current sources of today.
The back traces are:
panic 1:
vm_page_insert
vm_page_alloc
vm_page_grab
pmap_new_proc
vm_forkproc
fork1
fork
syscall
syscall
panic 2:
panic
mtx_init
fork1
fork
syscall
syscall
I would provide more
On Wed, 5 Jun 2002, Ian Dowse wrote:
The logic for testing UMA_ZFLAG_INTERNAL in zone_dtor() is reversed.
I was able to reliably reproduce crashes with:
mdconfig -a -t malloc -s 10m
mdconfig -d -u 0
mdconfig -a -t malloc -s 10m
mdconfig -d -u 0
Ian
Index:
This will keep statistics on the efficiency of our current malloc bucket
sizes. After some time of general usage please do a 'sysctl kern.mprof >
file' and mail the file to me. Please include the following information:
Primary Usage: workstation/server/web server/etc. etc.
Architecture:
On 9 Apr 2002, Dag-Erling Smorgrav wrote:
Considering that neither LOOKUP_SHARED nor LOOKUP_EXCLUSIVE is
documented anywhere, could you enlighten us as to what, exactly, they
do?
Right, sorry. There was some minimal discussion about this on arch quite
a while ago. Basically, it allows
This patch has seriously reduced file system deadlocks for several people.
It also makes concurrent file system access much faster in certain cases.
Since I have only heard good reports and no bad reports I'm going to
enable it by default. If you do experience some file system deadlocks
please
I saw some similar weirdness in my test machines last night where a dual
processor DS20 (Alpha 21264 500x2) beat out a PII Xeon 450x4. Normally the
quad Xeon beats the DS20. The quad Xeon was using -j16 but was about 74%
idle. The DS20 had used -j8. I didn't get a chance to run top to
I have received a few reports of panics when loading modules. If you're
going to run it you may want to statically compile in pseudofs/procfs, etc.
Thanks,
Jeff
On Sun, 10 Mar 2002, Glenn Gombert wrote:
I have the UMA patch installed on two systems here, a 500MHz K7 system and
a dual PIII SMP
There were problems with loading modules, but I haven't
seen any panics. The loading problems were fixed yesterday
in revisions 1.77 and 1.78 of kern_linker.c. I suspect
people who may have had panics need to update to the latest
version of kern_linker.c.
--
Steve
Good news for
Should I postpone my allocator commit then?
Thanks,
Jeff
I have an updated copy of my kernel memory allocator available for general
testing. If you aren't familiar with this allocator you may want to look
at the arch@ archives under Slab allocator.
This patch has been tested on SMP alpha and single proc x86. It depends
on recent current changes, so
will investigate further and post a pr though.
Thanks for your help!
Jeff
-Original Message-
From: Bruce Evans [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 11, 2001 8:52 PM
To: Jeff Roberson
Cc: '[EMAIL PROTECTED]'
Subject: Re: Broken mmap in current?
On Thu, 11 Jan 2001, Jeff Roberson
Title: Broken mmap in current?
I have written a character device driver for a proprietary PCI device that has a large sum of mapable memory. The character device supports mmap() which I use to export the memory into a user process. I have no problems accessing the memory on this device, but I
Title: Bug Fix for SYSV semaphores.
I noticed that SYSV semaphores initialize the otime member of the semid_ds structure to 0, but never update it afterwards. This field is supposed to be the last operation time, i.e. the last time a semop() was done. In UNIX Network Programming, Stevens
Title: PXE build?
Does anyone know of any current issues with PXE? I've searched the mailing lists and don't see any mention of a problem similar to mine.
I'm running FreeBSD-CURRENT from 2000-09-15 on a server. The client has an Intel 21143-based ethernet card that claims it has PXE 2.0