Re: kldload of driver for isa pnp card (cycle two)

2000-04-02 Thread Doug Rabson
On Sat, 1 Apr 2000, Nikolai Saoukh wrote: On Fri, Mar 31, 2000 at 08:55:37PM +0100, Doug Rabson wrote: I will try to tackle these issues soon. Due to other commitments, this won't happen for a few days though. Can I help somehow? After I get the basics working, I'll contact you

Re: PNP dependant configs in /sys/isa/pnpparse.c

2000-04-02 Thread Doug Rabson
On Sat, 1 Apr 2000, Pierre Y. Dampure wrote: Since revision 1.5 of the above, my kernel is giving me a "too many dependant configs (8)" when probing PNP resources. Problem is, it looks like the SoundBlaster AWE 64 Gold advertises 8 different PNP configurations (at least, that's what I

Re: MLEN and crashes

2000-04-02 Thread Alfred Perlstein
* Gary Jennejohn [EMAIL PROTECTED] [000402 01:43] wrote: This is a HEADS UP. The recent increase in MLEN from 128 to 256 bytes led to very surprising problems with the latest, so-called developers' version of isdn4bsd. The new version uses slcompress by default. The change in MLEN makes

Re: MLEN and crashes

2000-04-02 Thread Bruce Evans
On Sun, 2 Apr 2000, Gary Jennejohn wrote: struct slcompress is now in struct sppp, which is passed by ispppcontrol as part of an ioctl call. Eventually the kernel lands in sppp_params, which does a copyin to a struct spppreq (which contains struct sppp) on the kernel stack. Because struct

Re: JetDirect 500X and FreeBSD

2000-04-02 Thread Alexander Langer
Thus spake Andrew MacIntyre ([EMAIL PROTECTED]): they weren't particularly reliable, particularly when multiple jobs were queued simultaneously. I hope their more recent stuff is better behaved. It is now. A further thing is: If your LaserJet doesn't understand PostScript, you have to use

Re: Deadlock with vinum raid5

2000-04-02 Thread Vallo Kallaste
On Sun, Apr 02, 2000 at 01:50:16AM +0200, Bernd Walter [EMAIL PROTECTED] wrote: Greg - I've been using vinum's raid5 code for months now for FreeBSD's CVS tree on 7x 200M disks - it hasn't hung on me in a long time. The latest current on which I tested R5 successfully is from 19th March, on alpha. That's

Summer/winter time problems with daily/460

2000-04-02 Thread Jeroen Ruigrok/Asmodai
Just went through a few logfiles: Checking for rejected mail hosts: -1d: Cannot apply date adjustment usage: date [-nu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]] ... [-f

Please review newbus patch for amd and adv

2000-04-02 Thread Takahashi Yoshihiro
I converted the amd and adv drivers to new-bus. But as I am not familiar with these drivers or with new-bus, I may have made mistakes. Will somebody please review these changes? For the amd driver: http://home.jp.FreeBSD.org/~nyan/patches/amd.diff.gz For the adv driver:

Re: MLEN and crashes

2000-04-02 Thread Louis A. Mamakos
I wonder how wise it was to change MLEN without more testing. But hey, this is -current, that's what it's there for. I've been running with MLEN set to 256 bytes for more than a year for reasons unrelated to this commit, and haven't seen any problems at all. (Of course, I don't use sppp..)

Buffering issues of the pcm audio driver (Was: cvs commit: src/sys/dev/sound/isa ess.c)

2000-04-02 Thread Maxim Sobolev
Hi, I've tried to track down sound issues of the SDL (Simple DirectMedia Layer) library and found that the current pcm buffering behaviour is inconsistent with the OSS specification, which causes applications that require sophisticated sound control to misbehave on FreeBSD. There are two different buffers

Re: Deadlock with vinum raid5

2000-04-02 Thread Vallo Kallaste
On Sun, Apr 02, 2000 at 03:39:35PM +0200, Vallo Kallaste [EMAIL PROTECTED] wrote: I'll hook up a serial console to get a full traceback next time, but I don't have any knowledge for further analysis. Here's the full traceback; the environment is all the same, except the filesystem is mounted with async (before

Re: Deadlock with vinum raid5

2000-04-02 Thread Bernd Walter
On Sun, Apr 02, 2000 at 03:39:35PM +0200, Vallo Kallaste wrote: I have now got a crash under 4.0-RELEASE, with syncer and bufdaemon in the same vrlock state and pax in flswait. I was in single-user mode using pax to extract the usr archive to a newly created raid5 volume. I'm using an NFS mount with flags -3i

Re: Deadlock with vinum raid5

2000-04-02 Thread Soren Schmidt
It seems Vallo Kallaste wrote: On Sun, Apr 02, 2000 at 03:39:35PM +0200, Vallo Kallaste [EMAIL PROTECTED] wrote: I'll hook up a serial console to get a full traceback next time, but I don't have any knowledge for further analysis. Here's the full traceback; the environment is all the same, except the

Re: Deadlock with vinum raid5

2000-04-02 Thread Bernd Walter
On Sun, Apr 02, 2000 at 01:50:16AM +0200, Bernd Walter wrote: On Sun, Apr 02, 2000 at 01:15:39AM +0200, Bernd Walter wrote: Greg - I've been using vinum's raid5 code for months now for FreeBSD's CVS tree on 7x 200M disks - it hasn't hung on me in a long time. The latest current I tested

Re: Deadlock with vinum raid5

2000-04-02 Thread Bernd Walter
On Sun, Apr 02, 2000 at 05:37:30PM +0200, Bernd Walter wrote: Are you by any chance running the NFS server without nfsd? I expect them to be needed if you are serving vinum volumes. Sometimes I'm too stupid - the NFS case was a different thread. -- B.Walter COSMO-Project

Re: Deadlock with vinum raid5

2000-04-02 Thread Bernd Walter
On Sun, Apr 02, 2000 at 05:38:01PM +0200, Soren Schmidt wrote: It seems Vallo Kallaste wrote: On Sun, Apr 02, 2000 at 03:39:35PM +0200, Vallo Kallaste [EMAIL PROTECTED] wrote: I'll hook up a serial console to get a full traceback next time, but I don't have any knowledge for further

Re: Deadlock with vinum raid5

2000-04-02 Thread Soren Schmidt
It seems Bernd Walter wrote: On Sun, Apr 02, 2000 at 05:38:01PM +0200, Soren Schmidt wrote: It seems Vallo Kallaste wrote: On Sun, Apr 02, 2000 at 03:39:35PM +0200, Vallo Kallaste [EMAIL PROTECTED] wrote: I'll hook up a serial console to get a full traceback next time, but I don't

GDB 5...

2000-04-02 Thread Mirko Viviani
Ciao. As stated in the gdb mlist, gdb 5.0 is on the way but it doesn't have support for freebsd-elf in the package. Is there anyone who could explain to me why? Thanks. --- Bye, Mirko [EMAIL PROTECTED] (NeXTmail, MIME) [EMAIL PROTECTED] To Unsubscribe: send mail to [EMAIL

Re: GDB 5...

2000-04-02 Thread Kris Kennaway
On Sun, 2 Apr 2000, Mirko Viviani wrote: Ciao. As stated in the gdb mlist, gdb 5.0 is on the way but it doesn't have support for freebsd-elf in the package. Is there anyone who could explain to me why? You're more likely to get an answer by asking the gdb developers on the gdb "mlist" :-) Kris

Re: MLEN and crashes

2000-04-02 Thread Gary Jennejohn
Bruce Evans writes: On Sun, 2 Apr 2000, Gary Jennejohn wrote: Moving the struct spppreq into global address space solves the problem, but that makes the kernel BSS somewhat larger. Redefining MAX_HDR to be 128 also fixes the problem, even with the struct spppreq on the stack. Big structs need

Re: Deadlock with vinum raid5

2000-04-02 Thread Vallo Kallaste
On Sun, Apr 02, 2000 at 05:37:30PM +0200, Bernd Walter [EMAIL PROTECTED] wrote: I'll hook up a serial console to get a full traceback next time, but I don't have any knowledge for further analysis. You don't need a serial console to get further information. You should compile the kernel with

Re: Deadlock with vinum raid5

2000-04-02 Thread Vallo Kallaste
On Sun, Apr 02, 2000 at 06:16:43PM +0200, Bernd Walter [EMAIL PROTECTED] wrote: Do you see it only with R5 or also with other organisations? I don't have any problems with my own -current system, which has a striped volume over three UW SCSI disks. SCSI-only system, SMP. Sources from March 14. I

Re: Deadlock with vinum raid5

2000-04-02 Thread Bernd Walter
On Sun, Apr 02, 2000 at 08:02:54PM +0200, Vallo Kallaste wrote: Yes, but I don't have space for a crashdump and I can't build a new kernel with limited memory usage because I don't have the /usr filesystem up and running. Is there a way to limit memory usage without recompiling the kernel? I can store

Re: Deadlock with vinum raid5

2000-04-02 Thread Soren Schmidt
It seems Bernd Walter wrote: On Sun, Apr 02, 2000 at 08:02:54PM +0200, Vallo Kallaste wrote: Yes, but I don't have space for a crashdump and I can't build a new kernel with limited memory usage because I don't have the /usr filesystem up and running. Is there a way to limit memory usage without

Re: Deadlock with vinum raid5

2000-04-02 Thread Alfred Perlstein
* Soren Schmidt [EMAIL PROTECTED] [000402 12:42] wrote: It seems Bernd Walter wrote: On Sun, Apr 02, 2000 at 08:02:54PM +0200, Vallo Kallaste wrote: Yes, but I don't have space for a crashdump and I can't build a new kernel with limited memory usage because I don't have the /usr filesystem up

Re: Deadlock with vinum raid5

2000-04-02 Thread Soren Schmidt
It seems Alfred Perlstein wrote: Just to clarify things... Are these problems with 4.0-RELEASE, with 4.0-STABLE, or with 5.0-CURRENT? I have the problem with 4.0-RELEASE, -STABLE and 5.0-current, but it might only occur with RAID5... I've never seen it with just a striped setup:

Re: RSA library problems

2000-04-02 Thread Dirk Roehrdanz
Hi Jim, Jim Bloom [EMAIL PROTECTED] wrote: A similar patch was added for the USA version of RSA for the same basic reason. Your patch is almost correct. It should add the line: LDADD+= -L$[.OBJDIR]/../libcrypto -lcrypto ^ ^ Your version would

Re: Deadlock with vinum raid5

2000-04-02 Thread Bernd Walter
On Sun, Apr 02, 2000 at 09:39:36PM +0200, Soren Schmidt wrote: I don't think vinum is/was usable under -current, at least not the RAID5 stuff; it's broken, and some of it is because greg is not up to date with what -current looks like these days. Can you please explain what has massively

Re: Deadlock with vinum raid5

2000-04-02 Thread Soren Schmidt
It seems Bernd Walter wrote: On Sun, Apr 02, 2000 at 09:39:36PM +0200, Soren Schmidt wrote: I don't think vinum is/was usable under -current, at least not the RAID5 stuff; it's broken, and some of it is because greg is not up to date with what -current looks like these days. Can you

Perl 5.6.0?

2000-04-02 Thread Thomas T. Veldhouse
Are there any plans to merge perl-5.6.0 into current? I don't have any plans for using it currently, but I'm curious. Thanks, Tom Veldhouse [EMAIL PROTECTED]

Re: Perl 5.6.0?

2000-04-02 Thread Chuck Robey
On Sun, 2 Apr 2000, Thomas T. Veldhouse wrote: Are there any plans to merge perl-5.6.0 into current? I don't have any plans for using it currently, but I'm curious. Hmm. What with the nightmarish build structure of perl, I'm sure that reading this is just going to wreck Mark's day. In light

Re: Deadlock with vinum raid5

2000-04-02 Thread Alfred Perlstein
* Soren Schmidt [EMAIL PROTECTED] [000402 13:05] wrote: It seems Alfred Perlstein wrote: Just to clarify things... Are these problems with 4.0-RELEASE, with 4.0-STABLE, or with 5.0-CURRENT? I have the problem with 4.0-RELEASE, -STABLE and 5.0-current, but it might only occur

Re: Deadlock with vinum raid5

2000-04-02 Thread Greg Lehey
On Sunday, 2 April 2000 at 22:22:39 +0200, Søren Schmidt wrote: It seems Bernd Walter wrote: On Sun, Apr 02, 2000 at 09:39:36PM +0200, Soren Schmidt wrote: I don't think vinum is/was usable under -current, at least not the RAID5 stuff; it's broken, and some of it is because greg is not up to

Re: Deadlock with vinum raid5

2000-04-02 Thread Greg Lehey
On Sunday, 2 April 2000 at 17:42:16 +0200, Bernd Walter wrote: On Sun, Apr 02, 2000 at 01:50:16AM +0200, Bernd Walter wrote: On Sun, Apr 02, 2000 at 01:15:39AM +0200, Bernd Walter wrote: Greg - I've been using vinum's raid5 code for months now for FreeBSD's CVS tree on 7x 200M disks - it does not

Re: Deadlock with vinum raid5

2000-04-02 Thread Matthew Dillon
: to vinum? : : The changes done by phk to separate out the io stuff from struct : buf. : :Alfred and Bernd came up with fixes that seem to work. I still need :to review them, but I'm in the process of installing an up-to-date :-CURRENT on my test box. Watch this space. : :Greg I won't

Re: Deadlock with vinum raid5

2000-04-02 Thread Bernd Walter
On Mon, Apr 03, 2000 at 07:41:54AM +0930, Greg Lehey wrote: On Sunday, 2 April 2000 at 22:22:39 +0200, Søren Schmidt wrote: It seems Bernd Walter wrote: On Sun, Apr 02, 2000 at 09:39:36PM +0200, Soren Schmidt wrote: I don't think vinum is/was usable under -current, at least not the RAID5

Re: Deadlock with vinum raid5

2000-04-02 Thread Bernd Walter
On Mon, Apr 03, 2000 at 07:43:00AM +0930, Greg Lehey wrote: I found a potentially serious bug in the RAID calculations yesterday: it assumed that sizeof (int) == 4. I suspect that it would just slow down the calculations, but in any case I've fixed it. That's generally not good, but always

kernel build busted in /sys/dev/sound/pci/emu10k1.c (fix included)

2000-04-02 Thread Josh Tiefenbach
In attempting to test out Cameron's emu10k1 support, one quickly notices that the build dies in $SUBJECT due to unresolved constants. The attached patch fixes the problem. I'm happy to report that mpg123 is playing mp3's quite fine. There's a little burst of static when it first starts up, but

Re: cvs commit: src/sys/i386/i386 support.s src/sys/kern init_sysent.c kern_prot.c kern_sig.c

2000-04-02 Thread Alfred Perlstein
* Matt Dillon [EMAIL PROTECTED] [000402 11:18] wrote: dillon 2000/04/02 10:52:44 PDT Modified files: sys/i386/i386support.s sys/kern init_sysent.c kern_prot.c kern_sig.c Log: Make the sigprocmask() and geteuid() system calls MP SAFE. Expand

Re: cvs commit: src/sys/i386/i386 support.s src/sys/kern init_sysent.c kern_prot.c kern_sig.c

2000-04-02 Thread Matthew Dillon
:Along with snagging the "easy ones" for MP safeness, shouldn't getpid :be MP safe? The struct proc is allocated from the proc_zone, and :afaik zalloc allows for stable storage meaning it's safe to dereference :the ppid pointer once the entire struct proc is populated, which needs :to happen

Re: cvs commit: src/sys/i386/i386 support.s src/sys/kern init_sysent.c kern_prot.c kern_sig.c

2000-04-02 Thread Alfred Perlstein
* Matthew Dillon [EMAIL PROTECTED] [000402 16:37] wrote: :Along with snagging the "easy ones" for MP safeness, shouldn't getpid :be MP safe? The struct proc is allocated from the proc_zone, and :afaik zalloc allows for stable storage meaning it's safe to dereference :the ppid pointer once

Re: cvs commit: src/sys/i386/i386 support.s src/sys/kern init_sysent.c kern_prot.c kern_sig.c

2000-04-02 Thread Matthew Dillon
:I did look at the code, struct proc is allocated from a zone, :meaning it won't "go away" once allocated, there's no danger in :dereferencing p_pptr, I don't get it. : :-- :-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]] :"I have the heart of a child; I keep it in a jar on my desk."

Re: cvs commit: src/sys/i386/i386 support.s src/sys/kern init_sysent.c kern_prot.c kern_sig.c

2000-04-02 Thread Alfred Perlstein
* Matthew Dillon [EMAIL PROTECTED] [000402 17:04] wrote: :I did look at the code, struct proc is allocated from a zone, :meaning it won't "go away" once allocated, there's no danger in :dereferencing p_pptr, I don't get it. : :-- :-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]

Re: RSA library problems

2000-04-02 Thread Jim Bloom
Dirk Roehrdanz wrote: Thanks for the modification. I have replaced the "[]" with "{}". A run of "make buildworld" with the patch finished without errors. Sorry about that. I did it quickly and the font I was using had the characters looking identical. Time to change fonts :-) The

Re: Please review newbus patch for amd and adv

2000-04-02 Thread Warner Losh
In message [EMAIL PROTECTED] Takahashi Yoshihiro writes: : For adv driver: : http://home.jp.FreeBSD.org/~nyan/patches/advansys.diff.gz I took a look at this change. For the most part it looks good. However, I have a question. Why did you move the adv_isa into the md files file and then comment

Re: cvs commit: src/sys/i386/i386 support.s src/sys/kern init_sysent.c kern_prot.c kern_sig.c

2000-04-02 Thread Matthew Dillon
:Good call. : :Ugh, I should think about this more, but I'll just grasp at straws :and suggest that p_pptr is made a volatile variable, and we re-read :it after we snarf the pid from the pointer and make sure it hasn't :changed out from under us. : :Either that or store it in the child's proc

Load average calculation?

2000-04-02 Thread Kevin Day
I'm not sure if this is -current fodder or not, but since it's still happening in -current, I'll ask. We recently upgraded a server from 2.2.8 to 4.0 (the same behavior is shown on 5.0-current, too). Before, with the exact same load, we'd see load averages between 0.20 and 0.30. Now, we're

Re: Load average calculation?

2000-04-02 Thread Kevin Day
:We recently upgraded a server from 2.2.8 to 4.0 (the same behavior is shown :on 5.0-current, too). Before, with the exact same load, we'd see load :averages between 0.20 and 0.30. Now, we're getting: : :load averages: 4.16, 4.23, 4.66 : :Top shows the same CPU percentages, just a

Recent commits to /sys/dev/sound/isa/ess.c

2000-04-02 Thread Donn Miller
I tried this, and the buffer size of 16k doesn't work too well with the ESS 1868 ISA card. The first 0.3 seconds or so of the beginning of the clip plays in an infinite loop. I bumped the buffer size down to 12k, and it seems to work pretty well. Actually, I tried experimenting with various buffer

Re: Load average calculation?

2000-04-02 Thread Wilko Bulte
On Sun, Apr 02, 2000 at 11:10:59PM -0500, Kevin Day wrote: :We recently upgraded a server from 2.2.8 to 4.0 (the same behavior is shown :on 5.0-current, too). Before, with the exact same load, we'd see load :averages between 0.20 and 0.30. Now, we're getting: : :load averages:

Re: Load average calculation?

2000-04-02 Thread Kevin Day
I believe the load average was changed quite a while ago to reflect not only runnable processes but also processes stuck in disk-wait. It's a more accurate measure of load. Ahh, and since nearly everything is done on this system via NFS, I can imagine that several

Re: Load average calculation?

2000-04-02 Thread Matthew Dillon
:I'm not sure if this is -current fodder or not, but since it's still :happening in -current, I'll ask. : :We recently upgraded a server from 2.2.8 to 4.0(the same behavior is shown :on 5.0-current, too). Before, with the exact same load, we'd see load :averages from between 0.20 and 0.30. Now,