Re: mysql50-server not starting correctly on FreeBSD 6.1-PRERELEASE (ldconfig, rcorder)

2006-02-23 Thread Alex Dupre

Florent Thoumie wrote:

Yup, since mysqld is running as root, otherwise REQUIRE: LOGIN.


mysqld switches to user mysql after startup, so I guess it should require 
LOGIN.


--
Alex Dupre


Re: AMD64 or I386

2006-02-23 Thread Spartak Radchenko
On Wed, Feb 22, 2006 at 08:38:44PM -0700, Scott Long wrote:
 Albert Shih wrote:
 Hi all
 
 I have a very strange problem with my new servers: AMD single-core dual
 proc with 4 GB RAM.
 
 When I boot the i386 version of FreeBSD 6.0 it sees 4 GB but tells me it
 cannot access 4 GB, only 3 GB. I upgraded to FreeBSD 6-STABLE and
 nothing changed.
 
 I suspect that the numbers here aren't accurate, and that what you 
 really mean is that you can access 3.5GB, not 3.0GB.

It depends on BIOS settings with some Intel motherboards at least.
Could be 3.0GB or 3.5GB. Not sure about AMD.

-- 
Spartak Radchenko SVR1-RIPE


graid3 data corruption?!?

2006-02-23 Thread Michael Reifenberger

Hi,
I have 5 FireWire disks in one graid3 set,
and am using a fresh STABLE on SMP with a dual AMD64 in i386 mode.

While doing an MD5 checksum of all files in the filesystem
(~770GB of data) one disk died. graid3 did the right thing
and disconnected the disk.

BUT:
after diffing the md5sums of the files, one large file (probably the one
being checked during the disk failure) had a different md5sum than before.
--- md5_11.log  Fri Dec  9 13:23:07 2005
+++ md5_12.log  Wed Feb 22 18:03:03 2006
@@ -4460,3 +4460,3 @@
 MD5 (Backup/totum/root_0_050211_i386.dmp.gz) = 5a3e7b03f48ea4c2cba10624edd996cf
-MD5 (Backup/totum/root_0_050715.dmp.gz) = 0e154301cbec84571d1df94bf68e3d79
+MD5 (Backup/totum/root_0_050715.dmp.gz) = 172d7c12b78f3f191c184d467e31a53c
 MD5 (RIP/.pgp/PGPMacBinaryMappings.txt) = bf1b637a3a69bcbb8d4177be46a1c3ac

BUT:
doing a fresh md5sum of the file now, in degraded mode, I again get
the (correct) value of:
MD5 (Backup/totum/root_0_050715.dmp.gz) = 0e154301cbec84571d1df94bf68e3d79

For me this means that graid3 returned incorrect data during the disk loss.
This shouldn't happen!

Any clues how this could happen?
Has anyone else seen this behaviour?
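
For reference, this is roughly how I would confirm the degraded state and,
once the disk is replaced, re-add it. This is a sketch only, assuming the
array really is named "data" and the failed provider is da2s1a as in the
dmesg below; see graid3(8) for the exact insert syntax:

# Show the array and its components; the state should read DEGRADED and
# the failed disk should be gone.
graid3 status data
graid3 list data

# After replacing the disk, put it back and let graid3 rebuild
# (component number 2 is an assumption matching da2s1a).
graid3 insert -n 2 data da2s1a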

BTW: dmesg showed:
...
GEOM_RAID3: Device data created (id=0).
GEOM_RAID3: Device data: provider da5s1a detected.
GEOM_RAID3: Device data: provider da4s1a detected.
GEOM_RAID3: Device data: provider da3s1a detected.
GEOM_RAID3: Device data: provider da2s1a detected.
GEOM_RAID3: Device data: provider da1s1a detected.
GEOM_RAID3: Device data: provider da1s1a activated.
GEOM_RAID3: Device data: provider da2s1a activated.
GEOM_RAID3: Device data: provider da4s1a activated.
GEOM_RAID3: Device data: provider da3s1a activated.
GEOM_RAID3: Device data: provider da5s1a activated.
GEOM_RAID3: Device data: provider raid3/data launched.
...
(da2:sbp0:0:0:0): READ(10). CDB: 28 0 9 3f 46 6f 0 0 40 0
(da2:sbp0:0:0:0): CAM Status: SCSI Status Error
(da2:sbp0:0:0:0): SCSI Status: Check Condition
(da2:sbp0:0:0:0): ABORTED COMMAND asc:0,0
(da2:sbp0:0:0:0): No additional sense information
(da2:sbp0:0:0:0): Retrying Command (per Sense Data)
(da2:sbp0:0:0:0): READ(10). CDB: 28 0 9 3f 46 6f 0 0 40 0
(da2:sbp0:0:0:0): CAM Status: SCSI Status Error
(da2:sbp0:0:0:0): SCSI Status: Check Condition
(da2:sbp0:0:0:0): MEDIUM ERROR asc:4b,0
(da2:sbp0:0:0:0): Data phase error
(da2:sbp0:0:0:0): Retrying Command (per Sense Data)
(da2:sbp0:0:0:0): READ(10). CDB: 28 0 9 3f 46 6f 0 0 40 0
(da2:sbp0:0:0:0): CAM Status: SCSI Status Error
(da2:sbp0:0:0:0): SCSI Status: Check Condition
(da2:sbp0:0:0:0): ABORTED COMMAND asc:0,0
(da2:sbp0:0:0:0): No additional sense information
(da2:sbp0:0:0:0): Retrying Command (per Sense Data)
(da2:sbp0:0:0:0): READ(10). CDB: 28 0 9 3f 46 6f 0 0 40 0
(da2:sbp0:0:0:0): CAM Status: SCSI Status Error
(da2:sbp0:0:0:0): SCSI Status: Check Condition
(da2:sbp0:0:0:0): MEDIUM ERROR asc:4b,0
(da2:sbp0:0:0:0): Data phase error
(da2:sbp0:0:0:0): Retrying Command (per Sense Data)
(da2:sbp0:0:0:0): READ(10). CDB: 28 0 9 3f 46 6f 0 0 40 0
(da2:sbp0:0:0:0): CAM Status: SCSI Status Error
(da2:sbp0:0:0:0): SCSI Status: Check Condition
(da2:sbp0:0:0:0): ABORTED COMMAND asc:0,0
(da2:sbp0:0:0:0): No additional sense information
(da2:sbp0:0:0:0): Retries Exhausted
GEOM_RAID3: Request failed. da2s1a[READ(offset=79432531968, length=32768)]
GEOM_RAID3: Device data: provider da2s1a disconnected.
GEOM_RAID3: Request failed. da2s1a[READ(offset=79432761344, length=32768)]
GEOM_RAID3: Device data: provider [unknown] disconnected.
GEOM_RAID3: Request failed. da2s1a[READ(offset=79432695808, length=32768)]
GEOM_RAID3: Device data: provider [unknown] disconnected.
GEOM_RAID3: Request failed. da2s1a[READ(offset=79432663040, length=32768)]
GEOM_RAID3: Device data: provider [unknown] disconnected.
GEOM_RAID3: Request failed. da2s1a[READ(offset=79432630272, length=32768)]
GEOM_RAID3: Device data: provider [unknown] disconnected.
GEOM_RAID3: Request failed. da2s1a[READ(offset=79432597504, length=32768)]
GEOM_RAID3: Device data: provider [unknown] disconnected.
...
(da2:sbp0:0:0:0): READ(10). CDB: 28 0 9 3f 46 80 0 0 40 0
(da2:sbp0:0:0:0): CAM Status: SCSI Status Error
(da2:sbp0:0:0:0): SCSI Status: Check Condition
(da2:sbp0:0:0:0): MEDIUM ERROR asc:4b,0
(da2:sbp0:0:0:0): Data phase error
(da2:sbp0:0:0:0): Retrying Command (per Sense Data)
(da2:sbp0:0:0:0): READ(10). CDB: 28 0 9 3f 46 80 0 0 40 0
(da2:sbp0:0:0:0): CAM Status: SCSI Status Error
(da2:sbp0:0:0:0): SCSI Status: Check Condition
(da2:sbp0:0:0:0): MEDIUM ERROR asc:4b,0
(da2:sbp0:0:0:0): Data phase error
(da2:sbp0:0:0:0): Retrying Command (per Sense Data)

The last CAM errors occurred while running `dd`.

Bye/2
---
Michael Reifenberger, Business Development Manager SAP-Basis, Plaut Consulting
Comp: [EMAIL PROTECTED] | Priv: [EMAIL PROTECTED]
  http://www.plaut.de   |   http://www.Reifenberger.com


Re: mysql50-server not starting correctly on FreeBSD 6.1-PRERELEASE (ldconfig, rcorder)

2006-02-23 Thread John Nielsen
On Thursday 23 February 2006 03:46, Alex Dupre wrote:
 Florent Thoumie wrote:
  Yup, since mysqld is running as root, otherwise REQUIRE: LOGIN.

 mysqld switches to user mysql after startup, so I guess it should require
 LOGIN.

That works here.  I removed the BEFORE line and changed REQUIRE to only 
include LOGIN:

# PROVIDE: mysql
# REQUIRE: LOGIN
# KEYWORD: shutdown
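
If anyone wants to double-check where the script now lands in the boot
order, rcorder can print it directly; a quick sketch (the installed script
name is an assumption, adjust it to whatever the port actually installs):

# Print the computed boot order and locate LOGIN and the mysql script in it.
rcorder /etc/rc.d/* /usr/local/etc/rc.d/* 2>/dev/null | grep -n -e LOGIN -e mysql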

Let me know if there are any other incantations I should test.

Thanks,

JN


all not polling interfaces are timing out

2006-02-23 Thread JoaoBR
Hi

Does someone have an idea what is going on with the latest stable sources?

All non-polling network interfaces have been timing out for the last two
weeks or so and then stop transmitting. By all I mean all I have in use:
nve, fxp, sis, sk and even xl0 when polling is not enabled. With the
sources before pre-release 6.1 they worked fine and were stable.

João









Disk I/O system hang on 5.4-RELEASE-p8 i386

2006-02-23 Thread Michael R. Wayne

Been fighting this for a while.  We have an older server, running
5.4-RELEASE-p8 i386 and used primarily for email, which hangs every
couple of weeks.  The hang seems to be in the disk I/O system; pings
succeed, and I can continue to get a login: prompt on the console until
I enter a login, at which point the response stops.

I suspect this is a 5.x issue, as we have the same problem on
another box running 5.4-RELEASE-p8 amd64 with a 3ware controller.
I do not have a dump from that one.

Based on the times of the hangs, the triggering event seems to be
running dump.

We have a serial console set up; I broke into the debugger and got
the following info.  Since the hang is in the disk I/O system, a
dump is not possible.  The many instances of inetd are likely due
to users attempting to POP their email and hanging on disk I/O.

Any suggestions or tips on how to track this down would be appreciated.


db> ps
  pid   proc uid  ppid  pgrp  flag   stat  wmesgwchan  cmd
67487 c5ea98d40   551   551 100 [SLPQ ufs 0xc3851c04][SLP] sendmail
67486 c3b8a1c40   551   551 100 [SLPQ ufs 0xc3851c04][SLP] sendmail
67485 c634c7100   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67484 c62931c40   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67483 c58a93880   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67482 c62937100   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67481 c58ab8d40   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67480 c6292c5c0   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67479 c62938d40   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67478 c634f0000   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67477 c62941c40   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67476 c5e55c5c0   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67475 c5f1fe200   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67474 c5da854c0   551   551 100 [SLPQ ufs 0xc3851c04][SLP] sendmail
67473 c5f9ee200   551   551 100 [SLPQ ufs 0xc3851c04][SLP] sendmail
67472 c58a9e200   551   551 100 [SLPQ ufs 0xc3851c04][SLP] sendmail
67471 c602d8d40   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67470 c61191c40   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67469 c58ab1c40   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67468 c5f19a980   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67467 c58c33880   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67466 c5f1f3880   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67465 c5f190000   442 67465 100 [SLPQ ufs 0xc3851c04][SLP] sshd
67464 c5fa68d40   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67463 c5eab54c0   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67462 c62940000   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67461 c5fa67100   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67460 c61190000   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67459 c634c3880   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67458 c5da87100   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67457 c3b8e3880   551   551 100 [SLPQ ufs 0xc3851c04][SLP] sendmail
67456 c62948d40   551   551 100 [SLPQ ufs 0xc3851c04][SLP] sendmail
67455 c62921c40   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67454 c5f1954c0   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67453 c5eab3880   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67452 c5f171c40   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67451 c62943880   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67450 c60291c40   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67449 c5ea97100   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67448 c3b8ac5c0   551   551 100 [SLPQ ufs 0xc3851c04][SLP] sendmail
67447 c62933880   551   551 100 [SLPQ ufs 0xc3851c04][SLP] sendmail
67446 c5f1fa980   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67445 c5e55e200   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67444 c61177100   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67443 c5fa7a980   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67442 c60263880   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67441 c5e5a1c40   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67440 c61197100   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67439 c5e5a0000   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67438 c3b8e8d40   551   551 100 [SLPQ ufs 0xc3851c04][SLP] sendmail
67437 c6293e200   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67436 c5dfee200   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67435 c5ea8e200   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67434 c5fa73880   574   574 000 [SLPQ ufs 0xc3851c04][SLP] inetd
67433 c39dde200   574   574 000 [SLPQ ufs 

HT1000 serial ATA support

2006-02-23 Thread Tomas Randa

Hello everybody,

I bought a very nice motherboard, a SuperMicro H8SSL-i, but the onboard serial
ATA is only supported in FreeBSD 6.0 as Generic, so only UDMA33 is possible.


I would like to ask here what I can do to make this device fully functional
at full speed.


Thanks a lot

Tomas Randa


Processes started inside a jail are only visible outside the jail

2006-02-23 Thread Vlad GALU
6.1-PRERELEASE

Inside the jail:
[EMAIL PROTECTED] / # /usr/local/sbin/lighttpd -f /usr/local/etc/lighttpd.conf
[EMAIL PROTECTED] / #
[EMAIL PROTECTED] / # ps ax | grep light
55816  p0  S+J0:00.00 grep light
[EMAIL PROTECTED] / #

Outside the jail:
[EMAIL PROTECTED] / # ps ax | grep light
 6263  ??  S  0:47.85 /usr/local/sbin/lighttpd -f
/usr/local/etc/lighttpd.conf
81204  ??  SJ 0:00.01 /usr/local/sbin/lighttpd -f
/usr/local/etc/lighttpd.conf
85151  pa  S+ 0:00.00 grep light
[EMAIL PROTECTED] / #

   There are two lighttpd instances - the host runs one as well. The
other is the one started from within the jail.
   I don't know where to start investigating.
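
   A first thing worth checking is whether more than one jail is involved;
jls lists the running jails, and from the host the J in the ps STAT column
marks jailed processes (a sketch, nothing beyond the output above assumed):

# List the running jails (JID, IP address, hostname, root path); if more
# than one shows up, the extra lighttpd may simply live in another jail.
jls
# Re-check from the host which processes carry the J (jailed) flag.
ps ax | grep light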

--
If it's there, and you can see it, it's real.
If it's not there, and you can see it, it's virtual.
If it's there, and you can't see it, it's transparent.
If it's not there, and you can't see it, you erased it.


LSI Megaraid (amr) performance woes

2006-02-23 Thread Sven Willenberger
I am having some issues getting any (write) performance out of an LSI
MegaRAID (320-1) SCSI RAID card (using the amr driver). The system is an
i386 (P4 Xeon) with on-board Adaptec SCSI controllers and a SUPER GEM318
SAF-TE backplane with six 146GB U320 10k rpm Hitachi drives.

dmesg highlights at message end.

The main problem I am having is getting anywhere near decent write
performance out of the card. I compared having the backplane connected to
the on-board Adaptec controller to having it connected to the LSI
controller.

I tried three methods of benchmarking. "Adaptec Connected" involved using
the on-board Adaptec SCSI controller; "LSI Connected" involved using the
LSI controller as a simple controller, with each drive being its own logical
RAID0 drive. "LSI write-through" and "write-back" involved using the LSI
controller to set up 2 single RAID0 drives as their own logical units and
a spanned mirror of 4 drives (RAID10) as a logical unit ("write-back"
and "write-through" simply referring to the write method used).

In the case of the "XXX Connected" setups, I created a RAID10
configuration with 4 of the drives as follows (shown are the commands for
the Adaptec; for the LSI I simply used amrd2, amrd3, etc. for the drives).

gmirror label -b load md1 da2 da3
gmirror label -b load md2 da4 da5
gmirror load
gstripe label -s 65536 md0 /dev/mirror/md1 /dev/mirror/md2
newfs /dev/stripe/md0
mkdir /bench
mount /dev/stripe/md0 /bench
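
Before benchmarking it is also worth confirming that the geom devices came
up as intended; a quick sketch (only the names already used above appear
here):

# Confirm that both mirrors and the stripe are complete and mounted
# before running the benchmarks.
gmirror status
gstripe status
df -h /bench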

to test read and write performance I used dd as follows:

dd if=/dev/zero of=/raid_or_single_drive/bench64 bs=64k count=32768
which created 2GB files.
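
The read numbers were presumably taken by reading the same file back with
dd, along these lines (a sketch using the /bench mount point from above;
the exact read command isn't shown in the original):

# Write a 2GB test file, then read it back; with 1GB of RAM the file is
# large enough that the read is not served entirely from the buffer cache.
dd if=/dev/zero of=/bench/bench64 bs=64k count=32768
dd if=/bench/bench64 of=/dev/null bs=64k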

The summary of results (measured in bytes/sec) is as follows:

                    |    SINGLE DRIVE     |      RAID DRIVE      |
Connection Method   |  Write   |   Read   |  Write   |   Read    |
--------------------|----------|----------|----------|-----------|
adaptec connected   | 58808057 | 78188838 | 78625494 | 127331944 |
lsi singles         | 43944507 | 81238863 | 95104511 | 111626492 |
lsi write-through   | 45716204 | 81748996 |*10299554*| 108620637 |
lsi write-back      | 31689131 | 37241934 | 50382152 |  56053085 |

With the drives connected to the adaptec controller and using geom, I
get the expected increase in write and read performance when moving from
a single drive to a raid10 system. Likewise, when using the LSI
controller to manage the drives as single units and using geom to create
the raid, I get a marked increase in write performance (less of a read
increase). 

However, when using the LSI to create the raid, I end up with a
*miserable* 10MB/sec write speed (while achieving acceptable read
speeds) in write-through mode and mediocre write speeds in write-back
mode (which, without a battery-backed raid card I would rather not do)
and, for some reason, a marked decrease in read speeds (over the
write-through values).

So the question arises as to whether this is an issue with the way the
LSI card (320-1) handles spans (which I call stripes - versus mirrors)
or the way the amr driver views such spans, or an issue with the card
not playing nicely with the supermicro motherboard, or perhaps even a
defective card. Has anyone else had experience with this card and
motherboard combination?

As a side note, I also tried dragonfly-bsd (1.4.0) which also uses the
amr driver and experienced similar results, and linux (slackware 10.2
default install) which showed write speeds of 45MB/s or so and read
speeds of 140MB/s or so using the default LSI controller settings
(write-through, 64k stripe size, etc.)

Any help or ideas here would be really appreciated in an effort to get
anywhere near acceptable write speeds without relying on the unsafe
write-back method or excessively sacrificing read speeds.


dmesg highlights:
FreeBSD 6.0-RELEASE #0: Thu Nov  3 09:36:13 UTC 2005
[EMAIL PROTECTED]:/usr/obj/usr/src/sys/GENERIC
ACPI APIC Table: PTLTD  APIC  
Timecounter i8254 frequency 1193182 Hz quality 0
CPU: Intel(R) Xeon(TM) CPU 2.80GHz (2799.22-MHz 686-class CPU)
  Origin = GenuineIntel  Id = 0xf29  Stepping = 9

  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0x4400<CNTX-ID,b14>
  Hyperthreading: 2 logical CPUs
real memory  = 1073217536 (1023 MB)
avail memory = 1041264640 (993 MB)

pcib5: ACPI PCI-PCI bridge at device 29.0 on pci4
pci5: ACPI PCI bus on pcib5
amr0: LSILogic MegaRAID 1.51 mem 0xfe20-0xfe20 irq 96 at
device 1.0 on pci5
amr0: LSILogic MegaRAID SCSI 320-1 Firmware 1L37, BIOS G119, 64MB RAM
pci4: base peripheral, interrupt controller at device 30.0 (no driver
attached)
pcib6: ACPI PCI-PCI bridge at device 31.0 on pci4
pci6: ACPI PCI bus on pcib6
ahd0: Adaptec AIC7902 Ultra320 SCSI adapter port
0x4400-0x44ff,0x4000-0x40ff mem 0xfc40-0xfc401fff irq 76 at device
2.0 on pci6
ahd0: [GIANT-LOCKED]
aic7902: Ultra320 Wide Channel A, SCSI Id=7, PCI-X 101-133Mhz, 512 SCBs
ahd1: Adaptec 

Re: LSI Megaraid (amr) performance woes

2006-02-23 Thread Kris Kennaway
On Thu, Feb 23, 2006 at 03:41:06PM -0500, Sven Willenberger wrote:
 I am having some issues getting any (write) performance out of an LSi
 Megaraid (320-1) SCSI raid card (using the amr driver). The system is an
 i386 (p4 xeon) with on-board adaptec scsi controllers and a SUPER GEM318
 Saf-te backplane with 6 ea 146GB U320 10k rpm Hitachi drives.

Try again with 6.1, performance should be much better.

Kris




Re: graid3 data corruption?!?

2006-02-23 Thread Ruslan Ermilov
On Thu, Feb 23, 2006 at 01:31:16PM +0100, Michael Reifenberger wrote:
 Hi,
 I have 5 FireWire disks in one graid3 set,
 and am using a fresh STABLE on SMP with a dual AMD64 in i386 mode.
 
 While doing an MD5 checksum of all files in the filesystem
 (~770GB of data) one disk died. graid3 did the right thing
 and disconnected the disk.
 
 BUT:
 after diffing the md5sums of the files, one large file (probably the one
 being checked during the disk failure) had a different md5sum than
 before.
 --- md5_11.log  Fri Dec  9 13:23:07 2005
 +++ md5_12.log  Wed Feb 22 18:03:03 2006
 @@ -4460,3 +4460,3 @@
  MD5 (Backup/totum/root_0_050211_i386.dmp.gz) = 
  5a3e7b03f48ea4c2cba10624edd996cf
 -MD5 (Backup/totum/root_0_050715.dmp.gz) = 0e154301cbec84571d1df94bf68e3d79
 +MD5 (Backup/totum/root_0_050715.dmp.gz) = 172d7c12b78f3f191c184d467e31a53c
  MD5 (RIP/.pgp/PGPMacBinaryMappings.txt) = bf1b637a3a69bcbb8d4177be46a1c3ac
 
 BUT:
 doing a fresh md5sum of the file now, in degraded mode, I again get
 the (correct) value of:
 MD5 (Backup/totum/root_0_050715.dmp.gz) = 0e154301cbec84571d1df94bf68e3d79
 
 For me this means that graid3 returned incorrect data during the disk loss.
 This shouldn't happen!
 
 Any clues how this could happen?
 Has anyone else seen this behaviour?
 
Yes, we saw it too.  We hope that the last commit to g_raid3.c (in
HEAD) may fix it as well.


Cheers,
-- 
Ruslan Ermilov
[EMAIL PROTECTED]
FreeBSD committer




Re: HT1000 serial ATA support

2006-02-23 Thread Kevin Oberman
 Date: Thu, 23 Feb 2006 21:04:58 +0100
 From: Tomas Randa [EMAIL PROTECTED]
 Sender: [EMAIL PROTECTED]
 
 Hello everybody,
 
 I bought a very nice motherboard, a SuperMicro H8SSL-i, but the onboard serial
 ATA is only supported in FreeBSD 6.0 as Generic, so only UDMA33 is possible.
 
 I would like to ask here what I can do to make this device fully functional
 at full speed.

You may have already tried, but can you set it to a higher speed with
atacontrol? I have never tried this with a SATA drive, so I don't know
if it will help.
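
Something along these lines, as a sketch (the device name and mode below are
assumptions, and the exact atacontrol arguments differ between releases, so
check atacontrol(8) on your system):

# List ATA channels and attached devices first, then query and set the
# transfer mode.  Depending on the release, the mode command takes a device
# name (e.g. ad4) or a channel number plus master/slave modes.
atacontrol list
atacontrol mode ad4
atacontrol mode ad4 UDMA100
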
-- 
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: [EMAIL PROTECTED]   Phone: +1 510 486-8634


Re: SSH login takes very long time...sometimes

2006-02-23 Thread Rostislav Krasny
On Thu, 23 Feb 2006 02:08:17 +0900
Hajimu UMEMOTO [EMAIL PROTECTED] wrote:

 Hi,
 
  On Wed, 22 Feb 2006 02:44:30 +0200
  Rostislav Krasny [EMAIL PROTECTED] said:
 
 rosti On Tue, 21 Feb 2006 19:59:59 +0300
 rosti Yar Tikhiy [EMAIL PROTECTED] wrote:
 
 rosti I forgot that a search resolver(5) parameter is useless for reverse
 rosti resolving. But that doubling of name-IP requests with an empty (or
 rosti root, according to resolver(5)) domain in the search is still a bug,
 rosti IMHO. Although it shouldn't affect the sshd.
 
 I looked at BIND9's resolver, and took the related part into our
 resolver.  However, it seems to me that there is still the same issue in
 BIND9's resolver, so I changed a bit more.  Please try the following
 patch and let me know the result:

Your patch fixed the problem, thank you. But during my tests I've found
another form of the doubling bug, in getaddrinfo(). To test the getaddrinfo()
behavior I used a program that is attached to this email.
If hints.ai_socktype == 0 then getaddrinfo() does the lookup twice,
even if DNS is reachable, and in that case the returned linked
list is twice as large as it should be. hints.ai_socktype seems
to have the same influence when hints.ai_family is PF_INET6 or
PF_UNSPEC too.

Re: AMD64 or I386

2006-02-23 Thread Albert Shih
 On 22/02/2006 at 18:29:37 -0800, Atanas wrote:
 Albert Shih said the following on 02/22/06 16:59:
 Hi all
 
 I have a very strange problem with my new servers: AMD single-core dual
 proc with 4 GB RAM.
 
 When I boot the i386 version of FreeBSD 6.0 it sees 4 GB but tells me it
 cannot access 4 GB, only 3 GB. I upgraded to FreeBSD 6-STABLE and
 nothing changed.
 
 When I boot the amd64 version of FreeBSD 6.0 it sees 5 GB (!!) but tells me
 it can access only 4 GB (well). When I upgraded to FreeBSD 6-STABLE nothing
 changed.
 
 My problem is that I need the i386 version (because I need maxima, which
 needs sbcl... and sbcl doesn't run on amd64).
 
 If I tell the kernel I have 4 GB the system doesn't boot (kernel panic).
 
 What can I do?
 
 You might need to compile a PAE enabled kernel, see the pae(4) man page 
 for more details.
 

Many thanks to all of you...

Everything works fine.

Copyright (c) 1992-2006 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
The Regents of the University of California. All rights reserved.
FreeBSD 6.1-PRERELEASE #0: Thu Feb 23 23:02:58 UTC 2006
[EMAIL PROTECTED]:/usr/obj/usr/src/sys/ISIS-PAE
ACPI APIC Table: PTLTD  APIC  
Timecounter i8254 frequency 1193182 Hz quality 0
CPU: AMD Opteron(tm) Processor 248 (2210.20-MHz 686-class CPU)
  Origin = AuthenticAMD  Id = 0x20f51  Stepping = 1
  
  Features=0x78bfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2>
  Features2=0x1<SSE3>
  AMD Features=0xe2500800<SYSCALL,NX,MMX+,b25,LM,3DNow+,3DNow>
real memory  = 5368709120 (5120 MB)
avail memory = 4182499328 (3988 MB)


Many thanks again. You saved my life (well, not exactly ;-) but thanks...)


--
Albert SHIH
Universite de Paris 7 (Denis DIDEROT)
U.F.R. de Mathematiques.
7 ième étage, plateau D, bureau 10
Heure local/Local time:
Thu Feb 23 23:15:02 CET 2006


Re: HT1000 serial ATA support

2006-02-23 Thread Mike Jakubik

Tomas Randa wrote:

Hello everybody,

I bought a very nice motherboard, a SuperMicro H8SSL-i, but the onboard serial
ATA is only supported in FreeBSD 6.0 as Generic, so only UDMA33 is possible.


I would like to ask here what I can do to make this device fully
functional at full speed.


Thanks a lot


The HT1000/2000 chipsets are not supported yet; this has already been
discussed on the -current list. Search for "ServerWorks HT1000 Chipset
SATA support".



Re: Disk I/O system hang on 5.4-RELEASE-p8 i386

2006-02-23 Thread Greg Rivers

On Thu, 23 Feb 2006, Michael R. Wayne wrote:


Been fighting this for a while.  We have an older server, running
5.4-RELEASE-p8 i386 and used primarily for email, which hangs every
couple of weeks.  The hang seems to be in the disk I/O system; pings
succeed, and I can continue to get a login: prompt on the console until
I enter a login, at which point the response stops.
[snip]


I think you're seeing the UFS deadlock I reported last November for 
RELENG_6.  See the thread beginning at 
http://lists.freebsd.org/pipermail/freebsd-stable/2005-November/019979.html


I believe this issue has made it onto the show-stopper list for 
6.1-RELEASE and is being actively worked on.


--
Greg


Re: Disk I/O system hang on 5.4-RELEASE-p8 i386

2006-02-23 Thread Kris Kennaway
On Thu, Feb 23, 2006 at 04:44:46PM -0600, Greg Rivers wrote:
 On Thu, 23 Feb 2006, Michael R. Wayne wrote:
 
 Been fighting this for a while.  We have an older server, running
 5.4-RELEASE-p8 i386 and used primarily for email, which hangs every
 couple of weeks.  The hang seems to be in the disk I/O system; pings
 succeed, and I can continue to get a login: prompt on the console until
 I enter a login, at which point the response stops.
 [snip]
 
 I think you're seeing the UFS deadlock I reported last November for 
 RELENG_6.  See the thread beginning at 
 http://lists.freebsd.org/pipermail/freebsd-stable/2005-November/019979.html
 
 I believe this issue has made it onto the show-stopper list for 
 6.1-RELEASE and is being actively worked on.

It's on the todo list, but I don't think it's being worked on yet.
The main problem is that we need a way to reproduce it on command.
I'd forgotten that snapshots are involved, so maybe it's just a matter
of running lots of mksnap_ffs while I/O is in progress.

kris




device atapicam - causes huge slowdown

2006-02-23 Thread Adam Retter
Hi Chaps I am tracking 6-STABLE,

FreeBSD funkalicious.home.dom 6.1-PRERELEASE FreeBSD 6.1-PRERELEASE #8:
Thu Feb 23 23:24:57 GMT 2006
[EMAIL PROTECTED]:/usr/obj/usr/src/sys/funkalicious  i386

I have a fairly straightforward kernel config (see below), I think, yet
if I enable device atapicam, buildkernel, installkernel and
reboot, the system starts up fine until it gets to finding disks, and
then it goes incredibly slowly: it takes about 5 minutes to get to
harvesting interrupts, and so on and so on. I think it would eventually
get to the login prompt, but I haven't been patient enough to wait
longer than 15 minutes.

Are there known problems with atapicam, or conditions under which it
causes a massive system slowdown, some sort of conflict, timeout or loop
problem maybe?

My System is -

Intel Pentium IV 3.2GHz
MSI 848P-Neo Motherboard
2GB DDR RAM (2x1GB)
1 x 120GB Maxtor SATA Hard Disk
HighPoint Tech RocketRaid 1640 RAID5 Card (with 3 x 250GB Maxtor SATA
Hard Disks attached)
NVIDIA GeForce 6800LE 256MB

I have tried booting the system with and without the HighPoint RAID
Driver (hpt374.ko = http://www.highpoint-tech.com/USA/bios_rr1640.htm)
loaded and it seems to make no difference.

If I don't use device atapicam the system is perfect, but I could
really do with enabling it for CD/DVD writing purposes...
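
Once atapicam does come up cleanly, the drive should show up as a CAM
device; a quick way to confirm that (nothing system-specific assumed here):

# With atapicam compiled in, the ATAPI CD/DVD drive appears here alongside
# the SCSI disks, and burning tools can then address it.
camcontrol devlist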


Thanks Adam.


#
Kernel config
#

makeoptions COPTFLAGS="-O2 -pipe -funroll-loops -ffast-math"

machine i386
cpu I686_CPU
ident   funkalicious

options SCHED_4BSD  # 4BSD scheduler
#options        SCHED_ULE   # ULE scheduler
options PREEMPTION  # Enable kernel thread preemption


options INET# InterNETworking
options FFS # Berkeley Fast Filesystem
options SOFTUPDATES # Enable FFS soft updates support
options UFS_ACL # Support for access control lists
options UFS_DIRHASH # Improve performance on big directories
options MSDOSFS # MSDOS Filesystem
options CD9660  # ISO 9660 Filesystem
options PROCFS  # Process filesystem (requires PSEUDOFS)
options PSEUDOFS# Pseudo-filesystem framework
options GEOM_GPT# GUID Partition Tables.
options COMPAT_43   # Compatible with BSD 4.3 [KEEP THIS!]
options COMPAT_FREEBSD4 # Compatible with FreeBSD4
options COMPAT_FREEBSD5 # Compatible with FreeBSD5
options SCSI_DELAY=5000 # Delay (in ms) before probing SCSI
options SYSVSHM # SYSV-style shared memory
options SYSVMSG # SYSV-style message queues
options SYSVSEM # SYSV-style semaphores
options _KPOSIX_PRIORITY_SCHEDULING # POSIX P1003_1B real-time extensions
options ADAPTIVE_GIANT  # Giant mutex is adaptive.

device  apic# I/O APIC

# Bus support.
device  eisa
device  pci

# Floppy drives
device  fdc

# ATA and ATAPI devices
device  ata
device  atadisk # ATA disk drives
device  ataraid # ATA RAID drives
device  atapicd # ATAPI CDROM drives
options ATA_STATIC_ID   # Static device numbering

# ATAPI - SCSI Interface, mainly for cdrecord 
#device atapicam        # causes boot problems?!? (huge system slowdown)

# SCSI peripherals
device  scbus   # SCSI bus (required for SCSI)
device  da  # Direct Access (disks)
device  pass# Passthrough device (direct SCSI access)
device  ses # SCSI Environmental Services (and SAF-TE)
device  cd  # CD

# atkbdc0 controls both the keyboard and the PS/2 mouse
device  atkbdc  # AT keyboard controller
device  atkbd   # AT keyboard
device  psm # PS/2 mouse

device  vga # VGA video card driver

device  splash  # Splash screen and screen saver support

# syscons is the default console driver, resembling an SCO console
device  sc

# Enable this for the pcvt (VT220 compatible) console driver
#device vt
#options        XSERVER # support for X server on a vt console
#options        FAT_CURSOR  # start with block cursor

#Disabled so we use nvidias own agp driver
#device agp # support several AGP chipsets

# Floating point support - do not disable.
device  npx

# Add suspend/resume support for the i8254.
device  pmtimer

# Serial (COM) ports
device  sio # 8250, 16[45]50 based serial ports

# Parallel port
device  ppc
device  ppbus   # Parallel port bus (required)
device  lpt # Printer

Re: Disk I/O system hang on 5.4-RELEASE-p8 i386

2006-02-23 Thread Greg Rivers

On Thu, 23 Feb 2006, Kris Kennaway wrote:


I believe this issue has made it onto the show-stopper list for
6.1-RELEASE and is being actively worked on.


It's on the todo list, but I don't think it's being worked on yet.
The main problem is that we need a way to reproduce it on command.
I'd forgotten that snapshots are involved, so maybe it's just a matter
of running lots of mksnap_ffs while I/O is in progress.

kris



It happens with or without snapshots, but snapshots are a lot more likely 
to make it happen.  In my case, approximately 1 in 3 snapshots will do it. 
Without snapshots, I get a deadlock about every ten days in a population 
of three hosts.


Tor Egge and Don Lewis were kind enough to work with me off-list for a bit 
last December.  They analyzed several of the core files I produced and I 
think they have a fair understanding of what the problems are.  But I 
wouldn't presume to put words in their mouths; perhaps they'll give us an 
update.  I see from the todo list that Tor may already be working on the 
deadlock for amd64.


I'm at the disposal of anyone who's willing to look into this further.

--
Greg


Deadlock from fsx/mksnap_ffs

2006-02-23 Thread Kris Kennaway
On Thu, Feb 23, 2006 at 06:13:29PM -0600, Greg Rivers wrote:
 On Thu, 23 Feb 2006, Kris Kennaway wrote:
 
 I believe this issue has made it onto the show-stopper list for
 6.1-RELEASE and is being actively worked on.
 
 It's on the todo list, but I don't think it's being worked on yet.
 The main problem is that we need a way to reproduce it on command.
 I'd forgotten that snapshots are involved, so maybe it's just a matter
 of running lots of mksnap_ffs while I/O is in progress.
 
 kris
 
 
 It happens with or without snapshots, but snapshots are a lot more likely 
 to make it happen.  In my case, approximately 1 in 3 snapshots will do it. 
 Without snapshots, I get a deadlock about every ten days in a population 
 of three hosts.
 
 Tor Egge and Don Lewis were kind enough to work with me off-list for a bit 
 last December.  They analyzed several of the core files I produced and I 
 think they have a fair understanding of what the problems are.  But I 
 wouldn't presume to put words in their mouths; perhaps they'll give us an 
 update.  I see from the todo list that Tor may already be working on the 
 deadlock for amd64.

OK, I reproduced it pretty easily by running fsx against mksnap_ffs.
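
(For anyone who wants to try the same thing: fsx lives in
src/tools/regression/fsx, and the reproduction is presumably along these
lines; the mount point, file names and iteration count are made up.)

# Build fsx from the source tree, run it against a file on the test
# filesystem, and take/remove snapshots in a loop while it runs.
cd /usr/src/tools/regression/fsx && make
./fsx -N 100000 /mnt/test/fsx.dat &
while :; do
        mksnap_ffs /mnt/test /mnt/test/.snap/fsxsnap
        rm -f /mnt/test/.snap/fsxsnap
done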

  828 c95254480   579   828 0004002 [SLPQ ufs 0xcb758c28][SLP] mksnap_ffs
  820 c96d52240   684   820 0004002 [SLPQ suspfs 0xc9035cb8][SLP] fsx

db> show lockedvnods
Locked vnodes

0xcb758bd0: tag ufs, type VREG
usecount 2, writecount 1, refcount 8 mountedhere 0
flags ()
v_object 0xc9d32438 ref 1 pages 26
 lock type ufs: EXCL (count 1) by thread 0xc96d4000 (pid 820) with 1 pending
#0 0xc0521640 at lockmgr+0x584
#1 0xc058e493 at vop_stdlock+0x32
#2 0xc06f623e at VOP_LOCK_APV+0xa6
#3 0xc06635f5 at ffs_lock+0x19
#4 0xc06f623e at VOP_LOCK_APV+0xa6
#5 0xc05a6f10 at vn_lock+0xd3
#6 0xc068768c at vm_object_sync+0x16c
#7 0xc068285c at vm_map_sync+0x2a1
#8 0xc0684ba4 at msync+0x7f
#9 0xc06dd45c at syscall+0x2e7
#10 0xc06c755f at Xint0x80_syscall+0x1f

ino 282625, on dev da0s1e
VNASSERT failed
0xca1b5600: tag (null), type VMARKER
usecount 0, writecount 0, refcount 0 mountedhere 0
flags ()
db>

db> wh 828
Tracing pid 828 tid 100079 td 0xc90d6340
sched_switch(c90d6340,0,1,15c,c078b4f0) at sched_switch+0x19b
mi_switch(1,0,c0722811,1c0,2) at mi_switch+0x2d0
sleepq_switch(cb758c28,0,c0722811,212,ee12e3c4) at sleepq_switch+0x10d
sleepq_wait(cb758c28,0,c072013c,c8,0) at sleepq_wait+0x66
msleep(cb758c28,c0786b28,50,c07274f4,0) at msleep+0x302
acquire(ee12e470,60,6,b2,c90d6340) at acquire+0x88
lockmgr(cb758c28,2022,cb758c98,c90d6340,ee12e49c) at lockmgr+0x4dc
vop_stdlock(ee12e4f0,cb758c98,9,c07717e0,ee12e4f0) at vop_stdlock+0x32
VOP_LOCK_APV(c0771d20,ee12e4f0,ee12e4c8,c06f623e,ee12e4f0) at VOP_LOCK_APV+0xa6
ffs_lock(ee12e4f0,c078d0b4,ee12e4d8,2022,cb758bd0) at ffs_lock+0x19
VOP_LOCK_APV(c07717e0,ee12e4f0,c0734ad5,ee12e4f4,c055b153) at VOP_LOCK_APV+0xa6
vn_lock(cb758bd0,2022,c90d6340,c0734acc,2022) at vn_lock+0xd3
vget(cb758bd0,2022,c90d6340,482,c9035c88) at vget+0xf0
ffs_sync(c9035c00,1,c90d6340,c072a290,0) at ffs_sync+0x192
vfs_write_suspend(c9035c00,ee12e7d0,df59d408,1,c8bc9880) at 
vfs_write_suspend+0xd5
ffs_snapshot(c9035c00,cd07a480,ee12e944,6c,c106c988) at ffs_snapshot+0xd36
ffs_mount(c9035c00,c90d6340,c0728f74,33f,246) at ffs_mount+0xd21
vfs_domount(c90d6340,ca133a80,ccdda980,1211000,cc6c8e80) at vfs_domount+0x644
vfs_donmount(c90d6340,1211000,ee12eb9c,cce83880,e) at vfs_donmount+0x51f
kernel_mount(cc0bcd80,1211000,ee12ebe0,6c,bfbfe98e) at kernel_mount+0x7d
ffs_cmount(cc0bcd80,bfbfe1f0,1211000,c90d6340,c07714c0) at ffs_cmount+0x85
mount(c90d6340,ee12ed04,c0740342,3ed,c97ad630) at mount+0x1d7
syscall(3b,3b,3b,28152dd8,bfbfe190) at syscall+0x2e7
Xint0x80_syscall() at Xint0x80_syscall+0x1f
--- syscall (21, FreeBSD ELF32, mount), eip = 0x280c068f, esp = 0xbfbfe15c, ebp 
= 0xbfbfe858 ---
db> wh 820
Tracing pid 820 tid 100106 td 0xc96d4000
sched_switch(c96d4000,0,1,15c,c078bdb0) at sched_switch+0x19b
mi_switch(1,0,c0722811,1c0,0) at mi_switch+0x2d0
sleepq_switch(c9035cb8,0,c0722811,212,ee1fa940) at sleepq_switch+0x10d
sleepq_wait(c9035cb8,0,c072013c,c8,0) at sleepq_wait+0x66
msleep(c9035cb8,c9035c88,9f,c072a26e,0) at msleep+0x302
vn_start_write(cb758bd0,ee1fa990,1,3f9,c9035c00) at vn_start_write+0xe0
vnode_pager_putpages(c9d32438,ee1faa60,f,5,ee1fa9e0) at 
vnode_pager_putpages+0x77
vm_pageout_flush(ee1faa60,f,5,375,c071d6bf) at vm_pageout_flush+0x16d
vm_object_page_collect_flush(c9d32438,c11f5338,139a0,5,10) at 
vm_object_page_collect_flush+0x2f9
vm_object_page_clean(c9d32438,7,0,16,0) at vm_object_page_clean+0x1b7
vm_object_sync(c9d32438,7000,0,f000,1) at vm_object_sync+0x1e9
vm_map_sync(c9709420,2816f000,2817e000,1,0) at vm_map_sync+0x2a1
msync(c96d4000,ee1fad04,c,c96d4000,c9709420) at msync+0x7f
syscall(3b,3b,bfbf003b,ddc3,73b5) at syscall+0x2e7
Xint0x80_syscall() at Xint0x80_syscall+0x1f
--- syscall (65, FreeBSD ELF32, msync), eip = 0x280cd4cf, esp = 0xbfbfdf6c, ebp 
= 0xbfbfdfc8 ---
db>


hw.realmem on i386

2006-02-23 Thread Atanas
I'm setting up 6.1-BETA2/i386 on an AMD-based (dual Opteron 270) Tyan K8SE
S2892 motherboard with 4GB RAM. The PCI memory address range on this
board takes an entire gigabyte, leaving only 3GB of usable memory in i386
mode. The remaining part gets remapped (by the BIOS) above the 4GB limit.


On the Intel server boards the PCI range used to take only 256MB or
512MB, so I could afford to ignore that. But 1GB now seems too much, and I
decided to compile a PAE-enabled kernel and see what happens.


The PAE enabled kernel detects the full amount of RAM, boots normally 
and seems all right so far (not in production yet):


  # dmesg |grep memory
  real memory  = 5368709120 (5120 MB)
  avail memory = 4182597632 (3988 MB)

  # memcontrol list
  ...
  0x0/0x8000 BIOS write-back set-by-firmware active
  0x8000/0x4000 BIOS write-back set-by-firmware active
  0x1/0x4000 BIOS write-back set-by-firmware active

The thing that puzzles me is the sysctl hw.realmem value:

  # sysctl -a |grep hw.*mem:
  hw.physmem: 4286291968
  hw.usermem: 4106076160
  hw.realmem: 1073741824

Wasn't this supposed to be greater than both hw.physmem and hw.usermem?

Or at least this is what I see on all other (non-PAE) boxes I have:

  realmem > physmem > usermem

Here are a few examples:

Intel SE7501WV2, 4GB (-256MB), non-PAE i386:
  hw.physmem: 4017508352
  hw.usermem: 3792785408
  hw.realmem: 4026466304

Intel SE7520JR2, 4GB (-512MB), non-PAE i386:
  hw.physmem: 3749007360
  hw.usermem: 3285360640
  hw.realmem: 3757965312

AMD-based, 4GB, amd64:
  hw.physmem: 4218327040
  hw.usermem: 399648
  hw.realmem: 4227792896

I'm wondering what impact such a supposedly incorrect hw.realmem <
hw.physmem value could have, and whether the kernel options would need
to be tweaked manually in order to fix that.


I remember a case when I had to upgrade a 4.x-based box from 2GB to 4GB,
so vm.kvm_free became larger than vm.kvm_size, resulting in random
crashes (until I realized that I had to manually adjust the
KVA_PAGES kernel option, but that does not seem to be very relevant here).


I'm running 6.1-PRERELEASE (6-STABLE) from Feb 22 2006 with the
following mods to the kernel configuration files:


GENERIC:
 cpu   I486_CPU
 options   INET6   # IPv6 communications protocols
 options   SCSI_DELAY=5000 # Delay (in ms) before probing SCSI
 options   QUOTA
 options   SMP # Symmetric MultiProcessor Kernel

PAE:
 options   IPFIREWALL
 options   DUMMYNET
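
For completeness, the kernel itself would have been built the usual way; a
sketch, assuming the stock i386 PAE configuration (which includes GENERIC)
is the one in use:

# Build and install a PAE kernel from the stock config; substitute a custom
# config name if the PAE file was copied and edited instead.
cd /usr/src
make buildkernel KERNCONF=PAE
make installkernel KERNCONF=PAE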

Regards,
Atanas


Re: SSH login takes very long time...sometimes

2006-02-23 Thread Hajimu UMEMOTO
Hi,

 On Thu, 23 Feb 2006 23:57:27 +0200
 Rostislav Krasny [EMAIL PROTECTED] said:

rosti Your patch fixed the problem, thank you.

Thank you for testing.  I'll commit it later.

rosti But during my tests I've found
rosti another form of doubling bug in getaddrinfo(). To test the 
getaddrinfo()
rosti behavior I used a program that is attached to this email.
rosti If hints.ai_socktype == 0 then the getaddrinfo() does the lookup twice,
rosti even if DNS is reachable. If the latter is the case, returned linked
rosti list is twice as large than it should be. The hints.ai_socktype seems
rosti to have the same influence when hints.ai_family is PF_INET6 or
rosti PF_UNSPEC too.

No, it is expected behavior of getaddrinfo(3).  If hints.ai_socktype
is zero, getaddrinfo(3) returns the entries for all available
socktypes.  Though getaddrinfo(3) returns a doubled linked list,
it does the DNS lookup just once for all of them.  If you don't want
that, you need to specify the appropriate socktype explicitly.

Sincerely,

--
Hajimu UMEMOTO @ Internet Mutual Aid Society Yokohama, Japan
[EMAIL PROTECTED]  [EMAIL PROTECTED],jp.}FreeBSD.org
http://www.imasy.org/~ume/


Re: acpi: suspend request ignored (not ready yet)

2006-02-23 Thread Nikolay Pavlov
On Monday, 20 February 2006 at 16:19:07 +0100, Holger Kipp wrote:
 On Mon, Feb 20, 2006 at 11:56:44AM -0300, JoaoBR wrote:
  On Monday 20 February 2006 11:14, Holger Kipp wrote:
   Hello,
  
   for some time now I have a strange behaviour upon
   shutdown on my home system:
  
   Occasionally the last output of shutdown is
  
   All buffers synced
  
   but nothing afterwards. Switching off via acpi
   button then gives
  
   acpi: suspend request ignored (not ready yet)
  
  I get kind of this problem when hw.acpi.cpu.cx_lowest=C2|C3
 
 sysctl says this:
 
 hw.acpi.cpu.cx_supported: C1/0
 hw.acpi.cpu.cx_lowest: C1
 hw.acpi.cpu.cx_usage: 100.00%
 
  you may hang on there before pressing the button, after an hour or two it 
  shuts down then
 
 Hmm, I haven't waited that long yet ...
 

Hi, Holger.
I have the same issue.
Do you have the vmware3 port installed?


 Regards,
 Holger

-- 

= Best regards, Nikolay Pavlov. --- =
