offer for bugtracking

2005-09-14 Thread Ben Cadieux
Hi,

I've been reading the DragonFly digest and noticed the conversations
about bug tracking.  I realize they were on the kernel list (I'm not
subscribed to it, and don't really wish to be).

If you don't find what you're looking for, I'll code one in PHP if you'd like.
 - BC



ACPI suspend/resume

2005-09-14 Thread Johannes Hofmann
I am trying to get suspend-to-RAM working on my ThinkPad. The machine
suspends fine, and even comes up again, but then stops in the kernel
debugger immediately. Here is a stack trace from gdb:

..

#37 0xc80d4c00 in ?? ()
#38 0xdebf9988 in ?? ()
#39 0xc026ebef in cdevsw_putport (port=0xcce52700, lmsg=0x80045003)
at /usr/src/sys/kern/kern_device.c:100
#40 0xc026ebef in cdevsw_putport (port=0xc05708e0, lmsg=0xdebf99ac)
at /usr/src/sys/kern/kern_device.c:100
#41 0xc0289352 in lwkt_domsg (port=0x0, msg=0xdebf99ac) at msgport2.h:86
#42 0xc026ee5f in dev_dioctl (dev=0xcce52700, cmd=0, data=0x0, fflag=0, td=0x0)
at /usr/src/sys/kern/kern_device.c:222
#43 0xc02d9a24 in spec_ioctl (ap=0x0)
at /usr/src/sys/vfs/specfs/spec_vnops.c:370
#44 0xc02d95b0 in spec_vnoperate (ap=0x0)
at /usr/src/sys/vfs/specfs/spec_vnops.c:125
#45 0xc0405797 in ufs_vnoperatespec (ap=0x0)
at /usr/src/sys/vfs/ufs/ufs_vnops.c:2384
#46 0xc02d449a in vop_ioctl (ops=0x0, vp=0x0, command=0, data=0x0, fflag=0, 
cred=0x0, td=0x0) at /usr/src/sys/kern/vfs_vopops.c:555
#47 0xc02d3ffc in vn_ioctl (fp=0xc1744a00, com=2147766275, 
data=0xdebf9b50 "\003", td=0x0) at /usr/src/sys/kern/vfs_vnops.c:903
#48 0xc029d445 in mapped_ioctl (fd=3, com=2147766275, uspc_data=---Can't read 
userspace from dump, or kernel process---

) at file2.h:91
#49 0xc029cf7f in ioctl (uap=0x0) at /usr/src/sys/kern/sys_generic.c:392
#50 0xc0479e24 in syscall2 (frame=
  {tf_fs = 47, tf_es = 47, tf_ds = 47, tf_edi = -1077938180, tf_esi = 
-1077937904, tf_ebp = -1077938312, tf_isp = -557867660, tf_ebx = -1077938400, 
tf_edx = 2, tf_ecx = 671531072, tf_eax = 54, tf_trapno = 12, tf_err = 2, tf_eip 
= 671847888, tf_cs = 31, tf_eflags = 642, tf_esp = -1077938436, tf_ss = 47})
at /usr/src/sys/i386/i386/trap.c:1338
#51 0xc046690a in Xint0x80_syscall () at /usr/src/sys/i386/i386/exception.s:854



Any ideas how to debug this further?

  Johannes

PS: I am running 1.3.5-DEVELOPMENT


Re: More on vinum woes

2005-09-14 Thread Martin P. Hellwig

Matthew Dillon wrote:


Exactly my conclusion when I searched for a new server a while ago.
I wanted to go for the Sun Fire V20z (as some may have noticed, I tested
DragonFly on it), but I needed 300GB+ storage in a hot-swap RAID
configuration. The SAN/NAS solutions were way too expensive. But I was
thoroughly convinced by the AMD CPUs.


Because money is always an issue at my place, I decided to build the
server myself from components (after explaining the risk to my boss).
I took four 250GB 7,200 rpm SATA drives in a hot-swap RAID 1+0
configuration (FastTrak S150-SX4M-B) on an AMD64 Tyan board (2885ANRF,
dual Opteron 248).


The machine feels faster than my dual Xeon board with four 15,000 RPM
SCSI disks in RAID-5, but that might be an apples/oranges comparison.


Perhaps only stability could be a reason to stick with SCSI, as SATA
disks are in my experience more failure prone, but the hot-swap RAID 1+0
largely solves that problem for me.
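
(Worked out for reference: RAID 1+0 stripes across mirrored pairs, so
the four 250GB drives give 0.5 TB of usable space.)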


Heck, with the money I saved on 0.5 TB of storage I can almost buy a new server.

--
mph


Re: More on vinum woes

2005-09-14 Thread Matthew Dillon

:
:Dave Hayes wrote:
:
:>Might I ask the exact model number of the 3ware card(s) you use for
:>RAID 5? If I do things this way I've got to buy two at the same time,
:>and I'd like to be accurate. ;)
:>
:I've used two 6400s and one 7800 and I can only recommend AGAINST them.
:Out of those 3, two (one 6400 and the 7800) kept kicking drives out
:of arrays for no understandable reason (meaning you could rebuild the
:array with the very same drives, only to have it kick out another
:drive soon thereafter).
:
:For cards that cost $500, that's simply unacceptable to me. And the
:support didn't even answer any mails, and searches on Google showed I'm
:by no means alone with this issue (and others didn't get any help from
:3ware either, aside from a few who were told to dump the NVRAM of
:the controller, usually without a howto...).

It's going to depend on how old the drives are, for PATA IDE.  It's
only in the last few years that PATA IDE drives have settled down into
a semblance of a reliable interface standard from the drive side, but
even then I wouldn't trust PATA (parallel ATA, i.e. standard IDE cable)
for *anything* that needed serious uptime.  SATA is different, however.
At least on the drive side it's completely different.  On the motherboard
chipset side it's still a mess because the idiots are trying to maintain
compatibility with the basic IDE chipset protocols, which have been
broken from the day they were introduced.  But a 3ware SATA card will
bypass that, so it shouldn't be a problem.

Everyone I know who was using SCSI for reliability 5 years ago is now
switching to SATA, simply because they can no longer justify the
ridiculous price premium for SCSI.  Nearly all the RAID storage vendors
now have SATA/IDE support, precisely due to this change in the demand
curve.

$500 is the cost of an 8-port (900 series) SATA RAID controller.  If
I have 8 SATA drives, $500 would be a small price to pay to build a
reliable storage subsystem out of them, especially considering that it
would cost an extra $1000 if not more if those were SCSI drives instead
of SATA drives.  I'll price it out for you right now:

Maxtor 250GB SATA drive:    $100
Seagate 250GB SATA drive:   $112

Seagate Cheetah 300GB HD:   $900
Maxtor 300GB SCSI 10K rpm:  $1095
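
Per gigabyte -- worked out here for illustration, from the list prices
above -- that is roughly $0.40-0.45/GB for the SATA drives versus
$3.00-3.65/GB for the SCSI drives.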

See the problem?  And that's after spending 30 minutes trying to find
SCSI drives on the net.  I don't even believe those prices... I KNOW I
can get better prices than $900 for a 300GB SCSI drive if I spend
another hour looking.  When I researched SCSI drives last year they
were costing about a 100% premium.  Now the best deals I can find are in
the 200% range, or worse, and fewer vendors sell them.

There's no point buying SCSI any more when you can buy a second
(or third) entirely redundant RAID array full of SATA drives for the same
price.  If SCSI only cost, say, a 20% premium, I would probably
still be using it.  But it doesn't.  Even with the best deals the premium
often exceeds 100%, and short of spending a lot of time working at it
it generally exceeds 200%.

-Matt
Matthew Dillon 
<[EMAIL PROTECTED]>


Re: [OT] gcc, ssp, pie, and thunk

2005-09-14 Thread Joerg Sonnenberger
On Tue, Sep 13, 2005 at 05:31:58PM -0700, walt wrote:
> Yes, yes, I already have two different gcc's on DFly also.  The
> extra feature that gentoo's 'gcc-config' provides is an app
> called 'fix_libtool_files' (in perl, IIRC) which runs around
> looking for those god-forsaken *.la (Libtool Archive) files which
> are hard-wired to one version of g++ and patches them to reflect
> your current choice of compiler versions.

You don't fix the *libtool archives*, you fix *libtool*. That's where
the GCC-specific parts get wired in, and nowhere else.

PIE is not worth it, the performance hit is too big (since it basically
demands PIC for everything).
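
To make the PIC cost concrete, here is a minimal sketch (my own
illustration, not from the original mail): compile the same file with
and without -fPIC and compare the assembler.  On i386 the -fPIC version
routes every access to a global through the GOT and reserves %ebx as
the GOT pointer, which is the overhead PIE imposes on every object.

    /* pic_cost.c -- illustrative only.
     *   cc -O2 -S pic_cost.c          -> direct access to `counter'
     *   cc -O2 -fPIC -S pic_cost.c    -> access via the GOT, %ebx pinned
     */
    int counter;

    int
    bump(void)
    {
            return ++counter;
    }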

Joerg


net-snmp from pkgsrc ?

2005-09-14 Thread Yiorgos Adamopoulos
Has anyone succeeded in building it?  I am running 1.3.5-PREVIEW and I get:

 cc -I../../include -I. -I../../agent -I../../agent/mibgroup -I../../snmplib 
-I/usr/pkgsrc/net/net-snmp/work/.buildlink/include -DINET6 -O2 -Ddragonfly1 -c 
mibII/ipv6.c  -fPIC -DPIC -o mibII/.libs/ipv6.o
mibII/ipv6.c: In function `if_getname':
mibII/ipv6.c:531: warning: return discards qualifiers from pointer target type
mibII/ipv6.c: In function `if_getifnet':
mibII/ipv6.c:584: error: structure has no member named `if_next'
mibII/ipv6.c: In function `var_ifv6Entry':
mibII/ipv6.c:815: error: cannot convert to a pointer type
mibII/ipv6.c:836: error: structure has no member named `ifa_next'
mibII/ipv6.c: In function `var_udp6':
mibII/ipv6.c:1251: error: structure has no member named `in6p_next'
mibII/ipv6.c:1344: error: structure has no member named `in6p_next'
mibII/ipv6.c: In function `var_tcp6':
mibII/ipv6.c:1665: error: structure has no member named `in6p_next'
mibII/ipv6.c:1773: error: structure has no member named `in6p_next'
*** Error code 1
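
Those errors are the classic symptom of code written for the old
4.3BSD-style lists: mibII/ipv6.c still expects `if_next', `ifa_next'
and `in6p_next' pointers, while DragonFly (like FreeBSD) links these
structures with the queue(3) macros, so the file needs a per-platform
patch (pkgsrc apparently has none yet for the `dragonfly1' case).
Below is a self-contained sketch of the general shape of the change --
my own illustration with a made-up struct, not an actual patch:

    /* queue_walk.c -- illustration only; `struct iface' is made up. */
    #include <sys/queue.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct iface {
            char                    name[16];
            TAILQ_ENTRY(iface)      link;   /* replaces the old `if_next' */
    };

    static TAILQ_HEAD(, iface) ifaces = TAILQ_HEAD_INITIALIZER(ifaces);

    int
    main(void)
    {
            const char *names[] = { "lo0", "em0" };
            struct iface *ifp;
            size_t i;

            for (i = 0; i < 2; i++) {
                    ifp = calloc(1, sizeof(*ifp));
                    if (ifp == NULL)
                            exit(1);
                    strlcpy(ifp->name, names[i], sizeof(ifp->name));
                    TAILQ_INSERT_TAIL(&ifaces, ifp, link);
            }

            /* Old style:  for (ifp = list; ifp != NULL; ifp = ifp->if_next) */
            TAILQ_FOREACH(ifp, &ifaces, link)
                    printf("%s\n", ifp->name);
            return (0);
    }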


-- 
#include 
#define POWERED_BY "http://www.DragonFlyBSD.org/";


Re: More on vinum woes

2005-09-14 Thread Gabriel Ambuehl
Gabriel Ambuehl wrote:
[rant against 3ware]

To be fair, though, all other IDE RAID solutions I've tested (numerous
Promise and Highpoint and the odd Adaptec product, usually pseudo-RAID
though) have their share of issues as well.  I gotta admit, Linux's md
driver suite has so far impressed me much more than anything the BSDs
provide (vinum is just hopelessly complex).


Re: More on vinum woes

2005-09-14 Thread Gabriel Ambuehl
Dave Hayes wrote:

>Might I ask the exact model number of the 3ware card(s) you use for
>RAID 5? If I do things this way I've got to buy two at the same time,
>and I'd like to be accurate. ;)
>
I've used two 6400s and one 7800 and I can only recommend AGAINST them.
Out of those 3, two (one 6400 and the 7800) kept kicking drives out
of arrays for no understandable reason (meaning you could rebuild the
array with the very same drives, only to have it kick out another
drive soon thereafter).

For cards that cost $500, that's simply unacceptable to me. And the
support didn't even answer any mails, and searches on Google showed I'm
by no means alone with this issue (and others didn't get any help from
3ware either, aside from a few who were told to dump the NVRAM of
the controller, usually without a howto...).


Re: More on vinum woes

2005-09-14 Thread David Cuthbert

Matthew Dillon wrote:
I don't agree re: SCSI RAID.  It used to be true that SCSI was
superior not just in the reliability of the bus protocol but also
in the actual hardware.  I remember back in the day when Seagate
waxed poetic about all the work they did to make their SCSI drives
more robust, and I gladly paid for SCSI drives.


Well, having worked for Seagate, maybe I'm just spouting their Kool-Aid
here. :-)  But the production quality of the components which went into
SCSI drives far exceeded those for the ATA line (which was extremely
cost-sensitive compared to the SCSI line).


I'm not saying *you* should consider SCSI... just that if you're running 
something which requires serious uptime (as in it's unacceptable to have 
more than a second of downtime per year), you're pretty much looking at 
SCSI.  Actually, you're pretty much looking at an EMC box or somesuch, 
which will use SCSI only.  And having an EMC apps engineer on call 24/7.


It's simple statistics: if you need 99.999% uptime, then your components 
have to be much better, and you're going to pay a pretty penny for even 
marginal improvements.  (On the flip side, most people who say they need 
99.999% uptime suddenly don't when they find out just how expensive it 
is. :-)
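
(For scale, a back-of-the-envelope figure not in the original mail:
99.999% availability allows about 0.00001 x 365 x 24 x 60 ~ 5.3 minutes
of downtime per year; the one-second-per-year target above is tighter
still.)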



SATA is clearly just as reliable a bus protocol.


Yes and no.  ATA's protocol is reliable (even though the signalling 
sucks)... it's more that the chipset vendors (mostly) play fast and 
loose with the rules.  (I've been extremely disappointed by the 
deteriorating quality of chipsets and flagrant lack of testing.)


I've already seen one SATA setup go completely unreliable thanks to a 
chipset which had a tendency to freeze the north bus when attached to an 
NCQ SATA drive.



Also, modern drives have far
fewer moving parts and far smaller (and fewer) heads,


Smaller isn't necessarily better.  Smaller sliders (the black thing you 
can actually see) *are* good, because when they hit the disk (and they 
will, even on a disk which appears to be operating at 100%) it means 
less mass, less momentum, less debris.  The head itself, though, is also 
smaller, which is bad -- it takes less to start eroding away the GMR stripe.


I don't think there is that much of a difference in the number of moving 
parts -- in fact, IBM added more a few years back when they started 
doing load/unload of the head off the platters during power down.  (I 
think, but am not sure, that this has been pretty much replaced with 
laser texturing a zone on the platters so you can park the head there.)


> and it's hard to differentiate the robustness for commercial vs consumer
> models by using (e.g.) more robust materials because of that.


Keep in mind that some fancy, new robust materials end up not working 
out so well.  Generally, the newest technology goes to laptop drives 
first (where it's all about trying to squeeze as much as possible on 
those 2.5" platters), then to consumer desktops, then (once it's proven) 
to the enterprise lines.



Software raid is a fine solution as long as your computer doesn't
crash and as long as you have the extra cpu and bandwidth to spare.
i.e. it can handle drive faults just fine, but it isn't so good handling
operating system faults or power failures due to the lack of
battery-backed cache, and it costs a lot of extra cpu to do something
like RAID-5 in software.
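
The CPU cost mentioned above comes mostly from parity: software RAID-5
has to XOR a full stripe's worth of data for every parity block (and
small writes are worse, needing a read-modify-write of old data and old
parity).  A minimal sketch of that inner loop -- a hypothetical helper
for illustration, not code from vinum or md:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* P = D0 ^ D1 ^ ... ^ D(n-1), byte by byte over one stripe. */
    static void
    xor_parity(uint8_t *parity, uint8_t *data[], size_t ndisks,
               size_t stripe_bytes)
    {
            size_t d, i;

            memset(parity, 0, stripe_bytes);
            for (d = 0; d < ndisks; d++)
                    for (i = 0; i < stripe_bytes; i++)
                            parity[i] ^= data[d][i];
    }

    int
    main(void)
    {
            uint8_t d0[4] = { 1, 2, 3, 4 }, d1[4] = { 5, 6, 7, 8 };
            uint8_t *stripe[] = { d0, d1 };
            uint8_t p[4];

            xor_parity(p, stripe, 2, sizeof(p));
            printf("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);
            return (0);
    }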


Never had a crash with Vinum on FreeBSD 4.x; on Linux, it will rebuild 
the RAID array in the background after a crash.  (It's slow, but if you 
have the CPU to spare, you can probably afford to let it run overnight 
like me).


With all the above in mind, when I finish configuring my new server, it'll
use the exact setup you're describing: 3ware SATA RAID.  (My old server
got zapped by a direct lightning hit to my old house days before we
left... need to get into the new place before I get everything out of
storage...)