On Saturday 07 April 2007 04:30:09 am you wrote:
> On 4/7/07, Merp.com Volunteer <[EMAIL PROTECTED]> wrote:
> > I used the directions from eclectica here:
> > http://www.eclectica.ca/howto/openbsd-software-raid-howto.php
>
> To be blunt: you are using old (3.7) instructions that are not from
> the OpenBSD project, that involve compiling your own kernel (see the
> FAQ on that [1]), that you do not fully follow either. Why do you
> expect help on misc@ (instead of contacting the author of your
> instructions)?
>

So where can up-to-date instructions for software RAID on 4.0 be found?
Several volunteers tried several times to find a 1U rackmount server that
wasn't full of cheap parts, that we could afford, and that was supposed to
have hardware RAID.
Our current old server has a "real" SCSI RAID array, but it is now more than 6
years old and starting to show its age.

We ended up with the ASUS RS120-e3 (PA2).
The purchaser thought it had hardware RAID, but apparently only the 2U/4U
equivalents of this server do, not the 1U. So we're stuck with making the best
of what has been donated:
LSI Logic Embedded RAID (with the option to switch the jumpers to Intel Matrix,
but that's apparently even more software-dependent, with less non-Windows support).

> > My partitioning scheme is a little different, and maybe that's part
> > of the problem.
> >
> > I'm trying to have it setup as:
> > /raid0a =>      /boot
> > /raid0d =>      /
>
> Why do you want a separate /boot? If the answer to that question is:
> "It works that way on my Linux system" alarm bells should go off,
> prompting you to read the documentation. If I misinterpreted things
> here, please say so.

No, it sounds like you're spot on in that regard. I didn't set up the original
configuration in this case; it was another (less experienced) volunteer, who
has thrown up her hands in frustration and asked me to step in and take over.
We'll make the adjustment in that area easily enough. The previous servers
have all been (and currently are) Linux (Red Hat Enterprise, Gentoo, SUSE, and
I'm not sure what others are floating around on the mirrors in other countries).

A couple of volunteers agreed we might want to try OpenBSD instead. But trying
to do so has delayed this whole process by several months. We had to wait for
workaround suggestions from this list on the server software, etc. Maybe we
should just go back to (or stick with) Linux if there are still so many
hardware/software incompatibilities and issues?

Personally, I've been a vocal supporter of OpenBSD's goals (including in weekly
radio broadcasts), and have occasionally used it on desktop systems, simple
websites, and internal servers, but nothing as intricate as this server has
turned out to be. I would rather figure out the best way to implement this
with OpenBSD on the hardware we have. If that's just not going to be a
reasonable goal, it would be good to know now, rather than waste any more of
the volunteers' time on a futile effort.

>
> The 'a' partition is for your root. Using it for /boot (which is a
> single file on OpenBSD, not a directory) is bound to get you strange
> results. The raidctl(8) manual, for instance, is quite clear on that
> (see the -A root option).

Ok, roger that, no reason why that can't be corrected easily enough.
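
Just so I have it written down for the other volunteers (and please correct me
if I'm wrong), I'm assuming the corrected layout ends up looking something like
this, with no separate /boot at all; the extra partitions and mount points are
placeholders until we redo the disklabel:

  # hypothetical /etc/fstab once raid0a is the root partition
  # (/boot is just a file under / on OpenBSD, so it gets no partition)
  /dev/raid0a /     ffs rw 1 1
  /dev/raid0d /tmp  ffs rw,nodev,nosuid 1 2
  /dev/raid0e /var  ffs rw,nodev,nosuid 1 2
  /dev/raid0f /usr  ffs rw,nodev 1 2
  /dev/raid0g /home ffs rw,nodev,nosuid 1 2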

>
>
> Your easiest option would be to acquire a decent RAID card (the
> ami(4), mfi(4) or mpi(4) cards come to mind) and perform a regular

The volunteer who had the money, on a set budget, purchased the system 5
months ago. Getting a different system is not an option, and no one has any
budget for a better RAID option anytime this year, especially one that will
fit a 1U SATA configuration.
We looked at a "real" SCSI RAID setup, but 1U SCSI drives in the same capacity
range (300+ GB) were around 900+ USD each, while the SATA equivalents were
only around 90 USD each. We wish we had a larger budget to go with a setup
closer to the original server, but more up to date (it's running 18 GB SCSI
hardware RAID in 1U, but that was a 4,000+ USD VA Linux box donated long ago).

> install. Granted, doing so costs money and I do not know your budget.
> Given your sender address, the choice probably depends on the scarcer
> of the two: volunteers or money. If others will need to maintain the
> system after you're involved, spending money to save them time later
> may be well worth it.
>
Fortunately, the key volunteers (including me) have been on board for more
than a decade, and will likely continue, short of death or severe disability.


> If you want to continue on RAIDframe (which is a fine product, but
> requires more skills from you),

I would say it's more a matter of information/knowledge than skill, but
whatever the semantics, the point is understood. More importantly, we (the
volunteers working on this) need to coordinate a little better. Three
different people have been working on this in our "spare" time, which has
contributed to the confusing state of the setup. I'm going to step up and take
over to get this addressed one way or another.

> I suggest you rethink your partition 
> scheme and make raid0a the root partition.

Done. Easily enough rectified.
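
And for the record, so the others can sanity-check me, here is roughly what I
expect the RAIDframe side to look like for a two-disk RAID 1 set, based on my
reading of raidctl(8). The device names (wd0d/wd1d) are assumptions for our
SATA disks and may well differ on this box:

  # /etc/raid0.conf -- sketch only, not yet verified on our hardware
  START array
  # numRow numCol numSpare
  1 2 0

  START disks
  /dev/wd0d
  /dev/wd1d

  START layout
  # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
  128 1 1 1

  START queue
  fifo 100

  # then, roughly (again, from my reading of the man page):
  #   raidctl -C /etc/raid0.conf raid0   # force the initial configuration
  #   raidctl -I 20070407 raid0          # set a component serial (arbitrary value)
  #   raidctl -iv raid0                  # initialize the parity/mirror
  #   raidctl -A root raid0              # mark the set auto-configurable as root
  #   disklabel -E raid0                 # then make raid0a the root partition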

> In fact, I would recommend 
> starting from scratch and with the documentation to figure out a
> proper procedure. You're likely to come out with a better
> understanding of the system.

But which documentation should I be referencing, since, as you stated, the
instructions I was following are several versions out of date?

>
> Please document your entire setup (and recovery) procedure for
> posterity and fellow volunteers to come. 

Yes, two documents have been created and filled out as we progressed: one is a
text document, the other a spreadsheet, to help keep the config information
clear. It would be impossible to coordinate this between volunteers without
documentation. But because we've run into one complication with OpenBSD after
another (compared to the Linux setup on the old server), it's going to need
some cleanup to remove the parts that no longer apply.
Additionally, once the server is at "ready to launch" status, I will be making
an exact binary backup image of the system, so that in the case of a
catastrophic drive failure we can just boot from CD/DVD and perform an image
restore very quickly. Hopefully we will update that image periodically. But at
least the whole installation process won't be so hideous.
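
For that image step I'm thinking of something as simple as the following
(device names are assumptions, and I'd boot from rescue media so nothing on
the disk is mounted while copying):

  # hypothetical whole-disk image of the first drive to a mounted backup disk
  dd if=/dev/rsd0c of=/mnt/backup/merp-server.img bs=64k

  # and the corresponding restore onto a replacement drive
  dd if=/mnt/backup/merp-server.img of=/dev/rsd0c bs=64k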
> They *will* need it at some 
> point in time.

Of course.

> If you are not planning to do documentation, better 
> rethink the whole effort.

We are discussing (once again) which OS to use.
Whereas everyone was initially pretty supportive of switching to OpenBSD, at
this point only one other volunteer and I are still trying to keep everyone on
board with that option. Everyone else says, "let's just use what we know works."
So a little more constructive guidance would definitely be appreciated at
this point.

What is the best source (aside from the man pages) for directions on setting
up RAID for a server configuration like this on OpenBSD 4.0, if the eclectica
directions are too outdated?

Considering all the workarounds needed just to get Python/Zope/Plone to
install, let alone the list of other bugs and issues related to that
configuration on OpenBSD (which haven't been issues on our other BSD and *nix
setups), as well as the scattered knowledge/experience of OpenBSD in the
volunteer group (whereas most are familiar with Linux), should we just abandon
the entire effort of converting to OpenBSD and stick with Linux? Or will there
be sufficient support from the OpenBSD community to help us get through the
entire transition?

Thanks.

>
> Cheers,
>
> Rogier
>
>
> References:
> 1. OpenBSD FAQ - Why do I need a custom kernel?
> http://www.openbsd.org/faq/faq5.html#Why

I certainly never "wanted" to use a custom kernel, and I certainly don't want
to be ostracized and ridiculed (as that FAQ and you have indicated), and get
no support from the community, for using one.

So is there a better route to take (short of the expense of buying hardware
RAID) that does not require customizing the kernel? That would certainly be
preferred.
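
(For reference, and someone please correct me if I'm misreading the howto: my
understanding is that the custom kernel it has us build only adds a couple of
lines like these to a copy of the GENERIC config, so the customization is at
least small, even if it's still discouraged:

  # lines I believe the RAIDframe howtos add to the kernel config -- unverified
  option          RAID_AUTOCONFIG         # auto-configure RAIDframe sets at boot
  pseudo-device   raid            4       # RAIDframe disk driver

If there is a supported way to get the same result on a stock 4.0 kernel, that
is obviously what we would rather do.)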

The answer to this question could very well make several other decisions much
easier.

Thanks!
-- 
*** Volunteer Team for the completely non-profit, non-revenue, 
non-business-entity dedicated to 
the Middle-earth Role-playing International Community at Merp.com
"Fighting the noble battle against the dark forces, trying to keep alive, and 
growing,
the dream and joy of role-playing gaming in J.R.R. Tolkien's Middle-earth"
http://www.merp.com
Mailing list subscribe: [EMAIL PROTECTED]
IRC (Internet Relay Chat) Server: irc.merp.com (channel: #merpchat)
Yahoo=merpcom
ICQ=293-163-919
[EMAIL PROTECTED]
Alternate Email: [EMAIL PROTECTED] (in case you're blocked by our spam 
filters).

Be sure to sign up for the 3rd annual International MerpCon (2007):
July 27th, 28th, & 29th in Spokane, WA, USA.
This event is not run by merp.com, but by a different group of volunteers,
but merp.com has donated many services to help them out.
Show them your support by signing up, spreading the word, and showing up.
http://www.merpcon.com

"I would draw some of the great tales in fullness, 
and leave many only placed in the scheme, and sketched. 
The cycles should be linked to a majestic whole, 
and yet leave scope for other minds and hands, 
wielding paint and music and drama..." 
- John Ronald Reuel Tolkien, from a letter written to Milton Waldman, ca. 
1951 -
