Re: How to convert BIND to TinyDNS?

2006-01-04 Thread Benson Wong
This is the cheat sheet I have at the top of my tinydns data (definition) file:

###
# TinyDNS Data File for domains hosted on our primary name server
# Notes:
#
#   Make sure all domains use [a.ns.domain.com] and [b.ns.domain.com] as the
#   authoritative name servers. This is done at the registrar.
#
#   Define the hosts and NS information here.
#   Each domain needs NS entries in order to be answered by tinydns.
#
#   Quick Reference: (see: http://cr.yp.to/djbdns/tinydns-data.html)
#   - each line starts with a special character:
#     # --> Comment
#     . --> NS record
#           use these as a default:
#             .domain.com::a.ns.domain.com
#             .domain.com::b.ns.domain.com
#     = --> An "A" record
#     + --> Alias, like an "A" record but no PTR created.
#     @ --> MX Record:
#             @domain.com:1.1.1.4:mail.domain.com:10
#               -> creates mail.domain.com with ip 1.1.1.4 as the MX,
#                  with distance 10
#     C --> CNAME Record: (CAREFUL WITH USAGE) Use to point one domain to
#           another:
#             Cdomain.com:otherdomain.ca:86400
#             Cchat.suttoncity.com:suttoncity-com.ch.outblaze.com:86400
#
#   - Wildcards
#       +*.domain.com:192.168.1.4:86400
#         - this will resolve for
#           www.domain.com, lfja.domain.com, xxx.domain.com, etc.
#
###
--

In reply to the BIND vs tinydns debate: I've been running tinydns as a DNS
server for about two years now on FreeBSD. Zero problems since I made it
live, and I run a pretty large domain.

I also use the resolver (dnscache) that comes with djbdns. Again, extremely
stable and no problems. These run on the same machines as the DNS
servers, on a different IP address, and together they answer more than
one million queries a day.

Load on the machines never goes above 0.2.

The only problem I've ever had was that the resolvers didn't have
enough RAM dedicated to them, which slowed them down. After bumping that
up, they just work.

My only complaint is that tinydns doesn't have a great way to manage a
huge number of domains, since everything goes into one large file. Of
course, that is easily dealt with by putting each domain into a
separate text file and writing a shell script that cats them all
together before regenerating the cdb binary file.
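
For what it's worth, the script doesn't need to be anything fancy.
Something like this sketch works (the domains.d/ layout and the file
names are just how I'd arrange it, not anything standard; /service/tinydns
is the usual daemontools location):

#!/bin/sh
# Rebuild the combined data file from one file per domain, then data.cdb.
ROOT=/service/tinydns/root
cat "$ROOT"/domains.d/*.tinydns > "$ROOT"/data
cd "$ROOT" && make    # the stock tinydns Makefile runs tinydns-data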

Ben.


Re: Help on bash script?

2005-08-12 Thread Benson Wong
I prefer: 

for COREFILE in `find / -type f -name core -print`
do 
  ... 
done

Wouldn't that accomplish the same thing? 
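
For completeness, a minimal sketch of that approach (paths, the log
name, and the rename scheme are made up for illustration; note it will
misbehave on file names containing whitespace):

#!/bin/sh
# Count and rename core files; the counter survives because there is no
# pipeline, and therefore no subshell.
NCOREFILES=0
SAVE_DIR=/var/crash/cores
TIMESTAMP=`date +%Y%m%d`

for COREFILE in `find / -type f -name core`
do
    NCOREFILES=$((NCOREFILES + 1))
    NEWNAME="`hostname`_core${NCOREFILES}_${TIMESTAMP}"
    echo "$NEWNAME was `ls -l $COREFILE`" >> "$SAVE_DIR/corelog"
    mv "$COREFILE" "$SAVE_DIR/$NEWNAME"
done

echo "There are $NCOREFILES core files."    # still set out here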


On 8/12/05, Ian Smith <[EMAIL PROTECTED]> wrote:
> On Fri 12 Aug 2005 09:33:54 +0800 Xu Qiang <[EMAIL PROTECTED]> wrote:
> 
>  >  find / -type f -name core -print | while read COREFILE ; do
>  >  NCOREFILES=$[ $NCOREFILES + 1 ] # a bit strange - xq
>  >  echo $NCOREFILES # xq
>  >
>  >  NEWNAME="${HOSTNAME}esscore${NCOREFILES}_${TIMESTAMP}"
>  >  # record mapping so people can go back and figure out
>  >  # where they came from
>  >  echo -e $NEWNAME "was" `ls -l $COREFILE` >> $SAVE_DIR/$CORELOG
>  >  mv $COREFILE $SAVE_DIR/$NEWNAME
>  >
>  >  echo "There are $NCOREFILES core files." # xq
>  >  done
>  >
>  >  fi
>  >
>  >  # What confused me most is the value $NCOREFILES outside
>  >  # the do-while loop (but still in this function) reverted
>  >  # back to its initial value, which seems contradictory to
>  >  # our concept of local variables. - xq
>  >  #echo $NCOREFILES
> 
> It's been pointed out that the find piped to while runs in a subshell,
> and that changes to variables within aren't seen by its parent, but one
> way around this is to avoid using a pipe, thus a subshell, for instance:
> 
> find / -type f -name "*.core" -print >tempfile
> while read COREFILE; do
> [.. stuff ..]
> NCOREFILES=$(($NCOREFILES + 1))
> done < tempfile
> echo $NCOREFILES
> 
> I use sh, not bash, but suspect that this should work in bash too.
> 
> cheers, Ian
> 
> 


-- 
blog: http://www.mostlygeek.com


Re: bge0: WatchDog Timedout -- resetting in FreeBSD 5.3

2005-08-05 Thread Benson Wong
I had this problem last week after upgrading to a newer 5.4-STABLE.
It looked like an IRQ problem, since both bge interfaces were sharing
the same IRQ. The problem went away after disabling hyper-threading in
the BIOS.

The box is a dual Xeon, so I had enabled SMP. SMP works fine, but HTT
was causing the bge timeouts and a system load about 10x higher than
normal.

Ben.


Re: FreeBSD 5.4+SMP, severe network degredation

2005-07-28 Thread Benson Wong
I've been having similar issues after upgrading to the latest
5.4-STABLE. I used to have it running 5.4-STABLE with the GENERIC PAE
kernel. I rebuilt the world and the kernel, and now my bge (1 Gbit)
interface gets a lot of "watchdog timeout -- resetting" errors.

ACPI may be enabled in my kernel as well, so perhaps that is also
affecting my server; I can't remember whether the config has ACPI
enabled. The only other difference I found between my old (PAE) kernel
config and the GENERIC one that comes with 5.4 is:

nodevice ehci (the enhanced USB driver)

I'll try it with "nodevice ehci" first and see if that fixes the
load/card issues. If not, I'll try it without ACPI.

Thanks for the info.

On 7/28/05, dpk <[EMAIL PROTECTED]> wrote:
> Woah. Well, I guess I didn't try *everything*. Removing "device acpi" from
> the kernel config leaves me with a PAE+SMP kernel that works fine. I can
> fetch files at wire speed and everything.
> 
> So, I guess this issue is closed. acpi was the ultimate culprit.
> 
> On Thu, 28 Jul 2005, dpk wrote:
> 
> > By the way, I also compared GENERIC performance against GENERIC w/
> > "options SMP" added, and had the same results.
> >
> > On Wed, 27 Jul 2005, dpk wrote:
> >
> > > We just received several SuperMicro servers, 3.0Ghz Xeon x 2, 4GB RAM.
> > > They're using the em driver and the ports are set to 1000Mbit (we also
> > > tried 100Mbit/full duplex on the card and on the switch). They're running
> > > FreeBSD 5.4.
> > >
> > > I ran a steady ping on a couple of them while they were running "GENERIC",
> > > and then rebooted them with a kernel built with the "PAE" kernel included
> > > with the installation, with "option SMP" added.
> > >
> > > The PAE-SMP-GENERIC kernel was built after cvsup'ing with "tag=RELENG_5_4"
> > > and the uname reports "5.4-RELEASE-p5".
> > >
> > > Here are the ping results:
> > >
> > > GENERIC:
> > >
> > > 117 packets transmitted, 117 packets received, 0% packet loss
> > > round-trip min/avg/max/stddev = 0.451/0.554/0.856/0.059 ms
> > >
> > > PAE-SMP-GENERIC:
> > >
> > > 102 packets transmitted, 102 packets received, 0% packet loss
> > > round-trip min/avg/max/stddev = 0.569/4.262/7.944/2.065 ms
> > >
> > > Fetching a 637MB ISO from a local server, also on 100/FDX:
> > >
> > > GENERIC:
> > >
> > > /dev/null                100% of  637 MB   10 MBps   00m00s
> > >
> > > real0m58.071s
> > > user0m1.954s
> > > sys 0m6.278s
> > >
> > > PAE-SMP-GENERIC:
> > >
> > > /dev/null                100% of  637 MB 5764 kBps   00m00s
> > >
> > > real1m53.324s
> > > user0m1.478s
> > > sys 0m5.624s
> > >
> > > Running GENERIC, systat shows about 7000 interrupts/second, and around 600
> > > interrupts/second using PAE-SMP-GENERIC, while fetch was running.
> > >
> > > I've checked the errata and hardware notes, as well as gnats, and was not
> > > able to find anything that explains or matches this behavior. We've run
> > > SMP servers for years, using 4.5-4.11, but we've never seen the network
> > > performance cut in half (or pings go up 10x).
> > >
> > > Removing "option SMP" makes the problem go away, but at a very significant
> > > performance cost obviously.
> > >
> > > Could it be something from -p5? Is this explained/examined in a PR I've
> > > missed, and if so can I add some information?
> > >
> >
> 


-- 
blog: http://www.mostlygeek.com


Load much higher after upgrading to 5.4-STABLE

2005-07-26 Thread Benson Wong
Hi, 

I upgraded from 5.4-STABLE (about 5 months old) to 5.4-STABLE as of
last Sunday. I went through the standard procedure: buildworld,
buildkernel, installkernel, installworld, mergemaster, etc.
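
For reference, this is the sequence I mean (substitute your own
KERNCONF if you don't run GENERIC):

cd /usr/src
make buildworld
make buildkernel KERNCONF=GENERIC
make installkernel KERNCONF=GENERIC
# reboot, ideally to single-user mode, then:
cd /usr/src
make installworld
mergemaster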

The system functions normally, except that load now hovers around a 2.4
average, where it used to be around 0.5. From what I can tell nothing
much has changed. The system works as an NFS server for Maildirs. It is
working normally and I see no performance problems, but the load seems
to be much higher (graphed with MRTG every 5 minutes).

Has anybody else encountered this? Can you offer any insights?

Thanks. 
Ben.

-- 
blog: http://www.mostlygeek.com


Re: installing big qmail server ... where to start?

2005-05-09 Thread Benson Wong
I told the XRAID to slice the 2 x 2.2TB arrays into 4 x 1.1TB arrays.
No problems after that.

Ben. 

On 5/6/05, Matthias F. Brandstetter <[EMAIL PROTECTED]> wrote:
> -- quoting Benson Wong --
> > There are a couple of issues you will run into here.
> >
> > 1. Mass storage. FreeBSD doesn't support file systems > 2TB, at least
> > not that I found decent documentation and support for.
> 
> How can you address data storage > 2TB then?
> Or do I have to split my storage/partition on a RAID with more than 2TB?
> 
> --
> It was the most I ever threw up, and it changed my life forever.
> 
>   -- Homer Simpson
>  Homer Goes To College
> 


-- 
blog: http://www.mostlygeek.com


Re: installing big qmail server ... where to start?

2005-05-05 Thread Benson Wong
I'm CCing this answer back to FBSD-Questions.


On 5/5/05, Odhiambo Washington <[EMAIL PROTECTED]> wrote:
> * Benson Wong <[EMAIL PROTECTED]> [20050505 02:56]: wrote:
> 
> Hi Ben,
> 
> 
> > I run a qmail-ldap installation for about 10,000 users. Each has 100MB
> > of quota. I use 2 LDAP servers, 2 qmail servers and have all the
> > Maildirs stored on a 5.6TB Xserve RAID.
> 
> I would like to ask you if you run your servers on XServe or on FreeBSD.
> I'm interested in knowing how you have setup your heterogeneous network
> to link FreeBSD to the XServe RAID.

On FreeBSD 5.4-STABLE. My FreeBSD (qmail) email servers use NFS.

> 
> 
> > There are a couple of issues you will run into here.
> >
> > 1. Mass storage. FreeBSD doesn't support file systems > 2TB, at least
> > not that I found decent documentation and support for.
> 
> So how did you go around this limitation? I am buying two Xserves with
> XServe RAID and I am positive your experience will help me alot. I've
> a great belief that it will..

The 5.6TB XRAID has 2 separate RAID arrays. After RAID-5 overhead that
becomes 2.2TB on each array. 2.2TB is 200GB over the max limit, so I
had the XRAID slice each array into 2 logical arrays. This gave me 4 x
1.1TB arrays. Each one of those mounted easily in FreeBSD. You may be
able to use vinum to glue them back into one large array, but I didn't
bother.
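
If you do want to glue them back into one volume, a vinum concat setup
is roughly the shape below. This is an untested sketch on my part
(which is why I didn't bother): the device names, the "h" partition
reserved for vinum, and the volume name are all made up, and on 5.x
gvinum(8) is the replacement that reads the same config format.

# Concatenate two vinum drives into one big volume (untested sketch).
cat > /tmp/mailstore.conf <<'EOF'
drive d0 device /dev/da0h
drive d1 device /dev/da1h
volume mailstore
  plex org concat
    sd length 0 drive d0
    sd length 0 drive d1
EOF
vinum create /tmp/mailstore.conf
newfs /dev/vinum/mailstore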

> 
> 
> > 2. Backing up 50,000 Maildirs, where each email is a separate file
> > requires something custom. I use Bacula, a network backup tool, and I
> > instruct it to do a tar-gzip of each Maildir before backup. This adds
> > a bit of overhead, and almost doubles space usage, but it sure beats
> > backing up millions of little 4K - 80K files!
> 
> You backup e-mails? That is new to me. How about if the files change as
> you run the backup?

Oh well. Maildirs don't require file locking. The Maildir may change
state during the backup, but all the emails will still be there. It's a
trade-off between having OK backups and no backups. FreeBSD's snapshots
aren't stable yet for large, TB-sized volumes (at least not that I've
read), so I don't bother using those.

> 
> 
> > 3. There is a MAJOR bug with maildirsize, the quota file. These quota
> > files go out of sync a lot. From a year of statistics about 0.1% of
> > users will likely have out of sync maildirsize files everyday. Who it
> > happens to seems to be random. I wrote a custom script that runs every
> > 15 minutes to clean up the out of sync maildirsize files.
> 
> This one I am interested in. I experience this alot with Exim. Oh, I use
> Exim and not Qmail, but the maildirsize code I believe is from the same
> source, no?
> 
I don't know if the code is the same. There's a good chance it is if
you're seeing maildirsize files that go out of sync for no good reason.

> 
> > Other than those issues my qmail-ldap installation runs super stable.
> > On the two mail servers I have serving up IMAP and POP3, their load
> > hovers around 0.1 to 0.3 barely anything at all. On my NFS server the
> > load is about 0.3... it's barely working too.
> 
> You have separated the servers for IMAP and POP3 or you are
> load-balancing?

I use djbdns and round-robin a domain name across all of my mail
servers. It is very simple to set up and works very well. Load is
distributed very evenly. Each mail server provides POP3 and IMAP;
I haven't found a need or reason to separate them.
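
In tinydns terms, round-robin is nothing more than repeated "+" lines
for the same name. A sketch with made-up IPs:

# Two A records for one name; clients get spread across both boxes.
cat >> /service/tinydns/root/data <<'EOF'
+mail.example.com:192.0.2.11:300
+mail.example.com:192.0.2.12:300
EOF
( cd /service/tinydns/root && make )    # regenerate data.cdb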

Ben. 

> 

-- 
blog: http://www.mostlygeek.com


Re: installing big qmail server ... where to start?

2005-05-04 Thread Benson Wong
I run a qmail-ldap installation for about 10,000 users. Each has 100MB
of quota. I use 2 LDAP servers, 2 qmail servers and have all the
Maildirs stored on a 5.6TB Xserve RAID.

There are a couple of issues you will run into here. 

1. Mass storage. FreeBSD doesn't support file systems > 2TB, at least
not that I found decent documentation and support for.

2. Backing up 50,000 Maildirs, where each email is a separate file,
requires something custom. I use Bacula, a network backup tool, and I
instruct it to do a tar-gzip of each Maildir before the backup. This
adds a bit of overhead and almost doubles space usage, but it sure
beats backing up millions of little 4K - 80K files!
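
The pre-backup step itself is trivial shell. Roughly (paths and Maildir
layout are illustrative; Bacula is then pointed at the staging
directory instead of the live Maildirs):

#!/bin/sh
# Tar-gzip each user's Maildir into a staging area for the nightly backup.
MAILROOT=/storage1/maildirs
STAGE=/storage1/backup-staging
cd "$MAILROOT" || exit 1
for user in *
do
    [ -d "$user/Maildir" ] || continue
    tar -czf "$STAGE/$user.tar.gz" "$user/Maildir"
done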

3. There is a MAJOR bug with maildirsize, the quota file. These quota
files go out of sync a lot. From a year of statistics, about 0.1% of
users are likely to have an out-of-sync maildirsize file on any given
day. Who it happens to seems to be random. I wrote a custom script that
runs every 15 minutes to clean up the out-of-sync maildirsize files.
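
The cleanup script isn't clever: it compares what maildirsize claims
against what is actually on disk and deletes the file when they
disagree badly, since the next delivery rebuilds it. A rough sketch
(paths, threshold, and Maildir layout are assumptions, and du counts
disk blocks rather than message bytes, so the comparison is
deliberately loose):

#!/bin/sh
# Remove maildirsize files that have drifted too far from reality.
MAILROOT=/storage1/maildirs
DRIFT=10485760    # tolerate ~10MB of disagreement
for dir in "$MAILROOT"/*/Maildir
do
    f="$dir/maildirsize"
    [ -f "$f" ] || continue
    # line 1 is the quota definition; the rest are "bytes count" deltas
    recorded=`tail -n +2 "$f" | awk '{ s += $1 } END { print s + 0 }'`
    actual=`du -sk "$dir" | awk '{ print $1 * 1024 }'`
    delta=$((recorded - actual))
    [ "$delta" -lt 0 ] && delta=$((0 - delta))
    if [ "$delta" -gt "$DRIFT" ]; then
        rm -f "$f"    # the next delivery recalculates it
    fi
done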

Other than those issues, my qmail-ldap installation runs super stable.
On the two mail servers serving up IMAP and POP3, the load hovers
around 0.1 to 0.3, barely anything at all. On my NFS server the load
is about 0.3... it's barely working too.

Hope that was helpful. 

Ben. 



On 5/4/05, Matthias F. Brandstetter <[EMAIL PROTECTED]> wrote:
> Hi all,
> 
> I have to plan and setup a mail solution for about 50.000 users, here are
> some key features requested by our customer:
> 
>  - self coded webfrontend w/ webmail and administration (filter, alias etc)
>  - 100MB quota per user
>  - autoresponder
>  - about 50.000 user
>  - online backup of data
>  - some more features for web frontend
> 
> Since I happily use qmail for some other (but smaller) installations, I
> want to try it with qmail here for this project as well. My only problem
> is, I have no clue where to start ... beginning from "should I use 2
> redundant and really strong or some more but cheaper servers?" to "which
> qmail distributions and patches should I use (ldap, mysql, ...)?" and "how
> to store data (mails) and do online backup w/o downtime?".
> 
> I know you can't give me _the_ solution for this issue, but I am thankful
> for any hints and internet links on this topic.
> 
> I am sure you guys can help me :)
> Greetings and TIA, Matthias
> 
> --
> And thank you most of all for nuclear power, which is yet to cause a
> single proven fatality, at least in this country.
> 
>   -- Homer Simpson
>  Oh Brother, Where Art Thou?
> 


-- 
blog: http://www.mostlygeek.com


Re: Where to find good/cheap tech support

2005-04-25 Thread Benson Wong
I've actually seen the ATAPI_TIMEOUT problem before, but not with an
Adaptec SCSI card. I thought I wrote it up here:
http://www.mostlygeek.com/node/22 but it looks like I didn't bother
mentioning the ATAPI_TIMEOUT part. Oops. I think I'll have to
update it.

First of all, does the system boot in Safe Mode?
If it does (mine did), the solution I found to the ATAPI_TIMEOUT
problem was to compile a kernel based on the PAE configuration
(without the PAE option). I'm not sure what exactly fixes it, but give
that a try.
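
Concretely, the kernel step was along these lines (from memory, so
treat it as a sketch; the config name is whatever you like):

cd /usr/src/sys/i386/conf
cp PAE MYKERNEL    # PAE includes GENERIC plus a trimmed device list
# edit MYKERNEL: change the ident line and comment out "options PAE",
# leaving the rest of the file alone
cd /usr/src
make buildkernel KERNCONF=MYKERNEL
make installkernel KERNCONF=MYKERNEL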

My system would hang on ATAPI_TIMEOUT, but booted in Safe Mode. The
new kernel has been running stable for weeks, no problems, rock solid.


Ben. 

On 4/25/05, Chuck Swiger <[EMAIL PROTECTED]> wrote:
> ChrisC wrote:
> [ ... ]
> > In my mind there is always the possibility of a problem being a pebkac but this
> > problem only occurs with FreeBSD. The SCSI controller works fine when I load
> > RedHat Fedora Core 3 or Windows 2000 Pro. Unfortunately I don't know much about
> > FreeBSD to do much trouble shooting myself so I might just have to go with
> > another OS on this specific server.
> 
> A system that works is more useful to you than one which doesn't-- maybe that
> would be best.
> 
> [ ... ]
> > *Ata1-master : FAILURE – ATAPI_IDENTIFY timed out
> > Waiting 15 seconds for SCSI devices to settle
> > ---Dump Card State Ends---
> > (probe29:ahd1:0:15:0) SCB0xe – timed out
> > ahd0: Issued Channel A Bus Reset 4 SCBs aborted
> > -
> > Thanks again for taking time to reply.
> 
> There are people who are a lot more expert than I at interpreting Adaptec card
> dumps lurking on these lists, but honestly, there isn't much here to go on.
> 
> My first take would have been to double-check the cabling, and retest the
> hardware in another machine.  But if the hardware seems to work using another
> OS, well, the easy answers are out.  I might try disabling your ATA controller
> entirely, if you are not using it, to remove the first error message...
> 
> --
> -Chuck
> 
> 


-- 
blog: http://benzo.tummytoons.com
site: http://www.thephpwtf.com


Re: 5.8TB RAID5 SATA Array Questions - UPDATE

2005-04-22 Thread Benson Wong
Hi Edgar, 

Good to hear you finally got it running. It sounds like you went
through the same challenges I did. I wound up getting FreeBSD
5.4-STABLE running and it's been stable for weeks. I've put it through
quite a bit of load lately and it seems to be running well.
Comments below: 

> 
> As much loved as BSD is to me... it simply just isn't up to the challenge at
> all... it's far too difficult to get in a properly working state... and the
> limitations imposed are just too difficult to overcome easily.

Sounds like you hit the same 2TB limit on both FBSD and Linux. What
were the limitations that were too difficult to overcome?

> 
> I ended up using Ubuntu which not only had all the driver support to all the
> devices and controllers... but also had little to no problem getting the system
> installed properly. It however does not like/want to boot to the array... so I
> installed additional drives (Seagate sata) and created a mirror (300GB) for
> the system to live on and bring up the array (/dev/md0) using mdadm... overall
> it was easy and nice... there are several caveats left to wrestle with.

I wonder why it wouldn't boot off your large array. It could be that
it is way too big for the old PC BIOS to recognize. I think you could
get around this by creating a small partition at the beginning of the
array. I tried this too, but no luck. My arrays were over fibre
channel, but that should have been taken care of by the FC card.

> 
> Currently although the 3ware controller can create a huge 4TB raid5 array,
> nothing exists that I am aware of that can utilize the entire container.
> Every single OS that exists seems to all share the 2TB limitations... so while
> the BIOS can "see" it... everything else will only see 2TB... this includes NFS
> on OSX (which don't get me started on the horrible implementation mistakes
> from apple and their poor NFS support... i mean NFSv4 comeon! Why is that
> hard!!)

That is strange, that OSX can't see partitions larger than 2TB over
NFS. I would assume that an OSX client talking to an XServe would be
able to see them. I haven't tested this, so I wouldn't know for sure.

I'm more curious about the 2TB limit on Linux. I figured Linux, with
its great file system support, would be able to handle a partition
larger than 2TB. What were the limitations you ran into?

> So to get past Ubuntu's 2TB problem, I created 2xRAID5 2TB (1.8TB reporting)
> containers on the array... and then using software raid... created 1xRAID0 using
> the 2xRAID5 containers... which create 1xRAID0 @4TB.

Why did software raid0 help you get over the 2TB limitation? Wouldn't
it still appear as one filesystem that is way too big to use?
Something doesn't add up here. Pun not intended. :)

> 
> Utterly horrible... probably the WORST half-assed installation imaginable... in my
> honest opinion... here are my desires.
>
I chose to break my 4.4TB system into 4 x 1.1TB arrays. This is very
well supported by FreeBSD. The downside is that I had to modify my
email system configuration and maintenance scripts to work with four
smaller arrays rather than a single large one.

I purposely avoided software RAID because it makes maintenance of the
array a lot more complex. It usually doesn't take a lot of skill or
time to fix a hardware array, but the learning curve for fixing a
software array is a lot higher. Plus, I don't think software RAID on
Linux is any good, or on FreeBSD for that matter.

> Create 1xRAID5 @ 4TB... install the OS TO the array... boot to the array and then
> share out 4TB via NFS/SMB... was that too much to ask?? Obviously it was.
> 
> So in response... I modified the requirements.
> 
> Create [EMAIL PROTECTED] an OS TO a [EMAIL PROTECTED] to the
> RAID1... and SHARE out the 4TB.

This is essentially what I did as well. Didn't know about the
limitations when I first started.

ben


Re: 5.8TB RAID5 SATA Array Questions - UPDATE

2005-04-22 Thread Benson Wong
No, that doesn't work. fdisk couldn't figure out how to partition it
correctly. Actually, it had a very hard time figuring out
cylinder/heads/sectors values that worked. I gave up on this.

I boot from a 3Ware RAID5 host array (160GB). 

2. No. I had 2.2TB arrays and I couldn't create a filesystem that big.
I split them up in hardware to 1.1TB each and created 4 x 1.1TB
arrays. No other workable solution I could find.

Ben

On 4/22/05, Edgar Martinez <[EMAIL PROTECTED]> wrote:
> Are you booting to the array? Is it over 2TB? Or are you mounting the
> array?


Re: 5.8TB RAID5 SATA Array Questions

2005-04-15 Thread Benson Wong
> 
> If your array is just going to used for one large filesystem, you can
> skip any partitioning steps and newfs the base device directly.  then
> if you decide to grow the array (and if your controller supports
> nondestructive resizing), you can use growfs to expand the filesystem
> without the extra step of manually adjusting a partition table.
> 

So you don't actually need to disklabel it?
You can just go newfs {options} /dev/da0 and it will just work? 

Hmm.. wish I had something to test that with because I thought I had
to disklabel first and then newfs it.

Ben.


Solution for ATAPI_TIMEOUT on FreeBSD 5.3 on some SuperMicro motherboards

2005-04-15 Thread Benson Wong
Hi, 

I know a few people have had ATAPI_TIMEOUT errors when installing
5.3-RELEASE on SuperMicro motherboards. I got it successfully
installed on a SuperMicro X6DHE-X8, which has dual Broadcom 5721 NICs
that are not supported in 5.3-RELEASE. I wrote up the solution here:

http://www.mostlygeek.com/node/22

Hope this helps anybody facing ATAPI_TIMEOUT problems. 

Ben.


Re: 5.8TB RAID5 SATA Array Questions

2005-04-14 Thread Benson Wong
> 
> So theoretically it should go over 1000TB…I've conducted several bastardized
> installations due to sysinstall not being able to do anything over the 2TB
> limit by creating the partition ahead of time…I am going to be attacking
> this tonight and my efforts will be primarily focused on creating one large
> 5.8TB slice….wish me luck!! 
> 
>   
> 
> PS: Muhaa haa haa! 
You're probably going to run into "boo hoo hoo hoo". Most likely you
won't be able to get over the 2TB limit. Also, don't use sysinstall; I
was never able to get it to work well, probably because my arrays were
mounted over fibre channel and fdisk craps out.

This is what I did: 

dd if=/dev/zero of=/dev/da0 bs=1k count=1   # wipe any old label
disklabel -rw da0 auto                      # write a default label
newfs /dev/da0                              # create the UFS2 filesystem

That creates one large slice, UFS2, for FreeBSD. Let me know if you get
it over 2TB; I was never able to have any luck.

Another reason you might want to avoid a super large file system is
that UFS2 is not a journaling filesystem. If the server crashes, it
will take fsck a LONG time to check all those inodes!
Ben.


Re: 5.8TB RAID5 SATA Array Questions

2005-04-14 Thread Benson Wong
Ahh, that clarifies some things.
UFS2 can handle 2^64, but disklabel and newfs might not be able to yet.
I'm not entirely sure where things are still 32-bit; I do know that
when I tried to create a 2.2TB file system with the standard FreeBSD
tools it didn't work.
Ben.

On 4/14/05, Edgar Martinez <[EMAIL PROTECTED]> wrote:
>  
>  
> 
> Benson….GREAT RESPONSE!! I Don't think I could have done any better myself.
> Although I knew most of the information you provided, it was good to know
> that my knowledge was not very far off. It's also reassuring that I'm not
> the only nut job building ludicrous systems.. 
> 
>   
> 
> Nick, I believe that we may have some minor misinformation on our hands…. 
> 
>   
> 
> I refer you both to
> http://www.freebsd.org/projects/bigdisk/ which according to
> the page… 
> 
>   
> 
> When the UFS filesystem was introduced to BSD in 1982, its use of 32 bit
> offsets and counters to address the storage was considered to be ahead of
> its time. Since most fixed-disk storage devices use 512 byte sectors, 32
> bits allowed for 2 Terabytes of storage. That was an almost un-imaginable
> quantity for the time. But now that 250 and 400 Gigabyte disks are available
> at consumer prices, it's trivial to build a hardware or software based
> storage array that can exceed 2TB for a few thousand dollars. 
> 
> The UFS2 filesystem was introduced in 2003 as a replacement to the original
> UFS and provides 64 bit counters and offsets. This allows for files and
> filesystems to grow to 2^73 bytes (2^64 * 512) in size and hopefully be
> sufficient for quite a long time. UFS2 largely solved the storage size
> limits imposed by the filesystem. Unfortunately, many tools and storage
> mechanisms still use or assume 32 bit values, often keeping FreeBSD limited
> to 2TB. 
> 
> So theoretically it should go over 1000TB…I've conducted several bastardized
> installations due to sysinstall not being able to do anything over the 2TB
> limit by creating the partition ahead of time…I am going to be attacking
> this tonight and my efforts will be primarily focused on creating one large
> 5.8TB slice….wish me luck!! 
> 
>   
> 
> PS: Muhaa haa haa! 
> 
>   
> 
>   
>  
>  
>  
> 
> From: Nick Pavlica [mailto:[EMAIL PROTECTED] 
>  Sent: Thursday, April 14, 2005 2:49 PM
>  To: Benson Wong
>  Cc: [EMAIL PROTECTED]; freebsd-questions@freebsd.org
>  Subject: Re: 5.8TB RAID5 SATA Array Questions 
>  
> 
>   
> 
> > Is there any limitations that would prevent a single volume that large?
> (if
>  > I remember there is a 2TB limit or something)
>  2TB is the largest for UFS2. 1TB is the largest for UFS1.
>  
>  Is the 2TB limit that you mention only for x86?  This file system
> comparison lists the maximum size to be much larger
> (http://en.wikipedia.org/wiki/Comparison_of_file_systems).
>  
>  --Nick 


-- 
blog: http://benzo.tummytoons.com
site: http://www.thephpwtf.com


Re: 5.8TB RAID5 SATA Array Questions

2005-04-14 Thread Benson Wong
From my experience mucking around with UFS1/UFS2, this is what I
learned. On UFS2 the largest filesystem you can have is 2TB. I tried
with 2.2TB and it wouldn't handle it.

I read somewhere that UFS2 gives you 2^31 1K blocks and UFS1 gives you
2^30 1K blocks per filesystem. That is essentially a 2TB maximum file
system for UFS2 (2^31 x 1 KB = 2^41 bytes) and a 1TB maximum for UFS1.

Ben
 
On 4/14/05, Nick Pavlica <[EMAIL PROTECTED]> wrote:
> > Is there any limitations that would prevent a single volume that large?
> (if
>  > I remember there is a 2TB limit or something)
>  2TB is the largest for UFS2. 1TB is the largest for UFS1.
>  
>  Is the 2TB limit that you mention only for x86?  This file system
> comparison lists the maximum size to be much larger
> (http://en.wikipedia.org/wiki/Comparison_of_file_systems).
>  
>  --Nick
>  


-- 
blog: http://benzo.tummytoons.com
site: http://www.thephpwtf.com


Re: 5.8TB RAID5 SATA Array Questions

2005-04-14 Thread Benson Wong
I'm halfway through a project using about the same amount of storage:
5.6TB on an attached Apple XServe RAID. After everything I have about
4.4TB of usable space, 14 x 400GB HDDs in 2 RAID5 arrays.

> All,
>
> I have a project in which I have purchased the hardware to build a massive
> file server (specifically for video). The array from all estimates will come
> in at close to 5.8TB after overheard and formatting. Questions are:
>
> What Version of BSD (5.3, 5.4, 4.X)?
If all your hardware is compatible with 5.3-RELEASE, use that. It is
quite stable. I had to upgrade through buildworld to 5.4-STABLE
because the onboard NIC didn't get recognized. Don't use 4.X, since it
doesn't support UFS2; 4.X also doesn't see partitions larger than 1TB.
I "sliced" up my XRAID so it shows 4 x 1.1TB arrays. This shows up
like this in 5.x:

/dev/da0c    1.1T     32M    996G     0%    /storage1
/dev/da2c    1.1T     27G    969G     3%    /storage3
/dev/da3c    1.1T    186M    996G     0%    /storage4
/dev/da1c    1.1T    156K    996G     0%    /storage2

These are NFS mounted, and in FBSD 4.9 they look like this:
server:/storage1   -965.4G     32M    996G     0%    /storage1
server:/storage2   -965.4G    156K    996G     0%    /storage2
server:/storage3   -965.4G     27G    969G     3%    /storage3
server:/storage4   -965.4G    186M    996G     0%    /storage4

I'm in the process of slowly migrating all the servers to 5.3.

Also, UFS2 allows for lazy inode initialization. It won't go and
allocate all the inodes at one time, only when it needs more. This is
a big time saver, because TB-sized partitions will likely have
hundreds of millions of inodes. Each one of my 1.1TB arrays has about
146M inodes!

>
> What should the stripe size be for the array for speed when laying down
> video streams?

This is more of a 3Ware RAID question. I'm not sure; use a larger
stripe size because you're likely using larger files. For the FBSD
block/fragment size I stuck with the default 16K blocks / 2K fragments,
even though 8K blocks and 1K frags would be more efficient for what I'm
using it for (Maildir storage). I did some benchmarks and 16K/2K
performed slightly better. Stick to the default.
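
For reference, block and fragment size are just newfs flags; the device
name here is illustrative:

newfs -U -b 16384 -f 2048 /dev/da0c    # 16K blocks / 2K frags (the default)
# newfs -U -b 8192 -f 1024 /dev/da0c   # smaller blocks/frags for lots of tiny files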

>
> What filesystem?
UFS2.

>
> Is there any limitations that would prevent a single volume that large? (if
> I remember there is a 2TB limit or something)
2TB is the largest for UFS2. 1TB is the largest for UFS1.

>
> The idea is to provide as much network storage as possible as fast as
> possible, any particular service? (SMB. NFS, ETC)

I share it all over NFS. I haven't done extensive testing yet, but NFS
is alright. I just made sure I have lots of NFS server processes and
tuned the client side a bit (nfsiod). I haven't tried SMB, but SMB is
usually quite slow. I would recommend using whatever your client
machines support and tuning for that.
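
"Lots of NFS server processes" just means bumping the nfsd count on the
server; the count shown here is an assumption, tune it for your load:

# /etc/rc.conf on the NFS server
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 16"    # serve UDP and TCP with 16 nfsd processes
rpcbind_enable="YES"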

>
> Raid controller: 3Ware 9500S-12MI
I use a 9500S in my system as well. These are quite slow from the
benchmarks I've read.

--
This isn't one of your questions but I'm going to share this anyways.
After building this new massive email storage system I concluded that
FreeBSD large file system support is sub-par. I love FreeBSD and I'm
running it on pretty much every server but progress on large TB file
systems is not up to snuff yet. Likely because the developers do not
have access to large expensive disk arrays and equipment. Maybe the
FreeBSD foundation can throw some $$ towards this.

If you haven't already purchased the equipment I would recommend going
with an XServe + XRAID. Mostly because it will probably be a breeze to
set up and use. The price is a premium but for a couple of extra
grand, it is worth saving the headaches of configuration.

My network is predominantly FBSD, so I chose FBSD to keep things
more homogeneous and have FBSD NFS talking to FBSD NFS. If I didn't
dislike Linux distros so much, I would probably have used Linux and
its fantastic selection of stable, modern file systems with
journaling support.

Another thing you will likely run into with FBSD is creating the
partitions. I didn't have much luck with sysinstall/fdisk to create
the large file systems. My arrays are mounted over Fibre channel so
you might have more luck. Basically I had to use disklabel and newfs
from the shell prompt. It worked, but took a few days of googling and
documentation scanning to figure it all out.

Hope that helps. Let me know if you need any more info.

Ben.


Does the mpt(4) driver work for the LSI Logic 929X chipset?

2005-03-21 Thread Benson Wong
Hi, 

I am thinking of getting an LSI Logic LSI7202XP-LC Fibre Channel card
(uses 929X). I read the hardware compatibility list for FreeBSD 5.3
and it says there is full support for the 929 chipset. I was wondering
if that also extends to the 929X as well.

The "X" means that the card uses PCI-X. I am assuming that it would
work, but a few loose assumptions has left me with a 5.6TB XServe RAID
that I can't talk to! :( I also got the Apple FC card, which uses the
929X chipset, and FreeBSD does not recognize it at all. Supposedly
Apple made some modifications to it but there's no way of seeing what
those are.

Also: when I put the card into a Windows XP machine and boot, it says
"New Hardware Found".

So if anybody know if the LSI7202XP-LC with the 929X chipset would
work, I would appreciate any experience you can share.

On a side note (sort of related), does the GENERIC kernel on the
installation ISO have the mpt(4) compiled into it?

thanks. 
Ben


Re: FreeBSD 4.10 not finding 3ware controllers

2004-10-22 Thread Benson Wong
Um, I can't do a dmesg since I can't install FBSD.
The chipset is the Intel E7520.

The BIOS does detect the cards. When booting, the cards show which
drives are attached and how they're connected to the RAID array. They
also give you the option of using Alt+3 to enter the configuration
mode.

During boot I don't actually see the twe driver being loaded. I tried
FreeBSD/amd64 5.2.1, since it supports the EM64T stuff, but it
basically crashed during installation. I'm going to try 5.2.1/i386
now.

Thanks. 
Ben.

On Fri, 22 Oct 2004 08:22:57 -0700, pete wright <[EMAIL PROTECTED]> wrote:
> On Thu, 21 Oct 2004 15:55:12 -0700, Benson Wong <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > I just got a new server with a SuperMicro X6DH8-G motherboard. it's
> > basically a dual xeon board with 2  3Ware controllers in it.
> >
> > The Controllers:
> > 8006-2LP
> > 8506-4LP
> >
> > A couple of days ago I installed 4.10 on old Dell 300S PowerEdge
> > server, with the same 3ware controllers and the installation went
> > flawlessly. However, on this machine FreeBSD can't seem to
> > autodetect the controllers.
> >
> > I'm thinking that 4.10 doesn't have support for the motherboards, but
> > I'm not entirely sure. Has anybody else experienced the same thing, or
> > have any insight into this? Does this require some kernel tweaks of
> > the installation CD?
> >
> 
> can you provide a dmesg from the machine you are having problems with.
>  i think the list will specifically be interested in the "twe" lines
> from the dmesg (i think those controllers use the twe driver not
> sure).  Also what chipset is on the mobo, i'm not familiar with it off
> the top of my head.  finally, does the BIOS detect the cards?
> 
> -p
> 
> > Thanks.
> > Ben
> >
> > --
> > blog: http://benzo.tummytoons.com
> > site: http://www.thephpwtf.com
> >
> 


-- 
blog: http://benzo.tummytoons.com
site: http://www.thephpwtf.com


FreeBSD 4.10 not finding 3ware controllers

2004-10-21 Thread Benson Wong
Hi, 

I just got a new server with a SuperMicro X6DH8-G motherboard. It's
basically a dual Xeon board with two 3Ware controllers in it.

The Controllers: 
8006-2LP
8506-4LP

A couple of days ago I installed 4.10 on an old Dell 300S PowerEdge
server with the same 3ware controllers, and the installation went
flawlessly. However, on this machine FreeBSD can't seem to
autodetect the controllers.

I'm thinking that 4.10 doesn't have support for this motherboard, but
I'm not entirely sure. Has anybody else experienced the same thing, or
have any insight into this? Does this require some kernel tweaks to
the installation CD?

Thanks. 
Ben

-- 
blog: http://benzo.tummytoons.com
site: http://www.thephpwtf.com