Re: customizing debian apache

2001-11-06 Thread Jeff Waugh


> Look in the debian dir of the src deb.  The rules, post*, pre*, and
> apacheconfig files are all hardcoded to assume the Debian layout.

You haven't mentioned what's wrong, or what requires customisation...

> That's all fine and good, but it restricts customization.  I'm not sure
> how foobarred everything would get if a package depends on apache
> being in a certain spot, either.

The package requiring apache to be in a certain place would be foobarred, in
this instance.

Specifics! What is wrong with it?

- Jeff

-- 
   She said she loved my mind, though by most accounts I had already lost it.




Re: customizing debian apache

2001-11-06 Thread Cameron Moore
* [EMAIL PROTECTED] [2001.11.06 22:50]:
> 
> 
> > Has anyone managed to customize (as in "use your own Layout on") an
> > apache build from .deb source?  I can't stand the debian Layout and want
> > to customize it (or even use an existing layout that comes with apache).
> > The problem is that all of the build scripts and whatnot assume you use
> > the Debian layout.
> 
> Define "layout"?

See config.layout:
 #   Debian GNU policy conforming path layout.

 prefix:        /usr
 exec_prefix:   $prefix
 bindir:        $exec_prefix/bin
 sbindir:       $prefix/lib+
 libexecdir:    $exec_prefix/libexec
 mandir:        $prefix/share/man
 sysconfdir:    /etc+
 datadir:       $prefix/lib
 iconsdir:      $prefix/share/apache/icons
 htdocsdir:     $datadir/htdocs
 cgidir:        $datadir/cgi-bin
 includedir:    $prefix/include+
 localstatedir: /var
 runtimedir:    $localstatedir/run
 logfiledir:    $localstatedir/log+
 proxycachedir: $localstatedir/cache+
 
> If it's just a matter of "where served files are on the filesystem" you can
> do that very easily post-install.

Possibly, but that's not what I want to do.  If you want suEXEC, you
have to know the paths at compile-time.  It would be *much* easier to be
able to define an alternative layout and have the deb package build
properly.
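
For example (the names and paths here are just made up, not anything from the
package), I'd like to be able to drop a stanza like this into config.layout:

 <Layout Custom>
     prefix:        /opt/apache
     exec_prefix:   $prefix
     bindir:        $exec_prefix/bin
     sbindir:       $exec_prefix/sbin
     sysconfdir:    $prefix/conf
     datadir:       $prefix/share
     htdocsdir:     $datadir/htdocs
     cgidir:        $datadir/cgi-bin
     logfiledir:    /var/log/apache
     runtimedir:    /var/run
 </Layout>

and have the debian build use something like ./configure --with-layout=Custom
instead of hardcoding the Debian one.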

> I'm surprised you'd have any issues with the apache packages - they are one
> of the most well put together and administrator-friendly sets of packages
> I've ever seen.
> 
> Please point out specific issues.
> 
> - Jeff

Look in the debian dir of the src deb.  The rules, post*, pre*, and
apacheconfig files are all hardcoded to assume the Debian layout.
That's all fine and good, but it restricts customization.  I'm not sure
how foobarred everything would get if a package depends on apache being
in a certain spot, either.  Guess I need to contact the maintainer.  :-)

I mainly want to know if anyone has done this before.  If not, I'll dig
through it myself and see what happens.  ;-)  Thanks
-- 
Cameron Moore




Re: RAID & Hard disk performance

2001-11-06 Thread Dave Watkins
God.. this is turning into a war... I think this will be my last post on 
the subject

When running RAID, MTBF is not such a big deal... unless you have several 
racks of servers in 2U cases... 40-50 servers.  Would you rather drop 1 
drive every month or 1 drive every year?  In a single machine this isn't 
too much of a problem, but as numbers increase you spend more and more time 
in the server room replacing drives and rebuilding arrays.

At 03:09 PM 11/6/01 +0100, you wrote:
> On Tue, 6 Nov 2001 07:26, Dave Watkins wrote:
> > Not to start a holy war, but there are real reasons to use SCSI.
> >
> > The big ones are
> >
> > Much larger MTBF,
>
> Mean Time Between Failures is not such a big deal when you run RAID.  As long
> as you don't have two drives fail at the same time.  Cheaper IDE disks make
> RAID-10 more viable, RAID-10 allows two disks to fail at the same time as
> long as they aren't a matched pair.  So a RAID-10 of IDE disks should give
> you more safety than a RAID-5 of SCSI.
>
> > faster access times due to higher spindle speeds, better
>
> When doing some tests on a Mylex DAC 960 controller and a Dual P3-800 machine
> I found speed severely limited by the DAC.  The performance on bulk IO for
> the 10K rpm Ultra2 SCSI drives was much less than that of ATA-66 drives.

That was a problem with your controller then, not the technology and bus 
system.

For example head over to Seagate's web site:
http://www.seagate.com/support/kb/presales/performance.html
http://www.seagate.com/docs/pdf/training/SG_SCSI.pdf

You also mention on your site that a typical SCSI drive can only sustain 
30MB/sec so cannot fill a SCSI bus running at 160MB/sec.  The difference 
between SCSI and IDE is that SCSI can have multiple transfers in flight at 
once, hence a 6 drive system could easily fill the bus.  In fact with enough 
drives/channels you start filling the PCI bus and have to start looking at 
PCI 64/66.

IDE on the other hand cannot have multiple transfers at once.

You'll also find that SCSI and IDE sizes are not identical.  SCSI drives have 
approx 9GB per platter and IDE about 10GB.  You can find IDE drives in 20.4, 
30.6 etc, while SCSI drives come in 18GB, 36GB etc.

> > bus management (eg 2 drives can perform tasks at once unlike IDE), Hot
>
> See http://www.coker.com.au/~russell/hardware/46g.png for a graph of
> performance of an ATA disk on its own, two ATA disks running on separate
> busses, and two disks on the same bus.  From that graph I conclude that most
> of the performance hit of running two such drives comes from the motherboard
> bus performance not from an IDE cable.  That graph was done with an old
> kernel (about 2.4.1), I'll have to redo it with the latest kernel.
>
> Anyway, motherboards with 4 IDE buses are common now, and most servers don't
> have more than 4 drives.

I think we are talking about different ends of the spectrum.  You are 
talking about low end systems with 4 drives.  I'm talking about larger 
systems with 5 or more drives.  As an example, a 2 drive mirror array for the 
OS and a 3 drive RAID 5 array for data, or even a 0+1 array with 4 or 6 
drives.

> > Swapable (This is HUGE) and more cache on the drive.
>
> NO!  SCSI hard drives are no more swappable than ATA drives!  If you unplug
> an active SCSI bus you run the same risks of hardware damage as you do for
> ATA!
>
> Hardware support for hot-swap is more commonly available for SCSI drives than
> for ATA, but it is very pricey.

Actually hot-swap backplanes are not that much more expensive if you plan 
for them.  If you are talking about a $20,000 server, the HS backplane only 
adds $300 to that, and SCA drives are about the same price.




Re: customizing debian apache

2001-11-06 Thread Jeff Waugh


> Has anyone managed to customize (as in "use your own Layout on") an
> apache build from .deb source?  I can't stand the debian Layout and want
> to customize it (or even use an existing layout that comes with apache).
> The problem is that all of the build scripts and whatnot assume you use
> the Debian layout.

Define "layout"?

If it's just a matter of "where served files are on the filesystem" you can
do that very easily post-install.
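
Something like this (a rough sketch, assuming the stock Debian config
location) is all it takes:

 # in /etc/apache/httpd.conf
 DocumentRoot /srv/www/mysite
 <Directory /srv/www/mysite>
     Options Indexes FollowSymLinks
     AllowOverride None
 </Directory>

 # then reload apache, e.g.:  /etc/init.d/apache reload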

I'm surprised you'd have any issues with the apache packages - they are one
of the most well put together and administrator-friendly sets of packages
I've ever seen.

Please point out specific issues.

- Jeff

-- 
   This threat is very serious.




RE: RAID & Hard disk performance

2001-11-06 Thread Dave Watkins
DOA should be a non-issue from a reputable supplier.  I know we test all our 
drives before shipping any of our machines.  A few things you're forgetting 
are that traditionally SCSI drives run 24x7 until they fail, while IDE drives 
run for 8 hours a day, 5 days a week.  Also there are a lot of lower end 
servers out there with insufficient cooling, and hard drives are probably the 
first thing this will significantly damage.

Dave
At 12:46 PM 11/6/01 -0700, you wrote:
That is kind of funny, in my experience I have found that SCSI drives have a
much higher death rate than IDE drives, by far.
I just finished a project of installing 50+ servers, some with RAID
configurations, some without, all using SCSI drives.  Five were dead upon
arrival and will need to be exchanged with the vendor.  Two more died a
short time after installation.  I expect more deaths, which is why critical
systems are using RAID.  This mirrors my other experiences with SCSI as
well.  The drives just seem to die more often -- not in huge numbers, just a
few at a time.
A few months back on another project we bought about 30 IBM IDE drives for
office members, taking them off of low capacity SCSI drives.  All are okay,
no deaths, no loss of data after about a year.  This also mirrors my
previous experiences with IDE drives.  They seem to be more rugged.  Western
Digital, and older Maxtor make up the majority of my IDE death experiences.
My only explanation for this is the higher spindle speeds, the push for
speed on SCSI drives, and the lower quantities produced versus IDE.
That might go against logic, but it is what I have experienced.

# Jesse Molina  lanner, Snow
# Network Engineer  Maximum Charisma Studios Inc.
# [EMAIL PROTECTED]1.303.432.0286
# end of sig
> -Original Message-
> From: Dave Watkins [mailto:[EMAIL PROTECTED]
> Sent: Monday, November 05, 2001 11:27 PM
> To: debian-isp@lists.debian.org
> Subject: Re: RAID & Hard disk performance
>
>
> Not to start a holy war, but there are real reasons to use SCSI.
>
> The big ones are
>
> Much larger MTBF, faster access times due to higher spindle
> speeds, better
> bus management (eg 2 drives can perform tasks at once unlike
> IDE), Hot
> Swapable (This is HUGE) and more cache on the drive.
>
> I'll stop now before I start that war :-)
>
> Dave
>
> At 11:20 AM 11/4/01 +1100, you wrote:
> >
> >
> > > There's a number of guides that tell you about hdparm and
> what DMA is,
> > but if
> > > you already know that stuff then there's little good
> documentation.
> >
> >"Oh bum." :)
> >
> > > Then on the rare occasions that I do meet people who know
> this stuff
> > > reasonably well they seem to spend all their time trying
> to convince me
> > that
> > > SCSI is better than IDE (regardless of benchmark results).  :(
> >
> >Heh, there's a religious war waiting to happen.
> >
> > > > [1] http://people.redhat.com/alikins/system_tuning.html
> >
> >I've just found that iostat (in unstable's sysstat package) supports
> >extended I/O properties in /proc if you have sct's I/O
> monitoring patches.
> >Unfortunately, the last one on his ftp site is for
> 2.3.99-preBlah. I sent an
> >email to lkml last night to see if there's a newer patch -
> I'll follow up
> >here if so.
> >
> >Thanks Russell,
> >
> >- Jeff
> >
> >--
> >Wars end, love lasts.
> >
> >



customizing debian apache

2001-11-06 Thread Cameron Moore
Has anyone managed to customize (as in "use your own Layout on") an
apache build from .deb source?  I can't stand the debian Layout and want
to customize it (or even use an existing layout that comes with apache).
The problem is that all of the build scripts and whatnot assume you use
the Debian layout.

I'm not familiar enough with the debian build process to start trying to
hack up a "neutral" build scheme yet, so I'm hoping someone has some
insight into how to attack this problem.

BTW, I'm using woody/testing.
-- 
Cameron Moore




RE: RAID & Hard disk performance

2001-11-06 Thread Jesse Molina

That is kind of funny, in my experience I have found that SCSI drives have a
much higher death rate than IDE drives, by far.

I just finished a project of installing 50+ servers, some with RAID
configurations, some without, all using SCSI drives.  Five were dead upon
arrival and will need to be exchanged with the vendor.  Two more died a
short time after installation.  I expect more deaths, which is why critical
systems are using RAID.  This mirrors my other experiences with SCSI as
well.  The drives just seem to die more often -- not in huge numbers, just a
few at a time.

A few months back on another project we bought about 30 IBM IDE drives for
office members, taking them off of low capacity SCSI drives.  All are okay,
no deaths, no loss of data after about a year.  This also mirrors my
previous experiences with IDE drives.  They seem to be more rugged.  Western
Digital, and older Maxtor make up the majority of my IDE death experiences.

My only explanation for this is the higher spindle speeds, the push for
speed on SCSI drives, and the lower quantities produced versus IDE.

That might go against logic, but it is what I have experienced.



# Jesse Molina  lanner, Snow
# Network Engineer  Maximum Charisma Studios Inc.
# [EMAIL PROTECTED] 1.303.432.0286
# end of sig


> -Original Message-
> From: Dave Watkins [mailto:[EMAIL PROTECTED]
> Sent: Monday, November 05, 2001 11:27 PM
> To: debian-isp@lists.debian.org
> Subject: Re: RAID & Hard disk performance
> 
> 
> Not to start a holy war, but there are real reasons to use SCSI.
> 
> The big ones are
> 
> Much larger MTBF, faster access times due to higher spindle 
> speeds, better 
> bus management (eg 2 drives can perform tasks at once unlike 
> IDE), Hot 
> Swapable (This is HUGE) and more cache on the drive.
> 
> I'll stop now before I start that war :-)
> 
> Dave
> 
> At 11:20 AM 11/4/01 +1100, you wrote:
> >
> >
> > > There's a number of guides that tell you about hdparm and 
> what DMA is, 
> > but if
> > > you already know that stuff then there's little good 
> documentation.
> >
> >"Oh bum." :)
> >
> > > Then on the rare occasions that I do meet people who know 
> this stuff
> > > reasonably well they seem to spend all their time trying 
> to convince me 
> > that
> > > SCSI is better than IDE (regardless of benchmark results).  :(
> >
> >Heh, there's a religious war waiting to happen.
> >
> > > > [1] http://people.redhat.com/alikins/system_tuning.html
> >
> >I've just found that iostat (in unstable's sysstat package) supports
> >extended I/O properties in /proc if you have sct's I/O 
> monitoring patches.
> >Unfortunately, the last one on his ftp site is for 
> 2.3.99-preBlah. I sent an
> >email to lkml last night to see if there's a newer patch - 
> I'll follow up
> >here if so.
> >
> >Thanks Russell,
> >
> >- Jeff
> >
> >--
> >Wars end, love lasts.
> >
> >




Re: CVS performance issue

2001-11-06 Thread Alson van der Meulen
On Tue, Nov 06, 2001 at 04:55:13PM +0100, Amaya wrote:
> We are intensively using CVS at work. I am starting to get these error
> messages:
> 
> Nov 6 16:45:27 correo kernel: Out of Memory: Killed process 22198 (cvs).
> 
> It is set up as a pserver on the mail server, users mapped to cvsowner,
> very typical setup. 
> 
> correo:/var/log# uname -a
> Linux correo 2.4.3 #1 Tue Jul 10 17:44:34 CEST 2001 i686 unknown
You might want to try a newer kernel (2.4.13/2.4.14 or possibly an -ac,
a matter of personal preference). A lot of VM issues have been fixed in
both the van Riel and aa VMs.
> 
> correo:/var/log# free -m
>              total       used       free     shared    buffers     cached
> Mem:           186         81        105          0          3         39
> -/+ buffers/cache:          38        148
> Swap:          125         12        113
> 
> correo:/var/log# w | head -1 
> 5:00pm  up 71 days,  3:16,  3 users,  load average: 1.26, 1.23, 1.43
HTH,
Alson
-- 
,---.
> Name:   Alson van der Meulen  <
> Personal:[EMAIL PROTECTED]<
> School:   [EMAIL PROTECTED]<
`---'
I have never seen it do *that* before...
-




sendmail

2001-11-06 Thread Matt Fair
Hi,
I am getting "Warning: .cf file is out of date: sendmail 8.9.3 supports 
version 8, .cf file is version 7".  Is there a way to upgrade the .cf file, 
or do I need to regenerate it?  How would I do this, just 
dpkg-reconfigure sendmail?  Would I need to back up my virtualusertable 
before I do that?
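
I'm guessing the answer involves regenerating it from the .mc source with m4,
something like this (paths are a guess on my part, and I'm not sure whether
the Debian .mc already pulls in the m4 macros it needs):

 cp /etc/mail/sendmail.cf /etc/mail/sendmail.cf.bak   # keep a backup
 m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
 /etc/init.d/sendmail restart

or maybe sendmailconfig does all of that for me?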
Thanks,
Matt




CVS performance issue

2001-11-06 Thread Amaya
We are intensively using CVS at work. I am starting to get these error
messages:

Nov 6 16:45:27 correo kernel: Out of Memory: Killed process 22198 (cvs).

It is set up as a pserver on the mail server, users mapped to cvsowner,
very typical setup. 
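
By "typical" I mean the usual inetd-style pserver entry, roughly (the
repository path is just an example):

 cvspserver stream tcp nowait root /usr/bin/cvs cvs -f --allow-root=/var/lib/cvs pserver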

correo:/var/log# uname -a
Linux correo 2.4.3 #1 Tue Jul 10 17:44:34 CEST 2001 i686 unknown

correo:/var/log# free -m
                   total       used       free     shared    buffers     cached
Mem:                 186         81        105          0          3         39
-/+ buffers/cache:                38        148
Swap:                125         12        113

correo:/var/log# w | head -1 
5:00pm  up 71 days,  3:16,  3 users,  load average: 1.26, 1.23, 1.43
  
Any hint?

-- 
Open your mind, and your ass will follow- Michael Balzary, aka Flea, RHCP

 Amaya Rodrigo Sastre   www.andago.com  Sta Engracia, 54  28010 Madrid
 BOFH-dev && CVS Evangelist Tfn: 912041124Fax: 91204
 Listening to:  - Madness - Our House




Re: Journaling FS for Production Systems

2001-11-06 Thread Amaya
Paul Fleischer dijo:
> Hard to say, however, I have had some serious crashes with reiserfs. 

So have I.

> At one point it blew my partition into pieces, and a reinstall was
> needed (reiserfs from kernel 2.4.8). 

Reiserfs used to be stable enough and the performance overhead was not
really noticeable (I'd recommend 2.4.7 and 2.4.9; never 2.4.5).  

I had the great idea (tm) of rebooting with a previous kernel version, 
and the journal was lost, mangled or whatever. So was my data. MP3s
became HTMLs and so on. There were files I couldn't ls or delete, even
as root, but I could move them around. Weird.

Reinstall and back to ext2. No big deal, I find it stable enough ;-) and
don't trust journaling anymore ;-) This is my little trauma.

Please share positive journaling experiences so that I can overcome it
:-)

-- 
Open your mind, and your ass will follow- Michael Balzary, aka Flea, RHCP

 Amaya Rodrigo Sastre   www.andago.com  Sta Engracia, 54  28010 Madrid
 BOFH-dev && CVS Evangelist Tfn: 912041124Fax: 91204
 Listening to: James Brown - I got you (I feel good)




RE: [OT] Dreamweaver + CVS

2001-11-06 Thread James
Maybe something in here could help you with dav and CVS:
http://www.cvshome.org/docs/infodav.html

Google rocks :)

- James

-Original Message-
From: Nicolas Bouthors [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 06, 2001 9:21 AM
To: debian-isp@lists.debian.org
Cc: [EMAIL PROTECTED]
Subject: [OT] Dreamweaver + CVS

Hi !
Sorry, this is not completely debian related but someone here has probably
already done what I'm trying to do.

We are looking for a way to develop our websites using CVS for version
control. The problem is that it seems Dreamweaver cannot talk to a CVS
server directly. So I looked around and I found that there is a way to tell
Dreamweaver to store files in a dav server (with mod_dav) and to configure
mod_dav so that the real storage is done in a CVS tree...

My problem is that I cannot find much documentation on how to do that
(interconnecting dav and CVS).

Did anybody do it before ?
Any docs around ?

Thanks,
Nico

--
System/Network Administrator - GHS 38, rue du Texel  75014 Paris
Tel: 01 43 21 16 66 - [EMAIL PROTECTED] - [EMAIL PROTECTED]







Re: Journaling FS for Production Systems

2001-11-06 Thread Russell Coker
On Tue, 6 Nov 2001 09:03, I. Forbes wrote:
> I am looking at moving some of our "potato" based production
> servers onto woody, and at the same time upgrading onto a
> journaling FS.
>
> I need the FS to meet the following in order of importance:
>
> -   MUST BE STABLE (our income depends on uptime!)

Now probably isn't a good time to upgrade to 2.4.x then.

2.4.9 and below have security problems.

2.4.10 has Ext2 problems.

2.4.11 was a dud version.

2.4.12 has had some bad reports.

2.4.13 and 2.4.14 have only just been released.

Solar Designer plans to add 2.4.x OpenWall support in 2.4.15...

Maybe you should wait for 2.4.15?

> -   Good performance for "Maildir" directories.  (We run Exim,
> Courier IMAP and SQWebmail as standard).

As long as there are <1000 files per directory they should all perform well.  
If large numbers of files are in a directory then look at JFS and ReiserFS.

> -   Software RAID 1 disk mirroring on IDE drives.  Something new but
> very necessary.

There's a patch to add Raidtools2 support to 2.2.19 which you should probably 
apply before the kernel upgrade.

> -   Suitable for use on a root file system on a machine with one
> partition.  - (Availability of boot/installation disks would be
> nice.  We currently do installations from 3 stiffy disks and the
> rest from the LAN using nfs/ftp/http)

Ext3 is best for that.  Do a regular Ext2 install then create the journal and 
remount!
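
Something like this (device name is just an example; you need a reasonably
recent e2fsprogs and ext3 support in the kernel):

 tune2fs -j /dev/hda1   # add a journal to the existing Ext2 filesystem
 # then change ext2 to ext3 for that filesystem in /etc/fstab and remount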

> -   File system quota support (nice but not essential).

Ext{2,3} is easiest for this.  ReiserFS and XFS apparently work.  Not sure 
about JFS.

> -   NFS support would be nice to have, but not essential.

I think that issue is pretty much solved.  It's solved for Ext{2,3} and for 
ReiserFS.

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page




Re: RAID & Hard disk performance

2001-11-06 Thread Russell Coker
On Tue, 6 Nov 2001 07:26, Dave Watkins wrote:
> Not to start a holy war, but there are real reasons to use SCSI.
>
> The big ones are
>
> Much larger MTBF,

Mean Time Between Failures is not such a big deal when you run RAID.  As long 
as you don't have two drives fail at the same time.  Cheaper IDE disks make 
RAID-10 more viable, RAID-10 allows two disks to fail at the same time as 
long as they aren't a matched pair.  So a RAID-10 of IDE disks should give 
you more safety than a RAID-5 of SCSI.
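
To put rough numbers on it: with four disks in RAID-10 (two mirrored pairs),
once one disk has died, two of the three remaining disks can still fail
without losing data; with four disks in RAID-5, any second failure loses the
array.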

> faster access times due to higher spindle speeds, better

When doing some tests on a Mylex DAC 960 controller and a Dual P3-800 machine 
I found speed severely limited by the DAC.  The performance on bulk IO for 
the 10K rpm Ultra2 SCSI drives was much less than that of ATA-66 drives.

> bus management (eg 2 drives can perform tasks at once unlike IDE), Hot

See http://www.coker.com.au/~russell/hardware/46g.png for a graph of 
performance of an ATA disk on its own, two ATA disks running on separate 
busses, and two disks on the same bus.  From that graph I conclude that most 
of the performance hit of running two such drives comes from the motherboard 
bus performance not from an IDE cable.  That graph was done with an old 
kernel (about 2.4.1), I'll have to redo it with the latest kernel.

Anyway, motherboards with 4 IDE buses are common now, and most servers don't 
have more than 4 drives.

> Swapable (This is HUGE) and more cache on the drive.

NO!  SCSI hard drives are no more swappable than ATA drives!  If you unplug 
an active SCSI bus you run the same risks of hardware damage as you do for 
ATA!

Hardware support for hot-swap is more commonly available for SCSI drives than 
for ATA, but it is very pricey.

-- 
http://www.coker.com.au/bonnie++/ Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/   Postal SMTP/POP benchmark
http://www.coker.com.au/projects.html Projects I am working on
http://www.coker.com.au/~russell/ My home page




[OT] Dreamweaver + CVS

2001-11-06 Thread Nicolas Bouthors
Hi !
Sorry, this is not completely debian related but someone here has probably
already done what I'm trying to do.

We are looking for a way to develop our websites using CVS for version
control. The problem is that it seems Dreamweaver cannot talk to a CVS
server directly. So I looked around and I found that there is a way to tell
Dreamweaver to store files in a dav server (with mod_dav) and to configure
mod_dav so that the real storage is done in a CVS tree...
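
The mod_dav half of it looks simple enough - something like this in the
Apache config (module path and directory are just examples):

 LoadModule dav_module /usr/lib/apache/1.3/libdav.so
 DAVLockDB /var/lock/apache/DAVLock
 <Directory /var/www/dav>
     DAV On
 </Directory>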

My problem is that I cannot find much documentation on how to do that
(interconnecting dav and CVS).

Did anybody do it before ?
Any docs around ?

Thanks,
Nico

--
System/Network Administrator - GHS 38, rue du Texel  75014 Paris
Tel: 01 43 21 16 66 - [EMAIL PROTECTED] - [EMAIL PROTECTED]





Re: Journaling FS for Production Systems

2001-11-06 Thread Paul Fleischer
On Tue, 2001-11-06 at 09:03, I. Forbes wrote:
> I am looking at moving some of our "potato" based production 
> servers onto woody, and at the same time upgrading onto a 
> journaling FS.
Sounds interesting.
 
> I need the FS to meet the following in order of importance:
> 
> -   MUST BE STABLE (our income depends on uptime!) 
Hard to say; however, I have had some serious crashes with reiserfs. At
one point it blew my partition into pieces, and a reinstall was needed
(reiserfs from kernel 2.4.8). 
 
> -   Must be supported in woody, without too much extra fiddling. 
I know at least that reiser and xfs are - I haven't done an installation on
xfs/ext3, but it should be easy to find some bootfloppies that do the
job.

> -   Good "power switch abuse" recoverability.  EXT2 is pretty good,
> except if you have multiple reboots, you need to run fsck
> manually (at least with the standard debian init scripts).  I
> can live with fsck, but I would prefer no manual intervention. 
I believe all of them do; it's one of the nice things about journaling
filesystems.

> -   File system quota support (nice but not essential). 
xfs and ext3 have quota support - I'm not sure about Reiser...
xfs even has acl support (which ext3 doesn't have without some
patching)...

> -   NFS support would be nice to have, but not essential. 
I might be wrong here - but I believe that NFS supports every filesystem
that the kernel supports...
 
> Without wishing to start a flame war, can anybody give me a quick 
> run-down on which of the above criteria new generation file 
> systems, like Reiser, XFS, EXT3, etc. meet?
And I can only add to this that my comments aren't meant to start any
flame war either - I'm just sharing some experience and some thoughts.

I would either go with ext3 (which is even ext2-compatible AFAIK) or
XFS. They really seem to be the most stable. Reiser is not bad, but I
have had some terrible experiences with it - however, I do still use it,
it is nice, but IMHO not suited for production systems yet (although I
believe that many people do actually use it in production).


--
Paul Fleischer // ProGuy
Registered Linux User #166300
http://counter.li.org





Re: Journaling FS for Production Systems

2001-11-06 Thread Waldemar Brodkorb
Hello,
From the keyboard of I.,

> Hello All
> 
> I am looking at moving some of our "potato" based production 
> servers onto woody, and at the same time upgrading onto a 
> journaling FS.
> 
> I need the FS to meet the following in order of importance:
> 
> -   MUST BE STABLE (our income depends on uptime!) 
> 
> -   Must be supported in woody, without too much extra fiddling. 
> 
> -   Good "power switch abuse" recoverability.  EXT2 is pretty good,
> except if you have multiple reboots, you need to run fsck
> manually (at least with the standard debian init scripts).  I
> can live with fsck, but I would prefer no manual intervention. 
> 
> -   Good performance for "Maildir" directories.  (We run Exim, 
> Courier IMAP and SQWebmail as standard). 
> 
> -   Software RAID 1 disk mirroring on IDE drives.  Something new but
> very necessary. 
> 
> -   Suitable for use on a root file system on a machine with one
> partition.  - (Availability of boot/installation disks would be
> nice.  We currently do installations from 3 stiffy disks and the
> rest from the LAN using nfs/ftp/http) 
> 
> -   File system quota support (nice but not essential). 
> 
> -   NFS support would be nice to have, but not essential. 
> 
> Without wishing to start a flame war, can anybody give me a quick 
> run-down on which of the above criteria new generation file 
> systems, like Reiser, XFS, EXT3, etc. meet?

None of them, for a production system.
If you want to set up a research machine to run some tests, then I
would suggest using ext3: 
- stable on my systems
- simple upgrading (tune2fs -j /dev/hd*, vi /etc/fstab)
- no problems with nfs (unlike reiserfs) 
- can be used as root-fs

bye
Waldemar





Journaling FS for Production Systems

2001-11-06 Thread I. Forbes
Hello All

I am looking at moving some of our "potato" based production 
servers onto woody, and at the same time upgrading onto a 
journaling FS.

I need the FS to meet the following in order of importance:

-   MUST BE STABLE (our income depends on uptime!) 

-   Must be supported in woody, without too much extra fiddling. 

-   Good "power switch abuse" recoverability.  EXT2 is pretty good,
except if you have multiple reboots, you need to run fsck
manually (at least with the standard debian init scripts).  I
can live with fsck, but I would prefer no manual intervention. 

-   Good performance for "Maildir" directories.  (We run Exim, 
Courier IMAP and SQWebmail as standard). 

-   Software RAID 1 disk mirroring on IDE drives.  Something new but
very necessary. 

-   Suitable for use on a root file system on a machine with one
partition.  - (Availability of boot/installation disks would be
nice.  We currently do installations from 3 stiffy disks and the
rest from the LAN using nfs/ftp/http) 

-   File system quota support (nice but not essential). 

-   NFS support would be nice to have, but not essential. 

Without wishing to start a flame war, can anybody give me a quick 
run-down on which of the above criteria new generation file 
systems, like Reiser, XFS, EXT3, etc. meet?

Thanks

Ian

-
Ian Forbes ZSD
http://www.zsd.co.za
Office: +27 +21 683-1388  Fax: +27 +21 64-1106
Snail Mail: P.O. Box 46827, Glosderry, 7702, South Africa
-




Re: RAID & Hard disk performance

2001-11-06 Thread Dave Watkins
Not to start a holy war, but there are real reasons to use SCSI.
The big ones are
Much larger MTBF, faster access times due to higher spindle speeds, better 
bus management (eg 2 drives can perform tasks at once unlike IDE), Hot 
Swapable (This is HUGE) and more cache on the drive.

I'll stop now before I start that war :-)
Dave
At 11:20 AM 11/4/01 +1100, you wrote:

> > There's a number of guides that tell you about hdparm and what DMA is,
> > but if you already know that stuff then there's little good documentation.
>
> "Oh bum." :)
>
> > Then on the rare occasions that I do meet people who know this stuff
> > reasonably well they seem to spend all their time trying to convince me
> > that SCSI is better than IDE (regardless of benchmark results).  :(
>
> Heh, there's a religious war waiting to happen.
>
> > > [1] http://people.redhat.com/alikins/system_tuning.html
>
> I've just found that iostat (in unstable's sysstat package) supports
> extended I/O properties in /proc if you have sct's I/O monitoring patches.
> Unfortunately, the last one on his ftp site is for 2.3.99-preBlah. I sent an
> email to lkml last night to see if there's a newer patch - I'll follow up
> here if so.
>
> Thanks Russell,
>
> - Jeff
>
> --
>    Wars end, love lasts.



Re: Survey .. how many domains do you host? (Now RAID)

2001-11-06 Thread Dave Watkins
The advantage is in building the RAID array as such.. It's much easier to 
go into a BIOS on boot and say you want these three disks in a stripe array 
than to install the raidtools package and edit /etc/raidtab. If you check 
out the Promise cards the same applies.. There was discussion in the 
hardware scene a while ago about converting a Promise Fasttrak card to a 
Supertrak card (I think those are the right names). Basically converting a 
PCI IDE card to a PCI IDE "RAID" card. It involved adding 1 resistor and 
updating the BIOS on the card.
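
For reference, the /etc/raidtab I'm talking about looks roughly like this for
a two-disk mirror (device names are just examples), followed by a
mkraid /dev/md0:

 raiddev /dev/md0
     raid-level            1
     nr-raid-disks         2
     persistent-superblock 1
     chunk-size            32
     device                /dev/hda1
     raid-disk             0
     device                /dev/hdc1
     raid-disk             1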

That shows you how little processing of RAID functions the card does. It's 
not too big of a deal, as there was also an article on Anandtech a while 
back testing how much CPU time was used when software RAID was set up 
(http://www.anandtech.com/storage/showdoc.html?i=1491&p=1). Have a read; it 
will fill in some holes and explain it better than I can here. Just 
remember the HPT is comparable to the Promise card.

Also another article you might find helpful
http://www.anandtech.com/storage/showdoc.html?i=913&p=1
Hope this helps
At 11:19 AM 11/3/01 +1100, you wrote:
Hi Dave...
Hum... if the Highpoint chipsets are merely IDE controllers... what's the
advantage to using them over the regular plain vanilla generic IDE
controller cards?
Don't they offload ANY work from the processor at ALL? They have to have
SOME sort of benefit... otherwise, why market them as RAID controllers?
Sincerely,
Jason
- Original Message -
From: "Dave Watkins" <[EMAIL PROTECTED]>
To: 
Sent: Saturday, November 03, 2001 10:07 AM
Subject: Re: Survey .. how many domains do you host? (Now RAID)
>
>
> Contrary to popular belief the Highpoint chipsets are only software
RAID.
> The driver uses processor time to actually do the RAID work. The chip is
> just an IDE controller. Based on that even if it isn't supported at a
RAID
> level you can still use the software RAID avaliable in linux as the
kernel
> has had standard IDE drivers for the highpoint for a while now
>
> Hope this helps
>
> At 08:35 AM 11/3/01 +1100, you wrote:
> >On the topic of RAID...
> >
> >does anyone know if the HighPoint RAID chipsets are supported YET?
> >
> >BSD has had support for this for ages... linux in the game yet?
> >
> >Sincerely,
> >Jason
> >
> >- Original Message -
> >From: "James Beam" <[EMAIL PROTECTED]>
> >To: 
> >Sent: Saturday, November 03, 2001 6:07 AM
> >Subject: Re: Survey .. how many domains do you host?
> >
> >
> > > Wouldn't something like this totaly depend on the hardware resources
and
> > > general config/maintenance of the server?
> > >
> > > I can tell you that one of my servers running an older copy of
> >qmail/vchkpw
> > > is running over 800 domains with lots of steam to spare (each domain
is
> > > minimal traffic). Hardware is a PIII733 w256MB ram and 30GIG EIDE
drives
> > > (promise mirror)
> > >
> > > - Original Message -
> > > From: "alexus" <[EMAIL PROTECTED]>
> > > To: "Steve Fulton" <[EMAIL PROTECTED]>; 
> > > Sent: Friday, November 02, 2001 11:49 AM
> > > Subject: Re: Survey .. how many domains do you host?
> > >
> > >
> > > > um.. m'key..
> > > >
> > > > you should've state that before so no one would get wrong thoughts
> >(like i
> > > > did)
> > > >
> > > > - Original Message -
> > > > From: "Steve Fulton" <[EMAIL PROTECTED]>
> > > > To: "alexus" <[EMAIL PROTECTED]>; 
> > > > Sent: Friday, November 02, 2001 1:58 AM
> > > > Subject: Re: Survey .. how many domains do you host?
> > > >
> > > >
> > > > > > and who are you to do such a survey?
> > > > >
> > > > >   Down boy!  Down!  LOL!
> > > > >
> > > > >   No need to snap, I'm doing this because a PROGRAM I AM WRITING
has
> > > > > VARIABLES that need to be defined to a certain array size, as
they
> >will
> > > > hold
> > > > > FQDN's.  In order to make this program universally useful, I
would
> >like
> > > to
> > > > > know the maximum number of domains that has been (realistically)
> >hosted
> > > on
> > > > > one server.
> > > > >
> > > > >   K?
> > > > >
> > > > > -- Steve
> > > > >
> > > > > http://www.zentek-international.com/
> > > >
> > > >
> > >
> > >
> >
> >