Re: Running Bind under User Bind

2004-10-18 Thread Christofer Algotsson
>> Your bind user tries to write a file in /var/run, but it isn't allowed to.
> 
> I ran BIND this way; I seem to recall chown'ing that pid file once
> and never having a problem with it again for the lifetime of the box.

I hope I don't get heaps of flames for posting this micro-howto; hopefully
it helps.

< -- BEGIN -- >

# this is a micro-howto -- installing bind with chroot (Debian Potato)

# revision 2 - Mon,  8 Apr 2002 09:10:38 -0300

# Copyright (C) 2002 Pedro Zorzenon Neto 
#
# Permission is granted to copy, distribute and/or modify this document
# under the terms of the GNU Free Documentation License, Version 1.1
#
# You can find the license at http://www.fsf.org/licenses/fdl.html
#
# there is ABSOLUTELY NO WARRANTY about this document. I am not responsible
# for any damage it could lead to. Read it and use it at your own risk.

# This document was based on Chroot-BIND8 Howto by Scott Wunsch
# you can find it at http://www.linuxdoc.org/HOWTO/Chroot-BIND8-HOWTO.html

# software versions related to this document:
#  cat /etc/debian_version  ->  2.2
#  chroot --version  ->  chroot (GNU sh-utils) 2.0
#  named -v  ->  named 8.2.3-REL-NOESW Sat Jan 27 01:46:37 MST 2001

# ---------------------------------------------------------------------------

# install and configure bind as you would without chroot. I will assume from
# now on that you have a configured and working bind installation.

# it is a good idea to back up your bind configuration now.

# add the 'named' user that bind will run as
adduser --system --group --no-create-home named

# create the directory that bind will run chrooted.
# "/chroot/named" was used in this example
mkdir -p /chroot/named

# now, create the directory structure that bind will use
cd /chroot/named
mkdir -p dev etc lib usr/sbin var/run var/cache/bind
chown -R named.named var    # bind needs permission to write to 'var'

# if your nameserver is "secondary" or "cache" for some domains, then
# give write permission to the zone files under etc/bind so it can write the
# transferred zones, e.g.:
#
# chown named.named etc/bind/some-file
# chmod u+rw etc/bind/some-file

# create dev/null
# check the node numbers with the following command and use them below
# my system had the numbers 1 3 ...
# egrep 'makedev\ +null' /dev/MAKEDEV
mknod dev/null c 1 3
chmod ugo+rw dev/null   # dev/null needs to be writable

# copy time related files, so bind will know the timezone
cp /etc/localtime etc

# copy the 'named' entry of the group file
egrep '^named:' /etc/group > etc/group

# enable bind logging in syslog
# edit /etc/init.d/sysklogd and find a line with SYSLOGD="", change it to
# SYSLOGD="-a /chroot/named/dev/log"
# restart syslogd
/etc/init.d/sysklogd restart

# check libraries used and copy to 'lib'
# check them with "ldd /usr/sbin/named" and "ldd /usr/sbin/named-xfer"
cp /lib/libc.so.6 lib
cp /lib/ld-linux.so.2 lib

# copy executables to 'usr/sbin'
cp /usr/sbin/named usr/sbin/
cp /usr/sbin/named-xfer usr/sbin/

# move config files to chroot jail (you can use 'cp' instead of 'mv')
mv /etc/bind /chroot/named/etc/
# you can also create a link to the new place
ln -s /chroot/named/etc/bind /etc/bind

# stop bind
/etc/init.d/bind stop

# edit /etc/init.d/bind
# to the line:
#   start-stop-daemon --start --quiet --exec /usr/sbin/named
# append the following:
#   -- -u named -g named -t /chroot/named/
# to the line starting with: start-stop-daemon --stop
# change "/var/run/named.pid" to "/chroot/named/var/run/named.pid"

# now start bind chrooted
/etc/init.d/bind start

# view the log and check if it was started chrooted
grep "named" /var/log/syslog | tail -n 100 | less

# check if your bind is working and if it is... have fun :-)

# now, take a minute and write me an e-mail with your opinion
# about this document. :-)

# TODO/BUGS:
# - rewrite "restart/reload/force-reload" options in /etc/init.d/bind


< -- END -- >




Re: Running Bind under User Bind

2004-10-18 Thread Nate Campi
On Mon, Oct 18, 2004 at 04:32:04AM +0200, Ulf Volmer wrote:
> On Mon, Oct 18, 2004 at 02:46:07PM +1300, Johnno wrote:
> 
> > I am trying to get named to work under the user bind but I keep on getting
> > errors:
> > Oct 18 14:39:34 woody named[1117]: couldn't open pid file
> > '/var/run/named.pid': Permission denied
> > Oct 18 14:39:34 woody named[1117]: exiting (due to early fatal error)
> > 
> > in the /var directory
> > drwxr-xr-x4 root root 4096 Oct 18 14:39 run
> 
> Your bind user tries to write a file in /var/run, but it isn't allowed to.

I ran BIND this way; I seem to recall chown'ing that pid file once and
never having a problem with it again for the lifetime of the box.
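
(For illustration, assuming named runs as user and group "bind", that
one-time fix would be something like:

  chown bind:bind /var/run/named.pid

though it only helps as long as nothing recreates the file as root.)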
-- 
Nate

"My company doesn't know Usenet exists, and my boss would have kittens
if he thought I spoke for them. My opinions are better than theirs
anyway." - Unknown found in a .signature





Re: Mount options for Optimizing ext2/ext3 performance with Maildirs

2004-10-18 Thread Mark Bucciarelli
On Monday 18 October 2004 16:15, Ian Forbes wrote:

> What mount options give the best performance, "noatime", "data=journal"?

The fellow who runs KDE's news site recently did some investigation of 
speed / disk usage for Zope's object database vs. ext3.  He figured the 
hierarchical nature of the article and comment history could be 
represented by a file system pretty easily, so was curious how ext3 would 
fare compared to ZODB.

There were some useful comments about optimizing ext3 posted in response to 
his original post.  Some things that were mentioned:

- use htree (directory indexing)
- use the 2.6 Orlov allocator (?)
- mount with data=writeback
- mount with commit=
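
For illustration only (device name and mount point are made up; htree also 
needs a kernel and e2fsprogs that support the dir_index feature):

  tune2fs -O dir_index /dev/hda5    # enable htree directory indexing

  # /etc/fstab entry with the suggested mount options:
  /dev/hda5  /var/mail  ext3  defaults,data=writeback,commit=30  0  2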

The blog post and comments are here: 
http://navindra.blogspot.com/2004/10/kde-dot-news-ext3s-miserable-failure.html

His blog entry seems to support the contention that if you want expert 
feedback about a tool, just say it sucks.  ;)

Regards,

Mark





Re: Mount options for Optimizing ext2/ext3 performance with Maildirs

2004-10-18 Thread maarten
On Monday 18 October 2004 18:15, Ian Forbes wrote:

> I have a mailserver with load average sitting somewhere between 1 and 2. It
> is running exim serving a couple of thousand Maildir mailboxes and also has
> a bunch of antivirus / antispam programs running on it. It has a pair of
> ide hard drives running mirrored, raid1. I really do not want to start on
> reiser or XFS. Reliability is my major concern and I do not think they are
> warranted. But I would like to maximise my performance with the existing
> ext2/ext3 partitions.

Well, reliability is relative.  You can have a near-perfectly reliable ext2 FS, 
but what is that reliability worth if it means that at every crash the box 
will be down for 5 hours doing its e2fsck on the 200 GB mail volume...?

> Is ext3 faster or slower than ext2?

I'm no expert myself but I've read that ext2 is faster than ext3 because it 
has no additional overhead from the journaling functions. But see above for 
what that tradeoff can result in. Journaled filesystems exist for a good 
reason (not least filesystem consistency).
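
(As an aside, converting ext2 to ext3 in place is cheap. A minimal sketch, 
with /dev/md0 standing in for the actual mail volume:

  tune2fs -j /dev/md0    # add a journal to the existing ext2 filesystem

then change its /etc/fstab entry from ext2 to ext3 and remount.)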

Everyone has to live with his/her own choices and the consequences thereof, so 
I will not try to convince you to run XFS or reiser, but in my own experience 
reiserfs has proven more reliable than ext3.
Other than that, I've heard that reiserfs is especially efficient with small 
files, and XFS especially efficient with big files.  I have no data on 
ext3, but the web is littered with benchmarks (all with different results of 
course, since benchmarks are very rarely neutral ;-)  

> Currently I have everything in one big root partition. If I mount it with
> "noatime" will a hole bunch of things stop working, like the automatic
> reloading of files in /etc/cron.d/ ?

Don't know, but you should seriously consider making separate partitions for 
the various mountpoints, not only for the atime issue but for added security.
I'm not talking about just one / and one mailspool, but giving at least /tmp, 
/usr, /var and /home (when applicable) their own partitions too.  This is 
good for a lot of reasons: with FS crashes you do not stand to lose 
everything (as you do with one single partition), you make the likelihood of 
a DoS much smaller, and you can set specific mount options per partition, 
like noexec on /home, or nosuid on /tmp, or whatever you desire, which wholly 
depends on the role of the machine and its users' granted rights.
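
For illustration only (device names are made up), such a layout could look 
like this in /etc/fstab:

  /dev/hda5  /tmp   ext3  defaults,noatime,nosuid  0  2
  /dev/hda6  /var   ext3  defaults,noatime         0  2
  /dev/hda7  /home  ext3  defaults,noatime,noexec  0  2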

> The system is running a 2.4.18 kernel. Is there anything to be gained from
> upgrading to a later 2.4 or a 2.6 kernel?

Isn't 2.4.18 a tad old by now?  Are you security-conscious?

Maarten




Re: Documentation of big "mail systems"?

2004-10-18 Thread Wouter Verhelst
On Mon, Oct 18, 2004 at 05:44:08PM +0200, Christoph Moench-Tegeder wrote:
> ## Wouter Verhelst ([EMAIL PROTECTED]):
> > Debian does not need the storage for developers to store their mail on
> > the project's servers.
> 
> This thread is not about Debian's mail service.

Sorry, I was indeed confused.

-- 
 EARTH
 smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
 WATER
 -- with thanks to fortune


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]



Mount options for Optimizing ext2/ext3 performance with Maildirs

2004-10-18 Thread Ian Forbes
Hi

I have a mailserver with load average sitting somewhere between 1 and 2. It is 
running exim serving a couple of thousand Maildir mailboxes and also has a 
bunch of antivirus / antispam programs running on it. It has a pair of ide 
hard drives running mirrored, raid1. I really do not want to start on reiser 
or XFS. Reliability is my major concern and I do not think they are 
warranted. But I would like to maximise my performance with the existing 
ext2/ext3 partitions.

Is ext3 faster or slower than ext2?

What mount options give the best performance, "noatime", "data=journal"?

Currently I have everything in one big root partition. If I mount it with 
"noatime" will a whole bunch of things stop working, like the automatic 
reloading of files in /etc/cron.d/ ?

Of the options data=journal / data=ordered / data=writeback, which will give 
me the best performance, and which has the biggest chance of data loss in a 
crash situation?  I think I can live with mail that is being delivered at the 
moment of a crash getting corrupted, provided that the server is never 
rendered un-bootable and that no other files are affected.

At the moment the partitions are mounted ext2, although previously they were 
mounted ext3. I have a kjournald process at the top of my "top" listing 
(sorted by CPU usage). Is this actually stealing processing capacity from the 
rest of the system?

The system is running a 2.4.18 kernel. Is there anything to be gained from 
upgrading to a later 2.4 or a 2.6 kernel?

My plan is to move the Maildirs and /var/spool/exim onto a separate partition 
(there is an empty partition on the disk) mounted ext3 with "noatime" and 
"data=journal". Maybe with a new kernel. Any suggestions?

Thanks

Ian

-- 
Ian Forbes ZSD
http://www.zsd.co.za
Office: +27 21 683-1388  Fax: +27 21 674-1106
Snail Mail: P. O. Box 46827, Glosderry, 7702, South Africa





Re: Documentation of big "mail systems"?

2004-10-18 Thread Christoph Moench-Tegeder
## Wouter Verhelst ([EMAIL PROTECTED]):

> > > Getting servers that each have 200G or 300G of storage is easy. 
> > For a mail server, it means either 1G per user (like gmail gives you)
> > for only 300 users or 10M (much less than hotmail) for 30 000
> > users. It is probably not enough for a Hotmail-like service. Think of
> > 300 000 users. How many servers will you need?
> Debian does not need the storage for developers to store their mail on
> the project's servers.

This thread is not about Debian's mail service.

> That said, your calculation is incorrect. You don't need 300G to give
> 300 users 1G disk space -- 250G should do. The reason is that most
> people never reach their 1G quota; and if they do, it takes a while, so
> that you can take appropriate measures (i.e., add more disks) in good
> time.

Yes, depending on your average user you could get down to 50% average
usage. Or up to 95%.

Regards,
Christoph

-- 
Spare Space





OT: Experiences with SATA, Intel ICH5, Kernel 2.6

2004-10-18 Thread Henrik Heil
Hello everybody,
we are currently running a web- and mailserver with a Tyan S5102G3NR 
board, Linux 2.6.8.1 (kernel.org) and Debian Woody. The disks are 2x 36 
GB, U-320 SCSI, RAID 1 (ICP-Vortex GDT8114RZ).
We now need additional disks in this server, but not at the same speed 
(not SCSI), not redundant (no RAID), and affordable.

Onboard options are:
* 2 ATA-Ports (one already has a CD-ROM attached)
* 2 SATA-PORTS (Intel 82801EB ICH5)
* 2 SATA-PORTS (Promise PDC20378, RAID 0,1)
At this time we need 2 additional disks and would like to use the 
ICH5 SATA ports. I have not used SATA before, and 
http://www.kerneltraffic.org/kernel-traffic/kt20040807_270.html#6 says:

---8<---
Serial ATA (SATA) for Linux
status report
July 8, 2004
Intel ICH5, ICH5-R, ICH6
libata driver status: Production, but see issue #2, #3. Recently work on 
issue #2 has improved the state of that issue.

drivers/ide driver status: Production, but see issue #1, #2.
Issue #1: Depending on BIOS settings, IDE driver may lock up computer 
when probing drives.

Issue #2: Excessive interrupts are seen in some configurations.
Issue #3: "Enhanced mode" or "SATA-only mode" may need to be set in BIOS.
--->8---
Could someone share experiences with this chipset/board or comment on 
these issues? Do you think the SATA ports might interfere in any way with 
the ICP-Vortex controller?

Another question: would you avoid the Promise controller at all costs? 
(Another 2 disks could be needed in the future.) Does anyone have 
experience with it? I have often heard that Promise is crap and a 
performance hog compared to other RAID solutions. I would use it just to 
have another 2 SATA ports.

Best regards,
Henrik
--
Henrik Heil, zweipol Coy & Heil GbR
http://www.zweipol.net/


Re: Documentation of big "mail systems"?

2004-10-18 Thread Wouter Verhelst
On Mon, Oct 18, 2004 at 04:17:14PM +0200, Stephane Bortzmeyer wrote:
> On Sat, Oct 16, 2004 at 09:41:43PM +1000,
>  Russell Coker <[EMAIL PROTECTED]> wrote 
>  a message of 39 lines which said:
> 
> > Getting servers that each have 200G or 300G of storage is easy. 
> 
> For a mail server, it means either 1G per user (like gmail gives you)
> for only 300 users or 10M (much less than hotmail) for 30 000
> users. It is probably not enough for a Hotmail-like service. Think of
> 300 000 users. How many servers will you need?

Debian does not need the storage for developers to store their mail on
the project's servers.

That said, your calculation is incorrect. You don't need 300G to give
300 users 1G disk space -- 250G should do. The reason is that most
people never reach their 1G quota; and if they do, it takes a while, so
that you can take appropriate measures (i.e., add more disks) in good
time.

-- 
 EARTH
 smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
 WATER
 -- with thanks to fortune





Re: Documentation of big "mail systems"?

2004-10-18 Thread Stephane Bortzmeyer
On Sat, Oct 16, 2004 at 09:41:43PM +1000,
 Russell Coker <[EMAIL PROTECTED]> wrote 
 a message of 39 lines which said:

> Getting servers that each have 200G or 300G of storage is easy. 

For a mail server, it means either 1G per user (like gmail gives you)
for only 300 users or 10M (much less than hotmail) for 30 000
users. It is probably not enough for a Hotmail-like service. Think of
300 000 users. How many servers will you need?






Re: Documentation of big "mail systems"?

2004-10-18 Thread Aurélien Beaujean
On Friday 15 October 2004 at 13:13, Paul Dwerryhouse wrote:
> We don't use NFS. Only the LDAP servers are using 2.6.x - everything
> else is still on 2.4.

So mail is delivered to your backend mailstores by SMTP or LMTP? Does no
NFS also mean that the POP/IMAP daemons run directly on the backend
mailstores?

-- 
BOFH excuse #409: The vulcan-death-grip ping has been applied.





Re: Documentation of big "mail systems"?

2004-10-18 Thread Aurélien Beaujean
On Saturday 16 October 2004 at 21:46, Russell Coker wrote:
> Is there any way to optimise PHP for speed?  Maybe PHP5 is worth trying?

We use php/zend mmcache. With it, we freed up 50% of the CPU on the machines
which run IMP.
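
(In case it helps, a minimal sketch of the relevant php.ini bits -- the 
extension path and values below are from memory and vary per install, so 
treat them as placeholders:

  zend_extension="/usr/lib/php4/20020429/mmcache.so"  ; path varies
  mmcache.enable="1"
  mmcache.optimizer="1"
  mmcache.shm_size="16"     ; shared memory for the opcode cache, in MB
)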





Woody and Java with lots of threads

2004-10-18 Thread andrew
Dear list,
Has anyone managed to get Java 1.4.2 running with 1100+ threads on 
Woody (with 2.4.25smp)?

I currently have the following ulimits set...
:~$ ulimit -a
core file size        (blocks, -c) 0
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) unlimited
max memory size       (kbytes, -m) unlimited
open files                    (-n) 8192
pipe size          (512 bytes, -p) 8
stack size            (kbytes, -s) 8192
cpu time             (seconds, -t) unlimited
max user processes            (-u) 4096
virtual memory        (kbytes, -v) unlimited
:~$
The machine has 1G of RAM.
With Sarge I am able to get 3500 processes running...
Any suggestions on how I should set -Xms, -Xss and -Xmx?
Is this a problem with glibc on Woody?
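
For what it's worth, the kind of invocation I am experimenting with (heap 
and stack sizes are guesses to be tuned, and MyServer is a placeholder 
class):

  ulimit -s 2048                             # shrink the per-thread stack
  java -Xms256m -Xmx512m -Xss128k MyServer

The idea being that with LinuxThreads every thread reserves its own stack, 
so a smaller -Xss (and a smaller ulimit -s) leaves room for more threads in 
the same address space.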
Thanks
Andrew


Re: why multicasting is working?

2004-10-18 Thread Oleg Butorin
Mike Mestnik wrote:
> I'm not an expert on MC, but I'd think 224.0.0.1 would be routed to your
> default route.  Then the pkt would get multicast and you would receive
> multiple responses.

Yes, but I received responses from systems where multicasting is 
disabled in the kernel.

> IIRC kernel level MC support is only for if you want to be on the Mbone, not
> if you want to use it as a client/server.

But the option is called "IP: multicasting", and its help text says:
"This is code for addressing several networked computers at once,
enlarging your kernel by about 2 KB..."
And the Mbone is a standard network that supports multicasting (routers, 
computers...).
As I understand it, there is no special support for the Mbone; this is 
support for multicasting in general.

Best regards,
Oleg.


Re: why multicasting is working?

2004-10-18 Thread Oleg Butorin
Theodore Knab wrote:
> Actually, this set of find commands will work better:
> find /proc/net -name '*cast*' -print -exec cat {} ';'
> find /proc/sys -name '*cast*' -print -exec cat {} ';'

Thank you for the answer.
I didn't find anything. So the question remains: why is it working when it 
is disabled in the kernel?!
I don't want to know where it is toggled in /proc; I want to know why it 
is working.
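
In case it helps to narrow this down, two quick checks (the interface name 
is a placeholder, and the /proc file may not exist on every kernel):

  ifconfig eth0 | grep MULTICAST    # is the MULTICAST flag set on the NIC?
  cat /proc/net/dev_mcast           # link-layer multicast group memberships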

Best regards,
Oleg.