SCSI or IDE

2002-11-24 Thread Scott
After some talks with the person who handles the books, she has given me 
the authority to bail on these Netfinity boxes and get something better 
supported by Debian.  My question is: with IDE drives as fast as they are 
now, does it really pay to go SCSI?  Are there any benefits besides RAID?
I understand fault tolerance, but how about performance?

Thanks,

-Scott







Re: SCSI or IDE

2002-11-24 Thread Russell Coker
On Sun, 24 Nov 2002 18:38, Scott wrote:
 After some talks with the person who handles the books she has given me
 the authority to bail on these Netfinity boxes and get something more
 supported by Debian.  My question is:  with IDE drives as fast as they are
 now does it really pay to go SCSI?  Are there any benefits besides RAID?
 I understand fault tolerance, but how about performance?

IDE and SCSI give very similar performance.  Performance is determined by 
hardware issues such as rotational speed rather than the type of interface.

If you want RAID then 3Ware makes some good IDE RAID products.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






Re: SCSI or IDE

2002-11-24 Thread Vasil Kolev
About performance - IDE still uses a lot of the CPU, while SCSI has its own
processing power.  You can put a lot more disks on a single SCSI
controller than on an IDE controller, and (AFAIK, I could be
mistaken) two IDE drives on one bus cannot work simultaneously and share the
bandwidth (which isn't a problem with SCSI: if you have a 160 MB/s bus
and 3 disks that can each do about 40 MB/s, you can use all 120 MB/s).

And maybe I should say something about reliability: SCSI disks don't
die as often as IDE drives when used heavily, 24x7.


On Sun, 2002-11-24 at 18:38, Scott wrote:
 After some talks with the person who handles the books she has given me 
 the authority to bail on these Netfinity boxes and get something more 
 supported by Debian.  My question is:  with IDE drives as fast as they are 
 now does it really pay to go SCSI?  Are there any benefits besides RAID?
 I understand fault tolerance, but how about performance?







Re: SCSI or IDE

2002-11-24 Thread Jeremy Zawodny
On Sun, Nov 24, 2002 at 06:56:34PM +0100, Vasil Kolev wrote:

 About performance - IDE still uses a lot of the CPU

IMHO that argument made a lot more sense when we had 300MHz CPUs.  But
now that most servers are far faster than that, we're talking about
what, 1% or maybe 2% of the CPU?

That small CPU cost is probably more than offset by the savings on the SCSI premium.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/






Re: SCSI or IDE

2002-11-24 Thread Thing
On Mon, 25 Nov 2002 06:38, Scott wrote:
 After some talks with the person who handles the books she has given me
 the authority to bail on these Netfinity boxes and get something more
 supported by Debian.  My question is:  with IDE drives as fast as they are
 now does it really pay to go SCSI?  Are there any benefits besides RAID?
 I understand fault tolerance, but how about performance?

 Thanks,

 -Scott

I would be grateful if you could document why / what problems you are having 
with the Netfinity kit (for future reference).

IDE is obviously way cheaper than SCSI.  You can go IDE RAID, which I've not 
tried yet, but it would give you a mirror, which is what you really want (read 
performance will be a bit better too).

Does the load justify SCSI?  If it's not hammered, then hardware IDE RAID is 
probably fine.

regards

Thing






Re: SCSI or IDE

2002-11-24 Thread Russell Coker
On Sun, 24 Nov 2002 18:56, Vasil Kolev wrote:
 About performance - IDE still uses a lot of the CPU, while SCSI has its own
 processing power.

Please do some benchmarks.  You'll discover that when DMA is enabled and you 
have a good chipset then IDE will not use much CPU.

OTOH if you have an Adaptec 1510 then even accessing a CD-ROM will take 
excessive amounts of CPU time.

In summary:  Good controllers use little CPU time, bad controllers use a lot 
of CPU time.  Doesn't matter whether it's IDE or SCSI.
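
A quick way to check that on Linux is hdparm (a sketch; assuming the disk
appears as /dev/hda - adjust the device name to taste):

  hdparm -d /dev/hda    # report whether using_dma is currently on
  hdparm -d1 /dev/hda   # turn DMA on
  hdparm -tT /dev/hda   # time cached vs. buffered reads

Watch CPU use with vmstat in another terminal while the last command runs;
with DMA off the difference is dramatic.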

 You can put a lot more disks on a single SCSI
 controller than on an IDE controller, and (AFAIK, I could be
 mistaken) two IDE drives on one bus cannot work simultaneously and share the
 bandwidth (which isn't a problem with SCSI: if you have a 160 MB/s bus
 and 3 disks that can each do about 40 MB/s, you can use all 120 MB/s)

3ware IDE controllers support up to 12 drives.  You won't find many SCSI 
controllers that can do that and deliver acceptable performance (you won't 
get good performance unless you have 64bit 66MHz PCI).

Do a benchmark of two IDE drives on the one cable and you will discover that 
the performance loss is not very significant.

ATA-133 compared to Ultra2 SCSI at 160MB/s is not much difference.  S-ATA is 
coming out now and supports 150MB/s per drive.

Please do some benchmarks before you start talking about performance.
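
(For concreteness: bonnie++ from the URL in the signature below is one easy
way.  A minimal sketch, assuming a scratch directory /mnt/test and 512MB of
RAM - the file size should be about twice RAM so the cache can't hide the
disk:

  bonnie++ -d /mnt/test -s 1024 -u nobody

Run it once per configuration you care about and compare the per-char and
block figures.)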

 And maybe I should say something about reliability: SCSI disks don't
 die as often as IDE drives when used heavily, 24x7.

The three biggest causes of data loss that I have seen are:
1)  Incompetent administrators.
2)  Heat.
3)  SCSI termination.

SCSI drives tend to have higher rotational speeds than IDE drives and thus 
produce more heat.  Even when IBM was shipping thousands of broken IDE hard 
drives (and hundreds of broken SCSI drives which didn't seem to get any 
press) the data loss caused by defective drives was still far less than any 
of those three factors.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






Re: SCSI or IDE

2002-11-24 Thread Emilio Brambilla
hello,
On Sun, 24 Nov 2002, Russell Coker wrote:

 IDE and SCSI give very similar performance.  Performance is determined by 
 hardware issues such as rotational speed rather than the type of interface.
I agree if you're thinking of a single-drive workstation, not a 
server with many disks doing heavy I/O.

ATA/IDE drives/controllers lack the ability to perform command queuing,
so they are not very fast on many concurrent I/O requests (this feature
will be introduced in Serial ATA II devices, I think).

SCSI can queue up to 256 commands and reorder them for maximum
performance; furthermore, SCSI was developed for the server
market, so the drives are optimized for servers (SCSI's command
rescheduling and seek patterns were designed for this kind of use!).

It's true that on many entry-level servers IDE is enough for the job (and
a lot cheaper than SCSI), but on high-end servers SCSI is still a MUST!

BTW, rotational speed speaking, how many 15,000 RPM IDE disks are
there? :-)

-- 
Saluti,
emilio brambilla






Re: SCSI or IDE

2002-11-24 Thread Russell Coker
On Sun, 24 Nov 2002 20:45, Emilio Brambilla wrote:
  IDE and SCSI give very similar performance.  Performance is determined by
  hardware issues such as rotational speed rather than the type of
  interface.

 I agree if you're thinking of a single-drive workstation, not a
 server with many disks doing heavy I/O.

Organizations such as CERN are using IDE disks for multi-terabyte arrays.

 ATA/IDE drives/controllers lack the ability to perform command queuing,
 so they are not very fast on many concurrent I/O requests (this feature
 will be introduced in Serial ATA II devices, I think)

Get 10 disks in a RAID array and the ability of a single disk to queue 
commands becomes less important, the RAID hardware can do that.

 SCSI can queue up to 256 commands and reorder them for maximum
 performance; furthermore, SCSI was developed for the server
 market, so the drives are optimized for servers (SCSI's command
 rescheduling and seek patterns were designed for this kind of use!)

However benchmarks tend not to show any great advantage for SCSI.  If you get 
an affordable SCSI RAID solution then the performance will suck.  Seeing an 
array of 10,000 RPM Ultra2 SCSI disks delivering the same performance as a 
single IDE disk is not uncommon when you have a cheap RAID setup.

Even when your RAID array costs more than your house you may find the 
performance unsatisfactory.

3ware RAID arrays are affordable and deliver quite satisfactory performance.  
Usually they are limited by PCI speeds (last time I checked they didn't 
support 66MHz 64bit PCI).

 It's true that on many entry-level servers IDE is enough for the job (and
 a lot cheaper than SCSI), but on high-end servers SCSI is still a MUST!

SCSI is more expensive, it's not faster, it's not as well supported, and it 
has termination issues.  SCSI is not a must unless you buy from Sun or one 
of the other vendors that gives you what costs the most rather than what you 
need...

 BTW, rotational speed speaking, how many 15,000 RPM IDE disks are
 there? :-)

None, not that I care.  As long as Fibre Channel speeds, RAID array speeds, 
etc. slow the arrays of SCSI drives I use so much that they deliver the same 
performance as a single IDE disk, it's all irrelevant.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






Re: SCSI or IDE

2002-11-24 Thread Vector

- Original Message -
From: Russell Coker [EMAIL PROTECTED]
To: Vasil Kolev [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Sunday, November 24, 2002 12:39 PM
Subject: Re: SCSI or IDE


  You can put a lot more disks on a single SCSI
  controller than on an IDE controller, and (AFAIK, I could be
  mistaken) two IDE drives on one bus cannot work simultaneously and share the
  bandwidth (which isn't a problem with SCSI: if you have a 160 MB/s bus
  and 3 disks that can each do about 40 MB/s, you can use all 120 MB/s)

 3ware IDE controllers support up to 12 drives.  You won't find many SCSI
 controllers that can do that and deliver acceptable performance (you won't
 get good performance unless you have 64bit 66MHz PCI).


That is not true.

 Do a benchmark of two IDE drives on the one cable and you will discover that
 the performance loss is not very significant.

 ATA-133 compared to Ultra2 SCSI at 160MB/s is not much difference.  S-ATA is
 coming out now and supports 150MB/s per drive.


Ultra2 can't do 160MB/s.  Ultra2 is limited to 80MB/s.  U160 (or Ultra3) can
do 160MB/s.  And perhaps, yes, Ultra2 vs ATA-133 might be comparable.  And
U320 is out now and can do 320MB/s... such is and has been the evolution of
both standards.

  And maybe i should say something about the reliability, SCSI disks don't
  die that often, compared to IDE drives, while being used a lot 24x7.

 The three biggest causes of data loss that I have seen are:
 1)  Incompetant administrators.

Amen.

 2)  Heat.

Halleluja, Brother!

 3)  SCSI termination.


Huh?  I'd honestly have to say this falls into the same category as 1)
Incompetent administrators.  Get the termination right and it all works just
fine, which is now easier than ever, since controllers have been able to
autoterminate for many, many years and now they are building terminators
right into the cable.  And there are other factors, like cable quality and
length.  It's certainly more complicated, but again, I feel it's worth it
once you know what you are doing.

 SCSI drives tend to have higher rotational speeds than IDE drives and thus

True, and in your first reply on this thread didn't you quote this as one of
the primary factors determining speed?

 produce more heat.  Even when IBM was shipping thousands of broken IDE hard

yes, fans are our friends!

 drives (and hundreds of broken SCSI drives which didn't seem to get any
 press) the data loss caused by defective drives was still far less than any
 of those three factors.

Hmm, yeah, there's crap in both sectors that's for sure.  I can't say I've
been a huge fan of IBM drives in the past.

vec







Re: SCSI or IDE

2002-11-24 Thread Vector

- Original Message -
From: Russell Coker [EMAIL PROTECTED]
To: Emilio Brambilla [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Sunday, November 24, 2002 1:14 PM
Subject: Re: SCSI or IDE


 Organizations such as CERN are using IDE disks for multi-terabyte arrays.

I've heard Google uses IDE as well.  Of course, theirs comes in a huge cluster
of cheap workstations, not as a mass storage system.


In an attempt to answer the original question:
As you can see here, there is somewhat of a religious war going on.  I don't
much care about the specifics of the religion.  I have always gone with what
worked best for me, which in my case has been SCSI.  If you have a tight
budget, I'm sure you can find an IDE solution that will do you just fine.
If you have a fat budget, try them both, then sell off the one you like the
least and chalk it up to a learning experience.

vec







Re: SCSI or IDE

2002-11-24 Thread Russell Coker
On Sun, 24 Nov 2002 22:13, Vector wrote:
   You can put a lot more disks on a single SCSI
   controller than on an IDE controller, and (AFAIK, I could be
   mistaken) two IDE drives on one bus cannot work simultaneously and share
   the bandwidth (which isn't a problem with SCSI: if you have a 160 MB/s
   bus and 3 disks that can each do about 40 MB/s, you can use all 120 MB/s)
 
  3ware IDE controllers support up to 12 drives.  You won't find many SCSI
  controllers that can do that and deliver acceptable performance (you
  won't get good performance unless you have 64bit 66MHz PCI).

 That is not true.

What are you claiming is not true?

 Ultra2 can't do 160MB/s.  Ultra2 is limited to 80MB/s.  U160 (or Ultra3)
 can do 160MB/s.  And perhaps, yes, Ultra2 vs ATA-133 might be comparable. 
 And U320 is out now and can do 320MB/s... such is and has been the
 evolution of both standards.

ATA-133 for two disks (or one disk for 3ware-type devices) is more than 
adequate.  U160 for more than 4 disks will be a bottleneck.

  3)  SCSI termination.

 Huh?  I'd honestly have to say this falls into the same category as 1)
 Incompetent administrators.  Get the termination right and it all works
 just fine, which is now easier than ever since controllers have been able

Unfortunately the administrators don't always get a chance to inspect the 
hardware or fiddle with it.

I often don't get to touch the hardware I administer until after it has been 
proven to be broken.

Sun likes to do all the hardware maintenance (it's quite profitable for them).  
Sun employees often aren't able to terminate SCSI properly.

For these and other reasons a company I am working for is abandoning Sun and 
moving to Debian on PC servers.

 to autoterminate for many many years and now they are building terminators
 right into the cable.  And there are other factors like cable quality and
 length.  It's cerntaily more complicated but again, I feel it's worth it
 once you know what you are doing.

Regardless of auto-termination and terminators built into cables, if you 
install the wrong parts then you can still stuff it up.

When you've had a repair-man from the vendor use a hammer to install a CPU you 
learn to accept that any hardware can be broken no matter how well it's 
installed.

  SCSI drives tend to have higher rotational speeds than IDE drives and
  thus

 True, and in your first reply on this thread didn't you quote this as one
 of the primary factors determining speed?

Yes.  However for bulk IO it's rotational speed multiplied by the number of 
sectors per track.  A 5400rpm IDE disk with capacity 160G will probably 
perform better for bulk IO than a 10,000rpm SCSI disk with capacity 36G for 
this reason.
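
As a rough worked example (with invented but era-plausible zone figures):
sustained rate is about (RPM / 60) * sectors-per-track * 512 bytes, so the
big 5400rpm disk at 800 sectors/track gives 90 * 800 * 512 = ~37MB/s, while
the small 10,000rpm disk at 400 sectors/track gives 167 * 400 * 512 = ~34MB/s.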

A high rotational speed helps seek times a lot too, but a big cache and a 
battery-backed write-back cache can make up for this (admittedly this isn't 
something you'll see in a typical IDE-RAID solution).

 Hmm, yeah, there's crap in both sectors that's for sure.  I can't say I've
 been a huge fan of IBM drives in the past.

IBM drives used to be really good.  They used to run cool, quietly, and 
reliably.  I've had IBM drives keep working in situations where other brand 
drives failed from heat.

It seems that whenever a vendor gets a reputation for high quality they then 
increase the volume, sub-contract the manufacturing to a country where they 
can pay the workers $0.10 per hour, and the results are what you would 
expect.  :(

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






Re: SCSI or IDE

2002-11-24 Thread John
On Sun, Nov 24, 2002 at 12:38:56PM -0500, Scott wrote:
 After some talks with the person who handles the books she has given me 
 the authority to bail on these Netfinity boxes and get something more 
 supported by Debian.  My question is:  with IDE drives as fast as they are 
 now does it really pay to go SCSI?  Are there any benefits besides RAID?
 I understand fault tolerance, but how about performance?

I have used SCSI and IDE at many levels of the game.  I've also used
filers (NetApp). 

I currently work with an ISP that has mostly IDE on the servers doing
miscellaneous stuff, all SCSI RAID5 on the servers such as database, NFS
and network monitoring. I just like being able to pull a drive hot and
replace it nice and easy in the servers that are most critical to me. 

There's very little point in having IDE, for my purposes, on the most
mission-critical servers.  We also have a habit of netbooting many of our
machines.  POP/SMTP/HTTP/HTTPS/DNS are done via netboot.  This reduces our
reliance on drives in tons of systems.  I would be happy to know if there
are controllers and setups that allow hot-swappable IDE RAID5 - I'd be
very interested if there were (please feel free to let me know on or off
list). 

At home, where we have a completely overbuilt network (geek!), I have a
server with IDE software RAID1 (dual 40G) and a SCSI RAID5 array that is
external and module-based, so I can move/add/remove
drives at will - without losing my uptime on the main machine.  My SCSI
array is currently 54G, but will expand again in the spring when I make
some other upgrades and free up more matching drives.  I also like to add and
remove my SCSI CD-ROMs, just because I have several lying around.


I've seen (figuring off the top of my head) a 3:1 IDE/SCSI failure rate
across all drives/servers/systems.  I'm not recalling that many failures
all told.  I can actually only recall two SCSI failures, a 2G WD and an 18G
IBM.  I've had multiple Fujitsu IDE and WD IDE failures, sometimes with the
replacement drive failing in the same machine (Grrr)


Overall, this would be my recommendation (IMHO - YMMV)

IF you can get a combination of good IDE drives and good IDE
controllers that don't peg your CPU usage, and money is an issue, go with
IDE.  Never put two RAID1 IDE drives on the same channel (primary or
secondary).  Put one on each for safety.  For storing mp3s at home or
files locally, IDE is generally well suited and will save you a lot.
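
In the raidtools /etc/raidtab format that layout would look something like
this (a sketch; device names assume the master drive on each channel):

  # RAID1 mirror across the two IDE channels
  raiddev /dev/md0
      raid-level            1
      nr-raid-disks         2
      persistent-superblock 1
      chunk-size            4
      # primary master
      device                /dev/hda1
      raid-disk             0
      # secondary master
      device                /dev/hdc1
      raid-disk             1

then mkraid /dev/md0 to build the mirror.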

If you've got more money and want to see a (actual, not spec) better
MTBF, go with SCSI. Take the time to learn how SCSI works, terminations,
etc. Research block sizes on RAID arrays. Experiment to get the best
speeds. Use multiple controllers if you want. Have proper cooling. 

I think SCSI edges out IDE for reliability, and I think the extra cost is
worth it.  And if your data is super mission-critical, just buy a filer
instead and use snapshots.  If, as I reread your question, you just want
to know "Is SCSI worth it for speed?" - no, probably not; you can do
well with an intelligently configured IDE system. 

$.02, FWIW,

John






RE: SCSI or IDE

2002-11-24 Thread Jones, Steven
You can get hot-swap IDE.

Promise do one (a hot-swap IDE product); dunno how good it is, mind.

Thing

8<--

I currently work with an ISP that has mostly IDE on the servers doing
miscellaneous stuff, all SCSI RAID5 on the servers such as database, NFS
and network monitoring. I just like being able to pull a drive hot and
replace it nice and easy in the servers that are most critical to me. 






Re: SCSI or IDE

2002-11-24 Thread Russell Coker
On Sun, 24 Nov 2002 23:39, John wrote:
 There's quite little point in having IDE for my work on the most mission
 critical servers. We also have a habit of netbooting many of our
 machines. POP/SMTP/HTTP/HTTPS/DNS are done via netboot. This reduces our
 reliance on drives in tons of systems.

It does however increase your reliance on the network.  However it's an 
interesting concept.

One problem with netbooting is that you then become reliant on a single
Filer-type device instead of having multiple independent servers.  If each
server has its own disks running software RAID then a single disk failure
isn't going to cause any great problems, and a total server failure isn't
going to be a big hassle either.

Another problem that has prevented me from doing such things in the past is 
that the switches etc have been run by a different group.  I have been unable 
to trust the administrators of the switches to not break things on me...  :(

 I would be happy to know if there
 are controllers and setups that allow hotswappable IDE RAID5 - I'd be
 very interested if there were (please feel free to let me know on or off
 list).

http://www.raidzone.com/Products___Solutions/OpenNAS_Overview/opennas_overview.html

 IF you can get a combination of good IDE drives with good IDE
 controllers that don't peg your CPU usage and money is an issue, go with
 IDE. Never put two RAID1 IDE drives on the same channel (primary or
 secondary). Put one on each for safety.

You can say the same about SCSI.

If you get a high-end RAID product from Sun then you won't have two drives in 
the same RAID set on the same SCSI cable.

One final thing: the performance differences between ReiserFS, Ext3, and XFS 
are far greater than those between IDE and SCSI drives of similar specs.  All 
three file systems perform best at different tasks, so benchmark for what you 
plan to do first.
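
One way to run that comparison (a sketch; assumes a scratch partition
/dev/hdb1 you can destroy, and some mkfs variants may want a force flag):

  for fs in ext3 reiserfs xfs; do
      yes | mkfs -t $fs /dev/hdb1
      mount /dev/hdb1 /mnt/test
      bonnie++ -d /mnt/test -s 1024 -u nobody
      umount /mnt/test
  done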

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






Re: SCSI or IDE

2002-11-24 Thread Donovan Baarda

On Sun, Nov 24, 2002 at 08:45:04PM +0100, Emilio Brambilla wrote:
 hello,
 On Sun, 24 Nov 2002, Russell Coker wrote:
[...]
 ATA/IDE drives/controllers lack the ability to perform command queuing,
 so they are not very fast on many concurrent I/O requests (this feature
 will be introduced in Serial ATA II devices, I think)
 
 SCSI can queue up to 256 commands and reorder them for maximum
 performance; furthermore, SCSI was developed for the server
 market, so the drives are optimized for servers (SCSI's command
 rescheduling and seek patterns were designed for this kind of use!)

There are lots of IDE vs SCSI arguments that are no longer true but still
surface when this topic is recycled.

CPU: IDE in the PIO days used bucketloads of CPU.  UDMA ended that three or
four IDE generations ago.  It is not unusual to see benchmarks with IDE drives
using less CPU than SCSI drives, though they are pretty much the same now.

Thermal recalibration: Some drives do periodic recalibrations that cause a
hiccup in data streaming.  This is _not_ an IDE vs SCSI issue, but a drive
issue.  Some drives were multi-media rated, which means they can guarantee
a constant stream of data without recalibration hiccups.  Many low-end SCSI
drives are mechanically identical to the manufacturer's corresponding IDE
drive, and hence have the same recalibration behaviour.  I'm not sure what
the current state of affairs with thermal recalibration and multi-media
ratings is, but it wouldn't surprise me if the terms have faded away, as it's
probably not an issue on new drives.  Anyone else care to comment?

Command queuing: IDE didn't support command queuing, and SCSI did.  I thought
command queuing had been available in IDE for ages...  A quick search of the
Linux IDE driver source pulls up bucketloads of matches against queue,
including:

 * Version 6.00 use per device request queues
 *  attempt to optimize shared hwgroup performance
  ::
 * Version 6.31 Debug Share INTR's and request queue streaming
 *  Native ATA-100 support

And the ataraid.c code includes references to ataraid_split_request
commands.  The ide-cd.c code also refers to cdrom_queue_packet_command.
This might not be actual command queuing, so perhaps I'm wrong, but I'm
sure I read ages ago that IDE had at least something comparable.  Anyone
actually know?

In any case, command queuing makes a big difference when you have lots of
slow drives sharing a mega-bandwidth bus.  IDE has only two drives, so it's
not as relevant.  I believe most benchmarking shows only a marginal
performance hit for two IDEs on the same bus (this might be because IDE does
have a form of command queuing, or it could just be because it doesn't make
much difference).  I know SCSI shows nearly no hit for two drives on one bus,
but when you compare 8 SCSIs on one bus with 8 IDEs on 4 buses, I bet they
turn out about the same.

 It's true that on many entry-level servers IDE is enough for the job (and
 a lot cheaper than SCSI), but on high-end servers SCSI is still a MUST!

Many high-end integrated SCSI RAID storage solutions are actually a SCSI
interface to a bunch of IDE disks...

The best way to compare bare single-drive performance is to compare drives
at:

http://www.storagereview.com/

IMHO, the big win of SCSI is a single interface with a proper bus that
supports multiple devices.  SCSI can drive a scanner, 2 CD-ROMs, and 4
hard drives off one interface using a single interrupt.  UW SCSI can handle
up to 15 devices on one interface and not break a sweat.

If you are going to have more than 6 devices, SCSI is the less painful path
to take, though more expensive.

If you have 6 or fewer devices, IDE is just as good as SCSI, and bucketloads
cheaper.

The IDE RAID cards do open up the 6-12 device area to IDE, but I suspect
SCSI is still slightly less painful, though IDE is definitely cheaper.

-- 
--
ABO: finger [EMAIL PROTECTED] for more info, including pgp key
--






Re: SCSI or IDE

2002-11-24 Thread Jan-Benedict Glaw
On Mon, 2002-11-25 10:17:44 +1100, Donovan Baarda [EMAIL PROTECTED]
wrote in message [EMAIL PROTECTED]:
 On Sun, Nov 24, 2002 at 08:45:04PM +0100, Emilio Brambilla wrote:
  hello,
  On Sun, 24 Nov 2002, Russell Coker wrote:
 [...]
  SCSI can queue up to 256 commands and reorder them for maximum
  performance; furthermore, SCSI was developed for the server
  market, so the drives are optimized for servers (SCSI's command
  rescheduling and seek patterns were designed for this kind of use!)
 
 There are lots of IDE vs SCSI arguments that are no longer true that still
 surface when this topic is recycled.
 
 Command queuing: IDE didn't support command queuing, and SCSI did.  I thought
 command queuing had been available in IDE for ages...  A quick search of the
 Linux IDE driver source pulls up bucketloads of matches against queue,
 including:

Command queuing is quite new to IDE, and only IBM drives support it up
to now, but others are to follow...

  * Version 6.00 use per device request queues
  *  attempt to optimize shared hwgroup performance
   ::
  * Version 6.31 Debug Share INTR's and request queue streaming
  *  Native ATA-100 support

This is Linux' internal queuing, not drive queuing...

 In any case, command queuing makes a big difference when you have lots of
 slow drives sharing a mega-bandwidth bus.  IDE has only two drives, so it's

That's not really right.  Command queuing allows you to tell the drive you
want, say, 10 sectors scattered across the whole drive.  If you
give 10 synchronous commands, you'll see 10 seeks.  Issuing them as
queued commands will fetch them _all_ within _one_ seek, if there's good
firmware on the drive.  Only the drive itself knows the optimal order
of fetching them; the OS only knows some semantics...

 not as relevant.  I believe most benchmarking shows only a marginal
 performance hit for two IDEs on the same bus (this might be because IDE does
 have a form of command queuing, or it could just be because it doesn't make
 much difference).  I know SCSI shows nearly no hit for two drives on one bus, but

Or it is because the benchmark doesn't ask _both_ drives to send their
very maximum of data...

 when you compare 8 SCSIs on one bus with 8 IDEs on 4 buses, I bet they
 turn out about the same.

 If you have 6 or fewer devices, IDE is just as good as SCSI, and bucketloads
 cheaper.

Only true if you don't need your devices to send at their maximum
speed _all the time_.

 The IDE raid cards do open up the 6~12 device area to IDE, but I suspect
 SCSI is still slightly less painful, though IDE is definitely cheaper.

That's right :-)

Regards, JBG

-- 
   Jan-Benedict Glaw   [EMAIL PROTECTED]. +49-172-7608481
   A free opinion in a free head  | Against censorship
   for a free state of free citizens | on the Internet!
   Shell Script APT-Proxy: http://lug-owl.de/~jbglaw/software/ap2/





Re: SCSI or IDE

2002-11-24 Thread Donovan Baarda
On Sun, Nov 24, 2002 at 05:39:53PM -0500, John wrote:
 On Sun, Nov 24, 2002 at 12:38:56PM -0500, Scott wrote:
  After some talks with the person who handles the books she has given me 
  the authority to bail on these Netfinity boxes and get something more 
  supported by Debian.  My question is:  with IDE drives as fast as they are 
  now does it really pay to go SCSI?  Are there any benefits besides RAID?
  I understand fault tolerance, but how about performance?
 
 I have used SCSI and IDE in many levels of the game. I've also used
 filers (Netapp). 
[...]
 I've seen (figuring off the top of my head) a 3:1 IDE/SCSI failure rate
 across all drives/servers/systems. I'm not recalling that many failures
 all told. I can actually only recall two SCSI failure, a 2G WD and a 18G
 IBM. I've had multiple Fujitu IDE, WD IDE failures, sometimes with the
 replacement drive failing in the same machine (Grrr)

Fujitsu and WD don't make HDDs, they make paperweights... and cheap
nasty paperweights at that.

Actually, I'm probably being a bit harsh on WD... they do probably make some
HDDs at their paperweight factories, but Fujitsu never has.

If you want reliable IDEs, get Quantum (whups... they don't exist anymore),
IBM (whups again... they shut down after they built a paperweight factory in
Hungary?), or Seagate... perhaps Maxtor (they bought out Quantum, didn't they?).

The hard thing with computer gear in general is that each generation is a
totally new generation...  A manufacturer's drives can turn from good to crap
in one batch, and the reverse.  About all you can do is check
www.storagereview.com and solicit advice each time you go to buy something.

-- 
--
ABO: finger [EMAIL PROTECTED] for more info, including pgp key
--






Re: SCSI or IDE

2002-11-24 Thread Donovan Baarda
On Mon, Nov 25, 2002 at 12:29:11AM +0100, Jan-Benedict Glaw wrote:
 On Mon, 2002-11-25 10:17:44 +1100, Donovan Baarda [EMAIL PROTECTED]
 wrote in message [EMAIL PROTECTED]:
  On Sun, Nov 24, 2002 at 08:45:04PM +0100, Emilio Brambilla wrote:
   hello,
   On Sun, 24 Nov 2002, Russell Coker wrote:
[...]
 Command queuing is quite new to ide, and only IBM drives support it up
 to now, but others are to follow...

Ahh, perhaps only the spec supported it, and no actual hardware :-)

  In any case, command queuing makes a big difference when you have lots of
  slow drives sharing a mega-bandwidth bus.  IDE has only two drives, so it's
 
 That's not really right.  Command queuing allows you to tell the drive you
 want, say, 10 sectors scattered across the whole drive.  If you
 give 10 synchronous commands, you'll see 10 seeks.  Issuing them as
 queued commands will fetch them _all_ within _one_ seek, if there's good
 firmware on the drive.  Only the drive itself knows the optimal order
 of fetching them; the OS only knows some semantics...

I'm pretty sure most device drivers for both IDE and SCSI do some degree of
command reordering before issuing the commands down the bus.  I wonder how
much real-world benefit can be gained from drive-level command reordering,
and how many SCSI drives actually bother to implement it well :-)

  not as relevant.  I believe most benchmarking shows only a marginal
  performance hit for two IDEs on the same bus (this might be because IDE does
  have a form of command queuing, or it could just be because it doesn't make
  much difference).  I know SCSI shows nearly no hit for two drives on one bus, but
 
 Or it is because the benchmark doesn't ask _both_ drive to send their
 very maximum of data...

I'm pretty sure any benchmarks done on this would have been hammering both
drives at once... that would be the point, wouldn't it?

  when you compare 8 SCSIs on one bus with 8 IDEs on 4 buses, I bet they
  turn out about the same.
 
  If you have 6 or fewer devices, IDE is just as good as SCSI, and bucketloads
  cheaper.
 
 Only true if you don't need your devices to send at their maximum
 speed _all the time_.

The point is, 4 IDE buses will probably match 1 SCSI bus for sustained
transfer rates... 4x133 = 533MB/sec... more than 1x the fastest SCSI.  Throw
in IDE's crappy performance, and you get about the same.

-- 
--
ABO: finger [EMAIL PROTECTED] for more info, including pgp key
--






Re: SCSI or IDE

2002-11-24 Thread Russell Coker
On Mon, 25 Nov 2002 00:54, Donovan Baarda wrote:
 I'm pretty sure most device drivers for both IDE and SCSI do some degree of
 command-reordering before issuing the commands down the buss. I wonder how
 much real-world benefit can be gained from drive-level command re-ordering,
 and how many SCSI drives actualy bother to implement it well :-)

Last I heard was that they both did it badly.  Commands were re-ordered at the 
block device level (re-ordering commands sent to a RAID device is not 
helpful).

This is separate to re-ordering within the disk.

 The point is, 4 IDE buses will probably match 1 SCSI bus for sustained
 transfer rates4x133 =533MB/sec... more than 1x the fastest SCSI. Throw
 in the IDE crappy performance, and you get about the same.

To sustain that speed you need 66MHz 64bit PCI, which almost no-one gets.

If you have a single 33MHz card then the entire bus runs at 33MHz, so you 
need an expensive motherboard with multiple PCI buses and RAID 
controller cards to support it.

Running two hardware RAID cards on separate PCI buses and then doing software 
RAID-0 across them to solve PCI bottlenecks is apparently not that uncommon.
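
With the mdadm tool that trick looks something like this (a sketch; assumes
each hardware RAID card exposes its array as a single SCSI disk, /dev/sda
and /dev/sdc):

  # stripe the two hardware arrays into one software RAID-0
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdc1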

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page






Re: routing policy

2002-11-24 Thread Donovan Baarda
On Fri, Nov 22, 2002 at 07:30:49PM +0100, Marc Haber wrote:
 On Fri, 22 Nov 2002 17:19:47 +0100, mathias daus
 [EMAIL PROTECTED] wrote:
 I wonder if there is a Debian policy on how to handle routing at boot time.
 Is there any solution such as ifupdown?
 
 I read something about iproute, but I'm not sure if I like it.
 
 Till now I have a self-made script called /etc/init.d/route.  It simply
 adds all routes.
 
 Add your routes in the up and down clause in /etc/network/interfaces.

Does this work for ppp, ippp and other such devices?

It would be nice if it did, but I bet it doesn't :-)

At the moment all this stuff is going into /etc/ppp/ip-(up|down).d/.

It would be good if these could be made /etc/network/interfaces-aware, and
for them to work with ifup/ifdown.
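
For a plain ethernet interface the mechanism looks roughly like this in
/etc/network/interfaces (addresses invented for illustration):

  iface eth0 inet static
      address 192.168.1.2
      netmask 255.255.255.0
      gateway 192.168.1.1
      up route add -net 10.1.0.0 netmask 255.255.0.0 gw 192.168.1.254
      down route del -net 10.1.0.0 netmask 255.255.0.0 gw 192.168.1.254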

the ISDN stuff is a mess... stuff scattered between /etc/isdn/ and /etc/ppp.

-- 
--
ABO: finger [EMAIL PROTECTED] for more info, including pgp key
--






Qmail/Postfix/Sendmail for fastest outgoing mail

2002-11-24 Thread Jason Lim
Hi all,

I don't want to spark a flame war or anything... but for purely outgoing
mailing (sending emails), which mail package would be fastest?

I know people have complained about Qmail's way of sending emails... in
that it creates a connection for each email rather than bunching them up
like Sendmail, but then how does Postfix operate (similar/hybrid)?  I hear
Postfix does something fancy in that regard that is a mix or something,
but since I'm no Postfix expert, perhaps someone knows more about this?

The reason I ask is that we have a number of Qmail servers right now, and
they are heavily loaded because they also run Apache, DNS, and other
stuff.  My idea was to get Qmail to send all email quickly to the pure
Email box (mail relay), and have the Email box handle all the actual
grunt work of sending to remote hosts.  All servers are connected together
by 100Mbps, so there is no bottleneck there.  We can't stop using Qmail on
the multi-purpose servers because the whole system is set up and downtime is
unacceptable, but do you think having the Qmail servers relay all email to
the Email box, and then having it actually sent from there, would benefit
us?  We're talking about 2-3 million emails per day, which the Qmails have
done well so far, but because Apache is getting loaded, Qmail is slowing up
and the number of concurrent connections it can handle has been dropping.

I'm *thinking* it would, because then the Qmail servers would not need to
create so many simultaneous connections to slow remote hosts (waiting
around and stuff), and instead would be able to get email off faster to
the Email box, freeing up load on the Qmail servers so they can do
other stuff (Apache).

Any input appreciated.

Sincerely,
Jason







Re: SCSI or IDE

2002-11-24 Thread Jason Lim


 pps: last time i needed to build a large raid array for a fileserver, i
 priced both IDE and SCSI solutions.  the SCSI solution was about $15000
 all up (server, external drive box, drives, raid controller, etc).  the
 equivalent IDE solution was about $13000.  i ended up deciding that scsi
 was worth the extra $2000.

 btw, prices are in australian dollars, $AUD1.00 is somewhere around $US0.55

 these days, i may have chosen differently because i could probably have
 got twice the storage capacity for the same price with IDE.

Definitely... the gap is widening between IDE and SCSI.

The actual physical hardware (the disk actuators and such) is usually
manufactured in the same factory, right?  So all things being equal
(transportation from the factory, etc.) they should have similar failure
rates, only the SCSI drives have more/better chips/firmware/software?






Re: SCSI or IDE (IBM for RAID)

2002-11-24 Thread Jason Lim
 If you want reliable IDEs, get Quantum (whups... they don't exist anymore),
 IBM (whups again... they shut down after they built a paperweight factory
 in Hungary?), or Seagate... perhaps Maxtor (they bought out Quantum, didn't
 they?).

Thing with IBM HDs, in my experience, is that some are good from the
start, some are bad from the start. When I was building a big array a
while ago using IBM 120GXP HDs, out of 8 per server, 2 or 3 failed, but
they failed almost straight away (clicking sound). The rest have run
pretty much flawlessly till today.

 The hard thing with computer gear in general is that each generation is a
 totally new generation...  A manufacturer's drives can turn from good to
 crap in one batch, and the reverse.  About all you can do is check
 www.storagereview.com and solicit advice each time you go to buy something.


IBM HDs are well known for having the best RAID performance, due to the
firmware being tuned that way.  Check out storagereview.com and you'll see
what I mean.  Individually they don't perform much better than the
competition, but in a RAID setup they actually perform better.  Not sure
why, as I'd assume high-speed independent HDs would lead to high-speed
RAID... but anyway, IBM HDs are best for RAID from the performance results,
for some reason.







light emitting diodes for toy gift

2002-11-24 Thread peter
Title: for toy gift

tel: +86-755-26615498 ext: 809
fax: +86-755-26614200
email: [EMAIL PROTECTED]

slopt helps you achieve fun goals!





Re: Qmail/Postfix/Sendmail for fastest outgoing mail

2002-11-24 Thread Craig Sanders
On Mon, Nov 25, 2002 at 01:00:51PM +1100, Jason Lim wrote:
 I don't want to spark a flame war or anything... but for purely
 outgoing mailing (sending emails), which mail package would be
 fastest?

if you're using VERP (Variable Envelope Return Path), postfix is a
little faster than qmail.  if you're not using VERP, postfix is *MUCH*
faster than qmail.

 I know people have complained about Qmail's way of sending emails...
 in that it creates a connection for each email rather than bunching
 them up like Sendmail, but then how does Postfix operate
  (similar/hybrid)?  I hear Postfix does something fancy in that regard
 that is a mix or something, but since I'm no Postfix expert, perhaps
 someone knows more about this?

postfix can do either, depending on how you use it.

by default, postfix does the same as sendmail - multiple emails to
different recipients at the same domain will be sent in one SMTP
session.  actually, postfix performs much better than sendmail in this
instance because for any given message with multiple recipients, postfix
will open multiple connections to *different* servers in parallel,
whereas, for the same message, sendmail will open only one connection to
each server in turn.

if you use VERP for completely automated bounce-detection then postfix
will send one message per recipient the same as qmail, even if several
recipients are @ the same domain - VERP requires this to work.


actually, even sendmail can send one message per recipient, *IFF* your
mailing list software sends one message per recipient.  qmail and
postfix can, by using VERP, do the same even when the mailing list sends
only one message with a huge CC or BCC list.

to summarise:

 - qmail *always* sends one message per recipient, whether it
   makes sense to do so or not.
 - postfix can do either, depending on what you tell it to do.
 - sendmail can do either, depending on what it is given to do.
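
for reference, the knob behind this in postfix is the per-transport
recipient limit, and VERP is requested when the message is submitted
(a sketch from memory - check the postfix docs; example.com is made up):

  # main.cf: how many recipients share one SMTP delivery.
  # setting it to 1 mimics qmail's one-message-per-recipient behaviour.
  smtp_destination_recipient_limit = 50

  # submit a message with VERP sender addresses, one copy per recipient:
  sendmail -V -f owner-list@example.com recipient1 recipient2 < message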


 The reason I ask is that we have a number of Qmail servers right now,
 and they are heavily loaded because they also run Apache, DNS, and
 other stuff. My idea was to get Qmail to send all email quickly to the
 pure Email box (mail relay), and have the Email box handle all the
 actual grunt work of sending to remote hosts. All servers are
 connected together by 100Mbps so no bottleneck there. We can't stop
 using Qmail on the multi-purpose servers because the whole system is
 setup and downtime is unacceptable, but do you think having the Qmail
 relay all email to the Email box, then having them actually sent would
 benefit? We're talking about 2-3 million emails per day, which the
 Qmails have done well so far but because Apache is getting loaded,
 Qmail is slowing up and the number of concurrent connections it can
 handle has been dropping.

it'll take the mail delivery load off your multi-purpose boxes, but
won't result in much faster delivery (although you'll get some benefit
simply because you're spreading the same load over more machines).

however, it won't solve the multiple-recipients-at-one-domain problem.
if qmail relays individual messages via a postfix box, then the postfix
box will have individual messages in its queue - it can't recombine
them into one message.  i.e. the damage has already been done.


 I'm *thinking* it would because then the Qmail servers would not need
 to create so many simultaneous connections to slow remote hosts
 (waiting around and stuff), and instead would be able to get email off
 faster to the Email box and thus free up load on the Qmail servers, so
 they can do other stuff more (Apache).

yep, it will get the mail off the qmail boxes ASAP, which will be some
improvement at least.

craig

-- 
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch






Re: Qmail/Postfix/Sendmail for fastest outgoing mail

2002-11-24 Thread Jason Lim
Thanks for the input, Craig.


 it'll take the mail delivery load off your multi-purpose boxes, but
 won't result in much faster delivery (although you'll get some benefit
 simply because you're spreading the same load over more machines).

 however, it won't solve the multiple-recipients-at-one-domain problem.
 if qmail relays individual messages via a postfix box, then the postfix
 box will have individual messages in it's queue - it can't recombine
 them into one message.  i.e. the damage has already been done.

I don't quite understand the part about the damage already being done.
For example, if Qmail hands Postfix, say, 10 emails (individually, as
Qmail does), and 5 of those emails are to the same domain/mailserver,
wouldn't Postfix combine them and send them at one time?

Of course, if there is a significant delay between the messages, such as 1
hour, then obviously Postfix won't wait an hour to collect enough emails
to a particular domain.  But say they were sent in succession (say 2
seconds apart), since these are mailing list servers and email from the
Qmail servers tends to go out in a batch rather than individually.
Wouldn't Postfix combine them in this situation?


  I'm *thinking* it would because then the Qmail servers would not need
  to create so many simultaneous connections to slow remote hosts
  (waiting around and stuff), and instead would be able to get email off
  faster to the Email box and thus free up load on the Qmail servers, so
  they can do other stuff more (Apache).

 yep, it will get the mail off the qmail boxes ASAP, which will be some
 improvement at least.

I was also hoping for some optimization of the actual mail sending as
well... such as what Sendmail or Postfix could offer in this case.  If what
you say above is what will happen (emails sent individually and not
combined) then in effect setting up the email server as
qmail/sendmail/postfix won't make any difference, since they would be sent
individually anyway?






Re: load average question

2002-11-24 Thread Cameron Moore
* [EMAIL PROTECTED] (Jeremy C. Reed) [2002.11.23 17:19]:
 On Fri, 22 Nov 2002, Scott St. John wrote:
  So the question is:  is anyone running a similar set up with either
  Sendmail or Posrtfix servicing 2,000+ email accounts with any
  performance issues?
 
 No performance issues using vm-pop3d, exim (MTA), apache and
 OpenWebMail with around 10,000 email accounts on similar hardware.
 
 In the past, when using qpopper with 10-15,000 accounts, I improved
 performance by using qpopper server mode.

The number of email accounts is a false indicator.  How many messages do
you receive each day?  What are your msgs/sec statistics under normal
load?  How active is the webmail application?

Could you be more specific about the disk/raid setup?

I'm asking all these questions because I'm going to be replacing a mail
server soon with 6000+ accounts receiving about 80K msgs per day.  I'm
curious to hear about other setups and the loads they can sustain.
Thanks
-- 
Cameron Moore
[ Why is the word dictionary in the dictionary? ]






Backup Web Server

2002-11-24 Thread rizal

   Can anyone please tell me how to set up a backup web server... meaning if
the primary web server fails, requests will automatically go to a separate
web server.

  ex.

 Home User - www.abc.com

  Server Unit 1 - www.abc.com : but if the unit bogs down
 it will go to,

  Server Unit 2 - www.abc.com

Is this possible?

Rizal

If you think you play too much, play more







Re: Qmail/Postfix/Sendmail for fastest outgoing mail

2002-11-24 Thread Craig Sanders
On Mon, Nov 25, 2002 at 03:01:29PM +1100, Jason Lim wrote:
  however, it won't solve the multiple-recipients-at-one-domain
  problem.  if qmail relays individual messages via a postfix box,
  then the postfix box will have individual messages in its queue -
  it can't recombine them into one message.  i.e. the damage has
  already been done.
 
 I don't quite understand that part about the damage already being
 done.  For example, if Qmail hands Postfix, say, 10 emails
 (individually, as Qmail does), and 5 of those emails are to the same
 domain/mailserver, wouldn't Postfix combine them and send them at
 one time?

nope, because postfix has no way of knowing that they were originally
the same email(*).  postfix has been handed 10 individual emails by
qmail, so it will deliver 10 individual emails.

postfix is good, but it's not magic.

(*) theoretically, it is possible to scan the headers and/or body to
determine this, but postfix doesn't do it and i doubt anyone would
implement it.  it's just not worth the dev time (or the increased size
& complexity of code) to do it just to handle this unusual corner-case.


 Of course, if there is significant delay between the messages, such as
 1 hour, then obviously Postfix won't wait an hour to collect enough
 emails to a particular domain, but say they were sent in succession
 (say 2 seconds apart), since these are mailing list servers and email
 from the Qmail servers tends to go out in a batch rather than
 individually.  Wouldn't Postfix combine them in this situation?

nope.  you might think of them as just one email, but by the time
postfix gets them they are multiple different emails.


 I was also hoping in some optimization of the actual mail sending as
 well... such as what Sendmail or Postfix could offer in this case. If
 what you say above is what will happen (emails send individually and
 not combined) then in effect setting up the email server as
 qmail/sendmail/postfix won't make any difference, since they would be
 sent individually anyway?

there would be a difference, just not the huge difference you were
hoping for.

postfix would be slightly faster than qmail, but not so much faster
that it's worth using different software for - unless part of your
purpose is to get some real (as opposed to testing) experience with
postfix to decide whether it's worth switching.

sendmail isn't a good choice if high-performance is important to you.
overall performance would probably be worse than just keeping things as
they are.  (there, that was a very politic way of saying that, wasn't it :)



if you want to fix this problem, you have to do it at the source - i.e.
replace qmail with postfix on your main boxes.  that would be a lot of
work, not something to be done lightly.

if you're not using ezmlm, you may be able to hack your list manager to
bypass local qmail and send outgoing messages via SMTP direct to the
postfix box.  this may involve hacking the list manager to talk SMTP
rather than fork /usr/sbin/sendmail, or it may involve replacing
/usr/sbin/sendmail with a wrapper script that talks SMTP.  either way,
it's not too hard.

actually, now that i think about it, i remember reading something about
getting ezmlm to work with postfix... i didn't pay much attention because
it required a qmail box as well as postfix, so it might be just right
for your situation.  search the postfix-users archive for ezmlm.

craig

-- 
craig sanders [EMAIL PROTECTED]

Fabricati Diem, PVNC.
 -- motto of the Ankh-Morpork City Watch






transfering linux to new HDD

2002-11-24 Thread Craig
Hi Guys

I have transferred a current Linux install across to a new
drive using the cp -a command.  I now need to chroot from the
existing install into the new HDD and tell lilo to boot
from there.

How do I do this from a remote location?

Thanks
Craig






Re: transfering linux to new HDD

2002-11-24 Thread CaT
On Mon, Nov 25, 2002 at 09:40:35AM +0200, Craig wrote:
 Hi Guys
 
 I have transferred a current Linux install across to a new
 drive using the cp -a command.  I now need to chroot from the
 existing install into the new HDD and tell lilo to boot
 from there.
 
 How do I do this from a remote location?

* ssh in
* mount the relevant new partitions as they would appear in the new
  setup
* chroot to the new root (/) filesystem
* make sure you have an entry for the old setup in the new lilo.conf
* lilo -v
* exit
* reboot

You might want to start screen before you do this (but after you ssh in ;).

That should do it. I don't think you'll get any interference from the
old lilo install. You may also want to play with the -R lilo option.

This is all from memory but you should be fine. Just keep cool.
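
In command form, roughly (a sketch; assuming the new root is /dev/hdc1 and
mounts on /mnt/new):

  ssh root@thebox
  screen
  mount /dev/hdc1 /mnt/new
  chroot /mnt/new /bin/sh
  # edit /etc/lilo.conf: boot=/dev/hdc, keep an entry for the old disk
  lilo -v
  exit
  reboot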

-- 
All people are equal,
But some are more equal than others.
- George W. Bush Jr, President of the United States
  September 21, 2002 (Abridged version of security speech)






Re: routing policy

2002-11-24 Thread Marc Haber
On Mon, 25 Nov 2002 11:02:26 +1100, [EMAIL PROTECTED] (Donovan
Baarda) wrote:
On Fri, Nov 22, 2002 at 07:30:49PM +0100, Marc Haber wrote:
 Add your routes in the up and down clause in /etc/network/interfaces.

Does this work for ppp, ippp and other such devices?

Not yet flawlessly.

the ISDN stuff is a mess... stuff scattered between /etc/isdn/ and /etc/ppp.

Are there ISPs that do ISDN with Linux? If not, we are offtopic here.

Greetings
Marc

-- 
-- !! No courtesy copies, please !! -
Marc Haber  |Questions are the | Mailadresse im Header
Karlsruhe, Germany  | Beginning of Wisdom  | Fon: *49 721 966 32 15
Nordisch by Nature  | Lt. Worf, TNG Rightful Heir | Fax: *49 721 966 31 29

