Re: Server questions

2003-03-08 Thread Ged Haywood
Hi there,

On Fri, 7 Mar 2003, Michael Hyman wrote:

 I am not familiar with clustering
 
 Would you run a mod_perl based web site on a cluster?

If the performance and the money for the hardware are issues then
perhaps before you buy you should spend some time looking into things
like alternative system architectures, software packages, development
methods, timescales and *those* costs...

There's a lot more to it than what's the fastest machine.  Apache
isn't the fastest Web server on the planet and coding it in Perl isn't
the fastest way of implementing an algorithm.  Asking for data from
Oracle won't usually be the fastest way to get hold of it - especially
if the machine running Oracle is remote.  On the same hardware, you
might get ten times the performance from a well-tuned proxy server
setup as from a single mod_perl server.  And you might not.
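To make Ged's proxy suggestion concrete: the usual pattern is a slim
front-end Apache (mod_proxy, no mod_perl) that serves static files
itself and forwards only dynamic URLs to a heavyweight mod_perl
backend on a private port.  A minimal sketch of the front-end
configuration, with placeholder paths and port numbers, might look
like this:

```
# Slim front-end Apache: mod_proxy enabled, mod_perl NOT loaded.
# Static content is served directly from the local docroot.
DocumentRoot /var/www/static

# Only dynamic URLs are relayed to the mod_perl backend,
# which listens on a private port on the same machine.
ProxyPass        /app http://127.0.0.1:8042/app
ProxyPassReverse /app http://127.0.0.1:8042/app
```

The win is that each large mod_perl child is freed as soon as it hands
its response to the proxy, instead of being tied up spoon-feeding the
bytes to a slow client.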

Put all those things into a system and the difference between Solaris
and Linux or between PC and Sparc may well be lost in the noise.

And if you don't have a pretty good idea of where you're going with it
all before you set out, then you might not get there.  Have you any
metrics for the kinds of loads you expect to meet, and the parts of
the system which will use most resources to sustain them?  Do you know
what performance you can expect from the database under the expected
load conditions?  Is any of this under your control?

73,
Ged.



Server questions

2003-03-07 Thread Michael Hyman
Hi guys,

I have a dilemma that I need input on.

If you were to buy machines to be used as dedicated web servers, which would
you go with?

Option 1. A Sun SunFire 280R with 2 Ultra 3 processors and 4GB RAM. Run
Solaris 9

Option 2. A PC server with 2 ~2.8GHz Xeon processors and 8GB RAM. Run Linux

The prices are worlds apart and I think I will get more bang for the buck
with the PC.

The systems will connect to an Oracle server via SQL*Net and serve both
dynamic and static content, along with providing download files up to 1GB in
size. The files will be stored locally.

What I want to understand is which machine will be faster, handle more peak
load, have a longer lifespan, and still be upgradeable for a reasonable
price.

In the benchmarking we have done, we run out of RAM before CPU using Apache
1.3.27 and mod_perl, so we will load the machines heavily with RAM.

I have years of experience with Solaris and SunOS, and little with Linux,
but the learning curve seems small and easily handled. It seems to me that
Linux is more customizable than Solaris, but then Solaris comes pretty well
tuned and does not always need much tweaking.

Apache and all of our software components support both Solaris and Linux, so
we can go either way as far as that goes.

I think it comes down to a simple formula of which option gets us the most
peak and sustained performance for the least amount of money.

So, I am looking for some input as to which way you might go in my
position.

Thanks in advance for the help!!

Regards...Michael





Re: Server questions

2003-03-07 Thread Dzuy Nguyen
I always say, buy the best you can afford.
Then again, consider how many Linux PCs you can have for the price of the Sun.
Run those PCs in a web farm or cluster and that Sun can't match the
processing power and speed.
Michael Hyman wrote:
 [...]








Re: Server questions

2003-03-07 Thread Michael Hyman
I am not familiar with clustering

Would you run a mod_perl based web site on a cluster? Isn't the point of a
cluster to make a group of machines appear to be one? If so, how is that
beneficial for web services?

Thanks...Michael

- Original Message -
From: Dzuy Nguyen [EMAIL PROTECTED]
To: Modperl [EMAIL PROTECTED]
Sent: Friday, March 07, 2003 6:19 PM
Subject: Re: Server questions


 I always say, buy the best you can afford.
 Then again, consider how many Linux PCs you can have for the price of the Sun.
 Run those PCs in a web farm or cluster and that Sun can't match the
 processing power and speed.
 [...]





Re: Server questions

2003-03-07 Thread Dzuy Nguyen
Absolutely.  In this case, the cluster acts as a load-balancing solution.
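For a web site, "cluster" in this sense usually just means a load
balancer spreading requests across a set of identical mod_perl
servers, so each request lands on whichever box is free.  One
software-only way to sketch this is Apache's mod_rewrite with a
randomized map on a front-end box; the hostnames below are
placeholders:

```
# Front-end Apache: pick a backend at random per request and
# proxy the request to it ([P] requires mod_proxy).
RewriteEngine On
RewriteMap  lb  rnd:/etc/httpd/backends.map
RewriteRule ^/(.*)$  http://${lb:backends}/$1  [P,L]
```

The map file `/etc/httpd/backends.map` would contain one line naming
the pool, e.g. `backends www1|www2|www3`, and mod_rewrite picks one of
the alternatives at random on each lookup.  Hardware load balancers
and round-robin DNS achieve the same effect at different price points.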

Michael Hyman wrote:
I am not familiar with clustering

Would you run a mod_perl based web site on a cluster? Isn't the point of a
cluster to make a group of machines appear to be one? If so, how is that
beneficial for web services?
Thanks...Michael

 [...]




Apache::RPC::Server questions

2002-01-08 Thread Bruce W. Hoylman

Ciao!

Looking at RPC::XML::Server and subsequently the Apache::RPC::Server 
subclass, I see that methods are implemented by passing a code ref to a 
named or anonymous subroutine that implements the actual method logic. 
Given the persistent nature of Perl in an Apache/mod_perl environment, are 
there any inherent problems this might cause in a complex method, other 
than those caused by bad mod_perl coding practices?
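For context, registering a method with RPC::XML::Server looks roughly
like the sketch below; the method name and body are invented for
illustration, and the exact calling convention (whether the server
object is passed as the first argument) depends on whether the code is
registered as a method or a procedure, so check the module's docs:

```perl
use strict;
use warnings;
use RPC::XML::Server;

my $srv = RPC::XML::Server->new(port => 9000);

# The method body is just a code reference.  Under mod_perl this
# sub -- and anything it closes over -- lives for the life of the
# Apache child process.
$srv->add_method({
    name      => 'demo.sum',               # hypothetical method name
    signature => [ 'int int int' ],        # returns int, takes two ints
    code      => sub {
        my ($server, $a, $b) = @_;         # server object first, for a method
        return $a + $b;
    },
});
```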

The first thing I thought about is the danger of closures and the 
potential for variables and data persisting between requests.  Will 
avoiding any globally scoped variables within this method-defining 
subroutine be enough to avoid these types of problems?  Does the 
method-defining subroutine ever go out of scope during the life of a 
mod_perl process, thereby freeing lexicals and other local data and data 
structures?  Do I need to do any explicit destruction of objects before 
exiting the subroutine, for example?
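The closure concern can be shown in plain Perl, outside mod_perl
entirely: a lexical declared outside the sub but captured by it
persists across calls, which in a long-lived mod_perl child means
across requests:

```perl
use strict;
use warnings;

# $count is a lexical captured by the closure below.  In a
# persistent process this value survives from one call (request)
# to the next -- often a surprise.
my $count = 0;
my $handler = sub {
    $count++;
    return $count;
};

print $handler->(), "\n";   # 1
print $handler->(), "\n";   # 2 -- state leaked between calls
```

Lexicals declared *inside* the sub, by contrast, are fresh on every
call and are freed when the call returns, barring circular references.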

I may need to make database accesses from within the method.  Will 
Apache::DBI still work within this framework?
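As far as I understand it, Apache::DBI works by transparently
overriding DBI's connect, so the method body connects as usual and
receives the child's cached handle when the DSN and options match.  A
sketch, with a made-up Oracle DSN and credentials:

```perl
# In startup.pl, loaded before anything else uses DBI:
use Apache::DBI;

# Inside the method-defining subroutine -- this returns the child
# process's cached connection if one exists for the same DSN/options.
my ($user, $pass) = ('scott', 'tiger');    # placeholders
my $dbh = DBI->connect('dbi:Oracle:ORCL', $user, $pass,
                       { RaiseError => 1, AutoCommit => 0 });
```

Note that Apache::DBI also makes $dbh->disconnect a no-op, so the
connection stays open for the life of the child.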

The documentation indicates it is dangerous to change package namespace 
within the method defining subroutine.  Does this apply to the 'use' or 
'require' of modules within the subroutine as well?

Frankly, I'm so used to writing the actual handler subroutine and 
supporting modules that I'm feeling a little out of my element given the 
method-definition paradigm RPC::XML::Server is introducing.

Thanks for any information you might be able to provide on this matter.

Peace.