This is an interesting example of performance measurement being relative and
subjective...but to help drag this back to the question at hand...

I think the Application/Session caching mechanisms are stigmatized by
their legacy.  In classic ASP the Application and Session objects were
STA-based COM servers that would serialize IIS threads - an enormous
scalability bottleneck.  In .NET you still have to serialize access to
the shared state of the Application/Session objects, but you have a
chance to keep the locks more granular and thus reduce contention.  So
in a nutshell this should perform better.  But, as with any shared
state, you have to protect it - and protecting it leads to contention,
and contention leads to bottlenecks.
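To make the granularity point concrete, here is a minimal sketch of the
two approaches as they might look inside an ASP.NET page or handler.
The key names ("HitCount", "SyncRootA", etc.) are hypothetical and
assumed to be initialized in Application_Start; this is an
illustration of the pattern, not code from the thread:

```
// Coarse-grained: Application.Lock() serializes EVERY writer across
// the whole application - the classic-ASP-style bottleneck.
Application.Lock();
try
{
    Application["HitCount"] = (int)Application["HitCount"] + 1;
}
finally
{
    Application.UnLock();
}

// Finer-grained: give each independent piece of shared state its own
// lock object, so unrelated updates don't contend with each other.
// (SyncRootA/SyncRootB are hypothetical plain objects stored in
// Application state at startup, e.g. in Application_Start.)
lock (Application["SyncRootA"])
{
    Application["CounterA"] = (int)Application["CounterA"] + 1;
}
lock (Application["SyncRootB"])
{
    Application["CounterB"] = (int)Application["CounterB"] + 1;
}
```

With the second form, two requests touching CounterA and CounterB
respectively never block each other - which is exactly the chance at
finer granularity that classic ASP never gave you.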

Jim




> -----Original Message-----
> From: Ian Griffiths [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, May 07, 2002 6:09 AM
> To: [EMAIL PROTECTED]
> Subject: Re: [DOTNET] ASP.NET's Application object: does it scale?
>
>
> ...although according to Nielsen/netratings (the very authority
> Yahoo! quote to back up their figures) Yahoo! are making inflated
> claims...  That page on Yahoo!'s web site reports 237 million unique
> users worldwide, but netratings say it was under 60 million.  Yahoo!
> claim that netratings say the average US Yahoo! user spends 2 hours
> 15 minutes on Yahoo! every month, but according to netratings' own
> site it's only 1 hour 25 minutes...  (I'm using netratings' figures
> for April - possibly Yahoo!'s fortunes have slipped in the last few
> months.)
>
> Now it looks like Yahoo! are mostly aggregating their stats across
> all their sites worldwide, which would partially explain the inflated
> figures.  (And I guess the 2 hours 15 minutes figure, which was
> US-only, was just a particularly good month; or possibly this April
> was a bad month.)  So actually they don't have any one 'site' which
> is as large as these figures suggest.  I can't work out how large any
> one site is from the information available.
>
>
> What's your measure of 'traffic', by the way?  You seem to be
> suggesting that for a given number of page impressions, a mostly
> static site has more 'traffic' than a mostly dynamic site.  A dynamic
> site will be busier, sure, because it has to work harder, but that's
> not a measure of 'traffic' in any sense that I understand.
>
>
> Anyway, to try and drag this back onto the topic of the original
> question, I think the question was whether the Application object
> represented a scalability issue.  If you're getting 35 page views a
> second, then regardless of whether you think that's piffling or
> massive, it's definitely going to be enough to mean that locking on
> the Application object could be a bad idea.  In particular, you
> absolutely would not want to lock it for the whole time it takes to
> serve a page up.  (Well, duh...  But that would be a no-brainer
> solution to thread safety for accessing the Application object.
> I've seen worse...)
>
>
> --
> Ian Griffiths
> DevelopMentor
>
> ----- Original Message -----
> From: "Thomas Tomiczek" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Tuesday, May 07, 2002 10:33 AM
> Subject: Re: [DOTNET] ASP.NET's Application object: does it scale?
>
>
> Ian,
>
> Inline...
>
> -----Original Message-----
> From: Ian Griffiths [mailto:[EMAIL PROTECTED]]
> Sent: Dienstag, 7. Mai 2002 11:18
> To: [EMAIL PROTECTED]
> Subject: Re: [DOTNET] ASP.NET's Application object: does it scale?
>
> According to Microsoft, the www.microsoft.com web site gets 20
> million page views per day[1].  That's some 4 million short of your
> definition of large.
>
> *** Correct.  The MS website - and I assume you mean
> www.microsoft.com - is also very static.  From a web traffic point of
> view this is NOT a large site, sorry.  What makes MS large is the
> amount of content (80 GB) and the fact that they handle a lot of
> traffic through downloads on their site.  The web site by itself is
> not exactly a sample of a large site, and would in itself NOT need
> the bandwidth they have - here we are talking about downloads.
>
> (Remember that they have an international audience, so their usage
> patterns are not compressed into an 8 hour space.)
>
> So you're saying that www.microsoft.com is not a large web site?
>
> *** Not from a traffic point of view.
>
> In any case I think your reasoning here is suspect:
>
> *** No, MS is not a regular site - they are largely a download
> center and have a tremendous amount of content, writers etc., BUT the
> pure web serving is NOT ridiculously high.  I know a lot of sites
> that have higher traffic in terms of pages served.
>
> > 1 million page views a day, let's say over a period of 8 hours
> > (to account for spikes etc.), is 125,000 page views an hour.
>
> Judging by every web site and internet link usage graph I've ever
> looked at, you don't get uniform distributions like this.  I
> understand that you've compressed the day's traffic into working
> hours, which will give a higher 'average hits per second' figure than
> averaging it out over the whole day,
>
> *** This was my idea - actually I was getting a 3x peak.
>
> but I think you will still be underestimating the peakiness of the
> load - you usually see spikes in the morning and at lunch time.  (On
> the other hand,
>
> *** Hm, 3 times is still pretty high :-)
>
> if your web site has international readership, it will be a bit
> more spread out.  You'll have more peaks of course, but they'll be
> less high.)
>
> At peak times you should expect well over 35 page views a second.
> (Well, assuming your server can cope with the load...  A lot of
> servers slow down at lunch time.)
>
> I tend to think of web sites as being (very approximately) one of:
>
>   (1) Dead (e.g. no page views most days)
>   (2) Piffling (a few, maybe a few hundred)
>   (3) Light load (a few thousand)
>   (4) Non-trivial load
>   (5) Large
>   (6) Huge
>
> (1) through (3) would not be worth putting on their own server.
> (E.g. most personal web sites, and most small company web sites.)
> My criterion for moving into (5) is approximately when you really
> need more than one server to handle the load.  (And I mean *need*,
> as opposed to using multiple servers because it's cheaper than making
> one server work properly.  It's surprising how much bandwidth a
> single server can chuck out if the site is designed well.)  I'd say
> that a million hits a day was easily in category (5).  A million an
> hour is definitely (6), but how many of those are there?  Not even
> Microsoft get a million page views an hour on average.
>
> *** FULL agreement on this.  Actually you often go to more servers in
> order to
> * keep your SQL Server "in the back", or
> * actually handle processing in the SQL Server.
>
> *** A million an hour is NOT 5 - it is 4 or 5, but mostly 4.  Let me
> get some little numbers straight.
>
> A sample of (6): YAHOO:
> 1.32 billion page views per day on average during December 2001.
>
> Get this - that is 1320 MILLION PAGE VIEWS PER DAY.  That is roughly
> 55 million an hour.  THAT is huge.
> Reference: http://docs.yahoo.com/info/pr/investor_metrics.html
>
> Check the access numbers of Hotmail, Amazon, eBay, and there you have
> sites of (5) or (6).  I consider everything below one million an hour
> to be at most (4), UNLESS the processing is extreme.
>
> Regards
>
> Thomas
>
> You can read messages from the DOTNET archive, unsubscribe
> from DOTNET, or
> subscribe to other DevelopMentor lists at http://discuss.develop.com.
>
