You're kidding me? 200,000 *concurrent* HTTP sessions on *one* server?
What's your session time window? 24 hours?

Seriously mate, if you want help, tell me something I would believe.

Zaid

On Nov 15, 2007 5:03 PM, Ala'a Ibrahim <[EMAIL PROTECTED]> wrote:
> well, the problem is that it's a legacy system, and we are not planning to
> do any development on it right now. We are handling about 200,000 concurrent
> sessions on that system, and the database server is already loaded
> (the apache server even more so); that's why it's being
> clustered across 2 servers. Also, the servers are on the same rack, so I don't
> think network latency would be a problem.
>
>
>
> On 11/15/07, Ammar Ibrahim <[EMAIL PROTECTED]> wrote:
> > I completely agree, I *guess* it would run much faster on a properly
> > designed database. All your queries would be based on an indexed column
> > (session_id). A nice trick would be to create an in-memory table;
> > that way you would have blazing speed, but then again, you need to study the
> > possibility of the database server going offline and the consequences.
> > Databases are much easier to scale than a filesystem-based solution. And
> > keep in mind that if you roll your own session mechanism that doesn't use a
> > database, you might run into problems that are very hard to trace/debug (race
> > conditions). You need to test properly with the load you are getting; there
> > are tools to help you simulate the number of sessions that you have.
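The indexed-lookup idea above can be sketched roughly as follows (a minimal sketch using an in-memory SQLite database; the table and column names are illustrative, not from any particular system):

```python
import sqlite3
import time

# A minimal sketch of a session table keyed by session_id.
# PRIMARY KEY on session_id gives the index every lookup uses.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sessions (
        session_id TEXT PRIMARY KEY,
        data       BLOB,
        updated_at INTEGER
    )
""")

# Write (or overwrite) a session row.
conn.execute(
    "REPLACE INTO sessions (session_id, data, updated_at) VALUES (?, ?, ?)",
    ("abc123", b"serialized-session-payload", int(time.time())),
)

# Every read is a single indexed lookup on session_id.
row = conn.execute(
    "SELECT data FROM sessions WHERE session_id = ?", ("abc123",)
).fetchone()
```

Because rows are keyed and replaced by session_id, the table only ever holds one row per live session, which is why it stays small as sessions come and go.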
> >
> > What you could also do is sharding, which is the technique of splitting
> > the sessions across more than one database. E.g. you can use a hashing
> > function to determine which database a session is stored in. As an
> > example:
> >
> > Let's assume you want to create 5 data stores for sessions. The tables might
> > be in the same database or spread across a combination of servers. To keep it
> > simple, let's say we have the tables sessions1 - sessions5 in one db. By
> > writing a function that maps each session to its corresponding
> > table, you can split your number of records by a factor
> > of 5: if you had a million sessions, on average you would have around 200K
> > records in each table. This is a very nice technique for database
> > optimization in general; the drawback is added complexity in programming, and if
> > you work on a project with less experienced developers I strongly recommend
> > against sharding your whole app. Limit it to sessions, which is
> > going to be transparent to the developers working on the project, as they will
> > be using the built-in session mechanism in PHP and don't have to worry
> > about what's going on in the background.
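The hashing function described above can be sketched like this (a minimal sketch; the shard count of 5 and the sessions1-sessions5 table names follow the example in the email, and md5 is just one illustrative choice of stable hash):

```python
import hashlib

NUM_SHARDS = 5  # tables sessions1 .. sessions5, as in the example

def shard_for(session_id: str) -> str:
    """Map a session id to one of the shard tables with a stable hash."""
    # md5 is stable across processes and servers, unlike Python's
    # built-in hash(), so every node agrees on the mapping.
    digest = hashlib.md5(session_id.encode("utf-8")).hexdigest()
    shard = int(digest, 16) % NUM_SHARDS + 1
    return f"sessions{shard}"
```

Because the hash is deterministic, every read and write for the same session id lands on the same table, and on average the records spread evenly across the shards.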
> >
> > It wouldn't be hard to benchmark and see the best fit to your problem.
> >
> > - Ammar
> >
> >
> >
> > On Nov 15, 2007 11:38 AM, Zaid Amireh < [EMAIL PROTECTED] > wrote:
> >
> > >
> > > How many concurrent sessions do you have? What's the size of a typical
> > > session file? You have to know your workload to find the best
> > > solution.
> > >
> > > Unless you have 1000+ concurrent sessions, the only load you will be
> > > imposing on the DB server is the network overhead; sessions come and
> > > go, so the table would stay small, and with some extra indexes here and
> > > there you should be golden.
> > >
> > > Then again, even with 1000+ sessions, if the table has proper indexes,
> > > I would assume it's still faster than a remote filesystem; there are too
> > > many factors to write about in this email.
> > >
> > > Zaid
> > >
> > >
> > > On Nov 15, 2007 10:12 AM, Ala'a Ibrahim < [EMAIL PROTECTED]> wrote:
> > > > Well, I'm avoiding the use of a database. I don't want to add extra
> > > > load on the DB server, and I don't want to get a new server for the
> > > > sessions.
> > > >
> > > >
> > > >
> > > > On 11/15/07, Ammar Ibrahim < [EMAIL PROTECTED]> wrote:
> > > > > Database?
> > > > >
> > > > >
> > > > >
> > > > > On Nov 15, 2007 7:48 AM, Ala'a Ibrahim < [EMAIL PROTECTED]>
> wrote:
> > > > >
> > > > > > Hi,
> > > > > > I'm trying to set up a small apache cluster. Everything works fine
> > > > > > except for PHP sessions, as they are stored on the filesystem. I
> > > > > > tried using an NFS share to hold the data; it worked fine in the
> > > > > > testing environment, but on the live servers it wasn't a good idea,
> > > > > > as the NFS share kept hanging up. So I'm thinking of using a DRBD
> > > > > > filesystem connected to all the nodes, something like a replicated
> > > > > > RAID across the servers. In the testing environment it worked fine,
> > > > > > but I wonder how it would behave on the live one. So if anyone
> > > > > > thinks this is a bad idea, has a better solution, or has some
> > > > > > comments, please share.
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > --
> > > > > >                                  Ala'a A. Ibrahim
> > > > > > http://guru.alaa-ibrahim.com/
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > >                                  Ala'a A. Ibrahim
> > > > http://guru.alaa-ibrahim.com/
> > > >
> > >
> > >
> > >
> > > --
> > > ---------------------------
> > > Netiquette -> http://www.dtcc.edu/cs/rfc1855.html
> > > Netiquette Nazi ->
> > > http://redwing.hutman.net/%7Emreed/warriorshtm/netiquettenazi.htm
> > > ---------------------------
> > >
> > >
> > >
> > >
> > >
> > >
> >
>
>
>
> --
>                                  Ala'a A. Ibrahim
> http://guru.alaa-ibrahim.com/
>
>



-- 
---------------------------
Netiquette -> http://www.dtcc.edu/cs/rfc1855.html
Netiquette Nazi ->
http://redwing.hutman.net/%7Emreed/warriorshtm/netiquettenazi.htm
---------------------------

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Jolug" group.
 To post to this group, send email to [email protected]
 To unsubscribe from this group, send email to [EMAIL PROTECTED]
 For more options, visit this group at 
http://groups.google.com/group/Jolug?hl=en-GB
-~----------~----~----~----~------~----~------~--~---
