On Fri, May 24, 2002 at 08:01:35PM +0200, Nicolas Bougues wrote:
On Fri, May 24, 2002 at 01:42:12PM -0400, Jeff S Wheeler wrote:
While he may still need a large amount of DB muscle for other things,
using PHP/MySQL sessions for a site that really expects to have 30,000
different HTTP
Hello Nicolas Bougues [EMAIL PROTECTED],
I'd like to discuss the NFS server in this network scenario.
Say, if I put a Linux-based NFS server as the central storage device and
attach all web servers, as well as the single MySQL write server,
over 100Base-T Ethernet. When encountering 30,000
Hello
I have a similar MySQL scenario, but I can't put the MySQL data on
NFS because of locking and cache problems. My NFS filer is a
NetworkAppliance too, and my MySQL carries very heavy traffic. I'm studying
the possibility of moving to a SAN with Fibre Channel cards. We will buy
Everything I've heard about running MySQL on NFS has been
negative. If you do want to try it, though, keep in mind that
100 Mbit/s Ethernet will give you at most 12.5 MByte/s (less in
practice) of I/O performance. GigE cards are cheap these days, as are
switches with a few GigE ports.
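The arithmetic behind that figure can be sketched as follows; the 6% overhead fraction in the second call is an illustrative assumption for framing/IP/TCP cost, not a measurement from the thread:

```python
# Back-of-envelope check: 100 Mbit/s Ethernet can move at most
# 12.5 MByte/s of payload, and real NFS throughput is lower still
# once protocol overhead is subtracted.
def max_throughput_mbytes(link_mbits, overhead_fraction=0.0):
    """Theoretical payload rate of a link in MByte/s."""
    return link_mbits / 8 * (1 - overhead_fraction)

print(max_throughput_mbytes(100))         # 12.5 MByte/s, the raw ceiling
print(max_throughput_mbytes(1000, 0.06))  # GigE with an assumed ~6% overhead
```

Even the pessimistic GigE number is roughly nine times the 100 Mbit ceiling, which is the point being made about cheap GigE ports.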
Ideas?
No one mentioned it before, but if you can choose a different
database, i.e. PostgreSQL, then there is a Debian-packaged
program called dbbalancer (at least in woody). But as the
author states, for now it works only with Postgres.
http://dbbalancer.sourceforge.net
JA
On Wednesday 22 May 2002 18:44, Dave Watkins wrote:
At 16:02 22/05/2002 +0800, Patrick Hsieh wrote:
Hello list,
I am expecting to have 30,000 HTTP clients visiting my website at the
same time. To meet the HA requirement, we use dual firewall, dual
Layer-4 switch and multiple web
I don't know if anyone else who followed up on this thread has ever
implemented a high-traffic web site of this calibre, but the original
poster is really just trying to band-aid a poor session-management
mechanism into working at traffic levels it wasn't intended for.
While he may still
On Fri, May 24, 2002 at 01:42:12PM -0400, Jeff S Wheeler wrote:
While he may still need a large amount of DB muscle for other things,
using PHP/MySQL sessions for a site that really expects to have 30,000
different HTTP clients at peak instants is not very bright. We have
cookies for this.
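A minimal sketch of the signed-cookie idea Jeff is alluding to, assuming a shared secret deployed to every web server behind the load balancer; the SECRET value, payload format, and function names are illustrative, not anything from the thread:

```python
# Keep small per-user state in an HMAC-signed cookie so that no
# session lookup hits MySQL on every request. Any web server in the
# pool can verify the cookie without shared session storage.
import base64
import hashlib
import hmac
import json

SECRET = b"change-me"  # illustrative; must be identical on all web servers

def make_cookie(data):
    """Serialize a small dict into a tamper-evident cookie value."""
    payload = base64.urlsafe_b64encode(json.dumps(data).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def read_cookie(cookie):
    """Return the dict if the signature checks out, else None."""
    payload, _, sig = cookie.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged cookie: fall back to login
    return json.loads(base64.urlsafe_b64decode(payload))
```

The trade-off is that the state must stay small and non-secret (it is signed, not encrypted), but it removes the per-request database round trip entirely.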
Hi.
On Thu, May 23, 2002 at 10:44:15AM +1200, [EMAIL PROTECTED] wrote:
At 16:02 22/05/2002 +0800, Patrick Hsieh wrote:
[...]
1. use 3 or more mysql servers for write/update and more than 5 mysql
servers for read-only. Native mysql replication is applied among them.
In the mysql write
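The write/read split in point 1 can be sketched as a tiny query router; the host names and the SELECT-only heuristic are illustrative assumptions (a real classifier would also treat SHOW, EXPLAIN, etc. as reads, and would manage actual connections):

```python
# Route writes round-robin over the master pool and reads round-robin
# over the read-only replicas, per the 3-writer / 5-reader layout above.
import itertools

class SplitRouter:
    def __init__(self, writers, readers):
        self._writers = itertools.cycle(writers)
        self._readers = itertools.cycle(readers)

    def route(self, sql):
        # Crude classification: anything that is not a SELECT is a write.
        is_read = sql.lstrip().upper().startswith("SELECT")
        return next(self._readers if is_read else self._writers)

router = SplitRouter(["w1", "w2", "w3"], ["r1", "r2", "r3", "r4", "r5"])
```

In PHP this logic would typically live in a small DB wrapper so application code never picks a server directly.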
Hello Benjamin Pflugmann [EMAIL PROTECTED],
This scenario is fine. But in real life, the circular master-slave
replication will probably cause inconsistency of the data among the
servers. I would rather keep one copy of the shared raw data on a
storage device and forget circular master-slave replication. If there
Hi.
On Thu, May 23, 2002 at 11:16:33PM +0800, [EMAIL PROTECTED] wrote:
Hello Benjamin Pflugmann [EMAIL PROTECTED],
This scenario is fine. But in real life, the circular master-slave
replication will probably cause inconsistency of data among them.
That is why I wrote you have to take care
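One concrete thing to take care of in a circular ring is colliding auto-increment keys. Later MySQL releases (5.0+, i.e. after this thread) added server variables for exactly this; at the time of writing, the same effect required application-side key allocation. A hypothetical my.cnf fragment for server 1 of the 3 write masters:

```
# my.cnf sketch (requires MySQL 5.0+; these variables did not exist in 2002)
[mysqld]
auto_increment_increment = 3   # step by the number of masters in the ring
auto_increment_offset    = 1   # unique per server: 1, 2, 3
```

With disjoint key sequences, concurrent inserts on different masters can no longer generate the same primary key, which removes one common source of ring inconsistency.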
On Wednesday 22 May 2002 18:44, Dave Watkins wrote:
At 16:02 22/05/2002 +0800, Patrick Hsieh wrote:
Hello list,
I am expecting to have 30,000 HTTP clients visiting my website at the
same time. To meet the HA requirement, we use dual firewall, dual
Layer-4 switch and multiple web servers in
Hello list,
I am expecting to have 30,000 HTTP clients visiting my website at the
same time. To meet the HA requirement, we use dual firewall, dual
Layer-4 switch and multiple web servers in the backend. My problem is,
if we use the user-tracking system with apache, php and mysql, it will
surely
At 16:02 22/05/2002 +0800, Patrick Hsieh wrote:
Hello list,
I am expecting to have 30,000 HTTP clients visiting my website at the
same time. To meet the HA requirement, we use dual firewall, dual
Layer-4 switch and multiple web servers in the backend. My problem is,
if we use the user-tracking