Re: I/O performance issues on 2.4.23 SMP system

2004-02-03 Thread Benjamin Sherman
Thanks to all who sent comments on this. I did some more testing and 
went straight to the source for input.

<snip>
if you want to try the 4G patch then i'd suggest Andrew Morton's -mm 
tree, which has it included:

http://kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.2-rc2/2.6.2-rc2-mm2/

i've got a 2.4 backport too, included in RHEL3. (the SRPM is
downloadable.) But a patch extracted from this SRPM will likely not
apply to a vanilla 2.4 tree - there are lots of other patches and
interdependencies in it. So i'd suggest the RHEL3 kernel as-is, or the -mm 
tree in 2.6.

Ingo
</snip>
Of course, as newer kernels are released, Andrew releases newer -mm 
patches. This patch set solved the I/O problem and let me use 4GB RAM.
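For anyone who wants to try this: the -mm tree is distributed as a single
patch against the matching -rc kernel. A rough sketch of fetching and
applying it is below; the exact file names are my assumption based on the
usual kernel.org layout, so check the directory listing first.

    # sketch only -- file names assumed, not verified against the directory above
    cd /usr/src
    wget http://kernel.org/pub/linux/kernel/v2.6/linux-2.6.1.tar.bz2
    wget http://kernel.org/pub/linux/kernel/v2.6/testing/patch-2.6.2-rc2.bz2
    wget http://kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.2-rc2/2.6.2-rc2-mm2/2.6.2-rc2-mm2.bz2
    tar xjf linux-2.6.1.tar.bz2
    cd linux-2.6.1
    bzcat ../patch-2.6.2-rc2.bz2 | patch -p1     # bring 2.6.1 up to 2.6.2-rc2
    bzcat ../2.6.2-rc2-mm2.bz2 | patch -p1       # apply Andrew Morton's -mm patch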



Mark Ferlatte wrote:

Daniel Erat said on Thu, Jan 29, 2004 at 08:08:49AM -0800:

I was the poster who initiated the previous thread on this subject.  The
problem disappeared here after we went down to 2 GB of memory (although
we physically removed it from the server rather than passing the arg to
the kernel... shouldn't make a difference though, I'd imagine).  We went
straight from 4 GB to 2 GB, so I can't comment on the results of using 3
GB.
Our problem didn't seem to directly correspond with the 1 GB threshold
-- it wouldn't manifest itself until the server had allocated all 4 GB
of RAM.  After a reboot, it would be nice and speedy again for a day or
two until all the memory was being used for buffering again.


This was the behavior I saw as well.  I did a bunch of research and source
reading before actually figuring out what was going on; it wasn't a
well-documented bug for some reason... I guess there aren't that many people
running large boxes on 2.4.
This makes me think that the problems I saw with 2GB were not related to the IO
subsystem, but were something else.  Time to go play around a bit; getting
those boxes up to 2GB without having to do a kernel patch/upgrade cycle would
be nice.
M
--
Benjamin Sherman
Software Developer
Iowa Interactive, Inc
515-323-3468 x14
[EMAIL PROTECTED]


Re: I/O performance issues on 2.4.23 SMP system

2004-01-29 Thread Benjamin Sherman
 Is this problem specific to the 3ware cards? Does anyone know of any 
issues with the Highpoint 1640 SATA RAID cards?

 Any experience or recommendations with these?
No, this issue is not specific to 3ware cards. The original poster had 
a QLogic Fibre Channel card and an Adaptec SCSI controller.

--
Benjamin Sherman
Software Developer
Iowa Interactive, Inc
515-323-3468 x14
[EMAIL PROTECTED]




Re: I/O performance issues on 2.4.23 SMP system

2004-01-29 Thread Benjamin Sherman
The problem (bug) is that block device IO has to go through buffers that are
below 1GB.  The memory manager doesn't know this, so what happens is that the
IO layer requests a block of memory below 1GB, and the swapout daemon (kswapd)
then runs around like a madman trying to free pages, instead of shuffling pages
that don't need to be below 1GB to higher memory addresses.  Since many of the
pages below 1GB can't be freed (they belong to active programs), the IO
starves.
With 1GB of memory, both the IO layer and the swapout daemon are working with
the same view of memory, so the bug is concealed, and performance is good.
I have heard of people trying 2GB, and having it work, but it didn't for me.
Right, I have seen a 2GB success story.
Do you know if this is fixed in kernel 2.6.x?
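One quick way to see whether the low-memory zone (the under-1GB region the
block layer is bouncing through) is what's being exhausted is to watch the
zone counters in /proc/meminfo while the box is under I/O load. A sketch
only; the Low*/High* fields shown are the ones a 2.4 HIGHMEM kernel exports,
so adjust if your kernel labels them differently.

    # watch low vs. high memory while the box is under I/O load
    watch -n 5 'grep -E "^(LowTotal|LowFree|HighTotal|HighFree)" /proc/meminfo'

If LowFree collapses toward zero while HighFree stays large, that matches the
behavior Mark describes above.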
--
Benjamin Sherman
Software Developer
Iowa Interactive, Inc
515-323-3468 x14
[EMAIL PROTECTED]




Re: I/O performance issues on 2.4.23 SMP system

2004-01-28 Thread Benjamin Sherman
* Is the I/O patch referenced (by Ingo Molnar) available for 2.4.24?
Possibly; it's certainly not merged into 2.4.24.
Can anyone point me to the specific patch?

I've got some machines in nearly the same configuration.  What I ended up doing
was to put an `append=mem=1G' in the lilo.conf boot stanza for the kernel I
was using, and rebooted the machine in question.
This does reduce the available memory in the machine to 1GB, but solves the IO
problem.  In my case, it was much faster, even though MySQL couldn't buffer
nearly as much as with 4GB.
Thanks, Mark. I will probably try this with 3GB instead of 1GB. Did you 
try that?
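For reference, the workaround Mark describes is a one-line change to the
kernel's stanza in /etc/lilo.conf. A minimal sketch, with the image path and
label as placeholders and 3G substituted per the reply above:

    image=/boot/vmlinuz-2.4.23        # placeholder path
        label=linux-mem3g
        read-only
        append="mem=3G"               # cap visible RAM to work around the highmem I/O problem

Remember to re-run /sbin/lilo after editing the file so the change takes
effect on the next boot.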

--
Benjamin Sherman
Software Developer
Iowa Interactive, Inc
515-323-3468 x14
[EMAIL PROTECTED]




I/O performance issues on 2.4.23 SMP system

2004-01-27 Thread Benjamin Sherman
I am following up a message sent to this list:
# Subject: severe I/O performance issues on 2.4.22 SMP system
# From: Daniel Erat [EMAIL PROTECTED]
# Date: Fri, 31 Oct 2003 12:38:38 -0800
I have a server running dual 2.66GHz Xeons and 4GB RAM, in a 
Penguin Computing Relion 230S system. It has a 3ware RAID card with 3 
120GB SATA drives in RAID5. It is currently running Debian 3.0 with 
vanilla kernel 2.4.23, HIGHMEM4G=y, HIGHIO=y, SMP=y, ACPI=y. I see the 
problem whether ACPI and HT are turned off or left on.

I think my problem is perhaps the same as Mr. Erat's.

Basically, I/O on this box sucks. A good example of the problem is 
comparing the import of identical data into MySQL. On this box, importing a 
dataset takes roughly 20 minutes. On another dev server (single Athlon 
2GHz, 1GB RAM, software RAID5 over FireWire), with identical MySQL and 
dataset, the same import takes roughly 4.5 minutes.

So, I have a couple of questions because this box made it to production 
before the problem was discovered and I can't test as I'd like.
* If I were to use 64GB HIGHMEM support, would this problem go away?
  (See the config sketch below.)
* Is the I/O patch referenced (by Ingo Molnar) available for 2.4.24?
  OR is the patch going to be in the kernel anytime soon?
* Is the patch available individually? If so, where can it be found? I 
googled quite a bit but didn't find anything definite.
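For reference, the 64GB option mentioned in the first question is the PAE
variant of 2.4's high-memory support. A sketch of what the relevant .config
lines would look like (option names taken from the stock 2.4 i386 config;
this only shows the switch, it does not answer whether the bug goes away):

    # Processor type and features -> High Memory Support (64GB)
    # CONFIG_NOHIGHMEM is not set
    # CONFIG_HIGHMEM4G is not set
    CONFIG_HIGHMEM64G=y
    CONFIG_HIGHMEM=y
    CONFIG_X86_PAE=y
    CONFIG_HIGHIO=y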

Any thoughts or suggestions?
Thanks!
--
Benjamin Sherman
Iowa Interactive, Inc
[EMAIL PROTECTED]




Re: how to design mysql clusters with 30,000 clients?

2002-05-23 Thread Benjamin Pflugmann

Hi.

On Thu, May 23, 2002 at 10:44:15AM +1200, [EMAIL PROTECTED] wrote:
 At 16:02 22/05/2002 +0800, Patrick Hsieh wrote:
[...]
 1. use 3 or more mysql servers for write/update and more than 5 mysql
 servers for read-only. Native mysql replication is applied among them.
 In the mysql write servers, use 1-way replication like A -> B -> C -> A to
 keep the data consistent. But I am afraid of losing data; we
 can't take that risk, especially when our billing system relies on it.
 
 This will not work. MySQL replication does not work like that. With MySQL 
 replication you have one master and all others replicate from it. 
[...]

I beg to differ. This kind of setup has been doable since 3.23.26 and is even
mentioned in the manual as a circular master-slave relationship:

http://www.mysql.com/doc/R/e/Replication_Features.html

Of course you have to take care of the special properties of this
configuration.
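To make the circular A -> B -> C -> A idea concrete, here is a sketch of the
replication-related my.cnf settings for one node in the ring (server B, a
slave of A and the master for C). Hostnames and credentials are placeholders,
and the option names are the ones used by the 3.23/4.0 series.

    [mysqld]
    server-id       = 2            # must be unique on every server in the ring
    log-bin                        # log local updates for the next hop (C)
    log-slave-updates              # also log updates received from A, so C gets them
    master-host     = server-a.example.com
    master-user     = repl
    master-password = secret

The same pattern repeats on A and C with server-id and master-host adjusted,
so every update travels once around the ring.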

Regards,

Benjamin.

-- 
[EMAIL PROTECTED]




Re: how to design mysql clusters with 30,000 clients?

2002-05-23 Thread Benjamin Pflugmann

Hi.

On Thu, May 23, 2002 at 11:16:33PM +0800, [EMAIL PROTECTED] wrote:
 Hello Benjamin Pflugmann [EMAIL PROTECTED],
 
 This scenario is fine. But in real life, the circular master-slave
 replication will probably cause data inconsistency among the servers.

That is why I wrote that you have to take care of the special properties
(e.g. unique keys will not ensure uniqueness across all servers). If
you take care of such things, there should be no problem with data
consistency.
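A hypothetical illustration of the unique-key caveat (the invoice table,
with an AUTO_INCREMENT primary key, is assumed to exist on every node, and
both nodes accept writes):

    -- run at roughly the same time on server A and on server B
    INSERT INTO invoice (detail) VALUES ('row from A');  -- A assigns id = 1
    INSERT INTO invoice (detail) VALUES ('row from B');  -- B also assigns id = 1
    -- when each statement replicates around the ring, the other server
    -- already holds id = 1, so the slave thread stops on a duplicate-key
    -- error until someone intervenes

Partitioning the key space per server (or generating keys in the
application) is the usual way around this.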

 I wish to keep 1 copy of the shared raw data in a storage device and
 forget circular master-slave replication. If there is no locking
 problem in this scenario, then I can balance the
 insert/delete/update load onto every mysql server attached on the
 shared storage device. Idea?

No comment, because I have no experience with that.

Bye,

Benjamin.


 On Thu, 23 May 2002 16:19:53 +0200
 Benjamin Pflugmann [EMAIL PROTECTED] wrote:
[...]
  I beg to differ. This kind of setting is doable since 3.23.26 and even
  mentioned in the manual as circular master-slave relationship:
  
  http://www.mysql.com/doc/R/e/Replication_Features.html
  
  Of course you have to take care of the special properties of this
  configuration.

-- 
[EMAIL PROTECTED]




Network Design

2001-06-03 Thread A. Benjamin
Hello,

I have a network layout that I intend to put into operation. 
I am trying to make sure this will work before I start configuring 
this monster. Please offer your comments.

Here are a few hurdles I would have to overcome.
1. I do not have a static IP address from my ISP; it's dynamic.
2. Computer number 1 is on the 1st floor and the rest are
all in the basement.
3. I have no bridges, routers or switches.
4. There is one twisted-pair cable running from the basement
to computer 1, and I'd rather not run another.
5. I will attempt to use a redirection service, such
as DHS, to direct visitors to my web server.
6. I will run my own DNS servers.
7. I want to add some resilience and redundancy for
my web servers. I mentioned a primary and a secondary
web server. The primary would serve my main domain and
the other a subdomain. As I understand it, a Class C
IP address is not routable through the internet, but can I use
a machine as the secondary web server if it only has a Class C IP?

A few temporary remedies:
1. I could use a program such as DHSup to keep my hostname
pointed at my current IP address, to compensate
for the dynamic IP.
2. When I use DHS services and create a host, for example
myserver.dhs.org, and my computer's local host name is
Phoenix, I can configure my DNS server to reflect
phoenix.myserver.dhs.org.
3. If it is possible, I could sub-sub-subnet a network to
get more than one workable IP. For instance, I have
configured the following:

Network          Hosts (from - to)                    Broadcast Address
212.185.0.0      212.185.0.1   - 212.185.63.254       212.185.63.255
212.185.64.0     212.185.64.1  - 212.185.127.254      212.185.127.255
212.185.128.0    212.185.128.1 - 212.185.191.254      212.185.191.255
212.185.192.0    212.185.192.1 - 212.185.255.254      212.185.255.255
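For what it's worth, those four ranges are what you get by splitting
212.185.0.0/16 into /18 subnets (netmask 255.255.192.0), each with 16,382
usable host addresses, assuming that block is actually yours to carve up.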

Is this feasible?
Please reply with any comments that could help me improve this 
setup. Thanks for your help.





[Attachment: network_diagram]