On Wednesday 26 February 2003 08:49 am, thorsten Sideb0ard wrote:
> Hi there,
>
> I recently subscribed to the list and have been enjoying browsing the
> threads; however, I didn't think I would be posting so soon...
>
> I wonder if anyone can offer any advice...
>
> I have two machines, built from scratch,
> each comprising an Asus P4S8X motherboard, a P4 2.4GHz,
> and two IBM drives: one Deskstar 120GXP 82.3GB UDMA100 for the
> system drive and one Deskstar 180GXP 185.2GB UDMA100
> as a data drive.
> They have a clean install of Mandrake 9.0
> [EMAIL PROTECTED] root]# uname -an
> Linux morpheus.cissme.com 2.4.19-16mdkcustom #4 SMP
>
> The drives themselves seem in good shape:
> [EMAIL PROTECTED] root]# hdparm /dev/hda
>
> /dev/hda:
>  multcount    = 16 (on)
>  IO_support   =  1 (32-bit)
>  unmaskirq    =  0 (off)
>  using_dma    =  1 (on)
>  keepsettings =  0 (off)
>  readonly     =  0 (off)
>  readahead    =  8 (on)
>  geometry     = 10011/255/63, sectors = 160836480, start = 0
>
>
> [EMAIL PROTECTED] root]# hdparm /dev/hdc
>
> /dev/hdc:
>  multcount    = 16 (on)
>  IO_support   =  1 (32-bit)
>  unmaskirq    =  0 (off)
>  using_dma    =  1 (on)
>  keepsettings =  0 (off)
>  readonly     =  0 (off)
>  readahead    =  8 (on)
>  geometry     = 22526/255/63, sectors = 361882080, start = 0
>
> and seem to get good throughput:
>
> [EMAIL PROTECTED] root]# hdparm -tT /dev/hda
>
> /dev/hda:
>  Timing buffer-cache reads:   128 MB in  0.28 seconds = 457.14 MB/sec
>  Timing buffered disk reads:  64 MB in  1.40 seconds = 45.71 MB/sec
>
> [EMAIL PROTECTED] root]# hdparm -tT /dev/hdc
>
> /dev/hdc:
>  Timing buffer-cache reads:   128 MB in  0.28 seconds = 457.14 MB/sec
>  Timing buffered disk reads:  64 MB in  1.18 seconds = 54.24 MB/sec
>
>
> Running Bonnie++ also confirms these results, and in addition tells me
> that I get about 23MB/sec on rewrite speeds.
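>
> For anyone wanting to reproduce: the runs were along the lines of the
> command below. Exact flags are from memory, so treat them as
> approximate; -s is sized at about twice RAM so the page cache can't
> hide the disks.
>
>   bonnie++ -d /space/scratch.backup -s 3000 -u root  # -s in MB; flags approximate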
>
> All seems pretty good, until I start trying to move any large amount
> of data around on the drives while also having mysql and apache running.
>
>
> One of the systems is still not in use, so I can use it as a test box.
> When I copy 1.7GB of data from hda -> hdc,
> it doesn't seem like much load:
>
> [EMAIL PROTECTED] root]# time cp *.gz /space/scratch.backup/
> 0.13user 17.59system 1:44.22elapsed 17%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (130major+23minor)pagefaults 0swaps
>
> Output of top in non-idle mode:
>
>   4:02pm  up 8 days, 22:35,  6 users,  load average: 0.97, 0.35, 0.12
> 86 processes: 83 sleeping, 3 running, 0 zombie, 0 stopped
> CPU states:  0.5% user, 25.2% system,  0.0% nice, 74.1% idle
> Mem:  1551472K av, 1521556K used,   29916K free,       0K shrd,   68920K buff
> Swap: 2097136K av,  154508K used, 1942628K free                 1299612K cached
>
>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
>   380 root      18   0   512  512   440 R    15.3  0.0   0:17 cp
>   421 thorsten  12   0  1044 1044   820 R     0.5  0.0   0:00 top
>   383 thorsten   9   0  1816 1816  1628 R     0.0  0.1   0:00 sshd
>
>
> And here is a sample of vmstat 1 while the copy is running:
>    procs                      memory    swap          io     system         cpu
>  r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
>  1  1  1 154508  31488  69112 1297788   0   0 16000 36752  989   719   0  29  71
>  0  1  0 154508  31748  69064 1297580   0   0 16128 40648 1023   851   0  25  75
>  1  0  0 154508  31080  69060 1298236   0   0 16128 36692  928   695   1  26  73
>  1  0  0 154508  31044  69068 1298280   0   0 16768 36632  998   765   0  28  72
>  1  0  0 154508  31004  69036 1298588   0   0 21888 32592  973   922   0  31  69
>  0  1  1 154508  31084  68928 1298468   0   0 17664 40800 1004   774   2  29  69
>  1  0  0 154508  31348  68856 1298296   0   0 17792 32596  947   766   0  25  75
>  1  0  0 154508  31012  68892 1298652   0   0 20480 32592  958   884   1  26  73
>  0  1  1 154508  30488  68960 1299100   0   0 18048 36692  939   799   0  30  70
>  1  0  1 154508  29728  68968 1299748   0   0 16128 40932 1034   722   1  20  79
>  0  1  0 154508  30192  68940 1299324   0   0 16000 36492  971   834   1  24  75
>  0  1  0 154508  30060  68928 1299464   0   0 16128 38696  933   735   0  23  77
>  0  1  0 154508  29908  68920 1299612   0   0 16128 34544  943   656   0  28  72
>
>
> However, on the box that is also running mysql and httpd at the same time:
>
> [EMAIL PROTECTED] root]# time cp *.gz /space/scratch.backup/
> 0.31user 15.43system 3:27.27elapsed 7%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (130major+23minor)pagefaults 0swaps
>
>
> Non-idle top output:
>
>   4:26pm  up 27 days,  3:03,  6 users,  load average: 7.97, 4.42, 2.42
> 419 processes: 418 sleeping, 1 running, 0 zombie, 0 stopped
> CPU states: 18.0% user, 58.9% system,  0.0% nice, 23.0% idle
> Mem:  1551472K av, 1520456K used,   31016K free,       0K shrd,   17532K buff
> Swap: 2097136K av,   39756K used, 2057380K free                 1129820K cached
>
>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
>  5120 thorsten  14   0  1332 1332   820 R    25.3  0.0   3:23 top
>  5548 root      10   0   512  512   440 D    10.9  0.0   0:09 cp
> 24561 www        9   0 29456  10M  6172 D     0.0  0.6   0:41 httpd
>  5294 mysql      9   0  229M 226M  1184 D     0.0 14.9   0:00 mysqld
>
>
>    procs                      memory    swap          io     system         cpu
>  r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
>  1  8  1  42636  31616  17280 1133684   0   0 10132 15848  729   563  11  17  73
>  0 10  1  42632  31200  17332 1134084   0   0   412 11816  443   185   1   5  94
>  1  6  1  42620  30992  17416 1134344   0   0 15200 10684  725   746   5  28  67
>  3  7  0  42620  32016  17488 1133476   0   0  6276 18580  713   422  15  84   1
>  4  3  1  42620  31060  17496 1134468   0   0  6180  8080  604   411   9  91   0
>  3  5  1  42620  31060  17528 1133944   0   0 11952 18676  793   513  14  86   0
>  2  8  1  42616  30500  17560 1134116  32   0   204 13876  497   248   4  17  80
>  2  8  1  42616  31264  17576 1132264   0   0 20576 15700  966   967  30  35  36
>  1  5  0  42616  31404  17648 1133588   0   0  3528 23448 1052   495  13  13  75
>  0  6  1  42616  31332  17676 1133400   0   0 16860 18296  851   740   4  18  78
>  5  2  0  42600  30752  17636 1134312  56   0  5528 14280  646   320   4  76  20
>  2  3  0  42600  31340  17800 1133148   0   0  8264 11276  674   516  16  84   0
>  4  6  0  42600  31352  17808 1132960   0   0  5236     0  593   545  30  70   0
>  2  2  0  42600  32028  17808 1131484   0   0  6772     0  857   806  42  57   1
>  3  3  0  42600  31360  17808 1130852   0   0  3440     0  638   629  40  60   0
>
> And it gets more extreme if I move a large directory (4.6GB).
>
> Quick snapshot before the move:
> 5:12pm  up 27 days,  3:49,  6 users,  load average: 1.25, 1.08, 1.11
>
>
> During move:
>   5:16pm  up 27 days,  3:53,  6 users,  load average: 7.57, 3.93, 2.21
> 369 processes: 368 sleeping, 1 running, 0 zombie, 0 stopped
> CPU states: 28.7% user, 57.8% system,  0.0% nice, 13.3% idle
> Mem:  1551472K av, 1519628K used,   31844K free,       0K shrd,   31804K buff
> Swap: 2097136K av,   48888K used, 2048248K free                 1120272K cached
>
>   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
>  5120 thorsten  17   0  1332 1332   820 R    25.0  0.0  15:00 top
>  6212 root       9   0   688  688   464 D     8.3  0.0   0:16 mv
>  6302 mysql      9   0  233M 222M  1500 D     0.6 14.6   0:00 mysqld
>  3871 mysql      9   0  233M 222M  1500 D     0.2 14.6   0:08 mysqld
>  6257 mysql      9   0  233M 222M  1500 D     0.2 14.6   0:00 mysqld
>  6194 mysql      9   0  233M 222M  1500 D     0.1 14.6   0:00 mysqld
>  6290 mysql      9   0  233M 222M  1500 D     0.1 14.6   0:00 mysqld
>
>    procs                      memory    swap          io     system         cpu
>  r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
> 12  5  1  67424  31140  24336 1127380   0   0 12768  8104  915   837  36  63   1
>  3 17  1  67424  32564  24376 1125796   0   0  3956 16064  739   431  46  54   0
>  6  3  0  67552  32324  24348 1124728   0   0  9244 12160  753   606  41  58   1
>  3 11  1  68960  32856  24572 1127092  32 1528  9764 14744  970   775  42  58   0
>  1 16  1  68960  32296  24604 1127500   0   0   412 12176  482   178  16  84   0
>  0 14  0  68960  32808  24620 1127760   0   0   260  4980  420   312  10   8  83
>  2  8  1  69600  30464  24636 1129840  20   0 18088 15224 1021  1025  22  32  46
>  0 16  0  69600  31668  24704 1128148   0   0   140 14788  513   208   3   9  88
>  1  8  0  69600  31892  24832 1128680   0   0   412  5636  410   283   2   4  94
>  2 10  1  69600  32488  24996 1128272   0   0 11736  8212 1136  1260  11  39  50
>
>
> Huh. I'm looking over my figures and nothing looks too terrible on paper,
> but my real problem is the user-facing side: all our web sites rely on
> the backend mysql server, which becomes effectively unusable, with pages
> taking several minutes to load.
>
> I may not be coherent enough to be of much help to myself. :) I've been
> trying to track this down for days now, so if anyone has any advice as to
> what to look for next, it would be much appreciated.
>
> thanks,
> thorsten

Offhand, I would expect you have hit a condition in a filesystem.  You say 
"move"... does that mean delete as well?  If so, and if you are using XFS, 
that is normal.  XFS seems to do something really interesting on deletion, 
perhaps some form of defragmenting of free space.
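
A quick way to see whether the delete path alone is the culprit: create a
pile of throwaway files on the data drive and time their removal, while
watching what happens to mysql.  A rough sketch (paths and sizes are
arbitrary, adjust to taste):

  # make ~2GB of scratch files, then time deleting them
  mkdir /space/deltest && cd /space/deltest
  for i in `seq 1 500`; do dd if=/dev/zero of=f$i bs=1024k count=4 2>/dev/null; done
  cd / && time rm -rf /space/deltest

If the rm stalls mysql the way the mv does, the filesystem's deletion
behavior is the thing to chase.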

If you are using ext2, I would be shocked by this behavior, but the journaling 
filesystems do have additional overhead.
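
One side note, unrelated to journaling: your hdparm output shows unmaskirq
off on both drives.  Turning it on sometimes helps interactivity under heavy
disk load, though it is risky on a few older chipsets, so treat it as an
experiment rather than a fix:

  hdparm -u1 /dev/hda /dev/hdc  # let other interrupts through during disk service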

Let's get a little better description of your setup and see if someone can 
reproduce the behavior.
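
For starters, the output of something like the following (all standard
tools) would tell us which filesystems, mount options, and drive modes are
in play, and whether queries pile up blocked while a copy runs:

  df -T                   # filesystem type per mount
  cat /etc/fstab          # mount options as configured
  hdparm -i /dev/hda      # drive identification and negotiated modes
  mysqladmin processlist  # needs appropriate credentials

Plus a vmstat 1 capture during the mv, which you already have.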

Civileme

