On Sat, 2010-11-13 at 13:17 +0100, Ralf Hildebrandt wrote:
* Timo Sirainen t...@iki.fi:
There's a lot more of IPC going on now. Each process at startup connects
to config process to read configuration (vs. reading it from environment
variables). State tracking is done in anvil process
* Timo Sirainen t...@iki.fi:
Is dstat --ipc suitable for measuring/seeing what's going on?
That looks like it's about sysv IPC, which Dovecot doesn't use. Maybe
some other options would show something useful, I don't know.
Well...
Anyway, getting the rusage stats for v1.2 and comparing them
* Timo Sirainen t...@iki.fi:
might show something useful. Could you patch your v1.2 with the attached
patch
Done. It seems to work:
Nov 17 20:50:08 postamt dovecot: IMAP(stxxxke): rusage: real=38.583 user=0.4000
sys=0.80005 reclaims=485 faults=0 swaps=0 bin=0 bout=0 signals=0 volcs=23
On Wed, 2010-11-17 at 20:55 +0100, Ralf Hildebrandt wrote:
my ($type, $data) = ($1, $3);
to
my ($type, $data) = ($1, $4);
since I added another pair of ()
Just use a non-capturing group instead: (?:foo)
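The renumbering issue above can be sketched briefly. The thread's script is Perl, but the (?:...) syntax is the same in Python; the line and patterns here are illustrative, not the actual logparse.pl regex:

```python
import re

line = "rusage: real=0.51 user=0.16001 sys=0.52003"

# Adding a capturing group shifts all later group numbers, which is why
# ($1, $3) had to become ($1, $4) in the patched script.
# A non-capturing group (?:...) groups for alternation without consuming
# a group number, so the existing indices stay valid:
m = re.search(r'(?:real|elapsed)=([\d.]+) user=([\d.]+)', line)
print(m.group(1), m.group(2))
```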
--
char *t=\10pse\0r\0dtu...@ghno\x4e\xc8\x79\xf4\xab\x51\x8a\x10\xf4\xf4\xc4;
* Timo Sirainen t...@iki.fi:
There's a lot more of IPC going on now. Each process at startup connects
to config process to read configuration (vs. reading it from environment
variables). State tracking is done in anvil process (vs. master process
internally). Logging is via pipes to log
Ralf Hildebrandt put forth on 11/8/2010 12:44 PM:
* Stan Hoeppner s...@hardwarefreak.com:
Does this machine have more than 4GB of RAM? You do realize that merely
utilizing PAE will cause an increase in context switching, whether on
bare metal or in a VM guest. It will probably actually be
* Ralf Hildebrandt ralf.hildebra...@charite.de:
And I'm guessing you're running a 32bit PAE kernel because VMWare ESX
still doesn't officially support 64bit guests, correct?
No, it's supported, but I don't want to change the whole system.
That's right, we cannot switch without having
Udo Wolter put forth on 11/8/2010 4:45 AM:
* Ralf Hildebrandt ralf.hildebra...@charite.de:
And I'm guessing you're running a 32bit PAE kernel because VMWare ESX
still doesn't officially support 64bit guests, correct?
No, it's supported, but I don't want to change the whole system.
That's
* Stan Hoeppner s...@hardwarefreak.com:
Does this machine have more than 4GB of RAM? You do realize that merely
utilizing PAE will cause an increase in context switching, whether on
bare metal or in a VM guest. It will probably actually be much higher
with a VM guest running a PAE kernel.
Stan,
On 11/8/10 10:39 AM, Stan Hoeppner s...@hardwarefreak.com wrote:
However, if CONFIG_HZ=1000 you're generating WAY too many interrupts/sec
to the timer, ESPECIALLY on an 8 core machine. This will exacerbate the
high context switching problem. On an 8 vCPU (and physical CPU) machine
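A rough back-of-the-envelope check of the claim above, assuming one timer interrupt per tick per core on a periodic-tick kernel (no dynticks/NO_HZ):

```python
# Sketch: timer interrupts per second generated by a periodic-tick
# kernel across all cores.
def timer_interrupts_per_sec(config_hz: int, cores: int) -> int:
    return config_hz * cores

print(timer_interrupts_per_sec(1000, 8))  # CONFIG_HZ=1000 on 8 cores -> 8000
print(timer_interrupts_per_sec(100, 8))   # CONFIG_HZ=100 cuts that tenfold
```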
* Timo Sirainen t...@iki.fi:
Attached a script to parse and summarize the logs. In a small imaptest
run I didn't notice high system usage.
I'm trying to run the logparser, but it only emits:
postamt:~# /var/admhome/hildeb/logparse.pl /var/log/pop3d-imapd.log
type
postamt:~#
On 7.11.2010, at 18.31, Ralf Hildebrandt wrote:
I'm trying to run the logparser, but it only emits:
postamt:~# /var/admhome/hildeb/logparse.pl /var/log/pop3d-imapd.log
type
Probably your timestamps are different. Show one log line?
* Timo Sirainen t...@iki.fi:
postamt:~# /var/admhome/hildeb/logparse.pl /var/log/pop3d-imapd.log
type
Probably your timestamps are different. Show one log line?
Nov 7 19:37:17 postamt dovecot: imap(ptm-aus): Debug: rusage: real=0.51
user=0.16001 sys=0.52003 reclaims=665 faults=0
On 7.11.2010, at 18.37, Ralf Hildebrandt wrote:
* Timo Sirainen t...@iki.fi:
postamt:~# /var/admhome/hildeb/logparse.pl /var/log/pop3d-imapd.log
type
Probably your timestamps are different. Show one log line?
Nov 7 19:37:17 postamt dovecot: imap(ptm-aus): Debug: rusage: real=0.51
* Timo Sirainen t...@iki.fi:
Attached with a working regexp.
I switched a few minutes ago, back to 2.0.6
The load on the server is extremely light (it's sunday):
type    real    user    sys    reclaim    faults    swaps    bin    bout    signals    volcs    involcs
auth    38.44
* Timo Sirainen t...@iki.fi:
Nov 7 19:37:17 postamt dovecot: imap(ptm-aus): Debug: rusage: real=0.51
user=0.16001 sys=0.52003 reclaims=665 faults=0 swaps=0 bin=0 bout=0
signals=0 volcs=10 involcs=8
Attached with a working regexp.
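Timo's attached script isn't shown in the archive, but a minimal Python sketch of parsing such a rusage line (the field names come from the log line above; the regex itself is illustrative, not the thread's Perl):

```python
import re

LINE = ("Nov 7 19:37:17 postamt dovecot: imap(ptm-aus): Debug: rusage: "
        "real=0.51 user=0.16001 sys=0.52003 reclaims=665 faults=0 swaps=0 "
        "bin=0 bout=0 signals=0 volcs=10 involcs=8")

# Non-capturing group (?:...) for the service name, so the key=value
# payload is always group 1 regardless of which service matched.
m = re.search(r'(?:imap|pop3|auth)\([^)]*\): Debug: rusage: (.*)', LINE)

# Split the payload into key=value pairs and convert values to floats.
stats = {k: float(v)
         for k, v in (pair.split('=') for pair in m.group(1).split())}
print(stats['real'], stats['sys'], stats['involcs'])
```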
Hmm, consecutive calls of the program are resulting in
* Ralf Hildebrandt ralf.hildebra...@charite.de:
* Timo Sirainen t...@iki.fi:
Nov 7 19:37:17 postamt dovecot: imap(ptm-aus): Debug: rusage: real=0.51
user=0.16001 sys=0.52003 reclaims=665 faults=0 swaps=0 bin=0 bout=0
signals=0 volcs=10 involcs=8
Attached with a working regexp.
Ralf Hildebrandt put forth on 11/5/2010 4:23 AM:
Due to the ongoing performance issues with 2.0.x I switched back to
1.2.15 yesterday evening, with no changes to the machine or my users.
(I migrated from 1.2.15 to 2.0.x by converting the existing config)
Today, we have MUCH LESS load, with
* Daniel L. Miller dmil...@amfes.com:
Dunno if you ever mentioned it - or if it makes any difference - but
what configure/build options are you using for 1.2 vs 2.0? Any
difference in the compiler? Is your 1.2 a distro pre-packaged binary?
No, both have been compiled from source using these
* Stan Hoeppner s...@hardwarefreak.com:
What hardware platform? (AMD/Intel/SPARC/PPC, generation/freq)
Intel(R) Xeon(R) CPU L5335 @ 2.00GHz
What OS platform?
Debian lenny
What compiler/version?
gcc version 4.4.5 (Debian 4.4.5-2)
What threading library?
? how do I find out?
Ralf Hildebrandt put forth on 11/6/2010 9:15 AM:
* Stan Hoeppner s...@hardwarefreak.com:
What hardware platform? (AMD/Intel/SPARC/PPC, generation/freq)
Intel(R) Xeon(R) CPU L5335 @ 2.00GHz
What OS platform?
Debian lenny
What compiler/version?
gcc version 4.4.5 (Debian
* Stan Hoeppner s...@hardwarefreak.com:
Hmm. My Lenny systems have 4.3.2-2. Are you maybe using Squeeze, not
Lenny?
Yes, squeeze, sorry
I'm still using i686 systems, but I wouldn't think that would change
the version of GCC that gets installed. I'm not sure if this may be
playing a role
* Ralf Hildebrandt ralf.hildebra...@charite.de:
I'm still using i686 systems, but I wouldn't think that would change
the version of GCC that gets installed. I'm not sure if this may be
playing a role in this problem or not. What kernel version are you
running, stock Debian or rolled
Ralf Hildebrandt put forth on 11/6/2010 10:33 AM:
* Ralf Hildebrandt ralf.hildebra...@charite.de:
I'm still using i686 systems, but I wouldn't think that would change
the version of GCC that gets installed. I'm not sure if this may be
playing a role in this problem or not. What kernel
Due to the ongoing performance issues with 2.0.x I switched back to
1.2.15 yesterday evening, with no changes to the machine or my users.
(I migrated from 1.2.15 to 2.0.x by converting the existing config)
Today, we have MUCH LESS load, with the same number of logins/min.
I cannot say what
* Ralf Hildebrandt ralf.hildebra...@charite.de:
Due to the ongoing performance issues with 2.0.x I switched back to
1.2.15 yesterday evening, with no changes to the machine or my users.
(I migrated from 1.2.15 to 2.0.x by converting the existing config)
Today, we have MUCH LESS load, with
On Fri, Nov 5, 2010 at 5:58 AM, Ralf Hildebrandt
ralf.hildebra...@charite.de wrote:
I uploaded a preliminary screenshot with comments:
http://www.arschkrebs.de/bugs/dovecot.png
Unclear from your graphs what is for 2.0 and what is for 1.2
Plotting the same variable for 2.0 and 1.2 data on the
On Fri, Nov 5, 2010 at 6:23 AM, Ralf Hildebrandt
ralf.hildebra...@charite.de wrote:
* zhong ming wu mr.z.m...@gmail.com:
On Fri, Nov 5, 2010 at 5:58 AM, Ralf Hildebrandt
ralf.hildebra...@charite.de wrote:
I uploaded a preliminary screenshot with comments:
Why don't you run clamdscan on delivery? That way you only scan each email
once, not repeatedly every night until it's deleted.
-david
On 11/05/10 05:58, Ralf Hildebrandt wrote:
During the night we're using clamdscan to scan mailboxes for viruses,
this results in the big block of system
* Timo Sirainen t...@iki.fi:
On 5.11.2010, at 9.58, Ralf Hildebrandt wrote:
I uploaded a preliminary screenshot with comments:
http://www.arschkrebs.de/bugs/dovecot.png
Were you using v1.2's deliver here in left also? Or how much of a difference
did that make alone?
2.0 was indeed
* David Ford da...@blue-labs.org:
Why don't you run clamdscan on delivery?
I do.
that way you only scan each email once, not repeatedly every night
until it's deleted.
I'm only scanning directories that haven't been scanned for a long time
(I cannot scan all the boxes in one night). Main
* Ralf Hildebrandt ralf.hildebra...@charite.de:
* zhong ming wu mr.z.m...@gmail.com:
Left of switching back to 1.2.x is 2.0
Right of switching back to 1.2.x is 1.2.x
I thought "switching back to 1.2.x" was the title of that graph.
Since you know your server better I assume that you expect
On 11/05/10 08:56, Ralf Hildebrandt wrote:
I'm only scanning directories that haven't been scanned for a long time
(I cannot scan all the boxes in one night). Main purpose is to remove
freshly detected viruses/spam that wasn't in the patterns at delivery
time.
The benefit is somewhat
* David Ford da...@blue-labs.org:
on my networks, AV and anti-spam hooks are via sendmail/milter and get
called for all smtp regardless of direction which means an infected
desktop won't be able to transmit spam.
same here.
thus, running a nightly scan on mailboxes after delivery means the
On 05.11.2010 10:58, Ralf Hildebrandt wrote:
* Ralf Hildebrandt ralf.hildebra...@charite.de:
Due to the ongoing performance issues with 2.0.x I switched back to
1.2.15 yesterday evening, with no changes to the machine or my users.
(I migrated from 1.2.15 to 2.0.x by converting the existing
On 2010-11-05 8:56 AM, Ralf Hildebrandt wrote:
* David Ford da...@blue-labs.org:
why don't you run clamdscan on delivery?
I do.
On 2010-11-05 9:33 AM, Robert Schetterer wrote:
Hi Ralf, high CPU load is common with clamscan
Hmmm... maybe dovecot 2.0 is doing something different from 1.2
On 2010-11-05 9:18 AM, David Ford wrote:
snip
-d
-- Linux - freedom to build is good Please top-post and trim when
replying to my messages.
snip
David, once was funny, and even better when replying to a message from
someone who has a 'real' 'disclaimer' sig - but I sure hope you're not
* Robert Schetterer rob...@schetterer.org:
Hi Ralf, high CPU load is common with clamscan
We're not talking about the times where clamdscan is running.
It's ONLY running at night. That's why I labeled the graph accordingly.
--
Ralf Hildebrandt
Geschäftsbereich IT | Abteilung Netzwerk
* Charles Marcus cmar...@media-brokers.com:
Hmmm... maybe dovecot 2.0 is doing something different from 1.2 that
causes your *live* clamdscan at delivery time to produce the heavier load...
Clamdscan is not running at delivery time on that box, it's running on
another machine.
On my graph I
I'm wondering if the problem has to do with the way processes now do
IPC
That could very well be. Lots of time is spent in the kernel
What exactly has changed - and what kind of data are the processes
exchanging via IPCs?
And which processes are talking to each other?
--
Ralf
On 2010-11-05 10:05 AM, Ralf Hildebrandt wrote:
* Charles Marcus cmar...@media-brokers.com:
Hmmm... maybe dovecot 2.0 is doing something different from 1.2 that
causes your *live* clamdscan at delivery time to produce the heavier load...
Clamdscan is not running at delivery time on that
* Charles Marcus cmar...@media-brokers.com:
On 2010-11-05 10:05 AM, Ralf Hildebrandt wrote:
* Charles Marcus cmar...@media-brokers.com:
Hmmm... maybe dovecot 2.0 is doing something different from 1.2 that
causes your *live* clamdscan at delivery time to produce the heavier
load...
On 2010-11-05 10:15 AM, Ralf Hildebrandt wrote:
* Charles Marcus cmar...@media-brokers.com:
You plainly state that you *do* run clamdscan on delivery...
Not on this machine.
Gotcha...
--
Best regards,
Charles
On 05.11.2010 15:15, Ralf Hildebrandt wrote:
* Charles Marcus cmar...@media-brokers.com:
On 2010-11-05 10:05 AM, Ralf Hildebrandt wrote:
* Charles Marcus cmar...@media-brokers.com:
Hmmm... maybe dovecot 2.0 is doing something different from 1.2 that
causes your *live* clamdscan at
* Robert Schetterer rob...@schetterer.org:
Hi Ralf, I'm still not clear about your problem.
I understand that you do something with clam and that there is a difference
between Dovecot versions, am I right?
No. clamd is not involved.
dovecot-2.0.x : slow
dovecot-1.2.x : pretty fast
same machine,
* Ralf Hildebrandt ralf.hildebra...@charite.de:
I uploaded a preliminary screenshot with comments:
http://www.arschkrebs.de/bugs/dovecot.png
During the night we're using clamdscan to scan mailboxes for viruses,
this results in the big block of system user from 0:00 until about
08:00
Hi Ralf
Not sure how your setup is arranged, but do you perhaps have the
opportunity to do a partial upgrade and switch say only POP or only
IMAP users to 2.0? (Or only deliver?) The thought is that you might
narrow it down a little?
I'm thinking if you use a virtualisation solution
* Ed W li...@wildgooses.com:
Hi Ralf
Not sure how your setup is arranged, but do you perhaps have the
opportunity to do a partial upgrade and switch say only POP or only
IMAP users to 2.0? (Or only deliver?)
Well, why not. It's possible. It's all in place.
What I had was using pop3s imap
On Fri, 2010-11-05 at 15:08 +0100, Ralf Hildebrandt wrote:
I'm wondering if the problem has to do with the way processes now do
IPC
That could very well be. Lots of time is spent in the kernel
What exactly has changed - and what kind of data are the processes
exchanging via IPCs?
* Timo Sirainen t...@iki.fi:
There's a lot more of IPC going on now. Each process at startup connects
to config process to read configuration (vs. reading it from environment
variables).
OK
State tracking is done in anvil process (vs. master process
internally).
anvil is completely new,
On Fri, 2010-11-05 at 15:25 +, Timo Sirainen wrote:
Anyway, I'd think the used system time is owned by some process(es).
Would be interesting to know what kind of logs you get with the attached
patch (e.g. run dovecot for an hour..day, stop it, gather all logs,
count the used system times
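The counters that patch logs come from getrusage(2); the same numbers can be read for the current process from Python's standard resource module (a sketch for illustration, not the attached patch itself):

```python
import resource

# Read the getrusage(2) counters for this process: user/system CPU time,
# minor/major page faults, and voluntary/involuntary context switches --
# the same fields as the "rusage:" log lines in this thread.
ru = resource.getrusage(resource.RUSAGE_SELF)
print(f"user={ru.ru_utime:.3f} sys={ru.ru_stime:.3f} "
      f"reclaims={ru.ru_minflt} faults={ru.ru_majflt} "
      f"volcs={ru.ru_nvcsw} involcs={ru.ru_nivcsw}")
```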