Re: Frequent Out of Memory for service(config)

2019-06-05 Thread Root Kev via dovecot
Hello Aki + group.

The following is the latest dump output that I have. Does this give any
indication of what is wrong?  I am not sure what this is really showing
me...

# coredumpctl gdb -1
   PID: 14432 (config)
   UID: 0 (root)
   GID: 0 (root)
Signal: 6 (ABRT)
 Timestamp: Wed 2019-06-05 13:25:32 UTC (4h 52min ago)
  Command Line: dovecot-Pop3 Mail Service/config
Executable: /usr/libexec/dovecot/config
 Control Group: /system.slice/dovecot.service
  Unit: dovecot.service
 Slice: system.slice
   Boot ID: 9ae422871a814d699f5feb4ca52d3b69
Machine ID: 05cb8c7b39fe0f70e3ce97e5beab809d
  Hostname: <*REMOVED**>
  Coredump:
/var/lib/systemd/coredump/core.config.0.9ae422871a814d699f5feb4ca52d3b69.14432.155974113200.xz
   Message: Process 14432 (config) of user 0 dumped core.

Stack trace of thread 14432:
#0  0x7fd25a809207 raise (libc.so.6)
#1  0x7fd25a80a8f8 abort (libc.so.6)
#2  0x7fd25ac79567 fatal_handler_real (libdovecot.so.0)
#3  0x7fd25ac79651 i_internal_fatal_handler
(libdovecot.so.0)
#4  0x7fd25abdf2d9 i_fatal_status (libdovecot.so.0)
#5  0x7fd25ac9b900 pool_system_malloc (libdovecot.so.0)
#6  0x7fd25aca220a o_stream_grow_buffer
(libdovecot.so.0)
#7  0x7fd25aca2506 o_stream_add (libdovecot.so.0)
#8  0x7fd25aca3388 o_stream_file_sendv (libdovecot.so.0)
#9  0x7fd25aca03f5 o_stream_sendv_int (libdovecot.so.0)
#10 0x7fd25aca0a5c o_stream_nsendv (libdovecot.so.0)
#11 0x7fd25aca0aca o_stream_nsend (libdovecot.so.0)
#12 0x556e753617dc config_connection_input (config)
#13 0x7fd25ac91e0f io_loop_call_io (libdovecot.so.0)
#14 0x7fd25ac9384b io_loop_handler_run_internal
(libdovecot.so.0)
#15 0x7fd25ac91f16 io_loop_handler_run (libdovecot.so.0)
#16 0x7fd25ac92138 io_loop_run (libdovecot.so.0)
#17 0x7fd25ac07973 master_service_run (libdovecot.so.0)
#18 0x556e7535edc9 main (config)
#19 0x7fd25a7f53d5 __libc_start_main (libc.so.6)
#20 0x556e7535ee3b _start (config)

GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-114.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html
>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/libexec/dovecot/config...Reading symbols from
/usr/lib/debug/usr/libexec/dovecot/config.debug...done.
done.
[New LWP 14432]
Core was generated by `dovecot-Pop3 Mail Service/config'.
Program terminated with signal 6, Aborted.
#0  0x7fd25a809207 in __GI_raise (sig=sig@entry=6) at
../nptl/sysdeps/unix/sysv/linux/raise.c:55
55      return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);
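
For reference, a minimal set of gdb commands that could be run inside the
same coredumpctl session to see how much data the config process had
buffered when the allocation failed; the frame numbers are taken from the
stack trace above and may differ in an interactive session:

(gdb) bt full                # full backtrace with local variables
(gdb) frame 6                # o_stream_grow_buffer - the failing buffer growth
(gdb) info args
(gdb) frame 12               # config_connection_input - the config client request
(gdb) info locals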

Thanks,

Kevin

On Wed, May 15, 2019 at 4:34 AM Aki Tuomi 
wrote:

>
> On 13.5.2019 22.56, Root Kev via dovecot wrote:
> > Hello Group,
> >
> > We have dovecot deployed as solely a Pop3 service that is used by our
> > applications to pass mail from one application to another internally.
> > We have roughly 4 applications that connect to the Pop3 service every
> > 2 seconds, to check for new messages and pop them for processing if
> > they are present.  Depending on the site, we have between 1024-2048MB
> > of memory set for default_vsz_limit.  In all systems we see the Out of
> > memory alert several times a day. We previously did not see this at
> > all when running on CentOS6, with less memory.
> >
> > We have tried increasing the memory to the vsz_limit up to 2gb without
> > success.
> >
> > We are running on CentOS 7 servers, running dovecot 2.3.6 (7eab80676)
> > (from the dovecot repo).
> >
> > Can anyone advise any other settings that could be modified in order
> > to correct these out of memory issues?
> >
> > # dovecot -n
> > # 2.3.6 (7eab80676): /etc/dovecot/dovecot.conf
> > # OS: Linux 3.10.0-957.5.1.el7.x86_64 x86_64 CentOS Linux release
> > 7.6.1810 (Core)
> > # Hostname: ** #
> > auth_cache_size = 10 M
> > auth_verbose = yes
> > default_vsz_limit = 1 G
> > instance_name = Pop3 Mail Service
> > listen = 10.*.*.* #
> > log_path = /var/log/dovecot.log
> > login_greet

Re: Frequent Out of Memory for service(config)

2019-05-21 Thread Root Kev via dovecot
Hey Tom,

We don't run any quota service in dovecot (I just verified, just in case).
Thanks for the suggestion though.

Kevin

On Sun, May 19, 2019 at 9:21 AM Tom Sommer via dovecot 
wrote:

>
>
> On 2019-05-13 21:56, Root Kev via dovecot wrote:
>
> > Hello Group,
> >
> > We have dovecot deployed as solely a Pop3 service that is used by our
> > applications to pass mail from one application to another internally.
> > We have roughly 4 applications that connect to the Pop3 service every 2
> > seconds, to check for new messages and pop them for processing if they
> > are present.  Depending on the site, we have between 1024-2048MB of
> > memory set for default_vsz_limit.  In all systems we see the Out of
> > memory alert several times a day. We previously did not see this at all
> > when running on CentOS6, with less memory.
>
> I see this too on servers running quota-service (dunno if it is
> related).
>
> ---
> Tom
>


Re: Frequent Out of Memory for service(config)

2019-05-17 Thread Root Kev via dovecot
Hi Aki,

I put in the line that you recommended, as well as the "echo
'DAEMON_COREFILE_LIMIT="unlimited"' >> /etc/sysconfig/dovecot".
Unfortunately, the logs say that core dumps are still disabled.

May 17 13:30:17 config: Fatal: pool_system_malloc(8192): Out of memory
May 17 13:30:17 pop3(emx-echoworx): Error: Error reading configuration:
read(/var/run/dovecot/config) failed: EOF
May 17 13:30:17 config: Fatal: master: service(config): child 11530 killed
with signal 6 (core dumps disabled -
https://dovecot.org/bugreport.html#coredumps)
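
DAEMON_COREFILE_LIMIT in /etc/sysconfig/dovecot is a SysV init convention
and is typically ignored under systemd on CentOS 7.  A hedged sketch of the
systemd equivalent (the drop-in file name is arbitrary), combined with the
import_environment line Aki suggests below:

mkdir -p /etc/systemd/system/dovecot.service.d
cat > /etc/systemd/system/dovecot.service.d/core.conf <<'EOF'
[Service]
LimitCORE=infinity
EOF
systemctl daemon-reload
systemctl restart dovecot

# in dovecot.conf, per Aki's suggestion:
#   import_environment = $import_environment CORE_OUTOFMEM=1
# fs.suid_dumpable=2 may also be needed for processes that drop privileges,
# though the config process itself runs as root.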

On Wed, May 15, 2019 at 4:34 AM Aki Tuomi 
wrote:

>
> On 13.5.2019 22.56, Root Kev via dovecot wrote:
> > Hello Group,
> >
> > We have dovecot deployed as solely a Pop3 service that is used by our
> > applications to pass mail from one application to another internally.
> > We have roughly 4 applications that connect to the Pop3 service every
> > 2 seconds, to check for new messages and pop them for processing if
> > they are present.  Depending on the site, we have between 1024-2048MB
> > of memory set for default_vsz_limit.  In all systems we see the Out of
> > memory alert several times a day. We previously did not see this at
> > all when running on CentOS6, with less memory.
> >
> > We have tried increasing the memory to the vsz_limit up to 2gb without
> > success.
> >
> > We are running on CentOS 7 servers, running dovecot 2.3.6 (7eab80676)
> > (from the dovecot repo).
> >
> > Can anyone advise any other settings that could be modified in order
> > to correct these out of memory issues?
> >
> > # dovecot -n
> > # 2.3.6 (7eab80676): /etc/dovecot/dovecot.conf
> > # OS: Linux 3.10.0-957.5.1.el7.x86_64 x86_64 CentOS Linux release
> > 7.6.1810 (Core)
> > # Hostname: ** #
> > auth_cache_size = 10 M
> > auth_verbose = yes
> > default_vsz_limit = 1 G
> > instance_name = Pop3 Mail Service
> > listen = 10.*.*.* #
> > log_path = /var/log/dovecot.log
> > login_greeting = Pop3 Mail Service
> > login_trusted_networks = 10.*.*.* 10.*.*.* 10.*.*.* 10.*.*.* 10.*.*.*
> > #
> > mail_location = maildir:~/Maildir
> > namespace inbox {
> >   inbox = yes
> >   location =
> >   mailbox Drafts {
> > special_use = \Drafts
> >   }
> >   mailbox Junk {
> > special_use = \Junk
> >   }
> >   mailbox Sent {
> > special_use = \Sent
> >   }
> >   mailbox "Sent Messages" {
> > special_use = \Sent
> >   }
> >   mailbox Trash {
> > special_use = \Trash
> >   }
> >   prefix =
> > }
> > passdb {
> >   args = cache_key=#hidden_use-P_to_show#
> >   driver = pam
> > }
> > protocols = pop3
> > ssl_cert = 
> > ssl_key = # hidden, use -P to show it
> > userdb {
> >   driver = passwd
> > }
> > verbose_ssl = yes
> >
> > May 10 06:44:05 config: Fatal: pool_system_malloc(8192): Out of memory
> > May 10 06:44:05 config: Fatal: master: service(config): child 27887
> > returned error 83 (Out of memory (service config { vsz_limit=1024 MB
> > }, you may need to increase it) - set CORE_OUTOFMEM=1 environment to
> > get core dump)
>
> Can you try setting
>
> import_environment = $import_environment CORE_OUTOFMEM=1
>
> and see if it causes coredump?
>
> Aki
>
>


Frequent Out of Memory for service(config)

2019-05-13 Thread Root Kev via dovecot
Hello Group,

We have dovecot deployed solely as a Pop3 service that is used by our
applications to pass mail from one application to another internally.  We
have roughly 4 applications that connect to the Pop3 service every 2
seconds to check for new messages and pop them for processing if they are
present.  Depending on the site, we have between 1024-2048 MB of memory set
for default_vsz_limit.  On all systems we see the Out of memory alert
several times a day.  We previously did not see this at all when running on
CentOS6, with less memory.

We have tried increasing the vsz_limit up to 2 GB without success.

We are running on CentOS 7 servers, running dovecot 2.3.6 (7eab80676) (from
the dovecot repo).

Can anyone advise any other settings that could be modified in order to
correct these out of memory issues?
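
Since the out-of-memory failures in this thread all come from the config
process, one hedged option (not something suggested in the thread itself) is
to raise the limit for that one service instead of the global
default_vsz_limit, for example:

service config {
  vsz_limit = 2 G
}

Whether this actually helps depends on why the config process keeps growing,
but it avoids raising the limit for every other service at the same time.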

# dovecot -n
# 2.3.6 (7eab80676): /etc/dovecot/dovecot.conf
# OS: Linux 3.10.0-957.5.1.el7.x86_64 x86_64 CentOS Linux release 7.6.1810
(Core)
# Hostname: ** #
auth_cache_size = 10 M
auth_verbose = yes
default_vsz_limit = 1 G
instance_name = Pop3 Mail Service
listen = 10.*.*.* #
log_path = /var/log/dovecot.log
login_greeting = Pop3 Mail Service
login_trusted_networks = 10.*.*.* 10.*.*.* 10.*.*.* 10.*.*.* 10.*.*.*
#
mail_location = maildir:~/Maildir
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
}
passdb {
  args = cache_key=#hidden_use-P_to_show#
  driver = pam
}
protocols = pop3
ssl_cert = 

[no subject]

2018-11-27 Thread Root Kev
Dovecot Version: 2.3.2.1

Hello Mailing List,

We are having a random issue in a couple of our production servers, where
one of the child processes randomly dies with an out of memory error (see
below):

Nov 26 11:58:17 config: Fatal: pool_system_malloc(8192): Out of memory
Nov 26 11:58:17 pop3-login: Fatal: Error reading configuration:
read(/var/run/dovecot/config) failed: EOF
Nov 26 11:58:17 config: Fatal: master: service(config): child 29696
returned error 83 (Out of memory (service config { vsz_limit=2048 MB }, you
may need to increase it) - set CORE_OUTOFMEM=1 environment to get core dump)

We only use dovecot for internal application POP3 mail access from a
mailbox, and there are fewer than 10 connecting applications.  We have
gradually increased the vsz_limit from the default of 256 MB up to 2 GB now.
Is there anything else that should/could be changed instead of continuing
to throw memory at it?
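
One hedged way to see whether the config process leaks slowly or spikes
suddenly is to watch its virtual size over time, for example:

watch -n 60 'ps -o pid,vsz,rss,etime,args -C config'

A steady climb between restarts would point at a leak; a sudden jump would
point at a single oversized request to the config socket.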



# dovecot -n
# 2.3.2.1 (0719df592): /etc/dovecot/dovecot.conf
# OS: Linux 3.10.0-862.11.6.el7.x86_64 x86_64 CentOS Linux release 7.5.1804
(Core)
# Hostname: 
doveconf: Warning: please set ssl_dh=</etc/dovecot/dh.pem
auth_cache_size = 10 M
auth_verbose = yes
base_dir = /var/run/dovecot/
default_vsz_limit = 2 G
instance_name = EMX Pop Mailstore
listen = 
log_path = /var/log/dovecot-echo.log
login_greeting = Pop3 MailServer Ready.
login_trusted_networks = 
mail_location = maildir:~/Maildir
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
}
passdb {
  args = cache_key=%u
  driver = pam
}
protocols = pop3
ssl_cert = 

Re: pigeonhole - how to whitelist

2015-01-17 Thread Sendmail Root

I fixed it by moving the directory from the recommended
/var/lib/dovecot/sieve/after.d/
to
/tmp
If that is not the recommended resolution, please advise.
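
For what it's worth, the error quoted below complains that the delivery
user (euid=526) lacks execute permission on /var/lib/dovecot itself, so a
hedged alternative to moving the scripts to /tmp would be to open up the
original path instead, e.g.:

chmod o+x /var/lib/dovecot
chmod -R o+rX /var/lib/dovecot/sieve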


On 1/16/2015 12:45 PM, Cliff Hayes wrote:

Thanks.
That's exactly what I needed.
However, I have a permission problem.
I added the parameter to 90-sieve.conf and created the directory, but now
I get the following permission errors in maillog even though I have the
file and directory wide open with 777 permissions:

Error: yY/0JHtauVQfPgAAU+Cu/Q: sieve: failed to open sieve dir:
stat(/var/lib/dovecot/sieve/after.d/) failed: Permission denied
(euid=526(cliffhayes) egid=12(mail) missing +x perm: /var/lib/dovecot,
euid is not dir owner)


On 1/16/2015 1:33 AM, Steffen Kaiser wrote:


On Thu, 15 Jan 2015, Cliff Hayes wrote:


When new users are added we start them with a spam rule that routes
spam to their junk folder.  I don't see a way to assign priority ...
so how does a user whitelist a spam-flagged email?  Are the rules
applied in some order? Alphabetically perhaps?  If so I can name the
spam rule z-spam.


Rules have exactly one order: the order in which they appear in the Sieve
script.

But you probably mean something different. Maybe a particular Sieve
front-end that assembles the Sieve script?

See, http://wiki2.dovecot.org/Pigeonhole/Sieve/Configuration#multiscript

There is one personal script the user may change, and you can define one
or more scripts to be executed before or after the personal script. So, if
this is a Pigeonhole setup, you define the spam processing in an "after"
global script; to let the user whitelist a message, the personal script
must file the message somewhere and stop script processing, see the
paragraph after "sieve_after =" on that wiki page.
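
A minimal sketch of the personal-script rule Steffen describes, with a
hypothetical sender address; filing the message and stopping means the
global "after" spam script never acts on it:

require ["fileinto"];

# hypothetical whitelist entry
if address :is "from" "friend@example.org" {
    fileinto "INBOX";
    stop;
}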

-- Steffen Kaiser





[Dovecot] ACL and SSL

2012-11-19 Thread Dave Shariff Yadallee - System Administrator a.k.a. The Root of the Problem
Finally got Dovecot to work on ports 110 and 143.

I would like to

a) Learn about ACLs, especially on port 110, as there are still yodellaks
   that try to break in on port 110.

b) Set up separate SSL certs for imaps and pop3s.

-- 
For effective Internet Etiquette and communications read 
http://catb.org/jargon/html/T/top-post.html, http://idallen.com/topposting.html
& http://www.caliburn.nl/topposting.html




[Dovecot] Dovecot 1.x on AIX -> Dovecot 2.x on Ubuntu

2012-06-06 Thread root
We are working on migrating Dovecot 1.2.17 running on AIX 5.3 (believe it
or not!) to Dovecot 2.0.13 running on Ubuntu.  We have hundreds of users'
mboxes that we will be migrating.  My question is regarding the index
files: should we remove those after the migration, but before we open it up
to users, so Dovecot can create new ones?

I did a test migration of a single user, and Dovecot detects the
architecture change and puts out some panic errors, corrupt-file and
backtrace messages in syslog on Ubuntu.  The messages are shown below.  If
every user is going to generate these types of errors, I'm thinking maybe
it makes sense to remove all the .imap directories and let Dovecot create
new clean ones.  I realize that may slow things down for a while while
Dovecot is rebuilding the new files.
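
A hedged sketch of that cleanup, assuming the home directories sit under
/adhome as in the log below; it just removes the old index directories so
Dovecot 2.x rebuilds them from scratch on first access:

find /adhome -maxdepth 2 -type d -name .imap -prune -exec rm -r {} +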

Thanks for any info.

Jackie Hunt
Acad Computing & Networking Srvcs
Colorado State University

Jun  6 13:43:02 newlamar dovecot: imap-login: Login: user=,
method=PLAIN, rip=129.82.100.64, lip=129.82.100.124, mpid=19593, TLS
Jun  6 13:43:21 newlamar dovecot: imap-login: Login: user=,
method=PLAIN, rip=129.82.100.64, lip=129.82.100.124, mpid=19597, TLS
Jun  6 13:43:21 newlamar dovecot: imap-login: Login: user=,
method=PLAIN, rip=129.82.100.64, lip=129.82.100.124, mpid=19600, TLS
Jun  6 13:44:11 newlamar dovecot: imap(cacti): Disconnected: Logged out
bytes=107/441
Jun  6 13:44:11 newlamar dovecot: imap(cacti): Disconnected: Logged out
bytes=1676/2724868
Jun  6 13:44:11 newlamar dovecot: imap(cacti): Disconnected: Logged out
bytes=129/759
Jun  6 13:51:49 newlamar dovecot: imap-login: Login: user=,
method=PLAIN, rip=129.82.100.64, lip=129.82.100.124, mpid=19657, TLS
Jun  6 13:51:49 newlamar dovecot: imap(cacti): Error: Rebuilding index
file /adhome/cacti/.imap/INBOX/dovecot.index: CPU architecture changed
Jun  6 13:51:58 newlamar dovecot: imap-login: Login: user=,
method=PLAIN, rip=129.82.100.64, lip=129.82.100.124, mpid=19662, TLS
Jun  6 13:51:58 newlamar dovecot: imap(cacti): Error: Corrupted
transaction log file /adhome/cacti/.imap/Trash/dovecot.index.log seq
16777216: log file shrank (1428 < 6144) (sync_offset=6144)
Jun  6 13:51:58 newlamar dovecot: imap(cacti): Panic: file buffer.c: line
295 (buffer_set_used_size): assertion failed: (used_size <= buf->alloc)
Jun  6 13:51:58 newlamar dovecot: imap(cacti): Error: Raw backtrace:
/usr/lib/dovecot/libdovecot.so.0(+0x374fa) [0x7f3ada59c4fa] ->
/usr/lib/dovecot/libdovecot.so.0(+0x3753e) [0x7f3ada59c53e] ->
/usr/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7f3ada576837] ->
/usr/lib/dovecot/libdovecot.so.0(+0x35319) [0x7f3ada59a319] ->
/usr/lib/dovecot/libdovecot-storage.so.0(mail_transaction_log_file_open+0x21e)
[0x7f3ada87acee] ->
/usr/lib/dovecot/libdovecot-storage.so.0(mail_transaction_log_open+0xb8)
[0x7f3ada877a68] ->
/usr/lib/dovecot/libdovecot-storage.so.0(mail_index_open+0xe5)
[0x7f3ada860e75] ->
/usr/lib/dovecot/libdovecot-storage.so.0(index_storage_mailbox_open+0xbc)
[0x7f3ada826eac] -> /usr/lib/dovecot/libdovecot-storage.so.0(+0x5f7fb)
[0x7f3ada8417fb] -> /usr/lib/dovecot/libdovecot-storage.so.0(+0x28c4c)
[0x7f3ada80ac4c] ->
/usr/lib/dovecot/libdovecot-storage.so.0(index_storage_mailbox_enable+0x24)
[0x7f3ada827584] -> dovecot/imap(imap_status_get+0xfd) [0x7f3adacead8d] ->
dovecot/imap(cmd_status+0x182) [0x7f3adace1f92] -> dovecot/imap(+0x1105d)
[0x7f3adace405d] -> dovecot/imap(+0x11135) [0x7f3adace4135] ->
dovecot/imap(client_handle_input+0x125) [0x7f3adace4385] ->
dovecot/imap(client_input+0x65) [0x7f3adace4c35] ->
/usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x48) [0x7f3ada5a8048] ->
/usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0xa7)
[0x7f3ada5a90c7] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x28)
[0x7f3ada5a7fd8] ->
/usr/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f3ada5962c3]
-> dovecot/imap(main+0x2f4) [0x7f3adacdc544] ->
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed) [0x7f3ada1e530d]
-> dovecot/imap(+0x95d5) [0x7f3adacdc5d5]
Jun  6 13:51:59 newlamar dovecot: imap-login: Login: user=,
method=PLAIN, rip=129.82.100.64, lip=129.82.100.124, mpid=19664, TLS
Jun  6 13:51:59 newlamar dovecot: imap(cacti): Error: Transaction log file
/adhome/cacti/.imap/Trash/dovecot.index.log: marked corrupted
Jun  6 13:51:59 newlamar dovecot: imap(cacti): Error: Rebuilding index
file /adhome/cacti/.imap/Trash/dovecot.index: CPU architecture changed


Re: [Dovecot] High level of pop3 popping causing server to become unresponsive

2012-05-30 Thread Root Kev
Thierry,

Had a chance to test this change this morning, and in my test environment
it does drastically improve the ability to ssh and su during heavy pop3
load (login time dropped from 10-15 sec to 1-2 sec).  While this is better,
the popping is still slowing down authentication on the box during load.
Is there anything else that might be changed to increase speed?
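
One difference between the two configs pasted below, noted as a hedged
aside: the test instance keeps its pop3-login processes alive between
connections, while the production config does not.  Carrying that block
over is cheap to try:

service pop3-login {
  service_count = 0
}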

Just a recap: I basically have between 2-8 mailboxes on each server, and an
application that creates two connections to each mailbox, processes each
message that comes in, and then deletes each message from the pop3
mailbox.  During high-load times (500+ messages) with the settings that I
had, I was unable to reconnect to the server or su to root.  I ended up
having to stop the dovecot process and revert to popa3d (luckily I had an
ssh session still open to the box).

This is the config from my test environment:
root@devsmtp:~# dovecot -c /usr/local/etc/dovecot/dovecot-test.conf -n
# 2.1.4: /usr/local/etc/dovecot/dovecot-test.conf
# OS: Linux 2.6.32-33-generic-pae i686 Ubuntu 10.04.4 LTS ext4
auth_verbose = yes
base_dir = /var/run/dovecot-test/
disable_plaintext_auth = no
instance_name = dovecot-test
listen = 
login_greeting = Dovecot-Test for removing index
mail_location = mbox:/var/empty:INBOX=/var/mail/%u
mail_privileged_group = mail
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
}
passdb {
  driver = shadow
}
protocols = pop3
service pop3-login {
  inet_listener pop3 {
port = 130
  }
  service_count = 0
}
ssl = no
userdb {
  driver = passwd
}
protocol pop3 {
  pop3_uidl_format = %08Xu%08Xv
}


And from one of my production servers that has the issue:
# 2.1.4: /usr/local/etc/dovecot/dovecot.conf-ml
# OS: Linux 2.6.18-194.17.1.el5 x86_64 Red Hat Enterprise Linux Server
release 5.6 (Tikanga) ext3
auth_verbose = yes
base_dir = /var/run/dovecotml/
disable_plaintext_auth = no
instance_name = Popper
listen = 
login_greeting = Popper
mail_location =
mbox:/var/empty:INBOX=/opt/mailstore/spool/mail/%u:INDEX=MEMORY
mail_privileged_group = mail
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
}
passdb {
  driver = shadow
}
protocols = pop3
ssl = no
userdb {
  args = blocking=yes
  driver = passwd
}
protocol pop3 {
  pop3_uidl_format = %08Xu%08Xv
}
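
A hedged sketch of the mail_location change for the production config
above, following Thierry's advice quoted below; /opt/mailstore/indexes is
only an example path for on-disk indexes:

mail_location = mbox:/var/empty:INBOX=/opt/mailstore/spool/mail/%u:INDEX=/opt/mailstore/indexes/%u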


On Fri, May 25, 2012 at 2:24 PM, Thierry de Montaudry wrote:

> On 25 May 2012, at 20:00, Root Kev  wrote:
>
> So the best way would be to remove that part completely, or should it be
> stored somewhere on disk?
>
> Thanks again,
>
> Kevin
>
> Yes, best is to remove the ":INDEX=MEMORY" part, and it will store indexes
> in your INBOX path, which is fine.
>
> If you want to put it in a different path, you can have something like
> INDEX=/path/to/indexes/%u
>
> Regards,
>
> Thierry
>
>
>
>> Hi,
>>
>> Having a system with a third of our users on POP3 (230 of them), no
>> trouble with dovecot (v2.1.5, on CentOS 5).
>> But one thing surprises me in your config: the INDEX=MEMORY in the
>> location parameter. That means that for each POP3 connection, dovecot has
>> to read each and every mail to create the index in memory. That might be
>> why the machine becomes unresponsive.
>> Unless you have a specific reason to use memory indexes (and I would be
>> curious to know about it), I would suggest removing this and keeping
>> standard file indexes, and your performance should improve a lot.
>>
>> Regards,
>>
>>Thierry
>>
>>
>
>


Re: [Dovecot] High level of pop3 popping causing server to become unresponsive

2012-05-25 Thread Root Kev
So the best way would be to remove that part completely, or should it
be stored somewhere on disk?

Thanks again,

Kevin


> Hi,
>
> Having a system with a third of our users on POP3 (230 of them), no
> trouble with dovecot (v2.1.5, on CentOS 5).
> But one thing surprises me in your config: the INDEX=MEMORY in the
> location parameter. That means that for each POP3 connection, dovecot has
> to read each and every mail to create the index in memory. That might be
> why the machine becomes unresponsive.
> Unless you have a specific reason to use memory indexes (and I would be
> curious to know about it), I would suggest removing this and keeping
> standard file indexes, and your performance should improve a lot.
>
> Regards,
>
>Thierry
>
>


Re: [Dovecot] High level of pop3 popping causing server to become unresponsive

2012-05-24 Thread Root Kev
We currently cannot use IMAP, as the application that accesses the mailbox
is only set up to access pop3 mailboxes.  We are currently using
Dovecot 2.1.4.  Below is the majority of our config; it is mostly basic:

protocols = pop3
listen = ***Address here***
base_dir = /var/run/dovecot1/
instance_name = Popper
login_greeting = Popper
mail_location =
mbox:/var/empty:INBOX=/opt/mailstore/spool/mail/%u:INDEX=MEMORY

!include conf.d/*.conf
!include_try local.conf

disable_plaintext_auth = no
auth_mechanisms = plain
log_path = syslog
syslog_facility = mail
auth_verbose = yes
log_timestamp = "%b %d %H:%M:%S "
namespace inbox {
inbox = yes
}
mail_privileged_group = mail
lock_method = fcntl
mbox_read_locks = fcntl
mbox_write_locks = dotlock fcntl
pop3_uidl_format = %08Xu%08Xv
passdb {
  driver = checkpassword
  args = /usr/bin/checkpassword
}
userdb {
  driver = prefetch
}


Re: [Dovecot] High level of pop3 popping causing server to become unresponsive

2012-05-23 Thread Root Kev
Missed CCing on the last reply.  See below.

Also, would having Nagios check the number of messages in a mailbox cause
issues with the popping of messages?  And would having a user access the
mailbox from two different applications (e.g. from Outlook and a mobile
device) cause issues?  I am under the impression that a lock file is
generated, which should deal with the issue of contention, deleting mail,
etc.

Thanks for any information.

Kevin


Sorry for the delay in responding, long weekend in Canada...
>
> When trying to SSH into the server, the server prompts for user name then
> password, then just hangs.  The same happens if there is already a console
> connection open: when trying to su to root, it just hangs after entering
> the password.
>
> Our passwords are in the shadow file, and because this is a server
> dedicated to this task, there are only the default linux users and maybe 8
> other user accounts in the shadow file.
>
> There shouldn't be high IO, as all this box does is postfix, popa3d and
> dovecot.  This box is seeing fewer than 3000 emails a day, and only has 5
> mailboxes on it.
>
> Thanks for the continued suggestions...
>
> Kevin
>
>
> On Sat, May 19, 2012 at 3:33 PM, Timo Sirainen  wrote:
>
>> On Fri, 2012-05-18 at 09:21 -0400, Root Kev wrote:
>> > During the last time that the load went up, it became unable to login /
>> su
>> > to root for the entire period that dovecot was running, we had to kill
>> > dovecot and go back to Popa3d until the mailq was cleared up.  We are
>> > running CentOS 5.6 server.  Based on TOP running at the time the CPU
>> usage
>> > was running under 10%.  Once Dovecot was killed, we were then able to
>> log
>> > in /su again.
>>
>> Like Kelsey said, a very high disk IO might explain this, although
>> normally the login should still eventually succeed. Another thing I'm
>> wondering is if some process limit reached. How does the login/su fail,
>> does it just hang or immediately fail with some error?
>>
>> > We were under the impression that checking to shadow directly should be
>> the
>> > fastest and least amount of overhead, is any of the other ways to
>> connect
>> > have less load on authentication to PAM?
>>
>> Your passwords really are in /etc/shadow file, not LDAP/something else?
>> I don't think the problem is with authentication. Reading /etc/shadow is
>> pretty fast (unless maybe if it's a huge file) and it anyway can't block
>> login/su from working.
>>
>>
>>
>


Re: [Dovecot] High level of pop3 popping causing server to become unresponsive

2012-05-18 Thread Root Kev
The last time that the load went up, we were unable to log in / su to root
for the entire period that dovecot was running; we had to kill dovecot and
go back to Popa3d until the mailq was cleared up.  We are running a CentOS
5.6 server.  Based on top running at the time, the CPU usage was under 10%.
Once Dovecot was killed, we were then able to log in / su again.

We were under the impression that checking the shadow file directly should
be the fastest and have the least overhead; do any of the other ways to
connect have less authentication load than PAM?

Thanks,

Kevin

On Thu, May 17, 2012 at 4:57 PM, Timo Sirainen  wrote:

> On 17.5.2012, at 18.22, Root Kev wrote:
>
> > We have put Dovecot 2.1.4 on several of our production servers (CentOS,
> on
> > Dell R710, with 20GB memory, dual CPU Quad-core). We have a single
> instance
> > of Dovecot running and currently have several instances of Popa3d.  When
> > there are significant amount of popping from 2 mailboxes that dovecot
> that
> > is popping from (500+ msgs in the mailboxes), the popping of the messages
> > causes the boxes to become unresponsive.  We use another application that
> > connects to the Dovecot, downloads 2-10 messages, then processes them,
> then
> > sends the delete command to Dovecot.
>
> Unresponsive for a long time?.. What CentOS version?
>
> > When this issue occurs we are unable to become Root, or login again if we
> > close our ssh connection.  This only occures when Dovecot is doing the
> > popping.  If we only run the older Popa3d, this doesn't occur.  We
> believe
> > it is caused by the way dovecot is authenticating.
>
> Sounds like PAM is hanging. Is the (CPU) load in general high at this time?
>
> > We are using auth_mechanisms = plain
> >
> > passdb
> > drive = shadow
> >
> > usedb
> > driver = passwd
> > args = blocking=yes
>
> Using shadow/passwd directly shouldn't affect PAM at all. So this is a
> rather strange problem..


[Dovecot] High level of pop3 popping causing server to become unresponsive

2012-05-17 Thread Root Kev
Hello all,

We have put Dovecot 2.1.4 on several of our production servers (CentOS, on
Dell R710, with 20GB memory, dual quad-core CPUs). We have a single instance
of Dovecot running and currently have several instances of Popa3d.  When
there is a significant amount of popping from the 2 mailboxes that dovecot
is popping from (500+ msgs in the mailboxes), the popping of the messages
causes the boxes to become unresponsive.  We use another application that
connects to Dovecot, downloads 2-10 messages, then processes them, then
sends the delete command to Dovecot.

When this issue occurs we are unable to become root, or log in again if we
close our ssh connection.  This only occurs when Dovecot is doing the
popping.  If we only run the older Popa3d, this doesn't occur.  We believe
it is caused by the way dovecot is authenticating.

We are using auth_mechanisms = plain

passdb {
  driver = shadow
}

userdb {
  driver = passwd
  args = blocking=yes
}

If anyone could suggest what could be causing the login issue, we would
appreciate any insight into fixing it!

Thanks,

Kevin


Re: [Dovecot] POP3 Dovecot Auth CPU usage 75%+

2012-04-16 Thread Root Kev
I think my last email may have been bounced due to attachment size, so I
have put a snippet of the captures below.  The CPU usage is still going to
a high percentage when my test mailboxes are used.  Any ideas on how to
bring down the Auth CPU usage would be greatly appreciated!

Thanks,

Kevin

strace on the Auth process:

epoll_wait(13, {{EPOLLIN, {u32=150109008, u64=150109008}}}, 29, 149958) = 1
gettimeofday({1334328634, 21072}, NULL) = 0
read(29, "VERSION\t1\t1\nREQUEST\t1011351553\t3"..., 1024) = 72
time(NULL)  = 1334328634
writev(29, [{"USER\t1011351553\tservermailbox1\ts"..., 108}, {"\n",
1}], 2) = 109
gettimeofday({1334328634, 27993}, NULL) = 0
epoll_wait(13, {{EPOLLIN, {u32=149927248, u64=149927248}}}, 29, 149992) = 1
gettimeofday({1334328634, 32215}, NULL) = 0
accept(11, {sa_family=AF_FILE, NULL}, [2]) = 30
fcntl64(30, F_GETFL)= 0x2 (flags O_RDWR)
fcntl64(30, F_SETFL, O_RDWR|O_NONBLOCK) = 0
gettimeofday({1334328634, 32342}, NULL) = 0
fstat64(30, {st_mode=S_IFSOCK|0777, st_size=0, ...}) = 0
_llseek(30, 0, 0xbffd24c0, SEEK_CUR)= -1 ESPIPE (Illegal seek)
getsockname(30, {sa_family=AF_FILE,
path="/usr/local/var/run/dovecot"}, [41]) = 0
epoll_ctl(13, EPOLL_CTL_ADD, 30, {EPOLLIN|EPOLLPRI|EPOLLERR|EPOLLHUP,
{u32=150123496, u64=150123496}}) = 0
write(30, "VERSION\t1\t1\nSPID\t2093\n", 22) = 22
gettimeofday({1334328634, 32625}, NULL) = 0
epoll_wait(13, {{EPOLLIN, {u32=150123496, u64=150123496}}}, 29, 1000) = 1
gettimeofday({1334328634, 32721}, NULL) = 0
read(30, "VERSION\t1\t1\n", 1024)   = 12
gettimeofday({1334328634, 32792}, NULL) = 0
epoll_wait(13, {{EPOLLIN, {u32=150123496, u64=150123496}}}, 29, 1000) = 1
gettimeofday({1334328634, 32883}, NULL) = 0
read(30, "REQUEST\t3624009729\t3062\t16\tbe004"..., 1012) = 60
time(NULL)  = 1334328634
writev(30, [{"USER\t3624009729\tservermailbox\tsy"..., 105}, {"\n",
1}], 2) = 106
gettimeofday({1334328634, 33062}, NULL) = 0
epoll_wait(13, {{EPOLLIN|EPOLLHUP, {u32=150094520, u64=150094520}}},
29, 999) = 1
gettimeofday({1334328634, 33766}, NULL) = 0
read(28, "", 6243)  = 0
epoll_ctl(13, EPOLL_CTL_DEL, 28, {0, {u32=150094520, u64=150094520}}) = 0
close(28)   = 0
epoll_wait(13, {{EPOLLIN|EPOLLHUP, {u32=150109008, u64=150109008}}}, 29, -1) = 1
gettimeofday({1334328634, 40036}, NULL) = 0
read(29, "", 952)   = 0
epoll_ctl(13, EPOLL_CTL_DEL, 29, {0, {u32=150109008, u64=150109008}}) = 0
close(29)   = 0
gettimeofday({1334328634, 40163}, NULL) = 0
gettimeofday({1334328634, 40197}, NULL) = 0
epoll_wait(13, {{EPOLLIN|EPOLLHUP, {u32=150123496, u64=150123496}}},
29, 1000) = 1
gettimeofday({1334328634, 44007}, NULL) = 0
read(30, "", 952)   = 0
epoll_ctl(13, EPOLL_CTL_DEL, 30, {0, {u32=150123496, u64=150123496}}) = 0
close(30)   = 0
gettimeofday({1334328634, 44148}, NULL) = 0
gettimeofday({1334328634, 44184}, NULL) = 0
epoll_wait(13, {{EPOLLIN, {u32=150065544, u64=150065544}}}, 29, 1000) = 1
gettimeofday({1334328634, 52466}, NULL) = 0
read(26, "AUTH\t1\tPLAIN\tservice=pop3\tlip=17"..., 8170) = 122
gettimeofday({1334328634, 52582}, NULL) = 0
writev(12, [{"PENALTY-GET\t172.20.20.110", 25}, {"\n", 1}], 2) = 26
gettimeofday({1334328634, 52698}, NULL) = 0
epoll_wait(13, {{EPOLLIN, {u32=149924840, u64=149924840}}}, 29, 992) = 1
gettimeofday({1334328634, 52760}, NULL) = 0
read(12, "0 0\n", 424)  = 4
time(NULL)  = 1334328634
gettimeofday({1334328634, 93200}, NULL) = 0
writev(26, [{"OK\t1\tuser=servermailbox1", 24}, {"\n", 1}], 2) = 25
read(12, 0x8f36c14, 420)= -1 EAGAIN (Resource
temporarily unavailable)
gettimeofday({1334328634, 93651}, NULL) = 0
epoll_wait(13, {{EPOLLIN, {u32=150065544, u64=150065544}}}, 29, 951) = 1
gettimeofday({1334328634, 93715}, NULL) = 0
read(26, "AUTH\t2\tPLAIN\tservice=pop3\tlip=17"..., 8048) = 118
gettimeofday({1334328634, 93808}, NULL) = 0
writev(12, [{"PENALTY-GET\t172.20.20.110", 25}, {"\n", 1}], 2) = 26
gettimeofday({1334328634, 93919}, NULL) = 0
epoll_wait(13, {{EPOLLIN, {u32=149924840, u64=149924840}}}, 29, 951) = 1
gettimeofday({1334328634, 93980}, NULL) = 0
read(12, "0 0\n", 420)  = 4
time(NULL)  = 1334328634
gettimeofday({1334328634, 133578}, NULL) = 0
writev(26, [{"OK\t2\tuser=servermailbox", 23}, {"\n", 1}], 2) = 24
read(12, 0x8f36c18, 416)= -1 EAGAIN (Resource
temporarily unavailable)
gettimeofday({1334328634, 133998}, NULL) = 0
epoll_wait(13, {{EPOLLIN, {u32=149927248, u64=149927248}}}, 29, 911) = 1
gettimeofday({1334328634, 134064}, NULL) = 0
accept(11, {sa_family=AF_FILE, NULL}, [2]) = 28
fcntl64(28, F_GETFL)= 0x2 (flags O_RDWR)
fcntl64(28, F_SETFL, O_RDWR|O_NONBLOCK) = 0
gettimeofday({1334328634, 134200}, NULL) = 0
fstat64(28, {st_mode=S_IFSOCK|0777, st_size=0, ...}) = 0
_llseek(28, 0, 0xbffd24

Re: [Dovecot] POP3 Dovecot Auth CPU usage 75%+

2012-04-13 Thread Root Kev
I tried making the changes that you suggested, but it didn't seem to make a
noticeable difference.  It should be using the shadow file directly.  The
shadow file has the default Ubuntu system accounts and 16 user accounts, so
it is fairly small overall.  The nsswitch.conf file is set to the defaults:
# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.

passwd: compat
group:  compat
shadow: compat

hosts:  files dns
networks:   files

protocols:  db files
services:   db files
ethers: db files
rpc:db files

netgroup:   nis

An example of users connecting and the Auth process using a lot of CPU
(from top):

Cpu(s): 87.4%us,  8.0%sy,  0.0%ni,  2.3%id,  0.0%wa,  0.7%hi,  1.7%si,
0.0%st
Mem:   1026096k total,   533924k used,   492172k free,60340k buffers
Swap:  1757176k total,0k used,  1757176k free,   414212k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
  643 dovecot   20   0  3096 1616 1208 S 50.7  0.2  0:01.76 auth
  644 root      20   0  3096 1524 1140 S  1.3  0.1  0:00.08 auth
  642 dovenull  20   0  4276 1612 1256 S  1.0  0.2  0:00.03 pop3-login
  623 root      20   0  2704 1020  772 S  0.7  0.1  0:00.02 dovecot
  627 root      20   0  4344 2808 1056 S  0.7  0.3  0:00.03 config
  631 syslog    20   0 33916 1924 1036 S  0.3  0.2  0:01.61 rsyslogd
  696 serverma  20   0  5464 2564 2040 R  0.3  0.2  0:00.01 pop3
    1 root      20   0  2652 1604 1216 S  0.0  0.2  0:01.59 init
    2 root      20   0     0    0    0 S  0.0  0.0  0:00.00 kthreadd
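
A hedged check, assuming the auth cache statistics logging applies to this
version: with auth_cache_size already set in the config from the previous
mail, it may be worth confirming the cache is actually being hit.  Sending
USR2 to the auth process (PID 643 in the top output above) should make it
log its hit/miss counts:

kill -USR2 643
# then look for the cache statistics lines in the mail log;
# the log path depends on the syslog setup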

Thanks for any other ideas

Kevin

On Fri, Apr 13, 2012 at 7:55 AM, Timo Sirainen  wrote:

> On 12.4.2012, at 23.46, Root Kev wrote:
>
> So is it the "auth" process or "auth worker" process? What if you add:
>
> > passdb {
> >  driver = shadow
> > }
> > userdb {
> >  driver = passwd
> args = blocking=yes
> > }
>
> does that move the CPU usage from "auth" to "auth worker" process? Is it
> using the /etc/shadow and /etc/passwd files? Are they large? Have you
> enabled other weird stuff in /etc/nsswitch.conf (and were there some other
> files related to them as well?)
>
>


[Dovecot] POP3 Dovecot Auth CPU usage 75%+

2012-04-12 Thread Root Kev
Hello all,

I hope someone can help me.  I have been testing out Dovecot to switch from
popa3d, which I use at the moment.  When I get several users connecting and
disconnecting multiple times, the Dovecot process with command Auth uses
50-90% of the CPU for the period during which they are connecting.  I am
wondering if there is something that I may have misconfigured, or if there
is something that I can change so that this spike doesn't occur.

If anyone could shed some light on the issue, I would appreciate it,

Kevin

/var/mail# dovecot -n
# 2.1.4: /usr/local/etc/dovecot/dovecot.conf
# OS: Linux 2.6.32-33-generic-pae i686 Ubuntu 10.04.4 LTS ext4
auth_cache_size = 10 M
auth_verbose = yes
disable_plaintext_auth = no
instance_name = Mail Popper 1
listen = 172.20.20.222
login_greeting = Mail Popper 1 Ready
mail_location = mbox:/var/empty:INBOX=/var/mail/%u:INDEX=MEMORY
mail_privileged_group = mail
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox "Sent Messages" {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
}
passdb {
  driver = shadow
}
protocols = pop3
service pop3-login {
  service_count = 0
}
ssl = no
userdb {
  driver = passwd
}
protocol pop3 {
  pop3_uidl_format = %08Xu%08Xv
}


Re: [Dovecot] POP3 Dovecot Auth CPU usage 75%+

2012-04-12 Thread Root Kev
Hello all,


I hope someone can help me.  I have been testing out Dovecot to switch from
popa3d, which I use at the moment.  When I get several users connecting and
disconnecting multiple times, the Dovecot process with command Auth uses
50-90% of the CPU for the period during which they are connecting.  I am
wondering if there is something that I may have misconfigured, or if there
is something that I can change so that this spike doesn't occur.

 If anyone could shed some light on the issue, I would appreciate it,

 Kevin

 /var/mail# dovecot -n
 # 2.1.4: /usr/local/etc/dovecot/dovecot.conf
 # OS: Linux 2.6.32-33-generic-pae i686 Ubuntu 10.04.4 LTS ext4
 auth_cache_size = 10 M
 auth_verbose = yes
 disable_plaintext_auth = no
 instance_name = Mail Popper 1
 listen = 172.20.20.222
 login_greeting = Mail Popper 1 Ready
 mail_location = mbox:/var/empty:INBOX=/var/mail/%u:INDEX=MEMORY
 mail_privileged_group = mail
 namespace inbox {
   inbox = yes
   location =
   mailbox Drafts {
 special_use = \Drafts
   }
   mailbox Junk {
 special_use = \Junk
   }
   mailbox Sent {
 special_use = \Sent
   }
   mailbox "Sent Messages" {
 special_use = \Sent
   }
   mailbox Trash {
 special_use = \Trash
   }
   prefix =
 }
 passdb {
   driver = shadow
 }
 protocols = pop3
 service pop3-login {
   service_count = 0
 }
 ssl = no
 userdb {
   driver = passwd
 }
 protocol pop3 {
   pop3_uidl_format = %08Xu%08Xv
 }