Re: [Dovecot] Can`t get over 1024 processes on FreeBSD - possible bug?

2012-09-24 Thread Angel L. Mateo

On 21/09/12 11:32, Tomáš Randa wrote:

Hello,

I still cannot get dovecot running with more than 1000 processes, but the
hard limit is 8192 per user on the box. I tried everything, including
modifying dovecot's startup script to set ulimit -u 8192. Could it be
some dovecot bug, or a dovecot/FreeBSD bug?
I also tried to set client_limit=2 in the imap service to serve more imap
clients per process, but I still go over 1000 processes, with the kernel
message:

maxproc limit exceeded by uid 89


Could anybody help? Many thanks Tomas


Hi,

	I don't know BSD, but we had a similar problem with Linux: when we 
reached 1024 processes, no more processes were created, and we had errors 
like imap-login: Panic: epoll_ctl(add, 6) failed: Invalid argument.


	If this is the same case as yours, you can find more info at 
http://www.dovecot.org/list/dovecot/2012-July/067014.html
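
For completeness: the kernel limit aside, Dovecot's own per-service process
limits can also cap how many processes get spawned. A hedged dovecot.conf
sketch (the values are illustrative only, not recommendations):

```
# Illustrative values -- tune to your box.
default_process_limit = 2048
service imap {
  # Per-service override; raise only the service that hits the cap.
  process_limit = 4096
}
```

After a reload, doveconf default_process_limit shows the active value.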


--
Angel L. Mateo Martínez
Sección de Telemática
Área de Tecnologías de la Información
y las Comunicaciones Aplicadas (ATICA)
http://www.um.es/atica
Tfo: 868887590
Fax: 86337


[Dovecot] dsync

2012-09-24 Thread Emmanuel Dreyfus
Hi 

Testing dsync, things go wrong:
doveadm sync -u user remote:r...@mail2.example.net
dsync-local(user): Error: Mailboxes don't have unique GUIDs: 
  72e3be2c6f203b50883c44af56a8 is shared by RT and 
  RT_72e3be2c6f203b50883c44af56a8

Obviously RT_72e3be2c6f203b50883c44af56a8 is an outdated copy of RT,
but .mailboxlist does not list that mailbox. Is there a trick to make
sure dsync only uses valid mailboxes?

I have this in dovecot.conf
mail_location = mbox:~/mail:INBOX=/var/mail/%u:INDEX=/mail/indexes/%u:SUBSCRIPTIONS=../.mailboxlist
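
As a quick sanity check before syncing, one can look for mbox files whose
names carry a long hex suffix like the stale RT copy above. This is only an
assumption-laden helper (the naming pattern is guessed from the error
message), demonstrated on a temp directory rather than a real ~/mail:

```shell
# Create a stand-in mail directory with a valid mailbox and a stale
# GUID-suffixed copy, then list names ending in a long hex suffix.
maildir=$(mktemp -d)
touch "$maildir/RT" "$maildir/RT_72e3be2c6f203b50883c44af56a8"
ls "$maildir" | grep -E '^.+_[0-9a-f]{20,}$'
# prints: RT_72e3be2c6f203b50883c44af56a8
```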

Another problem, that may or may not be related:
dsync-local(user): Error: Next message unexpectedly corrupted in mbox file /home/user/mail/RT at 60298748
dsync-local(user): Error: Failed to sync mailbox RT: Timeout while waiting for lock
dsync-local(user): Error: Next message unexpectedly corrupted in mbox file /home/user/mail/RT at 63587421
dsync-local(user): Error: Failed to sync mailbox RT: Mailbox GUIDs are not permanent without index files

I also get this:
dsync-local(user): Error: Failed to sync mailbox RT: Mailbox GUIDs are not permanent without index files
dsync-local(user): Error: proxy client timed out (waiting for MSG-GET message from remote)

And this:
dsync-local(user): Error: read() from worker server failed: EOF

And generally speaking, how good is dsync? Is it usable in production?
This is on dovecot 2.1.7.



-- 
Emmanuel Dreyfus
m...@netbsd.org


[Dovecot] 2.1.10 imapc assert crash report

2012-09-24 Thread Oli Schacher
Hi Timo

I have a simple imapc gmail proxy test setup which works fine on 2.1.9,
but crashes on 2.1.10


# 2.1.10: /usr/local/etc/dovecot/dovecot.conf
# OS: Linux 3.4.7-1-ARCH x86_64  
auth_mechanisms = plain login
imapc_host = imap.gmail.com
imapc_port = 993
imapc_ssl = imaps
imapc_ssl_ca_dir = /etc/ssl/certs
listen = 127.0.0.1
mail_gid = imapproxy
mail_home = /home/imapproxy/%u
mail_location = imapc:~/imapc
mail_uid = imapproxy
passdb {
  args = host=imap.gmail.com port=993 ssl=imaps
  default_fields = userdb_imapc_user=%u userdb_imapc_password=%w userdb_imapc_ssl=imaps userdb_imapc_port=993
  driver = imap
}
protocols = imap
ssl = no
userdb {
  driver = prefetch
}


Log:
Sep 24 10:21:58 codemonkey dovecot: master: Dovecot v2.1.10 starting up 


Sep 24 10:22:12 codemonkey dovecot: auth: Panic: file imapc-connection.c: line 
1289 (imapc_connection_connect_next_ip): assertion failed: 
(conn->client->set.max_idle_time > 0) 
Sep 24 10:22:12 codemonkey dovecot: auth: Error: Raw backtrace: 
/usr/local/lib/dovecot/libdovecot.so.0(+0x453aa) [0x7f8d5ce963aa] -> 
/usr/local/lib/dovecot/libdovecot.so.0(+0x453ee) [0x7f8d5ce963ee] -> 
/usr/local/lib/dovecot/libdovecot.so.0(i_fatal+0) [0x7f8d5ce6abd3] -> 
/usr/local/lib/dovecot/auth/libauthdb_imap.so(+0x977c) [0x7f8d5be3677c] -> 
/usr/local/lib/dovecot/libdovecot.so.0(+0x3618e) [0x7f8d5ce8718e] -> 
/usr/local/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x36) [0x7f8d5cea3006] -> 
/usr/local/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0xa7) [0x7f8d5cea3df7] -> 
/usr/local/lib/dovecot/libdovecot.so.0(io_loop_run+0x28) [0x7f8d5cea2b48] -> 
/usr/local/lib/dovecot/libdovecot.so.0(master_service_run+0x13) [0x7f8d5ce8eb93] -> 
dovecot/auth(main+0x2ff) [0x40ad4f] -> 
/lib/libc.so.6(__libc_start_main+0xf5) [0x7f8d5c27d725] -> 
dovecot/auth() [0x40af61]
Sep 24 10:22:12 codemonkey dovecot: auth: Fatal: master: service(auth): child 
24008 killed with signal 6 (core not dumped)
Sep 24 10:22:12 codemonkey dovecot: imap-login: Warning: Auth connection closed 
with 1 pending requests (max 0 secs, pid=24007, EOF)



Oli

-- 
message transmitted on 100% recycled electrons


Re: [Dovecot] doveadm with multiple commands

2012-09-24 Thread Timo Sirainen
On 21.9.2012, at 8.28, Timo Sirainen wrote:

 Timo Sirainen wrote:
 doveadm multi [-A | -u <wildcards>] <separator string> command1 [<separator string> command2 [...]]
 
 Thoughts?
 
 As command name I could also think of doveadm sequence, which
 implies the commands being executed in serial order.
 
 Hmm. Maybe.

"sequence" is already commonly used by the IMAP protocol and Dovecot code to mean 
message sequence numbers. I think it would be too confusing to use that word 
for other things.



Re: [Dovecot] 2.1.10 imapc assert crash report

2012-09-24 Thread Timo Sirainen
On 24.9.2012, at 11.49, Oli Schacher wrote:

 I have a simple imapc gmail proxy test setup which works fine on 2.1.9,
 but crashes on 2.1.10
 Sep 24 10:22:12 codemonkey dovecot: auth: Panic: file imapc-connection.c: 
 line 1289 (imapc_connection_connect_next_ip): assertion failed: 
 (conn->client->set.max_idle_time > 0) 

Fixed:

http://hg.dovecot.org/dovecot-2.1/rev/fd863826c892
http://hg.dovecot.org/dovecot-2.1/rev/17a8f15beb8c



[Dovecot] Logging question regarding delete actions

2012-09-24 Thread Ralf Hildebrandt
A user is logged in via imap from multiple devices.
The log has this:

Sep 21 11:46:32 postamt dovecot: imap(awer): delete: box=INBOX, uid=15347, 
msgid=1341851741.4ffb085d2e2b7@swift.generated, size=15675
Sep 21 11:46:32 postamt dovecot: imap(awer): delete: box=INBOX, uid=15739, 
msgid=b23b2e42f6ae9ba1602690be42b7b5c7.squir...@webmail.charite.de, size=18134
Sep 21 11:46:32 postamt dovecot: imap(awer): delete: box=INBOX, uid=15740, 
msgid=3509d8b3f9054c4bb26c85f3e4b96563036822877...@exchange21.charite.de, 
size=6
Sep 21 11:46:32 postamt dovecot: imap(awer): delete: box=INBOX, uid=15741, 
msgid=c2e306f4-8622-4d0f-98bd-c67f31f2f...@charite.de, size=5160

How can I find out WHICH CLIENT caused the deletion?

Same issue:

Sep 21 09:36:05 postamt dovecot: imap-login: Login: user=<awer>, 
method=PLAIN, rip=109.45.0.37, lip=141.42.206.36, mpid=30773, TLS, 
session=<e8TfSjHKgwBtLQAl>
Sep 21 10:06:17 postamt dovecot: imap(awer): Disconnected for inactivity 
in=2255 out=4398

How can I be sure that the log entry from 10:06:17 belongs to the
log entry from 09:36:05? Also, what is the meaning of the
session=e8TfSjHKgwBtLQAl?

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



Re: [Dovecot] Logging question regarding delete actions

2012-09-24 Thread Timo Sirainen
On 24.9.2012, at 14.27, Ralf Hildebrandt wrote:

 A user is logged in via imap from multiple devices.
 The log has this:
 
 Sep 21 11:46:32 postamt dovecot: imap(awer): delete: box=INBOX, 
 uid=15347, msgid=1341851741.4ffb085d2e2b7@swift.generated, size=15675
 Sep 21 11:46:32 postamt dovecot: imap(awer): delete: box=INBOX, 
 uid=15739, 
 msgid=b23b2e42f6ae9ba1602690be42b7b5c7.squir...@webmail.charite.de, 
 size=18134
 Sep 21 11:46:32 postamt dovecot: imap(awer): delete: box=INBOX, 
 uid=15740, 
 msgid=3509d8b3f9054c4bb26c85f3e4b96563036822877...@exchange21.charite.de, 
 size=6
 Sep 21 11:46:32 postamt dovecot: imap(awer): delete: box=INBOX, 
 uid=15741, msgid=c2e306f4-8622-4d0f-98bd-c67f31f2f...@charite.de, size=5160
 
 How can I find out WHICH CLIENT caused the deletion?

Change mail_log_prefix to include %{session} (and maybe %r for IP).

 Same issue:
 
 Sep 21 09:36:05 postamt dovecot: imap-login: Login: user=awer, 
 method=PLAIN, rip=109.45.0.37, lip=141.42.206.36, mpid=30773, TLS, 
 session=e8TfSjHKgwBtLQAl
 Sep 21 10:06:17 postamt dovecot: imap(awer): Disconnected for inactivity 
 in=2255 out=4398
 
 How can I be sure that the log entry from 10:06:17 belongs to the
 log entry from 09:36:05? Also, what is the meaning of the
 session=e8TfSjHKgwBtLQAl?

This is also solved with the mail_log_prefix change. The session ID's purpose is exactly 
to match the same session's log messages together. It's a string guaranteed to 
be unique for the next... was it 7 years or so.

Re: [Dovecot] doveadm with multiple commands

2012-09-24 Thread A.L.E.C
On 09/22/2012 06:50 PM, Timo Sirainen wrote:
 On 21.9.2012, at 11.23, A.L.E.C wrote:
 
 On 09/20/2012 06:01 PM, Timo Sirainen wrote:
 Thoughts? Any better name for the command than multi?

 How about 'execute' or 'exec'.
 
 v2.1.10 already has doveadm exec, which does a different thing. So it can't be 
 anything related to exec.

Next is "run" or "pipe", but what if you create a global separator option
and detect multi-command syntax automatically, without a keyword?

Syntax for doveadm would be

doveadm [-Dv] [-f <formatter>] [-s <separator>] [-A | -u <wildcards>] command
[command_options] [command_arguments] [<separator> command
[command_options] [command_arguments] [...]]

and example

doveadm -A -s : expunge mailbox Trash savedbefore 7d : purge
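
The proposed separator handling can be sketched in plain shell, splitting one
argument vector into sub-commands at the separator token (an illustration of
the idea only, not doveadm code):

```shell
# Walk the argument list; emit a sub-command each time the separator appears.
sep=':'
set -- expunge mailbox Trash savedbefore 7d "$sep" purge
cmd=''
for arg in "$@"; do
  if [ "$arg" = "$sep" ]; then
    echo "run: $cmd"
    cmd=''
  else
    cmd="${cmd:+$cmd }$arg"
  fi
done
echo "run: $cmd"
# prints:
#   run: expunge mailbox Trash savedbefore 7d
#   run: purge
```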

-- 
Aleksander 'A.L.E.C' Machniak
LAN Management System Developer [http://lms.org.pl]
Roundcube Webmail Developer  [http://roundcube.net]
---
PGP: 19359DC1 @@ GG: 2275252 @@ WWW: http://alec.pl


Re: [Dovecot] Logging question regarding delete actions

2012-09-24 Thread Ralf Hildebrandt
* Timo Sirainen t...@iki.fi:

 This is also solved with mail_log_prefix change. The session's idea is
 exactly to match the same session's log messages together. It's a
 string guaranteed to be unique for the next .. was it 7 years or so.

Thanks. I changed the mail_log_prefix from
mail_log_prefix = "%s(%u): "
to
mail_log_prefix = "%s(%u) %{session}: "
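
With %{session} in the prefix, all lines of one connection can be pulled
together by grepping for the session ID. A self-contained illustration with
fabricated log lines (the exact line format is approximated, not copied from
a live server):

```shell
# Fabricated sample log -- real Dovecot lines will differ in detail.
log=$(mktemp)
cat > "$log" <<'EOF'
Sep 21 09:36:05 postamt dovecot: imap-login: Login: user=<awer>, session=<e8TfSjHKgwBtLQAl>
Sep 21 09:40:11 postamt dovecot: imap(awer) <e8TfSjHKgwBtLQAl>: delete: box=INBOX, uid=15347
Sep 21 10:06:17 postamt dovecot: imap(awer) <e8TfSjHKgwBtLQAl>: Disconnected for inactivity
Sep 21 09:37:00 postamt dovecot: imap(awer) <zzOtherSession0000>: delete: box=INBOX, uid=9
EOF
# All three lines of the first session, none of the other one:
grep -c 'e8TfSjHKgwBtLQAl' "$log"
# prints: 3
```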

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



Re: [Dovecot] doveadm with multiple commands

2012-09-24 Thread Timo Sirainen
On 24.9.2012, at 14.44, A.L.E.C wrote:

 next is run or pipe, but what if you create global separator option
 and detect multi-command syntax usage automatically without a keyword?
 
 Syntax for doveadm would be
 
 doveadm [-Dv] [-f formatter] [-s separator] [-A | -u wildcards ] command
 [command_options] [command_arguments] [separator command
 [command_options] [command_arguments] [...]]
 
 and example
 
 doveadm -A -s : expunge mailbox Trash savedbefore 7d : purge

Hmm. Yes, that might work. Although it would have to be:

doveadm expunge -A -s : mailbox Trash savedbefore 7d : purge

because both -A and -s are mail-command-specific parameters, which won't work 
for non-mail commands.

Hmm. This reminds me also that it would be possible, with some extra work, to do 
some command interaction. IMAP supports saving search results, which can later 
be accessed with the $ parameter. So this could be made to work:

doveadm search -s : from foo : fetch text \$ : expunge \$



[Dovecot] noisy auth-worker messages in logs (dovecot 2.1.8 FreeBSD)

2012-09-24 Thread Philippe Chevalier

Hello,

I don't know if it's been addressed before, but anyway:


In my dovecot setup, I have local and virtual users. So, I need multiple passdb
backends. Namely, passwd for the local users and ldap for the virtual
users.

passdb {
  driver = passwd
}
passdb {
  args = /usr/local/etc/dovecot/dovecot-ldap.conf
  driver = ldap
}
Everything works correctly: when a user logs in (imap/pop3) there's a
lookup in passwd, and if that fails there's a lookup in ldap (if I
understand the process correctly), which eventually succeeds.

Except that every time a virtual user logs in, dovecot logs an error
like:

dovecot: auth-worker(99126): Error: passwd(x...@domain.org,12.34.254.255): 
getpwnam() failed: Invalid argument

I guess getpwnam fails because the login is a full email address.

Anyway, the user logs in just fine. But I would like to know if/how I
can get rid of the messages filling my logs?

I tried :

auth_debug_passwords = no
auth_verbose = no

But no dice.

I used dovecot 1.x before and there were no such messages.

Thanks for any advice.

K.
--
Kyoko Otonashi's shrine / Le temple de Kyoko Otonashi
My tribute to Maison Ikkoku / Mon hommage a Maison Ikkoku
Visit http://www.kyoko.org/


Re: [Dovecot] noisy auth-worker messages in logs (dovecot 2.1.8 FreeBSD)

2012-09-24 Thread Timo Sirainen
On 24.9.2012, at 16.48, Philippe Chevalier wrote:

 dovecot: auth-worker(99126): Error: passwd(x...@domain.org,12.34.254.255): 
 getpwnam() failed: Invalid argument
 
 I guess it's because the login is a full email that getpwnam fails.

So if you log in as nonexistent user "foo.bar" it doesn't log an error, but if 
you log in as "foo@bar" it does? The attached patch probably fixes it?


[attachment: diff]


Re: [Dovecot] segfault in Debian Squeeze + Dovecot 2.1.10

2012-09-24 Thread Timo Sirainen
On 23.9.2012, at 14.05, Joe Auty wrote:

 #0  0x7f789cd08e14 in hash_table_destroy () from 
 /usr/lib/dovecot/libdovecot.so.0
 (gdb) bt full
 #0  0x7f789cd08e14 in hash_table_destroy () from 
 /usr/lib/dovecot/libdovecot.so.0
 No symbol table info available.
 #1  0x7f789ccda054 in settings_parser_deinit () from 
 /usr/lib/dovecot/libdovecot.so.0
 No symbol table info available.
 #2  0x7f789ccff33d in master_service_settings_cache_deinit () from 
 /usr/lib/dovecot/libdovecot.so.0

Well, the good news is that it crashes only after it has already disconnected 
the client anyway. But I thought I fixed this bug in v2.1.10 and I'm not able 
to reproduce it myself.. Having debugging information available might show 
something useful. Try installing dovecot-dbg package and getting the bt full 
again?



Re: [Dovecot] segfault in Debian Squeeze + Dovecot 2.1.10

2012-09-24 Thread Birta Levente

On 24/09/2012 17:32, Timo Sirainen wrote:

On 23.9.2012, at 14.05, Joe Auty wrote:


#0  0x7f789cd08e14 in hash_table_destroy () from 
/usr/lib/dovecot/libdovecot.so.0
(gdb) bt full
#0  0x7f789cd08e14 in hash_table_destroy () from 
/usr/lib/dovecot/libdovecot.so.0
No symbol table info available.
#1  0x7f789ccda054 in settings_parser_deinit () from 
/usr/lib/dovecot/libdovecot.so.0
No symbol table info available.
#2  0x7f789ccff33d in master_service_settings_cache_deinit () from 
/usr/lib/dovecot/libdovecot.so.0


Well, the good news is that it crashes only after it has already disconnected 
the client anyway. But I thought I fixed this bug in v2.1.10 and I'm not able 
to reproduce it myself.. Having debugging information available might show 
something useful. Try installing dovecot-dbg package and getting the bt full 
again?



I have the same problem, but on CentOS 6.3 64-bit. How can I give you the 
debug information?


Levi



Re: [Dovecot] segfault in Debian Squeeze + Dovecot 2.1.10

2012-09-24 Thread Timo Sirainen
On 24.9.2012, at 17.55, Birta Levente wrote:

 On 24/09/2012 17:32, Timo Sirainen wrote:
 On 23.9.2012, at 14.05, Joe Auty wrote:
 
 #0  0x7f789cd08e14 in hash_table_destroy () from 
 /usr/lib/dovecot/libdovecot.so.0
 (gdb) bt full
 #0  0x7f789cd08e14 in hash_table_destroy () from 
 /usr/lib/dovecot/libdovecot.so.0
 No symbol table info available.
 #1  0x7f789ccda054 in settings_parser_deinit () from 
 /usr/lib/dovecot/libdovecot.so.0
 No symbol table info available.
 #2  0x7f789ccff33d in master_service_settings_cache_deinit () from 
 /usr/lib/dovecot/libdovecot.so.0
 
 Well, the good news is that it crashes only after it has already 
 disconnected the client anyway. But I thought I fixed this bug in v2.1.10 
 and I'm not able to reproduce it myself.. Having debugging information 
 available might show something useful. Try installing dovecot-dbg package 
 and getting the bt full again?
 
 
 I have the same problem, but on centos 6.3 64bit. How can I give you the 
 debug information?

Show your doveconf -n output at least. As for debugging information, that 
depends on how you installed Dovecot. From some RPM, or from sources?



Re: [Dovecot] segfault in Debian Squeeze + Dovecot 2.1.10

2012-09-24 Thread Birta Levente

On 24/09/2012 17:58, Timo Sirainen wrote:

On 24.9.2012, at 17.55, Birta Levente wrote:


On 24/09/2012 17:32, Timo Sirainen wrote:

On 23.9.2012, at 14.05, Joe Auty wrote:


#0  0x7f789cd08e14 in hash_table_destroy () from 
/usr/lib/dovecot/libdovecot.so.0
(gdb) bt full
#0  0x7f789cd08e14 in hash_table_destroy () from 
/usr/lib/dovecot/libdovecot.so.0
No symbol table info available.
#1  0x7f789ccda054 in settings_parser_deinit () from 
/usr/lib/dovecot/libdovecot.so.0
No symbol table info available.
#2  0x7f789ccff33d in master_service_settings_cache_deinit () from 
/usr/lib/dovecot/libdovecot.so.0


Well, the good news is that it crashes only after it has already disconnected 
the client anyway. But I thought I fixed this bug in v2.1.10 and I'm not able 
to reproduce it myself.. Having debugging information available might show 
something useful. Try installing dovecot-dbg package and getting the bt full 
again?



I have the same problem, but on centos 6.3 64bit. How can I give you the debug 
information?


Show your doveconf -n output at least. As for debugging information, that would 
depend on how you installed Dovecot? From some RPM or sources?



I built my own RPM based on the src rpm: dovecot-2.1.1-2_132.src.rpm.

#dovecot -n

auth_mechanisms = plain login cram-md5
debug_log_path = /var/log/dovecot.log
disable_plaintext_auth = no
listen = *
mail_access_groups = vmail
mail_location = maildir:/var/vmail/%d/%n/Maildir
mail_plugins = quota
mbox_write_locks = fcntl
namespace inbox {
  inbox = yes
  location =
  mailbox Drafts {
special_use = \Drafts
  }
  mailbox Junk {
special_use = \Junk
  }
  mailbox Sent {
special_use = \Sent
  }
  mailbox Sent Messages {
special_use = \Sent
  }
  mailbox Trash {
special_use = \Trash
  }
  prefix =
  separator = /
}
passdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size
  quota = maildir:User quota
  quota_exceeded_message = Quota exceeded, please contact postmaster at benvenuti.ro
  quota_warning = storage=95%% quota-warning 95 %u
  quota_warning2 = storage=90%% quota-warning 90 %u
  quota_warning3 = storage=85%% quota-warning 85 %u
  quota_warning4 = storage=80%% quota-warning 80 %u
  quota_warning5 = storage=50%% quota-warning 50 %u
}
postmaster_address = postmas...@mydomain.com
service auth {
  unix_listener /var/spool/postfix/private/auth {
group = postfix
mode = 0660
user = postfix
  }
  unix_listener auth-userdb {
group = vmail
mode = 0600
user = vmail
  }
}
service imap-login {
  inet_listener imap {
port = 143
  }
  inet_listener imaps {
port = 993
ssl = yes
  }
}
service pop3-login {
  inet_listener pop3 {
port = 110
  }
  inet_listener pop3s {
port = 995
ssl = yes
  }
}
service quota-warning {
  executable = script /usr/local/bin/quota-warning.sh
  unix_listener quota-warning {
group = vmail
mode = 0640
user = vmail
  }
  user = vmail
}
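
For reference, the quota-warning.sh wired up above is typically a few lines
like the following. This is a hedged sketch: the sender address and the
dovecot-lda path are assumptions, and the delivery step is left commented out
so only the message construction is shown:

```shell
# Called by Dovecot as: quota-warning.sh <percent> <user>
PERCENT=$1
USER=$2
msg="From: postmaster@example.com
Subject: Quota warning

Your mailbox is now ${PERCENT}% full."
printf '%s\n' "$msg"
# On a real system, deliver instead of printing, e.g. (path assumed):
#   printf '%s\n' "$msg" | /usr/libexec/dovecot/dovecot-lda -d "$USER" -o plugin/quota=
```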
ssl_ca = </etc/pki/tls/certs/ca.pem
ssl_cert = </etc/pki/tls/certs/0.pem
ssl_key = </etc/pki/tls/private/0.pem
userdb {
  args = /etc/dovecot/dovecot-sql.conf.ext
  driver = sql
}
protocol lmtp {
  mail_plugins = quota
}
protocol lda {
  mail_plugins = quota
}
protocol imap {
  mail_plugins = quota imap_quota
}
protocol pop3 {
  mail_plugins = quota
  pop3_uidl_format = %08Xu%08Xv
}
local xxx.xxx.xxx.xxx {
  ssl_ca = </etc/pki/tls/certs/ca.pem
  ssl_cert = </etc/pki/tls/certs/1.cert.pem
  ssl_key = </etc/pki/tls/private/1.privatekey.pem
}
local xxx.xxx.xxx.xxx {
  ssl_ca = </etc/pki/tls/certs/ca.pem
  ssl_cert = </etc/pki/tls/certs/2.cert.pem
  ssl_key = </etc/pki/tls/private/2.privatekey.pem
}



Re: [Dovecot] noisy auth-worker messages in logs (dovecot 2.1.8 FreeBSD)

2012-09-24 Thread Philippe Chevalier

On Mon, Sep 24, 2012 at 05:16:06PM +0300, Timo Sirainen wrote:

On 24.9.2012, at 16.48, Philippe Chevalier wrote:


dovecot: auth-worker(99126): Error: passwd(x...@domain.org,12.34.254.255): 
getpwnam() failed: Invalid argument

I guess it's because the login is a full email that getpwnam fails.


So if you log in as nonexistent user foo.bar it doesn't log an error, but if you log in 
as foo@bar it does? The attached patch probably fixes it?


If I log in as a non-existent user (in neither passwd nor ldap), without the 
domain part, it also logs an error, but this time from ldap:

dovecot: auth: Error: ldap(foo.bar,xx.xx.xx.xx,bCVDqnPKXQC8pSxD): ldap_bind() 
failed: Invalid DN syntax

My bind DN to check the password is:

auth_bind_userdn = dc=%n,dc=%d,ou=Domains,ou=Mail,dc=dspnet,dc=fr

(I have virtual users in multiple domains)

So ldap probably protests because the domain part is missing.

If I use a non-existent login foo@bar, dovecot logs nothing: no error from 
passwd, no error from ldap, just an authentication error on the client side.

I will apply the patch later today and will let you know the result.

Regards,

K.
--
Kyoko Otonashi's shrine / Le temple de Kyoko Otonashi
My tribute to Maison Ikkoku / Mon hommage a Maison Ikkoku
Visit http://www.kyoko.org/


Re: [Dovecot] Dovecot deliver Segmentation fault when arrive the first message

2012-09-24 Thread Alessio Cecchi

On 19/09/2012 15:07, Alessio Cecchi wrote:

On 19/09/2012 15:03, Alessio Cecchi wrote:

On 19/09/2012 14:48, Timo Sirainen wrote:

On 19.9.2012, at 13.54, Alessio Cecchi wrote:

LDA is configured and works fine, but the problem is that when the first 
message arrives, dovecot-lda returns a Segmentation fault; the message is 
written to the user's mailbox but also remains in the qmail queue 
(deferral: Segmentation_fault/), and on the second attempt it is 
delivered fine.
gdb backtrace would be very helpful in figuring out the problem: 
http://dovecot.org/bugreport.html

Hi Timo,

have you had a chance to look at the problem? Can I provide more information?

Thanks



This is the full bt:

(gdb) bt full
#0  acl_lookup_dict_rebuild (dict=0x0) at acl-lookup-dict.c:221
ns = value optimized out
ids_arr = {arr = {buffer = 0x0, element_size = 26492496}, v = 
0x0,

  v_modifiable = 0x0}
ids = 0x1928658
i = value optimized out
dest = value optimized out
ret = -883075307
#1  0x7f2fc9fc41b4 in acl_backend_vfile_acllist_try_rebuild (
backend=0x1944240) at acl-backend-vfile-acllist.c:297
auser = 0x1949a08
iter = 0x0
acllist_path = 0x1928658 
/home/vpopmail/domains/qboxdns.it/cecchi10/Maildir/dovecot-acl-list

ret = value optimized out
ns = 0x1943e50
output = 0x0
st = {st_dev = 2051, st_ino = 662103, st_nlink = 1, st_mode = 
33152,

  st_uid = 89, st_gid = 89, __pad0 = 0, st_rdev = 0, st_size = 0,
  st_blksize = 4096, st_blocks = 0, st_atim = {tv_sec = 
1348059559,

tv_nsec = 0}, st_mtim = {tv_sec = 1348059559, tv_nsec = 0},
  st_ctim = {tv_sec = 1348059559, tv_nsec = 0}, __unused = {0, 
0, 0}}

path = 0x1928210

file_mode = 384
dir_mode = 448
gid = 4294967295
list = value optimized out
info = value optimized out
rootdir = 0x1928610 Sent
origin = 0x194d178 
/home/vpopmail/domains/qboxdns.it/cecchi10/Maildir

fd = 8
#2  acl_backend_vfile_acllist_rebuild (backend=0x1944240)
at acl-backend-vfile-acllist.c:311
acllist_path = value optimized out
#3  0x7f2fc9fc4563 in acl_backend_vfile_acllist_refresh 
(backend=0x1944240)

at acl-backend-vfile-acllist.c:153
__FUNCTION__ = acl_backend_vfile_acllist_refresh
#4  0x7f2fc9fc46d5 in acl_backend_vfile_acllist_verify (backend=0x0,
name=0x1944a60 , mtime=0) at acl-backend-vfile-acllist.c:343
acllist = value optimized out
#5  0x7f2fc9fc30b8 in acl_backend_vfile_object_refresh_cache (
_aclobj=0x19444e0) at acl-backend-vfile.c:858
old_validity = value optimized out
validity = {global_validity = {last_check = 0,
last_read_time = 1348059559, last_mtime = 0, last_size = 0},
  local_validity = {last_check = 0, last_read_time = 0,

last_mtime = 0, last_size = 0}, mailbox_validity = {
last_check = 0, last_read_time = 0, last_mtime = 0, 
last_size = 0}}

mtime = 0
ret = 26515976
#6  0x7f2fc9fc125e in acl_backend_get_default_rights 
(backend=0x1944240,

mask_r=0x28) at acl-backend.c:164
No locals.
#7  0x7f2fc9fc75bd in acl_mailbox_try_list_fast (list=0x194cc00,
patterns=0x7fff362dff50, flags=MAILBOX_LIST_ITER_RETURN_NO_FLAGS)
at acl-mailbox-list.c:107
alist = value optimized out
nonowner_list_ctx = value optimized out
ret = value optimized out
backend = 0x1944240
acl_mask = 0x1
ns = 0x1943e50
update_ctx = {iter_ctx = 0x7f2fcb80d2c8, tree_ctx = 
0x7f2fcbf2ba88,

  glob = 0x0, leaf_flags = 4294967295, parent_flags = 0,
  update_only = 0, match_parents = 0}
name = value optimized out
#8  acl_mailbox_list_iter_init (list=0x194cc00, patterns=0x7fff362dff50,
flags=MAILBOX_LIST_ITER_RETURN_NO_FLAGS) at acl-mailbox-list.c:194
_data_stack_cur_id = 2

ctx = 0x1946b20
pool = value optimized out
i = value optimized out
inboxcase = value optimized out
#9  0x7f2fcb886d33 in mailbox_list_iter_init_multiple 
(list=0x194cc00,

patterns=0x7fff362dff50, flags=MAILBOX_LIST_ITER_RETURN_NO_FLAGS)
at mailbox-list-iter.c:158
ctx = value optimized out
ret = value optimized out
__FUNCTION__ = mailbox_list_iter_init_multiple
#10 0x7f2fcb887459 in mailbox_list_iter_init (list=0x0,
pattern=value optimized out, flags=1348059559) at 
mailbox-list-iter.c:58

patterns = {0x7f2fc9db76dc *, 0x0}
#11 0x7f2fc9db2370 in quota_count_namespace (root=0x1944950,
bytes_r=value optimized out, count_r=0x7fff362dfff0) at 
quota-count.c:73

ctx = 0x7f2fcb5beef3
info = value optimized out
#12 quota_count (root=0x1944950, bytes_r=value optimized out,
count_r=0x7fff362dfff0) at quota-count.c:111
i = 0
ret = 0
#13 0x7f2fc9db37ce in 

[Dovecot] Patches and dovecot releases

2012-09-24 Thread Jean Michel
I'd like to know: after a release, for example the recent 2.1.10, it's common
to see some bug reports and also some patches within a few days. Are these
patches applied to the daily builds?


--
Jean Michel Feltrin


[Dovecot] Traffic Accounting

2012-09-24 Thread M. Naumann
Hi,

I'm trying to find out how to do traffic accounting with Dovecot 2.x,
preferably v2.0.9, preferably on CentOS 6.

I've previously asked on IRC, but there was little feedback, and my
understanding is that this list is now the preferred medium for such
inquiries. If I recall correctly, some weeks ago I was told that traffic
accounting is not officially supported on Dovecot 2, but that there
could still be ways to get it to work; no details were provided.

I can think of the following approaches:

* rawlog, preferably piped (if that's possible?) into something like wc
to prevent privacy issues and to reduce the I/O overhead

* maildrop filtering in front of dovecot LDA (for mail inbound to mail
storage)

* sieve filtering
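
The rawlog idea can be made concrete: rawlog produces per-session
input/output captures, and post-processing them through wc -c keeps only byte
counts, discarding content. A sketch with fabricated stand-in files (the file
names and contents are made up, not real rawlog output):

```shell
# Stand-ins for rawlog's per-session capture files.
dir=$(mktemp -d)
printf 'a001 LOGIN user pass\r\n' > "$dir/session.in"
printf '* OK Dovecot ready.\r\n' > "$dir/session.out"
# Account only the sizes, never the contents:
in_bytes=$(wc -c < "$dir/session.in")
out_bytes=$(wc -c < "$dir/session.out")
echo "in=$((in_bytes)) out=$((out_bytes))"
# prints: in=22 out=21
```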

Unfortunately I have little experience with any of these so far, so it's hard
to make a good choice. I would appreciate hints on these approaches, and
on any other approaches you can think of, as well as any related
documentation / how-tos you could point me to.

While I'm subscribed to the list (for mail authentication purposes),
I've disabled receiving any e-mail from the list, so please CC me on any
replies.

Thanks in advance,

Moritz


Re: [Dovecot] doveadm with multiple commands

2012-09-24 Thread Ben Morrow
At  2PM +0300 on 24/09/12 you (Timo Sirainen) wrote:
 On 24.9.2012, at 14.44, A.L.E.C wrote:
 
  next is run or pipe, but what if you create global separator option
  and detect multi-command syntax usage automatically without a keyword?
  
  Syntax for doveadm would be
  
  doveadm [-Dv] [-f formatter] [-s separator] [-A | -u wildcards ] command
  [command_options] [command_arguments] [separator command
  [command_options] [command_arguments] [...]]
  
  and example
  
  doveadm -A -s : expunge mailbox Trash savedbefore 7d : purge
 
 Hmm. Yes, that might work. Although it would have to be:
 
 doveadm expunge -A -s : mailbox Trash savedbefore 7d : purge
 
 because both -A and -s are mail command specific parameters, which
 won't work for non-mail commands.
 
 Hmm. This reminds me also that it would be possible with some extra
 work to do some command interaction. IMAP supports saving search
 results, which can later be accessed with $ parameter. So this could
 be made to work:
 
 doveadm search -s : from foo : fetch text \$ : expunge \$

This is turning into a proper scripting language, so perhaps something
like

doveadm -e 'search from foo; fetch text $; expunge $'

with 'doveadm -F file' to run a script file?

Ben



Re: [Dovecot] Can`t get over 1024 processes on FreeBSD - possible bug?

2012-09-24 Thread Ben Morrow
 On 21/09/12 11:32, Tomáš Randa wrote:
  Hello,
 
  I still cannot get dovecot running with more than 1000 processes, but the
  hard limit is 8192 per user on the box. I tried everything, including
  modifying dovecot's startup script to set ulimit -u 8192. Could it be
  some dovecot bug, or a dovecot/FreeBSD bug?
  I also tried to set client_limit=2 in the imap service to serve more imap
  clients per process, but I still go over 1000 processes, with the kernel
  message:
 
  maxproc limit exceeded by uid 89

You may be running into the kern.maxprocperuid sysctl setting. This is
initialized to 9/10ths of kern.maxproc, but can be changed
independently. If you do this you may want to consider setting a default
maxproc rlimit in login.conf for the other users on the box. (You may,
of course, already have a maxproc limit in login.conf, and that's what's
causing the problem, though the default install doesn't include one.)

If you have procfs mounted you can check the maxproc rlimit of a running
process by looking in /proc/<pid>/rlimit. In principle it's also possible
to get this information with libkvm, but it's not very easy and I
don't think any of the standard utilities expose it.
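
The 9/10ths default mentioned above is easy to sanity-check with shell
arithmetic; the numbers below are hypothetical, and the sysctl commands in
the comments are the ones you would run on a real FreeBSD box:

```shell
# Hypothetical kern.maxproc value, purely to illustrate the derivation.
maxproc=10000
maxprocperuid=$((maxproc * 9 / 10))
echo "default kern.maxprocperuid would be $maxprocperuid"
# prints: default kern.maxprocperuid would be 9000
# On FreeBSD (not run here):
#   sysctl kern.maxproc kern.maxprocperuid    # inspect current values
#   sysctl kern.maxprocperuid=8192            # raise until next boot
```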

Ben



Re: [Dovecot] Dovecot deliver Segmentation fault when arrive the first message

2012-09-24 Thread Alessio Cecchi

On 24/09/2012 17:40, Alessio Cecchi wrote:

On 19/09/2012 15:07, Alessio Cecchi wrote:

On 19/09/2012 15:03, Alessio Cecchi wrote:

On 19/09/2012 14:48, Timo Sirainen wrote:

On 19.9.2012, at 13.54, Alessio Cecchi wrote:

LDA is configured and works fine, but the problem is that when the first 
message arrives, dovecot-lda returns a Segmentation fault; the message is 
written to the user's mailbox but also remains in the qmail queue 
(deferral: Segmentation_fault/), and on the second attempt it is 
delivered fine.
gdb backtrace would be very helpful in figuring out the problem: 
http://dovecot.org/bugreport.html

Hi Timo,

had you occasion to see the problem? Can I provide more information?

Thanks


After further testing I found this behavior (note: I'm using dict quota 
in MySQL):


- add a new user
- delivery the first email via deliver
- Segmentation fault

- I remove the newly created user
- add the same user
- delivery the first email via deliver
- OK

- add a new user
- the user connects via pop/imap
- delivery the first email via deliver
- OK

- add a new user
- manually create the entry for dict quota  <==
- delivery the first email via deliver
- OK

It seems that if there is a user's entry in the dict database, the problem 
does not appear.


--
Alessio Cecchi is:
@ ILS - http://www.linux.it/~alessice/
on LinkedIn - http://www.linkedin.com/in/alessice
Assistenza Sistemi GNU/Linux - http://www.cecchi.biz/
@ PLUG - ex-Presidente, adesso senatore a vita, http://www.prato.linux.it



[Dovecot] 76Gb to 146Gb

2012-09-24 Thread Spyros Tsiolis
Hello all,

I have a DL360 G4 1U server that does a wonderful job with Dovecot, Horde,
XMail and OpenLDAP for a company, serving about 40 accounts.

The machine is wonderful. I am very happy with it.
However, I am running out of disk space.
It has two 76GB drives in RAID1 (disk mirroring) and the capacity
has reached 82%.

I am starting to get nervous.

Does anyone know of a painless way to migrate the entire contents directly
to another pair of 146GB SCSI RAID1 disks?

I thought of downtime and using Clonezilla, but my last experience with it
was questionable. I remember having problems declaring disk resizing
from the smaller-capacity drives to the larger ones.

CentOS 5.5
Manual install of :

Mysql
XMail (pop3/smtp)
ASSP (anti spam)
Apache / LAMP
and last but by no means least: Dovecot

Dovecot -n :

# 1.2.16: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.18-194.17.4.el5 i686 CentOS release 5.5 (Final) ext3
base_dir: /var/run/dovecot/
log_path: /var/log/dovecot/dovecot.log
info_log_path: /var/log/dovecot/dovecot-info.log
ssl_parameters_regenerate: 48
verbose_ssl: yes
login_dir: /var/run/dovecot//login
login_executable: /usr/local/dovecot/libexec/dovecot/imap-login
login_greeting: * Dovecot ready *
login_max_processes_count: 96
mail_location: maildir:/var/MailRoot/domains/%d/%n/Maildir
mail_plugins: zlib
auth default:
  verbose: yes
  debug: yes
  debug_passwords: yes
  passdb:
    driver: passwd-file
    args: /etc/dovecot/passwd
  passdb:
    driver: pam
  userdb:
    driver: static
    args: uid=vmail gid=vmail home=/home/vmail/%u
  userdb:
    driver: passwd


Any help would be appreciated or any ideas you might have.

Regards,

spyros







I merely function as a channel that filters 
music through the chaos of noise
- Vangelis 


Re: [Dovecot] 76Gb to 146Gb

2012-09-24 Thread Michescu Andrei
Hello Spyros,

As a best practice, you never keep the OS and the data/logs/user homes on
the same partition or set of disks.

If this is the case, then your life is pretty easy:
 - simply create the new set of partitions
 - mount the new ones in a temporary location
 - rsync (or copy) everything from the old partitions
 - stop dovecot / all other daemons that might be using the data
 - mount the new ones in the place of the old ones, and the old ones in
   the place of the new ones
 - rsync again (should be quick, as not many things have changed)
 - start all your daemons again :P

If you do not have separate partitions maybe this is the perfect time to
look into that...

I would also look into btrfs... might be a good pick for your new partitions.

best regards,
Andrei

 Hello all,

 [...]






Re: [Dovecot] 76Gb to 146Gb

2012-09-24 Thread Robert Schetterer
Am 24.09.2012 19:42, schrieb Spyros Tsiolis:
 Hello all,

 [...]


rsync
should do the job

depending on your whole machine setup it might only be only
umount old /home and mount new(bigger) /home after sync
,perhaps with tmp store elsewhere
( for sure you have to have a plan before doing..)

but your dovecot is very outdated, i would recommend
get up to new hard and software/os install, and then migrate
to new machine

 
 
 
 
 
 
 I merely function as a channel that filters 
 music through the chaos of noise
 - Vangelis 
 


-- 
Best Regards
MfG Robert Schetterer


Re: [Dovecot] 76Gb to 146Gb

2012-09-24 Thread Ed W
This is one of those questions which is almost too easy if you are 
familiar with Linux.  Trying not to sound like a d*ck, but is it an 
option to rent someone to help with admin jobs?  For example, were it me 
then I would probably have setup some partitioning scheme with separate 
partitions for data and operating system?  Possibly also using LVM?


You have several options, mainly the choice of filesystem will dictate 
here, but quite possibly you can:
1) Pull the drives one by one and rebuild the raid after each. Keep the 
old drives since you can technically roll back onto them. Expand the 
partitions (scary without LVM) and then expand the filesystem on the 
partitions
2) Boot from a DVD/Flash into your favourite rescue distro (I like 
SystemRescueCd). Create the new raid, copy the old to the new, remove the 
old drives, reboot from new.  Possibly take the time to repartition 
and move some data around while you do it (remember to update fstab)


Both are fairly simple if you have done it once, but it would be well 
worth finding someone either local or who will log in via remote control 
and support you?


Final thought:  For the size of drives you are looking at, SSD drives 
are relatively inexpensive and likely comparable with the high end 
drives you are probably looking to buy?  For 40 users I would hazard a 
guess you likely would be happy with inexpensive low end drives, but 
certainly a couple of small SSDs will blow away a spinning disk and give 
you a decent upgrade...


Good luck

Ed W



On 24/09/2012 18:42, Spyros Tsiolis wrote:

Hello all,

[...]




Re: [Dovecot] 76Gb to 146Gb

2012-09-24 Thread Michescu Andrei
Hello Spyros,

Oops... the DL360 G4 has only 2 bays and no external SCSI/SATA
connector... so the solution below does not really apply to you :(

Andrei

 Hello Spyros,

 [...]






Re: [Dovecot] 76Gb to 146Gb

2012-09-24 Thread Robert Schetterer
Am 24.09.2012 20:07, schrieb Michescu Andrei:
 Hello Spyros,
 
 Oupss... the DL360 G4 has only 2 bays and no external SCSI/SATA
 connector... so the solution below does not really apply to you :(
 
 Andrei

depends how long a downtime is acceptable,
i.e. rsync via temporary USB storage, or simple temporary
NFS mounts to other servers, are thinkable to minimize downtime.
Should be no big problem; it's only one server, with maildir and few
mailboxes and little data

but that's all rather off topic for dovecot

-- 
Best Regards
MfG Robert Schetterer


Re: [Dovecot] noisy auth-worker messages in logs (dovecot 2.1.8 FreeBSD)

2012-09-24 Thread Philippe Chevalier

On Mon, Sep 24, 2012 at 05:04:40PM +0200, Philippe Chevalier wrote:


I will apply the patch later today and will let you know the result.


I applied the patch, and obviously, when getpwnam_r sets the result to
NULL and returns EINVAL, dovecot now acts as if the entry was not found
and stays quiet.

So, thank you, auth is now a lot less noisy.

As for the ldap message, it errors if there's no domain in the login.

The docs say that %d is empty if there's no domain part. So I
guess this is an enhancement request: a configuration option to have it
filled in with a default domain when none is supplied by the
client.
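For what it's worth, Dovecot does have a setting along these lines; the exact behaviour should be verified against your version's documentation, and the domain below is a placeholder:

```
# dovecot.conf sketch -- verify against your Dovecot version's docs.
# When the client logs in without a domain part, this domain is
# appended, so %d is no longer empty in passdb/userdb lookups:
auth_default_realm = example.com
```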

Regards,

K.
--
Kyoko Otonashi's shrine / Le temple de Kyoko Otonashi
My tribute to Maison Ikkoku / Mon hommage a Maison Ikkoku
Visit http://www.kyoko.org/


Re: [Dovecot] 76Gb to 146Gb

2012-09-24 Thread Spyros Tsiolis
- Original Message -
 From: Michescu Andrei andrei.miche...@miau.ca
 To: Dovecot Mailing List dovecot@dovecot.org
 Cc: 
 Sent: Monday, 24 September 2012, 21:07
 Subject: Re: [Dovecot] 76Gb to 146Gb
 
 Hello Spyros,
 
 Oupss... the DL360 G4 has only 2 bays and no external SCSI/SATA
 connector... so the solution below does not really apply to you :(
 
 Andrei
 
 [...]




Andrei,

Thank you very much for you kind reply and
both your messages.

Having said that, would it be possible to take
out one 72GB drive (say Drive1, the second drive)
and shove in one of the two 146GB ones?

Shouldn't the array be rebuilt?
Will it use the extra disk space though?

Thanks,

spyros




 

I merely function as a channel that filters 
music through the chaos of noise
- Vangelis


Re: [Dovecot] 76Gb to 146Gb

2012-09-24 Thread Ed W

On 24/09/2012 19:07, Ed W wrote:
This is one of those questions which is almost too easy if you are 
familiar with Linux.  Trying not to sound like a d*ck, but is it an 
option to rent someone to help with admin jobs?  For example, were it 
me then I would probably have setup some partitioning scheme with 
separate partitions for data and operating system? Possibly also using 
LVM?


That came out wrong...  What I meant to say was something more like if 
you were to employ someone locally they would probably give you a whole 
bunch of ideas on how you could adjust the setup of the server to be 
more future proof.  It would be worth working with someone just to get 
that right.  For example, here are some ideas that occur to me that you 
could use ...


Sorry, should re-read my words before hitting send

Ed


Re: [Dovecot] segfault in Debian Squeeze + Dovecot 2.1.10

2012-09-24 Thread Joe Auty


Timo Sirainen mailto:t...@iki.fi
September 24, 2012 10:32 AM

Well, the good news is that it crashes only after it has already 
disconnected the client anyway. But I thought I fixed this bug in 
v2.1.10 and I'm not able to reproduce it myself.. Having debugging 
information available might show something useful. Try installing 
dovecot-dbg package and getting the bt full again?


Thanks Timo, I have done so. Here are the results with debugging info now:

 gdb /usr/lib/dovecot/imap-login /var/run/dovecot/login/core
GNU gdb (GDB) 7.0.1-debian
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
http://gnu.org/licenses/gpl.html

This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type show copying
and show warranty for details.
This GDB was configured as x86_64-linux-gnu.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/...
Reading symbols from /usr/lib/dovecot/imap-login...Reading symbols from 
/usr/lib/debug/usr/lib/dovecot/imap-login...done.

(no debugging symbols found)...done.

warning: Can't read pathname for load map: Input/output error.
Reading symbols from /usr/lib/dovecot/libdovecot-login.so.0...Reading 
symbols from 
/usr/lib/debug/usr/lib/dovecot/libdovecot-login.so.0.0.0...done.

(no debugging symbols found)...done.
Loaded symbols for /usr/lib/dovecot/libdovecot-login.so.0
Reading symbols from /usr/lib/dovecot/libdovecot.so.0...Reading symbols 
from /usr/lib/debug/usr/lib/dovecot/libdovecot.so.0.0.0...done.

(no debugging symbols found)...done.
Loaded symbols for /usr/lib/dovecot/libdovecot.so.0
Reading symbols from /lib/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /usr/lib/libssl.so.0.9.8...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib/libssl.so.0.9.8
Reading symbols from /usr/lib/libcrypto.so.0.9.8...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib/libcrypto.so.0.9.8
Reading symbols from /lib/librt.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/librt.so.1
Reading symbols from /lib/libdl.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols 
found)...done.

Loaded symbols for /lib64/ld-linux-x86-64.so.2
Reading symbols from /usr/lib/libz.so.1...(no debugging symbols 
found)...done.

Loaded symbols for /usr/lib/libz.so.1
Reading symbols from /lib/libpthread.so.0...(no debugging symbols 
found)...done.

Loaded symbols for /lib/libpthread.so.0
Core was generated by `dovecot/imap-login   ?'.
Program terminated with signal 11, Segmentation fault.
#0  hash_table_destroy (_table=0x28) at hash.c:106
106hash.c: No such file or directory.
in hash.c
(gdb) bt full
#0  hash_table_destroy (_table=0x28) at hash.c:106
table = value optimized out
#1  0x7ff300721054 in settings_parser_deinit (_ctx=value optimized 
out) at settings-parser.c:237

ctx = 0x0
#2  0x7ff30074633d in master_service_settings_cache_deinit 
(_cache=value optimized out)

at master-service-settings-cache.c:86
cache = 0x9f9a60
entry = 0xa016e0
next = 0x0
__FUNCTION__ = master_service_settings_cache_deinit
#3  0x7ff3009a5018 in main_deinit (binary=value optimized out, 
argc=2, argv=0x9f8370) at main.c:355

No locals.
#4  login_binary_run (binary=value optimized out, argc=2, 
argv=0x9f8370) at main.c:407

set_pool = 0x9f8a30
allow_core_dumps = value optimized out
login_socket = value optimized out
c = value optimized out
#5  0x7ff3003c0c8d in __libc_start_main () from /lib/libc.so.6
No symbol table info available.
#6  0x00402459 in _start ()
No symbol table info available.
(gdb)





Joe Auty mailto:j...@netmusician.org
September 23, 2012 7:05 AM


Timo Sirainen mailto:t...@iki.fi
September 23, 2012 5:58 AM


You should have a similar log line about the crash in mail.log (or 
wherever doveadm log find says that errors get logged). Find those 
lines, then configure login processes to dump core files. This 
probably should work:


service imap-login {
executable = imap-login -D
}

Next time it crashes hopefully you'll have 
/var/run/dovecot/login/core* file(s). Get a gdb backtrace from it 
send it:


gdb /usr/lib/dovecot/imap-login /var/run/dovecot/login/core
bt full


I hope I'm doing this correctly!

# gdb /usr/lib/dovecot/imap-login /var/run/dovecot/login/core
GNU gdb (GDB) 7.0.1-debian
Copyright (C) 2009 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
http://gnu.org/licenses/gpl.html

This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type show 
copying

and show warranty for details.
This GDB was configured as x86_64-linux-gnu.
For bug reporting instructions, please see:

Re: [Dovecot] 76Gb to 146Gb

2012-09-24 Thread Spyros Tsiolis
- Original Message -

 From: Ed W li...@wildgooses.com
 To: dovecot@dovecot.org
 Cc: 
 Sent: Monday, 24 September 2012, 21:55
 Subject: Re: [Dovecot] 76Gb to 146Gb
 
 On 24/09/2012 19:07, Ed W wrote:
 This is one of those questions which is almost too easy if you are familiar 
 with Linux.  Trying not to sound like a d*ck, but is it an option to rent 
 someone to help with admin jobs?  For example, were it me then I would 
 probably have set up some partitioning scheme with separate partitions for 
 data and operating system? Possibly also using LVM?
 
 That came out wrong...  What I meant to say was something more like if you 
 were to employ someone locally they would probably give you a whole bunch of 
 ideas on how you could adjust the setup of the server to be more future 
 proof.  It would be worth working with someone just to get that right.  For 
 example, here are some ideas that occur to me that you could use ...
 
 Sorry, should re-read my words before hitting send
 
 Ed
 

Ed,

Don't worry about it. I wasn't offended.
I have a lot of experience with linux but not on heavy metal servers.
I used to have plenty of experience back in the G2/ G3 era (I was
also ACE in the Compaq years) but that was back in the time that
Compaq was only supporting Windows OSs and SCO.

Also the problem is that I don't have the time to play with a
spare HP/Compaq server ( I have a couple laying around btw).
I'll get round to it at some point.

I am just asking you chaps because I am sure people out there had
the chance to tinker with newer and better equipment.

Thank you for your reply,

Best Regards,

spyros



 

I merely function as a channel that filters 
music through the chaos of noise
- Vangelis



Re: [Dovecot] 76Gb to 146Gb

2012-09-24 Thread Spyros Tsiolis




- Original Message -
 From: Robert Schetterer rob...@schetterer.org
 To: dovecot@dovecot.org
 Cc: 
 Sent: Monday, 24 September 2012, 21:06
 Subject: Re: [Dovecot] 76Gb to 146Gb
 
 Am 24.09.2012 19:42, schrieb Spyros Tsiolis:
  Hello all,
 
 %%%%%%%%
 
 rsync
 should do the job
 
 depending on your whole machine setup it might only be only
 umount old /home and mount new(bigger) /home after sync
 ,perhaps with tmp store elsewhere
 ( for sure you have to have a plan before doing..)
 
 but your dovecot is very outdated, i would recommend
 get up to new hard and software/os install, and then migrate
 to new machine
 
 

 
 -- 
 Best Regards
 MfG Robert Schetterer
 


On client machines I have thunderbird.

What if :

1. I would make sure that Thunderbird keeps a local
copy of all the messages (I think there is a check box
somewhere in the settings)

2. Make sure all client machines have synced their
mailboxes locally in Thunderbird.


3. Install a new version of Dovecot/Horde/XMail etc.

4. When the new installation is done, try to sync
from the existing client PCs back to Dovecot?

Would that work ?
It's one scenario I am seriously contemplating.

Thank you very much again,

s.



 

I merely function as a channel that filters 
music through the chaos of noise
- Vangelis


Re: [Dovecot] 76Gb to 146Gb

2012-09-24 Thread Benny Pedersen

Spyros Tsiolis skrev den 24-09-2012 19:42:


Any help would be appreciated or any ideas you might have.


try googling centos cloud server

If you would like to do it locally, use all 4 drives as 2 RAID1 arrays on 
the same controller if possible, then use SystemRescueCd to tar it all over 
to the other RAID1 while the system is down.


No matter how you do it, there will be downtime.

I am not using CentOS here so I can't be more specific

http://www.sysresccd.org





Re: [Dovecot] 76Gb to 146Gb

2012-09-24 Thread Benny Pedersen

Spyros Tsiolis skrev den 24-09-2012 20:42:


Having said that, would it be possible to take
away on 72Gb drive (say Drive1 the second drive)
and shove in one of the two 146Gb ones ?


this can be done yes, but you will have to do more steps :)

first step: remove one drive,
add the first 146GB drive

wait for it to rebuild

when done, remove the last small drive,
add the last 146GB drive

wait for it to rebuild

now at this stage you have 72GB more of unused room for new partitions

make this new partition /home

and after it is created, move the user data onto it. But that leaves a 
72GB system partition of which only a few gigs are needed, so I would 
create the new partition as LVM2, then possibly shrink the system and 
mount the LVM2 volume as /home. That way you have more options later if 
146GB turns out to be too small again.


Warning: I have not done this myself, but it should work in theory at least.




Re: [Dovecot] doveadm with multiple commands

2012-09-24 Thread Daniel Parthey
Timo Sirainen wrote:
 On 21.9.2012, at 8.28, Timo Sirainen wrote:
 
  Timo Sirainen wrote:
  doveadm multi [-A | -u wildcards] separator string command 1 
  [separator string command 2 [...]]
  
  Thoughts?
  
  As command name I could also think of doveadm sequence, which
  implies the commands being executed in serial order.
  
  Hmm. Maybe.
 
 sequence is already commonly used by IMAP protocol and Dovecot code to mean 
 message sequence numbers. I think it would be too confusing to use that word 
 for other things.

Ok, so how about batch?

It reads a series of commands and collects them into
one batch job which is then carried out.

http://en.wikipedia.org/wiki/Batch_(Unix)

Regards
Daniel
-- 
https://plus.google.com/103021802792276734820


Re: [Dovecot] 76Gb to 146Gb

2012-09-24 Thread Stan Hoeppner
On 9/24/2012 1:42 PM, Spyros Tsiolis wrote:

 Having said that, would it be possible to take
 away on 72Gb drive (say Drive1 the second drive)
 and shove in one of the two 146Gb ones ?

It's always best to manually take a drive off line before pulling it.

 Shouldn't the array be rebuilt ?

Depends on how your 6i is configured.  Best guess is that it will
automatically rebuild the mirror on the new 146GB drive, but...

 Will it use the extra disk space though ?

It will probably not.  You need to read the 6i manual.

I sense a hardware upgrade in your near future, either an HP server with
4 bays, or an SFF8088 JBOD chassis and an inexpensive RAID card.

You already have the 146GB drives, correct?  And they are HP pluggable
drives?  Which means they only work in HP gear.  If that's the case you
need a new server with at least 4 drive bays.  Or you need to buy an
off-brand JBOD box and two standard SATA drives.

Or maybe your organizations needs more storage on many servers, and it's
time to step up to an iSCSI SAN array.

-- 
Stan



Re: [Dovecot] 76Gb to 146Gb

2012-09-24 Thread Michael Orlitzky
On 09/24/2012 01:42 PM, Spyros Tsiolis wrote:
 Hello all,
 
 I have a DL360 G4 1U server that does a wonderfull job with dovecot horde,
 Xmail and OpenLDAP for a company and serving about 40 acouunts.
 
 The machine is wonderful. I am very happy with it.
 However, I am running out of disk space.
 It has two times 76Gb Drives in RAID1 (disk mirroring) and the capacity
 has reached 82%. 
 
 I am starting of getting nervous.
 
 Does anyone know of a painless way to migrate the entire contents directly
 to another pair of 146Gb SCSI RAID1 disks ?
 
 I thought of downtime and using clonezilla, but my last experience with it
 was questionable. I remember having problems declaring disk re-sizing
 from the smaller capacity drives to the larger ones.

We've done this on the same hardware. You can pick up these servers for
cheap; just buy an extra one. Take the new machine, throw two big disks
in it, and install Gentoo.

Rsync the important stuff. Make sure all of the services are working on
the new machine.

When you're ready to make the switch, disable external networking on the
current live server. Rsync everything again, and then turn the old
server off. Add its IP address to the new server. Maybe kick your
router's ARP cache to expedite the change. It should only cause a minute
or two of downtime.
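The two-pass rsync cutover described above might look like this (the host
name, paths, IP address and interface are placeholders for your own setup):

```shell
# Pass 1: copy the bulk of the data while the old server is still live.
# -a preserves permissions/ownership/times, -H preserves hard links,
# --delete keeps the destination an exact mirror of the source.
rsync -aH --delete /var/mail/ root@newserver:/var/mail/
rsync -aH --delete /home/ root@newserver:/home/

# Cutover: stop mail delivery/access so nothing changes mid-copy,
# then run a final, fast incremental pass.
/etc/init.d/dovecot stop
rsync -aH --delete /var/mail/ root@newserver:/var/mail/
rsync -aH --delete /home/ root@newserver:/home/

# On the new server: take over the old IP and send gratuitous ARP so
# neighbours update their caches (arping from iputils is assumed).
ip addr add 192.0.2.10/24 dev eth0
arping -U -I eth0 -c 3 192.0.2.10
```

The second rsync pass only transfers what changed since the first, which
is why the downtime window stays small.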


Re: [Dovecot] 76Gb to 146Gb

2012-09-24 Thread Michael Orlitzky
On 09/24/2012 10:59 PM, Michael Orlitzky wrote:
 
 We've done this on the same hardware. You can pick up these servers for
 cheap; just buy an extra one. Take the new machine, throw two big disks
 in it, and install Gentoo.

I seem to have gone insane; I thought this was on gentoo-user for some
reason. Anyway, it's a fine suggestion =)


Re: [Dovecot] 76Gb to 146Gb

2012-09-24 Thread Robert Schetterer
On 24.09.2012 21:24, Spyros Tsiolis wrote:
 
 
 
 
 - Original Message -
 From: Robert Schetterer rob...@schetterer.org
 To: dovecot@dovecot.org
 Cc: 
 Sent: Monday, 24 September 2012, 21:06
 Subject: Re: [Dovecot] 76Gb to 146Gb

 On 24.09.2012 19:42, Spyros Tsiolis wrote:
  Hello all,

 [...]
  
 rsync
 should do the job

 depending on your whole machine setup, it might be as simple as
 unmounting the old /home and mounting the new (bigger) /home after the
 sync, perhaps with a temporary store elsewhere
 (for sure you have to have a plan before doing it...)

 but your Dovecot is very outdated; I would recommend getting up to date
 with a new hardware and software/OS install, and then migrating
 to the new machine




 -- 
 Best Regards
 MfG Robert Schetterer

 
 
 On client machines I have Thunderbird.
 
 What if:
 
 1. I make sure that Thunderbird keeps a local
 copy of all the messages (I think there is a check box
 somewhere in the settings)
 
 2. Make sure all client machines have synced their
 mailboxes locally in Thunderbird.
 
 3. Install a new version of Dovecot/Horde/XMail etc.
 
 4. When the new installation is done, try to sync
 from the existing client PCs back to Dovecot?
 
 Would that work?
 It's one scenario I am seriously contemplating.
 
 Thank you very much again,
 
 s.
 

In short: don't do it like this.
Set up your new server,
test it, then do e.g. an imapsync from old to new,
then switch your IPs. Done.

That's the only way you can go;
look in the archives and on the web for migration tips.
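The imapsync step suggested above might look like this for one account
(host names, user names and passwords are placeholders; check
`man imapsync` for the options your version supports):

```shell
# Copy one user's mail from the old server to the new one over IMAP.
# Because imapsync works at the IMAP level, the mailbox format on the
# new server (mbox vs. Maildir) does not have to match the old one.
imapsync \
  --host1 oldmail.example.com --user1 spyros --password1 'oldpass' \
  --host2 newmail.example.com --user2 spyros --password2 'newpass'
```

Run it once per account (or loop over a list of users); imapsync is
designed to be re-runnable, skipping messages that already exist on the
destination.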

 
 
  
 
 I merely function as a channel that filters 
 music through the chaos of noise
 - Vangelis
 


-- 
Best Regards
MfG Robert Schetterer