Hi,
Here is some more data. I started saslauthd in debug mode, then tried sending email using sendmail as my local client, and then using icedove, to see whether there is a difference. To rule out the saslauthd backend as the culprit, I tried this first with the pam backend and then with the shadow backend; the results are the same for both.

Data: using pam as the saslauthd mechanism
======================================================================
__> /usr/sbin/saslauthd -a pam -c -m /var/run/saslauthd -n 5 -d
saslauthd[12066] :main : num_procs : 5
saslauthd[12066] :main : mech_option: NULL
saslauthd[12066] :main : run_path : /var/run/saslauthd
saslauthd[12066] :main : auth_mech : pam
saslauthd[12066] :cache_alloc_mm : mmaped shared memory segment on file: /var/run/saslauthd/cache.mmap
saslauthd[12066] :cache_init : bucket size: 96 bytes
saslauthd[12066] :cache_init : stats size : 36 bytes
saslauthd[12066] :cache_init : timeout : 28800 seconds
saslauthd[12066] :cache_init : cache table: 985828 total bytes
saslauthd[12066] :cache_init : cache table: 1711 slots
saslauthd[12066] :cache_init : cache table: 10266 buckets
saslauthd[12066] :cache_init_lock : flock file opened at /var/run/saslauthd/cache.flock
saslauthd[12066] :ipc_init : using accept lock file: /var/run/saslauthd/mux.accept
saslauthd[12066] :detach_tty : master pid is: 0
saslauthd[12066] :ipc_init : listening on socket: /var/run/saslauthd/mux
saslauthd[12066] :main : using process model
saslauthd[12066] :have_baby : forked child: 12067
saslauthd[12066] :have_baby : forked child: 12068
saslauthd[12066] :have_baby : forked child: 12069
saslauthd[12066] :have_baby : forked child: 12070
saslauthd[12066] :get_accept_lock : acquired accept lock
======================================================================

First, send using the local sendmail as relay:
======================================================================
saslauthd[12066] :cache_get_rlock : attempting a read lock on slot: 15
saslauthd[12066] :cache_lookup : [login=srivasta] [service=] [realm=smtp]: not found, update pending
saslauthd[12066] :cache_un_lock : attempting to release lock on slot: 15
saslauthd[12066] :do_auth : auth failure: [user=srivasta] [service=smtp] [realm=] [mech=pam] [reason=PAM auth error]
saslauthd[12068] :rel_accept_lock : released accept lock
saslauthd[12068] :cache_get_rlock : attempting a read lock on slot: 15
saslauthd[12068] :cache_lookup : [login=srivasta] [service=] [realm=smtp]: not found, update pending
saslauthd[12068] :cache_un_lock : attempting to release lock on slot: 15
saslauthd[12067] :get_accept_lock : acquired accept lock
saslauthd[12068] :do_auth : auth failure: [user=srivasta] [service=smtp] [realm=] [mech=pam] [reason=PAM auth error]
======================================================================

Immediately after, send another email using icedove, which connects directly to the remote mail server:
======================================================================
saslauthd[12067] :rel_accept_lock : released accept lock
saslauthd[12070] :get_accept_lock : acquired accept lock
saslauthd[12067] :cache_get_rlock : attempting a read lock on slot: 15
saslauthd[12067] :cache_lookup : [login=srivasta] [service=] [realm=smtp]: not found, update pending
saslauthd[12067] :cache_un_lock : attempting to release lock on slot: 15
saslauthd[12067] :cache_get_wlock : attempting a write lock on slot: 15
saslauthd[12067] :cache_commit : lookup committed
saslauthd[12067] :cache_un_lock : attempting to release lock on slot: 15
saslauthd[12067] :do_auth : auth success: [user=srivasta] [service=smtp] [realm=] [mech=pam]
saslauthd[12067] :do_request : response: OK
======================================================================

So: no change anywhere; icedove works, the local sendmail client does not.
The same experiment with shadow as the saslauthd backend:
======================================================================
__> /usr/sbin/saslauthd -a shadow -c -m /var/run/saslauthd -n 5 -d
saslauthd[12015] :main : num_procs : 5
saslauthd[12015] :main : mech_option: NULL
saslauthd[12015] :main : run_path : /var/run/saslauthd
saslauthd[12015] :main : auth_mech : shadow
saslauthd[12015] :cache_alloc_mm : mmaped shared memory segment on file: /var/run/saslauthd/cache.mmap
saslauthd[12015] :cache_init : bucket size: 96 bytes
saslauthd[12015] :cache_init : stats size : 36 bytes
saslauthd[12015] :cache_init : timeout : 28800 seconds
saslauthd[12015] :cache_init : cache table: 985828 total bytes
saslauthd[12015] :cache_init : cache table: 1711 slots
saslauthd[12015] :cache_init : cache table: 10266 buckets
saslauthd[12015] :cache_init_lock : flock file opened at /var/run/saslauthd/cache.flock
saslauthd[12015] :ipc_init : using accept lock file: /var/run/saslauthd/mux.accept
saslauthd[12015] :detach_tty : master pid is: 0
saslauthd[12015] :ipc_init : listening on socket: /var/run/saslauthd/mux
saslauthd[12015] :main : using process model
saslauthd[12015] :have_baby : forked child: 12016
saslauthd[12015] :have_baby : forked child: 12017
saslauthd[12015] :have_baby : forked child: 12018
saslauthd[12015] :have_baby : forked child: 12019
saslauthd[12015] :get_accept_lock : acquired accept lock
saslauthd[12015] :rel_accept_lock : released accept lock
saslauthd[12017] :get_accept_lock : acquired accept lock
======================================================================
saslauthd[12015] :cache_get_rlock : attempting a read lock on slot: 15
saslauthd[12015] :cache_lookup : [login=srivasta] [service=] [realm=smtp]: not found, update pending
saslauthd[12015] :cache_un_lock : attempting to release lock on slot: 15
saslauthd[12015] :do_auth : auth failure: [user=srivasta] [service=smtp] [realm=] [mech=shadow] [reason=Unknown]
saslauthd[12015] :do_request : response: NO
saslauthd[12017] :rel_accept_lock : released accept lock
saslauthd[12017] :cache_get_rlock : attempting a read lock on slot: 15
saslauthd[12017] :cache_lookup : [login=srivasta] [service=] [realm=smtp]: not found, update pending
saslauthd[12017] :cache_un_lock : attempting to release lock on slot: 15
saslauthd[12016] :get_accept_lock : acquired accept lock
saslauthd[12017] :do_auth : auth failure: [user=srivasta] [service=smtp] [realm=] [mech=shadow] [reason=Unknown]
saslauthd[12017] :do_request : response: NO
======================================================================
saslauthd[12016] :rel_accept_lock : released accept lock
saslauthd[12018] :get_accept_lock : acquired accept lock
saslauthd[12016] :cache_get_rlock : attempting a read lock on slot: 15
saslauthd[12016] :cache_lookup : [login=srivasta] [service=] [realm=smtp]: not found, update pending
saslauthd[12016] :cache_un_lock : attempting to release lock on slot: 15
saslauthd[12016] :cache_get_wlock : attempting a write lock on slot: 15
saslauthd[12016] :cache_commit : lookup committed
saslauthd[12016] :cache_un_lock : attempting to release lock on slot: 15
saslauthd[12016] :do_auth : auth success: [user=srivasta] [service=smtp] [realm=] [mech=shadow]
saslauthd[12016] :do_request : response: OK
======================================================================

There is certainly something wrong with the way the local sendmail daemon talks to the remote mail server. The question now is what, exactly, the problem is.
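Since icedove succeeds and the sendmail client fails against the very same saslauthd, one suspect is the AUTH exchange sendmail builds. For AUTH PLAIN the token is the base64 encoding of authzid NUL authcid NUL password, so a known-good token can be built by hand and compared against what the client actually puts on the wire (the password here is a placeholder):

```shell
# AUTH PLAIN token: base64 of "\0" + authcid + "\0" + password;
# the leading \0 is an empty authzid. "secret" is a placeholder password.
printf '\0srivasta\0secret' | base64
# -> AHNyaXZhc3RhAHNlY3JldA==
```

The resulting string is what should follow "AUTH PLAIN" in the SMTP dialogue; a mismatch there would point at the client side rather than at saslauthd.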
        manoj
--
Manoj Srivastava <sriva...@acm.org> <http://www.golden-gryphon.com/>
4096R/C5779A1C E37E 5EC5 2A01 DA25 AD20 05B6 CF48 9438 C577 9A1C