Re: Virtual reject reason
On Fri, Jan 08, 2010 at 02:33:37PM -0500, Christopher Hackman wrote:
> Is it possible to customize the following error message?
>
>   MAIL FROM: u...@remotedomain.com
>   250 2.1.0 Ok
>   RCPT TO: invalidacco...@virtualdomain.com
>   550 5.1.1 invalidacco...@virtualdomain.com: Recipient address
>   rejected: virtualdomain.com
>
> In this sanitized example, virtualdomain.com is just that, as is
> invalidaccount. I see unknown_virtual_mailbox_reject_code will let
> me change the response code, but not the text.

Right. A fairly straightforward (not simple, but doable) workaround is to implement a check_recipient_access lookup against your list of valid addresses. Your virtual_mailbox_maps query will not work as is, but a little bit of SQL/LDAP magic or a simple policy service could do it. Pseudocode:

    if domain matches virtual_mailbox_domains:
        if u...@domain is found in virtual_mailbox_maps:
            return DUNNO
        else:
            return 550 5.1.1 u...@domain Your-custom-reject-text

    (repeat for other address classes in use)

Perhaps one of the existing publicly available policy servers can already do this; I don't know. Is this really worth the trouble? I would think not, but if you still want to do it, check out these references:

Access controls:
    http://www.postfix.org/SMTPD_ACCESS_README.html
    http://www.postfix.org/access.5.html

SQL or LDAP interaction:
    http://www.postfix.org/MYSQL_README.html
    http://www.postfix.org/mysql_table.5.html
    http://www.postfix.org/PGSQL_README.html
    http://www.postfix.org/pgsql_table.5.html
    http://www.postfix.org/LDAP_README.html
    http://www.postfix.org/ldap_table.5.html

Policy service protocol:
    http://www.postfix.org/SMTPD_POLICY_README.html

and see this for links to existing policy server projects:
    http://www.postfix.org/addon.html#policy

-- 
Offlist mail to this address is discarded unless /dev/rob0 or not-spam is in Subject: header
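To make the pseudocode above concrete, here is a minimal sketch of the decision logic such a custom policy service might implement. The Postfix policy protocol sends name=value attributes terminated by a blank line, and the service answers with an "action=" line; the domain and mailbox sets below are hypothetical stand-ins for real virtual_mailbox_domains and virtual_mailbox_maps lookups, not part of any existing policy server.

```python
# Sketch of a check_recipient_access policy decision, assuming the
# standard Postfix policy protocol (name=value pairs, blank-line
# terminated request, "action=..." reply). VIRTUAL_DOMAINS and
# VIRTUAL_MAILBOXES are hypothetical stand-ins for the real
# virtual_mailbox_domains / virtual_mailbox_maps lookups.

VIRTUAL_DOMAINS = {"virtualdomain.com"}
VIRTUAL_MAILBOXES = {"user@virtualdomain.com"}

def check_recipient(recipient: str) -> str:
    """Return the policy action for one RCPT TO address."""
    try:
        local_part, domain = recipient.rsplit("@", 1)
    except ValueError:
        return "DUNNO"          # malformed address; let Postfix decide
    if domain.lower() in VIRTUAL_DOMAINS:
        if recipient.lower() in VIRTUAL_MAILBOXES:
            return "DUNNO"      # valid mailbox; continue other checks
        return "550 5.1.1 %s Your-custom-reject-text" % recipient
    return "DUNNO"              # not in this address class

def handle_request(lines):
    """Turn one policy request (a list of 'name=value' lines) into a reply."""
    attrs = dict(line.split("=", 1) for line in lines if "=" in line)
    return "action=%s\n\n" % check_recipient(attrs.get("recipient", ""))
```

A real service would wrap handle_request() in a socket loop and be hooked into smtpd_recipient_restrictions with check_policy_service; see SMTPD_POLICY_README for the full attribute list.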
Re: Confusing sasl configuration examples
Bonjour mouss,

On Fri, Jan 08, 2010 at 09:53:42PM +0100, mouss wrote:
> /dev/rob0 wrote:
> > On Fri, Jan 08, 2010 at 10:23:38AM -0500, Wietse Venema wrote:
> > > /dev/rob0:
> > > The purpose of the submission service is to accept mail only
> > > from authenticated clients.
> >
> > This, I understand.
> >
> > > The above submission entry implements this particular
> > > requirement without depending on main.cf settings.
> >
> > This, I do not.
> >
> >     $ /usr/sbin/postconf -dh smtpd_recipient_restrictions
> >     permit_mynetworks, reject_unauth_destination
> >
> > If a client from outside $mynetworks attempts to relay to external
> > addresses, and AUTH succeeds, it passes smtpd_client_restrictions.
> > But in smtpd_recipient_restrictions it gets "Relay access denied."
> > It would work if either the client is in $mynetworks, or if the
> > main.cf setting of smtpd_recipient_restrictions has had
> > permit_sasl_authenticated added as per SASL_README.
> >
> > I'm still confused; the point of confusion being that of purpose
> > and utility. Wietse said above, "The purpose of the submission
> > service is to accept mail only from authenticated clients." Fine.
> > But I think it's rather useless unless it enables offsite users to
> > relay to any address, internal or external. The master.cf example
> > does not cover this unless, as I noted, the default
> > smtpd_recipient_restrictions has been changed.
> >
> > I don't see much real-world use for this, assuming basically
> > default settings, as documentation examples must. Do you?
> >
> > 1. An authenticated TLS client in $mynetworks can send anywhere
> >    using this example. So what? That client can do the same on
> >    port 25 without the trouble of TLS AUTH, with default settings.
> >
> > 2. An authenticated TLS client outside $mynetworks can send to any
> >    local/virtual/relay domains using this example. So what? If that
> >    client can get in on port 25, it can do the same without TLS
> >    AUTH, with default settings.
>
> This is done for robustness reasons.

I think, as the OP noted, that the example is confusing, and should be changed as follows:

    #submission inet n       -       n       -       -       smtpd
    #  -o smtpd_tls_security_level=encrypt
    #  -o smtpd_sasl_auth_enable=yes
    #  -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject

I think my suggestion makes a more useful real-world submission service, that's all. Don't you allow your authenticated submission users to relay?

Clearly, the OP had read enough of the documentation to understand how to make a useful submission service, else the question would never have been asked, so indeed no harm resulted from the confusion. And Wietse can take it or leave it. No reply expected nor necessary in either case, so let's move on. :)

-- 
Offlist mail to this address is discarded unless /dev/rob0 or not-spam is in Subject: header
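For reference, this is roughly what the suggested submission entry would look like once uncommented, alongside the main.cf alternative from SASL_README. Treat it as a sketch rather than a drop-in configuration: column spacing in master.cf and your existing restriction lists will vary per installation.

```
# master.cf (sketch): a submission service that lets authenticated
# clients relay, without depending on the global main.cf setting of
# smtpd_recipient_restrictions.
submission inet n       -       n       -       -       smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
  -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject

# Alternatively, per SASL_README, extend the global main.cf setting
# so authenticated clients can relay on any smtpd service:
# smtpd_recipient_restrictions =
#     permit_mynetworks,
#     permit_sasl_authenticated,
#     reject_unauth_destination
```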
Sender based relay server
Hi all.

Our internal postfix server relays all outbound mail through an external host. How can I set it to use a different relay server when the email comes from a specified domain? E.g.:

    j...@domain1.com - xxx.xxx.xxx.xxx (default)
    m...@domain2.com - yyy.yyy.yyy.yyy

Thanks.
-JK
Re: Sender based relay server
Jack Knowlton:
> Hi all. Our internal postfix server relays all outbound mail thru an
> external host. How can I set it to use a different relay server when
> the email comes from a specified domain? Eg. j...@domain1.com -
> xxx.xxx.xxx.xxx (default), m...@domain2.com - yyy.yyy.yyy.yyy

Postfix 2.3 and later:
http://www.postfix.org/postconf.5.html#sender_dependent_relayhost_maps

And perhaps:
http://www.postfix.org/SOHO_README.html

Wietse
Re: Sender based relay server
Jack Knowlton put forth on 1/9/2010 9:57 AM:
> Hi all. Our internal postfix server relays all outbound mail thru an
> external host. How can I set it to use a different relay server when
> the email comes from a specified domain? Eg. j...@domain1.com -
> xxx.xxx.xxx.xxx (default), m...@domain2.com - yyy.yyy.yyy.yyy

This might help ya:

    sender_dependent_relayhost_maps (default: empty)
        A sender-dependent override for the global relayhost parameter
        setting. The tables are searched by the envelope sender address
        and @domain. A lookup result of DUNNO terminates the search
        without overriding the global relayhost parameter setting
        (Postfix 2.6 and later). This information is overruled with
        relay_transport, sender_dependent_default_transport_maps,
        default_transport and with the transport(5) table.

        For safety reasons, this feature does not allow $number
        substitutions in regular expression maps.

        This feature is available in Postfix 2.3 and later.

-- 
Stan
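As a sketch of how the parameter is typically wired up for the scenario in the question (the file path and the bracketed IP placeholders here mirror the question and are not a tested configuration):

```
# main.cf (sketch; placeholder addresses from the question)
relayhost = [xxx.xxx.xxx.xxx]
sender_dependent_relayhost_maps = hash:/etc/postfix/sender_relay

# /etc/postfix/sender_relay
# Lookups are tried on the full envelope sender first, then @domain:
# @domain2.com    [yyy.yyy.yyy.yyy]

# Rebuild the indexed map after editing:
#   postmap /etc/postfix/sender_relay
#   postfix reload
```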
Re: Huge active queue and system idle, not delivering
Hi,

I will try all your advice, but something is still very strange to me: the postfix logs show that the EHLO exchange is very slow through postfix but very fast by hand. I have even recorded it with tcpdump/Wireshark and I can see that messages are sent very quickly, in about 1 second. But messages still go out at a rate of a dozen per 10 seconds. That means messages are being sent one by one.

Suppose the connections to the qmail servers are slow, or the qmail servers are misconfigured, too slow, or anything else. When I do

    netstat -apn | grep :25

I get only a few connections from the postfix server to the qmail servers. Even if DNS+EHLO are slow (and precisely because DNS+EHLO seem to be slow), why don't I see hundreds of ESTABLISHED TCP connections?

I expected that postfix would deliver to all 30 qmail servers at the same time, and would manage hundreds of parallel deliveries over hundreds of parallel connections. Is there some parameter or some design rule that prevents it from doing so? I expected that postfix would use up its own CPU/memory creating these parallel delivery processes, and/or wait on the qmail servers, but on all servers at the same time, over multiple connections to each one. Am I correct? Or am I dreaming of another mail transport package?

Patrick
Re: Huge active queue and system idle, not delivering
Hi all,

I got these statistics:

Jan 9 19:15:21 postfix postfix/scache[18038]: statistics: start interval Jan 9 19:09:03
Jan 9 19:15:21 postfix postfix/scache[18038]: statistics: domain lookup hits=110 miss=89 success=55%
Jan 9 19:15:21 postfix postfix/scache[18038]: statistics: address lookup hits=0 miss=2492 success=0%
Jan 9 19:15:21 postfix postfix/scache[18038]: statistics: max simultaneous domains=1 addresses=4 connection=4

What do "miss=89 success=55%" and "miss=2492 success=0%" mean?

Thanks,
Patrick
Re: Huge active queue and system idle, not delivering
Patrick Chemla put forth on 1/9/2010 11:17 AM:
> Hi all, I got these statistics:
>
> Jan 9 19:15:21 postfix postfix/scache[18038]: statistics: start interval Jan 9 19:09:03
> Jan 9 19:15:21 postfix postfix/scache[18038]: statistics: domain lookup hits=110 miss=89 success=55%
> Jan 9 19:15:21 postfix postfix/scache[18038]: statistics: address lookup hits=0 miss=2492 success=0%
> Jan 9 19:15:21 postfix postfix/scache[18038]: statistics: max simultaneous domains=1 addresses=4 connection=4
>
> What means miss=89 success=55%, miss=2492 success=0%?

http://www.postfix.com/CONNECTION_CACHE_README.html

-- 
Stan
Re: Huge active queue and system idle, not delivering
Hi Stan,

Thanks for your interest.

Le 09/01/2010 20:21, Stan Hoeppner wrote:
> Patrick Chemla put forth on 1/9/2010 11:17 AM:
> > What means miss=89 success=55%, miss=2492 success=0%?
>
> http://www.postfix.com/CONNECTION_CACHE_README.html
>
> -- 
> Stan

I went there but did not find explanations about the address lookup or domain lookup misses. While I have 122,000 messages in the active queue, I still don't understand why the statistics show max simultaneous domains=1. It should be dozens, or hundreds.

Patrick
Re: Huge active queue and system idle, not delivering
Patrick Chemla put forth on 1/9/2010 11:07 AM:
> Hi, I will try all your advises, but something still very strange for
> me: We see that postfix logs show that ehlo process is very slow
> through postfix but very fast by hand. Even I have recorded through
> tcpdump/WireShark and I can see that messages are sent very very very
> quickly in about 1 second. But still messages are sent at a rate of a
> dozen in 10 seconds. That means that messages are sent 1 by one. If
> connexion to qmail servers are slow, or if qmails are mis-parameted,
> too slow or anything else, when I do netstat -apn | grep :25 I get
> only a few connexions from postfix server to qmail servers. Even if
> DNS+EHLO are slow, and more, because DNS+EHLO seem to be slow, why I
> don't see hundreds TCP connexions ESTABLISHED?

This behavior is likely a result of the connection cache:
http://www.postfix.com/CONNECTION_CACHE_README.html

If one has a large amount of mail destined for a single host, it is inefficient to open dozens or hundreds of TCP and SMTP connections due to the additional overhead in process/thread count and memory consumption. It is much more efficient to pipeline all the mail through a single connection. One can only pump so many bits down the wire between two hosts. If you can fill the pipe to near capacity with one TCP/SMTP stream, why open hundreds of connections to do the same? I believe this is why you are not seeing dozens or hundreds of TCP connections. Postfix is intelligently designed to avoid this inefficiency.

> I expected that postfix will deliver on 30 qmail servers at the same
> time, and should manage hundreds parallel deliveries, hundreds
> parallel connexions. Is there some parameter or some conception rule
> that refrain him to do so? I expected that postfix will full up his
> own CPU/memory creating these parallel delivery processes or/and will
> wait after the qmail servers, but on all servers at the same time, on
> multiple connections to each one. Am I correct? or I am dreaming of
> another mail transport package?

As Victor and others have already stated:

1. In your previous configuration, you had multiple thousands of unique IP addresses (your customers) connecting directly to your 30 qmail servers to relay their mail. qmail performed fine with this configuration because no single qmail server was seeing thousands of delivery attempts per minute from any one IP address.

2. In your current Postfix configuration, your qmail servers are seeing a single IP address attempting to send multiple thousands of messages per minute, and qmail is reacting with rate-limiting countermeasures because of this.

You need to figure out which settings in the qmail configuration are controlling this rate throttling, and in what way. Once you find and change them, you should see a dramatic improvement in Postfix's ability to quickly move the mail out of the queue to the 30 qmail servers, most likely using a single or only a few TCP connections to each qmail server.

-- 
Stan
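For completeness, the knobs on the Postfix side that bound this parallelism are the per-destination delivery concurrency limits; the following is an illustrative sketch, not a recommendation for this setup (the commented values are the stock Postfix 2.x defaults, which may differ on a given build):

```
# main.cf (sketch): parameters bounding parallel deliveries to one
# destination. Shown commented with their stock defaults; values here
# are illustrative only.
# default_destination_concurrency_limit = 20
# smtp_destination_concurrency_limit = $default_destination_concurrency_limit

# Connection caching can make Postfix reuse one open connection
# instead of opening many; it can be disabled for testing with:
#   postconf -e smtp_connection_cache_on_demand=no
#   postfix reload
```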
Re: Huge active queue and system idle, not delivering
Patrick Chemla put forth on 1/9/2010 12:37 PM:
> I went there but did not find explanations about miss address lookup
> or miss domain lookup. While I have 122,000 messages in active queue
> I still don't understand why statistics show max simultaneous
> domains=1. It should be dozens, or hundreds.

Those are statistics relating to scache performance. They tell you how many domains or addresses were able to be delivered via scache reuse, i.e. how many emails Postfix was able to send through an already open SMTP connection to a given host. Since all of your qmail hosts are configured identically, and should be able to relay mail bound for any destination on the internet, you should never see anything less than ~100% in those statistics, _unless_ there is some other kind of problem. If your qmail servers are rate limiting via any method, and Postfix is attempting to send 2000 emails per minute down that one SMTP connection, then when qmail blocks individual deliveries for any reason, those scache failure statistics will increase.

-- 
Stan
Re: Huge active queue and system idle, not delivering
Le 09/01/2010 20:54, Stan Hoeppner wrote:
> Those are statistics relating to scache performance. It tells you how
> many domains or addresses were able to be delivered via scache reuse.
> I.e. how many emails Postfix was able to send through an already open
> SMTP connection to a given host. Since all of your qmail hosts are
> configured identically, and should be able to relay mail bound for
> any destination on the internet, you should never see anything less
> than ~100% in those statistics, _unless_ there is some other kind of
> problem.

You mean 100% success?

> If your qmail servers are rate limiting via any method, and Postfix
> is attempting to send 2000 emails per minute down that one SMTP
> connection, when qmail blocks individual deliveries for any reason,
> those scache failure statistics will increase.

Before I set up the postfix relay to load-balance between 30 qmail servers, each of them was able to accept hundreds of thousands of emails in its own queue. Emails were sent in campaigns of thousands, balanced across 3 qmail servers, each one maxing out its CPU/memory working hard to deliver. Instead of sending each campaign to only 3 qmail servers, I thought that by sending each campaign to 30 qmail servers I would cut each one's load by ten and speed up deliveries.

But now, postfix is retaining the emails in its own queue, not pushing them down to the qmail servers. The postfix server and the qmail servers are all about 90% CPU free. Only 1 to 9 connections exist at a time from postfix to the qmail servers.

This is exactly what I would like to happen: instead of a queue of 122,000 on postfix, I expect each qmail server to have a queue of 4000. The qmail servers did this before I set up postfix.

Patrick
Re: Huge active queue and system idle, not delivering
Patrick Chemla put forth on 1/9/2010 1:08 PM:
> You mean 100% success?

Yes.

> Before I set up the postfix relay to load balance between 30 qmail
> servers, each of them was able to accept in his own queue hundreds
> thousands email. Email were sent by campaigns of thousands balanced
> on 3 qmails servers, each one full in CPU/memory working hard to
> deliver. Instead of sending each campaign on only 3 qmails, I though
> that by sending each campaign on 30 qmails I will cut each one load
> by ten and speed up deliveries. But now, postfix is retaining the
> emails in his own queue, not pushing the queue down to the qmails.

An admirable technical goal. Can you elaborate on these campaigns? You said previously that you had hundreds of thousands of customers whose email you were relaying, as if you are an ISP. Now you are saying the mail load is generated by campaigns. What exactly are these campaigns?

> Postfix server and qmail servers are all about 90% cpu free. Only 1
> to 9 connexions exist at a time from postfix to qmails.

This is because the qmail servers won't let the postfix server send any faster. We've been over this multiple times now. Multiple people have told you the same thing. For this to work correctly, you need to figure out why the qmail servers are rate limiting the postfix server's deliveries.

> This is exactly what I would like to happen: Instead of a queue of
> 122,000 on postfix, I expect to have each qmail with a queue of 4000.
> Qmails did this before I set up postfix.

All MTAs have unique performance characteristics. You've changed one of the MTAs in your architecture. Now you must re-tune your qmail farm servers to work with the new MTA, postfix, which you have introduced. This is kinda IT 101 stuff. You can't automatically assume the problem lies with the new thing you introduced. Often, the new thing exposes problems or weaknesses that already existed in the old stuff.

-- 
Stan
Re: Huge active queue and system idle, not delivering
Patrick Chemla:
> Hi all, I got these statistics:
>
> Jan 9 19:15:21 postfix postfix/scache[18038]: statistics: start interval Jan 9 19:09:03
> Jan 9 19:15:21 postfix postfix/scache[18038]: statistics: domain lookup hits=110 miss=89 success=55%
> Jan 9 19:15:21 postfix postfix/scache[18038]: statistics: address lookup hits=0 miss=2492 success=0%
> Jan 9 19:15:21 postfix postfix/scache[18038]: statistics: max simultaneous domains=1 addresses=4 connection=4

Please try the following, as asked half a week ago:

    postconf -e smtp_connection_cache_on_demand=no
    postfix reload

and report if this makes a difference.

Wietse
Re: Huge active queue and system idle, not delivering
Wietse Venema:
> Patrick Chemla:
> > Hi all, I got these statistics:
> > [scache statistics log lines]
>
> Please try the following, as asked half a week ago:
>
>     postconf -e smtp_connection_cache_on_demand=no
>     postfix reload
>
> and report if this makes a difference.

Oh, and please limit the discussion to people who understand the hard technical internals of Postfix. Other people please stay out of the way.

Wietse
how are sysexits.h statuses interpreted
Hi.

Is there some documentation somewhere on how each of the exit codes from sysexits.h is interpreted by Postfix when used with pipe(8) (returned e.g. by maildrop)? I just know that EX_TEMPFAIL means the mail is deferred, and I assume EX_UNAVAILABLE leads to a bounce. What about the others?

But generally, if the exit status != 0, Postfix first looks at (I assume) the first line of stdout and interprets 4.X.X or 5.X.X as said in the manpage. If found, the EX_* codes are not interpreted, right?

Cheers,
Chris.
Re: how are sysexits.h statuses interpreted
Christoph Anton Mitterer:
> Hi. Is there somewhere some documentation how each of the exit codes
> from sysexits.h is interpreted by Postfix when used with pipe(8)
> (returned e.g. by maildrop)?

I naively assume that the sysexits.h names speak for themselves.

> I just know that EX_TEMPFAIL means that mail is deferred, and I
> assume EX_UNAVAILABLE leads to a bounce. What about the others?

EX_TEMPFAIL defers mail, as does EX_OSERR (system resource not available). All others are hard-coded as non-retryable. Making this configurable is a couple hours of work (design a user interface, implement the code, test the code, preferably with an automated test that exercises all the cases, document the user interface). The current mapping is in global/sys_exits.c.

> But generally, if the exit status != 0, Postfix first looks at (I
> assume) the first line of stdout and interprets 4.X.X or 5.X.X as
> said in the manpage. If found, the EX_* are not interpreted, right?

That behavior is documented in the pipe(8) and local(8) manpages, in the paragraphs that discuss RFC 3463 enhanced status codes. If there is anything not correct in that text, then I am sure you will post an improvement.

Wietse
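As an illustration of how a delivery program run by pipe(8) might use these codes, here is a small Python sketch. The failure classification is hypothetical (a real delivery agent like maildrop has its own logic); the point is only the mapping Wietse describes: EX_TEMPFAIL and EX_OSERR make Postfix defer, while other nonzero codes are treated as permanent failures.

```python
# Sketch of a pipe(8)-style delivery agent's exit-status choice.
# Per Wietse's explanation above: EX_TEMPFAIL (75) and EX_OSERR (71)
# cause Postfix to defer the message; any other nonzero status is
# non-retryable. The failure classification below is hypothetical.
import os

def classify_failure(exc: BaseException) -> int:
    """Map a delivery failure to a sysexits.h code (illustrative)."""
    # TimeoutError is a subclass of OSError in Python 3, so test it first.
    if isinstance(exc, TimeoutError):
        return os.EX_TEMPFAIL     # 75: transient, Postfix defers
    if isinstance(exc, (OSError, MemoryError)):
        return os.EX_OSERR        # 71: resource problem, also deferred
    return os.EX_UNAVAILABLE      # 69: permanent failure, mail bounces

def deliver(message: bytes) -> int:
    """Hypothetical delivery step; returns the exit status to use."""
    try:
        # ... write the message to the mailbox here (placeholder) ...
        return os.EX_OK           # 0: delivered successfully
    except Exception as exc:
        return classify_failure(exc)
```

A wrapper script would end with `sys.exit(deliver(sys.stdin.buffer.read()))`; alternatively, printing a leading "4.X.X ..." or "5.X.X ..." line on stdout overrides the EX_* interpretation, as the pipe(8) manpage describes.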
Re: how are sysexits.h statuses interpreted
On Sat, 2010-01-09 at 19:58 -0500, Wietse Venema wrote:
> EX_TEMPFAIL defers mail, as does EX_OSERR (system resource not
> available). All others are hard coded as non-retryable.

Thanks.

> Making this configurable is a couple hours of work (design a user
> interface, implement the code, test the code, preferable with an
> automated test that exercises all the cases, document the user
> interface). The current mapping is in global/sys_exits.c.

Ah, I see the mappings...

> If there is anything not correct in that text then I am sure you
> will post an improvement.

No no, everything is ok. I hope my previous remarks, on sections where I thought I had found errors or similar, are not disapproved of. If so, I'll stop trying to make such minor contributions.

Cheers,
Chris.
Re: how are sysexits.h statuses interpreted
Christoph Anton Mitterer:
> On Sat, 2010-01-09 at 19:58 -0500, Wietse Venema wrote:
> > EX_TEMPFAIL defers mail, as does EX_OSERR (system resource not
> > available). All others are hard coded as non-retryable.
>
> Thanks.
>
> > Making this configurable is a couple hours of work (design a user
> > interface, implement the code, test the code, preferable with an
> > automated test that exercises all the cases, document the user
> > interface). The current mapping is in global/sys_exits.c.
>
> Ah, I see the mappings...
>
> > If there is anything not correct in that text then I am sure you
> > will post an improvement.
>
> No no, everything ok. I hope my previous remarks on sections where I
> thought to have found errors or similar are not disapproved. If so
> I'll stop trying to make such minor contributions.

Constructive feedback is welcome. If a question can be answered by documentation, then I will refer to that text, but won't answer every question.

Wietse