[Dovecot] Test outgoing email on director setup

2014-04-02 Thread Murray Trainer
Hi All,

I have several Exim MTAs relaying mail to a pair of director
proxies via LMTP, which then relay to several mailstores, also via LMTP.
Incoming mail is working fine.

My outgoing mail uses LMTP as well, in the reverse of the above.  How do I
manually test outgoing mail on the mailstores and proxies, given that I
only have dovecot, and not exim, installed on them?

I have the following set in dovecot.conf for LMTP:

 submission_host = mailproxy01:24 mailproxy02:24

Hopefully that works with multiple submission hosts to give
redundancy?

Thanks

Murray
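One way to exercise the LMTP path by hand on a machine with no MTA installed is to speak the protocol directly to a proxy or mailstore. Below is a sketch of the RFC 2033 client dialogue; the host names and addresses are hypothetical placeholders, and this is an illustration, not a Dovecot tool:

```python
# Build the client side of a minimal LMTP transaction (RFC 2033).
# Note LMTP uses LHLO instead of HELO/EHLO; delivery status is returned
# per recipient after DATA.
def lmtp_dialogue(helo, sender, rcpt, body):
    lines = [
        f"LHLO {helo}",          # LMTP greeting
        f"MAIL FROM:<{sender}>",
        f"RCPT TO:<{rcpt}>",
        "DATA",
        body,
        ".",                     # end-of-data marker
        "QUIT",
    ]
    return "\r\n".join(lines) + "\r\n"

if __name__ == "__main__":
    print(lmtp_dialogue("test.example.com", "me@example.com",
                        "user@example.com",
                        "Subject: lmtp test\r\n\r\nhello"), end="")
```

The printed dialogue can be piped into e.g. `nc mailproxy01 24` for a manual test; Python's `smtplib.LMTP` class performs the same transaction with proper response checking.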



[Dovecot] test

2013-02-08 Thread Timo Sirainen
The mailman archive process seems to have crashed and hasn't been writing to
the archives. Let's see if it works again after a restart..




Re: [Dovecot] Test suite?

2012-01-27 Thread Timo Sirainen
On 28.1.2012, at 0.57, Kyle Lafkoff wrote:

> I am building a RPM for dovecot. Is there a test suite available I could use 
> during the build to verify proper functionality? Thanks!

It would be nice to have a proper finished test suite testing all kinds of 
functionality. Unfortunately I haven't had time to write such a thing, and no 
one has tried to help create one.

There is "make check" that you can run, which goes through some unit tests, but 
it's not very useful in catching bugs.

There is also imaptest tool (http://imapwiki.org/ImapTest), which is very 
useful in catching bugs. I've been planning on creating a comprehensive test 
suite by creating Dovecot-specific scripts for imaptest and running them 
against many different Dovecot configurations (mbox/maildir/sdbox/mdbox formats 
each against different kinds of namespaces, as well as many other tests). That 
plan has existed for several years now, but unfortunately only in my head.

Perhaps soon I can hire someone else to do that via my company. :)



[Dovecot] Test suite?

2012-01-27 Thread Kyle Lafkoff
Hi

I am building a RPM for dovecot. Is there a test suite available I could use 
during the build to verify proper functionality? Thanks!

Kyle

Re: [Dovecot] test

2011-09-08 Thread Timo Sirainen
On Thu, 2011-09-08 at 12:41 +0300, Timo Sirainen wrote:

> I'm not aware of any such bugs ever existing in dovecot-lda. You could
> check this by having Exim internally deliver mails from that site to
> some other maildir/mbox file, and check if the empty line exists there
> also. I don't know the specifics of how to configure Exim this way.

Oh, or another possibility: instead of executing dovecot-lda directly,
execute dovecot-lda.sh which contains something like (warning: totally
untested):

#!/bin/sh

# Capture the incoming message to a temp file so it can be both
# inspected and delivered.
tmpfile=`mktemp`
cat > "$tmpfile"

# Keep a copy of any message from transfer.ro for later inspection.
if grep -q "^From.*transfer\.ro" "$tmpfile"; then
  cp "$tmpfile" /tmp/transfer.ro.`date +%s`
fi

/usr/local/libexec/dovecot/dovecot-lda "$@" < "$tmpfile"
ret=$?
rm -f "$tmpfile"
exit $ret
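To wire the wrapper in, the Exim transport would execute the script in place of the binary. A hypothetical pipe-transport fragment (the transport name, path, and options are illustrative, not taken from the thread):

```
# Exim transport sketch: run the wrapper script instead of dovecot-lda
dovecot_lda:
  driver = pipe
  command = /usr/local/libexec/dovecot/dovecot-lda.sh -d $local_part@$domain
```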




Re: [Dovecot] test

2011-09-08 Thread Timo Sirainen
On Thu, 2011-09-08 at 12:00 +0300, Adrian Stoica wrote:
> Hello
> i use dovecot 2.0.14 , with exim 4.76 using dovecot-lda.
> 
> We have the following problem: when I receive mail from the site 
> http://www.transfer.ro, which is a file transfer site, most emails 
> appear to be empty.
> Empty rows appear in the email body, slipped in among the existing ones, and 
> this makes my mail client show me an empty mail. You can see the content 
> only by viewing the message source.
> 
> instead of
> "- np4e68592849da7
> Content-type: text / plain, charset = utf-8
> "
> appear
> 
> "- np4e68592849da7
> 
> Content-type: text / plain, charset = utf-8 " , and that blank line 
> spoil everything.
> 
> Can you check if there is something wrong?

I'm not aware of any such bugs ever existing in dovecot-lda. You could
check this by having Exim internally deliver mails from that site to
some other maildir/mbox file, and check if the empty line exists there
also. I don't know the specifics of how to configure Exim this way.




[Dovecot] test

2011-09-08 Thread Adrian Stoica

Hello
I use dovecot 2.0.14 with exim 4.76, using dovecot-lda.

We have the following problem: when I receive mail from the site 
http://www.transfer.ro, which is a file transfer site, most emails 
appear to be empty.
Empty rows appear in the email body, slipped in among the existing ones, and 
this makes my mail client show me an empty mail. You can see the content 
only by viewing the message source.


instead of
"- np4e68592849da7
Content-type: text / plain, charset = utf-8
"
appear

"- np4e68592849da7

Content-type: text / plain, charset = utf-8 ", and that blank line 
spoils everything.


Can you check if there is something wrong?

Re: [Dovecot] test emails did not arrive at SMTP server : after dovecot installation

2011-02-16 Thread Jerry
On Wed, 16 Feb 2011 22:38:45 +0800
sunhux G  articulated:

> Just set up postfix & it's running on my RHES 4.2 box.
> 
> Immediately after postfix is up, I test sending emails from a
> permitted domain
> (ahhh, on this postfix server's domain firewall, we even have a
> firewall rule
>  which permits Tcp25 from those few sending domains' SMTP servers)
> using an email client  to
> sender_id@[IP_address_of_the_postfix_server]  & the /var/log/maillog
> on the postfix server indicated the email arrives at the postfix
> server (with some errors though) :
> 
> # grep recipient_id /var/log/maillog*
> maillog:Feb 15 11:41:52 hostname postfix/smtpd[6891]: NOQUEUE:
> reject: RCPT from gate1.mds.com.sg[203.126.130.157]: 554 5.7.1
> :
> Relay access denied; from=
> to= proto=ESMTP helo=
> maillog:Feb 15 13:43:20 hostname sendmail[7688]: NOQUEUE:
> SYSERR(recipient_id): can not chdir(/var/spool/mqueue/): Permission
> denied
> 
> Then I installed dovecot rpm on my RHES box : uninstall it as it's an
> old version &
> reinstall with a newer version & start up dovecot as well.
> 
> I did not test sending to sender_id@domain_name at that time because
> the domain I purchased from a domain provider/registrar has yet to be
> registered in our ISP's DNS.  Subsequently I registered the following
> A, MX & NS records with our ISP :
> 
> A: myportaltech.com. IN A 202.6.163.31
> A: smtp.myportaltech.com. IN A 202.6.163.31
> 
> PTR: 31.163.6.202.in-addr.arpa. IN PTR smtp.myportaltech.com.
> 
> MX: myportaltech.com. IN MX 10 smtp.myportaltech.com.
> 
> NS: myportaltech.com.IN NS ns1.businessexprezz.com.
> NS: myportaltech.com.IN NS ns2.businessexprezz.com.
> 
> The above myportaltech is just a fictitious name of my domain but I
> can provide the actual domain name if needed.
> 
> After the above records have been propagated to all other DNSes, I
> test sending email from the same permitted domain, this time using
> domain name & the email never arrives & I did not receive a 'bounced
> mail' notification too.  Then I test sending from the same domain to
> recipient_id@[202.6.163.31] & this time round, the test email never
> show up in /var/log/maillog* anymore.
> 
> The network/security guys confirmed that the firewall logs did not
> show any denied SMTP records.
> 
> So how do I go about troubleshooting this?
> 
> Is this a DNS record entries issue, firewall/network issue, related to
> dovecot or something within my postfix server?

Unless I am misreading this, it is a Postfix problem. I would strongly
suggest that you ask your question on their forum. Before doing so,
please read the documentation on:
http://www.postfix.com/DEBUG_README.html

I would also suggest that you follow the instruction for "Reporting
problems to postfix-us...@postfix.org" located at the end of the
document. Provide the output from the postfinger tool. This can be
found at http://ftp.wl0.org/SOURCES/postfinger.
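As a pointer for the "Relay access denied" rejection quoted above: Postfix returns that when the recipient domain is in neither mydestination nor relay_domains. A hypothetical main.cf fragment, with values modeled on the names in the post (adjust to the real domain and host):

```
# main.cf sketch: accept and deliver mail for the domain locally
myhostname = smtp.myportaltech.com
mydestination = $myhostname, localhost, myportaltech.com
```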

-- 
Jerry ✌
dovecot.u...@seibercom.net

Disclaimer: off-list followups get on-list replies or get ignored.
Please do not ignore the Reply-To header.
__
It is easy when we are in prosperity to give advice to the afflicted.


Aeschylus


[Dovecot] test emails did not arrive at SMTP server : after dovecot installation

2011-02-16 Thread sunhux G
Just set up postfix & it's running on my RHES 4.2 box.

Immediately after postfix was up, I tested sending emails from a permitted
domain (ahhh, on this postfix server's firewall we even have a rule which
permits TCP 25 from those few sending domains' SMTP servers) using an email
client to sender_id@[IP_address_of_the_postfix_server], & /var/log/maillog
on the postfix server indicated the email arrived at the postfix server
(with some errors though):

# grep recipient_id /var/log/maillog*
maillog:Feb 15 11:41:52 hostname postfix/smtpd[6891]: NOQUEUE: reject: RCPT
from gate1.mds.com.sg[203.126.130.157]: 554 5.7.1
:
Relay access denied; from=
to= proto=ESMTP helo=
maillog:Feb 15 13:43:20 hostname sendmail[7688]: NOQUEUE:
SYSERR(recipient_id): can not chdir(/var/spool/mqueue/): Permission denied

Then I installed the dovecot rpm on my RHES box: I uninstalled it as it was
an old version, reinstalled a newer version, & started up dovecot as well.

I did not test sending to sender_id@domain_name at that time because the
domain I purchased from a domain provider/registrar had yet to be registered
in our ISP's DNS.  Subsequently I registered the following A, MX & NS
records with our ISP:

A: myportaltech.com. IN A 202.6.163.31
A: smtp.myportaltech.com. IN A 202.6.163.31

PTR: 31.163.6.202.in-addr.arpa. IN PTR smtp.myportaltech.com.

MX: myportaltech.com. IN MX 10 smtp.myportaltech.com.

NS: myportaltech.com. IN NS ns1.businessexprezz.com.
NS: myportaltech.com. IN NS ns2.businessexprezz.com.

The above myportaltech is just a fictitious name of my domain but I
can provide the actual domain name if needed.

After the above records had propagated to all the other DNS servers, I
tested sending email from the same permitted domain, this time using the
domain name, & the email never arrived & I did not receive a 'bounced
mail' notification either.  Then I tested sending from the same domain to
recipient_id@[202.6.163.31] & this time round, the test email never
showed up in /var/log/maillog* at all.

The network/security guys confirmed that the firewall logs did not
show any denied SMTP records.

So how do I go about troubleshooting this?

Is this a DNS record entries issue, firewall/network issue, related to
dovecot or something within my postfix server?


Thanks
Sun


Re: [Dovecot] test

2010-03-25 Thread Mark Sapiro
On 11:59 AM, Timo Sirainen wrote:
> On Thu, 2010-03-25 at 16:17 +0200, to...@example.com wrote:
>> test
> 
> Ugh. I guess mailman doesn't use From: line to check if user is
> subscribed. :)


In a default Mailman installation, a post is considered to be from a
list member if the envelope sender or any of the From:, Reply-To: or
Sender: headers contain a member address.

To change this set SENDER_HEADERS in mm_cfg.py (documented in Defaults.py).
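As a sketch of that setting (the default shown here matches what Defaults.py documents for Mailman 2.1; treat the values as illustrative):

```python
# mm_cfg.py sketch: which headers Mailman checks for a member address.
# None stands for the envelope sender. Trimming the tuple tightens matching.
SENDER_HEADERS = ('from', None, 'reply-to', 'sender')  # the stock default

# e.g. to match on the envelope sender only, one would instead set:
# SENDER_HEADERS = (None,)
```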

-- 
Mark Sapiro                            The highway is for gamblers,
San Francisco Bay Area, California     better use your sense - B. Dylan



Re: [Dovecot] test

2010-03-25 Thread Sabahattin Gucukoglu
On 25 Mar 2010, at 14:19, Timo Sirainen wrote:
> On Thu, 2010-03-25 at 16:17 +0200, to...@example.com wrote:
>> test
> 
> Ugh. I guess mailman doesn't use From: line to check if user is
> subscribed. :)

It does if you ask it to.  It uses the From: header as the authentication for 
the sender.  So I change my bounce and reply address but leave the From: 
intact, and then I get bounces and replies without filtering.

Cheers,
Sabahattin





Re: [Dovecot] test

2010-03-25 Thread Timo Sirainen
On Thu, 2010-03-25 at 16:17 +0200, to...@example.com wrote:
> test

Ugh. I guess mailman doesn't use From: line to check if user is
subscribed. :)





[Dovecot] test

2010-03-25 Thread total
test




[Dovecot] Test environment question

2009-10-27 Thread Stewart Dean
I want to test out my first V1.2 Dovecot instance (upgraded from V1.1).  
What I have in mind is to run it on another machine that has the 
Inbox dir and home dirs NFS-mounted from the production 
mailserver.  I would then have 5 people test it in this test environment.


A) Then I can deal with the index filesystem in one of two ways:
  1) Make it local, OR
  2) NFS-mount it from the production DC server
Comments as to which is best?  I have used #1 before...which caused some 
temporary unhappiness with the switchover and switchback, during which 
time the index is badly wrong and DC auto-rebuilds it...


B) Is there anything else I should do/not do? 
C) Any ugliness that will surface in this testing lash-up but isn't 
actually important?
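Option A-1 (local indexes) is usually expressed as an INDEX override in mail_location. A dovecot.conf sketch with hypothetical paths:

```
# Mail and INBOX stay on the NFS mounts; indexes go to a local filesystem.
mail_location = mbox:~/mail:INBOX=/var/spool/mail/%u:INDEX=/var/local/dovecot-indexes/%u
```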


--
 Stewart Dean, Unix System Admin, Henderson Computer Resources 
Center of Bard College, Annandale-on-Hudson, New York 12504 
sd...@bard.edu voice: 845-758-7475, fax: 845-758-7035


Re: [Dovecot] test of mailing list

2009-05-27 Thread Konstantin Khomoutov

Pascal Volk wrote:
> > Im sendding messages to the list and they do not show up.
>
> Who wrote these messages to the mailing list?
> * http://dovecot.org/list/dovecot/2009-May/039893.html
> * http://dovecot.org/list/dovecot/2009-May/039902.html

It's possible to switch off reception of one's own messages in the mailman 
settings. The original poster might have this problem.


Re: [Dovecot] test of mailing list

2009-05-27 Thread Pascal Volk
On 05/27/2009 05:04 PM Carlos Xavier wrote:
> Im sendding messages to the list and they do not show up.

Who wrote these messages to the mailing list?
* http://dovecot.org/list/dovecot/2009-May/039893.html
* http://dovecot.org/list/dovecot/2009-May/039902.html


Regards,
Pascal
-- 
The trapper recommends today: c01dcofe.0914...@localdomain.org


[Dovecot] test of mailing list

2009-05-27 Thread Carlos Xavier

I'm sending messages to the list and they do not show up.

It's just a test, please ignore.

Regards,
Carlos Xavier.


Re: [Dovecot] Test environment question

2008-10-09 Thread Timo Sirainen
The code has some checks so that if posix_fallocate() fails with a
specific errno it stops trying to use it. Maybe it hits that condition
at some point. Or maybe the code just isn't called for some reason, I
don't really know..


On Oct 9, 2008, at 6:45 PM, Stewart Dean wrote:

I have a call open to IBM with their Compiler group on this to see
if this can't be fixed right.  A side question: how come is it that
this happens when the session starts up and reoccurs periodically
for the first day or so...and then not again unless and until those
imap process sessions are closed out?

Timo Sirainen wrote:

On Fri, 2008-10-03 at 14:33 -0400, Stewart Dean wrote:

I am seeing posix_fallocate and file_set_size errmsgs in the mail
syslog, but see a pattern:

1) They only happen with the /var/spool/mail inbox NOT with any of
the /home folders and appear to be happening every 10 minutes from the
time I started DC (9AM, 10/1/98) until 11AM, 10/2...and then ceased
The every ten minute message sets looked like this:
 > Oct  1 22:30:31 egg mail:err|error dovecot: IMAP(sdean): posix_fallocate() failed: Resource temporarily unavailable

The main problem here is that posix_fallocate() is broken in your AIX
(v1.0 doesn't even try to use it). My previous patch attempted to make
Dovecot detect this and silently fall back to not using it, but
apparently it can fail in more ways. I thought about adding another
check for EAGAIN, but perhaps posix_fallocate() just returns the
previous errno so it can't be checked that way. So I moved the check to
configure instead:

http://hg.dovecot.org/dovecot-1.1/rev/12565ef10d1c

Alternatively you could just remove HAVE_POSIX_FALLOCATE from config.h
after running configure. Or yet another way would be to try to find out
if it's already been fixed in AIX. This looks related:
http://www-01.ibm.com/support/docview.wss?uid=isg1IY77112

3) However, then there was the following:
a) If I used webmail, which accessed the production server and got the
indices on my test server out of sync, I got this error message in the
mail syslog on my test server:

Oct  3 12:20:23 egg mail:err|error dovecot: IMAP(sdean): mbox sync: UID inserted in the middle of mailbox /var/spool/mail/sdean (648818 > 648046, seq=1153, idx_msgs=1187)

v1.1 also has a bug that can cause this, although normally it should be
visible only when index files aren't being used, or they're out of sync
for some reason. This'll fix it:
http://hg.dovecot.org/dovecot-1.1/rev/a5bf7e12f3cc

Oct  3 12:44:58 egg mail:info dovecot: imap-login: Maximum number of connections from user+IP exceeded: user=, method=PLAIN, rip=10.20.10.169, lip=192.246.229.31

Turns out I had 10+ sessions, one back from yesterday, so I killed them
all and could get mail, but...about six minutes later, I had the two
posix_fallocate and file_set_size errmsgs again after not having any for
a day.  So something about new connections maybe causes this?

Any ideas why:
a) I am having leftover IMAP sessions on my test server?  This doesn't
happen on my production DC V1.0 server

Are you sure? Perhaps you just didn't notice them since v1.0 didn't have
any limits to how many were allowed? I think it's more likely that the
client(s) really just left that many connections. So the choices are:

a) Increase mail_max_userip_connections setting.

b) Figure out where the sessions are from and see if you can do
something about them on the client side. In Thunderbird there's a
setting which specifies how many connections it can use.







Re: [Dovecot] Test environment question

2008-10-09 Thread Stewart Dean
I have a call open to IBM with their Compiler group on this to see if 
this can't be fixed right.  A side question: how come is it that this 
happens when the session starts up and recurs periodically for the 
first day or so...and then not again unless and until those imap process 
sessions are closed out?



Timo Sirainen wrote:

On Fri, 2008-10-03 at 14:33 -0400, Stewart Dean wrote:

I am seeing posix_fallocate and file_set_size errmsgs in the mail syslog, but
see a pattern:

1) They only happen with the /var/spool/mail inbox NOT with any of the /home
folders and appear to be happening every 10 minutes from the time I started DC
(9AM, 10/1/98) until 11AM, 10/2...and then ceased
The every ten minute message sets looked like this:
 > Oct  1 22:30:31 egg mail:err|error dovecot: IMAP(sdean): posix_fallocate() failed: Resource temporarily unavailable

The main problem here is that posix_fallocate() is broken in your AIX
(v1.0 doesn't even try to use it). My previous patch attempted to make
Dovecot detect this and silently fall back to not using it, but
apparently it can fail in more ways. I thought about adding another
check for EAGAIN, but perhaps posix_fallocate() just returns the
previous errno so it can't be checked that way. So I moved the check to
configure instead:

http://hg.dovecot.org/dovecot-1.1/rev/12565ef10d1c

Alternatively you could just remove HAVE_POSIX_FALLOCATE from config.h
after running configure. Or yet another way would be to try to find out
if it's already been fixed in AIX. This looks related:
http://www-01.ibm.com/support/docview.wss?uid=isg1IY77112

3) However, then there was the following:
a) If I used webmail, which accessed the production server and got the indices
on my test server out of sync, I got this error message in the mail syslog
on my test server:

Oct  3 12:20:23 egg mail:err|error dovecot: IMAP(sdean): mbox sync: UID inserted in the middle of mailbox /var/spool/mail/sdean (648818 > 648046, seq=1153, idx_msgs=1187)

v1.1 also has a bug that can cause this, although normally it should be
visible only when index files aren't being used, or they're out of sync
for some reason. This'll fix it:
http://hg.dovecot.org/dovecot-1.1/rev/a5bf7e12f3cc

Oct  3 12:44:58 egg mail:info dovecot: imap-login: Maximum number of connections from user+IP exceeded: user=, method=PLAIN, rip=10.20.10.169, lip=192.246.229.31

Turns out I had 10+ sessions, one back from yesterday, so I killed them all and
could get mail, but...about six minutes later, I had the two posix_fallocate and
file_set_size errmsgs again after not having any for a day.  So something about
new connections maybe causes this?

Any ideas why:
a) I am having leftover IMAP sessions on my test server?  This doesn't happen on
   my production DC V1.0 server

Are you sure? Perhaps you just didn't notice them since v1.0 didn't have
any limits to how many were allowed? I think it's more likely that the
client(s) really just left that many connections. So the choices are:

a) Increase mail_max_userip_connections setting.

b) Figure out where the sessions are from and see if you can do
something about them on the client side. In Thunderbird there's a
setting which specifies how many connections it can use.




Re: [Dovecot] Test environment question

2008-10-05 Thread Timo Sirainen
On Fri, 2008-10-03 at 14:33 -0400, Stewart Dean wrote:
> I am seeing posix_fallocate and file_set_size errmsgs in the mail syslog, but
> see a pattern:
> 
> 1) They only happen with the /var/spool/mail inbox NOT with any of the /home
> folders and appear to be happening every 10 minutes from the time I started DC
> (9AM, 10/1/98) until 11AM, 10/2...and then ceased
> The every ten minute message sets looked like this:
>   > Oct  1 22:30:31 egg mail:err|error dovecot: IMAP(sdean): posix_fallocate()
> failed: Resource temporarily unavailable

The main problem here is that posix_fallocate() is broken in your AIX
(v1.0 doesn't even try to use it). My previous patch attempted to make
Dovecot detect this and silently fall back to not using it, but
apparently it can fail in more ways. I thought about adding another
check for EAGAIN, but perhaps posix_fallocate() just returns the
previous errno so it can't be checked that way. So I moved the check to
configure instead:

http://hg.dovecot.org/dovecot-1.1/rev/12565ef10d1c

Alternatively you could just remove HAVE_POSIX_FALLOCATE from config.h
after running configure. Or yet another way would be to try to find out
if it's already been fixed in AIX. This looks related:
http://www-01.ibm.com/support/docview.wss?uid=isg1IY77112
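The behavior that configure check probes can be reproduced from a script, too. A minimal sketch using Python's os.posix_fallocate wrapper (it calls the same libc function, so a broken platform implementation surfaces the same errno):

```python
import os
import tempfile

def probe_posix_fallocate(size=1024):
    """Try to preallocate `size` bytes; return 'ok' or the failure text."""
    fd, path = tempfile.mkstemp()
    try:
        try:
            os.posix_fallocate(fd, 0, size)
            return "ok"
        except OSError as e:
            # On a broken platform this is where e.g. EAGAIN would surface.
            return os.strerror(e.errno)
    finally:
        os.close(fd)
        os.unlink(path)

print(probe_posix_fallocate())
```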

> 3) However, then there was the following:
> a) If I used webmail, which accessed the production server and got the indices
> on my test server out of sync, I got this error message from in the mail 
> syslog
> on my test server:
> > Oct  3 12:20:23 egg mail:err|error dovecot: IMAP(sdean): mbox sync: UID 
> > inserted
> >  in the middle of mailbox /var/spool/mail/sdean (648818 > 648046, seq=1153, 
> > idx_
> > msgs=1187)

v1.1 also has a bug that can cause this, although normally it should be
visible only when index files aren't being used, or they're out of sync
for some reason. This'll fix it:
http://hg.dovecot.org/dovecot-1.1/rev/a5bf7e12f3cc

> > Oct  3 12:44:58 egg mail:info dovecot: imap-login: Maximum number of 
> > connections
> >  from user+IP exceeded: user=, method=PLAIN, rip=10.20.10.169, 
> > lip=192.24
> > 6.229.31
> Turns out I had 10+ sessions, one back from yesterday, so I killed them all 
> and
> could get mail, but...about six minutes later, I had the two posix_fallocate 
> and
> file_set_size errmsgs again after not having any for a day.  So something 
> about
> new connections maybe causes this?
> 
> Any ideas why:
> a) I am having leftover IMAP sessions on my test server?  This doesn't happen 
> on
>my production DC V1.0 server

Are you sure? Perhaps you just didn't notice them since v1.0 didn't have
any limits to how many were allowed? I think it's more likely that the
client(s) really just left that many connections. So the choices are:

a) Increase mail_max_userip_connections setting.

b) Figure out where the sessions are from and see if you can do
something about them on the client side. In Thunderbird there's a
setting which specifies how many connections it can use.
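For choice (a), a dovecot.conf sketch (placement inside the protocol block matches v1.1-era configs; the limit value is illustrative):

```
protocol imap {
  # Raise the per-user+IP connection cap (pick a value to suit your clients).
  mail_max_userip_connections = 20
}
```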




[Dovecot] Test environment question

2008-10-03 Thread Stewart Dean

I have V1.1 running on a test server that NFS-mounts the mbox-formatted inbox and
home folder dirs.  I have eliminated the profile listing for connection to the
V1.0 production servers so that it can't start up and corrupt the sync of the test
server's indices.

I am seeing posix_fallocate and file_set_size errmsgs in the mail syslog, but
see a pattern:

1) They only happen with the /var/spool/mail inbox NOT with any of the /home
folders and appear to be happening every 10 minutes from the time I started DC
(9AM, 10/1/98) until 11AM, 10/2...and then ceased
The every ten minute message sets looked like this:
 > Oct  1 22:30:31 egg mail:err|error dovecot: IMAP(sdean): posix_fallocate()
failed: Resource temporarily unavailable
 > Oct  1 22:30:31 egg mail:err|error dovecot: IMAP(sdean): file_set_size()
failed with mbox file /var/spool/mail/sdean: Resource temporarily unavailable
 > Oct  1 22:40:31 egg mail:err|error dovecot: IMAP(sdean): posix_fallocate()
failed: Resource temporarily unavailable
 > Oct  1 22:40:31 egg mail:err|error dovecot: IMAP(sdean): file_set_size()
failed with mbox file /var/spool/mail/sdean: Resource temporarily unavailable
 > Oct  1 22:50:31 egg mail:err|error dovecot: IMAP(sdean): posix_fallocate()
failed: Resource temporarily unavailable

2) My Thunderbird client's server settings are set to check for mail every 10
minutes AND I don't access the mail overnight, so this must be what's causing it!
I did check the crontabs on both my test and production servers and they had
nothing with this time periodicity.

3) However, then there was the following:
a) If I used webmail, which accessed the production server and got the indices
on my test server out of sync, I got this error message from in the mail syslog
on my test server:

Oct  3 12:20:23 egg mail:err|error dovecot: IMAP(sdean): mbox sync: UID inserted
 in the middle of mailbox /var/spool/mail/sdean (648818 > 648046, seq=1153, idx_
msgs=1187)

Which is what one would expect...once the V1.1 code is on the production server
that won't happen anymore, so that's OK and can be ignored.
b) I seem to end up having leftover imap sessions on the test server.  Around 1PM
today, I was unable to get mail and saw these messages in the test server's mail
syslog:

Oct  3 12:44:58 egg mail:info dovecot: imap-login: Maximum number of connections
 from user+IP exceeded: user=, method=PLAIN, rip=10.20.10.169, lip=192.24
6.229.31

Turns out I had 10+ sessions, one back from yesterday, so I killed them all and
could get mail, but...about six minutes later, I had the two posix_fallocate and
file_set_size errmsgs again after not having any for a day.  So something about
new connections maybe causes this?

Any ideas why:
a) I am having leftover IMAP sessions on my test server?  This doesn't happen on
  my production DC V1.0 server
b) Ditto on the posix_fallocate and file_set_size errmsgs, which also aren't
found in my production server's mail syslog?

I do realize that these seem to be related to Tbird, but they don't happen with
V1.0.

I have attached my original note with its copies of the dovecot -n
output for both machines



--- Begin Message ---
My production DC machine owns the mail filesystems and is running DC 
V1.0.15 and mbox folder format.
I am looking to test V1.1.3 on another machine, which NFS mounts the 
mail filesystems, but has its own local index FS.


I have made this test environment my default connection in TBird, and it 
seems to work just fine.  Also, I have made sure that my TBird client 
isn't connecting to the production server (it has multiple accounts, but 
I have turned off the 'check for mail when starting' and 'check for new 
mail every N minutes' functions, and then checked the ps table to make sure 
there are no imap connections).

However, I'm seeing two errmsgs in the maillog on the test machine:

Sep 22 11:54:13 egg mail:err|error dovecot: IMAP(sdean): posix_fallocate() failed: Protocol not available
Sep 22 11:54:13 egg mail:err|error dovecot: IMAP(sdean): file_set_size() failed with mbox file /var/spool/mail/sdean: Protocol not available
which appear to happen AFTER mail arrives at the production server...it 
seems to happen on my test server the next time my client goes to access 
mail AFTER mail has arrived at the production server.  Subsequent client 
requests of the test server execute without error until AFTER the next 
time mail arrives and my inbox is updated with it.


Again, if I hadn't looked at the logs, I wouldn't know there was a 
problem...I can see my new mail just fine from the test server.


The questions: Is this anything I should be concerned about?  Is this a 
bug, or a legit problem coming from my improper use of two servers 
against the same data?


FWIW, I am using fcntl for both mbox read and write locks, and procmail as 
the MDA on the production server with its locking hierarchy, 
which Timo previously approved.


Thanks!

Production  dovecot -n output:

# 1.0.15: /usr/local/etc/dovecot.conf
listen: *:143
ssl

Re: [Dovecot] Test environment question

2008-09-30 Thread Stewart Dean

Timo Sirainen wrote:
> On Mon, 2008-09-22 at 13:04 -0400, Stewart Dean wrote:
>> Sep 22 11:54:13 egg mail:err|error dovecot: IMAP(sdean): posix_fallocate() failed: Protocol not available
>
> See if this helps: http://hg.dovecot.org/dovecot-1.1/rev/ad13463328aa

My apologies for not getting back to you...I was sick and out last week 
and am not exactly shining brightly this week :)


I rebuilt with the patch you specified.  I made sure that my imap 
session from my TBird client to my production (DC V1.0.15) server was 
shut down, that it was reconfigured NOT to periodically look for mail, 
and I have rechecked since then to make sure that there are no sessions 
in the PS table for it.  When I started up on my DC V1.1.3 test server, 
I got the following messages:

Sep 30 13:24:13 egg mail:info dovecot: Dovecot v1.1.3 starting up
Sep 30 13:24:26 egg mail:info dovecot: imap-login: Login: user=, method=PLAIN, rip=10.20.10.169, lip=192.246.229.31
Sep 30 13:24:28 egg mail:info dovecot: imap-login: Login: user=, method=PLAIN, rip=10.20.10.169, lip=192.246.229.31
Sep 30 13:24:30 egg mail:err|error dovecot: IMAP(sdean): mbox sync: UID inserted in the middle of mailbox /var/spool/mail/sdean (646581 > 646564, seq=1125, idx_msgs=1126)
Sep 30 13:24:31 egg mail:err|error dovecot: IMAP(sdean): mbox sync: UID inserted in the middle of mailbox /var/spool/mail/sdean (646581 > 646564, seq=1125, idx_msgs=1126)
Sep 30 13:24:33 egg mail:err|error dovecot: IMAP(sdean): posix_fallocate() failed: File exists
Sep 30 13:24:33 egg mail:err|error dovecot: IMAP(sdean): file_set_size() failed with mbox file /var/spool/mail/sdean: File exists
Sep 30 13:24:35 egg mail:err|error dovecot: IMAP(sdean): posix_fallocate() failed: File exists
Sep 30 13:24:35 egg mail:err|error dovecot: IMAP(sdean): file_set_size() failed with mbox file /var/spool/mail/sdean: File exists
Sep 30 13:25:37 egg mail:info dovecot: ssl-build-param: SSL parameters regeneration completed
Sep 30 13:27:42 egg mail:info dovecot: imap-login: Login: user=, method=PLAIN, rip=10.20.10.169, lip=192.246.229.31
Sep 30 13:30:28 egg mail:info dovecot: imap-login: Login: user=, method=PLAIN, rip=10.20.10.169, lip=192.246.229.31
I would assume that, when the test server started up, the index and such 
stuff it had from the last time it was run was grossly out of sync and 
that this is therefore just DC on the test server setting things right.


Since then, as I wrote a message, DC on the test machine coughed out an 
errmsg relating to the Drafts folder, which again makes sense as it was 
also likely out of sync:

Sep 30 13:49:25 egg mail:info dovecot: imap-login: Login: user=, method=PLAIN, rip=10.20.10.169, lip=192.246.229.31
Sep 30 13:51:03 egg mail:err|error dovecot: IMAP(sdean): mbox sync: UID inserted in the middle of mailbox /home/hcrc/sdean/mail/Drafts (9422 > 9403, seq=607, idx_msgs=651)
Sep 30 13:51:04 egg mail:err|error dovecot: IMAP(sdean): mbox sync: UID inserted in the middle of mailbox /home/hcrc/sdean/mail/Drafts (9422 > 9403, seq=607, idx_msgs=651)
Sep 30 13:53:45 egg mail:info dovecot: IMAP(sdean): Disconnected: Logged out bytes=73/3631

So there are two possibilities:
1) This just happens once (for any given folder), as long as the 
test DC server is the only one to ride herd on the folders

and/or

2) Even so, these messages shouldn't happen and something is wrong.

I will watch it carefully for a day and see if I can confirm that #1 is 
true.




I have attached my original note with its copies of the dovecot -n 
output for both machines
--- Begin Message ---
My production DC machine owns the mail filesystems and is running DC 
V1.0.15 and mbox folder format.
I am looking to test V1.1.3 on another machine, which NFS mounts the 
mail filesystems, but has its own local index FS.


I have made this test environment my default connection in TBird, and it 
seems to work just fine.  Also, I have made sure that my TBird client 
isn't connecting to the production server (it has multiple accounts, but 
I have turned off the check-for-mail-when-starting and 
check-for-new-mail-every-N-minutes functions, and then checked the ps 
table to make sure there are no imap connections).

However, I'm seeing two errmsgs in the maillog on the test machine:

Sep 22 11:54:13 egg mail:err|error dovecot: IMAP(sdean): posix_fallocate() failed: Protocol not available
Sep 22 11:54:13 egg mail:err|error dovecot: IMAP(sdean): file_set_size() failed with mbox file /var/spool/mail/sdean: Protocol not available
These appear to happen AFTER mail arrives at the production server... it 
seems to happen on my test server the next time my client goes to access 
mail AFTER mail has arrived at the production server.  Subsequent client 
requests of the test server execute without error until AFTER the next 
time mail arrives and my inbox is updated with it.


Again, if I hadn't looked at the logs, I wouldn't know there was a 
problem... I can see my new mail just fine from the test server.

Re: [Dovecot] Test environment question

2008-09-22 Thread Timo Sirainen
On Mon, 2008-09-22 at 13:04 -0400, Stewart Dean wrote:
> > Sep 22 11:54:13 egg mail:err|error dovecot: IMAP(sdean): posix_fallocate() failed: Protocol not available

See if this helps: http://hg.dovecot.org/dovecot-1.1/rev/ad13463328aa
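For readers hitting the same "Protocol not available" (EOPNOTSUPP) error: as a general illustration (not Dovecot's actual code, and not necessarily what the linked changeset does), a caller can treat EOPNOTSUPP/EINVAL from posix_fallocate() as "preallocation unsupported" and fall back to simply extending the file. The helper name here is made up:

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <assert.h>
#include <sys/stat.h>

/* Hypothetical helper: posix_fallocate() returns an error number
   directly (it does not set errno).  On filesystems that don't
   support preallocation it fails with EOPNOTSUPP ("Protocol not
   available") or EINVAL; fall back to plainly extending the file. */
int preallocate(int fd, off_t size)
{
	int ret = posix_fallocate(fd, 0, size);

	if (ret == 0)
		return 0;
	if (ret != EOPNOTSUPP && ret != EINVAL)
		return -1;	/* a real error */
	/* no block preallocation, but the file size is still set */
	return ftruncate(fd, size);
}
```

The fallback loses the no-ENOSPC guarantee of real preallocation, but keeps the file-extension semantics the caller wanted.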



signature.asc
Description: This is a digitally signed message part


[Dovecot] Test environment question

2008-09-22 Thread Stewart Dean
My production DC machine owns the mail filesystems and is running DC 
V1.0.15 and mbox folder format.
I am looking to test V1.1.3 on another machine, which NFS mounts the 
mail filesystems, but has its own local index FS.


I have made this test environment my default connection in TBird, and it 
seems to work just fine.  Also, I have made sure that my TBird client 
isn't connecting to the production server (it has multiple accounts, but 
I have turned off the check-for-mail-when-starting and 
check-for-new-mail-every-N-minutes functions, and then checked the ps 
table to make sure there are no imap connections).

However, I'm seeing two errmsgs in the maillog on the test machine:

Sep 22 11:54:13 egg mail:err|error dovecot: IMAP(sdean): posix_fallocate() failed: Protocol not available
Sep 22 11:54:13 egg mail:err|error dovecot: IMAP(sdean): file_set_size() failed with mbox file /var/spool/mail/sdean: Protocol not available
These appear to happen AFTER mail arrives at the production server... it 
seems to happen on my test server the next time my client goes to access 
mail AFTER mail has arrived at the production server.  Subsequent client 
requests of the test server execute without error until AFTER the next 
time mail arrives and my inbox is updated with it.


Again, if I hadn't looked at the logs, I wouldn't know there was a 
problem... I can see my new mail just fine from the test server.


The questions: Is this anything I should be concerned about?  Is this a 
bug, or a legit problem coming from my improper use of two servers 
against the same data?


FWIW, I am using fcntl for both mbox read and write locks, procmail as 
the MDA on the production server, and its locking hierarchy, which Timo 
previously approved.
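For context, the fcntl locking that `mbox_write_locks: fcntl` selects is POSIX record locking. A minimal sketch of taking and releasing an exclusive whole-file lock (helper name and usage are illustrative, not Dovecot's code):

```c
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <assert.h>

/* Illustrative helper: lock or unlock an entire mbox file with a
   POSIX record lock.  Pass F_RDLCK, F_WRLCK or F_UNLCK as 'type'.
   F_SETLKW blocks until the lock is granted. */
int mbox_lock_fcntl(int fd, short type)
{
	struct flock fl;

	memset(&fl, 0, sizeof(fl));
	fl.l_type = type;
	fl.l_whence = SEEK_SET;
	fl.l_start = 0;
	fl.l_len = 0;		/* 0 means "to end of file": whole file */
	return fcntl(fd, F_SETLKW, &fl);
}
```

These locks are advisory, which is why every writer touching the spool (procmail, Dovecot) must agree on the same locking scheme.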


Thanks!

Production  dovecot -n output:

# 1.0.15: /usr/local/etc/dovecot.conf
listen: *:143
ssl_listen: *:993
disable_plaintext_auth: no
verbose_ssl: yes
login_dir: /var/run/dovecot/login
login_executable: /usr/local/libexec/dovecot/imap-login
login_processes_count: 12
login_max_processes_count: 774
verbose_proctitle: yes
first_valid_uid: 200
mail_location: mbox:~/mail:INBOX=/var/spool/mail/%u:INDEX=/var/dcindx/%u
mbox_write_locks: fcntl
mbox_dirty_syncs: no
auth default:
  passdb:
    driver: pam
  userdb:
    driver: passwd

Test dovecot -n output:

# 1.1.3: /usr/local/etc/dovecot.conf
listen: *:143
ssl_listen: *:993
disable_plaintext_auth: no
verbose_ssl: yes
login_dir: /var/run/dovecot/login
login_executable: /usr/local/libexec/dovecot/imap-login
login_processes_count: 12
login_max_processes_count: 774
max_mail_processes: 1024
verbose_proctitle: yes
first_valid_uid: 200
mail_location: mbox:~/mail:INBOX=/var/spool/mail/%u:INDEX=/var/dcindx/%u
mbox_write_locks: fcntl
mbox_dirty_syncs: no
auth default:
  passdb:
    driver: pam
  userdb:
    driver: passwd




[Dovecot] Test utility for sieve filters? (Re: Sieve doesnt filter)

2008-03-11 Thread Chris Vogel

Hey everybody,

I had a similar problem with a sieve filter lately and
was desperately looking for a tool to test my filter
conditions. In the end I used 'exim -d -bf', which works
quite well, but does not interpret sieve the same way
the deliver plugin does (probably a different sieve version).

Is there a tool to test sieve filters which behaves like
the deliver plugin and writes a lot of output about the
way it interprets the individual commands and conditions?

Chris.


Re: [Dovecot] Test Environment Question

2008-01-10 Thread Timo Sirainen
On Tue, 2008-01-08 at 09:22 -0500, Stewart Dean wrote:
> I have my master IMAP server running DC V1.0.10.  The homedir and 
> INBOXdir are physically resident there and NFS exported (no caching) to 
> 3 other machines.  I have installed V1.1beta13 on one of them (which 
> thus accesses the homedir/INBOXdir remotely) and plan to have a limited 
> community test-drive it there.  Are there any hazards or drawbacks in 
> doing this?  While the homedirs and INBOXdirs are thus shared, I have it 
> so that each machine has its own local index directory and /var/run 
> dir.  Comments or dire warnings?

If indexes are separate, there should be nothing to worry about.
Although with the beta13 machine you could enable NFS attribute cache
and set mail_nfs_storage=yes. If there are no bugs it should improve
performance.
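A sketch of how that advice might look in the beta13 machine's dovecot.conf (Dovecot 1.1 option names; paths are illustrative, and this is an untested fragment, not a recommended configuration):

```
# Mail spool on NFS, indexes on a local filesystem
mail_location = mbox:~/mail:INBOX=/var/spool/mail/%u:INDEX=/var/local-indexes/%u
mail_nfs_storage = yes    # flush NFS caches for mail files when needed
mail_nfs_index = no       # indexes are local, so no NFS cache handling
```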



signature.asc
Description: This is a digitally signed message part


[Dovecot] Test Environment Question

2008-01-08 Thread Stewart Dean
I have my master IMAP server running DC V1.0.10.  The homedir and 
INBOXdir are physically resident there and NFS exported (no caching) to 
3 other machines.  I have installed V1.1beta13 on one of them (which 
thus accesses the homedir/INBOXdir remotely) and plan to have a limited 
community test-drive it there.  Are there any hazards or drawbacks in 
doing this?  While the homedirs and INBOXdirs are thus shared, I have it 
so that each machine has its own local index directory and /var/run 
dir.  Comments or dire warnings?


Re: [Dovecot] test program #2: mmaping

2007-06-22 Thread Bill Boebel
On Wed, June 20, 2007 9:28 pm, Timo Sirainen <[EMAIL PROTECTED]> said:

> Attached another test program. I don't expect it to print any errors
> with any OS, but I'd like to confirm it for non-Linux SMP kernels.

# ./concurrency
writing, page size = 4096
4: reading, page size = 4096
3: reading, page size = 4096
2: reading, page size = 4096
1: reading, page size = 4096
0: reading, page size = 4096

Red Hat ES3 (2.4.21-32.0.1.EL) athlon i386



Re: [Dovecot] test program #2: mmaping

2007-06-21 Thread Timo Sirainen
On Thu, 2007-06-21 at 08:06 -0400, Greg Troxel wrote:
> I'm not sure what you expect to happen, but:
> 
> fnord gdt 18 ~ > ./concurrency 
> 0: reading, page size = 4096
> writing, page size = 4096
> 4: reading, page size = 4096
> 3: reading, page size = 4096
> 2: reading, page size = 4096
> 1: reading, page size = 4096
> open(): No such file or directory
> open(): No such file or directory
> open(): No such file or directory
> open(): No such file or directory
> open(): No such file or directory

touch foo
./concurrency

fixes this. :)



signature.asc
Description: This is a digitally signed message part


Re: [Dovecot] test program #2: mmaping

2007-06-21 Thread Greg Troxel
I'm not sure what you expect to happen, but:

fnord gdt 18 ~ > ./concurrency 
0: reading, page size = 4096
writing, page size = 4096
4: reading, page size = 4096
3: reading, page size = 4096
2: reading, page size = 4096
1: reading, page size = 4096
open(): No such file or directory
open(): No such file or directory
open(): No such file or directory
open(): No such file or directory
open(): No such file or directory
fnord gdt 19 ~ > ps uaxw|egrep con
gdt 19239  0.0  0.1   104   548 ttyp1  S 7:55AM 0:00.00 ./concurrency 
[other false hits redacted]
fnord gdt 20 ~ > uname -a
NetBSD fnord.ir.bbn.com 4.0_BETA2 NetBSD 4.0_BETA2 (GENERIC) #11: Mon Apr 30 10:46:41 EDT 2007  [EMAIL PROTECTED]:/n0/obj/gdt-4/i386/sys/arch/i386/compile/GENERIC i386

My system has 2 cpus.


Reading the code, I don't understand why this shouldn't happen - the
bottom branch in the children gets to the open before the top has done
rename, and there's no synchronization to prevent this.

With the following, it prints the 'reading' lines and then sits running:

fnord gdt 68 ~ > ./concurrency 
writing, page size = 4096
0: reading, page size = 4096
4: reading, page size = 4096
3: reading, page size = 4096
2: reading, page size = 4096
1: reading, page size = 4096
...^C


--- concurrency.c.~1~   2007-06-21 07:54:51.0 -0400
+++ concurrency.c   2007-06-21 08:05:33.0 -0400
@@ -43,16 +43,19 @@
perror("rename()");
usleep(rand() % 1000);
 
-   pwrite(fd, buf, pagesize + 16, 0);
+   if (pwrite(fd, buf, pagesize + 16, 0) < 0)
+   perror("pwrite1()");
//usleep(rand() % 1000);
//fdatasync(fd);
-   pwrite(fd, ones, 4, pagesize-4);
+   if (pwrite(fd, ones, 4, pagesize-4) < 0)
+   perror("pwrite1()");
if (flock(fd, LOCK_UN) < 0)
perror("flock()");
close(fd);
usleep(rand() % 1000);
}
} else {
+   sleep(1);
while (process_count-- > 1) {
if (fork() == 0)
break;
@@ -61,7 +64,7 @@
for (;; close(fd), usleep(rand() % 1000)) {
fd = open("foo", O_RDWR, 0600);
if (fd == -1) {
-   perror("open()");
+   perror("open_lower()");
return 1;
}
 
@@ -93,6 +96,7 @@
} else if (((char *)mmap_base)[pagesize] != 'h')
printf("broken data\n");
}
+   putchar('.'); fflush(stdout);   
}
}
return 0;
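An alternative to the sleep(1) in the patch above would be to make the readers tolerate the window between the writer's open() of "foo2" and its rename() to "foo", by retrying on ENOENT. A sketch, not part of the original program (function name is made up):

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <assert.h>

/* Illustrative: retry open() while the writer races
   rename("foo2", "foo"), instead of treating ENOENT as fatal. */
int open_with_retry(const char *path, int flags, int max_tries)
{
	int fd, tries;

	for (tries = 0; tries < max_tries; tries++) {
		fd = open(path, flags);
		if (fd != -1 || errno != ENOENT)
			return fd;	/* success, or a non-ENOENT error */
		usleep(1000);		/* writer hasn't renamed the file yet */
	}
	return -1;			/* still missing after max_tries */
}
```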


Re: [Dovecot] test program #2: mmaping

2007-06-21 Thread Jim Maenpaa

Attached another test program. I don't expect it to print any errors
with any OS, but I'd like to confirm it for non-Linux SMP kernels.

(Except for OpenBSD, it doesn't work correctly in it anyway because it
doesn't support mixing write()s and mmap())




Mac OS X for Intel 10.4.9: SMP (Core Duo)

$ ./concurrency
writing, page size = 4096
0: reading, page size = 4096
open(): No such file or directory
4: reading, page size = 4096
3: reading, page size = 4096
1: reading, page size = 4096
$ 2: reading, page size = 4096
open(): No such file or directory
open(): No such file or directory
open(): No such file or directory
open(): No such file or directory

With all of the reader processes dying almost immediately.


Mac OS X for PowerPC 10.4.9: non-SMP (G4)

$ ./concurrency
writing, page size = 4096
4: reading, page size = 4096
3: reading, page size = 4096
2: reading, page size = 4096
1: reading, page size = 4096
0: reading, page size = 4096
open(): No such file or directory

With one reader process dying after about two minutes. Nothing else  
after another 20 minutes.


-jim



Re: [Dovecot] test program #2: mmaping

2007-06-21 Thread greg
Hi,

> It doesn't compile for Solaris 10:

You can compile it with:
gcc -o concurency -I/usr/ucbinclude -L/usr/ucblib -lucb concurency.c
(on a default Solaris 10 install). Then, you must add /usr/ucblib to your
library search path using crle.

On a dual UltraSparc IIIi running Solaris 10, here is what is printed
after 30 minutes :

$ ./concurency
writing, page size = 8192
4: reading, page size = 8192
3: reading, page size = 8192
2: reading, page size = 8192
0: reading, page size = 8192
1: reading, page size = 8192

Cheers,
Greg



Re: [Dovecot] test program #2: mmaping

2007-06-20 Thread pitun



Attached another test program. I don't expect it to print any errors
with any OS, but I'd like to confirm it for non-Linux SMP kernels.

(Except for OpenBSD, it doesn't work correctly in it anyway because it
doesn't support mixing write()s and mmap())

  
6.2-RELEASE FreeBSD 2 x Intel(R) Xeon(R) CPU 5130  @ 2.00GHz 
(2000.08-MHz 686-class CPU)

-
af>./concurrency
4: reading, page size = 4096
open(): No such file or directory
0: reading, page size = 4096
open(): No such file or directory
af> writing, page size = 4096
3: reading, page size = 4096
1: reading, page size = 4096
2: reading, page size = 4096
open(): No such file or directory
open(): No such file or directory
open(): No such file or directory
--

6.2-RELEASE-p2 FreeBSD CPU: Intel(R) Core(TM)2 CPU 6300  @ 1.86GHz 
(1864.81-MHz 686-class CPU)

--
j170> ./concurrency
4: reading, page size = 4096
open(): No such file or directory
0: reading, page size = 4096
open(): No such file or directory
j170> writing, page size = 4096
3: reading, page size = 4096
2: reading, page size = 4096
1: reading, page size = 4096
open(): No such file or directory
open(): No such file or directory
open(): No such file or directory
--

6.1-RELEASE FreeBSD 2 x CPU: Intel(R) Pentium(TM)3 900 MHz
--
uos> ./concurrency
4: reading, page size = 4096
open(): No such file or directory
writing, page size = 4096
3: reading, page size = 4096
0: reading, page size = 4096
2: reading, page size = 4096
1: reading, page size = 4096
--

been waiting for 2h, prompt does not return



--
PJ








Re: [Dovecot] test program #2: mmaping

2007-06-20 Thread Tan Shao Yi


Hi Timo,

It doesn't compile for Solaris 10:


gcc concurrency.c -o concurrency -Wall

concurrency.c: In function `main':
concurrency.c:40: warning: implicit declaration of function `flock'
concurrency.c:40: error: `LOCK_EX' undeclared (first use in this function)
concurrency.c:40: error: (Each undeclared identifier is reported only once
concurrency.c:40: error: for each function it appears in.)
concurrency.c:50: error: `LOCK_UN' undeclared (first use in this function)
concurrency.c:92: warning: long int format, size_t arg (arg 2)

Cheers.

On Thu, 21 Jun 2007, Timo Sirainen wrote:


Attached another test program. I don't expect it to print any errors
with any OS, but I'd like to confirm it for non-Linux SMP kernels.

(Except for OpenBSD, it doesn't work correctly in it anyway because it
doesn't support mixing write()s and mmap())




Re: [Dovecot] test program #2: mmaping

2007-06-20 Thread Adam McDougall
On Thu, Jun 21, 2007 at 04:28:17AM +0300, Timo Sirainen wrote:

  Attached another test program. I don't expect it to print any errors
  with any OS, but I'd like to confirm it for non-Linux SMP kernels.
  
  (Except for OpenBSD, it doesn't work correctly in it anyway because it
  doesn't support mixing write()s and mmap())


On one computer I faltered on the first two tries because both my home directory
and /tmp contained a file or directory named foo :)  Perhaps the script should
check for existing file entries and abort to avoid unexpected results?  Applies
to multiple runs too.
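The pre-flight check suggested here could look like the following sketch (function name is made up; "foo"/"foo2" match the test program's work files):

```c
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <assert.h>

/* Illustrative pre-flight check: abort if the test program's work
   files already exist in the current directory, instead of silently
   reusing leftovers from an earlier run. */
int check_no_leftovers(void)
{
	static const char *names[] = { "foo", "foo2" };
	unsigned int i;

	for (i = 0; i < sizeof(names) / sizeof(names[0]); i++) {
		if (access(names[i], F_OK) == 0) {
			fprintf(stderr, "%s already exists, aborting\n",
				names[i]);
			return -1;
		}
	}
	return 0;
}
```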


  
FreeBSD 7.0 amd64 SMP (on core 2 duo), local UFS:

% gcc concurrency.c -o concurrency -Wall
% gcc -v
Using built-in specs.
Target: amd64-undermydesk-freebsd
Configured with: FreeBSD/amd64 system compiler
Thread model: posix
gcc version 4.2.0 20070514 [FreeBSD]
% ./concurrency 
writing, page size = 4096
0: reading, page size = 4096
4: reading, page size = 4096
3: reading, page size = 4096
2: reading, page size = 4096
1: reading, page size = 4096

(been waiting for 22 min, prompt does not return)



FreeBSD 7.0 i386 UP (pentium 4), local UFS:

> gcc concurrency.c -o concurrency -Wall
concurrency.c: In function 'main':
concurrency.c:92: warning: format '%ld' expects type 'long int', but argument 2 has type 'size_t'
> gcc -v
Using built-in specs.
Target: i386-undermydesk-freebsd
Configured with: FreeBSD/i386 system compiler
Thread model: posix
gcc version 4.2.0 20070514 [FreeBSD]
> ./concurrency 
0: reading, page size = 4096
open(): No such file or directory
writing, page size = 4096
4: reading, page size = 4096
3: reading, page size = 4096
2: reading, page size = 4096
1: reading, page size = 4096
> ps -xauww | grep concurr
mcdouga9  2262  0.0  0.1  3104   652  p0  R10:57PM   0:00.23 ./concurrency
mcdouga9  2263  0.0  0.1  3112   648  p0  S10:57PM   0:00.11 ./concurrency
mcdouga9  2264  0.0  0.1  3112   648  p0  S10:57PM   0:00.08 ./concurrency
mcdouga9  2265  0.0  0.1  3112   648  p0  S10:57PM   0:00.08 ./concurrency
mcdouga9  2266  0.0  0.1  3112   648  p0  S10:57PM   0:00.09 ./concurrency
> 

(been waiting for 22 min, prompt returns right away with backgrounded processes)

--

Solaris 9 sparc:

> gcc concurrency.c -o concurrency -Wall
concurrency.c: In function `main':
concurrency.c:40: warning: implicit declaration of function `flock'
concurrency.c:40: error: `LOCK_EX' undeclared (first use in this function)
concurrency.c:40: error: (Each undeclared identifier is reported only once
concurrency.c:40: error: for each function it appears in.)
concurrency.c:50: error: `LOCK_UN' undeclared (first use in this function)
concurrency.c:92: warning: long int format, size_t arg (arg 2)
> gcc -v
Reading specs from /opt/lib/gcc-lib/sparc-sun-solaris2.8/3.3.1/specs
Configured with: /usr/local/src/gcc-3.3.1/configure --prefix=/opt 
--with-as=/usr/ccs/bin/as 
--with-ld=/usr/ccs/bin/ld --with-system-zlib
Thread model: posix
gcc version 3.3.1
> ls -l concurrency
ls: concurrency: No such file or directory

-

FreeBSD 6.2 amd64 SMP on opteron

> gcc concurrency.c -o concurrency -Wall
> gcc -v
Using built-in specs.
Configured with: FreeBSD/amd64 system compiler
Thread model: posix
gcc version 3.4.6 [FreeBSD] 20060305
> ./concurrency
(Has inconsistent behavior: sometimes looks fine, sometimes has some 
open() failures and backgrounded process(es?); may or may not vary if 
stored on NFS.)

Not sure if the behavior here is expected or the test needs work; 
I didn't want to test exhaustively if the results would be inconclusive 
anyway.

--

FreeBSD 6.2 i386 SMP on opteron, local UFS

% gcc concurrency.c -o concurrency -Wall
concurrency.c: In function `main':
concurrency.c:92: warning: long int format, size_t arg (arg 2)
% gcc -v
Using built-in specs.
Configured with: FreeBSD/i386 system compiler
Thread model: posix
gcc version 3.4.6 [FreeBSD] 20060305
% ~/concurrency
writing, page size = 4096
0: reading, page size = 4096
4: reading, page size = 4096
3: reading, page size = 4096
2: reading, page size = 4096
1: reading, page size = 4096

(seems consistent and fine)



[Dovecot] test program #2: mmaping

2007-06-20 Thread Timo Sirainen
Attached another test program. I don't expect it to print any errors
with any OS, but I'd like to confirm it for non-Linux SMP kernels.

(Except for OpenBSD, it doesn't work correctly in it anyway because it
doesn't support mixing write()s and mmap())

/*
   gcc concurrency.c -o concurrency -Wall
*/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <assert.h>
#include <sys/stat.h>
#include <sys/file.h>
#include <sys/mman.h>

#define MAX_PAGESIZE (8192)

int main(int argc, char *argv[])
{
	char buf[MAX_PAGESIZE*2], ones[4] = { 1, 1, 1, 1 };
	int fd, pagesize;
	int process_count = 5;
	void *mmap_base = NULL;
	size_t mmap_size = 0;
	struct stat st;
	int fixed;

	memset(buf, 0, sizeof(buf));

	pagesize = getpagesize();
	assert(pagesize <= MAX_PAGESIZE);

	buf[pagesize] = 'h';
	if (fork() == 0) {
		printf("writing, page size = %d\n", pagesize);
		for (;;) {
			fd = open("foo2", O_RDWR | O_CREAT | O_TRUNC, 0600);
			if (fd == -1) {
				perror("open()");
				return 1;
			}
			if (flock(fd, LOCK_EX) < 0)
				perror("flock()");
			if (rename("foo2", "foo") < 0)
				perror("rename()");
			usleep(rand() % 1000);

			pwrite(fd, buf, pagesize + 16, 0);
			//usleep(rand() % 1000);
			//fdatasync(fd);
			pwrite(fd, ones, 4, pagesize-4);
			if (flock(fd, LOCK_UN) < 0)
				perror("flock()");
			close(fd);
			usleep(rand() % 1000);
		}
	} else {
		while (process_count-- > 1) {
			if (fork() == 0)
				break;
		}
		printf("%d: reading, page size = %d\n", process_count, pagesize);
		for (;; close(fd), usleep(rand() % 1000)) {
			fd = open("foo", O_RDWR, 0600);
			if (fd == -1) {
				perror("open()");
				return 1;
			}

			if (fstat(fd, &st) < 0)
				perror("fstat()");
			fixed = 0;
		again:
			if (st.st_size < pagesize)
				continue;

			if (mmap_base != NULL && mmap_base != MAP_FAILED)
				munmap(mmap_base, mmap_size);

			mmap_size = st.st_size;
			mmap_base = mmap(NULL, mmap_size, PROT_READ,
					 MAP_SHARED, fd, 0);

			if (memcmp((char *)mmap_base + pagesize - 4,
				   ones, 4) == 0) {
				if (mmap_size != pagesize+16) {
					if (mmap_size == pagesize &&
					    fstat(fd, &st) == 0 &&
					    st.st_size != pagesize) {
						fixed = 1;
						goto again;
					}

					printf("page size cut, mmap_size=%ld\n", mmap_size);
				} else if (((char *)mmap_base)[pagesize] != 'h')
					printf("broken data\n");
			}
		}
	}
	return 0;
}


signature.asc
Description: This is a digitally signed message part


[Dovecot] Test

2007-04-13 Thread Brian Morrison
Hi Georgie

Just testing!

-- 

Brian Morrison

bdm at fenrir dot org dot uk

   "Arguing with an engineer is like wrestling with a pig in the mud;
after a while you realize you are muddy and the pig is enjoying it."

GnuPG key ID DE32E5C5 - http://wwwkeys.uk.pgp.net/pgpnet/wwwkeys.html