Awfully slow dovecot

2014-12-25 Thread Robin Helgelin
Hi,

We’re using dovecot 1.0.7, which seems to be the latest version available on 
CentOS 5.

Downloading emails is dead slow. Really small emails go quickly, but normal 
emails and emails with attachments are so slow to download it's almost 
ridiculous. I've googled a bit and found that it could be related to quota, but 
I disabled the quota plugin on imap with no difference.

dovecot -n:
# 1.0.7: /etc/dovecot.conf
ssl_cert_file: /etc/dovecot/cert.pem
ssl_key_file: /etc/dovecot/key.pem
disable_plaintext_auth: yes
verbose_ssl: yes
login_dir: /var/run/dovecot/login
login_executable(default): /usr/libexec/dovecot/imap-login
login_executable(imap): /usr/libexec/dovecot/imap-login
login_executable(pop3): /usr/libexec/dovecot/pop3-login
valid_chroot_dirs: /var/mail/domains
verbose_proctitle: yes
last_valid_uid: 500
mail_location: maildir:/var/mail/domains/%d/%n/mail
mail_executable(default): /usr/libexec/dovecot/imap
mail_executable(imap): /usr/libexec/dovecot/imap
mail_executable(pop3): /usr/libexec/dovecot/pop3
mail_plugins(default): 
mail_plugins(imap): 
mail_plugins(pop3): quota
mail_plugin_dir(default): /usr/lib/dovecot/imap
mail_plugin_dir(imap): /usr/lib/dovecot/imap
mail_plugin_dir(pop3): /usr/lib/dovecot/pop3
auth default:
  mechanisms: plain login
  passdb:
driver: ldap
args: /etc/dovecot/dovecot-ldap.conf
  userdb:
driver: ldap
args: /etc/dovecot/dovecot-ldap.conf
  socket:
type: listen
client:
  path: /var/spool/postfix/private/auth
  mode: 384
  user: postfix
  group: postfix
master:
  path: /var/run/dovecot/auth-master
  mode: 384
  user: vmail
  group: mail
plugin:
  convert_mail: maildir:/var/mail/domains/%d/%u/mail

dovecot-ldap.conf:
auth_bind = yes
hosts = example.com
ldap_version = 3
base = o=hosting,dc=example,dc=com
dn = cn=phamm,o=hosting,dc=example,dc=com
dnpass = password
pass_attrs = mail
pass_filter = (&(objectClass=VirtualMailAccount)(accountActive=TRUE)(mail=%u))
user_attrs = mail,,,mail,,,quota=quota=maildir:storage
user_filter = (&(objectClass=VirtualMailAccount)(accountActive=TRUE)(mail=%u))
deref = never
scope = subtree
default_pass_scheme = MD5
user_global_uid = 500
user_global_gid = 12


Re: [Dovecot] Blocking certain hostnames/clients

2013-10-28 Thread Robin

On 10/27/2013 1:21 PM, Charles Marcus wrote:


Bottom line desire is to avoid scraping/hijacking email stored on my
dovecot server by any client other than a users client.


I don't think IMAP has a "client identification" component in its 
protocol, at least not one that's in widespread, compatible use.  So 
you're stuck with IP/hostname-based ACLs, or perhaps something more 
"forensic" that analyses how those clients access mail so you can 
tailor a countermeasure accordingly.


Of course, blackholing all of the offending IP#s is an option, but I 
suspect it will be a bit "whack-a-mole".


=R=


Re: [Dovecot] Odd Feature Request - RBL blacklist lookup to prevent authentication

2013-10-22 Thread Robin

On 10/22/2013 3:22 PM, Noel Butler wrote:

But I agree with you on the rest, since of those 500K IP's Marc claims
to have I'd bet that 99% are hijacked innocent pc's/servers, and of
them, >75% would likely be a one time usage.


This accords with our own statistics.  While it IS tempting to treat 
every IP# that "spams" or hits you with a port-scan as something worthy 
of blackholing, the reality is that the vast majority of the attempts 
are from "innocent" victim hosts.


Now, there's little doubt that MOST of these are not legitimate MTA 
endpoints, and so "shouldn't" be issuing email directly to your MX 
hosts.  SPF + OpenDKIM are great, but only for those domains that 
actually use them; you can score "improperly delivered" emails bearing 
those domains with a policy defined by their operators, but many domains 
don't publish a policy.
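For the domains that do publish policies, the records are just DNS TXT 
entries; a minimal sketch for a hypothetical example.com (the policy values 
are illustrative, not a recommendation):

```
example.com.         IN TXT "v=spf1 mx -all"
_dmarc.example.com.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

Receivers can only score against these if the sending domain bothers to 
publish them.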


I would caution people to avoid throwing out the baby with the 
bathwater.  I've been collecting an increasing number of "mysterious" 
email delivery problems to endpoints which do not issue DSN/bounces, 
*OR* provide any feedback to their users that emails have been "blocked".


The list includes some big names, like:

comcast (cable ISP subscribers)
secureserver.net hosted emails (GoDaddy's "hosted email" service, which 
uses Cloudmark's anti-spam solutions)

McAfee's "MXLogic" anti-spam services

McAfee's "SaaS/MXLogic" anti-spam service has a responsive false 
positive mediation system, whereas comcast's + GoDaddy's setups are 
thoroughly dysfunctional and broken.  Despite publishing SPF, fully 
specified OpenDKIM and using DomainKeys signing, having perfectly clean 
IP# reputations and not being on ANY RBLs, emails to those hosts is at 
best "random", or in comcast's case - when it's hosting "vanity domains" 
for its customers - completely broken.


I strongly suspect these inferior anti-spam systems are mistakenly 
ascribing fault for "Joe Jobbed" spam runs, even when those runs are 
delivered by hosts the domain's SPF does not permit.  All of my clients 
"login" and issue emails through our MTAs, which are specified as 
permitted senders in SPF, so there are no "rogue" road warriors 
"allowed" by our domains' SPF policies.


My point is simple: it's easy to let frustration about spam get the 
better of you, but don't create worse problems for your users and those 
who try to legitimately reach them.  Overzealous blocking is progressively 
making email less and less usable in a global context.


=R=


Re: [Dovecot] Dovecot extremely slow!

2013-09-26 Thread Robin

On 9/26/2013 7:47 AM, Patricio Rojo wrote:


* /home partition nfs mounted from a remote firewalled QNAP NAS server
(TS-869U-RP), which also serves other machines (RAID-5 setup with
currently no bad disks).


I assume this NAS properly implements the various locking services? 
Dovecot, like most MUAs + MTAs, makes use of various filesystem 
locking primitives to maintain coherence in a multi-user access 
scenario.  If QNAP's stack doesn't implement proper NFS locking, that is 
probably the cause of these odd lags.


You can probably add a "nolock" to your /etc/fstab to resolve it, but 
you risk mailbox corruption.
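For reference, a "nolock" mount might look like this in /etc/fstab 
(hostname and mountpoint hypothetical; again, this trades locking safety 
for responsiveness):

```
qnap:/share/home  /home  nfs  rw,hard,nolock  0  0
```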


You mentioned it was firewalled... are you allowing the lockd port 
through to the QNAP from the Dovecot machine that's mounting it?  NFS2 + 
3 implement locking via communication with a "lock manager" that listens 
on port 4045, if I recall.
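A quick way to check, assuming the NAS answers RPC portmapper queries 
(hostname hypothetical):

```
# Ask the NAS's portmapper which port(s) the NFS lock manager registered
rpcinfo -p qnap | grep nlockmgr
# Then confirm those ports (plus 111 for the portmapper itself) are
# allowed through the firewall from the Dovecot host
```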


=R=


Re: [Dovecot] 2048-bit Diffie-Hellman parameters

2013-09-24 Thread Robin
On 9/24/2013 2:28 AM, Reindl Harald wrote:

> maybe on your server, my logs showing the opposite and since
> the "smtp" are outgoing messages your conclusion of "nobody"
> is strange
> 
> cat maillog | grep smtp | grep -v smtpd | grep TLS | wc -l
> 12327
> 
> cat maillog | grep smtpd | grep TLS | wc -l
> 13350
> 
> cat maillog | grep smtp | grep -v smtpd | grep TLSv1.2 | wc -l
> 2603
> 
> cat maillog | grep smtpd | grep TLSv1.2 | wc -l
> 2219

This doesn't necessarily mean the encryption is effective at cloaking the data 
exchange.  Remember:

1) Most admins who use TLS on their MTAs don't reject the transaction when the 
presented certificate FAILS to validate against the local trust store's 
certificates.  Unlike the error dialog boxes presented to the end user when a 
certificate fails to validate against its local trust store, these "error 
fallbacks" are "silent" and to most users, completely invisible. (Yes, I know 
most MTAs will log a TLS certificate failure in the headers, but we're talking 
about Lusers here, not readers of this list.)  A failed certificate validation 
means it could be ANYONE's key/cert used to set up the ephemeral connection, and 
you can place no reliance on that channel being opaque to third-party scrutiny.
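With Postfix, for example, that silent fallback is the documented default; 
mandatory verification has to be asked for explicitly.  A sketch (and note 
that mandatory verification will break delivery to the many hosts with 
self-signed or mismatched certificates):

```
# main.cf: the default is opportunistic TLS - certificate failures
# are logged but the mail is delivered anyway
smtp_tls_security_level = may

# To actually reject unverifiable peers, use "verify"/"secure" levels
# (globally or per-destination via smtp_tls_policy_maps), plus a CA bundle:
smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt
```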

2) Even if you DO reject all certificates that fail trust-store validation (on 
*ALL* MX hosts that receive/send mail), it's increasingly likely that one or 
more of those root certificates are compromised, either publicly(*) or secretly 
through some back-door arrangement with the NSA.  The Big Ugly elephant in the 
room is the notion of the NSA having a certificate signing key for 
VeriSlime/GeoBust/et al so that they're free to use their own key + cert in a 
MITM interception, with the end user being none the wiser(**).  Take a tally of 
the jurisdictions of the big root-level CAs.  It's alarmingly 
AUSCANNZUKUS-centric.

3) Even with all of the above dealt with, the rush to adopt 
Diffie-Hellman "PFS" based on elliptic curves (EC) may itself be subject to 
additional problems, based on revelations and leaks that suggest the NSA has 
been busy subverting various standards and publicly designed software reference 
implementations to weaken their security in ways that benefit it.  In 
particular, Schneier and Bernstein feel very uneasy about the NIST-specified 
parameters for the EC-based cryptographic algorithms.  These aren't "tin foil 
hatters" or kooks.

To that end, there are proposals to adopt elliptic-curve parameters and methods 
chosen so that each and every generated public key maps to a valid EC point.

See:

https://www.schneier.com/blog/archives/2013/09/the_nsa_is_brea.html#c1675929
http://cr.yp.to/talks/2013.05.31/slides-dan+tanja-20130531-4x3.pdf
http://cr.yp.to/ecdh/curve25519-20060209.pdf
 
An Ivory Tower organisation with total control over the clients' and the 
servers' configurations can pin all of its certs + keys, and configure them to 
dump connections that fail to validate against the local trust stores.

This is an unfortunately very subtle and nuanced problem that defies mere 
"throwing more bits" at your key sizes. 

And I would hope that the IQ and worldly mindsets of those generally reading 
this list have an appreciation for why retaining complete control and privilege 
within your organisation's end-to-end security is important, now more than 
ever.  It has nothing to do with "I'm not doing anything wrong, so they can 
read all they want."

For an ISP or other provider with a "random" and "noisy" userbase with 
who-knows-what clients + OS/platform brain damage, the problem is probably 
intractable unless you accept that some users will be simply unable to access 
the services from some or all of their devices.

=R=

(*) Despite many compromised CAs (Certificate Authorities) being publicly 
known, I discover an annoyingly large number of improperly configured systems 
that accept these as valid.  Maybe there are/were distros that incorrectly 
compiled lists of CAs and didn't remove the compromised ones from the 
trust-store.  Maybe they're out of date.  Who knows why.

(**) If you "pin" various trust store certificates + keys, you can detect this 
when it occurs.

See: https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning



Re: [Dovecot] password encryption

2013-04-06 Thread Robin
On 4/5/2013 11:36 PM, Jim Pazarena wrote:
> I have just come to the realization that password encryption using the 
> crypt function in linux, ONLY USES THE FIRST 8 CHARS. I have written 
> routines using crypt allowing 16+ chars, and find that anything past 8 
> is ignored. Wow.
> 
> Is there a way around this that can be used in dovecot, as well as 
> encryption routines for an email front end? (not system users).

Remember that most Linux distros offer a way to configure the default password 
salt/encryption scheme.

Look in /etc/login.defs or equivalent on your distro.

With any semi-recent glibc + contemporaneous toolchain, you'll see options like:

#
# Only works if compiled with ENCRYPTMETHOD_SELECT defined:
# If set to MD5 , MD5-based algorithm will be used for encrypting password
# If set to SHA256, SHA256-based algorithm will be used for encrypting password
# If set to SHA512, SHA512-based algorithm will be used for encrypting password
# If set to DES, DES-based algorithm will be used for encrypting password (default)
# Overrides the MD5_CRYPT_ENAB option
#
ENCRYPT_METHOD SHA512
#
# Only works if ENCRYPT_METHOD is set to SHA256 or SHA512.
#
# Define the number of SHA rounds.
# With a lot of rounds, it is more difficult to brute-force the password.
# But note also that more CPU resources will be needed to authenticate
# users.
#
SHA_CRYPT_MIN_ROUNDS 40
SHA_CRYPT_MAX_ROUNDS 400

Tune the values on your system so the authentication delay isn't too bad.
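The effect of the rounds knob is easy to demonstrate.  This sketch uses 
Python's PBKDF2-HMAC-SHA512 as a stand-in for glibc's sha512crypt (the same 
iterated-hashing idea, not the same algorithm; password and salt are made up):

```python
import hashlib
import time

def iterated_hash(password: bytes, salt: bytes, rounds: int) -> bytes:
    # PBKDF2-HMAC-SHA512: cost scales roughly linearly with the round
    # count, just as SHA_CRYPT_*_ROUNDS scales sha512crypt's cost
    return hashlib.pbkdf2_hmac("sha512", password, salt, rounds)

t0 = time.perf_counter()
cheap = iterated_hash(b"hunter2", b"somesalt", 1_000)
t1 = time.perf_counter()
costly = iterated_hash(b"hunter2", b"somesalt", 200_000)
t2 = time.perf_counter()

# Same inputs and round count always reproduce the same digest;
# a higher round count just costs the attacker (and you) more CPU
assert iterated_hash(b"hunter2", b"somesalt", 1_000) == cheap
print(f"1k rounds: {t1 - t0:.4f}s, 200k rounds: {t2 - t1:.4f}s")
```

Pick round values so that a single verification lands somewhere in the tens 
of milliseconds on your hardware.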

I'm surprised your distro has defaulted to the ancient crypt().  Even 
Slackware, not noted for being "bleeding edge", has defaulted to MD5 for a very 
very long time now.

Of course, if you've been running the same system or one where you migrated 
shadow files from old ones, you may still be using those ancient shadow 
password formats.  (No system changes those in-place for you until you 
explicitly change the password with new login.defs defaults in effect.)
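Since nothing re-hashes entries in place, it can be worth auditing which 
schemes are actually present in your shadow file.  A small sketch that 
classifies a hash field by its crypt(3) prefix (the sample hashes are made up):

```python
def crypt_scheme(hash_field: str) -> str:
    """Identify the crypt(3) scheme from a shadow hash field's prefix."""
    prefixes = {
        "$1$": "MD5",
        "$2a$": "bcrypt", "$2b$": "bcrypt", "$2y$": "bcrypt",
        "$5$": "SHA256",
        "$6$": "SHA512",
    }
    for prefix, name in prefixes.items():
        if hash_field.startswith(prefix):
            return name
    # No $id$ prefix: the ancient DES crypt, which truncates at 8 chars
    return "DES"

print(crypt_scheme("$6$somesalt$madeuphash"))  # SHA512
print(crypt_scheme("abJnggxhB/yWI"))           # DES
```

Any accounts still reporting DES (or MD5) keep their weak hashes until the 
user next changes their password.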

=R=


Re: [Dovecot] Disallow POP3 from deleting messages

2013-03-20 Thread Robin

On 3/20/2013 6:35 AM, dormitionsk...@hotmail.com wrote:


Well, like I said, we have real slow upload speeds.  I think POP3 would give a 
better user experience.


About the only connectivity situation where POP3 might make for a better 
"user experience" is an intermittent, bursty sort of connection that's 
prone to reliability issues.


IMAP provides for header-only enumerations as well as partial body fetches 
on demand, as opposed to "all or nothing" POP3 access.  With a suitably 
modern caching client, it will not re-download emails already viewed. 
I've never used any of the devices you mentioned, so I can't speak to 
how their mail clients are implemented.
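The partial-fetch capability is plain RFC 3501 syntax.  A sketch of the 
fetch items a client would issue (helper names are mine; the `imaplib` calls 
are shown only as comments since they need a live server):

```python
# Header-only enumeration: downloads headers, leaves the \Seen flag unset
HEADERS_ONLY = "(BODY.PEEK[HEADER])"

def body_chunk(offset: int, size: int) -> str:
    """Fetch `size` octets of the message text starting at `offset`."""
    return f"(BODY.PEEK[TEXT]<{offset}.{size}>)"

# With a connected imaplib.IMAP4_SSL instance `conn`, a client would do:
#   conn.fetch("1:*", HEADERS_ONLY)          # enumerate headers only
#   conn.fetch("42", body_chunk(0, 65536))   # first 64 KiB of message 42

print(body_chunk(0, 65536))
```

A POP3 client has no equivalent; RETR always pulls the whole message.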



We're using sendmail.  I assume this is done in sendmail, not Dovecot?


No, sendmail is a Mail Transport Agent (MTA), which is akin to the 
Postal Service.  All it does is convey emails from one endpoint to 
another as reliably as possible.  What is done with the mail once it's 
at that endpoint is left to the "consumer" of the mail, in this case, 
the Mail User Agent (MUA).  It can be automatically processed/filed via, 
e.g., procmail or LMTP, or managed by the client through POP3 or IMAP4.


Your main concern sounds like performance from users who connect from 
outside of your enterprise network, which may happen even when your 
mobile devices are on site, due to the way they obtain their 
connectivity?  Timo's replication idea is sensible to address that problem.


Good luck!
=R=


Re: [Dovecot] mbox vs. maildir storage block waste

2012-11-12 Thread Robin
On 11/11/2012 5:26 PM, Christoph Anton Mitterer wrote:
> Have you made systematic tests? I.e. compared times for all of these
> with those from the different dovecot backends.

The choice of Dovecot backends made no substantial difference.  I used maildir, 
sdbox, and mdbox.  I also added SiS (with mdbox).  Initial tests were on local 
multi-spindle RAID5 storage, but to handicap Dovecot, I pushed it over NFS 
(also Linux 3.2 on a local GigE segment).  It wasn't slow enough to make dbmail 
competitive, even though you have to start turning off performance optimisation 
features in Dovecot to avoid NFS bugs.

>> There wasn't a task that the dbmail setup performed faster than
>> Dovecot, in either low or high load situations.
> Which backend did you use?

Backend for dbmail?  Two MySQL versions (5.0 and 5.5) - InnoDB is required for 
dbmail, by the way.  Also the Postgres 8.4 and 9.1 backends, using their default 
storage engines.  I tried the tests with both a separate DB machine, as well as a 
cohosted one with the dbmail connector using local sockets instead of TCP/IP, 
but that didn't significantly alter the performance.

I've found my first notes from the tests.  It was the second round of tests 
with the latest MySQL 5.0 server given some tuning to more aggressively use 
system memory.  You will note the puny size of the mail folder hive in this 
round.

> The mysqld process has consumed nearly an hour of CPU time during this 
> process.
> dbmail is configured to use local sockets rather than network I/O.
> 
> I'm using the Perl MailTools http://search.cpan.org/dist/MailTools/
> to import about 10 folders' worth of email, totaling about 560MB in raw size, 
> constituting about 23,000 emails.  The script basically creates the folders, 
> and does an APPEND for each email.  It's bog simple.
> 
> I DROP the database, recreated it, added the one user, verify DBMail 
> accepts authentication for the newly created mailbox, and then do the import.
> The MySQL files live on a freshly formatted ext4 filesystem.
> 
> The import takes Dovecot (MailDir or mdbox format), or Panda IMAP (mix) 
> about six minutes to complete.
> 
> DBMail 3 took 4h 23m.  Casual inspection of the system showed modestly 
> high CPU usage in mysqld and dbmail-imapd (as well as the import perl 
> command on occasion), but the Load Average didn't get too close to 1.0,
> let alone 2.0, which concerns me that I might have hit some kind of 
> "busy wait" pathology. 

To clarify the above:  To streamline iterative testing, I made a script to 
deactivate the currently running SQL server, unmount, re-format, re-mount, and 
re-populate the skeletal DB directories and restart the DB engine.  So between 
each test, no matter the imapd or DB back-end, the mailstore was presented with 
a freshly formatted volume on dedicated spindles.  The filesystem was ext4, 
formatted with:

lazy_itable_init=0,lazy_journal_init=0,dir_index=1,extents=1,uninit_bg=0,flex_bg=0,has_journal=0,inode_size=256,dir_index=1,
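Those options mix mke2fs extended options (-E), filesystem feature flags 
(-O), and the inode size (-I); a reconstruction of the equivalent command 
line (device name hypothetical, and this destroys any existing data):

```
mkfs.ext4 -I 256 \
    -E lazy_itable_init=0,lazy_journal_init=0 \
    -O dir_index,extents,^uninit_bg,^flex_bg,^has_journal \
    /dev/sdX1
```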

> Do you have detailed numbers?

Not really, but after it was clear that I wasn't going to get comparable 
performance even within the same magnitude, I stopped testing it.  I included 
the IMAP SEARCH performance comparison against fts_squat in my original mail to 
this list.  In addition to huge performance deficiencies, it also has/had fatal 
operational bugs.

> I guess you’ve "only" tried dbmail?

I did try Manitou, but the lack of a proper IMAP service for it made extensive 
"like for like" testing very difficult.  Manitou is still in the very early 
days, alas.  It also relies on the SQL DB's underlying authentication systems 
which is rather ... alarming.  It performs quite a bit better than dbmail, but 
still it's not close to Dovecot.  At the time I tested it, only custom-rolled 
clients could talk to it, i.e., no imap4/pop3 "gateways" to it.

I think I was most alarmed to see that the widely assumed benefits of putting 
mail on a SQL DB, i.e., fast searching/sorting, didn't actually happen in 
reality.

As others have mentioned, I also shudder to think of backup/restore issues, 
especially on a single user level.  The mechanisms of backing up and restoring 
maildirs and even mdboxes, i.e., simple files, are not only well understood, 
the failure modes are generally fully recoverable.  SQL-DB file blobs, 
especially with MySQL, remind me too much of the "PST Hell" that Exchange 
administrators face.  But maybe that's just my ignorance talking.

> All something I wouldn’t want to do on my production systems ;)

Neither would I.  But as I said, I was "desperate" to get this close to 
Dovecot's performance.  I had about 2-3 weeks to pre-qualify mail storage 
back-ends with an eye towards 4 or 5 digits of usercount, and maybe tens to 
hundreds of TBs' scale of mail storage.  Running across such poor performance 
with such relatively small loads disqualified the DB-based mail products very 
very quickly, for ME, anyway.

If you want to run your own tests, my s

Re: [Dovecot] mbox vs. maildir storage block waste

2012-11-08 Thread Robin
Obvious caveats and qualifications apply here throughout this email.

Christoph Anton Mitterer  wrote:
> I see... well I haven't tested AOX or dbmail so far (especially as
> they're not in Debian and I was too lazy till now to compile them)...
> 
> At least I had the impression that performance (especially in searches)
> was one of the major things these people were proud of.
> 
> 
> I'll stay tuned, whether we ever see a fully usable SQL backend for
> Dovecot :)

I wouldn't hold your breath.

It's a recurrently seductive "meme" in email circles, but the reality is that 
email is mostly unstructured data with a few fields of reasonably structured 
data (dates, from, to, maybe attachment types + filenames).  The bulk of each 
email, and the part that people really want to search quickly - the body - is 
unstructured, and doesn't perform quickly with the stock "full text search" 
modules in the main SQL engines.

I gave dbmail2 a try with MySQL 5 and 5.5, and the Postgres 8.4 and 9.1 
branches.  I dedicated a machine with 16GB of DDR3-1800, a 3.4GHz 6-core AMD 
1090T, and hardware-RAID local storage (12 x Seagate ES 7200RPM spindles), 
running 64-bit Slackware 13.37 with Linux 3.2 kernels built for the platform.

The performance is surprisingly bad ... doing almost everything.  Searches 
through IMAP, bulk importation of mail folders, large numbers of simultaneous 
mail deliveries, you name it.  There wasn't a task that the dbmail setup 
performed faster than Dovecot, in either low or high load situations.  When I 
tossed a test load that introduced lots of mail deliveries as well as searches 
and full folder pulls, things got really pear-shaped.  Even putting dovecot's 
mailstore on NFS (GigE) didn't really slow Dovecot down enough to make dbmail 
competitive.

When pressed on this lack of performance, I was instructed to "add more RAM" to 
the DB machine, and that for ideal performance I should have more RAM than my 
mailbox sizes.  *sigh*  This sounds great for a very small installation, but 
this clearly is not something that scales.

I think the final humiliation was comparing the body + header searching 
performance of Timo's practically obsolete fts_squat plugin against 
dbmail's.  Wow.  Squat was multiple orders of magnitude faster.  Lucene and 
Solr are even more so when fed large datasets (mail folder hives of about 
100GB).  The SQL setups hit the obvious performance shelf once they were unable 
to maintain everything in RAM or cache.

The dbmail folk are earnest and hard-working, and I don't mean to cast the 
slightest bit of negativity on their project.  I just think the assumptions 
about what SQL servers can do well often don't square with the reality of many 
applications that people try to fit them into.

In my first round of tests, I imported 24,000 emails comprising a mere 
560MB of space.  Just about all of the non-SQL imap servers handled the 
importation (basically IMAP APPENDs) within 6 minutes.  dbmail2 required hours 
(using MySQL), and a somewhat shorter time (but still hours) with Postgres.

From an old email:

> Searching INBOX #msgs = 24714
>  [NOFIND] Time=2.072423, matches=24714 <--- this should be zero *BUG*
>  [date] Time=2.07519, matches=24714 <--- this is correct
>  [here] Time=2.072075, matches=24714 <--- this should be about 30% of total # 
> of msgs *BUG*
> 
> Does dbmail break IMAP SEARCH TEXT (i.e., search both body + headers)?  Is 
> this a result of relying on MySQL's search algorithms in text-like fields? 
> I'm still puzzled, because I can't believe that 'here' appears in EVERY 
> email.  It looks like dbmail's returning EVERY email on a SEARCH TEXT.  This 
> is not correct operation.
> 
> When I alter the search to use "FROM" as the key instead of "TEXT", the 
> results are more discriminating and meet expectations.
> 
> Searching INBOX #msgs = 24714
>  [NOFIND] Time=2.161049, matches=0
>  [james] Time=2.273255, matches=1049
>  [here] Time=2.165406, matches=2
> 
> Not that it matters, but it's much slower than Dovecot's fts_squat for 
> substring searches.
> 
> Dovecot's fts_squat IMAP SEARCH TEXT results are:
> 
> Searching INBOX #msgs = 55731
>  [Updating Index] Time=78.184637 (66% of the mailbox unindexed at start)
>  [NOFIND] Time=0.045654, matches=0
>  [date] Time=0.13364, matches=55731
>  [here] Time=0.069091, matches=24663

FWIW, I found Postgres to be faster than MySQL (both 5 and 5.5), though MySQL 
5.5 with a hand-rolled config file, using metrics supplied by a dbmail/MySQL 
guru, helped a great deal for the size(data_set) < size(PHYSICAL MEMORY) cases 
where lots of write-commits were involved on the same exact setup.  MySQL "got 
close" to PSQL's performance when I did crazy things like removing filesystem 
journaling, write barriers, etc. on the mail db mountpoint.  Obviously, this is 
desperation talking.

I concede that the motivations behind SQLising mail storage extends to 
administration/replication and other non-performance/scalability aspects.  I 
suspect what constitut

Re: [Dovecot] IMAP IDLE - iPhone?

2012-08-09 Thread Robin

On 8/9/2012 11:26 PM, Luigi Rosa wrote:

I used K-9 client on Android for one year with push, but I had to remove
it and go back to integrated email client because it drained the battery.


It sounds like "push" was really implemented as a poll.

=R=


Re: [Dovecot] bcrypt availability

2012-07-15 Thread Robin

On 7/15/2012 2:14 AM, Ed W wrote:
>

Interestingly, there doesn't seem to be so much difference between
iterated sha-512 (sha512crypt) and bcrypt. Based on looking at latest
john the ripper results (although I'm a bit confused because they don't
seem to quote the baseline results using the normal default number of
rounds?)

So I think right now, many/most modern glibc are shipping with
sha256/512crypt implementations (recently uclibc also added this).


Indeed.  What I have seen is a great deal of variation in the 
configuration (/etc/login.defs or your distro's equivalent) in terms of 
making use of such things.


I don't see any added value to bcrypt over iterated SHA-512, really, and 
while I don't even pretend to claim I've looked at all distros, even 
"old-school" ones like Slackware have full support for it.  I suspect 
many admins doubt this because of configurations that don't make use of 
the modern hashing functionality.


Converting shadow files and/or login.defs would seem to be the bulk of 
the SysAdmin work to beef up the protection to bcrypt levels here.


Remember to keep this in perspective, though - this "vulnerability" only 
matters once your shadow file's hashes have been cloned, which means a 
root compromise, or a local clone of / access to the device, has already 
occurred.


=R=


Re: [Dovecot] 2.0.19 segfault

2012-06-23 Thread Robin

On 6/23/2012 3:27 PM, Mailing List SVR wrote:

I looked at the code and there was no relevant change between dovecot
2.0.13 and dovecot 2.0.19.  Upgrading between ubuntu releases updated
openssl too, and this could be the problem;

however it is not clear to me why imap over ssl works fine with
Thunderbird while I see the crash in the logs for customers that seem to
use ms outlook,


There have been many problematic interactions between OpenSSL (and some 
other SSL implementations) and some versions of schannel.dll (the system 
library responsible for SSL connections, used by Outlook and Internet 
Explorer, amongst other tools).


M$ has released hotfixes addressing various problems in schannel.dll in 
the past, such as: http://support.microsoft.com/kb/933430


There is a fair bit of write-up online about how to configure your SSL 
servers to avoid problematic ciphers and socket configurations that help 
you avoid tripping over most of the bugs.


For example: http://httpd.apache.org/docs/2.2/ssl/ssl_faq.html#msie

Whenever SSL is involved in the transaction process, always include it 
in your debug process, as SSL negotiation is non-trivial and has often 
been fraught with peril.


=R=


Re: [Dovecot] Better to use a single large storage server or multiple smaller for mdbox?

2012-04-07 Thread Robin



Putting XFS on a single RAID1 pair, as you seem to be describing above
for the multiple "thin" node case, and hitting one node with parallel
writes to multiple user mail dirs, you'll get less performance than
EXT3/4 on that mirror pair--possibly less than half, depending on the
size of the disks and thus the number of AGs created.  The 'secret' to
XFS performance with this workload is concatenation of spindles.
Without it you can't spread the AGs--thus directories, thus parallel
file writes--horizontally across the spindles--and this is the key.  By
spreading AGs 'horizontally' across the disks in a concat, instead of
'vertically' down a striped array, you accomplish two important things:

1.  You dramatically reduce disk head seeking by using the concat array.
  With XFS on a RAID10 array of 24 2TB disks you end up with 24 AGs
evenly spaced vertically down each disk in the array, following the
stripe pattern.  Each user mailbox is stored in a different directory.
Each directory was created in a different AG.  So if you have 96 users
writing their dovecot index concurrently, you have at worst case a
minimum 192 head movements occurring back and forth across the entire
platter of each disk, and likely not well optimized by TCQ/NCQ.  Why 192
instead of 96?  The modification time in the directory metadata must be
updated for each index file, among other things.


Does the XFS allocator automatically distribute AGs in this way even 
when disk usage is extremely light, i.e, a freshly formatted system with 
user directories initially created, and then the actual mailbox contents 
copied into them?


If this is indeed the case, then what you describe is a wondrous 
revelation, since you're scaling out the number of simultaneous metadata 
reads+writes/second as you add RAID1 pairs, if my understanding of this 
is correct.  I'm assuming of course, but should look at the code, that 
metadata locks imposed by the filesystem "distribute" as the number of 
pairs increase - if it's all just one Big Lock, then that wouldn't be 
the case.


Forgive my laziness, as I could just experiment and take a look at the 
on-disk structures myself, but I don't have four empty drives handy to 
experiment.


The bandwidth improvements due to striping (RAID0/5/6 style) are no help 
for metadata-intensive IO loads, and probably of little value for even 
mdbox loads too, I suspect, unless the mdbox max size is set to 
something pretty large, no?


Have you tried other filesystems and seen if they distribute metadata in 
a similarly efficient and scalable manner across concatenated drive sets?


Is there ANY point to using striping at all, a la "RAID10" in this?  I'd 
have thought just making as many RAID1 pairs out of your drives as 
possible would be the ideal strategy - is this not the case?
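For concreteness, the concat layout under discussion can be built with 
mdadm's linear level over RAID1 pairs; a sketch with hypothetical device 
names (destructive, obviously):

```
# Two mirror pairs...
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
# ...concatenated (linear, no striping), so XFS can spread its AGs
# across the member pairs
mdadm --create /dev/md10 --level=linear --raid-devices=2 /dev/md1 /dev/md2
mkfs.xfs /dev/md10
```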


=R=


Re: [Dovecot] dsync redesign

2012-03-29 Thread Robin
On 3/29/2012 5:24 AM, Stan Hoeppner wrote:
> This happens with a lot of "fan boys".  There was so much hype
> surrounding ZFS that even many logically thinking people were frothing
> at the mouth waiting to get their hands on it.  Then, as with many/most
> things in the tech world, the goods didn't live up to the hype.

The problem with zfs especially is that there are so many different 
implementations, with only the commercial Sun, er, Oracle paid Solaris having 
ALL of the promised features and the bug-fixes to make them safely usable.  For 
those users, with very large RAM-backed Sun, er, Oracle, hardware, it probably 
works well.

FreeBSD and even the last versions of OpenSolaris lack fixes for some wickedly 
nasty box-bricking bugs in de-dup, as well as many of the "sexy" features in 
zpool that had people flocking to it in the first place.  

The bug database that used to be on the OpenSolaris portal by Sun's gone dark, 
but you may have some luck through archive.org.  I know when I tried it out for 
myself using the "Community Edition" of Solaris, I did feel annoyed by the 
bait-and-switch, and the RAM requirements to run de-dupe with merely adequate 
performance were staggering if I wanted to have plenty of spare block cache 
left over for improving performance overall.

Sun left some of the FOSS operating systems a poison pill with its CDDL 
licence, which is the main reason why the implementations of zfs on Linux are 
immature and why it is being "re-implemented" with US DOE sponsorship, 
ostensibly under a GNU-compatible licence.

zfs reminds me a great deal of TIFF - lots of great ideas in the "White Paper", 
but an elusive (or very very costly) white elephant to acquire.  "Rapidly 
changing", "bleeding edge", and "hot & new" are not descriptors for filesystems 
I want to trust more than a token amount of data to.

=R=


Re: [Dovecot] Need fast Maildir to mdbox conversion

2012-03-27 Thread Robin

On 3/27/2012 3:40 PM, Jeff Gustafson wrote:

I looked around the 'Net to see if there might be a custom program for
offline Maildir to mdbox conversion. So far I haven't turned up
anything. The problem for us is that the dsync program simply takes a
lot of time to convert mailboxes.


Is it slower than doing an IMAP APPEND over an authenticated dovecot 
connection?


I've used a simple Perl script based on Mail::IMAPClient and Mail::Box 
to import 180,000+ messages into dovecot's mdbox at fairly high speed, 
and all it does is IMAP APPENDs.  (I had to shard the mailboxes because 
these Perl-based tools exhaust RAM when run with mailboxes larger than 
about 600MB.)
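The sharding step mentioned above is the interesting part; a minimal Python sketch of the batching logic (the actual script is Perl, and the 600MB ceiling here is just the figure quoted above):

```python
import os

def shard_maildir(paths, max_bytes=600 * 1024 * 1024):
    """Split a list of message files into batches no larger than
    max_bytes each, so the importer never holds more than one
    batch's worth of mail in RAM at a time."""
    batches, current, current_size = [], [], 0
    for path in paths:
        size = os.path.getsize(path)
        # start a new batch rather than blow past the ceiling
        if current and current_size + size > max_bytes:
            batches.append(current)
            current, current_size = [], 0
        current.append(path)
        current_size += size
    if current:
        batches.append(current)
    return batches
```

Each batch can then be fed to a separate importer process, which keeps the per-process memory bounded.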


My development VM test box (32-bit Slackware 13.37, 2G/2G split kernel, 
no RAID, Q6600 with only two cores allocated to the VM, 8GB of DDR2 
RAM) does:


Emails: 180,044
real    237m28.485s  (12.5 emails/second)
user    94m50.425s
sys     10m09.389s
21,984,824  /mail/home
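As a sanity check, the throughput follows directly from the wall-clock time (the raw numbers actually work out nearer 12.6/s than 12.5/s):

```python
emails = 180044
wall_seconds = 237 * 60 + 28.485  # the "real" time from the run above

rate = emails / wall_seconds
print(round(rate, 1))  # prints 12.6
```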

I'm writing a swiss-army (C-based, no bytecode crap languages) mailbox 
"transcoding" tool, since none appear to exist.  To keep it simple, I/O 
to/from "remote" mailbox connections is not pipelined.  It won't 
require more than MAXEMAILSIZE's worth of RAM (if one of the directions 
involves a remote connection), and so far when processing MIX, Maildir, 
and Mbox files, it's extremely fast.


Adding support for [sm]dbox wouldn't appear to be problematic.  At the 
moment, it supports everything Panda's c-client supports plus 
Maildir/Maildir++ (including Panda's "MIX").


Write support for Maildir is extremely UNDER-tested so far, as I've 
mainly used the tool to import Maildir hives.


I've experimented with Maildir as a format, and while the one-email-per-file 
model seems like a sensible idea, it mostly just transfers stress from one 
part of the system to another (mainly the filesystem), and not many 
filesystems handle that many files in one directory very efficiently.


None of my users have mailboxes with fewer than 100K emails in them, 
some have more than a million.


=R=


Re: [Dovecot] Creating an IMAP repo for ~100 users need some advice

2012-03-19 Thread Robin
On 3/17/2012 12:36 PM, Sven Hartge wrote:
> Storing mails inside SQL? Not supported by dovecot and not very wise,
> IMHO. DBmail does this, but to be honest, I never heard any good
> feedback from admins using that product. From what I have been told, you
> need quite the beefy server to get a decent performance out of DBmail,
> compared to the needs of a "traditional" setup like with dovecot or
> courier-mail, but I digress.

Ugh, I've tried the product.  It works pretty well, until you move more 
than a small handful of users and email hives onto it; then you hit some 
hard walls pretty fast in how many inbound emails/second it can handle, 
even on burly server configurations.

Those hard walls occur at too low a threshold for me.  The product's 
mailing list is supportive and there are many dedicated DBMail users who 
step in and answer questions, but be prepared for "BUY MORE RAM" as the 
answer to concerns about performance.  When 128GB of RAM is needed for a 
small organisation's email setup to perform well, I am strongly inclined 
to move on to the next product.

Best practices for it seem to revolve around being able to have your 
ENTIRE email + index content resident in RAM.  Well, gosh.  Why didn't I 
think of that before instead of wasting all of this time worrying about 
design and efficiency?

And if you're hoping that it will make text searches "automagically" 
fast, think again.

Timo's FTS_SQUAT blows it out of the water by orders of magnitude, even 
with mailbox sizes of around 300K emails (20GB), let alone something 
like Lucene or Solr.

I understand why it seems like a great idea to store email this way, but 
realise that the bulk of email is NOT structured or inherently relational.

=R=


Re: [Dovecot] testing fts-solr?

2012-03-02 Thread Robin

On 3/2/2012 4:40 AM, Charles Marcus wrote:

Please respond... I need to know whether or not I need to pursue this,
since we use Thunderbird in house and will be switching soon to dovecot...


This mailing list is for dovecot, not Thunderbird support.  The lack of 
replies to Thunderbird usage questions no doubt reflects this.


I would look at the GUI interface and/or "manual" for Thunderbird to 
find the answer to that question.  I suspect there is a check-box or 
configuration item that's been right in front of you all along that 
you've not thought twice about.


=R=


Re: [Dovecot] fts size

2012-03-02 Thread Robin



No, but I can help you with any questions if you want to try implementing it, 
and even finish it if you get at least the basic index/search functionality 
working. You can use v2.1's fts-lucene as a start.


That sounds like a great deal to me!  I'm glad you're still interested 
enough in it.


=R=


Re: [Dovecot] fts size

2012-03-01 Thread Robin



My initial tests for CLucene were that it would take 30% of mailbox size
(compared to 50% for Xapian). But this was before I actually implemented
it to Dovecot.. I haven't really looked at how large the indexes
actually are.


Did you ever make an fts_xapian plugin, Timo?  I've looked into Xapian 
as an alternative to the solr codebase, mainly out of a dislike of java 
and its downstream technologies.


=R=


Re: [Dovecot] testing fts-solr?

2012-02-28 Thread Robin

I think Thunderbird does this search internally, not via IMAP. You can test 
this by talking IMAP protocol directly:

telnet localhost 143
a login user pass
b select inbox
c search text hello


Yes, you definitely want to check things are being accelerated by 
issuing direct IMAP commands via telnet. Many clients try to "help" by 
performing local searches, which will only obfuscate things for you. 
Even with 150K+ messages, it shouldn't take fts_solr more than 20ms or 
so to give you results.


I too was bitten by the configuration issue.  The wiki/docs suggest that 
you only need to put the fts/fts_solr plugin spec into the imap protocol 
section, which never worked for me (unlike fts_squat, which did).  Putting 
it into the "global" plugin list made it all work for me.
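For reference, a sketch of the global placement described above (the solr host and port are placeholders):

```
# dovecot.conf -- global scope, not only inside "protocol imap"
mail_plugins = fts fts_solr

plugin {
  fts = solr
  fts_solr = url=http://solrhost:8983/solr/
}
```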


You can check your solr index data directory too.  A freshly installed 
solr index occupies almost no space, but that grows QUICKLY once it's 
indexed anything.


=R=


Re: [Dovecot] Possible broken indexer(lucene/solr)? (Updated: also present in 2.1rc7 perhaps?)

2012-02-16 Thread Robin

> You mean you deleted Solr index, so that it's empty? That should work too.
> 
> Anyway, in v2.1 Dovecot keeps track of what is the last indexed message in 
> dovecot.index files. So if you're switching between backends or have messed 
> things up in tests, you need to run "doveadm fts rescan" (for each user).

# doveadm(root): Fatal: Unknown command 'fts', but plugin fts exists. Try to 
set mail_plugins=fts

I get this, despite having fts + fts_solr defined in 20-imap.conf as 
recommended with the following plugin format stanza:

plugin {
fts = solr
fts_solr = break-imap-search url=http://solrhost:8983/solr/
}

Should I be adding fts/fts_solr to the global mail_plugins setting?
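The error message itself suggests exactly that: doveadm only sees plugins listed in the global mail_plugins setting. A sketch of the change (assuming the existing protocol-level config stays where it is):

```
# dovecot.conf -- global, so doveadm and the indexer see it too
mail_plugins = $mail_plugins fts fts_solr
```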

I have Solr up and running, without any firewalling between the hosts, and it 
never seems to even try to use it.

Even after importing fresh mail and issuing a SEARCH TEXT "your" command to 
the server, which takes about 5m or so to return results, I see only the 
following in the log:

Feb 16 17:51:54 indexer-worker(testuser): Info: Indexed 0 messages in INBOX2010

A GET /solr/ issued to http://solrhost:8983/ via telnet reports A-OK, and the 
Solr Admin console shows ready status when loaded into a web browser.  I can 
see there is ZERO traffic between the hosts during the SEARCH TEXT command's 
execution, though I can see an open connection to the solr host in 
netstat:

tcp0  0 linuxcode:56393 solrhost:8983  ESTABLISHED

=R=


Re: [Dovecot] imap process limits problem

2011-12-30 Thread Robin

On 12/30/2011 10:53 AM, Calvin Cochran wrote:

I am having a problem with the number of current processes that I cannot
seem to diagnose adequately, and is a possible bug.  This will be a bit
long, but usually more info is better.
[]
verbose_proctitle, at this moment there are 99 connections from the IP in
question, all of which show in ps output as:
dovecot/imap-login [1 connections (1 TLS)]
My understanding is that means they have successfully authenticated, and
that there should be line with
dovecot/imap [username ip TLS]
in ps output, but there isn't, so I am taking that to mean the client
closed the imap session.


This sounds like yet another round of buggy clients that just abruptly 
dump connections instead of closing them down properly, or some 
intervening firewalling configuration that's preventing the proper 
signoff and TCP FIN handshakes from completing.


The 2 hours+ sounds like these sockets (and the processes that used 
them) might be stuck in FIN_WAIT1, which isn't affected by the timeout 
specified in /proc/sys/net/ipv4/tcp_fin_timeout.


Use netstat -a on these connections to see their disposition.

You can try some of the following:

1) Lower tcp_keepalive intervals and reduce the # of probes before a 
"kill" - does Dovecot make use of SO_KEEPALIVE, or can it be configured 
to do so?


2) Lower application idle timeout settings.  (Is there a mandated 
"check-in" interval defined for IMAP clients?)
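The keepalive knobs from point 1 can be exercised at the socket level; a minimal Python sketch of the per-socket options (Linux-specific option names; whether Dovecot itself sets these is the open question above):

```python
import socket

# Enable keepalive probes on a socket and tighten the Linux defaults,
# so a dead peer is detected after ~60s idle + 3 probes at 10s intervals
# instead of the default couple of hours.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)   # idle time before first probe
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # interval between probes
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # failed probes before reset

print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # prints 1
```

The system-wide equivalents live under /proc/sys/net/ipv4/tcp_keepalive_*, but they only apply to sockets that have SO_KEEPALIVE enabled in the first place.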


=R=


Re: [Dovecot] Calculation of suggested vsz_limit values

2011-12-19 Thread Robin

Timo  wrote:


Not really. For mail related processes (imap, pop3, lmtp) you could
find the largest dovecot.index.cache file and make sure that
vsz_limit is at least 3 times that.
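That rule of thumb is easy to script; a hedged Python sketch (the mail root path is a placeholder, and "3 times" is taken directly from the advice above):

```python
from pathlib import Path

def suggested_vsz_limit(mail_root):
    """Return 3x the size of the largest dovecot.index.cache file
    under mail_root, per the rule of thumb above (0 if none found)."""
    sizes = [p.stat().st_size
             for p in Path(mail_root).rglob("dovecot.index.cache")]
    return 3 * max(sizes, default=0)
```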


Yikes.  Aside from forcing users to "prune" mailboxes, what do you 
suggest when vsz_limit exceeds available host RAM?


I ran across another "RAM only" process in fts_squat for a large, but 
not *HUGE* mailbox when the size of the dovecot.index.search.uids file 
got larger than 600MB.


There's no mitigation for these problems other than "buying more RAM" or 
getting users to delete/file their emails?


I was quite shocked to hit these limits so early - there was no mention 
of RAM resource requirements in the Dovecot documentation I'd perused. :(


=R=



[Dovecot] Dovecot 2.1rc1 + 2.0.16 woes regarding fts_squat

2011-12-13 Thread Robin
I can confirm the report posted in 
http://dovecot.org/list/dovecot/2011-November/062263.html that fts_squat no 
longer seems to be used after moving from 2.0.16 to 2.1rc1.  I don't see crash 
reports in the logs, just "0 messages indexed". My search test tool just does a 
normal IMAP SEARCH for a long non-existent string. If there's another way to 
trigger re-indexing in 2.1, I don't see anything in the documentation for it.

I've enabled mail_debug, but no log entries that shed light on the problem are 
available.  Has the configuration for fts_squat changed?

If anyone has a working fts_squat setup with Dovecot 2.1rc1, I'd appreciate 
hearing how you have it set up.

During a large mail import with 2.0.16 today, I ran across a worrying message 
in the logs during an fts_squat reindex: out of memory. The plugin doesn't obey 
the mmap_disable configuration directive, which I've confirmed in the plugin 
source.

The mailbox in question is only 17GB (mdbox style), with about 90,000 emails 
in it.  Its "index" (for the purposes of normal IMAP retrieval, as opposed to 
IMAP TEXT/BODY searching) is fine and uncorrupted.  I freshly import these 
mailboxes between test iterations and any version changes anyway, so if there's 
corruption, it's happening within dovecot only.  I'm using Mail::IMAPClient 
to create + append mail over localhost, not any direct mdbox conversion 
trickery.

In looking through the code, I see that mmap() is called for the *ENTIRE FILE*, 
which is guaranteed to fail on large indexes.  I assume this was done out of 
expedience, but it's a "risky" sort of thing to do in a server process, even if 
8GB RAM systems do seem to grow on trees.  I intend to put this to work in a 
large installation (>10K users), so this IS of some concern for me in the 
long-term.

Dec 12 22:48:52 linuxcode dovecot: imap(user1001): Error: 
mremap_anon(188084224) failed: Cannot allocate memory
Dec 12 22:48:52 linuxcode dovecot: imap(user1001): Error: 
read(.../mdbox/mailboxes/INBOX2010/dbox-Mails/dovecot.index.search.uids) 
failed: Cannot allocate memory
Dec 12 22:48:52 linuxcode dovecot: imap(user1001): Error: 
mremap_anon(188280832) failed: Cannot allocate memory
Dec 12 22:48:52 linuxcode dovecot: imap(user1001): Error: 
read(.../mdbox/mailboxes/INBOX2010/dbox-Mails/dovecot.index.search.uids) 
failed: Cannot allocate memory
Dec 12 22:50:47 linuxcode dovecot: imap(user1001): Error: Corrupted squat 
uidlist file 
.../mdbox/mailboxes/INBOX2010/dbox-Mails/dovecot.index.search.uids: uidlist not 
found



dovecot -n output:

# 2.0.16: /etc/dovecot/dovecot.conf
# OS: Linux 2.6.35.7-smp i686 Slackware 13.1.0 
auth_mechanisms = plain cram-md5 digest-md5 apop
default_vsz_limit = 192 M
disable_plaintext_auth = no
first_valid_gid = 100
hostname = linuxcode
info_log_path = /tmp/dovecot.log
last_valid_gid = 6
last_valid_uid = 6
listen = *
mail_location = mdbox:~/mdbox
mail_plugins = " zlib acl"
mdbox_preallocate_space = yes
mdbox_rotate_interval = 1 days
mmap_disable = yes
passdb {
  args = scheme=plain /etc/cram-md5.pwd
  driver = passwd-file
}
plugin {
  acl = vfile
}
postmaster_address = postmaster@linuxcode
quota_full_tempfail = yes
service imap-login {
  inet_listener imap {
port = 143
  }
  inet_listener imaps {
port = 993
ssl = yes
  }
  process_min_avail = 0
  vsz_limit = 64 M
}
service imap {
  vsz_limit = 512 M
}
service lmtp {
  unix_listener lmtp {
mode = 0666
  }
}
ssl = no
userdb {
  args = blocking=no
  driver = passwd
}
protocol lmtp {
  mail_plugins = " zlib acl"
}
protocol imap {
  mail_plugins = " zlib acl fts fts_squat imap_acl imap_zlib"
  plugin {
fts = squat
fts_squat = partial=4 full=10
  }
}


CONFIGURE

LIBS=-lnsl CFLAGS='-O2 -march=core2 -mtune=core2 -fstack-protector 
-fomit-frame-pointer' \
CXXFLAGS='-O2 -march=core2 -mtune=core2 -fstack-protector -fomit-frame-pointer' 
\
LDFLAGS=-s ./configure --prefix=/usr --sysconfdir=/etc \
--with-mysql --with-sqlite --with-pgsql --without-pam --with-sql \
--with-libwrap --with-libcap --with-ssl=openssl --with-solr \
--with-mem-align=16 --with-bzlib --with-zlib --localstatedir=/var


OS: Slackware 13.1 (32-bit, 2GB physical RAM, kernel setup for 2G/2G split) 
fully patched up

=R=


Re: [Dovecot] New error messages

2009-10-23 Thread Robin Atwood
On Saturday 24 October 2009, Timo Sirainen wrote:
> On Sat, 2009-10-24 at 02:38 +0700, Robin Atwood wrote:
> > I was glancing at my logwatch report when I noticed:
> >
> > dovecot: IMAP(robinmail):
> > fchown(/home/robinmail/mail/.imap/INBOX/dovecot.index.log.newlock, -1,
> > 10(wheel)) failed: Operation not permitted (egid=100(users), group based
> > on /var/mail/robinmail): 1 Time(s)
> 
> chmod 0600 /var/mail/* fixes this.
> 

Done. Thanks Timo!
-Robin
-- 
------
Robin Atwood.

"Ship me somewheres east of Suez, where the best is like the worst,
 Where there ain't no Ten Commandments an' a man can raise a thirst"
 from "Mandalay" by Rudyard Kipling
--


[Dovecot] New error messages

2009-10-23 Thread Robin Atwood
I was glancing at my logwatch report when I noticed:

dovecot: IMAP(robinmail): 
fchown(/home/robinmail/mail/.imap/INBOX/dovecot.index.log.newlock, -1, 
10(wheel)) failed: Operation not permitted (egid=100(users), group based on 
/var/mail/robinmail): 1 Time(s)
dovecot: IMAP(robinmail): 
fchown(/home/robinmail/mail/.imap/INBOX/dovecot.index.tmp, -1, 10(wheel)) 
failed: Operation not permitted (egid=100(users), group based on 
/var/mail/robinmail): 3 Time(s)

So what's going on here? :) I am fairly sure I never had them before. Running 
dovecot 1.2.6.

TIA
-Robin


Re: [Dovecot] Phone cannot receive mail suddenly

2009-02-02 Thread Robin Atwood
On Tuesday 03 Feb 2009, Timo Sirainen wrote:
> On Sun, 2009-02-01 at 20:26 +0700, Robin Atwood wrote:
> > Starting at midnight Feb 1 my phone can no longer fetch mail from
> > Dovecot. It endlessly connects and reconnects as you can see in the log
> > below. I have restarted dovecot, the phone, deleted /home/robinmail/mail,
> > all to no avail. I have turned on debug output but it does not tell me
> > anymore. I can connect and see the folder using the KMail imap client.
> > Any idea how I can proceed with this? The phone is 192.168.1.57 (LAN) or
> > 202.91.19.194 (GPRS) and Dovecot and KMail are on 192.168.1.2.
>
> See what rawlog shows: http://wiki.dovecot.org/Debugging/Rawlog

Thanks for the tip. In fact, the problem was, as I thought, with the phone 
software. It got so confused it deleted the email account and when I 
recreated it everything worked again. I suspect something with the IMAP 
indices getting out of sync.

Cheers...
-Robin


[Dovecot] Phone cannot receive mail suddenly

2009-02-01 Thread Robin Atwood
Starting at midnight Feb 1 my phone can no longer fetch mail from Dovecot. It 
endlessly connects and reconnects as you can see in the log below. I have 
restarted dovecot, the phone, deleted /home/robinmail/mail, all to no avail. 
I have turned on debug output but it does not tell me anymore. I can connect 
and see the folder using the KMail imap client. Any idea how I can proceed 
with this? The phone is 192.168.1.57 (LAN) or 202.91.19.194 (GPRS) and 
Dovecot and KMail are on 192.168.1.2.


Feb  1 20:12:58 opal dovecot: Dovecot v1.1.8 starting up
Feb  1 20:13:41 opal dovecot: imap-login: Login: user=, 
method=PLAIN, rip=192.168.1.57, lip=192.168.1.2
Feb  1 20:13:41 opal dovecot: IMAP(robinmail): Effective uid=500, gid=100, 
home=/home/robinmail
Feb  1 20:13:41 opal dovecot: IMAP(robinmail): mbox: 
data=~/mail:INBOX=/var/mail/robinmail
Feb  1 20:13:41 opal dovecot: IMAP(robinmail): fs: root=/home/robinmail/mail, 
index=, control=, inbox=/var/mail/robinmail
Feb  1 20:13:41 opal dovecot: IMAP(robinmail): Connection closed bytes=18/484
Feb  1 20:14:05 opal dovecot: IMAP(robinmail): Effective uid=500, gid=100, 
home=/home/robinmail
Feb  1 20:14:05 opal dovecot: IMAP(robinmail): mbox: 
data=~/mail:INBOX=/var/mail/robinmail
Feb  1 20:14:05 opal dovecot: IMAP(robinmail): fs: root=/home/robinmail/mail, 
index=, control=, inbox=/var/mail/robinmail
Feb  1 20:14:05 opal dovecot: imap-login: Login: user=, 
method=PLAIN, rip=192.168.1.2, lip=192.168.1.2, TLS
Feb  1 20:14:37 opal dovecot: IMAP(robinmail): Effective uid=500, gid=100, 
home=/home/robinmail
Feb  1 20:14:37 opal dovecot: IMAP(robinmail): mbox: 
data=~/mail:INBOX=/var/mail/robinmail
Feb  1 20:14:37 opal dovecot: IMAP(robinmail): fs: root=/home/robinmail/mail, 
index=, control=, inbox=/var/mail/robinmail
Feb  1 20:14:37 opal dovecot: imap-login: Login: user=, 
method=PLAIN, rip=202.91.19.194, lip=192.168.1.2
Feb  1 20:14:39 opal dovecot: IMAP(robinmail): Connection closed bytes=18/453
Feb  1 20:14:43 opal dovecot: IMAP(robinmail): Effective uid=500, gid=100, 
home=/home/robinmail
Feb  1 20:14:43 opal dovecot: IMAP(robinmail): mbox: 
data=~/mail:INBOX=/var/mail/robinmail
Feb  1 20:14:43 opal dovecot: IMAP(robinmail): fs: root=/home/robinmail/mail, 
index=, control=, inbox=/var/mail/robinmail

TIA
-Robin


Re: [Dovecot] Questions about using sieve

2008-09-13 Thread Robin Atwood
On Saturday 13 Sep 2008, Timo Sirainen wrote:
> On Sat, 2008-09-13 at 12:37 +0700, Robin Atwood wrote:
> > and I have added "mail_plugins = cmusieve" to protocol lda{}. I then
> > created a ".dovecot.sieve" script but am not sure where to place it. I
> > tried ~/ and ~/.imap but it never seems to get compiled.
>
> Home directory is correct.
>
> > Dovecot is running as a root
> > service so it should have access. Setting mail_debug=yes produces no
> > clues, so what should I be looking at?
>
> Sieve is used only by deliver binary, which isn't running as root.
> Sounds like you're never even calling it?

That was my impression! Since mail delivery is already working, I assumed I do 
not have to customise sendmail.cf. Is that not the case? I am not sure  I 
understand this deliver thing, I thought in my case it actually meant 
sendmail.

-Robin


[Dovecot] Small deliver bug

2008-09-11 Thread Robin Breathe
Setting "mail_location=maildir:~/Maildir:INDEX=~/index:CONTROL=~/control"
(or separating INDEX and CONTROL in general) along with
"quota=maildir" (and/or having a ~/.dovecot.sieve) appears to expose a
small bug in deliver (version 1.1.3, but probably all). While deliver
will create a missing INDEX directory, it throws an error if the
CONTROL directory is missing, exiting with exit code 75 and the
following message:
  Internal error occurred. Refer to server log for more information.
[2008-09-11 12:30:13]

The associated syslog error is:
  Sep 11 12:30:13 csmail2 deliver(88345678): [ID 994296 mail.error]
file_dotlock_open(/users4/stu/78/88345678/control/maildirsize) failed:
No such file or directory
  Sep 11 12:30:13 csmail2 deliver(88345678): [ID 702911 mail.error]
Internal quota calculation error
  Sep 11 12:30:13 csmail2 deliver(88345678): [ID 702911 mail.info]
msgid=<[EMAIL PROTECTED]>: save failed to
INBOX: Internal error occurred. Refer to server log for more
information. [2008-09-11 12:30:13]

Full "dovecot -n" output is attached.

Regards,
Robin
# 1.1.3: /app/dovecot/1.1.3/etc/dovecot.conf
base_dir: /dovecot/run/imap/
protocols: none
ssl_disable: yes
disable_plaintext_auth: no
login_dir: /dovecot/run/imap/login
login_executable: /app/dovecot/1.1.3/libexec/dovecot/imap-login
login_process_per_connection: no
login_processes_count: 8
max_mail_processes: 8192
mail_max_userip_connections: 32
verbose_proctitle: yes
first_valid_uid: 999
first_valid_gid: 10
mail_location: maildir:~/Maildir:INDEX=~/index:CONTROL=~/control
mmap_disable: yes
mail_nfs_storage: yes
mail_nfs_index: yes
maildir_copy_preserve_filename: yes
mail_plugins: quota imap_quota
imap_client_workarounds: delay-newmail netscape-eoh tb-extra-mailbox-sep
auth default:
  cache_size: 8192
  cache_ttl: 1200
  cache_negative_ttl: 1200
  passdb:
driver: sql
args: /app/dovecot/1.1.3/etc/dovecot-sql.conf
  userdb:
driver: sql
args: /app/dovecot/1.1.3/etc/dovecot-sql.conf
  socket:
type: listen
client:
  path: /dovecot/run/imap/auth-client
  mode: 384
master:
  path: /dovecot/run/imap/auth-master
  mode: 384
plugin:
  quota: maildir

Re: [Dovecot] dotlock errors without using dotlock

2008-01-25 Thread Robin Breathe

On 20 Dec 2007, at 16:41, Timo Sirainen wrote:

On Thu, 2007-12-20 at 10:31 -0500, Brian Taber wrote:

I have my setup using fcntl for mailbox and index locking.


Oh, and as for this, Dovecot uses only dotlocking for maildir's
dovecot-uidlist file, regardless of what your settings are. I was
thinking about changing this in future releases though.


A vote for getting this changed - we've been seeing terrible IMAP  
performance under Solaris 10 when CONTROL points to a ZFS filesystem,  
seemingly caused by excessive latency on the low level zfs_create()  
call (client sessions sometimes lock for in excess of a minute).  
Background reading suggests that ZFS currently reacts badly to the  
constant creation/deletion of many tiny files (to give you an idea of  
scale we have ~20k users). As a workaround we've had to create a UFS  
filesystem on a ZFS volume to house CONTROL files.


Regards,
Robin


Re: [Dovecot] How to logoff a session with dovecot?

2007-12-29 Thread Robin Atwood
On Saturday 29 Dec 2007, Timo Sirainen wrote:
> On Sun, 2007-12-09 at 18:38 +0700, Robin Atwood wrote:
> > I use dovecot to push email to my SE P1i and it works very well. :)
> > However, I have two email accounts set up on the phone, one using my
> > domain for GPRS and public WiFi and one using my WLAN address for use at
> > home, the idea being I don't want to pay for GPRS data at home. The
> > trouble is the GPRS account remains logged on and I get the mail in both
> > inboxes. There is no option in the email client on the phone to
> > disconnect, so is there any trick to forcing a disconnect from the mail
> > server?
>
> I guess you could do something with post-login scripting
> (http://wiki.dovecot.org/PostLoginScripting). For example track GPRS vs.
> non-GPRS connections based on $IP. When non-GPRS connection logs in,
> kill all imap processes using GPRS IP.

Timo -
Thanks for the suggestion, that hook looks very useful. I tried killing the 
imap-login processes manually and it seemed to do the trick!

Cheers...
-Robin


[Dovecot] How to logoff a session with dovecot?

2007-12-09 Thread Robin Atwood
I use dovecot to push email to my SE P1i and it works very well. :) However, I 
have two email accounts set up on the phone, one using my domain for GPRS and 
public WiFi and one using my WLAN address for use at home, the idea being I 
don't want to pay for GPRS data at home. The trouble is the GPRS account 
remains logged on and I get the mail in both inboxes. There is no option in 
the email client on the phone to disconnect, so is there any trick to forcing 
a disconnect from the mail server?

TIA
-Robin


Re: [Dovecot] Compiling drac.c on a 64 bit system

2007-10-05 Thread Robin Atwood
On Friday 05 Oct 2007, Marcus Rueckert wrote:
> On 2007-10-05 17:48:51 +0700, Robin Atwood wrote:
> your libdrac is compiled without -fPIC -pic

Thanks, that did the trick!

-Robin.


[Dovecot] Compiling drac.c on a 64 bit system

2007-10-05 Thread Robin Atwood
I have just installed dovecot and want to set up the drac interface. However, 
when I compile drac.c I get the following link errors; anybody know what the 
right compile options are for a 64 bit system? I have a Gentoo Linux system 
using the amd64 architecture.


gcc -Wall -W -shared -fPIC -DHAVE_CONFIG_H -I$dovecot -I$dovecot/src/lib
drac.c -o drac.so -ldrac
/usr/lib/gcc/x86_64-pc-linux-gnu/4.1.2/../../../../x86_64-pc-linux-gnu/bin/ld: 
/usr/lib/gcc/x86_64-pc-linux-gnu/4.1.2/../../../../lib64/libdrac.a(dracauth.o): 
relocation R_X86_64_32S against `a local symbol' can not be used when making 
a shared object; recompile with -fPIC
/usr/lib/gcc/x86_64-pc-linux-gnu/4.1.2/../../../../lib64/libdrac.a: could not 
read symbols: Bad value
collect2: ld returned 1 exit status

TIA
-Robin.


Re: [Dovecot] MANAGESIEVE patch v6 for dovecot 1.0.3

2007-08-17 Thread Robin Breathe
Stephan Bosch wrote:
> On Fri, 2007-08-17 at 11:56 +0100, Robin Breathe wrote:
>> Should the current incarnation of the patch support TLS, or is there
>> anything I need to do to enable TLS for managesieve; the Thunderbird
>> Sieve extension hangs when "Use TLS" option is selected. 
> Yes, it should. I'll have a look at the sieve extension's TLS support
> this evening (i didn't know it supported TLS already). I re-tested the
> TLS support of the managesieve patch v6 at my end and it still works. 

I can confirm that TLS is working via gnutls-cli, so I guess the problem
must lie with the Sieve extension. Of note, we're using a non-standard
port (12000) and a chained, wildcard GlobalSign certificate.

Regards,
Robin


Re: [Dovecot] MANAGESIEVE patch v6 for dovecot 1.0.3

2007-08-17 Thread Robin Breathe
Stephan Bosch wrote:
> I have updated the MANAGESIEVE patch to (hopefully) fix the compilation
> issues reported by Robin Breathe. This is a patch against the latest
> stable release 1.0.3. It currently won't compile with 1.1 due to
> significant changes in the master code.  

I can confirm that it's now compiling fairly cleanly with Sun CC under
Solaris 10 again, thanks.

> Change Log V6
> -
> 
> - Corked the client output stream while producing the capability greeting and
>   on some other occasions as well. Some naive client implementations expect
>   to receive this as a single TCP frame, and it is good practice to do so
>   anyway. With this change the Thunderbird sieve extension (v0.1.1) seemed to
>   work. However, scripts larger than a TCP frame still caused failures. All
>   these issues are fixed in the latest version of the sieve add-on (currently
>   v0.1.4).

Should the current incarnation of the patch support TLS, or is there
anything I need to do to enable TLS for managesieve; the Thunderbird
Sieve extension hangs when "Use TLS" option is selected. Configuration
below:

# ./dovecot -n
# 1.0.3: /app/dovecot/1.0.3-managesieve/etc/dovecot.conf
base_dir: /dovecot/run-managesieve/
protocols: managesieve
listen: imap.brookes.ac.uk:12000
ssl_cert_file: /app/openssl/certs/public/dovecot.pem
disable_plaintext_auth: no
login_dir: /dovecot/run-managesieve/login
login_executable:
/app/dovecot/1.0.3-managesieve/libexec/dovecot/managesieve-login
login_processes_count: 16
login_max_processes_count: 512
max_mail_processes: 8192
verbose_proctitle: yes
first_valid_uid: 900
first_valid_gid: 10
mail_location:
maildir:%h/Maildir:INDEX=/dovecot/index/%u:CONTROL=/dovecot/control/%u
mail_debug: yes
mail_executable: /app/dovecot/1.0.3-managesieve/libexec/dovecot/managesieve
mail_plugin_dir: /app/dovecot/1.0.3-managesieve/lib/dovecot/managesieve
namespace:
  type: private
  inbox: yes
auth default:
  cache_size: 8192
  verbose: yes
  debug: yes
  passdb:
driver: pam
args: cache_key=%u%r%l%s dovecot
  userdb:
driver: passwd
plugin:
  quota: fs

Regards,
Robin


Re: [Dovecot] MANAGESIEVE patch v5 for dovecot 1.0.2

2007-08-15 Thread Robin Breathe
Timo Sirainen wrote:
> On Wed, 2007-08-15 at 15:29 +0100, Robin Breathe wrote:
>> Stephan Bosch wrote:
>>> Have fun testing the patch. Notify me when there are problems.
>> Stephan,
>>
>> There's a small problem with your patch as it stands: it depends on a
>> number of GCCisms, and fails to compile with, for example, Sun CC under
>> Solaris 10.
>> Removing all of your "__attribute__((unused))" declarations goes some
>> way, but the build then fails with the following:
> 
> These can be replaced with __attr_unused__.

Great.

>> "sieve-implementation.c", line 193: void function cannot return value
>> cc: acomp failed for sieve-implementation.c
>>
>> A reasonable error given that sieve_runenv_mark_duplicate() is a void
>> function with a return :) Removing the "return" leads to a clean build,
>> but it's not clear what implications that might have.
> 
> Probably just an accidental mistake. I've had the same problem when
> changing return values to voids. It's annoying that gcc doesn't complain
> about this.

Fair enough, you think that removing the return is safe/correct then?

On a side-note, I've not seen anything from you to indicate whether the
managesieve functionality will be integrated into a future release. Any
thoughts?

Regards,
Robin


Re: [Dovecot] MANAGESIEVE patch v5 for dovecot 1.0.2

2007-08-15 Thread Robin Breathe
Stephan Bosch wrote:
> Have fun testing the patch. Notify me when there are problems.

Stephan,

There's a small problem with your patch as it stands: it depends on a
number of GCCisms, and fails to compile with, for example, Sun CC under
Solaris 10.
Removing all of your "__attribute__((unused))" declarations goes some
way, but the build then fails with the following:

 /opt/SUNWspro/bin/cc -DHAVE_CONFIG_H -I. -I../.. -I../../src/lib
 -I../../src/lib-storage -I../../src/lib-mail -I../../src/lib-sievestorage
 -I/app/openssl/0.9.7m/include -g -c sieve-implementation.c -KPIC -DPIC
 -o .libs/sieve-implementation.o
"sieve-implementation.c", line 193: void function cannot return value
cc: acomp failed for sieve-implementation.c

A reasonable error given that sieve_runenv_mark_duplicate() is a void
function with a return :) Removing the "return" leads to a clean build,
but it's not clear what implications that might have.

NB: this is applied against dovecot-1.0.3, though only one of the hunks
is off by 1 line.

Regards,
Robin