Re: [Dovecot] remote hot site, IMAP replication or cluster over WAN

2010-11-03 Thread Eric Jon Rostetter

Quoting Johan Hendriks :

I do not know how the rebuild goes with HAST if the master provider
goes down; like I said, I need to try and test it.

Maybe a question on the freebsd-fs mailing list will answer this.

More about HAST http://wiki.freebsd.org/HAST


Sounds/looks a lot like DRBD.  The config snip shown is about the same for
both...  But I don't think it is DRBD itself, rather something modeled on DRBD...
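
For reference, a HAST resource definition looks roughly like this (a sketch
based on the wiki page above; hostnames, addresses, and device paths are all
made up):

resource mail {
        on hosta {
                local /dev/da1
                remote 10.0.0.2
        }
        on hostb {
                local /dev/da1
                remote 10.0.0.1
        }
}

You would then promote one side with something like "hastctl role primary
mail", which is indeed very close in spirit to DRBD's resource files and
drbdadm.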

DRBD will generally (it's configurable) restart a rebuild as soon as a host
comes up and contacts the other host(s), assuming it can.  If it can't
for any reason, it just sits and waits for manual intervention
or for something to change so it can continue.
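
In DRBD 8.x those behaviors are tunable per resource in drbd.conf; a minimal
sketch (resource name, devices, and addresses are all made up):

resource mail {
        protocol C;
        startup {
                wfc-timeout      120;  # how long to wait for the peer at boot
                degr-wfc-timeout  60;  # shorter wait if we booted degraded
        }
        syncer {
                rate 40M;              # cap resync bandwidth
        }
        net {
                # split-brain policies: when an automatic resync is NOT safe,
                # DRBD disconnects and waits for manual intervention
                after-sb-0pri discard-younger-primary;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
        }
        on hosta {
                device    /dev/drbd0;
                disk      /dev/sda3;
                address   10.0.0.1:7788;
                meta-disk internal;
        }
        on hostb {
                device    /dev/drbd0;
                disk      /dev/sda3;
                address   10.0.0.2:7788;
                meta-disk internal;
        }
}

With something like this, a node that reconnects cleanly resyncs on its own;
only the split-brain cases fall through to the "sits and waits" behavior
described above.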

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.



Re: [Dovecot] remote hot site, IMAP replication or cluster over WAN

2010-11-03 Thread Eric Jon Rostetter

Quoting Stan Hoeppner :


Johan Hendriks put forth on 11/3/2010 3:32 AM:


Hello, I am working primarily with FreeBSD, and the latest release has a
service called HAST.
See it as a mirrored disk over the network.


This is similar to the DRBD solution.


With CARP in the mix, when the master machine fails, it starts dovecot
on the slave.
This way you have failover without user intervention.


This is similar to heartbeat, or RHCS, etc.


1.  How do you automatically redirect clients to the IP address of the
slave when the master goes down?  Is this seamless?  What is the
duration of "server down" seen by clients?  Seconds, minutes?


Usually there is a "floating IP" that the clients use.  Whichever
server is active has this IP assigned (usually in addition to another
IP used for management and such).

The transition time depends on how the master goes down.  If you do
an administrative shutdown or transfer, it is usually just a fraction
of a second for the change to take effect, and maybe a bit longer for
the router/switch to get the new MAC address for the IP and route things
correctly.

If the primary crashes/dies, then it is usually several seconds before
the secondary confirms the primary is in trouble, makes sure it is really
down (STONITH), and takes over the IP, mounts any needed filesystems,
and starts any needed services...  In this case, the ARP/MAC issue isn't
really a problem because the transition takes longer anyway.
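
On FreeBSD the floating IP would typically be CARP; roughly (addresses,
vhid, and password are made up):

# master: lowest advskew wins the election
ifconfig carp0 create
ifconfig carp0 vhid 1 advskew 0 pass sekrit 192.0.2.10/24

# backup: same vhid and password, higher advskew
ifconfig carp0 create
ifconfig carp0 vhid 1 advskew 100 pass sekrit 192.0.2.10/24

On Linux the equivalent is usually an IP address resource managed by
heartbeat or RHCS, but the idea is the same: the clients only ever see
192.0.2.10.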


2.  When you bring the master back up after repairing the cause of the
failure, does it automatically and correctly resume mirroring of the
HAST device so it obtains the new emails that were saved to the slave
while it was offline?  How do you then put the master back into service
and make the slave offline again?


DRBD does (or at least can, it is configurable).  Sometimes you might
just do role reversal (old primary becomes secondary, old secondary stays
the primary).  Other times you might have the original primary become
primary again (say, if the original primary has "better" hardware, etc).

So, these things really depend on the use case, and the failure case...
And are usually configurable. :)

I can give two personal examples.  First, I have a file server, which is
an active-passive cluster.  Since the hardware is identical, when one node
fails, the other is promoted to primary.  When the dead one comes back, it
stays as secondary.  It is all automatic via RHCS and DRBD using ext3.  It
always feels like I'm wasting a machine, but it is rock solid...

Second I have a mail cluster which is active-active (still RHCS but with
DRBD+GFS2).  When both nodes are up, one does the pop/imap, mailing list
web/cli/email interface, and slave LDAP services, while the other node
does the mailing list processing, SMTP processing, anti-virus/spam
processing, etc.  When one machine goes down, the services on that
machine migrate automatically to the other machine.  When the machine
comes back up, the services migrate back to their "home" machine.
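
In RHCS that "migrate back home" behavior is set per failover domain in
cluster.conf; a rough sketch (node, domain, and service names are made up):

<rm>
  <failoverdomains>
    <!-- ordered domain: rgmanager prefers node1 and fails back to it;
         adding nofailback="1" instead would give the file-server behavior -->
    <failoverdomain name="dovecot-home" ordered="1" restricted="1">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <service name="Dovecot" domain="dovecot-home" autostart="1" recovery="relocate"/>
</rm>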

Time for failover is a second or two for an admin failover, and for a
crash/etc. maybe 15-30 seconds max for the fileserver, and 10-15 seconds
for the mail server.  During the failover, connections may hang or fail,
but most clients just retry the connection and get the new machine without
user intervention (or in the case of email clients, sometimes they
annoyingly ask for the password again, but that is not too bad).  I've never
had anyone contact me during either type of failover, which makes me
think they either don't notice, or they write it off as a "normal network
hiccup" kind of thing.  (Well, they did contact me once, when the failover
failed and the service went completely down, but that was my fault.)

So, again, the answer is, as always, "it depends..."


--
Stan



--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.



Re: [Dovecot] So, what about clustering and load balancing?

2010-02-14 Thread Eric Jon Rostetter

Quoting Stan Hoeppner :


Eric Rostetter put forth on 2/13/2010 11:02 PM:


I'm bowing out of this discussion, as I was using words in a non-precise
way, and it is clear that Stan is using them in a very precise way,
and hence we're not really discussing the same thing...

My fault for not thinking/writing in precise, technical terms...
I was basically introducing things like i/o contention and bandwidth
issues into his thread which was solely on actual lock contention...

Suffice it to say: yes, there can be lock contention, but no, it isn't
really any worse than on an SMP machine...  What I tend to think of informally
as lock contention issues are really i/o contention and bandwidth
issues.


You have the same lock contention if both the MTA and dovecot are on the same
host.  The only difference is that for the clustered case, notification of the
lock takes longer [...]


Yeah, which is what I was thinking of as lock contention, but really it
is about i/o or bandwidth, and not lock contention...  I was just being
real sloppy with my language...

My $deity that is an unnecessarily complicated HA setup.  You could have an FC
switch, a 14 x 10K rpm 300GB SAN array and FC HBAs for all your hosts for about
$20K or a little less.  Make it less than $15K for an equivalent iSCSI setup.


My setup, since it used some existing machines with only 4 new machines,
was cheaper.  And it supports iSCSI; I just chose not to use it for the
email, as my opinion (right or wrong) was that DRBD+GFS local to the
mail server would give better performance than iSCSI back to my SAN cluster.

An inexpensive SAN will outperform this setup by leaps and bounds,
and eliminate a boat load of complexity.  You might want to look into it.


Well, essentially I do have an inexpensive SAN (see the description); I
just chose not to use it for the e-mail, which may or may not have been
a good idea, but it was what I decided to do.

I could always switch the email to an iSCSI connection in the future
without any problem other than some very limited downtime to copy the
files off the local disks to the SAN machines...  (Very limited, as
I can move the mbox files one by one when they are not in use, using
the routing mentioned to point dovecot at the proper location...  That's
actually how I migrated to this system, and I only had an issue with 13
users who ran non-stop imap connections, whom I had to actually disconnect...
All the other users had times when they were not connected, so I could
migrate them without them ever noticing...  It took a while, but it went
completely unnoticed by all but those last 13 users...)
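
For the curious: one way to do that kind of per-user routing in Dovecot is to
have the passdb return proxy/host extra fields pointing at whichever box
currently holds the user's mbox.  A hypothetical SQL passdb sketch (the users
table and its columns are made up):

password_query = SELECT password, host, 'Y' AS proxy FROM users WHERE userid = '%u'

Moving a user is then just an UPDATE of the host column once the mbox has
been copied.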


--
Stan


--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] [OT] DRBD

2009-11-24 Thread Eric Jon Rostetter

Quoting Bernd Petrovitsch :


(Now) 25K mailboxes (with ~92GB data) on DRBD-6 (now old - the thing was
built in early 2006) with ext{3,4} on it. As long as heartbeat/
pacemaker/openais/whatever assures that it is mounted on at most one
host, no problems whatsoever with the filesystem as such.


I'm doing a file server with similar setup, but with DRBD 8.2...
Primary/secondary, heartbeat, ext3 filesystems...


Nowadays, DRBD-7+GFS or OCFS2 (and dovecot of course;-) are very
probably worth exploring for new clusters - see other mails.


I'm doing dovecot off DRBD 8.3, RHCS, GFS, active-active...  Why would
you recommend DRBD 7 instead of 8?


Bernd


--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] [OT] DRBD

2009-11-23 Thread Eric Jon Rostetter

Quoting Rodolfo Gonzalez Gonzalez :


has someone worked with DRBD (http://www.drbd.org) for HA of mail storage?


Yes.


if so, does it have stability issues?


None that I've run into.


comments and experiences are thanked :)


Works great for me (two machines, sharing via DRBD, using LVM+GFS, in
active-active mode).


Thanks,
Rodolfo.



--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Dovecot and SATA Backend - filesystems

2009-11-19 Thread Eric Jon Rostetter

Quoting John Lyons :


I've spent a week looking at the likes of PVFS, GFS, Lustre and a whole
host of different systems, including pNFS (NFS 4.1)

At the risk of diverting the thread away from the SATA backend, is there
any recommendation for a fault-tolerant file service?


Most people seem to be recommending either GFS or OCFS.  I use GFS myself.
They are not fault tolerant per se, just cluster-enabled filesystems...
That is, they are not distributed filesystems, but shared filesystems.


I'm really looking for 3 or 4 boxes to store data/metadata to support 10
Apache and Dovecot servers.


If you need to share the filesystem between 3-4 boxes, you either need:
1) A SAN/NAS/etc.
2) Something to act like a SAN/NAS (drbd, etc)
3) Something that exports a filesystem to other hosts (gnbd, nfs, etc).
4) A distributed filesystem...

I can't tell you which of the above would be best for you, since it depends
on your needs and budget and skill level and risk tolerance and such.


The things I don't like are having a single metadata server be a single
point of failure.


Yes, we certainly want to avoid that, if possible...  A replicated SAN
would work, and I use a poor man's replicated SAN via DRBD myself, but it
is only two nodes...  (You could then gnbd the files from those two nodes
to additional nodes if you wanted, though, to make it scale to almost any
size, budget allowing).
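
Roughly, exporting the GFS volume from the DRBD pair to extra nodes with
gnbd looks like this (a sketch from memory; device, export, and host names
are made up):

# on a DRBD/GFS node: start the server and export the block device
gnbd_serv
gnbd_export -e mailstore -d /dev/mail_vg/mail_lv

# on each extra node: import and mount it as GFS
gnbd_import -i storage1
mount -t gfs /dev/gnbd/mailstore /var/spool/mail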

The only answer I can give is that this is a very complex issue that needs
lots of careful consideration. ;)


Regards

John


--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Fwd: Re: Dovecot and SATA Backend

2009-11-19 Thread Eric Jon Rostetter



If one had a network-based NFS service of the user mail data, that would
mean that
1) it would be easy to upgrade servers (data wouldn't move as it would have
to if it was owned either by being directly connected to the mail server or
connected over iSCSI)


True for directly connected storage, but not for iSCSI.  iSCSI storage is
remote and would not have to move if the mail server is updated, only if
the iSCSI server is replaced.


2) If other servers access the mail data, this is a load on the mail server
if again, as above, it owns the disk resource either by direct attach or
iSCSI.


Again, correct for local storage but not for iSCSI.


Better, it would seem to me, if there were a dedicated network-based NFS
server that all clients could get to


It's not the best idea to have multiple clients messing independently with
your mail spool.  We did that until this year, and I'm glad to be done with
it...  Now all mail access comes via dovecot, and my life is much easier...


Comments on that?


I don't think you understand iSCSI very well...  But your arguments about
direct-attached storage versus NFS are solid.

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Dovecot and SATA Backend

2009-11-16 Thread Eric Jon Rostetter

Quoting Nicolas GRENECHE :


It should be a future option, but index management will be more tricky
as you stated.


If you want to do any kind of clustering/failover, even in the future,
then I would go with iSCSI/SAN of some sort instead of NFS...  Just my $0.02.

The other way to think about it is that if it is only a future plan, then
Timo will probably have NFS support working so well by then that it won't
matter any more. :)  But, right now, NFS support is a bit tricky, though
constantly improving, and I'd still recommend you stay away from it if
possible...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Dovecot and SATA Backend

2009-11-16 Thread Eric Jon Rostetter

Quoting Nicolas GRENECHE :


I plan to run a dovecot IMAPS and POPS service on our network.  We
handle about 3,000 mailboxes.  I first thought of buying a top-notch server
(8 cores and 16 GB RAM) with an EqualLogic iSCSI SAN (SAS 15K) for the
storage backend.


Sounds like overkill to me, but if you have the money, go for it. :)

I run mine on an 8-core (dual quad-core) system with 4G RAM, using
SATA (would have preferred SAS, but cost was an issue for us).


created on a separate local filesystem.  My question is: for 3000
users, is it possible to have only a SATA backend attached to my
top-notch server (to handle bigger mail quotas) by storing indexes on
local hard drives (SAS drives)?


Sure.  If you only have one dovecot server without any failover, this
is fine.  If you have multiple (active or passive) servers, then more care
is required, and you need to decide on the level of risk you want to take.
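
With a single server the local-index split is just a matter of the INDEX
parameter in mail_location; for example (paths are made up):

# maildirs on the big SATA backend, indexes on the local SAS drives
mail_location = maildir:/srv/sata-mail/%u/Maildir:INDEX=/var/indexes/%u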


Extra question: which is better, an iSCSI SATA backend or an NFS share?


iSCSI would be better than NFS IMHO.


An NFS share is more convenient for having a failover server.


If you introduce NFS and/or a failover server, your local index question
gets much more complex...

Is that a design requirement, desire, or future option?


Thanks for your help.

Regards,

--
Nicolas Grenèche - Orléans University - France
http://blog.garnett.fr (in French)



--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] HA Dovecot Config?

2009-10-21 Thread Eric Jon Rostetter

Quoting Rick Romero :

Anyone used FileReplicationPro?  I'm more interested in low-bandwidth,
'cheaper' replication.


Might work for an active-failover setup, but since I use active-active I
need something like DRBD instead.

Personally, I wouldn't trust it for cluster-type situations, and I kind
of doubt the authors would either, though I could be wrong.

Certainly a great choice for backups though...  Depending on your needs,
it might work for a failover node...


Rick


--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] IMAP proxying for ALL users to internal mail server

2009-10-02 Thread Eric Jon Rostetter

Quoting Timo Sirainen :

So if you really want Dovecot to be there, you need to use either
SQL (e.g. SQLite) or checkpassword passdb.  Others can't just accept
all users without explicitly listing all of them.  With SQL you could
do something like:


Why not LDAP authentication off the MS AD?
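
A minimal dovecot-ldap.conf sketch for binding against AD might look like
this (hostname, DNs, and the bind account are made up):

hosts = ad.example.com
dn = cn=dovecot,cn=Users,dc=example,dc=com
dnpass = secret
auth_bind = yes
ldap_version = 3
base = cn=Users,dc=example,dc=com
pass_filter = (sAMAccountName=%u)

That accepts any user AD can authenticate, without listing them anywhere.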

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Dovecot + DRBD/GFS mailstore

2009-09-26 Thread Eric Jon Rostetter

Quoting Mario Antonio :

Any good documentation regarding building RHCS with GFS over DRBD...?
(Or just the Red Hat web site...)


I've got my internal docs, which I could be talked into sharing...
Other than that, the Red Hat docs and the DRBD docs are the best source.
Not a lot out there.

Just curious, which Dovecot version are you using?  And which webmail
system?  And Postfix or Exim?  And is the user database on MySQL or LDAP?


I started in testing with 1.1.11 and then moved to 1.1.18, both of which
worked fine with no problems noticed.  Then we got a new high-level boss
who wanted shared folders, so I went to 1.2.4 which is where I am now.
I'm not sure it matters.

My DRBD+GFS layout is:
/cluster_data holds configuration files, etc.
/var/spool/mail   holds mbox inboxes
/var/dovecot  holds dovecot indexes, control files, acl files, etc.
/var/log/dovecot  holds logs for all mail programs (so I can see logs
  for any node from any cluster node).
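
Each of those GFS mounts is defined as a clusterfs resource in my
cluster.conf; a rough sketch, with made-up device names:

<clusterfs name="mailspool" device="/dev/mail_vg/spool_lv"
           mountpoint="/var/spool/mail" fstype="gfs" force_unmount="0"/>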

Webmail is Horde/IMP with postgresql, MTA is MailScanner with sendmail,
user database is LDAP (used to be pam, but now direct to LDAP).


M.A.


--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Dovecot + DRBD/GFS mailstore

2009-09-25 Thread Eric Jon Rostetter

Quoting Mario Antonio :

How does the system behave when you shut down one server and bring
it back later?  (Are you using an IP load balancer/heartbeat, etc.?)


I'm just using RHCS with GFS over DRBD.  DRBD and LVM are started by
the system (not managed by the cluster) and everything else (including
GFS) is managed by RHCS.  So there is no load balancer, and nothing
external to RHCS like heartbeat et al.  (There is a two-node active/passive
firewall cluster in front of these that acts as a traffic director, but it
isn't concerned with load balancing, and it is a separate stand-alone cluster
from the one running DRBD and GFS.)

The DRBD+GFS cluster is a simple 3 node RHCS cluster.  Two nodes (mailer1
and mailer2) run DRBD+GFS (active/active), while the 3rd node (webmail1)
does not (just local ext3 file systems).  I may add more nodes in the
future if needed, but so far this is sufficient for my needs.  The third
node is nice as it prevents cluster (not DRBD) split-brain situations, and
allows me to maintain real quorum when I need to reboot a node, etc.

BTW, they are all running CentOS 5.3 (started on RHEL, moved to CentOS
which I actually find easier to use for DRBD/GFS/etc than RHEL).

If I do an orderly shutdown of a node, it all works fine.  All
services fail over at shutdown to the remaining node without a hitch.

At startup, they almost always migrate back automatically, and if not I
can migrate them back later by hand.  The reason they don't always migrate
back at startup seems to be that if the node is down too long, then drbd
takes a while to sync back up, and this can prevent lvm and gfs from
starting at boot, which means of course the services can't migrate back.
(I don't have drbd and lvm under cluster control, so if they don't start
at boot, I need to manually fix them).

If I 'crash' a node (kill the power, reboot it via a hardware STONITH card,
etc.), sometimes it doesn't go so smoothly and I need to intervene manually.
Often it will all come up fine, but sometimes the drbd won't come up as
primary/primary, and I'll need to fix it by hand.  Or sometimes the drbd
will come up, but the lvm or gfs won't (like above).  So sometimes I have to
fix things manually.
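
That by-hand fix is usually just re-promoting DRBD and walking the stack
back up; roughly (resource and volume group names are made up):

drbdadm primary mail     # bring the DRBD resource back to primary on this node
vgchange -ay mail_vg     # reactivate the LVM volumes on top of it
service gfs start        # remount the GFS filesystems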

But the good news is that in any case (shutdown, crash, etc) the cluster
is always up and running, since only one node is down...  So my services
are always available, though maybe slower when a node isn't participating
properly.  Not the best situation, but certainly I'm able to live with it.

My main goal was to be able to do orderly shutdowns, and that works great.
That way I can update kernels, tweak hardware (e.g., add RAM or upgrade
disks), etc. with no real service interruption.  So I'm not as worried
about the "crash" situation, since it happens so much less often than the
orderly shutdown, which was my main concern.

In any case, after many shutdowns and crashes and bad software upgrades
and such, I've not lost any data or anything like that.  Overall I'm
very happy.  Sure, I could be a bit happier with the recovery after
a crash, but I'm tickled with the way it works the rest of the time,
and it is a large improvement over my old setup.


Regards,

Mario Antonio


--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Dovecot + DRBD/GFS mailstore

2009-09-25 Thread Eric Jon Rostetter


To update an old thread...


I'm looking at the possibility of running a pair of servers with
Dovecot LDA/imap/pop3 using internal drives with DRBD and GFS (or
other clustered FS) for the mail storage and ext3 for the root drive.



I'm in testing right now with this setup.  Two Dell PE 2900 servers
(quad core @ 2.3 GHz, 8 GB RAM, raid 10 for the GFS+DRBD disk, raid 1
for the ext3 disks).  Running DRBD as a master/master setup.

[...]

So far it is early testing.  63 users, but only about 12 of those are
"power users".  The performance has been real good so far, but as I say,
not many users yet.


Well, as of yesterday, I've gone "live" with this setup with about 1K users.
Averaging about 150 to 200 concurrent sessions (higher during the day,
lower at night, etc).

Slightly slower with 1K users than with 63 users (of course) but so
far it is proving very stable and reasonably fast.

Most of the time it is performing faster than my old system with similar load,
though there are rare "stalls" of webmail imap operations (connect, get data,
and disconnect session) where it might take about 5 to 10 seconds to complete.
I'm thinking it is a locking issue, but not sure.  The average time for such
a webmail operation is 0 to 2 seconds (which is reasonable, based on the
message/mailbox size; using mbox here, so we have some 2 GB to 3 GB mbox
files with large messages in them, etc).

Anyway, the point is that doing a cluster like this is very reasonable
from a cluster/stability point of view.  Jury is still out on performance,
but I should know soon since I've now got a "significant" number of users
hitting it.

My gut feeling is that there will be some slow connections from time to
time, probably due to locking, but that overall it will scale better under
load and not die when a spammer attacks us or we otherwise get flooded...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Disconnected: Too many invalid IMAP commands

2009-09-21 Thread Eric Jon Rostetter

Quoting Charles Marcus :


That said, the biggest reason I see for upgrading often, especially for
things like dovecot, is to take advantage of the performance
improvements and new capabilities/options.


I've not seen too many lately, but maybe that is due to differences
between, say, the mbox storage we use and maildir?

In fact, there were a couple times when performance got worse after an
upgrade, though it was usually restored a release or two later...

I've upgraded only for bug-fixes, security-fixes, or because I needed
a new feature in a new release...

On the other hand, dovecot was so fast compared to our old wu-imap server
that I don't really care if it gets faster or not!  The switch to
dovecot was so great anything else since then is just gravy...


Of course, eventually I'm sure dovecot will hit a wall where performance
improvements will be negligible, but for now, the difference between the
1.0.x version and 1.2.x is so great that anyone who refuses to upgrade
is simply missing out.


Again, this might be storage related, or something...  Don't assume it
applies to everyone.

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Disconnected: Too many invalid IMAP commands

2009-09-20 Thread Eric Jon Rostetter

Quoting Noel Butler :


No...  Really, I've got lots of machines on older distros (3+ years)
that are just plain stable and just plain work.



until they are owned.


Not a one has been owned yet.  And why would they be, since there
are regular security updates, and of course out-of-band security
updates for critical issues?


I have never run Debian, so I won't spin any rot that Debian people do.


er you seem to think the world shines out of ubuntu's ass, but dont


Nope.  You seem to have no clue what you are talking about.


realise that ubuntu follows the same policies and principles as debian,


I've never run ubuntu, nor debian.  Stop making stupid assumptions.


even to the point that the vast majority of the ubuntu package
maintainers maintain the debian versions.


Since ubuntu is a debian derivative, this would be normal.  I don't
run either, so not a problem.

Anyway, this is way off topic, so end of thread for me...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Disconnected: Too many invalid IMAP commands

2009-09-18 Thread Eric Jon Rostetter

Quoting Noel Butler :


On Fri, 2009-09-18 at 11:11 -0500, Eric Jon Rostetter wrote:



> I have never understood anyone who would use a distro for critical
> applications that forces them to use 3+ year old software.

Because it is stable and just plain works, of course.



Oh what rubbish


No...  Really, I've got lots of machines on older distros (3+ years)
that are just plain stable and just plain work.

Note there is nothing forcing me to stay with their old dovecot version
either, just because I want to use their old distro.  Your assertion is
just plain wrong, not to mention biased.


ubuntu released a brand new version of their
distribution,


Which distribution?  LTS, or desktop, or server, or another?


can't recall if it was 8.04 or 8.10,


Well, that's helpful...  Since the current LTS is 8.04, it had better not be
8.10 you are talking about...  Because that isn't an LTS version...


with MailScanner, which never worked; not only was it (I think) a
years-old version, it was


I doubt it was years old at the release of the LTS version, though often
it is a version or two behind due to production timelines and overlaps.

In any case, I know of several people who are _VERY_ happy with Ubuntu
8.04 LTS and MailScanner...  So personally, I don't put a lot of stock
in your vague claims without even specific version numbers to back you up.


just made a deb and inserted it as a "stable" package, which never ran, nor
was it ever going to with how it was packaged, so please don't sit there
and spin that rot that debian-associated people also do.


I have never run Debian, so I won't spin any rot that Debian people do.
But I do run long-term support distros of Linux, so I will spin the
appropriate rot as needed.


IMHO, if you want to use a distro's version of package X, then you accept
ALL of the risks that go with it


Sure, you AND the distro provider, assuming the distro provider offers
support.  In the case of an LTS version, that is of course implied (and
should be true, as long as they don't go out of business).


and you should NEVER ask for help on
the upstreams site


I see no reason not to ask, but I also believe:

1) You _should_, though don't have to, ask the distro support first, as that
   is why you run an LTS distro and in many cases pay for the LTS support.
2) The upstream site support has the right to refuse service, and send
   you to the distro support, or recommend you upgrade, etc.


and the projects I'm involved with will either ignore
you, or tell you to go to the distro for help.


I hope I don't use any software from a project that ignores me!
If I was ignored for any project I requested help from, I'd surely
find another project instead...


If package maintainers insist on doing things like this, then they accept
full responsibility for it, and who are they to decide that a version 3
years old is more stable than the one released last week.


So what's your point?

BTW, I run dovecot on two servers; one is 1.1.5 and the other is 1.2.4.
Timo has always supported me fully on each.  But since neither is the
current stable version, I guess I don't have a clue what I'm doing, and
I guess Timo is wrong for supporting me -- at least from your point of view?
That is what your emails say, at least...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Disconnected: Too many invalid IMAP commands

2009-09-18 Thread Eric Jon Rostetter

Quoting Charles Marcus :


On 9/18/2009, Matthias Andree (matthias.and...@gmx.de) wrote:

Ubuntu 8.04 is a long-term support release (desktop three years, server five
years), and it's natural that users will use that.


Yes.


It is also natural that critical servers should always be running the
latest stable release of critical applications, of course after a short
but suitable internal testing cycle...


No.  It may be desirable, but it isn't always "natural".  And sometimes it
is problematic (if the latest stable version removed a feature you need,
or changed in such a way that it isn't desirable, etc).

BUT, they should ALWAYS be running the latest version IF there is a SECURITY
issue with the older versions, UNLESS the security patch(es) have been
back-ported and applied properly...


I have never understood anyone who would use a distro for critical
applications that forces them to use 3+ year old software.


Because it is stable and just plain works, of course.  If it fully meets
your needs, why would you update?  Updating only for the sake of
updating is silly.  Why update if it doesn't buy you anything?  And since
updating can actually CAUSE problems, sometimes you are better off not
doing so...

Besides, it's not like these distros don't update them with security patches
and/or bug fixes (by back-porting).

This doesn't mean the OP should or shouldn't upgrade, it just means that
some people should, and others shouldn't, and each case has to be taken
on its own merits.

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Question about ACL/flags

2009-09-01 Thread Eric Jon Rostetter

Quoting Eric Jon Rostetter :


But I can't figure out how to access it (either manually via telnet as above,
or from a client, etc).


Never mind... Got it working now...

a0 select "shared/myfolder"
* FLAGS (\Answered \Flagged \Deleted \Seen \Draft)
* OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft \*)] Flags permitted.
* 4 EXISTS
* 4 RECENT
* OK [UNSEEN 1] First unseen.
* OK [UIDVALIDITY 1251771676] UIDs valid
* OK [UIDNEXT 5] Predicted next UID
* OK [HIGHESTMODSEQ 1]
a0 OK [READ-WRITE] Select completed.

Will let you know when/if I hit the next stumbling block...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Question about ACL/flags

2009-09-01 Thread Eric Jon Rostetter

Quoting Timo Sirainen :


You can't have per-user seen flags with mbox currently. So create a
public namespace with a maildir location and set up dovecot-acl file in
a way that allows only some specific users access to it. So for example:


To refresh, I want a shared account, but my system was Dovecot 1.1 with
all mbox mailboxes.  Well, this is what I've done so far:

1) Upgraded to dovecot 1.2 (just because I could, I guess)

2) Tried to setup private/public namespaces...  Edited "dovecot -n" output:

# 1.2.4: /etc/dovecot.conf
# OS: Linux 2.6.18-128.4.1.el5.centos.plus x86_64 CentOS release 5.3 (Final)
[...]
mail_location: mbox:~/mail/:INBOX=/var/spool/mail/%u:INDEX=/var/dovecot/indexes/%u
[...]
mail_plugins(default): zlib acl
mail_plugins(imap): zlib acl
mail_plugins(pop3): zlib
[...]
namespace:
  type: private
  separator: /
  inbox: yes
  list: yes
  subscriptions: yes
namespace:
  type: public
  separator: /
  prefix: shared/
  location: maildir:/var/spool/mail/public:INDEX=/var/dovecot/indexes/public/%u
  list: no
lda:
  postmaster_address: postmas...@physics.utexas.edu
  hostname: mail.ph.utexas.edu
  log_path:
  info_log_path:
  syslog_facility: mail
auth default:
  passdb:
driver: ldap
args: /etc/dovecot-ldap.conf
  userdb:
driver: ldap
args: /etc/dovecot-ldap.conf
plugin:
  acl: vfile

3) Created /var/spool/mail/public/.myfolder for the account to deliver to.
   Created empty dovecot-shared file for it.

4) Created a .forward file in the account to run "deliver" which does in
   fact deliver the account's email to the right maildir location.

5) Created /var/spool/mail/public/.myfolder/dovecot-acl which has something
   like:

owner lrwstiekxa
user=me lrwstiekxa
user=you   lrst

So, mail is delivered to this account correctly as a maildir mailbox.
But I don't know how to read it, and/or I have configured something
incorrectly.

It appears to be there, more or less:

a0 namespace
* NAMESPACE (("" "/")) NIL (("shared/" "/"))
a0 OK Namespace completed.
a0 list "" "shared/"
* LIST (\Noselect \HasNoChildren) "/" "shared/"
a0 OK List completed.

But I can't figure out how to access it (either manually via telnet as above,
or from a client, etc).

So, how do I access it, and/or what did I do wrong?

While I've been doing email servers since the 1980's, this is my first
try at using IMAP namespaces and shared folders, and I'm just not getting
it... :(

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


[Dovecot] Question about ACL/flags

2009-08-28 Thread Eric Jon Rostetter

Okay, I'm cruising the wiki, and it is at best confusing to me.  Maybe
someone on the list can help me out quickly?

Here is what I have:

dovecot 1.1.18, mbox format, currently no acl/namespace/etc.  All works great.

What I want to be able to do:

Have an email account (or folder or mailbox) which can be accessed by
several people (say 3) with per-user seen flags.  That is, say 3 people
all access the mail but each user has their own seen flag for the messages.
This would hopefully be done with mbox still, if possible, but I'm willing
to try a mixed mbox/maildir setup if required to accomplish the goal.

Questions:

1) Can I do this with 1.1.18, or do I need to upgrade?
2) Do I need to set :CONTROL in mail_location, and if so, what should
   I set it to, and what does this control exactly (more precisely, does
   this info need to be HA or not, etc)  Is this where the seen flag info
   will be stored (or is that in INDEXES)?
3) Can I do this with mbox only, or do I need maildir, or does it depend
   on dovecot version?
4) Any additional help you can give me...

I basically understand the ideas behind it all, but from the wiki I'm
confused exactly what I need to do, and what version I might need.  (If
the wiki example is for dovecot 1.2+, does that mean it won't work in
1.1, or just that it has to be done differently, etc).

Any help (clearing up my obvious confusion) would be appreciated...
Step by step directions would be even better! :)

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Dovecot + DRBD/GFS mailstore

2009-08-25 Thread Eric Jon Rostetter

Quoting Romer Ventura :


Last time I checked, the free version of DRBD only supports 2 nodes.


Correct.  But RHCS supports more, and works best with an odd number of nodes
(to prevent cluster splits, etc).


The paid version supports 16 nodes.


I think there are some limits on that too...  Like two read-write nodes,
plus more failover nodes?  Not sure though.

This, however, doesn't mean that you cannot use the storage via an NFS or
SMB/CIFS mount point.  Only that


Or, since I'm using GFS, via gnbd or such also.

the DRBD replication will only happen to 2 nodes.  If a third node is
supported on the free version, it would be for quorum only.


I'm using the 3rd node only with RHCS, not with DRBD.  The webmail node
needs no actual access to the file storage; it does everything via IMAP
calls to dovecot.

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Dovecot + DRBD/GFS mailstore

2009-08-24 Thread Eric Jon Rostetter

Quoting Guy :


I'm looking at the possibility of running a pair of servers with
Dovecot LDA/imap/pop3 using internal drives with DRBD and GFS (or
other clustered FS) for the mail storage and ext3 for the root drive.


I'm in testing right now with this setup.  Two Dell PE 2900 servers
(quad core @ 2.3 GHz, 8 GB RAM, raid 10 for the GFS+DRBD disk, raid 1
for the ext3 disks).  Running DRBD as a master/master setup.

I added a third node for webmail (Dell PE 2650), but it doesn't do the
DRBD or GFS.  It is there mostly to make a 3-node cluster versus 2-node
cluster, to avoid split-brain type situations.  And of course to do the
webmail. :)

Using MailScanner as the MTA, dovecot for pop/imap, mailman for mailing
lists, Horde/IMP/etc for webmail.   All held together with RHCS on CentOS
5.3.

All services run on only one node at a time, with failover...  This may
or may not help with GFS lock contention (not for /var/spool/mail, since
it is always accessed from both nodes at once, but yes for dovecot indexes
since they are only ever accessed on one node at a time, etc).  This is
probably where performance will really be decided (GFS lock contention).

Cluster Status for mailer @ Mon Aug 24 10:27:12 2009
Member Status: Quorate

 Member Name                  ID   Status
 ------ ----                  ---- ------
 mailer1-hb.localdomain       1    Online, rgmanager
 mailer2-hb.localdomain       2    Online, Local, rgmanager
 webmail1-hb.localdomain      3    Online, rgmanager

 Service Name          Owner (Last)              State
 ------- ----          ----- ------              -----
 service:Apache        mailer1-hb.localdomain    started
 service:Dovecot       mailer1-hb.localdomain    started
 service:MailMan       mailer2-hb.localdomain    started
 service:MailScanner   mailer2-hb.localdomain    started
 service:VIP-MAIL      mailer1-hb.localdomain    started
 service:VIP-SMTP      mailer2-hb.localdomain    started
 service:WebMail       webmail1-hb.localdomain   started


Has anyone had experience with a setup like the one I'm suggesting?
What was performance like with Dovecot using GFS?


So far it is early testing.  63 users, but only about 12 of those are
"power users".  The performance has been real good so far, but as I say,
not many users yet.

My GFS is sharing the mail log files (via syslog-ng, what would otherwise
be /var/log/maillog), the dovecot index files, the /var/spool/mail/ mbox
spool (yes, I use mbox), and "shared" configuration files for the two nodes
(mailman data, MailScanner/Sendmail configs, dovecot config, clamav/spamd
config, procmail config, apache config, ssl certificates, etc).

If interested, I can let you know about performance once I know more...


Thanks
Guy


--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] SIS Implementation

2009-08-14 Thread Eric Jon Rostetter

Quoting Timo Sirainen :


1) When writing the data, extract the attachments and write them to
different files. Add pointers to those files to the EXT_REF metadata.
Dovecot's message parsers should make this not-too-difficult to
implement.


I'd rather it did MIME parts than just attachments.  In my use case,
we don't get attachments distributed as widely as we get whole messages
distributed.  If the local mailbox had the headers, but the SIS area
had the MIME parts, this would save tons of space.  Since attachments
are MIME parts, this works for both cases...


3) Once that works, figure out a way to implement SIS for those
externally written files. Creating hard links to files in a global
attachment storage would probably be a workable solution. Then scan
through the storage once in a while and delete files with link count=1.


Hardlinks are one way, for filesystems that support them.  But they do have
limits (can't span volumes, etc).

But any kind of setup that can maintain the file and a usage count should
work (and the two don't have to be kept together, though they can).  If you
add a management interface, all the better.
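
The "scan once in a while" step Timo describes is essentially a link-count
sweep; with hard links it could be as simple as (the path is made up):

# remove attachment files nothing links to any more
find /var/mail/attachments -type f -links 1 -delete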

BTW, PMDF implemented all this eons ago (in its popstore, I think it was),
added around PMDF 5 or 6.  It was actually pretty nice, in particular
for the times (this was the 1990's).

Anyway, my $0.02 worth, not that I'm waiting on this feature, but it sure
would save me tons of disk space if I had it...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Scalability plans: Abstract out filesystem and make it someone else's problem

2009-08-11 Thread Eric Jon Rostetter

Quoting Timo Sirainen :


It depends on the locking scheme used by the filesystem. Working queue
directories (the ones where stuff comes and goes rapidly) is best suited
for a local FS anyway.


And when a server and its disk dies, the emails get lost :(


It would appear he is not talking about a /var/spool/mail type queue/spool,
but the queues where the MTA/AV/anti-spam/etc. process the mail.

On a machine crash, this will generally result in the mail being lost or
resent (resent if the MTA hasn't yet confirmed acceptance of the message).
With battery backup the risk is less, but since most filesystems (local or
remote) cache writes in memory, the chance you will lose the mail is high
in any case (if it is still cached in memory).

I agree that for smaller mail systems, the processing queues
are best on local fs or in memory (memory for AV/Anti-Spam, local disk
for MTA processing).  The delivery queues (where the message awaits delivery
or is delivered) are best on some other file system (mirrored, distributed,
etc).

For a massively scaled system, there may be sufficient performance to
put the queues elsewhere.  But on a small system, with 90% of the mail
being spam/virus/malware, performance will usually dictate local/memory
file systems for such queues...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Scalability plans: Abstract out filesystem and make it someone else's problem

2009-08-11 Thread Eric Jon Rostetter

Quoting Seth Mattinen :


Queue directories and clusters don't
mix well, but a read-heavy maildir/dbox environment shouldn't suffer the
same problem.


Why don't queue directories and clusters mix well?  Is this a performance
issue only, or something worse?


~Seth


--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Lots of assertion PANICs in error log

2008-09-19 Thread Eric Jon Rostetter

Quoting Eric Jon Rostetter <[EMAIL PROTECTED]>:


Quoting Timo Sirainen <[EMAIL PROTECTED]>:


dovecot: Aug 29 09:34:32 Panic: IMAP(user): file index-sync.c: line 39
(index_mailbox_set_recent_uid): assertion failed:
(seq_range_exists(&ibox->recent_flags, uid))


This patch should help:
http://hg.dovecot.org/dovecot-1.1/rev/8cc0eaec7d0f


I applied that patch last night, and I'm still seeing the same errors...
So unless I didn't get the patch right, there is still something amiss...


By way of an update: I could never fix this with 1.1.2, but I recently
upgraded to 1.1.3 and I'm still seeing the problem, though A LOT less...

Right now, since upgrading to 1.1.3, I have 1 user, and only 1 user,
triggering this same assert.

I have two other users who each triggered another assert a single time:

dovecot: Panic: IMAP(user): file message-parser.c: line 129 (message_parser_read_more): assertion failed: (ctx->input->eof || ctx->input->stream_errno != 0)


I also get the occasional error of "Next message unexpectedly lost from ..."
for some users.

The one user still getting the original assert issue gets it a lot.  So if you
want files for this user, or more complete log entries, let me know what to
do, etc...

So far I've got no user complaints about these issues, just log entries...
Funny that he wouldn't report a problem, since it is a panic...  One thing
I note about this user, which is unique to him, is that he runs "xbiff"
to monitor his mail.  One might speculate the bug is being tickled by xbiff,
since he is the only one running xbiff, and he is the only one logging this
assert message.  Note that with 1.1.2 I got this error for lots and lots of
users; but so far with 1.1.3 it is only this single user.


--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.


Re: [Dovecot] Lots of assertion PANICs in error log

2008-08-29 Thread Eric Jon Rostetter

Quoting Timo Sirainen <[EMAIL PROTECTED]>:


On Fri, 2008-08-29 at 09:49 +0100, Yates Ian Mr (ITCS) wrote:

I have just upgraded to dovecot 1.1.2 and am seeing lots of the
following panic messages filling up the error logs:

dovecot: Aug 29 09:34:32 Panic: IMAP(user): file index-sync.c: line 39
(index_mailbox_set_recent_uid): assertion failed:
(seq_range_exists(&ibox->recent_flags, uid))


This patch should help:
http://hg.dovecot.org/dovecot-1.1/rev/8cc0eaec7d0f


I applied that patch last night, and I'm still seeing the same errors...
So unless I didn't get the patch right, there is still something amiss...

--
Eric Rostetter
The Department of Physics
The University of Texas at Austin

This message is provided "AS IS" without warranty of any kind,
either expressed or implied.  Use this message at your own risk.