Re: macOS Mojave: setgroups(501) failed: Too many extra groups

2018-10-10 Thread Aki Tuomi



On 11.10.2018 09:28, Heiko W. Rupp wrote:
> Hello,
>
> I have recently upgraded to macOS 10.14 (Mojave) and am running into
> an issue where one user can no longer log into dovecot via imap. Log shows
>
> Oct 11 08:10:27 imap(hwr)<12659>:
> Fatal: setgroups(501) failed: Too many extra groups
>
> and indeed, the user is in 17 groups, which is more than NGROUPS_MAX
> (16).
> Another user with << 16 groups can log in fine. Unfortunately it is
> not (easily) doable to reduce
> the number of groups, as macOS seems to set them internally.
>
> Is there a config option that I am missing to work around this?
>
> Looking at the source, I see this is handled in
> src/lib/restrict-access.c::fix_groups_list(), where above the call to
> setgroups() a gid_list2 is constructed. I wonder if one could have a
> config option to prevent adding all those extra groups, which then
> makes the call to setgroups() fail.
>
> Any help appreciated
>    Heiko
>

Not trivially. We would need to know which groups to drop and which not.

Aki
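Heiko's idea, and the problem Aki points out with it, can be sketched in a few lines of C. This is NOT Dovecot's actual fix_groups_list(); the cap_group_count() helper and the choice to simply keep the first NGROUPS_MAX groups are assumptions for illustration, and silently dropping groups changes which files the process may access.

```c
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <grp.h>

/* Hypothetical helper: cap the supplementary group count at the
 * platform limit. Which groups get dropped is exactly the open
 * question -- here we just keep the first `max` of them. */
static int cap_group_count(int count, long max)
{
    return (max > 0 && count > max) ? (int)max : count;
}

/* Sketch of applying the cap before setgroups(); requires root. */
static int set_groups_capped(gid_t *gids, int count)
{
    int capped = cap_group_count(count, sysconf(_SC_NGROUPS_MAX));

    if (setgroups(capped, gids) < 0) {
        fprintf(stderr, "setgroups(%d) failed: %s\n",
                capped, strerror(errno));
        return -1;
    }
    return 0;
}
```

On macOS, `id -G hwr | wc -w` shows how many groups a user is in, which is how the 17-versus-16 mismatch above can be confirmed.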


macOS Mojave: setgroups(501) failed: Too many extra groups

2018-10-10 Thread Heiko W. Rupp

Hello,

I have recently upgraded to macOS 10.14 (Mojave) and am running into an
issue where one user can no longer log into dovecot via imap. Log shows


Oct 11 08:10:27 imap(hwr)<12659>: 
Fatal: setgroups(501) failed: Too many extra groups


and indeed, the user is in 17 groups, which is more than NGROUPS_MAX 
(16).
Another user with << 16 groups can log in fine. Unfortunately it is not
(easily) possible to reduce the number of groups, as macOS seems to set
them internally.

Is there a config option that I am missing to work around this?

Looking at the source, I see this is handled in
src/lib/restrict-access.c::fix_groups_list(), where above the call to
setgroups() a gid_list2 is constructed. I wonder if one could have a
config option to prevent adding all those extra groups, which then
makes the call to setgroups() fail.

Any help appreciated
   Heiko

--
h...@pilhuhn.de  m:0179/207 4919  b:http://pilhuhn.blogspot.com


Re: Problem getting quota-warning script to function.

2018-10-10 Thread Ted
Hello,

Do you mean you're using the configurations I sent as examples and are
seeing the quota-warning script fire correctly?  I've continued to try
changes to get it working, but no luck yet.  Might it be somehow related
to the directories mail is stored in being shared NFS mounts?

Thank you
Ted
easyDNS Technologies

On 2018-10-09 09:45 AM, Aki Tuomi wrote:
> Hi!
>
> We have not been able to reproduce this issue yet.
>
> Aki
>
>
>> On 09 October 2018 at 16:28 Ted  wrote:
>>
>>
>> Hello,
>>
>> I don't suppose there's been any thoughts or progress on this one?  Is
>> there further information I can provide or anything I could try or check
>> on the mailserver?
>>
>> Thank you
>> Ted
>> easyDNS Technologies
>>
>> On 2018-09-24 12:44 PM, Aki Tuomi wrote:
>>> Hi!
>>>
>>> It can take some time for us to look into these...
>>>
>>> Aki
>>>
 On 24 September 2018 at 19:38 Ted  wrote:


 Hello,

 I haven't received a reply since I sent my last logs in, so I thought
 I'd ask again and update the thread.  I had trouble getting the server
 to work properly on dovecot 2.3.2 so I rebuilt the server back on
 2.2.27.  I've got the quota enforcement itself working, but the warnings
 still fail to fire.   I've attached logs with mail_debug=yes for the
 period from the send which put the quota at 90% all the way until it is
 blocking email for being overquota.

 Thank you
 Ted
 easyDNS Technologies

 On 2018-09-19 12:55 PM, Aki Tuomi wrote:
>> On 19 September 2018 at 19:49 Ted  wrote:
>>
>>
>> Hello,
>>
>> Most of the work was done with dovecot 2.2.27 but I just upgraded to
>> 2.3.2 and didn't see any change.  Some debug logs are below, is there
>> something specific I could search them for?
>>
> Can you maybe try delivery to an account which should trigger quota 
> warning or overquota action with mail_debug=yes?
>
> Aki
>>



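For reference, a minimal quota-warning wiring for Dovecot 2.2 usually looks like the fragment below. The quota rule, percentages, and the script path are placeholders for illustration, not taken from Ted's setup:

```
plugin {
  quota = maildir:User quota
  quota_rule = *:storage=1G
  # fires when the mailbox crosses 90% of the rule above;
  # %% is required, a single % would be config-expanded
  quota_warning = storage=90%% quota-warning 90 %u
}

service quota-warning {
  # hypothetical script path
  executable = script /usr/local/bin/quota-warning.sh
  user = dovecot
  unix_listener quota-warning {
    mode = 0666
  }
}
```

If the warning still never fires, mail_debug=yes on the delivering backend should at least show whether the quota-warning execution is attempted at delivery time.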

Re: index corruption weirdness

2018-10-10 Thread Kelsey Cummings
On 10/10/18 7:26 AM, Aki Tuomi wrote:
>> Are you saying that there is a bug in this version that affects RHEL 7.5
>> but not RHEL 6 or just use the newest version and maybe the problem goes
>> away?
> 
> We have very limited interest in figuring out problems with (very) old
> dovecot versions. At minimum you need to show this problem with 2.2.36
> or 2.3.2.1.
> 
> A thing you should make sure is that you are not accessing the user with
> two different servers concurrently.

The directors appear to be working fine so, no, users aren't hitting
multiple back end servers.

To be clear, we don't suspect Dovecot as much - our deployment had been
stable for years - but rather behavior changes between the RHEL6 and
RHEL7 environment, particularly with regard to NFSv3.  But we have been
at a loss to find a smoking gun.

For various reasons achieving stability (again) on the current version
is very important while we continue to plan Dovecot and storage backend
upgrades.  Corruption leading to crashes is very infrequent
percentage-wise, but it's enough to negatively impact performance and
users -- out of 5+ million sessions/day we're seeing ~5 instances,
whereas on RHEL6 it was one every few months.

Has anyone else experienced any NFS/locking issues transitioning from
RHEL6 to 7 with Netapp storage?  Grasping at straws - perhaps compiler
and/or system library issues interacting with Dovecot?

-K
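Not an answer to the RHEL6-versus-7 difference, but for comparison, these are the NFS-related settings Dovecot's documentation recommends for setups like this. The values below are the commonly documented ones, not Kelsey's actual config:

```
# mmap'ed index files and NFS caching interact badly
mmap_disable = yes
# O_EXCL-based dotlocks work on NFSv3 against most filers
dotlock_use_excl = yes
mail_fsync = always
# flush NFS caches as needed; only safe when each user is served
# by a single backend at a time -- hence the director layer
mail_nfs_storage = yes
mail_nfs_index = yes
```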


Re: index corruption weirdness

2018-10-10 Thread Reio Remma

On 10.10.2018 19:12, William Taylor wrote:
>>> OS Info:
>>> CentOS Linux release 7.5.1804 (Core)
>>> 3.10.0-862.14.4.el7.x86_64
>>>
>>> NFS:
>>> # mount -t nfs |grep mail/15
>>> 172.16.255.14:/vol/vol1/mail/15 on /var/spool/mail/15 type nfs
>>> (rw,nosuid,nodev,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,nordirplus,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.255.14,mountvers=3,mountport=4046,mountproto=udp,local_lock=none,addr=172.16.255.14)
>>>
>>> Dovecot Info:
>>> dovecot -n
>>> # 2.1.17: /etc/dovecot/dovecot.conf
>>
>> Hi!
>>
>> Thank you for your report, however, 2.1.17 is VERY old version of
>> dovecot and this problem is very likely fixed in a more recent version.
>>
>> Aki
>
> I realize it is an older release.
>
> Are you saying that there is a bug in this version that affects RHEL 7.5
> but not RHEL 6 or just use the newest version and maybe the problem goes
> away?

I can see from my CentOS 7 installation that it comes with the
2.2.10-8.el7 package. Did you install 2.1.17 specifically somehow?

I'm using Dovecot 2.3.3 as packaged by the developers on CentOS 7 myself.

Good luck,
Reio


Re: index corruption weirdness

2018-10-10 Thread Aki Tuomi

On 10 October 2018 at 19:12 William Taylor <william.tay...@sonic.com> wrote:
> On Wed, Oct 10, 2018 at 09:37:46AM +0300, Aki Tuomi wrote:
>>
>> On 09.10.2018 22:16, William Taylor wrote:
>>> We have started seeing index corruption ever since we upgraded (we
>>> believe) our imap servers from SL6 to Centos 7. Mail/Indexes are stored
>>> on Netapps mounted via NFS. We have 2 lvs servers running surealived in
>>> dr/wlc, 2 directors and 6 backend imap/pop servers.
>>>
>>> Most of the core dumps I've looked at for different users are like
>>> "Backtrace 2" with some variations on folder path.
>>>
>>> This latest crash (Backtrace 1) is different from others I've seen.
>>> It is also leaving 0byte files in the users .Drafts/tmp folder.
>>>
>>> # ls -s /var/spool/mail/15/00/user1/.Drafts/tmp | awk '{print $1}' | sort | uniq -c
>>>    9692 0
>>>       1 218600
>>>
>>> I believe the number of cores here is different from the number of tmp
>>> files because this is when we moved the user to our debug server so we
>>> could get the core dumps.
>>> # ls -la /home/u/user1/core.* |wc -l
>>>   8437
>>>
>>> Any help/insight would be greatly appreciated.
>>>
>>> Thanks,
>>>   William
>>>
>>> OS Info:
>>> CentOS Linux release 7.5.1804 (Core)
>>> 3.10.0-862.14.4.el7.x86_64
>>>
>>> NFS:
>>> # mount -t nfs |grep mail/15
>>> 172.16.255.14:/vol/vol1/mail/15 on /var/spool/mail/15 type nfs
>>> (rw,nosuid,nodev,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,nordirplus,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.255.14,mountvers=3,mountport=4046,mountproto=udp,local_lock=none,addr=172.16.255.14)
>>>
>>> Dovecot Info:
>>> dovecot -n
>>> # 2.1.17: /etc/dovecot/dovecot.conf
>>
>> Hi!
>>
>> Thank you for your report, however, 2.1.17 is VERY old version of
>> dovecot and this problem is very likely fixed in a more recent version.
>>
>> Aki
>
> I realize it is an older release.
>
> Are you saying that there is a bug in this version that affects RHEL 7.5
> but not RHEL 6 or just use the newest version and maybe the problem goes
> away?

We have very limited interest in figuring out problems with (very) old
dovecot versions. At minimum you need to show this problem with 2.2.36
or 2.3.2.1.

A thing you should make sure is that you are not accessing the user with
two different servers concurrently.

---
Aki Tuomi



Re: index corruption weirdness

2018-10-10 Thread William Taylor
On Wed, Oct 10, 2018 at 09:37:46AM +0300, Aki Tuomi wrote:
> 
> 
> On 09.10.2018 22:16, William Taylor wrote:
> > We have started seeing index corruption ever since we upgraded (we 
> > believe) our imap servers from SL6 to Centos 7. Mail/Indexes are stored 
> > on Netapps mounted via NFS. We have 2 lvs servers running surealived in 
> > dr/wlc, 2 directors and 6 backend imap/pop servers.
> >
> > Most of the core dumps I've looked at for different users are like 
> > "Backtrace 2" with some variations on folder path.
> >
> > This latest crash (Backtrace 1) is different from others I've seen.
> > It is also leaving 0byte files in the users .Drafts/tmp folder.
> >
> > # ls -s /var/spool/mail/15/00/user1/.Drafts/tmp | awk '{print $1}' | sort | uniq -c
> >    9692 0
> >       1 218600
> >
> > I believe the number of cores here is different from the number of tmp 
> > files because this is when we moved the user to our debug server so we
> > could get the core dumps.
> > # ls -la /home/u/user1/core.* |wc -l   
> >   8437
> >
> > Any help/insight would be greatly appreciated.
> >
> > Thanks,
> >   William
> >
> >
> > OS Info:
> > CentOS Linux release 7.5.1804 (Core)
> > 3.10.0-862.14.4.el7.x86_64
> >
> > NFS:
> > # mount -t nfs |grep mail/15
> > 172.16.255.14:/vol/vol1/mail/15 on /var/spool/mail/15 type nfs 
> > (rw,nosuid,nodev,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,nordirplus,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.255.14,mountvers=3,mountport=4046,mountproto=udp,local_lock=none,addr=172.16.255.14)
> >
> > Dovecot Info:
> > dovecot -n
> > # 2.1.17: /etc/dovecot/dovecot.conf
> >
> 
> Hi!
> 
> Thank you for your report, however, 2.1.17 is VERY old version of
> dovecot and this problem is very likely fixed in a more recent version.
> 
> Aki
> 

I realize it is an older release.

Are you saying that there is a bug in this version that affects RHEL 7.5 
but not RHEL 6 or just use the newest version and maybe the problem goes 
away?
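As a side note, zero-byte leftovers like the ones in .Drafts/tmp above can be counted with a plain find. This is a generic sketch; the commented path is the example from the report and would need adjusting:

```shell
#!/bin/sh
# Count zero-byte files directly inside a Maildir tmp/ directory.
count_zero_byte() {
    find "$1" -maxdepth 1 -type f -size 0 | wc -l
}

# example (path from the report above, adjust as needed):
# count_zero_byte /var/spool/mail/15/00/user1/.Drafts/tmp
```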


Re: index corruption weirdness

2018-10-10 Thread Martin Johannes Dauser
On Wed, 2018-10-10 at 09:37 +0300, Aki Tuomi wrote:
> 
> On 09.10.2018 22:16, William Taylor wrote:
> > ...
> > 
> > Dovecot Info:
> > dovecot -n
> > # 2.1.17: /etc/dovecot/dovecot.conf
> > 
> 
> Hi!
> 
> Thank you for your report, however, 2.1.17 is VERY old version of
> dovecot and this problem is very likely fixed in a more recent
> version.
> 
> Aki

Like RHEL 7, CentOS 7.5 should run 2.2.10 -- which is well aged, too.
http://mirror.centos.org/centos/7/os/x86_64/Packages/

Martin