dovecot.list.index.log

2024-07-12 Thread Joan Moreau via dovecot

Hi

Is it safe to delete the file dovecot.list.index.log? I am still
struggling to use any protocol for network storage of the emails
(dbox), and now Dovecot complains that my dovecot.list.index.log is
corrupted.


Thank you
___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


dovecot imap_zlib

2024-07-07 Thread Joan Moreau via dovecot

Hi

I tested the git version of dovecot. It seems the IMAP COMPRESS plugin
(imap_zlib) has disappeared.


How do I get it back?

Thank you


Re: Pread error over smb3

2024-07-06 Thread Joan Moreau via dovecot
I moved from smb3 to sshfs to test: same problem, and only Dovecot is
complaining.


On Sat, 2024-07-06 at 21:04 +0800, Joan Moreau wrote:
> archlinux x64
> 
> no selinux activated
> 
> On Sat, 2024-07-06 at 13:53 +0200, John Fawcett via dovecot wrote:
> > Hi Joan
> > 
> > not sure what OS you're using, so just a guess: but maybe this is 
> > selinux related or something similar. When it's the OS providing
> > the 
> > error code to dovecot, it's very unlikely to be anything in dovecot
> > itself.


Re: Pread error over smb3

2024-07-06 Thread Joan Moreau via dovecot
archlinux x64

no selinux activated

On Sat, 2024-07-06 at 13:53 +0200, John Fawcett via dovecot wrote:
> Hi Joan
> 
> not sure what OS you're using, so just a guess: but maybe this is 
> selinux related or something similar. When it's the OS providing the 
> error code to dovecot, it's very unlikely to be anything in dovecot
> itself.


Re: Pread error over smb3

2024-07-06 Thread Joan Moreau via dovecot
No error on the server side.

The error occurs only with Dovecot; no other software complains
about the smb3 protocol.

I get also
Jul 6 10:49:45 gjserver dovecot[4220]:
lmtp(ad...@grosjo.net)<4355>: Error:
rename(/net/mails/grosjo.net/admin/storage/dovecot.map.index.tmp,
/net/mails/grosjo.net/admin/storage/dovecot.map.index) failed:
Permission denied



On Tue, 2024-07-02 at 08:00 +0300, Aki Tuomi via dovecot wrote:
> Ok. But the error is coming from kernel, so not much Dovecot can do
> about it. Maybe try turning on some debugging in your server to see
> what is going on?
> 
> Aki


Re: Pread error over smb3

2024-07-01 Thread Joan Moreau via dovecot
Permissions on the server are fine.

The problem occurs ONLY with Dovecot.


On Tue, 2024-07-02 at 07:49 +0300, Aki Tuomi wrote:
> This seems to be some kind of smb3 ACL problem, check permissions on
> the server?
> 
> Aki



Pread error over smb3

2024-07-01 Thread Joan Moreau via dovecot
Hi

I am trying to move my email storage onto an smb3-mounted volume.

I am getting the following error:
Error: pread(/net/.../storage/dovecot.map.index.log) failed: Permission
denied (euid=1004(mailusers) egid=12(mail) UNIX perms appear ok
(ACL/MAC wrong?))

How can I resolve this?

Thank you
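Since the failing call is a plain pread() on the index file, one way to
isolate the problem is to reproduce that exact system call outside
Dovecot, running as the same user (euid 1004 per the error). If the
standalone read also fails, the denial comes from the kernel/mount
(SMB credentials, mount-level ACLs), not from Dovecot. A minimal
sketch; the helper name is illustrative and the target path is the one
shown (truncated) in the error message:

```python
import os

def try_pread(path, size=64, offset=0):
    """Reproduce the pread() call from the Dovecot error, outside Dovecot.

    Returns (ok, detail). Run it as the mail user (e.g. via
    `sudo -u mailusers python3 ...`) against the real index file.
    """
    try:
        fd = os.open(path, os.O_RDONLY)
    except OSError as e:
        return False, "open failed: %s" % e.strerror
    try:
        data = os.pread(fd, size, offset)
        return True, "read %d bytes" % len(data)
    except OSError as e:
        # EACCES/EPERM here, with UNIX perms looking fine, points at
        # mount-level ACLs or SMB credentials rather than Dovecot.
        return False, "pread failed: %s" % e.strerror
    finally:
        os.close(fd)
```

If open() already fails, the problem is path/mount permissions; if only
the pread() fails, it matches the ACL/MAC hint in Dovecot's message.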



Re: Sieve not getting recompiled

2024-04-20 Thread Joan Moreau via dovecot

I changed it to the following to stick to the docs:

sieve = file:/mails/%d/%n/sieve/
sieve_after = file:/mails/sieve/after.sieve
sieve_default = file:/mails/sieve/before.sieve
sieve_before = file:/mails/sieve/before.sieve

Still, no scripts are compiled or executed (and it was working fine
before!)





Sieve not getting recompiled

2024-04-20 Thread Joan Moreau via dovecot

Hi

I have

sieve = /mails/%d/%n/sieve/roundcube.sieve
sieve_after = /mails/sieve/after.sieve
sieve_before = /mails/sieve/before.sieve
sieve_dir = /mails/%d/%n/sieve/
sieve_global_dir = /mails/sieve/

But sieve scripts are not compiled and not executed

It was working until I removed the setting "sieve_global_path"

Is there something I don't understand?

Thank you


Re: exfat not supported ?

2024-04-20 Thread Joan Moreau via dovecot

That resolves the first bug,

but now I get:

Error: link(/xxx/dovecot.list.index.log, /xxx/dovecot.list.index.log.2) 
failed: Operation not permitted


On 2024-04-21 02:02, Aki Tuomi via dovecot wrote:


Try setting lock_method = dotlock

Aki



Re: exfat not supported ?

2024-04-20 Thread Joan Moreau via dovecot

I tried and get the following:

Error: Couldn't create mailbox list lock /xxx/mailboxes.lock: 
file_create_locked(/xxx/mailboxes.lock) failed: 
link(/xxx/mailboxes.locka94f3757318b0b90, /xxx/mailboxes.lock) failed: 
Operation not permitted
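The link() in this error is Dovecot's lock-file creation
(file_create_locked): it creates a uniquely named temp file and then
hard-links it to the final lock name, which is atomic. exFAT does not
support hard links, so the link() step fails with "Operation not
permitted", which is presumably why the error persists even with
lock_method = dotlock. A rough sketch of the pattern (not Dovecot's
actual code; names are illustrative):

```python
import os

def hardlink_lock(lock_path):
    """Dotlock-style locking: hard-link a unique temp file to the lock
    name. link() is atomic, so exactly one process can win; on a
    filesystem without hard links the call raises OSError (EPERM).
    Returns True if the lock was acquired."""
    tmp = "%s.%d" % (lock_path, os.getpid())
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
    os.close(fd)
    try:
        os.link(tmp, lock_path)
        return True
    except OSError:
        return False  # lock held by someone else, or no hard links
    finally:
        os.unlink(tmp)  # the extra name is not needed either way
```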


On 2024-04-20 17:39, Aki Tuomi via dovecot wrote:



I can't see any reason why not, as long as it behaves like a POSIX
filesystem.


Aki


thread->detach() creates confusion of dovecot

2024-04-20 Thread Joan Moreau via dovecot

Hi

When I try to "detach"
(https://en.cppreference.com/w/cpp/thread/thread/detach) a thread
running inside a plugin, the Dovecot core seems to interfere with it:
it tries to close the thread for some unknown reason and usually ends
up crashing.

What is the cause of this?

Thank you
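One plausible explanation (an assumption about the setup, not confirmed
from Dovecot internals): the imap/lmtp process that loads the plugin is
short-lived, and a detached thread does not keep a process alive, so
when the process exits around plugin deinit the thread is torn down
mid-work, which can look like the core "closing" it and can crash.
Python daemon threads show the same pitfall; joining before exit avoids
the race:

```python
import threading
import time

def worker(results):
    time.sleep(0.05)  # simulate background indexing work
    results.append("done")

results = []
t = threading.Thread(target=worker, args=(results,), daemon=True)
t.start()
# A daemon thread (like a detached std::thread) is killed abruptly if
# the process exits now; any state it was writing is left half-done.
t.join()  # waiting explicitly lets the work finish before exit
```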


exfat not supported ?

2024-04-20 Thread Joan Moreau via dovecot

Hi

Would placing my storage on an exFAT partition work? If not, why?

Thank you


Re: Separate index get dovecot lost

2024-03-30 Thread Joan Moreau
> To do that kind of a change, mailbox migration is required. 
 
Meaning what?



Separate index get dovecot lost

2024-03-29 Thread Joan Moreau
Hi
I have a large number of emails (terabyte scale) and want to put the
indexes on a separate, fast drive.

Initially, I have
mail_location = mdbox:/files/mail/%d/%n

If I put
mail_location = mdbox:/files/mail/%d/%n:INDEX=/data/mailindexes/%d/%n
then Dovecot gets totally lost and tries to read the mailbox content
and folder tree from the INDEX location instead of the original
location.

What is wrong?
Thank you




Re: [EXT] Re: How to get a memory pointer in the core process

2024-03-14 Thread Joan Moreau via dovecot
Thanks Eduardo

I am trying to avoid closing and reopening a file pointer to the exact
same file between each call to the plugin.



On 14 March 2024 20:08:37 Eduardo M KALINOWSKI via dovecot
 wrote:


 While I cannot help you with plugin writing or dovecot internals,
 this 
 does seem like an example of the XY problem[0]. Perhaps if you
 provide a 
 high level description of what you're attempting to do someone might 
 come up with a way to achieve that.

 [0] https://en.wikipedia.org/wiki/XY_problem

 -- 
 Eduardo M KALINOWSKI
 edua...@kalinowski.com.br



Re: [EXT] Re: How to get a memory pointer in the core process

2024-03-13 Thread Joan Moreau via dovecot
No, you don't understand.
There is a core process (/usr/bin/dovecot) running all the time. I want
to allocate a memory block that the core process keeps, so it is
retrievable by the plugin when loaded again.
At exit, /usr/bin/dovecot just does a "delete()" of the said allocation.


On 2024-03-14 13:25, Aki Tuomi via dovecot wrote:
 Hi!

 Sorry, but that's just not possible: there is no "core" where to
 create such an object, and no "dovecot" where to store things.

 When a user logs in, dovecot executes /usr/libexec/dovecot/imap and
 transfers the connection fd there. Then plugins and stuff are loaded,
 the user does what he does, and then plugins and stuff are unloaded
 and the process exits and no longer exists in memory.

 You are clearly asking about memory persistence between sessions, and
 this can be done with

 a) services (internal or external), such as redis, sql, or something
 else
 b) storing things to disk

 Aki
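Option (b) above, persisting state to disk so a later process can
reload it, can be sketched as follows; the file path and function names
are hypothetical, and inside Dovecot the dict interface Aki mentions
would be the more idiomatic equivalent:

```python
import json
import os

def load_state(path):
    """Reload state saved by a previous (now exited) process."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)

def save_state(path, state):
    """Write via a temp file + atomic rename, so a crash mid-write
    never leaves a truncated state file behind."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)
```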


Re: [EXT] Re: How to get a memory pointer in the core process

2024-03-13 Thread Joan Moreau via dovecot
No, I am not referring to that.
I want to create an object in memory at the first call; that object
would be retrievable on second and further calls of the plugin, as
long as dovecot is running.




On 2024-03-13 16:29, Aki Tuomi via dovecot wrote:
 Not really no. You should use e.g. the dict interface for storing this
 kind of stateful data. When deinit is called the calling core process
 will likely die too.

 Aki



Re: How to get a memory pointer in the core process

2024-03-13 Thread Joan Moreau via dovecot
Keep a pointer in memory, retrievable each time the plugin is called,
so the plugin keeps the memory and does not have to restart everything
on each call.



On 12 March 2024 08:53:38 Aki Tuomi via dovecot  wrote:


 Hi Joan!

 May I ask what you are attempting to achieve in more detail?

 Aki


How to get a memory pointer in the core process

2024-03-11 Thread Joan Moreau via dovecot
Hi
Is it possible, from a plugin's perspective, to create and recover a
pointer in the core process (i.e. memory not lost between two calls to
the plugin, even after the "deinit" of the plugin)?

Thanks


Re: Problem with the dovecot-fts-xapian package.

2021-09-12 Thread Joan Moreau



@Bob: the package has been recompiled against the new version of
Dovecot. Hope it works now.


@Aki: It would be nice to have all plugins included in the source code
for major releases (with a simple rule that non-maintained packages
are removed), including Pigeonhole, FTS plugins, and many other
existing plugins from all over the world.


On 2021-09-12 13:54, Aki Tuomi wrote:


On 12/09/2021 15:12 Bob Marcan  wrote:

On Sun, 12 Sep 2021 11:36:46 +0100
Joan Moreau  wrote:

This is where I am for now :

https://koji.fedoraproject.org/koji/packageinfo?packageID=34417

Probably, I should wait for the Fedora batch programs to push that
into the main repo


On 2021-09-12 11:18, Joan Moreau wrote:

Hi Bob,

I am trying to achieve that.

But do you know the process of pushing an update as a maintainer in
the Fedora repositories?


Thank you

On 2021-09-12 11:02, Bob Marcan wrote:
On Sun, 12 Sep 2021 09:45:35 +0100
Joan Moreau  wrote:

Thank you for notice.

What is the process to rebuild the package with recent dovecot, as
1.4.12-2 (instead of the existing 1.4.12-1)?
There is no 1.4.12-2 yet in the updates-testing or
updates-testing-modular repository.

Should i'll wait for update?
BR, Bob


Got the new version and there is no more API mismatch.
It's not so important for me, since I'm retired and running this on my
home computer.

But I think it needs more support from the dovecot group.
There are a lot of file-protection issues and a lack of documentation
on the dovecot side.


BR, Bob
Hi Bob,

Dovecot does not maintain the packages for Fedora either; these are
maintained by the Fedora Project. We also do not maintain or document
the dovecot-fts-xapian plugin, since it is a 3rd-party plugin
maintained by Joan Moreau.


Kind regards,
Aki

Re: Problem with the dovecot-fts-xapian package.

2021-09-12 Thread Joan Moreau



Thank you for notice.

What is the process to rebuild the package against the recent dovecot,
as 1.4.12-2 (instead of the existing 1.4.12-1)?


On 2021-09-12 07:21, Bob Marcan wrote:


Problem with the dovecot-fts-xapian package.

Fedora 34 with latest updates.
dovecot-2.3.16-1.fc34.x86_64
dovecot-fts-xapian-1.4.12-1.fc34.x86_64

[root@smicro conf.d]# systemctl restart dovecot
[root@smicro conf.d]# doveadm index -A \*
Fatal: Couldn't load required plugin 
/usr/lib64/dovecot/lib21_fts_xapian_plugin.so: Module is for different 
ABI version 2.3.ABIv15(2.3.15) (we have 2.3.ABIv16(2.3.16))


BR, Bob

Re: Duplicate plugins - FTS Xapian

2021-09-01 Thread Joan Moreau
Just for clarity, Open-Xchange has not written any xapian plugin 
whatsoever.


Yes, but the doc says that Open-Xchange "supports" one over the other.

Honestly, I am doing this in my free time, being very reactive to user
requests, and the plugin has been accepted by Debian, Arch Linux and
now Fedora in their core packages.


This is not very encouraging despite all the efforts achieved.

Duplicate plugins - FTS Xapian

2021-08-30 Thread Joan Moreau

Hi

There seem to be two plugins doing the same thing:

- https://github.com/slusarz/dovecot-fts-flatcurve/

- https://github.com/grosjo/fts-xapian/ (mine)

Both are in the doc of dovecot 
https://doc.dovecot.org/configuration_manual/fts/


I am currently working hard to push it as an RPM package, and the
plugin is already approved by Arch Linux and Debian.


Isn't there double work here ?

Thanks

JM

Re: [Dovecot-news] v2.3.16 released

2021-08-09 Thread Joan Moreau

Well, I don't really understand your note.

Bottom-line : 2.3.16 crashes every now and then.

Is there maybe a quick fix for production servers?

On 2021-08-09 10:27, Timo Sirainen wrote:


On 9. Aug 2021, at 11.24, Timo Sirainen  wrote:

On 9. Aug 2021, at 11.03, Joan Moreau  wrote:

#0 0x7f2370f7fe3d in o_stream_nsendv (stream=0x0, 
iov=iov@entry=0x7ffeb9dabd70, iov_count=iov_count@entry=1) at 
ostream.c:333


overflow = false
#1 0x7f2370f7feca in o_stream_nsend (stream=, 
data=, size=) at ostream.c:325

iov = {iov_base = 0x55b8af41d470, iov_len = 5}
#2 0x7f2370f7ff1a in o_stream_nsend_str (stream=, 
str=) at ostream.c:344

No locals.
#3 0x55b8af391f84 in indexer_client_status_callback (percentage=56, 
context=0x55b8af434b70) at indexer-client.c:146

_data_stack_cur_id = 4
ctx = 0x55b8af434b70
#4 0x55b8af3921a0 in indexer_queue_request_status_int 
(queue=0x55b8af4299a0, request=0x55b8af434b90, percentage=56) at 
indexer-queue.c:182

context = 

Looks like v2.3.15 already broke this. Happens when indexer-client 
disconnects early. Hopefully doesn't happen very often.


Oh, actually v2.3.15.1, but looks like it wasn't even released to 
community.

Re: [Dovecot-news] v2.3.16 released

2021-08-09 Thread Joan Moreau

Well, I do not think I am mistaken.

I also get the following error for "indexer" process

#0 0x7f2370f7fe3d in o_stream_nsendv (stream=0x0, 
iov=iov@entry=0x7ffeb9dabd70, iov_count=iov_count@entry=1) at 
ostream.c:333

333 if (unlikely(stream->closed || stream->stream_errno != 0 ||
(gdb) bt full
#0 0x7f2370f7fe3d in o_stream_nsendv (stream=0x0, 
iov=iov@entry=0x7ffeb9dabd70, iov_count=iov_count@entry=1) at 
ostream.c:333

overflow = false
#1 0x7f2370f7feca in o_stream_nsend (stream=, 
data=, size=) at ostream.c:325

iov = {iov_base = 0x55b8af41d470, iov_len = 5}
#2 0x7f2370f7ff1a in o_stream_nsend_str (stream=, 
str=) at ostream.c:344

No locals.
#3 0x55b8af391f84 in indexer_client_status_callback (percentage=56, 
context=0x55b8af434b70) at indexer-client.c:146

_data_stack_cur_id = 4
ctx = 0x55b8af434b70
#4 0x55b8af3921a0 in indexer_queue_request_status_int 
(queue=0x55b8af4299a0, request=0x55b8af434b90, percentage=56) at 
indexer-queue.c:182

context = 
i = 0
#5 0x55b8af3919a2 in worker_status_callback (percentage=56, 
context=0x55b8af434cb0) at indexer.c:104

conn = 0x55b8af434cb0
request = 0x55b8af434b90
#6 0x55b8af392ac4 in worker_connection_call_callback 
(percentage=, worker=0x55b8af434cb0) at 
worker-connection.c:42

No locals.
#7 worker_connection_input_args (conn=0x55b8af434cb0, 
args=0x55b8af41d348) at worker-connection.c:109

worker = 0x55b8af434cb0
percentage = 56
ret = 
_tmp_event = 
#8 0x7f2370f53853 in connection_input_default (conn=0x55b8af434cb0) 
at connection.c:95

_data_stack_cur_id = 3
line = 0x55b8af438625 "56"
input = 0x55b8af436210
output = 0x55b8af436430
ret = 1
#9 0x7f2370f71919 in io_loop_call_io (io=0x55b8af436550) at 
ioloop.c:727

ioloop = 0x55b8af425ec0
t_id = 2
__func__ = "io_loop_call_io"
#10 0x7f2370f72fc2 in io_loop_handler_run_internal 
(ioloop=ioloop@entry=0x55b8af425ec0) at ioloop-epoll.c:222


On 2021-08-06 13:49, Aki Tuomi wrote:


On 06/08/2021 15:43 Joan Moreau  wrote:

Thank you Timo
However, this leads to
kernel: imap[228122]: segfault at 50 ip 7f7015ee332b sp 
7fffa7178740 error 4 in lib20_fts_plugin.so[7f7015ee1000+11000]

Returning to 2.3.15 resolves the problem


Can you provide `gdb bt full` output for the crash?

Aki

Re: [Dovecot-news] v2.3.16 released

2021-08-06 Thread Joan Moreau

git clone -b release-2.3.16

On 2021-08-06 15:07, Timo Sirainen wrote:


On 6. Aug 2021, at 15.08, Joan Moreau  wrote:


Below

(gdb) bt full
#0 fts_user_autoindex_exclude (box=, 
box@entry=0x55e0bc7e0fe8) at fts-user.c:347


There is no such function in 2.3.16 release. That's only in the current 
git master. What did you install and from where?

Re: [Dovecot-news] v2.3.16 released

2021-08-06 Thread Joan Moreau

Below

(gdb) bt full
#0 fts_user_autoindex_exclude (box=, 
box@entry=0x55e0bc7e0fe8) at fts-user.c:347

fuser = 
#1 0x7f42e8e9b4a6 in fts_mailbox_allocated (box=0x55e0bc7e0fe8) at 
fts-storage.c:806

flist = 
v = 0x55e0bc7e1010
fbox = 0x55e0bc7e1608
#2 0x7f42e952652c in hook_mailbox_allocated 
(box=box@entry=0x55e0bc7e0fe8) at mail-storage-hooks.c:256

_data_stack_cur_id = 5
_foreach_end = 0x55e0bc7d28a0
_foreach_ptr = 0x55e0bc7d2890
hooks = 0x7f42e8ec9ba0 
ctx = 0x55e0bc7e2818
#3 0x7f42e95219c1 in mailbox_alloc (list=0x55e0bc7d97b8, 
vname=0x55e0bc78f608 "INBOX", 
flags=flags@entry=MAILBOX_FLAG_DROP_RECENT) at mail-storage.c:860

_data_stack_cur_id = 4
new_list = 0x55e0bc7d97b8
storage = 0x55e0bc7d9fc8
box = 0x55e0bc7e0fe8
open_error = MAIL_ERROR_NONE
errstr = 0x0
__func__ = "mailbox_alloc"
#4 0x55e0bbd0a5c2 in select_open (readonly=false, mailbox=out>, ctx=0x55e0bc7d6fa0) at cmd-select.c:285

client = 0x55e0bc7d6298
status = {messages = 32, recent = 48, unseen = 814554448, uidvalidity = 
32766, uidnext = 814554256, first_unseen_seq = 32766, first_recent_uid = 
1633369088,
last_cached_seq = 3805518085, highest_modseq = 0, highest_pvt_modseq = 
139925357787644, keywords = 0x55e0bc78f398, permanent_flags = 0, flags = 
0, permanent_keywords = false,
allow_new_keywords = false, nonpermanent_modseqs = false, 
no_modseq_tracking = false, have_guids = false, have_save_guids = true, 
have_only_guid128 = false}

flags = MAILBOX_FLAG_DROP_RECENT
ret = 0
client = <optimized out>
status = {messages = <optimized out>, recent = <optimized out>, 
unseen = <optimized out>, uidvalidity = <optimized out>, 
uidnext = <optimized out>, first_unseen_seq = <optimized out>, 
first_recent_uid = <optimized out>, last_cached_seq = <optimized out>, 
highest_modseq = <optimized out>, highest_pvt_modseq = <optimized out>, 
keywords = <optimized out>, permanent_flags = <optimized out>, 
flags = <optimized out>, permanent_keywords = <optimized out>, 
allow_new_keywords = <optimized out>, nonpermanent_modseqs = <optimized out>, 
no_modseq_tracking = <optimized out>, have_guids = <optimized out>, 
have_save_guids = <optimized out>, have_only_guid128 = <optimized out>}

flags = <optimized out>
ret = <optimized out>
#5 cmd_select_full (cmd=<optimized out>, readonly=<optimized out>) at 
cmd-select.c:416

client = 0x55e0bc7d6298
ctx = 0x55e0bc7d6fa0
args = 0x55e0bc7a58d8
list_args = 0x7ffe308d1c74
mailbox = 0x55e0bc78f608 "INBOX"
client_error = 0x1 
ret = <optimized out>
__func__ = "cmd_select_full"
#6 0x55e0bbd12484 in command_exec (cmd=cmd@entry=0x55e0bc7d6e08) at 
imap-commands.c:201

hook = 0x55e0bc79b5d0
finished = <optimized out>
__func__ = "command_exec"
#7 0x55e0bbd104b2 in client_command_input (cmd=<optimized out>) at 
imap-client.c:1230

client = 0x55e0bc7d6298
command = <optimized out>
tag = 0x7f42e942d8fa  
"]A\\A]\303\061\300\303ff.\017\037\204"

name = 0x55e0bbd26e50 "SELECT"
ret = 

On 2021-08-06 13:49, Aki Tuomi wrote:


On 06/08/2021 15:43 Joan Moreau  wrote:

Thank you Timo
However, this leads to
kernel: imap[228122]: segfault at 50 ip 7f7015ee332b sp 
7fffa7178740 error 4 in lib20_fts_plugin.so[7f7015ee1000+11000]

Returning to 2.3.15 resolves the problem


Can you provide `gdb bt full` output for the crash?

Aki

Re: [Dovecot-news] v2.3.16 released

2021-08-06 Thread Joan Moreau

Thank you Timo

However, this leads to

kernel: imap[228122]: segfault at 50 ip 7f7015ee332b sp 
7fffa7178740 error 4 in lib20_fts_plugin.so[7f7015ee1000+11000]


Returning to 2.3.15 resolves the problem

On 2021-08-06 12:42, Timo Sirainen wrote:


Hi,

One interesting thing in this release is the support for configuring 
OAUTH2 openid-configuration element. It would be nice if IMAP clients 
started supporting this feature to enable OAUTH2 for all IMAP servers, 
not just Gmail and a few others. This would allow all kinds of new 
authentication methods for IMAP and improve the authentication security 
in general.

https://dovecot.org/releases/2.3/dovecot-2.3.16.tar.gz
https://dovecot.org/releases/2.3/dovecot-2.3.16.tar.gz.sig

Binary packages in https://repo.dovecot.org/
Docker images in https://hub.docker.com/r/dovecot/dovecot

* Any unexpected exit() will now result in a core dump. This can
especially help notice problems when a Lua script causes exit(0).
* auth-worker process is now restarted when the number of auth
requests reaches service auth-worker { service_count }. The default
is still unlimited.

+ Event improvements: Added data_stack_grow event and http-client
category. See https://doc.dovecot.org/admin_manual/list_of_events/
+ oauth2: Support RFC 7628 openid-configuration element. This allows
clients to support OAUTH2 for any server, not just a few hardcoded
servers like they do now. See openid_configuration_url setting in
dovecot-oauth2.conf.ext.
+ mysql: Single statements are no longer enclosed with BEGIN/COMMIT.
+ dovecot-sysreport --core supports multiple core files now and does
not require specifying the binary path.
+ imapc: When imap_acl plugin is loaded and imapc_features=acl is used,
IMAP ACL commands are proxied to the remote server. See
https://doc.dovecot.org/configuration_manual/mail_location/imapc/
+ dict-sql now supports the "UPSERT" syntax for SQLite and PostgreSQL.
+ imap: If IMAP client disconnects during a COPY command, the copying
is aborted, and changes are reverted. This may help to avoid many
email duplicates if client disconnects during COPY and retries it
after reconnecting.
- master process was using 100% CPU if service attempted to create more
processes due to process_min_avail, but process_limit was already
reached. v2.3.15 regression.
- Using attachment detection flags wrongly logged unnecessary "Failed
to add attachment keywords" errors. v2.3.13 regression.
- IMAP QRESYNC: Expunging UID 1 mail resulted in broken VANISHED
response, which could have confused IMAP clients. v2.3.13 regression.
- imap: STORE didn't send untagged replies for \Seen changes for
(shared) mailboxes using INDEXPVT. v2.3.10 regression.
- rawlog_dir setting would not log input that was pipelined after
authentication command.
- Fixed potential infinite looping with autoexpunging.
- Log event exporter: Truncate long fields to 1000 bytes
- LAYOUT=index: ACL inheritance didn't work when creating mailboxes
- Event filters: Unquoted '?' wildcard caused a crash at startup
- fs-metawrap: Fix to handling zero sized files
- imap-hibernate: Fixed potential crash at deinit.
- acl: dovecot-acl-list files were written for acl_ignore_namespaces
- program-client (used by Sieve extprograms, director_flush_socket)
may have missed status response from UNIX and network sockets,
resulting in unexpected failures.
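For the openid-configuration item above, the release adds the openid_configuration_url setting mentioned in the notes. A sketch of how it might look in dovecot-oauth2.conf.ext — the issuer URL is a placeholder, not something from the release notes:

```
# dovecot-oauth2.conf.ext (fragment)
# RFC 7628: tell IMAP clients where to discover the OAuth2 endpoints.
openid_configuration_url = https://auth.example.com/.well-known/openid-configuration
```

Clients supporting RFC 7628 receive this URL in the SASL failure response and can bootstrap OAUTH2 from it instead of relying on hardcoded provider lists.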

___
Dovecot-news mailing list
dovecot-n...@dovecot.org
https://dovecot.org/mailman/listinfo/dovecot-news

Re: How to use xapian with non-text attachments

2021-07-03 Thread Joan Moreau

It is now very out of date.

@Jello : Kindly update please

On 2021-03-21 12:58, André Rodier wrote:


Hello,

The version packaged on Bullseye is slightly out of date, I have filled
a bug report:

https://bugs.debian.org/985654

Thanks to the maintainers for their hard work!

André

On Sun, 2021-03-21 at 10:51 +, André Rodier wrote:


Hello,

I am developing a hosting platform on Debian Bullseye, with Dovecot
amongst other tools.

I am trying to use the xapian full test search plugin, but I can see
the attachments are skipped:

This is what I have in the logs when running the indexing in verbose
mode:

---

doveadm(camille): Info: FTS Xapian: fts_backend_xapian_check_access
doveadm(camille): Info: FTS Xapian: Memory stats : Used = 56 MB, Free
=
66 MB
doveadm(camille): Info: FTS Xapian: fts_backend_xapian_index_hdr
doveadm(camille): Info: FTS Xapian: fts_backend_xapian_query
doveadm(camille): Info: FTS Xapian: Query= uid:"44"
doveadm(camille): Info: FTS Xapian: Ngram(S) -> 63 items (total 0 KB)
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_unset_build_key
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_set_build_key
doveadm(camille): Info: FTS Xapian: New part (Header=Message-
Id,Type=(null),Disposition=(null))
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_build_more
doveadm(camille): Info: FTS Xapian: fts_backend_xapian_check_access
doveadm(camille): Info: FTS Xapian: Memory stats : Used = 56 MB, Free
=
66 MB
doveadm(camille): Info: FTS Xapian: fts_backend_xapian_index_hdr
doveadm(camille): Info: FTS Xapian: fts_backend_xapian_query
doveadm(camille): Info: FTS Xapian: Query= uid:"44"
doveadm(camille): Info: FTS Xapian: Ngram(XMID) -> 4 items (total 0
KB)
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_unset_build_key
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_set_build_key
doveadm(camille): Info: FTS Xapian: New part (Header=X-
Mailer,Type=(null),Disposition=(null))
doveadm(camille): Info: FTS Xapian: Unknown header (indexing)
'xmailer'
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_set_build_key
doveadm(camille): Info: FTS Xapian: New part (Header=MIME-
Version,Type=(null),Disposition=(null))
doveadm(camille): Info: FTS Xapian: Unknown header (indexing)
'mimeversion'
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_set_build_key
doveadm(camille): Info: FTS Xapian: New part (Header=Content-
Type,Type=(null),Disposition=(null))
doveadm(camille): Info: FTS Xapian: Unknown header (indexing)
'contenttype'
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_set_build_key
doveadm(camille): Info: FTS Xapian: New part (Header=Authentication-
Results,Type=(null),Disposition=(null))
doveadm(camille): Info: FTS Xapian: Unknown header (indexing)
'authenticationresults'
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_set_build_key
doveadm(camille): Info: FTS Xapian: New part (Header=X-AV-
Checked,Type=(null),Disposition=(null))
doveadm(camille): Info: FTS Xapian: Unknown header (indexing)
'xavchecked'
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_set_build_key
doveadm(camille): Info: FTS Xapian: New part (Header=Content-
Type,Type=(null),Disposition=(null))
doveadm(camille): Info: FTS Xapian: Unknown header (indexing)
'contenttype'
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_set_build_key
doveadm(camille): Info: FTS Xapian: New part
(Header=(null),Type=text/plain,Disposition=(null))
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_build_more
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_unset_build_key
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_set_build_key
doveadm(camille): Info: FTS Xapian: New part (Header=Content-
Type,Type=(null),Disposition=(null))
doveadm(camille): Info: FTS Xapian: Unknown header (indexing)
'contenttype'
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_set_build_key
doveadm(camille): Info: FTS Xapian: New part (Header=Content-
Description,Type=(null),Disposition=(null))
doveadm(camille): Info: FTS Xapian: Unknown header (indexing)
'contentdescription'
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_set_build_key
doveadm(camille): Info: FTS Xapian: New part (Header=Content-
Disposition,Type=(null),Disposition=(null))
doveadm(camille): Info: FTS Xapian: Unknown header (indexing)
'contentdisposition'
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_set_build_key
doveadm(camille): Info: FTS Xapian: New part (Header=Content-
Transfer-
Encoding,Type=(null),Disposition=(null))
doveadm(camille): Info: FTS Xapian: Unknown header (indexing)
'contenttransferencoding'
doveadm(camille): Info: FTS Xapian:
fts_backend_xapian_update_set_build_key
doveadm(camille): Info: FTS Xapian: New part
(Header=(null),Type=text/csv,Disposition=attachment;
filename="file.csv")
doveadm(camille): Info: 

Re: systemd integration not working

2021-04-28 Thread Joan Moreau

Not many details

Git version (including the patch you sent)  raised CPU load very very 
high.


Can't play too much on my production server.

Let me know if I can help

On 2021-04-28 06:12, Aki Tuomi wrote:


Can you provide any details on this instability?

Aki

On April 27, 2021 7:58:01 PM UTC, Joan Moreau  wrote:

Ok, a third regression is that it becomes highly unstable with the 
patch you sent


I had to get back to 2.3.14

On 2021-04-27 17:07, Joan Moreau wrote:

Indeed, latest git works much better :)

On 2021-04-27 05:58, Aki Tuomi wrote:
Can you try with latest git? We did some improvements on the systemd 
configure parts.


Aki

On 26/04/2021 23:32 Joan Moreau  wrote:

Looking at config.log, there is #define HAVE_LIBSYSTEMD 1
But "Type=notify" does not appear
My systemd is version 248

On 2021-04-26 12:05, Joan Moreau wrote: I have
# sudo systemctl status dovecot
● dovecot.service - Dovecot IMAP/POP3 email server
Loaded: loaded (/usr/lib/systemd/system/dovecot.service; enabled; 
vendor preset: disabled)

Active: active (running) since Sun 2021-04-25 20:13:25 UTC; 14h ago
Docs: man:dovecot(1)
https://doc.dovecot.org/
Main PID: 2559364 (dovecot)
Tasks: 28 (limit: 76912)
Memory: 1.0G
CPU: 7min 18.342s
CGroup: /system.slice/dovecot.service
├─2559364 /usr/sbin/dovecot -F
├─2559366 dovecot/imap-login
├─2559367 dovecot/anvil [11 connections]
├─2559368 dovecot/log

On 2021-04-26 08:32, Aki Tuomi wrote: I don't know then. It works for 
me and I just tried it again. The only reason it would fail would be 
that HAVE_LIBSYSTEMD is not defined, so it would not be using 
libsystemd for notify support.


$ sudo systemctl status dovecot
● dovecot.service - Dovecot IMAP/POP3 email server
Loaded: loaded (/lib/systemd/system/dovecot.service; disabled; vendor 
preset: enabled)

Active: active (running) since Mon 2021-04-26 10:30:02 EEST; 2s ago
Docs: man:dovecot(1)
https://doc.dovecot.org/
Main PID: 30213 (dovecot)
Status: "v2.4.devel (98a1cca054) running"
Tasks: 4 (limit: 4701)
Memory: 3.3M
CGroup: /system.slice/dovecot.service
├─30213 /home/cmouse/dovecot/sbin/dovecot -F
├─30214 dovecot/anvil
├─30215 dovecot/log
└─30216 dovecot/config

You can tell from the "Status" line that it's using Type=notify.

Aki

On 26/04/2021 10:29 Joan Moreau  wrote:

Yes, I do run autogen.sh after every "git pull"

On 2021-04-26 08:21, Aki Tuomi wrote: The current autoconf code is a bit 
buggy, but if you do indeed have libsystemd-dev installed it should do 
the right thing and will work with systemd even if you have 
Type=notify.


This has been actually tested, so if it's not working, then something 
else is wrong.


Did you remember to run ./autogen.sh after pulling from git to make 
sure you get new configure script?


Aki

On 26/04/2021 10:11 Joan Moreau  wrote:

Yes systemd is installed (and the "dev" files as well)

On 2021-04-26 06:23, Aki Tuomi wrote: This is because you are not 
compiling with libsystemd-dev installed. I guess we need to make some 
service template that uses type simple when you don't use libsystemd.


Aki

On 25/04/2021 22:53 Joan Moreau  wrote:

Yes, it seems fixed with this patch :)

Another bug with git is that the "type=" in systemd is switched from 
"simple" to "notify". The latter does not work, and reverting to "simple" 
does work.


On 2021-04-25 17:53, Aki Tuomi wrote: On 24/04/2021 21:56 Joan Moreau 
 wrote:


chroot= does not resolve the issue
I have "chroot = login" in my conf

Thanks!

The chroot was needed to get the core dump.

Can you try if this does fix the crash?

Aki

From 1df4e02cbff710ce8938480b07a5690e37f661f6 Mon Sep 17 00:00:00 2001
From: Timo Sirainen 
Date: Fri, 23 Apr 2021 16:43:36 +0300
Subject: [PATCH] login-common: Fix handling destroyed_clients linked 
list


The client needs to be removed from destroyed_clients linked list 
before

it's added to client_fd_proxies linked list.

Broken by 1c622cdbe08df2f642e28923c39894516143ae2a
---
src/login-common/client-common.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/login-common/client-common.c 
b/src/login-common/client-common.c

index bdb6e9c798..1d264d9f75 100644
--- a/src/login-common/client-common.c
+++ b/src/login-common/client-common.c
@@ -289,8 +289,9 @@ void client_disconnect(struct client *client, const 
char *reason,

/* Login was successful. We may now be proxying the connection,
so don't disconnect the client until client_unref(). */
if (client->iostream_fd_proxy != NULL) {
+ i_assert(!client->fd_proxying);
client->fd_proxying = TRUE;
- i_assert(client->prev == NULL && client->next == NULL);
+ DLLIST_REMOVE(&destroyed_clients, client);
DLLIST_PREPEND(&client_fd_proxies, client);
client_fd_proxies_count++;
}
@@ -307,8 +308,9 @@ void client_destroy(struct client *client, const 
char *reason)


if (last_client == client)
last_client = client->prev;
- /* remove from clients linke

Re: systemd integration not working

2021-04-27 Thread Joan Moreau
Ok, a third regression is that it becomes highly unstable with the patch 
you sent


I had to get back to 2.3.14

On 2021-04-27 17:07, Joan Moreau wrote:


Indeed, latest git works much better :)

On 2021-04-27 05:58, Aki Tuomi wrote:
Can you try with latest git? We did some improvements on the systemd 
configure parts.


Aki

On 26/04/2021 23:32 Joan Moreau  wrote:

Looking at config.log, there is #define HAVE_LIBSYSTEMD 1
But "Type=notify" does not appear
My systemd is version 248

On 2021-04-26 12:05, Joan Moreau wrote: I have
# sudo systemctl status dovecot
● dovecot.service - Dovecot IMAP/POP3 email server
Loaded: loaded (/usr/lib/systemd/system/dovecot.service; enabled; 
vendor preset: disabled)

Active: active (running) since Sun 2021-04-25 20:13:25 UTC; 14h ago
Docs: man:dovecot(1)
https://doc.dovecot.org/
Main PID: 2559364 (dovecot)
Tasks: 28 (limit: 76912)
Memory: 1.0G
CPU: 7min 18.342s
CGroup: /system.slice/dovecot.service
├─2559364 /usr/sbin/dovecot -F
├─2559366 dovecot/imap-login
├─2559367 dovecot/anvil [11 connections]
├─2559368 dovecot/log

On 2021-04-26 08:32, Aki Tuomi wrote: I don't know then. It works for 
me and I just tried it again. The only reason it would fail would be 
that HAVE_LIBSYSTEMD is not defined, so it would not be using 
libsystemd for notify support.


$ sudo systemctl status dovecot
● dovecot.service - Dovecot IMAP/POP3 email server
Loaded: loaded (/lib/systemd/system/dovecot.service; disabled; vendor 
preset: enabled)

Active: active (running) since Mon 2021-04-26 10:30:02 EEST; 2s ago
Docs: man:dovecot(1)
https://doc.dovecot.org/
Main PID: 30213 (dovecot)
Status: "v2.4.devel (98a1cca054) running"
Tasks: 4 (limit: 4701)
Memory: 3.3M
CGroup: /system.slice/dovecot.service
├─30213 /home/cmouse/dovecot/sbin/dovecot -F
├─30214 dovecot/anvil
├─30215 dovecot/log
└─30216 dovecot/config

You can tell from the "Status" line that it's using Type=notify.

Aki

On 26/04/2021 10:29 Joan Moreau  wrote:

Yes, I do run autogen.sh after every "git pull"

On 2021-04-26 08:21, Aki Tuomi wrote: The current autoconf code is a bit 
buggy, but if you do indeed have libsystemd-dev installed it should do 
the right thing and will work with systemd even if you have 
Type=notify.


This has been actually tested, so if it's not working, then something 
else is wrong.


Did you remember to run ./autogen.sh after pulling from git to make 
sure you get new configure script?


Aki

On 26/04/2021 10:11 Joan Moreau  wrote:

Yes systemd is installed (and the "dev" files as well)

On 2021-04-26 06:23, Aki Tuomi wrote: This is because you are not 
compiling with libsystemd-dev installed. I guess we need to make some 
service template that uses type simple when you don't use libsystemd.


Aki

On 25/04/2021 22:53 Joan Moreau  wrote:

Yes, it seems fixed with this patch :)

Another bug with git is that the "type=" in systemd is switched from 
"simple" to "notify". The latter does not work, and reverting to "simple" 
does work.


On 2021-04-25 17:53, Aki Tuomi wrote: On 24/04/2021 21:56 Joan Moreau 
 wrote:


chroot= does not resolve the issue
I have "chroot = login" in my conf

Thanks!

The chroot was needed to get the core dump.

Can you try if this does fix the crash?

Aki

From 1df4e02cbff710ce8938480b07a5690e37f661f6 Mon Sep 17 00:00:00 2001
From: Timo Sirainen 
Date: Fri, 23 Apr 2021 16:43:36 +0300
Subject: [PATCH] login-common: Fix handling destroyed_clients linked 
list


The client needs to be removed from destroyed_clients linked list 
before

it's added to client_fd_proxies linked list.

Broken by 1c622cdbe08df2f642e28923c39894516143ae2a
---
src/login-common/client-common.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/login-common/client-common.c 
b/src/login-common/client-common.c

index bdb6e9c798..1d264d9f75 100644
--- a/src/login-common/client-common.c
+++ b/src/login-common/client-common.c
@@ -289,8 +289,9 @@ void client_disconnect(struct client *client, const 
char *reason,

/* Login was successful. We may now be proxying the connection,
so don't disconnect the client until client_unref(). */
if (client->iostream_fd_proxy != NULL) {
+ i_assert(!client->fd_proxying);
client->fd_proxying = TRUE;
- i_assert(client->prev == NULL && client->next == NULL);
+ DLLIST_REMOVE(&destroyed_clients, client);
DLLIST_PREPEND(&client_fd_proxies, client);
client_fd_proxies_count++;
}
@@ -307,8 +308,9 @@ void client_destroy(struct client *client, const 
char *reason)


if (last_client == client)
last_client = client->prev;
- /* remove from clients linked list before it's added to
- client_fd_proxies. */
+ /* move to destroyed_clients linked list before it's potentially
+ added to client_fd_proxies. */
+ i_assert(!client->fd_proxying);
DLLIST_REMOVE(&clients, client);
DLLIST_PREPEND(&destroyed_clients, client);

@@ -409,13 +411,14 @@ bool client_unref(struct client **_client)
DLLIST_R

Re: systemd integration not working

2021-04-27 Thread Joan Moreau

Indeed, latest git works much better :)

On 2021-04-27 05:58, Aki Tuomi wrote:

Can you try with latest git? We did some improvements on the systemd 
configure parts.


Aki

On 26/04/2021 23:32 Joan Moreau  wrote:

Looking at config.log, there is #define HAVE_LIBSYSTEMD 1
But "Type=notify" does not appear
My systemd is version 248

On 2021-04-26 12:05, Joan Moreau wrote: I have
# sudo systemctl status dovecot
● dovecot.service - Dovecot IMAP/POP3 email server
Loaded: loaded (/usr/lib/systemd/system/dovecot.service; enabled; 
vendor preset: disabled)

Active: active (running) since Sun 2021-04-25 20:13:25 UTC; 14h ago
Docs: man:dovecot(1)
https://doc.dovecot.org/
Main PID: 2559364 (dovecot)
Tasks: 28 (limit: 76912)
Memory: 1.0G
CPU: 7min 18.342s
CGroup: /system.slice/dovecot.service
├─2559364 /usr/sbin/dovecot -F
├─2559366 dovecot/imap-login
├─2559367 dovecot/anvil [11 connections]
├─2559368 dovecot/log

On 2021-04-26 08:32, Aki Tuomi wrote: I don't know then. It works for 
me and I just tried it again. The only reason it would fail would be 
that HAVE_LIBSYSTEMD is not defined, so it would not be using 
libsystemd for notify support.


$ sudo systemctl status dovecot
● dovecot.service - Dovecot IMAP/POP3 email server
Loaded: loaded (/lib/systemd/system/dovecot.service; disabled; vendor 
preset: enabled)

Active: active (running) since Mon 2021-04-26 10:30:02 EEST; 2s ago
Docs: man:dovecot(1)
https://doc.dovecot.org/
Main PID: 30213 (dovecot)
Status: "v2.4.devel (98a1cca054) running"
Tasks: 4 (limit: 4701)
Memory: 3.3M
CGroup: /system.slice/dovecot.service
├─30213 /home/cmouse/dovecot/sbin/dovecot -F
├─30214 dovecot/anvil
├─30215 dovecot/log
└─30216 dovecot/config

You can tell from the "Status" line that it's using Type=notify.

Aki

On 26/04/2021 10:29 Joan Moreau  wrote:

Yes, I do run autogen.sh after every "git pull"

On 2021-04-26 08:21, Aki Tuomi wrote: The current autoconf code is a bit 
buggy, but if you do indeed have libsystemd-dev installed it should do 
the right thing and will work with systemd even if you have 
Type=notify.


This has been actually tested, so if it's not working, then something 
else is wrong.


Did you remember to run ./autogen.sh after pulling from git to make 
sure you get new configure script?


Aki

On 26/04/2021 10:11 Joan Moreau  wrote:

Yes systemd is installed (and the "dev" files as well)

On 2021-04-26 06:23, Aki Tuomi wrote: This is because you are not 
compiling with libsystemd-dev installed. I guess we need to make some 
service template that uses type simple when you don't use libsystemd.


Aki

On 25/04/2021 22:53 Joan Moreau  wrote:

Yes, it seems fixed with this patch :)

Another bug with git is that the "type=" in systemd is switched from 
"simple" to "notify". The latter does not work, and reverting to "simple" 
does work.


On 2021-04-25 17:53, Aki Tuomi wrote: On 24/04/2021 21:56 Joan Moreau 
 wrote:


chroot= does not resolve the issue
I have "chroot = login" in my conf

Thanks!

The chroot was needed to get the core dump.

Can you try if this does fix the crash?

Aki

From 1df4e02cbff710ce8938480b07a5690e37f661f6 Mon Sep 17 00:00:00 2001
From: Timo Sirainen 
Date: Fri, 23 Apr 2021 16:43:36 +0300
Subject: [PATCH] login-common: Fix handling destroyed_clients linked 
list


The client needs to be removed from destroyed_clients linked list 
before

it's added to client_fd_proxies linked list.

Broken by 1c622cdbe08df2f642e28923c39894516143ae2a
---
src/login-common/client-common.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/login-common/client-common.c 
b/src/login-common/client-common.c

index bdb6e9c798..1d264d9f75 100644
--- a/src/login-common/client-common.c
+++ b/src/login-common/client-common.c
@@ -289,8 +289,9 @@ void client_disconnect(struct client *client, const 
char *reason,

/* Login was successful. We may now be proxying the connection,
so don't disconnect the client until client_unref(). */
if (client->iostream_fd_proxy != NULL) {
+ i_assert(!client->fd_proxying);
client->fd_proxying = TRUE;
- i_assert(client->prev == NULL && client->next == NULL);
+ DLLIST_REMOVE(&destroyed_clients, client);
DLLIST_PREPEND(&client_fd_proxies, client);
client_fd_proxies_count++;
}
@@ -307,8 +308,9 @@ void client_destroy(struct client *client, const 
char *reason)


if (last_client == client)
last_client = client->prev;
- /* remove from clients linked list before it's added to
- client_fd_proxies. */
+ /* move to destroyed_clients linked list before it's potentially
+ added to client_fd_proxies. */
+ i_assert(!client->fd_proxying);
DLLIST_REMOVE(&clients, client);
DLLIST_PREPEND(&destroyed_clients, client);

@@ -409,13 +411,14 @@ bool client_unref(struct client **_client)
DLLIST_REMOVE(&client_fd_proxies, client);
i_assert(client_fd_proxies_count > 0);
client_fd_proxies_count--;
+ } else {
+ DLLIST_REMOVE(&destroyed_clients, client);
}
i_stream_unref(&

Re: systemd integration not working

2021-04-26 Thread Joan Moreau

Looking at config.log, there is #define HAVE_LIBSYSTEMD 1

But "Type=notify" does not appear

My systemd is version 248

On 2021-04-26 12:05, Joan Moreau wrote:


I have

# sudo systemctl status dovecot
● dovecot.service - Dovecot IMAP/POP3 email server
Loaded: loaded (/usr/lib/systemd/system/dovecot.service; enabled; 
vendor preset: disabled)

Active: active (running) since Sun 2021-04-25 20:13:25 UTC; 14h ago
Docs: man:dovecot(1)
https://doc.dovecot.org/
Main PID: 2559364 (dovecot)
Tasks: 28 (limit: 76912)
Memory: 1.0G
CPU: 7min 18.342s
CGroup: /system.slice/dovecot.service
├─2559364 /usr/sbin/dovecot -F
├─2559366 dovecot/imap-login
├─2559367 dovecot/anvil [11 connections]
├─2559368 dovecot/log

On 2021-04-26 08:32, Aki Tuomi wrote:
I don't know then. It works for me and I just tried it again. The only 
reason it would fail would be that HAVE_LIBSYSTEMD is not defined, so 
it would not be using libsystemd for notify support.


$ sudo systemctl status dovecot
● dovecot.service - Dovecot IMAP/POP3 email server
Loaded: loaded (/lib/systemd/system/dovecot.service; disabled; vendor 
preset: enabled)

Active: active (running) since Mon 2021-04-26 10:30:02 EEST; 2s ago
Docs: man:dovecot(1)
https://doc.dovecot.org/
Main PID: 30213 (dovecot)
Status: "v2.4.devel (98a1cca054) running"
Tasks: 4 (limit: 4701)
Memory: 3.3M
CGroup: /system.slice/dovecot.service
├─30213 /home/cmouse/dovecot/sbin/dovecot -F
├─30214 dovecot/anvil
├─30215 dovecot/log
└─30216 dovecot/config

You can tell from the "Status" line that it's using Type=notify.

Aki

On 26/04/2021 10:29 Joan Moreau  wrote:

Yes, I do run autogen.sh after every "git pull"

On 2021-04-26 08:21, Aki Tuomi wrote: The current autoconf code is a bit 
buggy, but if you do indeed have libsystemd-dev installed it should do 
the right thing and will work with systemd even if you have 
Type=notify.


This has been actually tested, so if it's not working, then something 
else is wrong.


Did you remember to run ./autogen.sh after pulling from git to make 
sure you get new configure script?


Aki

On 26/04/2021 10:11 Joan Moreau  wrote:

Yes systemd is installed (and the "dev" files as well)

On 2021-04-26 06:23, Aki Tuomi wrote: This is because you are not 
compiling with libsystemd-dev installed. I guess we need to make some 
service template that uses type simple when you don't use libsystemd.


Aki

On 25/04/2021 22:53 Joan Moreau  wrote:

Yes, it seems fixed with this patch :)

Another bug with git is that the "type=" in systemd is switched from 
"simple" to "notify". The latter does not work, and reverting to "simple" 
does work.


On 2021-04-25 17:53, Aki Tuomi wrote: On 24/04/2021 21:56 Joan Moreau 
 wrote:


chroot= does not resolve the issue
I have "chroot = login" in my conf

Thanks!

The chroot was needed to get the core dump.

Can you try if this does fix the crash?

Aki

From 1df4e02cbff710ce8938480b07a5690e37f661f6 Mon Sep 17 00:00:00 2001
From: Timo Sirainen 
Date: Fri, 23 Apr 2021 16:43:36 +0300
Subject: [PATCH] login-common: Fix handling destroyed_clients linked 
list


The client needs to be removed from destroyed_clients linked list 
before

it's added to client_fd_proxies linked list.

Broken by 1c622cdbe08df2f642e28923c39894516143ae2a
---
src/login-common/client-common.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/login-common/client-common.c 
b/src/login-common/client-common.c

index bdb6e9c798..1d264d9f75 100644
--- a/src/login-common/client-common.c
+++ b/src/login-common/client-common.c
@@ -289,8 +289,9 @@ void client_disconnect(struct client *client, const 
char *reason,

/* Login was successful. We may now be proxying the connection,
so don't disconnect the client until client_unref(). */
if (client->iostream_fd_proxy != NULL) {
+ i_assert(!client->fd_proxying);
client->fd_proxying = TRUE;
- i_assert(client->prev == NULL && client->next == NULL);
+ DLLIST_REMOVE(&destroyed_clients, client);
DLLIST_PREPEND(&client_fd_proxies, client);
client_fd_proxies_count++;
}
@@ -307,8 +308,9 @@ void client_destroy(struct client *client, const 
char *reason)


if (last_client == client)
last_client = client->prev;
- /* remove from clients linked list before it's added to
- client_fd_proxies. */
+ /* move to destroyed_clients linked list before it's potentially
+ added to client_fd_proxies. */
+ i_assert(!client->fd_proxying);
DLLIST_REMOVE(&clients, client);
DLLIST_PREPEND(&destroyed_clients, client);

@@ -409,13 +411,14 @@ bool client_unref(struct client **_client)
DLLIST_REMOVE(&client_fd_proxies, client);
i_assert(client_fd_proxies_count > 0);
client_fd_proxies_count--;
+ } else {
+ DLLIST_REMOVE(&destroyed_clients, client);
}
i_stream_unref(&client->input);
o_stream_unref(&client->output);
i_close_fd(&client->fd);
event_unref(&client->event);

- DLLIST_REMOVE(&clients, client);
i_free(client->proxy_user);
i_free(client->proxy_master_user);
i_free(client->virtual_user);

Re: systemd integration not working (WAS: Latest git FATAL error)

2021-04-26 Thread Joan Moreau

I have

# sudo systemctl status dovecot
● dovecot.service - Dovecot IMAP/POP3 email server
 Loaded: loaded (/usr/lib/systemd/system/dovecot.service; enabled; 
vendor preset: disabled)

 Active: active (running) since Sun 2021-04-25 20:13:25 UTC; 14h ago
   Docs: man:dovecot(1)
 https://doc.dovecot.org/
   Main PID: 2559364 (dovecot)
  Tasks: 28 (limit: 76912)
 Memory: 1.0G
CPU: 7min 18.342s
 CGroup: /system.slice/dovecot.service
 ├─2559364 /usr/sbin/dovecot -F
 ├─2559366 dovecot/imap-login
 ├─2559367 dovecot/anvil [11 connections]
 ├─2559368 dovecot/log

On 2021-04-26 08:32, Aki Tuomi wrote:

I don't know then. It works for me and I just tried it again. The only 
reason it would fail would be that HAVE_LIBSYSTEMD is not defined, so 
it would not be using libsystemd for notify support.


$ sudo systemctl status dovecot
● dovecot.service - Dovecot IMAP/POP3 email server
Loaded: loaded (/lib/systemd/system/dovecot.service; disabled; vendor 
preset: enabled)

Active: active (running) since Mon 2021-04-26 10:30:02 EEST; 2s ago
Docs: man:dovecot(1)
https://doc.dovecot.org/
Main PID: 30213 (dovecot)
Status: "v2.4.devel (98a1cca054) running"
Tasks: 4 (limit: 4701)
Memory: 3.3M
CGroup: /system.slice/dovecot.service
├─30213 /home/cmouse/dovecot/sbin/dovecot -F
├─30214 dovecot/anvil
├─30215 dovecot/log
└─30216 dovecot/config

You can tell from the "Status" line that it's using Type=notify.

Aki

On 26/04/2021 10:29 Joan Moreau  wrote:

Yes, I do run autogen.sh after every "git pull"

On 2021-04-26 08:21, Aki Tuomi wrote: The current autoconf code is a bit 
buggy, but if you do indeed have libsystemd-dev installed it should do 
the right thing and will work with systemd even if you have 
Type=notify.


This has been actually tested, so if it's not working, then something 
else is wrong.


Did you remember to run ./autogen.sh after pulling from git to make 
sure you get new configure script?


Aki

On 26/04/2021 10:11 Joan Moreau  wrote:

Yes systemd is installed (and the "dev" files as well)

On 2021-04-26 06:23, Aki Tuomi wrote: This is because you are not 
compiling with libsystemd-dev installed. I guess we need to make some 
service template that uses type simple when you don't use libsystemd.


Aki

On 25/04/2021 22:53 Joan Moreau  wrote:

Yes, it seems fixed with this patch :)

Another bug with git is that the "type=" in systemd is switched from 
"simple" to "notify". The latter does not work, and reverting to "simple" 
does work.


On 2021-04-25 17:53, Aki Tuomi wrote: On 24/04/2021 21:56 Joan Moreau 
 wrote:


chroot= does not resolve the issue
I have "chroot = login" in my conf

Thanks!

The chroot was needed to get the core dump.

Can you try if this does fix the crash?

Aki

From 1df4e02cbff710ce8938480b07a5690e37f661f6 Mon Sep 17 00:00:00 2001
From: Timo Sirainen 
Date: Fri, 23 Apr 2021 16:43:36 +0300
Subject: [PATCH] login-common: Fix handling destroyed_clients linked 
list


The client needs to be removed from destroyed_clients linked list 
before

it's added to client_fd_proxies linked list.

Broken by 1c622cdbe08df2f642e28923c39894516143ae2a
---
src/login-common/client-common.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/login-common/client-common.c 
b/src/login-common/client-common.c

index bdb6e9c798..1d264d9f75 100644
--- a/src/login-common/client-common.c
+++ b/src/login-common/client-common.c
@@ -289,8 +289,9 @@ void client_disconnect(struct client *client, const 
char *reason,

/* Login was successful. We may now be proxying the connection,
so don't disconnect the client until client_unref(). */
if (client->iostream_fd_proxy != NULL) {
+ i_assert(!client->fd_proxying);
client->fd_proxying = TRUE;
- i_assert(client->prev == NULL && client->next == NULL);
+ DLLIST_REMOVE(&destroyed_clients, client);
DLLIST_PREPEND(&client_fd_proxies, client);
client_fd_proxies_count++;
}
@@ -307,8 +308,9 @@ void client_destroy(struct client *client, const 
char *reason)


if (last_client == client)
last_client = client->prev;
- /* remove from clients linked list before it's added to
- client_fd_proxies. */
+ /* move to destroyed_clients linked list before it's potentially
+ added to client_fd_proxies. */
+ i_assert(!client->fd_proxying);
DLLIST_REMOVE(&clients, client);
DLLIST_PREPEND(&destroyed_clients, client);

@@ -409,13 +411,14 @@ bool client_unref(struct client **_client)
DLLIST_REMOVE(&client_fd_proxies, client);
i_assert(client_fd_proxies_count > 0);
client_fd_proxies_count--;
+ } else {
+ DLLIST_REMOVE(&destroyed_clients, client);
}
i_stream_unref(&client->input);
o_stream_unref(&client->output);
i_close_fd(&client->fd);
event_unref(&client->event);

- DLLIST_REMOVE(&destroyed_clients, client);
i_free(client->proxy_user);
i_free(client->proxy_master_user);
i_free(client->virtual_user);
Re: Latest git FATAL error

2021-04-26 Thread Joan Moreau

Yes, I do run autogen.sh after every "git pull"

On 2021-04-26 08:21, Aki Tuomi wrote:

The current autoconf code is a bit buggy, but if you do indeed have 
libsystemd-dev installed it should do the right thing and will work 
with systemd even if you have Type=notify.


This has been actually tested, so if it's not working, then something 
else is wrong.


Did you remember to run ./autogen.sh after pulling from git to make 
sure you get new configure script?


Aki

On 26/04/2021 10:11 Joan Moreau  wrote:

Yes systemd is installed (and the "dev" files as well)

On 2021-04-26 06:23, Aki Tuomi wrote: This is because you are not 
compiling with libsystemd-dev installed. I guess we need to make some 
service template that uses type simple when you don't use libsystemd.


Aki

On 25/04/2021 22:53 Joan Moreau  wrote:

Yes, it seems fixed with this patch :)

Another bug with git, is the "type=" in systemd is switched from 
"simple" to "notify". The later does not work and reverting to "simple" 
does work


On 2021-04-25 17:53, Aki Tuomi wrote: On 24/04/2021 21:56 Joan Moreau 
 wrote:


chroot= does not resolve the issue
I have "chroot = login" in my conf

Thanks!

The chroot was needed to get the core dump.

Can you try if this does fix the crash?

Aki

From 1df4e02cbff710ce8938480b07a5690e37f661f6 Mon Sep 17 00:00:00 2001
From: Timo Sirainen 
Date: Fri, 23 Apr 2021 16:43:36 +0300
Subject: [PATCH] login-common: Fix handling destroyed_clients linked 
list


The client needs to be removed from destroyed_clients linked list 
before

it's added to client_fd_proxies linked list.

Broken by 1c622cdbe08df2f642e28923c39894516143ae2a
---
src/login-common/client-common.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/login-common/client-common.c 
b/src/login-common/client-common.c

index bdb6e9c798..1d264d9f75 100644
--- a/src/login-common/client-common.c
+++ b/src/login-common/client-common.c
@@ -289,8 +289,9 @@ void client_disconnect(struct client *client, const 
char *reason,

/* Login was successful. We may now be proxying the connection,
so don't disconnect the client until client_unref(). */
if (client->iostream_fd_proxy != NULL) {
+ i_assert(!client->fd_proxying);
client->fd_proxying = TRUE;
- i_assert(client->prev == NULL && client->next == NULL);
+ DLLIST_REMOVE(&destroyed_clients, client);
DLLIST_PREPEND(&client_fd_proxies, client);
client_fd_proxies_count++;
}
@@ -307,8 +308,9 @@ void client_destroy(struct client *client, const 
char *reason)


if (last_client == client)
last_client = client->prev;
- /* remove from clients linked list before it's added to
- client_fd_proxies. */
+ /* move to destroyed_clients linked list before it's potentially
+ added to client_fd_proxies. */
+ i_assert(!client->fd_proxying);
DLLIST_REMOVE(&clients, client);
DLLIST_PREPEND(&destroyed_clients, client);

@@ -409,13 +411,14 @@ bool client_unref(struct client **_client)
DLLIST_REMOVE(&client_fd_proxies, client);
i_assert(client_fd_proxies_count > 0);
client_fd_proxies_count--;
+ } else {
+ DLLIST_REMOVE(&destroyed_clients, client);
}
i_stream_unref(&client->input);
o_stream_unref(&client->output);
i_close_fd(&client->fd);
event_unref(&client->event);

- DLLIST_REMOVE(&destroyed_clients, client);
i_free(client->proxy_user);
i_free(client->proxy_master_user);
i_free(client->virtual_user);

Re: Latest git FATAL error

2021-04-26 Thread Joan Moreau

Yes systemd is installed (and the "dev" files as well)

On 2021-04-26 06:23, Aki Tuomi wrote:

This is because you are not compiling with libsystemd-dev installed. I 
guess we need to make some service template that uses type simple when 
you don't use libsystemd.


Aki

On 25/04/2021 22:53 Joan Moreau  wrote:

Yes, it seems fixed with this patch :)

Another bug with git, is the "type=" in systemd is switched from 
"simple" to "notify". The later does not work and reverting to "simple" 
does work


On 2021-04-25 17:53, Aki Tuomi wrote: On 24/04/2021 21:56 Joan Moreau 
 wrote:


chroot= does not resolve the issue
I have "chroot = login" in my conf

Thanks!

The chroot was needed to get the core dump.

Can you try if this does fix the crash?

Aki

From 1df4e02cbff710ce8938480b07a5690e37f661f6 Mon Sep 17 00:00:00 2001
From: Timo Sirainen 
Date: Fri, 23 Apr 2021 16:43:36 +0300
Subject: [PATCH] login-common: Fix handling destroyed_clients linked 
list


The client needs to be removed from destroyed_clients linked list 
before

it's added to client_fd_proxies linked list.

Broken by 1c622cdbe08df2f642e28923c39894516143ae2a
---
src/login-common/client-common.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/login-common/client-common.c 
b/src/login-common/client-common.c

index bdb6e9c798..1d264d9f75 100644
--- a/src/login-common/client-common.c
+++ b/src/login-common/client-common.c
@@ -289,8 +289,9 @@ void client_disconnect(struct client *client, const 
char *reason,

/* Login was successful. We may now be proxying the connection,
so don't disconnect the client until client_unref(). */
if (client->iostream_fd_proxy != NULL) {
+ i_assert(!client->fd_proxying);
client->fd_proxying = TRUE;
- i_assert(client->prev == NULL && client->next == NULL);
+ DLLIST_REMOVE(&destroyed_clients, client);
DLLIST_PREPEND(&client_fd_proxies, client);
client_fd_proxies_count++;
}
@@ -307,8 +308,9 @@ void client_destroy(struct client *client, const 
char *reason)


if (last_client == client)
last_client = client->prev;
- /* remove from clients linked list before it's added to
- client_fd_proxies. */
+ /* move to destroyed_clients linked list before it's potentially
+ added to client_fd_proxies. */
+ i_assert(!client->fd_proxying);
DLLIST_REMOVE(&clients, client);
DLLIST_PREPEND(&destroyed_clients, client);

@@ -409,13 +411,14 @@ bool client_unref(struct client **_client)
DLLIST_REMOVE(&client_fd_proxies, client);
i_assert(client_fd_proxies_count > 0);
client_fd_proxies_count--;
+ } else {
+ DLLIST_REMOVE(&destroyed_clients, client);
}
i_stream_unref(&client->input);
o_stream_unref(&client->output);
i_close_fd(&client->fd);
event_unref(&client->event);

- DLLIST_REMOVE(&destroyed_clients, client);
i_free(client->proxy_user);
i_free(client->proxy_master_user);
i_free(client->virtual_user);

Re: Latest git FATAL error

2021-04-25 Thread Joan Moreau

Yes, it seems fixed with this patch :)

Another bug with git, is the "type=" in systemd is switched from 
"simple" to "notify". The later does not work and reverting to "simple" 
does work


On 2021-04-25 17:53, Aki Tuomi wrote:


On 24/04/2021 21:56 Joan Moreau  wrote:

chroot= does not resolve the issue
I have "chroot = login" in my conf


Thanks!

The chroot was needed to get the core dump.

Can you try if this does fix the crash?

Aki

From 1df4e02cbff710ce8938480b07a5690e37f661f6 Mon Sep 17 00:00:00 2001
From: Timo Sirainen 
Date: Fri, 23 Apr 2021 16:43:36 +0300
Subject: [PATCH] login-common: Fix handling destroyed_clients linked 
list


The client needs to be removed from destroyed_clients linked list 
before

it's added to client_fd_proxies linked list.

Broken by 1c622cdbe08df2f642e28923c39894516143ae2a
---
src/login-common/client-common.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/login-common/client-common.c 
b/src/login-common/client-common.c

index bdb6e9c798..1d264d9f75 100644
--- a/src/login-common/client-common.c
+++ b/src/login-common/client-common.c
@@ -289,8 +289,9 @@ void client_disconnect(struct client *client, const 
char *reason,

/* Login was successful. We may now be proxying the connection,
so don't disconnect the client until client_unref(). */
if (client->iostream_fd_proxy != NULL) {
+i_assert(!client->fd_proxying);
client->fd_proxying = TRUE;
-i_assert(client->prev == NULL && client->next == NULL);
+DLLIST_REMOVE(&destroyed_clients, client);
DLLIST_PREPEND(&client_fd_proxies, client);
client_fd_proxies_count++;
}
@@ -307,8 +308,9 @@ void client_destroy(struct client *client, const 
char *reason)


if (last_client == client)
last_client = client->prev;
-/* remove from clients linked list before it's added to
-   client_fd_proxies. */
+/* move to destroyed_clients linked list before it's potentially
+   added to client_fd_proxies. */
+i_assert(!client->fd_proxying);
DLLIST_REMOVE(&clients, client);
DLLIST_PREPEND(&destroyed_clients, client);

@@ -409,13 +411,14 @@ bool client_unref(struct client **_client)
DLLIST_REMOVE(&client_fd_proxies, client);
i_assert(client_fd_proxies_count > 0);
client_fd_proxies_count--;
+} else {
+DLLIST_REMOVE(&destroyed_clients, client);
}
i_stream_unref(&client->input);
o_stream_unref(&client->output);
i_close_fd(&client->fd);
event_unref(&client->event);

-DLLIST_REMOVE(&destroyed_clients, client);
i_free(client->proxy_user);
i_free(client->proxy_master_user);
i_free(client->virtual_user);

Re: Latest git FATAL error

2021-04-24 Thread Joan Moreau
d120, 
callback=callback@entry=0x7f7a3336e340 ) at 
master-service.c:862

No locals.
#16 0x7f7a3336eb7d in login_binary_run (binary=, 
argc=, argv=) at main.c:562

service_flags = 
set_pool = 0x55d70a144de0
login_socket = 0x7f7a3337337d "login"
c = 
#17 0x7f7a32feeb25 in __libc_start_main () from /usr/lib/libc.so.6
No symbol table info available.
#18 0x55d70823a84e in _start ()
No symbol table info available.

On 2021-04-24 09:41, Aki Tuomi wrote:


On April 24, 2021 8:19:55 AM UTC, Joan Moreau  wrote:


Hello

On latest git of dovecot, I get

Apr 24 04:07:36 gjserver dovecot[857958]: imap-login: Panic: file
client-common.c: line 293 (client_disconnect): assertion failed:
(client->prev == NULL && client->next == NULL)

and the login process crashes

On 2.3.14, there are no problems

Hope it helps

JM


Hi!

Can you try

service imap-login {
chroot=
}

and see if you can get a core dump? gdb bt full output would be useful.

Aki

Latest git FATAL error

2021-04-24 Thread Joan Moreau

Hello

On latest git of dovecot, I get

Apr 24 04:07:36 gjserver dovecot[857958]: imap-login: Panic: file 
client-common.c: line 293 (client_disconnect): assertion failed: 
(client->prev == NULL && client->next == NULL)


and the login process crashes

On 2.3.14, there are no problems

Hope it helps

JM

Re: Virtual folders and mailbox_list_get_root_forced

2021-04-02 Thread Joan Moreau

Hello

Anyone on this ?

Thank you

On 2021-03-28 20:55, Joan Moreau wrote:


yes, this is getting to be a mess

Details can be seen here : 
https://github.com/grosjo/fts-xapian/issues/72


It shows that sometimes mailbox_list_get_root_forced return the generic 
INDEX value, sometimes the namespace value


thank you for your help

On 2021-03-28 12:03, Aki Tuomi wrote:
Hi!

mail_location = maildir:/var/vmail/%d/%n:LAYOUT=fs:INDEX=/var/mailindex

This is going to put everyone's indexes under /var/mailindex, without 
separating them properly. Might cause fun issues.


Can you give a concrete example of what your issue is?

Aki

On 28/03/2021 13:35 Joan Moreau  wrote:

Hi
Anyone on that ?
Thank you so much

On 2021-03-22 18:16, Joan Moreau wrote: Hi
The function mailbox_list_get_root_forced sometimes returns the first or 
the second value of the INDEX param for the same mailbox.


How to make sure this returns only the correct one of the corresponding 
mailbox ?


mail_location = maildir:/var/vmail/%d/%n:LAYOUT=fs:INDEX=/var/mailindex
namespace {
location = 
virtual:/nix/store/toto-virtual:INDEX=/var/vmail/%d/%n/virtual

prefix = virtual/
separator = /
subscriptions = no
}

Thank you

Re: Virtual folders and mailbox_list_get_root_forced

2021-03-28 Thread Joan Moreau

yes, this is getting to be a mess

Details can be seen here : 
https://github.com/grosjo/fts-xapian/issues/72


It shows that sometimes mailbox_list_get_root_forced return the generic 
INDEX value, sometimes the namespace value


thank you for your help

On 2021-03-28 12:03, Aki Tuomi wrote:


Hi!

mail_location = maildir:/var/vmail/%d/%n:LAYOUT=fs:INDEX=/var/mailindex

This is going to put everyone's indexes under /var/mailindex, without 
separating them properly. Might cause fun issues.


Can you give a concrete example of what your issue is?

Aki

On 28/03/2021 13:35 Joan Moreau  wrote:

Hi
Anyone on that ?
Thank you so much

On 2021-03-22 18:16, Joan Moreau wrote: Hi
The function mailbox_list_get_root_forced sometimes returns the first or 
the second value of the INDEX param for the same mailbox.


How to make sure this returns only the correct one of the corresponding 
mailbox ?


mail_location = maildir:/var/vmail/%d/%n:LAYOUT=fs:INDEX=/var/mailindex
namespace {
location = 
virtual:/nix/store/toto-virtual:INDEX=/var/vmail/%d/%n/virtual

prefix = virtual/
separator = /
subscriptions = no
}

Thank you

Re: Virtual folders and mailbox_list_get_root_forced

2021-03-28 Thread Joan Moreau

Hi

Anyone on that ?

Thank you so much

On 2021-03-22 18:16, Joan Moreau wrote:


Hi

The function mailbox_list_get_root_forced sometimes returns the first 
or the second value of the INDEX param for the same mailbox.


How to make sure this returns only the correct one of the corresponding 
mailbox ?


mail_location = maildir:/var/vmail/%d/%n:LAYOUT=fs:INDEX=/var/mailindex
namespace {
location = 
virtual:/nix/store/toto-virtual:INDEX=/var/vmail/%d/%n/virtual

prefix = virtual/
separator = /
subscriptions = no
}

Thank you

Virtual folders and mailbox_list_get_root_forced

2021-03-22 Thread Joan Moreau

Hi

The function mailbox_list_get_root_forced sometimes returns the first or 
the second value of the INDEX param for the same mailbox.


How to make sure this returns only the correct one of the corresponding 
mailbox ?


mail_location = maildir:/var/vmail/%d/%n:LAYOUT=fs:INDEX=/var/mailindex
namespace {
  location = 
virtual:/nix/store/toto-virtual:INDEX=/var/vmail/%d/%n/virtual

  prefix = virtual/
  separator = /
  subscriptions = no
}

Thank you

Re: Git / Compilation error

2021-03-04 Thread Joan Moreau

I do that each time

The problem arises on recent git only

On 2021-03-04 08:16, Aki Tuomi wrote:


Try running `autoreconf -vi`

Aki

On 04/03/2021 10:13 Joan Moreau  wrote:

I already have this file (dovecot compilation was working fine until 
recent git)

[root@gjserver dovecot]# ls -al /usr/share/aclocal/gettext.m4
-rw-r--r-- 1 root root 14488 Aug 4 2020 /usr/share/aclocal/gettext.m4

On 2021-03-04 08:09, Aki Tuomi wrote: You need to find package on your 
system which contains


/usr/share/aclocal/gettext.m4

or similar. This provides AM_ICONV.

Aki

On 04/03/2021 10:07 Joan Moreau  wrote:

Hello
I already have gettext
[root@gjserver dovecot]# pacman -S gettext
warning: gettext-0.21-1 is up to date -- reinstalling
resolving dependencies...
looking for conflicting packages...
Package (1) Old Version New Version Net Change
core/gettext 0.21-1 0.21-1 0.00 MiB

On 2021-03-04 08:03, Aki Tuomi wrote: You need to install gettext

Aki

On 04/03/2021 10:02 Joan Moreau  wrote:

Hello,
With latest git, I get the following error :
configure.ac:761: the top level
configure.ac:22: error: possibly undefined macro: AC_DEFINE
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
configure.ac:205: error: possibly undefined macro: AC_MSG_ERROR
configure.ac:247: error: possibly undefined macro: AS_IF
configure.ac:303: error: possibly undefined macro: AM_ICONV
configure.ac:434: error: possibly undefined macro: AC_CHECK_HEADER
configure:28073: error: possibly undefined macro: AC_CHECK_FUNC

Something I am missing?
Thank you

Re: Git / Compilation error

2021-03-04 Thread Joan Moreau
I already have this file (dovecot compilation was working fine until 
recent git)


[root@gjserver dovecot]# ls -al /usr/share/aclocal/gettext.m4
-rw-r--r-- 1 root root 14488 Aug  4  2020 /usr/share/aclocal/gettext.m4

On 2021-03-04 08:09, Aki Tuomi wrote:


You need to find package on your system which contains

/usr/share/aclocal/gettext.m4

or similar. This provides AM_ICONV.

Aki

On 04/03/2021 10:07 Joan Moreau  wrote:

Hello
I already have gettext
[root@gjserver dovecot]# pacman -S gettext
warning: gettext-0.21-1 is up to date -- reinstalling
resolving dependencies...
looking for conflicting packages...
Package (1) Old Version New Version Net Change
core/gettext 0.21-1 0.21-1 0.00 MiB

On 2021-03-04 08:03, Aki Tuomi wrote: You need to install gettext

Aki

On 04/03/2021 10:02 Joan Moreau  wrote:

Hello,
With latest git, I get the following error :
configure.ac:761: the top level
configure.ac:22: error: possibly undefined macro: AC_DEFINE
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
configure.ac:205: error: possibly undefined macro: AC_MSG_ERROR
configure.ac:247: error: possibly undefined macro: AS_IF
configure.ac:303: error: possibly undefined macro: AM_ICONV
configure.ac:434: error: possibly undefined macro: AC_CHECK_HEADER
configure:28073: error: possibly undefined macro: AC_CHECK_FUNC

Something I am missing?
Thank you

Re: Git / Compilation error

2021-03-04 Thread Joan Moreau

Hello

I already have gettext

[root@gjserver dovecot]# pacman -S gettext
warning: gettext-0.21-1 is up to date -- reinstalling
resolving dependencies...
looking for conflicting packages...

Package (1)   Old Version  New Version  Net Change

core/gettext  0.21-1   0.21-1 0.00 MiB

On 2021-03-04 08:03, Aki Tuomi wrote:


You need to install gettext

Aki


On 04/03/2021 10:02 Joan Moreau  wrote:

Hello,
With latest git, I get the following error :
configure.ac:761: the top level
configure.ac:22: error: possibly undefined macro: AC_DEFINE
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.
configure.ac:205: error: possibly undefined macro: AC_MSG_ERROR
configure.ac:247: error: possibly undefined macro: AS_IF
configure.ac:303: error: possibly undefined macro: AM_ICONV
configure.ac:434: error: possibly undefined macro: AC_CHECK_HEADER
configure:28073: error: possibly undefined macro: AC_CHECK_FUNC

Something I am missing?
Thank you

Git / Compilation error

2021-03-04 Thread Joan Moreau

Hello,

With latest git, I get the following error :

configure.ac:761: the top level
configure.ac:22: error: possibly undefined macro: AC_DEFINE
  If this token and others are legitimate, please use 
m4_pattern_allow.

  See the Autoconf documentation.
configure.ac:205: error: possibly undefined macro: AC_MSG_ERROR
configure.ac:247: error: possibly undefined macro: AS_IF
configure.ac:303: error: possibly undefined macro: AM_ICONV
configure.ac:434: error: possibly undefined macro: AC_CHECK_HEADER
configure:28073: error: possibly undefined macro: AC_CHECK_FUNC

Something I am missing?

Thank you

Re: fts_encoder

2021-02-11 Thread Joan Moreau

Created a PR

https://github.com/dovecot/core/pull/155

On 2021-02-11 13:25, Joan Moreau wrote:


Hello

Checking further, and putting logs a bit everywhere in the dovecot 
code, the core is sending FIRST the initial document (not decoded) then 
SECOND the decoded version


This is really weird, and the indexer then indexes a lot of binary 
crap


I am struggling to find where in the code this double call is made.

Anyone knows ?

On 2021-02-10 00:05, John Fawcett wrote:

On 09/02/2021 15:33, Joan Moreau wrote:

If I place the following code in the plugin 
fts_backend_xxx_update_build_more function (lucene, squat and xapian, 
as solr refuses to work properly on my setup)


{
char * s = i_strdup("EMPTY");
if(data != NULL) { i_free(s); s = i_strndup(data,20); }
i_info("fts_backend_update_build_more: data like '%s'",s);
i_free(s);
}

and if I send a PDF by email, the data shown in the log is "%PDF-1.7 "

so it does mean the decoder output is not properly transmitted to the 
plugin


Something is wrong in the data transmission

Joan

I too see something similar with fts_solr. I do see the raw %PDF string 
and PDF binary data being passed through to 
fts_backend_xxx_update_build_more function but I disagree with the 
conclusion you draw from it.


After the raw data I also see the decoded data, so at least in my case 
it is possible to see both the raw and decoded data in 
fts_backend_xxx_update_build_more function. In the rawlog I no longer 
see the binary data (but some blank lines), so something is filtering 
it. I do see the decoded data in the rawlog. I do get hits on the solr 
search for the decoded text.


John

Re: fts_encoder

2021-02-11 Thread Joan Moreau

Hello

Checking further, and putting logs a bit everywhere in the dovecot 
code, the core is sending FIRST the initial document (not decoded) then 
SECOND the decoded version


This is really weird, and the indexer then indexes a lot of binary crap

I am struggling to find where in the code this double call is made.

Anyone knows ?

On 2021-02-10 00:05, John Fawcett wrote:


On 09/02/2021 15:33, Joan Moreau wrote:

If I place the following code in the plugin 
fts_backend_xxx_update_build_more function (lucene, squat and xapian, 
as solr refuses to work properly on my setup)


{
char * s = i_strdup("EMPTY");
if(data != NULL) { i_free(s); s = i_strndup(data,20); }
i_info("fts_backend_update_build_more: data like '%s'",s);
i_free(s);
}

and if I send a PDF by email, the data shown in the log is "%PDF-1.7 "

so it does mean the decoder output is not properly transmitted to the 
plugin


Something is wrong in the data transmission


Joan

I too see something similar with fts_solr. I do see the raw %PDF string 
and PDF binary data being passed through to 
fts_backend_xxx_update_build_more function but I disagree with the 
conclusion you draw from it.


After the raw data I also see the decoded data, so at least in my case 
it is possible to see both the raw and decoded data in 
fts_backend_xxx_update_build_more function. In the rawlog I no longer 
see the binary data (but some blank lines), so something is filtering 
it. I do see the decoded data in the rawlog. I do get hits on the solr 
search for the decoded text.


John

Re: fts_encoder

2021-02-09 Thread Joan Moreau
If I place the following code in the plugin 
fts_backend_xxx_update_build_more function (lucene, squat and xapian, as 
solr refuses to work properly on my setup)


{
char * s = i_strdup("EMPTY");
if(data != NULL) { i_free(s); s = i_strndup(data,20); }
i_info("fts_backend_update_build_more: data like 
'%s'",s);

i_free(s);
}

and if I send a PDF by email, the data shown in the log is "%PDF-1.7 "

so it does mean the decoder output is not properly transmitted to the 
plugin


Something is wrong in the data transmission

On 2021-02-09 11:58, John Fawcett wrote:

On 08/02/2021 23:05, Stuart Henderson wrote: On 2021/02/08 21:33, Joan 
Moreau wrote: Yes, once again: the output of the decoder is fine, I also 
put logging inside the dovecot core to
check whether data is properly transmitted, and result is that it is 
(i.e. dovecot core

receives the proper output of pdftotext via the decoder

Now, that data is /not/ the one sent from dovecot core to the fts 
plugin (and this is the
same issue for solr and all other plugins) Seems that something is 
different with your setup than John's and mine

then, as fts_solr rawlog (which is just the http request split into
.in and .out files) has the decoded file for us.

Did you try with the actual fts_solr plugin so it's a direct comparison
with what we see? There is no need for a real solr server, just point 
it

at any http server (or I guess netcat listening on a port will also do)
with

mail_plugins = fts fts_solr

plugin {
fts_autoindex = yes
fts = solr
fts_solr = url=http://127.0.0.1:80/ rawlog_dir=/tmp/solr
}

If that is not showing decoded for you then I suppose there's some
problem on the way into/through fts. And if it does show as decoded
then perhaps fts_solr is doing something slightly different than the
places you're examining in fts and your plugin, and that might give
a point to work backwards from.
 I'd also recommend Joan to look into some of the potential 
configuration

issues I mentioned in my first reply and if the problem persists, post
some clear evidence.

John

Re: fts_encoder

2021-02-08 Thread Joan Moreau
Yes, once again: the output of the decoder is fine. I also put logging inside 
the dovecot core to check whether data is properly transmitted, and the 
result is that it is (i.e. dovecot core receives the proper output of 
pdftotext via the decoder


Now, that data is /not/ the one sent from dovecot core to the fts 
plugin (and this is the same issue for solr and all other plugins)


Of course, the stemming will show good results (as PDF content will be 
stemmed) but the problem does remain.


How to make sure the data sent to the FTS plugins (xapian, solr, 
whatever...) is the output of the decoder and /not/ the original 
data ?


On 2021-02-08 21:11, Stuart Henderson wrote:


On 2021-02-08, Joan Moreau  wrote:

Well, in the function xxx_build_more of FTS plugin, the data received 
is

the original PDF, not the output of pdftotext

Can you clarify where do you put your log in the solr plugin , so I 
can

check the situation in the xapian plugin ?


The log is particular to fts_solr, you set it with e.g.

"fts_solr = url=http://127.0.0.1:8983/solr/dovecot/ 
rawlog_dir=/tmp/solr"


Confirmed it works for me, i.e. passes text from inside the pdf, and 
not

the whole pdf itself.

Did you check that decode2text.sh works ok on your system (when running
as the relevant uid)?

cat foo.pdf | sudo -u dovecot /usr/libexec/dovecot/decode2text.sh 
application/pdf

Re: fts_encoder

2021-02-08 Thread Joan Moreau
Yes, once again: the output of the decoder is fine. I also put logging inside 
the dovecot core to check whether data is properly transmitted, and it is 
(i.e. dovecot core receives the proper output of pdftotext via the 
decoder


Now, that data is /not/ the one sent from dovecot core to the fts 
plugin (and this is the same issue for solr and all other plugins)


Of course, the stemming will show good results, but the problem does 
remain.


How to make sure the data sent to the FTS plugins (xapian, solr, 
whatever...) is the output of the decoder and /not/ the original 
data ?


On 2021-02-08 21:11, Stuart Henderson wrote:


On 2021-02-08, Joan Moreau  wrote:

Well, in the function xxx_build_more of FTS plugin, the data received 
is

the original PDF, not the output of pdftotext

Can you clarify where do you put your log in the solr plugin , so I 
can

check the situation in the xapian plugin ?


The log is particular to fts_solr, you set it with e.g.

"fts_solr = url=http://127.0.0.1:8983/solr/dovecot/ 
rawlog_dir=/tmp/solr"


Confirmed it works for me, i.e. passes text from inside the pdf, and 
not

the whole pdf itself.

Did you check that decode2text.sh works ok on your system (when running
as the relevant uid)?

cat foo.pdf | sudo -u dovecot /usr/libexec/dovecot/decode2text.sh 
application/pdf

Re: fts_encoder

2021-02-08 Thread Joan Moreau
Well, in the function xxx_build_more of FTS plugin, the data received is 
the original PDF, not the output of pdftotext


Can you clarify where do you put your log in the solr plugin , so I can 
check the situation in the xapian plugin ?


On 2021-02-08 17:34, John Fawcett wrote:


On 08/02/2021 15:22, Joan Moreau wrote:


Well, thank you for the answer, but the actual issue is that data sent
by the decoder (stipulated in the conf file) is properly collected by
dovecot core, but /not/ sent to the plugin : the plugin receives the
original data.

This is not linked to a particular plugin (xapian, solr, squat, etc..)
but seems to be a general issue of dovecot core


Hi Joan

as far as I can see there's not a general issue in the dovecot core 
with

using the decoder. It works for me. I see the text extracted from PDF
sent to solr (I enable raw_log feature to see the actual data going 
over

) Also when I query solr I get a search hit for attachment text.

John

Re: fts_encoder

2021-02-08 Thread Joan Moreau
Well, thank you for the answer, but the actual issue is that data sent 
by the decoder (stipulated in the conf file) is properly collected by 
dovecot core, but /not/ sent to the plugin : the plugin receives the 
original data.


This is not linked to a particular plugin (xapian, solr, squat, etc..) 
but seems to be a general issue of dovecot core


On 2021-02-08 01:03, John Fawcett wrote:


On 07/02/2021 18:51, Joan Moreau wrote:

more info : the function fts_parser_script_more in 
plugins/fts/fts-parser.c properly reads the output of the script


still, the data is not sent to the FTS plugins (xapian or any other)

On 2021-02-07 17:37, Joan Moreau wrote:

more info : I am running dovecot git version

On 2021-02-07 17:15, Joan Moreau wrote:

a bit more on this, adding log in the decode2text.sh, I can see that 
pdftotext outputs the right data, but that data is /not/ transmitted to 
the fts plugin for indexing (only the original pdf code is)


On 2021-02-07 17:00, Joan Moreau wrote:

Hello,

I am trying to deal properly with email attachments in fts-xapian 
plugins.


I tried the default script with a PDF file.

The data I receive in the fts plugin part ("xxx_build_more") is the 
original document, not the output of pdftotext


Is there anything I am missing ?

Here my config:

plugin {
plugin = fts_xapian managesieve sieve

fts = xapian
fts_xapian = partial=2 full=20 verbose=1 attachments=1

fts_autoindex = yes
fts_enforced = yes
fts_autoindex_exclude = \Trash
fts_autoindex_exclude2 = \Drafts

fts_decoder = decode2text

sieve = /data/mail/%d/%n/local.sieve
sieve_after = /data/mail/after.sieve
sieve_before = /data/mail/before.sieve
sieve_dir = /data/mail/%d/%n/sieve
sieve_global_dir = /data/mail
sieve_global_path = /data/mail/global.sieve
}

...

service decode2text {
executable = script /usr/libexec/dovecot/decode2text.sh
user = dovecot
unix_listener decode2text {
mode = 0666
}
}

Thank you


Joan

I'm not sure I can be much use for xapian, but looking at your 
configuration I did notice some differences with the documentation. I 
don't know if they are relevant to the issue you're seeing.


First of all I don't see

mail_plugins = fts

plugin = fts

settings which are both mentioned in the xapian documentation.

Also the documentation states that attachments=1 can only index text 
attachments. Maybe you should be using attachments=0 and let fts_decoder 
handle the attachments.


Failing that, I can only advise to turn on some debugging and see what 
that brings.


best regards

John

Re: fts_encoder

2021-02-07 Thread Joan Moreau
more info: the function fts_parser_script_more in 
plugins/fts/fts-parser.c properly reads the output of the script


still, the data is not sent to the FTS plugins (xapian or any other)

On 2021-02-07 17:37, Joan Moreau wrote:


more info : I am running dovecot git version

On 2021-02-07 17:15, Joan Moreau wrote:

a bit more on this: adding logging in decode2text.sh, I can see that 
pdftotext outputs the right data, but that data is /not/ transmitted to 
the fts plugin for indexing (only the original pdf code is)


On 2021-02-07 17:00, Joan Moreau wrote:

Hello,

I am trying to deal properly with email attachments in the fts-xapian 
plugin.


I tried the default script with a PDF file.

The data I receive in the fts plugin part ("xxx_build_more") is the 
original document, not the output of pdftotext


Is there anything I am missing ?

Here my config:

plugin {
plugin = fts_xapian managesieve sieve

fts = xapian
fts_xapian = partial=2 full=20 verbose=1 attachments=1

fts_autoindex = yes
fts_enforced = yes
fts_autoindex_exclude = \Trash
fts_autoindex_exclude2 = \Drafts

fts_decoder = decode2text

sieve = /data/mail/%d/%n/local.sieve
sieve_after = /data/mail/after.sieve
sieve_before = /data/mail/before.sieve
sieve_dir = /data/mail/%d/%n/sieve
sieve_global_dir = /data/mail
sieve_global_path = /data/mail/global.sieve
}

...

service decode2text {
executable = script /usr/libexec/dovecot/decode2text.sh
user = dovecot
unix_listener decode2text {
mode = 0666
}
}

Thank you

Re: fts_encoder

2021-02-07 Thread Joan Moreau

more info : I am running dovecot git version

On 2021-02-07 17:15, Joan Moreau wrote:

a bit more on this: adding logging in decode2text.sh, I can see that 
pdftotext outputs the right data, but that data is /not/ transmitted to 
the fts plugin for indexing (only the original pdf code is)


On 2021-02-07 17:00, Joan Moreau wrote:


Hello,

I am trying to deal properly with email attachments in the fts-xapian 
plugin.


I tried the default script with a PDF file.

The data I receive in the fts plugin part ("xxx_build_more") is the 
original document, not the output of pdftotext


Is there anything I am missing ?

Here my config:

plugin {
plugin = fts_xapian managesieve sieve

fts = xapian
fts_xapian = partial=2 full=20 verbose=1 attachments=1

fts_autoindex = yes
fts_enforced = yes
fts_autoindex_exclude = \Trash
fts_autoindex_exclude2 = \Drafts

fts_decoder = decode2text

sieve = /data/mail/%d/%n/local.sieve
sieve_after = /data/mail/after.sieve
sieve_before = /data/mail/before.sieve
sieve_dir = /data/mail/%d/%n/sieve
sieve_global_dir = /data/mail
sieve_global_path = /data/mail/global.sieve
}

...

service decode2text {
executable = script /usr/libexec/dovecot/decode2text.sh
user = dovecot
unix_listener decode2text {
mode = 0666
}
}

Thank you

Re: fts_encoder

2021-02-07 Thread Joan Moreau
a bit more on this: adding logging in decode2text.sh, I can see that 
pdftotext outputs the right data, but that data is /not/ transmitted to 
the fts plugin for indexing (only the original pdf code is)


On 2021-02-07 17:00, Joan Moreau wrote:


Hello,

I am trying to deal properly with email attachments in the fts-xapian 
plugin.


I tried the default script with a PDF file.

The data I receive in the fts plugin part ("xxx_build_more") is the 
original document, not the output of pdftotext


Is there anything I am missing ?

Here my config:

plugin {
plugin = fts_xapian managesieve sieve

fts = xapian
fts_xapian = partial=2 full=20 verbose=1 attachments=1

fts_autoindex = yes
fts_enforced = yes
fts_autoindex_exclude = \Trash
fts_autoindex_exclude2 = \Drafts

fts_decoder = decode2text

sieve = /data/mail/%d/%n/local.sieve
sieve_after = /data/mail/after.sieve
sieve_before = /data/mail/before.sieve
sieve_dir = /data/mail/%d/%n/sieve
sieve_global_dir = /data/mail
sieve_global_path = /data/mail/global.sieve
}

...

service decode2text {
executable = script /usr/libexec/dovecot/decode2text.sh
user = dovecot
unix_listener decode2text {
mode = 0666
}
}

Thank you

fts_encoder

2021-02-07 Thread Joan Moreau

Hello,

I am trying to deal properly with email attachments in the fts-xapian 
plugin.


I tried the default script with a PDF file.

The data I receive in the fts plugin part ("xxx_build_more") is the 
original document, not the output of pdftotext


Is there anything I am missing ?

Here my config:

plugin {
plugin = fts_xapian managesieve sieve

fts = xapian
fts_xapian = partial=2 full=20 verbose=1 attachments=1

fts_autoindex = yes
fts_enforced = yes
fts_autoindex_exclude = \Trash
fts_autoindex_exclude2 = \Drafts

fts_decoder = decode2text

sieve = /data/mail/%d/%n/local.sieve
sieve_after = /data/mail/after.sieve
sieve_before = /data/mail/before.sieve
sieve_dir = /data/mail/%d/%n/sieve
sieve_global_dir = /data/mail
sieve_global_path = /data/mail/global.sieve
}

...

service decode2text {
   executable = script /usr/libexec/dovecot/decode2text.sh
   user = dovecot
   unix_listener decode2text {
 mode = 0666
   }
}

Thank you

Re: Dovecot FTS not using plugins

2021-01-11 Thread Joan Moreau

Sorry, I always forget that dovecot does not do multi-threading (why?)

The process was waiting for another process.

On 2021-01-11 14:57, Aki Tuomi wrote:


On 11/01/2021 16:51 Joan Moreau  wrote:

Hello,
With the recent git version of dovecot, I can see that the FTS does not 
use the configured plugin anymore, but searches the mailbox 
directly on the spot (which is of course very painful).
Is there a change in the configuration file in order to recover the 
old behavior? Or has something else changed?

Thank you
Joan


Can you share `doveconf -n` and output of `doveadm -Dv search -u victim 
text foobar`?


Aki

Dovecot FTS not using plugins

2021-01-11 Thread Joan Moreau

Hello,

With the recent git version of dovecot, I can see that the FTS does not use 
the configured plugin anymore, but searches the mailbox directly on 
the spot (which is of course very painful).


Is there a change in the configuration file in order to recover the old 
behavior? Or has something else changed?


Thank you

Joan

Re: vsz_limit

2020-11-06 Thread Joan Moreau

Sorry, my mistake: the conversion type was wrong.

So restrict_get_process_size is indeed consistent with vsz_limit

Now, for the memory usage of the process, getrusage gives only the /max/ 
of the memory used, not the current usage.


The only way I found is to fopen("/proc/self/status") and read the 
relevant line. Do you have a better way?


thank you

On 2020-11-06 14:16, Joan Moreau wrote:


OK, found it.

However, it returns a seemingly random number. Maybe I am missing something

On 2020-11-06 13:57, Aki Tuomi wrote:
Duh... src/lib/restrict-process-size.h

Should be in the installed include files as well,

/usr/include/dovecot/restrict-process-size.h

Aki

On 06/11/2020 15:56 Joan Moreau  wrote:

Hello
I can't find "src/lib/restrict.h". Is it in the dovecot source?

On 2020-11-06 13:20, Aki Tuomi wrote: Seems I had forgotten that you 
can use src/lib/restrict.h, in particular, restrict_get_process_size() 
to figure out the limit. You can combine this with getrusage to find 
out current usage.


Aki

On 06/11/2020 13:26 Joan Moreau  wrote:

yes, will do so.
It would be nice however to be able to access the actual dovecot config 
from the plugin side


On 2020-11-04 06:46, Aki Tuomi wrote: You could also add it as setting 
for the fts_xapian plugin parameters?


Aki

On 04/11/2020 08:42 Joan Moreau  wrote:

For machines with low memory, I would like to detect how much RAM 
remains available before starting to index a mail, so I can commit 
everything to disk before the RAM is exhausted (and breaks the process).
I tried to put a "fake" allocation to test if it fails (so it can fail 
separately, and I can check "if remaining RAM is above X"), but this is 
really not clean


On 2020-11-04 06:28, Aki Tuomi wrote:

On 04/11/2020 05:19 Joan Moreau  wrote:

Hello
I am looking for help around memory management
1 - How to get the current value of "vsz_limit" from inside a plugin 
(namely https://github.com/grosjo/fts-xapian/ ) , especially for 
indexer-worker
2 - Is there a macro or function in dovecot to get the remaining free 
memory from this vsz value ?

Thank you

Hi Joan,

I don't think there is a feasible way to access this setting as of now. 
Is there a reason you need this? We usually recommend setting 
vsz_limit=0 for indexer-worker.


Aki

Re: vsz_limit

2020-11-06 Thread Joan Moreau

OK, found it.

However, it returns a seemingly random number. Maybe I am missing something

On 2020-11-06 13:57, Aki Tuomi wrote:


Duh... src/lib/restrict-process-size.h

Should be in the installed include files as well,

/usr/include/dovecot/restrict-process-size.h

Aki

On 06/11/2020 15:56 Joan Moreau  wrote:

Hello
I can't find "src/lib/restrict.h". Is it in the dovecot source?

On 2020-11-06 13:20, Aki Tuomi wrote: Seems I had forgotten that you 
can use src/lib/restrict.h, in particular, restrict_get_process_size() 
to figure out the limit. You can combine this with getrusage to find 
out current usage.


Aki

On 06/11/2020 13:26 Joan Moreau  wrote:

yes, will do so.
It would be nice however to be able to access the actual dovecot config 
from the plugin side


On 2020-11-04 06:46, Aki Tuomi wrote: You could also add it as setting 
for the fts_xapian plugin parameters?


Aki

On 04/11/2020 08:42 Joan Moreau  wrote:

For machines with low memory, I would like to detect how much RAM 
remains available before starting to index a mail, so I can commit 
everything to disk before the RAM is exhausted (and breaks the process).
I tried to put a "fake" allocation to test if it fails (so it can fail 
separately, and I can check "if remaining RAM is above X"), but this is 
really not clean


On 2020-11-04 06:28, Aki Tuomi wrote:

On 04/11/2020 05:19 Joan Moreau  wrote:

Hello
I am looking for help around memory management
1 - How to get the current value of "vsz_limit" from inside a plugin 
(namely https://github.com/grosjo/fts-xapian/ ) , especially for 
indexer-worker
2 - Is there a macro or function in dovecot to get the remaining free 
memory from this vsz value ?

Thank you

Hi Joan,

I don't think there is a feasible way to access this setting as of now. 
Is there a reason you need this? We usually recommend setting 
vsz_limit=0 for indexer-worker.


Aki

Re: vsz_limit

2020-11-06 Thread Joan Moreau

Hello

I can't find "src/lib/restrict.h". Is it in the dovecot source?

On 2020-11-06 13:20, Aki Tuomi wrote:

Seems I had forgotten that you can use src/lib/restrict.h, in 
particular, restrict_get_process_size() to figure out the limit. You 
can combine this with getrusage to find out current usage.


Aki

On 06/11/2020 13:26 Joan Moreau  wrote:

yes, will do so.
It would be nice however to be able to access the actual dovecot config 
from the plugin side


On 2020-11-04 06:46, Aki Tuomi wrote: You could also add it as setting 
for the fts_xapian plugin parameters?


Aki

On 04/11/2020 08:42 Joan Moreau  wrote:

For machines with low memory, I would like to detect how much RAM 
remains available before starting to index a mail, so I can commit 
everything to disk before the RAM is exhausted (and breaks the process).
I tried to put a "fake" allocation to test if it fails (so it can fail 
separately, and I can check "if remaining RAM is above X"), but this is 
really not clean


On 2020-11-04 06:28, Aki Tuomi wrote:

On 04/11/2020 05:19 Joan Moreau  wrote:

Hello
I am looking for help around memory management
1 - How to get the current value of "vsz_limit" from inside a plugin 
(namely https://github.com/grosjo/fts-xapian/ ) , especially for 
indexer-worker
2 - Is there a macro or function in dovecot to get the remaining free 
memory from this vsz value ?

Thank you

Hi Joan,

I don't think there is a feasible way to access this setting as of now. 
Is there a reason you need this? We usually recommend setting 
vsz_limit=0 for indexer-worker.


Aki

Fatal: write(indexer) failed: Resource temporarily unavailable

2020-11-06 Thread Joan Moreau

Hello

I have this issue for Xapian plugin:

https://github.com/grosjo/fts-xapian/issues/62

But I am not sure where can it comes from.

Is dovecot calling some specific function in the plugin after the init 
that would cause such an error?


Is doveadm dealing differently with plugins than dovecot core does?

Have there been any recent changes in the plugin framework that would 
lead to such an error?


Thank you

Re: vsz_limit

2020-11-06 Thread Joan Moreau

yes, will do so.

It would be nice however to be able to access the actual dovecot config 
from the plugin side


On 2020-11-04 06:46, Aki Tuomi wrote:


You could also add it as setting for the fts_xapian plugin parameters?

Aki

On 04/11/2020 08:42 Joan Moreau  wrote:

For machines with low memory, I would like to detect how much RAM 
remains available before starting to index a mail, so I can commit 
everything to disk before the RAM is exhausted (and breaks the process).
I tried to put a "fake" allocation to test if it fails (so it can fail 
separately, and I can check "if remaining RAM is above X"), but this is 
really not clean


On 2020-11-04 06:28, Aki Tuomi wrote:

On 04/11/2020 05:19 Joan Moreau  wrote:

Hello
I am looking for help around memory management
1 - How to get the current value of "vsz_limit" from inside a plugin 
(namely https://github.com/grosjo/fts-xapian/ ) , especially for 
indexer-worker
2 - Is there a macro or function in dovecot to get the remaining free 
memory from this vsz value ?

Thank you

Hi Joan,

I don't think there is a feasible way to access this setting as of now. 
Is there a reason you need this? We usually recommend setting 
vsz_limit=0 for indexer-worker.


Aki

Re: vsz_limit

2020-11-04 Thread Joan Moreau
For machines with low memory, I would like to detect how much RAM 
remains available before starting to index a mail, so I can commit 
everything to disk before the RAM is exhausted (and breaks the process).


I tried to put a "fake" allocation to test if it fails (so it can fail 
separately, and I can check "if remaining RAM is above X"), but this is 
really not clean


On 2020-11-04 06:28, Aki Tuomi wrote:


On 04/11/2020 05:19 Joan Moreau  wrote:

Hello
I am looking for help around memory management
1 - How to get the current value of "vsz_limit" from inside a plugin 
(namely https://github.com/grosjo/fts-xapian/ ) , especially for 
indexer-worker
2 - Is there a macro or function in dovecot to get the remaining free 
memory from this vsz value ?

Thank you


Hi Joan,

I don't think there is a feasible way to access this setting as of now. 
Is there a reason you need this? We usually recommend setting 
vsz_limit=0 for indexer-worker.


Aki

vsz_limit

2020-11-03 Thread Joan Moreau

Hello

I am looking for help around memory management

1 - How to get the current value of "vsz_limit" from inside a plugin 
(namely https://github.com/grosjo/fts-xapian/ ) , especially for 
indexer-worker


2 - Is there a macro or function in dovecot to get the remaining free 
memory from this vsz value ?


Thank you

Re: lazy_expunge and fts_autoindex

2020-08-29 Thread Joan Moreau

Maybe try

fts_autoindex_exclude = \EXPUNGED

On 2020-08-29 14:34, Gregory Heytings wrote:


Hi list,

I have both lazy_expunge and fts_autoindex activated (with fts-xapian), 
as follows:


plugin {
lazy_expunge = EXPUNGED/
}

plugin {
fts = xapian
fts_xapian = partial=2 full=20 attachments=1 verbose=0
fts_autoindex = yes
fts_enforced = yes
fts_autoindex_exclude = EXPUNGED
fts_autoindex_exclude2 = EXPUNGED/*
}

However, I still see "indexer-worker...: Info: Indexed 1 messages in 
EXPUNGED/..." in the dovecot log each time I expunge an email.  I tried 
various other settings for "fts_autoindex_exclude" (EXPUNGED alone, 
EXPUNGED + EXPUNGED/ + EXPUNGED/*, ...), but none of them seem to work.


Thanks for your help,

Gregory

Re: FTS-lucene errors : language not available for stemming

2020-05-21 Thread Joan Moreau
Hello 

Indexer does not run as root 

It runs as "mail_uid = xxx" (based on your config) 


dovecot-fts-xapian is easy to configure, but has a big downside compared
to solr in that the indexer runs as root.

Background operations

2020-04-03 Thread Joan Moreau
Hello, 


Moving a large number of emails from one folder to another creates
timeouts in Roundcube, due either to the very large number of emails
or to the indexing process increasing the processing time. 


Would it make sense to have a background thread to process orders
asynchronously, instead of executing them on the spot? 


For instance, orders (like moving or indexing) would be stored
in a backlog instead of being executed in place, and a background
operation would then process them. 

Makes any sense ? 


Thank you

Re: Strategy for fts

2020-02-15 Thread Joan Moreau

I updated fts-xapian to make it compatible with dovecot 2.2

On 2020-02-04 12:37, Peter Chiochetti wrote:

On 04.02.20 at 11:46, Francis Augusto Medeiros-Logeay wrote: 


Hi Philon,

Thanks a lot for your thoughts!

Can I ask you if using Solr improved things for you? I have a mailbox with 15 
years of e-mail and searching takes a long time.


Here, SOLR itself searches a quarter million mails in a split second and returns 
very good results. That is on a low-memory, average machine.

If you don't mind deviating from the standard, you can change the schema so headers (from, to) 
get indexed in the body text. That can help narrowing results.

Only problem is search through e.g. nested folders from IMAP: something like 
ESEARCH would be nice - https://tools.ietf.org/html/rfc6237

Peter

On 04.02.2020 09:39, Philon wrote: Hi Francis,

next to fts-solr there was fts-lucene. But the Lucene there seems
heavily outdated, which is why the Dovecot docs also suggest using Solr.
Elasticsearch is probably similar to Solr, but the latter is maintained
by the Dovecot team.

I started by downloading the Solr binary distribution to Debian with a
JRE preinstalled, and things were running after about 10 min. Yes, it's a
bit more complicated to find the schema and edit things like the header
size (in the tips section). It's been running quite nicely since then and
has needed zero maintenance. 
I will try again - I kept getting some weird errors, so I don't know if that's why I wasn't seeing much improvement.


As FTS indexes are separate, in an external Solr instance, I'd guess that
it won't interfere with dsync. What I don't know is if dsync'ing would
trigger indexing. This brings me to wonder how one could actually
replicate the Solr instance!? 
Good question. But what I thought about doing was to install FTS on my backup instance, and if things go fine, then I install an FTS instance on my production server - that is, if one doesn't interfere with the other.


I will give Solr another shot - my worries are mostly whether Solr is supported on 
ARM (my prod instance is running on ARM) - I know Elasticsearch has an ARM 
build.

I thought about the Xapian engine, but since it requires dovecot 2.3, I will 
have to wait.

Best,

Francis

Philon

On 31 Jan 2020, at 17:24, Francis Augusto Medeiros-Logeay  
wrote:

Hi there,

I successfully got my mail server replicated to another dovecot install using 
dsync, mainly for redundancy, and it works great.

I want to try to install fts, as some of the mailboxes have tens of thousands 
of messages, and it takes minutes to get some results when searching via IMAP 
on a Roundcube interface.

I want to experiment with fts-solr first, and firstly on my redundant server, 
i.e., not on my main dovecot install. Is it OK to do this? I ask because I am 
afraid of how this whole reindexing on the redundant install will affect the 
production server.

Also, any tips on something else than fts-solr? I tried it once, but it was so 
hard to get it right, so many configurations, java, etc., that I'd rather try 
something else. I also could try fts-elastic or something like that, but, 
again, having to maintain an elasticsearch install might use more resources 
than I think is worth. Any thoughts on that?

Best,

-- Francis

Re: FTS indexer-worker Panic

2019-12-15 Thread Joan Moreau

Please kindly file an issue on github, together with an example of an email
causing the panic 


On 2019-11-11 15:21, Yarema via dovecot wrote:


Set up fts_xapian over the weekend and re-indexed.
https://github.com/grosjo/fts-xapian

Tried to search my INBOX and got:


dovecot: indexer-worker: Panic: file charset-iconv.c: line 83

(charset_to_utf8_try): assertion failed: (srcleft <=
CHARSET_MAX_PENDING_BUF_SIZE)

What could I possibly have lurking in my INBOX to cause that ??

Re: dovecot full text search

2019-12-15 Thread Joan Moreau
Hi 


The first run of indexing on a large existing mailbox is indeed slow,
and I would run "doveadm index -A -q \*" before putting the system in
production. 

Besides the RAM disk, what kind of solution would you suggest? 


On 2019-12-10 19:28, Wojciech Puchar via dovecot wrote:


Where do write ops take place?


to the xapian index  subdirectory


Maybe mount that path to a RAM disk rather than looking for another solution.

not a solution to the problem, but a workaround

Am 10.12.2019 um 15:50 schrieb Wojciech Puchar via dovecot 
:

what FTS module should I use instead of squat, which is probably no longer 
supported, or perhaps no longer available at all?

I want to upgrade my dovecot installation. It currently uses squat, but I found 
it often crashes on FTS on large mailboxes.

I found the "xapian" addon for dovecot, and while it works excellently AFTER the database 
is created, I found it needs 20 or so minutes to index less than 10GB of mails, and 
while doing this it generates many tens of megabytes/s of constant write traffic on its 
database files.

Excellent way of killing an SSD.

something must be broken.

my config is

plugin {
plugin = fts fts_xapian

fts = xapian
fts_xapian = partial=2 full=20 verbose=0

fts_autoindex = yes
fts_enforced = yes

#   fts_autoindex_exclude = \Junk
#   fts_autoindex_exclude2 = \Trash
}

any ideas?

Re: Bug: indexer-worker segfaults with fts_xapian 1.2.5

2019-12-15 Thread Joan Moreau
It seems this also comes from the old version of gcc/stdlib. 


Please kindly file an "issue" on github
https://github.com/grosjo/fts-xapian/issues 


On 2019-12-15 21:35, Martynas Bendorius wrote:


Core was generated by `dovecot/indexer-worker'.
Program terminated with signal 11, Segmentation fault.
#0  0x7f30f7ad056d in __exchange_and_add (__val=-1, 
__mem=0xfff8)
at 
/usr/src/debug/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/x86_64-redhat-linux/libstdc++-v3/include/ext/atomicity.h:49
49  { return __atomic_fetch_add(__mem, __val, __ATOMIC_ACQ_REL); }

(gdb) bt full
#0  0x7f30f7ad056d in __exchange_and_add (__val=-1, 
__mem=0xfff8)
at 
/usr/src/debug/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/x86_64-redhat-linux/libstdc++-v3/include/ext/atomicity.h:49
No locals.
#1  __exchange_and_add_dispatch (__val=-1, __mem=0xfff8)
at 
/usr/src/debug/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/x86_64-redhat-linux/libstdc++-v3/include/ext/atomicity.h:82
No locals.
#2  std::string::_Rep::_M_dispose (this=0xffe8, __a=...)
at 
/usr/src/debug/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/x86_64-redhat-linux/libstdc++-v3/include/bits/basic_string.h:246
No locals.
#3  0x7f30f7b3407e in _M_dispose (__a=..., this=)
at 
/usr/src/debug/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/x86_64-redhat-linux/libstdc++-v3/include/bits/basic_string.tcc:254
No locals.
#4  std::string::assign (this=this@entry=0x55c7b93e0168, __str="return-path")
at 
/usr/src/debug/gcc-4.8.5-20150702/obj-x86_64-redhat-linux/x86_64-redhat-linux/libstdc++-v3/include/bits/basic_string.tcc:250
__a = {<__gnu_cxx::new_allocator> = {}, }
#5  0x7f30fa8776ce in operator= (__str="return-path", this=0x55c7b93e0168) 
at /usr/include/c++/4.8.2/bits/basic_string.h:547
No locals.
#6  fts_backend_xapian_update_set_build_key (_ctx=0x55c7b93e0140, 
key=0x7ffc1ba2c380) at fts-backend-xapian.cpp:303
ctx = 0x55c7b93e0140
i = 
f2 = "return-path"
backend = 
field = 0x55c7b943fb5f "RETURN-PATH"
j = 11
#7  0x7f30fb0b8cda in fts_backend_update_set_build_key (ctx=0x55c7b93e0140, 
key=key@entry=0x7ffc1ba2c380) at fts-api.c:174
__func__ = "fts_backend_update_set_build_key"
#8  0x7f30fb0ba243 in fts_build_mail_header (block=0x7ffc1ba2c340, 
block=0x7ffc1ba2c340, ctx=0x7ffc1ba2c3b0) at fts-build-mail.c:173
hdr = 
key = {uid = 6006200, type = FTS_BACKEND_BUILD_KEY_HDR, part = 0x55c7b935eed0, hdr_name = 0x55c7b943fb5f "RETURN-PATH", body_content_type = 0x0, 
body_content_disposition = 0x0}

ret = 
#9  fts_build_mail_real (may_need_retry_r=0x7ffc1ba2c2f3, 
retriable_err_msg_r=0x7ffc1ba2c300, mail=0x55c7b93dcff8, 
update_ctx=0x55c7b93e0140) at fts-build-mail.c:568
block = {part = 0x55c7b935eed0, hdr = 0x55c7b93d7f18, data = 0x55c7b938b8a8 "", 
size = 0}
ret = 
input = 0x55c7b93d78c0
raw_block = {part = 0x55c7b935eed0, hdr = 0x55c7b93d80b0, data = 0x0, size = 0}
skip_body = false
ctx = {mail = 0x55c7b93dcff8, update_ctx = 0x55c7b93e0140, content_type = 0x0, content_disposition = 0x0, body_parser = 0x0, word_buf = 0x0, 

pending_input = 0x0, cur_user_lang = 0x0}
prev_part = 0x55c7b935eed0
parser = 0x55c7b93d7b38
decoder = 0x55c7b93d7f00
parts = 0x7ffc1ba2c414
body_part = false
body_added = false
binary_body = 
error = 0x5ba5b8 
#10 fts_build_mail (update_ctx=0x55c7b93e0140, mail=mail@entry=0x55c7b93dcff8) 
at fts-build-mail.c:617
_data_stack_cur_id = 6
attempts = 2
retriable_err_msg = 0x11e900729 
may_need_retry = false
#11 0x7f30fb0c1102 in fts_mail_index (_mail=0x55c7b93dcff8) at 
fts-storage.c:550
ft = 0x55c7b9396880
flist = 0x55c7b938b8a8
pmail = 0x55c7b93dcff8
#12 fts_mail_precache (_mail=0x55c7b93dcff8) at fts-storage.c:571
_data_stack_cur_id = 5
mail = 0x55c7b93dcff8
fmail = 
ft = 0x55c7b9396880
__func__ = "fts_mail_precache"
#13 0x7f30fc1b7d64 in mail_precache (mail=0x55c7b93dcff8) at mail.c:432
_data_stack_cur_id = 4
p = 0x55c7b93dcff8
#14 0x55c7b8b4c844 in index_mailbox_precache (conn=, 
box=0x55c7b938ef18) at master-connection.c:102
counter = 0
max = 21569
percentage_sent = 0
storage = 
status = {messages = 21569, recent = 0, unseen = 0, uidvalidity = 1462447525, uidnext = 6027769, first_unseen_seq = 0, first_recent_uid = 6027758, 
last_cached_seq = 0, highest_modseq = 0, highest_pvt_modseq = 0, keywords = 0x0, permanent_flags = 0, flags = 0, permanent_keywords = false, 
allow_new_keywords = false, nonpermanent_modseqs = false, no_modseq_tracking = false, have_guids = true, have_save_guids = true, have_only_guid128 = false}

uids = 
username = 0x55c7b9385988 "m...@domain.com"
first_uid = 6006200
percentage_str = "\003\000\000"
percentage = 
error = MAIL_ERROR_NONE
trans = 0x55c7b9390480
ctx = 0x55c7b93d15f0
last_uid = 6006200
ret = 0
box_vname = 0x55c7b938f280 "INBOX.lfd.SSH login alerts"
errstr = 
search_args = 0x0
mail = 0x55c7b93dcff8
metadata = {guid = '\000' , virtual_size = 0, physical_size = 0, first_save_date = 0, 

Re: FTS Xapian -> FTS core issue

2019-06-09 Thread Joan Moreau via dovecot
The issue is not in the plugin, then. 


Maybe Aki or Timo knows where this bug comes from?

On 2019-06-07 14:09, Daniel Miller wrote:

Yes, latest git version.  

The logs show (as I read them) returned results - yet nothing shows in the client. The logs look the same (with different numbers) when querying "regular" folders - but results are shown in clients.  


--
Daniel 

On June 6, 2019 12:16:08 AM Joan Moreau  wrote: 

Hi 

Are you using the latest git version ? 

Which part exactly of your logs relates to "virtual folders do not work"? 

On 2019-06-05 13:08, Daniel Miller via dovecot wrote: 
Logs:


Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_f2857830c70c844e2f1d3bc41c5f
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR bcc:"dovecot" OR message-id:"dovecot" OR 
body:"dovecot")
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: 0 results in 1 ms
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_78544714f3f1ae5b9b0d3bda95b5
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR bcc:"dovecot" OR message-id:"dovecot" OR 
body:"dovecot")
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: 53 results in 40 ms
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_bdcb8e2172fadf4db50b3bc41c5f
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR bcc:"dovecot" OR message-id:"dovecot" OR 
body:"dovecot")
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: 0 results in 12 ms
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_be25c00241fedf4de00b3bc41c5f
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR bcc:"dovecot" OR message-id:"dovecot" OR 
body:"dovecot")
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: 3 results in 32 ms
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_a7e75820d9fadf4dd90b3bc41c5f
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR bcc:"dovecot" OR message-id:"dovecot" OR 
body:"dovecot")
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: 0 results in 11 ms
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_6fa78f2738cbdf4d007b3bc41c5f
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR bcc:"dovecot" OR message-id:"dovecot" OR 
body:"dovecot")
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: 0 results in 21 ms
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_6ea78f2738cbdf4d007b3bc41c5f
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR bcc:

Re: FTS Xapian

2019-06-09 Thread Joan Moreau via dovecot
Hi 

Are you using the latest git version ? 


Which part exactly of your logs relates to "virtual folders do not work"? 


On 2019-06-05 13:08, Daniel Miller via dovecot wrote:


Logs:

Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_f2857830c70c844e2f1d3bc41c5f
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR bcc:"dovecot" OR message-id:"dovecot" OR 
body:"dovecot")
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: 0 results in 1 ms
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_78544714f3f1ae5b9b0d3bda95b5
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR bcc:"dovecot" OR message-id:"dovecot" OR 
body:"dovecot")
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: 53 results in 40 ms
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_bdcb8e2172fadf4db50b3bc41c5f
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR bcc:"dovecot" OR message-id:"dovecot" OR 
body:"dovecot")
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: 0 results in 12 ms
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_be25c00241fedf4de00b3bc41c5f
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR bcc:"dovecot" OR message-id:"dovecot" OR 
body:"dovecot")
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: 3 results in 32 ms
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_a7e75820d9fadf4dd90b3bc41c5f
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR bcc:"dovecot" OR message-id:"dovecot" OR 
body:"dovecot")
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: 0 results in 11 ms
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_6fa78f2738cbdf4d007b3bc41c5f
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR bcc:"dovecot" OR message-id:"dovecot" OR 
body:"dovecot")
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: 0 results in 21 ms
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_6ea78f2738cbdf4d007b3bc41c5f
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR bcc:"dovecot" OR message-id:"dovecot" OR 
body:"dovecot")
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: 0 results in 1 ms
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_f2c3522c5d9b9d4f8847e130c744
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR to:"dovecot" OR cc:"dovecot" OR bcc:"dovecot" OR message-id:"dovecot" OR 
body:"dovecot")
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: 43 results in 51 ms
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
Opening DB (RO) 
/var/mail/amfes.com/dmiller/sdbox/xapian-indexes/db_58c26f3b9085134fe04b3bc41c5f
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: 
FTS Xapian: FLAG=AND
Jun 5 06:02:25 bubba dovecot: imap(dmil...@amfes.com)<25877>: FTS Xapian: Query= (subject:"dovecot" OR 
from:"dovecot" OR 

Re: FTS Xapian

2019-06-04 Thread Joan Moreau via dovecot

Hi

Can you post your dovecot conf file and the subset of the log files related 
to the issue?


thanks


On June 5, 2019 9:29:13 AM Daniel Miller via dovecot  
wrote:



For my primary namespace this is working fine - thanks to the developers!


It also appears to work great for shared folders as well.


But my virtual folders aren't returning results - at least not to the
client. The logs show FTS Xapian opening several DB files and getting
results - but nothing is being returned to client. Is this a config
issue on my side or is this a current limitation of the plugin?
--
Daniel






Further issues on FTS engine

2019-05-20 Thread Joan Moreau via dovecot
Hi, 


In addition to the long list of problems on the FTS previously
discussed, here is a new one: 


When I reset the indexes, the indexer-worker seems to parallelize the
indexing (which is good); however, the progress counter shown by "ps aux |
grep dove" does not move: 


dovecot 28549 0.0 0.0 8620 3920 ? S 06:20 0:00 dovecot/indexer [0
clients, 3 requests]
mailuse+ 28550 98.6 0.1 167412 86916 ? R 06:20 5:28
dovecot/indexer-worker [j...@grosjo.net  - 800/37755] 


Looking further, if I put a tracer in the backend, it processes the *same*
message several times in parallel, and therefore does not move very fast
through the global indexing of the box 

Any clue? 


Thanks

Re: FTS delays

2019-04-21 Thread Joan Moreau via dovecot

for instance, if I do a search from Roundcube, the inbox name is NOT
passed to the backend (which is normal) 


the same search from the command line adds the mailbox name IN ADDITION
to the mailbox * pointer 


However, a search passed from Roundcube queries the backend TWICE (first
with the AND flag, then with the OR flag) 


This is obviously a clear bug in the part calling the backend (even if
the backend may need improvements! that is really not the point here) 


Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Get last UID of Sent =
61714
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Get last UID of Sent =
61714
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query: FLAG=AND
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query(1/1): add
term(wilcard) : milao
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query(2/1): add
term(wilcard) : milao
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query(3/1): add
term(wilcard) : milao
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query(4/1): add
term(wilcard) : milao
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query(5/1): add
term(wilcard) : milao
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: SEARCH_OR
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: MATCH NOT : 0
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Testing if wildcard
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query: set GLOBAL (no
specified header)
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query : ( bcc:milao OR
body:milao OR cc:milao OR from:milao OR message-id:milao OR
subject:milao OR to:milao )
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query: 0 results in 0 ms
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query: FLAG=OR
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query(1): add
term(SUBJECT) : milao
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: SEARCH_HEADER
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: MATCH NOT : 0
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query(2): add term(TO) :
milao
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: SEARCH_HEADER
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: MATCH NOT : 0
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query(3): add term(FROM)
: milao
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: SEARCH_HEADER
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: MATCH NOT : 0
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query(4): add term(CC) :
milao
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: SEARCH_HEADER
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: MATCH NOT : 0
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query(5): add term(BCC) :
milao
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: SEARCH_HEADER
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: MATCH NOT : 0
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Testing if wildcard
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query : ( bcc:milao ) OR
( cc:milao ) OR ( from:milao ) OR ( subject:milao ) OR ( to:milao )
Apr 21 11:08:39 gjserver dovecot[14251]:
imap(j...@grosjo.net)<15709>: Query: 0 results in 0 ms

On 2019-04-21 11:56, Joan Moreau via dovecot wrote:

Timo, 

A little logic here: 

1 - the mailbox is passed by dovecot to the backend as a mailbox * pointer  , NOT as a search parameter. 

-> It works properly when entering a search from roundcube or evolution for instance. 

-> therefore this is a clear bug of the command line 

2 - the loop: Actually, the timeout occurs because the dovecot core is DISCARDING the results of the backend and doing its own search (i.e. in my example, it searches for "milan" in my inbox, which is huge, without even considering the backend results) 


-> This is an enormous error.

On 2019-04-21 11:29, Timo Sirainen wrote: It's because you're misunderstanding how the lookup() function works. It gets ALL the search parameters, including the "mailbox inbox". This is intentional, and not a bug. Two reasons being: 

1) The FTS plugin in theory could support indexing/searching any kinds of searches, not just regular word searches. So I didn't want to limit it unnecessari

Re: FTS delays

2019-04-21 Thread Joan Moreau via dovecot
Timo, 

A little logic here: 


1 - the mailbox is passed by dovecot to the backend as a mailbox *
pointer, NOT as a search parameter. 


-> It works properly when entering a search from Roundcube or Evolution,
for instance. 

-> therefore this is a clear bug of the command line 


2 - the loop: Actually, the timeout occurs because the dovecot core is
DISCARDING the results of the backend and doing its own search (i.e. in my
example, it searches for "milan" in my inbox, which is huge, without
even considering the backend results) 


-> This is an enormous error.

On 2019-04-21 11:29, Timo Sirainen wrote:

It's because you're misunderstanding how the lookup() function works. It gets ALL the search parameters, including the "mailbox inbox". This is intentional, and not a bug. Two reasons being: 

1) The FTS plugin in theory could support indexing/searching any kinds of searches, not just regular word searches. So I didn't want to limit it unnecessarily. 

2) Especially with "mailbox inbox" this is important when searching from virtual mailboxes. If you configure "All mails in all folders" virtual mailbox, you can do a search in there that restricts which physical mailboxes are matched. In this case the FTS backend can optimize this lookup so it can filter only the physical mailboxes that have matches, leaving the others out. And it can do this in a single query if all the mailboxes are in the same FTS index. 

So again: Your lookup() function needs to be changed to only use those search args that it really wants to search, and ignore the others. Use solr_add_definite_query_args() as the template. 

Also I see now the reason for the timeout problem. It's because you're not setting search_arg->match_always=TRUE. These need to be set for the search args that you're actually using to generate the Xapian query. If it's not set, then Dovecot core doesn't think that the arg was part of the FTS search and it processes it itself. Meaning that it opens all the emails and does the search the slow way, practically making the FTS lookup ignored. 

On 21 Apr 2019, at 19.50, Joan Moreau  wrote: 

No, the parsing is made by dovecot core; there is nothing the backend can do about it. The backend shall *never* receive this (whether it is buggy or not). 

Please, have a deeper look 


And the loop is a very big problem as it times out all the time (and once 
again, this is not in any of the backend  functions)

On 2019-04-21 10:42, Timo Sirainen via dovecot wrote: 
Inbox appears in the list of arguments, because fts_backend_xapian_lookup() is parsing the search args wrong. Not sure about the other issue. 

On 21 Apr 2019, at 19.31, Joan Moreau  wrote: 

For this first point, the problem is that dovecot core sends the request TWICE, and "Inbox" appears in the list of arguments! (inbox shall serve to select the right mailbox, never be sent to the backend) 

And even if this were solved, the dovecot core loops *after* the backend has returned the results 


# doveadm search -u j...@grosjo.net mailbox inbox text milan
doveadm(j...@grosjo.net): Info: Get last UID of INBOX = 315526
doveadm(j...@grosjo.net): Info: Get last UID of INBOX = 315526
doveadm(j...@grosjo.net): Info: Query: FLAG=AND
doveadm(j...@grosjo.net): Info: Query(1): add term(wilcard) : inbox
doveadm(j...@grosjo.net): Info: Query(2): add term(wilcard) : milan
doveadm(j...@grosjo.net): Info: Testing if wildcard
doveadm(j...@grosjo.net): Info: Query: set GLOBAL (no specified header)
doveadm(j...@grosjo.net): Info: Query : ( bcc:inbox OR body:inbox OR cc:inbox 
OR from:inbox OR message-id:inbox OR subject:inbox OR to:inbox ) AND ( 
bcc:milan OR body:milan OR cc:milan OR from:milan OR message-id:milan OR 
subject:milan OR to:milan )
DOVEADM(j...@grosjo.net): INFO: QUERY: 2 RESULTS IN 1 MS // THIS IS WHEN 
BACKEND HAS FOUND RESULTS AND STOPPED
d82b4b0f550d3859364495331209 847
d82b4b0f550d3859364495331209 1569
d82b4b0f550d3859364495331209 2260
d82b4b0f550d3859364495331209 2575
d82b4b0f550d3859364495331209 2811
d82b4b0f550d3859364495331209 2885
d82b4b0f550d3859364495331209 3038
D82B4B0F550D3859364495331209 3121 -> LOOPING FOREVER 

On 2019-04-21 09:57, Timo Sirainen via dovecot wrote: 
On 3 Apr 2019, at 20.30, Joan Moreau via dovecot  wrote: doveadm search -u j...@grosjo.net mailbox inbox text milan

output

doveadm(j...@grosjo.net): Info: Query : ( bcc:inbox OR body:inbox OR cc:inbox 
OR from:inbox OR message-id:inbox OR subject:inbox OR to:inbox OR uid:inbox ) 
AND ( bcc:milan OR body:milan OR cc:milan OR from:milan OR message-id:milan OR 
subject:milan OR to:milan OR uid:milan )

1 - The query is wrong 
That's because fts_backend_xapian_lookup() isn't anywhere close to being correct. Try to copy the logic based on solr_add_definite_query_args().

Re: FTS delays

2019-04-21 Thread Joan Moreau via dovecot

No, the parsing is made by dovecot core; there is nothing the backend can
do about it. The backend shall *never* receive this (whether it is buggy
or not). 

Please, have a deeper look 


And the loop is a very big problem as it times out all the time (and
once again, this is not in any of the backend  functions)

On 2019-04-21 10:42, Timo Sirainen via dovecot wrote:

Inbox appears in the list of arguments, because fts_backend_xapian_lookup() is parsing the search args wrong. Not sure about the other issue. 

On 21 Apr 2019, at 19.31, Joan Moreau  wrote: 

For this first point, the problem is that dovecot core sends the request TWICE, and "Inbox" appears in the list of arguments! (inbox shall serve to select the right mailbox, never be sent to the backend) 

And even if this were solved, the dovecot core loops *after* the backend has returned the results 


# doveadm search -u j...@grosjo.net mailbox inbox text milan
doveadm(j...@grosjo.net): Info: Get last UID of INBOX = 315526
doveadm(j...@grosjo.net): Info: Get last UID of INBOX = 315526
doveadm(j...@grosjo.net): Info: Query: FLAG=AND
doveadm(j...@grosjo.net): Info: Query(1): add term(wilcard) : inbox
doveadm(j...@grosjo.net): Info: Query(2): add term(wilcard) : milan
doveadm(j...@grosjo.net): Info: Testing if wildcard
doveadm(j...@grosjo.net): Info: Query: set GLOBAL (no specified header)
doveadm(j...@grosjo.net): Info: Query : ( bcc:inbox OR body:inbox OR cc:inbox 
OR from:inbox OR message-id:inbox OR subject:inbox OR to:inbox ) AND ( 
bcc:milan OR body:milan OR cc:milan OR from:milan OR message-id:milan OR 
subject:milan OR to:milan )
DOVEADM(j...@grosjo.net): INFO: QUERY: 2 RESULTS IN 1 MS // THIS IS WHEN 
BACKEND HAS FOUND RESULTS AND STOPPED
d82b4b0f550d3859364495331209 847
d82b4b0f550d3859364495331209 1569
d82b4b0f550d3859364495331209 2260
d82b4b0f550d3859364495331209 2575
d82b4b0f550d3859364495331209 2811
d82b4b0f550d3859364495331209 2885
d82b4b0f550d3859364495331209 3038
D82B4B0F550D3859364495331209 3121 -> LOOPING FOREVER 

On 2019-04-21 09:57, Timo Sirainen via dovecot wrote: 
On 3 Apr 2019, at 20.30, Joan Moreau via dovecot  wrote: doveadm search -u j...@grosjo.net mailbox inbox text milan

output

doveadm(j...@grosjo.net): Info: Query : ( bcc:inbox OR body:inbox OR cc:inbox 
OR from:inbox OR message-id:inbox OR subject:inbox OR to:inbox OR uid:inbox ) 
AND ( bcc:milan OR body:milan OR cc:milan OR from:milan OR message-id:milan OR 
subject:milan OR to:milan OR uid:milan )

1 - The query is wrong 
That's because fts_backend_xapian_lookup() isn't anywhere close to being correct. Try to copy the logic based on solr_add_definite_query_args().

Re: FTS delays

2019-04-21 Thread Joan Moreau via dovecot

Another example so you may understand the bug in dovecot
core: 

# doveadm search -u j...@grosjo.net mailbox SENT text milan 


doveadm(j...@grosjo.net): Info: Get last UID of Sent = 61707 -> CORRECTLY
ASSIGNED THE PROPER MAILBOX TO THE BACK END
doveadm(j...@grosjo.net): Info: Get last UID of Sent = 61707
doveadm(j...@grosjo.net): Info: Query: FLAG=AND
doveadm(j...@grosjo.net): Info: Query(1): add term(wilcard) : Sent -> WHY
IS "SENT" AMONG THE SEARCH PARAMETERS ???
doveadm(j...@grosjo.net): Info: Query(2): add term(wilcard) : milan
doveadm(j...@grosjo.net): Info: Testing if wildcard
doveadm(j...@grosjo.net): Info: Query: set GLOBAL (no specified header)
doveadm(j...@grosjo.net): Info: Query : ( bcc:milan OR body:milan OR
cc:milan OR from:milan OR message-id:milan OR subject:milan OR to:milan
) AND ( bcc:sent OR body:sent OR cc:sent OR from:sent OR message-id:sent
OR subject:sent OR to:sent )
doveadm(j...@grosjo.net): Info: Query: 7 results in 71 ms 

(AND SAME LOOP) 


In this example, "Sent" shall *never* be passed as an argument to the
backend (xapian, solr or any other), only the mailbox reference.
However, it appears in the search parameters 


On 2019-04-21 10:31, Joan Moreau via dovecot wrote:

For this first point, the problem is that dovecot core sends the request TWICE, and "Inbox" appears in the list of arguments! (inbox shall serve to select the right mailbox, never be sent to the backend) 

And even if this were solved, the dovecot core loops *after* the backend has returned the results 


# doveadm search -u j...@grosjo.net mailbox inbox text milan
doveadm(j...@grosjo.net): Info: Get last UID of INBOX = 315526
doveadm(j...@grosjo.net): Info: Get last UID of INBOX = 315526
doveadm(j...@grosjo.net): Info: Query: FLAG=AND
doveadm(j...@grosjo.net): Info: Query(1): add term(wilcard) : inbox
doveadm(j...@grosjo.net): Info: Query(2): add term(wilcard) : milan
doveadm(j...@grosjo.net): Info: Testing if wildcard
doveadm(j...@grosjo.net): Info: Query: set GLOBAL (no specified header)
doveadm(j...@grosjo.net): Info: Query : ( bcc:inbox OR body:inbox OR cc:inbox 
OR from:inbox OR message-id:inbox OR subject:inbox OR to:inbox ) AND ( 
bcc:milan OR body:milan OR cc:milan OR from:milan OR message-id:milan OR 
subject:milan OR to:milan )
DOVEADM(j...@grosjo.net): INFO: QUERY: 2 RESULTS IN 1 MS // THIS IS WHEN 
BACKEND HAS FOUND RESULTS AND STOPPED
d82b4b0f550d3859364495331209 847
d82b4b0f550d3859364495331209 1569
d82b4b0f550d3859364495331209 2260
d82b4b0f550d3859364495331209 2575
d82b4b0f550d3859364495331209 2811
d82b4b0f550d3859364495331209 2885
d82b4b0f550d3859364495331209 3038
D82B4B0F550D3859364495331209 3121 -> LOOPING FOREVER 

On 2019-04-21 09:57, Timo Sirainen via dovecot wrote: 
On 3 Apr 2019, at 20.30, Joan Moreau via dovecot  wrote: doveadm search -u j...@grosjo.net mailbox inbox text milan

output

doveadm(j...@grosjo.net): Info: Query : ( bcc:inbox OR body:inbox OR cc:inbox 
OR from:inbox OR message-id:inbox OR subject:inbox OR to:inbox OR uid:inbox ) 
AND ( bcc:milan OR body:milan OR cc:milan OR from:milan OR message-id:milan OR 
subject:milan OR to:milan OR uid:milan )

1 - The query is wrong 
That's because fts_backend_xapian_lookup() isn't anywhere close to being correct. Try to copy the logic based on solr_add_definite_query_args().

Re: FTS delays

2019-04-21 Thread Joan Moreau via dovecot

For this first point, the problem is that dovecot core sends the request
TWICE, and "Inbox" appears in the list of arguments! (inbox shall
serve to select the right mailbox, never be sent to the backend) 


And even if this were solved, the dovecot core loops *after* the
backend has returned the results 


# doveadm search -u j...@grosjo.net mailbox inbox text milan
doveadm(j...@grosjo.net): Info: Get last UID of INBOX = 315526
doveadm(j...@grosjo.net): Info: Get last UID of INBOX = 315526
doveadm(j...@grosjo.net): Info: Query: FLAG=AND
doveadm(j...@grosjo.net): Info: Query(1): add term(wilcard) : inbox
doveadm(j...@grosjo.net): Info: Query(2): add term(wilcard) : milan
doveadm(j...@grosjo.net): Info: Testing if wildcard
doveadm(j...@grosjo.net): Info: Query: set GLOBAL (no specified header)
doveadm(j...@grosjo.net): Info: Query : ( bcc:inbox OR body:inbox OR
cc:inbox OR from:inbox OR message-id:inbox OR subject:inbox OR to:inbox
) AND ( bcc:milan OR body:milan OR cc:milan OR from:milan OR
message-id:milan OR subject:milan OR to:milan )
DOVEADM(j...@grosjo.net): INFO: QUERY: 2 RESULTS IN 1 MS // THIS IS WHEN
BACKEND HAS FOUND RESULTS AND STOPPED
d82b4b0f550d3859364495331209 847
d82b4b0f550d3859364495331209 1569
d82b4b0f550d3859364495331209 2260
d82b4b0f550d3859364495331209 2575
d82b4b0f550d3859364495331209 2811
d82b4b0f550d3859364495331209 2885
d82b4b0f550d3859364495331209 3038
D82B4B0F550D3859364495331209 3121 -> LOOPING FOREVER 


On 2019-04-21 09:57, Timo Sirainen via dovecot wrote:

On 3 Apr 2019, at 20.30, Joan Moreau via dovecot  wrote: 


doveadm search -u j...@grosjo.net mailbox inbox text milan
output

doveadm(j...@grosjo.net): Info: Query : ( bcc:inbox OR body:inbox OR cc:inbox 
OR from:inbox OR message-id:inbox OR subject:inbox OR to:inbox OR uid:inbox ) 
AND ( bcc:milan OR body:milan OR cc:milan OR from:milan OR message-id:milan OR 
subject:milan OR to:milan OR uid:milan )

1 - The query is wrong


That's because fts_backend_xapian_lookup() isn't anywhere close to being 
correct. Try to copy the logic based on solr_add_definite_query_args().

Re: FTS delays

2019-04-20 Thread Joan Moreau via dovecot
I have no idea how to use git-bisect 


On 2019-04-15 15:31, Josef 'Jeff' Sipek wrote:


On Sun, Apr 14, 2019 at 21:09:54 +0800, Joan Moreau wrote:
... 


The "loop" part seems the most urgent: it breaks everything (search
times out 100% of the time)


Any luck with git-bisect?

Jeff.

On 2019-04-06 09:56, Joan Moreau via dovecot wrote:

For point 1, this is not "suboptimal", it is plain wrong (the results are damn 
wrong! and this is not related to the backend, but to the FTS logic in Dovecot core)

For point 2, this has been discussed already numerous times but without action. The dovecot core shall be the one re-submitting the emails to scan, not the backend trying to figure out where and which emails are to be re-scanned 

For point 3, I will do a bit of research in the existing code and will get back to you 

For point 4, this is random. The FTS backend (xapian, lucene, solr, whatever...) returns X, then dovecot core chooses to select only Y emails. This is a clear bug. 

On 2019-04-05 20:08, Josef 'Jeff' Sipek via dovecot wrote: 
On Fri, Apr 05, 2019 at 19:33:57 +0800, Joan Moreau via dovecot wrote: Hi 

If you plan to fix the FTS part of Dovecot, I will be very gratefull. 
I'm trying to figure out what is causing the 3rd issue you listed, so we can

decide how severe it is and therefore how quickly it needs to be fixed.  At
the moment we are unable to reproduce it, and therefore we cannot fix it.

Not sure this is related to any specific commit but rather the overall
design 
Ok.


The list of bugs so far 


1 - Double call to fts plugins with inconsistent parameters (first call
different from second call for the same request) 
Understood.  It is my understanding that this is simply suboptimal rather

than causing crashes/etc.

2 - The "Rescan" feature for now consists of deleting indexes. It shall
instead resend the emails to rescan to the fts plugin 
I'm not sure I follow.  The rescan operation is invoked on the fts backend

and it is up to the implementation to somehow ensure that after it is done
the fts index is up to date.  The easiest way to implement it is to simply
delete the fts index and re-index all the mails.  That is what currently
happens in the solr backend.

The lucene fts backend does a more complicated matching of the fts index
with the emails.  Finally, the deprecated squat backend seem to ignore the
rescan requests (its rescan vfunc is NULL).
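In admin terms, the rescan-then-reindex cycle described above is normally driven with doveadm. This is only a hedged sketch: the user name is a placeholder, and since the commands need a configured Dovecot, the wrapper below echoes them instead of executing them when doveadm is not installed.

```shell
# Hedged sketch; user@example.com is a placeholder. run() echoes the
# command instead of executing it when doveadm is not available here.
user=user@example.com
run() {
  if command -v doveadm >/dev/null 2>&1; then "$@"; else echo "would run: $*"; fi
}
# 1. Ask the configured FTS backend to reconcile (or drop) its index:
run doveadm fts rescan -u "$user"
# 2. Queue re-indexing of all mailboxes so the FTS index gets rebuilt:
run doveadm index -u "$user" -q '*'
```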

3 - the loop when body search (just do a "doveadm search -u user@domain
mailbox inbox text whatevertexte") 


Refer to my email to Timo on 2019-04-03 18:30 on the same thread for bug
details 

(especially the loop) 
This seems to be the most important of the 4 issues you listed, so I'd like

to focus on this one for now.

As I mentioned, we cannot reproduce this ourselves.  So, we need your help
to narrow things down.  Therefore, can you give us the commit hashes of
revisions that you know are good and which are bad?  You can use git-bisect
to narrow the range down.

4 - Most notably, I notice that header search usually does not care
about the fts plugin (even with fts_enforced) and relies on some internal
search, which is total nonsense 
You're right, that doesn't seem to make sense.  Can you provide a test case?


Jeff.

Let me know how I can help on those 4 points 


On 2019-04-05 18:37, Josef 'Jeff' Sipek wrote:

On Fri, Apr 05, 2019 at 17:45:36 +0800, Joan Moreau wrote: 

I am on master (very latest) 

No clue exactly when this problem appears, but 


1 - the "request twice the fts plugin instead of once" issue has always
been there (since my first RC release of fts-xapian) 
Ok, good to know.


2 - the body/text loop has appeared recently (maybe during the month of
March) 
Our testing doesn't seem to be able to reproduce this.  Can you try to

git-bisect this to find which commit broke it?

Thanks,

Jeff.

On 2019-04-05 16:36, Josef 'Jeff' Sipek via dovecot wrote:

On Wed, Apr 03, 2019 at 19:02:52 +0800, Joan Moreau via dovecot wrote: 

issue seems in the Git version : 
Which git revision?


Before you updated to the broken revision, which revision/version were you
running?

Can you try it with 5f6e39c50ec79ba8847b2fdb571a9152c71cd1b6 (the commit
just before the fts_enforced=body introduction)?  That's the only recent fts
change.

Thanks,

Jeff.

On 2019-04-03 18:58, @lbutlr via dovecot wrote:

On 3 Apr 2019, at 04:30, Joan Moreau via dovecot  wrote: 

doveadm search -u j...@grosjo.net mailbox inbox text milan 
Did that search over my list mail and got 83 results, not able to duplicate your issue.


What version of dovecot and have you tried to reindex?

dovecot-2.3.5.1 here.
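A minimal test case for point 4 above (header searches seemingly bypassing the FTS plugin) could compare a header search with a body search under doveadm's debug flag and check which one produces FTS backend lookups in the debug output. The user below is a placeholder, and the commands are only echoed when doveadm is not installed on this host.

```shell
# Placeholder user; -D enables doveadm debug output, which makes FTS
# backend lookups visible. run() echoes the command when doveadm is absent.
user=user@example.com
run() {
  if command -v doveadm >/dev/null 2>&1; then "$@"; else echo "would run: $*"; fi
}
# Header-only search: per the report, this may never reach the FTS backend
run doveadm -D search -u "$user" mailbox inbox from milan
# Body search: this one is expected to trigger an FTS backend lookup
run doveadm -D search -u "$user" mailbox inbox body milan
```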

Re: FTS delays

2019-04-14 Thread Joan Moreau via dovecot

I have tried to spend some time understanding the logic (if any!) of
the fts part 


Honestly, the one who created this mess shall be the one to fix it, or
one shall refactor it totally. 

Basically, the fts "core" should be able to: 

- select the backend according to the conf file 

- send new emails/mailboxes to the backend 

- send the IDs of the emails to be removed 

- resend an entire mailbox ('rescan') 


- send the search parameters (from the client) to the backend and return the
emails to the front end based on the backend results (and NOTHING more) 

Today, the fts part is plain wrong and must be totally reviewed. 


I do not have the time but I can participate in testing if someone is
ready to roll up their sleeves on the matter 


The "loop" part seems the most urgent: it breaks everything (search
times out 100% of the time) 


On 2019-04-06 09:56, Joan Moreau via dovecot wrote:


For point 1, this is not "suboptimal", it is plain wrong (the results are damn 
wrong! and this is not related to the backend, but to the FTS logic in Dovecot core)

For point 2, this has been discussed already numerous times but without action. The dovecot core shall be the one re-submitting the emails to scan, not the backend trying to figure out where and which emails are to be re-scanned 

For point 3, I will do a bit of research in the existing code and will get back to you 

For point 4, this is random. The FTS backend (xapian, lucene, solr, whatever...) returns X, then dovecot core chooses to select only Y emails. This is a clear bug. 

On 2019-04-05 20:08, Josef 'Jeff' Sipek via dovecot wrote: 
On Fri, Apr 05, 2019 at 19:33:57 +0800, Joan Moreau via dovecot wrote: Hi 

If you plan to fix the FTS part of Dovecot, I will be very gratefull. 
I'm trying to figure out what is causing the 3rd issue you listed, so we can

decide how severe it is and therefore how quickly it needs to be fixed.  At
the moment we are unable to reproduce it, and therefore we cannot fix it.

Not sure this is related to any specific commit but rather the overall
design 
Ok.


The list of bugs so far 


1 - Double call to fts plugins with inconsistent parameters (first call
different from second call for the same request) 
Understood.  It is my understanding that this is simply suboptimal rather

than causing crashes/etc.

2 - The "Rescan" feature for now consists of deleting indexes. It shall
instead resend the emails to rescan to the fts plugin 
I'm not sure I follow.  The rescan operation is invoked on the fts backend

and it is up to the implementation to somehow ensure that after it is done
the fts index is up to date.  The easiest way to implement it is to simply
delete the fts index and re-index all the mails.  That is what currently
happens in the solr backend.

The lucene fts backend does a more complicated matching of the fts index
with the emails.  Finally, the deprecated squat backend seem to ignore the
rescan requests (its rescan vfunc is NULL).

3 - the loop when body search (just do a "doveadm search -u user@domain
mailbox inbox text whatevertexte") 


Refer to my email to Timo on 2019-04-03 18:30 on the same thread for bug
details 

(especially the loop) 
This seems to be the most important of the 4 issues you listed, so I'd like

to focus on this one for now.

As I mentioned, we cannot reproduce this ourselves.  So, we need your help
to narrow things down.  Therefore, can you give us the commit hashes of
revisions that you know are good and which are bad?  You can use git-bisect
to narrow the range down.

4 - Most notably, I notice that header search usually does not care
about the fts plugin (even with fts_enforced) and relies on some internal
search, which is total nonsense 
You're right, that doesn't seem to make sense.  Can you provide a test case?


Jeff.

Let me know how I can help on those 4 points 


On 2019-04-05 18:37, Josef 'Jeff' Sipek wrote:

On Fri, Apr 05, 2019 at 17:45:36 +0800, Joan Moreau wrote: 

I am on master (very latest) 

No clue exactly when this problem appears, but 


1 - the "request twice the fts plugin instead of once" issue has always
been there (since my first RC release of fts-xapian) 
Ok, good to know.


2 - the body/text loop has appeared recently (maybe during the month of
March) 
Our testing doesn't seem to be able to reproduce this.  Can you try to

git-bisect this to find which commit broke it?

Thanks,

Jeff.

On 2019-04-05 16:36, Josef 'Jeff' Sipek via dovecot wrote:

On Wed, Apr 03, 2019 at 19:02:52 +0800, Joan Moreau via dovecot wrote: 

The issue seems to be in the Git version: 
Which git revision?


Before you updated to the broken revision, which revision/version were you
running?

Can you try it with 5f6e39c50ec79ba8847b2fdb571a9152c71cd1b6 (the commit
just before the fts_enforced=body introduction)?  That's the only recent fts
change.

Thanks,

Jeff.

On 2019-04-03 18:58, @lbutlr via dovecot wrote:

On 3 Apr 2019, at 04:30, Joan Moreau via dovecot  wrote:

Re: FTS delays

2019-04-05 Thread Joan Moreau via dovecot

For point 1, this is not "suboptimal", it is plain wrong (the results
are damn wrong! and this is not related to the backend, but to the FTS
logic in Dovecot core)

For point 2, this has been discussed numerous times already, but
without action. The Dovecot core should be the one re-submitting the
emails to scan; the backend should not have to figure out where and which
emails need to be re-scanned. 


For point 3, I will do a bit of research in the existing code and
get back to you. 


For point 4, this is random. The FTS backend (xapian, lucene, solr,
whatever...) returns X, then the Dovecot core chooses to select only Y emails.
This is a clear bug. 


On 2019-04-05 20:08, Josef 'Jeff' Sipek via dovecot wrote:

On Fri, Apr 05, 2019 at 19:33:57 +0800, Joan Moreau via dovecot wrote: 

Hi 


If you plan to fix the FTS part of Dovecot, I will be very grateful.


I'm trying to figure out what is causing the 3rd issue you listed, so we can
decide how severe it is and therefore how quickly it needs to be fixed.  At
the moment we are unable to reproduce it, and therefore we cannot fix it.


Not sure this is related to any specific commit but rather the overall
design


Ok.

The list of bugs so far 


1 - Double call to fts plugins with inconsistent parameters (first call
different from second call for the same request)


Understood.  It is my understanding that this is simply suboptimal rather
than causing crashes/etc.


2 - The "Rescan" feature for now consists of deleting indexes. It should
instead resend the emails to rescan to the fts plugin


I'm not sure I follow.  The rescan operation is invoked on the fts backend
and it is up to the implementation to somehow ensure that after it is done
the fts index is up to date.  The easiest way to implement it is to simply
delete the fts index and re-index all the mails.  That is what currently
happens in the solr backend.

The lucene fts backend does a more complicated matching of the fts index
with the emails.  Finally, the deprecated squat backend seems to ignore the
rescan requests (its rescan vfunc is NULL).


3 - the loop when searching the body (just do a "doveadm search -u user@domain
mailbox inbox text whatevertexte") 


Refer to my email to Timo on 2019-04-03 18:30 on the same thread for bug
details 


(especially the loop)


This seems to be the most important of the 4 issues you listed, so I'd like
to focus on this one for now.

As I mentioned, we cannot reproduce this ourselves.  So, we need your help
to narrow things down.  Therefore, can you give us the commit hashes of
revisions that you know are good and which are bad?  You can use git-bisect
to narrow the range down.


4 - Most notably, I notice that header search usually does not care
about the fts plugin (even with fts_enforced) and relies on some internal
search, which is total nonsense


You're right, that doesn't seem to make sense.  Can you provide a test case?

Jeff.

Let me know how I can help on those 4 points 


On 2019-04-05 18:37, Josef 'Jeff' Sipek wrote:

On Fri, Apr 05, 2019 at 17:45:36 +0800, Joan Moreau wrote: 

I am on master (very latest) 

No clue exactly when this problem appears, but 


1 - the "request twice the fts plugin instead of once" issue has always
been there (since my first RC release of fts-xapian) 
Ok, good to know.


2 - the body/text loop has appeared recently (maybe during the month of
March) 
Our testing doesn't seem to be able to reproduce this.  Can you try to

git-bisect this to find which commit broke it?

Thanks,

Jeff.

On 2019-04-05 16:36, Josef 'Jeff' Sipek via dovecot wrote:

On Wed, Apr 03, 2019 at 19:02:52 +0800, Joan Moreau via dovecot wrote: 

The issue seems to be in the Git version: 
Which git revision?


Before you updated to the broken revision, which revision/version were you
running?

Can you try it with 5f6e39c50ec79ba8847b2fdb571a9152c71cd1b6 (the commit
just before the fts_enforced=body introduction)?  That's the only recent fts
change.

Thanks,

Jeff.

On 2019-04-03 18:58, @lbutlr via dovecot wrote:

On 3 Apr 2019, at 04:30, Joan Moreau via dovecot  wrote: 

doveadm search -u j...@grosjo.net mailbox inbox text milan 
Did that search over my list mail and got 83 results, not able to duplicate your issue.


What version of dovecot and have you tried to reindex?

dovecot-2.3.5.1 here.

Re: FTS delays

2019-04-05 Thread Joan Moreau via dovecot
Hi 


If you plan to fix the FTS part of Dovecot, I will be very gratefull.
Not sure this is related to any specific commit but rather the overall
design 

The list of bugs so far 


1 - Double call to fts plugins with inconsistent parameters (first call
different from second call for the same request) 


2 - The "Rescan" feature for now consists of deleting indexes. It should
instead resend the emails to rescan to the fts plugin 


3 - the loop when searching the body (just do a "doveadm search -u user@domain
mailbox inbox text whatevertexte") 


Refer to my email to Timo on 2019-04-03 18:30 on the same thread for bug
details 

(especially the loop) 


4 - Most notably, I notice that header search usually does not care
about the fts plugin (even with fts_enforced) and relies on some internal
search, which is total nonsense 

Let me know how I can help on those 4 points 


On 2019-04-05 18:37, Josef 'Jeff' Sipek wrote:

On Fri, Apr 05, 2019 at 17:45:36 +0800, Joan Moreau wrote: 

I am on master (very latest) 

No clue exactly when this problem appears, but 


1 - the "request twice the fts plugin instead of once" issue has always
been there (since my first RC release of fts-xapian)


Ok, good to know.


2 - the body/text loop has appeared recently (maybe during the month of
March)


Our testing doesn't seem to be able to reproduce this.  Can you try to
git-bisect this to find which commit broke it?

Thanks,

Jeff.

On 2019-04-05 16:36, Josef 'Jeff' Sipek via dovecot wrote:

On Wed, Apr 03, 2019 at 19:02:52 +0800, Joan Moreau via dovecot wrote: 

The issue seems to be in the Git version: 
Which git revision?


Before you updated to the broken revision, which revision/version were you
running?

Can you try it with 5f6e39c50ec79ba8847b2fdb571a9152c71cd1b6 (the commit
just before the fts_enforced=body introduction)?  That's the only recent fts
change.

Thanks,

Jeff.

On 2019-04-03 18:58, @lbutlr via dovecot wrote:

On 3 Apr 2019, at 04:30, Joan Moreau via dovecot  wrote: 

doveadm search -u j...@grosjo.net mailbox inbox text milan 
Did that search over my list mail and got 83 results, not able to duplicate your issue.


What version of dovecot and have you tried to reindex?

dovecot-2.3.5.1 here.

Re: FTS delays

2019-04-05 Thread Joan Moreau via dovecot
I am on master (very latest) 

No clue exactly when this problem appears, but 


1 - the "request twice the fts plugin instead of once" issue has always
been there (since my first RC release of fts-xapian) 


2 - the body/text loop has appeared recently (maybe during the month of
March) 


On 2019-04-05 16:36, Josef 'Jeff' Sipek via dovecot wrote:

On Wed, Apr 03, 2019 at 19:02:52 +0800, Joan Moreau via dovecot wrote: 


The issue seems to be in the Git version:


Which git revision?

Before you updated to the broken revision, which revision/version were you
running?

Can you try it with 5f6e39c50ec79ba8847b2fdb571a9152c71cd1b6 (the commit
just before the fts_enforced=body introduction)?  That's the only recent fts
change.

Thanks,

Jeff.

On 2019-04-03 18:58, @lbutlr via dovecot wrote:

On 3 Apr 2019, at 04:30, Joan Moreau via dovecot  wrote: 

doveadm search -u j...@grosjo.net mailbox inbox text milan 
Did that search over my list mail and got 83 results, not able to duplicate your issue.


What version of dovecot and have you tried to reindex?

dovecot-2.3.5.1 here.

Re: FTS delays

2019-04-03 Thread Joan Moreau via dovecot
The issue seems to be in the Git version: 

FTS search in the body ends up looping 

Other searches call the FTS plugin twice (for no reason) 


On 2019-04-03 18:58, @lbutlr via dovecot wrote:

On 3 Apr 2019, at 04:30, Joan Moreau via dovecot  wrote: 


doveadm search -u j...@grosjo.net mailbox inbox text milan


Did that search over my list mail and got 83 results, not able to duplicate 
your issue.

What version of dovecot and have you tried to reindex?

dovecot-2.3.5.1 here.

Re: FTS delays

2019-04-03 Thread Joan Moreau via dovecot
Example from real life 

From Roundcube, I search "milan" in the full message (body & headers) 


Logs : 


Apr 3 10:24:01 gjserver dovecot[29778]:
imap(j...@grosjo.net)<30311><4pACp52FfCF/AAAB>: Query : ( bcc:milan OR
body:milan OR cc:milan OR from:milan OR message-id:milan OR
subject:milan OR to:milan OR uid:milan )
Apr 3 10:24:01 gjserver dovecot[29778]:
imap(j...@grosjo.net)<30311><4pACp52FfCF/AAAB>: Query: 81 results in 2 ms


81 results is correct 

but Roundcube times out 

from command line, I do : 

doveadm search -u j...@grosjo.net mailbox inbox text milan 

output 


doveadm(j...@grosjo.net): Info: Query : ( bcc:inbox OR body:inbox OR
cc:inbox OR from:inbox OR message-id:inbox OR subject:inbox OR to:inbox
OR uid:inbox ) AND ( bcc:milan OR body:milan OR cc:milan OR from:milan
OR message-id:milan OR subject:milan OR to:milan OR uid:milan )
doveadm(j...@grosjo.net): Info: Query: 1 results in 1 ms
d82b4b0f550d3859364495331209 847
d82b4b0f550d3859364495331209 1569
d82b4b0f550d3859364495331209 2260
d82b4b0f550d3859364495331209 2575
d82b4b0f550d3859364495331209 2811
d82b4b0f550d3859364495331209 2885
d82b4b0f550d3859364495331209 3038
d82b4b0f550d3859364495331209 3121
d82b4b0f550d3859364495331209 3170 

1 - The query is wrong 

2 - the last line "d8...209 3170" gets repeated for ages 


On 2019-04-02 16:30, Timo Sirainen wrote:

On 2 Apr 2019, at 6.38, Joan Moreau via dovecot  wrote: 


Further on this topic:

When choosing any headers in the search box, dovecot core calls the plugin 
TWICE (and returns the results quickly, but not immediately after getting the 
IDs from the plugins)

When choosing the BODY search, dovecot core calls the plugin ONCE (and never 
returns) (whereas the plugin properly returns the IDs)


If we simplify this, do you mean this calls it once and is fast:

doveadm search -u user@domain mailbox inbox body helloworld

But this calls twice and is slow:

doveadm search -u user@domain mailbox inbox text helloworld

And what about searching e.g. subject? :

doveadm search -u user@domain mailbox inbox subject helloworld

And does the slowness depend on whether there were any matches or not?


This is based on GIT version. (previous versions were working properly)


Previous versions were fast? Do you mean v2.3.5?

Re: FTS delays

2019-04-01 Thread Joan Moreau via dovecot
Further on this topic: 


When choosing any headers in the search box, dovecot core calls the
plugin TWICE (and returns the results quickly, but not immediately after
getting the IDs from the plugins) 


When choosing the BODY search, dovecot core calls the plugin ONCE (and
never returns) (whereas the plugin properly returns the IDs) 

This is based on GIT version. (previous versions were working properly) 

Looking for feedback 


Thank you

On 2019-03-30 21:48, Joan Moreau wrote:

it is already on 

On March 31, 2019 03:47:52 Aki Tuomi via dovecot  wrote: 

On 30 March 2019 21:37 Joan Moreau via dovecot  wrote: 

Hi 

When I do a FTS search (using the Xapian plugin) in the BODY part, the plugin returns the matching IDs within a few milliseconds (as seen in the log). 

However, Roundcube (connected to Dovecot) takes ages to show (headers only, via IMAP) the few results (I tested with a matching request of 9 emails) 

What could be the root cause ? 

Thank you 

does it help if you set 

plugin { 
fts_enforced=yes 
} 


---
Aki Tuomi

Re: FTS delays

2019-03-30 Thread Joan Moreau via dovecot

it is already on

On March 31, 2019 03:47:52 Aki Tuomi via dovecot  wrote:



On 30 March 2019 21:37 Joan Moreau via dovecot  wrote:





Hi

When I do a FTS search (using the Xapian plugin) in the BODY part, the plugin 
returns the matching IDs within a few milliseconds (as seen in the log).


However, Roundcube (connected to Dovecot) takes ages to show (headers only, 
via IMAP) the few results (I tested with a matching request of 9 emails)


What could be the root cause ?

Thank you


does it help if you set

plugin {
  fts_enforced=yes
}
---
Aki Tuomi




FTS delays

2019-03-30 Thread Joan Moreau via dovecot
Hi 


When I do a FTS search (using the Xapian plugin) in the BODY part, the
plugin returns the matching IDs within a few milliseconds (as seen in the
log). 

However, Roundcube (connected to Dovecot) takes ages to show (headers
only, via IMAP) the few results (I tested with a matching request of 9
emails) 

What could be the root cause ? 


Thank you

Re: [grosjo/fts-xapian] `doveadm fts rescan` removes all indices (#15)

2019-02-18 Thread Joan Moreau via dovecot

Can you clarify the piece of code, or give an example of how to "Get the
list of UIDs for all mails in each folder" and how to get the "list of
all folders/mailboxes" from a *backend input?

On 2019-02-17 14:52, Aki Tuomi wrote:


Not really, as the steps outlined by Timo would not get done.

Aki

On 17 February 2019 at 10:56 Joan Moreau via dovecot  
wrote:

In such case, as long as the API is not upgraded, should 

doveadm index -A -q \* 

be considered a replacement of 


doveadm fts rescan

On 2019-02-14 16:24, Timo Sirainen via dovecot wrote:

Hi, 

The rescan() function is a bit badly designed. Currently you could do what fts-lucene does and: 
- Get list of UIDs for all mails in each folder 
- If Xapian has UID that doesn't exist -> delete it from Xapian 
- If UID is missing from Xapian -> expunge the rest of the UIDs in that folder, so the next indexing will cause them to be indexed 

The expunging of the rest of the mails is rather ugly, yes.. A better API would be if the backend simply had a way to iterate over all mails in the index, preferably sorted by folder. Then more generic code could go through them, expunge the necessary mails and index the missing mails. Although not all FTS backends support indexing in the middle. Anyway, we don't really have time to implement this new API soon. 

I'm not sure if this is a big problem though. I don't think most people running FTS have ever run rescan. 

On 8 Feb 2019, at 9.54, Joan Moreau via dovecot  wrote: 

Hi, 

This is a core problem in Dovecot in my understanding. 

In my opinion, the rescan in dovecot should send to the FTS plugin the list of "supposedly" indexed emails (UIDs), and the plugin should purge the redundant UIDs (i.e. UIDs present in the index but not in the list sent by dovecot) and send back the list of UIDs not in its indexes to dovecot, so Dovecot can send the missing emails one by one 

What do you think? 

 Original Message  


SUBJECT:
[grosjo/fts-xapian] `doveadm fts rescan` removes all indices (#15)

DATE:
2019-02-08 08:28

FROM:
Leonard Lausen 

TO:
grosjo/fts-xapian 

CC:
Subscribed 

REPLY-TO:
grosjo/fts-xapian 


doveadm fts rescan -A deletes all indices, i.e. all folders and files in the xapian-indexes are deleted. However, according to man doveadm fts, the rescan command should only 


Scan what mails exist in the full text search index and compare those to what
actually exist in mailboxes. This removes mails from the index that have already
been expunged and makes sure that the next doveadm index will index all the
missing mails (if any). 

Deleting all indices does not seem to be the intended action, especially as constructing the index anew may take very long on large mailboxes. 


--
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub [1 [1]], or mute the thread [2].  


Links:
--
[1] https://github.com/grosjo/fts-xapian/issues/15
[2]
https://github.com/notifications/unsubscribe-auth/ACLmB9OB-7GaKIvhNc8sCgi7KQTrjNnoks5vLScugaJpZM4auCWp



Links:
--
[1] https://github.com/grosjo/fts-xapian/issues/15

Re: [grosjo/fts-xapian] `doveadm fts rescan` removes all indices (#15)

2019-02-17 Thread Joan Moreau via dovecot
In such case, as long as the API is not upgraded, should 

doveadm index -A -q \* 

be considered a replacement of 


doveadm fts rescan

On 2019-02-14 16:24, Timo Sirainen via dovecot wrote:

Hi, 

The rescan() function is a bit badly designed. Currently you could do what fts-lucene does and: 
- Get list of UIDs for all mails in each folder 
- If Xapian has UID that doesn't exist -> delete it from Xapian 
- If UID is missing from Xapian -> expunge the rest of the UIDs in that folder, so the next indexing will cause them to be indexed 

The expunging of the rest of the mails is rather ugly, yes.. A better API would be if the backend simply had a way to iterate over all mails in the index, preferably sorted by folder. Then more generic code could go through them, expunge the necessary mails and index the missing mails. Although not all FTS backends support indexing in the middle. Anyway, we don't really have time to implement this new API soon. 

I'm not sure if this is a big problem though. I don't think most people running FTS have ever run rescan. 

On 8 Feb 2019, at 9.54, Joan Moreau via dovecot  wrote: 

Hi, 

This is a core problem in Dovecot in my understanding. 

In my opinion, the rescan in dovecot should send to the FTS plugin the list of "supposedly" indexed emails (UIDs), and the plugin should purge the redundant UIDs (i.e. UIDs present in the index but not in the list sent by dovecot) and send back the list of UIDs not in its indexes to dovecot, so Dovecot can send the missing emails one by one 

What do you think? 

 Original Message  


SUBJECT:
[grosjo/fts-xapian] `doveadm fts rescan` removes all indices (#15)

DATE:
2019-02-08 08:28

FROM:
Leonard Lausen 

TO:
grosjo/fts-xapian 

CC:
Subscribed 

REPLY-TO:
grosjo/fts-xapian 


doveadm fts rescan -A deletes all indices, i.e. all folders and files in the xapian-indexes are deleted. However, according to man doveadm fts, the rescan command should only 


Scan what mails exist in the full text search index and compare those to what
actually exist in mailboxes. This removes mails from the index that have already
been expunged and makes sure that the next doveadm index will index all the
missing mails (if any). 

Deleting all indices does not seem to be the intended action, especially as constructing the index anew may take very long on large mailboxes. 


--
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub [1], or mute the thread [2].



Links:
--
[1] https://github.com/grosjo/fts-xapian/issues/15
[2]
https://github.com/notifications/unsubscribe-auth/ACLmB9OB-7GaKIvhNc8sCgi7KQTrjNnoks5vLScugaJpZM4auCWp

Re: Fwd: [grosjo/fts-xapian] `doveadm fts rescan` removes all indices (#15)

2019-02-13 Thread Joan Moreau via dovecot
Hi 


Anyone ?

On 2019-02-08 08:54, Joan Moreau via dovecot wrote:

Hi, 

This is a core problem in Dovecot in my understanding. 

In my opinion, the rescan in dovecot should send to the FTS plugin the list of "supposedly" indexed emails (UIDs), and the plugin should purge the redundant UIDs (i.e. UIDs present in the index but not in the list sent by dovecot) and send back the list of UIDs not in its indexes to dovecot, so Dovecot can send the missing emails one by one 

What do you think? 

 Original Message  


SUBJECT:
[grosjo/fts-xapian] `doveadm fts rescan` removes all indices (#15)

DATE:
2019-02-08 08:28

FROM:
Leonard Lausen 

TO:
grosjo/fts-xapian 

CC:
Subscribed 

REPLY-TO:
grosjo/fts-xapian 


doveadm fts rescan -A deletes all indices, i.e. all folders and files in the 
xapian-indexes are deleted. However, according to man doveadm fts, the rescan 
command should only


Scan what mails exist in the full text search index and compare those to what
actually exist in mailboxes. This removes mails from the index that have already
been expunged and makes sure that the next doveadm index will index all the
missing mails (if any).


Deleting all indices does not seem to be the intended action, especially as constructing the index anew may take very long on large mailboxes. 


--
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub [1], or mute the thread [2].



Links:
--
[1] https://github.com/grosjo/fts-xapian/issues/15
[2]
https://github.com/notifications/unsubscribe-auth/ACLmB9OB-7GaKIvhNc8sCgi7KQTrjNnoks5vLScugaJpZM4auCWp

Fwd: [grosjo/fts-xapian] `doveadm fts rescan` removes all indices (#15)

2019-02-07 Thread Joan Moreau via dovecot
Hi, 

This is a core problem in Dovecot in my understanding. 


In my opinion, the rescan in dovecot should send to the FTS plugin the
list of "supposedly" indexed emails (UIDs), and the plugin should purge
the redundant UIDs (i.e. UIDs present in the index but not in the list
sent by dovecot) and send back the list of UIDs not in its indexes to
dovecot, so Dovecot can send the missing emails one by one 

What do you think? 

 Original Message  


SUBJECT:
[grosjo/fts-xapian] `doveadm fts rescan` removes all indices 
(#15)

DATE:
2019-02-08 08:28

FROM:
Leonard Lausen 

TO:
grosjo/fts-xapian 

CC:
Subscribed 

REPLY-TO:
grosjo/fts-xapian


doveadm fts rescan -A deletes all indices, i.e. all folders and files in
the xapian-indexes are deleted. However, according to man doveadm fts,
the rescan command should only


Scan what mails exist in the full text search index and compare those to what
actually exist in mailboxes. This removes mails from the index that have already
been expunged and makes sure that the next doveadm index will index all the
missing mails (if any).


Deleting all indices does not seem to be the intended action, especially
as constructing the index anew may take very long on large mailboxes. 


--
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub [1], or mute the thread
[2]. 




Links:
--
[1] https://github.com/grosjo/fts-xapian/issues/15
[2]
https://github.com/notifications/unsubscribe-auth/ACLmB9OB-7GaKIvhNc8sCgi7KQTrjNnoks5vLScugaJpZM4auCWp

Re: Solr - complete setup (update)

2019-01-29 Thread Joan Moreau via dovecot

On 2019-01-30 07:33, Stephan Bosch wrote:


(forgot to CC mailing list)

On 26/01/2019 at 20:07, Joan Moreau via dovecot wrote: 


*- Bugs so far*

-> Line 620 of the fts_solr dovecot plugin: the size of the header is improperly calculated 
(a "huge header" warning for a simple email, which kills the index of the 
email concerned, so basically MOST emails, as the calculation is wrong) *You can check that 
regularly in the dovecot log file. My guess is that the mix of Unicode is not properly 
addressed here.*


Does this happen with specific messages? Do you have a sample message
for me? I don't see how Unicode could cause this. 


MY ONLY GUESS IS THAT IT REFERS TO SOME 'STRLEN', WHICH IS OF COURSE
WRONG IN THE CASE OF UNICODE EMAILS. THIS IS JUST A GUESS. 


BUT DO A GREP FOR "HUGE" IN THE DOVECOT LOG OF A BUSY SERVER TO FIND
EXAMPLES. 


(SORRY, I SWITCHED TO XAPIAN, AS SOLR WAS CREATING TOO MUCH TROUBLE FOR
MY SERVER, SO NO MORE CONCRETE EXAMPLES) 


-> The UID returned by Solr is to be considered a STRING (and that is maybe the source 
of the "out of bound" errors in fts_solr dovecot, as "long" is not enough)

*This is just highly visible in Solr schema.xml. Switching it to "long" in 
schema.xml returns plenty of errors.*


I cannot reproduce this so far (see modified schema below). In a simple
test I just get the desired results and no errors logged. 


I got this with large mailboxes (where the UID seems not acceptable to Solr).
The fault is not on Dovecot's side but on Solr's, and the UID(s) returned
for a search are garbage instead of proper values -> Putting it as a
string solves this


-> Java errors : A lot of non sense for me, I am not expert in Java. But, with 
increased memory, it seems not crashing, even if complaining quite a lot in the 
logs

Can you elaborate on the errors you have seen so far? When do these happen? How 
can I reproduce them?

*Honestly, I have no clue what the problems are. I just increased the memory of 
the JVM and the systems stopped crashing. Log files are huge anyway.*


What errors do you see? I see only INFO entries in my
/var/solr/logs/solr.log. Looks like Solr is pretty verbose by default
(lots of INFO output), but there must be a way to reduce that. 

I DELETED SOLR. NO MORE LOGS. MAYBE SOMEONE ELSE CAN TELL. 




[modified schema.xml attachment stripped by the archive]

Re: Solr - complete setup (update)

2019-01-26 Thread Joan Moreau via dovecot

*- Installation:*

-> Create a clean install using the defaults (at least in the Archlinux package), and do 
"sudo -u solr solr create -c dovecot". The config files are then in 
/opt/solr/server/solr/dovecot/conf and the data files in /opt/solr/server/solr/dovecot/data


On my system (Debian) these directories are wildly different (e.g. data
is under /var), but other than that, this information is OK.

Used this as a side-reference for Debian installation:
https://tecadmin.net/install-apache-solr-on-debian/

Accessed http://solr-host.tld:8983/solr/ to check whether all is OK. 


MAKE SURE YOU HAVE A DOVECOT INSTANCE (NOT THE DEFAULT INSTANCE) , WITH
THE FUNCTION BELOW: 

SOLR CREATE -C DOVECOT (OR WHATEVER NAME) 


Weirdly, rescan returns immediately here. When I perform `doveadm index INBOX` 
for my test user, I do see a lot of fts and HTTP activity.


THE SOLR PLUGIN IS NOT ENTIRELY IMPLEMENTED; THE REFRESH AND RESCAN
FUNCTIONS ARE MISSING: 


https://github.com/dovecot/core/blob/master/src/plugins/fts-solr/fts-backend-solr.c


/* refresh is a no-op in the Solr backend */
static int fts_backend_solr_refresh(struct fts_backend *backend ATTR_UNUSED)
{
        return 0;
}

/* rescan does not reconcile the index; it only resets the last-uids */
static int fts_backend_solr_rescan(struct fts_backend *backend)
{
        /* FIXME: proper rescan needed. for now we'll just reset the
           last-uids */
        return fts_backend_reset_last_uids(backend);
}


*- Bugs so far*

-> Line 620 of the fts_solr dovecot plugin: the size of the header is improperly calculated 
(a "huge header" warning for a simple email, which kills the index of the 
email concerned, so basically MOST emails, as the calculation is wrong)


YOU CAN CHECK THAT REGULARLY IN THE DOVECOT LOG FILE. MY GUESS IS THAT THE
MIX OF UNICODE IS NOT PROPERLY ADDRESSED HERE. 


-> The UID returned by Solr is to be considered a STRING (and that is maybe the source 
of the "out of bound" errors in fts_solr dovecot, as "long" is not enough)


THIS IS JUST HIGHLY VISIBLE IN SOLR SCHEMA.XML. SWITCHING IT TO "LONG"
IN SCHEMA.XML RETURNS PLENTY OF ERRORS. 

-> Java errors: a lot of nonsense for me, I am not an expert in Java. But, with increased memory, it seems not to crash, even if it complains quite a lot in the logs 


Can you elaborate on the errors you have seen so far? When do these happen? How 
can I reproduce them?


HONESTLY, I HAVE NO CLUE WHAT THE PROBLEMS ARE. I JUST INCREASED THE
MEMORY OF THE JVM AND THE SYSTEMS STOPPED CRASHING. LOG FILES ARE HUGE
ANYWAY.
