[Dovecot] suspect valgrind error in mail-index-map.c

2008-03-08 Thread Diego Liziero
Hi

At line 1118 of src/lib-index/mail-index-map.c, inside the function
mail_index_map_move_to_memory, there is:
mail_index_map_copy_header(map, map);

Valgrind is stating that "Source and destination overlap in memcpy".
I'm wondering if this code is just copying the same memory over itself,
or if it is doing something useful.

Regards,
Diego Liziero.
---
1104 void mail_index_map_move_to_memory(struct mail_index_map *map)
1105 {
1106         struct mail_index_record_map *new_map;
1107
1108         if (map->rec_map->mmap_base == NULL)
1109                 return;
1110
1111         i_assert(map->rec_map->lock_id != 0);
1112
1113         new_map = array_count(&map->rec_map->maps) == 1 ? map->rec_map :
1114                 mail_index_record_map_alloc(map);
1115
1116         mail_index_map_copy_records(new_map, map->rec_map,
1117                                     map->hdr.record_size);
1118         mail_index_map_copy_header(map, map);
1119
1120         if (new_map != map->rec_map) {
1121                 mail_index_record_map_unlink(map);
1122                 map->rec_map = new_map;
1123         } else {
1124                 mail_index_unlock(map->index, &new_map->lock_id);
1125                 if (munmap(new_map->mmap_base, new_map->mmap_size) < 0)
1126                         mail_index_set_syscall_error(map->index, "munmap()");
1127                 new_map->mmap_base = NULL;
1128         }
1129 }
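For what it's worth, the warning can be reproduced in isolation. This is a hedged sketch (not Dovecot code) of why valgrind complains about a self-copy and how it can be made harmless:

```c
#include <assert.h>
#include <string.h>

/* Illustration only, not Dovecot code: memcpy() has undefined behavior
 * when the source and destination ranges overlap, and copying a buffer
 * onto itself is the extreme case, so valgrind flags it even though most
 * implementations happen to leave the bytes unchanged. */
static void copy_header(void *dst, const void *src, size_t size)
{
	if (dst == src)
		return;           /* self-copy: nothing to do */
	memmove(dst, src, size);  /* memmove is defined for overlaps */
}
```

With a guard like this (or by using memmove) the self-copy is well defined; the actual change in the linked fix may take a different approach.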


Re: [Dovecot] suspect valgrind error in mail-index-map.c

2008-03-08 Thread Diego Liziero
On Sun, Mar 9, 2008 at 1:12 AM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>
>  It's copying over itself, so it shouldn't break anything. But I fixed
>  the error anyway: http://hg.dovecot.org/dovecot-1.1/rev/8e014fd46e84

Thanks Timo. With this patch valgrind gives 0 errors most of the time.

Forgive me for asking about this last valgrind trace.
Can we simply ignore it?

Diego.

==3921== 180 (124 direct, 56 indirect) bytes in 1 blocks are
definitely lost in loss record 3 of 5
==3921==at 0x40046FF: calloc (vg_replace_malloc.c:279)
==3921==by 0x80D76DC: pool_system_malloc (mempool-system.c:74)
==3921==by 0x80B59CB: mail_transaction_log_file_alloc
(mail-transaction-log-file.c:51)
==3921==by 0x80B3A86: mail_transaction_log_find_file
(mail-transaction-log.c:385)
==3921==by 0x80B776F: mail_transaction_log_view_set
(mail-transaction-log-view.c:160)
==3921==by 0x80B00B2: mail_index_sync_map (mail-index-sync-update.c:747)
==3921==by 0x80A9368: mail_index_map (mail-index-map.c:897)
==3921==by 0x80A63BB: mail_index_try_open (mail-index.c:290)
==3921==by 0x80A673C: mail_index_open (mail-index.c:352)
==3921==by 0x809AB78: index_storage_mailbox_open (index-storage.c:383)
==3921==by 0x807809B: mbox_alloc_mailbox (mbox-storage.c:573)
==3921==by 0x8078E2F: mbox_open (mbox-storage.c:591)
==3921==by 0x807900E: mbox_mailbox_open (mbox-storage.c:668)
==3921==by 0x809EB98: mailbox_open (mail-storage.c:459)
==3921==by 0x805E0E8: cmd_select_full (cmd-select.c:32)
==3921==by 0x805B5A8: cmd_examine (cmd-examine.c:8)
==3921==by 0x805F9A8: client_command_input (client.c:546)
==3921==by 0x805FA3D: client_command_input (client.c:595)
==3921==by 0x80601C4: client_handle_input (client.c:636)
==3921==by 0x80603DD: client_input (client.c:691)
==3921==by 0x80D5B9F: io_loop_handler_run (ioloop-epoll.c:201)
==3921==by 0x80D4E27: io_loop_run (ioloop.c:301)
==3921==by 0x8067DBB: main (main.c:293)


Re: [Dovecot] suspect valgrind error in mail-index-map.c

2008-03-09 Thread Diego Liziero
On Sun, Mar 9, 2008 at 2:07 AM, Diego Liziero <[EMAIL PROTECTED]> wrote:
> [..]
>
> 180 (124 direct, 56 indirect) bytes in 1 blocks are definitely lost in loss 
> record 3 of 5
> [..]
>  by 0x80B59CB: mail_transaction_log_file_alloc 
> (mail-transaction-log-file.c:51)
>  by 0x80B3A86: mail_transaction_log_find_file (mail-transaction-log.c:385)

I think it's this last line that causes the leak.

  mail-transaction-log.c
343 int mail_transaction_log_find_file(struct mail_transaction_log *log,
344                                    uint32_t file_seq, bool nfs_flush,
345                                    struct mail_transaction_log_file **file_r)
346 {
347         struct mail_transaction_log_file *file;
348         const char *path;
349         int ret;
[..]
382         /* see if we have it in log.2 file */
383         path = t_strconcat(log->index->filepath,
384                            MAIL_TRANSACTION_LOG_SUFFIX".2", NULL);
385         file = mail_transaction_log_file_alloc(log, path);

Here a new mail_transaction_log_file is allocated and then lost.
Maybe I'm wrong, but isn't there a path here where
mail_transaction_log_file_free(&file); should be called before
returning, so the memory pointed to by file isn't lost?

386 if ((ret = mail_transaction_log_file_open(file, TRUE)) <= 0)
387 return ret;
388
389 /* but is it what we expected? */
390 if (file->hdr.file_seq != file_seq)
391 return 0;
392
393 *file_r = file;
394 return 1;
395 }

Regards,
Diego.


[Dovecot] dovecot 1.1.rc3 assertion failed at index_mailbox_set_recent_uid while deleting message with thunderbird.

2008-03-10 Thread Diego Liziero
Some users hit this assertion failure while deleting a message.

dovecot: Mar 10 08:40:44 Panic: IMAP(user): file index-sync.c: line 39
(index_mailbox_set_recent_uid): assertion failed: (seq_range_exists
(&ibox->recent_flags, uid))
dovecot: Mar 10 08:40:44 Error: IMAP(user): Raw backtrace: [see below]
dovecot: Mar 10 08:40:44 Error: child 17683 (imap) killed with signal 6

And the message doesn't get deleted.

Here one of the backtraces

(gdb) bt
#0  0x00ae9402 in __kernel_vsyscall ()
#1  0x00725ba0 in raise () from /lib/libc.so.6
#2  0x007274b1 in abort () from /lib/libc.so.6
#3  0x080ce02d in i_internal_fatal_handler (type=LOG_TYPE_PANIC,
status=0, fmt=0x80e1750 "file %s: line %d (%s): assertion failed:
(%s)",
args=0xbfd8c6a4 "=???\016\b'") at failures.c:424
#4  0x080cdc3c in i_panic (format=0x80e1750 "file %s: line %d (%s):
assertion failed: (%s)") at failures.c:187
#5  0x0809bb4a in index_mailbox_set_recent_uid (ibox=0x88ca768,
uid=1041) at index-sync.c:39
#6  0x0808085f in mbox_sync_loop (sync_ctx=0xbfd8c8d0,
mail_ctx=0xbfd8cac4, partial=true) at mbox-sync.c:457
#7  0x08081361 in mbox_sync (mbox=0x88ca768, flags=18) at mbox-sync.c:1504
#8  0x08079cf9 in mbox_transaction_commit (t=0x88c5550,
log_file_seq_r=0xbfd8cbc8, log_file_offset_r=0xbfd8cbc0) at
mbox-transaction.c:45
#9  0x0809c69e in index_transaction_commit (_t=0x88cb1c0,
uid_validity_r=0xbfd8cc48, first_saved_uid_r=0xbfd8cc44,
last_saved_uid_r=0xbfd8cc40) at index-transaction.c:105
#10 0x0805b0cf in cmd_copy (cmd=0x88b75c0) at cmd-copy.c:141
#11 0x0805f079 in cmd_uid (cmd=0x88b75c0) at cmd-uid.c:26
#12 0x0805f9a9 in client_command_input (cmd=0x88b75c0) at client.c:546
#13 0x0805fa3e in client_command_input (cmd=0x88b75c0) at client.c:595
#14 0x080601c5 in client_handle_input (client=0x88b7368) at client.c:636
#15 0x080603de in client_input (client=0x88b7368) at client.c:691
#16 0x080d5ba0 in io_loop_handler_run (ioloop=0x88b59b0) at ioloop-epoll.c:201
#17 0x080d4e28 in io_loop_run (ioloop=0x88b59b0) at ioloop.c:301
#18 0x08067dbc in main (argc=Cannot access memory at address 0x413a
) at main.c:293

(gdb) bt full
#0  0x00ae9402 in __kernel_vsyscall ()
No symbol table info available.
#1  0x00725ba0 in raise () from /lib/libc.so.6
No symbol table info available.
#2  0x007274b1 in abort () from /lib/libc.so.6
No symbol table info available.
#3  0x080ce02d in i_internal_fatal_handler (type=LOG_TYPE_PANIC,
status=0, fmt=0x80e1750 "file %s: line %d (%s): assertion failed:
(%s)",
args=0xbfd8c6a4 "=???\016\b'") at failures.c:424
backtrace = 0x88add50 "/usr/libexec/dovecot/imap [0x80ce024]
-> /usr/libexec/dovecot/imap [0x80cdc3c] -> /usr/libexec/dovecot/imap
[0x809bb4a] -> /usr/libexec/dovecot/imap [0x808085f] ->
/usr/libexec/dovecot/imap(mbox_sync+"...
#4  0x080cdc3c in i_panic (format=0x80e1750 "file %s: line %d (%s):
assertion failed: (%s)") at failures.c:187
args = 0xbfd8c6a4 "=???\016\b'"
#5  0x0809bb4a in index_mailbox_set_recent_uid (ibox=0x88ca768,
uid=1041) at index-sync.c:39
__PRETTY_FUNCTION__ = "index_mailbox_set_recent_uid"
#6  0x0808085f in mbox_sync_loop (sync_ctx=0xbfd8c8d0,
mail_ctx=0xbfd8cac4, partial=true) at mbox-sync.c:457
mail = {uid = 268, idx_seq = 133265, keywords = {arr = {buffer
= 0x88d3770, element_size = 3218655176}, v = 0x88d3770,
v_modifiable = 0x88d3770}, flags = 172 '???', uid_broken = 1,
expunged = 0, pseudo = 1, from_offset = 143457960,
  body_size = 579466361970685888, offset = 143456608, space =
578778797692682360}
rec = (const struct mail_index_record *) 0xb7e6f008
uid = 1041
messages_count = 3518
offset = 563
ret = 
expunged = false
skipped_mails = false
uids_broken = false
#7  0x08081361 in mbox_sync (mbox=0x88ca768, flags=18) at mbox-sync.c:1504
ret = 
#8  0x08079cf9 in mbox_transaction_commit (t=0x88c5550,
log_file_seq_r=0xbfd8cbc8, log_file_offset_r=0xbfd8cbc0) at
mbox-transaction.c:45
mt = (struct mbox_transaction_context *) 0x88cb1c0
mbox = (struct mbox_mailbox *) 0x88ca768
lock_id = 3
ret = 0
#9  0x0809c69e in index_transaction_commit (_t=0x88cb1c0,
uid_validity_r=0xbfd8cc48, first_saved_uid_r=0xbfd8cc44,
last_saved_uid_r=0xbfd8cc40) at index-transaction.c:105
itrans = (struct mail_index_transaction *) 0x0
seq = 7
offset = 20544
#10 0x0805b0cf in cmd_copy (cmd=0x88b75c0) at cmd-copy.c:141
client = (struct client *) 0x88b7368
storage = (struct mail_storage *) 0x88b6c80
destbox = (struct mailbox *) 0x88ca768
t = (struct mailbox_transaction_context *) 0x0
search_arg = 
messageset = 0x88bb6c0 "224"
mailbox = 0x88bb6c8 "Trash"
msg = 
sync_flags = 
imap_flags = 
copy_count = 1
uid_validity = 
uid1 = 
uid2 = 
ret = 1
__PRETTY_FUNCTION__ = "cmd_copy"
#11 0x0805f079 in cmd_uid (cmd=0x88b75c0) at cmd-uid.c:26

Re: [Dovecot] dovecot 1.1.rc3 assertion failed at index_mailbox_set_recent_uid while deleting message with thunderbird.

2008-03-10 Thread Diego Liziero
On Mon, Mar 10, 2008 at 9:05 AM, Diego Liziero <[EMAIL PROTECTED]> wrote:
> Some users hit this assertion failure while deleting a message.
>
>  dovecot: Mar 10 08:40:44 Panic: IMAP(user): file index-sync.c: line 39
>  (index_mailbox_set_recent_uid): assertion failed: (seq_range_exists
>  (&ibox->recent_flags, uid))

36  void index_mailbox_set_recent_uid(struct index_mailbox *ibox, uint32_t uid)
37  {
38          if (uid <= ibox->recent_flags_prev_uid) {
39                  i_assert(seq_range_exists(&ibox->recent_flags, uid));
40                  return;
41          }
42          ibox->recent_flags_prev_uid = uid;

Here, when the assert fails:
uid = 1041
ibox->recent_flags_prev_uid = 4557
ibox->recent_flags->arr->element_size = 8
**ibox->recent_flags->v = {seq1 = 4557, seq2 = 4557}
(struct seq_range)(ibox->recent_flags->arr->buffer->data) = {seq1 = 143455672, seq2 = 8}
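To make the failing check concrete, here is a hedged sketch of its semantics (the real seq_range_exists works on Dovecot's dynamic array type and may differ in detail): recent_flags holds inclusive [seq1, seq2] UID ranges, and with the values above the only range is {4557, 4557}, so uid=1041 cannot be found even though recent_flags_prev_uid claims UIDs up to 4557 were already recorded.

```c
#include <assert.h>
#include <stddef.h>

/* Hedged sketch of the assertion's semantics, not Dovecot's actual
 * implementation: recent_flags is an array of inclusive UID ranges, and
 * seq_range_exists() asks whether a UID falls inside any of them. */
struct seq_range { unsigned int seq1, seq2; };

static int seq_range_exists(const struct seq_range *ranges, size_t count,
			    unsigned int seq)
{
	for (size_t i = 0; i < count; i++) {
		if (seq >= ranges[i].seq1 && seq <= ranges[i].seq2)
			return 1;
	}
	return 0;
}
```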

(gdb)  print *ibox
$16 = {box = {name = 0x88ca8f8 "Trash", storage = 0x88b6c80, v =
{is_readonly = 0x809a750 ,
  allow_new_keywords = 0x809a770
, close = 0x8077d50
,
  get_status = 0x809a620 ,
list_index_has_changed = 0, list_index_update_sync = 0,
  sync_init = 0x80826e0 , sync_next =
0x809bf00 ,
  sync_deinit = 0x809bc30 , sync_notify
= 0, notify_changes = 0x8077d10 ,
  transaction_begin = 0x809c6b0 ,
transaction_commit = 0x809c650 ,
  transaction_rollback = 0x809c630 ,
keywords_create = 0x809a880 ,
  keywords_free = 0x809a790 , get_uids =
0x80934d0 ,
  mail_alloc = 0x8094320 , header_lookup_init =
0x8095d40 ,
  header_lookup_deinit = 0x8095c90 ,
search_init = 0x8099140 ,
  search_deinit = 0x8098b50 ,
search_next_nonblock = 0x8097e40 ,
  search_next_update_seq = 0x8097c10
, save_init = 0x807cab0
,
  save_continue = 0x807c740 , save_finish =
0x807c300 ,
  save_cancel = 0x807c530 , copy = 0x809cbd0
,
  is_inconsistent = 0x809aa00 },
pool = 0x88ca750, transaction_count = 0, private_flags_mask = 0,
file_create_mode = 384, dir_create_mode = 448, file_create_gid =
4294967295, notify_min_interval = 0, notify_callback = 0,
notify_context = 0x0, module_contexts = {arr = {buffer =
0x88ca900, element_size = 4}, v = 0x88ca900, v_modifiable =
0x88ca900},
opened = 1, mailbox_deleted = 0}, view_module_ctx = {reg = 0x0},
storage = 0x88b6c80, open_flags = 14, index = 0x88c09a8,
  view = 0x88c3988, cache = 0x88caf50, mail_vfuncs = 0x810aa20,
md5hdr_ext_idx = 2, notify_to = 0x0, notify_files = 0x0,
  notify_ios = 0x0, notify_last_check = 0, notify_last_sent = 0,
next_lock_notify = 1205134826,
  last_notify_type = MAILBOX_LOCK_NOTIFY_NONE, commit_log_file_seq =
7, commit_log_file_offset = 20544, keyword_names = 0x88c0a84,
  cache_fields = 0x88bf5d8, mail_cache_min_mail_count = 0,
recent_flags = {arr = {buffer = 0x88d3758, element_size = 8}, v =
0x88d3758,
v_modifiable = 0x88d3758}, recent_flags_prev_uid = 4557,
recent_flags_count = 1, sync_last_check = 0, readonly = 0, keep_recent
= 1,
  keep_locked = 0, sent_diskspace_warning = 0,
sent_readonly_flags_warning = 0, notify_pending = 0, move_to_memory =
0, fsync_disable = 0}
--
Diego


[Dovecot] dovecot-1.1.rc3 segmentation fault in fetch_bodystructure

2008-03-11 Thread Diego Liziero
Hi,
another imap crash with latest dovecot.

segmentation fault in fetch_bodystructure

src/imap/imap-fetch.c
static int fetch_bodystructure(struct imap_fetch_context *ctx,
                               struct mail *mail, void *context ATTR_UNUSED)
{
        const char *bodystructure;

        if (mail_get_special(mail, MAIL_FETCH_IMAP_BODYSTRUCTURE,
                             &bodystructure) < 0)
                return -1;

---> before the segfault, here we have bodystructure == NULL and
mail_get_special() returns >= 0
[..]

if (o_stream_send(ctx->client->output, "BODYSTRUCTURE (", 15) < 0 ||
/*line 461*/  o_stream_send_str(ctx->client->output, bodystructure) < 0 ||

---> here o_stream_send_str() calls strlen() on the NULL bodystructure,
and strlen() tries to access address 0x0, causing a segfault

--
 Address 0x0 is not stack'd, malloc'd or (recently) free'd
Process terminating with default action of signal 11 (SIGSEGV): dumping core
 Access not within mapped region at address 0x0
   at: strlen
   by: o_stream_send_str (ostream.c:163)
   by: fetch_bodystructure (imap-fetch.c:461)
   by: imap_fetch (imap-fetch.c:309)
   by: cmd_fetch (cmd-fetch.c:154)
   by: client_command_input (client.c:546)
   by: client_command_input (client.c:595)
   by: client_handle_input (client.c:636)
   by: client_input (client.c:691)
   by: io_loop_handler_run (ioloop-epoll.c:201)
   by: io_loop_run (ioloop.c:301)
   by: main (main.c:293)
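The crash mechanism itself is easy to demonstrate. Below is a hedged sketch of the pattern, not the actual patch (which may instead guarantee that bodystructure is never NULL on the producer side):

```c
#include <assert.h>
#include <string.h>

/* Illustration of the crash pattern: strlen(NULL) dereferences address
 * 0x0 and segfaults, so any sender that can receive a NULL string must
 * guard against it (or the producer must guarantee non-NULL, which is
 * the cleaner fix). */
static size_t send_str_len(const char *str)
{
	if (str == NULL)
		return 0;	/* defensive: strlen(NULL) would crash */
	return strlen(str);
}
```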


Re: [Dovecot] dovecot-1.1.rc3 segmentation fault in fetch_bodystructure

2008-03-11 Thread Diego Liziero
On Tue, Mar 11, 2008 at 9:09 AM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>
>  Well, I'm not sure how you managed to cause this, but this should fix
>  it: http://hg.dovecot.org/dovecot-1.1/rev/7e27d67d3abe

Thank you Timo for the quick fix,
here we have latest rc3 in a production environment.
It has been used by over 600 users in the last 2 days.

The most frequently failing assertion (9694 times in 2 days) is the one I posted yesterday:
Panic: IMAP(username): file index-sync.c: line 39
(index_mailbox_set_recent_uid): assertion failed:
(seq_range_exists(&ibox->recent_flags, uid))

It happens when users move messages to the Trash folder with thunderbird.
The workaround for the user is to delete the messages directly, without
moving them to Trash.

We also had some trouble with pop3. A couple of users weren't able to
get new mail (see log below) until we completely deleted their .imap
dir.

Diego.
---
Error: POP3(username): Cached message offset lost for seq 1 in mbox
file /maildir/username
Error: POP3(username): Log synchronization error at seq=1,offset=7824
for /maildir/username/.imap/INBOX/dovecot.index: Broken extension
introduction: Record field alignmentation 8 not used
Error: POP3(username): Log synchronization error at seq=1,offset=7856
for /maildir/username/.imap/INBOX/dovecot.index: Broken extension
introduction: Record field points outside record size (0+16 > 12)
Error: POP3(username): Log synchronization error at seq=1,offset=7928
for /maildir/username/.imap/INBOX/dovecot.index: Broken extension
introduction: Record field alignmentation 8 not used
Error: POP3(username): Log synchronization error at seq=1,offset=8004
for /maildir/username/.imap/INBOX/dovecot.index: Broken extension
introduction: Record field points outside record size (0+16 > 12)
Warning: POP3(username): fscking index file
/maildir/username/.imap/INBOX/dovecot.index
Error: POP3(username): Cached message offset lost for seq 1 in mbox
file /cl/e/spool-mail/username
Error: POP3(username): Log synchronization error at seq=1,offset=8208
for /maildir/username/.imap/INBOX/dovecot.index: Broken extension
introduction: Record field alignmentation 8 not used
Error: POP3(username): Log synchronization error at seq=1,offset=8240
for /maildir/username/.imap/INBOX/dovecot.index: Broken extension
introduction: Record field points outside record size (0+16 > 12)
Error: POP3(username): Log synchronization error at seq=1,offset=8312
for /maildir/username/.imap/INBOX/dovecot.index: Broken extension
introduction: Record field alignmentation 8 not used
Error: POP3(username): Log synchronization error at seq=1,offset=8388
for /maildir/username/.imap/INBOX/dovecot.index: Broken extension
introduction: Record field points outside record size (0+16 > 12)
Warning: POP3(username): fscking index file
/maildir/username/.imap/INBOX/dovecot.index
Error: POP3(username): Sending log messages too fast, throttling..
Error: POP3(username): Couldn't init INBOX: Can't sync mailbox:
Messages keep getting expunged


[Dovecot] imap sent-mail folder sometimes doesn't get updated when used by more than one mailer at the same time.

2008-03-12 Thread Diego Liziero
Hi,
I'm collecting user feedback on the latest dovecot 1.1.rc3 development release.

Some users are complaining that their sent mails sometimes don't get
written to the imap Sent-mail folder.

It seems that all these users were using multiple imap client
instances to read their mail
(thunderbird+horde-imp, evolution+horde-imp, or multiple thunderbird instances).

No error message is written in dovecot.log, no user feedback in
horde-imp/evolution.
Only thunderbird says it can't update Sent folder.

Has anyone had similar issues?

Diego.


Re: [Dovecot] imap sent-mail folder sometimes doesn't get updated when used by more than one mailer at the same time.

2008-03-12 Thread Diego Liziero
> On Wed, Mar 12, 2008 at 6:59 PM, Diego Liziero <[EMAIL PROTECTED]> wrote:
> >  Has anyone had similar issues?

 Charles Marcus wrote:
> Yes... I don't believe this is a dovecot issue, as this is an occasional
> issue with Thunderbird I've seen on both dovecot and Courier-imap (in
> fact it seems to happen more often on the sites that use Courier)...
> [..]

The fact is that with wu-imap no one complained about missing sent mails;
we switched to dovecot just last week (mainly because of wu-imap's
really bad performance),
and several users started complaining about this issue.

Another (maybe related?) strange thing is that local mail (sent by
sendmail) sometimes gets delayed until the following queue run (which was
set to 30 min). This, too, never happened before the switch to
dovecot.

I thought it could be caused by a bad locking configuration, but
Centos5 sendmail should use fcntl, wu-imap should have used it as well
(am I wrong?), and dovecot is explicitly configured with fcntl, too.

Since it never happened with wu-imap, perhaps it can be fixed with dovecot, too.

The hard part is to understand why it happens.

Regards,
Diego.


Re: [Dovecot] imap sent-mail folder sometimes doesn't get updated when used by more than one mailer at the same time.

2008-03-12 Thread Diego Liziero
On Wed, Mar 12, 2008 at 6:59 PM, Diego Liziero wrote:
> [..]
>  Some users are complaining that their sent mails sometimes don't get
>  written to imap Sent-mail folder.

 Tim Alberts wrote:
> [..] I'm finding that using multiple clients at the same time, changes are not
> immediately posted so viewing the same account with thunderbird on one
> machine and outlook on another, it appears to get out of sync.  Seems
> mostly the 'delete' or 'move' commands don't actually happen until the
> client program is closed.

Mmm.. the fact I'm reporting is different: you noticed that changes
don't appear immediately when using multiple clients because of local
caching.

Here it happens that the mail isn't actually written to the imap folder (Sent mail) at all.

Closing all instances and opening a new one doesn't get the lost mail back.

When it happens with thunderbird, the user gets an error stating that
the mail can't be saved.
With other mail clients nothing is said and the mail is just lost.

Regards,
Diego.


Re: [Dovecot] [SOLVED] imap sent-mail folder sometimes doesn't get updated when used by more than one mailer at the same time.

2008-03-13 Thread Diego Liziero
On Thu, Mar 13, 2008 at 3:36 AM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>
>  Enabling rawlog (http://wiki.dovecot.org/Debugging/Rawlog) and looking
>  at what exactly gets sent at that time could be useful.

I did it, but I couldn't find anything useful.

Then I decided to use wireshark on the client PC, and here is what I found:
mail clients create new connections to save sent mail to the "Sent"
imap folder.
At the end of the authentication process, the following helpful message
is sent to the client:
"NO Maximum number of connections from user+IP exceeded"

Great, that's why: our imap server is in a DMZ, and all internal users
reach it NATted from the same IP.

The problem is that this error message doesn't appear in dovecot's logs, and
email clients don't say anything useful, either
(maybe dovecot should write it to /var/log/dovecot.log).

Increasing mail_max_userip_connections value should fix it.
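In Dovecot 1.x this limit is a per-protocol setting in dovecot.conf; raising it might look like the fragment below (the value 50 is just an example for a NATted site, pick one that fits your user and client count):

```
protocol imap {
  # allow many clients from the same (NATted) IP per user
  mail_max_userip_connections = 50
}
```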

Thank you all for helping me in solving this issue.

Regards,
Diego.


Re: [Dovecot] [SOLVED] imap sent-mail folder sometimes doesn't get updated when used by more than one mailer at the same time.

2008-03-13 Thread Diego Liziero
On Thu, Mar 13, 2008 at 11:44 AM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>
>  It is written as a disconnect reason:
>

Sorry Timo, you're right, my fault: it was there in the info log among
tons of other debug messages, and I was watching the error log.

>  I thought Thunderbird was using max. 5 connections as default though, so
>  why is this happening? Is it not 5 as default or did your users change
>  it?

The issue happens when multiple instances of the mail client are
opened with the same username (and here dovecot sees everyone NATted
from the same IP).

We have some office mail accounts that are read by more than one user.

I still have to check how many connections the horde-imp webmail uses.

So, again, my fault, and sorry for the noise.

Diego.


[Dovecot] A message, when moved by multiple clients with the same account and the same filters, can be duplicated by the filters in the destination imap folder.

2008-03-13 Thread Diego Liziero
I'm still collecting user feedback after moving from uw-imap to
dovecot development release 1.1.rc3.

What the subject describes seems the obvious result of an improper configuration:
office emails are configured and accessed on multiple clients with the
same automatic filters, and sometimes the filtered emails get
duplicated in the destination folder.

The fact is that this was perfectly working with wu-imap (without any
duplication/triplication of emails).

I'm looking for a workaround to keep this client misuse
working with dovecot.

Could an email have a sort of "primary key" that forbids its
duplication in the same imap folder?

Regards,
Diego.


Re: [Dovecot] dovecot 1.1.rc3 assertion failed at index_mailbox_set_recent_uid while deleting message with thunderbird.

2008-03-13 Thread Diego Liziero
Attached are the rawlogs from when this happens.

I also have a tar.gz of the .imap dir of the affected user while this was
happening; should I post it, too?

Regards,
Diego.


20080313-094002-29657.in
Description: Binary data


20080313-094002-29657.out
Description: Binary data


Re: [Dovecot] dovecot-1.1.rc3 segmentation fault in fetch_bodystructure

2008-03-27 Thread Diego Liziero
On Thu, Mar 27, 2008 at 5:47 PM, Dean Brooks <[EMAIL PROTECTED]> wrote:

>
> Can you confirm that the patch Timo submited in the above link fixes
> this problem for 1.1rc3?  If so, will this be committed for rc4 or beyond?
>

No, the above patch was about another issue (the segmentation fault in
fetch_bodystructure); I forgot to change the subject regarding the assertion
failure you are getting.

As Timo told you, I replaced the assert with an i_error while waiting for a
proper fix.

Diego.

--- ./src/lib-storage/index/index-sync.c-orig	2008-03-13 16:46:36.0 +0100
+++ ./src/lib-storage/index/index-sync.c	2008-03-13 16:51:38.0 +0100
@@ -36,7 +36,9 @@
 void index_mailbox_set_recent_uid(struct index_mailbox *ibox, uint32_t uid)
 {
if (uid <= ibox->recent_flags_prev_uid) {
-   i_assert(seq_range_exists(&ibox->recent_flags, uid));
+   /*i_assert(seq_range_exists(&ibox->recent_flags, uid));*/
+   if (!seq_range_exists(&ibox->recent_flags, uid))
+   i_error("seq_range_exists(&ibox->recent_flags, uid)");
return;
}
ibox->recent_flags_prev_uid = uid;


[Dovecot] Imap sent folder sometimes doesn't get updated in dovecot-1.1-rc3 (+some current hg patches)

2008-03-27 Thread Diego Liziero
Yes, again: some sent emails don't get copied to the Sent folder.

This time I get the following error:
Error: IMAP(username): UIDs broken with partial sync in mbox file
/maildir/username/Sent

This happened at least with thunderbird and with evolution.

Dovecot is running with fcntl locking on Centos5; sendmail should use fcntl,
too.
Filesystem is ext3 on drbd partition.

After a grep through dovecot.log I see the same error message with other
imap folders as well (mainly the Trash imap folder).

Should I try to get a rawlog?

Regards,
Diego.


Re: [Dovecot] [partially solved] Imap sent folder sometimes doesn't get updated in dovecot-1.1-rc3 (+some current hg patches)

2008-03-29 Thread Diego Liziero
On Thu, Mar 27, 2008 at 11:35 PM, Diego Liziero <[EMAIL PROTECTED]> wrote:
> Yes, again, some sent emails don't get copied to sent folder.

This time the reporter reproduced the issue because of a wrong client
configuration, and without the related error (he was using a secondary
account that was set to save sent mails locally instead of in the imap
folder, and all the missing sent mails were saved locally).

> Error: IMAP(username): UIDs broken with partial sync in mbox file
> /maildir/username/Sent

This error was in the log within the same minute (but different seconds)
the mail was sent, and it is probably unrelated; maybe it happened while
the user searched for the mail in the Sent folder after sending it.
Sorry for the confusion.

> After a grep through dovecot.log I see the same error message with other
> imap folders as well (mainly the Trash imap folder).

This is still one of the errors I see in dovecot-1.1.rc3 log.

Regards,
Diego.


Re: [Dovecot] [partially solved] Imap sent folder sometimes doesn't get updated in dovecot-1.1-rc3 (+some current hg patches)

2008-04-28 Thread Diego Liziero
On Fri, Apr 25, 2008 at 1:31 AM, Timo Sirainen wrote:
> On Sat, 2008-03-29 at 19:19 +0100, Diego Liziero wrote:
>
> > Error: IMAP(username): UIDs broken with partial sync in mbox file
> > /maildir/username/Sent
>
> Do you use mbox_very_dirty_syncs=yes?

Mmm... actually the line is commented out, and the default should be =no.

> It can cause this and I haven't
> yet bothered to fix it. Anyway I should probably just change it to a
> warning since Dovecot can fix the situation itself

Ok.

Regards,
Diego.


[Dovecot] [patch] let valgrind run on login process with GDB=1

2008-04-30 Thread Diego Liziero
Not sure if this should be included in dovecot; just posting it in case someone
feels like using valgrind.

Diego.

-
diff -r ba634d2c0ab9 src/master/login-process.c
--- a/src/master/login-process.c	Wed Apr 30 20:18:37 2008 +0300
+++ b/src/master/login-process.c	Thu May 01 00:59:10 2008 +0200
@@ -689,7 +689,8 @@ static pid_t create_login_process(struct
fd_limit = 16 + listen_count + ssl_listen_count +
2 * (group->set->login_process_per_connection ? 1 :
 group->set->login_max_connections);
-   restrict_fd_limit(fd_limit);
+   if (getenv("GDB") == NULL)
+   restrict_fd_limit(fd_limit);

/* make sure we don't leak syslog fd, but do it last so that
   any errors above will be logged */


Re: [Dovecot] dovecot 1.1.rc3 assertion failed at index_mailbox_set_recent_uid while deleting message with thunderbird.

2008-04-30 Thread Diego Liziero
On Wed, Apr 30, 2008 at 4:07 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:

> On Mon, 2008-03-10 at 13:17 +0100, Diego Liziero wrote:
> > On Mon, Mar 10, 2008 at 9:05 AM, Diego Liziero <[EMAIL PROTECTED]>
> wrote:
> > > Some users hit this assertion failure while deleting a message.
> > >
> > >  dovecot: Mar 10 08:40:44 Panic: IMAP(user): file index-sync.c: line
> 39
> > >  (index_mailbox_set_recent_uid): assertion failed: (seq_range_exists
> > >  (&ibox->recent_flags, uid))
>
> Wonder if this fixes it?
> http://hg.dovecot.org/dovecot-1.1/rev/abc88e664e63
>

Just updated to current tree. I'll let you know if it happens again.


Here are some more errors I got in dovecot.log with 1.1.rc4:
Corrupted index cache file /path/to/dovecot.index.cache: Broken MIME parts
for mail UID uidnumber
Corrupted index cache file /path/to/dovecot.index.cache: Broken virtual size
for mail UID uidnumber
Corrupted index cache file /path/to/dovecot.index.cache: used_file_size too
large
FETCH for mailbox mailboxname UID uidnumber got too little data: a vs a+b
UIDs broken with partial sync in mbox file
Corrupted transaction log file /path/to/dovecot.index.log: record size too
small (type=0x40, offset=7576, size=0)
Log synchronization error at seq=1,offset=24 for /path/to/dovecot.index:
Broken extension introduction: Record field points outside record size (0+16
> 8)

Diego.


Re: [Dovecot] dovecot 1.1.rc3 assertion failed at index_mailbox_set_recent_uid while deleting message with thunderbird.

2008-05-02 Thread Diego Liziero
On Wed, Apr 30, 2008 at 4:07 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:

> > On Mon, Mar 10, 2008 at 9:05 AM, Diego Liziero wrote:
> > > Some users hit this assertion failure while deleting a message.
> > >
> > >  dovecot: Mar 10 08:40:44 Panic: IMAP(user): file index-sync.c: line
> 39
> > >  (index_mailbox_set_recent_uid): assertion failed: (seq_range_exists
> > >  (&ibox->recent_flags, uid))
>
> Wonder if this fixes it?
> http://hg.dovecot.org/dovecot-1.1/rev/abc88e664e63
>

Unfortunately not.
Got it again this morning (I had only replaced the i_assert with an i_error):
dovecot: May 02 09:40:39 Error: IMAP(username):
seq_range_exists(&ibox->recent_flags, uid)


Re: [Dovecot] dovecot 1.1.rc3 assertion failed at index_mailbox_set_recent_uid while deleting message with thunderbird.

2008-05-05 Thread Diego Liziero
Timo,
I was wondering if I can help you spot the cause of this
assertion failure (hit again this morning with rc5)
by adding some i_info/debug calls and other seq_range_exists tests.

This morning, all the assertion failures were caused by users who deleted
many emails from their inbox with thunderbird, and thunderbird then tried
to move them to the imap trash folder.

seq_range_exists(&ibox->recent_flags, uid)

Regards,
Diego.


Re: [Dovecot] mbox empty messages in Sent folder

2008-05-26 Thread Diego Liziero
On Mon, May 26, 2008 at 3:34 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
> On Mon, 2008-05-26 at 15:08 +0200, Diego Liziero wrote:
>> I'm talking about the mbox Sent folder, to which some mailers append,
>> through the imap server,
>> a copy of each message they send.
>>
>> Sometimes, just three header lines get appended instead of the whole
>> mail message, such as:
>>
>> >From [EMAIL PROTECTED]  Fri May 23 12:30:14 2008
>> X-UID: 2852
>> Status: RO
>
> Are there other messages around it with the same From-line? Could this
> From-line be part of the previous message's body?

The empty mail header is part of a sent email that is missing from the
Sent mbox file.
The previous From line and the following one are correctly those
of the previous and following sent emails.
Here are the mbox lines, starting from the previous mail and ending at
the header of the following one
(in this case it seems that the actual previous mail has been deleted,
as UID 2851 is missing, but I have other examples where no UID is
missing):

---

>From [EMAIL PROTECTED]  Fri May 23 12:15:17 2008
Message-ID: <[EMAIL PROTECTED]>
Date: Fri, 23 May 2008 12:15:06 +0200
From:  xxx xxx  <[EMAIL PROTECTED]>
User-Agent: xxx x.x.x.xx (xxx/)
MIME-Version: x.x
To: xxx <[EMAIL PROTECTED]>,
 xxx  <[EMAIL PROTECTED]>
Subject:   x x xxx xxx xxx x xx ...
Content-Type: text/plain; charset=ISO-8859-15; format=flowed
Content-Transfer-Encoding: 7bit
X-UID: 2850
Status: RO
X-Keywords:
Content-Length: 336

xxx,

 xx  xxx xx xxx xxx x.xx - 
xxx xx x xx xxx 'xx .



-- 
xx xx x - xx  
xx xxx xxx
xxx xxx , x - x x (xx)
xxx.: xxx.xx
xxx:  xxx xxx.xxx
.: xxx.xxx
x: [EMAIL PROTECTED]



>From [EMAIL PROTECTED]  Fri May 23 12:30:14 2008
X-UID: 2852
Status: RO

>From [EMAIL PROTECTED]  Sat May 24 08:24:09 2008
Message-ID: <[EMAIL PROTECTED]>
Date: Sat, 24 May 2008 08:24:02 +0200
From:  xxx xxx  <[EMAIL PROTECTED]>
User-Agent: xxx x.x.x.xx (xxx/)
MIME-Version: x.x
To:   <[EMAIL PROTECTED]>
Subject: xxx xxx x ,xx  (x xxx
) 
 xxx xxx 
Content-Type: multipart/alternative;
 =""
X-UID: 2853
Status: R
X-Keywords:
Content-Length: 4591



Regards,
Diego


[Dovecot] mbox empty messages in Sent folder

2008-05-26 Thread Diego Liziero
I'm talking about the mbox Sent folder, to which some mailers append,
through the imap server,
a copy of each message they send.

Sometimes, just three header lines got appended instead of the whole
mail message, such as:

>From [EMAIL PROTECTED]  Fri May 23 12:30:14 2008
X-UID: 2852
Status: RO

This happened in the past (dovecot-1.1-beta/rc with Evolution and with
Thunderbird), and it happened again last week
with dovecot-1.1-rc5 (plus last week's hg patches).

Anyone had a similar issue?

Regards,
Diego.


Re: [Dovecot] mbox empty messages in Sent folder

2008-05-27 Thread Diego Liziero
On Mon, May 26, 2008 at 11:17 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>
> Could it be that the connection died client was trying to save it, so it
> never got there? Although it still shouldn't have left the From-line there.

The mailer didn't crash.
When it happened to me I thought it was a bug in Evolution.
When the helpdesk told me it was also happening to other users
(with Thunderbird), I wrote to this list to ask whether someone had seen a similar issue.

> Could Dovecot have crashed there?

I don't know, but nothing unusual is written to the log.

> Maybe this helps? http://hg.dovecot.org/dovecot-1.1/rev/dd9d344ba140

Thanks Timo, I'll let you know if it happens again with this patch.

Diego.


Re: [Dovecot] dovecot's deliver and SELinux

2008-05-29 Thread Diego Liziero
On Thu, 2008-05-29 at 16:48 +0200, Dan Horák wrote:
> Hello,
>
> I am the new maintainer of dovecot for Fedora and Red Hat and so I am
> trying to cleanup some old reported bugs.
> [..]

Mmm.. I was wondering whether it's worth having a look at the various
dovecot patches used by the main distributions before releasing
dovecot-1.1.

I mean, checking whether there is anything, beyond changes to default
configuration options or installation paths, that could be merged
upstream.

Regards,
Diego


[Dovecot] Panic: Trying to close mailbox INBOX with open transactions

2008-05-30 Thread Diego Liziero
Got this with dovecot-1.1 (the hg version from 2 days ago),
i.e. with http://hg.dovecot.org/dovecot-1.1/rev/c4342385d696 included.
Regards,
Diego.

-
Panic: IMAP(username): Trying to close mailbox INBOX with open transactions
(gdb) bt
#0  0x007c3402 in __kernel_vsyscall ()
#1  0x00138ba0 in raise () from /lib/libc.so.6
#2  0x0013a4b1 in abort () from /lib/libc.so.6
#3  0x080d49f0 in default_fatal_finish (type=LOG_TYPE_PANIC, status=0)
at failures.c:152
#4  0x080d523a in i_internal_fatal_handler (type=LOG_TYPE_PANIC,
status=0, fmt=0x80f6c34 "Trying to close mailbox %s with open
transactions",
args=0xbf8648c4
"8^\\\b�H\206���\005\b\210�[\bN�\016\b�H\206�NI\016\b��[\bh�[\b\030I\206��\023\006\b��[\b��[\b\bI\206�2�\016\b��[\b�$")
at failures.c:418
#5  0x080d4b2a in i_panic (format=0x80f6c34 "Trying to close mailbox
%s with open transactions") at failures.c:193
#6  0x080a5126 in mailbox_close (_box=0x6) at mail-storage.c:471
#7  0x0805edb8 in cmd_logout (cmd=0x85bd7d8) at cmd-logout.c:18
#8  0x080613dc in client_command_cancel (_cmd=0x85bd3c4) at client.c:75
#9  0x08061662 in client_destroy (client=0x85bd368, reason=0x80eee5b
"Connection closed") at client.c:134
#10 0x0806285e in client_input (client=0x85bd368) at client.c:697
#11 0x080df05a in io_loop_handler_run (ioloop=0x85bb9b0) at ioloop-epoll.c:201
#12 0x080de2a2 in io_loop_run (ioloop=0x85bb9b0) at ioloop.c:308
#13 0x0806dd14 in main (argc=1, argv=0xbf864ab4, envp=0xbf864abc) at main.c:293


[Dovecot] dovecot-1.1.rc7 Panic: POP3(username): file index-mail.c: line 1007: unreached

2008-05-30 Thread Diego Liziero
And I got this regression with dovecot-1.1.rc7 on every POP3
connection; the hg version from 2 days ago was working fine.

For now I'm back on the hg revision before "Message sort index handling rewrite".

Regards,
Diego.
---
Panic: POP3(username): file index-mail.c: line 1007: unreached

(gdb) bt
#0  0x00b91402 in __kernel_vsyscall ()
#1  0x00725ba0 in raise () from /lib/libc.so.6
#2  0x007274b1 in abort () from /lib/libc.so.6
#3  0x080c3f0d in default_fatal_finish (type=,
status=0) at failures.c:152
#4  0x080c3f5a in i_internal_fatal_handler (type=LOG_TYPE_PANIC,
status=0, fmt=0x80d8210 "file %s: line %d: unreached", args=0xbfeec684
"3�\r\b�\003") at failures.c:418
#5  0x080c37fc in i_panic (format=0x80d8210 "file %s: line %d:
unreached") at failures.c:193
#6  0x08089a60 in index_mail_get_special (_mail=0x963cbe0, field=6,
value_r=0xbfeec70c) at index-mail.c:1007
#7  0x0805aac5 in list_uids_iter (client=0x9636f68, ctx=0x96389a0) at
commands.c:566
#8  0x0805ad31 in client_command_execute (client=0x9636f68,
name=0x963b37c "UIDL", args=0x80dc0c2 "") at commands.c:642
#9  0x08059f92 in client_input (client=0x9636f68) at client.c:413
#10 0x080cb7c0 in io_loop_handler_run (ioloop=0x96359b0) at ioloop-epoll.c:201
#11 0x080ca9c8 in io_loop_run (ioloop=0x96359b0) at ioloop.c:308
#12 0x0805bbfc in main (argc=Cannot access memory at address 0x4c52
) at main.c:275


Re: [Dovecot] dovecot-1.1.rc7 Panic: POP3(username): file index-mail.c: line 1007: unreached

2008-05-31 Thread Diego Liziero
On Sat, May 31, 2008 at 12:47 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
> On Fri, 2008-05-30 at 23:19 +0200, Diego Liziero wrote:
>> And got this regression with dovecot-1.1.rc7
>> ---
>> Panic: POP3(username): file index-mail.c: line 1007: unreached
>
> Broke with non-maildir formats. :( Fixed:
> http://hg.dovecot.org/dovecot-1.1/rev/8e7a15987428

Tested. Now it works again,
thanks.

Diego.


[Dovecot] mbox: extra linefeed after Content-Length header in 1.1.rc8

2008-06-03 Thread Diego Liziero
mbox messages get header corruption caused by an extra linefeed after
the Content-Length header.

Users see their mails in the Sent mbox folder without the From and To
fields, without attachments, and with a date of 1/1/1970.

Diego.
---
Here is an anonymized header:

>From [EMAIL PROTECTED]  Tue Jun 03 09:14:33 2008
Message-ID: <[EMAIL PROTECTED]>
X-UID: 3913
Status: RO
X-Keywords:
Content-Length: 6817

: xxx, xx xxx  xx:xx:xx +
: xxx  <[EMAIL PROTECTED]>
-x: xxx x.x.x.x (xxx/)
-xxx: x.x
xx: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
xx:  [EMAIL PROTECTED],
 xx x <[EMAIL PROTECTED]>,
 xxx xxx <[EMAIL PROTECTED]>
xxx: xx: x: xx: x
xx: <[EMAIL PROTECTED]>
xx-x-xx: <[EMAIL PROTECTED]>
xxx-: /x; xxx=xxx-x; xx=xx
xxx--: 


Re: [Dovecot] mbox empty messages in Sent folder

2008-06-04 Thread Diego Liziero
On Tue, May 27, 2008 at 5:43 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>> > Maybe this helps? http://hg.dovecot.org/dovecot-1.1/rev/dd9d344ba140
>>
>> Thanks Timo, I'll let you know if it happens again with this patch.
>
> That only causes the empty message not to be written at all, it doesn't
> help about losing saved messages.

Among the various emails with header corruption I got with 1.1.rc8,
one is similar to the one in this bug report: just the first five
header lines, with neither the body nor the remaining header lines.

But this time there are two header lines more: X-Keywords and Content-Length.

---
>From [EMAIL PROTECTED]  Mon Jun 02 23:53:29 2008
X-UID: 7863
Status: R
X-Keywords:
Content-Length: 0
---

Actually this is the Content-Length after selecting the mail with
Evolution and getting this error in dovecot.log:

Error: IMAP(username): FETCH for mailbox sent-mail UID 7863 got too
little data: 2 vs 615
Error: IMAP(username): Corrupted index cache file
/mailhome/username/.imap/sent-mail/dovecot.index.cache: Broken virtual
size for mail UID 7863

I don't know whether the original email's content length was correctly
written in the header or in the index.

Diego.


Re: [Dovecot] mbox: extra linefeed after Content-Length header in 1.1.rc8

2008-06-04 Thread Diego Liziero
On Tue, Jun 3, 2008 at 3:05 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
> On Tue, 2008-06-03 at 10:34 +0200, Diego Liziero wrote:
>> mbox messages get header corruption caused by an extra linefeed after
>> Content-Length
>
> Fixed: http://hg.dovecot.org/dovecot-1.1/rev/e043135e971d

Works, thank you.

Now I have to fix the users' mbox files.

As the extra linefeed is between the Content-Length and Subject
headers, I'm thinking of using a regexp-based replacement such as
s/(Content-Length: [0-9]+)\n\n(Subject: )/$1\n$2/s
but I can't work out how to make multi-line matching work.

Any suggestion?

Regards,
Diego.


Re: [Dovecot] [solved] mbox: extra linefeed after Content-Length header in 1.1.rc8

2008-06-05 Thread Diego Liziero
> On Wed, 2008-06-04 at 23:59 +0200, Diego Liziero wrote:
> As the extra linefeed is between Content-Length and Subject headers,
> I'm thinking about using a regexp based replace such as
> s/(Content-Length: [0-9]+)\n\n(Subject: )/$1\n$2/s
> but I can't find how to make multiple lines matching work.
>
> Any suggestion?

Thank you everyone for your help.
After some quick tries, and following your suggestions, I ended up
writing a small perl script that matched the three lines one by one
and printed only the first and third.

On Thu, Jun 5, 2008 at 12:07 AM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>
> Perl maybe? Something like (not tested):
>
> perl -pe 'BEGIN { $/ = ""; } s/^(Content-Length: [0-9]+)\n\n(Subject: 
> )/$1\n$2/g' < mbox > mbox2
>
> $/ changes the line separator.

Almost right, but this deletes all empty lines, not just the ones in
the header (presumably because in paragraph mode the blank-line record
separators are consumed on input and never printed back). I didn't
look into it more deeply.

On Thu, Jun 5, 2008 at 8:16 AM,  <[EMAIL PROTECTED]> wrote:
> That would be the s modifier for a Perl regexp (treat string as a single
> line):
>
>  $x =~ /.../s

This should be the right way.. see below.

On Thu, Jun 5, 2008 at 12:03 AM, Asheesh Laroia <[EMAIL PROTECTED]> wrote:
>Python has an re.MULTILINE option you can pass to the regular expression so 
>that it can cross lines.  Perhaps Perl >or your favorite regular expression 
>toolkit has something similar?

Yes, but with perl I couldn't quickly find a way to match across
multiple lines without reading the whole file into memory, which isn't
feasible when the files are several gigabytes big.

Regards,
Diego.


Re: [Dovecot] While searching: Assertion failed (offset >= ctx->input->v_offset)

2008-06-20 Thread Diego Liziero
On Friday 20 June 2008, Timo Sirainen wrote:
> On Fri, 2008-06-20 at 09:42 +0200, Diego Liziero wrote:
> >
> > Timo,
> > here is an anonymized mbox file that causes it at every body search
> > (tested with rc12).
>
> Did you test it without index files? I couldn't reproduce the crash.

Deleting the index files fixed the crash. I sent the broken index
files privately to Timo.

Diego.


Re: [Dovecot] v1.1.rc13 released

2008-06-20 Thread Diego Liziero
Any chance of having this assert converted to an error as a last patch before 1.1?

Or am I the only one still getting this in rc13?

Regards,
Diego

--- ./src/lib-storage/index/index-sync.c-orig   2008-03-13 16:46:36.0 +0100
+++ ./src/lib-storage/index/index-sync.c        2008-03-13 16:51:38.0 +0100
@@ -36,7 +36,9 @@
 void index_mailbox_set_recent_uid(struct index_mailbox *ibox, uint32_t uid)
 {
if (uid <= ibox->recent_flags_prev_uid) {
-   i_assert(seq_range_exists(&ibox->recent_flags, uid));
+   /*i_assert(seq_range_exists(&ibox->recent_flags, uid));*/
+   if (!seq_range_exists(&ibox->recent_flags, uid))
+   i_error("seq_range_exists(&ibox->recent_flags, uid) uid=%d",uid);
return;
}
ibox->recent_flags_prev_uid = uid;


Re: [Dovecot] mbox empty messages in Sent folder

2008-06-23 Thread Diego Liziero
On Tue, May 27, 2008 at 3:48 PM, Diego Liziero <[EMAIL PROTECTED]> wrote:
> On Mon, May 26, 2008 at 11:17 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>>
>> Could it be that the connection died client was trying to save it, so it
>> never got there? Although it still shouldn't have left the From-line there.
>
> The mailer didn't crashed.
> When it happened to me I thought about a bug of Evolution.
> When the helpdesk told me that is happening also to other users
> (with thunderbird), I wrote to this list to ask if someone had a similar 
> issue.
>
>> Could Dovecot have crashed there?
>
> I don't know, however nothing unusual is written to the log.
>
>> Maybe this helps? http://hg.dovecot.org/dovecot-1.1/rev/dd9d344ba140
>
> Thanks Timo, I'll let you know if it happens again with this patch.

Unfortunately it happened again with dovecot-1.1.rc13 (=1.1.0)

This time using horde/imp

Diego.


[Dovecot] dovecot 1.1.2, another assertion failure in index-mail.c: (!mail->data.destroying_stream)

2008-07-25 Thread Diego Liziero
Here is the log.

---
dovecot: Jul 25 07:18:35 Panic: IMAP(user): file index-mail.c: line
1091 (index_mail_close): assertion failed:
(!mail->data.destroying_stream)
dovecot: Jul 25 07:18:35 Error: IMAP(user): Raw backtrace:
/usr/libexec/dovecot/imap [0x80f5fd4] -> /usr/libexec/dovecot/imap
[0x80f6877] -> /usr/libexec/dovecot/imap(i_fatal+0) [0x80f6130] ->
/usr/libexec/dovecot/imap(index_mail_close+0xf9) [0x80a8129] ->
/usr/libexec/dovecot/imap [0x80a8148] ->
/usr/libexec/dovecot/imap(index_mail_set_seq+0x50) [0x80a8302] ->
/usr/libexec/dovecot/imap [0x8087e3b] ->
/usr/libexec/dovecot/imap(mail_set_seq+0x21) [0x80b391c] ->
/usr/libexec/dovecot/imap(index_storage_search_next_nonblock+0xcb)
[0x80ad3e8] -> /usr/libexec/dovecot/imap(mailbox_search_next_nonblock+0x26)
[0x80b6a97] -> /usr/libexec/dovecot/imap(mailbox_search_next+0x2c)
[0x80b6a63] -> /usr/libexec/dovecot/imap [0x805b757] ->
/usr/libexec/dovecot/imap(cmd_copy+0x1c4) [0x805b98e] ->
/usr/libexec/dovecot/imap(cmd_uid+0xbb) [0x80610d3] ->
/usr/libexec/dovecot/imap [0x8062494] -> /usr/libexec/dovecot/imap
[0x80626c9] -> /usr/libexec/dovecot/imap [0x80627c7] ->
/usr/libexec/dovecot/imap [0x8062803] ->
/usr/libexec/dovecot/imap(client_input+0xb7) [0x8062991] ->
dovecot: Jul 25 07:18:35 Error: IMAP(user):
/usr/libexec/dovecot/imap(io_loop_handler_run+0x17d) [0x8100630] ->
/usr/libexec/dovecot/imap(io_loop_run+0x35) [0x80ff8de] ->
/usr/libexec/dovecot/imap(main+0xe4) [0x806de1d] ->
/lib/libc.so.6(__libc_start_main+0xdc) [0x42bdec] ->
/usr/libexec/dovecot/imap [0x805a1a1]
dovecot: Jul 25 07:18:35 Error: child 27074 (imap) killed with signal 6
dovecot: Jul 25 07:18:35 Error: IMAP(user): Corrupted transaction log
file /mailhome/user/.imap/Trash/dovecot.index.log: record size too
small (type=0x40, offset=56, size=0)
dovecot: Jul 25 07:18:36 Error: IMAP(user): Corrupted transaction log
file /mailhome/user/.imap/Trash/dovecot.index.log: record size too
small (type=0x40, offset=56, size=0)
dovecot: Jul 25 07:19:33 Error: IMAP(user): Corrupted transaction log
file /mailhome/user/.imap/Trash/dovecot.index.log: record size too
small (type=0x40, offset=56, size=0)
dovecot: Jul 25 07:19:42 Error: IMAP(user): Corrupted transaction log
file /mailhome/user/.imap/Trash/dovecot.index.log: record size too
small (type=0x40, offset=56, size=0)
dovecot: Jul 25 07:19:48 Warning: IMAP(user): fscking index file
(in-memory index)
dovecot: Jul 25 07:19:48 Error: IMAP(user): Transaction log got
desynced for index (in-memory index)


[Dovecot] bugzilla or other similar bug tracking systems

2008-07-31 Thread Diego Liziero
I was wondering if it could be useful to use such tools to keep track
of user bugs.

I find it somewhat harder, searching the mailing list, to tell whether
a bug is known, whether it's being worked on, whether it needs more
feedback, in which release it was eventually solved, and so on.

Any thoughts?

Regards,
Diego.


Re: [Dovecot] mbox empty messages in Sent folder

2008-07-31 Thread Diego Liziero
On Mon, Jun 23, 2008 at 11:03 AM, Diego Liziero <[EMAIL PROTECTED]> wrote:
> On Tue, May 27, 2008 at 3:48 PM, Diego Liziero <[EMAIL PROTECTED]> wrote:
>> On Mon, May 26, 2008 at 11:17 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>>
>>> Maybe this helps? http://hg.dovecot.org/dovecot-1.1/rev/dd9d344ba140
>>
>> Thanks Timo, I'll let you know if it happens again with this patch.
>
> Unfortunately it happened again with dovecot-1.1.rc13 (=1.1.0)

And many users complain about this with the latest dovecot 1.1.2 as well.

In the last 2 days it happened 7 times to my user. I have the rawlog and
valgrind logs.
Nothing wrong according to valgrind, and nothing strange in the rawlog.

Is there any further debugging I can do?


Re: [Dovecot] Dovecot 1.1.1 killed with SIGABRT

2008-08-01 Thread Diego Liziero
On Mon, Jul 21, 2008 at 8:13 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
> On Mon, 2008-07-21 at 18:08 +0200, Andreas M. Kirchwitz wrote:
>>  >> Jul 12 01:04:45 linux dovecot: Panic: IMAP(user2): file index-sync.c: 
>> line 39 (index_mailbox_set_recent_uid): assertion failed: 
>> (seq_range_exists(&ibox->recent_flags, uid))
>
> I think I finally managed to fix this:
> http://hg.dovecot.org/dovecot-1.1/rev/48bbaf0c3e4d

Unfortunately I just got it again with dovecot 1.1.2:
seq_range_exists(&ibox->recent_flags, uid) ibox->recent_flags=164556344 uid=8


Re: [Dovecot] bugzilla or other similar bug tracking systems

2008-08-03 Thread Diego Liziero
On Fri, Aug 1, 2008 at 1:12 AM, Timo Sirainen <[EMAIL PROTECTED]> wrote:

> On Aug 1, 2008, at 12:51 AM, Diego Liziero wrote:
>
>  I was wondering if it could be useful to use such tools to keep track
>> of users bugs.
>>
>> I find somehow harder to search the mailing list if a bug is known, if
>> it's being worked on, if it needs more feedback, in witch release it
>> has been eventually solved, and so on.
>>
>
> I'd like to have a bug tracking system that basically keeps track of
> threads in this mailing list that have bugs and with some extra metadata
> assigned to them about the bug states etc. But that'd require 1) ANNOTATE
> extension for Dovecot and 2) Writing the bug tracking system itself..
>
> I don't really want to have a separate bug tracking system where I'd be the
> only one handling the bug reports.


Ok, I'll wait for that. My belief is that a public bug tracking system,
whichever one you choose, could bring more people into the bug-fixing process.

Who knows, maybe during the summer holidays someone who would like to
spend a few weeks helping this project could find an easy way to start.


Re: [Dovecot] mbox empty messages in Sent folder

2008-08-03 Thread Diego Liziero
On Fri, Aug 1, 2008 at 1:09 AM, Timo Sirainen <[EMAIL PROTECTED]> wrote:

> On Aug 1, 2008, at 1:49 AM, Diego Liziero wrote:
>
>  On Mon, Jun 23, 2008 at 11:03 AM, Diego Liziero <[EMAIL PROTECTED]>
>> wrote:
>>
>>> On Tue, May 27, 2008 at 3:48 PM, Diego Liziero <[EMAIL PROTECTED]>
>>> wrote:
>>>
>>>> On Mon, May 26, 2008 at 11:17 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>>>>
>>>>  Maybe this helps? http://hg.dovecot.org/dovecot-1.1/rev/dd9d344ba140
>>>>>
>>>>
>>>> Thanks Timo, I'll let you know if it happens again with this patch.
>>>>
>>>
>>> Unfortunately it happened again with dovecot-1.1.rc13 (=1.1.0)
>>>
>>
>> And many users complains about this also with latest dovecot 1.1.2
>>
>> In the last 2 days it happened 7 times to my user. I've the rawlog and
>> valgrind logs.
>> Nothing wrong according to valgrind, and nothing strange in the rawlog.
>>
>> Is there any further debug I can use?
>>
>
> You mean the messages show up as APPENDed in the rawlog with an OK reply?


It seems so;
I privately sent you the rawlog and the last part of a sent-mail mailbox.

Is anyone besides me seeing this? (random empty mails in the Sent folder)

In the last two weeks users have been complaining almost every day
about lost sent mails.

Regards,
Diego.


Re: [Dovecot] mbox empty messages in Sent folder

2008-08-04 Thread Diego Liziero
On Mon, Aug 4, 2008 at 12:07 AM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
> On Aug 3, 2008, at 10:31 PM, Diego Liziero wrote:
>
>> It seems so,
>> I sent you privately the rawlog and the last part of a sent-mail mailbox.
>
> The interesting thing about that rawlog was that it shows the APPEND
> returning it saved the message with UID x, but in the mbox file there's no
> UID x, but there is the empty message with UID x+1.
>
>> Anyone besides me is seeing this? (random empty mails in Sent folder)
>
> I'm using mboxes all the time, never seen this..
>
> Perhaps if you put all processes through strace (-s 100) and when it
> again happens for some user send me the strace? Although I'd guess it shows
> that the message was properly written to the mbox file. The real question is
> then what truncates the mbox file..

I just sent Timo the strace of the same mail sent twice: the first
time it appeared empty, the second time it was correctly written to
the sent-mail folder.


Re: [Dovecot] mbox empty messages in Sent folder

2008-08-06 Thread Diego Liziero
On Mon, Aug 4, 2008 at 4:17 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>
> Maybe this helps? http://hg.dovecot.org/dovecot-1.1/rev/8ab845d3c96d
>

It seems so, thanks Timo.

With this patch, so far, all sent mails are correctly written to the
"Sent" folder; I'll let you know if I've just been lucky :)

BTW I didn't succeed in reproducing this issue with imaptest; what was
the trick to trigger it?


Re: [Dovecot] Dovecot 1.1.1 killed with SIGABRT

2008-08-08 Thread Diego Liziero
On Fri, Aug 1, 2008 at 12:00 PM, Diego Liziero <[EMAIL PROTECTED]> wrote:
> On Mon, Jul 21, 2008 at 8:13 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>> On Mon, 2008-07-21 at 18:08 +0200, Andreas M. Kirchwitz wrote:
>>>  >> Jul 12 01:04:45 linux dovecot: Panic: IMAP(user2): file index-sync.c: 
>>> line 39 (index_mailbox_set_recent_uid): assertion failed: 
>>> (seq_range_exists(&ibox->recent_flags, uid))
>>
>> I think I finally managed to fix this:
>> http://hg.dovecot.org/dovecot-1.1/rev/48bbaf0c3e4d
>
> Unfortunately just got it again with dovecot 1.1.2
> seq_range_exists(&ibox->recent_flags, uid)

Got it again; I have the executable and the core file if you need them.

BTW, I'm getting this assertion failure definitely less frequently than before.

Could it just be a leftover from the previous unpatched version?

Regards,
Diego.



Panic: IMAP(user): file index-sync.c: line 42
(index_mailbox_set_recent_uid): assertion failed:
(seq_range_exists(&ibox->recent_flags, uid))
Error: IMAP(user): Raw backtrace: /usr/libexec/dovecot/imap
[0x80f609c] -> /usr/libexec/dovecot/imap [0x80f693f] ->
/usr/libexec/dovecot/imap(i_fatal+0) [0x80f61f8] ->
/usr/libexec/dovecot/imap(index_mailbox_set_recent_uid+0xbc)
[0x80b201f] -> /usr/libexec/dovecot/imap(index_mailbox_set_recent_seq+0x33)
[0x80b2095] -> /usr/libexec/dovecot/imap [0x808f02b] ->
/usr/libexec/dovecot/imap [0x808f64d] -> /usr/libexec/dovecot/imap
[0x808ffb7] -> /usr/libexec/dovecot/imap(mbox_sync+0x2b) [0x8090243]
-> /usr/libexec/dovecot/imap [0x80851a1] ->
/usr/libexec/dovecot/imap(mail_index_transaction_commit+0x59)
[0x80c6df2] -> /usr/libexec/dovecot/imap(index_transaction_commit+0x65)
[0x80b3901] -> 
/usr/libexec/dovecot/imap(mailbox_transaction_commit_get_uids+0x50)
[0x80b6bce] -> /usr/libexec/dovecot/imap [0x805a994] ->
/usr/libexec/dovecot/imap [0x805a3a0] ->
/usr/libexec/dovecot/imap(io_loop_handler_run+0x17d) [0x81006e8] ->
/usr/libexec/dovecot/imap(io_loop_run+0x35) [0x80ff996] ->
/usr/libexec/dovecot/imap(main+0xe4) [0x806de3d] -> /lib/li
Error: IMAP(user): bc.so.6(__libc_start_main+0xdc) [0x42bdec] ->
/usr/libexec/dovecot/imap [0x805a1a1]
Error: child 14026 (imap) killed with signal 6

(gdb) bt full
No symbol table info available.
#1  0x0043ed20 in raise () from /lib/libc.so.6
No symbol table info available.
#2  0x00440631 in abort () from /lib/libc.so.6
No symbol table info available.
#3  0x080f60be in default_fatal_finish (type=LOG_TYPE_PANIC, status=0)
at failures.c:149
backtrace = 0x9f34738 "/usr/libexec/dovecot/imap [0x80f609c]
-> /usr/libexec/dovecot/imap [0x80f693f] ->
/usr/libexec/dovecot/imap(i_fatal+0) [0x80f61f8] ->
/usr/libexec/dovecot/imap(index_mailbox_set_recent_uid+0xbc) [0x80"...
#4  0x080f693f in i_internal_fatal_handler (type=LOG_TYPE_PANIC,
status=0, fmt=0x8119ea0 "file %s: line %d (%s): assertion failed:
(%s)",
args=0xbfe8da34 "\223\236\021\b*") at failures.c:423
No locals.
#5  0x080f61f8 in i_panic (format=0x8119ea0 "file %s: line %d (%s):
assertion failed: (%s)") at failures.c:190
args = 0xbfe8da34 "\223\236\021\b*"
#6  0x080b201f in index_mailbox_set_recent_uid (ibox=0x9f46e48,
uid=14) at index-sync.c:42
__PRETTY_FUNCTION__ = "index_mailbox_set_recent_uid"
#7  0x080b2095 in index_mailbox_set_recent_seq (ibox=0x9f46e48,
view=0x9f50c10, seq1=2, seq2=7) at index-sync.c:59
uid = 14
#8  0x0808f02b in mbox_sync_update_index_header (sync_ctx=0xbfe8dc04)
at mbox-sync.c:1424
view = (struct mail_index_view *) 0x9f50c10
st = (const struct stat *) 0x9f580c0
first_recent_uid = 0
seq = 2
seq2 = 7
__PRETTY_FUNCTION__ = "mbox_sync_update_index_header"
#9  0x0808f64d in mbox_sync_do (sync_ctx=0xbfe8dc04, flags=18) at
mbox-sync.c:1560
mail_ctx = {sync_ctx = 0xbfe8dc04, mail = {uid = 19, idx_seq =
7, keywords = {arr = {buffer = 0x0, element_size = 0}, v = 0x0,
  v_modifiable = 0x0}, flags = 40 '(', uid_broken = 0, expunged =
0, pseudo = 0, from_offset = 7366, body_size = 414, offset = 8025,
space = 70}, seq = 7, hdr_offset = 7433, body_offset = 8117,
header_first_change = 4294967295, header_last_change = 0,
  header = 0x9f57c80, hdr_md5_sum =
"){\023??\\??y\226???Z\221?", content_length = 414,
hdr_pos = {578, 4294967295, 592, 4294967295,
567}, parsed_uid = 19, last_uid_updated_value = 0,
last_uid_value_start_pos = 0, have_eoh = 1, need_rewrite = 0,
seen_imapbase = 0,
  updated = 0, recent = 1, dirty = 0, imapbase_rewrite = 0,
imapbase_updated = 0}
st = (const struct stat *) 0x9f580c0
i = 0
ret = 1
partial = 1
#10 0x0808ffb7 in mbox_sync_int (mbox=0x9f46e48, flags=18) at mbox-sync.c:1803
index_sync_ctx = (struct mail_index_sy

Re: [Dovecot] mbox empty messages in Sent folder

2008-08-13 Thread Diego Liziero
On Wed, Aug 6, 2008 at 4:26 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
> On Aug 6, 2008, at 6:11 AM, Diego Liziero wrote:
>> On Mon, Aug 4, 2008 at 4:17 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>>>
>>> Maybe this helps? http://hg.dovecot.org/dovecot-1.1/rev/8ab845d3c96d
>>>
>>
>> It seems so, thanks Timo.
>>
>> With this patch, by now, all sent mails are correctly written in
>> "Sent" folder, I'let you know if I've just been lucky :)

Definitely solved. I asked the users who complained most to test
whether it's fixed, and they say "yes".
The most affected client was the horde/imp webmail.

Thanks again Timo.

>> BTW I didn't succeed in reproducing this issue with imaptest, what was
>> the trick to trigger it?
>
> I'm not sure if there's an easy way to reproduce it. You'd have to cause the
> first read to return EAGAIN but the second read that comes only microseconds
> later to return the entire message. Perhaps if imaptest sent first the
> APPEND command, then did a small pause and after that sent the message.

Mmm.. I tried commenting out the "cork" part and adding a random sleep
(roughly 10% of the time) after sending the command:
if (!(rand()%9)) usleep(rand()%500);

and I started getting the famous "Error: IMAP(testdove): FETCH for
mailbox INBOX UID xxx got too little data: yyy vs zzz" instead.

Regards,
Diego.
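The read pattern Timo describes (a first read that returns EAGAIN, then a later read that gets the entire message at once) can be illustrated with a non-blocking socket pair. This is a standalone demonstration, not imaptest or Dovecot code, and the APPEND bytes are just illustrative:

```python
import socket
import threading
import time

def demo_eagain():
    """Show the race window: the reader polls a non-blocking socket,
    the first read finds nothing (EAGAIN, surfaced in Python as
    BlockingIOError), and a second read gets the whole message because
    the writer paused before sending it."""
    reader, writer = socket.socketpair()
    reader.setblocking(False)

    def send_later():
        time.sleep(0.05)                 # the pause that opens the window
        writer.sendall(b"APPEND {5}\r\nhello")
        writer.close()

    t = threading.Thread(target=send_later)
    t.start()
    events = []
    try:
        reader.recv(4096)                # nothing sent yet
        events.append("data")
    except BlockingIOError:
        events.append("EAGAIN")
    time.sleep(0.2)                      # by now everything has arrived
    events.append(reader.recv(4096))
    t.join()
    reader.close()
    return events
```

This is presumably why inserting a pause into imaptest between the command and the message body was suggested as a way to widen the window.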


[Dovecot] assertion failure in current hg, file istream.c: line 303 (i_stream_read_data): assertion failed: (stream->stream_errno != 0) (1.1.3 was working fine)

2008-09-16 Thread Diego Liziero
Today I updated to the current dovecot-1.1 hg tree and got many of
these assertion failures:

file istream.c: line 303 (i_stream_read_data): assertion failed:
(stream->stream_errno != 0)
(gdb) bt full
#0  0x00352402 in __kernel_vsyscall ()
No symbol table info available.
#1  0x0043ed20 in raise () from /lib/libc.so.6
No symbol table info available.
#2  0x00440631 in abort () from /lib/libc.so.6
No symbol table info available.
#3  0x080f6968 in default_fatal_finish (type=LOG_TYPE_PANIC, status=0)
at failures.c:150
backtrace = 0x906d950 "/usr/libexec/dovecot/imap [0x80f6946]
-> /usr/libexec/dovecot/imap [0x80f7207] ->
/usr/libexec/dovecot/imap(i_fatal+0) [0x80f6ac0] ->
/usr/libexec/dovecot/imap(i_stream_read_data+0.
#4  0x080f7207 in i_internal_fatal_handler (type=LOG_TYPE_PANIC,
status=0, fmt=0x8124778 "file %s: line %d (%s): assertion failed:
(%s)", args=0xbfe643a4 "mG\022\b/\001") at failures.c:430
No locals.
#5  0x080f6ac0 in i_panic (format=0x8124778 "file %s: line %d (%s):
assertion failed: (%s)") at failures.c:197
args = 0xbfe643a4 "mG\022\b/\001"
#6  0x080fcb64 in i_stream_read_data (stream=0x9094a08,
data_r=0xbfe6, size_r=0xbfe64440, threshold=0) at istream.c:303
ret = -1
read_more = false
__PRETTY_FUNCTION__ = "i_stream_read_data"
#7  0x08065ba1 in imap_fetch_send (ctx=0x90716e0, output=0x90714ec,
input=0x9094a08, cr_skipped=false, virtual_size=169,
add_missing_eoh=false, last_cr=0x9071734) at imap-fetch-body.c:132
msg = (const unsigned char *) 0x0
i = 3219539032
size = 0
vsize_left = 169
sent = 0
ret = 581237648214344776
add = 16 '\020'
blocks = false
#8  0x08065d7f in fetch_stream_send (ctx=0x90716e0) at imap-fetch-body.c:217
ret = 151603720
#9  0x080660e8 in fetch_stream (ctx=0x90716e0, size=0x908efc4) at
imap-fetch-body.c:293
input = (struct istream *) 0x10b8df
#10 0x08066225 in fetch_data (ctx=0x90716e0, body=0x9071cd8,
size=0x908efc4) at imap-fetch-body.c:318
str = (string_t *) 0x906cc18
#11 0x08066ac7 in fetch_body_mime (ctx=0x90716e0, mail=0x908e9e8,
body=0x9071cd8) at imap-fetch-body.c:555
part = (const struct message_part *) 0x908efb0
section = 0x9071d12 "MIME"
__PRETTY_FUNCTION__ = "fetch_body_mime"
#12 0x08064bad in imap_fetch_more (ctx=0x90716e0) at imap-fetch.c:309
h = (const struct imap_fetch_context_handler *) 0x90718c8
_data_stack_cur_id = 4
client = (struct client *) 0x9071368
handlers = (const struct imap_fetch_context_handler *) 0x9071850
count = 7
ret = 1
__PRETTY_FUNCTION__ = "imap_fetch_more"
#13 0x08064dd4 in imap_fetch (ctx=0x90716e0) at imap-fetch.c:361
ret = 0
__PRETTY_FUNCTION__ = "imap_fetch"
#14 0x0805c792 in cmd_fetch (cmd=0x9071600) at cmd-fetch.c:152
ctx = (struct imap_fetch_context *) 0x90716e0
args = (const struct imap_arg *) 0x9075688
search_arg = (struct mail_search_arg *) 0x9071678
messageset = 0x9075750 "4068"
ret = 151455048
#15 0x08061173 in cmd_uid (cmd=0x9071600) at cmd-uid.c:26
command = (struct command *) 0x907068c
cmd_name = 0x9075738 "fetch"
#16 0x08062534 in client_command_input (cmd=0x9071600) at client.c:580
client = (struct client *) 0x9071368
command = (struct command *) 0x23
__PRETTY_FUNCTION__ = "client_command_input"
#17 0x08062769 in client_command_input (cmd=0x9071600) at client.c:629
client = (struct client *) 0x9071368
command = (struct command *) 0x9070668
__PRETTY_FUNCTION__ = "client_command_input"
#18 0x08062867 in client_handle_next_command (client=0x9071368,
remove_io_r=0xbfe64765) at client.c:670
size = 132
#19 0x080628a3 in client_handle_input (client=0x9071368) at client.c:680
_data_stack_cur_id = 3
ret = 7
remove_io = false
handled_commands = false
#20 0x08062a31 in client_input (client=0x9071368) at client.c:725
cmd = (struct client_command_context *) 0x4e7408
output = (struct ostream *) 0x90714ec
bytes = 132
__PRETTY_FUNCTION__ = "client_input"
#21 0x08101115 in io_loop_handler_run (ioloop=0x906f9b0) at ioloop-epoll.c:203
ctx = (struct ioloop_handler_context *) 0x906faa8
events = (struct epoll_event *) 0x906fae8
event = (const struct epoll_event *) 0x906fae8
list = (struct io_list *) 0x9071568
io = (struct io_file *) 0x9071548
tv = {tv_sec = 1799, tv_usec = 999874}
events_count = 3
t_id = 2
msecs = 180
ret = 1
i = 0
j = 0
call = true
#22 0x081003ac in io_loop_run (ioloop=0x906f9b0) at ioloop.c:320
No locals.
#23 0x0806debd in main (argc=1, argv=0xbfe648c4, envp=0xbfe648cc) at main.c:293
No locals.


[Dovecot] another assertion failure in current 1.1 hg (1.1.3 was working fine) - file message-address.c: line 43 (parse_local_part): assertion failed: (ctx->parser.data != ctx->parser.end)

2008-09-16 Thread Diego Liziero
file message-address.c: line 43 (parse_local_part): assertion failed:
(ctx->parser.data != ctx->parser.end)

#0  0x001b3402 in __kernel_vsyscall ()
No symbol table info available.
#1  0x0043ed20 in raise () from /lib/libc.so.6
No symbol table info available.
#2  0x00440631 in abort () from /lib/libc.so.6
No symbol table info available.
#3  0x080f6968 in default_fatal_finish (type=LOG_TYPE_PANIC, status=0)
at failures.c:150
backtrace = 0x90e9ed8 "/usr/libexec/dovecot/imap [0x80f6946]
-> /usr/libexec/dovecot/imap [0x80f7207] ->
/usr/libexec/dovecot/imap(i_fatal+0) [0x80f6ac0] ->
/usr/libexec/dovecot/imap [0x80eaee7] -> /usr/.
#4  0x080f7207 in i_internal_fatal_handler (type=LOG_TYPE_PANIC,
status=0, fmt=0x8122bf0 "file %s: line %d (%s): assertion failed:
(%s)", args=0xbfe92c54 "�+\022\b+") at failures.c:430
No locals.
#5  0x080f6ac0 in i_panic (format=0x8122bf0 "file %s: line %d (%s):
assertion failed: (%s)") at failures.c:197
args = 0xbfe92c54 "�+\022\b+"
#6  0x080eaee7 in parse_local_part (ctx=0xbfe92d78) at message-address.c:43
ret = 0
__PRETTY_FUNCTION__ = "parse_local_part"
#7  0x080eb373 in parse_addr_spec (ctx=0xbfe92d78) at message-address.c:163
ret = 24
ret2 = -1
#8  0x080eb49d in parse_mailbox (ctx=0xbfe92d78) at message-address.c:200
start = (const unsigned char *) 0x90fba9f ""
ret = -1
#9  0x080eb503 in parse_mailbox_list (ctx=0xbfe92d78) at message-address.c:214
ret = 151951304
#10 0x080eb5c9 in parse_group (ctx=0xbfe92d78) at message-address.c:246
ret = 152025736
#11 0x080eb64c in parse_address (ctx=0xbfe92d78) at message-address.c:268
start = (const unsigned char *) 0x90fba88 "undisclosed-recipients:"
ret = 151951000
#12 0x080eb68b in parse_address_list (ctx=0xbfe92d78,
max_addresses=4294967294) at message-address.c:283
ret = 0
#13 0x080eb78f in message_address_parse_real (pool=0x90e9500,
data=0x90fba88 "undisclosed-recipients:", size=23,
max_addresses=4294967295, fill_missing=true) at message-address.c:320
ctx = {pool = 0x90e9500, parser = {data = 0x90fba9f "", end =
0x90fba9f "", last_comment = 0x90e9570}, first_addr = 0x90e97c8,
last_addr = 0x90e97c8, addr = {next = 0x0, name = 0x0, route = 0x0,
mailbox = 0x0, domain = 0x0, invalid_syntax = false}, str =
0x90e9698, fill_missing = true}
ret = 1
#14 0x080eb7e0 in message_address_parse (pool=0x90e9500,
data=0x90fba88 "undisclosed-recipients:", size=23,
max_addresses=4294967295, fill_missing=true) at message-address.c:331
addr = (struct message_address *) 0x90e9500
#15 0x080ac097 in search_header_arg (arg=0x90ef6c0, ctx=0xbfe92f80) at
index-search.c:424
addr = (struct message_address *) 0x90ff31c
str = (string_t *) 0x17
_data_stack_cur_id = 5
msg_search_ctx = (struct message_search_context *) 0x91032f0
block = {part = 0x0, hdr = 0xbfe92e38, data = 0x0, size = 0}
hdr = {name = 0x811a0f0 "", name_len = 0, value = 0x90fba88
"undisclosed-recipients:", value_len = 23, full_value = 0x90fba88
"undisclosed-recipients:", full_value_len = 23, middle = 0x90fdd53 ":
",
  middle_len = 0, name_offset = 0, full_value_offset = 4, continues =
0, continued = 0, eoh = 0, no_newline = 0, crlf_newline = 0,
use_full_value = 0}
ret = 0
#16 0x080b537a in search_arg_foreach (arg=0x90ef6c0,
callback=0x80abe09 , context=0xbfe92f80) at
mail-search.c:85
subarg = (struct mail_search_arg *) 0x0
__PRETTY_FUNCTION__ = "search_arg_foreach"
#17 0x080b53a4 in mail_search_args_foreach (args=0x90ef6c0,
callback=0x80abe09 , context=0xbfe92f80) at
mail-search.c:98
result = 1
#18 0x080ac2f7 in search_header (hdr=0x90fec80, ctx=0xbfe92f80) at
index-search.c:491
No locals.
#19 0x080ed97e in message_parse_header (input=0x90fa8f8, hdr_size=0x0,
flags=MESSAGE_HEADER_PARSER_FLAG_CLEAN_ONELINE, callback=0x80ac22f
, context=0xbfe92f80) at message-header-parser.c:389
hdr_ctx = (struct message_header_parser_ctx *) 0x90fec80
hdr = (struct message_header_line *) 0x90fec80
ret = 1
__PRETTY_FUNCTION__ = "message_parse_header"
#20 0x080ac691 in search_arg_match_text (args=0x90ef678,
ctx=0x90fb7a8) at index-search.c:580
hdr_ctx = {index_context = 0x90fb7a8, args = 0x90ef678, hdr =
0x90fec80, parse_headers = 0, custom_header = 1, threading = 0}
input = (struct istream *) 0x90fa8f8
headers_ctx = (struct mailbox_header_lookup_ctx *) 0x90fb068
headers = (const char * const *) 0x90e9458
have_headers = true
have_body = false
__PRETTY_FUNCTION__ = "search_arg_match_text"
#21 0x080ad3e1 in search_match_next (ctx=0x90fb7a8) at index-search.c:967
arg = (struct mail_search_arg *) 0x90fb7a8
ret = -1
#22 0x080ad703 in index_storage_search_next_nonblock (_ctx=0x90fb7a8,
mail=0x90f8f68, tryagain_r=0xbfe93083) at index-search.c:1042
_data_stack_cur_id = 4
ctx = (struct index_search_cont

Re: [Dovecot] assertion failure in current hg, file istream.c: line 303 (i_stream_read_data): assertion failed: (stream->stream_errno != 0) (1.1.3 was working fine)

2008-09-17 Thread Diego Liziero
On Wed, Sep 17, 2008 at 9:35 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>
> Could you do:
>
>> #6  0x080fcb64 in i_stream_read_data (stream=0x9094a08,
>> data_r=0xbfe6, size_r=0xbfe64440, threshold=0) at istream.c:303
>>   ret = -1
>>   read_more = false
>>   __PRETTY_FUNCTION__ = "i_stream_read_data"
>
> fr 6
> p *stream
> p *stream.real_stream

(gdb) fr 6
#6  0x080fcb64 in i_stream_read_data (stream=0x9094a08,
data_r=0xbfe6, size_r=0xbfe64440, threshold=0) at istream.c:303
303 i_assert(stream->stream_errno != 0);
(gdb) p *stream
$3 = {v_offset = 155312, stream_errno = 0, mmaped = 0, blocking = 1,
closed = 0, seekable = 1, eof = 0, real_stream = 0x90949e0}
(gdb) p *stream.real_stream
$4 = {iostream = {refcount = 2, close = 0x810f75c
, destroy = 0x80e8a2c
,
set_max_buffer_size = 0x80e8acb
, destroy_callback =
0x80a771c , destroy_context =
0x908e9e8},
  read = 0x80e95c9 , seek = 0x80e989a
, sync = 0x80e9a51
, stat = 0x80e9a63
, istream = {
v_offset = 155312, stream_errno = 0, mmaped = 0, blocking = 1,
closed = 0, seekable = 1, eof = 0, real_stream = 0x90949e0}, fd = -1,
abs_start_offset = 109832723, statbuf = {st_dev = 0, __pad1 = 0,
__st_ino = 0, st_mode = 0, st_nlink = 0, st_uid = 0, st_gid = 0,
st_rdev = 0, __pad2 = 0, st_size = -1, st_blksize = 0, st_blocks = 0,
st_atim = {tv_sec = 1221544863, tv_nsec = 0}, st_mtim = {
  tv_sec = 1221544863, tv_nsec = 0}, st_ctim = {tv_sec =
1221544863, tv_nsec = 0}, st_ino = 0}, buffer = 0x0, w_buffer = 0x0,
buffer_size = 0, max_buffer_size = 8192, skip = 0, pos = 0, parent =
0x9094580,
  parent_start_offset = 0, line_str = 0x0}
(gdb)


[Dovecot] lately pop3 with #define DEBUG needs GDB=1

2008-09-17 Thread Diego Liziero
I'm not sure when this happened.

In yesterday's dovecot-1.1 hg, if pop3 is compiled with DEBUG defined, it
needs GDB=1; otherwise it ends with:
Panic: Leaked file fd 4: dev 104.2 inode 3342766

Not sure if this can be caused by the fact that I call pop3 with a bash script.

protocol pop3 {
mail_executable = /usr/libexec/dovecot/pop3.sh
}

Inside the script I've something like:

if [ x`id -un` = "xmyuser" -o x`id -un` = "xtestdovecot" ]
then
exec /usr/libexec/dovecot/pop3-new-test-release
else
exec /usr/libexec/dovecot/pop3
fi

[..]
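The DEBUG-build panic comes from fd_debug_verify_leaks() (frame #6 below) walking the descriptor table after startup, so any fd above stderr that a wrapper script leaves open across exec triggers it. A sketch of the same enumeration a script author could run to find the stray descriptor, assuming a Linux /proc layout:

```python
import os

def open_fds():
    # Enumerate this process's open descriptors: the same information
    # Dovecot's fd_debug_verify_leaks() gathers by fstat()ing the fd
    # range. Anything above stderr (fd 2) inherited across exec is a
    # candidate for "Panic: Leaked file fd N".
    fd_dir = "/proc/self/fd"
    if not os.path.isdir(fd_dir):        # non-Linux fallback: assume stdio only
        return [0, 1, 2]
    fds = []
    for name in os.listdir(fd_dir):
        try:
            os.fstat(int(name))          # drops the listing's own, now-closed fd
            fds.append(int(name))
        except OSError:
            pass
    return sorted(fds)

leaked = [fd for fd in open_fds() if fd > 2]   # what the panic would report
```

In a shell wrapper the equivalent check is `ls -l /proc/$$/fd` just before the exec line.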

(gdb) bt full
#0  0x006ed402 in __kernel_vsyscall ()
No symbol table info available.
#1  0x0043ed20 in raise () from /lib/libc.so.6
No symbol table info available.
#2  0x00440631 in abort () from /lib/libc.so.6
No symbol table info available.
#3  0x080e44a8 in default_fatal_finish (type=LOG_TYPE_PANIC, status=0)
at failures.c:150
backtrace = 0x982b160 "/usr/libexec/dovecot/pop3 [0x80e4486]
-> /usr/libexec/dovecot/pop3(default_error_handler+0) [0x80e4506] ->
/usr/libexec/dovecot/pop3(i_fatal+0) [0x80e4600] ->
/usr/libexec/dovecot/.
#4  0x080e4506 in default_fatal_handler (type=LOG_TYPE_PANIC,
status=0, format=0x810f568 "Leaked file fd %d: dev %s.%s inode %s",
args=0xbfee57a4 "\004") at failures.c:162
No locals.
#5  0x080e4600 in i_panic (format=0x810f568 "Leaked file fd %d: dev
%s.%s inode %s") at failures.c:197
args = 0xbfee57a4 "\004"
#6  0x080e519d in fd_debug_verify_leaks (first_fd=4, last_fd=1024) at
fd-close-on-exec.c:71
addr = {family = 58360, u = {ip6 = {in6_u = {u6_addr8 =
"\225�\017\b\a\000\000\000�XGD��\017\b", u6_addr16 = {58005,
2063, 7, 0, 22776, 17479, 58362, 2063}, u6_addr32 = {135258773, 7,
1145526520,
  135259130}}}, ip4 = {s_addr = 135258773}}}
raddr = {family = 2, u = {ip6 = {in6_u = {u6_addr8 =
"\031^��Y��X�\005\021D", u6_addr16 = {24089, 49134, 23028,
49134, 22744, 49134, 4357, 68}, u6_addr32 = {3220069913, 3220068852,
3220068568,
  4460805}}}, ip4 = {s_addr = 3220069913}}}
port = 4274100
rport = 3220068600
st = {st_dev = 26626, __pad1 = 0, __st_ino = 3342766, st_mode
= 33152, st_nlink = 1, st_uid = 0, st_gid = 0, st_rdev = 0, __pad2 =
0, st_size = 6654265, st_blksize = 4096, st_blocks = 13032, st_atim =
{
tv_sec = 1221681411, tv_nsec = 0}, st_mtim = {tv_sec = 1221683098,
tv_nsec = 0}, st_ctim = {tv_sec = 1221683098, tv_nsec = 0}, st_ino =
3342766}
old_errno = 9
#7  0x0805c817 in main (argc=1, argv=0xbfee5994, envp=0xbfee599c) at main.c:257
No locals.


Re: [Dovecot] lately pop3 with #define DEBUG needs GDB=1

2008-09-18 Thread Diego Liziero
On Wed, Sep 17, 2008 at 11:10 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
> a) make the pop3 process sleep at startup so you can do ls
> -l /proc/pid/fd/4

Pop3 is fine, it was my fault, I left a file descriptor open.
Sorry for the noise.

Diego.


Re: [Dovecot] Dovecot performance on GFS clustered filesystem

2008-09-24 Thread Diego Liziero
I've read somewhere that one of gfs2's goals was to improve performance
for directory access with many files.

I've tested it by doing a simple ls in a directory with many empty test
files on gfs, and it was _really_ slow; doing the same ls on gfs2 with
the same number of empty files is noticeably faster.

But when I tested gfs2 with bonnie++ I got lower sequential I/O speed
than with gfs (note that I tested a beta version of gfs2 some months
ago; things may be better now).

So my conclusion of the tests was that gfs is best with mbox, gfs2
beta with maildir.

But, again, I haven't tested gfs2 improvements recently.

Regards,
Diego.


[Dovecot] FETCH for mailbox mailboxname UID #1 got too little data: #2 vs #3

2008-09-24 Thread Diego Liziero
I got it with multiple imaptest instances even with current dovecot-1.1 hg tree.

I checked the emails with that UIDs and they are actually truncated.

Some things I noted on these mails:
- they are all with MIME multipart attachments.
- the last multipart attachment is truncated
- the truncated last line is not complete
- the following line is the beginning of a new mail (From line)
without any empty line after the truncated attachment.
- in "got too little data: #2 vs #3"  0 <= #2 < #3 (zero included)
- Content-Length: #4 is sometimes bigger than both #2 and #3, sometimes smaller.
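The two numbers in the error are the size Dovecot recorded for the message and the (smaller) size it actually managed to read back once the mbox message had been truncated. A hedged sketch of that relationship in terms of IMAP's CRLF-normalized "virtual" size (illustrative only, not Dovecot's implementation):

```python
def virtual_size(raw: bytes) -> int:
    # IMAP-style "virtual" size: every bare LF counts as CRLF on the
    # wire, so the size is length plus one byte per LF not already
    # preceded by CR. Dovecot caches a per-message size and logs
    # "got too little data: <read> vs <cached>" when the on-disk
    # message turns out shorter than the cached value.
    return len(raw) + raw.count(b"\n") - raw.count(b"\r\n")

cached = virtual_size(b"Subject: test\n\nbody line 1\nbody line 2\n")
got = virtual_size(b"Subject: test\n\nbody line 1\nbody li")  # truncated copy
```

With the truncated copy, `got < cached` holds, which is exactly the "#2 vs #3" ordering noted above.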

Regards,
Diego.


Re: [Dovecot] assertion in dovecot imap 1.1.1 to 1.1.3

2008-10-03 Thread Diego Liziero
2008/10/1 Rene Luria <[EMAIL PROTECTED]>:
> Dovecot dies with signal 11 (segfault) when doing some commands with a
> specific message

Could you post a backtrace (bt full) of the core file?

See:
http://dovecot.org/bugreport.html

Regards,
Diego.


Re: [Dovecot] assertion failure in current hg, file istream.c: line 303 (i_stream_read_data): assertion failed: (stream->stream_errno != 0) (1.1.3 was working fine)

2008-10-05 Thread Diego Liziero
On Mon, Sep 22, 2008 at 9:56 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>
> On Wed, 2008-09-17 at 22:01 +0200, Diego Liziero wrote:
> > On Wed, Sep 17, 2008 at 9:35 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
> > >
> > > Could you do:
> > >
> > >> #6  0x080fcb64 in i_stream_read_data (stream=0x9094a08,
> > >> data_r=0xbfe6, size_r=0xbfe64440, threshold=0) at istream.c:303
> > >>   ret = -1
> > >>   read_more = false
> > >>   __PRETTY_FUNCTION__ = "i_stream_read_data"
> > >
> > > fr 6
> > > p *stream
> > > p *stream.real_stream
> >
> > (gdb) fr 6
> > #6  0x080fcb64 in i_stream_read_data (stream=0x9094a08,
> > data_r=0xbfe6, size_r=0xbfe64440, threshold=0) at istream.c:303
> > 303 i_assert(stream->stream_errno != 0);
>
> I'm still not really sure why this is happening. I recently added some
> more asserts to v1.2 code tree. Now that I've fixed several bugs in them
> maybe they're stable enough for v1.1 too. Does this move the assert to
> come earlier? http://hg.dovecot.org/dovecot-1.1/rev/a1a14d67b15d

Not sure if it's related, but the only assertion that is failing now
is this one (sorry, this time I've no core file):

file index-mail.c: line 1091 (index_mail_close): assertion failed:
(!mail->data.destroying_stream)

Raw backtrace: /usr/libexec/dovecot/imap [0x80f6a6a] ->
/usr/libexec/dovecot/imap [0x80f732b] ->
/usr/libexec/dovecot/imap(i_fatal+0) [0x80f6be4] ->
/usr/libexec/dovecot/imap(index_mail_close+0xf9) [0x80a8489] ->
/usr/libexec/dovecot/imap(index_mail_free+0x1a) [0x80a8abd] ->
/usr/libexec/dovecot/imap(mail_free+0x1e) [0x80b3cb0] ->
/usr/libexec/dovecot/imap [0x805b7f0] ->
/usr/libexec/dovecot/imap(cmd_copy+0x1c4) [0x805ba0e] ->
/usr/libexec/dovecot/imap(cmd_uid+0xbb) [0x8061173] ->
/usr/libexec/dovecot/imap [0x8062534] -> /usr/libexec/dovecot/imap
[0x8062769] -> /usr/libexec/dovecot/imap [0x8062867] ->
/usr/libexec/dovecot/imap [0x80628a3] ->
/usr/libexec/dovecot/imap(client_input+0xb7) [0x8062a31] ->
/usr/libexec/dovecot/imap(io_loop_handler_run+0x17d) [0x8101485] ->
/usr/libexec/dovecot/imap(io_loop_run+0x35) [0x810071c] ->
/usr/libexec/dovecot/imap(main+0xe4) [0x806debd] ->
/lib/libc.so.6(__libc_start_main+0xdc) [0x42bdec] ->
/usr/libexec/dovecot/imap [0x805a221]

Regards,
Diego.


Re: [Dovecot] assertion failure in current hg, file istream.c: line 303 (i_stream_read_data): assertion failed: (stream->stream_errno != 0) (1.1.3 was working fine)

2008-10-05 Thread Diego Liziero
On Sun, Oct 5, 2008 at 4:22 PM, Diego Liziero <[EMAIL PROTECTED]> wrote:
>
> Not sure if it's related, but the only assertion that is failing now
> is this one (sorry, this time I've no core file):
>
> file index-mail.c: line 1091 (index_mail_close): assertion failed:
> (!mail->data.destroying_stream)

Sorry for the noise, forget my previous mail.
It's again my fault.

This happened because some users were over fs quota (without the quota plugin).
And that's why I have no core file (I use the same filesystem for mail
and core files).

Regards,
Diego.


Re: [Dovecot] assertion failure in current hg, file istream.c: line 303 (i_stream_read_data): assertion failed: (stream->stream_errno != 0) (1.1.3 was working fine)

2008-10-05 Thread Diego Liziero
On Sun, Oct 5, 2008 at 5:14 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
> On Sun, 2008-10-05 at 17:00 +0200, Diego Liziero wrote:
>> >
>> > file index-mail.c: line 1091 (index_mail_close): assertion failed:
>> > (!mail->data.destroying_stream)
>
> Hmm. So that assert could be because Dovecot ran out of disk space? I've
> been trying to figure out several times why it would happen, but I
> haven't been able to.

Not the assert in the subject, but "file index-mail.c: line 1091
(index_mail_close): assertion failed: (!mail->data.destroying_stream)"
means "Dovecot ran out of disk space".

The assert in the subject hasn't happened again lately; maybe it has
been solved by one of your recent fixes.

Regards,
Diego.


Re: [Dovecot] FETCH for mailbox mailboxname UID #1 got too little data: #2 vs #3

2008-10-05 Thread Diego Liziero
On Sun, Oct 5, 2008 at 3:38 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
> On Sun, 2008-10-05 at 15:27 +0300, Timo Sirainen wrote:
>> On Wed, 2008-09-24 at 22:35 +0200, Diego Liziero wrote:
>> > I got it with multiple imaptest instances even with current dovecot-1.1 hg 
>> > tree.
>> >
>> > I checked the emails with that UIDs and they are actually truncated.
>> >
>> > Some things I noted on these mails:
>> > - they are all with MIME multipart attachments.
>> > - the last multipart attachment is truncated
>> > - the truncated last line is not complete
>> > - the following line is the beginning of a new mail (From line)
>> > without any empty line after the truncated attachment.
>> > - in "got too little data: #2 vs #3"  0 <= #2 < #3 (zero included)
>> > - Content-Length: #4 is sometimes bigger than both #2 and #3, sometimes 
>> > smaller.
>>
>> Any idea how the messages got there? Via deliver or IMAP APPEND, IMAP
>> COPY or something else like procmail?

Here procmail delivers only to the INBOX folder; this error also happens
in the Sent and Trash folders, which are used only by dovecot.

> Maybe this helps: http://hg.dovecot.org/dovecot-1.1/rev/f55e58146f77

Just got this with current hg and imaptest:
Error: IMAP(username): FETCH for mailbox Trash UID 215597 got too
little data: 469042 vs 470768

But I'm not using a new user. I'm using the usual test user without
deleting any mbox or index.

Maybe that UID got corrupted by a previous dovecot version.

Now I start testing with a clean user and I'll let you know if it happens again.


Re: [Dovecot] FETCH for mailbox mailboxname UID #1 got too little data: #2 vs #3

2008-10-05 Thread Diego Liziero
On Sun, Oct 5, 2008 at 6:34 PM, Diego Liziero <[EMAIL PROTECTED]> wrote:
> On Sun, Oct 5, 2008 at 3:38 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
>> On Sun, 2008-10-05 at 15:27 +0300, Timo Sirainen wrote:
>>> On Wed, 2008-09-24 at 22:35 +0200, Diego Liziero wrote:
>>> > I got it with multiple imaptest instances even with current dovecot-1.1 
>>> > hg tree.
>>> >
>>> > I checked the emails with that UIDs and they are actually truncated.
>>> >
>>> > Some things I noted on these mails:
>>> > - they are all with MIME multipart attachments.
>>> > - the last multipart attachment is truncated
>>> > - the truncated last line is not complete
>>> > - the following line is the beginning of a new mail (From line)
>>> > without any empty line after the truncated attachment.
>>> > - in "got too little data: #2 vs #3"  0 <= #2 < #3 (zero included)
>>> > - Content-Length: #4 is sometimes bigger than both #2 and #3, sometimes 
>>> > smaller.
>
>> Maybe this helps: http://hg.dovecot.org/dovecot-1.1/rev/f55e58146f77
> [..]
> Now I start testing with a clean user and I'll let you know if it happens 
> again.

Unfortunately it happens also with a new user and using just multiple
imaptest instances (no procmail involved here):
dovecot: Oct 05 19:25:24 Error: IMAP(newuser): FETCH for mailbox INBOX
UID 1636 got too little data: 322644 vs 324628


Re: [Dovecot] Disabling global content_filter with an empty filter specified with an access table

2008-10-08 Thread Diego Liziero
On Wed, Oct 8, 2008 at 7:36 PM, mouss <[EMAIL PROTECTED]> wrote:
>
> an alternative is
>
> content_filter =
> smtpd_sender_restrictions =
>check_sender_access hash:/etc/postfix/sender_ok
>check_sender_access pcre:/etc/postfix/filter
>
> == dsn_ok
> <>  OK
>
> == filter
> /./ FILTER filter:[1.2.3.4]:10024
>
> where "filter:[1.2.3" is what you used to put in content_filter.

Yes, that's what I was thinking when I wrote "workaround" in my first mail:

On Wed, Oct 8, 2008 at 5:08 PM, Diego Liziero <[EMAIL PROTECTED]> wrote:
>[..]
>I think that this solution is more readable than the only workaround I
>can imagine now (that is disabling global filtering and enabling it in
>a pcre table for everything except that particular case).

So it's a bit less readable: I have to remember to disable sender
restrictions in the content filter's return transport to avoid filter
loops, and there will be one extra line per mail in the log stating that
the filter was triggered, but apart from that it should work as I need.
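The return-transport detail mentioned above — making sure re-injected mail skips the access table so it isn't filtered into a loop — is conventionally done by clearing the restriction on the post-filter listener in master.cf. A sketch assuming the filter re-injects on port 10025 (port and option values are illustrative, not taken from this thread):

```
# Hypothetical post-filter re-injection listener in master.cf.
# -o smtpd_sender_restrictions= empties the check_sender_access lookup,
# so re-injected mail does not hit the FILTER action a second time.
127.0.0.1:10025 inet n - n - - smtpd
    -o content_filter=
    -o smtpd_sender_restrictions=
    -o smtpd_recipient_restrictions=permit_mynetworks,reject
```

The `-o content_filter=` override is the standard after-queue content filter pattern; the sender-restrictions override is what replaces the "remember to disable" step.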

Thank you to everyone that helped me,
tomorrow I'm going to test both methods.

Regards,
Diego.


Re: [Dovecot] Dovecot performance on GFS clustered filesystem

2008-10-13 Thread Diego Liziero
On Wed, Sep 24, 2008 at 9:18 PM, Diego Liziero <[EMAIL PROTECTED]> wrote:
> I've read somewhere that one of gfs2 goals was to improve performance
> for directory access with many files.
>
> I've tested it doing a simple ls in a directory with many test empty
> files in gfs and it was _really_ slow, doing the ls on a gfs2 with the
> same amount of emtpy files is actually faster.
>
> But when I tested gfs2 with bonnie++ I got fewer sequential I/O speed
> than in gfs (consider that I tested a beta version of gfs2 some months
> ago, maybe things are better now).

This seems true only for sequential write speed.

> So my conclusion of the tests was that gfs is best with mbox, gfs2
> beta with maildir.
>
> But, again, I haven't tested gfs2 improvements recently.
>
> Regards,
> Diego.

I launched bonnie++ on the same lvm2 partition formatted with different
filesystems to test their speed.
Note that I ran the test only once per filesystem, so the numbers should
not be taken as exact; assume a certain margin of error.

With the clustered filesystems (gfs and gfs2) the test was run on a
single box, but with the filesystem mounted on two nodes.
Options used to format gfs and gfs2: "-r 2048 -p lock_dlm -j 4"

The test box has two dual-core Opteron processors, 8 Gb of ram, two
4Gb fiber channel HBA, gigabit ethernet (for the distributed lock
manager connection) and Centos 5.2 x86_64 installed.

With this configuration it seems that:
gfs is faster than gfs2 when writing sequential blocks, much faster in
creating and deleting files;
gfs2 seems to have a faster read speed.

Regards,
Diego.

bonnie++ -s 16g
Version 1.03c    --Sequential Output--           --Sequential Input-  --Random-
                 -Per Chr- --Block--  -Rewrite-  -Per Chr- --Block--  --Seeks--
Filesystem Size  K/sec %CP K/sec %CP  K/sec %CP  K/sec %CP K/sec %CP  /sec  %CP
ext2        16G  69230  95 149430 47  54849  25  71681  90 215568 54  473.3   1
xfs         16G  62828  94 135482 54  64010  28  71841  92 238351 51  632.3   2
ext3        16G  56485  93 115398 68  56051  32  73536  92 211219 48  552.0   2
gfs         16G  47079  98 124123 82  42651  53  65692  91 189533 65  431.5   3
gfs2        16G  40203  77  74620 53  52596  39  73187  93 226909 58  496.2   2

                 ------Sequential Create------ --------Random Create--------
                 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
           files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
ext2          16  3810  99 +++++ +++ +++++ +++  2395  99 +++++ +++ 10099  99
xfs           16  2446  51 +++++ +++  1073  24  2019  59 +++++ +++   796  21
ext3          16  5597  77 +++++ +++ 16764  99 19450  99 +++++ +++ +++++ +++
gfs           16   882  40 +++++ +++  4408  82  1001  56 +++++ +++  2734  69
gfs2          16   457   0 +++++ +++   994   0   477   0 +++++ +++   995  29


[Dovecot] dovecot 1.1.4 maildir imap segfault in message_parse_header_next

2008-10-16 Thread Diego Liziero
I've tried to stress test dovecot 1.1.4 with imaptest for days without
any assertion failure or crash.
Just some "got too little data" messages.

So far it's the most stable 1.1.x version.

Today a user got this imap segfault with vanilla 1.1.4 (I don't know
if it's something you have already fixed in current tree).
The user didn't complain of anything, I've just found the error in the
logs and the core file.

Regards,
Diego.


Core was generated by `/usr/libexec/dovecot/imap'.
Program terminated with signal 11, Segmentation fault.
#0  0x080c8d41 in message_parse_header_next (ctx=0x8774fa0,
hdr_r=0xbfa438e0) at message-header-parser.c:114
114 if (msg[0] == '\n' ||
(gdb) bt full
#0  0x080c8d41 in message_parse_header_next (ctx=0x8774fa0,
hdr_r=0xbfa438e0) at message-header-parser.c:114
msg = (const unsigned char *) 0x0
i = 
size = 0
startpos = 
colon_pos = 4294967295
parse_size = 0
value_pos = 
ret = -2
continued = false
continues = 
crlf_newline = false
#1  0x080c62f5 in read_header (mstream=0x877d6d0) at istream-header-filter.c:163
hdr = (struct message_header_line *) 0x0
highwater_offset = 
pos = 
ret = 
matched = false
hdr_ret = 
__PRETTY_FUNCTION__ = '\0' 
#2  0x080c6a17 in i_stream_header_filter_read (stream=0x877d6d0) at
istream-header-filter.c:293
mstream = (struct header_filter_istream *) 0xfffe
ret = 
pos = 
#3  0x080d4fa8 in i_stream_read (stream=0x877d6f8) at istream.c:73
_stream = (struct istream_private *) 0xfffe
ret = 
__PRETTY_FUNCTION__ = '\0' 
#4  0x080d505d in i_stream_read_data (stream=0x877d6f8,
data_r=0xbfa439a8, size_r=0xbfa439a4, threshold=0) at istream.c:299
ret = 0
read_more = false
__PRETTY_FUNCTION__ = '\0' 
#5  0x080cb8ec in message_get_body_size (input=0x877d6f8,
body=0xbfa439d8, has_nuls=0x0) at message-size.c:76
msg = 
i = 
size = 
missing_cr_count = 
__PRETTY_FUNCTION__ = '\0' 
#6  0x08064178 in fetch_body_header_fields (ctx=0x875e668,
mail=0x8776138, body=0x875e958) at imap-fetch-body.c:458
size = {physical_size = 0, virtual_size = 0, lines = 0}
old_offset = 0
#7  0x08062218 in imap_fetch (ctx=0x875e668) at imap-fetch.c:309
_data_stack_cur_id = 4
ret = 
__PRETTY_FUNCTION__ = "\000\000\000\000\000\000\000\000\000\000"
#8  0x0805bd9e in cmd_fetch (cmd=0x875e5c0) at cmd-fetch.c:152
ctx = (struct imap_fetch_context *) 0x875e668
args = (const struct imap_arg *) 0x8762638
search_arg = (struct mail_search_arg *) 0x875e610
messageset = 
ret = 
#9  0x0805fe8c in client_command_input (cmd=0x875e5c0) at client.c:580
client = (struct client *) 0x875e368
command = 
__PRETTY_FUNCTION__ = '\0' 
#10 0x0805ff35 in client_command_input (cmd=0x875e5c0) at client.c:629
client = (struct client *) 0x875e368
command = (struct command *) 0x0
__PRETTY_FUNCTION__ = '\0' 
#11 0x080606f5 in client_handle_input (client=0x875e368) at client.c:670
_data_stack_cur_id = 3
ret = 
remove_io = 
handled_commands = false
#12 0x0806090e in client_input (client=0x875e368) at client.c:725
cmd = 
output = (struct ostream *) 0x875e4ec
bytes = 197
__PRETTY_FUNCTION__ = '\0' 
#13 0x080d8710 in io_loop_handler_run (ioloop=0x875c9b0) at ioloop-epoll.c:203
ctx = 
event = (const struct epoll_event *) 0x875cae8
list = (struct io_list *) 0x875e568
io = (struct io_file *) 0x875e548
tv = {tv_sec = 4, tv_usec = 926334}
t_id = 2
msecs = 
ret = 1
i = 0
j = 0
call = 
#14 0x080d77f8 in io_loop_run (ioloop=0x875c9b0) at ioloop.c:320
No locals.
#15 0x0806848c in main (argc=Cannot access memory at address 0x0
) at main.c:293
No locals.


Re: [Dovecot] assertion failure in current hg, file istream.c: line 303 (i_stream_read_data): assertion failed: (stream->stream_errno != 0) (1.1.3 was working fine)

2008-10-16 Thread Diego Liziero
On Sun, Oct 5, 2008 at 5:44 PM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
> On Sun, 2008-10-05 at 17:23 +0200, Diego Liziero wrote:
>>
>> Not the assert in the subject, but "file index-mail.c: line 1091
>> (index_mail_close): assertion failed: (!mail->data.destroying_stream)"
>> means "Dovecot ran out of disk space".
>
> Right. And that had been annoying me for a long time now. Fixed finally:
> http://hg.dovecot.org/dovecot-1.1/rev/3b0d23902a32

Another "Dovecot ran out of disk space" case (in case you feel it's
worth fixing):

(sorry, no core file, as the user was over fs quota)

file mbox-sync-rewrite.c: line 590 (mbox_sync_rewrite): assertion
failed: (mails[idx].from_offset == start_offset)

Raw backtrace:
/usr/libexec/dovecot/imap [0x80d0ae0] ->
/usr/libexec/dovecot/imap [0x80d0b3a] ->
/usr/libexec/dovecot/imap [0x80d03cc] ->
/usr/libexec/dovecot/imap(mbox_sync_rewrite+0xe05) [0x8086675] ->
/usr/libexec/dovecot/imap(mbox_sync+0x140c) [0x80845ec] ->
/usr/libexec/dovecot/imap(mbox_storage_sync_init+0x47) [0x8084b87] ->
/usr/libexec/dovecot/imap(imap_sync_init+0x49) [0x8066889] ->
/usr/libexec/dovecot/imap(cmd_sync_delayed+0x1a9) [0x8066ad9] ->
/usr/libexec/dovecot/imap [0x806083e] ->
/usr/libexec/dovecot/imap(client_input+0x5e) [0x806090e] ->
/usr/libexec/dovecot/imap(io_loop_handler_run+0x100) [0x80d8710] ->
/usr/libexec/dovecot/imap(io_loop_run+0x28) [0x80d77f8] ->
/usr/libexec/dovecot/imap(main+0x4ac) [0x806848c] ->
/lib/libc.so.6(__libc_start_main+0xdc) [0x42bdec] ->
/usr/libexec/dovecot/imap [0x805a1f1]

Diego


Re: [Dovecot] dovecot 1.1.4 maildir imap segfault in message_parse_header_next

2008-10-16 Thread Diego Liziero
On Thu, Oct 16, 2008 at 11:39 AM, Timo Sirainen <[EMAIL PROTECTED]> wrote:
> On Oct 16, 2008, at 11:33 AM, Diego Liziero wrote:
>
>> Today a user got this imap segfault with vanilla 1.1.4 (I don't know
>
> Hmm. And Maildir as topic says?

No, sorry, wrong subject, mbox

>> #0  0x080c8d41 in message_parse_header_next (ctx=0x8774fa0,
>> hdr_r=0xbfa438e0) at message-header-parser.c:114
>
> p *ctx.input
> p *ctx.input.real_stream

(gdb)  p *ctx.input
$1 = {v_offset = 0, stream_errno = 0, mmaped = 0, blocking = 1, closed
= 0, seekable = 1, eof = 0, real_stream = 0x8771538}
(gdb) p *ctx.input.real_stream
$2 = {iostream = {refcount = 3, close = 0x80e3f10
,
destroy = 0x80c6d50 ,
set_max_buffer_size = 0x80c6d20
,
destroy_callback = 0x8094630 ,
destroy_context = 0x8776138},
  read = 0x80c6940 , seek = 0x80c6be0
,
  sync = 0x80c5fa0 , stat = 0x80c6b40
, istream = {v_offset = 0,
stream_errno = 0, mmaped = 0, blocking = 1, closed = 0, seekable =
1, eof = 0, real_stream = 0x8771538}, fd = -1,
  abs_start_offset = 374333755, statbuf = {st_dev = 0, __pad1 = 0,
__st_ino = 0, st_mode = 0, st_nlink = 0, st_uid = 0, st_gid = 0,
st_rdev = 0, __pad2 = 0, st_size = -1, st_blksize = 0, st_blocks =
0, st_atim = {tv_sec = 1224104682, tv_nsec = 0}, st_mtim = {
  tv_sec = 1224104682, tv_nsec = 0}, st_ctim = {tv_sec =
1224104682, tv_nsec = 0}, st_ino = 0}, buffer = 0x0, w_buffer = 0x0,
  buffer_size = 0, max_buffer_size = 8192, skip = 0, pos = 0, parent =
0x8770fe0, parent_start_offset = 0, line_str = 0x0}

>>   size = 0
>
> i_stream_read_data() returned 0 bytes, but
>
>>   ret = -2
>
> it also returned that the input buffer is full. That shouldn't be happening.
> http://hg.dovecot.org/dovecot-1.1/rev/82d4756f43cc should catch it earlier.

Ok thanks.
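The frame-#0 locals above show the broken combination Timo points at: ret = -2 ("input buffer full") together with size = 0 and msg = NULL, which the parser then dereferenced at message-header-parser.c:114. A small Python sketch of the contract and the earlier-failing guard (conceptual only, not the Dovecot C code):

```python
def first_header_byte(ret, data, size):
    # i_stream_read_data()-style contract: ret == -2 means the input
    # buffer is full, which must imply size > 0. The 1.1.4 crash was a
    # stream returning (-2, NULL, 0) and the parser reading data[0]
    # anyway; checking the contract first turns the SIGSEGV into a
    # loud, early failure, which is conceptually what the fix does.
    if ret == -2:
        assert size > 0 and data is not None, "full buffer but no data"
    if size == 0:
        return None
    return data[0]

ok = first_header_byte(1, b"From: x", 7)    # normal read: first header byte
caught = False
try:
    first_header_byte(-2, None, 0)          # the buggy stream's answer
except AssertionError:
    caught = True
```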


[Dovecot] Proxying pop3 sessions into an imap one.

2008-10-21 Thread Diego Liziero
I know it seems at least unusual.

But I would like to know if anyone knows of software that can proxy
multiple pop3 and imap connections to the same account while using only
imap connections to the real server.

I need it because I've to deal with two pieces of software that are
out of my control:

- 1 - A certified mail provider that states something that I would
like to know if it's really true, or if it's worth keeping like a
funny signature: "multiple imap sessions are possible. When an imap
session is active, no pop3 connections are possible on the same
account; when no imap connection is active, only one pop3 connection
at a time is possible on the same account. This is a limitation of the
pop3 protocol."

- 2 - A software written to deal with certified mails that supports
only pop3 used by many people on the same account, (and the same
account is used also by multiple imap mail readers).

Regards,
Diego.


Re: [Dovecot] Proxying pop3 sessions into an imap one.

2008-10-22 Thread Diego Liziero
On Wed, Oct 22, 2008 at 12:49 PM, Sotiris Tsimbonis
<[EMAIL PROTECTED]> wrote:
>
> I don't think that you'll find any piece of software that internally
> 'translates' pop3 commands to imap ..

Unless it saves a local copy of the mailbox...

> Pop3 sessions are usually locked, so multiple sessions are not possible..
> Unless something like the following is set in dovecot.conf
>
>  # Keep the mailbox locked for the entire POP3 session.
>  pop3_lock_session = no
>
> but this is dovecot specific (does this mail provider use dovecot?).

No idea about the provider.

> Also, I believe that behaviour will probably by uncertain if, say, multiple
> pop3 clients try to modify the same (single) mbox file.. So it also depends
> on the mailbox format you are trying to access, maybe Maildir or dbox are
> better..
>
> [..]
>
> Again.. Many people using pop3.. Depends on mail server software and mailbox
> format.. IMAP does multiple sessions better.

Just thinking aloud...

Could it be a solution if I keep a local copy of the mailbox, fetching
it somehow through imap or pop3 from the provider, and then use dovecot
with "pop3_lock_session = no" to serve the clients?

I'm not sure about it only because I don't know how certified email
works internally.
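As a rough sketch of the idea (all paths are assumptions, the fetch step could be fetchmail or similar, and "pop3_lock_session = no" is already the Dovecot 1.x default — shown explicitly for clarity):

```conf
# dovecot.conf (Dovecot 1.x style, relevant bits only — untested sketch)

# Local Maildir that a periodic fetch job fills from the provider
mail_location = maildir:/var/mail/certified/%u

# Do NOT hold the mailbox lock for the whole POP3 session,
# so several POP3 clients can connect to the same account
pop3_lock_session = no
```

A cron job running fetchmail (or getmail) against the provider's pop3/imap account would keep the local Maildir current, while Dovecot serves any number of local pop3 and imap clients from it.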

Thank you for your help,
Diego.


[Dovecot] assertion failed in 1.1.7 file mbox-sync.c: line 1305 (mbox_sync_handle_eof_updates)

2008-12-03 Thread Diego Liziero
Dovecot 1.1.7 is running so smoothly that I gave up checking its log
files daily. :)

I've just had a look, and among the usual
"IMAP(username): FETCH for mailbox Sent UID xx got too little data: xx vs xx"
messages (meaning that, unfortunately, some messages are still
occasionally written truncated) I saw this assertion failure:

file mbox-sync.c: line 1305 (mbox_sync_handle_eof_updates): assertion
failed: (file_size >=
 sync_ctx->expunged_space + trailer_size)

Regards,
Diego.



(gdb) bt full
#0  0x008a9402 in __kernel_vsyscall ()
No symbol table info available.
#1  0x00a2fd20 in raise () from /lib/libc.so.6
No symbol table info available.
#2  0x00a31631 in abort () from /lib/libc.so.6
No symbol table info available.
#3  0x080f73c0 in default_fatal_finish (type=LOG_TYPE_PANIC, status=0)
at failures.c:150
backtrace = 0x97df358 "/usr/libexec/dovecot/imap [0x80f739e] ->
/usr/libexec/dovecot/imap [0x80f7c5f] ->
/usr/libexec/dovecot/imap(i_fatal+0) [0x80f7518] ->
/usr/libexec/dovecot/imap [0x80911de] -> /usr/libexec/dovecot/imap"...
#4  0x080f7c5f in i_internal_fatal_handler (type=LOG_TYPE_PANIC,
status=0, fmt=0x811771c "file %s: line %d (%s): assertion failed:
(%s)",
args=0xbfbabf14 "\020w\021\b\031\005") at failures.c:430
No locals.
#5  0x080f7518 in i_panic (format=0x811771c "file %s: line %d (%s):
assertion failed: (%s)") at failures.c:197
args = 0xbfbabf14 "\020w\021\b\031\005"
#6  0x080911de in mbox_sync_handle_eof_updates (sync_ctx=0xbfbac0f4,
mail_ctx=0xbfbac008) at mbox-sync.c:1305
st = (const struct stat *) 0x97f3420
file_size = 242
offset = 242
padding = 684287899293748052
trailer_size = 0
__PRETTY_FUNCTION__ = "mbox_sync_handle_eof_updates"
#7  0x08091d77 in mbox_sync_do (sync_ctx=0xbfbac0f4, flags=0) at
mbox-sync.c:1547
mail_ctx = {sync_ctx = 0xbfbac0f4, mail = {uid = 0, idx_seq = 1,
keywords = {arr = {buffer = 0x0, element_size = 0}, v = 0x0,
  v_modifiable = 0x0}, flags = 12 '\f', uid_broken = 0, expunged =
1, pseudo = 0, from_offset = 0, body_size = 0, offset = 0,
space = 243}, seq = 1, hdr_offset = 66, body_offset = 151,
header_first_change = 4294967295, header_last_change = 0,
  header = 0x9800878, hdr_md5_sum =
"�\035\214�\217\000�\004�\200\t\230��B~", content_length =
18446744073709551615, hdr_pos = {69, 12,
4294967295, 82, 32}, parsed_uid = 2, last_uid_updated_value = 0,
last_uid_value_start_pos = 2, have_eoh = 1, need_rewrite = 0,
  seen_imapbase = 1, updated = 0, recent = 0, dirty = 0,
imapbase_rewrite = 0, imapbase_updated = 0}
st = (const struct stat *) 0x97f3420
i = 0
ret = 1
partial = 1
#8  0x0809277e in mbox_sync_int (mbox=0x97f0c28, flags=0) at mbox-sync.c:1806
index_sync_ctx = (struct mail_index_sync_ctx *) 0x9808b38
sync_view = (struct mail_index_view *) 0x97f1648
trans = (struct mail_index_transaction *) 0x97f3168
sync_ctx = {mbox = 0x97f0c28, flags = 0, input = 0x97f34e8,
file_input = 0x97f3400, write_fd = 8, orig_mtime = 1227954336,
  orig_atime = 1227954336, orig_size = 242, last_stat = {st_dev =
37637, __pad1 = 0, __st_ino = 4751377, st_mode = 33152, st_nlink = 1,
st_uid = 631, st_gid = 508, st_rdev = 0, __pad2 = 0, st_size =
242, st_blksize = 4096, st_blocks = 16, st_atim = {
  tv_sec = 1227960259, tv_nsec = 0}, st_mtim = {tv_sec =
1227954336, tv_nsec = 0}, st_ctim = {tv_sec = 1227954336, tv_nsec =
0},
st_ino = 4751377}, index_sync_ctx = 0x9808b38, sync_view =
0x97f1648, t = 0x97f3168, reset_hdr = {major_version = 0 '\0',
minor_version = 0 '\0', base_header_size = 0, header_size = 0,
record_size = 0, compat_flags = 0 '\0', unused = "\000\000",
indexid = 0, flags = 0, uid_validity = 0, next_uid = 0,
messages_count = 0, unused_old_recent_messages_count = 0,
seen_messages_count = 0, deleted_messages_count = 0,
first_recent_uid = 0, first_unseen_uid_lowwater = 0,
first_deleted_uid_lowwater = 0, log_file_seq = 0,
log_file_tail_offset = 0, log_file_head_offset = 0, sync_size = 0,
sync_stamp = 0,
day_stamp = 0, day_first_uid = {0, 0, 0, 0, 0, 0, 0, 0}}, hdr =
0x97f1300, header = 0x9800878, from_line = 0x9800858,
  base_uid_validity = 1, base_uid_last = 2, base_uid_last_offset = 0,
mails = {arr = {buffer = 0x97f8c40, element_size = 52},
v = 0x97f8c40, v_modifiable = 0x97f8c40}, sync_changes =
0x97f8c60, mail_keyword_pool = 0x9800168, saved_keywords_pool =
0x97f50d8,
  prev_msg_uid = 2, next_uid = 3, idx_next_uid = 3, seq = 1, idx_seq =
2, need_space_seq = 0, last_nonrecent_uid = 0,
  expunged_space = 243, space_diff = 0, dest_first_mail = 1,
first_mail_crlf_expunged = 0, delay_writes = 1, renumber_uids = 0,
  moved_offsets = 0, ext_modified = 0, index_reset = 0, errors = 0}
sync_flags = MAIL_INDEX_SYNC_FLAG_DROP_RECENT
lock_id = 23
ret = 1
changed = 1
delay_writes = true
__PRETTY_FUNCTION__ = "mbox_sync_int"
#9  0x08092a0a in mbox_sync (mbox=0x97f

Re: [Dovecot] assertion failed in 1.1.7 file mbox-sync.c: line 1305 (mbox_sync_handle_eof_updates)

2008-12-13 Thread Diego Liziero
On Thu, Dec 4, 2008 at 8:26 AM, Diego Liziero  wrote:
> Dovecot 1.1.7 is running so smoothly that I gave up checking its log
> files daily. :)
>
> I've just had a look, and among the usual
> "IMAP(username): FETCH for mailbox Sent UID xx got too little data: xx vs xx"
> messages (that means that unfortunately sometimes some messages are
> still written truncated) I saw this assertion failure:
>
> file mbox-sync.c: line 1305 (mbox_sync_handle_eof_updates): assertion
> failed: (file_size >=
>  sync_ctx->expunged_space + trailer_size)

Btw:
(gdb) fr 6
#6  0x080911de in mbox_sync_handle_eof_updates (sync_ctx=0xbfbac0f4,
mail_ctx=0xbfbac008) at mbox-sync.c:1305
1305		i_assert(file_size >= sync_ctx->expunged_space + trailer_size);
(gdb) print file_size
$1 = 242
(gdb) print sync_ctx->expunged_space
$2 = 243
(gdb) print trailer_size
$3 = 0

Regards,
Diego.


[Dovecot] another assertion failure in dovecot 1.1.7 mbox-sync-rewrite: (mails[idx].from_offset == start_offset)

2008-12-13 Thread Diego Liziero
Sorry, this time I've no core file, (I forgot to set ulimit -c
unlimited before starting dovecot)

Regards,
Diego.
---
dovecot: Dec 09 08:26:52 Panic: IMAP(user): file mbox-sync-rewrite.c:
line 590 (mbox_sync_rewrite): assertion failed:
(mails[idx].from_offset == start_offset)
dovecot: Dec 09 08:26:52 Error: IMAP(user): Raw backtrace:
/usr/libexec/dovecot/imap [0x80f739e] -> /usr/libexec/dovecot/imap
[0x80f7c5f] -> /usr/libexec/dovecot/imap(i_fatal+0) [0x80f7518] ->
/usr/libexec/dovecot/imap(mbox_sync_rewrite+0x6e8) [0x8094e45] ->
/usr/libexec/dovecot/imap [0x809105c] -> /usr/libexec/dovecot/imap
[0x8091d77] -> /usr/libexec/dovecot/imap [0x809277e] ->
/usr/libexec/dovecot/imap(mbox_sync+0x2b) [0x8092a0a] ->
/usr/libexec/dovecot/imap [0x8084ff3] ->
/usr/libexec/dovecot/imap(mailbox_close+0x47) [0x80b701c] ->
/usr/libexec/dovecot/imap(cmd_close+0xc0) [0x805b5d8] ->
/usr/libexec/dovecot/imap [0x80625c4] -> /usr/libexec/dovecot/imap
[0x80627f9] -> /usr/libexec/dovecot/imap [0x80628f7] ->
/usr/libexec/dovecot/imap [0x8062933] ->
/usr/libexec/dovecot/imap(client_input+0xb7) [0x8062ac1] ->
/usr/libexec/dovecot/imap(io_loop_handler_run+0x17d) [0x8101ecd] ->
/usr/libexec/dovecot/imap(io_loop_run+0x35) [0x8101164] ->
/usr/libexec/dovecot/imap(main+0xb0) [0x806df19] ->
/lib/libc.so.6(__libc_start_main+0xdc) [0x7a5dec] ->
/usr/libexec/dovecot/imap [0x805a2b1]


Re: [Dovecot] another assertion failure in dovecot 1.1.7 mbox-sync-rewrite: (mails[idx].from_offset == start_offset)

2008-12-13 Thread Diego Liziero
On Sat, Dec 13, 2008 at 5:16 PM, Timo Sirainen  wrote:
>
> I did several mbox fixes today. Wonder if they could have fixed also
> some of these other mbox bugs you've reported?

I'm stress-testing current 1.1.x mercurial head with imaptest and with
my account.

So far everything seems fine.

It would be great if you've really managed to fix these last rare bugs.

Thanks for your work Timo,
I'll let you know if anything wrong occurs.

Btw, did you have time to look at the multiappend valgrind stuff I sent you?

regards,
Diego.


Re: [Dovecot] another assertion failure in dovecot 1.1.7 mbox-sync-rewrite: (mails[idx].from_offset == start_offset)

2008-12-29 Thread Diego Liziero
On Sat, Dec 13, 2008 at 4:45 PM, Diego Liziero  wrote:
> Sorry, this time I've no core file, (I forgot to set ulimit -c
> unlimited before starting dovecot)

No, I didn't forget: I got it again without a core file because this is
another "disk full" assertion failure.

Both users that got it were over quota.

So probably nothing to worry about.

Regards,
Diego.

> ---
> dovecot: Dec 09 08:26:52 Panic: IMAP(user): file mbox-sync-rewrite.c:
> line 590 (mbox_sync_rewrite): assertion failed:
> (mails[idx].from_offset == start_offset)
> dovecot: Dec 09 08:26:52 Error: IMAP(user): Raw backtrace:
> /usr/libexec/dovecot/imap [0x80f739e] -> /usr/libexec/dovecot/imap
> [0x80f7c5f] -> /usr/libexec/dovecot/imap(i_fatal+0) [0x80f7518] ->
> /usr/libexec/dovecot/imap(mbox_sync_rewrite+0x6e8) [0x8094e45] ->
> /usr/libexec/dovecot/imap [0x809105c] -> /usr/libexec/dovecot/imap
> [0x8091d77] -> /usr/libexec/dovecot/imap [0x809277e] ->
> /usr/libexec/dovecot/imap(mbox_sync+0x2b) [0x8092a0a] ->
> /usr/libexec/dovecot/imap [0x8084ff3] ->
> /usr/libexec/dovecot/imap(mailbox_close+0x47) [0x80b701c] ->
> /usr/libexec/dovecot/imap(cmd_close+0xc0) [0x805b5d8] ->
> /usr/libexec/dovecot/imap [0x80625c4] -> /usr/libexec/dovecot/imap
> [0x80627f9] -> /usr/libexec/dovecot/imap [0x80628f7] ->
> /usr/libexec/dovecot/imap [0x8062933] ->
> /usr/libexec/dovecot/imap(client_input+0xb7) [0x8062ac1] ->
> /usr/libexec/dovecot/imap(io_loop_handler_run+0x17d) [0x8101ecd] ->
> /usr/libexec/dovecot/imap(io_loop_run+0x35) [0x8101164] ->
> /usr/libexec/dovecot/imap(main+0xb0) [0x806df19] ->
> /lib/libc.so.6(__libc_start_main+0xdc) [0x7a5dec] ->
> /usr/libexec/dovecot/imap [0x805a2b1]
>


Re: [Dovecot] mbox "Next message unexpectedly lost" bug fixed

2009-01-06 Thread Diego Liziero
First the good news: with these patches, all of the "Next message
unexpectedly lost" errors I got so far were caused by reading an mbox
written by a previous version of dovecot.

So I think this time you really fixed the last annoying 1.1.x bug :)

Then another doubt: with these patches I'm still getting some:

FETCH for mailbox Sent UID 1077 got too little data: 1380 vs 3541
Corrupted index cache file
/mailhome/username/.imap/Sent/dovecot.index.cache: Broken virtual size
for mail UID 1077

These mails were written by a hg version pulled Dec 14 (that includes
the mbox patches).

Any suggestion for the kind of debug I can add to help solving these ones?

Regards,
Diego.


Re: [Dovecot] PDF corruption issue

2009-01-11 Thread Diego Liziero
On Sun, Jan 11, 2009 at 10:56 PM, Stefan Klatt
 wrote:
>
> Hi Ed,
>
>> I can basically confirm the same intermittent issue, but not where it's
>> coming from
> Same to me.
> I sometimes found it with JPGs included in emails.
> Most of the image shows correctly, but about the last 30 percent is
> only grey.

Could you please check whether you are seeing messages like this in
your dovecot.log?

Error: IMAP(username): FETCH for mailbox Sent UID 2375 got too little
data: 4096 vs 5247

(with different numbers)

Thanks,
Diego Liziero.


[Dovecot] [patch] make /./ a config option.

2008-02-15 Thread Diego Liziero
Hi,
I think that the wu-ftpd style /./ chroot should be a configurable
option.

In our servers we have some home directories in /chroot-web/./username
(where web users can upload their web sites in a chrooted environment)
and all imap mail in /mail-disk/username.

We are planning a dovecot migration from our modified version of uw-imap
and we noticed that the chroot in /chroot-web/ can't be disabled.

This patch adds the bool option home_slash_dot_slash_chroot (feel free
to change this name to something easier to understand). Setting this to
"no" disables the wu-ftp style /./ chroot.

I hope this feature can be considered useful and soon included in
dovecot.

Regards,
Diego Liziero.

diff -dur dovecot-1.0.10/dovecot-example.conf dovecot-1.0.10-disable-slash-dot-slash-chroot/dovecot-example.conf
--- dovecot-1.0.10/dovecot-example.conf	2007-12-11 19:52:08.0 +0100
+++ dovecot-1.0.10-disable-slash-dot-slash-chroot/dovecot-example.conf	2008-02-15 10:44:39.0 +0100
@@ -354,6 +354,14 @@
 # their mail directory anyway. 
 #mail_chroot = 
 
+# Enable checking /./ in user's home directory for chrooting.
+# With this enabled (default), when user's home contains /./ (eg.
+# /newroot/./newhome/user) two things are changed:
+# - mail_chroot is overridden and set to the path before /./
+# - %h (home) is set to the path after /.
+#
+#home_slash_dot_slash_chroot = yes
+
 ##
 ## Mailbox handling optimizations
 ##
diff -dur dovecot-1.0.10/src/master/mail-process.c dovecot-1.0.10-disable-slash-dot-slash-chroot/src/master/mail-process.c
--- dovecot-1.0.10/src/master/mail-process.c	2007-12-20 21:51:23.0 +0100
+++ dovecot-1.0.10-disable-slash-dot-slash-chroot/src/master/mail-process.c	2008-02-15 09:42:53.0 +0100
@@ -477,7 +477,7 @@
 		}
 	}
 
-	if (*chroot_dir == '\0' && (p = strstr(home_dir, "/./")) != NULL) {
+	if (set->home_slash_dot_slash_chroot && *chroot_dir == '\0' && (p = strstr(home_dir, "/./")) != NULL) {
 		/* wu-ftpd like /./ */
 		chroot_dir = t_strdup_until(home_dir, p);
 		home_dir = p + 2;
diff -dur dovecot-1.0.10/src/master/master-settings-defs.c dovecot-1.0.10-disable-slash-dot-slash-chroot/src/master/master-settings-defs.c
--- dovecot-1.0.10/src/master/master-settings-defs.c	2007-12-11 19:52:09.0 +0100
+++ dovecot-1.0.10-disable-slash-dot-slash-chroot/src/master/master-settings-defs.c	2008-02-15 09:13:30.0 +0100
@@ -50,6 +50,7 @@
 	/* mail */
 	DEF(SET_STR, valid_chroot_dirs),
 	DEF(SET_STR, mail_chroot),
+	DEF(SET_BOOL, home_slash_dot_slash_chroot),
 	DEF(SET_INT, max_mail_processes),
 	DEF(SET_BOOL, verbose_proctitle),
 
diff -dur dovecot-1.0.10/src/master/master-settings.c dovecot-1.0.10-disable-slash-dot-slash-chroot/src/master/master-settings.c
--- dovecot-1.0.10/src/master/master-settings.c	2007-12-21 16:10:24.0 +0100
+++ dovecot-1.0.10-disable-slash-dot-slash-chroot/src/master/master-settings.c	2008-02-15 09:12:21.0 +0100
@@ -199,6 +199,7 @@
 	/* mail */
 	MEMBER(valid_chroot_dirs) "",
 	MEMBER(mail_chroot) "",
+	MEMBER(home_slash_dot_slash_chroot) TRUE,
 	MEMBER(max_mail_processes) 1024,
 	MEMBER(verbose_proctitle) FALSE,
 
diff -dur dovecot-1.0.10/src/master/master-settings.h dovecot-1.0.10-disable-slash-dot-slash-chroot/src/master/master-settings.h
--- dovecot-1.0.10/src/master/master-settings.h	2007-12-11 19:52:09.0 +0100
+++ dovecot-1.0.10-disable-slash-dot-slash-chroot/src/master/master-settings.h	2008-02-15 09:12:36.0 +0100
@@ -60,6 +60,7 @@
 	/* mail */
 	const char *valid_chroot_dirs;
 	const char *mail_chroot;
+	bool home_slash_dot_slash_chroot;
 	unsigned int max_mail_processes;
 	bool verbose_proctitle;
 


Re: [Dovecot] [patch] make /./ a config option.

2008-02-16 Thread Diego Liziero
On Fri, 2008-02-15 at 14:53 +0200, Timo Sirainen wrote:
> On Fri, 2008-02-15 at 13:40 +0100, Diego Liziero wrote:
> > This patch adds the bool option home_slash_dot_slash_chroot (feel free
> > to change this name to something easier to understand). Setting this to
> > "no" disables the wu-ftp style /./ chroot.
> 
> There are already too many options, but I guess valid_chroot_dirs could
> be used for this. Committed to v1.1:
> http://hg.dovecot.org/dovecot-1.1/rev/17c65dfdac2a

Great, but this patch only partially solves what we would like to have:
it allows the chroot options to be disabled completely, but it doesn't
allow overriding a /./ chroot with a global mail_chroot option.

This happens because, for the mail_chroot config option to work, its
path must also be listed in valid_chroot_dirs.
That should not be necessary.

In this case validate_chroot should be called before checking for
mail_chroot (see the patch below).

Thank you for your quick answer,
Regards,
Diego Liziero.
diff -dur dovecot-1.0.10/src/master/mail-process.c dovecot-1.0.10-chroot/src/master/mail-process.c
--- dovecot-1.0.10/src/master/mail-process.c	2007-12-20 21:51:23.0 +0100
+++ dovecot-1.0.10-chroot/src/master/mail-process.c	2008-02-16 13:26:16.0 +0100
@@ -492,9 +492,6 @@
 			return FALSE;
 	}
 
-	if (*chroot_dir == '\0' && *set->mail_chroot != '\0')
-		chroot_dir = set->mail_chroot;
-
 	if (*chroot_dir != '\0') {
 		if (!validate_chroot(set, chroot_dir)) {
 			i_error("Invalid chroot directory '%s' (user %s) "
@@ -502,6 +499,12 @@
 				chroot_dir, user);
 			return FALSE;
 		}
+	}
+
+	if (*chroot_dir == '\0' && *set->mail_chroot != '\0')
+		chroot_dir = set->mail_chroot;
+
+	if (*chroot_dir != '\0') {
 		if (set->mail_drop_priv_before_exec) {
 			i_error("Can't chroot to directory '%s' (user %s) "
 "with mail_drop_priv_before_exec=yes",
Only in dovecot-1.0.10-chroot/src/master: mail-process.c.orig


Re: [Dovecot] Corrupted index cache file

2009-02-12 Thread Diego Liziero
On Thu, Feb 12, 2009 at 5:39 PM, Charles Marcus
 wrote:
> On 2/12/2009 3:31 AM, Frank Bonnet wrote:
>> dovecot: Feb 11 16:07:27 Error: IMAP(dumontj): FETCH for mailbox Sent
>> UID 7139 got too little data: 2 vs 11160
>> dovecot: Feb 11 16:07:27 Error: IMAP(dumontj): Corrupted index cache
>> file /user/dumontj/.imap/Sent/dovecot.index.cache: Broken virtual size
>> for mail UID 7139
>>
>> Does removing the cache cure the problem?
>
> Version? There have been lots of mbox fixes in recent versions...

I'm still getting this error in the 1.1.x hg tree (using a post 1.1.11
snapshot taken 6 days ago), happy to see I'm not the only one.

It's really _rare_ and difficult to reproduce.
Usually, it happens when a thunderbird user tells dovecot to store a
mail with multipart mime attachments in a mbox folder (mostly Sent or
Trash folder).

When this error appears, the last mail attachment is actually truncated.

Filesystem here is ext3 on a two node drbd active/passive cluster (Centos 5.2)

Regards,
Diego.


Re: [Dovecot] Corrupted index cache file

2009-02-13 Thread Diego Liziero
On Fri, Feb 13, 2009 at 12:32 AM, Timo Sirainen  wrote:
> On Fri, 2009-02-13 at 00:10 +0100, Diego Liziero wrote:
>> >> dovecot: Feb 11 16:07:27 Error: IMAP(dumontj): FETCH for mailbox Sent
>> >> UID 7139 got too little data: 2 vs 11160
> ..
>> When this error appears, the last mail attachment is actually truncated.
>
> Any idea if the saved data simply isn't written / is truncated, instead
> of the following message overwriting it? i.e. has this happened to the
> last message in the mbox file (with uid = next_uid-1 to make sure the
> last message wasn't actually expunged)?

Sorry, no idea.
There is always a delay between when the error is written in
dovecot.log and when I ask the user what happened. The truncated
message is always one of the latest in the mbox, but I can't tell
whether it happens when the message is the last one.

Maybe a debug message that logs the latest mbox UID together with the
"FETCH for mailbox Sent UID 7139 got too little data: 2 vs 11160"
message could help.
Or maybe an extra debug check that reads back every newly written mbox
mail to verify it was actually written up to the last byte...

Diego.


Re: Dovecot v2.3.13 released

2021-01-13 Thread Diego Liziero
Hello Aki,
fts-solr is still crashing here.
We have many X- headers from antispam, DKIM, and so on, I don't know if it
has anything to do with it.
The same configuration worked a couple of versions ago.

Regards,
Diego.

Latest debian 10.7, binaries from
repo.dovecot.org/ce-2.3-latest/debian/buster

# dovecot --version
2.3.13 (89f716dc2)
# for i in diego.liziero; do doveadm index -u $i \*; echo indexed $i; done
doveadm(diego.liziero): Panic: file http-client-request.c: line 1240
(http_client_request_send_more): assertion failed: (req->payload_input !=
NULL)
doveadm(diego.liziero): Error: Raw backtrace:
/usr/lib/dovecot/libdovecot.so.0(backtrace_append+0x3d) [0x7f9108b8561d] ->
/usr/lib/dovecot/libdovecot.so.0(backtrace_get+0x1e) [0x7f9108b8573e] ->
/usr/lib/dovecot/libdovecot.so.0(+0xfa79b) [0x7f9108b9179b] ->
/usr/lib/dovecot/libdovecot.so.0(+0xfa7d1) [0x7f9108b917d1] ->
/usr/lib/dovecot/libdovecot.so.0(+0x52e30) [0x7f9108ae9e30] ->
/usr/lib/dovecot/libdovecot.so.0(+0x4a868) [0x7f9108ae1868] ->
/usr/lib/dovecot/libdovecot.so.0(http_client_connection_output+0xf2)
[0x7f9108b36cc2] -> /usr/lib/dovecot/libdovecot.so.0(+0x120481)
[0x7f9108bb7481] -> /usr/lib/dovecot/libdovecot.so.0(io_loop_call_io+0x69)
[0x7f9108ba7599] ->
/usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run_internal+0x131)
[0x7f9108ba8b11] ->
/usr/lib/dovecot/libdovecot.so.0(io_loop_handler_run+0x4c) [0x7f9108ba763c]
-> /usr/lib/dovecot/libdovecot.so.0(io_loop_run+0x40) [0x7f9108ba77b0] ->
/usr/lib/dovecot/libdovecot.so.0(+0x9b610) [0x7f9108b32610] ->
/usr/lib/dovecot/libdovecot.so.0(http_client_request_send_payload+0x30)
[0x7f9108b326e0] -> /usr/lib/dovecot/modules/lib20_fts_plugin.so(+0xf15d)
[0x7f910831415d] ->
/usr/lib/dovecot/modules/lib20_fts_plugin.so(fts_parser_more+0x27)
[0x7f9108312f87] -> /usr/lib/dovecot/modules/lib20_fts_plugin.so(+0xc25f)
[0x7f910831125f] ->
/usr/lib/dovecot/modules/lib20_fts_plugin.so(fts_build_mail+0x4d)
[0x7f910831198d] -> /usr/lib/dovecot/modules/lib20_fts_plugin.so(+0x12060)
[0x7f9108317060] ->
/usr/lib/dovecot/libdovecot-storage.so.0(mail_precache+0x2e)
[0x7f9108ca5a0e] -> doveadm(+0x368ff) [0x56138f8c98ff] -> doveadm(+0x30ee6)
[0x56138f8c3ee6] -> doveadm(+0x31ada) [0x56138f8c4ada] ->
doveadm(doveadm_cmd_ver2_to_mail_cmd_wrapper+0x21a) [0x56138f8c587a] ->
doveadm(doveadm_cmd_run_ver2+0x4df) [0x56138f8d5d2f] ->
doveadm(doveadm_cmd_try_run_ver2+0x37) [0x56138f8d5d87] ->
doveadm(main+0x1ca) [0x56138f8b4e9a] ->
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xeb) [0x7f910871e09b] ->
doveadm(_start+0x2a) [0x56138f8b537a]
Aborted


On Mon, Jan 4, 2021 at 1:04 PM Aki Tuomi  wrote:

> [..]
> - fts-solr: HTTP requests may have assert-crashed:
>   Panic: file http-client-request.c: line 1232
> (http_client_request_send_more):
>   assertion failed: (req->payload_input != NULL)
>
>


Re: SOLR 5

2015-06-17 Thread Diego Liziero
On Sat, Feb 28, 2015 at 8:17 PM, Robert Gierzinger  wrote:

> Hello,
>
> I just wanted to give SOLR 5 a try, however there probably have changed
> quite some bits in the config files, did not even manage to create a core
> with various solrconfig.xml and schema.xml files, but I am absolutely no
> expert in solr.
> Has anybody given it a try or are there some tips on how to get it running?
>
> regards,
> Robert
>


Using solr-5.0.0 with a single core here; it's the last version I
managed to get working with dovecot 1.1.18.

Starting from 5.1.0, solr changed behaviour and complains with "Bad
contentType for search handler :text/xml" (I tried changing the dovecot
request header to "application/x-www-form-urlencoded", but then I got a
"missing content stream" error).

Here are some further steps I did to get it working; I'm not sure it's
the correct way of doing it, though (but I'm sure someone will correct
me where I'm wrong).

I've created a new folder server/solr/dovecot/ with a file core.properties
and two subdirs: data and conf.

My core.properties contains these four lines:
name=dovecot
config=solrconfig.xml
schema=schema.xml
dataDir=data

In the server/solr/dovecot/conf dir I initially copied the contents of
one of the sample dirs (server/solr/configsets/basic_configs/conf), and
then copied the solr-schema.xml from the dovecot install as schema.xml.
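The layout described above can be sketched as a few shell commands. SOLR_HOME and the solr-schema.xml source path are assumptions; point them at your own solr-5.0.0 unpack and Dovecot install (the default scratch dir just lets the sketch run standalone):

```shell
#!/bin/sh
set -e

# Assumption: your solr-5.0.0 unpack; defaults to a scratch dir.
SOLR_HOME="${SOLR_HOME:-$(mktemp -d)}"
CORE="$SOLR_HOME/server/solr/dovecot"

mkdir -p "$CORE/data" "$CORE/conf"

# The four-line core.properties described above.
cat > "$CORE/core.properties" <<'EOF'
name=dovecot
config=solrconfig.xml
schema=schema.xml
dataDir=data
EOF

# On a real install, then copy the sample config set and overwrite
# schema.xml with Dovecot's solr-schema.xml (paths are assumptions):
#   cp -r "$SOLR_HOME/server/solr/configsets/basic_configs/conf/." "$CORE/conf/"
#   cp /path/to/dovecot/doc/solr-schema.xml "$CORE/conf/schema.xml"

echo "core skeleton created under $CORE"
```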

This changes the main url path of solr, so I had to point dovecot config to
http://localhost:8983/solr/dovecot/ and the crontab became:
# solr
0 0 * * *	curl http://localhost:8983/solr/dovecot/update?optimize=true
*/10 * * * *	curl http://localhost:8983/solr/dovecot/update?commit=true &>/dev/null

If someone knows how to keep the usual main "http://localhost:8983/solr/"
url with solr-5, please let me know how to do it.

In bin/init.d/solr there is an init.d script with some comments on
getting it correctly configured, and some pointers to the other config
steps (such as creating a solr user, editing conf/solr.in.sh for other
environment variables, and so on).

Regards,
Diego.