Re: Possible akonadi problem?

2015-01-23 Thread Martin Steigerwald

On Thursday, 22 January 2015 22:11:37 CEST, Martin Steigerwald wrote:

On Friday, 23 January 2015, 07:17:13, Dmitry Smirnov wrote:

Hi Brad,

On Fri, 2 Jan 2015 11:28:53 Brad Alexander wrote: ...


Thank you very much, I will try this and see whether it helps with these 
upstream bugs:


[Akonadi] [Bug 338402] File system cache is inneficient : too many file per 
directory


Bug 332013 - NFS with NetApp FAS: please split payload files in file_db_data
into several directories to avoid reaching maxdirsize limit on Ontap / WAFL
filesystem

Bug 341884 - dozens of duplicate mails in 
~/.local/share/akonadi/file_db_data



I bet it may help with the first two, but the third one might be another bug.


Right now, locally, after the last akonadictl fsck, I have only gone from 
4600 files (right after the fsck) to 4900 files now in my private setup, 
which is still mostly POP3 with just a 30-day-limited IMAP account for the 
Fairphone and as a backup in case I accidentally mess something up locally. 
Akonadi really does seem snappier since the fsck. I have lots more files in 
there.


I also bumped innodb_buffer_pool_size but didn't see that much of a change; 
what helped most with the MySQL load was using the Akonadi git 1.13 branch 
with its database performance improvement.


I have now implemented the threshold size change you suggested and did 
another fsck and vacuum.
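
For anyone wanting to repeat this, the steps were roughly as follows 
(assuming that SizeThreshold sits in the [%General] section of 
~/.config/akonadi/akonadiserverrc, which is how I believe that file is 
laid out):

akonadictl stop
# make *sure* no leftover Akonadi mysqld is still running, see below
$EDITOR ~/.config/akonadi/akonadiserverrc
#   [%General]
#   SizeThreshold=32768
akonadictl start
akonadictl fsck
akonadictl vacuum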


*Beware: this is partly a rant, but I think a well-founded one.*

In the end I had mysqld writing excessively at 150-300 MiB/s for about 15 
minutes or more; I am sure it wrote the *complete* size of the database 
several times over. I also have mysqld complaining about not being able to 
get locks and suggesting that a second instance might be running. I only 
found this later.


I don't think this had to do with the change you suggested, but with another 
problem. I have found it often enough that after akonadictl stop, with 
akonadictl status saying it has actually stopped, mysqld was still running. 
I usually check this and send it SIGTERM, waiting for it to end. But I 
didn't yesterday.


After a long time of writes I SIGKILLed the mysqld process in order to end 
the excessive writes.


I tried to recover from backup, but it didn't work out. At one point I had 
created a second maildir resource, and even after making sure I had 
everything in ~/.config/akonadi from backup, making sure there were no other 
files in it with rsync --del (and the same for ~/.local/share/akonadi), and 
even after deleting akonadi_maildir_resource_1rc from ~/.kde/share/config, 
it insisted on having a maildir resource 1 today, and it filled the database 
and file_db_data with the results of scanning my million mails again.


So I suggest to anyone trying this change:

On akonadictl stop, make *perfectly* sure that the mysqld process has ended.
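
A rough way to check, for example (the process match pattern is only an 
assumption, based on Akonadi starting its own mysqld with a configuration 
below ~/.local/share/akonadi):

akonadictl stop
# look for a leftover Akonadi-owned mysqld
pgrep -af 'mysqld.*akonadi'
# if one is still running, ask it to shut down cleanly and wait for it
pkill -TERM -f 'mysqld.*akonadi'
while pgrep -f 'mysqld.*akonadi' > /dev/null; do sleep 1; done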

After my attempts at getting back from backup failed *twice*, I am now doing 
it *all* from scratch, including adapting a ton of filter rules.


I really hope that some day Akonadi as it is today will be gone and replaced 
by a better-designed Akonadi Next, just as Baloo replaced Nepomuk. I think 
Akonadi Next needs to meet the following requirements:


1) It is a *cache* and *nothing* else. It never stores any information 
inside the cache that isn't stored elsewhere or that it isn't able to 
rebuild. That way, in case of issues with the cache, it is possible to 
remove it and rebuild it from scratch *without* *any* data loss involved 
and *without* having to recreate filter rules. If it needs an ID for a 
folder, fine, store it in an xattr or otherwise with the original data 
store. Otherwise I really suggest functionality to re-attach filter rules 
by the *name* of the folder.
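
A purely illustrative sketch of what I mean (the attribute name and the 
maildir path are made up; this is not something Akonadi does today):

# attach the collection ID to the maildir folder itself ...
setfattr -n user.collection_id -v 128 ~/Maildir/.lists.debian-kde/
# ... so a rebuilt cache could read it back instead of inventing a new one
getfattr -n user.collection_id ~/Maildir/.lists.debian-kde/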


2) Make it *just plain* obvious *where* Akonadi stores the configuration 
and the data. For now it is at least ~/.config/akonadi with one or two files 
per resource (the changes.dat files there), ~/.kde/share/config/akonadi*, 
~/.kde/share/config/kmail2rc (which contains references to Akonadi 
resources), and a file below ~/.kde/share/config/ which contains the local 
filter rules.
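
Just getting an overview of all these places today takes something like:

du -sh ~/.config/akonadi ~/.local/share/akonadi \
    ~/.kde/share/config/akonadi* ~/.kde/share/config/kmail2rc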


3) Make the filter rules more robust: make them survive a complete deletion 
of the cache, and separate config from cache *completely*. Never ever *mix* 
the two.


So replace:

[Filter #25]
Applicability=2
AutomaticName=true
ConfigureShortcut=false
ConfigureToolbar=false
Enabled=true
Icon=system-run
StopProcessingHere=true
ToolbarName=: 
accounts-set=akonadi_pop3_resource_0
action-args-0=128
action-name-0=transfer
actions=1
apply-on=manual-filtering
contentsA=
fieldA=List-Id
funcA=contains
identifier=yNhsmyF7PrdhRuTL
name=: 
operator=and
rules=1

with something sane that can recover, on cache loss, from the folder name or 
from an ID that is stored *with* the folder.
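
For example (purely hypothetical syntax, nothing KMail understands today), 
the transfer action could reference the target folder by path instead of by 
a bare cache ID:

action-name-0=transfer
action-args-0=Local Folders/Lists/debian-kde

instead of action-args-0=128 as above.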


4) I suggest a completely obvious structure:

//
/
//

with that model it should be possible to wipe out the cache data in case of 
any problem. Really make this work; make the software assume that the data 
may get corrupted, since it is not under its sole control to avoid hardware 
failures. It is *just a cache*.

Re: Possible akonadi problem?

2015-01-23 Thread Martin Steigerwald
On Friday, 23 January 2015, 09:54:10, Martin Steigerwald wrote:
> [...]
> I tried to recover from backup, but it didn't work out. At one point I
> had created a second maildir resource, and even after making sure I had
> everything in ~/.config/akonadi from backup, making sure there were no
> other files in it with rsync --del (and the same for
> ~/.local/share/akonadi), and even after deleting
> akonadi_maildir_resource_1rc from ~/.kde/share/config, it insisted on
> having a maildir resource 1 today, and it filled the database and
> file_db_data with the results of scanning my million mails again.

After about half a day of waiting, wanting to wipe Akonadi off my system for 
good, and re-importing filters, I am almost back to some sanity.

I am really looking forward to a replacement for Akonadi. I think it needs 
one.

One with simplicity, robustness, and performance in mind, as in: only use as 
many resources as the task at hand really needs.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7





Re: Possible akonadi problem?

2015-01-23 Thread Diederik de Haas
On Friday 23 January 2015 07:17:13 Dmitry Smirnov wrote:
> Here is what helped me:
> 
>  * Set "SizeThreshold=32768" in "~/.config/akonadi/akonadiserverrc"
>(default is 4096).
> 
>  * Run "akonadictl fsck" which moved unreferenced files to
>"~/.local/share/akonadi/file_lost+found" (safe to remove)
>and gobbled files smaller than 32768 bytes from "file_db_data"
>into the database.
> 
> The above procedure dramatically improved kmail performance and response
> time.

Thank you for that :-) It indeed seems to improve performance substantially.

Someone else suggested 'akonadictl fsck' before (but I can't find that 
message), and that seems to have solved the 'Resource X is broken/went 
offline' issue, which was quite annoying and didn't make sense at all since 
I have a fast and stable internet connection.

Cheers,
  Diederik
-- 
GPG: 0x138E41915C7EFED6



Re: How to view audio CD in Dolphin/Konqueror?

2015-01-23 Thread D. R. Evans
Marco Valli wrote on 01/22/2015 08:18 AM:
> On Thursday, 22 January 2015 08:08:23, D. R. Evans wrote:
>> or does anyone here have more suggestions?
> 
> Did you delete the old configuration files of KDE in your /home?
> 

I'm sorry, but I don't understand what you're asking.

I haven't deleted any configuration files; but I don't know what you mean by
"old", nor which of the many KDE configuration files you think I should delete.

  Doc

-- 
Web:  http://www.sff.net/people/N7DR





Re: Possible akonadi problem?

2015-01-23 Thread Martin Steigerwald
On Friday, 23 January 2015, 07:17:13, Dmitry Smirnov wrote:
> Hi Brad,
> 
> On Fri, 2 Jan 2015 11:28:53 Brad Alexander wrote:
> > Can anyone suggest a fix or workaround other than starting akonadi
> > every 5 - 10 minutes?
> 
> I had a similar problem when akonadi accumulated nearly 2 million files in
> the
> 
> ~/.local/share/akonadi/file_db_data
> 
> Here is what helped me:
> 
>  * Set "SizeThreshold=32768" in "~/.config/akonadi/akonadiserverrc"
>(default is 4096).
> 
>  * Run "akonadictl fsck" which moved unreferenced files to
>"~/.local/share/akonadi/file_lost+found" (safe to remove)
>and gobbled files smaller than 32768 bytes from "file_db_data"
>into the database.
> 
> The above procedure dramatically improved kmail performance and response
> time.
> 
> To optimise further you can check MySQL configuration in
> 
> ~/.config/akonadi/mysql-local.conf
> ~/.local/share/akonadi/mysql.conf
> 
> and bump "innodb_buffer_pool_size" and/or other relevant parameters.

With the SizeThreshold=32768 change I get a nice improvement for my work 
IMAP account on the laptop (the one I set to download all mails for offline 
use).

Before:

ms@merkaba:~/.local/share/akonadi> du -sch db_data/akonadi/* | sort -rh | head -10
2,8G    insgesamt
2,6G    db_data/akonadi/parttable.ibd
245M    db_data/akonadi/pimitemtable.ibd
13M     db_data/akonadi/pimitemflagrelation.ibd
248K    db_data/akonadi/collectionattributetable.ibd
200K    db_data/akonadi/collectiontable.ibd
136K    db_data/akonadi/tagtable.ibd
120K    db_data/akonadi/tagtypetable.ibd
120K    db_data/akonadi/tagremoteidresourcerelationtable.ibd
120K    db_data/akonadi/tagattributetable.ibd

ms@merkaba:~/.local/share/akonadi> find file_db_data | wc -l
524917


ms@merkaba:~/.local/share/akonadi#130> /usr/bin/time -v du -sch file_db_data
7,0G    file_db_data
7,0G    insgesamt
Command being timed: "du -sch file_db_data"
User time (seconds): 2.14
System time (seconds): 95.93
Percent of CPU this job got: 29%
Elapsed (wall clock) time (h:mm:ss or m:ss): 5:35.47
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 33444
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 1
Minor (reclaiming a frame) page faults: 8079
Voluntary context switches: 667562
Involuntary context switches: 60715
Swaps: 0
File system inputs: 31509216
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0

After the change and akonadictl fsck:

ms@merkaba:~/.local/share/akonadi> find file_db_data | wc -l ; du -sch db_data/akonadi/* | sort -rh | head -10
27
7,5G    insgesamt
7,3G    db_data/akonadi/parttable.ibd
245M    db_data/akonadi/pimitemtable.ibd
13M     db_data/akonadi/pimitemflagrelation.ibd
248K    db_data/akonadi/collectionattributetable.ibd
200K    db_data/akonadi/collectiontable.ibd
136K    db_data/akonadi/tagtable.ibd
120K    db_data/akonadi/tagtypetable.ibd
120K    db_data/akonadi/tagremoteidresourcerelationtable.ibd
120K    db_data/akonadi/tagattributetable.ibd

Yep, that's 27 files, instead of >50 (after just a week since the last 
fsck, which had reduced it to about 50 files, from 65+).


After a nice vacuuming I even get:

ms@merkaba:~/.local/share/akonadi> find file_db_data | wc -l ; du -sch db_data/akonadi/* | sort -rh | head -10
27
6,5G    insgesamt
6,2G    db_data/akonadi/parttable.ibd
245M    db_data/akonadi/pimitemtable.ibd
13M     db_data/akonadi/pimitemflagrelation.ibd
248K    db_data/akonadi/collectionattributetable.ibd
200K    db_data/akonadi/collectiontable.ibd
136K    db_data/akonadi/tagtable.ibd
120K    db_data/akonadi/tagtypetable.ibd
120K    db_data/akonadi/tagremoteidresourcerelationtable.ibd
120K    db_data/akonadi/tagattributetable.ibd

merkaba:/home/ms/.local/share/akonadi> du -sh file_db_data
6,5M    file_db_data


I definitely prefer this over the original situation.

The original was a 2,8 GiB DB + 7 GiB of file_db_data.

Now it is a 6,5 GiB DB + 6,5 MiB of file_db_data, and more than 524000 fewer 
files to consider for rsync and our enterprise backup software.

Let's see whether it brings a performance improvement, but for now I like 
this.

Thanks,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7
