On Thu, 2010-10-21 at 14:39 +0800, Chen, Zhenqiang wrote:
Any chance you got a backtrace of that?
Backtrace screenshots are attached.
segfault.GIF shows the segmentation fault.
g_ptr_array_ref.GIF shows the CRITICAL warning from g_ptr_array_ref.
For the test, I just changed the code as
There was a reason, but it was not the one I mentioned in my previous
mail; sorry for the noise...
There are some open issues that I'm now fixing in that branch... the
thing got a little bit more complicated yet :-)
Ok, fixed those things; the branch should behave better now.
On Mon, 2010-10-18 at 20:54 +0200, Aleksander Morgado wrote:
[Cut]
With the new miner-fs-refactor-multi-insert branch, we are merging
SPARQL updates in a single D-Bus connection; but those updates are
still SQL-inserted one by one into the SQLite database (IIRC, pvanhoof?),
Correct
Hum, yes, that's the idea actually; I don't understand why you say it's
a regression. CREATED and UPDATED events will get merged in the SPARQL
buffer, and the buffer will be flushed (committed to the store)
if any of
By regression I mean it takes more time than the other branches.
What do you mean with UPDATEs being merged?
I checked pool->sparql_buffer->len. If it is greater than 1, it means
several SPARQL updates have been merged.
In my view, more items in sparql_buffer means better performance.
Thanks!
-Zhenqiang
Hi hi,
FYI, just rebased the 'miner-fs-refactor-multi-insert' branch onto
master.
--
Aleksander
___
tracker-list mailing list
tracker-list@gnome.org
http://mail.gnome.org/mailman/listinfo/tracker-list
As in my previous tests of this feature, I couldn't see any major
improvement on my desktop with multiple CPUs, but it should give a
notable improvement on single-CPU machines.
I will be checking this branch in the following days to make sure there
are no new memory leaks and also in the most
I tested the branch. There is a regression compared with Tracker git
master and the multi-insert branch. Logs show UPDATEs are merged.
This is unexpected. I will investigate it.
What do you mean with UPDATEs being merged?
--
Aleksander
Hi all,
With this patch in mind we started working on an API that allows
multiple inserts per D-Bus call (and still get multiple errors back,
since the queries don't all belong to the same transaction set).
This experimental work is taking place in the branch multi-insert.
Just finished
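A minimal sketch of the batching idea described above, with hypothetical names (update_array, run_one_update) standing in for the real D-Bus API: each update in the batch gets its own error slot, so one failing query does not abort the rest.

```c
#include <stddef.h>

/* Pretend executor standing in for running one SPARQL update;
 * it fails (nonzero) on empty queries.  Illustrative only. */
static int
run_one_update (const char *query)
{
        return (query == NULL || query[0] == '\0') ? -1 : 0;
}

/* Execute every query independently; errs[i] holds query i's result,
 * so the caller gets one error per update rather than one for the
 * whole batch.  Returns how many queries succeeded. */
static int
update_array (const char **queries, int n, int *errs)
{
        int ok = 0;
        for (int i = 0; i < n; i++) {
                errs[i] = run_one_update (queries[i]);
                if (errs[i] == 0)
                        ok++;
        }
        return ok;
}
```

The design point is that the batch shares one D-Bus round trip but not one transaction: a bad query reports its own error while its neighbours still commit.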
I will try D-Bus 1.3 and the latest Tracker.
Oops, yeah, I meant 1.3.1, not 3.7. Also, note that 1.3.0 is buggy, so
you will need 1.3.1. This should avoid quite a few memory copies
when indexing.
I tried the Tracker git code (master, last updated Sep 7), but test
results show it is ~15% slower than
What do you think about the idea of grouping the updates for the files
of one directory into one update?
Seems a good idea :-)
Trying your patch right now! Thanks
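The grouping idea could look roughly like this. The helper names (dir_len, count_grouped_updates) and the path handling are illustrative assumptions, not the actual patch: consecutive files that share a directory collapse into one update.

```c
#include <string.h>

/* Sketch of "group the files of one directory into one update": walk a
 * sorted list of paths and start a new batch whenever the directory
 * part changes.  A real implementation would append each batch's
 * triples into a single SPARQL update; here we only count batches. */

/* Length of the directory prefix of a path (up to the last '/'). */
static size_t
dir_len (const char *path)
{
        const char *slash = strrchr (path, '/');
        return slash ? (size_t) (slash - path) : 0;
}

/* Number of updates needed when consecutive same-directory files
 * are merged into one. */
static int
count_grouped_updates (const char **paths, int n)
{
        int groups = 0;
        for (int i = 0; i < n; i++) {
                if (i == 0 ||
                    dir_len (paths[i]) != dir_len (paths[i - 1]) ||
                    strncmp (paths[i], paths[i - 1],
                             dir_len (paths[i])) != 0)
                        groups++;
        }
        return groups;
}
```

Since the crawler visits a directory's files together, this reduces the update count from one per file to one per directory in the common case.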
--
Aleksander
On Thu, 9 Sep 2010 17:04:06 +0800,
Chen, Zhenqiang <zhenqiang.c...@intel.com> wrote:
I will try d-bus-1.3 and latest tracker.
Hi hi,
What do you think about the idea grouping the updates for files of one dir
into one update?
Seems a good idea :-)
Trying your patch right now! Thanks
So, I created a new branch for the issue: miner-fs-merge-updates
I couldn't use your patch, as it was not aligned with
Any comments on how to improve it?
Are you using glib unicode parsing? We changed configure to require
!glib yesterday; the indexing performance difference is incredible.
Solved with:
sudo apt-get install libunistring-dev
./autogen.sh
Upgrading to D-Bus 3.7 and getting the latest 0.9.x
Hi,
I decided I wanted to monitor my whole $HOME directory recursively, and
tracker-miner-fs had spent more than two hours placing monitors on each
directory recursively before I made it stop.
Is that normal?
At 11:30 I was fed up and decided to stop it from indexing $HOME and
start index
On 02/09/10 10:52, Mildred Ki'Lya wrote:
Hi,
Which version of Tracker are you using? There
On 09/02/2010 02:45 PM, Mildred Ki'Lya wrote:
It is quite possible that my database is corrupted and so tracker thinks
it has to reindex everything.
I just wiped my database and started indexing from zero; hopefully
that'll help.
That was of great help, now tracker starts much more quickly.
For me, an initial index takes:
Finished mining in seconds:303.820123, total directories:2224, total files:19550
Subsequent crawls take:
Finished mining in seconds:26.631053, total directories:2224, total files:19550
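For a rough sense of those numbers (a worked sketch, not from the logs): 19550 files in ~304 s is about 64 files/s for the initial index, versus about 734 files/s for the subsequent crawl, roughly an 11x difference once nothing needs re-extraction.

```c
/* Hypothetical helper to put the timings above in files/second. */
static double
files_per_second (double files, double seconds)
{
        return files / seconds;
}

/* 19550 / 303.820123 is about 64.3 files/s (initial index)
 * 19550 /  26.631053 is about 734.1 files/s (subsequent crawl) */
```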
But my test shows Tracker-related modules take ~150%