Hi,

Lukas Vlcek wrote:
Hi,

I would like to keep user search history data and I am looking for some
ideas/advice/recommendations. In general I would like to talk about methods
of storing such data, its structure and how to turn it into valuable
information.

As for the structure:
==============
For now I don't have an exact idea about what kind of information I should
keep. I know that this is application specific, but I believe there can be
some common general patterns. As of now, what I think can be useful to keep
is the following (a rough sketch of such a record follows the list):

1) system time (time of issuing the query) and user ID
2) original user query in raw form (untokenized)
3) expanded user query (both tokenized and untokenized can be useful)
4) query execution time
5) # of objects retrieved from the index
6) total # of objects in the index (this can change over time)
7) and possibly whether the user clicked some result, and if so, which one
(the hit number) and at what system time
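
For illustration only, such a record might be sketched as a plain Java
class (all names are hypothetical, just mirroring items 1-7):

    // Hypothetical record for one logged search; field names are illustrative only.
    public class SearchLogEntry {
        long timestamp;        // 1) system time when the query was issued
        String userId;         // 1) user identifier
        String rawQuery;       // 2) original query, untokenized
        String expandedQuery;  // 3) expanded/rewritten query
        long executionMillis;  // 4) query execution time
        int hitCount;          // 5) number of objects retrieved from the index
        int indexSize;         // 6) total object count in the index at query time
        Integer clickedRank;   // 7) hit number the user clicked, or null if none
        Long clickTimestamp;   // 7) system time of the click, or null
    }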

Remember that you may not want to store all the information available at query time, since it may impose a significant performance burden. For example, you may want to store the raw form of the query but not the parsed form, since you can always re-parse the query later (unless you make some architecture change). Similarly, item 6 does not seem like a good choice to me (again, you can store that info externally). You can look at the common and extended log formats which are stored by web servers.
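
Following that advice (raw query only, item 6 kept externally), one
hypothetical flat format is a single tab-separated line per query, using
the SearchLogEntry sketch above:

    // Render one search as a single tab-separated log line (hypothetical format;
    // assumes queries contain no tab characters).
    static String toLogLine(SearchLogEntry e) {
        return String.join("\t",
                Long.toString(e.timestamp),
                e.userId,
                e.rawQuery,                                 // raw form only; re-parse later if needed
                Long.toString(e.executionMillis),
                Integer.toString(e.hitCount),
                e.clickedRank == null ? "-" : e.clickedRank.toString());
    }
    // e.g. "1215443522000\tu42\tfuzzy~ search\t35\t17\t2"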

As for the information I can get from this:
=============================
Such minimal data collection could show whether the search engine serves users
well or not (generally speaking). I should note that for the users in this case
the only other option is to not use the search engine at all (so the data
should not be biased by users falling back to an alternative search
method). I should be able to learn whether (see the small analysis sketch
after this list):

1) there are bottleneck queries (prefix, fuzzy, proximity queries...)
2) users are finding what they want (they can find it fast, and results are
ordered by properly defined relevance [my model is well tuned in terms of
term weights], so the result they click is among the first hits)
3) users can formulate queries well (do they issue queries which return all
documents in the index, or can they issue queries which return just a couple
of documents)
4) ...?... etc...
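
To make 2) and 3) concrete, here is a rough sketch of computing such
statistics over the tab-separated log sketched above (the column layout,
file name and index size are all assumptions):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    // Rough per-log statistics: how often the clicked result is in the top 3
    // (question 2), and how often a query matches more than half of the index
    // (question 3).
    public class LogStats {
        public static void main(String[] args) throws IOException {
            long clicks = 0, topThreeClicks = 0, queries = 0, broadQueries = 0;
            int indexSize = 100_000;  // assumed known externally (see item 6 above)
            for (String line : Files.readAllLines(Paths.get("search0.log"))) {
                String[] f = line.split("\t");
                queries++;
                if (Integer.parseInt(f[4]) > indexSize / 2) broadQueries++;  // hit count column
                if (!f[5].equals("-")) {                                      // clicked-rank column
                    clicks++;
                    if (Integer.parseInt(f[5]) <= 3) topThreeClicks++;
                }
            }
            System.out.printf("top-3 clicks: %d/%d, overly broad queries: %d/%d%n",
                    topThreeClicks, clicks, broadQueries, queries);
        }
    }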

Web server log analysis is a very popular topic nowadays, and you can check the literature, especially on clickthrough data analysis. All the major search engines have to interpret this data to improve their algorithms and to learn from the latent "collective knowledge" hidden in web server logs.

As for the storage method:
===================
I was planning to keep such data in a database, but now it seems to me that it
would be better to keep it directly in an index (a Lucene index). It seems to me
that this approach would allow better fuzzy searches across the history
and more efficient extraction of relevant objects and their counts (with the
benefit of relevance-based search on top of the history search corpus).
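
If the history goes straight into a Lucene index, a minimal sketch of
indexing one record might look like this (written against a recent Lucene
API, roughly 6.x or later; older releases spell the field flags
differently, and SearchLogEntry is the hypothetical record sketched
earlier):

    import java.io.IOException;
    import java.nio.file.Paths;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.*;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.store.FSDirectory;

    // Index one search-history record so the history itself becomes searchable
    // (fuzzy queries over past queries, counts via normal Lucene search).
    static void indexEntry(SearchLogEntry e) throws IOException {
        try (IndexWriter writer = new IndexWriter(
                FSDirectory.open(Paths.get("history-index")),
                new IndexWriterConfig(new StandardAnalyzer()))) {
            Document doc = new Document();
            doc.add(new StringField("userId", e.userId, Field.Store.YES));   // exact-match field
            doc.add(new TextField("rawQuery", e.rawQuery, Field.Store.YES)); // analyzed, fuzzy-searchable
            doc.add(new LongPoint("timestamp", e.timestamp));                // for range queries
            doc.add(new StoredField("hitCount", e.hitCount));                // stored only
            writer.addDocument(doc);
        }
    }

Opening a writer per record is only for brevity here; in practice you would
keep one IndexWriter open and add documents as queries arrive.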

I think a more scalable solution would be to keep such data in a plain flat
file and then periodically recreate the search history index (or more indices)
from it (for example by a Map-Reduce-like task). Even better, the flat file
could be stored in a distributed file system. However, for now I would like to
start with something simple.
I would rather suggest keeping the logs in rolling flat files. Accessing the database for each search will take a lot of time. Then you may want to flush those logs to the DB once a day, if you indeed want to store the data in a relational way.
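
For what it's worth, the JDK's own logging can do the rolling for you (a
sketch; log4j's RollingFileAppender works just as well, and the file name
and sizes are arbitrary):

    import java.io.IOException;
    import java.util.logging.FileHandler;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;

    // Rotate across 10 files of ~10 MB each: search0.log ... search9.log
    static Logger openSearchLog() throws IOException {
        Logger log = Logger.getLogger("searchlog");
        log.setUseParentHandlers(false);  // keep search lines out of the console
        FileHandler handler = new FileHandler("search%g.log", 10_000_000, 10, true);
        handler.setFormatter(new SimpleFormatter());
        log.addHandler(handler);
        return log;
    }
    // later, once per query:
    // searchLog.info(toLogLine(entry));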

I infer that you want to mine the data, but you do not know what to mine, right? I suggest you look at Hadoop and Pig. Pig is designed especially for this purpose.

I know this is a complex topic...

Regards,
Lukas

