On Tuesday, November 12, 2002, at 11:14 PM, Mark J. Stang wrote:
Yes, I am replying to my own e-mail.   Woke up this morning
and realized that the sample code listed below doesn't have a
loop.   The overhead of connecting to the database, creating
a collection object, etc. is in every execution of
this code.  Depending on the OS/hardware, I have seen
this code, executed one time, take anywhere from 2 to 6
seconds.

Yes, I know there is no loop in there...

What I did was actually add a few more of these within the code, quick-and-dirty style ;-) :
xpath = "some new xpath";
System.out.println("start:" + String.valueOf(new java.util.Date()));
rs = service.query(xpath);
System.out.println("end:" + String.valueOf(new java.util.Date()));
//get the iterator and print out etc...
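For anyone who wants to reproduce the pattern, here is a self-contained sketch of the same timing idea, measuring elapsed milliseconds instead of printing Date strings. Note that runQuery below is just a hypothetical stand-in for rs = service.query(xpath), since the real call needs a live Xindice collection:

```java
// Minimal sketch of the timing pattern above. runQuery is a
// placeholder for the real service.query(xpath) call.
public class QueryTimer {

    // Runs the given work and returns elapsed wall-clock milliseconds.
    static long timeMillis(Runnable work) {
        long start = System.currentTimeMillis();
        work.run();
        return System.currentTimeMillis() - start;
    }

    // Hypothetical stand-in for an XPath query; sleeps 50 ms
    // so the timer has something to measure.
    static void runQuery() {
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        long elapsed = timeMillis(QueryTimer::runQuery);
        System.out.println("query took " + elapsed + " ms");
    }
}
```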


I left them out on my post, because I figured it was redundant, and that I would get a little credit for knowing about things like overhead. <g>

So I am aware of the overhead issues - and I would gladly accept even 10 seconds of overhead for the first query. The problem is that each and every query is taking upwards of 200 seconds. So I tried dropping the collection, recreating it, adding the indexes first, and then adding in documents slowly. This experiment wasn't very controlled, but here are some results:

First, just getting docs by key.
Using: xindice rd -c /db/resumes -n DAAN109
Documents    Real    User    Sys
    1        2.81    1.73    0.40
   10        3.16    1.77    0.35
  100        2.91    1.75    0.39
 1000        2.84    1.65    0.50
 2000        3.63    1.65    0.48
 7700        2.26    1.65    0.37

So far, so good, considering there are other processes running on this box...

Now, the times for the same lookup using XPath.
Using: xindice xpath -c /db/resumes -q "/candidate[biographic_data/id='DAAN109']"


Documents    Real      User    Sys
    1          2.98    1.82    0.44
   10          3.30    1.75    0.39
  100          5.74    1.58    0.50
 1000         28.53    1.88    0.39
 2000         29.87    1.47    0.48
 7700        247.19    2.11    0.92
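One thing that jumps out of those numbers: if I subtract roughly 3 seconds of fixed startup overhead (my own assumption, taken from the by-key table, where every run hovers around 3 seconds regardless of size), most of the larger runs cost a fairly steady 25-32 ms per document, which looks more like a full collection scan than an index lookup. A quick arithmetic sketch of that back-of-the-envelope check:

```java
// Rough per-document cost of the XPath runs above, assuming a
// fixed ~3 s startup overhead. The 3 s figure is an assumption
// based on the by-key lookup times, not a measured constant.
public class ScanCheck {

    // Converts a total Real time into milliseconds per document
    // after subtracting the assumed fixed overhead.
    static double perDocMillis(int docs, double realSeconds) {
        return (realSeconds - 3.0) * 1000.0 / docs;
    }

    public static void main(String[] args) {
        int[] docs = {100, 1000, 2000, 7700};
        double[] real = {5.74, 28.53, 29.87, 247.19};
        for (int i = 0; i < docs.length; i++) {
            System.out.printf("%5d docs: %5.1f ms/doc%n",
                              docs[i], perDocMillis(docs[i], real[i]));
        }
    }
}
```

The 2000-document run is an outlier on this estimate, but 100, 1000, and 7700 documents all land in the same 25-32 ms/doc band, i.e. the time grows with collection size the way a scan would.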

Clearly, something is wrong here, but I just don't know where to look.

I am thinking maybe I screwed up entering the index properly. I did this:
xindiceadmin ai -c /db/resumes -n idindex -p "/candidate/biographic_data/id" --maxkeysize 32 -t trimmed -v


Is that correct?

- Matt



