Hi,
Interesting! I think you're right, thanks.

I forgot to mention that I'm also using the setCaching() method to control the
number of records that come back with each call, typically setting it somewhere
between 1 and 1000.  I just did another test where I set caching to 2 every
time.  I also added code at the top of my loop that bails out after I've gone
through it twice.
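
For reference, my loop now looks roughly like this (simplified: "table" here is
just a placeholder for my existing HTable handle, whose setup I've omitted, and
the counts are the values from this particular test):

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

Scan scan = new Scan();
scan.setCaching(2);                              // rows fetched per round trip
ResultScanner scanner = table.getScanner(scan);  // table is an existing HTable
int rows = 0;
for (Result rr : scanner) {
  if (++rows > 2) {                              // bail out after two passes
    break;
  }
  // parse rr here
}
scanner.close();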

Here's what I observe:
-My bail-out code gets triggered.  I take this to mean that although I did
setCaching(2), the client makes repeated calls to get more results, so I
probably was getting scan results to the bitter end, as you diagnosed.
-My bail-out code should keep me from going to the bitter end.  I tried
bailing out at different counts (2, 100) and I'm seeing much better
throughput.  CPU still strikes me as high, but maybe I'm underestimating
what's going on under the covers (I've sketched below how I'm picturing the
loop).  I have an 8-core machine.  With 10 parallel clients doing 100-record
scans, usage across each core adds up to 60-70%.  Does that seem reasonable?
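
(For my own understanding: as I read it, the for-each over the ResultScanner
is effectively just the plain next() loop below, where each call that isn't
served out of the client-side cache triggers another round trip to the region
server.  Please correct me if that picture is wrong.)

Result rr;
while ((rr = scanner.next()) != null) {  // next() serves rows from the client
                                         // cache, fetching another batch of
                                         // setCaching() rows when it runs dry
  // process rr
}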

Thanks,
Adam


On 12/19/09 8:42 AM, "stack" <[email protected]> wrote:

> On Thu, Dec 17, 2009 at 9:36 PM, Adam Silberstein
> <[email protected]> wrote:
> 
>> Hi,
>> I wrote some simple client code to parse scan results, and it seems to be
>> causing heavy CPU usage on my machine.  I've commented out most of my code,
>> and have this left:
>> 
>> ResultScanner scanner = null;
>> //some code to set scanner
>> 
>> for (Result rr: scanner) {
>> 
>> }
>> scanner.close();
>> 
>> 
> The above will run through the scanner nexting, composing Result objects --
> see ClientScanner#next -- all the way down to its bitter end.  Is this what's
> burning your CPU?
> 
> St.Ack
> 
>> If I comment out the loop, then the CPU problems go away.  If I keep it but
>> have nothing inside, I see the problem.
>> 
>> I saw a bug mentioned in the release notes that talks about a memory leak in
>> scan (http://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&styleName=Html&version=12314307).
>> But it appears that it was fixed in 0.20.2, which is the version I am using.
>> 
>> Has anyone else noticed this, and have any suggestions?
>> 
>> Thanks,
>> Adam
>> 
