Hi Paul,

Somendra Paul wrote:
Hi Armin
This SQL query we have does not pull data from only one table; it is a very big SQL query that looks like the following:

select a,b,c from x,y,z where ...
union all
select a,b,c from x,y,z where ...
union all
select a,b,c from x,y,z

This is a generic example, but our SQL is along similar lines. Will the report query solution you mentioned solve this kind of issue?

The report query can handle all OJB Criteria-based queries, and from 1.0.5 it is also possible to execute generic SQL via report queries (for details see the 1.0.5 documentation).
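A rough sketch of what that could look like in your case (hedged: this assumes the SQL-based query factory method QueryFactory.newQuery(Class, String), and whether the report-query iterator accepts such a query exactly like this is what the 1.0.5 query guide covers; MyClass and the column names are placeholders):

import java.util.Iterator;
import org.apache.ojb.broker.PersistenceBroker;
import org.apache.ojb.broker.PersistenceBrokerFactory;
import org.apache.ojb.broker.query.Query;
import org.apache.ojb.broker.query.QueryFactory;

String sql = "select a,b,c from x,y,z where ..."
        + " union all select a,b,c from x,y,z where ..."
        + " union all select a,b,c from x,y,z";

PersistenceBroker broker = PersistenceBrokerFactory.defaultPersistenceBroker();
try
{
    // run the hand-written SQL as a report query, so each row comes back
    // as an Object[] instead of a fully materialized, cached persistent object
    Query query = QueryFactory.newQuery(MyClass.class, sql);
    Iterator rows = broker.getReportQueryIteratorByQuery(query);
    while (rows.hasNext())
    {
        Object[] row = (Object[]) rows.next();
        // row[0] = a, row[1] = b, row[2] = c
    }
}
finally
{
    broker.close();
}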

regards,
Armin

We might think of moving to 1.0.5, depending on what you say. Thanks and Regards,
Somendra Paul.

----- Original Message ----
From: Armin Waibel <[EMAIL PROTECTED]>
To: OJB Users List <[email protected]>
Sent: Sunday, September 21, 2008 9:05:06 PM
Subject: Re: Handling Large Resultset in Oracle

Hi Paul,

Somendra Paul wrote:
Hi All, I am using OJB 1.4 against Oracle 10g with classes12.jar. We are
trying to export data from the DB using a SQL query that returns
600000 records. What we found is that when we call
broker.getIteratorByQuery(query) and iterate over the results, after
iterating over about 300000 records the VM grows rapidly and
the entire program crashes with OOM errors, whereas a
simple JDBC program extracted the entire 600K
records using only about 160 MB of memory, while the OJB
run takes more than 1.5 GB before crashing.

Does anyone know how to solve this memory issue when processing a large
result set from Oracle?

One suggestion in the OJB archives (for ProgressSQL) is to use
fetchSize=<somevalue>; will that solve this issue?


The problem could be the cache. Depending on the cache implementation used, OJB keeps all materialized objects (or copies of these objects) in memory. Most cache implementations use soft references (so OOM errors shouldn't occur), but maybe your objects have complex relationships or your layer holds hard references to the materialized objects.
You can try to evict the cache while iterating over the result set.
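For example, something along these lines (just a sketch; clearCache() wipes the whole broker-level cache, and the batch size of 10000 is an arbitrary illustrative number):

Iterator iter = broker.getIteratorByQuery(query);
int count = 0;
while (iter.hasNext())
{
    Object obj = iter.next();
    // ... export/process obj ...
    if (++count % 10000 == 0)
    {
        // evict the materialized objects so they become garbage-collectable
        broker.clearCache();
    }
}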

If you don't rely on the materialized Java objects, you can use a report query
http://db.apache.org/ojb/docu/guides/query.html#Report+Queries
to iterate over the result set (it returns a collection of arrays). This bypasses the cache and should result in memory use comparable to a plain JDBC query.
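For example (a sketch; MyClass and the attribute names a, b, c stand in for your mapped class and its fields):

import java.util.Iterator;
import org.apache.ojb.broker.query.Criteria;
import org.apache.ojb.broker.query.QueryFactory;
import org.apache.ojb.broker.query.ReportQueryByCriteria;

Criteria crit = new Criteria();
// add the where-conditions here, e.g. crit.addEqualTo("b", someValue);
ReportQueryByCriteria query = QueryFactory.newReportQuery(MyClass.class, crit);
query.setAttributes(new String[]{"a", "b", "c"});

// each element is an Object[] holding the selected columns; no full
// persistent objects are materialized or cached
Iterator rows = broker.getReportQueryIteratorByQuery(query);
while (rows.hasNext())
{
    Object[] row = (Object[]) rows.next();
    // write row[0], row[1], row[2] to the export
}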

The upcoming OJB 1.0.5 has enhanced query features and supports limit and pagination of query results (Oracle is supported) - 1.0.5rc1:
http://www.mail-archive.com/ojb-user%40db.apache.org/msg16078.html
The query guide in the included documentation shows how to use these features.
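A paged export with 1.0.5 could then look roughly like this (a sketch; it assumes setStartAtIndex/setEndAtIndex behave as described in the query guide, and the page size is an arbitrary number):

int pageSize = 50000;
for (int start = 1; ; start += pageSize)
{
    QueryByCriteria query = QueryFactory.newQuery(MyClass.class, crit);
    // 1-based row range to fetch for this page
    query.setStartAtIndex(start);
    query.setEndAtIndex(start + pageSize - 1);

    Iterator page = broker.getIteratorByQuery(query);
    if (!page.hasNext())
    {
        break;
    }
    while (page.hasNext())
    {
        Object obj = page.next();
        // process one page worth of objects
    }
    // drop the page's objects from the cache before fetching the next one
    broker.clearCache();
}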

regards,
Armin


Thanks and Regards, Somendra Paul.




---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
