XCC responses come back from the server via a network
socket connection, in sequence.
When an XCC request runs in caching mode (the default),
XCC itself reads the full response off the socket and
holds it all in memory. That lets you access the items
in the sequence in any order, because they are all
available all the time. Each time you ask for an
InputStream for an item, XCC makes a new, memory-based
stream for it.
But when operating in non-cached mode, XCC does not
pre-read the data. When you ask for an InputStream for
a non-cached item, XCC gives you a stream that reads
directly from the network socket. Because the result
items are sent from the server in sequence over that
socket, only one item can be read at a time, and each
item can be read only once.
In non-cached mode, calling next() on a ResultSequence
tells XCC that you want to skip to the next result item
in the stream. That requires reading and throwing away any
remaining data for the current result item.
What you're trying to do here is to read each of the
items in a streaming result sequence randomly. It's
possible to do that for a cached ResultSequence, but not
for a streaming (non-cached) one.
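So with a non-cached sequence, the discipline is: fully consume each item's stream before advancing. Here's a rough sketch of that pattern. The XCC calls need a live server, so they're shown only in comments; the drain helper itself is plain Java:

```java
import java.io.IOException;
import java.io.InputStream;

public class DrainBeforeNext {
    // Read a stream to exhaustion, returning the byte count. With a
    // non-cached ResultSequence you would consume each item's stream
    // like this (or hand it somewhere that does) before calling next().
    static long drain(InputStream in) throws IOException {
        long total = 0;
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
        }
        return total;
    }

    // Sketch of the XCC side (not compiled here; session/request
    // setup omitted):
    //
    //   ResultSequence rs = session.submitRequest(request);
    //   try {
    //       while (rs.hasNext()) {
    //           InputStream in = rs.next().asInputStream();
    //           long bytes = drain(in);  // consume fully...
    //           // ...before the next hasNext()/next() call
    //       }
    //   } finally {
    //       rs.close();  // frees the connection for re-use
    //   }
}
```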
You have a couple of options here. One is to do a
separate query for each result item, so you only have
one item per response. If your result nodes are huge,
the overhead of running multiple queries may not be
significant compared to the data transfer time.
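Something along these lines, assuming your items correspond to documents you can address by URI (the doc() query form here is just an illustration; adapt it to however your original query selects its result nodes):

```java
public class OneQueryPerItem {
    // Build a one-document query for a given URI. Hypothetical
    // example; substitute the selection logic your real query uses.
    static String docQuery(String uri) {
        return "doc(\"" + uri + "\")";
    }

    // Sketch of the per-item request loop (not compiled here):
    //
    //   for (String uri : uris) {
    //       Request req = session.newAdhocQuery(docQuery(uri));
    //       ResultSequence rs = session.submitRequest(req);
    //       try {
    //           InputStream in = rs.next().asInputStream();
    //           // One item per response, so nothing else is queued
    //           // behind it on the socket. Consume 'in' fully
    //           // before rs.close().
    //       } finally {
    //           rs.close();
    //       }
    //   }
}
```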
Another is to write the results to temporary local
disk files, then open InputStreams for each of them.
If you do this, make sure that you take the appropriate
steps to delete them on close.
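A minimal spooling helper might look like this (deleteOnExit() is only a backstop; delete each file explicitly once its response has been written):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class SpoolToDisk {
    // Copy one result item's stream to a temp file and return it.
    // The caller can then open a fresh FileInputStream per file and
    // hand those to Wink in any order.
    static File spool(InputStream in) throws IOException {
        File tmp = File.createTempFile("xcc-item-", ".bin");
        tmp.deleteOnExit();
        try (OutputStream out = new FileOutputStream(tmp)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        return tmp;
    }

    // Usage sketch against XCC (not compiled here):
    //
    //   List<File> files = new ArrayList<File>();
    //   while (rs.hasNext()) {
    //       files.add(spool(rs.next().asInputStream()));
    //   }
    //   rs.close();  // everything is on disk now; socket is free
}
```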
Also, in any case, when using a non-cached ResultSequence
you should call close() on it when you're finished so
that the connection can be re-used. This can be tricky
if you're handing XCC's InputStream to some other object.
If you allow the owning ResultSequence to fall out of
scope (be garbage collected), it will close all open
InputStreams when it's finalized, but relying on
finalization is unpredictable. It may be a more robust
solution to use temp files.
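If you do hand the stream to another object, one way to keep close() deterministic is to wrap it so that closing the stream also closes its owner. This is a generic sketch against java.io.Closeable, not XCC-specific; you would adapt it so that closing the wrapper invokes your ResultSequence's close():

```java
import java.io.Closeable;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Wraps an InputStream so that closing it also closes an owning
// resource. The idea: when Wink finishes writing the entity and
// closes the stream, the underlying sequence gets released too.
public class ClosingStream extends FilterInputStream {
    private final Closeable owner;

    public ClosingStream(InputStream in, Closeable owner) {
        super(in);
        this.owner = owner;
    }

    @Override
    public void close() throws IOException {
        try {
            super.close();
        } finally {
            owner.close();  // always release the owner, even if
                            // closing the stream itself throws
        }
    }
}
```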
Good luck.
On Aug 19, 2010, at 9:59 AM, srinivasan venkat wrote:
> Hi,
>
> I am currently using ResultSequence class in com.marklogic.xcc,
>
> I am trying to read a sequence of multiple InputStreams, as:
>
> ResultSequence resultSeq;
> while (resultSeq.hasNext()) {
> InputStream obj = resultSeq.next().asInputStream();
> }
>
> But I am getting a "Stream closed" exception every time.
>
> My aim is to hook this stream object directly to the Apache Wink layer in a
> RESTful web service implementation, as:
> Response.status(200).entity(obj).type("multipart/mixed").build();
> But Stream Closed exception is caught every time.
>
> According to api of hasNext(),
>
> “Note that if the current item is large (node, binary, text) and has not yet
> been fully consumed by the client, it's value may be flushed and lost as the
> result stream is positioned to the next item.”
>
> My content is a huge node, so its value is getting flushed and lost every
> time I use the hasNext() API.
>
> Is there any alternative way to stream multiple huge nodes from MarkLogic,
> hooking them directly to Wink's response object as a sequence of
> InputStreams?
>
>
> Thanks,
> Srinivasan.V
>
> _______________________________________________
> General mailing list
> [email protected]
> http://developer.marklogic.com/mailman/listinfo/general
---
Ron Hitchens {mailto:[email protected]} Ronsoft Technologies
+44 7879 358 212 (voice) http://www.ronsoft.com
+1 707 924 3878 (fax) Bit Twiddling At Its Finest
"No amount of belief establishes any fact." -Unknown