Art,

OK, this may end up being a large response and we may
want to break up the individual issues into separate threads.
So here it goes.


Handling of multiple result sets.

The code is fairly brute force: it just assigns m_FirstRowIdx
to null and extracts the metadata where you do. The main problem
is that, with streaming enabled or disabled, you leave behind all the
previous nodes and their IDs and awkwardly create a new document.
For example, the first result set will have a root node with an ID of 0;
the second result set will rebuild the document (assuming, say, that 100
nodes were consumed previously) with a root node with an ID of 101.
Node IDs 0-100 are still valid IDs, but they are no longer in the DTM
tree. Not to mention that the metadata is handled by reference to
conserve memory, so even if the nodes were still in the tree, result
set 1 would magically have its metadata changed when result set 2 was
traversed, and if result set 1 did not match column counts, an
out-of-bounds exception would surely occur. I think if you tried this
code in non-streaming mode it would fail if you tried to traverse the
document more than once. In a strange way you are passing back multiple
SQLDocuments, just in a single instance, where the existing document
changes to a new document as you traverse ResultSets.

Changing the output format.

The metadata section is a sibling of <row-set> because we were
using the <row-set> element to define different behavior. Accessing
a node inside <row-set> may or may not cause the ResultSet to
be advanced; outside of <row-set>, the ResultSet is not touched.
In streaming mode, the metadata can be traversed many times but the
data in <row-set> can only be traversed once; we use this mechanism
to control that.
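If it helps to have the shape in front of us, the document SQLDocument
builds looks roughly like the following (column names are illustrative
and the column-header attributes are abbreviated from memory):

```xml
<sql>
  <metadata>
    <!-- one column-header element per column in the ResultSet -->
    <column-header column-label="PROD_ID" column-type="INTEGER"/>
    <column-header column-label="PROD_NAME" column-type="VARCHAR"/>
  </metadata>
  <row-set>
    <!-- in streaming mode this row's nodes are reused as the
         ResultSet advances; in non-streaming mode rows accumulate -->
    <row>
      <col>1</col>
      <col>Widget</col>
    </row>
  </row-set>
</sql>
```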

Without an example, I tried to work out what you actually
get as a return format; I think my initial assumptions were incorrect.
The SQLDocument initially builds the document to be complete
in structure, with an empty <row-set>. As nodes inside the <row-set>
are traversed, the node data is reused in streaming mode, and row
elements are accumulated in non-streaming mode.
With your changes, as you traverse ResultSets you effectively restart
filling the DTM with new nodes, starting with a new root node. At the
point you traverse a node in the <row-set>, you have a new root node
and new metadata, but the DTM still holds all the old data, which is
overwritten as you traverse.

Having multiple <sql> elements will not work because you would have
multiple root elements in the document, which is invalid XML.

Multiple SQLDocuments seems like it will work, and it won't require
any code changes to SQLDocument, since SQLDocument takes
a ResultSet in its constructor; all that is needed is an added
method on XConnection.

The proposed XSL would be:
<xsl:variable name="table1" select="sql:query(...)" />
<xsl:variable name="table2" select="sql:getNextResultSet()" />

Question: I have never had the need to return multiple ResultSets.
Can you fetch all the result sets without processing them and still
maintain their individual contexts? That is, if you had two result
sets, could you grab them as two different objects, then process a row
in RS1 and then a row in RS2, without losing all the data in RS1?
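For what it's worth, JDBC 3.0 added Statement.getMoreResults(int) for
exactly this case: passing KEEP_CURRENT_RESULT advances to the next
ResultSet without closing the current one. Whether it actually works is
driver-dependent, so it should be guarded by
DatabaseMetaData.supportsMultipleOpenResults(). A sketch (method and
names are mine, not from the patch):

```java
import java.sql.*;

public class MultiResultSets {
    // Sketch only: interleave reads from two result sets of one call.
    public static void interleave(Connection con, String call)
            throws SQLException {
        if (!con.getMetaData().supportsMultipleOpenResults()) {
            // Default behavior: fetching RS2 closes RS1.
            throw new SQLException("driver cannot keep RS1 open");
        }
        CallableStatement stmt = con.prepareCall(call);
        stmt.execute();
        ResultSet rs1 = stmt.getResultSet();
        // KEEP_CURRENT_RESULT leaves rs1 open while advancing to rs2.
        stmt.getMoreResults(Statement.KEEP_CURRENT_RESULT);
        ResultSet rs2 = stmt.getResultSet();
        // Rows can now be read from rs1 and rs2 in any order.
        while (rs1.next() && rs2.next()) {
            // process one row from each result set per iteration
        }
        rs2.close();
        rs1.close();
        stmt.close();
    }
}
```

With the default getMoreResults() (no argument), the current ResultSet
is closed, which is why the question matters for the multiple-document
design.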


Parameter passing.

I don't quite understand how your parameter passing is more efficient
and/or cleaner in implementation.
i.e.
<xsl:variable name="groupid" select="25" />
<xsl:variable name="I_result" select="0" />
<xsl:variable name="I_server" select="''" />
<xsl:variable name="table" select="sql:query($products,
?[integer=I_result] = CALL testproc(?[integer=groupid],
?[varchar=I_server OUT]))" />

is any different than

<xsl:value-of select="sql:addParameterWithType('25', 'long')" />
<xsl:value-of select="sql:addParameterWithType('0', 'long')" />
<xsl:value-of select="sql:addParameterWithType('', 'string')" />
<xsl:variable name="table" select="sql:query($products, ? = CALL
testproc(?, ? OUT))" />

Other than the fact that the latter does not allow the parameter to be
modified on output. Modifying parameters, especially behind the scenes,
is a little more than a "minor infraction": it violates one of the
basic tenets of XSL variable handling.

Also, I have not seen your code. Are you parsing the string to remove
all the ?[type=value] tokens and then passing it into CallableStatement
to have it reparse the string and do variable replacement, or are you
doing variable replacement while parsing the string the first time? If
the latter, are you handling all the escape characters, such as "'"?
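To make the escaping concern concrete, here is a minimal sketch of what
a single-pass rewrite would have to do: strip each ?[type=value] token
down to a plain JDBC "?" while copying single-quoted SQL literals
verbatim (where '' is the escaped quote, so a ?[...] inside a literal
must not be treated as a token). The class and method names are mine,
purely for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class ParamScanner {
    // Rewrites sql to use plain "?" placeholders and collects the raw
    // "type=value" token bodies, in order, into tokens.
    public static String rewrite(String sql, List<String> tokens) {
        StringBuilder out = new StringBuilder();
        int i = 0, n = sql.length();
        while (i < n) {
            char c = sql.charAt(i);
            if (c == '\'') {
                // Copy a quoted literal verbatim; '' is an escaped quote.
                out.append(c);
                i++;
                while (i < n) {
                    char q = sql.charAt(i);
                    out.append(q);
                    i++;
                    if (q == '\'') {
                        if (i < n && sql.charAt(i) == '\'') {
                            out.append('\'');  // escaped quote: stay inside
                            i++;
                        } else {
                            break;             // literal closed
                        }
                    }
                }
            } else if (c == '?' && i + 1 < n && sql.charAt(i + 1) == '[') {
                int close = sql.indexOf(']', i + 2);
                if (close < 0)
                    throw new IllegalArgumentException("unterminated ?[");
                tokens.add(sql.substring(i + 2, close));
                out.append('?');  // plain placeholder for CallableStatement
                i = close + 1;
            } else {
                out.append(c);
                i++;
            }
        }
        return out.toString();
    }
}
```

Any implementation that regex-strips the tokens without tracking quote
state would corrupt statements whose literals happen to contain "?[".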

To me, the new way only seems to limit the mechanisms by which data,
along with its type, can be supplied. For example, you lose the ability
to add data from an element tree. More variables are assigned than
necessary, and extra parsing needs to occur.

Regards
John G
