Re: Question regarding DERBY-4208 Parameters ? with OFFSET and/or FETCH

2009-07-09 Thread Lance J. Andersen



Kathey Marsden wrote:

Mike Matrigali wrote:

I would rather wait for an approved standard so that we don't later
get caught with apps depending on a non-standard behavior that we
might want to change in the future to meet a standard.

From the provided info it does not even look like there is a defacto
standard adopted by multiple db's.
I  tend to agree that it is better to wait for a standard. This still 
seems all over the place for the different database product 
implementations and not yet even in a draft standard.

Well, it is in the SQL:2008 standard:

<result offset clause> ::=
    OFFSET <offset row count> { ROW | ROWS }
<fetch first clause> ::=
    FETCH { FIRST | NEXT } [ <fetch first row count> ] { ROW | ROWS } ONLY
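For illustration (assuming a hypothetical table T with a column ID), the two clauses combine like this; whether the row counts may instead be dynamic ? parameters is exactly the question at hand:

```sql
-- Skip the first 100 rows of the ordered result, then return at most 20.
SELECT *
FROM T
ORDER BY ID
OFFSET 100 ROWS
FETCH NEXT 20 ROWS ONLY
```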


So why not support it?


Personally, if you can easily support some of the other variants, I 
would do that as well.  Just because something is not in an official 
standard, it indirectly becomes a standard when implemented by multiple 
vendors...


Don't get me wrong, standards are important, but so is making 
applications easier to use and migrate from one platform to another.


.02

Regards
Lance


Kathey



Re: Question regarding DERBY-4208 Parameters ? with OFFSET and/or FETCH

2009-07-09 Thread Lance J. Andersen



Rick Hillegas wrote:
I think that this discussion has gotten seriously off-track. It is the 
intent of the standard that the offset and window length values be 
parameterized. This is clear from the standard language and I 
confirmed this with the SQL committee in May. For the record, Lance 
and I sit on the SQL committee as alternate delegates from Sun. 
Dynamic ? parameters are Derby's model for specifying parameters.
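As a sketch of what parameterization would enable (assuming an open Derby Connection `conn` and a hypothetical table T; this is the extension under discussion, not yet standard SQL):

```java
// Prepare once, then scroll the window by rebinding the parameters --
// no need to rebuild and re-prepare the SQL text for each page.
PreparedStatement ps = conn.prepareStatement(
    "SELECT * FROM T ORDER BY ID OFFSET ? ROWS FETCH NEXT ? ROWS ONLY");
ps.setInt(1, 100); // offset
ps.setInt(2, 20);  // window length
ResultSet rs = ps.executeQuery();
```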


I believe this is a serious usability defect of our OFFSET/FETCH 
implementation. As it stands today, you can only scroll one of these 
windows forward by sacrificing the performance benefits of prepared 
statements. It would be a shame if this feature had to remain unusable 
until the next rev of the standard in 2011. If the committee approves 
some other language at that time, then we can implement that extension.


I agree with you, Rick, and I feel that we should implement this feature.


If people wish to veto this proposal, then I would ask them to propose 
an alternative solution which makes this feature usable and which they 
believe fits more comfortably within the intention of the standard.


no veto from me, I am for it.

-Lance


Thanks,
-Rick

Dag H. Wanvik wrote:

Hi folks,

I have a working patch sitting on DERBY-4208. I am wondering if this
is a fix we should consider including for 10.5.2?

The pro argument is that this is a usability issue, and to the extent
it forces the app to construct SQL on the fly, makes the app more
vulnerable to injection attacks, at least in theory. A user has asked
for it.

On the contra side, we have the fact that dynamic arguments are not
allowed by the SQL standard for this construct, at least not yet.

Personally I think it's a nice extension.

Thoughts?

Dag
  




Re: jsr169 build

2008-04-11 Thread Lance J. Andersen

Hi Rick,

JSR 169 removes some interfaces, and some methods on interfaces (for 
example, java.sql.Array, ResultSet.getArray(), and Connection.getTypeMap()).


Rick Hillegas wrote:
I am trying to figure out what is the difference between jsr169 and 
jdbc3 which requires that we use the small platform jars in order to 
build Derby's J2ME support. I have tried the following experiment on 
the four source files which comprise our jsr169 support (the 
classnames which end in 169):


1) I made the 3 JDBC classes (the jsr169 versions of ResultSet, 
CallableStatement, and PreparedStatement) extend our JDBC3 versions of 
these classes.


2) Then I compiled Derby with my jsr169compile.classpath pointing at 
my small device jars.


This compilation succeeded. This says to me that the optional small 
device compilation is not going to catch situations where JDBC3 
methods leak into our jsr169 implementation.

True, but the TCK should catch that for JSR 169 via the signature tests.


I then ran a further experiment on top of these changes:

3) I changed jsr169compile.classpath to point at the jdk1.4 jars instead.

This compilation also succeeded. I am wondering what would break if we 
simply compiled our J2ME support using the jdk1.4 compiler as 
described above. I'm attaching the diff for (1) and (2). I'd be 
curious to learn what happens when this patch is applied and the tests 
are run on the small device platform.




Thanks,
-Rick


Re: [VOTE] V. Narayanan as a committer

2008-04-03 Thread Lance J. Andersen

+1

Dyre Tjeldvoll wrote:

Rick Hillegas wrote:
Please vote on whether we should make V. Narayanan a Derby committer. 
The polls close at 5:00 pm San Francisco time on Thursday April 10.


For several years, Narayanan has contributed valuable features and 
fixes to Derby, starting with JDBC4 and continuing through recent 
work on Derby replication. Narayanan eagerly seeks the community's 
advice and thoroughly responds to feedback. He also fields issues on 
the user list--there his responses are detailed and respectful. With 
commit privilege he will be even more effective.


Hear, hear, +10!

Dyre


Re: [jira] Created: (DERBY-3573) Argument checking for ResultSet.setFetchSize(int) is incorrect

2008-03-27 Thread Lance J. Andersen
Yes, it was an EG decision to correct the javadocs for setFetchSize(). 
If there is no limit specified via setMaxRows(), getMaxRows() 
returns 0, so the old condition

0 <= rows <= this.getMaxRows()

can be problematic depending on the implementation.

Also, setFetchSize() is a hint and can be ignored.


Regards
Lance

Dyre Tjeldvoll (JIRA) wrote:

Argument checking for ResultSet.setFetchSize(int) is incorrect
--

 Key: DERBY-3573
 URL: https://issues.apache.org/jira/browse/DERBY-3573
 Project: Derby
  Issue Type: Bug
  Components: JDBC, Network Client, Newcomer
Affects Versions: 10.3.2.1, 10.3.1.4
Reporter: Dyre Tjeldvoll
Priority: Minor


The requirement that the argument to ResultSet.setFetchSize(int) be less than 
Statement.getMaxRows() was dropped in Java 6/JDBC 4, (it is not present in the 
Java 6 javadoc, but can still be seen in the Java 5 javadoc).

The reason the client driver doesn't throw an exception in this case is 
that am.ResultSet incorrectly checks against ResultSet.maxRows_ and NOT 
am.Statement.getMaxRows(). So when am.Statement.setMaxRows(int) is called after 
a result set has already been created, am.ResultSet.setFetchSize(int) will check 
against a stale value.

The question is what to do about this. The client driver clearly has a bug, but 
should we fix it by duplicating the old behavior found in the embedded driver, 
or change both drivers to comply with latest spec which allows any non-negative 
value as argument to ResultSet.setFetchSize(int)?
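A minimal sketch of the JDBC 4 rule (the class and method names here are hypothetical, not Derby's actual client code): the only remaining constraint is that the fetch size be non-negative.

```java
import java.sql.SQLException;

public class FetchSizeCheck {
    // JDBC 4 semantics: reject only negative values; there is no longer
    // any comparison against Statement.getMaxRows().
    static void checkFetchSize(int rows) throws SQLException {
        if (rows < 0) {
            throw new SQLException("Invalid fetch size: " + rows);
        }
    }

    public static void main(String[] args) throws SQLException {
        checkFetchSize(0);     // legal
        checkFetchSize(1000);  // legal even if maxRows is smaller
        try {
            checkFetchSize(-1);
            throw new AssertionError("expected SQLException");
        } catch (SQLException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```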

  


Re: [VOTE] John H Embretsen as a Derby committer

2008-03-26 Thread Lance J. Andersen

+1

Manjula Kutty wrote:

+1
 
Manjula


 
On 3/26/08, *Knut Anders Hatlen* [EMAIL PROTECTED] wrote:


Daniel John Debrunner [EMAIL PROTECTED] writes:

 John is actively involved on both the derby-dev and derby-user lists
 and fully engages in open development. He has had a number of
patches
 committed, most recently taking the stalled JMX work and getting it
 into a shape where it could be committed to allow others to get
 involved.

 Vote closes 2007-04-02 16:00 PDT

+1

--
Knut Anders




--
Thanks,
Manjula. 


Re: [VOTE] Kim Haase as a committer

2008-03-03 Thread Lance J. Andersen

+1

Rick Hillegas wrote:

Rick Hillegas wrote:
Please vote on whether we should make Kim Haase a committer. The vote 
will close at 5:00 pm San Francisco time on Monday March 10.


Kim has made an outstanding contribution to Derby's documentation 
effort. With commit privileges, she will be even more effective.


Regards,
-Rick

+1



Re: Searching the Derby Documentation

2007-11-14 Thread Lance J. Andersen
Perhaps this is an indication that we need to revisit the layout of the 
manuals to make them easier to use, and treat this as a high priority 
going forward.  If the documentation is difficult to navigate, it can 
be a turnoff to users.


Regards
lance

John Embretsen wrote:
I know navigating the Derby manuals and finding what you are looking 
for can be a gruesome experience for both users and developers.

Various tricks can be used to make this easier, e.g. searching all 
manuals combined into one PDF (such as those found here:
http://dbtg.thresher.com/derby/test/DerbyDocs/index.html ), or using 
advanced Google searches.

Inspired by David's blog post (and comments) at
http://weblogs.java.net/blog/davidvc/archive/2007/06/searching_the_d.html
I created my own custom (Google) search engine for searching the Derby 
docs and public API. It's available for everyone to use; the URL is

http://www.google.com/coop/cse?cx=013165901514945153857%3A6lvlcohwpjo

(unfortunately not so easy to remember).

I've made some refinements for searching a particular Derby version, or a
particular manual in the doc trunk. There is currently a max limit of 16
refinements.

It's possible to volunteer to contribute to this search engine for 
those who feel inspired, but maybe it would be better to somehow create 
something similar that is owned by Apache, and include it on our web page?

Anyway, I hope this is useful to someone.




Re: SQLNonTransientConnectionExceptions and SESSION_SEVERITY exceptions that are not '08XXX' spec clarification

2007-09-17 Thread Lance J. Andersen



Kathey Marsden wrote:

Lance J. Andersen wrote:



Kathey Marsden wrote:
There was a discussion started in DERBY-401 on this but I thought I 
would submit it as a separate thread.  The JDBC 4.0 spec says in 
section 8.5.1..


A NonTransient SQLException must extend the class 
SQLNonTransientException. A NonTransient SQLException would be thrown 
in instances where a retry of the same operation would fail unless the 
cause of the SQLException is corrected. After a NonTransient 
SQLException occurs, the application can assume that the connection is 
still valid. For SQLState class values that indicate non-transient 
errors but which are not specified in the following table, an 
implementation may throw an instance of the class 
SQLNonTransientException.

TABLE 8-1 specifies which NonTransientSQLException subclass must be 
thrown for a given SQLState class value:

TABLE 8-1 NonTransientSQLException Subclasses
SQL State Class    SQLNonTransientException Subclass
...
08                 SQLNonTransientConnectionException
...

Derby has quite a few exceptions which are SESSION_SEVERITY or 
greater which are not SQLState class '08'.  These exceptions cause 
loss of connection by the application.  There is a list at the 
bottom of this mail.  I thought all of these should be 
SQLNonTransientConnectionExceptions; 
SQLNonTransientConnectionException aligns with SQLState class value 
08 from 23.1, table 32 of the SQL:2003 spec.

Thank you Lance for looking at this.
The DRDA Spec section 8.1 defines SQLState mappings to 58XXX which 
don't fall into table 32 and would seem to conflict with the 08006 
exceptions.  Any thoughts on what to do with these?
I have not looked at the DRDA spec, but I was puzzled by the mapping 
based on table 32.  I really cannot do specific mappings in JDBC for 
specs like DRDA, or I would have to do the same for other protocols 
such as TDS.


I just do not see where DRDA came up with 58 as a valid class value.

Of course, it could be buried in another chapter of SQL2003 in another 
document.


regards
lance


Kathey






Re: SQLNonTransientConnectionExceptions and SESSION_SEVERITY exceptions that are not '08XXX' spec clarification

2007-09-17 Thread Lance J. Andersen



Kathey Marsden wrote:

Lance J. Andersen wrote:



Kathey Marsden wrote:
There was a discussion started in DERBY-401 on this but I thought I 
would submit it as a separate thread.  The JDBC 4.0 spec says in 
section 8.5.1..


A NonTransient SQLException must extend the class 
SQLNonTransientException. A NonTransient SQLException would be thrown 
in instances where a retry of the same operation would fail unless the 
cause of the SQLException is corrected. After a NonTransient 
SQLException occurs, the application can assume that the connection is 
still valid. For SQLState class values that indicate non-transient 
errors but which are not specified in the following table, an 
implementation may throw an instance of the class 
SQLNonTransientException.

TABLE 8-1 specifies which NonTransientSQLException subclass must be 
thrown for a given SQLState class value:

TABLE 8-1 NonTransientSQLException Subclasses
SQL State Class    SQLNonTransientException Subclass
...
08                 SQLNonTransientConnectionException
...

Derby has quite a few exceptions which are SESSION_SEVERITY or 
greater which are not SQLState class '08'.  These exceptions cause 
loss of connection by the application.  There is a list at the 
bottom of this mail.  I thought all of these should be 
SQLNonTransientConnectionExceptions; 
SQLNonTransientConnectionException aligns with SQLState class value 
08 from 23.1, table 32 of the SQL:2003 spec.

Thank you Lance for looking at this.
The DRDA Spec section 8.1 defines SQLState mappings to 58XXX which 
don't fall into table 32 and would seem to conflict with the 08006 
exceptions.  Any thoughts on what to do with these?
Perhaps it is worth going back to DRDA and asking them where/how they 
came up with that class value?


As far as what to do: unless you decide to map the DRDA states to the 
appropriate SQL class value, I would return a SQLException.  Also, we 
probably should not be returning this value via SQLException.getSQLState() 
unless we can figure out how/where DRDA is getting the SQL class value.


Kathey






Re: SQLNonTransientConnectionExceptions and SESSION_SEVERITY exceptions that are not '08XXX' spec clarification

2007-09-17 Thread Lance J. Andersen



Kathey Marsden wrote:
There was a discussion started in DERBY-401 on this but I thought I 
would submit it as a separate thread.  The JDBC 4.0 spec says in 
section 8.5.1..


A NonTransient SQLException must extend the class 
SQLNonTransientException. A NonTransient SQLException would be thrown 
in instances where a retry of the same operation would fail unless the 
cause of the SQLException is corrected. After a NonTransient 
SQLException occurs, the application can assume that the connection is 
still valid. For SQLState class values that indicate non-transient 
errors but which are not specified in the following table, an 
implementation may throw an instance of the class 
SQLNonTransientException.

TABLE 8-1 specifies which NonTransientSQLException subclass must be 
thrown for a given SQLState class value:

TABLE 8-1 NonTransientSQLException Subclasses
SQL State Class    SQLNonTransientException Subclass
...
08                 SQLNonTransientConnectionException
...

Derby has quite a few exceptions which are SESSION_SEVERITY or greater 
which are not SQLState class '08'.  These exceptions cause loss of 
connection by the application.  There is a list at the bottom of this 
mail.  I thought all of these should be 
SQLNonTransientConnectionExceptions (SQLNonTransientConnectionException 
aligns with SQLState class value 08 from 23.1, table 32 of the SQL:2003 
spec) because they cause the loss of the connection, but according to 
the spec I suppose they should be just SQLNonTransientExceptions (right 
now they are thrown as regular SQLExceptions).
The higher-level categories are there for grouping and to 
allow programmers to just catch the higher-level exception if they do 
not care about specifics.  There are multiple subclasses for connection 
failures defined in table 32; it sounds like you might not be reporting 
the correct SQLState class and subclass values today.


You need to err on the side of caution if you just start mapping your 
errors to non-predefined JDBC SQLException subclasses based on the 
class value, as you could have to change them later if the JDBC spec 
adds them to a different or new subcategory.
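As an illustration of that caution (a hypothetical mapper, not Derby's actual code): map only the standard class value 08 to the predefined subclass, and fall back to plain SQLException for implementation-defined classes such as DRDA's 58.

```java
import java.sql.SQLException;
import java.sql.SQLNonTransientConnectionException;

public class StateMapper {
    // Return the JDBC 4 subclass only for the standard connection
    // class value "08"; anything else stays a plain SQLException so a
    // later spec revision cannot invalidate the choice.
    static SQLException forState(String sqlState, String message) {
        if (sqlState != null && sqlState.startsWith("08")) {
            return new SQLNonTransientConnectionException(message, sqlState);
        }
        return new SQLException(message, sqlState);
    }

    public static void main(String[] args) {
        System.out.println(forState("08006", "connection terminated")
                .getClass().getSimpleName());
        System.out.println(forState("58009", "DRDA protocol error")
                .getClass().getSimpleName());
    }
}
```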


With JDBC 4.0 do you still need to look at error codes to determine if 
the connection is lost or is the expectation that users will use 
isValid() to determine the status of the connection?

It depends on what you are doing; there is no one-size-fits-all answer here.


Kathey




Below is the list:


  {{08000,Connection closed by unknown interrupt.,4},
{08001,A connection could not be established because 
the security token is larger than the maximum allowed by the network 
protocol.,4},
{08001,A connection could not be established because 
the user id has a length of zero or is larger than the maximum allowed 
by the network protocol.,4},
{08001,A connection could not be established because 
the password has a length of zero or is larger than the maximum 
allowed by the network protocol.,4},
{08001,Required Derby DataSource property {0} not 
set.,4},
{08001,{0} : Error connecting to server {1} on port {2} 
with message {3}.,4},

{08001,SocketException: '{0}',4},
{08001,Unable to open stream on socket: '{0}'.,4},
{08001,User id length ({0}) is outside the range of 1 
to {1}.,4},
{08001,Password length ({0}) is outside the range of 1 
to {1}.,4},

{08001,User id can not be null.,4},
{08001,Password can not be null.,4},
{08001,A connection could not be established because 
the database name '{0}' is larger than the maximum length allowed by 
the network protocol.,4},

{08003,No current connection.,4},
{08003,getConnection() is not valid on a closed 
PooledConnection.,4},
{08003,Lob method called after connection was 
closed,4},
{08003,The underlying physical connection is stale or 
closed.,4},

{08004,Connection refused : {0},4},
{08004,Connection authentication failure occurred.  
Reason: {0}.,4},
{08004,The connection was refused because the database 
{0} was not found.,4},

{08004,Database connection refused.,4},
{08004,User '{0}' cannot shut down database '{1}'. Only 
the database owner can perform this operation.,4},
{08004,User '{0}' cannot (re)encrypt database '{1}'. 
Only the database owner can perform this operation.,4},
{08004,User '{0}' cannot hard upgrade database '{1}'. 
Only the database owner can perform this operation.,4},
{08004,Connect refused to database '{0}' because it is 
in replication slave mode.,4},
{08006,An error occurred during connect reset and the 
connection has been terminated.  See chained exceptions for 
details.,4},


Re: SQLNonTransientConnectionExceptions and SESSION_SEVERITY exceptions that are not '08XXX' spec clarification

2007-09-17 Thread Lance J. Andersen

Yes, I see that now on a second pass at section 23.

However, to answer Kathey's question: no, the SQL class values used for 
JDBC SQLExceptions were defined around the standard SQL class values, 
not implementation-defined class values.  Perhaps we can consider 
extending this in JDBC 4.1, but for now I would just return a SQLException.




Daniel John Debrunner wrote:

Kathey Marsden wrote:

Lance J. Andersen wrote:
perhaps it is worth going back to DRDA and asking them where/how 
they came up with that class value?


I put a query into the one DRDA contact I have but unfortunately he 
is out for a few weeks. Perhaps Rick knows someone who could answer 
where class 58 came from.


58 is a valid implementation-defined SQLState class. All that 
DRDA has done is define its own SQLState values, which the SQL 
Standard guarantees it will not use.


Dan.


Re: [VOTE] Dyre Tjeldvoll as a committer

2007-09-10 Thread Lance J. Andersen

+1

Rick Hillegas wrote:

+1

Rick Hillegas wrote:
Please vote on whether we should make Dyre Tjeldvoll a committer. The 
vote will close at 5:00 pm San Francisco time on Monday September 17.


Since 2005 Dyre has submitted many patches and fielded questions on 
the mailing lists. His contributions range across Derby's Network, 
Language, and Storage layers and include the following:


Performance: 2226, 938, 827, 825, 815

Cleanup and bugs: 2594, 2223, 2191, 2114, 2050, 336, 330, 249, 220, 
128, 85


JDBC4: 1380, 1282, 1236, 1235, 1094, 1093, 925, 924

On a personal note, I like how his dry humor leavens our discourse.

Regards,
-Rick





Re: [jira] Commented: (DERBY-2235) Server doesnt support timestamps with timezone

2007-08-30 Thread Lance J. Andersen

FWIW, for JDBC.next I am looking at adding support for TIME and 
TIMESTAMP with time zone.

-lance

Daniel John Debrunner (JIRA) wrote:
[ https://issues.apache.org/jira/browse/DERBY-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12523890 ] 


Daniel John Debrunner commented on DERBY-2235:
--

Ken  ... if a timezone-less string ordinarily defaults to GMT. Or is a tz-less 
string defaults to the system's tz ...

It's neither. Derby's TIMESTAMP values are TIMESTAMP with no associated timezone information. 


For information on how the datetime values interact with JDBC see:

 
http://db.apache.org/derby/docs/10.3/publishedapi/jdbc3/org/apache/derby/jdbc/package-summary.html



  

Server doesnt support timestamps with timezone
--

Key: DERBY-2235
URL: https://issues.apache.org/jira/browse/DERBY-2235
Project: Derby
 Issue Type: Improvement
 Components: SQL
   Affects Versions: 10.2.2.0
   Reporter: Ken Johanson
   Priority: Minor

DML with datetime literals having timezone offset data (ISO-8601):
update tbl set dt1 = '2007-01-03 04:13:43.006 -0800'
Causes:
SQLException: The syntax of the string representation of a datetime value is 
incorrect.
Error: -1 SQLSTATE: 22007
I believe that even if the storage does not (does it?) support timezone 
storage, the input of a TZ could be normalized (offset applied) to the default 
TZ.



  


Re: [VOTE] Øystein Grøvlen as a commi tter

2007-08-21 Thread Lance J. Andersen

+1

Knut Anders Hatlen wrote:

Rick Hillegas [EMAIL PROTECTED] writes:

  

Please vote on whether we should make Øystein Grøvlen a committer. The
vote will close at 5:00 pm San Francisco time on Tuesday August 28.



+1

  


Re: derby papers on apache web site

2007-08-07 Thread Lance J. Andersen

Perhaps you are looking for:

http://db.apache.org/derby/integrate/index.html

http://wiki.apache.org/db-derby/

http://db.apache.org/derby/manuals/index.html



Julius Stroffek wrote:

Hi All,

I think there used to be a papers section somewhere on Apache Derby 
website. However, I am not able to find it now. Have the location 
changed? Where can I find those papers? Are they still available?


Thanks

Julo


Re: Derby in the .orgZone at Java One

2007-04-25 Thread Lance J. Andersen

i can probably assist if needed.



Rick Hillegas wrote:

Hi Derby dev folks,

At this year's Java One, Derby will have some slots in the .orgZone. 
I've included a schedule below. This is in addition to the related 
presence of Java DB and Cloudscape in the Sun and IBM booths.


I'm looking for a couple community members who'd be willing to sign up 
for these slots and share their knowledge of Derby with people who 
visit the .orgZone booth. I can get exhibitor passes for the lucky 
couple, which will let you trawl the Pavilion when you're not staffing 
the booth.


If you're interested, please let me know by 4:00 pm San Francisco time 
this Friday, April 27. In your response, let me know which slots you 
prefer.


Thanks!
-Rick



JavaOne .orgZone Pavilion Hours:
Tuesday, May 8: 11:30 am - 1:30 pm  [8] OpenJDK, Derby, Greenfoot, 
OO.org, woodstock, FSF, Funabol, ASF
1:30 pm - 3:30 pm[8] OpenJDK, OO.org, FSF, JCP.org, Portal, 
Funabol, ASF, GlassFish
3:30 pm - 5:30 pm[8] OpenLaszlo, Betavine, Hyperic, Woodstock, 
FSF, Portal, GlassFish, Zimbra
5:30 pm - 7:30 pm[8] OpenJDK, Derby, PostgreSQL, OpenLaszlo, 
oo.org, FSF,  Zimbra, ASF

7:30 pm - 8:30 pm[5] OpenJDK,  PostgreSQL, OO.org, FSF, ASF
Wednesday, May 9: 11:30 am - 1:30 pm  [8] OpenJDK, Derby, 
Betavine, OO.org, Woodstock, Hyperic, Portal, ASF
1:30 pm - 3:30 pm[8] OpenJDK, Derby, PostgreSQL, OpenLaszlo, 
OO.org, Woodstock, FSF, Zimbra
3:30 pm - 4:30 pm[8]  OpenJDK, PostgreSQL, OO.org, Woodstock, FSF, 
JCP.org, Portal, ASF
Thursday, May 10: 11:30 am - 1:30 pm  [8] OpenJDK, PostgreSQL, 
OO.org, FSF, JCP.org, Funabol, ASF, JCP.org
1:30 pm - 3:30 pm[8] OpenJDK, Derby, PostgreSQL, Betavine, OO.org, 
FSF, ASF, Zimbra

3:30 pm - 4:30 pm[4] OpenJDK, Derby, OO.org, FSF




Re: [jira] Commented: (DERBY-1934) Reference Manual updates - J2EE Compliance: Java Transaction API and javax.sql Extensions

2007-03-29 Thread Lance J. Andersen

Kim,
I would leave out a reference to

An alternative to the DriverManager facility, a DataSource object is the 
preferred means of getting a connection.



This is old crud that I did not get a cycle to remove, based on when I 
was allowed to do putbacks to the javadocs.


-lance

Kim Haase (JIRA) wrote:
[ https://issues.apache.org/jira/browse/DERBY-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485344 ] 


Kim Haase commented on DERBY-1934:
--

Thanks, Laura -- These look excellent.

However, I have since gotten more feedback from Lance (between ===):


The page is not consistent, as it should just describe what a DataSource is, 
to align with what it states for the other interfaces.

You can borrow from the spec or from the javadoc for DataSource.

The problem I have with the wording is that you can use a DataSource without 
JNDI, and it can be (and typically is) looked up as a resource via JNDI.

We also need to migrate to referring to Java EE instead of J2EE.
===

The last bit can wait, but we may as well reword the description based on the javadoc. 
Here is some text that can replace the first two sentences. I think the last sentence 
(This allows the calling application to access the database by a name (as a data 
source) instead of through a database connection URL.) can stay as is.

A DataSource object is a factory for connections to the physical data source that 
the DataSource object represents. An alternative to the DriverManager facility, a 
DataSource object is the preferred means of getting a connection. An object that 
implements the DataSource interface will typically be registered with a naming service 
based on the Java(TM) Naming and Directory (JNDI) API.

Interesting in view of the fact that our Working With Derby example uses 
DriverManager; at some point perhaps it should be changed to use DataSource, 
but that is yet another task.

BTW, did you also change the title of rrefjta16677.dita in the map file?


  

Reference Manual updates - J2EE Compliance: Java Transaction API and javax.sql 
Extensions
-

Key: DERBY-1934
URL: https://issues.apache.org/jira/browse/DERBY-1934
Project: Derby
 Issue Type: Bug
 Components: Documentation
   Affects Versions: 10.2.1.6
   Reporter: Laura Stewart
Assigned To: Laura Stewart
Attachments: derby1934_1.diff, derby1934_2.diff, derby1934_html2.zip, 
rrefjta18596.html


J2EE Compliance: Java Transaction API and javax.sql Extensions: 
 
Section = javax.sql:JDBC Extensions 
File = http://db.apache.org/derby/docs/dev/ref/rrefjta18596.html 
Update = 
This URL no longer exists: (For more details about these extensions, see  http://java.sun.com/products/jdbc/jdbc20.stdext.javadoc/javax/sql/package-summary.html). The page that has this information, although you have to browse to the section called JDBC 2.0 Optional Package API is  http://java.sun.com/products/jdbc/download.html  



  


Re: [jira] Commented: (DERBY-1934) Reference Manual updates - J2EE Compliance: Java Transaction API and javax.sql Extensions

2007-03-29 Thread Lance J. Andersen



Kim Haase wrote:

Lance J. Andersen wrote:

Kim,
I would leave out a reference to

An alternative to the DriverManager facility, a DataSource object is 
the preferred means of getting a connection.




This is old crud that i did not get a cycle to remove based on when i 
was allowed to do putbacks to the javadocs.


Wow, and it's still there ... 1.4, 1.5, 1.6 ... I guess once it's part 
of the spec it takes a major rev to change it.
Changing Java SE javadocs is a long process, and it can only be done 
during a new rev due to localization of the javadocs (and because it is 
part of the JDBC spec; in some cases, unless the change is trivial, it 
has to be aligned with the JDBC spec).




So we are neutral on DriverManager vs. Datasource, and the Working 
With Derby example is okay?
Yes, DriverManager is *not* going away and is perfectly acceptable, 
especially now that you do not have to load the driver explicitly.


Ideally, to use DataSources in a more portable way outside of an 
environment which supports JNDI, you would need a factory so that you 
do not have to instantiate implementation classes.
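As a sketch of the two lookup paths (the JNDI name jdbc/MyDB and the database URL are made-up examples; neither line runs without a configured environment):

```java
// Via JNDI: the app names a logical resource; the container binds it
// to a vendor DataSource, so no implementation class is referenced.
DataSource ds = (DataSource) new InitialContext().lookup("jdbc/MyDB");
Connection c1 = ds.getConnection();

// Via DriverManager: still fully supported; since JDBC 4 the driver
// is located automatically, with no explicit Class.forName() needed.
Connection c2 = DriverManager.getConnection("jdbc:derby:sampleDB");
```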





Laura, should I add a comment to that effect to the JIRA? Or you can 
when you post the next patch.


Kim



-lance

Kim Haase (JIRA) wrote:
[ 
https://issues.apache.org/jira/browse/DERBY-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485344 
]

Kim Haase commented on DERBY-1934:
--

Thanks, Laura -- These look excellent.

However, I have since gotten more feedback from Lance (between ===):


The page is not consistent, as it should just describe what a 
DataSource is, to align with what it states for the other interfaces.

You can borrow from the spec or from the javadoc for DataSource.

The problem I have with the wording is that you can use a DataSource 
without JNDI, and it can be (and typically is) looked up as a resource 
via JNDI.

We also need to migrate to referring to Java EE instead of J2EE.
===

The last bit can wait, but we may as well reword the description 
based on the javadoc. Here is some text that can replace the first 
two sentences. I think the last sentence (This allows the calling 
application to access the database by a name (as a data source) 
instead of through a database connection URL.) can stay as is.


A DataSource object is a factory for connections to the physical 
data source that the DataSource object represents. An alternative to 
the DriverManager facility, a DataSource object is the preferred 
means of getting a connection. An object that implements the 
DataSource interface will typically be registered with a naming 
service based on the Java(TM) Naming and Directory (JNDI) API.


Interesting in view of the fact that our Working With Derby example 
uses DriverManager; at some point perhaps it should be changed to 
use DataSource, but that is yet another task.


BTW, did you also change the title of rrefjta16677.dita in the map 
file?



 
Reference Manual updates - J2EE Compliance: Java Transaction API 
and javax.sql Extensions
- 



Key: DERBY-1934
URL: https://issues.apache.org/jira/browse/DERBY-1934
Project: Derby
 Issue Type: Bug
 Components: Documentation
   Affects Versions: 10.2.1.6
   Reporter: Laura Stewart
Assigned To: Laura Stewart
Attachments: derby1934_1.diff, derby1934_2.diff, 
derby1934_html2.zip, rrefjta18596.html



J2EE Compliance: Java Transaction API and javax.sql Extensions:  
Section = javax.sql:JDBC Extensions File = 
http://db.apache.org/derby/docs/dev/ref/rrefjta18596.html Update = 
This URL no longer exists: (For more details about these 
extensions, see  
http://java.sun.com/products/jdbc/jdbc20.stdext.javadoc/javax/sql/package-summary.html). 
The page that has this information, although you have to browse to 
the section called JDBC 2.0 Optional Package API is  
http://java.sun.com/products/jdbc/download.html  


  


Re: [jira] Commented: (DERBY-1934) Reference Manual updates - J2EE Compliance: Java Transaction API and javax.sql Extensions

2007-03-29 Thread Lance J. Andersen

There is nothing magic about Derby's implementation of a DataSource.

I would suggest a quick scan of the DataSource overview in the JDBC 
spec, as it will give you an overview of how a DataSource is used in 
conjunction with JNDI.


Laura Stewart (JIRA) wrote:
[ https://issues.apache.org/jira/browse/DERBY-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485367 ] 


Laura Stewart commented on DERBY-1934:
--

Kim - Would it be acceptable to have this as the text for DataSource:

The Derby implementation of the DataSource interface provides support for the Java 
Naming and Directory
Interface (JNDI).  A DataSource object is a factory for connections to the 
physical data source
that the DataSource object represents. An alternative to the DriverManager facility, a DataSource object is the 
preferred means of getting a connection. An object that implements the DataSource interface will typically be registered
with a naming service based on the Java Naming and Directory (JNDI) API. 
This allows the calling application to access the database by a name (as a

data source) instead of through a database connection URL.

  



Re: [jira] Commented: (DERBY-1934) Reference Manual updates - J2EE Compliance: Java Transaction API and javax.sql Extensions

2007-03-29 Thread Lance J. Andersen

The comment WRT borrowing is to review the wording and paraphrase from it.

You are right, you cannot take it verbatim... sorry if that was not clear.



Daniel John Debrunner (JIRA) wrote:
[ https://issues.apache.org/jira/browse/DERBY-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12485375 ] 


Daniel John Debrunner commented on DERBY-1934:
--

from Lance, via Kim:

 You can borrow from the spec or from the javadoc for DataSource. 

Errrmmm, can we? What licence is the JDBC spec and Java doc under?
The ASF (via derby project in this case) needs to follow any licence terms.



  



Re: [VOTE] Dag Wanvik as a committer

2007-03-12 Thread Lance J. Andersen

+1

Knut Anders Hatlen wrote:

Rick Hillegas [EMAIL PROTECTED] writes:

  

Please vote on whether we should make Dag Wanvik a Derby
committer. The vote will close at 5:00 pm San Francisco time on Monday
March 19.



+1

  


Re: ResultSetMetaData.isReadOnly who's right embedded or client

2007-03-08 Thread Lance J. Andersen
I would recommend returning false, as this result is not tied to 
whether the ResultSet is updatable but to whether you can definitively 
determine that the column cannot be modified.  This is a JDBC 1.0 method.


Regards
Lance

Kathey Marsden wrote:

Client and embedded differ for isReadOnly.  The javadoc explains it as:

Indicates whether the designated column is definitely not writable.


Embedded always returns false and indicates that this is referring to 
the base table column:

public final boolean isReadOnly(int column) throws SQLException {
    validColumnNumber(column);

    // we just don't know if it is a base table column or not
    return false;
}


Client returns whether the ResultSet is updatable or not:

public boolean isReadOnly(int column) throws SQLException {
    try {
        checkForClosedStatement();
        checkForValidColumnIndex(column);
        if (sqlxUpdatable_ == null) {
            // If no extended describe, return resultSet's concurrency
            return (resultSetConcurrency_ == java.sql.ResultSet.CONCUR_READ_ONLY);
        }
        // PROTOCOL 0 means not updatable, 1 means updatable
        return sqlxUpdatable_[column - 1] == 0;
    } catch (SqlException e) {
        throw e.getSQLException();
    }
}


Which is right here?

Kathey



Global Temp tables

2007-02-23 Thread Lance J. Andersen

Does anyone have an idea as to why the global table cannot be found?

Here is the trace output.

Regards
lance

[TopLink Fine]: 
ClientSession(12549034)--Connection(16309502)--Thread(Thread[AWT-EventQueue-0,6,main])--DECLARE 
GLOBAL TEMPORARY TABLE session.TL_CMP3_EMPLOYEE (EMP_ID INTEGER NOT 
NULL, PAY_SCALE VARCHAR(255), ROOM_NUM INTEGER, F_NAME VARCHAR(255), 
STATUS INTEGER, L_NAME VARCHAR(255), VERSION INTEGER, ADDR_ID INTEGER, 
MANAGER_EMP_ID INTEGER, START_DATE DATE, END_DATE DATE, DEPT_ID INTEGER, 
PRIMARY KEY (EMP_ID)) ON COMMIT DELETE ROWS NOT LOGGED
[TopLink Fine]: 
ClientSession(12549034)--Connection(16309502)--Thread(Thread[AWT-EventQueue-0,6,main])--DECLARE 
GLOBAL TEMPORARY TABLE session.TL_CMP3_SALARY (EMP_ID INTEGER NOT NULL, 
SALARY INTEGER, PRIMARY KEY (EMP_ID)) ON COMMIT DELETE ROWS NOT LOGGED
[TopLink Fine]: 
ClientSession(12549034)--Connection(16309502)--Thread(Thread[AWT-EventQueue-0,6,main])--INSERT 
INTO session.TL_CMP3_EMPLOYEE (EMP_ID, ROOM_NUM, VERSION) SELECT 
t0.EMP_ID, t1.SALARY, (t0.VERSION + 1) FROM CMP3_EMPLOYEE t0, 
CMP3_SALARY t1 WHERE ((t0.F_NAME = 'testUpdateUsingTempStorage') AND 
(t1.EMP_ID = t0.EMP_ID))
[TopLink Fine]: 
ClientSession(12549034)--Connection(16309502)--Thread(Thread[AWT-EventQueue-0,6,main])--DELETE 
FROM session.TL_CMP3_EMPLOYEE
[TopLink Fine]: 
ClientSession(12549034)--Connection(16309502)--Thread(Thread[AWT-EventQueue-0,6,main])--DELETE 
FROM session.TL_CMP3_SALARY
[TopLink Warning]: 
UnitOfWork(31852201)--Thread(Thread[AWT-EventQueue-0,6,main])--Local 
Exception Stack:
Exception [TOPLINK-4002] (Oracle ${_EssentialsProductName} - 
${_EssentialsProductVersion} (Build 070220Dev)): 
oracle.toplink.essentials.exceptions.DatabaseException
Internal Exception: org.apache.derby.client.am.SqlException: Table 
'SESSION.TL_CMP3_EMPLOYEE' does not exist.Error Code: -1
Call:INSERT INTO session.TL_CMP3_EMPLOYEE (EMP_ID, ROOM_NUM, VERSION) 
SELECT t0.EMP_ID, t1.SALARY, (t0.VERSION + 1) FROM CMP3_EMPLOYEE t0, 
CMP3_SALARY t1 WHERE ((t0.F_NAME = 'testUpdateUsingTempStorage') AND 
(t1.EMP_ID = t0.EMP_ID))

Query:UpdateAllQuery()


Re: Global Temp tables

2007-02-23 Thread Lance J. Andersen

Mamta,

Thanks for taking the time to respond.


I had the developer run this using the embedded driver and attached the 
log.  It looks like the prepare is failing on the DECLARE.



I have attached the log for your reference.

Regards
Lance

Mamta Satoor wrote:
Lance, I am sure you have already checked following but wanted to 
throw them out anyways
1)Is the temporary table getting referenced by the same connection 
that created it?

2)Does your particular scenario work under embedded Derby?
 
Also, once the connection that created the global table closes, the 
global table ceases to exist.
 
I think it will be worth checking the script under embedded Derby to 
rule out Network Server as the culprit.
 
Mamta


 
On 2/23/07, *Lance J. Andersen* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Does anyone have an idea as to why the global table cannot be found?

Here is the trace output.

Regards
lance

[TopLink Fine]:

ClientSession(12549034)--Connection(16309502)--Thread(Thread[AWT-EventQueue-0,6,main])--DECLARE
GLOBAL TEMPORARY TABLE session.TL_CMP3_EMPLOYEE (EMP_ID INTEGER NOT
NULL, PAY_SCALE VARCHAR(255), ROOM_NUM INTEGER, F_NAME VARCHAR(255),
STATUS INTEGER, L_NAME VARCHAR(255), VERSION INTEGER, ADDR_ID INTEGER,
MANAGER_EMP_ID INTEGER, START_DATE DATE, END_DATE DATE, DEPT_ID
INTEGER,
PRIMARY KEY (EMP_ID)) ON COMMIT DELETE ROWS NOT LOGGED
[TopLink Fine]:

ClientSession(12549034)--Connection(16309502)--Thread(Thread[AWT-EventQueue-0,6,main])--DECLARE
GLOBAL TEMPORARY TABLE session.TL_CMP3_SALARY (EMP_ID INTEGER NOT
NULL,
SALARY INTEGER, PRIMARY KEY (EMP_ID)) ON COMMIT DELETE ROWS NOT
LOGGED
[TopLink Fine]:

ClientSession(12549034)--Connection(16309502)--Thread(Thread[AWT-EventQueue-0,6,main])--INSERT
INTO session.TL_CMP3_EMPLOYEE (EMP_ID, ROOM_NUM, VERSION) SELECT
t0.EMP_ID, t1.SALARY, (t0.VERSION + 1) FROM CMP3_EMPLOYEE t0,
CMP3_SALARY t1 WHERE ((t0.F_NAME = 'testUpdateUsingTempStorage') AND
(t1.EMP_ID = t0.EMP_ID))
[TopLink Fine]:

ClientSession(12549034)--Connection(16309502)--Thread(Thread[AWT-EventQueue-0,6,main])--DELETE

FROM session.TL_CMP3_EMPLOYEE
[TopLink Fine]:

ClientSession(12549034)--Connection(16309502)--Thread(Thread[AWT-EventQueue-0,6,main])--DELETE
FROM session.TL_CMP3_SALARY
[TopLink Warning]:
UnitOfWork(31852201)--Thread(Thread[AWT-EventQueue-0,6,main])--Local
Exception Stack:
Exception [TOPLINK-4002] (Oracle ${_EssentialsProductName} -
${_EssentialsProductVersion} (Build 070220Dev)):
oracle.toplink.essentials.exceptions.DatabaseException
Internal Exception: org.apache.derby.client.am.SqlException : Table
'SESSION.TL_CMP3_EMPLOYEE' does not exist.Error Code: -1
Call:INSERT INTO session.TL_CMP3_EMPLOYEE (EMP_ID, ROOM_NUM, VERSION)
SELECT t0.EMP_ID, t1.SALARY, (t0.VERSION + 1) FROM CMP3_EMPLOYEE t0,
CMP3_SALARY t1 WHERE ((t0.F_NAME = 'testUpdateUsingTempStorage') AND
(t1.EMP_ID = t0.EMP_ID))
Query:UpdateAllQuery()


2007-02-23 21:18:01.266 GMT Thread[AWT-EventQueue-0,6,main] (XID = 2360), 
(SESSIONID = 41), (DATABASE = C:/Dev_ri/properties/Derby1), (DRDAID = null), 
Executing prepared statement: DELETE FROM CMP3_EMP_PROJ WHERE EXISTS(SELECT 
t0.EMP_ID FROM CMP3_EMPLOYEE t0, CMP3_SALARY t1 WHERE ((t0.F_NAME = 
'testUpdateUsingTempStorage') AND (t1.EMP_ID = t0.EMP_ID)) AND t0.EMP_ID = 
CMP3_EMP_PROJ.EMPLOYEES_EMP_ID) :End prepared statement
2007-02-23 21:18:01.266 GMT Thread[AWT-EventQueue-0,6,main] (XID = 2360), 
(SESSIONID = 41), (DATABASE = C:/Dev_ri/properties/Derby1), (DRDAID = null), 
Executing prepared statement: DELETE FROM CMP3_SALARY WHERE EXISTS(SELECT 
t0.EMP_ID FROM CMP3_EMPLOYEE t0, CMP3_SALARY t1 WHERE ((t0.F_NAME = 
'testUpdateUsingTempStorage') AND (t1.EMP_ID = t0.EMP_ID)) AND t1.EMP_ID = 
CMP3_SALARY.EMP_ID) :End prepared statement
2007-02-23 21:18:01.266 GMT Thread[AWT-EventQueue-0,6,main] (XID = 2360), 
(SESSIONID = 41), (DATABASE = C:/Dev_ri/properties/Derby1), (DRDAID = null), 
Executing prepared statement: DELETE FROM CMP3_EMPLOYEE WHERE NOT EXISTS(SELECT 
t0.EMP_ID FROM CMP3_EMPLOYEE t0, CMP3_SALARY t1 WHERE (t1.EMP_ID = t0.EMP_ID) 
AND t0.EMP_ID = CMP3_EMPLOYEE.EMP_ID) :End prepared statement
2007-02-23 21:18:01.282 GMT Thread[AWT-EventQueue-0,6,main] (XID = 2360), 
(SESSIONID = 41), (DATABASE = C:/Dev_ri/properties/Derby1), (DRDAID = null), 
Executing prepared statement: DELETE FROM CMP3_ADDRESS WHERE (COUNTRY = 
'testUpdateUsingTempStorage') :End prepared statement
2007-02-23 21:18:01.297 GMT Thread[AWT-EventQueue-0,6,main] (XID = 2360), 
(SESSIONID = 41), (DATABASE = C:/Dev_ri/properties/Derby1), (DRDAID = null), 
Executing prepared statement: UPDATE CMP3_EMPLOYEE_SEQ SET SEQ_COUNT = 
SEQ_COUNT + ? WHERE SEQ_NAME = ? :End prepared statement with 2 parameters 
begin parameter #1: 50 :end parameter begin parameter #2: EMPLOYEE_SEQ :end 
parameter 
2007-02-23 21:18:01.297 GMT Thread[AWT

Re: DatabaseMetaData JDBC specification question

2007-01-16 Thread Lance J. Andersen
If the JDBC spec does not indicate that the parameter accepts a pattern 
for the value, then I would suggest you only support patterns where it 
is required.  It is possible tests could be added to do additional 
validation of parameters passed into API methods in future TCKs for 
conformance.


Daniel John Debrunner wrote:
For the DatabaseMetaData methods that fetch attributes of SQL objects 
(such as getTables, getColumns) the parameters are described in the 
javadoc in three ways:


1) Parameter must match the object's name as stored in the database.

example javadoc quote
a table name; must match the table name as it is stored in the database
/example javadoc quote

2) Parameter must match the object's name as stored in the database 
and two special values are allowed (empty string and null).


example javadoc quote
a schema name; must match the schema name as it is stored in the 
database;  retrieves those without a schema; null means that the 
schema name should not be used to narrow the search

/example javadoc quote

3) A pattern, explicitly called out in the overview of 
DatabaseMetaData and states the parameter name will end in 'pattern'.


example javadoc quote
a table name pattern; must match the table name as it is stored in the 
database

/example javadoc quote

These imply very different behaviours for the various styles, e.g.

for 1) null should not be allowed and the value should not be treated 
as a pattern or use empty string to indicate those without a schema.


for 2) the value should not be treated as a pattern.

Derby does not follow this split, it treats every such argument as a 
pattern (in some cases the client may treat arguments differently).


Is the split in DatabaseMetaData intentional? It seems so, given the 
explicit wording for patterns. And the fact that in some cases the 
method's description indicates information for 'a table' (singular) is 
returned and the resulting rows have no information that allows one to 
determine which table a row is for (e.g. getBestRowIdentifier and 
getVersionColumns). Though in both those cases the api also implicitly 
allows for multiple tables to be returned when the same table name 
exists in multiple schemas.


If so should Derby follow the strict definitions of the javadoc or 
could Derby just be providing extensions to the standard behaviour?
The only area where I could see issues coming up with this extension 
is when the parameter is in category 1) or 2) and the object name 
contains '%' or '_'. Then a meta data call could potentially return 
rows for other tables and in some cases the meta data provides no way 
to determine which table a row is for (e.g. getBestRowIdentifier and 
getVersionColumns).


One other point is that removing the pattern support from a number of 
meta data queries would most likely improve their performance.


Thanks,
Dan.
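One practical consequence of treating every argument as a pattern: a caller holding a literal name that contains '%' or '_' (like TL_CMP3_EMPLOYEE) must escape it before passing it to a pattern parameter. A sketch of that escaping, assuming the common backslash escape string that DatabaseMetaData.getSearchStringEscape() typically returns:

```java
// Escapes the JDBC metadata wildcards '%' and '_' in a literal identifier
// so that a pattern-taking DatabaseMetaData method matches the name itself.
public class MetaDataPatterns {
    public static String escapeForPattern(String literalName, String escape) {
        StringBuilder sb = new StringBuilder(literalName.length());
        for (char c : literalName.toCharArray()) {
            if (c == '%' || c == '_') {
                sb.append(escape); // e.g. "\" from getSearchStringEscape()
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // "\\" is an assumed escape string; real code should ask the driver
        // via DatabaseMetaData.getSearchStringEscape().
        System.out.println(escapeForPattern("EMP_PROJ", "\\")); // prints: EMP\_PROJ
    }
}
```

The escaped string would then be passed as, say, the tableNamePattern argument of getTables.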




Re: [jira] Commented: (DERBY-2109) System privileges

2006-12-14 Thread Lance J. Andersen

I agree with David on this that policy files are painful.

David Van Couvering wrote:



Rick Hillegas (JIRA) wrote:


2) Unfamiliar api. Oracle, DB2, Postgres, and MySQL all handle system 
privileges in different ways. Picking one of these models would still 
result in an api that's unfamiliar to many people. That said, these 
databases do tend to use GRANT/REVOKE for system privileges, albeit 
each in its own peculiar fashion. I agree that GRANT/REVOKE is an 
easier model to learn than Java Security. I think however, that the 
complexity of Java Security is borne by the derby-dev developer, not 
by the customer. Creating a policy file is very easy and our user 
documentation gives simple examples which the naive user can just 
crib. With adequate user documentation, I think this approach would 
be straightforward for the customer.


I must respectfully disagree that creating a policy file is very 
easy.  I think it's a royal PITA - the syntax is complex, 
nonintuitive and unforgiving.


Can we provide a GRANT/REVOKE interface on top of an implementation 
that  uses JAAS?




Re: [VOTE] Myrna Van Lunteren as a committer

2006-11-02 Thread Lance J. Andersen

+1

Bryan Pendleton wrote:

I am proposing that we add Myrna Van Lunteren ([EMAIL PROTECTED])
as a committer for Derby.


Myrna does great work.

+1

bryan




Re: [VOTE] Kristian Waagan as a committer

2006-10-25 Thread Lance J. Andersen

+1

Rick Hillegas wrote:

+1

Rick Hillegas wrote:

Please vote on whether we should make Kristian Waagan a Derby committer.

Kristian contributed significantly to the Derby JDBC4 effort. In my 
opinion


1) His patches show consistently superior quality: they are well 
thought out, well documented, and well tested.


2) He cheerfully takes advice.

3) He supplies the community sound counsel.

4) In addition, his commitment to and knowledge of assertion-based 
testing has helped lead the community, by example, toward a simpler, 
faster, more scalable testing model.


Regards,
-Rick




Re: [Vote] Include tomcat5.exe as derby.exe (Re: [jira] Commented: (DERBY-187) Starting derby network server as a service in Win OS)

2006-10-19 Thread Lance J. Andersen
If I am reading this thread correctly, I am not in favor of including an 
*.exe as part of the base derby download.  If you want to provide an 
optional package, that is fine.  Let us keep the base bundle pure Java.


Andrew McIntyre wrote:

On 10/18/06, Bryan Pendleton [EMAIL PROTECTED] wrote:

 Now I'm thinking to include tomcat5.exe into derby as derby.exe as
 resolution for DERBY-187.

Are the following two statements true?

1) This derby.exe program would be useful to Windows users, not to 
other users


Correct, since tomcat5.exe is just a renamed procrun/prunsrv from the
Jakarta Commons Daemon project.

2) This derby.exe program can be used with any release of Derby (that 
is, we
don't have to modify the NetworkServerControl class to enable it to 
be run as

a service by Derby.exe).


Procrun/prunsrv can be used to interface any program, not just java,
with Windows' services. Consider it an Apache licensed srvany.exe.

The question that I think needs to be asked is:

3) Why can't users get a native Windows binary for procrun from the
Jakarta Commons Daemon project that they can use with Derby?

There's even a JIRA that's over a year old (migrated from Bugzilla)
with no comments that asks for exactly that:

http://issues.apache.org/jira/browse/DAEMON-51

If both the above statements are true, then it occurs to me that it 
might be
nice to be able to distribute this new program separately, rather 
than as

part of the basic Derby release.


While I don't see any problem with Derby redistributing procrun, I
don't think we necessarily need to be redistributing it either. It
would be sufficient to provide a pointer to the commons daemon project
and instructions in our documentation for those that want this
functionality.

But, since you can't actually get procrun as a Windows executable
anywhere that I could find from the links at
http://jakarta.apache.org/commons/daemon/index.html, I suppose
redistributing it is probably the best way of providing this
functionality for users.

Maybe there's some reason why they don't redistribute their own code
in binary form? I find it sort of odd, but I haven't gone to look for
the reasons yet.

That way users could decide whether or not to download this program, 
and we
also could release this program independently of releasing the basic 
Derby

software.


If it were possible to download procrun as a binary from the Commons
Daemon project, I would prefer that users pick it up from there, and
we could provide Derby-specific instructions on using it with the
network server.

Since that doesn't seem to be possible, I think it's reasonable to
consider redistributing it ourselves and having a vote on that.
DERBY-187 is the 8th most popular issue in our JIRA, tied with full
text search, so that indicates demand for the feature.

I will vote on this later. I want to do some more research into
procrun and think about how it might fit into the Derby -bin
distribution.

andrew


Re: [jira] Commented: (DERBY-1938) Add support for setObject(arg, null)

2006-10-09 Thread Lance J. Andersen




The following wording was added to the JDBC 4.0 javadocs to address
this issue:

Note: Not all databases allow for a non-typed Null to be sent to
the backend. For maximum portability, the setNull or the setObject(int
parameterIndex, Object x, int sqlType) method should be used
instead of setObject(int parameterIndex, Object x).
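A sketch of a binding helper that follows this advice; the dynamic-proxy PreparedStatement in main is only a stand-in to show which driver method ends up being called (a real statement would come from Connection.prepareStatement):

```java
import java.lang.reflect.Proxy;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

public class PortableBind {
    // Portable null handling: route nulls through setNull(idx, sqlType)
    // rather than the non-portable setObject(idx, null).
    public static void bind(PreparedStatement ps, int idx, Object value,
                            int sqlType) throws SQLException {
        if (value == null) {
            ps.setNull(idx, sqlType);
        } else {
            ps.setObject(idx, value, sqlType);
        }
    }

    public static void main(String[] args) throws SQLException {
        // Stand-in statement that just reports which setter was invoked.
        PreparedStatement ps = (PreparedStatement) Proxy.newProxyInstance(
                PortableBind.class.getClassLoader(),
                new Class<?>[] { PreparedStatement.class },
                (proxy, method, margs) -> {
                    System.out.println(method.getName());
                    return null;
                });
        bind(ps, 1, null, Types.VARCHAR);  // prints: setNull
        bind(ps, 2, "abc", Types.VARCHAR); // prints: setObject
    }
}
```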

Daniel John Debrunner (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-1938?page=comments#action_12440912 ] 

Daniel John Debrunner commented on DERBY-1938:
--

Section 13.2.2.2 does not apply here. Since Java null has no type it cannot be mapped using this rule: 
 "the Java Object mapped using the default mapping for that object type. "

I think the real justification for changing setObject(col, null) seems to be to match other JDBC drivers (which ones?) and/or applications that seem to expect this to work. But there's little evidence of that justification in this thread.

I think it's clear that the JDBC spec (from the tutorial) indicates that applications should not depend on this behaviour.



  
  
Add support for setObject(arg, null)
--

Key: DERBY-1938
URL: http://issues.apache.org/jira/browse/DERBY-1938
Project: Derby
 Issue Type: Improvement
 Components: JDBC
   Reporter: Dag H. Wanvik
Assigned To: Tomohito Nakayama
   Priority: Minor
Attachments: DERBY-1938.patch


Derby presently does not implement support for the method
PreparedStatement.setObject (and similarly for CallableStatement.setObject) 
when the supplied value is null, unless a type argument (3rd arg) is also present. 
That is, in:
void setObject(int parameterIndex,
  Object x)
  throws SQLException
x can not be null. 
Derby will presently throw an SQLException (client: XJ021, embedded: 22005)
if x is null when calling this method on a preparedStatement.
Porting some applications may be made easier if this restriction is lifted.
See also discussion in DERBY-1904.

  
  
  





Re: possible JDBC 4 EOD bug??

2006-09-15 Thread Lance J. Andersen




It is definitely a bug Dan if it has not been resolved.

If you feel the section in the spec could be clearer, please let me
know, as I have a small window to clarify this area.

-lance

Daniel John Debrunner wrote:

  Vemund Ostgaard wrote:

  
  
Daniel John Debrunner wrote:

  
  
  
  

  If my select returns a column called 'NAME' then it does not map to the
JavaBean property called 'name'. Instead the name of the column needs to
map to the name of the private field, 'myName'. Then the field is set
correctly but the setter is never used. Is this a bug, it seems like it?
 

  

If I understand you correctly, I believe it is a known bug that I
stumbled on myself a while ago. I thought it had been fixed in one of
the latest Mustang builds but I haven't verified it myself. Which build
have you been using?

  
  
Thanks, the description was a bit rushed because I had only a little time
before I needed to leave and I wanted to get the question out there.
Do you have the bug number for the issue?
I'm using b98.

Thanks,
Dan.

  





Re: possible JDBC 4 EOD bug??

2006-09-15 Thread Lance J. Andersen






Daniel John Debrunner wrote:

  Daniel John Debrunner wrote:

  
  
Lance J. Andersen wrote:




  It is definitely a bug Dan if it has not been resolved.

If you feel the section in the spec could be clearer, please let me know
as i have a small window to clarify this area.
  


I think if the functionality is working then the spec is fine, I only
got confused by the incorrect behaviour.

  
  
Actually reading on to later sections they only ever mention "field
within a data class", never using JavaBean setter and getters. Does this
mean those sections (19.3.3, 19.3.1.4) only apply to fields?

As an example, how would I use the ResultColumn Annotation with JavaBean
setter and getter methods?
  

Unfortunately no for ResultColumn as we needed to change the Annotation
to be method and field, not just field for the Target type. We meant
to support this but missed the window to update the annotation.

For allColumnsMapped, this is indeed supposed to work with
setters/getters. I will clarify this a bit better as this wording was
added prior to including JavaBeans property support.

Regards
Lance

  
Thanks,
Dan.

  





Re: 10.2 plans (was Re: 10.2 licensing issue)

2006-09-12 Thread Lance J. Andersen





2 - Regarding the Mustang and JDBC 4 issue, my general opinion is that 
if Mustang is still coming out in October then it may be worth it to 
continue on our current path and do a release that includes JDBC 4.  
If Mustang is delayed, then I think it's just time to get 10.2 done to 
get some of the other good features out there.  It's been quite a 
while since we've had a feature release.  Does anyone know the current 
schedule for Mustang?


see http://blogs.sun.com/mr/entry/java_se_6_schedule_update  for details 
on the SE 6 schedule


Kathy



Re: CachedRowSet in Derby

2006-09-07 Thread Lance J. Andersen




That is correct. Java SE ships and will continue to ship the Rowset RI.

-lance

Bernt M. Johnsen wrote:

  Harri Pesonen wrote:

  
  
Since Sun is apparently removing CachedRowSetImpl from JDK (warning:
com.sun.rowset.CachedRowSetImpl is Sun proprietary API and may be
removed in a future release),

  
  
com.sun.rowset.CachedRowSetImpl is still there in jdk1.6.0_b98 and I
have no knowledge of any plans to remove it.

  
  
could it be possible to to add CachedRowSet into Derby, which is
bundled with JDK6?

  
  
If the RowSet implementation were to be removed from the JDK, the
Derby community would have to consider what to do.

  





Re: SQL state 42, access permissions, SQL standard and JDBC 4.0

2006-09-07 Thread Lance J. Andersen



Daniel John Debrunner wrote:

The SQL standard says that SQL State '42' is for syntax error or access
rule violation (section 23.1).

JDBC 4.0 states in section 6.5.1 that TABLE 6-1 specifies which
NonTransientSQLException subclass must be thrown
for a a given SQLState class value: and Table 6.1 has these two lines
of interest:

SQL State 42 - SQLSyntaxErrorException.
SQL State 'N/A' - SQLInvalidAuthorizationException
  
That is a typo.  The javadocs indicate 28 for 
SQLInvalidAuthorizationException.





Derby currently uses SQL State '28' for access rule violations, the SQL
standard says that's for 'invalid authorization specification' and only
used in statements not supported by Derby.


So:

Q1) Should Derby be using '42' for access rule violations?

Q2) If Derby uses '42' for access rule violations should it throw a
SQLSyntaxErrorException, a NonTransientSQLException, a SQLException or
SQLInvalidAuthorizationException?

Dan.



  


Re: SQL state 42, access permissions, SQL standard and JDBC 4.0

2006-09-07 Thread Lance J. Andersen





The javadoc for SQLSyntaxErrorException says:

The subclass of SQLException thrown when the SQLState class value is
'42'. This indicates that the in-progress query has violated SQL syntax
rules.

This is somewhat in conflict with the SQL Standard.

Can a JDBC driver throw an exception with SQLState '42' and the
exception not be a SQLSyntaxErrorException?


  
Well, we put this part of the spec to bed early in 2005, so my memory is 
groggy, but it looks to me that our intent was that whenever a SQLState class 
value of 42 occurs, SQLSyntaxErrorException is thrown, which would cover


syntax error or access rule violation


I am just sanity checking this now with my EG and will update the paper 
spec accordingly; I will have to wait for the first patch of SE 6 to tweak 
the javadocs.


-lance
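The mapping under discussion can be checked against the exception class hierarchy without a database, by constructing the exception directly (the SQLState value "42XXX" below is a made-up '42'-class state, not a real Derby code):

```java
import java.sql.SQLException;
import java.sql.SQLNonTransientException;
import java.sql.SQLSyntaxErrorException;

public class SqlState42 {
    public static void main(String[] args) {
        // A driver mapping SQLState class '42' (syntax error OR access rule
        // violation) to SQLSyntaxErrorException; "42XXX" is a made-up state.
        SQLException e =
                new SQLSyntaxErrorException("access rule violation", "42XXX");
        System.out.println(e.getSQLState().startsWith("42"));      // prints: true
        System.out.println(e instanceof SQLNonTransientException); // prints: true
    }
}
```

SQLSyntaxErrorException extends SQLNonTransientException, so an application catching the latter also receives '42'-class access rule violations under this mapping.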


Re: SQL state 42, access permissions, SQL standard and JDBC 4.0

2006-09-07 Thread Lance J. Andersen






Daniel John Debrunner wrote:

  Lance J. Andersen wrote:

  
  

Daniel John Debrunner wrote:



  The SQL standard says that SQL State '42' is for "syntax error or access
rule violation" (section 23.1).

JDBC 4.0 states in section 6.5.1 that "TABLE 6-1 specifies which
NonTransientSQLException subclass must be thrown
for a a given SQLState class value:" and Table 6.1 has these two lines
of interest:

SQL State 42 - SQLSyntaxErrorException.
  

  
  
The javadoc for SQLSyntaxErrorException says:

The subclass of SQLException thrown when the SQLState class value is
'42'. This indicates that the in-progress query has violated SQL syntax
rules.

This is somewhat in conflict with the SQL Standard.

Can a JDBC driver throw an exception with SQLState '42' and the
exception not be a SQLSyntaxErrorException?
  

Whenever 42 is sent as the SQLState class value it should map to
SQLSyntaxErrorException.

I need to clarify the javadocs a bit, but it already indicates that
this is the exception when the value of 42 is returned.



  
Thanks,
Dan.



  





Re: jdk1.6 regression test failures: _Suite.junit and TestQueryObject with IllegalAccessException

2006-09-07 Thread Lance J. Andersen






Knut Anders Hatlen wrote:

  Sunitha Kambhampati [EMAIL PROTECTED] writes:

  
  
I was looking at the test results from last weekend (09/01) on our
test machines and  I see the following failures on jdk1.6.

derbyall/derbyall.fail:jdbc4/TestQueryObject.java
derbyall/derbynetclientmats/derbynetclientmats.fail:jdbc4/TestQueryObject.java
derbyall/derbynetclientmats/derbynetclientmats.fail:jdbc4/_Suite.junit

Java Version:1.6.0-rc,JRE - JDBC: Java SE 6 - JDBC 4.0 , jars are
10.3.0.0 alpha - (439522)

The diffs looks like :
* Diff file derbyall/jdbc40/TestQueryObject.diff *** Start:
TestQueryObject jdk1.6.0-rc derbyall:jdbc40 2006-09-01 20:32:36 *** 0
add  java.sql.SQLException: Cannot insert new row into DataSet :
Class com.sun.sql.DataSetImpl can not access a member of class
org.apache.derbyTesting.functionTests.tests.jdbc4.TestData with
modifiers "private"  Caused by: java.lang.IllegalAccessException:
Class com.sun.sql.DataSetImpl can not access a member of class
org.apache.derbyTesting.functionTests.tests.jdbc4.TestData with
modifiers "private"  ... 14 more Test Failed.

  
  
This was a bug in the jdk. Fixed in build 97 or 98, I think.
  

build 98 is the lucky winner

  
  
  
* Diff file
derbyall/derbynetclientmats/DerbyNetClient/jdbc40/_Suite.diff ***
Start: _Suite jdk1.6.0-rc DerbyNetClient derbynetclientmats:jdbc40
2006-09-01 21:27:16 *** 0 add 
F.  There was 1 failure:  1)
testSetBinaryStreamLengthLessOnBlobTooLong(org.apache.derbyTesting.functionTests.tests.jdbc4.PreparedStatementTest)junit.framework.ComparisonFailure:
Unexpected SQL state. expected:22001 but was:58009  FAILURES!!! 
Tests run: 2048, Failures: 1, Errors: 0 Test Failed. *** End: _Suite
jdk1.6.0-rc DerbyNetClient derbynetclientmats:jdbc40 2006-09-01
21:27:39 *** * Diff file derbyall

  
  
I believe Andreas checked in a fix for this. Let's see... Yes,
DERBY-1800.

  
  
Have these issues been fixed the last few days.  If not, I can open
jiras for these issues.

  
  
Thank you for running these tests and reporting the results!

  





Re: installation issue: problems caused by spaces in path pointed to by DERBY_HOME or DERBY_INSTALL

2006-08-24 Thread Lance J. Andersen

Perhaps it is an XP or Win95 issue?

Andrew McIntyre wrote:

On 8/24/06, Rick Hillegas [EMAIL PROTECTED] wrote:

The following line in setEmbeddedCP.bat raises errors if there are
spaces in the path identified by DERBY_HOME or DERBY_INSTALL:

@FOR %%X in (%DERBY_HOME%) DO SET DERBY_HOME=%%~sX

Is this a known problem? I'm not a DOS expert and I don't
understand what %%~sX means.


That should convert the value of DERBY_HOME to its DOS 8.3 short form
to avoid further problems with the path containing spaces. This is
working for me on Windows 2000. What version of Windows are you using,
and what is the error you are getting?

andrew


Re: JDBC4 build failing for me

2006-08-22 Thread Lance J. Andersen




These changes were put back a while ago, and Rick made an integration
for this into the Derby codeline.

There will be one more change to Types.java, I believe, in order to
prevent a collision for a large number of users of a specific database
whose own types collide with the JDBC constant values. This is still in
the works, but I expect it for b98 of Java SE 6.

This issue was raised late Friday evening and we are still working
through it.

Daniel John Debrunner wrote:

  H, so Rick wrote on 8/11:

  
  
Build 95 should be the last Mustang version which changes JDBC signatures.

  
  
David wrote: (today)

  
  
Java(TM) SE Runtime Environment (build 1.6.0-rc-b96) 

  
  
Lance wrote: (today)

  
  

  There were changes in this area to the DatabaseMetaData and it looks
like this test might not have caught up to it.
  

  
  
So, do we have a new target build number for Mustang where it is
expected (hoped) that the JDBC 4.0 signatures, constants, classes etc.
do not change?

Dan.

  





Re: JDBC4 build failing for me

2006-08-21 Thread Lance J. Andersen

Hi David,

There were changes in this area to the DatabaseMetaData and it looks 
like this test might not have caught up to it.


-lance

David Van Couvering wrote:
I am running with the latest drop from the jdk 1.6 site, and I have 
the latest stuff from the trunk, and I am getting the following 
errors. These appear to be members of java.sql.DatabaseMetaData.  I am 
hoping someone familiar with this area of the code can provide some 
quick guidance...


Thanks,

David

===

[javac] 
/export/home/dv136566/derby/patch/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/jdbc4/JDBC40TranslationTest.java:39: 
cannot find symbol

[javac] symbol  : variable functionParameterUnknown
[javac] location: interface java.sql.DatabaseMetaData
[javac] 
assertEquals(DatabaseMetaData.functionParameterUnknown,

[javac]  ^
[javac] 
/export/home/dv136566/derby/patch/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/jdbc4/JDBC40TranslationTest.java:44: 
cannot find symbol

[javac] symbol  : variable functionParameterIn
[javac] location: interface java.sql.DatabaseMetaData
[javac] assertEquals(DatabaseMetaData.functionParameterIn,
[javac]  ^
[javac] 
/export/home/dv136566/derby/patch/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/jdbc4/JDBC40TranslationTest.java:49: 
cannot find symbol

[javac] symbol  : variable functionParameterInOut
[javac] location: interface java.sql.DatabaseMetaData
[javac] assertEquals(DatabaseMetaData.functionParameterInOut,
[javac]  ^
[javac] 
/export/home/dv136566/derby/patch/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/jdbc4/TestDbMetaData.java:99: 
cannot find symbol

[javac] symbol  : variable functionParameterUnknown
[javac] location: interface java.sql.DatabaseMetaData
[javac] DatabaseMetaData.functionParameterUnknown));
[javac]^
[javac] 
/export/home/dv136566/derby/patch/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/jdbc4/TestDbMetaData.java:101: 
cannot find symbol

[javac] symbol  : variable functionParameterIn
[javac] location: interface java.sql.DatabaseMetaData
[javac] DatabaseMetaData.functionParameterIn));
[javac]^
[javac] 
/export/home/dv136566/derby/patch/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/jdbc4/TestDbMetaData.java:103: 
cannot find symbol

[javac] symbol  : variable functionParameterInOut
[javac] location: interface java.sql.DatabaseMetaData
[javac] DatabaseMetaData.functionParameterInOut));
[javac]^
[javac] 
/export/home/dv136566/derby/patch/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/jdbc4/TestDbMetaData.java:105: 
cannot find symbol

[javac] symbol  : variable functionParameterOut
[javac] location: interface java.sql.DatabaseMetaData
[javac] DatabaseMetaData.functionParameterOut));
[javac]^
[javac] 
/export/home/dv136566/derby/patch/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/jdbc4/TestDbMetaData.java:157: 
cannot find symbol
[javac] symbol  : method 
getFunctionParameters(nulltype,nulltype,java.lang.String,nulltype)

[javac] location: interface java.sql.DatabaseMetaData
[javac] dumpRS(met.getFunctionParameters(null,null,"DUMMY%",null));
[javac]   ^
[javac] 
/export/home/dv136566/derby/patch/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/jdbc4/TestDbMetaData.java:160: 
cannot find symbol
[javac] symbol  : method 
getFunctionParameters(nulltype,nulltype,java.lang.String,java.lang.String) 


[javac] location: interface java.sql.DatabaseMetaData
[javac] dumpRS(met.getFunctionParameters(null,null,"DUMMY%",""));
[javac]   ^
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 9 errors

BUILD FAILED
/export/home/dv136566/derby/patch/trunk/build.xml:336: The following 
error occurred while executing this line:
/export/home/dv136566/derby/patch/trunk/java/testing/build.xml:75: The 
following error occurred while executing this line:
/export/home/dv136566/derby/patch/trunk/java/testing/org/apache/derbyTesting/functionTests/tests/jdbc4/build.xml:64: 
Compile failed; see the compiler error output for details.




Re: [jira] Commented: (DERBY-1654) Calling Connection.commit() does not throw exception in autocommit mode

2006-08-09 Thread Lance J. Andersen




This has been a requirement since JDBC 1.0, and there are no plans to
change this given the complexity of the autocommit issue across vendors.
It is expected that well-behaved applications will know what mode they
are in and invoke the correct methods.
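The guard such a well-behaved application needs is small. A minimal sketch follows; the dynamic-proxy stub standing in for a real driver Connection is purely illustrative, not part of any Derby or JDBC API:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;

public class CommitGuard {
    // Commit only when autocommit is off, per the JDBC requirement
    // that commit() not be called while autocommit is enabled.
    static void commitIfNeeded(Connection conn) throws SQLException {
        if (!conn.getAutoCommit()) {
            conn.commit();
        }
    }

    public static void main(String[] args) throws SQLException {
        final boolean[] committed = {false};
        // Illustrative stub: reports autocommit=true and records commit() calls.
        Connection conn = (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                (proxy, method, methodArgs) -> {
                    if (method.getName().equals("getAutoCommit")) {
                        return true;
                    }
                    if (method.getName().equals("commit")) {
                        committed[0] = true;
                    }
                    return null;
                });
        commitIfNeeded(conn);
        // With autocommit on, the guard skips commit() entirely.
        System.out.println("committed=" + committed[0]);
    }
}
```

With a real Connection the same helper lets generic code finish its work in either mode without tripping the spec-mandated exception.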

-lance

Daniel John Debrunner (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-1654?page=comments#action_12426938 ] 

Daniel John Debrunner commented on DERBY-1654:
--

I think the behaviour mandated by the spec is a usability problem. If I want to write a generic method that needs to commit or rollback the transaction at the end of its work, I have to have extra logic that checks to see if auto-commit is on. I've actually never understood the logic in disallowing these methods (commit & rollback) in auto-commit mode. Even with auto-commit true there are points in time in the user application where a transaction is active; if the application wants to commit/rollback that work, why is a different, non-intuitive API required (close the Statement and/or all of its ResultSets) rather than the obvious calls conn.commit() & rollback()?

  
  
Calling Connection.commit() does not throw exception in autocommit mode
---

Key: DERBY-1654
URL: http://issues.apache.org/jira/browse/DERBY-1654
Project: Derby
 Issue Type: Bug
 Components: JDBC
   Affects Versions: 10.1.3.1
   Reporter: Dyre Tjeldvoll
   Priority: Minor
Fix For: 10.3.0.0

Attachments: simple.java


The jdbc spec (don't know chapter and verse) states that an attempt to call Connection.commit() when Connection.getAutoCommit() is true, must throw an exception. The attached repro (simple.java) runs fine with Derby (and Postgres), but fails with MySQL:
[EMAIL PROTECTED]/java$ java -cp $CLASSPATH:. simple com.mysql.jdbc.Driver 'jdbc:mysql://localhost/joindb' joinuser joinpass foo4
java.sql.SQLException: Can't call commit when autocommit=true
at com.mysql.jdbc.Connection.commit(Connection.java:2161)
at simple.main(simple.java:17)

  
  
  





Re: [jira] Commented: (DERBY-1500) PreparedStatement#setObject(int parameterIndex, Object x) throws SQL Exception when binding Short value in embedded mode

2006-08-02 Thread Lance J. Andersen




I updated the JDBC 4 spec data conversion tables for Short and Byte, so
please add support for them when you can.

-lance

Markus Fuchs (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-1500?page=comments#action_12424627 ] 

Markus Fuchs commented on DERBY-1500:
-

This behavior is apparently implemented by many major database vendors, e.g. Oracle, SQL Server, Sybase, and the Derby network server. That it's undefined by the JDBC spec doesn't mean that it *must* not be provided for user convenience. 

  
  
PreparedStatement#setObject(int parameterIndex, Object x) throws SQL Exception when binding Short value in embedded mode


Key: DERBY-1500
URL: http://issues.apache.org/jira/browse/DERBY-1500
Project: Derby
 Issue Type: Bug
 Components: JDBC
   Affects Versions: 10.1.1.0, 10.1.3.1
Environment: WindowsXP
   Reporter: Markus Fuchs
Attachments: ShortTest.java


When trying to insert a row into the table 
SHORT_TEST( ID int, SHORT_VAL smallint)
an exception is thrown, if the object value given to PreparedStatement#setObject(int parameterIndex, Object x) is of type Short. The exception thrown is:
--- SQLException ---
SQLState:  22005
Message:  An attempt was made to get a data value of type 'SMALLINT' from a data value of type 'java.lang.Short'.
ErrorCode:  2
SQL Exception: An attempt was made to get a data value of type 'SMALLINT' from a data value of type 'java.lang.Short'.
	at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedConnection.newSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.ConnectionChild.newSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedPreparedStatement.dataTypeConversion(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedPreparedStatement.setObject(Unknown Source)
Tested on Derby 10.1.1.0 and 10.1.3.1. The same test runs fine in network mode.

  
  
  





Re: [jira] Commented: (DERBY-1500) PreparedStatement#setObject(int parameterIndex, Object x) throws SQL Exception when binding Short value in embedded mode

2006-07-31 Thread Lance J. Andersen




I am not sure why java.lang.Short is not covered in the conversion
tables in the JDBC spec. Probably an oversight or just a spec bug. It
is reasonable for it to be supported, since the other conversions are
supported.

I will discuss with the EG.
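Until drivers pick up the updated conversion tables, one portable workaround on the application side is to widen Short/Byte values before binding them. A small sketch; the normalize helper is hypothetical, not part of any Derby or JDBC API:

```java
public class BindNormalize {
    // Hypothetical helper: widen Short/Byte to Integer so that
    // PreparedStatement.setObject receives a type every driver's
    // conversion table already covers.
    static Object normalize(Object x) {
        if (x instanceof Short || x instanceof Byte) {
            return Integer.valueOf(((Number) x).intValue());
        }
        return x;
    }

    public static void main(String[] args) {
        // A Short is widened to Integer; other values pass through untouched.
        System.out.println(normalize(Short.valueOf((short) 7)).getClass().getSimpleName());
        System.out.println(normalize("unchanged").getClass().getSimpleName());
    }
}
```

Calling `ps.setObject(1, normalize(value))` would then bind an Integer, which embedded Derby already converts to SMALLINT.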

Markus Fuchs (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-1500?page=comments#action_12424627 ] 

Markus Fuchs commented on DERBY-1500:
-

This behavior is apparently implemented by many major database vendors, e.g. Oracle, SQL Server, Sybase, and the Derby network server. That it's undefined by the JDBC spec doesn't mean that it *must* not be provided for user convenience. 

  
  
PreparedStatement#setObject(int parameterIndex, Object x) throws SQL Exception when binding Short value in embedded mode


Key: DERBY-1500
URL: http://issues.apache.org/jira/browse/DERBY-1500
Project: Derby
 Issue Type: Bug
 Components: JDBC
   Affects Versions: 10.1.1.0, 10.1.3.1
Environment: WindowsXP
   Reporter: Markus Fuchs
Attachments: ShortTest.java


When trying to insert a row into the table 
SHORT_TEST( ID int, SHORT_VAL smallint)
an exception is thrown, if the object value given to PreparedStatement#setObject(int parameterIndex, Object x) is of type Short. The exception thrown is:
--- SQLException ---
SQLState:  22005
Message:  An attempt was made to get a data value of type 'SMALLINT' from a data value of type 'java.lang.Short'.
ErrorCode:  2
SQL Exception: An attempt was made to get a data value of type 'SMALLINT' from a data value of type 'java.lang.Short'.
	at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedConnection.newSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.ConnectionChild.newSQLException(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedPreparedStatement.dataTypeConversion(Unknown Source)
	at org.apache.derby.impl.jdbc.EmbedPreparedStatement.setObject(Unknown Source)
Tested on Derby 10.1.1.0 and 10.1.3.1. The same test runs fine in network mode.

  
  
  





Re: [jira] Commented: (DERBY-1516) Inconsistent behavior for getBytes and getSubString for embedded versus network

2006-07-31 Thread Lance J. Andersen






Craig Russell (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-1516?page=comments#action_12424673 ] 

Craig Russell commented on DERBY-1516:
--

Including the discussion from the alias referenced immediately above:

  
  

  One interesting test with 0 length is the case for getSubString(1,0) 
for a zero length lob. 
Should it throw an exception or return a zero length string? 
  

  
  
  
  
The API wording doesn't give much help to resolve this; the wording for the exception in JDK 1.6 is 

  
  
  
  
Throws: 
   SQLException - if there is an error accessing the CLOB value 

  
  
  
  
which I guess is equivalent to YMMV... A case for Lance? 

  
  
  
  
Even if this case is allowed, should it make a difference if position is > (length+1), e.g. getSubString(2,0) for an empty CLOB? 

  
  
Lance has not replied to a request to update the wording, and I think time is running out on this to be added to the specification in progress.
  

Actually that is not true; I did reply, if you check the archives. But
this issue is not on my radar, as I have other issues of higher
importance right now given my short window of opportunity due to the
Mustang schedule.


  
The jdbc spec has followed the java.lang.String spec pretty closely, modulo 1-origin vs. 0-origin indexing. The String spec allows accessing substrings of a 0-length string, as follows:

public String substring(int beginIndex,
int endIndex)
Returns a new string that is a substring of this string. The substring begins at the specified beginIndex and extends to the character at index endIndex - 1. Thus the length of the substring is endIndex-beginIndex...
IndexOutOfBoundsException - if the beginIndex is negative, or endIndex is larger than the length of this String object, or beginIndex is larger than endIndex.

For a zero-length String:
1. endIndex must be 0 or else endIndex would be larger than the length of the String;
2. beginIndex must be 0 or else beginIndex would be larger than endIndex.

Translating this to jdbc, for a zero-length Clob:
1. position must be 1;
2. length must be 0.

I agree we should add positive test cases to extract a zero-length substring from a Clob and Blob. 

I propose adding to clobTest2 and blobTest2 a test like: 
blobclob4BLOB.printInterval(clob, 1, 0, 7, i, clobLength) // zero length
blobclob4BLOB.printInterval(blob, 1, 0, 7, i, blobLength) // zero length
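The String analogy above can be checked directly. A minimal sketch:

```java
public class ZeroLengthSub {
    public static void main(String[] args) {
        String empty = "";
        // beginIndex == endIndex == 0 is legal on a zero-length String:
        // it returns a zero-length substring rather than throwing.
        System.out.println("len=" + empty.substring(0, 0).length());
        try {
            // endIndex beyond length() is the out-of-range case.
            empty.substring(0, 1);
        } catch (IndexOutOfBoundsException e) {
            System.out.println("out of range");
        }
    }
}
```

Translated to the 1-origin Clob API, getSubString(1, 0) on an empty Clob would correspond to the legal call, and getSubString(2, 0) to the out-of-range one.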


  
  
Inconsistent behavior for getBytes and getSubString for embedded versus network
---

Key: DERBY-1516
URL: http://issues.apache.org/jira/browse/DERBY-1516
Project: Derby
 Issue Type: Bug
 Components: JDBC
   Reporter: Craig Russell
Assigned To: Craig Russell
   Priority: Minor
Attachments: DERBY-1516.patch, DERBY-1516.patch, DERBY-1516.patch


org.apache.derby.client.am.Clob.getSubString(pos, length) and org.apache.derby.client.am.Blob.getBytes(pos, length) check the length for less than zero. 
if ((pos <= 0) || (length < 0)) {
throw new SqlException(agent_.logWriter_, "Invalid position " + pos + " or length " + length);
But org.apache.derby.impl.jdbc.EmbedClob(pos, length) and org.apache.derby.impl.jdbc.EmbedBlob(pos, length) check the length for less than or equal to zero.
   if (length <= 0)
throw Util.generateCsSQLException(
SQLState.BLOB_NONPOSITIVE_LENGTH, new Integer(length));
The specification does not disallow length of zero, so zero length should be allowed. I believe that the implementation in org.apache.derby.client.am is correct, and the implementation in org.apache.derby.impl.jdbc is incorrect. 

  
  
  





Re: Influencing the standards which govern Derby

2006-07-19 Thread Lance J. Andersen



Rick Hillegas wrote:
I would like to understand how the community influences the standards 
which govern Derby:


1) SQL - I've been participating in Derby for a year now. Over the 
past year I don't recall any discussion about a need to change the SQL 
standard. We have proposed new language in rare cases not covered by 
the ANSI volumes. However, I don't recall any attempt to contact the 
SQL group and try to change their spec. Do we need to influence this 
spec and if so, how do we propose to do so?


2) JDBC - There has been substantial discussion about the upcoming 
JDBC4 spec.. Fortunately for us, the spec lead is a member of our 
community. In several cases he has taken our viewpoint back to the 
JDBC expert group and advocated our position. However, we don't know 
who will lead the expert group for JDBC5. How do we expect to 
influence the next rev of JDBC?
I am already gathering input and working on my JDBC.next planning so you 
cannot get rid of me that easily Rick :-)


3) DRDA - Over the last year, I failed to get a Boolean datatype into 
the DRDA spec. This stemmed from the internal dynamics and 
pay-for-play nature of the spec's governing body, the DBIOP 
Consortium. How do we expect to influence the DRDA spec?


If there's a general solution which covers all of these cases, that's 
great. If we handle each spec differently, that's fine too. I'd just 
like some discussion and guidance.


Thanks,
-Rick


Re: Influencing the standards which govern Derby

2006-07-19 Thread Lance J. Andersen






Daniel John Debrunner wrote:

  Rick Hillegas wrote:

  
  
I would like to understand how the community influences the standards
which govern Derby:

1) SQL - I've been participating in Derby for a year now. Over the past
year I don't recall any discussion about a need to change the SQL
standard. We have proposed new language in rare cases not covered by the
ANSI volumes. However, I don't recall any attempt to contact the SQL
group and try to change their spec. Do we need to influence this spec
and if so, how do we propose to do so?

  
  
I've worked with the SQL group through IBM's representatives (since I
work for IBM). So far from Derby it's been more about getting
clarifications and pointing out areas where the spec is unclear or
wrong. I don't know how an individual would get involved in this process.

You could ask the Postgres folks what they do, or the generic open source
database mailing list - [EMAIL PROTECTED] .

  
  
2) JDBC - There has been substantial discussion about the upcoming JDBC4
spec.. Fortunately for us, the spec lead is a member of our community.
In several cases he has taken our viewpoint back to the JDBC expert
group and advocated our position. However, we don't know who will lead
the expert group for JDBC5. How do we expect to influence the next rev
of JDBC?

  
  
The ASF is on the JCP "Executive Committee for J2SE/J2EE", in addition
it seems individuals can join the JCP for $0.

http://jcp.org/en/participation/membership

So it seems plenty of opportunity to get involved in the next JDBC.
  


There is already one Apache member on the JDBC 4.0 EG. Input can also
go through the Sun/IBM reps, and of course I try to follow up as best I
can on items that I catch or get prompted on by folks on this alias.



  
  
  
3) DRDA - Over the last year, I failed to get a Boolean datatype into
the DRDA spec. This stemmed from the internal dynamics and pay-for-play
nature of the spec's governing body, the DBIOP Consortium. How do we
expect to influence the DRDA spec?

  
  
Do you have a summary of what happened? I remember e-mails that the
DBIOP was getting back together, and now your comments that the process
didn't work, but I don't recall seeing anything in between.

  
  
If there's a general solution which covers all of these cases, that's great. 
If we handle each spec differently, that's fine too. I'd just like some discussion and guidance. 

  
  
I would guess it's going to be different in each case.

Dan.

  





Re: [jira] Created: (DERBY-1540) JDBC 4 EoD with default QueryObjectGenerator fails with SecurityManager

2006-07-19 Thread Lance J. Andersen

Amit,

Didn't you fix this already?


Please see the attached

Daniel John Debrunner (JIRA) wrote:

JDBC 4 EoD with default QueryObjectGenerator  fails with SecurityManager


 Key: DERBY-1540
 URL: http://issues.apache.org/jira/browse/DERBY-1540
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.2.0.0
Reporter: Daniel John Debrunner


The test jdbc4/TestQueryObject runs without the security manager because the 
default QueryObjectGenerator uses reflection.
See  
trunk/java/testing/org/apache/derbyTesting/functionTests/tests/jdbc4/TestQueryObject_app.properties

Seems like a bug, but not sure of its cause or solution: Could be one (or none) 
of:

- Make changes in Derby code, e.g. add privilege blocks but don't see how this 
will solve anything as it's not Derby code that's calling the reflection and I 
don't see any javadoc comments in JDBC 4.0 about methods throwing 
SecurityExceptions.

- document the privileges required to use the EoD features, though not sure how 
we would document the ability to grant a privilege to system (JDK) code. Are 
these privileges documented in the JDBC spec?

- a bug in the Mustang beta, default query object not being treated as system 
code, no priv blocks in it?

- a limitation of the default  QueryObjectGenerator , cannot use with a 
security manager?

- a Derby test problem?

This is more of a tracking issue, with a dump of my thoughts.


  


Re: [jira] Created: (DERBY-1540) JDBC 4 EoD with default QueryObjectGenerator fails with SecurityManager

2006-07-19 Thread Lance J. Andersen
BTW, did you try this with Beta 2 of Mustang? I would be surprised if
this fails, as Rick worked with Amit on this earlier.




Lance J. Andersen wrote:

Amit,

Didn't you fix this already?


Please see the attached

Daniel John Debrunner (JIRA) wrote:

JDBC 4 EoD with default QueryObjectGenerator  fails with SecurityManager


 Key: DERBY-1540
 URL: http://issues.apache.org/jira/browse/DERBY-1540
 Project: Derby
  Issue Type: Bug
  Components: JDBC
Affects Versions: 10.2.0.0
Reporter: Daniel John Debrunner


The test jdbc4/TestQueryObject runs without the security manager 
because the default QueryObjectGenerator uses reflection.
See  
trunk/java/testing/org/apache/derbyTesting/functionTests/tests/jdbc4/TestQueryObject_app.properties 



Seems like a bug, but not sure of its cause or solution: Could be one 
(or none) of:


- Make changes in Derby code, e.g. add privilege blocks but don't see 
how this will solve anything as it's not Derby code that's calling 
the reflection and I don't see any javadoc comments in JDBC 4.0 about 
methods throwing SecurityExceptions.


- document the privileges required to use the EoD features, though 
not sure how we would document the ability to grant a privilege to 
system (JDK) code. Are these privileges documented in the JDBC spec?


- a bug in the Mustang beta, default query object not being treated 
as system code, no priv blocks in it?


- a limitation of the default  QueryObjectGenerator , cannot use with 
a security manager?


- a Derby test problem?

This is more of a tracking issue, with a dump of my thoughts.


  




Re: Problems in SQLBinary when passing in streams with unknown length (SQL, store)

2006-07-14 Thread Lance J. Andersen






Daniel John Debrunner wrote:

  Kristian Waagan wrote:
  
  
Hello,

I just discovered that we are having problems with the lengthless
overloads in the embedded driver. Before I add any Jiras, I would like
some feedback from the community. There are for sure problems in
SQLBinary.readFromStream(). I would also appreciate if someone with
knowledge of the storage layer can tell me if we are facing trouble
there as well.

SQL layer
=
SQLBinary.readFromStream()
  1) The method does not support streaming.
 It will either grow the buffer array to twice its size, or possibly
 more if the available() method of the input stream returns a
 non-zero value, until all data is read. This approach causes an
 OutOfMemoryError if the stream data cannot fit into memory.

  
  
I think this is because the maximum size for this data type is 255
bytes, so memory usage was not a concern.
SQLBinary corresponds to CHAR FOR BIT DATA, the sub-classes correspond
to the larger data types.

One question that has been nagging me is that the standard response to
why the existing JDBC methods had to declare the length was that the
length was required up-front by most (some?) database engines. Did this
requirement suddenly disappear? I assume it was discussed in the JDBC
4.0 expert group.
  

The new methods are optional in the spec, as some vendors do not require
the length, and this issue (requiring a length) is a constant complaint
from JDBC users.

We decided to add the methods, leaving them optional for now, so if you
do not support them you throw SQLFeatureNotSupportedException. 

  
I haven't looked at your implementation for this, but the root cause may
be that derby does need to verify that the supplied value does not
exceed the declared length for the data type. Prior to any change for
lengthless overloads the incoming length was checked before the data was
inserted into the store. I wonder if with your change it is still
checking the length prior to storing it, but reading the entire value
into a byte array in order to determine its length.

  
  
  2) Might enter endless loop.
 If the available() method of the input stream returns 0, and the
 data in the stream is larger than the initial buffer array, an
 endless loop will be entered. The problem is that the length
 argument of read(byte[],int,int) is set to 0. We don't read any
 more data and the stream is never exhausted.

  
  
That seems like a bug, available() is basically a useless method.

  
  
To me, relying on available() to determine if the stream is exhausted
seems wrong. Also, subclasses of InputStream will return 0 if they don't
override the method.
I wrote a simple workaround for 2), but then the OutOfMemoryError
comes into play for large data.
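A read loop that terminates correctly keys off the -1 returned by read() rather than available(). A minimal sketch follows; note it still buffers everything in memory, so it addresses only the endless-loop problem, not the OutOfMemoryError:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class DrainStream {
    // Read until EOF, growing the buffer as needed; loop on read()'s
    // return value, never on available(), which may legitimately
    // return 0 even when more data remains.
    static byte[] drain(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[100_000];
        // The loop exhausts the stream regardless of available()'s answers.
        System.out.println(drain(new ByteArrayInputStream(data)).length);
    }
}
```

True streaming to the store, without materializing the value, would additionally require the store-layer support Dan describes below.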


Store layer
===
I haven't had time to study the store layer, and know very little about
it. I hope somebody can give me some quick answers here.
  3) Is it possible to stream directly to the store layer if you don't
 know the length of the data?
 Can meta information (page headers, record headers etc.) be updated
 "as we go", or must the size be specified when the insert is
 started?

  
  
Yes the store can handle this.

Dan.

  





Re: Not forgiving non-portable applications - Was: Re: behavior of Statement.getGeneratedKeys()

2006-07-14 Thread Lance J. Andersen






Daniel John Debrunner wrote:

  Kathey Marsden wrote:

  
  
 Another similar case is  DERBY-1501 where it
would be nice if Derby were more forgiving of non-portable apps.  Of
course in both of those other cases we would just be adding to existing
support, not changing existing behavior, and there is a risk to apps
that develop on Derby and expect to be able to move without changes to
another db.

  
  
We need to be careful about being forgiving to non-portable
applications, part of Derby's charter is to allow applications to easily
migrate from Derby to other databases that follow the same standards.

With 1501 the JDBC spec says the type must be known (I think it's a bug
in the *draft* spec for the type to be ignored), that's the portable
behaviour, ignoring the type not only leads to non-portable applications
but also inconsistencies in derby. E.g. a NULL defined as a DATE could
be used for a BLOB value through JDBC, but not using SQL.
  

Can you help me here as to what the bug is that you are referring to? Too
many emails today to see the forest for the trees.

-lance

  
As extreme examples, should Derby be forgiving of non-portable MySQL
applications that insert NULLs into non-nullable columns, or SQLLite
applications that insert DATE values into INTEGER columns?

Following the standard closely (and helping clean-up the JDBC standard)
provides the clearest direction on these issues.

Dan.

  





Re: Not forgiving non-portable applications - Was: Re: behavior of Statement.getGeneratedKeys()

2006-07-14 Thread Lance J. Andersen






Daniel John Debrunner wrote:

  Lance J. Andersen wrote:

  
  

  With 1501 the JDBC spec says the type must be known (I think it's a bug
in the *draft* spec for the type to be ignored), that's the portable
behaviour, ignoring the type not only leads to non-portable applications
but also inconsistencies in derby. E.g. a NULL defined as a DATE could
be used for a BLOB value through JDBC, but not using SQL.
  
  

Can you help me here as to what the bug is that you are referring to? Too
many emails today to see the forest for the trees.

  
  
DERBY-1501

http://issues.apache.org/jira/browse/DERBY-1501

You're already on the case. :-)
  

Well that is good to know... 8-) 

  
Dan.

  





Re: [jira] Commented: (DERBY-1501) PreparedStatement#setNull(int parameterIndex, int sqlType) throws SQL Exception if given sqlType is LONGVARBINARY in embedded mode

2006-07-14 Thread Lance J. Andersen




I have removed the rogue sentence in its entirety from the javadocs for
setNull(int, int, String), as it is not needed and is not correct with
regard to typeCode.

-lance

Daniel John Debrunner (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-1501?page=comments#action_12420620 ] 

Daniel John Debrunner commented on DERBY-1501:
--

Knut Anders indicates

setNull(int,int,String)
 - If a JDBC driver does not need the type code or type name
  information, it may ignore it. 
setNull(int,int)
You must specify the parameter's SQL type.

Interesting, here the issue is about setNull(int,int) which doesn't have that comment about ignoring typeCode.
Could the omission be intentional and the wording in setNull(int,int,String) meant to be clearer, so that
one of typeCode or typeName could be ignored, but not both?

With setNull(1, Types.LONGVARBINARY) it is saying send a NULL of LONGVARBINARY to the engine,
the engine should then treat it the same as a cast of a LONGVARCHAR FOR BIT DATA to the target type.




  
  
PreparedStatement#setNull(int parameterIndex, int sqlType) throws SQL Exception if given sqlType is LONGVARBINARY in embedded mode
--

 Key: DERBY-1501
 URL: http://issues.apache.org/jira/browse/DERBY-1501
 Project: Derby
Type: Bug

  
  
  
  
Versions: 10.1.1.0
 Environment: WindowsXP
Reporter: Markus Fuchs
 Attachments: ByteArrayTest.java

When inserting a row into following table
BYTEARRAY_TEST( ID int, BYTEARRAY_VAL blob)
PreparedStatement#setNull(int parameterIndex, int sqlType) throws SQL Exception if given sqlType is LONGVARBINARY. You must give sqlType BLOB to make the insert work. The same test works using sqlType LONGVARBINARY in network mode. The following combinations don't work:
Column type   sqlType not working   mandatory sqlType
BLOB          LONGVARBINARY         BLOB
CLOB          LONGVARCHAR           CLOB
The issue here is that, first, Derby behaves differently in network and embedded mode. And secondly, it should accept LONGVARBINARY/LONGVARCHAR for BLOB/CLOB columns.
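Until the embedded driver accepts the long variants, an application that must run in both modes could map the rejected type codes onto the ones embedded Derby requires. A hypothetical shim based on the table in this report; portableNullType is illustrative, not a Derby API:

```java
import java.sql.Types;

public class NullTypeMap {
    // Hypothetical helper: map the LONG variants onto the LOB types
    // that embedded Derby accepts, so setNull works in both modes.
    static int portableNullType(int sqlType) {
        switch (sqlType) {
            case Types.LONGVARBINARY: return Types.BLOB;
            case Types.LONGVARCHAR:   return Types.CLOB;
            default:                  return sqlType;
        }
    }

    public static void main(String[] args) {
        // LONGVARBINARY is remapped; types outside the table pass through.
        System.out.println(portableNullType(Types.LONGVARBINARY) == Types.BLOB);
        System.out.println(portableNullType(Types.INTEGER) == Types.INTEGER);
    }
}
```

The call site would then read `ps.setNull(1, portableNullType(Types.LONGVARBINARY))`.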

  
  
  





Re: behavior of Statement.getGeneratedKeys()

2006-07-13 Thread Lance J. Andersen

I discussed this briefly at my JDBC EG meeting yesterday.

As I expected, all of the vendors on the call indicated that they return 
the same data type for the key returned via getGeneratedKeys() when the 
column is defined as identity. The consensus was that this is what a 
user would expect.


As to the unique key question posed by Dan, this is going to be an 
ongoing EG discussion, as some vendors do return identity column values 
in cases that are not unique (because the backend, like Derby, allows for 
it). This gets complex, as some vendors also in some cases currently 
support returning a ROWID (but that is a different scenario than using a 
defined column in the table).



The behavior of the JDBC methods on the ResultSet/ResultSetMetaData does 
not change.  Only the column type reported by ResultSetMetaData for the 
returned column will differ.


This issue pointed out a problem in the JDBC EoD RI which made the 
assumption that the value returned matched the column type in the base 
table.


A Derby user encountered this issue as well, trying to use 10.2 and JDBC 
EoD  http://binkley.blogspot.com/2006/04/nifty-jdbc-40.html.



HTH
-lance





Rick Hillegas wrote:

Hi Kathey,

Thanks for your responses. Some replies follow. Regards-Rick

Kathey Marsden wrote:


Rick Hillegas wrote:


I'd like to try to summarize where I think the discussion stands:

1) Lance, our JDBC expert, has confirmed that this is not a 
compliance problem. That means this is not a bug.


2) Lance would like to change the behavior of 
Statement.getGeneratedKeys(). Currently this method always returns a 
ResultSet whose column has the canonical type DECIMAL( 31, 0). He 
would like this method to return a ResultSet whose column type 
changes depending on the type of the actual autogenerated column in 
the affected table; that is, the column could have type SMALLINT, 
INT, or BIGINT.


3) It does not seem that this change would have a very big impact on 
customers. At least, we have not been able to imagine how this would 
impact customers adversely. However, this is just theory and we have 
not polled the user community yet.



We not only have not polled the user community, we do not have 
anything we can poll them with yet.  getGeneratedKeys returns a 
result set.  Users will call certain methods on  that  ResultSet and 
the return values will be different.   We need to define what those 
are and the potential impact.  Then we map them to the user symptom 
and then we can define scenarios that might be affected.  If it is 
important  that we break our current documented behavior we have to 
take these painful steps to assess  risk.  A vague poll without 
understanding  the possible impact ourselves and presenting it 
clearly is not effective  or fair to the user base as we found with 
DERBY-1459.
Can you please complete the list below with any other changes  in the 
result set returned by getGeneratedKeys or  confirm that there are no 
other calls impacted?  Let's not include the likelihood of each happening 
yet.  We just want to understand what has changed and what symptoms 
users might see.
I agree that, with what we have so far, the risk is low, but we need to go 
through the whole exercise.  How has the returned result set 
changed?  What symptoms might users see?  Define user scenarios and 
risk.  Then poll the user community.


Certainly there would be  these changes for the ResultSet returned by 
getGeneratedKeys():


o  getMetaData()  would  correspond to the ResultSetMetadata of the 
base table column and so will have different types, columnwidths etc, 
so formatting and other decisions based on this information may be 
affected.


Agreed.

o  getObject()  would  return a different type and applications 
making casts based on the assumption it is a BigDecimal  may see cast 
exceptions or other problematic behavior because of this assumption.


Agreed.

o getString()  would return a different String representation which  
might  be problematic if a particular format was expected and  parsed.


This doesn't appear to be true for the small integers with which I've 
experimented. Are there problems in the toString() methods of 
BigDecimal and (perhaps) Derby's j2me decimal object?
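Rick's observation can be checked in plain Java, independent of Derby: for integral values, a scale-0 BigDecimal (standing in for the DECIMAL(31,0) key) renders exactly the same string as the native integer types, so getString() output should not differ. A small sketch:

```java
import java.math.BigDecimal;

public class StringRendering {
    // True when a scale-0 BigDecimal renders identically to the
    // native long rendering of the same value.
    static boolean sameRendering(long v) {
        return new BigDecimal(v).toString().equals(Long.toString(v));
    }

    public static void main(String[] args) {
        System.out.println(sameRendering(42));           // true
        System.out.println(sameRendering(-7));           // true
        System.out.println(sameRendering(2147483648L));  // true, beyond int range
    }
}
```

This supports the claim that getString() is unaffected for identity values; only a BigDecimal with a non-zero scale or exponent notation would render differently, which DECIMAL(31,0) keys never have.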




Might other ResultSet methods be affected?  For instance, 
would getInt(), getLong(), getShort() etc. all still work as they 
did before and return the same values?


They should.





So what do we think?

A) Does anyone prefer the current behavior over Lance's proposed 
behavior?




Only in that it saves a lot of time in risk assessment, and not 
changing it avoids setting a precedent of changing documented, 
compliant behaviour to something else with no really significant 
benefit to users, but rather just for the sake of tidiness and the 
convenience of writing code that is not guaranteed to be 
portable to other JDBC drivers.

Re: behavior of Statement.getGeneratedKeys()

2006-07-13 Thread Lance J. Andersen




I think it can be improved, but the javadocs indicate (for executeUpdate)
that the array is ignored if the statement is not able to return an
autogenerated key, and getGeneratedKeys says it will return an empty
ResultSet if it cannot return generated keys.



Daniel John Debrunner wrote:

  Lance J. Andersen wrote:

  
  
I discussed this briefly at my JDBC EG meeting yesterday.

As I expected, all of the vendors on the call indicated that they return
the same data type for the key returned by getGeneratedKeys() when the
column is defined as identity.  The consensus was that this
is what a user would expect.

As to the unique key question posed by Dan, this is going to be an
ongoing EG discussion, as some vendors do return identity column values
in cases that are not unique (because the backend, like Derby, allows for
it), which gets complex as some vendors also in some cases support
returning a ROWID currently (but this is a different scenario than
using a defined column in the table).

  
  
Beyond that, it's also unclear what a driver should do if the application
requests columns in the ResultSet that are not generated or identity
columns.

E.g. with the example in section 13.6 of JDBC 4 what is the expected
behaviour if the column list is any of:

 {"ORDER_ID", "ISBN"}
 {"ISBN"}
 {"ORDER_DATE"}
 {"ORDER_ID", "ORDER_DATE"}

where ORDER_DATE is a column that has a default of CURRENT_DATE (ie.
value not provided by the INSERT).

Dan.





  





Re: [jira] Commented: (DERBY-1501) PreparedStatement#setNull(int parameterIndex, int sqlType) throws SQL Exception if given sqlType is LONGVARBINARY in embedded mode

2006-07-13 Thread Lance J. Andersen




I am not sure why the wording was added to the overloaded setNull 
method, which was introduced in JDBC 3.

I probably would expect it not to ignore the specified SQL type, in
order to make sure the action requested is valid.  I would have to
check the SQL standard and discuss this with the EG further, but it is
something else to try and clean up; I have added it to my ever-growing
to-do list.


Daniel John Debrunner (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-1501?page=comments#action_12420620 ] 

Daniel John Debrunner commented on DERBY-1501:
--

Knut Anders indicates

setNull(int,int,String)
 - If a JDBC driver does not need the type code or type name
  information, it may ignore it. 
setNull(int,int)
You must specify the parameter's SQL type.

Interesting, here the issue is about setNull(int,int) which doesn't have that comment about ignoring typeCode.
Could the omission be intentional and the wording in setNull(int,int,String) meant to be clearer, so that
one of typeCode or typeName could be ignored, but not both?

With setNull(1, Types.LONGVARBINARY) it is saying send a NULL of LONGVARBINARY to the engine,
the engine should then treat it the same as a cast of a LONGVARCHAR FOR BIT DATA to the target type.




  
  
PreparedStatement#setNull(int parameterIndex, int sqlType) throws SQL Exception if given sqlType is LONGVARBINARY in embedded mode
--

 Key: DERBY-1501
 URL: http://issues.apache.org/jira/browse/DERBY-1501
 Project: Derby
Type: Bug

  
  
  
  
Versions: 10.1.1.0
 Environment: WindowsXP
Reporter: Markus Fuchs
 Attachments: ByteArrayTest.java

When inserting a row into following table
BYTEARRAY_TEST( ID int, BYTEARRAY_VAL blob)
PreparedStatement#setNull(int parameterIndex, int sqlType) throws SQL Exception if given sqlType is LONGVARBINARY. You must give sqlType BLOB to make the insert work. The same test works using sqlType LONGVARBINARY in network mode. The following combinations don't work:
Column type | sqlType not working | mandatory sqlType
BLOB        | LONGVARBINARY       | BLOB
CLOB        | LONGVARCHAR         | CLOB
The issue here is twofold: first, Derby behaves differently in network and embedded mode; second, it should accept LONGVARBINARY/LONGVARCHAR for BLOB/CLOB columns.

  
  
  





Re: behavior of Statement.getGeneratedKeys()

2006-07-13 Thread Lance J. Andersen



Kathey Marsden wrote:

Lance J. Andersen wrote:

This issue pointed out a problem in the JDBC EoD RI which made the 
assumption that the value returned matched the column type in the 
base table.


A Derby user encountered this issue as well, trying to use 10.2 and 
JDBC EoD  http://binkley.blogspot.com/2006/04/nifty-jdbc-40.html.
Well, it appears that the behavior in Derby was copied from the IBM DB2 
driver, I am afraid, which did not come up on my EG call discussion 
yesterday as a difference in behavior; that can happen when a difference 
is not specifically tested for.  Sadly, nothing is ever easy, is it...







So here is a  benefit.  The change  may ease migration to Derby for 
apps that make this assumption.

It would help with some databases such as Oracle for sure.

  I hit a similar thing recently that Derby
Clob.getSubString does not support a zero offset and DDLUtils  
expected it to.  (That one is still on my list to file.  I don't know 
yet if that is a Derby bug or not. )   Another similar case is  
DERBY-1501 where it would be nice if Derby were more forgiving of 
non-portable apps.  Of course, in both of those other cases we would 
just be adding to existing support, not changing existing behavior, and 
there is a risk to apps that develop on Derby and expect to be able 
to move without changes to another db.


Anyway I think if you would like to make this change it would be 
reasonable to file a Jira issue and pursue due diligence with the user 
community.
Understand, the original intent of this thread was also to try and 
understand why this behavior was there, and now I know.
I'll get  in touch with some of the users I work with and see if it 
might be an issue, but if limited to what has been outlined so far I 
tend to think it won't conflict with most typical usage cases.   I 
think that basically folks are going to be calling getLong() or 
getInt() on the  ResultSet returned and not getObject.  If they are 
looking at the metadata they are expecting it to be as you describe.  
But I will wait until we hear more. My biggest concerns with the 
change are:


1) The precedent it sets: that we can change compliant, documented 
behaviour like this.  But reading the ForwardCompatibility goal I 
feel reassured that maybe this is ok.


The goal is to allow any application written against the public 
interfaces of an older version of Derby to run, without any changes, 
against a newer version of Derby.


Maybe, though, the ForwardCompatibility guidelines should have 
information on due diligence when making an incompatible change.


2) The potential high risk and impact of the code change for 
client/server  as outlined in my earlier mail.


Kathey



Re: behavior of Statement.getGeneratedKeys()

2006-07-11 Thread Lance J. Andersen



It's not entirely clear to me that Derby is not compliant.
  
I do not believe I indicated it was or was not compliant; my point was 
that the data type is not what I would expect to be returned in this scenario.

The ResultSetMetaData does correctly describe the number, type and
properties of the generated keys, i.e. it describes the ResultSet
correctly. One could say Derby always generates keys internally as of
type DECIMAL(31, 0) and that is what getGeneratedKeys() returns, but
when it is stored in a column it is mapped to the specific type of that
column.
  
The question is why you decided to return a data type other than 
the type of the column as it was defined.  This is a more
useful thing to discuss instead of just trying to find ways to say it 
is not a Derby problem and Derby is correct.



The spec for the getGeneratedKeys() has always been too vague.
  
I won't deny that this could be written better, but this was done before 
my time and in reality, i do not think you will find a spec out there 
which does not have areas for clarification.
There is a process for requesting improvements to a spec: one way is 
to log a bug via java.sun.com for a Sun-led spec, or send an email 
to the comments alias for the spec in question.  This

would be the best way to get clarifications made to the specs.




One could even argue that Derby should not return anything because Derby
does not have a mechanism to generate a *unique* key field (see first
sentence of 13.6 in JDBC 4). Or maybe Derby should only return values
when the column is generated and is defined to be the sole column for a
primary key constraint (and also unique constraint?).
  
Again, this is the original wording going back to JDBC 3.  I can look 
at trying to clarify this a bit for the final version of JDBC 4, but not 
for the PFD.  No promises at this time.


This still does not answer why DECIMAL(31,0) was chosen as the return type 
in all cases.  Understanding this question would help in 
determining what action, if any, is needed.

I guess I'm still curious to the benefit of changing it and am
interested to see if a proposed fix adds or removes complexity (and for
what value).
  
The one benefit is consistency: you define an Integer, I get an Integer 
back which represents what was stored in the database.



On the flip side, what is the real risk in changing the behavior?

Whatever you decide for Derby, the Derby documentation needs to be 
clarified as well.


We all want Derby to be successful and be the best product it can be.

Dan.


  


Re: JDBC reference implentation

2006-07-11 Thread Lance J. Andersen

No it is not.

Kathey Marsden wrote:
Is Derby, JavaDB or something else  the JDBC reference implementation 
these days?





Re: behavior of Statement.getGeneratedKeys()

2006-07-11 Thread Lance J. Andersen



Rick Hillegas wrote:

Kathey Marsden wrote:


Rick Hillegas wrote:



1) Is this a bug? Should Statement.getGeneratedKeys() return a
ResultSet whose column has the same type as the underlying
autogenerated column?

Reading from the JDBC 3.0 and JDBC  4.0 spec it seems clear to me 
that we are not compliant and if non-compliance is a bug, this is  a 
bug.   The spec says: Calling ResultSet.getMetaData on the ResultSet 
object returned by getGeneratedKeys will produce a ResultSetMetaData 
object that can be used to determine the number, type and properties 
of the generated keys.



2) If this is a bug, is it permitted to change this behavior in a
minor release?



Of course debate continues, but I think it would first be good to 
objectively assess what JDBC calls might be affected.  Perhaps 
whoever is considering making this change could do a thorough 
analysis and present it to the community. After that we could use 
this issue as a test case for  our goal  at 
http://wiki.apache.org/db-derby/ForwardCompatibility  as we look at 
potential risk and what level of consultation is needed with the user 
community for the change and when it is appropriate.  It should be a 
good test as our current documented behavior and the spec are at odds.


Hi Kathey,

I'm muddled. The affected JDBC call is Statement.getGeneratedKeys(). I 
don't think anyone is proposing to change the behavior of any other 
JDBC call. But your question makes me anxious. Why do you think other 
JDBC calls are affected?


This is the only method in play here.  However, it does return a 
ResultSet, which allows you to get the ResultSetMetaData.  The behavior 
would not change, just that it returns the correct type for the returned 
column, which would then correctly match the column in the base table.




Kathey






Re: behavior of Statement.getGeneratedKeys()

2006-07-10 Thread Lance J. Andersen
To me this is a problematic issue as i would expect the return type for 
the keys to match the datatype of the column.


Rick Hillegas wrote:
I would like the community's advice on whether the following Derby 
behavior is a bug and, if so, whether we would be allowed to change 
this behavior for 10.2:


A) Currently, Derby knows how to automatically generate values for 
columns of type SMALLINT, INT, and BIGINT. You get this behavior if 
you declare the column with this clause: generated {always | by 
default} as identity.


B) You can retrieve autogenerated values using the 
Statement.getGeneratedKeys() call. This call returns a ResultSet with 
a DECIMAL column. That is, the autogenerated keys come back as DECIMAL 
even though they actually appear in the table as SMALLINT, INT, or 
BIGINT.


This seems a bit odd. One might expect that the returned keys would 
have the same datatype as the actual autogenerated value in the table. 
However, technically the javadoc for Statement.getGeneratedKeys() 
doesn't specify the shape of the ResultSet and we don't lose any 
precision. You can retrieve the value in the database by calling the 
appropriate getXXX() method on the DECIMAL result returned by 
Statement.getGeneratedKeys().
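The compatibility question being debated reduces to a small pure-Java sketch (no database needed; the BigDecimal stands in for the DECIMAL(31,0) value Derby returns today, the Integer for the proposed column-typed value):

```java
import java.math.BigDecimal;

public class GeneratedKeyCast {
    // Numeric access through the Number interface works regardless of
    // the concrete type the driver hands back.
    static int viaNumber(Object key) {
        return ((Number) key).intValue();
    }

    // A hard cast written against today's DECIMAL behavior breaks if
    // the driver starts returning the column's native type (and vice versa).
    static boolean hardCastFails(Object key) {
        try {
            Integer i = (Integer) key;
            return false;
        } catch (ClassCastException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        Object keyToday = new BigDecimal(7);      // what getObject() yields today
        Object keyProposed = Integer.valueOf(7);  // what it would yield for an INT identity column

        System.out.println(viaNumber(keyToday));          // 7
        System.out.println(viaNumber(keyProposed));       // 7
        System.out.println(hardCastFails(keyToday));      // true
        System.out.println(hardCastFails(keyProposed));   // false
    }
}
```

This is why the thread concludes the risk is low for apps calling getInt()/getLong(), and concentrated in apps that cast getObject()'s result to a specific class.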


Before filing a bug on this, I'd like the community's advice:

1) Is this a bug? Should Statement.getGeneratedKeys() return a 
ResultSet whose column has the same type as the underlying 
autogenerated column?


2) If this is a bug, is it permitted to change this behavior in a 
minor release?


Thanks,
-Rick


Re: behavior of Statement.getGeneratedKeys()

2006-07-10 Thread Lance J. Andersen



Kathey Marsden wrote:

Rick Hillegas wrote:


Hi Kathey,

My gut feeling is that changing this behavior could affect 
applications like ij which make formatting decisions based on the 
JDBC types of returned columns.
If you return the correct column type of the base type, then the 
formatting would be correct.


I agree, but I am not sure yet how significant that impact might be. 
I'd like translate it into exactly what JDBC calls will have different 
behavior in order to more accurately assess the risk in  typical usage 
scenarios.


Certainly there are these changes for the ResultSet returned by 
getGeneratedKeys():


o  getMetaData()  would  correspond to the ResultSetMetadata of the 
base table column and so will have different types, columnwidths etc, 
so formatting and other decisions based on this information may be 
affected.
Portable code would adjust accordingly to the correct width.  This is 
what a tool would do.
o  getObject()  would  return a different type and applications making 
casts based on the assumption it is a BigDecimal  may see cast 
exceptions or other problematic behavior because of this assumption.
Or because you are returning a BigDecimal when someone is not expecting 
it, you also have problematic behavior.  This fact is buried in the 
Derby docs currently.
o getString() would return a different String representation, which 
might be problematic if a particular format was expected and parsed.
It would be bad user code to depend on getString returning a consistent 
format for a numeric value of any type.


Might other ResultSet methods be affected?  For instance, would 
getInt(), getLong(), getShort() etc. all still work as they did 
before and return the same values?

Of course there is the other question still outstanding:

Why would these methods be affected?  They have nothing to do with this.

What  usage cases would *benefit*  from changing it?  This is 
important,  because if there is no real benefit to changing it  why 
take the risk at all?
Well, the issue is that this behavior is unexpected: you are 
returning a different data type than the column in the table. 





Kathey




Re: behavior of Statement.getGeneratedKeys()

2006-07-10 Thread Lance J. Andersen







  
These seem reasonable. On the other hand, using getGeneratedKeys() to
determine the type or name of a table's generated key seems very, very
unlikely to me. It would require that the application/tool can insert a
row of the correct shape; if the app/tool can do that, then it probably
already knows the column's definition. Also, an auto-discovering tool
seems unlikely to use getGeneratedKeys, as it would require perturbing
the database by having a successful INSERT (though maybe an INSERT INTO
T(a,b,c) SELECT a,b,c FROM T WHERE 1=0 might work). Seems way too
contrived, though, to say getGeneratedKeys() is a useful tool for
gathering metadata.

Dan.
  

You are correct; a tool would use DatabaseMetaData and expect the
column type returned from getGeneratedKeys() to match the column
defined in the table for the auto-generated key in the case of an
identity column. 




Re: behavior of Statement.getGeneratedKeys()

2006-07-10 Thread Lance J. Andersen






Daniel John Debrunner wrote:

  Lance J. Andersen wrote:

  
  
Kathey Marsden wrote:



  Rick Hillegas wrote:
  

  
  
  
  

  Certainly there are these changes for the ResultSet returned by
getGeneratedKeys():

o  getMetaData()  would  correspond to the ResultSetMetadata of the
base table column and so will have different types, columnwidths etc,
so formatting and other decisions based on this information may be
affected.
  

Portable code would adjust accordingly to the correct width.  This is
what a tool would do.



  o  getObject()  would  return a different type and applications making
casts based on the assumption it is a BigDecimal  may see cast
exceptions or other problematic behavior because of this assumption.
  

Or because you are returning a BigDecimal when someone is not expecting
it, you also have problematic behavior.  This fact is buried in the
Derby docs currently.

  
  
Well portable code would adjust accordingly to the type of the returned
object. :-)
  

Possibly, but if getColumns() tells me that the column is defined as
an Integer in the table, then it is also reasonable to expect
getGeneratedKeys() to return me an Integer.

  
Dan.



  





Re: Revised 10.2 plan for uncoupling the Mustang and Derby release trains

2006-06-30 Thread Lance J. Andersen

July towards the middle (after the Sun shutdown)

Andrew McIntyre wrote:

On 6/30/06, Rick Hillegas [EMAIL PROTECTED] wrote:


1) Lance will publish his proposed final draft (PFD) of the JDBC4 spec
under the early access license.


Is there a rough time frame for this, considering it's the gating
factor for getting the release train rolling? Is it safe to assume
that this will happen before August 11? Or is there a chance that the
generation of the first release candidate may be later than August 11
if the PFD is not posted by then?

+1 to the rest.

andrew


Re: catch-22: Derby, Mustang, and JCP issue

2006-06-23 Thread Lance J. Andersen






Geir Magnusson Jr wrote:

  
Andrew McIntyre wrote:
  
  
On 6/23/06, Daniel John Debrunner [EMAIL PROTECTED] wrote:


  
In #2 of his proposed solution, Geir said he doesn't believe that
Derby qualifies as an implementation, and thus would not be affected
by the JSPA.

  
  I thought Geir's proposed solution was predicated on item 1)

Geir wrote:
  
  
1) Have Sun change the draft spec license for 221 from the current to
the new one that allows distribution with appropriate warning markings.
 I'm going to start working this line w/ the PMO and the JCP.

  
  Until the licence is changed we cannot ship a GA version of Derby with
JDBC 4.0 code.
  

Then I'm confused, if we're not an implementation, thus not subject to
section 5 of the terms in the JSPA, and the copyright concerns w/r/t
the evaluation license are not an issue for us, then why does the spec
draft license need to change? Can somebody spell that out for me?

  
  
Derby isn't an implementation, but there is a small piece that
implements the JDBC4 spec.

  
  
It certainly seems like changing the spec license is the right thing
to do to make everybody happy. So, can someone from Sun or JCP please
confirm that the draft spec license will in fact be changed?

  
  
I've made the request formally.  As I said in a follow-up, the solution
that will be easier will be a permissive license for the upcoming
proposed final draft.
  

Geir and I have spoken, and I have also discussed it internally; we
are going to look at updating the license for the PFD.

  
I guess that, yes, we still cannot ship a GA version of Derby with the
JDBC 4 until another draft of the spec is posted with the new license
attached.

andrew



  





Re: Proposal for 10.2 release schedule

2006-06-22 Thread Lance J. Andersen






Daniel John Debrunner wrote:

  Andrew McIntyre wrote:

  
  
Call in the lawyers:



  From JSPA - 2.0.1 10 January 2005 [1], which presumably the ASF board
  

has executed, being a JCP Member (they've even got quotes from Geir
prominently featured on their "about JCP 2.6 page" [2]):

5.B. License to Create Independent Implementations.

  
  
Dumb question, is Derby:

 - creating an independent implementation of JSR221
 - or is it implementing a driver that adheres to JSR221?

I would say Apache Harmony (when/if they tackle Jave SE 6) would be
creating an independent implementation of JSR221 and that Derby is not.
  

You cannot have a GA version of a JDBC 4 driver until JSR 221 goes
final.

The Derby Embedded and Network Client drivers provide implementations
of the JDBC drivers based on JSR 221.


A Java SE implementation provides the interfaces and concrete classes
that are used by a JDBC driver for the given Java SE implementation.

JSR 221 falls under the umbrella spec for Java SE 6. They all go final
together.

  
Dan.




  





Re: Proposal for 10.2 release schedule

2006-06-22 Thread Lance J. Andersen



David Van Couvering wrote:

Lance J. Andersen wrote:
  
You cannot have a GA version of a JDBC 4 driver until JSR 221 goes 
final.


Are you *sure* you can't *have* a GA version, e.g the bits can't exist 
somewhere, as long as they're not officially declared generally 
available?  If we can't even create the bits, then it is physically 
and logically impossible for us to give anything to the JDK team for 
integration.
I think we are talking about different things here.  You are talking about 
getting the final version of your product ready to be released based on 
a JSR which is getting ready to go final, which is fine; that is 
different from what I was trying to say.


http://jcp.org/en/resources/guide#fab  gives you an overview of how a 
JSR goes final.







Personally, I think we need to get clarification from the JCP folks on 
this before we make any final conclusions about this.


Thanks,

David



Re: [PRE-VOTE DISCUSSION] Compatibility rules and interface table

2006-06-21 Thread Lance J. Andersen

hi guys

Rick Hillegas wrote:

Hi David,

I had a couple more comments on the compatibility commitments. 
Cheers-Rick


 - Changes to stored procedures: We will have to change column order if
  we add Derby-specific columns to a metadata ResultSet and then a later
  JDBC also adds more columns.
Any vendor-specific columns added should only be accessed via column 
name, and you should document that.


we did clarify this in the JDBC spec


 - Changes to Database Tables: We should be allowed to drop indexes
  on System tables.

 - Changes to Command Line Interfaces. I don't understand why error
   message text can't be changed. This contradicts what is said
   in the Interface Table below.

 - Other miscellaneous formats. I'm not clear on what these miscellaneous
   files and strings are. For example, I'd like to make sure that we're
   not enshrining the current RUNTIMESTATISTICS output.

 - Interface table:

   o Shouldn't the public client api be stable like the embedded api?

   o What is meant by Defaults returned by DatabaseMetadata
   methods?

   o I think that the format of RUNTIMESTATISTICS output is unstable.


David Van Couvering wrote:

Hi, all.  I am thinking of setting up two separate votes based on the 
Wiki page at


http://wiki.apache.org/db-derby/ForwardCompatibility

The first one would be on our overall model/approach to making 
compatibility commitments, as described in the Wiki page.


The second would be specifically for the interface table, targeted 
at the 10.2 release.


The reason for separating these out is because, for each release, we 
should update the interface table and have a new vote; the overall 
model/approach does not need to be updated or voted on for each release.


I would copy the appropriate text directly into the email for the 
vote, so that the thing we're voting on is a frozen snapshot, not a 
live document like the Wiki page.


I'd like your feedback on this approach.  I'd also like to make sure 
there aren't any lingering issues with the Wiki page as it stands, 
before I go through the process of running a vote.


Thanks,

David





Re: [jira] Commented: (DERBY-1341) LOB setBytes method(s) are currently no supported, but part of the Java 1.4 JDBC interface

2006-06-02 Thread Lance J. Andersen






Anurag Shekhar (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-1341?page=comments#action_12414483 ] 

Anurag Shekhar commented on DERBY-1341:
---

I was wrong about the lifetime of a lob. It is supposed to be restricted to the transaction (jdbc 3.0 section 16.3.1)
  

For locator-based LOBs that would be true.  However, if it is a copy, it
could well live past the transaction.  This has been clarified in the
JDBC 4 spec.

  
Yes, it's the model where DatabaseMetaData.locatorsUpdateCopy() will return true (updates are made on a copy).

I am following the thread and plan to be consistent with the client driver's behaviour unless it's concluded otherwise in that thread.

Initially memory may be sufficient to hold the array the user sets. But the user may call setBytes multiple times, resulting in a huge array which may be stored in memory. The same is true when the user is writing to the output stream.

  
  
LOB setBytes method(s) are currently no supported, but part of the Java 1.4 JDBC interface
--

 Key: DERBY-1341
 URL: http://issues.apache.org/jira/browse/DERBY-1341
 Project: Derby
Type: Bug

  
  
  
  
  Components: JDBC
Versions: 10.0.2.0, 10.0.2.1, 10.0.2.2, 10.1.1.0, 10.2.0.0, 10.1.2.0, 10.1.1.1, 10.1.1.2, 10.1.2.1, 10.1.3.0, 10.1.2.2, 10.1.2.3, 10.3.0.0, 10.1.2.4, 10.1.2.5
 Environment: Windows 2000
Reporter: Keith McFarlane
Assignee: Anurag Shekhar

  
  
  
  
 JDBC LOB.getBytes methods are not implemented in any Derby version to date: there is a "place-holder" method that throws a SQLException reporting that the methods are not implemented.
It would be excellent to have an efficient Derby implementation of the getBytes LOB methods that provides "random-access" to the binary/character content of database large objects. The specific context is implementing a Lucene Directory interface that stores indexing data (index files) and other binary data in a local encrypted Derby instance. 
 A workaround is to write an encrypted RandomAccessFile implementation as a file-system buffer, perhaps writing to the database on closure. An efficient Derby implementation of LOB.getBytes would avoid this and make for a clean design. I can think of several reasons why random-access to LOBs would be valuable in a "hostile" client environment. 

  
  
  





Re: [jira] Commented: (DERBY-1286) Fill in Clob methods required for JDBC3 compliance

2006-05-25 Thread Lance J. Andersen

this is in the javadocs for jdbc 4.0

Andrew McIntyre wrote:

On 5/24/06, Lance J. Andersen [EMAIL PROTECTED] wrote:


 This is what we discussed in the EG and agreed to in this regards

 consider a Clob, aClob,  containing the following value for each
 setString() invocation below.

 ABCDEFG
A. aClob.setString(2, XX)

Result:  AXXDEFG

B. aClob.setString(1, XX)

Result:  XXCDEFG


The fact that these are one-indexed instead of zero-indexed seems like
a really good thing to mention in the javadoc for these methods.

my $.02,
andrew
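The one-based indexing in Lance's EG examples can be modeled in plain Java (a sketch over a String standing in for the Clob's contents; no database involved):

```java
public class ClobSetStringModel {
    // Models Clob.setString(pos, s): pos is 1-based, and s overwrites
    // s.length() characters starting at that position.
    static String setString(String clob, int pos, String s) {
        StringBuilder sb = new StringBuilder(clob);
        sb.replace(pos - 1, pos - 1 + s.length(), s);
        return sb.toString();
    }

    public static void main(String[] args) {
        // The two EG examples over "ABCDEFG":
        System.out.println(setString("ABCDEFG", 2, "XX")); // AXXDEFG
        System.out.println(setString("ABCDEFG", 1, "XX")); // XXCDEFG
    }
}
```

As Andrew notes, the one-based convention (which matches JDBC's parameter and column indexing elsewhere) is exactly the kind of detail worth spelling out in the javadoc.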


Re: [jira] Created: (DERBY-1316) Wrong value returned by DatabaseMetaData.locatorsUpdateCopy()

2006-05-11 Thread Lance J. Andersen
This method was not added in JDBC 4; it was added in JDBC 3. (I would 
have picked a more descriptive name, as I find the name a bit confusing, 
as do others.)


Rick Hillegas (JIRA) wrote:

Wrong value returned by DatabaseMetaData.locatorsUpdateCopy()
-

 Key: DERBY-1316
 URL: http://issues.apache.org/jira/browse/DERBY-1316
 Project: Derby
Type: Bug

  Components: JDBC  
Versions: 10.2.0.0
Reporter: Rick Hillegas

 Fix For: 10.2.0.0


Both the embedded and network implementations of DatabaseMetaData return the 
wrong value for this method, which was added in JDBC4. This method currently 
returns false but should return true. Returning false means that your 
Blob/Clobs are backed by SQL Locators and therefore that set() methods on your 
Lobs write-through to the database.

  


Re: locatorsUpdateCopy

2006-05-11 Thread Lance J. Andersen




Hi Rick,

All DatabaseMetaData methods must be implemented.  Unfortunately the
previous authors of the spec did not take into account that some
backends do not support LOBs, so this should not have been a boolean.

I would probably return true as Derby is not locator based.



Rick Hillegas wrote:
Hi Lance,
  
  
Dan points out that neither true nor false is a satisfying return value
from this method for our embedded implementation: our embedded
implementation doesn't even implement the set() methods on Clob/Blob.
And the javadoc doesn't seem to countenance throwing
SQLFeatureNotSupportedException. What are your thoughts?
  
  
Thanks,
  
-Rick
  
  
  
  

  

Subject:

[jira] Commented: (DERBY-1316) Wrong value returned by
DatabaseMetaData.locatorsUpdateCopy()
  
  

From: 
"Rick Hillegas (JIRA)" derby-dev@db.apache.org
  
  

Date: 
Thu, 11 May 2006 19:02:04 + (GMT+00:00)
  
  

To: 
[EMAIL PROTECTED]
  

  
  

  

  

  
  
  [ http://issues.apache.org/jira/browse/DERBY-1316?page=comments#action_12379131 ] 

Rick Hillegas commented on DERBY-1316:
--

I see that neither true nor false is completely satisfying for the current embedded implementation. Throwing SQLFeatureNotSupportedException is not allowed according to the 1.6 javadoc: this is a mandatory method.


  
  
Wrong value returned by DatabaseMetaData.locatorsUpdateCopy()
-

 Key: DERBY-1316
 URL: http://issues.apache.org/jira/browse/DERBY-1316
 Project: Derby
Type: Bug

  
  
  
  
  Components: JDBC
Versions: 10.2.0.0
Reporter: Rick Hillegas
 Fix For: 10.2.0.0

  
  
  
  
Both the embedded and network implementations of DatabaseMetaData return the wrong value for this method. This method currently returns false but should return true. Returning false means that your Blob/Clobs are backed by SQL Locators and therefore that set() methods on your Lobs write-through to the database.

  
  
  





Re: JDBC4 compliance question, was: [jira] Commented: (DERBY-1288) Bring Derby into JDBC compliance by supporting executeQuery() on escaped procedure invocations

2006-05-09 Thread Lance J. Andersen






Rick Hillegas wrote:
Hi Lance,
  
  
I agree with Knut Anders' interpretation of the javadoc for
java.sql.Statement. He is investigating how executeQuery() and
executeUpdate() should behave when the query text invokes a stored
procedure:
  
  
1) executeQuery() should raise an error if the procedure does not
return EXACTLY one ResultSet
  

Could be an empty ResultSet, but yes, one ResultSet of some form.

2) executeUpdate() should raise an error if the procedure returns ANY
ResultSets. Otherwise executeUpdate() should return 0.
  

I would expect this to fall into the category of '0 for SQL Statements
that return nothing'.

Is this your interpretation, also?
  
  
Thanks,
  
-Rick
  
  
  
  

  

Subject:

[jira] Commented: (DERBY-1288) Bring Derby into JDBC compliance by
supporting executeQuery() on escaped procedure invocations
  
  

From: 
"Knut Anders Hatlen (JIRA)" derby-dev@db.apache.org
  
  

Date: 
Tue, 09 May 2006 15:23:46 + (GMT+00:00)
  
  

To: 
[EMAIL PROTECTED]
  

  
  

  

  

  
  
  [ http://issues.apache.org/jira/browse/DERBY-1288?page=comments#action_12378633 ] 

Knut Anders Hatlen commented on DERBY-1288:
---

What Derby currently does, is

  executeQuery:

fails whenever a stored procedure is invoked (both embedded and
client)

  executeUpdate:

embedded: fails if no result sets are returned, succeeds if one or
more result sets are returned

client: succeeds regardless of how many result sets are returned,
but the return value is invalid (-1) when no result sets are
returned (it is 0 otherwise)

The way I read the javadoc at http://download.java.net/jdk6/docs/api/,
(1) executeQuery() should fail if the number of result sets returned
is not equal to one, and (2) executeUpdate() should fail if the number
of result sets is not equal to zero, and (3) executeUpdate() should
return 0 if the invocation of the stored procedure was successful. Is
my understanding correct?

  
  
Bring Derby into JDBC compliance by supporting executeQuery() on escaped procedure invocations
--

 Key: DERBY-1288
 URL: http://issues.apache.org/jira/browse/DERBY-1288
 Project: Derby
Type: Improvement

  
  
  
  
  Components: JDBC
Versions: 10.2.0.0
Reporter: Rick Hillegas
Assignee: Knut Anders Hatlen
 Fix For: 10.2.0.0

  
  
  
  
The following statement raises an error in Derby:
  statement.executeQuery( "{call foo()}" );
although this statement works:
  statement.executeUpdate( "{call foo()}" );
According to section 6.4 of the latest draft of the JDBC4 Compliance chapter, both statements are supposed to work in order to claim Java EE JDBC Compliance.
We need to bring Derby into compliance by supporting executeQuery() on escaped procedure invocations.

  
  
  





Re: [jira] Commented: (DERBY-1283) Fill in a deprecated but mandatory JDBC3 method: PreparedStatement.setUnicodeStream()

2006-05-04 Thread Lance J. Andersen




Prior versions of the JDBC specification were not clear or concise as
to what a developer and/or a user could expect. As a JDBC driver
developer while at Sybase, I found this extremely frustrating.

These methods have been in the JDBC spec since 1.0. We will not be
removing them from the spec, and just because something is deprecated,
it does not mean that it should not be implemented or can be ignored. It
just means that there are alternative methods that are recommended. Also,
in my work with the Java SE team, they discourage deprecating
methods via the javadoc tag, as it adds unnecessary noise at compile
time, whereas a simple note in the spec/javadoc can point users to the
preferred method.

As far as your question below, it is impossible to determine what
methods are and are not required from the JDBC 3.0 specification. This
is something that I have addressed in JDBC 4.0.

-lance

Daniel John Debrunner wrote:

  Lance J. Andersen wrote:

  
  
Very simple, just because it is deprecated, it does not mean it can be
ignored.  Bottom line, it is required to be there.

  
  
According to which section of JDBC 3.0?

Dan.


  





Re: [jira] Commented: (DERBY-1283) Fill in a deprecated but mandatory JDBC3 method: PreparedStatement.setUnicodeStream()

2006-05-04 Thread Lance J. Andersen




Actually the intent has always been there, just not clearly articulated.

If a driver claims to support a data type such as
Blob/Clob/Array, etc., it is expected that all methods on the
interface are fully implemented and do not just throw an exception.

It is just in JDBC 4.0 that I am taking the time to make this clearer.

If your driver or backend does not support the datatype then all
methods on the interface must throw SQLFeatureNotSupportedException for
JDBC 4 and SQLException for JDBC 3.

Daniel John Debrunner wrote:

  Lance J. Andersen wrote:

  
  
The compliance chapter has seen significant clarifications for JDBC 4 to
clarify what is and is not required.  If you implement an interface for
a data type such as blob/clob, all methods must be implemented; otherwise
you do not support the data type.

  
  
So this is a JDBC 4.0 requirement, not a JDBC 3.0 requirement.

Just trying to understand.
Dan.



  





Re: [jira] Commented: (DERBY-1283) Fill in a deprecated but mandatory JDBC3 method: PreparedStatement.setUnicodeStream()

2006-05-04 Thread Lance J. Andersen






Daniel John Debrunner wrote:

  Lance J. Andersen wrote:
  
  
The compliance chapter has seen significant clarifications for JDBC 4 to
clarify what is and is not required.  If you implement an interface for
a data type such as blob/clob, all methods must be implemented; otherwise
you do not support the data type.

  
  
Is this a recent change to JDBC 4.0? I have a copy dated March 17th 2006
and I cannot see any significant changes to the Compliance chapter.

  

I have not released the Proposed Final Draft to the JCP, and there have
been many updates to the compliance chapter since the version you have
in your hand.

  I do see this sentence (section 6.7 Java EE JDBC compliance):

"Support for the BLOB, CLOB, ARRAY, REF, STRUCT, and JAVA_OBJECT types
is not required."

So, why would full support for Blob be required for JDBC 4.0 compliance,
if BLOB support is not required for Java EE JDBC and JDBC 4.0 compliance.
  

It is only required if you claim support for the data type in your
driver. What we do not want is for you to say you support Clobs but only
implement the methods on the interface that you happen to like. We
want to provide a consistent API where users know what to expect.

  
I'm just try to ensure that we are not trying to implement more than is
required for JDBC 4.0 compliance, if we end up pushing against a Sep/Oct
deadline for a Derby release with JDBC 4.0.
  

Again, if you choose to not claim to support these data types, you do
not need to implement the interfaces.

However, to claim support and not implement all methods for a given data
type is of limited value, and makes it even more difficult to port apps
from other databases to a given database.

  
I'm also asking for reference numbers (e.g. section numbers) as we just
recently had a problem where the GRANT/REVOKE functional spec stated
that something was one way according to the SQL spec. It turned out that
the statement was incorrect, and led to wasted time and effort. Adding
references to the specification to back up facts makes it much easier for
others to verify.
  

The JDBC spec consists of the paper spec and the javadocs.

The compliance chapter articulates the requirements I list above in the
working version of the PFD.

  
Thanks,
Dan.


  





Re: [jira] Commented: (DERBY-1283) Fill in a deprecated but mandatory JDBC3 method: PreparedStatement.setUnicodeStream()

2006-05-04 Thread Lance J. Andersen








  
DJD question According to which section of JDBC 3.0?

Then this is about JDBC 4.0 compliance and not JDBC 3.0.
  

Yes and no; the intent has always been there, just not clear in print.

If you feel more comfortable stating this is JDBC 4, so be it, but again
the intent has always been there; I am just making sure the points are
highlighted.

  
I don't see how you can change the rules for JDBC 3.0 compliance with a
release of the JDBC 4.0 specification. I believe that Sun in the past
has confirmed JDBC drivers, including Derby & Cloudscape pre-open source,
as being JDBC 3.0 compliant; seems wrong to say, oh, now there's
additional work to do.
  

There has never been a TCK to validate JDBC compliance end to end.

What there has been is a test suite to validate the requirements of a
JDBC driver in a J2EE environment, and the latest version is for J2EE
1.3 with JDBC 2.x. Passing this allows for a tagline to be used by
driver vendors.

  
Dan.



  





Re: [jira] Created: (DERBY-1288) Bring Derby into JDBC compliance by supporting executeQuery() on escaped procedure invocations

2006-05-04 Thread Lance J. Andersen
Just to be clear, this requirement has been part of the J2EE 
specification since 1999.  It is not new.


JDBC 4 is migrating the section on J2EE JDBC requirements from the J2EE 
spec to the JDBC spec, and future Java EE specs will refer to 
this chapter for requirements.


-lance

Rick Hillegas (JIRA) wrote:

Bring Derby into JDBC compliance by supporting executeQuery() on escaped 
procedure invocations
--

 Key: DERBY-1288
 URL: http://issues.apache.org/jira/browse/DERBY-1288
 Project: Derby
Type: Improvement

  Components: JDBC  
Versions: 10.2.0.0
Reporter: Rick Hillegas

 Fix For: 10.2.0.0


The following statement raises an error in Derby:

  statement.executeQuery( {call foo()} );

although this statement works:

  statement.executeUpdate( {call foo()} );

According to section 6.4 of the latest draft of the JDBC4 Compliance chapter, 
both statements are supposed to work in order to claim Java EE JDBC Compliance.

We need to bring Derby into compliance by supporting executeQuery() on escaped 
procedure invocations.

  


Re: What does deprecation mean for JDBC? (Was Re: [jira] Commented: (DERBY-1283) Fill in a deprecated but mandatory JDBC3 method: PreparedStatement.setUnicodeStream())

2006-05-04 Thread Lance J. Andersen

I really do not want a war of words here as it serves zero purpose.

Methods that are deprecated are never guaranteed to be removed, though it 
is possible they could be.  There are no plans to remove any methods that 
are deprecated from JDBC, and going forward, unless there is a method that 
is severely broken/dangerous, it will not be marked deprecated; instead, 
a note explaining the preferred method will be added to the 
spec/javadocs.


If you want to ask the question a different way and revisit whether 
setUnicodeStream can be considered optional, then please do so and I 
will consider it and have that discussion with the EG.  JDBC drivers 
that have been around since the early days implemented this method.


In compatibility testing, we do not remove or exclude tests just because 
something was deprecated; as long as the method is in the spec, it is 
expected to function.  JDBC is probably one of the poorer specs in this 
regard, which is something I am working diligently to address.


Kathey Marsden wrote:

Daniel John Debrunner wrote:


We will not be
removing them from the spec and just because something is 
deprecated, it

does not mean that it should not be implemented or ignored.  It just
means that there are alternative methods that are recommended.
Wow!   This is not how I understood the word deprecation. I looked it 
up on wikipedia because I thought maybe I was using it wrong all these 
years.  I am a big fan of a *very  *long deprecation period, but I 
never imagined that new drivers would need to implement deprecated 
methods.


*http://en.wikipedia.org/wiki/Deprecation*

In computer software http://en.wikipedia.org/wiki/Computer_software 
standards and documentation, *deprecation* is the gradual phasing-out 
of a software or programming language 
http://en.wikipedia.org/wiki/Computer_programming_language feature.


A feature or method marked as deprecated is one which is considered 
obsolete, and whose use is discouraged. The feature still works in the 
current version of the software, although it may raise error messages 
http://en.wikipedia.org/wiki/Error_message as warnings. These serve 
to alert the user to the fact that the feature may be removed in 
future releases.


Is there some official language around what deprecation means for the 
JDBC API?


Kathey




Re: [jira] Commented: (DERBY-1288) Bring Derby into JDBC compliance by supporting executeQuery() on escaped procedure invocations

2006-05-04 Thread Lance J. Andersen






Daniel John Debrunner (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-1288?page=comments#action_12377843 ] 

Daniel John Debrunner commented on DERBY-1288:
--

What's the required behavior when a update count or multiple result sets are returned?
  

the expected behavior would be no different than with a Statement
object today

  
If multiple result sets are returned when should any error be thrown, before the execution starts or once the system detects that multiple result sets are returned?
  

This would probably be implementation defined depending on the
mechanism being used.

  
A lot of existing discussion has been in DERBY-501

  
  
Bring Derby into JDBC compliance by supporting executeQuery() on escaped procedure invocations
--

 Key: DERBY-1288
 URL: http://issues.apache.org/jira/browse/DERBY-1288
 Project: Derby
Type: Improvement

  
  
  
  
  Components: JDBC
Versions: 10.2.0.0
Reporter: Rick Hillegas
 Fix For: 10.2.0.0

  
  
  
  
The following statement raises an error in Derby:
  statement.executeQuery( "{call foo()}" );
although this statement works:
  statement.executeUpdate( "{call foo()}" );
According to section 6.4 of the latest draft of the JDBC4 Compliance chapter, both statements are supposed to work in order to claim Java EE JDBC Compliance.
We need to bring Derby into compliance by supporting executeQuery() on escaped procedure invocations.

  
  
  





Re: [jira] Commented: (DERBY-1288) Bring Derby into JDBC compliance by supporting executeQuery() on escaped procedure invocations

2006-05-04 Thread Lance J. Andersen






Daniel John Debrunner (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-1288?page=comments#action_12377874 ] 

Daniel John Debrunner commented on DERBY-1288:
--


dan  If multiple result sets are returned when should any error be thrown, before the execution starts or once the system detects that multiple result sets are returned?

lance  This would probably be implementation defined depending on the mechanism being used.

This is where I get confused, if this is implementation defined then how can returning a single ResultSet not be implementation defined as well? Is there a good clear definition of the behaviour you would like to see?
  

I am not sure how JDBC can guarantee when/how a driver/backend
determines whether a ResultSet or an update count (or both) is coming
down the wire, especially given that some vendors support Java-based
procedures as well as standard stored procedures.

  
As an example, if the behaviour is implementation defined then it seems to me that a procedure that was defined with DYNAMIC RESULT SETS 4 could be rejected at compile time, even though it could at runtime only return a single result set. This seems a valid implementation
  

Possibly, but there is no way to determine this on vendors such as
Sybase/MS SQL Server with T-SQL procedures.

  
Similarly a procedure defined with DYNAMIC RESULT SETS 1 could return zero result sets, and thus the executeQuery() has to thrown an exception.

As I've said in DERBY-501, section 13.3.3.3 of JDBC 3.0 states:
"If the type or number of results returned by a CallableStatement object are not
known until run time, the CallableStatement object should be executed with the
method execute."
  

Agreed, but the wording above has nothing to do with the J2EE requirement,
which has been there, as I said, since the J2EE 1.2 spec.  All I have
done is move the wording from the J2EE spec to the compliance section
of the JDBC 4 spec and start pruning things that are already required.

Now myself, I always use execute() with my sprocs, but that is me.



  Now with Derby Java procedures, the engine does *not* know until runtime how many ResultSets are returned, so it seems to me that this implies execute() must be used and so executeQuery is invalid.
Maybe I'm reading this the wrong way, and "known" applies to the knowledge of the application developer
and not the driver/database engine?
  

In this case it would apply to the application developer.  So the
intent is that the developer invoking a stored procedure
would only choose executeQuery() when they can guarantee a single
ResultSet down the wire, executeUpdate() when no ResultSets will be
sent down the wire, and execute() otherwise.
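
That rule of thumb can be reduced to a tiny helper; the method name and shape below are mine, a sketch of the guidance in this thread rather than anything taken from the spec:

```java
public class ProcExecChoice {
    // Sketch of the guidance in this thread: the application developer's
    // knowledge of the procedure's result shape drives which Statement
    // method to use for "{call foo()}".
    static String chooseExecuteMethod(boolean countKnown, int guaranteedResultSets) {
        if (!countKnown) {
            return "execute";        // result shape unknown until run time
        }
        if (guaranteedResultSets == 0) {
            return "executeUpdate";  // no ResultSets will come down the wire
        }
        if (guaranteedResultSets == 1) {
            return "executeQuery";   // exactly one ResultSet guaranteed
        }
        return "execute";            // multiple ResultSets
    }

    public static void main(String[] args) {
        System.out.println(chooseExecuteMethod(true, 1));   // executeQuery
        System.out.println(chooseExecuteMethod(true, 0));   // executeUpdate
        System.out.println(chooseExecuteMethod(false, 0));  // execute
    }
}
```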

  
  
  
Bring Derby into JDBC compliance by supporting executeQuery() on escaped procedure invocations
--

 Key: DERBY-1288
 URL: http://issues.apache.org/jira/browse/DERBY-1288
 Project: Derby
Type: Improvement

  
  
  
  
  Components: JDBC
Versions: 10.2.0.0
Reporter: Rick Hillegas
 Fix For: 10.2.0.0

  
  
  
  
The following statement raises an error in Derby:
  statement.executeQuery( "{call foo()}" );
although this statement works:
  statement.executeUpdate( "{call foo()}" );
According to section 6.4 of the latest draft of the JDBC4 Compliance chapter, both statements are supposed to work in order to claim Java EE JDBC Compliance.
We need to bring Derby into compliance by supporting executeQuery() on escaped procedure invocations.

  
  
  





Re: [jira] Commented: (DERBY-1283) Fill in a deprecated but mandatory JDBC3 method: PreparedStatement.setUnicodeStream()

2006-05-03 Thread Lance J. Andersen




Very simple, just because it is deprecated, it does not mean it can be
ignored.  Bottom line, it is required to be there.

There are no plans to remove these methods from JDBC.

Daniel John Debrunner (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-1283?page=comments#action_12377662 ] 

Daniel John Debrunner commented on DERBY-1283:
--

Seems strange to implement a deprecated api. Section 6.7 says

"Deprecation refers to a class, interface, constructor, method or field that is no longer
recommended and may cease to exist in a future version."

Why  would Derby implement a method we don't want applications to use?

  
  
Fill in a deprecated but mandatory JDBC3 method: PreparedStatement.setUnicodeStream()
-

 Key: DERBY-1283
 URL: http://issues.apache.org/jira/browse/DERBY-1283
 Project: Derby
Type: Improvement

  
  
  
  
  Components: JDBC
Versions: 10.2.0.0
Reporter: Rick Hillegas
 Fix For: 10.2.0.0

  
  
  
  
For JDBC3 compliance, implement this method. Right now it throws a NotImplemented exception.

  
  
  





Re: [jira] Commented: (DERBY-1283) Fill in a deprecated but mandatory JDBC3 method: PreparedStatement.setUnicodeStream()

2006-05-03 Thread Lance J. Andersen




The compliance chapter has seen significant clarifications for JDBC 4
to clarify what is and is not required.  If you implement an interface
for a data type such as blob/clob, all methods must be implemented;
otherwise you do not support the data type.



Daniel John Debrunner (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-1283?page=comments#action_12377663 ] 

Daniel John Debrunner commented on DERBY-1283:
--

Also, which section of JDBC 3.0 spec  indicates that Derby is not currently in compliance with JDBC 3.0 with respect to this method.

The method is implemented.
It is implemented by throwing a SQLException because the database engine does not support the feature.

Section 6.3 says the driver must include an implementation of PreparedStatement/ResultSet but does not say it must
fully implement the interface. C.f. the comment in 6.3 for java.sql.Driver




  
  
Fill in a deprecated but mandatory JDBC3 method: PreparedStatement.setUnicodeStream()
-

 Key: DERBY-1283
 URL: http://issues.apache.org/jira/browse/DERBY-1283
 Project: Derby
Type: Improvement

  
  
  
  
  Components: JDBC
Versions: 10.2.0.0
Reporter: Rick Hillegas
 Fix For: 10.2.0.0

  
  
  
  
For JDBC3 compliance, implement this method. Right now it throws a NotImplemented exception.

  
  
  





Re: [jira] Updated: (DERBY-1253) Verify that we don't raise SQLFeatureNotSupportedException for mandatory methods

2006-05-02 Thread Lance J. Andersen




If you support a data type such as Blob/Clob, you must implement all
methods on the interface, not pick and choose.

If your backend does not support the data type, then all methods should
throw SQLFeatureNotSupportedException.

This was a problem in the earlier JDBC specs, as they did not clarify
which methods were required and which were not.

Dyre Tjeldvoll (JIRA) wrote:

   [ http://issues.apache.org/jira/browse/DERBY-1253?page=all ]

Dyre Tjeldvoll updated DERBY-1253:
--

Attachment: derby-1253.v1.diff
derby-1253.v1.stat

Attaching a patch that updates the exclude map with additional interfaces and methods.
It still fails, but complains about fewer unsupported methods. I have run the test in embedded and DerbyNetClient, but I have not run any other tests.

The indentation mode that I use doesn't seem to grok such complicated array initializations, so the indentation is a bit strange in some places. And I have not even tried to avoid lines longer than 80 chars here, since that would make the exclude map initialization unreadable...

I'm a bit unsure about what to accept for Blob/Clob. All the methods in these interfaces are allowed to throw NotImplemented, but I'm wondering if the spec allows only some of them to be not implemented?


  
  
Verify that we don't raise SQLFeatureNotSupportedException for mandatory methods


 Key: DERBY-1253
 URL: http://issues.apache.org/jira/browse/DERBY-1253
 Project: Derby
Type: Improvement

  
  
  
  
  Components: JDBC
Versions: 10.2.0.0
Reporter: Rick Hillegas
Assignee: Dyre Tjeldvoll
 Fix For: 10.2.0.0
 Attachments: bug1253_verifier.diff, bug1253_verifier2.diff, derby-1253.v1.diff, derby-1253.v1.stat

The jdk16 javadoc marks optional methods as being able to raise SQLFeatureNotSupportedException. Make sure that we don't raise this exception for mandatory methods--unless we clearly understand why we have chosen to deviate from the JDBC spec.

  
  
  





Re: [jira] Updated: (DERBY-1253) Verify that we don't raise SQLFeatureNotSupportedException for mandatory methods

2006-05-02 Thread Lance J. Andersen




Hi Dyre,

Yes, that is correct. If you are supporting those data types, it is
expected that the required methods are there in order to provide
developers with a consistent set of methods. It does not make sense to
just pick and choose, especially seeing these data types have been
around in JDBC for quite some time now. Lack of support will make it
much more difficult for users to migrate from other backends which
support those data types to Derby.

[EMAIL PROTECTED] wrote:

  "Lance J. Andersen" [EMAIL PROTECTED] writes:

  
  
If you support a data type such as Blob/Clob, you must implement all
methods on the interface, not pick and choose.

If your backend does not support the data type, then all methods
should throw SQLFeatureNotSupportedException.

This was a problem in the earlier JDBC specs as it did not clarify
which methods were required and which were not.

  
  
Hi Lance, thanks for the clarification. 

Currently we are missing:

Blob.getBinaryStream(long,long)
Blob.setBinaryStream(long)
Blob.setBytes(long, byte[])
Blob.setBytes(long, byte[], int, int)
Blob.truncate(long)
Blob.free() {DERBY-1145}

Clob.getCharacterStream(long,long) 
Clob.setAsciiStream(long)
Clob.setCharacterStream(long)
Clob.setString(long, String)
Clob.setString(long, String, int, int)
Clob.truncate(long)
Clob.free() {DERBY-1145}


I assume that this means that we also need to implement:

Connection.create[BC]lob()
PreparedStatement.set[BC]lob()
CallableStatement.set[BC]lob() (except named parameter variants)
CallableStatement.get[BC]lob() (except named parameter variants) ?

If so; there is indeed much work that needs to be done before Derby
can claim to support Blob/Clob in Jdbc4 :(

  





Re: Missing JDBC3 methods, was: [jira] Updated: (DERBY-1253) Verify that we don't raise SQLFeatureNotSupportedException for mandatory methods

2006-05-02 Thread Lance J. Andersen

Hi Rick,

Named parameters are optional WRT CallableStatement, but the unsupported 
variants need to throw SQLFeatureNotSupportedException.


This is also clarified in the JDBC 4 spec compliance chapter
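
As a sketch of what that contract looks like in driver code (the class and method below are hypothetical, not Derby's actual implementation), an unsupported named-parameter variant simply throws:

```java
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;

public class NamedParamStub {
    // Hypothetical driver method: a driver that supports only positional
    // parameters satisfies the CallableStatement contract by throwing
    // SQLFeatureNotSupportedException from the named-parameter variants.
    static void setString(String parameterName, String value) throws SQLException {
        throw new SQLFeatureNotSupportedException("named parameters not supported");
    }

    public static void main(String[] args) {
        try {
            setString("param1", "x");
        } catch (SQLException e) {
            // SQLFeatureNotSupportedException is itself a SQLException subclass,
            // so JDBC 3 callers that only know SQLException still work.
            System.out.println(e.getClass().getSimpleName());
        }
    }
}
```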

Rick Hillegas wrote:

Hi Lance,

Here's another gap between Derby's JDBC3 implementation and a 
reasonable interpretation of the spec: Currently, Derby supports 
CallableStatement methods of the form:


 getXXX( int paramNumber) and
 setXXX( int paramNumber, FOO paramValue )

but Derby does not implement the corresponding CallableStatement 
methods (they throw Not implemented exceptions):


 getXXX( String paramName) and
 setXXX( String paramName, FOO paramValue )

Is this asymmetry OK or do you think that methods in the second block 
are mandatory when the corresponding methods in the first block work?


Thanks,
-Rick


Lance J. Andersen wrote:


Hi Dyre,

yes that is correct, if you are supporting those data types it is 
expected that the required methods are there in order to provide 
developers with a consistent set of methods.  It does not make sense 
to just pick and choose especially seeing these data types have been 
around in JDBC for quite some time now.  Lack of support will make it 
much more difficult for users to migrate from other backends which 
support those data types to Derby.


[EMAIL PROTECTED] wrote:


Lance J. Andersen [EMAIL PROTECTED] writes:

 


If you support a data type such as Blob/Clob, you must implement all
methods on the interface, not pick and choose.

If your backend does not support the data type, then all methods
should throw SQLFeatureNotSupportedException.

This was a problem in the earlier JDBC specs as it did not clarify
which methods were required and which were not.
  


Hi Lance, thanks for the clarification.
Currently we are missing:

Blob.getBinaryStream(long,long)
Blob.setBinaryStream(long)
Blob.setBytes(long, byte[])
Blob.setBytes(long, byte[], int, int)
Blob.truncate(long)
Blob.free() {DERBY-1145}

Clob.getCharacterStream(long,long) Clob.setAsciiStream(long)
Clob.setCharacterStream(long)
Clob.setString(long, String)
Clob.setString(long, String, int, int)
Clob.truncate(long)
Clob.free() {DERBY-1145}


I assume that this means that we also need to implement:

Connection.create[BC]lob()
PreparedStatement.set[BC]lob()
CallableStatement.set[BC]lob() (except named parameter variants)
CallableStatement.get[BC]lob() (except named parameter variants) ?

If so; there is indeed much work that needs to be done before Derby
can claim to support Blob/Clob in Jdbc4 :(

 





Re: [jira] Commented: (DERBY-1253) Verify that we don't raise SQLFeatureNotSupportedException for mandatory methods

2006-04-26 Thread Lance J. Andersen




This is required.

Daniel John Debrunner (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-1253?page=comments#action_12376549 ] 

Daniel John Debrunner commented on DERBY-1253:
--

Is this mandatory/optional method scheme discussed in the JDBC 4.0 spec? In my latest copy of the spec and java doc I cannot find any reference to it. I do see the concept of "Required Interface" but that says any unimplemented methods can throw SQLFeatureNotSupportedException.

  
  
Verify that we don't raise SQLFeatureNotSupportedException for mandatory methods


 Key: DERBY-1253
 URL: http://issues.apache.org/jira/browse/DERBY-1253
 Project: Derby
Type: Improvement

  
  
  
  
  Components: JDBC
Versions: 10.2.0.0
Reporter: Rick Hillegas
Assignee: Dyre Tjeldvoll
 Fix For: 10.2.0.0
 Attachments: bug1253_verifier.diff

The jdk16 javadoc marks optional methods as being able to raise SQLFeatureNotSupportedException. Make sure that we don't raise this exception for mandatory methods--unless we clearly understand why we have chosen to deviate from the JDBC spec.

  
  
  





Re: [jira] Commented: (DERBY-1253) Verify that we don't raise SQLFeatureNotSupportedException for mandatory methods

2006-04-26 Thread Lance J. Andersen






Daniel John Debrunner (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-1253?page=comments#action_12376549 ] 

Daniel John Debrunner commented on DERBY-1253:
--

Is this mandatory/optional method scheme discussed in the JDBC 4.0 spec?

Oh, this is in the compliance chapter, and the compliance chapter and the
javadocs have been updated to reflect what is mandatory and what is
optional.

This makes it easier for everyone, as it was a bit of a guessing game
before...

   In my latest copy of the spec and java doc I cannot find any reference to it. I do see the concept of "Required Interface" but that says any unimplemented methods can throw SQLFeatureNotSupportedException.

  
  
Verify that we don't raise SQLFeatureNotSupportedException for mandatory methods


 Key: DERBY-1253
 URL: http://issues.apache.org/jira/browse/DERBY-1253
 Project: Derby
Type: Improvement

  
  
  
  
  Components: JDBC
Versions: 10.2.0.0
Reporter: Rick Hillegas
Assignee: Dyre Tjeldvoll
 Fix For: 10.2.0.0
 Attachments: bug1253_verifier.diff

The jdk16 javadoc marks optional methods as being able to raise SQLFeatureNotSupportedException. Make sure that we don't raise this exception for mandatory methods--unless we clearly understand why we have chosen to deviate from the JDBC spec.

  
  
  





Re: serialization of Derby DataSources

2006-04-21 Thread Lance J. Andersen

Hi Rick,

once the serialVersionUID is there, you should not remove it, as chaos 
can break out if the IDs start to differ. IMHO I would leave them alone.


One example: say someone is using Derby version x with an ID of 1 and 
has persisted the object... now you remove the ID in Derby version y 
and the compiler generates, say, -2 for the ID; you will encounter 
problems when you try to read the persisted object back, as the IDs no 
longer match.




Rick Hillegas wrote:
Thanks, David. I'm afraid I'm still muddled. I think I understand the 
basic purpose of serialVersionUID: It's a compiler-generated checksum 
of the source which serialization uses as a sanity check. By 
explicitly setting this field, the engineer promises to keep the 
following contract: Although the class behavior may change between 
versions, the  non-transient fields won't.


But I'm still not grasping the serialization issue we're addressing 
here. How do we get into a situation where there are two different 
versions of one of these classes? Is anyone persisting these classes 
across upgrades of the Derby code?


Perhaps all that's being addressed here is the following 
recommendation from the javadoc of java.io.Serializable: However, it 
is /strongly recommended/ that all serializable classes explicitly 
declare serialVersionUID values, since the default serialVersionUID 
computation is highly sensitive to class details that may vary 
depending on compiler implementations... I don't think we have this 
problem, though: at release time we produce a standard, vetted version 
of Derby for which the compiler is constant.


Thanks for helping me puzzle through this.

Regards,
-Rick

David W. Van Couvering wrote:

I had to look into this when I was playing around with a classloader 
for code sharing.


Basically, by setting the serialVersionUID, you are telling the VM 
that you guarantee that the newer version of the class is compatible 
with the old version (in terms of serialization).


If you don't set this, then you will get an exception saying the 
class is not compatible if the VM determines that version UID 
(basically a hash) is different.  There is documentation explaining 
how this UID is determined, and I struggled to get it right, but 
finally I had to set the serialVersionUID.


Note that you have to set the serial version UID on the *second* and 
subsequent versions of the class, it's not required for the first 
version of the class.  Basically, you run serialver on the first 
version of the class, and then use this to set serialVersionUID in 
the second version.
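The contract David describes can be sketched in a few lines. This is a hypothetical class (`PoolProps` and its field are made up for illustration, not Derby code): once the UID is declared, compatible changes to the class no longer break deserialization of old streams, because the VM compares the declared value instead of a computed hash.

```java
import java.io.*;

// Hypothetical serializable class with an explicit serialVersionUID.
// The value would typically come from running `serialver` against the
// first released version of the class, and must never change afterwards.
class PoolProps implements Serializable {
    private static final long serialVersionUID = 1L;
    String databaseName = "db";
}

public class SerialUidDemo {
    public static void main(String[] args) throws Exception {
        // Serialize an instance to a byte array...
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new PoolProps());
        }
        // ...and read it back; the declared UID is what gets compared.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            PoolProps copy = (PoolProps) in.readObject();
            System.out.println(copy.databaseName);
        }
    }
}
```

With the UID pinned, the mismatch scenario Lance describes (old stream written under UID 1, new class compiled with a different generated UID) simply cannot arise.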


I wrote some tests to verify serialization compatibility between 
versions of classes but never got to the point of checking them in. 
They may be valuable, and could be added to our compatibility tests, 
so if you'd like I can poke around and find them.


One bug I uncovered in my tests was that for one of the data sources 
the serialversion UID was not public, so I was getting failures.  Now 
I can't remember if I checked in that fix or not.


David

Rick Hillegas wrote:

I'm confused about the presence of serialVersionUIDs in the 
DataSources exposed by our network client (e.g., 
ClientConnectionPoolDataSource). I think I understand why these 
classes are serializable (JNDI wants to serialize them). But I don't 
understand why we are forcibly setting the serialization id. I don't 
see any documentation explaining the serialization problem this 
addresses, stating the implications for engineers editing these 
classes, or describing our expectations at version upgrade.


Can someone shed some light on this?

Thanks,
-Rick






Re: serialization of Derby DataSources

2006-04-21 Thread Lance J. Andersen



Rick Hillegas wrote:

David W. Van Couvering wrote:

My understanding was that they may persist across upgrades because 
the data source objects are serialized into a JNDI store.  In general 
we can *add* non-transient fields but we can't remove or change them.


Thanks for that warning about the JNDI store. It would be better if we 
could flush the old object from the JNDI store.


Sigh. According to an experiment I just ran, the de-serialization 
silently fails to populate the added field with a meaningful value, 
even if you specify a default in the field declaration or in a no-arg 
constructor. The added field is forced to the Java default for that type.


I think this is tricky enough to warrant comments in these classes:
if you add fields, you need to code them so that they get initialized to a 
reasonable value when de-serialized from a stream written by an older 
version of the class.
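One way to code that fix-up, sketched with a hypothetical class (the names here are invented, not Derby's): a `readObject` method runs after default deserialization and patches any field that arrived as the Java default because it was absent from an old stream.

```java
import java.io.*;

// Hypothetical "version 2" of a serializable class. The traceDirectory
// field was added after the first release; streams written by version 1
// leave it at the Java default (null), so readObject() restores a value.
class TraceableDataSource implements Serializable {
    private static final long serialVersionUID = 1L;
    private String databaseName = "db";
    private String traceDirectory = "trace"; // added in version 2

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        // Old stream: the added field was never written, so it is null here.
        if (traceDirectory == null) {
            traceDirectory = "trace";
        }
    }

    String traceDirectory() { return traceDirectory; }
}

public class ReadObjectDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new TraceableDataSource());
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            TraceableDataSource ds = (TraceableDataSource) in.readObject();
            System.out.println(ds.traceDirectory());
        }
    }
}
```

Actually exercising the null branch requires deserializing a stream produced by the class as it existed before the field was added, so this single-file demo only shows the round-trip shape; the comment-in-class approach Rick suggests is what documents the contract for future editors.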


Thanks again,
-Rick



I think also since we support the Referenceable interface, the object 
is reconstructed in a compatible way using our own code, rather than 
depending upon serialization's default mechanism.  But that's where 
I'm still a little muddled.


By the way, using the *exact* same compiler, I tried to gently modify 
a DataSource following all the rules I could imagine, and because I 
didn't know the serialVersionUID was accidentally made private, I 
kept getting an incompatible class error or whatever it's called.  I 
was doing everything perfectly, and it was still breaking.  Once I 
set the serialVersionUID to be public, peace reigned.


David

Rick Hillegas wrote:

Thanks, Lance. I agree. We seem to have a muddle if someone adds a 
new non-transient field to one of these classes: either a) the 
engineer changes the serialVersionUID, giving rise to the problem 
you mention or b) the serialVersionUID isn't changed and 
deserialization fails because the new field is missing from the 
persisted stream. Hopefully we don't mean for these objects to 
persist across Derby upgrades. Hard to tell from the code.


Regards,
-Rick

Lance J. Andersen wrote:


Hi Rick,

once the serialVersionUID is there, you should not remove it, as 
chaos can break out if the IDs start to differ. IMHO I would leave 
them alone.


One example: say someone is using Derby version x with an ID of 1 
and has persisted the object... now you remove the ID in Derby 
version y and the compiler generates, say, -2 for the ID; you will 
encounter problems when you try to read the persisted object back, 
as the IDs no longer match.




Rick Hillegas wrote:

Thanks, David. I'm afraid I'm still muddled. I think I understand 
the basic purpose of serialVersionUID: It's a compiler-generated 
checksum of the source which serialization uses as a sanity check. 
By explicitly setting this field, the engineer promises to keep 
the following contract: Although the class behavior may change 
between versions, the  non-transient fields won't.


But I'm still not grasping the serialization issue we're 
addressing here. How do we get into a situation where there are 
two different versions of one of these classes? Is anyone 
persisting these classes across upgrades of the Derby code?


Perhaps all that's being addressed here is the following 
recommendation from the javadoc of java.io.Serializable: However, 
it is /strongly recommended/ that all serializable classes 
explicitly declare serialVersionUID values, since the default 
serialVersionUID computation is highly sensitive to class details 
that may vary depending on compiler implementations... I don't 
think we have this problem, though: at release time we produce a 
standard, vetted version of Derby for which the compiler is constant.


Thanks for helping me puzzle through this.

Regards,
-Rick

David W. Van Couvering wrote:

I had to look into this when I was playing around with a 
classloader for code sharing.


Basically, by setting the serialVersionUID, you are telling the 
VM that you guarantee that the newer version of the class is 
compatible with the old version (in terms of serialization).


If you don't set this, then you will get an exception saying the 
class is not compatible if the VM determines that version UID 
(basically a hash) is different.  There is documentation 
explaining how this UID is determined, and I struggled to get it 
right, but finally I had to set the serialVersionUID.


Note that you have to set the serial version UID on the *second* 
and subsequent versions of the class, it's not required for the 
first version of the class.  Basically, you run serialver on the 
first version of the class, and then use this to set 
serialVersionUID in the second version.


I wrote some tests to verify serialization compatibility between 
versions of classes but never got to the point of checking them 
in. They may be valuable, and could be added to our compatibility 
tests, so if you'd like I can poke around and find them.


One bug I uncovered in my tests was that for one of the data 
sources

Re: [jira] Commented: (DERBY-941) Add JDBC4 support for Statement Events

2006-04-19 Thread Lance J. Andersen






Knut Anders Hatlen (JIRA) wrote:

  [ http://issues.apache.org/jira/browse/DERBY-941?page=comments#action_12375102 ] 

Knut Anders Hatlen commented on DERBY-941:
--

  
  
V.Narayanan commented on DERBY-941:
---

Hi,
thanx for the comments!

1) In the example we are waiting for the effect of the Delete table
operation to be undone by the create operation before the
PreparedStatement becomes usable again. Isn't this a special case
where the DDL undoes the operation of an earlier DDL?

  
  
Maybe. It's probably a special case that the table is dropped and the
statement is re-executed too, but it's still a case...

  
  
What if the create table did not happen at all? Then wouldn't the
PreparedStatement remain invalid?

  
  
That depends on how "invalid" is defined, but the way I read the
javadoc for StatementEventListener, it seems like the spec
considers the statement as valid, since it is not necessarily unusable
in the future.

  

Your mileage is going to vary as to what/when the statement is invalid. 
A lot will depend on the backend, which is why the wording is not
crystal clear on the details.

  
  
2) There are two cases for this Error Occurred Event as I see it

  a) Assume that the ConnectionPoolManager which has registered
  itself to listen to statement events is actually doing what is
  mentioned as part of the javadoc comment (i.e.) creating a
  temporary table in this case it can catch the error occurred
  event check the content to see the PreparedStatement and also
  the SQLException object contained within the StatementEvent
  (which would indicate the reason for occurrence of the event)
  and if it occurred because of non-existence of the temporary
  table ignore it.

  
  
In that case, the connection pool manager needs knowledge about how
the tables are used and whether the database invalidates statements on
DDL operations. I don't think we can expect the manager to have such
knowledge.

  
  
  b) In the case that the ConnectionPoolManager has not created
  a temporary table and it is a genuine case of a invalid
  PreparedStatement it needs to know it can make use of the
  error occurred event that is raised.
   
  Thus throwing an error occurred event would allow the
  ConnectionPoolManager to decide what needs to happen

  
  
Again, I don't think the connection pool manager has enough
information to decide this. It is the application that creates and
accesses the table. The manager just does what the application tells
it to do, and it has no way to find out whether the application will
recreate the table later.

  
  
We are throwing the error occurred event only upon doing an execute
on the PreparedStatement. If the ConnectionPoolManager did know that
the temporary table or the table used in the PreparedStatement or in
the generalized case knew of a DDL invalidating a PreparedStatement
why would it do a execute on the PreparedStatement? Does'nt this
qualify as a faulty Pooling implementation? If it were using a
temporary table it would do an execute only during the time that the
temporary table exists.

Narayanan

  
  
No, I don't think this means the pool manager is faulty. It is the
application, not the manager, that decides when it invokes execute().


  
  
Add JDBC4 support for Statement Events
--

 Key: DERBY-941
 URL: http://issues.apache.org/jira/browse/DERBY-941
 Project: Derby
Type: New Feature

  
  
  
  
  Components: JDBC
Versions: 10.0.2.0
Reporter: Rick Hillegas
Assignee: V.Narayanan
 Attachments: ListenerTest.java, statementeventlisteners_embedded.diff, statementeventlisteners_embedded.stat, statementeventlisteners_embedded_v2.diff, statementeventlisteners_embedded_v2.stat, statementeventlisteners_embedded_ver1.html

As described in the JDBC 4 spec, sections 11.2, 11.7,  and 3.1.
These are the methods which let app servers listen for connection and statement closure and invalidation events.
Section 11.2 of the JDBC 4 spec explains connection events: Connection pool managers which implement the ConnectionEventListener interface can register themselves to listen for  "connectionClosed" and fatal "connectionErrorOccurred" events. App servers can use these events to help them manage the recycling of connections back to the connection pool.
Section 11.7 of the JDBC 4 spec explains statement events: Statement pools which implement StatementEventListener can register themselves to listen for "statementClosed" and "statementErrorOccurred" events. Again, this helps statement pools manage the recycling of statements back to the pool.
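The listener registration the spec describes can be sketched as follows. This is a minimal illustrative shape, not Derby's implementation: the `PooledConnection` stub exists only so the demo runs without a real driver; in practice the data source supplies the pooled connection and the driver fires the events.

```java
import java.sql.PreparedStatement;
import javax.sql.*;

public class StatementEventDemo {
    public static void main(String[] args) {
        // A statement pool registers this to learn when a statement can be
        // recycled ("closed") or must be discarded ("error occurred").
        StatementEventListener listener = new StatementEventListener() {
            public void statementClosed(StatementEvent e) {
                System.out.println("closed: return statement to the pool");
            }
            public void statementErrorOccurred(StatementEvent e) {
                System.out.println("error: discard statement from the pool");
            }
        };

        // Stub PooledConnection so the demo is self-contained; a real one
        // comes from ConnectionPoolDataSource.getPooledConnection().
        PooledConnection pc = new PooledConnection() {
            public java.sql.Connection getConnection() { return null; }
            public void close() {}
            public void addConnectionEventListener(ConnectionEventListener l) {}
            public void removeConnectionEventListener(ConnectionEventListener l) {}
            public void addStatementEventListener(StatementEventListener l) {}
            public void removeStatementEventListener(StatementEventListener l) {}
        };
        pc.addStatementEventListener(listener);

        // Simulate the driver delivering a "statement closed" event.
        listener.statementClosed(new StatementEvent(pc, null));
    }
}
```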

  
  
  





Re: updateRow() behavior difference between client and embedded drivers - which is right?

2006-04-10 Thread Lance J. Andersen




Just to be clear, the Proposed Final Draft is NOT available to the JCP
community at this time. We are continuing to polish the
specification as we move closer to the release of Mustang.

Daniel John Debrunner wrote:

  David W. Van Couvering wrote:

  
  
In inspecting the exceptions across client and embedded drivers, I
noticed that in the method updateRow(), if the current row has not been
modified, the client throws an exception.  However, the embedded driver
returns without taking any action and does not throw an exception.

The JavaDoc for updateRow() says nothing about what behavior is expected.

Does anyone know which one is correct?  Given a choice, I would prefer
the more forgiving implementation in the embedded driver.  An alternate
is to throw a SQLWarning rather than a SQLException.

Once I have a better sense of what the right behavior is, I can log a
bug about this and attach it to our list of inconsistencies.

  
  
JDBC 4.0 (proposed final draft) section 16.2.4.1

"If the concurrency level is
ResultSet.CONCUR_UPDATABLE and updateRow is called without changes being
made to the row, the call will be considered a no-op."

Thus it seems embedded is correct.

Dan.

  





Re: JDBC ResultSets from DatabaseMetaData

2006-03-31 Thread Lance J. Andersen

This has been clarified in the JDBC 4.0 spec.

Again, and I cannot say this often enough: the tutorial and reference is 
not to be deemed the end-all when it comes to JDBC.  The spec consists 
of the JDBC API javadocs and the PDF spec.


I am trying to correct as many issues as I can for JDBC 4.0.  There 
are things in the tutorial that are wrong and some that are closer to 
correct.  I am addressing them as I find them.


regards
lance

Fernanda Pizzorno wrote:
I just came across that in the book JDBC API Tutorial and Reference, 
Third Edition. In the last paragraph of section 27.1.19 Queries That 
Produce Updatable Result Sets (p.715) it stands:


Result sets created by means other than the execution of a query, 
such as those returned by several methods in the DatabaseMetaData 
interface, are not scrollable or updatable, nor are they required to be.


Fernanda

Daniel John Debrunner wrote:


I known I've seen a statement somewhere that listed the ResultSet
attributes for ResultSets returned from DatabaseMetaData methods.

It stated that such ResultSets were forward only etc.

Now I can't find that statement in JDBC 4.0/3.0 spec or the javadoc.Does
anyone remember where this statement is?

Thanks,
Dan.

 





Re: JDBC ResultSets from DatabaseMetaData

2006-03-31 Thread Lance J. Andersen




That's because it is not there in the public copy, just my working
copy. Will be there in the PFD as well as the javadocs for these
interfaces.

It will be in the DatabaseMetaData chapter and ResultSet chapter in
the ResultSetMetaData section.

regards
lance

Daniel John Debrunner wrote:

  Lance J. Andersen wrote:

  
  
This has been clarified in the JDBC 4.0 spec.

  
  
Great, I couldn't see anything related to this in the JDBC 4.0 proposed
final draft, do you have a section number?

Thanks,
Dan.


  





Re: JDBC ResultSets from DatabaseMetaData

2006-03-29 Thread Lance J. Andersen




It is not in the spec.

However, it should be TYPE_FORWARD_ONLY, CONCUR_READ_ONLY given the
design was done during the JDBC 1.0 timeframe for DatabaseMetaData and
ResultSetMetaData.

I will look to clarify this for JDBC 4

-lance
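Until the clarification lands, portable code can ask the result set for its attributes rather than assume them. A sketch under that assumption; the `CachedRowSet` here is only a stand-in `ResultSet` that can be built without a database connection, not what `DatabaseMetaData` actually returns:

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.rowset.RowSetProvider;

public class ResultSetTypeDemo {
    // Works on any ResultSet, including those from DatabaseMetaData:
    // query the type instead of hard-coding TYPE_FORWARD_ONLY.
    static String describe(ResultSet rs) throws SQLException {
        switch (rs.getType()) {
            case ResultSet.TYPE_FORWARD_ONLY:       return "forward-only";
            case ResultSet.TYPE_SCROLL_INSENSITIVE: return "scroll-insensitive";
            case ResultSet.TYPE_SCROLL_SENSITIVE:   return "scroll-sensitive";
            default:                                return "unknown";
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in ResultSet so the check runs without a driver.
        ResultSet rs = RowSetProvider.newFactory().createCachedRowSet();
        System.out.println(describe(rs));
    }
}
```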

Daniel John Debrunner wrote:

  I known I've seen a statement somewhere that listed the ResultSet
attributes for ResultSets returned from DatabaseMetaData methods.

It stated that such ResultSets were forward only etc.

Now I can't find that statement in JDBC 4.0/3.0 spec or the javadoc.Does
anyone remember where this statement is?

Thanks,
Dan.

  





Re: JDBC ResultSets from DatabaseMetaData

2006-03-29 Thread Lance J. Andersen




The default holdability is always implementation-defined, but that seems
reasonable if that is what Derby does by default anyway.

Daniel John Debrunner wrote:

  Lance J. Andersen wrote:

  
  
It is not in the spec.

However, it should be  TYPE_FORWARD_ONLY, CONCUR_READ_ONLY  given the
design was done during the JDBC 1.0 timeframe for DatabaseMetaData and
ResultSetMetaData.

  
  
So holdability should be CLOSE_CURSORS_AT_COMMIT?

  
  
i will look to clarify for JDBC 4

  
  
Thanks,
Dan.


  





Re: Compatibility guarantees for SQL states and messages

2006-03-28 Thread Lance J. Andersen

If it is deemed to be the wrong SQLState, then you should fix it.

My experience is that JDBC developers are more focused on the exception 
itself, and if they check further they often dig into the vendor error 
code.  This was a reason we added the SQLException subclasses: to aid 
portability.


If you have not bought a copy of the SQL standard, you really do not know 
what a given SQLState means anyway.


my .02
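The two styles Lance contrasts look like this side by side. The SQLState value is illustrative only (borrowed from the 42X89 example discussed later in this thread, not actual driver output); `java.sql.SQLSyntaxErrorException` is one of the JDBC 4.0 categorized subclasses:

```java
import java.sql.SQLException;
import java.sql.SQLSyntaxErrorException;

public class SqlStateDemo {
    public static void main(String[] args) {
        SQLException e = new SQLSyntaxErrorException(
                "types are not compatible", "42X89");

        // New style: branch on the exception category via the class
        // hierarchy, with no SQLState knowledge required.
        System.out.println(e instanceof SQLSyntaxErrorException);

        // Old style: the first two characters of the SQLState identify
        // the condition class defined by X/Open and the SQL standard.
        System.out.println(e.getSQLState().substring(0, 2));
    }
}
```

The subclass check stays stable even if a vendor's specific five-character state diverges from another vendor's, which is exactly the portability point being made.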

David W. Van Couvering wrote:
Thanks, Kathey.  What if I find an existing SQLState in the embedded 
code that uses a Derby-specific SQL State but which I think really 
should be a standard SQL state?


For example, I think 42X89 (Types ''{0}'' and ''{1}'' are not type 
compatible. Neither type is assignable to the other type.) really is 
a case of the standard SQL State 22005 - error in assignment


So the question is, using the taxonomy described in

http://wiki.apache.org/db-derby/ForwardCompatibility

should SQL States be Stable or Unstable?  If they are Stable, then I 
can't fix this until 11.0, and I just need to log a bug for now.  If 
they are Unstable, I can fix this in 10.2.


I think really since our SQL States are documented, and we don't 
really think of them as experimental or transitional, then they 
should be considered Stable, and I really can't change an existing SQL 
State in a minor release.


But when adding a new SQL state to the client, which takes priority: 
being consistent with the SQL state in embedded driver which is 
non-compliant with the standard, or being consistent with the SQL 
standard?  I would vote for being consistent with the standard, and 
explain that the inconsistency is due to a bug in the embedded driver 
which will be fixed in the next major release.


Thanks,

David

Kathey Marsden wrote:

David W. Van Couvering wrote:



Hi, all.  I looked at the listing of Derby's public APIs (see
http://wiki.apache.org/db-derby/ForwardCompatibility), and it mentions
Derby's JDBC support.

I need to delve in a little deeper.  Are we guaranteeing compatibility
for the SQL States?  For the 10.2 release, is it OK for me to change
the SQL State of an existing message, or do I need to keep it the same
across minor releases?




I don't think SQLStates are defined by the JDBC Standard but rather the
SQL Standard.
To that extent they should be compliant and match embedded where 
possible.


SQLStates are documented but we have this caveat for client:
http://db.apache.org/derby/docs/dev/ref/rrefexcept71493.html
The following tables list /SQLStates/ for exceptions. Exceptions that
begin with an /X/ are specific to Derby. Note that some SQLStates
specific to the network client might change in future releases.

We also voted early to make client match embedded where possible and
that is in the documentation here.
http://db.apache.org/derby/docs/dev/adminguide/cadminappsclientdiffs.html. 



I think that even within these guidelines early notification and buy in
from the user community is key, so should be posted on the user list.
and a Wiki page provided with information on how to write applications
that will work on both old and new versions.

But  I think it is ok to change the SQLStates on client to:
1) Match the standard.
2) Match embedded.
3)  Create a new  SQLState instead of  having a null SQLState for
SQLExceptions that are specific  to client.

but not ok to:
1) Change client from some existing SQLState to another SQLState that
is neither compliant nor matches embedded.
I think message text can be changed, but the null SQLStates and message
text are an interesting case, because before that was the only way for
an app to check the error. I think some of our testing code does this.


Kathey















Re: Compatibility guarantees for SQL states and messages

2006-03-28 Thread Lance J. Andersen
My point is this: if an incorrect SQLState is applied, then it is a bug, 
simple as that.  Changing these is pretty low risk anyway, as the 
majority of developers do not have copies of the standard.  I bet you 
would find a fair amount of divergence in the SQLStates returned if you 
were to survey all of the vendors out there.  No facts, just gut feel 
based on my experience.


Also keep in mind that the SQLState that is returned from a SQLException 
is also derived from X/Open and the ANSI SQL standard.  I would not 
dwell too much on this, to be honest.


Regards
Lance

David W. Van Couvering wrote:
It sounds like your vote is that the SQL States be marked Unstable, 
not Stable.


David

Lance J. Andersen wrote:

If it is deemed to be the wrong SQLState, then you should fix it.

My experience is JDBC developers are more focused on the Exception 
and if they check further they often dig into the vendor error.  This 
was a reason we added the SQLException sub classes to help aid in 
better portability.


If you have not bought a copy of the SQL Standard you really do not 
know what this means (SQLState) anyways.


my .02

David W. Van Couvering wrote:

Thanks, Kathey.  What if I find an existing SQLState in the embedded 
code that uses a Derby-specific SQL State but which I think really 
should be a standard SQL state?


For example, I think 42X89 (Types ''{0}'' and ''{1}'' are not type 
compatible. Neither type is assignable to the other type.) really 
is a case of the standard SQL State 22005 - error in assignment


So the question is, using the taxonomy described in

http://wiki.apache.org/db-derby/ForwardCompatibility

should SQL States be Stable or Unstable?  If they are Stable, then I 
can't fix this until 11.0, and I just need to log a bug for now.  If 
they are Unstable, I can fix this in 10.2.


I think really since our SQL States are documented, and we don't 
really think of them as experimental or transitional, then they 
should be considered Stable, and I really can't change an existing 
SQL State in a minor release.


But when adding a new SQL state to the client, which takes priority: 
being consistent with the SQL state in embedded driver which is 
non-compliant with the standard, or being consistent with the SQL 
standard?  I would vote for being consistent with the standard, and 
explain that the inconsistency is due to a bug in the embedded 
driver which will be fixed in the next major release.


Thanks,

David

Kathey Marsden wrote:


David W. Van Couvering wrote:



Hi, all.  I looked at the listing of Derby's public APIs (see
http://wiki.apache.org/db-derby/ForwardCompatibility), and it 
mentions

Derby's JDBC support.

I need to delve in a little deeper.  Are we guaranteeing 
compatibility

for the SQL States?  For the 10.2 release, is it OK for me to change
the SQL State of an existing message, or do I need to keep it the 
same

across minor releases?





I don't think SQLStates are defined by the JDBC Standard but rather 
the

SQL Standard.
To that extent they should be compliant and match embedded where 
possible.


SQLStates are documented but we have this caveat for client:
http://db.apache.org/derby/docs/dev/ref/rrefexcept71493.html
The following tables list /SQLStates/ for exceptions. Exceptions that
begin with an /X/ are specific to Derby. Note that some SQLStates
specific to the network client might change in future releases.

We also voted early to make client match embedded where possible and
that is in the documentation here.
http://db.apache.org/derby/docs/dev/adminguide/cadminappsclientdiffs.html. 



I think that even within these guidelines early notification and 
buy in

from the user community is key, so should be posted on the user list.
and a Wiki page provided with information on how to write applications
that will work on both old and new versions.

But  I think it is ok to change the SQLStates on client to:
1) Match the standard.
2) Match embedded.
3)  Create a new  SQLState instead of  having a null SQLState for
SQLExceptions that are specific  to client.

but not ok to:
1) Change client from some existing SQLState to another SQLState that
is neither compliant nor matches embedded.
I think message text can be changed, but the null SQLStates and message
text are an interesting case, because before that was the only way for
an app to check the error. I think some of our testing code does this.



Kathey














