Re: JDK 1.4 support?
One word of caution. While Sun's JDK 1.4 had its "end of service life" on 2008-10-08, IBM's JDK 1.4 has an "End of Service" date of 2011-09-30, which means that there is a supported 1.4 JDK out there. If we stop requiring 1.4 for building Derby, we can't guarantee that future versions of Derby will work with IBM's still-supported 1.4 JDK. Sun's 1.4 JDK is still available for download.
Re: Upcoming Birthdays Query for Derby database
"Sebas.Panikulam" <[EMAIL PROTECTED]> writes: > Hi, > If anybody tried writing a query for Derby to fetch the upcoming Birthdays. > I am trying to modify the following query with respect to derby. The below > one will find the B'Days in the next 7 days. > > SELECT * FROM [dbo].[tbl_EMPXYZ] > WHERE MONTH(DOB) BETWEEN MONTH(GETDATE()) AND > MONTH(DATEADD(dd,20,GETDATE())) AND > DAY(DOB) BETWEEN DAY(GETDATE()) AND DAY(DATEADD(dd,7,GETDATE())) > > But I didn't find any function for adding date in derby(similar to DATEADD > in the above query). This link shows some ideas: http://www.mailinglistarchive.com/derby-user@db.apache.org/msg02879.html You could also use stored functions: http://db.apache.org/derby/docs/10.4/ref/rrefcreatefunctionstatement.html Thanks, Dag
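Derby has no DATEADD, but it does support the JDBC escape function {fn TIMESTAMPADD(SQL_TSI_DAY, 7, CURRENT_TIMESTAMP)}, and it has MONTH() and DAY(). One sketch of a portable approach (assuming a hypothetical EMP table with a DOB column; substitute your own names) is to expand the 7-day window into explicit (month, day) pairs on the Java side, which also sidesteps the month-boundary and year-end problems of the BETWEEN form in the original query:

```java
import java.time.LocalDate;
import java.time.MonthDay;
import java.util.StringJoiner;

public class BirthdayQuery {
    // Build a Derby-compatible WHERE clause matching birthdays in the next
    // `days` days starting from `from`. Expanding the window into explicit
    // (month, day) pairs handles month and year boundaries correctly.
    static String upcomingBirthdaysSql(LocalDate from, int days) {
        StringJoiner or = new StringJoiner(" OR ");
        for (int i = 0; i < days; i++) {
            MonthDay md = MonthDay.from(from.plusDays(i));
            or.add("(MONTH(DOB) = " + md.getMonthValue()
                 + " AND DAY(DOB) = " + md.getDayOfMonth() + ")");
        }
        // EMP and DOB are placeholder names; substitute your own table/column.
        return "SELECT * FROM EMP WHERE " + or.toString();
    }
}
```

A window starting 2008-12-29, for instance, generates predicates for December 29-31 and January 1-4 without any date arithmetic inside the SQL.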
Re: JDK 1.4 support?
I think we just need one JDK (1.6), but we do need JREs for 1.4 and 1.5 to compile Derby successfully. I am not sure BUILDING.txt reflects this, though. I agree it is a bit cumbersome for the would-be developer to have to download all that. I think we need the 1.4 JRE to make sure we maintain compatibility with the JME CDC 1.1 Foundation profile (Connected Device Configuration). Presently, there is a single build (one jar file version only) for embedded Derby. It would have been nice if new developers didn't have to worry about that... Thanks, Dag
Re: faster inserts in big tables
Go with the latest Derby; if you load data from an external file, use bulk import. Using a large log file may help. 2008/11/25, [EMAIL PROTECTED] <[EMAIL PROTECTED]>: > I have 10's of thousands of rows to add to a table, possibly 100's of > thousands, and I'm wondering if there's anything else I can do to speed > this up. The table could end up having a couple of million rows. > > This is what I've done so far: > * Using a PreparedStatement that gets reused with each insert. > * Set locking level to TABLE for that table. > * Turned off autocommit. > * Set the connection to READ_COMMIT. > > In addition to that, I'm also setting these system parameters, though > not > necessarily to improve insert performance: > * derby.system.durability=test > * derby.storage.pageSize=32768 > > The table has one odd feature: The last column is a VARCHAR(32672) FOR > BIT DATA. I've tried setting the length to something smaller, but it > didn't really seem to matter. > > The primary key is an auto generated int with another 2-column index on > two BIGINT columns. Something I found interesting is that the inserts > seem to go 2x faster if I have the 2-column index in place than if I > have just the primary-key index. > > I'm running > Derby 10.2.2 > JRE 1.6.0_07 > Windows XP SP2 > > Is there anything else I can do to speed up row inserts? > > Thanks, > Brian >
Upcoming Birthdays Query for Derby database
Hi, Has anybody tried writing a query for Derby to fetch upcoming birthdays? I am trying to adapt the following query to Derby. The one below finds the birthdays in the next 7 days. SELECT * FROM [dbo].[tbl_EMPXYZ] WHERE MONTH(DOB) BETWEEN MONTH(GETDATE()) AND MONTH(DATEADD(dd,20,GETDATE())) AND DAY(DOB) BETWEEN DAY(GETDATE()) AND DAY(DATEADD(dd,7,GETDATE())) But I didn't find any function for adding days to a date in Derby (similar to DATEADD in the query above). Does anyone have a better approach or query? Thanks Sebas -- View this message in context: http://www.nabble.com/Upcoming-Birthdays-Query-for-Derby-database-tp20690090p20690090.html Sent from the Apache Derby Users mailing list archive at Nabble.com.
faster inserts in big tables
I have 10's of thousands of rows to add to a table, possibly 100's of thousands, and I'm wondering if there's anything else I can do to speed this up. The table could end up having a couple of million rows. This is what I've done so far: * Using a PreparedStatement that gets reused with each insert. * Set locking level to TABLE for that table. * Turned off autocommit. * Set the connection to READ_COMMIT. In addition to that, I'm also setting these system parameters, though not necessarily to improve insert performance: * derby.system.durability=test * derby.storage.pageSize=32768 The table has one odd feature: The last column is a VARCHAR(32672) FOR BIT DATA. I've tried setting the length to something smaller, but it didn't really seem to matter. The primary key is an auto generated int with another 2-column index on two BIGINT columns. Something I found interesting is that the inserts seem to go 2x faster if I have the 2-column index in place than if I have just the primary-key index. I'm running Derby 10.2.2 JRE 1.6.0_07 Windows XP SP2 Is there anything else I can do to speed up row inserts? Thanks, Brian
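One measure not on the list above is JDBC batching: buffering inserts with addBatch() and flushing them with executeBatch() every few hundred rows, committing per flush since autocommit is off. Below is a minimal sketch under assumed names (table T with columns A, B, DATA; the real schema differs); the flushCount helper just shows how many executeBatch() calls the loop makes.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BulkInsert {
    // Sketch of batched inserts. T, A, B and DATA are placeholder names.
    // With autocommit off, flushing every `batchSize` rows and committing
    // per flush bounds the amount of uncommitted work.
    static void insertAll(Connection con, List<long[]> keys, byte[][] payloads,
                          int batchSize) throws SQLException {
        try (PreparedStatement ps =
                 con.prepareStatement("INSERT INTO T(A, B, DATA) VALUES (?, ?, ?)")) {
            int pending = 0;
            for (int i = 0; i < keys.size(); i++) {
                ps.setLong(1, keys.get(i)[0]);
                ps.setLong(2, keys.get(i)[1]);
                ps.setBytes(3, payloads[i]);
                ps.addBatch();
                if (++pending == batchSize) {
                    ps.executeBatch();
                    con.commit();
                    pending = 0;
                }
            }
            if (pending > 0) {          // flush the final partial batch
                ps.executeBatch();
                con.commit();
            }
        }
    }

    // Pure helper: number of executeBatch() calls the loop above performs.
    static int flushCount(int rowCount, int batchSize) {
        return (rowCount + batchSize - 1) / batchSize;
    }
}
```

Batch sizes in the hundreds are a common starting point; measuring with your own data is the only reliable guide.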
Re: JDK 1.4 support?
Bryan Pendleton wrote: >> I think support of compilation against JDK 1.4 should be removed >> otherwise no developers would be able to work on it. > > It seems like this is more a discussion for the *developers* list > than the *users* list, since it only applies to people trying to > build Derby from source, right? > > So we should probably move it to the derby-dev list. > > Is your current problem that you are having trouble getting > a copy of the 1.4 JDK? I think you have to go to a different > location on the download site, but I believe it is still available. > > Or are you simply raising a concern for the future, about when > Derby should consider dropping support for JDK 1.4? Yes, I am raising a concern that support for JDK 1.4 should already have been dropped. At the very least, even if you do not adopt new 1.5 features, do not require a JDK 1.4 for compilation, since many people do not want to use an unsupported JDK, not even for development. Requiring an unmaintained JDK just creates a problem for developers.
Re: JDK 1.4 support?
I think support of compilation against JDK 1.4 should be removed otherwise no developers would be able to work on it. It seems like this is more a discussion for the *developers* list than the *users* list, since it only applies to people trying to build Derby from source, right? So we should probably move it to the derby-dev list. Is your current problem that you are having trouble getting a copy of the 1.4 JDK? I think you have to go to a different location on the download site, but I believe it is still available. Or are you simply raising a concern for the future, about when Derby should consider dropping support for JDK 1.4? thanks, bryan
Re: JDK 1.4 support?
Daniel Noll wrote: > Dag H. Wanvik wrote: >> Derby doesn't enforce JDK 1.4 > > It does at compile-time. > > Admittedly I haven't tried setting jdk14.home to point to jdk6. :-) Yes, that is the problem. How do I set jdk14.home if there is no JDK 1.4? I think support for compilation against JDK 1.4 should be removed; otherwise no developers would be able to work on it.
Re: how to get data out from Clob?
Mikael Sundberg wrote: Thanks for the info Does the same problem exist for Blob? There was a similar problem, but the fix for it should be in 10.4.2.0. In many cases Blob is faster than Clob, because it doesn't have to deal with the character encoding issue. Derby is using the modified UTF-8 encoding, where one character can be represented by either one, two or three bytes. This complicates the positioning logic, and you also get the cost of decoding/encoding of course. If you can easily encode/decode your character data yourself and you only need to simply transfer the data between the client and the server, you might see some improvement by using Blob instead of Clob. Of course, if the data volume is too large to fully represent it in-memory on the client, it complicates things for you to go with Blob. -- Kristian Will update my testservers and try it. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: den 25 november 2008 14:01 To: Derby Discussion Subject: Re: how to get data out from Clob? Mikael Sundberg wrote: Hi Im using derby 10.4.2 and the clientdriver. I changed from BufferedReader r = new BufferedReader(new InputStreamReader(res.getAsciiStream("data"))); for (String line = r.readLine(); line != null; line = r.readLine()) { w.println(line); } To String data = res.getString("data"); In a recent update to our application since the getString seemed to be twice as fast then. Not entirely sure what versions of derby I tested on then. So bacisly I should change back now? On the client driver, it doesn't matter (with respect to the bug I'm talking about). I'm afraid you're stuck with the performance problem until you upgrade to a newer version. We don't want to use any unstable version on production so guess I will have to solve it some other way. Il atleast try out the latest version on my testservers to see if that realy is the problem. I would consider using the newest bits from the 10.4 branch. 
The changes after the latest release are bug fixes, not new features. You can download test binaries from here http://dbtg.thresher.com/derby/bits/ Note that these are bits used for the nightly testing and should *not* be used in production without verification and testing. Note that the performance problem in this case is severe; the larger the Clob the worse the impact (we're talking hours instead of seconds/minutes). I would strongly suggest you test with a newer, though unreleased, version.
Re: varchar length
Kristian Waagan wrote (2008-11-25 13:36:54): > tom_ wrote: >> Hello Kristian, >> >> this is a good idea, truncating by java before writing by SQL. Yes, varchar >> removes trailing blanks, though it would help if it would cut larger values >> instead of not writing them. If you have a look at the error message it >> shows that he tries to cut the value to the limit but has an exception. >> > > Yes, it is trying to cut, but is not allowed to do so because the data > to be cut consists on non-blanks. > There is no support for allowing Derby to truncate "real data". You > could add a feature request to Jira[1], but since there are relatively > simple work arounds available, I don't know if it will get much traction > in the community. Another thing, I don't think increasing the size of the VARCHAR in Derby should affect Derby performance. -- Bernt Marius Johnsen, Staff Engineer Database Technology Group, Sun Microsystems, Trondheim, Norway
RE: how to get data out from Clob?
Thanks for the info Does the same problem exist for Blob? Will update my testservers and try it. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: den 25 november 2008 14:01 To: Derby Discussion Subject: Re: how to get data out from Clob? Mikael Sundberg wrote: > Hi > Im using derby 10.4.2 and the clientdriver. > I changed from > BufferedReader r = new BufferedReader(new > InputStreamReader(res.getAsciiStream("data"))); > for (String line = r.readLine(); line != null; line = > r.readLine()) > { > w.println(line); > } > > To > String data = res.getString("data"); > > In a recent update to our application since the getString seemed to be > twice as fast then. Not entirely sure what versions of derby I tested on > then. > So bacisly I should change back now? On the client driver, it doesn't matter (with respect to the bug I'm talking about). I'm afraid you're stuck with the performance problem until you upgrade to a newer version. > We don't want to use any unstable > version on production so guess I will have to solve it some other way. > Il atleast try out the latest version on my testservers to see if that > realy is the problem. > I would consider using the newest bits from the 10.4 branch. The changes after the latest release are bug fixes, not new features. You can download test binaries from here http://dbtg.thresher.com/derby/bits/ Note that these are bits used for the nightly testing and should *not* be used in production without verification and testing. Note that the performance problem in this case is severe; the larger the Clob the worse the impact (we're talking hours instead of seconds/minutes). I would strongly suggest you test with a newer, though unreleased, version. -- Kristian > > -Original Message- > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] > Sent: den 25 november 2008 13:32 > To: Derby Discussion > Subject: Re: how to get data out from Clob? 
> > Mikael Sundberg write: > >> We are having some trouble with retrieving data from big Clob fields. >> The data is about 40-50MB and it takes forever to retrieve it. >> >> Tried a lot of different methods, getString, getClob, getAsciiStream. >> They all take forever (hours). >> >> When checking whats running in derby we find >> >> CALL SYSIBM.CLOBGETSUBSTRING(?, ?, ?) >> >> A lot of times. >> >> How should I retrieve large CLob fields fastest? >> >> >> >> > > > Hello Mikael, > > You are being hit by a serious performance bug (a lot more prominent > since Derby 10.3). > I understand you are using the client driver, but which version of Derby > > are you using? > > I believe the issue you are experiencing has been fixed in trunk and in > the 10.4 branch, but a release hasn't been made after the fix was > committed. If you don't want to build Derby yourself, *test binaries* > are available if you want to test with a newer version. It should be > sufficient to update the server side only (i.e. derbynet.jar and > derby.jar). If you try it out, please remember to take a backup of your > database! If you need to upgrade you database to trunk, you also need to > > enable pre-release upgrades. > > Unfortunately, there is no way to properly work around the problem in > older releases. You can adjust the buffer/block size used to fetch data > in the client application, but this will only help a little bit and will > > probably not be sufficient. > The optimal buffer size is dependent on the Clob content (due to the > UTF-8 encoding used). If your Clobs contain ASCII data, you should set > your buffer size to 32672 (note that this is smaller than 32KB). If you > are using the embedded driver, make sure you access the Clob using one > of the streaming methods (getSubString has been fixed by now). > There will be several changes regarding Clob performance in the next > feature release. 
> > We're also in the progress of adding performance regression tests for > LOBs, as this is a problem we should have detected a long time ago! > If you are still seeing the problem with the newest version, I'm very > interested in getting more details from you. > > > regards, >
Re: how to get data out from Clob?
Mikael Sundberg wrote: Hi Im using derby 10.4.2 and the clientdriver. I changed from BufferedReader r = new BufferedReader(new InputStreamReader(res.getAsciiStream("data"))); for (String line = r.readLine(); line != null; line = r.readLine()) { w.println(line); } To String data = res.getString("data"); In a recent update to our application since the getString seemed to be twice as fast then. Not entirely sure what versions of derby I tested on then. So bacisly I should change back now? On the client driver, it doesn't matter (with respect to the bug I'm talking about). I'm afraid you're stuck with the performance problem until you upgrade to a newer version. We don't want to use any unstable version on production so guess I will have to solve it some other way. Il atleast try out the latest version on my testservers to see if that realy is the problem. I would consider using the newest bits from the 10.4 branch. The changes after the latest release are bug fixes, not new features. You can download test binaries from here http://dbtg.thresher.com/derby/bits/ Note that these are bits used for the nightly testing and should *not* be used in production without verification and testing. Note that the performance problem in this case is severe; the larger the Clob the worse the impact (we're talking hours instead of seconds/minutes). I would strongly suggest you test with a newer, though unreleased, version. -- Kristian -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: den 25 november 2008 13:32 To: Derby Discussion Subject: Re: how to get data out from Clob? Mikael Sundberg write: We are having some trouble with retrieving data from big Clob fields. The data is about 40-50MB and it takes forever to retrieve it. Tried a lot of different methods, getString, getClob, getAsciiStream. They all take forever (hours). When checking whats running in derby we find CALL SYSIBM.CLOBGETSUBSTRING(?, ?, ?) A lot of times. 
How should I retrieve large CLob fields fastest? Hello Mikael, You are being hit by a serious performance bug (a lot more prominent since Derby 10.3). I understand you are using the client driver, but which version of Derby are you using? I believe the issue you are experiencing has been fixed in trunk and in the 10.4 branch, but a release hasn't been made after the fix was committed. If you don't want to build Derby yourself, *test binaries* are available if you want to test with a newer version. It should be sufficient to update the server side only (i.e. derbynet.jar and derby.jar). If you try it out, please remember to take a backup of your database! If you need to upgrade you database to trunk, you also need to enable pre-release upgrades. Unfortunately, there is no way to properly work around the problem in older releases. You can adjust the buffer/block size used to fetch data in the client application, but this will only help a little bit and will probably not be sufficient. The optimal buffer size is dependent on the Clob content (due to the UTF-8 encoding used). If your Clobs contain ASCII data, you should set your buffer size to 32672 (note that this is smaller than 32KB). If you are using the embedded driver, make sure you access the Clob using one of the streaming methods (getSubString has been fixed by now). There will be several changes regarding Clob performance in the next feature release. We're also in the progress of adding performance regression tests for LOBs, as this is a problem we should have detected a long time ago! If you are still seeing the problem with the newest version, I'm very interested in getting more details from you. regards,
RE: how to get data out from Clob?
Hi, I'm using Derby 10.4.2 and the client driver. I changed from BufferedReader r = new BufferedReader(new InputStreamReader(res.getAsciiStream("data"))); for (String line = r.readLine(); line != null; line = r.readLine()) { w.println(line); } to String data = res.getString("data"); in a recent update to our application, since getString seemed to be twice as fast then. Not entirely sure which versions of Derby I tested on then. So basically I should change back now? We don't want to use any unstable version in production, so I guess I will have to solve it some other way. I'll at least try out the latest version on my test servers to see if that really is the problem. -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Sent: den 25 november 2008 13:32 To: Derby Discussion Subject: Re: how to get data out from Clob? Mikael Sundberg wrote: > > We are having some trouble with retrieving data from big Clob fields. > The data is about 40-50MB and it takes forever to retrieve it. > > Tried a lot of different methods, getString, getClob, getAsciiStream. > They all take forever (hours). > > When checking whats running in derby we find > > CALL SYSIBM.CLOBGETSUBSTRING(?, ?, ?) > > A lot of times. > > How should I retrieve large CLob fields fastest? > > > Hello Mikael, You are being hit by a serious performance bug (a lot more prominent since Derby 10.3). I understand you are using the client driver, but which version of Derby are you using? I believe the issue you are experiencing has been fixed in trunk and in the 10.4 branch, but a release hasn't been made after the fix was committed. If you don't want to build Derby yourself, *test binaries* are available if you want to test with a newer version. It should be sufficient to update the server side only (i.e. derbynet.jar and derby.jar). If you try it out, please remember to take a backup of your database! If you need to upgrade your database to trunk, you also need to enable pre-release upgrades. 
Unfortunately, there is no way to properly work around the problem in older releases. You can adjust the buffer/block size used to fetch data in the client application, but this will only help a little bit and will probably not be sufficient. The optimal buffer size is dependent on the Clob content (due to the UTF-8 encoding used). If your Clobs contain ASCII data, you should set your buffer size to 32672 (note that this is smaller than 32KB). If you are using the embedded driver, make sure you access the Clob using one of the streaming methods (getSubString has been fixed by now). There will be several changes regarding Clob performance in the next feature release. We're also in the progress of adding performance regression tests for LOBs, as this is a problem we should have detected a long time ago! If you are still seeing the problem with the newest version, I'm very interested in getting more details from you. regards, -- Kristian
Re: varchar length
tom_ wrote: Hello Kristian, this is a good idea, truncating in Java before writing via SQL. Yes, varchar removes trailing blanks, though it would help if it would cut larger values instead of not writing them. If you have a look at the error message it shows that it tries to cut the value to the limit but has an exception. Yes, it is trying to cut, but is not allowed to do so because the data to be cut consists of non-blanks. There is no support for allowing Derby to truncate "real data". You could add a feature request to Jira[1], but since there are relatively simple workarounds available, I don't know if it will get much traction in the community. cheers, -- Kristian [1] https://issues.apache.org/jira/browse/DERBY Thank you for helping Tom Kristian Waagan-4 wrote: tom_ wrote: When defining a datatype with varchar(128) and providing an input with more than 128 characters the value is not inserted in the database. There is an error message "error when trying to cut to 128 characters" (translated from German). Hello Tom, Only trailing white space will be truncated by Derby. I *think* this is according to the SQL standard, but I haven't checked this (anyone?). Two options would be to either extract a substring from the value in Java (i.e. before you call PreparedStatement.setString), or rely on exception handling to detect and handle the problem. regards, -- Kristian I could increase the maximum length but when there are a lot of values with shorter length and only very few with longer length than 128 this would decrease the speed of search operations ...
Re: how to get data out from Clob?
Mikael Sundberg wrote: We are having some trouble with retrieving data from big Clob fields. The data is about 40-50MB and it takes forever to retrieve it. Tried a lot of different methods, getString, getClob, getAsciiStream. They all take forever (hours). When checking what's running in Derby we find CALL SYSIBM.CLOBGETSUBSTRING(?, ?, ?) a lot of times. How should I retrieve large Clob fields fastest? Hello Mikael, You are being hit by a serious performance bug (a lot more prominent since Derby 10.3). I understand you are using the client driver, but which version of Derby are you using? I believe the issue you are experiencing has been fixed in trunk and in the 10.4 branch, but a release hasn't been made after the fix was committed. If you don't want to build Derby yourself, *test binaries* are available if you want to test with a newer version. It should be sufficient to update the server side only (i.e. derbynet.jar and derby.jar). If you try it out, please remember to take a backup of your database! If you need to upgrade your database to trunk, you also need to enable pre-release upgrades. Unfortunately, there is no way to properly work around the problem in older releases. You can adjust the buffer/block size used to fetch data in the client application, but this will only help a little bit and will probably not be sufficient. The optimal buffer size is dependent on the Clob content (due to the UTF-8 encoding used). If your Clobs contain ASCII data, you should set your buffer size to 32672 (note that this is smaller than 32KB). If you are using the embedded driver, make sure you access the Clob using one of the streaming methods (getSubString has been fixed by now). There will be several changes regarding Clob performance in the next feature release. We're also in the process of adding performance regression tests for LOBs, as this is a problem we should have detected a long time ago! 
If you are still seeing the problem with the newest version, I'm very interested in getting more details from you. regards, -- Kristian
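The streaming advice above can be sketched as a buffered read with the suggested 32672-character buffer. A StringReader stands in for the character stream here so the sketch runs without a database; with Derby the Reader would come from ResultSet.getCharacterStream("data") or Clob.getCharacterStream() instead.

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class ClobRead {
    // Drain a character stream in chunks of 32672 chars, the buffer size
    // suggested above for ASCII Clob data (just under 32KB). Returns the
    // number of characters read; the content is appended to `out`.
    static long drain(Reader in, StringBuilder out) throws IOException {
        char[] buf = new char[32672];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.append(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        StringBuilder sb = new StringBuilder();
        long n = drain(new StringReader("hello clob"), sb);
        System.out.println(n + " chars read");
    }
}
```

For non-ASCII data the optimal chunk size varies with the encoding, so treat 32672 as a starting point rather than a fixed rule.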
Re: varchar length
Hello Kristian, this is a good idea, truncating by java before writing by SQL. Yes, varchar removes trailing blanks, though it would help if it would cut larger values instead of not writing them. If you have a look at the error message it shows that he tries to cut the value to the limit but has an exception. Thank you for helping Tom Kristian Waagan-4 wrote: > > tom_ wrote: >> When defining a datatype with varchar(128) and providing an input with >> more >> than 128 characters the value is not inserted in the database. There is >> an >> error message "error when trying to cut to 128 characters" (translated >> from >> german). >> > > Hello Tom, > > Only trailing white space will be truncated by Derby. I *think* this is > according to the SQL standard, but I haven't checked this (anyone?). > > Two options would be to either extract a substring from the value in > Java (i.e. before you call PreparedStatement.setString), or rely on > exception handling to detect and handle the problem. > > > regards, > -- > Kristian > >> I could encrease the maximum length but when there are a lot of values >> with >> shorter length and only very few with longer length than 128 this would >> decrease the speed of search operations ... >> > > > -- View this message in context: http://www.nabble.com/varchar-length-tp20669171p20680266.html Sent from the Apache Derby Users mailing list archive at Nabble.com.
how to get data out from Clob?
We are having some trouble with retrieving data from big Clob fields. The data is about 40-50MB and it takes forever to retrieve it. Tried a lot of different methods: getString, getClob, getAsciiStream. They all take forever (hours). When checking what's running in Derby we find CALL SYSIBM.CLOBGETSUBSTRING(?, ?, ?) a lot of times. How should I retrieve large Clob fields fastest?
Re: varchar length
tom_ wrote: When defining a datatype with varchar(128) and providing an input with more than 128 characters the value is not inserted in the database. There is an error message "error when trying to cut to 128 characters" (translated from German). Hello Tom, Only trailing white space will be truncated by Derby. I *think* this is according to the SQL standard, but I haven't checked this (anyone?). Two options would be to either extract a substring from the value in Java (i.e. before you call PreparedStatement.setString), or rely on exception handling to detect and handle the problem. regards, -- Kristian I could increase the maximum length but when there are a lot of values with shorter length and only very few with longer length than 128 this would decrease the speed of search operations ...
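The first option above, extracting a substring before calling PreparedStatement.setString, can be sketched as a small guard helper; the name fitTo and the limit 128 (matching a VARCHAR(128) column) are illustrative.

```java
public class VarcharGuard {
    // Truncate in Java before handing the value to setString, since Derby
    // only trims trailing blanks and rejects longer non-blank values with
    // a truncation error. Note: a plain substring can split a surrogate
    // pair at the cut point; trim one more char there if that matters.
    static String fitTo(String value, int maxLength) {
        if (value == null || value.length() <= maxLength) {
            return value;
        }
        return value.substring(0, maxLength);
    }
}
```

Usage would look like ps.setString(1, VarcharGuard.fitTo(input, 128)); the alternative is to let the insert fail and catch the SQLException.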
Re: Closing connection in table function
Hi Rick, Thanks a lot for your helpful tips. I opted for the second alternative (creating a ResultSet wrapper), because it's less error-prone than the first one. Thank you again for your help. -Reda -- View this message in context: http://www.nabble.com/Closing-connection-in-table-function-tp20577567p20679258.html Sent from the Apache Derby Users mailing list archive at Nabble.com.
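Rick's original suggestion isn't quoted here, but one possible shape for such a ResultSet wrapper, sketched as an assumption rather than his exact design, is a dynamic proxy that forwards every call and closes the owning Connection when the ResultSet is closed, avoiding hand-writing the whole ResultSet interface.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.ResultSet;

public class ClosingResultSet {
    // Wrap a ResultSet so that closing it also closes the Connection that
    // produced it. All other calls are forwarded unchanged.
    static ResultSet wrap(ResultSet rs, Connection con) {
        InvocationHandler h = (proxy, method, args) -> {
            boolean closing = "close".equals(method.getName());
            try {
                return method.invoke(rs, args);
            } catch (InvocationTargetException e) {
                throw e.getCause();      // unwrap the underlying SQLException
            } finally {
                if (closing) {
                    con.close();         // release the connection as well
                }
            }
        };
        return (ResultSet) Proxy.newProxyInstance(
                ResultSet.class.getClassLoader(),
                new Class<?>[] { ResultSet.class }, h);
    }
}
```

The table function then returns wrap(stmt.executeQuery(...), con), and whoever drains the result set releases the connection simply by closing it.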