RE: Out-of-memory errors

2021-02-08 Thread Peter Ondruška
OutOfMemoryError: GC overhead limit exceeded happens when the JVM spends too much 
time garbage collecting. Could it be that your heap is too small? One way to check 
whether DateTimeFormat may be leaking memory is to run the query without this function.

-Original Message-
From: John English  
Sent: Monday, February 8, 2021 10:28 AM
To: Derby Discussion 
Subject: Out-of-memory errors

In the last few days I've suddenly had a bunch of OOM exceptions. I'm using 
Derby 10.9.1.0, Oracle Java 1.8.0 on Ubuntu 64-bit, and haven't upgraded for a 
while (probably years, looking at those numbers).

The place where they happen is in a call to executeQuery() in a method which 
displays a view as a table. Analysing the heap dump for the latest one with the 
Eclipse memory analyser shows this:

One instance of "org.apache.derby.impl.store.access.sort.MergeInserter" 
loaded by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0" 
occupies 134,841,800 (64.65%) bytes. The memory is accumulated in one instance 
of "org.apache.derby.impl.store.access.sort.SortBuffer", loaded by 
"org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0", which occupies 
134,841,496 (64.65%) bytes.

One instance of "org.apache.derby.impl.services.cache.ConcurrentCache" 
loaded by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0" 
occupies 43,766,832 (20.98%) bytes. The memory is accumulated in one instance 
of "org.apache.derby.impl.services.cache.ConcurrentCache",
loaded by "org.eclipse.jetty.webapp.WebAppClassLoader @ 0xf04231b0", which 
occupies 43,766,832 (20.98%) bytes.

The query itself was:

   SELECT DateTimeFormat(t_time,null) AS t_time,
  facility,event,details,name,username,sector,item
   FROM system_log_view
   ORDER BY time DESC
   NULLS LAST
   FETCH NEXT 20 ROWS ONLY

The view is nothing special except that t_time is a duplicate of the time 
column (the timestamp of the log entry) used to create a separate formatted 
copy for display purposes:

   CREATE VIEW system_log_view AS
 SELECT  time AS t_time,
 facility,
 event,
 details,
 name,
 username,
 sector,
 item,
 time
  FROM system_log;

The stack trace shows the error is occurring inside the call to DateTimeFormat, 
which is again nothing special:

   public static final String formatDateTime (Timestamp date, String locale) {
     if (date == null) {
       return null;
     }
     else {
       SimpleDateFormat fmt = new SimpleDateFormat(translate("d-MMM-yyyy 'at' HH:mm", locale));
       return translatePhrases(fmt.format(date), locale);
     }
   }
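
The stack trace below shows a fresh SimpleDateFormat (and the DecimalFormat/DigitList 
it clones internally) being constructed for every row the expression is evaluated on. 
Whether or not that is the root cause here, caching the formatter per thread removes 
that per-row allocation. A minimal sketch (class name and pattern are illustrative, 
and the translate/translatePhrases helpers are left out):

    import java.sql.Timestamp;
    import java.text.SimpleDateFormat;

    public final class CachedFormatter {
        // SimpleDateFormat is not thread-safe, so keep one instance per thread
        // instead of building a new one for every formatted row.
        private static final ThreadLocal<SimpleDateFormat> FMT =
                ThreadLocal.withInitial(() -> new SimpleDateFormat("d-MMM-yyyy 'at' HH:mm"));

        public static String formatDateTime(Timestamp date) {
            return (date == null) ? null : FMT.get().format(date);
        }
    }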

Here's the start of the stack trace:

java.sql.SQLException: The exception 'java.lang.OutOfMemoryError: GC overhead 
limit exceeded' was thrown while evaluating an expression.
   at java.text.DigitList.clone()Ljava/lang/Object; (DigitList.java:736)
   at java.text.DecimalFormat.clone()Ljava/lang/Object; (DecimalFormat.java:2711)
   at java.text.SimpleDateFormat.initialize(Ljava/util/Locale;)V (SimpleDateFormat.java:645)
   at java.text.SimpleDateFormat.<init>(Ljava/lang/String;Ljava/util/Locale;)V (SimpleDateFormat.java:605)
   at java.text.SimpleDateFormat.<init>(Ljava/lang/String;)V (SimpleDateFormat.java:580)
   at database.Functions.formatDateTime(Ljava/sql/Timestamp;Ljava/lang/String;)Ljava/lang/String; (Functions.java:51)

Does anyone have any idea what might be happening, or what I can do to find out 
more?

TIA,
--
John English


RE: Slow mount times

2021-02-07 Thread Peter Ondruška
There is a very easy way to check whether you are going to run database recovery at 
boot time: look into the log subfolder. If you shut down gracefully, there are only 
two log files. If there are more, your database will perform roll-forward recovery, 
in the worst case applying all of the log files.

From: Rick Hillegas 
Sent: Sunday, February 7, 2021 4:11 PM
To: Derby Discussion ; Alex O'Ree 

Subject: Re: Slow mount times

I don't know of any special trace flags for this. Maybe something will turn up 
in derby.log if you set the diagnostic logging level to its most verbose level 
by running the application with the following system property:


  -Dderby.stream.error.logSeverityLevel=0

Hope this helps,
-Rick

On 2/6/21 6:53 PM, Alex O'Ree wrote:

> Thanks, I'll give it a shot.
>
> Is there any logging in derby that I can enable regarding this?
>
> On Sat, Feb 6, 2021 at 7:08 PM Rick Hillegas wrote:
>
>> The usual cause for this behavior is that the application was brought
>> down ungracefully, say via a control-c or by killing the window where it
>> was running. The engine then needs to reconstruct the state of the
>> database by replaying many recovery logs. To gracefully exit Derby, you
>> need to explicitly shut down Derby as described here:
>> https://db.apache.org/derby/docs/10.15/devguide/tdevdvlp20349.html
>>
>> On 2/6/21 3:39 PM, Alex O'Ree wrote:
>>> Sometimes when my app starts, it can take several minutes to initialize
>>> the database. Is there a way to find out what's going on? There isn't
>>> much log output. I have overridden derby.stream.error.method but other
>>> than the startup message, I don't have much to go on.
>>>
>>> Is there perhaps a startup database file check or something?
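
For reference, the graceful shutdown described at that link boils down to something 
like the following sketch for an embedded database; Derby signals a successful 
engine shutdown by throwing an SQLException with SQLState XJ015, so catching it is 
expected:

    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class ShutdownDerby {
        public static void main(String[] args) {
            try {
                // Shut down the whole embedded engine; a clean shutdown checkpoints
                // the database so no recovery is needed on the next boot.
                DriverManager.getConnection("jdbc:derby:;shutdown=true");
            } catch (SQLException e) {
                if ("XJ015".equals(e.getSQLState())) {
                    System.out.println("Derby shut down cleanly");
                } else {
                    e.printStackTrace();
                }
            }
        }
    }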












Re: I don't find the JDBC driver class in derbyclient.jar

2020-03-05 Thread Peter Ondruška
Hi, in the default setup you do not need to use authentication at all. However,
if you do, the APP user's default password is APP. This was the case for 10.14 and
I guess the same applies to 10.15. Peter

On Thu, 5 Mar 2020, 10:25 Richard Grin, 
wrote:

> Thanks a lot Rick. I put these 3 files in the lib directory of Payara and
> everything is working now.
>
> In the meanwhile I found the page
> http://db.apache.org/derby/docs/10.15/getstart/twwdactivity4.html but it
> does not mention the file derbytools.jar; is it a mistake?
>
> I am not used to Derby and I had a problem to solve with the default name
> and password to give (APP) in order to connect my Web application to Derby.
> I solved it by guessing the "APP" values to put. Where can I find this
> information in the documentation?
>
> Richard
> Le 05/03/2020 à 00:53, Rick Hillegas a écrit :
>
> Hey Richard,
>
> The drivers moved into derbytools.jar as part of the JPMS modularization
> work introduced by the previous feature release (10.15.1.3). In addition to
> derbyclient.jar, you will need to put derbyshared.jar and derbytools.jar on
> your client-side classpath or modulepath. Please see the corresponding
> release notes at
> http://db.apache.org/derby/releases/release-10.15.1.3.html
>
> Hope this helps,
> -Rick
>
> On 3/4/20 11:10 AM, Richard Grin wrote:
>
> Hi,
>
> I have just started using Derby (version 10.15.2.0) today.
>
> I would like to use it with Payara server so I have to put the JDBC
> driver in Payara. Payara can't find the class
> org.apache.derby.jdbc.ClientDriver. I looked for this class in the
> derbyclient.jar file but I couldn't find it.
>
> What's my mistake? Has the driver class changed? Is the class in another
> file?
>
> I have read articles about JDBC driver for old versions of Derby Network
> server but I can't find the information for the 10.15.2.0 version.
>
> Richard
>
>
>
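
For completeness, a minimal client-side sketch against a 10.15 network server, 
assuming derbyclient.jar, derbyshared.jar and derbytools.jar are all on the classpath 
(host, port, database name and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ClientConnect {
        public static void main(String[] args) throws Exception {
            // In 10.15 the driver class lives in derbytools.jar, but it still
            // needs derbyclient.jar and derbyshared.jar next to it.
            Class.forName("org.apache.derby.jdbc.ClientDriver");
            String url = "jdbc:derby://localhost:1527/myDB";
            try (Connection conn = DriverManager.getConnection(url, "APP", "APP")) {
                System.out.println("Connected to "
                        + conn.getMetaData().getDatabaseProductVersion());
            }
        }
    }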


Re: Avoid locking on DELETE

2019-10-16 Thread Peter
Hi Peter,

Thanks.

This procedure with autocommit disabled is indeed simpler than what I
had before via "DELETE FROM mytable WHERE id IN (...)", but the delete
itself takes longer (5-6 times) and I do not see differences with
different batch sizes.

I've also benchmarked this process against PostgreSQL, and Derby seems to
be much slower here. I will investigate more, as migrating is also a risk.

Regards
Peter

On 16.10.19 14:03, Peter Ondruška wrote:
> You would need to test various scenarios. First I would propose larger
> batch size (N thousands of rows). Are you sure you execute deletes in
> batches? You should have autocommit off, execute N times delete
> statement, commit, repeat. Pseudo code (I am on mobile phone):
>
> 1. Acquire connection
> 2. Set connection autocommit to false
> 3. Create prepared statement with delete, DELETE FROM WHERE primary
> key = ?
> 4. Create set of primary keys to be deleted
> 5. Iterate set (4.) with adding those keys to delete statement (3.) as
> batch
> 6. When you reach batch size or end of key set execute batch and
> commit, continue (5.)
>
> In my case with slow disk this really performs better and should avoid
> your issue as well.
>
> Peter
>
> On Mon, 7 Oct 2019, 22:11 Peter wrote:
>
> Hi Peter,
>
> Thanks! I have implemented this and indeed the maximum delays are
> lower but the time for a delete batch to complete takes now longer
> (roughly 3-4 times; for batchSize=500, total deleted items around
> ~1). The problem is likely that I have VARCHAR for the ID column.
>
> If I increase the frequency of issuing the original DELETE statement:
>
> DELETE FROM mytable WHERE created_at < ?
>
> Won't it have a similar effect due to smaller batches?
>
> Regards
> Peter
>
> On 07.10.19 16:31, Peter Ondruška wrote:
>> In my case I have two separate steps. First SELECT primary keys
>> of those records to be deleted (in your case SELECT id FROM
>> mytable WHERE created_at < some_fixed_millis). And then I issue
>> DELETE for those primary keys in batches of N statements (N being
>> configuration parameter). You could create stored procedure for
>> this with two parameters (some_fixed_millis, batch_size).
>> Your idea DELETE WHERE SELECT and limiting rows needs to be run
>> for every DELETE step making unnecessary read I/O.
>>
>>
>> On Mon, 7 Oct 2019 at 14:10, Peter wrote:
>>
>> Hi Peter,
>>
>> Thanks a lot for the suggestion. This would be nice if it
>> performs better.
>>
>> Is the idea to split one request into smaller parts or will
>> "Select+Delete IDs" just perform better?
>>
>> And regarding the latter option - is this possible in one SQL
>> request? So something like
>>
>> DELETE FROM mytable WHERE id IN 
>>
>> ( SELECT id FROM mytable WHERE created_at < some_fixed_millis OFFSET 
>> 0 ROWS FETCH NEXT 1000 ROWS ONLY )
>>
>>
>> And then loop through the results via changing OFFSET and
>> ROWS? (Btw: the column created_at is indexed)
>>
>> Or would you recommend doing this as 2 separate statements in
>> Java/JDBC? Or via maybe even just issuing the original DELETE
>> request more frequent?
>>
>> Regards
>> Peter
>>
>> On 06.10.19 03:50, Peter Ondruška wrote:
>>> Peter, try this if it makes a difference:
>>>
>>> 1. Select entries to be deleted, note their primary keys.
>>> 2. Issue delete using keys to be deleted (1.) and use short
>>> transaction batches.
>>>
>>> On Sun, 6 Oct 2019, 01:33 Peter wrote:
>>>
>>> Hi,
>>>
>>> I have a table "mytable" with columns "id", "created_at"
>>> and "json"
>>> (VARCHAR, BIGINT, LONG VARCHAR), where data is coming in
>>> like new 200k
>>> entries every hour and I would like to keep only entries
>>> of the last 1
>>> or 2 hours. It is expected behaviour for the user if too
>>> old entries
>>> gets lost as it is some kind of a LRU cache.
>>>
>>> The current solution is to 

Re: Avoid locking on DELETE

2019-10-16 Thread Peter Ondruška
You would need to test various scenarios. First I would propose a larger
batch size (N thousands of rows). Are you sure you execute deletes in
batches? You should have autocommit off, execute the delete statement N times,
commit, repeat. Pseudo code (I am on a mobile phone):

1. Acquire connection
2. Set connection autocommit to false
3. Create prepared statement with delete, DELETE FROM WHERE primary key = ?
4. Create set of primary keys to be deleted
5. Iterate set (4.) with adding those keys to delete statement (3.) as batch
6. When you reach batch size or end of key set execute batch and commit,
continue (5.)

In my case with slow disk this really performs better and should avoid your
issue as well.

Peter
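
As a concrete illustration, the six steps above might look roughly like this in 
plain JDBC (the connection URL, table and column names follow the earlier messages 
in this thread; the batch size is arbitrary):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.List;

    public class BatchedDelete {
        public static void main(String[] args) throws Exception {
            long cutoff = System.currentTimeMillis() - 4L * 60 * 60 * 1000; // "4 hours ago"
            int batchSize = 1000;

            try (Connection conn = DriverManager.getConnection("jdbc:derby:myDB")) {
                conn.setAutoCommit(false);                              // step 2

                List<String> ids = new ArrayList<>();                   // step 4
                try (PreparedStatement sel = conn.prepareStatement(
                        "SELECT id FROM mytable WHERE created_at < ?")) {
                    sel.setLong(1, cutoff);
                    try (ResultSet rs = sel.executeQuery()) {
                        while (rs.next()) {
                            ids.add(rs.getString(1));
                        }
                    }
                }

                try (PreparedStatement del = conn.prepareStatement(
                        "DELETE FROM mytable WHERE id = ?")) {          // step 3
                    int inBatch = 0;
                    for (String id : ids) {                             // step 5
                        del.setString(1, id);
                        del.addBatch();
                        if (++inBatch == batchSize) {                   // step 6
                            del.executeBatch();
                            conn.commit();
                            inBatch = 0;
                        }
                    }
                    if (inBatch > 0) {
                        del.executeBatch();
                        conn.commit();
                    }
                }
            }
        }
    }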

On Mon, 7 Oct 2019, 22:11 Peter,  wrote:

> Hi Peter,
>
> Thanks! I have implemented this and indeed the maximum delays are lower
> but the time for a delete batch to complete takes now longer (roughly 3-4
> times; for batchSize=500, total deleted items around ~1). The problem
> is likely that I have VARCHAR for the ID column.
>
> If I increase the frequency of issuing the original DELETE statement:
>
> DELETE FROM mytable WHERE created_at < ?
>
> Won't it have a similar effect due to smaller batches?
>
> Regards
> Peter
>
> On 07.10.19 16:31, Peter Ondruška wrote:
>
> In my case I have two separate steps. First SELECT primary keys of those
> records to be deleted (in your case SELECT id FROM mytable WHERE created_at
> < some_fixed_millis). And then I issue DELETE for those primary keys in
> batches of N statements (N being configuration parameter). You could create
> stored procedure for this with two parameters (some_fixed_millis,
> batch_size).
> Your idea DELETE WHERE SELECT and limiting rows needs to be run for every
> DELETE step making unnecessary read I/O.
>
>
> On Mon, 7 Oct 2019 at 14:10, Peter  wrote:
>
>> Hi Peter,
>>
>> Thanks a lot for the suggestion. This would be nice if it performs better.
>>
>> Is the idea to split one request into smaller parts or will
>> "Select+Delete IDs" just perform better?
>>
>> And regarding the latter option - is this possible in one SQL request? So
>> something like
>>
>> DELETE FROM mytable WHERE id IN
>>
>> ( SELECT id FROM mytable WHERE created_at < some_fixed_millis OFFSET 0 ROWS 
>> FETCH NEXT 1000 ROWS ONLY )
>>
>>
>> And then loop through the results via changing OFFSET and ROWS? (Btw: the
>> column created_at is indexed)
>>
>> Or would you recommend doing this as 2 separate statements in Java/JDBC?
>> Or via maybe even just issuing the original DELETE request more frequent?
>>
>> Regards
>> Peter
>>
>> On 06.10.19 03:50, Peter Ondruška wrote:
>>
>> Peter, try this if it makes a difference:
>>
>> 1. Select entries to be deleted, note their primary keys.
>> 2. Issue delete using keys to be deleted (1.) and use short transaction
>> batches.
>>
>> On Sun, 6 Oct 2019, 01:33 Peter,  wrote:
>>
>>> Hi,
>>>
>>> I have a table "mytable" with columns "id", "created_at" and "json"
>>> (VARCHAR, BIGINT, LONG VARCHAR), where data is coming in like new 200k
>>> entries every hour and I would like to keep only entries of the last 1
>>> or 2 hours. It is expected behaviour for the user if too old entries
>>> gets lost as it is some kind of a LRU cache.
>>>
>>> The current solution is to delete entries older than 4 hours every 30
>>> minutes:
>>>
>>> DELETE FROM mytable WHERE created_at < ?
>>>
>>> I'm using this in a prepared statement where ? is "4 hours ago" in
>>> milliseconds (new DateTime().getMillis()).
>>>
>>> This works, but some (not all) INSERT statement get a bigger delay in
>>> the same order (2-5 seconds) that this DELETE takes, which is ugly.
>>> These INSERT statements are executed independently (using different
>>> threads) of the DELETE.
>>>
>>> Is there a better way? Can I somehow avoid locking the unrelated INSERT
>>> operations?
>>>
>>> What helps a bit is when I make those deletes more frequently than the
>>> delays will get smaller, but then the number of those delayed requests
>>> will increase.
>>>
>>> What also helps a bit (currently have not seen a negative impact) is
>>> increasing the page size for the Derby Network Server:
>>> -Dderby.storage.pageSize=32768
>>>
>>> Regards
>>> Peter
>>>
>>>
>>
>


Re: Avoid locking on DELETE

2019-10-07 Thread Peter
Hi Peter,

Thanks! I have implemented this and indeed the maximum delays are lower
but the time for a delete batch to complete takes now longer (roughly
3-4 times; for batchSize=500, total deleted items around ~1). The
problem is likely that I have VARCHAR for the ID column.

If I increase the frequency of issuing the original DELETE statement:

DELETE FROM mytable WHERE created_at < ?

Won't it have a similar effect due to smaller batches?

Regards
Peter

On 07.10.19 16:31, Peter Ondruška wrote:
> In my case I have two separate steps. First SELECT primary keys of
> those records to be deleted (in your case SELECT id FROM mytable WHERE
> created_at < some_fixed_millis). And then I issue DELETE for those
> primary keys in batches of N statements (N being configuration
> parameter). You could create stored procedure for this with two
> parameters (some_fixed_millis, batch_size).
> Your idea DELETE WHERE SELECT and limiting rows needs to be run for
> every DELETE step making unnecessary read I/O.
>
>
> On Mon, 7 Oct 2019 at 14:10, Peter wrote:
>
> Hi Peter,
>
> Thanks a lot for the suggestion. This would be nice if it performs
> better.
>
> Is the idea to split one request into smaller parts or will
> "Select+Delete IDs" just perform better?
>
> And regarding the latter option - is this possible in one SQL
> request? So something like
>
> DELETE FROM mytable WHERE id IN 
>
> ( SELECT id FROM mytable WHERE created_at < some_fixed_millis OFFSET 0 
> ROWS FETCH NEXT 1000 ROWS ONLY )
>
>
> And then loop through the results via changing OFFSET and ROWS?
> (Btw: the column created_at is indexed)
>
> Or would you recommend doing this as 2 separate statements in
> Java/JDBC? Or via maybe even just issuing the original DELETE
> request more frequent?
>
> Regards
> Peter
>
> On 06.10.19 03:50, Peter Ondruška wrote:
>> Peter, try this if it makes a difference:
>>
>> 1. Select entries to be deleted, note their primary keys.
>> 2. Issue delete using keys to be deleted (1.) and use short
>> transaction batches.
>>
>> On Sun, 6 Oct 2019, 01:33 Peter wrote:
>>
>> Hi,
>>
>> I have a table "mytable" with columns "id", "created_at" and
>> "json"
>> (VARCHAR, BIGINT, LONG VARCHAR), where data is coming in like
>> new 200k
>> entries every hour and I would like to keep only entries of
>> the last 1
>> or 2 hours. It is expected behaviour for the user if too old
>> entries
>> gets lost as it is some kind of a LRU cache.
>>
>> The current solution is to delete entries older than 4 hours
>> every 30
>> minutes:
>>
>> DELETE FROM mytable WHERE created_at < ?
>>
>> I'm using this in a prepared statement where ? is "4 hours
>> ago" in
>> milliseconds (new DateTime().getMillis()).
>>
>> This works, but some (not all) INSERT statement get a bigger
>> delay in
>> the same order (2-5 seconds) that this DELETE takes, which is
>> ugly.
>> These INSERT statements are executed independently (using
>> different
>> threads) of the DELETE.
>>
>> Is there a better way? Can I somehow avoid locking the
>> unrelated INSERT
>> operations?
>>
>> What helps a bit is when I make those deletes more frequently
>> than the
>> delays will get smaller, but then the number of those delayed
>> requests
>> will increase.
>>
>> What also helps a bit (currently have not seen a negative
>> impact) is
>> increasing the page size for the Derby Network Server:
>> -Dderby.storage.pageSize=32768
>>
>> Regards
>> Peter
>>
>



Re: Avoid locking on DELETE

2019-10-07 Thread Peter Ondruška
In my case I have two separate steps. First I SELECT the primary keys of those
records to be deleted (in your case SELECT id FROM mytable WHERE created_at
< some_fixed_millis). Then I issue DELETEs for those primary keys in
batches of N statements (N being a configuration parameter). You could create
a stored procedure for this with two parameters (some_fixed_millis,
batch_size).

Your idea of a DELETE with a row-limited subselect means the SELECT has to be run
for every DELETE step, causing unnecessary read I/O.


On Mon, 7 Oct 2019 at 14:10, Peter  wrote:

> Hi Peter,
>
> Thanks a lot for the suggestion. This would be nice if it performs better.
>
> Is the idea to split one request into smaller parts or will "Select+Delete
> IDs" just perform better?
>
> And regarding the latter option - is this possible in one SQL request? So
> something like
>
> DELETE FROM mytable WHERE id IN
>
> ( SELECT id FROM mytable WHERE created_at < some_fixed_millis OFFSET 0 ROWS 
> FETCH NEXT 1000 ROWS ONLY )
>
>
> And then loop through the results via changing OFFSET and ROWS? (Btw: the
> column created_at is indexed)
>
> Or would you recommend doing this as 2 separate statements in Java/JDBC?
> Or via maybe even just issuing the original DELETE request more frequent?
>
> Regards
> Peter
>
> On 06.10.19 03:50, Peter Ondruška wrote:
>
> Peter, try this if it makes a difference:
>
> 1. Select entries to be deleted, note their primary keys.
> 2. Issue delete using keys to be deleted (1.) and use short transaction
> batches.
>
> On Sun, 6 Oct 2019, 01:33 Peter,  wrote:
>
>> Hi,
>>
>> I have a table "mytable" with columns "id", "created_at" and "json"
>> (VARCHAR, BIGINT, LONG VARCHAR), where data is coming in like new 200k
>> entries every hour and I would like to keep only entries of the last 1
>> or 2 hours. It is expected behaviour for the user if too old entries
>> gets lost as it is some kind of a LRU cache.
>>
>> The current solution is to delete entries older than 4 hours every 30
>> minutes:
>>
>> DELETE FROM mytable WHERE created_at < ?
>>
>> I'm using this in a prepared statement where ? is "4 hours ago" in
>> milliseconds (new DateTime().getMillis()).
>>
>> This works, but some (not all) INSERT statement get a bigger delay in
>> the same order (2-5 seconds) that this DELETE takes, which is ugly.
>> These INSERT statements are executed independently (using different
>> threads) of the DELETE.
>>
>> Is there a better way? Can I somehow avoid locking the unrelated INSERT
>> operations?
>>
>> What helps a bit is when I make those deletes more frequently than the
>> delays will get smaller, but then the number of those delayed requests
>> will increase.
>>
>> What also helps a bit (currently have not seen a negative impact) is
>> increasing the page size for the Derby Network Server:
>> -Dderby.storage.pageSize=32768
>>
>> Regards
>> Peter
>>
>>
>


Re: Avoid locking on DELETE

2019-10-07 Thread Peter
Hi Peter,

Thanks a lot for the suggestion. This would be nice if it performs better.

Is the idea to split one request into smaller parts or will
"Select+Delete IDs" just perform better?

And regarding the latter option - is this possible in one SQL request?
So something like

DELETE FROM mytable WHERE id IN 

( SELECT id FROM mytable WHERE created_at < some_fixed_millis OFFSET 0 ROWS 
FETCH NEXT 1000 ROWS ONLY )


And then loop through the results via changing OFFSET and ROWS? (Btw:
the column created_at is indexed)

Or would you recommend doing this as 2 separate statements in Java/JDBC?
Or via maybe even just issuing the original DELETE request more frequent?

Regards
Peter

On 06.10.19 03:50, Peter Ondruška wrote:
> Peter, try this if it makes a difference:
>
> 1. Select entries to be deleted, note their primary keys.
> 2. Issue delete using keys to be deleted (1.) and use short
> transaction batches.
>
> On Sun, 6 Oct 2019, 01:33 Peter wrote:
>
> Hi,
>
> I have a table "mytable" with columns "id", "created_at" and "json"
> (VARCHAR, BIGINT, LONG VARCHAR), where data is coming in like new 200k
> entries every hour and I would like to keep only entries of the last 1
> or 2 hours. It is expected behaviour for the user if too old entries
> gets lost as it is some kind of a LRU cache.
>
> The current solution is to delete entries older than 4 hours every 30
> minutes:
>
> DELETE FROM mytable WHERE created_at < ?
>
> I'm using this in a prepared statement where ? is "4 hours ago" in
> milliseconds (new DateTime().getMillis()).
>
> This works, but some (not all) INSERT statement get a bigger delay in
> the same order (2-5 seconds) that this DELETE takes, which is ugly.
> These INSERT statements are executed independently (using different
> threads) of the DELETE.
>
> Is there a better way? Can I somehow avoid locking the unrelated
> INSERT
> operations?
>
> What helps a bit is when I make those deletes more frequently than the
> delays will get smaller, but then the number of those delayed requests
> will increase.
>
> What also helps a bit (currently have not seen a negative impact) is
> increasing the page size for the Derby Network Server:
> -Dderby.storage.pageSize=32768
>
> Regards
> Peter
>



Re: Avoid locking on DELETE

2019-10-05 Thread Peter Ondruška
Peter, try this if it makes a difference:

1. Select entries to be deleted, note their primary keys.
2. Issue delete using keys to be deleted (1.) and use short transaction
batches.

On Sun, 6 Oct 2019, 01:33 Peter,  wrote:

> Hi,
>
> I have a table "mytable" with columns "id", "created_at" and "json"
> (VARCHAR, BIGINT, LONG VARCHAR), where data is coming in like new 200k
> entries every hour and I would like to keep only entries of the last 1
> or 2 hours. It is expected behaviour for the user if too old entries
> gets lost as it is some kind of a LRU cache.
>
> The current solution is to delete entries older than 4 hours every 30
> minutes:
>
> DELETE FROM mytable WHERE created_at < ?
>
> I'm using this in a prepared statement where ? is "4 hours ago" in
> milliseconds (new DateTime().getMillis()).
>
> This works, but some (not all) INSERT statement get a bigger delay in
> the same order (2-5 seconds) that this DELETE takes, which is ugly.
> These INSERT statements are executed independently (using different
> threads) of the DELETE.
>
> Is there a better way? Can I somehow avoid locking the unrelated INSERT
> operations?
>
> What helps a bit is when I make those deletes more frequently than the
> delays will get smaller, but then the number of those delayed requests
> will increase.
>
> What also helps a bit (currently have not seen a negative impact) is
> increasing the page size for the Derby Network Server:
> -Dderby.storage.pageSize=32768
>
> Regards
> Peter
>
>


Avoid locking on DELETE

2019-10-05 Thread Peter
Hi,

I have a table "mytable" with columns "id", "created_at" and "json"
(VARCHAR, BIGINT, LONG VARCHAR), where data is coming in like new 200k
entries every hour and I would like to keep only entries of the last 1
or 2 hours. It is expected behaviour for the user if too old entries
gets lost as it is some kind of a LRU cache.

The current solution is to delete entries older than 4 hours every 30
minutes:

DELETE FROM mytable WHERE created_at < ?

I'm using this in a prepared statement where ? is "4 hours ago" in
milliseconds (new DateTime().getMillis()).

This works, but some (not all) INSERT statements get a bigger delay, on
the same order (2-5 seconds) as this DELETE takes, which is ugly.
These INSERT statements are executed independently (using different
threads) of the DELETE.

Is there a better way? Can I somehow avoid locking the unrelated INSERT
operations?

What helps a bit: when I make those deletes more frequent, the
delays get smaller, but then the number of those delayed requests
will increase.

What also helps a bit (currently have not seen a negative impact) is
increasing the page size for the Derby Network Server:
-Dderby.storage.pageSize=32768

Regards
Peter



Re: [derby] searching within a blob

2019-09-30 Thread Peter Ondruška
Alex, I think this may be a solution:
https://db.apache.org/derby/docs/10.14/tools/rtoolsoptlucene.html

On Mon, 30 Sep 2019 at 18:18, Alex O'Ree  wrote:

> I have a use case where I have string data stored in a blob and I want to
> perform a query similar to
>
> select * from table where column1 like '%hello world%'
>
> It doesn't look like this is possible with derby out of the box. Is there
> a way to create a function that calls a java function or something that can
> be used to make this work?
>


Re: Derby DB Encryption

2019-07-24 Thread Peter Ondruška
https://db.apache.org/derby/docs/10.14/security/cseccsecure88690.html

The default encryption algorithm is DES.

You can specify an encryption provider and/or encryption algorithm other
than the defaults by using the encryptionProvider=*providerName* and
encryptionAlgorithm=*algorithm* attributes.
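
As a quick sketch of both points: those attributes plus bootPassword go on the 
connection URL when the database is created, and the simplest way to confirm the 
encryption really happened is to shut the database down and try to boot it again 
without the boot password (database name, algorithm and password below are only 
examples):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class EncryptDemo {
        public static void main(String[] args) throws Exception {
            // Create a new encrypted database with a non-default algorithm.
            try (Connection c = DriverManager.getConnection(
                    "jdbc:derby:encDB;create=true;dataEncryption=true;"
                  + "encryptionAlgorithm=AES/CBC/NoPadding;bootPassword=changeMe12345")) {
                System.out.println("Encrypted database created");
            }

            // Shut the database down so the next connection has to boot it again.
            try {
                DriverManager.getConnection("jdbc:derby:encDB;shutdown=true");
            } catch (SQLException expected) {
                // SQLState 08006 signals a successful single-database shutdown.
            }

            // Booting without the bootPassword should now fail (typically XJ040
            // wrapping XBM06, "cannot be accessed without the correct boot password"),
            // which confirms the database really is encrypted.
            try (Connection c = DriverManager.getConnection("jdbc:derby:encDB")) {
                System.out.println("Unexpected: booted without a boot password");
            } catch (SQLException e) {
                System.out.println("Boot refused as expected: " + e.getSQLState());
            }
        }
    }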

On Thu, 25 Jul 2019, 01:13 Oskar Z,  wrote:

> Does anyone know what is the default encryption algorithm for the Derby DB
> encryption?
>
> Thanks,
> Oskar
>
> On Jul 24, 2019, at 6:14 PM, Oskar Z  wrote:
>
> Looks like the database when being encrypted, must be the FIRST connection
> to DB. If DB has existing connections before encryption, then it will not
> work, and thus the passwords don’t matter.
>
> If encryption is done as a first connection to DB, then the DB must be
> shutdown, and then it seems to work, and the first call should have
> bootPassword. That’s what I found.
>
> Thanks for the help and pointers! It got me thinking :-)
>
> Regards,
> Oskar
>
> On Jul 24, 2019, at 2:08 PM, Peter Ondruška 
> wrote:
>
> Well, you "boot" with bootPassword only once. After your database is
> opened you do not need to specify bootPassword anymore. Maybe even
> specifying incorrect bootPassword after database is already opened does not
> trigger any error and may seem misleading.
>
> On Wed, 24 Jul 2019 at 19:47, Oskar Zinger  wrote:
>
>> I already have authentication working fine. I would like to also have
>> data encryption.
>>
>> Can I have both authentication and data encryption in Derby?
>>
>> Sent from my iPhone
>>
>> On Jul 24, 2019, at 11:37 AM, Peter Ondruška 
>> wrote:
>>
>> Oskar, you mixed two distinct topics, encryption and authentication. You
>> should also follow
>> https://db.apache.org/derby/docs/10.14/security/cseccsecure42374.html.
>> Peter
>>
>> On Wed, 24 Jul 2019 at 16:27, Oskar Z  wrote:
>>
>>> Hello - hope that someone has experience with Derby encryption.
>>>
>>> I’ve been using this documentation:
>>> https://db.apache.org/derby/docs/10.14/security/cseccsecure97760.html
>>>
>>> I’m not sure what’s happening, I’ve encrypted the DB using
>>> dataEncryption=true and provided bootPassword, at first I used the same
>>> password as the Owner of DB, but then I used a different password. But
>>> regardless, whatever password I specify or don't specify as bootPassword,
>>> the connection is made OK.
>>>
>>> So to me, it looks as if the DB has not been really encrypted, even
>>> though I don't see any exceptions/errors in derby.log.
>>>
>>> How can I tell for sure that DB encryption really happened?
>>>
>>> Thanks for you help!
>>>
>>> Kind regards,
>>> Oskar
>>>
>>
>> kaibo, s.r.o., ID 28435036, registered with the commercial register
>> administered by the Municipal Court in Prague, section C, file 141269.
>> Registered office: Kališnická 379/10, Prague 3, 130 00, Czech Republic.
>> https://kaibo.eu
>>
>>
> kaibo, s.r.o., ID 28435036, registered with the commercial register
> administered by the Municipal Court in Prague, section C, file 141269.
> Registered office: Kališnická 379/10, Prague 3, 130 00, Czech Republic.
> https://kaibo.eu
>
>
>
>

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, file 141269.

Registered office: Kališnická 379/10, Prague 3, 130 00, Czech Republic.

https://kaibo.eu


Re: Derby DB Encryption

2019-07-24 Thread Peter Ondruška
Well, you "boot" with bootPassword only once. After your database is opened
you do not need to specify bootPassword anymore. Maybe even specifying
incorrect bootPassword after database is already opened does not trigger
any error and may seem misleading.

On Wed, 24 Jul 2019 at 19:47, Oskar Zinger  wrote:

> I already have authentication working fine. I would like to also have data
> encryption.
>
> Can I have both authentication and data encryption in Derby?
>
> Sent from my iPhone
>
> On Jul 24, 2019, at 11:37 AM, Peter Ondruška 
> wrote:
>
> Oskar, you mixed two distinct topics, encryption and authentication. You
> should also follow
> https://db.apache.org/derby/docs/10.14/security/cseccsecure42374.html.
> Peter
>
> On Wed, 24 Jul 2019 at 16:27, Oskar Z  wrote:
>
>> Hello - hope that someone has experience with Derby encryption.
>>
>> I’ve been using this documentation:
>> https://db.apache.org/derby/docs/10.14/security/cseccsecure97760.html
>>
>> I’m not sure what’s happening, I’ve encrypted the DB using
>> dataEncryption=true and provided bootPassword, at first I used the same
>> password as the Owner of DB, but then I used a different password. But
>> regardless, whatever password I specify or don't specify as bootPassword,
>> the connection is made OK.
>>
>> So to me, it looks as if the DB has not been really encrypted, even
>> though I don't see any exceptions/errors in derby.log.
>>
>> How can I tell for sure that DB encryption really happened?
>>
>> Thanks for you help!
>>
>> Kind regards,
>> Oskar
>>
>
> kaibo, s.r.o., ID 28435036, registered with the commercial register
> administered by the Municipal Court in Prague, section C, file 141269.
> Registered office: Kališnická 379/10, Prague 3, 130 00, Czech Republic.
> https://kaibo.eu
>
>

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, file 141269.

Registered office: Kališnická 379/10, Prague 3, 130 00, Czech Republic.

https://kaibo.eu


Re: Derby DB Encryption

2019-07-24 Thread Peter Ondruška
Oskar, you mixed two distinct topics, encryption and authentication. You
should also follow
https://db.apache.org/derby/docs/10.14/security/cseccsecure42374.html. Peter

On Wed, 24 Jul 2019 at 16:27, Oskar Z  wrote:

> Hello - hope that someone has experience with Derby encryption.
>
> I’ve been using this documentation:
> https://db.apache.org/derby/docs/10.14/security/cseccsecure97760.html
>
> I’m not sure what’s happening, I’ve encrypted the DB using
> dataEncryption=true and provided bootPassword, at first I used the same
> password as the Owner of DB, but then I used a different password. But
> regardless, whatever password I specify or don't specify as bootPassword,
> the connection is made OK.
>
> So to me, it looks as if the DB has not been really encrypted, even though
> I don't see any exceptions/errors in derby.log.
>
> How can I tell for sure that DB encryption really happened?
>
> Thanks for you help!
>
> Kind regards,
> Oskar
>

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, file 141269.

Registered office: Kališnická 379/10, Prague 3, 130 00, Czech Republic.

https://kaibo.eu


Re: any security how to guides for a hybrid derby setup?

2018-12-11 Thread Peter
Hello Alex,

We are doing this:

System.setProperty("javax.net.ssl.keyStore", config.getKeyStorePath());
System.setProperty("javax.net.ssl.keyStorePassword",
config.getKeyStorePassword());

Kind Regards
Peter

Am 11.12.18 um 15:20 schrieb Alex O'Ree:
> The derby security guide for enabling tls connection supports only
> loading the keystore location/password from the global system
> properties. Is there a way to provide this programmatically? I'd
> rather not define this setting globally within the jvm as it's shared
> with tomcat and a number of other components.
>
> There is a NetworkServerControl#getCurrentProperties() method. Can I
> inject the javax.net.ssl properties through there before starting the
> server?
>
> On Mon, Nov 26, 2018 at 7:10 PM Rick Hillegas wrote:
>
> On 11/26/18 3:58 PM, Alex O'Ree wrote:
> > My primary use case is to use an embedded derby within my webapp
> for
> > storage and whatnot. I also have another requirement to provide
> > localhost (and possible remote access) to the database via jdbc
> > connection. I know how to get derby up and running
> programmatically in
> > embedded mode and with the network connection, however I'm not
> super
> > sure how to wire up authentication, permissions, ssl/tls, etc. Is
> > there a guide somewhere for configuring this?
>
> Hi Alex,
>
> The Derby Security Guide should have all the information you need:
> http://db.apache.org/derby/docs/10.14/security/index.html
>
> Hope this helps,
>
> -Rick
>



Re: cannot make indexes on long varchar?

2018-12-01 Thread Peter Ondruška
Not true, you can index such columns, but it does not make sense using a
standard index, as Mikael explained. For such cases follow "Using the luceneSupport
optional tool":
https://db.apache.org/derby/docs/10.14/tools/rtoolsoptlucene.html


On Sat, 1 Dec 2018 at 14:48, Alex O'Ree  wrote:

> Is there a particular reason we can't do indexes on long varchar columns?
>


-- 
Peter Ondruška

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, file 141269.

Registered office and postal address: kaibo, s.r.o., Kališnická 379/10, 
Prague 3, 130 00, Czech Republic.
https://www.kaibo.eu


Re: Installing Derby

2018-11-28 Thread Peter Ondruška
You have started Derby from the C:\ directory as a non-privileged user, and the Derby
engine is trying to create its derby.log file in that location, which is not writable.
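
One way around it (the path below is just an example) is to point derby.system.home 
at a directory the user can write to before starting ij, so derby.log and the new 
database are created there:

   C:\>java -Dderby.system.home=C:\Users\Bob\derby org.apache.derby.tools.ij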

On Wed, 28 Nov 2018 at 04:07, Bob M  wrote:

> Hi
> Am following the instructions
> All going well until I attempt to create my first database
>
> I receive the following message
> ***
> C:\>java org.apache.derby.tools.ij
> ij version 10.14
> ij> connect 'jdbc:derby:myFirstDBase;create=true';
> Wed Nov 28 15:55:44 NZDT 2018 Thread[main,5,main]
> java.io.FileNotFoundException: derby.log (Access is denied)
> ***
> what have I forgotten to do ?
> Bob M
>
>
>
> --
> Sent from:
> http://apache-database.10148.n7.nabble.com/Apache-Derby-Users-f95095.html
>
-- 
Peter Ondruška

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, file 141269.

Registered office and postal address: kaibo, s.r.o., Kališnická 379/10, 
Prague 3, 130 00, Czech Republic.
https://www.kaibo.eu


Re: attempting to migrate from postgres to derby

2018-11-15 Thread Peter
Interesting, thanks!

Derby does indeed come without many surprises and with zero administration (at
least for us :)). We have run it in production for years and never had a
problem, and this durability was the main reason we chose it in the first
place. Still, as our requirements are a bit special, performance was an
issue (in the beginning) and we had to do a lot of caching, so the
resulting main workload for Derby is relatively simple.

(btw: We run postgres for a different workload and have made also good
experience with it and nothing similar to what you reported.)

Regards
Peter

Am 15.11.18 um 22:53 schrieb Alex O'Ree:
> While postgres has been a good database, I've recently run into a
> number of issues that I haven't been able to resolve and/or understand,
> so I'm doing some experiments to see if other vendors have the same
> issue. Having derby be embedded also has advantages, as I can remove
> installing postgres as an installation step and remove a lot of
> overhead when working with the database.
>
> 1) randomly corrupted indexes, this causes all inserts to fail which
> causes data loss until the indexes are either dropped and recreated or
> the "reindex" command is issued.
> 2) the app in question is primarily large amounts of inserts. At
> higher volumes of data ingest, postgres periodically takes long
> pauses. inserts go from single digit ms to 30secs+. I think it's some
> kind of checkpoint or flushing a transaction log, or perhaps it's
> related to auto vacuum. Despite configuration tweaks, i was unable to
> work around it.
>
>
>
> On Thu, Nov 15, 2018 at 4:38 PM Peter wrote:
>
> Hello Alex,
>
> May I ask why are you moving to Derby? What are your pain points with
> Postgres?
>
> Kind Regards
> Peter
>
>
> Am 14.11.18 um 22:22 schrieb Alex O'Ree:
> > Greetings. I'm looking for some kind of migration guide and for things
> > to watch out for when migrating an application to derby.
> >
> > Since I haven't found one yet, I decided to write down and share some
> > of my notes on the things I've ran into so far:
> >
> > DDL - From postgres, there's lots of differences.
> > - Postgres 'text' becomes 'long varchar'
> > - Can't insert from 'text literal' into a blob without some
> quick code
> > and a function to convert it
> > - Postgres gives you the option to select the index type, derby does
> > not appear to. have this function. Not really sure what kind of
> index
> > it is either. btree?
> >
> > JDBC clients
> > - limit and offset has a bit of a strange syntax. most rdbs will
> > access just the literal limit 10 offset 1 syntax. Derby appears to
> > need to wrap this in { }, so select * from table { limit 10
> offset 10}
> > - from a JDBC client, don't include semicolons in your sql code.
> >
> > For the last two, is this "normal"? I have a large code base and
> > refactoring it would be painful. I'm thinking it may be easier
> to hack
> > up the jdbc driver to "fix" the sql statements on the fly. Any
> > thoughts on this? maybe there is some kind of configuration
> setting to
> > make this easier?
>
>



Re: attempting to migrate from postgres to derby

2018-11-15 Thread Peter
Hello Alex,

May I ask why are you moving to Derby? What are your pain points with
Postgres?

Kind Regards
Peter


Am 14.11.18 um 22:22 schrieb Alex O'Ree:
> Greetings. I'm looking for some kind of migration guide and for things
> to watch out for when migrating an application to derby.
>
> Since I haven't found one yet, I decided to write down and share some
> of my notes on the things I've ran into so far:
>
> DDL - From postgres, there's lots of differences.
> - Postgres 'text' becomes 'long varchar'
> - Can't insert from 'text literal' into a blob without some quick code
> and a function to convert it
> - Postgres gives you the option to select the index type, derby does
> not appear to. have this function. Not really sure what kind of index
> it is either. btree?
>
> JDBC clients
> - limit and offset has a bit of a strange syntax. most rdbs will
> access just the literal limit 10 offset 1 syntax. Derby appears to
> need to wrap this in { }, so select * from table { limit 10 offset 10}
> - from a JDBC client, don't include semicolons in your sql code.
>
> For the last two, is this "normal"? I have a large code base and
> refactoring it would be painful. I'm thinking it may be easier to hack
> up the jdbc driver to "fix" the sql statements on the fly. Any
> thoughts on this? maybe there is some kind of configuration setting to
> make this easier?
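
On the limit/offset point: Derby also accepts the standard OFFSET ... FETCH clauses 
directly, so paging from JDBC can avoid the { limit ... } escape altogether. A sketch 
(table and column names are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class Pagination {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
                 PreparedStatement ps = conn.prepareStatement(
                     "SELECT id FROM mytable ORDER BY created_at DESC "
                   + "OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY");   // no trailing semicolon in JDBC
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("id"));
                }
            }
        }
    }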




Re: Force TLSv1.2 or higher for the server

2018-07-10 Thread Peter
Hello Bryan,

Thanks for your answer.
I already saw the property and issue DERBY-6764 and tried the
suggestions but they did not lead to just one enabled protocol.

For peerAuthentication there should be a way to provide the
SSLSocketFactory, where one could try to override getEnabledProtocols of
SSLSocket without changing any code of Derby, but I wasn't able to manage
this.

Also in SSLSocketFactory.getDefault the fallback is
SSLContext.getDefault().getSocketFactory() and so something like this:

SSLContext sslContext = SSLContext.getInstance("TLSv1.2");
sslContext.init(null, null, null);
SSLContext.setDefault(sslContext);

could be used (or the method used in NaiveTrustManager) ... but again in
my case it still printed the 3 enabled protocols.

I think for future it might be wise to support this out of the box as
TLS1.3 is already supported in the JDK 11
https://bugs.openjdk.java.net/browse/JDK-8196584 and the older two are
deprecated.

Kind Regards
Peter

Am 09.07.2018 um 18:39 schrieb Bryan Pendleton:
> There was a similar, but not identical, discussion around these topics
> four years ago, when the code was changed to remove SSLv3 and SSLv2
> support. See DERBY-6764 for the full details.
>
> I think it would certainly be possible to change the code in a similar way
> to allow more configurability, but I am not sure of the implications, and if
> it is similar to the DERBY-6764 work, a fair amount of testing is required.
>
> According to this article:
> https://blogs.oracle.com/java-platform-group/jdk-8-will-use-tls-12-as-default
> you might investigate using the deployment.security.TLSvX.Y=false
> system property.
>
> Perhaps you could investigate whether the referenced blog article
> allows a configuration that suits your needs?
>
> Please let us know what you learn!
>
> thanks,
>
> bryan
>
>
> On Mon, Jul 9, 2018 at 3:25 AM, Peter  wrote:
>> Hello,
>>
>> I cannot find a way to force the server to just use TLSv1.2. Currently
>> it says:
>>
>> Apache Derby Network Server - 10.13.1.1 - (1765088) Enabled Protocols
>> are TLSv1, TLSv1.1, TLSv1.2
>>
>> even when using
>>
>> -Dhttps.protocols=TLSv1.2
>>
>> or similar settings found on the internet. Then I saw in the source:
>>
>> SSLContext ctx = SSLContext.getInstance("TLS");
>>
>> https://github.com/apache/derby/blob/f16c46cbdd5be8dd9bdcee935ec1f68970146478/java/org.apache.derby.commons/org/apache/derby/shared/common/drda/NaiveTrustManager.java#L73
>>
>> that it seems to ignore command line settings. Is it possible to add
>> such a property or a different workaround to avoid older TLS versions?
>>
>> Regards
>> Peter
>>



Force TLSv1.2 or higher for the server

2018-07-09 Thread Peter
Hello,

I cannot find a way to force the server to just use TLSv1.2. Currently
it says:

Apache Derby Network Server - 10.13.1.1 - (1765088) Enabled Protocols
are TLSv1, TLSv1.1, TLSv1.2

even when using

-Dhttps.protocols=TLSv1.2

or similar settings found on the internet. Then I saw in the source:

SSLContext ctx = SSLContext.getInstance("TLS");

https://github.com/apache/derby/blob/f16c46cbdd5be8dd9bdcee935ec1f68970146478/java/org.apache.derby.commons/org/apache/derby/shared/common/drda/NaiveTrustManager.java#L73

that it seems to ignore command line settings. Is it possible to add
such a property or a different workaround to avoid older TLS versions?

Regards
Peter



Re: Very, very large .dat files

2017-12-05 Thread Peter Hansson


Thanks for taking the time to answer. Appreciated.

You basically confirmed my understanding: I need to go for a DIY 
approach if I want table partitioning.


On 05-Dec-17 17:30, Bryan Pendleton wrote:

You are correct, Derby does not provide table partitioning features
such as those provided by Oracle.

And you are correct, a single Derby table is a single .dat file.

But modern filesystems handle very large files without problems. Is
there a particular reason that you think a very large file will be a
problem for you? Or that you think a larger number of smaller .dat
files would have some benefit?

I believe there are a variety of library/framework approaches to
splitting up a logical table into multiple physical tables. People
often do this with data that is oriented around "time", and becomes of
less interest as it ages. I've seen applications which, e.g., create a
new table each day, named appropriately, and "rotate" these tables
over time so that the old data ages out and is dropped (by dropping
the older no-longer-wanted tables).

Careful use of CREATE VIEW can shield your application from most of
the impact of such techniques.

thanks,

bryan
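
A sketch of the rotation idea described above (table and view names are made up): the 
application creates a new physical table per day and recreates a view over the tables 
it still wants visible, so readers keep querying one name while old data is dropped a 
whole table at a time:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RotateTables {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
                 Statement st = conn.createStatement()) {
                // New physical table for today's data.
                st.executeUpdate("CREATE TABLE events_20171206 (id BIGINT, payload LONG VARCHAR)");

                // Recreate the view so readers only see the tables still wanted.
                st.executeUpdate("DROP VIEW events");
                st.executeUpdate("CREATE VIEW events AS "
                               + "SELECT id, payload FROM events_20171205 "
                               + "UNION ALL SELECT id, payload FROM events_20171206");

                // The oldest day ages out by dropping its table, which removes
                // its .dat file without a long-running DELETE.
                st.executeUpdate("DROP TABLE events_20171204");
            }
        }
    }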





Re: Very, very large .dat files

2017-12-05 Thread Peter Hansson


I guess the question is if Apache Derby supports table partitioning (the 
term used in Oracle RDBMS).  I understand it doesn't.


This means that if I have an incredibly large Derby table then that will
mean an incredibly large single .dat file too. (A table is always stored
in a single .dat file, right?)


Any suggestions? Any props that may control this?


On 05-Dec-17 13:28, Peter Hansson wrote:


We've seen .dat files  in seg0 directory grow to several hundreds of 
gigabytes. While everything works ok such a file becomes unmanageable 
from an OS point of view. Is there a way to control when Derby starts 
a new conglomerate?  ... so that there are more .dat files but each of 
them of less size ?


Thanks.





Very, very large .dat files

2017-12-05 Thread Peter Hansson


We've seen .dat files in the seg0 directory grow to several hundreds of
gigabytes. While everything works OK, such a file becomes unmanageable
from an OS point of view. Is there a way to control when Derby starts a
new conglomerate, so that there are more .dat files but each of them is
smaller?


Thanks.



Re: Incremental Online Backup

2017-09-16 Thread Peter Ondruška
It depends on what you mean by incremental. If you want to minimize the data
created by a backup (at the cost of a longer recovery time), enable log archiving and
preserve the full database backup plus the log files created after it. There is no
such thing as backing up "incrementally", i.e. only those blocks that
changed since the last backup.

I would recommend running with log archiving anyway and doing an online backup
(SYSCS_UTIL.SYSCS_BACKUP_DATABASE_AND_ENABLE_LOG_ARCHIVE_MODE) frequently
enough (e.g. daily) to avoid having to apply many log files during recovery.

See https://db.apache.org/derby/docs/10.0/manuals/admin/hubprnt46.html and
documentation on the procedures used in that page.

p.
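
Called from JDBC, that backup looks roughly like this (the backup directory is just 
an example; the second argument, 1, tells Derby to delete online archived log files 
that the new backup makes unnecessary):

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class NightlyBackup {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
                 CallableStatement cs = conn.prepareCall(
                     "CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE_AND_ENABLE_LOG_ARCHIVE_MODE(?, ?)")) {
                cs.setString(1, "/backup/nightly");  // target directory for the backup copy
                cs.setInt(2, 1);                     // 1 = delete obsolete archived log files
                cs.execute();
            }
        }
    }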

On 16 September 2017 at 14:32, Shreyans Jain <shreyans2...@gmail.com> wrote:

> Is it possible in any way (may be a workaround or hack) to do online
> incremental backup of anything (logs, data files etc) by which database can
> be restored using that backup.
>
> Regards,
> Shreyans Jain
>



-- 
Peter Ondruška

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, insert 141269.
Registered office and postal address: kaibo, s.r.o., Kališnická 379/10, 
Prague 3, 130 00, Czech Republic.
https://www.kaibo.eu


Re: SYSCS_DIAG.TRANSACTION_TABLE stale records

2017-08-11 Thread Peter Ondruška
Dear Brett,

I did not mention it, but I use the latest stable Derby (10.13). I have checked
with https://bitbucket.org/ondruska/xadbreco and no XA transactions are
reported.

p.
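
A minimal sketch of that kind of check without the external tool, listing the 
in-doubt XA transactions the server itself still knows about via XAResource.recover 
(server name, port and database name are placeholders):

    import javax.sql.XAConnection;
    import javax.transaction.xa.XAResource;
    import javax.transaction.xa.Xid;
    import org.apache.derby.jdbc.ClientXADataSource;

    public class ListInDoubtXa {
        public static void main(String[] args) throws Exception {
            ClientXADataSource ds = new ClientXADataSource();
            ds.setServerName("localhost");
            ds.setPortNumber(1527);
            ds.setDatabaseName("myDB");

            XAConnection xac = ds.getXAConnection();
            try {
                // recover() returns only transactions that were prepared but never
                // committed or rolled back; those are the ones that legitimately
                // survive a client disconnect.
                Xid[] inDoubt = xac.getXAResource()
                        .recover(XAResource.TMSTARTRSCAN | XAResource.TMENDRSCAN);
                System.out.println(inDoubt.length + " in-doubt XA transaction(s)");
            } finally {
                xac.close();
            }
        }
    }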

On 11 August 2017 at 14:47, Bergquist, Brett <bbergqu...@canoga.com> wrote:

> Sorry for the late response to this but I did want to comment.  We are
> using ClientXADataSource extensively with Glassfish.   Our transactions are
> correctly reported in the SYSCS_DIAG.TRANSACTION_TABLE.   The only time
> that they have stuck around is when the connection between Glassfish and
> the Derby Network Server has been severed before the XA “prepare” or
> “commit” phase has been reached or due to a XA transaction timeout bug in
> Derby which I fixed and supplied and is in the latest builds (10.10.2.0 is
> what I am using).
>
>
>
> Having the transaction stay around is of course the correct thing since XA
> is the distributed protocol and until prepare/commit/rollback has been
> performed, Derby (the XA resource) has no idea the state of the transaction.
>
>
>
> I think I would write a little program to lists the XA transactions that
> are still open and see if those reported by the
> SYSCS_DIAG.TRANSACTION_TABLE are not in fact real XA transactions that have
> not been finalized.
>
>
>
> *From:* Rick Hillegas [mailto:rick.hille...@gmail.com]
> *Sent:* Tuesday, July 11, 2017 8:56 PM
> *To:* derby-user@db.apache.org
> *Subject:* Re: SYSCS_DIAG.TRANSACTION_TABLE stale records
>
>
>
> Hi Peter,
>
> How are you disconnecting the sessions? I would expect to see 1
> transaction for every active session, as the following script demonstrates:
>
> -- 1 active session = 1 open transaction
> connect 'jdbc:derby:memory:db;create=true' as conn1;
> select count(*) from syscs_diag.transaction_table;
>
> -- 2 active sessions = 2 open transactions
> connect 'jdbc:derby:memory:db' as conn2;
> select count(*) from syscs_diag.transaction_table;
>
> -- 3 active sessions = 3 open transactions
> connect 'jdbc:derby:memory:db' as conn3;
> select count(*) from syscs_diag.transaction_table;
>
> -- 2 active sessions = 2 open transactions
> disconnect;
> set connection conn1;
> select count(*) from syscs_diag.transaction_table;
>
> -- 1 active session = 1 open transaction
> set connection conn2;
> disconnect;
> set connection conn1;
> select count(*) from syscs_diag.transaction_table;
>
> Thanks,
> -Rick
>
> On 7/11/17 10:10 AM, Peter Ondruška wrote:
>
> Dear all,
>
> the documentation mentions "The SYSCS_DIAG.TRANSACTION_TABLE diagnostic
> table shows *all of the transactions that are currently *in the
> database." Is it really correct? In my case I have an application server
> (Payara) connected to database with ClientXADataSource. Over time the
> record count in this table grows. When I stop application server and all
> database sessions are disconnected, record count stays with no change and I
> would expect that it drops as transactions are definitely closed. The only
> way to "clean" the diagnostic table is to restart database.
>
> All the records are same (different XID of course):
>
> XID       GLOBAL_XID  USERNAME  TYPE               STATUS  FIRST_INSTANT  SQL_TEXT
> 79512765  NULL        APP       UserTransaction    IDLE    NULL           NULL
>
> except one SystemTransaction:
> XID       GLOBAL_XID  USERNAME  TYPE               STATUS  FIRST_INSTANT  SQL_TEXT
> 79241843  NULL        NULL      SystemTransaction  IDLE    NULL           NULL
>
> and one UserTransaction (as expected):
> XID       GLOBAL_XID  USERNAME  TYPE               STATUS  FIRST_INSTANT  SQL_TEXT
> 79604720  NULL        APP       UserTransaction    IDLE    NULL           SELECT * FROM syscs_diag.transaction_table
>
> Regards,
>
>
> --
>
> Peter Ondruška
>
>
> kaibo, s.r.o., ID 28435036, registered with the commercial register
> administered by the Municipal Court in Prague, section C, insert 141269.
> Registered office and postal address: kaibo, s.r.o., Kališnická 379/10,
> Prague 3, 130 00, Czech Republic.
> https://www.kaibo.eu
>
>
>
> --
> Canoga Perkins
> 20600 Prairie Street
> Chatsworth, CA 91311
> (818) 718-6300
>
> This e-mail and any attached document(s) is confidential and is intended
> only for the review of the party to whom it is addressed. If you have
> received this transmission in error, please notify the sender immediately
> and discard the original message and any attachment(s).
>



-- 
Peter Ondruška

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, insert 141269.
Registered office and postal address: kaibo, s.r.o., Kališnická 379/10, 
Prague 3, 130 00, Czech Republic.
https://www.kaibo.eu


Re: Derby Database Corruption Issues

2017-07-12 Thread Peter Ondruška
Hi,

This is still valid (and in general for all systems utilising logging).

Regards,
p.

On 12 July 2017 at 20:04, Shreyans Jain <shreyans2...@gmail.com> wrote:

> I was reading https://wiki.apache.org/db-derby/DatabaseCorruption and I
> found a prevention-of-corruption tip which states
> "Switch off the machine's write caching". Now, the article was written in
> 2013. Is it still applicable to Derby database corruption, or does it no
> longer have an effect?
>
> Regards,
> Shreyans Jain
>



-- 
Peter Ondruška

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, insert 141269.
Registered office and postal address: kaibo, s.r.o., Kališnická 379/10, 
Prague 3, 130 00, Czech Republic.
https://www.kaibo.eu


SYSCS_DIAG.TRANSACTION_TABLE stale records

2017-07-11 Thread Peter Ondruška
Dear all,

the documentation mentions "The SYSCS_DIAG.TRANSACTION_TABLE diagnostic
table shows *all of the transactions that are currently *in the database."
Is it really correct? In my case I have an application server (Payara)
connected to database with ClientXADataSource. Over time the record count
in this table grows. When I stop application server and all database
sessions are disconnected, record count stays with no change and I would
expect that it drops as transactions are definitely closed. The only way to
"clean" the diagnostic table is to restart database.

All the records are same (different XID of course):

XID       GLOBAL_XID  USERNAME  TYPE               STATUS  FIRST_INSTANT  SQL_TEXT
79512765  NULL        APP       UserTransaction    IDLE    NULL           NULL

except one SystemTransaction:
XID       GLOBAL_XID  USERNAME  TYPE               STATUS  FIRST_INSTANT  SQL_TEXT
79241843  NULL        NULL      SystemTransaction  IDLE    NULL           NULL

and one UserTransaction (as expected):
XID       GLOBAL_XID  USERNAME  TYPE               STATUS  FIRST_INSTANT  SQL_TEXT
79604720  NULL        APP       UserTransaction    IDLE    NULL           SELECT * FROM syscs_diag.transaction_table

Regards,

-- 
Peter Ondruška

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, insert 141269.
Registered office and postal address: kaibo, s.r.o., Kališnická 379/10, 
Prague 3, 130 00, Czech Republic.
https://www.kaibo.eu


Re: Page cache sizing

2017-07-09 Thread Peter Ondruška
        at org.apache.derby.impl.store.raw.data.CachedPage.setPageArray(Unknown Source)
        at org.apache.derby.impl.store.raw.data.CachedPage.readPage(Unknown Source)
        at org.apache.derby.impl.store.raw.data.CachedPage.setIdentity(Unknown Source)
        at org.apache.derby.impl.services.cache.ConcurrentCache.find(Unknown Source)
        at org.apache.derby.impl.store.raw.data.FileContainer.getLatchedPage(Unknown Source)
        at org.apache.derby.impl.store.raw.data.RAFContainer.backupContainer(Unknown Source)
        at org.apache.derby.impl.store.raw.data.BaseContainerHandle.backupContainer(Unknown Source)
        at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.backupDataFiles(Unknown Source)
        at org.apache.derby.impl.store.raw.RawStore.backup(Unknown Source)
        at org.apache.derby.impl.store.raw.RawStore.backup(Unknown Source)

Regards,
p.




On 7 July 2017 at 04:59, Bryan Pendleton <bpendleton.de...@gmail.com> wrote:

>
>
> how does derby.storage.pageCacheSize parameter (
>> https://db.apache.org/derby/docs/10.13/ref/rrefproper81359.html) work
>> with database that has multiple page sizes--tables with default 4096 bytes
>> and tables with long/blob of 32768 byte pages?
>>
>>
> Hi Peter,
>
> I'm not 100% sure how this works; I think you should run some experiments.
>
> Here's what I *think* the behavior is:
> 1) The page cache is sized in 4K pages, so if you set pageCacheSize=1024,
> you get 4 meg of page cache memory
> 2) Tables with 4K pages simply use the cache as you expect.
> 3) Tables with 32K pages chew up 8 cache "pages" at a time, each 32K chunk
> of page cache holding 1 32K page from that large-page table.
>
> Some stuff I found while searching around, which might give you some ideas
> for experiments you could run:
>
> http://apache-database.10148.n7.nabble.com/out-of-memory-
> when-writing-blobs-td100948.html
>
> and
>
> https://issues.apache.org/jira/browse/DERBY-4537
>
> Sorry I'm not of much more help here.
>
> bryan
>
>
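For reference, a minimal sketch of where the cache size discussed above is
configured; the value is purely illustrative, and the 32K-page behaviour is
Bryan's best guess rather than a documented guarantee:

  # derby.properties (or pass the same name as a -D system property)
  # with 4 KB pages this is roughly a 4 MB page cache; a table using 32 KB
  # pages would, per the description above, occupy 8 cache slots per page
  derby.storage.pageCacheSize=1024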


-- 
Peter Ondruška

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, insert 141269.
Registered office and postal address: kaibo, s.r.o., Kališnická 379/10, 
Prague 3, 130 00, Czech Republic.
https://www.kaibo.eu


Page cache sizing

2017-07-06 Thread Peter Ondruška
Dear all,

how does derby.storage.pageCacheSize parameter (
https://db.apache.org/derby/docs/10.13/ref/rrefproper81359.html) work with
database that has multiple page sizes--tables with default 4096 bytes and
tables with long/blob of 32768 byte pages?

Thanks,

-- 
Peter Ondruška

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, insert 141269.
Registered office and postal address: kaibo, s.r.o., Kališnická 379/10, 
Prague 3, 130 00, Czech Republic.
https://www.kaibo.eu


SYSCS_DIAG.ERROR_LOG_READER

2017-06-28 Thread Peter Ondruška
Dear all,

in 10.13.1.1 (not tested with older) SYSCS_DIAG.ERROR_LOG_READER does not
work with database booted in server with
-Dderby.stream.error.style=rollingFile.

There are derby.log and derby-0.log in home directory and derby-0.log has
error messages which I expect to be always the current log file when
rollingFile is used.

SELECT * FROM TABLE (SYSCS_DIAG.ERROR_LOG_READER('derby-0.log')) AS T1;
DISCONNECT;
ij version 10.13
ij> ij> ij> ERROR 38000: The exception 'java.sql.SQLException: derby-0.log
(No such file or directory)' was thrown while evaluating an expression.
ERROR null: derby-0.log (No such file or directory)
ij> ij> Finished: SUCCESS

SELECT * FROM TABLE (SYSCS_DIAG.ERROR_LOG_READER('derby.log')) AS T1;
DISCONNECT;
ij version 10.13
ij> ij> ij> ERROR 38000: The exception 'java.sql.SQLException: derby.log
(No such file or directory)' was thrown while evaluating an expression.
ERROR null: derby.log (No such file or directory)
ij> ij> Finished: SUCCESS

SELECT * FROM TABLE (SYSCS_DIAG.ERROR_LOG_READER()) AS T1;
DISCONNECT;
ij version 10.13
ij> ij> ij> TS |THREADID|XID|LCCID |DATABASE |DRDAID |LOGTEXT
---
0 rows selected
ij> ij> Finished: SUCCESS

Under same environment without rollingFile everything works fine and
content of derby.log is read.

-- 
Peter Ondruška

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, insert 141269.
Registered office and postal address: kaibo, s.r.o., Kališnická 379/10, 
Prague 3, 130 00, Czech Republic.
https://www.kaibo.eu


Re: sunsetting support for Java 8

2017-06-22 Thread Peter Ondruška
Dear Rick,

well, nice to know and I do not see a reason to object but does anybody
have ideas what Java 9 adoption will look like(?).

p.

On 22 June 2017 at 04:15, Rick Hillegas <rick.hille...@gmail.com> wrote:

> The current schedule for Java 9 calls for a GA date some time this autumn:
> http://mail.openjdk.java.net/pipermail/jdk9-dev/2017-June/005867.html.
> Shortly after that, I would expect that the Derby community would publish a
> new 10.14.1 feature release: https://wiki.apache.org/db-der
> by/DerbyTenFourteenOneRelease
>
> I would like to propose that 10.14 be the last release family which runs
> on Java 8. The 10.15 family would only run on Java 9 and higher. I expect
> that we would produce a 10.15.1 release some time after the first
> maintenance release of Java 9 goes GA, probably late in 2018.
>
> Please us know if this platform policy would be a great hardship for you.
>
> Thanks,
> -Rick
>
>


-- 
Peter Ondruška

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, insert 141269.
Registered office and postal address: kaibo, s.r.o., Kališnická 379/10, 
Prague 3, 130 00, Czech Republic.
https://www.kaibo.eu


Re: Connection authentication failure occurred. Reason: Invalid authentication..

2017-05-30 Thread Peter Ondruška
Dear Rick,

well, that is why it is strange because I am very certain I use correct
username and password to connect and the data from database can be
accessed. Yes, NATIVE credentials are stored in the database itself. There
are no other errors or warnings besides this one.

Peter

On 29 May 2017 at 19:26, Rick Hillegas <rick.hille...@gmail.com> wrote:

> Hi Peter,
>
> This is the error which Derby raises when the user presents invalid
> credentials at connection time. Are you confident that correct credentials
> were given? Are the NATIVE credentials stored in the database being
> connected to? Or are they stored in a system-wide credentials database?
> What other errors appear in the diagnostic log prior to this error?
>
> Thanks,
> -Rick
>
>
> On 5/29/17 12:12 AM, Peter Ondruška wrote:
>
> Hello,
>
> I am facing a strange situation with 10.13.1.1. This error is logged when the
> load against Derby is higher than usual:
>
> ***
> Mon May 29 08:31:10 CEST 2017 Thread[DRDAConnThread_27,5,main] (XID =
> 74907526), (SESSIONID = 22748), (DATABASE = /*removed*/), (DRDAID =
> .-43515881796
> 1723857{1069}), Cleanup action starting
> java.sql.SQLNonTransientConnectionException: Connection authentication
> failure occurred.  Reason: Invalid authentication..
> at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
> Source)
> at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
> Source)
> at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
> Source)
> at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
> Source)
> at org.apache.derby.impl.jdbc.EmbedConnection.newSQLException(Unknown
> Source)
> at 
> org.apache.derby.impl.jdbc.EmbedConnection.checkUserCredentials(Unknown
> Source)
> at org.apache.derby.impl.jdbc.EmbedConnection.(Unknown
> Source)
> at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source)
> at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Source)
> at java.security.AccessController.doPrivileged(
> AccessController.java:650)
> at org.apache.derby.jdbc.InternalDriver.getNewEmbedConnection(Unknown
> Source)
> at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
> at 
> org.apache.derby.jdbc.BasicEmbeddedDataSource40.getConnection(Unknown
> Source)
> at 
> org.apache.derby.jdbc.EmbedPooledConnection.openRealConnection(Unknown
> Source)
> at org.apache.derby.jdbc.EmbedXAConnection.getRealConnection(Unknown
> Source)
> at 
> org.apache.derby.iapi.jdbc.BrokeredConnection.getRealConnection(Unknown
> Source)
> at org.apache.derby.iapi.jdbc.BrokeredConnection.isClosed(Unknown
> Source)
> at 
> org.apache.derby.impl.drda.PiggyBackedSessionData.getInstance(Unknown
> Source)
> at 
> org.apache.derby.impl.drda.Database.getPiggyBackedSessionData(Unknown
> Source)
> at org.apache.derby.impl.drda.DRDAConnThread.writePBSD(Unknown
> Source)
> at org.apache.derby.impl.drda.DRDAConnThread.processCommands(Unknown
> Source)
> at org.apache.derby.impl.drda.DRDAConnThread.run(Unknown Source)
> Caused by: ERROR 08004: Connection authentication failure occurred.
> Reason: Invalid authentication..
> at org.apache.derby.iapi.error.StandardException.newException(Unknown
> Source)
> at org.apache.derby.impl.jdbc.SQLExceptionFactory.
> wrapArgsForTransportAcrossDRDA(Unknown Source)
> ... 22 more
> = begin nested exception, level (1) ===
> ERROR 08004: Connection authentication failure occurred.  Reason: Invalid
> authentication..
> at org.apache.derby.iapi.error.StandardException.newException(Unknown
> Source)
> at org.apache.derby.impl.jdbc.SQLExceptionFactory.
> wrapArgsForTransportAcrossDRDA(Unknown Source)
> at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
> Source)
> at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
> Source)
> at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
> Source)
> at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown
> Source)
> at org.apache.derby.impl.jdbc.EmbedConnection.newSQLException(Unknown
> Source)
> at 
> org.apache.derby.impl.jdbc.EmbedConnection.checkUserCredentials(Unknown
> Source)
> at org.apache.derby.impl.jdbc.EmbedConnection.(Unknown
> Source)
> at org.apache.derby.jdbc.InternalDriver$1.run(Unknown Sourc

Connection authentication failure occurred. Reason: Invalid authentication..

2017-05-29 Thread Peter Ondruška
 with
-Dderby.authentication.native.passwordLifetimeMillis=0 just to be sure but
it has no influence. All the connections are pooled from Payara/Glassfish
with ClientXADataSource and therefore it is strange that I see mentions of
Embedded in the trace. And there are no messages from Payara about failed
authentications.

Any ideas? :)

-- 
Peter Ondruška

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, insert 141269.
Registered office and postal address: kaibo, s.r.o., Kališnická 379/10, 
Prague 3, 130 00, Czech Republic.
https://www.kaibo.eu


Re: BACKUP.HISTORY file

2017-01-20 Thread Peter Ondruška
Looking at source code
https://svn.apache.org/repos/asf/db/derby/code/tags/10.12.1.1/java/engine/org/apache/derby/impl/store/raw/RawStore.java#backup
it appears there is no way without modifying source.

On 20 January 2017 at 21:17, Lazur, Eric J. <eric.la...@jhuapl.edu> wrote:

> Hello,
>
>
>
> How can I turn off the creation of the BACKUP.HISTORY file that is created
> by the Derby database?  I am looking for a specific setting in the Derby
> configuration  that I can adjust that will allow me to either turn off the
> generation of the BACKUP.HISTORY file altogether or at least disable any
> writing of information to this file.  We are maintaining a system that does
> frequent backups and we are finding that these BACKUP.HISTORY files are
> growing very large and consuming too much disk space.
>
>
>
> I have searched through the archives at the Derby site but I have been
> unable to locate any information that specifically addresses this issue.
>
>
>
> Thank you,
>
>
>
> -Eric
>
>
>
> JHU/APL
>
>
>



-- 
Peter Ondruška

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, insert 141269.
Registered office and postal address: kaibo, s.r.o., Kališnická 379/10, 
Prague 3, 130 00, Czech Republic.
https://www.kaibo.eu


[SOLVED] Re: Alter table fails with Derby 10.10.2.0

2017-01-18 Thread Peter Nabbefeld

Hi,

found somewhere a hint to "SYSCS_UTIL.SYSCS_COMPRESS_TABLE", which I 
could use to fix my table.


Regards
P.


Am 19.01.2017 um 06:49 schrieb Peter Nabbefeld:


Hello,

I'm using Derby 10.10.2.0 - (1582446). ALTER TABLE fails with following
message (I've copied the Hibernate part, too, but tried also manual):


INFO
[org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl]:
HHH000115: Hibernate connection pool size: 20 (min=1)
INFO
[org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl]:
HHH000115: Hibernate connection pool size: 20 (min=1)
INFO [org.hibernate.tool.hbm2ddl.SchemaUpdate]: HHH000228: Running
hbm2ddl schema update
INFO [org.hibernate.tool.hbm2ddl.SchemaUpdate]: HHH000228: Running
hbm2ddl schema update
SEVERE [org.openide.util.Exceptions]
org.apache.derby.client.am.SqlException: In einer Basistabelle wich die
Anzahl der angeforderten Spalten 28 von der maximalen Spaltenanzahl 29 ab.
at org.apache.derby.client.am.Statement.completeSqlca(Unknown Source)
[...]

For the message in German, the meaning is:
In a base table the number of requested columns 28 differs from the
maximum number of columns 29.

It seems, the number of columns has been upgraded, but the new column
has not been added - how can I repair that manually?

Kind regards
Peter







Alter table fails with Derby 10.10.2.0

2017-01-18 Thread Peter Nabbefeld


Hello,

I'm using Derby 10.10.2.0 - (1582446). ALTER TABLE fails with following 
message (I've copied the Hibernate part, too, but tried also manual):



INFO 
[org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl]: 
HHH000115: Hibernate connection pool size: 20 (min=1)
INFO 
[org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl]: 
HHH000115: Hibernate connection pool size: 20 (min=1)
INFO [org.hibernate.tool.hbm2ddl.SchemaUpdate]: HHH000228: Running 
hbm2ddl schema update
INFO [org.hibernate.tool.hbm2ddl.SchemaUpdate]: HHH000228: Running 
hbm2ddl schema update

SEVERE [org.openide.util.Exceptions]
org.apache.derby.client.am.SqlException: In einer Basistabelle wich die 
Anzahl der angeforderten Spalten 28 von der maximalen Spaltenanzahl 29 ab.

at org.apache.derby.client.am.Statement.completeSqlca(Unknown Source)
[...]

For the message in German, the meaning is:
In a base table the number of requested columns 28 differs from the 
maximum number of columns 29.


It seems, the number of columns has been upgraded, but the new column 
has not been added - how can I repair that manually?


Kind regards
Peter



Re: Use Apache Derby Network Server with encrypted database

2016-08-25 Thread Peter Ondruška
Just a note on "If a person is already on the machine and would be able to
sniff the local IP traffic, that person may also have access on the files
of Derby.": to prevent this you encrypt the database and carefully manage the
encryption key. But if somebody intercepts unencrypted network traffic (be
it local or remote) he can also intercept the encryption key used to boot the
database, and then all encryption is useless; besides, the attacker will have the
username and password to log in to the started database and can export data over
the network.
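A rough sketch of the recommended SSL setup; host, port, key store path and
passwords are placeholders, and the client JVM additionally needs a trust store
that trusts the server certificate:

  # server side: start the network server with basic SSL
  java -Djavax.net.ssl.keyStore=/path/to/serverKeyStore.jks \
       -Djavax.net.ssl.keyStorePassword=serverKeyStorePassword \
       org.apache.derby.drda.NetworkServerControl start -ssl basic

  # client side: request SSL on the connection URL
  jdbc:derby://dbhost:1527/encryptedDb;bootPassword=mySecretBootPassword;ssl=basic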

On 25 August 2016 at 08:47, Hohl, Gerrit <g.h...@aurenz.de> wrote:

> Hello Peter,
>
> hello George,
>
>
>
> thank you for your 2 mails. And sorry I didn't reply earlier.
>
> Yes, I also realized that there is no difference between the embedded and
> the standalone version.
>
> The only exception is that you have to put the encryption library in the
> CLASSPATH of the network server if you want to use one.
>
> Everything else can be passed to the database via the connection.
>
>
>
> About SSL: As I use it as a local database, but in a different process
> than then application (this way I can do some maintenance even independent
> from the application) I guess I don't need it.
>
> Or let's put it this way: If a person is already on the machine and would
> be able to sniff the local IP traffic, that person may also have access on
> the files of Derby.
>
> And somewhere there is also the password for the keystore as well as the
> path to the keystore which contains the certificate(s). I guess from there
> to the boot password it is only a short way.
>
> Or I'm wrong?
>
>
>
> For a database installed on a different machine I definitely would
> recommend using SSL, of course.
>
>
>
> Thanks for your mails again. :-)
>
>
>
> Gruß
>
> Gerrit
>
>
>
> *Von:* toma.georg...@yahoo.com [mailto:toma.georg...@yahoo.com]
> *Gesendet:* Mittwoch, 24. August 2016 22:13
> *An:* derby-user@db.apache.org
> *Betreff:* Re: Use Apache Derby Network Server with encrypted database
>
>
>
> Hi Gerrit,
>
>
>
> Based on Apache Derby page, between the embedded mode and network mode
> there is no difference, https://db.apache.org/derby/
> docs/10.0/manuals/admin/hubprnt19.html#Differences+
> between+running+Derby+in+embedded+mode+and+using+the+Network+Server
>
>
>
> Have you tried to create an encrypted database via the network mode using
> the steps mentioned in your link ? It should work, otherwise can you please
> post your errors/stacktrace/exceptions.
>
>
>
> What I did on my side to try your scenario :
>
>1. Start Apache Derby in network mode
>
>
>1. Create a dummy encrypted database and connect to it via *ij*:
>
>
>1.  connect 'jdbc:derby://*localhost:1527*/MyDbTest;create=true;
>   dataEncryption=true';
>
>
>1. After that I’ve started to create tables and to execute SQL queries
>(select), just to play with the database.
>
>
>1. It worked, no difference between embedded vs network mode.
>
>
>1. Remember to append to the URL the host and the port where Apache
>   Derby server is started ( in my case it was localhost : 1527).
>
>
>
> What I’ve followed in order to achieve the above:
>
>1. http://db.apache.org/derby/papers/DerbyTut/ns_intro.html#
>ij_ns_client
>
>
>1. http://db.apache.org/derby/papers/DerbyTut/ij_intro.html#ij_connect
>
>
>1. https://db.apache.org/derby/docs/10.0/manuals/develop/develop15.html
>
>
>1. If you want a more custom example regarding the algorithm that can
>be used to encrypt the database, please have a look into this page :
>
>
>1. https://db.apache.org/derby/docs/10.2/ref/rrefattribencryptkey.html
>
>
>
> If you need more information, please let me know.
>
>
>
> Regards,
>
> George
>
>
>
>
>
> Sent from Windows Mail
>
>
>
> *Von:* Peter Ondruška [mailto:peter.ondru...@kaibo.eu]
> *Gesendet:* Mittwoch, 24. August 2016 12:32
> *An:* Derby Discussion
> *Betreff:* Re: Use Apache Derby Network Server with encrypted database
>
>
>
> Dear Gerrit,
>
> from my understanding the only difference with Derby network server and
> embedded is relevant part of connection string. The rest where you put
> parameters after semicolon and where you would specify encryption
> properties is the same. Just start network server and then connect using
> network url with decryption parameters, subsequent connections should also
> use those parameters because you do not know if database has already booted
> or not. I strongly recommend using SSL to connect to encrypted database ;)
>
>
&

Re: Use Apache Derby Network Server with encrypted database

2016-08-24 Thread Peter Ondruška
Dear Gerrit,

from my understanding the only difference with Derby network server and
embedded is relevant part of connection string. The rest where you put
parameters after semicolon and where you would specify encryption
properties is the same. Just start network server and then connect using
network url with decryption parameters, subsequent connections should also
use those parameters because you do not know if database has already booted
or not. I strongly recommend using SSL to connect to encrypted database ;)
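A minimal sketch of what such a connection string can look like in ij; database
name, port and boot password are placeholders:

  -- first connection, creating an encrypted database through the network server
  connect 'jdbc:derby://localhost:1527/encryptedDb;create=true;dataEncryption=true;bootPassword=mySecretBootPassword';

  -- later connections supply the boot password again, since the database
  -- may not yet be booted on the server
  connect 'jdbc:derby://localhost:1527/encryptedDb;bootPassword=mySecretBootPassword';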

On 24 August 2016 at 09:15, Hohl, Gerrit <g.h...@aurenz.de> wrote:

> Hello everyone,
>
>
>
> I've used Apache Derby for years now as an embedded RDBMS.
>
> BTW: Thanks to all developer doing a great job developing this database
> system. :-D
>
>
>
> But now I want to use it as a separate service running on Ubuntu Linux.
>
> This is no problem.
>
>
>
> But I haven't found any explanation or example how to create and use
> encrypted database if I'm running Derby as a service.
>
>
>
> I found only this page:
>
> https://db.apache.org/derby/docs/10.0/manuals/develop/develop115.html
>
> But it seems it only deals with an embedded Derby version.
>
>
>
> Regards,
>
> Gerrit
>
>
>



-- 
Peter Ondruška

-- 
kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, insert 141269.
Registered office and postal address: kaibo, s.r.o., Kališnická 379/10, 
Prague 3, 130 00, Czech Republic.
https://www.kaibo.eu


Re: Derby ERROR XSDB6

2015-10-30 Thread Peter Ondruška
I've had exactly the same problem, which has to do with Eclipse not being
aware of the embedded-Derby-specific way to shut down. Eclipse only closes the
connection and the database stays open. I solved this by running a standalone
Derby network server and connecting from Eclipse and the application as network
clients.
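A rough sketch of that setup (port and database name are placeholders):

  # run Derby as a standalone network server
  java -jar $DERBY_HOME/lib/derbyrun.jar server start -p 1527

  # Eclipse and the application then both use the client driver URL
  jdbc:derby://localhost:1527/myDb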

On Friday, 30 October 2015, Alessandro Manzoni <
manzoni.alessand...@gmail.com> wrote:

> For some reasons I'm using derby embedded driver.
> The application performs as expected unless, for debug purpose, I open the
> database from eclipse DatabaseDevelopment perspective.
> Even if i close the connection from DatabaseDevelopment and closing
> DatabaseDevelopment perspective too, when I connect the db from the
> application I get the SQLExceplion:
> "ERROR XSDB6: Another instance of Derby may have already booted the
> database.
> The only way I found to reset this figure, is restarting eclipse. I
> imagine that's because DatabaseDevelopment is using a different JVM.
> Is there a way to force closing the db?
>
> Thank you.
>
>

-- 
Peter Ondruška


Re: XQuery or XSLT support in Derby

2015-10-29 Thread Peter Ondruška
ntDispatchThread.run(EventDispatchThread.java:91)
> Caused by: ERROR 1: Encountered error while evaluating XML query
> expression for xmlquery operator:
> com.sun.org.apache.xpath.internal.domapi.XPathStylesheetDOM3Exception:
> Prefix must resolve to a namespace: xsl
> at org.apache.derby.iapi.error.StandardException.newException(Unknown
> Source)
> at
> org.apache.derby.impl.jdbc.SQLExceptionFactory.wrapArgsForTransportAcrossDRDA(Unknown
> Source)
> ... 45 more
> Caused by: javax.xml.xpath.XPathExpressionException:
> com.sun.org.apache.xpath.internal.domapi.XPathStylesheetDOM3Exception:
> Prefix must resolve to a namespace: xsl
> at
> com.sun.org.apache.xpath.internal.jaxp.XPathImpl.compile(XPathImpl.java:400)
> at org.apache.derby.iapi.types.SqlXmlUtil.compileXQExpr(Unknown Source)
> at
> org.apache.derby.exe.ac185e801cx0150xb293xf5c3x06bfe7b80.postConstructor(Unknown
> Source)
> at
> org.apache.derby.impl.services.reflect.LoadedGeneratedClass.newInstance(Unknown
> Source)
> at org.apache.derby.impl.sql.GenericActivationHolder.(Unknown Source)
> at
> org.apache.derby.impl.sql.GenericPreparedStatement.getActivation(Unknown
> Source)
> ... 38 more
> Caused by:
> com.sun.org.apache.xpath.internal.domapi.XPathStylesheetDOM3Exception:
> Prefix must resolve to a namespace: xsl
> at
> com.sun.org.apache.xpath.internal.compiler.XPathParser.errorForDOM3(XPathParser.java:657)
> at
> com.sun.org.apache.xpath.internal.compiler.Lexer.mapNSTokens(Lexer.java:642)
> at
> com.sun.org.apache.xpath.internal.compiler.Lexer.tokenize(Lexer.java:219)
> at
> com.sun.org.apache.xpath.internal.compiler.Lexer.tokenize(Lexer.java:100)
> at
> com.sun.org.apache.xpath.internal.compiler.XPathParser.initXPath(XPathParser.java:114)
> at com.sun.org.apache.xpath.internal.XPath.(XPath.java:180)
> at com.sun.org.apache.xpath.internal.XPath.(XPath.java:268)
> at
> com.sun.org.apache.xpath.internal.jaxp.XPathImpl.compile(XPathImpl.java:392)
> ... 43 more
>
>
> How can I avoid that?
>
>
>
> Thank you,
> Greg
>



-- 
Peter Ondruška


Re: EmbeddedDriver and db.lck file

2015-06-17 Thread Peter Ondruška
This is what I do in the context listener:

@Override
public void contextDestroyed(final ServletContextEvent sce) {
  final ServletContext context = sce.getServletContext();
  final String PERSIST_ROOT = context.getInitParameter(PERSISTENT_ROOT);
  final String DB_URL = DB_PREFIX.concat(PERSIST_ROOT).concat(db);
  try {
    // SHUTDOWN is presumably the ";shutdown=true" connection attribute
    DriverManager.getConnection(DB_URL + SHUTDOWN, USERNAME, PASSWORD);
  } catch (final SQLException e) {
    // ignore: a successful shutdown is reported as an SQLException
  }
}



On Wednesday, 17 June 2015, Thomas Meyer tho...@m3y3r.de wrote:

 Hi,

 I have a Servlet running under Jetty 9.2.11 which uses EclipseLink 2.6as
 JPA tool. In the JPA tool I did configure the usage of the Derby Embedded
 10.11.1 driver. For a fresh start of the jetty server everything works as
 expected.
 But when I now redeploy the context config XML file after I did update the
 referenced war file, derby begins to tell me that another instance did
 already boot the database.
 Somehow the db.lck file is not released when I close the
 EntityManagerFactory.

 Any idea what's going on here?

 How can I force the release of the db.lck file in a
 ServletListener.contextDestroyed() method?

 With kind regards
 Thomas



-- 
Peter Ondruška


Re: Copying encypted DB?

2015-04-15 Thread Peter Ondruška
Unless I missed something, why not just create a backup and then open the
backup copy and change the encryption key?
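A hedged sketch of that approach (paths and passwords are placeholders):

  -- 1. take an online backup of the original database
  CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE('/backups');

  -- 2. boot the backup copy with the old boot password and switch it to a new one
  connect 'jdbc:derby:/backups/myDb;bootPassword=oldPassword;newBootPassword=newPassword';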

On Wednesday, 15 April 2015, John English john.fore...@gmail.com wrote:

 I have a DB which is encrypted with one password, and I want to generate
 an identical copy of it which will be encrypted using a different password
 (so that I can provide copies of the same DB for two different customers
 without exposing one customer's password to the other).

 I thought at first I could create the tables from a script and then run
 lots of insert into foo (select * from bar) queries, but this won't work
 unless the auto-generated columns are allocated with the same sequence
 numbers so that the foreign key references will match up. And in some case
 the keys are not sequential, due to deletions.

 Is there an easy way to do this?

 TIA,
 --
 John English



-- 
Peter Ondruška


Re: How to Combine Apache Derby without installing with Jar Directly

2015-01-08 Thread Peter Ondruška
You can use Eclipse to create a runnable jar that packs the other jars inside.

On Wednesday, 7 January 2015, Varun Sawaji sawaji.va...@gmail.com wrote:

 Hi,

 I have an application which was developed using Java and Apache Derby. I want
 to create an executable JAR and run it on any system which does not have the
 Derby DB. When I click on the Jar file, Derby should also be installed on the
 system. Is that possible?

 Varun



-- 
Peter Ondruška


Re: Urgent question about JIra issue DERBY-526

2014-12-04 Thread Peter Ondruška
Try:

jdbc:derby://[2001:db8:0:f101:0:0:0:9]:1527/xxx;create=true;user=xxx;password=xxx

On 3 December 2014 at 09:09, Lin Ren lin@oracle.com wrote:

 Hi Guys,



 Sorry for the broadcast… I have a quick question about issue DERBY-526,
 I’m currently using Derby version 10.10.1.3, and still meet the same
 problem:



 When I use an IPv6 JDBC URL like:
 “jdbc:derby://2001:db8:0:f101:0:0:0:9:1527/xxx;create=true;user=xxx;password=xxx”



 I got the exception: java.lang.NumberFormatException: For input string:
 db8:0:f101:0:0:0:9:1527



 I searched Jira and found issue 526, but it seems it is still in
 the open state. Can anyone tell me whether the issue is fixed now? And in which
 version, if yes?



 Thanks so much!



 Lin




-- 
Peter Ondruška


Re: Blob column behaviour, when we dont have data when having less data

2014-12-04 Thread Peter Ondruška
Derby uses only as much space as the actual size of your BLOB. The size declared
in the DDL for a BLOB column is just the maximum BLOB you can store.
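A small illustration (table and column names are made up):

  -- BLOB(6M) is only an upper bound: storing a 512 KB value in this column
  -- consumes roughly 512 KB plus page/metadata overhead, not 6 MB
  CREATE TABLE attachments (
      id  INT PRIMARY KEY,
      doc BLOB(6M)
  );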

On 5 December 2014 at 06:57, kosurusekhar kosurusek...@gmail.com wrote:

 Thanks Mike for quick reply.

  2) whether derby will occupy complete 6MB space if I am trying to insert
  small size files like 512KB or 1MB?
 *space used will be that of the size of the inserted column plus some
 metadata overhead/page overhead. *

 This means that in a 6MB column if I save 512KB content file then it will
 occupy 6MB + metadata size. Right?

 Is there any provision to save space in this kind of scenario?


 Thanks
 Sekhar.



 --
 View this message in context:
 http://apache-database.10148.n7.nabble.com/Blob-column-behaviour-when-we-dont-have-data-when-having-less-data-tp143363p143376.html
 Sent from the Apache Derby Users mailing list archive at Nabble.com.




-- 
Peter Ondruška


Re: Locks on crashed database

2014-11-27 Thread Peter Ondruška
Dear Knut,

many thanks for the tip. For others who need something similar here is the
complete code:

package xarecovery;

import java.sql.SQLException;
import java.util.logging.Level;

import javax.sql.XAConnection;
import javax.sql.XADataSource;
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

import org.apache.derby.jdbc.EmbeddedDataSource;
import org.apache.derby.jdbc.EmbeddedXADataSource;

/**
 * Remove obsolete lock records caused by not gracefully removing a database
 * that was under transaction manager control.<br />
 * This can be observed by having records in TRANSACTION_TABLE (and related
 * in LOCK_TABLE) with state PREPARED:<br />
 * SELECT * FROM SYSCS_DIAG.LOCK_TABLE;<br />
 * SELECT * FROM SYSCS_DIAG.TRANSACTION_TABLE;
 *
 * @author Knut Anders Hatlen, Peter Ondruška (just slightly modified)
 *
 */
public class Recover {

  private static final java.util.logging.Logger LOGGER =
      java.util.logging.Logger.getLogger(Recover.class.getName());

  public static void main(final String[] args) {

    final EmbeddedDataSource eds = new EmbeddedXADataSource();
    eds.setDatabaseName("pathtodatabase");

    final XADataSource ds = (EmbeddedXADataSource) eds;

    try {
      final XAConnection xac = ds.getXAConnection();
      final XAResource xar = xac.getXAResource();
      for (final Xid xid : xar.recover(XAResource.TMSTARTRSCAN)) {
        LOGGER.log(Level.INFO, "Recover using rollback Xid {0}", xid.toString());
        xar.rollback(xid);
      }
      xac.close();
    } catch (final SQLException | XAException e) {
      LOGGER.log(Level.WARNING, null, e);
    }

    try {
      eds.setShutdownDatabase("shutdown");
      eds.getConnection();
    } catch (final SQLException e) {
      LOGGER.log(Level.INFO, "This exception is OK", e);
    }

  }

}


On 25 November 2014 at 12:49, Knut Anders Hatlen knut.hat...@oracle.com
wrote:

 Peter Ondruška peter.ondru...@kaibo.eu writes:

  Dear all,
 
  I have a database that has locks in SYSCS_DIAG.LOCK_TABLE. How do I
  remove those locks? I restarted the database but the locks are still
  there. SYSCS_DIAG.TRANSACTION_TABLE also has related record with
  status PREPARED. This database was used with XA on application server
  but it was removed for troubleshooting.

 Hi Peter,

 You probably need to run XA recovery and commit or roll back the
 prepared transactions. Something like this:

 XADataSource ds = ;
 XAConnection xac = ds.getXAConnection();
 XAResource xar = xac.getXAResource();
 for (Xid xid : xar.recover(XAResource.TMSTARTRSCAN)) {
 xar.rollback(xid);
 // or, if you prefer, xar.commit(xid, false);
 }

 Hope this helps,

 --
 Knut Anders




-- 
Peter Ondruška


Locks on crashed database

2014-11-25 Thread Peter Ondruška
Dear all,

I have a database that has locks in SYSCS_DIAG.LOCK_TABLE. How do I remove
those locks? I restarted the database but the locks are still there.
SYSCS_DIAG.TRANSACTION_TABLE also has related record with status PREPARED.
This database was used with XA on application server but it was removed for
troubleshooting.

Thanks

-- 
Peter Ondruška


Re: No Powered By Derby Logo

2014-11-10 Thread Peter Ondruška
Great, very nice, thanks!

On 11 November 2014 03:33, Rick Hillegas rick.hille...@gmail.com wrote:

  Hi John,

 I have combined the poweredBy and Derby logos and put them here:
 http://db.apache.org/derby/logo.html#logo_with_nimbus

 Hope this helps,
 -Rick

 On 11/8/14 4:23 AM, John I. Moore, Jr. wrote:

  There are a lot of “Powered By” logos on the Apache web site (e.g., see
 http://apache.org/foundation/press/kit/ and
 http://apache.org/foundation/press/kit/poweredBy/), and there is an
 official Derby logo (http://db.apache.org/derby/logo.html), but I can’t
 seem to find a “Powered By Derby” logo combination.  Does one exist?  Are
 there plans to create on in the near future?



 _



 John I. Moore, Jr.

 SoftMoore Consulting







-- 
Peter Ondruška


Re: Derby online backup with huge database

2014-11-03 Thread Peter Ondruška
How long does it take to copy database to backup destination using
operating system copy command?

On 4 November 2014 06:54, kosurusekhar kosurusek...@gmail.com wrote:

 Hi All,

 We implemented taking a Derby online backup whenever our application is
 launching. It is working fine. But in production the database grows to more
 than 2 GB, and it is taking more than 7 to 10 minutes to take the backup.

 Is this behaviour normal with a Derby database?

 Is there anything we need to configure/implement to speed up the backup
 process?

 Please advice me in that.

 Thanks in advance.

 Regards
 Sekhar.



 --
 View this message in context:
 http://apache-database.10148.n7.nabble.com/Derby-online-backup-with-huge-database-tp143121.html
 Sent from the Apache Derby Users mailing list archive at Nabble.com.




-- 
Peter Ondruška


Re: Another error to be explained

2014-09-22 Thread Peter Ondruška
To me it is pretty self-explanatory: one of the required files of your
database directory, c510.dat, is missing.

On 22 September 2014 20:50, Bob M rgmatth...@orcon.net.nz wrote:

 Hi

 Error Message:-

 SQLException:
 SQL State: XSDG3
 Error Code: 45000
 Message: Meta-data for unknown could not be accessed to read:
 C:\\us_copiosus\seg0\c510.dat
 SQLException:
 SQL State: XJ001
 Error Code: 0
 Message: Java exception: 'C:\...us_copiosus\seg0\c510.dat (Access is
 denied):java.io.FileNotFoundException'.

 What does this error mean?

 Bob M
 Dunedin
 New Zealand



 --
 View this message in context:
 http://apache-database.10148.n7.nabble.com/Another-error-to-be-explained-tp142334.html
 Sent from the Apache Derby Users mailing list archive at Nabble.com.




-- 
Peter Ondruška


Re: Another error to be explained

2014-09-22 Thread Peter Ondruška
Access denied: it could be antivirus software, permissions, or file attributes
rendering the file inaccessible.

On Monday, 22 September 2014, Bob M rgmatth...@orcon.net.nz wrote:

 Sorry, but it is NOT missing!

 Bob M



 --
 View this message in context:
 http://apache-database.10148.n7.nabble.com/Another-error-to-be-explained-tp142334p142338.html
 Sent from the Apache Derby Users mailing list archive at Nabble.com.



-- 
Peter Ondruška


Re: Problem with Select statement

2014-09-09 Thread Peter Ondruška
Hello,

can you describe your table testtable please?

On 9 Sep 2014, at 10:20, Kessler, Joerg joerg.kess...@sap.com wrote:

 Hi,
 I want to execute select statement on a table using a Java program and JDBC. 
 The statement is actually not very difficult:
 SELECT MSG_NO, SEND_TO, CREATED_TIME, CONTENT, ENCRYPTION_KEY FROM TESTTABLE  
 WHERE SEQ_ID = ? AND (MSGSTATE IS NULL OR MSGSTATE = 'A')
 When this statement is executed by a test I receive errors like
  
 Column 'A' is either not in any table in the FROM list or appears within a 
 join specification and is outside the scope of the join specification or 
 appears in a HAVING clause and is not in the GROUP BY list. If this is a 
 CREATE or ALTER TABLE  statement then 'A' is not a column in the target table.
  
 When I change the statement to
 SELECT MSG_NO, SEND_TO, CREATED_TIME, CONTENT, ENCRYPTION_KEY FROM TESTTABLE  
 WHERE SEQ_ID = ? AND MSGSTATE IS NULL
 there is no problem. Also when I execute the above statement via Eclipse 
 Database Development/SQL Scrapbook using a fix SEQ_ID the statement is 
 executed without error.
  
 What am I doing wrong?
  
 Best Regards,
  
 Jörg



Re: Difference

2014-08-01 Thread Peter Ondruška
:-) Thanks. I considered 10.10.2.0 stable; actually, for me and my use it is very
stable.
 
Peter


On Thursday, 31 July 2014, 19:31, Myrna van Lunteren m.v.lunte...@gmail.com 
wrote:
 


Hi,

10.10.2.0 has all the *new* functionality of 10.9.1.0 and 10.10.1.0. Plus it 
has  more bug fixes than 10.8.3.0, both because the 10.10 branch was pulled 
from trunk at a later time and because 10.10.2.0 was released later and thus 
even more fixes were back-ported. It therefore also has more possible 
incompatibilities to older versions.

10.8.3.0 only has the most important fixes available at the time of release 
back-ported, and has very few incompatibilities compared to e.g. 10.8.2.


There were some plans to make a 10.9.2 at one time but that fell by the 
wayside. It would have replaced the 10.8.3.0.

Myrna




On Thu, Jul 31, 2014 at 5:41 AM, Rick Hillegas rick.hille...@oracle.com wrote:

On 7/31/14 4:07 AM, Peter Ondruška wrote:

Dear all,

what is the difference between version 10.10.2.0 and 10.8.3.0? Or why is 
there 10.8.3.0 along with 10.10.2.0? Thanks
Peter

The Latest Official Releases tend to be the latest releases produced on the 2 
most active release branches. Once we publish 10.11.1, I expect that we'll 
remove 10.8.3.0 from that list. Right after we produce a feature release, the 
list has this meaning:

i) The top release is the most feature-rich distribution.

ii) The second release is the most stable distribution.

Hope this helps,
-Rick


Difference

2014-07-31 Thread Peter Ondruška
Dear all,

what is the difference between version 10.10.2.0 and 10.8.3.0? Or why is there 
10.8.3.0 along with 10.10.2.0? Thanks
 
Peter

Re: When to shut down a database

2014-04-10 Thread Peter Ondruška
Where did you read that?

If you declare your column to be CLOB(64K) then you have restricted its size. The
CLOB data type itself allows values up to 2,147,483,647 characters; a CLOB is used
to store Unicode character-based data, such as large documents in any character
set. See the "CLOB data type" and "String limitations" pages in the Derby
reference manual on db.apache.org (the latter lists, for example, CHAR at a
maximum of 254 characters and VARCHAR at 32,672 characters).
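A small illustration of the distinction (table and column names are made up):

  CREATE TABLE notes (
      short_note CLOB(64 K),  -- limited to 64 KB by the declaration
      long_note  CLOB         -- defaults to the 2,147,483,647-character maximum
  );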



 
Peter
On Friday, 11 April 2014, 6:09, Chux chu...@gmail.com wrote:
 
Awesome insights guys, thanks for all your help.

BTW, I could not access the online documentation for some reason. Although I 
read somewhere that 64k is the maximum size you can allocate a clob on embedded 
mode. Is this correct? I would like to know what the limit is.



variable clob(64 K)

Thanks,
Chux



On Thu, Apr 10, 2014 at 5:32 AM, Dag H. Wanvik dag.wan...@oracle.com wrote:


On 09. april 2014 17:51, Rick Hillegas wrote:

On 4/8/14 2:00 AM, Chux wrote:

Hey Dag,

Thanks for your insight.

I'm using this as an embedded DB in a Java FX desktop application. This is a 
dumb question but would you recommend shutting down the database ever after 
a transaction? Like after you create a record then you shut it down after 
commit.

Depends on the application. If the database holds some kind of infrequently 
referenced metadata, so that say, it is only queried or updated once a day, 
then you could consider an on demand model where the database is booted for 
each query/update, then the query results are returned, then the database is 
shut down so that it doesn't consume any resources. The big extra cost of an 
on demand database would be this:  query/update time would be substantially 
longer since every query/update involves booting the database, compiling the 
query/update, and gracefully closing the database; that cost is on top of the 
steady-state cost of running a pre-compiled query/update.


In such a scenario one might want to shut down the engine, too, not just the 
database.
Note that shutting down the database will release resources, but if the engine is
still running, one can further release resources by shutting that down as well.

Cf. http://db.apache.org/derby/docs/10.10/devguide/tdevdvlp20349.html (engine 
shutdown)
and http://db.apache.org/derby/docs/10.10/devguide/tdevdvlp40464.html 
(shutdown database)

Thanks,
Dag






Hope this helps,
-Rick


Best,
Chux



On Tue, Apr 8, 2014 at 12:32 AM, Dag H. Wanvik dag.wan...@oracle.com 
mailto:dag.wan...@oracle.com wrote:


    On 06. april 2014 21:02, George Toma wrote:

    Hi Chux,

    In my opinion  the example from  app.  referred at commit the
    transaction OR close the connection ( a connection could be
    transacted too ), and not to shutdown the db. If the business
    rule specifies that the db. needs to be shutdown when the app. is
    shutdown, then so be it.

    Normally the db is not shutdown, not even when the app is down.

    This is true for a client/server application. For use with
    embedded Derby, one would normally close down the database (and
    the database engine) before exiting the application. If one
    neglects to do so,
    one would see longer start-up times as Dyre indicated.

    Thanks,
    Dag




    Cheers,
    George
    On Sunday, April 6, 2014 7:14 PM, Chux chu...@gmail.com
    mailto:chu...@gmail.com wrote:
    Hello guys,

    I read in a sample app that you've got to shutdown a database. I
    was just confused if you need to shut it down on every connection
    transaction or just shut it down on application close, in my case
    a desktop applicaiton.

    Best,
    Chux








Re: Corrupt database: ArrayIndexOutOfBoundsException on connect

2014-03-10 Thread Peter Ondruška
Just a reminder to do backups and run the database with log archiving;
everything is built into Derby. I know it is too late for you now :(

Peter

 On 10 Mar 2014, at 17:20, Myrna van Lunteren m.v.lunte...@gmail.com wrote:
 
 Although Derby has transaction control and a recovery mechanism, if a JVM 
 crashes or gets interrupted, the normal transaction steps might be 
 interrupted in unfortunate places, especially during compress. Was the 
 database shutdown before compress? Do you have a backup?
 
 But perhaps there is something of use to you on this page:
 https://wiki.apache.org/db-derby/DatabaseCorruption
 
 HTH
 Myrna
 
 
 
 On Mon, Mar 10, 2014 at 4:49 AM, Phil Bradley ph...@tower.ie wrote:
 
 Hi,
 
 I have a derby database that I am unable to connect to; when I try I get
 java.lang.ArrayIndexOutOfBoundsException. The full stack trace is
 shown below.
 
 Firstly, some background:
 
 - I'm using derby 10.8.2.2 with a Java Webstart application that
 connects in embedded mode. The clients are using Java 7u45 on Windows 7,
 32 bit
 
 - The application is configured to run SYSCS_UTIL.SYSCS_COMPRESS_TABLE()
 every 5 days on each table on startup
 
 - The client was running SYSCS_UTIL.SYSCS_COMPRESS_TABLE() on a
 particular table and based on the application logs, it looks like either
 the JVM crashed or the application was ended via task manager.
 
 - On subsequent attempts to start the application, the client was unable
 to connect to the database as per the stack trace below. I have made a
 copy of the database and I get this error reliably on accessing the
 copy.
 
 I have two questions:
 
 1. Is there anything that I can do to recover from this kind of scenario
 automatically?
 2. Is there any debugging or other investigation that I can do that will
 help reduce the severity of this kind of problem?
 
 Thanks,
 Phil
 
 
 
 
 
 java.sql.SQLException: Failed to start database
 'C:\Users\Administrator\.myapp\myapp_db' with class loader
 com.sun.jnlp.JNLPClassLoader@1bef5e8, see the next exception for
 details.
 at
 
 org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
 Source)
 at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown
 Source)
 at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown
 Source)
 at
 org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown
 Source)
 at org.apache.derby.impl.jdbc.EmbedConnection.init(Unknown
 Source)
 at org.apache.derby.impl.jdbc.EmbedConnection30.init(Unknown
 Source)
 at org.apache.derby.impl.jdbc.EmbedConnection40.init(Unknown
 Source)
 at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown
 Source)
 at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
 at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown
 Source)
 at java.sql.DriverManager.getConnection(Unknown Source)
 at java.sql.DriverManager.getConnection(Unknown Source)
 at
 
 com.mycompany.database.DbInitializer.runScript(DbInitializer.java:143)
 at
 
 com.mycompany.myapp.ApplicationRunner.initialiseDb(ApplicationRunner.java:817)
 at
 
 com.mycompany.myapp.ApplicationRunner.startApplication(ApplicationRunner.java:945)
 at
 com.mycompany.myapp.ApplicationRunner.run(ApplicationRunner.java:581)
 at
 
 com.mycompany.myapp.ApplicationRunner.main(ApplicationRunner.java:552)
 at
 com.mycompany.myapp.ApplicationLoader.main(ApplicationLoader.java:90)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown
 Source)
 at java.lang.reflect.Method.invoke(Unknown Source)
 at com.sun.javaws.Launcher.executeApplication(Unknown Source)
 at com.sun.javaws.Launcher.executeMainClass(Unknown Source)
 at com.sun.javaws.Launcher.doLaunchApp(Unknown Source)
 at com.sun.javaws.Launcher.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
 Caused by: java.sql.SQLException: Failed to start database
 'C:\Users\Administrator\.myapp\myapp_db' with class loader
 com.sun.jnlp.JNLPClassLoader@1bef5e8, see the next exception for
 details.
 at
 
 org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
 Source)
 at
 
 org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
 Source)
 ... 27 more
 Caused by: java.sql.SQLException: Java exception: ':
 java.lang.ArrayIndexOutOfBoundsException'.
 at
 
 org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
 Source)
 at
 
 org.apache.derby.impl.jdbc.SQLExceptionFactory40

Re: AW: DERBY_OPTS/DERBY_CMD_LINE_ARGS is there anywhere a list

2014-02-27 Thread peter . ondruska
I prefer not to touch anything in the Derby installation, as its structure is
almost perfect, so that is my own script outside Derby. It is a matter of
preference and manageability whether you set DERBY_OPTS in a system/login/launch
script.

Peter

 On 27 Feb 2014, at 09:54, Kempff, Malte malte.kem...@de.equens.com wrote:
 
 Thanks Peter,
 Well, that is pretty inspiring. Do you have that stuff in your own batch file, or
 did you extend one of those found in the bin folder of Derby?
 I tried it now by just providing DERBY_OPTS as a permanent
 OS user variable. Is that the proper/standard way, or are there also other
 recommendations?
  
 Malte
  
 Von: Peter Ondruška [mailto:peter.ondru...@yahoo.com] 
 Gesendet: Mittwoch, 26. Februar 2014 11:31
 An: Derby Discussion
 Betreff: Re: DERBY_OPTS/DERBY_CMD_LINE_ARGS is there anywhere a list
  
 On my Windows I start Derby network server like this:
  
 set DROPBOX=%USERPROFILE%\Dropbox
 set PATH=%DROPBOX%\derby\bin;%PATH%
 set DERBY_OPTS=-Xms256m -Xmx256m
 set DERBY_OPTS=-Dderby.storage.pageCacheSize=4096 %DERBY_OPTS%
  
 set DERBY_OPTS=-Djava.security.manager %DERBY_OPTS%
 set DERBY_OPTS=-Djava.security.policy=%DROPBOX%\config\derby.policy 
 %DERBY_OPTS%
 set DERBY_OPTS=-Dderby.system.home=C:\TEMP %DERBY_OPTS%
 start "Derby server" startNetworkServer
  
 Peter
 
 On 26 Feb 2014, at 11:23, Kempff, Malte malte.kem...@de.equens.com wrote:
 
 Hi to all,
 I used to use Derby as an embedded database. Right now I'd like to use it as a
 server.
 When I start the server (on my own machine right now) by using
 startNetworkServer.bat,
 I got a warning that there is no access to c:\derby.log.
 After researching that, I understood that it would be good to set
 derby.system.home. I also read about the derby.properties file, where properties
 are obviously stored. Of course you can give derby.system.home right at the
 start using -Dprops-name, but if I just like to use
 startNetworkServer.bat, how do I do that there? In that file I saw something like
 DERBY_OPTS and DERBY_CMD_LINE_ARGS as variables, but I was not really able to
 see where the OPTS are supposed to be set (assuming/guessing that Java
 properties are to be set here). A listing of DERBY_CMD_LINE_ARGS I could not
 find either.
  
 So, using startNetworkServer.bat, what is the best/correct way to provide the
 derby.system.home property?
  
 Thanks for your help in advance
  
 Malte
 
  Disclaimer: The contents of this electronic mail message are only binding 
 upon Equens or its affiliates, if the contents of the message are accompanied 
 by a lawfully recognised type of signature. The contents of this electronic 
 mail message are privileged and confidential and are intended only for use by 
 the addressee. If you have received this electronic mail message by error, 
 please notify the sender and delete the message without taking notices of its 
 content, reproducing it and using it in any way. 
 
 


Re: DERBY_OPTS/DERBY_CMD_LINE_ARGS is there anywhere a list

2014-02-26 Thread Peter Ondruška
On my Windows I start Derby network server like this:

set DROPBOX=%USERPROFILE%\Dropbox
set PATH=%DROPBOX%\derby\bin;%PATH%
set DERBY_OPTS=-Xms256m -Xmx256m
set DERBY_OPTS=-Dderby.storage.pageCacheSize=4096 %DERBY_OPTS%

set DERBY_OPTS=-Djava.security.manager %DERBY_OPTS%
set DERBY_OPTS=-Djava.security.policy=%DROPBOX%\config\derby.policy %DERBY_OPTS%
set DERBY_OPTS=-Dderby.system.home=C:\TEMP %DERBY_OPTS%
start Derby server startNetworkServer

Peter

 On 26 Feb 2014, at 11:23, Kempff, Malte malte.kem...@de.equens.com wrote:
 
 Hi to all,
 I used to use Derby as an embedded database. Right now I'd like to use it as a
 server.
 When I start the server (on my own machine right now) by using
 startNetworkServer.bat,
 I got a warning that there is no access to c:\derby.log.
 After researching that, I understood that it would be good to set
 derby.system.home. I also read about the derby.properties file, where properties
 are obviously stored. Of course you can give derby.system.home right at the
 start using -Dprops-name, but if I just like to use
 startNetworkServer.bat, how do I do that there? In that file I saw something like
 DERBY_OPTS and DERBY_CMD_LINE_ARGS as variables, but I was not really able to
 see where the OPTS are supposed to be set (assuming/guessing that Java
 properties are to be set here). A listing of DERBY_CMD_LINE_ARGS I could not
 find either.
  
 So, using startNetworkServer.bat, what is the best/correct way to provide the
 derby.system.home property?
  
 Thanks for your help in advance
  
 Malte
 
  Disclaimer: The contents of this electronic mail message are only binding 
 upon Equens or its affiliates, if the contents of the message are accompanied 
 by a lawfully recognised type of signature. The contents of this electronic 
 mail message are privileged and confidential and are intended only for use by 
 the addressee. If you have received this electronic mail message by error, 
 please notify the sender and delete the message without taking notices of its 
 content, reproducing it and using it in any way. 
 


Re: Apache Derby Command Line?

2014-02-21 Thread Peter Ondruška
C:\Java Server JRE\jdk1.7.0_51\bin\java.exe should be:
"C:\Java Server JRE\jdk1.7.0_51\bin\java.exe"

Peter

 On 22 Feb 2014, at 02:48, Turtles 3turt...@videotron.ca wrote:
 
 C:\Java Server JRE\jdk1.7.0_51\bin\java.exe


RE: data synchronization with no network

2014-02-17 Thread Haynes, Peter
We have attempted this with a several different variations. We have not been 
able to make it work. Has anyone successfully synced data with a  client by 
manually shipping logs?

Peter Haynes
Pariveda Solutions
24 East Greenway Plaza | Suite 1717 | Houston, Texas 77046
(M) 713.408.8072 | (F) 713.520.4290
The Business of IT(r)
www.parivedasolutions.comhttp://www.parivedasolutions.com/

From: Bergquist, Brett [mailto:bbergqu...@canoga.com]
Sent: Monday, February 17, 2014 1:41 PM
To: Derby Discussion
Subject: RE: data synchronization with no network

What about using the database backup with roll-forward recovery?

http://db.apache.org/derby/docs/10.10/adminguide/index.html

I am not sure if this would work if one were to transfer a copy of the database 
to a slave and enable log archival mode and then periodically transport the 
archived logs to the slave and on the slave do a roll-forward recovery?
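A hedged sketch of the mechanism Brett refers to; paths are placeholders, and
whether manually shipped log files can be replayed incrementally this way is
exactly the open question in this thread:

  -- on the master: take a backup and enable log archive mode
  CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE_AND_ENABLE_LOG_ARCHIVE_MODE('/backups', 0);

  -- on the slave: boot from the shipped backup and replay the archived logs
  connect 'jdbc:derby:slaveDb;rollForwardRecoveryFrom=/backups/masterDb';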

From: not me [mailto:turbodo...@gmail.com]
Sent: Sunday, February 16, 2014 6:20 PM
To: derby-user@db.apache.orgmailto:derby-user@db.apache.org
Subject: data synchronization with no network


We need to be able to push changes from a master database to several read-only 
slaves. In many situations, they have no network at all and move files around 
on memory sticks. They operate in remote locations where networks are 
impossible. They need to provided updates once a day to stakeholders who have a 
read-only copy of our application/database for reporting purposes.

Derby replication pushes transaction logs from master to slave.

Is it possible to copy transaction logs to the slave manually, and cause the 
target system to apply the changes to the slave?

Ideally, we would want the following features. Is any of this available in 
Derby, or not to difficult to implement with custom code?

1. The target system will prevent import of data if a previous update file has 
not been applied
2. A user can can trigger the output of an update file (or trigger the system 
to package up for transport files that were written as changes were made), and 
the system will include all database changes since the previous update file was 
generated
3. A user can specify the range of changes to include, to allow inclusion of 
database changes a given slave may have missed

Thanks in advance, and sorry if this is a duplicate question. I searched using 
several different key words and didn't find this topic.


Re: How to log queries in Apache Derby?

2014-02-03 Thread peter . ondruska
I do this:

DERBY_OPTS="-Xms256m -Xmx256m -Dderby.storage.pageCacheSize=2048 
-Dderby.system.home=/MQHA/db -Dderby.storage.tempDirectory=$TMPDIR 
-Dderby.infolog.append=true -Dderby.language.logQueryPlan=false 
-Dderby.language.logStatementText=false" startNetworkServer 
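
For the original question, statement logging itself is controlled by these
properties; a minimal derby.properties sketch (note that the example above has
them switched off):

  derby.language.logStatementText=true
  # optionally also log the query plans
  derby.language.logQueryPlan=true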

Peter

 On 3 Feb 2014, at 11:45, Paul Linehan lineh...@tcd.ie wrote:
 
 Hi Bryan, and thanks for your input.
 
 
 I launch derby by issuing the command (from $DERBY_HOME)
 
 Does it work if you do:
  java -jar $DERBY_HOME/lib/derbyrun.jar 
 -Dderby.language.logStatementText=true server start
 
 No, but this does (at least the server starts - fails on the above command)
 
 java -jar -Dderby.language.logStatementText=true
 $DERBY_HOME/lib/derbyrun.jar server start
 
 But again, no joy for the log file(s).
 
 
 http://db.apache.org/derby/docs/10.10/adminguide/tadminconfigsysteminformation.html
 
 See the output below (end of post) from
 linehanp@lg12l9:~/derby/db-derby-10.10.1.1-bin/bin$ NetworkServerControl 
 sysinfo
 
 The output from this caused me to look in my own home directory - but
 the only thing
 it contains is (just like the others)
 
 
 linehanp@lg12l9:~$ more derby.log
 
 Sat Jan 25 17:21:38 GMT 2014: Shutting down Derby engine
 
 linehanp@lg12l9:~$
 ---
 
 
 I'm still at a loss to understand where my logfile with my query is?
 As mentioned, I have a
 file called derby.properties in $DERBY_HOME/bin and $DERBY_HOME with the line
 derby.language.logStatementText=true
 in it.
 
 
 Paul...
 
 
 bryan
 
 -- 
 
 lineh...@tcd.ie
 
 Mob: 00 353 86 864 5772
 
 
 linehanp@lg12l9:~/derby/db-derby-10.10.1.1-bin/bin$ NetworkServerControl 
 sysinfo
 - Derby Network Server Information 
 Version: CSS10100/10.10.1.1 - (1458268)  Build: 1458268  DRDA Product
 Id: CSS10100
 -- listing properties --
 derby.drda.maxThreads=0
 derby.drda.sslMode=off
 derby.drda.keepAlive=true
 derby.drda.minThreads=0
 derby.drda.portNumber=1527
 derby.drda.logConnections=false
 derby.drda.timeSlice=0
 derby.drda.startNetworkServer=false
 derby.drda.host=localhost
 derby.drda.traceAll=false
 -- Java Information --
 Java Version:1.7.0_45
 Java Vendor: Oracle Corporation
 Java home:   /users/ugrad/linehanp/Downloads/software/jdk/jdk1.7.0_45/jre
 Java classpath:
 /users/ugrad/linehanp/derby/db-derby-10.10.1.1-bin/lib/derbyrun.jar
 OS name: Linux
 OS architecture: amd64
 OS version:  3.2.0-58-generic
 Java user name:  linehanp
 Java user home:  /users/ugrad/linehanp
 Java user dir:   /users/ugrad/linehanp/derby/db-derby-10.10.1.1-bin
 java.specification.name: Java Platform API Specification
 java.specification.version: 1.7
 java.runtime.version: 1.7.0_45-b18
 - Derby Information 
 [/users/ugrad/linehanp/derby/db-derby-10.10.1.1-bin/lib/derby.jar]
 10.10.1.1 - (1458268)
 [/users/ugrad/linehanp/derby/db-derby-10.10.1.1-bin/lib/derbytools.jar]
 10.10.1.1 - (1458268)
 [/users/ugrad/linehanp/derby/db-derby-10.10.1.1-bin/lib/derbynet.jar]
 10.10.1.1 - (1458268)
 [/users/ugrad/linehanp/derby/db-derby-10.10.1.1-bin/lib/derbyclient.jar]
 10.10.1.1 - (1458268)


Re: How know databases connected with Derby networkServer

2014-01-24 Thread Peter Ondruška
Memory is cheap nowadays. Just run each database in a separate JVM. If you are on 
Linux or AIX I would recommend IBM Java with class sharing.

Peter

 On 24 Jan 2014, at 11:32, AirDT cont...@solgt.fr wrote:
 
 Hello everyone, 
 
 I run a NetworkServer that allows multiple users to connect to multiple
 Derby database. 
 How do I know databases connected at a given moment ?
 In order to :
 - Have knowledge of potential users connected to each connected dataBase
 - Shutdown these databases in order to remove them without shutDown derby. 
 -...
 
 Any help would be appreciated.
 Thanks in advance
 
 AirDT
 
 
 
 --
 View this message in context: 
 http://apache-database.10148.n7.nabble.com/How-know-databases-connected-with-Derby-networkServer-tp136724.html
 Sent from the Apache Derby Users mailing list archive at Nabble.com.
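
For the question itself, the network server can report which databases its sessions
have booted; a minimal sketch, assuming a server on localhost:1527 and derbynet.jar on
the classpath; this is the programmatic equivalent of running NetworkServerControl
runtimeinfo:

import java.net.InetAddress;
import org.apache.derby.drda.NetworkServerControl;

public class RuntimeInfoExample {
    public static void main(String[] args) throws Exception {
        NetworkServerControl server =
                new NetworkServerControl(InetAddress.getByName("localhost"), 1527);
        // Prints session and connection information, including the names of
        // the databases currently in use by each session.
        System.out.println(server.getRuntimeInfo());
    }
}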


Re: Able to reconnect previously shutdown in-memory derby database

2013-10-09 Thread Peter Ondruška
My guess is that, similar to a filesystem, if you only shut down Derby without 
exiting the JVM the database is still there. As with a filesystem, you need to 
explicitly remove (drop) the database.

Peter
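
A minimal sketch of that explicit removal step, assuming Derby 10.6 or later where the
drop=true connection attribute is available (the database name is invented):

import java.sql.DriverManager;
import java.sql.SQLException;

public class DropInMemoryDb {
    public static void main(String[] args) {
        String url = "jdbc:derby:memory:demoDb"; // hypothetical name
        try {
            DriverManager.getConnection(url + ";create=true").close();
            // Dropping removes the in-memory database; a later plain
            // connection attempt then fails with "Database ... not found".
            DriverManager.getConnection(url + ";drop=true");
        } catch (SQLException e) {
            // A successful drop is reported as an exception (SQLState 08006).
            System.out.println(e.getSQLState() + ": " + e.getMessage());
        }
    }
}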

 On 9 Oct 2013, at 14:26, pelle.ullberg pelle.ullb...@gmail.com wrote:
 
 Hi, 
 
 Could someone please explain what's wrong with this little unit test that I
 mocked up? I'm using derby version 10.10.1.1
 
 Basically I create an in-memory derby database, shut it down and then
 expected it to not exist anymore. But if the unit test is right, I can
 actually reconnect to it.
 
 Best regards
 /Pelle
 
 Code is below!
 
 
 package com.klarna.derby;
 
 import org.apache.derby.jdbc.EmbeddedDriver;
 import org.junit.Assert;
 import org.junit.Test;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.SQLException;
 import java.util.UUID;
 
 public class DerbyUtilsTest {
  
     private static final Logger LOGGER =
             LoggerFactory.getLogger(DerbyUtilsTest.class);
  
     @Test(expected = SQLException.class)
     public void verifyDerbyShutdown() throws SQLException {
         String url = "jdbc:derby:memory:" + UUID.randomUUID().toString();
  
         Connection connection = DriverManager.getConnection(url + ";create=true");
  
         // Ping derby just to make sure we got it up and running
         connection.prepareCall("select * from SYS.SYSTABLES").executeQuery();
  
         try {
             DriverManager.getConnection("jdbc:derby:;shutdown=true");
         } catch (SQLException e) {
             // This exception is expected:
             // http://db.apache.org/derby/docs/10.3/devguide/tdevdvlp20349.html
             Assert.assertEquals("Derby system shutdown.", e.getMessage());
         } finally {
             // Make sure old driver is collected
             System.gc();
             try {
                 // Re-register driver so that new derby jdbc instances may be spawned.
                 DriverManager.registerDriver(new EmbeddedDriver());
             } catch (SQLException e) {
                 LOGGER.error("Failed to re-register Derby embedded driver: "
                         + e.getMessage(), e);
             }
         }
  
         // Expected this to throw something like 'java.sql.SQLException:
         // Database 'memory:d77d6863-7624-4990-86fb-2e40a5a1e04d' not found'
         DriverManager.getConnection(url);
     }
  
 }
 
 
 
 
 --
 View this message in context: 
 http://apache-database.10148.n7.nabble.com/Able-to-reconnect-previously-shutdown-in-memory-derby-database-tp134573.html
 Sent from the Apache Derby Users mailing list archive at Nabble.com.


Re: Proper configuration for a very busy DB?

2013-10-02 Thread Peter Ondruška
I mean the transaction log; by default the log files are in the log subdirectory of the 
database, next to the seg0 directory. If you can, do batch insertions.

Peter
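
Since batching keeps coming up in this thread, a minimal sketch of chunked batch inserts
with autocommit off; the database, table and column names are invented:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:derby:metricsDB;create=true")) {
            c.setAutoCommit(false);
            try (Statement ddl = c.createStatement()) {
                ddl.executeUpdate("CREATE TABLE ftp_activity (host VARCHAR(64), bytes BIGINT)");
            }
            try (PreparedStatement ps = c.prepareStatement(
                    "INSERT INTO ftp_activity (host, bytes) VALUES (?, ?)")) {
                for (int i = 0; i < 4000; i++) {
                    ps.setString(1, "host-" + (i % 30));
                    ps.setLong(2, i * 1024L);
                    ps.addBatch();
                    if (i % 1000 == 999) {
                        ps.executeBatch();   // one round trip per chunk
                        c.commit();          // one log sync per chunk, not per row
                    }
                }
                ps.executeBatch();
                c.commit();
            }
        }
    }
}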

 On 1 Oct 2013, at 17:53, Jerry Lampi j...@sdsusa.com wrote:
 
 Peter:
 Each client has one connection.  It is used for the entire session (which can 
 be days).
 The Derby log files are configured to have one log file per day.  Format names 
 like: productName-stderr.2013-10-01.log and productName-
 stdout.2013-10-01.log.
 
 Brett:
 - A flurry of data has been as great as 4000 records per second.  That is the 
 number cached by the client(s) and each record is dumped to the DB one at a 
 time.  Not all 30 clients see 4000 per second, likely only 2 or three of 
 them.  The DB has over 10 million records in it at any given time and it is 
 purged daily of older records.
 - We use prepared statements (PS).
 - Each client has one dedicated connection.
 
 All:
 I appreciate your responses.  I will benchmark using JMeter and then follow 
 the tuning tips for derby 10.8 ( 
 http://db.apache.org/derby/docs/10.8/tuning/index.html ).  I will start by 
 tweaking the derby.statementCache.size up from the 100 default.
 
 Any other advice greatly appreciated.
 
 Thanks,
 
 Jerry
 
 On 9/30/2013 2:55 PM, Peter wrote:
 Do you open new connection every time or do you have a pool? How often does 
 Derby checkpoint/switch log file?
 
 
 Peter
 
 
 On 9/30/2013 2:47 PM, Bergquist, Brett wrote:
 Jerry, can you provide a bit more background which might be helpful:
 
 - what is your definition of a flurry of data?   What sort of transaction 
 rate do you estimate this is?
 - are you using prepared statements for your inserts, updates, etc? If not, 
 then do so and also change the derby.statementCache.size to something quite 
 a bit larger.  This will allow the statements to be compiled once and cached 
 instead of being prepared each time you execute them.
 - are you using a connection pool or are you opening/closing connections 
 frequently?
 
 I have a system with a busy database and it took some tuning to get to this 
 point.  Right now it is doing about 100 inserts/second continuous 24x7 and 
 it has peaked up to 200 inserts/second.  Granted my application is different 
 than what you are doing but it is possible to get derby to run when busy.
 
 
 -Original Message-
 From: Jerry Lampi [mailto:j...@sdsusa.com]
 Sent: Monday, September 30, 2013 3:29 PM
 To: Derby User Group
 Subject: Proper configuration for a very busy DB?
 
 We have about 30 clients that connect to our version 10.8.2.2 Derby DB.
 
 The clients are programs that gather data from the operating system of their 
 host and then store that data in the DB, including FTP activity.
 Sometimes, the clients get huge flurries of data all at once and Derby is 
 unable to handle the influx of requests; inserts, updates, etc.  In 
 addition, the clients are written so that if they are unable to talk to the 
 DB, they queue up as much data as possible and then write it to the DB when 
 the DB becomes available.
 
 This client queuing is a poor design, and places greater stress on the DB, 
 as when the 30 clients finally do talk to the DB, they all dump data at 
 once.  The clients do not know about one another and therefore do not 
 attempt any throttling or cooperation when dumping on the DB.
 
 The net effect of all this is that the DB is too slow to keep up with the 
 clients.  As clients try to feed data to the DB, it cannot accept it as fast 
 as desired and this results in the clients queueing more data, exacerbating 
 the issue.
 
 So the DB is very busy.  The only significant thing we have done thus far is 
 change the derby.storage.pageCacheSize=5000 and increase Java heap to 1536m.
 
 Is there a configuration considered optimal for a VERY busy Derby DB?
 
 Thanks,
 
 Jerry
 
 
 
 
 


Re: Proper configuration for a very busy DB?

2013-09-30 Thread Peter Ondruška
Do you open new connection every time or do you have a pool? How often does 
Derby checkpoint/switch log file?


 
Peter


- Original Message -
From: Jerry Lampi j...@sdsusa.com
To: Derby User Group derby-user@db.apache.org
Cc: 
Sent: Monday, 30 September 2013, 21:28
Subject: Proper configuration for a very busy DB?

We have about 30 clients that connect to our version 10.8.2.2 Derby DB.

The clients are programs that gather data from the operating system of their 
host and then store that data in the DB, including FTP activity.  Sometimes, 
the clients get huge flurries of data all at once and Derby is unable to handle 
the influx of requests; inserts, updates, etc.  In addition, the clients are 
written so that if they are unable to talk to the DB, they queue up as much 
data as possible and then write it to the DB when the DB becomes available.

This client queuing is a poor design, and places greater stress on the DB, as 
when the 30 clients finally do talk to the DB, they all dump data at once.  The 
clients do not know about one another and therefore do not attempt any 
throttling or cooperation when dumping on the DB.

The net effect of all this is that the DB is too slow to keep up with the 
clients.  As clients try to feed data to the DB, it cannot accept it as fast as 
desired and this results in the clients queueing more data, exacerbating the 
issue.

So the DB is very busy.  The only significant thing we have done thus far is 
change the derby.storage.pageCacheSize=5000 and increase Java heap to 1536m.

Is there a configuration considered optimal for a VERY busy Derby DB?

Thanks,

Jerry




Re: Is there some way to shut down a Derby database faster?

2013-07-04 Thread Peter Ondruška
So if users need the data after working with the database, give them a consistent copy 
using a backup: 
http://db.apache.org/derby/docs/10.0/manuals/admin/hubprnt43.html#HDRSII-BUBBKUP-63476

Peter
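
A minimal sketch of taking such a consistent copy over JDBC while the engine stays up;
the database name and backup directory are invented:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class OnlineBackupSketch {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:derby:salesDB");
             CallableStatement cs =
                     c.prepareCall("CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE(?)")) {
            cs.setString(1, "/tmp/derby-backups/salesDB-copy");
            cs.execute(); // returns once the consistent copy is on disk
        }
    }
}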

On 4. 7. 2013, at 23:17, Trejkaz trej...@trypticon.org wrote:

 On Fri, Jul 5, 2013 at 12:44 AM, Bryan Pendleton
 bpendleton.de...@gmail.com wrote:
 Have you considered using a connection pool in between your
 application layer and the database, so that the connections
 are retained and re-used, rather than being fully reclaimed
 and fully reopened?
 
 From the original post:
 
 | and when the user closes the database they expect to be able to move
 | the files immediately after.
 
 TX


Re: Can anyone help

2012-12-14 Thread Peter Davis
Are you closing prepared statements and result sets?
On Dec 14, 2012 7:58 PM, DICKERSON, MICHAEL md2...@att.com wrote:

  I am posting a dump I got from an application that uses Derby and JBoss.
 Can anyone help with why it is running out of memory…or where to look? I am
 thinking it has something to do with Derby but I have no experience with it.


 Thanks,


 Mike
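
Following up on Peter's question rather than on Mike's dump: a minimal sketch of the
leak-free pattern with try-with-resources; the method name and query are invented, and
the DataSource stands in for whatever pool JBoss provides:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class LeakFreeQuery {
    static int countRows(DataSource ds) throws SQLException {
        try (Connection c = ds.getConnection();
             PreparedStatement ps = c.prepareStatement("SELECT COUNT(*) FROM SYS.SYSTABLES");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }   // all three resources are closed here, even on exceptions
    }
}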




Re: Problems with Online Backup SYSCS_BACKUP_DATABASE

2012-08-17 Thread Peter Ondruška
Hi, the backup destination should have at least as much free disk space available
as your database size, excluding the logs.

Peter

On 17. 8. 2012, at 13:06, Stefan R. elstefan...@gmail.com wrote:

Hi,

We're using Derby DB (Version 10.8.2.2) in a larger project. Our database
size is now around 12GB. It is running as network service and is used as
the data backend for several Java web projects, that are connecting to the
database using the network jdbc client. One of the projects is triggering
nightly backups using the command:



CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE(?)



Randomly (every 2 or 3 days) the backup cannot complete because of the
following exception:



Aug 13 00:03:14 srv-test-001 jsvc.exec[5953]: a:662)
Caused by: org.apache.derby.client.am.SqlException: Cannot backup the database, got an
I/O Exception while writing to the backup container file
/mnt/backup/2012-08-13-00-00-00/bd/seg0/c9b1.dat.
    at org.apache.derby.client.am.Statement.completeExecute(Unknown Source)
    at org.apache.derby.client.net.NetStatementReply.parseEXCSQLSTTreply(Unknown Source)
    at org.apache.derby.client.net.NetStatementReply.readExecuteCall(Unknown Source)
    at org.apache.derby.client.net.StatementReply.readExecuteCall(Unknown Source)
    at org.apache.derby.client.net.NetStatement.readExecuteCall_(Unknown Source)
    at org.apache.derby.client.am.Statement.readExecuteCall(Unknown Source)
    at org.apache.derby.client.am.PreparedStatement.flowExecute(Unknown Source)
    at org.apache.derby.client.am.PreparedStatement.executeX(Unknown Source)
    ... 21 more
Caused by: org.apache.derby.client.am.SqlException: Java exception: 'No space left on
device: java.io.IOException'.
    ... 29 more



The available space on the target device is more than sufficient. We
already started the derby process with the Java Option -Djava.io.tmpdir
to point to a directory residing on a partition with more space. This did
not help.



Do you have any suggestions? Which directories are used for running the
backup and how much space needs to be available?



Thank you,

Stefan


derby.jar Classpath

2012-07-17 Thread Peter Davis
Hi

I've been using derby db for a while now and have until now been willing to
work around a problem with classpath.  It appears that I have to explicitly
place derby.jar on the class path using one of the standard mechanisms.
e.g. -cp  or in the manifest of the running jar.  Unfortunately my
application uses its own class loader and therefore none of the above is
true.

I'm guessing that derby.jar uses the property java.class.path to find the
path to its other components.

So my question is: is there a way to load derby.jar without explicitly
declaring it on the classpath?

Peter
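
One workaround sometimes used when an application has its own class loader is to load
the embedded driver through that loader and call the Driver interface directly,
bypassing DriverManager; this is only a sketch, with an invented jar path and database
name:

import java.net.URL;
import java.net.URLClassLoader;
import java.sql.Driver;
import java.util.Properties;

public class LoadDerbyFromCustomLoader {
    public static void main(String[] args) throws Exception {
        // Hypothetical location of derby.jar, not on the application classpath.
        URLClassLoader loader = new URLClassLoader(
                new URL[] { new URL("file:/opt/libs/derby.jar") },
                LoadDerbyFromCustomLoader.class.getClassLoader());
        Driver driver = (Driver) Class
                .forName("org.apache.derby.jdbc.EmbeddedDriver", true, loader)
                .getDeclaredConstructor()
                .newInstance();
        // DriverManager only hands out drivers visible to the caller's class
        // loader, so talk to the driver directly instead.
        driver.connect("jdbc:derby:demoDB;create=true", new Properties()).close();
    }
}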


Re: Speed up single INSERT INTO statement?

2012-05-09 Thread Peter Ondruška
Consider batching the inserts and using a larger log file.
On 9 May 2012 19:01, TXVanguard brett.den...@lmco.com wrote:


 I need to speed up a single INSERT statement in Derby.

 The statement has the form:

 INSERT INTO table (col1, col2) SELECT a, b FROM 

 In my application, it takes about 10 seconds to insert 3000 records.

 I have experimented with turning off autocommit, adjusting
 derby.storage.pageCacheSize and derby.storage.pageSize, turning off indexes
 for the table, and a few other things, but nothing seems to affect the
 speed
 of the INSERT.

 Are there any other techniques I can try?  Would it be helpful to
 temporarily turn off constraints for the table?
 --
 View this message in context:
 http://old.nabble.com/Speed-up-single-INSERT-INTO-statement--tp33763645p33763645.html
 Sent from the Apache Derby Users mailing list archive at Nabble.com.




Re: Random DRDA Error on IBM J9 JVM

2012-04-04 Thread Peter Ondruška
), (DRDAID = {1}), Java exception: ':
 java.lang.NullPointerException'.
 Wed Apr 04 14:46:24 EDT 2012 : Connection number: 2.
 
 Wed Apr 04 14:46:24 EDT 2012: Shutting down Derby engine
 Wed Apr 04 14:46:24 EDT 2012 : Unexpected exception:
  {0}
 Wed Apr 04 14:46:24 EDT 2012 : null
 java.lang.NullPointerException
 at
 org.apache.derby.impl.services.monitor.TopService.getService(TopService.java:128)
 at
 org.apache.derby.impl.services.monitor.BaseMonitor.shutdown(BaseMonitor.java:199)
 at org.apache.derby.jdbc.InternalDriver.connect(InternalDriver.java:241)
 at org.apache.derby.jdbc.EmbeddedDriver.connect(EmbeddedDriver.java:126)
 at
 org.apache.derby.impl.drda.NetworkServerControlImpl.blockingStart(NetworkServerControlImpl.java:913)
 at
 org.apache.derby.impl.drda.NetworkServerControlImpl.executeWork(NetworkServerControlImpl.java:2243)
 at
 org.apache.derby.drda.NetworkServerControl.main(NetworkServerControl.java:320)
 Wed Apr 04 14:46:24 EDT 2012 : Apache Derby Network Server - 10.8.2.2 -
 (1181258) shutdown


 On Tue, Mar 20, 2012 at 8:22 AM, Brandon L. Duncan
 brandonl.dun...@gmail.com wrote:

 Thanks Peter and Myrna for the feedback. Myrna, those Wiki links were
 helpful; I did not even know they existed. I'm going to get a J9 environment
 setup with 10.8.2.2 in the next day or two and see how that goes.

 Thank you both again,
 Brandon

 On Tue, Mar 20, 2012 at 3:47 AM, Peter Ondruška
 peter.ondruska+de...@kaibo.eu wrote:

 Brandon, I run 10.8.2.2 with J9:

 $ java -version
 java version 1.6.0
 Java(TM) SE Runtime Environment (build pap3260sr9fp2-20110627_03(SR9
 FP2))
 IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 AIX ppc-32
 jvmap3260sr9-20110624_85526 (JIT enabled, AOT enabled)
 J9VM - 20110624_085526
 JIT  - r9_20101028_17488ifx17
 GC   - 20101027_AA)
 JCL  - 20110530_01

 at that works, also SR9 FP3 work fine (this is on AIX).

 On Mon, Mar 19, 2012 at 6:18 PM, Brandon L. Duncan
 brandonl.dun...@gmail.com wrote:
  Thanks Peter.
 
  Do you remember if you upgraded to 10.8.2.2 or 10.8.1.2? We do have a
  test
  environment with 10.8.1.2, and are seeing similar results. I do agree
  that
  it seems to be a strange coexistence with Derby and J9. The IBM Classic
  JVM
  doesn't seem to have this issue, although IBM seems to have eliminated
  it in
  V7R1M0.
 
  Mon Mar 19 12:46:40 EDT 2012 : Apache Derby Network Server - 10.8.1.2 -
  (1095077) started and ready to accept connections on port 11527
  Mon Mar 19 12:46:44 EDT 2012 : Connection number: 1.
  
  Mon Mar 19 12:46:45 EDT 2012:
  Shutting down instance a816c00e-0136-2bda-791f-cab24f1a on database
  directory /database with class loader
  sun.misc.Launcher$AppClassLoader@376a376a
  Mon Mar 19 12:46:45 EDT 2012 Thread[DRDAConnThread_11,10,main] Cleanup
  action starting
  java.lang.NullPointerException
  at
  org.apache.derby.impl.store.raw.data.BaseDataFileFactory.stop(Unknown
  Source)
  at org.apache.derby.impl.services.monitor.TopService.stop(Unknown
  Source)
  at org.apache.derby.impl.services.monitor.TopService.shutdown(Unknown
  Source)
  at
  org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
  Source)
  at
 
  org.apache.derby.impl.services.monitor.BaseMonitor.startProviderService(Unknown
  Source)
  at
 
  org.apache.derby.impl.services.monitor.BaseMonitor.findProviderAndStartService(Unknown
  Source)
  at
 
  org.apache.derby.impl.services.monitor.BaseMonitor.startPersistentService(Unknown
  Source)
  at
 
  org.apache.derby.iapi.services.monitor.Monitor.startPersistentService(Unknown
  Source)
  at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown
  Source)
  at org.apache.derby.impl.jdbc.EmbedConnection.init(Unknown Source)
  at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
  at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
  at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
  at org.apache.derby.impl.drda.Database.makeConnection(Unknown Source)
  at
  org.apache.derby.impl.drda.DRDAConnThread.getConnFromDatabaseName(Unknown
  Source)
  at
  org.apache.derby.impl.drda.DRDAConnThread.verifyUserIdPassword(Unknown
  Source)
  at org.apache.derby.impl.drda.DRDAConnThread.parseSECCHK(Unknown
  Source)
  at
  org.apache.derby.impl.drda.DRDAConnThread.parseDRDAConnection(Unknown
  Source)
  at org.apache.derby.impl.drda.DRDAConnThread.processCommands(Unknown
  Source)
  at org.apache.derby.impl.drda.DRDAConnThread.run(Unknown Source)
  Cleanup action completed
  Mon Mar 19 12:46:45 EDT 2012 Thread[DRDAConnThread_11,10,main] Cleanup
  action starting
  java.lang.NullPointerException
  at
  org.apache.derby.impl.store.raw.data.BaseDataFileFactory.stop(Unknown
  Source)
  at org.apache.derby.impl.services.monitor.TopService.stop(Unknown
  Source)
  at org.apache.derby.impl.services.monitor.TopService.shutdown(Unknown
  Source

Re: Random DRDA Error on IBM J9 JVM

2012-03-20 Thread Peter Ondruška
Brandon, I run 10.8.2.2 with J9:

$ java -version
java version 1.6.0
Java(TM) SE Runtime Environment (build pap3260sr9fp2-20110627_03(SR9 FP2))
IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 AIX ppc-32
jvmap3260sr9-20110624_85526 (JIT enabled, AOT enabled)
J9VM - 20110624_085526
JIT  - r9_20101028_17488ifx17
GC   - 20101027_AA)
JCL  - 20110530_01

and that works; SR9 FP3 also works fine (this is on AIX).

On Mon, Mar 19, 2012 at 6:18 PM, Brandon L. Duncan
brandonl.dun...@gmail.com wrote:
 Thanks Peter.

 Do you remember if you upgraded to 10.8.2.2 or 10.8.1.2? We do have a test
 environment with 10.8.1.2, and are seeing similar results. I do agree that
 it seems to be a strange coexistence with Derby and J9. The IBM Classic JVM
 doesn't seem to have this issue, although IBM seems to have eliminated it in
 V7R1M0.

 Mon Mar 19 12:46:40 EDT 2012 : Apache Derby Network Server - 10.8.1.2 -
 (1095077) started and ready to accept connections on port 11527
 Mon Mar 19 12:46:44 EDT 2012 : Connection number: 1.
 
 Mon Mar 19 12:46:45 EDT 2012:
 Shutting down instance a816c00e-0136-2bda-791f-cab24f1a on database
 directory /database with class loader
 sun.misc.Launcher$AppClassLoader@376a376a
 Mon Mar 19 12:46:45 EDT 2012 Thread[DRDAConnThread_11,10,main] Cleanup
 action starting
 java.lang.NullPointerException
 at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.stop(Unknown
 Source)
 at org.apache.derby.impl.services.monitor.TopService.stop(Unknown Source)
 at org.apache.derby.impl.services.monitor.TopService.shutdown(Unknown
 Source)
 at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
 Source)
 at
 org.apache.derby.impl.services.monitor.BaseMonitor.startProviderService(Unknown
 Source)
 at
 org.apache.derby.impl.services.monitor.BaseMonitor.findProviderAndStartService(Unknown
 Source)
 at
 org.apache.derby.impl.services.monitor.BaseMonitor.startPersistentService(Unknown
 Source)
 at
 org.apache.derby.iapi.services.monitor.Monitor.startPersistentService(Unknown
 Source)
 at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source)
 at org.apache.derby.impl.jdbc.EmbedConnection.init(Unknown Source)
 at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
 at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
 at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
 at org.apache.derby.impl.drda.Database.makeConnection(Unknown Source)
 at org.apache.derby.impl.drda.DRDAConnThread.getConnFromDatabaseName(Unknown
 Source)
 at org.apache.derby.impl.drda.DRDAConnThread.verifyUserIdPassword(Unknown
 Source)
 at org.apache.derby.impl.drda.DRDAConnThread.parseSECCHK(Unknown Source)
 at org.apache.derby.impl.drda.DRDAConnThread.parseDRDAConnection(Unknown
 Source)
 at org.apache.derby.impl.drda.DRDAConnThread.processCommands(Unknown Source)
 at org.apache.derby.impl.drda.DRDAConnThread.run(Unknown Source)
 Cleanup action completed
 Mon Mar 19 12:46:45 EDT 2012 Thread[DRDAConnThread_11,10,main] Cleanup
 action starting
 java.lang.NullPointerException
 at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.stop(Unknown
 Source)
 at org.apache.derby.impl.services.monitor.TopService.stop(Unknown Source)
 at org.apache.derby.impl.services.monitor.TopService.shutdown(Unknown
 Source)
 at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
 Source)
 at
 org.apache.derby.impl.services.monitor.BaseMonitor.startProviderService(Unknown
 Source)
 at
 org.apache.derby.impl.services.monitor.BaseMonitor.findProviderAndStartService(Unknown
 Source)
 at
 org.apache.derby.impl.services.monitor.BaseMonitor.startPersistentService(Unknown
 Source)
 at
 org.apache.derby.iapi.services.monitor.Monitor.startPersistentService(Unknown
 Source)
 at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source)
 at org.apache.derby.impl.jdbc.EmbedConnection.init(Unknown Source)
 at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
 at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
 at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
 at org.apache.derby.impl.drda.Database.makeConnection(Unknown Source)
 at org.apache.derby.impl.drda.DRDAConnThread.getConnFromDatabaseName(Unknown
 Source)
 at org.apache.derby.impl.drda.DRDAConnThread.verifyUserIdPassword(Unknown
 Source)
 at org.apache.derby.impl.drda.DRDAConnThread.parseSECCHK(Unknown Source)
 at org.apache.derby.impl.drda.DRDAConnThread.parseDRDAConnection(Unknown
 Source)
 at org.apache.derby.impl.drda.DRDAConnThread.processCommands(Unknown Source)
 at org.apache.derby.impl.drda.DRDAConnThread.run(Unknown Source)
 Cleanup action completed
 Mon Mar 19 12:46:45 EDT 2012 Thread[DRDAConnThread_11,10,main] (DATABASE =
 /database), (DRDAID = {1}), Java exception: ':
 java.lang.NullPointerException'.
 Mon Mar 19 12:46:47 EDT 2012 : Connection number: 2

Re: Random DRDA Error on IBM J9 JVM

2012-03-19 Thread Peter Ondruška
I have seen the same problem and resolved it by upgrading Derby to 10.8.
There must be something strange in the J9 and Derby coexistence. This
happened very randomly.

On Mon, Mar 19, 2012 at 4:55 PM, Brandon L. Duncan
brandonl.dun...@gmail.com wrote:
 I was wondering if anyone came across this error before while attempting to
 establish a connection to Derby? The database seems to boot fine, but when a
 connection is attempted it just bombs out. It also is not
 always reproducible, as at times it will be fine, other times it errors with
 the exception below. The JVM is IBM's J9 implementation. I know Derby
 10.4.2.0 is a wee bit old, but upgrading this instance would not be easy at
 this point.

 java version 1.6.0

 Java(TM) SE Runtime Environment (build
 pap3260sr9ifix-20110211_02(SR9+IZ94423))

 IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 OS/400 ppc-32
 jvmap3260sr9-20101130 (JIT enabled, AOT enabled)

 J9VM - 20101124_069295

 JIT  - r9_20101028_17488ifx2

 GC   - 20101027_AA)

 JCL  - 20101119_01


 Derby Log:

 Apache Derby Network Server - 10.4.2.0 - (689064) started and ready to
 accept connections on port 1555 at 2012-03-15 13:27:29.860 GMT
 Connection number: 1.

 2012-03-15 13:27:34.382 GMT:
 Shutting down instance a816c00e-0136-168a-b0de-d934f54c
 
 2012-03-15 13:27:34.384 GMT Thread[DRDAConnThread_11,5,main] Cleanup action
 starting
 java.lang.NullPointerException
 at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.stop(Unknown
 Source)
 at org.apache.derby.impl.services.monitor.TopService.stop(Unknown Source)
 at org.apache.derby.impl.services.monitor.TopService.shutdown(Unknown
 Source)
 at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
 Source)
 at
 org.apache.derby.impl.services.monitor.BaseMonitor.startProviderService(Unknown
 Source)
 at
 org.apache.derby.impl.services.monitor.BaseMonitor.findProviderAndStartService(Unknown
 Source)
 at
 org.apache.derby.impl.services.monitor.BaseMonitor.startPersistentService(Unknown
 Source)
 at
 org.apache.derby.iapi.services.monitor.Monitor.startPersistentService(Unknown
 Source)
 at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source)
 at org.apache.derby.impl.jdbc.EmbedConnection.init(Unknown Source)
 at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
 at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
 at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
 at org.apache.derby.impl.drda.Database.makeConnection(Unknown Source)
 at org.apache.derby.impl.drda.DRDAConnThread.getConnFromDatabaseName(Unknown
 Source)
 at org.apache.derby.impl.drda.DRDAConnThread.verifyUserIdPassword(Unknown
 Source)
 at org.apache.derby.impl.drda.DRDAConnThread.parseSECCHK(Unknown Source)
 at org.apache.derby.impl.drda.DRDAConnThread.parseDRDAConnection(Unknown
 Source)
 at org.apache.derby.impl.drda.DRDAConnThread.processCommands(Unknown Source)
 at org.apache.derby.impl.drda.DRDAConnThread.run(Unknown Source)
 Cleanup action completed
 2012-03-15 13:27:34.385 GMT Thread[DRDAConnThread_11,5,main] Cleanup action
 starting
 java.lang.NullPointerException
 at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.stop(Unknown
 Source)
 at org.apache.derby.impl.services.monitor.TopService.stop(Unknown Source)
 at org.apache.derby.impl.services.monitor.TopService.shutdown(Unknown
 Source)
 at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
 Source)
 at
 org.apache.derby.impl.services.monitor.BaseMonitor.startProviderService(Unknown
 Source)
 at
 org.apache.derby.impl.services.monitor.BaseMonitor.findProviderAndStartService(Unknown
 Source)
 at
 org.apache.derby.impl.services.monitor.BaseMonitor.startPersistentService(Unknown
 Source)
 at
 org.apache.derby.iapi.services.monitor.Monitor.startPersistentService(Unknown
 Source)
 at org.apache.derby.impl.jdbc.EmbedConnection.bootDatabase(Unknown Source)
 at org.apache.derby.impl.jdbc.EmbedConnection.init(Unknown Source)
 at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
 at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
 at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source)
 at org.apache.derby.impl.drda.Database.makeConnection(Unknown Source)
 at org.apache.derby.impl.drda.DRDAConnThread.getConnFromDatabaseName(Unknown
 Source)
 at org.apache.derby.impl.drda.DRDAConnThread.verifyUserIdPassword(Unknown
 Source)
 at org.apache.derby.impl.drda.DRDAConnThread.parseSECCHK(Unknown Source)
 at org.apache.derby.impl.drda.DRDAConnThread.parseDRDAConnection(Unknown
 Source)
 at org.apache.derby.impl.drda.DRDAConnThread.processCommands(Unknown Source)
 at org.apache.derby.impl.drda.DRDAConnThread.run(Unknown Source)
 Cleanup action completed
 2012-03-15 13:27:34.388 GMT Thread[DRDAConnThread_11,5,main] (DATABASE =
 ../../database), (DRDAID = {1}), Java exception: ':
 java.lang.NullPointerException'.
 Connection 

Re: AW: Does derby ned allways a rollback or commt?

2011-11-22 Thread Peter Ondruška
I would extend your question: is there any difference in commit or rollback
after single select statement?
On 22.11.2011 12:45, malte.kem...@de.equens.com wrote:


 So what would be the best practice using a rollback or a commit when just
 reading a database?


 Malte
  --

  From: florin.herin...@sungard.com [mailto:florin.herin...@sungard.com]
  Sent: Tuesday, 22 November 2011 12:14
  To: derby-user@db.apache.org
  Subject: AW: Does derby ned allways a rollback or commt?

 That is not derby specific. In any db selects are part of transactions
 too. So either you enable autocommit or you explicitly commit your
 transaction(s) before releasing the connection. Commit will release the
 locks you acquired on the db (read locks if you haven’t modified anything).
 


 Cheers,


  Florin
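
To make that concrete, a minimal sketch of the read-only case with autocommit off; the
database name is invented:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReadOnlyCommitSketch {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:derby:demoDB")) {
            c.setAutoCommit(false);
            try (Statement s = c.createStatement();
                 ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM SYS.SYSTABLES")) {
                rs.next();
            }
            // Even a pure SELECT opened a transaction and holds read locks,
            // so end it before close() to avoid SQLState 25001.
            c.commit();   // or c.rollback(); either one releases the locks
        }
    }
}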


  From: malte.kem...@de.equens.com [mailto:malte.kem...@de.equens.com]
  Sent: Tuesday, 22 November 2011 12:09
  To: derby-user@db.apache.org
  Subject: Does derby ned allways a rollback or commt?

 Hi,

  I have an application using embedded Derby (10.8.1.2). In the program I
  use a little routine that ends with a rollback, a commit, or nothing at all,
  depending on a parameter.

  In a certain mode I just do a select, so as far as I am concerned I don't
  need to do either a rollback or a commit, since it is just a select.

 But I get then always an Exception:


  Eine Verbindung kann nicht beendet werden, solange noch eine Transaktion
  aktiv ist. [In English: a connection cannot be closed while a transaction
  is still active.]

    at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.newSQLException(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.checkForTransactionInProgress(Unknown Source)
    at org.apache.derby.impl.jdbc.EmbedConnection.close(Unknown Source)

  That is obviously SQLState 25001.

  Even though I initialize all my prepared statements, including the ones that
  do inserts and updates, in this mode only a select statement is used.

  How come I get such an exception? It seems that I always have to commit or
  roll back before I quit a Derby program, even though no change has occurred
  on the particular database.

 Or is there something I should care about?

 Equens SE

 Malte Kempff
 

 Core Applications/ Change Bulk Payments

 Tel:  +49(0)69/58 99 93 - 60417

 Fax: +49(0)69/58 99 93 - 60290

 mail to:malte.kem...@de.equens.com 





Re: OutOfMemoryException when executing 180,000 batches

2011-11-09 Thread Peter Ondruška
Of course you get an OOME if you use a memory-only database and your data size
plus overhead exceeds the heap.
Dne 8.11.2011 23:44 Pavel Bortnovskiy pbortnovs...@jefferies.com
napsal(a):

 Is it unusual that Derby (used in-memory only) seems to throw an out of
 memory exception when executing 180,000 batched insert statements?
 (The JVM was started with -Xms1024m -Xmx2048m):

 Caused by: java.sql.SQLException: Java exception: 'GC overhead limit
 exceeded: java.lang.OutOfMemoryError'.
at
 org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
 Source)
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.javaException(Unknown Source)
at
 org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
 Source)
at
 org.apache.derby.impl.jdbc.EmbedResultSet.noStateChangeException(Unknown
 Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.setDate(Unknown
 Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.setDate(Unknown
 Source)
at org.apache.derby.iapi.types.SQLDate.setInto(Unknown Source)
at
 org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeBatchElement(Unknown
 Source)

 Are there memory limitations for Derby running in the in-memory only mode?
 Is there anything that can be done to avoid getting such errors?

 Thanks,
 Pavel.








Re: Setting Derby NetworkServer JVM size using NetworkServerControl

2011-11-09 Thread Peter Ondruška
append:
DERBY_OPTS=-Xms1g -Xmx1g
to bin\derby_common.bat

On Wed, Nov 9, 2011 at 5:39 PM, Hawley, Dan dan.haw...@lmco.com wrote:
 Hi,

 I have an urgent problem that I have not been able to solve by myself.  I am 
 populating many derby databases with data in preparation for going live with 
 a new application.  After opening about 10 databases, the Derby Network 
 Server crashes with an java.lang.OutOfMemoryError.  See derby log below.  We 
 start the Derby Network Server with a batch file that I also have below.  I 
 think I need to allocate more memory to the Derby Network Server's JVM but I 
 can't see how to do that using the NetworkServerControl to start the server.  
 Does anyone know how to do this?  What is a reasonable memory allocation for 
 the JVM?  Another question I have is: Am I doing something wrong having by 
 having Derby load this many databases?  I am using Hibernate JPA with Derby 
 for my application.  I am closing the EntityManagerFactory and EntityManager 
 objects when I close the application.

 Your assistance and advice is greatly appreciated.  Thanks, Dan

 =start derby server.bat file=
 f:
 cd f:\epns\database\EditorDatabase
 %DERBY_HOME%\bin\NetworkServerControl start -h secapp -p 1527


 ===derby server crashed = ran out of memory ==


 2011-11-08 21:53:28.187 GMT : Apache Derby Network Server - 10.6.1.0 - 
 (938214) started and ready to accept connections on port 1527
 
 2011-11-08 22:12:56.443 GMT:
  Booting Derby version The Apache Software Foundation - Apache Derby - 
 10.6.1.0 - (938214): instance a816c00e-0133-852b-e9db-003a53a0
 on database directory F:\EPNS\database\EditorDatabase\DBEDIT-B   with class 
 loader sun.misc.Launcher$AppClassLoader@fabe9

 Database Class Loader started - derby.database.classpath=''
 
 2011-11-08 22:26:30.306 GMT:
  Booting Derby version The Apache Software Foundation - Apache Derby - 
 10.6.1.0 - (938214): instance 274b9034-0133-852b-e9db-003a53a0
 on database directory F:\EPNS\database\EditorDatabase\011103AD   with class 
 loader sun.misc.Launcher$AppClassLoader@fabe9

 Database Class Loader started - derby.database.classpath=''
 
 2011-11-08 22:28:27.370 GMT:
  Booting Derby version The Apache Software Foundation - Apache Derby - 
 10.6.1.0 - (938214): instance 636e50d2-0133-852b-e9db-003a53a0
 on database directory F:\EPNS\database\EditorDatabase\011103AC   with class 
 loader sun.misc.Launcher$AppClassLoader@fabe9

 Database Class Loader started - derby.database.classpath=''
 
 2011-11-08 22:31:06.678 GMT:
  Booting Derby version The Apache Software Foundation - Apache Derby - 
 10.6.1.0 - (938214): instance f7f21170-0133-852b-e9db-003a53a0
 on database directory F:\EPNS\database\EditorDatabase\011100BM   with class 
 loader sun.misc.Launcher$AppClassLoader@fabe9

 Database Class Loader started - derby.database.classpath=''
 
 2011-11-08 22:33:08.717 GMT:
  Booting Derby version The Apache Software Foundation - Apache Derby - 
 10.6.1.0 - (938214): instance 64d6d20e-0133-852b-e9db-003a53a0
 on database directory F:\EPNS\database\EditorDatabase\011102-A   with class 
 loader sun.misc.Launcher$AppClassLoader@fabe9

 Database Class Loader started - derby.database.classpath=''
 
 2011-11-08 22:37:12.133 GMT:
  Booting Derby version The Apache Software Foundation - Apache Derby - 
 10.6.1.0 - (938214): instance 2a1c92ac-0133-852b-e9db-003a53a0
 on database directory F:\EPNS\database\EditorDatabase\011200-Y   with class 
 loader sun.misc.Launcher$AppClassLoader@fabe9

 Database Class Loader started - derby.database.classpath=''
 
 2011-11-08 22:41:16.888 GMT:
  Booting Derby version The Apache Software Foundation - Apache Derby - 
 10.6.1.0 - (938214): instance c7c3534a-0133-852b-e9db-003a53a0
 on database directory F:\EPNS\database\EditorDatabase\011201-F   with class 
 loader sun.misc.Launcher$AppClassLoader@fabe9

 Database Class Loader started - derby.database.classpath=''
 
 2011-11-08 22:43:54.771 GMT:
  Booting Derby version The Apache Software Foundation - Apache Derby - 
 10.6.1.0 - (938214): instance bdcb13e8-0133-852b-e9db-003a53a0
 on database directory F:\EPNS\database\EditorDatabase\DBITST-A   with class 
 loader sun.misc.Launcher$AppClassLoader@fabe9

 Database Class Loader started - derby.database.classpath=''
 
 2011-11-08 22:48:35.977 GMT:
  Booting Derby version The Apache 

Re: Derby secure by default

2011-09-19 Thread Peter Ondruška
Rick, I’d vote for secure by default in v.11. Thanks

On Mon, Sep 19, 2011 at 6:39 PM, Rick Hillegas rick.hille...@oracle.com wrote:
 The Derby developers are considering introducing a single master security
 property. Turning this property on will enable most Derby security
 mechanisms:

 1) Authentication - Will be on, requiring username/password credentials at
 connection time. Derby will supply a default authentication mechanism.

 2) SQL authorization - Will be on, hiding a user's data from other people.
 In addition, Derby will support more SQL Standard protections for Java
 routines.

 3) File permissions - Will be tightened as described by DERBY-5363.

 4) PUBLIC -This keyword will not be allowed as a user name.

 5) SSL/TLS encryption - Will shield client/server traffic.

 6) Server administration -  Will require credentials.

 When the property is off, Derby will behave as it does today:
 Authentication, authorization, and network encryption will be off, file
 permissions will inherit defaults from the account which runs the VM, PUBLIC
 will be a legal user name, and server administration won't need credentials.

 This new master property will make it easier to configure a more secure
 application. We want to introduce the property in an upcoming 10.x release,
 where it will default to being off. That means that it won't cause
 compatibility problems.

 Later on, we might change the default for this property so that it would
 normally be turned on. This would make Derby more secure out of the box at
 the cost of breaking existing applications. Many applications would need to
 explicitly turn the property off in order to run as they did previously.
 Release notes would document this behavioral change and we would bump the
 major release id from 10 to 11 in order to call attention to the change.

 We would like your feedback on this trade-off between security out of the
 box versus disruption. Should this extra security be enabled by default?

 Thanks,
 -Rick




Re: Opinions on new security feature requested

2011-09-02 Thread Peter Ondruška
+1 for more restrictive permissions. Actually, when I run Derby on Unix
it runs under its own user+group and the database files are not accessible by
others.

On Fri, Sep 2, 2011 at 7:31 PM, Dag H. Wanvik dag.wan...@oracle.com wrote:
 Hi folks,

 we are always working to make Derby more secure; in this day and age,
 security is ever more on people's minds for obvious reasons;
 IT systems are everywhere and the bad guys never tire of finding new holes
 to break them. Up till now, Derby creates database files and logs using the
 default operating system permission in effect. On Unix/linux this is
 controlled by the umask of the process starting the Derby engine, be it
 embedded or a standalone Derby network server. Similarly on Windows, NTFS
 has a security model that can be configured to give various default permissions.

 Now, often the defaults will allow other (than the one starting the VM) OS
 users the permissions to read and possibly write to the database files. This
 can be intentional (to allow several users to boot the same, shared database),
 or it can be accidental. In DERBY-5363 we have been discussing use cases and
 scenarios for when it would be desirable to let Derby be more secure than
 the default permissions. Other databases also do this, e.g. PostgreSQL, MS
 SQL server.

 Typically, only the OS user creating the database would have access (default
 behavior) unless one told the database to be lax and not worry about
 tightening up the default OS permissions.
 Obviously, one can achieve the same restrictive permissions, by setting the
 umask to 0077 on Unix, or tweak the NTFS settings similarly (Windows), but
 this requires some care and presumes that the users remembers to do so (many
 people don't grok the NTFS security model..)

 To be clear, one would be able to enable/disable this extra security by
 providing Derby with a property setting, so the question is really what is
 the most appropriate default: use lax permissions (rely on the user having
 tightened up before starting the VM), or use the new proactive secure
 settings proposed in DERBY-5363.

 Secure default pros:
 - users will get better security by default. If one needs to share the
 database files, one can use a property to get old, lax behavior.
 - no need to change startup scripts to get better security

 Secure default cons:
 - upwards compatibility: if an installation relies on sharing database
 files, one would have to start using a property after upgrading.
 - requires at least Java 6 (on Unix), Java 7 on Windows/NTFS to work (an
 incentive to upgrade, though :-)

 In the discussion it as been suggested that many deployments, especially of
 embedded Derby, rely on several OS users having permissions, so changing the
 default Derby behavior would cause upgrade issues. Probably for most
 client/server deployments, where the server is started from the command
 line, it would be the same OS user starting the Derby server every time. In
 mixed deployments (embedded, but the server is sometimes started via the
 API), the latter may not hold true.

 A possible trade-off between the concerns would thus be to start embedded
 with the existing, lax permissions by default, but start the server from
 the command line with a secure (restrictive) default. In both cases, one
 would get the opposite behavior by providing a system property on VM
 startup.

 Before we settle the discussion on this, it would be good to hear what you
 think! Thanks!

 Dag




Re: nulls in prepared statement

2011-07-21 Thread Peter Ondruška
You would subclass PreparedStatement, e.g. public class
MyPreparedStatement extends PreparedStatement, and override the setString
method. Then in your code replace PreparedStatement with
MyPreparedStatement.
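
Sketching one possible shape of that idea (not code from the thread): since
java.sql.PreparedStatement is an interface, a small delegating wrapper is an easy way
to centralize the null handling; the class and method names are invented:

import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

/** Thin wrapper that turns Java nulls into SQL NULLs for string parameters. */
public class NullSafeStatement {
    private final PreparedStatement delegate;

    public NullSafeStatement(PreparedStatement delegate) {
        this.delegate = delegate;
    }

    public void setString(int index, String value) throws SQLException {
        if (value == null) {
            delegate.setNull(index, Types.VARCHAR);
        } else {
            delegate.setString(index, value);
        }
    }

    public PreparedStatement unwrap() {
        return delegate;
    }
}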

On Thu, Jul 21, 2011 at 5:31 PM, Tomcat Programmer
tcprogram...@yahoo.com wrote:


 Well may be you won't have less ugly code, but at least it will be hidden
 ;-)

 I think the easiest way it to use you own PreparedStatement class. So you
 can do
 any special treatment or workaround in a centralized and unique place.

 Hi JYL this is a very insightful and interesting solution, which I would not
 have thought of.  Is this as simple as creating my own class with the derby
 version as its superclass and then just overriding the method?  That seems
 too easy .. is there any catches or pitfalls you can give me a heads up on?
  Thanks again for your help!
 Adding one specific thought:  if I extend the class, how do I get my version
 to be instantiated? (this is a web application and so it will be picking up
 driver registration and so forth from JNDI.)




Re: nulls in prepared statement

2011-07-20 Thread Peter Ondruška
Eclipselink or Hibernate might help if you want less ugly code.
On 20.7.2011 23:38, Tomcat Programmer tcprogram...@yahoo.com wrote:



 You must explicitly set value to null:

 if (cobj.getPartNo()==null) pstmt.setNull(1, java.sql.Types.VARCHAR);
 else pstmt.setString(1,cobj.getPartNo());


 Hi Peter, thanks for responding but I am aware of this as I indicated in
my post. You realize how tedious this will be with any significant number of
fields?  What I am asking about is if there is an alternate solution.

 Thanks in advance,
 TC


Re: nulls in prepared statement

2011-07-19 Thread Peter Ondruška
You must explicitly set value to null:

if (cobj.getPartNo()==null) pstmt.setNull(1, java.sql.Types.VARCHAR);
else pstmt.setString(1,cobj.getPartNo());

On Wed, Jul 20, 2011 at 4:42 AM, Tomcat Programmer
tcprogram...@yahoo.com wrote:
 Hi Everyone,

 I've tried doing research on this on the web but can't seem to find a clear 
 answer, I hope someone can help.  I'm currently working on an application 
 using Derby where the data being entered is optional for a lot of the fields. 
 The columns in the tables are defined using VARCHAR data type and accept 
 nulls.  If, in the code to do the insert or update,  someone leaves a field 
 empty in the application it will result in a null value being passed through 
 setString method like this:

 pstmt.setString(1,cobj.getPartNo());

 where the getPartNo() method returns null.  When this happens I get a null 
 pointer exception.  Is there a configuration parameter for Derby to accept 
 Java nulls and treat them as SQL nulls?   If not, the code is going to get 
 incredibly ugly fast if each value has to be checked for null and then issue 
 a separate setNull() method call.

 Thanks in advance,
 TC



Re: How to unlock a table in derby

2011-07-18 Thread Peter Ondruška
Rollback might work as well :)
On 18.7.2011 15:57, Lahiru Gunathilake glah...@gmail.com wrote:
 I will try it, thank you Byan !

 Lahiru

 On Fri, Jul 15, 2011 at 8:47 PM, Bryan Pendleton 
bpendleton.de...@gmail.com
 wrote:

 I execute the query LOCK TABLE <table name> IN SHARE MODE, but I cannot see
 any documentation on how to unlock a Derby table.


 Commit.

 thanks,

 bryan




Re: Does anyone want to run Derby 10.9 on JVM 1.4 or on CDC/FP 1.1?

2011-06-27 Thread Peter Ondruška
Please drop 1.4
On 27.6.2011 16:05, Rick Hillegas rick.hille...@oracle.com wrote:
 The 1.4 JVM has not been supported as a free platform for some time
 (although I believe you can buy a support contract for 1.4 if you need
 to). Does anyone plan to run Derby 10.9 on this platform? Does anyone
 plan to run Derby 10.9 on the related small device CDC/FP 1.1 platform?
 For the next feature release, we are considering dropping support for
 one or both of these platforms. Your responses will help us make a
decision.

 Thanks,
 -Rick


Re: error executing multiple insert statements

2011-06-16 Thread Peter Ondruška
Run this in ij. That is for running SQL scripts. Or execute each
statement (without ;) separately.
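
A minimal JDBC sketch of the one-statement-per-execute variant, reusing the SQL from
the post; only the connection URL is invented:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MultiInsertSketch {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:derby:demoDB");
             Statement s = c.createStatement()) {
            // One statement per execute call, and no trailing ';' over JDBC.
            s.executeUpdate("insert into POSITIONTEMPLATE (ID,VERSION,NAME,PRINT) "
                    + "values (NEXT VALUE FOR unique_id,0,'abc',1)");
            s.executeUpdate("insert into POSITIONTEMPLATE (ID,VERSION,NAME,PRINT) "
                    + "values (NEXT VALUE FOR unique_id,0,'efg',0)");
        }
    }
}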

On Thu, Jun 16, 2011 at 8:03 PM, Lothar Krenzien lkrenz...@web.de wrote:
 Hello,

 I'd like to execute multiple insert statements over JDBC at once, but can't 
 get it to work ;(

 Here a small demo:

 ...

 insert into POSITIONTEMPLATE
 (ID,VERSION,NAME,PRINT)
 values
 (NEXT VALUE FOR unique_id,0,'abc',1);

 insert into POSITIONTEMPLATE
 (ID,VERSION,NAME,PRINT)
 values
 (NEXT VALUE FOR unique_id,0,'efg',0);


 If I execute each statement on its own it works fine, but if I execute all 
 statements at once I always get the following error:

 Encountered ; at line 4 column 9.

 I know that it means that there is a problem with the ; sign - but that 
 can't be right, can it?
 Derby 10.8.1.2

 Thank you



Re: performance issue on 64 bit JVM

2011-06-03 Thread Peter Ondruška
Unless you need to address a heap beyond the 32-bit JVM limit, use a 32-bit JVM.
Just my EUR .02 :-)

2011/6/3 Arnaud Masson amas...@gmail.com:
 64-bit uses more memory,
 so if your Xmx is too small,
 the 64-bit version may have more GC overhead.

 You can increase Xmx
 or activate compressed-pointers to have a better memory usage.

 (Also the max heap size must fit in physical memory, otherwise the swap
 on disk could kill perfs.)


 Le 03/06/2011 04:09, QA Wang Yang a écrit :
 Dear Derby

 Could you tell me how tuning the performance issue?

 This is running sample query to fetch data from Derby database

 On my dual-core XP x-32 laptop the Corporate Overview takes 406(ms).
 But we saw on the dual quad core it took 22973 (ms).

 I don't know why the performance became bad on my powerful machine.

 This is my test detail

 Query:
 select * from Customer;
 select * from product;
 select * from order;
 select * from Customer1;

 Dual-core XP SP3 32 bit,JDK1.6.0_25 32 bit,
 Dual quad core Windows 2008 64 bit, JDK1.6.0_25 64 bit,


 *32 bit Dual core
 Fetched record size: 1413
 Fetch record time: 250 (ms)
 Fetched record size 8
 Fetch record time: 15 (ms)
 Fetched record size: 6
 Fetch record time: 0 (ms)
 Fetched record size: 1413 records
 Fetch record time: 141 (ms)
 total 406 ms

 **64 bit Dual quad core
 Fetched record size: 1413
 Fetch record time: 11483 (ms)
 Fetched record size 8
 Fetch record time: 3 (ms)
 Fetched record size: 6
 Fetch record time: 4 (ms)
 Fetched record size: 1413
 Fetch record time: 11484 (ms)
 total 22973 ms

 Thanks a lot



 Wangyang
 QA Group
 Jinfonet Software, Inc.

 www.jinfonet.com
 2nd floor on the east of BEC Theatre
 135 Xi Zhi Men Wai Street, Xicheng District Beijing, China 100044
 yang.w...@support.jinfonet.com
 86-10-68316633

 JReport
 Embedded Reporting for Java Applications







Add current_timestamp default to existing timestamp column

2011-03-28 Thread Peter Robbins
To whom it may concern,

Although this is mentioned in the documentation, it doesn't appear to work.

Could you please give me the correct syntax for this, or add this to the Jira items?

Thanks a lot for your help.

Kind regards,
Peter

PS: I couldn't find any examples anywhere, only a web discussion where someone 
was having the same problems.

Re: Inserting control characters in SQL

2011-03-11 Thread Peter Ondruška
Have you tried \b ?

Peter
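
If the \b escape does not pan out in ij, a fallback (at the cost of the small Java
program John would rather avoid) is a parameter marker, since setString passes the
character through unchanged; the connection URL is invented, the table comes from the
original post:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class InsertBackspace {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:derby:demoDB");
             PreparedStatement ps =
                     c.prepareStatement("INSERT INTO foo VALUES ('first', ?)")) {
            ps.setString(1, "\b");   // a literal backspace character (U+0008)
            ps.executeUpdate();
        }
    }
}
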
On Mar 11, 2011 4:41 PM, John English john.fore...@gmail.com wrote:
 I have a DDL schema which I am processing using IJ. I want to insert a
 row into a table containg a backspace character:

 CREATE TABLE foo (name VARCHAR(20), value VARCHAR(200));
 INSERT INTO foo VALUES('first',bs);

 where bs is an actual backspace character. I foolishly tried CHAR(8)
 but of course this doesn't do it. Obviously I could write a Java program
 that does this, but this means reinventing the IJ wheel. Is there any
 existing way to do this?

 TIA,

 
 John English | My old University of Brighton home page is still here:
 | http://www.it.brighton.ac.uk/staff/je
 


Re: inserting missing values

2010-12-14 Thread Peter Ondruška
You need to check whether the value was SQL NULL using the wasNull method. See the JDBC
javadocs for the ResultSet class.
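
A minimal sketch of that check; the helper name is invented:

import java.sql.ResultSet;
import java.sql.SQLException;

public final class MissingValues {
    /** Returns the DOUBLE column value, or Double.NaN when the column is SQL NULL. */
    static double readDoubleOrNaN(ResultSet rs, int column) throws SQLException {
        double value = rs.getDouble(column);   // getDouble returns 0.0 for SQL NULL
        return rs.wasNull() ? Double.NaN : value;
    }
}
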
On Dec 14, 2010 8:46 PM, Patrick Meyer meyer...@gmail.com wrote:
 What is the best way to handle missing values. For example, suppose I have
 an array that I want to insert into a table, like double[] row = {1.0,
2.0,
 MISSING, 4.0} where MISSING indicates a value that is missing. (I realize
 this is not a valid double value). I have been using a prepared statement
to
 set a null value anytime I have missing data like, pstmt.setNull(i+1,
 Types.DOUBLE); However, the problem is that using rs.getDouble(3) returns
a
 value of zero instead of null. The problem is that zero is a legitimate
 double value, not a missing or null value. Is it better to insert data
using
 Double.NAN like pstmt.setDouble(i+1, Double.NaN);? What is the best way to
 handle missing data?

 Thanks


Re: Hot backups

2010-12-03 Thread Peter Ondruška
Yes, see the docs.
On Dec 3, 2010 4:23 PM, Clemens Wyss clemens...@mysign.ch wrote:
 Does Derby support hot backup(s)?

 Regards
 Clemens


Re: Hot backups

2010-12-03 Thread Peter Ondruška
No prob. Was on mobile while replying to your msg.

On Fri, Dec 3, 2010 at 6:03 PM, Clemens Wyss clemens...@mysign.ch wrote:
 RTFM
 - http://db.apache.org/derby/docs/dev/adminguide/derbyadmin.pdf
 sorry  thx

From: Peter Ondruška [mailto:peter.ondru...@gmail.com]
Sent: Friday, 3 December 2010 16:38
To: Derby Discussion
Subject: Re: Hot backups

Yes,see docs.
On Dec 3, 2010 4:23 PM, Clemens Wyss clemens...@mysign.ch wrote:
 Does Derby support hot backup(s)?

 Regards
 Clemens




-- 
Peter


Re: Language of error message

2010-11-27 Thread Peter Ondruška
It is determined by the JRE's system locale. You probably use a German
locale in your system's settings. If you are on Unix, set
LC_ALL=en_US; if you are on Windows, go to the regional settings in the control
panel. If you cannot do any of those, try running your application with the
additional JRE parameter -Dlanguage=en.

On Sat, Nov 27, 2010 at 3:50 PM, Thomas thomas.k.h...@t-online.de wrote:
 Hi,

 can someone please advise what is determining the language which will be seen 
 in
 error messages? Does this depend on the locale of the machine? I am currently
 seeing messages in german, but would like to switch to english, but don't know
 how to do that.

 Thanks in advance
 Thomas





-- 
Peter


Re: Language of error message

2010-11-27 Thread Peter Ondruška
Yeah, that's what I meant... just forgot user.
On Nov 27, 2010 4:23 PM, Marco Ferretti marco.ferre...@gmail.com wrote:
 set locale from vm options?

 -Duser.language=language -Duser.region=region

 -- Marco (from iPhone)

 On Nov 27, 2010, at 3:50 PM, Thomas thomas.k.h...@t-online.de wrote:

 Hi,

 can someone please advise what determines the language which will be seen
 in error messages? Does this depend on the locale of the machine? I am
 currently seeing messages in German, but would like to switch to English,
 and don't know how to do that.

 Thanks in advance
 Thomas
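
A quick way to check the effect without changing system settings, assuming
Derby really does take its message language from the JVM default locale as
described above; the non-existent database name below is deliberate, just to
provoke a localized error message:

   import java.sql.DriverManager;
   import java.sql.SQLException;
   import java.util.Locale;

   public class EnglishMessages {
       public static void main(String[] args) {
           // Force an English default locale before the Derby driver is used,
           // the in-code equivalent of -Duser.language=en -Duser.country=US.
           Locale.setDefault(Locale.US);
           try {
               // Deliberately connect to a database that does not exist,
               // just to see which language the error message comes back in.
               DriverManager.getConnection("jdbc:derby:no-such-db");
           } catch (SQLException e) {
               System.out.println(e.getMessage());
           }
       }
   }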



Re: NFS and Derby

2010-11-11 Thread Peter Ondruška
You could use NFS-mounted read-only databases, just as you can with
CD/DVD based media.

The risk with read-write databases on NFS devices is (was) that in
the old days of UDP-based NFS clients/servers your connection
could easily break. That is not the case anymore with decent operating
systems (Solaris for example), good NFS servers (again mostly
Solaris based, or those from a famous vendor) and a good, highly available
network infrastructure. Nowadays your servers' disks are likely
network-connected anyway (FC SAN, iSCSI).

On Thu, Nov 11, 2010 at 5:18 PM, Donald McLean dmclea...@gmail.com wrote:
 A local database on an NFS mounted disk? I would never consider such a 
 thing.

 My experience with NFS mounted resources is that network congestion
 can cause all sorts of nasty side effects. Even something as simple as
 an unexpectedly slow read or write can cause unanticipated cascading
 failure conditions. And no matter what value is used for a timeout,
 you can pretty much guarantee that it will be exceeded eventually.

 I realize that this doesn't address Derby specific concerns such as
 database corruption. Fortunately, I have no experience with that.

 Donald

 On Thu, Nov 11, 2010 at 10:56 AM, Kathey Marsden
 kmarsdende...@sbcglobal.net wrote:
 I have always told users they have to have their databases on a local disk
 to ensure data integrity and that a system crash for an NFS mounted
 database could cause fatal corruption, but had a user this morning take me
 to task on this and ask me to explain exactly why.  I gave my general
 response about not being able to guarantee a sync to disk over the network,
 but want to have a more authoritative reference for why you cannot count on
 an NFS mounted disk, although I did find several places where the sync option
 favors data integrity, which certainly doesn't sound like a guarantee.
 Does anyone know a good general reference I can use on this topic to
 support my "you gotta use a local disk" mantra?


 Also I think our documentation on this topic should be a bit stronger.
  Currently we just say it may not work and probably should be clearer that
 data corruption could occur.  I will file an issue to beef up the language
 based on the conversation in this thread.

 http://db.apache.org/derby/docs/10.5/devguide/cdevdvlp40350.html




-- 
Peter


Re: performance: Derby vs H2

2010-04-22 Thread Peter Ondruška
Rayson, some/most of us are looking for best performance AND best
stability/scalability/tools/etc.

When I look for performance I usually go with Berkeley DB JE ;-)

On Thu, Apr 22, 2010 at 7:14 PM, Rayson Ho raysonlo...@gmail.com wrote:
 On Thu, Apr 22, 2010 at 12:06 PM, bruehlicke bruehli...@gmail.com wrote:
 It is an unconstructive question to ask. It depends on many
 requirements and situations.

 I guess I just got an unconstructive answer from you :-D

 I am trying to find out whether the benchmarks they used are fair, and
 whether the Derby developers know that H2 is faster.

 Even with unfair benchmarks that make H2 really shine, can't Derby
 learn from H2??

 And no, I don't have an application that I need to choose between Derby or H2.

 Rayson




 Is a car faster than an airplane? Well,
 if I have to go to the next corner shopping mall to pick up some milk
 you forgot, this is most likely the case.

 So try it out - and if you think one is faster than the other for your
 particular situation - go for it. For me the maturity, support, proven
 deployments and scalability together with features available are the
 key driving forces.

 B-)

 On Thu, Apr 22, 2010 at 11:47 AM, Rayson Ho raysonlo...@gmail.com wrote:
 Is it really true that H2 is faster than Derby??

 http://en.wikipedia.org/wiki/Apache_Derby

 A year ago, I tried to remove the section that says that H2 is faster,
 but someone always added it back into the article. And besides me, it
 seems like no one really cares about the "Comparison to other embedded
 SQL Java databases" section.

 http://en.wikipedia.org/wiki/Talk:Apache_Derby#Benchmarks

 Is it a well-known fact that H2 is always faster??

 And there is also H2's benchmark page:

 http://www.h2database.com/html/performance.html

 Is it a fair comparison??

 Rayson





Re: performance: Derby vs H2

2010-04-22 Thread Peter Ondruška
Not really. By stability I mean being a happy Derby user since its
Cloudscape days...

On Thu, Apr 22, 2010 at 7:45 PM, Rayson Ho raysonlo...@gmail.com wrote:
 On Thu, Apr 22, 2010 at 12:24 PM, Peter Ondruška
 peter.ondru...@gmail.com wrote:
 Rayson, some/most of us are looking for best performance AND best 
 stability/scalability/tools/etc.

 Thanks for the reply, Peter.

 By stability, you mean ACID kind of stability??

 Rayson




 When I look for performance I usually go with Berkeley DB JE ;-)

 On Thu, Apr 22, 2010 at 7:14 PM, Rayson Ho raysonlo...@gmail.com wrote:
 On Thu, Apr 22, 2010 at 12:06 PM, bruehlicke bruehli...@gmail.com wrote:
 It is an unconstructive question to ask. It depends on many
 requirements and situations.

 I guess I just got an unconstructive answer from you :-D

 I am trying to find out whether the benchmarks they used are fair, and
 whether the Derby developers know that H2 is faster.

 Even with unfair benchmarks that make H2 really shine, can't Derby
 learn from H2??

 And no, I don't have an application that I need to choose between Derby or 
 H2.

 Rayson




 Is a car faster than an airplane? Well,
 if I have to go to the next corner shopping mall to pick up some milk
 you forgot, this is most likely the case.

 So try it out - and if you think one is faster than the other for your
 particular situation - go for it. For me the maturity, support, proven
 deployments and scalability together with features available are the
 key driving forces.

 B-)

 On Thu, Apr 22, 2010 at 11:47 AM, Rayson Ho raysonlo...@gmail.com wrote:
 Is it really true that H2 is faster than Derby??

 http://en.wikipedia.org/wiki/Apache_Derby

 A year ago, I tried to remove the section that says that H2 is faster,
 but someone always added it back into the article. And besides me, it
 seems like no one really cares about the "Comparison to other embedded
 SQL Java databases" section.

 http://en.wikipedia.org/wiki/Talk:Apache_Derby#Benchmarks

 Is it a well-known fact that H2 is always faster??

 And there is also H2's benchmark page:

 http://www.h2database.com/html/performance.html

 Is it a fair comparison??

 Rayson







Re: Derby Char Column Type Versus VarChar

2010-04-05 Thread Peter Ondruška
I think Oracle Database and Derby behavior will be the same: CHAR(size)
will preallocate size characters in the database page/block, whereas
VARCHAR(size) will not. Maybe if you give us a hint about what you are
trying to do we could help better. Peter

On Mon, Apr 5, 2010 at 3:41 PM, Mamatha Kodigehalli Venkatesh
mamatha.venkat...@ness.com wrote:
 Hello,



 Please let us know what the performance impact will be when I use VarChar
 (4000) instead of Char.

 I have found that the inserts to the table which contains this change are
 very slow…



 But on Oracle, there is not much of an impact.



 Based on the business requirement I need to use VarChar.



 Any suggestions will be of great help.



 Thanks

 Mamatha
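
If it helps to measure rather than guess, a rough micro-benchmark sketch along
the lines below could separate the column-type effect from other factors such
as per-row autocommit; the database name, table names and row count are made
up for illustration, and since Derby limits CHAR to 254 characters the
comparison uses CHAR(200) against VARCHAR(4000):

   import java.sql.*;
   import java.util.Arrays;

   public class CharVsVarcharInsert {
       public static void main(String[] args) throws Exception {
           try (Connection conn =
                    DriverManager.getConnection("jdbc:derby:perfdb;create=true")) {
               conn.setAutoCommit(false);
               try (Statement st = conn.createStatement()) {
                   st.executeUpdate("CREATE TABLE t_char (c CHAR(200))");
                   st.executeUpdate("CREATE TABLE t_varchar (c VARCHAR(4000))");
               }
               char[] chars = new char[200];
               Arrays.fill(chars, 'x');
               String payload = new String(chars);
               time(conn, "INSERT INTO t_char VALUES (?)", payload);
               time(conn, "INSERT INTO t_varchar VALUES (?)", payload);
           }
       }

       // Inserts 10,000 rows with a single commit and reports the elapsed time.
       private static void time(Connection conn, String sql, String payload)
               throws SQLException {
           long start = System.nanoTime();
           try (PreparedStatement ps = conn.prepareStatement(sql)) {
               for (int i = 0; i < 10000; i++) {
                   ps.setString(1, payload);
                   ps.executeUpdate();
               }
           }
           conn.commit();
           System.out.printf("%-35s %d ms%n", sql,
                   (System.nanoTime() - start) / 1000000);
       }
   }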






SELECT .. ORDER BY .. DESC delivers wrong ordering

2010-03-31 Thread Jodeleit, Peter
Hello,

I've got a problem with ORDER BY .. DESC. I've searched the JIRA and found no
matching issue. Is this really an unknown bug?
We use Derby v10.5.3.0. I can deliver the complete database as a testcase if
needed.

Here is the statement:

SELECT DISTINCT t14.FS_ID, t14.NAME_DE
FROM p8001571_8001467.PRODUCTS t14
LEFT JOIN p8001571_8001467.RT_PRODUCT_CATEGORIES_PRODUCTS_CATEGORIES_LIST t9
  ON t14.FS_ID=t9.PROD_FS_ID0 AND t9.FS_VALID_TO>1268993576528 AND
     t9.FS_VALID_FROM<=1268993576528
LEFT JOIN p8001571_8001467.PRODUCT_CATEGORIES t8
  ON t9.PROD_FS_ID=t8.FS_ID AND t8.FS_VALID_TO>1268993576528 AND
     t8.FS_VALID_FROM<=1268993576528
WHERE t8.FS_ID=1091 AND t14.FS_VALID_TO>1268993576528 AND
      t14.FS_VALID_FROM<=1268993576528
ORDER BY t14.NAME_DE ASC

This select delivers:

1408,   DS 1000 block
1152,   DS 1000 modular
1409,   DS 1200 block
1344,   DS 1200 modular
1472,   DS 1400 block
1345,   DS 1400 modular


If I change the ORDER BY to DESC I get the following result, which is
obviously not correctly ordered:

1152,   DS 1000 modular
1344,   DS 1200 modular
1345,   DS 1400 modular
1408,   DS 1000 block
1409,   DS 1200 block
1472,   DS 1400 block

If I keep DESC and change the columns in the request to t14.NAME_DE,
t14.FS_ID, the result is correct again:

DS 1000 block,1408
DS 1000 modular,  1152
DS 1200 block,1409
DS 1200 modular,  1344
DS 1400 block,1472
DS 1400 modular,  1345

The result is also correct if only the column NAME_DE is selected.

I cross-checked the behaviour with PostgreSQL; it delivered the results as I
expected.

Peter

-- 

Peter Jodeleit 

Lead Developer
www.e-Spirit.com 



Re: Case Sensitivity

2010-03-24 Thread Peter Ondruška
select
Id as "id",
Code as "code",
TypeStr as "typeStr"
  from MyTable

On Wed, Mar 24, 2010 at 10:30 PM, Pavel Bortnovskiy
pbortnovs...@jefferies.com wrote:

 Hello, all:

 when executing a statement, such as:

                         select
                     Id as id,
                     Code as code,
                     TypeStr as typeStr
                   from MyTable

 against a Derby in-memory table, the ResultSetMetaData presents column names
 all in upper case and the original case is not preserved.

 Is there a setting which would allow preserving the case specified in the
 SQL statement, i.e. with the statement above, the first column would be
 "id", the second "code", the third "typeStr", etc...

 Thank you,

 Pavel.
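
A small sketch of the delimited-identifier behaviour behind Peter's
suggestion; the in-memory database URL is an assumption for illustration (it
needs a Derby version with the memory back end), and the quoted alias comes
back from ResultSetMetaData exactly as written while the unquoted one is
folded to upper case:

   import java.sql.*;

   public class CasePreservingAliases {
       public static void main(String[] args) throws Exception {
           try (Connection conn = DriverManager.getConnection(
                        "jdbc:derby:memory:casedemo;create=true");
                Statement st = conn.createStatement()) {
               st.executeUpdate("CREATE TABLE MyTable (Id INT, Code VARCHAR(10))");
               try (ResultSet rs = st.executeQuery(
                        "SELECT Id AS \"id\", Code AS code FROM MyTable")) {
                   ResultSetMetaData md = rs.getMetaData();
                   for (int i = 1; i <= md.getColumnCount(); i++) {
                       // prints: id   (case preserved by the quoted alias)
                       //         CODE (unquoted alias folded to upper case)
                       System.out.println(md.getColumnLabel(i));
                   }
               }
           }
       }
   }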





Re: Case Sensitivity

2010-03-24 Thread Peter Ondruška
Not that I know of; this is per the SQL standard.

On Wed, Mar 24, 2010 at 10:43 PM, Pavel Bortnovskiy 
pbortnovs...@jefferies.com wrote:


 Thank you, Peter, for your prompt response. Is enclosing them in quotes the
 only way to do so? Is there any setting which can be applied to Derby (or a
 -Define), so that the quotes could be omitted?





 From: Peter Ondruška peter.ondru...@gmail.com
 Date: 03/24/2010 05:33 PM
 Reply-To: Derby Discussion derby-user@db.apache.org
 To: Derby Discussion derby-user@db.apache.org
 Subject: Re: Case Sensitivity




  select
 Id as "id",
 Code as "code",
 TypeStr as "typeStr"
   from MyTable

 On Wed, Mar 24, 2010 at 10:30 PM, Pavel Bortnovskiy
 pbortnovs...@jefferies.com wrote:
 
  Hello, all:
 
  when executing a statement, such as:
 
  select
  Id as id,
  Code as code,
  TypeStr as typeStr
from MyTable
 
  against a Derby in-memory table, the ResultSetMetaData presents column
  names all in upper case and the original case is not preserved.
 
  Is there a setting which would allow preserving the case specified in
  the SQL statement, i.e. with the statement above, the first column would be
  "id", the second "code", the third "typeStr", etc...
 
  Thank you,
 
  Pavel.
 
 
 







