Re: Is the Wiki available

2019-11-04 Thread Bergquist, Brett
Thanks for the link Bryan.  Much appreciated.

This was the page I was thinking of:

https://cwiki.apache.org/confluence/display/DERBY/CheckingForIndexStatistics


On 11/4/19, 8:48 PM, "Bryan Pendleton"  wrote:

Unfortunately, the wiki has moved to confluence, and the process
wasn't all that clean, and that auto-redirect system isn't pointing to
the right place.

The right place, I think, is
https://cwiki.apache.org/confluence/display/DERBY/FrontPage

I'm not sure how to fix that auto-redirect system, so we need to fix
the bad references I think.

But we should at least start finding the places where we have bad
links to the old wiki and try to replace them with links to the new
wiki.

I'm not sure which page you were remembering, unfortunately.

Was it by any chance
https://cwiki.apache.org/confluence/display/DERBY/LanguageOptimize

or maybe 
https://cwiki.apache.org/confluence/display/DERBY/PermutationCosting

bryan

On Mon, Nov 4, 2019 at 5:32 PM Brett Bergquist
 wrote:
>
> I tried to use the link on the Community page
>
> http://wiki.apache.org/db-derby
>
> And it tells me the page is
>
> https://cwiki.apache.org/confluence/display/db-derby
>
> And it is a Page Not Found
>
>
> There was a page on there about checking the cardinality statistics for 
all of the tables that I was interested in.
>







RE: Question on why I am getting a security exception calling SQLJ.INSTALL_JAR with Derby 10.14.2.0

2018-06-28 Thread Bergquist, Brett
I was able to move my "jar" to the "derby home" directory and load the 
functions from the "jar" from there without modifying the security policy file. 
  This was an OK solution for me as the "jar" does not need to stick around 
after being loaded and only needs to be loaded once, so it was acceptable for me 
to do this.
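
For the archive, the sequence ended up looking roughly like this (a sketch only; 
the /opt/derby path is just a placeholder for our actual derby home, and the jar 
and schema names are the ones from the earlier messages):

CALL SQLJ.INSTALL_JAR('/opt/derby/csem-derby.jar', 'CSEM.csemderby', 0);
CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY(
    'derby.database.classpath', 'CSEM.csemderby');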

Thanks for the help, Bryan!

-----Original Message-
From: Bergquist, Brett [mailto:bbergqu...@canoga.com]
Sent: Thursday, June 07, 2018 3:09 PM
To: Derby Discussion
Subject: RE: Question on why I am getting a security exception calling 
SQLJ.INSTALL_JAR with Derby 10.14.2.0

Yes, the release notes are showing up now.   I cleared the browser cache to make 
sure as well.   It was very disconcerting to find the information gone and to be 
dumped into a directory listing of the Apache projects, having to root around 
(i.e. I had to assume that the link "db" took me to where Derby might be 
located).   Glad it is all better now!

I can create a security policy file and start the server with that, or I believe 
I could move my "jar" (whose functions and procedures are loaded into Derby with 
the "SQLJ.INSTALL_JAR" call) into the "derby home" directory and load the 
functions from there.   It is only during installation of our product that these 
functions/procedures are loaded; I don't need to have a general-purpose area 
to load from.

Thanks for taking the time to get back on this Bryan.   It is much appreciated.

Brett

-Original Message-
From: Bryan Pendleton [mailto:bpendleton.de...@gmail.com]
Sent: Monday, June 04, 2018 9:02 PM
To: Derby Discussion
Subject: Re: Question on why I am getting a security exception calling 
SQLJ.INSTALL_JAR with Derby 10.14.2.0

Hi Brett,

I'm not sure what's going on with the mirrors. The cgi page is working properly 
for me now, and it contains the release notes on that page.

http://db.apache.org/derby/releases/release-10.14.2.0.cgi

When you finally got to the right page, did it have the release notes on it?

Meanwhile, w.r.t. the permissions question, I think you are right that 
DERBY-6987 is the source of your problems.

Can you change your startup procedures so that you specify a security policy 
with the slightly looser permission that your application requires?

Here's the docs on how to provide your desired security policy:
https://db.apache.org/derby/docs/10.14/security/csecjavasecurity.html

thanks,

bryan


On Mon, Jun 4, 2018 at 8:18 AM, Bergquist, Brett  wrote:
> I am getting this error when trying to install a JAR
>
>
>
>   [sql] Failed to execute:  CALL
> SQLJ.INSTALL_JAR('/opt/canogaview/temp/buil
>
> d/core_pkg/META-INF/customize/csemdb/csem-derby.jar',
> 'CSEM.csemderby', 0)
>
>   [sql] java.sql.SQLTransactionRollbackException: The exception
> 'java.securi
>
> ty.AccessControlException: access denied ("java.io.FilePermission"
> "/opt/canogav
>
> iew/temp/build/core_pkg/META-INF/customize/csemdb/csem-derby.jar" "read")'
> was t
>
> hrown while evaluating an expression.
>
>   [sql] Failed to execute:   CALL
> SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY('de
>
> rby.database.classpath','CSEM.csemderby')
>
>   [sql] java.sql.SQLSyntaxErrorException: The database class path
> contains a
>
> n unknown jar file '"CSEM"."CSEMDERBY"'.
>
>
>
> This error does not occur with Derby 10.13.1.1 but does occur now that
> I have upgraded to 10.14.2.0.
>
>
>
> I believe this is related to some security change.   Possibly:
>
>
>
> https://issues.apache.org/jira/browse/DERBY-6987
>
>
>
> The application that is starting the JVM that is running Derby Network
> Server has not changed.   Only the version of Derby has changed.
>
>
>
> Any pointers will be greatly appreciated.
>
>
>
> I will be honest here.  Trying to find the change in the release notes
> is darn near impossible since the release notes for the releases are no longer
> on the Derby web site, at least I cannot find them.   Having to download a
> release and expand it to see what has changed is pretty awful.
>
>
>
> So interestingly I just clicked on this link again:
>
>
>
> http://db.apache.org/derby/releases/release-10.14.2.0.cgi
>
>
>
> Now I get a totally different page being returned.   I think this has
> something to do with the mirrors.   Earlier this morning, it brought me to a
> page where it showed all of the Apache projects and I had to drill down
> to “db/derby” and then was able to download the “bin” zip file for 10.14.2.0.
>
>
>
> So there is possibly some issue with the mirrors sometimes not
> displaying the same content.
>
>
> 

RE: Question on why I am getting a security exception calling SQLJ.INSTALL_JAR with Derby 10.14.2.0

2018-06-07 Thread Bergquist, Brett
Yes, the release notes are showing up now.   I cleared the browser cache to make 
sure as well.   It was very disconcerting to find the information gone and to be 
dumped into a directory listing of the Apache projects, having to root around 
(i.e. I had to assume that the link "db" took me to where Derby might be 
located).   Glad it is all better now!

I can create a security policy file and start the server with that, or I believe 
I could move my "jar" (whose functions and procedures are loaded into Derby with 
the "SQLJ.INSTALL_JAR" call) into the "derby home" directory and load the 
functions from there.   It is only during installation of our product that these 
functions/procedures are loaded; I don't need to have a general-purpose area 
to load from.

Thanks for taking the time to get back on this Bryan.   It is much appreciated.

Brett

-Original Message-
From: Bryan Pendleton [mailto:bpendleton.de...@gmail.com]
Sent: Monday, June 04, 2018 9:02 PM
To: Derby Discussion
Subject: Re: Question on why I am getting a security exception calling 
SQLJ.INSTALL_JAR with Derby 10.14.2.0

Hi Brett,

I'm not sure what's going on with the mirrors. The cgi page is working properly 
for me now, and it contains the release notes on that page.

http://db.apache.org/derby/releases/release-10.14.2.0.cgi

When you finally got to the right page, did it have the release notes on it?

Meanwhile, w.r.t. the permissions question, I think you are right that 
DERBY-6987 is the source of your problems.

Can you change your startup procedures so that you specify a security policy 
with the slightly looser permission that your application requires?

Here's the docs on how to provide your desired security policy:
https://db.apache.org/derby/docs/10.14/security/csecjavasecurity.html

thanks,

bryan


On Mon, Jun 4, 2018 at 8:18 AM, Bergquist, Brett  wrote:
> I am getting this error when trying to install a JAR
>
>
>
>   [sql] Failed to execute:  CALL
> SQLJ.INSTALL_JAR('/opt/canogaview/temp/buil
>
> d/core_pkg/META-INF/customize/csemdb/csem-derby.jar',
> 'CSEM.csemderby', 0)
>
>   [sql] java.sql.SQLTransactionRollbackException: The exception
> 'java.securi
>
> ty.AccessControlException: access denied ("java.io.FilePermission"
> "/opt/canogav
>
> iew/temp/build/core_pkg/META-INF/customize/csemdb/csem-derby.jar" "read")'
> was t
>
> hrown while evaluating an expression.
>
>   [sql] Failed to execute:   CALL
> SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY('de
>
> rby.database.classpath','CSEM.csemderby')
>
>   [sql] java.sql.SQLSyntaxErrorException: The database class path
> contains a
>
> n unknown jar file '"CSEM"."CSEMDERBY"'.
>
>
>
> This error does not occur with Derby 10.13.1.1 but does occur now that
> I have upgraded to 10.14.2.0.
>
>
>
> I believe this is related to some security change.   Possibly:
>
>
>
> https://issues.apache.org/jira/browse/DERBY-6987
>
>
>
> The application that is starting the JVM that is running Derby Network
> Server has not changed.   Only the version of Derby has changed.
>
>
>
> Any pointers will be greatly appreciated.
>
>
>
> I will be honest here.  Trying to find the change in the release notes
> is darn near impossible since the release notes for the releases are no longer
> on the Derby web site, at least I cannot find them.   Having to download a
> release and expand it to see what has changed is pretty awful.
>
>
>
> So interestingly I just clicked on this link again:
>
>
>
> http://db.apache.org/derby/releases/release-10.14.2.0.cgi
>
>
>
> Now I get a totally different page being returned.   I think this has
> something to do with the mirrors.   Earlier this morning, it brought me to a
> page where it showed all of the Apache projects and I had to drill down
> to “db/derby” and then was able to download the “bin” zip file for 10.14.2.0.
>
>
>
> So there is possibly some issue with the mirrors sometimes not
> displaying the same content.
>
>
> 



Question on why I am getting a security exception calling SQLJ.INSTALL_JAR with Derby 10.14.2.0

2018-06-04 Thread Bergquist, Brett
I am getting this error when trying to install a JAR

  [sql] Failed to execute:  CALL SQLJ.INSTALL_JAR('/opt/canogaview/temp/buil
d/core_pkg/META-INF/customize/csemdb/csem-derby.jar', 'CSEM.csemderby', 0)
  [sql] java.sql.SQLTransactionRollbackException: The exception 'java.securi
ty.AccessControlException: access denied ("java.io.FilePermission" "/opt/canogav
iew/temp/build/core_pkg/META-INF/customize/csemdb/csem-derby.jar" "read")' was t
hrown while evaluating an expression.
  [sql] Failed to execute:   CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY('de
rby.database.classpath','CSEM.csemderby')
  [sql] java.sql.SQLSyntaxErrorException: The database class path contains a
n unknown jar file '"CSEM"."CSEMDERBY"'.

This error does not occur with Derby 10.13.1.1 but does occur now that I have 
upgraded to 10.14.2.0.

I believe this is related to some security change.   Possibly:

https://issues.apache.org/jira/browse/DERBY-6987

The application that is starting the JVM that is running Derby Network Server 
has not changed.   Only the version of Derby has changed.

Any pointers will be greatly appreciated.

I will be honest here.  Trying to find the change in the release notes is darn 
near impossible since the release notes for the releases are no longer on the 
Derby web site, at least I cannot find them.   Having to download a release 
and expand it to see what has changed is pretty awful.

So interestingly I just clicked on this link again:

http://db.apache.org/derby/releases/release-10.14.2.0.cgi

Now I get a totally different page being returned.   I think this has something 
to do with the mirrors.   Earlier this morning, it brought me to a page where it 
showed all of the Apache projects and I had to drill down to "db/derby" and then 
was able to download the "bin" zip file for 10.14.2.0.

So there is possibly some issue with the mirrors sometimes not displaying the 
same content.




RE: SYSCS_DIAG.TRANSACTION_TABLE stale records

2017-08-11 Thread Bergquist, Brett
Sorry for the late response to this but I did want to comment.  We are using 
ClientXADataSource extensively with Glassfish.   Our transactions are correctly 
reported in the SYSCS_DIAG.TRANSACTION_TABLE.   The only time that they have 
stuck around is when the connection between Glassfish and the Derby Network 
Server has been severed before the XA “prepare” or “commit” phase has been 
reached, or due to an XA transaction timeout bug in Derby which I fixed and 
supplied a patch for and which is in the latest builds (10.10.2.0 is what I am using).

Having the transaction stay around is of course the correct thing, since XA is 
a distributed protocol and, until prepare/commit/rollback has been performed, 
Derby (the XA resource) has no idea of the state of the transaction.

I think I would write a little program to list the XA transactions that are 
still open and see whether those reported by SYSCS_DIAG.TRANSACTION_TABLE are 
in fact real XA transactions that have not been finalized.
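
A quick way to see what Derby itself still considers open (a sketch; the columns 
are the same ones shown in the output below) is:

SELECT xid, global_xid, username, type, status, first_instant
FROM syscs_diag.transaction_table
ORDER BY xid;

The transactions that were started through XA but never reached 
prepare/commit/rollback would then have to be matched against what the 
transaction manager reports as outstanding.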

From: Rick Hillegas [mailto:rick.hille...@gmail.com]
Sent: Tuesday, July 11, 2017 8:56 PM
To: derby-user@db.apache.org
Subject: Re: SYSCS_DIAG.TRANSACTION_TABLE stale records

Hi Peter,

How are you disconnecting the sessions? I would expect to see 1 transaction for 
every active session, as the following script demonstrates:

-- 1 active session = 1 open transaction
connect 'jdbc:derby:memory:db;create=true' as conn1;
select count(*) from syscs_diag.transaction_table;

-- 2 active sessions = 2 open transactions
connect 'jdbc:derby:memory:db' as conn2;
select count(*) from syscs_diag.transaction_table;

-- 3 active sessions = 3 open transactions
connect 'jdbc:derby:memory:db' as conn3;
select count(*) from syscs_diag.transaction_table;

-- 2 active sessions = 2 open transactions
disconnect;
set connection conn1;
select count(*) from syscs_diag.transaction_table;

-- 1 active session = 1 open transaction
set connection conn2;
disconnect;
set connection conn1;
select count(*) from syscs_diag.transaction_table;

Thanks,
-Rick

On 7/11/17 10:10 AM, Peter Ondruška wrote:
Dear all,
the documentation mentions "The SYSCS_DIAG.TRANSACTION_TABLE diagnostic table 
shows all of the transactions that are currently in the database." Is it really 
correct? In my case I have an application server (Payara) connected to database 
with ClientXADataSource. Over time the record count in this table grows. When I 
stop application server and all database sessions are disconnected, record 
count stays with no change and I would expect that it drops as transactions are 
definitely closed. The only way to "clean" the diagnostic table is to restart 
database.
All the records are the same (different XID of course):

XID       GLOBAL_XID  USERNAME  TYPE               STATUS  FIRST_INSTANT  SQL_TEXT
79512765  NULL        APP       UserTransaction    IDLE    NULL           NULL
except one SystemTransaction:
XID       GLOBAL_XID  USERNAME  TYPE               STATUS  FIRST_INSTANT  SQL_TEXT
79241843  NULL        NULL      SystemTransaction  IDLE    NULL           NULL
and one UserTransaction (as expected):
XID       GLOBAL_XID  USERNAME  TYPE               STATUS  FIRST_INSTANT  SQL_TEXT
79604720  NULL        APP       UserTransaction    IDLE    NULL           SELECT * FROM 
syscs_diag.transaction_table
Regards,

--
Peter Ondruška

kaibo, s.r.o., ID 28435036, registered with the commercial register 
administered by the Municipal Court in Prague, section C, insert 141269.
Registered office and postal address: kaibo, s.r.o., Kališnická 379/10, Prague 
3, 130 00, Czech Republic.
https://www.kaibo.eu






Re: What is the fastest query to check for an existence of a row

2016-05-25 Thread Bergquist, Brett
Thank you Kristian.  That is exactly what I found as well but was waiting for 
the email to show up so I could reply to my own.

Wow what a difference that made!  I keep scouring the documentation and find 
some new nugget each time.

Thank you very much for taking the time to reply and confirm that what I thought I 
had found is the best approach (after asking the question, of course).
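
For the archives, the rewritten query ends up looking like this (same table and 
predicates as in the original question; any constant in the select list will do):

SELECT 1
FROM PKG_9145E_V1.COSEDDROPPROFILEDSCPTABLEBUNDLE_COSEDDROPPROFILEDSCPTABLEENTRY CBCE
WHERE CBCE.COSEDDROPPROFILEDSCPTABLEENTRY_ID = 768
AND CBCE.COSEDDROPPROFILEDSCPTABLEBUNDLE_ID != 2
FETCH FIRST ROW ONLY

An empty result means no row satisfies the condition; a single row means at least 
one does, and Derby should be able to stop scanning as soon as it finds it.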

> On May 25, 2016, at 6:16 PM, Kristian Waagan <krist...@apache.org> wrote:
>
> Den 25.05.2016 23.12, skrev Bergquist, Brett:
>> I have a table with 80,640,704 rows.   My query currently is:
>>
>>
>>
>> SELECT
>>
>> COUNT(*) AS USE_COUNT
>>
>> FROM
>> PKG_9145E_V1.COSEDDROPPROFILEDSCPTABLEBUNDLE_COSEDDROPPROFILEDSCPTABLEENTRY
>> CBCE
>>
>> WHERE CBCE.COSEDDROPPROFILEDSCPTABLEENTRY_ID = 768
>>
>> AND CBCE.COSEDDROPPROFILEDSCPTABLEBUNDLE_ID != 2
>>
>>
>>
>> This query is going to return a count of 78,569.
>>
>>
>>
>> What I really want to know is if there is any row that satisfies the
>> condition.   I have logic that needs to run if there is such a row.   So
>> is there a more efficient query in Derby that will short circuit when it
>> finds a row?
>
> Hi Brett,
>
> FETCH/OFFSET [1] comes to mind, i.e. going from
>
> SELECT COUNT(*)
> FROM CBCE
> WHERE ...
>
> to
>
> SELECT some_column_or_a constant
> FROM CBCE
> WHERE ...
> FETCH FIRST ROW ONLY
>
>
> Since there's no ORDER BY (or other relevant operations / clauses) in
> your query, that should hopefully cause Derby to stop processing and
> return once the first matching row is found.
> I say hopefully, because I haven't verified it :)
>
>
> Regards,
> --
> Kristian
>
>
> [1] https://db.apache.org/derby/docs/10.12/ref/rrefsqljoffsetfetch.html
>
>>
>>
>>
>> Thanks
>>
>>
>>
>> Brett
>>
>>
>> 
>




What is the fastest query to check for an existence of a row

2016-05-25 Thread Bergquist, Brett
I have a table with 80,640,704 rows.   My query currently is:

SELECT
COUNT(*) AS USE_COUNT
FROM 
PKG_9145E_V1.COSEDDROPPROFILEDSCPTABLEBUNDLE_COSEDDROPPROFILEDSCPTABLEENTRY CBCE
WHERE CBCE.COSEDDROPPROFILEDSCPTABLEENTRY_ID = 768
AND CBCE.COSEDDROPPROFILEDSCPTABLEBUNDLE_ID != 2

This query is going to return a count of 78,569.

What I really want to know is if there is any row that satisfies the condition. 
  I have logic that needs to run if there is such a row.   So is there a more 
efficient query in Derby that will short circuit when it finds a row?

Thanks

Brett




Re: Performance issue with same index name in multiple schemas

2016-02-05 Thread Bergquist, Brett
You should run your queries and take a look at the query plans being used.  It 
may be possible that your indexes are not being used as you think.   See

http://db.apache.org/derby/docs/10.10/tuning/index.html

the section on "How you use the RUNTIMESTATISTICS attribute".

Possibly the statistics used by the optimizer are not up to date.  When an index 
is created on an empty table, no statistics are created for that index, so the 
optimizer may not choose to use the index even though the table later contains 
many rows and the index would be the preferred access path.

Also take a look at "Selectivity and cardinality statistics" and especially how 
to update the statistics in

http://db.apache.org/derby/docs/10.10/ref/index.html

for the "SYSCS_UTIL.SYSCS_UPDATE_STATISTICS" procedure.  Also look to see if the 
index statistics daemon is enabled; see the 
"derby.storage.indexStats.auto" property.
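
For example (the schema and table names here are only placeholders), the 
statistics can be refreshed by hand and the query plan captured for inspection:

-- refresh optimizer statistics for every index on one table
CALL SYSCS_UTIL.SYSCS_UPDATE_STATISTICS('MYSCHEMA', 'MYTABLE', NULL);

-- turn on runtime statistics for this connection, run the slow query, then
-- fetch the plan text
CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1);
-- ... run the query in question ...
VALUES SYSCS_UTIL.SYSCS_GET_RUNTIMESTATISTICS();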

> On Feb 5, 2016, at 7:22 AM, kosurusekhar  wrote:
>
> Hi All,
>
> We have three schemas with almost the same table structure, indexes,
> queries, and stored procedures in one database, and we use the same index
> names and table names in all schemas. In the other schemas everything runs
> without any problem, but in one schema the application becomes slow even
> under basic load.
>
> Is there any relationship between table names and index names across
> multiple schemas? The queries/stored procedures that have performance
> issues in this schema run without any issue in the other schemas. I do not
> understand what is going wrong.
>
> I can see that sometimes a single transaction locks a couple of tables more
> than once and does not release them. That is when we feel the
> slowness.
>
> Please help me with some points to sort out this situation.
>
> Thanks in advance.
>
> Regards
> Sekhar.
>
>
>
> --
> View this message in context: 
> http://apache-database.10148.n7.nabble.com/Performance-issue-with-same-index-name-in-multiple-schemas-tp145507.html
> Sent from the Apache Derby Users mailing list archive at Nabble.com.




RE: Derby online backup with huge database

2014-11-04 Thread Bergquist, Brett
Also, what OS are you running?  

We have a Derby database that is about 300GB; the online backup would slow 
down operation of the system too much and take too long to perform.
Fortunately we are using Solaris 10 with the database on a ZFS pool, and 
what we ended up doing is to freeze the database, take a ZFS snapshot 
of the filesystem, and then unfreeze the database.   Then we back up the 
database from the ZFS snapshot at leisure using tar and gzip and offload 
that to another server.
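
The whole cycle is just a couple of statements plus one filesystem command 
(sketched here; the snapshot command itself is whatever your platform provides 
and is run outside of SQL):

CALL SYSCS_UTIL.SYSCS_FREEZE_DATABASE();
-- take the filesystem snapshot here, e.g. a ZFS snapshot of the pool/filesystem
-- that holds the database directory
CALL SYSCS_UTIL.SYSCS_UNFREEZE_DATABASE();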

Now some of the Linux based systems have ZFS so we will start to be able to 
support those as well.  

-Original Message-
From: kosurusekhar [mailto:kosurusek...@gmail.com] 
Sent: Tuesday, November 04, 2014 12:54 AM
To: derby-user@db.apache.org
Subject: Derby online backup with huge database

Hi All,

We implemented taking a Derby online backup whenever our application launches. 
It is working fine, but in production the database has grown to more than 
2GB and the backup is taking more than 7 to 10 minutes.

Is this behaviour normal with a Derby database?

Is there anything we need to configure/implement to speed up the backup process?

Please advise me on that.

Thanks in advance.

Regards
Sekhar.



--
View this message in context: 
http://apache-database.10148.n7.nabble.com/Derby-online-backup-with-huge-database-tp143121.html
Sent from the Apache Derby Users mailing list archive at Nabble.com.


RE: Apache Derby - Locks up.

2014-06-06 Thread Bergquist, Brett
Maybe connect jvisualvm to the process and see what it is reporting.
Jvisualvm will show the environment of the process.  Since you are specifying 
both -Xmx and -Xms, the heap should be initially allocated at your 6G and 
never grow or shrink.  Jvisualvm will show whether that is what it is doing.

It seems strange that it is requiring so much memory.   Yes, your row count is 
high, but I have tables in my database that regularly hold 10M records and are 
queried every 5 minutes with no issue, and I do so with a much smaller 
memory setting (4G is what I am using in testing).

Derby can work in an environment like this as one system that we have has 
tables that are inserted into about 10M records/day with no issues.

-Original Message-
From: Chris Olver [mailto:chris.ol...@3plearning.com] 
Sent: Thursday, June 05, 2014 11:42 PM
To: Derby Discussion
Subject: RE: Apache Derby - Locks up.

Just before it locked up, I managed to get this:

--- Derby Network Server Runtime Information ---
-- Session Information ---
Session # :4
Database :etl
User :a
# Statements:1
Prepared Statement Information:
Stmt ID SQLText
-   ---
SYSLH0001   SELECT * FROM APP.PROFILES

Session # :19

-
# Connection Threads : 2
# Active Sessions : 2
# Waiting  Sessions : 0

Total Memory : 3042443264   Free Memory : 153934376

So I'm not sure what heap size it is reaching. 

Regards,

-Chris


-Original Message-
From: Bryan Pendleton [mailto:bpendleton.de...@gmail.com]
Sent: 06 June 2014 12:32
To: Derby Discussion
Subject: Re: Apache Derby - Locks up.

It's interesting that you are trying to set it to use 6 G, and yet in your 
other message there was the line:

  Total Memory : 1756889088   Free Memory : 306272128

which seems to indicate that it's only using 1.7 GB.

Like maybe you're running a 32 bit JVM, not a 64 bit JVM, somehow?

bryan


RE: Apache Derby - Locks up.

2014-06-05 Thread Bergquist, Brett
Chris, can you get a stack trace from the Network Server process when this 
occurs?

From: Chris Olver [mailto:chris.ol...@3plearning.com]
Sent: Wednesday, June 04, 2014 4:42 AM
To: derby-user@db.apache.org
Subject: Apache Derby - Locks up.

Hi,

We are looking to use Apache Derby (Network Server) as a Caching Layer for our 
ETL process. Unfortunately we are finding that it locks up quite frequently. We 
have two tables which have around 10 million rows, two indexes in each. We can 
be reading (straight SELECT * FROM) or writing updates when it happens.

CPU will spike to 100% (its on a rather powerful box) and then all existing and 
new JDBC clients are unable to connect. Running runtimeinfo (when it locks up, 
issuing this command can take a few minutes to get a response):

--- Derby Network Server Runtime Information ---
-- Session Information ---
Session # :116
Database :etl
User : abc
# Statements:1
Prepared Statement Information:
Stmt ID SQLText
-   ---
SYSLH0001   SELECT * FROM APP.USERS

Session # :117
-
# Connection Threads : 4
# Active Sessions : 2
# Waiting  Sessions : 0

Total Memory : 1756889088   Free Memory : 306272128

No errors can be seen in the log. I am rather confused, as it seems like the 
perfect solution, yet Derby just locks up.

Thoughts or advise appreciated.

OS: Windows 8.1
Java Runtime: 1.8.0_05-b13
Derby: 10.10.2.0

Regards,

-Chris


Re: Question on why this is causing a deadlock

2014-02-17 Thread Bergquist, Brett
Thanks for taking the time to respond Mamta.  I have been through those and I 
did find the chat very difficult to follow ;)

I am surprised that it did not dump out the lock table when this occurred as I 
have the following in my derby.properties:

#Java DB Properties File
#derby.language.logStatementText=true
#derby.language.logQueryPlan=true
derby.locks.waitTimeout=60
derby.locks.deadlockTrace=true
derby.locks.monitor=true
derby.locks.escalationThreshold=2
derby.jdbc.xaTransactionTimeout=1800
derby.infolog.append=true
derby.language.statementCacheSize=1
derby.storage.indexStats.auto=true

I will try to get this to happen with the lock dump and figure it out.   I 
think it might have something to do with the tables in question being very 
sparse at this time (a couple of rows each), so that an index is not being 
used and an exclusive table lock is taken instead.  With an isolation level of 
READ-COMMITTED, I believe that the SELECT statement should only be locking and 
unlocking a row at a time as it progresses through the result set, and the 
UPDATE statement should only be locking one row because there is a unique index 
on ID.   So I don't see how a deadlock could be occurring if that were the case, 
because one or the other statement should succeed and release its locks.
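
In the meantime, something like the following, run from a second connection 
while the two statements are executing, should show whether a table-level lock 
is being taken (a sketch using the SYSCS_DIAG.LOCK_TABLE diagnostic table):

SELECT xid, type, mode, tablename, lockname, state
FROM syscs_diag.lock_table
ORDER BY tablename, xid;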



On Feb 15, 2014, at 2:24 PM, Mamta Satoor 
msat...@gmail.com wrote:

Brett,

You may be aware of the following information already, but here is what has been 
suggested for investigating deadlock issues in the past.

***
Dag Wanvik suggested
In general, this resource may help you understand how Derby uses locks:
http://db.apache.org/derby/docs/10.8/devguide/cdevconcepts30291.html
 and specifically how to debug deadlocks:
http://db.apache.org/derby/docs/10.8/devguide/cdevconcepts50894.html
http://wiki.apache.org/db-derby/LockDebugging

 If you feel you understand how Derby takes locks but are seeing Derby
 take locks that you feel are not appropriate for your usage and
 isolation level, please let us know. If you can provide a lock table
 dump, that would be helpful. Please state version of Derby are you
 using, too.
***
Kathey Marsden suggested
If you are using the new 10.8.1.2 release, setting 
derby.stream.error.extendedDiagSeverityLevel=3
 will print the stack traces of all active threads on deadlock to derby.log, 
which can help debug both application and possible Derby  issues with Deadlocks.
http://db.apache.org/derby/docs/10.8/ref/rrefproperextdiagsevlevel.html
***

thanks,
Mamta


On Sat, Feb 15, 2014 at 9:57 AM, Bergquist, Brett 
bbergqu...@canoga.com wrote:
Here is the output from the deadlock:

Fri Feb 14 16:33:55 EST 2014 Thread[DRDAConnThread_26,5,main] (XID = 879610), 
(SESSIONID = 28952), (DATABASE = csemdb), (DRDAID = 
NF01.F677-578992634681601532{719}), Failed Statement is: UPDATE 
CORE_V1.PROXY_NID_CLIENT_STATUS SET CONNECTION_STATE_DATE = ?, OPLOCK = ? WHERE 
((ID = ?) AND (OPLOCK = ?)) with 4 parameters begin parameter #1: 2014-02-14 
16:33:35.667 :end parameter begin parameter #2: 10607 :end parameter begin 
parameter #3: 2 :end parameter begin parameter #4: 10606 :end parameter
ERROR 40001: A lock could not be obtained due to a deadlock, cycle of locks and 
waiters is:
Lock : ROW, PROXY_NID_CLIENT_STATUS, (1,7)
  Waiting XID : {879610, X} , CSEM, UPDATE CORE_V1.PROXY_NID_CLIENT_STATUS SET 
CONNECTION_STATE_DATE = ?, OPLOCK = ? WHERE ((ID = ?) AND (OPLOCK = ?))
  Granted XID : {879611, S}
Lock : ROW, PROXY_NID_STATUS, (1,8)
  Waiting XID : {879611, S} , CSEM, SELECT COUNT(*) FROM 
CORE_V1.PROXY_NID_CLIENT PNC JOIN CORE_V1.PROXY_NID_CLIENT_STATUS PNCS ON 
PNC.STATUS_ID = PNCS.ID JOIN CORE_V1.PROXYNID_PROXYNIDCLIENT 
PNPNC ON PNC.ID = PNPNC.PROXYNIDCLIENT_ID JOIN 
CORE_V1.PROXY_NID PN ON PNPNC.PROXYNID_ID = PN.ID JOIN 
CORE_V1.PROXY_NID_STATUS PNS ON PN.STATUS_ID = PNS.ID JOIN 
CORE_V1.AGENT_MANAGED_HARDWARE AMH ON PN.ID = AMH.PROXYNID_ID 
JOIN CORE_V1.HARDWARE HW ON AMH.ID = HW.ID JOIN 
CORE_V1.SNMP_DEVICE SD ON AMH.ID = SD.AGENT_MANAGED_HARDWARE_ID 
JOIN CORE_V1.SNMP_DEVICE_IP SDIP ON SD.ID = SDIP.SNMPDEVICE_ID
  Granted XID : {879610, X}
. The selected victim is XID : 879610.

There are two separate processes running.  One is periodically querying which 
is the “SELECT COUNT(*)…” above.  The second is updating the state of one of 
the rows which is the “UPDATE” above.

I am not sure how to read the above and what I can do about it.

The query is done using a native JPA query and the second is using JPA 
directly.  Both are being done within a Java EE stateless session bean with a 
transaction.   The isolation level is read committed.

Any help will be appreciated.

Brett




RE: Derby running on local computer ans Data Base located on server

2013-12-31 Thread Bergquist, Brett
If you mean that the database files are located on a server and are going to 
be accessed using a networked file system protocol such as NFS or SMB, then 
this is really just networked file I/O and it will work, with the caveat that 
you might have corruption if the file system I/O calls return before the file 
changes have actually been committed to stable storage on the server.   The 
same thing can occur with file I/O that is cached by the OS on a local file 
system as well.   You really need the networked file I/O to not return until 
the changes have been committed, or live with the possibility of database 
corruption.

Brett

From: SolGT [mailto:cont...@solgt.fr]
Sent: Tuesday, December 31, 2013 6:02 AM
To: derby-user@db.apache.org
Subject: Derby running on local computer ans Data Base located on server

Hi

In derbydev.pdf ( 10.10.1.1) page 26, I read  :
**
Conventions for specifying the database path
...
You can specify only databases that are local to the machine on which the JVM
is running. ...
**
I want to use Derby (embedded mode) in my application on a networked computer 
and place the databases on the server.
Is it possible ?

Thanks

AirDT


RE: Proper configuration for a very busy DB?

2013-10-01 Thread Bergquist, Brett
Jerry, I have a similar database size, about 10,000,000+ records a day being 
stored and needing purging.  I found that purging the database had a 
significant impact on the insertion rate.  I originally had one table in the 
database for these (in my case performance measurement) records.   I 
ultimately went to a poor man's partitioning: I created separate tables for 
each week of the year (53 tables), inserted records into the correct week, used 
a database view to join these tables back into one logical table (a union 
across the tables), and then purging was done by week.  This was nearly 
instantaneous using a TRUNCATE TABLE.

Just something to be aware of.
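
The shape of it is roughly this (purely illustrative names; TRUNCATE TABLE 
requires a reasonably recent Derby release):

CREATE TABLE MEASUREMENT_W01 (SAMPLE_TIME TIMESTAMP, SAMPLE_VALUE DOUBLE);
CREATE TABLE MEASUREMENT_W02 (SAMPLE_TIME TIMESTAMP, SAMPLE_VALUE DOUBLE);
-- ... one table per week of the year ...

CREATE VIEW MEASUREMENT AS
    SELECT * FROM MEASUREMENT_W01
    UNION ALL
    SELECT * FROM MEASUREMENT_W02
    -- ... UNION ALL over the remaining weekly tables ...
    ;

-- purging a week is then nearly instantaneous:
TRUNCATE TABLE MEASUREMENT_W01;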

-Original Message-
From: Jerry Lampi [mailto:j...@sdsusa.com] 
Sent: Tuesday, October 01, 2013 11:53 AM
To: Derby Discussion
Subject: Re: Proper configuration for a very busy DB?

Peter:
Each client has one connection.  It is used for the entire session (which can 
be days).
The Derby log files are configured to have one log file per day, with names 
like productName-stderr.2013-10-01.log and productName-stdout.2013-10-01.log.

Brett:
- A flurry of data has been as great as 4000 records per second.  That is the 
number cached by the client(s) and each record is dumped to the DB one at a 
time.  Not all 30 clients see 4000 per second, likely only 2 or three of them.  
The DB has over 10 million records in it at any given time and it is purged 
daily of older records.
- We use prepared statements (PS).
- Each client has one dedicated connection.

All:
I appreciate your responses.  I will benchmark using JMeter and then follow the 
tuning tips for derby 10.8 ( 
http://db.apache.org/derby/docs/10.8/tuning/index.html ).  I will start by 
tweaking the derby.statementCache.size up from the 100 default.

Any other advice greatly appreciated.

Thanks,

Jerry

On 9/30/2013 2:55 PM, Peter wrote:
Do you open new connection every time or do you have a pool? How often does 
Derby checkpoint/switch log file?


Peter


On 9/30/2013 2:47 PM, Bergquist, Brett wrote:
 Jerry, can you provide a bit more background which might be helpful:

 - what is your definition of a flurry of data?   What sort of transaction 
 rate do you estimate this is?
 - are you using prepared statements for your inserts, updates, etc? If not, 
 then do so and also change the derby.statementCache.size to something quite a 
 bit larger.  This will allow the statements to be compiled once and cached 
 instead of being prepared each time you execute them.
 - are you using a connection pool or are you opening/closing connections 
 frequently?

 I have a system with a busy database and it took some tuning to get to this 
 point.  Right now it is doing about 100 inserts/second continuous 24x7 and it 
 has peaked up to 200 inserts/second.  Granted my application is different 
 than what you are doing but it is possible to get derby to run when busy.


 -Original Message-
 From: Jerry Lampi [mailto:j...@sdsusa.com]
 Sent: Monday, September 30, 2013 3:29 PM
 To: Derby User Group
 Subject: Proper configuration for a very busy DB?

 We have about 30 clients that connect to our version 10.8.2.2 Derby DB.

 The clients are programs that gather data from the operating system of their 
 host and then store that data in the DB, including FTP activity.
 Sometimes, the clients get huge flurries of data all at once and Derby is 
 unable to handle the influx of requests; inserts, updates, etc.  In addition, 
 the clients are written so that if they are unable to talk to the DB, they 
 queue up as much data as possible and then write it to the DB when the DB 
 becomes available.

 This client queuing is a poor design, and places greater stress on the DB, as 
 when the 30 clients finally do talk to the DB, they all dump data at once.  
 The clients do not know about one another and therefore do not attempt any 
 throttling or cooperation when dumping on the DB.

 The net effect of all this is that the DB is too slow to keep up with the 
 clients.  As clients try to feed data to the DB, it cannot accept it as fast 
 as desired and this results in the clients queueing more data, exacerbating 
 the issue.

 So the DB is very busy.  The only significant thing we have done thus far is 
 change the derby.storage.pageCacheSize=5000 and increase Java heap to 1536m.

 Is there a configuration considered optimal for a VERY busy Derby DB?

 Thanks,

 Jerry

















Trying to determine the space used by tables and indexes in the database

2013-03-12 Thread Bergquist, Brett
Derby is 10.8.   From the Derby 10.8 reference manual, I find:

SELECT T2.*
FROM
SYS.SYSTABLES systabs,
TABLE (SYSCS_DIAG.SPACE_TABLE()) AS T2
WHERE systabs.tabletype = 'T'
AND systabs.tableid = T2.tableid;


If I try to run that, I get:

Error: No constructor was found with the signature 
org.apache.derby.diag.SpaceTable().  It may be that the parameter types are not 
method invocation convertible.
SQLState:  42X75
ErrorCode: -1

Any ideas on how to run that to get the space used by each table/index?




RE: Trying to determine the space used by tables and indexes in the database

2013-03-12 Thread Bergquist, Brett
Thanks Rick.   I found that on the Derby wiki and did use that.  I just thought 
maybe I was doing something wrong.  Glad to see it is a bug in the 
documentation and not a bug in the implementation (other than not being present 
in 10.8).

-Original Message-
From: Rick Hillegas [mailto:rick.hille...@oracle.com] 
Sent: Tuesday, March 12, 2013 2:24 PM
To: derby-user@db.apache.org
Subject: Re: Trying to determine the space used by tables and indexes in the 
database

Hi Brett,

It seems as though a doc fix was propagated back to 10.8 but the corresponding 
code fix wasn't. The old way of invoking SPACE_TABLE is still documented in the 
10.4 docs:

SELECT T2.*
 FROM
 SYS.SYSTABLES systabs,
 TABLE (SYSCS_DIAG.SPACE_TABLE(systabs.tablename)) AS T2
 WHERE systabs.tabletype = 'T';

Hope this helps,
-Rick

On 3/12/13 9:22 AM, Bergquist, Brett wrote:

 SELECT T2.*

 FROM

 SYS.SYSTABLES systabs,

 TABLE (SYSCS_DIAG.SPACE_TABLE()) AS T2

 WHERE systabs.tabletype = 'T'

 AND systabs.tableid = T2.tableid;






RE: JPA required?

2013-02-13 Thread Bergquist, Brett
For your use case, probably not.   JPA is not something that is going to solve 
database element corruption, and in fact with JPA and its normal use you have 
less control over when entity changes are flushed to the database.

Note that if you don't have your database stored on a medium that has write 
caching, then if the host computer goes down the database is not going to be 
corrupt; it might not have the latest change, but it will be consistent if you 
are using transactions.

-Original Message-
From: JimCrowell37 [mailto:jimcrow...@email.com] 
Sent: Tuesday, February 12, 2013 4:52 PM
To: derby-user@db.apache.org
Subject: JPA required?

Hello,

I have spent today reading up on JPA and I have a question about whether I really need it.

I have a data entry form class where each data entry field is associated with 
an element of a Derby dynamic database table. As each data entry field loses 
its form focus, I shall write the entered value to the database 
table. The database table primary key is the field's row/column indices.

Since my goal is to save all data entries in a persistent manner, my question 
is do I need to implement JPA?

I think that the worst-case scenario is that my end user's host computer goes 
down sometime during the database write processing and that database element 
may be corrupted.

That thought is what led me to learning about JPA to persist the Database 
transaction.

Do I need to implement JPA or is there a better way to achieve my persistence 
goal?

Regards,
Jim...



--
View this message in context: 
http://apache-database.10148.n7.nabble.com/JPA-required-tp127242.html
Sent from the Apache Derby Users mailing list archive at Nabble.com.




RE: SYSCS_DIAG.LOCK_TABLE keeps coming up empty

2013-02-04 Thread Bergquist, Brett
If this is using JPA, maybe you are not seeing any locking because the entity 
manager flushes its requests to the database engine only when the entity 
bean method returns and the transaction manager kicks in to persist 
the changes.

Try adding an em.flush() call on the entity manager, which will force the 
database changes to occur; then you might see the lock table populated.

Just a thought.

Brett



From: Mark Stephen Krueger [m...@the-kruegers.name]
Sent: Sunday, February 03, 2013 11:21 AM
To: derby-user@db.apache.org
Subject: SYSCS_DIAG.LOCK_TABLE keeps coming up empty

I'm trying to debug an issue with a deadlock with an EJB app running under
Glassfish 3.1.2.1.  The app uses entity beans and the entity manager.  I
want to view the lock table at various points so I wrote the following code.
The problem is everywhere I place a call to it, the lock table is always
coming back as empty (no rows). What am I missing?

@Resource(mappedName="jdbc/myderbyjndi")
private DataSource dataSource;

..

private void dumpLockTable()
{
try ( Connection connection = dataSource.getConnection() )
{
PreparedStatement ps = connection.prepareStatement( "SELECT *
FROM SYSCS_DIAG.LOCK_TABLE" );
ResultSet rs = ps.executeQuery();
ResultSetMetaData rsmd = rs.getMetaData();
int columns = rsmd.getColumnCount();
StringBuilder sb = new StringBuilder();
sb.append( "Lock Table\n" );
while( rs.next() )
{
for ( int col = 1; col <= columns; col++ )
{
sb.append( rs.getString( col ) );
sb.append( "\t| " );
}
sb.append( "\n" );
}
logger.info( sb.toString() );
}

catch ( SQLException sqle )
{
logger.throwing( LOG_CLASS_NAME, "dumpLockTable", sqle );
}
}


I believe this is Derby 10.8.






RE: Force unique constraint to be enforced at statement execution?

2013-01-08 Thread Bergquist, Brett
Not exactly familiar with Spring Data JPA but I am with JPA with container 
managed transactions.   Assuming you have a handle on the EntityManager, invoke 
the 'flush' method on it which will cause the entity manager to perform the 
database operations before the commit.   You can then wrap your code with a 
try/catch block if needed to catch the exception that occurs.

-Original Message-
From: Matthew Adams [mailto:matt...@matthewadams.me] 
Sent: Tuesday, January 08, 2013 1:46 PM
To: derby-user@db.apache.org
Subject: Force unique constraint to be enforced at statement execution?

Hi all,

How do I force a unique constraint to be enforced upon statement execution 
instead of at transaction commit?

I'm using Spring Data JPA over DataNucleus and Derby (embedded), and Spring's 
persistence exception translation proxy is not translating the unique 
constraint exception because it's happening at transaction commit instead of 
when the statement is executed.

Thanks,
Matthew




--
View this message in context: 
http://apache-database.10148.n7.nabble.com/Force-unique-constraint-to-be-enforced-at-statement-execution-tp126304.html
Sent from the Apache Derby Users mailing list archive at Nabble.com.




RE: Does SYSCS_UTIL.SYSCS_BACKUP_DATABASE block transactions?

2012-08-28 Thread Bergquist, Brett
I have used SYSCS_UTIL.SYSCS_BACKUP_DATABASE successfully with a database that 
is being updated continuously (~60 transactions/second). It does not block the 
transactions but may slow down the process enough such that you get lock 
timeouts.  Try changing derby.locks.waitTimeout to something like 60 seconds 
and see if your problem goes away.
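
For example, it can be set on the database itself from SQL (60 is only an 
example value):

CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY('derby.locks.waitTimeout', '60');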

Just my observation and experience.

Brett

-Original Message-
From: istanbullu [mailto:yuksel.gu...@gmail.com] 
Sent: Tuesday, August 28, 2012 2:19 PM
To: derby-user@db.apache.org
Subject: Does SYSCS_UTIL.SYSCS_BACKUP_DATABASE block transactions?


I have a tool that backs up a DB using SYSCS_UTIL.SYSCS_BACKUP_DATABASE. 
According to the Derby admin guide, online backup does not block transactions.  
However, I see an exception thrown from another DB application trying to 
execute a query on the DB that is being backed up. 
The exception is: [java.sql.SQLTransactionRollbackException]: A lock could not 
be obtained within the time.  Does this imply that the admin guide is not 
accurate on this matter?  The size of the DB is 8-9 GBs.  I only see this issue 
when DB size is larger than about 2 GBs.
--
View this message in context: 
http://old.nabble.com/Does-SYSCS_UTIL.SYSCS_BACKUP_DATABASE-block-transactions--tp34348234p34348234.html
Sent from the Apache Derby Users mailing list archive at Nabble.com.





RE: No Connection Exception

2012-04-25 Thread Bergquist, Brett
I believe if Derby detects a severe condition, it will internally restart and 
output the startup message to the log again.  If you don't have derby.log set 
to append, it will appear as if the complete engine restarted; if you do have 
it set to append, you might see the severe error that caused the internal 
restart.

I think it is DRDAConnection.c where this can be seen but I don't have the code 
in front of me at this moment.

-Original Message-
From: Rick Hillegas [mailto:rick.hille...@oracle.com] 
Sent: Wednesday, April 25, 2012 8:32 AM
To: derby-user@db.apache.org
Subject: Re: No Connection Exception

On 4/24/12 6:49 PM, Tomcat Programmer wrote:
 That is a very good question... there is no error message at all.  
 However, what I did see was a startup message, like when the database 
 initially starts. Except, I never restarted it!  And the machine did 
 not reboot... are there ever conditions where it auto-restarts itself?
Derby engines and databases don't spontaneously reboot as far as I know. 
This suggests that the engine and/or database are being bounced by code higher 
up your application stack.

Hope this helps,
-Rick


 --
 --
 *From:* Katherine Marsden kmarsdende...@sbcglobal.net
 *To:* Derby Discussion derby-user@db.apache.org
 *Sent:* Tuesday, April 24, 2012 5:06 PM
 *Subject:* Re: No Connection Exception

 On 4/24/2012 1:19 PM, Tomcat Programmer wrote:
 Hello,



 The only way to clear this up seems to be restarting the web 
 container.   Is there some setting in Derby which is causing these 
 connections to go stale?  Is there any way to trace this and find out 
 the cause?

 Is there an interesting error in the derby.log that might indicate 
 when you lost the connection?








Can someone explain the use of logged archive backup?

2012-03-29 Thread Bergquist, Brett
I need to protect against a media crash and it is not so important to go back 
to specific periods of time for the database.   I have read the documentation 
in the Derby Admin Guide but am still confused about the use of 
SYSCS_UTIL.SYSCS_BACKUP_DATABASE_AND_ENABLE_LOG_ARCHIVE_MODE.

From what it says, a copy of the last full backup, the archived logs, and the 
active logs are needed to restore.   I am a little confused here on the 
archived logs and the active logs.


* Is there something special that should be done with the archived 
logs?   Or is it that the log file location needs to be on another media?

* When SYSCS_UTIL.SYSCS_BACKUP_DATABASE_AND_ENABLE_LOG_ARCHIVE_MODE is 
used, are all transactions from that point on logged forever until the next 
full backup is done?   Is this more expensive in terms of disk space usage than 
just doing a SYSCS_UTIL.SYSCS_BACKUP_DATABASE periodically?

* Is the difference between SYSCS_UTIL.SYSCS_BACKUP_DATABASE and 
SYSCS_UTIL.SYSCS_BACKUP_DATABASE_AND_ENABLE_LOG_ARCHIVE_MODE that with the 
SYSCS_UTIL.SYSCS_BACKUP_DATABASE I can restore to that point in time, but with 
SYSCS_UTIL.SYSCS_BACKUP_DATABASE_AND_ENABLE_LOG_ARCHIVE_MODE (assuming the 
backup and log files are on a separate media), that I can recover to the last 
transaction?

Another question is the time to perform a roll-forward recovery if 
SYSCS_UTIL.SYSCS_BACKUP_DATABASE_AND_ENABLE_LOG_ARCHIVE_MODE is used.   Might 
it not be better to just have a separate instance of Derby available and use 
database replication?  Would this not provide a fully functional database up to 
just about the last transaction, so recovery would be a matter of copying the 
replicated database to the system and restarting?




RE: Can someone explain the use of logged archive backup?

2012-03-29 Thread Bergquist, Brett
Thanks Bryan, but I am a little dense on this one and just want to understand 
the proper procedure to use 
SYSCS_UTIL.SYSCS_BACKUP_DATABASE_AND_ENABLE_LOG_ARCHIVE_MODE so that I can 
either consider it or not.  

On one system that uses ZFS, we will probably use 
SYSCS_UTIL.SYSCS_FREEZE_DATABASE to freeze, perform a ZFS snapshot, and 
SYSCS_UTIL.SYSCS_UNFREEZE_DATABASE to unfreeze the database.   I am concerned about a 
failure in the code between freezing the database and unfreezing the database 
as I believe if the connection is lost, the database will remain frozen with no 
way to unfreeze.  I need to test that and maybe patch Derby to perform an 
unfreeze if the connection is lost. I have had this happen when I used a script 
to call IJ to do the freeze, the shell to do the ZFS snapshot, and IJ again to 
unfreeze.  The last IJ to unfreeze could not get a connection and could not 
unfreeze.  And the database was locked up and could not even be shutdown 
cleanly.

Another system is an older system that is running UFS so this will not be 
possible.

Back to SYSCS_UTIL.SYSCS_BACKUP_DATABASE_AND_ENABLE_LOG_ARCHIVE_MODE.  So the 
procedure would be:

- use this procedure to create a backup of the database and enable log 
archiving.  Presumably the backup copy of the database would be created on some other media.
- if the log directory has not been moved away from 
database/log, what does one do with it?  Periodically copy the files in 
there to the other media?  How does one tell the archive logs from the 
active logs used for crash recovery, or is that even necessary?
- time goes on and another full backup is done.  Does one use 
SYSCS_UTIL.SYSCS_BACKUP_DATABASE_AND_ENABLE_LOG_ARCHIVE_MODE again?  I assume 
that the new archive logs need to be kept with this backup now, right?
- I guess I am confused by the archive logs: where they are located, 
what they look like, and what to do with them.

Now suppose the disk that contains the database crashes; a new disk is put in 
place, formatted, etc., and is available to the OS. So the procedure to recover 
is to issue a connect like:

- connect 'jdbc:derby:database;rollForwardRecoveryFrom=path to last 
full backup with logs';

Is that correct?  Is that any different than physically copying the database 
and all of the logs to the new disk and simply connecting like:

- connect 'jdbc:derby:database'
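
Putting the whole thing together, my current understanding of the sequence is 
(the paths and the database name are only placeholders):

-- periodic full backup that also (re)enables log archiving; the second argument
-- tells Derby to delete the archived logs from before this backup
CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE_AND_ENABLE_LOG_ARCHIVE_MODE('/backup/area', 1);

-- after a media failure, restore from that backup plus the archived and active logs
connect 'jdbc:derby:csemdb;rollForwardRecoveryFrom=/backup/area/csemdb';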


Any light you can shed on this will be helpful.  I think the Administration 
guide should go through this.  It starts to, but the part about handling the 
archive logs is not really presented.



-Original Message-
From: Bryan Pendleton [mailto:bpendleton.de...@gmail.com] 
Sent: Thursday, March 29, 2012 10:31 AM
To: Derby Discussion
Subject: Re: Can someone explain the use of logged archive backup?

 confused here on the archived logs and the active logs.

In general, there can be multiple logs covering the time between one backup and 
the next backup, and those logs must be applied, serially, in the correct 
order, to recover the database fully.

Once you take that next backup, you no longer need the previous backup/logs, 
though it's probably wise to establish a reasonable holding period depending on 
your resources (e.g., backup weekly, keep
3 months of backups and logs, destroy the oldest set when you complete the most 
recent backup, etc.)

 I need to protect against a media crash and it is not so important to go back 
 to specific periods of time for the database.

Perhaps you can entirely use lower-level mechanisms, then, such as RAID or 
other redundant storage hardware, or a modern filesystem which automatically 
replicates the underlying data against the failure of the storage, such as ZFS 
(http://en.wikipedia.org/wiki/ZFS)

I think it's still wise to have an application-level backup strategy, because 
sometimes logical recovery is necessary (e.g., to recover from an application 
bug or an administrative mistake), so I think that the exercise you're going 
through about documenting your backup and recovery strategies is an excellent 
one.

And don't forget to test those backup/restore practices, since an untested 
restore is no better than no restore at all.

I've found that one useful technique is to provision a secondary machine, which 
can be MUCH smaller in terms of CPU, memory, networking, etc., and just has to 
have enough disk space, and automate things so that every time I take a backup, 
my scripts automatically copy the backup to this spare machine, restore the 
backup and apply all the logs, and then run a few queries to satisfy myself 
that the database is correctly recovered.

thanks,

bryan




RE: Compress large table

2012-03-14 Thread Bergquist, Brett
The two procedures (SYSCS_COMPRESS_TABLE and SYSCS_INPLACE_COMPRESS_TABLE) for 
compacting are going to hold a lock on the table during the total time of 
compression, which for 20 million rows is going to be a long time.   Not the 
answer that you want to hear, but that is what it is.
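
For reference, the calls themselves look like this (the schema and table names 
are placeholders; both take the table lock for the duration of the call):

CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('MYSCHEMA', 'MYTABLE', 1);
CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('MYSCHEMA', 'MYTABLE', 1, 1, 1);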

Have you tried querying SYSCS_DIAG.SPACE_TABLE to make sure that you have 
space to be reclaimed?  You might find that compressing the table is not going 
to return much space in any case.

SELECT *
FROM TABLE (SYSCS_DIAG.SPACE_TABLE('MYSCHEMA', 'MYTABLE'))
AS T2

Replace 'MYSCHEMA' and 'MYTABLE' with your particular ones.

A couple of thoughts: it might be that you could copy the rows from one table 
to a new table, then drop the original table and rename the new table back 
(rough sketch below).  Of course, this requires about twice the disk usage 
while it is being done, and you would need to be smart about making sure that 
any rows inserted after you start the copy also make it into the new table, 
but it might allow the old table to stay in use while you build the new, 
compressed table.
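
Roughly something like this (untested sketch; note that CREATE TABLE ... AS 
does not carry over indexes, constraints, identity attributes, etc., so those 
need to be recreated on the new table before the switch):

    CREATE TABLE MYTABLE_NEW AS SELECT * FROM MYTABLE WITH NO DATA;
    INSERT INTO MYTABLE_NEW SELECT * FROM MYTABLE;
    -- recreate indexes/constraints on MYTABLE_NEW here
    DROP TABLE MYTABLE;
    RENAME TABLE MYTABLE_NEW TO MYTABLE;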

Hope this helps.

Brett

From: Stefan R. [mailto:elstefan...@gmail.com]
Sent: Tuesday, March 13, 2012 12:59 PM
To: derby-user@db.apache.org
Subject: Compress large table

Hi,
In one of our current projects we are using derby and have some large db tables 
(over 20 Million rows and some indexes).
We decided to clean this table with a scheduled task. Now to be able to free 
some disk space we want to compress the files using SYSCS_COMPRESS_TABLE. What 
would be the best option to use this function and keep the table lock as short 
as possible? Our app needs to keep running while the compression is taking 
place.
Do you have any suggestions?

Thank you,
Stefan



RE: How to drop two interrelated tables at the same time?

2012-02-09 Thread Bergquist, Brett
Use

alter table business_objects drop constraint fk_created_by;
alter table users drop constraint fk_boid;
drop table business_objects;
drop table users;

Basically reverse what you did to create the tables.

Brett

From: Libor Jelinek [ljeli...@virtage.com]
Sent: Thursday, February 09, 2012 6:44 PM
To: derby-user
Subject: How to drop two interrelated tables at the same time?

Hello dear Derby community!
Very stupid question, but as a newcomer from MySQL (MyISAM) I am wondering how 
to delete a table that has references to another table.

Table business_objects refers to table users. Vice versa, table users refers to 
business_objects.

create table business_objects (
boid int not null generated always as identity,
constraint pk_boid primary key (boid)
);

create table users (
username varchar(60) not null,
boid int not null,
constraint pk_username primary key (username)
);

alter table business_objects add constraint fk_created_by
foreign key (created_by) references users (username);

alter table users add constraint fk_boid
foreign key (boid) references business_objects (boid);

Dropping table users first is denied because of the existing reference from 
table business_objects:

An error occurred when executing the SQL command:
   drop table users
Operation 'DROP CONSTRAINT' cannot be performed on object 'PK_USERNAME' because 
CONSTRAINT 'FK_CREATED_BY' is dependent on that object. [SQL State=X0Y25, DB 
Errorcode=-1]

But dropping table business_objects first is also denied because of the existing 
reference from table users, with a similar error.

So the only option is to delete these interrelated tables at the same time -- but how? 
I tried to drop them in a transaction but got the same error as above.

Thanks a lot!
Libor



Got an error doing an online backup and don't know where to look

2011-11-28 Thread Bergquist, Brett
Got this error two nights in a row now while doing an online backup:

  [sql] Failed to execute:  CALL SYSCS_UTIL.SYSCS_BACKUP_DATABASE('/container1/backup/database/2011-11-28_00-15')
  [sql] java.sql.SQLException: Cannot backup the database, got an I/O Exception while writing to the backup container file /container1/backup/database/2011-11-28_00-15/csemdb/seg0/cf3a0.dat.

I run the online backup through an Ant SQL task.   The 'derby.log' does not 
show any errors, and the server system log (Oracle M5000/Solaris 10) shows no 
errors either.   I am stumped as to where to look, and any help will be greatly 
appreciated.


RE: SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE question

2011-11-03 Thread Bergquist, Brett
Just a heads up, I have recently went through this exercise of having a table 
that is always being inserted at one end and deleted at the other.  Network 
test results were being inserted at a rate of about 4 million per day and the 
same amount from earlier days needed to be purged out.  I had major performance 
issues in trying to delete rows while inserting rows.  Turns out I could insert 
faster than delete.  And on top of that, the space was not being reused 
efficiently enough and I had no down time to run the compress procedures.

What I ended up doing was data partitioning.  I created a table for each week 
of the year and used Restricted VTI functions to build a view combining all of 
the tables with UNION back into one virtual table, and then used TRUNCATE 
TABLE to purge a whole week of data in a few seconds.

The Restricted VTI was used because of the ability to optimize the returned 
rows based on the incoming query constraints.  Not perfect as it is a poor 
man's partitioning but it does work.

You might consider this route if you have no downtime to delete and compress.
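
Stripped of the VTI piece, the plain SQL shape of the idea is roughly this 
(table and column names are made up for illustration):

    CREATE TABLE RESULTS_WEEK_1 (ID BIGINT NOT NULL, MEASURED_AT TIMESTAMP NOT NULL);
    CREATE TABLE RESULTS_WEEK_2 (ID BIGINT NOT NULL, MEASURED_AT TIMESTAMP NOT NULL);

    -- queries run against a UNION ALL over the weekly tables (or the VTI-backed view)
    SELECT * FROM RESULTS_WEEK_1 UNION ALL SELECT * FROM RESULTS_WEEK_2;

    -- purging a whole week of data is then just
    TRUNCATE TABLE RESULTS_WEEK_1;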

Just some thoughts

Brett

-Original Message-
From: Sundar Narayanaswamy [mailto:sundar...@yahoo.com] 
Sent: Saturday, October 29, 2011 1:02 AM
To: derby-user@db.apache.org
Subject: Re: SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE question


I have posted the issue to DERBY-5487. I have also attached the Java test
program. 

The test does insert rows at one end of the primary key and delete them at the 
other end.  Interestingly, I noticed that primary key space is reclaimed if I 
reuse the primary keys across the insert-delete loops.  But my application 
requires me to use continuously increasing primary keys (not reuse them).


Mike Matrigali wrote:
 
 Posting your test to a JIRA issue would be best.  It would be 
 interesting to post the space table results after each
 insert/delete/compress iteration (or every 10, ...).
 When do you commit (every row or every 1)?  Is it multi-threaded? 
 Does your
 test always insert rows at one end of the index and delete them
 from the other end.  If so it may be DERBY-5473 (a runtime issue,
 not a compress table issue).
 
 

-- 
View this message in context: 
http://old.nabble.com/SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE-question-tp32736560p32742387.html
Sent from the Apache Derby Users mailing list archive at Nabble.com.





RE: Derby secure by default

2011-09-19 Thread Bergquist, Brett
Off by default for the release coming out.  On by default if you want at the 
next major release.

-Original Message-
From: Rick Hillegas [mailto:rick.hille...@oracle.com] 
Sent: Monday, September 19, 2011 12:39 PM
To: Derby Discussion
Subject: Derby secure by default

The Derby developers are considering introducing a single master 
security property. Turning this property on will enable most Derby 
security mechanisms:

1) Authentication - Will be on, requiring username/password credentials 
at connection time. Derby will supply a default authentication mechanism.

2) SQL authorization - Will be on, hiding a user's data from other 
people. In addition, Derby will support more SQL Standard protections 
for Java routines.

3) File permissions - Will be tightened as described by DERBY-5363.

4) PUBLIC -This keyword will not be allowed as a user name.

5) SSL/TLS encryption - Will shield client/server traffic.

6) Server administration -  Will require credentials.

When the property is off, Derby will behave as it does today: 
Authentication, authorization, and network encryption will be off, file 
permissions will inherit defaults from the account which runs the VM, 
PUBLIC will be a legal user name, and server administration won't need 
credentials.

This new master property will make it easier to configure a more secure 
application. We want to introduce the property in an upcoming 10.x 
release, where it will default to being off. That means that it won't 
cause compatibility problems.

Later on, we might change the default for this property so that it would 
normally be turned on. This would make Derby more secure out of the box 
at the cost of breaking existing applications. Many applications would 
need to explicitly turn the property off in order to run as they did 
previously. Release notes would document this behavioral change and we 
would bump the major release id from 10 to 11 in order to call attention 
to the change.

We would like your feedback on this trade-off between security out of 
the box versus disruption. Should this extra security be enabled by default?

Thanks,
-Rick





RE: Question on automatic statistics feature in 10.8.2.1

2011-05-24 Thread Bergquist, Brett
Okay, so "the automatic update doesn't kick in until the table contains at 
least 100 rows" was the information that I was missing.  Is that somewhere in 
the release notes or documentation?

In any case, this feature is greatly appreciated!  It will make the database 
much closer to zero administration, which my situation requires.  Thanks!

Brett

-Original Message-
From: Knut Anders Hatlen [mailto:knut.hat...@oracle.com] 
Sent: Tuesday, May 24, 2011 3:30 AM
To: derby-user@db.apache.org
Subject: Re: Question on automatic statistics feature in 10.8.2.1

Bergquist, Brett bbergqu...@canoga.com writes:

 I have been testing with the new automatic statistics feature in
 10.8.2.1 and turned on the logging and see it being triggered as I
 make changes to my database.  So this appears to be working.  What I
 am surprised at however, is that I have tables in my database that
 have out of date statistics (none) as reported by this query:

[...]

 The tables start empty but with indexes and then data gets added
 later.  I thought that probably the new automatic statistics feature
 would be triggered on a query of these tables but it does not seem to
 be.  If the table already has statistics they seem to be updated.   I
 could be wrong however and maybe my query is not sufficient to trigger
 statistics update but I did do a query for a specific value of primary
 key and saw nothing in derby.log.

What's the size of the table? The automatic update doesn't kick in until
the table contains at least 100 rows (the threshold can be tuned with
the undocumented property derby.storage.indexStats.debug.createThreshold).

There's also a possibility that the query you executed wasn't actually
compiled, but just fetched from the statement cache. Derby doesn't check
that the statistics are up to date on every execution. By default, it
checks after 100 executions whether a recompile is necessary, and it's
during the recompile the statistics update is scheduled. To eliminate
this as the cause, you could call
syscs_util.syscs_empty_statement_cache() before you execute the query
against the table, and see if the statistics get created then.

-- 
Knut Anders




Question on automatic statistics feature in 10.8.2.1

2011-05-23 Thread Bergquist, Brett
I have been testing with the new automatic statistics feature in 10.8.2.1 and 
turned on the logging and see it being triggered as I make changes to my 
database.  So this appears to be working.  What I am surprised at however, is 
that I have tables in my database that have out of date statistics (none) as 
reported by this query:

select schemaname,
tablename || ' (' || CONGLOMERATENAME || ')' as "Table (Index)",
CASE WHEN CAST
(
   creationtimestamp AS varchar(24)
)
IS NULL THEN 'Recreate Index to Initialize' ELSE CAST
(
   creationtimestamp AS varchar(24)
)
END
from sys.systables t
join sys.sysconglomerates c on t.tableid = c.tableid
JOIN sys.SYSSCHEMAS x on t.SCHEMAID = x.SCHEMAID
LEFT OUTER JOIN sys.sysstatistics s ON c.conglomerateid = s.referenceid
where t.tableid = c.tableid
and c.isindex = true
and t.tabletype = 'T';

The tables start empty but with indexes and then data gets added later.  I 
thought that probably the new automatic statistics feature would be triggered 
on a query of these tables but it does not seem to be.  If the table already 
has statistics they seem to be updated.   I could be wrong however and maybe my 
query is not sufficient to trigger statistics update but I did do a query for a 
specific value of primary key and saw nothing in derby.log.

So do I still need to prime these statistics myself with a call to 
SYSCS_UTIL.SYSCS_UPDATE_STATISTICS?
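
If so, I assume it would be one call per table, something like this (schema 
and table names are placeholders; null means all indexes on the table):

    CALL SYSCS_UTIL.SYSCS_UPDATE_STATISTICS('MYSCHEMA', 'MYTABLE', null);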

Thanks for any information.

Brett


RE: Executing sql from function

2011-03-03 Thread Bergquist, Brett
I believe it is because of the "no sql" in your CREATE FUNCTION call.  This 
should probably be "reads sql data", but check the reference manual section on 
the CREATE FUNCTION statement.
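
In other words, something like this for your step 4 (untested, but that is the 
shape of it):

    CREATE FUNCTION bar(varchar(50)) RETURNS varchar(50)
    LANGUAGE java EXTERNAL NAME 'foo.Tools.bar'
    PARAMETER STYLE java READS SQL DATA;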

Brett

-Original Message-
From: trolll [mailto:m.roki...@ibe.edu.pl] 
Sent: Thursday, March 03, 2011 9:42 AM
To: derby-user@db.apache.org
Subject: Executing sql from function


Hi everyone

I have a problem with executing sql statements from inside of a sql function
written in java. This is what I did:

1. This is how I execute the statement in Java:

...
con = DriverManager.getConnection("jdbc:default:connection");
q = con.prepareStatement("SELECT * FROM users");
q.executeQuery();
...

2. I built a Jar that contains this class
3. I install the jar in a derby database

CALL SQLJ.install_jar ('file:///home/michal/foo.jar', 'foo', 1);
CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY ('derby.database.classpath',
'App.foo');

4. I create a function that uses my java method from the jar file

CREATE FUNCTION bar(varchar(50)) RETURNS varchar(50) LANGUAGE java EXTERNAL
NAME 'foo.Tools.bar' PARAMETER STYLE java no sql;

5. I use the method in a sql query

SELECT bar(user_id) FROM users

While executing the query I get the following exception:
java.sql.SQLException: The external routine is not allowed to execute SQL
statements.

I use Java 6 and derby 10.7.1.1

Does anybody know what I am doing wrong?
-- 
View this message in context: 
http://old.nabble.com/Executing-sql-from-function-tp31059263p31059263.html
Sent from the Apache Derby Users mailing list archive at Nabble.com.





RE: Copy data from one table to another

2011-03-03 Thread Bergquist, Brett
Try something like:

UPDATE TABLEA SET
TABLEA.X = (SELECT TABLEB.X FROM TABLEB WHERE TABLEA.A = TABLEB.B),
TABLEA.Y = (SELECT TABLEB.Y FROM TABLEB WHERE TABLEA.A = TABLEB.B),
TABLEA.Z = (SELECT TABLEB.Z FROM TABLEB WHERE TABLEA.A = TABLEB.B);

The value can be an expression.

From: John Storta Jr [mailto:j...@storta.net]
Sent: Thursday, March 03, 2011 2:50 PM
To: derby-user@db.apache.org
Subject: Copy data from one table to another

TABLEA
Contains columns called A, X, Y, and Z
Column A is a key field while columns X, Y, and Z are empty.
There are also other columns with valid data so I cannot remove the table and 
start fresh.

TABLEB
Contains columns called A, X, Y, and Z also
Column A is a foreign key to TABLEA
Columns X, Y, and Z contain valid data.

I want to copy the data in columns X, Y, and Z from TABLEB into the columns X, 
Y, and Z in TABLEA where column A matches.

UPDATE TABLEA SET
TABLEA.X = TABLEB.X,
TABLEA.Y = TABLEB.Y,
TABLEA.Z = TABLEB.Z
WHERE TABLEA.A = TABLEB.A;

Naturally I get an error because TABLEB is not in a FROM clause.  I have tried 
a few different things to incorporate TABLEB into the mix, but nothing is 
working.  Most just give me a syntax error.


Any thoughts?

Thanks,
John S.



RE: Is it possible to run multiple network servers in parallel?

2011-01-09 Thread Bergquist, Brett
Yes, just start each server on a different port:

http://db.apache.org/derby/docs/10.6/adminguide/

Your connection URL for your client specifies the host and port to connect to:

   jdbc:derby://host:port/database

so just adjust your URL for the host and different server ports.
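
For example, with made-up port numbers: start each installation's server with 
its own version's jars and its own port (e.g. java 
org.apache.derby.drda.NetworkServerControl start -p 1528), and then from ij 
you pick the server purely by the port in the URL:

    connect 'jdbc:derby://localhost:1527/mydb';  -- e.g. the 10.6 server
    connect 'jdbc:derby://localhost:1528/mydb';  -- e.g. the 10.7 server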


From: Thomas [thomas.k.h...@t-online.de]
Sent: Sunday, January 09, 2011 8:16 AM
To: derby-user@db.apache.org
Subject: Is it possible to run multiple network servers in parallel?

Hi,

is it possible to run, for example, a network server in version 10.6 in parallel
with the current version 10.7 on one and the same (server) machine? If so,
how would a client program like ij know which server to connect to?

Thanks





Thoughts on a design to help a use case with continuously inserted and deleted records

2010-08-04 Thread Bergquist, Brett
Okay after some testing, it seems that Derby is reusing the space of deleted 
records before allocating more space from the file system.  If this is the case 
then what use does the call:

    call syscs_util.syscs_inplace_compress_table('SCHEMA', 'TABLE', 1, 0, 0);

have?  Basically what does the PURGE_ROWS option do that is above and beyond 
what is being done by Derby already to reuse the space of deleted records?

Also after testing we are seeing the following. With a database with no deleted 
rows, my test application is inserting about 150 records/second into a table.  
I let this run for about 2 million records and the insert rate is consistent.  
Now I purge out 1.5 million records and run the test again.   The insert rate 
is now about 25 records/second.  Running the above compress with the PURGE_ROWS 
option and rerun the test and still about 25 records/second.  Run full 
SYSCS_UTIL.SYSCS_COMPRESS_TABLE and rerun the test and the insert rate is back 
to 150 records/second.

The reduced insert rate because of deleted records is very much a problem.  We 
have a table that gets about 700K records inserted per day and purges out 30 
days old data at about 700K records per day.  This has a great effect on our 
insert rate.  Why the big hit because of deleted records and can anything other 
than a compress help?  This process has no downtime so running a compress can 
only be done maybe once a month.

Thanks for an feedback.

Brett





Thoughts on a design to help a use case with continuously inserted and deleted records

2010-08-04 Thread Bergquist, Brett
As I have previously written, we have a use case where a process is continually 
inserting records into a table at a continuous rate of about 8/second right 
now, every second of every day.  There are test measurement records being 
received from devices in a network.  Because of disk space requirements, only 
30 days of data is kept and older data is purged out each day.   The purging 
has a great effect on the rate of insertion and keeping a sustained rate.  In 
addition, since this is a continuous process, there is no down time except 
maybe once a month.  

It seems this use case is not handled that well by the database, and searching 
the net it seems that many other databases have the same issue.  So I was 
wondering if it might be possible to attack this problem from an application 
level.  Instead of having one table that is constantly being inserted and 
deleted from, what if I had 6 tables, one each for 5 weeks and the insertion 
always inserts into the proper table for the week associated with the data.  
When the 6th week's table starts to be inserted into, the oldest table can be 
dropped and recreated as the next week's available table.   It does make the 
insertion and query tricky but purging out results is a matter of dropping and 
recreating a table.
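
The week rollover itself would then just be DDL, something like this (the 
table shape is made up for illustration, and the application has to know which 
table is the current one):

    -- week 6 starts: throw away the oldest week and recycle its table
    DROP TABLE RESULTS_WEEK_1;
    CREATE TABLE RESULTS_WEEK_1 (ID BIGINT NOT NULL, MEASURED_AT TIMESTAMP NOT NULL);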

Any thoughts on an approach like this?  This almost seems like something that 
the database could support...

Brett 



RE: How to best constrain database space with many records being inserted and deleted

2010-07-19 Thread Bergquist, Brett
Sorry I was on vacation last week and had no access to email (welcome first ;) 
).  I will check this out and report back.  Thanks.

Brett

-Original Message-
From: Kristian Waagan [mailto:kristian.waa...@oracle.com] 
Sent: Monday, July 19, 2010 7:53 AM
To: derby-user@db.apache.org
Subject: Re: How to best constrain database space with many records being 
inserted and deleted

On 06.07.10 20:09, Bergquist, Brett wrote:

 I  have a situation where my system is adding about 600K records a day 
 to a table and also deleting about 600K records a day.  Basically it 
 is keeping 30 days worth of history of some network management service 
 level agreement data.  So each day about 600K new tests are performed 
 and recorded in the database and each day after the 30 day mark, about 
 600K old records are purged out.  On average there is about 18 million 
 records in the table.


Hi Brett,

Are you able to determine if the growing conglomerate is a heap or a btree?
To do that, find the largest file(s) in the seg0 directory of your 
database. Convert the number from hex to dec, i.e. 'c300.dat' -> 768 (0x300).
Then do something like:
select isindex, descriptor from sys.sysconglomerates where 
conglomeratenumber = ?;
(? would be 768 for the example file)


Regards,
-- 
Kristian

 I have little to no down time for database maintenance.  Maybe 2 hours 
 per month maximum.  What I am seeing is that the database is growing 
 and it does not seem to be reusing the deleted space.  Should it be?  
 The records being inserted are exactly the size of the records being 
 deleted.

 I know that I could use the SYSCS_UTIL.SYSCS_COMPRESS_TABLE to reclaim 
 the space, but I am not so interested in giving back to the OS, but 
 rather ensuring the space available from the deleted records is 
 reused.  I have attempted to reclaim the space with this command, 
 however, and about 5 hours of time is just too much.

 I also see there is a SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE but I am 
 not sure what the PURGE_ROWS option is going to do.  It sounds like it 
 is something that I want to look into, but the documentation is not 
 clear enough if that is what I need.

 Thanks in advance.

 Brett //






How to best constrain database space with many records being inserted and deleted

2010-07-06 Thread Bergquist, Brett
I  have a situation where my system is adding about 600K records a day to a 
table and also deleting about 600K records a day.  Basically it is keeping 30 
days worth of history of some network management service level agreement data.  
So each day about 600K new tests are performed and recorded in the database and 
each day after the 30 day mark, about 600K old records are purged out.  On 
average there is about 18 million records in the table.

I have little to no down time for database maintenance.  Maybe 2 hours per 
month maximum.  What I am seeing is that the database is growing and it does 
not seem to be reusing the deleted space.  Should it be?  The records being 
inserted are exactly the size of the records being deleted.

I know that I could use the SYSCS_UTIL.SYSCS_COMPRESS_TABLE to reclaim the 
space, but I am not so interested in giving back to the OS, but rather ensuring 
the space available from the deleted records is reused.  I have attempted to 
reclaim the space with this command, however, and about 5 hours of time is just 
too much.

I also see there is a SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE but I am not sure 
what the PURGE_ROWS option is going to do.  It sounds like it is something that 
I want to look into, but the documentation is not clear enough if that is what 
I need.

Thanks in advance.

Brett


RE: How to best constrain database space with many records being inserted and deleted

2010-07-06 Thread Bergquist, Brett
This is the schema of the table in question:

CREATE TABLE NPARESULTS
(
ID BIGINT NOT NULL GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
REPORTKEY CHAR(56) NOT NULL,
MASTERIP CHAR(4) FOR BIT DATA NOT NULL,
BOOTCOUNT INTEGER NOT NULL,
TESTRESULTID BIGINT NOT NULL,
PROFILEREFID INTEGER NOT NULL,
ADDRESSREFID INTEGER NOT NULL,
STARTDATETIME TIMESTAMP NOT NULL,
ACCURACYLEVEL SMALLINT NOT NULL,
RESULTFLAG SMALLINT NOT NULL,
PACKETSSENT INTEGER NOT NULL,
ROUNDTRIPPACKETS INTEGER NOT NULL,
DROPPEDPACKETS INTEGER NOT NULL,
OUTOFORDERPACKETS INTEGER NOT NULL,
MINROUNDTRIPLATENCY INTEGER NOT NULL,
MAXROUNDTRIPLATENCY INTEGER NOT NULL,
TOTALROUNDTRIPLATENCY BIGINT NOT NULL,
AVGROUNDTRIPLATENCY INTEGER NOT NULL,
LATENCYBUCKETVALUE1 INTEGER NOT NULL,
LATENCYBUCKETVALUE2 INTEGER NOT NULL,
LATENCYBUCKETVALUE3 INTEGER NOT NULL,
LATENCYBUCKETVALUE4 INTEGER NOT NULL,
LATENCYBUCKETVALUE5 INTEGER NOT NULL,
LATENCYBUCKETVALUE6 INTEGER NOT NULL,
LATENCYBUCKETVALUE7 INTEGER NOT NULL,
LATENCYBUCKETVALUE8 INTEGER NOT NULL,
LATENCYBUCKETVALUE9 INTEGER NOT NULL,
LATENCYBUCKETVALUE10 INTEGER NOT NULL,
JITTERMEASUREMENT INTEGER NOT NULL,
MINLOCALREMOTEJITTER INTEGER NOT NULL,
MAXLOCALREMOTEJITTER INTEGER NOT NULL,
TOTALLOCALREMOTEJITTER BIGINT NOT NULL,
AVGLOCALREMOTEJITTER INTEGER NOT NULL,
LOCALREMOTEJITTERBUCKETVALUE1 INTEGER NOT NULL,
LOCALREMOTEJITTERBUCKETVALUE2 INTEGER NOT NULL,
LOCALREMOTEJITTERBUCKETVALUE3 INTEGER NOT NULL,
LOCALREMOTEJITTERBUCKETVALUE4 INTEGER NOT NULL,
LOCALREMOTEJITTERBUCKETVALUE5 INTEGER NOT NULL,
LOCALREMOTEJITTERBUCKETVALUE6 INTEGER NOT NULL,
LOCALREMOTEJITTERBUCKETVALUE7 INTEGER NOT NULL,
LOCALREMOTEJITTERBUCKETVALUE8 INTEGER NOT NULL,
LOCALREMOTEJITTERBUCKETVALUE9 INTEGER NOT NULL,
MINREMOTELOCALJITTER INTEGER NOT NULL,
MAXREMOTELOCALJITTER INTEGER NOT NULL,
TOTALREMOTELOCALJITTER BIGINT NOT NULL,
AVGREMOTELOCALJITTER INTEGER NOT NULL,
REMOTELOCALJITTERBUCKETVALUE1 INTEGER NOT NULL,
REMOTELOCALJITTERBUCKETVALUE2 INTEGER NOT NULL,
REMOTELOCALJITTERBUCKETVALUE3 INTEGER NOT NULL,
REMOTELOCALJITTERBUCKETVALUE4 INTEGER NOT NULL,
REMOTELOCALJITTERBUCKETVALUE5 INTEGER NOT NULL,
REMOTELOCALJITTERBUCKETVALUE6 INTEGER NOT NULL,
REMOTELOCALJITTERBUCKETVALUE7 INTEGER NOT NULL,
REMOTELOCALJITTERBUCKETVALUE8 INTEGER NOT NULL,
REMOTELOCALJITTERBUCKETVALUE9 INTEGER NOT NULL,
CIRCUIT1REFID INTEGER NOT NULL,
CIRCUIT2REFID INTEGER NOT NULL
);

So would this type of schema trigger this kind of problem?  Also, what does the 
PURGE_ROWS option do?


From: Kathey Marsden [mailto:kmarsdende...@sbcglobal.net]
Sent: Tuesday, July 06, 2010 2:26 PM
To: derby-user@db.apache.org
Subject: Re: How to best constrain database space with many records being 
inserted and deleted

On 7/6/2010 11:09 AM, Bergquist, Brett wrote:
What I am seeing is that the database is growing and it does not seem to be 
reusing the deleted space.  Should it be?  The records being inserted are 
exactly the size of the records being deleted.

The known issues in this area are:
https://issues.apache.org/jira/browse/DERBY-4057
https://issues.apache.org/jira/browse/DERBY-4055

For DERBY-4055 a possible workaround is here:
https://issues.apache.org/jira/browse/DERBY-4055?focusedCommentId=12680196&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#action_12680196
Also synchronizing table access has helped some users.

You can verify if you are seeing DERBY-4055 by running with a SANE build and 
putting
derby.debug.true=DaemonTrace

in your derby.properties.

All that said, I have heard anecdotal reports that there may be another bug.  
If you can come up with a reproduction that is not either of these issues, we 
would appreciate it.

Thanks

Kathey