how to set the timezone of CURRENT_TIMESTAMP to GMT?

2010-02-08 Thread Erin Drummond
Hi,

I have a table in a database that looks like this:

CREATE TABLE Data(
    RowID CHAR(16) FOR BIT DATA PRIMARY KEY,
    Value LONG VARCHAR NOT NULL,
    LastUpdated TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP
)

As you can see, the LastUpdated field defaults to CURRENT TIMESTAMP.
However, this value is always in my local timezone (GMT+12). How can I
automatically convert it to GMT using a built-in database function? I
am working in GMT timestamps because there is no central server whose
timezone could be used: a separate copy of the database is stored on
several nodes all over the world, and replication is driven by the
timestamp, which is why the timestamp must be in GMT to be accurate. I
want to avoid doing this at the application level because I have a lot
of code that omits the LastUpdated field when creating a new record,
so I would like it to be set automatically.

Can someone help me?

Thanks,
Erin


Re: how to set the timezone of CURRENT_TIMESTAMP to GMT?

2010-02-08 Thread Bernt M. Johnsen
On 02/08/2010 10:40 AM, Erin Drummond wrote:
 Hi,
 
 I have a table in a database that looks like this: CREATE TABLE
 Data(RowID CHAR(16) FOR BIT DATA PRIMARY KEY,Value LONG VARCHAR NOT
 NULL,LastUpdated TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP)

The proper solution would be to use the TIMESTAMP WITH TIME ZONE datatype for
the LastUpdated column, but regrettably it's not implemented in Derby.

 As you can see, the LastUpdated field defaults to CURRENT TIMESTAMP.
 However, this value is always in my local timezone (GMT+12). How can I
 automatically convert it to GMT using a built-in database function?

I (using Linux) would run the database JVM with the environment variable
TZ=GMT.
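
If you can't control the environment, something like this should have the
same effect - an untested sketch, which assumes the default time zone can
be set before the Derby engine boots (CURRENT TIMESTAMP is evaluated with
the JVM's default time zone) and in which the database name is made up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.TimeZone;

public class BootInGmt {
    public static void main(String[] args) throws Exception {
        // Must happen before the embedded engine boots, so that every
        // CURRENT TIMESTAMP is evaluated in GMT rather than local time.
        // Equivalent to launching the JVM with -Duser.timezone=GMT.
        TimeZone.setDefault(TimeZone.getTimeZone("GMT"));

        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection conn = DriverManager.getConnection("jdbc:derby:myDB;create=true");
        // ... use the connection; LastUpdated defaults are now recorded in GMT.
        conn.close();
    }
}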

 


-- 
Bernt Marius Johnsen, Staff Engineer
Database Group, Sun Microsystems, Trondheim, Norway





import with column

2010-02-08 Thread Glenn Ambrose

Hi

Does 'CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE(...);' handle column names? I've
noticed that when the database we use, 'Agrobase', exports to .csv it always
adds the column names as the first row.

What's the best way of handling this? I can't see any provision for it in
the above procedure.

Thanks
Glenn
  

Re: import with column

2010-02-08 Thread Rick Hillegas

Hi Glenn,

Derby doesn't understand the format you're describing. You can write a 
table function which reads the csv file and strips the column names. 
Then you can import the file like this:


insert into myTable select * from table ( myTableFunction(...) ) s

However, with this solution you will lose some of the speed of the 
import procedures because logging won't be turned off.

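Alternatively - not a table function, and admittedly a pre-processing step
rather than a pure SQL solution - you could strip the header row into a
temporary file and still hand it to the import procedure, keeping its speed.
A rough, untested sketch (database, table and file names are made up):

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.PrintWriter;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class HeaderlessImport {
    public static void main(String[] args) throws Exception {
        File src = new File("agrobase-export.csv");    // export with a header row
        File tmp = File.createTempFile("import", ".csv");
        BufferedReader in = new BufferedReader(new FileReader(src));
        PrintWriter out = new PrintWriter(new FileWriter(tmp));
        in.readLine();                                 // discard the column-name row
        for (String line; (line = in.readLine()) != null; ) {
            out.println(line);
        }
        out.close();
        in.close();

        Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
        CallableStatement cs = conn.prepareCall(
                "CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE(?, ?, ?, ?, ?, ?, ?)");
        cs.setString(1, null);                  // schema: default
        cs.setString(2, "MYTABLE");             // table name
        cs.setString(3, tmp.getAbsolutePath());
        cs.setString(4, null);                  // column delimiter: default (,)
        cs.setString(5, null);                  // character delimiter: default (")
        cs.setString(6, null);                  // codeset: default
        cs.setInt(7, 0);                        // 0 = append, non-zero = replace
        cs.execute();
        cs.close();
        conn.close();
    }
}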

Hope this helps,
-Rick

Glenn Ambrose wrote:

 Hi

 Does 'CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE(...);' handle column names?
 I've noticed that when the database we use, 'Agrobase', exports to .csv
 it always adds the column names as the first row.




OutOfMemoryErrors on group by select

2010-02-08 Thread Ronald Rudy
I've gotten to the point where I seem to be able to frequently induce 
OutOfMemoryErrors when executing a statement like this one:

SELECT groupCol, count(*) FROM myTable GROUP BY groupCol;

The total number of records doesn't seem terribly large (about 400,000),
though while I'm executing the above statement there are likely some inserts
and deletes happening concurrently.  I don't need anything transactional
here; I'm really just monitoring a table from a separate network connection.

In our app the database runs in its own JSVC daemon as a NetworkServer with
512MB allocated to it.  Page cache size is set to 2000.  Page size should be
the Derby default (4096, I think).

I cannot be 100% sure that the above is specifically what is causing the
error, but I can say that I hadn't seen memory errors until I started
monitoring this table frequently, and even then only once the table size
started getting a bit bigger (not big by any stretch, but > 300k rows).

The table itself is not really big - there are 7 varchar columns along with an 
ID column.

In this production-level situation stability is more important than
performance.  I'd like Derby configured so that no matter what SQL is
requested, the daemon itself is not impacted.

I'd also like to build in some handling code so that if/when an
OutOfMemoryError is encountered, the daemon automatically restarts - for
this I might need a wrapper script with -XX:OnOutOfMemoryError as a JVM
option, unless there's some way to handle it internally?

-Ron

Re: OutOfMemoryErrors on group by select

2010-02-08 Thread Peter Ondruška
What Derby version is this? Do you have an index on groupCol?

On Mon, Feb 8, 2010 at 3:11 PM, Ronald Rudy ronchal...@gmail.com wrote:
 I've gotten to the point where I seem to be able to frequently induce
 OutOfMemoryErrors when executing a statement like this one:

 SELECT groupCol, count(*) FROM myTable GROUP BY groupCol;


Re: OutOfMemoryErrors on group by select

2010-02-08 Thread Ronald Rudy
Version is 10.4.2.0, and no I don't - I am actually trying the GROUP BY on
another column that is indexed, to see if there's any memory benefit.  The
table isn't optimized for the statement below because in production it will
never be executed like that; I'm just trying to monitor some counts while
the app is running.


On Feb 8, 2010, at 9:21:29 AM, Peter Ondruška wrote:

 What Derby version is this? Do you have an index on groupCol?
 
 On Mon, Feb 8, 2010 at 3:11 PM, Ronald Rudy ronchal...@gmail.com wrote:
 I've gotten to the point where I seem to be able to frequently induce
 OutOfMemoryErrors when executing a statement like this one:

 SELECT groupCol, count(*) FROM myTable GROUP BY groupCol;
 
 -Ron



Re: OutOfMemoryErrors on group by select

2010-02-08 Thread Peter Ondruška
Try creating a non-unique index on the column you group by. Have you
tried a more recent Derby version, to see whether you can reproduce the
problem there?

On Mon, Feb 8, 2010 at 3:25 PM, Ronald Rudy ronchal...@gmail.com wrote:
 Version is 10.4.2.0, and no I don't - I am actually trying the GROUP BY on
 another column that is indexed, to see if there's any memory benefit.




Re: OutOfMemoryErrors on group by select

2010-02-08 Thread Ronald Rudy
On its own the query seems to work OK; it's under load that it gets pushed
over the line.  Memory usage jumps by ~100MB right away just executing the
query, though repeated executions don't seem to move the needle much.

It's not really tenable right now to move to 10.5, though I'd like to during 
our next major revision (in the next month) so I can take advantage of the 
OFFSET capabilities.  

I guess I'm more looking for general guidelines for managing memory usage in
Derby - are there page size settings or anything else I can pass into Derby
to force it to use less memory at the expense of performance?  I set
pageCacheSize to 2000; are there other params that will help limit memory
usage in Derby?
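
(For reference, here is roughly how I'd set such knobs - the two property
names are from the Derby tuning guide, but the values are just guesses on my
part, and the database name is made up:)

import java.sql.DriverManager;

public class MemoryTunedBoot {
    public static void main(String[] args) throws Exception {
        // Both properties must be set before the engine boots (or be placed
        // in derby.properties in the Derby system directory).
        // Pages held in the page cache (I currently use 2000; lower should
        // mean less page-cache memory):
        System.setProperty("derby.storage.pageCacheSize", "1000");
        // KB per table the optimizer may use for hash joins (default 1024):
        System.setProperty("derby.language.maxMemoryPerTable", "512");
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        DriverManager.getConnection("jdbc:derby:myDB").close();
    }
}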

Also, it seems in general that aggregate functionality (for example sum()
over a GROUP BY) performs rather poorly in Derby - is there any way to
improve this outside of indexing?  When I benchmarked an implementation that
simply iterated over the rows and aggregated the grouped results in plain
in-memory Map objects against executing a Derby GROUP BY directly, the two
performed identically - is there something I can do here?
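
(For context, the hand-rolled version I benchmarked was essentially this
sketch - table and column names are made up:)

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.HashMap;
import java.util.Map;

public class ManualGroupBy {
    // Stream all rows and count per group in an in-memory map, instead of
    // letting Derby evaluate the GROUP BY.
    static Map<String, Integer> countByGroup(Connection conn) throws SQLException {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        Statement st = conn.createStatement();
        ResultSet rs = st.executeQuery("SELECT groupCol FROM myTable");
        while (rs.next()) {
            String key = rs.getString(1);
            Integer n = counts.get(key);
            counts.put(key, n == null ? 1 : n + 1);
        }
        rs.close();
        st.close();
        return counts;
    }
}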


On Feb 8, 2010, at 9:37:59 AM, Peter Ondruška wrote:

 Try creating a non-unique index on the column you group by. Have you
 tried more recent Derby version if you can reproduce that there?
 



Re: OutOfMemoryErrors on group by select

2010-02-08 Thread Kristian Waagan

On 08.02.10 15:25, Ronald Rudy wrote:

 Version is 10.4.2.0, and no I don't - I am actually trying the GROUP BY on
 another column that is indexed, to see if there's any memory benefit.


Hi Ronald,

If you can easily control this process, can you run it with 
-XX:+HeapDumpOnOutOfMemoryError and post a heap dump summary?
(I don't remember off the top of my head how you extract a histogram 
from the binary heap dump, but I guess maybe using jmap or HeapViewer)

Are there many Derby classes in the top 10?
You might only see a bunch of arrays up there, in which case more 
investigation may be required to understand what's going on.
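
(If waiting for the error to strike is awkward, on Sun JVMs you can also
trigger a dump programmatically via the HotSpotDiagnostic MBean - an
untested sketch, with the output file name made up:)

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class DumpHeap {
    public static void main(String[] args) throws Exception {
        // Writes an hprof binary heap dump on demand; jmap/jhat can read it.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.invoke(new ObjectName("com.sun.management:type=HotSpotDiagnostic"),
                      "dumpHeap",
                      new Object[] { "derby-heap.hprof", Boolean.TRUE }, // true = live objects only
                      new String[] { "java.lang.String", "boolean" });
    }
}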



Regards,
--
Kristian






Re: Using NetworkServerControl to access in-memory (only) tables of Embedded Derby

2010-02-08 Thread Pavel Bortnovskiy
Thank you for your response, Bryan.

When the situation arises again, I will generate a thread dump to verify;
I don't have one at the moment.

My thinking is in line with yours - the Derby network server should run in
its own thread, but I don't see anything in the API that would let me verify
or control this (short of downloading the source code and debugging it).
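
(For reference, this is roughly how I start the server today - a simplified,
untested reduction of my actual code, with host and port made up:)

import java.io.PrintWriter;
import java.net.InetAddress;
import org.apache.derby.drda.NetworkServerControl;

public class StartServer {
    public static void main(String[] args) throws Exception {
        // start() returns after spawning the server's own threads; as far
        // as I can tell, the calling thread does not need to stay alive.
        NetworkServerControl server =
                new NetworkServerControl(InetAddress.getByName("localhost"), 1527);
        server.start(new PrintWriter(System.out, true));
    }
}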

So, is there anyone on this forum who could help out, please?
If you have experience running NetworkServerControl in your application,
could you elaborate on how the threading was implemented, so that the NSC
is independent of all other threads and their states?

Thanks,
Pavel.






Bryan Pendleton bpendle...@amberpoint.com wrote on 02/05/2010 12:07 PM:






 DerbyServer thread seems to still be running, yet the server is 
 unresponsive.

In addition to your own DerbyServer thread, which I don't think
actually *needs* to still be running, there should be a separate
thread which is started by the Derby network server code itself,
which has the job of accepting connections and delivering them
to other threads to be processed.

In my running network server, for example, when I look at the
threads that are active, I see:

NetworkServerThread_2 prio=6 tid=0x03044c80 nid=0x27c runnable 
[0x033cf000..0x033cfae8]
 at java.net.PlainSocketImpl.socketAccept(Native Method)
 at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:384)
 - locked 0x22a857d8 (a java.net.SocksSocketImpl)
 at java.net.ServerSocket.implAccept(ServerSocket.java:450)
 at java.net.ServerSocket.accept(ServerSocket.java:421)
 at org.apache.derby.impl.drda.ClientThread$1.run(Unknown Source)
 at java.security.AccessController.doPrivileged(Native Method)
 at org.apache.derby.impl.drda.ClientThread.run(Unknown Source)

Do you see a thread like that?

When your server becomes stuck/wedged/unresponsive, why don't you try 
this:
  - collect a thread dump of the entire JVM
  - look through the threads for any which mention org.apache.derby in 
their stacks
  - edit out all the other threads from your thread dump

Then post a message with a cut-and-paste of just the derby-related threads
in your wedged server, and maybe it will be more clear to somebody else
what the problem is.

thanks,

bryan









Re: OutOfMemoryErrors on group by select

2010-02-08 Thread Ronald Rudy
Kristian,

Yes, I actually do have a heap dump just like you mentioned - sorry, I had
completely forgotten I retrieved one earlier (it was taken late last evening
and I am still getting my bearings this morning :))

Here are the top 17 or so entries from the histogram (view the email with a
fixed-width font and it should line up):

Class                                                       Instance Count   Total Size
[C                                                                 1596861    307166530
[B                                                                   10719     69009057
org.apache.derby.iapi.types.SQLVarchar                             1391370     38958360
java.lang.String                                                   1397786     22364576
[[C                                                                1392460     16709932
[Lorg.apache.derby.iapi.types.DataValueDescriptor;                  200932      8796896
org.apache.derby.impl.store.access.sort.Node                        198342      5553576
org.apache.derby.iapi.types.SQLInteger                              398201      1991005
org.apache.derby.impl.store.raw.data.StoredRecordHeader              93119      1583023
[Lorg.apache.derby.impl.store.access.sort.Node;                          9      1049092
org.apache.derby.impl.store.raw.data.RecordId                        48191       578292
[Ljava.util.concurrent.ConcurrentHashMap$HashEntry;                    192       425876
[Lorg.apache.derby.impl.store.raw.data.StoredRecordHeader;            2000       404880
org.apache.derby.impl.store.raw.data.StoredPage                       2000       352000
[Ljava.util.HashMap$Entry;                                             837       303432
[I                                                                    4839       286212
java.lang.Class                                                       2584       196384

It looks like it's keeping a ton of SQLVarchars in memory...

-Ron



On Feb 8, 2010, at 9:48:52 AM, Kristian Waagan wrote:
 If you can easily control this process, can you run it with
 -XX:+HeapDumpOnOutOfMemoryError and post a heap dump summary?



Re: OutOfMemoryErrors on group by select

2010-02-08 Thread Kristian Waagan

On 08.02.10 16:42, Ronald Rudy wrote:

 Yes, I actually do have a heap dump just like you mentioned. [...]

 It looks like it's keeping a ton of SQLVarchars in memory...


Thanks, Ron.

Indeed... If I wanted to get this fixed, I would do one or both of these:

 a) Check whether the same problem exists in 10.5.
 b) Write a runnable repro executing the problematic query (the data
should preferably be generated by the repro) and attach it to a Jira
issue [1].


Maybe one could learn something by using jhat or similar (i.e. why is
Derby holding on to all the SQLVarchar objects), but having a repro
would help a lot. Further, it is not unlikely that many of the Strings
and character arrays belong to the SQLVarchar objects.



Regards,
--
Kristian

[1] https://issues.apache.org/jira/browse/DERBY






Re: how to set the timezone of CURRENT_TIMESTAMP to GMT?

2010-02-08 Thread Dag H. Wanvik
Bernt M. Johnsen bernt.john...@sun.com writes:

 On 02/08/2010 10:40 AM, Erin Drummond wrote:
 Hi,
 
 I have a table in a database that looks like this: CREATE TABLE
 Data(RowID CHAR(16) FOR BIT DATA PRIMARY KEY,Value LONG VARCHAR NOT
 NULL,LastUpdated TIMESTAMP NOT NULL DEFAULT CURRENT TIMESTAMP)

As a workaround, you could auto-record the time zone using a generated
column and a function that looks up the time zone.
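
Something along these lines is what I have in mind - an untested sketch that
assumes a Derby version with generation clauses (10.5 or later). Note that
Derby requires a function referenced from a generation clause to be declared
DETERMINISTIC, which this one, strictly speaking, is not; names are made up:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import java.util.TimeZone;

public class TzWorkaround {
    // Offset of the JVM's default time zone from GMT, in minutes, so a
    // reader can normalize LastUpdated back to GMT.
    public static int tzOffsetMinutes() {
        return TimeZone.getDefault().getOffset(System.currentTimeMillis()) / 60000;
    }

    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection("jdbc:derby:myDB");
        Statement st = conn.createStatement();
        st.execute("CREATE FUNCTION TZ_OFFSET_MINUTES() RETURNS INTEGER"
                 + " LANGUAGE JAVA PARAMETER STYLE JAVA NO SQL DETERMINISTIC"
                 + " EXTERNAL NAME 'TzWorkaround.tzOffsetMinutes'");
        st.execute("ALTER TABLE Data ADD COLUMN TzOffset INTEGER"
                 + " GENERATED ALWAYS AS (TZ_OFFSET_MINUTES())");
        st.close();
        conn.close();
    }
}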

Dag