Re: POLL - Need for shared code

2005-10-31 Thread Dyre . Tjeldvoll
David W. Van Couvering [EMAIL PROTECTED] writes:

 Hi, all.  It is my belief that there is a current need for shared code, 
 not just future needs.  I'd like to test this belief.

 Can those of you who are working on functionality that could use shared 
 code please send an item to the list describing what it is you want to 
 do and why you have a need for a shared code infrastructure?

 Also, if you have a general opinion that it is important (or not) to 
 have shared code sooner than later, your views on this would be much 
 appreciated.

Personally I think that sharing common code is an absolute
requirement, and I cannot understand why anyone would question it. If
sharing code causes problems, then those problems have to be
addressed somehow. I cannot see any linking/versioning problem that would
justify maintaining multiple copies of the same code, or maintaining
your own version of external libraries.

I'll admit that I don't understand all the issues around multiple
jars, classpaths, multiple apps in a vm and class-loaders, but I do
know from past experience that maintaining online upgradability takes
a lot of developer effort, while it provides only marginal benefit to
users in most cases. 

-- 
dt



Re: POLL - Need for shared code

2005-10-31 Thread Andrew McIntyre


On Oct 31, 2005, at 12:50 AM, [EMAIL PROTECTED] wrote:


Personally I think that sharing common code is an absolute
requirement, and I cannot understand why anyone would question it.


applies hat of devil's advocate

New functionality (in this case, code sharing) should be questioned  
if it causes a regression in the behaviour of Derby.



If sharing code causes problems, then those problems have to be
addressed somehow.


Better now than later. Better to be thorough than to be incomplete.  
Not necessarily in that order. :-)



I cannot see any linking/versioning problem that would
justify maintaining multiple copies of the same code, or to maintain
your own version of external libraries.


That is, unless real-world scenarios of Derby use that currently  
work cease to function because of changes introduced by code sharing.


Working towards a goal of sharing common code, as often as possible,  
is a *very* good idea. But, I think it's an idea that should be  
applied sparingly, not exceedingly, across the current code base.  
Personally, I feel that it would be better to apply the ideals of  
code sharing to some new functionality (e.g. full text indexing) than  
to try and retrofit the current code to share some particular bit of  
existing functionality.


It's not that I believe that the opportunities for sharing code in  
the current code base cannot be addressed and understood. I just  
think that it may be more productive, at this time, to focus the  
efforts of code sharing on some new functionality for which there is  
no history against which the principles of code sharing need to fight.


That said, itches are there to be scratched. Clearly, David has found  
an itch that is definitely in need of scratching. I encourage him to  
scratch it however he sees fit. :-)


andrew


Re: HEADS UP: DERBY-330 commit is huge

2005-10-31 Thread Oyvind . Bakksjo

[EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] writes:

So I did the scripts in one commit, and now I've done the rest. Sorry if 
all you get to do today is run 'svn update'... Like I said, there's probably 
never a good time to do this, except it should have been done at the 
very beginning of this repository.


Transmitting file data 
.

Committed revision 329187.



Does that mean that I can close and resolve DERBY-330?


Yes.

--
Oyvind Bakksjo
Sun Microsystems, Database Technology Group
Trondheim, Norway
http://weblogs.java.net/blog/bakksjo/


Re: [jira] Created: (DERBY-646) In-memory backend storage support

2005-10-31 Thread Øystein Grøvlen
 SF == Stephen Fitch [EMAIL PROTECTED] writes:

SF Hi Mike,
SF the issue I'm having is I can't  find a way to tell the network server
SF what StorageFactory to use from the network client driver.


SF On the  embedded side  of things, I  just define a  new subsubprotocol
SF when I start java with:

SF 
-Dderby.subSubProtocol.memory=org.apache.derby.impl.io.MemoryStorageFactory
SF (alternatively I can change some of the engine code and register it as
SF a persistent service which results in the same issue)

I am a bit confused about why the use of StorageFactory is decided
when opening a connection.  Will it be possible for different
connections to use different StorageFactory within the same system?  I
guess different StorageFactory within the same database is not
possible.  In that case, why not make it a parameter when creating the
database?

Generally, I would think it would be useful for a single
connection/database to be able to use main-memory storage for some tables
and disk storage for others.

-- 
Øystein



[jira] Commented: (DERBY-587) Providing JDBC 4.0 support for derby

2005-10-31 Thread Rick Hillegas (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-587?page=comments#action_12356415 ] 

Rick Hillegas commented on DERBY-587:
-

Thanks for pointing out these issues, Satheesh. I didn't turn up on the 
published list until a couple months after I submitted my ICLA. I'm hoping that 
it will be sufficient for Narayanan to say that he has faxed in his ICLA. That 
is, I hope we don't have to wait another 2 months to contribute this patch.

 Providing JDBC 4.0 support for derby
 

  Key: DERBY-587
  URL: http://issues.apache.org/jira/browse/DERBY-587
  Project: Derby
 Type: New Feature
   Components: JDBC
 Versions: 10.2.0.0
 Reporter: V.Narayanan
 Assignee: V.Narayanan
 Priority: Minor
  Fix For: 10.2.0.0
  Attachments: jdbc4.0.sxw, jdbc4.diff



-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: [jira] Created: (DERBY-646) In-memory backend storage support

2005-10-31 Thread Stephen Fitch

Øystein Grøvlen wrote:

SF == Stephen Fitch [EMAIL PROTECTED] writes:



SF Hi Mike,
SF the issue I'm having is I can't  find a way to tell the network server
SF what StorageFactory to use from the network client driver.


SF On the  embedded side  of things, I  just define a  new subsubprotocol
SF when I start java with:

SF 
-Dderby.subSubProtocol.memory=org.apache.derby.impl.io.MemoryStorageFactory
SF (alternatively I can change some of the engine code and register it as
SF a persistent service which results in the same issue)

I am a bit confused about why the use of StorageFactory is decided
when opening a connection.  


When you open a connection, Derby attempts to boot the database (if it 
exists) or create a new one if specified. The StorageFactory needs to 
be determined before any creation of or access to the StorageFiles the 
database is stored on.



Will it be possible for different
connections to use different StorageFactory within the same system?


My understanding is there's one StorageFactory per database, and all 
connections to that database use that StorageFactory.



I guess different StorageFactory within the same database is not
possible.  


Not really with the current design as far as I can tell :/.

I think the current design allows for a different StorageFactory 
instance to be used for logs residing in a different directory than the 
default. I'm planning to look at allowing the database to be in-memory 
but logging to disk.


I'm also going to make my in-memory implementation automatically(?) 
import a database from disk into memory if it already exists.



In that case, why not make it a parameter when creating the
database?


I'd like to be able to import existing disk-based databases. So, rather 
than specifying it when we create the database, we specify it when 
booting the database.


Something like:
jdbc:derby:test;storageFactory=memory
jdbc:derby://host:port/test;storageFactory=memory

It could be coded dynamically like the current embedded client's syntax 
(jdbc:derby:[storagefactory:]database;attributes). This way you can 
specify whatever StorageFactory you want, even if it's not part of Derby.
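
For concreteness, a rough sketch of how the two approaches would look from 
client code. The storageFactory= attribute is only the proposal above, and the 
memory sub-subprotocol / MemoryStorageFactory class has to be supplied by the 
user; nothing below is shipped Derby functionality:

import java.sql.Connection;
import java.sql.DriverManager;

public class MemoryStoreExample {
    public static void main(String[] args) throws Exception {
        // Current sub-subprotocol route: register the custom StorageFactory
        // (equivalent to the -D flag quoted earlier) before the engine boots.
        System.setProperty("derby.subSubProtocol.memory",
                "org.apache.derby.impl.io.MemoryStorageFactory");
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        Connection embedded =
                DriverManager.getConnection("jdbc:derby:memory:test;create=true");

        // Proposed attribute route (illustrative syntax only, not implemented):
        //   jdbc:derby:test;storageFactory=memory
        //   jdbc:derby://host:port/test;storageFactory=memory
        embedded.close();
    }
}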





Stephen Fitch
Acadia University







Re: [jira] Commented: (DERBY-587) Providing JDBC 4.0 support for derby

2005-10-31 Thread Rick Hillegas
Does Derby or Apache  have a set of coding guidelines? Where would a new 
contributor find rules about copyright notices and @author tags? It 
would be helpful if we had a link to the rules in the 'Contribute Code 
or Documentation' section of the Community tab.


Thanks,
-Rick

Satheesh Bandaram (JIRA) wrote:

   [ http://issues.apache.org/jira/browse/DERBY-587?page=comments#action_12356249 ] 


Satheesh Bandaram commented on DERBY-587:
-

I noticed Narayanan doesn't have an ICLA signed with Apache.  Any reasonably 
sized contribution would require the contributor to have an ICLA signed and on file. 
A list of ICLAs can be found at: http://people.apache.org/~jim/committers.html. The process 
of starting an ICLA submission: http://www.apache.org/licenses/#clas

I think it is wise to hold off on checking this in, pending ICLA submission. I 
was so close... :(

Several new JAVA files don't have the ASF copyright notices at the top ... 
Isn't this required for all new JAVA files?

I also noticed an @author tag in the patch. While I am not sure what guidelines 
Apache or Derby follows, it may be best to remove @author tags. Geronimo has a 
policy against this tag: http://wiki.apache.org/geronimo/CodingStandards

One last minor one... The copyright notice should have 2005 for the file 
jdk16.java

 


Providing JDBC 4.0 support for derby


Key: DERBY-587
URL: http://issues.apache.org/jira/browse/DERBY-587
Project: Derby
   Type: New Feature
 Components: JDBC
   Versions: 10.2.0.0
   Reporter: V.Narayanan
   Assignee: V.Narayanan
   Priority: Minor
Fix For: 10.2.0.0
Attachments: jdbc4.0.sxw, jdbc4.diff

   




 





Re: Some idea about automatic checkpointing issue

2005-10-31 Thread Øystein Grøvlen
 RR == Raymond Raymond [EMAIL PROTECTED] writes:

RR Oystein wrote:
 
 I would like to suggest the following:
 - 1. The user may be able to configure a certain recovery time
 that Derby should try to satisfy.  (An appropriate default
 must be determined).
 - 2. During initialization of Derby, we run some measurement that
 determines the performance of the system and maps the
 recovery time into some X megabytes of log.
 - 3. A checkpoint is made by default every X megabytes of log.
 - 4. One tries to dynamically adjust the write rate of the
 checkpoint so that the writing takes an entire checkpoint
 interval.  (E.g., write Y pages, then pause for some time).
 - 5. If data reads or log writes (if the log is in the default location)
 start to have long response times, one can increase the
 checkpoint interval.  The user should be able to turn this
 feature off in case longer recovery times are not acceptable.
 
 Hope this rambling has some value,
 
 --
 Øystein
 
RR Thanks for Oystein's comments. I agree with them,
RR and I have another thought about it. To make it easier
RR to explain, I added sequence numbers to your comments.

RR In steps 3 and 4 I have another idea. Generally, we do checkpointing
RR from the earliest useful log record, which is determined by the
RR repPoint and the undoLWM, whichever is earlier, to the current
RR log instant (redoLWM), and then update the derby control
RR file (ref. http://db.apache.org/derby/papers/recovery.html).  

I am not sure I understand what you mean by "do checkpointing".  Are
you talking about writing the checkpoint log record to the log?

RR I agree with
RR you about spreading the writes out over the checkpoint interval, but the
RR trade-off is that we have to do recovery from the penultimate
RR checkpoint (am I right here? ^_^). If the log is long, it will take us
RR a long time to recover. 

From the perspective of recovery, it will still be the checkpoint
reflected in the log control file.  It is true that a new checkpoint
had probably been started when the crash occurred, but that may happen
today also.  It is less likely, but the principles are the same.

I agree that there will be more log to redo during recovery.  The
advantage with my proposal is that the recovery time will be more
deterministic, since it will be less dependent on how long it takes
to clean the page cache.  The average log size for recovery will
always be 1.5 checkpoint intervals with my proposal.  The maximum log
size will be 2 checkpoint intervals, and this is also true for the
current solution.  If the goal is to guarantee a maximum recovery
time, I think my proposal is better.  There is no point in reducing
performance in order to be able to do recovery in 30 seconds if the
user is willing to accept recovery times of 2 minutes. 
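
To make the mapping in steps 1-3 of my earlier proposal concrete, here is a
rough sketch; nothing below is existing Derby code, the names are made up, and
the 1.5-interval average is taken from the reasoning above:

public class CheckpointIntervalEstimator {

    /**
     * Map a user-configured recovery-time target to a checkpoint interval,
     * given a redo rate measured during initialization.
     *
     * @param targetRecoverySeconds recovery time the user is willing to accept
     * @param redoBytesPerSecond    log redo rate measured at boot
     * @return checkpoint interval in bytes of log
     */
    public static long checkpointIntervalBytes(double targetRecoverySeconds,
                                               double redoBytesPerSecond) {
        // On average ~1.5 checkpoint intervals of log must be redone after a
        // crash, so pick the interval such that 1.5 * interval / redoRate
        // equals the target recovery time.
        return (long) (targetRecoverySeconds * redoBytesPerSecond / 1.5);
    }

    public static void main(String[] args) {
        // Example: accept 30 seconds of recovery with redo running at 20 MB/s
        // => checkpoint every 400 MB of log.
        long interval = checkpointIntervalBytes(30.0, 20.0 * 1024 * 1024);
        System.out.println("Checkpoint every " + (interval / (1024 * 1024)) + " MB of log");
    }
}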

RR How about we update the derby control
RR file periodically instead of updating the control file when the whole
RR checkpoint is done? (E.g., write several pages; if we detect that the
RR system is busy, then we update the derby control file and pause for
RR some time, or we update the control file once every few
RR minutes.)

I guess that is possible, but in that case, you will need to have some
way of determining the redoLWM for the checkpoint.  It will no longer
be the current log instant when the checkpoint starts.  I guess you
can do this by either scanning the entire page cache or by keeping the
pages sorted by age.

RR That means we would do part of a checkpoint at a time if the system
RR becomes busy. In this way, if the system crashes, the last checkpoint
RR mark (the log address up to where the last checkpoint got) will be
RR closer to the tail of the log than if we update the control file when
RR the whole checkpoint is done. Maybe we can call it Incremental
RR Checkpointing.

Unless each checkpoint cleans the entire cache, the redoLWM may be
much older than the last checkpoint mark.  Hence, updating the
control file more often does not reduce recovery times by itself.
However, making sure that the oldest dirty pages are written at a
checkpoint should advance the redoLWM and reduce recovery times.
 
-- 
Øystein



Re: VOTE: Principles of sharing code

2005-10-31 Thread David W. Van Couvering
Thanks, Dan.  I would like to know if we need to have a vote on the 
overall principles of shared code or if people just want to see an 
example implementation.  I am fine either way, although personally I'd 
like to make sure we agree on the principles before I spend too much 
time on an implementation.


I would say if I see at least two committers besides me saying they want 
a vote on the principles, then I will send out a proposed wording, and 
then we can discuss and do a vote.  Otherwise, I will work on an initial 
implementation with some small set of messages migrated over to the 
shared component environment that demonstrates the framework, and submit 
*that* for a vote.


Thanks,

David

Daniel John Debrunner wrote:

Rick Hillegas wrote:




I hope we're not talking past one another here. It's so easy to do in
email. I think we may agree on the following points:

o There is an existing problem when you mix two Derby-enabled
applications in the same vm.

o David's proposal increases our customer's exposure to this problem.

However, we may part ways on the following points:

o How significant that extra exposure is.

o Whether the benefits of code sharing justify that extra exposure.



That's a good summary.

I thought the last set of principles (the ones without the 'must use
classloader') took a good approach to minimize the exposure.

I think we should move forward on shared code, I think we all know it's
needed at some point. If we reject shared code now, then how do we get
to text indexing, xpath/xquery etc. I don't want to waste my time
writing text search code when Lucene exists.

I think the principles are good, especially this one:

'This implementation will not have any significant impact on jar file
size or otherwise affect product distribution or usage. '

though, probably it should say 'Any implementation will ...', or 'Code
sharing will ...' since the vote is on the principles, not an
implementation.

I think, though, that we do need to address Kathey's concerns; her ideas
about logging different versions etc. are all useful. And we do need to
be ever vigilant on the usability of Derby.

Let's have some shared code, so that we can see what problems it creates
and then solve them.

Most likely any code we do affects usability in some way, and in some
cases we have to analyze the risk and see if the value outweighs the risk.

Dan.
PS. this post was brought to you by too much Peet's coffee in the
afternoon ...




Coding Guidelines: was [Fwd: [jira] Commented: (DERBY-587) Providing JDBC 4.0 support for derby]

2005-10-31 Thread Rick Hillegas

Re-posting with a more germane subject line. Cheers-Rick
---BeginMessage---
[ 
http://issues.apache.org/jira/browse/DERBY-587?page=comments#action_12356415 ] 

Rick Hillegas commented on DERBY-587:
-

Thanks for pointing out these issues, Satheesh. I didn't turn up on the 
published list until a couple months after I submitted my ICLA. I'm hoping that 
it will be sufficient for Narayanan to say that he has faxed in his ICLA. That 
is, I hope we don't have to wait another 2 months to contribute this patch.

 Providing JDBC 4.0 support for derby
 

  Key: DERBY-587
  URL: http://issues.apache.org/jira/browse/DERBY-587
  Project: Derby
 Type: New Feature
   Components: JDBC
 Versions: 10.2.0.0
 Reporter: V.Narayanan
 Assignee: V.Narayanan
 Priority: Minor
  Fix For: 10.2.0.0
  Attachments: jdbc4.0.sxw, jdbc4.diff



-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira

---End Message---


Coding guidelines

2005-10-31 Thread Rick Hillegas

Hm, let me try this again.

Where would a new contributor go to find Derby/Apache coding guidelines 
such as our policies around copyright statements and @author tags? I 
would expect to find a link to these policies under the Contribute Code 
or Documentation section of the Community tab. But I don't see anything 
relevant there.


Thanks,
-Rick


[jira] Commented: (DERBY-587) Providing JDBC 4.0 support for derby

2005-10-31 Thread Rick Hillegas (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-587?page=comments#action_12356422 ] 

Rick Hillegas commented on DERBY-587:
-

Hi Satheesh: Narayanan has faxed in his ICLA. He's on holiday (Diwali) right 
now. Would it be ok to clean up the copyright and tag issues later? If so, 
would you be kind enough to check in this patch today? Thanks!

 Providing JDBC 4.0 support for derby
 

  Key: DERBY-587
  URL: http://issues.apache.org/jira/browse/DERBY-587
  Project: Derby
 Type: New Feature
   Components: JDBC
 Versions: 10.2.0.0
 Reporter: V.Narayanan
 Assignee: V.Narayanan
 Priority: Minor
  Fix For: 10.2.0.0
  Attachments: jdbc4.0.sxw, jdbc4.diff



-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



codeline health

2005-10-31 Thread Rick Hillegas
I may have missed some messages. Last Friday the codeline was 
dis-arranged and derbyall tests were failing. Has the problem been 
fixed? Can we expect clean test runs from the current mainline?


Thanks,
-Rick


Re: POLL - Need for shared code

2005-10-31 Thread Francois Orsini
I would like to share some pieces of security logic (classes) which are
used by the client and the engine during DRDA user network authentication,
hence introducing a new common security package.

--francois

On 10/28/05, David W. Van Couvering [EMAIL PROTECTED] wrote:
Hi, all.  It is my belief that there is a current need for shared code,
not just future needs.  I'd like to test this belief.

Can those of you who are working on functionality that could use shared
code please send an item to the list describing what it is you want to
do and why you have a need for a shared code infrastructure?

Also, if you have a general opinion that it is important (or not) to
have shared code sooner than later, your views on this would be much
appreciated.

Thanks,
David


[jira] Created: (DERBY-664) seg0 hardcoded in code

2005-10-31 Thread JIRA
seg0 hardcoded in code
--

 Key: DERBY-664
 URL: http://issues.apache.org/jira/browse/DERBY-664
 Project: Derby
Type: Bug
  Components: Store  
Versions: 10.2.0.0
 Environment: Any
Reporter: Øystein Grøvlen
Priority: Trivial


In a few files the name of the default data segment, seg0, is hard-coded.  I 
have seen this in RAMTransaction.java and BaseDataFileFactory.java.
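
For illustration, the kind of cleanup this asks for is simply to put the
literal behind a single named constant; the class and constant names below are
hypothetical, not what the eventual fix will necessarily use:

public final class SegmentNames {

    /** Name of the default data segment directory. */
    public static final String DEFAULT_SEGMENT = "seg0";

    private SegmentNames() {
        // no instances
    }
}

// Call sites in RAMTransaction.java and BaseDataFileFactory.java would then
// build paths with SegmentNames.DEFAULT_SEGMENT instead of a hard-coded "seg0".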

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Created: (DERBY-665) Remove backup(File ...) methods

2005-10-31 Thread JIRA
Remove backup(File ...) methods
---

 Key: DERBY-665
 URL: http://issues.apache.org/jira/browse/DERBY-665
 Project: Derby
Type: Improvement
  Components: Store  
Versions: 10.2.0.0
 Environment: Any
Reporter: Øystein Grøvlen
Priority: Minor


The code contains backup methods for specifying the backup directory both 
as a String and as a File parameter.  Only the String versions are currently 
used.  The File versions should be removed to avoid duplication of code etc.
Examples of such methods are:

BasicDatabase.backup
BasicDatabase.backupAndEnableLogArchiveMode
RAMAccessManager.backup
RAMAccessManager.backupAndEnableLogArchiveMode
RawStore.backupAndEnableLogArchiveMode

plus corresponding interfaces.
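
For illustration, a minimal sketch of the duplication and the intended
direction; signatures are simplified (the real methods take more context and
throw StandardException), and the interim delegation shown is just one way to
keep behavior while the File variants are phased out:

import java.io.File;

abstract class BackupCapable {

    /** String variant: the one actually used by the system procedures. */
    public abstract void backup(String backupDir) throws Exception;

    /** File variant: unused leftover from old Cloudscape versions (DERBY-665). */
    public void backup(File backupDir) throws Exception {
        // Interim delegation; the method (and its interface declarations)
        // should eventually be removed outright.
        backup(backupDir.getPath());
    }
}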

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Coding guidelines

2005-10-31 Thread David W. Van Couvering
I can't find the email, but I could have sworn when I first started that 
I was referred to the Java coding guidelines at


http://java.sun.com/docs/codeconv/html/CodeConvTOC.doc.html

If we agree this is as good as any other standard, it would be good to 
adopt it as our own coding guideline and publish this somewhere...


I do know one key standard: four-space tabs.  We seem to be fairly 
inconsistent about everything else, although that said most of the code 
is fairly readable.


David

Rick Hillegas wrote:

Hm, let me try this again.

Where would a new contributor go to find Derby/Apache coding guidelines 
such as our policies around copyright statements and @author tags? I 
would expect to find a link to these policies under the Contribute Code 
or Documentation section of the Community tab. But I don't see anything 
relevant there.


Thanks,
-Rick



Re: codeline health

2005-10-31 Thread Daniel John Debrunner
David W. Van Couvering wrote:

 The latest tinderbox testrun, at
 
 http://www.multinet.no/~solberg/public/Apache/TinderBox_Derby/Limited/testSummary-previous.html
 
 
 shows derbyall with 12 failures
 
 http://www.multinet.no/~solberg/public/Apache/TinderBox_Derby/testlog/SunOS-5.10_i86pc-i386/329372-derbyall_diff.txt
 
 
 these diffs are consistent with what I'm seeing in my own test run.
 
 Do I need to log a JIRA issue or is someone going to work on fixing these?

Typically it's the originator of the change that does the fixing up.

Aren't these due to the changes to DERBY-330?

And I think Tomohito entered DERBY-663 for this.


If no one fixes up problems from a commit, then any committer can vote
-1 and revert the change. Though in this case, reverting DERBY-330 would
cause more problems than fixing them, due to the merges it forces on
anyone with modified files.

The tests all pass on Windows, btw.
Dan.



Re: codeline health

2005-10-31 Thread Rick Hillegas

Thanks, David.

I don't think this requires a JIRA issue. The committers have broken the 
codeline. They know who they are. The committers who have recently 
committed need to exchange email among themselves, determine who broke 
the codeline, determine who among them is going to fix it, and then fix 
the codeline. Quickly, please. The codeline has been broken for three 
days now. This is pretty bad.


-Rick

David W. Van Couvering wrote:


The latest tinderbox testrun, at

http://www.multinet.no/~solberg/public/Apache/TinderBox_Derby/Limited/testSummary-previous.html 



shows derbyall with 12 failures

http://www.multinet.no/~solberg/public/Apache/TinderBox_Derby/testlog/SunOS-5.10_i86pc-i386/329372-derbyall_diff.txt 



these diffs are consistent with what I'm seeing in my own test run.

Do I need to log a JIRA issue or is someone going to work on fixing 
these?


Thanks,

David

Rick Hillegas wrote:

I may have missed some messages. Last Friday the codeline was 
dis-arranged and derbyall tests were failing. Has the problem been 
fixed? Can we expect clean test runs from the current mainline?


Thanks,
-Rick






Re: codeline health

2005-10-31 Thread David W. Van Couvering
To be fair (a) it works fine on Windows, which is probably the platform 
the committer(s) tested with and (b) it has only been one working day.


But I agree this needs to be fixed ASAP.  As you know I have your 
checkin in the queue and I am uncomfortable checking in until things get 
cleaned up a bit.


David

Rick Hillegas wrote:

Thanks, David.

I don't think this requires a JIRA issue. The committers have broken the 
codeline. They know who they are. The committers who have recently 
committed need to exchange email among themselves, determine who broke 
the codeline, determine who among them is going to fix it, and then fix 
the codeline. Quickly, please. The codeline has been broken for three 
days now. This is pretty bad.


-Rick

David W. Van Couvering wrote:


The latest tinderbox testrun, at

http://www.multinet.no/~solberg/public/Apache/TinderBox_Derby/Limited/testSummary-previous.html 



shows derbyall with 12 failures

http://www.multinet.no/~solberg/public/Apache/TinderBox_Derby/testlog/SunOS-5.10_i86pc-i386/329372-derbyall_diff.txt 



these diffs are consistent with what I'm seeing in my own test run.

Do I need to log a JIRA issue or is someone going to work on fixing 
these?


Thanks,

David

Rick Hillegas wrote:

I may have missed some messages. Last Friday the codeline was 
dis-arranged and derbyall tests were failing. Has the problem been 
fixed? Can we expect clean test runs from the current mainline?


Thanks,
-Rick








Re: [jira] Commented: (DERBY-239) Need a online backup feature that does not block update operations when online backup is in progress.

2005-10-31 Thread Øystein Grøvlen
 ST == Suresh Thalamati [EMAIL PROTECTED] writes:

ST Hi Øystein,
ST Thanks for reviewing the patch. My answers are in-line...

ST Øystein Grøvlen (JIRA) wrote:

...

 * Intuitively, it seems wrong to hard-code seg0, but I see that
 this is done all over the code.


ST Could you please file a JIRA entry for this one. I noticed it too;
ST hard coding is definitely not the correct way here. One of us can
ST clean this up.

Done. Derby-664.

...

 * backup(File ...) seems like it would create an endless recursion
 if called.  Fortunately, it seems like it never will be
 called. Why do we need the methods with a File parameter instead
 of a String?  The system procedures use the String variant.
 Maybe we could just remove the File variant?

ST I think they are leftovers from the old Cloudscape versions. The File
ST variant is not supported any more; these can be cleaned up. There
ST should be some File variant calls in the access layer also. Could you
ST please file a Jira for this cleanup, so that these won't be forgotten
ST again.

Done. Derby-665.

...

 BaseDataFileFactory.java:

 * I do not think basing which files to back up on the contents of the
 seg0 directory is very robust.  What if someone by accident has
 written a file with a name that matches the pattern you are
 looking for?  Then I would think you may get a very strange
 error message that may not be easy to resolve.  Could this not
 be based on some system catalog?


ST I think users should not be creating/deleting any files in SEG0 to
ST start with. Pattern matching is just a simple precaution. I thought of
ST scanning the catalogs to find the containers to back up, but could not
ST convince myself that it is required, at least for now.

I agree that users should not mess with seg0.  However, I think part
of being an easy-to-use database is being able to give meaningful
error messages when users mess things up.

ST If you think scanning the catalogs is more robust, this enhancement
ST can easily be done at a later point in time.

Ok. I will see if I feel the itch.  8-)

 Another scenario is if someone
 by accident deletes a file for a table that is not accessed very
 often.  A later backup will then not detect that this file is
 missing.  Since the backup is believed to be successful, the
 latest backup of this file may be deleted.
 

ST Backup is as good as the database in this case :-) If users access the
ST table whose file was deleted from the backup DB, it will fail with a
ST 'no container' error, as it will on the main database.


ST Even with the catalog approach, backup can only throw a warning
ST about the deleted container. Once the user deletes a container file,
ST there is no way out; users cannot even drop that table. I don't think
ST making backup fail forever when this scenario occurs will be
ST acceptable to the users.

I agree, but detecting it on backup gives the user the option of
recovering the table by restoring the previous backup.

 FileContainer.java:
 * I cannot find anything backup-specific about getPageForBackup(), so I
 think a more general name would be better (e.g., getLatchedPage).
 


ST Yes, currently this routine does not have any backup-specific stuff
ST in it. I would like to get the page from the cache with a
ST specific weight for the backup page in future, when the weight-based
ST cache mechanism is implemented.

In that case, I would suggest that the weight be a parameter to the
method, not part of the name.  
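
Something like the following, where every name is hypothetical and not an
actual Derby interface; the point is only that the caller states the weight
rather than baking the purpose into the method name:

interface PageSource {

    /** Placeholder for Derby's latched-page handle. */
    interface LatchedPage {
        void unlatch();
    }

    /**
     * Fetch and latch a page; cacheWeight hints how strongly the cache
     * should retain it (backup would pass a low weight, for example).
     */
    LatchedPage getLatchedPage(long pageNumber, int cacheWeight);
}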

 RAFContainer.java:
 * The changes to this file seem not to be quite in line with some
 of the original design philosophies.  I am not sure that is
 necessarily bad, but it would be nice to hear the arguments for
 doing it this way.  More specifically:
 - While RAFContainer so far has used the
 StorageRandomAccessFile/StorageFile abstractions, backup
 uses RandomAccessFile/File directly.  Is there a particular
 reason for that?

ST Yes. Currently there is no API available for the users to specify the
ST type of StorageFactory to use for the backup. It would be a good
ST enhancement to add later.


ST The online backup implementation is doing what the current backup does;
ST i.e. use the disk IO calls directly, instead of the database
ST StorageFactory. I think one reason to do it this way is that if a
ST database is using an in-memory StorageFactory, users can back up the
ST database onto disk. If the StorageFactory used by the database is
ST java.io.*, then backup is doing the same without going through the
ST StorageFactory interface.

Makes sense.  If one wants a StorageFactory for backup, one can add that
later.

 - In order to be able to backup a page, 

Re: codeline health

2005-10-31 Thread Daniel John Debrunner
Rick Hillegas wrote:

 Thanks, David.
 
 I don't think this requires a JIRA issue. The committers have broken the
 codeline. They know who they are. The committers who have recently
 committed need to exchange email among themselves, determine who broke
 the codeline, determine who among them is going to fix it, and then fix
 the codeline.

Why exchange email among themselves? This is open source;
communication, decisions etc. are made on the derby-dev list.

Dan.




Re: codeline health

2005-10-31 Thread Rick Hillegas

Hi Dan,

I'm not particular about where the discussion happens. Derby-dev is fine 
by me. I just want the discussion to start soon.


Regards,
-Rick

Daniel John Debrunner wrote:


Rick Hillegas wrote:

 


Thanks, David.

I don't think this requires a JIRA issue. The committers have broken the
codeline. They know who they are. The committers who have recently
committed need to exchange email among themselves, determine who broke
the codeline, determine who among them is going to fix it, and then fix
the codeline.
   



Why exchange email among themselves? This is open source;
communication, decisions etc. are made on the derby-dev list.

Dan.


 





[jira] Updated: (DERBY-239) Need a online backup feature that does not block update operations when online backup is in progress.

2005-10-31 Thread Suresh Thalamati (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-239?page=all ]

Suresh Thalamati updated DERBY-239:
---

Attachment: onlinebackup_2.diff

Fix to the problem found by Øystein while reviewing the previous online 
backup patch (onlinebackup_1.diff).
The backup-of-a-container code was doing a seek incorrectly on the file container 
instead of the backup file. 

Tests: All tests passed with jdk142 on Windows XP. 

It would be great if someone could commit this patch. 


Thanks
-suresht
  

 Need a online backup feature  that does not block update operations   when 
 online backup is in progress.
 

  Key: DERBY-239
  URL: http://issues.apache.org/jira/browse/DERBY-239
  Project: Derby
 Type: New Feature
   Components: Store
 Versions: 10.1.1.0
 Reporter: Suresh Thalamati
 Assignee: Suresh Thalamati
  Attachments: onlinebackup.html, onlinebackup_1.diff, onlinebackup_2.diff

 Currently Derby allows users to perform online backups using the 
 SYSCS_UTIL.SYSCS_BACKUP_DATABASE() procedure, but while the backup is in 
 progress, update operations are temporarily blocked, while read operations can 
 still proceed.
 Blocking update operations can be a real issue, specifically in client/server 
 environments, because user requests will be blocked for a long time if a 
 backup is in progress on the server.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Resolved: (DERBY-582) Dynamic parameter should be allowed to be the operand of unary operator -. Derby throws exception 42X36: The '-' operator is not allowed to take a ? parameter as an oper

2005-10-31 Thread Mamta A. Satoor (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-582?page=all ]
 
Mamta A. Satoor resolved DERBY-582:
---

Fix Version: 10.2.0.0
 Resolution: Fixed

Satheesh checked in the fix for this in 10.2 codeline with revision r329295.

 Dynamic parameter should be allowed to be the operand of unary operator -. 
 Derby throws exception 42X36: The '-' operator is not allowed to take a ? 
 parameter as an operand.
 

  Key: DERBY-582
  URL: http://issues.apache.org/jira/browse/DERBY-582
  Project: Derby
 Type: Bug
   Components: JDBC
 Versions: 10.2.0.0
 Reporter: Mamta A. Satoor
 Assignee: Mamta A. Satoor
  Fix For: 10.2.0.0
  Attachments: Derby582UnaryDynamic092605.txt, 
 Derby582UnaryMinusDynamic104005.txt, Derby582UnaryParameter101105.txt

 A simple test program which uses a dynamic parameter for the unary operator - 
 fails with an exception. Following is a snippet of the code:
   ps = con.prepareStatement("select * from t1 where c11 = -?");
   ps.setInt(1,1);
   rs = ps.executeQuery();
 The prepareStatement call fails with the following exception:
 SQLSTATE(42X36): ERROR 42X36: The '-' operator is not allowed to take a ? 
 parameter as an operand.
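
 For reference, a minimal standalone version of the repro; the class name and
 the embedded database name are illustrative only, and with this fix in place
 the prepareStatement call should no longer raise 42X36:

 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.Statement;

 public class UnaryMinusParam {
     public static void main(String[] args) throws Exception {
         Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
         Connection con = DriverManager.getConnection("jdbc:derby:myDB;create=true");
         Statement s = con.createStatement();
         s.executeUpdate("create table t1 (c11 int)");
         s.executeUpdate("insert into t1 values (-1)");

         // The statement from the repro: a dynamic parameter under unary minus.
         PreparedStatement ps = con.prepareStatement("select * from t1 where c11 = -?");
         ps.setInt(1, 1);                 // -? binds to -1
         ResultSet rs = ps.executeQuery();
         while (rs.next()) {
             System.out.println("c11 = " + rs.getInt(1));
         }
         rs.close();
         ps.close();
         con.close();
     }
 }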

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



[jira] Closed: (DERBY-582) Dynamic parameter should be allowed to be the operand of unary operator -. Derby throws exception 42X36: The '-' operator is not allowed to take a ? parameter as an operan

2005-10-31 Thread Mamta A. Satoor (JIRA)
 [ http://issues.apache.org/jira/browse/DERBY-582?page=all ]
 
Mamta A. Satoor closed DERBY-582:
-


 Dynamic parameter should be allowed to be the operand of unary operator -. 
 Derby throws exception 42X36: The '-' operator is not allowed to take a ? 
 parameter as an operand.
 

  Key: DERBY-582
  URL: http://issues.apache.org/jira/browse/DERBY-582
  Project: Derby
 Type: Bug
   Components: JDBC
 Versions: 10.2.0.0
 Reporter: Mamta A. Satoor
 Assignee: Mamta A. Satoor
  Fix For: 10.2.0.0
  Attachments: Derby582UnaryDynamic092605.txt, 
 Derby582UnaryMinusDynamic104005.txt, Derby582UnaryParameter101105.txt

 A simple test program which uses a dynamic parameter for the unary operator - 
 fails with an exception. Following is a snippet of the code:
   ps = con.prepareStatement("select * from t1 where c11 = -?");
   ps.setInt(1,1);
   rs = ps.executeQuery();
 The prepareStatement call fails with the following exception:
 SQLSTATE(42X36): ERROR 42X36: The '-' operator is not allowed to take a ? 
 parameter as an operand.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Coding guidelines

2005-10-31 Thread Satheesh Bandaram
I thought Rick was asking about Apache/Derby-specific guidelines and/or
policies. Anyway, Rick, you have seen Dan's posting. Copyright headers
being a requirement for the ASL, I will have to wait for another patch... Sorry!

Satheesh

David W. Van Couvering wrote:

 I can't find the email, but I could have sworn when I first started
 that I was referred to the Java coding guidelines at

 http://java.sun.com/docs/codeconv/html/CodeConvTOC.doc.html

 If we agree this is as good as any other standard, it would be good to
 adopt it as our own coding guideline and publish this somewhere...

 I do know one key standard: four-space tabs.  We seem to be fairly
 inconsistent about everything else, although that said most of the
 code is fairly readable.

 David

 Rick Hillegas wrote:

 Hm, let me try this again.

 Where would a new contributor go to find Derby/Apache coding
 guidelines such as our policies around copyright statements and
 @author tags? I would expect to find a link to these policies under
 the Contribute Code or Documentation section of the Community tab.
 But I don't see anything relevant there.

 Thanks,
 -Rick




[jira] Commented: (DERBY-587) Providing JDBC 4.0 support for derby

2005-10-31 Thread Satheesh Bandaram (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-587?page=comments#action_12356434 ] 

Satheesh Bandaram commented on DERBY-587:
-

As Dan (PMC member) said, we have to wait for the copyright notices for the 
files... Also, it would be good if Narayanan himself can post whether or not he 
FAXed his ICLA. 

 Providing JDBC 4.0 support for derby
 

  Key: DERBY-587
  URL: http://issues.apache.org/jira/browse/DERBY-587
  Project: Derby
 Type: New Feature
   Components: JDBC
 Versions: 10.2.0.0
 Reporter: V.Narayanan
 Assignee: V.Narayanan
 Priority: Minor
  Fix For: 10.2.0.0
  Attachments: jdbc4.0.sxw, jdbc4.diff



-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Please sign ICLAs with Apache ...

2005-10-31 Thread Satheesh Bandaram




... if you plan to submit major changes or some number of small
modifications. I had to stop submissions a couple of times because, at
the last minute, I found ICLAs were missing. Since we have several new
developers, I thought I would recycle this old advice. It is good for
all folks registered as derby-developers to consider this step.

ICLA information can be found at: http://www.apache.org/licenses/#clas

Satheesh






[jira] Commented: (DERBY-85) NPE when creating a trigger on a table and default schema doesn't exist.

2005-10-31 Thread Rick Hillegas (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-85?page=comments#action_12356438 ] 

Rick Hillegas commented on DERBY-85:


The patch itself looks good. I'm running derbyall now. The patch, however, 
needs to add a regression test case to one of the language tests to verify that 
the bug is fixed.
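
For what it's worth, a rough sketch of what such a test case might exercise,
distilled from the repro in the issue below; the class and database names are
illustrative, and the real test would of course go into the existing language
test harness:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class TriggerDefaultSchemaRepro {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        // Connect as a user whose default schema (SOMEUSER) does not exist yet.
        Connection con = DriverManager.getConnection(
                "jdbc:derby:myDB;create=true;user=someUser;password=somePwd");
        Statement s = con.createStatement();
        // Create the table in a different, explicitly named schema.
        s.executeUpdate("create table itko.t1 (i int)");
        // Before the fix this threw XJ001 (NullPointerException); it should now succeed.
        s.executeUpdate("create trigger trig1 after update on itko.t1 "
                + "for each row mode db2sql select * from sys.systables");
        s.close();
        con.close();
    }
}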

 NPE when creating a trigger on a table and default schema doesn't exist.
 

  Key: DERBY-85
  URL: http://issues.apache.org/jira/browse/DERBY-85
  Project: Derby
 Type: Bug
   Components: SQL
 Versions: 10.0.2.0
 Reporter: A B
 Assignee: Dyre Tjeldvoll
  Attachments: derby-85-notabs.diff, derby-85.diff, derby-85.stat, 
 derbyall_report.txt

 BACKGROUND:
 When connecting to a Derby db with a user id and password, the default schema 
 is USER.  For example, if I connect with:
 ij connect 'jdbc:derby:myDB;user=someUser;password=somePwd';
 then the default schema is SOMEUSER.
 PROBLEM:
 It turns out that if a table t1 exists in a non-default schema and the 
 default schema (in this case, SOMEUSER) doesn't exist yet (because no 
 objects have been created in that schema), then attempts to create a trigger 
 on t1 using its qualified name will lead to a null pointer exception in the 
 Derby engine.
 REPRO:
 In ij:
 -- Create database with default schema SOMEUSER.
 ij connect 'jdbc:derby:myDB;create=true;user=someUser;password=somePwd';
 -- Create table t1 in a non-default schema; in this case, call it ITKO.
 ij create table itko.t1 (i int);
 0 rows inserted/updated/deleted
 -- Now schema ITKO exists, and T1 exists in schema ITKO, but default schema 
 SOMEUSER does NOT exist, because we haven't created any objects in that 
 schema yet.
 -- So now we try to create a trigger in the ITKO (i.e. the non-default) 
 schema...
 ij create trigger trig1 after update on itko.t1 for each row mode db2sql 
 select * from sys.systables;
 ERROR XJ001: Java exception: ': java.lang.NullPointerException'.
 A look at the derby.log file shows the stack trace given below.  In a word, 
 it looks like the compilation schema field of SYS.SYSTRIGGERS isn't getting 
 set, and so it ends up being null.  That causes the NPE in subsequent 
 processing...
 java.lang.NullPointerException
   at 
 org.apache.derby.impl.sql.catalog.SYSSTATEMENTSRowFactory.makeSYSSTATEMENTSrow(SYSSTATEMENTSRowFactory.java:200)
   at 
 org.apache.derby.impl.sql.catalog.DataDictionaryImpl.addSPSDescriptor(DataDictionaryImpl.java:2890)
   at 
 org.apache.derby.impl.sql.execute.CreateTriggerConstantAction.createSPS(CreateTriggerConstantAction.java:354)
   at 
 org.apache.derby.impl.sql.execute.CreateTriggerConstantAction.executeConstantAction(CreateTriggerConstantAction.java:258)
   at 
 org.apache.derby.impl.sql.execute.MiscResultSet.open(MiscResultSet.java:56)
   at 
 org.apache.derby.impl.sql.GenericPreparedStatement.execute(GenericPreparedStatement.java:366)
   at 
 org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(EmbedStatement.java:1100)
   at 
 org.apache.derby.impl.jdbc.EmbedStatement.execute(EmbedStatement.java:509)
   at 
 org.apache.derby.impl.jdbc.EmbedStatement.execute(EmbedStatement.java:467)
   at org.apache.derby.impl.tools.ij.ij.executeImmediate(ij.java:299)
   at org.apache.derby.impl.tools.ij.utilMain.doCatch(utilMain.java:433)
   at org.apache.derby.impl.tools.ij.utilMain.go(utilMain.java:310)
   at org.apache.derby.impl.tools.ij.Main.go(Main.java:210)
   at org.apache.derby.impl.tools.ij.Main.mainCore(Main.java:176)
   at org.apache.derby.impl.tools.ij.Main14.main(Main14.java:56)
   at org.apache.derby.tools.ij.main(ij.java:60)

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: Please sign ICLAs with Apache ...

2005-10-31 Thread Jean T. Anderson

Satheesh Bandaram wrote:
... if you plan to submit major changes or some number of small 
modifications. I had to stop submissions a couple of times because, at the 
last minute, I found ICLAs were missing. Since we have several new 
developers, I thought I would recycle this old advice. It is good for 
all folks registered as *derby-developers* to consider this step.


ICLA information can be found at: http://www.apache.org/licenses/#clas

Satheesh



And the page with the fax # is here:

http://www.apache.org/foundation/contact.html

My icla was lost in the shuffle at the beginning -- faxing doesn't 
always work. Additionally, sending a hard copy by snail mail might be a good 
idea -- and seems to be recommended by 
http://www.apache.org/licenses/#clas .


If your icla seems lost, I'd start by sending email to 
[EMAIL PROTECTED], which is the general address listed on 
http://www.apache.org/foundation/contact.html . Whoever responds to 
that email may route you to somebody else.


 -jean



Re: [jira] Commented: (DERBY-587) Providing JDBC 4.0 support for derby

2005-10-31 Thread Jean T. Anderson

Satheesh Bandaram (JIRA) wrote:
[ http://issues.apache.org/jira/browse/DERBY-587?page=comments#action_12356434 ] 


Satheesh Bandaram commented on DERBY-587:
-

As Dan (PMC member) said, we have to wait for the copyright notices for the files... Also, it would be good if Narayanan himself can post whether or not he FAXed his ICLA. 


I suggest that Narayanan re-fax to the number at 
http://www.apache.org/foundation/contact.html , then send followup email 
to [EMAIL PROTECTED] asking whom to contact to verify it was actually 
received, mentioning that the previous attempt to fax seems to have been 
lost. (Perhaps also mail hard copy.) If Narayanan can report 
verification that it was received, I personally would have no problem 
with Satheesh committing the patch, provided any remaining issues such 
as the copyright notices are addressed.


I have had very little success with faxes. I faxed my speaker agreement 
for ApacheCon three times last week, and *one* of those made it through. 
So successful transmission from your end doesn't necessarily mean it was 
successful.


 -jean



Re: [jira] Reopened: (DERBY-644) Eclipse UI plug-in zip file for the 10.1.2 snapshot release

2005-10-31 Thread Susan Cline
Hi Dan,

I'll look into this. There are a few items I want to sort out with the plug-ins. I'd like to automate the build, possibly create an Eclipse update site at Apache and also ask folks about the current packaging of the plug-ins. We've had a few queries about packaging the core plug-in a little bit differently.

Susan

Daniel John Debrunner [EMAIL PROTECTED] wrote:

Andrew McIntyre wrote:
 I've been including the Eclipse plugins as part of the official release and posted on the downloads page, since the code for the plugins lives in the Derby code tree and is straightforward to build, though the UI plugin build is not automated.

I guess that was the part I was missing, the not-automated build. I hope someone will scratch that itch one day.

Thanks,
Dan.

[jira] Created: (DERBY-666) Enhance derby.locks.deadlockTrace to print stack traces for all threads involved in a deadlock

2005-10-31 Thread Bryan Pendleton (JIRA)
Enhance derby.locks.deadlockTrace to print stack traces for all threads 
involved in a deadlock
--

 Key: DERBY-666
 URL: http://issues.apache.org/jira/browse/DERBY-666
 Project: Derby
Type: Improvement
  Components: Store  
Versions: 10.1.1.0
Reporter: Bryan Pendleton
Priority: Minor


I was reading http://www.linux-mag.com/content/view/2134/ (good article, btw!), 
and it says:

   The next two properties are needed to diagnose concurrency (locking and 
 deadlock) problems.

  *derby.locks.monitor=true logs all deadlocks that occur in the system.
  *derby.locks.deadlockTrace=true log a stack trace of all threads 
 involved in lock-related rollbacks.

It seems that, in my environment, the deadlockTrace property does not log a 
stack trace of *all* threads involved in the deadlock.

Instead, it only logs a stack trace of the *victim* thread involved in the 
deadlock.

I think it would be very useful if the derby.locks.deadlockTrace setting could 
in fact log a stack trace of all involved threads.

In a posting to derby-dev, Mike Matrigali noted that an earlier implementation 
of a similar feature had to be removed because it was too expensive in both 
time and space, but he suggested that there might be several possible ways to 
implement this in an acceptably efficient manner:

 A long time ago there used to be room in each lock to point at a
 stack trace for each lock, but that was removed to optimize the size
 of the lock data structure which can have many objects outstanding.
 And creating and storing the stack for every lock was incredibly slow
 and just was not very useful for any very active application.  I think
 I was the only one who ever used it.

 The plan was sometime to add a per user data structure which could be
 filled in when it was about to wait on a lock, which would give most of what 
 is interesting in a deadlock.
 
 The current deadlockTrace is meant to dump the lock table out to derby.log 
 when a deadlock is encountered.
 
 I agree getting a dump of all stack traces would be very useful, and
 with the later jvm debug interfaces may now be possible - in earlier
 JVM's there weren't any java interfaces to do so.  Does anyone have
 the code to donate to dump all thread stacks to a buffer?

Mike also suggested a manual technique as a workaround; it would be useful to 
put this into the documentation somewhere, perhaps on the page which documents 
derby.locks.deadlockTrace? Here's Mike's suggestion:

 What I do if I can reproduce easily is try to catch the wait by
 hand and then, depending on the environment, either send the magic
 signal or hit ctrl-break in the server window, which will send the
 JVM-specific thread dumps to derby.log.

The magic signal, btw, is 'kill -QUIT', at least with Sun JVMs in my experience.
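
For convenience, the two properties from the article can be put in
derby.properties in the system directory, or set as JVM system properties
before the engine boots; a minimal sketch of the latter (the class name is
made up):

public class DeadlockTraceSetup {
    public static void main(String[] args) throws Exception {
        // Log all deadlocks that occur in the system.
        System.setProperty("derby.locks.monitor", "true");
        // Dump the lock table and the victim's stack trace on lock-related rollbacks.
        System.setProperty("derby.locks.deadlockTrace", "true");

        // Boot the engine only after the properties are set.
        Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
        // ... open connections and run the workload that deadlocks ...
    }
}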


-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira



Re: [jira] Commented: (DERBY-239) Need a online backup feature that does not block update operations when online backup is in progress.

2005-10-31 Thread Øystein Grøvlen

I agree that option 1 is preferable.  I do not think there is
any need for transactional behavior for backup.  

--
Øystein

 ST( == Suresh Thalamati (JIRA) derby-dev@db.apache.org writes:

ST( [ 
http://issues.apache.org/jira/browse/DERBY-239?page=comments#action_12356240 ] 
ST( Suresh Thalamati commented on DERBY-239:
ST( 

ST( What to do if backup is started in a transaction that already has
ST( unlogged operations executed?

ST( In previous discussions about online backup, it was concluded that
ST( existing backup procedure calls will WAIT for the transaction with
ST( unlogged operations to commit before proceeding with the backup. One
ST( issue that was missing from the discussion was what to do if the user
ST( starts a backup in the same transaction that has unlogged operations
ST( executed before the backup call. WAIT will not be an acceptable option
ST( here, because the backup call will wait forever.

ST( I can think of two ways this issue can be addressed:

ST( 1) Add a restriction that backup procedures can only be called in a
ST(    brand NEW transaction, and also implicitly commit the backup
ST(    transaction at the end of the backup. Commit is not required as
ST(    such to solve this problem, but it would be cleaner because backup
ST(    itself is not a rollback-able operation.

ST( 2) Make backup procedures fail if the transaction that it is started
ST(    in contains unlogged operations.

ST( I am inclined towards implementing the first option. Any
ST( comments/suggestions will be appreciated.

ST( Thanks
ST( -suresht


 Need a online backup feature  that does not block update operations   
when online backup is in progress.
 

 
 Key: DERBY-239
 URL: http://issues.apache.org/jira/browse/DERBY-239
 Project: Derby
 Type: New Feature
 Components: Store
 Versions: 10.1.1.0
 Reporter: Suresh Thalamati
 Assignee: Suresh Thalamati
 Attachments: onlinebackup.html, onlinebackup_1.diff
 
 Currently Derby allows users to perform online backups using the
SYSCS_UTIL.SYSCS_BACKUP_DATABASE() procedure, but while the backup is in
progress, update operations are temporarily blocked, while read operations can
still proceed.
 Blocking update operations can be a real issue, specifically in client/server
environments, because user requests will be blocked for a long time if a
backup is in progress on the server.

ST( -- 
ST( This message is automatically generated by JIRA.
ST( -
ST( If you think it was sent incorrectly contact one of the administrators:
ST(http://issues.apache.org/jira/secure/Administrators.jspa
ST( -
ST( For more information on JIRA, see:
ST(http://www.atlassian.com/software/jira




-- 
Øystein



ASL

2005-10-31 Thread Rick Hillegas
I think that a reasonable first-time contributor could be confused by 
Apache's rules for including copyright notices 
(http://www.apache.org/dev/apply-license.html#new). Apache advises us to 
include a short copyright notice in each source file (code and 
documentation, but excluding the LICENSE and NOTICE files). The 
definition of source and documentation is a little vague, although it 
seems to include LICENSE and NOTICE files, which are immediately and 
happily excluded.
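
For concreteness, the short notice in question is a comment block of roughly 
the following shape at the top of each .java file; the exact first line Derby 
uses may differ, while the body is the standard Apache License 2.0 boilerplate:

/*
   Copyright 2005 The Apache Software Foundation or its licensors, as applicable.

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
*/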


So what constitutes source and documentation? A reasonable person might 
suppose these terms to include every file under various subversion roots 
including https://svn.apache.org/repos/asf/db/derby/code/trunk and 
https://svn.apache.org/repos/asf/db/derby/docs/trunk. But a quick glance 
at our source tree indicates that this is not what we intend. We don't 
seem to include copyright notices in:


o Localized message files. These really look like a kind of source code 
to me.


o Other properties files used to control configurations and tests.

o Ant build scripts.

o Documentation on how to build and test Derby.

Where do we state our rules about which files require copyright notices? 
Is this the implicit rule:


o Only files with the extension .java require copyright notices.

Or should a first-time contributor apply some other implicit rules:

o When creating a new subversion controlled file, first look for an 
existing file with the same extension. If the existing file you picked 
has a copyright notice, then include a copyright notice in your new file.


o If your new file has a completely novel extension and there's no 
corresponding file under source control, then do what seems reasonable 
to you.


Re: [jira] Commented: (DERBY-239) Need a online backup feature that does not block update operations when online backup is in progress.

2005-10-31 Thread Øystein Grøvlen
 MM == Mike Matrigali [EMAIL PROTECTED] writes:

MM I like option 1; make sure it is well documented.  I actually lean
MM toward something even stronger: have the command commit the current
MM transaction before and after the backup.

Generally, I do not like such implicit commits.  It is likely to catch
someone by surprise.  On the other hand, I do not think many people would
intentionally do backup as part of a larger transaction.

In my opinion, the ideal solution would be to execute backup in a
nested transaction.  I do not know whether it is worth the effort.

-- 
Øystein



Re: [jira] Commented: (DERBY-239) Need a online backup feature that does not block update operations when online backup is in progress.

2005-10-31 Thread Bryan Pendleton
windows allows one to partition in software, I think included in the 
base OS.  Can someone say if linux does or not (or at least a particular 
version of linux).


Well, Linux has an extremely powerful component called the Logical
Volume Manager, which sounds like what you mean:

http://www.tldp.org/HOWTO/LVM-HOWTO/index.html
http://www.tldp.org/HOWTO/LVM-HOWTO/whatisvolman.html

This has been widely available in Linux distributions for at
least 5 years or so.

bryan



[jira] Commented: (DERBY-85) NPE when creating a trigger on a table and default schema doesn't exist.

2005-10-31 Thread Rick Hillegas (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-85?page=comments#action_12356444 ] 

Rick Hillegas commented on DERBY-85:


Derbyall passed. When a new patch is submitted including a regression test, 
I'll just run the appropriate suite.

 NPE when creating a trigger on a table and default schema doesn't exist.
 

  Key: DERBY-85
  URL: http://issues.apache.org/jira/browse/DERBY-85
  Project: Derby
 Type: Bug
   Components: SQL
 Versions: 10.0.2.0
 Reporter: A B
 Assignee: Dyre Tjeldvoll
  Attachments: derby-85-notabs.diff, derby-85.diff, derby-85.stat, 
 derbyall_report.txt

 BACKGROUND:
 When connecting to a Derby db with a user id and password, the default schema 
 is USER.  For example, if I connect with:
 ij> connect 'jdbc:derby:myDB;user=someUser;password=somePwd';
 then the default schema is SOMEUSER.
 PROBLEM:
 It turns out that if a table t1 exists in a non-default schema and the 
 default schema (in this case, SOMEUSER) doesn't exist yet (because no 
 objects have been created in that schema), then attempts to create a trigger 
 on t1 using its qualified name will lead to a null pointer exception in the 
 Derby engine.
 REPRO:
 In ij:
 -- Create database with default schema SOMEUSER.
 ij> connect 'jdbc:derby:myDB;create=true;user=someUser;password=somePwd';
 -- Create table t1 in a non-default schema; in this case, call it ITKO.
 ij> create table itko.t1 (i int);
 0 rows inserted/updated/deleted
 -- Now schema ITKO exists, and T1 exists in schema ITKO, but default schema 
 SOMEUSER does NOT exist, because we haven't created any objects in that 
 schema yet.
 -- So now we try to create a trigger in the ITKO (i.e. the non-default) 
 schema...
 ij> create trigger trig1 after update on itko.t1 for each row mode db2sql 
 select * from sys.systables;
 ERROR XJ001: Java exception: ': java.lang.NullPointerException'.
 A look at the derby.log file shows the stack trace given below.  In a word, 
 it looks like the compilation schema field of SYS.SYSTRIGGERS isn't getting 
 set, and so it ends up being null.  That causes the NPE in subsequent 
 processing...
 java.lang.NullPointerException
   at 
 org.apache.derby.impl.sql.catalog.SYSSTATEMENTSRowFactory.makeSYSSTATEMENTSrow(SYSSTATEMENTSRowFactory.java:200)
   at 
 org.apache.derby.impl.sql.catalog.DataDictionaryImpl.addSPSDescriptor(DataDictionaryImpl.java:2890)
   at 
 org.apache.derby.impl.sql.execute.CreateTriggerConstantAction.createSPS(CreateTriggerConstantAction.java:354)
   at 
 org.apache.derby.impl.sql.execute.CreateTriggerConstantAction.executeConstantAction(CreateTriggerConstantAction.java:258)
   at 
 org.apache.derby.impl.sql.execute.MiscResultSet.open(MiscResultSet.java:56)
   at 
 org.apache.derby.impl.sql.GenericPreparedStatement.execute(GenericPreparedStatement.java:366)
   at 
 org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(EmbedStatement.java:1100)
   at 
 org.apache.derby.impl.jdbc.EmbedStatement.execute(EmbedStatement.java:509)
   at 
 org.apache.derby.impl.jdbc.EmbedStatement.execute(EmbedStatement.java:467)
   at org.apache.derby.impl.tools.ij.ij.executeImmediate(ij.java:299)
   at org.apache.derby.impl.tools.ij.utilMain.doCatch(utilMain.java:433)
   at org.apache.derby.impl.tools.ij.utilMain.go(utilMain.java:310)
   at org.apache.derby.impl.tools.ij.Main.go(Main.java:210)
   at org.apache.derby.impl.tools.ij.Main.mainCore(Main.java:176)
   at org.apache.derby.impl.tools.ij.Main14.main(Main14.java:56)
   at org.apache.derby.tools.ij.main(ij.java:60)




Re: [jira] Commented: (DERBY-85) NPE when creating a trigger on a table and default schema doesn't exist.

2005-10-31 Thread David W. Van Couvering
How did you get derbyall to pass with all the failures we are having 
right now?  Did you run on Windows?


Thanks,

David

Rick Hillegas (JIRA) wrote:
[ http://issues.apache.org/jira/browse/DERBY-85?page=comments#action_12356444 ] 


Rick Hillegas commented on DERBY-85:


Derbyall passed. When a new patch is submitted including a regression test, 
I'll just run the appropriate suite.



[snip - quoted DERBY-85 issue description and stack trace]






Derby I/O issues during checkpointing

2005-10-31 Thread Øystein Grøvlen

Some test runs we have done show very long transaction response times
during checkpointing.  This has been seen on several platforms.  The
load is TPC-B like transactions and the write cache is turned off so
the system is I/O bound.  There seems to be two major issues:

1. Derby does checkpointing by writing all dirty pages by
   RandomAccessFile.write() and then do file sync when the entire
   cache has been scanned.  When the page cache is large, the file
   system buffer will overflow during checkpointing, and occasionally
   the writes will take very long.  I have observed single write
   operations that took almost 12 seconds.  What is even worse is that
   during this period read performance on other files can also be very
   bad.  For example, reading an index page from disk can take close
   to 10 seconds when the base table is checkpointed.  Hence,
   transactions are severely slowed down.

   I have managed to improve response times by flushing every file for
   every 100th write.  Is this something we should consider including
   in the code?  Do you have better suggestions?  (See the sketch after
   this list for a rough illustration of the idea.)

2. What makes things even worse is that only a single thread can read a
   page from a file at a time.  (Note that Derby has one file per
   table). This is because the implementation of RAFContainer.readPage
   is as follow:

synchronized (this) {  // 'this' is a FileContainer, i.e. a file object
fileData.seek(pageOffset);  // fileData is a RandomAccessFile
fileData.readFully(pageData, 0, pageSize);
}

   During checkpoint when I/O is slow this creates long queues of
   readers.  In my run with 20 clients, I observed read requests that
   took more than 20 seconds.

   This behavior will also limit throughput and can partly explain
   why I get low CPU utilization with 20 clients.  All my TPC-B
   clients are serialized since most will need 1-2 disk accesses
   (index leaf page and one page of the account table).

   Generally, in order to make the OS able to optimize I/O, one should
   have many outstanding I/O calls at a time.  (See Frederiksen,
   Bonnet: Getting Priorities Straight: Improving Linux Support for
   Database I/O, VLDB 2005).  

   I have attached a patch where I have introduced several file
   descriptors (RandomAccessFile objects) per RAFContainer.  These are
   used for reading.  The principle is that when all readers are busy,
   a readPage request will create a new reader.  (There is a maximum
   number of readers.)  With this patch, throughput was improved by
   50% on linux.  The combination of this patch and the syncing for
   every 100th write reduced maximum transaction response times by
   90%.

   The patch is not ready for inclusion into Derby, but I would like
   to hear whether you think this is a viable approach.  (A simplified,
   self-contained sketch of the reader-pool idea follows the attached diff.)
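
A minimal sketch of the per-100-writes sync idea from point 1 above (hypothetical names, not the actual Derby checkpoint code):

import java.io.IOException;
import java.io.RandomAccessFile;

/**
 * Sketch: sync the file after every N page writes during a checkpoint
 * instead of once at the very end, so the file system buffer never
 * accumulates a huge backlog of dirty pages.  SYNC_INTERVAL and
 * writeDirtyPage are illustrative names only.
 */
class PeriodicSyncWriter {
    private static final int SYNC_INTERVAL = 100;
    private int writesSinceSync = 0;

    void writeDirtyPage(RandomAccessFile file, long pageOffset,
                        byte[] pageData) throws IOException {
        file.seek(pageOffset);
        file.write(pageData);
        if (++writesSinceSync >= SYNC_INTERVAL) {
            // push the accumulated writes to disk before continuing the scan
            file.getFD().sync();
            writesSinceSync = 0;
        }
    }
}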

-- 
Øystein

Index: java/engine/org/apache/derby/impl/store/raw/data/RAFContainer.java
===
--- java/engine/org/apache/derby/impl/store/raw/data/RAFContainer.java   
(revision 312819)
+++ java/engine/org/apache/derby/impl/store/raw/data/RAFContainer.java (working 
copy)
@@ -45,7 +45,8 @@
 import org.apache.derby.io.StorageFile;
 import org.apache.derby.io.StorageRandomAccessFile;
 
-import java.util.Vector;
+import java.util.ArrayList;
+import java.util.List;
 
 import java.io.DataInput;
 import java.io.IOException;
@@ -66,12 +67,15 @@
   * Immutable fields
  */
 protected StorageRandomAccessFile fileData;
-
+
   /* 
 ** Mutable fields, only valid when the identity is valid.
   */
  protected boolean   needsSync;
 
+private int openReaders;
+private List freeReaders;
+
 /* privileged actions */
 private int actionCode;
 private static final int GET_FILE_NAME_ACTION = 1;
@@ -79,6 +83,7 @@
 private static final int REMOVE_FILE_ACTION = 3;
 private static final int OPEN_CONTAINER_ACTION = 4;
 private static final int STUBBIFY_ACTION = 5;
+private static final int OPEN_READONLY_ACTION = 6;
 private ContainerKey actionIdentity;
 private boolean actionStub;
 private boolean actionErrorOK;
@@ -86,12 +91,15 @@
 private StorageFile actionFile;
 private LogInstant actionInstant;
 
+
   /*
   * Constructors
  */
 
RAFContainer(BaseDataFileFactory factory) {
 super(factory);
+openReaders = 0;
+freeReaders = new ArrayList();
 }
 
  /*
@@ -193,12 +201,25 @@
 
long pageOffset = pageNumber * pageSize;
 
-  synchronized (this) {
+
+StorageRandomAccessFile reader = null;
+for (;;) {
+synchronized(freeReaders) {
+if (freeReaders.size() > 0) {
+reader = (StorageRandomAccessFile)freeReaders.remove(0);
+break;
+}
+}
+openNewReader();
+} 
 
-
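
Since the diff above is truncated, here is a simplified, self-contained sketch of the same reader-pool idea (illustrative names only, not the attached patch): each container keeps a small pool of read-only RandomAccessFile objects, so concurrent readPage calls no longer serialize on a single file descriptor.

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

class PooledPageReader {
    private static final int MAX_READERS = 4;
    private final File containerFile;
    private final List freeReaders = new ArrayList(); // of RandomAccessFile
    private int openReaders = 0;

    PooledPageReader(File containerFile) {
        this.containerFile = containerFile;
    }

    void readPage(long pageOffset, byte[] pageData) throws IOException {
        RandomAccessFile reader = checkOut();
        try {
            reader.seek(pageOffset);
            reader.readFully(pageData, 0, pageData.length);
        } finally {
            checkIn(reader);
        }
    }

    // Hand out a free reader, opening a new one up to MAX_READERS.
    private synchronized RandomAccessFile checkOut() throws IOException {
        while (freeReaders.isEmpty() && openReaders >= MAX_READERS) {
            try {
                wait();    // all readers busy; wait for one to be returned
            } catch (InterruptedException ie) {
                throw new IOException("interrupted waiting for a free reader");
            }
        }
        if (!freeReaders.isEmpty()) {
            return (RandomAccessFile) freeReaders.remove(freeReaders.size() - 1);
        }
        openReaders++;
        return new RandomAccessFile(containerFile, "r");
    }

    private synchronized void checkIn(RandomAccessFile reader) {
        freeReaders.add(reader);
        notify();
    }
}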

Re: [jira] Commented: (DERBY-85) NPE when creating a trigger on a table and default schema doesn't exist.

2005-10-31 Thread Rick Hillegas

Yep, Windows it was.

Cheers,
-Rick

David W. Van Couvering wrote:

How did you get derbyall to pass with all the failures we are having 
right now?  Did you run on Windows?


Thanks,

David

Rick Hillegas (JIRA) wrote:

[ 
http://issues.apache.org/jira/browse/DERBY-85?page=comments#action_12356444 
]

Rick Hillegas commented on DERBY-85:


Derbyall passed. When a new patch is submitted including a regression 
test, I'll just run the appropriate suite.



[snip - quoted DERBY-85 issue description and stack trace]








Re: [jira] Commented: (DERBY-239) Need a online backup feature that does not block update operations when online backup is in progress.

2005-10-31 Thread Øystein Grøvlen
 MM == Mike Matrigali [EMAIL PROTECTED] writes:

MM I believe Derby gives the OS plenty of opportunity to do parallel I/O
MM if there are multiple users in the database.  Every thread can possibly
MM be doing I/O at a single time.  There may be room to improve in the case
MM of checkpoint, but not sure how important that is.

See my recent email on Derby I/O issues.  Only one thread may
read/write a page from/to a file (table) at a time since readPage and
writePage synchronize their accesses on the RAFContainer object.

-- 
Øystein



Re: Derby I/O issues during checkpointing

2005-10-31 Thread Mike Matrigali



Øystein Grøvlen wrote:

Some tests runs we have done show very long transaction response times
during checkpointing.  This has been seen on several platforms.  The
load is TPC-B like transactions and the write cache is turned off so
the system is I/O bound.  There seems to be two major issues:

1. Derby does checkpointing by writing all dirty pages by
   RandomAccessFile.write() and then do file sync when the entire
   cache has been scanned.  When the page cache is large, the file
   system buffer will overflow during checkpointing, and occasionally
   the writes will take very long.  I have observed single write
   operations that took almost 12 seconds.  What is even worse is that
   during this period also read performance on other files can be very
   bad.  For example, reading an index page from disk can take close
   to 10 seconds when the base table is checkpointed.  Hence,
   transactions are severely slowed down.

   I have managed to improve response times by flushing every file for
   every 100th write.  Is this something we should consider including
   in the code?  Do you have better suggestions?


probably the first thing to do is make sure we are doing a reasonable
number of checkpoints; most people who run these benchmarks configure 
the system such that it either does 0 or 1 checkpoints during the run.
This goes to the ongoing discussion on how best to automatically 
configure the checkpoint interval - the current defaults don't make much
sense for an OLTP system.

I had hoped that with the current checkpoint design, usually by the 
time the file sync happened all the pages would have already made
it to disk.  The hope was that while holding the write semaphore we
would not do any I/O and thus not cause much interruption to the rest of
the system.

What OS/filesystem are you seeing these results on? Any idea why a write
would take 10 seconds?  Do you think the write blocks when the sync is
called?  If so, do you think the block is at a Derby sync point or an OS 
internal sync point?


We moved away from using the write-then-sync approach for log files 
because we found that on some OS/filesystems the performance of the sync
was linearly related to the size of the file, rather than the number
of modified pages.  I left it for checkpoint as it seemed an easy
way to do async write which I thought would then provide the OS with
basically the equivalent of many concurrent writes to do.

Another approach may be to change checkpoint to use the direct sync 
write, but make it get its own open on the file similar to what you
describe below - that would mean other readers/writers would never block
on checkpoint read/write - at least from the Derby level.  Whether
this would increase or decrease overall checkpoint elapsed time is
probably system dependent - I am pretty sure it would increase time
on windows, but I continue to believe elapsed time of checkpoint is
not important - as you point out it is more important to make sure
it interferes with real work as little as possible.
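
If the "direct sync write" mentioned above refers to the synchronous open modes of RandomAccessFile (an assumption on my part), the contrast looks roughly like this:

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

class WriteModes {
    // write-then-sync: writes are buffered by the OS until an explicit sync
    static void writeThenSync(File f, long off, byte[] page) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        raf.seek(off);
        raf.write(page);
        raf.getFD().sync();   // all buffered writes hit the device here
        raf.close();
    }

    // direct sync write: "rwd" requires every content update to be written
    // synchronously to the device before write() returns
    static void directSyncWrite(File f, long off, byte[] page) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(f, "rwd");
        raf.seek(off);
        raf.write(page);
        raf.close();
    }
}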


2. What makes thing even worse is that only a single thread can read a
   page from a file at a time.  (Note that Derby has one file per
   table). This is because the implementation of RAFContainer.readPage
   is as follow:

synchronized (this) {  // 'this' is a FileContainer, i.e. a file object
fileData.seek(pageOffset);  // fileData is a RandomAccessFile
fileData.readFully(pageData, 0, pageSize);
}

   During checkpoint when I/O is slow this creates long queques of
   readers.  In my run with 20 clients, I observed read requests that
   took more than 20 seconds.

   This behavior will also limit throughput and can partly explains
   why I get low CPU utilization with 20 clients.  All my TPCB-B
   clients are serialized since most will need 1-2 disk accesses
   (index leaf page and one page of the account table).

   Generally, in order to make the OS able to optimize I/O, one should
   have many outstanding I/O calls at a time.  (See Frederiksen,
   Bonnet: Getting Priorities Straight: Improving Linux Support for
   Database I/O, VLDB 2005).  


   I have attached a patch where I have introduced several file
   descriptors (RandomAccessFile objects) per RAFContainer.  These are
   used for reading.  The principle is that when all readers are busy,
   a readPage request will create a new reader.  (There is a maximum
   number of readers.)  With this patch, throughput was improved by
   50% on linux.  The combination of this patch and the synching for
   every 100th write, reduced maximum transaction response times with
   90%.

   The patch is not ready for inclusion into Derby, but I would like
   to here whether you think this is a viable approach.

I now see what you were talking about, I was thinking at too high a 
level.  In your test is the data spread across more than a single disk?

Especially with data spread across multiple disks it would make sense
to allow multiple 

eol and merge problems from trunk to branch

2005-10-31 Thread Mike Matrigali

In my svn commit of 329494, when I went to merge it
to the branch I got a conflict on every line of the change -
all eol issues.

My questions are:
1) did I just get unlucky with my timing vs. the recent eol mass change?
2) Did I do something wrong with my original checkin, or something with
   the merge command (kathy tried the merge and got the same conflicts)?
3) Is this going to happen with all future trunk to branch merges?



Re: [jira] Updated: (DERBY-239) Need a online backup feature that does not block update operations when online backup is in progress.

2005-10-31 Thread Mike Matrigali

committed this fix as svn 329934.

Suresh Thalamati (JIRA) wrote:

 [ http://issues.apache.org/jira/browse/DERBY-239?page=all ]

Suresh Thalamati updated DERBY-239:
---

Attachment: onlinebackup_2.diff

Fix to the problem found by Øystein while reviewing the previous online 
backup patch (online_backup1.diff).
The container backup code was doing a seek incorrectly on the container file instead of the backup file. 

Tests: All tests passed on jdk142, Windows XP. 

It would be great if someone could commit this patch. 



Thanks
-suresht
  




Need a online backup feature  that does not block update operations   when 
online backup is in progress.


Key: DERBY-239
URL: http://issues.apache.org/jira/browse/DERBY-239
Project: Derby
   Type: New Feature
 Components: Store
   Versions: 10.1.1.0
   Reporter: Suresh Thalamati
   Assignee: Suresh Thalamati
Attachments: onlinebackup.html, onlinebackup_1.diff, onlinebackup_2.diff

Currently Derby allows users to perform online backups using 
SYSCS_UTIL.SYSCS_BACKUP_DATABASE() procedure, but while the backup is in 
progress, update operations are temporarily blocked; read operations can 
still proceed.
Blocking update operations can be a real issue, specifically in client server environments, because user requests will be blocked for a long time if a 
backup is in progress on the server.







Re: Derby I/O issues during checkpointing

2005-10-31 Thread Francois Orsini
In order for a thread to generate many outstanding I/O calls at a time,
it should *not* block on an I/O in the first place if it does not have
to - this is what you observed - Typically, we would want to be able to
issue Asynchronous I/O's so that a given thread at the low-level does
not block but rather is allowed to check for I/O completion at a later
time as appropriate, while producing additional I/O requests (i.e.
read-ahead) - Asynchronous I/O's in Java is not something you used to
get out of the box and people have implemented it via I/O worker
threads (simulated Async I/O's) or/and using JNI (calling into OS
proprietary asynchronous I/O driver on Unix FSs and Windows (NT)).

I think the approach you have taken is good in terms of principles and
prototyping, but I would think we would need to implement something more
sophisticated and have an implementation of worker threads simulating
asynchronous I/Os (whether we end up using Java Asynchronous I/O in NIO
or not). I think we could even see additional performance gain.

Just my 0.02 cents...

--francois
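
A rough sketch of what such "simulated" asynchronous reads via I/O worker threads could look like (illustrative only, not a proposal for Derby's actual code; all names are invented):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.LinkedList;

class SimulatedAsyncReader {

    /** One outstanding read request; a worker thread fills it in. */
    static class ReadRequest {
        final RandomAccessFile file;
        final long offset;
        final byte[] buffer;
        private boolean done;
        private IOException error;

        ReadRequest(RandomAccessFile file, long offset, byte[] buffer) {
            this.file = file;
            this.offset = offset;
            this.buffer = buffer;
        }

        synchronized void awaitCompletion() throws IOException {
            while (!done) {
                try { wait(); } catch (InterruptedException ie) { /* retry */ }
            }
            if (error != null) throw error;
        }

        synchronized void complete(IOException failure) {
            error = failure;
            done = true;
            notifyAll();
        }
    }

    private final LinkedList queue = new LinkedList(); // of ReadRequest

    SimulatedAsyncReader(int workers) {
        for (int i = 0; i < workers; i++) {
            Thread t = new Thread(new Worker(), "io-worker-" + i);
            t.setDaemon(true);
            t.start();
        }
    }

    /** Queue a read and return immediately; the caller waits on the request later. */
    ReadRequest submit(RandomAccessFile file, long offset, byte[] buffer) {
        ReadRequest req = new ReadRequest(file, offset, buffer);
        synchronized (queue) {
            queue.addLast(req);
            queue.notify();
        }
        return req;
    }

    private class Worker implements Runnable {
        public void run() {
            for (;;) {
                ReadRequest req;
                synchronized (queue) {
                    while (queue.isEmpty()) {
                        try { queue.wait(); } catch (InterruptedException ie) { return; }
                    }
                    req = (ReadRequest) queue.removeFirst();
                }
                try {
                    synchronized (req.file) {   // one reader per descriptor
                        req.file.seek(req.offset);
                        req.file.readFully(req.buffer);
                    }
                    req.complete(null);
                } catch (IOException ioe) {
                    req.complete(ioe);
                }
            }
        }
    }
}

A caller could then issue several submit() calls (say, for an index leaf page and a base-table page) before blocking on awaitCompletion() for each, which is what keeps several I/O requests outstanding at once.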


Re: [jira] Commented: (DERBY-239) Need a online backup feature that does not block update operations when online backup is in progress.

2005-10-31 Thread Suresh Thalamati

Thanks for the input. I will go with option 1.

A nested transaction may not solve this problem. A nested transaction 
will be on the same thread as the user thread, so the user will not be 
able to commit the unlogged operations for the backup operation to 
proceed.


Thanks
-suresht


Øystein Grøvlen wrote:

MM == Mike Matrigali [EMAIL PROTECTED] writes:



MM I like option 1, make sure it is well documented.  I actually lean
MM toward even stronger, have the command commit the current transaction
MM before and after the backup.

Generally, I do not like such implicit commits.  It is likely to catch
someone by surprise.  On the other, I do not think many people would
intentionally do backup as part of a larger transaction.

In my opinion, the ideal solution would be to execute backup in a
nested transaction.  I do not know whether it is worth the effort.





DERBY-615 - security manager now enabled by default

2005-10-31 Thread Daniel John Debrunner

Svn commit (on the trunk) 329876 enables the security manager by default
for tests in the test harness, except:

  - when useProcess=true
  - jcc is the client driver
  - noSecurityManager=true

http://svn.apache.org/viewcvs?view=rev&rev=329876

This is part of the incremental development for DERBY-615.

http://issues.apache.org/jira/browse/DERBY-615

I tested jar/non-jar and sane/non-sane with no problems.

I see the Sun tinderbox tests were ok, no new problems introduced.

I think roughly about 58% of the JDBC client (embedded or network
client) side tests now install a security manager (with J2SE).

http://wiki.apache.org/db-derby/SecurityManagerTesting

I added a section in the testing README file on the security manager.

Dan.





Re: Grant and Revoke ... DERBY-464...

2005-10-31 Thread Daniel John Debrunner
Satheesh Bandaram wrote:

 Hi
 
 I just attached my proposal to enhance Derby by adding Grant and Revoke
 capability to DERBY-464
 http://issues.apache.org/jira/browse/DERBY-464. Hope this leads to
 many other enhancements to Derby in the access-control and security
 areas to make Derby much more capable in client-server configurations.
 
[snip]
 
 When a table, view, function, or procedure is created its owner
 (creator) has full privileges on it. No other user has any privileges on
 it until the owner grants privileges.

Can those permissions, the owner's, be revoked?

I.e. if I create a table, can I then revoke DELETE permission on it, from
myself? So that no-one can perform a DELETE.

Dan.




[jira] Commented: (DERBY-666) Enhance derby.locks.deadlockTrace to print stack traces for all threads involved in a deadlock

2005-10-31 Thread Francois Orsini (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-666?page=comments#action_12356460 ] 

Francois Orsini commented on DERBY-666:
---

The new J2SE 5.0 has some new APIs to dump the stack traces of an individual 
thread or of all threads running in a JVM - there is a new notion of a 
StackTraceElement object, which represents a stack frame and can be output as a String.

So you can get all frames of a particular thread's stack dump as well as for 
all threads in the JVM.

http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Thread.html#getStackTrace()
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Thread.html#getAllStackTraces()

http://java.sun.com/j2se/1.5.0/docs/api/java/lang/StackTraceElement.html

Thread stack dumps can also be performed on the command line using the 'jstack' 
(1.5) utility, which dumps all of a JVM's thread stack traces given the JVM pid.
http://java.sun.com/j2se/1.5.0/docs/tooldocs/share/jstack.html

fyi.
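
A small sketch of what such a dump could look like on J2SE 5.0 (hypothetical helper class, not Derby code):

import java.io.PrintStream;
import java.util.Map;

public class AllThreadsDump {
    /**
     * Prints every live thread's stack using the J2SE 5.0
     * Thread.getAllStackTraces() API.  Under a security manager this call
     * needs RuntimePermission("getStackTrace") and
     * RuntimePermission("modifyThreadGroup").
     */
    public static void dumpTo(PrintStream out) {
        Map<Thread, StackTraceElement[]> traces = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> entry : traces.entrySet()) {
            out.println("Thread: " + entry.getKey().getName());
            StackTraceElement[] frames = entry.getValue();
            for (int i = 0; i < frames.length; i++) {
                out.println("\tat " + frames[i]);
            }
        }
    }

    public static void main(String[] args) {
        dumpTo(System.out);
    }
}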

 Enhance derby.locks.deadlockTrace to print stack traces for all threads 
 involved in a deadlock
 --

  Key: DERBY-666
  URL: http://issues.apache.org/jira/browse/DERBY-666
  Project: Derby
 Type: Improvement
   Components: Store
 Versions: 10.1.1.0
 Reporter: Bryan Pendleton
 Priority: Minor


 I was reading http://www.linux-mag.com/content/view/2134/ (good article, 
 btw!), and it says:
The next two properties are needed to diagnose concurrency (locking and 
  deadlock) problems.
 
   *derby.locks.monitor=true logs all deadlocks that occur in the system.
   *derby.locks.deadlockTrace=true log a stack trace of all threads 
  involved in lock-related rollbacks.
 It seems, that, in my environment, the deadlockTrace property does not log a 
 stack trace of *all* threads involved in the deadlock.
 Instead, it only logs a stack trace of the *victim* thread involved in the 
 deadlock.
 I think it would be very useful if the derby.locks.deadlockTrace setting 
 could in fact log a stack trace of all involved threads.
 In a posting to derby-dev, Mike Matrigali noted that an earlier 
 implementation of a similar feature had to be removed because it was too 
 expensive in both time and space, but he suggested that there might be 
 several possible ways to implement this in an acceptably efficient manner:
  A long time ago there use to be room in each lock to point at a
  stack trace for each lock, but that was removed to optimize the size
  of the lock data structure which can have many objects outstanding.
  And creating and storing the stack for every lock was incredibly slow
  and just was not very useful for any very active application.  I think
  I was the only one who ever used it.
 
  The plan was sometime to add a per user data structure which could be
  filled in when it was about to wait on a lock, which would give most of 
  what is interesting in a deadlock.
  
  The current deadlockTrace is meant to dump the lock table out to derby.log 
  when a deadlock is encountered.
  
  I agree getting a dump of all stack traces would be very useful, and
  with the later jvm debug interfaces may now be possible - in earlier
  JVM's there weren't any java interfaces to do so.  Does anyone have
  the code to donate to dump all thread stacks to a buffer?
 Mike also suggested a manual technique as a workaround; it would be useful to 
 put this into the documentation somewhere, perhaps on the page which 
 documents derby.locks.deadlockTrace? Here's Mike's suggestion:
  What I do if I can reproduce easily is set try to catch the wait by
  hand and then depending on the environment either send the magic
  signal or hit ctrl-break in the server window which will send the JVM
  specific thread dumps to derby.log.
 The magic signal, btw, is 'kill -QUIT', at least with Sun JVMs in my 
 experience.




is getAllStackTraces() something we are allowed to call from the server given the recent SecurityManager changes?

2005-10-31 Thread Mike Matrigali



Francois Orsini (JIRA) wrote:
[ http://issues.apache.org/jira/browse/DERBY-666?page=comments#action_12356460 ] 


Francois Orsini commented on DERBY-666:
---

The new J2SE 5.0 has some new API to dump individual or all threads' 
stracktrace running in a JVM - There is a new notion of StackTraceElement 
object which represent a stack frame and can be output'ed as a String

So you can get all frames of a particular thread's stack  dump as well as for 
all threads in the JVM.

http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Thread.html#getStackTrace()
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Thread.html#getAllStackTraces()

http://java.sun.com/j2se/1.5.0/docs/api/java/lang/StackTraceElement.html

Thread(s) stack dumps can also be performed on the command line using 'jstack' 
(1.5) utility to dump all the JVM's thread stack traces given a JVM pid.
http://java.sun.com/j2se/1.5.0/docs/tooldocs/share/jstack.html

fyi.



[snip - quoted DERBY-666 issue description]







Re: [jira] Commented: (DERBY-239) Need a online backup feature that does not block update operations when online backup is in progress.

2005-10-31 Thread Suresh Thalamati

Øystein Grøvlen wrote:

ST == Suresh Thalamati [EMAIL PROTECTED] writes:



ST Hi Øystein,
ST Thanks for reviewing the patch. My answers are in-line...

ST Øystein Grøvlen (JIRA) wrote:


snip ...




ST By using the catalog approach also backup can only throw a warning
ST about the  deleted container. Once the user deletes a container file ,
ST there is no way out, users can not even drop that table. I don't think
ST making  backup  fail  forever   when  this  scenario  occurs  will  be
ST acceptable to the users.

I agree, but detecting it on backup gives the user the option of
recovering the table by restoring the previous backup.



I agree with you, notifying the user as early as possible about the 
missing files is a very good idea. If there is no previous backup then 
detecting and stopping the backup is not a good idea either.

I believe this is not a common scenario; these improvements can
be added later, once I get the online backup/restore to work.


snip ..


ST 2)  Move  the backing  up  and  unlatching of  the  page  code to  the
ST FileContainer.java.


ST I have  not thought of  any other better  ways to do design  this, any
ST ideas will be helpful.

Could the backup code be in a separate class instead of part of
RAFContainer?  It does not seem to use much of RAFContainer's internal
state. 



  I think the backup code in RAFContainer.java is ok. The purpose of the 
backup code there is to back up the container.  Backup needs to 
synchronize with container file deletions and truncation. And also the 
backup code is just a couple of methods in this file, so it may not be 
worth creating yet another class file.



Thanks
-suresht



Re: Grant and Revoke ... DERBY-464...

2005-10-31 Thread Satheesh Bandaram
I wasn't planning on supporting that... An authorization ID can not
revoke a privilege from itself... Same as when an authorization ID tries
to GRANT itself some privilege.

Satheesh

Daniel John Debrunner wrote:

Can those permissions, the owner's, be revoked?

Ie. if I a create table, can I then revoke DELETE permission on it, from
myself? So that no-one can perform a DELETE.

Dan.




  




Re: is getAllStackTraces() something we are allowed to call from the server given the recent SecurityManager changes?

2005-10-31 Thread Francois Orsini
if permission is granted as part of the security policies I would think
so - it will call the appropriate permission check on the security
manager installed...

from the Javadoc:
---

If there is a security manager, then the security manager's
checkPermission method is called with a
RuntimePermission("getStackTrace") permission as well as
RuntimePermission("modifyThreadGroup") permission to see if it is ok to
get the stack trace of all threads.

Throws:
SecurityException - if a security manager exists and its checkPermission method doesn't allow getting the stack trace of the thread.

On 10/31/05, Mike Matrigali [EMAIL PROTECTED] wrote:
[snip - quoted DERBY-666 comment and issue description]
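
To illustrate the permission side concretely (a hypothetical snippet; the exact policy entries Derby's default test policy would need are an assumption here):

import java.util.Collections;
import java.util.Map;

class GuardedStackDump {
    /*
     * The corresponding policy file grant would be along the lines of:
     *   permission java.lang.RuntimePermission "getStackTrace";
     *   permission java.lang.RuntimePermission "modifyThreadGroup";
     */
    static Map<Thread, StackTraceElement[]> tryDumpAll() {
        try {
            return Thread.getAllStackTraces();
        } catch (SecurityException se) {
            // policy does not grant the needed RuntimePermissions;
            // fall back to the current thread only (no permission required)
            return Collections.singletonMap(
                    Thread.currentThread(),
                    Thread.currentThread().getStackTrace());
        }
    }
}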



Re: [jira] Reopened: (DERBY-644) Eclipse UI plug-in zip file for the 10.1.2 snapshot release

2005-10-31 Thread Rajesh Kartha

Susan Cline wrote:


Hi Dan,
 
I'll look into this.  There are a few items I want to sort out with 
the plug-ins.  I'd like to automate the build, possibly create an 
Eclipse update site at Apache and also ask folks about the current 
packaging of the plug-ins.  We've had a few queries about packaging 
the core plug-in a little bit differently.
 
Susan


    Daniel John Debrunner [EMAIL PROTECTED] wrote:

Andrew McIntyre wrote:
 
 I've been including the Eclipse plugins as part of the official

 release and posted on the downloads page, since the code for the
 plugins lives in the Derby code tree and is straightforward to
build,
 though the UI plugin build not automated.

I guess that was the part I was missing, the not automated build.
I hope
someone will scratch that itch one day.

Thanks,
Dan.


Hi,

Building the Derby 'ui' plug-in outside Eclipse PDE is desirable, but I 
wish to point out that it depends on many of the Eclipse APIs.


org.eclipse.core.resources
org.eclipse.ui
org.apache.derby.core
org.eclipse.jdt.launching
org.eclipse.jdt.core
org.eclipse.core.runtime
org.eclipse.ui.console

 etc... to name a few, for providing the pop-up menu, extension and 
launching mechanisms in Eclipse.  These come in separate jars or
as directories within Eclipse.

Automating the build means these Eclipse APIs should be available to 
the build mechanism (Ant).  Maybe we can add a property like 'eclipse.home'
in the ant.properties, and access the needed Eclipse jars for building. 
Users can point this 'eclipse.home' to their respective Eclipse 
environment while building the Derby 'ui' plug-in.  This needs to be 
tested, though.

The current way (directory structure + docs to build) was provided based 
on what other Eclipse plug-in projects have done (Forrest, dbEdit 
etc.).  Building the plug-ins is also simple, since all one needs to do 
is import the plugin project into their Eclipse environment.


Setting up an update site for the Derby plug-ins will be very useful, 
now that Derby has graduated and has a permanent URL at Apache.


-Rajesh




[jira] Commented: (DERBY-587) Providing JDBC 4.0 support for derby

2005-10-31 Thread V.Narayanan (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-587?page=comments#action_12356470 ] 

V.Narayanan commented on DERBY-587:
---

Hi,
Thanx for pointing out the issues, Satheesh. Also thanx for the follow-up email 
address, Jean. I faxed the agreement to Apache yesterday. Also I will follow 
this up using the email address given by Jean. Sorry for my late response.
Narayanan

 Providing JDBC 4.0 support for derby
 

  Key: DERBY-587
  URL: http://issues.apache.org/jira/browse/DERBY-587
  Project: Derby
 Type: New Feature
   Components: JDBC
 Versions: 10.2.0.0
 Reporter: V.Narayanan
 Assignee: V.Narayanan
 Priority: Minor
  Fix For: 10.2.0.0
  Attachments: jdbc4.0.sxw, jdbc4.diff






[jira] Commented: (DERBY-587) Providing JDBC 4.0 support for derby

2005-10-31 Thread Craig Russell (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-587?page=comments#action_12356471 ] 

Craig Russell commented on DERBY-587:
-

I recognize that this is an internal project issue, but:

Doesn't the "grant Apache license" check box on the patch upload mean anything? 
I always assumed that it allowed someone without an ICLA on file to contribute 
code. The patches uploaded for this bug have the "grant Apache license" box 
checked... Of course, the copyright notices are required in the code for new 
files.

Craig

 Providing JDBC 4.0 support for derby
 

  Key: DERBY-587
  URL: http://issues.apache.org/jira/browse/DERBY-587
  Project: Derby
 Type: New Feature
   Components: JDBC
 Versions: 10.2.0.0
 Reporter: V.Narayanan
 Assignee: V.Narayanan
 Priority: Minor
  Fix For: 10.2.0.0
  Attachments: jdbc4.0.sxw, jdbc4.diff






[jira] Commented: (DERBY-587) Providing JDBC 4.0 support for derby

2005-10-31 Thread Daniel John Debrunner (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-587?page=comments#action_12356472 ] 

Daniel John Debrunner commented on DERBY-587:
-

I think this is an Apache wide issue, or desire:

http://www.apache.org/licenses/#clas

The ASF desires that all contributors of ideas, code, or documentation to the 
Apache projects complete, sign, and submit (via snailmail or fax) a Individual 
Contributor License Agreement (CLA)


 Providing JDBC 4.0 support for derby
 

  Key: DERBY-587
  URL: http://issues.apache.org/jira/browse/DERBY-587
  Project: Derby
 Type: New Feature
   Components: JDBC
 Versions: 10.2.0.0
 Reporter: V.Narayanan
 Assignee: V.Narayanan
 Priority: Minor
  Fix For: 10.2.0.0
  Attachments: jdbc4.0.sxw, jdbc4.diff






[jira] Commented: (DERBY-587) Providing JDBC 4.0 support for derby

2005-10-31 Thread Craig Russell (JIRA)
[ 
http://issues.apache.org/jira/browse/DERBY-587?page=comments#action_12356473 ] 

Craig Russell commented on DERBY-587:
-

I have just a few comments.

1. The code does not appear to be consistent with regard to spaces, tabs, 
indents, and which line the { appears on. Are there coding standards to which 
we try to hold contributions? Other projects recognize that some files in the 
same project use different coding styles (standards?) but try to maintain 
standards for new files. The rule is that patches to existing files use the 
conventions already used, but new files have a standard approach. Are there any 
such standards for Derby?

2. The copyright notices definitely need to be there for this contribution.

3. I agree that for a contribution of this magnitude, a signed ICLA should be a 
requirement.

4. In response to Dan's comments immediately above, I'd think that the Apache 
board might want to discuss why the JIRA has a check box for contributions. If 
it's really irrelevant, it's certainly a distraction.




 Providing JDBC 4.0 support for derby
 

  Key: DERBY-587
  URL: http://issues.apache.org/jira/browse/DERBY-587
  Project: Derby
 Type: New Feature
   Components: JDBC
 Versions: 10.2.0.0
 Reporter: V.Narayanan
 Assignee: V.Narayanan
 Priority: Minor
  Fix For: 10.2.0.0
  Attachments: jdbc4.0.sxw, jdbc4.diff






Re: [jira] Commented: (DERBY-587) Providing JDBC 4.0 support for derby

2005-10-31 Thread Daniel John Debrunner
Craig Russell (JIRA) wrote:

 [ 
 http://issues.apache.org/jira/browse/DERBY-587?page=comments#action_12356473 
 ] 
 
 Craig Russell commented on DERBY-587:
 -

 4. In response to Dan's comments immediately above, I'd think that the Apache 
 board might want to discuss why the JIRA has a check box for contributions. 
 If it's really irrelevant, it's certainly a distraction.


Not sure it is irrelevant; at that point (in Jira) it's a choice between
"I am contributing this attachment to the ASF" vs. "I'm not".

Since there is always the option of "this is useful information but it's
not a contribution", there has to be the alternative of it *is* a
contribution. Thus how else would you label that, except "this is a
contribution under the ASL"?

Dan.