I am wondering if it is OK to permit the TIMESTAMPADD/TIMESTAMPDIFF
functions on soft upgrade from 10.0?
For example, if I create a trigger that uses the TIMESTAMPADD function on
soft upgrade from 10.0 to 10.2, and
when I revert back to 10.0 it gives a syntax error when the trigger is
How about modifying it to ?
Note that you should avoid using a datetime column
inside a timestamp arithmetic function in WHERE
clauses if there is an index on the column, because the
optimizer will not use any index on the column. For example, rewriting a
predicate like TIMESTAMPADD(SQL_TSI_DAY, 1, datetime_col) > ? so that
datetime_col stands alone on one side of the comparison keeps the index usable.
Thanks
-suresht.
Jeff Levitt wrote:
--- Suresh Thalamati
Project: Derby
Type: Bug
Versions: 10.1.0.0
Environment: Windows.
Reporter: Suresh Thalamati
The crash was triggered manually when the compress operation was almost at the end.
Log trace before the crash:
DEBUG LogTrace OUTPUT: Write log record: tranId=14527 instant: (23,6270154) length: 20 BeginXact
[ http://issues.apache.org/jira/browse/DERBY-361?page=all ]
Suresh Thalamati updated DERBY-361:
---
Attachment: wombat.jar
wombat.jar - database which got the unknown page format error.
Unknown page format error while doing recovery after a crash
[ http://issues.apache.org/jira/browse/DERBY-361?page=all ]
Suresh Thalamati updated DERBY-361:
---
Attachment: derby2.log
derby2.log - partial log file with log trace on.
Unknown page format error while doing recovery after a crash while
Type: Bug
Components: SQL
Versions: 10.1.0.0
Reporter: Suresh Thalamati
Priority: Minor
I think the in-place compress option should not be allowed on synonyms; currently
the engine throws an assert on a debug build and will throw an NPE on an insane build.
repro:
ij version 10.1
ij connect
Jack Klebanoff wrote:
The attached patch fixes a problem that Derby had with conflicting
referential constraints. Consider the following DDL:
create table t2( ref1 int references t1(id) on delete cascade,
ref2 int references t1(id) on delete set null)
If both the ref1
Øystein Grøvlen wrote:
As part of Derby-298, I am trying to make a test that recovers a
database on which backup was performed just before it stopped.
Following the scheme of the other tests in the storerecovery test
suite, I first have a test that stops without shutting down the
database and
[ http://issues.apache.org/jira/browse/DERBY-96?page=all ]
Suresh Thalamati closed DERBY-96:
-
partial log record writes that occur because of out-of-order writes need to
be handled by recovery
[ http://issues.apache.org/jira/browse/DERBY-235?page=all ]
Suresh Thalamati closed DERBY-235:
--
unable to create a database using a different storage factory than the one
provided by default with the engine
http://issues.apache.org/jira/browse/DERBY-321
Project: Derby
Type: Bug
Versions: 10.1.0.0
Environment: Windows jdk142
Reporter: Suresh Thalamati
Priority: Minor
While doing random crash/recovery tests, I got the following assert failure
[ http://issues.apache.org/jira/browse/DERBY-101?page=all ]
Suresh Thalamati resolved DERBY-101:
Resolution: Fixed
Fix Version: 10.1.0.0
changes: r170686, r178486:
178486: This fix increases the maximum possible log file number to 2^31 -1
[ http://issues.apache.org/jira/browse/DERBY-96?page=all ]
Suresh Thalamati resolved DERBY-96:
---
Resolution: Fixed
Fix Version: 10.1.0.0
Following changes fixed this problem:
r178494 :
small fix to make sure that log buffers are switched
Mike Matrigali wrote: Any requirement to put a specific log record next
in the stream is
snip
I believe currently log records are being written to the in-memory log buffer
while the switch is
happening (suresh, is this right?).
Not to the log buffers (in LogAccessFile.java) that are common
Øystein Grøvlen wrote:
ST == Suresh Thalamati [EMAIL PROTECTED] writes:
ST derby has two types of log files, one that works in RWS mode with a
ST preallocated log file,
What is RWS mode?
RWS mode is writing the transaction log using the write-sync mechanism
supported
[ http://issues.apache.org/jira/browse/DERBY-239?page=all ]
Suresh Thalamati reassigned DERBY-239:
--
Assign To: Suresh Thalamati
Need an online backup feature that does not block update operations when
online backup is in progress
Hi Andrew,
Almost done with the following two issues I had been working on:
DERBY-96 - transaction log checksum feature to recognize partial log
record writes that occur because
of out-of-order writes.
DERBY-101 - increasing the possible log file ids from 2^22 - 1 to 2^33
Attached is a small fix to make sure that log buffers are switched
correctly when they are full,
when the log checksum feature is disabled due to a soft upgrade.
Tests: Ran derbyall; there were only two known
failures (store/backupRestore1.
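The DERBY-101 limit discussed above comes from how a log position is encoded. As a rough sketch, a "log instant" can pack the log file number and the byte offset within that file into a single 64-bit value, so widening the file-number range means changing the bit split. The 32/32 split and all class/method names below are hypothetical assumptions for illustration, not Derby's actual LogCounter layout:

```java
// Sketch of a log instant: file number in the high bits, byte
// position within that file in the low bits. The 32/32 split is an
// assumption; the real layout determines the maximum log file number.
public class LogInstantSketch {

    // Combine a file number and a position into one 64-bit instant.
    static long makeInstant(long fileNumber, long position) {
        return (fileNumber << 32) | position;
    }

    // Recover the file number from the high 32 bits.
    static long fileNumber(long instant) {
        return instant >>> 32;
    }

    // Recover the byte position from the low 32 bits.
    static long position(long instant) {
        return instant & 0xFFFFFFFFL;
    }

    public static void main(String[] args) {
        // cf. "instant: (23,6270154)" in the log trace quoted earlier
        long instant = makeInstant(23L, 6270154L);
        System.out.println(fileNumber(instant)); // 23
        System.out.println(position(instant));   // 6270154
    }
}
```

With an encoding like this, any upgrade that widens the file-number field has to rewrite or reinterpret every instant already recorded in the log, which is why the change interacts with soft and hard upgrade.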
Catching a runtime exception like an NPE will report a wrong error if
there is
a real bug in the code at a later point in time. Unless there is a
real difference
in performance, I think it is better to do checks to make sure that an NPE
does not
occur.
Thanks
-suresht.
Mike Matrigali wrote:
Dibyendu Majumdar wrote:
Hello Kathey,
...
Killing the test and the server leaves the system corrupted.
Attached is a log file that shows the error I get when I attempt to restart
the server.
Hi Dibyendu,
Are you working on a fresh database or an old one? ID 200 seems to
have been
to be done to eke out the last few bits (or documenting it
somewhere appropriate in the code is also fine - just rather not see
what you have learned lost). Given that this
soft upgrade stuff is new, I would rather be safe this time.
/mikem
Suresh Thalamati wrote:
By looking at the code, it is kind
strongly believes it should be
2^33 - 1; I will try
to do the necessary upgrade changes.
Any comments/suggestions ?
Thanks
-suresht
Suresh Thalamati wrote:
I just realized this patch actually breaks hard upgrade also, if the
log needs to be replayed.
Because I extract the log file number and log
with a follow on
patch.
Suresh Thalamati wrote:
,
but if that is not possible then just doing it for databases created
since this version would also work - but still leave problems with old
hard upgraded databases.
I have reviewed the code and am running tests. I plan on committing
this part of the fix and letting you address upgrade issues with a follow-on
patch.
Suresh
Project: Derby
Type: Bug
Components: Store
Versions: 10.0.2.1
Reporter: Suresh Thalamati
Priority: Minor
If the system crashes after a rollforward backup, the last log file
is empty (say log2.dat). On the next crash-recovery the system ignores the empty log
file and starts
, this is enabled
+ * by setting derby.debug.true=testMaxLogFileNumber in the properties file.
+ * In non-debug mode, this test just acts as a plain log recovery test.
+ *
+ * @author <a href="mailto:[EMAIL PROTECTED]">Suresh Thalamati</a>
+ * @version 1.0
+ */
+
+public class MaxLogNumber{
+
+ MaxLogNumber
I am getting the following error:
Form Errors:
* An exception occurred: java.util.NoSuchElementException.
Thanks
-suresh
address upgrade issues with a follow on
patch.
Suresh Thalamati wrote:
...
If the database is being properly shut down, one case I can think of
which might make the Derby system spend
more time in recovery code is rollback of incomplete transactions;
this can happen if you are crashing with a
lot of uncommitted long transactions. Just curious
- how much time is it
- Why is the password hard-coded in the test harness code? Is it not possible to
specify it as a test property, e.g. on the DB URL itself?
+ String encryptUrl = "dataEncryption=true;bootPassword=Thursday";
Thanks
-suresht
Mike Matrigali wrote:
I'll look into committing this one. If
Now that the derby client code is checked in, I am wondering what the
plans are for the next release?
I am kind of done with coding the fix for handling out-of-order
transaction log writes (derby-96);
I have some more testing to do, but I should be able to complete it by the
end of this month. The
Hi Sunitha,
Good comments. I am surprised to find this patch doing more than
what I imagined it would do; your changes
seem to allow switching between a durable test mode (no syncs) and the
default durability level (all syncs). I don't get why
this mode-switch functionality is required,
[ http://issues.apache.org/jira/browse/DERBY-235?page=all ]
Suresh Thalamati resolved DERBY-235:
Resolution: Fixed
Fix Version: 10.1.0.0
A patch is committed for this problem with svn 165645
unable to create a database using a different
My two cents:
How about a cryptic one like
derby.system.durabilityLevel = 0
0 - means none.
That way someone using it will be forced to read the doc and find
out what it means :-)
-suresht
Daniel John Debrunner wrote:
David Van Couvering wrote:
I think testMode isn't very clear what it's
The problem was that the service name on database creation was getting set to just
the canonical name of the database directory,
without the subsubprotocol name added at the beginning, whereas the rest of the
system seems to expect that the
subsubprotocol name is also part of the service name. For example,
if
Stanislav Gromov wrote:
I have a question.
As I understand it, Derby doesn't load the whole file (table) into memory.
It reads pages from it.
That's correct. Derby has a page cache (a.k.a. the buffer cache in an OS) where
pages are stored in memory when they are
read from the disk or before writing to
Another thing I would do is to skip the log record that it is currently failing
on, through the debugger or by adding skip code.
That way you can find some other container that might give better
clues - of course, not always :-)
Thanks
-suresht
Mike Matrigali wrote:
Any idea if the whole db is not
/DERBY-235
Project: Derby
Type: Bug
Components: Services
Versions: 10.1.0.0
Environment: Windows , jdk142
Reporter: Suresh Thalamati
Priority: Minor
java
-Dderby.subSubProtocol.csf=org.apache.derbyTesting.functionTests.util.corruptio.CorruptDiskStorageFactory
Attached is a second patch towards implementing checksum support for the
transaction
log to handle out-of-order incomplete log writes during recovery. This
patch addresses
upgrade issues related to the new checksum log record.
Testing : Ran derbyall test suite, all tests passed.
Full upgrade : No
+1
Satheesh Bandaram wrote:
I just realized I called for a vote to accept Derby client without the
[VOTE] subject header, in the middle of a longish email. I would like to
reissue the call for vote. I call for all those who voted already to
vote again, please...
[ ] +1: Accept the contribution of
Hi,
I am looking at creating a new corruptible storage factory by
extending the engine's disk storage factory.
The purpose of this is to explicitly corrupt the I/O and do some
recovery testing. I thought the ideal place
for the corruptible storage factory is in the test code
utilities, not
performance results for new users look bad when compared to other
databases, which default to not syncing at commit time.
Part of this change should be to document the possible recovery
failures which can result by not syncing.
/mikem
Suresh Thalamati wrote:
Sunitha Kambhampati (JIRA) wrote:
Add
to increase the dynamic handling.
Also some code will have to be added as I don't think the system tracks
the difference between #1 and #2.
Suresh Thalamati wrote:
I believe providing a mode which can cause a non-bootable database, in order to gain
performance, is a bad idea.
Most users, like me, do not read
Sunitha Kambhampati (JIRA) wrote:
Add Relaxed Durability option
--
Key: DERBY-218
URL: http://issues.apache.org/jira/browse/DERBY-218
Project: Derby
Type: Improvement
Components: Store
Versions: 10.1.0.0
Environment: all
+1
Kathey Marsden wrote
Vote
+1 Reject the tally. New method of choosing a logo is determined by the community.
-1 Not to reject the tally. Keep the vote tally as is with 32 as the winner.
Thanks
Kathey
[X] 35
+1 for option 2.
Andrew McIntyre wrote:
Recently, a vote was approved that the source format for the Derby
documentation should be XML DITA. Details can be found in
these threads:
http://mail-archives.eu.apache.org/mod_mbox/db-derby-dev/200502.mbox/
[EMAIL PROTECTED]
Jeremy Boynes wrote:
Kathey Marsden wrote:
I think there are not only maintenance and development issues with
having multiple branches for the jvm versions, but usability issues as
well. Suddenly users have to think of which set of jar files to
download, we need to document when and why to use
Daniel John Debrunner wrote:
Jeremy Boynes wrote:
Reserving additional words from the second group poses a bigger issue, as
users may have databases out there already using these words as
identifiers. The smoothest path is probably to give people an indication
of which words will need to be
[ http://issues.apache.org/jira/browse/DERBY-96?page=comments#action_59041
]
Suresh Thalamati commented on DERBY-96:
---
The conclusion was to solve this problem by writing a checksum log record before
writing the log buffer, and verifying the checksum
during recovery: calculate a checksum
for a group of buffers when they are being written to the disk, and
write a checksum log record
before writing the log buffers' contents.
Any comments/suggestions ?
Thanks
-suresht
Suresh Thalamati (JIRA) wrote:
[ http://issues.apache.org/jira/browse/DERBY-96
It is strange that you are getting a lock timeout error with a single
thread. The first thing I would
do is identify what is holding the lock that causes this thread to get a lock
timeout. One way to
do that is to set the following properties in derby.properties or with
the -D option on JVM start.
The following tests are failing when run under JDK 1.5:
derbyall/derbynetmats/derbynetmats.fail:derbynet/callable.java
derbyall/derbynetmats/derbynetmats.fail:derbynet/csPrepStmt.java
derbyall/derbynetmats/derbynetmats.fail:derbynet/prepStmt.java
Example diff:
cstmt.getBigDecimal(2): 0.001
[
http://issues.apache.org/jira/browse/DERBY-133?page=comments#action_58066 ]
Suresh Thalamati commented on DERBY-133:
I don't think the whole transaction should be rolled back if an insert fails with
a foreign key constraint violation only
I agree that support for user database upgrade is not required at all
times to the trunk versions, but I believe there should be a way to
upgrade from a previous version to the trunk, for derby developers to make
sure that their changes are not breaking the upgrade path.
Thanks
-suresh
Jean T.
[ http://issues.apache.org/jira/browse/DERBY-94?page=history ]
Suresh Thalamati resolved DERBY-94:
---
Resolution: Fixed
This issue has been fixed with SVN COMMIT r123267.
Lock not being released properly, possibly related to occurrence of lock
Hi David,
I briefly looked at the import code in the org.apache.derby.impl.load
package; it does not seem to be designed to keep all the rows that
are being imported in memory. It seems to read one row at a time
through BufferedReader().
I tested importing 300 MB of data into emp(id int,
My earlier patch for this problem did not include the test case.
I am resubmitting the patch with a test case added to the store regression
suite.
-suresht
Suresh Thalamati wrote:
Problem:
Container group-level locks were not getting released when the lock is
escalated to a table-level exclusive lock
Barnet Wagman wrote:
Does anyone know what the following SQL exception (code 3) might
mean?
Java linkage error thrown during load of generated class
org.apache.derby.exe.ac601a400fx0100xefx1a6cx001b574011d
Most likely this error occurred because the size of one of the methods in
the
Amit Handa wrote:
Suresh Thalamati wrote:
Barnet Wagman wrote:
Does anyone know what the following SQL exception (code 3) might
mean?
Java linkage error thrown during load of generated class
org.apache.derby.exe.ac601a400fx0100xefx1a6cx001b574011d
Since the class is being
Gerald Khin (JIRA) wrote:
[
http://nagoya.apache.org/jira/browse/DERBY-106?page=comments#action_56877 ]
Gerald Khin commented on DERBY-106:
---
The system property derby.language.maxMemoryPerTable is the system property I
asked for. Setting it to 0
Barnet Wagman wrote:
A couple questions/issues:
Re logging: Something I read in the Derby documentation (or perhaps in
the mailing list archive) indicated that logging may be expensive. Is
there any way to disable logging completely?
I think there is no option available in derby to disable
to do with the new log record).
Suresh Thalamati (JIRA) wrote:
[
http://nagoya.apache.org/jira/browse/DERBY-96?page=comments#action_56482 ]
Suresh Thalamati commented on DERBY-96:
---
Some thoughts on how this problem could be solved
[ http://nagoya.apache.org/jira/browse/DERBY-96?page=history ]
Suresh Thalamati reassigned DERBY-96:
-
Assign To: Suresh Thalamati
partial log record writes that occur because of out-of-order writes need to
be handled by recovery
Mike Matrigali wrote:
If java ever provides a way to directly queue I/O straight from
our buffer to disk with no intermediate data copy then it may
be important to use a page based log scheme. For now it looks
like the stream interfaces being used and the JVM's optimization
of those interfaces
Daniel John Debrunner wrote:
It would be great if some one can run this test on other platforms and
post the results to the list.
On SuSE Linux, with an old Dell machine (2-CPU 733 MHz Pentium) I see no
real difference between aligned and non-aligned writes.
I used Sun's 1.4.2, 1.5.0 and
Jan Hlavatý wrote:
Suresh Thalamati wrote:
It would be great if some one can run this test on other platforms and
post the results to the list.
I have added a display of the percentage difference to make it easier to look at.
Results vary a lot, so don't take them as absolute.
Here's what I
: Derby
Type: New Feature
Components: Store
Versions: 10.0.2.1
Reporter: Suresh Thalamati
An incomplete log record write that occurs because of
out-of-order partial writes gets recognized as complete during
recovery if the first sector and last sector happen to get written
[ http://nagoya.apache.org/jira/browse/DERBY-96?page=comments#action_56482
]
Suresh Thalamati commented on DERBY-96:
---
Some thoughts on how this problem could be solved:
To identify the partial writes, some form of checksum has to be added
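As a minimal sketch of that checksum idea: compute a checksum over the log bytes before they are written, record it in a separate checksum log record ahead of them, and during recovery recompute and compare; a mismatch means the write was partial and replay should stop there. CRC32 and all names below are illustrative assumptions, not necessarily what Derby uses:

```java
import java.util.zip.CRC32;

// Sketch of detecting partial log writes with a checksum: a mismatch
// on recovery means the bytes on disk are not the bytes that were
// logged (e.g. sectors landed out of order during a crash).
public class LogChecksumSketch {

    // Compute a checksum over the log data about to be written.
    static long checksum(byte[] logData, int off, int len) {
        CRC32 crc = new CRC32();
        crc.update(logData, off, len);
        return crc.getValue();
    }

    // Recovery-side check: do the bytes found on disk match the
    // checksum stored in the checksum log record?
    static boolean verify(byte[] logData, int off, int len, long stored) {
        return checksum(logData, off, len) == stored;
    }

    public static void main(String[] args) {
        byte[] buffer = "log record payload".getBytes();
        long stored = checksum(buffer, 0, buffer.length);

        // An intact buffer verifies.
        System.out.println(verify(buffer, 0, buffer.length, stored)); // true

        // Simulate an out-of-order partial write by clobbering one byte.
        buffer[5] ^= 0x7f;
        System.out.println(verify(buffer, 0, buffer.length, stored)); // false
    }
}
```

The point of writing the checksum record first is that the length field alone cannot distinguish a complete write from one whose first and last sectors happened to reach disk, which is exactly the failure described above.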
+1 .
I think it might be helpful to add an INSTALL file in the main directory
with getting-started information
(http://incubator.apache.org/derby/manuals/getstart/gspr16.html).
Saves a trip back to the download page :-)
-suresht
Samuel Andrew McIntyre wrote:
Hello derby-dev,
It appears
Hi all,
I am trying to find out whether there would be any performance
improvement for insert/delete/update operations by modifying the
logging system to do writes aligned on sector boundaries (512 bytes).
This could possibly be done by grouping log records into 4K/8K pages. The way the
current system
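The alignment idea being measured here can be sketched as rounding each write length up to the next sector boundary, so the device never has to read-modify-write a partially filled sector. The 512-byte sector size and the round-up-with-padding policy are assumptions for illustration:

```java
// Sketch of sector-aligned log writes: pad every write so it ends on
// a 512-byte boundary. Only the length arithmetic is shown; a real
// logger would also fill the padding with a recognizable filler.
public class AlignedWriteSketch {
    static final int SECTOR = 512; // assumed sector size

    // Round a write length up to the next multiple of SECTOR.
    static int alignedLength(int len) {
        return ((len + SECTOR - 1) / SECTOR) * SECTOR;
    }

    public static void main(String[] args) {
        System.out.println(alignedLength(1));   // 512
        System.out.println(alignedLength(512)); // 512
        System.out.println(alignedLength(513)); // 1024
    }
}
```

The trade-off under test in this thread is whether the extra padded bytes cost more than the read-modify-write cycles they avoid, which is why the benchmark results vary by platform and write cache setting.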
Hi all,
I am looking into an incomplete log record write that occurs because of
out-of-order partial writes and gets recognized as complete during
recovery if the first sector and last sector happen to get written.
The current system recognizes incompletely written log records by checking
the length
Cleaned up some calls that are not being used in the log factory. Attached
is the diff file. I am not sure how the
patch command handles deleted files. Following are the files that are
deleted in the submitted patch:
D java\engine\org\apache\derby\impl\store\raw\log\SaveLWMOperation.java
D
Which one of the following properties has higher precedence? For
example, if the user sets derby.drda.keepAlive = true (assume 2 hours on a
platform) and derby.drda.connSoTimeout = 1 hour, is the connection going to
be terminated after 2 hours or 1 hour?
Thanks
-suresht
Kathey Marsden wrote:
Patch to fix the following two issues reported in Derby-32:
1) The exclusive file lock on dbex.lck is getting released before the
database is shut down, allowing multiple JVMs to boot the same database
in parallel.
2) The exclusive file lock on dbex.lck is not released even when the database
is shut down.
Hi Joe,
Does Mac OS have the concept of write cache enabling? Are the numbers below
with the write cache enabled or disabled?
-suresht
Joseph Grace wrote:
Here are some stats from OSX
1.5GHz PowerBook
5400 rpm h/d (internal), [EMAIL PROTECTED] full
HFS+ file system
FYI, notably some tests can vary
Joseph Grace wrote:
Suresh:
Does Mac OS have the concept of write cache enabling? Are the numbers below
with the write cache enabled or disabled?
Yes, OSX has write cache enabling. Presumably it's on by default
(though I admit to not knowing how to confirm that), so my numbers
include it. That's why I
up-to-date OSX 10.3.5 and ditto for OSX java:
java version 1.4.2_05
Java(TM) 2 Runtime Environment, Standard Edition 4(build 1.4.2_05-11)
Java HotSpot(TM) Client VM (build 1.4.2-38, mixed mode)
and I have not tested on any other platform.
I believe your assessments below are correct.
Suresh Thalamati