[ https://issues.apache.org/jira/browse/DERBY-4243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12723181#action_12723181 ]

Rick Hillegas commented on DERBY-4243:
--------------------------------------

Hi Trung: Should we resolve this issue too, since it arose as part of a 
workaround for DERBY-4139? Thanks.

> error on 10.4.1.3 using IBM 32-bit JDK 1.5 on AS400 V5R4
> --------------------------------------------------------
>
>                 Key: DERBY-4243
>                 URL: https://issues.apache.org/jira/browse/DERBY-4243
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.4.1.3
>         Environment: IBM 32-bit JDK 1.5 on AS400 V5R4
>            Reporter: Trung Tran
>            Priority: Critical
>
> Because of DERBY-4139, I switched from the Classic IBM JVM on the AS400 to
> the IBM Technology JVM, which uses the same codebase as the IBM JVM on AIX
> and Linux. Using the IBM Technology JVM got around the 2 GB limit on
> 10.4.1.3, but during testing I received this error in derby.log:
> 2009-05-23 03:00:09.268 GMT Thread[derby.rawStoreDaemon,5,derby.daemons] Cleanup action starting
> java.nio.channels.NonWritableChannelException
>  at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:676)
>  at org.apache.derby.impl.store.raw.data.RAFContainer4.writeFull(Unknown Source)
>  at org.apache.derby.impl.store.raw.data.RAFContainer4.writeAtOffset(Unknown Source)
>  at org.apache.derby.impl.store.raw.data.FileContainer.writeHeader(Unknown Source)
>  at org.apache.derby.impl.store.raw.data.RAFContainer.writeRAFHeader(Unknown Source)
>  at org.apache.derby.impl.store.raw.data.RAFContainer.clean(Unknown Source)
>  at org.apache.derby.impl.services.cache.ConcurrentCache.cleanAndUnkeepEntry(Unknown Source)
>  at org.apache.derby.impl.services.cache.ConcurrentCache.cleanCache(Unknown Source)
>  at org.apache.derby.impl.services.cache.ConcurrentCache.cleanAll(Unknown Source)
>  at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.checkpoint(Unknown Source)
>  at org.apache.derby.impl.store.raw.log.LogToFile.checkpointWithTran(Unknown Source)
>  at org.apache.derby.impl.store.raw.log.LogToFile.checkpoint(Unknown Source)
>  at org.apache.derby.impl.store.raw.RawStore.checkpoint(Unknown Source)
>  at org.apache.derby.impl.store.raw.log.LogToFile.performWork(Unknown Source)
>  at org.apache.derby.impl.services.daemon.BasicDaemon.serviceClient(Unknown Source)
>  at org.apache.derby.impl.services.daemon.BasicDaemon.work(Unknown Source)
>  at org.apache.derby.impl.services.daemon.BasicDaemon.run(Unknown Source)
>  at java.lang.Thread.run(Thread.java:810)
> Cleanup action completed
> After this point, the old logs from this database are not deleted even though
> new ones are generated. I'm running with archive log mode disabled, and one
> of the data files grew to 5 times its original size. Once this happens, the
> database is useless. I don't know if this problem exists on 10.4.2.0 and
> above.
>                            
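For context (an editorial note, not part of the original report): the JDK throws
java.nio.channels.NonWritableChannelException whenever a write is attempted on a
FileChannel that was opened without write access. The sketch below is a minimal,
self-contained illustration of that generic JDK behavior; the class name and
temp file are made up for the example and are not Derby code.

    import java.io.File;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.NonWritableChannelException;

    public class NonWritableChannelDemo {
        public static void main(String[] args) throws IOException {
            File f = File.createTempFile("demo", ".dat");
            // Open the file read-only ("r"); the channel inherits that mode.
            RandomAccessFile raf = new RandomAccessFile(f, "r");
            FileChannel ch = raf.getChannel();
            try {
                // Any write on a read-only channel fails immediately,
                // before touching the file.
                ch.write(ByteBuffer.wrap(new byte[] {1, 2, 3}), 0L);
            } catch (NonWritableChannelException e) {
                System.out.println("write rejected: " + e);
            } finally {
                ch.close();
                raf.close();
                f.delete();
            }
        }
    }

The stack trace above shows the same exception surfacing from
RAFContainer4.writeFull via sun.nio.ch.FileChannelImpl.write, i.e. a channel
obtained from a RandomAccessFile. If the container file were effectively opened
in "r" mode on this JVM (an assumption, not something the report confirms), a
checkpoint write would fail in exactly this way.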

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
