Re: Logging of exceptions

2007-08-08 Thread Felix Meschberger
Hi,

I agree that the cause must not be lost. But rather than logging the
cause inside loadBundle (in this case), I suggest that the
hasNonVirtualItemState method should not ignore the exception, but
should log it.

IMHO, if an exception is thrown, the same message should not also be logged
by the thrower, because the exception already informs about the situation.
Rather, the catcher of the exception should handle it by logging or
rethrowing.

The problem, on the other hand, is that ItemStateException does not
always indicate a problem: sometimes, IIRC, it is thrown if an item state is
just not available, and sometimes - as here - it is thrown as a
consequence of a possibly real problem. This distinction can probably
not be made by the catcher of the exception.

Regards
Felix

Am Donnerstag, den 09.08.2007, 01:05 +0200 schrieb Christoph Kiehl:
> Hi,
> 
> while preparing the testcase for JCR-1039 I found a construct like this:
> 
> protected synchronized NodePropBundle loadBundle(NodeId id)
>  throws ItemStateException {
>  [...]
>  } catch (Exception e) {
>  String msg = "failed to read bundle: " + id + ": " + e;
>  log.error(msg);
>  throw new ItemStateException(msg, e);
>  } finally {
>  [...]
> }
> 
> In the calling method the exception is handled like this:
> 
> private boolean hasNonVirtualItemState(ItemId id) {
>  [...]
>  } catch (ItemStateException ise) {
>  return false;
>  }
>  [...]
> }
> 
> The result is that you will never find the root cause "e" in the log. 
> This makes it hard to diagnose bugs without using a debugger or 
> modifying the source code.
> I would suggest using log.error(msg, e) instead of log.error(msg). I 
> would even consider log.error(msg) harmful, since there is almost 
> always an exception that should be logged if you use log.error().
> WDYT?
> 
> Cheers,
> Christoph
> 



Logging of exceptions

2007-08-08 Thread Christoph Kiehl

Hi,

while preparing the testcase for JCR-1039 I found a construct like this:

protected synchronized NodePropBundle loadBundle(NodeId id)
throws ItemStateException {
[...]
} catch (Exception e) {
String msg = "failed to read bundle: " + id + ": " + e;
log.error(msg);
throw new ItemStateException(msg, e);
} finally {
[...]
}

In the calling method the exception is handled like this:

private boolean hasNonVirtualItemState(ItemId id) {
[...]
} catch (ItemStateException ise) {
return false;
}
[...]
}

The result is that you will never find the root cause "e" in the log. 
This makes it hard to diagnose bugs without using a debugger or 
modifying the source code.
I would suggest using log.error(msg, e) instead of log.error(msg). I
would even consider log.error(msg) harmful, since there is almost
always an exception that should be logged if you use log.error().

WDYT?

Cheers,
Christoph
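The point can be sketched in a few lines of plain Java (a toy sketch, not Jackrabbit code; the class and messages here are made up): wrapping the exception with `new ItemStateException(msg, e)` keeps the root cause reachable for whoever eventually logs it, which is exactly what is lost when only log.error(msg) runs and the catcher swallows the exception.

```java
// Toy sketch, not Jackrabbit code: shows that the cause chain survives
// wrapping, so the *catcher* can still log the root cause.
public class CauseChainDemo {

    static class ItemStateException extends Exception {
        ItemStateException(String msg, Throwable cause) {
            super(msg, cause);
        }
    }

    static void loadBundle() throws ItemStateException {
        try {
            // stand-in for the low-level failure
            throw new java.io.IOException("Container has been closed");
        } catch (Exception e) {
            // rethrow with cause instead of logging the message here
            throw new ItemStateException("failed to read bundle", e);
        }
    }

    /** the catcher can still reach the root cause via getCause() */
    public static String rootCauseMessage() {
        try {
            loadBundle();
            return null;
        } catch (ItemStateException ise) {
            return ise.getCause().getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(rootCauseMessage()); // Container has been closed
    }
}
```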



[jira] Commented: (JCR-1041) Avoid using BitSets in ChildAxisQuery to minimize memory usage

2007-08-08 Thread Christoph Kiehl (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12518568
 ] 

Christoph Kiehl commented on JCR-1041:
--

Well, according to various sources 
(http://martin.nobilitas.com/java/sizeof.html and 
http://www.javaworld.com/javaworld/javatips/jw-javatip130.html?page=2) an 
Integer instance needs 16 bytes, while a native int needs 4 bytes. This is why I 
would prefer an array of ints (not even counting the TreeSet overhead: it 
uses a TreeMap, which wraps every Integer in an Entry instance).
The AdaptingHits class is only needed for corner cases where contextQuery 
result sets get really large. You could instead just use ArrayHits directly to 
achieve what a TreeSet would, just using less memory.
But I thought we could use those Hits classes in a few other places like 
DescendantSelfAxisQuery as well, where large contextQuery results are much more 
common.

> Avoid using BitSets in ChildAxisQuery to minimize memory usage
> --
>
> Key: JCR-1041
> URL: https://issues.apache.org/jira/browse/JCR-1041
> Project: Jackrabbit
>  Issue Type: Improvement
>  Components: query
>Affects Versions: 1.3
>Reporter: Christoph Kiehl
>Assignee: Christoph Kiehl
> Attachments: avoid_using_bitsets.patch
>
>
> When doing ChildAxisQueries on large indexes the internal BitSet instance 
> (hits) may consume a lot of memory because the BitSet is always as large as 
> IndexReader.maxDoc(). In our case we had a query consisting of 7 
> ChildAxisQueries which combined to a total of 14MB. Since we have multiple 
> users executing this query simultaneously this caused an out of memory error.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
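Christoph's size comparison can be turned into a back-of-envelope sketch (plain Java, not Jackrabbit code; the per-object sizes are the assumptions from the articles he cites, and the BitSet figure follows from it always spanning maxDoc() bits):

```java
// Back-of-envelope memory estimates; the 16-byte Integer and 4-byte int
// figures are assumptions taken from the cited articles, not measurements.
public class HitsMemoryEstimate {

    /** an int[] holding n document ids */
    public static long intArrayBytes(int n) {
        return 4L * n;
    }

    /** n boxed Integers (TreeMap Entry wrappers ignored entirely) */
    public static long boxedIntegerBytes(int n) {
        return 16L * n;
    }

    /** a BitSet sized to the whole index, regardless of hit count */
    public static long bitSetBytes(int maxDoc) {
        return maxDoc / 8L;
    }

    public static void main(String[] args) {
        // 10000 hits in an index of 16 million documents:
        System.out.println(intArrayBytes(10000));     // 40000
        System.out.println(boxedIntegerBytes(10000)); // 160000
        System.out.println(bitSetBytes(16000000));    // 2000000 (~2MB per query)
    }
}
```

With numbers like these, an int[] of actual hits beats both the boxed TreeSet and a full-index BitSet whenever the hit count is small relative to maxDoc().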



Re: Better Jira components

2007-08-08 Thread Jukka Zitting
Hi,

On 8/8/07, Padraic Hannon <[EMAIL PROTECTED]> wrote:
> Managing releases that span multiple projects does tend to be a pain in
> Jira as the version field is specific to a project. This means you
> cannot query for all tasks for a release across multiple projects unless
> you create a separate release field (at least that has been our
> experience here at Edmunds).

The idea of splitting Jackrabbit into multiple Jira projects would
be tied to starting independent release cycles for each subproject
that is big enough to warrant its own Jira project. This way we would
keep a simple mapping between releases and Jira projects.

BR,

Jukka Zitting


Re: Better Jira components

2007-08-08 Thread Padraic Hannon
Managing releases that span multiple projects does tend to be a pain in 
Jira as the version field is specific to a project. This means you 
cannot query for all tasks for a release across multiple projects unless 
you create a separate release field (at least that has been our 
experience here at Edmunds).


-paddy

Jukka Zitting wrote:

Hi,

On 8/8/07, Stefan Guggisberg <[EMAIL PROTECTED]> wrote:
i'd definitely like to keep the finer granularity in core.
ideally there should be something like subcomponents
but prefixing would be fine with me.



Eventually (perhaps starting with Jackrabbit 2.0) we may want to start
having separate release cycles for the different subprojects, which
means that we would also have separate Jira projects and components
for each Jackrabbit subproject. There's some overhead to doing that,
so I'd like to do that only when the pain of doing synchronous
releases (like nobody being able to review too big release candidates)
gets too big.

BR,

Jukka Zitting
  




[jira] Updated: (JCR-1005) More Fine grained Permission Flags

2007-08-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/JCR-1005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Claus Köll updated JCR-1005:


Attachment: acces.patch

A patch that would help me.
Hope somebody will look at it and comment ...

> More Fine grained Permission Flags
> --
>
> Key: JCR-1005
> URL: https://issues.apache.org/jira/browse/JCR-1005
> Project: Jackrabbit
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.3
>Reporter: Claus Köll
> Attachments: acces.patch
>
>
> It would be fine to have one more permission flag on node add.
> At the moment there are 3 flags. We need to know whether a node will be updated 
> or created.
> This is not possible with the current implementation, because on node add the 
> permission flag 
> AccessManager.WRITE will be used. This is a problem in a WebDAV scenario 
> with Microsoft Word: if I open a node and 
> try to save it, I need write permissions on the parent node; this is OK. But if 
> a user tries to save the file under another name,
> he can, because the same permission flag will be used.
> Maybe there is another solution for this problem?
> BR,
> claus

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



IndexingConfiguration jr 1.4 release, analyzing, searching and synonymprovider

2007-08-08 Thread Ard Schrijvers
Hello, 

and sorry for spamming, but I just want to share my findings/impressions, and 
what I am posting I am willing to implement and port to the Jackrabbit trunk 
(so if you bother to read it, and are positive about it, I will implement it 
:-) )

(if you make it to the end of this mail, I also describe how simple it would 
become to add the SynonymProvider functionality that was just created in the trunk)

First of all, the IndexingConfiguration, very promising! Exactly what we need 
for better indexing, and, consequently better search results. Because, in the 
end, what good is a repository when customers can't find the results they are 
looking for? Storing, versioning, workflow, all very important, but no good 
when nobody can find their content (duhh, obviously).

So, one part that bothers me is multilinguality (with language-specific 
stopwords, stemming, synonyms). Many customers these days want multilingual 
sites, and want to search them accordingly. And, obviously, lucene has quite 
some code for exactly this: see contrib/analyzers/src/java. 

Obviously, lucene has many more analyzers, and you can easily add your own. 
AFAIU, there is a single configuration place where I can define the overall 
JackRabbit analyzer that is used within one workspace: 

in repository.xml :



but, what I want, is a per-property definable analyzer (I would give body_fr a 
french analyzer, body_de a german one; some properties I might want to be indexed 
with keyword analyzers, like zipcodes). The best place for this, IMO, is the 
IndexingConfiguration: then, if you do not configure it, nothing changes for 
you.
 
So, for example, the first index rule at 
http://wiki.apache.org/jackrabbit/IndexingConfiguration would change into:


text_de


and during loading, we construct a Map of {jr-property, analyzer} (call it 
propertyAnalyzerMap). Then, all we need to add is one jackrabbit global 
analyzer, that looks like:

import java.io.Reader;
import java.util.Map;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;

class JRAnalyzer extends Analyzer {

    /** {jr-property, analyzer} map built from the indexing configuration */
    private final Map propertyAnalyzerMap;

    private final Analyzer defaultAnalyzer = new StandardAnalyzer();

    JRAnalyzer(Map propertyAnalyzerMap) {
        this.propertyAnalyzerMap = propertyAnalyzerMap;
    }

    public TokenStream tokenStream(String fieldName, Reader reader) {
        Analyzer analyzer = (Analyzer) propertyAnalyzerMap.get(fieldName);
        if (analyzer != null) {
            return analyzer.tokenStream(fieldName, reader);
        }
        return defaultAnalyzer.tokenStream(fieldName, reader);
    }
}

This very same JRAnalyzer is also used for the QueryParser in 
LuceneQueryBuilder, so this will also work for searching, IIUC. So, WDOT? I can 
implement it and send a patch, but if the community is reluctant, I will 
have to do it for myself in a way that does not touch the jr code.

Example of the SynonymProvider mentioned at the top:

If my suggested changes are accepted, things like a SynonymProvider become 
superfluous and very easy to add on the fly:

suppose I always want full-text searching with dutch synonyms on the "body" 
property of my nodes. This boils down to adding an analyzer for this property 
that extends the DutchAnalyzer in lucene and adds synonym functionality (there 
is a very simple example in the "lucene in action" book). I think it is better 
to do synonyms during analyzing (as opposed to the SynonymProvider in the jr 
trunk), and simply use an analyzer for it. Of course, a difference is that with 
the current SynonymProvider you specifically have to state that you want a 
synonym search (~term), while with an analyzer you define which properties 
should be indexed with a synonym analyzer, and they are searched accordingly 
(without having to specify it).
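The index-time synonym idea can be sketched independently of lucene (plain Java; the dutch synonym map here is a made-up example): each token is indexed together with its synonyms, so an ordinary query matches without the caller asking for a ~term synonym search.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of index-time synonym expansion (not Lucene or Jackrabbit code;
// the synonym map is a made-up example).
public class SynonymExpansion {

    private final Map<String, List<String>> synonyms;

    public SynonymExpansion(Map<String, List<String>> synonyms) {
        this.synonyms = synonyms;
    }

    /** returns each token followed by its synonyms, in order */
    public List<String> expand(List<String> tokens) {
        List<String> out = new ArrayList<String>();
        for (String token : tokens) {
            out.add(token);
            List<String> extra = synonyms.get(token);
            if (extra != null) {
                out.addAll(extra);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<String>> map = new HashMap<String, List<String>>();
        map.put("auto", Arrays.asList("wagen", "voertuig"));
        SynonymExpansion exp = new SynonymExpansion(map);
        System.out.println(exp.expand(Arrays.asList("rode", "auto")));
        // [rode, auto, wagen, voertuig]
    }
}
```

A real analyzer would do the same expansion inside its TokenStream; the point is that the extra terms land in the index, so querying needs no special syntax.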

So WDOT? Again, sorry for mailing so much, just trying to sell my ideas :-) 

 
-- 

Hippo
Oosteinde 11
1017WT Amsterdam
The Netherlands
Tel  +31 (0)20 5224466
-
[EMAIL PROTECTED] / [EMAIL PROTECTED] / http://www.hippo.nl
-- 


[jira] Resolved: (JCR-1056) JCR2SPI: improve ItemDefinitionProviderImpl.getMatchingPropdef to better handle multiple residuals

2007-08-08 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-1056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke resolved JCR-1056.
-

Resolution: Fixed

Changed as proposed with revision 563916.


> JCR2SPI: improve ItemDefinitionProviderImpl.getMatchingPropdef to better 
> handle multiple residuals
> --
>
> Key: JCR-1056
> URL: https://issues.apache.org/jira/browse/JCR-1056
> Project: Jackrabbit
>  Issue Type: Bug
>  Components: SPI
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>
> When a new property is set with unknown type (missing PropertyType 
> parameter), ItemDefinitionProviderImpl.getMatchingPropdef() is used to find 
> an applicable property definition.
> There may be cases where multiple residual property defs may match, for 
> instance, when the repository allows only a certain set of property types on 
> that node type.
> In this case, when the set of allowable types includes STRING, that propdef 
> should be returned. After all, the client did not specify the type, so STRING 
> is most likely the best match.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (JCR-1056) JCR2SPI: improve ItemDefinitionProviderImpl.getMatchingPropdef to better handle multiple residuals

2007-08-08 Thread Julian Reschke (JIRA)
JCR2SPI: improve ItemDefinitionProviderImpl.getMatchingPropdef to better handle 
multiple residuals
--

 Key: JCR-1056
 URL: https://issues.apache.org/jira/browse/JCR-1056
 Project: Jackrabbit
  Issue Type: Bug
  Components: SPI
Reporter: Julian Reschke
Priority: Minor


When a new property is set with unknown type (missing PropertyType parameter), 
ItemDefinitionProviderImpl.getMatchingPropdef() is used to find an applicable 
property definition.

There may be cases where multiple residual property defs may match, for 
instance, when the repository allows only a certain set of property types on 
that node type.

In this case, when the set of allowable types includes STRING, that propdef 
should be returned. After all, the client did not specify the type, so STRING 
is most likely the best match.


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
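The proposed selection rule can be sketched as follows (a hypothetical sketch, not the actual ItemDefinitionProviderImpl code; the int constants mirror the values of javax.jcr.PropertyType): among the matching residual definitions, prefer the one that requires STRING, otherwise fall back to the first match.

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the proposed preference for STRING among
// multiple matching residual property definitions.
public class ResidualPropDefChooser {

    // values as defined by javax.jcr.PropertyType
    static final int STRING = 1;
    static final int BINARY = 2;
    static final int LONG = 3;

    /** pick the required type STRING if any def allows it, else the first match */
    public static int choose(List<Integer> matchingRequiredTypes) {
        for (int type : matchingRequiredTypes) {
            if (type == STRING) {
                return STRING;
            }
        }
        return matchingRequiredTypes.get(0);
    }

    public static void main(String[] args) {
        System.out.println(choose(Arrays.asList(LONG, BINARY, STRING))); // 1
        System.out.println(choose(Arrays.asList(LONG, BINARY)));         // 3
    }
}
```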



[jira] Assigned: (JCR-1056) JCR2SPI: improve ItemDefinitionProviderImpl.getMatchingPropdef to better handle multiple residuals

2007-08-08 Thread Julian Reschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-1056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julian Reschke reassigned JCR-1056:
---

Assignee: Julian Reschke

> JCR2SPI: improve ItemDefinitionProviderImpl.getMatchingPropdef to better 
> handle multiple residuals
> --
>
> Key: JCR-1056
> URL: https://issues.apache.org/jira/browse/JCR-1056
> Project: Jackrabbit
>  Issue Type: Bug
>  Components: SPI
>Reporter: Julian Reschke
>Assignee: Julian Reschke
>Priority: Minor
>
> When a new property is set with unknown type (missing PropertyType 
> parameter), ItemDefinitionProviderImpl.getMatchingPropdef() is used to find 
> an applicable property definition.
> There may be cases where multiple residual property defs may match, for 
> instance, when the repository allows only a certain set of property types on 
> that node type.
> In this case, when the set of allowable types includes STRING, that propdef 
> should be returned. After all, the client did not specify the type, so STRING 
> is most likely the best match.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



improving the scalability in searching part 2

2007-08-08 Thread Ard Schrijvers
Problem 2:

2) The XPath jcr:like implementation, for example : //*[jcr:like(@mytext,'%foo 
bar qu%')]

The jcr:like implementation (the same holds for sql) is translated to a 
JackRabbit WildcardQuery, which in turn uses a WildcardTermEnum, which has a 
"protected boolean termCompare(Term term)" method (though I haven't sorted out 
where the exact bottleneck is).

Now, it boils down to this: when you search for nodes which have some string in 
some property, this implies scanning UN_TOKENIZED fields in lucene, which is, 
IMHO, not the way to do it (though I haven't yet got *the* solution for the 
wildcard parts; without the wildcards, obviously a PhraseQuery would do on the 
indexed TOKENIZED property field). 

Anyway, the current jcr:like results in queries taking up to 10 seconds to 
complete for only 1000 nodes with one property, "mytext", which is on average 
500 words long. A cached IndexReader won't make it faster. 

The jcr:like is, I think, not usable in its current implementation. 
Perhaps somebody else knows how to use the PhraseQuery in a way that 
suits our needs (I already asked on the lucene list whether there is some best 
way to implement a 'like' functionality).
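The linear behaviour described above can be illustrated in plain Java (a toy sketch, not Jackrabbit's WildcardQuery; the data is made up): an infix pattern such as '%foo bar qu%' forces a test of every stored value, so the cost grows with the number of nodes.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy illustration of an infix "like" scan: every value must be tested,
// so the work is O(number of nodes) regardless of how many match.
public class NaiveLikeScan {

    /** linear scan: returns indices of values containing the infix */
    public static List<Integer> like(List<String> values, String infix) {
        List<Integer> hits = new ArrayList<Integer>();
        for (int i = 0; i < values.size(); i++) {
            if (values.get(i).contains(infix)) {
                hits.add(i);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> mytext = Arrays.asList(
                "some foo bar quux text", "unrelated", "more foo bar quick");
        System.out.println(like(mytext, "foo bar qu")); // [0, 2]
    }
}
```

An index can only avoid this when the pattern has no leading wildcard; with '%...%' every term (or value) is a candidate, which matches the timings Ard reports.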

Regards Ard

-- 

Hippo
Oosteinde 11
1017WT Amsterdam
The Netherlands
Tel  +31 (0)20 5224466
-
[EMAIL PROTECTED] / [EMAIL PROTECTED] / http://www.hippo.nl
-- 


[jira] Resolved: (JCR-1039) Bundle Persistence Manager error - failing to read bundle the first time

2007-08-08 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg resolved JCR-1039.


Resolution: Fixed

fixed in svn r563900 by committing patch.

> Bundle Persistence Manager error - failing to read bundle the first time
> 
>
> Key: JCR-1039
> URL: https://issues.apache.org/jira/browse/JCR-1039
> Project: Jackrabbit
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.3.1
> Environment: Windows
>Reporter: Sridhar
> Fix For: 1.4
>
> Attachments: jackrabbit-core-testcase.patch, 
> JCR-1039.BundleDbPersistenceManager.diff
>
>
> Code:
> NodeIterator entiter = null;
> Node root = null, contNode = null, entsNode = null;
> try
> {
> root = session.getRootNode();
> contNode = root.getNode("sr:cont");
> entsNode = contNode.getNode("sr:ents");
> entiter = entsNode.getNodes();
> }
> catch (Exception e)
> {
> logger.error("Getting ents nodes", e);
> }
> Output:
> 12359 [http-8080-Processor24] ERROR 
> org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager - 
> failed to read bundle: c3a09c19-cc6b-45bd-a42e-c4c925b67d02: 
> java.io.IOException: ERROR 40XD0: Container has been closed
> 12375 [http-8080-Processor24] ERROR com.taxila.editor.sm.RepoOperations - 
> Getting ents nodes
> javax.jcr.PathNotFoundException: sr:ents
> at org.apache.jackrabbit.core.NodeImpl.getNode(NodeImpl.java:2435)
> at com.taxila.editor.sm.RepoOperations.getEntityNodes 
> (RepoOperations.java:4583)
> at 
> com.taxila.editor.sm.RepoOperations.displayEntities(RepoOperations.java:4159)
> at 
> com.taxila.editor.sm.RepoOperations.displayEntities(RepoOperations.java:4114)
> at com.taxila.editor.em.um.MainEntityForm.reset (MainEntityForm.java:215)
> at org.apache.struts.taglib.html.FormTag.doStartTag(FormTag.java:640)
> at 
> org.apache.jsp.pages.jsp.entity.MainEntity_jsp._jspService(MainEntity_jsp.java:414)
> at org.apache.jasper.runtime.HttpJspBase.service (HttpJspBase.java:97)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
> at 
> org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:332)
> at org.apache.jasper.servlet.JspServlet.serviceJspFile 
> (JspServlet.java:314)
> at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter 
> (ApplicationFilterChain.java:252)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
> at 
> org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java
>  :672)
> at 
> org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463)
> at 
> org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398)
> at org.apache.catalina.core.ApplicationDispatcher.forward 
> (ApplicationDispatcher.java:301)
> at 
> org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1014)
> at 
> org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:417)
> at 
> org.apache.struts.action.RequestProcessor.processActionForward(RequestProcessor.java:390)
> at 
> org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:271)
> at org.apache.struts.action.ActionServlet.process 
> (ActionServlet.java:1292)
> at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:510)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
> at javax.servlet.http.HttpServlet.service (HttpServlet.java:802)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java
>  :173)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
> at org.apache.catalina.core.StandardHostValve.invoke 
> (StandardHostValve.java:126)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
> at org.apache.catalina.connector.CoyoteAdapter.service 
> (CoyoteAdapter.java:148)
> at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:869)
> at 
> org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java
>  :664)
> at 
> org.apache.tomcat.util.net.PoolTcpEndpoint.process

[jira] Updated: (JCR-1039) Bundle Persistence Manager error - failing to read bundle the first time

2007-08-08 Thread Stefan Guggisberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stefan Guggisberg updated JCR-1039:
---

Fix Version/s: 1.4
Affects Version/s: (was: 1.3)
   1.3.1

> Bundle Persistence Manager error - failing to read bundle the first time
> 
>
> Key: JCR-1039
> URL: https://issues.apache.org/jira/browse/JCR-1039
> Project: Jackrabbit
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.3.1
> Environment: Windows
>Reporter: Sridhar
> Fix For: 1.4
>
> Attachments: jackrabbit-core-testcase.patch, 
> JCR-1039.BundleDbPersistenceManager.diff
>
>
> Code:
> NodeIterator entiter = null;
> Node root = null, contNode = null, entsNode = null;
> try
> {
> root = session.getRootNode();
> contNode = root.getNode("sr:cont");
> entsNode = contNode.getNode("sr:ents");
> entiter = entsNode.getNodes();
> }
> catch (Exception e)
> {
> logger.error("Getting ents nodes", e);
> }
> Output:
> 12359 [http-8080-Processor24] ERROR 
> org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager - 
> failed to read bundle: c3a09c19-cc6b-45bd-a42e-c4c925b67d02: 
> java.io.IOException: ERROR 40XD0: Container has been closed
> 12375 [http-8080-Processor24] ERROR com.taxila.editor.sm.RepoOperations - 
> Getting ents nodes
> javax.jcr.PathNotFoundException: sr:ents
> at org.apache.jackrabbit.core.NodeImpl.getNode(NodeImpl.java:2435)
> at com.taxila.editor.sm.RepoOperations.getEntityNodes 
> (RepoOperations.java:4583)
> at 
> com.taxila.editor.sm.RepoOperations.displayEntities(RepoOperations.java:4159)
> at 
> com.taxila.editor.sm.RepoOperations.displayEntities(RepoOperations.java:4114)
> at com.taxila.editor.em.um.MainEntityForm.reset (MainEntityForm.java:215)
> at org.apache.struts.taglib.html.FormTag.doStartTag(FormTag.java:640)
> at 
> org.apache.jsp.pages.jsp.entity.MainEntity_jsp._jspService(MainEntity_jsp.java:414)
> at org.apache.jasper.runtime.HttpJspBase.service (HttpJspBase.java:97)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
> at 
> org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:332)
> at org.apache.jasper.servlet.JspServlet.serviceJspFile 
> (JspServlet.java:314)
> at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter 
> (ApplicationFilterChain.java:252)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
> at 
> org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java
>  :672)
> at 
> org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463)
> at 
> org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398)
> at org.apache.catalina.core.ApplicationDispatcher.forward 
> (ApplicationDispatcher.java:301)
> at 
> org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1014)
> at 
> org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:417)
> at 
> org.apache.struts.action.RequestProcessor.processActionForward(RequestProcessor.java:390)
> at 
> org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:271)
> at org.apache.struts.action.ActionServlet.process 
> (ActionServlet.java:1292)
> at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:510)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
> at javax.servlet.http.HttpServlet.service (HttpServlet.java:802)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java
>  :173)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
> at org.apache.catalina.core.StandardHostValve.invoke 
> (StandardHostValve.java:126)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
> at org.apache.catalina.connector.CoyoteAdapter.service 
> (CoyoteAdapter.java:148)
> at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:869)
> at 
> org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java
>  :664)
> at 
> org.apache.tomcat.util.

Re: [Proposal] Sling

2007-08-08 Thread Alexandru Popescu ☀
On 8/8/07, Felix Meschberger <[EMAIL PROTECTED]> wrote:
> Hi Alex,
>
> Am Mittwoch, den 08.08.2007, 17:03 +0300 schrieb Alexandru Popescu ☀:
> > Very interesting initiative Felix! Still, I am wondering why this
> > would not start as a Jackrabbit contrib project, and I don't think I
> > have seen this in the attached proposal.
>
> The reason for not simply adding it as a contrib to Jackrabbit is the
> "size" of the project: It consists of roughly 30 sub-projects. Pushing a
> project of this size simply as a contrib to an existing project has been
> considered as going through the backdoor and not being the Apache way.
>
> I have not taken a note on this in the proposal because we just could
> not imagine just hacking it into Jackrabbit as a contrib.
>

I see. Maybe this should be clarified in the proposal, so that others
will not have to ask :-).

As I mentioned when you sent out the first email, I am very
interested in seeing what this project is about.

Looking forward to hearing more,

./alex
--
.w( the_mindstorm )p.



> Hope this helps.
>
> Regards
> Felix
>
>


Re: [Proposal] Sling

2007-08-08 Thread Felix Meschberger
Hi Alex,

Am Mittwoch, den 08.08.2007, 17:03 +0300 schrieb Alexandru Popescu ☀:
> Very interesting initiative Felix! Still, I am wondering why this
> would not start as a Jackrabbit contrib project, and I don't think I
> have seen this in the attached proposal.

The reason for not simply adding it as a contrib to Jackrabbit is the
"size" of the project: It consists of roughly 30 sub-projects. Pushing a
project of this size simply as a contrib to an existing project has been
considered as going through the backdoor and not being the Apache way.

I have not taken a note on this in the proposal because we just could
not imagine just hacking it into Jackrabbit as a contrib.

Hope this helps.

Regards
Felix



improving the scalability in searching

2007-08-08 Thread Ard Schrijvers
Hello,

As mentioned in https://issues.apache.org/jira/browse/JCR-1051, I think there 
might be some room for optimization in scalability and performance in some parts 
of the current lucene implementation.

For now, I have two major performance/scalability concerns in the current 
indexing/searching implementation :

1) The XPath implementation for //*[@title] (sql has the same problem)
2) The XPath jcr:like implementation, for example : //*[jcr:like(@mytext,'%foo 
bar qu%')]

Problem 1):

//*[@title] is transformed into the 
org.apache.jackrabbit.core.query.lucene.MatchAllQuery, which through the 
MatchAllWeight uses the MatchAllScorer. In this MatchAllScorer there is a 
calculateDocFilter() that IMO does not scale. Suppose I have 100.000 nodes 
with a property 'title', and suppose there are no duplicate titles (or few).

Now, suppose I have the XPath /rootnode/articles/*[@title]. Then the while 
loop in calculateDocFilter() is executed 100.000 times (see code below): 
100.000 times 

terms.term().text().startsWith(FieldNames.createNamedValue(field, ""))
docs.seek(terms);
docFilter.set(docs.doc());

This scales linearly AFAIU, and becomes slow pretty fast (I can add a unit test 
that shows this, but on my modest machine I already see searches take 400 ms 
for 100.000 nodes with a cached reader, while it could easily be 0 ms IIULC: 
"if I understand lucene correctly" :-) ). 

Solution 1):

IMO, we should index more (derived) data about a document's properties (I'll 
return to this in a mail about IndexingConfiguration, to which I think we can 
add some features that might tackle this) if we want to be able to query fast. 
For this specific problem the solution would be very simple:

Beside 

/**
 * Name of the field that contains all values of properties that are indexed
 * as is without tokenizing. Terms are prefixed with the property name.
 */
public static final String PROPERTIES = "_:PROPERTIES".intern();

I suggest to add 

/**
 * Name of the field that contains the names of all properties that are
 * present on a certain node.
 */
public static final String PROPERTIES_SET = "_:PROPERTIES_SET".intern();

and when indexing a node, each property name of that node is added to its index 
(a few lines of code in NodeIndexer).

Then, finding all nodes that have a given property is one single 
docs.seek(terms) plus setting the docFilter. This approach easily scales to 
millions of documents, with times close to 0 ms. WDOT? Of course, I can 
implement this in the trunk.

I will do problem (2) in a next mail because my mail is getting a little long,

Regards Ard

-
TermEnum terms = reader.terms(new Term(FieldNames.PROPERTIES,
        FieldNames.createNamedValue(field, "")));
try {
    TermDocs docs = reader.termDocs();
    try {
        while (terms.term() != null
                && terms.term().field() == FieldNames.PROPERTIES
                && terms.term().text().startsWith(
                        FieldNames.createNamedValue(field, ""))) {
            docs.seek(terms);
            counter++;
            while (docs.next()) {
                docFilter.set(docs.doc());
            }
            terms.next();
        }
    } finally {
        docs.close();
    }
} finally {
    terms.close();
}

-
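The proposed PROPERTIES_SET idea can be sketched in plain Java (the maps stand in for the Lucene index; this is an illustration, not the actual patch): indexing the property *name* once per node turns "all nodes with a title property" into a single lookup instead of a scan over every distinct value term.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

// Plain-Java stand-in for the index; not Lucene or Jackrabbit code.
public class PropertiesSetSketch {

    // term ("name:value") -> doc ids, like the existing _:PROPERTIES field
    final Map<String, Set<Integer>> propertiesField = new TreeMap<String, Set<Integer>>();

    // property name -> doc ids, the proposed _:PROPERTIES_SET field
    final Map<String, Set<Integer>> propertiesSetField = new HashMap<String, Set<Integer>>();

    void index(int doc, String property, String value) {
        propertiesField.computeIfAbsent(property + ":" + value, k -> new TreeSet<>()).add(doc);
        propertiesSetField.computeIfAbsent(property, k -> new TreeSet<>()).add(doc);
    }

    /** current approach: walk all terms prefixed with the property name */
    Set<Integer> hasPropertyByScan(String property) {
        Set<Integer> hits = new TreeSet<Integer>();
        for (Map.Entry<String, Set<Integer>> e : propertiesField.entrySet()) {
            if (e.getKey().startsWith(property + ":")) {
                hits.addAll(e.getValue()); // one seek per distinct value
            }
        }
        return hits;
    }

    /** proposed approach: one seek on the property-name term */
    Set<Integer> hasPropertyBySet(String property) {
        Set<Integer> hits = propertiesSetField.get(property);
        return hits != null ? hits : new TreeSet<Integer>();
    }

    public static void main(String[] args) {
        PropertiesSetSketch idx = new PropertiesSetSketch();
        idx.index(0, "title", "foo");
        idx.index(1, "title", "bar");
        idx.index(2, "body", "baz");
        System.out.println(idx.hasPropertyByScan("title")); // [0, 1]
        System.out.println(idx.hasPropertyBySet("title"));  // [0, 1]
    }
}
```

When titles are (nearly) unique, the scan does one seek per node, while the set lookup is a single seek; that is the linear-versus-constant difference Ard describes.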

-- 

Hippo
Oosteinde 11
1017WT Amsterdam
The Netherlands
Tel  +31 (0)20 5224466
-
[EMAIL PROTECTED] / [EMAIL PROTECTED] / http://www.hippo.nl
-- 


Re: [Proposal] Sling

2007-08-08 Thread Alexandru Popescu ☀
Very interesting initiative Felix! Still, I am wondering why this
would not start as a Jackrabbit contrib project, and I don't think I
have seen this in the attached proposal.

tia,

./alex
--
.w( the_mindstorm )p.


On 8/8/07, Felix Meschberger <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I would like to propose a new Apache project named Sling. The proposal
> text is attached to this message and can also be found at [1].
>
> More information on Sling may be found in the Jackrabbit Wiki at [2].
>
> I would like to have the discussion on this proposal to go on in
> parallel in the Jackrabbit Developers and Incubator General mailing
> lists to gather as much feedback as possible. The vote for submission of
> the project will be held on the Jackrabbit Developers list.
>
> Of course we still welcome people willing to contribute to Sling or
> merely serve as additional mentors.
>
> Regards
> Felix
>
> [1] http://wiki.apache.org/jackrabbit/SlingProposal
> [2] http://wiki.apache.org/jackrabbit/ApacheSling
>
>
> Sling Proposal
> ==
>
> This is a draft version of a proposal to take ApacheSling to the
> Apache Incubator with a goal of becoming an Apache Jackrabbit
> subproject. Discuss this proposal on the Jackrabbit Dev List. See the
> proposal guide for a description of the expected proposal content.
>
>
> Abstract
> Sling is a framework to develop content centric web applications based
> on the idea of modularizing the rendering of HTTP resources.
>
>
> Proposal
> Sling allows easy development of web applications which are centered
> around content which is acted upon. As such each request URL addresses a
> Content object which in turn resolves to a Component, which finally is
> responsible for handling the request and providing a response. Though
> Content is a Java abstraction, the main source of data is a Java Content
> Repository conforming to the JSR 170 API such as Apache Jackrabbit.
>
>
> Background
> Sling came to life in an effort to rewrite the central request
> processing engine of Day Communiqué 4.0 and make it available for other
> applications requiring an easy to use and flexible web application
> framework. As such, the guidelines to develop Sling can be summarized as
> follows:
>
>
> Modularization:
> Functional blocks of the processing engine should be split to
> enable independent upgrade and replacement. At the same time
> some degree of dependency management amongst the modules is
> required.
>
> Runtime Management:
> Modules must be enabled to be separately started and stopped as well
> as installed, upgraded and removed.
>
> Components:
> Recognizing the need to componentize development of Web
> Applications and have easy mix and match for such components to
> build web pages, the basic building blocks of web pages are
> called components.
>
> Content Repository:
> Leading the Java Content Repository (JCR) initiative a new
> request processing engine must natively support JCR (e.g. Apache
> Jackrabbit) as the store for its content.
>
>
> By leveraging the OSGi core service platform specification the
> modularization and runtime management goals can be more than met. Sling
> is therefore built as a set of bundles. Additionally Sling provides a
> standalone application launcher and a web application to launch Apache
> Felix as its OSGi framework to deploy the Sling bundles into.
>
>
> Rationale
> Content repositories, as defined in the Content Repository for Java
> Technology API (JCR), are well suited to work as content stores of web
> applications. However, the JCR API deals with generic Nodes and
> Properties and not with business objects that would be more meaningful
> to application developers. Therefore one of the building blocks of Sling
> is the integration of a content mapping infrastructure, namely
> Jackrabbit Object Content Mapping.
>
> Another cause of regular headaches in current web application frameworks
> is managing the life cycle of long-running applications: Add new
> functionality, fix bugs, starting and stopping modules. This is where
> the OSGi service platform comes into play. This specification provides
> both help in the area of modularization and lifecycle management (and
> more, actually) and definitions of services, so called Compendium
> Services, which help concentrate on the core application and not worry
> about issues such as logging, configuration management etc. Sling uses
> Apache Felix as the OSGi framework.
>
> Third, a request will generally not be handled by a single Component but
> a series of Components. The idea is that a request URL addresses a Content
> object, which is mapped from a JCR Node and which is serviced by a
> Component. The Component may then access child Content objects, aka
> child nodes, and have Sling service those Content objects through their
> Components again. Using this mechanism, each Component only contributes
> to pa

[Proposal] Sling

2007-08-08 Thread Felix Meschberger
Hi all,

I would like to propose a new Apache project named Sling. The proposal
text is attached to this message and can also be found at [1].

More information on Sling may be found in the Jackrabbit Wiki at [2].

I would like to have the discussion on this proposal to go on in
parallel in the Jackrabbit Developers and Incubator General mailing
lists to gather as much feedback as possible. The vote for submission of
the project will be held on the Jackrabbit Developers list.

Of course we still welcome people willing to contribute to Sling or
merely serve as additional mentors.

Regards
Felix

[1] http://wiki.apache.org/jackrabbit/SlingProposal
[2] http://wiki.apache.org/jackrabbit/ApacheSling


Sling Proposal
==

This is a draft version of a proposal to take ApacheSling to the
Apache Incubator with a goal of becoming an Apache Jackrabbit
subproject. Discuss this proposal on the Jackrabbit Dev List. See the
proposal guide for a description of the expected proposal content. 


Abstract
Sling is a framework to develop content centric web applications based
on the idea of modularizing the rendering of HTTP resources. 


Proposal
Sling allows easy development of web applications which are centered
around content which is acted upon. As such each request URL addresses a
Content object which in turn resolves to a Component, which finally is
responsible for handling the request and providing a response. Though
Content is a Java abstraction, the main source of data is a Java Content
Repository conforming to the JSR 170 API such as Apache Jackrabbit. 


Background
Sling came to life in an effort to rewrite the central request
processing engine of Day Communiqué 4.0 and make it available for other
applications requiring an easy to use and flexible web application
framework. As such, the guidelines to develop Sling can be summarized as
follows: 


Modularization:
Functional blocks of the processing engine should be split to
enable independent upgrade and replacement. At the same time
some degree of dependency management amongst the modules is
required. 

Runtime Management:
Modules must be enabled to be separately started and stopped as well
as installed, upgraded and removed. 

Components:
Recognizing the need to componentize development of Web
Applications and have easy mix and match for such components to
build web pages, the basic building blocks of web pages are
called components. 

Content Repository:
Leading the Java Content Repository (JCR) initiative a new
request processing engine must natively support JCR (e.g. Apache
Jackrabbit) as the store for its content. 


By leveraging the OSGi core service platform specification the
modularization and runtime management goals can be more than met. Sling
is therefore built as a set of bundles. Additionally Sling provides a
standalone application launcher and a web application to launch Apache
Felix as its OSGi framework to deploy the Sling bundles into. 


Rationale
Content repositories, as defined in the Content Repository for Java
Technology API (JCR), are well suited to work as content stores of web
applications. However, the JCR API deals with generic Nodes and
Properties and not with business objects that would be more meaningful
to application developers. Therefore one of the building blocks of Sling
is the integration of a content mapping infrastructure, namely
Jackrabbit Object Content Mapping. 

Another cause of regular headaches in current web application frameworks
is managing the life cycle of long-running applications: Add new
functionality, fix bugs, starting and stopping modules. This is where
the OSGi service platform comes into play. This specification provides
both help in the area of modularization and lifecycle management (and
more, actually) and definitions of services, so called Compendium
Services, which help concentrate on the core application and not worry
about issues such as logging, configuration management etc. Sling uses
Apache Felix as the OSGi framework. 

Third, a request will generally not be handled by a single Component but
a series of Components. The idea is that a request URL addresses a Content
object, which is mapped from a JCR Node and which is serviced by a
Component. The Component may then access child Content objects, aka
child nodes, and have Sling service those Content objects through their
Components again. Using this mechanism, each Component only contributes
to part of the final web page. 

The advantage of this mechanism is, that Sling does not require tons of
templates or scripts to render different pages. Rather the developer may
provide a tool box of Components which may be mix-and-matched as
required. In addition this even allows for easy content authoring. 

In short, the main features of Sling may be summarized as follows: 

  * Uses a JCR Repository as its data st

Re: JCR 2.0 extensions

2007-08-08 Thread Thomas Mueller
Hi,

org.apache.jackrabbit.jsr283 is a good idea. In a new component, or in
jackrabbit-api.

> But since jsr283 should be mainly backwards compatible

It's not. Some methods now return something they didn't before.

> In case we do run into compatibility issues,
> I think we have a good case to request a change in JSR 283.

Maybe making the JSR 283 API backwards compatible is the best solution.

If this is not done, one solution is to use wrapper classes for both
JCR 1.0 and JCR 2.0. Another solution: 'switch' the code to one or the
other API by automatically 'commenting out' (filtering) the 'wrong'
method.
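A minimal sketch of the wrapper-class approach. The interfaces below are
hypothetical stand-ins for a JCR 1.0 and a JCR 2.0 method whose signature
changed; the real javax.jcr types are not reproduced here:

```java
// Hypothetical, simplified stand-ins for the two API variants of one method.
interface Jcr1Item { String getPathV1() throws Exception; }
interface Jcr2Item { String getPath(); }

// One shared implementation object, wrapped once per target API: callers
// compiled against either interface see only the signature they expect.
class ItemImpl {
    private final String path;
    ItemImpl(String path) { this.path = path; }
    String path() { return path; }
}

class Jcr1Wrapper implements Jcr1Item {
    private final ItemImpl delegate;
    Jcr1Wrapper(ItemImpl d) { delegate = d; }
    public String getPathV1() { return delegate.path(); }
}

class Jcr2Wrapper implements Jcr2Item {
    private final ItemImpl delegate;
    Jcr2Wrapper(ItemImpl d) { delegate = d; }
    public String getPath() { return delegate.path(); }
}

public class WrapperSketch {
    public static void main(String[] args) throws Exception {
        ItemImpl item = new ItemImpl("/content/node");
        System.out.println(new Jcr1Wrapper(item).getPathV1());
        System.out.println(new Jcr2Wrapper(item).getPath());
    }
}
```

The cost is one extra delegation layer, which is why filtering out the
'wrong' method at build time is mentioned as the alternative.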

Thomas


[jira] Commented: (JCR-1037) Memory leak causing performance problems

2007-08-08 Thread Antonio Carballo (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12518434
 ] 

Antonio Carballo commented on JCR-1037:
---

We have one session and it is never closed due to the nature of the application. 
We have run tests using two sessions and I believe the results were the same. I 
can rerun a test and confirm my statement if you wish.


> Memory leak causing performance problems
> 
>
> Key: JCR-1037
> URL: https://issues.apache.org/jira/browse/JCR-1037
> Project: Jackrabbit
>  Issue Type: Bug
>  Components: Jackrabbit API
>Affects Versions: 1.2.1, 1.2.2, 1.2.3, 1.3
> Environment: Tomcat 6.0, XP Pro w/1Gb
>Reporter: Antonio Carballo
> Attachments: JCR-Trace.txt
>
>
> Folks,
> We have been running tests on JCR v1.3 and v1.2.1 for the past two weeks. The 
> system keeps running out of memory after X number of documents are added. Our 
> initial test consisted of about 50 documents and gradually increased to about 
> 150 documents. The size of the documents ranged from 1K to 9MB. We later 
> changed the test to consist of files with less than 1K in length with the 
> same result. Increasing the heap size delays the error but the outcome is 
> always the same (Servlet runs out of heap memory.)
> Using JProbe we found a high number of references created by the caching 
> sub-system (SessionItemStateManager.java, SharedItemStateManager.java, 
> LocalItemStateManager.java).  We changed the caching parameters using 
> CacheManager (min 64K - max 16MB). This change only delayed the error. 
> Servlet eventually runs out of heap memory.
> We are more than happy to share our findings (even source code and test data) 
> with the Jackrabbit team. Please let us know how you wish to proceed.
> Sincerely,
> Antonio Carballo

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-1048) JNDIDatabaseFileSystem was not woring in tomcat webapp

2007-08-08 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12518425
 ] 

Jukka Zitting commented on JCR-1048:


Wouldn't a more appropriate fix be to simply change your repository 
configuration to include the "java:comp/env" prefix?

> JNDIDatabaseFileSystem was not woring in tomcat webapp
> --
>
> Key: JCR-1048
> URL: https://issues.apache.org/jira/browse/JCR-1048
> Project: Jackrabbit
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.3
>Reporter: Stephen More
>
> The new method should look like this:
> protected Connection getConnection() throws NamingException, SQLException {
>     InitialContext ic = new InitialContext();
>     DataSource dataSource = null;
>     try {
>         dataSource = (DataSource) ic.lookup(dataSourceLocation);
>     } catch (javax.naming.NameNotFoundException e) {
>         // Now let's try it this way
>         javax.naming.Context envCtx =
>                 (javax.naming.Context) ic.lookup("java:comp/env");
>         dataSource = (DataSource) envCtx.lookup(dataSourceLocation);
>     }
>     return dataSource.getConnection();
> }
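The fallback pattern in the patch above can be sketched independently of a
running container. A Map stands in for the JNDI InitialContext (an assumption
for illustration; the real code uses javax.naming):

```java
import java.util.*;

// Simulation of the proposed lookup: try the name as given, and on failure
// retry the same name relative to the "java:comp/env" namespace, which is
// where Tomcat binds per-application resources.
public class JndiFallbackSketch {
    static final Map<String, String> ctx = new HashMap<>();

    static String lookup(String name) {
        String v = ctx.get(name);
        if (v != null) return v;
        // Fallback: retry relative to java:comp/env
        v = ctx.get("java:comp/env/" + name);
        if (v == null) throw new NoSuchElementException(name);
        return v;
    }

    public static void main(String[] args) {
        // Tomcat binds application resources under java:comp/env
        ctx.put("java:comp/env/jdbc/jackrabbit", "dataSource");
        System.out.println(lookup("jdbc/jackrabbit"));
        System.out.println(lookup("java:comp/env/jdbc/jackrabbit"));
    }
}
```

Jukka's suggestion amounts to skipping the fallback entirely by putting the
full "java:comp/env/..." name in the repository configuration.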

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-1041) Avoid using BitSets in ChildAxisQuery to minimize memory usage

2007-08-08 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12518422
 ] 

Jukka Zitting commented on JCR-1041:


How about an alternative implementation of using just a TreeSet of Integers 
instead of BitSets? A sparse set would scale with the number of query results 
instead of the size of the entire workspace.
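A back-of-the-envelope comparison of the two representations (the 16 bytes
per boxed set entry is a rough assumption, not a measured figure):

```java
import java.util.*;

// A BitSet always costs about maxDoc/8 bytes regardless of how many documents
// match, while a sorted set of matching doc ids scales with the hit count.
public class SparseHitsSketch {
    public static void main(String[] args) {
        int maxDoc = 7_000_000;      // size of a large index
        int hits = 10_000;           // size of a typical result set

        long bitSetBytes = maxDoc / 8L;    // fixed cost per query
        long treeSetBytes = hits * 16L;    // scales with hits only (assumed 16 B/entry)

        System.out.println(bitSetBytes);
        System.out.println(treeSetBytes);

        // The sparse representation itself keeps doc ids sorted for merging:
        TreeSet<Integer> sparse = new TreeSet<>(Arrays.asList(3, 42, 7));
        System.out.println(sparse.first());
    }
}
```

With seven ChildAxisQueries per request and many concurrent users, the fixed
per-query BitSet cost is what added up to the reported 14MB.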

> Avoid using BitSets in ChildAxisQuery to minimize memory usage
> --
>
> Key: JCR-1041
> URL: https://issues.apache.org/jira/browse/JCR-1041
> Project: Jackrabbit
>  Issue Type: Improvement
>  Components: query
>Affects Versions: 1.3
>Reporter: Christoph Kiehl
>Assignee: Christoph Kiehl
> Attachments: avoid_using_bitsets.patch
>
>
> When doing ChildAxisQueries on large indexes the internal BitSet instance 
> (hits) may consume a lot of memory because the BitSet is always as large as 
> IndexReader.maxDoc(). In our case we had a query consisting of 7 
> ChildAxisQueries which combined to a total of 14MB. Since we have multiple 
> users executing this query simultaneously this caused an out of memory error.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: JCR 2.0 extensions

2007-08-08 Thread Christoph Kiehl

Julian Reschke wrote:

Jukka Zitting wrote:

On 8/8/07, Christoph Kiehl <[EMAIL PROTECTED]> wrote:
Sounds good. We just need to make sure that those jsr283 interfaces do not
interfere with jsr170 interfaces because some Jackrabbit classes will need to
implement both interfaces. But since jsr283 should be mainly backwards
compatible this shouldn't be a problem.


Yep. In case we do run into compatibility issues, I think we have a
good case to request a change in JSR 283.


As far as I recall, we have at least one method signature that changed 
with respect to throwing exceptions...


Maybe we can skip that method as long as possible if it's going to be a 
problem.


Or do you want to start a whole new branch where all jsr170 interfaces
are replaced by jsr283 interfaces?


We need to do that eventually, but I'd very much like to push the
branch point as far ahead as possible to avoid getting stuck with two
parallel actively developed codebases. Ideally we'd release 1.4 and
perhaps even 1.5 with early JCR 2.0 features included before dropping
JCR 1.0 from svn trunk and focusing on the real JCR 2.0 interfaces for
the Jackrabbit 2.0 release.


+1


I completely agree. I was by no means in favor of creating a branch as long as 
it could be avoided.


Cheers,
Christoph



[jira] Updated: (JCR-1055) Incorrect node position after import

2007-08-08 Thread Marcus Kaar (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-1055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Kaar updated JCR-1055:
-

Description: 
I have found a behavior that does not seem to be consistent with the
spec:

After replacing a node with importXML using 
IMPORT_UUID_COLLISION_REPLACE_EXISTING the new node is not at the position of 
the replaced node (talking about the position among the siblings).

The original node is removed, but the new node is created as the last child 
of the parent node, and not spec-compliant at the position of the replaced node.

Here is how I use it:

// assume Session s, Node n, String text (holding XML data)

s.importXML(
n.getPath(), 
new ByteArrayInputStream (text.getBytes("UTF-8")),
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING
);
s.save();
 

And here a quote from the spec section 7.3.6

ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING: 
If an incoming referenceable node has the same UUID as a node already existing 
in the workspace then the already existing node is replaced by the incoming 
node in the same position as the existing node.
[note "same position"]


  was:
I have found a behavior that does not seem to be consistent with the
spec:

After replacing a node with importXML using 
IMPORT_UUID_COLLISION_REPLACE_EXISTING the new node is not at the position of 
the replaced node (talking about the position among the siblings).

The original node is removed, but the new node is created as the last child 
of the parent node, and not spec-compliant at the position of the replaced node.

Here is how I use it:

// assume Session s, Node n, String text (holding XML data)

s.importXML(
n.getPath(), 
new ByteArrayInputStream (text.getBytes("UTF-8")),
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING
);
s.save();
 

And here a quote from the spec section 7.3.6

ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING: 
If an incoming referenceable node has the same UUID as a node already existing 
in the workspace then the already existing node is replaced by the incoming 
node in the same position as the existing node.
[note "same position"]

Another strange effect is that after importing the node I cannot access it. An 
InvalidItemStateException is thrown, although the node with the given ID is 
present (at the wrong position).

Here is the stack trace:

javax.jcr.InvalidItemStateException: 
1321d317-43b4-4599-84ad-c6ac41167942: the item does not exist anymore
at org.apache.jackrabbit.core.ItemImpl.sanityCheck(ItemImpl.java:157)
at org.apache.jackrabbit.core.NodeImpl.getParent(NodeImpl.java:1903)
at 
at.onlaw.redsys.KommentarManager.replaceText(KommentarManager.java:479)
at org.apache.jsp.editNE_jsp._jspService(editNE_jsp.java:94)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
at 
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:334)
at 
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:314)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
at 
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:869)
at 
org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:664)
at 
org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
at 
org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:80)
at 
org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
at java.lang.Thread.run(Thread.java:595)



> Incorrect node position after import
> 
>
> Key: JCR-1055
> URL: https://issues.apache.org/jira/browse/JCR-1055
> Project: Jackrabbit
>  Issue Type: Bug
>Affects Versions: 1.3
>Reporter: Marcus K

[jira] Commented: (JCR-1039) Bundle Persistence Manager error - failing to read bundle the first time

2007-08-08 Thread Tobias Bocanegra (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12518406
 ] 

Tobias Bocanegra commented on JCR-1039:
---

+1 for this patch

> Bundle Persistence Manager error - failing to read bundle the first time
> 
>
> Key: JCR-1039
> URL: https://issues.apache.org/jira/browse/JCR-1039
> Project: Jackrabbit
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.3
> Environment: Windows
>Reporter: Sridhar
> Attachments: jackrabbit-core-testcase.patch, 
> JCR-1039.BundleDbPersistenceManager.diff
>
>
> Code:
> NodeIterator entiter = null;
> Node root = null, contNode = null, entsNode = null;
> try
> {
> root = session.getRootNode();
> contNode = root.getNode("sr:cont");
> entsNode = contNode.getNode("sr:ents");
> entiter = entsNode.getNodes();
> }
> catch (Exception e)
> {
> logger.error("Getting ents nodes", e);
> }
> Output:
> 12359 [http-8080-Processor24] ERROR 
> org.apache.jackrabbit.core.persistence.bundle.BundleDbPersistenceManager - 
> failed to read bundle: c3a09c19-cc6b-45bd-a42e-c4c925b67d02: 
> java.io.IOException: ERROR 40XD0: Container has been closed
> 12375 [http-8080-Processor24] ERROR com.taxila.editor.sm.RepoOperations - 
> Getting ents nodes
> javax.jcr.PathNotFoundException: sr:ents
> at org.apache.jackrabbit.core.NodeImpl.getNode(NodeImpl.java:2435)
> at com.taxila.editor.sm.RepoOperations.getEntityNodes 
> (RepoOperations.java:4583)
> at 
> com.taxila.editor.sm.RepoOperations.displayEntities(RepoOperations.java:4159)
> at 
> com.taxila.editor.sm.RepoOperations.displayEntities(RepoOperations.java:4114)
> at com.taxila.editor.em.um.MainEntityForm.reset (MainEntityForm.java:215)
> at org.apache.struts.taglib.html.FormTag.doStartTag(FormTag.java:640)
> at 
> org.apache.jsp.pages.jsp.entity.MainEntity_jsp._jspService(MainEntity_jsp.java:414)
> at org.apache.jasper.runtime.HttpJspBase.service (HttpJspBase.java:97)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
> at 
> org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:332)
> at org.apache.jasper.servlet.JspServlet.serviceJspFile 
> (JspServlet.java:314)
> at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
> at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter 
> (ApplicationFilterChain.java:252)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
> at 
> org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java
>  :672)
> at 
> org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:463)
> at 
> org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:398)
> at org.apache.catalina.core.ApplicationDispatcher.forward 
> (ApplicationDispatcher.java:301)
> at 
> org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1014)
> at 
> org.apache.struts.action.RequestProcessor.processForwardConfig(RequestProcessor.java:417)
> at 
> org.apache.struts.action.RequestProcessor.processActionForward(RequestProcessor.java:390)
> at 
> org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:271)
> at org.apache.struts.action.ActionServlet.process 
> (ActionServlet.java:1292)
> at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:510)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
> at javax.servlet.http.HttpServlet.service (HttpServlet.java:802)
> at 
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
> at 
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java
>  :173)
> at 
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
> at 
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
> at org.apache.catalina.core.StandardHostValve.invoke 
> (StandardHostValve.java:126)
> at 
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
> at 
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
> at org.apache.catalina.connector.CoyoteAdapter.service 
> (CoyoteAdapter.java:148)
> at 
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:869)
> at 
> org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java
>  :664)
> at 
> org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
> at 
> org.apa

[jira] Updated: (JCR-1055) Incorrect node position after import

2007-08-08 Thread Marcus Kaar (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-1055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Kaar updated JCR-1055:
-

Description: 
I have found a behavior that does not seem to be consistent with the
spec:

After replacing a node with importXML using 
IMPORT_UUID_COLLISION_REPLACE_EXISTING the new node is not at the position of 
the replaced node (talking about the position among the siblings).

The original node is removed, but the new node is created as the last child 
of the parent node, and not spec-compliant at the position of the replaced node.

Here is how I use it:

// assume Session s, Node n, String text (holding XML data)

s.importXML(
n.getPath(), 
new ByteArrayInputStream (text.getBytes("UTF-8")),
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING
);
s.save();
 

And here a quote from the spec section 7.3.6

ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING: 
If an incoming referenceable node has the same UUID as a node already existing 
in the workspace then the already existing node is replaced by the incoming 
node in the same position as the existing node.
[note "same position"]

Another strange effect is that after importing the node I cannot access it. An 
InvalidItemStateException is thrown, although the node with the given ID is 
present (at the wrong position).

Here is the stack trace:

javax.jcr.InvalidItemStateException: 
1321d317-43b4-4599-84ad-c6ac41167942: the item does not exist anymore
at org.apache.jackrabbit.core.ItemImpl.sanityCheck(ItemImpl.java:157)
at org.apache.jackrabbit.core.NodeImpl.getParent(NodeImpl.java:1903)
at 
at.onlaw.redsys.KommentarManager.replaceText(KommentarManager.java:479)
at org.apache.jsp.editNE_jsp._jspService(editNE_jsp.java:94)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
at 
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:334)
at 
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:314)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
at 
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:869)
at 
org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:664)
at 
org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:527)
at 
org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:80)
at 
org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
at java.lang.Thread.run(Thread.java:595)


  was:
I have found a behavior that does not seem to be consistent with the
spec:

After replacing a node with importXML using 
IMPORT_UUID_COLLISION_REPLACE_EXISTING the new node is not at the position of 
the replaced node (talking about the position among the siblings).

The original node is removed, but the new node is created as the last child 
of the parent node, and not spec-compliant at the position of the replaced node.

Here is how I use it:

// assume Session s, Node n, String text (holding XML data)

s.importXML(
n.getPath(), 
new ByteArrayInputStream (text.getBytes("UTF-8")),
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING
);
s.save();
 

And here a quote from the spec section 7.3.6

ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING: 
If an incoming referenceable node has the same UUID as a node already existing 
in the workspace then the already existing node is replaced by the incoming 
node in the same position as the existing node.
[note "same position"]



> Incorrect node position after import
> 
>
> Key: JCR-1055
> URL: https://issues.apache.org/jira/browse/JCR-1055
> Project: Jackrabbit
>  Issue Type: Bug
>Affects Versions: 1.3
>Reporter: Marcus K

[jira] Commented: (JCR-937) CacheManager max memory size

2007-08-08 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12518405
 ] 

Jukka Zitting commented on JCR-937:
---

There's a minimum size (default 128kB) for each cache that overrides the global 
maximum memory setting when you start having large numbers of sessions. Each 
session is in effect guaranteed at least a small slice of memory for caching.

Do you have an idea how many sessions you have open concurrently?
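The arithmetic behind this explanation, using the figures from the comment
above (128kB per-cache floor, 16MB global maximum):

```java
// Once enough sessions are open, the sum of the guaranteed per-cache minima
// exceeds the configured global maximum, so total cache memory grows with
// the session count rather than staying bounded.
public class CacheMinimumSketch {
    public static void main(String[] args) {
        long minPerCache = 128 * 1024L;       // 128kB floor per session cache
        long globalMax   = 16 * 1024 * 1024L; // 16MB configured maximum

        // Number of sessions at which the minima alone fill the global budget
        long sessionsUntilOverflow = globalMax / minPerCache;
        System.out.println(sessionsUntilOverflow);
    }
}
```

So with more than 128 open sessions the global limit can no longer be
honored, matching the OutOfMemory symptoms described in the issue.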

> CacheManager max memory size
> 
>
> Key: JCR-937
> URL: https://issues.apache.org/jira/browse/JCR-937
> Project: Jackrabbit
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.3
>Reporter: Xiaohua Lu
>Priority: Minor
>
> I have run into OutOfMemory a couple of times with Jackrabbit (cluster, 4 
> nodes, each has 1G mem heap size). 
> After adding some debug into the CacheManager, I noticed that maxMemorySize 
> (default to 16M) is not really honored during resizeAll check.  Each 
> individual MLRUItemStateCache seems to honor the size, but the total 
> number/size of MLRUItemStateCache is not. If you put some print statement of 
> totalMemoryUsed and unusedMemory, you can see that totalMemoryUsed is more 
> than 16M and unusedMemory is negative. 
> The other problem we noticed during profiling is that there are lots of 
> other in-memory objects consuming memory that are not under the 
> CacheManager's control. One example is CachingHierarchyManager, which 
> consumed 58M out of 242M through its use of PathMap. If the CacheManager's 
> maxSize could control the total cache size used by Jackrabbit, that would be 
> easier from a management perspective. (btw, upper_limit in 
> CachingHierarchyManager is hardcoded and can't be controlled from outside)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (JCR-1037) Memory leak causing performance problems

2007-08-08 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12518402
 ] 

Jukka Zitting commented on JCR-1037:


Do you have multiple sessions accessing the repository? Do you ever close the 
sessions?

The trace seems to suggest that the total number of caches (there's one 
associated with each session) keeps increasing. Since there is a 128kB 
minimum allocated size per cache (this minimum overrides the global 
maximum cache allocation), having more and more sessions would nicely 
explain the behaviour you are seeing.
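The arithmetic behind this explanation can be sketched as follows. The class and method names below are illustrative only, not Jackrabbit's actual CacheManager API; only the two numbers (16MB global cap, 128kB per-cache minimum) come from the discussion:

```java
public class CacheBudgetSketch {
    // Numbers from the discussion: 16MB global cap, 128kB guaranteed
    // minimum per session cache.
    static final long GLOBAL_MAX = 16 * 1024 * 1024;
    static final long MIN_PER_CACHE = 128 * 1024;

    /** Effective total memory: the per-cache minimums win over the global cap. */
    static long effectiveTotal(int openSessions) {
        return Math.max(GLOBAL_MAX, (long) openSessions * MIN_PER_CACHE);
    }

    public static void main(String[] args) {
        // With 128 sessions the minimums exactly fill the 16MB budget;
        // beyond that, total usage grows past the configured maximum.
        System.out.println(effectiveTotal(128));   // 16777216
        System.out.println(effectiveTotal(1000));  // 131072000
    }
}
```

Under these defaults the configured maximum holds only up to 128 concurrent sessions; every additional open session adds its guaranteed 128kB on top, which matches the OutOfMemory behaviour reported above.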

> Memory leak causing performance problems
> 
>
> Key: JCR-1037
> URL: https://issues.apache.org/jira/browse/JCR-1037
> Project: Jackrabbit
>  Issue Type: Bug
>  Components: Jackrabbit API
>Affects Versions: 1.2.1, 1.2.2, 1.2.3, 1.3
> Environment: Tomcat 6.0, XP Pro w/1Gb
>Reporter: Antonio Carballo
> Attachments: JCR-Trace.txt
>
>
> Folks,
> We have been running tests on JCR v1.3 and v1.2.1 for the past two weeks. The 
> system keeps running out of memory after X number of documents are added. Our 
> initial test consisted of about 50 documents and gradually increased to about 
> 150 documents. The size of the documents ranged from 1K to 9MB. We later 
> changed the test to consist of files with less than 1K in length with the 
> same result. Increasing the heap size delays the error, but the outcome is 
> always the same: the servlet runs out of heap memory.
> Using JProbe we found a high number of references created by the caching 
> sub-system (SessionItemStateManager.java, SharedItemStateManager.java, 
> LocalItemStateManager.java).  We changed the caching parameters using 
> CacheManager (min 64K, max 16MB). This change only delayed the error; the 
> servlet eventually runs out of heap memory.
> We are more than happy to share our findings (even source code and test data) 
> with the Jackrabbit team. Please let us know how you wish to proceed.
> Sincerely,
> Antonio Carballo




[jira] Commented: (JCR-1040) JCR2SPI: remove node operation missing in submitted SPI batch

2007-08-08 Thread Julian Reschke (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-1040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12518398
 ] 

Julian Reschke commented on JCR-1040:
-

Confirming this fixes the problem I saw. Thanks, Angela.


> JCR2SPI: remove node operation missing in submitted SPI batch
> -
>
> Key: JCR-1040
> URL: https://issues.apache.org/jira/browse/JCR-1040
> Project: Jackrabbit
>  Issue Type: Bug
>  Components: SPI
>Reporter: Julian Reschke
>Assignee: angela
>
> In JCR2SPI, the following sequence of operations seems to lead to an 
> incorrect SPI batch being submitted:
> 1) remove "/a"
> 2) add "/a"
> 3) add "/a/b"
> 4) session.save()
> This seems to create an SPI batch where the first remove operation is missing.
> Note that the problem only seems to occur when step 3 is part of the sequence.
> Full Java source for test:
> try {
>   if 
> (session.getRepository().getDescriptor(Repository.LEVEL_2_SUPPORTED).equals("true"))
>  {
> Node testnode;
> String name = "delete-test";
>   
> Node root = session.getRootNode();
> 
> // make sure it's there
> if (! root.hasNode(name)) {
>   root.addNode(name, "nt:folder");
>   session.save();
> }
> 
> // now test remove/add in one batch
> if (root.hasNode(name)) {
>   testnode = root.getNode(name);
>   testnode.remove();
>   // session.save(); // un-commenting this makes the test pass
> }
> 
> testnode = root.addNode(name, "nt:folder");
> // add one child
> testnode.addNode(name, "nt:folder"); // commenting this out makes the 
> test pass
> 
> session.save();
>   }
> } finally {
>   session.logout();
> }
> 
> 




Re: JCR 2.0 extensions

2007-08-08 Thread Julian Reschke

Jukka Zitting wrote:

On 8/8/07, Christoph Kiehl <[EMAIL PROTECTED]> wrote:

Sounds good. We just need to make sure that those jsr283 interfaces do not
interfere with jsr170 interfaces because some Jackrabbit classes will need to
implement both interfaces. But since jsr283 should be mainly backwards
compatible this shouldn't be a problem.


Yep. In case we do run into compatibility issues, I think we have a
good case to request a change in JSR 283.


As far as I recall, we have at least one method signature that changed 
with respect to throwing exceptions...



Or do you want to start a whole new branch where all jsr170 interfaces
are replaced by jsr283 interfaces?


We need to do that eventually, but I'd very much like to push the
branch point as far ahead as possible to avoid getting stuck with two
parallel actively developed codebases. Ideally we'd release 1.4 and
perhaps even 1.5 with early JCR 2.0 features included before dropping
JCR 1.0 from svn trunk and focusing on the real JCR 2.0 interfaces for
the Jackrabbit 2.0 release.


+1

Best regards, Julian



Re: Better Jira components

2007-08-08 Thread Jukka Zitting
Hi,

On 8/8/07, Stefan Guggisberg <[EMAIL PROTECTED]> wrote:
> i'd definitely like to keep the finer granularity in core.
> ideally there should be something like subcomponents
> but prefixing would be fine with me.

Eventually (perhaps starting with Jackrabbit 2.0) we may want to start
having separate release cycles for the different subprojects, which
means that we would also have separate Jira projects and components
for each Jackrabbit subproject. There's some overhead to doing that,
so I'd like to do that only when the pain of doing synchronous
releases (like nobody being able to review too big release candidates)
gets too big.

BR,

Jukka Zitting


[jira] Created: (JCR-1055) Incorrect node position after import

2007-08-08 Thread Marcus Kaar (JIRA)
Incorrect node position after import


 Key: JCR-1055
 URL: https://issues.apache.org/jira/browse/JCR-1055
 Project: Jackrabbit
  Issue Type: Bug
Affects Versions: 1.3
Reporter: Marcus Kaar


I have found a behavior that does not seem to be consistent with the
spec:

After replacing a node with importXML using 
IMPORT_UUID_COLLISION_REPLACE_EXISTING the new node is not at the position of 
the replaced node (talking about the position among the siblings).

The original node is removed, but the new node is created as the last child 
of the parent node, not, as the spec requires, at the position of the replaced node.

Here is how I use it:

// assume Session s, Node n, String text (holding XML data)

s.importXML(
n.getPath(), 
new ByteArrayInputStream (text.getBytes("UTF-8")),
ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING
);
s.save();
 

And here is a quote from the spec, section 7.3.6:

ImportUUIDBehavior.IMPORT_UUID_COLLISION_REPLACE_EXISTING: 
If an incoming referenceable node has the same UUID as a node already existing 
in the workspace then the already existing node is replaced by the incoming 
node in the same position as the existing node.
[note "same position"]





Re: JCR 2.0 extensions

2007-08-08 Thread Jukka Zitting
Hi,

On 8/8/07, Christoph Kiehl <[EMAIL PROTECTED]> wrote:
> Sounds good. We just need to make sure that those jsr283 interfaces do not
> interfere with jsr170 interfaces because some Jackrabbit classes will need to
> implement both interfaces. But since jsr283 should be mainly backwards
> compatible this shouldn't be a problem.

Yep. In case we do run into compatibility issues, I think we have a
good case to request a change in JSR 283.

> Or do you want to start a whole new branch where all jsr170 interfaces
> are replaced by jsr283 interfaces?

We need to do that eventually, but I'd very much like to push the
branch point as far ahead as possible to avoid getting stuck with two
parallel actively developed codebases. Ideally we'd release 1.4 and
perhaps even 1.5 with early JCR 2.0 features included before dropping
JCR 1.0 from svn trunk and focusing on the real JCR 2.0 interfaces for
the Jackrabbit 2.0 release.

BR,

Jukka Zitting


Re: JCR 2.0 extensions

2007-08-08 Thread Christoph Kiehl

Jukka Zitting wrote:


To best manage the change I would like to start a separate
jackrabbit-jsr283 component that would contain tentative JCR 2.0
extension interfaces in org.apache.jackrabbit.jsr283. We wouldn't need
to make any backwards compatibility claims for that component, but any
other components like jackrabbit-core, jackrabbit-jcr-tests,
jackrabbit-jcr-rmi, and jackrabbit-jcr2spi could depend on that
component until the final jcr-2.0.jar is available.

What do you think? I guess there is some licensing stuff to figure
out, but otherwise I think this would be the cleanest approach.


Sounds good. We just need to make sure that those jsr283 interfaces do not 
interfere with jsr170 interfaces because some Jackrabbit classes will need to 
implement both interfaces. But since jsr283 should be mainly backwards 
compatible this shouldn't be a problem. Or do you want to start a whole new 
branch where all jsr170 interfaces are replaced by jsr283 interfaces?


Cheers,
Christoph



Re: JCR 2.0 extensions

2007-08-08 Thread Stefan Guggisberg
hi jukka

On 8/8/07, Jukka Zitting <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Now that the public review draft of JCR 2.0 is available we can start
> implementing the changes and new features. We need to do this already
> now for Jackrabbit to be ready for use as the JCR 2.0 reference
> implementation when the standard becomes final.
>
> The difficult part in implementing the JCR 2.0 features is that things
> may still change before the spec is finalized. Also, we don't have any
> official jcr-2.0 API jar file and we can't go extending the existing
> JCR 1.0 javax.jcr interfaces without serious breakage. So, to
> implement the new features we need to use some "staging area" for the
> new interfaces and methods being introduced in JCR 2.0.
>
> One option would be to use the existing jackrabbit-api package and add
> the JCR 2.0 extensions as custom org.apache.jackrabbit.api extensions.
> This may however cause trouble later on as we should maintain
> reasonable backwards compatibility with any additions to jackrabbit-api
> even if JCR 2.0 ends up being different.
>
> To best manage the change I would like to start a separate
> jackrabbit-jsr283 component that would contain tentative JCR 2.0
> extension interfaces in org.apache.jackrabbit.jsr283. We wouldn't need
> to make any backwards compatibility claims for that component, but any
> other components like jackrabbit-core, jackrabbit-jcr-tests,
> jackrabbit-jcr-rmi, and jackrabbit-jcr2spi could depend on that
> component until the final jcr-2.0.jar is available.
>
> What do you think? I guess there is some licensing stuff to figure
> out, but otherwise I think this would be the cleanest approach.

i agree, +1

cheers
stefan

>
> BR,
>
> Jukka Zitting
>


Re: JCR 2.0 extensions

2007-08-08 Thread Julian Reschke

Jukka Zitting wrote:

...
What do you think? I guess there is some licensing stuff to figure
out, but otherwise I think this would be the cleanest approach.
...


Sounds exactly right to me.

Best regards, Julian


[jira] Resolved: (JCR-1040) JCR2SPI: remove node operation missing in submitted SPI batch

2007-08-08 Thread angela (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-1040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

angela resolved JCR-1040.
-

Resolution: Fixed

The problem was caused when creating the effective node type for the new 
replacement node, which tried to load jcr:mixinTypes from the persistent 
storage. Consequently the jcr:uuid was retrieved, with the effects mentioned 
above.

Fix: in the case of a NEW node entry it does not make sense to (try to) load 
properties/child nodes from the persistent storage.
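The shape of the fix can be illustrated with a minimal sketch. The names below are invented for illustration and are not Jackrabbit's actual internals; the point is only the guard on newly created entries:

```java
public class NodeEntrySketch {
    /** Illustrative status of a hierarchy entry (not Jackrabbit's real type). */
    enum Status { NEW, EXISTING }

    /**
     * Only entries that already exist in persistent storage should be loaded
     * from it. A NEW entry (e.g. a replacement node added after a remove)
     * would otherwise pick up stale state such as the old jcr:uuid.
     */
    static boolean shouldLoadFromStorage(Status status) {
        return status != Status.NEW;
    }

    public static void main(String[] args) {
        System.out.println(shouldLoadFromStorage(Status.NEW));      // false
        System.out.println(shouldLoadFromStorage(Status.EXISTING)); // true
    }
}
```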

> JCR2SPI: remove node operation missing in submitted SPI batch
> -
>
> Key: JCR-1040
> URL: https://issues.apache.org/jira/browse/JCR-1040
> Project: Jackrabbit
>  Issue Type: Bug
>  Components: SPI
>Reporter: Julian Reschke
>Assignee: angela
>
> In JCR2SPI, the following sequence of operations seems to lead to an 
> incorrect SPI batch being submitted:
> 1) remove "/a"
> 2) add "/a"
> 3) add "/a/b"
> 4) session.save()
> This seems to create an SPI batch where the first remove operation is missing.
> Note that the problem only seems to occur when step 3 is part of the sequence.
> Full Java source for test:
> try {
>   if 
> (session.getRepository().getDescriptor(Repository.LEVEL_2_SUPPORTED).equals("true"))
>  {
> Node testnode;
> String name = "delete-test";
>   
> Node root = session.getRootNode();
> 
> // make sure it's there
> if (! root.hasNode(name)) {
>   root.addNode(name, "nt:folder");
>   session.save();
> }
> 
> // now test remove/add in one batch
> if (root.hasNode(name)) {
>   testnode = root.getNode(name);
>   testnode.remove();
>   // session.save(); // un-commenting this makes the test pass
> }
> 
> testnode = root.addNode(name, "nt:folder");
> // add one child
> testnode.addNode(name, "nt:folder"); // commenting this out makes the 
> test pass
> 
> session.save();
>   }
> } finally {
>   session.logout();
> }
> 
> 




JCR 2.0 extensions

2007-08-08 Thread Jukka Zitting
Hi,

Now that the public review draft of JCR 2.0 is available we can start
implementing the changes and new features. We need to do this already
now for Jackrabbit to be ready for use as the JCR 2.0 reference
implementation when the standard becomes final.

The difficult part in implementing the JCR 2.0 features is that things
may still change before the spec is finalized. Also, we don't have any
official jcr-2.0 API jar file and we can't go extending the existing
JCR 1.0 javax.jcr interfaces without serious breakage. So, to
implement the new features we need to use some "staging area" for the
new interfaces and methods being introduced in JCR 2.0.

One option would be to use the existing jackrabbit-api package and add
the JCR 2.0 extensions as custom org.apache.jackrabbit.api extensions.
This may however cause trouble later on as we should maintain
reasonable backwards compatibility with any additions to jackrabbit-api
even if JCR 2.0 ends up being different.

To best manage the change I would like to start a separate
jackrabbit-jsr283 component that would contain tentative JCR 2.0
extension interfaces in org.apache.jackrabbit.jsr283. We wouldn't need
to make any backwards compatibility claims for that component, but any
other components like jackrabbit-core, jackrabbit-jcr-tests,
jackrabbit-jcr-rmi, and jackrabbit-jcr2spi could depend on that
component until the final jcr-2.0.jar is available.

What do you think? I guess there is some licensing stuff to figure
out, but otherwise I think this would be the cleanest approach.

BR,

Jukka Zitting


[jira] Commented: (JCR-926) Global data store for binaries

2007-08-08 Thread Thomas Mueller (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12518389
 ] 

Thomas Mueller commented on JCR-926:


Hi,

Is there anything I can do to help speed up integrating the global data store 
changes?
It's a bit frustrating for me if I have to wait and can't do anything.

Thomas

> Global data store for binaries
> --
>
> Key: JCR-926
> URL: https://issues.apache.org/jira/browse/JCR-926
> Project: Jackrabbit
>  Issue Type: New Feature
>  Components: core
>Reporter: Jukka Zitting
> Attachments: dataStore.patch, DataStore.patch, DataStore2.patch, 
> dataStore3.patch, dataStore4.zip, dataStore5-garbageCollector.patch, 
> internalValue.patch, ReadWhileSaveTest.patch
>
>
> There are three main problems with the way Jackrabbit currently handles large 
> binary values:
> 1) Persisting a large binary value blocks access to the persistence layer for 
> extended amounts of time (see JCR-314)
> 2) At least two copies of binary streams are made when saving them through 
> the JCR API: one in the transient space, and one when persisting the value
> 3) Versioning and copy operations on nodes or subtrees that contain large 
> binary values can quickly end up consuming excessive amounts of storage space.
> To solve these issues (and to get other nice benefits), I propose that we 
> implement a global "data store" concept in the repository. A data store is an 
> append-only set of binary values that uses short identifiers to identify and 
> access the stored binary values. The data store would trivially fit the 
> requirements of transient space and transaction handling due to the 
> append-only nature. An explicit mark-and-sweep garbage collection process 
> could be added to avoid concerns about storing garbage values.
> See the recent NGP value record discussion, especially [1], for more 
> background on this idea.
> [1] 
> http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200705.mbox/[EMAIL 
> PROTECTED]
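The append-only, identifier-addressed idea proposed above can be sketched in a few lines. This is not the actual Jackrabbit DataStore API; the class and method names are invented for illustration, and an in-memory map stands in for the real backing storage:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

/** Append-only binary store addressed by content hash (illustrative only). */
public class DataStoreSketch {
    private final Map<String, byte[]> records = new HashMap<>();

    /** Stores the value and returns its short identifier (hex SHA-1 of the content). */
    public String append(byte[] value) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(value);
            StringBuilder id = new StringBuilder();
            for (byte b : digest) {
                id.append(String.format("%02x", b & 0xff));
            }
            // Identical content maps to the same identifier, so versioning and
            // copy operations can share one stored record instead of copying it.
            records.putIfAbsent(id.toString(), value.clone());
            return id.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public byte[] read(String id) {
        return records.get(id);
    }
}
```

Because records are never mutated or removed on the write path, the store trivially satisfies transient-space and transaction requirements; the mark-and-sweep garbage collection mentioned above would run separately, deleting identifiers no longer referenced by any node.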




Re: Better Jira components

2007-08-08 Thread Stefan Guggisberg
hi jukka

On 8/8/07, Jukka Zitting <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'd like to streamline the Jackrabbit components in Jira (currently
> core, xml, query, etc.) to better match the project structure in svn.
> This would help us better track changes per release artifact.
>
> See below for a proposed restructuring of the Jira components. In
> short this change would lessen the categorization metadata on
> jackrabbit-core issues but increase metadata on other components. If
> we want to keep the finer granularity, we could use a prefix
> mechanism like core:query, core:xml, etc.

i'd definitely like to keep the finer granularity in core.
ideally there should be something like subcomponents
but prefixing would be fine with me.

cheers
stefan
>
> BR,
>
> Jukka Zitting
>
> 
>
> contrib
> contrib PMs
>
> jackrabbit-api
> Jackrabbit API
>
> jackrabbit-classloader
>
> jackrabbit-core
> clustering
> config
> core
> locks
> xml
> xpath
> namespace
> nodetype
> observation
> query
> security
> sql
> transactions
> versioning
>
> jackrabbit-jca
> jca
>
> jackrabbit-jcr-commons
>
> jackrabbit-jcr-rmi
> rmi
>
> jackrabbit-jcr-server
>
> jackrabbit-jcr-servlet
>
> jackrabbit-jcr-tests
> test
> JCR TCK
>
> jackrabbit-site
> docs
> site
>
> jackrabbit-text-extractors
> indexing
>
> jackrabbit-webapp
> webapp
>
> jackrabbit-webdav
> webdav
>
> There are also a few generic and not yet released components that I
> haven't yet figured how to best handle:
>
> JCR 1.0.1
> JCR 2.0
> JCR API
> jcr-mapping
> jira
> maven
> SPI
>


Re: Better Jira components

2007-08-08 Thread Felix Meschberger
Hi Jukka,

Makes sense (incl. Christophe's comments).

Am Mittwoch, den 08.08.2007, 10:33 +0300 schrieb Jukka Zitting:
> Hi,
> 
> I'd like to streamline the Jackrabbit components in Jira (currently
> core, xml, query, etc.) to better match the project structure in svn.
> This would help us better track changes per release artifact.
> 
> See below for a proposed restructuring of the Jira components. In
> short this change would lessen the categorization metadata on
> jackrabbit-core issues but increase metadata on other components. If
> we want to keep the finer granularity, we could use a prefix
> mechanism like core:query, core:xml, etc.
> 
> BR,
> 
> Jukka Zitting
> 
> 
> 
> contrib
> contrib PMs
> 
> jackrabbit-api
> Jackrabbit API
> 
> jackrabbit-classloader
> 
> jackrabbit-core
> clustering
> config
> core
> locks
> xml
> xpath
> namespace
> nodetype
> observation
> query
> security
> sql
> transactions
> versioning
> 
> jackrabbit-jca
> jca
> 
> jackrabbit-jcr-commons
> 
> jackrabbit-jcr-rmi
> rmi
> 
> jackrabbit-jcr-server
> 
> jackrabbit-jcr-servlet
> 
> jackrabbit-jcr-tests
> test
> JCR TCK
> 
> jackrabbit-site
> docs
> site
> 
> jackrabbit-text-extractors
> indexing
> 
> jackrabbit-webapp
> webapp
> 
> jackrabbit-webdav
> webdav
> 
> There are also a few generic and not yet released components that I
> haven't yet figured how to best handle:
> 
> JCR 1.0.1
> JCR 2.0
> JCR API
> jcr-mapping
> jira
> maven
> SPI



Re: Better Jira components

2007-08-08 Thread Christophe Lombart
Why not something like this for jcr-mapping:

jcr-mapping
  ocm
  nodetype-management
  annotation
  spring



On 8/8/07, Jukka Zitting <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> I'd like to streamline the Jackrabbit components in Jira (currently
> core, xml, query, etc.) to better match the project structure in svn.
> This would help us better track changes per release artifact.
>
> See below for a proposed restructuring of the Jira components. In
> short this change would lessen the categorization metadata on
> jackrabbit-core issues but increase metadata on other components. If
> we want to keep the finer granularity, we could use a prefix
> mechanism like core:query, core:xml, etc.
>
> BR,
>
> Jukka Zitting
>
> 
>
> contrib
> contrib PMs
>
> jackrabbit-api
> Jackrabbit API
>
> jackrabbit-classloader
>
> jackrabbit-core
> clustering
> config
> core
> locks
> xml
> xpath
> namespace
> nodetype
> observation
> query
> security
> sql
> transactions
> versioning
>
> jackrabbit-jca
> jca
>
> jackrabbit-jcr-commons
>
> jackrabbit-jcr-rmi
> rmi
>
> jackrabbit-jcr-server
>
> jackrabbit-jcr-servlet
>
> jackrabbit-jcr-tests
> test
> JCR TCK
>
> jackrabbit-site
> docs
> site
>
> jackrabbit-text-extractors
> indexing
>
> jackrabbit-webapp
> webapp
>
> jackrabbit-webdav
> webdav
>
> There are also a few generic and not yet released components that I
> haven't yet figured how to best handle:
>
> JCR 1.0.1
> JCR 2.0
> JCR API
> jcr-mapping
> jira
> maven
> SPI
>


Better Jira components

2007-08-08 Thread Jukka Zitting
Hi,

I'd like to streamline the Jackrabbit components in Jira (currently
core, xml, query, etc.) to better match the project structure in svn.
This would help us better track changes per release artifact.

See below for a proposed restructuring of the Jira components. In
short this change would lessen the categorization metadata on
jackrabbit-core issues but increase metadata on other components. If
we want to keep the finer granularity, we could use a prefix
mechanism like core:query, core:xml, etc.

BR,

Jukka Zitting



contrib
contrib PMs

jackrabbit-api
Jackrabbit API

jackrabbit-classloader

jackrabbit-core
clustering
config
core
locks
xml
xpath
namespace
nodetype
observation
query
security
sql
transactions
versioning

jackrabbit-jca
jca

jackrabbit-jcr-commons

jackrabbit-jcr-rmi
rmi

jackrabbit-jcr-server

jackrabbit-jcr-servlet

jackrabbit-jcr-tests
test
JCR TCK

jackrabbit-site
docs
site

jackrabbit-text-extractors
indexing

jackrabbit-webapp
webapp

jackrabbit-webdav
webdav

There are also a few generic and not yet released components that I
haven't yet figured how to best handle:

JCR 1.0.1
JCR 2.0
JCR API
jcr-mapping
jira
maven
SPI


[jira] Resolved: (JCR-597) Implement NamespaceManager.unregister()

2007-08-08 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved JCR-597.
---

Resolution: Won't Fix

Resolving this as Won't Fix based on the rationale above.

> Implement NamespaceManager.unregister()
> ---
>
> Key: JCR-597
> URL: https://issues.apache.org/jira/browse/JCR-597
> Project: Jackrabbit
>  Issue Type: Improvement
>  Components: namespace
>Affects Versions: 0.9, 1.0, 1.0.1, 1.1
> Environment: N/A
>Reporter: Tako Schotanus
>Priority: Minor
>
> At this moment the NamespaceManager.unregister() method always throws an 
> exception. In the code is a comment basically saying that it is not 
> implemented yet.




[jira] Resolved: (JCR-679) Add setLimit() and setOffset() to query classes

2007-08-08 Thread Jukka Zitting (JIRA)

 [ 
https://issues.apache.org/jira/browse/JCR-679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jukka Zitting resolved JCR-679.
---

Resolution: Duplicate
  Assignee: (was: Jukka Zitting)

Duplicate of JCR-989

> Add setLimit() and setOffset() to query classes
> ---
>
> Key: JCR-679
> URL: https://issues.apache.org/jira/browse/JCR-679
> Project: Jackrabbit
>  Issue Type: New Feature
>  Components: query
>Reporter: Jukka Zitting
>Priority: Minor
>
> As discussed before on the mailing list, in some cases it would be beneficial 
> to have explicit setLimit() and setOffset() methods on the Query 
> implementation instead of relying on skip() and lazy loading of query results.
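The cost of skip-based paging that this issue targets can be shown with a generic sketch. It uses a plain java.util.Iterator rather than the real javax.jcr NodeIterator, and the method name is invented for illustration:

```java
import java.util.Iterator;
import java.util.List;

public class SkipPagingSketch {
    /**
     * Skip-based paging: the first `offset` results are still produced and
     * then thrown away. Explicit setOffset()/setLimit() on the query would
     * let the query engine avoid materializing them at all.
     */
    static <T> T nthResult(Iterator<T> results, int offset) {
        for (int i = 0; i < offset && results.hasNext(); i++) {
            results.next(); // produced only to be discarded
        }
        return results.hasNext() ? results.next() : null;
    }

    public static void main(String[] args) {
        Iterator<String> it = List.of("a", "b", "c", "d").iterator();
        System.out.println(nthResult(it, 2)); // c
    }
}
```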




[jira] Commented: (JCR-989) Modify LazyQueryResultImpl to allow resultFetchSize to be set programmatically

2007-08-08 Thread Jukka Zitting (JIRA)

[ 
https://issues.apache.org/jira/browse/JCR-989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12518354
 ] 

Jukka Zitting commented on JCR-989:
---

Nice work!

I'm not 100% sure if we should place JCR 2.0 extensions in jackrabbit-api, 
after all they are still subject to change and it might well be that the 
interfaces will change before JCR 2.0 is final. Putting the extensions in 
jackrabbit-api implies that we are ready to offer backwards compatibility to 
these new methods even if JCR 2.0 decides to drop them or implement them in 
some other way. More on dev@ in a moment...

> Modify LazyQueryResultImpl to allow resultFetchSize to be set programmatically
> --
>
> Key: JCR-989
> URL: https://issues.apache.org/jira/browse/JCR-989
> Project: Jackrabbit
>  Issue Type: New Feature
>  Components: query
>Affects Versions: 1.3
>Reporter: Christoph Kiehl
>Assignee: Christoph Kiehl
>Priority: Minor
> Fix For: 1.4
>
> Attachments: jackrabbit-api.patch, jackrabbit-core.patch
>
>
> In our application we have a search which only shows part of a query result. 
> We always know which part of the result needs to be shown. This means we know 
> in advance how many results need to be fetched. I would like to be able to 
> programmatically set resultFetchSize to minimize the number of loaded lucene 
> docs and therefore improve the performance.
> I know it is already possible to set the resultFetchSize via the index 
> configuration, but this number is fixed and doesn't work well in environments 
> where you use paging for your results: if you set this number too low 
> the query will be executed multiple times, and if you set it too high too many 
> lucene docs are loaded.
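The fetch-size arithmetic described in this paging scenario can be sketched as below. The method name is hypothetical; it only captures the rule that a page known in advance needs exactly offset + pageSize hits:

```java
public class FetchSizeSketch {
    /**
     * For a paged search showing results [offset, offset + pageSize),
     * fetching exactly offset + pageSize hits avoids both re-executing the
     * query (fetch size too small) and loading surplus Lucene docs (too large).
     */
    static int resultFetchSize(int offset, int pageSize) {
        return offset + pageSize;
    }

    public static void main(String[] args) {
        System.out.println(resultFetchSize(40, 20)); // third page of 20 -> fetch 60
    }
}
```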
