[jira] [Updated] (SOLR-1395) Integrate Katta

2011-09-15 Thread tom liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tom liu updated SOLR-1395:
--

Attachment: (was: solr-1395-katta-0.6.3-4.patch)

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: 3.4, 4.0

 Attachments: SOLR-1395.patch, SOLR-1395.patch, SOLR-1395.patch, 
 back-end.log, front-end.log, hadoop-core-0.19.0.jar, katta-core-0.6-dev.jar, 
 katta-solrcores.jpg, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2-3.patch, 
 solr-1395-katta-0.6.2.patch, solr1395.jpg, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-1395) Integrate Katta

2011-09-15 Thread tom liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tom liu updated SOLR-1395:
--

Attachment: solr-1395-katta-0.6.3-4.patch

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: 3.4, 4.0

 Attachments: SOLR-1395.patch, SOLR-1395.patch, SOLR-1395.patch, 
 back-end.log, front-end.log, hadoop-core-0.19.0.jar, katta-core-0.6-dev.jar, 
 katta-solrcores.jpg, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2-3.patch, 
 solr-1395-katta-0.6.2.patch, solr-1395-katta-0.6.3-4.patch, solr1395.jpg, 
 test-katta-core-0.6-dev.jar, zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1395) Integrate Katta

2011-09-15 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13105812#comment-13105812
 ] 

tom liu commented on SOLR-1395:
---

Johnwu:
Yes, I did test the patch with:
stats, terms, termvector, hl, facet, and debug.

PS: I changed the Katta code so it can manage shards for updates.
If you do not need this, please comment out the following code, including the
client.broadcastToNodes(...) call:
{code:title=KattaClient.java|borderStyle=solid}
public ClientResult<KattaResponse> request(long timeout,
        String[] indexNames, KattaRequest request) throws KattaException {
    ClientResult<KattaResponse> results = null;
    String path = request.getParams().get(CommonParams.QT);
    if (path != null && path.equals("update")) {
        // only for qt=update: broadcast to every node
        results = client.broadcastToNodes(
                timeout, true, REQUEST_METHOD, 0,
                indexNames, null, request);
    } else {
        results = client.broadcastToIndices(
                timeout, true, REQUEST_METHOD, 0,
                indexNames, null, request);
    }
    return results;
}
{code} 

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: 3.4, 4.0

 Attachments: SOLR-1395.patch, SOLR-1395.patch, SOLR-1395.patch, 
 back-end.log, front-end.log, hadoop-core-0.19.0.jar, katta-core-0.6-dev.jar, 
 katta-solrcores.jpg, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2-3.patch, 
 solr-1395-katta-0.6.2.patch, solr-1395-katta-0.6.3-4.patch, solr1395.jpg, 
 test-katta-core-0.6-dev.jar, zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Edited] (SOLR-1395) Integrate Katta

2011-09-08 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13100161#comment-13100161
 ] 

tom liu edited comment on SOLR-1395 at 9/8/11 8:50 AM:
---

I uploaded a patch based on the current trunk version.
My environment:
hadoop:    0.20.2
zookeeper: 3.3.3
katta:     0.6.3

  was (Author: tom_lt):
I uploaded a patch based on the current trunk version.
My environment:
hadoop:    0.20.2
zookeeper: 3.3.3

  
 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: 3.4, 4.0

 Attachments: SOLR-1395.patch, SOLR-1395.patch, SOLR-1395.patch, 
 back-end.log, front-end.log, hadoop-core-0.19.0.jar, katta-core-0.6-dev.jar, 
 katta-solrcores.jpg, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2-3.patch, 
 solr-1395-katta-0.6.2.patch, solr-1395-katta-0.6.3-4.patch, solr1395.jpg, 
 test-katta-core-0.6-dev.jar, zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-2539) VectorValueSource returnes floatVal of DocValues is wrong

2011-05-24 Thread tom liu (JIRA)
VectorValueSource returnes floatVal of DocValues is wrong
-

 Key: SOLR-2539
 URL: https://issues.apache.org/jira/browse/SOLR-2539
 Project: Solr
  Issue Type: Bug
  Components: search
 Environment: JDK1.6/Tomcat6
Reporter: tom liu


@Override
public void floatVal(int doc, float[] vals) {
  vals[0] = x.byteVal(doc);
  vals[1] = y.byteVal(doc);
}
should be:
@Override
public void floatVal(int doc, float[] vals) {
  vals[0] = x.floatVal(doc);
  vals[1] = y.floatVal(doc);
}


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-2539) VectorValueSource returnes floatVal of DocValues is wrong

2011-05-24 Thread tom liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tom liu closed SOLR-2539.
-


fixed

 VectorValueSource returnes floatVal of DocValues is wrong
 -

 Key: SOLR-2539
 URL: https://issues.apache.org/jira/browse/SOLR-2539
 Project: Solr
  Issue Type: Bug
  Components: search
 Environment: JDK1.6/Tomcat6
Reporter: tom liu
 Fix For: 3.2


 @Override
 public void floatVal(int doc, float[] vals) {
   vals[0] = x.byteVal(doc);
   vals[1] = y.byteVal(doc);
 }
 should be:
 @Override
 public void floatVal(int doc, float[] vals) {
   vals[0] = x.floatVal(doc);
   vals[1] = y.floatVal(doc);
 }

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2011-03-01 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13000786#comment-13000786
 ] 

tom liu commented on SOLR-1395:
---

You can debug or trace the process:
# webapp's param: shards=*
# kattaclient's processing: shards=seo0,seo1,...
# sub-proxy's param: shards=seo0 [there may be many requests, so the param differs per request]
# the sub-proxy then dispatches the request to the seo0 queryCore

The other case, if you put shards=seo0:
# webapp's param: shards=seo0
# kattaclient's processing: shards=seo0
# sub-proxy's param: shards=seo0 [one request]
# the sub-proxy then dispatches the request to the seo0 queryCore
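
A minimal sketch of those two cases (my illustration; the class and method names below are assumptions, not the patch's actual API):
{code:java|title=ShardResolver.java}
import java.util.Arrays;
import java.util.List;

// Hypothetical helper: expands the shards parameter before dispatch.
public class ShardResolver {
    public static List<String> resolveShards(String shardsParam, List<String> deployedIndices) {
        if ("*".equals(shardsParam)) {
            return deployedIndices;                       // fan out: [seo0, seo1, ...]
        }
        return Arrays.asList(shardsParam.split(","));     // explicit: [seo0]
    }

    public static void main(String[] args) {
        List<String> deployed = Arrays.asList("seo0", "seo1", "seo2");
        System.out.println(resolveShards("*", deployed));    // every queryCore receives the request
        System.out.println(resolveShards("seo0", deployed)); // only seo0 receives it
    }
}
{code}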

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: SOLR-1395.patch, SOLR-1395.patch, SOLR-1395.patch, 
 back-end.log, front-end.log, hadoop-core-0.19.0.jar, katta-core-0.6-dev.jar, 
 katta-solrcores.jpg, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2-3.patch, 
 solr-1395-katta-0.6.2.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2011-02-24 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12998773#comment-12998773
 ] 

tom liu commented on SOLR-1395:
---

Please see the conf file katta.node.properties.
The node.shard.folder property defines the folder of the queryCore.

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: SOLR-1395.patch, SOLR-1395.patch, SOLR-1395.patch, 
 back-end.log, front-end.log, hadoop-core-0.19.0.jar, katta-core-0.6-dev.jar, 
 katta-solrcores.jpg, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2-3.patch, 
 solr-1395-katta-0.6.2.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2011-02-18 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12996307#comment-12996307
 ] 

tom liu commented on SOLR-1395:
---

Use the katta addIndex seo.zip command to deploy the queryCore to the Katta slave node.

seo.zip is the patched Solr.

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: SOLR-1395.patch, SOLR-1395.patch, SOLR-1395.patch, 
 back-end.log, front-end.log, hadoop-core-0.19.0.jar, katta-core-0.6-dev.jar, 
 katta-solrcores.jpg, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2-3.patch, 
 solr-1395-katta-0.6.2.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2011-02-16 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12995212#comment-12995212
 ] 

tom liu commented on SOLR-1395:
---

On a Katta slave node, my folder hierarchy is:
|/var/data|root|
|/var/data/hadoop|stores Hadoop data|
|/var/data/hdfszips|stores temporary zip data fetched from HDFS, then moved to Katta's shards|
|/var/data/solr|root folder for Solr core configurations|
|/var/data/solr/seoproxy|stores seoproxy's Solr config, which is used by the sub-proxy|
|/var/data/katta/shards/nodename_2/seo0#seo0|stores the seo0 shard, which is deployed from the master node|
|/var/data/zkdata|stores ZooKeeper server data (logs and snapshots)|

On the Katta master node, my folder hierarchy is:
|/var/data|root|
|/var/data/hadoop|stores Hadoop data|
|/var/data/hdfsfile|stores temporary Solr data obtained from the Solr DataImporter, then zipped and put to HDFS|
|/var/data/solr|root folder for Solr core configurations|
|/var/data/solr/seo|stores seo's Solr config, which is used by Tomcat's webapp|
|/var/data/zkdata|stores ZooKeeper server data (logs and snapshots)|

So, my config comes from five folders:
|Master|/var/data/solr/seo|Tomcat webapp's SolrCore config|
|Slave|/var/data/solr/seoproxy|sub-proxy's SolrCore config|
|Master|/var/data/hdfsfile|query-core's config, which is the config template|
|HDFS|http://hdfsname:9000/seo/seo0.zip|query-core seo0's zip file, which holds the conf|
|Slave|/var/data/katta/shards/nodename_2/seo0#seo0/conf|query-core seo0's config, which is unzipped from seo0.zip on HDFS|

And the /var/data/hdfsfile structure is:
{noformat}
seo@seo-solr1:/var/data/hdfsfile$ ll
total 28
drwxr-xr-x 6 seo seo 4096 Oct 21 15:21 ./
drwxr-xr-x 4 seo seo 4096 Feb 16 15:49 ../
drwxr-xr-x 2 seo seo 4096 Oct  8 09:17 bin/
drwxr-xr-x 4 seo seo 4096 Jan 21 18:22 conf/
drwxr-xr-x 3 seo seo 4096 Oct 21 15:21 data/
drwxr-xr-x 2 seo seo 4096 Sep 29 14:01 lib/
-rw-r--r-- 1 seo seo 1320 Oct  8 09:20 solr.xml
{noformat}


 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: SOLR-1395.patch, SOLR-1395.patch, SOLR-1395.patch, 
 back-end.log, front-end.log, hadoop-core-0.19.0.jar, katta-core-0.6-dev.jar, 
 katta-solrcores.jpg, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2-3.patch, 
 solr-1395-katta-0.6.2.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2011-02-15 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12994688#comment-12994688
 ] 

tom liu commented on SOLR-1395:
---

ISolrServer's config is set by the Katta script. The queryCore's config is set
automatically.

The sub-proxy Solr is just a proxy; it does not process any request itself.
So the sub-proxy dispatches the request to the queryCore, and the queryCore processes
the request and returns SolrDocumentLists.

But you get an exception about an object type that cannot be cast, so I think the
queryCore is probably misconfigured.

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: SOLR-1395.patch, SOLR-1395.patch, SOLR-1395.patch, 
 back-end.log, front-end.log, hadoop-core-0.19.0.jar, katta-core-0.6-dev.jar, 
 katta-solrcores.jpg, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2-3.patch, 
 solr-1395-katta-0.6.2.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2011-02-14 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12994255#comment-12994255
 ] 

tom liu commented on SOLR-1395:
---

The ISolrServer is handled by the Katta node; it is configured by:
# solrconfig.xml: used by ISolrServer's default SolrCore
# the katta script: used to tell ISolrServer its SolrHome.

Katta's script [on a katta node, but not on the katta master]:
{noformat}
KATTA_OPTS="$KATTA_OPTS -Dsolr.home=/var/data/solr -Dsolr.directoryFactory=solr.MMapDirectoryFactory"
{noformat}

When Katta starts up a node, the ISolrServer picks up solr.home and solr.directoryFactory,
and ISolrServer's default SolrCore then uses those properties to hold the SolrCore.
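
As a minimal sketch (my illustration, not the patch's code), the embedded server can read those values as ordinary JVM system properties set by KATTA_OPTS:
{code:java|title=EmbeddedSolrEnvDemo.java}
// Reads the -D properties passed by the Katta start script; the fallback values
// used here are assumptions for the sketch.
public class EmbeddedSolrEnvDemo {
    public static void main(String[] args) {
        String solrHome = System.getProperty("solr.home", "/var/data/solr");
        String dirFactory = System.getProperty("solr.directoryFactory",
                "solr.StandardDirectoryFactory");
        System.out.println("solr.home = " + solrHome);
        System.out.println("solr.directoryFactory = " + dirFactory);
    }
}
{code}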

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: SOLR-1395.patch, SOLR-1395.patch, SOLR-1395.patch, 
 back-end.log, front-end.log, hadoop-core-0.19.0.jar, katta-core-0.6-dev.jar, 
 katta-solrcores.jpg, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2-3.patch, 
 solr-1395-katta-0.6.2.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-2247) ClosedChannelException throws on Linux

2011-02-11 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12993789#comment-12993789
 ] 

tom liu commented on SOLR-2247:
---

Yes, after I use JAVA_OPTS="$JAVA_OPTS -Dsolr.directoryFactory=solr.MMapDirectoryFactory", I no longer hit the CCE.

Thanks

 ClosedChannelException throws on Linux
 --

 Key: SOLR-2247
 URL: https://issues.apache.org/jira/browse/SOLR-2247
 Project: Solr
  Issue Type: Bug
  Components: search
 Environment: JDK1.6/Tomcat6
Reporter: tom liu

 I use distributed queries, but got a ClosedChannelException. 
 {noformat}
 Caused by: java.nio.channels.ClosedChannelException
 at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:88)
 at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:613)
 at 
 org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.readInternal(NIOFSDirectory.java:161)
 at 
 org.apache.lucene.store.BufferedIndexInput.readBytes(BufferedIndexInput.java:139)
 at 
 org.apache.lucene.index.CompoundFileReader$CSIndexInput.readInternal(CompoundFileReader.java:285)
 at 
 org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:160)
 at 
 org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:39)
 at org.apache.lucene.store.DataInput.readVInt(DataInput.java:86)
 at 
 org.apache.lucene.index.codecs.DeltaBytesReader.read(DeltaBytesReader.java:40)
 at 
 org.apache.lucene.index.codecs.PrefixCodedTermsReader$FieldReader$SegmentTermsEnum.next(PrefixCodedTermsReader.java:469)
 at 
 org.apache.lucene.index.codecs.PrefixCodedTermsReader$FieldReader$SegmentTermsEnum.seek(PrefixCodedTermsReader.java:385)
 at org.apache.lucene.index.TermsEnum.seek(TermsEnum.java:68)
 at org.apache.lucene.index.Terms.docFreq(Terms.java:53)
 at 
 org.apache.lucene.index.SegmentReader.docFreq(SegmentReader.java:898)
 at org.apache.lucene.index.IndexReader.docFreq(IndexReader.java:882)
 at 
 org.apache.lucene.index.DirectoryReader.docFreq(DirectoryReader.java:687)
 at 
 org.apache.solr.search.SolrIndexReader.docFreq(SolrIndexReader.java:305)
 at 
 org.apache.lucene.search.IndexSearcher.docFreq(IndexSearcher.java:136)
 at org.apache.lucene.search.Similarity.idfExplain(Similarity.java:804)
 at 
 org.apache.lucene.search.PhraseQuery$PhraseWeight.init(PhraseQuery.java:150)
 at 
 org.apache.lucene.search.PhraseQuery.createWeight(PhraseQuery.java:321)
 at org.apache.lucene.search.Query.weight(Query.java:101)
 at org.apache.lucene.search.Searcher.createWeight(Searcher.java:147)
 at org.apache.lucene.search.Searcher.search(Searcher.java:88)
 at 
 org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1388)
 at 
 org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1284)
 at 
 org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:343)
 at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:
 ...
 {noformat}
 With LUCENE-2239, I found NIOFSDirectory would throw a ClosedChannelException.
 See https://issues.apache.org/jira/browse/LUCENE-2239 

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Closed: (SOLR-2247) ClosedChannelException throws on Linux

2011-02-11 Thread tom liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tom liu closed SOLR-2247.
-

Resolution: Not A Problem

 ClosedChannelException throws on Linux
 --

 Key: SOLR-2247
 URL: https://issues.apache.org/jira/browse/SOLR-2247
 Project: Solr
  Issue Type: Bug
  Components: search
 Environment: JDK1.6/Tomcat6
Reporter: tom liu

 I use distributed queries, but got a ClosedChannelException. 
 {noformat}
 Caused by: java.nio.channels.ClosedChannelException
 at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:88)
 at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:613)
 at 
 org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.readInternal(NIOFSDirectory.java:161)
 at 
 org.apache.lucene.store.BufferedIndexInput.readBytes(BufferedIndexInput.java:139)
 at 
 org.apache.lucene.index.CompoundFileReader$CSIndexInput.readInternal(CompoundFileReader.java:285)
 at 
 org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:160)
 at 
 org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:39)
 at org.apache.lucene.store.DataInput.readVInt(DataInput.java:86)
 at 
 org.apache.lucene.index.codecs.DeltaBytesReader.read(DeltaBytesReader.java:40)
 at 
 org.apache.lucene.index.codecs.PrefixCodedTermsReader$FieldReader$SegmentTermsEnum.next(PrefixCodedTermsReader.java:469)
 at 
 org.apache.lucene.index.codecs.PrefixCodedTermsReader$FieldReader$SegmentTermsEnum.seek(PrefixCodedTermsReader.java:385)
 at org.apache.lucene.index.TermsEnum.seek(TermsEnum.java:68)
 at org.apache.lucene.index.Terms.docFreq(Terms.java:53)
 at 
 org.apache.lucene.index.SegmentReader.docFreq(SegmentReader.java:898)
 at org.apache.lucene.index.IndexReader.docFreq(IndexReader.java:882)
 at 
 org.apache.lucene.index.DirectoryReader.docFreq(DirectoryReader.java:687)
 at 
 org.apache.solr.search.SolrIndexReader.docFreq(SolrIndexReader.java:305)
 at 
 org.apache.lucene.search.IndexSearcher.docFreq(IndexSearcher.java:136)
 at org.apache.lucene.search.Similarity.idfExplain(Similarity.java:804)
 at 
 org.apache.lucene.search.PhraseQuery$PhraseWeight.init(PhraseQuery.java:150)
 at 
 org.apache.lucene.search.PhraseQuery.createWeight(PhraseQuery.java:321)
 at org.apache.lucene.search.Query.weight(Query.java:101)
 at org.apache.lucene.search.Searcher.createWeight(Searcher.java:147)
 at org.apache.lucene.search.Searcher.search(Searcher.java:88)
 at 
 org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1388)
 at 
 org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1284)
 at 
 org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:343)
 at 
 org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:
 ...
 {noformat}
 With LUCENE-2239, I found NIOFSDirectory would throw a ClosedChannelException.
 See https://issues.apache.org/jira/browse/LUCENE-2239 

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Issue Comment Edited: (SOLR-1395) Integrate Katta

2011-01-18 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12928464#action_12928464
 ] 

tom liu edited comment on SOLR-1395 at 1/18/11 3:27 AM:


JohnWu, Huang:

In the Katta integration, the Solr core has three roles:
# proxy: the query dispatcher, or front-end server.
All queries are sent to this proxy and then dispatched to the subproxies on the Katta cluster nodes.
In this proxy, QueryComponent's distributedProcess is executed, but with the param isShard=false.
# subproxy: the proxy on a Katta cluster node.
Because each node may host more than one core, the subproxy receives the query from the proxy and sends it on to the individual cores.
In this subproxy, QueryComponent's distributedProcess is executed, but with the param isShard=true.
# queryCore: the real query Solr core.
Each query is sent to a queryCore, and the queryCore executes QueryComponent's process method.

So, when running this Solr cluster/distributed setup, we set up three configurations.
# proxy's solrconfig.xml
{noformat}
<requestHandler name="standard" class="solr.KattaRequestHandler" default="true">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="shards">*</str>
  </lst>
</requestHandler>
{noformat}
# subproxy's solrconfig.xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">...</requestHandler>
# querycore's solrconfig.xml
<requestHandler name="standard" class="solr.MultiEmbeddedSearchHandler" default="true">...</requestHandler>

In katta's katta.node.properties:
node.server.class=org.apache.solr.katta.DeployableSolrKattaServer

And in the classes dir of the proxy's Solr webapp,
please add two files:
# katta.zk.properties
# katta.node.properties

  was (Author: tom_lt):
JohnWu, Huang:

In the Katta integration, the Solr core has three roles:
# proxy: the query dispatcher, or front-end server.
All queries are sent to this proxy and then dispatched to the subproxies on the Katta cluster nodes.
In this proxy, QueryComponent's distributedProcess is executed, but with the param isShard=false.
# subproxy: the proxy on a Katta cluster node.
Because each node may host more than one core, the subproxy receives the query from the proxy and sends it on to the individual cores.
In this subproxy, QueryComponent's distributedProcess is executed, but with the param isShard=true.
# queryCore: the real query Solr core.
Each query is sent to a queryCore, and the queryCore executes QueryComponent's process method.

So, when running this Solr cluster/distributed setup, we set up three configurations.
# proxy's solrconfig.xml
{noformat}
<requestHandler name="standard" class="solr.KattaRequestHandler" default="true">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="shards">*</str>
  </lst>
</requestHandler>
{noformat}
# subproxy's solrconfig.xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">...</requestHandler>
# querycore's solrconfig.xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">...</requestHandler>

In katta's katta.node.properties:
node.server.class=org.apache.solr.katta.DeployableSolrKattaServer

And in the classes dir of the proxy's Solr webapp,
please add two files:
# katta.zk.properties
# katta.node.properties
  
 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta-solrcores.jpg, katta.node.properties, 
 katta.zk.properties, log4j-1.2.13.jar, solr-1395-1431-3.patch, 
 solr-1395-1431-4.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431.patch, 
 solr-1395-katta-0.6.2-1.patch, solr-1395-katta-0.6.2-2.patch, 
 solr-1395-katta-0.6.2-3.patch, solr-1395-katta-0.6.2.patch, SOLR-1395.patch, 
 SOLR-1395.patch, SOLR-1395.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Issue Comment Edited: (SOLR-1395) Integrate Katta

2011-01-18 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12935709#action_12935709
 ] 

tom liu edited comment on SOLR-1395 at 1/18/11 3:28 AM:


JohnWu:

My conf is:
{code:xml|title=proxy/solrconfig.xml}
<requestHandler name="standard" class="solr.KattaRequestHandler" default="true">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="shards">*</str>
  </lst>
</requestHandler>
{code} 

{code:xml|title=subproxy/solrconfig.xml}
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <!-- default values for query parameters -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
</requestHandler>
{code} 

{code:xml|title=querycore(shards)/solrconfig.xml}
<requestHandler name="standard" class="solr.MultiEmbeddedSearchHandler" default="true">
  <!-- default values for query parameters -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
</requestHandler>
{code} 

{code:xml|title=zoo.cfg}
clientPort=2181
...
{code} 

In Katta/conf and Shards/WEB-INF/classes:
{code:xml|title=katta.zk.properties}
zookeeper.embedded=false
zookeeper.servers=localhost:2181
...
{code} 

  was (Author: tom_lt):
JohnWu:

My conf is:
{code:xml|title=proxy/solrconfig.xml}
<requestHandler name="standard" class="solr.KattaRequestHandler" default="true">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="shards">*</str>
  </lst>
</requestHandler>
{code} 

{code:xml|title=subproxy/solrconfig.xml}
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <!-- default values for query parameters -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
</requestHandler>
{code} 

{code:xml|title=querycore(shards)/solrconfig.xml}
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <!-- default values for query parameters -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
</requestHandler>
{code} 

{code:xml|title=zoo.cfg}
clientPort=2181
...
{code} 

In Katta/conf and Shards/WEB-INF/classes:
{code:xml|title=katta.zk.properties}
zookeeper.embedded=false
zookeeper.servers=localhost:2181
...
{code} 
  
 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta-solrcores.jpg, katta.node.properties, 
 katta.zk.properties, log4j-1.2.13.jar, solr-1395-1431-3.patch, 
 solr-1395-1431-4.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431.patch, 
 solr-1395-katta-0.6.2-1.patch, solr-1395-katta-0.6.2-2.patch, 
 solr-1395-katta-0.6.2-3.patch, solr-1395-katta-0.6.2.patch, SOLR-1395.patch, 
 SOLR-1395.patch, SOLR-1395.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2011-01-18 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12983076#action_12983076
 ] 

tom liu commented on SOLR-1395:
---

Sorry, the above comments have an error:
in querycore(shards)/solrconfig.xml, the requestHandler must be solr.MultiEmbeddedSearchHandler.
{code:xml|title=querycore(shards)/solrconfig.xml}
<requestHandler name="standard" class="solr.MultiEmbeddedSearchHandler" default="true">
  <!-- default values for query parameters -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
</requestHandler>
{code} 

QueryComponent returns a DocSlice, but XMLWriter or the embedded server returns a
SolrDocumentList built from the DocList.

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta-solrcores.jpg, katta.node.properties, 
 katta.zk.properties, log4j-1.2.13.jar, solr-1395-1431-3.patch, 
 solr-1395-1431-4.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431.patch, 
 solr-1395-katta-0.6.2-1.patch, solr-1395-katta-0.6.2-2.patch, 
 solr-1395-katta-0.6.2-3.patch, solr-1395-katta-0.6.2.patch, SOLR-1395.patch, 
 SOLR-1395.patch, SOLR-1395.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2011-01-13 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12981204#action_12981204
 ] 

tom liu commented on SOLR-1395:
---

In the Katta-integrated environment, Solr is embedded.

Katta acts as the distributed compute manager, which manages:
# node startup/shutdown
# shard deploy/undeploy
# RPC invocation of the application/Solr

and Solr acts as the application on the distributed compute environment.

On the master box, the query handler must be solr.KattaSearchHandler in solrconfig.xml,
so that the KattaClient is invoked by the Solr app, which then makes the RPC call to the slaves.

On the slave box, Katta starts up the embedded Solr, which is the subproxy.

The shard, that is the query SolrCore, is deployed by Katta's script:
bin/katta addIndex <indexName> <indexPath>

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta-solrcores.jpg, katta.node.properties, 
 katta.zk.properties, log4j-1.2.13.jar, solr-1395-1431-3.patch, 
 solr-1395-1431-4.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431.patch, 
 solr-1395-katta-0.6.2-1.patch, solr-1395-katta-0.6.2-2.patch, 
 solr-1395-katta-0.6.2-3.patch, solr-1395-katta-0.6.2.patch, SOLR-1395.patch, 
 SOLR-1395.patch, SOLR-1395.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Created: (SOLR-2310) DocBuilder's getTimeElapsedSince Error

2011-01-09 Thread tom liu (JIRA)
DocBuilder's getTimeElapsedSince Error
--

 Key: SOLR-2310
 URL: https://issues.apache.org/jira/browse/SOLR-2310
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.0
 Environment: JDK1.6
Reporter: tom liu


I have a job that runs for about 65 hours, but the dataimport?command=status HTTP
request reports 5 hours elapsed.

In the getTimeElapsedSince method of DocBuilder:
{noformat} 
static String getTimeElapsedSince(long l) {
  l = System.currentTimeMillis() - l;
  return (l / (60000 * 60)) % 60 + ":" + (l / 60000) % 60 + ":" + (l / 1000)
      % 60 + "." + l % 1000;
}
{noformat} 

The hours computation is wrong: the hours term is also taken modulo 60, so 65 elapsed
hours are reported as 65 % 60 = 5. It should be:
{noformat} 
static String getTimeElapsedSince(long l) {
  l = System.currentTimeMillis() - l;
  return (l / (60000 * 60)) + ":" + (l / 60000) % 60 + ":" + (l / 1000)
      % 60 + "." + l % 1000;
}
{noformat} 
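
For illustration only (not part of the issue or the fix), a tiny standalone check of the difference between the two versions:
{code:java|title=ElapsedHoursDemo.java}
// Standalone demo of the hours bug for an elapsed time of 65 hours.
public class ElapsedHoursDemo {
    public static void main(String[] args) {
        long elapsedMs = 65L * 60 * 60 * 1000;               // 65 hours in milliseconds
        long buggyHours = (elapsedMs / (60000 * 60)) % 60;   // 65 % 60 = 5  (what status reports)
        long fixedHours = elapsedMs / (60000 * 60);          // 65           (expected)
        System.out.println(buggyHours + " vs " + fixedHours);
    }
}
{code}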

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Created: (LUCENE-2827) JaroWinklerDistance returns 0f when s1.length()=s2.length()=0

2010-12-21 Thread tom liu (JIRA)
JaroWinklerDistance returns 0f when s1.length()=s2.length()=0
-

 Key: LUCENE-2827
 URL: https://issues.apache.org/jira/browse/LUCENE-2827
 Project: Lucene - Java
  Issue Type: Bug
  Components: contrib/spellchecker
Affects Versions: 4.0
 Environment: JDK1.6.17
Reporter: tom liu


StringDistance sd = new JaroWinklerDistance();
System.out.println(sd.getDistance("", ""));

The console prints 0.0.

But when using LevensteinDistance or NGramDistance, the console prints 1.0.
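
A self-contained reproduction (my sketch, assuming the contrib/spellchecker jar is on the classpath):
{code:java|title=EmptyStringDistanceTest.java}
import org.apache.lucene.search.spell.JaroWinklerDistance;
import org.apache.lucene.search.spell.LevensteinDistance;
import org.apache.lucene.search.spell.NGramDistance;
import org.apache.lucene.search.spell.StringDistance;

public class EmptyStringDistanceTest {
    public static void main(String[] args) {
        StringDistance[] distances = {
                new JaroWinklerDistance(), new LevensteinDistance(), new NGramDistance()
        };
        for (StringDistance sd : distances) {
            // Two identical (empty) strings; 1.0 is expected, but JaroWinklerDistance returns 0.0.
            System.out.println(sd.getClass().getSimpleName() + ": " + sd.getDistance("", ""));
        }
    }
}
{code}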

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2010-12-10 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12970111#action_12970111
 ] 

tom liu commented on SOLR-1395:
---

Eric:
please put katta.zk.properties and katta.node.properties into your webapp's 
WEB-INF/classes 

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta-solrcores.jpg, katta.node.properties, 
 katta.zk.properties, log4j-1.2.13.jar, solr-1395-1431-3.patch, 
 solr-1395-1431-4.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431.patch, 
 solr-1395-katta-0.6.2-1.patch, solr-1395-katta-0.6.2-2.patch, 
 solr-1395-katta-0.6.2-3.patch, solr-1395-katta-0.6.2.patch, SOLR-1395.patch, 
 SOLR-1395.patch, SOLR-1395.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2010-12-09 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12970069#action_12970069
 ] 

tom liu commented on SOLR-1395:
---

Eric:
Please put katta.zk.properties and katta.node.properties into your webapp's WEB-INF/lib.

JohnWu:
In katta's lib there are many jars, but some of them must be there; you know,
Solr must include Lucene's jar.

About your problem, that it can't find pc-slavo2:2: Katta must connect to
pc-slavo2:2 through a TCP socket.
How about you ping pc-slavo2 and telnet pc-slavo2 2?
You can try adding pc-slavo2 with its IP address to the hosts file.

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta-solrcores.jpg, katta.node.properties, 
 katta.zk.properties, log4j-1.2.13.jar, solr-1395-1431-3.patch, 
 solr-1395-1431-4.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431.patch, 
 solr-1395-katta-0.6.2-1.patch, solr-1395-katta-0.6.2-2.patch, 
 solr-1395-katta-0.6.2-3.patch, solr-1395-katta-0.6.2.patch, SOLR-1395.patch, 
 SOLR-1395.patch, SOLR-1395.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2010-12-05 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12967118#action_12967118
 ] 

tom liu commented on SOLR-1395:
---

In the proxy:
katta.node.properties:
#node.server.class=net.sf.katta.lib.lucene.LuceneServer
node.server.class=org.apache.solr.katta.DeployableSolrKattaServer

You must put apache-solr-core-XXX.jar into Katta's lib, along with the related jars.

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta-solrcores.jpg, katta.node.properties, 
 katta.zk.properties, log4j-1.2.13.jar, solr-1395-1431-3.patch, 
 solr-1395-1431-4.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431.patch, 
 solr-1395-katta-0.6.2-1.patch, solr-1395-katta-0.6.2-2.patch, 
 solr-1395-katta-0.6.2-3.patch, solr-1395-katta-0.6.2.patch, SOLR-1395.patch, 
 SOLR-1395.patch, SOLR-1395.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2010-12-02 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12966040#action_12966040
 ] 

tom liu commented on SOLR-1395:
---

solrHome is set in:
# webapps/yourapp/web.xml
this is the conf of the front server, or proxy.
# the katta script
this is the conf of the subproxy, such as:
{noformat}
...
KATTA_OPTS="$KATTA_OPTS -Dsolr.home=/var/data/solr/kattaproxy"
...
{noformat}
# solrcore
this conf is set by Katta/Solr automatically.

In the Katta integration, Tomcat (or Jetty) is the trigger point, which connects
to the ZooKeeper server and the Katta nodes.
The Katta nodes that have deployed SolrCores wait for queries from Tomcat.

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2-3.patch, 
 solr-1395-katta-0.6.2.patch, SOLR-1395.patch, SOLR-1395.patch, 
 SOLR-1395.patch, test-katta-core-0.6-dev.jar, zkclient-0.1-dev.jar, 
 zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2010-11-25 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12935709#action_12935709
 ] 

tom liu commented on SOLR-1395:
---

JohnWu:

My conf is:
{code:xml|title=proxy/solrconfig.xml}
<requestHandler name="standard" class="solr.KattaRequestHandler" default="true">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="shards">*</str>
  </lst>
</requestHandler>
{code} 

{code:xml|title=subproxy/solrconfig.xml}
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <!-- default values for query parameters -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
</requestHandler>
{code} 

{code:xml|title=querycore(shards)/solrconfig.xml}
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <!-- default values for query parameters -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
</requestHandler>
{code} 

{code:xml|title=zoo.cfg}
clientPort=2181
...
{code} 

In Katta/conf and Shards/WEB-INF/classes:
{code:xml|title=katta.zk.properties}
zookeeper.embedded=false
zookeeper.servers=localhost:2181
...
{code} 

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2-3.patch, 
 solr-1395-katta-0.6.2.patch, SOLR-1395.patch, SOLR-1395.patch, 
 SOLR-1395.patch, test-katta-core-0.6-dev.jar, zkclient-0.1-dev.jar, 
 zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Created: (SOLR-2247) ClosedChannelException throws on Linux

2010-11-22 Thread tom liu (JIRA)
ClosedChannelException throws on Linux
--

 Key: SOLR-2247
 URL: https://issues.apache.org/jira/browse/SOLR-2247
 Project: Solr
  Issue Type: Bug
  Components: search
 Environment: JDK1.6/Tomcat6
Reporter: tom liu


I use distributed queries, but got a ClosedChannelException. 
{noformat}
Caused by: java.nio.channels.ClosedChannelException
at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:88)
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:613)
at 
org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.readInternal(NIOFSDirectory.java:161)
at 
org.apache.lucene.store.BufferedIndexInput.readBytes(BufferedIndexInput.java:139)
at 
org.apache.lucene.index.CompoundFileReader$CSIndexInput.readInternal(CompoundFileReader.java:285)
at 
org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:160)
at 
org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:39)
at org.apache.lucene.store.DataInput.readVInt(DataInput.java:86)
at 
org.apache.lucene.index.codecs.DeltaBytesReader.read(DeltaBytesReader.java:40)
at 
org.apache.lucene.index.codecs.PrefixCodedTermsReader$FieldReader$SegmentTermsEnum.next(PrefixCodedTermsReader.java:469)
at 
org.apache.lucene.index.codecs.PrefixCodedTermsReader$FieldReader$SegmentTermsEnum.seek(PrefixCodedTermsReader.java:385)
at org.apache.lucene.index.TermsEnum.seek(TermsEnum.java:68)
at org.apache.lucene.index.Terms.docFreq(Terms.java:53)
at org.apache.lucene.index.SegmentReader.docFreq(SegmentReader.java:898)
at org.apache.lucene.index.IndexReader.docFreq(IndexReader.java:882)
at 
org.apache.lucene.index.DirectoryReader.docFreq(DirectoryReader.java:687)
at 
org.apache.solr.search.SolrIndexReader.docFreq(SolrIndexReader.java:305)
at 
org.apache.lucene.search.IndexSearcher.docFreq(IndexSearcher.java:136)
at org.apache.lucene.search.Similarity.idfExplain(Similarity.java:804)
at 
org.apache.lucene.search.PhraseQuery$PhraseWeight.init(PhraseQuery.java:150)
at 
org.apache.lucene.search.PhraseQuery.createWeight(PhraseQuery.java:321)
at org.apache.lucene.search.Query.weight(Query.java:101)
at org.apache.lucene.search.Searcher.createWeight(Searcher.java:147)
at org.apache.lucene.search.Searcher.search(Searcher.java:88)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1388)
at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1284)
at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:343)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:
...
{noformat}

With LUCENE-2239, I found NIOFSDirectory would throw a ClosedChannelException.
See https://issues.apache.org/jira/browse/LUCENE-2239 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Created: (SOLR-2243) Group Querys maybe return docList of 0 results

2010-11-17 Thread tom liu (JIRA)
Group Querys maybe return docList of 0 results
--

 Key: SOLR-2243
 URL: https://issues.apache.org/jira/browse/SOLR-2243
 Project: Solr
  Issue Type: Wish
  Components: search
 Environment: JDK1.6/Tomcat6
Reporter: tom liu


I wish to have the results below:
{noformat}
<lst name="grouped">
  <lst name="countrycode">
    <int name="matches">1411</int>
    <arr name="groups">
      <lst>
        <str name="groupValue">unit</str>
        <result name="doclist" numFound="921" start="0"/>
      </lst>
      <lst>
        <str name="groupValue">china</str>
        <result name="doclist" numFound="139" start="0"/>
      </lst>
    </arr>
  </lst>
</lst>
{noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Updated: (SOLR-2243) Group queries may return a docList of 0 results

2010-11-17 Thread tom liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tom liu updated SOLR-2243:
--

Attachment: SolrIndexSearcher.patch

I found:
# set group.limit=0
# in SolrIndexSearcher, I pass the value 1 to the Collector constructor

For example:
{noformat}
Phase2GroupCollector collector = new Phase2GroupCollector(
    (TopGroupCollector) gc.collector, gc.groupBy, gc.context, collectorSort,
    gc.docsPerGroup == 0 ? 1 : groupCommand.docsPerGroup,
    needScores);
{noformat}
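For reference, the case above corresponds to a request of roughly this shape (host, port and request handler path are illustrative; group.field matches the example output quoted below), which asks for per-group counts only:
{noformat}
http://localhost:8983/solr/select?q=*:*&group=true&group.field=countrycode&group.limit=0
{noformat}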

 Group queries may return a docList of 0 results
 --

 Key: SOLR-2243
 URL: https://issues.apache.org/jira/browse/SOLR-2243
 Project: Solr
  Issue Type: Wish
  Components: search
 Environment: JDK1.6/Tomcat6
Reporter: tom liu
 Attachments: SolrIndexSearcher.patch


 I wish to have the results below:
 {noformat}
 <lst name="grouped">
   <lst name="countrycode">
     <int name="matches">1411</int>
     <arr name="groups">
       <lst>
         <str name="groupValue">unit</str>
         <result name="doclist" numFound="921" start="0"/>
       </lst>
       <lst>
         <str name="groupValue">china</str>
         <result name="doclist" numFound="139" start="0"/>
       </lst>
     </arr>
   </lst>
 </lst>
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-2205) Grouping performance improvements

2010-11-17 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1295#action_1295
 ] 

tom liu commented on SOLR-2205:
---

Currently, grouped search does not support distributed queries.

Has anyone else already run into this?

 Grouping performance improvements
 -

 Key: SOLR-2205
 URL: https://issues.apache.org/jira/browse/SOLR-2205
 Project: Solr
  Issue Type: Sub-task
  Components: search
Affects Versions: 4.0
Reporter: Martijn van Groningen
 Fix For: 4.0

 Attachments: SOLR-2205.patch, SOLR-2205.patch


 This issue is dedicated to the performance of the grouping functionality.
 I've noticed that the code does not really perform well on large indexes. Doing a
 search (q=*:*) with grouping on an index of around 5M documents took around
 one second on my local development machine. We had to support grouping on an
 index that holds around 50M documents per machine, so we made some changes
 and were able to serve that number of documents happily. A patch will follow
 soon.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Created: (SOLR-2228) refactors DebugComponent's merge method to create a class that can be used on other SearchComponents

2010-11-09 Thread tom liu (JIRA)
refactors DebugComponent's merge method to create a class that can be used on 
other SearchComponents
---

 Key: SOLR-2228
 URL: https://issues.apache.org/jira/browse/SOLR-2228
 Project: Solr
  Issue Type: Improvement
  Components: SearchComponents - other
 Environment: JDK1.6/Tomcat6
Reporter: tom liu


Some SearchComponents have to merge many NamedLists into one NamedList.
For example, TermVectorComponent merges many NamedLists into one.
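As a rough illustration of what such a shared helper could look like (this is only a sketch of naive concatenation; the class and method names are made up, and a real shared merge, like the one in DebugComponent, also has to combine values that appear under the same key):
{noformat}
import java.util.List;

import org.apache.solr.common.util.NamedList;

// Hypothetical helper: concatenate several NamedLists into a single NamedList.
public final class NamedListMerger {
  private NamedListMerger() {}

  public static NamedList<Object> mergeAll(List<NamedList<Object>> parts) {
    NamedList<Object> merged = new NamedList<Object>();
    for (NamedList<Object> part : parts) {
      for (int i = 0; i < part.size(); i++) {
        merged.add(part.getName(i), part.getVal(i));   // keeps duplicate keys side by side
      }
    }
    return merged;
  }
}
{noformat}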

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Updated: (SOLR-2224) TermVectorComponent did not return results when using distributedProcess in distribution envs

2010-11-09 Thread tom liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tom liu updated SOLR-2224:
--

Attachment: TermsVectorComponent.patch

In distributed query environments, use the request that QueryComponent creates.

The patch uses the merge method that DebugComponent has;
see https://issues.apache.org/jira/browse/SOLR-2228

 TermVectorComponent did not return results when using distributedProcess in 
 distribution envs
 -

 Key: SOLR-2224
 URL: https://issues.apache.org/jira/browse/SOLR-2224
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 4.0
 Environment: JDK1.6/Tomcat6
Reporter: tom liu
 Attachments: TermsVectorComponent.patch


 When using a distributed query, TVRH does not return any results.
 In distributedProcess, tv creates one request that uses TermVectorParams.DOC_IDS, for example tv.docIds=10001,
 but QueryComponent returns ids that are uniqueKeys, not doc ids.
 So, in distributed environments, distributedProcess must not be used.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Created: (SOLR-2224) TermVectorComponent did not return results when using distributedProcess in distribution envs

2010-11-08 Thread tom liu (JIRA)
TermVectorComponent did not return results when using distributedProcess in 
distribution envs
-

 Key: SOLR-2224
 URL: https://issues.apache.org/jira/browse/SOLR-2224
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 4.0
 Environment: JDK1.6/Tomcat6
Reporter: tom liu


When using a distributed query, TVRH does not return any results.
In distributedProcess, tv creates one request that uses TermVectorParams.DOC_IDS, for example tv.docIds=10001,
but QueryComponent returns ids that are uniqueKeys, not doc ids.

So, in distributed environments, distributedProcess must not be used.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-2224) TermVectorComponent did not return results when using distributedProcess in distribution envs

2010-11-08 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12929916#action_12929916
 ] 

tom liu commented on SOLR-2224:
---

We can delete the distributedProcess method and add a modifyRequest method:
{noformat}
public void modifyRequest(ResponseBuilder rb, SearchComponent who, ShardRequest sreq) {
  if (rb.stage == ResponseBuilder.STAGE_GET_FIELDS)
    sreq.params.set("tv", true);
  else
    sreq.params.set("tv", false);
}
{noformat}

 TermVectorComponent did not return results when using distributedProcess in 
 distribution envs
 -

 Key: SOLR-2224
 URL: https://issues.apache.org/jira/browse/SOLR-2224
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 4.0
 Environment: JDK1.6/Tomcat6
Reporter: tom liu

 When using a distributed query, TVRH does not return any results.
 In distributedProcess, tv creates one request that uses TermVectorParams.DOC_IDS, for example tv.docIds=10001,
 but QueryComponent returns ids that are uniqueKeys, not doc ids.
 So, in distributed environments, distributedProcess must not be used.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2010-11-04 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12928464#action_12928464
 ] 

tom liu commented on SOLR-1395:
---

JohnWu,Huang :

In the Katta integration, a Solr core plays one of three roles:
# proxy: the query dispatcher, or front-end server.
Every query is sent to this proxy and then dispatched to a subproxy on a Katta cluster node.
In the proxy, QueryComponent's distributedProcess is executed, with the param isShard=false.
# subproxy: the proxy on a Katta cluster node.
Because each node may host more than one core, the subproxy receives the query from the proxy and sends it to each core.
In the subproxy, QueryComponent's distributedProcess is executed, with the param isShard=true.
# queryCore: the Solr core that actually runs the query.
The query is sent to the queryCore, and the queryCore executes QueryComponent's process method.

So, to run Solr as a cluster (distributed), we set up three configurations:
# proxy's solrconfig.xml
{noformat}
<requestHandler name="standard" class="solr.KattaRequestHandler" default="true">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="shards">*</str>
  </lst>
</requestHandler>
{noformat}
# subproxy's solrconfig.xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">...</requestHandler>
# querycore's solrconfig.xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">...</requestHandler>

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2.patch, SOLR-1395.patch, 
 SOLR-1395.patch, SOLR-1395.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Issue Comment Edited: (SOLR-1395) Integrate Katta

2010-11-04 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12928464#action_12928464
 ] 

tom liu edited comment on SOLR-1395 at 11/4/10 11:48 PM:
-

JohnWu,Huang :

In the Katta integration, a Solr core plays one of three roles:
# proxy: the query dispatcher, or front-end server.
Every query is sent to this proxy and then dispatched to a subproxy on a Katta cluster node.
In the proxy, QueryComponent's distributedProcess is executed, with the param isShard=false.
# subproxy: the proxy on a Katta cluster node.
Because each node may host more than one core, the subproxy receives the query from the proxy and sends it to each core.
In the subproxy, QueryComponent's distributedProcess is executed, with the param isShard=true.
# queryCore: the Solr core that actually runs the query.
The query is sent to the queryCore, and the queryCore executes QueryComponent's process method.

So, to run Solr as a cluster (distributed), we set up three configurations:
# proxy's solrconfig.xml
{noformat}
<requestHandler name="standard" class="solr.KattaRequestHandler" default="true">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="shards">*</str>
  </lst>
</requestHandler>
{noformat}
# subproxy's solrconfig.xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">...</requestHandler>
# querycore's solrconfig.xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">...</requestHandler>

In Katta's katta.node.properties:
node.server.class=org.apache.solr.katta.DeployableSolrKattaServer

  was (Author: tom_lt):
JohnWu,Huang :

In the Katta integration, a Solr core plays one of three roles:
# proxy: the query dispatcher, or front-end server.
Every query is sent to this proxy and then dispatched to a subproxy on a Katta cluster node.
In the proxy, QueryComponent's distributedProcess is executed, with the param isShard=false.
# subproxy: the proxy on a Katta cluster node.
Because each node may host more than one core, the subproxy receives the query from the proxy and sends it to each core.
In the subproxy, QueryComponent's distributedProcess is executed, with the param isShard=true.
# queryCore: the Solr core that actually runs the query.
The query is sent to the queryCore, and the queryCore executes QueryComponent's process method.

So, to run Solr as a cluster (distributed), we set up three configurations:
# proxy's solrconfig.xml
{noformat}
<requestHandler name="standard" class="solr.KattaRequestHandler" default="true">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="shards">*</str>
  </lst>
</requestHandler>
{noformat}
# subproxy's solrconfig.xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">...</requestHandler>
# querycore's solrconfig.xml
<requestHandler name="standard" class="solr.SearchHandler" default="true">...</requestHandler>
  
 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2.patch, SOLR-1395.patch, 
 SOLR-1395.patch, SOLR-1395.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Updated: (SOLR-1395) Integrate Katta

2010-11-03 Thread tom liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tom liu updated SOLR-1395:
--

Attachment: solr-1395-katta-0.6.2-2.patch

I fixed the bugs below:
# RPC server stopping
# RPC client receiving null docs
# RPC client request timeout, in which case Solr would receive null docs

BTW:
Walter, I found that if the server/client communication is changed so that the client sends the request to the server, the NPE is not thrown.
See: https://issues.apache.org/jira/browse/HADOOP-7017

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2.patch, SOLR-1395.patch, 
 SOLR-1395.patch, SOLR-1395.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2010-11-03 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12927774#action_12927774
 ] 

tom liu commented on SOLR-1395:
---

JohnWu:

Please use solr-1395-katta-0.6.2.patch.

I did not know how to make a patch from solr-1395-1431.patch;
the solr-1395-katta-0.6.2*.patch files include solr-1395-1431.patch.

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2-2.patch, solr-1395-katta-0.6.2.patch, SOLR-1395.patch, 
 SOLR-1395.patch, SOLR-1395.patch, test-katta-core-0.6-dev.jar, 
 zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2010-10-28 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12925765#action_12925765
 ] 

tom liu commented on SOLR-1395:
---

Walter, thanks.
I reviewed the code and found that the org.apache.hadoop.ipc.Client class holds the connection to each shard node, but there is only one socket/connection per node.
So, if a large number of requests are sent, they wait on the synchronized connection.

I think there should be several connections per node, as sketched below.
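A very rough sketch of the idea (everything here is illustrative: the connection placeholder, the factory, the pool size and the round-robin choice are my assumptions, not the Hadoop IPC internals):
{noformat}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: keep N independent connections per shard node and hand
// them out round-robin, so concurrent requests do not serialize on one socket.
public class PerNodeConnectionPool<C> {

  public interface Factory<T> {
    T create(String nodeAddress);
  }

  private final int connectionsPerNode;
  private final Factory<C> factory;
  private final Map<String, List<C>> pools = new HashMap<String, List<C>>();
  private final AtomicInteger counter = new AtomicInteger();

  public PerNodeConnectionPool(int connectionsPerNode, Factory<C> factory) {
    this.connectionsPerNode = connectionsPerNode;
    this.factory = factory;
  }

  public synchronized C get(String nodeAddress) {
    List<C> pool = pools.get(nodeAddress);
    if (pool == null) {
      pool = new ArrayList<C>(connectionsPerNode);
      for (int i = 0; i < connectionsPerNode; i++) {
        pool.add(factory.create(nodeAddress));          // open several connections up front
      }
      pools.put(nodeAddress, pool);
    }
    int slot = (counter.getAndIncrement() & 0x7fffffff) % connectionsPerNode;
    return pool.get(slot);                              // round-robin between them
  }
}
{noformat}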

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2.patch, SOLR-1395.patch, SOLR-1395.patch, 
 SOLR-1395.patch, test-katta-core-0.6-dev.jar, zkclient-0.1-dev.jar, 
 zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2010-10-26 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12924911#action_12924911
 ] 

tom liu commented on SOLR-1395:
---

Concurrent requests cause an NPE to be thrown.
For example:
{noformat}
ab -n 1 -c 5 http://solr01:8080/solr/select?q=solr;...
{noformat}

an NPE is thrown:
{noformat}
10/10/26 17:36:03 TRACE client.WorkQueue:261 - Done waiting, results = 
ClientResult: 0 results, 0 errors, 0/2 shards (id=2359:0)
10/10/26 17:36:03 TRACE client.WorkQueue:270 - Shutting down work queue, 
results = ClientResult: 0 results, 0 errors, 0/2 shards (id=2359:0)
10/10/26 17:36:03 TRACE client.ClientResult:286 - close() called.
10/10/26 17:36:03 TRACE client.ClientResult:290 - Notifying closed listener.
10/10/26 17:36:03 TRACE client.WorkQueue:136 - Shut down via 
ClientRequest.close()
10/10/26 17:36:03 TRACE client.WorkQueue:188 - Shutdown() called (id=2359)
10/10/26 17:36:03 TRACE client.WorkQueue:277 - Returning results = 
ClientResult: 0 results, 0 errors, 0/2 shards (closed), took 10003 ms 
(id=2359:0)
10/10/26 17:36:03 DEBUG client.Client:427 - 
broadcast(request([Ljava.lang.Object;@7cf02bee), 
{solr03:2=[solrhome01#solrhome01, solrhome02#solrhome02]}) took 10004 msec 
for ClientResult: 0 results, 0 errors, 0/2 shards (closed)
10/10/26 17:36:03 INFO component.SearchHandler:89 - KattaCommComponent 
results.size: 0
10/10/26 17:36:03 WARN component.SearchHandler:93 - Received 0 responses for 
query [], not 1
10/10/26 17:36:03 ERROR core.SolrCore:151 - java.lang.NullPointerException
at 
org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:553)
at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:435)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:304)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
{noformat}

But using
{noformat}
ab -n 1 -c 1 http://solr01:8080/solr/select?q=solr;... 
{noformat}
does not throw it.

BTW:
The NPE stops RPC communication in the request method of SolrKattaServer.java.

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, solr-1395-katta-0.6.2-1.patch, 
 solr-1395-katta-0.6.2.patch, SOLR-1395.patch, SOLR-1395.patch, 
 SOLR-1395.patch, test-katta-core-0.6-dev.jar, zkclient-0.1-dev.jar, 
 zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2010-10-10 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12919581#action_12919581
 ] 

tom liu commented on SOLR-1395:
---

My deployment is:
# one Master
# two Slaves:
#* solr01
#* solr02
# two Indexes:
#* solrhome01(.zip)
#* solrhome02(.zip)

And I use:
{noformat}
# bin/katta addIndex solrhome01 hdfs://localhost:9000/solr/solrhome01.zip
# bin/katta addIndex solrhome02 hdfs://localhost:9000/solr/solrhome02.zip
{noformat}

So my shard-to-node mapping is:
# solrhome01#solrhome01
#* --solr01
#* --solr02
# solrhome02#solrhome02
#* --solr01
#* --solr02

When I searched on the master, I found that on each slave the search ran twice, for example:
{noformat}
SolrServer.request: solr01:2 shards:[solrhome01#solrhome01, 
solrhome02#solrhome02] 
request 
params:fl=id%2Cscorestart=0q=solrisShard=truefsv=truerows=10shards=solrhome01%23solrhome01%2Csolrhome02%23solrhome02
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome02#solrhome02] webapp=null path=/select 
params={fl=id%2Cscorestart=0q=solrisShard=truefsv=truerows=10} hits=1 
status=0 QTime=16 
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome01#solrhome01] webapp=null path=/select 
params={fl=id%2Cscorestart=0q=solrisShard=truefsv=truerows=10} hits=1 
status=0 QTime=16 
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome02#solrhome02] webapp=null path=/select 
params={fl=id%2Cscore%2Cidstart=0q=solrisShard=truerows=10ids=SOLR1000} 
status=0 QTime=0 
SolrServer.SolrResponse:{response={numFound=1,start=00.5747526,docs=[SolrDocument[{id=SOLR1000,
 score=0.5747526}]]},QueriedShards=[Ljava.lang.String;@175ace6}
{noformat}
{noformat}
SolrServer.request: tom-SL510:20001 shards:[solrhome01#solrhome01, 
solrhome02#solrhome02] 
request 
params:start=0ids=SOLR1000q=solrisShard=truerows=10shards=solrhome01%23solrhome01%2Csolrhome02%23solrhome02
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome02#solrhome02] webapp=null path=/select 
params={start=0ids=SOLR1000q=solrisShard=truerows=10fsv=truefl=id%2Cscore}
 status=0 QTime=16
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome01#solrhome01] webapp=null path=/select 
params={start=0ids=SOLR1000q=solrisShard=truerows=10fsv=truefl=id%2Cscore}
 status=0 QTime=16
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome01#solrhome01] webapp=null path=/select 
params={start=0ids=SOLR1000ids=SOLR1000q=solrisShard=truerows=10} status=0 
QTime=0
SolrServer.SolrResponse:{response={numFound=1,start=0,docs=[SolrDocument[{id=SOLR1000,
 ...]]},QueriedShards=[Ljava.lang.String;@1d590d}
{noformat}

I think that on the slaves isShard=true should prevent this from happening; see the sketch below.
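If that is the cause, a guard of roughly this shape on the Katta side (the class and method names are placeholders, not code from the patch) could stop a slave from re-distributing a request that is already marked as a shard request:
{noformat}
import org.apache.solr.common.params.SolrParams;

// Placeholder sketch: a request that already carries isShard=true (as in the
// slave logs above) should be executed locally, not fanned out to the shards again.
public class ShardGuard {
  public static boolean shouldDistribute(SolrParams params) {
    boolean alreadyAShardRequest = params.getBool("isShard", false);
    return !alreadyAShardRequest;   // only the front-end re-distributes
  }
}
{noformat}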

 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, SOLR-1395.patch, SOLR-1395.patch, SOLR-1395.patch, 
 test-katta-core-0.6-dev.jar, zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Issue Comment Edited: (SOLR-1395) Integrate Katta

2010-10-10 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12919581#action_12919581
 ] 

tom liu edited comment on SOLR-1395 at 10/10/10 4:41 AM:
-

My deployment is:
# one Master
# two Slaves:
#* solr01
#* solr02
# two Indexes:
#* solrhome01(.zip)
#* solrhome02(.zip)

And I use:
{noformat}
# bin/katta addIndex solrhome01 hdfs://localhost:9000/solr/solrhome01.zip
# bin/katta addIndex solrhome02 hdfs://localhost:9000/solr/solrhome02.zip
{noformat}

So my shard-to-node mapping is:
# solrhome01#solrhome01
#* --solr01
#* --solr02
# solrhome02#solrhome02
#* --solr01
#* --solr02

When I searched on the master, I found that on each slave the search ran twice, for example:
{noformat}
SolrServer.request: solr01:2 shards:[solrhome01#solrhome01, 
solrhome02#solrhome02] 
request 
params:fl=id%2Cscorestart=0q=solrisShard=truefsv=truerows=10shards=solrhome01%23solrhome01%2Csolrhome02%23solrhome02
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome02#solrhome02] webapp=null path=/select 
params={fl=id%2Cscorestart=0q=solrisShard=truefsv=truerows=10} hits=1 
status=0 QTime=16 
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome01#solrhome01] webapp=null path=/select 
params={fl=id%2Cscorestart=0q=solrisShard=truefsv=truerows=10} hits=1 
status=0 QTime=16 
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome02#solrhome02] webapp=null path=/select 
params={fl=id%2Cscore%2Cidstart=0q=solrisShard=truerows=10ids=SOLR1000} 
status=0 QTime=0 
SolrServer.SolrResponse:{response={numFound=1,start=00.5747526,docs=[SolrDocument[{id=SOLR1000,
 score=0.5747526}]]},QueriedShards=[Ljava.lang.String;@175ace6}
{noformat}
{noformat}
SolrServer.request: solr02:2 shards:[solrhome01#solrhome01, 
solrhome02#solrhome02] 
request 
params:start=0ids=SOLR1000q=solrisShard=truerows=10shards=solrhome01%23solrhome01%2Csolrhome02%23solrhome02
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome02#solrhome02] webapp=null path=/select 
params={start=0ids=SOLR1000q=solrisShard=truerows=10fsv=truefl=id%2Cscore}
 status=0 QTime=16
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome01#solrhome01] webapp=null path=/select 
params={start=0ids=SOLR1000q=solrisShard=truerows=10fsv=truefl=id%2Cscore}
 status=0 QTime=16
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome01#solrhome01] webapp=null path=/select 
params={start=0ids=SOLR1000ids=SOLR1000q=solrisShard=truerows=10} status=0 
QTime=0
SolrServer.SolrResponse:{response={numFound=1,start=0,docs=[SolrDocument[{id=SOLR1000,
 ...]]},QueriedShards=[Ljava.lang.String;@1d590d}
{noformat}

I think that on the slaves isShard=true should prevent this from happening.

  was (Author: tom_lt):
My deployment is:
# one Master
# two Slaves:
#* solr01
#* solr02
# two Indexes:
#* solrhome01(.zip)
#* solrhome02(.zip)

And I use:
{noformat}
# bin/katta addIndex solrhome01 hdfs://localhost:9000/solr/solrhome01.zip
# bin/katta addIndex solrhome02 hdfs://localhost:9000/solr/solrhome02.zip
{noformat}

So my shard-to-node mapping is:
# solrhome01#solrhome01
#* --solr01
#* --solr02
# solrhome02#solrhome02
#* --solr01
#* --solr02

When I searched on the master, I found that on each slave the search ran twice, for example:
{noformat}
SolrServer.request: solr01:2 shards:[solrhome01#solrhome01, 
solrhome02#solrhome02] 
request 
params:fl=id%2Cscorestart=0q=solrisShard=truefsv=truerows=10shards=solrhome01%23solrhome01%2Csolrhome02%23solrhome02
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome02#solrhome02] webapp=null path=/select 
params={fl=id%2Cscorestart=0q=solrisShard=truefsv=truerows=10} hits=1 
status=0 QTime=16 
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome01#solrhome01] webapp=null path=/select 
params={fl=id%2Cscorestart=0q=solrisShard=truefsv=truerows=10} hits=1 
status=0 QTime=16 
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome02#solrhome02] webapp=null path=/select 
params={fl=id%2Cscore%2Cidstart=0q=solrisShard=truerows=10ids=SOLR1000} 
status=0 QTime=0 
SolrServer.SolrResponse:{response={numFound=1,start=00.5747526,docs=[SolrDocument[{id=SOLR1000,
 score=0.5747526}]]},QueriedShards=[Ljava.lang.String;@175ace6}
{noformat}
{noformat}
SolrServer.request: tom-SL510:20001 shards:[solrhome01#solrhome01, 
solrhome02#solrhome02] 
request 
params:start=0ids=SOLR1000q=solrisShard=truerows=10shards=solrhome01%23solrhome01%2Csolrhome02%23solrhome02
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome02#solrhome02] webapp=null path=/select 
params={start=0ids=SOLR1000q=solrisShard=truerows=10fsv=truefl=id%2Cscore}
 status=0 QTime=16
2010-10-10 16:17:04 org.apache.solr.core.SolrCore execute
信息: [solrhome01#solrhome01] webapp=null path=/select 

[jira] Created: (SOLR-2147) NPE thrown in ShardDoc's ShardFieldSortedHitQueue during distributed searching

2010-10-09 Thread tom liu (JIRA)
NPE thrown in ShardDoc's ShardFieldSortedHitQueue during distributed searching


 Key: SOLR-2147
 URL: https://issues.apache.org/jira/browse/SOLR-2147
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.0
 Environment: JDK1.6/Tomcat6
Reporter: tom liu


When distributed searching uses the Katta components (SOLR-1395), an NPE is thrown:
Oct 9, 2010 5:43:59 PM org.apache.solr.common.SolrException log
SEVERE: java.lang.NullPointerException
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue$1.compare(ShardDoc.java:210)
at 
org.apache.solr.handler.component.ShardFieldSortedHitQueue.lessThan(ShardDoc.java:134)
at org.apache.lucene.util.PriorityQueue.upHeap(PriorityQueue.java:221)
at org.apache.lucene.util.PriorityQueue.add(PriorityQueue.java:130)
at 
org.apache.lucene.util.PriorityQueue.insertWithOverflow(PriorityQueue.java:146)
at 
org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:555)
at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:408)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:304)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at 
org.apache.solr.katta.SolrKattaServer.request(SolrKattaServer.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

I found that ShardDoc.java line 210 was:
final float f1 = e1.score;
final float f2 = e2.score;
The score field is of type Float, so it may be null. It should be changed to:
final float f1 = e1.score==null?0.00f:e1.score;
final float f2 = e2.score==null?0.00f:e2.score;


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Commented: (SOLR-1395) Integrate Katta

2010-10-09 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12919478#action_12919478
 ] 

tom liu commented on SOLR-1395:
---

I use the newest Solr 4.0 trunk code, with Katta 0.6.2, hadoop-0.20.2 and zookeeper-3.3.1; after fixing some bugs, I can run it.

The bugs are:
1. Solr's ShardDoc.java, ShardFieldSortedHitQueue line 210:
final float f1 = e1.score==null?0.00f:e1.score;
final float f2 = e2.score==null?0.00f:e2.score;
2. KattaSearchHandler.java: KattaMultiShardHandler may return more than one result, so all results must be merged:
if (results.isEmpty()) {
    ssr.setResponse(new NamedList<Object>());
    return;
}
+
+   NamedList<Object> nl = new NamedList<Object>();
+   NamedListCollection nlc = new NamedListCollection(nl);
+   for (KattaResponse kr : results) {
+       nl = nlc.add(kr.getRsp().getResponse());
+   }
    ssr.setResponse(nl);
}
+   private class NamedListCollection {
+       private NamedList<Object> _nl;
+       NamedListCollection(NamedList<Object> nl) {
+           _nl = nl;
+       }
+       NamedList<Object> add(NamedList<Object> nl) {
+           Iterator<Map.Entry<String,Object>> it = nl.iterator();
+           while (it.hasNext()) {
+               Map.Entry<String,Object> entry = it.next();
+               String key = entry.getKey();
+               Object obj = entry.getValue();
+               Object old = _nl.remove(key);
+               if (old != null) {
+                   add(key, obj, old);
+               } else {
+                   _nl.add(key, obj);
+               }
+           }
+           return _nl;
+       }
+       void add(String key, Object obj, Object old) {
+           if (key.equals("response")) {
+               SolrDocumentList doca = (SolrDocumentList) obj;
+               SolrDocumentList docb = (SolrDocumentList) old;
+               SolrDocumentList docs = new SolrDocumentList();
+               docs.setNumFound(doca.getNumFound() + docb.getNumFound());
+               //doca.setStart(doca.getStart() + docb.getStart());
+               docs.setMaxScore(Math.max(doca.getMaxScore(), docb.getMaxScore()));
+               docs.addAll(doca);
+               docs.addAll(docb);
+               _nl.add(key, docs);
+           } else if (key.equals("QueriedShards")) {
+               Collection<String> qsa = (ArrayList<String>) obj;
+               Collection<String> qsb = (ArrayList<String>) old;
+               Collection<String> qs = new ArrayList<String>();
+               qs.addAll(qsa);
+               qs.addAll(qsb);
+               _nl.add(key, qs);
+           }
+       }
+   }


 Integrate Katta
 ---

 Key: SOLR-1395
 URL: https://issues.apache.org/jira/browse/SOLR-1395
 Project: Solr
  Issue Type: New Feature
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: Next

 Attachments: back-end.log, front-end.log, hadoop-core-0.19.0.jar, 
 katta-core-0.6-dev.jar, katta.node.properties, katta.zk.properties, 
 log4j-1.2.13.jar, solr-1395-1431-3.patch, solr-1395-1431-4.patch, 
 solr-1395-1431-katta0.6.patch, solr-1395-1431-katta0.6.patch, 
 solr-1395-1431.patch, SOLR-1395.patch, SOLR-1395.patch, SOLR-1395.patch, 
 test-katta-core-0.6-dev.jar, zkclient-0.1-dev.jar, zookeeper-3.2.1.jar

   Original Estimate: 336h
  Remaining Estimate: 336h

 We'll integrate Katta into Solr so that:
 * Distributed search uses Hadoop RPC
 * Shard/SolrCore distribution and management
 * Zookeeper based failover
 * Indexes may be built using Hadoop

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: 

[jira] Commented: (SOLR-2109) NPE thrown in /solr/browse page

2010-09-09 Thread tom liu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12907535#action_12907535
 ] 

tom liu commented on SOLR-2109:
---

I am using Solr's trunk.
Now I have found that I did not use post.sh to add data into Solr. After doing that, the exception is not thrown.

 NPE thrown in /solr/browse page
 ---

 Key: SOLR-2109
 URL: https://issues.apache.org/jira/browse/SOLR-2109
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.0
 Environment: java1.6.0_17/windowsxp/tomcat6.0.29
Reporter: tom liu

 In the Solr admin UI, I use solr/browse but see an NPE in the console:
 2010-9-8 13:50:25 org.apache.solr.core.SolrCore execute
 INFO: [] webapp=/solr path=/terms 
 params={timestamp=1283925025672&limit=10&terms.fl=name&q=solr&wt=velocity&terms.sort=count&v.template=suggest&terms.prefix=solr}
  status=500 QTime=0
 2010-9-8 13:53:08 org.apache.solr.common.SolrException log
 Fatal: java.io.IOException: Can't find resource '/terms.vm' in classpath or 
 'D:\apps\solr\solrhome\.\conf/', cwd=D:\apps\apache-tomcat-6.0.29\bin
 at 
 org.apache.solr.response.VelocityResponseWriter.getTemplate(VelocityResponseWriter.java:169)
 at 
 org.apache.solr.response.VelocityResponseWriter.write(VelocityResponseWriter.java:42)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:324)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:253)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
 at 
 org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:861)
 at 
 org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:579)
 at 
 org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1584)
 at java.lang.Thread.run(Thread.java:619)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Created: (SOLR-2109) NPE thrown in /solr/browse page

2010-09-08 Thread tom liu (JIRA)
NPE thrown in /solr/browse page
---

 Key: SOLR-2109
 URL: https://issues.apache.org/jira/browse/SOLR-2109
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.0
 Environment: java1.6.0_17/windowsxp/tomcat6.0.29
Reporter: tom liu


In the Solr admin UI, I use solr/browse but see an NPE in the console:
2010-9-8 13:50:25 org.apache.solr.core.SolrCore execute
INFO: [] webapp=/solr path=/terms 
params={timestamp=1283925025672&limit=10&terms.fl=name&q=solr&wt=velocity&terms.sort=count&v.template=suggest&terms.prefix=solr}
 status=500 QTime=0
2010-9-8 13:53:08 org.apache.solr.common.SolrException log
Fatal: java.io.IOException: Can't find resource '/terms.vm' in classpath or 
'D:\apps\solr\solrhome\.\conf/', cwd=D:\apps\apache-tomcat-6.0.29\bin
at 
org.apache.solr.response.VelocityResponseWriter.getTemplate(VelocityResponseWriter.java:169)
at 
org.apache.solr.response.VelocityResponseWriter.write(VelocityResponseWriter.java:42)
at 
org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:324)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:253)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at 
org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:861)
at 
org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:579)
at 
org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1584)
at java.lang.Thread.run(Thread.java:619)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Updated: (SOLR-2109) NPE thrown in /solr/browse page

2010-09-08 Thread tom liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tom liu updated SOLR-2109:
--


The URL that jQuery requests is:
http://localhost:8080/solr/terms?q=solr&wt=velocity&timestamp=1283925025672&limit=10&terms.fl=name&terms.sort=count&v.template=suggest&terms.prefix=solr

The exception is:
java.lang.NullPointerException
at 
org.apache.solr.handler.component.TermsComponent.process(TermsComponent.java:113)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:210)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1323)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:240)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at 
org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:861)
at 
org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:579)
at 
org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1584)
at java.lang.Thread.run(Thread.java:619)

 NPE thrown in /solr/browse page
 ---

 Key: SOLR-2109
 URL: https://issues.apache.org/jira/browse/SOLR-2109
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.0
 Environment: java1.6.0_17/windowsxp/tomcat6.0.29
Reporter: tom liu

 In the Solr admin UI, I use solr/browse but see an NPE in the console:
 2010-9-8 13:50:25 org.apache.solr.core.SolrCore execute
 INFO: [] webapp=/solr path=/terms 
 params={timestamp=1283925025672&limit=10&terms.fl=name&q=solr&wt=velocity&terms.sort=count&v.template=suggest&terms.prefix=solr}
  status=500 QTime=0
 2010-9-8 13:53:08 org.apache.solr.common.SolrException log
 Fatal: java.io.IOException: Can't find resource '/terms.vm' in classpath or 
 'D:\apps\solr\solrhome\.\conf/', cwd=D:\apps\apache-tomcat-6.0.29\bin
 at 
 org.apache.solr.response.VelocityResponseWriter.getTemplate(VelocityResponseWriter.java:169)
 at 
 org.apache.solr.response.VelocityResponseWriter.write(VelocityResponseWriter.java:42)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:324)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:253)
 at 
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at 
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at 
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
 at 
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at 
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at 
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at 
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at 
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
 at 
 org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:861)
 at 
 org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:579)
 at 
 org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1584)
 at java.lang.Thread.run(Thread.java:619)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Created: (SOLR-1732) DIH throws an exception when the config duplicates the pk on a child entity

2010-01-25 Thread tom liu (JIRA)
DIH throws an exception when the config duplicates the pk on a child entity


 Key: SOLR-1732
 URL: https://issues.apache.org/jira/browse/SOLR-1732
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 1.5
 Environment: jdk 1.6.0.16/tomcat 6.0/linux centos5.2
Reporter: tom liu


data-config.xml is like this:
<document name="products">
  <entity name="qa" pk="idx" dataSource="db">
    ...
    <field column="idx" name="id" />
    <entity name="answer" pk="idx,qaidx" dataSource="db" .../>

The Tomcat log is:
Jan 21, 2010 2:43:05 PM org.apache.solr.handler.dataimport.DataImporter 
doDeltaImport
SEVERE: Delta Import Failed
java.lang.NullPointerException
at 
org.apache.solr.handler.dataimport.DocBuilder.collectDelta(DocBuilder.java:650)
at 
org.apache.solr.handler.dataimport.DocBuilder.collectDelta(DocBuilder.java:616)
at 
org.apache.solr.handler.dataimport.DocBuilder.doDelta(DocBuilder.java:266)
at 
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:174)
at 
org.apache.solr.handler.dataimport.DataImporter.doDeltaImport(DataImporter.java:355)
at 
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:394)
at 
org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:373)

So I found that DocBuilder line 650 is:
if (modifiedRow.get(entity.getPk()).equals(row.get(entity.getPk()))) {
but modifiedRow does not contain the 'idx,qaidx' column.
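A null-safe version of that comparison (just a sketch of one possible guard, using plain Maps for the rows; whether skipping the row is the right behaviour for a multi-column pk like 'idx,qaidx' is a separate question) would at least avoid the NPE:
{noformat}
import java.util.Map;

// Sketch: null-safe replacement for the line-650 comparison, so a pk value
// that is missing from modifiedRow does not cause a NullPointerException.
public final class PkCompare {
  private PkCompare() {}

  public static boolean samePk(Map<String, Object> modifiedRow,
                               Map<String, Object> row,
                               String pk) {
    Object a = modifiedRow.get(pk);
    Object b = row.get(pk);
    return a != null && a.equals(b);
  }
}
{noformat}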

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-1422) Add one QParser to Query Chinese Sentences

2009-09-11 Thread tom liu (JIRA)
Add one QParser to Query Chinese Sentences
--

 Key: SOLR-1422
 URL: https://issues.apache.org/jira/browse/SOLR-1422
 Project: Solr
  Issue Type: Improvement
  Components: search
 Environment: windows xp/ jdk1.6 / tomcat6
Reporter: tom liu


DisMaxQParser does not correctly analyze Chinese sentences, so I implemented a QParser derived from DisMax.
Limits:
 in schema.xml, set defaultSearchField to a field whose type uses a Chinese analyzer
Result:
 if you input C1C2C3C4, then
 in DisMaxQParser we will find that qstr is C1C2 C3C4

1. SentenceDisMaxQParser Class::
package org.apache.solr.search;

import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Token;
import java.io.StringReader;

import org.apache.solr.common.SolrException;
import org.apache.solr.common.params.DefaultSolrParams;
import org.apache.solr.common.params.DisMaxParams;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.util.SolrPluginUtils;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SentenceDisMaxQParser extends DisMaxQParser {
  private static Logger log = 
LoggerFactory.getLogger(SentenceDisMaxQParser.class);
  public SentenceDisMaxQParser(String qstr, SolrParams localParams, SolrParams 
params, SolrQueryRequest req) {
super(qstr, localParams, params, req);

Analyzer analyzer = req.getSchema().getQueryAnalyzer();
if(null == analyzer)
return;

StringBuilder norm = new StringBuilder();
log.info("before analyzer, qstr=" + this.qstr);
try{
TokenStream tokens = analyzer.reusableTokenStream( 
req.getSchema().getDefaultSearchFieldName(), new StringReader( this.qstr ) );
tokens.reset();
Token token = tokens.next();
while( token != null ) {
  norm.append( new String(token.termBuffer(), 0, token.termLength()) ).append( " " );
  token = tokens.next();
}
} catch(Exception ex){
log.info("Ex=" + ex);
}
if (norm.length() > 0)
this.qstr = norm.toString();
log.info("after analyzer, qstr=" + this.qstr);
  }
}

2. SentenceDisMaxQParserPlugin Class::
package org.apache.solr.search;

import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.request.SolrQueryRequest;

public class SentenceDisMaxQParserPlugin extends QParserPlugin {
  public static String NAME = "sdismax";

  public void init(NamedList args) {
  }

  public QParser createParser(String qstr, SolrParams localParams, SolrParams 
params, SolrQueryRequest req) {
return new SentenceDisMaxQParser(qstr, localParams, params, req);
  }
}
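To try the parser (a sketch of the wiring; the package org.apache.solr.search matches the classes above, the rest is the standard QParserPlugin registration), add it to solrconfig.xml:
{noformat}
<queryParser name="sdismax" class="org.apache.solr.search.SentenceDisMaxQParserPlugin"/>
{noformat}
and then select it per request with defType=sdismax, so a query such as /select?defType=sdismax&q=C1C2C3C4 goes through the new parser.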


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-1418) Improve QueryElevationComponent to Query Complex Strings

2009-09-09 Thread tom liu (JIRA)
Improve QueryElevationComponent to Query Complex Strings


 Key: SOLR-1418
 URL: https://issues.apache.org/jira/browse/SOLR-1418
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.4
 Environment: windows xp/jdk1.6/tomcat6
Reporter: tom liu


In Solr 1.4, QueryElevationComponent uses each query doc node in elevation.xml to create an ElevationObj and then adds it to the elevationCache. After that, when a user invokes a query string qstr, the prepare method invokes getAnalyzedQuery(qstr) to get the analyzed query string and then fetches the ElevationObj from the elevationCache.
So the user's input string qstr must match a query doc node's string; if not, we will not get elevation results from elevation.xml.

I think this could be improved, for example (a sketch follows below):
1. Change the method [String getAnalyzedQuery( String query ) throws IOException] to [String[] getAnalyzedQuery( String query ) throws IOException]
2. Change the prepare method from:
booster = getElevationMap( reader, req.getCore() ).get( qstr );
to:
for(String qstr : qstrs){
  booster = getElevationMap( reader, req.getCore() ).get( qstr );
  if(null != booster) break;
}
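A sketch of what point 1 could look like (the body below is only an illustration that returns the analyzed form plus the raw input as a fallback; the real component keeps its existing analysis logic):
{noformat}
// Illustration only (meant to sit inside QueryElevationComponent): return a
// list of candidate keys for the elevationCache -- the analyzed query first,
// then the raw user input -- so prepare() can try each one in turn.
String[] getAnalyzedQueries(String query) throws IOException {
  String analyzed = getAnalyzedQuery(query);            // the existing helper
  return analyzed.equals(query)
      ? new String[] { analyzed }
      : new String[] { analyzed, query };               // raw string as a fallback
}
{noformat}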

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.