[jira] [Commented] (SOLR-6449) Add first class support for Real Time Get in Solrj

2015-01-10 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272803#comment-14272803
 ] 

Anurag Sharma commented on SOLR-6449:
-

Great job!

 Add first class support for Real Time Get in Solrj
 --

 Key: SOLR-6449
 URL: https://issues.apache.org/jira/browse/SOLR-6449
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Reporter: Shalin Shekhar Mangar
  Labels: difficulty-medium, impact-medium
 Fix For: 5.0

 Attachments: SOLR-6449.patch, SOLR-6449.patch


 Any request handler can be queried by Solrj using a custom param map and the 
 qt parameter, but I think /get should get first-class support in the Java 
 client.






[jira] [Updated] (SOLR-6559) Create an endpoint /update/xml/docs endpoint to do custom xml indexing

2014-12-17 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6559:

Attachment: SOLR-6559.patch

Attaching a patch that can be applied on the latest trunk.
XPathRecordReader doesn't support wildcards; either we have to implement the 
wildcard functionality or use another XPath parser.
Also added a unit test (testSupportedWildCard) demonstrating that the capability 
is unsupported. The patch also has positive unit tests, which pass.
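
For illustration, a minimal sketch of the kind of wildcard path involved (the path and field below are hypothetical examples; testSupportedWildCard in the patch is authoritative):
{code}
// XPathRecordReader (from DIH) splits records on explicit paths such as "/root/doc".
// A wildcard step like the one below is what its parser does not recognize:
XPathRecordReader reader = new XPathRecordReader("/root/*/doc");
reader.addField("id", "/root/*/doc/id", false);
{code}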

 Create an endpoint /update/xml/docs endpoint to do custom xml indexing
 --

 Key: SOLR-6559
 URL: https://issues.apache.org/jira/browse/SOLR-6559
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6559.patch, SOLR-6559.patch, SOLR-6559.patch, 
 SOLR-6559.patch


 Just the way we have a JSON endpoint, create an XML endpoint too. Use the 
 XPathRecordReader in DIH to do the same. The syntax would require slight 
 tweaking to match the params of /update/json/docs.






[jira] [Commented] (SOLR-6559) Create an endpoint /update/xml/docs endpoint to do custom xml indexing

2014-12-17 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14250268#comment-14250268
 ] 

Anurag Sharma commented on SOLR-6559:
-

Looked for the wildcard '*' but couldn't find any unit test for it in TestXPathRecordReader.

 Create an endpoint /update/xml/docs endpoint to do custom xml indexing
 --

 Key: SOLR-6559
 URL: https://issues.apache.org/jira/browse/SOLR-6559
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6559.patch, SOLR-6559.patch, SOLR-6559.patch, 
 SOLR-6559.patch


 Just the way we have a JSON endpoint, create an XML endpoint too. Use the 
 XPathRecordReader in DIH to do the same. The syntax would require slight 
 tweaking to match the params of /update/json/docs.






[jira] [Commented] (SOLR-6559) Create an endpoint /update/xml/docs endpoint to do custom xml indexing

2014-12-13 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245252#comment-14245252
 ] 

Anurag Sharma commented on SOLR-6559:
-

Can you review the patch for merging if srcField is not required?

 Create an endpoint /update/xml/docs endpoint to do custom xml indexing
 --

 Key: SOLR-6559
 URL: https://issues.apache.org/jira/browse/SOLR-6559
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6559.patch, SOLR-6559.patch, SOLR-6559.patch


 Just the way we have a JSON endpoint, create an XML endpoint too. Use the 
 XPathRecordReader in DIH to do the same. The syntax would require slight 
 tweaking to match the params of /update/json/docs.






[jira] [Comment Edited] (SOLR-6559) Create an endpoint /update/xml/docs endpoint to do custom xml indexing

2014-12-13 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14245252#comment-14245252
 ] 

Anurag Sharma edited comment on SOLR-6559 at 12/13/14 2:53 PM:
---

Can you review the patch for merging if srcField is not required? Also, I would 
like to know why srcField is not required: is there another API to store the 
raw data?


was (Author: anuragsharma):
Can you review the patch for merge if srcField is not required, 

 Create an endpoint /update/xml/docs endpoint to do custom xml indexing
 --

 Key: SOLR-6559
 URL: https://issues.apache.org/jira/browse/SOLR-6559
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6559.patch, SOLR-6559.patch, SOLR-6559.patch


 Just the way we have a JSON endpoint, create an XML endpoint too. Use the 
 XPathRecordReader in DIH to do the same. The syntax would require slight 
 tweaking to match the params of /update/json/docs.






[jira] [Updated] (SOLR-6559) Create an endpoint /update/xml/docs endpoint to do custom xml indexing

2014-12-12 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6559:

Attachment: SOLR-6559.patch

Attaching the patch file.

Struggling to make srcField work.

The srcField functionality is not present in this patch. The unit test for this 
functionality is 'testXMLDocFormatWithSplitWithSrcField'. I'm facing an issue 
getting the raw XML from XMLStreamReader, as it doesn't buffer the data. It 
would be great if someone could suggest a quick tip.

The entry point for the XML doc functionality is /update/xml/docs. It is 
implicitly registered, so there is no need to configure a request handler. The 
following parameters are implemented, with unit tests added for them (an 
illustrative request is sketched below):
split=the XPath at which the input is split into Solr documents
f=a field from the schema mapped to an XPath (field:xpath)

More or less, the functionality is similar to /update/json/docs.
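
For illustration, a minimal sketch of issuing such a request from SolrJ, assuming the parameter names mirror /update/json/docs (the endpoint path is from this patch; the XPaths, field name, and file are hypothetical):
{code}
import java.io.File;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

SolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/xml/docs");
req.setParam("split", "/records/record");    // XPath at which each document starts
req.setParam("f", "id:/records/record/id"); // schema field mapped to an XPath
req.addFile(new File("records.xml"), "application/xml");
server.request(req);
{code}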

 Create an endpoint /update/xml/docs endpoint to do custom xml indexing
 --

 Key: SOLR-6559
 URL: https://issues.apache.org/jira/browse/SOLR-6559
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6559.patch, SOLR-6559.patch, SOLR-6559.patch


 Just the way we have a JSON endpoint, create an XML endpoint too. Use the 
 XPathRecordReader in DIH to do the same. The syntax would require slight 
 tweaking to match the params of /update/json/docs.






[jira] [Updated] (SOLR-6559) Create an endpoint /update/xml/docs endpoint to do custom xml indexing

2014-12-08 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6559:

Attachment: SOLR-6559.patch

I also prefer sticking to the XPath format.

Attaching a patch covering the basic functionality. I'll update with more 
patches covering the other use cases supported in JSON.


 Create an endpoint /update/xml/docs endpoint to do custom xml indexing
 --

 Key: SOLR-6559
 URL: https://issues.apache.org/jira/browse/SOLR-6559
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6559.patch, SOLR-6559.patch


 Just the way we have a JSON endpoint, create an XML endpoint too. Use the 
 XPathRecordReader in DIH to do the same. The syntax would require slight 
 tweaking to match the params of /update/json/docs.






[jira] [Updated] (SOLR-6559) Create an endpoint /update/xml/docs endpoint to do custom xml indexing

2014-12-02 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6559:

Attachment: SOLR-6559.patch

The attached unit test demonstrates the flattening capabilities of 
XPathRecordReader. For the /update/xml/docs endpoint, should we keep the XPath 
syntax and also support the /update/json/docs format for indexing?

 Create an endpoint /update/xml/docs endpoint to do custom xml indexing
 --

 Key: SOLR-6559
 URL: https://issues.apache.org/jira/browse/SOLR-6559
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul
 Attachments: SOLR-6559.patch


 Just the way we have a JSON endpoint, create an XML endpoint too. Use the 
 XPathRecordReader in DIH to do the same. The syntax would require slight 
 tweaking to match the params of /update/json/docs.






[jira] [Updated] (SOLR-6547) CloudSolrServer query getqtime Exception

2014-11-15 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6547:

Attachment: SOLR-6547.patch

Fix using the #2 approach, without a unit test.

 CloudSolrServer query getqtime Exception
 

 Key: SOLR-6547
 URL: https://issues.apache.org/jira/browse/SOLR-6547
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 4.10
Reporter: kevin
 Attachments: SOLR-6547.patch


 We are using CloudSolrServer to query, but SolrJ throws an exception:
 java.lang.ClassCastException: java.lang.Long cannot be cast to 
 java.lang.Integer  at 
 org.apache.solr.client.solrj.response.SolrResponseBase.getQTime(SolrResponseBase.java:76)






[jira] [Commented] (SOLR-6599) Wrong error logged on DIH connection problem

2014-11-07 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14202246#comment-14202246
 ] 

Anurag Sharma commented on SOLR-6599:
-

I am not sure if the issue still exists. Please update if it's reproducible.

I tried the following scenarios and was not able to reproduce the exception 
mentioned in the description.
Here are the scenarios and the exceptions/error messages seen:

# using invalid hostname
{code}
Caused by: java.net.UnknownHostException::x
{code}
# pointing to non-routable IP
{code}
Creating a connection for entity item with URL: 
jdbc:mysql://172.31.255.241/employees
 [java] 338169 [commitScheduler-8-thread-1] INFO  
org.apache.solr.update.UpdateHandler  – start 
commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
 [java] 338170 [commitScheduler-8-thread-1] INFO  
org.apache.solr.update.UpdateHandler  – No uncommitted changes. Skipping 
IW.commit.
 [java] 338171 [commitScheduler-8-thread-1] INFO  
org.apache.solr.update.UpdateHandler  – end_commit_flush
 {code}
# valid hostname but not connectable
{code}
 [java] Caused by: java.net.ConnectException: Connection timed out
 [java] at java.net.PlainSocketImpl.socketConnect(Native Method)
 [java] at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 [java] at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 [java] at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
 [java] at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
 [java] at java.net.Socket.connect(Socket.java:579)
 [java] at 
com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:213)
 [java] at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:297)
{code}
# making sql-server down
{code}
 Caused by: java.net.ConnectException: Connection refused
 [java] at java.net.PlainSocketImpl.socketConnect(Native Method)
 [java] at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 [java] at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 [java] at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
 [java] at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
 [java] at java.net.Socket.connect(Socket.java:579)
 [java] at 
com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:213)
 [java] at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:297)
 [java] ... 29 more
{code}

 Wrong error logged on DIH connection problem
 

 Key: SOLR-6599
 URL: https://issues.apache.org/jira/browse/SOLR-6599
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.10.1
 Environment: Debian Squeeze, Oracle-java-8, mysql-connector-5.1.28
Reporter: Thomas Lamy
Priority: Minor
  Labels: difficulty-medium, impact-low

 If I try a full import via DIH from a mysql server which is firewalled or 
 down, I get a misleading error message (see below, only SQL statement 
 shortened).
 I don't know Java very well, but I suspect the connection error is caught and 
 the connection handle is null, which in turn leads to the null pointer 
 exception at the end of the stack trace.
 {code}
 Full Import failed:java.lang.RuntimeException: java.lang.RuntimeException: 
 org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to 
 execute query: SELECT SenderID, ProviderID, `Name`, RefSenderID, CameraURL, 
 ChatURL, [...] Processing Document # 1
   at 
 org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:271)
   at 
 org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
   at 
 org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
   at 
 org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:461)
 Caused by: java.lang.RuntimeException: 
 org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to 
 execute query: SELECT SenderID, ProviderID, `Name`, RefSenderID, CameraURL, 
 ChatURL, [...] Processing Document # 1
   at 
 org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:417)
   at 
 org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:330)
   at 
 org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
   ... 3 more
 Caused by: org.apache.solr.handler.dataimport.DataImportHandlerException: 
 Unable to execute query: SELECT SenderID, ProviderID, `Name`, RefSenderID, 
 CameraURL, ChatURL, [...] 

[jira] [Commented] (SOLR-6474) Smoke tester should use the Solr start scripts to start Solr

2014-11-07 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14202323#comment-14202323
 ] 

Anurag Sharma commented on SOLR-6474:
-

Summarizing the issues I've faced while running the smoke tester.

First I ran it with Python 2.7 and saw SyntaxError issues; they went away when 
I tried Python 3.4.2.
Next, I saw the error below when trying to run the smoke test using:
{noformat}
python -u smokeTestRelease.py 
http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.2-RC1-rev1634293
{noformat}

{code}
Java 1.7 JAVA_HOME=C:\Program Files\Java\jdk1.7.0_51
Traceback (most recent call last):
  File "smokeTestRelease.py", line 1522, in <module>
    main()
  File "smokeTestRelease.py", line 1465, in main
    c = parse_config()
  File "smokeTestRelease.py", line 1351, in parse_config
    c.java = make_java_config(parser, c.test_java8)
  File "smokeTestRelease.py", line 1303, in make_java_config
    run_java7 = _make_runner(java7_home, '1.7')
  File "smokeTestRelease.py", line 1294, in _make_runner
    shell=True, stderr=subprocess.STDOUT).decode('utf-8')
  File "C:\Program Files (x86)\Python34\lib\subprocess.py", line 620, in check_output
    raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command 'export JAVA_HOME=C:\Program Files\Java\jdk1.7.0_51 PATH=C:\Program Files\Java\jdk1.7.0_51/bin:$PATH JAVACMD=C:\Program Files\Java\jdk1.7.0_51/bin/java; java -version' returned non-zero exit status 1
{code}
The only usage example I found in the code takes a URL param, and it's giving 
the above error:
{noformat}
Example usage:
python3.2 -u dev-tools/scripts/smokeTestRelease.py 
http://people.apache.org/~whoever/staging_area/lucene-solr-4.3.0-RC1-rev1469340
{noformat}

Shawn Heisey's {anchor:apa...@elyograg.org} observation:
When running that exact command on the tags/lucene_solr_4_10_2 checkout, it 
fails.  I think there must be something in the configuration that still says 
4.10.1:
{code}
prepare-release-no-sign:
    [mkdir] Created dir: /home/elyograg/asf/lucene_solr_4_10_2/lucene/build/fakeRelease
     [copy] Copying 431 files to /home/elyograg/asf/lucene_solr_4_10_2/lucene/build/fakeRelease/lucene
     [copy] Copying 239 files to /home/elyograg/asf/lucene_solr_4_10_2/lucene/build/fakeRelease/solr
     [exec] JAVA7_HOME is /usr/lib/jvm/java-7-oracle
     [exec] Traceback (most recent call last):
     [exec]   File "/home/elyograg/asf/lucene_solr_4_10_2/dev-tools/scripts/smokeTestRelease.py", line 1467, in <module>
     [exec]     main()
     [exec]   File "/home/elyograg/asf/lucene_solr_4_10_2/dev-tools/scripts/smokeTestRelease.py", line 1308, in main
     [exec]     smokeTest(baseURL, svnRevision, version, tmpDir, isSigned, testArgs)
     [exec]   File "/home/elyograg/asf/lucene_solr_4_10_2/dev-tools/scripts/smokeTestRelease.py", line 1446, in smokeTest
     [exec]     checkSigs('lucene', lucenePath, version, tmpDir, isSigned)
     [exec]   File "/home/elyograg/asf/lucene_solr_4_10_2/dev-tools/scripts/smokeTestRelease.py", line 359, in checkSigs
     [exec]     raise RuntimeError('%s: unknown artifact %s: expected prefix %s' % (project, text, expected))
     [exec] RuntimeError: lucene: unknown artifact lucene-4.10.2-src.tgz: expected prefix lucene-4.10.1
     [exec] NOTE: output encoding is UTF-8
     [exec]
     [exec] Load release URL file:/home/elyograg/asf/lucene_solr_4_10_2/lucene/build/fakeRelease/...
     [exec]
     [exec] Test Lucene...
     [exec]   test basics...
{code}


Smoke run using ant:
{code}
$ ant nightly-smoke -Dversion=6.0.0
Buildfile: C:\work\trunk\build.xml

clean:

clean:

clean:

-nightly-smoke-java8params:

nightly-smoke:

BUILD FAILED
C:\work\trunk\build.xml:392: Execute failed: java.io.IOException: Cannot run 
program python3.2: CreateProcess error=2, The system cannot find the file 
specified
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
at java.lang.Runtime.exec(Runtime.java:617)
at 
org.apache.tools.ant.taskdefs.launcher.Java13CommandLauncher.exec(Java13CommandLauncher.java:41)
at org.apache.tools.ant.taskdefs.Execute.launch(Execute.java:428)
at org.apache.tools.ant.taskdefs.Execute.execute(Execute.java:442)
at org.apache.tools.ant.taskdefs.ExecTask.runExecute(ExecTask.java:628)
at org.apache.tools.ant.taskdefs.ExecTask.runExec(ExecTask.java:669)
at org.apache.tools.ant.taskdefs.ExecTask.execute(ExecTask.java:495)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at 

[jira] [Comment Edited] (SOLR-6474) Smoke tester should use the Solr start scripts to start Solr

2014-11-07 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14202323#comment-14202323
 ] 

Anurag Sharma edited comment on SOLR-6474 at 11/7/14 5:39 PM:
--

Summarizing the issues I've faced while running the smoke tester.

First I ran it with Python 2.7 and saw SyntaxError issues; they went away when 
I tried Python 3.4.2.
Next, I saw the error below when trying to run the smoke test using:
{noformat}
python -u smokeTestRelease.py 
http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.2-RC1-rev1634293
{noformat}

{code}
Java 1.7 JAVA_HOME=C:\Program Files\Java\jdk1.7.0_51
Traceback (most recent call last):
  File "smokeTestRelease.py", line 1522, in <module>
    main()
  File "smokeTestRelease.py", line 1465, in main
    c = parse_config()
  File "smokeTestRelease.py", line 1351, in parse_config
    c.java = make_java_config(parser, c.test_java8)
  File "smokeTestRelease.py", line 1303, in make_java_config
    run_java7 = _make_runner(java7_home, '1.7')
  File "smokeTestRelease.py", line 1294, in _make_runner
    shell=True, stderr=subprocess.STDOUT).decode('utf-8')
  File "C:\Program Files (x86)\Python34\lib\subprocess.py", line 620, in check_output
    raise CalledProcessError(retcode, process.args, output=output)
subprocess.CalledProcessError: Command 'export JAVA_HOME=C:\Program Files\Java\jdk1.7.0_51 PATH=C:\Program Files\Java\jdk1.7.0_51/bin:$PATH JAVACMD=C:\Program Files\Java\jdk1.7.0_51/bin/java; java -version' returned non-zero exit status 1
{code}
The only usage example I found in the code takes a URL param, and it's giving 
the above error:
{noformat}
Example usage:
python3.2 -u dev-tools/scripts/smokeTestRelease.py 
http://people.apache.org/~whoever/staging_area/lucene-solr-4.3.0-RC1-rev1469340
{noformat}

Shawn Heisey's {anchor:apa...@elyograg.org} observation: when running that exact 
command on the tags/lucene_solr_4_10_2 checkout, it fails. I think there must 
be something in the configuration that still says 4.10.1:
{code}
prepare-release-no-sign:
    [mkdir] Created dir: /home/elyograg/asf/lucene_solr_4_10_2/lucene/build/fakeRelease
     [copy] Copying 431 files to /home/elyograg/asf/lucene_solr_4_10_2/lucene/build/fakeRelease/lucene
     [copy] Copying 239 files to /home/elyograg/asf/lucene_solr_4_10_2/lucene/build/fakeRelease/solr
     [exec] JAVA7_HOME is /usr/lib/jvm/java-7-oracle
     [exec] Traceback (most recent call last):
     [exec]   File "/home/elyograg/asf/lucene_solr_4_10_2/dev-tools/scripts/smokeTestRelease.py", line 1467, in <module>
     [exec]     main()
     [exec]   File "/home/elyograg/asf/lucene_solr_4_10_2/dev-tools/scripts/smokeTestRelease.py", line 1308, in main
     [exec]     smokeTest(baseURL, svnRevision, version, tmpDir, isSigned, testArgs)
     [exec]   File "/home/elyograg/asf/lucene_solr_4_10_2/dev-tools/scripts/smokeTestRelease.py", line 1446, in smokeTest
     [exec]     checkSigs('lucene', lucenePath, version, tmpDir, isSigned)
     [exec]   File "/home/elyograg/asf/lucene_solr_4_10_2/dev-tools/scripts/smokeTestRelease.py", line 359, in checkSigs
     [exec]     raise RuntimeError('%s: unknown artifact %s: expected prefix %s' % (project, text, expected))
     [exec] RuntimeError: lucene: unknown artifact lucene-4.10.2-src.tgz: expected prefix lucene-4.10.1
     [exec] NOTE: output encoding is UTF-8
     [exec]
     [exec] Load release URL file:/home/elyograg/asf/lucene_solr_4_10_2/lucene/build/fakeRelease/...
     [exec]
     [exec] Test Lucene...
     [exec]   test basics...
{code}


Smoke run using ant on trunk:
{code}
$ ant nightly-smoke -Dversion=6.0.0
Buildfile: C:\work\trunk\build.xml

clean:

clean:

clean:

-nightly-smoke-java8params:

nightly-smoke:

BUILD FAILED
C:\work\trunk\build.xml:392: Execute failed: java.io.IOException: Cannot run 
program python3.2: CreateProcess error=2, The system cannot find the file 
specified
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
at java.lang.Runtime.exec(Runtime.java:617)
at 
org.apache.tools.ant.taskdefs.launcher.Java13CommandLauncher.exec(Java13CommandLauncher.java:41)
at org.apache.tools.ant.taskdefs.Execute.launch(Execute.java:428)
at org.apache.tools.ant.taskdefs.Execute.execute(Execute.java:442)
at org.apache.tools.ant.taskdefs.ExecTask.runExecute(ExecTask.java:628)
at org.apache.tools.ant.taskdefs.ExecTask.runExec(ExecTask.java:669)
at org.apache.tools.ant.taskdefs.ExecTask.execute(ExecTask.java:495)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 

[jira] [Updated] (SOLR-6474) Smoke tester should use the Solr start scripts to start Solr

2014-11-07 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6474:

Attachment: SOLR-6474-smoke_trunk_6.0.0.log
SOLR-6474.patch

The attached patch has the following changes:
# In most places the solr bin script was already being used; the only remaining 
place was when launching the techproducts example. Changed it to go through the 
solr bin script.
# The current smoke tester is strongly tied to Python 3.2 and doesn't work on a 
higher version, e.g. Python 3.4. Changes were made in 'build.xml' so it works 
with the latest Python 3.4.

Also attached is the log generated when running the smoke test locally using {noformat}
ant nightly-smoke -Dversion=6.0.0
{noformat}

The following should be addressed in the solr bin script to make the smoke test 
run smoothly:
# The script should be updated for the solr/example to solr/server move, e.g. 
%SOLR_TIP%/example/exampledocs/post.jar is no longer valid in the latest layout.
# Log the techproducts example output in 'post-example-docs.log'.


 Smoke tester should use the Solr start scripts to start Solr
 

 Key: SOLR-6474
 URL: https://issues.apache.org/jira/browse/SOLR-6474
 Project: Solr
  Issue Type: Task
  Components: scripts and tools
Reporter: Shalin Shekhar Mangar
  Labels: difficulty-easy, impact-low
 Fix For: 5.0

 Attachments: SOLR-6474-smoke_trunk_6.0.0.log, SOLR-6474.patch


 We should use the Solr bin scripts created by SOLR-3617 in the smoke tester 
 to test Solr.






[jira] [Updated] (SOLR-6478) need docs / tests of the rules as far as collection names go

2014-11-02 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6478:

Attachment: SOLR-6478.patch

A unit test covering the allowed and disallowed collection names is attached.

W3C (http://www.w3.org/Addressing/URL/uri-spec.html) defines the valid 
character set for a URI. Currently there are no filters in the code to disallow 
any character; the W3C guideline can be used to filter some characters in the 
collection name.

Query params containing special characters or whitespace can be sent after 
encoding when making API calls. Here is an example of creating a 
'rand chars {£ & $ 1234567890-+=`~@}' collection:
{code}
$ curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=rand%20chars%20%7B%C2%A3%20%26%20%24%201234567890-%2B%3D%60~%40%7D&numShards=1&collection.configName=myconf&indent=true&wt=json'

{
  "responseHeader":{
    "status":0,
    "QTime":28509},
  "success":{
    "":{
      "responseHeader":{
        "status":0,
        "QTime":22011},
      "core":"rand chars {£ & $ 1234567890-+=`~@}_shard1_replica1"}}}

{code}
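
As a side note, a minimal sketch of producing such an encoded name with the standard JDK (illustrative only; not part of the patch):
{code}
String name = "rand chars {£ & $ 1234567890-+=`~@}";
// URLEncoder does form encoding, so spaces become '+'; swap them for %20
// to match the query-string style used in the curl example above.
String encoded = java.net.URLEncoder.encode(name, "UTF-8").replace("+", "%20");
{code}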

 need docs / tests of the rules as far as collection names go
 --

 Key: SOLR-6478
 URL: https://issues.apache.org/jira/browse/SOLR-6478
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
  Labels: difficulty-medium, impact-medium
 Attachments: SOLR-6478.patch


 historically, the rules for core names have been vague but implicitly 
 defined based on the rule that it had to be a valid directory path name - but 
 i don't know that we've ever documented anywhere what the rules are for a 
 collection name when dealing with the Collections API.
 I haven't had a chance to try this, but i suspect that using the Collections 
 API you can create any collection name you want, and the zk/clusterstate.json 
 data will all be fine, and you'll then be able to request anything you want 
 from that collection as long as you properly URL escape it in your request 
 URLs ... but we should have a test that tries to do this, and document any 
 actual limitations that pop up and/or fix those limitations so we really can 
 have arbitrary collection names.






[jira] [Comment Edited] (SOLR-6478) need docs / tests of the rules as far as collection names go

2014-11-02 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193817#comment-14193817
 ] 

Anurag Sharma edited comment on SOLR-6478 at 11/2/14 12:45 PM:
---

A unit test covering the allowed and disallowed collection names is attached.

W3C (http://www.w3.org/Addressing/URL/uri-spec.html) defines the valid 
character set for a URI. Currently there are no filters in the code to disallow 
any character; the W3C guideline can be used to filter some characters in the 
collection name.

Query params containing special characters or whitespace can be sent after 
encoding when making API calls. Here is an example of creating a 
'rand chars {£ & $ 1234567890-+=`~@\}' collection:

{code}
$ curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=rand%20chars%20%7B%C2%A3%20%26%20%24%201234567890-%2B%3D%60~%40%7D&numShards=1&collection.configName=myconf&indent=true&wt=json'

{
  "responseHeader":{
    "status":0,
    "QTime":28509},
  "success":{
    "":{
      "responseHeader":{
        "status":0,
        "QTime":22011},
      "core":"rand chars {£ & $ 1234567890-+=`~@\}_shard1_replica1"}}}

{code}


was (Author: anuragsharma):
Unit test covering the allowed and not allowed collection names is attached. 

W3 http://www.w3.org/Addressing/URL/uri-spec.html has a standard for valid 
character set in the URI. In the code currently there are no filters to 
disallow any character. W3 guideline can be used to filter some characters in 
the collection name.

Query params having special characters or whitespaces can be send after 
encoding while making API calls. Here is an example to create rand chars {£ & 
$ 1234567890-+=`~@} collection
{code}
$ curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=rand%20chars%20%7B%C2%A3%20%26%20%24%201234567890-%2B%3D%60~%40%7D&numShards=1&collection.configName=myconf&indent=true&wt=json'

{
  "responseHeader":{
    "status":0,
    "QTime":28509},
  "success":{
    "":{
      "responseHeader":{
        "status":0,
        "QTime":22011},
      "core":"rand chars {£ & $ 1234567890-+=`~@}_shard1_replica1"}}}

{code}

 need docs / tests of the rules as far as collection names go
 --

 Key: SOLR-6478
 URL: https://issues.apache.org/jira/browse/SOLR-6478
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
  Labels: difficulty-medium, impact-medium
 Attachments: SOLR-6478.patch


 historically, the rules for core names have been vague but implicitly 
 defined based on the rule that it had to be a valid directory path name - but 
 i don't know that we've ever documented anywhere what the rules are for a 
 collection name when dealing with the Collections API.
 I haven't had a chance to try this, but i suspect that using the Collections 
 API you can create any collection name you want, and the zk/clusterstate.json 
 data will all be fine, and you'll then be able to request anything you want 
 from that collection as long as you properly URL escape it in your request 
 URLs ... but we should have a test that tries to do this, and document any 
 actual limitations that pop up and/or fix those limitations so we really can 
 have arbitrary collection names.






[jira] [Updated] (SOLR-6531) better error message when lockType doesn't work with directoryFactory

2014-11-01 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6531:

Attachment: SOLR-6531.patch

Hoss, thanks a lot for the details.

Attached is the patch for the fix.
A separate solrconfig file, 
core/src/test-files/solr/collection1/conf/solrconfig-locktype.xml, is created 
for the unit test.

 better error message when lockType doesn't work with directoryFactory
 -

 Key: SOLR-6531
 URL: https://issues.apache.org/jira/browse/SOLR-6531
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
  Labels: difficulty-easy, impact-low
 Attachments: SOLR-6531.patch


 SOLR-6519 improved the logic about which lockTypes could be configured with 
 which directoryFactory implementations, but the result is a somewhat 
 confusing error message.






[jira] [Commented] (SOLR-6598) Solr Collections API, case sensitivity of collection name and core's/replica's instance directory

2014-11-01 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14193718#comment-14193718
 ] 

Anurag Sharma commented on SOLR-6598:
-

I am able to reproduce this on Windows/Cygwin.
Interesting observation: it allows creation of 'tEsT' but not 'TEST' after the 
'test' collection has been created.

{code}
$ curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=1&collection.configName=myconf&indent=true&wt=json'
{
  "responseHeader":{
    "status":0,
    "QTime":8565},
  "success":{
    "":{
      "responseHeader":{
        "status":0,
        "QTime":8003},
      "core":"test_shard1_replica1"}}}

$ curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=1&collection.configName=myconf&indent=true&wt=json'
{
  "responseHeader":{
    "status":400,
    "QTime":156},
  "Operation create caused exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: collection already exists: test",
  "exception":{
    "msg":"collection already exists: test",
    "rspCode":400},
  "error":{
    "msg":"collection already exists: test",
    "code":400}}

$ curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=TEST&numShards=1&collection.configName=myconf&indent=true&wt=json'
{
  "responseHeader":{
    "status":0,
    "QTime":7768},
  "failure":{
    "":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'TEST_shard1_replica1': Unable to create core [TEST_shard1_replica1] Caused by: Lock obtain timed out: NativeFSLock@C:\\work\\trunk\\solr\\node2\\solr\\test_shard1_replica1\\data\\index\\write.lock"}}

$ curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=TEST1&numShards=1&collection.configName=myconf&indent=true&wt=json'
{
  "responseHeader":{
    "status":0,
    "QTime":6869},
  "success":{
    "":{
      "responseHeader":{
        "status":0,
        "QTime":6415},
      "core":"TEST1_shard1_replica1"}}}

$ curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=tEsT&numShards=1&collection.configName=myconf&indent=true&wt=json'
{
  "responseHeader":{
    "status":0,
    "QTime":6471},
  "success":{
    "":{
      "responseHeader":{
        "status":0,
        "QTime":6064},
      "core":"tEsT_shard1_replica1"}}}

$ curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=TEST&numShards=1&collection.configName=myconf&indent=true&wt=json'
{
  "responseHeader":{
    "status":0,
    "QTime":8690},
  "failure":{
    "":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'TEST_shard1_replica1': Unable to create core [TEST_shard1_replica1] Caused by: Lock obtain timed out: NativeFSLock@C:\\work\\trunk\\solr\\node2\\solr\\test_shard1_replica1\\

$ curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=TEst&numShards=1&collection.configName=myconf&indent=true&wt=json'
{
  "responseHeader":{
    "status":0,
    "QTime":5920},
  "failure":{
    "":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'TEst_shard1_replica1': Unable to create core [TEst_shard1_replica1] Caused by: Lock obtain timed out: NativeFSLock@C:\\work\\trunk\\solr\\node2\\solr\\test_shard1_replica1\\data\\index\\write.lock"}}

$ curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=Test&numShards=1&collection.configName=myconf&indent=true&wt=json'
{
  "responseHeader":{
    "status":0,
    "QTime":6332},
  "failure":{
    "":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error CREATEing SolrCore 'Test_shard1_replica1': Unable to create core [Test_shard1_replica1] Caused by: Lock obtain timed out: NativeFSLock@C:\\work\\trunk\\solr\\node1\\solr\\tEsT_shard1_replica1\\data\\index\\write.lock"}}

$ curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=Test1&numShards=1&collection.configName=myconf&indent=true&wt=json'
{
  "responseHeader":{
    "status":0,
    "QTime":6214},
  "success":{
    "":{
      "responseHeader":{
        "status":0,
        "QTime":5636},
      "core":"Test1_shard1_replica1"}}}

{code}

 Solr Collections API, case sensitivity of collection name and 
 core's/replica's instance directory
 -

 Key: SOLR-6598
 URL: https://issues.apache.org/jira/browse/SOLR-6598
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.1
 Environment: Mac OS X
Reporter: Alexey Serba
Priority: Trivial

 Solr Collections API returns misleading error when trying to create two 
 collections with the same name but with different case on MacOS file system 
 (which is case insensitive). 
 {noformat}
 sh curl 
 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=1&collection.configName=default&indent=true&wt=json'
 {
   "responseHeader":{
     "status":0,
   

[jira] [Commented] (SOLR-6531) better error message when lockType doesn't work with directoryFactory

2014-10-28 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14187950#comment-14187950
 ] 

Anurag Sharma commented on SOLR-6531:
-

Hoss - Can you suggest the steps to reproduce this issue when running locally?

 better error message when lockType doesn't work with directoryFactory
 -

 Key: SOLR-6531
 URL: https://issues.apache.org/jira/browse/SOLR-6531
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
  Labels: difficulty-easy, impact-low

 SOLR-6519 improved the logic about which lockTypes could be configured with 
 which directoryFactory implementations, but the result is a somewhat 
 confusing error message.






[jira] [Updated] (SOLR-6449) Add first class support for Real Time Get in Solrj

2014-10-26 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6449:

Attachment: SOLR-6449.patch

Implemented a direct call to get the doc(s) by ID(s). Provided two APIs:

1. Takes one or more IDs using Java varargs
{code} 
GetByIdResponse getById(String... ids)
{code}

2. Takes the input as a Set collection
{code}
GetByIdResponse getById(Set<String> ids)
{code}

Unit tests added for single (using #1) and multiple (using #1 & #2) ID 
requests.
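
For illustration, a minimal usage sketch against these signatures, assuming the patch adds the methods to the SolrServer client (the URL and document IDs are hypothetical):
{code}
import java.util.Arrays;
import java.util.HashSet;
import org.apache.solr.client.solrj.impl.HttpSolrServer;

HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
// #1: varargs form, one or more IDs
GetByIdResponse one  = server.getById("doc1");
GetByIdResponse many = server.getById("doc1", "doc2");
// #2: Set form
GetByIdResponse fromSet = server.getById(new HashSet<>(Arrays.asList("doc1", "doc2")));
{code}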

 Add first class support for Real Time Get in Solrj
 --

 Key: SOLR-6449
 URL: https://issues.apache.org/jira/browse/SOLR-6449
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Reporter: Shalin Shekhar Mangar
  Labels: difficulty-medium, impact-medium
 Fix For: 5.0

 Attachments: SOLR-6449.patch


 Any request handler can be queried by Solrj using a custom param map and the 
 qt parameter, but I think /get should get first-class support in the Java 
 client.






[jira] [Commented] (SOLR-6547) CloudSolrServer query getqtime Exception

2014-10-26 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184481#comment-14184481
 ] 

Anurag Sharma commented on SOLR-6547:
-

QTime is saved as a long after truncating the nanosecond part of the time, 
which is equivalent to capturing the time in milliseconds. Here is a snippet of 
what happens in CloudSolrServer:
{code}
long start = System.nanoTime();
.
.
long end = System.nanoTime();
RouteResponse rr = condenseResponse(shardResponses, (long)((end - start)/1000000));
{code}

The condenseResponse function:
{code}
public RouteResponse condenseResponse(NamedList response, long timeMillis) {
  .
  .
  cheader.add("QTime", timeMillis);
  .
  .
}
{code}

Since the time can be captured in an Integer, there are two ways to fix the 
issue:
# In CloudSolrServer, truncate the milliseconds part as well and save QTime as 
an Integer. This way getQTime won't throw a Long-to-Integer 
ClassCastException, as the object reaching it is already an Integer (a sketch 
follows below).
# In SolrResponseBase.getQTime, check the instanceof of the incoming object 
and extract an int from it, as shown in the code snippet below:
{code}
int qtime = 0;
if (obj instanceof Long) {
  qtime = (int)(((Long) obj).longValue()/1000);
} else if (obj instanceof Integer) {
  qtime = (Integer) obj;
} else if (obj instanceof String) {
  qtime = Integer.parseInt((String) obj);
}
return qtime;
{code}
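
For comparison, a minimal sketch of approach #1 (hypothetical; not from an attached patch), casting to int inside CloudSolrServer before the response is condensed:
{code}
// nanoseconds to milliseconds, then to int, so that getQTime()'s
// (Integer) cast on the response header succeeds
int timeMillis = (int) java.util.concurrent.TimeUnit.NANOSECONDS.toMillis(end - start);
cheader.add("QTime", timeMillis);
{code}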

Please vote on the best approach to proceed with. I would also like opinions 
on writing a unit test in the CloudSolrServerTest class.

 CloudSolrServer query getqtime Exception
 

 Key: SOLR-6547
 URL: https://issues.apache.org/jira/browse/SOLR-6547
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 4.10
Reporter: kevin

 We are using CloudSolrServer to query, but SolrJ throws an exception:
 java.lang.ClassCastException: java.lang.Long cannot be cast to 
 java.lang.Integer  at 
 org.apache.solr.client.solrj.response.SolrResponseBase.getQTime(SolrResponseBase.java:76)






[jira] [Updated] (SOLR-6469) Solr search with multicore + grouping + highlighting cause NPE

2014-10-25 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6469:

Attachment: SOLR-6469.patch

 Solr search with multicore + grouping + highlighting cause NPE
 --

 Key: SOLR-6469
 URL: https://issues.apache.org/jira/browse/SOLR-6469
 Project: Solr
  Issue Type: Bug
  Components: highlighter, multicore, SearchComponents - other
Affects Versions: 4.8.1
 Environment: Windows 7, Intellij
Reporter: Shay Sofer
  Labels: patch
 Attachments: SOLR-6469.patch


 Integration of Grouping + shards + highlighting cause NullPointerException.
 Query: 
 localhost:8983/solr/Global_A/select?q=%2Btext%3A%28shay*%29+&rows=100&fl=id%2CobjId%2Cnull&shards=http%3A%2F%2F127.0.0.1%3A8983%2Fsolr%2F0_A%2Chttp%3A%2F%2F127.0.0.1%3A8983%2Fsolr%2FGlobal_A&group=true&group.query=name__s%3Ashay&sort=name__s_sort+asc&hl=true
 results:
 java.lang.NullPointerException
  at 
 org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:189)
  at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:330)
  at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1952)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:774)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
  at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
  at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
  at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
  at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
  at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
  at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
  at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
  at org.eclipse.jetty.server.Server.handle(Server.java:368)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
  at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
  at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
  at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
  at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
  at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
  at java.lang.Thread.run(Thread.java:722)






[jira] [Commented] (SOLR-6469) Solr search with multicore + grouping + highlighting cause NPE

2014-10-25 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184053#comment-14184053
 ] 

Anurag Sharma commented on SOLR-6469:
-

The highlighter component assumes that rb.resultIds holds valid data and is 
not null. When the query is formed in the shards+grouping+highlight case, 
rb.resultIds comes back null, because in groupedFinishStage, after merging 
groups, resultIds is reinitialized and the old IDs are lost. So at 
highlighting time it throws an NPE.

Attaching a patch that fixes the issue, but without a unit test. I tested the 
fix directly on a server and it no longer shows the NPE. While writing the 
unit test I'm facing an issue running the shard. Here is the unit test code 
snippet:
{code}
assertQ("Shards+highlight+Grouping",
    req(CommonParams.Q, "text:(AAA)",
        CommonParams.QT, "/elevate",
        CommonParams.SORT, "id asc",
        GroupParams.GROUP_SORT, "id asc",
        GroupParams.GROUP_QUERY, "text:AAA",
        GroupParams.GROUP, "true",
        CommonParams.FL, "id",
        HighlightParams.HIGHLIGHT, "true",
        ShardParams.SHARDS, "localhost/solr/elevated"),
    "//lst[@name='highlighting']");
{code}

Please suggest if anyone knows how to write a unit test for this case.
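
For context, a minimal sketch of the kind of guard such a fix implies (hypothetical; the attached patch is authoritative):
{code}
// in HighlightComponent.finishStage: after grouped responses are merged,
// rb.resultIds may have been reinitialized, so don't dereference it blindly
if (rb.resultIds == null) {
  return; // nothing to merge highlighting results into
}
{code}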

 Solr search with multicore + grouping + highlighting cause NPE
 --

 Key: SOLR-6469
 URL: https://issues.apache.org/jira/browse/SOLR-6469
 Project: Solr
  Issue Type: Bug
  Components: highlighter, multicore, SearchComponents - other
Affects Versions: 4.8.1
 Environment: Windows 7, Intellij
Reporter: Shay Sofer
  Labels: patch
 Attachments: SOLR-6469.patch


 Integration of Grouping + shards + highlighting cause NullPointerException.
 Query: 
 localhost:8983/solr/Global_A/select?q=%2Btext%3A%28shay*%29+&rows=100&fl=id%2CobjId%2Cnull&shards=http%3A%2F%2F127.0.0.1%3A8983%2Fsolr%2F0_A%2Chttp%3A%2F%2F127.0.0.1%3A8983%2Fsolr%2FGlobal_A&group=true&group.query=name__s%3Ashay&sort=name__s_sort+asc&hl=true
 results:
 java.lang.NullPointerException
  at 
 org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:189)
  at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:330)
  at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1952)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:774)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
  at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
  at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
  at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
  at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
  at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
  at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
  at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
  at org.eclipse.jetty.server.Server.handle(Server.java:368)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
  at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
  at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
  at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
  at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
  at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
  at 
 

[jira] [Updated] (SOLR-6572) lineshift in solrconfig.xml is not supported

2014-10-25 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6572:

Attachment: SOLR-6572.unittest
SOLR-6572.patch

I think this issue could easily have been reproduced in a unit test. In 
DOMUtilTest, unit tests for a space, newline, or tab before and after the node 
value were somehow missing. This implies that other elements in solrconfig.xml 
will also have issues. One sample I tried is infoStream under the indexConfig 
tag: if I write it like this, the infoStream value is picked up as true and 
works fine:
{code}
<indexConfig>
  :
  <infoStream>true</infoStream>
</indexConfig>
{code}
On the other hand, if I write it like this, the infoStream value is picked up 
as false, and the true is missed:
{code}
<indexConfig>
  :
  <infoStream>true
  </infoStream>
</indexConfig>
{code}

The fix with unit tests, and the unit test run report, are attached.
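
For context, a minimal sketch of the kind of change involved (hypothetical; the attached patch is authoritative): trim surrounding whitespace when an element's text value is read:
{code}
// when reading an element's text content for a config value,
// drop the leading/trailing spaces, tabs, and newlines an IDE may introduce
String value = node.getTextContent().trim();
{code}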

 lineshift in solrconfig.xml is not supported
 

 Key: SOLR-6572
 URL: https://issues.apache.org/jira/browse/SOLR-6572
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8.1
Reporter: Fredrik Rodland
  Labels: difficulty-easy, impact-low, solrconfig.xml
 Attachments: SOLR-6572.patch, SOLR-6572.unittest


 This has been a problem for a long time, and is still a problem at least for 
 SOLR 4.8.1.
 If lineshifts are introduced in some elements in solrconfig.xml SOLR fails to 
 pick up on the values.
 example:
 ok:
 {code}
 <requestHandler name="/replication" class="solr.ReplicationHandler" 
 enable="${enable.replication:false}">
   <lst name="slave">
     <str 
 name="masterUrl">${solr.master.url:http://solr-admin1.finn.no:12910/solr/front-static/replication}</str>
 {code}
 not ok:
 {code}
 <requestHandler name="/replication" class="solr.ReplicationHandler" 
 enable="${enable.replication:false}">
   <lst name="slave">
     <str 
 name="masterUrl">${solr.master.url:http://solr-admin1.finn.no:12910/solr/front-static/replication}
     </str>
 {code}
 Other example:
 ok:
 {code}
 <str 
 name="shards">localhost:12100/solr,localhost:12200/solr,localhost:12300/solr,localhost:12400/solr,localhost:12500/solr,localhost:12530/solr</str>
 {code}
 not ok:
 {code}
 <str name="shards">
 localhost:12100/solr,localhost:12200/solr,localhost:12300/solr,localhost:12400/solr,localhost:12500/solr,localhost:12530/solr
 </str>
 {code}
 IDEs and people tend to introduce line shifts in XML files to make them 
 prettier. Solr should really not be affected by this.






[jira] [Commented] (SOLR-6618) SolrCore Initialization Failures when the solr is restarted, unable to Initialization a collection

2014-10-23 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181113#comment-14181113
 ] 

Anurag Sharma commented on SOLR-6618:
-

From the exception it looks like ZK failed to find the configName for the 
"overnighttest" collection in multiple attempts. This looks like a problem 
with the configuration files. Please share the configuration and the detailed 
steps for when the failure happens.

 SolrCore Initialization Failures when the solr is restarted, unable to 
 Initialization a collection
 --

 Key: SOLR-6618
 URL: https://issues.apache.org/jira/browse/SOLR-6618
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Vijaya Jonnakuti

 I have uploaded one config, "default", and I do specify 
 collection.configName=default when I create the collection,
 and when Solr is restarted I get this error: 
 org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException:
  Could not find configName for collection "overnighttest" found: [default, 
 collection1, collection2 and so on]
 These collection1 and collection2 empty configs are created when I run 
 DataImportHandler using ZKPropertiesWriter.






[jira] [Comment Edited] (SOLR-6618) SolrCore Initialization Failures when the solr is restarted, unable to Initialization a collection

2014-10-23 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14181113#comment-14181113
 ] 

Anurag Sharma edited comment on SOLR-6618 at 10/23/14 8:19 AM:
---

From the exception it looks like ZK failed to find a configName for the 
overnighttest collection in multiple attempts. This looks like a problem 
with the configuration files. Please share the configuration on which the 
Solr initialization failure is seen.


was (Author: anuragsharma):
From the exception it looks like ZK failed to find a configName for the 
overnighttest collection in multiple attempts. This looks like a problem 
with the configuration files. Please share the configuration and the detailed 
steps that lead to the failure.

 SolrCore Initialization Failures when the solr is restarted, unable to 
 Initialization a collection
 --

 Key: SOLR-6618
 URL: https://issues.apache.org/jira/browse/SOLR-6618
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Vijaya Jonnakuti

 I have uploaded one config, "default", and I do specify 
 collection.configName=default when I create the collection,
 and when Solr is restarted I get this error: 
 org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException:
  Could not find configName for collection overnighttest found:[default, 
 collection1, collection2 and so on]
 These empty collection1 and collection2 configs are created when I run 
 DataImportHandler using ZKPropertiesWriter.






[jira] [Commented] (SOLR-6469) Solr search with multicore + grouping + highlighting cause NPE

2014-10-19 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14176275#comment-14176275
 ] 

Anurag Sharma commented on SOLR-6469:
-

Shay:
Can you upload the configuration files and the steps to reproduce the NPE?

 Solr search with multicore + grouping + highlighting cause NPE
 --

 Key: SOLR-6469
 URL: https://issues.apache.org/jira/browse/SOLR-6469
 Project: Solr
  Issue Type: Bug
  Components: highlighter, multicore, SearchComponents - other
Affects Versions: 4.8.1
 Environment: Windows 7, Intellij
Reporter: Shay Sofer
  Labels: patch

 Integration of grouping + shards + highlighting causes a NullPointerException.
 Query: 
 localhost:8983/solr/Global_A/select?q=%2Btext%3A%28shay*%29+&rows=100&fl=id%2CobjId%2Cnull&shards=http%3A%2F%2F127.0.0.1%3A8983%2Fsolr%2F0_A%2Chttp%3A%2F%2F127.0.0.1%3A8983%2Fsolr%2FGlobal_A&group=true&group.query=name__s%3Ashay&sort=name__s_sort+asc&hl=true
 results:
 java.lang.NullPointerException
  at 
 org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:189)
  at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:330)
  at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1952)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:774)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
  at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
  at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
  at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
  at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
  at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
  at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
  at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
  at org.eclipse.jetty.server.Server.handle(Server.java:368)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
  at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
  at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
  at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
  at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
  at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
  at java.lang.Thread.run(Thread.java:722)






[jira] [Commented] (SOLR-6531) better error message when lockType doesn't work with directoryFactory

2014-10-19 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14176311#comment-14176311
 ] 

Anurag Sharma commented on SOLR-6531:
-

It's not reproducible with the below test case while running on Windows/Cygwin:
{code}
ant test  -Dtestcase=CoreMergeIndexesAdminHandlerTest 
-Dtests.method=testMergeIndexesCoreAdminHandler -Dtests.seed=CD7BE4551EE0F637 
-Dtests.slow=true -Dtests.locale=zh_SG -Dtests.timezone=Asia/Calcutta 
-Dtests.file.encoding=US-ASCII
{code}

 better error message when lockType doesn't work with directoryFactory
 -

 Key: SOLR-6531
 URL: https://issues.apache.org/jira/browse/SOLR-6531
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
  Labels: difficulty-easy, impact-low

 SOLR-6519 improved the logic about which lockTypes could be configured with 
 which directoryFactory implementations, but the result is a somewhat 
 confusing error message.






[jira] [Commented] (SOLR-6474) Smoke tester should use the Solr start scripts to start Solr

2014-10-18 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175879#comment-14175879
 ] 

Anurag Sharma commented on SOLR-6474:
-

Is this the Lucene smoke tester or Solr's? 
Other than the smoke tester, how about updating all the references to starting 
Solr with "java -jar start.jar" to use the start script from SOLR-3617?

 Smoke tester should use the Solr start scripts to start Solr
 

 Key: SOLR-6474
 URL: https://issues.apache.org/jira/browse/SOLR-6474
 Project: Solr
  Issue Type: Task
  Components: scripts and tools
Reporter: Shalin Shekhar Mangar
  Labels: difficulty-easy, impact-low
 Fix For: 5.0


 We should use the Solr bin scripts created by SOLR-3617 in the smoke tester 
 to test Solr.






[jira] [Commented] (SOLR-6449) Add first class support for Real Time Get in Solrj

2014-10-18 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175891#comment-14175891
 ] 

Anurag Sharma commented on SOLR-6449:
-

Just wondering: should we also support POST and PUT as well, e.g. 
solrServer.post(id, object)?


 Add first class support for Real Time Get in Solrj
 --

 Key: SOLR-6449
 URL: https://issues.apache.org/jira/browse/SOLR-6449
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Reporter: Shalin Shekhar Mangar
  Labels: difficulty-medium, impact-medium
 Fix For: 5.0


 Any request handler can be queried by Solrj using a custom param map and the 
 qt parameter but I think /get should get first-class support in the java 
 client.






[jira] [Comment Edited] (SOLR-6474) Smoke tester should use the Solr start scripts to start Solr

2014-10-18 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175879#comment-14175879
 ] 

Anurag Sharma edited comment on SOLR-6474 at 10/18/14 7:14 AM:
---

Other than the smoke tester, how about updating all the references to starting 
Solr with "java -jar start.jar" to use the start script from SOLR-3617?


was (Author: anuragsharma):
Is this the Lucene smoke tester or Solr's? 
Other than the smoke tester, how about updating all the references to starting 
Solr with "java -jar start.jar" to use the start script from SOLR-3617?

 Smoke tester should use the Solr start scripts to start Solr
 

 Key: SOLR-6474
 URL: https://issues.apache.org/jira/browse/SOLR-6474
 Project: Solr
  Issue Type: Task
  Components: scripts and tools
Reporter: Shalin Shekhar Mangar
  Labels: difficulty-easy, impact-low
 Fix For: 5.0


 We should use the Solr bin scripts created by SOLR-3617 in the smoke tester 
 to test Solr.






[jira] [Commented] (SOLR-6547) CloudSolrServer query getqtime Exception

2014-10-18 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175969#comment-14175969
 ] 

Anurag Sharma commented on SOLR-6547:
-

I am not sure of the usage. If there is a use case for calculating the response 
time, then long would be the precise type for returning the query time in 
milliseconds. With int, the millisecond detail may be lost.

 CloudSolrServer query getqtime Exception
 

 Key: SOLR-6547
 URL: https://issues.apache.org/jira/browse/SOLR-6547
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 4.10
Reporter: kevin

 We are using CloudSolrServer to query ,but solrj throw Exception ;
 java.lang.ClassCastException: java.lang.Long cannot be cast to 
 java.lang.Integer  at 
 org.apache.solr.client.solrj.response.SolrResponseBase.getQTime(SolrResponseBase.java:76)






[jira] [Commented] (SOLR-6478) need docs / tests of the rules as far as collection names go

2014-10-18 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175984#comment-14175984
 ] 

Anurag Sharma commented on SOLR-6478:
-

Proper URL escaping can take care of whitespace and special characters (see the 
sketch below). Also, since a collection name maps to a folder, it is bound by 
the rules for folder/directory names.

Any suggestion on a good place to put the test cases?
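
For illustration, a minimal JDK-only sketch of the escaping mentioned above (the 
collection name and URL here are made up):
{code}
import java.net.URLEncoder;

public class EscapeSketch {
  public static void main(String[] args) throws Exception {
    String name = "My Collection #1";                  // hypothetical name
    String escaped = URLEncoder.encode(name, "UTF-8"); // "My+Collection+%231"
    System.out.println("http://localhost:8983/solr/admin/collections?action=CREATE"
        + "&name=" + escaped + "&numShards=1");
  }
}
{code}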

 need docs / tests of the rules as far as collection names go
 --

 Key: SOLR-6478
 URL: https://issues.apache.org/jira/browse/SOLR-6478
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
  Labels: difficulty-medium, impact-medium

 historically, the rules for core names have been vague but implicitly 
 defined based on the rule that it had to be a valid directory path name - but 
 i don't know that we've ever documented anywhere what the rules are for a 
 collection name when dealing with the Collections API.
 I haven't had a chance to try this, but i suspect that using the Collections 
 API you can create any collection name you want, and the zk/clusterstate.json 
 data will all be fine, and you'll then be able to request anything you want 
 from that collection as long as you properly URL escape it in your request 
 URLs ... but we should have a test that tries to do this, and document any 
 actual limitations that pop up and/or fix those limitations so we really can 
 have arbitrary collection names.






[jira] [Commented] (SOLR-6572) lineshift in solrconfig.xml is not supported

2014-10-18 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14176046#comment-14176046
 ] 

Anurag Sharma commented on SOLR-6572:
-

As mentioned by Jan in the first comment, a failing JUnit test would be very 
helpful; it can speed up analysing and fixing the actual issue.
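
A minimal standalone sketch (plain JDK XML parsing, not the Solr test harness) of 
the behavior such a test would assert: the element's text content keeps the 
surrounding newlines unless it is trimmed:
{code}
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class LineshiftSketch {
  public static void main(String[] args) throws Exception {
    String expected = "localhost:12100/solr,localhost:12200/solr";
    String xml = "<str name=\"shards\">\n" + expected + "\n</str>";
    Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
        .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
    String raw = doc.getDocumentElement().getTextContent();
    System.out.println(raw.equals(expected));        // false: newlines are kept
    System.out.println(raw.trim().equals(expected)); // true: trimming would fix it
  }
}
{code}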

 lineshift in solrconfig.xml is not supported
 

 Key: SOLR-6572
 URL: https://issues.apache.org/jira/browse/SOLR-6572
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8.1
Reporter: Fredrik Rodland
  Labels: difficulty-easy, impact-low, solrconfig.xml

 This has been a problem for a long time, and is still a problem at least for 
 SOLR 4.8.1.
 If lineshifts are introduced in some elements in solrconfig.xml SOLR fails to 
 pick up on the values.
 example:
 ok:
 {code}
 <requestHandler name="/replication" class="solr.ReplicationHandler" 
 enable="${enable.replication:false}">
 <lst name="slave">
 <str 
 name="masterUrl">${solr.master.url:http://solr-admin1.finn.no:12910/solr/front-static/replication}</str>
 {code}
 not ok:
 {code}
 <requestHandler name="/replication" class="solr.ReplicationHandler" 
 enable="${enable.replication:false}">
 <lst name="slave">
 <str 
 name="masterUrl">${solr.master.url:http://solr-admin1.finn.no:12910/solr/front-static/replication}
 </str>
 {code}
 Other example:
 ok:
 {code}
 <str 
 name="shards">localhost:12100/solr,localhost:12200/solr,localhost:12300/solr,localhost:12400/solr,localhost:12500/solr,localhost:12530/solr</str>
 {code}
 not ok:
 {code}
 <str name="shards">
 localhost:12100/solr,localhost:12200/solr,localhost:12300/solr,localhost:12400/solr,localhost:12500/solr,localhost:12530/solr
 </str>
 {code}
 IDEs and people tend to introduce lineshifts in XML files to make them 
 prettier. SOLR should really not be affected by this.






[jira] [Commented] (SOLR-6547) CloudSolrServer query getqtime Exception

2014-10-18 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14176048#comment-14176048
 ] 

Anurag Sharma commented on SOLR-6547:
-

Based on the above comment, we are good to go with Hoss Man's suggestion to 
extract the intValue from the long:
{code}
return ((Number) header.get("QTime")).intValue()
{code}

A couple of failing unit tests would help in fixing this and creating the patch. 
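
For example (a hypothetical sketch, assuming a header that carries QTime as a 
Long the way some responses do; not an actual test from the code base):
{code}
import org.apache.solr.client.solrj.response.SolrResponseBase;
import org.apache.solr.common.util.NamedList;

public class QTimeSketch {
  public static void main(String[] args) {
    NamedList<Object> header = new NamedList<Object>();
    header.add("QTime", 1234L);                 // Long instead of Integer
    NamedList<Object> response = new NamedList<Object>();
    response.add("responseHeader", header);

    SolrResponseBase rsp = new SolrResponseBase();
    rsp.setResponse(response);
    // before the fix this line throws ClassCastException (Long -> Integer);
    // with the Number-based fix it should print true
    System.out.println(rsp.getQTime() == 1234);
  }
}
{code}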

 CloudSolrServer query getqtime Exception
 

 Key: SOLR-6547
 URL: https://issues.apache.org/jira/browse/SOLR-6547
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 4.10
Reporter: kevin

 We are using CloudSolrServer to query ,but solrj throw Exception ;
 java.lang.ClassCastException: java.lang.Long cannot be cast to 
 java.lang.Integer  at 
 org.apache.solr.client.solrj.response.SolrResponseBase.getQTime(SolrResponseBase.java:76)






[jira] [Commented] (SOLR-6598) Solr Collections API, case sensitivity of collection name and core's/replica's instance directory

2014-10-18 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14176053#comment-14176053
 ] 

Anurag Sharma commented on SOLR-6598:
-

Is it specific to Mac OS? Case-sensitive names are supported by most operating 
systems.

 Solr Collections API, case sensitivity of collection name and 
 core's/replica's instance directory
 -

 Key: SOLR-6598
 URL: https://issues.apache.org/jira/browse/SOLR-6598
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.1
 Environment: Mac OS X
Reporter: Alexey Serba
Priority: Trivial

 Solr Collections API returns misleading error when trying to create two 
 collections with the same name but with different case on MacOS file system 
 (which is case insensitive). 
 {noformat}
 sh> curl 
 'http://localhost:8983/solr/admin/collections?action=CREATE&name=test&numShards=1&collection.configName=default&indent=true&wt=json'
 {
   "responseHeader":{
     "status":0,
     "QTime":1949},
   "success":{
     "":{
       "responseHeader":{
         "status":0,
         "QTime":1833},
       "core":"test_shard1_replica1"}}}
 sh> curl 
 'http://localhost:8983/solr/admin/collections?action=CREATE&name=TEST&numShards=1&collection.configName=default&indent=true&wt=json'
 {
   "responseHeader":{
     "status":0,
     "QTime":2509},
   "failure":{
     "":"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error
  CREATEing SolrCore 'TEST_shard1_replica1': Unable to create core 
 [TEST_shard1_replica1] Caused by: Lock obtain timed out: 
 NativeFSLock@/Users/alexey/Desktop/solr-4.10.1/node1/solr/test_shard1_replica1/data/index/write.lock"}}
 {noformat}
 See the {{Lock obtain timed out}} exception. It would be more user friendly to 
 check for the existence of the instance dir {{test_shard1_replica1}} and return 
 something like a "Node A has replica B that uses the same index directory" 
 exception (instead of just trying to hijack that existing directory and then 
 propagating the inexplicable lock exception that arises as a result).






[jira] [Commented] (SOLR-6475) SOLR-5517 broke the ExtractingRequestHandler / Tika content-type detection.

2014-10-18 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14176067#comment-14176067
 ] 

Anurag Sharma commented on SOLR-6475:
-

Isn't this part of the Tika project, since the fix expects Tika to trigger the 
auto-detection on content-type application/octet-stream? Am I missing 
anything?
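
For what it's worth, the proposed fallback is easy to sketch with Tika's public 
facade (illustrative only, not the actual ExtractingRequestHandler code; the 
file name is made up):
{code}
import java.io.File;
import org.apache.tika.Tika;

public class DetectSketch {
  public static void main(String[] args) throws Exception {
    String incoming = "application/octet-stream";  // what the client sent
    File body = new File("tutorial.html");         // hypothetical upload body
    // proposed behavior: treat octet-stream like "no type given" and auto-detect
    String effective = (incoming == null || "application/octet-stream".equals(incoming))
        ? new Tika().detect(body)                  // e.g. "text/html"
        : incoming;
    System.out.println(effective);
  }
}
{code}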

 SOLR-5517 broke the ExtractingRequestHandler / Tika content-type detection.
 ---

 Key: SOLR-6475
 URL: https://issues.apache.org/jira/browse/SOLR-6475
 Project: Solr
  Issue Type: Bug
  Components: contrib - Solr Cell (Tika extraction)
Affects Versions: 4.7
Reporter: Dominik Geelen
  Labels: Content-Type, Tika, difficulty-medium, impact-medium

 Hi,
 as discussed with hoss on IRC, i'm creating this Issue about a problem we 
 recently ran into:
 Our company uses Solr to index user-generated files for fulltext searching 
 (PDFs, etc.) by using the ExtractingRequestHandler / Tika. 
 Since we recently upgraded to Solr 4.9, the indexing process began to throw 
 the following exception: "Must specify a Content-Type header with POST 
 requests" (in solr/servlet/SolrRequestParsers.java, line 684 in the 4.9 
 source).
 This behavior was introduced with SOLR-5517, but as the Solr wiki states, 
 Tika needs the content-type to be empty or not present to trigger 
 auto-detection of the content-/mime-type.
 Since both features block each other, but both are basically correct 
 behavior, Hoss suggested that Tika should be fixed to trigger the 
 auto-detection on content-type application/octet-stream too, and I highly 
 agree with this proposal.
 *Test case:*
 Just use the example from the ExtractingRequestHandler wiki page:
 {noformat}
 curl "http://localhost:8983/solr/update/extract?literal.id=doc5&defaultField=text" 
  --data-binary @tutorial.html  [-H 'Content-type:text/html']
 {noformat}
 but don't send the content-type, obviously. Or you could just use the 
 SimplePostTool (post.jar) mentioned in the wiki, but I guess this would be 
 broken now, too.
 *Proposed solution:*
 Fix the Tika content guessing so that it also triggers the auto-detection on 
 content-type application/octet-stream.
 Thanks,
 Dominik






[jira] [Commented] (SOLR-6469) Solr search with multicore + grouping + highlighting cause NPE

2014-10-18 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14176074#comment-14176074
 ] 

Anurag Sharma commented on SOLR-6469:
-

A failing unit test describing the scenario could speed up the fix.

 Solr search with multicore + grouping + highlighting cause NPE
 --

 Key: SOLR-6469
 URL: https://issues.apache.org/jira/browse/SOLR-6469
 Project: Solr
  Issue Type: Bug
  Components: highlighter, multicore, SearchComponents - other
Affects Versions: 4.8.1
 Environment: Windows 7, Intellij
Reporter: Shay Sofer
  Labels: patch

 Integration of grouping + shards + highlighting causes a NullPointerException.
 Query: 
 localhost:8983/solr/Global_A/select?q=%2Btext%3A%28shay*%29+&rows=100&fl=id%2CobjId%2Cnull&shards=http%3A%2F%2F127.0.0.1%3A8983%2Fsolr%2F0_A%2Chttp%3A%2F%2F127.0.0.1%3A8983%2Fsolr%2FGlobal_A&group=true&group.query=name__s%3Ashay&sort=name__s_sort+asc&hl=true
 results:
 java.lang.NullPointerException
  at 
 org.apache.solr.handler.component.HighlightComponent.finishStage(HighlightComponent.java:189)
  at 
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:330)
  at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1952)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:774)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
  at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
  at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
  at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
  at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
  at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
  at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
  at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
  at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
  at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
  at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
  at org.eclipse.jetty.server.Server.handle(Server.java:368)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
  at 
 org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
  at 
 org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
  at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
  at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
  at 
 org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
  at 
 org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
  at 
 org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
  at java.lang.Thread.run(Thread.java:722)






[jira] [Commented] (SOLR-6599) Wrong error logged on DIH connection problem

2014-10-18 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14176122#comment-14176122
 ] 

Anurag Sharma commented on SOLR-6599:
-

Thomas - Any steps to reproduce the issue locally?

 Wrong error logged on DIH connection problem
 

 Key: SOLR-6599
 URL: https://issues.apache.org/jira/browse/SOLR-6599
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.10.1
 Environment: Debian Squeeze, Oracle-java-8, mysql-connector-5.1.28
Reporter: Thomas Lamy
Priority: Minor
  Labels: difficulty-medium, impact-low

 If I try a full import via DIH from a mysql server which is firewalled or 
 down, I get a misleading error message (see below, only SQL statement 
 shortened).
 I don't know Java very well, but I suspect the connection error is caught, 
 the connection handle is null, and this in turn leads to the null pointer 
 exception at the end of the stack trace.
 {code}
 Full Import failed:java.lang.RuntimeException: java.lang.RuntimeException: 
 org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to 
 execute query: SELECT SenderID, ProviderID, `Name`, RefSenderID, CameraURL, 
 ChatURL, [.] Processing Document # 1
   at 
 org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:271)
   at 
 org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
   at 
 org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
   at 
 org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:461)
 Caused by: java.lang.RuntimeException: 
 org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to 
 execute query: SELECT SenderID, ProviderID, `Name`, RefSenderID, CameraURL, 
 ChatURL, [...] Processing Document # 1
   at 
 org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:417)
   at 
 org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:330)
   at 
 org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
   ... 3 more
 Caused by: org.apache.solr.handler.dataimport.DataImportHandlerException: 
 Unable to execute query: SELECT SenderID, ProviderID, `Name`, RefSenderID, 
 CameraURL, ChatURL, [...] Processing Document # 1
   at 
 org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:71)
   at 
 org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.init(JdbcDataSource.java:283)
   at 
 org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:240)
   at 
 org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:44)
   at 
 org.apache.solr.handler.dataimport.SqlEntityProcessor.initQuery(SqlEntityProcessor.java:59)
   at 
 org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:73)
   at 
 org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:243)
   at 
 org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
   at 
 org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
   ... 5 more
 Caused by: java.lang.NullPointerException
   at 
 org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.init(JdbcDataSource.java:271)
   ... 12 more
 {code}
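
A defensive fix along the lines of the suspicion above could look like this 
sketch (plain JDBC, illustrative only; the actual JdbcDataSource code differs):
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

class ConnectSketch {
  static Connection getConnectionOrFail(String url) {
    try {
      Connection conn = DriverManager.getConnection(url);
      if (conn == null) {
        throw new RuntimeException("Driver returned no connection for: " + url);
      }
      return conn;
    } catch (SQLException e) {
      // surface the real connection failure instead of a later NPE
      throw new RuntimeException("Unable to connect to database: " + url, e);
    }
  }
}
{code}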






[jira] [Updated] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-10-17 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6307:

Attachment: SOLR-6307.patch

Here is the patch that parses Double before Float.

 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
Assignee: Noble Paul
  Labels: atomic, difficulty-medium, impact-medium
 Fix For: 5.0, Trunk

 Attachments: SOLR-6307.patch, SOLR-6307.patch, SOLR-6307.patch, 
 SOLR-6307.patch, unitTests-6307.txt


 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": 
 [1960]},  "id": "1098"}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
 ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
 "2014-02-21T12:00:00Z"]}, "id": "1098"}]'
 {code}
 Neither of them works.
 The set and add operation for int array works. 
 The set, remove, and  add operation for string array works






[jira] [Commented] (SOLR-6547) CloudSolrServer query getqtime Exception

2014-10-17 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175867#comment-14175867
 ] 

Anurag Sharma commented on SOLR-6547:
-

Hi Hoss,

Thanks for your update. 

Time is usually returned as a long value. Since getQTime returns an int, it looks 
like the intent is to return the value in seconds and not milliseconds. 

I agree with your suggestion to either change the return type to long or use 
intValue() when returning from the current method.

Is there a way I can reproduce the java.lang.ClassCastException mentioned in 
the description?
Kevin - Any suggestion?

Anurag

 CloudSolrServer query getqtime Exception
 

 Key: SOLR-6547
 URL: https://issues.apache.org/jira/browse/SOLR-6547
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 4.10
Reporter: kevin

 We are using CloudSolrServer to query ,but solrj throw Exception ;
 java.lang.ClassCastException: java.lang.Long cannot be cast to 
 java.lang.Integer  at 
 org.apache.solr.client.solrj.response.SolrResponseBase.getQTime(SolrResponseBase.java:76)






[jira] [Commented] (SOLR-6449) Add first class support for Real Time Get in Solrj

2014-10-16 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14173990#comment-14173990
 ] 

Anurag Sharma commented on SOLR-6449:
-

I am not able to get the meaning of "first-class support" here. An example would 
be very helpful to clarify.
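
For context, this is how /get has to be reached from SolrJ today; a minimal 
sketch (the URL and id are made up):
{code}
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.ModifiableSolrParams;

public class RtgSketch {
  public static void main(String[] args) throws Exception {
    SolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("qt", "/get");  // route to the real-time get handler
    params.set("id", "1098");  // hypothetical document id
    QueryResponse rsp = server.query(params);
    System.out.println(rsp.getResponse().get("doc"));
  }
}
{code}
First-class support would presumably replace this with a dedicated method, e.g. 
something like server.getById("1098") (hypothetical).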



 Add first class support for Real Time Get in Solrj
 --

 Key: SOLR-6449
 URL: https://issues.apache.org/jira/browse/SOLR-6449
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Reporter: Shalin Shekhar Mangar
  Labels: difficulty-medium, impact-medium
 Fix For: 5.0


 Any request handler can be queried by Solrj using a custom param map and the 
 qt parameter but I think /get should get first-class support in the java 
 client.






[jira] [Commented] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-10-16 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174630#comment-14174630
 ] 

Anurag Sharma commented on SOLR-6307:
-

Sorry, I missed the last comment. With Float I saw that the value was sometimes 
truncated, e.g. 1112.666 to 1112, and the code then looked for 1112 to remove, 
so I introduced parsing Double before Float. Both parses can be done by 
introducing nested try and catch.
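
A minimal sketch of that nested try/catch idea (illustrative only, not the 
committed patch):
{code}
class ParseSketch {
  static Integer toNativeInt(Object val) {
    String s = val.toString();
    try {
      return Integer.parseInt(s);
    } catch (NumberFormatException e) {
      try {
        // parse as Double first so "1112.666" maps to 1112 instead of
        // going through a potentially lossy Float conversion
        return (int) Double.parseDouble(s);
      } catch (NumberFormatException e2) {
        return null; // not a numeric string
      }
    }
  }
}
{code}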

Independent of the above, another issue I saw in the remove API is that it does 
not give any message indicating whether the element/collection was deleted 
successfully or the operation failed. Reporting this would help the caller know 
the current state; otherwise the caller has to make another real-time /get call 
on the id to find out. Should we raise another issue for this?

 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
Assignee: Noble Paul
  Labels: atomic, difficulty-medium, impact-medium
 Fix For: 5.0, Trunk

 Attachments: SOLR-6307.patch, SOLR-6307.patch, SOLR-6307.patch, 
 unitTests-6307.txt


 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": 
 [1960]},  "id": "1098"}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
 ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
 "2014-02-21T12:00:00Z"]}, "id": "1098"}]'
 {code}
 Neither of them works.
 The set and add operation for int array works. 
 The set, remove, and  add operation for string array works






[jira] [Commented] (SOLR-6474) Smoke tester should use the Solr start scripts to start Solr

2014-10-16 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174692#comment-14174692
 ] 

Anurag Sharma commented on SOLR-6474:
-

Can you point me to the location of the smoke tester?

 Smoke tester should use the Solr start scripts to start Solr
 

 Key: SOLR-6474
 URL: https://issues.apache.org/jira/browse/SOLR-6474
 Project: Solr
  Issue Type: Task
  Components: scripts and tools
Reporter: Shalin Shekhar Mangar
  Labels: difficulty-easy, impact-low
 Fix For: 5.0


 We should use the Solr bin scripts created by SOLR-3617 in the smoke tester 
 to test Solr.






[jira] [Commented] (SOLR-6546) Encapsulation problem when importing CSV with multi-valued fields

2014-10-16 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174697#comment-14174697
 ] 

Anurag Sharma commented on SOLR-6546:
-

Agree with Jan

 Encapsulation problem when importing CSV with multi-valued fields
 -

 Key: SOLR-6546
 URL: https://issues.apache.org/jira/browse/SOLR-6546
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.3.1
 Environment: Debian 6, OpenJDK 64-Bit Server VM (1.6.0_31 23.25-b01)
Reporter: Brice
Priority: Minor
  Labels: csvparser, difficulty-medium, impact-low

 Importing a CSV file with multi-valued field content like:
 "one phrase"|"another phrase"
 fails with the error message:
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader"><int name="status">400</int><int 
 name="QTime">0</int></lst><lst name="error"><str name="msg">CSVLoader: 
 input=null, line=0,can't read line: 0
 values={NO LINES AVAILABLE}</str><int name="code">400</int></lst>
 </response>
 Solr log:
 Caused by: java.io.IOException: (line 0) invalid char between encapsulated 
 token end delimiter
 It works with:
 one phrase|another phrase






[jira] [Commented] (SOLR-6547) CloudSolrServer query getqtime Exception

2014-10-16 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14174699#comment-14174699
 ] 

Anurag Sharma commented on SOLR-6547:
-

Can you mention the exact query? That will make it easier to 
investigate/reproduce.

 CloudSolrServer query getqtime Exception
 

 Key: SOLR-6547
 URL: https://issues.apache.org/jira/browse/SOLR-6547
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 4.10
Reporter: kevin

 We are using CloudSolrServer to query ,but solrj throw Exception ;
 java.lang.ClassCastException: java.lang.Long cannot be cast to 
 java.lang.Integer  at 
 org.apache.solr.client.solrj.response.SolrResponseBase.getQTime(SolrResponseBase.java:76)






[jira] [Commented] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-10-15 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14172262#comment-14172262
 ] 

Anurag Sharma commented on SOLR-6307:
-

Thanks a lot for reviewing and refactoring it. 

Introducing toNativeType() at FieldType is good. The only thing I see missed is 
in class TrieIntField when overriding toNativeType: it is not parsing Double 
before Float. The intention of doing that was to see whether a Double can be 
extracted before a Float. The rest is good. 

Snippet from the earlier patch:
+  // when a Double value is passed as a String
+  if (!removed && nonIntegerFormat)
+    removed = original.remove(new Double(Double.parseDouble(object.toString())).intValue());
+  // when a Float value is passed as a String
+  if (!removed && nonIntegerFormat)
+    removed = original.remove(new Float(Float.parseFloat(object.toString())).intValue());


 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
Assignee: Noble Paul
  Labels: atomic, difficulty-medium, impact-medium
 Attachments: SOLR-6307.patch, SOLR-6307.patch, SOLR-6307.patch, 
 unitTests-6307.txt


 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": 
 [1960]},  "id": "1098"}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
 ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
 "2014-02-21T12:00:00Z"]}, "id": "1098"}]'
 {code}
 Neither of them works.
 The set and add operation for int array works. 
 The set, remove, and  add operation for string array works






[jira] [Commented] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-10-14 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14171231#comment-14171231
 ] 

Anurag Sharma commented on SOLR-6307:
-

My local copy was quite old, so I updated it to the HEAD revision of trunk. 
Attaching the patch again after the update.

 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
Assignee: Noble Paul
  Labels: atomic, difficulty-medium, impact-medium
 Attachments: SOLR-6307.patch


 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": 
 [1960]},  "id": "1098"}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
 ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
 "2014-02-21T12:00:00Z"]}, "id": "1098"}]'
 {code}
 Neither of them works.
 The set and add operation for int array works. 
 The set, remove, and  add operation for string array works






[jira] [Updated] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-10-14 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6307:

Attachment: SOLR-6307.patch

I've added a few more comments in the code and cleaned up the commented-out code 
and non-modified files from the patch. 
A clean build and unit test run with the latest updated code is still pending; 
I'll upload another patch in case I find any issue(s).

 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
Assignee: Noble Paul
  Labels: atomic, difficulty-medium, impact-medium
 Attachments: SOLR-6307.patch, SOLR-6307.patch


 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": 
 [1960]},  "id": "1098"}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
 ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
 "2014-02-21T12:00:00Z"]}, "id": "1098"}]'
 {code}
 Neither of them works.
 The set and add operation for int array works. 
 The set, remove, and  add operation for string array works






[jira] [Updated] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-10-14 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6307:

Attachment: unitTests-6307.txt

I am able to successfully execute the existing and the newly added unit tests in 
AtomicUpdatesTest. Log attached.

 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
Assignee: Noble Paul
  Labels: atomic, difficulty-medium, impact-medium
 Attachments: SOLR-6307.patch, SOLR-6307.patch, unitTests-6307.txt


 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": 
 [1960]},  "id": "1098"}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
 ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
 "2014-02-21T12:00:00Z"]}, "id": "1098"}]'
 {code}
 Neither of them works.
 The set and add operation for int array works. 
 The set, remove, and  add operation for string array works






[jira] [Commented] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-10-14 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14171445#comment-14171445
 ] 

Anurag Sharma commented on SOLR-6307:
-

I need more clarification regarding the applicability of the patch to trunk. 

I see the following differences in the revision:
during patch upload --- 
core/src/java/org/apache/solr/update/processor/DistributedUpdateProcessor.java  
(revision 1631811)
now --- 
core/src/java/org/apache/solr/update/processor/DistributedUpdateProcessor.java  
(revision 1631826)

My doubt is that the revision keeps bumping, so the current revision and the 
patch will never be in sync. What is the correct way to apply the patch on 
trunk? I am new to the process, so I am trying to understand.

 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
Assignee: Noble Paul
  Labels: atomic, difficulty-medium, impact-medium
 Attachments: SOLR-6307.patch, SOLR-6307.patch, unitTests-6307.txt


 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": 
 [1960]},  "id": "1098"}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
 ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
 "2014-02-21T12:00:00Z"]}, "id": "1098"}]'
 {code}
 Neither of them works.
 The set and add operation for int array works. 
 The set, remove, and  add operation for string array works






[jira] [Commented] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-10-13 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14169367#comment-14169367
 ] 

Anurag Sharma commented on SOLR-6307:
-

Yes, sure, go ahead! 
Should I send the review using RBT, or will the attached patch file work?

 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
Assignee: Noble Paul
  Labels: atomic, difficulty-medium, impact-medium
 Attachments: SOLR-6307.patch


 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": 
 [1960]},  "id": "1098"}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
 ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
 "2014-02-21T12:00:00Z"]}, "id": "1098"}]'
 {code}
 Neither of them works.
 The set and add operation for int array works. 
 The set, remove, and  add operation for string array works






[jira] [Updated] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-09-28 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6307:

Attachment: SOLR-6307.patch

Here is the patch for review using approach #2.

Other than int and date, it also covers the float case.



 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
  Labels: atomic, difficulty-medium, impact-medium
 Attachments: SOLR-6307.patch


 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": 
 [1960]},  "id": "1098"}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
 ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
 "2014-02-21T12:00:00Z"]}, "id": "1098"}]'
 {code}
 Neither of them works.
 The set and add operation for int array works. 
 The set, remove, and  add operation for string array works






[jira] [Commented] (SOLR-6282) ArrayIndexOutOfBoundsException during search

2014-09-28 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14151096#comment-14151096
 ] 

Anurag Sharma commented on SOLR-6282:
-

Jason - So far there is no clarity on the steps to reproduce this issue, and 
from the above comment it looks like the issue doesn't exist at all. If you 
still see the issue, please update it with the detailed steps; otherwise, given 
the above comments and information, we are bound to close it.

 ArrayIndexOutOfBoundsException during search
 

 Key: SOLR-6282
 URL: https://issues.apache.org/jira/browse/SOLR-6282
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Jason Emeric
Priority: Critical
  Labels: difficulty-medium, impact-low

 When executing a search with the following query strings, an
 ERROR org.apache.solr.servlet.SolrDispatchFilter - 
 null:java.lang.ArrayIndexOutOfBoundsException
 error is thrown and no stack trace is provided. This is happening on 
 searches that seem to have no common pattern to them (special characters, 
 length, spaces, etc.)
 q=((work_title_search:(%22+zoe%22%20)%20OR%20work_title_search:%22+zoe%22^100)%20AND%20(performer_name_search:(+big~0.75%20+b%27z%20%20)^7%20OR%20performer_name_search:%22+big%20+b%27z%20%20%22^30))
 q=((work_title_search:(%22+rtb%22%20)%20OR%20work_title_search:%22+rtb%22^100)%20AND%20(performer_name_search:(+fly~0.75%20+street~0.75%20+gang~0.75%20)^7%20OR%20performer_name_search:%22+fly%20+street%20+gang%20%22^30))






[jira] [Commented] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-09-22 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14143130#comment-14143130
 ] 

Anurag Sharma commented on SOLR-6307:
-

Here is the code snippet of the remove function. fieldVal is an Object containing 
the items to be removed.
  final Collection<Object> original = existingField.getValues();
  if (fieldVal instanceof Collection) {
    original.removeAll((Collection) fieldVal);
  } else {
    original.remove(fieldVal);
  }
The type of fieldVal is parsed by org.noggit.JSONParser.

To describe #3 further: original.removeAll((Collection) fieldVal); is only 
successful when the collection items of original and fieldVal are of the same 
type. So before making the above call, another collection can be built at 
runtime containing the same item type as the items of original, by typecasting 
the value(s) from fieldVal, and passed like: 
original.removeAll(typeCastedCollectionOfInputItems);
This approach has minimal code change and will impact only the doRemove() 
function in the DistributedUpdateProcessor class. A sketch follows below.
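
A sketch of what that could look like inside doRemove() (names and the Integer 
check are illustrative, not the final patch):
{code}
import java.util.ArrayList;
import java.util.Collection;

class RemoveSketch {
  static void removeMatching(Collection<Object> original, Object fieldVal) {
    if (fieldVal instanceof Collection) {
      // rebuild the incoming values with the same runtime type as the
      // existing values so that removeAll() can actually match them
      boolean intField = !original.isEmpty()
          && original.iterator().next() instanceof Integer;
      Collection<Object> toRemove = new ArrayList<Object>();
      for (Object item : (Collection<?>) fieldVal) {
        toRemove.add(intField ? Integer.valueOf(item.toString()) : item);
      }
      original.removeAll(toRemove);
    } else {
      original.remove(fieldVal);
    }
  }
}
{code}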


 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
  Labels: atomic, difficulty-medium, impact-medium

 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{ "attr_birth_year_is": { "remove": 
 [1960]},  "id": "1098"}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
 ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
 "2014-02-21T12:00:00Z"]}, "id": "1098"}]'
 {code}
 Neither of them works.
 The set and add operation for int array works. 
 The set, remove, and  add operation for string array works






[jira] [Comment Edited] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-09-22 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14143130#comment-14143130
 ] 

Anurag Sharma edited comment on SOLR-6307 at 9/22/14 11:55 AM:
---

Here is the code snippet of the remove function. fieldVal is an Object containing 
the items to be removed.
{quote}
  final Collection<Object> original = existingField.getValues();
  if (fieldVal instanceof Collection) {
    original.removeAll((Collection) fieldVal);
  } else {
    original.remove(fieldVal);
  }
{quote}
The type of fieldVal is parsed by org.noggit.JSONParser.

To describe #3 further: original.removeAll((Collection) fieldVal); is only 
successful when the collection items of original and fieldVal are of the same 
type. So before making the above call, another collection can be built at 
runtime containing the same item type as the items of original, by typecasting 
the value(s) from fieldVal, and passed like: 
original.removeAll(typeCastedCollectionOfInputItems);
This approach has minimal code change and will impact only the doRemove() 
function in the DistributedUpdateProcessor class.



was (Author: anuragsharma):
Here is the code snippet of the remove function. fieldVal is an Object containing 
the items to be removed.
  final Collection<Object> original = existingField.getValues();
  if (fieldVal instanceof Collection) {
    original.removeAll((Collection) fieldVal);
  } else {
    original.remove(fieldVal);
  }
The type of fieldVal is parsed by org.noggit.JSONParser.

To describe #3 further: original.removeAll((Collection) fieldVal); is only 
successful when the collection items of original and fieldVal are of the same 
type. So before making the above call, another collection can be built at 
runtime containing the same item type as the items of original, by typecasting 
the value(s) from fieldVal, and passed like: 
original.removeAll(typeCastedCollectionOfInputItems);
This approach has minimal code change and will impact only the doRemove() 
function in the DistributedUpdateProcessor class.


 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
  Labels: atomic, difficulty-medium, impact-medium

 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{ attr_birth_year_is: { remove: 
 [1960]},  id: 1098}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{reserved_on_dates_dts: {remove: 
 [2014-02-12T12:00:00Z, 2014-07-16T12:00:00Z, 2014-02-15T12:00:00Z, 
 2014-02-21T12:00:00Z]}, id: 1098}]'
 {code}
 Neither of them works.
 The set and add operation for int array works. 
 The set, remove, and  add operation for string array works



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-09-22 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14143130#comment-14143130
 ] 

Anurag Sharma edited comment on SOLR-6307 at 9/22/14 12:00 PM:
---

Here is the code snippet of remove function. fieldValue is an Object containing 
items to be removed.
{quote}
  final CollectionObject original = existingField.getValues();
  if (fieldVal instanceof Collection) \{
original.removeAll((Collection) fieldVal);
  \} else \{
original.remove(fieldVal);
 \ }
{quote}
The type of fieldVal is parsed by org.noggit.JSONParser.

To describe more on #3, original.removeAll((Collection) fieldVal); is only 
successful when collection items of original and fieldVal are of the same type. 
So before making above call another collection can be be build at runtime 
containing same item type as the item types of original by typecasting the 
value(s) from fieldVal and pass them like: 
original.removeAll(typeCastedCollectionOfInputItems);
This approach has minimal code change and will impact only doRemove() function 
in DistributedUpdateProcessor class.



was (Author: anuragsharma):
Here is the code snippet of remove function. fieldValue is an Object containing 
items to be removed.
{quote}
  final CollectionObject original = existingField.getValues();
  if (fieldVal instanceof Collection) {
original.removeAll((Collection) fieldVal);
  } else {
original.remove(fieldVal);
  }
{quote}
The type of fieldVal is parsed by org.noggit.JSONParser.

To describe more on #3, original.removeAll((Collection) fieldVal); is only 
successful when collection items of original and fieldVal are of the same type. 
So before making above call another collection can be be build at runtime 
containing same item type as the item types of original by typecasting the 
value(s) from fieldVal and pass them like: 
original.removeAll(typeCastedCollectionOfInputItems);
This approach has minimal code change and will impact only doRemove() function 
in DistributedUpdateProcessor class.


 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
  Labels: atomic, difficulty-medium, impact-medium

 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{ attr_birth_year_is: { remove: 
 [1960]},  id: 1098}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{reserved_on_dates_dts: {remove: 
 [2014-02-12T12:00:00Z, 2014-07-16T12:00:00Z, 2014-02-15T12:00:00Z, 
 2014-02-21T12:00:00Z]}, id: 1098}]'
 {code}
 Neither of them works.
 The set and add operation for int array works. 
 The set, remove, and  add operation for string array works



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-09-22 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14143130#comment-14143130
 ] 

Anurag Sharma edited comment on SOLR-6307 at 9/22/14 12:00 PM:
---

Here is the code snippet of remove function. fieldValue is an Object containing 
items to be removed.
{quote}
  final CollectionObject original = existingField.getValues();
  if (fieldVal instanceof Collection) \{
original.removeAll((Collection) fieldVal);
  \} else \{
original.remove(fieldVal);
 \}
{quote}
The type of fieldVal is parsed by org.noggit.JSONParser.

To describe more on #3, original.removeAll((Collection) fieldVal); is only 
successful when collection items of original and fieldVal are of the same type. 
So before making above call another collection can be be build at runtime 
containing same item type as the item types of original by typecasting the 
value(s) from fieldVal and pass them like: 
original.removeAll(typeCastedCollectionOfInputItems);
This approach has minimal code change and will impact only doRemove() function 
in DistributedUpdateProcessor class.



was (Author: anuragsharma):
Here is the code snippet of remove function. fieldValue is an Object containing 
items to be removed.
{quote}
  final CollectionObject original = existingField.getValues();
  if (fieldVal instanceof Collection) \{
original.removeAll((Collection) fieldVal);
  \} else \{
original.remove(fieldVal);
 \ }
{quote}
The type of fieldVal is parsed by org.noggit.JSONParser.

To describe more on #3, original.removeAll((Collection) fieldVal); is only 
successful when collection items of original and fieldVal are of the same type. 
So before making above call another collection can be be build at runtime 
containing same item type as the item types of original by typecasting the 
value(s) from fieldVal and pass them like: 
original.removeAll(typeCastedCollectionOfInputItems);
This approach has minimal code change and will impact only doRemove() function 
in DistributedUpdateProcessor class.


 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
  Labels: atomic, difficulty-medium, impact-medium

 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{ attr_birth_year_is: { remove: 
 [1960]},  id: 1098}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{reserved_on_dates_dts: {remove: 
 [2014-02-12T12:00:00Z, 2014-07-16T12:00:00Z, 2014-02-15T12:00:00Z, 
 2014-02-21T12:00:00Z]}, id: 1098}]'
 {code}
 Neither of them works.
 The set and add operation for int array works. 
 The set, remove, and  add operation for string array works



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-09-21 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14142407#comment-14142407
 ] 

Anurag Sharma commented on SOLR-6307:
-

Kun,
Thanks a lot for the update.

The atomic removal is not working because of a type mismatch against the document schema, i.e. birth_year_is is a multivalued Integer field, but the removal is invoked with a Long value [1970].

I am working on creating a patch; the sketch below shows the mismatch.
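
A minimal sketch of that mismatch: Integer.equals(Long) is always false in Java, so removeAll() silently removes nothing (plain JDK code, not Solr's):

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TypeMismatchDemo {
  public static void main(String[] args) {
    // Integers, as stored for a multivalued int field
    List<Object> stored = new ArrayList<Object>(Arrays.<Object>asList(1970, 1984));
    // Longs, as produced by the JSON parser for the remove request
    List<Object> toRemove = Arrays.<Object>asList(1970L);

    stored.removeAll(toRemove); // no-op: Integer.valueOf(1970).equals(1970L) is false
    System.out.println(stored); // prints [1970, 1984] -- nothing was removed
  }
}
{code}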

 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
  Labels: atomic, difficulty-medium, impact-medium

 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"attr_birth_year_is": {"remove": 
 [1960]}, "id": 1098}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
 ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
 "2014-02-21T12:00:00Z"]}, "id": 1098}]'
 {code}
 Neither of them works.
 The set and add operations work for the int array.
 The set, remove, and add operations work for the string array.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6282) ArrayIndexOutOfBoundsException during search

2014-09-21 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14142566#comment-14142566
 ] 

Anurag Sharma commented on SOLR-6282:
-

Jason,
Can you share the details of how this schema was created and how the values were populated, so that the steps to reproduce are clear?

 ArrayIndexOutOfBoundsException during search
 

 Key: SOLR-6282
 URL: https://issues.apache.org/jira/browse/SOLR-6282
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Jason Emeric
Priority: Critical
  Labels: difficulty-medium, impact-low

 When executing a search with the following query strings, an
 ERROR org.apache.solr.servlet.SolrDispatchFilter - null:java.lang.ArrayIndexOutOfBoundsException
 error is thrown and no stack trace is provided. This is happening on
 searches that seem to share no common pattern (special characters,
 length, spaces, etc.):
 q=((work_title_search:(%22+zoe%22%20)%20OR%20work_title_search:%22+zoe%22^100)%20AND%20(performer_name_search:(+big~0.75%20+b%27z%20%20)^7%20OR%20performer_name_search:%22+big%20+b%27z%20%20%22^30))
 q=((work_title_search:(%22+rtb%22%20)%20OR%20work_title_search:%22+rtb%22^100)%20AND%20(performer_name_search:(+fly~0.75%20+street~0.75%20+gang~0.75%20)^7%20OR%20performer_name_search:%22+fly%20+street%20+gang%20%22^30))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-09-21 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14142569#comment-14142569
 ] 

Anurag Sharma commented on SOLR-6307:
-

There are three approaches to fix this issue:
1. Enhance org.noggit.JSONParser to support Integer and Date types.
2. Create the input collection type based on the document id and its Lucene schema.
3. During remove, typecast the input collection's items to the original's item type (Integer or Date), based on the field's key.

Please suggest which approach should be taken. To me, approach #2 looks the most generic; a rough sketch follows.
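
For illustration only, a rough sketch of approach #2; schemaItemClass() is hypothetical, and the suffix mapping merely mirrors the example schema's dynamic-field conventions (*_is for multivalued int, *_dts for multivalued date):

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Date;
import java.util.List;

public class SchemaDrivenConversion {

  // Hypothetical lookup: map a field name to the Java class its items use.
  // A real implementation would consult the IndexSchema instead.
  static Class<?> schemaItemClass(String fieldName) {
    if (fieldName.endsWith("_is"))  return Integer.class;
    if (fieldName.endsWith("_dts")) return Date.class;
    return String.class;
  }

  // Convert the parsed input values to the schema's item class before
  // they are handed to removeAll().
  static Collection<Object> convert(String fieldName, Collection<?> input) {
    Class<?> target = schemaItemClass(fieldName);
    List<Object> out = new ArrayList<Object>();
    for (Object v : input) {
      if (target == Integer.class && v instanceof Number) {
        out.add(((Number) v).intValue());
      } else if (target == Date.class && v instanceof String) {
        out.add(Date.from(java.time.Instant.parse((String) v)));
      } else {
        out.add(v);
      }
    }
    return out;
  }

  public static void main(String[] args) {
    // [1960] arrives as a Long; converted to Integer per the *_is field type
    System.out.println(convert("attr_birth_year_is", Arrays.asList((Object) 1960L)));
  }
}
{code}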

 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
  Labels: atomic, difficulty-medium, impact-medium

 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"attr_birth_year_is": {"remove": 
 [1960]}, "id": 1098}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
 ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
 "2014-02-21T12:00:00Z"]}, "id": 1098}]'
 {code}
 Neither of them works.
 The set and add operations work for the int array.
 The set, remove, and add operations work for the string array.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6282) ArrayIndexOutOfBoundsException during search

2014-09-21 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14142610#comment-14142610
 ] 

Anurag Sharma commented on SOLR-6282:
-

Jason - there is a syntax error (unbalanced parentheses) in your query.

I tried a similar query scenario (special characters, length, spaces, etc.) using 
the books.csv example data in exampledocs. It works without any issue. The request 
and response are:
http://localhost:8983/solr/select?q=((title%22%20rtb%22%20)%20OR%20title:%22%20rtb%22^100)%20AND%20((author%20fly~0.75%20street~0.75%20gang~0.75%20)^7%20OR%20author:%22%20fly%20street%20gang%20%22^30)

<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">4</int>
<lst name="params">
<str name="q">
((title" rtb" ) OR title:" rtb"^100) AND ((author fly~0.75 street~0.75 
gang~0.75 )^7 OR author:" fly street gang "^30)
</str>
</lst>
</lst>
<result name="response" numFound="0" start="0"/>
</response>

I also tried a query with unbalanced parentheses, which clearly produced a syntax 
error. Here are the request and response:
http://localhost:8983/solr/select?q=((title%22%20rtb%22%20)%20OR%20title:%22%20rtb%22^100)%20AND%20((author%20fly~0.75%20street~0.75%20gang~0.75%20)^7%20OR%20author:%22%20fly%20street%20gang%20%22^30))
<response>
<lst name="responseHeader">
<int name="status">400</int>
<int name="QTime">10</int>
<lst name="params">
<str name="q">
((title" rtb" ) OR title:" rtb"^100) AND ((author fly~0.75 street~0.75 
gang~0.75 )^7 OR author:" fly street gang "^30))
</str>
</lst>
</lst>
<lst name="error">
<str name="msg">
org.apache.solr.search.SyntaxError: Cannot parse '((title" rtb" ) OR title:" 
rtb"^100) AND ((author fly~0.75 street~0.75 gang~0.75 )^7 OR author:" fly 
street gang "^30))': Encountered " ")" ") "" at line 1, column 118. Was 
expecting one of: <EOF> <AND> ... <OR> ... <NOT> ... "+" ... "-" ... <BAREOPER> 
... "(" ... "*" ... "^" ... <QUOTED> ... <TERM> ... <PREFIXTERM> ... <WILDTERM> 
... <REGEXPTERM> ... "[" ... "{" ... <LPARAMS> ... <NUMBER> ...
</str>
<int name="code">400</int>
</lst>
</response>


I did not see an ArrayIndexOutOfBoundsException for any of the above queries. 
Please provide a valid schema and data that reproduce the issue.

 ArrayIndexOutOfBoundsException during search
 

 Key: SOLR-6282
 URL: https://issues.apache.org/jira/browse/SOLR-6282
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Jason Emeric
Priority: Critical
  Labels: difficulty-medium, impact-low

 When executing a search with the following query strings, an
 ERROR org.apache.solr.servlet.SolrDispatchFilter - null:java.lang.ArrayIndexOutOfBoundsException
 error is thrown and no stack trace is provided. This is happening on
 searches that seem to share no common pattern (special characters,
 length, spaces, etc.):
 q=((work_title_search:(%22+zoe%22%20)%20OR%20work_title_search:%22+zoe%22^100)%20AND%20(performer_name_search:(+big~0.75%20+b%27z%20%20)^7%20OR%20performer_name_search:%22+big%20+b%27z%20%20%22^30))
 q=((work_title_search:(%22+rtb%22%20)%20OR%20work_title_search:%22+rtb%22^100)%20AND%20(performer_name_search:(+fly~0.75%20+street~0.75%20+gang~0.75%20)^7%20OR%20performer_name_search:%22+fly%20+street%20+gang%20%22^30))



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6307) Atomic update remove does not work for int array or date array

2014-09-19 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14140207#comment-14140207
 ] 

Anurag Sharma commented on SOLR-6307:
-

Hi Kun Xi,

Can you provide more positive and negative test cases, covering actual and 
expected results, for this issue? I am working on fixing it.

Thanks
Anurag

 Atomic update remove does not work for int array or date array
 --

 Key: SOLR-6307
 URL: https://issues.apache.org/jira/browse/SOLR-6307
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.9
Reporter: Kun Xi
  Labels: atomic, difficulty-medium, impact-medium

 Try to remove an element in the string array with curl:
 {code}
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"attr_birth_year_is": {"remove": 
 [1960]}, "id": 1098}]'
 curl http://localhost:8080/update\?commit\=true -H 
 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": 
 ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", 
 "2014-02-21T12:00:00Z"]}, "id": 1098}]'
 {code}
 Neither of them works.
 The set and add operations work for the int array.
 The set, remove, and add operations work for the string array.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Updated: (HADOOP-4) tool to mount dfs on linux

2007-12-12 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated HADOOP-4:
---

Attachment: (was: fuse-j-hadoopfs-0.1.zip)

 tool to mount dfs on linux
 --

 Key: HADOOP-4
 URL: https://issues.apache.org/jira/browse/HADOOP-4
 Project: Hadoop
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.5.0
 Environment: linux only
Reporter: John Xing
Assignee: Doug Cutting
 Attachments: fuse-hadoop-0.1.0_fuse-j.2.2.3_hadoop.0.5.0.tar.gz, 
 fuse-hadoop-0.1.0_fuse-j.2.4_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.1.tar.gz, 
 fuse_dfs.c


 tool to mount dfs on linux

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-4) tool to mount dfs on linux

2007-12-12 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated HADOOP-4:
---

Attachment: fuse-j-hadoopfs-03.tar.gz

 tool to mount dfs on linux
 --

 Key: HADOOP-4
 URL: https://issues.apache.org/jira/browse/HADOOP-4
 Project: Hadoop
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.5.0
 Environment: linux only
Reporter: John Xing
Assignee: Doug Cutting
 Attachments: fuse-hadoop-0.1.0_fuse-j.2.2.3_hadoop.0.5.0.tar.gz, 
 fuse-hadoop-0.1.0_fuse-j.2.4_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.1.tar.gz, 
 fuse-j-hadoopfs-03.tar.gz, fuse_dfs.c


 tool to mount dfs on linux

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Issue Comment Edited: (HADOOP-4) tool to mount dfs on linux

2007-12-12 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12551102
 ] 

as106 edited comment on HADOOP-4 at 12/12/07 11:56 AM:
---

hi,
We re-submitted the fuse-j-hadoopfs package with the following changes (as 
suggested above):
- we are hosting a patched FUSE-J on a separate server.
- The fuse-j-hadoopfs build downloads this patched version at compile time.

We restructured the fuse-j-hadoopfs build to be a contrib, and have tested it 
with the Hadoop source-tree build.

The fuse-j-hadoopfs build is a no-op when a standard compile target is 
specified.  To actually build fuse-j-hadoopfs, the user has to specify the 
following command line: ant compile -Dbuild-fuse-j-hadoopfs=1.

We still have the following todo's remaining:

- Pick up some environment variables dynamically, so the user doesn't have to 
set them in our build.properties file (these do not affect the no-op build).
- Change the 'hadoopfs_fuse_mount.sh' script to use the 'hadoop/bin' scripts, 
so it can automatically pick up hadoop-specific conf, jar and class files.

The above tarball (fuse-j-hadoopfs-03.tar.gz) consists of a directory that can 
be placed inside hadoop/src/contrib, please let us know if we should submit 
this as a patch-file instead, or if we need to make more changes...

-thanks



  was (Author: as106):
hi,
We re-submitted the fuse-j-hadoopfs package with the following changes (as 
suggested above):
- we are hosting a patched FUSE-J on a separate server.
- The fuse-j-hadoopfs build downloads this patched version at compile time.

We restructured the fuse-j-hadoopfs build to be a contrib, and have tested it 
with the Hadoop source-tree build.

The fuse-j-hadoopfs build is a no-op when a standard compile target is 
specified.
To actually build fuse-j-hadoopfs, the user has to specify the following 
command line: ant compile -Dbuild-fuse-j-hadoopfs=1.

We still have the following todo's remaining:

- Pick up some environment variables dynamically, so the user doesn't have to 
set them in our build.properties file (these do not affect the no-op build).
- Change the 'hadoopfs_fuse_mount.sh' script to use the 'hadoop/bin' scripts, 
so it can automatically pick up hadoop-specific conf, jar and class files.

The above tarball (fuse-j-hadoopfs-03.tar.gz) consists of a directory that can 
be placed inside hadoop/src/contrib, please let us know if we should submit 
this as a patch-file instead, or if we need to make more changes...

-thanks


  
 tool to mount dfs on linux
 --

 Key: HADOOP-4
 URL: https://issues.apache.org/jira/browse/HADOOP-4
 Project: Hadoop
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.5.0
 Environment: linux only
Reporter: John Xing
Assignee: Doug Cutting
 Attachments: fuse-hadoop-0.1.0_fuse-j.2.2.3_hadoop.0.5.0.tar.gz, 
 fuse-hadoop-0.1.0_fuse-j.2.4_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.1.tar.gz, 
 fuse-j-hadoopfs-03.tar.gz, fuse_dfs.c


 tool to mount dfs on linux

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Issue Comment Edited: (HADOOP-4) tool to mount dfs on linux

2007-12-12 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12551102
 ] 

as106 edited comment on HADOOP-4 at 12/12/07 11:58 AM:
---

hi,
We re-submitted the fuse-j-hadoopfs package with the following changes (as 
suggested above):
- we are hosting a patched FUSE-J on a separate server.
- the fuse-j-hadoopfs build downloads this patched version at compile time.

We restructured the fuse-j-hadoopfs build to be a contrib, and have tested it 
with the Hadoop source-tree build.

The fuse-j-hadoopfs build is a no-op when a standard compile target is 
specified.  To actually build fuse-j-hadoopfs, the user has to specify the 
following command line: ant compile -Dbuild-fuse-j-hadoopfs=1.

We still have the following todo's remaining:
- Pick up some environment variables dynamically, so the user doesn't have to 
set them in our build.properties file (these do not affect the no-op build).
- Change the 'hadoopfs_fuse_mount.sh' script to use the 'hadoop/bin' scripts, 
so it can automatically pick up hadoop-specific conf, jar and class files.

The above tarball (fuse-j-hadoopfs-03.tar.gz) consists of a directory that can 
be placed inside hadoop/src/contrib, please let us know if we should submit 
this as a patch-file instead, or if we need to make more changes...

-thanks



  was (Author: as106):
hi,
We re-submitted the fuse-j-hadoopfs package with the following changes (as 
suggested above):
-we are hosting a patched FUSE-J on a separate server.
-the fuse-j-hadoopfs build downloads this patched version at compile time.

We restructured the fuse-j-hadoopfs build to be a contrib, and have tested it 
with the Hadoop source-tree build.

The fuse-j-hadoopfs build is a no-op when a standard compile target is 
specified.  To actually build fuse-j-hadoopfs, the user has to specify the 
following command line: ant compile -Dbuild-fuse-j-hadoopfs=1.

We still have the following todo's remaining:

-Pick up some environment variables dynamically, so the user doesn't have to 
set them in our build.properties file (these do not affect the no-op build).
-Change the 'hadoopfs_fuse_mount.sh' script to use the 'hadoop/bin' scripts, so 
it can automatically pick up hadoop-specific conf, jar and class files.

The above tarball (fuse-j-hadoopfs-03.tar.gz) consists of a directory that can 
be placed inside hadoop/src/contrib, please let us know if we should submit 
this as a patch-file instead, or if we need to make more changes...

-thanks


  
 tool to mount dfs on linux
 --

 Key: HADOOP-4
 URL: https://issues.apache.org/jira/browse/HADOOP-4
 Project: Hadoop
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.5.0
 Environment: linux only
Reporter: John Xing
Assignee: Doug Cutting
 Attachments: fuse-hadoop-0.1.0_fuse-j.2.2.3_hadoop.0.5.0.tar.gz, 
 fuse-hadoop-0.1.0_fuse-j.2.4_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.1.tar.gz, 
 fuse-j-hadoopfs-03.tar.gz, fuse_dfs.c


 tool to mount dfs on linux

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Issue Comment Edited: (HADOOP-4) tool to mount dfs on linux

2007-12-12 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12551102
 ] 

as106 edited comment on HADOOP-4 at 12/12/07 11:55 AM:
---

hi,
We re-submitted the fuse-j-hadoopfs package with the following changes (as 
suggested above):
- we are hosting a patched FUSE-J on a separate server.
- The fuse-j-hadoopfs build downloads this patched version at compile time.

We restructured the fuse-j-hadoopfs build to be a contrib, and have tested it 
with the Hadoop source-tree build.

The fuse-j-hadoopfs build is a no-op when a standard compile target is 
specified.
To actually build fuse-j-hadoopfs, the user has to specify the following 
command line: ant compile -Dbuild-fuse-j-hadoopfs=1.

We still have the following todo's remaining:

- Pick up some environment variables dynamically, so the user doesn't have to 
set them in our build.properties file (these do not affect the no-op build).
- Change the 'hadoopfs_fuse_mount.sh' script to use the 'hadoop/bin' scripts, 
so it can automatically pick up hadoop-specific conf, jar and class files.

The above tarball (fuse-j-hadoopfs-03.tar.gz) consists of a directory that can 
be placed inside hadoop/src/contrib, please let us know if we should submit 
this as a patch-file instead, or if we need to make more changes...

-thanks



  was (Author: as106):
hi,
We re-submitted the fuse-j-hadoopfs package with the following changes (as 
suggested above):
* we are hosting a patched FUSE-J on a separate server.
* The fuse-j-hadoopfs build downloads this patched version at compile time.

We restructured the fuse-j-hadoopfs build to be a contrib, and have tested it 
with the Hadoop source-tree build.

The fuse-j-hadoopfs build is a no-op when a standard compile target is 
specified.
To actually build fuse-j-hadoopfs, the user has to specify the following 
command line: ant compile -Dbuild-fuse-j-hadoopfs=1.

We still have the following todo's remaining:

* Pick up some environment variables dynamically, so the user doesn't have to 
set them in our build.properties file (these do not affect the no-op build).
* Change the 'hadoopfs_fuse_mount.sh' script to use the 'hadoop/bin' scripts, 
so it can automatically pick up hadoop-specific conf, jar and class files.

The above tarball (fuse-j-hadoopfs-03.tar.gz) consists of a directory that can 
be placed inside hadoop/src/contrib, please let us know if we should submit 
this as a patch-file instead, or if we need to make more changes...

-thanks


  
 tool to mount dfs on linux
 --

 Key: HADOOP-4
 URL: https://issues.apache.org/jira/browse/HADOOP-4
 Project: Hadoop
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.5.0
 Environment: linux only
Reporter: John Xing
Assignee: Doug Cutting
 Attachments: fuse-hadoop-0.1.0_fuse-j.2.2.3_hadoop.0.5.0.tar.gz, 
 fuse-hadoop-0.1.0_fuse-j.2.4_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.1.tar.gz, 
 fuse-j-hadoopfs-03.tar.gz, fuse_dfs.c


 tool to mount dfs on linux

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-4) tool to mount dfs on linux

2007-12-12 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12551102
 ] 

Anurag Sharma commented on HADOOP-4:


hi,
We re-submitted the fuse-j-hadoopfs package with the following changes (as 
suggested above):
* we are hosting a patched FUSE-J on a separate server.
* The fuse-j-hadoopfs build downloads this patched version at compile time.

We restructured the fuse-j-hadoopfs build to be a contrib, and have tested it 
with the Hadoop source-tree build.

The fuse-j-hadoopfs build is a no-op when a standard compile target is 
specified.
To actually build fuse-j-hadoopfs, the user has to specify the following 
command line: ant compile -Dbuild-fuse-j-hadoopfs=1.

We still have the following todo's remaining:

* Pick up some environment variables dynamically, so the user doesn't have to 
set them in our build.properties file (these do not affect the no-op build).
* Change the 'hadoopfs_fuse_mount.sh' script to use the 'hadoop/bin' scripts, 
so it can automatically pick up hadoop-specific conf, jar and class files.

The above tarball (fuse-j-hadoopfs-03.tar.gz) consists of a directory that can 
be placed inside hadoop/src/contrib, please let us know if we should submit 
this as a patch-file instead, or if we need to make more changes...

-thanks



 tool to mount dfs on linux
 --

 Key: HADOOP-4
 URL: https://issues.apache.org/jira/browse/HADOOP-4
 Project: Hadoop
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.5.0
 Environment: linux only
Reporter: John Xing
Assignee: Doug Cutting
 Attachments: fuse-hadoop-0.1.0_fuse-j.2.2.3_hadoop.0.5.0.tar.gz, 
 fuse-hadoop-0.1.0_fuse-j.2.4_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.1.tar.gz, 
 fuse-j-hadoopfs-03.tar.gz, fuse_dfs.c


 tool to mount dfs on linux

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-4) tool to mount dfs on linux

2007-12-05 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12548789
 ] 

Anurag Sharma commented on HADOOP-4:


hi Doug,

Thanks for pointing out this issue.  I will remove the FUSE-J patch and try one 
of the other routes you suggested (to have a patched FUSE-J available), and 
will come back with a resolution on this very soon.

-anurag

 tool to mount dfs on linux
 --

 Key: HADOOP-4
 URL: https://issues.apache.org/jira/browse/HADOOP-4
 Project: Hadoop
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.5.0
 Environment: linux only
Reporter: John Xing
Assignee: Doug Cutting
 Attachments: fuse-hadoop-0.1.0_fuse-j.2.2.3_hadoop.0.5.0.tar.gz, 
 fuse-hadoop-0.1.0_fuse-j.2.4_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.1.tar.gz, 
 fuse-j-hadoopfs-0.1.zip, fuse-j-patch.zip


 tool to mount dfs on linux

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-4) tool to mount dfs on linux

2007-12-05 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12548806
 ] 

Anurag Sharma commented on HADOOP-4:


Hi Doug,

I went through the license for Fuse-J and it is distributed under the LGPL; do you 
think that would allow the Fuse-J patches to be hosted on Apache?

(In the latter case we would still modify the submission above to be a contrib 
module that downloads Fuse-J, applies our patch, and builds it, except we won't 
have to find a place to host the patch).

-thanks
-anurag


 tool to mount dfs on linux
 --

 Key: HADOOP-4
 URL: https://issues.apache.org/jira/browse/HADOOP-4
 Project: Hadoop
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.5.0
 Environment: linux only
Reporter: John Xing
Assignee: Doug Cutting
 Attachments: fuse-hadoop-0.1.0_fuse-j.2.2.3_hadoop.0.5.0.tar.gz, 
 fuse-hadoop-0.1.0_fuse-j.2.4_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.1.tar.gz, 
 fuse-j-hadoopfs-0.1.zip, fuse-j-patch.zip


 tool to mount dfs on linux

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-496) Expose HDFS as a WebDAV store

2007-12-05 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated HADOOP-496:
-

Attachment: (was: fuse-j-hadoopfs-0.zip)

 Expose HDFS as a WebDAV store
 -

 Key: HADOOP-496
 URL: https://issues.apache.org/jira/browse/HADOOP-496
 Project: Hadoop
  Issue Type: New Feature
  Components: dfs
Reporter: Michel Tourn
Assignee: Enis Soztutar
 Attachments: hadoop-496-3.patch, hadoop-496-4.patch, 
 hadoop-496-spool-cleanup.patch, hadoop-webdav.zip, jetty-slide.xml, 
 lib.webdav.tar.gz, screenshot-1.jpg, slideusers.properties, 
 webdav_wip1.patch, webdav_wip2.patch


 WebDAV stands for Distributed Authoring and Versioning. It is a set of 
 extensions to the HTTP protocol that lets users collaboratively edit and 
 manage files on a remote web server. It is often considered as a replacement 
 for NFS or SAMBA
 HDFS (Hadoop Distributed File System) needs a friendly file system interface. 
 DFSShell commands are unfamiliar. Instead it is more convenient for Hadoop 
 users to use a mountable network drive. A friendly interface to HDFS will be 
 used both for casual browsing of data and for bulk import/export. 
 The FUSE provider for HDFS is already available ( 
 http://issues.apache.org/jira/browse/HADOOP-17 )  but it had scalability 
 problems. WebDAV is a popular alternative. 
 The typical licensing terms for WebDAV tools are also attractive: 
 GPL for Linux client tools that Hadoop would not redistribute anyway. 
 More importantly, Apache Project/Apache license for Java tools and for server 
 components. 
 This allows for a tighter integration with the HDFS code base.
 There are some interesting Apache projects that support WebDAV.
 But these are probably too heavyweight for the needs of Hadoop:
 Tomcat servlet: 
 http://tomcat.apache.org/tomcat-4.1-doc/catalina/docs/api/org/apache/catalina/servlets/WebdavServlet.html
 Slide:  http://jakarta.apache.org/slide/
 Being HTTP-based and backwards-compatible with Web Browser clients, the 
 WebDAV server protocol could even be piggy-backed on the existing Web UI 
 ports of the Hadoop name node / data nodes. WebDAV can be hosted as (Jetty) 
 servlets. This minimizes server code bloat and this avoids additional network 
 traffic between HDFS and the WebDAV server.
 General Clients (read-only):
 Any web browser
 Linux Clients: 
 Mountable GPL davfs2  http://dav.sourceforge.net/
 FTP-like  GPL Cadaver http://www.webdav.org/cadaver/
 Server Protocol compliance tests:
 http://www.webdav.org/neon/litmus/  
 A goal is for Hadoop HDFS to pass this test (minus support for Properties)
 Pure Java clients:
 DAV Explorer Apache lic. http://www.ics.uci.edu/~webdav/  
 WebDAV also makes it convenient to add advanced features in an incremental 
 fashion:
 file locking, access control lists, hard links, symbolic links.
 New WebDAV standards get accepted and more or less featured WebDAV clients 
 exist.
 core  http://www.webdav.org/specs/rfc2518.html
 ACLs  http://www.webdav.org/specs/rfc3744.html
 redirects soft links http://greenbytes.de/tech/webdav/rfc4437.html
 BIND hard links http://www.webdav.org/bind/
 quota http://tools.ietf.org/html/rfc4331

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-4) tool to mount dfs on linux

2007-12-05 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12548852
 ] 

Anurag Sharma commented on HADOOP-4:


hi Doug. ok :-), we will follow one of the alternate options you suggested, hosting 
either the patch or the jar file ourselves, and fix the fuse-j-hadoop package build 
to work with it. Will re-submit our changes soon.
-thanks,
-anurag

 tool to mount dfs on linux
 --

 Key: HADOOP-4
 URL: https://issues.apache.org/jira/browse/HADOOP-4
 Project: Hadoop
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.5.0
 Environment: linux only
Reporter: John Xing
Assignee: Doug Cutting
 Attachments: fuse-hadoop-0.1.0_fuse-j.2.2.3_hadoop.0.5.0.tar.gz, 
 fuse-hadoop-0.1.0_fuse-j.2.4_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.1.tar.gz, 
 fuse-j-hadoopfs-0.1.zip, fuse-j-patch.zip


 tool to mount dfs on linux

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-4) tool to mount dfs on linux

2007-12-05 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated HADOOP-4:
---

Attachment: (was: fuse-j-patch.zip)

 tool to mount dfs on linux
 --

 Key: HADOOP-4
 URL: https://issues.apache.org/jira/browse/HADOOP-4
 Project: Hadoop
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.5.0
 Environment: linux only
Reporter: John Xing
Assignee: Doug Cutting
 Attachments: fuse-hadoop-0.1.0_fuse-j.2.2.3_hadoop.0.5.0.tar.gz, 
 fuse-hadoop-0.1.0_fuse-j.2.4_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.1.tar.gz, 
 fuse-j-hadoopfs-0.1.zip


 tool to mount dfs on linux

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-496) Expose HDFS as a WebDAV store

2007-12-05 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated HADOOP-496:
-

Attachment: (was: fuse-j-patch.zip)

 Expose HDFS as a WebDAV store
 -

 Key: HADOOP-496
 URL: https://issues.apache.org/jira/browse/HADOOP-496
 Project: Hadoop
  Issue Type: New Feature
  Components: dfs
Reporter: Michel Tourn
Assignee: Enis Soztutar
 Attachments: hadoop-496-3.patch, hadoop-496-4.patch, 
 hadoop-496-spool-cleanup.patch, hadoop-webdav.zip, jetty-slide.xml, 
 lib.webdav.tar.gz, screenshot-1.jpg, slideusers.properties, 
 webdav_wip1.patch, webdav_wip2.patch


 WebDAV stands for Distributed Authoring and Versioning. It is a set of 
 extensions to the HTTP protocol that lets users collaboratively edit and 
 manage files on a remote web server. It is often considered as a replacement 
 for NFS or SAMBA
 HDFS (Hadoop Distributed File System) needs a friendly file system interface. 
 DFSShell commands are unfamiliar. Instead it is more convenient for Hadoop 
 users to use a mountable network drive. A friendly interface to HDFS will be 
 used both for casual browsing of data and for bulk import/export. 
 The FUSE provider for HDFS is already available ( 
 http://issues.apache.org/jira/browse/HADOOP-17 )  but it had scalability 
 problems. WebDAV is a popular alternative. 
 The typical licensing terms for WebDAV tools are also attractive: 
 GPL for Linux client tools that Hadoop would not redistribute anyway. 
 More importantly, Apache Project/Apache license for Java tools and for server 
 components. 
 This allows for a tighter integration with the HDFS code base.
 There are some interesting Apache projects that support WebDAV.
 But these are probably too heavyweight for the needs of Hadoop:
 Tomcat servlet: 
 http://tomcat.apache.org/tomcat-4.1-doc/catalina/docs/api/org/apache/catalina/servlets/WebdavServlet.html
 Slide:  http://jakarta.apache.org/slide/
 Being HTTP-based and backwards-compatible with Web Browser clients, the 
 WebDAV server protocol could even be piggy-backed on the existing Web UI 
 ports of the Hadoop name node / data nodes. WebDAV can be hosted as (Jetty) 
 servlets. This minimizes server code bloat and this avoids additional network 
 traffic between HDFS and the WebDAV server.
 General Clients (read-only):
 Any web browser
 Linux Clients: 
 Mountable GPL davfs2  http://dav.sourceforge.net/
 FTP-like  GPL Cadaver http://www.webdav.org/cadaver/
 Server Protocol compliance tests:
 http://www.webdav.org/neon/litmus/  
 A goal is for Hadoop HDFS to pass this test (minus support for Properties)
 Pure Java clients:
 DAV Explorer Apache lic. http://www.ics.uci.edu/~webdav/  
 WebDAV also makes it convenient to add advanced features in an incremental 
 fashion:
 file locking, access control lists, hard links, symbolic links.
 New WebDAV standards get accepted and more or less featured WebDAV clients 
 exist.
 core  http://www.webdav.org/specs/rfc2518.html
 ACLs  http://www.webdav.org/specs/rfc3744.html
 redirects soft links http://greenbytes.de/tech/webdav/rfc4437.html
 BIND hard links http://www.webdav.org/bind/
 quota http://tools.ietf.org/html/rfc4331

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-496) Expose HDFS as a WebDAV store

2007-12-03 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12547938
 ] 

Anurag Sharma commented on HADOOP-496:
--

hi Owen, ok, will move fuse-j-hadoop to the HADOOP-4 jira.  Thanks for the info.

 Expose HDFS as a WebDAV store
 -

 Key: HADOOP-496
 URL: https://issues.apache.org/jira/browse/HADOOP-496
 Project: Hadoop
  Issue Type: New Feature
  Components: dfs
Reporter: Michel Tourn
Assignee: Enis Soztutar
 Attachments: fuse-j-hadoopfs-0.zip, fuse-j-patch.zip, 
 hadoop-496-3.patch, hadoop-496-4.patch, hadoop-496-spool-cleanup.patch, 
 hadoop-webdav.zip, jetty-slide.xml, lib.webdav.tar.gz, screenshot-1.jpg, 
 slideusers.properties, webdav_wip1.patch, webdav_wip2.patch


 WebDAV stands for Distributed Authoring and Versioning. It is a set of 
 extensions to the HTTP protocol that lets users collaboratively edit and 
 manage files on a remote web server. It is often considered as a replacement 
 for NFS or SAMBA
 HDFS (Hadoop Distributed File System) needs a friendly file system interface. 
 DFSShell commands are unfamiliar. Instead it is more convenient for Hadoop 
 users to use a mountable network drive. A friendly interface to HDFS will be 
 used both for casual browsing of data and for bulk import/export. 
 The FUSE provider for HDFS is already available ( 
 http://issues.apache.org/jira/browse/HADOOP-17 )  but it had scalability 
 problems. WebDAV is a popular alternative. 
 The typical licensing terms for WebDAV tools are also attractive: 
 GPL for Linux client tools that Hadoop would not redistribute anyway. 
 More importantly, Apache Project/Apache license for Java tools and for server 
 components. 
 This allows for a tighter integration with the HDFS code base.
 There are some interesting Apache projects that support WebDAV.
 But these are probably too heavyweight for the needs of Hadoop:
 Tomcat servlet: 
 http://tomcat.apache.org/tomcat-4.1-doc/catalina/docs/api/org/apache/catalina/servlets/WebdavServlet.html
 Slide:  http://jakarta.apache.org/slide/
 Being HTTP-based and backwards-compatible with Web Browser clients, the 
 WebDAV server protocol could even be piggy-backed on the existing Web UI 
 ports of the Hadoop name node / data nodes. WebDAV can be hosted as (Jetty) 
 servlets. This minimizes server code bloat and this avoids additional network 
 traffic between HDFS and the WebDAV server.
 General Clients (read-only):
 Any web browser
 Linux Clients: 
 Mountable GPL davfs2  http://dav.sourceforge.net/
 FTP-like  GPL Cadaver http://www.webdav.org/cadaver/
 Server Protocol compliance tests:
 http://www.webdav.org/neon/litmus/  
 A goal is for Hadoop HDFS to pass this test (minus support for Properties)
 Pure Java clients:
 DAV Explorer Apache lic. http://www.ics.uci.edu/~webdav/  
 WebDAV also makes it convenient to add advanced features in an incremental 
 fashion:
 file locking, access control lists, hard links, symbolic links.
 New WebDAV standards get accepted and more or less featured WebDAV clients 
 exist.
 core  http://www.webdav.org/specs/rfc2518.html
 ACLs  http://www.webdav.org/specs/rfc3744.html
 redirects soft links http://greenbytes.de/tech/webdav/rfc4437.html
 BIND hard links http://www.webdav.org/bind/
 quota http://tools.ietf.org/html/rfc4331

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-4) tool to mount dfs on linux

2007-12-03 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12547964
 ] 

Anurag Sharma commented on HADOOP-4:


Hello,
We posted this on HADOOP-496 and were pointed to this jira entry as a better 
place to post this patch.  Pasting our original submission message below...

--
Hi,

We revived the old fuse-hadoop project (a FUSE-J based plugin that lets you 
mount Hadoop-FS). We have tried this on a small cluster (10 nodes) and basic 
functionality works (mount, ls, cat, cp, mkdir, rm, mv, ...).
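
For context, such a plugin ultimately delegates each file operation to org.apache.hadoop.fs.FileSystem. A minimal sketch of the equivalent calls (written against today's FileSystem API for brevity, not the plugin's actual code):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HadoopFsOps {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);           // the mounted file system

    fs.mkdirs(new Path("/demo"));                   // mkdir
    FSDataOutputStream out = fs.create(new Path("/demo/a.txt"));
    out.writeBytes("hello\n");                      // write-once create
    out.close();
    for (FileStatus st : fs.listStatus(new Path("/demo"))) {
      System.out.println(st.getPath());             // ls
    }
    fs.rename(new Path("/demo/a.txt"), new Path("/demo/b.txt")); // mv
    fs.delete(new Path("/demo/b.txt"), false);      // rm
  }
}
{code}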

The main changes include some bug fixes to FUSE-J and changing the previous 
fuse-hadoop implementation to enforce write-once. We found the FUSE framework 
to be straightforward and simple.

We have seen several mentions of using FUSE with Hadoop, so if there is a 
better place to post these files, please let me know.

Attachments to follow...

-thanks
--

Attachments include the following:
  * fuse-j-hadoop package
  * fuse-j patch.


 tool to mount dfs on linux
 --

 Key: HADOOP-4
 URL: https://issues.apache.org/jira/browse/HADOOP-4
 Project: Hadoop
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.5.0
 Environment: linux only
Reporter: John Xing
Assignee: Doug Cutting
 Attachments: fuse-hadoop-0.1.0_fuse-j.2.2.3_hadoop.0.5.0.tar.gz, 
 fuse-hadoop-0.1.0_fuse-j.2.4_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.1.tar.gz, 
 fuse-j-hadoopfs-0.1.zip, fuse-j-patch.zip


 tool to mount dfs on linux

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-4) tool to mount dfs on linux

2007-12-03 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-4?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated HADOOP-4:
---

Attachment: fuse-j-patch.zip
fuse-j-hadoopfs-0.1.zip

 tool to mount dfs on linux
 --

 Key: HADOOP-4
 URL: https://issues.apache.org/jira/browse/HADOOP-4
 Project: Hadoop
  Issue Type: Improvement
  Components: fs
Affects Versions: 0.5.0
 Environment: linux only
Reporter: John Xing
Assignee: Doug Cutting
 Attachments: fuse-hadoop-0.1.0_fuse-j.2.2.3_hadoop.0.5.0.tar.gz, 
 fuse-hadoop-0.1.0_fuse-j.2.4_hadoop.0.5.0.tar.gz, fuse-hadoop-0.1.1.tar.gz, 
 fuse-j-hadoopfs-0.1.zip, fuse-j-patch.zip


 tool to mount dfs on linux

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-496) Expose HDFS as a WebDAV store

2007-11-30 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated HADOOP-496:
-

Attachment: fuse-j-patch.zip
fuse-j-hadoopfs-0.zip

hi,

Attachments include the following:
- fuse-j-hadoop package
- fuse-j patch.

-thanks

 Expose HDFS as a WebDAV store
 -

 Key: HADOOP-496
 URL: https://issues.apache.org/jira/browse/HADOOP-496
 Project: Hadoop
  Issue Type: New Feature
  Components: dfs
Reporter: Michel Tourn
Assignee: Enis Soztutar
 Attachments: fuse-j-hadoopfs-0.zip, fuse-j-patch.zip, 
 hadoop-496-3.patch, hadoop-496-4.patch, hadoop-496-spool-cleanup.patch, 
 hadoop-webdav.zip, jetty-slide.xml, lib.webdav.tar.gz, screenshot-1.jpg, 
 slideusers.properties, webdav_wip1.patch, webdav_wip2.patch


 WebDAV stands for Distributed Authoring and Versioning. It is a set of 
 extensions to the HTTP protocol that lets users collaboratively edit and 
 manage files on a remote web server. It is often considered as a replacement 
 for NFS or SAMBA
 HDFS (Hadoop Distributed File System) needs a friendly file system interface. 
 DFSShell commands are unfamiliar. Instead it is more convenient for Hadoop 
 users to use a mountable network drive. A friendly interface to HDFS will be 
 used both for casual browsing of data and for bulk import/export. 
 The FUSE provider for HDFS is already available ( 
 http://issues.apache.org/jira/browse/HADOOP-17 )  but it had scalability 
 problems. WebDAV is a popular alternative. 
 The typical licensing terms for WebDAV tools are also attractive: 
 GPL for Linux client tools that Hadoop would not redistribute anyway. 
 More importantly, Apache Project/Apache license for Java tools and for server 
 components. 
 This allows for a tighter integration with the HDFS code base.
 There are some interesting Apache projects that support WebDAV.
 But these are probably too heavyweight for the needs of Hadoop:
 Tomcat servlet: 
 http://tomcat.apache.org/tomcat-4.1-doc/catalina/docs/api/org/apache/catalina/servlets/WebdavServlet.html
 Slide:  http://jakarta.apache.org/slide/
 Being HTTP-based and backwards-compatible with Web Browser clients, the 
 WebDAV server protocol could even be piggy-backed on the existing Web UI 
 ports of the Hadoop name node / data nodes. WebDAV can be hosted as (Jetty) 
 servlets. This minimizes server code bloat and this avoids additional network 
 traffic between HDFS and the WebDAV server.
 General Clients (read-only):
 Any web browser
 Linux Clients: 
 Mountable GPL davfs2  http://dav.sourceforge.net/
 FTP-like  GPL Cadaver http://www.webdav.org/cadaver/
 Server Protocol compliance tests:
 http://www.webdav.org/neon/litmus/  
 A goal is for Hadoop HDFS to pass this test (minus support for Properties)
 Pure Java clients:
 DAV Explorer Apache lic. http://www.ics.uci.edu/~webdav/  
 WebDAV also makes it convenient to add advanced features in an incremental 
 fashion:
 file locking, access control lists, hard links, symbolic links.
 New WebDAV standards get accepted and more or less featured WebDAV clients 
 exist.
 core  http://www.webdav.org/specs/rfc2518.html
 ACLs  http://www.webdav.org/specs/rfc3744.html
 redirects soft links http://greenbytes.de/tech/webdav/rfc4437.html
 BIND hard links http://www.webdav.org/bind/
 quota http://tools.ietf.org/html/rfc4331

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-496) Expose HDFS as a WebDAV store

2007-11-30 Thread Anurag Sharma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12547345
 ] 

Anurag Sharma commented on HADOOP-496:
--

hi,

We revived the old fuse-hadoop project (a FUSE-J based plugin that lets you 
mount Hadoop-FS).  We have tried this on a small cluster (10 nodes) and basic 
functionality works (mount, ls, cat, cp, mkdir, rm, mv, ...).

The main changes include some bug fixes to FUSE-J and changing the previous 
fuse-hadoop implementation to enforce write-once.  We found the FUSE framework 
to be straightforward and simple.

We have seen several mentions of using FUSE with Hadoop, so if there is a 
better place to post these files, please let me know.

Attachments to follow...

-thanks




 Expose HDFS as a WebDAV store
 -

 Key: HADOOP-496
 URL: https://issues.apache.org/jira/browse/HADOOP-496
 Project: Hadoop
  Issue Type: New Feature
  Components: dfs
Reporter: Michel Tourn
Assignee: Enis Soztutar
 Attachments: hadoop-496-3.patch, hadoop-496-4.patch, 
 hadoop-496-spool-cleanup.patch, hadoop-webdav.zip, jetty-slide.xml, 
 lib.webdav.tar.gz, screenshot-1.jpg, slideusers.properties, 
 webdav_wip1.patch, webdav_wip2.patch


 WebDAV stands for Distributed Authoring and Versioning. It is a set of 
 extensions to the HTTP protocol that lets users collaboratively edit and 
 manage files on a remote web server. It is often considered as a replacement 
 for NFS or SAMBA
 HDFS (Hadoop Distributed File System) needs a friendly file system interface. 
 DFSShell commands are unfamiliar. Instead it is more convenient for Hadoop 
 users to use a mountable network drive. A friendly interface to HDFS will be 
 used both for casual browsing of data and for bulk import/export. 
 The FUSE provider for HDFS is already available ( 
 http://issues.apache.org/jira/browse/HADOOP-17 )  but it had scalability 
 problems. WebDAV is a popular alternative. 
 The typical licensing terms for WebDAV tools are also attractive: 
 GPL for Linux client tools that Hadoop would not redistribute anyway. 
 More importantly, Apache Project/Apache license for Java tools and for server 
 components. 
 This allows for a tighter integration with the HDFS code base.
 There are some interesting Apache projects that support WebDAV.
 But these are probably too heavyweight for the needs of Hadoop:
 Tomcat servlet: 
 http://tomcat.apache.org/tomcat-4.1-doc/catalina/docs/api/org/apache/catalina/servlets/WebdavServlet.html
 Slide:  http://jakarta.apache.org/slide/
 Being HTTP-based and backwards-compatible with Web Browser clients, the 
 WebDAV server protocol could even be piggy-backed on the existing Web UI 
 ports of the Hadoop name node / data nodes. WebDAV can be hosted as (Jetty) 
 servlets. This minimizes server code bloat and this avoids additional network 
 traffic between HDFS and the WebDAV server.
 General Clients (read-only):
 Any web browser
 Linux Clients: 
 Mountable GPL davfs2  http://dav.sourceforge.net/
 FTP-like  GPL Cadaver http://www.webdav.org/cadaver/
 Server Protocol compliance tests:
 http://www.webdav.org/neon/litmus/  
 A goal is for Hadoop HDFS to pass this test (minus support for Properties)
 Pure Java clients:
 DAV Explorer Apache lic. http://www.ics.uci.edu/~webdav/  
 WebDAV also makes it convenient to add advanced features in an incremental 
 fashion:
 file locking, access control lists, hard links, symbolic links.
 New WebDAV standards get accepted and more or less featured WebDAV clients 
 exist.
 core  http://www.webdav.org/specs/rfc2518.html
 ACLs  http://www.webdav.org/specs/rfc3744.html
 redirects soft links http://greenbytes.de/tech/webdav/rfc4437.html
 BIND hard links http://www.webdav.org/bind/
 quota http://tools.ietf.org/html/rfc4331

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.