Create collections in SolrCloud "Could not get shard id for core"

2016-04-22 Thread Pablo Anzorena
Hey,
I'm using solr 5.2.1 and yesterday I started migrating to SolrCloud, so I
might be quite noobish with it. The thing is that I could create 3
collections without much trouble, but when trying to create another
collection it throws a timeout, and the solr log says "Could
not get shard id for core my_core_shard1_replica1".

My SolrCloud has three servers running zookeeper and two of them running
solr in cloud mode.
The command for creating the collection is:

http://localhost:8983/solr/admin/collections?action=CREATE&name=my_core&numShards=1&replicationFactor=1&collection.configName=my_core_config&createNodeSet=server1:8983_solr

Thanks.


Re: Solr5.5:DocValues/CopyField does not work with Atomic updates

2016-04-22 Thread Erick Erickson
I think I just added the right person, let us know if you don't have
access and/or if you need access to the LUCENE JIRA.

Erick

On Fri, Apr 22, 2016 at 5:17 PM, Karthik Ramachandran
 wrote:
> Eric
>   I have created a JIRA id (kramachand...@commvault.com).  Once I get
> access I will create the JIRA and submit the patch.
>
> With Thanks & Regards
> Karthik Ramachandran
> CommVault
> Direct: (732) 923-2197
> Please don't print this e-mail unless you really need to
>
>
>
> On 4/22/16, 8:04 PM, "Erick Erickson"  wrote:
>
>>Karthik:
>>
>>The Apache mailing list is pretty aggressive about removing
>>attachments. Could you possibly open a JIRA and attach the file as a
>>patch? If at all possible a patch file with just the diffs would be
>>best.
>>
>>One problem is that it'll be a two-step process. The JIRAs have been
>>being hit with spam, so you'll have to request access once you create
>>a JIRA ID (this list would be fine).
>>
>>Best,
>>Erick
>>
>>On Thu, Apr 21, 2016 at 9:09 PM, Karthik Ramachandran
>> wrote:
>>> We feel the issue is in RealTimeGetComponent.getInputDocument(SolrCore
>>>core,
>>> BytesRef idBytes) where solr calls getNonStoredDVs and adds the fields
>>>to the
>>> original document without excluding the copyFields.
>>>
>>>
>>>
>>> We made changes to send the filteredList to
>>>searcher.decorateDocValueFields
>>> and it started working.
>>>
>>>
>>>
>>> Attached is the modified file.
>>>
>>>
>>>
>>> With Thanks & Regards
>>> Karthik Ramachandran
>>> CommVault
>>> Please don't print this e-mail unless you really need to
>>>
>>>
>>>
>>> -Original Message-
>>> From: Karthik Ramachandran [mailto:mrk...@gmail.com]
>>> Sent: Friday, April 22, 2016 12:08 AM
>>> To: solr-user@lucene.apache.org
>>> Subject: Re: Solr5.5:DocValues/CopyField does not work with Atomic
>>>updates
>>>
>>>
>>>
>>> We are trying to update Field A.
>>>
>>>
>>>
>>>
>>>
>>> -Karthik
>>>
>>>
>>>
>>> On Thu, Apr 21, 2016 at 10:36 PM, John Bickerstaff
>>>>>
 wrote:
>>>
>>>
>>>
 Which field do you try to atomically update?  A or B or some other?
>>>
 On Apr 21, 2016 8:29 PM, "Tirthankar Chatterjee" <
>>>
 tchatter...@commvault.com>
>>>
 wrote:
>>>

>>>
 > Hi,
>>>
 > Here is the scenario for SOLR5.5:
>>>
 >
>>>
 > FieldA type= stored=true indexed=true
>>>
 >
>>>
 > FieldB type= stored=false indexed=true docValue=true
>>>
 > usedocvalueasstored=false
>>>
 >
>>>
 > FieldA copyTo FieldB
>>>
 >
>>>
 > Try an Atomic update and we are getting this error:
>>>
 >
>>>
 > possible analysis error: DocValuesField "mtmround" appears more than
>>>
 > once in this document (only one value is allowed per field)
>>>
 >
>>>
 > How do we resolve this.
>>>
 >
>>>
 >
>>>
 >
>>>
>>
>


Re: Replicas for same shard not in sync

2016-04-22 Thread Erick Erickson
Slow down a bit ;)...

First, just to cover the bases, you have done a commit, right? The
index generation on the UI screen is a bit misleading as replicas in
SolrCloud don't necessarily have the same generation, that's normal.
The "master/slave" bits are cruft from the older non-cloud days.

So I'm having a bit of trouble understanding what the problem is. On
the one hand you say "The shard has the same number of docs on each
host", yet "customers searching in one specific collection reported
seeing varying results with the same search".

OK, what does that mean? Results in different order? Some docs just
not there? What I would do is run the queries at the Solr nodes (after
committing and when there was no indexing going on) with
distrib=false set. Fire this at _each_ replica for _each_ shard. If
you see varying numbers of hits on the _same_ shard, that's one
problem.
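For example, a sketch (hosts and replica core names are placeholders for
your own setup):

  curl "http://host1:8983/solr/mycoll_shard1_replica1/select?q=*:*&rows=0&distrib=false"
  curl "http://host2:8983/solr/mycoll_shard1_replica2/select?q=*:*&rows=0&distrib=false"

If two replicas of the same shard report different numFound values, those
replicas have diverged.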

Do note that the ordering can, in some edge cases, be different so you
can't assume that just because a doc appears on one shard and doesn't
on another it's really not there, it's possible it was sorted a bit
lower in the list. It's rare that a customer would notice this though.

Best,
Erick

On Fri, Apr 22, 2016 at 11:48 AM, tedsolr  wrote:
> I have a SolrCloud setup with v5.2.1 - just two hosts. A ZK ensemble of 3
> hosts. Just today, customers searching in one specific collection reported
> seeing varying results with the same search. I could confirm this by looking
> at the logs - same search with different hits depending on the Solr host. In the admin
> console I see that the Replication section on one node is highlighted -
> drawing your attention to a version and generation difference between
> listing of "master" and "slave".
>
> The shard has the same number of docs on each host, they are just at
> different generations. What's the proper way to re-sync? Should I restart
> the host with the out of sync collection? Or click the "optimize" button for
> the one shard? Or reload the collection? Do I need to delete the replica and
> build a new one?
>
> Thanks!
>
>
>


Re: Solr5.5:DocValues/CopyField does not work with Atomic updates

2016-04-22 Thread Karthik Ramachandran
Eric
  I have created a JIRA id (kramachand...@commvault.com).  Once I get
access I will create the JIRA and submit the patch.

With Thanks & Regards
Karthik Ramachandran
CommVault
Direct: (732) 923-2197
Please don't print this e-mail unless you really need to



On 4/22/16, 8:04 PM, "Erick Erickson"  wrote:

>Karthik:
>
>The Apache mailing list is pretty aggressive about removing
>attachments. Could you possibly open a JIRA and attach the file as a
>patch? If at all possible a patch file with just the diffs would be
>best.
>
>One problem is that it'll be a two-step process. The JIRAs have been
>being hit with spam, so you'll have to request access once you create
>a JIRA ID (this list would be fine).
>
>Best,
>Erick
>
>On Thu, Apr 21, 2016 at 9:09 PM, Karthik Ramachandran
> wrote:
>> We feel the issue is in RealTimeGetComponent.getInputDocument(SolrCore
>>core,
>> BytesRef idBytes) where solr calls getNonStoredDVs and adds the fields
>>to the
>> original document without excluding the copyFields.
>>
>>
>>
>> We made changes to send the filteredList to
>>searcher.decorateDocValueFields
>> and it started working.
>>
>>
>>
>> Attached is the modified file.
>>
>>
>>
>> With Thanks & Regards
>> Karthik Ramachandran
>> CommVault
>> Please don't print this e-mail unless you really need to
>>
>>
>>
>> -Original Message-
>> From: Karthik Ramachandran [mailto:mrk...@gmail.com]
>> Sent: Friday, April 22, 2016 12:08 AM
>> To: solr-user@lucene.apache.org
>> Subject: Re: Solr5.5:DocValues/CopyField does not work with Atomic
>>updates
>>
>>
>>
>> We are trying to update Field A.
>>
>>
>>
>>
>>
>> -Karthik
>>
>>
>>
>> On Thu, Apr 21, 2016 at 10:36 PM, John Bickerstaff
>>>
>>> wrote:
>>
>>
>>
>>> Which field do you try to atomically update?  A or B or some other?
>>
>>> On Apr 21, 2016 8:29 PM, "Tirthankar Chatterjee" <
>>
>>> tchatter...@commvault.com>
>>
>>> wrote:
>>
>>>
>>
>>> > Hi,
>>
>>> > Here is the scenario for SOLR5.5:
>>
>>> >
>>
>>> > FieldA type= stored=true indexed=true
>>
>>> >
>>
>>> > FieldB type= stored=false indexed=true docValue=true
>>
>>> > usedocvalueasstored=false
>>
>>> >
>>
>>> > FieldA copyTo FieldB
>>
>>> >
>>
>>> > Try an Atomic update and we are getting this error:
>>
>>> >
>>
>>> > possible analysis error: DocValuesField "mtmround" appears more than
>>
>>> > once in this document (only one value is allowed per field)
>>
>>> >
>>
>>> > How do we resolve this.
>>
>>> >
>>
>>> >
>>
>>> >
>>
>






Re: Where to set Shards.tolerant to true ?

2016-04-22 Thread Erick Erickson
I'm confused. Are you sharding or not? Sharding is used when your
index is too big to fit on one Solr, so your docs go to separate Solr
nodes. That is, if shard1 contains the doc with id=12, shard2 will NOT
have that doc.

If you're not sharding (i.e. each slave has all the docs in your
collection), then the shards parameter is totally unnecessary, just
put the slaves behind a load balancer as you've indicated and you're
done.

If you _are_ sharding, then I _strongly_ recommend you go to SolrCloud
where you don't have to pay attention to these kinds of details.
Otherwise you're trying to re-invent the wheel as the fault tolerance
& etc is what SolrCloud is _about_.

Best,
Erick



On Fri, Apr 22, 2016 at 11:21 AM, sangeetha.subraman...@gtnexus.com
 wrote:
> Hey guys,
>
> I am trying to implement Distributed search with Master Slave server. Search 
> requests goes to Slave Servers. I am planning to have a load balancer before 
> the Slave servers. So here is the custom search handler which is defined.
>
> 
>  
> *:*
>  host address of the slaves
>  
>
>
> I believe if more than one slave server is provided in the shards
> parameter, it will not be fault tolerant. So in that case I came across
> the shards.tolerant=true parameter.
> But I am not sure where we can define this:
> https://cwiki.apache.org/confluence/display/solr/Read+and+Write+Side+Fault+Tolerance
> Can we set this up with the Solr master/slave architecture?
> Could someone please tell me if this is possible to set up at the Solr
> server level?
>
> Thanks
> Sangeetha
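For reference, shards.tolerant is an ordinary request parameter, so outside
SolrCloud it can be sent with each query or baked into the handler defaults
next to the shards list. A sketch (handler name and hosts are placeholders,
not the poster's actual config):

  <requestHandler name="/distsearch" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="shards">slave1:8983/solr/core1,slave2:8983/solr/core1</str>
      <bool name="shards.tolerant">true</bool>
    </lst>
  </requestHandler>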


Re: ConcurrentUpdateSolrClient Invalid version (expected 2, but 60) or the data in not in 'javabin' format

2016-04-22 Thread Doug Turnbull
Joe, this might be _version_ as in Solr's optimistic concurrency, used in
atomic updates, etc.

http://yonik.com/solr/optimistic-concurrency/
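A minimal sketch of what that looks like (the core name and version value
here are made up; an update whose _version_ doesn't match the stored
document fails with a 409 conflict):

  curl -X POST 'http://localhost:8983/solr/lots/update?commit=true' \
    -H 'Content-Type: application/json' \
    -d '[{"id":"doc1","price":{"set":10},"_version_":1632740120218574848}]'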

On Fri, Apr 22, 2016 at 5:24 PM Joe Lawson <
jlaw...@opensourceconnections.com> wrote:

> I'm updating from a basic Solr Client to the ConcurrentUpdateSolrClient and
> I'm hitting a really strange error. I cannot share the code but the snippet
> is like:
>
> try (ConcurrentUpdateSolrClient solrUpdateClient =
> >         new ConcurrentUpdateSolrClient("http://localhost:8983/solr", 1000, 1)) {
> >     String _core = "lots";
> >     List<SolrInputDocument> batch = docs.subList(batch_start, batch_end);
> >     response = solrUpdateClient.add(_core, batch);
> >     solrUpdateClient.commit(_core);
> >     ...
> > }
>
>
>
> Once the commit is called I get the following error:
>
> 17:17:22.585 [concurrentUpdateScheduler-1-thread-1-processing-http://
> //localhost:8983//solr]
> >> WARN  o.a.s.c.s.i.ConcurrentUpdateSolrClient - Failed to parse error
> >> response from http://localhost:8983/solr due to:
> >> java.lang.RuntimeException: Invalid version (expected 2, but 60) or the
> >> data in not in 'javabin' format
> >
> > 17:17:22.588 [concurrentUpdateScheduler-1-thread-1-processing-http://
> //localhost:8983//solr]
> >> ERROR o.a.s.c.s.i.ConcurrentUpdateSolrClient - error
> >
> > org.apache.solr.common.SolrException: Not Found
> >
> >
> >>
> >>
> >> request: http://localhost:8983/solr/update?wt=javabin&version=2
> >
> > at
> >>
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:290)
> >> [solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 -
> >> nknize - 2016-04-01 14:41:50]
> >
> > at
> >>
> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:161)
> >> [solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 -
> >> nknize - 2016-04-01 14:41:50]
> >
> > at
> >>
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> >> [solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 -
> >> nknize - 2016-04-01 14:41:50]
> >
> > at
> >>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >> ~[na:1.8.0_92]
> >
> > at
> >>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >> ~[na:1.8.0_92]
> >
> > at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_92]
> >
> >
> Any help or suggestions are appreciated.
>
> Cheers,
>
> Joe Lawson
>


Re: Solr5.5:DocValues/CopyField does not work with Atomic updates

2016-04-22 Thread Erick Erickson
Karthik:

The Apache mailing list is pretty aggressive about removing
attachments. Could you possibly open a JIRA and attach the file as a
patch? If at all possible a patch file with just the diffs would be
best.

One problem is that it'll be a two-step process. The JIRAs have been
being hit with spam, so you'll have to request access once you create
a JIRA ID (this list would be fine).

Best,
Erick

On Thu, Apr 21, 2016 at 9:09 PM, Karthik Ramachandran
 wrote:
> We feel the issue is in RealTimeGetComponent.getInputDocument(SolrCore core,
> BytesRef idBytes) where solr calls getNonStoredDVs and adds the fields to the
> original document without excluding the copyFields.
>
>
>
> We made changes to send the filteredList to searcher.decorateDocValueFields
> and it started working.
>
>
>
> Attached is the modified file.
>
>
>
> With Thanks & Regards
> Karthik Ramachandran
> CommVault
> Please don't print this e-mail unless you really need to
>
>
>
> -Original Message-
> From: Karthik Ramachandran [mailto:mrk...@gmail.com]
> Sent: Friday, April 22, 2016 12:08 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr5.5:DocValues/CopyField does not work with Atomic updates
>
>
>
> We are trying to update Field A.
>
>
>
>
>
> -Karthik
>
>
>
> On Thu, Apr 21, 2016 at 10:36 PM, John Bickerstaff 
>> wrote:
>
>
>
>> Which field do you try to atomically update?  A or B or some other?
>
>> On Apr 21, 2016 8:29 PM, "Tirthankar Chatterjee" <
>
>> tchatter...@commvault.com>
>
>> wrote:
>
>>
>
>> > Hi,
>
>> > Here is the scenario for SOLR5.5:
>
>> >
>
>> > FieldA type= stored=true indexed=true
>
>> >
>
>> > FieldB type= stored=false indexed=true docValue=true
>
>> > usedocvalueasstored=false
>
>> >
>
>> > FieldA copyTo FieldB
>
>> >
>
>> > Try an Atomic update and we are getting this error:
>
>> >
>
>> > possible analysis error: DocValuesField "mtmround" appears more than
>
>> > once in this document (only one value is allowed per field)
>
>> >
>
>> > How do we resolve this.
>
>> >
>
>> >
>
>> >
>
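A rough sketch of the change Karthik describes, not the actual patch (the
real diff lives in the JIRA; method names are from the Solr 5.5 codebase):
filter the set of non-stored docValues fields so that copyField targets are
skipped before the document is decorated.

  // inside RealTimeGetComponent.getInputDocument(SolrCore core, BytesRef idBytes), roughly:
  IndexSchema schema = core.getLatestSchema();
  Set<String> nonStoredDVs = searcher.getNonStoredDVs(false);
  Set<String> filteredList = new HashSet<>();
  for (String fname : nonStoredDVs) {
    // skip copyField targets so the atomic update doesn't re-add them
    if (!schema.isCopyFieldTarget(schema.getField(fname))) {
      filteredList.add(fname);
    }
  }
  searcher.decorateDocValueFields(doc, docid, filteredList);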


ConcurrentUpdateSolrClient Invalid version (expected 2, but 60) or the data in not in 'javabin' format

2016-04-22 Thread Joe Lawson
I'm updating from a basic Solr Client to the ConcurrentUpdateSolrClient and
I'm hitting a really strange error. I cannot share the code but the snippet
is like:

try (ConcurrentUpdateSolrClient solrUpdateClient =
>         new ConcurrentUpdateSolrClient("http://localhost:8983/solr", 1000, 1)) {
>     String _core = "lots";
>     List<SolrInputDocument> batch = docs.subList(batch_start, batch_end);
>     response = solrUpdateClient.add(_core, batch);
>     solrUpdateClient.commit(_core);
>     ...
> }



Once the commit is called I get the following error:

17:17:22.585 
[concurrentUpdateScheduler-1-thread-1-processing-http:localhost:8983//solr]
>> WARN  o.a.s.c.s.i.ConcurrentUpdateSolrClient - Failed to parse error
>> response from http://localhost:8983/solr due to:
>> java.lang.RuntimeException: Invalid version (expected 2, but 60) or the
>> data in not in 'javabin' format
>
> 17:17:22.588 
> [concurrentUpdateScheduler-1-thread-1-processing-http:localhost:8983//solr]
>> ERROR o.a.s.c.s.i.ConcurrentUpdateSolrClient - error
>
> org.apache.solr.common.SolrException: Not Found
>
>
>>
>>
>> request: http://localhost:8983/solr/update?wt=javabin&version=2
>
> at
>> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:290)
>> [solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 -
>> nknize - 2016-04-01 14:41:50]
>
> at
>> org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:161)
>> [solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 -
>> nknize - 2016-04-01 14:41:50]
>
> at
>> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>> [solr-solrj-6.0.0.jar:6.0.0 48c80f91b8e5cd9b3a9b48e6184bd53e7619e7e3 -
>> nknize - 2016-04-01 14:41:50]
>
> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> ~[na:1.8.0_92]
>
> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> ~[na:1.8.0_92]
>
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_92]
>
>
Any help or suggestions are appreciated.

Cheers,

Joe Lawson


Replicas for same shard not in sync

2016-04-22 Thread tedsolr
I have a SolrCloud setup with v5.2.1 - just two hosts. A ZK ensemble of 3
hosts. Just today, customers searching in one specific collection reported
seeing varying results with the same search. I could confirm this by looking
at the logs - same search with different hits depending on the Solr host. In the admin
console I see that the Replication section on one node is highlighted -
drawing your attention to a version and generation difference between
listing of "master" and "slave".

The shard has the same number of docs on each host, they are just at
different generations. What's the proper way to re-sync? Should I restart
the host with the out of sync collection? Or click the "optimize" button for
the one shard? Or reload the collection? Do I need to delete the replica and
build a new one?

Thanks!





Re: Making managed schema unmutable correctly?

2016-04-22 Thread Boman
Solved it - had to make sure the default requestHandler was configured for
spellcheck.







Where to set Shards.tolerant to true ?

2016-04-22 Thread sangeetha.subraman...@gtnexus.com
Hey guys,

I am trying to implement Distributed search with Master Slave server. Search 
requests goes to Slave Servers. I am planning to have a load balancer before 
the Slave servers. So here is the custom search handler which is defined.


 
*:*
 host address of the slaves
 
   

I believe if more than one slave server is provided in the shards parameter,
it will not be fault tolerant. So in that case I came across the
shards.tolerant=true parameter.
But I am not sure where we can define this:
https://cwiki.apache.org/confluence/display/solr/Read+and+Write+Side+Fault+Tolerance
Can we set this up with the Solr master/slave architecture?
Could someone please tell me if this is possible to set up at the Solr
server level?

Thanks
Sangeetha


Re: Re[2]: Block Join faceting on intermediate levels with JSON Facet API (might be related to block join rollups & SOLR-8998)

2016-04-22 Thread Yonik Seeley
On Fri, Apr 22, 2016 at 12:26 PM, Alisa Z.  wrote:
>  Hi Yonik,
>
> Thanks a lot for your response.
>
> I have discussed this with Mikhail Khludnev already and tried this 
> suggestion. Here's what I've got:
>
>
>
> sentiment: positive
> author: Bob
> text: Great post about Solr
> 2.blog-posts.comments-id: 10735-23004   //this is a 
> new field, field name is different on each level for each type, values are 
> unique
> date: 2015-04-10T11:30:00Z
> path: 2.blog-posts.comments
> id: 10735-23004
> Query:
> curl http://localhost:8985/solr/solr_nesting_unique/query -d 
> 'q=path:2.blog-posts.comments&rows=0&
> json.facet={
>   filter_by_child_type :{
> type:query,
> q:"path:*comments*keywords",
> domain: { blockChildren : "path:2.blog-posts.comments" },
> facet:{
>   top_entity_text : {
> type: terms,
> field: text,
> limit: 10,
> sort: "counts_by_comments desc",
> facet: {
>counts_by_comments: "unique (2.blog-posts.comments-id )"   
>  // changed
>  }'


Something is wrong if you are getting 0 counts.
Let's try taking it piece-by-piece:

Step 1:  q=path:2.blog-posts.comments
This finds level 2 documents

Step 2:  domain: { blockChildren : "path:2.blog-posts.comments" }
This first maps to all of the children (level 3 and level 4)

Step 3:  q:"path:*comments*keywords"
This selects a subset of level3 and level4 documents with keywords
(Note, in the future this should be doable as an additional filter in
the domain spec, w/o an additional sub-facet level)

Step 4:
Facet on the text field of those level3 and level4 keyword docs. For
each bucket, also find the unique number of values in the
"2.blog-posts.comments-id" field on those documents.

"Without seeing what you indexed, my guess is that the issue is that
the "2.blog-posts.comments-id" field does not actually exist on those
level3 and level4 docs being faceted.  The JSON Facet API doesn't
propagate field values up/down the nested stack yet.  That's what
https://issues.apache.org/jira/browse/SOLR-8998 is mostly about.

-Yonik


>
> Response:
>
> "response":{"numFound":3,"start":0,"docs":[]
>   },
>   "facets":{
> "count":3,
> "filter_by_child_type":{
>   "count":9,
>   "top_entity_text":{
> "buckets":[{
> "val":"Elasticsearch",
> "count":2,
> "counts_by_comments":0},
>   {
> "val":"Solr",
> "count":5,
> "counts_by_comments":0},
>   {
> "val":"Solr 5.5",
> "count":1,
> "counts_by_comments":0},
>   {
> "val":"feature",
> "count":1,
> "counts_by_comments":0}]
>
> So unless I messed something up... or the field name does not look
> "canonical" (but it was fast to generate and it is accepted in a normal query
> http://localhost:8985/solr/solr_nesting_unique/query?q=2.blog-posts.body-id:* )
>
> So I think that it's just a JSON facet API limitation...
>
> Best,
> --Alisa
>
>
>Friday, April 22, 2016, 9:55 -04:00 from Yonik Seeley :
>>
>>Hi Alisa,
>>This was a bit too hard for me to grok on a first pass... then I saw
>>your related blog post which includes the actual sample data and makes
>>it more clear.
>>
>> More comments inline:
>>
>>On Wed, Apr 20, 2016 at 2:29 PM, Alisa Z. < prol...@mail.ru > wrote:
>>>  Hi all,
>>>
>>> I have been stretching some SOLR's capabilities for nested documents 
>>> handling and I've come up with the following issue...
>>>
>>> Let's say I have the following structure:
>>>
>>> {
>>> "blog-posts":{  //level 1
>>> "leaf-fields":[
>>> "date",
>>> "author"],
>>> "title":{   //level 2
>>> "leaf-fields":[ "text"],
>>> "keywords":{//level 3
>>> "leaf-fields":[
>>> "text",
>>> "type"]
>>> }
>>> },
>>> "body":{//level 2
>>> "leaf-fields":[ "text"],
>>> "keywords":{//level 3
>>> "leaf-fields":[
>>> "text",
>>> "type"]
>>> }
>>> },
>>> "comments":{//level 2
>>> "leaf-fields":[
>>> "date",
>>> "author",
>>> "text",
>>> "sentiment"
>>> ],
>>> "keywords":{//level 3
>>> "leaf-fields":[
>>> "text",
>>> "type"]
>>> },
>>> "replies":{ //level 3
>>> "leaf-fields":[
>>> "date",
>>> "author",
>>> "text",
>>> "sentiment"],
>>> "keywords":{//level 4
>>> "leaf-fields":[
>>> "text",
>>> 

Re[2]: Block Join faceting on intermediate levels with JSON Facet API (might be related to block join rollups & SOLR-8998)

2016-04-22 Thread Alisa Z .
 Hi Yonik, 

Thanks a lot for your response.  

I have discussed this with Mikhail Khludnev already and tried this suggestion. 
Here's what I've got:  



sentiment: positive
author: Bob
text: Great post about Solr
2.blog-posts.comments-id: 10735-23004       //this is a new 
field, field name is different on each level for each type, values are unique
date: 2015-04-10T11:30:00Z
path: 2.blog-posts.comments
id: 10735-23004
Query:
curl http://localhost:8985/solr/solr_nesting_unique/query -d 
'q=path:2.blog-posts.comments&rows=0&
json.facet={
  filter_by_child_type :{
    type:query,
    q:"path:*comments*keywords",
    domain: { blockChildren : "path:2.blog-posts.comments" },
    facet:{
  top_entity_text : {
    type: terms,
    field: text,
    limit: 10,
    sort: "counts_by_comments desc",
    facet: {
   counts_by_comments: "unique (2.blog-posts.comments-id )" 
   // changed
 }'


Response:

"response":{"numFound":3,"start":0,"docs":[]
  },
  "facets":{
    "count":3,
    "filter_by_child_type":{
  "count":9,
  "top_entity_text":{
    "buckets":[{
    "val":"Elasticsearch",
    "count":2,
    "counts_by_comments":0},
  {
    "val":"Solr",
    "count":5,
    "counts_by_comments":0},
  {
    "val":"Solr 5.5",
    "count":1,
    "counts_by_comments":0},
  {
    "val":"feature",
    "count":1,
    "counts_by_comments":0}]

So unless I messed something up... or the field name does not look "canonical"
(but it was fast to generate and it is accepted in a normal query
http://localhost:8985/solr/solr_nesting_unique/query?q=2.blog-posts.body-id:* )

So I think that it's just a JSON facet API limitation...  

Best,
--Alisa 


>Friday, April 22, 2016, 9:55 -04:00 from Yonik Seeley :
>
>Hi Alisa,
>This was a bit too hard for me to grok on a first pass... then I saw
>your related blog post which includes the actual sample data and makes
>it more clear.
>
> More comments inline:
>
>On Wed, Apr 20, 2016 at 2:29 PM, Alisa Z. < prol...@mail.ru > wrote:
>>  Hi all,
>>
>> I have been stretching some SOLR's capabilities for nested documents 
>> handling and I've come up with the following issue...
>>
>> Let's say I have the following structure:
>>
>> {
>> "blog-posts":{  //level 1
>> "leaf-fields":[
>> "date",
>> "author"],
>> "title":{   //level 2
>> "leaf-fields":[ "text"],
>> "keywords":{//level 3
>> "leaf-fields":[
>> "text",
>> "type"]
>> }
>> },
>> "body":{//level 2
>> "leaf-fields":[ "text"],
>> "keywords":{//level 3
>> "leaf-fields":[
>> "text",
>> "type"]
>> }
>> },
>> "comments":{//level 2
>> "leaf-fields":[
>> "date",
>> "author",
>> "text",
>> "sentiment"
>> ],
>> "keywords":{//level 3
>> "leaf-fields":[
>> "text",
>> "type"]
>> },
>> "replies":{ //level 3
>> "leaf-fields":[
>> "date",
>> "author",
>> "text",
>> "sentiment"],
>> "keywords":{//level 4
>> "leaf-fields":[
>> "text",
>> "type"]
>> }
>>
>>
>> And I want to know the distribution of all readers' keywords (levels 3 and 
>> 4) by comments (level 2).
>> In JSON Facet API I tried this:
>>
>> curl http://localhost:8983/solr/my_index/query -d 
>> 'q=path:2.blog-posts.comments&rows=0&
>> json.facet={
>>   filter_by_child_type :{
>> type:query,
>> q:"path:*comments*keywords",
>> domain: { blockChildren : "path:2.blog-posts.comments" },
>> facet:{
>>   top_keywords : {
>> type: terms,
>> field: text,
>> sort: "counts_by_comments desc",
>> facet: {
>>counts_by_comments: "unique(_root_)"// I suspect it should be
>> a different field, not _root_, but what would it be for an intermediate document?
>>  }'
>>
>> Which gives me the wrong results: it aggregates by posts, not by comments
>> (it's a toy data set, so I know that the correct answer for "Solr" is 3 when
>> faceted by comments)
>
>
>Yeah, this type of thing isn't currently directly supported, but
>SOLR-8998 should address that.
>You can currently hack around it (for simple counts) using unique(),
>as you've discovered, but you need a unique ID at the right level to
>get the right count.
>
>_root_ is unique for blog posts, hence that's why you get numbers of

Re: concat 2 fields

2016-04-22 Thread Reth RM
Have you added this new processor chain to the update handler that you are
using? As shown below:
  <str name="update.chain">myChain</str>

https://wiki.apache.org/solr/UpdateRequestProcessor#Selecting_the_UpdateChain_for_Your_Request
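For reference, a minimal sketch of how the pieces could fit together in
solrconfig.xml; the chain name, source/destination fields and delimiter are
assumptions, not the original poster's config:

  <updateRequestProcessorChain name="myChain">
    <processor class="solr.CloneFieldUpdateProcessorFactory">
      <arr name="source">
        <str>latitude</str>
        <str>longitude</str>
      </arr>
      <str name="dest">geo_location</str>
    </processor>
    <processor class="solr.ConcatFieldUpdateProcessorFactory">
      <str name="fieldName">geo_location</str>
      <str name="delimiter">,</str>
    </processor>
    <processor class="solr.LogUpdateProcessorFactory"/>
    <processor class="solr.RunUpdateProcessorFactory"/>
  </updateRequestProcessorChain>

  <requestHandler name="/update" class="solr.UpdateRequestHandler">
    <lst name="defaults">
      <str name="update.chain">myChain</str>
    </lst>
  </requestHandler>

The destination field (geo_location) still has to be defined in schema.xml,
or match a dynamicField.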



On Thu, Apr 21, 2016 at 2:59 PM, vrajesh  wrote:

> I am trying to concatenate two fields to use them as one field, following
>
> http://grokbase.com/t/lucene/solr-user/138vr75hvj/concat-2-fields-in-another-field
> ,
> but the solution given there did not work when I tried it. Please help
> me with it.
> I am trying to concat the latitude and longitude fields to make them a
> single unit using a processor chain, which I added to solrconfig.xml.
>
>  Some of my doubts are:
>  - should we define the destination field (geo_location) in schema.xml?
>
>  - i want to make this combined field (geo_location) a facet field, so i
> have to add   in
>
>  - is there any specific tag in which i should add the above processor to
> make it work?
>
>
>
>
>


Re: Solr Max Query length

2016-04-22 Thread Reth RM
I'm not sure, maybe this should work:
QueryResponse response = solr.query(q, METHOD.POST);

Let's wait for others' responses.
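Something like this, as a sketch with SolrJ 5.x (client construction and
the collection URL are placeholders):

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.SolrRequest.METHOD;
  import org.apache.solr.client.solrj.impl.HttpSolrClient;
  import org.apache.solr.client.solrj.response.QueryResponse;

  HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycollection");
  String longQuery = "...";  // the 8k+ query string
  SolrQuery q = new SolrQuery(longQuery);
  // POST carries the params in the request body, so Jetty's URL/header size limit no longer applies
  QueryResponse response = client.query(q, METHOD.POST);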




On Fri, Apr 22, 2016 at 8:51 PM, Kelly, Frank  wrote:

> I am using the SolrJ library - does it have a way to specify one variant
> (POST) over the other (GET)?
>
> -Frank
>
>
>
>
> On 4/22/16, 11:13 AM, "Reth RM"  wrote:
>
> >Are you using get instead of post?
> >
> >https://dzone.com/articles/solr-select-query-get-vs-post
> >
> >
> >
> >On Fri, Apr 22, 2016 at 8:12 PM, Kelly, Frank 
> >wrote:
> >
> >> I used SolrJ and wrote a test to confirm that the max query length
> >> supported by Solr (by default) was 8192 in Solr 5.3.1
> >> Based on the default Jetty settings
> >>
> >> jetty.xml: <Set name="requestHeaderSize"><Property name="solr.jetty.request.header.size" default="8192" /></Set>
> >>
> >>
> >> The test would not work however until I had used a max size of 4096 (so
> >> the query passes at 4095 and returns a RemoteSolrException at 4097).
> >>
> >>
> >> Is there another setting somewhere limiting the max query length?
> >>
> >>
> >> -Frank
> >>
> >> *Frank Kelly*
> >>
> >> Principal Software Engineer
> >>
> >> Predictive Analytics Team (SCBE/HAC/CDA)
> >>
> >>
> >> *HERE *
> >>
> >> 5 Wayside Rd, Burlington, MA 01803, USA
> >>
> >> *42° 29' 7" N 71° 11' 32" W*
> >>
> >>
> >>    
> >> 
> >>
> >>   
> >>
> >>
>
>


Re: Solr Max Query length

2016-04-22 Thread Kelly, Frank
I am using the SolrJ library - does it have a way to specify one variant
(POST) over the other (GET)?

-Frank




On 4/22/16, 11:13 AM, "Reth RM"  wrote:

>Are you using get instead of post?
>
>https://dzone.com/articles/solr-select-query-get-vs-post
>
>
>
>On Fri, Apr 22, 2016 at 8:12 PM, Kelly, Frank 
>wrote:
>
>> I used SolrJ and wrote a test to confirm that the max query length
>> supported by Solr (by default) was 8192 in Solr 5.3.1
>> Based on the default Jetty settings
>>
>> jetty.xml: <Set name="requestHeaderSize"><Property name="solr.jetty.request.header.size" default="8192" /></Set>
>>
>>
>> The test would not work however until I had used a max size of 4096 (so
>> the query passes at 4095 and returns a RemoteSolrException at 4097).
>>
>>
>> Is there another setting somewhere limiting the max query length?
>>
>>
>> -Frank
>>
>> *Frank Kelly*
>>
>> Principal Software Engineer
>>
>> Predictive Analytics Team (SCBE/HAC/CDA)
>>
>>
>> *HERE *
>>
>> 5 Wayside Rd, Burlington, MA 01803, USA
>>
>> *42° 29' 7" N 71° 11' 32² W*
>>
>>
>>    
>> 
>>
>>   
>>
>>



Re: Solr Max Query length

2016-04-22 Thread Reth RM
Are you using get instead of post?

https://dzone.com/articles/solr-select-query-get-vs-post



On Fri, Apr 22, 2016 at 8:12 PM, Kelly, Frank  wrote:

> I used SolrJ and wrote a test to confirm that the max query length
> supported by Solr (by default) was 8192 in Solr 5.3.1
> Based on the default Jetty settings
>
> jetty.xml: <Set name="requestHeaderSize"><Property name="solr.jetty.request.header.size" default="8192" /></Set>
>
>
> The test would not work however until I had used a max size of 4096 (so
> the query passes at 4095 and returns a RemoteSolrException at 4097).
>
>
> Is there another setting somewhere limiting the max query length?
>
>
> -Frank
>
> *Frank Kelly*
>
> Principal Software Engineer
>
> Predictive Analytics Team (SCBE/HAC/CDA)
>
>
> *HERE *
>
> 5 Wayside Rd, Burlington, MA 01803, USA
>
> *42° 29' 7" N 71° 11' 32” W*
>
>
>    
> 
>   
>
>


Solr Max Query length

2016-04-22 Thread Kelly, Frank
I used SolrJ and wrote a test to confirm that the max query length supported by 
Solr (by default) was 8192 in Solr 5.3.1
Based on the default Jetty settings


jetty.xml: <Set name="requestHeaderSize"><Property name="solr.jetty.request.header.size" default="8192" /></Set>


The test would not work however until I had used a max size of 4096 (so the 
query passes at 4095 and returns a RemoteSolrException at 4097).


Is there another setting somewhere limiting the max query length?


-Frank

Frank Kelly
Principal Software Engineer
Predictive Analytics Team (SCBE/HAC/CDA)






HERE
5 Wayside Rd, Burlington, MA 01803, USA
42° 29' 7" N 71° 11' 32" W








Re: Block Join faceting on intermediate levels with JSON Facet API (might be related to block join rollups & SOLR-8998)

2016-04-22 Thread Yonik Seeley
Hi Alisa,
This was a bit too hard for me to grok on a first pass... then I saw
your related blog post which includes the actual sample data and makes
it more clear.

 More comments inline:

On Wed, Apr 20, 2016 at 2:29 PM, Alisa Z.  wrote:
>  Hi all,
>
> I have been stretching some SOLR's capabilities for nested documents handling 
> and I've come up with the following issue...
>
> Let's say I have the following structure:
>
> {
> "blog-posts":{  //level 1
> "leaf-fields":[
> "date",
> "author"],
> "title":{   //level 2
> "leaf-fields":[ "text"],
> "keywords":{//level 3
> "leaf-fields":[
> "text",
> "type"]
> }
> },
> "body":{//level 2
> "leaf-fields":[ "text"],
> "keywords":{//level 3
> "leaf-fields":[
> "text",
> "type"]
> }
> },
> "comments":{//level 2
> "leaf-fields":[
> "date",
> "author",
> "text",
> "sentiment"
> ],
> "keywords":{//level 3
> "leaf-fields":[
> "text",
> "type"]
> },
> "replies":{ //level 3
> "leaf-fields":[
> "date",
> "author",
> "text",
> "sentiment"],
> "keywords":{//level 4
> "leaf-fields":[
> "text",
> "type"]
> }
>
>
> And I want to know the distribution of all readers' keywords (levels 3 and 4) 
> by comments (level 2).
> In JSON Facet API I tried this:
>
> curl http://localhost:8983/solr/my_index/query -d 
> 'q=path:2.blog-posts.comments&rows=0&
> json.facet={
>   filter_by_child_type :{
> type:query,
> q:"path:*comments*keywords",
> domain: { blockChildren : "path:2.blog-posts.comments" },
> facet:{
>   top_keywords : {
> type: terms,
> field: text,
> sort: "counts_by_comments desc",
> facet: {
>counts_by_comments: "unique(_root_)"// I suspect it should be
> a different field, not _root_, but what would it be for an intermediate document?
>  }'
>
> Which gives me the wrong results: it aggregates by posts, not by comments
> (it's a toy data set, so I know that the correct answer for "Solr" is 3 when
> faceted by comments)


Yeah, this type of thing isn't currently directly supported, but
SOLR-8998 should address that.
You can currently hack around it (for simple counts) using unique(),
as you've discovered, but you need a unique ID at the right level to
get the right count.

_root_ is unique for blog posts, hence that's why you get numbers of
posts (as opposed to numbers of level-2 comments).
You could add a "level2_comment_id" field to the level 2 comments and
its children, and then use unique() on that.
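In json.facet terms that hack would look something like this (a sketch;
level2_comment_id is the hypothetical field above, assumed present on each
comment and on all of its descendants):

  json.facet={
    filter_by_child_type:{
      type: query,
      q: "path:*comments*keywords",
      domain: { blockChildren: "path:2.blog-posts.comments" },
      facet:{
        top_keywords:{
          type: terms, field: text, sort: "counts_by_comments desc",
          facet:{ counts_by_comments: "unique(level2_comment_id)" }
        }}}}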

-Yonik


> {
> "response":{"numFound":3,"start":0,"docs":[]
>   },
>   "facets":{
> "count":3,
> "filter_by_child_type":{
>   "count":9,
>   "top_keywords":{
> "buckets":[{
> "val":"Elasticsearch",
> "count":2,
> "counts_by_comments":2},
>   {
> "val":"Solr",
> "count":5,
> "counts_by_comments":2},   //here the count by 
> "comments" should be 3
>   {
> "val":"Solr 5.5",
> "count":1,
> "counts_by_comments":1},
>   {
> "val":"feature",
> "count":1,
> "counts_by_comments":1}]
>
>
> Am I writing the query wrong?
>
>
> By the way, Block Join Faceting works fine for this:
> bjqfacet?q={!parent%20which=path:2.blog-posts.comments}path:*.comments*keywords&rows=0&facet=true&child.facet.field=text&wt=json&indent=true
>
> {
>   "response":{"numFound":3,"start":0,"docs":[]
>   },
>   "facet_counts":{
> "facet_queries":{},
> "facet_fields":{
>   "text":[
> "Elasticsearch",2,
> "Solr",3,  //correct result
> "Solr 5.5",1,
> "feature",1]},
> "facet_dates":{},
> "facet_ranges":{},
> "facet_intervals":{},
> "facet_heatmaps":{}}}
>
> But we've already discussed that it returns too much stuff: no way to put
> limits or order by counts :(  That's why I want to see whether it's possible
> to get the JSON Facet API straight.
>
> Thank you in advance!
>
> --
> Alisa Zhila


Re: How can I set the defaultOperator to be AND?

2016-04-22 Thread Bastien Latard - MDPI AG

Yes Jan, I'm using edismax.

This is (a part of) my requestHandler:


 
<lst name="defaults">
   false
   <str name="echoParams">explicit</str>
   <int name="rows">10</int>
   <str name="fl">title,abstract,authors,doi</str>
   <str name="defType">edismax</str>
   <str name="qf">title^1.0 author^1.0</str>
[...]

Is there anything I should do to improve/fix it?

Kind regards,
Bastien

On 22/04/2016 12:42, Jan Høydahl wrote:

Hi

Which query parser are you using? If using edismax you may be hitting a recent
bug concerning default operator and explicit boolean operators.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com


On 22 Apr 2016, at 11:26, Bastien Latard - MDPI AG
wrote:

Hi guys,

How can I set the defaultOperator to be AND?
If I add the following line to the schema.xml, even if I do a search 'title:"test" OR author:"me"',
it returns documents matching 'title:"test" AND author:"me"':

<solrQueryParser defaultOperator="AND"/>
solr version: 6.0

I know that I can overwrite the query with q.op, but this is not that 
convenient...
I would need to write a complex query for a simple search '(a:x AND b:y) OR c:z'

Kind regards,
Bastien Latard
Web engineer
--
MDPI AG
Postfach, CH-4005 Basel, Switzerland
Office: Klybeckstrasse 64, CH-4057
Tel. +41 61 683 77 35
Fax: +41 61 302 89 18
E-mail:
lat...@mdpi.com
http://www.mdpi.com/





Kind regards,
Bastien Latard
Web engineer
--
MDPI AG
Postfach, CH-4005 Basel, Switzerland
Office: Klybeckstrasse 64, CH-4057
Tel. +41 61 683 77 35
Fax: +41 61 302 89 18
E-mail:
lat...@mdpi.com
http://www.mdpi.com/



Re: set session variable in mysql importHandler

2016-04-22 Thread Zaccheo Bagnati
sessionVariables=group_concat_max_len=... in the connection URL works as
expected.
Thank you very much!
Bye
Zaccheo
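For anyone finding this later, a sketch of what that looks like in a
DataImportHandler data-config.xml (database name, credentials and the
length value are placeholders):

  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost:3306/mydb?sessionVariables=group_concat_max_len=1000000"
              user="solr" password="secret"/>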

On Thu, Apr 21, 2016 at 01:14 Alexandre Rafalovitch <
arafa...@gmail.com> wrote:

> The driver documentation talks about "sessionVariables" that might be
> possible to pass through the connection URL:
>
> https://dev.mysql.com/doc/connector-j/5.1/en/connector-j-reference-configuration-properties.html
>
> Alternatively, there might be a way to configure driver via JNDI and
> set some variables that way.
>
> I haven't tested either though.
>
> Regards,
>Alex.
> 
> Newsletter and resources for Solr beginners and intermediates:
> http://www.solr-start.com/
>
>
> On 20 April 2016 at 23:49, Shawn Heisey  wrote:
> > On 4/20/2016 6:01 AM, Zaccheo Bagnati wrote:
> >> I configured an ImportHandler on a MySQL table using jdbc driver. I'm
> >> wondering if it is possible to set a session variable in the mysql
> connection
> >> before executing queries. e. g. "SET SESSION group_concat_max_len =
> >> 100;"
> >
> > Normally the MySQL JDBC driver will not allow you to send more than one
> > SQL statement in a single request -- this is to prevent SQL injection
> > attacks.
> >
> > I think MySQL probably has a JDBC parameter that would allow multiple
> > statements per request, but a better option might be to put all the
> > statements you need in a stored procedure and call the procedure from
> > the import handler.  You'll need to consult MySQL support resources for
> > help with how to do this.
> >
> > Thanks,
> > Shawn
> >
>


Re: How can I set the defaultOperator to be AND?

2016-04-22 Thread Jan Høydahl
Hi

Which query parser are you using? If using edismax you may be hitting a recent
bug concerning default operator and explicit boolean operators.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 22 Apr 2016, at 11:26, Bastien Latard - MDPI AG
> wrote:
> 
> Hi guys,
> 
> How can I set the defaultOperator to be AND?
> If I add the following line to the schema.xml, even if I do a search
> 'title:"test" OR author:"me"', it returns documents matching
> 'title:"test" AND author:"me"':
>
> <solrQueryParser defaultOperator="AND"/>
> solr version: 6.0
> 
> I know that I can overwrite the query with q.op, but this is not that 
> convenient...
> I would need to write a complex query for a simple search '(a:x AND b:y) OR 
> c:z'
> 
> Kind regards,
> Bastien Latard
> Web engineer
> -- 
> MDPI AG
> Postfach, CH-4005 Basel, Switzerland
> Office: Klybeckstrasse 64, CH-4057
> Tel. +41 61 683 77 35
> Fax: +41 61 302 89 18
> E-mail:
> lat...@mdpi.com
> http://www.mdpi.com/
> 



How can I set the defaultOperator to be AND?

2016-04-22 Thread Bastien Latard - MDPI AG

Hi guys,

How can I set the defaultOperator to be AND?
If I add the following line to the schema.xml, even if I do a search
'title:"test" OR author:"me"', it returns documents matching
'title:"test" AND author:"me"':

<solrQueryParser defaultOperator="AND"/>
solr version: 6.0

I know that I can overwrite the query with q.op, but this is not that 
convenient...
I would need to write a complex query for a simple search '(a:x AND b:y) 
OR c:z'


Kind regards,
Bastien Latard
Web engineer
--
MDPI AG
Postfach, CH-4005 Basel, Switzerland
Office: Klybeckstrasse 64, CH-4057
Tel. +41 61 683 77 35
Fax: +41 61 302 89 18
E-mail:
lat...@mdpi.com
http://www.mdpi.com/
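For reference, q.op can also be set once as a handler default instead of on
every request; a sketch (the handler name is a placeholder):

  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="q.op">AND</str>
    </lst>
  </requestHandler>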



Solr - index polygons from csv

2016-04-22 Thread Jan Nekuda
Hello guys,
I use solr 6 for indexing data with points and polygons.

I have a question about indexing polygons from csv file. I have configured
type:


and field


I have tried to import this csv:
kod_adresa,nazev_ulice,cislo_orientacni,cislo_domovni,polygon_mapa,nazev_obec,Nazev_cast_obce,kod_ulice,kod_cast_obce,kod_obec,kod_momc,nazev_momc,Nazev,psc,nazev_vusc,kod_vusc,Nazev_okres,Kod_okres
9,,,4,"POLYGON ((-30 -10,-10 -20,-20 -40,-40 -40,-30
-10))",Vacov,Javorník,,57843,550621,,,Stachy,38473,Jihočeský
kraj,35,Prachatice,3306

and result is:

Posting files to [base] url http://localhost:8983/solr/ruian/update...
Entering auto mode. File endings considered are
xml,json,jsonl,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
POSTing file polygon.csv (text/csv) to [base]
SimplePostTool: WARNING: Solr returned an error #400 (Bad Request) for url:
http://localhost:8983/solr/ruian/update
SimplePostTool: WARNING: Response:
<response><lst name="responseHeader"><int name="status">400</int><int name="QTime">3</int></lst>
<lst name="error"><lst name="metadata"><str name="error-class">org.apache.solr.common.SolrException</str>
<str name="root-error-class">java.lang.UnsupportedOperationException</str></lst>
<str name="msg">Couldn't parse shape 'POLYGON ((-30 -10,-10 -20,-20 -40,-40
-40,-30 -10))' because: java.lang.UnsupportedOperationException:
Unsupported shape of this SpatialContext. Try JTS or Geo3D.</str>
<int name="code">400</int></lst></response>
SimplePostTool: WARNING: IOException while reading response:
java.io.IOException: Server returned HTTP response code: 400 for URL:
http://localhost:8983/solr/ruian/update
1 files indexed.
COMMITting Solr index changes to http://localhost:8983/solr/ruian/update...
Time spent: 0:00:00.036

Could someone give me any advice on how to solve this? Indexing points in
the same way works fine for me.
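The error message itself points at the likely fix: the default
SpatialContext cannot parse polygon WKT, so the field type needs JTS (and
the JTS jar has to be on Solr's classpath). A sketch for Solr 6, with the
type name and tuning values as assumptions rather than a known-good config:

  <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
             spatialContextFactory="org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory"
             geo="true" distErrPct="0.025" maxDistErr="0.001" distanceUnits="degrees"/>
  <field name="polygon_mapa" type="location_rpt" indexed="true" stored="true"/>

On the units question below: with geo=false, distances for d= are generally
in the same units as the point coordinates themselves.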

and one more question:
I have this field type:
 

if I use geo=false for solr.SpatialRecursivePrefixTreeFieldType and I use
this query:
http://localhost:8983/solr/ruian/select?indent=on&q=*:*&fq={!bbox%20sfield=mapa}&pt=-818044.37%20-1069122.12&d=20

for
getting all objects within a distance. But I actually don't know which units
the distance is in with these settings.



Thank you very much

Jan