Export request handler via SolrJ

2016-02-01 Thread deniz
I have been trying to export the whole result set via SolrJ, but so far
everything (including the tricks here:
http://stackoverflow.com/questions/33540577/how-can-use-the-export-request-handler-via-solrj)
has failed... With curl it works totally fine to query
server:port/solr/core/export, but I couldn't find a way to get the same
results via SolrJ...

Has anyone tried "exporting" via SolrJ, or doesn't it support it yet? 
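
For what it's worth, the workaround from that Stack Overflow link boils down
to something like the sketch below (assuming SolrJ 5.x): route a plain query
to /export and swap in a NoOpResponseParser so SolrJ hands back the raw JSON
stream instead of trying to parse it as javabin. Treat it as a sketch, not a
guarantee:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.impl.NoOpResponseParser;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.util.NamedList;

HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/core1");

SolrQuery query = new SolrQuery("*:*");
query.setRequestHandler("/export");   // routes the request to /export
query.set("sort", "id asc");          // /export requires a sort...
query.set("fl", "id");                // ...and fl limited to docValues fields

QueryRequest request = new QueryRequest(query);
// /export streams JSON rather than the default javabin format,
// so return the raw response body instead of parsing it.
NoOpResponseParser rawParser = new NoOpResponseParser();
rawParser.setWriterType("json");
request.setResponseParser(rawParser);

NamedList<Object> response = client.request(request);
String rawJson = (String) response.get("response");

client.close();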



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Export-request-handler-via-SolrJ-tp4254597.html
Sent from the Solr - User mailing list archive at Nabble.com.


How to get parent as well as children with one query?

2016-02-01 Thread Pranaya Behera

Hi,
I have my parent documents flagged with a boolean field named isParent.
The children have their own ids; they don't match the parent's.


I am searching with query.setQuery("level:0"). This gives me all the parent 
documents but not the associated children.
I have looked at the 
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-BlockJoinQueryParsers 
and 
https://cwiki.apache.org/confluence/display/solr/Transforming+Result+Documents#TransformingResultDocuments-[child]-ChildDocTransformerFactory 
But I couldn't fully understand how to achieve this. Could someone 
give one example of how to achieve it in both cases?
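
A minimal sketch of both approaches, assuming the parents are marked with
isParent:true and that parents and children were indexed together as one
nested block (childField and someValue are placeholders):

// 1) Block Join Parent query parser: match on child fields, return parents
query.setQuery("{!parent which=\"isParent:true\"}childField:someValue");

// 2) [child] doc transformer: query the parents, attach children to each hit
query.setQuery("isParent:true");
query.setFields("*", "[child parentFilter=isParent:true]");

Note that block joins only work if the children were sent to Solr inside the
parent document (addChildDocument in SolrJ); children indexed as independent
documents cannot be joined this way.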


--
Thanks & Regards
Pranaya Behera



Import data from one core to another

2016-02-01 Thread vidya
Hi 

 How to import data from one solr core to another using request handler and
data-config.xml?
In solrconfig.xml, I included this in the target collection:

<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">/root/Desktop/vidya/solr-data-config.xml</str>
  </lst>
</requestHandler>


And solr-data-config.xml is in the path mentioned in the request handler:

<dataConfig>
  <document>
    <entity name="sep" processor="SolrEntityProcessor"
            url="http://localhost:8983/solr/" query="*:*"/>
  </document>
</dataConfig>


Then I reloaded the collection and executed this query:
http://10.138.90.227:8983/solr/#/student_shard1_replica1/dataimport?command=full-import

But I got an error: "NO DATA IMPORT HANDLER IS DEFINED"

What is the query to fully import data from source to target?
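
Note that the URL above is the admin UI address (the "#" fragment form), not
the API endpoint itself; the direct full-import request would look like:

http://10.138.90.227:8983/solr/student_shard1_replica1/dataimport?command=full-import

and it will only work once the /dataimport handler is actually registered in
solrconfig.xml, as above.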

Thanks in advance




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Import-data-from-one-core-to-another-tp4254590.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr highlight

2016-02-01 Thread Zheng Lin Edwin Yeo
Do you have any settings for "df" and "hl.fl" under your /highlight request
handler in your solrconfig.xml? And which version of Solr are you using?
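
For reference, a minimal sketch of such a handler definition (the field names
are hypothetical):

<requestHandler name="/highlight" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="df">text</str>
    <str name="hl">true</str>
    <str name="hl.fl">docId,text</str>
  </lst>
</requestHandler>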

Regards,
Edwin

On 2 February 2016 at 12:54, Anil  wrote:

> HI,
>
> Any info on below ?
>
> Regards,
> Anil
>
> On 1 February 2016 at 20:53, Anil  wrote:
>
> > HI,
> >
> > We have five shards and 2 replicas and using collection aliases.
> >
> > I have set hl=true in my query to search against all fields of solr
> > document .
> >
> > I am searching a text (e.g. 2010-0561-T-0312) on all
> > fields (q=(2010-0561-T-0312)); highlights is empty. When I search
> > q=docId:(2010-0561-T-0312), I can see highlights.
> >
> > I am able to see the highlight section if I search any other text which is
> > not part of docId.
> >
> > Please help me in identifying the issue.
> >
> >
> > Thanks,
> > Anil
> >
> >
> >
>


Re: facet on min of multi valued field

2016-02-01 Thread Midas A
Erick,
Actually we are an eCommerce site and we have a master/child relationship in
our catalog.
We show only masters on our website. For example, we have iPhone as a
master product, and the different sellers selling iPhones through our site
are the child products. The price of the master is decided
by a ranking algorithm (RA), so the price that is shown on the website is
picked by the RA from these child products.
Our current RA is (min price + quantity of the product),
so the price of the (master) product changes dynamically in our system.

And in this scenario we want a price facet on the website.


Please give some insight to solve our problem.

~MA








On Tue, Feb 2, 2016 at 2:08 AM, Erick Erickson 
wrote:

> Frankly, I have no idea what this means. Only count a facet
> for a particular document for the minimum for a MV field? I.e.
> if the doc has values 1, 2, 3, 4 in a MV field, it should only be
> counted in the "1" bucket?
>
> The easiest would be to have a second field that contained the
> min value and facet on _that_. If you're using min as an exemplar
> of an arbitrary math function it's harder.
>
> See also: http://yonik.com/solr-facet-functions/
>
> Best,
> Erick
>
> On Mon, Feb 1, 2016 at 3:50 AM, Midas A  wrote:
> > Hi ,
> > we want facet query on min of multi valued field .
> >
> >
> > Regards,
> > Abhishek Tiwari
>


Re: Memory leak defect or misuse of SolrJ API?

2016-02-01 Thread Shawn Heisey
On 1/30/2016 6:15 AM, Steven White wrote:
> I'm getting memory leak in my code.  I narrowed the code to the following
> minimal to cause the leak.
>
> while (true) {
>   HttpSolrClient client = new HttpSolrClient("http://192.168.202.129:8983/solr/core1");
>   client.close();
> }
>
> Is this a defect or an issue in the way I'm using HttpSolrClient?

As mentioned by others, you are indeed using HttpSolrClient
incorrectly.  Even so, the fact that this code causes OOM does indicate
that *something* is leaking in your environment.
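
For reference, the intended pattern is to create a single client and reuse it
for the life of the application; a minimal sketch (assuming SolrJ 5.x):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

// Create the client once, e.g. at application startup, and share it.
HttpSolrClient client = new HttpSolrClient("http://192.168.202.129:8983/solr/core1");

// Reuse it for every request; it pools connections internally.
QueryResponse rsp = client.query(new SolrQuery("*:*"));

// Close it exactly once, at shutdown.
client.close();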

I could not reproduce the leak.  I tried the above code loop in some
test code (as a testcase in the branch_5x code) and could not get it to
OOM or show any evidence of a leak.  I let it run for ten minutes on a
512MB heap, which produced this jconsole memory graph:

https://www.dropbox.com/s/em392mx1gr6af67/client-loop-memory-graph.png?dl=0

That memory graph does not look like a program with a memory leak. 
Here's the test code that I was running -- specifically, the
testFullClient() method:

https://www.dropbox.com/s/dooy5bayv4hu6jk/TestHttpSolrClientMemoryLeak.java?dl=0

What versions of the dependent jars do you have in your project?  There
might be something leaking in a dependency rather than within SolrJ.

I also set up a test program using SolrJ 5.2.1, with updated
dependencies beyond the versions included with SolrJ, and could not get
that to show a leak either.

Thanks,
Shawn



Re: Solr highlight

2016-02-01 Thread Anil
HI,

Any info on below ?

Regards,
Anil

On 1 February 2016 at 20:53, Anil  wrote:

> HI,
>
> We have five shards and 2 replicas and using collection aliases.
>
> I have set hl=true in my query to search against all fields of solr
> document .
>
> I am searching a text (e.g. 2010-0561-T-0312) on all
> fields (q=(2010-0561-T-0312)); highlights is empty. When I search
> q=docId:(2010-0561-T-0312), I can see highlights.
>
> I am able to see the highlight section if I search any other text which is
> not part of docId.
>
> Please help me in identifying the issue.
>
>
> Thanks,
> Anil
>
>
>


Re: Data Import Handler takes different time on different machines

2016-02-01 Thread Erick Erickson
The first thing I'd be looking at is how the JDBC batch size compares
between the two machines.

AFAIK, Solr shouldn't notice the difference, and since a large majority
of the development is done on Linux-based systems, I'd be surprised if
this was worse than Windows, which would lead me to the one thing that
is definitely different between the two: Your JDBC driver and its settings.
At least that's where I'd look first.

If nothing immediate pops up, I'd probably write a small driver program to
just access the database from the two machines and process your 10M
records _without_ sending them to Solr and see what the comparison is.

You can also forgo DIH and do a simple import program via SolrJ. The
advantage here is that the comparison I'm talking about above is
really simple, just comment out the call that sends data to Solr. Here's an
example...

https://lucidworks.com/blog/2012/02/14/indexing-with-solrj/
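
A rough sketch of such a driver program (the JDBC URL, table, and field names
are placeholders), structured so the Solr call can be commented out for the
timing comparison:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class DbTimingDriver {
  public static void main(String[] args) throws Exception {
    HttpSolrClient solr = new HttpSolrClient("http://localhost:8983/solr/collection1");
    long start = System.currentTimeMillis();
    try (Connection conn = DriverManager.getConnection("jdbc:...", "user", "pass");
         Statement stmt = conn.createStatement()) {
      stmt.setFetchSize(1000); // the JDBC fetch/batch size being compared
      List<SolrInputDocument> batch = new ArrayList<>();
      try (ResultSet rs = stmt.executeQuery("SELECT id, name FROM my_table")) {
        while (rs.next()) {
          SolrInputDocument doc = new SolrInputDocument();
          doc.addField("id", rs.getString("id"));
          doc.addField("name", rs.getString("name"));
          batch.add(doc);
          if (batch.size() >= 1000) {
            solr.add(batch);   // comment this out to time the DB side alone
            batch.clear();
          }
        }
      }
      if (!batch.isEmpty()) solr.add(batch);
    }
    System.out.println("Elapsed: " + (System.currentTimeMillis() - start) + " ms");
    solr.close();
  }
}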

Best,
Erick

On Mon, Feb 1, 2016 at 7:34 PM, Troy Edwards  wrote:
> Sorry, I should explain further. The Data Import Handler had been running
> for a while retrieving only about 15 records from the database. Both in
> development env (windows) and linux machine it took about 3 mins.
>
> The query has been changed and we are now trying to retrieve about 10
> million records. We do expect the time to increase.
>
> With the new query the time taken on windows machine is consistently around
> 40 mins. While the DIH is running queries slow down i.e. a query that
> typically took 60 msec takes 100 msec.
>
> The time taken on linux machine is consistently around 2.5 hours. While the
> DIH is running queries take about 200  to 400 msec.
>
> Thanks!
>
> On Mon, Feb 1, 2016 at 8:45 PM, Erick Erickson 
> wrote:
>
>> What happens if you run just the SQL query from the
>> windows box and from the linux box? Is there any chance
>> that somehow the connection from the linux box is
>> just slower?
>>
>> Best,
>> Erick
>>
>> On Mon, Feb 1, 2016 at 6:36 PM, Alexandre Rafalovitch
>>  wrote:
>> > What are you importing from? Is the source and Solr machine collocated
>> > in the same fashion on dev and prod?
>> >
>> > Have you tried running this on a Linux dev machine? Perhaps your prod
>> > machine is loaded much more than a dev.
>> >
>> > Regards,
>> >Alex.
>> > 
>> > Newsletter and resources for Solr beginners and intermediates:
>> > http://www.solr-start.com/
>> >
>> >
>> > On 2 February 2016 at 13:21, Troy Edwards 
>> wrote:
>> >> We have a windows development machine on which the Data Import Handler
>> >> consistently takes about 40 mins to finish. Queries run fine. JVM
>> memory is
>> >> 2 GB per node.
>> >>
>> >> But on a linux machine it consistently takes about 2.5 hours. The
>> queries
>> >> also run slower. JVM memory here is also 2 GB per node.
>> >>
>> >> How should I go about analyzing and tuning the linux machine?
>> >>
>> >> Thanks
>>


Re: Data Import Handler takes different time on different machines

2016-02-01 Thread Troy Edwards
Sorry, I should explain further. The Data Import Handler had been running
for a while retrieving only about 15 records from the database. Both in
development env (windows) and linux machine it took about 3 mins.

The query has been changed and we are now trying to retrieve about 10
million records. We do expect the time to increase.

With the new query the time taken on windows machine is consistently around
40 mins. While the DIH is running queries slow down i.e. a query that
typically took 60 msec takes 100 msec.

The time taken on linux machine is consistently around 2.5 hours. While the
DIH is running queries take about 200  to 400 msec.

Thanks!

On Mon, Feb 1, 2016 at 8:45 PM, Erick Erickson 
wrote:

> What happens if you run just the SQL query from the
> windows box and from the linux box? Is there any chance
> that somehow the connection from the linux box is
> just slower?
>
> Best,
> Erick
>
> On Mon, Feb 1, 2016 at 6:36 PM, Alexandre Rafalovitch
>  wrote:
> > What are you importing from? Is the source and Solr machine collocated
> > in the same fashion on dev and prod?
> >
> > Have you tried running this on a Linux dev machine? Perhaps your prod
> > machine is loaded much more than a dev.
> >
> > Regards,
> >Alex.
> > 
> > Newsletter and resources for Solr beginners and intermediates:
> > http://www.solr-start.com/
> >
> >
> > On 2 February 2016 at 13:21, Troy Edwards 
> wrote:
> >> We have a windows development machine on which the Data Import Handler
> >> consistently takes about 40 mins to finish. Queries run fine. JVM
> memory is
> >> 2 GB per node.
> >>
> >> But on a linux machine it consistently takes about 2.5 hours. The
> queries
> >> also run slower. JVM memory here is also 2 GB per node.
> >>
> >> How should I go about analyzing and tuning the linux machine?
> >>
> >> Thanks
>


Re: Data Import Handler takes different time on different machines

2016-02-01 Thread Erick Erickson
What happens if you run just the SQL query from the
windows box and from the linux box? Is there any chance
that somehow the connection from the linux box is
just slower?

Best,
Erick

On Mon, Feb 1, 2016 at 6:36 PM, Alexandre Rafalovitch
 wrote:
> What are you importing from? Is the source and Solr machine collocated
> in the same fashion on dev and prod?
>
> Have you tried running this on a Linux dev machine? Perhaps your prod
> machine is loaded much more than a dev.
>
> Regards,
>Alex.
> 
> Newsletter and resources for Solr beginners and intermediates:
> http://www.solr-start.com/
>
>
> On 2 February 2016 at 13:21, Troy Edwards  wrote:
>> We have a windows development machine on which the Data Import Handler
>> consistently takes about 40 mins to finish. Queries run fine. JVM memory is
>> 2 GB per node.
>>
>> But on a linux machine it consistently takes about 2.5 hours. The queries
>> also run slower. JVM memory here is also 2 GB per node.
>>
>> How should I go about analyzing and tuning the linux machine?
>>
>> Thanks


Re: Data Import Handler takes different time on different machines

2016-02-01 Thread Alexandre Rafalovitch
What are you importing from? Is the source and Solr machine collocated
in the same fashion on dev and prod?

Have you tried running this on a Linux dev machine? Perhaps your prod
machine is loaded much more than a dev.

Regards,
   Alex.

Newsletter and resources for Solr beginners and intermediates:
http://www.solr-start.com/


On 2 February 2016 at 13:21, Troy Edwards  wrote:
> We have a windows development machine on which the Data Import Handler
> consistently takes about 40 mins to finish. Queries run fine. JVM memory is
> 2 GB per node.
>
> But on a linux machine it consistently takes about 2.5 hours. The queries
> also run slower. JVM memory here is also 2 GB per node.
>
> How should I go about analyzing and tuning the linux machine?
>
> Thanks


Data Import Handler takes different time on different machines

2016-02-01 Thread Troy Edwards
We have a windows development machine on which the Data Import Handler
consistently takes about 40 mins to finish. Queries run fine. JVM memory is
2 GB per node.

But on a linux machine it consistently takes about 2.5 hours. The queries
also run slower. JVM memory here is also 2 GB per node.

How should I go about analyzing and tuning the linux machine?

Thanks


Re: Solr segment merging in different replica

2016-02-01 Thread Zheng Lin Edwin Yeo
Hi Emir,

My setup is SolrCloud.

Also, would it be good to use a separate network interface to connect the
two nodes, apart from the interface that is used for search
traffic?

Regards,
Edwin


On 1 February 2016 at 19:01, Emir Arnautovic 
wrote:

> Hi Edwin,
> What is your setup - SolrCloud or Master-Slave? If it is SolrCloud, then
> under normal index updates, each core behaves as an independent index. In
> theory, if all changes happen at the same time on all nodes, merges will
> happen at the same time. But that is not realistic, and they are expected to
> happen at slightly different times.
> If you are running Master-Slave, then new segments will be copied from
> master to slave.
>
> Regards,
> Emir
>
> --
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> Solr & Elasticsearch Support * http://sematext.com/
>
>
>
>
> On 01.02.2016 11:56, Zheng Lin Edwin Yeo wrote:
>
>> Hi,
>>
>> I would like to check, during segment merging, how did the replical node
>> do
>> the merging?
>> Will it do the merging concurrently, or will the replica node delete the
>> old segment and replace the new one?
>>
>> Also, is it possible to separate the network interface for inter-node
>> communication from the network interface for update/search requests?
>> If so I could put two network cards in each machine and route the index
>> and
>> search traffic over the first interface and the traffic for the inter-node
>> communication (sending documents to replicas) over the second interface.
>>
>> I'm using Solr 5.4.0
>>
>> Regards,
>> Edwin
>>
>>
>


Re: KeepWord

2016-02-01 Thread Erik Hatcher
And if you want to have the “kept” words stored, consider the trick used in 
example/files for url/e-mail extraction mentioned here (note the related fix in 
the patch in the JIRA issue mentioned): 

   https://lucidworks.com/blog/2016/01/27/example_files/ 





> On Feb 1, 2016, at 3:23 PM, John Blythe  wrote:
> 
> i immediately realized after sending that i'd had stored="true" in the
> field's config and that it was storing the original data, not the processed
> data. silly me, thanks anyway!
> 
> -- 
> *John Blythe*
> Product Manager & Lead Developer
> 
> 251.605.3071 | j...@curvolabs.com
> www.curvolabs.com
> 
> 58 Adams Ave
> Evansville, IN 47713
> 
> On Mon, Feb 1, 2016 at 3:18 PM, John Blythe  wrote:
> 
>> hi all,
>> 
>> i'm having trouble with what would seem to be a pretty straightforward
>> filter.
>> 
>> i'm trying to 'tag' documents based off of a list of relevant words that a
>> description field may contain. if the data contains any of the words then
>> this field is populated with it and acts as a quick reference for
>> relevant/bucketed documents.
>> 
>> i receive no errors when reloading the core or indexing the data. each
>> document, however, has its description listed in this tag field *even if
>> none of the targeted words are in it.*
>> 
>> here's the analyzer, tokenizer, and filter:
>> 
>> <analyzer>
>>   <tokenizer class="..."/>
>>   <filter class="solr.KeepWordFilterFactory" words="..." ignoreCase="true"/>
>> </analyzer>
>> 
>> to add to the confusion, when i run test data through both of the
>> appropriate FieldName/FieldType in the Analysis UI I get the expected
>> results: the non-targeted words are left out of processing.
>> 
>> thanks for any info/help-
>> 



Re: URI is too long

2016-02-01 Thread Salman Ansari
That is what I have tried. I tried using POST with
application/x-www-form-urlencoded and I got the exception I mentioned. Is
there a way I can get around this exception?

Regards,
Salman
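
For reference, the equivalent POST from the command line (hypothetical host
and collection) would be:

curl -X POST -H "Content-Type: application/x-www-form-urlencoded" \
     --data "q=*:*&rows=2" \
     http://localhost:8983/solr/techproducts/select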

On Mon, Feb 1, 2016 at 6:08 PM, Susheel Kumar  wrote:

> Post is pretty much similar to GET. You can use any REST Client to try.
> Same select URL & pass below header and put the queries parameters into
> body
>
> POST:  http://localhost:8983/solr/techproducts/select
>
> Header
> ==
> Content-Type:application/x-www-form-urlencoded
>
> payload/body:
> ==
> q=*:*&rows=2
>
>
> Thanks,
> Susheel
>
> On Mon, Feb 1, 2016 at 2:38 AM, Salman Ansari 
> wrote:
>
> > Cool. I would give POST a try. Any samples of using Post while passing
> the
> > query string values (such as ORing between Solr field values) using
> > Solr.NET?
> >
> > Regards,
> > Salman
> >
> > On Sun, Jan 31, 2016 at 10:21 PM, Shawn Heisey 
> > wrote:
> >
> > > On 1/31/2016 7:20 AM, Salman Ansari wrote:
> > > > I am building a long query containing multiple ORs between query
> > terms. I
> > > > started to receive the following exception:
> > > >
> > > > The remote server returned an error: (414) Request-URI Too Long. Any
> > idea
> > > > what is the limit of the URL in Solr? Moreover, as a solution I was
> > > > thinking of chunking the query into multiple requests but I was
> > wondering
> > > > if anyone has a better approach?
> > >
> > > The default HTTP header size limit on most webservers and containers
> > > (including the Jetty that ships with Solr) is 8192 bytes.  A typical
> > > request like this will start with "GET " and end with " HTTP/1.1",
> which
> > > count against that 8192 bytes.  The max header size can be increased.
> > >
> > > If you place the parameters into a POST request instead of on the URL,
> > > then the default size limit of that POST request in Solr is 2MB.  This
> > > can also be increased.
> > >
> > > Thanks,
> > > Shawn
> > >
> > >
> >
>


Re: facet on min of multi valued field

2016-02-01 Thread Erick Erickson
Frankly, I have no idea what this means. Only count a facet
for a particular document for the minimum for a MV field? I.e.
if the doc has values 1, 2, 3, 4 in a MV field, it should only be
counted in the "1" bucket?

The easiest would be to have a second field that contained the
min value and facet on _that_. If you're using min as an exemplar
of an arbitrary math function it's harder.
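
For the e-commerce case described in this thread, that second-field idea
could look like this at index time (field names are hypothetical):

import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.apache.solr.common.SolrInputDocument;

SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "master-1");
List<Double> sellerPrices = Arrays.asList(199.0, 185.5, 210.0); // child prices
doc.addField("price", sellerPrices);                            // multi-valued
doc.addField("min_price", Collections.min(sellerPrices));       // single-valued

Then facet on min_price (facet.field=min_price, or a facet.range over it) as
usual.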

See also: http://yonik.com/solr-facet-functions/

Best,
Erick

On Mon, Feb 1, 2016 at 3:50 AM, Midas A  wrote:
> Hi ,
> we want facet query on min of multi valued field .
>
>
> Regards,
> Abhishek Tiwari


Re: Shard allocation across nodes

2016-02-01 Thread Erick Erickson
See the createNodeSet and node parameters for the Collections API CREATE and
ADDREPLICA commands, respectively. That's more of a manual process; there's
nothing OOB, but Jeff's suggestion is sound.
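
A sketch of both options (hosts and collection name hypothetical; the rule
form needs Solr 5.2+):

# Pin the initial replicas to an explicit set of nodes:
http://host1:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=2&replicationFactor=2&createNodeSet=host1:8983_solr,host2:8983_solr

# Or let rule-based placement keep replicas of any shard off the same host:
http://host1:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=2&replicationFactor=2&rule=shard:*,replica:<2,host:*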

Best,
Erick



On Mon, Feb 1, 2016 at 11:00 AM, Jeff Wartes  wrote:
>
> You could write your own snitch: 
> https://cwiki.apache.org/confluence/display/solr/Rule-based+Replica+Placement
>
> Or, it would be more annoying, but you can always add/remove replicas 
> manually and juggle things yourself after you create the initial collection.
>
>
>
>
> On 2/1/16, 8:42 AM, "Tom Evans"  wrote:
>
>>Hi all
>>
>>We're setting up a solr cloud cluster, and unfortunately some of our
>>VMs may be physically located on the same VM host. Is there a way of
>>ensuring that all copies of a shard are not located on the same
>>physical server?
>>
>>If they do end up in that state, is there a way of rebalancing them?
>>
>>Cheers
>>
>>Tom


Re: catch alls and nuances

2016-02-01 Thread Erick Erickson
Likely you also have WordDelimiterFilterFactory in
your fieldType, that's what will split on alphanumeric
transitions.

So you should be able to use wildcards here, i.e. 1234L*

However, that'll only work if you have preserveOriginal set in
WordDelimiterFilterFactory in your indexing chain.
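
A sketch of the relevant index-time filter entry (preserveOriginal is the key
attribute here; the other values shown are common settings, adjust to taste):

<filter class="solr.WordDelimiterFilterFactory"
        preserveOriginal="1"
        generateWordParts="1"
        generateNumberParts="1"
        catenateAll="1"/>

With preserveOriginal="1", "1234LT" is kept as a whole token alongside the
split parts, so the wildcard 1234L* can match it.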

And just to make life "interesting", there are some peculiarities
with parsing wildcards at query time, so be sure to see the
admin/analysis page

Best,
Erick

On Mon, Feb 1, 2016 at 12:20 PM, John Blythe  wrote:
> Hi there
>
> I have a catch-all field called 'text' that I copy my item description,
> manufacturer name, and the item's catalog number into. I'm having an issue
> with keeping the broadness of the tokenizers in place whilst still allowing
> some good precision in the case of very specific queries.
>
> The results are generally good. But, for instance, the products named 1234L
> and 1234LT aren't behaving how i would like. If I search 1234 they both
> show. If I search 1234L only the first one is returned. I'm guessing this
> is due to the splitting of the numeric and string portions. The "1234" and
> the "L" both hit in the first case ("1234" and "L") but the L is of no
> value in the "1234" and "LT" indexed item.
>
> What is the best way around this so that a small Levenshtein distance, for
> instance, is picked up?


Re: KeepWord

2016-02-01 Thread John Blythe
i immediately realized after sending that i'd had stored="true" in the
field's config and that it was storing the original data, not the processed
data. silly me, thanks anyway!
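
For anyone who hits the same thing: the stored value is always the verbatim
input, and the KeepWord filter only affects the indexed terms. One way to
work with the kept terms (field and type names hypothetical) is to leave the
field unstored and facet on it:

<field name="tags" type="keepwords_text" indexed="true" stored="false"/>

...&facet=true&facet.field=tags will then report the post-filter indexed terms.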

-- 
*John Blythe*
Product Manager & Lead Developer

251.605.3071 | j...@curvolabs.com
www.curvolabs.com

58 Adams Ave
Evansville, IN 47713

On Mon, Feb 1, 2016 at 3:18 PM, John Blythe  wrote:

> hi all,
>
> i'm having trouble with what would seem to be a pretty straightforward
> filter.
>
> i'm trying to 'tag' documents based off of a list of relevant words that a
> description field may contain. if the data contains any of the words then
> this field is populated with it and acts as a quick reference for
> relevant/bucketed documents.
>
> i receive no errors when reloading the core or indexing the data. each
> document, however, has its description listed in this tag field *even if
> none of the targeted words are in it.*
>
> here's the analyzer, tokenizer, and filter:
>
> <analyzer>
>   <tokenizer class="..."/>
>   <filter class="solr.KeepWordFilterFactory" words="..." ignoreCase="true"/>
> </analyzer>
>
> to add to the confusion, when i run test data through both of the
> appropriate FieldName/FieldType in the Analysis UI I get the expected
> results: the non-targeted words are left out of processing.
>
> thanks for any info/help-
>


catch alls and nuances

2016-02-01 Thread John Blythe
Hi there

I have a catch-all field called 'text' that I copy my item description,
manufacturer name, and the item's catalog number into. I'm having an issue
with keeping the broadness of the tokenizers in place whilst still allowing
some good precision in the case of very specific queries.

The results are generally good. But, for instance, the products named 1234L
and 1234LT aren't behaving how i would like. If I search 1234 they both
show. If I search 1234L only the first one is returned. I'm guessing this
is due to the splitting of the numeric and string portions. The "1234" and
the "L" both hit in the first case ("1234" and "L") but the L is of no
value in the "1234" and "LT" indexed item.

What is the best way around this so that a small Levenshtein distance, for
instance, is picked up?


KeepWord

2016-02-01 Thread John Blythe
hi all,

i'm having trouble with what would seem to be a pretty straightforward
filter.

i'm trying to 'tag' documents based off of a list of relevant words that a
description field may contain. if the data contains any of the words then
this field is populated with it and acts as a quick reference for
relevant/bucketed documents.

i receive no errors when reloading the core or indexing the data. each
document, however, has its description listed in this tag field *even if
none of the targeted words are in it.*

here's the analyzer, tokenizer, and filter:

<analyzer>
  <tokenizer class="..."/>
  <filter class="solr.KeepWordFilterFactory" words="..." ignoreCase="true"/>
</analyzer>

to add to the confusion, when i run test data through both of the
appropriate FieldName/FieldType in the Analysis UI I get the expected
results: the non-targeted words are left out of processing.

thanks for any info/help-


Re: Error configuring UIMA

2016-02-01 Thread Jack Krupansky
Yeah, that's exactly the kind of innocent user error that UIMA simply has
no code to detect and reasonably report.

-- Jack Krupansky

On Mon, Feb 1, 2016 at 12:13 PM, Gian Maria Ricci - aka Alkampfer <
alkamp...@nablasoft.com> wrote:

> It was a stupid error: I mistyped the logField configuration in UIMA.
>
> I wanted the error log to use another field rather than the id, but I mistyped
> it in solrconfig.xml, and that's what caused the error.
>
> Gian Maria.
>
> --
> Gian Maria Ricci
> Cell: +39 320 0136949
>
>
> -Original Message-
> From: Jack Krupansky [mailto:jack.krupan...@gmail.com]
> Sent: lunedì 1 febbraio 2016 16:54
> To: solr-user@lucene.apache.org
> Subject: Re: Error configuring UIMA
>
> What was the specific error you had to correct? The NPE appears to be in
> exception handling code so the actual exception is not indicated in the
> stack trace.
>
> The UIMA code is rather poor in terms of failing to check and report
> missing parameters or bad parameters which in turn reference data that does
> not exist.
>
> -- Jack Krupansky
>
> On Mon, Feb 1, 2016 at 10:18 AM, alkampfer 
> wrote:
>
> >
> >
> > From: outlook_288fbf38c031d...@outlook.com
> > To: solr-user@lucene.apache.org
> > Cc:
> > Date: Mon, 1 Feb 2016 15:59:02 +0100
> > Subject: Error configuring UIMA
> >
> > I've solved the problem, it was caused by wrong configuration in
> > solrconfig.xml.
> >
> > Thanks.
> >
> >
> >
> > Hi,
> >
> > I’ve followed the guide
> > https://cwiki.apache.org/confluence/display/solr/UIMA+Integration to
> > setup a UIMA integration to test this feature. The doc is not updated
> > for Solr5, I’ve followed the latest comment to that guide and did some
> > other changes but now each request to /update handler fails with the
> > following error.
> >
> > Someone have a clue on what I did wrong?
> >
> > Thanks in advance.
> >
> {
>   "responseHeader": {
>     "status": 500,
>     "QTime": 443
>   },
>   "error": {
>     "trace": "java.lang.NullPointerException\n\tat
> > org.apache.solr.uima.processor.UIMAUpdateRequestProcessor.processAdd(U
> > IMAUpdateRequestProcessor.java:105)\n\tat
> > org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.pro
> > cessUpdate(JsonLoader.java:143)\n\tat
> > org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.loa
> > d(JsonLoader.java:113)\n\tat
> > org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:76)\n\t
> > at
> > org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandl
> > er.java:98)\n\tat
> > org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(Con
> > tentStreamHandlerBase.java:74)\n\tat
> > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandle
> > rBase.java:143)\n\tat
> > org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)\n\tat
> > org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)\n\
> > tat
> > org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)\n\tat
> > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter
> > .java:210)\n\tat
> > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter
> > .java:179)\n\tat
> > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletH
> > andler.java:1652)\n\tat
> > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:
> > 585)\n\tat
> > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.ja
> > va:143)\n\tat
> > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java
> > :577)\n\tat
> > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandle
> > r.java:223)\n\tat
> > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandle
> > r.java:1127)\n\tat
> > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:5
> > 15)\n\tat
> > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler
> > .java:185)\n\tat
> > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler
> > .java:1061)\n\tat
> > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.ja
> > va:141)\n\tat
> > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(Conte
> > xtHandlerCollection.java:215)\n\tat
> > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerColle
> > ction.java:110)\n\tat
> > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.
> > java:97)\n\tat
> > org.eclipse.jetty.server.Server.handle(Server.java:499)\n\tat
> > org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)\n\ta
> > t
> > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java
> > :257)\n\tat
> > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:
> > 540)\n\tat
> > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool
> > .java:635)\n\tat
> >
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)\n\tat
> java.lang.Thread.run(Thread.java:745)\n",
>     "code": 500
>   }
> }
>
> --
> > Gian M

Re: alternative forum for SOLR user

2016-02-01 Thread GW
I personally hate email lists..

But this one is actually pretty good. Excellent actually.

I'm a convert.

Joined it with Google mail, forward it all to a folder, and search it.

Piece of cake.


On 1 February 2016 at 11:08, Jean-Jacques MONOT  wrote:

> Thank you for the very quick answer : the mailing list is very efficient.
>
> The trouble with a mailing list is that I will receive a lot of messages in
> my mailbox ... I will see if I unsubscribe ...
>
>
>   From: Binoy Dalal
>  To: SOLR Users
>  Sent: Monday, 1 February 2016, 9:30
>  Subject: Re: alternative forum for SOLR user
>
> This is the forum if you want help. There are additional forums for dev and
> other discussions.
> Check it out here: lucene.apache.org/solr/resources.html
>
> If you are looking for the archives just Google solr user list archive.
>
> On Mon, 1 Feb 2016, 13:43 Jean-Jacques MONOT  wrote:
>
> > Hello
> >
> > I am a newbie with SOLR and just registered to this mailing list.
> >
> > Is there an alternative forum for SOLR user ? I am using this mailing
> > list for support, but did not find "real" web forum.
> >
> > JJM
> >
> >
> > --
> Regards,
> Binoy Dalal
>
>
>


Re: sorry, no dataimport-handler defined!

2016-02-01 Thread Susheel Kumar
Please register Data Import Handler to work with it
https://cwiki.apache.org/confluence/display/solr/Uploading+Structured+Data+Store+Data+with+the+Data+Import+Handler
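
For reference, the registration in solrconfig.xml amounts to loading the DIH
jar and declaring the handler, e.g.:

<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar"/>

<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">solr-data-config.xml</str>
  </lst>
</requestHandler>

followed by a core reload (or restart).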


On Mon, Feb 1, 2016 at 2:31 PM, Jean-Jacques MONOT 
wrote:

> Hello
>
> I am using SOLR 5.4.1 and the graphical admin UI.
>
> I successfully created multiples cores and indexed various documents,
> using in line commands : (create -c) and (post.jar) on W10.
>
> But in the GUI, when I click on "Dataimport", I get the following message
> : "sorry, no dataimport-handler defined!"
>
> I get the same message even on 5.3.1 or for different cores.
>
> What is wrong ?
>
> JJM
>
>
>


sorry, no dataimport-handler defined!

2016-02-01 Thread Jean-Jacques MONOT

Hello

I am using SOLR 5.4.1 and the graphical admin UI.

I successfully created multiples cores and indexed various documents,
using in line commands : (create -c) and (post.jar) on W10.

But in the GUI, when I click on "Dataimport", I get the following
message : "sorry, no dataimport-handler defined!"

I get the same message even on 5.3.1 or for different cores.

What is wrong ?

JJM




Re: Shard allocation across nodes

2016-02-01 Thread Jeff Wartes

You could write your own snitch: 
https://cwiki.apache.org/confluence/display/solr/Rule-based+Replica+Placement

Or, it would be more annoying, but you can always add/remove replicas manually 
and juggle things yourself after you create the initial collection.




On 2/1/16, 8:42 AM, "Tom Evans"  wrote:

>Hi all
>
>We're setting up a solr cloud cluster, and unfortunately some of our
>VMs may be physically located on the same VM host. Is there a way of
>ensuring that all copies of a shard are not located on the same
>physical server?
>
>If they do end up in that state, is there a way of rebalancing them?
>
>Cheers
>
>Tom


Re: Restoring backups of solrcores

2016-02-01 Thread Jeff Wartes

Aliases work when indexing too.

Create collection: collection1
Create alias: this_week -> collection1
Index to: this_week

Next week...

Create collection: collection2
Create (Move) alias: this_week -> collection2
Index to: this_week
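
The alias move itself is a single Collections API call, e.g.:

http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=this_week&collections=collection2

Calling CREATEALIAS with an existing alias name simply repoints it.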




On 2/1/16, 2:14 AM, "vidya"  wrote:

>Hi 
>
>How can that be useful? Can you please explain.
>I want to have the same collection name every time I index data, i.e.,
>current_collection.
>
>By collection aliasing, I can create a new collection and point my alias
>(say ALIAS) to the new collection, but I cannot rename that collection to the
>same current_collection which I created and indexed the previous week.
>
>So, are you asking me to create whatever collection name I want, point my
>alias (with the name I want) to it, change that alias to each new collection
>I create, and query using the alias name?
>
>Please help me on this.
>
>Thanks in advance
>
>
>
>--
>View this message in context: 
>http://lucene.472066.n3.nabble.com/Restoring-backups-of-solrcores-tp4254080p4254366.html
>Sent from the Solr - User mailing list archive at Nabble.com.


RE: Error configuring UIMA

2016-02-01 Thread Gian Maria Ricci - aka Alkampfer
It was a stupid error: I mistyped the logField configuration in UIMA.

I wanted the error log to use another field rather than the id, but I mistyped
it in solrconfig.xml, and that's what caused the error.
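
For reference, the setting in question lives inside the UIMA update processor
configuration in solrconfig.xml, roughly like this (a sketch; the surrounding
uimaConfig values are abbreviated):

<processor class="org.apache.solr.uima.processor.UIMAUpdateRequestProcessorFactory">
  <lst name="uimaConfig">
    ...
    <bool name="ignoreErrors">false</bool>
    <str name="logField">id</str>
    ...
  </lst>
</processor>

The logField value must name a field that actually exists in the schema.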

Gian Maria.

--
Gian Maria Ricci
Cell: +39 320 0136949


-Original Message-
From: Jack Krupansky [mailto:jack.krupan...@gmail.com] 
Sent: lunedì 1 febbraio 2016 16:54
To: solr-user@lucene.apache.org
Subject: Re: Error configuring UIMA

What was the specific error you had to correct? The NPE appears to be in 
exception handling code so the actual exception is not indicated in the stack 
trace.

The UIMA code is rather poor in terms of failing to check and report missing 
parameters or bad parameters which in turn reference data that does not exist.

-- Jack Krupansky

On Mon, Feb 1, 2016 at 10:18 AM, alkampfer  wrote:

>
>
> From: outlook_288fbf38c031d...@outlook.com
> To: solr-user@lucene.apache.org
> Cc:
> Date: Mon, 1 Feb 2016 15:59:02 +0100
> Subject: Error configuring UIMA
>
> I've solved the problem, it was caused by wrong configuration in 
> solrconfig.xml.
>
> Thanks.
>
>
>
> > Hi,
> >
> > I’ve followed the guide
> > https://cwiki.apache.org/confluence/display/solr/UIMA+Integration to
> > setup a UIMA integration to test this feature. The doc is not updated
> > for Solr5, I’ve followed the latest comment to that guide and did some
> > other changes but now each request to /update handler fails with the
> > following error.
> >
> > Someone have a clue on what I did wrong?
> >
> > Thanks in advance.
> >
> {
>   "responseHeader": {
>     "status": 500,
>     "QTime": 443
>   },
>   "error": {
>     "trace": "java.lang.NullPointerException\n\tat
> org.apache.solr.uima.processor.UIMAUpdateRequestProcessor.processAdd(U
> IMAUpdateRequestProcessor.java:105)\n\tat
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.pro
> cessUpdate(JsonLoader.java:143)\n\tat
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.loa
> d(JsonLoader.java:113)\n\tat 
> org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:76)\n\t
> at 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandl
> er.java:98)\n\tat 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(Con
> tentStreamHandlerBase.java:74)\n\tat
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandle
> rBase.java:143)\n\tat 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)\n\tat
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)\n\
> tat 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)\n\tat
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter
> .java:210)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter
> .java:179)\n\tat 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletH
> andler.java:1652)\n\tat 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:
> 585)\n\tat 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.ja
> va:143)\n\tat 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java
> :577)\n\tat 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandle
> r.java:223)\n\tat 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandle
> r.java:1127)\n\tat 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:5
> 15)\n\tat 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler
> .java:185)\n\tat 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler
> .java:1061)\n\tat 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.ja
> va:141)\n\tat 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(Conte
> xtHandlerCollection.java:215)\n\tat
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerColle
> ction.java:110)\n\tat 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.
> java:97)\n\tat 
> org.eclipse.jetty.server.Server.handle(Server.java:499)\n\tat
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)\n\ta
> t 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java
> :257)\n\tat 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:
> 540)\n\tat 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool
> .java:635)\n\tat 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)\n\tat
> java.lang.Thread.run(Thread.java:745)\n",
>     "code": 500
>   }
> }
>
> --
> > Gian Maria Ricci
> > Cell: +39 320 0136949
>
>


Re: FileBased Spellcheck on Solr cloud

2016-02-01 Thread Binoy Dalal
1) Try building the dictionaries individually on each node.
2) Have you configured the shards.qt parameter in your solrconfig.xml for
the query handler you're using?
The shards.qt parameter should point to the request handler you're using,
something like:
<str name="shards.qt">/spellcheck</str>
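
For reference, a file-based checker is typically wired up roughly like this
(a sketch following the stock Solr example; the sourceLocation follows the
thread, the other values are illustrative):

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">file</str>
    <str name="classname">solr.FileBasedSpellChecker</str>
    <str name="sourceLocation">spellings_xxx.txt</str>
    <str name="characterEncoding">UTF-8</str>
    <str name="spellcheckIndexDir">./spellcheckerFile</str>
  </lst>
</searchComponent>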

On Mon, Feb 1, 2016 at 8:51 PM Riyaz 
wrote:

> Thank you Binoy. We are generating the spellcheck source data,
> spellings_xxx.txt,
> by querying the main index only (we do have the field indexed in cloud). Due
> to the huge amount of data (160 million records), the spellcheck build request
> takes
> a lot of time and consumes a lot of memory for an index-based spellcheck. So we
> query the field values of one content type alone to build
> spellings_xxx.txt (1 million entries, 14MB in size).
>
> Here is the behavior of the FileBasedSpellcheck on cloud:
>
>
> Testing environment:
>
> total : 4 solr instances : 4.10.4
> External zookeeper ensemble: 3 instances
>
> shard-1
>  -- leader1
>  -- replica1
>
> shard2-2
>  -- leader2
>  -- replica2
>
>
> We have pushed the file to the cloud and sent the spellcheck build request to
> leader-1 of shard-1.
>
> 1. The first time, it built the spellcheck index on the leader-1 instance of
> shard-1 and on replica-2 of shard-2.
>
> 2. The next time (we cleaned the spellcheck index and restarted the cloud), we
> noticed it built the index on the leader-1 instance of shard-1 and the leader-2
> instance of shard-2 (on both leaders only).
>
> Our spellcheck queries (posted to leader-1) are not returning any
> suggestions if we keep refreshing the page (maybe when the replicas get
> the request).
>
>
> Can you please let us know, is there a way to build the file-based spellcheck
> on cloud?
>
> Thanks
> Riyaz
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/FileBased-Spellcheck-on-Solr-cloud-tp4252034p4254432.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
-- 
Regards,
Binoy Dalal


Shard allocation across nodes

2016-02-01 Thread Tom Evans
Hi all

We're setting up a solr cloud cluster, and unfortunately some of our
VMs may be physically located on the same VM host. Is there a way of
ensuring that all copies of a shard are not located on the same
physical server?

If they do end up in that state, is there a way of rebalancing them?

Cheers

Tom


Re: Error in UIMA, probably opencalais,

2016-02-01 Thread alkampfer


From: "alkampfer" alkamp...@nablasoft.com
To: solr-user@lucene.apache.org
Cc: solr-user@lucene.apache.org
Date: Mon,  1 Feb 2016 17:04:01 +0100
Subject: Re: Error in UIMA, probably opencalais,


Just a quick follow up (sorry for issuing too many e-mail).


I've simply taken all the XML files directly from the source code
(/solr/contrib/uima/src/resources/org/apache/uima/desc/), then modified
OverridingParamsExtServicesAE.xml, commenting out the line regarding
OpenCalais, and changed solrconfig.xml to refer to the version in my local
directory /var/solr/uima/OverridingParamsExtServicesAE.xml.


Now everything works: I can index documents and I can see UIMA data from
the Alchemy API.


This confirms that the error is indeed in the OpenCalais API, so I wonder if
anyone has gotten it to work in Solr 5.3.1. 
All that I did was simply to add the API Key Open Calais gave me in the 
registration E-Mail. 


Thanks in advance.


Gian Maria.

> 
> 
> From: "Jack Krupansky" jack.krupan...@gmail.com
> To: solr-user@lucene.apache.org
> Cc: 
> Date: Mon, 1 Feb 2016 10:55:44 -0500
> Subject: Re: Error in UIMA, probably opencalais,
> 
> 
> Yes, that is the exact error that I got, but I think that the error is
> somewhat due to the return value of the OpenCalais API, because it occurs in
> the method OpenCalaisAnnotator.process.
> 
> 
> As a general question, I'd like to know if I can disable the OpenCalais part
> and use only the Alchemy API. 
> 
> 
> I've found information on this in some old message with 
> OverridingParamsExtServicesAE.xml, but I did not find anything in the 
> documentation that suggest how to do this.
> 
> 
> Anyone has a link that explain how to use OverridingParamsExtServicesAE.xml 
> to avoid using OpenCalais api?
> 
> 
> Thanks in advance.
> 
> 
> Gian Maria.
> 
> > At the bottom (the fine print!) it says: lineNumber: 15; columnNumber: 7;
> > The element type "meta" must be terminated by the matching end-tag
> > "".
> > 
> > -- Jack Krupansky
> > 
> > On Mon, Feb 1, 2016 at 10:45 AM, Gian Maria Ricci - aka Alkampfer <  
> > alkamp...@nablasoft.com> wrote:
> > 
> > > Hi,
> > >
> > >
> > >
> > > I’ve configured integration with UIMA but when I try to add a document I
> > > always got the error reported at bottom of the mail.
> > >
> > >
> > >
> > > It seems to be related to openCalais, but I’ve registered to OpenCalais
> > > and setup my token in solrconfig, so I wonder if anyone has some clue on
> > > what could be the reason of the error.
> > >
> > >
> > >
> > > I’m running this on Solr 5.3.1 instance running on linux.
> > >
> > >
> > >
> > > Gian Maria.
> > >
> > >
> > >
> > > null:org.apache.solr.common.SolrException: processing error null.
> > > id=doc4,  text="This is some textual content to verify UIMA 
> > > integration..."
> > >
> > >  at
> > > org.apache.solr.uima.processor.UIMAUpdateRequestProcessor.processAdd(UIMAUpdateRequestProcessor.java:127)
> > >
> > >  at
> > > org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250)
> > >
> > >  at
> > > org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:177)
> > >
> > >  at
> > > org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)
> > >
> > >  at
> > > org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
> > >
> > >  at
> > > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> > >
> > >  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
> > >
> > >  at
> > > org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
> > >
> > >  at
> > > org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
> > >
> > >  at
> > > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
> > >
> > >  at
> > > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
> > >
> > >  at
> > > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> > >
> > >  at
> > > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> > >
> > >  at
> > > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> > >
> > >  at
> > > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> > >
> > >  at
> > > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> > >
> > >  at
> > > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> > >
> > >  at
> > > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> > >
> > >  at
> > > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> > >
> > >  at
> > > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> > >
> > >  at
> > > org.eclipse.j

Re: use /update in the Gui admin interface

2016-02-01 Thread Jean-Jacques MONOT
Thank you
This is exactly what I was looking for! Solr is very powerful but quite 
complicated to handle.
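
For reference, the JSON atomic-update form that the Documents tab (or a
direct POST to /update) accepts looks like this (the id value is whatever
identifies the document to change):

[{"id":"mydocid", "myid":{"set":14}}]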


  From: Erik Hatcher
 To: solr-user@lucene.apache.org
 Sent: Monday, 1 February 2016, 11:26
 Subject: Re: use /update in the Gui admin interface
JJM - use the “Documents” tab in the admin UI instead of the “Query” one.

    Erik



> On Feb 1, 2016, at 3:10 AM, Jean-Jacques MONOT  wrote:
> 
> Hello
> 
> I am using the GUI admin interface for the SOLR java server.
> 
> No problem to make "classical" query with the /select request handler.
> 
> But now, I would like to make an update on a selected document : modify
> the value of a field.
> 
> How should I do ?
> 
> I think I should use :
> - /update  : for the request handler
> - id="" :  for the q field (in order to select the doc)
> 
> but  I do not see how to place a "set" to the field that I have added to
> the schema and that is in the field list of my doc ?
> 
> For example, I would like to be able to do this in JSON on the selected doc:
> 
> myid : {set : 14}
> 
> JJM
> 
> 



Re: alternative forum for SOLR user

2016-02-01 Thread Jean-Jacques MONOT
Thank you for the very quick answer : the mailing list is very efficient.

The trouble with a mailing list is that I will receive a lot of messages in my
mailbox ... I will see if I unsubscribe ... 


  From: Binoy Dalal
 To: SOLR Users
 Sent: Monday, 1 February 2016, 9:30
 Subject: Re: alternative forum for SOLR user
This is the forum if you want help. There are additional forums for dev and
other discussions.
Check it out here: lucene.apache.org/solr/resources.html

If you are looking for the archives just Google solr user list archive.

On Mon, 1 Feb 2016, 13:43 Jean-Jacques MONOT  wrote:

> Hello
>
> I am a newbie with SOLR and just registered to this mailing list.
>
> Is there an alternative forum for SOLR user ? I am using this mailing
> list for support, but did not find "real" web forum.
>
> JJM
>
>
> --
Regards,
Binoy Dalal


Re: Error in UIMA, probably opencalais,

2016-02-01 Thread alkampfer


From: "Jack Krupansky" jack.krupan...@gmail.com
To: solr-user@lucene.apache.org
Cc: 
Date: Mon, 1 Feb 2016 10:55:44 -0500
Subject: Re: Error in UIMA, probably opencalais,


Yes, that is the exact error that I got, but I think that the error is somewhat
due to the return value of the OpenCalais API, because it occurs in the method
OpenCalaisAnnotator.process.


As a general question, I'd like to know if I can disable the OpenCalais part and
use only the Alchemy API. 


I've found information on this in some old message with 
OverridingParamsExtServicesAE.xml, but I did not find anything in the 
documentation that suggest how to do this.


Anyone has a link that explain how to use OverridingParamsExtServicesAE.xml to 
avoid using OpenCalais api?


Thanks in advance.


Gian Maria.

> At the bottom (the fine print!) it says: lineNumber: 15; columnNumber: 7;
> The element type "meta" must be terminated by the matching end-tag
> "".
> 
> -- Jack Krupansky
> 
> On Mon, Feb 1, 2016 at 10:45 AM, Gian Maria Ricci - aka Alkampfer <  
> alkamp...@nablasoft.com> wrote:
> 
> > Hi,
> >
> >
> >
> > I’ve configured integration with UIMA but when I try to add a document I
> > always got the error reported at bottom of the mail.
> >
> >
> >
> > It seems to be related to openCalais, but I’ve registered to OpenCalais
> > and setup my token in solrconfig, so I wonder if anyone has some clue on
> > what could be the reason of the error.
> >
> >
> >
> > I’m running this on Solr 5.3.1 instance running on linux.
> >
> >
> >
> > Gian Maria.
> >
> >
> >
> > null:org.apache.solr.common.SolrException: processing error null.
> > id=doc4,  text="This is some textual content to verify UIMA integration..."
> >
> >  at
> > org.apache.solr.uima.processor.UIMAUpdateRequestProcessor.processAdd(UIMAUpdateRequestProcessor.java:127)
> >
> >  at
> > org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250)
> >
> >  at
> > org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:177)
> >
> >  at
> > org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)
> >
> >  at
> > org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
> >
> >  at
> > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> >
> >  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
> >
> >  at
> > org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
> >
> >  at
> > org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
> >
> >  at
> > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
> >
> >  at
> > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
> >
> >  at
> > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> >
> >  at
> > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> >
> >  at
> > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> >
> >  at
> > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> >
> >  at
> > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> >
> >  at
> > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> >
> >  at
> > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> >
> >  at
> > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> >
> >  at
> > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> >
> >  at
> > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> >
> >  at
> > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> >
> >  at
> > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> >
> >  at
> > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> >
> >  at org.eclipse.jetty.server.Server.handle(Server.java:499)
> >
> >  at
> > org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> >
> >  at
> > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> >
> >  at
> > org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> >
> >  at
> > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> >
> >  at
> > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> >
> >  at java.lang.Thread.run(Thread.java:745)
> >
> > Caused by: org.apache.uima.analysis_engine.AnalysisEngineProcessException
> >
> >  *at
> > org.apache.uima.annotator.calais.OpenCalaisAnnotator.process(O

Re: Error configuring UIMA

2016-02-01 Thread Jack Krupansky
What was the specific error you had to correct? The NPE appears to be in
exception handling code so the actual exception is not indicated in the
stack trace.

The UIMA code is rather poor in terms of failing to check and report
missing parameters or bad parameters which in turn reference data that does
not exist.

-- Jack Krupansky

On Mon, Feb 1, 2016 at 10:18 AM, alkampfer  wrote:

>
>
> From: outlook_288fbf38c031d...@outlook.com
> To: solr-user@lucene.apache.org
> Cc:
> Date: Mon, 1 Feb 2016 15:59:02 +0100
> Subject: Error configuring UIMA
>
> I've solved the problem, it was caused by wrong configuration in
> solrconfig.xml.
>
> Thanks.
>
>
>
> > Hi,
> >
> > I’ve followed the guide
> > https://cwiki.apache.org/confluence/display/solr/UIMA+Integration to
> > setup a UIMA integration to test this feature. The doc is not updated for
> > Solr5, I’ve followed the latest comment to that guide and did some other
> > changes but now each request to /update handler fails with the following
> > error.
> >
> > Someone have a clue on what I did wrong?
> >
> > Thanks in advance.
> >
> {
>   "responseHeader": {
>     "status": 500,
>     "QTime": 443
>   },
>   "error": {
>     "trace": "java.lang.NullPointerException\n\tat
> org.apache.solr.uima.processor.UIMAUpdateRequestProcessor.processAdd(UIMAUpdateRequestProcessor.java:105)\n\tat
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:143)\n\tat
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:113)\n\tat
> org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:76)\n\tat
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)\n\tat
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)\n\tat
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)\n\tat
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)\n\tat
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)\n\tat
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)\n\tat
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)\n\tat
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)\n\tat
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)\n\tat
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)\n\tat
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)\n\tat
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)\n\tat
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)\n\tat
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)\n\tat
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)\n\tat
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)\n\tat
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)\n\tat
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)\n\tat
> org.eclipse.jetty.server.Server.handle(Server.java:499)\n\tat
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)\n\tat
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)\n\tat
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)\n\tat
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)\n\tat
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)\n\tat
> java.lang.Thread.run(Thread.java:745)\n",> "code": 500>   }> }>  > --
> > Gian Maria Ricci
> > Cell: +39 320 0136949> >
>
>


Re: alternative forum for SOLR user

2016-02-01 Thread Jack Krupansky
Some people prefer to use Stack Overflow, but this mailing list is still
the definitive "forum" for Solr users.

See:
http://stackoverflow.com/questions/tagged/solr


-- Jack Krupansky

On Mon, Feb 1, 2016 at 10:58 AM, Shawn Heisey  wrote:

> On 2/1/2016 1:13 AM, Jean-Jacques MONOT wrote:
> > I am a newbie with SOLR and just registered to this mailing list.
> >
> > Is there an alternative forum for SOLR user ? I am using this mailing
> > list for support, but did not find "real" web forum.
>
> Are you using "forum" as a word that can include a mailing list, or are
> you talking explicitly about a website for Solr that is running forum
> software?
>
> There is at least one "forum" website that actually mirrors this mailing
> list -- posts made on the forum are sent to the mailing list, and
> vice-versa.  The example I am thinking of is Nabble.
>
> This mailing list is the primary official path to find support on Solr
> -- the list is run by the Apache Software Foundation, which owns all
> rights connected to Solr.  There is no official "forum" website for the
> project, and nothing like it is planned for the near future.  Nabble is
> a third-party website.
>
> There are some third-party systems, entirely separate from this mailing
> list, that offer community support for Solr, such as stackoverflow.
> Another possibility is the #solr IRC channel, which is not exactly an
> official resource, but is frequented by users who have an official
> connection with the project.
>
> Thanks,
> Shawn
>
>


Re: alternative forum for SOLR user

2016-02-01 Thread Shawn Heisey
On 2/1/2016 1:13 AM, Jean-Jacques MONOT wrote:
> I am a newbie with SOLR and just registered to this mailing list.
>
> Is there an alternative forum for SOLR user ? I am using this mailing
> list for support, but did not find "real" web forum.

Are you using "forum" as a word that can include a mailing list, or are
you talking explicitly about a website for Solr that is running forum
software?

There is at least one "forum" website that actually mirrors this mailing
list -- posts made on the forum are sent to the mailing list, and
vice-versa.  The example I am thinking of is Nabble.

This mailing list is the primary official path to find support on Solr
-- the list is run by the Apache Software Foundation, which owns all
rights connected to Solr.  There is no official "forum" website for the
project, and nothing like it is planned for the near future.  Nabble is
a third-party website.

There are some third-party systems, entirely separate from this mailing
list, that offer community support for Solr, such as stackoverflow. 
Another possibility is the #solr IRC channel, which is not exactly an
official resource, but is frequented by users who have an official
connection with the project.

Thanks,
Shawn
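
For reference, both limits are configurable; a sketch with the default
values (exact file locations can vary by version):

In the jetty.xml that ships with Solr:

    <Set name="requestHeaderSize">8192</Set>

In solrconfig.xml, inside the <requestDispatcher> section:

    <requestParsers enableRemoteStreaming="true"
                    multipartUploadLimitInKB="2048000"
                    formdataUploadLimitInKB="2048"/>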



Solr highlight

2016-02-01 Thread Anil
Hi,

We have five shards and 2 replicas, and we are using collection aliases.

I have set hl=true in my query to search against all fields of the Solr
document.

I am searching for a text (e.g. 2010-0561-T-0312) across all fields
(q=(2010-0561-T-0312)), but the highlights section is empty. When I search
q=docId:(2010-0561-T-0312), I can see highlights.

I am able to see the highlight section if I search for any other text that
is not part of docId.

Please help me in identifying the issue.


Thanks,
Anil


Re: Error in UIMA, probably opencalais,

2016-02-01 Thread Jack Krupansky
At the bottom (the fine print!) it says: lineNumber: 15; columnNumber: 7;
The element type "meta" must be terminated by the matching end-tag
"</meta>".

-- Jack Krupansky

On Mon, Feb 1, 2016 at 10:45 AM, Gian Maria Ricci - aka Alkampfer <
alkamp...@nablasoft.com> wrote:

> Hi,
>
>
>
> I’ve configured integration with UIMA but when I try to add a document I
> always got the error reported at bottom of the mail.
>
>
>
> It seems to be related to openCalais, but I’ve registered to OpenCalais
> and setup my token in solrconfig, so I wonder if anyone has some clue on
> what could be the reason of the error.
>
>
>
> I’m running this on Solr 5.3.1 instance running on linux.
>
>
>
> Gian Maria.
>
>
>
> null:org.apache.solr.common.SolrException: processing error null.
> id=doc4,  text="This is some textual content to verify UIMA integration..."
>
>  at
> org.apache.solr.uima.processor.UIMAUpdateRequestProcessor.processAdd(UIMAUpdateRequestProcessor.java:127)
>
>  at
> org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250)
>
>  at
> org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:177)
>
>  at
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)
>
>  at
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
>
>  at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>
>  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
>
>  at
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
>
>  at
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
>
>  at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
>
>  at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>
>  at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>
>  at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>
>  at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>
>  at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>
>  at
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>
>  at
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>
>  at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>
>  at
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>
>  at
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>
>  at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>
>  at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>
>  at
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>
>  at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>
>  at org.eclipse.jetty.server.Server.handle(Server.java:499)
>
>  at
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>
>  at
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>
>  at
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>
>  at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>
>  at
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>
>  at java.lang.Thread.run(Thread.java:745)
>
> Caused by: org.apache.uima.analysis_engine.AnalysisEngineProcessException
>
>  *at
> org.apache.uima.annotator.calais.OpenCalaisAnnotator.process(OpenCalaisAnnotator.java:208)*
>
>  at
> org.apache.uima.analysis_component.CasAnnotator_ImplBase.process(CasAnnotator_ImplBase.java:56)
>
>  at
> org.apache.uima.analysis_engine.impl.PrimitiveAnalysisEngine_impl.callAnalysisComponentProcess(PrimitiveAnalysisEngine_impl.java:377)
>
>  at
> org.apache.uima.analysis_engine.impl.PrimitiveAnalysisEngine_impl.processAndOutputNewCASes(PrimitiveAnalysisEngine_impl.java:295)
>
>  at
> org.apache.uima.analysis_engine.asb.impl.ASB_impl$AggregateCasIterator.processUntilNextOutputCas(ASB_impl.java:567)
>
>  at
> org.apache.uima.analysis_engine.asb.impl.ASB_impl$AggregateCasIterator.<init>(ASB_impl.java:409)
>
>  at
> org.apache.uima.analysis_engine.asb.impl.ASB_impl.process(ASB_impl.java:342)
>
>  at
> org.apache.uima.analysis_engine.impl.AggregateAnalysisEngine_impl.processAndOutputNewCASes(AggregateAnalysisEngine_impl.java:267)
>
>  at
> org.apache.uima.analysis_engine.impl.AnalysisEngineImplBase.process(AnalysisEngineImplBase.java

Error in UIMA, probably opencalais,

2016-02-01 Thread Gian Maria Ricci - aka Alkampfer
Hi,

 

I've configured the integration with UIMA, but when I try to add a document I
always get the error reported at the bottom of the mail.

It seems to be related to OpenCalais, but I've registered with OpenCalais and
set up my token in solrconfig, so I wonder if anyone has a clue about what
could be the reason for the error.

 

I'm running this on Solr 5.3.1 instance running on linux.

 

Gian Maria.

 

null:org.apache.solr.common.SolrException: processing error null. id=doc4, text="This is some textual content to verify UIMA integration..."

    at org.apache.solr.uima.processor.UIMAUpdateRequestProcessor.processAdd(UIMAUpdateRequestProcessor.java:127)
    at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250)
    at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:177)
    at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)
    at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
    at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
    at org.eclipse.jetty.server.Server.handle(Server.java:499)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
    at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.uima.analysis_engine.AnalysisEngineProcessException
    at org.apache.uima.annotator.calais.OpenCalaisAnnotator.process(OpenCalaisAnnotator.java:208)
    at org.apache.uima.analysis_component.CasAnnotator_ImplBase.process(CasAnnotator_ImplBase.java:56)
    at org.apache.uima.analysis_engine.impl.PrimitiveAnalysisEngine_impl.callAnalysisComponentProcess(PrimitiveAnalysisEngine_impl.java:377)
    at org.apache.uima.analysis_engine.impl.PrimitiveAnalysisEngine_impl.processAndOutputNewCASes(PrimitiveAnalysisEngine_impl.java:295)
    at org.apache.uima.analysis_engine.asb.impl.ASB_impl$AggregateCasIterator.processUntilNextOutputCas(ASB_impl.java:567)
    at org.apache.uima.analysis_engine.asb.impl.ASB_impl$AggregateCasIterator.<init>(ASB_impl.java:409)
    at org.apache.uima.analysis_engine.asb.impl.ASB_impl.process(ASB_impl.java:342)
    at org.apache.uima.analysis_engine.impl.AggregateAnalysisEngine_impl.processAndOutputNewCASes(AggregateAnalysisEngine_impl.java:267)
    at org.apache.uima.analysis_engine.impl.AnalysisEngineImplBase.process(AnalysisEngineImplBase.java:267)
    at org.apache.uima.analysis_engine.impl.AnalysisEngineImplBase.process(AnalysisEngineImplBase.java:280)
    at org.apache.solr.uima.processor.UIMAUpdateRequestProcessor.processText(UIMAUpdateRequestProcessor.java:173)
    at org.apache.solr.uima.processor.UIMAUpdateRequestProcessor.processAdd(UIMAUpdateRequestProcessor.java:79)
    ... 30 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 15; columnNumber: 7; The element type "meta" must be terminated by the matching end-tag "</meta>".

Re: FileBased Spellcheck on Solr cloud

2016-02-01 Thread Riyaz
Thank you, Binoy. We are generating the spellcheck source data
(spellings_xxx.txt) by querying the main index only (we do have the field
indexed in the cloud). Due to the huge amount of data (160 million records),
the spellcheck build request takes a lot of time and consumes a lot of
memory for an index-based spellcheck. So we query the field values of one
content type alone to build spellings_xxx.txt (1 million entries, 14 MB).
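
For reference, a file-based spellchecker is defined roughly like this in
solrconfig.xml (a sketch, not our exact configuration; the names and paths
are illustrative):

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">file</str>
    <str name="classname">solr.FileBasedSpellChecker</str>
    <!-- plain-text source dictionary, one term per line -->
    <str name="sourceLocation">spellings_xxx.txt</str>
    <str name="characterEncoding">UTF-8</str>
    <!-- where the spellcheck index gets built -->
    <str name="spellcheckIndexDir">./spellchecker_file</str>
  </lst>
</searchComponent>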

Here is the behavior of the FileBasedSpellcheck on cloud:


Testing environment:

total : 4 solr instances : 4.10.4
External zookeeper ensemble: 3 instances

shard-1
 -- leader1
 -- replica1

shard-2
 -- leader2
 -- replica2


We have pushed the file to the cloud and sent the spellcheck build request
to leader-1 of shard-1.

1. The first time, it built the spellcheck index on the leader-1 instance of
shard-1 and on replica-2 of shard-2.

2. The next time (after cleaning the spellcheck index and restarting the
cloud), we noticed it built the index on the leader-1 instance of shard-1
and the leader-2 instance of shard-2 (on both leaders only).

Our spellcheck queries (posted to leader-1) do not return any suggestions
if we keep refreshing the page (maybe when the replicas get the request).


Can you please let us know whether there is a way to build a file-based
spellcheck on the cloud?

Thanks
Riyaz




--
View this message in context: 
http://lucene.472066.n3.nabble.com/FileBased-Spellcheck-on-Solr-cloud-tp4252034p4254432.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re:Error configuring UIMA

2016-02-01 Thread alkampfer


From: outlook_288fbf38c031d...@outlook.com
To: solr-user@lucene.apache.org
Cc: 
Date: Mon, 1 Feb 2016 15:59:02 +0100
Subject: Error configuring UIMA

I've solved the problem; it was caused by a wrong configuration in solrconfig.xml.

Thanks.



> Hi,>  > I’ve followed the guide 
> https://cwiki.apache.org/confluence/display/solr/UIMA+Integration to setup a 
> UIMA integration to test this feature. The doc is not updated for Solr5, I’ve 
> followed the latest comment to that guide and did some other changes but now 
> each request to /update handler fails with the following error.>  > Someone 
> have a clue on what I did wrong?>  > Thanks in advance.>  > {>   
> "responseHeader": {>     "status": 500,>     "QTime": 443>   },>   "error": 
> {>     "trace": "java.lang.NullPointerException\n\tat 
> org.apache.solr.uima.processor.UIMAUpdateRequestProcessor.processAdd(UIMAUpdateRequestProcessor.java:105)\n\tat
>  
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:143)\n\tat
>  
> org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:113)\n\tat
>  org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:76)\n\tat 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)\n\tat
>  
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)\n\tat
>  org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:499)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)\n\tat
>  
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)\n\tat
>  java.lang.Thread.run(Thread.java:745)\n",>     "code": 500>   }> }>  > --
> Gian Maria Ricci
> Cell: +39 320 0136949> >  



Re: URI is too long

2016-02-01 Thread Susheel Kumar
POST is pretty much similar to GET. You can use any REST client to try it:
use the same select URL, pass the header below, and put the query
parameters into the body.

POST:  http://localhost:8983/solr/techproducts/select

Header
==
Content-Type:application/x-www-form-urlencoded

payload/body:
==
q=*:*&rows=2
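
If you are on SolrJ rather than a raw REST client, a minimal sketch of the
same POST query (the URL and collection name are placeholders):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class PostQueryExample {
  public static void main(String[] args) throws Exception {
    // METHOD.POST sends the parameters in the request body, so the
    // ~8KB URI/header limit no longer applies (POST default is 2MB).
    try (HttpSolrClient client =
        new HttpSolrClient("http://localhost:8983/solr/techproducts")) {
      SolrQuery query = new SolrQuery("*:*");
      query.setRows(2);
      QueryResponse rsp = client.query(query, SolrRequest.METHOD.POST);
      System.out.println("Found: " + rsp.getResults().getNumFound());
    }
  }
}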


Thanks,
Susheel

On Mon, Feb 1, 2016 at 2:38 AM, Salman Ansari 
wrote:

> Cool. I would give POST a try. Any samples of using Post while passing the
> query string values (such as ORing between Solr field values) using
> Solr.NET?
>
> Regards,
> Salman
>
> On Sun, Jan 31, 2016 at 10:21 PM, Shawn Heisey 
> wrote:
>
> > On 1/31/2016 7:20 AM, Salman Ansari wrote:
> > > I am building a long query containing multiple ORs between query
> terms. I
> > > started to receive the following exception:
> > >
> > > The remote server returned an error: (414) Request-URI Too Long. Any
> idea
> > > what is the limit of the URL in Solr? Moreover, as a solution I was
> > > thinking of chunking the query into multiple requests but I was
> wondering
> > > if anyone has a better approach?
> >
> > The default HTTP header size limit on most webservers and containers
> > (including the Jetty that ships with Solr) is 8192 bytes.  A typical
> > request like this will start with "GET " and end with " HTTP/1.1", which
> > count against that 8192 bytes.  The max header size can be increased.
> >
> > If you place the parameters into a POST request instead of on the URL,
> > then the default size limit of that POST request in Solr is 2MB.  This
> > can also be increased.
> >
> > Thanks,
> > Shawn
> >
> >
>


Error configuring UIMA

2016-02-01 Thread Gian Maria Ricci - aka Alkampfer
Hi,

 

I've followed the guide
https://cwiki.apache.org/confluence/display/solr/UIMA+Integration to set up a
UIMA integration to test this feature. The doc is not updated for Solr 5, so
I've followed the latest comment on that guide and made some other changes,
but now each request to the /update handler fails with the following error.

 

Does someone have a clue about what I did wrong?

 

Thanks in advance.

 

{

  "responseHeader": {

"status": 500,

"QTime": 443

  },

  "error": {

"trace": "java.lang.NullPointerException\n\tat
org.apache.solr.uima.processor.UIMAUpdateRequestProcessor.processAdd(UIMAUpd
ateRequestProcessor.java:105)\n\tat
org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUp
date(JsonLoader.java:143)\n\tat
org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(Json
Loader.java:113)\n\tat
org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:76)\n\tat
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.jav
a:98)\n\tat
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentSt
reamHandlerBase.java:74)\n\tat
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.
java:143)\n\tat
org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)\n\tat
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)\n\tat
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)\n\tat
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:
210)\n\tat
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:
179)\n\tat
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler
.java:1652)\n\tat
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)\n
\tat
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143
)\n\tat
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)\
n\tat
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java
:223)\n\tat
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java
:1127)\n\tat
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)\n\
tat
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:
185)\n\tat
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:
1061)\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141
)\n\tat
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHand
lerCollection.java:215)\n\tat
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.
java:110)\n\tat
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:9
7)\n\tat org.eclipse.jetty.server.Server.handle(Server.java:499)\n\tat
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)\n\tat
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)\
n\tat
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)\n
\tat
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:
635)\n\tat
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:5
55)\n\tat java.lang.Thread.run(Thread.java:745)\n",

"code": 500

  }

}

 

--
Gian Maria Ricci
Cell: +39 320 0136949



Multi-lingual search

2016-02-01 Thread vidya
Hi

My use case is to index, and be able to query, languages in Solr that are
not among Solr's built-in supported languages. How can I implement this?

My input documents contain different languages in a field. I came across
the "Solr in Action" book's chapter on searching content in multiple
languages (chapter 14). For the built-in languages I have implemented this
approach, but how do I implement it for languages like Tamil? Do I need to
find filter classes for that particular language, or some specific
libraries?

Please help me on this.

Thanks in advance.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Multi-lingual-search-tp4254398.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud with large synonym files

2016-02-01 Thread Vincenzo D'Amore
Hi Janit,

we had the same problem and used an XML external entity.

First we moved all the files into a subdirectory inside the collection
config:

/configs/collectionName/custom_synonyms/...

Then inside your schema.xml you can reference all the files you need using
an ENTITY:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE schema [
<!ENTITY custom_synonyms "custom_synonyms/...">
]>
<schema ... xmlns:xi="http://www.w3.org/2001/XInclude">
[... omissis...]

At last, when you need it, use &custom_synonyms;

For example:

<filter class="solr.SynonymFilterFactory" synonyms="&custom_synonyms;" ... />

Hope this helps.

Best,
Vincenzo




On Mon, Feb 1, 2016 at 11:22 AM, Janit Anjaria  wrote:

> Hi,
>
> We had a similar problem. We solved it by splitting up the file into
> various
> files < 1MB.
>
> In our case, the comma separated list of synonym file names did not work.
> So
> we actually had consecutive synonym filters defined in the required order
> with the corresponding synonym file name. I think this should solve the
> problem.
>
> Janit.
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/SolrCloud-with-large-synonym-files-tp3473568p4254370.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>



-- 
Vincenzo D'Amore
email: v.dam...@gmail.com
skype: free.dev
mobile: +39 349 8513251


Re: URI is too long

2016-02-01 Thread Salman Ansari
I tried using POST but faced an issue where I am still not able to send
long data. When I send data in the body that exceeds 35KB I get the
following exception:

"An exception of type 'SolrNet.Exceptions.SolrConnectionException' occurred
in [Myproject] but was not handled in user code

Additional information: The request was aborted: The connection was closed
unexpectedly."

Any ideas why this is happening and how to resolve this?

Regards,
Salman


On Mon, Feb 1, 2016 at 2:15 PM, Upayavira  wrote:

> POST is supposed (as defined by REST) to imply a request with
> side-effects. A query as such does not have side effects, so
> conceptually, it should be a GET. In practice, whilst it might cause
> some developers to grumble, using a POST for a request should make no
> difference to Solr (other than accepting a larger query).
>
> Upayavira
>
> On Mon, Feb 1, 2016, at 11:05 AM, Midas A wrote:
> > Is there any drawback of POST request and why we prefer GET.
> >
> > On Mon, Feb 1, 2016 at 1:08 PM, Salman Ansari 
> > wrote:
> >
> > > Cool. I would give POST a try. Any samples of using Post while passing
> the
> > > query string values (such as ORing between Solr field values) using
> > > Solr.NET?
> > >
> > > Regards,
> > > Salman
> > >
> > > On Sun, Jan 31, 2016 at 10:21 PM, Shawn Heisey 
> > > wrote:
> > >
> > > > On 1/31/2016 7:20 AM, Salman Ansari wrote:
> > > > > I am building a long query containing multiple ORs between query
> > > terms. I
> > > > > started to receive the following exception:
> > > > >
> > > > > The remote server returned an error: (414) Request-URI Too Long.
> Any
> > > idea
> > > > > what is the limit of the URL in Solr? Moreover, as a solution I was
> > > > > thinking of chunking the query into multiple requests but I was
> > > wondering
> > > > > if anyone has a better approach?
> > > >
> > > > The default HTTP header size limit on most webservers and containers
> > > > (including the Jetty that ships with Solr) is 8192 bytes.  A typical
> > > > request like this will start with "GET " and end with " HTTP/1.1",
> which
> > > > count against that 8192 bytes.  The max header size can be increased.
> > > >
> > > > If you place the parameters into a POST request instead of on the
> URL,
> > > > then the default size limit of that POST request in Solr is 2MB.
> This
> > > > can also be increased.
> > > >
> > > > Thanks,
> > > > Shawn
> > > >
> > > >
> > >
>


facet on min of multi valued field

2016-02-01 Thread Midas A
Hi,
we want to run a facet query on the minimum value of a multi-valued field.


Regards,
Abhishek Tiwari


Re: URI is too long

2016-02-01 Thread Upayavira
POST is supposed (as defined by REST) to imply a request with
side-effects. A query as such does not have side effects, so
conceptually, it should be a GET. In practice, whilst it might cause
some developers to grumble, using a POST for a request should make no
difference to Solr (other than accepting a larger query).

Upayavira

On Mon, Feb 1, 2016, at 11:05 AM, Midas A wrote:
> Is there any drawback of POST request and why we prefer GET.
> 
> On Mon, Feb 1, 2016 at 1:08 PM, Salman Ansari 
> wrote:
> 
> > Cool. I would give POST a try. Any samples of using Post while passing the
> > query string values (such as ORing between Solr field values) using
> > Solr.NET?
> >
> > Regards,
> > Salman
> >
> > On Sun, Jan 31, 2016 at 10:21 PM, Shawn Heisey 
> > wrote:
> >
> > > On 1/31/2016 7:20 AM, Salman Ansari wrote:
> > > > I am building a long query containing multiple ORs between query
> > terms. I
> > > > started to receive the following exception:
> > > >
> > > > The remote server returned an error: (414) Request-URI Too Long. Any
> > idea
> > > > what is the limit of the URL in Solr? Moreover, as a solution I was
> > > > thinking of chunking the query into multiple requests but I was
> > wondering
> > > > if anyone has a better approach?
> > >
> > > The default HTTP header size limit on most webservers and containers
> > > (including the Jetty that ships with Solr) is 8192 bytes.  A typical
> > > request like this will start with "GET " and end with " HTTP/1.1", which
> > > count against that 8192 bytes.  The max header size can be increased.
> > >
> > > If you place the parameters into a POST request instead of on the URL,
> > > then the default size limit of that POST request in Solr is 2MB.  This
> > > can also be increased.
> > >
> > > Thanks,
> > > Shawn
> > >
> > >
> >


Re: SOLR-6690

2016-02-01 Thread Joel Bernstein
This issue has not been fixed yet. It does not appear that it's being
worked on at the moment.

Joel Bernstein
http://joelsolr.blogspot.com/

On Mon, Feb 1, 2016 at 3:43 AM, Anil  wrote:

> HI,
>
> was there any fix for https://issues.apache.org/jira/browse/SOLR-6690 ? or
> any ETA ?
>
> Thanks.
>
> Regards,
> Anil
>


Re: URI is too long

2016-02-01 Thread Midas A
Is there any drawback of POST request and why we prefer GET.

On Mon, Feb 1, 2016 at 1:08 PM, Salman Ansari 
wrote:

> Cool. I would give POST a try. Any samples of using Post while passing the
> query string values (such as ORing between Solr field values) using
> Solr.NET?
>
> Regards,
> Salman
>
> On Sun, Jan 31, 2016 at 10:21 PM, Shawn Heisey 
> wrote:
>
> > On 1/31/2016 7:20 AM, Salman Ansari wrote:
> > > I am building a long query containing multiple ORs between query
> terms. I
> > > started to receive the following exception:
> > >
> > > The remote server returned an error: (414) Request-URI Too Long. Any
> idea
> > > what is the limit of the URL in Solr? Moreover, as a solution I was
> > > thinking of chunking the query into multiple requests but I was
> wondering
> > > if anyone has a better approach?
> >
> > The default HTTP header size limit on most webservers and containers
> > (including the Jetty that ships with Solr) is 8192 bytes.  A typical
> > request like this will start with "GET " and end with " HTTP/1.1", which
> > count against that 8192 bytes.  The max header size can be increased.
> >
> > If you place the parameters into a POST request instead of on the URL,
> > then the default size limit of that POST request in Solr is 2MB.  This
> > can also be increased.
> >
> > Thanks,
> > Shawn
> >
> >
>


Re: Solr segment merging in different replica

2016-02-01 Thread Emir Arnautovic

Hi Edwin,
What is your setup - SolrCloud or Master-Slave? If it is SolrCloud, then
under normal index updates each core behaves as an independent index.
In theory, if all changes happen at the same time on all nodes, merges
will happen at the same time. But that is not realistic, and they are
expected to happen at slightly different times.
If you are running Master-Slave, then new segments will be copied from 
master to slave.


Regards,
Emir

--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/



On 01.02.2016 11:56, Zheng Lin Edwin Yeo wrote:

Hi,

I would like to check, during segment merging, how did the replical node do
the merging?
Will it do the merging concurrently, or will the replica node delete the
old segment and replace the new one?

Also, is it possible to separate the network interface for inter-node
communication from the network interface for update/search requests?
If so I could put two network cards in each machine and route the index and
search traffic over the first interface and the traffic for the inter-node
communication (sending documents to replicas) over the second interface.

I'm using Solr 5.4.0

Regards,
Edwin





Solr segment merging in different replica

2016-02-01 Thread Zheng Lin Edwin Yeo
Hi,

I would like to check: during segment merging, how does the replica node do
the merging?
Will it do the merging concurrently, or will the replica node delete the
old segment and replace it with the new one?

Also, is it possible to separate the network interface for inter-node
communication from the network interface for update/search requests?
If so I could put two network cards in each machine and route the index and
search traffic over the first interface and the traffic for the inter-node
communication (sending documents to replicas) over the second interface.

I'm using Solr 5.4.0

Regards,
Edwin


Re: SolrCloud with large synonym files

2016-02-01 Thread Janit Anjaria
Hi,

We had a similar problem. We solved it by splitting up the file into various
files < 1MB. 

In our case, the comma separated list of synonym file names did not work. So
we actually had consecutive synonym filters defined in the required order
with the corresponding synonym file name. I think this should solve the
problem.

Janit.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-with-large-synonym-files-tp3473568p4254370.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: use /update in the Gui admin interface

2016-02-01 Thread Erik Hatcher
JJM - use the “Documents” tab in the admin UI instead of the “Query” one.

Erik
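
In the Documents tab, pick the Document Type "JSON" (or "Solr Command (raw
XML or JSON)") and submit an atomic update. A sketch, assuming the uniqueKey
field is "id" and the document's key is "doc1" (adjust to your schema):

[{"id":"doc1", "myid":{"set":14}}]

Note that atomic updates require the document's fields to be stored (apart
from copyField destinations) so Solr can rebuild the rest of the document.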



> On Feb 1, 2016, at 3:10 AM, Jean-Jacques MONOT  wrote:
> 
> Hello
> 
> I am using the GUI admin interface for the SOLR java server.
> 
> No problem to make "classical" query with the /select request handler.
> 
> But now, I would like to make an update on a selected document : modify
> the value of a field.
> 
> How should I do ?
> 
> I think I should use :
> - /update  : for the request handler
> - id="" :  for the q field (in order to select the doc)
> 
> but  I do not see how to place a "set" to the field that I have added to
> the schema and that is in the field list of my doc ?
> 
> example, I would like to be able to do like in json on the selected doc :
> 
> myid : {set : 14}
> 
> JJM
> 
> 



Re: Restoring backups of solrcores

2016-02-01 Thread vidya
Hi 

How can that be useful? Can you please explain?
I want to have the same collection name every time I index data, i.e.,
current_collection.

With collection aliasing, I can create a new collection and point my alias
(say ALIAS) to the new collection, but I cannot rename that collection to
the same current_collection that I created and indexed the previous week.

So, are you asking me to create a collection with whatever name I want,
point an alias with the name I want at it, switch that alias to each new
collection I create, and query using the alias name?
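
(For reference, repointing an alias is one Collections API call; a sketch
with placeholder names:

http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=ALIAS&collections=new_collection

Calling CREATEALIAS again with the same alias name switches it to the new
collection.)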

Please help me on this.

Thanks in advance



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Restoring-backups-of-solrcores-tp4254080p4254366.html
Sent from the Solr - User mailing list archive at Nabble.com.


SOLR-6690

2016-02-01 Thread Anil
Hi,

was there any fix for https://issues.apache.org/jira/browse/SOLR-6690 ? or
any ETA ?

Thanks.

Regards,
Anil


Re: alternative forum for SOLR user

2016-02-01 Thread Binoy Dalal
This is the forum if you want help. There are additional forums for dev and
other discussions.
Check it out here: lucene.apache.org/solr/resources.html

If you are looking for the archives, just Google "solr user list archive".

On Mon, 1 Feb 2016, 13:43 Jean-Jacques MONOT  wrote:

> Hello
>
> I am a newbie with SOLR and just registered to this mailing list.
>
> Is there an alternative forum for SOLR user ? I am using this mailing
> list for support, but did not find "real" web forum.
>
> JJM
>
>
> --
Regards,
Binoy Dalal


alternative forum for SOLR user

2016-02-01 Thread Jean-Jacques MONOT

Hello

I am a newbie with SOLR and just registered to this mailing list.

Is there an alternative forum for SOLR users? I am using this mailing
list for support, but did not find a "real" web forum.

JJM




use /update in the Gui admin interface

2016-02-01 Thread Jean-Jacques MONOT

Hello

I am using the GUI admin interface for the SOLR java server.

No problem making "classical" queries with the /select request handler.

But now I would like to make an update on a selected document: modify
the value of a field.

How should I do this?

I think I should use:
- /update : for the request handler
- id="" : for the q field (in order to select the doc)

but I do not see how to apply a "set" to the field that I have added to
the schema and that is in the field list of my doc.

For example, I would like to be able to do the following in JSON on the
selected doc:

myid : {set : 14}

JJM
