Re: SolrCloud High Availability during indexing operation

2013-09-26 Thread Saurabh Saxena
Sorry for the late reply.

All the documents have unique ids. If I repeat the experiment, the number of
docs indexed changes (I guess it depends on when I shut down a particular
shard). When I do the experiment without shutting down the leader shards, all
80k docs get indexed (which I think proves that all documents are valid).

I need to dig through the logs to find error messages. Also, I am not tracking
the curl return codes; I will run again and reply.

Regards,
Saurabh


On Wed, Sep 25, 2013 at 3:01 AM, Erick Erickson wrote:

> And do any of the documents have the same <uniqueKey>, which
> is usually called "id"? Subsequent adds of docs with the same
> <uniqueKey> replace the earlier one.
>
> It's not definitive because it changes as merges happen (old copies
> of docs that have been deleted or updated will be purged), but what
> does your admin page show for "maxDoc"? If it's more than "numDocs"
> then you have duplicate <uniqueKey>s. NOTE: if you optimize
> (which you usually shouldn't) then maxDoc and numDocs will be
> the same, so if you test this, don't optimize.
>
> Best,
> Erick
>
>
> On Tue, Sep 24, 2013 at 10:43 AM, Walter Underwood
>  wrote:
> > Did all of the curl update commands return success? Any errors in the
> logs?
> >
> > wunder
> >
> > On Sep 24, 2013, at 6:40 AM, Otis Gospodnetic wrote:
> >
> >> Is it possible that some of those 80K docs were simply not valid? e.g.
> >> had a wrong field, had a missing required field, anything like that?
> >> What happens if you clear this collection and just re-run the same
> >> indexing process and do everything else the same?  Still some docs
> >> missing?  Same number?
> >>
> >> And what if you take 1 document that you know is valid and index it
> >> 80K times, with a different ID, of course?  Do you see 80K docs in the
> >> end?
> >>
> >> Otis
> >> --
> >> Solr & ElasticSearch Support -- http://sematext.com/
> >> Performance Monitoring -- http://sematext.com/spm
> >>
> >>
> >>
> >> On Tue, Sep 24, 2013 at 2:45 AM, Saurabh Saxena 
> wrote:
> >>> Doc count did not change after I restarted the nodes. I am doing a
> single
> >>> commit after all 80k docs. Using Solr 4.4.
> >>>
> >>> Regards,
> >>> Saurabh
> >>>
> >>>
> >>> On Mon, Sep 23, 2013 at 6:37 PM, Otis Gospodnetic <
> >>> otis.gospodne...@gmail.com> wrote:
> >>>
>  Interesting. Did the doc count change after you started the nodes
> again?
>  Can you tell us about commits?
>  Which version? 4.5 will be out soon.
> 
>  Otis
>  Solr & ElasticSearch Support
>  http://sematext.com/
>  On Sep 23, 2013 8:37 PM, "Saurabh Saxena" 
> wrote:
> 
> > Hello,
> >
> > I am testing High Availability feature of SolrCloud. I am using the
> > following setup
> >
> > - 8 linux hosts
> > - 8 Shards
> > - 1 leader, 1 replica / host
> > - Using Curl for update operation
> >
> > I tried to index 80K documents on replicas (10K/replica in parallel).
> > During indexing process, I stopped 4 Leader nodes. Once indexing is
> done,
> > out of 80K docs only 79808 docs are indexed.
> >
> > Is this an expected behaviour ? In my opinion replica should take
> care of
> > indexing if leader is down.
> >
> > If this is an expected behaviour, any steps that can be taken from
> the
> > client side to avoid such a situation.
> >
> > Regards,
> > Saurabh Saxena
> >
> 
> >
> > --
> > Walter Underwood
> > wun...@wunderwood.org
> >
> >
> >
>
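
As an aside, the overwrite behavior Erick describes can be sketched with a toy
model (plain Python, not Solr code): documents are keyed by their uniqueKey, so
re-adding an id replaces the earlier doc and numDocs ends up lower than the
number of add operations.

```python
# Toy model of uniqueKey overwrites: later adds with the same id
# replace earlier documents, so numDocs < number of add operations.
def index_docs(docs):
    index = {}
    for doc in docs:
        index[doc["id"]] = doc  # same id: the earlier doc is replaced
    return {"numDocs": len(index), "adds": len(docs)}

stats = index_docs([
    {"id": "1", "title": "first"},
    {"id": "2", "title": "second"},
    {"id": "1", "title": "first, updated"},  # duplicate id
])
print(stats)  # {'numDocs': 2, 'adds': 3}
```

If the 80K docs really do have distinct ids, this effect cannot explain the
gap, which is why comparing maxDoc against numDocs as suggested above is the
quicker check.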


Re: Search statistics in category scale

2013-09-26 Thread Otis Gospodnetic
Hi Marina,

I think you can get to these numbers in (only?) two ways:
1) at query time, for each suggestion hit the index, get the number of
hits, and show it in the UI
2) at "suggestion data structure creation time" query the index for
each suggestion, get the number of hits, include it in the lookup data
structure, and retrieve it from there for the UI.

1) will give you accurate counts, but it can result in a lot of
hits to the main index - i.e., performance will suffer.
2) will not give you accurate counts IFF your index is regularly being
updated. If the index is static for a good period of time then counts
will be correct.  This would also have to hit the index to get the
counts, but only while the suggestion lookup structure is being built,
not at search time - i.e., you control when you hit the index and not
all users suffer.

http://sematext.com/products/autocomplete can be made to do 2).
To do 1) you'd implement that in your search app.

Otis
--
Solr & ElasticSearch Support -- http://sematext.com/
Performance Monitoring -- http://sematext.com/spm



On Tue, Sep 24, 2013 at 8:52 AM, Marina  wrote:
> I need to implement further functionality; a picture of it is attached below.
> I have an already-running application based on Solr search.
> In a few words: the drop-down will contain similar search phrases
> within a concrete category and the number of items found.
> Does Solr provide a way to collect such data and retrieve it?
>
>
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Search-statistics-in-category-scale-tp4091734.html
> Sent from the Solr - User mailing list archive at Nabble.com.
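
Otis's option 2), precomputing hit counts while building the suggestion
structure, can be sketched roughly like this (toy documents stand in for the
real index; a production build would issue one Solr query per suggestion phrase
and record numFound):

```python
# Sketch of "suggestion data structure creation time" counting:
# each suggestion phrase is counted once, at build time.
def build_suggestions(docs, phrases):
    lookup = {}
    for phrase in phrases:
        # Stand-in for a per-phrase Solr query returning numFound.
        lookup[phrase] = sum(1 for d in docs if phrase in d["text"])
    return lookup

docs = [{"text": "red shoes"}, {"text": "red shirt"}, {"text": "blue shoes"}]
suggestions = build_suggestions(docs, ["red", "shoes"])
print(suggestions)  # {'red': 2, 'shoes': 2}
```

As noted above, these counts drift as the index changes, so the lookup
structure has to be rebuilt on whatever schedule keeps them acceptably fresh.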


Re: ALIAS feature, can be used for what?

2013-09-26 Thread Otis Gospodnetic
Hi,

Imagine you have an index and you need to reindex your data into a new
index, but don't want to have to reconfigure or restart client apps
when you want to point them to the new index.  This is where aliases
come in handy.  If you created an alias for the first index and made
your apps hit that alias, then you can just repoint the same alias to
your new index and avoid having to touch client apps.

No, I don't think you can write to multiple collections through a single alias.

Otis
--
Solr & ElasticSearch Support -- http://sematext.com/
Performance Monitoring -- http://sematext.com/spm



On Thu, Sep 26, 2013 at 6:34 AM, yriveiro  wrote:
> Today I was thinking about the ALIAS feature and the utility on Solr.
>
> Can anyone explain to me, with an example, where this feature may be useful?
>
> Is it possible to have an ALIAS of multiple collections? If I do a write to
> the alias, is this write applied to all collections?
>
> /Yago
>
>
>
> -
> Best regards
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/ALIAS-feature-can-be-used-for-what-tp4092095.html
> Sent from the Solr - User mailing list archive at Nabble.com.
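
For concreteness, (re)pointing an alias is a single Collections API call:
CREATEALIAS with an existing alias name simply repoints it. A small sketch that
builds the request URL (the host, alias, and collection names are made up):

```python
from urllib.parse import urlencode

def create_alias_url(base, alias, collections):
    # CREATEALIAS both creates and repoints: calling it again with a new
    # collection list moves the alias without touching client apps.
    params = {"action": "CREATEALIAS", "name": alias,
              "collections": ",".join(collections)}
    return base + "/admin/collections?" + urlencode(params)

url = create_alias_url("http://localhost:8983/solr", "products", ["products_v2"])
print(url)
# http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=products&collections=products_v2
```

Reads through an alias that spans multiple collections work; as Otis says
above, writes through such an alias are not supported.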


Re: Solr Autocomplete with "did you means" functionality handle misspell word like google

2013-09-26 Thread Otis Gospodnetic
Hi,

Not sure if Solr suggester can do this (can it, anyone?), but...
shameless plug... I know
http://sematext.com/products/autocomplete/index.html can do that.

Otis
--
Solr & ElasticSearch Support -- http://sematext.com/
Performance Monitoring -- http://sematext.com/spm



On Thu, Sep 26, 2013 at 8:26 AM, Suneel Pandey  wrote:
> 
>
> Hi,
>
> I have implemented autocomplete and it's working fine, but I want to implement
> autosuggestion like Google (see the screen above). When someone types a
> misspelled word, suggestions should be shown, e.g.: cmputer => computer.
>
>
> Please help me.
>
>
>
>
>
>
> -
> Regards,
>
> Suneel Pandey
> Sr. Software Developer
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Solr-Autocomplete-with-did-you-means-functionality-handle-misspell-word-like-google-tp4092127.html
> Sent from the Solr - User mailing list archive at Nabble.com.


Re: * not working in Phrase Search in solar 4.4

2013-09-26 Thread Otis Gospodnetic
There is an oldish JIRA issue (with patches?) for "complex
queries" https://issues.apache.org/jira/browse/SOLR-1604

Otis
--
Solr & ElasticSearch Support -- http://sematext.com/
Performance Monitoring -- http://sematext.com/spm



On Thu, Sep 26, 2013 at 12:15 PM, soumikghosh05
 wrote:
> Hi,
>
> I have a doc that contains "Hello World" in the title field and title is of
> type of text_general.
>
> When I am searching with
>
>  title:"Hello Wo*" -- not returning
>  title:"Hello World" -- returning
>
> Could you please explain what I am missing? I have used
> WhitespaceTokenizerFactory instead of StandardTokenizerFactory. But no luck.
>
>  positionIncrementGap="100">
>   
> 
>  words="stopwords.txt" />
> 
>   
>   
> 
>  words="stopwords.txt" />
>  ignoreCase="true" expand="true"/>
> 
>   
>
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/not-working-in-Phrase-Search-in-solar-4-4-tp4092205.html
> Sent from the Solr - User mailing list archive at Nabble.com.


Re: ContributorsGroup

2013-09-26 Thread JavaOne
Yes - that is me.

mikelabib is my Jira user. Thanks for asking. 

Sent from my iPhone

On Sep 26, 2013, at 7:32 PM, Erick Erickson  wrote:

> Hmmm, did Stefan add you correctly? I see MichaelLabib as a
> contributor, but not mikelabib...
> 
> Best
> Erick
> 
> On Thu, Sep 26, 2013 at 1:20 PM, Mike L.  wrote:
>> 
>> ah sorry! its: mikelabib
>> 
>> thanks!
>> 
>> From: Stefan Matheis 
>> To: solr-user@lucene.apache.org
>> Sent: Thursday, September 26, 2013 12:05 PM
>> Subject: Re: ContributorsGroup
>> 
>> 
>> Mike
>> 
>> To add you as Contributor i'd need to know your Username? :)
>> 
>> Stefan
>> 
>> 
>> On Thursday, September 26, 2013 at 6:50 PM, Mike L. wrote:
>> 
>>> 
>>> Solr Admins,
>>> 
>>> I've been using Solr for the last couple years and would like to 
>>> contribute to this awesome project. Can I be added to the Contributorsgroup 
>>> with also access to update the Wiki?
>>> 
>>> Thanks in advance.
>>> 
>>> Mike L.
>>> 
>>> 


Re: cold searcher

2013-09-26 Thread Erick Erickson
Upping the number of concurrent warming searchers is almost always the
wrong thing to do. I'd lengthen the polling interval or the commit interval.
Throwing away warming searchers is uselessly consuming resources. And
if you're trying to do any filter queries, your caches will almost never be
used since you're throwing them away so often.

Best,
Erick

On Thu, Sep 26, 2013 at 3:52 PM, Shawn Heisey  wrote:
> On 9/26/2013 10:56 AM, Dmitry Kan wrote:
>>
>> Btw, related to master-slave setup. What makes read-only slave not to come
>> across the same issue? Would it not pull data from the master and warm up
>> searchers? Or does it do updates in a more controlled fashion that makes
>> it
>> avoid these issues?
>
>
> Most people have the slave pollInterval configured on an interval that's
> pretty long, like 15 seconds to several minutes -- much longer than a
> typical searcher warming time.
>
> For a slave, new searchers are only created when there is a change copied
> over from the master.  There may be several master-side commits that happen
> during the pollInterval, but the slave won't see all of those.
>
> Thanks,
> Shawn
>


Re: Not able to index documents using CloudSolrServer

2013-09-26 Thread Erick Erickson
this is key: openSearcher=false

That means your commit did, indeed, fire but no new
searcher was opened so the document remains invisible.

If you configure softCommit, that will make the doc visible
after the interval, or if you set openSearcher=true in your
commit configuration.

Or, as you noted, if you issue an explicit commit.

You might find this useful:

http://searchhub.org/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/

Best,
Erick

On Thu, Sep 26, 2013 at 3:22 PM, shamik  wrote:
> Just an update, I finally saw the documents getting indexed. But it happened
> after 4-5 hours since I had used CloudSolrServer to send the documents to Solr.
> Is there any configuration change required? I have 2 nodes with a
> replica each and a single zookeeper instance.
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Not-able-to-index-documents-using-CloudSolrServer-tp4092074p4092238.html
> Sent from the Solr - User mailing list archive at Nabble.com.
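
The distinction Erick draws maps onto solrconfig.xml roughly as follows (a
sketch; the intervals are arbitrary examples): the hard commit persists data
but opens no searcher, while the soft commit is what makes new documents
visible.

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>60000</maxTime>           <!-- hard commit for durability -->
    <openSearcher>false</openSearcher> <!-- no new searcher: docs stay invisible -->
  </autoCommit>
  <autoSoftCommit>
    <maxTime>5000</maxTime>            <!-- soft commit: visibility after ~5s -->
  </autoSoftCommit>
</updateHandler>
```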


Re: * not working in Phrase Search in solar 4.4

2013-09-26 Thread Erick Erickson
Certain parts of the analysis chain can be
included in wildcard processing: anything
implementing MultiTermAware. See:
http://wiki.apache.org/solr/MultitermQueryAnalysis

But phrases are different, and as Shawn says the
analysis chain isn't applied similarly. The data is
lowercased, but the wildcard is stripped.

Best,
Erick

On Thu, Sep 26, 2013 at 3:09 PM, Shawn Heisey  wrote:
> On 9/26/2013 10:15 AM, soumikghosh05 wrote:
>>
>> I have a doc that contains "Hello World" in the title field and title is
>> of
>> type of text_general.
>>
>> When I am searching with
>>
>>   title:"Hello Wo*" -- not returning
>>   title:"Hello World" -- returning
>
>
> When you use wildcards, your analysis chain is not used, so the query text
> won't be lowercased, which means that it won't match what's been indexed.
>
> You only included the index analyzer in what you pasted, but the fact that
> it works without the wildcard suggests that your query analyzer has the
> lowercase filter as well ... but with a wildcard search, that doesn't get
> used.  You'll need to handle lowercasing your query terms yourself when
> wildcards are part of the picture.
>
> I seem to remember something being discussed for fixing this a while back,
> but I don't remember whether anything was actually implemented.
>
> Thanks,
> Shawn
>
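
A client-side workaround in the spirit of Shawn's advice might look like this
(a naive sketch: it lowercases the whole term when wildcards are present, which
is roughly what the skipped LowerCaseFilter would have done):

```python
def prepare_wildcard_term(term):
    # The analysis chain is skipped for wildcard queries, so mimic the
    # lowercase filter on the client before sending the query.
    if "*" in term or "?" in term:
        return term.lower()
    return term

print(prepare_wildcard_term("Hello Wo*"))  # hello wo*
print(prepare_wildcard_term("World"))      # World (no wildcard: analyzer applies)
```

Note this only helps plain wildcard terms; inside a quoted phrase the wildcard
is stripped anyway, as Erick points out above.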


Re: Doing time sensitive search in solr

2013-09-26 Thread Otis Gospodnetic
Hi Darniz,

Just put the date in a separate field and add a range query on that field
to your existing query.

Otis
Solr & ElasticSearch Support
http://sematext.com/
On Sep 26, 2013 7:53 PM, "Darniz"  wrote:

> hello Users,
>
> i have a requirement where my content should be searched based upon time. For
> example, below is our content in our cms.
> 
> Sept content : Honda is releasing the car this month
> 
>
> 
> Dec content : Toyota is releasing the car this month
> 
>
> On the website, based upon time, we display the content. On the solr side,
> until now we were indexing all entries elements in Solr in a text field. Now
> after we introduced time sensitive information in our cms, i need to make
> sure that
> if someone queries for the word "Toyota" it does NOT come up in my search
> results
> since that content is going live in dec.
>
> The solr text field looks something like
> 
> Honda is releasing the car this month
> Toyota is releasing this month
> 
>
> is there a way we can search the text field or append any meta data to the
> text field based on date.
>
> i hope i have made the issue clear. i kind of don't agree with this kind of
> practice but our requirement is pretty peculiar since we don't want to
> reindex data again and again.
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Doing-time-sensitive-search-in-solr-tp4092273.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
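
In client terms, Otis's suggestion amounts to indexing a go-live date and
filtering on it at query time (the field name golive_date is invented; the
[* TO NOW] range is standard Solr date-range syntax):

```python
def live_content_params(user_query):
    # Filter out documents whose (hypothetical) golive_date is in the future.
    return {
        "q": user_query,
        "fq": "golive_date:[* TO NOW]",
    }

params = live_content_params("Toyota")
print(params["fq"])  # golive_date:[* TO NOW]
```

The catch, as the poster notes, is that each dated entry then needs its own
document (or at least its own dated field) rather than one concatenated text
field.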


Re: ContributorsGroup

2013-09-26 Thread Erick Erickson
Hmmm, did Stefan add you correctly? I see MichaelLabib as a
contributor, but not mikelabib...

Best
Erick

On Thu, Sep 26, 2013 at 1:20 PM, Mike L.  wrote:
>
> ah sorry! its: mikelabib
>
> thanks!
>
> From: Stefan Matheis 
> To: solr-user@lucene.apache.org
> Sent: Thursday, September 26, 2013 12:05 PM
> Subject: Re: ContributorsGroup
>
>
> Mike
>
> To add you as Contributor i'd need to know your Username? :)
>
> Stefan
>
>
> On Thursday, September 26, 2013 at 6:50 PM, Mike L. wrote:
>
>>
>> Solr Admins,
>>
>>  I've been using Solr for the last couple years and would like to 
>> contribute to this awesome project. Can I be added to the Contributorsgroup 
>> with also access to update the Wiki?
>>
>> Thanks in advance.
>>
>> Mike L.
>>
>>


Re: Sorting dependent on user preferences with FunctionQuery

2013-09-26 Thread Erick Erickson
You could also group by this field (aka "field collapsing") and then
have the front end put whichever group you want first.

Best,
Erick

On Thu, Sep 26, 2013 at 9:41 AM, Ing. Jorge Luis Betancourt Gonzalez
 wrote:
> I think you could use boosting queries: for group A you boost one category 
> and for group B some other category.
>
> - Mensaje original -
> De: "Snubbel" 
> Para: solr-user@lucene.apache.org
> Enviados: Jueves, 26 de Septiembre 2013 8:01:36
> Asunto: Sorting dependent on user preferences with FunctionQuery
>
> Hello,
>
> I want to present to different user groups a search result in different
> orders.
> Say, i have customer group A, which I know prefers Books, I want to get
> Books at the top of my query result, DVDs at the bottom.
> And for group B, preferring DVD, these first.
> In my index I have a field of type text named "category" with values Book
> and DVD.
>
> I thought maybe I could solve this with QueryFunctions, maybe like this:
>
>
> select?q=*%3A*&sort=query(qf=category v='Book')desc
>
> but Solr returns "Can't determine a Sort Order (asc or desc) in sort".
>
> What is wrong? I tried different ways of formulating the query without
> success...
>
>
> Or, does anyone have a better idea how to solve this?
>
> Best regards, Nikola
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Sorting-dependent-on-user-preferences-with-FunctionQuery-tp4092119.html
> Sent from the Solr - User mailing list archive at Nabble.com.
> 
> III International Winter School at the UCI, February 17 to 28, 2014.
> See www.uci.cu
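
One way to wire up the boost-query suggestion per user group on the client
(edismax's bq parameter is real Solr; the group-to-category mapping is invented
for illustration):

```python
GROUP_BOOSTS = {
    "A": "category:Book^10",  # group A prefers Books
    "B": "category:DVD^10",   # group B prefers DVDs
}

def search_params(query, group):
    # Boosting floats the preferred category to the top without
    # excluding the other category from the results.
    params = {"q": query, "defType": "edismax"}
    if group in GROUP_BOOSTS:
        params["bq"] = GROUP_BOOSTS[group]
    return params

print(search_params("*:*", "A")["bq"])  # category:Book^10
```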


Re: Xml file is not inserting from code java -jar post.jar *.xml

2013-09-26 Thread Erick Erickson
Solr does not index arbitrary XML, it only indexes
XML in a very specific format. You haven't
shown an example of what you're trying to index.

See the examples in example/exampledocs for the
format required.

Best,
Erick

On Thu, Sep 26, 2013 at 8:32 AM, Furkan KAMACI  wrote:
> You should start to read from here:
> http://lucene.apache.org/solr/4_4_0/tutorial.html
>
>
> 2013/9/26 Kishan Parmar 
>
>>
>> http://www.coretechnologies.com/products/AlwaysUp/Apps/RunApacheSolrAsAService.html
>> \
>>
>> this is the link from where i found the solr installation
>>
>> Regards,
>>
>> Kishan Parmar
>> Software Developer
>> +91 95 100 77394
>> Jay Shree Krishnaa !!
>>
>>
>>
>> On Thu, Sep 26, 2013 at 1:13 PM, Kishan Parmar 
>> wrote:
>>
>> > i am not using tomcat  but i am using alwaysup software to run the solr
>> > system.
>> > it is working perfectly
>> >
>> > but i cannot add my xml file to the index. i changed my schema file as per
>> > requirement of my xml file ...
>> > and also
>> > i am using this command to insert xml to index
>> >
>> >
>> > java -Durl=http://localhost:8983/solr/core0/update -jar post.jar *.xml
>> >
>> > but it gives an error, and if i write java -jar post.jar *.xml then it
>> > indexes the data but in another core, collection1
>> > and
>> > there is an error also in it that "no dataimport handler is found";
>> > so what can i do for this problems
>> >
>> > Regards,
>> >
>> > Kishan Parmar
>> > Software Developer
>> > +91 95 100 77394
>> > Jay Shree Krishnaa !!
>> >
>> >
>> >
>> > On Sun, Sep 22, 2013 at 8:53 PM, Erick Erickson > >wrote:
>> >
>> >> Please review:
>> >>
>> >> http://wiki.apache.org/solr/UsingMailingLists
>> >>
>> >> Best,
>> >> Erick
>> >>
>> >> On Sun, Sep 22, 2013 at 8:06 AM, Jack Krupansky <
>> j...@basetechnology.com>
>> >> wrote:
>> >> > Did you start Solr? How did you verify that Solr is running? Are you
>> >> able to
>> >> > query Solr and access the Admin UI?
>> >> >
>> >> > Most importantly, did you successfully complete the standard Solr
>> >> tutorial?
>> >> > (IOW, you know all the necessarily steps for basic operation of Solr.)
>> >> >
>> >> > Lastly, did you verify (by examining the log) whether Solr was able to
>> >> > successfully load your schema changes without errors?
>> >> >
>> >> > -- Jack Krupansky
>> >> >
>> >> > -Original Message- From: Kishan Parmar
>> >> > Sent: Sunday, September 22, 2013 9:56 AM
>> >> > To: solr-user@lucene.apache.org
>> >> > Subject: Xml file is not inserting from code java -jar post.jar *.xml
>> >> >
>> >> >
>> >> > hi
>> >> >
>> >> > i am new user of Solr i have done my schema file and when i write a
>> >> code to
>> >> > insert an xml file to the index from cmd: java -jar post.jar *.xml
>> >> >
>> >> > it gives us an error: solr returned error 404 not found
>> >> >
>> >> > what can i do???
>> >> >
>> >> >
>> >> > Regards,
>> >> >
>> >> > Kishan Parmar
>> >> > Software Developer
>> >> > +91 95 100 77394
>> >> > Jay Shree Krishnaa !!
>> >>
>> >
>> >
>>
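
For reference, the XML update format post.jar expects looks like the files in
example/exampledocs; roughly this shape (field names must match your schema):

```xml
<add>
  <doc>
    <field name="id">doc1</field>
    <field name="title">An example document</field>
  </doc>
</add>
```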


Re: autocomplete_edge type split words

2013-09-26 Thread Erick Erickson
This is a classic issue where there's confusion between
the query parser and field analysis.

Early in the process the query parser has to take the input
and break it up. that's how, for instance, a query like
text:term1 term2
gets parsed as
text:term1 defaultfield:term2
This happens long before the terms get to the analysis chain
for the field.

So your only options are to either quote the string or
escape the spaces.

Best,
Erick

On Wed, Sep 25, 2013 at 9:24 AM, elisabeth benoit
 wrote:
> Hello,
>
> I am using solr 4.2.1 and I have a autocomplete_edge type defined in
> schema.xml
>
>
> 
>   
>  mapping="mapping-ISOLatin1Accent.txt"/>
> 
> 
>  replacement=" " replace="all"/>
>  minGramSize="1"/>
>
>   
>  mapping="mapping-ISOLatin1Accent.txt"/>
> 
> 
>  replacement=" " replace="all"/>
>   pattern="^(.{30})(.*)?" replacement="$1" replace="all"/>
>   
> 
>
> When I have a request with more then one word, for instance "rue de la", my
> request doesn't match with my autocomplete_edge field unless I use quotes
> around the query. In other words q=rue de la doesnt work and q="rue de la"
> works.
>
> I've check the request with debugQuery=on, and I can see in first case, the
> query is splitted into words, and I don't understand why since my field
> type uses KeywordTokenizerFactory.
>
> Does anyone have a clue on how I can request my field without using quotes?
>
> Thanks,
> Elisabeth
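
Of Erick's two options, the escaping variant can be done client-side; a minimal
sketch (backslash-escaping spaces so the query parser hands the whole input to
the KeywordTokenizer-based field as one token):

```python
def escape_for_keyword_field(q):
    # Escape spaces so the query parser does not split the input into
    # separate terms before field analysis runs.
    return q.replace(" ", "\\ ")

print(escape_for_keyword_field("rue de la"))  # rue\ de\ la
```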


Re: Custom Request Handlers

2013-09-26 Thread Erick Erickson
Well, I'd start by stating the conditions, maybe there's a better way
to accomplish what you want without a custom request handler.

They're not hard, but have you exhausted other options? This may
be an XY problem.

Best,
Erick

On Wed, Sep 25, 2013 at 9:24 AM, PAVAN  wrote:
> Hi,
>
>
>   I am new to solr Can anyone suggest me how can i write my own custom
> handlers. Because i need to filter queries based on 4 to 5 conditions.
>
>
>
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Custom-Request-Handlers-tp4091936.html
> Sent from the Solr - User mailing list archive at Nabble.com.


Doing time sensitive search in solr

2013-09-26 Thread Darniz
hello Users,

i have a requirement where my content should be searched based upon time. For
example, below is our content in our cms.

Sept content : Honda is releasing the car this month



Dec content : Toyota is releasing the car this month


On the website, based upon time, we display the content. On the solr side,
until now we were indexing all entries elements in Solr in a text field. Now
after we introduced time sensitive information in our cms, i need to make sure
that if someone queries for the word "Toyota" it does NOT come up in my search
results since that content is going live in dec.

The solr text field looks something like

Honda is releasing the car this month
Toyota is releasing this month


is there a way we can search the text field or append any meta data to the
text field based on date.

i hope i have made the issue clear. i kind of don't agree with this kind of
practice but our requirement is pretty peculiar since we don't want to
reindex data again and again.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Doing-time-sensitive-search-in-solr-tp4092273.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Cross index join query performance

2013-09-26 Thread Joel Bernstein
It looks like you are using int join keys so you may want to check out
SOLR-4787, specifically the hjoin and bjoin.

These perform well when you have a large number of results from the
fromIndex. If you have a small number of results in the fromIndex the
standard join will be faster.


On Wed, Sep 25, 2013 at 3:39 PM, Peter Keegan wrote:

> I forgot to mention - this is Solr 4.3
>
> Peter
>
>
>
> On Wed, Sep 25, 2013 at 3:38 PM, Peter Keegan  >wrote:
>
> > I'm doing a cross-core join query and the join query is 30X slower than
> > each of the 2 individual queries. Here are the queries:
> >
> > Main query: http://localhost:8983/solr/mainindex/select?q=title:java
> > QTime: 5 msec
> > hit count: 1000
> >
> > Sub query: http://localhost:8983/solr/subindex/select?q=+fld1:[0.1 TO
> 0.3]
> > QTime: 4 msec
> > hit count: 25K
> >
> > Join query:
> >
> > http://localhost:8983/solr/mainindex/select?q=title:java&fq={!join fromIndex=mainindex toIndex=subindex from=docid to=docid}fld1:[0.1 TO 0.3]
> > QTime: 160 msec
> > hit count: 205
> >
> > Here are the index spec's:
> >
> > mainindex size: 117K docs, 1 segment
> > mainindex schema:
> > > required="true" multiValued="false" />
> > > stored="true" multiValued="false" />
> >docid
> >
> > subindex size: 117K docs, 1 segment
> > subindex schema:
> > > required="true" multiValued="false" />
> > > required="false" multiValued="false" />
> >docid
> >
> > With debugQuery=true I see:
> >   "debug":{
> > "join":{
> >   "{!join from=docid to=docid fromIndex=subindex}fld1:[0.1 TO 0.3]":{
> > "time":155,
> > "fromSetSize":24742,
> > "toSetSize":24742,
> > "fromTermCount":117810,
> > "fromTermTotalDf":117810,
> > "fromTermDirectCount":117810,
> > "fromTermHits":24742,
> > "fromTermHitsTotalDf":24742,
> > "toTermHits":24742,
> > "toTermHitsTotalDf":24742,
> > "toTermDirectCount":24627,
> > "smallSetsDeferred":115,
> > "toSetDocsAdded":24742}},
> >
> > Via profiler and debugger, I see 150 msec spent in the outer
> > 'while(term!=null)' loop in: JoinQueryWeight.getDocSet(). This seems
> like a
> > lot of time to join the bitsets. Does this seem right?
> >
> > Peter
> >
> >
>



-- 
Joel Bernstein
Professional Services LucidWorks


Re: solr 4.5 release date

2013-09-26 Thread Arcadius Ahouansou
Thank you very much Steve and Shawn for the information.

It's most appreciated.

Arcadius.



On 26 September 2013 20:35, Shawn Heisey  wrote:

> On 9/26/2013 11:27 AM, Arcadius Ahouansou wrote:
>
>> Please, any idea of the final Solr-4.5 release date?
>>
>
> Steve gave you a better general answer than what I was writing. :)
>
> If you want to have your finger on the pulse of Solr development, join the
> dev mailing list.  Nothing about the release process is hidden.
>
> A strong warning: The dev list is a VERY high traffic list, and it covers
> Lucene as well as Solr.  Most of the email traffic is generated by various
> Apache project services, not people.  There is no way to only subscribe to
> the human-generated content.
>
> http://lucene.apache.org/core/**discussion.html#developer-**
> discussion-devlucene
>
> One thing that it says clearly on the link bears repeating: "Please do not
> send mail to the dev list with usage questions or configuration questions
> and problems; that is what the java-user and solr-user mailing lists are
> for."
>
> Thanks,
> Shawn
>
>


Re: Prevent public access to Solr Admin Page

2013-09-26 Thread Anshum Gupta
It's less Solr and more about tomcat.
This should help you:
http://stackoverflow.com/questions/4850112/restrict-access-to-specific-url-apache-tomcat


On Fri, Sep 27, 2013 at 12:15 AM, uwe72  wrote:

> unfortunately i didn't understand at all.
>
> We are using a tomcat for the solr server.
>
> how exactly can i prevent that user access the solr admin page?
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Prevent-public-access-to-Solr-Admin-Page-tp4092080p4092236.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>



-- 

Anshum Gupta
http://www.anshumgupta.net
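
If you do restrict it at the Tomcat level (a sketch of the approach behind the
link above; IP-based rules are weaker than keeping Solr off the public network
entirely), a RemoteAddrValve in the webapp's context limits access by client
address:

```xml
<Context>
  <!-- Allow access only from localhost; adjust the regex for your network -->
  <Valve className="org.apache.catalina.valves.RemoteAddrValve"
         allow="127\.0\.0\.1"/>
</Context>
```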


RE: Prevent public access to Solr Admin Page

2013-09-26 Thread Markus Jelsma
As Shawn said, do not expose your Solr server to the internet. Do your internet 
users access the server directly or via some frontend application? Almost all 
web based applications connect to Solr via some frontend. Usually Solr is 
hidden from the internet just as some DBMS is.

Do not expose it and hide it in an internal network or behind a firewall. Using 
Tomcat access rules is possible but it's not recommended. Hiding it behind a 
proxy is possible but that one needs to be well configured. 
 
-Original message-
> From:uwe72 
> Sent: Thursday 26th September 2013 22:57
> To: solr-user@lucene.apache.org
> Subject: Re: Prevent public access to Solr Admin Page
> 
> unfortunately i didn't understand at all.
> 
> We are using a tomcat for the solr server.
> 
> how exactly can i prevent that user access the solr admin page?
> 
> 
> 
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Prevent-public-access-to-Solr-Admin-Page-tp4092080p4092236.html
> Sent from the Solr - User mailing list archive at Nabble.com.
> 


Re: Prevent public access to Solr Admin Page

2013-09-26 Thread uwe72
unfortunately i didn't understand at all.

We are using a tomcat for the solr server.

how exactly can i prevent that user access the solr admin page?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Prevent-public-access-to-Solr-Admin-Page-tp4092080p4092236.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Problem loading my codec sometimes

2013-09-26 Thread Scott Schneider
Ok, I created SOLR-5278.  Thanks again!

Scott


> -Original Message-
> From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
> Sent: Wednesday, September 25, 2013 10:15 AM
> To: solr-user@lucene.apache.org
> Subject: RE: Problem loading my codec sometimes
> 
> 
> : Ah, I fixed it.  I wasn't properly including the
> : org.apache.lucene.codecs.Codec file in my jar.  I wasn't sure if it
> was
> : necessary in Solr, since I specify my factory in solrconfig.xml.  I
> : think that's why I could create a new index, but not load an existing
> : one.
> 
> Ah interesting.
> 
> yes, you definitely need the SPI registration in the jar file so that
> it
> can resolve codec files found on disk when opening them -- the
> configuration in solrconfig.xml tells Solr which codec to use when
> writing
> new segments, but it must respect the codec information in segments
> found
> on disk when opening them (that's how the index backcompat works), and
> those are looked up via SPI.
> 
> Can you do me a favor please and still file an issue with these
> details.
> the attachments i asked about before would still be handy, but probably
> not neccessary -- at a minimum could you show us the "jar tf" output of
> your plugin jar when you were having the problem.
> 
> Even if the codec factory code can find the configured codec on
> startup,
> we should probably throw a very loud error right away if that same
> codec
> can't be found by name using SPI to prevent people from running into
> confusing problems when making mistakes like this.
> 
> 
> 
> -Hoss


Re: cold searcher

2013-09-26 Thread Shawn Heisey

On 9/26/2013 10:56 AM, Dmitry Kan wrote:

Btw, related to master-slave setup. What makes read-only slave not to come
across the same issue? Would it not pull data from the master and warm up
searchers? Or does it do updates in a more controlled fashion that makes it
avoid these issues?


Most people have the slave pollInterval configured on an interval that's 
pretty long, like 15 seconds to several minutes -- much longer than a 
typical searcher warming time.


For a slave, new searchers are only created when there is a change 
copied over from the master.  There may be several master-side commits 
that happen during the pollInterval, but the slave won't see all of those.


Thanks,
Shawn



Re: solr 4.5 release date

2013-09-26 Thread Shawn Heisey

On 9/26/2013 11:27 AM, Arcadius Ahouansou wrote:

Please, any idea of the final Solr-4.5 release date?


Steve gave you a better general answer than what I was writing. :)

If you want to have your finger on the pulse of Solr development, join 
the dev mailing list.  Nothing about the release process is hidden.


A strong warning: The dev list is a VERY high traffic list, and it 
covers Lucene as well as Solr.  Most of the email traffic is generated 
by various Apache project services, not people.  There is no way to only 
subscribe to the human-generated content.


http://lucene.apache.org/core/discussion.html#developer-discussion-devlucene

One thing that it says clearly on the link bears repeating: "Please do 
not send mail to the dev list with usage questions or configuration 
questions and problems; that is what the java-user and solr-user mailing 
lists are for."


Thanks,
Shawn



Re: Not able to index documents using CloudSolrServer

2013-09-26 Thread shamik
Just an update: I finally saw the documents getting indexed. But it happened
4-5 hours after I had used CloudSolrServer to send the documents to Solr.
Is there any configuration change required? I'm running 2 nodes with a
replica each and a single ZooKeeper instance.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Not-able-to-index-documents-using-CloudSolrServer-tp4092074p4092238.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: * not working in Phrase Search in solar 4.4

2013-09-26 Thread Shawn Heisey

On 9/26/2013 10:15 AM, soumikghosh05 wrote:

I have a doc that contains "Hello World" in the title field and title is of
type of text_general.

When I am searching with

  title:"Hello Wo*" -- not returning
  title:"Hello World" -- returning


When you use wildcards, your analysis chain is not used, so the query 
text won't be lowercased, which means that it won't match what's been 
indexed.


You only included the index analyzer in what you pasted, but the fact 
that it works without the wildcard suggests that your query analyzer has 
the lowercase filter as well ... but with a wildcard search, that 
doesn't get used.  You'll need to handle lowercasing your query terms 
yourself when wildcards are part of the picture.


I seem to remember something being discussed for fixing this a while 
back, but I don't remember whether anything was actually implemented.


Thanks,
Shawn
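
Shawn's advice to handle lowercasing yourself can be sketched as follows. This is a minimal illustration, not part of any Solr API; the class and method names are made up:

```java
import java.util.Locale;

// Minimal sketch: wildcard terms bypass the field's analysis chain,
// so lowercase the user's input client-side before building the query.
public class WildcardUtil {

    /** Lowercases a query string; wildcard characters (* and ?) pass through untouched. */
    public static String lowercaseWildcard(String userQuery) {
        // Locale.ROOT avoids locale-specific surprises (e.g. the Turkish dotless i)
        return userQuery.toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        // "Hello Wo*" becomes "hello wo*", which can match lowercased indexed terms
        System.out.println(lowercaseWildcard("Hello Wo*"));
    }
}
```

Only the case handling is shown; other analysis steps (stemming, stop words) still won't apply to wildcard terms.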



Re: XPathEntityProcessor nested in TikaEntityProcessor query null exception

2013-09-26 Thread P Williams
Hi,

Haven't tried this myself, but maybe try leaving out the
FieldReaderDataSource entirely.  From my quick searching, it looks like it's
tied to SQL.  Did you try copying the
http://wiki.apache.org/solr/TikaEntityProcessor Advanced Parsing example
exactly?  What happens when you leave out FieldReaderDataSource?

Cheers,
Tricia


On Thu, Sep 26, 2013 at 4:17 AM, Andreas Owen  wrote:

> I'm using solr 4.3.1 and the dataimporter. I am trying to use
> XPathEntityProcessor within the TikaEntityProcessor for indexing html pages,
> but I'm getting this error for each document. I have also tried
> dataField="tika.text" and dataField="text" to no avail. The nested
> XPathEntityProcessor "detail" creates the error; the rest works fine. What
> am I doing wrong?
>
> error:
>
> ERROR - 2013-09-26 12:08:49.006;
> org.apache.solr.handler.dataimport.SqlEntityProcessor; The query failed
> 'null'
> java.lang.ClassCastException: java.io.StringReader cannot be cast to
> java.util.Iterator
> at
> org.apache.solr.handler.dataimport.SqlEntityProcessor.initQuery(SqlEntityProcessor.java:59)
> at
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:73)
> at
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:243)
> at
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:465)
> at
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:491)
> at
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:491)
> at
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:404)
> at
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:319)
> at
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:227)
> at
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:422)
> at
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:487)
> at
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:179)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1820)
> at
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:656)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:359)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155)
> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)
> at
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
> at
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:365)
> at
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:485)
> at
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
> at
> org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:937)
> at
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:998)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:856)
> at
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
> at
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
> at
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
> at java.lang.Threa

Re: Prevent public access to Solr Admin Page

2013-09-26 Thread Shawn Heisey

On 9/26/2013 10:43 AM, Raymond Wiker wrote:

On Sep 26, 2013, at 11:13 , uwe72  wrote:

how can I prevent everybody who knows the URL of our Solr admin page
from accessing it?


I'd restrict access to the jetty server to localhost, and use an Apache httpd 
instance (or some other capable web server) running on the same host as an 
authenticated proxy.


I would go further - no end user should have direct (or even proxied) 
access to Solr at all.  They should be talking to front-end code that 
sanitizes user input and makes behind-the-scenes requests to Solr.


Thanks,
Shawn
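
Part of what such front-end code does is escape query-syntax characters in raw user input before passing it on. SolrJ ships a helper for this (ClientUtils.escapeQueryChars); the following dependency-free sketch shows the same idea, with a hypothetical class name:

```java
// Sketch: escape Lucene/Solr query-syntax characters in raw user input
// before embedding it in a query string. Same idea as SolrJ's
// ClientUtils.escapeQueryChars; this standalone version is illustrative only.
public class QueryEscaper {

    private static final String SPECIALS = "\\+-!():^[]\"{}~*?|&;/";

    public static String escape(String input) {
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            if (SPECIALS.indexOf(c) >= 0) {
                sb.append('\\');   // prefix each special character with a backslash
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("title:(Hello)"));
    }
}
```

Note the real SolrJ helper also escapes whitespace and a few other characters; this sketch covers only the common syntax characters.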



Re: solr 4.5 release date

2013-09-26 Thread Steve Rowe
Hi Arcadius,

The fifth release candidate, hopefully the last one, will be cut today.  The 
release vote will be open until next Tuesday, and assuming all goes well, the 
release could be available next Wednesday at the earliest, six days from today.

Steve

On Sep 26, 2013, at 1:27 PM, Arcadius Ahouansou  wrote:

> Hello.
> 
> Please, any idea of the final Solr-4.5 release date?
> 
> Many thanks.
> 
> Arcadius.



solr 4.5 release date

2013-09-26 Thread Arcadius Ahouansou
Hello.

Please, any idea of the final Solr-4.5 release date?

Many thanks.

Arcadius.


Re: ContributorsGroup

2013-09-26 Thread Mike L.
 
ah sorry! it's: mikelabib 
 
thanks!

From: Stefan Matheis 
To: solr-user@lucene.apache.org 
Sent: Thursday, September 26, 2013 12:05 PM
Subject: Re: ContributorsGroup


Mike

To add you as Contributor i'd need to know your Username? :)

Stefan 


On Thursday, September 26, 2013 at 6:50 PM, Mike L. wrote:

>  
> Solr Admins,
>  
>      I've been using Solr for the last couple years and would like to 
>contribute to this awesome project. Can I be added to the Contributorsgroup 
>with also access to update the Wiki?
>  
> Thanks in advance.
>  
> Mike L.
> 
> 

Re: ContributorsGroup

2013-09-26 Thread Stefan Matheis
Mike

To add you as Contributor i'd need to know your Username? :)

Stefan 


On Thursday, September 26, 2013 at 6:50 PM, Mike L. wrote:

>  
> Solr Admins,
>  
>  I've been using Solr for the last couple years and would like to 
> contribute to this awesome project. Can I be added to the Contributorsgroup 
> with also access to update the Wiki?
>  
> Thanks in advance.
>  
> Mike L.
> 
> 




Re: cold searcher

2013-09-26 Thread Dmitry Kan
Thanks for following up the question on IRC!

All right, your explanation makes it clearer, and now the words "until
the *first searcher* is done warming" stand out.

Yes, I have noticed commits in quick succession (related to the other
question I asked about stamping core names on log entries). They certainly
gave Solr a hard time warming up the searchers.
Since we use a master-only shard setup (i.e. no slaves), we have increased
maxWarmingSearchers to 5 and have not yet seen the issue.

Btw, a question related to the master-slave setup: what keeps a read-only
slave from running into the same issue? Wouldn't it pull data from the
master and warm up searchers too? Or does it apply updates in a more
controlled fashion that lets it avoid these issues?

Thanks,
Dmitry


On Thu, Sep 26, 2013 at 5:24 PM, Shawn Heisey  wrote:

> On 9/26/2013 6:43 AM, Dmitry Kan wrote:
> > Can someone please help me understand the comment in solr 4.3.1's
> > solrconfig.xml:
> >
> > 
> > <useColdSearcher>false</useColdSearcher>
>
> As I understand it, this only applies to Solr startup or core reload,
> because that's the only time you'd normally have no registered
> searchers.  The warming queries in that case are defined by
> firstSearcher and newSearcher events, they aren't your cache
> autowarming.  For cache autowarming, you have a registered searcher
> already.
>
> Now I will answer a question that you asked on IRC early this morning
> (in my timezone):  If you are seeing messages in your log about
> exceeding maxWarmingSearchers, then it is likely that the amount of time
> that it takes for warming is exceeding the amount of time before the
> next commit comes in.
>
> Thanks,
> Shawn
>
>


ContributorsGroup

2013-09-26 Thread Mike L.
 
Solr Admins,
 
 I've been using Solr for the last couple of years and would like to 
contribute to this awesome project. Can I be added to the ContributorsGroup, 
with access to update the Wiki as well?
 
Thanks in advance.
 
Mike L.

Re: how to output solr core name with log4j

2013-09-26 Thread Dmitry Kan
yes, and vice versa: I'm sleeping when U.S. folks like you are active. :)

Thanks for posting an answer. As you suggested I have filed a jira:

https://issues.apache.org/jira/browse/SOLR-5277

Thanks!

Dmitry


On Thu, Sep 26, 2013 at 5:31 PM, Shawn Heisey  wrote:

> On 9/26/2013 5:16 AM, Dmitry Kan wrote:
> > Is there any way to always output the core name into the log with log4j
> > configuration?
> >
> > If you prefer to get some SO points, the same question posted to:
> >
> >
> http://stackoverflow.com/questions/19026577/how-to-output-solr-core-name-with-log4j
>
> I posted an answer on SO, before I saw this.  I was sleeping when you
> were on IRC. :)
>
> Thanks,
> Shawn
>
>


Re: Prevent public access to Solr Admin Page

2013-09-26 Thread Raymond Wiker
On Sep 26, 2013, at 11:13 , uwe72  wrote:
> Hi there,
> 
> how can I prevent everybody who knows the URL of our Solr admin page
> from accessing it?
> 
> Thanks in advance!
> Uwe


I'd restrict access to the jetty server to localhost, and use an Apache httpd 
instance (or some other capable web server) running on the same host as an 
authenticated proxy.

How to use NumericTermsRangeEnum from NumericRangeQuery

2013-09-26 Thread Chetan Vora
Hi all

I was trying to use the above enum to do some range search on dates... this
enum is returned by NumericRangeQuery.getTermsEnum() but I realized that
this is a protected method of the class and since this is a final class, I
can't see how I can use it. Maybe I'm missing something ?

Would appreciate any pointers.

Thanks


* not working in Phrase Search in solar 4.4

2013-09-26 Thread soumikghosh05
Hi,

I have a doc that contains "Hello World" in the title field and title is of
type of text_general.

When I am searching with

 title:"Hello Wo*" -- not returning
 title:"Hello World" -- returning

Could you please explain what I am missing? I have used
WhitespaceTokenizerFactory instead of StandardTokenizerFactory. But no luck.


  



  
  




  




--
View this message in context: 
http://lucene.472066.n3.nabble.com/not-working-in-Phrase-Search-in-solar-4-4-tp4092205.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Not able to index documents using CloudSolrServer

2013-09-26 Thread shamik
Anyone suggestion ?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Not-able-to-index-documents-using-CloudSolrServer-tp4092074p4092185.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: how to input .txt or .html to the server in Solrj

2013-09-26 Thread Darius Miliauskas
Thanks Shawn,

so, as far as I understood, the "fieldname" is the title of my .txt file,
and "fieldValue" is the entire text parsed into a very long string, isn't it?

I guess I just need to parse the content of the .txt file into a string: a)
Apache Tika is one of the choices recommended online, b) reading the file
line by line. Which one? But I wonder how it would help me later to
analyze word-by-word or get back the text in later searches.

The field schema is field like this . So, the text is indexed and stored by
default with "doc.addField("fieldname", "fieldValue");"?


Thanks,

Darius


2013/9/26 Shawn Heisey 

> On 9/26/2013 5:33 AM, Darius Miliauskas wrote:
> > Dear All,
> >
> > I am trying to use Solr (actually Solrj) to make a simple app which will
> > give me the recommended texts according to the similarity to the history
> of
> > reading other texts. Firstly, I need to input these texts to the server.
> > Let's say I have 1000 .txt files in one folder or 1000 html articles
> > online. How can I input these texts into server with Java? What words
> > should I use instead of the question marks in .addField(? ?)? It would be
> > awesome if somebody would give me any samples of code in Java.
>
> Here's a very simplistic example of how to add documents with SolrJ.
> The "server" variable is a SolrServer object that you must define.
> Usually it will be either HttpSolrServer or CloudSolrServer.
>
> Collection docs = new ArrayList();
> SolrInputDocument doc = new SolrInputDocument();
> doc.addField("fieldname", "fieldValue");
> docs.add(doc);
> server.add(docs);
>
> Thanks,
> Shawn
>
>


Re: how to input .txt or .html to the server in Solrj

2013-09-26 Thread Shawn Heisey
On 9/26/2013 5:33 AM, Darius Miliauskas wrote:
> Dear All,
> 
> I am trying to use Solr (actually Solrj) to make a simple app which will
> give me the recommended texts according to the similarity to the history of
> reading other texts. Firstly, I need to input these texts to the server.
> Let's say I have 1000 .txt files in one folder or 1000 html articles
> online. How can I input these texts into server with Java? What words
> should I use instead of the question marks in .addField(? ?)? It would be
> awesome if somebody would give me any samples of code in Java.

Here's a very simplistic example of how to add documents with SolrJ.
The "server" variable is a SolrServer object that you must define.
Usually it will be either HttpSolrServer or CloudSolrServer.

Collection docs = new ArrayList();
SolrInputDocument doc = new SolrInputDocument();
doc.addField("fieldname", "fieldValue");
docs.add(doc);
server.add(docs);

Thanks,
Shawn
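
For the "1000 .txt files in a folder" part of the original question, the file-reading half might look like this (plain JDK; class and field names are hypothetical). Each map entry would then become one SolrInputDocument via doc.addField, followed by server.add(docs) exactly as in the snippet above:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;

public class TextFolderReader {

    /**
     * Reads every .txt file in a folder into a (filename -> content) map.
     * Each entry would become one SolrInputDocument, e.g.
     *   doc.addField("id", name); doc.addField("text", content);
     * where "id" and "text" must match fields defined in your schema.
     */
    public static Map<String, String> readAll(Path folder) throws IOException {
        Map<String, String> docs = new LinkedHashMap<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(folder, "*.txt")) {
            for (Path p : stream) {
                docs.put(p.getFileName().toString(), Files.readString(p));
            }
        }
        return docs;
    }
}
```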



Re: how to output solr core name with log4j

2013-09-26 Thread Shawn Heisey
On 9/26/2013 5:16 AM, Dmitry Kan wrote:
> Is there any way to always output the core name into the log with log4j
> configuration?
> 
> If you prefer to get some SO points, the same question posted to:
> 
> http://stackoverflow.com/questions/19026577/how-to-output-solr-core-name-with-log4j

I posted an answer on SO, before I saw this.  I was sleeping when you
were on IRC. :)

Thanks,
Shawn



Re: cold searcher

2013-09-26 Thread Shawn Heisey
On 9/26/2013 6:43 AM, Dmitry Kan wrote:
> Can someone please help me understand the comment in solr 4.3.1's
> solrconfig.xml:
> 
> 
> <useColdSearcher>false</useColdSearcher>

As I understand it, this only applies to Solr startup or core reload,
because that's the only time you'd normally have no registered
searchers.  The warming queries in that case are defined by
firstSearcher and newSearcher events, they aren't your cache
autowarming.  For cache autowarming, you have a registered searcher already.

Now I will answer a question that you asked on IRC early this morning
(in my timezone):  If you are seeing messages in your log about
exceeding maxWarmingSearchers, then it is likely that the amount of time
that it takes for warming is exceeding the amount of time before the
next commit comes in.

Thanks,
Shawn
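
For reference, the two solrconfig.xml settings discussed in this thread might look like this (the values are illustrative, not recommendations):

```xml
<!-- how many searchers may be warming concurrently before further commits fail -->
<maxWarmingSearchers>2</maxWarmingSearchers>
<!-- false: block incoming requests until the first searcher has finished warming -->
<useColdSearcher>false</useColdSearcher>
```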



Re: Sorting dependent on user preferences with FunctionQuery

2013-09-26 Thread Ing. Jorge Luis Betancourt Gonzalez
I think you could use boosting queries: for group A you boost one category and 
for group B some other category.
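
A sketch of what that could look like as edismax request parameters; the field name comes from the question's "category" field, while the boost value and helper class are assumptions for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoostParams {

    /** Builds hypothetical edismax request params boosting a preferred category. */
    public static Map<String, String> paramsFor(String preferredCategory) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("q", "*:*");
        p.put("defType", "edismax");
        // bq adds a score boost for matching docs without filtering the others out,
        // so Books float to the top for group A while DVDs remain in the results
        p.put("bq", "category:" + preferredCategory + "^10");
        return p;
    }

    public static void main(String[] args) {
        // group A prefers Books, group B prefers DVDs
        System.out.println(paramsFor("Book").get("bq"));
        System.out.println(paramsFor("DVD").get("bq"));
    }
}
```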

- Mensaje original -
De: "Snubbel" 
Para: solr-user@lucene.apache.org
Enviados: Jueves, 26 de Septiembre 2013 8:01:36
Asunto: Sorting dependent on user preferences with FunctionQuery

Hello,

I want to present search results to different user groups in different
orders.
Say I have customer group A, which I know prefers Books; I want to get
Books at the top of my query result and DVDs at the bottom.
And for group B, which prefers DVDs, those first.
In my index I have a field of type text named "category" with values Book
and DVD.

I thought maybe I could solve this with QueryFunctions, maybe like this:

 
select?q=*%3A*&sort=query(qf=category v='Book')desc

but Solr returns "Can't determine a Sort Order (asc or desc) in sort".

What is wrong? I tried different ways of formulating the query without
success...


Or, does anyone have a better idea how to solve this?

Best regards, Nikola



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Sorting-dependent-on-user-preferences-with-FunctionQuery-tp4092119.html
Sent from the Solr - User mailing list archive at Nabble.com.

III International Winter School at UCI, February 17-28, 2014. See www.uci.cu



Re: Implementing Solr Suggester for Autocomplete (multiple columns)

2013-09-26 Thread Ing. Jorge Luis Betancourt Gonzalez
Great!! I hadn't seen your message yet. Perhaps you could create a PR to that 
Github repository, so it will be in sync with current versions of Solr.

- Mensaje original -
De: "JMill" 
Para: solr-user@lucene.apache.org
Enviados: Jueves, 26 de Septiembre 2013 9:10:49
Asunto: Re: Implementing Solr Suggester for Autocomplete (multiple columns)

solved.


On Thu, Sep 26, 2013 at 1:50 PM, JMill  wrote:

> I managed to get rid of the query error by placing the jquery file in the
> velocity folder and adding line: " src="#{url_for_solr}/admin/file?file=/velocity/jquery.min.js&contentType=text/javascript">".
> That has not solved the issues the console is showing a new error -
> "[13:42:55.181] TypeError: $.browser is undefined @
> http://localhost:8983/solr/ac/admin/file?file=/velocity/jquery.autocomplete.js&contentType=text/javascript:90";.
> Any ideas?
>
>
> On Thu, Sep 26, 2013 at 1:12 PM, JMill wrote:
>
>> Do you know the directory the "#{url_root}" in > type="text/javascript" src="#{url_root}/js/lib/
>> jquery-1.7.2.min.js"> points too? and same for
>> ""#{url_for_solr}" > src="#{url_for_solr}/js/lib/jquery-1.7.2.min.js">
>>
>>
>> On Wed, Sep 25, 2013 at 7:33 PM, Ing. Jorge Luis Betancourt Gonzalez <
>> jlbetanco...@uci.cu> wrote:
>>
>>> Try quering the core where the data has been imported, something like:
>>>
>>> http://localhost:8983/solr/suggestions/select?q=uc
>>>
>>> In the previous URL suggestions is the name I give to the core, so this
>>> should change, if you get results, then the problem could be the jquery
>>> dependency. I don't remember doing any change, as far as I know that js
>>> file is bundled with solr (at leat in 3.x) version perhaps you could change
>>> it the correct jquery version on solr 4.4, if you go into the admin panel
>>> (in solr 3.6):
>>>
>>> http://localhost:8983/solr/admin/schema.jsp
>>>
>>> And inspect the loaded code, the required file (jquery-1.4.2.min.js)
>>> gets loaded in solr 4.4 it should load a similar file, but perhaps a more
>>> recent version.
>>>
>>> Perhaps you could change that part to something like:
>>>
>>>   >> src="#{url_root}/js/lib/jquery-1.7.2.min.js">
>>>
>>> Which is used at least on a solr 4.1 that I have laying aroud here
>>> somewhere.
>>>
>>> In any case you can test the suggestions using the URL that I suggest on
>>> the top of this mail, in that case you should be able to see the possible
>>> results, of course in a less fancy way.
>>>
>>> - Mensaje original -
>>> De: "JMill" 
>>> Para: solr-user@lucene.apache.org
>>> Enviados: Miércoles, 25 de Septiembre 2013 13:59:32
>>> Asunto: Re: Implementing Solr Suggester for Autocomplete (multiple
>>> columns)
>>>
>>> Could it be the jquery library that is the problem?   I opened up
>>> solr-home/ac/conf/velocity/head.vm with an editor and I see a reference
>>> to
>>> the jquery library but I can't seem to find the directory referenced,
>>>  line:  

Re: Implementing Solr Suggester for Autocomplete (multiple columns)

2013-09-26 Thread Stefan Matheis
That is because of changes in jQuery:

jQuery.browser (http://api.jquery.com/jQuery.browser/)
Description: Contains flags for the useragent, read from navigator.userAgent. 
This property was removed in jQuery 1.9 and is available only through the 
jQuery.migrate plugin. Please try to use feature detection instead.

Those plugins (like autocomplete, for example) normally have version 
dependencies on jQuery, to ensure their functionality.

-Stefan  


On Thursday, September 26, 2013 at 2:50 PM, JMill wrote:

> I managed to get rid of the query error by placing the jquery file in the
> velocity folder and adding line: " src="#{url_for_solr}/admin/file?file=/velocity/jquery.min.js&contentType=text/javascript">".
> That has not solved the issues the console is showing a new error -
> "[13:42:55.181] TypeError: $.browser is undefined @
> http://localhost:8983/solr/ac/admin/file?file=/velocity/jquery.autocomplete.js&contentType=text/javascript:90";.
> Any ideas?
>  
>  
> On Thu, Sep 26, 2013 at 1:12 PM, JMill  (mailto:apprentice...@googlemail.com)> wrote:
>  
> > Do you know the directory the "#{url_root}" in  > type="text/javascript" src="#{url_root}/js/lib/
> > jquery-1.7.2.min.js"> points too? and same for ""#{url_for_solr}"
> >  > src="#{url_for_solr}/js/lib/jquery-1.7.2.min.js">
> >  
> >  
> > On Wed, Sep 25, 2013 at 7:33 PM, Ing. Jorge Luis Betancourt Gonzalez <
> > jlbetanco...@uci.cu (mailto:jlbetanco...@uci.cu)> wrote:
> >  
> > > Try quering the core where the data has been imported, something like:
> > >  
> > > http://localhost:8983/solr/suggestions/select?q=uc
> > >  
> > > In the previous URL suggestions is the name I give to the core, so this
> > > should change, if you get results, then the problem could be the jquery
> > > dependency. I don't remember doing any change, as far as I know that js
> > > file is bundled with solr (at leat in 3.x) version perhaps you could 
> > > change
> > > it the correct jquery version on solr 4.4, if you go into the admin panel
> > > (in solr 3.6):
> > >  
> > > http://localhost:8983/solr/admin/schema.jsp
> > >  
> > > And inspect the loaded code, the required file (jquery-1.4.2.min.js) gets
> > > loaded in solr 4.4 it should load a similar file, but perhaps a more 
> > > recent
> > > version.
> > >  
> > > Perhaps you could change that part to something like:
> > >  
> > >  > > src="#{url_root}/js/lib/jquery-1.7.2.min.js">
> > >  
> > > Which is used at least on a solr 4.1 that I have laying aroud here
> > > somewhere.
> > >  
> > > In any case you can test the suggestions using the URL that I suggest on
> > > the top of this mail, in that case you should be able to see the possible
> > > results, of course in a less fancy way.
> > >  
> > > - Mensaje original -
> > > De: "JMill"  > > (mailto:apprentice...@googlemail.com)>
> > > Para: solr-user@lucene.apache.org (mailto:solr-user@lucene.apache.org)
> > > Enviados: Miércoles, 25 de Septiembre 2013 13:59:32
> > > Asunto: Re: Implementing Solr Suggester for Autocomplete (multiple
> > > columns)
> > >  
> > > Could it be the jquery library that is the problem? I opened up
> > > solr-home/ac/conf/velocity/head.vm with an editor and I see a reference to
> > > the jquery library but I can't seem to find the directory referenced,
> > > line: 

Re: Implementing Solr Suggester for Autocomplete (multiple columns)

2013-09-26 Thread JMill
solved.


On Thu, Sep 26, 2013 at 1:50 PM, JMill  wrote:

> I managed to get rid of the query error by placing the jquery file in the
> velocity folder and adding line: " src="#{url_for_solr}/admin/file?file=/velocity/jquery.min.js&contentType=text/javascript">".
> That has not solved the issues the console is showing a new error -
> "[13:42:55.181] TypeError: $.browser is undefined @
> http://localhost:8983/solr/ac/admin/file?file=/velocity/jquery.autocomplete.js&contentType=text/javascript:90";.
> Any ideas?
>
>
> On Thu, Sep 26, 2013 at 1:12 PM, JMill wrote:
>
>> Do you know the directory the "#{url_root}" in > type="text/javascript" src="#{url_root}/js/lib/
>> jquery-1.7.2.min.js"> points too? and same for
>> ""#{url_for_solr}" > src="#{url_for_solr}/js/lib/jquery-1.7.2.min.js">
>>
>>
>> On Wed, Sep 25, 2013 at 7:33 PM, Ing. Jorge Luis Betancourt Gonzalez <
>> jlbetanco...@uci.cu> wrote:
>>
>>> Try quering the core where the data has been imported, something like:
>>>
>>> http://localhost:8983/solr/suggestions/select?q=uc
>>>
>>> In the previous URL suggestions is the name I give to the core, so this
>>> should change, if you get results, then the problem could be the jquery
>>> dependency. I don't remember doing any change, as far as I know that js
>>> file is bundled with solr (at leat in 3.x) version perhaps you could change
>>> it the correct jquery version on solr 4.4, if you go into the admin panel
>>> (in solr 3.6):
>>>
>>> http://localhost:8983/solr/admin/schema.jsp
>>>
>>> And inspect the loaded code, the required file (jquery-1.4.2.min.js)
>>> gets loaded in solr 4.4 it should load a similar file, but perhaps a more
>>> recent version.
>>>
>>> Perhaps you could change that part to something like:
>>>
>>>   >> src="#{url_root}/js/lib/jquery-1.7.2.min.js">
>>>
>>> Which is used at least on a solr 4.1 that I have laying aroud here
>>> somewhere.
>>>
>>> In any case you can test the suggestions using the URL that I suggest on
>>> the top of this mail, in that case you should be able to see the possible
>>> results, of course in a less fancy way.
>>>
>>> - Mensaje original -
>>> De: "JMill" 
>>> Para: solr-user@lucene.apache.org
>>> Enviados: Miércoles, 25 de Septiembre 2013 13:59:32
>>> Asunto: Re: Implementing Solr Suggester for Autocomplete (multiple
>>> columns)
>>>
>>> Could it be the jquery library that is the problem?   I opened up
>>> solr-home/ac/conf/velocity/head.vm with an editor and I see a reference
>>> to
>>> the jquery library but I can't seem to find the directory referenced,
>>>  line:  

Re: Implementing Solr Suggester for Autocomplete (multiple columns)

2013-09-26 Thread JMill
I managed to get rid of the query error by placing the jquery file in the
velocity folder and adding line: "".
That has not solved the issues the console is showing a new error -
"[13:42:55.181] TypeError: $.browser is undefined @
http://localhost:8983/solr/ac/admin/file?file=/velocity/jquery.autocomplete.js&contentType=text/javascript:90";.
Any ideas?


On Thu, Sep 26, 2013 at 1:12 PM, JMill  wrote:

> Do you know the directory the "#{url_root}" in  type="text/javascript" src="#{url_root}/js/lib/
> jquery-1.7.2.min.js"> points too? and same for ""#{url_for_solr}"
>  src="#{url_for_solr}/js/lib/jquery-1.7.2.min.js">
>
>
> On Wed, Sep 25, 2013 at 7:33 PM, Ing. Jorge Luis Betancourt Gonzalez <
> jlbetanco...@uci.cu> wrote:
>
>> Try querying the core where the data has been imported, something like:
>>
>> http://localhost:8983/solr/suggestions/select?q=uc
>>
>> In the previous URL suggestions is the name I give to the core, so this
>> should change, if you get results, then the problem could be the jquery
>> dependency. I don't remember making any change; as far as I know that js
>> file is bundled with solr (at least in 3.x versions). Perhaps you could change
>> it to the correct jquery version on solr 4.4, if you go into the admin panel
>> (in solr 3.6):
>>
>> http://localhost:8983/solr/admin/schema.jsp
>>
>> And inspect the loaded code, the required file (jquery-1.4.2.min.js) gets
>> loaded in solr 4.4 it should load a similar file, but perhaps a more recent
>> version.
>>
>> Perhaps you could change that part to something like:
>>
>>   > src="#{url_root}/js/lib/jquery-1.7.2.min.js">
>>
>> Which is used at least on a solr 4.1 that I have lying around here
>> somewhere.
>>
>> In any case you can test the suggestions using the URL that I suggest on
>> the top of this mail, in that case you should be able to see the possible
>> results, of course in a less fancy way.
>>
>> - Mensaje original -
>> De: "JMill" 
>> Para: solr-user@lucene.apache.org
>> Enviados: Miércoles, 25 de Septiembre 2013 13:59:32
>> Asunto: Re: Implementing Solr Suggester for Autocomplete (multiple
>> columns)
>>
>> Could it be the jquery library that is the problem?   I opened up
>> solr-home/ac/conf/velocity/head.vm with an editor and I see a reference to
>> the jquery library but I can't seem to find the directory referenced,
>>  line:  

cold searcher

2013-09-26 Thread Dmitry Kan
Hello!

Can someone please help me understand the comment in solr 4.3.1's
solrconfig.xml:


<useColdSearcher>false</useColdSearcher>

What precisely happens when useColdSearcher is set to true and a request
arrives while a searcher is warming up?

 - Does the state of warming slow down search performance? Does it
affect anything else?

  - How does this setting interplay with soft commits? I.e. do soft-commits
warm up searchers too?

Regards,
Dmitry


Re: Xml file is not inserting from code java -jar post.jar *.xml

2013-09-26 Thread Furkan KAMACI
You should start to read from here:
http://lucene.apache.org/solr/4_4_0/tutorial.html


2013/9/26 Kishan Parmar 

>
> http://www.coretechnologies.com/products/AlwaysUp/Apps/RunApacheSolrAsAService.html
> \
>
> this is the link from where i fown the solr installation
>
> Regards,
>
> Kishan Parmar
> Software Developer
> +91 95 100 77394
> Jay Shree Krishnaa !!
>
>
>
> On Thu, Sep 26, 2013 at 1:13 PM, Kishan Parmar 
> wrote:
>
> > I am not using Tomcat; I am using the AlwaysUp software to run the Solr
> > system.
> > It is working perfectly.
> >
> > But I cannot add my XML file to the index. I changed my schema file as per
> > the requirements of my XML file, and I am using this command to insert the
> > XML into the index:
> >
> > java -Durl=http://localhost:8983/solr/core0/update -jar post.jar *.xml
> >
> > But it gives an error, and if I write "java -jar post.jar *.xml" then it
> > indexes the data, but into another core, collection1.
> > There is also an error in it: "no dataimport handler is found".
> > So what can I do about these problems?
> >
> > Regards,
> >
> > Kishan Parmar
> > Software Developer
> > +91 95 100 77394
> > Jay Shree Krishnaa !!
> >
> >
> >
> > On Sun, Sep 22, 2013 at 8:53 PM, Erick Erickson  >wrote:
> >
> >> Please review:
> >>
> >> http://wiki.apache.org/solr/UsingMailingLists
> >>
> >> Best,
> >> Erick
> >>
> >> On Sun, Sep 22, 2013 at 8:06 AM, Jack Krupansky <
> j...@basetechnology.com>
> >> wrote:
> >> > Did you start Solr? How did you verify that Solr is running? Are you
> >> able to
> >> > query Solr and access the Admin UI?
> >> >
> >> > Most importantly, did you successfully complete the standard Solr
> >> tutorial?
> >> > (IOW, you know all the necessary steps for basic operation of Solr.)
> >> >
> >> > Lastly, did you verify (by examining the log) whether Solr was able to
> >> > successfully load your schema changes without errors?
> >> >
> >> > -- Jack Krupansky
> >> >
> >> > -Original Message- From: Kishan Parmar
> >> > Sent: Sunday, September 22, 2013 9:56 AM
> >> > To: solr-user@lucene.apache.org
> >> > Subject: Xml file is not inserting from code java -jar post.jar *.xml
> >> >
> >> >
> >> > hi
> >> >
> >> > I am a new user of Solr. I have done my schema file, and when I run
> >> > the command to insert an XML file into the index from cmd: java -jar post.jar *.xml
> >> >
> >> > it gives the error: Solr returned error 404 Not Found
> >> >
> >> > what can I do?
> >> >
> >> >
> >> > Regards,
> >> >
> >> > Kishan Parmar
> >> > Software Developer
> >> > +91 95 100 77394
> >> > Jay Shree Krishnaa !!
> >>
> >
> >
>


Solr Autocomplete with "did you means" functionality handle misspell word like google

2013-09-26 Thread Suneel Pandey
 

Hi, 

I have implemented autocomplete and it's working fine, but I want to implement
autosuggestion like Google (see the screen above). When someone types a
misspelled word, a suggestion should be shown, e.g.: cmputer => computer.

 
Please help me.






-
Regards,

Suneel Pandey
Sr. Software Developer
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-Autocomplete-with-did-you-means-functionality-handle-misspell-word-like-google-tp4092127.html
Sent from the Solr - User mailing list archive at Nabble.com.
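
A common way to get this "did you mean" behavior is Solr's SpellCheckComponent; a minimal solrconfig.xml sketch (the handler path and the field name are illustrative, not from the original post):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <!-- "suggest_text" is an illustrative name; point this at an analyzed text field -->
    <str name="field">suggest_text</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
  </lst>
</searchComponent>

<requestHandler name="/suggest" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="spellcheck">true</str>
    <str name="spellcheck.count">5</str>
    <str name="spellcheck.collate">true</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```

A request like /suggest?q=cmputer should then include "computer" among the spellcheck suggestions, provided that term actually occurs in the indexed field.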


Re: Implementing Solr Suggester for Autocomplete (multiple columns)

2013-09-26 Thread JMill
Do you know which directory "#{url_root}" points to? And the same for "#{url_for_solr}"?



On Wed, Sep 25, 2013 at 7:33 PM, Ing. Jorge Luis Betancourt Gonzalez <
jlbetanco...@uci.cu> wrote:

> Try querying the core where the data has been imported, something like:
>
> http://localhost:8983/solr/suggestions/select?q=uc
>
> In the previous URL, "suggestions" is the name I gave to the core, so this
> should change. If you get results, then the problem could be the jQuery
> dependency. I don't remember making any change; as far as I know that js
> file is bundled with Solr (at least in the 3.x version). Perhaps you could
> change it to the correct jQuery version on Solr 4.4. If you go into the
> admin panel (in Solr 3.6):
>
> http://localhost:8983/solr/admin/schema.jsp
>
> And inspect the loaded code: the required file (jquery-1.4.2.min.js) gets
> loaded. In Solr 4.4 it should load a similar file, but perhaps a more recent
> version.
>
> Perhaps you could change that part to something like:
>
>   <script src="#{url_root}/js/lib/jquery-1.7.2.min.js"></script>
>
> Which is used at least on a Solr 4.1 that I have lying around here
> somewhere.
>
> In any case you can test the suggestions using the URL that I suggest at
> the top of this mail; in that case you should be able to see the possible
> results, of course in a less fancy way.
>
> - Original Message -
> From: "JMill" 
> To: solr-user@lucene.apache.org
> Sent: Wednesday, September 25, 2013 13:59:32
> Subject: Re: Implementing Solr Suggester for Autocomplete (multiple columns)
>
> Could it be the jquery library that is the problem?   I opened up
> solr-home/ac/conf/velocity/head.vm with an editor and I see a reference to
> the jquery library but I can't seem to find the directory referenced,
>  line:  

Sorting dependent on user preferences with FunctionQuery

2013-09-26 Thread Snubbel
Hello,

I want to present search results to different user groups in different
orders.
Say I have customer group A, which I know prefers books: I want to get
books at the top of my query result and DVDs at the bottom.
And for group B, which prefers DVDs, those first.
In my index I have a field of type text named "category" with values Book
and DVD.

I thought maybe I could solve this with QueryFunctions, maybe like this:

 
select?q=*%3A*&sort=query(qf=category v='Book')desc

but Solr returns "Can't determine a Sort Order (asc or desc) in sort".

What is wrong? I tried different ways of formulating the query without
success...


Or, does anyone have a better idea how to solve this?

Best regards, Nikola



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Sorting-dependent-on-user-preferences-with-FunctionQuery-tp4092119.html
Sent from the Solr - User mailing list archive at Nabble.com.
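
Two things usually trip this query up: the missing space before the sort direction, and passing raw key=value parameters to query(). A sketch of a form that the function-query sort accepts (URL-encoding omitted for readability; the parameter name sortq is illustrative):

```
select?q=*:*&sort=query($sortq) desc&sortq=category:Book
```

Here $sortq dereferences the extra request parameter, so documents matching category:Book use their query score as the sort key (documents that don't match need a default, e.g. query($sortq,0)). For per-group preferences, an edismax boost query such as bq=category:Book^10 for group A and bq=category:DVD^10 for group B is often simpler than a hard sort.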


Prevent public access to Solr Admin Page

2013-09-26 Thread uwe72
Hi there,

How can I prevent everybody who knows the URL of our Solr admin page
from being able to access it?

Thanks in advance!
Uwe



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Prevent-public-access-to-Solr-Admin-Page-tp4092080.html
Sent from the Solr - User mailing list archive at Nabble.com.
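
Solr 4.x itself ships with no authentication for the admin UI, so access is normally restricted in the servlet container or in a fronting reverse proxy. A sketch for nginx (the allowed network range is illustrative):

```
location /solr/ {
    # only the internal network may reach Solr, including the admin UI
    allow 192.168.0.0/16;
    deny  all;
    proxy_pass http://127.0.0.1:8983;
}
```

Alternatively, bind Jetty/Tomcat to 127.0.0.1 or firewall port 8983 so that only trusted hosts can connect; end users then go through your application instead of talking to Solr directly.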


how to input .txt or .html to the server in Solrj

2013-09-26 Thread Darius Miliauskas
Dear All,

I am trying to use Solr (actually Solrj) to make a simple app which will
give me the recommended texts according to the similarity to the history of
reading other texts. Firstly, I need to input these texts to the server.
Let's say I have 1000 .txt files in one folder or 1000 html articles
online. How can I input these texts into the server with Java? What words
should I use instead of the question marks in .addField(?, ?)? It would be
awesome if somebody would give me a sample of code in Java.


Thanks,

Darius
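
A minimal SolrJ 4.x sketch for this (it needs the SolrJ jar on the classpath; the core URL, the "texts" folder, and the field names "id" and "text" are assumptions that must match your schema):

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class TxtIndexer {
    public static void main(String[] args) throws Exception {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
        try (DirectoryStream<Path> txts = Files.newDirectoryStream(Paths.get("texts"), "*.txt")) {
            for (Path p : txts) {
                SolrInputDocument doc = new SolrInputDocument();
                // addField(schemaFieldName, value): both names must exist in schema.xml
                doc.addField("id", p.getFileName().toString());
                doc.addField("text", new String(Files.readAllBytes(p), StandardCharsets.UTF_8));
                server.add(doc);
            }
        }
        server.commit();  // make the added documents visible to searches
    }
}
```

For HTML pages you would either strip the markup yourself before calling addField, or post the raw files to the /update/extract (Tika) handler instead.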


RE: Exact Word Match Search comes in first come In Solr4.3

2013-09-26 Thread Markus Jelsma
That won't boost order but Lucene's SpanFirstQuery does. You do have to make a 
custom query parser plugin for it but that's trivial.
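
For reference, the Lucene 4.x query mentioned above is built like this (a sketch only; wrapping it in a Solr QParserPlugin is omitted):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanFirstQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

// Matches "emir" only within the first 3 positions of the field, so documents
// that begin with the term can score ahead of later occurrences.
SpanTermQuery term = new SpanTermQuery(new Term("text", "emir"));
SpanFirstQuery first = new SpanFirstQuery(term, 3);
```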
 
-Original message-
> From:Otis Gospodnetic 
> Sent: Thursday 26th September 2013 13:24
> To: solr-user@lucene.apache.org
> Subject: Re: Exact Word Match Search comes in first come In Solr4.3
> 
> Hello there.
> 
> Use two fields, one unanalyzed and the other analyzed and boost the former.
> 
> Otis
> Solr & ElasticSearch Support
> http://sematext.com/
> On Sep 26, 2013 7:19 AM, "Viresh Modi"  wrote:
> 
> > I want the result order to reflect exact search matches:
> >
> > A search for "EMIR" should return the exact match “Emir” first, not “United
> > Arab Emirates”.
> >
> >  For example, when you search for “EMIR” the first result has nothing to do
> > with that and is all about “United Arab Emirates”, which obviously contains
> > “Emir” as part of “Emirates”. This is obviously less relevant than an exact
> > match on “EMIR”.
> >
> > *MY SOLR INDEX RESULT:*
> >
> > 
> >
> > Weight  United Arab Emirates
> >
> > 
> > 
> >
> > Emir My Search Content
> >
> > 
> >
> > *Debug for Query :*
> >
> >  
> > 0.4016216 = (MATCH) weight(text:emir in 0) [DefaultSimilarity], result of:
> >   0.4016216 = fieldWeight in 0, product of:
> > 1.0 = tf(freq=1.0), with freq of:
> >   1.0 = termFreq=1.0
> > 3.2129729 = idf(docFreq=48, maxDocs=448)
> > 0.125 = fieldNorm(doc=0)
> > 
> > 0.4016216 = (MATCH) weight(text:emir in 0) [DefaultSimilarity], result of:
> >   0.4016216 = fieldWeight in 0, product of:
> > 1.0 = tf(freq=1.0), with freq of:
> >   1.0 = termFreq=1.0
> > 3.2129729 = idf(docFreq=48, maxDocs=448)
> > 0.125 = fieldNorm(doc=0)
> >
> > *MY Schema.xml Looks like :*
> >
> >  > termVectors="true" termPositions="true" termOffsets="true" />
> >
> >
> >  > positionIncrementGap="100" autoGeneratePhraseQueries="true">
> >   
> > 
> > > ignoreCase="true"
> > words="lang/stopwords_en.txt"
> > enablePositionIncrements="true"
> > />
> >  > generateNumberParts="1" catenateWords="1" catenateNumbers="1"
> > catenateAll="0" splitOnCaseChange="1"/>
> > 
> >  > protected="protwords.txt"/>
> > 
> >   
> >   
> > 
> >  > ignoreCase="true" expand="true"/>
> >  > words="lang/stopwords_en.txt" enablePositionIncrements="true"/>
> >  > generateWordParts="1" generateNumberParts="1" catenateWords="1"
> > catenateNumbers="1" catenateAll="0" splitOnCaseChange="0"/>
> > 
> >  > protected="protwords.txt"/>
> > 
> >   
> > 
> >
> 


Re: Exact Word Match Search comes in first come In Solr4.3

2013-09-26 Thread Otis Gospodnetic
Hello there.

Use two fields, one unanalyzed and the other analyzed and boost the former.

Otis
Solr & ElasticSearch Support
http://sematext.com/
On Sep 26, 2013 7:19 AM, "Viresh Modi"  wrote:

> I want the result order to reflect exact search matches:
>
> A search for "EMIR" should return the exact match “Emir” first, not “United
> Arab Emirates”.
>
>  For example, when you search for “EMIR” the first result has nothing to do
> with that and is all about “United Arab Emirates”, which obviously contains
> “Emir” as part of “Emirates”. This is obviously less relevant than an exact
> match on “EMIR”.
>
> *MY SOLR INDEX RESULT:*
>
> 
>
> Weight  United Arab Emirates
>
> 
> 
>
> Emir My Search Content
>
> 
>
> *Debug for Query :*
>
>  
> 0.4016216 = (MATCH) weight(text:emir in 0) [DefaultSimilarity], result of:
>   0.4016216 = fieldWeight in 0, product of:
> 1.0 = tf(freq=1.0), with freq of:
>   1.0 = termFreq=1.0
> 3.2129729 = idf(docFreq=48, maxDocs=448)
> 0.125 = fieldNorm(doc=0)
> 
> 0.4016216 = (MATCH) weight(text:emir in 0) [DefaultSimilarity], result of:
>   0.4016216 = fieldWeight in 0, product of:
> 1.0 = tf(freq=1.0), with freq of:
>   1.0 = termFreq=1.0
> 3.2129729 = idf(docFreq=48, maxDocs=448)
> 0.125 = fieldNorm(doc=0)
>
> *MY Schema.xml Looks like :*
>
>  termVectors="true" termPositions="true" termOffsets="true" />
>
>
>  positionIncrementGap="100" autoGeneratePhraseQueries="true">
>   
> 
> ignoreCase="true"
> words="lang/stopwords_en.txt"
> enablePositionIncrements="true"
> />
>  generateNumberParts="1" catenateWords="1" catenateNumbers="1"
> catenateAll="0" splitOnCaseChange="1"/>
> 
>  protected="protwords.txt"/>
> 
>   
>   
> 
>  ignoreCase="true" expand="true"/>
>  words="lang/stopwords_en.txt" enablePositionIncrements="true"/>
>  generateWordParts="1" generateNumberParts="1" catenateWords="1"
> catenateNumbers="1" catenateAll="0" splitOnCaseChange="0"/>
> 
>  protected="protwords.txt"/>
> 
>   
> 
>


Exact Word Match Search comes in first come In Solr4.3

2013-09-26 Thread Viresh Modi
I want the result order to reflect exact search matches:

A search for "EMIR" should return the exact match “Emir” first, not “United
Arab Emirates”.

 For example, when you search for “EMIR” the first result has nothing to do
with that and is all about “United Arab Emirates”, which obviously contains
“Emir” as part of “Emirates”. This is obviously less relevant than an exact
match on “EMIR”.

*MY SOLR INDEX RESULT:*



Weight  United Arab Emirates




Emir My Search Content



*Debug for Query :*

 
0.4016216 = (MATCH) weight(text:emir in 0) [DefaultSimilarity], result of:
  0.4016216 = fieldWeight in 0, product of:
1.0 = tf(freq=1.0), with freq of:
  1.0 = termFreq=1.0
3.2129729 = idf(docFreq=48, maxDocs=448)
0.125 = fieldNorm(doc=0)

0.4016216 = (MATCH) weight(text:emir in 0) [DefaultSimilarity], result of:
  0.4016216 = fieldWeight in 0, product of:
1.0 = tf(freq=1.0), with freq of:
  1.0 = termFreq=1.0
3.2129729 = idf(docFreq=48, maxDocs=448)
0.125 = fieldNorm(doc=0)

*MY Schema.xml Looks like :*





  

   




  
  







  



how to output solr core name with log4j

2013-09-26 Thread Dmitry Kan
Hello,

Is there any way to always output the core name into the log with the log4j
configuration?

If you prefer to get some SO points, the same question posted to:

http://stackoverflow.com/questions/19026577/how-to-output-solr-core-name-with-log4j

Thanks,

Dmitry


ALIAS feature, can be used for what?

2013-09-26 Thread yriveiro
Today I was thinking about the ALIAS feature and its utility in Solr.

Can anyone give me an example of where this feature may be useful?

Is it possible to have an ALIAS over multiple collections? If I do a write to
the alias, is the write replicated to all the collections?

/Yago



-
Best regards
--
View this message in context: 
http://lucene.472066.n3.nabble.com/ALIAS-feature-can-be-used-for-what-tp4092095.html
Sent from the Solr - User mailing list archive at Nabble.com.
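
For reference, aliases are created through the Collections API, e.g. (collection names illustrative):

```
/admin/collections?action=CREATEALIAS&name=live&collections=logs_2013_09
/admin/collections?action=CREATEALIAS&name=all_logs&collections=logs_2013_08,logs_2013_09
```

The classic use is time-sliced data: queries always hit the "live" alias while a background job builds next month's collection, then the alias is switched atomically with no client changes. Reads against a multi-collection alias fan out to all member collections; whether writes through a multi-collection alias are accepted has varied across 4.x releases, so that behavior should be tested rather than assumed.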


XPathEntityProcessor nested in TikaEntityProcessor query null exception

2013-09-26 Thread Andreas Owen
I'm using Solr 4.3.1 and the dataimporter. I am trying to use
XPathEntityProcessor within the TikaEntityProcessor for indexing HTML pages, but
I'm getting this error for each document. I have also tried
dataField="tika.text" and dataField="text" to no avail. The nested
XPathEntityProcessor "detail" creates the error; the rest works fine. What am I
doing wrong?

error:

ERROR - 2013-09-26 12:08:49.006; 
org.apache.solr.handler.dataimport.SqlEntityProcessor; The query failed 'null'
java.lang.ClassCastException: java.io.StringReader cannot be cast to 
java.util.Iterator
at 
org.apache.solr.handler.dataimport.SqlEntityProcessor.initQuery(SqlEntityProcessor.java:59)
at 
org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:73)
at 
org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:243)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:465)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:491)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:491)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:404)
at 
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:319)
at 
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:227)
at 
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:422)
at 
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:487)
at 
org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:179)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1820)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:656)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:359)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:155)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:365)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:485)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:937)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:998)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:856)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Unknown Source)
ERROR - 2013-09-26 12:08:49.022; org.apache.solr.common.SolrException; 
Exception in entity : 
detail:org.apache.solr.handler.dataimport.DataImportHandlerException: 
java.lang.ClassCastException: java.io.StringReader cannot be cast to 
java.util.Iterator
at 
org.apache.solr.handler.dataimport.SqlEntityProcessor.initQuery(SqlEntityProcessor.java:65)
at 
org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:73)
at 
org.apache.solr.handler.dataim

Re: Select all descendants in a relation index

2013-09-26 Thread Oussama Mubarak

Thank you very much Erick.

Would you by any chance know of a tutorial or book that explains how to
use PathHierarchyTokenizerFactory?

How does Solr know how to generate the path?

Most examples online look like the one below, and don't explain how the 
path is generated:

<fieldType name="text_path" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
  </analyzer>
</fieldType>

Thank you,

Semiaddict
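
To the "how does Solr generate the path" question: Solr doesn't. You index the full path string yourself, and at index time PathHierarchyTokenizerFactory splits it into one token per ancestor prefix, roughly:

```
input:   /dir1/dir2/dir3/file
tokens:  /dir1
         /dir1/dir2
         /dir1/dir2/dir3
         /dir1/dir2/dir3/file
```

With a KeywordTokenizer on the query side, a search for /dir1/dir2 then matches every document whose indexed path starts with that prefix, i.e. all descendants.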

On 25/09/2013 at 11:56, Erick Erickson wrote:

Well, then index the path to that node and do wildcard queries.
With a file path example, index (maybe string type)
/dir1/dir2/dir3/file

Finding all descendants of "dir2" is simple, just search on
/dir1/dir2/*

Also see http://wiki.apache.org/solr/HierarchicalFaceting

for other approaches.


Best,
Erick

On Tue, Sep 24, 2013 at 10:37 AM, Oussama Mubarak  wrote:

Thank you Erick.

I actually do need it to extend to grandchildren as stated in "I need to be
able to find all descendants of a node with one query".
I already have an index that allows me to find the direct children of a
node, what I need is to be able to get all descendants of a node (children,
grandchildren... etc).

I have submitted this questions on stackoverflow where I put in more details
:
http://stackoverflow.com/questions/18984183/join-query-in-apache-solr-how-to-get-all-levels-in-hierarchical-data

Semiaddict


On 24/09/2013 at 16:08, Erick Erickson wrote:

Sure, index the parent node id (perhaps multiple) with each child
and add &fq=parent_id:12.

you can do the reverse and index each node with it's child node IDs
to to ask the inverse question.

This won't extend to grandchildren/parents, but you haven't stated that you
need to do this.

Best,
Erick

On Mon, Sep 23, 2013 at 6:23 PM, Semiaddict  wrote:

Hello,

I am using Solr to index Drupal node relations (over 300k relations on over
500k nodes), where each relation consists of the following fields:
- id : the id of the relation
- source_id : the source (parent) node id
- target_id : the target (child) node id

I need to be able to find all descendants of a node with one query.
So far I've managed to get direct children using the join syntax of Solr4
such as (http://wiki.apache.org/solr/Join):
/solr/collection/select?q={!join from=source_id to=target_id}source_id:12

Note that each node can have multiple parents and multiple children.

Is there a way to get all descendants of node 12 without having to create a
loop in PHP to find all children, then all children of each child, etc ?
If not, is it possible to create a recursive query directly in Solr, or is
there a better way to index tree structures ?

Any help or suggestion would be highly appreciated.

Thank you in advance,

Semiaddict






auto commit error...:java.lang.ArrayIndexOutOfBoundsException

2013-09-26 Thread Tor Egil
Running Solr - 4.4.0 1504776 - sarowe - 2013-07-19 03:00:56 on solrcloud with
1 master and 2 replicas.
I have tested autocommit (every 1 document) for a while, and came across
this one today:

2013-09-26T06:47:07 INFO  (o.a.solr.update.UpdateHandler:511) - start
commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
2013-09-26T06:47:09 ERROR (o.a.solr.update.CommitTracker:119) - auto commit
error...:java.lang.ArrayIndexOutOfBoundsException: -20463
at
org.apache.lucene.index.ByteSliceReader.init(ByteSliceReader.java:54)
at
org.apache.lucene.index.TermsHashPerField.initReader(TermsHashPerField.java:104)
at
org.apache.lucene.index.FreqProxTermsWriterPerField.flush(FreqProxTermsWriterPerField.java:393)
at
org.apache.lucene.index.FreqProxTermsWriter.flush(FreqProxTermsWriter.java:85)
at org.apache.lucene.index.TermsHash.flush(TermsHash.java:116)
at org.apache.lucene.index.DocInverter.flush(DocInverter.java:53)
at
org.apache.lucene.index.DocFieldProcessor.flush(DocFieldProcessor.java:81)
at
org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:501)
at
org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:478)
at
org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:615)
at
org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2748)
at
org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2897)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2872)
at
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:549)
at org.apache.solr.update.CommitTracker.run(CommitTracker.java:216)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)


Tor Egil



--
View this message in context: 
http://lucene.472066.n3.nabble.com/auto-commit-error-java-lang-ArrayIndexOutOfBoundsException-tp4092078.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Xml file is not inserting from code java -jar post.jar *.xml

2013-09-26 Thread Kishan Parmar
http://www.coretechnologies.com/products/AlwaysUp/Apps/RunApacheSolrAsAService.html
\

this is the link from where I found the Solr installation

Regards,

Kishan Parmar
Software Developer
+91 95 100 77394
Jay Shree Krishnaa !!



On Thu, Sep 26, 2013 at 1:13 PM, Kishan Parmar  wrote:

> I am not using Tomcat; I am using the AlwaysUp software to run the Solr
> system.
> It is working perfectly.
>
> But I cannot add my XML file to the index. I changed my schema file as per
> the requirements of my XML file, and I am using this command to insert the
> XML into the index:
>
> java -Durl=http://localhost:8983/solr/core0/update -jar post.jar *.xml
>
> But it gives an error, and if I write "java -jar post.jar *.xml" then it
> indexes the data, but into another core, collection1.
> There is also an error in it: "no dataimport handler is found".
> So what can I do about these problems?
>
> Regards,
>
> Kishan Parmar
> Software Developer
> +91 95 100 77394
> Jay Shree Krishnaa !!
>
>
>
> On Sun, Sep 22, 2013 at 8:53 PM, Erick Erickson 
> wrote:
>
>> Please review:
>>
>> http://wiki.apache.org/solr/UsingMailingLists
>>
>> Best,
>> Erick
>>
>> On Sun, Sep 22, 2013 at 8:06 AM, Jack Krupansky 
>> wrote:
>> > Did you start Solr? How did you verify that Solr is running? Are you
>> able to
>> > query Solr and access the Admin UI?
>> >
>> > Most importantly, did you successfully complete the standard Solr
>> tutorial?
>> > (IOW, you know all the necessary steps for basic operation of Solr.)
>> >
>> > Lastly, did you verify (by examining the log) whether Solr was able to
>> > successfully load your schema changes without errors?
>> >
>> > -- Jack Krupansky
>> >
>> > -Original Message- From: Kishan Parmar
>> > Sent: Sunday, September 22, 2013 9:56 AM
>> > To: solr-user@lucene.apache.org
>> > Subject: Xml file is not inserting from code java -jar post.jar *.xml
>> >
>> >
>> > hi
>> >
>> > I am a new user of Solr. I have done my schema file, and when I run
>> > the command to insert an XML file into the index from cmd: java -jar post.jar *.xml
>> >
>> > it gives the error: Solr returned error 404 Not Found
>> >
>> > what can I do?
>> >
>> >
>> > Regards,
>> >
>> > Kishan Parmar
>> > Software Developer
>> > +91 95 100 77394
>> > Jay Shree Krishnaa !!
>>
>
>


Not able to index documents using CloudSolrServer

2013-09-26 Thread Shamik Bandopadhyay
Hi,

  I've recently started exploring SolrCloud and am trying to index
documents using the CloudSolrServer client. The issue I'm seeing is that if I
don't fire an explicit commit on the CloudSolrServer object, the documents are
not getting indexed. Here's my code snippet:

CloudSolrServer server = new CloudSolrServer("localhost:2181");
server.setDefaultCollection("collection1");
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "http://test.com/akn/test6.html";);
doc.addField("Source2", "aknsource");
doc.addField("url", "http://test.com/akn/test6.html";);
doc.addField("title", "SolrCloud rocks");
doc.addField("text", "This is a sample text");
UpdateResponse resp = server.add(doc);
//UpdateResponse res = server.commit();

I've 2 shards with 1 replica each and a single zookeeper instance.

Once I run this test code, I'm able to see the request hitting the nodes.
Here's the output from the log :


INFO  - 2013-09-26 03:19:04.981;
org.apache.solr.update.processor.LogUpdateProcessor; [collection1]
webapp=/solr path=/update params={distrib.from=
http://ec2-1-2-3-4.us-west-1.compute.amazonaws.com:8983/solr/collection1/&update.distrib=TOLEADER&wt=javabin&version=2}
{add=[http://test.com/akn/test6.html (1447223565945405440)]} 0 42
INFO  - 2013-09-26 03:19:19.943;
org.apache.solr.update.DirectUpdateHandler2; start
commit{,optimize=false,openSearcher=false,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
INFO  - 2013-09-26 03:19:20.249; org.apache.solr.core.SolrDeletionPolicy;
SolrDeletionPolicy.onCommit: commits: num=2

commit{dir=NRTCachingDirectory(org.apache.lucene.store.MMapDirectory@/mnt/ebs2/AutoDeskSolr44/solr/collection1/data/index
lockFactory=org.apache.lucene.store.NativeFSLockFactory@36ddc581;
maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_7,generation=7}

commit{dir=NRTCachingDirectory(org.apache.lucene.store.MMapDirectory@/mnt/ebs2/AutoDeskSolr44/solr/collection1/data/index
lockFactory=org.apache.lucene.store.NativeFSLockFactory@36ddc581;
maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_8,generation=8}
INFO  - 2013-09-26 03:19:20.250; org.apache.solr.core.SolrDeletionPolicy;
newest commit generation = 8
INFO  - 2013-09-26 03:19:20.252; org.apache.solr.search.SolrIndexSearcher;
Opening Searcher@c324b85 realtime
INFO  - 2013-09-26 03:19:20.254;
org.apache.solr.update.DirectUpdateHandler2; end_commit_flush


From the log, it looked like the commit went through successfully.
But then if I query the servers, none of the entries are showing up.

Now, if I uncomment UpdateResponse res = server.commit();, I do see the
data indexed. Here's the log:

INFO  - 2013-09-26 03:41:24.433;
org.apache.solr.update.processor.LogUpdateProcessor; [collection1]
webapp=/solr path=/update params={wt=javabin&version=2} {add=[
http://autodesk.com/akn/test6.html (1447224970494083072)]} 0 12
INFO  - 2013-09-26 03:41:24.490;
org.apache.solr.update.DirectUpdateHandler2; start
commit{,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
INFO  - 2013-09-26 03:41:24.788; org.apache.solr.core.SolrDeletionPolicy;
SolrDeletionPolicy.onCommit: commits: num=2

commit{dir=NRTCachingDirectory(org.apache.lucene.store.MMapDirectory@/mnt/ebs2/AutoDeskSolr44/solr/collection1/data/index
lockFactory=org.apache.lucene.store.NativeFSLockFactory@36ddc581;
maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_8,generation=8}

commit{dir=NRTCachingDirectory(org.apache.lucene.store.MMapDirectory@/mnt/ebs2/AutoDeskSolr44/solr/collection1/data/index
lockFactory=org.apache.lucene.store.NativeFSLockFactory@36ddc581;
maxCacheMB=48.0 maxMergeSizeMB=4.0),segFN=segments_9,generation=9}
INFO  - 2013-09-26 03:41:24.788; org.apache.solr.core.SolrDeletionPolicy;
newest commit generation = 9
INFO  - 2013-09-26 03:41:24.792; org.apache.solr.search.SolrIndexSearcher;
Opening Searcher@138ba593 main
INFO  - 2013-09-26 03:41:24.794;
org.apache.solr.update.DirectUpdateHandler2; end_commit_flush
INFO  - 2013-09-26 03:41:24.794; org.apache.solr.core.QuerySenderListener;
QuerySenderListener sending requests to
Searcher@138ba593main{StandardDirectoryReader(segments_9:21:nrt
_0(4.4):C1 _1(4.4):C1
_3(4.4):C1 _4(4.4):C1 _5(4.4):C1 _7(4.4):C1)}
INFO  - 2013-09-26 03:41:24.795; org.apache.solr.core.QuerySenderListener;
QuerySenderListener done.
INFO  - 2013-09-26 03:41:24.798; org.apache.solr.core.SolrCore;
[collection1] Registered new searcher
Searcher@138ba593main{StandardDirectoryReader(segments_9:21:nrt
_0(4.4):C1 _1(4.4):C1
_3(4.4):C1 _4(4.4):C1 _5(4.4):C1 _7(4.4):C1)}
INFO  - 2013-09-26 03:41:24.798;
org.apache.solr.update.processor.LogUpdateProcessor; [collection1]
webapp=/solr path=/update
params={waitSearcher=true&commit=true&wt=javabin&expungeDeletes=false&commit_end_point=true&version=2&softCommit=false}
{commit=} 0 308

Here's what I've in config file :


3
false



1000
 

Not sure what I'm missing here. I've been using ConcurrentUpdateSolrServer
before in 4
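
The first log excerpt shows the crux: openSearcher=false. That hard commit flushes the documents to disk but deliberately does not open a new searcher, so nothing becomes visible until some other commit opens one, which is exactly what the explicit server.commit() does. A typical near-real-time configuration pairs the two (values illustrative):

```xml
<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>  <!-- durability only: flush, no new searcher -->
</autoCommit>

<autoSoftCommit>
  <maxTime>1000</maxTime>             <!-- opens a searcher, making docs visible -->
</autoSoftCommit>
```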

Re: Xml file is not inserting from code java -jar post.jar *.xml

2013-09-26 Thread Kishan Parmar
I am not using Tomcat; I am using the AlwaysUp software to run the Solr
system.
It is working perfectly.

But I cannot add my XML file to the index. I changed my schema file as per
the requirements of my XML file, and I am using this command to insert the
XML into the index:

java -Durl=http://localhost:8983/solr/core0/update -jar post.jar *.xml

But it gives an error, and if I write "java -jar post.jar *.xml" then it
indexes the data, but into another core, collection1.
There is also an error in it: "no dataimport handler is found".
So what can I do about these problems?

Regards,

Kishan Parmar
Software Developer
+91 95 100 77394
Jay Shree Krishnaa !!



On Sun, Sep 22, 2013 at 8:53 PM, Erick Erickson wrote:

> Please review:
>
> http://wiki.apache.org/solr/UsingMailingLists
>
> Best,
> Erick
>
> On Sun, Sep 22, 2013 at 8:06 AM, Jack Krupansky 
> wrote:
> > Did you start Solr? How did you verify that Solr is running? Are you
> able to
> > query Solr and access the Admin UI?
> >
> > Most importantly, did you successfully complete the standard Solr
> tutorial?
> > (IOW, you know all the necessary steps for basic operation of Solr.)
> >
> > Lastly, did you verify (by examining the log) whether Solr was able to
> > successfully load your schema changes without errors?
> >
> > -- Jack Krupansky
> >
> > -Original Message- From: Kishan Parmar
> > Sent: Sunday, September 22, 2013 9:56 AM
> > To: solr-user@lucene.apache.org
> > Subject: Xml file is not inserting from code java -jar post.jar *.xml
> >
> >
> > hi
> >
> > I am a new user of Solr. I have done my schema file, and when I run
> > the command to insert an XML file into the index from cmd: java -jar post.jar *.xml
> >
> > it gives the error: Solr returned error 404 Not Found
> >
> > what can I do?
> >
> >
> > Regards,
> >
> > Kishan Parmar
> > Software Developer
> > +91 95 100 77394
> > Jay Shree Krishnaa !!
>