Re: Need Help for Location searching

2013-12-31 Thread Ahmet Arslan
Hi Rashi,

The closest thing that comes to mind is PathHierarchyTokenizerFactory:
http://wiki.apache.org/solr/HierarchicalFaceting
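
As a rough sketch of the kind of field type involved (the names and the delimiter are just illustrative, assuming locations are indexed as slash-delimited paths like Europe/France/Paris):

  <fieldType name="location_path" class="solr.TextField">
    <analyzer type="index">
      <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.KeywordTokenizerFactory"/>
    </analyzer>
  </fieldType>

At index time "Europe/France/Paris" then produces the tokens "Europe", "Europe/France" and "Europe/France/Paris", so a query for a country path also matches the documents for the states and cities under it.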


ahmet


On Tuesday, December 31, 2013 3:16 PM, rashi gandhi  
wrote:
Hi,


I want to design an analyzer that can support a location containment
relationship, for example Europe->France->Paris.


My requirement is: when a user searches for a country, the results
must include the documents containing that country, as well as the documents
containing the states and cities which come under that country.

However, documents with the country name must have higher relevancy.

The same applies when a user searches for a state or city.

It must obey the containment relationship up to 4 levels, i.e.
Continent->Country->State->City.


I have already designed an analyzer using SynonymFilterFactory for this,
and it is working as expected.

But I wanted to know whether there is any other way, apart from
SynonymFilterFactory, that can be used for this.

Does Solr provide any tokenizers or filters for this?

Please give me some pointers to move ahead.


Thanks in Advance



Re: config JoinQParserPlugin

2013-12-31 Thread Chris Hostetter

: Earlier I tried join queries using curl 
: 'http://myLinux:8983/solr/abc.edu_up/select?debug=true&q=*:*&fq={defType=join 
: from=id to=id fromIndex=abc.edu}subject:financial'  but didn't get any 
: response. There was nothing on Solr log either. So, I thought I need to 
: config join. Is there another way to at least get some response from 
: join queries?

When posting questions, it's important to not only show the URLs you 
tried, but also exactly what response you got -- in this case you have 
debugging turned on (good!) but you don't show us what the debugging 
information returned.

From what I can tell, you are misunderstanding how to use local params 
and the difference between "type" and "defType" in local params.

1) the syntax for local params is "{!p1=v1 p2=v2 ...}" ... note the "!", 
it's important, otherwise the "{...}" is just treated as input to the 
default parser.

2) inside local params, you use the "type" param to indicate which parser 
you want to use (or, as a shorthand, just specify the parser name 
immediately after the "!").

3) if you use "defType" as a localparam, it controls which parser is used 
for parsing hte *nested* query.

- - -

So in your example, you should probably be using...

/abc.edu_up/select?debug=true&q=*:*&fq={!type=join ...

...or this syntactic sugar...

/abc.edu_up/select?debug=true&q=*:*&fq={!join ...


If that still isn't working for you, please show us what output you do 
get, and some examples of the same query w/o the join filter (as well as 
showing us what the nested join query produces on its own, so we can 
verify you have docs matching it).
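
As a concrete sketch of that last check (core name taken from your original URL), something like

  /solr/abc.edu/select?q=subject:financial&rows=0

should report a non-zero numFound before the join filter can be expected to match anything.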

Re: Grouping results with group.limit return wrong numFound ?

2013-12-31 Thread Ahmet Arslan
Hi Tasmaniski,

I don't follow. How come Liu's faceting workaround and group.ngroups=true produce 
different results?






On Tuesday, December 31, 2013 6:08 PM, tasmaniski  wrote:
@kamaci
Of course. That is the problem.

"group.limit is: the number of results (documents) to return for each
group."
numFound is the total number found, but *not* the sum of the number
*returned for each group*.

@Liu Bo
That seems to be the only workaround for the problem, but
it's too expensive to go through all the groups and calculate the total
number found/returned (I use PHP for the client :) ).

@iorixxx
Yes, I considered that (group.ngroups=true),
but in some groups the number of results found is less than the limit.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Grouping-results-with-group-limit-return-wrong-numFound-tp4108174p4108906.html

Sent from the Solr - User mailing list archive at Nabble.com.



Re: Grouping results with group.limit return wrong numFound ?

2013-12-31 Thread Chris Hostetter

I'm not sure if I'm completely following this thread, but I wanted to 
point out the existence of this bug in case it's causing problems in 
your specific case...

https://issues.apache.org/jira/browse/SOLR-4310

...there is a patch on that issue, but there are some unresolved questions about 
whether it works correctly in distributed cases.  If you'd like to try it 
out and post comments about whether it works for you (or even better: help 
write some additional tests), that would be helpful towards getting it 
committed.


: Date: Tue, 31 Dec 2013 08:07:50 -0800 (PST)
: From: tasmaniski 
: Reply-To: solr-user@lucene.apache.org
: To: solr-user@lucene.apache.org
: Subject: Re: Grouping results with group.limit return wrong numFound ?
: 
: @kamaci
: Of course. That is the problem.
: 
: "group.limit is: the number of results (documents) to return for each
: group."
: numFound is the total number found, but *not* the sum of the number
: *returned for each group*.
: 
: @Liu Bo
: That seems to be the only workaround for the problem, but
: it's too expensive to go through all the groups and calculate the total
: number found/returned (I use PHP for the client :) ).
: 
: @iorixxx
: Yes, I considered that (group.ngroups=true),
: but in some groups the number of results found is less than the limit.
: 
: 
: 
: --
: View this message in context: 
http://lucene.472066.n3.nabble.com/Grouping-results-with-group-limit-return-wrong-numFound-tp4108174p4108906.html
: Sent from the Solr - User mailing list archive at Nabble.com.
: 

-Hoss
http://www.lucidworks.com/


Re: Grouping results with group.limit return wrong numFound ?

2013-12-31 Thread tasmaniski
@kamaci
Of course. That is the problem.

"group.limit is: the number of results (documents) to return for each
group."
numFound is the total number found, but *not* the sum of the number
*returned for each group*.

@Liu Bo
That seems to be the only workaround for the problem, but
it's too expensive to go through all the groups and calculate the total
number found/returned (I use PHP for the client :) ).

@iorixxx
Yes, I considered that (group.ngroups=true),
but in some groups the number of results found is less than the limit.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Grouping-results-with-group-limit-return-wrong-numFound-tp4108174p4108906.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Need Help for Location searching

2013-12-31 Thread Jack Krupansky
Be aware that a number of countries span more than one continent. For 
example, Russia, Turkey, and others in that region of the world.


Normally, you flatten your data model for Solr, so each document would have 
a continent, country, state, and city.


You can do a dismax query for the terms on those four fields with a boost, 
although I'm not sure a boost will be of any value in the case of a dismax 
which is providing an exact match anyway.
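
As a rough sketch of that kind of request (field names assumed from the flattened model above, boosts purely illustrative):

  q=France&defType=dismax&qf=country^4 state^2 city^1 continent^1

so that a match on the country field contributes more to the score than a match on the state or city fields.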


-- Jack Krupansky

-Original Message- 
From: rashi gandhi

Sent: Tuesday, December 31, 2013 8:15 AM
To: solr-user@lucene.apache.org
Subject: Need Help for Location searching

Hi,


I want to design an analyzer that can support a location containment
relationship, for example Europe->France->Paris.


My requirement is: when a user searches for a country, the results
must include the documents containing that country, as well as the documents
containing the states and cities which come under that country.

However, documents with the country name must have higher relevancy.

The same applies when a user searches for a state or city.

It must obey the containment relationship up to 4 levels, i.e.
Continent->Country->State->City.


I have already designed an analyzer using SynonymFilterFactory for this,
and it is working as expected.

But I wanted to know whether there is any other way, apart from
SynonymFilterFactory, that can be used for this.

Does Solr provide any tokenizers or filters for this?

Please give me some pointers to move ahead.


Thanks in Advance 



Need Help for Location searching

2013-12-31 Thread rashi gandhi
Hi,


I want to design an analyzer that can support a location containment
relationship, for example Europe->France->Paris.


My requirement is: when a user searches for a country, the results
must include the documents containing that country, as well as the documents
containing the states and cities which come under that country.

However, documents with the country name must have higher relevancy.

The same applies when a user searches for a state or city.

It must obey the containment relationship up to 4 levels, i.e.
Continent->Country->State->City.


I have already designed an analyzer using SynonymFilterFactory for this,
and it is working as expected.

But I wanted to know whether there is any other way, apart from
SynonymFilterFactory, that can be used for this.

Does Solr provide any tokenizers or filters for this?

Please give me some pointers to move ahead.


Thanks in Advance


Re: question regarding dismax query results

2013-12-31 Thread Ahmet Arslan
Hi Vulcanoid,

If you want to consider proximity, you need to use the pf (phrase fields) and ps 
(phrase slop) parameters. Please see:

http://wiki.apache.org/solr/SolrRelevancyFAQ#How_can_I_search_for_one_term_near_another_term_.28say.2C_.22batman.22_and_.22movie.22.29


P.S. edismax has more fine-grained control over this via the pf2 and pf3 parameters.
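
A sketch using the field names from your query (boost values are just illustrative):

  defType=edismax&qf=assessee^0.3 itat_order^0.2&pf=itat_order^5&ps=0&pf2=itat_order&pf3=itat_order

pf re-scores documents where the whole query appears as a phrase in itat_order, and ps controls how far apart the terms may be for that phrase boost to still apply.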


On Tuesday, December 31, 2013 12:36 PM, Vulcanoid Developer 
 wrote:
Hi,

I have a solr schema which has fields related to Indian legal judgments and
want to provide a search engine on top of them.  I came across a problem
which I thought I would take the group's advice on.

For discussion sake let us assume there are only two fields "assessee" and
"itat_order" which are text fields; the latter has the entire judgment of
the court in text form.

Now I search using dismax against these 2 fields using a query like below

http://localhost:8983/solr/itat/select?q=additional+depreciation&start=20&rows=30&fl=assessee%2C+itat_order%2C+score&wt=xml&indent=true&defType=dismax&qf=assessee
^0.3+itat_order^0.2


For such a dismax query with the words additional depreciation (2 words, without
quotes), we get results where additional and depreciation occur separately
scoring higher than results which have the words additional
depreciation occurring immediately together.  Why does this happen?

Shouldn't we ideally be getting exact matches of additional depreciation
first and then matches which have both the words but apart from each other
after these exact matches?  (In general when I search for A B shouldn't I
get matches with A B as they appear first and then A and B separated by
distance or singly occurring?)

Below I have pasted the score and # of occurrences given for three results;
if you want I can share the text fields in these cases too.

(Also, for what it's worth, the Solr index uses only a
WhitespaceTokenizerFactory and LowerCaseFilterFactory for querying and
indexing.)

thanks
Vulcanoid

"""
decision of Heatshrink Technologies :
       score                          : 0.083743244
       additional depreciation  : 0 occurrence
         additional                     : 2 occurrences
         depreciation                 : 27 occurrences


decision of   Srinivasa Raju
       score                          : 0.08313061
       additional depreciation  : 0 occurrences
         additional                     : 5 occurrences
         depreciation                 : 30 occurrences


decision of     Nani Agro Foods
       score                          : 0.08217349
       additional depreciation  : 5 occurrences
         additional                     : 5 occurrences
         depreciation                 : 5 occurrences
"""


Re: Grouping results with group.limit return wrong numFound ?

2013-12-31 Thread Ahmet Arslan
Hi Liu,

Did you consider using group.ngroups=true ? It should give the same number as 
your faceting solution. 
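
(i.e. roughly &group=true&group.field=publisher&group.ngroups=true, using the field name from the original question; the total then comes back as "ngroups" in the grouped response.)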


Ahmet


On Tuesday, December 31, 2013 10:22 AM, Liu Bo  wrote:
Hi

I've met the same problem, and I've googled around but not found a direct
solution.

But there's a workaround: do a facet on your group field, with parameters
like

   facet=true
   facet.field=your_field
   facet.limit=-1
   facet.mincount=1

and then count how many faceted pairs are in the response. This should be the
same as the number of documents after grouping.

Cheers

Bold





On 31 December 2013 06:40, Furkan KAMACI  wrote:

> Hi;
>
> group.limit is: the number of results (documents) to return for each group.
> Defaults to 1. Did you check the page here:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=32604232
>
> Thanks;
> Furkan KAMACI
>
>
> On Wednesday, 25 December 2013, the user named tasmaniski
> wrote:
> > Hi All, When I perform a search with grouping results into groups and
> > limit the results in one group, I get that *numFound* is the same as if I
> > didn't use the limit. Looks like Solr first performs the search and
> > calculates numFound, and then groups and limits the results. I do not know
> > if this is a bug or a feature :) But I cannot use pagination and other
> > stuff. Is there any workaround, or have I missed something?
> > Example: I want to search book titles and limit the search to 3 results
> > per publisher: q=book_title: solr
> > php&group=true&group.field=publisher&group.limit=3&group.main=true
> > I have 20 results for the apress publisher but I show only 3, which works
> > OK. But in numFound I still have 20 for the apress publisher...
> >
> >
> >
> > --
> > View this message in context:
>
> http://lucene.472066.n3.nabble.com/Grouping-results-with-group-limit-return-wrong-numFound-tp4108174.html
> > Sent from the Solr - User mailing list archive at Nabble.com.
>



-- 
All the best

Liu Bo


Custom sorting on facets

2013-12-31 Thread Gupta, Abhinav
Hi,

I am using facets for suggestions. By default facet sort is based only on index 
order and count.
Now I have a requirement that, based on a value in the Solr doc, some 
suggestions must be at the top and then the others.

Example :

doc1: ProductInstance / hydraulic

doc2: ProductInstance / other hydraulic

doc3: Product / test hydraulic

doc4: Product / other test hydraulic

In the above 4 Solr documents (I will be having many more fields), I want to 
create a facet on machineID for suggestions, but I want to sort it based on the 
name field.
Say if I queried for hy*, then I should get facets from all 4 docs, but sorted 
by the name field.

PS: I can't use any other type of suggester, as I need to display the whole 
machineID text as suggestions and not a single word.

Many Thanks,
Abhinav


Re: adding wild card at the end of the text and search(like sql like search)

2013-12-31 Thread Ahmet Arslan
Hi Suren,

Try ComplexPhrase-4.2.1.zip, there is a read me file inside it.
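
Once it is set up, the kind of query it enables looks roughly like this (the parser name and field here are assumptions on my part; check the README for the exact registration):

  q={!complexphrase}name:"appl* ipho*"

i.e. wildcards inside a phrase, which is close to a SQL LIKE on the leading part of each word.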

Ahmet


On Monday, December 30, 2013 8:21 PM, suren  wrote:
Ahmet,
          I am using Solr 4.3.1. Do I still need to apply this patch? If
yes, please tell me the steps to follow. In the given link I see a lot of
patches, and I'm not sure which patch is for which version of Solr; also, I
don't see a note on how to apply the patch.

Thanks,
Suren.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/adding-wild-card-at-the-end-of-the-text-and-search-like-sql-like-search-tp4108399p4108765.html

Sent from the Solr - User mailing list archive at Nabble.com.



Re: Possible memory leak after segment merge? (related to DocValues?)

2013-12-31 Thread Michael McCandless
On Mon, Dec 30, 2013 at 1:22 PM, Greg Preston
 wrote:
> That was it.  Setting omitNorms="true" on all fields fixed my problem.
>  I left it indexing all weekend, and heap usage still looks great.

Good!
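
(For reference, a minimal sketch of that schema change -- the field name and type here are just placeholders:

  <field name="body" type="text_general" indexed="true" stored="true" omitNorms="true"/>

Norms are per-field, so it has to be set on every indexed text field you don't need length normalization for.)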

> I'm still not clear why bouncing the solr instance freed up memory,
> unless the in-memory structure for this norms data is lazily loaded
> somehow.

In fact it is lazily loaded, the first time a search (well,
Similarity) needs to load the norms for scoring.

> Anyway, thank you very much for the suggestion.

You're welcome.

Mike McCandless

http://blog.mikemccandless.com


Facet in query parameters return all the fields instead of the field mentioned in facet.field parameter

2013-12-31 Thread deepakas
Hi,
 I am making a Solr query by passing a facet field. For some reason it
returns all the fields in the Solr index as faceted, instead of just the field
"inputmethod". Is it because I have all the fields mentioned in
solrconfig.xml, or am I missing some other parameter?

http://solrhost:8983/solr/collection1/select?q=event_date:2013-12-19&facet=true&facet.field=inputmethod&wt=json&indent=true

Thanks, Deepak



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Facet-in-query-parameters-return-all-the-fields-instead-of-the-field-mentioned-in-facet-field-paramer-tp4108884.html
Sent from the Solr - User mailing list archive at Nabble.com.


Very long running replication.

2013-12-31 Thread anand chandak
Quick question about Solr replication: what happens if a replication of a 
very large index runs for longer than the interval between two 
replications? Would the automatically scheduled replication interfere 
with the currently running one, or would it simply not spawn the next 
iteration of replication? Can somebody throw some light on this?






Define index and query with many delimiter

2013-12-31 Thread dtphat
Hi all,
I have a problem with delimiters when indexing: I define something like the code below in
schema.xml:



But I want to split on the general delimiters as well as other, different
delimiters, such as "_", ";", and other defined cases.

example: input: hello_world => I want: index {"hello", "world"} instead of
index {"hello_world"}.

I searched on the internet and I see types="wdfftypes.txt", but I don't know how
to define delimiters in types="wdfftypes.txt".
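
For what it's worth, that types file is a plain-text mapping of single characters to token types, one per line; a sketch that would mark "_" and ";" explicitly as split characters (WordDelimiterFilterFactory already treats most punctuation as delimiters by default) might look like

  _ => SUBWORD_DELIM
  ; => SUBWORD_DELIM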

Help me solve this problem!







-
Phat T. Dong
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Define-index-and-query-with-many-delimiter-tp4108870.html
Sent from the Solr - User mailing list archive at Nabble.com.


question regarding dismax query results

2013-12-31 Thread Vulcanoid Developer
Hi,

I have a solr schema which has fields related to Indian legal judgments and
want to provide a search engine on top of them.  I came across a problem
which I thought I would take the group's advice on.

For discussion sake let us assume there are only two fields "assessee" and
"itat_order" which are text fields; the latter has the entire judgment of
the court in text form.

Now I search using dismax against these 2 fields using a query like below

http://localhost:8983/solr/itat/select?q=additional+depreciation&start=20&rows=30&fl=assessee%2C+itat_order%2C+score&wt=xml&indent=true&defType=dismax&qf=assessee
^0.3+itat_order^0.2


For such a dismax query with the words additional depreciation (2 words, without
quotes), we get results where additional and depreciation occur separately
scoring higher than results which have the words additional
depreciation occurring immediately together.  Why does this happen?

Shouldn't we ideally be getting exact matches of additional depreciation
first and then matches which have both the words but apart from each other
after these exact matches?  (In general when I search for A B shouldn't I
get matches with A B as they appear first and then A and B separated by
distance or singly occurring?)

Below I have pasted the score and # of occurrences given for three results;
if you want I can share the text fields in these cases too.

(Also, for what it's worth, the Solr index uses only a
WhitespaceTokenizerFactory and LowerCaseFilterFactory for querying and
indexing.)

thanks
Vulcanoid

"""
decision of Heatshrink Technologies :
   score  : 0.083743244
   additional depreciation  : 0 occurrence
 additional : 2 occurrences
 depreciation : 27 occurrences


decision of   Srinivasa Raju
   score  : 0.08313061
   additional depreciation  : 0 occurrences
 additional : 5 occurrences
 depreciation : 30 occurrences


decision of Nani Agro Foods
   score  : 0.08217349
   additional depreciation  : 5 occurrences
 additional : 5 occurrences
 depreciation : 5 occurrences
"""


Re: Solr search videos

2013-12-31 Thread Fkyz
Thanks again! =D
You have been very helpful! I will search a little bit more before I start
something.
Thanks, and 
Happy new year!



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-search-videos-tp4108731p4108864.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Solr search videos

2013-12-31 Thread Furkan KAMACI
Hi;

First of all, if you try to crawl a whole web site such as YouTube, your IP
address may be banned and your crawl rate may decrease dramatically. You can
easily test this with Wikipedia. On the other hand, you will not parse the
video files themselves; you parse the meta information on the page, and that
is what you will index in Solr.

Thanks;
Furkan KAMACI


2013/12/31 Fkyz 

> Thanks! But I still have this doubt... Can Nutch crawl the entire YouTube?
> And if it can, how large could the index file be?
>
> Thanks Again! =)
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Solr-search-videos-tp4108731p4108861.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: Solr -The connection has timed out

2013-12-31 Thread Furkan KAMACI
Hi;

Besides the other error lines, did you notice this log line:

*java.net.BindException: Address already in use*

Could you check whether there is any other application using port 8983?

Thanks;
Furkan KAMACI


2013/12/31 rakesh 

> Finally able to get the full log details
>
> ERROR - 2013-12-30 15:13:00.811; org.apache.solr.core.SolrCore;
> [collection1] Solr index directory
>
> '/ctgapps/apache-solr-4.6.0/solr-4.6.0/example/solr/collection1/data/index/'
> is locked.  Throwing exception
> INFO  - 2013-12-30 15:13:00.812; org.apache.solr.core.SolrCore;
> [collection1]  CLOSING SolrCore org.apache.solr.core.SolrCore@de26e52
> INFO  - 2013-12-30 15:13:00.812; org.apache.solr.update.SolrCoreState;
> Closing SolrCoreState
> INFO  - 2013-12-30 15:13:00.813;
> org.apache.solr.update.DefaultSolrCoreState; SolrCoreState ref count has
> reached 0 - closing IndexWriter
> INFO  - 2013-12-30 15:13:00.813; org.apache.solr.core.SolrCore;
> [collection1] Closing main searcher on request.
> INFO  - 2013-12-30 15:13:00.814;
> org.apache.solr.core.CachingDirectoryFactory; Closing
> NRTCachingDirectoryFactory - 2 directories currently being tracked
> INFO  - 2013-12-30 15:13:00.814;
> org.apache.solr.core.CachingDirectoryFactory; looking to close
> /ctgapps/apache-solr-4.6.0/solr-4.6.0/example/solr/collection1/data/index
>
> [CachedDir<>]
> INFO  - 2013-12-30 15:13:00.814;
> org.apache.solr.core.CachingDirectoryFactory; Closing directory:
> /ctgapps/apache-solr-4.6.0/solr-4.6.0/example/solr/collection1/data/index
> INFO  - 2013-12-30 15:13:00.815;
> org.apache.solr.core.CachingDirectoryFactory; looking to close
> /ctgapps/apache-solr-4.6.0/solr-4.6.0/example/solr/collection1/data
>
> [CachedDir<>]
> INFO  - 2013-12-30 15:13:00.815;
> org.apache.solr.core.CachingDirectoryFactory; Closing directory:
> /ctgapps/apache-solr-4.6.0/solr-4.6.0/example/solr/collection1/data
> ERROR - 2013-12-30 15:13:00.817; org.apache.solr.core.CoreContainer; Unable
> to create core: collection1
> org.apache.solr.common.SolrException: Index locked for write for core
> collection1
> at org.apache.solr.core.SolrCore.(SolrCore.java:834)
> at org.apache.solr.core.SolrCore.(SolrCore.java:625)
> at
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:557)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:592)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:271)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:263)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.lucene.store.LockObtainFailedException: Index locked
> for write for core collection1
> at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:491)
> at org.apache.solr.core.SolrCore.(SolrCore.java:755)
> ... 13 more
> ERROR - 2013-12-30 15:13:00.819; org.apache.solr.common.SolrException;
> null:org.apache.solr.common.SolrException: Unable to create core:
> collection1
> at
> org.apache.solr.core.CoreContainer.recordAndThrow(CoreContainer.java:977)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:601)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:271)
> at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:263)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.solr.common.SolrException: Index locked for write for
> core collection1
> at org.apache.solr.core.SolrCore.(SolrCore.java:834)
> at org.apache.solr.core.SolrCore.(SolrCore.java:625)
> at
> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:557)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:592)
> ... 10 more
> Caused by: org.apache.lucene.store.LockObtainFailedException: Index locked
> for write for core collection1
> at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:491)
> at org.apache.solr.core.SolrCore.(SolrCore.java:755)
> ... 13 more
>
> INFO  - 2013-12-30 15:13:00.820;
> org.apache.solr.servlet.

Re: Solr search videos

2013-12-31 Thread Fkyz
Thanks! But I still have this doubt... Can Nutch crawl the entire YouTube? And
if it can, how large could the index file be?

Thanks Again! =)



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-search-videos-tp4108731p4108861.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Not able to access solr core

2013-12-31 Thread Shawn Heisey
On 12/31/2013 1:00 AM, kumar wrote:
> I have two cores "core0" and "core1"
>  
> When i am accessing core0 using following url it is giving proper results.
> 
> http://hostname/solr/core0/main?q=*%3A*&wt=json&indent=true
> 
> But when i am trying to use core1 using the following url it is not giving
> the results, saying authorization required and a 401 error.
> 
> http://hostname/solr/core1/main?q=*%3A*&wt=json&indent=true
> 
> How can i resolve this problem. i am using tomcat7, ubuntu OS

Solr has no security built in.  Your tomcat configuration contains some
kind of authentication.  As for why it's working on core0 without a
problem, my best guess is that it has some kind of exception list for
that URL path.  You'll need to get help from a tomcat support resource.

It's strongly recommended that you run Solr using the Jetty that's
included with the example.  That is a fully tested container for Solr,
with an optimized configuration for most setups.

Thanks,
Shawn



Re: Grouping results with group.limit return wrong numFound ?

2013-12-31 Thread Liu Bo
Hi

I've met the same problem, and I've googled around but not found a direct
solution.

But there's a workaround: do a facet on your group field, with parameters
like

   facet=true
   facet.field=your_field
   facet.limit=-1
   facet.mincount=1

and then count how many faceted pairs are in the response. This should be the
same as the number of documents after grouping.
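
As a sketch with the field names from the original question, such a request could look like

  /select?q=book_title:solr&rows=0&facet=true&facet.field=publisher&facet.limit=-1&facet.mincount=1

and the number of publisher/count pairs under facet_counts is then the number of groups.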

Cheers

Bold




On 31 December 2013 06:40, Furkan KAMACI  wrote:

> Hi;
>
> group.limit is: the number of results (documents) to return for each group.
> Defaults to 1. Did you check the page here:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=32604232
>
> Thanks;
> Furkan KAMACI
>
>
> On Wednesday, 25 December 2013, the user named tasmaniski
> wrote:
> > Hi All, When I perform a search with grouping results into groups and
> > limit the results in one group, I get that *numFound* is the same as if I
> > didn't use the limit. Looks like Solr first performs the search and
> > calculates numFound, and then groups and limits the results. I do not know
> > if this is a bug or a feature :) But I cannot use pagination and other
> > stuff. Is there any workaround, or have I missed something?
> > Example: I want to search book titles and limit the search to 3 results
> > per publisher: q=book_title: solr
> > php&group=true&group.field=publisher&group.limit=3&group.main=true
> > I have 20 results for the apress publisher but I show only 3, which works
> > OK. But in numFound I still have 20 for the apress publisher...
> >
> >
> >
> > --
> > View this message in context:
>
> http://lucene.472066.n3.nabble.com/Grouping-results-with-group-limit-return-wrong-numFound-tp4108174.html
> > Sent from the Solr - User mailing list archive at Nabble.com.
>



-- 
All the best

Liu Bo


Re: Chaining plugins

2013-12-31 Thread Liu Bo
Hi

I've done similar things as paul.

What I do is extend the default QueryComponent and override the
prepare method;

then I just change the SolrParams according to our logic and call
super.prepare(). Then I replace the default QueryComponent with mine in my
search/query handler.

In this way, none of Solr's default behavior is touched. I think you can
do your logic in the prepare method, and then let Solr proceed with the search.

I've tested it along with other components on both a single Solr node and
SolrCloud. It works fine.
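
A minimal sketch of that approach (class and parameter names are just illustrative, not taken from any real code):

import java.io.IOException;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.handler.component.QueryComponent;
import org.apache.solr.handler.component.ResponseBuilder;

// Drop-in replacement for the stock QueryComponent: adjust the request
// params in prepare(), then delegate to the default behavior.
public class CustomQueryComponent extends QueryComponent {
  @Override
  public void prepare(ResponseBuilder rb) throws IOException {
    ModifiableSolrParams params = new ModifiableSolrParams(rb.req.getParams());
    if (params.getBool("logToFile", false)) {
      // side effect goes here (e.g. write the query somewhere), then drop
      // the custom parameter so the rest of the chain never sees it
      params.remove("logToFile");
    }
    rb.req.setParams(params);
    super.prepare(rb); // hand off to the stock QueryComponent
  }
}

It is then registered in solrconfig.xml in place of the handler's default "query" component.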

Hope it helps

Cheers

Bold



On 31 December 2013 06:03, Chris Hostetter  wrote:

>
> You don't need to write your own handler.
>
> See the previous comment about implementing a SearchComponent -- you can
> check for the params in your prepare() method and do whatever side effects
> you want, then register your custom component and hook it into the
> component chain of whatever handler configuration you want (either using
> the "components" list or by specifying it as a "first-components")...
>
>
> https://cwiki.apache.org/confluence/display/solr/RequestHandlers+and+SearchComponents+in+SolrConfig
>
: I want to save the query into a file when a user is changing a parameter in
: the query, let's say he adds "logTofile=1"; then the searchHandler will
: provide the same result as without this parameter, but in the background it
: will do some logic (e.g. save the query to a file).
: But I don't want to touch the Solr source code; all I want is to add code (like
: a plugin). If I understand it right, I want to write my own search handler, do
: some logic, then pass the data to Solr's default search handler.
>
>
>
>
> -Hoss
> http://www.lucidworks.com/
>



-- 
All the best

Liu Bo


Not able to access solr core

2013-12-31 Thread kumar
Hi, 

I have two cores "core0" and "core1"
 
When i am accessing core0 using following url it is giving proper results.

http://hostname/solr/core0/main?q=*%3A*&wt=json&indent=true

But when I am trying to use core1 using the following URL, it is not giving
the results, saying authorization required and a 401 error.

http://hostname/solr/core1/main?q=*%3A*&wt=json&indent=true

How can I resolve this problem? I am using Tomcat 7 on Ubuntu OS.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Not-able-to-access-solr-core-tp4108856.html
Sent from the Solr - User mailing list archive at Nabble.com.