Re: Is there any special meaning for # symbol in solr.

2012-09-03 Thread Ahmet Arslan
> if i use this link ,http://localhost:8080/solr/select?&q=(techskill%3Ac%23)
> , solr is going to display techskill:c result.
> But i want to display only techskill:c#  result.

Can you paste the field type definition (used for techskill)?

By the way, you don't need the parentheses; your q parameter should be the URL-encoded string 
"techskill:c#".

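For example, from Java (a quick sketch; the point is only that ':' and '#' have to be 
percent-encoded before the request is sent):

import java.net.URLEncoder;

public class EncodeQuery {
    public static void main(String[] args) throws Exception {
        // An unencoded '#' starts the URL fragment, so everything from it on
        // never reaches Solr; encode the whole q value instead.
        String q = URLEncoder.encode("techskill:c#", "UTF-8");
        System.out.println("http://localhost:8080/solr/select?q=" + q);
        // prints: http://localhost:8080/solr/select?q=techskill%3Ac%23
    }
}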



Re: BaseTokenFilterFactory is missing in Solr 4.0?

2012-09-03 Thread Ahmet Arslan
> seems like it became this :
> org.apache.lucene.analysis.util.TokenFilterFactory
> 
> can anyone confirm if i am correct? still not totally sure

Yes, it is.
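For reference, a bare-bones 4.0-style factory looks roughly like this (a minimal sketch; 
the class name is made up and the filter just passes tokens through unchanged):

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.util.TokenFilterFactory;

// In 4.0 you extend org.apache.lucene.analysis.util.TokenFilterFactory
// (instead of the old BaseTokenFilterFactory) and implement create(TokenStream).
public class PassThroughFilterFactory extends TokenFilterFactory {

    @Override
    public TokenStream create(TokenStream input) {
        return new PassThroughFilter(input);
    }

    private static final class PassThroughFilter extends TokenFilter {
        PassThroughFilter(TokenStream in) {
            super(in);
        }

        @Override
        public boolean incrementToken() throws IOException {
            // A real filter would inspect or modify the token attributes here.
            return input.incrementToken();
        }
    }
}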


Re: BaseTokenFilterFactory is missing in Solr 4.0?

2012-09-03 Thread deniz
Seems like it became this:
org.apache.lucene.analysis.util.TokenFilterFactory

Can anyone confirm if I am correct? Still not totally sure.



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/BaseTokenFilterFactory-is-missing-in-Solr-4-0-tp4005142p4005143.html
Sent from the Solr - User mailing list archive at Nabble.com.


BaseTokenFilterFactory is missing in Solr 4.0?

2012-09-03 Thread deniz
Well, as the title says... I couldn't find it in any jars while I was following
the tutorial here:
http://solr.pl/en/2012/05/14/developing-your-own-solr-filter/

I thought something was wrong with my env, so I was trying to find which jar
includes that class, but a search on grepcode only shows 3.6.0:

http://grepcode.com/search/?query=BaseTokenFilterFactory

Then I checked the changes.txt files in the Solr nightlies (in 4.0 ALPHA and
3.6.1), but there is nothing about BaseTokenFilterFactory...

Is this a bug? Or am I seriously missing something here?



-
Zeki ama calismiyor... Calissa yapar...
--
View this message in context: 
http://lucene.472066.n3.nabble.com/BaseTokenFilterFactory-is-missing-in-Solr-4-0-tp4005142.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: AW: AW: auto completion search with solr using NGrams in SOLR

2012-09-03 Thread aniljayanti
Hi,

thanks,

I'm not able to attach the requested XML files here. Can you give me your email ID, so
that I can send the schema and solrconfig XMLs?

Regards,
AnilJayanti



--
View this message in context: 
http://lucene.472066.n3.nabble.com/auto-completion-search-with-solr-using-NGrams-in-SOLR-tp3998559p4005141.html
Sent from the Solr - User mailing list archive at Nabble.com.


Searching of Chinese characters and English

2012-09-03 Thread waynelam

Hi all,

I tried to modify the schema.xml and solrconfig.xml that come with the Drupal 
"search_api_solr" module so that they are suitable 
for a CJK environment. I can see Chinese text cut up into two-character tokens in 
"Field Analysis". If I use the following query


my_ip_address:8080/solr/select?indent=on&version=2.2&fq=t_title:"Find"&start=0&rows=10&fl=t_title

I can see it returning results. The problem is when I change the search 
keyword for one of my fields (e.g. t_title) to Chinese characters. It 
always shows an empty result list in the response.

It is strange because if a title contains both Chinese 
and English (e.g. testing ??), when I search just for the English part (e.g. 
fq=t_title:"testing"), I can find the result perfectly. It only becomes a problem 
when searching Chinese characters.



It would be much appreciated if you guys could show me which part I did wrong.

Thanks

Wayne

*My Settings:*
Java : 1.6.0_24
Solr : version 3.6.1
tomcat: version 6.0.35

*My schema.xml* (I highlighted the places I changed from the default)

The field type I changed is declared with stored="true" multiValued="true" and uses 
class="org.apache.lucene.analysis.cjk.CJKAnalyzer" for both the index-time and the 
query-time analyzer. The index-time chain keeps a word delimiter filter with 
generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" 
catenateAll="0" splitOnCaseChange="1", an English stemming filter with 
language="English" protected="protwords.txt", and a Unicode folding filter with 
version="icu4j" composed="false" remove_diacritics="true" remove_modifiers="true" 
fold="true". The query-time chain is identical except that catenateWords="0" and 
catenateNumbers="0". The other highlighted change is autoGeneratePhraseQueries="false" 
on one of the field types.

The rest of the schema (a sortable type with stored="true" sortMissingLast="true" 
omitNorms="true", a solr.StrField type, the required fields, a number of fields with 
termVectors="true", and uniqueKey id) is as shipped with the module.
 




Re: Is there any special meaning for # symbol in solr.

2012-09-03 Thread veena rani
If I use this link, http://localhost:8080/solr/select?&q=(techskill%3Ac%23),
Solr displays the techskill:c results.
But I want it to display only the techskill:c# results.


On Mon, Sep 3, 2012 at 7:23 PM, Toke Eskildsen wrote:

> On Mon, 2012-09-03 at 13:39 +0200, veena rani wrote:
> > >  I have an issue with the # symbol, in solr,
> > >  I m trying to search for string ends up with # , Eg:c#, it is throwing
> > >  error Like, org.apache.lucene.queryparser.classic.ParseException:
> Cannot
> > >  parse '(techskill:c': Encountered "" at line 1, column 12.
>
> Solr only received '(techskill:c', which has unbalanced parentheses.
> My guess is that you do not perform a URL-encode of '#' and that you
> were sending something like
> http://localhost:8080/solr/select?&q=(techskill:c#)
> when you should have been sending
> http://localhost:8080/solr/select?&q=(techskill%3Ac%23)
>
>


-- 
Regards,
Veena.
Banglore.


Re: Missing Features - AndMaybe and Otherwise

2012-09-03 Thread Lance Norskog
> *AndMaybe(a, b)*

Lucene does not really have booleans. Instead, it has plus, minus, and neither. 
Plus means "this term has to be in the document", minus means "this term cannot be in 
the document", and neither means "maybe". So "+A B" means "A has to be in the document, 
and the document scores higher if B is also in it", which is exactly your "A AND 
MAYBE B". MAYBE is not part of the AND/OR/NOT syntax.
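In raw Lucene terms that is simply one required clause plus one optional clause, e.g. 
(a small sketch, with a made-up field name "body"):

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

// "+A B" / "A AND MAYBE B": A must match, B only contributes to the score when it matches.
BooleanQuery andMaybe = new BooleanQuery();
andMaybe.add(new TermQuery(new Term("body", "a")), Occur.MUST);    // +A
andMaybe.add(new TermQuery(new Term("body", "b")), Occur.SHOULD);  // B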

>  *Otherwise(a, b)*

I'm not sure that "if-then" can be done.


- Original Message -
| From: "Ramzi Alqrainy" 
| To: solr-user@lucene.apache.org
| Sent: Monday, September 3, 2012 7:36:06 AM
| Subject: Missing Features - AndMaybe and Otherwise
| 
| Hi,
| 
| I would like to help me for certain problem. I have encountered a
| problem in
| task I think if you implement the below functions/conditions, you
| will help
| us for many issues.
| 
| *AndMaybe(a, b)*
| Binary query takes results from the first query. If and only if the
| same
| document also appears in the results from the second query, the score
| from
| the second query will be added to the score from the first query.
| 
| *Otherwise(a, b)*
| A binary query that only matches the second clause if the first
| clause
| doesn’t match any documents.
| 
| 
| My problem is :
| 
| I have documents and I do group by on certain field (e.g. Field1).  I
| want
| to get documents with Field2 is [3 or 9 or 12] if exist, otherwise
| get any
| document. please see the below example.
| 
| 
| D1 :---
| Field1 : 1  -
| Field2 : 3 ->  D1 (group by on field1 and
| field2 is
| 3)
|   -
| D2: ---
| Field1 : 1
| Field2 : 4
| 
| 
| D3:---
| Field1 : 2   -
| Field2 : 5--> any document D3 or D4
| -
| D4:---
| Field1 : 2
| Field2 : 7
| 
| I want to get the results like below
| D1(Mandatory)
| (D3 OR D4)
| 
| 
| 
| --
| View this message in context:
| 
http://lucene.472066.n3.nabble.com/Missing-Features-AndMaybe-and-Otherwise-tp4005059.html
| Sent from the Solr - User mailing list archive at Nabble.com.
| 


RE: Solr Not releasing memory

2012-09-03 Thread Mikhail Khludnev
Rohit,

Why do you think it should free it during idle time? Let us know what numbers
you are actually watching. Check this, it can be interesting:
blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
On 04.09.2012 0:45, "Markus Jelsma" 
wrote:

> You've got more than 45GB of physical RAM in your machine? I assume it's
> actually virtual memory you're seeing, which is not a problem, even on
> Windows. It's not uncommon for resident memory to be higher than the
> allocated heap space and it's normal to have a high virtual memory address
> space if you have a large index.
>
> -Original message-
> > From:Rohit 
> > Sent: Tue 04-Sep-2012 00:33
> > To: solr-user@lucene.apache.org
> > Subject: RE: Solr Not releasing memory
> >
> > I am taking of Physical memory here, we start at -Xms of 2gb but very
> soon it goes high as 45Gb. The memory never comes down even when a single
> user is not using the system.
> >
> > Regards,
> > Rohit
> >
> >
> > -Original Message-
> > From: Markus Jelsma [mailto:markus.jel...@openindex.io]
> > Sent: 03 September 2012 14:58
> > To: solr-user@lucene.apache.org
> > Subject: RE: Solr Not releasing memory
> >
> > It would be helpful yo know which memory isn't being released. Is it
> virtual or physical or shared memory? Is it the heap space?
> >
> >
> > -Original message-
> > > From:Mikhail Khludnev 
> > > Sent: Mon 03-Sep-2012 16:52
> > > To: solr-user@lucene.apache.org
> > > Subject: RE: Solr Not releasing memory
> > >
> > > Rohit,
> > > Which collector do you use? Releasing physical ram is possible with
> > > compacting collectors like serial, parallel and maybe g1 and not
> > > possible with cms. The more important thing that releasing is really
> > > suspicious and even odd requrement. Please provide more details about
> > > your jvm and overall challenge.
> > > 03.09.2012 15:03 пользователь "Rohit"  написал:
> > > >
> > > > I am currently using StandardDirectoryFactory, would switching
> > > > directory
> > > factory have any impact on the indexes?
> > > >
> > > > Regards,
> > > > Rohit
> > > >
> > > >
> > > > -Original Message-
> > > > From: Claudio Ranieri [mailto:claudio.rani...@estadao.com]
> > > > Sent: 03 September 2012 10:03
> > > > To: solr-user@lucene.apache.org
> > > > Subject: RES: Solr Not releasing memory
> > > >
> > > > Are you using MMapDirectoryFactory?
> > > > I had swap problem in linux to a big index when I used
> > > MMapDirectoryFactory.
> > > > You can to try use solr.NIOFSDirectoryFactory.
> > > >
> > > >
> > > > -Mensagem original-
> > > > De: Lance Norskog [mailto:goks...@gmail.com] Enviada em: domingo, 2
> > > > de
> > > setembro de 2012 22:00
> > > > Para: solr-user@lucene.apache.org
> > > > Assunto: Re: Solr Not releasing memory
> > > >
> > > > 1) I believe Java 1.7 release memory back to the OS.
> > > > 2) All of the Javas I've used on Windows do this.
> > > >
> > > > Is the physical memory use a problem? Does it push out all other
> programs?
> > > >
> > > > Or is it just that the Java process appears larger? This explains
> > > > the
> > > latter:
> > > > http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.h
> > > > tml
> > > >
> > > > - Original Message -
> > > > | From: "Rohit" 
> > > > | To: solr-user@lucene.apache.org
> > > > | Sent: Sunday, September 2, 2012 1:22:14 AM
> > > > | Subject: Solr Not releasing memory
> > > > |
> > > > | Hi,
> > > > |
> > > > |
> > > > |
> > > > | We are running solr3.5 using tomcal 6.26  on a Windows Enterprise
> > > > | RC2 server, our index size if pretty large.
> > > > |
> > > > |
> > > > |
> > > > | We have noticed that once tomcat starts using/reserving ram it
> > > > | never releases them, even when there is not a single user on the
> > > > | system.  I have tried forced garbage collection, but that doesn't
> > > > | seem to help either.
> > > > |
> > > > |
> > > > |
> > > > | Regards,
> > > > |
> > > > | Rohit
> > > > |
> > > > |
> > > > |
> > > > |
> > > >
> > > >
> > >
> >
> >
> >
>


RE: Solr Not releasing memory

2012-09-03 Thread Markus Jelsma
You've got more than 45GB of physical RAM in your machine? I assume it's 
actually virtual memory you're seeing, which is not a problem, even on Windows. 
It's not uncommon for resident memory to be higher than the allocated heap 
space and it's normal to have a high virtual memory address space if you have a 
large index.
 
-Original message-
> From:Rohit 
> Sent: Tue 04-Sep-2012 00:33
> To: solr-user@lucene.apache.org
> Subject: RE: Solr Not releasing memory
> 
> I am taking of Physical memory here, we start at -Xms of 2gb but very soon it 
> goes high as 45Gb. The memory never comes down even when a single user is not 
> using the system.
> 
> Regards,
> Rohit
> 
> 
> -Original Message-
> From: Markus Jelsma [mailto:markus.jel...@openindex.io] 
> Sent: 03 September 2012 14:58
> To: solr-user@lucene.apache.org
> Subject: RE: Solr Not releasing memory
> 
> It would be helpful yo know which memory isn't being released. Is it virtual 
> or physical or shared memory? Is it the heap space?
>  
>  
> -Original message-
> > From:Mikhail Khludnev 
> > Sent: Mon 03-Sep-2012 16:52
> > To: solr-user@lucene.apache.org
> > Subject: RE: Solr Not releasing memory
> > 
> > Rohit,
> > Which collector do you use? Releasing physical ram is possible with 
> > compacting collectors like serial, parallel and maybe g1 and not 
> > possible with cms. The more important thing that releasing is really 
> > suspicious and even odd requrement. Please provide more details about 
> > your jvm and overall challenge.
> > 03.09.2012 15:03 пользователь "Rohit"  написал:
> > >
> > > I am currently using StandardDirectoryFactory, would switching 
> > > directory
> > factory have any impact on the indexes?
> > >
> > > Regards,
> > > Rohit
> > >
> > >
> > > -Original Message-
> > > From: Claudio Ranieri [mailto:claudio.rani...@estadao.com]
> > > Sent: 03 September 2012 10:03
> > > To: solr-user@lucene.apache.org
> > > Subject: RES: Solr Not releasing memory
> > >
> > > Are you using MMapDirectoryFactory?
> > > I had swap problem in linux to a big index when I used
> > MMapDirectoryFactory.
> > > You can to try use solr.NIOFSDirectoryFactory.
> > >
> > >
> > > -Mensagem original-
> > > De: Lance Norskog [mailto:goks...@gmail.com] Enviada em: domingo, 2 
> > > de
> > setembro de 2012 22:00
> > > Para: solr-user@lucene.apache.org
> > > Assunto: Re: Solr Not releasing memory
> > >
> > > 1) I believe Java 1.7 release memory back to the OS.
> > > 2) All of the Javas I've used on Windows do this.
> > >
> > > Is the physical memory use a problem? Does it push out all other programs?
> > >
> > > Or is it just that the Java process appears larger? This explains 
> > > the
> > latter:
> > > http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.h
> > > tml
> > >
> > > - Original Message -
> > > | From: "Rohit" 
> > > | To: solr-user@lucene.apache.org
> > > | Sent: Sunday, September 2, 2012 1:22:14 AM
> > > | Subject: Solr Not releasing memory
> > > |
> > > | Hi,
> > > |
> > > |
> > > |
> > > | We are running solr3.5 using tomcal 6.26  on a Windows Enterprise 
> > > | RC2 server, our index size if pretty large.
> > > |
> > > |
> > > |
> > > | We have noticed that once tomcat starts using/reserving ram it 
> > > | never releases them, even when there is not a single user on the 
> > > | system.  I have tried forced garbage collection, but that doesn't 
> > > | seem to help either.
> > > |
> > > |
> > > |
> > > | Regards,
> > > |
> > > | Rohit
> > > |
> > > |
> > > |
> > > |
> > >
> > >
> > 
> 
> 
> 


RE: Solr Not releasing memory

2012-09-03 Thread Rohit
I am talking about physical memory here; we start at -Xms of 2GB but very soon it 
goes as high as 45GB. The memory never comes down, even when not a single user is 
using the system.

Regards,
Rohit


-Original Message-
From: Markus Jelsma [mailto:markus.jel...@openindex.io] 
Sent: 03 September 2012 14:58
To: solr-user@lucene.apache.org
Subject: RE: Solr Not releasing memory

It would be helpful yo know which memory isn't being released. Is it virtual or 
physical or shared memory? Is it the heap space?
 
 
-Original message-
> From:Mikhail Khludnev 
> Sent: Mon 03-Sep-2012 16:52
> To: solr-user@lucene.apache.org
> Subject: RE: Solr Not releasing memory
> 
> Rohit,
> Which collector do you use? Releasing physical ram is possible with 
> compacting collectors like serial, parallel and maybe g1 and not 
> possible with cms. The more important thing that releasing is really 
> suspicious and even odd requrement. Please provide more details about 
> your jvm and overall challenge.
> 03.09.2012 15:03 пользователь "Rohit"  написал:
> >
> > I am currently using StandardDirectoryFactory, would switching 
> > directory
> factory have any impact on the indexes?
> >
> > Regards,
> > Rohit
> >
> >
> > -Original Message-
> > From: Claudio Ranieri [mailto:claudio.rani...@estadao.com]
> > Sent: 03 September 2012 10:03
> > To: solr-user@lucene.apache.org
> > Subject: RES: Solr Not releasing memory
> >
> > Are you using MMapDirectoryFactory?
> > I had swap problem in linux to a big index when I used
> MMapDirectoryFactory.
> > You can to try use solr.NIOFSDirectoryFactory.
> >
> >
> > -Mensagem original-
> > De: Lance Norskog [mailto:goks...@gmail.com] Enviada em: domingo, 2 
> > de
> setembro de 2012 22:00
> > Para: solr-user@lucene.apache.org
> > Assunto: Re: Solr Not releasing memory
> >
> > 1) I believe Java 1.7 release memory back to the OS.
> > 2) All of the Javas I've used on Windows do this.
> >
> > Is the physical memory use a problem? Does it push out all other programs?
> >
> > Or is it just that the Java process appears larger? This explains 
> > the
> latter:
> > http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.h
> > tml
> >
> > - Original Message -
> > | From: "Rohit" 
> > | To: solr-user@lucene.apache.org
> > | Sent: Sunday, September 2, 2012 1:22:14 AM
> > | Subject: Solr Not releasing memory
> > |
> > | Hi,
> > |
> > |
> > |
> > | We are running solr3.5 using tomcal 6.26  on a Windows Enterprise 
> > | RC2 server, our index size if pretty large.
> > |
> > |
> > |
> > | We have noticed that once tomcat starts using/reserving ram it 
> > | never releases them, even when there is not a single user on the 
> > | system.  I have tried forced garbage collection, but that doesn't 
> > | seem to help either.
> > |
> > |
> > |
> > | Regards,
> > |
> > | Rohit
> > |
> > |
> > |
> > |
> >
> >
> 




Re: Solr New Version causes NIO Closed Channel Exception

2012-09-03 Thread Mikhail Khludnev
Hi,
Does the mmap directory work for you?
On 03.09.2012 19:20, "Pavitar Singh"  wrote:

> Hi,
>
> We are facing this problem repeatedly and it goes away on restarts.
>
>
> [#|2012-09-01T12:07:06.947+|SEVERE|glassfish3.1|org.apache.solr.core.SolrCore|_ThreadID=712;_ThreadName=Thread-2;|java.nio.channels.ClosedChannelException
> at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:88)
> at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:613)
> at
>
> org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.readInternal(NIOFSDirectory.java:161)
> at
>
> org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:160)
> at
>
> org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:39)
> at org.apache.lucene.store.DataInput.readVInt(DataInput.java:86)
> at
>
> org.apache.lucene.index.codecs.standard.StandardPostingsReader$SegmentDocsEnum.read(StandardPostingsReader.java:300)
> at org.apache.lucene.search.TermScorer.refillBuffer(TermScorer.java:74)
> at org.apache.lucene.search.TermScorer.nextDoc(TermScorer.java:121)
> at org.apache.lucene.search.TermScorer.score(TermScorer.java:70)
> at
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:210)
> at org.apache.lucene.search.Searcher.search(Searcher.java:101)
> at
>
> org.apache.solr.search.SolrIndexSearcher.getDocListAndSetNC(SolrIndexSearcher.java:1289)
> at
>
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1099)
> at
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:358)
> at
>
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:423)
> at
>
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:231)
> at
>
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1359)
> at
>
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)
> at
>
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:240)
> at
>
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
> at
>
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215)
> at
>
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279)
> at
>
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
> at
>
> org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655)
> at
> org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595)
> at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:98)
> at
>
> com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:91)
> at
>
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:162)
> at
>
> org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655)
> at
> org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595)
> at
>
> org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:323)
> at
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:227)
> at
>
> com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:170)
> at
> com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:822)
> at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:719)
> at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1013)
> at
>
> com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:225)
> at
>
> com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
> at
> com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
> at
> com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
> at
> com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
> at
>
> com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
> at
>
> com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
> at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
> at
>
> com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
> at
>
> com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
> at java.lang.Thread.run(Thread.java:619)
> |#]
>


Re: Query-side Join work in distributed Solr?

2012-09-03 Thread Pavel Goncharik
That's correct, but according to the comments to the issue SOLR-2066,
"counts returned when using field collapse are only accurate when the
documents getting collapsed together are all on the same shard" (it's
actually a quote from SOLR-2592). Which sounds like SOLR-2592 will
make distributed field collapsing complete.

It's a pity there are no plans to make the same thing work for joins: since
SolrCloud is by definition a distributed system, being able to "join"
only on a single shard makes this feature much less attractive...

On Fri, Aug 24, 2012 at 9:33 PM, Erick Erickson  wrote:
> Not as I understand it. All that allows is a pluggable assignment of
> documents to shards in SolrCloud. There's nothing tying that JIRA to
> distributed joins or field collapsing.
>
> Distributed grouping is already in place as of Solr 3.5, see:
> https://issues.apache.org/jira/browse/SOLR-2066
>
> Best
> Erick
>
> On Fri, Aug 24, 2012 at 2:49 PM, Pavel Goncharik
>  wrote:
>> Do I understand correctly that once
>> https://issues.apache.org/jira/browse/SOLR-2592 is resolved, it will
>> make both distributed joins and field collapsing work?
>>
>> Best regards, Pavel
>>
>> On Fri, Aug 24, 2012 at 6:01 PM, Erick Erickson  
>> wrote:
>>> Right, there hasn't been any action on that patch in a while...
>>>
>>> Best
>>> Erick
>>>
>>> On Wed, Aug 22, 2012 at 12:18 PM, Timothy Potter  
>>> wrote:
 Just to clarify that query-side joins ( e.g. {!join from=id
 to=parent_signal_id_s}id:foo ) do not work in a distributed mode yet?
 I saw LUCENE-3759 as unresolved but also some some Twitter traffic
 saying there was a patch available.

 Cheers,
 Tim


Solr New Version causes NIO Closed Channel Exception

2012-09-03 Thread Pavitar Singh
Hi,

We are facing this problem repeatedly and it goes away on restarts.

[#|2012-09-01T12:07:06.947+|SEVERE|glassfish3.1|org.apache.solr.core.SolrCore|_ThreadID=712;_ThreadName=Thread-2;|java.nio.channels.ClosedChannelException
at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:88)
at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:613)
at
org.apache.lucene.store.NIOFSDirectory$NIOFSIndexInput.readInternal(NIOFSDirectory.java:161)
at
org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:160)
at
org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:39)
at org.apache.lucene.store.DataInput.readVInt(DataInput.java:86)
at
org.apache.lucene.index.codecs.standard.StandardPostingsReader$SegmentDocsEnum.read(StandardPostingsReader.java:300)
at org.apache.lucene.search.TermScorer.refillBuffer(TermScorer.java:74)
at org.apache.lucene.search.TermScorer.nextDoc(TermScorer.java:121)
at org.apache.lucene.search.TermScorer.score(TermScorer.java:70)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:210)
at org.apache.lucene.search.Searcher.search(Searcher.java:101)
at
org.apache.solr.search.SolrIndexSearcher.getDocListAndSetNC(SolrIndexSearcher.java:1289)
at
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1099)
at
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:358)
at
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:423)
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:231)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1359)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:240)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:256)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:215)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:279)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at
org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655)
at
org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:98)
at
com.sun.enterprise.web.PESessionLockingStandardPipeline.invoke(PESessionLockingStandardPipeline.java:91)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:162)
at
org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:655)
at
org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:595)
at
org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:323)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:227)
at
com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:170)
at
com.sun.grizzly.http.ProcessorTask.invokeAdapter(ProcessorTask.java:822)
at com.sun.grizzly.http.ProcessorTask.doProcess(ProcessorTask.java:719)
at com.sun.grizzly.http.ProcessorTask.process(ProcessorTask.java:1013)
at
com.sun.grizzly.http.DefaultProtocolFilter.execute(DefaultProtocolFilter.java:225)
at
com.sun.grizzly.DefaultProtocolChain.executeProtocolFilter(DefaultProtocolChain.java:137)
at
com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:104)
at
com.sun.grizzly.DefaultProtocolChain.execute(DefaultProtocolChain.java:90)
at
com.sun.grizzly.http.HttpProtocolChain.execute(HttpProtocolChain.java:79)
at
com.sun.grizzly.ProtocolChainContextTask.doCall(ProtocolChainContextTask.java:54)
at
com.sun.grizzly.SelectionKeyContextTask.call(SelectionKeyContextTask.java:59)
at com.sun.grizzly.ContextTask.run(ContextTask.java:71)
at
com.sun.grizzly.util.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:532)
at
com.sun.grizzly.util.AbstractThreadPool$Worker.run(AbstractThreadPool.java:513)
at java.lang.Thread.run(Thread.java:619)
|#]


Re: Faceting Facets

2012-09-03 Thread Dotan Cohen
On Mon, Sep 3, 2012 at 6:07 PM, Tanguy Moal  wrote:
> I think it's not possible to combine pivots with facet queries, nor with
> facet ranges (or facet dates), please someone correct me if I'm wrong...
>
> I think only "standard" fields are "pivotable" :)
>
> That said, if you always use the same ranges for your DateTime field, you
> *could* have a "string" version of the time field that only outputs the
> hour of the day of the date contained in your time field, and then you'll
> be able to use facet.pivots with those two text fields.
>
> You could still use the original date time field to constrain the results
> set to return docs within the last 24 hours...
>
> Would that make sense to you ?
>

I think that I understand you!

Actually, the DateTime is currently being stored as a UNIX timestamp
for compatibility with other software. I had planned on converting it
all over to the internal Solr Datetime type, but I now see that I
should leave it as a timestamp.

Thanks.

-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com


Re: Faceting Facets

2012-09-03 Thread Dotan Cohen
On Mon, Sep 3, 2012 at 5:50 PM, Alexey Serba  wrote:
> http://wiki.apache.org/solr/SimpleFacetParameters#Pivot_.28ie_Decision_Tree.29_Faceting
>

Thank you, that does seem to be only available on Solr 4.0. Luckily,
we're using Websolr so upgrading is rather easy!

Thanks!

-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com


Re: Faceting Facets

2012-09-03 Thread Tanguy Moal
I think it's not possible to combine pivots with facet queries, nor with
facet ranges (or facet dates), please someone correct me if I'm wrong...

I think only "standard" fields are "pivotable" :)

That said, if you always use the same ranges for your DateTime field, you
*could* have a "string" version of the time field that only outputs the
hour of the day of the date contained in your time field, and then you'll
be able to use facet.pivot with those two text fields.

You could still use the original date time field to constrain the results
set to return docs within the last 24 hours...
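For example, something along these lines with SolrJ on 4.0 (just a sketch; user_s, 
hour_of_day_s and time_dt are made-up field names):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

SolrServer solr = new HttpSolrServer("http://localhost:8983/solr");

SolrQuery q = new SolrQuery("*:*");
// Constrain to the last 24 hours with the real date/time field...
q.addFilterQuery("time_dt:[NOW-24HOURS TO NOW]");
// ...and pivot on the two plain string fields: user first, then hour of day.
q.setFacet(true);
q.setFacetMinCount(1);
q.add("facet.pivot", "user_s,hour_of_day_s");

QueryResponse rsp = solr.query(q);
// rsp.getFacetPivot() then contains one count per user per hour.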

Would that make sense to you ?

--
Tanguy

2012/9/3 Alexey Serba 

>
> http://wiki.apache.org/solr/SimpleFacetParameters#Pivot_.28ie_Decision_Tree.29_Faceting
>
> On Mon, Sep 3, 2012 at 6:38 PM, Dotan Cohen  wrote:
> > Is there any way to nest facet searches in Solr? Specifically, I have
> > a User field and a DateTime field. I need to know how many Documents
> > match each User for each one-hour period in the past 24 hours. That
> > is, 16 Users * 24 time periods = 384 values to return.
> >
> > I could run 16 queries and facet on DateTime, or 24 queries and facet
> > on User. However, if there is a way to facet the facets, then I would
> > love to know. Thanks!
> >
> > --
> > Dotan Cohen
> >
> > http://gibberish.co.il
> > http://what-is-what.com
>


RE: Solr Not releasing memory

2012-09-03 Thread Markus Jelsma
It would be helpful to know which memory isn't being released. Is it virtual or 
physical or shared memory? Is it the heap space?
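If in doubt, the JVM's own view of the heap is easy to print, e.g. (a small sketch 
using the standard java.lang.management API, run inside the same JVM):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        // "used" is what the application currently holds on to,
        // "committed" is what the JVM has reserved from the OS for the heap.
        System.out.println("heap used      = " + heap.getUsed() / (1024 * 1024) + " MB");
        System.out.println("heap committed = " + heap.getCommitted() / (1024 * 1024) + " MB");
        System.out.println("heap max       = " + heap.getMax() / (1024 * 1024) + " MB");
    }
}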
 
 
-Original message-
> From:Mikhail Khludnev 
> Sent: Mon 03-Sep-2012 16:52
> To: solr-user@lucene.apache.org
> Subject: RE: Solr Not releasing memory
> 
> Rohit,
> Which collector do you use? Releasing physical ram is possible with
> compacting collectors like serial, parallel and maybe g1 and not possible
> with cms. The more important thing that releasing is really suspicious and
> even odd requrement. Please provide more details about your jvm and overall
> challenge.
> 03.09.2012 15:03 пользователь "Rohit"  написал:
> >
> > I am currently using StandardDirectoryFactory, would switching directory
> factory have any impact on the indexes?
> >
> > Regards,
> > Rohit
> >
> >
> > -Original Message-
> > From: Claudio Ranieri [mailto:claudio.rani...@estadao.com]
> > Sent: 03 September 2012 10:03
> > To: solr-user@lucene.apache.org
> > Subject: RES: Solr Not releasing memory
> >
> > Are you using MMapDirectoryFactory?
> > I had swap problem in linux to a big index when I used
> MMapDirectoryFactory.
> > You can to try use solr.NIOFSDirectoryFactory.
> >
> >
> > -Mensagem original-
> > De: Lance Norskog [mailto:goks...@gmail.com] Enviada em: domingo, 2 de
> setembro de 2012 22:00
> > Para: solr-user@lucene.apache.org
> > Assunto: Re: Solr Not releasing memory
> >
> > 1) I believe Java 1.7 release memory back to the OS.
> > 2) All of the Javas I've used on Windows do this.
> >
> > Is the physical memory use a problem? Does it push out all other programs?
> >
> > Or is it just that the Java process appears larger? This explains the
> latter:
> > http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
> >
> > - Original Message -
> > | From: "Rohit" 
> > | To: solr-user@lucene.apache.org
> > | Sent: Sunday, September 2, 2012 1:22:14 AM
> > | Subject: Solr Not releasing memory
> > |
> > | Hi,
> > |
> > |
> > |
> > | We are running solr3.5 using tomcal 6.26  on a Windows Enterprise RC2
> > | server, our index size if pretty large.
> > |
> > |
> > |
> > | We have noticed that once tomcat starts using/reserving ram it never
> > | releases them, even when there is not a single user on the system.  I
> > | have tried forced garbage collection, but that doesn't seem to help
> > | either.
> > |
> > |
> > |
> > | Regards,
> > |
> > | Rohit
> > |
> > |
> > |
> > |
> >
> >
> 


Re: Faceting Facets

2012-09-03 Thread Alexey Serba
http://wiki.apache.org/solr/SimpleFacetParameters#Pivot_.28ie_Decision_Tree.29_Faceting

On Mon, Sep 3, 2012 at 6:38 PM, Dotan Cohen  wrote:
> Is there any way to nest facet searches in Solr? Specifically, I have
> a User field and a DateTime field. I need to know how many Documents
> match each User for each one-hour period in the past 24 hours. That
> is, 16 Users * 24 time periods = 384 values to return.
>
> I could run 16 queries and facet on DateTime, or 24 queries and facet
> on User. However, if there is a way to facet the facets, then I would
> love to know. Thanks!
>
> --
> Dotan Cohen
>
> http://gibberish.co.il
> http://what-is-what.com


RE: Solr Not releasing memory

2012-09-03 Thread Mikhail Khludnev
Rohit,
Which collector do you use? Releasing physical RAM is possible with
compacting collectors like serial, parallel and maybe G1, and not possible
with CMS. The more important thing is that this releasing requirement is really
suspicious and even odd. Please provide more details about your JVM and overall
challenge.
On 03.09.2012 15:03, "Rohit"  wrote:
>
> I am currently using StandardDirectoryFactory, would switching directory
factory have any impact on the indexes?
>
> Regards,
> Rohit
>
>
> -Original Message-
> From: Claudio Ranieri [mailto:claudio.rani...@estadao.com]
> Sent: 03 September 2012 10:03
> To: solr-user@lucene.apache.org
> Subject: RES: Solr Not releasing memory
>
> Are you using MMapDirectoryFactory?
> I had swap problem in linux to a big index when I used
MMapDirectoryFactory.
> You can to try use solr.NIOFSDirectoryFactory.
>
>
> -Mensagem original-
> De: Lance Norskog [mailto:goks...@gmail.com] Enviada em: domingo, 2 de
setembro de 2012 22:00
> Para: solr-user@lucene.apache.org
> Assunto: Re: Solr Not releasing memory
>
> 1) I believe Java 1.7 release memory back to the OS.
> 2) All of the Javas I've used on Windows do this.
>
> Is the physical memory use a problem? Does it push out all other programs?
>
> Or is it just that the Java process appears larger? This explains the
latter:
> http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
>
> - Original Message -
> | From: "Rohit" 
> | To: solr-user@lucene.apache.org
> | Sent: Sunday, September 2, 2012 1:22:14 AM
> | Subject: Solr Not releasing memory
> |
> | Hi,
> |
> |
> |
> | We are running solr3.5 using tomcal 6.26  on a Windows Enterprise RC2
> | server, our index size if pretty large.
> |
> |
> |
> | We have noticed that once tomcat starts using/reserving ram it never
> | releases them, even when there is not a single user on the system.  I
> | have tried forced garbage collection, but that doesn't seem to help
> | either.
> |
> |
> |
> | Regards,
> |
> | Rohit
> |
> |
> |
> |
>
>


Missing Features - AndMaybe and Otherwise

2012-09-03 Thread Ramzi Alqrainy
Hi,

I would like to ask for help with a certain problem. I have encountered a problem in a
task, and I think that if you implement the functions/conditions below, it will help
us with many issues.

*AndMaybe(a, b)*
Binary query takes results from the first query. If and only if the same
document also appears in the results from the second query, the score from
the second query will be added to the score from the first query.

*Otherwise(a, b)*
A binary query that only matches the second clause if the first clause
doesn’t match any documents.


My problem is :

I have documents and I do a group-by on a certain field (e.g. Field1).  I want
to get documents whose Field2 is [3 or 9 or 12] if such documents exist, otherwise get
any document. Please see the example below.


D1 :---
Field1 : 1  -
Field2 : 3 ->  D1 (group by on field1 and field2 is
3)
  -
D2: ---
Field1 : 1
Field2 : 4


D3:---
Field1 : 2   -
Field2 : 5--> any document D3 or D4
-
D4:---
Field1 : 2
Field2 : 7

I want to get the results like below 
D1(Mandatory) 
(D3 OR D4)



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Missing-Features-AndMaybe-and-Otherwise-tp4005059.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Is there any special meaning for # symbol in solr.

2012-09-03 Thread Toke Eskildsen
On Mon, 2012-09-03 at 13:39 +0200, veena rani wrote:
> >  I have an issue with the # symbol, in solr,
> >  I m trying to search for string ends up with # , Eg:c#, it is throwing
> >  error Like, org.apache.lucene.queryparser.classic.ParseException: Cannot
> >  parse '(techskill:c': Encountered "<EOF>" at line 1, column 12.

Solr only received '(techskill:c', which has unbalanced parentheses.
My guess is that you do not perform a URL-encode of '#' and that you
were sending something like
http://localhost:8080/solr/select?&q=(techskill:c#)
when you should have been sending
http://localhost:8080/solr/select?&q=(techskill%3Ac%23)



RE: Solr Not releasing memory

2012-09-03 Thread Rohit
I am currently using StandardDirectoryFactory; would switching the directory 
factory have any impact on the indexes? 

Regards,
Rohit


-Original Message-
From: Claudio Ranieri [mailto:claudio.rani...@estadao.com] 
Sent: 03 September 2012 10:03
To: solr-user@lucene.apache.org
Subject: RES: Solr Not releasing memory

Are you using MMapDirectoryFactory? 
I had swap problem in linux to a big index when I used MMapDirectoryFactory.
You can to try use solr.NIOFSDirectoryFactory.


-Mensagem original-
De: Lance Norskog [mailto:goks...@gmail.com] Enviada em: domingo, 2 de setembro 
de 2012 22:00
Para: solr-user@lucene.apache.org
Assunto: Re: Solr Not releasing memory

1) I believe Java 1.7 release memory back to the OS.
2) All of the Javas I've used on Windows do this.

Is the physical memory use a problem? Does it push out all other programs?

Or is it just that the Java process appears larger? This explains the latter:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html

- Original Message -
| From: "Rohit" 
| To: solr-user@lucene.apache.org
| Sent: Sunday, September 2, 2012 1:22:14 AM
| Subject: Solr Not releasing memory
| 
| Hi,
| 
|  
| 
| We are running solr3.5 using tomcal 6.26  on a Windows Enterprise RC2 
| server, our index size if pretty large.
| 
|  
| 
| We have noticed that once tomcat starts using/reserving ram it never 
| releases them, even when there is not a single user on the system.  I 
| have tried forced garbage collection, but that doesn't seem to help 
| either.
| 
|  
| 
| Regards,
| 
| Rohit
| 
|  
| 
| 




Re: Antwort: Re: Antwort: Re: Query during a query

2012-09-03 Thread Erick Erickson
If you don't do what Chantal indicates, you allow your users to issue
requests like:
http://localhost:8983/solr/update?stream.body=<delete><query>*:*</query></delete>

Followed by:

http://localhost:8983/solr/update?stream.body=<commit/>

Presto! You have an index with zero documents.

Best
Erick

On Mon, Sep 3, 2012 at 4:52 AM, Chantal Ackermann
 wrote:
> Hi Johannes,
>
> on production, SOLR is better a backend service to your actual web 
> application:
>
> Client (Browser) <---> Web App <---> Solr Server
>
> Very much like a database. The processes are implemented in your Web App, and 
> when they require results from Solr for whatever reason they simply query it.
>
> Chantal
>
>
>
>
> Am 03.09.2012 um 06:48 schrieb johannes.schwendin...@blum.com:
>
>> The problem is, that I don't know how to do this. :P
>>
>> My sequence: the user enters his search words. This is sent to solr. There
>> I need to make another query first to get metadata from the index. with
>> this metadata I have to connect to an external source to get some
>> information about the user. With this information and the first search
>> words I query then the solr index to get the search result.
>>
>> I hope its clear now wheres my problem and what I want to do
>>
>> Regards,
>> Johannes
>>
>>
>>
>> Von:
>> "Jack Krupansky" 
>> An:
>> 
>> Datum:
>> 31.08.2012 15:03
>> Betreff:
>> Re: Antwort: Re: Query during a query
>>
>>
>>
>> So, just do another query before doing the main query. What's the problem?
>>
>> Be more specific. Walk us through the sequence of processing that you
>> need.
>>
>> -- Jack Krupansky
>>
>> -Original Message-
>> From: johannes.schwendin...@blum.com
>> Sent: Friday, August 31, 2012 1:52 AM
>> To: solr-user@lucene.apache.org
>> Subject: Antwort: Re: Query during a query
>>
>> Thanks for the answer, but I want to know how I can do a seperate query
>> before the main query.
>> And I only want this data in my programm. The user won't see it.
>> I need the values from one field to get some information from an external
>> source while the main query is executed.
>>
>> pravesh  schrieb am 31.08.2012 07:42:48:
>>
>>> Von:
>>>
>>> pravesh 
>>>
>>> An:
>>>
>>> solr-user@lucene.apache.org
>>>
>>> Datum:
>>>
>>> 31.08.2012 07:43
>>>
>>> Betreff:
>>>
>>> Re: Query during a query
>>>
>>> Did you checked SOLR Field Collapsing/Grouping.
>>> http://wiki.apache.org/solr/FieldCollapsing
>>> http://wiki.apache.org/solr/FieldCollapsing
>>> If this is what you are looking for.
>>>
>>>
>>> Thanx
>>> Pravesh
>>>
>>>
>>>
>>> --
>>> View this message in context: http://lucene.472066.n3.nabble.com/
>>> Query-during-a-query-tp4004624p4004631.html
>>> Sent from the Solr - User mailing list archive at Nabble.com.
>>
>>
>


Is there any special meaning for # symbol in solr.

2012-09-03 Thread veena rani
 Hi,

>
>  I have an issue with the # symbol in Solr.
>  I am trying to search for a string that ends with #, e.g. c#, and it is throwing
>  an error like: org.apache.lucene.queryparser.classic.ParseException: Cannot
>  parse '(techskill:c': Encountered "<EOF>" at line 1, column 12.
>  Was expecting one of:
>  "+" ... "-" ... "(" ... ")" ... "*" ... "^" ... "[" ... "{" ...
>

-- 
Regards,
Veena.
Banglore.


Re: Solr4 distributed IDF

2012-09-03 Thread Erick Erickson
When starting a new discussion on a mailing list, please do not reply to
an existing message, instead start a fresh email.  Even if you change the
subject line of your email, other mail headers still track which thread
you replied to and your question is "hidden" in that thread and gets less
attention.   It makes following discussions in the mailing list archives
particularly difficult.

Best
Erick

On Mon, Sep 3, 2012 at 6:21 AM, veena rani  wrote:
> Hi,
>
> I have an issue with the # symbol, in solr,
> I m trying to search for string ends up with # , Eg:c#, it is throwing
> error Like, org.apache.lucene.queryparser.classic.ParseException: Cannot
> parse '(techskill:c': Encountered "<EOF>" at line 1, column 12.
> Was expecting one of:
> "+" ... "-" ... "(" ... ")" ... "*" ... "^" ... "[" ... "{" ...
> --
> Regards,
> Veena.
> Banglore.


RE: Solr Not releasing memory

2012-09-03 Thread Rohit
Hi Lance,

Thanks for explaining this; it does push out all other programs. 

Regards,
Rohit
Mobile: +91-9901768202

-Original Message-
From: Lance Norskog [mailto:goks...@gmail.com] 
Sent: 03 September 2012 01:00
To: solr-user@lucene.apache.org
Subject: Re: Solr Not releasing memory

1) I believe Java 1.7 release memory back to the OS.
2) All of the Javas I've used on Windows do this.

Is the physical memory use a problem? Does it push out all other programs?

Or is it just that the Java process appears larger? This explains the latter:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html

- Original Message -
| From: "Rohit" 
| To: solr-user@lucene.apache.org
| Sent: Sunday, September 2, 2012 1:22:14 AM
| Subject: Solr Not releasing memory
| 
| Hi,
| 
|  
| 
| We are running solr3.5 using tomcal 6.26  on a Windows Enterprise RC2 
| server, our index size if pretty large.
| 
|  
| 
| We have noticed that once tomcat starts using/reserving ram it never 
| releases them, even when there is not a single user on the system.  I 
| have tried forced garbage collection, but that doesn't seem to help 
| either.
| 
|  
| 
| Regards,
| 
| Rohit
| 
|  
| 
| 




Re: Solr4 distributed IDF

2012-09-03 Thread veena rani
Hi,

I have an issue with the # symbol in Solr.
I am trying to search for a string that ends with #, e.g. c#, and it is throwing
an error like: org.apache.lucene.queryparser.classic.ParseException: Cannot
parse '(techskill:c': Encountered "<EOF>" at line 1, column 12.
Was expecting one of:
"+" ... "-" ... "(" ... ")" ... "*" ... "^" ... "[" ... "{" ...
-- 
Regards,
Veena.
Banglore.


RES: Solr Not releasing memory

2012-09-03 Thread Claudio Ranieri
Are you using MMapDirectoryFactory? 
I had a swap problem in Linux with a big index when I used MMapDirectoryFactory.
You can try to use solr.NIOFSDirectoryFactory.


-Mensagem original-
De: Lance Norskog [mailto:goks...@gmail.com] 
Enviada em: domingo, 2 de setembro de 2012 22:00
Para: solr-user@lucene.apache.org
Assunto: Re: Solr Not releasing memory

1) I believe Java 1.7 release memory back to the OS.
2) All of the Javas I've used on Windows do this.

Is the physical memory use a problem? Does it push out all other programs?

Or is it just that the Java process appears larger? This explains the latter:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html

- Original Message -
| From: "Rohit" 
| To: solr-user@lucene.apache.org
| Sent: Sunday, September 2, 2012 1:22:14 AM
| Subject: Solr Not releasing memory
| 
| Hi,
| 
|  
| 
| We are running solr3.5 using tomcal 6.26  on a Windows Enterprise RC2 
| server, our index size if pretty large.
| 
|  
| 
| We have noticed that once tomcat starts using/reserving ram it never 
| releases them, even when there is not a single user on the system.  I 
| have tried forced garbage collection, but that doesn't seem to help 
| either.
| 
|  
| 
| Regards,
| 
| Rohit
| 
|  
| 
| 


Re: Antwort: Re: Antwort: Re: Query during a query

2012-09-03 Thread Chantal Ackermann
Hi Johannes,

In production, Solr is best used as a backend service to your actual web application:

Client (Browser) <---> Web App <---> Solr Server

Very much like a database. The processes are implemented in your Web App, and 
when they require results from Solr for whatever reason they simply query it.
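Your sequence then lives entirely in the Web App, roughly like this (a SolrJ sketch; 
the field names, the acl_group metadata and the external lookup are only placeholders 
for your own ones):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

SolrServer solr = new HttpSolrServer("http://localhost:8983/solr");

// 1) Preliminary query: fetch the metadata you need from the index.
SolrQuery metaQuery = new SolrQuery("user_id:" + userId);   // userId comes from your app
metaQuery.setFields("acl_group");
QueryResponse meta = solr.query(metaQuery);
String aclGroup = (String) meta.getResults().get(0).getFieldValue("acl_group");

// 2) Plain Java call to the external source -- Solr is not involved here.
String allowedGroups = externalSource.lookupGroups(aclGroup);  // placeholder

// 3) Main query: the user's search words plus the gathered restriction.
SolrQuery mainQuery = new SolrQuery(userSearchWords);
mainQuery.addFilterQuery("group:(" + allowedGroups + ")");
QueryResponse results = solr.query(mainQuery);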

Chantal




On 03.09.2012 at 06:48, johannes.schwendin...@blum.com wrote:

> The problem is, that I don't know how to do this. :P
> 
> My sequence: the user enters his search words. This is sent to solr. There 
> I need to make another query first to get metadata from the index. with 
> this metadata I have to connect to an external source to get some 
> information about the user. With this information and the first search 
> words I query then the solr index to get the search result.
> 
> I hope its clear now wheres my problem and what I want to do
> 
> Regards,
> Johannes
> 
> 
> 
> Von:
> "Jack Krupansky" 
> An:
> 
> Datum:
> 31.08.2012 15:03
> Betreff:
> Re: Antwort: Re: Query during a query
> 
> 
> 
> So, just do another query before doing the main query. What's the problem? 
> 
> Be more specific. Walk us through the sequence of processing that you 
> need.
> 
> -- Jack Krupansky
> 
> -Original Message- 
> From: johannes.schwendin...@blum.com
> Sent: Friday, August 31, 2012 1:52 AM
> To: solr-user@lucene.apache.org
> Subject: Antwort: Re: Query during a query
> 
> Thanks for the answer, but I want to know how I can do a seperate query
> before the main query.
> And I only want this data in my programm. The user won't see it.
> I need the values from one field to get some information from an external
> source while the main query is executed.
> 
> pravesh  schrieb am 31.08.2012 07:42:48:
> 
>> Von:
>> 
>> pravesh 
>> 
>> An:
>> 
>> solr-user@lucene.apache.org
>> 
>> Datum:
>> 
>> 31.08.2012 07:43
>> 
>> Betreff:
>> 
>> Re: Query during a query
>> 
>> Did you checked SOLR Field Collapsing/Grouping.
>> http://wiki.apache.org/solr/FieldCollapsing
>> http://wiki.apache.org/solr/FieldCollapsing
>> If this is what you are looking for.
>> 
>> 
>> Thanx
>> Pravesh
>> 
>> 
>> 
>> --
>> View this message in context: http://lucene.472066.n3.nabble.com/
>> Query-during-a-query-tp4004624p4004631.html
>> Sent from the Solr - User mailing list archive at Nabble.com. 
> 
> 



Re: Solr4 distributed IDF

2012-09-03 Thread Toke Eskildsen
On Fri, 2012-08-31 at 02:25 +0200, Lance Norskog wrote:
> The math for "confidence values" in probability theory shows that
> distributed DF does not matter after not very many documents. If you
> have 10s of thousands of documents in each shard, don't worry.

The old advice of distributing the documents by hashing id or a similar
deterministic method is sound enough. However, it is my experience that
sharding is often done by source or material: When building a workflow,
it is the logical thing to do. This might be more of an educational than
a technical problem.

For setups with a large unchanging set of data and a smaller set with
high update frequency, the standard advice is to have a large unchanging
shard and a smaller NRT one. For that case, I would expect that the
unchanging data is often quite different from the changing ones.

Third case: distributed search where the separate indexes are controlled
by different parties, who do want to collaborate on the
distribution part but do not want to have their data indexed by the
other parties. We currently have this challenge.

Regards,
Toke Eskildsen



Solr from scratch in 3 lines

2012-09-03 Thread Michael Guymon


Hello Solrian,

I put together a JRuby gem that will set up and run a Solr instance:

gem install solr_sail
solrsail install
solrsail start

There is a copy of the Solr config with the gem that gets extracted, and 
jar dependencies are downloaded with the install step. An embedded Jetty 
server is used to run the Solr app with the start step. This is based on 
earlier work I did running Solr from an embedded Tomcat packaged in a jar. Solr 
in a jar worked great; I am curious to see how stable it is and what the 
performance is like when running Solr over JRuby.


My rambling blog post with more details -

http://blog.tobedevoured.com/post/30784789256/solrsail-solr-from-scratch-in-3-lines


thanks,
Michael