Slowness of Solr search during the replication

2014-01-06 Thread sivaprasad
Hi,

I have configured Solr slave replication to run every hour. During replication I am
seeing that search becomes unresponsive. Are there any architectural changes we
need to make to overcome this?

Below are the cache settings I have in my solrconfig.xml.

<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             cleanupThread="true"
             autowarmCount="0"/>

<queryResultCache class="solr.LRUCache"
                  size="512"
                  initialSize="512"
                  autowarmCount="0"/>

<documentCache class="solr.LRUCache"
               size="512"
               initialSize="512"
               autowarmCount="0"/>

<fieldValueCache class="solr.FastLRUCache"
                 size="512"
                 cleanupThread="true"
                 autowarmCount="128"
                 showItems="32"/>

<useFilterForSortedQuery>true</useFilterForSortedQuery>
<useColdSearcher>false</useColdSearcher>
<maxWarmingSearchers>2</maxWarmingSearchers>
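For context, when the slave installs a replicated index it opens and warms a new searcher before switching over; with autowarmCount=0 and no warming queries, the first requests after each replication hit completely cold caches. A minimal sketch of a newSearcher warming listener (illustrative only; the warm-up query is a placeholder and should be replaced with queries representative of real traffic):

<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <!-- placeholder warm-up query -->
    <lst><str name="q">*:*</str><str name="rows">10</str></lst>
  </arr>
</listener>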

Any suggestions are appreciated.

Regards,
Siva http://smarttechie.org/  






Re: Slowness of Solr search during the replication

2014-01-06 Thread sivaprasad
Do we need to set autowarmCount on the slave or on the master? I found the
information below on the Solr wiki.

Solr4.0 autowarmCount can now be specified as a percentage (ie: 90%) which
will be evaluated relative to the number of items in the existing cache. 
This can be an advantageous setting in an instance of Solr where you don't
expect any search traffic (ie a master),
 but you want some caches so that if it does take on traffic it won't be too
overloaded. Once the traffic dies down, subsequent commits will gradually
decrease the number of items being warmed.
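
For illustration, a cache configured with a percentage-based autowarmCount as described in that quote might look like this (a sketch, not taken from the configuration above):

<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="90%"/>

On a slave that serves search traffic, a non-zero (or percentage) autowarmCount repopulates the cache from the old searcher when the new index is installed, which is where the setting matters most.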

Regards,
Siva





Solr cores across multiple machines

2013-12-16 Thread sivaprasad
Hi,

In my project, we do a full index on a dedicated machine and the index is then
copied to another machine that serves search requests. For this, we are copying the
data folder from the indexing machine to the serving machine manually. Now we
want to use Solr's SWAP feature to do this job. It looks like SWAP works between
cores. Given our setup, does anyone have an idea how to move the data from the
indexing machine to the serving machine? Are there any other alternatives?

Regards,
Siva





Return the synonyms as part of Solr response

2013-10-30 Thread sivaprasad
Hi, 
We have a requirement where we need to send the matched synonyms as part of
the Solr response.

Do we need to customize the Solr response handler to do this?

Regards,
Siva





Re: Facing Solr performance during query search

2013-08-21 Thread sivaprasad
Here I am providing the slave solrconfig information.
<indexConfig>
   <commitLockTimeout>1</commitLockTimeout>
   <mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
      <int name="maxMergeAtOnce">35</int>
      <int name="segmentsPerTier">35</int>
   </mergePolicy>
   <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
      <int name="maxMergeCount">6</int>
      <int name="maxThreadCount">1</int>
   </mergeScheduler>
</indexConfig>
<query>
   <maxBooleanClauses>1024</maxBooleanClauses>
   <queryResultWindowSize>20</queryResultWindowSize>
   <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
      </arr>
   </listener>
   <listener event="firstSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
         <lst>
            <str name="q">static firstSearcher warming in solrconfig.xml</str>
         </lst>
      </arr>
   </listener>
   <useColdSearcher>false</useColdSearcher>
</query>

The slave polls every hour.

The field list is given below.

<field name="product_id" type="string" indexed="true" required="true" stored="true"/>
<field name="product_name" type="text_en_splitting" indexed="true" stored="true" omitNorms="false" termVectors="true"/>
<field name="prod_name" type="alphaOnlySort" indexed="true" stored="false"/>
<field name="product_desc" type="text_en_splitting" indexed="true" stored="true" omitNorms="false"/>
<field name="mpn" type="string" indexed="false" stored="true" multiValued="true"/>
<field name="sku" type="string" indexed="false" stored="true" multiValued="true"/>
<field name="upc" type="string" indexed="false" stored="true" multiValued="true"/>
<field name="Brands" type="cat_text" indexed="true" stored="true" omitNorms="false" termVectors="true"/>
<field name="searchCategory" type="text_en_splitting" indexed="true" omitNorms="false"/>
<field name="category_id" type="int" indexed="true" stored="true"/>
<field name="atom_id" type="int" indexed="false" stored="true"/>
<field name="sale_amount" type="double" indexed="true" stored="true"/>
<field name="image_url" type="string" indexed="false" stored="true"/>
<field name="Categories" type="string" indexed="true" stored="false" termVectors="true"/>
<field name="cataloged" type="string" indexed="false" stored="true"/>
<field name="product_rating" type="double" indexed="true" stored="true"/>
<field name="num_retailers" type="int" indexed="true" stored="true"/>
<field name="retailer_highest_rating" type="double" indexed="true" stored="true"/>
<field name="has_image" type="string" indexed="true" stored="true"/>
<field name="valid_prod_desc" type="string" indexed="true" stored="true"/>
<field name="reviewCount" type="int" indexed="false" stored="true"/>
<field name="offers" type="string" indexed="false" stored="true" multiValued="true"/>
<field name="Ratings" type="int" indexed="true" stored="true"/>
<dynamicField name="f_*" type="string" indexed="true" stored="false"/>

<field name="prod_spell" type="textSpell" indexed="true"/>
<field name="_version_" type="long" indexed="true" stored="true"/>

We have configured ~2000 facets and the machine configuration is given
below.

6-core processor, 22528 GB RAM allotted to the JVM. The Solr version is 4.1.0.

Please let me know, if you require any more information.





Re: Regarding monitoring Solr

2013-08-19 Thread sivaprasad
You can look at this tool:
http://sematext.com/spm/solr-performance-monitoring/





Facing Solr performance during query search

2013-08-19 Thread sivaprasad
Hi,

Last week we configured a Solr master/slave setup. All Solr search requests are
routed to the slave. After this configuration, we are seeing drastic performance
problems with Solr.

Can anyone explain what the reason could be?

And how do we disable optimizing the index, warming the searcher, and warming the
caches on the slave?

Regards,
Siva





Re: Auto Correction of Solr Query

2013-07-30 Thread sivaprasad
Thank you for the quick response.

I checked the documentation on spellcheck.collate. It looks like it returns the
suggestion to the client, and the client then needs to make one more request to the
server with that suggestion.

Is there any way to auto-correct at the server end?





Re: Facets with 5000 facet fields - Out of memory error during the query time

2013-04-25 Thread sivaprasad
I got more information from the responses. Now it's time to take another look at the
number of facets to be configured.

Thanks,
Siva
http://smarttechies.wordpress.com/





Facets with 5000 facet fields

2013-03-18 Thread sivaprasad
Hi,

We have configured Solr with 5000 facet fields as part of the request handler. We
have 10811177 docs in the index.

The Solr server machine is quad-core with 12 GB of RAM.

When we query with facets, we get an out-of-memory error.

What we observed is that if we have a larger number of facets, we need more RAM
allocated to the JVM. In that case we need to scale up the system whenever we add
more facets.

To scale out the system, do we need to go with distributed search?

Any thoughts on this would help me handle the situation.

Thanks,
Siva






Full Import failed:org.apache.solr.handler.dataimport.DataImportHandlerException: com.mysql.jdbc.CommunicationsException: Communications link failure due to underlying exception

2012-04-23 Thread sivaprasad
Hi,

When I try to index 16 million documents using the DataImportHandler, I
intermittently get the exception below and the indexing stops.

STACKTRACE:

java.io.EOFException: Can not read response from server. Expected to read 4
bytes, read 0 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:1997)
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2411)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2916)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1360)
at com.mysql.jdbc.MysqlIO.fetchRowsViaCursor(MysqlIO.java:4044)
at
com.mysql.jdbc.CursorRowProvider.fetchMoreRows(CursorRowProvider.java:396)
at
com.mysql.jdbc.CursorRowProvider.hasNext(CursorRowProvider.java:313)
at com.mysql.jdbc.ResultSet.next(ResultSet.java:7296)
at
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:331)
at
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.access$600(JdbcDataSource.java:228)
at
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:262)
at
org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:77)
at
org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
at
org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:238)
at
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:591)
at
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:267)
at
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:186)
at
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:359)
at
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:427)
at
org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:408)


** END NESTED EXCEPTION **



Last packet sent to the server was 2 ms ago.
at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2622)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2916)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1360)
at com.mysql.jdbc.MysqlIO.fetchRowsViaCursor(MysqlIO.java:4044)
at
com.mysql.jdbc.CursorRowProvider.fetchMoreRows(CursorRowProvider.java:396)
at
com.mysql.jdbc.CursorRowProvider.hasNext(CursorRowProvider.java:313)
at com.mysql.jdbc.ResultSet.next(ResultSet.java:7296)
at
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:331)
... 11 more

2012-04-23 08:25:35,693 SEVERE
[org.apache.solr.handler.dataimport.DataImporter] (Thread-21) Full Import
failed:org.apache.solr.handler.dataimport.DataImportHandlerException:
com.mysql.jdbc.CommunicationsException: Communications link failure due to
underlying exception:


And the db-config.xml has the below configuration.


<dataSource driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost:3306/phpq" user="slrmgr" defaultFetchSize="30"
            useCursorFetch="true" autoReconnect="true" tcpKeepAlive="true"
            connectionTimeout="12" password="pqmgr123" batch-size="-1"/>

Any help on this is much appreciated.




Search on multiple fields is not working

2011-11-23 Thread sivaprasad
Hi,

I have two indexed fields called profileId and tagName. When I issue a query
like q=profileId:99964 OR profileId:10076 OR tagName:"MUSIC AND DESIGNER",
I get only the results for tagName:"MUSIC AND DESIGNER". The results do not
contain profileId 99964 or 10076.

Can anybody tell me what I am doing wrong?

Regards,
Siva



Out of memory during the indexing

2011-11-08 Thread sivaprasad
Hi,

I am getting the following error during indexing. I am trying to index 14 million
records, and the document size is very small.

*Error:*
2011-11-08 14:53:24,634 ERROR [STDERR] (Thread-12)
java.lang.OutOfMemoryError: GC overhead limit exceeded

2011-11-08 14:54:07,910 ERROR [org.apache.coyote.http11.Http11Protocol]
(http-10.32.7.136-8180-2) Error reading request, ignored

java.lang.OutOfMemoryError: GC overhead limit exceeded

2011-11-08 14:53:54,961 ERROR [STDERR]
(DefaultQuartzScheduler_QuartzSchedulerThread) Exception in thread
DefaultQuartzScheduler_QuartzSchedulerThread

2011-11-08 14:54:21,780 ERROR
[org.apache.catalina.core.ContainerBase.[jboss.web].[localhost].[/solr].[jsp]]
(http-10.32.7.136-8180-9) Servlet.service() for servlet jsp threw exception

java.lang.OutOfMemoryError: GC overhead limit exceeded

2011-11-08 14:54:18,417 ERROR [org.apache.catalina.connector.CoyoteAdapter]
(http-10.32.7.136-8180-7) An exception or error occurred in the container
during the request processing

java.lang.OutOfMemoryError: GC overhead limit exceeded

2011-11-08 14:54:18,417 ERROR [org.apache.catalina.connector.CoyoteAdapter]
(http-10.32.7.136-8180-6) An exception or error occurred in the container
during the request processing

java.lang.OutOfMemoryError: GC overhead limit exceeded

2011-11-08 14:54:36,237 SEVERE
[org.apache.solr.handler.dataimport.SolrWriter] (Thread-19) Exception while
solr commit.

java.lang.RuntimeException: java.lang.OutOfMemoryError: GC overhead limit
exceeded

at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1099)

at
org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:425)

at
org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:85)

at org.apache.solr.handler.dataimport.SolrWriter.commit(SolrWriter.java:179)

at org.apache.solr.handler.dataimport.DocBuilder.finish(DocBuilder.java:236)

at
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:208)

at
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:359)

at
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:427)

at
org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:408)

Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded

at java.util.Arrays.copyOfRange(Arrays.java:3209)

at java.lang.String.init(String.java:215)

at org.apache.lucene.index.TermBuffer.toTerm(TermBuffer.java:122)

at org.apache.lucene.index.SegmentTermEnum.term(SegmentTermEnum.java:176)

at org.apache.lucene.index.TermInfosReader.init(TermInfosReader.java:122)

at
org.apache.lucene.index.SegmentCoreReaders.init(SegmentCoreReaders.java:75)

at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:114)

at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:92)

at org.apache.lucene.index.DirectoryReader.init(DirectoryReader.java:235)

at
org.apache.lucene.index.ReadOnlyDirectoryReader.init(ReadOnlyDirectoryReader.java:34)

at
org.apache.lucene.index.DirectoryReader.doReopen(DirectoryReader.java:484)

at
org.apache.lucene.index.DirectoryReader.access$000(DirectoryReader.java:45)

at
org.apache.lucene.index.DirectoryReader$2.doBody(DirectoryReader.java:476)

at
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:750)

at
org.apache.lucene.index.DirectoryReader.doReopenNoWriter(DirectoryReader.java:471)

at
org.apache.lucene.index.DirectoryReader.doReopen(DirectoryReader.java:429)

at org.apache.lucene.index.DirectoryReader.reopen(DirectoryReader.java:392)

at org.apache.solr.search.SolrIndexReader.reopen(SolrIndexReader.java:414)

at org.apache.solr.search.SolrIndexReader.reopen(SolrIndexReader.java:425)

at org.apache.solr.search.SolrIndexReader.reopen(SolrIndexReader.java:35)

at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1080)

... 8 more

2011-11-08 14:54:34,905 WARN 
[org.jboss.system.server.profileservice.hotdeploy.HDScanner] (HDScanner)
Scan failed

java.lang.OutOfMemoryError: GC overhead limit exceeded

2011-11-08 14:54:25,132 ERROR [STDERR]
(DefaultQuartzScheduler_QuartzSchedulerThread) java.lang.OutOfMemoryError:
GC overhead limit exceeded

2011-11-08 14:54:36,238 ERROR [STDERR]
(DefaultQuartzScheduler_QuartzSchedulerThread)   at
java.util.TreeMap.key(TreeMap.java:1206)

2011-11-08 14:54:36,238 ERROR [STDERR]
(DefaultQuartzScheduler_QuartzSchedulerThread)   at
java.util.TreeMap.firstKey(TreeMap.java:267)

2011-11-08 14:54:36,238 ERROR [STDERR]
(DefaultQuartzScheduler_QuartzSchedulerThread)   at
java.util.TreeSet.first(TreeSet.java:377)

2011-11-08 14:54:36,238 ERROR [STDERR]
(DefaultQuartzScheduler_QuartzSchedulerThread)   at
org.quartz.simpl.RAMJobStore.acquireNextTrigger(RAMJobStore.java:1131)

2011-11-08 14:54:36,238 ERROR [STDERR]
(DefaultQuartzScheduler_QuartzSchedulerThread)   at
org.quartz.core.QuartzSchedulerThread.run(QuartzSchedulerThread.java:233)

2011-11-08 14:54:36,238 ERROR [STDERR]

Spellcheck suggestions as solr docs

2011-04-14 Thread sivaprasad
Hi,

I have configured spellcheck for the terms and it is working fine, but the spellcheck
suggestions are just plain strings. For my requirement, I need the documents. How
can I achieve this? Are there any other ways to do spellcheck? Please suggest.

Regards,
Siva



Search for social networking sites

2011-01-20 Thread sivaprasad

Hi,
I am building a social networking site. For searching profiles, I am trying to
implement Solr. But here I am facing a problem: as a social networking site, the
database gets updates/inserts very frequently, which means the search needs to be
near real time. How can we achieve this using search servers rather than a pure
database search?

Any ideas are helpful.

Regards,
Siva


Adding weightage to the facets count

2011-01-20 Thread sivaprasad

Hi,

I am building a tag cloud for products by using facets. I made tag names the facets
and I am using the facet counts to display the tag cloud. Each product has tags with
their own weightage.

For example:

prod1 has the tag “Light Weight” with weightage 20,
prod2 has the tag “Light Weight” with weightage 100.

If I facet on “Light Weight”, I get Light Weight (2). Here I need to take the weightage
into account, and the result should be Light Weight (120).

How can we achieve this? Any ideas are really helpful.

Regards,
Siva



Search based on images

2010-12-08 Thread sivaprasad

Hi,

If I upload a product image, I need to find similar images based on the uploaded
image. A sample is given below:
http://www.gazopa.com/similar?img=eb04%3A9601%2F1543805&img_url=http%3A%2F%2Fd.yimg.com%2Fi%2Fng%2Fsp%2Fp4%2F20090208%2F20%2F350396776.jpg#

Does anybody have any ideas on this? Please suggest any papers on this topic.


Regards,
JS


Out of memory error

2010-12-06 Thread sivaprasad

Hi,

When I try to import the data using DIH, I get an out-of-memory error. Below are
the configurations which I have.

Database: MySQL
OS: Windows
No. of documents: 15525532
In db-config.xml I set the batch size to -1

The Solr server is running on a Linux machine with Tomcat.
I set the Tomcat arguments as ./startup.sh -Xms1024M -Xmx2048M

Does anybody have an idea where things are going wrong?

Regards,
JS




RE: DIH - index Multiple tables in Database?

2010-11-23 Thread sivaprasad


I think the query itself is wrong.

The query should be something like:

select a.id_bread, b.buttertype from bread a, butter b where a.id_bread = b.id_bread
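
For reference, a DIH entity built around that join might look like the sketch below (hypothetical; the table and column names come from the quoted example):

<entity name="bread101"
        query="select a.id_bread, b.buttertype from bread a, butter b where a.id_bread = b.id_bread">
  <field column="id_bread" name="id"/>
  <field column="buttertype" name="buttertype"/>
</entity>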


-Original Message-
From: snowyeoghan [via Lucene] 
ml-node+1956579-1073231976-225...@n3.nabble.com
Sent: Tuesday, November 23, 2010 4:22pm
To: sivaprasad sivaprasa...@echidnainc.com
Subject: DIH - index Multiple tables in Database?

Hello. 

Hopefully somebody here can help me... 

I have a database, with several tables. I want to index these tables in one go, 
but I am having no luck. 

For example: 

Two dB's: 
db1 = bread 
db2 = butter 

<entity name="bread101" query="select id_bread from bread UNION select buttertype from butter">

  <field column="id_bread" name="id" />

  <field column="buttertype" name="buttertype" />

</entity>


Only id_bread is being indexed. Can solr handle the UNION case?? 

Is there an alternative way of doing this? I looked into merging the two 
tables, but that looks like a serious pain. 


Any help is greatly appreciated :) 





RE: unknown field 'name'

2010-11-23 Thread sivaprasad


The field names in the XML documents you post and in schema.xml have to match.
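
For example, if a posted document contains a field named 'name', schema.xml needs a matching field declaration (or a dynamicField that covers it). A minimal sketch, with a placeholder field type:

A document being posted:

<add>
  <doc>
    <field name="name">Some value</field>
  </doc>
</add>

And the matching schema.xml entry:

<field name="name" type="text" indexed="true" stored="true"/>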

-Original Message-
From: McGibbney, Lewis John [via Lucene] 
ml-node+1956387-780012783-225...@n3.nabble.com
Sent: Tuesday, November 23, 2010 4:01pm
To: sivaprasad sivaprasa...@echidnainc.com
Subject: unknown field 'name'

Good Evening List, 

I have been working with Nutch and due to numerous integration advantages I 
decided to get to grips with the Solr code base. 

Solr dist - 1.4.1 
java version 1.6.0_22 
Windows Vista Home Premium 
Command Prompt to execute commands 

I encountered the following problem very early on during indexing stage, and 
even though I asked this question (through the wrong list :0|) I have been 
unable to resolve what it is thats going wrong. My searches to date pick up 
hits relating to Db problems and are of no use. I have a new dist of Solr and 
have made no configuration to date. 

C:\Users\Mcgibbney\Documents\LEWIS\apache-solr-1.4.1\apache-solr-1.4.1\example\exampledocs>java -jar post.jar *.xml 
SimplePostTool: version 1.2 
SimplePostTool: WARNING: Make sure your XML documents are encoded in UTF-8, other encodings are not currently supported 
SimplePostTool: POSTing files to http://localhost:8983/solr/update.. 
SimplePostTool: POSTing file hd.xml 
SimplePostTool: FATAL: Solr returned an error: ERRORunknown_field_name 

Help would be great. 

Lewis Mc 

Glasgow Caledonian University is a registered Scottish charity, number SC021474 

Winner: Times Higher Education's Widening Participation Initiative of the Year 
2009 and Herald Society's Education Initiative of the Year 2009 
http://www.gcu.ac.uk/newsevents/news/bycategory/theuniversity/1/name,6219,en.html





How to write custom component

2010-11-22 Thread sivaprasad

Hi,

I want to write a custom component which will be invoked before the query parser.
The output of this component should go to the query parser.

How can I configure it in solrconfig.xml?

How can I get a SynonymFilterFactory object programmatically?

Please share your ideas.
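
For the solrconfig.xml part, a custom SearchComponent is usually registered and then placed ahead of the standard query component in a handler's chain. A rough sketch (the class and component names are placeholders, not an existing implementation):

<searchComponent name="myPreParseComponent" class="com.example.MyPreParseComponent"/>

<requestHandler name="/search" class="solr.SearchHandler">
  <arr name="first-components">
    <str>myPreParseComponent</str>
  </arr>
</requestHandler>

first-components runs the listed components before the built-in query component; whether that is early enough depends on what the component needs to do to the query.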

Regards,
Siva


Re: Problem with synonyms

2010-11-22 Thread sivaprasad


In the synonyms.txt file I have the below synonyms:

ipod, i-pod, i pod

If expand=false at index time, is it going to replace all the occurrences of 'i-pod' and
'i pod' with 'ipod'?
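
For reference, with a comma-separated group like the one above and expand="false", the SynonymFilterFactory maps every entry in the group to the first one in the list at whatever stage the filter runs. A sketch of an index-time analyzer using it (illustrative only, not taken from this thread's schema):

<analyzer type="index">
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  <!-- with expand="false", 'i-pod' and 'i pod' are indexed as 'ipod' -->
  <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="false"/>
</analyzer>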




Re: Issue with relevancy

2010-11-21 Thread sivaprasad

Thanks guys, I will try the mentioned options


RE: Phrase Search & Multiple Keywords with Double quotes

2010-11-21 Thread sivaprasad


You have to escape the special characters. Use the below method to escape them:

public static String escapeQueryChars(String s) {
  StringBuilder sb = new StringBuilder();
  for (int i = 0; i < s.length(); i++) {
    char c = s.charAt(i);
    // These characters are part of the query syntax and must be escaped
    if (c == '\\' || c == '+' || c == '-' || c == '!' || c == '(' || c == ')'
        || c == ':' || c == '^' || c == '[' || c == ']' || c == '"' || c == '{' || c == '}'
        || c == '~' || c == '*' || c == '?' || c == '|' || c == '&' || c == ';'
        || Character.isWhitespace(c)) {
      sb.append('\\');
    }
    sb.append(c);
  }
  return sb.toString();
}


-Original Message-
From: pawan.darira [via Lucene] 
ml-node+1943780-1932990465-225...@n3.nabble.com
Sent: Monday, November 22, 2010 12:39am
To: sivaprasad sivaprasa...@echidnainc.com
Subject: Phrase Search amp; Multiple Keywords with Double quotes

Hi 

I want to do pharse searching with single/double quotes. Also there are 
cases that those phrases include special characters like  etc. 

What all i need to do while indexing such special characters  while 
searching them. How to handle phrase search with quotes 

Please suggest 

-- 
Thanks, 
Pawan Darira 






Re: Problem with synonyms

2010-11-21 Thread sivaprasad

Hi,
This looks like a bug. See the URL below.

https://issues.apache.org/jira/browse/LUCENE-1622



Re: String field with lower case filter

2010-11-20 Thread sivaprasad

Thank you, it is working perfectly.


Re: Problem with synonyms

2010-11-20 Thread sivaprasad

Even after expanding the synonyms I am unable to get the same results.

Is there any other method to achieve this?


Issue with relevancy

2010-11-19 Thread sivaprasad

Hi,

I configured the search request handler as shown below.

<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <!-- default values for query parameters -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
</requestHandler>

I am submitting the below query for search.

http://localhost:8680/solr/catalogSearch/select?facet=true&spellcheck=true&indent=on&omitHeader=true&stats.field=sal_amt&stats.field=volume&stats=true&wt=xml&q=prod_n%3ADesktop+Computer+OR+manf_mdl_nbr%3ADesktop+Computer+OR+upc%3ADesktop+Computer+OR+Brands%3ADesktop+Computer&start=0&rows=60&fl=*,score&debugQuery=on

I am getting the results below. The first doc scores higher than the second doc, even
though its prod_n contains only the word "Computer".

1)
<doc>
  <float name="score">1.6884389</float>
  <str name="Brands">GN Netcom</str>
  <str name="category">Headsets & Microphones</str>
  <str name="media_id">79634</str>
  <str name="media_url">http://image.shopzilla.com/resize?sq=60&uid=1640146921</str>
  <int name="prnt_ctgy_i">482</int>
  <int name="prod_i">1640146921</int>
  <str name="prod_n">Computer Headset</str>
  <double name="sal_amt">17.95</double>
  <str name="source_type">EXTERNAL</str>
</doc>

2)
<doc>
  <float name="score">1.4326878</float>
  <str name="category">Desktop Computers</str>
  <str name="media_id">1565338</str>
  <str name="media_url">http://image.shopzilla.com/resize?sq=60&uid=1983384776</str>
  <int name="prnt_ctgy_i">461</int>
  <int name="prod_i">1983384776</int>
  <str name="prod_n">Rain Computers ION 6-Core DAW Computer</str>
  <double name="sal_amt">1799.0</double>
  <str name="source_type">EXTERNAL</str>
</doc>

I want to push the first doc down below the second.


The debug query analysis is given below.

<lst name="debug">

<str name="rawquerystring">
prod_n:Desktop Computer OR manf_mdl_nbr:Desktop Computer OR upc:Desktop Computer OR Brands:Desktop Computer
</str>

<str name="querystring">
prod_n:Desktop Computer OR manf_mdl_nbr:Desktop Computer OR upc:Desktop Computer OR Brands:Desktop Computer
</str>

<str name="parsedquery">
+prod_n:comput prod_n:comput manf_mdl_nbr:comput prod_n:comput upc:Desktop prod_n:comput Brands:Desktop +prod_n:comput
</str>

<str name="parsedquery_toString">
+prod_n:comput prod_n:comput manf_mdl_nbr:comput prod_n:comput upc:Desktop prod_n:comput Brands:Desktop +prod_n:comput
</str>

<lst name="explain">

<str name="1640146921">

1.6884388 = (MATCH) product of:
  2.701502 = (MATCH) sum of:
0.5403004 = (MATCH) weight(prod_n:comput in 35844), product of:
  0.18838727 = queryWeight(prod_n:comput), product of:
4.5888486 = idf(docFreq=3518, maxDocs=127361)
0.041053277 = queryNorm
  2.8680303 = (MATCH) fieldWeight(prod_n:comput in 35844), product of:
1.0 = tf(termFreq(prod_n:comput)=1)
4.5888486 = idf(docFreq=3518, maxDocs=127361)
0.625 = fieldNorm(field=prod_n, doc=35844)
0.5403004 = (MATCH) weight(prod_n:comput in 35844), product of:
  0.18838727 = queryWeight(prod_n:comput), product of:
4.5888486 = idf(docFreq=3518, maxDocs=127361)
0.041053277 = queryNorm
  2.8680303 = (MATCH) fieldWeight(prod_n:comput in 35844), product of:
1.0 = tf(termFreq(prod_n:comput)=1)
4.5888486 = idf(docFreq=3518, maxDocs=127361)
0.625 = fieldNorm(field=prod_n, doc=35844)
0.5403004 = (MATCH) weight(prod_n:comput in 35844), product of:
  0.18838727 = queryWeight(prod_n:comput), product of:
4.5888486 = idf(docFreq=3518, maxDocs=127361)
0.041053277 = queryNorm
  2.8680303 = (MATCH) fieldWeight(prod_n:comput in 35844), product of:
1.0 = tf(termFreq(prod_n:comput)=1)
4.5888486 = idf(docFreq=3518, maxDocs=127361)
0.625 = fieldNorm(field=prod_n, doc=35844)
0.5403004 = (MATCH) weight(prod_n:comput in 35844), product of:
  0.18838727 = queryWeight(prod_n:comput), product of:
4.5888486 = idf(docFreq=3518, maxDocs=127361)
0.041053277 = queryNorm
  2.8680303 = (MATCH) fieldWeight(prod_n:comput in 35844), product of:
1.0 = tf(termFreq(prod_n:comput)=1)
4.5888486 = idf(docFreq=3518, maxDocs=127361)
0.625 = fieldNorm(field=prod_n, doc=35844)
0.5403004 = (MATCH) weight(prod_n:comput in 35844), product of:
  0.18838727 = queryWeight(prod_n:comput), product of:
4.5888486 = idf(docFreq=3518, maxDocs=127361)
0.041053277 = queryNorm
  2.8680303 = (MATCH) fieldWeight(prod_n:comput in 35844), product of:
1.0 = tf(termFreq(prod_n:comput)=1)
4.5888486 = idf(docFreq=3518, maxDocs=127361)
0.625 = fieldNorm(field=prod_n, doc=35844)
  0.625 = coord(5/8)
</str>

<str name="1983384776">

1.4326876 = (MATCH) product of:
  2.2923002 = (MATCH) sum of:
0.45846006 = (MATCH) weight(prod_n:comput in 57069), product of:
  0.18838727 = queryWeight(prod_n:comput), product of:
4.5888486 = idf(docFreq=3518, maxDocs=127361)
0.041053277 = queryNorm
  2.4336042 = (MATCH) fieldWeight(prod_n:comput in 57069), product of:
1.4142135 = tf(termFreq(prod_n:comput)=2)
4.5888486 = idf(docFreq=3518, maxDocs=127361)
0.375 = 

Re: Dismax is failing with json response writer

2010-11-19 Thread sivaprasad

The issue is solved. I replaced the Solr core jar.

Thanks Erick


String field with lower case filter

2010-11-19 Thread sivaprasad

Hi,

I am using a string field with the below configuration.

<fieldType name="cat_string" class="solr.StrField" sortMissingLast="true" omitNorms="true">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

One of the fields uses the field type cat_string. I am using this field as a facet and I
am also searching on it. While searching I need a case-insensitive match.

Let us say cat:Games or cat:games should give the same results.

But with the above configuration I am not getting any results. Does anybody have an
idea?

Regards,
Siva


Dismax is failing with json response writer

2010-11-17 Thread sivaprasad

Hi,

I am using the dismax query parser. When I want the response in JSON, I pass
wt=json, and it throws the exception below.

HTTP Status 500 - null java.lang.NullPointerException at
org.apache.solr.search.DocSlice$1.score(DocSlice.java:121) at
org.apache.solr.request.JSONWriter.writeDocList(JSONResponseWriter.java:502)
at
org.apache.solr.request.TextResponseWriter.writeVal(TextResponseWriter.java:141)
at
org.apache.solr.request.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:182)
at
org.apache.solr.request.JSONWriter.writeNamedList(JSONResponseWriter.java:297)
at
org.apache.solr.request.JSONWriter.writeResponse(JSONResponseWriter.java:92)
at
org.apache.solr.request.JSONResponseWriter.write(JSONResponseWriter.java:51)
at
org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:325)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:619) 


But the same query works with the standard request handler.

Does anyone have an idea how to fix this?

Regards,
Siva


Re: Term component sort is not working

2010-11-16 Thread sivaprasad

I am capturing all the user-entered search terms in a database, along with the number
of times each term has been entered. Let us say:

laptop has been entered 100 times.
laptop bag has been entered 80 times.
laptop battery has been entered 90 times.

I am using the terms component for the auto-suggest feature. If the user types "lap",
I get the response:
laptop bag
laptop
laptop battery

But I should get:
laptop
laptop battery
laptop bag

As "laptop" has been entered the most times, it should come first in the result. In this
case I need to utilize the WEIGHTAGE (the number of times searched) for sorting.
 

Regards,
Siva

-Original Message-
From: Erick Erickson [via Lucene] 
ml-node+1910694-73953797-225...@n3.nabble.com
Sent: Tuesday, November 16, 2010 7:52am
To: sivaprasad sivaprasa...@echidnainc.com
Subject: Re: Term component sort is not working




You haven't defined what you want to see, so it's hard
to help. What does top mean? The order you put it
into the index? Lexical sort? Frequency count?
Numerical ordering?

Why do you want to do this? Perhaps if you explained
your use case we'd be able to offer some alternatives.

Best
Erick

On Tue, Nov 16, 2010 at 1:22 AM, sivaprasad <sivaprasa...@echidnainc.com> wrote:



 Hi,

 I have given the terms component configuration.

 <searchComponent name="termsComponent"
  class="org.apache.solr.handler.component.TermsComponent"/>

  <requestHandler name="/autoSuggest"
  class="org.apache.solr.handler.component.SearchHandler">
   <lst name="defaults">
    <bool name="terms">true</bool>
    <str name="terms.fl">autoSuggestTerm</str>
    <str name="terms.sort">index</str>
   </lst>
   <arr name="components">
    <str>termsComponent</str>
   </arr>
  </requestHandler>

 And I have two fields in the schema file:

 <field name="autoSuggestTerm" type="string" indexed="true" stored="true"/>
  <field name="WEIGHTAGE" type="integer" stored="true"/>

 Now I am trying to sort the terms returned by the terms component based
 on the WEIGHTAGE field.


Problem with synonyms

2010-11-15 Thread sivaprasad

Hi,

I have a set of synonyms in the synonyms.txt file.

For example:
hdtv,High Definition Television, High Definition TV

In the admin screen, when I type "High Definition Television" as the query term to
analyze, I get "hdtv" as the result of the analysis.

But when I search for the terms "hdtv" and "High Definition Television", the result
counts do not match.

The analysis chain is given below

<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
</fieldType>

In the search results I enabled debugQuery=true, and the query term comes out as
shown below:

+searchtext:high +searchtext:definit +searchtext:televis

But if I put the query term in double quotes (for example: "High Definition
Television"), it works fine.

What is the cause of this problem?

Regards,
Siva


Term component sort is not working

2010-11-15 Thread sivaprasad

Hi,

As part of the terms component we have a parameter terms.sort=index|count.

If we put terms.sort=index, the terms are returned in index order.

While doing the import, I used the below query to index:

SELECT ID, SEARCH_KEY, WEIGHTAGE FROM SEARCH_KEY_WEIGHTAGE ORDER BY weightage DESC

So the document with the top WEIGHTAGE is indexed first, but the terms are not
returned in that order. How can I get the top-weightage term first in the list of
matched terms?


Regards,
Siva



Re: Term component sort is not working

2010-11-15 Thread sivaprasad

How can I utilize the weightage of the terms which I captured from end users? Any
ideas?


Re: Problem with synonyms

2010-11-15 Thread sivaprasad

Do I need to expand the synonyms at index time?


Search results are not coming if the query contains()

2010-11-15 Thread sivaprasad

Hi,
I have the below analysis settings at index time and query time:
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/>
<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>


When I analyze the query using the admin screen I get the below result:

hp  laserjet2200(h3978a)mainten kit
(h3978-60001)

But when I submit the query for searching I am not getting any results, even though
I have data in the index matching the above query.

What can be the problem?



Re: Search results are not coming if the query contains()

2010-11-15 Thread sivaprasad

Hi,
Here is the query.

<str name="rawquerystring">
  hp laserjet 2200 (h3978a) mainten kit (h3978-60001)
</str>
<str name="querystring">
  hp laserjet 2200 (h3978a) mainten kit (h3978-60001)
</str>
<str name="parsedquery">
  +searchtext:hp +searchtext:laserjet +searchtext:2200 +searchtext:h3978a +searchtext:mainten +searchtext:kit +searchtext:h3978-60001
</str>
<str name="parsedquery_toString">
  +searchtext:hp +searchtext:laserjet +searchtext:2200 +searchtext:h3978a +searchtext:mainten +searchtext:kit +searchtext:h3978-60001
</str>

Why are the "(" and ")" being trimmed?

Regards,
Siva

-Original Message-
From: Erick Erickson [via Lucene] 
ml-node+1907736-570186422-225...@n3.nabble.com
Sent: Monday, November 15, 2010 5:29pm
To: sivaprasad sivaprasa...@echidnainc.com
Subject: Re: Search results are not coming if the query contains()




I don't see the query you're submitting. Try submitting your query
with debugQuery=on and pasting the results...

But first I'd try escaping the parenthesis in your query since
they are part of the query syntax...

Best
Erick

On Mon, Nov 15, 2010 at 1:04 PM, sivaprasad <sivaprasa...@echidnainc.com> wrote:


 Hi,
 I have  the below analysis settings at index time and query time.
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>


 When i analyze the query using admin screen i got the below result.

 hp  laserjet2200(h3978a)mainten kit
 (h3978-60001)

 But when i submit the query for searching i am not getting any results.But
 with above query i have data in index.

 What can be the problem?



Re: Problem with synonyms

2010-11-15 Thread sivaprasad

I made the changes to the schema file as shown below.

<analyzer>
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
  <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
  <filter class="solr.LowerCaseFilterFactory"/>
  <filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/>
  <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
</analyzer>

And I have an entry in the synonyms.txt file as shown below:

hdtv => High Definition Television, High Definition TV, High Definition Televisions, High Definition TVs

Now I submitted the query with debugQuery=on.

Query 1: hdtv

The parsed query is given below.

<str name="rawquerystring">hdtv</str>
<str name="querystring">hdtv</str>
<str name="parsedquery">MultiPhraseQuery(searchtext:"high definit (televis tv tvs)")</str>
<str name="parsedquery_toString">searchtext:"high definit (televis tv tvs)"</str>

and the number of results returned is ZERO.

Query 2: High Definition Television

The parsed query is given below.
<str name="rawquerystring">High Definition Television</str>
<str name="querystring">High Definition Television</str>
<str name="parsedquery">+searchtext:high +searchtext:definit +(searchtext:televis searchtext:tv searchtext:tvs)</str>
<str name="parsedquery_toString">+searchtext:high +searchtext:definit +(searchtext:televis searchtext:tv searchtext:tvs)</str>

And the number of results is 1.

Why am I getting results like this even after expanding the synonyms?


Re: Term component sort is not working

2010-11-15 Thread sivaprasad

 
Hi,

I have given the terms component configuration.

<searchComponent name="termsComponent" class="org.apache.solr.handler.component.TermsComponent"/>

<requestHandler name="/autoSuggest" class="org.apache.solr.handler.component.SearchHandler">
  <lst name="defaults">
    <bool name="terms">true</bool>
    <str name="terms.fl">autoSuggestTerm</str>
    <str name="terms.sort">index</str>
  </lst>
  <arr name="components">
    <str>termsComponent</str>
  </arr>
</requestHandler>

And I have two fields in the schema file:

<field name="autoSuggestTerm" type="string" indexed="true" stored="true"/>
<field name="WEIGHTAGE" type="integer" stored="true"/>

Now I am trying to sort the terms returned by the terms component based on the
WEIGHTAGE field.


Re: Search results are not coming if the query contains()

2010-11-15 Thread sivaprasad

I tried escaping the special chars. Now it is working fine.

Thanks guys.


Searching with acronyms

2010-11-14 Thread sivaprasad

Hi,

I have a requirement where a user enters an acronym of a word, and the search
results should come back for the expanded word. Let us say, if the user enters
'TV', the search results should come for 'Television'.

Is the synonyms filter the way to achieve this?

Any inputs.
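
A synonym mapping is a common way to handle this kind of acronym expansion; a sketch of an index-time analyzer chain (the synonyms.txt entry shown in the comment is an assumed example, not from this thread):

<analyzer>
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  <!-- assumed synonyms.txt entry:  tv, television -->
  <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
</analyzer>

With expand="true", documents containing either 'TV' or 'Television' are indexed with both terms, so a query for either form matches.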

Regards,
Siva



RE: Solr MySQL Adding new column to table

2010-11-02 Thread sivaprasad

You have to change the old configuration for the newly added field, or you can use
the dynamic fields concept.

Go through the link
http://wiki.apache.org/solr/SchemaXml
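
For the dynamic fields route, a declaration like the sketch below (the *_s suffix convention is an assumption, not from the original configuration) lets new columns be indexed without further schema changes, as long as the import maps them to a matching name such as phoneNumber_s:

<dynamicField name="*_s" type="string" indexed="true" stored="true"/>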




-Original Message-
From: nitin.vanaku...@gmail.com [via Lucene] 
ml-node+1826759-1041834398-225...@n3.nabble.com
Sent: Tuesday, November 2, 2010 4:50am
To: sivaprasad sivaprasa...@echidnainc.com
Subject: Solr MySQL Adding new column to table

Hello Techies,

I am new to Solr, i am using it with mysql.
Suppose i have table called person in mysql with two columns name,  age
and i have configured mysql in solr. now i have added a new column to person
table called phoneNumber, is it possible for solr to recognize new column 
dynamically ?
i mean with out changing old configuration.
thanks in advance

Nitin




Filtering results based on score

2010-11-01 Thread sivaprasad

Hi,
As part of the Solr results I am able to get the max score. I want to filter the results
based on the max score: let us say the max score is 10, and I need only the results
scoring between the max score and 50% of the max score. This max score changes
dynamically. How can we implement this? Do we need to customize Solr? Any
suggestions, please.


Regards,
JS


Solr Relevancy Calculation

2010-11-01 Thread sivaprasad

Hi,
I have 25 indexed fields in my document. By default, if I give q=laptops, the search
is run against five fields, and I get the score as part of the search results. How does
Solr calculate the score? Does it calculate it only over the five fields or over all 25
indexed fields? In what order does it calculate the score? Any documents related to
this topic would be helpful.
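
For reference, the score is computed only over the fields the query actually searches (the five default fields in this case), not over all 25 indexed fields. With the default Lucene/Solr similarity of that era, the per-document score has roughly this shape (a simplified sketch, matching the structure visible in debugQuery explain output):

score(q,d) = coord(q,d) * sum over query terms t of [ tf(t in d) * idf(t)^2 * queryNorm(q) * norm(t,d) * boost(t) ]

where tf is the term frequency in the document, idf the inverse document frequency, coord a reward for matching more of the query terms, and norm the field-length/boost normalization.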

Regards,
JS


Boosting the score based on certain field

2010-11-01 Thread sivaprasad

Hi,

In my document I have a field called "category". It contains electronics, games, etc.
For some of the category values I need to boost the document score. Let us say, for
the electronics category, I will choose a boosting parameter greater than for the
games category. Does anybody have an idea how to achieve this functionality?
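
One option at index time is a document boost in the XML update message; a sketch (the boost value and fields are placeholders, to be chosen per category by the indexing code):

<add>
  <doc boost="2.0">
    <!-- hypothetical product document; 2.0 is a placeholder boost for the electronics category -->
    <field name="category">electronics</field>
    <field name="product_name">Example product</field>
  </doc>
</add>

Query-time boosting is another route, for example adding a boosted clause such as category:electronics^2 to the query.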

Regards,
Siva




Externalizing properties file

2010-10-26 Thread sivaprasad

Hi,
I created a custom component for Solr. It uses a properties file. When I place the jar
in the solr_home lib directory, the class is picked up on the classpath, but the
properties file is not. If I bundle the properties file inside the jar, the file is on the
classpath, but I need to externalize it. I am using ResourceBundle.getBundle to load
the properties file. Where do I need to place the properties file? Does anybody have
an idea?

Regards,
JS


Re: Speeding up solr indexing

2010-10-09 Thread sivaprasad

Hi,
Please find the configurations below.

Machine configuration (Solr running here):

RAM - 4 GB
Hard disk - 180 GB
OS - Red Hat Linux version 5
Processor - 2x Intel Core 2 Duo CPU @ 2.66 GHz

Machine configuration (MySQL server running here):
RAM - 4 GB
Hard disk - 180 GB
OS - Red Hat Linux version 5
Processor - 2x Intel Core 2 Duo CPU @ 2.66 GHz

MySQL server details:
MySQL version - 5.0.22

Solr configuration details:

<indexDefaults>

  <useCompoundFile>false</useCompoundFile>

  <mergeFactor>20</mergeFactor>

  <!--<maxBufferedDocs>1000</maxBufferedDocs>-->
  <ramBufferSizeMB>100</ramBufferSizeMB>
  <maxMergeDocs>2147483647</maxMergeDocs>
  <maxFieldLength>1</maxFieldLength>
  <writeLockTimeout>1000</writeLockTimeout>
  <commitLockTimeout>1</commitLockTimeout>
  <!--<luceneAutoCommit>false</luceneAutoCommit>-->

  <!--<mergePolicy>org.apache.lucene.index.LogByteSizeMergePolicy</mergePolicy>-->

  <!--<mergeScheduler>org.apache.lucene.index.ConcurrentMergeScheduler</mergeScheduler>-->
  <lockType>single</lockType>
</indexDefaults>

<mainIndex>

  <useCompoundFile>false</useCompoundFile>
  <ramBufferSizeMB>100</ramBufferSizeMB>
  <mergeFactor>20</mergeFactor>

  <!--<maxBufferedDocs>1000</maxBufferedDocs>-->
  <maxMergeDocs>2147483647</maxMergeDocs>
  <maxFieldLength>1</maxFieldLength>
  <unlockOnStartup>false</unlockOnStartup>
</mainIndex>

<!-- the default high-performance update handler -->
<updateHandler class="solr.DirectUpdateHandler2">
  <maxPendingDeletes>10</maxPendingDeletes>
  <autoCommit>
    <maxDocs>1</maxDocs>
    <maxTime>6</maxTime>
  </autoCommit>

  <!-- A postCommit event is fired after every commit or optimize command
  <listener event="postCommit" class="solr.RunExecutableListener">
    <str name="exe">solr/bin/snapshooter</str>
    <str name="dir">.</str>
    <bool name="wait">true</bool>
    <arr name="args"> <str>arg1</str> <str>arg2</str> </arr>
    <arr name="env"> <str>MYVAR=val1</str> </arr>
  </listener>
  -->
  <!-- A postOptimize event is fired only after every optimize command, useful
       in conjunction with index distribution to only distribute optimized indicies
  <listener event="postOptimize" class="solr.RunExecutableListener">
    <str name="exe">snapshooter</str>
    <str name="dir">solr/bin</str>
    <bool name="wait">true</bool>
  </listener>
  -->
</updateHandler>

Solr document details:

21 fields are indexed and stored.
3 fields are indexed only.
3 fields are stored only.
3 fields are indexed, stored and multi-valued.
2 fields are indexed and multi-valued.

I am also copying some of the indexed fields; two of these are multi-valued and have
thousands of values.

In the DB config file, the main table contains 0.6 million records.

When I tested with the same records, indexing took 1 hr 30 min. In that case one of
the multi-valued field tables had no records. After putting data into that table (for
each main-table record it has thousands of records, and this field is indexed and
stored), indexing takes more than 24 hrs.

Solr is running on Tomcat 6.0.26, JDK 1.6.0_17 and Solr 1.4.1.

I am using the JVM's default settings.

Why is this taking so much time? Does anybody have suggestions about where I am
going wrong?

Thanks,
JS


Speeding up solr indexing

2010-10-08 Thread sivaprasad

Hi,
I am indexing the data using DIH. The data comes from MySQL. Each document
contains 30 fields, and some of the fields are multi-valued. When I try to index
10 million records it takes a very long time.

Does anybody have suggestions to speed up the indexing process? Any suggestions
on Solr admin-level configurations?


Thanks,
JS


Autosuggest with inner phrases

2010-10-02 Thread sivaprasad

Hi ,
I implemented auto-suggest using the terms component, but the suggestions only
match from the start of a word. I want inner phrases as well. For example, if I type
"bass", auto-complete should offer suggestions that include "bass fishing" or
"bass guitar", and even "sea bass" (note how "bass" is not necessarily the first word).

How can I achieve this using Solr's terms component?

Regards,
Siva