Re: MySQL Exception: Communications link failure WITH DataImportHandler

2012-08-16 Thread Jienan Duan
Hi all:
I have resolved this problem by configuring a JNDI datasource in Tomcat.
But I still want to find out why DIH throws an exception when I
configure the datasource in data-configure.xml rather than as a JNDI resource.

Regards.
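For anyone hitting the same issue, the JNDI route looks roughly like the sketch below; every name, host, and credential is a placeholder (not from the original setup), and the jndiName attribute should be checked against the DataImportHandler documentation for your Solr version:

```xml
<!-- Tomcat context.xml: container-managed MySQL datasource (placeholder values) -->
<Resource name="jdbc/solrdb" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://dbhost:3306/mydb"
          username="solruser" password="secret"/>

<!-- data-config.xml: reference the container datasource instead of url/driver -->
<dataSource type="JdbcDataSource" jndiName="java:comp/env/jdbc/solrdb"/>
```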

2012/8/16 Jienan Duan 

> Hi all:
> I'm using DataImportHandler load data from MySQL.
> It works fine on my develop machine and online environment.
> But I got an exception on test environment:
>
>> Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
>>> Communications link failure
>>
>>
>>> The last packet sent successfully to the server was 0 milliseconds ago.
>>> The driver has not received any packets from the server.
>>
>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>> Method)
>>
>> at
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>
>> at
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>
>> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>
>> at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
>>
>> at
>>> com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1074)
>>
>> at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:343)
>>
>> at
>>> com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2132)
>>
>> ... 26 more
>>
>> Caused by: java.net.ConnectException: Connection timed out
>>
>> at java.net.PlainSocketImpl.socketConnect(Native Method)
>>
>> at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
>>
>> at
>>> java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
>>
>> at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
>>
>> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
>>
>> at java.net.Socket.connect(Socket.java:529)
>>
>> at java.net.Socket.connect(Socket.java:478)
>>
>> at java.net.Socket.<init>(Socket.java:375)
>>
>> at java.net.Socket.<init>(Socket.java:218)
>>
>> at
>>> com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:253)
>>
>> at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:292)
>>
>> ... 27 more
>>
> This makes me confused, because the test env and the online env are almost
> the same: Tomcat runs on a Linux server with JDK 6, and MySQL 5 runs on another.
> Even a simple JDBC test class I wrote works, and a JSP file with JDBC code
> also works. Only DataImportHandler fails.
> I'm trying to read the Solr source code and found that Solr seems to have its
> own ClassLoader. I'm not sure if it goes wrong with Tomcat under some specific
> configuration.
> Does anyone know how to fix this problem? Thank you very much.
>
> Best Regards.
>
> Jienan Duan
>
> --
> --
> Not taking detours is itself the shortcut.
> http://www.jnan.org/
>
>


-- 
--
Not taking detours is itself the shortcut.
http://www.jnan.org/


Can two Solr share index?

2012-08-16 Thread Alex King
Hi all,
We use Solr to build a search service for our customers. We have one
core on one physical machine. When our site is ready and running, it
could potentially be very busy serving search results to users, but I
want to index new documents without affecting Solr's search
performance. (there can be a huge number of documents at one time)
We figured out that we could have one Solr instance only for search and
another (or more in the future) to do the indexing. Those Solrs should
eventually have the same index (it isn't necessary to synchronize the
indexes every time a new document is added, but I want the search Solr
to get new documents, let's say once a week).
So my question is: is it a good idea at all? ;) And how can I accomplish
this? Can two Solrs share an index? Or should I copy the index from the
indexing machine to the search machine (how to do it technically? is
there some special tool for this purpose?)
Regards,
Alex


Re: Atomic Multicore Operations - E.G. Move Docs

2012-08-16 Thread Nicholas Ball

I've been close to implementing a 2PC protocol before for something else,
but for this it's not needed.
As the move operation will be done on a single node which has both
cores, it could be done differently; I'm just not entirely sure how.

When a commit is done at the moment, the core must get locked somehow; it
is at this point that we should lock the other core too if a move
operation is being executed.

Nick

On Thu, 16 Aug 2012 10:32:10 +0800, Li Li  wrote:
>
http://zookeeper.apache.org/doc/r3.3.6/recipes.html#sc_recipes_twoPhasedCommit
> 
> On Thu, Aug 16, 2012 at 7:41 AM, Nicholas Ball
>  wrote:
>>
>> Haven't managed to find a good way to do this yet. Does anyone have any
>> ideas on how I could implement this feature?
>> Really need to move docs across from one core to another atomically.
>>
>> Many thanks,
>> Nicholas
>>
>> On Mon, 02 Jul 2012 04:37:12 -0600, Nicholas Ball
>>  wrote:
>>> That could work, but then how do you ensure commit is called on the
>>> two cores at the exact same time?
>>>
>>> Cheers,
>>> Nicholas
>>>
>>> On Sat, 30 Jun 2012 16:19:31 -0700, Lance Norskog 
>>> wrote:
 Index all documents to both cores, but do not call commit until both
 report that indexing worked. If one of the cores throws an exception,
 call roll back on both cores.

 On Sat, Jun 30, 2012 at 6:50 AM, Nicholas Ball
  wrote:
>
> Hey all,
>
> Trying to figure out the best way to perform an atomic operation across
> multiple cores on the same Solr instance, i.e. a multi-core environment.
>
> An example would be to move a set of docs from one core onto another
> core and ensure that a soft commit is done at the exact same time. If
> one were to fail, so would the other.
> Obviously this would probably require some customization, but I wanted
> to know what the best way to tackle this would be and where I should be
> looking in the source.
>
> Many thanks for the help in advance,
> Nicholas a.k.a. incunix


org.apache.solr.client.solrj.SolrServerException: java.net.SocketException: Unrecognized Windows Sockets error: 0: JVM_Bind

2012-08-16 Thread Kashif Khan
Does anybody know why this error shows up? 

org.apache.solr.client.solrj.SolrServerException: java.net.SocketException:
Unrecognized Windows Sockets error: 0: JVM_Bind

I have developed a Solr plugin, and this error shows up only when I send
multiple requests to my plugin. If I send a single query to the plugin,
it doesn't show this problem.




--
View this message in context: 
http://lucene.472066.n3.nabble.com/org-apache-solr-client-solrj-SolrServerException-java-net-SocketException-Unrecognized-Windows-Socked-tp4001594.html
Sent from the Solr - User mailing list archive at Nabble.com.


missing core name in path

2012-08-16 Thread Muzaffer Tolga Özses

Hi,

I've started Solr as usual, and when I browsed to 
http://www.example.com:8983/solr/admin, I got


HTTP ERROR 404

Problem accessing /solr/admin/index.jsp. Reason:

missing core name in path
Powered by Jetty://

Also, below are the lines I got when starting it:

SEVERE: org.apache.solr.common.SolrException: Schema Parsing Failed: 
multiple points

at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:688)
at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:123)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:478)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:332)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:216)
at 
org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:161)
at 
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96)

at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)

at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
at 
org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
at 
org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
at 
org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)

at org.mortbay.jetty.Server.doStart(Server.java:224)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)

at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)
at org.mortbay.start.Main.invokeMain(Main.java:194)
at org.mortbay.start.Main.start(Main.java:534)
at org.mortbay.start.Main.start(Main.java:441)
at org.mortbay.start.Main.main(Main.java:119)
Caused by: java.lang.NumberFormatException: multiple points
at 
sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1082)

at java.lang.Float.parseFloat(Float.java:422)
at org.apache.solr.core.Config.getFloat(Config.java:307)
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:430)
... 31 more

Aug 16, 2012 1:43:03 PM org.apache.solr.servlet.SolrDispatchFilter init
INFO: user.dir=/usr/local/solr/SOLR/example
Aug 16, 2012 1:43:03 PM org.apache.solr.servlet.SolrDispatchFilter init
INFO: SolrDispatchFilter.init() done
Aug 16, 2012 1:43:03 PM org.apache.solr.servlet.SolrServlet init
INFO: SolrServlet.init()
Aug 16, 2012 1:43:03 PM org.apache.solr.core.SolrResourceLoader 
locateSolrHome

INFO: JNDI not configured for solr (NoInitialContextEx)
Aug 16, 2012 1:43:03 PM org.apache.solr.core.SolrResourceLoader 
locateSolrHome
INFO: solr home defaulted to 'solr/' (could not find system property or 
JNDI)

Aug 16, 2012 1:43:03 PM org.apache.solr.servlet.SolrServlet init
INFO: SolrServlet.init() done
Aug 16, 2012 1:43:03 PM org.apache.solr.core.SolrResourceLoader 
locateSolrHome

INFO: JNDI not configured for solr (NoInitialContextEx)
Aug 16, 2012 1:43:03 PM org.apache.solr.core.SolrResourceLoader 
locateSolrHome
INFO: solr home defaulted to 'solr/' (could not find system property or 
JNDI)

Aug 16, 2012 1:43:03 PM org.apache.solr.servlet.SolrUpdateServlet init
INFO: SolrUpdateServlet.init() done
2012-08-16 13:43:03.105:INFO::Started SocketConnector@0.0.0.0:8983
2012-08-16 13:45:24.162:WARN::/solr/admin/
java.lang.IllegalStateException: STREAM
at org.mortbay.jetty.Response.getWriter(Response.java:616)
at 
org.apache.jasper.runtime.JspWriterImpl.initOut(JspWriterImpl.java:187)
at 
org.apache.jasper.runtime.JspWriterImpl.flushBuffer(JspWriterImpl.java:180)
at 
org.apache.jasper.runtime.PageContextImpl.release(PageContextImpl.java:237)
at 
org.apache.jasper.runtime.JspFactoryImpl.internalReleasePageContext(JspFactoryImpl.java:173)
at 
org.apache.jasper.runtime.JspFactoryImpl.releasePageContext(JspFactoryImpl.java:124)
at 
org.apache.jsp.admin.index_jsp._jspService(org.apache.jsp.admin.index_jsp:415)

at org.apache.jasp

Chinese character not encoded for facet.prefix but encoded for q field

2012-08-16 Thread Rajani Maski
Chinese characters are not encoded for facet.prefix but are encoded for the
q field.

*Why? What might be the problem?*

This is done:

[image: Inline image 2]


Re: Can two Solr share index?

2012-08-16 Thread Erik Hatcher
Alex - what you're looking for is called "replication" - 
http://wiki.apache.org/solr/SolrReplication

Erik


On Aug 16, 2012, at 04:48 , Alex King wrote:

> Hi all,
> [...]
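The indexing-Solr / search-Solr split described above maps directly onto Solr's built-in master/slave replication. A minimal solrconfig.xml sketch (Solr 3.x style; host names and the poll interval here are placeholders, not a tested configuration) might look like:

```xml
<!-- On the indexing (master) Solr: publish the index after each commit -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- On the search (slave) Solr: pull index updates from the master -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://indexer-host:8983/solr/replication</str>
    <str name="pollInterval">00:30:00</str>
  </lst>
</requestHandler>
```

For the weekly cadence mentioned in the question, one option is to omit pollInterval on the slave and instead trigger a fetch from cron via the ReplicationHandler's fetchindex command (http://search-host:8983/solr/replication?command=fetchindex).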



Re: missing core name in path

2012-08-16 Thread Yury Kats
On 8/16/2012 6:57 AM, Muzaffer Tolga Özses wrote:
> 
> Also, below are the lines I got when starting it:
> 
> SEVERE: org.apache.solr.common.SolrException: Schema Parsing Failed: 
> multiple points
> ...
> Caused by: java.lang.NumberFormatException: multiple points
>  at 
> sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1082)

This looks like the version number at the top of the schema has more than
one dot, e.g. "1.2.3". Solr parses the version as a floating-point number,
so it must be "1.23" instead.
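Concretely, the version attribute at the top of schema.xml must parse as a single float (the stack trace above fails inside Float.parseFloat via Config.getFloat):

```xml
<!-- schema.xml: version is parsed as a float, so "1.2.3" fails with
     NumberFormatException: multiple points -->
<schema name="example" version="1.5">
  <!-- field types and fields ... -->
</schema>
```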



Re: missing core name in path

2012-08-16 Thread Jack Krupansky
Compare your current schema.xml to a previous known good copy (or to the 
original from the Solr example) and see what changes have occurred. Maybe 
you were viewing it in some editor and accidentally hit some keys that 
corrupted the format.


And, tell us what release of Solr you are using.

-- Jack Krupansky

-Original Message- 
From: Muzaffer Tolga Özses

Sent: Thursday, August 16, 2012 6:57 AM
To: solr-user@lucene.apache.org
Subject: missing core name in path

Hi,

I've started Solr as usual, and when I browsed to
http://www.example.com:8983/solr/admin, I got

[...]

Sort on dynamic field

2012-08-16 Thread Peter Kirk
Hi, a question about sorting and dynamic fields in "Solr Specification Version: 
3.6.0.2012.04.06.11.34.07".

I have a field defined like


Where type int is


If I perform a search, which returns 13 documents, and sort by a field called 
"item_group_int", then it almost produces the results I expect. The only thing 
I don't expect is that one document in the results, which does not have a field 
called "item_group_int", appears in the sort order as number 2. 

I think this has something to do with the fact that one of the other documents 
has a value of 0 for this field - it comes first in the sort.

How do I relegate documents which do not possess the requested sort-field to 
the last positions in the results?

Thanks,
Peter






Need Help - Solr - Sitecore integration

2012-08-16 Thread Samuthira Pandi S
Hi,

Currently I am working as a Sitecore developer.
My client would like to implement Solr search integration in my Sitecore
application.
I don't know how to implement this; if you have any documents related to it,
kindly share the configuration document.

Thanks & regards
Samuthirapandi.S





Re: missing core name in path

2012-08-16 Thread Muzaffer Tolga Özses

Sorry for the late reply.

I didn't install it; our sysadmin did, based on my tutorialised
experience. The version is 3.6.1.

On 08/16/2012 02:28 PM, Jack Krupansky wrote:
Compare your current schema.xml to a previous known good copy (or to 
the original from the Solr example) and see what changes have 
occurred. Maybe you were viewing it in some editor and accidentally 
hit some keys that corrupted the format.


And, tell us what release of Solr you are using.

-- Jack Krupansky

-Original Message- From: Muzaffer Tolga Özses
Sent: Thursday, August 16, 2012 6:57 AM
To: solr-user@lucene.apache.org
Subject: missing core name in path

[...]

Your opinion please concerning prod' installation

2012-08-16 Thread Bruno Mannina

Dear All,

I would like to present what I want to do to install Solr on a brand
new production server.


1. uninstall the apache-tomcat provided with standard Ubuntu 12.04
2. uninstall the Sun Java provided with standard Ubuntu 12.04

3. install Java 7
4. install apache-tomcat

5. download Solr 3.6.1 (or do you think I can download 4.0?)

6. secure the server (OK, but how?)

Thanks for your comments!

Of course, if you have a link that details these processes it will be
great. (not one page for each step; I have those)


Have a nice day !
Bruno



Re: Sort on dynamic field

2012-08-16 Thread Yonik Seeley
On Thu, Aug 16, 2012 at 8:00 AM, Peter Kirk  wrote:
> Hi, a question about sorting and dynamic fields in "Solr Specification 
> Version: 3.6.0.2012.04.06.11.34.07".
>
> I have a field defined like
>  multiValued="false"/>
>
> Where type int is
>  omitNorms="true" positionIncrementGap="0"/>

Try adding sortMissingLast="true" to this type.

-Yonik
http://lucidworks.com
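A sketch of what that change might look like, reconstructed from the fragments quoted above (the type name, dynamic-field pattern, and other attributes are assumptions, not the poster's actual schema):

```xml
<!-- schema.xml: sortMissingLast makes documents that lack the sort field
     come after all documents that have it -->
<fieldType name="int" class="solr.TrieIntField" precisionStep="0"
           omitNorms="true" positionIncrementGap="0"
           sortMissingLast="true"/>

<dynamicField name="*_int" type="int" indexed="true" stored="true"
              multiValued="false"/>
```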


How to index multivalued field tokens by their attached metadata?

2012-08-16 Thread Fuu
Hello all Solrians.

I'm fairly new to Solr, having only played with it for about a month now.
I'm working with the Solr 4.0.0-Alpha release, trying to figure out a proper
approach to an indexing problem, but the methods I've come up with are not
panning out. I describe below the problem and my 3 attempts of solving it. I
hope someone here has had similar issues and solved them or can tell me that
my current ones are no good. :)

Problem: I have a dataset that consists of email-type documents. From
these documents I need to extract certain tokens, attach meta
information to each token, and then make them searchable based on the
attached meta information. If it works, I could search the index for tokens
that were in a document created in a certain date range, or based on any
other metadata attached to the token like this.

Basically: For each document on disk => X amount of extracted token based
documents in the index.

Attempt one: As a starting point I first used a PatternTokenizer to get  the
tokens that I want, so each indexed document now would have a multivalued
field of tokens. I then wrote a TokenFilter that attached the metadata to
each token as a payload. I tried searching by payload and discovered it only
worked if I used the token as search parameter. Apparently searching by
keywords in token payload is not implemented yet? 

Attempt two: I read about UpdateRequestProcessors and processor chains, and
tried writing a processor that would take in a document, check if it has a
field with my tokens (extracted using the TokenFilter from the first
approach) in it, and then hand out each token as a separate document to the
next processor. I couldn't figure out how to do this; apparently once you
call super.processAdd() it jumps to the next document, rather than allowing
me to insert a new document based on the next token of the current document.

Attempt 3: Use a Lucene IndexWriter directly from the custom
UpdateRequestProcessor to write the created meta-token documents to a
separate index. As a concept it should work, but how would this second index
conform to its Solr schema if I write data directly to an index? I assume
that I would configure the index as a second core with its own schema and
search parameters. Can Solr still query the index normally?

As you can see, I'm a little bit at a loss on how to implement this. Are all
of the above approaches bad? Have I misunderstood one of them, and should it
actually work? I could go back to basics and write a document processor in
Python to do all the parsing, pattern matching, and token extraction
outside of Solr and just feed it documents to index, but this seems like
something that Solr should do and I'm just not seeing The Right Way.

Regards,
Juha
 





--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-index-multivalued-field-tokens-by-their-attached-metadata-tp4001627.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: How to index multivalued field tokens by their attached metadata?

2012-08-16 Thread Jack Krupansky
It would help if you could give a simple, contrived example of the kind of
input you want to process, what tokens you consider special, their metadata,
and the Solr documents you hope to generate, with the schema definition
(summarized). But keep it simple, at first.


That said, as a general rule, any "complex processing" is best done upstream 
from Solr. Sure, you can do some interesting processing in an update 
handler, including with scripts now, but just don't go overboard.


In fact, maybe you might be better off by first writing your processing as a 
standalone program that feeds Solr XML documents to Solr, get that working, 
and THEN decide whether and how that processing might be integrated more 
tightly with Solr.


But, start by defining the inputs, processing, and output as requested 
above.
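As a rough illustration of that standalone route, the program would emit Solr XML update messages, one document per extracted token, and POST them to the /update handler. Every field name below is invented for illustration and would have to match your own schema:

```xml
<add>
  <doc>
    <field name="id">email42-token7</field>
    <field name="token">invoice-123</field>
    <field name="source_doc">email42</field>
    <field name="created">2012-08-16T00:00:00Z</field>
  </doc>
</add>
```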


-- Jack Krupansky

-Original Message- 
From: Fuu

Sent: Thursday, August 16, 2012 9:54 AM
To: solr-user@lucene.apache.org
Subject: How to index multivalued field tokens by their attached metadata?

Hello all Solrians.

I'm fairly new to Solr, having only played with it for about a month now.
I'm working with the Solr 4.0.0-Alpha release, trying to figure out a proper
approach to an indexing problem, but the methods I've come up with are not
panning out. I describe below the problem and my 3 attempts of solving it. I
hope someone here has had similar issues and solved them or can tell me that
my current ones are no good. :)

Problem: I have a dataset that consists of email type of documents. From
these documents I need to extract to extract certain tokens, attach meta
information to each token and then make them searchable based on the
attached meta information. If it works, I could search the index for tokens
that were in a document created in certain data range, or based on any other
metadata like this attached to the token.

Basically: For each document on disk => X amount of extracted token based
documents in the index.

Attempt one: As a starting point I first used a PatternTokenizer to get the
tokens that I want, so each indexed document now would have a multivalued
field of tokens. I then wrote a TokenFilter that attached the metadata to
each token as a payload. I tried searching by payload and discovered it only
worked if I used the token itself as the search parameter. Apparently
searching by keywords in a token's payload is not implemented yet?

Attempt two: I read about UpdateRequestProcessors and processor chains, and
tried writing a processor that would take in a document, check if it has a
field with my tokens (extracted using the TokenFilter from the first
approach), and then hand out each token as a separate document to the next
processor. I couldn't figure out how to do this; apparently once you call
super.processAdd() it jumps to the next document, rather than allowing me to
insert a new document based on the next token of the current document.
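For what it's worth, the "one document in, many documents out" pattern that the second attempt describes is possible; nothing forces a one-to-one mapping, since super.processAdd() can be called any number of times per incoming document. An untested sketch against the Solr 4.x UpdateRequestProcessor API follows (field names and the exact AddUpdateCommand usage are assumptions, not verified code):

```java
public class TokenSplittingProcessor extends UpdateRequestProcessor {
  public TokenSplittingProcessor(UpdateRequestProcessor next) {
    super(next);
  }

  @Override
  public void processAdd(AddUpdateCommand cmd) throws IOException {
    SolrInputDocument original = cmd.getSolrInputDocument();
    // Emit one document per extracted token instead of the original.
    for (Object token : original.getFieldValues("extracted_tokens")) {
      SolrInputDocument tokenDoc = new SolrInputDocument();
      tokenDoc.addField("id", original.getFieldValue("id") + "_" + token);
      tokenDoc.addField("token", token);
      tokenDoc.addField("created", original.getFieldValue("created"));
      AddUpdateCommand tokenCmd = new AddUpdateCommand(cmd.getReq());
      tokenCmd.solrDoc = tokenDoc;
      super.processAdd(tokenCmd); // forward each token doc down the chain
    }
    // The original document is deliberately not forwarded.
  }
}
```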

Attempt 3: Use a Lucene IndexWriter directly from the custom
UpdateRequestProcessor to write the created meta-token documents to a
separate index. As a concept it should work, but how would this second index
conform to its Solr schema if I write data to the index directly? I assume
that I would configure the index as a second core with its own schema and
search parameters. Can Solr still query that index normally?

As you can see I'm a little bit at a loss on how to implement this. Are all
of the above approaches bad? Have I misunderstood one of them and it should
actually work? I could go back to basics and write a document processor in
Python to do all the parsing, pattern matching, and token extraction
outside of Solr and just feed it documents to index, but this seems like
something that Solr should do and I'm just not seeing The Right Way.
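For the record, the Python fallback mentioned above is straightforward. A minimal sketch, where the token pattern, field names, and metadata are illustrative assumptions:

```python
import re
from xml.sax.saxutils import escape

def token_docs(doc_id, text, created, pattern=r"[A-Z]{3}-\d+"):
    """Split one source document into per-token documents.

    Each extracted token becomes its own document carrying the parent's
    metadata (here just 'created'), so tokens can later be filtered by
    date range.
    """
    docs = []
    for n, token in enumerate(re.findall(pattern, text)):
        docs.append({"id": "%s_%d" % (doc_id, n),
                     "token": token,
                     "created": created})
    return docs

def to_solr_xml(docs):
    """Render the documents as Solr <add> XML for POSTing to /update."""
    parts = ["<add>"]
    for d in docs:
        parts.append("<doc>")
        for name, value in d.items():
            parts.append('<field name="%s">%s</field>'
                         % (name, escape(str(value))))
        parts.append("</doc>")
    parts.append("</add>")
    return "".join(parts)
```

The resulting XML can then be POSTed to the /update handler with curl or post.jar.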

Regards,
Juha






--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-index-multivalued-field-tokens-by-their-attached-metadata-tp4001627.html
Sent from the Solr - User mailing list archive at Nabble.com. 



Sharding and Replication setup

2012-08-16 Thread erolagnab
Hi all,

I'm trying to set up Solr in our current environment to provide a high
availability and fault tolerance infrastructure.
What I have is:
1. 2 physical servers running 2 Tomcats
2. A load balancer doing round-robin requests to 2 Tomcats

After reading thru different posts
(http://lucidworks.lucidimagination.com/display/solr/Scaling+and+Distribution,
http://www.slideshare.net/sourcesense/sharded-solr-setup-with-master), I'm
thinking of having the setup as in the image attached 
http://lucene.472066.n3.nabble.com/file/n4001642/Solr_Sharding_and_Replication_HA.png

Basically, it's a combination of sharding and replication.

Server 1, I have a 3-core Solr instance. e.g.: Master 1, Slave 1 (replicated
locally from Master 1) and Slave 2 (replicated remotely from Master 2 in
Server 2)
Server 2, I have similar setup, e.g.: Master 2, Slave 2 (replicated locally
from Master 2) and Slave 1 (replicated remotely from Master 1 in Server 1)

The indexing requests can go to Master 1 or Master 2, hence Slave 1 and
Slave 2 become highly available shards (present on both servers).

The search requests are served by a virtual coordinator (can hit slave 1
core or slave 2 core directly with shards parameter) to combine results from
Slave 1 and Slave 2.
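For reference, the "virtual coordinator" query in this setup would look something like the following (host names, ports, and core names are illustrative):

```
curl 'http://server1:8080/solr/slave1/select?q=*:*&shards=server1:8080/solr/slave1,server2:8080/solr/slave2'
```

Any core can act as the coordinator; the shards parameter just tells it which cores to fan the query out to and merge results from.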

I haven't done the actual implementation yet. Just post here so to hear any
suggestion/recommendation/pitfalls/gotchas on this setup from experts.

I'd very much appreciate your attention.

Cheers,

Trung



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Sharding-and-Replication-setup-tp4001642.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SOLR3.6:Field Collapsing/Grouping throws OOM

2012-08-16 Thread prosens
"*Solr's sorting reads all the unique values into memory whether or not they
satisfy the query...*"
Is that not an overhead for any query? Any reason why it is done on the
entire result set and not just on the query's results? Is there a way to
avoid this?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/SOLR3-6-Field-Collapsing-Grouping-throws-OOM-tp4001055p4001640.html
Sent from the Solr - User mailing list archive at Nabble.com.


solr 4 degraded behavior failure

2012-08-16 Thread Buttler, David
Hi all,
I am testing out the cloud features in Solr 4, and I have an observation about 
the behavior under failure.

Following the cloud tutorial, I set up a collection with 2 shards.  I started 4 
servers (so each shard is replicated twice).  I added the test documents, and 
everything works fine.  If I kill one or two servers, everything continues to 
work.  However, when three servers are killed, zero results are returned.  This 
is an improvement over previous versions of the cloud branch where having 
missing shards would result in an error, but I would have expected fewer 
results rather than zero results.

It turns out that there is a parameter that can be added to a query to get 
degraded results, but it is not described on the Solr cloud page.  It is on the 
DistributedSearch page, but it is poorly defined, and difficult to locate 
starting from the cloud page.

The way to get degraded results is to append:
shards.tolerant=true
to your Solr query.
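For example, against the standard example setup:

```
http://localhost:8983/solr/collection1/select?q=*:*&shards.tolerant=true
```

If you don't want to pass it on every request, it should also be possible to set it as a default on the request handler in solrconfig.xml, like any other request parameter (a sketch, not verified against 4.0):

```
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="shards.tolerant">true</str>
  </lst>
</requestHandler>
```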

Dave






RE: solr 4 degraded behavior failure

2012-08-16 Thread Buttler, David
Is there a way to make the shards.tolerant=true behavior the default behavior?

-Original Message-
From: Buttler, David [mailto:buttl...@llnl.gov] 
Sent: Thursday, August 16, 2012 11:01 AM
To: solr-user@lucene.apache.org
Subject: solr 4 degraded behavior failure

Hi all,
I am testing out the cloud features in Solr 4, and I have an observation about 
the behavior under failure.

Following the cloud tutorial, I set up a collection with 2 shards.  I started 4 
servers (so each shard is replicated twice).  I added the test documents, and 
everything works fine.  If I kill one or two servers, everything continues to 
work.  However, when three servers are killed, zero results are returned.  This 
is an improvement over previous versions of the cloud branch where having 
missing shards would result in an error, but I would have expected fewer 
results rather than zero results.

It turns out that there is a parameter that can be added to a query to get 
degraded results, but it is not described on the Solr cloud page.  It is on the 
DistributedSearch page, but it is poorly defined, and difficult to locate 
starting from the cloud page.

The way to get degraded results is to append:
shards.tolerant=true
to your Solr query.

Dave






solrcloud/solr4: how to specify a new configuration to use, superseding previous configs w/o deleting‏

2012-08-16 Thread Zulu Chas
I am trying to set up a solrcloud/solr4.0.0-beta with multiple cores, multiple 
shards, and a separate zookeeper process, but I think the exact details are 
less important than the method of bootstrapping zookeeper with the configs. 
Since I'm running zookeeper separately, I'd like to be able to version 
solrcloud configs and drop them into zookeeper over time without necessarily 
having to overwrite or immediately delete old ones.
After starting zookeeper using zkServer.sh start, I've been initially 
bootstrapping it with:

java -Dsolr.solr.home=instance1 -Djetty.port=8501 
-Djetty.home=$SOLR_HOME/solrcloud 
-Dbootstrap_confdir=$SOLR_HOME/solrcloud/instance1/core0/conf 
-Dcollection.configName=A01 -DnumShards=2 -DzkHost=localhost:2181 -jar start.jar

and then starting a second shard with:

java -Dsolr.solr.home=instance2 -Djetty.port=8502 
-Djetty.home=$SOLR_HOME/solrcloud -Dcollection.configName=A01 
-DzkHost=localhost:2181 -jar start.jar
hoping that "-Dcollection.configName=A01" would do what I think it should do :)

But if I then iterate on the "A01" config and bootstrap a new config "A02" 
(stopping all instances of solr first) but don't delete anything from zookeeper:

java -Dsolr.solr.home=instance1 -Djetty.port=8501 
-Djetty.home=$SOLR_HOME/solrcloud 
-Dbootstrap_confdir=$SOLR_HOME/solrcloud/instance1/core0/conf 
-Dcollection.configName=A02 -DnumShards=2 -DzkHost=localhost:2181 -jar start.jar
I see it upload the configs:

Aug 16, 2012 1:05:47 PM org.apache.solr.common.cloud.SolrZkClient makePath
INFO: makePath: /configs/A02/admin-extra.html
Aug 16, 2012 1:05:47 PM org.apache.solr.common.cloud.SolrZkClient makePath
INFO: makePath: /configs/A02/admin-extra.menu-bottom.html
...
...so I had hoped it would use the given one in configName.  Things seem to run 
fine, but then if I do delete something from A01 to prove that it's using the 
new config, I start to see error messages about things missing from A01 ... 
showing me it's still using the old config:

(delete /configs/A01/admin-extra.html in zkCli)
Aug 16, 2012 2:04:01 PM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: Can not find: /configs/A01/admin-extra.html
    at org.apache.solr.handler.admin.ShowFileRequestHandler.showFromZooKeeper(ShowFileRequestHandler.java:155)
    at org.apache.solr.handler.admin.ShowFileRequestHandler.handleRequestBody(ShowFileRequestHandler.java:120)
(on the console running instance1)

I'm sure I'm just missing something simple here, but I didn't see any reference 
online about re-bootstrapping.  I also tried adding -Dbootstrap_conf=true 
[1] to each but that didn't make a difference.
-Chaz
[1] 
http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3cbb7ad9bf-389b-4b94-8c1b-bbfc4028a...@gmail.com%3E

  

Re: Duplicated facet counts in solr 4 beta: user error

2012-08-16 Thread Erick Erickson
I see you updated the Wiki, thanks!

Erick

On Wed, Aug 15, 2012 at 9:34 AM, Erick Erickson  wrote:
> No problem, and thanks for posting the resolution
>
> If you have the time and energy, anyone can edit the Wiki if you
> create a logon, so any clarification you'd like to provide to keep
> others from having this problem would be most welcome!
>
> Best
> Erick
>
> On Tue, Aug 14, 2012 at 6:13 PM, Buttler, David  wrote:
>> Here are my steps:
>>
>> 1)  Download apache-solr-4.0.0-BETA
>>
>> 2)  Untar into a directory
>>
>> 3)  cp -r example example2
>>
>> 4)  cp -r example exampleB
>>
>> 5)  cp -r example example2B
>>
>> 6)  cd example;  java -Dbootstrap_confdir=./solr/collection1/conf 
>> -Dcollection.configName=myconf -DzkRun -DnumShards=2 -jar start.jar
>>
>> 7)  cd example2; java -Djetty.port=7574 -DzkHost=localhost:9983 -jar 
>> start.jar
>>
>> 8)  cd exampleB; java -Djetty.port=8900 -DzkHost=localhost:9983 -jar 
>> start.jar
>>
>> 9)  cd example2B; java -Djetty.port=7500 -DzkHost=localhost:9983 -jar 
>> start.jar
>>
>> 10)   cd example/exampledocs; java 
>> -Durl=http://localhost:8983/solr/collection1/update -jar post.jar *.xml
>>
>> http://localhost:8983/solr/collection1/select?q=*:*&wt=xml&fq=cat:%22electronics%22
>> 14 results returned
>>
>> This is correct.  Let's try a slightly more circuitous route by running 
>> through the solr tutorial first
>>
>>
>> 1)  Download apache-solr-4.0.0-BETA
>>
>> 2)  Untar into a directory
>>
>> 3)  cd example; java  -jar start.jar
>>
>> 4)  cd example/exampledocs; java 
>> -Durl=http://localhost:8983/solr/collection1/update -jar post.jar *.xml
>>
>> 5)  kill jetty server
>>
>> 6)  cp -r example example2
>>
>> 7)  cp -r example exampleB
>>
>> 8)  cp -r example example2B
>>
>> 9)  cd example;  java -Dbootstrap_confdir=./solr/collection1/conf 
>> -Dcollection.configName=myconf -DzkRun -DnumShards=2 -jar start.jar
>>
>> 10)   cd example2; java -Djetty.port=7574 -DzkHost=localhost:9983 -jar 
>> start.jar
>>
>> 11)   cd exampleB; java -Djetty.port=8900 -DzkHost=localhost:9983 -jar 
>> start.jar
>>
>> 12)   cd example2B; java -Djetty.port=7500 -DzkHost=localhost:9983 -jar 
>> start.jar
>>
>> 13)   cd example/exampledocs; java 
>> -Durl=http://localhost:8983/solr/collection1/update -jar post.jar *.xml
>>
>> With the same query as above, 22 results are returned.
>>
>> Looking at this, it is somewhat obvious that what is happening is that the 
>> index was copied over from the tutorial and was not cleaned up before 
>> running the cloud examples.
>>
>> Adding the debug=query parameter to the query URL produces the following:
>> 
>> *:*
>> *:*
>> MatchAllDocsQuery(*:*)
>> *:*
>> LuceneQParser
>> 
>> cat:"electronics"
>> 
>> 
>> cat:electronics
>> 
>> 
>>
>> So, Erick's diagnosis is correct: pilot error.  However, the straightforward 
>> path through the tutorial and on to Solr cloud makes it easy to make this 
>> mistake. Maybe a small warning on the Solr cloud page would help?
>>
>> Now, running a delete operations fixes things:
>> cd example/exampledocs;
>> java -Dcommit=false -Ddata=args -jar post.jar 
>> "<delete><query>*:*</query></delete>"
>> causes the number of results to be zero.  So, let's reload the data:
>> java -Durl=http://localhost:8983/solr/collection1/update -jar post.jar *.xml
>> now the number of results for our query
>> http://localhost:8983/solr/collection1/select?q=*:*&wt=xml&fq=cat:"electronics"
>> is back to the correct 14 results.
>>
>> Dave
>>
>> PS apologizes for hijacking the thread earlier.


How to implement Index time boost via DIH?

2012-08-16 Thread bbarani
Hi,

I am trying to implement index-time boosting via DIH, but I want to boost at
the field level rather than the document level. Ex:

Consider there are 2 fields in a document: LastUpdatedBy and fullName. After
indexing both fields, I copy the data from both fields into a default search
field. Now when a search happens on the default search field I want
fullName to get more priority than LastUpdatedBy even though both might
contain the search keyword.

Note: I have tried the dismax handler before for query-time boosting; now I
want to try index-time boosting using DIH.
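For context, field-level index-time boost is expressed in Solr's XML update format as an attribute on the field element (values here are illustrative). Whether DIH exposes a way to set this per field - as opposed to the $docBoost pseudo-field it provides for document-level boost - is the open question:

```
<add>
  <doc>
    <field name="LastUpdatedBy">admin</field>
    <field name="fullName" boost="2.0">John Smith</field>
  </doc>
</add>
```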

Thanks!!!

Thanks,
BB



--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-implement-Index-time-boost-via-DIH-tp4001720.html
Sent from the Solr - User mailing list archive at Nabble.com.


Solr4.0 BETA - Error when StempelPolishStemFilterFactory

2012-08-16 Thread sausarkar
I just upgraded to Solr 4.0.0-BETA and it seems there is a problem with the
StempelPolishStemFilterFactory: it cannot find a resource. It seems that the
bug was introduced in the new beta release; this works fine in the alpha
release.

Here is the exception I am seeing in the logs:

SEVERE: null:java.lang.RuntimeException: java.io.IOException: Can't find
resource '/org/apache/lucene/analysis/pl/stemmer_2.tbl' in classpath or
'solr/collection1/conf/', cwd /apache-solr-4.0.0-BETA/example
at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:116)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:850)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:539)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:360)
at
org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:309)
at
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:106)
at org.eclipse.jetty.servlet.FilterHolder.doStart(FilterHolder.java:114)
at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
at
org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:754)
at
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:258)

Anyone has any clue on this?




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr4-0-BETA-Error-when-StempelPolishStemFilterFactory-tp4001724.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: solrcloud/solr4: how to specify a new configuration to use, superseding previous configs w/o deleting‏

2012-08-16 Thread Mark Miller
Check out some of the new documentation I added today: 
http://wiki.apache.org/solr/SolrCloud/#Command_Line_Util

Collections link with config sets following these rules:

1. When a collection starts, if there is only one config set, the collection 
will link to it. 

2. If a collection starts and it shares the name of a config set, it will link 
to it. 

3. Otherwise, the link has to be created manually - or by specifying it 
explicitly when creating the collection by HTTP API.

bootstrap_conf=true uploads a conf set for each core in solr.xml - and it links 
the conf sets to the collections by creating conf set names that match the 
collection names.

If a collection has been linked to a config set, you have to manually update it 
to link it to a new config.

You can do this with the above Command Line Util tool - one of the commands 
links collections to config sets.
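The linkconfig invocation looks roughly like this (classpath elided; see the wiki page above for the exact jars and options - this is a sketch, not a verified command line):

```
java -classpath <solr libs> org.apache.solr.cloud.ZkCLI \
  -cmd linkconfig -collection collection1 -confname A02 \
  -zkhost localhost:2181
```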

You can even make the links before starting Solr for the first time - when you 
do start Solr, as collections are started, it will find and use the link info 
you have created with the tool when names match.

If you change the link while Solr is running you would want to reload all the 
cores in your collection to pick up the new config - you can do this with the 
Collections API: 
http://wiki.apache.org/solr/SolrCloud/#Managing_collections_via_the_Collections_API

Mark Miller
lucidworks.com

On Aug 16, 2012, at 2:24 PM, Zulu Chas  wrote:
> 
> I am trying to set up a solrcloud/solr4.0.0-beta with multiple cores, 
> multiple shards, and a separate zookeeper process, but I think the exact 
> details are less important than the method of bootstrapping zookeeper with 
> the configs.Since I'm running zookeeper separately, I'd like to be able to 
> version solrcloud configs and drop them into zookeeper over time without 
> necessarily having to overwrite or immediately delete old ones.
> After starting zookeeper using zkServer.sh start, I've been initially 
> bootstrapping it with:
> 
> java -Dsolr.solr.home=instance1 -Djetty.port=8501 
> -Djetty.home=$SOLR_HOME/solrcloud 
> -Dbootstrap_confdir=$SOLR_HOME/solrcloud/instance1/core0/conf 
> -Dcollection.configName=A01 -DnumShards=2 -DzkHost=localhost:2181 -jar 
> start.jar
> 
> and then starting a second shard with:
> 
> java -Dsolr.solr.home=instance2 -Djetty.port=8502 
> -Djetty.home=$SOLR_HOME/solrcloud -Dcollection.configName=A01 
> -DzkHost=localhost:2181 -jar start.jar
> hoping that "-Dcollection.configName=A01" would do what I think it should do 
> :)
> 
> But if I then iterate on the "A01" config and bootstrap a new config "A02" 
> (stopping all instances of solr first) but don't delete anything from 
> zookeeper:
> 
> java -Dsolr.solr.home=instance1 -Djetty.port=8501 
> -Djetty.home=$SOLR_HOME/solrcloud 
> -Dbootstrap_confdir=$SOLR_HOME/solrcloud/instance1/core0/conf 
> -Dcollection.configName=A02 -DnumShards=2 -DzkHost=localhost:2181 -jar 
> start.jar
> I see it upload the configs:
> 
> Aug 16, 2012 1:05:47 PM org.apache.solr.common.cloud.SolrZkClient 
> makePathINFO: makePath: /configs/A02/admin-extra.htmlAug 16, 2012 1:05:47 PM 
> org.apache.solr.common.cloud.SolrZkClient makePathINFO: makePath: 
> /configs/A02/admin-extra.menu-bottom.html
> ...
> ...so I had hoped it would use the given one in configName.  Things seem to 
> run fine, but then if I do delete something from A01 to prove that it's using 
> the new config, I start to see error messages about things missing from A01 
> ... showing me it's still using the old config:
> 
> (delete /configs/A01/admin-extra.html in zkCli)Aug 16, 2012 2:04:01 PM 
> org.apache.solr.common.SolrException logSEVERE: 
> org.apache.solr.common.SolrException: Can not find: 
> /configs/A01/admin-extra.html  at 
> org.apache.solr.handler.admin.ShowFileRequestHandler.showFromZooKeeper(ShowFileRequestHandler.java:155)
>   at 
> org.apache.solr.handler.admin.ShowFileRequestHandler.handleRequestBody(ShowFileRequestHandler.java:120)on
>  the console running instance1
> 
> I'm sure I'm just missing something simple here, but I didn't see any 
> reference online about re-bootstrapping.  I also tried this adding 
> -Dbootstrap_conf=true [1] to each but that didn't make a difference.
> -Chaz
> [1] 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3cbb7ad9bf-389b-4b94-8c1b-bbfc4028a...@gmail.com%3E
> 
> 















Re: solrcloud/solr4: how to specify a new configuration to use, superseding previous configs w/o deleting‏

2012-08-16 Thread Mark Miller
Another clarification: -Dbootstrap_confdir=./solr/collection1/conf 
-Dcollection.configName=myconf creates no actual link. It simply uploads a 
config set from a dir and calls it myconf. In the example we count on the rule 
of, if there is only one config set a new collection will be linked to it.

On Aug 16, 2012, at 5:45 PM, Mark Miller  wrote:

> Check out some of the new documentation I added today: 
> http://wiki.apache.org/solr/SolrCloud/#Command_Line_Util
> 
> Collections link with config sets following these rules:
> 
> 1. When a collection starts, if there is only one config set, the collection 
> will link to it. 
> 
> 2. If a collection starts and it shares the name of a config set, it will 
> link to it. 
> 
> 3. Otherwise, the link has to be created manually - or by specifying it 
> explicitly when creating the collection by HTTP API.
> 
> bootstrap_conf=true uploads a conf set for each core in solr.xml - and it 
> links the conf sets to the collections by creating conf set names that match 
> the collection names.
> 
> If a collection has been linked to a config set, you have to manually update 
> it to link it to a new config.
> 
> You can do this with the above Command Line Util tool - one of the commands 
> links collections to config sets.
> 
> You can even make the links before starting Solr for the first time - when 
> you do start Solr, as collections are started, it will find and use the link 
> info you have created with the tool when names match.
> 
> If you change the link while Solr is running you would want to reload all the 
> cores in your collection to pick up the new config - you can do this with the 
> Collections API: 
> http://wiki.apache.org/solr/SolrCloud/#Managing_collections_via_the_Collections_API
> 
> Mark Miller
> lucidworks.com
> 
> On Aug 16, 2012, at 2:24 PM, Zulu Chas  wrote:
>> 
>> I am trying to set up a solrcloud/solr4.0.0-beta with multiple cores, 
>> multiple shards, and a separate zookeeper process, but I think the exact 
>> details are less important than the method of bootstrapping zookeeper with 
>> the configs.Since I'm running zookeeper separately, I'd like to be able to 
>> version solrcloud configs and drop them into zookeeper over time without 
>> necessarily having to overwrite or immediately delete old ones.
>> After starting zookeeper using zkServer.sh start, I've been initially 
>> bootstrapping it with:
>> 
>> java -Dsolr.solr.home=instance1 -Djetty.port=8501 
>> -Djetty.home=$SOLR_HOME/solrcloud 
>> -Dbootstrap_confdir=$SOLR_HOME/solrcloud/instance1/core0/conf 
>> -Dcollection.configName=A01 -DnumShards=2 -DzkHost=localhost:2181 -jar 
>> start.jar
>> 
>> and then starting a second shard with:
>> 
>> java -Dsolr.solr.home=instance2 -Djetty.port=8502 
>> -Djetty.home=$SOLR_HOME/solrcloud -Dcollection.configName=A01 
>> -DzkHost=localhost:2181 -jar start.jar
>> hoping that "-Dcollection.configName=A01" would do what I think it should do 
>> :)
>> 
>> But if I then iterate on the "A01" config and bootstrap a new config "A02" 
>> (stopping all instances of solr first) but don't delete anything from 
>> zookeeper:
>> 
>> java -Dsolr.solr.home=instance1 -Djetty.port=8501 
>> -Djetty.home=$SOLR_HOME/solrcloud 
>> -Dbootstrap_confdir=$SOLR_HOME/solrcloud/instance1/core0/conf 
>> -Dcollection.configName=A02 -DnumShards=2 -DzkHost=localhost:2181 -jar 
>> start.jar
>> I see it upload the configs:
>> 
>> Aug 16, 2012 1:05:47 PM org.apache.solr.common.cloud.SolrZkClient 
>> makePathINFO: makePath: /configs/A02/admin-extra.htmlAug 16, 2012 1:05:47 PM 
>> org.apache.solr.common.cloud.SolrZkClient makePathINFO: makePath: 
>> /configs/A02/admin-extra.menu-bottom.html
>> ...
>> ...so I had hoped it would use the given one in configName.  Things seem to 
>> run fine, but then if I do delete something from A01 to prove that it's 
>> using the new config, I start to see error messages about things missing 
>> from A01 ... showing me it's still using the old config:
>> 
>> (delete /configs/A01/admin-extra.html in zkCli)Aug 16, 2012 2:04:01 PM 
>> org.apache.solr.common.SolrException logSEVERE: 
>> org.apache.solr.common.SolrException: Can not find: 
>> /configs/A01/admin-extra.html at 
>> org.apache.solr.handler.admin.ShowFileRequestHandler.showFromZooKeeper(ShowFileRequestHandler.java:155)
>>   at 
>> org.apache.solr.handler.admin.ShowFileRequestHandler.handleRequestBody(ShowFileRequestHandler.java:120)on
>>  the console running instance1
>> 
>> I'm sure I'm just missing something simple here, but I didn't see any 
>> reference online about re-bootstrapping.  I also tried this adding 
>> -Dbootstrap_conf=true [1] to each but that didn't make a difference.
>> -Chaz
>> [1] 
>> http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3cbb7ad9bf-389b-4b94-8c1b-bbfc4028a...@gmail.com%3E
>> 
>>
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 

- Mark Miller
lucidimagination.com













splitOnCaseChange not working?

2012-08-16 Thread vod
Hi,

I'm getting unexpected results and hoping someone can help. My schema.xml
has splitOnCaseChange="1" for the field I'm searching on (both index &
query), and the default search behavior is "OR".

I have a field with the word "Airline" indexed. When I search for "Airline"
I get the match. When I search for "Airline Alias", I get the match (as
expected). However, when I search for "AirlineAlias", I am not getting a
match. I was expecting the splitOnCaseChange property to separate out the
term AirlineAlias into the 2 base words. If that were happening,
then it should be finding the match on "Airline" (i.e. it should be the
exact same query as "Airline Alias").

Is my understanding correct? If so, any ideas on why I would not be getting
the correct search results?

I have copied the relevant sections from the schema.xml file below.

Thanks in advance for the help.
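(Editorial note: the schema excerpt did not survive the list archive. For context, splitOnCaseChange is typically an attribute on a WordDelimiterFilterFactory entry in the analyzer chain, something like the following illustrative reconstruction - not the poster's actual config:)

```
<filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
        generateNumberParts="1" catenateWords="1" catenateNumbers="1"
        catenateAll="0" splitOnCaseChange="1"/>
```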






















...







--
View this message in context: 
http://lucene.472066.n3.nabble.com/splitOnCaseChange-not-working-tp4001708.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Running out of memory

2012-08-16 Thread Amit Nithian
I am debugging an out of memory error myself and a few suggestions:
1) Are you looking at your search logs around the time of the memory
error? In my case, I found a few bad queries requesting a ton of rows
(basically the whole index's worth, which I think is an error somewhere
in our app - I just have to find it) which happened close to the OOM error
being thrown.
2) Do you have Solr hooked up to something like NewRelic/AppDynamics
to see the cache usage in real time? Maybe as was suggested, tuning
down or eliminating low used caches could help.
3) Are you ensuring that you aren't setting "stored=true" on fields
that don't need it? This will increase the index size and possibly the
cache size if lazy loading isn't enabled (to be honest, this part I am
a bit unclear of since I haven't had much experience with this
myself).

Thanks
Amit

On Mon, Aug 13, 2012 at 11:37 AM, Jon Drukman  wrote:
> On Sun, Aug 12, 2012 at 12:31 PM, Alexey Serba  wrote:
>
>> > It would be vastly preferable if Solr could just exit when it gets a
>> memory
>> > error, because we have it running under daemontools, and that would cause
>> > an automatic restart.
>> -XX:OnOutOfMemoryError="<cmd args>; <cmd args>"
>> Run user-defined commands when an OutOfMemoryError is first thrown.
>>
>> > Does Solr require the entire index to fit in memory at all times?
>> No.
>>
>> But it's hard to say about your particular problem without additional
>> information. How often do you commit? Do you use faceting? Do you sort
>> by Solr fields and if yes what are those fields? And you should also
>> check caches.
>>
>
> I upgraded to solr-3.6.1 and an extra large amazon instance (15GB RAM) so
> we'll see if that helps.  So far no out of memory errors.
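To tie this back to the restart question: under daemontools, the -XX:OnOutOfMemoryError hook quoted above can simply kill the process so the supervisor restarts it. A sketch (HotSpot expands %p to the JVM's pid; the start.jar target is from the standard Solr example):

```
java -XX:OnOutOfMemoryError="kill -9 %p" -jar start.jar
```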


Re: splitOnCaseChange not working?

2012-08-16 Thread Jack Krupansky
Just set autoGeneratePhraseQueries="false" on the "text_en_splitting" field 
type.


The current setting treats AirlineAlias as the quoted phrase "Airline 
Alias".
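Concretely, that is an attribute on the fieldType element itself (positionIncrementGap shown for completeness; the analyzer chain inside is unchanged):

```
<fieldType name="text_en_splitting" class="solr.TextField"
           positionIncrementGap="100" autoGeneratePhraseQueries="false">
  ...
</fieldType>
```

Since this only affects query parsing, reloading the core and re-running the query should suffice; re-indexing is not required.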


-- Jack Krupansky

-Original Message- 
From: vod

Sent: Thursday, August 16, 2012 4:21 PM
To: solr-user@lucene.apache.org
Subject: splitOnCaseChange not working?
Hi,

I'm getting unexpected results and hoping someone can help. My schema.xml
has splitOnCaseChange="1" for the field I'm searching on (both index &
query), and the default search behavior is "OR".

I have a field with the word "Airline" indexed. When I search for "Airline"
I get the match. When I search for "Airline Alias", I get the match (as
expected). However, when I search for "AirlineAlias", I am not getting a
match. I was expecting the splitOnCaseChange property to separate out the
term AirlineAlias into the 2 base words. If that were happening,
then it should be finding the match on "Airline" (i.e. it should be the
exact same query as "Airline Alias").

Is my understanding correct? If so, any ideas on why I would not be getting
the correct search results?

I have copied the relevant sections from the schema.xml file below.

Thanks in advance for the help.






















...







--
View this message in context: 
http://lucene.472066.n3.nabble.com/splitOnCaseChange-not-working-tp4001708.html
Sent from the Solr - User mailing list archive at Nabble.com. 



Re: splitOnCaseChange not working?

2012-08-16 Thread vod
Tested the change and it is indeed working.
Thank you for the quick response.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/splitOnCaseChange-not-working-tp4001708p4001733.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Solr4.0 BETA - Error when StempelPolishStemFilterFactory

2012-08-16 Thread Steven A Rowe
Hi sausarkar,

You've probably been hit by the local configuration equivalent of a known 
issue - the Solr example configuration directory added a path segment, so 
such resource references have to be changed to include an extra "../".

Steve

-Original Message-
From: sausarkar [mailto:sausar...@ebay.com] 
Sent: Thursday, August 16, 2012 5:33 PM
To: solr-user@lucene.apache.org
Subject: Solr4.0 BETA - Error when StempelPolishStemFilterFactory

I just upgraded to Solr 4.0.0-BETA and it seems there is a problem with the
StempelPolishStemFilterFactory: it cannot find a resource. It seems that the
bug was introduced in the new beta release; this works fine in the alpha
release.

Here is the exception I am seeing in the logs:

SEVERE: null:java.lang.RuntimeException: java.io.IOException: Can't find
resource '/org/apache/lucene/analysis/pl/stemmer_2.tbl' in classpath or
'solr/collection1/conf/', cwd /apache-solr-4.0.0-BETA/example
at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:116)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:850)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:539)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:360)
at
org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:309)
at
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:106)
at org.eclipse.jetty.servlet.FilterHolder.doStart(FilterHolder.java:114)
at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:59)
at
org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:754)
at
org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:258)

Anyone has any clue on this?




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr4-0-BETA-Error-when-StempelPolishStemFilterFactory-tp4001724.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: MySQL Exception: Communications link failure WITH DataImportHandler

2012-08-16 Thread Alexey Serba
My memory is vague, but I think I've seen something similar with older
versions of Solr.

Is it possible that you have a large database import and a big segment
merge happens in the middle, blocking the DIH indexing process (and the
reading of records from the database as well)? That would mean a long
period of inactivity on the connection to the db server, and a timeout as
a result. If this is the case then you can either increase the timeout
limit on the db server (I don't remember the actual parameter) or upgrade
Solr to a newer version that doesn't have such long pauses (4.0 beta?).
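If that merge-pause theory fits, one client-side knob is the Connector/J URL itself: it accepts connectTimeout and socketTimeout properties (milliseconds, 0 = no limit). A rough sketch of building such a URL for the DIH dataSource; the host and database names here are hypothetical, not from this thread:

```java
public class DihJdbcUrl {

    /**
     * Build a MySQL Connector/J URL with explicit client-side timeouts.
     * connectTimeout bounds the initial TCP connect; socketTimeout bounds
     * each read, so socketTimeout=0 keeps long idle stretches (e.g. while
     * a segment merge blocks indexing) from being cut off by the driver.
     */
    static String buildUrl(String host, String db, long connectMs, long socketMs) {
        return "jdbc:mysql://" + host + "/" + db
                + "?connectTimeout=" + connectMs
                + "&socketTimeout=" + socketMs;
    }

    public static void main(String[] args) {
        // prints jdbc:mysql://dbhost/solrimport?connectTimeout=10000&socketTimeout=0
        System.out.println(buildUrl("dbhost", "solrimport", 10000, 0));
    }
}
```

On the server side, the parameter that kills idle connections is MySQL's wait_timeout setting.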

On Thu, Aug 16, 2012 at 12:37 PM, Jienan Duan  wrote:
> Hi all:
> I have resolved this problem by configuring a JNDI datasource in Tomcat.
> But I still want to find out why it throws an exception in DIH when I
> configure the datasource in data-configure.xml rather than as a JNDI resource.
>
> Regards.
>
> 2012/8/16 Jienan Duan 
>
>> Hi all:
>> I'm using DataImportHandler to load data from MySQL.
>> It works fine on my development machine and in the online environment,
>> but I got an exception in the test environment:
>>
>>> Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
>>> Communications link failure
>>>
>>> The last packet sent successfully to the server was 0 milliseconds ago.
>>> The driver has not received any packets from the server.
>>>
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>> at com.mysql.jdbc.Util.handleNewInstance(Util.java:406)
>>> at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1074)
>>> at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:343)
>>> at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2132)
>>> ... 26 more
>>>
>>> Caused by: java.net.ConnectException: Connection timed out
>>> at java.net.PlainSocketImpl.socketConnect(Native Method)
>>> at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
>>> at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
>>> at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
>>> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
>>> at java.net.Socket.connect(Socket.java:529)
>>> at java.net.Socket.connect(Socket.java:478)
>>> at java.net.Socket.<init>(Socket.java:375)
>>> at java.net.Socket.<init>(Socket.java:218)
>>> at com.mysql.jdbc.StandardSocketFactory.connect(StandardSocketFactory.java:253)
>>> at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:292)
>>> ... 27 more
>>>
>> This makes me confused, because the test env and online env are almost the
>> same: Tomcat runs on a Linux server with JDK 6, and MySQL 5 runs on another.
>> Even a simple JDBC test class I wrote works, and a JSP file with JDBC code
>> also works. Only DataImportHandler fails.
>> I'm trying to read the Solr source code and found that Solr seems to have
>> its own ClassLoader. I'm not sure if it goes wrong with Tomcat under some
>> specific configuration.
>> Does anyone know how to fix this problem? Thank you very much.
>>
>> Best Regards.
>>
>> Jienan Duan
>>
>> --
>> --
>> Taking no detours is itself the shortcut.
>> http://www.jnan.org/
>>
>>
>
>
> --
> --
> Taking no detours is itself the shortcut.
> http://www.jnan.org/


RE: Solr4.0 BETA - Error when StempelPolishStemFilterFactory

2012-08-16 Thread sausarkar
No, I tried that. Solr is finding and loading all the other contrib jars;
it is only complaining about the Polish one, so it seems like this is a bug.





Re: Custom Geocoder with Solr and Autosuggest

2012-08-16 Thread Alexey Serba
> My first decision was to divide SOLR into two cores, since I am already
> using SOLR as my search server. One core would be for the main search of the
> site and one for the geocoding.
Correct. And you can even use that location index/collection to extract
locations from unstructured documents - i.e. if you don't have a separate
field with geographical names in your corpus (or the location data is just
not good enough compared to what can be mined from the documents).

> My second decision is to store the name data in a normalised state, some
> examples are shown below:
> London, England
> England
> Swindon, Wiltshire, England
Yes, you can also add postcodes/outcodes there. And I would add an
additional field "type" with values region/county/town/postcode/outcode.
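As a sketch, the location core's schema.xml could declare those fields along these lines (all field and type names here are hypothetical; assume `location` is backed by solr.LatLonType):

```xml
<!-- schema.xml sketch for the geocoding core -->
<field name="name"   type="text_general" indexed="true" stored="true"/>
<field name="type"   type="string"       indexed="true" stored="true"/>
<field name="coords" type="location"     indexed="true" stored="true"/>
```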

> The third decision was to return “autosuggest” results, for example when the
> user types “Lond” I would like to suggest “London, England”. For this to
> work I think it makes sense to return up to 5 results via JSON based on
> relevancy and have these displayed under the search box.
Yeah, you might want to boost cities more than towns (I'm sure there are
plenty of ambiguous terms), use some kind of GeoIP service, and add other
scoring factors.

> My fourth decision is that when the user actually hits the “search” button
> on the location field, SOLR is again queries and returns the most relevant
> result, including the co-ordinates which are stored.
You can also have special logic to decide whether you want to use spatial
search or whether a simple textual match would be better. E.g. you have
"England" in your example. It doesn't sound practical to return
coordinates and use spatial search for that case, right?
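A rough sketch of that branching as Solr queries, with hypothetical field names and coordinates ({!geofilt} is Solr's standard spatial filter syntax):

```
# Town hit ("London"): spatial filter around the stored point
q=*:*&fq={!geofilt sfield=coords pt=51.5074,-0.1278 d=20}

# Region hit ("England"): plain textual filter, no spatial search
q=*:*&fq=region:England
```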

HTH,
Alexey


RE: Solr4.0 BETA - Error when StempelPolishStemFilterFactory

2012-08-16 Thread Steven A Rowe
I can reproduce - I agree, this seems like a bug.

I've opened an issue: https://issues.apache.org/jira/browse/SOLR-3737

Thanks for reporting!

Steve
 
-Original Message-
From: sausarkar [mailto:sausar...@ebay.com] 
Sent: Thursday, August 16, 2012 6:42 PM
To: solr-user@lucene.apache.org
Subject: RE: Solr4.0 BETA - Error when StempelPolishStemFilterFactory

No, I tried that. Solr is finding and loading all the other contrib jars;
it is only complaining about the Polish one, so it seems like this is a bug.


Re: Solr4.0 BETA - Error when StempelPolishStemFilterFactory

2012-08-16 Thread sausarkar
Thank you.

From: "steve_rowe [via Lucene]" <ml-node+s472066n4001741...@n3.nabble.com>
Date: Thursday, August 16, 2012 5:05 PM
To: "Sarkar, Sauvik" <sausar...@ebay.com>
Subject: RE: Solr4.0 BETA - Error when StempelPolishStemFilterFactory

I can reproduce - I agree, this seems like a bug.

I've opened an issue: https://issues.apache.org/jira/browse/SOLR-3737

Thanks for reporting!

Steve

-Original Message-
From: sausarkar [mailto:[hidden email]]
Sent: Thursday, August 16, 2012 6:42 PM
To: [hidden email]
Subject: RE: Solr4.0 BETA - Error when StempelPolishStemFilterFactory

No, I tried that. Solr is finding and loading all the other contrib jars;
it is only complaining about the Polish one, so it seems like this is a bug.








Re: MySQL Exception: Communications link failure WITH DataImportHandler

2012-08-16 Thread Gora Mohanty
On 16 August 2012 14:07, Jienan Duan  wrote:
> Hi all:
> I have resolved this problem by configuring a JNDI datasource in Tomcat.
> But I still want to find out why it throws an exception in DIH when I
> configure the datasource in data-configure.xml rather than as a JNDI resource.
[...]

When does this error happen? Immediately upon connection,
or randomly at some point in time? If the former, double-check
your MySQL connection string.

With Solr 1.4, we occasionally saw DIH connection failures
to the database, but those were rare, probably attributable
to network issues, and largely went away when:
  (a) we first moved to a different JDBC driver, and
  (b) we moved from SQL Server to MySQL.
Is it possible that there are network issues between your live
Solr and the database server? Are you monitoring the load
on the database server, and the volume of network traffic?

Regards,
Gora


Re: SOLR3.6:Field Collapsing/Grouping throws OOM

2012-08-16 Thread Tirthankar Chatterjee
Awesome, you rock!!! Thanks to Eric too for coming up with the ideas. But
honestly I am more interested in understanding this EFF concept. I will do my
reading and then come back with questions if I don't get it.

External File Fields and function queries are something I have not evaluated
yet, and I am not too sure whether they will all work in scaled-up setups
with millions of objects inside Solr.



Re: MySQL Exception: Communications link failure WITH DataImportHandler

2012-08-16 Thread Jienan Duan
It happens immediately upon connection; the JDBC string is correct.
This problem only happens when DIH tries to get a connection to MySQL; the
network and the database server are fine.
So I guess it is caused by some security configuration on the server, and
that config has blocked the connection.
But other apps' JDBC connections and Tomcat's JNDI datasource are unaffected.
PS: I use Solr 3.5
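For anyone hitting the same thing, the JNDI workaround can be sketched roughly like this, assuming your Solr version's JdbcDataSource supports the jndiName attribute (all resource, host, and credential names below are placeholders):

```xml
<!-- Tomcat context.xml: define the datasource -->
<Resource name="jdbc/solrdb" auth="Container" type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://dbhost/solrimport"
          username="solr" password="secret"/>

<!-- data-configure.xml: reference it from DIH instead of driver/url -->
<dataSource type="JdbcDataSource" jndiName="java:comp/env/jdbc/solrdb"/>
```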

Regards.

2012/8/17 Gora Mohanty 

> On 16 August 2012 14:07, Jienan Duan  wrote:
> > Hi all:
> > I have resolved this problem by configuring a JNDI datasource in Tomcat.
> > But I still want to find out why it throws an exception in DIH when I
> > configure the datasource in data-configure.xml rather than as a JNDI resource.
> [...]
>
> When does this error happen? Immediately upon connection,
> or randomly at some point in time? If the former, double-check
> your MySQL connection string.
>
> With Solr 1.4, we occasionally saw DIH connection failures
> to the database, but those were rare, probably attributable
> to network issues, and largely went away when:
>   (a) we first moved to a different JDBC driver, and
>   (b) we moved from SQL Server to MySQL.
> Is it possible that there are network issues between your live
> Solr and the database server? Are you monitoring the load
> on the database server, and the volume of network traffic?
>
> Regards,
> Gora
>



-- 
--
Taking no detours is itself the shortcut.
http://www.jnan.org/


Re: How to implement Index time boost via DIH?

2012-08-16 Thread Jienan Duan
Hi
Why don't you do this at search time? E.g. with the dismax query parser:
q=foo+bar&defType=dismax&qf=fullName^2+lastUpdatedBy^0.5

2012/8/17 bbarani 

> Hi,
>
> I am trying to implement index-time boosting via DIH, but I want to boost at
> the field level rather than the document level. Ex:
>
> Consider there are 2 fields in a document: LastUpdatedBy and fullName. After
> indexing both fields, I am copying the data from both fields into a default
> search field. Now when a search happens on the default search field, I want
> fullName to get more priority than LastUpdatedBy even though both might have
> the search keyword.
>
> Note: I have tried the dismax handler before for runtime boosting; now I
> want to try index-time boosting using DIH.
>
> Thanks!!!
>
> Thanks,
> BB
>
>
>
>



-- 
--
Taking no detours is itself the shortcut.
http://www.jnan.org/