OK, I think I have found it. When starting the 4 Solr instances via
start.jar, I always passed the data directory property

    -Dsolr.data.dir=/home/myuser/data

After removing this it worked fine. What is weird is that all 4 instances
are totally separated, so instance-2 should never conflict with instance-1;
they could also be on totally different physical servers.
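
If I read the example solrconfig.xml right (<dataDir>${solr.data.dir:}</dataDir>),
setting that property globally makes every core on a node write into the very
same index directory, which would explain the write.lock clash. A minimal
sketch of the difference in start commands (paths are illustrative, other
SolrCloud properties such as zkHost omitted):

    # before: all cores on an instance end up sharing one index directory
    java -Dsolr.data.dir=/home/myuser/data -jar start.jar

    # after: drop the property so each core uses its own <instanceDir>/data,
    # or give each instance a distinct path if it must be set explicitly
    java -jar start.jar
    java -Dsolr.data.dir=/home/myuser/data/instance-1 -jar start.jar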

Thanks. Daniel

On Wed, Jun 13, 2012 at 8:10 PM, Mark Miller <markrmil...@gmail.com> wrote:

> That's an interesting data dir location:
> NativeFSLock@/home/myuser/data/index/write.lock
>
> Where are the other data dirs located? Are you sharing one drive or
> something? It looks like something already has a writer lock - are you sure
> another solr instance is not running somehow?
>
> On Wed, Jun 13, 2012 at 11:11 AM, Daniel Brügge <
> daniel.brue...@googlemail.com> wrote:
>
> > BTW: I am running the Solr instances with -Xms512M -Xmx1024M,
> >
> > so not that little memory.
> >
> > Daniel
> >
> > On Wed, Jun 13, 2012 at 4:28 PM, Daniel Brügge <
> > daniel.brue...@googlemail.com> wrote:
> >
> > > Hi,
> > >
> > > I am struggling with creating multiple collections on a 4-instance
> > > SolrCloud setup:
> > >
> > > I have 4 virtual OpenVZ instances, with SolrCloud installed on each,
> > > and a standalone ZooKeeper running on one of them.
> > >
> > > Loading the Solr configuration into ZK works fine.
> > >
> > > Then I start up the 4 instances and everything runs smoothly.
> > >
> > > After that I add one core with a name, e.g. '123'.
> > >
> > > This core is correctly visible on the instance I used to create it.
> > >
> > > It maps like:
> > >
> > > '123' ----> shard1 -----> virtual-instance-1
> > >
> > >
> > > After that I create a core with the same name '123' on the second
> > > instance. It is created, but after a while an exception is thrown and
> > > the cluster state of the newly created core goes to 'recovering':
> > >
> > >
> > >   "123":{"shard1":{
> > >       "virtual-instance-1:8983_solr_123":{
> > >         "shard":"shard1",
> > >         "roles":null,
> > >         "leader":"true",
> > >         "state":"active",
> > >         "core":"123",
> > >         "collection":"123",
> > >         "node_name":"virtual-instance-1:8983_solr",
> > >         "base_url":"http://virtual-instance-1:8983/solr"},
> > >       "virtual-instance-2:8983_solr_123":{
> > >         "shard":"shard1",
> > >         "roles":null,
> > >         "state":"recovering",
> > >         "core":"123",
> > >         "collection":"123",
> > >         "node_name":"virtual-instance-2:8983_solr",
> > >         "base_url":"http://virtual-instance-2:8983/solr"}}},
> > >
> > >
> > > The exception thrown on the first virtual instance is:
> > >
> > > Jun 13, 2012 2:18:40 PM org.apache.solr.common.SolrException log
> > > SEVERE: null:org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/home/myuser/data/index/write.lock
> > >   at org.apache.lucene.store.Lock.obtain(Lock.java:84)
> > >   at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:607)
> > >   at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:58)
> > >   at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:112)
> > >   at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:52)
> > >   at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:364)
> > >   at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:82)
> > >   at org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
> > >   at org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:919)
> > >   at org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:154)
> > >   at org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
> > >   at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
> > >   at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
> > >   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1566)
> > >   at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:442)
> > >   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:263)
> > >   at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1337)
> > >   at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)
> > >   at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
> > >   at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
> > >   at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)
> > >   at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)
> > >   at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)
> > >   at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)
> > >   at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)
> > >   at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
> > >   at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
> > >   at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)
> > >   at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
> > >   at org.eclipse.jetty.server.Server.handle(Server.java:351)
> > >   at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:454)
> > >   at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:47)
> > >   at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:900)
> > >   at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:954)
> > >   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:857)
> > >   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
> > >   at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:66)
> > >   at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:254)
> > >   at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:599)
> > >   at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:534)
> > >   at java.lang.Thread.run(Thread.java:662)
> > >
> > > Is it correct that currently the only way to create multiple cores in
> > > SolrCloud that are distributed over shards is to create cores with the
> > > same name on each individual Solr instance?
> > >
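> > > (For reference, a per-node core creation of that kind boils down to a
> > > CoreAdmin call roughly like the following, run once against each
> > > instance; the host name below is just the example from above:)
> > >
> > >   curl 'http://virtual-instance-2:8983/solr/admin/cores?action=CREATE&name=123&collection=123'
> > >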
> > > Thanks & regards
> > >
> > > Daniel
> > >
> >
>
>
>
> --
> - Mark
>
> http://www.lucidimagination.com
>
