some cores go down during indexing

2013-12-02 Thread Grzegorz Sobczyk
Hi
I have a strange situation. During indexing, some of the cores go down:

ZkController.publish(1017) | publishing core=shops5 state=down
ZkController.register(785) | Register replica - core:shops5 address:
http://host77:8280/solr collection:shops5 shard:shard1
ZkController.register(810) | We are http://host77:8280/solr/shops5/ and
leader is http://host136:8280/solr/shops5/

After that the core doesn't register as working in the cloud, even though it
should (it can process requests).
For now I can only restart Solr to fix the situation. Reloading the core
doesn't help.

Has anyone faced a similar problem?

Some info:
Multiple Solr 4.5.1 instances in SolrCloud
3x Zk

http://host77:8280/solr/#/shops5/replication:
Index Version Gen Size
Master (Searching) 1385955301185 67 127.62 KB
Master (Replicable) 1385955301185 67 -

http://host136:8280/solr/#/shops5/replication:
Index Version Gen Size
Master (Searching) 1385955301218 68 127.65 KB
Master (Replicable) 1385955301218 68 -

http://host141:8280/solr/#/shops5/replication:
Index Version Gen Size
Master (Searching) 1385955301265 68 127.37 KB
Master (Replicable) 1385955301265 68 -

Logs from another core:
ZkController.publish(1017) | publishing core=shops3 state=down
ZkController.register(785) | Register replica - core:shops3 address:
http://host77:8280/solr collection:shops3 shard:shard1
ZkController.register(810) | We are http://host77:8280/solr/shops3/ and
leader is http://host136:8280/solr/shops3/
ZkController.register(841) | No LogReplay needed for core=shops3 baseURL=
http://host77:8280/solr
ZkController.checkRecovery(993) | Core needs to recover:shops3
RecoveryStrategy.run(216) | Starting recovery process. core=shops3
recoveringAfterStartup=false
ZkController.publish(1017) | publishing core=shops3 state=recovering
RecoveryStrategy.doRecovery(356) | Attempting to PeerSync from
http://host136:8280/solr/shops3/ core=shops3 - recoveringAfterStartup=false
RecoveryStrategy.doRecovery(368) | PeerSync Recovery was successful -
registering as Active. core=shops3
ZkController.publish(1017) | publishing core=shops3 state=active
SolrCore.registerSearcher(1812) | [shops3] Registered new searcher
Searcher@45df7f8c main{StandardDirectoryReader(segments_ik:1977:nrt
_n1(4.5.1):C97)}
PeerSync.sync(186) | PeerSync: core=shops3
url=http://host77:8280/solr START replicas=[
http://host136:8280/solr/shops3/] nUpdates=100
PeerSync.handleVersions(346) | PeerSync: core=shops3 url=
http://host77:8280/solr Received 97 versions from host136:8280/solr/shops3/
PeerSync.handleVersions(399) | PeerSync: core=shops3 url=
http://host77:8280/solr Our versions are newer.
ourLowThreshold=1453188869165940736 otherHigh=1453279151809101824
PeerSync.sync(272) | PeerSync: core=shops3
url=http://host77:8280/solr DONE. sync succeeded

The lines above are missing for core shops5.
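For reference, a minimal SolrJ 4.x sketch (the zkHost string below is only a
placeholder for our 3-node ensemble) that dumps the state each shops5 replica
has published in clusterstate.json, which shows whether the core stays marked
"down" even though it answers requests:

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.Replica;
import org.apache.solr.common.cloud.Slice;
import org.apache.solr.common.cloud.ZkStateReader;

public class DumpReplicaStates {
    public static void main(String[] args) throws Exception {
        // Connect through ZooKeeper so we read the same cluster state the nodes see.
        CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
        server.connect();
        ClusterState clusterState = server.getZkStateReader().getClusterState();
        for (Slice slice : clusterState.getSlices("shops5")) {
            for (Replica replica : slice.getReplicas()) {
                // Published state per replica: "down", "recovering", "active", ...
                System.out.println(slice.getName()
                        + " core=" + replica.getStr(ZkStateReader.CORE_NAME_PROP)
                        + " base_url=" + replica.getStr(ZkStateReader.BASE_URL_PROP)
                        + " state=" + replica.getStr(ZkStateReader.STATE_PROP));
            }
        }
        server.shutdown();
    }
}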

-- 
Grzegorz Sobczyk


Re: Timeout Errors while using Collections API

2013-10-17 Thread Grzegorz Sobczyk
Thanks, I'll try upgrading.


On 17 October 2013 15:55, Mark Miller  wrote:

> There was a reload bug in SolrCloud that was fixed in 4.4 -
> https://issues.apache.org/jira/browse/SOLR-4805
>
> Mark
>
> On Oct 17, 2013, at 7:18 AM, Grzegorz Sobczyk  wrote:
>
> > Sorry for the previous spam (something ate my message)
> >
> > I have the same problem but with reload action
> > ENV:
> > - 3x Solr 4.2.1 with 4 cores each
> > - ZK
> >
> > Before error I have:
> > - 14, 2013 5:25:36 AM CollectionsHandler handleReloadAction INFO:
> Reloading
> > Collection : name=products&action=RELOAD
> > - hundreds of (with the same timestamp) 14, 2013 5:25:36 AM
> > DistributedQueue$LatchChildWatcher process INFO: Watcher fired on path:
> > /overseer/collection-queue-work state: SyncConnected type
> > NodeChildrenChanged
> > - 13 times (from 2013 5:25:39 to 5:25:45):
> > -- 14, 2013 5:25:39 AM SolrDispatchFilter handleAdminRequest INFO:
> [admin]
> > webapp=null path=/admin/cores params={action=STATUS&wt=ruby} status=0
> > QTime=2
> > -- 14, 2013 5:25:39 AM SolrDispatchFilter handleAdminRequest INFO:
> [admin]
> > webapp=null path=/admin/cores params={action=STATUS&wt=ruby} status=0
> > QTime=1
> > -- 14, 2013 5:25:39 AM SolrCore execute INFO: [forum] webapp=/solr
> > path=/admin/mbeans params={stats=true&wt=ruby} status=0 QTime=2
> > -- 14, 2013 5:25:39 AM SolrCore execute INFO: [knowledge] webapp=/solr
> > path=/admin/mbeans params={stats=true&wt=ruby} status=0 QTime=2
> > -- 14, 2013 5:25:39 AM SolrCore execute INFO: [products] webapp=/solr
> > path=/admin/mbeans params={stats=true&wt=ruby} status=0 QTime=2
> > -- 14, 2013 5:25:39 AM SolrCore execute INFO: [shops] webapp=/solr
> > path=/admin/mbeans params={stats=true&wt=ruby} status=0 QTime=1
> > - 14, 2013 5:26:21 AM SolrCore execute INFO: [products] webapp=/solr
> > path=/select/ params={q=solrpingquery} hits=0 status=0 QTime=0
> > - 14, 2013 5:26:36 AM DistributedQueue$LatchChildWatcher process INFO:
> > Watcher fired on path: /overseer/collection-queue-work/qnr-000806
> > state: SyncConnected type NodeDeleted
> > - 14, 2013 5:26:36 AM SolrException log SEVERE:
> > org.apache.solr.common.SolrException: reloadcollection the collection
> time
> > out:60s
> > at
> >
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:162)
> > at
> >
> org.apache.solr.handler.admin.CollectionsHandler.handleReloadAction(CollectionsHandler.java:184)
> > at
> >
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:120)
> >
> > What are the possible causes of such behaviour? When is this error thrown?
> > Does anybody have the same issue?
> >
> >
> > On 17 October 2013 13:08, Grzegorz Sobczyk  wrote:
> >
> >>
> >>
> >> On 16 October 2013 11:48, RadhaJayalakshmi <
> >> rlakshminaraya...@inautix.co.in> wrote:
> >>
> >>> Hi,
> >>> My setup is
> >>> Zookeeper ensemble - running with 3 nodes
> >>> Tomcats - 9 Tomcat instances are brought up by registering with
> >>> zookeeper.
> >>>
> >>> Steps :
> >>> 1) I uploaded the Solr configuration files (db_data_config, solrconfig,
> >>> schema XMLs) into ZooKeeper.
> >>> 2) Now, I am trying to create a collection with the Collections API like
> >>> below:
> >>>
> >>>
> >>>
> http://miadevuser001.albridge.com:7021/solr/admin/collections?action=CREATE&name=Schwab_InvACC_Coll&numShards=1&replicationFactor=2&createNodeSet=localhost:7034_solr,localhost:7036_solr&collection.configName=InvestorAccountDomainConfig
> >>>
> >>> Now, when I execute this command, I am getting the following error:
> >>> <int name="status">500</int><int name="QTime">60015</int>
> >>> <str name="msg">createcollection the collection time out:60s</str>
> >>> <str name="trace">org.apache.solr.common.SolrException: createcollection the
> >>> collection time out:60s
> >>>at
> >>>
> >>>
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:175)
> >>>at
> >>>
> >>>
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:156)
> >>>at
> >>>
> >>>
> org.apache.solr.handler.admin.CollectionsHandler.handleCreateAction(CollectionsHan

Re: Timeout Errors while using Collections API

2013-10-17 Thread Grzegorz Sobczyk
Sorry for the previous spam (something ate my message)

I have the same problem, but with the reload action.
ENV:
 - 3x Solr 4.2.1 with 4 cores each
 - ZK

Before the error I have:
- 14, 2013 5:25:36 AM CollectionsHandler handleReloadAction INFO: Reloading
Collection : name=products&action=RELOAD
- hundreds of (with the same timestamp) 14, 2013 5:25:36 AM
DistributedQueue$LatchChildWatcher process INFO: Watcher fired on path:
/overseer/collection-queue-work state: SyncConnected type
NodeChildrenChanged
- 13 times (from 2013 5:25:39 to 5:25:45):
-- 14, 2013 5:25:39 AM SolrDispatchFilter handleAdminRequest INFO: [admin]
webapp=null path=/admin/cores params={action=STATUS&wt=ruby} status=0
QTime=2
-- 14, 2013 5:25:39 AM SolrDispatchFilter handleAdminRequest INFO: [admin]
webapp=null path=/admin/cores params={action=STATUS&wt=ruby} status=0
QTime=1
-- 14, 2013 5:25:39 AM SolrCore execute INFO: [forum] webapp=/solr
path=/admin/mbeans params={stats=true&wt=ruby} status=0 QTime=2
-- 14, 2013 5:25:39 AM SolrCore execute INFO: [knowledge] webapp=/solr
path=/admin/mbeans params={stats=true&wt=ruby} status=0 QTime=2
-- 14, 2013 5:25:39 AM SolrCore execute INFO: [products] webapp=/solr
path=/admin/mbeans params={stats=true&wt=ruby} status=0 QTime=2
-- 14, 2013 5:25:39 AM SolrCore execute INFO: [shops] webapp=/solr
path=/admin/mbeans params={stats=true&wt=ruby} status=0 QTime=1
- 14, 2013 5:26:21 AM SolrCore execute INFO: [products] webapp=/solr
path=/select/ params={q=solrpingquery} hits=0 status=0 QTime=0
- 14, 2013 5:26:36 AM DistributedQueue$LatchChildWatcher process INFO:
Watcher fired on path: /overseer/collection-queue-work/qnr-000806
state: SyncConnected type NodeDeleted
- 14, 2013 5:26:36 AM SolrException log SEVERE:
org.apache.solr.common.SolrException: reloadcollection the collection time
out:60s
at
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:162)
at
org.apache.solr.handler.admin.CollectionsHandler.handleReloadAction(CollectionsHandler.java:184)
at
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:120)

What are the possible causes of such behaviour? When is this error thrown?
Does anybody have the same issue?
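For completeness, the same RELOAD issued through SolrJ - just a sketch, the
base URL below is a placeholder and only the collection name comes from the
logs above. As far as I understand, the 60s is the handler waiting server-side
for the Overseer to finish the queued reload, so the client only ever sees the
timeout response:

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class ReloadProductsCollection {
    public static void main(String[] args) throws Exception {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8080/solr");
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("action", "RELOAD");
        params.set("name", "products");
        QueryRequest request = new QueryRequest(params);
        request.setPath("/admin/collections"); // Collections API endpoint
        NamedList<Object> response = server.request(request);
        System.out.println(response);
        server.shutdown();
    }
}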


On 17 October 2013 13:08, Grzegorz Sobczyk  wrote:

>
>
> On 16 October 2013 11:48, RadhaJayalakshmi <
> rlakshminaraya...@inautix.co.in> wrote:
>
>> Hi,
>> My setup is
>> Zookeeper ensemble - running with 3 nodes
>> Tomcats - 9 Tomcat instances are brought up by registering with
>> zookeeper.
>>
>> Steps :
>> 1) I uploaded the Solr configuration files (db_data_config, solrconfig,
>> schema XMLs) into ZooKeeper.
>> 2) Now, I am trying to create a collection with the Collections API like
>> below:
>>
>>
>> http://miadevuser001.albridge.com:7021/solr/admin/collections?action=CREATE&name=Schwab_InvACC_Coll&numShards=1&replicationFactor=2&createNodeSet=localhost:7034_solr,localhost:7036_solr&collection.configName=InvestorAccountDomainConfig
>>
>> Now, when I execute this command, I am getting the following error:
>> <int name="status">500</int><int name="QTime">60015</int>
>> <str name="msg">createcollection the collection time out:60s</str>
>> <str name="trace">org.apache.solr.common.SolrException: createcollection the
>> collection time out:60s
>> at
>>
>> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:175)
>> at
>>
>> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:156)
>> at
>>
>> org.apache.solr.handler.admin.CollectionsHandler.handleCreateAction(CollectionsHandler.java:290)
>> at
>>
>> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:112)
>> at
>>
>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>> at
>>
>> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:611)
>> at
>>
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:218)
>> at
>>
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
>> at
>>
>> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
>> at
>>
>> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
>> at
>>
>> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
>> at
>>
>> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
>>

Re: Timeout Errors while using Collections API

2013-10-17 Thread Grzegorz Sobczyk
On 16 October 2013 11:48, RadhaJayalakshmi
wrote:

> Hi,
> My setup is
> Zookeeper ensemble - running with 3 nodes
> Tomcats - 9 Tomcat instances are brought up by registering with
> zookeeper.
>
> Steps :
> 1) I uploaded the Solr configuration files (db_data_config, solrconfig,
> schema XMLs) into ZooKeeper.
> 2) Now, I am trying to create a collection with the Collections API like
> below:
>
>
> http://miadevuser001.albridge.com:7021/solr/admin/collections?action=CREATE&name=Schwab_InvACC_Coll&numShards=1&replicationFactor=2&createNodeSet=localhost:7034_solr,localhost:7036_solr&collection.configName=InvestorAccountDomainConfig
>
> Now, when I execute this command, I am getting the following error:
> <int name="status">500</int><int name="QTime">60015</int>
> <str name="msg">createcollection the collection time out:60s</str>
> <str name="trace">org.apache.solr.common.SolrException: createcollection the
> collection time out:60s
> at
>
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:175)
> at
>
> org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:156)
> at
>
> org.apache.solr.handler.admin.CollectionsHandler.handleCreateAction(CollectionsHandler.java:290)
> at
>
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:112)
> at
>
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at
>
> org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:611)
> at
>
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:218)
> at
>
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
> at
>
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
> at
>
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
> at
>
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
> at
>
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
> at
>
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
> at
>
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
> at
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:947)
> at
>
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
> at
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
> at
>
> org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1009)
> at
>
> org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
> at
>
> org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:310)
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:722)
> 500
>
> Now after I got this error, I am not able to do any operation on these
> instances with the Collections API. It is repeatedly giving the same timeout
> error.
> This setup was working fine 5 mins back. Suddenly it started throwing these
> exceptions. Any ideas please??
>
>
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Timeout-Errors-while-using-Collections-API-tp4095852.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>



-- 
Grzegorz Sobczyk


logging UI stops working when additional handlers defined

2013-08-07 Thread Grzegorz Sobczyk
I run Solr on Tomcat with JUL configured to log Solr to a separate file:

org.apache.solr.level = INFO
org.apache.solr.handlers = 4solrerr.org.apache.juli.FileHandler

I've noticed that the logging UI stops working. Is this normal behavior or is
it a bug?
(When cores are initialized, JulWatcher is registered only for the root logger.)
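For reference, a plain-JUL sketch (nothing Solr-specific assumed) that, when
called from inside the Solr webapp after the cores are initialized, prints
which handlers ended up on the root logger vs. org.apache.solr:

import java.util.logging.Handler;
import java.util.logging.Logger;

public class JulHandlerDump {
    public static void dump() {
        for (String name : new String[] { "", "org.apache.solr" }) {
            Logger logger = Logger.getLogger(name); // "" is the root logger
            System.out.println("logger '" + name + "' useParentHandlers="
                    + logger.getUseParentHandlers());
            for (Handler handler : logger.getHandlers()) {
                System.out.println("  handler: " + handler.getClass().getName());
            }
        }
    }
}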

-- 
Grzegorz Sobczyk


Re: java.lang.OutOfMemoryError: Requested array size exceeds VM limit

2013-08-02 Thread Grzegorz Sobczyk
Solr parameters listed in the dashboard:
-DzkHost=localhost:2181,172.27.5.121:2181,172.27.5.122:2181
-XX:+UseConcMarkSweepGC
-Xmx3072m
-Djava.awt.headless=true

Memory usage over the last few days: http://i42.tinypic.com/29z5rew.png

It's a production system and there are too many requests to detect which query
is buggy.
Some lines before every error (result of: zgrep -B 200 "SEVERE:
null:java.lang.RuntimeException: java.lang.OutOfMemoryError"
daemon.err.2.gz > out.log):
https://gist.github.com/gsobczyk/65e2ab9bb13a99aec89e

I've tried invoking the recent requests from before the exception, but without
success in reproducing the error.



On 1 August 2013 20:39, Erick Erickson  wrote:

> What are the memory parameters you start Solr with? The Solr admin page
> will tell you how much memory the JVM has.
>
> Also, cut/paste the queries you're running when you see this.
>
> Best
> Erick
>
>
> On Thu, Aug 1, 2013 at 9:50 AM, Grzegorz Sobczyk 
> wrote:
>
> > After the node starts I have only a few requests in the log:
> > https://gist.github.com/gsobczyk/6131503#file-solr-oom-log
> > This error occurred multiple times.
> >
> >
> >
> >
> > On 1 August 2013 15:33, Rafał Kuć  wrote:
> >
> > > Hello!
> > >
> > > The exception you've shown tells you that Solr tried to allocate an
> > > array that exceeded heap size. Do you use some custom sorts? Did you
> > > send large bulks during the time that the exception occurred?
> > >
> > > --
> > > Regards,
> > >  Rafał Kuć
> > >  Sematext :: http://sematext.com/ :: Solr - Lucene - ElasticSearch
> > >
> > > > Today I found in solr logs exception: java.lang.OutOfMemoryError:
> > > Requested
> > > > array size exceeds VM limit.
> > > > At that time memory usage was ~200MB / Xmx3g
> > >
> > > > Env looks like this:
> > > > 3x standalone zK (Java 7)
> > > > 3x Solr 4.2.1 on Tomcat (Java 7)
> > > > Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 GNU/Linux
> > > > One Solr and one ZK on single host: lmsiprse01, lmsiprse02,
> lmsiprse03
> > >
> > > > Before exception I start restarting Solr:
> > > > lmsiprse01:
> > > > [2013-08-01 05:23:43]: /etc/init.d/tomcat6-1 stop
> > > > [2013-08-01 05:25:09]: /etc/init.d/tomcat6-1 start
> > > > lmsiprse02 (leader):
> > > > 2013-08-01 05:27:21]: /etc/init.d/tomcat6-1 stop
> > > > 2013-08-01 05:29:31]: /etc/init.d/tomcat6-1 start
> > > > lmsiprse03:
> > > > [2013-08-01 05:25:48]: /etc/init.d/tomcat6-1 stop
> > > > [2013-08-01 05:26:42]: /etc/init.d/tomcat6-1 start
> > >
> > > > and error shows up at 2013 5:27:26 on lmsiprse01
> > >
> > > > Does anybody know what happened?
> > >
> > > > fragment of log looks:
> > > > sie 01, 2013 5:27:26 AM org.apache.solr.core.SolrCore execute
> > > > INFO: [products] webapp=/solr path=/select
> > > >
> > >
> >
> params={facet=true&start=0&q=&facet.limit=-1&facet.field=attribute_u-typ&facet.field=attribute_u-gama-kolorystyczna&facet.field=brand_name&wt=javabin&fq=node_id:1056&version=2&rows=0}
> > > > hits=1241 status=0 QTime=33
> > > > sie 01, 2013 5:27:26 AM org.apache.solr.common.SolrException log
> > > > SEVERE: null:java.lang.RuntimeException: java.lang.OutOfMemoryError:
> > > > Requested array size exceeds VM limit
> > > > at
> > > >
> > >
> >
> org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:653)
> > > > at
> > > >
> > >
> >
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:366)
> > > > at
> > > >
> > >
> >
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
> > > > at
> > > >
> > >
> >
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> > > > at
> > > >
> > >
> >
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> > > > at
> > > >
> > >
> >
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> > > > at
> > > >
> > >
> >
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> > > > at
> > > >
> > >
> >
> org.

Re: java.lang.OutOfMemoryError: Requested array size exceeds VM limit

2013-08-01 Thread Grzegorz Sobczyk
After the node starts I have only a few requests in the log:
https://gist.github.com/gsobczyk/6131503#file-solr-oom-log
This error occurred multiple times.




On 1 August 2013 15:33, Rafał Kuć  wrote:

> Hello!
>
> The exception you've shown tells you that Solr tried to allocate an
> array that exceeded heap size. Do you use some custom sorts? Did you
> send large bulks during the time that the exception occurred?
>
> --
> Regards,
>  Rafał Kuć
>  Sematext :: http://sematext.com/ :: Solr - Lucene - ElasticSearch
>
> > Today I found in solr logs exception: java.lang.OutOfMemoryError:
> Requested
> > array size exceeds VM limit.
> > At that time memory usage was ~200MB / Xmx3g
>
> > Env looks like this:
> > 3x standalone zK (Java 7)
> > 3x Solr 4.2.1 on Tomcat (Java 7)
> > Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 GNU/Linux
> > One Solr and one ZK on single host: lmsiprse01, lmsiprse02, lmsiprse03
>
> > Before exception I start restarting Solr:
> > lmsiprse01:
> > [2013-08-01 05:23:43]: /etc/init.d/tomcat6-1 stop
> > [2013-08-01 05:25:09]: /etc/init.d/tomcat6-1 start
> > lmsiprse02 (leader):
> > 2013-08-01 05:27:21]: /etc/init.d/tomcat6-1 stop
> > 2013-08-01 05:29:31]: /etc/init.d/tomcat6-1 start
> > lmsiprse03:
> > [2013-08-01 05:25:48]: /etc/init.d/tomcat6-1 stop
> > [2013-08-01 05:26:42]: /etc/init.d/tomcat6-1 start
>
> > and error shows up at 2013 5:27:26 on lmsiprse01
>
> > Does anybody know what happened?
>
> > fragment of log looks:
> > sie 01, 2013 5:27:26 AM org.apache.solr.core.SolrCore execute
> > INFO: [products] webapp=/solr path=/select
> >
> params={facet=true&start=0&q=&facet.limit=-1&facet.field=attribute_u-typ&facet.field=attribute_u-gama-kolorystyczna&facet.field=brand_name&wt=javabin&fq=node_id:1056&version=2&rows=0}
> > hits=1241 status=0 QTime=33
> > sie 01, 2013 5:27:26 AM org.apache.solr.common.SolrException log
> > SEVERE: null:java.lang.RuntimeException: java.lang.OutOfMemoryError:
> > Requested array size exceeds VM limit
> > at
> >
> org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:653)
> > at
> >
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:366)
> > at
> >
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
> > at
> >
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
> > at
> >
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
> > at
> >
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
> > at
> >
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
> > at
> >
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> > at
> >
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
> > at
> >
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
> > at
> >
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
> > at
> >
> org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
> > at
> >
> org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602)
> > at
> > org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
> > at java.lang.Thread.run(Thread.java:724)
> > Caused by: java.lang.OutOfMemoryError: Requested array size exceeds VM
> limit
> > at org.apache.lucene.util.PriorityQueue.<init>(PriorityQueue.java:64)
> > at org.apache.lucene.util.PriorityQueue.<init>(PriorityQueue.java:37)
> > at
> >
> org.apache.solr.handler.component.ShardFieldSortedHitQueue.<init>(ShardDoc.java:113)
> > at
> >
> org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:766)
> > at
> >
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:625)
> > at
> >
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:604)
> > at
> >
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:311)
> > at
> >
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> > at org.apache.solr.core.SolrCore.execute(SolrCore.java:1817)
> > at
> >
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:639)
> > at
> >
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
> > ... 13 more
>
>


-- 
Grzegorz Sobczyk


java.lang.OutOfMemoryError: Requested array size exceeds VM limit

2013-08-01 Thread Grzegorz Sobczyk
Today I found this exception in the Solr logs: java.lang.OutOfMemoryError:
Requested array size exceeds VM limit.
At that time memory usage was ~200MB / Xmx3g.

Env looks like this:
3x standalone zK (Java 7)
3x Solr 4.2.1 on Tomcat (Java 7)
Linux 3.2.0-4-amd64 #1 SMP Debian 3.2.46-1 x86_64 GNU/Linux
One Solr and one ZK on single host: lmsiprse01, lmsiprse02, lmsiprse03

Before the exception I started restarting Solr:
lmsiprse01:
[2013-08-01 05:23:43]: /etc/init.d/tomcat6-1 stop
[2013-08-01 05:25:09]: /etc/init.d/tomcat6-1 start
lmsiprse02 (leader):
[2013-08-01 05:27:21]: /etc/init.d/tomcat6-1 stop
[2013-08-01 05:29:31]: /etc/init.d/tomcat6-1 start
lmsiprse03:
[2013-08-01 05:25:48]: /etc/init.d/tomcat6-1 stop
[2013-08-01 05:26:42]: /etc/init.d/tomcat6-1 start

and the error shows up at 2013 5:27:26 on lmsiprse01

Does anybody know what happened?

A fragment of the log looks like this:
sie 01, 2013 5:27:26 AM org.apache.solr.core.SolrCore execute
INFO: [products] webapp=/solr path=/select
params={facet=true&start=0&q=&facet.limit=-1&facet.field=attribute_u-typ&facet.field=attribute_u-gama-kolorystyczna&facet.field=brand_name&wt=javabin&fq=node_id:1056&version=2&rows=0}
hits=1241 status=0 QTime=33
sie 01, 2013 5:27:26 AM org.apache.solr.common.SolrException log
SEVERE: null:java.lang.RuntimeException: java.lang.OutOfMemoryError:
Requested array size exceeds VM limit
at
org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:653)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:366)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.OutOfMemoryError: Requested array size exceeds VM limit
at org.apache.lucene.util.PriorityQueue.<init>(PriorityQueue.java:64)
at org.apache.lucene.util.PriorityQueue.<init>(PriorityQueue.java:37)
at
org.apache.solr.handler.component.ShardFieldSortedHitQueue.<init>(ShardDoc.java:113)
at
org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:766)
at
org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:625)
at
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:604)
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:311)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1817)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:639)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
... 13 more
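
If I read the trace right, QueryComponent.mergeIds sizes the merge priority
queue at roughly start + rows, and PriorityQueue backs it with a single
Object[] of that length. A toy illustration (not Solr code, and the numbers
below are made up, not taken from my logs) of how one runaway start/rows value
can produce exactly this error even while overall heap usage stays low:

public class ArraySizeLimitDemo {
    public static void main(String[] args) {
        int start = 0;
        int rows = Integer.MAX_VALUE - 1; // hypothetical runaway request parameter
        int queueSize = start + rows + 1; // roughly what the merge queue is sized to
        // The VM rejects arrays this large outright, regardless of free heap:
        // java.lang.OutOfMemoryError: Requested array size exceeds VM limit
        Object[] heap = new Object[queueSize];
        System.out.println(heap.length);
    }
}

So it may be worth grepping the access logs around 5:27 for requests with
unusually large rows or start values.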

-- 
Grzegorz Sobczyk


Re: facet prefix with tokenized fields

2012-10-29 Thread Grzegorz Sobczyk

I'd like to use faceting. I don't want a list of documents.

Using ngrams would give me a response which is useless for me.

Querying something like this:
fq=category_ngram:child&facet.field=category_exactly

would give me something like this (for multivalued category fields):
"toys for children"
"games"
"memory"
"tv"




On 29.10.2012 at 13:12, Rafał Kuć wrote:


Hello!

Do you have to use faceting for prefixing? Maybe it would be better to use an
ngram-based field and return the stored value?






--
Grzegorz Sobczyk



facet prefix with tokenized fields

2012-10-29 Thread Grzegorz Sobczyk

Hi.
Is there any solution to facet documents with a specified prefix on some
tokenized field, but get the original value of the field in the result?
e.g.:
Indexed value: "toys for children"
query: q=&start=0&rows=0&facet.limit=-1&facet.mincount=1&f.category_ac.facet.prefix=chi&facet.field=category_ac&facet=true
I'd like to get exactly "toys for children", not "children".

--
Grzegorz Sobczyk

Re: maven artifact for solr-solrj-4.0.0

2012-10-18 Thread Grzegorz Sobczyk

Thanks!

On 18.10.2012 at 10:37, Jeevanandam Madanagopal wrote:



Sorry, missed the maven central repo link -
http://search.maven.org/#artifactdetails|org.apache.solr|solr-solrj|4.0.0|jar

Cheers, Jeeva
Blog: http://www.myjeeva.com

On Oct 18, 2012, at 1:59 PM, Jeevanandam Madanagopal   
wrote:



Grzegorz Sobczyk - It's already available in the Maven central repo:


<dependency>
    <groupId>org.apache.solr</groupId>
    <artifactId>solr-solrj</artifactId>
    <version>4.0.0</version>
</dependency>


PS: use 'http://search.maven.org', the official website of the Maven central
repository, for artifact search/download.


Cheers, Jeeva
Blog: http://www.myjeeva.com

On Oct 18, 2012, at 12:30 PM, Amit Nithian  wrote:


I am not sure if this repository
https://repository.apache.org/content/repositories/releases/ works but
the modification dates seem reasonable given the timing of the
release. I suspect it'll be on maven central soon (hopefully)

On Wed, Oct 17, 2012 at 11:13 PM, Grzegorz Sobczyk
 wrote:

Hello
Is there a Maven artifact for the solrj 4.0.0 release?
When will it be available to download from http://mvnrepository.com/?


Version 4.0.0-BETA isn't compatible with 4.0.0 (problems with ZooKeeper and
zookeeper and

clusterstate.json parsing)

Best regards
Grzegorz Sobczyk











--
Regards
Grzegorz Sobczyk
Implementation and Deployment Department
Contium