Hi,
Do you know how to optimize the index on a single shard only? I was trying to
use optimize=true&waitFlush=true&shard.keys=myshard but it does not work
- it optimizes all shards instead of just one.
Kind regards.
Ahmet
On Tuesday, May 20, 2014 10:23 AM, Marcin Rzewucki mrzewu...@gmail.com
wrote:
Hi,
Do you know how to optimize index on a single shard only ? I was trying
to
use optimize=true&waitFlush=true&shard.keys=myshard but it does not
work
- it optimizes all shards instead of just one.
It optimizes the whole collection. You can reference
SolrCmdDistributor.distribCommit.
2014-05-20 17:27 GMT+08:00 Marcin Rzewucki mrzewu...@gmail.com:
Well, it should not hang if all is configured fine :) How many shards and
how much memory do you have? Note that optimize rewrites the index, so you
might need additional disk space.
Erick
On Wed, Mar 19, 2014 at 8:48 AM, Marcin Rzewucki mrzewu...@gmail.com
wrote:
Hi,
I have the following issue with join query parser and filter query.
For
such query:
<str name="q">*:*</str>
<str name="fq">
(({!join from=inner_id to=outer_id fromIndex=othercore}city:Stara
Zagora)) AND (prod:214)
</str>
AM, Marcin Rzewucki mrzewu...@gmail.com
wrote:
Nope. There is no line break in the string and it is not fed from a file.
What else could be the reason?
On 19 March 2014 17:57, Erick Erickson erickerick...@gmail.com wrote:
It looks to me like you're feeding this from some kind of program,
or have a line break in the string you paste into the URL,
or something similar.
Kind of shooting in the dark though.
Erick
On Wed, Mar 19, 2014 at 8:48 AM, Marcin Rzewucki mrzewu...@gmail.com
wrote:
Hi,
I have the following issue with join query parser and filter query
Hi everyone,
I got the following issue recently. I'm trying to use frange on a field
which has hyphen in name:
<lst name="params">
<str name="debugQuery">true</str>
<str name="indent">on</str>
<str name="q">*:*</str>
<str name="wt">xml</str>
<arr name="fq">
<str>
{!frange l=1 u=99}sub(if(1,
(acc_curr_834_2-1900_tl,1)
becomes:
div(field('acc_curr_834_2-1900_tl'),1)
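Put another way, quoting the hyphenated name inside field() keeps the function-query parser from treating the hyphen as an operator. A sketch of such a request (host and core name are made up, and the fq value would need URL-encoding in practice):

```shell
# The hyphen in acc_curr_834_2-1900_tl breaks plain function-query parsing,
# so wrap the field name in quotes inside field(). Host/core are placeholders.
FQ="{!frange l=1 u=99}div(field('acc_curr_834_2-1900_tl'),1)"
URL="http://localhost:8983/solr/mycore/select"
echo "GET $URL?q=*:*&fq=$FQ"
# In practice, send the fq URL-encoded, e.g. with curl --data-urlencode.
```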
-- Jack Krupansky
-Original Message- From: Marcin Rzewucki
Sent: Wednesday, March 19, 2014 8:13 AM
To: solr-user@lucene.apache.org
Subject: frange and field with hyphen
Hi everyone,
I got
Hi,
I have the following issue with join query parser and filter query. For
such query:
<str name="q">*:*</str>
<str name="fq">
(({!join from=inner_id to=outer_id fromIndex=othercore}city:Stara
Zagora)) AND (prod:214)
</str>
I got error:
<lst name="error">
<str name="msg">
org.apache.solr.search.SyntaxError:
Hi,
After upgrading from solr 4.3.1 to solr 4.4 I have the following issue:
ERROR - 2013-07-25 20:00:15.433; org.apache.solr.core.CoreContainer; Unable
to create core: awslocal_shard5
org.apache.solr.common.SolrException: Error opening new searcher
at
http://wiki.apache.org/solr/DocValues#Specifying_a_different_Codec_implementation
OK, it seems there's no back compat for disk based docvalues
implementation. I have to reindex documents to get rid of this issue.
On 25 July 2013 22:17, Marcin Rzewucki mrzewu...@gmail.com wrote:
Hi,
After
Hi,
You should use the CoreAdmin API (or the Solr Admin page) and UNLOAD unneeded
cores. This will unregister them from ZooKeeper (the cluster state will be
updated), so they won't be used for querying any longer. A SolrCloud restart
is not needed in this case.
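For example (node address and core name below are placeholders; the echo just shows the request you would send):

```shell
# UNLOAD an unneeded core via the CoreAdmin API; SolrCloud then removes it
# from the ZooKeeper cluster state, so no restart is required.
SOLR_NODE="http://localhost:8983/solr"       # placeholder node address
CORE="mycollection_shard1_replica2"          # placeholder core name
UNLOAD_URL="$SOLR_NODE/admin/cores?action=UNLOAD&core=$CORE"
echo "GET $UNLOAD_URL"
# In practice: curl "$UNLOAD_URL"
```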
Regards.
On 16 July 2013 06:18, Ali, Saqib
Hi,
I have a problem (I wonder if it is possible to solve it at all) with the
following query. There are documents with a field which contains a text and
a number in brackets, e.g.:
myfield: this is a text (number)
There might be some other documents with the same text but different number
in
at indexing time.
--
Oleg
On Tue, Jul 16, 2013 at 10:12 AM, Marcin Rzewucki mrzewu...@gmail.com
wrote:
Hi,
I have a problem (wonder if it is possible to solve it at all) with the
following query. There are documents with a field which contains a text
and
a number in brackets, eg
?
On Tue, Jul 16, 2013 at 10:51 AM, Marcin Rzewucki mrzewu...@gmail.com
wrote:
Hi Oleg,
It's a multivalued field and it won't be easier to query when I split
this
field into text and numbers. I may get wrong results.
Regards.
On 16 July 2013 09:35, Oleg Burlaca oburl...@gmail.com
Krupansky
-Original Message- From: Marcin Rzewucki
Sent: Tuesday, July 16, 2013 5:13 AM
To: solr-user@lucene.apache.org
Subject: Re: Range query on a substring.
By multivalued I meant an array of values. For example:
<arr name="myfield">
<str>text1 (X)</str>
<str>text2 (Y)</str>
nodes from the cluster state
not cores, unless you can unload cores specific to an already offline node
from zookeeper.
On Tue, Jul 16, 2013 at 1:55 AM, Marcin Rzewucki mrzewu...@gmail.com
wrote:
Hi,
You should use CoreAdmin API (or Solr Admin page) and UNLOAD unneeded
cores
Hi,
Is there something similar to the ElasticSearch search/scroll function, but in
Solr? For me, it's very useful for making a dump of only some documents.
Regards.
OK. Thanks for explanation.
On 23 April 2013 23:16, Yonik Seeley yo...@lucidworks.com wrote:
On Tue, Apr 23, 2013 at 3:51 PM, Marcin Rzewucki mrzewu...@gmail.com
wrote:
Recently I noticed a lot of Reordered DBQs detected messages in logs.
As
far as I checked in logs it could be related
Hi,
Recently I noticed a lot of "Reordered DBQs detected" messages in the logs. As
far as I checked in the logs it could be related to deleting documents, but I'm
not sure. Do you know the reason for those messages?
Apr 23, 2013 1:20:14 AM org.apache.solr.search.SolrIndexSearcher init
INFO: Opening
Hi,
Atomic updates (single field updates) do not depend on DocValues. They were
implemented in Solr4.0 and work fine (but all fields have to be
retrievable). DocValues are supposed to be more efficient than FieldCache.
Why are they not enabled by default? Maybe because they are not for all fields
and the update mechanism is not really a field update
mechanism. It just looks like that from the outside. DocValues
should make true field updates implementable.
Otis
--
Solr ElasticSearch Support
http://sematext.com/
On Fri, Mar 29, 2013 at 3:30 PM, Marcin Rzewucki mrzewu...@gmail.com
wrote
Hi John,
Mark is right. DocValues can be enabled in two ways: RAM resident (default)
or on-disk. You can read more here:
http://www.slideshare.net/LucidImagination/column-stride-fields-aka-docvalues
Regards.
On 22 March 2013 16:55, John Nielsen j...@mcb.dk wrote:
with the on disk option.
Hi Chris,
Thanks for your detailed explanations. The default value is a difficult
limitation, especially for financial figures. I may try some workaround
like the lowest possible number for TrieLongField, but it would be
better to avoid such things :)
Regards.
On 22 March 2013 20:39, Chris Hostetter
Hi Shawn,
Thank you for your response. Yes, that's strange. By enabling DocValues the
information about missing fields is lost, which changes the way of sorting
as well. Adding a default value to the fields can change the logic of the
application dramatically (I can't set the default value to 0 for all
Hi,
I have a collection with more than 4K fields, mostly Trie*Field types.
It is used for faceting, sorting, searching and the StatsComponent. It works
pretty well on Amazon 4x m1.large (7.5GB RAM) EC2 boxes. I'm using
SolrCloud, a multi-AZ setup and ephemeral storage. The index is managed by mmap,
4GB
Hi,
Can somebody explain why there are additional requirements for a field to
be able to use DocValues? For example: Trie*Fields have to be required or
have a default value.
Schema Parsing Failed: Field
Hi there,
Let's say we use a custom hashing algorithm and there is a document already
indexed in shard1. After some time the same document has changed and
should be indexed to shard2 (because of the routing rules used in the indexing
program). It has been indexed without issues and as a result 2 almost the
Right. Collection API can be used here.
On 18 February 2013 21:36, Timothy Potter thelabd...@gmail.com wrote:
@Marcin - Maybe I mis-understood your process but I don't think you
need to reload the collection on each node if you use the expanded
collections admin API, i.e. the following will
Hi,
I was able to implement custom hashing with the use of _shard_ field. It
contains the name of shard a document should go to. Works fine. Maybe
there's some other method to do the same with the use of solrconfig.xml,
but I have not found any docs about it so far.
Regards.
On 18 February
Hi,
It does not work for distributed search:
org.apache.solr.handler.component.ShardFieldSortedHitQueue.getCachedComparator(ShardDoc.java:193)
...
case DOC:
  // TODO: we can support this!
  throw new RuntimeException("Doc sort not supported");
...
Try to sort by unique ID.
Regards.
Do you see any ZooKeeper session expiration in the logs? That is the
likely culprit for something like this. You may need to raise the timeout:
http://wiki.apache.org/solr/SolrCloud#FAQ
If you see no session timeouts, I don't have a guess yet.
- Mark
On Feb 2, 2013, at 7:35 PM, Marcin Rzewucki mrzewu...@gmail.com wrote:
I'm
February 2013 20:55, Mark Miller markrmil...@gmail.com wrote:
What led you to trying that? I'm not connecting the dots in my head - the
exception and the solution.
- Mark
On Feb 3, 2013, at 2:48 PM, Marcin Rzewucki mrzewu...@gmail.com wrote:
Hi,
I think the issue was not in zk client
to check if this is the
reason.
Regards.
On 3 February 2013 21:26, Shawn Heisey s...@elyograg.org wrote:
On 2/3/2013 1:07 PM, Marcin Rzewucki wrote:
I'm loading in batches. 10 threads are reading JSON files and loading to Solr
by sending POST requests (from a couple of dozen to a couple of hundred docs
for maxFormContentSize (1M) and there were no issues either.
Regards.
On 3 February 2013 22:16, Marcin Rzewucki mrzewu...@gmail.com wrote:
Hi,
I set this:
<Call name="setAttribute">
  <Arg>org.eclipse.jetty.server.Request.maxFormContentSize</Arg>
  <Arg>10485760</Arg>
</Call>
I meant I get fields from parent core only. Is it possible to get fields
from both cores using join query?
On 1 February 2013 23:36, Marcin Rzewucki mrzewu...@gmail.com wrote:
Thanks Yonik. I see no errors now. Is it possible to get fields from both
cores for returned results ?
On 1
I'm experiencing the same problem in Solr4.1 during bulk loading. After 50
minutes of indexing the following error starts to occur:
INFO: [core] webapp=/solr path=/update params={} {} 0 4
Feb 02, 2013 11:36:15 PM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException:
Check below if that's better for you:
http://pastebin.com/ardqNcC7
On 1 February 2013 21:25, Marcin Rzewucki mrzewu...@gmail.com wrote:
Hi,
I was trying to join documents across cores on the same shard in SolrCloud4.1
and I got this error:
java.lang.NullPointerException
a better error message rather than a NPE of course...
-Yonik
http://lucidworks.com
On Fri, Feb 1, 2013 at 3:45 PM, Marcin Rzewucki mrzewu...@gmail.com
wrote:
Check below if that's better for you:
http://pastebin.com/ardqNcC7
On 1 February 2013 21:25, Marcin Rzewucki mrzewu...@gmail.com
Hi,
If you add a security constraint for /admin/*, SolrCloud will not work. At
least that's what I saw in Solr4.0. I have not tried the same with Solr4.1,
but I guess it is the same.
Also I found some issues with URL patterns in webdefault.xml
This:
<url-pattern>/core/update</url-pattern>
works,
Hi,
The best is if you could find a query matching all the docs you want to
remove. If this is not simple you can use the syntax id:(1 2 3 4 5) to
remove a group of docs by ID (if your default query operator is OR).
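Something like this should work (core URL and IDs are only an example); it posts one delete-by-query for the whole group:

```shell
# Remove several documents in a single /update request instead of one by one.
UPDATE_URL="http://localhost:8983/solr/mycore/update"   # placeholder core
DELETE_BODY="<delete><query>id:(1 2 3 4 5)</query></delete>"
echo "POST $UPDATE_URL -> $DELETE_BODY"
# In practice:
# curl "$UPDATE_URL?commit=true" -H "Content-Type: text/xml" -d "$DELETE_BODY"
```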
Regards.
On 27 January 2013 11:47, Bruno Mannina bmann...@free.fr wrote:
You can write a script and remove, say, 50 docs in one call. It's always
better than removing them one by one.
Regards.
On 27 January 2013 13:17, Bruno Mannina bmann...@free.fr wrote:
Hi,
Even if I have one or two thousand IDs?
Thanks
On 27/01/2013 at 13:15, Marcin Rzewucki wrote:
Hi
, at 3:14 PM, Marcin Rzewucki mrzewu...@gmail.com wrote:
Guys, I pasted you the full log (see pastebin url). Yes, it is Solr4.0. 2
cores are in sync, but the 3rd one is not:
INFO: PeerSync Recovery was not successful - trying replication.
core=ofac
INFO: Starting Replication Recovery. core
that this is
harmless?
Marcin Rzewucki asked the same question on December 28 2012 and got no
response. Can someone kindly respond please?
Thanks
PixalSoft
make complete sense. It then says 'trying replication', which is what I
would expect, and the bit you are saying has failed. So the interesting
bit is likely immediately after the snippet you showed.
Upayavira
On Wed, Jan 23, 2013, at 07:40 AM, Marcin Rzewucki wrote:
OK, so I did
23, 2013, at 10:28 AM, Marcin Rzewucki wrote:
Hi,
Previously, I took only the lines related to the collection I tested. Maybe
some interesting part was missing. I'm sending the full log this time.
It ends up with:
INFO: Finished recovery process. core=ofac
The issue I described is related
org.apache.solr.core.CachingDirectoryFactory get
INFO: return new directory for
/solr/cores/bpr/selekta/data/index.20130121090342477 forceNew:false
Once you look in that dir, how do things look?
Upayavira
On Wed, Jan 23, 2013, at 10:45 AM, Marcin Rzewucki wrote:
OK, check this link: http://pastebin.com/qMC9kDvt
:
Was your full log stripped? You are right, we need more. Yes, the
peer
sync failed, but then you cut out all the important stuff about the
replication attempt that happens after.
- Mark
On Jan 23, 2013, at 5:28 AM, Marcin Rzewucki mrzewu...@gmail.com
wrote:
Hi,
Previously, I took
Hi,
Have you tried to add aliases to your network interface (for master and
slave)? Then you should use -Djetty.host and -Djetty.port to bind Solr with
appropriate IPs. I think you should also use different directories for Solr
files (-Dsolr.solr.home) as there may be some conflict with index
versions if it doesn't read them from a tlog on startup...
- Mark
On Jan 22, 2013, at 3:31 PM, Marcin Rzewucki mrzewu...@gmail.com wrote:
Hi,
I'm using SolrCloud4.0 with 2 shards and I did such test: stopped Solr on
shard1 replica, removed index and tlog directories and started Solr
.
On 22 January 2013 23:06, Yonik Seeley yo...@lucidworks.com wrote:
On Tue, Jan 22, 2013 at 4:37 PM, Marcin Rzewucki mrzewu...@gmail.com
wrote:
Sorry, my mistake. I did 2 tests: in the 1st I removed just index
directory
and in 2nd test I removed both index and tlog directory. Log lines
Hi Romita,
The 3rd parameter should be '/solr/corename' because ping() sends the request
to the /solr/corename/admin/ping handler. Try it, it should work.
Regards.
On 17 December 2012 03:23, Romita Saha romita.s...@sg.panasonic.com wrote:
Hi,
I open the Solr browser using the following url:
There's no problem with indexing while taking a snapshot. The only issue I
found is some problem with the index directory:
https://issues.apache.org/jira/browse/SOLR-4170
It looks like Solr always looks in .../data/index/ directory without
reading index.properties file (sometimes your index dir name can
Definitely, I agree. It's good to stop loading before a snapshot. Anyway,
taking an index snapshot say every hour and re-indexing documents newer than
the last 1-1.5 hours should reduce your index recovery time.
On 8 January 2013 07:36, Otis Gospodnetic otis.gospodne...@gmail.com wrote:
Hi,
Right, you
ramBufferSizeMB and anything else that has the
potential of making indexing gentler on resources, be that CPU or disk
or...
Otis
--
Solr ElasticSearch Support
http://sematext.com/
On Fri, Jan 4, 2013 at 4:44 PM, Marcin Rzewucki mrzewu...@gmail.com
wrote:
Hi all,
I'm using SolrCloud4x
load balancer. Also, as I
understand it, ZooKeeper maintains active connections to Solr hosts,
which is not a common scenario for load balancers.
Upayavira
On Fri, Dec 28, 2012, at 04:39 PM, Marcin Rzewucki wrote:
Hi,
Does Solr need connection to all of hosts
JIRA ticket created: https://issues.apache.org/jira/browse/SOLR-4170
On 27 November 2012 23:41, Mark Miller markrmil...@gmail.com wrote:
Perhaps you can file a JIRA ticket with your findings?
- Mark
On Nov 27, 2012, at 5:31 PM, Marcin Rzewucki mrzewu...@gmail.com wrote:
Yes, I have
Hi,
I have Solr cluster and I want to use UUID for unique key. I configured
solrconfig and schema according to the rules on Wiki page:
http://wiki.apache.org/solr/UniqueKey
In logs I can see some UUID is being generated when adding new document:
INFO: [selekta] webapp=/solr path=/update
Hi,
I think you should change/set the value of the multipartUploadLimitInKB
attribute of the requestParsers element in solrconfig.xml.
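For example (the limit value here is arbitrary; pick one large enough for your uploads), in solrconfig.xml:

```xml
<!-- Raise the upload limit for multipart POSTs; 20480 KB is an example value. -->
<requestParsers enableRemoteStreaming="false"
                multipartUploadLimitInKB="20480" />
```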
Regards.
On 29 November 2012 07:58, deniz denizdurmu...@gmail.com wrote:
hello,
during tests, I keep getting
SEVERE: null:java.lang.IllegalStateException: Form too
Hi,
I have SolrCloud4x. I'd like to be able to make index backup on each node.
When I did /replication?command=backup on one of them I got:
<str name="snapShootException">File
/data/index.20121119140848151/segments_1am does not exist</str>
What does it mean? The file is not there indeed, but querying and
, but I can't
think of any reason it should not work.
Have you tried a backup with a single-node Solr setup on 4x?
- Mark
On Nov 27, 2012, at 4:28 PM, Marcin Rzewucki mrzewu...@gmail.com wrote:
Hi,
I have SolrCloud4x. I'd like to be able to make index backup on each
node.
When I did
Hi,
It seems like the file is missing from ZooKeeper. Can you confirm?
Regards.
On 26 November 2012 07:57, deniz denizdurmu...@gmail.com wrote:
Hi all,
I am working on solrcloud and trying to import from db... but I am getting
this error:
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst>
Hi,
I added authentication in Jetty and it works fine. However, it's strange
that a URL pattern like /admin/cores* is not working, while /admin/* works
correctly.
Regards.
On 17 November 2012 01:10, Marcin Rzewucki mrzewu...@gmail.com wrote:
Hi,
Yes, I'm trying to add authentication to Jetty
suggestions from other people who
dealt with this kind of issue.
Regards,
- Luis Cappa.
2012/11/21 Marcin Rzewucki mrzewu...@gmail.com
Yes, I meant the same (not -zkRun). However, I was asking if it is safe
to
have zookeeper and solr processes running on the same node or better
Hi,
I'm using cloud-scripts/zkcli.sh script for reloading configuration, for
example:
$ ./cloud-scripts/zkcli.sh -cmd upconfig -confdir config.dir -solrhome
solr.home -confname config.name -z zookeeper.host
Then I'm reloading collection on each node in cloud, but maybe someone
knows better
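One option is to reload the whole collection in a single call via the Collections API instead of going node by node. A sketch (ensemble addresses, paths and names below are placeholders; the echoes just show what you would run):

```shell
# Push the updated config dir to ZooKeeper, then reload the collection once.
ZK_HOST="zk1:2181,zk2:2181,zk3:2181"            # placeholder ensemble
UPCONFIG="./cloud-scripts/zkcli.sh -cmd upconfig -confdir /path/to/conf -confname myconf -z $ZK_HOST"
RELOAD_URL="http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"
echo "$UPCONFIG"
echo "GET $RELOAD_URL"
# In practice run the zkcli command, then: curl "$RELOAD_URL"
```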
I think solrhome is not mandatory.
Yes, reloading means uploading the whole config dir again. It's a pity we
can't update just the modified files.
Regards.
On 22 November 2012 19:38, Cool Techi cooltec...@outlook.com wrote:
Thanks, but why do we need to specify the -solrhome?
I am using the following command
Hi,
I have 4 solr collections, 2-3M documents per collection, up to 100K
updates per collection daily (roughly). I'm going to create SolrCloud4x on
Amazon's m1.large instances (7GB mem, 2x2.4GHz cpu each). The question is
what about ZooKeeper? It's going to be an external ensemble, but is it better
.
- Mark
On Nov 21, 2012, at 8:54 AM, Marcin Rzewucki mrzewu...@gmail.com
wrote:
Hi,
I have 4 solr collections, 2-3mn documents per collection, up to 100K
updates per collection daily (roughly). I'm going to create SolrCloud4x
on
Amazon's m1.large instances (7GB mem,2x2.4GHz cpu
Separate is generally nice because then you can restart Solr nodes
without consideration for ZooKeeper.
Performance-wise, I doubt it's a big deal either way.
- Mark
On Nov 21, 2012, at 8:54 AM, Marcin Rzewucki mrzewu...@gmail.com
wrote:
Hi,
I have 4 solr
Hi,
As far as I know CloudSolrServer is recommended for indexing to
SolrCloud. I wonder what the advantages of this approach are over an external
load-balancer? Let's say I have a 4-node SolrCloud (2 shards + replicas) +
1 server running ZooKeeper. I can use CloudSolrServer for indexing or
hashing, will auto add/remove nodes from rotation based
on the cluster state in Zookeeper, and is probably out of the box more
intelligent about retrying on some responses (for example responses that
are returned on shutdown or startup).
- Mark
On Nov 19, 2012, at 6:54 AM, Marcin Rzewucki mrzewu
Hi,
It often happens that I need to update just one file in ZooKeeper.
I'm using zkcli.sh with -cmd upconfig for the whole directory of configuration
files. I wonder if it is possible to update a single file in ZooKeeper.
Do you have any ideas?
Thanks!
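If your zkcli.sh build supports the putfile command (I believe later 4.x releases do; check the script's help output), a single file can be replaced without re-uploading the whole dir. Paths and the ZooKeeper address below are placeholders:

```shell
# Replace one file in ZooKeeper instead of re-uploading the whole config dir.
ZK_HOST="zk1:2181"                               # placeholder ensemble
PUTFILE="./cloud-scripts/zkcli.sh -cmd putfile /configs/myconf/schema.xml ./schema.xml -z $ZK_HOST"
echo "$PUTFILE"
# A collection RELOAD is still needed afterwards for Solr to pick up the change.
```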
Hi,
You can prepare the following structure:
add
doc
field name=solrfieldname1value1/field
field name=solrfieldname2value2/field
/doc
/add
You can find sample files in the solr package (example/exampledocs/ dir) along
with the post.sh script, which might be useful for you.
Regards.
On 16
Hi,
Does anybody know if Solr supports Admin Page authentication?
I'm using Jetty from the latest solr package. I added the security option to
start.ini:
OPTIONS=Server,webapp,security
and in configuration file I have (according to Jetty documentation):
<!--
I noticed with 4.0 it no longer lives under /admin but rather /solr... and
that means you can't just password-protect it without password-protecting
all of solr. If I am wrong, please let me know...I would love to protect it
somehow
On 11/16/2012 10:55 AM, Marcin Rzewucki wrote:
Hi,
Does