Subject: Re: how to force rescan of core.properties file in solr
On 3/10/2016 3:00 AM, Gian Maria Ricci - aka Alkampfer wrote:
> but this change in core.properties is not available until I restart
> the service and Solr does core autodiscovery. Issuing a Core RELOAD
> does not work.
>
>
I've set up a configuration in my solrconfig.xml to manage master or slave
with settings in the core.properties file.
This allows me to select whether the core is slave or master with a simple
change to the core.properties file.
I've set up a DNS entry for master.mysolr..xxx, this allows me to point
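A sketch of that pattern, with assumed property names (`enable.master` / `enable.slave`) rather than the exact ones from this setup, is to drive the replication handler in solrconfig.xml via property substitution and set the values per core in core.properties:

```xml
<!-- solrconfig.xml (sketch, not the exact config from this thread).
     Assumes core.properties contains e.g.:
       enable.master=true
       enable.slave=false
     The hostname below is a placeholder. -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="enable">${enable.master:false}</str>
    <str name="replicateAfter">commit</str>
  </lst>
  <lst name="slave">
    <str name="enable">${enable.slave:false}</str>
    <str name="masterUrl">http://master.mysolr.xxx:8983/solr/${solr.core.name}</str>
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```

Flipping the two flags and restarting then switches the role; as noted above, a core RELOAD alone does not re-read core.properties.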
Plugins
I used those links to learn to write my first plugin as well.
I might have that code still lying around somewhere. Let me take a look and get
back.
On Thu, 4 Feb 2016, 19:32 Gian Maria Ricci - aka Alkampfer <
alkamp...@nablasoft.com> wrote:
> I've already found these two pre
get the hang of how it's to be done, it's really not that difficult.
On Wed, 3 Feb 2016, 21:59 Gian Maria Ricci - aka Alkampfer <
alkamp...@nablasoft.com> wrote:
> Hi,
>
>
>
> I wonder if there are some code samples or a tutorial (updated to work
> with version 5) to he
h component and a request handler respectively.
>> > They are on older solr versions but they should work with 5.x as well.
>> > I used these to get started when I was trying to write my first plugin.
>> > Once you get the hang of how it's to be done, it's really not that
Hi,
I wonder if there are some code samples or a tutorial (updated to work with
version 5) to help users write plugins.
In the past I've had a lot of difficulty finding this kind of information
when I needed to write plugins, and I wonder if I missed some site or
link that does a
: Monday, February 1, 2016 21:19
To: solr-user@lucene.apache.org
Subject: Re: Error configuring UIMA
Yeah, that's exactly the kind of innocent user error that UIMA simply has no
code to detect and reasonably report.
-- Jack Krupansky
On Mon, Feb 1, 2016 at 12:13 PM, Gian Maria Ricci - aka Alkampfer
Hi,
I've followed the guide
https://cwiki.apache.org/confluence/display/solr/UIMA+Integration to set up a
UIMA integration to test this feature. The doc is not updated for Solr 5;
I've followed the latest comment on that guide and made some other changes,
but now each request to the /update handler
Hi,
I've configured integration with UIMA, but when I try to add a document I
always get the error reported at the bottom of the mail.
It seems to be related to OpenCalais, but I've registered with OpenCalais and
set up my token in solrconfig, so I wonder if anyone has some clue on what
could be
It was a silly error: I mistyped the logField configuration for UIMA.
I wanted the error to log not the Id but another field, but I mistyped it in
solrconfig.xml and then got that error.
Gian Maria.
--
Gian Maria Ricci
Cell: +39 320 0136949
-Original Message-
From: Jack
@lucene.apache.org
Subject: Re: Which version of Java is preferable to install on a Red Hat
Enterprise Linux 7?
On 1/27/2016 5:34 AM, Gian Maria Ricci - aka Alkampfer wrote:
>
> As for the subject, which version of Java is preferable to install to
> run Solr 5.3.1 on RHEL 7?
>
>
>
location parameter is used.
You can also check out https://github.com/bloomreach/solrcloud-haft for doing
backup and restore of your SolrCloud collections.
On Fri, Jan 15, 2016 at 12:23 AM,
Gian Maria Ricci - aka Alkampfer <
alkamp...@nablasoft.com> wrote:
> Ok thanks, I also think that i
As for the subject, which version of Java is preferable to install to run
Solr 5.3.1 on RHEL 7?
I remember from the old doc (https://wiki.apache.org/solr/SolrInstall) that
for full international charset support you need to install the full JDK; is
this still true, or is it preferable to install
From: outlook_288fbf38c031d...@outlook.com
[mailto:outlook_288fbf38c031d...@outlook.com] On Behalf Of Gian Maria Ricci
- aka Alkampfer
Sent: Thursday, January 21, 2016 6:38 AM
To: solr-user@lucene.apache.org
Subject: Couple of questions about Virtualization and Load Balancer
Hi,
I've a couple of quick que
but that's not a question or issue for Solr. The only issue there is assuring that
you have enough Solr shards and replicas to handle the aggregate request load.
-- Jack Krupansky
On Thu, Jan 21, 2016 at 6:37 AM, Gian Maria Ricci - aka Alkampfer <
alkamp...@nablasoft.com> wrote:
e-a-definitive-answer/.
>
> However, there is a Solr
performance test tool with a track record -
> SolrMeter<https://github.com/tflobbe/solrmeter/blob/wiki/Usage.md>. You
> can also do a lot with good old JMeter.
>
> From: outlook_288fbf38c031d...@outlook.com
> [mail
Hi,
I've a couple of quick questions about the production setup.
The first one is about virtualization: I'd like to know if there are any
official tests on performance loss in a virtualized environment. I think
that the loss of performance is negligible, and quick question on test
spm) where you can see
metrics for both the system and Solr.
Thanks,
Emir
On 15.01.2016 10:43, Gian Maria Ricci - aka Alkampfer wrote:
>
> Hi,
>
> When it is time to calculate how much RAM a solr instance needs to run
> with good performance, I know that it is some form of art, but I
ilure. If only that first message appears,
it means the backup is still in progress.
-- Jack Krupansky
On Thu, Jan 14, 2016 at 9:23 AM, Gian Maria Ricci - aka Alkampfer <
alkamp...@nablasoft.com> wrote:
> If I start a backup operation using the location parameter
>
>
>
> *htt
ct: Re: Pros and cons of using Solr Cloud vs standard Master Slave Replica
re: SolrCloud backup/restore: https://issues.apache.org/jira/browse/SOLR-5750
not committed yet, but getting attention.
On Thu, Jan 14, 2016 at 6:19 AM, Gian Maria Ricci - aka Alkampfer
<alkamp...@nablasoft.c
Hi,
When it is time to calculate how much RAM a Solr instance needs to run with
good performance, I know that it is some form of art, but I'm looking for a
general "formula" to have at least one good starting point.
Apart from the RAM devoted to the Java heap, which strongly depends on how I
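For what it's worth, the usual starting point people quote is: JVM heap plus enough extra RAM for the OS page cache to hold the hot part of the index. A small sketch of that rule of thumb; the fractions are assumptions to tune per workload, not Solr constants:

```python
def estimate_ram_gb(heap_gb, index_size_gb, hot_fraction=1.0, os_overhead_gb=1.0):
    """Rough starting point for Solr RAM sizing.

    heap_gb        -- JVM heap you plan to give Solr
    index_size_gb  -- on-disk size of all indexes on the node
    hot_fraction   -- portion of the index you want the OS page cache
                      to hold (1.0 = ideal, whole index cached)
    os_overhead_gb -- RAM reserved for the OS and other processes
    """
    return heap_gb + index_size_gb * hot_fraction + os_overhead_gb

# Example: 8 GB heap, 50 GB of index, aim to cache the whole index.
print(estimate_ram_gb(8, 50))  # 59.0
```

From there, measure with real queries; page cache hit rate and GC behavior will tell you whether the starting point was optimistic.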
- Zookeeper is a standard for all HA in Apache Hadoop
>> - You have collections which will manage your shards across nodes
>> - SolrJ Client is now fault tolerant with CloudSolrClient
>>
>> This is the way future direction of the product will go.
>>
>>
>>
> > wrote:
> >
> >> - SolrCloud uses zookeeper to manage HA
> >> - Zookeeper is a standard for all HA in Apache Hadoop
> >> - You have collections which will manage your shards across nodes
> >> - SolrJ Client is now fault tolerant with Cl
If I start a backup operation using the location parameter
http://localhost:8983/solr/mycore/replication?command=backup&name=mycore&location=z:\temp\backupmycore
How can I monitor when the backup operation is finished? Issuing a standard
details operation
http://localhost:8983/solr/mycore
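One way to watch for completion is to poll `/replication?command=details&wt=json` and inspect the backup section of the response. A sketch of the client side; the field names (`status`, `success`) and the flattened key/value-list layout are assumptions based on how some Solr versions serialize that section, so verify against your own response:

```python
def backup_finished(details_response):
    """Given the parsed JSON of /replication?command=details&wt=json,
    return True once the last backup reports a completion status.

    Assumes the response carries a details.backup section with a
    'status' entry ('success' on completion) -- check this against
    your Solr version, the layout has changed across releases."""
    backup = details_response.get("details", {}).get("backup", {})
    # Some versions return the section as a flat [key, value, ...] list.
    if isinstance(backup, list):
        backup = dict(zip(backup[::2], backup[1::2]))
    return backup.get("status") == "success"

# Example with a mocked response:
resp = {"details": {"backup": ["startTime", "...", "status", "success"]}}
print(backup_finished(resp))  # True
```

In a real loop you would fetch the URL, parse the body with json.loads, and sleep between polls until this returns True.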
AM, Gian Maria Ricci - aka Alkampfer wrote:
> a customer needs a comprehensive list of all pros and cons of using
> standard Master Slave replica vs using Solr Cloud. I’m interested
> especially in query performance considerations, because in this
> specific situation the rate of n
uch more work (running all the update request processors before the
distribution).
All the consideration already mentioned are of course still valid.
Cheers
[1]
https://cwiki.apache.org/confluence/display/solr/Update+Request+Processors
On 12 January 2016 at 08:19, Gian Maria Ricci - a
: Warning in SolrCloud logs
To be honest, that block is not necessary anymore.
As Erick and Shawn were saying, that is now implicit and defined by default.
Cheers
On 12 January 2016 at 08:22, Gian Maria Ricci - aka Alkampfer <
alkamp...@nablasoft.com> wrote:
> This is the replicatio
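For context, the block being discussed is the explicit /replication handler; since Solr 5 an equivalent handler is registered implicitly, roughly as if solrconfig.xml contained (a sketch, exact attributes may differ by version):

```xml
<!-- Implicit since Solr 5. Re-declaring it with master/slave
     settings under SolrCloud is what triggers the "old-style
     replication" warning discussed in this thread. -->
<requestHandler name="/replication" class="solr.ReplicationHandler" startup="lazy" />
```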
org>
Subject: Re: Warning in SolrCloud logs
Just show us the solrconfig.xml file, particularly anything referring to
replication; it's easier than talking past each other.
Best,
Erick.
On Mon, Jan 11, 2016 at 12:18 PM, Gian Maria Ricci - aka Alkampfer
<alkamp...@nablasoft.com> wrote
gments.
Best,
Erick
On Mon, Jan 11, 2016 at 1:42 PM, Shawn Heisey <apa...@elyograg.org> wrote:
> On 1/11/2016 1:23 PM, Gian Maria Ricci - aka Alkampfer wrote:
>> Ok, this implies that if I have X replicas of a shard, the document is
>> indexed X+1 times? One for each replica plus
ake sure you" + " intend this
> behavior, it usually indicates a mis-configuration. Master setting is
> " +
> Boolean.toString(enableMaster) + " and slave setting is " + Boolean.
> toString(enableSlave));
> }
> }
Cheers
On 11 January 2016 at 15:08, Gian Mar
Jan 11, 2016 at 9:19 AM, Gian Maria Ricci - aka Alkampfer
<alkamp...@nablasoft.com> wrote:
> Thanks.
>
> This raises a different question: when I index a document, it is assigned to
> one of the three shards based on the value of the ID field. Actually indexing
> a document is usua
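The assignment mentioned here is deterministic: the compositeId router hashes the uniqueKey and picks the shard whose hash range contains the result. A simplified illustration, using crc32 in place of Solr's MurmurHash3 (so the shard numbers will not match a real cluster):

```python
import zlib

def route_to_shard(doc_id, num_shards):
    """Toy version of hash-based document routing.

    Solr's compositeId router uses MurmurHash3 over the id and sends
    the document to the shard whose hash range contains the result;
    here crc32 modulo num_shards stands in for that, purely to show
    that routing is deterministic per id."""
    h = zlib.crc32(doc_id.encode("utf-8"))
    return h % num_shards

# The same id always lands on the same shard:
print(route_to_shard("doc-42", 3) == route_to_shard("doc-42", 3))  # True
```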
Hi guys,
a customer needs a comprehensive list of all pros and cons of using standard
Master Slave replica vs using Solr Cloud. I'm interested especially in query
performance considerations, because in this specific situation the rate of
new documents is really slow, but the amount of data is
/11/2016 8:45 AM, Gian Maria Ricci - aka Alkampfer wrote:
> Probably due to the different reboot times, I’ve noticed that upon
> reboot all three leader shards are on a single machine. I’m expecting
> shard leaders to be distributed evenly between machines, because if
> all
I’ve configured three nodes in SolrCloud; everything seems OK, but in the log I
see this kind of warning:
SolrCloud is enabled for core xxx_shard3_replica1 but so is old-style
replication. Make sure you intend this behavior, it usually indicates a
mis-configuration. Master setting is true
I've a test SolrCloud installation consisting of three CentOS machines, each
one running one ZooKeeper node and one Solr instance. I've created a
collection with 3 shards and 2 replicas per shard; then, after some
tests, I rebooted all three machines.
Due to the different reboot times
I've issued a command to create some collections, but there was an error in
solrconfig.xml (I specified the wrong path to the dataimporthandler.jar files).
The creation of the collection failed, but now I don't know how to clean up
everything.
This is a test solrcloud where I'm experimenting in
e else and provide a link?
Best,
Erick
On Wed, Jan 6, 2016 at 3:22 AM, Gian Maria Ricci - aka Alkampfer <
alkamp...@nablasoft.com> wrote:
> I’ve issued a command to create some collections, but there was an
> error in solrconfig.xml (I’ve specified the wrong path to
> dataimporthandler.jar
ing a compiled jar is difficult but building from the code is
> > pretty straightforward and will only take you a few minutes.
> >
> > On Mon, 28 Dec 2015, 13:47 Gian Maria Ricci - aka Alkampfer <
> > alkampfer@nablas
oft.com> wrote:
> >
> >> Hi,
Hi,
I've read on the Solr wiki that SolrMeter is not actively developed anymore,
but I wonder if it is still valid for some performance tests or if there is
some better approach / tool.
I'd also like to know where I can find the latest compiled version of
SolrMeter instead of compiling with
to be slightly old.
--
Gian Maria Ricci
Cell: +39 320 0136949
-Original Message-
From: outlook_288fbf38c031d...@outlook.com
[mailto:outlook_288fbf38c031d...@outlook.com] On Behalf Of Gian Maria Ricci -
aka Alkampfer
Sent: Saturday, December 12, 2015 11:39
To: solr-user
Hi,
I just want some feedback on best practices for running incremental DIH. In
recent years I have always preferred a dedicated application that pushes data
into ElasticSearch / Solr, but now I have a situation where we are forced
to use DIH.
I have several SQL Server databases with a
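For the incremental part, DIH's delta queries are the usual mechanism; a hedged sketch of a db-data-config.xml (all table and column names are invented placeholders):

```xml
<!-- db-data-config.xml (sketch): delta import driven by a
     LastModified column. Adapt names to your schema. -->
<dataConfig>
  <dataSource driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
              url="jdbc:sqlserver://dbhost;databaseName=MyDb"
              user="solr" password="secret"/>
  <document>
    <entity name="item"
            query="SELECT Id, Title FROM Items"
            deltaQuery="SELECT Id FROM Items
                        WHERE LastModified &gt; '${dataimporter.last_index_time}'"
            deltaImportQuery="SELECT Id, Title FROM Items
                              WHERE Id = '${dataimporter.delta.Id}'"/>
  </document>
</dataConfig>
```

A delta run is then triggered with /dataimport?command=delta-import; ${dataimporter.last_index_time} is maintained by DIH itself in dataimport.properties.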
: Re: Use multiple instances simultaneously
On 12/11/2015 8:19 AM, Gian Maria Ricci - aka Alkampfer wrote:
> Thanks for all of your clarifications. I know that SolrCloud is a
> much better configuration than any other, but it has a
> considerably higher complexity. I just wan
model (copying state from one of the healthy nodes and DIH-ing the
diff). I would recommend using SolrCloud with a single shard and letting Solr
do the hard work.
Regards,
Emir
On 04.12.2015 14:37, Gian Maria Ricci - aka Alkampfer wrote:
> Many thanks for your response.
>
> I worked with Solr until earl
@lucene.apache.org
Subject: Re: Use multiple instances simultaneously
On 12/3/2015 1:25 AM, Gian Maria Ricci - aka Alkampfer wrote:
> In such a scenario could it be feasible to simply configure 2 or 3
> identical instances of Solr and configure the application that transfers
> data to so
Suppose that for some reason you are not able to use SolrCloud and are
forced to use the old Master-Slave approach to guarantee high availability.
In such a scenario, if the master fails, applications are still able to
search with the slaves, but clearly no more data can be indexed until the