Re: Solr migration related issues.

2020-11-05 Thread Modassar Ather
Thanks Erick.

I am going through the recommendations given in this email chain.

Best,
Modassar



Re: Solr migration related issues.

2020-11-05 Thread Erick Erickson
Oh dear.

You made me look at the reference guide. Ouch. 

We have this nice page, “Defining core.properties”, that talks about defining 
core.properties. Unfortunately it has _no_ warning that trying to use this in 
SolrCloud is a really bad idea. As in “don’t do it”. To make matters worse, as 
you have found out painfully, it kind of worked in cloud mode in times past.

Then the collections API doc says you can add property.name=value, with no 
mention that the name here should NOT be any of the properties necessary for 
SolrCloud to operate.

The problem here is that adding property.name=value would set that property to 
the _same_ value in all cores. Naming all the replicas of a collection the same 
thing is not officially supported; if it works in older Solrs, that was by 
chance, not design. And there’s no special provision for using that property as 
just a prefix; it’s really designed for custom properties. And, by the way, 
“name” is really kind of a no-op: the thing displayed in the drop-down is 
taken from Zookeeper’s node_name. Please don’t try to rename that.

I very strongly recommend that you stop trying to do this. Whatever you are 
doing that requires a specific name, I’d change _that_ process to use the names 
assigned by Solr. If it’s just for aesthetics, there’s really no good way to 
change what’s in the drop-down.

Best,
Erick




Re: Solr migration related issues.

2020-11-05 Thread Modassar Ather
Hi Shawn,

I understand that we should not modify core.properties and should instead use
the APIs to create the core and collection, and that is what I am doing now.
This question of naming the core comes from our older setup, where we have 12
shards, the collection and core are both named the same, and the cores were
discovered via core.properties with the entries mentioned in my previous mail.

Thanks for the responses. I will continue with the new collection and core
created by the APIs and test our indexing and queries.

Best,
Modassar




Re: Solr migration related issues.

2020-11-04 Thread Shawn Heisey

On 11/4/2020 9:32 PM, Modassar Ather wrote:

> Another thing: how can I control the core naming? I want the core name to
> be mycore instead of mycore_shard1_replica_n1/mycore_shard2_replica_n2.
> I tried setting it using property.name=mycore but it did not work.
> What can I do to achieve this? I am not able to find any config option.

Why would you need to do this, or even want to?  It sounds to me like an XY 
problem.

http://xyproblem.info/

> I understand the core.properties file is required for core discovery but
> when this file is present under a subdirectory of SOLR_HOME I see it not
> getting loaded and not available in Solr dashboard.


You should not be trying to manipulate core.properties files yourself. 
This is especially discouraged when Solr is running in cloud mode.


When you're in cloud mode, the collection information in zookeeper will 
always be consulted during core discovery.  If the found core is NOT 
described in zookeeper, it will not be loaded.  And in any recent Solr 
version when running in cloud mode, a core that is not referenced in ZK 
will be entirely deleted.


Thanks,
Shawn


Re: Solr migration related issues.

2020-11-04 Thread Modassar Ather
Hi Erick,

I have put the solr configs in Zookeeper and created a collection using the
following API call:

admin/collections?action=CREATE&name=mycore&numShards=2&replicationFactor=1&collection.configName=mycore&property.name=mycore
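For reference, a request like that can be assembled from a script. Below is a minimal Python sketch using only the standard library; the host and port are Solr's defaults and an assumption on my part, not something stated in this thread:

```python
from urllib.parse import urlencode

# Parameters copied from the CREATE call quoted above. The host and port
# are Solr's defaults and an assumption here, not stated in the thread.
params = {
    "action": "CREATE",
    "name": "mycore",
    "numShards": 2,
    "replicationFactor": 1,
    "collection.configName": "mycore",
    # property.name was part of the original call; later replies in this
    # thread advise against setting reserved properties this way.
    "property.name": "mycore",
}
url = "http://localhost:8983/solr/admin/collections?" + urlencode(params)
print(url)
# The request itself could then be issued with urllib.request.urlopen(url).
```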

The collection got created and I can see mycore_shard1_replica_n1 and
mycore_shard2_replica_n2 under Cores on the dashboard. The core.properties
file got created with the following values:
numShards=2
collection.configName=mycore
name=mycore
replicaType=NRT
shard=shard1
collection=mycore
coreNodeName=core_node3
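Those entries use Java properties syntax (key=value, one per line). If you need to inspect such a file from a script, a deliberately simplified reader might look like this (it ignores comments, escapes, and line continuations that a full properties parser would handle):

```python
def read_properties(path):
    """Parse a simple key=value properties file into a dict.

    Deliberately simplified: skips comments and blank lines, and ignores
    the Java escape/continuation rules a real properties parser handles.
    """
    props = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith(("#", "!")):
                continue
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props
```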

Another thing: how can I control the core naming? I want the core name to
be mycore instead of mycore_shard1_replica_n1/mycore_shard2_replica_n2.
I tried setting it using property.name=mycore but it did not work.
What can I do to achieve this? I am not able to find any config option.

I understand the core.properties file is required for core discovery, but when
this file is present under a subdirectory of SOLR_HOME I see it not getting
loaded and not available in the Solr dashboard.
Previous core.properties values:

numShards=2
name=mycore
collection=mycore
configSet=mycore

Can you please help me with this?

Best,
Modassar


Re: Solr migration related issues.

2020-11-04 Thread Erick Erickson
inline

> On Nov 4, 2020, at 2:17 AM, Modassar Ather  wrote:
> 
> Thanks Erick and Ilan.
> 
> I am using APIs to create core and collection and have removed all the
> entries from core.properties. Currently I am facing init failure and
> debugging it.
> Will write back if I am facing any issues.
> 

If that means you still _have_ a core.properties file and it’s empty, that won’t
work.

When Solr starts, it goes through “core discovery”. Starting at SOLR_HOME, it
recursively descends the directories, and whenever it finds a “core.properties”
file it says “aha! There’s a replica here. I'll go tell Zookeeper who I am and 
that 
I'm open for business”. It uses the values in core.properties to know what 
collection and shard it belongs to and which replica of that shard it is.

Incidentally, core discovery stops descending and moves to the next sibling
directory when it hits the first core.properties file so you can’t have a 
replica
underneath another replica in your directory tree.
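That discovery walk can be sketched as follows. This is not Solr's actual implementation, just an illustration of the traversal rule described above: descend until a core.properties is found, claim that directory as a replica, and do not descend any further beneath it.

```python
import os

def discover_cores(solr_home):
    """Collect directories under solr_home that hold a core.properties.

    Mirrors the rule described above: once a core.properties is found,
    that directory is treated as a core and its subtree is not entered.
    """
    cores = []
    for entry in sorted(os.listdir(solr_home)):
        path = os.path.join(solr_home, entry)
        if not os.path.isdir(path):
            continue
        if os.path.isfile(os.path.join(path, "core.properties")):
            cores.append(path)  # replica found; skip everything below it
        else:
            cores.extend(discover_cores(path))  # keep descending
    return cores
```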

You’ll save yourself a lot of grief if you start with an empty SOLR_HOME (except
for solr.xml, if you haven’t put it in Zookeeper. BTW, I’d recommend you do put
solr.xml in Zookeeper!).

Best,
Erick


Re: Solr migration related issues.

2020-11-03 Thread Modassar Ather
Thanks Erick and Ilan.

I am using the APIs to create the core and collection and have removed all the
entries from core.properties. Currently I am facing an init failure and
debugging it.
I will write back if I face any further issues.

Best,
Modassar



Re: Solr migration related issues.

2020-11-03 Thread Erick Erickson
Do note, though, that the default value for legacyCloud changed from
true to false, so even though you can get it to work by setting
this cluster prop, I wouldn’t…

The change in the default value is why it’s failing for you.
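For completeness, the cluster property in question would be set through the Collections API's CLUSTERPROP action. A sketch of building that request follows, with the thread's own caveat that re-enabling legacyCloud is discouraged and the mode is gone in Solr 9; localhost:8983 is the default host/port and an assumption here:

```python
from urllib.parse import urlencode

# CLUSTERPROP request that flips legacyCloud back on for Solr 8.x.
# localhost:8983 is Solr's default and an assumption; per this thread,
# relying on legacyCloud is discouraged since it is removed in Solr 9.
params = {"action": "CLUSTERPROP", "name": "legacyCloud", "val": "true"}
url = "http://localhost:8983/solr/admin/collections?" + urlencode(params)
print(url)
```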





Re: Solr migration related issues.

2020-11-03 Thread Ilan Ginzburg
I second Erick's recommendation, but just for the record: legacyCloud is
removed in (upcoming) Solr 9 and is still available in Solr 8.x. Most likely
this explains, Modassar, why you found it in the documentation.

Ilan


On Tue, Nov 3, 2020 at 5:11 PM Erick Erickson 
wrote:

> You absolutely need core.properties files. It’s just that they
> should be considered an “implementation detail” that you
> should rarely, if ever need to be aware of.
>
> Scripting manual creation of core.properties files in order
> to define your collections has never been officially supported, it
> just happened to work.
>
> Best,
> Erick
>
> > On Nov 3, 2020, at 11:06 AM, Modassar Ather 
> wrote:
> >
> > Thanks Erick for your response.
> >
> > I will certainly use the APIs and not rely on the core.properties. I was
> > going through the documentation on core.properties and found it to be still
> > there.
> > I have all the solr install scripts based on older Solr versions and wanted
> > to re-use the same as the core.properties way is still available.
> >
> > So does this mean that we do not need core.properties anymore?
> > How can we ensure that the core name is configurable and not dynamically
> > set?
> >
> > I will try to use the APIs to create the collection as well as the cores.
> >
> > Best,
> > Modassar
> >
> > On Tue, Nov 3, 2020 at 5:55 PM Erick Erickson 
> > wrote:
> >
> >> You’re relying on legacyMode, which is no longer supported. In
> >> older versions of Solr, if a core.properties file was found on disk Solr
> >> attempted to create the replica (and collection) on the fly. This is no
> >> longer true.
> >>
> >>
> >> Why are you doing this manually instead of using the collections API?
> >> You can precisely place each replica with that API in a way that’ll
> >> continue to be supported going forward.
> >>
> >> This really sounds like an XY problem, what is the use-case you’re
> >> trying to solve?
> >>
> >> Best,
> >> Erick
> >>
> >>> On Nov 3, 2020, at 6:39 AM, Modassar Ather 
> >> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I am migrating from Solr 6.5.1 to Solr 8.6.3. As part of the entire
> >>> upgrade, my first task is to install and configure Solr with the
> >>> core and collection. Solr is installed in SolrCloud mode.
> >>>
> >>> In Solr 6.5.1 I was using the following key values in the
> >>> core.properties file.
> >>> The configuration files were uploaded to zookeeper using the upconfig
> >>> command.
> >>> The core and collection were automatically created from the settings in
> >>> the core.properties files and the configSet uploaded to zookeeper, and
> >>> they used to display on the Solr 6.5.1 dashboard.
> >>>
> >>> numShards=12
> >>>
> >>> name=mycore
> >>>
> >>> collection=mycore
> >>>
> >>> configSet=mycore
> >>>
> >>>
> >>> With the latest Solr 8.6.3 the same approach is not working. As per my
> >>> understanding, the core is identified using the location of
> >>> core.properties, which is under /mycore/core.properties.
> >>>
> >>> Can you please help me with the following?
> >>>
> >>>
> >>>  - Is there any property I am missing to load the core and collection
> >>>    as it used to be in Solr 6.5.1 with the help of core.properties and
> >>>    config set on zookeeper?
> >>>  - The name of the core and collection should be configurable and not
> >>>    the dynamically generated names. How can I control that in the
> >>>    latest Solr?
> >>>  - Is the core and collection API the only way to create a core and
> >>>    collection, as I see that the core is also not getting listed even
> >>>    if the core.properties file is present?
> >>>
> >>> Please note that I will be doing a full indexing once the setup is
> >>> done.
> >>>
> >>> Kindly help me with your suggestions.
> >>>
> >>> Best,
> >>> Modassar
> >>
> >>
>
>


Re: Solr migration related issues.

2020-11-03 Thread Erick Erickson
You absolutely need core.properties files. It’s just that they
should be considered an “implementation detail” that you
should rarely, if ever, need to be aware of.

Scripting manual creation of core.properties files in order
to define your collections has never been officially supported, it
just happened to work.

Best,
Erick
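Erick's advice — define collections through the API rather than through hand-written core.properties files — can be sketched as below. This is a hedged example, not the poster's actual setup: the configset name "mycore" and the host localhost:8983 are placeholders, and it assumes the configset was already uploaded to ZooKeeper.

```shell
# Hedged sketch: the Collections API equivalent of the old core.properties
# workflow. Assumes a configset "mycore" was already uploaded to ZooKeeper
# (e.g. via bin/solr zk upconfig) and a SolrCloud node at localhost:8983 --
# both placeholders.
SOLR="http://localhost:8983/solr"
CREATE="$SOLR/admin/collections?action=CREATE&name=mycore&numShards=12&replicationFactor=1&collection.configName=mycore"
echo "$CREATE"     # inspect the request first
# curl "$CREATE"   # uncomment to run against a live cluster
```

Solr then assigns core names like mycore_shard1_replica_n1 itself, which is exactly the behavior discussed in this thread.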

> On Nov 3, 2020, at 11:06 AM, Modassar Ather  wrote:
> 
> Thanks Erick for your response.
> 
> I will certainly use the APIs and not rely on the core.properties. I was
> going through the documentation on core.properties and found it to be still
> there.
> I have all the solr install scripts based on older Solr versions and wanted
> to re-use the same as the core.properties way is still available.
> 
> So does this mean that we do not need core.properties anymore?
> How can we ensure that the core name is configurable and not dynamically
> set?
> 
> I will try to use the APIs to create the collection as well as the cores.
> 
> Best,
> Modassar
> 
> On Tue, Nov 3, 2020 at 5:55 PM Erick Erickson 
> wrote:
> 
>> You’re relying on legacyMode, which is no longer supported. In
>> older versions of Solr, if a core.properties file was found on disk Solr
>> attempted to create the replica (and collection) on the fly. This is no
>> longer true.
>> 
>> 
>> Why are you doing this manually instead of using the Collections API?
>> You can precisely place each replica with that API in a way that'll
>> continue to be supported going forward.
>> 
>> This really sounds like an XY problem, what is the use-case you’re
>> trying to solve?
>> 
>> Best,
>> Erick
>> 
>>> On Nov 3, 2020, at 6:39 AM, Modassar Ather 
>> wrote:
>>> 
>>> Hi,
>>> 
>>> I am migrating from Solr 6.5.1 to Solr 8.6.3. As a part of the entire
>>> upgrade I have the first task to install and configure the solr with the
>>> core and collection. The solr is installed in SolrCloud mode.
>>> 
>>> In Solr 6.5.1 I was using the following key values in core.properties
>> file.
>>> The configuration files were uploaded to zookeeper using the upconfig
>>> command.
>>> The core and collection was automatically created with the setting in
>>> core.properties files and the configSet uploaded in zookeeper and it used
>>> to display on the Solr 6.5.1 dashboard.
>>> 
>>> numShards=12
>>> 
>>> name=mycore
>>> 
>>> collection=mycore
>>> 
>>> configSet=mycore
>>> 
>>> 
>>> With the latest Solr 8.6.3 the same approach is not working. As per my
>>> understanding the core is identified using the location of
>> core.properties
>>> which is under */mycore/core.properties.*
>>> 
>>> Can you please help me with the following?
>>> 
>>> 
>>>  - Is there any property I am missing to load the core and collection as
>>>  it used to be in Solr 6.5.1 with the help of core.properties and
>> config set
>>>  on zookeeper?
>>>  - The name of the core and collection should be configurable and not
>> the
>>>  dynamically generated names. How can I control that in the latest Solr?
>>>  - Is the core and collection API the only way to create core and
>>>  collection as I see that the core is also not getting listed even if
>> the
>>>  core.properties file is present?
>>> 
>>> Please note that I will be doing a full indexing once the setup is done.
>>> 
>>> Kindly help me with your suggestions.
>>> 
>>> Best,
>>> Modassar
>> 
>> 



Re: Solr migration related issues.

2020-11-03 Thread Modassar Ather
Thanks Erick for your response.

I will certainly use the APIs and not rely on the core.properties. I was
going through the documentation on core.properties and found it to be still
there.
I have all the solr install scripts based on older Solr versions and wanted
to re-use the same as the core.properties way is still available.

So does this mean that we do not need core.properties anymore?
How can we ensure that the core name is configurable and not dynamically
set?

I will try to use the APIs to create the collection as well as the cores.

Best,
Modassar

On Tue, Nov 3, 2020 at 5:55 PM Erick Erickson 
wrote:

> You’re relying on legacyMode, which is no longer supported. In
> older versions of Solr, if a core.properties file was found on disk Solr
> attempted to create the replica (and collection) on the fly. This is no
> longer true.
>
>
> Why are you doing this manually instead of using the Collections API?
> You can precisely place each replica with that API in a way that'll
> continue to be supported going forward.
>
> This really sounds like an XY problem, what is the use-case you’re
> trying to solve?
>
> Best,
> Erick
>
> > On Nov 3, 2020, at 6:39 AM, Modassar Ather 
> wrote:
> >
> > Hi,
> >
> > I am migrating from Solr 6.5.1 to Solr 8.6.3. As a part of the entire
> > upgrade I have the first task to install and configure the solr with the
> > core and collection. The solr is installed in SolrCloud mode.
> >
> > In Solr 6.5.1 I was using the following key values in core.properties
> file.
> > The configuration files were uploaded to zookeeper using the upconfig
> > command.
> > The core and collection was automatically created with the setting in
> > core.properties files and the configSet uploaded in zookeeper and it used
> > to display on the Solr 6.5.1 dashboard.
> >
> > numShards=12
> >
> > name=mycore
> >
> > collection=mycore
> >
> > configSet=mycore
> >
> >
> > With the latest Solr 8.6.3 the same approach is not working. As per my
> > understanding the core is identified using the location of
> core.properties
> > which is under */mycore/core.properties.*
> >
> > Can you please help me with the following?
> >
> >
> >   - Is there any property I am missing to load the core and collection as
> >   it used to be in Solr 6.5.1 with the help of core.properties and
> config set
> >   on zookeeper?
> >   - The name of the core and collection should be configurable and not
> the
> >   dynamically generated names. How can I control that in the latest Solr?
> >   - Is the core and collection API the only way to create core and
> >   collection as I see that the core is also not getting listed even if
> the
> >   core.properties file is present?
> >
> > Please note that I will be doing a full indexing once the setup is done.
> >
> > Kindly help me with your suggestions.
> >
> > Best,
> > Modassar
>
>


Re: Solr migration related issues.

2020-11-03 Thread Erick Erickson
You’re relying on legacyMode, which is no longer supported. In
older versions of Solr, if a core.properties file was found on disk Solr
attempted to create the replica (and collection) on the fly. This is no
longer true.


Why are you doing this manually instead of using the Collections API?
You can precisely place each replica with that API in a way that'll
continue to be supported going forward.

This really sounds like an XY problem, what is the use-case you’re
trying to solve?

Best,
Erick
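The "precisely place each replica" part of Erick's answer can be sketched with an ADDREPLICA call that pins a replica to a chosen node. A hedged example: collection, shard, and node names below are placeholders; the real node names come from a CLUSTERSTATUS call.

```shell
# Hedged sketch: placing a replica on a specific node via the Collections API.
# "mycore", "shard1" and "host1:8983_solr" are placeholders; list real node
# names with /admin/collections?action=CLUSTERSTATUS.
SOLR="http://localhost:8983/solr"
ADD="$SOLR/admin/collections?action=ADDREPLICA&collection=mycore&shard=shard1&node=host1:8983_solr"
echo "$ADD"     # inspect the request first
# curl "$ADD"   # uncomment to run against a live cluster
```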

> On Nov 3, 2020, at 6:39 AM, Modassar Ather  wrote:
> 
> Hi,
> 
> I am migrating from Solr 6.5.1 to Solr 8.6.3. As a part of the entire
> upgrade I have the first task to install and configure the solr with the
> core and collection. The solr is installed in SolrCloud mode.
> 
> In Solr 6.5.1 I was using the following key values in core.properties file.
> The configuration files were uploaded to zookeeper using the upconfig
> command.
> The core and collection was automatically created with the setting in
> core.properties files and the configSet uploaded in zookeeper and it used
> to display on the Solr 6.5.1 dashboard.
> 
> numShards=12
> 
> name=mycore
> 
> collection=mycore
> 
> configSet=mycore
> 
> 
> With the latest Solr 8.6.3 the same approach is not working. As per my
> understanding the core is identified using the location of core.properties
> which is under */mycore/core.properties.*
> 
> Can you please help me with the following?
> 
> 
>   - Is there any property I am missing to load the core and collection as
>   it used to be in Solr 6.5.1 with the help of core.properties and config set
>   on zookeeper?
>   - The name of the core and collection should be configurable and not the
>   dynamically generated names. How can I control that in the latest Solr?
>   - Is the core and collection API the only way to create core and
>   collection as I see that the core is also not getting listed even if the
>   core.properties file is present?
> 
> Please note that I will be doing a full indexing once the setup is done.
> 
> Kindly help me with your suggestions.
> 
> Best,
> Modassar



Solr migration related issues.

2020-11-03 Thread Modassar Ather
Hi,

I am migrating from Solr 6.5.1 to Solr 8.6.3. As part of the entire
upgrade, my first task is to install and configure Solr with the
core and collection. Solr is installed in SolrCloud mode.

In Solr 6.5.1 I was using the following key values in the core.properties file.
The configuration files were uploaded to ZooKeeper using the upconfig
command.
The core and collection were automatically created from the settings in the
core.properties files and the configSet uploaded to ZooKeeper, and they used
to display on the Solr 6.5.1 dashboard.

numShards=12

name=mycore

collection=mycore

configSet=mycore


With the latest Solr 8.6.3 the same approach is not working. As per my
understanding the core is identified using the location of core.properties
which is under */mycore/core.properties.*

Can you please help me with the following?


   - Is there any property I am missing to load the core and collection as
   it used to be in Solr 6.5.1 with the help of core.properties and config set
   on zookeeper?
   - The name of the core and collection should be configurable and not the
   dynamically generated names. How can I control that in the latest Solr?
   - Is the core and collection API the only way to create core and
   collection as I see that the core is also not getting listed even if the
   core.properties file is present?

Please note that I will be doing a full indexing once the setup is done.

Kindly help me with your suggestions.

Best,
Modassar


Re: Solr Migration to The AWS Cloud

2019-06-06 Thread Jörn Franke
I guess you can do this by switching off the source data center, but you would 
need to look more in your architecture and especially applications that use 
solr to verify this.

It may look easy but I would test it before.

> Am 06.06.2019 um 17:24 schrieb Joe Lerner :
> 
> Ooohh...interesting. Then, presumably there is some way to have what was the
> cross-data-center replica become the new "primary"? 
> 
> It's getting too easy!
> 
> Joe
> 
> 
> 
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Solr Migration to The AWS Cloud

2019-06-06 Thread Joe Lerner
Ooohh...interesting. Then, presumably there is some way to have what was the
cross-data-center replica become the new "primary"? 

It's getting too easy!

Joe



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Solr Migration to The AWS Cloud

2019-06-05 Thread Jörn Franke
An alternative to backup and restore could be the data center replication in 
Solr:

https://lucene.apache.org/solr/guide/7_3/cross-data-center-replication-cdcr.html

> Am 05.06.2019 um 19:18 schrieb Joe Lerner :
> 
> Hi,
> 
> Our application is migrating from on-premise to AWS. We are currently on
> Solr Cloud 7.3.0.
> 
> We are interested in exploring ways to do this with minimal down-time, as
> in maybe one hour.
> 
> One strategy would be to set up a new empty Solr Cloud instance in AWS, and
> reindex the world. But reindexing takes us around ~14 hours, so, that is not
> a viable approach.
> 
> I think one very attractive option would be to set up a new live
> node/replica in AWS, and, once it replicates, we're essentially
> done--literally zero down time (for search anyway). But I don't think we're
> going to be able to do that from a networking/security perspective.
> 
> From what I've seen, the other option is to copy the Solr index files to
> AWS, and somehow use them to set up a new pre-indexed instance. Do I need to
> shut down my application and Solr on prem before I copy the files, or can I
> copy while things are active. 
> 
> If I can do the copy while the application is running, I can probably:
> 
> 1. Copy files to AWS Friday at noon
> 2. Keep a record of what got re-indexed after Friday at noon (or, heck,
> 11:45am)
> 3. Start up the new Solr in AWS against the copied files
> 4. Reindex the stuff that got re-indexed after Friday at noon
> 
> Is there a cleaner/simpler/more official way of moving an index from one
> place to another? Export/import, or something like that?
> 
> Thanks for any help!
> 
> Joe
> 
> 
> 
> 
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Solr Migration to The AWS Cloud

2019-06-05 Thread Shawn Heisey

On 6/5/2019 11:18 AM, Joe Lerner wrote:

Our application is migrating from on-premise to AWS. We are currently on
Solr Cloud 7.3.0.

We are interested in exploring ways to do this with minimal down-time, as
in maybe one hour.

One strategy would be to set up a new empty Solr Cloud instance in AWS, and
reindex the world. But reindexing takes us around ~14 hours, so, that is not
a viable approach.


You could go this route by reindexing in AWS and then switching your 
application once the index is ready.



I think one very attractive option would be to set up a new live
node/replica in AWS, and, once it replicates, we're essentially
done--literally zero down time (for search anyway). But I don't think we're
going to be able to do that from a networking/security perspective.

 From what I've seen, the other option is to copy the Solr index files to
AWS, and somehow use them to set up a new pre-indexed instance. Do I need to
shut down my application and Solr on prem before I copy the files, or can I
copy while things are active.


If the index is changing, life can get interesting.  If it's not 
changing, then as long as the OS permits it (most of them should) you're 
free to copy while Solr is running.



If I can do the copy while the application is running, I can probably:

1. Copy files to AWS Friday at noon
2. Keep a record of what got re-indexed after Friday at noon (or, heck,
11:45am)
3. Start up the new Solr in AWS against the copied files
4. Reindex the stuff that got re-indexed after Friday at noon


If your existing cloud is on an OS where rsync is natively available, it 
should be pretty easy to do what you're trying with very little 
downtime, possibly just long enough to reconfigure and restart your 
applications.
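Shawn's rsync suggestion can be sketched as follows. A hedged example, not a tested migration runbook: the source path and the AWS host are placeholders, and it assumes the on-prem cluster is on an OS with rsync available, as he notes.

```shell
# Hedged sketch of the rsync approach: one full pass while Solr is still
# running, then a short final pass during the cut-over window to ship only
# the delta. Paths and the destination host are placeholders.
SRC="/var/solr/data/"
DEST="ec2-user@aws-host:/var/solr/data/"
CMD="rsync -av --delete $SRC $DEST"
echo "$CMD"   # first pass, Solr still running
# Stop indexing, run the same command again for the small remaining delta,
# then start Solr in AWS and repoint the application.
```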



Is there a cleaner/simpler/more official way of moving an index from one
place to another? Export/import, or something like that?


The Backup/Restore capability in the Collections API is probably the 
most official you're going to get.
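The Backup/Restore route Shawn points at can be sketched as below. A hedged example: the backup name, collection name, and location are placeholders, and the location must be a path (or shared filesystem) that every Solr node can reach.

```shell
# Hedged sketch of Collections API backup/restore. All names and the
# /mnt/backups location are placeholders; "location" must be visible to
# every node in the respective cluster.
SOLR="http://localhost:8983/solr"
BACKUP="$SOLR/admin/collections?action=BACKUP&name=mig1&collection=mycoll&location=/mnt/backups"
RESTORE="$SOLR/admin/collections?action=RESTORE&name=mig1&collection=mycoll&location=/mnt/backups"
echo "$BACKUP"
echo "$RESTORE"
# Run BACKUP on-prem, copy /mnt/backups to storage the AWS cluster can see,
# then run RESTORE there.
```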


I will write a followup with the way that I would do this.  That's going 
to take me a while.  I might put it on my blog instead and provide a link.


One thing to note:  Lock down your AWS firewall so only trusted 
systems/people can reach your Solr install.  That's the best way you can 
secure things.


Thanks,
Shawn


Solr Migration to The AWS Cloud

2019-06-05 Thread Joe Lerner
Hi,

Our application is migrating from on-premise to AWS. We are currently on
Solr Cloud 7.3.0.

We are interested in exploring ways to do this with minimal down-time, as
in maybe one hour.

One strategy would be to set up a new empty Solr Cloud instance in AWS, and
reindex the world. But reindexing takes us around ~14 hours, so, that is not
a viable approach.

I think one very attractive option would be to set up a new live
node/replica in AWS, and, once it replicates, we're essentially
done--literally zero down time (for search anyway). But I don't think we're
going to be able to do that from a networking/security perspective.

From what I've seen, the other option is to copy the Solr index files to
AWS, and somehow use them to set up a new pre-indexed instance. Do I need to
shut down my application and Solr on prem before I copy the files, or can I
copy while things are active. 

If I can do the copy while the application is running, I can probably:

1. Copy files to AWS Friday at noon
2. Keep a record of what got re-indexed after Friday at noon (or, heck,
11:45am)
3. Start up the new Solr in AWS against the copied files
4. Reindex the stuff that got re-indexed after Friday at noon

Is there a cleaner/simpler/more official way of moving an index from one
place to another? Export/import, or something like that?

Thanks for any help!

Joe




--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: SOLR migration

2018-06-27 Thread Jan Høydahl
Hi,

Apache Solr is an open source project, with contributors and users doing our 
best
to both document and answer questions, but sometimes it may take time, either 
due
busy schedules or because the answer is unknown or an uncommon one that very few
have tried.

I suspect the last reason for the silence on this since there is no "feature" 
for migrating
an index to another disk as such. Although Emir's answer could be an option.

If I were in your shoes here's how I'd solve it:

1. Find yourself a maintenance window and plan the operation in detail
2. Shut down all Solr nodes
3. Copy SOLR_HOME (where your data is) to the exact same path on the F drive, e.g. from 
E:\solr\data to F:\solr\data
4. Unmount both drives and re-mount with the drive letters swapped (the new small disk 
becomes E while the old disk becomes F)
5. Start Solr on all nodes

The length of your down-time depends on the amount of data

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 18. jun. 2018 kl. 15:56 skrev Ana Mercan (RO) :
> 
> Hi,
> 
> I have the following scenario, I'm having a shared cluster solr
> installation environment (app server 1-app server 2 load balanced) which
> has 4 solr instances.
> 
> After reviewing the space audit we have noticed that the partition where
> the installation resides is too big versus what is used in terms of space.
> 
> Therefore we have installed a new drive which is smaller and now we want to
> migrate from the old drive (E:) to the new drive (F:).
> 
> Can you please provide an official answer whether this is a supported
> scenario?
> 
> If yes, will you please share the steps with us?
> 
> Thanks,
> 
> Ana
> 
> -- 
> The information transmitted is intended only for the person or entity to 
> which it is addressed and may contain confidential and/or privileged 
> material. Any review, retransmission, dissemination or other use of, or 
> taking of any action in reliance upon, this information by persons or 
> entities other than the intended recipient is prohibited. If you received 
> this in error, please contact the sender and delete the material from any 
> computer.



Re: SOLR migration

2018-06-19 Thread Emir Arnautović
Hi Ana,
There is no documentation because this is not something that is common. 
Assuming you are using SolrCloud and that you don’t want any downtime, what you 
could do is set up a new Solr node on the same box but configure it to use the 
new disk. Once it is set up, use ADDREPLICA and REMOVEREPLICA to “move” 
replicas from one node to the other, then shut down the old node.
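Emir's replica-move can be sketched with two Collections API calls. A hedged example: in the Collections API the delete action is spelled DELETEREPLICA, and every collection, shard, node, and replica name below is a placeholder.

```shell
# Hedged sketch of the replica "move": add a replica on the node that uses the
# new disk, wait for it to become active, then delete the old one. Names are
# placeholders; the Collections API delete action is DELETEREPLICA.
SOLR="http://localhost:8983/solr"
ADD="$SOLR/admin/collections?action=ADDREPLICA&collection=coll1&shard=shard1&node=host:8984_solr"
DEL="$SOLR/admin/collections?action=DELETEREPLICA&collection=coll1&shard=shard1&replica=core_node1"
echo "$ADD"
echo "$DEL"
# curl "$ADD", wait until the new replica is active, then curl "$DEL".
```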

If you can afford downtime, you can simply set up new Solr to use new disk, 
create empty collections and copy indices from one disk to another.

HTH,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/



> On 19 Jun 2018, at 10:26, Ana Mercan (RO)  wrote:
> 
> Hello guys,
> 
> I would appreciate if you could kindly treat this topic with priority as
> the lack of documentation is kind of a blocker for us.
> 
> Thanks in advance,
> Ana
> 
> 
> On Mon, Jun 18, 2018 at 4:56 PM, Ana Mercan (RO)  wrote:
> 
>> Hi,
>> 
>> I have the following scenario, I'm having a shared cluster solr
>> installation environment (app server 1-app server 2 load balanced) which
>> has 4 solr instances.
>> 
>> After reviewing the space audit we have noticed that the partition where
>> the installation resides is too big versus what is used in terms of space.
>> 
>> Therefore we have installed a new drive which is smaller and now we want
>> to migrate from the old drive (E:) to the new drive (F:).
>> 
>> Can you please provide an official answer whether this is a supported
>> scenario?
>> 
>> If yes, will you please share the steps with us?
>> 
>> Thanks,
>> 
>> Ana
>> 
> 



Re: SOLR migration

2018-06-19 Thread Ana Mercan (RO)
Hello guys,

I would appreciate if you could kindly treat this topic with priority as
the lack of documentation is kind of a blocker for us.

Thanks in advance,
Ana


On Mon, Jun 18, 2018 at 4:56 PM, Ana Mercan (RO)  wrote:

> Hi,
>
> I have the following scenario, I'm having a shared cluster solr
> installation environment (app server 1-app server 2 load balanced) which
> has 4 solr instances.
>
> After reviewing the space audit we have noticed that the partition where
> the installation resides is too big versus what is used in terms of space.
>
> Therefore we have installed a new drive which is smaller and now we want
> to migrate from the old drive (E:) to the new drive (F:).
>
> Can you please provide an official answer whether this is a supported
> scenario?
>
> If yes, will you please share the steps with us?
>
> Thanks,
>
> Ana
>



SOLR migration

2018-06-18 Thread Ana Mercan (RO)
Hi,

I have the following scenario, I'm having a shared cluster solr
installation environment (app server 1-app server 2 load balanced) which
has 4 solr instances.

After reviewing the space audit we have noticed that the partition where
the installation resides is too big versus what is used in terms of space.

Therefore we have installed a new drive which is smaller and now we want to
migrate from the old drive (E:) to the new drive (F:).

Can you please provide an official answer whether this is a supported
scenario?

If yes, will you please share the steps with us?

Thanks,

Ana



RE: FAST to SOLR migration

2016-09-23 Thread Garth Grimm
Have you evaluated whether the "mm" parameter might help?

https://cwiki.apache.org/confluence/display/solr/The+DisMax+Query+Parser#TheDisMaxQueryParser-Themm(MinimumShouldMatch)Parameter
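Garth's mm suggestion maps directly onto the "at least 3 terms" requirement in the question. A hedged sketch: it assumes an (e)dismax query over a field actually named ngram on a core named mycore, both placeholders.

```shell
# Hedged sketch: require at least 3 of the 6 terms to match by setting
# minimum-should-match (mm) on an edismax query. Core name, field name, and
# host are placeholders, not the poster's real schema.
Q='ngram:(750 500 000 000 000 000)'
URL="http://localhost:8983/solr/mycore/select?defType=edismax&q=${Q}&mm=3"
echo "$URL"
# In practice send q with --data-urlencode; mm=3 drops documents that match
# fewer than three of the optional terms.
```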

-Original Message-
From: preeti kumari [mailto:preeti.bg...@gmail.com] 
Sent: Friday, September 23, 2016 5:32 AM
To: solr-user@lucene.apache.org
Subject: FAST to SOLR migration

Hi All,

I am trying to migrate FAST esp to SOLR search engine.

I am trying to implement mode="ONEAR" from FAST in solr.

Please let me know if anyone has any idea about this.

ngram:string("750 500 000 000 000 000",mode="ONEAR")

In Solr we are splitting "750 500 000 000 000 000" into separate terms, but it 
gives me matches even if only one of the terms matches, e.g. a match on ngram 
750. This results in lots of irrelevant matches. I need matches where at least 
3 terms from the ngram match.

Thanks
Preeti


FAST to SOLR migration

2016-09-23 Thread preeti kumari
Hi All,

I am trying to migrate FAST esp to SOLR search engine.

I am trying to implement mode="ONEAR" from FAST in solr.

Please let me know if anyone has any idea about this.

ngram:string("750 500 000 000 000 000",mode="ONEAR")

In Solr we are splitting "750 500 000 000 000 000" into separate terms, but it
gives me matches even if only one of the terms matches, e.g. a match on ngram
750. This results in lots of irrelevant matches. I need matches where
at least 3 terms from the ngram match.

Thanks
Preeti


Re: Solr Migration 1.4.1 -> 5.2.1

2015-10-15 Thread Shawn Heisey
On 10/15/2015 9:53 AM, fabigol wrote:
> I picked up an old project working with Solr 1.4.1, JBoss Portal and Java 1.5.
> I am trying to migrate to Solr 5.2.1 running on Java 1.7.
> The Solr indexing works but I fail to display the new results in my
> application.
> I want to know if I must update the jars in my project or if the old
> client (solr-solrj) can work with my new Solr version.
> How do I do it?

Erick gave you information about Solr itself.  Here's some info for SolrJ:

If you want a 1.x version of SolrJ to work with Solr 3.x or later, you
must change the response parser to XML.

server.setParser(new XMLResponseParser());

https://wiki.apache.org/solr/Solrj#SolrJ.2FSolr_cross-version_compatibility

Depending on exactly how you use SolrJ, there may be other things that
need changing for compatibility, but the XML response parser will be
absolutely required.

Updating SolrJ is probably a better option, but you may need to update
your Java version too.  You will definitely need to change jars for
SolrJ dependencies.  Some code changes, such as HttpSolrServer to
HttpSolrClient, are recommended.

Thanks,
Shawn



Re: Solr Migration 1.4.1 -> 5.2.1

2015-10-15 Thread Erick Erickson
Solr does not guarantee  backwards compatibility more than one major
version back. That is,
Solr 3.x is guaranteed to at least read Solr 1.4 indexes (there was no
Solr 2.x).
Solr 4x can read 3x but not 1.x.
Solr 5x can read 4x indexes, but not 3x

You really only have two choices here. You _could_ try to install Solr
3.x and optimize.
Then install Solr 4x and optimize. Hopefully this rewrites the entire
index to 4.x
Then install Solr 5x.

You could also try the indexupgrader tool, see:
https://lucene.apache.org/core/4_0_0/core/org/apache/lucene/index/IndexUpgrader.html.
I'm not sure how many revisions this can "reach back", i.e. if the 4.x
index upgrader
can read the 1.4, then you're pretty well set, since Solr 5 should be
able to read the 4.x
format.
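The IndexUpgrader invocation from the linked javadoc can be sketched as below. A hedged example: the Lucene jar version and the index path are placeholders, and as Erick cautions, each run only upgrades the index by one major-version step, so it would have to be repeated per release.

```shell
# Hedged sketch of running Lucene's IndexUpgrader against a core's index
# directory. Jar version and index path are placeholders; run once per
# major-version step of the upgrade chain.
CMD="java -cp lucene-core-4.10.4.jar org.apache.lucene.index.IndexUpgrader -verbose /var/solr/data/mycore/index"
echo "$CMD"
# Stop Solr before running this; the tool rewrites all segments in place.
```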

But what I'd really do is just re-index the corpus if at all possible.
The index upgrade process
works reasonably well, but there will be some options that have been
added, others that have
been deprecated in terms of the schema etc.

Best,
Erick

On Thu, Oct 15, 2015 at 8:53 AM, fabigol  wrote:
> Hi,
> I picked up an old project working with Solr 1.4.1, JBoss Portal and Java 1.5.
> I am trying to migrate to Solr 5.2.1 running on Java 1.7.
> The Solr indexing works but I fail to display the new results in my
> application.
> I want to know if I must update the jars in my project or if the old
> client (solr-solrj) can work with my new Solr version.
> How do I do it?
>
>
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Solr-Migration-1-4-1-5-2-1-tp4234599.html
> Sent from the Solr - User mailing list archive at Nabble.com.


Solr Migration 1.4.1 -> 5.2.1

2015-10-15 Thread fabigol
Hi,
I picked up an old project working with Solr 1.4.1, JBoss Portal and Java 1.5.
I am trying to migrate to Solr 5.2.1 running on Java 1.7.
The Solr indexing works but I fail to display the new results in my
application.
I want to know if I must update the jars in my project or if the old
client (solr-solrj) can work with my new Solr version.
How do I do it?

 



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-Migration-1-4-1-5-2-1-tp4234599.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Planning Solr migration to production: clean and autoSoftCommit

2015-07-13 Thread wwang525
Hi Erick,

I think this is a good solution. It is going to work, although I have not yet
implemented it with the HTTP API, which I found in
https://wiki.apache.org/solr/SolrReplication.

On my local machine, a total of 800MB of index files was "downloaded" to
another folder within a minute. However, transferring the index files across
the network could take longer.

I will test it with two-machine scenario.

Thanks



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Planning-Solr-migration-to-production-clean-and-autoSoftCommit-tp4216736p4217122.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Planning Solr migration to production: clean and autoSoftCommit

2015-07-13 Thread Erick Erickson
bq: When the slave instances poll the indexing instance (master), do these slave
instances also auto warm queries in the existing cache

Yes.

bq: When we talk about the "forced replication" solution, are we pushing
/overwriting all the old index files with the new index files?

I believe so, but don't know the entire details. In your situation this
is what'll
happen anyway since you're cleaning, right? So it really doesn't matter if
you do a fetchindex or just disable/enable polling, the work will essentially
be the same.

bq: do we need to restart Solr instance?
no

bq: In addition, will slave instances warmed up in any
way?

all autowarming will be done.


Really, I'd just start by disabling replication on the master, doing the
indexing, then re-enabling it. The rest should "just happen".
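Erick's disable/re-enable sequence can be sketched with the replication handler commands from the SolrReplication wiki page. A hedged example: the master/slave host names and core name are placeholders.

```shell
# Hedged sketch of the sequence: pause replication on the master, run the
# clean full re-index, then re-enable. Hosts and core name are placeholders;
# fetchindex on a slave forces a pull instead of waiting for the next poll.
MASTER="http://master:8983/solr/mycore/replication"
SLAVE="http://slave:8983/solr/mycore/replication"
echo "$MASTER?command=disablereplication"   # 1. before indexing starts
# ...run the full (clean) re-index and commit...
echo "$MASTER?command=enablereplication"    # 2. after the index is complete
echo "$SLAVE?command=fetchindex"            # 3. optional: pull immediately
```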

Best,
Erick


On Mon, Jul 13, 2015 at 10:48 AM, wwang525  wrote:
> Hi Erick,
>
> That status request shows if the Solr instance is "busy" or "idle". I think
> this is a doable option to check if the indexing process completed (idle) or
> not (busy).
>
> Now, I have some concern about the solution of not using the default polling
> mechanism from the slave instance to the master instance.
>
> The load test showed that the initial batches of requests got much longer
> response time than later batches after the Solr server was started up.
> Gradually, the performance got much better, presumably due to the cache
> being warmed up .
>
> I understand that the indexing process will commit the changes and also auto
> warms queries in the existing cache. In this case, the indexing Solr
> instance will be in a good shape to serve the requests after the indexing
> process is completed.
>
> The question:
>
> When the slave instances poll the indexing instance (master), do these slave
> instances also auto warm queries in the existing cache? If it does, then the
> polling mechanism will also make the slave instance more ready to serve
> requests (more performant) at any time.
>
> When we talk about the "forced replication" solution, are we pushing
> /overwriting all the old index files with the new index files? do we need to
> restart Solr instance? In addition, will slave instances warmed up in any
> way?
>
> If there are too many issues with the "force replication", I might as well
> work out the "incremental indexing" option.
>
> Thanks
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Planning-Solr-migration-to-production-clean-and-autoSoftCommit-tp4216736p4217102.html
> Sent from the Solr - User mailing list archive at Nabble.com.


Re: Planning Solr migration to production: clean and autoSoftCommit

2015-07-13 Thread wwang525
Hi Erick,

That status request shows whether the Solr instance is "busy" or "idle". I think
this is a doable option to check whether the indexing process has completed
(idle) or not (busy).

Now, I have some concerns about the solution of not using the default polling
mechanism from the slave instances to the master instance.

The load test showed that the initial batches of requests got much longer
response times than later batches after the Solr server was started up.
Gradually, the performance got much better, presumably due to the cache
being warmed up.

I understand that the indexing process will commit the changes and also
auto-warm queries in the existing cache. In this case, the indexing Solr
instance will be in good shape to serve requests after the indexing
process is completed.

The question:

When the slave instances poll the indexing instance (master), do these slave
instances also auto-warm queries in the existing cache? If they do, then the
polling mechanism will also make the slave instances more ready to serve
requests (more performant) at any time.

When we talk about the "forced replication" solution, are we
pushing/overwriting all the old index files with the new index files? Do we
need to restart the Solr instances? In addition, will the slave instances be
warmed up in any way?

If there are too many issues with "forced replication", I might as well
work out the "incremental indexing" option.

Thanks



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Planning-Solr-migration-to-production-clean-and-autoSoftCommit-tp4216736p4217102.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Planning Solr migration to production: clean and autoSoftCommit

2015-07-11 Thread Erick Erickson
I'll leave the DIH question to someone who knows more about it than I do.
You're right, though: somehow you have to know when you're done. A very
quick Google search shows the "status" command; have you tried that?

https://cwiki.apache.org/confluence/display/solr/Uploading+Structured+Data+Store+Data+with+the+Data+Import+Handler
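
If the status route works, a sketch of the check might look like the following. The host, core name, and the exact JSON shape are assumptions based on a typical DIH status response (with `wt=json`, the response carries a `status` field that reads `busy` while an import is running); it is demonstrated on a canned response rather than a live server:

```shell
#!/bin/sh
# Hedged sketch: decide "busy" vs. "idle" from the JSON that
#   curl -s "http://master:8983/solr/collection1/dataimport?command=status&wt=json"
# would return (host and core name are placeholders).
dih_is_busy() {
  case "$1" in
    *'"status":"busy"'*) return 0 ;;   # import still running
    *)                   return 1 ;;   # idle (or no import started)
  esac
}

# Demonstrate on a canned response instead of hitting a live server:
sample='{"command":"status","status":"busy","importResponse":"A command is still running..."}'
if dih_is_busy "$sample"; then echo "busy"; else echo "idle"; fi   # prints: busy
```

A cron or wrapper script could loop on this check with a sleep until the function reports idle, then kick off replication.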

The other option is to move to a SolrJ program.

Best,
Erick



Re: Planning Solr migration to production: clean and autoSoftCommit

2015-07-10 Thread Wenbin Wang
Hi Erick,

Scheduling the indexing job is not an issue. The question is how to push
the index to the other two slave instances while the polling from those two
slave instances is being manipulated.

In the first option you proposed, I need to detect whether the indexing job
has completed, and then force replication. In this case, polling is not enabled.

In the second option, I also need to detect the status of the indexing job
and enable/disable polling from the two slave machines.

Is there any API to do it?

In addition, it looks like I also need to make this job poll the indexing
machine to check for a new version of the index. I might be able to get
around this requirement by using a scheduled job, since I know roughly how
long the indexing job is going to take, and execute the job well after the
indexing job should be finished.

Thanks



Re: Planning Solr migration to production: clean and autoSoftCommit

2015-07-10 Thread Erick Erickson
bq: The re-indexing is going to be every 4 hours or even every 2 hours a day, so
it is not rare. Manually managing replication is not an option

Why not? Couldn't this all be done from a shell script run via a cron job?
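
For what it's worth, the cron side of that suggestion can be a one-liner. The script path and log path here are invented examples, not anything from this thread; the schedule matches the "every 4 hours" cadence mentioned above:

```shell
# Hypothetical crontab entry: run the re-index/replication script
# every 4 hours and append its output to a log.
# 0 */4 * * * /opt/solr/scripts/reindex.sh >> /var/log/solr-reindex.log 2>&1
```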



Re: Planning Solr migration to production: clean and autoSoftCommit

2015-07-10 Thread wwang525
Hi Erick,

It is Solr 4.7. For the time being, we are considering the old style
master/slave configuration.

The re-indexing is going to be every 4 hours or even every 2 hours a day, so
it is not rare. Manually managing replication is not an option. Is there any
other easy-to-manage option?

Thanks



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Planning-Solr-migration-to-production-clean-and-autoSoftCommit-tp4216736p4216744.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Planning Solr migration to production: clean and autoSoftCommit

2015-07-10 Thread Erick Erickson
You're confusing a couple of things here.

First, I'm assuming that you are NOT using SolrCloud, but older-style
master/slave.
If that's not true, disregard the rest of this.

autoSoftCommit is _local_ and has nothing to do with changing the
Lucene segments.
And since you're not searching on the master, you might as well set
this to -1. It
really has no effect on the slaves since they're not doing any indexing.
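
In solrconfig.xml terms, my reading of that advice is the fragment below; `-1` for `maxTime` disables the timed soft commits:

```xml
<!-- Master (indexing) instance: no searches are served here, so timed
     soft commits buy nothing; -1 disables them. -->
<autoSoftCommit>
  <maxTime>-1</maxTime>
</autoSoftCommit>
```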

Slaves periodically poll the master to see if the index has changed. What you're
describing is not having that process occur until after DIH is done in
order not to
get partial views of the data. What I'd do is just use the replication
API to force
replication after you know DIH is done. Especially in a situation
where re-indexing
is rare, this may be your best option. See:
https://cwiki.apache.org/confluence/display/solr/Index+Replication

Or just disable polling on the slaves, do the DIH thing, then re-enable polling.
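
Sketched with placeholder hosts and core name, the second option looks like this; the replication commands themselves (disablepoll, fetchindex, enablepoll) come from the replication API page linked above. The script only prints the curl commands, so it runs without a live cluster:

```shell
#!/bin/sh
# Sketch of "disable polling / re-index / force fetch / re-enable polling".
# Hosts and core name are placeholders; adjust for your setup.
SLAVES="slave1:8983 slave2:8983"
CORE="collection1"
repl() { echo "curl -s http://$1/solr/$CORE/replication?command=$2"; }

for s in $SLAVES; do repl "$s" disablepoll; done   # 1. stop the slaves polling
# 2. ...run the DIH full-import on the master and wait until it reports idle...
for s in $SLAVES; do
  repl "$s" fetchindex                             # 3. one-off forced replication
  repl "$s" enablepoll                             # 4. resume normal polling
done
```

In a real script each printed line would be executed rather than echoed.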

Best,
Erick



Planning Solr migration to production: clean and autoSoftCommit

2015-07-10 Thread wwang525
Hi,

The following questions are about the basic configuration options in
production. 

We will have three machines: one indexing instance (master) and two Solr
instances (in different machines) for searching purpose. This way, we will
always have two Solr instances dedicated for executing search requests.

Right now, we are only considering re-build full index every once in a
while, so there will be no incremental indexing. 

I understand that the indexing instance can have the indexing parameter
"clean" set to true or false. If I set it to true, the search index on the
indexing instance will be cleaned up first, and any time I check the index
afterward it will still be growing.

The questions are:

(1) Will the slave instances (for executing requests) get in sync with the
master if we set "clean" to true? This is not what we would like, since the
search index will be cleaned up and then grow. Customers would need to wait
for some period of time to search the entire data set, pending the
completion of the indexing job.

(2) "autoSoftCommit" is supposed to make updates visible to search. I also
configured "autoSoftCommit" in solrconfig.xml on the master. When I set
"clean" to true in the indexing job, what is the impact of this parameter
on the search requests executed on the slave machines?

Thanks
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Planning-Solr-migration-to-production-clean-and-autoSoftCommit-tp4216736.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Endeca to Solr Migration

2014-07-02 Thread Dyer, James
We migrated a big application from Endeca (6.0, I think) several years ago.
We were not using any of the business UI tools, but we found that Solr is a lot
more flexible and performant than Endeca.  But with more flexibility comes more
that you need to know.

The hardest thing was to migrate the Endeca dimensions to Solr facets.  We had
Endeca-API-specific dependencies throughout the application, even in the
presentation layer.  We ended up writing a bridge API that allowed us to keep
our Endeca-specific code and translate the queries to Solr queries.  We are
storing a cross-reference between the "N" values from Endeca and key/value
pairs to translate something like N=4000 to "fq=Language:English".  With Solr,
there is more you need to do in your app that the backend doesn't manage for
you.  In the end, though, it lets you separate your concerns better.
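
That N-to-fq cross-reference idea might be sketched as below. Only the N=4000 to "fq=Language:English" pair comes from the message above; the other entry and the function name are invented for illustration:

```shell
#!/bin/sh
# Sketch of an Endeca-N-value to Solr-fq cross-reference lookup.
# Only the 4000 -> Language:English mapping is from the thread;
# 4100 is an invented example entry.
n_to_fq() {
  case "$1" in
    4000) echo "fq=Language:English" ;;
    4100) echo "fq=Format:Hardcover" ;;   # invented example
    *)    echo "" ;;                      # unknown N value
  esac
}

n_to_fq 4000   # prints: fq=Language:English
```

In practice such a table would live in a database or properties file rather than a case statement, but the translation step is the same.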

James Dyer
Ingram Content Group
(615) 213-4311






Re: Endeca to Solr Migration

2014-06-29 Thread Mikhail Khludnev
Yes. I think so, but the scope seems challenging.





-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

<http://www.griddynamics.com>
 


Re: Endeca to Solr Migration

2014-06-29 Thread mrg81
Thanks Mikhail. In your opinion, is this something that can be done in 4-6
months?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Endeca-to-Solr-Migration-tp4144582p4144664.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Endeca to Solr Migration

2014-06-28 Thread Mikhail Khludnev
Hello,
Please check inlined below

On Sat, Jun 28, 2014 at 10:10 PM, mrg81  wrote:

> Hello --
>
> I wanted to get some details on Endeca to Solr Migration. I am
> interested in few topics:
>
> 1. We would like to migrate the Faceted Navigation, Boosting individual
> records and a few other items.
> 2. But the biggest question is about the UI [Experience Manager] - I have
> not found a tool that comes close to Experience Manager. I did read about
> Hue [In response to Gareth's question on Migration], but it seems that we
> will have to do a lot of customization to use that.
>
> Questions:
>
> 1. Is there a UI that we can use? Is it possible to un-hook the Experience
> Manager UI and point to Solr?
>
AFAIK, Experience Manager is close to Adobe's, and they are both clones of,
guess what, http://jackrabbit.apache.org/ (check the wiki for JCR, or try
visiting day.com). I suppose you can employ almost any CMS system instead,
whichever you consider affordable and handy.

2. How long does a typical migration take? Assuming that we have to migrate
> the Faceted Navigation and Boosted records?
>
I suppose it's not a piece of cake; I suppose it would take a mid-size
project a few months to launch. The challenges are:
- Faceted navigation, which in Endeca is done via Dimensions exposed to the
frontend, is quite unnatural for Solr. To be honest, Solr doesn't navigate
taxonomies out of the box, but just provides a few hints to do so. Also,
navigating nested SKUs sometimes reveals some gaps, you know...
- Endeca also has some smart text-search features, like phrase guessing.
You'll need to research how much you rely on them, and lean on Solr's
straightforwardness.
Notes:
- Boosting of any kind is not a problem for Solr.
- Hue is an interactive data-analytics UI, or even an "IDE"; I don't think
you need to look at it.





-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

<http://www.griddynamics.com>
 


Endeca to Solr Migration

2014-06-28 Thread mrg81
Hello --

I wanted to get some details on Endeca to Solr migration. I am
interested in a few topics:

1. We would like to migrate the Faceted Navigation, Boosting individual
records and a few other items. 
2. But the biggest question is about the UI [Experience Manager] - I have
not found a tool that comes close to Experience Manager. I did read about
Hue [In response to Gareth's question on Migration], but it seems that we
will have to do a lot of customization to use that. 

Questions:

1. Is there a UI that we can use? Is it possible to un-hook the Experience
Manager UI and point to Solr?
2. How long does a typical migration take? Assuming that we have to migrate
the Faceted Navigation and Boosted records? 

Thanks



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Endeca-to-Solr-Migration-tp4144582.html
Sent from the Solr - User mailing list archive at Nabble.com.


FAST ESP -> Solr migration webinar

2010-11-11 Thread Yonik Seeley
We're holding a free webinar on migration from FAST to Solr.  Details below.

-Yonik
http://www.lucidimagination.com

=
Solr To The Rescue: Successful Migration From FAST ESP to Open Source
Search Based on Apache Solr

Thursday, Nov 18, 2010, 14:00 EST (19:00 GMT)
Hosted by SearchDataManagement.com

For anyone concerned about the future of their FAST ESP applications
since the purchase of Fast Search and Transfer by Microsoft in 2008,
this webinar will provide valuable insights on making the switch to
Solr.  A three-person roundtable will discuss factors driving the need
for FAST ESP alternatives, differences between FAST and Solr, a
typical migration project lifecycle & methodology, complementary open
source tools, best practices, customer examples, and recommended next
steps.

The speakers for this webinar are:

Helge Legernes, Founding Partner & CTO of Findwise
Michael McIntosh, VP Search Solutions for TNR Global
Eric Gaumer, Chief Architect for ESR Technology.

For more information and to register, please go to:

http://SearchDataManagement.bitpipe.com/detail/RES/1288718603_527.html?asrc=CL_PRM_Lucid2
=