Thanks, I might try moving to the HBase implementation anyway because:

1) It is already in NiFi 1.3
2) We already have HBase installed (but unused) on our cluster
3) There doesn't seem to be a limit on the number of cache entries.
For our use case (avoiding downloading the same file multiple times),
it was always a bit icky to have to set the maximum number of cache
entries to something that should be "big enough". There's a sketch of
how we use the cache below.
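
For context, here's roughly how we use the cache to avoid duplicate
downloads - a minimal sketch against NiFi's DistributedMapCacheClient
interface (the class name, method name and "seen" marker value are
just illustrative, not from any real processor):

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.nifi.distributed.cache.client.DistributedMapCacheClient;
import org.apache.nifi.distributed.cache.client.Serializer;

public class DownloadDedupe {

    // Writes a String key/value as UTF-8 bytes for the cache protocol.
    private static final Serializer<String> STRING_SERIALIZER =
            (value, out) -> out.write(value.getBytes(StandardCharsets.UTF_8));

    /**
     * Returns true only for the first caller to claim this URL.
     * putIfAbsent() is atomic on the cache server, so even with several
     * nodes listing the same source, only one should download each file.
     */
    public static boolean shouldDownload(final DistributedMapCacheClient cache,
                                         final String url) throws IOException {
        return cache.putIfAbsent(url, "seen", STRING_SERIALIZER, STRING_SERIALIZER);
    }
}

Since the cache sits behind that interface, swapping the built-in
DistributedMapCacheClientService for the HBase-backed client service
shouldn't need any changes to code like this.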

Thanks again,

James

On 13 April 2018 at 20:24, Joe Witt <joe.w...@gmail.com> wrote:
> James,
>
> You have it right about the proper solution path. I think we have a
> Redis-backed one in there now too that might be interesting (not in
> 1.3.0, perhaps, but in a later release).
>
> We offered a simple out-of-the-box implementation early on to make
> sure the interfaces were right. Since then the community has
> contributed some stronger implementations like the ones you're
> mentioning.
>
> Thanks
>
> On Fri, Apr 13, 2018 at 7:14 PM, James Srinivasan
> <james.sriniva...@gmail.com> wrote:
>> Hi all,
>>
>> Is there a recommended way to set up a
>> DistributedMapCacheServer/Client on a cluster, ideally with some
>> amount of HA (NiFi 1.3.0)? I'm using a shared persistence directory,
>> and when I add and enable the controller service it seems to start on
>> my primary node but not the other two (their status stays "Enabling"
>> rather than "Enabled"). Configuring the DistributedMapCacheClientService
>> is harder, because I have to specify the host the server runs on.
>> Setting it to the current primary node works, but presumably that
>> won't fail over?
>>
>> I guess the proper solution is to use the HBase versions (or even to
>> implement my own Accumulo one for our cluster).
>>
>> Thanks very much,
>>
>> James
