Hi - For now, the only option is to allow larger blobs via jute.maxbuffer 
(whatever "jute" means). Although ZooKeeper is designed for kilobyte-sized 
blobs, Solr forces us to abuse it this way. I think there was a ticket for 
compression support, but that would only stretch the limit.

We run ZooKeeper with jute.maxbuffer set to 16 MB. It holds our large 
dictionaries and runs fine. 
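For reference, a minimal sketch of how the property can be passed (file names
and the 16 MB value are from our setup; adjust paths for yours). Note it has
no "zookeeper." prefix and must be set on every server and every client JVM:

```shell
# conf/zookeeper-env.sh (sourced by zkEnv.sh) - set on EVERY ZooKeeper server.
# 16 MB = 16 * 1024 * 1024 bytes; all servers and clients must agree.
SERVER_JVMFLAGS="-Djute.maxbuffer=16777216"

# solr.in.sh - Solr nodes are ZooKeeper clients, so they need it as well.
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=16777216"
```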

Regards,
Markus
 
-----Original message-----
> From:Atita Arora <atitaar...@gmail.com>
> Sent: Tuesday 13th March 2018 22:38
> To: solr-user@lucene.apache.org
> Subject: How to store files larger than zNode limit
> 
> Hi ,
> 
> I have a use case supporting multiple clients and multiple languages in a
> single application.
> So , In order to improve the language support, we want to leverage the Solr
> dictionary (userdict.txt) files as large as 10MB.
> I understand that ZooKeeper's default zNode file size limit is 1MB.
> I'm not sure if someone has tried increasing it before and how that fares
> in terms of performance.
> Looking at - https://zookeeper.apache.org/doc/r3.2.2/zookeeperAdmin.html
> It states -
> Unsafe Options
> 
> The following options can be useful, but be careful when you use them. The
> risk of each is explained along with the explanation of what the variable
> does.
> jute.maxbuffer:
> 
> (Java system property: jute.maxbuffer)
> 
> This option can only be set as a Java system property. There is no
> zookeeper prefix on it. It specifies the maximum size of the data that can
> be stored in a znode. The default is 0xfffff, or just under 1M. If this
> option is changed, the system property must be set on all servers and
> clients otherwise problems will arise. This is really a sanity check.
> ZooKeeper is designed to store data on the order of kilobytes in size.
> I would appreciate any suggestions on best practices for handling large
> config/dictionary files in ZK.
> 
> Thanks ,
> Atita
> 
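To answer the question above concretely, a sketch of the client side (the
config name, paths, and host are illustrative; the limit must match what the
servers use):

```shell
# ZooKeeper's own CLI reads CLIENT_JVMFLAGS, so the raised limit can be
# passed per invocation when inspecting large znodes:
CLIENT_JVMFLAGS="-Djute.maxbuffer=16777216" ./bin/zkCli.sh -server zk1:2181

# Uploading a configset containing the large userdict.txt via Solr:
SOLR_OPTS="-Djute.maxbuffer=16777216" \
  bin/solr zk upconfig -n myconfig -d /path/to/configset -z zk1:2181
```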
