On 10/29/13, 12:28 PM, Terry P. wrote:
What are your thoughts on doing an hourly flush of the table in the
shell to ensure entries are flushed to disk more frequently to help
minimize the replay required if connectivity to a node is lost?
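The hourly flush described above could be scripted as a cron job that runs the shell non-interactively (a sketch: the install path, credentials, table name, and log file are placeholders, not from this thread):

```shell
# Hypothetical crontab entry: flush 'mytable' at the top of every hour.
# -e runs a single shell command; -w makes the shell wait for the
# minor compaction to complete before exiting.
0 * * * * /opt/accumulo/bin/accumulo shell -u scriptuser -p secret \
    -e "flush -t mytable -w" >> /var/log/accumulo-flush.log 2>&1
```

Note this only bounds how much WAL replay is needed after a failure; it does not change what the BatchWriter clients are doing.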
If you want to go the route of flushing more frequently ...
Hi Josh,
Thanks for the advice. I am of course concerned about the nodes dropping
out of the cluster, but we're in a position where we do not provide the
infrastructure and thus have no control over it. Despite the
infrastructure having multiple points of redundancy, the network glitch
still happened.
It kind of sounds like you should be more concerned about nodes randomly
dropping out of your cluster :)
If you're stuck on the 1.4 series, you can try upping the property
'tserver.logger.count' to '3' instead of the default of '2' to ensure
that you have a greater chance of not losing a WAL replica.
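In accumulo-site.xml, that change would look something like the following (a sketch for the 1.4-era loggers; the comment wording is mine, the property name and values are as given above):

```xml
<!-- Write each tablet server's write-ahead log to 3 loggers instead of
     the default 2, so losing one logger node still leaves two replicas
     of the WAL available for replay. -->
<property>
  <name>tserver.logger.count</name>
  <value>3</value>
</property>
```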
Keith,
And now (having looked at the API docs) I realize the difference in
expected behavior between the devs flushing their BatchWriters and the
'flush' command in the shell ... thanks.
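That difference is worth spelling out: BatchWriter.flush() sends the client's buffered mutations to the tablet servers (into their in-memory maps and write-ahead logs), while the shell's 'flush' command forces the tablet server to minor-compact that in-memory data down to files on disk. A minimal client-side sketch against the 1.4-era API — the instance name, ZooKeeper host, credentials, and table are placeholders, and this needs the Accumulo client jars on the classpath:

```java
import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;
import org.apache.hadoop.io.Text;

public class FlushSketch {
    public static void main(String[] args) throws Exception {
        Connector conn = new ZooKeeperInstance("myInstance", "zk1:2181")
                .getConnector("scriptuser", "secret".getBytes());
        // 1.4-style signature: maxMemory (bytes), maxLatency (ms), threads
        BatchWriter bw = conn.createBatchWriter("mytable", 1000000L, 1000L, 2);

        Mutation m = new Mutation("row1");
        m.put(new Text("cf"), new Text("cq"), new Value("v".getBytes()));
        bw.addMutation(m);

        bw.flush(); // client buffer -> tserver in-memory map + WAL
        bw.close(); // close() also flushes; data is now durable via the WAL,
                    // but stays in the tserver's memory until a minor compaction
    }
}
```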
On Mon, Oct 28, 2013 at 5:27 PM, Terry P. wrote:
> The developers are flushing their BatchWriter.
>
>
>
The developers are flushing their BatchWriter.
On Mon, Oct 28, 2013 at 5:23 PM, Keith Turner wrote:
>
>
>
> On Mon, Oct 28, 2013 at 5:19 PM, Terry P. wrote:
>
>> Greetings all,
>> For a growing table that grew from zero to 70 million entries this
>> weekend, I'm seeing 4.4 million entries still in memory, though the
>> client programs are supposed to be flushing their entries.
On Mon, Oct 28, 2013 at 5:19 PM, Terry P. wrote:
> Greetings all,
> For a growing table that grew from zero to 70 million entries this
> weekend, I'm seeing 4.4 million entries still in memory, though the client
> programs are supposed to be flushing their entries.
>
Are you flushing the BatchWriters?
Thanks for the replies. I was approaching it from a data integrity
perspective, as in wanting it flushed to disk in case of a TabletServer
failure. Last weekend we saw two TabletServers exit the cluster due to a
network glitch, and wouldn't you know that the 04 node was secondary logger
for the 03 node.
What are you trying to accomplish by reducing the number of entries in
memory? A tablet server will not minor compact (flush) until the native map
fills up, but keeping things in memory isn't really a performance concern.
You can force a one-time minor compaction via the shell using the 'flush'
command.
You can adjust the value of 'tserver.memory.maps.max' in
accumulo-site.xml. This will require a restart of the tabletservers.
I'm a little confused as to why you're concerned about having KVs in
memory though. If you're not running out of memory, it's typically a
good thing to have data handy in memory.
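For concreteness, the adjustment Josh mentions would look something like this in accumulo-site.xml (the 512M value is purely illustrative; tablet servers must be restarted afterward):

```xml
<!-- Cap the tserver's native in-memory map. A smaller map fills sooner,
     triggering more frequent minor compactions -- which also means more
     small files for major compactions to merge later. -->
<property>
  <name>tserver.memory.maps.max</name>
  <value>512M</value>
</property>
```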
Greetings all,
For a growing table that grew from zero to 70 million entries this
weekend, I'm seeing 4.4 million entries still in memory, though the client
programs are supposed to be flushing their entries.
Is there a server-side setting to help reduce the number of entries that
are in memory?