n an issue in JIRA for this.
>
>> -Original Message-
>> From: webmas...@webmaster.ms [mailto:webmas...@webmaster.ms] On
>> Behalf Of Denis
>> Sent: Thursday, October 22, 2015 7:29 PM
>> To: user@accumulo.apache.org
>> Subject: Re: Tserver's strange state.
>>
>>
Hi
Sometimes my Tablet Servers go into a strange state: they have some
very old scans (see picture: http://i.imgur.com/2sOUM99.png), and while
in this state they cannot be decommissioned gracefully using "accumulo
stop" - the number of their tablets decreases to some fixed number
(say from 6K
g and writing from/to Accumulo during this
> time?
>
>
>
>
> ---- Original message
> From: Denis <de...@camfex.cz>
> Date: 10/22/2015 6:03 PM (GMT-05:00)
> To: user@accumulo.apache.org
> Subject: Re: Tserver's strange state.
>
> Both servers have the
Hi.
My client software needs to work with a few Accumulo clusters, some running 1.6
and some 1.7.
Would it be correct to use the 1.7 Accumulo libraries on the client and
1.6.x on the server?
It actually works; I just want to be sure that doing so is correct.
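For reference, a minimal sketch of the client-side connection code in question, using the Connector API that exists in both 1.6 and 1.7; the instance name, ZooKeeper hosts, and credentials below are placeholders, not values from this thread:

  import org.apache.accumulo.core.client.Connector;
  import org.apache.accumulo.core.client.Instance;
  import org.apache.accumulo.core.client.ZooKeeperInstance;
  import org.apache.accumulo.core.client.security.tokens.PasswordToken;

  public class ClientCompatExample {
    public static void main(String[] args) throws Exception {
      // Placeholder instance name and ZooKeeper quorum.
      Instance instance = new ZooKeeperInstance("myInstance", "zk1:2181,zk2:2181");
      // Placeholder credentials.
      Connector conn = instance.getConnector("user", new PasswordToken("secret"));
      System.out.println("Connected to " + instance.getInstanceName());
    }
  }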
let's give it some thought. I don't think I realized
> that things were actually broken with IPs, just that we strongly
> recommend using hostnames instead.
>
> Denis wrote:
>> I found the answer in the sources: start the daemons with "--address
>> 10.x.x.x" comma
Hello
What would be the recommended way to get back the pre-ACCUMULO-1585
behavior, i.e. IPs of services in ZooKeeper and no need for DNS or
reverse DNS?
ovide in
> ${ACCUMULO_CONF_DIR}/{masters,monitors,slaves,gc,tracers} should
> ultimately determine what is advertised in ZooKeeper. Maybe that's the
> change Eric made...
>
> Denis wrote:
>> Hello
>>
>> What would be the recommended way to get back the pre-ACCUMU
)
), so it was a long time before ACCUMULO-3182's Sep-2014, and I have seen
here in the mailing list or in the bug tracker that it was fixed in
1.5)
On 2/20/15, Josh Elser els...@apache.org wrote:
FYI, this was fixed in 1.6.2
https://issues.apache.org/jira/browse/ACCUMULO-3182
Denis wrote:
I think it was ACCUMULO-1364
On 2/20/15, Denis de...@camfex.cz wrote:
Hm, I think that bug is much older and was fixed in 1.5 or in a
subsequent 1.4.x minor release. Unfortunately, I did not put the bug number
in the code comment (only
conn.tableOperations().setProperty(tableName, "table.walog.enabled"
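A complete form of that call might look like the sketch below; the property value ("false") and the surrounding class are assumptions for illustration, not the original code:

  import org.apache.accumulo.core.client.Connector;
  import org.apache.accumulo.core.conf.Property;

  public class WalogExample {
    // Hypothetical helper showing the shape of the truncated call above;
    // the "false" value is an assumed example, not taken from the thread.
    static void disableWalog(Connector conn, String tableName) throws Exception {
      conn.tableOperations().setProperty(tableName,
          Property.TABLE_WALOG_ENABLED.getKey(), "false");
    }
  }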
On 2/18/15, Christopher ctubb...@apache.org wrote:
To rule out some scenarios, is it possible that your clients are writing to
the wrong tables?
That was the first idea, so I added assert()'s to the code of the
writers a few days ago. No assert was triggered, but some invalid values
appear after
Hello.
A few times I have noticed that some tables contain values they cannot have, and
those entries have timestamps close to a tablet server failure time.
(I mean the wrong format: one table has msgpack values at least 10 bytes
long and another table has 1-byte values, and after a failure I read
one or two
Hello
Table accumulo.metadata contains absolute URLs of the form
hdfs://old-name-node:8020/
I do not know how to work around it, except by patching the Accumulo code,
adding .replace(OLD_NODE, NEW_NODE) in the relevant places.
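As a rough illustration of where those references live, here is a sketch that scans the metadata table's file entries for paths still pointing at the old NameNode; the connector setup and the old URI prefix are assumptions, not values from this thread:

  import java.util.Map;
  import org.apache.accumulo.core.client.Connector;
  import org.apache.accumulo.core.client.Scanner;
  import org.apache.accumulo.core.data.Key;
  import org.apache.accumulo.core.data.Value;
  import org.apache.accumulo.core.security.Authorizations;
  import org.apache.hadoop.io.Text;

  public class FindOldNameNodeRefs {
    // Lists metadata "file" entries that still reference the old NameNode.
    static void listOldRefs(Connector conn) throws Exception {
      Scanner scanner = conn.createScanner("accumulo.metadata", Authorizations.EMPTY);
      scanner.fetchColumnFamily(new Text("file")); // data file references
      for (Map.Entry<Key, Value> e : scanner) {
        String path = e.getKey().getColumnQualifier().toString();
        if (path.startsWith("hdfs://old-name-node:8020/")) { // assumed old URI
          System.out.println(e.getKey().getRow() + " -> " + path);
        }
      }
    }
  }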
On 1/17/15, Calvin Feder calvin.fe...@argyledata.com wrote:
We need to
in the new server's logs?
-Eric
On Tue, Jan 13, 2015 at 11:48 AM, Denis de...@camfex.cz wrote:
If you jstack your new tablet server, does it show a deadlock?
No
On 1/13/15, Eric Newton eric.new...@gmail.com wrote:
This may be a result of ACCUMULO-3372. If you jstack your new tablet
server
I have not yet tried anything newer than 1.6.1
On 1/12/15, Josh Elser els...@apache.org wrote:
Denis wrote:
created https://issues.apache.org/jira/browse/ACCUMULO-3471
Thanks a bunch!
BTW, in 1.6.1 balancing may also get stuck until the master server is
restarted.
Is this a known issue
created https://issues.apache.org/jira/browse/ACCUMULO-3471
BTW, in 1.6.1 balancing may also get stuck until the master server is restarted.
But then, after the master restart, balancing works very
aggressively, putting many tablets offline for quite a long time
(minutes)
On 1/11/15, Denis de
yes, per server
On 1/11/15, Sean Busbey bus...@cloudera.com wrote:
On Sat, Jan 10, 2015 at 3:42 PM, Denis de...@camfex.cz wrote:
On 1/10/15, Christopher ctubb...@apache.org wrote:
...
3) how many tablets do you have per server?
3. about 6000
Just to confirm, this is 6000 tablets
that the 1.4 version being used possibly had one or more of
the many bugs regarding balancing getting 'stuck', which were typically
resolved by bouncing the master. Denis, in 1.4 when you brought your
tserver back online, did you find that things were then balanced or did you
just have a tserver
Hi
I recently upgraded my Accumulo cluster from 1.4 to 1.6 and noticed a
regression.
Removing a tserver puts some tablets offline for a while until
other tservers start handling them; that's normal.
But with 1.6 the same happens when adding a tserver as well.
Is that OK?
Hi
Is there any reason why TableOperations.getSplits() does not expose
the location information (the information about tablet-to-tserver
correspondence)?
It has this information internally and then just drops it.
This information could be useful for performing scans a bit more
intelligently (to maximize
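As one possible workaround (my own assumption, not something stated in this thread), the tablet-to-tserver mapping can also be read from the metadata table's "loc" column family; the connector setup is a placeholder:

  import java.util.Map;
  import org.apache.accumulo.core.client.Connector;
  import org.apache.accumulo.core.client.Scanner;
  import org.apache.accumulo.core.data.Key;
  import org.apache.accumulo.core.data.Value;
  import org.apache.accumulo.core.security.Authorizations;
  import org.apache.hadoop.io.Text;

  public class TabletLocations {
    // Prints the current tserver hosting each tablet, read from accumulo.metadata.
    static void printLocations(Connector conn) throws Exception {
      Scanner scanner = conn.createScanner("accumulo.metadata", Authorizations.EMPTY);
      scanner.fetchColumnFamily(new Text("loc")); // current location entries
      for (Map.Entry<Key, Value> e : scanner) {
        // The row is "<tableId>;<endRow>" (or "<tableId><" for the last tablet);
        // the value is the hosting tserver's host:port.
        System.out.println(e.getKey().getRow() + " -> " + e.getValue());
      }
    }
  }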
28, 2013 at 8:27 AM, Denis de...@camfex.cz wrote:
Hi.
Major compaction puts a very heavy load on the hard disks, even with
tserver.compaction.major.concurrent.max=1
Besides causing big peaks on the Load Average and IOstat graphs during major
compaction, such high load also badly affects query performance
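Not from this thread, but one mitigation sometimes used is to raise a table's major compaction ratio so that major compactions are triggered less often; a hedged sketch, with the table name and the ratio value as illustrative assumptions:

  import org.apache.accumulo.core.client.Connector;
  import org.apache.accumulo.core.conf.Property;

  public class CompactionRatioExample {
    // A larger table.compaction.major.ratio means more/larger files must
    // accumulate before a major compaction is triggered (the default is 3).
    static void relaxCompaction(Connector conn, String tableName) throws Exception {
      conn.tableOperations().setProperty(tableName,
          Property.TABLE_MAJC_RATIO.getKey(), "5"); // "5" is an illustrative value
    }
  }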
Hi.
Is there a safer way to take a tserver offline than by killing its process?
I mean something similar to the decommissioning of an HDFS DataNode, where the
master knows which nodes are going to be retired and slowly moves the
files from them to other nodes.
Well, the killing must be safe as well, but
system.
John
On Sat, Aug 18, 2012 at 11:08 PM, Denis de...@camfex.cz wrote:
Hi.
I am having trouble with my Accumulo installation.
After a hardware failure on the NameNode, the !METADATA table's root_tablet is broken :(
From the fsck / output:
/accumulo/tables/!0/root_tablet/A000ornd.rf: CORRUPT block