13 so things may have changed) all ingests were blocked and it took days
> to complete.
>
> With 1.07T tablets to work on this may take some time?
>
>
> ------
> *From:* Mike Drob [mailto:md...@mdrob.com]
> *Sent:* Tuesday, 17 January 2017 09:37
> *To:* u
http://accumulo.apache.org/1.8/accumulo_user_manual.html#_merging_tablets
In order to merge small tablets, you can ask Accumulo to merge sections of
a table smaller than a given size.
root@myinstance> merge -t myTable -s 100M
On Mon, Jan 16, 2017 at 4:31 PM, Dickson, Matt MR <
Whoops, meant to say that we are proud to announce the release of Accumulo
version 1.7.2!
On Thu, Jun 23, 2016 at 10:47 AM, Mike Drob <md...@apache.org> wrote:
> The Accumulo team is proud to announce the release of Accumulo version
> 1.7.1!
>
> This release contain
If something goes wrong (i.e. somebody accidentally issues a big delete),
then having the Trash around makes recovery plausible.
On Mon, Aug 17, 2015 at 2:57 PM, James Hughes jn...@virginia.edu wrote:
Hi all,
From reading about the Accumulo GC, it sounds like temporary files are
routinely
Our very own Eric Newton has a port of OpenTSDB running on Accumulo, might
be what you're looking for.
https://github.com/ericnewton/accumulo-opentsdb
On Mon, Jul 20, 2015 at 5:25 PM, Ranjan Sen ranjan_...@hotmail.com wrote:
Hi All,
Is there something like TSDB (Time series database) on
This sounds super close to a type 1 UUID -
https://en.wikipedia.org/wiki/Universally_unique_identifier#Version_1_.28MAC_address_.26_date-time.29
On Tue, Jun 23, 2015 at 8:14 AM, Keith Turner ke...@deenlo.com wrote:
Would something like the following work?
row=time_client id_client counter
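Keith's scheme can be sketched as a small row-key builder. The zero-padded field widths and the underscore separator below are my assumptions for illustration, not anything the thread specifies:

```python
import time

def make_row(client_id, counter, ts_ms=None):
    """Build a row key of the form <time>_<client id>_<counter>.

    Zero-padding the numeric parts keeps lexicographic (Accumulo) order
    consistent with numeric order.
    """
    if ts_ms is None:
        ts_ms = int(time.time() * 1000)  # current time in milliseconds
    return f"{ts_ms:013d}_{client_id}_{counter:06d}"

print(make_row("client-42", 7, ts_ms=1435062000000))
```

Because the timestamp leads, keys from the same moment cluster on one tablet; reversing or salting the key would spread write load, which is a separate trade-off.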
be included by
discretion of the author.
Ex: ACCUMULO-21224 Add more metrics to the monitor. (Jim Contributor via
Mike Drob)
On Wed, Jun 10, 2015 at 1:43 PM, Keith Turner ke...@deenlo.com wrote:
On Wed, Jun 10, 2015 at 2:32 PM, Christopher ctubb...@apache.org wrote:
Okay Accumulators, I have
Also might need to run a 'flush -t $table -w'
On Mon, Jun 8, 2015 at 1:39 PM, Josh Elser josh.el...@gmail.com wrote:
Since 1.5, all of Accumulo's files are stored in HDFS: RFiles and WALs.
Tables have the name you provide, but also maintain an internal unique ID
to make operations like
Value will contain whatever the user provided on the command line, so
printing it back out to them shouldn't result in exposing something secret.
On Fri, May 8, 2015 at 12:29 PM, Rodrigo Andrade rodrigo...@gmail.com
wrote:
Hi,
In this commit:
What version?
Could be
https://github.com/apache/accumulo/blob/master/docs/src/main/asciidoc/chapters/troubleshooting.txt#L314
On Tue, May 5, 2015 at 8:54 AM, Bill Slacum wsla...@gmail.com wrote:
After a catastrophic failure, the Master Server section of the monitor
will report that there
Andrew,
This is a cool thing to work on, I hope you have great success!
A couple of questions about the motivations behind this, if you don't mind -
- There are several SQL implementations already in the Hadoop ecosystem. In
what ways do you expect this to improve upon
Check out the MinCombiner
https://github.com/apache/accumulo/blob/master/core/src/main/java/org/apache/accumulo/core/iterators/user/MinCombiner.java
On Mon, Apr 27, 2015 at 12:19 PM, vaibhav thapliyal
vaibhav.thapliyal...@gmail.com wrote:
Hello everyone.
I am trying to carry out max and min
Can you verify that once the processes started, they stayed up?
ps -C java -fww | grep accumulo
Also check your log directory for .err files
On Thu, Mar 12, 2015 at 9:53 AM, Madabhattula Rajesh Kumar
mrajaf...@gmail.com wrote:
Hi Team,
I'm not able to log in to the accumulo shell. It is
Can you verify that zookeeper is running and accepting connections?
nc [zk-host] [zk-port]
stat
And see that it does not result in error.
On Mon, Feb 2, 2015 at 2:58 PM, Wyatt Frelot wyatt.fre...@altamiracorp.com
wrote:
Good afternoon all,
I just literally started having this problem on
Has this error come up before? Is there room for us to intercept that stack
trace and provide a "check that HDFS has space left" message? This might be
especially relevant after we've removed the hadoop info box on the monitor.
On Thu, Jan 22, 2015 at 8:30 AM, Josh Elser josh.el...@gmail.com wrote:
Ara,
There is sometimes propagation delay in setting the properties, since they
have to go through zookeeper and then out to the tablet servers.
Try waiting 30 or 60 seconds before checking, and see if that changes
things.
Mike
On Thu, Jan 8, 2015 at 6:07 PM, Ara Ebrahimi
Ariel,
There is not an easy way to do this recursively. Your best option is going
to be writing your own wrapper around the import command. If you're using
shell commands, this could be as easy as feeding the results of 'find .
-type d' into a script, or in Java you might want to look at
recursion then it would become easy.
On Tue, Nov 25, 2014 at 10:44 AM, Josh Elser josh.el...@gmail.com wrote:
What's the difficulty, Mike? Handling name collision of failures?
Mike Drob wrote:
Ariel,
There is not an easy way to do this recursively. Your best option is
going to be writing
I'm not sure how to quantify this and give you a way to verify, but in my
experience you want to be producing RFiles that load into a single tablet.
Typically, this means a number of reducers equal to the number of tablets in
the table that you will be importing, and perhaps a custom partitioner. I
Unfortunately, I don't think we have a way to do this. Are you trying to
check for the existence of a particular feature, or what is your goal?
On Thu, Oct 23, 2014 at 6:44 PM, Dylan Hutchison dhutc...@stevens.edu
wrote:
Easy question Accumulators:
Is there an easy way to grab the version of
Michael,
These are great ZK instructions. Have you considered contributing them to
the project upstream? We can converse about this off-list if you'd prefer,
since it's not particularly germane to this topic.
Mike
On Thu, Oct 2, 2014 at 12:50 PM, Michael Allen mich...@sqrrl.com wrote:
I cut
Which version of Accumulo are you seeing these files in?
They should be getting cleaned up automatically after
https://issues.apache.org/jira/browse/ACCUMULO-1452 was added to 1.4.5,
1.5.1, and 1.6.0.
The brief explanation of their purpose is that they are the temporary files
for minor/major
Hi Craig!
Part of the HA transition is described at
https://issues.apache.org/jira/browse/ACCUMULO-2793 although you'll have to
read through the comments to get the actual steps. I don't have a concise
summary of what needs to be done because I haven't had a chance to try it
myself.
Mike
On
I've seen several vendors offering newer versions of zookeeper with
Accumulo without issue. Cloudera has tested versions not too far off from
Accumulo 1.4.5 and Accumulo 1.6.0 with CDH4, which uses ZK 3.4.5.
Similarly, I just checked Hortonworks' documents on HDP 2.1 and that
includes both
Filed a JIRA to update the docs, thanks for pointing this out to us, Matt!
https://issues.apache.org/jira/browse/ACCUMULO-3032
On Thu, Jul 31, 2014 at 1:32 AM, Sean Busbey bus...@cloudera.com wrote:
those are the markers that a tablet server has bulk loaded:
You should double-check your data; you might find that it's null-padded or
something like that, which would screw up the splits. You can do a scan from
the shell which might give you hints.
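One quick way to make that invisible padding visible is to inspect the raw bytes rather than the rendered string; a minimal sketch:

```python
def looks_padded(value: bytes) -> bool:
    """True if a value carries NUL bytes or leading/trailing whitespace,
    either of which would silently skew split points."""
    return b"\x00" in value or value != value.strip()

print(looks_padded(b"row1\x00\x00"))  # NUL-padded value
print(repr(b"row1\x00\x00"))          # repr() makes the padding visible
```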
On Tue, Jul 29, 2014 at 3:53 PM, Pelton, Aaron A. aaron.pel...@gd-ais.com
wrote:
I agree with the idea of
Another option would have been to pick a different instance name when
rebuilding your cluster. Not that it helps you much now...
On Sun, Jul 13, 2014 at 11:28 PM, Jianshi Huang jianshi.hu...@gmail.com
wrote:
Thanks for the help. I think I might better re-ingest the data I need. :(
Jianshi
search posterity! (This applies to everybody).
Mike
On Thu, Jul 10, 2014 at 9:26 AM, Kepner, Jeremy - 0553 - MITLL
kep...@ll.mit.edu wrote:
Mike Drob put together a great talk at the Accumulo Summit (
http://www.slideshare.net/AccumuloSummit/10-30-drob) discussing Accumulo
performance
More likely: Are you inserting data with visibility labels that your scan
user does not have?
Less likely, but possible: Are you pushing any kind of deletes? Do you have
an AgeOffIterator configured?
Mike
On Wed, Jun 25, 2014 at 2:21 PM, Sivan sivan...@gmail.com wrote:
I'm using storm to push
I'm not sure I understand what you are trying to do. Can you give us an
example and a use case?
The metadata table is just like any other table where you can do
inserts/deletes/etc.
On Tue, May 27, 2014 at 4:49 PM, Tiffany Reid tr...@eoir.com wrote:
Or even via the Java API? I haven’t
Is your GC running? It should be catching the unreferenced files.
I think you are safe to manually delete any files not referenced in the
!METADATA table.
What version of Accumulo are you running?
On Wed, May 21, 2014 at 9:00 PM, Dickson, Matt MR
matt.dick...@defence.gov.au wrote:
Large rows are only an issue if you are going to try to put the entire row
in memory at once. As long as you have small enough entries in the row, and
can treat them individually, you should be fine.
The qualifier is anything that you want to use to determine uniqueness
across keys. So yes, this
Can you share a little more about what you are trying to achieve? My first
thought would be to try looking at the Conditional Mutations present in
1.6.0 (not yet released) as either a ready implementation or a starting
point for your own code.
On Apr 25, 2014 10:13 PM, BlackJack76
Geoffry,
Fixing our logging libraries is an open issue -
https://issues.apache.org/jira/browse/ACCUMULO-1242
I hope to see it resolved soon. It's a pretty big task, so if you feel
inspired to help, it would be appreciated as well!
Thanks,
Mike
On Wed, Apr 23, 2014 at 9:39 AM, Geoffry Roberts
Can you verify that the accumulo files are still present in HDFS?
hdfs dfs -ls /accumulo
On Wed, Apr 16, 2014 at 4:15 PM, Geoffry Roberts threadedb...@gmail.com wrote:
All,
Suddenly, Accumulo will no longer start. Log files are not helpful. Is
there a way to troubleshoot this?
The back
All commands are from memory, so typos might exist. Deleting all rows can
be a very lengthy operation. It will likely be much faster to delete the
table and create a new one.
droptable foo
createtable foo
If you had configuration settings on the table that you wanted to keep,
then it might be
Users,
I am pleased to announce that Accumulo 1.4.5 has been released. The bits
are available on our downloads page [1].
Notable improvements of this release include:
* Support for Hadoop 2
* Resilience to zookeeper node failure
* Provide static utility for resource cleanup for web containers
*
Wait, I'm really confused by what you are describing, Jeff. Sorry if these
are obvious questions, but can you help me get a better grasp of your use
case?
You have a large amount of data, that is generally readable by all users.
Users create their own sandbox, from which they can later exclude
Yes, you are running into the same issue described in
https://issues.apache.org/jira/browse/ACCUMULO-1801
On Wed, Mar 19, 2014 at 6:41 PM, John Vines vi...@apache.org wrote:
Yes, column level filtering happens before any client iterators get a
chance to touch the results.
On Wed, Mar 19,
instance.secret and
trace.password before you post) would also help us figure out what exactly
is going on.
On 3/16/14, 8:41 PM, Mike Drob wrote:
Which version of Accumulo are you using?
You might be missing the hadoop libraries from your classpath. For this,
you would check your accumulo-site.xml and find the comment about Hadoop 2
First instinct is to use it for the root/metadata tablets.
On Tue, Feb 25, 2014 at 10:49 AM, Donald Miner dmi...@clearedgeit.com wrote:
HDFS caching is part of the new Hadoop 2.3 release. From what I
understand, it allows you to mark specific files to be held in memory for
faster reads.
Has
For uuid4 keys, you might want to do [00, 01, 02, ..., 0e, 0f, 10, ..., fd,
fe, ff] to cover the full range.
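That [00..ff] list is easy to generate rather than type out; a sketch that writes a splits file for the shell's addsplits command (check the exact flag name against your version):

```python
# Generate the 256 two-hex-digit split points 00..ff for uuid4 row keys.
splits = [format(i, "02x") for i in range(256)]

with open("splits.txt", "w") as f:
    f.write("\n".join(splits) + "\n")

# Then, in the Accumulo shell (flag name may vary by version):
#   addsplits -t myTable -sf splits.txt
```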
On Tue, Feb 11, 2014 at 9:16 AM, Josh Elser josh.el...@gmail.com wrote:
Ok. Even so, try adding some split points to the tables before you begin
(if you aren't already) as it will
You can implement your own Balancer. Or kill all the other tablet servers.
:)
On Tue, Feb 4, 2014 at 10:47 AM, Donald Miner dmi...@clearedgeit.com wrote:
Is there a way to force a particular tablet to be hosted off of a
particular tablet server?
There is some tricky stuff I want to do with
:
Balancer is going to do exactly what i need. The second option sounds much
more fun though. Thanks!
On Feb 4, 2014, at 10:49 AM, Mike Drob mad...@cloudera.com wrote:
You can implement your own Balancer. Or kill all the other tablet servers.
:)
On Tue, Feb 4, 2014 at 10:47 AM, Donald Miner
Tangential note - In Java 7, I thought that Swing was deprecated in favour
of JavaFX[1][2]?
[1]: http://www.oracle.com/technetwork/java/javafx/overview/faq-1446554.html
[2]: http://docs.oracle.com/javafx/2/swing/jfxpub-swing.htm
On Tue, Jan 28, 2014 at 1:59 PM, Ott, Charles H.
The tracer does performance metrics logging, and stores the data internally
in accumulo. It needs a tablet server running to persist everything and
will complain until it finds one.
Are your tablet servers and loggers running? I would stop your tracer app
until you have everything else up.
On
What do you get when you try to run accumulo init?
On Wed, Jan 15, 2014 at 2:39 PM, Steve Kruse skr...@adaptivemethods.com wrote:
Hello,
I'm new to accumulo and I am trying to get it up and running. I currently
have hadoop 2.2.0 and zookeeper 3.4.5 installed and running. I have gone
Joe,
Stand-by master functionality has existed for a while now, (since before
1.4), so you should be good!
Let us know if you run into any issues.
Mike
On Jan 6, 2014 3:13 AM, Joe Gresock jgres...@gmail.com wrote:
I seem to remember reading in one of the user guides that you can
configure
Aaron,
If you attempt to apply the same splits file, then you are attempting to
add already existing splits. Since the data is already split on those
points, there are no changes, and nothing happens, exactly as you observed.
If you apply a different split file to the existing data (after it
Which version of the Cloudera Quickstart VM are you running?
To install a 1.4.4 Accumulo RPM, you will indeed have to build it from
source. 1.5.0 RPMs are available as downloads on the site, like Josh said.
Thanks,
Mike
On Sat, Dec 21, 2013 at 10:28 AM, ashili kash...@yahoo.com wrote:
I
It looks like you are running with an improperly configured Java Security
Policy. In the example accumulo-env.sh files there are some lines that look
like:
if [ -f ${ACCUMULO_CONF_DIR}/accumulo.policy ]
then
POLICY=-Djava.security.manager
Well, yes and no.
Smaller keys still mean less network traffic, potentially less IO, and
maybe faster operations if you're trying to do application logic. Using
'data' or 'default' or just 'd' probably doesn't matter in the long term
(although there are certainly cases where it might).
On Dec 3, 2013
What are you trying to accomplish by reducing the number of entries in
memory? A tablet server will not minor compact (flush) until the native map
fills up, but keeping things in memory isn't really a performance concern.
You can force a one-time minor compaction via the shell using the 'flush'
that extends
row filter and set it as a compaction iterator.
On Tue, Oct 22, 2013 at 11:45 AM, Mike Drob md...@mdrob.com wrote:
I'm attempting to delete all rows from a table that contain a specific
word in the value of a specified column. My current process looks like:
accumulo shell -e 'egrep
Depending on the version that you are running, compactions can be cancelled
with varying degrees of difficulty and perseverance (and tablet server
restarts).
On Tue, Oct 1, 2013 at 10:09 PM, Dickson, Matt MR
matt.dick...@defence.gov.au wrote:
**
*UNOFFICIAL*
Can a compact process be
There is some development going on as part of ACCUMULO-1585 [1]
to allow tservers to store the hostname instead of the ip address. That
seems like a good place to start, although I'm not sure if this is the same
problem that you're seeing.
[1]: https://issues.apache.org/jira/browse/ACCUMULO-1585
What version are you using? According to ACCUMULO-241, you should be able
to quote any UTF-8 characters for visibility using the Java API. The shell
will likely have parsing issues, however.
[1]: https://issues.apache.org/jira/browse/ACCUMULO-241
On Mon, Aug 26, 2013 at 3:56 PM, John Vines
David,
I already created a ticket for it -
https://issues.apache.org/jira/browse/ACCUMULO-1501
-Mike
On Thu, Jun 6, 2013 at 9:00 PM, David Medinets david.medin...@gmail.com wrote:
Does it make sense to create a JIRA ticket asking for an age-off iterator
to be the default on the trace table?
Looks like you might be running with a Java Security Policy in place.
On Mon, May 20, 2013 at 4:28 PM, Chris Retford chris.retf...@gmail.com wrote:
Accumulo 1.4.3. Hadoop is CDH3u6 (0.20.2). I can manually list files in
Hadoop. Accumulo was able to run the init script. All accumulo directories
Somebody (totally not me) accidentally kicked off a full table compaction
using Accumulo 1.4.3.
There's a large number of them waiting and the queue is decreasing very
slowly - what are my options for improving the situation? Ideally, I would
be able to just cancel everything and then come back
table. But once it gets triggered, there are then compactions
scheduled locally for the tserver. You might be able to delete the flag and
bounce all the tservers to stop it, but I can't say for certain.
Sent from my phone, please pardon the typos and brevity.
On May 14, 2013 11:48 PM, Mike
reply all
Sent from my phone, please pardon the typos and brevity.
On May 14, 2013 11:54 PM, Mike Drob md...@mdrob.com wrote:
Can I leave the ones that are already running and just dispose of the
queued compactions? If not, that seems like a pretty serious limitation.
On Wed, May 15, 2013
I noticed that ACCUMULO-970 still has 8 open issues. I would like to see
those all resolved before 1.5 is actually released.
On Thu, May 9, 2013 at 5:36 PM, Keith Turner ke...@deenlo.com wrote:
On Thu, May 9, 2013 at 5:23 PM, Christopher ctubb...@apache.org wrote:
Keith, I assume you mean
I've seen people use puppet to achieve the same goal with reasonable
amounts of success.
On Wed, May 8, 2013 at 6:33 PM, Phil Eberhardt p...@sqrrl.com wrote:
Hello,
I was looking into using supervisor (http://supervisord.org/index.html)
to monitor a daemon running on top of Accumulo. I
Grepping the master logs for 'balance' usually gives some clue.
On Apr 11, 2013 7:45 AM, David Medinets david.medin...@gmail.com wrote:
From behaviour that I've witnessed before, on v1.4.1, Accumulo spreads
tablets across the cluster. However, this morning I am seeing 807 tablets
for the same table
David,
This doesn't answer your design questions, but it might help shed some
light on how to properly handle losing the sort order. Brian did a lot of
work on this in https://issues.apache.org/jira/browse/ACCUMULO-956 so I
highly recommend looking there and comparing to what you've developed.
RPM is looking for a zookeeper package on the system to satisfy the
automatic dependency management. The installation instructions you linked
to for ZK seem to imply using a downloaded tar.
If that's the case then you'll need to either find a ZK RPM, install
Accumulo using a tar, or install
to
proceed with the next step of modifying conf/accumulo-env.sh I can't seem
to find where it is! 'locate accumulo-env.sh' is resulting in no hits.
Where would the rpm installation have put Accumulo?
On Wed, Dec 19, 2012 at 4:10 PM, Mike Drob md...@mdrob.com wrote:
RPM is looking for a zookeeper
There are a couple tickets that involve making Accumulo and Kerberos play
nice -
https://issues.apache.org/jira/browse/ACCUMULO-404 was to get accumulo
running on a kerberized HDFS
https://issues.apache.org/jira/browse/ACCUMULO-259 is for potentially
delegating the authentications to an external
12.04 x64. It runs
fine
On Aug 28, 2012, at 7:08 PM, Mike Drob md...@mdrob.com wrote:
Does anybody have experience with running Accumulo on top of Java 7? The
mailing list archives show that David Medinets tried compiling 1.3.5 on the
openjdk implementation back in December, but it doesn't look like there was
much follow up on it.
When I'm trying to use the 1.4.1 dist tarball on