Just checked on the native mem maps . . . looks like it is set to 1GB. Do
the index and data caches reside in native mem maps if available or is
native mem used for something else?
I just repeated an ingest . . . this time I did not lose any tablet servers
but my logs are filling up with the
When a tablet server (let's call it A) bulk imports a file, it makes a few
bookkeeping entries in the !METADATA table. The tablet server that is
serving the !METADATA table (let's call it B) checks a constraint: tablet
server A must still have its ZooKeeper lock. This constraint is being
violated.
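For readers new to constraints: every mutation sent to the tablet hosting
!METADATA passes through a server-side Constraint's check() before it is
applied. Below is a minimal sketch of the shape of such a constraint; the
class name and the placeholder lock check are hypothetical, not the actual
metadata constraint, which consults ZooKeeper for the writer's lock.

import java.util.Collections;
import java.util.List;

import org.apache.accumulo.core.constraints.Constraint;
import org.apache.accumulo.core.data.Mutation;

// Hypothetical sketch in the spirit of the check described above.
public class ExampleLockConstraint implements Constraint {
  private static final short NO_LOCK = 1;

  @Override
  public String getViolationDescription(short violationCode) {
    return violationCode == NO_LOCK
        ? "writing tablet server no longer holds its ZooKeeper lock" : null;
  }

  @Override
  public List<Short> check(Environment env, Mutation mutation) {
    boolean lockHeld = true; // placeholder for an actual ZooKeeper lookup
    if (!lockHeld)
      return Collections.singletonList(NO_LOCK);
    return null; // null (or an empty list) means no violation
  }
}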
Ok, makes sense. So 1GB for native heap is reasonable?
Tablet server A was alive and well when looking in the monitor. Those
'constraint violations' do not stop until after I've restarted all of the
tservers.
On Wed, Jan 15, 2014 at 8:49 AM, Eric Newton eric.new...@gmail.com wrote:
When a
I tried it myself a few weeks ago and saw that it just works for the very
simple test I ran, too. I did see some error messages when running from sbt
after the job successfully completed and the SparkContext was closing. I
assume this has to do with resources within the AccumuloInputFormat? This
was
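For anyone trying the same thing, here is a minimal sketch of driving
AccumuloInputFormat from Spark's Java API, assuming Accumulo 1.5; the
instance, user, password, and table names are placeholders.

import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.hadoop.mapreduce.Job;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class AccumuloSparkSketch {
  public static void main(String[] args) throws Exception {
    JavaSparkContext sc = new JavaSparkContext("local", "accumulo-read");

    // Configure the input format exactly as for a MapReduce job.
    Job job = Job.getInstance();
    AccumuloInputFormat.setConnectorInfo(job, "user", new PasswordToken("pass"));
    AccumuloInputFormat.setZooKeeperInstance(job, "myInstance", "zkhost:2181");
    AccumuloInputFormat.setInputTableName(job, "myTable");
    AccumuloInputFormat.setScanAuthorizations(job, new Authorizations());

    // Spark drives the same InputFormat that MapReduce would.
    JavaPairRDD<Key, Value> rdd = sc.newAPIHadoopRDD(
        job.getConfiguration(), AccumuloInputFormat.class, Key.class, Value.class);
    System.out.println("entries: " + rdd.count());
    sc.stop();
  }
}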
Hello,
I'm new to Accumulo and I am trying to get it up and running. I currently
have Hadoop 2.2.0 and ZooKeeper 3.4.5 installed and running. I have gone
through the installation steps on the following page and I now am running into
a problem when I try to start Accumulo up. The
What do you get when you try to run accumulo init?
On Wed, Jan 15, 2014 at 2:39 PM, Steve Kruse skr...@adaptivemethods.comwrote:
Hello,
I'm new to Accumulo and I am trying to get it up and running. I currently
have Hadoop 2.2.0 and ZooKeeper 3.4.5 installed and running. I have gone
Which version of Accumulo are you using? And does the HDFS directory
already exist for Accumulo? If so, that may be the problem: Accumulo
expects to be able to create this directory itself when you init.
--
Christopher L Tubbs II
http://gravatar.com/ctubbsii
On Wed, Jan 15, 2014 at 2:39 PM, Steve Kruse
Hi Mike,
The init seems to work fine. Here is the output:
raduser@cvaraddemo01$ ./accumulo init
log4j:WARN No appenders could be found for logger
(org.apache.accumulo.start.classloader.AccumuloClassLoader).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See
I am using 1.5.0. I run accumulo init and it seems to correctly create the
instance and I can see it in ZooKeeper. Here is the init output; I have run it
several times, so I tell it to delete the old instance.
raduser@cvaraddemo01$ ./accumulo init
log4j:WARN No appenders could be found for
Steve,
You can check to see if it's initialized correctly in HDFS with:
hadoop fs -ls /accumulo/instance_id/
If you run it as the user that is trying to start Accumulo, you should see
something like this
-rw--- 3 arshakn supergroup 0 2014-01-01 22:37
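The same check can be done programmatically; here is a small sketch using
Hadoop's FileSystem API, assuming the Hadoop configuration is on the
classpath (the path is Accumulo's default instance_id location).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckInit {
  public static void main(String[] args) throws Exception {
    // Reads core-site.xml from the classpath; fs.defaultFS must point at
    // HDFS (not file:///) or this will inspect the local filesystem instead.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    for (FileStatus s : fs.listStatus(new Path("/accumulo/instance_id")))
      System.out.println(s.getPath() + " owner=" + s.getOwner());
  }
}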
Are you truncating the output message? Because there should be a message
about the root user being initialized I think...
On Wed, Jan 15, 2014 at 2:55 PM, Steve Kruse skr...@adaptivemethods.comwrote:
I am using 1.5.0. I run accumulo init and it seems to correctly create
the instance and I
Arshak,
When I run the following command, I get:
raduser@cvaraddemo01$ ./hadoop fs -ls /accumulo/instance_id/
2014-01-15 15:05:05,220 WARN [main] util.NativeCodeLoader
(NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for
your platform... using builtin-java
Hi Steve!
It looks like you don't have any log4j settings, so you may not get some
error messages reported. You should copy the log4j.properties and
*_logger.xml files from one of the configuration examples into
$ACCUMULO_CONF_DIR before running init.
On Wed, Jan 15, 2014 at 2:07 PM, Steve
Sean,
The logging definitely helped. I’m now getting the following, but I’m not sure
why.
raduser@cvaraddemo04$ ./bin/accumulo init
2014-01-15 15:32:20,331 [util.NativeCodeLoader] WARN : Unable to load
native-hadoop library for your platform... using builtin-java classes where
applicable
I'm sure you will get a bunch of people responding ... the email delays have
made that worse recently. The key here is this:
Hadoop Filesystem is file:///
The hadoop configuration directory (HADOOP_CONF_DIR) is not in your
classpath.
You can edit the classpath in conf/accumulo-site.xml, and
You can also get that same error if the jars for HDFS are not in the
classpaths defined in accumulo-site.xml, even if HADOOP_CONF_DIR is set
properly.
Steve, make sure you edited the general classpaths for using Hadoop 2.
On Jan 15, 2014 2:43 PM, Eric Newton eric.new...@gmail.com wrote:
I'm sure
On Wed, Jan 15, 2014 at 3:36 PM, Steve Kruse skr...@adaptivemethods.comwrote:
Sean,
The classpath for HDFS was incorrect and that definitely helped when I
corrected it. Now it seems I’m having a hadoop issue where the datanodes
are not running. I’m going to keep plugging away.
Glad
Steve,
Try this to get your datanode(s) going:
hadoop-daemon.sh start datanode
I am curious, did you install your Hadoop from rpm?
Also this Sqrrl writeup might be helpful:
http://sqrrl.com/quick-accumulo-install/
Arshak
On Wed, Jan 15, 2014 at 1:42 PM, Sean Busbey
I have seen similar problems caused by only installing the bin rpm.
The docs seem to suggest you can choose one or the other.
However, I was only able to get it to work by installing both and then
selecting the one I would use in the config files - accumulo-env.sh
accumulo-1.5.0-bin.rpm
On Wed, Jan 15, 2014 at 4:09 PM, Kesten Broughton kbrough...@21ct.comwrote:
I have seen similar problems caused by only installing the bin rpm.
The docs seem to suggest you can choose one or the other.
However, I was only able to get it to work by installing both and then
selecting the one
UNOFFICIAL
Thanks Keith. I've run a simple MR job based on the UniqueColumns example, but
due to the size of the table this is taking a very long time. Is it possible
to pre-filter the data that goes to the MR job based on family, e.g. only run the
MR job on columns with a specific column
Matt,
This should help:
Collection<Pair<Text,Text>> cols = Collections.singleton(new
Pair<Text,Text>(new Text(cityOfBirth), null));
AccumuloInputFormat.fetchColumns(job, cols);
On Wed, Jan 15, 2014 at 7:29 PM, Dickson, Matt MR
matt.dick...@defence.gov.au wrote:
*UNOFFICIAL*
Thanks Keith. I've
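To put that snippet in context, here is a fuller sketch of the job setup
around fetchColumns, assuming Accumulo 1.5; the connection details, table,
and family names are placeholders.

import java.util.Collection;
import java.util.Collections;

import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.util.Pair;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class FetchColumnsSketch {
  public static Job configure() throws Exception {
    Job job = Job.getInstance();
    job.setInputFormatClass(AccumuloInputFormat.class);
    AccumuloInputFormat.setConnectorInfo(job, "user", new PasswordToken("pass"));
    AccumuloInputFormat.setZooKeeperInstance(job, "myInstance", "zkhost:2181");
    AccumuloInputFormat.setInputTableName(job, "myTable");

    // A null qualifier fetches the whole column family; the filtering
    // happens server-side, so other columns never reach the mappers.
    Collection<Pair<Text,Text>> cols = Collections.singleton(
        new Pair<Text,Text>(new Text("cityOfBirth"), null));
    AccumuloInputFormat.fetchColumns(job, cols);
    return job;
  }
}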
You could create a locality group for your column family. However, you
would need to recompact to get the benefit. And the benefit might not be
there if your column family includes a major portion of the data.
But! If you could recompact once, and keeping this data in its own
locality group was
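A sketch of setting up such a locality group and forcing the rewrite through
the Java API, assuming an existing Connector; the table, group, and family
names are placeholders.

import java.util.Collections;
import java.util.Map;
import java.util.Set;

import org.apache.accumulo.core.client.Connector;
import org.apache.hadoop.io.Text;

public class LocalityGroupSketch {
  public static void group(Connector conn) throws Exception {
    // Put the cityOfBirth family in its own locality group.
    Map<String,Set<Text>> groups = Collections.singletonMap(
        "cityGroup", Collections.singleton(new Text("cityOfBirth")));
    conn.tableOperations().setLocalityGroups("myTable", groups);

    // Existing RFiles keep their old layout; a full-table compaction
    // (null start/end rows, flush first, wait for completion) rewrites
    // them so old data benefits from the grouping too.
    conn.tableOperations().compact("myTable", null, null, true, true);
  }
}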