So here is the error from the tablet server...
2014-03-18 10:38:43,456 [client.ZooKeeperInstance] ERROR: unable obtain instance id at /accumulo/instance_id
2014-03-18 10:38:43,456 [tabletserver.TabletServer] ERROR: Uncaught exception in TabletServer.main, exiting
java.lang.RuntimeException: Accumulo not initialized, there is no instance id at /accumulo/instance_id
        at org.apache.accumulo.core.client.ZooKeeperInstance.getInstanceIDFromHdfs(ZooKeeperInstance.java:295)
        at org.apache.accumulo.server.client.HdfsZooInstance._getInstanceID(HdfsZooInstance.java:126)
        at org.apache.accumulo.server.client.HdfsZooInstance.getInstanceID(HdfsZooInstance.java:119)
        at org.apache.accumulo.server.conf.ZooConfiguration.getInstance(ZooConfiguration.java:55)
        at org.apache.accumulo.server.conf.ServerConfiguration.getZooConfiguration(ServerConfiguration.java:50)
        at org.apache.accumulo.server.conf.ServerConfiguration.getConfiguration(ServerConfiguration.java:104)
        at org.apache.accumulo.server.Accumulo.init(Accumulo.java:98)
        at org.apache.accumulo.server.tabletserver.TabletServer.main(TabletServer.java:3249)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.accumulo.start.Main$1.run(Main.java:103)
        at java.lang.Thread.run(Thread.java:744)
Do I need to run bin/accumulo init on every box in the cluster?
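No; accumulo init is run once for the whole cluster, because the instance id lives in HDFS, not on each box. The error above usually means that particular tablet server's Hadoop client is not resolving the same HDFS that init wrote to. One thing worth checking is pinning the filesystem explicitly in accumulo-site.xml; the hostname and port below are an assumption based on the node names in this thread, so substitute the fs.defaultFS value from your core-site.xml:

```xml
<property>
  <name>instance.dfs.uri</name>
  <!-- Illustrative value; use fs.defaultFS from your Hadoop core-site.xml -->
  <value>hdfs://hadoop-node-1:9000</value>
</property>
```

With that in place, running 'hadoop fs -ls /accumulo/instance_id' from each node should show the same single entry everywhere.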
On Tue, Mar 18, 2014 at 11:19 AM, Eric Newton <eric.new...@gmail.com> wrote:
Port numbers (for 1.5+)
4560 Accumulo monitor (for centralized log display)
9997 Tablet Server
9999 Master Server
12234 Accumulo Tracer
50091 Accumulo GC
50095 Accumulo HTTP monitor
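A quick way to probe those ports from another node is a bash /dev/tcp loop. This is a sketch; 'hadoop-node-1' is an assumption standing in for whichever host runs the master, monitor, GC, and tracer:

```shell
# Probe the default Accumulo 1.5 service ports on the master host.
master=hadoop-node-1
checked=0
for port in 4560 9997 9999 12234 50091 50095; do
  if timeout 2 bash -c "echo > /dev/tcp/$master/$port" 2>/dev/null; then
    echo "port $port reachable"
  else
    echo "port $port NOT reachable"
  fi
  checked=$((checked + 1))
done
echo "checked $checked ports"
```

Note that 9997 needs to be open on every tablet server node, not just on the master host.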
On Tue, Mar 18, 2014 at 11:04 AM, Benjamin Parrish <benjamin.d.parr...@gmail.com> wrote:
First off, are there specific ports that need to be opened up for Accumulo? I have Hadoop operating without any issues as a 5-node cluster. ZooKeeper seems to be operating with ports 2181, 3888, and 2888 open.
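One way to confirm ZooKeeper really is serving on 2181 (not just that the port is open) is the four-letter 'ruok' command; a healthy server answers 'imok'. A bash sketch, using the five servers listed in instance.zookeeper.host below:

```shell
# Ask each ZooKeeper server whether it is serving; "imok" means healthy.
# Hostnames are taken from instance.zookeeper.host in this thread.
for host in hadoop-node-1 hadoop-node-2 hadoop-node-3 hadoop-node-4 hadoop-node-5; do
  reply=$( { exec 3<>"/dev/tcp/$host/2181" && printf 'ruok' >&3 && cat <&3; } 2>/dev/null )
  echo "$host: ${reply:-no answer}"
done
```

Any server printing "no answer" is a node where ZooKeeper is down or firewalled, which would explain the connection failures later in this thread.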
Here is some data from trying to get everything started and then getting into the shell. I excluded the bash -x portion Eric suggested because the mailing list rejected it for length, thinking it was spam.
bin/start-all.sh
[root@hadoop-node-1 zookeeper]# bash -x
/usr/local/accumulo/bin/start-all.sh
Starting monitor on hadoop-node-1
WARN : Max files open on hadoop-node-1 is 1024, recommend 65536
Starting tablet servers ....... done
Starting tablet server on hadoop-node-3
Starting tablet server on hadoop-node-5
Starting tablet server on hadoop-node-2
Starting tablet server on hadoop-node-4
WARN : Max files open on hadoop-node-3 is 1024, recommend 65536
WARN : Max files open on hadoop-node-2 is 1024, recommend 65536
WARN : Max files open on hadoop-node-5 is 1024, recommend 65536
WARN : Max files open on hadoop-node-4 is 1024, recommend 65536
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
2014-03-18 10:38:43,143 [util.NativeCodeLoader] WARN : Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-03-18 10:38:44,194 [server.Accumulo] INFO : Attempting to talk to zookeeper
2014-03-18 10:38:44,389 [server.Accumulo] INFO : Zookeeper connected and initialized, attemping to talk to HDFS
2014-03-18 10:38:44,558 [server.Accumulo] INFO : Connected to HDFS
Starting master on hadoop-node-1
WARN : Max files open on hadoop-node-1 is 1024, recommend 65536
Starting garbage collector on hadoop-node-1
WARN : Max files open on hadoop-node-1 is 1024, recommend 65536
Starting tracer on hadoop-node-1
WARN : Max files open on hadoop-node-1 is 1024, recommend 65536
starting shell as root...
[root@hadoop-node-1 zookeeper]# bash -x
/usr/local/accumulo/bin/accumulo shell -u root
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
2014-03-18 10:38:56,002 [util.NativeCodeLoader] WARN : Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Password: ****
2014-03-18 10:38:58,762 [impl.ServerClient] WARN : There are no tablet servers: check that zookeeper and accumulo are running.
... this is the point where it sits and acts like it doesn't do anything
-- LOGS -- (most of this looks to be that I cannot connect to anything)
here is the tail -f $ACCUMULO_HOME/logs/monitor_hadoop-node-1.local.debug.log
2014-03-18 10:42:54,617 [impl.ThriftScanner] DEBUG: Failed to locate tablet for table : !0 row : ~err_
2014-03-18 10:42:57,625 [monitor.Monitor] INFO : Failed to obtain problem reports
java.lang.RuntimeException: org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:174)
        at org.apache.accumulo.server.problems.ProblemReports$3.hasNext(ProblemReports.java:241)
        at org.apache.accumulo.server.problems.ProblemReports.summarize(ProblemReports.java:299)
        at org.apache.accumulo.server.monitor.Monitor.fetchData(Monitor.java:399)
        at org.apache.accumulo.server.monitor.Monitor$1.run(Monitor.java:530)
        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
        at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:212)
        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:82)
        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:164)
        ... 6 more
here is the tail -f $ACCUMULO_HOME/logs/tracer_hadoop-node-1.local.debug.log
2014-03-18 10:47:44,759 [impl.ServerClient] DEBUG: ClientService request failed null, retrying ...
org.apache.thrift.transport.TTransportException: Failed to connect to a server
        at org.apache.accumulo.core.client.impl.ThriftTransportPool.getAnyTransport(ThriftTransportPool.java:455)
        at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:154)
        at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:128)
        at org.apache.accumulo.core.client.impl.ServerClient.getConnection(ServerClient.java:123)
        at org.apache.accumulo.core.client.impl.ServerClient.executeRaw(ServerClient.java:105)
        at org.apache.accumulo.core.client.impl.ServerClient.execute(ServerClient.java:71)
        at org.apache.accumulo.core.client.impl.ConnectorImpl.<init>(ConnectorImpl.java:64)
        at org.apache.accumulo.server.client.HdfsZooInstance.getConnector(HdfsZooInstance.java:154)
        at org.apache.accumulo.server.client.HdfsZooInstance.getConnector(HdfsZooInstance.java:149)
        at org.apache.accumulo.server.trace.TraceServer.<init>(TraceServer.java:200)
        at org.apache.accumulo.server.trace.TraceServer.main(TraceServer.java:295)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.accumulo.start.Main$1.run(Main.java:103)
        at java.lang.Thread.run(Thread.java:744)
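Incidentally, the "Max files open ... recommend 65536" warnings in the start-up output are fixable independently of the connection problem: raise the nofile limit on every node and log in again before restarting. A minimal /etc/security/limits.conf fragment, assuming the services run as root as in the transcript above:

```
root  soft  nofile  65536
root  hard  nofile  65536
```

Verify with 'ulimit -n' in a fresh shell; the warning disappears once it reports 65536.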
On Tue, Mar 18, 2014 at 9:37 AM, Eric Newton <eric.new...@gmail.com> wrote:
Can you post the exact error message you are seeing?
Verify that your HADOOP_PREFIX and HADOOP_CONF_DIR are being set properly in accumulo-site.xml.
The output of:
    bash -x $ACCUMULO_HOME/bin/accumulo shell -u root
would also help.
It's going to be something simple.
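A quick way to see what those variables resolve to before the scripts run; the fallback for HADOOP_PREFIX is taken from the library path in the transcript above, while the conf-dir default is the usual Hadoop 2 layout and is an assumption:

```shell
# Show what the Accumulo scripts will see for the Hadoop locations.
: "${HADOOP_PREFIX:=/usr/local/hadoop}"
: "${HADOOP_CONF_DIR:=$HADOOP_PREFIX/etc/hadoop}"
echo "HADOOP_PREFIX=$HADOOP_PREFIX"
echo "HADOOP_CONF_DIR=$HADOOP_CONF_DIR"
```

If either prints an empty or wrong path on any node, the $HADOOP_PREFIX references in general.classpaths below will not expand to real jars on that node.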
On Tue, Mar 18, 2014 at 9:14 AM, Benjamin Parrish <benjamin.d.parr...@gmail.com> wrote:
Looking to see if there was an answer to this issue, or if you could point me in a direction or to an example that could lead to a solution.
On Sun, Mar 16, 2014 at 9:52 PM, Benjamin Parrish <benjamin.d.parr...@gmail.com> wrote:
I am running Accumulo 1.5.1
<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Put your site-specific accumulo configurations here. The available
     configuration values along with their defaults are documented in
     docs/config.html. Unless you are simply testing at your workstation,
     you will most definitely need to change the three entries below. -->
<property>
<name>instance.zookeeper.host</name>
<value>hadoop-node-1:2181,hadoop-node-2:2181,hadoop-node-3:2181,hadoop-node-4:2181,hadoop-node-5:2181</value>
<description>comma separated list of zookeeper servers</description>
</property>
<property>
<name>logger.dir.walog</name>
<value>walogs</value>
<description>The property only needs to be set if upgrading from 1.4, which
  used to store write-ahead logs on the local filesystem. In 1.5 write-ahead
  logs are stored in DFS. When 1.5 is started for the first time it will copy
  any 1.4 write-ahead logs into DFS. It is possible to specify a
  comma-separated list of directories.
</description>
</property>
<property>
<name>instance.secret</name>
<value></value>
<description>A secret unique to a given instance that all servers must know
  in order to communicate with one another. Change it before initialization.
  To change it later use ./bin/accumulo
  org.apache.accumulo.server.util.ChangeSecret --old [oldpasswd] --new
  [newpasswd], and then update this file.
</description>
</property>
<property>
<name>tserver.memory.maps.max</name>
<value>1G</value>
</property>
<property>
<name>tserver.cache.data.size</name>
<value>128M</value>
</property>
<property>
<name>tserver.cache.index.size</name>
<value>128M</value>
</property>
<property>
<name>trace.token.property.password</name>
<!-- change this to the root user's password, and/or change the user below -->
<value></value>
</property>
<property>
<name>trace.user</name>
<value>root</value>
</property>
<property>
<name>general.classpaths</name>
<value>
$HADOOP_PREFIX/share/hadoop/common/.*.jar,
$HADOOP_PREFIX/share/hadoop/common/lib/.*.jar,
$HADOOP_PREFIX/share/hadoop/hdfs/.*.jar,
$HADOOP_PREFIX/share/hadoop/mapreduce/.*.jar,
$HADOOP_PREFIX/share/hadoop/yarn/.*.jar,
/usr/lib/hadoop/.*.jar,
/usr/lib/hadoop/lib/.*.jar,
/usr/lib/hadoop-hdfs/.*.jar,
/usr/lib/hadoop-mapreduce/.*.jar,
/usr/lib/hadoop-yarn/.*.jar,
$ACCUMULO_HOME/server/target/classes/,
$ACCUMULO_HOME/lib/accumulo-server.jar,
$ACCUMULO_HOME/core/target/classes/,
$ACCUMULO_HOME/lib/accumulo-core.jar,
$ACCUMULO_HOME/start/target/classes/,
$ACCUMULO_HOME/lib/accumulo-start.jar,
$ACCUMULO_HOME/fate/target/classes/,
$ACCUMULO_HOME/lib/accumulo-fate.jar,
$ACCUMULO_HOME/proxy/target/classes/,
$ACCUMULO_HOME/lib/accumulo-proxy.jar,
$ACCUMULO_HOME/lib/[^.].*.jar,
$ZOOKEEPER_HOME/zookeeper[^.].*.jar,
$HADOOP_CONF_DIR,
$HADOOP_PREFIX/[^.].*.jar,
$HADOOP_PREFIX/lib/[^.].*.jar,
</value>
<description>Classpaths that accumulo checks for updates and class files.
  When using the Security Manager, please remove the ".../target/classes/"
  values.
</description>
</property>
</configuration>
On Sun, Mar 16, 2014 at 9:06 PM, Josh Elser <josh.el...@gmail.com> wrote:
Posting your accumulo-site.xml (filtering out instance.secret and trace.password before you post) would also help us figure out what exactly is going on.
On 3/16/14, 8:41 PM, Mike Drob wrote:
Which version of Accumulo are you using?
You might be missing the hadoop libraries from your classpath. For this, you would check your accumulo-site.xml and find the comment about Hadoop 2 in the file.
On Sun, Mar 16, 2014 at 8:28 PM, Benjamin Parrish <benjamin.d.parr...@gmail.com> wrote:
I have a couple of issues when trying to use Accumulo on Hadoop 2.2.0:
1) I start with accumulo init and everything runs through just fine, but I can find '/accumulo' using 'hadoop fs -ls /'.
2) I try to run 'accumulo shell -u root' and it says that Hadoop and ZooKeeper are not started, but if I run 'jps' on each cluster node it shows all the necessary processes for both in the JVM. Is there something I am missing?
--
Benjamin D. Parrish
H: 540-597-7860