Multi Node Question

2014-09-17 Thread Geoffry Roberts
I have been running a single-node Accumulo, but I now have multiple nodes,
and I have observed that Accumulo's processes only show up in jps on the
node from which I started it.  The other nodes show only Hadoop processes,
usually the datanode.

Is this as it should be, or should I be seeing Accumulo processes on all
nodes? Other than that, Accumulo seems to be working.

Thanks

-- 
There are ways and there are ways,

Geoffry Roberts


Re: Multi Node Question

2014-09-17 Thread David Medinets
Have you looked at the Accumulo Monitor page? How many TServers are shown?
Each TServer node should have at least one Accumulo process running.


Re: Multi Node Question

2014-09-17 Thread Eric Newton
Yes, you should see a tserver running on every node.

Assuming you are starting everything with the accumulo "start-all.sh"
script, you will have to add your other nodes to the conf/slaves file.

If you have already done that, there may be something wrong on the other
hosts (missing files, permissions, ssh keys, etc).

Make sure the Accumulo directories are synced out to every node. Check that
the log directories exist and are writable.  If they are, go to one of
those nodes and check for tserver*.out and tserver*.err files, which
contain the stdout and stderr from server startup.  Any local issues will
be in those files.

If the tservers manage to start running, and experience an error, you may
see it on the monitor pages, which saves you the trouble of ssh'ing to the
machine and looking in local logs.
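
As a rough sketch (not from the thread), the per-node checks above could look like the following script, run on each worker; the ACCUMULO_LOG_DIR default and file names here are assumptions, so adjust for your install:

```python
import glob
import os

# Hypothetical per-node sanity check following the steps above:
# confirm the log directory exists and is writable, then surface
# any tserver startup errors.
log_dir = os.environ.get("ACCUMULO_LOG_DIR", "logs")  # assumed default

os.makedirs(log_dir, exist_ok=True)
if os.access(log_dir, os.W_OK):
    print("log dir ok:", log_dir)
else:
    print("log dir NOT writable:", log_dir)

# tserver*.err captures stderr from startup; local failures land there.
err_files = glob.glob(os.path.join(log_dir, "tserver*.err"))
for path in err_files:
    with open(path) as f:
        print(path, "".join(f.readlines()[-5:]), sep="\n")
if not err_files:
    print("no tserver .err files found in", log_dir)
```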

-Eric



Re: Multi Node Question

2014-09-17 Thread Geoffry Roberts
I took a look at the Accumulo logs on the non-starting servers.  They were
complaining there was no route to host, and the problem seemed to be with
port 9000.  I took a stab at fixing it by opening said port in iptables.
That worked insofar as, when I did a jps, I saw a Main process running, so
that must be the tserver.  However, with port 9000 open, the Hadoop
datanodes would no longer start.  When I closed 9000 (restoring the first
state), the Hadoop datanodes started as before and the Accumulo processes
quit starting.  It seems to be one or the other.

Was iptables the wrong thing to do? Is there another way?

fwiw, this cluster does not do ssh over port 22.
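
As an aside, one way to test the "no route to host" symptom from a worker node is a plain TCP probe, sketched below; the host name in the comment is a placeholder, not from my cluster:

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds; 'no route
    to host', refused, and filtered connections all come back False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g., from a datanode, probe the port the logs complained about:
# port_reachable("master-host", 9000)   # hypothetical host name
```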

Thanks



-- 
There are ways and there are ways,

Geoffry Roberts


Import/Export problems from 1.5.1 -> 1.6.0?

2014-09-17 Thread Tim Israel
Hi all,

I posted something similar on the Slider mailing list and was directed
here.  After debugging further, it doesn't seem like this is a Slider issue.

I have some tables that were exported from another cluster running Accumulo
1.5.1 on Hoya, and I'm trying to import them into Accumulo 1.6.0 on Slider
0.50.2.  The target cluster is Kerberized, but Accumulo is running in
simple authentication mode.

The exported table was distcp'd to a cluster configured with Slider.

The table was imported via the accumulo shell successfully.  The files get
moved to
/user/accumulo/.slider/cluster/slideraccumulo/database/data/tables/1

However, if I scan the imported table, Accumulo fails with the following
exception:
Failed to open file hdfs://cluster/accumulo/tables/1/b-05c/I05d.rf
File does not exist: /accumulo/tables/1/b-05c/I05d.rf

I can scan the table if I move the files from
/user/accumulo/.slider/cluster/slideraccumulo/database/data/tables/1 to
/accumulo/tables/1

I pulled accumulo-site from the slider publisher and saw that
instance.volumes is set as follows:
hdfs://cluster/user/accumulo/.slider/cluster/slideraccumulo/database/data
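
As a sketch of the mismatch (using only the paths quoted above), one can check whether each file the error references falls under the configured instance.volumes prefix:

```python
def paths_outside_volume(paths, volume):
    """Return the entries that do not live under the configured volume."""
    prefix = volume.rstrip("/") + "/"
    return [p for p in paths if not p.startswith(prefix)]

# instance.volumes value and failing file path, both from this thread
volume = ("hdfs://cluster/user/accumulo/.slider/cluster/"
          "slideraccumulo/database/data")
referenced = ["hdfs://cluster/accumulo/tables/1/b-05c/I05d.rf"]

# the failing path is outside the Slider volume
print(paths_outside_volume(referenced, volume))
```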

Any suggestions would be greatly appreciated.

Thanks,

Tim


Re: Import/Export problems from 1.5.1 -> 1.6.0?

2014-09-17 Thread Tim Israel
Upon further investigation, it looks like I can't even follow the steps in
the 1.6.0 import/export example
(http://accumulo.apache.org/1.6/examples/export.html).  I get the same
error described in my first post:

[shell.Shell] ERROR: java.lang.RuntimeException:
org.apache.accumulo.core.client.impl.AccumuloServerException: Error on
server :58444 <-- port chosen by slider

Accumulo Recent Logs

Failed to open file hdfs://cluster/accumulo/tables/1/b-05c/I05d.rf
File does not exist: /accumulo/tables/1/b-05c/I05d.rf


I tried the export/import procedure on my Accumulo 1.5.1 cluster and got
the expected result (i.e., the table is imported and can be scanned without
error).


Tim



Re: Import/Export problems from 1.5.1 -> 1.6.0?

2014-09-17 Thread Josh Elser

Hi Tim,

Any possibility that you can provide the exportMetadata.zip and the
distcp.txt?

Fair warning: the data from that table won't be included, but some split
points might be included in metadata.bin (inside exportMetadata.zip), which
*might* contain something sensitive. Make sure you double-check that.


I'll see if I can reproduce what you saw. It definitely seems strange.

- Josh



Re: Import/Export problems from 1.5.1 -> 1.6.0?

2014-09-17 Thread Tim Israel
Josh,
I've sent an email directly to you with the zip; I'm not sure what the
mailing list's behavior is regarding attachments.

For the benefit of the mailing list, the files (and their contents) are as
follows:

distcp.txt

hdfs://cluster/user/accumulo/.slider/cluster/slideraccumulo/database/data/tables/2/default_tablet/F09g.rf
hdfs://cluster/user/accumulo/.slider/cluster/slideraccumulo/database/data/tables/2/default_tablet/F09n.rf
hdfs://cluster/tmp/table1_export/exportMetadata.zip

exportMetadata.zip/accumulo_export_info.txt
-
exportVersion:1
srcInstanceName:instancename
srcInstanceID:b458b1bb-f613-4c3c-a399-d3f275a634da
srcZookeepers:CensoredZK1,CensoredZK2,CensoredZK3
srcTableName:table1_exp
srcTableID:3
srcDataVersion:6
srcCodeVersion:1.6.0

exportMetadata.zip/table_config.txt
---
table.constraint.1=org.apache.accumulo.core.constraints.DefaultKeySizeConstraint
table.iterator.majc.vers=20,org.apache.accumulo.core.iterators.user.VersioningIterator
table.iterator.majc.vers.opt.maxVersions=1
table.iterator.minc.vers=20,org.apache.accumulo.core.iterators.user.VersioningIterator
table.iterator.minc.vers.opt.maxVersions=1
table.iterator.scan.vers=20,org.apache.accumulo.core.iterators.user.VersioningIterator
table.iterator.scan.vers.opt.maxVersions=1
table.split.threshold=100M

exportMetadata.zip/metadata.bin
---


Thanks,

Tim



Re: AccumuloMultiTableInputFormat IllegalStateException

2014-09-17 Thread JavaHokie
Hi Corey,

I am now trying to deploy this at work, and I am unable to get this to run
without putting accumulo-core, accumulo-fate, accumulo-trace,
accumulo-tracer, and accumulo-tserver in the
$HADOOP_COMMON_HOME/share/hadoop/common directory.  Can you tell me how you
package your jar to obviate the need to put these jars there?

Thanks

--John

On Sun, Aug 24, 2014 at 6:50 PM, Corey Nolet-2 [via Apache Accumulo] <
ml-node+s1065345n1120...@n5.nabble.com> wrote:

> Awesome John! It's good to have this documented for future users. Keep us
> updated!
>
>
> On Sun, Aug 24, 2014 at 11:05 AM, JavaHokie <[hidden email]
> > wrote:
>
>> Hi Corey,
>>
>> Just to wrap things up, AccumuloMultiTableInputFormat is working really
>> well.  This is an outstanding feature I can leverage big-time on my
>> current
>> work assignment, an IRAD I am working on, as well as my own prototype
>> project.
>>
>> Thanks again for your help!
>>
>> --John





Re: AccumuloMultiTableInputFormat IllegalStateException

2014-09-17 Thread Corey Nolet
Have you looked at the libjars option? The uber jar is one approach, but it
can become very ugly very quickly.

http://grepalex.com/2013/02/25/hadoop-libjars/
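
As a sketch of the idea (jar names and directory layout assumed, not taken from any particular install), building the comma-separated -libjars value could look like:

```python
import glob
import os

def libjars_value(lib_dir):
    """Build a comma-separated value for Hadoop's -libjars option from
    the Accumulo jars found in lib_dir. The jar-name prefixes below are
    assumptions; match them to whatever your job actually needs."""
    wanted = ("accumulo-core", "accumulo-fate", "accumulo-trace")
    return ",".join(sorted(
        p for p in glob.glob(os.path.join(lib_dir, "*.jar"))
        if os.path.basename(p).startswith(wanted)
    ))

# Typical use (paths hypothetical):
#   hadoop jar myjob.jar MyDriver -libjars "<value built above>" ...
```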



Re: AccumuloMultiTableInputFormat IllegalStateException

2014-09-17 Thread Sean Busbey
If you use the "tool" command that comes with an Accumulo install to run
your job, it will take care of adding whatever jars Accumulo needs to the
Hadoop classpath.

Normally, it's $ACCUMULO_HOME/bin/tool.sh

-Sean




-- 
Sean