Awesome Chris, thanks. I didn't know where to begin looking for that one.
Sent from my phone, please pardon the typos and brevity.
On May 14, 2013 7:11 PM, Christopher ctubb...@apache.org wrote:
With the right configuration, you could use the copy-dependencies goal of the
maven-dependency-plugin.
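For reference, a copy-dependencies configuration along these lines might look like the following (the execution id, phase, and output directory are illustrative, not quoted from this thread):

```xml
<!-- Sketch: copy runtime dependencies into target/lib at package time -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>copy-deps</id>
      <phase>package</phase>
      <goals>
        <goal>copy-dependencies</goal>
      </goals>
      <configuration>
        <outputDirectory>${project.build.directory}/lib</outputDirectory>
        <!-- leave out deps the runtime environment already provides -->
        <excludeScope>provided</excludeScope>
      </configuration>
    </execution>
  </executions>
</plugin>
```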
I've given up on cdh3 then. I'm trying to get 1.5 and/or trunk going on
cdh4.2.1 on a small hadoop cluster installed via cloudera manager. I built
the tar specifying -Dhadoop.profile=2.0 -Dhadoop.version=2.0.0-cdh4.2.1.
I've edited accumulo-site to add $HADOOP_PREFIX/client/.*.jar to the classpath.
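Concretely, the build described above would have been an invocation along these lines (the exact goals are an assumption, not quoted from the thread):

```shell
# Hypothetical sketch of the build invocation described above
mvn clean package -Dhadoop.profile=2.0 -Dhadoop.version=2.0.0-cdh4.2.1
```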
In the example files, specifically accumulo-env.sh, there are 2 commented
lines after HADOOP_CONF_DIR is set, I believe. Make sure that you comment
out the old one and uncomment the one after the hadoop2 comment.
This is necessary because Accumulo puts the hadoop conf dir on the
classpath.
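In the example configs, those alternatives look roughly like this (a sketch; the exact paths vary by install and are not copied from this thread):

```shell
# accumulo-env.sh (sketch): pick the HADOOP_CONF_DIR matching your Hadoop version
# hadoop 1.x:
# test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/conf"
# hadoop 2.x:
test -z "$HADOOP_CONF_DIR" && export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
```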
That sorted it, thanks.
On 15 May 2013 18:11, John Vines vi...@apache.org wrote:
In the example files, specifically accumulo-env.sh, there are 2 commented
lines after HADOOP_CONF_DIR is set, I believe. Make sure that you comment
out the old one and uncomment the one after the hadoop2
Just noticed this series of tickets related to pluggable authentication in
Hadoop:
https://issues.apache.org/jira/browse/HADOOP-9392
Billie
No problem. FYI, this is essentially what we do to drop the
non-provided deps into lib/ in the first place.
--
Christopher L Tubbs II
http://gravatar.com/ctubbsii
On Wed, May 15, 2013 at 3:03 AM, John Vines vi...@apache.org wrote:
Awesome Chris, thanks. I didn't know where to begin looking for
When tserver.readahead.concurrent.max was set to its default setting, the
load on our tservers was fairly low. So I changed accumulo-site.xml so the
max is 64. (Go for the gusto!) Then I restarted the tservers to ensure the
change was picked up.
New queries are now running 32 concurrent scans.
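The change described amounts to an accumulo-site.xml entry like this (a sketch of the override, not copied from the poster's config):

```xml
<!-- accumulo-site.xml: raise the per-tserver readahead concurrency cap -->
<property>
  <name>tserver.readahead.concurrent.max</name>
  <value>64</value>
</property>
```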
On Sun, May 5, 2013 at 9:13 PM, Christopher ctubb...@apache.org wrote:
Can we create public filters in JIRA?
Christopher,
Did you figure this out? Go to Issues > Manage Filters, click on the gear
next to your filter and select Edit. Make sure Everyone is selected in the
pull-down menu next to the sharing option.
Just put in the request. I hadn't been sure that it was a permission
thing until now. Thanks!
--
Christopher L Tubbs II
http://gravatar.com/ctubbsii
On Wed, May 15, 2013 at 4:59 PM, Billie Rinaldi
billie.rina...@gmail.com wrote:
On Sun, May 5, 2013 at 9:13 PM, Christopher ctubb...@apache.org wrote:
It seems like the ideal option would be to have one binary build that
determines Hadoop version and switches appropriately at runtime. Has anyone
attempted to do this yet, and do we have an enumeration of the places in
Accumulo code where the incompatibilities show up?
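One reflection-based approach to the runtime switch (a sketch, not something from Accumulo itself; the JobContext class-vs-interface check is a trick other projects have used to distinguish Hadoop 1 from Hadoop 2 on the classpath — the Hadoop class name is real, but the shim class here is hypothetical):

```java
// Sketch: detect the Hadoop major version at runtime via reflection.
// In Hadoop 1.x, org.apache.hadoop.mapreduce.JobContext is a concrete class;
// in Hadoop 2.x it became an interface. HadoopVersionShim is a hypothetical
// name, not an Accumulo class.
public class HadoopVersionShim {
    public static boolean isHadoop2() {
        try {
            Class<?> jobContext =
                Class.forName("org.apache.hadoop.mapreduce.JobContext");
            return jobContext.isInterface();
        } catch (ClassNotFoundException e) {
            // No Hadoop on the classpath at all
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("Hadoop 2 detected: " + isHadoop2());
    }
}
```

Callers would branch on `isHadoop2()` at each incompatible call site, or use it to load one of two compiled shim implementations.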
So, I think that'd be great, if it works, but who is willing to do
this work and get it in before I make another RC?
I'd like to cut RC3 tomorrow if I have time. So, feel free to patch
these in to get it to work before then... or, by the next RC if RC3
fails to pass a vote.
--
Christopher L Tubbs II