Hi Marco,
Did you make sure that hbase-site.xml is present on the classpath of your Java
app? As per your error, it looks like that's not the case.
Thanks,
Anil
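One quick way to check this, independent of HBase itself, is to ask the classloader for the resource: this is essentially how `HBaseConfiguration.create()` locates `hbase-site.xml`, and when the file is absent it silently falls back to defaults (ZooKeeper on localhost). A minimal sketch (the class name is made up for illustration):

```java
public class CheckHBaseConfig {
    // Returns a human-readable report of where hbase-site.xml was found,
    // or a warning when it is absent from the classpath.
    static String report() {
        java.net.URL url = CheckHBaseConfig.class.getClassLoader()
                .getResource("hbase-site.xml");
        return url == null
                ? "hbase-site.xml NOT found on classpath"
                : "hbase-site.xml found at " + url;
    }

    public static void main(String[] args) {
        System.out.println(report());
    }
}
```

Running this inside the same JVM setup as the failing application tells you immediately whether the config file is visible.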
On Tue, Dec 16, 2014 at 10:29 AM, Marco wrote:
>
> I tried it also completely without setting the config manually,
>
> so just
>
> val co
I posted these stats in the first post. I've updated them to include the Thrift
Server:
Cluster Stats:
1 Master/14 Nodes/1 Thrift Server
Quad/Dual-Core 2.8-3.0GHz, 6-8GB RAM each, with a single 500GB drive
Table Stats: ~150 tables total, split into 590 regions
1 table w/ 2.5 billion rows, 500 columns
4 tab
On Tue, Dec 16, 2014 at 11:53 AM, Ted Yu wrote:
> However, the application needs to be compiled with 0.98 jars because the RPC
> has changed.
>
It should be possible to build an application that's using both client
library versions using something like JarJar [0] to munge class names. I
believe Jef
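For reference, JarJar is driven by a small rules file that rewrites package prefixes. A hypothetical rules file for shading the old client under a new prefix (the patterns and the `hbase94` target prefix are illustrative, not taken from a tested build) could look like:

```
rule org.apache.hadoop.hbase.** hbase94.@1
rule org.apache.zookeeper.** hbase94.zk.@1
```

The application would then reference the 0.94 classes under the renamed packages, while the unshaded 0.98 client keeps the original names.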
1500 reads/sec to meta with no client activity sounds high, but... how big
is the cluster (how many region servers) and how many thrift gateway
processes are you running? Are you running any other gateways as well (REST
or Thrift2)?
On Tue, Dec 16, 2014 at 11:38 AM, uamadman wrote:
>
> Currently
bq. Then just fail over the applications to the 0.98 cluster
However, the application needs to be compiled with 0.98 jars because the RPC
has changed.
Cheers
On Tue, Dec 16, 2014 at 11:42 AM, Esteban Gutierrez
wrote:
>
> +1 Andrew. It's not a simple task; it is error prone and can cause data loss
> i
+1 Andrew. It's not a simple task; it is error prone and can cause data loss
if not performed correctly. Also, we don't have tooling to fix broken
snapshots if they are moved manually.
BTW 0.98 should migrate an old snapshot dir to the new post-namespaces
directory hierarchy after starting HBase from a 0.94 la
Currently, hbase-master.com:60010/master-status reports 1500 reads/s when no
clients are connected, and 1750-2000 to .META. when I'm running a load
operation. Total R/W spikes to 20k/s when adding all the remaining tables
together.
I run a multi-core load process that pools 20 cores using happ
On Sun, Dec 14, 2014 at 10:49 PM, Mukesh Jha
wrote:
>
> Hello Experts,
>
> I've come across multiple posts where users want to read/write to hbase
> from Spark/Spark-streaming apps and everyone has to implement the same
> logic.
>
> Does HBase have (or is there any ongoing work for the same) a spar
I tried it also completely without setting the config manually,
so just
val conf = HBaseConfiguration.create()
directly on the server, where hbase/hadoop etc. is installed. Same
issue. I guess there is no connection issue, but rather an issue with the
region server (?)
Understanding what exactly the
I disagree. Before adding something like that to the ref guide, we should
actually agree to support it as a migration strategy. We're not there yet.
And although it's a heroic process, we can take steps to make it less
kludgy if so.
On Tue, Dec 16, 2014 at 9:27 AM, Ted Yu wrote:
>
> Good summary,
Good summary, Brian.
This should be added to ref guide.
Cheers
On Tue, Dec 16, 2014 at 4:17 AM, Brian Jeltema <
brian.jelt...@digitalenvoy.net> wrote:
>
> I have been able to export snapshots from 0.94 to 0.98. I’ve pasted the
> instructions that I developed
>
> and published on our internal wik
Have you seen this thread?
http://search-hadoop.com/m/vMM142yLCX22/hbase+Memcached&subj=Using+HBase+serving+to+replace+memcached
Cheers
On Tue, Dec 16, 2014 at 5:15 AM, Scott Richter
wrote:
>
> Hello,
>
> I am designing an architecture for a website to show analytics on a huge
> quantity of da
bq. conf.clear()
Why is the above needed?
Try removing it.
Cheers
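The reason `conf.clear()` is suspect: on a Hadoop `Configuration`, `clear()` discards every entry, including the values just loaded from `hbase-site.xml`, so only keys set afterwards survive. A minimal model of that behaviour, using `java.util.Properties` as a stand-in for `org.apache.hadoop.conf.Configuration` (the key names are real HBase settings; the values are placeholders):

```java
import java.util.Properties;

public class ClearDemo {
    public static void main(String[] args) {
        Properties conf = new Properties();
        // As if loaded from hbase-site.xml by HBaseConfiguration.create():
        conf.setProperty("hbase.zookeeper.quorum", "zk1.example.com");
        conf.setProperty("hbase.zookeeper.property.clientPort", "2181");

        conf.clear(); // wipes the loaded settings along with everything else

        // Only keys set after clear() survive; the client port is now gone.
        conf.setProperty("hbase.zookeeper.quorum", "ip");
        System.out.println(conf.getProperty("hbase.zookeeper.property.clientPort"));
    }
}
```

After the `clear()`, the client would fall back to built-in defaults for every setting the user did not explicitly re-set.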
On Tue, Dec 16, 2014 at 7:57 AM, Marco wrote:
>
> no effect :(
>
> 2014-12-16 15:19 GMT+01:00 Marco :
> > Hi,
> >
> > HBase is installed correctly and working (hbase shell works fine).
> >
> > But I'm not able to use the Java AP
no effect :(
2014-12-16 15:19 GMT+01:00 Marco :
> Hi,
>
> HBase is installed correctly and working (hbase shell works fine).
>
> But I'm not able to use the Java API to connect to an existing HBase table:
>
> <<<
> val conf = HBaseConfiguration.create()
>
> conf.clear()
>
> conf.set("hbase.zookeep
bq. conf.set("hbase.zookeeper.quorum", "ip:2181");
Have you tried omitting the port in the above config?
On Tue, Dec 16, 2014 at 6:19 AM, Marco wrote:
>
> Hi,
>
> HBase is installed correctly and working (hbase shell works fine).
>
> But I'm not able to use the Java API to connect to an existing Hb
Hi,
HBase is installed correctly and working (hbase shell works fine).
But I'm not able to use the Java API to connect to an existing HBase table:
<<<
val conf = HBaseConfiguration.create()
conf.clear()
conf.set("hbase.zookeeper.quorum", "ip:2181");
conf.set("hbase.zookeeper.property.clientPor
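Settings like these can also be kept out of code entirely by putting them in an `hbase-site.xml` on the application's classpath, which `HBaseConfiguration.create()` picks up automatically. A sketch of such a file (the host and port values are placeholders to adapt):

```xml
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <!-- comma-separated ZooKeeper hosts, without ports -->
    <value>zk1.example.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```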
Hello,
I am designing an architecture for a website to show analytics on a huge
quantity of data. This data is stored in one HBase table and needs to be
accessed in a semi-random manner. Typically, a big block of rowkeys that
are contiguous will be read at once (say a few thousand rows) and some d
Ok, so we found that we are lacking some jars in our /usr/lib/hadoop/lib
folder. I posted the answer here:
http://stackoverflow.com/questions/27374810/my-cdh5-2-cluster-get-filenotfoundexception-when-running-hbase-mr-jobs/27501623#27501623
On Thu, Dec 11, 2014 at 2:14 PM, Ehud Lev wrote:
>
>
>>
I have been able to export snapshots from 0.94 to 0.98. I’ve pasted the
instructions that I developed and published on our internal wiki. I also
had to significantly increase retry count parameters due to a high number
of timeout failures during the export.
Cross-cluster transfers
To export
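For reference, the stock tool for this is the `ExportSnapshot` MapReduce job. A sketch of the invocation (the snapshot name, destination URL, and mapper count are placeholders to adapt; this requires a running cluster and working HDFS access to the destination):

```shell
# Export an existing snapshot to the destination cluster's HBase root dir.
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot my_snapshot \
  -copy-to hdfs://dest-cluster:8020/hbase \
  -mappers 8
```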