Hi,
I figured out the issue.
The region server was listening on 192.168.1.102:55964 and not 127.0.0.1:55964.
192.168.1.102 is the IP of my machine and Sachin-PC is my machine name.
In my hosts file the entry was
127.0.0.1 localhost.localdomain localhost Sachin-PC
I removed Sachin-PC from there and
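For what it's worth, a corrected hosts file along those lines (assuming 192.168.1.102 is the machine's LAN IP, as above) might look like:

```
127.0.0.1       localhost.localdomain localhost
192.168.1.102   Sachin-PC
```

With the hostname resolving to the LAN IP instead of loopback, the region server registers an address that remote clients can actually reach.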
1. Is your zk-path 127.0.0.1:2181? This is the default configuration when no
config file is provided.
2. Set a caching value (100, 1000, and so on) on the Scan object.
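The scan-caching suggestion above can be sketched with the HBase 1.x Java client API. This is only a sketch: the table name "mytable", the column family, and the quorum settings are placeholders/assumptions, not values from this thread.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class ScanWithCaching {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Point 1: the default quorum is localhost:2181; set it explicitly if needed.
        conf.set("hbase.zookeeper.quorum", "127.0.0.1");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("mytable"))) {
            Scan scan = new Scan();
            // Point 2: fetch rows in batches rather than one RPC per row.
            scan.setCaching(1000);
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    System.out.println(r);
                }
            }
        }
    }
}
```

This requires the hbase-client dependency on the classpath; without a reachable cluster the connection attempt will hang or time out, which is the symptom described below.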
On 16/4/4 01:06, "Sachin Mittal" wrote:
>I am stuck on connecting to HBase 1.0.3 via a simple Java client.
>The program hangs
There are some outstanding bug fixes, e.g. HBASE-15333, for hbase-spark
module.
FYI
On Tue, Apr 5, 2016 at 2:36 PM, Nkechi Achara
wrote:
So HBase-Spark is a continuation of the Spark on HBase project, but within
the HBase project.
There aren't any significant differences apart from the fact that Spark on
HBase is not updated.
Depending on the version you are using, it would be more beneficial to use
HBase-Spark.
Kay
On 5 Apr 2016
I have a Cloudera cluster,
and I am exploring Spark with HBase.
After going through this blog:
http://blog.cloudera.com/blog/2014/11/how-to-do-near-real-time-sessionization-with-spark-streaming-and-apache-hadoop/
I found two options for using Spark with HBase:
Cloudera's Spark on HBase or
Apache
Check it out: http://hbasecon.com/
Thanks,
St.Ack
Why is the RS going down? Are you hitting an OOME, or does the RS just go into
heavy GC and you get a YouAreDeadException? My guess is that if you have
thousands of store files and you are major compacting, you are probably low
on resources, and the compaction queue goes out of control and triggers an
Have you considered setting a TTL?
bq. HBase will not be easily removed
Can you clarify the above?
Cheers
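The TTL suggestion above can be sketched in the HBase shell; the table and column family names here are placeholders. TTL is in seconds, so 2592000 is roughly 30 days:

```
hbase> alter 'mytable', NAME => 'cf', TTL => 2592000
```

Cells older than the TTL are then dropped as compactions run, which avoids issuing explicit deletes for last month's data.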
On Tue, Apr 5, 2016 at 12:03 AM, hsdcl...@163.com wrote:
>
> If I want to delete HBase data from the previous month, how do I do it? To avoid
> errors,
> when deleting data, we