Hi,
We have been working on implementing a secondary index in HBase, and had shared
an overview of our design at the 2012 Hadoop Technical Conference in
Beijing (http://bit.ly/hbtc12-hindex). We are pleased to open-source it today.
The project is available on GitHub.
Good to see this Rajesh. Thanks a lot to Huawei HBase team!
-Anoop-
On Tue, Aug 13, 2013 at 11:49 AM, rajeshbabu chintaguntla
rajeshbabu.chintagun...@huawei.com wrote:
Hi,
We have been working on implementing secondary index in HBase, and had
shared an overview of our design in the 2012
I have generally seen that we put data into HBase manually, and with the
HBase Java client we can do all the same operations like put, get, and scan.
My question is: how can I import data into an HBase table using Java?
If that is possible, can you show me how to do it?
(Note: not using the HBase shell or MapReduce.)
--
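For what it's worth, a minimal sketch of importing file data through the 0.94-era HBase Java client discussed in this thread. The table name "mytable", the column family "cf", the qualifier "q1", and the simple comma-separated input layout are all assumptions for illustration, not details from the thread, and a running cluster with the table already created is assumed:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class CsvToHBase {
    // Split one CSV line into fields. No quoting/escaping support --
    // an assumed simple input format, kept minimal for illustration.
    static String[] parseLine(String line) {
        return line.split(",", -1);
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // "mytable" is a hypothetical, pre-created table with family "cf".
        HTable table = new HTable(conf, "mytable");
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        String line;
        while ((line = in.readLine()) != null) {
            String[] f = parseLine(line);          // f[0] is used as the row key
            Put put = new Put(Bytes.toBytes(f[0]));
            put.add(Bytes.toBytes("cf"), Bytes.toBytes("q1"), Bytes.toBytes(f[1]));
            table.put(put);                        // send the mutation to the cluster
        }
        in.close();
        table.close();
    }
}
```

For large loads, batching several Puts into a `List<Put>` before calling `table.put(...)` reduces round trips; for really big data sets, the importtsv / bulk-load path mentioned later in this digest is the usual answer.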
Well done, Rajesh!
On Tue, Aug 13, 2013 at 8:44 AM, Anoop John anoop.hb...@gmail.com wrote:
Good to see this Rajesh. Thanks a lot to Huawei HBase team!
-Anoop-
On Tue, Aug 13, 2013 at 11:49 AM, rajeshbabu chintaguntla
rajeshbabu.chintagun...@huawei.com wrote:
Hi,
We have been
Have you ruled out Sqoop as well? :)
Shengjie
On 13 August 2013 14:46, manish dunani manishd...@gmail.com wrote:
I generally seen we are manually put the data into hbase as well as in
hbase java client we can do all the same things like put,get,scan.
My Question is how to import the data
Cool stuff, guys! Looking forward to reading through the code.
On Monday, August 12, 2013, Nicolas Liochon wrote:
Well done, Rajesh!
On Tue, Aug 13, 2013 at 8:44 AM, Anoop John anoop.hb...@gmail.com wrote:
Good to see this Rajesh. Thanks a lot to Huawei HBase team!
I knew it can be done with Sqoop, but I am looking for possibilities other than that.
Thank you, Shengjie!
On Tue, Aug 13, 2013 at 12:21 PM, Shengjie Min kelvin@gmail.com wrote:
Have you ruled out Sqoop as well:)
Shengjie
On 13 August 2013 14:46, manish dunani manishd...@gmail.com wrote:
I
Good to see this. Hope this will lead to more improvements and
enhancements. :)
On Tue, Aug 13, 2013 at 12:14 PM, Anoop John anoop.hb...@gmail.com wrote:
Good to see this Rajesh. Thanks a lot to Huawei HBase team!
-Anoop-
On Tue, Aug 13, 2013 at 11:49 AM, rajeshbabu chintaguntla
Good to see this. Thanks a lot to Priyank, Ramakrishna, Rajesh.
_
/(|
( :
__\ \ _
() `|
()| |
().__|
(___)__.|_
Cheers,
Subroto Sanyal
On Aug 13, 2013, at 9:28 AM, ramkrishna
Congratulations!
Waiting for a patch on the HBase JIRA.
2013/8/13 Subroto ssan...@datameer.com
Good to see this. Thanks a lot to Priyank, Ramakrishna, Rajesh.
_
/(|
( :
__\ \ _
() `|
()| |
().__|
Hi,
I am using Hadoop and HBase in pseudo-distributed mode.
I am using Hadoop version 1.1.2 and HBase version 0.94.7.
Recently I found some exceptions in the Hadoop and HBase logs.
I am not sure what caused this.
Requesting you to please help here.
*Exception in Master log :*
2013-07-31
Nice.
Will pay attention to upcoming patch on HBASE-9203.
On Aug 12, 2013, at 11:19 PM, rajeshbabu chintaguntla
rajeshbabu.chintagun...@huawei.com wrote:
Hi,
We have been working on implementing secondary index in HBase, and had shared
an overview of our design in the 2012 Hadoop
Thank you Jean-Marc.
On 08/12/2013 11:54 AM, Jean-Marc Spaggiari wrote:
Hi Oussama.
1) That's the whole goal of Hadoop and HBase ;) You might want to read
Hadoop: The Definitive Guide and HBase: The Definitive Guide...
2) HBase is based on Hadoop and takes advantage of its replication process.
Hi Vimal,
What was your cluster doing at that time? Was it very busy? Looks like one
server (192.168.20.30:50010) became so busy that it failed to report in time
and was closed.
JM
2013/8/13 Vimal Jain vkj...@gmail.com
Hi,
I am using Hadoop and Hbase in pseudo distributed mode.
Very good local index solution.
2013/8/13 Ted Yu yuzhih...@gmail.com
Nice.
Will pay attention to upcoming patch on HBASE-9203.
On Aug 12, 2013, at 11:19 PM, rajeshbabu chintaguntla
rajeshbabu.chintagun...@huawei.com wrote:
Hi,
We have been working on implementing secondary index
Excited to see this!
Best Regards,
Anil
On Aug 13, 2013, at 6:17 AM, zhzf jeff jeff.z...@gmail.com wrote:
very google local index solution.
2013/8/13 Ted Yu yuzhih...@gmail.com
Nice.
Will pay attention to upcoming patch on HBASE-9203.
On Aug 12, 2013, at 11:19 PM, rajeshbabu
Hi Devs/Users,
Recently I used the HBase API to insert a large amount of data into HBase; it's
about 77G, and my cluster has one HBase master and two HBase region servers.
After the program has been running for a while, a region server automatically
shuts down. I restarted the region servers,
but the same thing happens again.
Hi Jia,
How is your HDFS running?
Caused by: org.apache.hadoop.ipc.
RemoteException(java.io.IOException): File
/apps/hbase/data/lbc_zte_1_imei_index/4469e6b0500bf3f5ed0ac1247d249537/.tmp/e7bb489662344b26bc6de1e72c122eec
could only be replicated to 0 nodes instead of minReplication (=1). There
Hi Jean-Marc,
Thanks for your reply.
I have a one-node cluster (pseudo-distributed mode), so 192.168.20.30 is the
only server, and it hosts all 6 processes
(namenode, datanode, secondarynamenode, HMaster, HRegionServer and ZooKeeper).
At the time of this problem, I had given the following memory to these processes (
Hi Vimal,
4GB for all the processes is very little... You might want to run in
standalone mode instead of pseudo-distributed. That will save you some
memory. Have you checked if your server is swapping? That will make it slow,
and then you will miss some heartbeats and processes will close...
JM
Yes, that is the reason I am planning to make it 8GB.
I am running in pseudo-distributed mode as I will expand to a 2-3 node
cluster as my data size increases in the future.
Also, I have disabled swapping (have set vm.swappiness to 0).
On Tue, Aug 13, 2013 at 9:12 PM, Jean-Marc Spaggiari
Ok. That's fine. I will say, don't be surprised to have such failures with
a single pseudo-distributed node with only 4GB. Go slowly with it, don't
start big jobs. And if something fails, monitor memory usage, CPU usage,
etc.
JM
2013/8/13 Vimal Jain vkj...@gmail.com
Yes. that is the reason i
Fantastic! Let me know if you're up for surfacing this through Phoenix.
Regards,
James
On Tue, Aug 13, 2013 at 7:48 AM, Anil Gupta anilgupt...@gmail.com wrote:
Excited to see this!
Best Regards,
Anil
On Aug 13, 2013, at 6:17 AM, zhzf jeff jeff.z...@gmail.com wrote:
very google local
Excellent job, congrats Huawei team. Phoenix + hindex integration?
-Vladimir
On Tue, Aug 13, 2013 at 12:53 AM, Subroto ssan...@datameer.com wrote:
Good to see this. Thanks a lot to Priyank, Ramakrishna, Rajesh.
_
/(|
( :
__\ \ _
I'm running cdh4.2 hbase 0.94.2, and am looking to merge some regions in a
table. Looking at Merge.java, it seems to require that the entire cluster
be offline. However, I also notice an HMerge.java which doesn't appear to
do the same validation.
Two questions:
1) Why does Merge.java validate
Caused by: java.io.EOFException:
The file has been fully read, but the program is still reading.
Analyzing this message, it seems your file has reached its end while the
program is still trying to read from it.
g_jinlong
From: jia.li
Sent: 2013-08-13 15:50
To: user
Subject: regionserver died when using Put to insert data
Hi , Devs/Users ;
Recently I use HBase API to insert big data into
Hi all,
When I compile the HBase source there is a directory called target.
What is this directory for?
My HBase version is hbase-0.94.
Thanks for your help.
--
In the Hadoop world, I am just a novice exploring the entire Hadoop
ecosystem; I hope one day I can contribute my own code.
The target directory contains generated-sources, classes for the Java
source files and possibly a tar ball (e.g. hbase-0.94.8.tar.gz).
Cheers
On Tue, Aug 13, 2013 at 6:25 PM, 闫昆 yankunhad...@gmail.com wrote:
Hi all
When I compile hbase source there is a directory called target, which
is
thanks for your help !~
2013/8/13 Ted Yu yuzhih...@gmail.com
The target directory contains generated-sources, classes for the java
source files and possibly tar ball (e.g. hbase-0.94.8.tar.gz).
Cheers
On Tue, Aug 13, 2013 at 6:25 PM, 闫昆 yankunhad...@gmail.com wrote:
Hi all
When I
Hi all,
I used Maven to compile the HBase source and imported it into Eclipse (remote
Java application) to debug HBase. When debugging HBase importtsv I input
arguments in this format:
hadoop jar hbase.jar importtsv -Dimporttsv.columns=<some columns>
-Dimporttsv.separator=, <hbase table> <hdfs data>
but when running to
Please refer to http://hbase.apache.org/book.html#importtsv
On Tue, Aug 13, 2013 at 6:52 PM, 闫昆 yankunhad...@gmail.com wrote:
Hi all
I use maven compile hbase source and import to eclipse (remote java
application) to debug hbase ,when debug hbase importtsv I input argument
like this format
Hi Devs/Users,
I read the documentation about HBaseIntegration in Hive, and I ran a test of
it: with the same data stored in HDFS and in HBase, I created two external
tables and used the same HiveQL query on both.
The result is that querying the HDFS external table takes about 20 mins,
while querying
Hi Varun,
I tried BulkDeletePoint and it is giving me an UnsupportedProtocolException -
no handler to support protocol. I am on HBase 0.94.3. Can you help out - is
this something you faced too?
Regards,
Mrudula
You mean BulkDeleteProtocol?
Can you paste the trace of the exception that you are getting, along with nearby logs?
-Anoop-
On Wed, Aug 14, 2013 at 9:22 AM, Mrudula Madiraju mrudulamadir...@yahoo.com
wrote:
Hi Varun,
I tried BulkDeletePoint and it is giving me an
UnsupportedProtocolException - no handler
How much heap did you give to the region server?
Can you show us a log snippet from shortly before 12:12:08?
Which HBase version were you using?
On Aug 13, 2013, at 6:33 PM, 李佳 tjuhenr...@gmail.com wrote:
Hi , Devs/Users ;
Recently I use HBase API to insert big data into hbase;It's about 77G and
my
Hi
Try using SingleColumnValueFilter#setFilterIfMissing().
Its default value is false. Set it to true to filter out rows where
cf:q is missing.
-Anoop-
On Mon, Aug 12, 2013 at 9:52 PM, Bing Li lbl...@gmail.com wrote:
Hi, all,
My understandings about HBase table and its family are as
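Anoop's suggestion above, sketched against the 0.94-era client API. The family "cf", qualifier "q", and comparison value "someValue" are placeholders, not anything from the thread:

```java
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterIfMissingExample {
    // Build a Scan that returns only rows whose cf:q equals "someValue";
    // with setFilterIfMissing(true), rows lacking cf:q entirely are
    // filtered out as well.
    public static Scan buildScan() {
        SingleColumnValueFilter filter = new SingleColumnValueFilter(
                Bytes.toBytes("cf"), Bytes.toBytes("q"),
                CompareOp.EQUAL, Bytes.toBytes("someValue"));
        // Default is false, i.e. rows without cf:q would still be returned.
        filter.setFilterIfMissing(true);
        Scan scan = new Scan();
        scan.setFilter(filter);
        return scan;
    }
}
```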
On Tue, Aug 13, 2013 at 5:17 PM, Bryan Beaudreault bbeaudrea...@hubspot.com
wrote:
I'm running cdh4.2 hbase 0.94.2, and am looking to merge some regions in a
table. Looking at Merge.java, it seems to require that the entire cluster
be offline. However, I also notice an HMerge.java which
Increase hbase.client.scanner.caching! It helps
On Wed, Aug 14, 2013 at 8:10 AM, tjuhenr...@gmail.com wrote:
Hi Devs/Users;
I read the document about HBaseIntegration in hive , and I did a test for
this .
the same data stored in hdfs and hbase , I create two external tables and
use
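The scanner caching advice above can be applied per-scan in client code (as well as cluster-wide via hbase-site.xml); a minimal sketch, where the value 500 is an arbitrary illustration, not a recommendation from the thread:

```java
import org.apache.hadoop.hbase.client.Scan;

public class ScannerCachingExample {
    public static Scan buildScan() {
        Scan scan = new Scan();
        // Number of rows fetched per RPC. The 0.94-era default for
        // hbase.client.scanner.caching is 1, which makes wide scans
        // (e.g. the Hive-over-HBase scan in this thread) very chatty.
        // 500 is illustrative - tune it to your row size and client memory.
        scan.setCaching(500);
        return scan;
    }
}
```

The same setting can be made the default for all clients by putting hbase.client.scanner.caching in hbase-site.xml.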