Hbase ImportTSV runs in parallel?

2015-03-05 Thread Siva
Hi All, I’m loading data to HBase using the HBase ImportTsv utility. When I kick off this process simultaneously for different tables in different sessions, both processes start in parallel until they reach the MapReduce stage. Once one of the processes kicks off a MapReduce job for one table,

Re: Hbase ImportTSV runs in parallel?

2015-03-05 Thread Vladimir Rodionov
Search Google for how to run jobs in parallel in Hadoop. Your MapReduce configuration allows you to run only one job at a time. This usually happens when the number of a job's tasks exceeds the capacity of the cluster. -Vlad On Thu, Mar 5, 2015 at 3:03 PM, Siva sbhavan...@gmail.com wrote: Hi All, I’m loading
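Vladimir's diagnosis points at scheduler capacity: under a FIFO scheduler, the first ImportTsv job's tasks can occupy the whole cluster, so the second job queues behind it. As a hedged sketch (property name assumes a YARN cluster; MR1 clusters use different scheduler keys), switching to the Fair Scheduler lets concurrent jobs share resources:

```xml
<!-- yarn-site.xml: run the ResourceManager with the Fair Scheduler so
     two ImportTsv jobs launched in separate sessions share containers
     instead of one queuing behind the other. -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
```

After restarting the ResourceManager, kick off the two ImportTsv runs as before; both jobs should make progress in parallel, subject to total cluster capacity.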

Re: Is there any material introducing how to program Endpoint with protobuf tech?

2015-03-05 Thread donhoff_h
Thanks, Andrew. -- Original Message -- From: Andrew Purtell apurt...@apache.org; Sent: Thursday, March 5, 2015, 11:39 AM To: user@hbase.apache.org; Subject: Re: Is there any material introducing how to program Endpoint with protobuf tech? Your best bet is to look
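Andrew's (truncated) advice is to look at the examples shipped with HBase. As a hedged sketch of the starting point, an Endpoint coprocessor begins with a protobuf service definition; the names below (`CountRequest`, `CountResponse`, `RowCountService`) are illustrative, modeled on HBase's bundled row-count example:

```protobuf
// Hypothetical Endpoint service definition (protobuf 2 syntax, as used
// by HBase 0.98/1.x). Compile with protoc, then implement the generated
// Service interface in a coprocessor class loaded on the region server.
option java_package = "org.example.hbase.coprocessor"; // hypothetical package
option java_outer_classname = "RowCountProtos";
option java_generic_services = true;

message CountRequest {
}

message CountResponse {
  required int64 count = 1 [default = 0];
}

service RowCountService {
  rpc getRowCount(CountRequest) returns (CountResponse);
}
```

Clients then invoke the service per-region via `Table.coprocessorService(...)`; the shipped examples Andrew refers to show the full client and server halves.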

Re: Where is HBase failed servers list stored

2015-03-05 Thread Bryan Beaudreault
You should run with a backup master in a production cluster. The failover process works very well and will cause no downtime. I've done it literally hundreds of times across our multiple production hbase clusters. Even if you don't have a backup master, you should still be fine with restarting
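Bryan's recommendation (run a backup master) is configured with a plain text file rather than hbase-site.xml. A minimal sketch, assuming a standard tarball layout and a hypothetical standby hostname:

```text
# conf/backup-masters -- one hostname per line; start-hbase.sh will
# start a standby HMaster process on each listed host.
master2.example.com
```

The standby masters wait on ZooKeeper leader election and one takes over automatically if the active master dies; you can also start a standby ad hoc by running `hbase-daemon.sh start master` on another node.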

Re: Where is HBase failed servers list stored

2015-03-05 Thread Nicolas Liochon
As Bryan. On 5 Mar 2015 17:55, Bryan Beaudreault bbeaudrea...@hubspot.com wrote: You should run with a backup master in a production cluster. The failover process works very well and will cause no downtime. I've done it literally hundreds of times across our multiple production hbase

Nice blog post on coming zk-less assignment by our Jimmy Xiang

2015-03-05 Thread Stack
See https://blogs.apache.org/hbase/entry/hbase_zk_less_region_assignment St.Ack
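For readers who want to try what the post describes, region assignment coordinated through the master instead of ZooKeeper is controlled by a single flag. A hedged sketch (the property name is my reading of the HBase 1.0-era setting; verify against your release's defaults):

```xml
<!-- hbase-site.xml: master-coordinated (ZK-less) region assignment.
     false selects the new ZK-less path; true falls back to the older
     ZooKeeper-based assignment. -->
<property>
  <name>hbase.assignment.usezk</name>
  <value>false</value>
</property>
```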

Re: Dealing with data locality in the HBase Java API

2015-03-05 Thread Michael Segel
The better answer is that you don’t worry about data locality. It's becoming a moot point. On Mar 4, 2015, at 12:32 PM, Andrew Purtell apurt...@apache.org wrote: Spark supports creating RDDs using Hadoop input and output formats (

RE: Where is HBase failed servers list stored

2015-03-05 Thread Sandeep Reddy
Since ours is a production cluster, we can't restart the master. I tested this scenario in our test cluster, and it got resolved after restarting the master. Other than restarting the master, I couldn't find any solution. Thanks, Sandeep. From: nkey...@gmail.com Date: Wed, 4 Mar 2015 14:55:03 +0100 Subject: