Please take a look at:
hbase-spark/src/test/scala/org/apache/hadoop/hbase/spark/BulkLoadSuite.scala
where usage of LoadIncrementalHFiles is demonstrated.
This is in master branch of hbase.
On Mon, Sep 19, 2016 at 12:10 PM, Punit Naik wrote:
> Hi Guys
>
> I am currently using HBase's Put API to
, Sep 18, 2016 at 11:34 AM, Krishna wrote:
> I will try that. And when inserting KeyValues, how would I set CellType?
>
>
> On Sunday, September 18, 2016, Ted Yu wrote:
>
> > If you have bandwidth, you can try the following change which would show
> > the KeyValue
field.getBytes()));
>
> On Sat, Sep 17, 2016 at 1:04 PM, Ted Yu wrote:
>
> > Here is related code from CellProtos.java :
> >
> > public Builder
> > setCellType(org.apache.hadoop.hbase.protobuf.generated.
> CellProtos.CellType
> > value) {
>
Here is related code from CellProtos.java :
public Builder
setCellType(org.apache.hadoop.hbase.protobuf.generated.CellProtos.CellType
value) {
if (value == null) {
throw new NullPointerException();
This means CellType.valueOf() returned null for the Cell.
Which release of
The log you cited should come from PeriodicMemstoreFlusher.
public static final String MEMSTORE_PERIODIC_FLUSH_INTERVAL =
"hbase.regionserver.optionalcacheflushinterval";
/** Default interval for the memstore flush */
public static final int DEFAULT_CACHE_FLUSH_INTERVAL = 360;
You can examine /jmx :
http://search-hadoop.com/m/YGbb3E2a71UVLBK&subj=Re+HBase+Count+Rows+in+Regions+and+Region+Servers
Looks like your load consists of both write and read. Have you turned on
bucket cache ?
http://hbase.apache.org/book.html#offheap.blockcache
On Fri, Sep 9, 2016 at 1:58 PM, ma
s
> Manjeet
>
> On Sat, Sep 10, 2016 at 6:26 AM, Ted Yu wrote:
>
> > Please take a look at:
> >
> > http://hbase.apache.org/book.html#table_schema_rules_of_thumb
> > http://hbase.apache.org/book.html#arch.regions.size
> > http://hbase.apache.org/book.html
Singh
wrote:
> Yeah its in weekdays
> Yeah default is 10 gb so what is the way/formula to know what should be the
> size of RS
> On 9 Sep 2016 19:03, "Ted Yu" wrote:
>
> > Can you clarify whether the incoming data rate is for weekdays ?
> >
> > At 6-7 G
The 'Above memstore limit' warning meant that your region server(s) were
under pressure from the write load.
Can you share memstore related config parameters ?
Did you observe hot spotting in the region server(s) ?
Cheers
On Fri, Sep 9, 2016 at 1:20 PM, marjana wrote:
> Looked at regionserver lo
How long was your mapreduce job ?
You may need to check log for map tasks to get more information.
Thanks
On Fri, Sep 9, 2016 at 11:17 AM, marjana wrote:
> I haven't tried that, afraid of how it will affect client connections.
> Any idea why it fails at the very end?
>
>
>
>
> --
> View this m
Have you tried increasing the value of hbase.client.scanner.timeout.period
(default 6) ?
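If raising it helps, the timeout can be set cluster-wide in hbase-site.xml
(the value is in milliseconds; 120000 below is only an example, not a
recommendation):

  <property>
    <name>hbase.client.scanner.timeout.period</name>
    <value>120000</value>
  </property>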
On Fri, Sep 9, 2016 at 11:06 AM, marjana wrote:
> Hi,
> I am trying to copy a table from one cluster to another. This worked fine
> for smaller tables, but trying to copy a few of the larger ones, I keep
Can you clarify whether the incoming data rate is for weekdays ?
At 6-7 GB/hour, you need to set a larger region size.
The default is 10GB.
If you know roughly how the key space would be filled, presplit your table
accordingly.
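For illustration, here is one rough way to compute evenly spaced split points when keys are expected to be uniformly distributed over the byte space. This is a plain-Java sketch only (no HBase dependency); real split keys should follow your actual key distribution:

```java
import java.util.ArrayList;
import java.util.List;

public class SplitKeys {
    // Generate (numRegions - 1) evenly spaced one-byte split points
    // over the full 0x00-0xFF first-byte key space.
    static List<byte[]> evenSplits(int numRegions) {
        List<byte[]> splits = new ArrayList<>();
        for (int i = 1; i < numRegions; i++) {
            int boundary = (int) ((long) i * 256 / numRegions);
            splits.add(new byte[] { (byte) boundary });
        }
        return splits;
    }

    public static void main(String[] args) {
        // For 4 regions this yields split points 0x40, 0x80, 0xc0.
        for (byte[] split : evenSplits(4)) {
            System.out.printf("%02x%n", split[0]);
        }
    }
}
```

The resulting keys can then be passed to the shell's create command via SPLITS, or to Admin#createTable.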
On Thu, Sep 8, 2016 at 11:24 PM, Manjeet Singh
wrote:
> Hi All
>
> I ha
Yeah, we should keep support for Java 7 in branch-1.
We can use CompletableFuture for 2.0 release.
On Thu, Sep 8, 2016 at 8:56 PM, Andrew Purtell
wrote:
> I think we should wait until 2.0 before dropping support for less than JDK
> 8. That's a pretty big deal. But, for 2.0, that would be fine I
>
> This number 50-100 regions per table at the level of individual region
> server or for the entire cluster ?
>
> Thanks,
> Sreeram
>
>
>
>
>
> On Wed, Sep 7, 2016 at 4:18 PM, Ted Yu wrote:
>
> > With properly designed schema, you don
With properly designed schema, you don't need to split the cluster.
Please see:
http://hbase.apache.org/book.html#schema
> On Sep 7, 2016, at 1:59 AM, Sreeram wrote:
>
> Dear All,
>
>
>
> Looking forward to your views on the maximum limit of HBase cluster size.
>
>
>
> We are currently d
Congratulations, Duo.
> On Sep 6, 2016, at 9:26 PM, Stack wrote:
>
> On behalf of the Apache HBase PMC I am pleased to announce that 张铎
> has accepted our invitation to become a PMC member on the Apache
> HBase project. Duo has healthy notions on where the project should be
> headed and over th
Interesting.
Minor correction:
bq. The locations of all files and regions are kept in a special metadata
table “*hbase:meta*”
The locations of hfiles are not tracked in hbase:meta
On Thu, Sep 1, 2016 at 1:52 AM, Chernov, Arseny
wrote:
> Dear colleagues at User@HBase ,
>
> I really value you
Can you take a look at TestMultiRowRangeFilter to see if your usage is
different ?
It would be easier if you pastebin snippet of your code w.r.t.
MultiRowRangeFilter.
Thanks
On Tue, Aug 30, 2016 at 8:29 AM, daunnc wrote:
> Hi HBase users. I'm using HBase with Spark;
> What I am trying to do is
Probably you can poll user@avro for how the new field is handled given old
data.
FYI
On Mon, Aug 29, 2016 at 11:28 PM, Manjeet Singh
wrote:
> I want to add a few more points
>
> I am using Java native Api for Hbase get/put
>
> and below is the example
>
> assume i have below schema and I am inser
> Where do we set this value DEFAULT_TABLE_SKEW_COST = 35. I see it only
> in StochasticLoadBalancer.java
> We don't find this in any of the HBase Config files. Do we need to re-build
> HBase from code for this?
>
> Thanks,
> Manish
>
>> On Tue, Aug 30, 2016 at 6:44 AM,
Please use user@ in the future.
You said:
zk session timout is 40s
The default value is 90s. Why did you configure a lower value ?
The "RegionServer ephemeral node deleted" message means that znode for
olap3.data.lq,16020,1470799848293
expired.
Can you pastebin JVM parameters (are you using CM
s also make sure that the
> region migrates to another region server? Or do we still need to do that
> manually?
>
> On JMX, Since the environment is production, we are yet unable to use jmx
> for stats collection. But in dev we are trying it out.
>
> On Aug 30, 2016 1:01 AM,
bq. We cannot change the maxregionsize parameter
The region size can be changed on per table basis:
hbase> alter 't1', MAX_FILESIZE => '134217728'
See the beginning of hbase-shell/src/main/ruby/shell/commands/alter.rb for
more details.
FYI
On Sun, Aug 28, 2016 at 10:44 PM, Manish Maheshwari
Cycling old bits:
http://search-hadoop.com/m/YGbb3E2a71UVLBK&subj=Re+HBase+Count+Rows+in+Regions+and+Region+Servers
You can use /jmx to inspect regions and find the hotspot.
On Mon, Aug 29, 2016 at 7:29 AM, Manish Maheshwari
wrote:
> Hi Dima,
>
> Thanks for the suggestion. We can load the data
e know,
>
> Thanks,
> Yeshwanth
>
>
>
> On Fri, Aug 26, 2016 at 5:41 PM, Ted Yu wrote:
>
> > From IncreasingToUpperBoundRegionSplitPolicy#configureForRegion():
> >
> > initialSize = conf.getLong("hbase.increasing.policy.initial.size",
> &
For hortonworks product(s), consider raising question on
https://community.hortonworks.com
FYI
On Sun, Aug 28, 2016 at 6:45 PM, spats wrote:
> Regarding hbase connector by hortonworks
> https://github.com/hortonworks-spark/shc, it would be great if someone can
> answer these
>
> 1. What version
From IncreasingToUpperBoundRegionSplitPolicy#configureForRegion():
initialSize = conf.getLong("hbase.increasing.policy.initial.size", -1);
...
if (initialSize <= 0) {
initialSize = 2 * conf.getLong(HConstants.HREGION_MEMSTORE_FLUSH_SIZE,
HTab
n which
> > machine.
> >
> > That will tell you the overall skew of your data in terms of raw bytes.
> >
> > Should be a pretty decent estimate and a lot faster than scanning your
> > table provided your table / cluster is sufficiently large.
> >
> >
the
> impact on JMX would be less than 2-3% on HBase performance?
>
> Thanks,
> Manish
>
>
> On Fri, Aug 26, 2016 at 12:11 PM, Ted Yu wrote:
>
> > Have you looked at /jmx endpoint on the servers ?
> > Below i
efault_table_x_region_66bbec5f7e136b226a19b5fdf9f17cbe_metric_incrementCount"
: 0,
On Fri, Aug 26, 2016 at 11:59 AM, Manish Maheshwari
wrote:
> Hi Ted,
>
> I understand the region crash/migration/splitting impact. Currently we have
> hotspotting on few region servers. I am trying to c
Can you elaborate on your use case ?
Suppose row A is on server B, after you retrieve row A, the region for row
A gets moved to server C (load balancer or server crash). Server B would no
longer be relevant.
Cheers
On Fri, Aug 26, 2016 at 10:07 AM, Manish Maheshwari
wrote:
> Hi,
>
> I looked a
Looks like the image didn't go through.
Can you pastebin the error ?
Cheers
On Fri, Aug 26, 2016 at 7:28 AM, Manjeet Singh
wrote:
> Adding
> I am getting below error on truncating the table
>
> [image: Inline image 1]
>
> On Fri, Aug 26, 2016 at 7:56 PM, Manjeet Singh wrote:
>
>> Hi All
>>
Can you take a look at the replication bridge [0] Jeffrey wrote ?
It used both client library versions through JarJar [1] to avoid name
collision.
[0]: https://github.com/hortonworks/HBaseReplicationBridgeServer
[1]: https://code.google.com/p/jarjar/
On Fri, Aug 26, 2016 at 12:26 AM, Enrico Oliv
Switching to user@
http://hbase.apache.org/book.html#datamodel
By column I guess you mean column qualifier. The addition of column
qualifier in future writes can be performed based on existing schema.
On the application side, when a retrieved row doesn't contain the new column
qualifier, you can inter
Replication between 0.98.6 and 1.2.0 should work.
Thanks
> On Aug 26, 2016, at 1:59 AM, spats wrote:
>
>
> Does hbase replication works between different versions 0.98.6 and 1.2.0?
>
> We are in the process of upgrading our clusters & during that time we want
> to make sure if replication
for 98
>>>> so each time it will always return me A for 98
>>>> so if I have my row key Like
>>>> A_98_101
>>>> A_98_102
>>>> A_98_103
>>>> A_98_104
>>>> A_98_10
ort List object
>
> On Thu, Aug 25, 2016 at 12:32 AM, Ted Yu wrote:
>
> > Get is used to retrieve single row.
> >
> > If Get serves your need, you don't need PrefixFilter.
> >
> > On Wed, Aug 24, 2016 at 11:58 AM, Manjeet Singh <
> > manj
y to the table
> then retrieving the last million will be trickier and you will have to scan
> based on timestamp (if not modified) and then filter one more time.
>
> esteban.
>
>
> --
> Cloudera, Inc.
>
>
> On Wed, Aug 24, 2016 at 12:31 PM, Ted Yu w
The following API should help in your case:
public Scan setReversed(boolean reversed) {
Cheers
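To see why a reversed scan returns larger keys first: HBase compares row keys as unsigned byte arrays, lexicographically. A self-contained illustration of that ordering (plain Java, no HBase dependency; compare() mimics what Bytes.compareTo does):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class RowKeyOrder {
    // Unsigned lexicographic byte[] comparison: the ordering HBase
    // uses for row keys.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        String[] keys = { "row10", "row2", "row1" };
        byte[][] raw = new byte[keys.length][];
        for (int i = 0; i < keys.length; i++) {
            raw[i] = keys[i].getBytes(StandardCharsets.UTF_8);
        }
        // Forward scan order after sorting: row1, row10, row2 --
        // note "row10" sorts before "row2" because the comparison
        // is byte-wise, not numeric.
        Arrays.sort(raw, RowKeyOrder::compare);
        for (byte[] k : raw) {
            System.out.println(new String(k, StandardCharsets.UTF_8));
        }
        // A scan with setReversed(true) walks the same keys in the
        // opposite order.
    }
}
```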
On Wed, Aug 24, 2016 at 12:05 PM, Manjeet Singh
wrote:
> Hi all
>
> HBase doesn't provide sorting on columns, but row keys are stored in sorted
> form, smaller values first and greater values last
>
> example
ter(byte[] rowPrefix)
> >
> >
> > -Vlad
> >
> > On Wed, Aug 24, 2016 at 5:28 AM, Ted Yu wrote:
> >
> > > Please use the following API to set start row before calling
> > > hTable.getScanner(scan):
> > >
> > > public Scan
Please use the following API to set start row before calling
hTable.getScanner(scan):
public Scan setStartRow(byte [] startRow) {
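To bound a prefix scan you also need a stop row. A common trick, roughly what Scan#setRowPrefixFilter computes internally, is to increment the last non-0xFF byte of the prefix. Below is a plain-Java sketch of that computation (not the HBase utility itself); the "9865327845_" prefix is taken from the question:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class PrefixStopRow {
    // Smallest row key strictly greater than every key with the given
    // prefix: increment the last non-0xFF byte and truncate after it.
    // Returns an empty array (meaning "scan to end of table") when the
    // prefix is all 0xFF bytes.
    static byte[] stopRowForPrefix(byte[] prefix) {
        byte[] stop = Arrays.copyOf(prefix, prefix.length);
        for (int i = stop.length - 1; i >= 0; i--) {
            if (stop[i] != (byte) 0xFF) {
                stop[i]++;
                return Arrays.copyOf(stop, i + 1);
            }
        }
        return new byte[0];
    }

    public static void main(String[] args) {
        byte[] prefix = "9865327845_".getBytes(StandardCharsets.UTF_8);
        byte[] stop = stopRowForPrefix(prefix);
        // A scan bounded by [prefix, stop) returns exactly the rows
        // starting with the prefix.
        System.out.println(new String(stop, StandardCharsets.UTF_8));
    }
}
```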
On Wed, Aug 24, 2016 at 5:08 AM, Manjeet Singh
wrote:
> Hi All,
>
> I have below code where I have row key like 9865327845_#RandomChar
> I want to perform prefix s
You should be able to do rolling upgrade.
Cheers
> On Aug 24, 2016, at 3:32 AM, ssharavanan wrote:
>
> Can we directly upgrade HBase from 1.1.0 to 1.2.2 being from minor version to
> another minor patch version, just checking whether it can be done.?
>
> Planning to follow below steps,
> 1.In
One downside is that when the machine crashes, you would lose more than one
region server.
You may need to subclass RSGroupBasedLoadBalancer so that balancing
decision fits your requirement.
Cheers
On Tue, Aug 23, 2016 at 4:53 PM, GuangYang wrote:
> Hello,We are recently exploring running mult
e help which prints values or configuration.
> Do you happen to know what I should run to see those values?
>
> thanks.
>
>
>
> On Tue, Aug 23, 2016 at 9:57 AM, Ted Yu wrote:
>
> > Can you pastebin the output from the 3 commands, especially remove_peer ?
> >
&
Can you pastebin the output from the 3 commands, especially remove_peer ?
Can you use 'hbase zkcli' to inspect //replication/peers ?
Thanks
On Tue, Aug 23, 2016 at 9:51 AM, Ted wrote:
> I'm using hbase 1.2.1 and I'm having problems aborting a table replication.
>
> I was testing replication an
国泉:
Please share compaction related config parameters in your cluster.
For the table, how many column families does it have ?
Thanks
On Wed, Aug 17, 2016 at 2:49 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Hi,
>
> There is 2 reasons to have a major compaction.
>
> the first on i
> The Masters are down on the 0.94 so I can't use the hbase shell.
>
>
>> On 15 Aug 2016, at 20:01, Ted Yu wrote:
>>
>> Please verify that your 0.94 cluster is configured with hfile v2.
>> Config hfile.format.version should have value of 2.
>>
&
directories.
> >
> > Option 1 is more delicate, but as you said the old hdfs was fine, it
> should
> > work for you.
> > For option 2, pre-split the tables on the new cluster to match the region
> > boundaries of the old tables.
> >
> > Jerry
> &g
For the Import tool, you can specify the following (quoted from usage):
System.err.println("To import data exported from HBase 0.94, use");
System.err.println(" -Dhbase.import.version=0.94");
FYI
On Sun, Aug 14, 2016 at 12:09 AM, Rob Verkuylen wrote:
> We're recovering from a crash o
Nitin:
Error log and command output didn't go through.
Consider using a third-party site and posting links.
In my previous response, I was suggesting not to use -repair option.
On Thu, Aug 11, 2016 at 10:58 AM, Nitin Goswami wrote:
> Hi Michal,
>
> Thanks for the quick response. Following are the c
Suyog:
See this presentation on OpenTSDB :
http://www.slideshare.net/cloudera/4-opentsdb-hbasecon
On Thu, Aug 11, 2016 at 7:27 AM, Sterfield wrote:
> Hi,
>
> I went through this few weeks ago, and I'm afraid it'll be a bit long to
> have a POC running, considering that you'll have to build an H
What's the value for dfs.domain.socket.path ?
See explanation in http://hbase.apache.org/book.html for the meaning of
this config.
Cheers
On Thu, Aug 11, 2016 at 12:46 AM, Ming Yang
wrote:
> The cluster enabled shortCircuitLocalReads.
>
> dfs.client.read.shortcircuit
> true
>
>
> Whe
Nitin:
Normally you should not run hbck with '-repair', which includes many options.
Depending on the actual inconsistencies, issue specific -fix options to fix
them.
Please provide more information as Michal suggested.
On Thu, Aug 11, 2016 at 7:36 AM, Michal Medvecky wrote:
> Hello,
>
> can you
elease.
> I am writing a client application, and need to lock a hbase table, if this
> can be used directly, that will be super great!
>
> Thanks,
> Ming
>
> -Original Message-
> From: Ted Yu [mailto:yuzhih...@gmail.com]
> Sent: Wednesday, August 10, 2016 1:
getLock(), it check the row 0 value in an atomic check
> and put operation. So if the 'table lock' is free, anyone should be able to
> get it I think.
>
> Maybe I have to study the Zookeeper's distributed lock recipes?
>
> Thanks,
> Ming
>
> -Original Mes
What if the process of owner of the lock dies ?
How can other processes obtain the lock ?
Cheers
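One way to handle a dead lock owner is to store a lease expiry timestamp in the lock cell, so that other processes can reclaim an expired lock with the same atomic compare-and-set. Below is a sketch of just the expiry logic, using an in-memory AtomicReference as a stand-in for HBase's atomic checkAndPut on the special lock row (names and the lease scheme are illustrative, not from the thread):

```java
import java.util.concurrent.atomic.AtomicReference;

public class LeaseLock {
    // Lock cell value: lease expiry time in millis, or null when free.
    // compareAndSet stands in for checkAndPut on the lock row.
    private final AtomicReference<Long> cell = new AtomicReference<>(null);

    // Try to acquire; a lock whose lease has expired counts as free,
    // so a crashed owner cannot block other processes forever.
    boolean tryLock(long now, long leaseMillis) {
        Long current = cell.get();
        if (current == null) {
            return cell.compareAndSet(null, now + leaseMillis);
        }
        if (current <= now) { // lease expired: reclaim atomically
            return cell.compareAndSet(current, now + leaseMillis);
        }
        return false; // still held by a live owner
    }

    void unlock() {
        cell.set(null);
    }

    public static void main(String[] args) {
        LeaseLock lock = new LeaseLock();
        System.out.println(lock.tryLock(0, 100));   // acquired
        System.out.println(lock.tryLock(50, 100));  // still held
        System.out.println(lock.tryLock(150, 100)); // lease expired, reclaimed
    }
}
```

The owner must finish its work (or renew the lease) before the expiry, otherwise two processes could believe they hold the lock at once.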
On Tue, Aug 9, 2016 at 8:19 AM, Liu, Ming (Ming) wrote:
> Hi, all,
>
> I want to implement a simple 'table lock' in HBase. My current idea is for
> each table, I choose a special rowkey which NEVER
Was there any error in the log of CsvBulkLoadTool ?
Which hbase release do you use ?
BTW Phoenix 4.4 is a pretty old release. Please consider using a newer
release.
On Mon, Aug 8, 2016 at 3:07 PM, spark4plug wrote:
> Hi folks looking for help in terms of bulkloading about 10 txt files into
> hbase u
;user";
> >>
> >> Subject: Re: Re: Hbase cluster is suddenly unable to respond
> >>
> >>
> >>
> >>
> >>
> >> The client code is http://paste2.org/p3BXkKtV
> >>
> >>
> >> Is the client version com
Manjeet:
Can you share the config you use so that we can have a better idea ?
Which hbase release are you using ?
The performance degradation was w.r.t. what other approach ?
Cheers
On Sat, Aug 6, 2016 at 1:11 AM, Dima Spivak wrote:
> Hey Manjeet,
>
> Let me move dev@ to bcc and add user@ as th
nPrefixFilter = new
> ColumnPrefixFilter(Bytes.toBytes("e:cat"));
>
> Filter valueFilter = new ValueFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComparator(
> Bytes.toBytes("hello kitty")));
>
>
>
> On Thu, Aug 4, 2016 at 11:56 AM, Ted Yu wrote:
>
>
*e:cat1, e:cat2, e:cat3,*
> the value :* 'hello kitty' ,*
>
> Any advice is appreciated!
> QiaoYanke
>
>
> On Wed, Aug 3, 2016 at 3:15 AM, Ted Yu wrote:
>
> > You can use the following method of Scan to specify columns to retrieve:
> >
> > public Scan addColu
You can use the following method of Scan to specify columns to retrieve:
public Scan addColumn(byte [] family, byte [] qualifier) {
w.r.t. value comparison with cf:c3 column, consider using
SingleColumnValueFilter.
Cheers
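For example, assuming the family is 'cf' and the table is 't1' (both names
are placeholders; the thread only mentions cf:c3 and the value 'hello
kitty'), the equivalent shell scan would look like:

  scan 't1', {COLUMNS => ['cf:c3'],
    FILTER => "SingleColumnValueFilter('cf', 'c3', =, 'binary:hello kitty')"}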
On Mon, Aug 1, 2016 at 6:56 PM, 乔彦克 wrote:
> Hi all,
>
> Currently I
For #1, please take a look at split.rb :
Split entire table or pass a region to split individual region. With the
second parameter, you can specify an explicit split key for the region.
Examples:
split 'tableName'
split 'namespace:tableName'
split 'regionName' # format: 'tableName,sta
Have you taken a look at
http://hbase.apache.org/book.html#hadoop2.hbase_0.94 ?
On Mon, Aug 1, 2016 at 1:04 PM, Igor Berman wrote:
> Hi all,
> I have old hbase cluster 0.94x that I need to write some data to. The
> problem is that my setup already contains hadoop2 jars in classpath(the
> natural
You can issue a Scan with each of the start keys and setBatch(1).
Close each scan after next() is called.
On Mon, Aug 1, 2016 at 1:55 AM, jinhong lu wrote:
> Hi, I want to get first row of every region in a table, Any API for that?
> getStartKey() will return the rowkey not existed, but just the p
As mentioned in Kevin's first email, if /hbase-unsecure is the znode used
by Ambari, setting zookeeper.znode.parent to hbase (or /hbase) wouldn't
help.
On Mon, Aug 1, 2016 at 3:39 AM, Adam Davidson <
adam.david...@bigdatapartnership.com> wrote:
> Hi Kevin,
>
> when creating the Configuration obje
How did your Java program obtain hbase-site.xml of the cluster ?
Looks like hbase-site.xml was not on the classpath.
On Mon, Aug 1, 2016 at 3:36 AM, kevin wrote:
> hi,all:
> I install hbase by ambari ,I found it's zookeeper url is /hbase-unsecure .
> when I use java api to connect to hbase ,pro
fs -ls /
> 16/07/23 14:15:58 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> Found 9 items
> ...
> -rwxrwxrwx 1 root supergroup1771201 2016-07-23 14:13 /test.jar
> ...
>
>
>
t_, existingValue=-1,
> completeSequenceId=-1
> 2016-07-22 12:03:40,128 TRACE
> [B.defaultRpcServer.handler=12,queue=0,port=39479] master.ServerManager:
> 7e8aa2d93aa716ad2068808d938f0786, family=tddlcf, existingValue=-1,
> completeSequenceId=-1
> 2016-07-22 12:03:40,128 TRACE
w.r.t. the DoNotRetryIOException, can you take a look at region server log
where testTbl region(s) was hosted ?
See if there is some clue why the sanity check failed.
Thanks
On Fri, Jul 22, 2016 at 1:12 AM, Ma, Sheng-Chen (Aven) <
shengchen...@esgyn.cn> wrote:
> Hi all:
> I want to dynamic add
Please take a look at the following methods:
From HBaseAdmin:
public List getTableRegions(final TableName tableName)
From HRegion:
public static HDFSBlocksDistribution computeHDFSBlocksDistribution(final
Configuration conf,
final HTableDescriptor tableDescriptor, final HRegionInfo
What format are the one billion records saved in at the moment ?
The answer would depend on the compression scheme used for the table:
http://hbase.apache.org/book.html#compression
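For example, compression is set per column family; from the shell (table and
family names below are placeholders):

  create 't1', {NAME => 'cf', COMPRESSION => 'SNAPPY'}

The on-disk size of the 100G then depends on how compressible the values are.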
On Tue, Jul 19, 2016 at 8:59 PM, Jone Zhang wrote:
> There is a 100G data set of one billion records.
> If i save it
This seems related: HBASE-14963
On Tue, Jul 19, 2016 at 3:50 PM, Saurabh Malviya (samalviy) <
samal...@cisco.com> wrote:
> Hi,
>
> I am addressing one issue to make Hbase and ES work together in same spark
> project
>
>
> https://community.cloudera.com/t5/Storage-Random-Access-HDFS/Apache-HBase-S
How did you start the master ?
Looks like hbase-server jar was not on classpath.
Cheers
On Wed, Jul 13, 2016 at 6:52 AM, Roman Wesołowski <
roman.wesolow...@apollogic.com> wrote:
> Hello,
>
>
> I'm new in Hbase so I need a help.
>
>
> While I'm trying to start Hbase I have an error:
>
> Error:
gt; }
> >
> > i just saw that i am using job.setMapOutputValueClass(classOf[Put])
> >
> > where as i am writing KeyValue, does that cause any issue?
> >
> > i will update the code and will run it,
> >
> > can you suggest me sorting on partitions.
> >
>
Can you show the code inside saveASHFile ?
Maybe the partitions of the RDD need to be sorted (for 1st issue).
Cheers
On Wed, Jul 13, 2016 at 4:29 PM, yeshwanth kumar
wrote:
> Hi i am doing bulk load into HBase as HFileFormat, by
> using saveAsNewAPIHadoopFile
>
> i am on HBase 1.2.0-cdh5.7.0 a
Which release of hbase are you using ?
Does it include HBASE-15213 ?
Thanks
On Sat, Jul 9, 2016 at 3:14 AM, 陆巍 wrote:
> Hi,
>
> I had a test for Increment operation, and find the performance is really
> bad: 94809ms for 1000 increment operaions.
> The testing cluster is pretty small with only
] logger.type:
> >> > listStatus(alluxio://master:19998/hbase/data/hbase/meta/.tabledesc)
> >> > 2016-06-20 14:50:48,335 ERROR [master:master:6] master.HMaster:
> >> > Unhandled exception. Starting shutdown.
> >> > java.io.IOEx
Robert:
When using `spark-submit`, the application jar along with any jars included
with the `--jars` option
will be automatically transferred to the cluster. URLs supplied after
`--jars` must be separated by commas. That list is included on the driver
and executor classpaths. Directory expans
The image still didn't come through.
Please use a third-party site.
Thanks
> On Jun 29, 2016, at 11:41 AM, karthi keyan wrote:
>
> actually am facing
>
> https://issues.apache.org/jira/browse/HBASE-12954
>
> Which displays 2 hostname for the same IP as in attached image.
>
>> On Wed, Jun
There is no hbase release with full support for SparkSQL yet.
For #1, the classes / directories are (master branch):
./hbase-spark/src/main/java/org/apache/hadoop/hbase/spark/example/hbasecontext
./hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/example/hbasecontext
hbase-spark/src/main/
HBASE-10118 was integrated into 0.98.2
The user was running 0.98.9
Hmm
On Sun, Jun 26, 2016 at 12:05 PM, Dima Spivak wrote:
> Hey M.,
>
> Just to follow up on what JMS said, this was fixed in April 2014 (details
> at https://issues.apache.org/jira/browse/HBASE-10118), so running a
> version
>
ht
>
> do not need to store hostname many times
>
>
>
>
> thanks
>
>
> On 2016-06-23 12:50, Ted Yu wrote:
>
> YQ:
> The HostAndWeight is basically a tuple.
> In getTopHosts(), hosts are retrieved.
> In getWeight(String host), weight is retrieved.
>
> Why
YQ:
The HostAndWeight is basically a tuple.
In getTopHosts(), hosts are retrieved.
In getWeight(String host), weight is retrieved.
Why do you think a single Long is enough ?
Cheers
On Wed, Jun 22, 2016 at 9:28 PM, ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com> wrote:
> Hi WangYQ,
>
>
Jinhong:
Please take a look at 3rd paragraph of:
http://hbase.apache.org/book.html#gcpause
Cheers
On Wed, Jun 22, 2016 at 2:09 AM, Heng Chen wrote:
> 8000/200 = 40, if your table balance enough, each RS will serve 40
> requests per second, that is OK for RS. Have you try set xmn smaller t
Can you find out which region server hosted d654d01588e8a46d7050852978f8eaf9
and examine its log to see if there was some clue ?
Thanks
On Sat, Jun 18, 2016 at 8:29 AM, Chathuri Wimalasena
wrote:
> Hi,
>
> I'm using HBase 0.94-23 with Hadoop 2.7.2. In my hadoop namenode log file,
> I'm getting
ually it should be KERBEROS
> authentication.
>
>
>
> And getting a warning message as "responseTooSlow"
>
>
>
> Hope this will help you to figure the issue.
>
> Thanks,
> Kumar
>
>> On Sat, Jun 18, 2016 at 12:45 AM, Ted Yu wrote:
>
> Hi,
>
> Please find the log,
>
> http://pastebin.com/Bc3ywAQQ
>
>
>
>
>
>
>
> hbase(main):007:0> grant 'Selva','RWXCA','@default'
> 0 row(s) in 21.4610 seconds
> hbase(main):008:0> revoke 'Selva','@default
Since you already have hadoop 2.7.1, why is alluxio 1.1.0 needed ?
Can you illustrate your use case ?
Thanks
On Wed, Jun 15, 2016 at 7:27 PM, kevin wrote:
> hi,all:
>
> I wonder to know If run hbase on Alluxio/tacyon is possible and a good
> idea, and can anybody share the experience.,thanks.
Tom:
Can you pastebin the stack trace for the exception ?
It would be nice if you can show snippet of your code too.
Thanks
> On Jun 15, 2016, at 8:24 AM, Ellis, Tom (Financial Markets IT)
> wrote:
>
> So I have a working prototype using just bulk puts on a table and using
> setCellVisibili
umentation available to secure zookeeper and hbase with kerberos
> properly?
>
> The same log occurs in normal cluster also and i have enabled
> authorization. The same authorization command runs in 5 to 6 seconds.
>
> Thanks,
> Kumar
>
> On Tue, Jun 14, 2016 at 7:59 PM, Ted
Please don't cross post.
This seems to be an advertisement.
> On Jun 15, 2016, at 4:41 AM, Chaturvedi Chola
> wrote:
>
> Good book on interview preparation for big data
>
> https://notionpress.com/read/big-data-interview-faqs
row(s) in 32.4330
> seconds*
>
> Find my HBase log in below pastebin
>
> http://pastebin.com/MHMjhHuF
>
>
> Thanks,
>
> Kumar
>
>
> On Mon, Jun 13, 2016 at 7:42 PM, Ted Yu wrote:
>
> > Can you inspect master log for the corresponding 40 secon
Can you inspect master log for the corresponding 40 seconds to see if there
was some clue ?
Feel free to pastebin the log snippet for this period if you cannot
determine the cause.
Cheers
On Sun, Jun 12, 2016 at 10:19 PM, kumar r wrote:
> Hi,
>
> I have configured secure HBase-1.1.3. Hadoop ve
if (combinedWithLru) {
lruCacheSize = (long) ((1 - combinedPercentage) *
bucketCacheSize);
bucketCacheSize = (long) (combinedPercentage * bucketCacheSize);
}
Looks like the above came from the introduction of the Bucket Cache:
HBASE-7404 Bucket Cache:A solution about
Please take a look at:
HBASE-10201 Port 'Make flush decisions per column family' to trunk
I think the comment you referenced is no longer true for 1.1.0+ releases.
Cheers
On Sat, Jun 11, 2016 at 8:29 PM, WangYQ wrote:
> in hbase 0.98.10 doc, section 6.2 "on the number of column families"
>
>
Which version of hbase / Hadoop are you using ?
(So that line number matching can be more accurate)
It would be good if you can show your code snippet.
Thanks
> On Jun 11, 2016, at 12:57 AM, Jilani Shaik wrote:
>
> Hi,
>
> I am trying to do hbase table bulk load from data file using map red
Which version of hbase are you using ?
Is it possible to come up with a unit test that shows what you observed ?
There is already coverage in existing unit tests, e.g. TestFilterList, which
you can use as a template.
Thanks
On Thu, Jun 9, 2016 at 3:41 AM, Eko Susilo
wrote:
> Hi All,
>
>
>
> I have
Have you seen the doc at the top
of ./hbase-shell/src/main/ruby/shell/commands/alter.rb ?
Alter a table. If the "hbase.online.schema.update.enable" property is set to
false, then the table must be disabled (see help 'disable'). If the
"hbase.online.schema.update.enable" property is set to true, ta
looks like a bug, should I file a JIRA?
>
> Thanks,
>
> Shuai
>
> On Fri, May 27, 2016 at 8:02 PM, Ted Yu wrote:
>
>> There were 7 regions Master tried to close which were opening but not
>> yet served.
>>
>> d1c7f3f455f2529da82a2f713b5ee067 was