Can you take a look at the replication bridge [0] Jeffrey wrote?
It uses both client library versions through JarJar [1] to avoid name
collisions.
[0]: https://github.com/hortonworks/HBaseReplicationBridgeServer
[1]: https://code.google.com/p/jarjar/
On Fri, Aug 26, 2016 at 12:26 AM, Enrico
Switching to user@
http://hbase.apache.org/book.html#datamodel
By column I guess you mean column qualifier. New column qualifiers can be
added in future writes without changing the existing schema.
On the application side, when a retrieved row doesn't contain the new column
qualifier, you can
Replication between 0.98.6 and 1.2.0 should work.
Thanks
> On Aug 26, 2016, at 1:59 AM, spats wrote:
>
>
> Does HBase replication work between different versions, 0.98.6 and 1.2.0?
>
> We are in the process of upgrading our clusters & during that time we want
> to
>>>> example
>>>> if based on my algo I get A for 98
>>>> so each time it will always return me A for 98
>>>> so if I have my row key Like
>>>> A_98_101
>>>> A_98_102
>>>> A_98_103
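The deterministic-prefix scheme described above (the same id, e.g. 98, always yields the same salt, e.g. A) can be sketched with a plain hash. A minimal stand-alone sketch, where the bucket count and key layout are assumptions for illustration:

```java
public class SaltedKey {
    static final int BUCKETS = 26; // salts 'A'..'Z'; an assumed bucket count

    // Deterministically derive the salt from the id, so the same id
    // always maps to the same prefix (98 -> the same letter every time).
    static String rowKey(int id, long seq) {
        char salt = (char) ('A' + Math.floorMod(Integer.hashCode(id), BUCKETS));
        return salt + "_" + id + "_" + seq;
    }

    public static void main(String[] args) {
        System.out.println(rowKey(98, 101));
        System.out.println(rowKey(98, 102)); // same salt as the line above
    }
}
```

Because the salt is a pure function of the id, all rows for one id stay contiguous and can still be fetched with a single prefix scan.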
wrote:
> Get also support List object
>
> On Thu, Aug 25, 2016 at 12:32 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > Get is used to retrieve single row.
> >
> > If Get serves your need, you don't need PrefixFilter.
> >
> > On Wed, Aug 24, 2016 at 11
keys or rows are added randomly to the table
> then retrieving the last million will be trickier and you will have to scan
> based on timestamp (if not modified) and then filter one more time.
>
> esteban.
>
>
> --
> Cloudera, Inc.
>
>
> On Wed, Aug 24, 2016 at 12:31 PM,
The following API should help in your case:
public Scan setReversed(boolean reversed) {
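setReversed(true) makes the scanner return rows in descending row-key order. A cluster-free sketch of that ordering semantics in plain Java, with a sorted map standing in for the table (the sample keys are hypothetical):

```java
import java.util.TreeMap;

public class ReversedOrder {
    public static void main(String[] args) {
        // Row keys sort lexicographically, as in an HBase table.
        TreeMap<String, String> rows = new TreeMap<>();
        rows.put("A_98_101", "v1");
        rows.put("A_98_102", "v2");
        rows.put("A_98_103", "v3");

        // A reversed scan yields the same rows, highest key first.
        String firstReversed = rows.descendingMap().firstKey();
        System.out.println(firstReversed);
    }
}
```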
Cheers
On Wed, Aug 24, 2016 at 12:05 PM, Manjeet Singh
wrote:
> Hi all
>
> HBase doesn't provide sorting on columns, but row keys are stored in sorted
> form, smallest value first, and
> >
> > setRowPrefixFilter(byte[] rowPrefix)
> >
> >
> > -Vlad
> >
> > On Wed, Aug 24, 2016 at 5:28 AM, Ted Yu <yuzhih...@gmail.com> wrote:
> >
> > > Please use the following API to set start row before calling
> > > hTable.ge
Please use the following API to set start row before calling
hTable.getScanner(scan):
public Scan setStartRow(byte [] startRow) {
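For reference, the hbase shell form of the same start-row scan (the table name and key are hypothetical):

```
scan 'myTable', {STARTROW => '9865327845'}
```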
On Wed, Aug 24, 2016 at 5:08 AM, Manjeet Singh
wrote:
> Hi All,
>
> I have below code where I have row key like 9865327845_#RandomChar
You should be able to do rolling upgrade.
Cheers
> On Aug 24, 2016, at 3:32 AM, ssharavanan wrote:
>
> Can we directly upgrade HBase from 1.1.0 to 1.2.2, going from one minor
> version to another minor/patch version? Just checking whether it can be done.
>
> Planning to
One downside is that when the machine crashes, you would lose more than one
region server.
You may need to subclass RSGroupBasedLoadBalancer so that balancing
decisions fit your requirements.
Cheers
On Tue, Aug 23, 2016 at 4:53 PM, GuangYang wrote:
> Hello,We are recently
ich prints values or configuration.
> Do you happen to know what I should run to see those values?
>
> thanks.
>
>
>
> On Tue, Aug 23, 2016 at 9:57 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > Can you pastebin the output from the 3 commands, especially remove_peer ?
>
Can you pastebin the output from the 3 commands, especially remove_peer ?
Can you use 'hbase zkcli' to inspect //replication/peers ?
Thanks
On Tue, Aug 23, 2016 at 9:51 AM, Ted wrote:
> I'm using hbase 1.2.1 and I'm having problems aborting a table replication.
>
> I was
国泉:
Please share compaction related config parameters in your cluster.
For the table, how many column families does it have ?
Thanks
On Wed, Aug 17, 2016 at 2:49 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Hi,
>
> There are 2 reasons to have a major compaction.
>
> The first on
1.2 cluster after a compaction?
>
> The Masters are down on the 0.94 so I can't use the hbase shell.
>
>
>> On 15 Aug 2016, at 20:01, Ted Yu <yuzhih...@gmail.com> wrote:
>>
>> Please verify that your 0.94 cluster is configured with hfile v2.
>> Config h
all the region
> > directories.
> >
> > Option 1 is more delicate, but as you said the old hdfs was fine, it
> should
> > work for you.
> > For option 2, pre-split the tables on the new cluster to match the region
> > boundarie
For the Import tool, you can specify the following (quoted from usage):
System.err.println("To import data exported from HBase 0.94, use");
System.err.println(" -Dhbase.import.version=0.94");
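Putting the quoted usage together, a full invocation might look like this (the table name and HDFS path are hypothetical):

```
hbase org.apache.hadoop.hbase.mapreduce.Import \
  -Dhbase.import.version=0.94 myTable /export/from-0.94
```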
FYI
On Sun, Aug 14, 2016 at 12:09 AM, Rob Verkuylen wrote:
> We're
Nitin:
Error log and command output didn't go through.
Consider using a third-party site and posting links.
In my previous response, I was suggesting not to use the -repair option.
On Thu, Aug 11, 2016 at 10:58 AM, Nitin Goswami wrote:
> Hi Michal,
>
> Thanks for the quick
Suyog:
See this presentation on OpenTSDB :
http://www.slideshare.net/cloudera/4-opentsdb-hbasecon
On Thu, Aug 11, 2016 at 7:27 AM, Sterfield wrote:
> Hi,
>
> I went through this few weeks ago, and I'm afraid it'll be a bit long to
> have a POC running, considering that
What's the value for dfs.domain.socket.path ?
See explanation in http://hbase.apache.org/book.html for the meaning of
this config.
Cheers
On Thu, Aug 11, 2016 at 12:46 AM, Ming Yang
wrote:
> The cluster enabled shortCircuitLocalReads.
>
>
Nitin:
Normally you should not run hbck with '-repair', which includes many options.
Depending on the actual inconsistencies, issue specific -fix options to fix
them.
Please provide more information as Michal suggested.
On Thu, Aug 11, 2016 at 7:36 AM, Michal Medvecky wrote:
>
if this API change for each release.
> I am writing a client application, and need to lock a hbase table, if this
> can be used directly, that will be super great!
>
> Thanks,
> Ming
>
> -----Original Message-
> From: Ted Yu [mailto:yuzhih...@gmail.com]
> Sent: Wednes
name and invoke getLock(); it checks the row 0 value in an atomic check
> and put operation. So if the 'table lock' is free, anyone should be able to
> get it I think.
>
> Maybe I have to study the Zookeeper's distributed lock recipes?
>
> Thanks,
> Ming
>
> -Original Me
What if the process that owns the lock dies?
How can other processes obtain the lock?
Cheers
On Tue, Aug 9, 2016 at 8:19 AM, Liu, Ming (Ming) wrote:
> Hi, all,
>
> I want to implement a simple 'table lock' in HBase. My current idea is for
> each table, I choose a special
Was there any error in log of CsvBulkLoadTool ?
Which hbase release do you use ?
BTW, Phoenix 4.4 is a pretty old release. Please consider using a newer
release.
On Mon, Aug 8, 2016 at 3:07 PM, spark4plug wrote:
> Hi folks looking for help in terms of bulkloading about 10
98...@qq.com>;
> >> Sent: Friday, October 30, 2015, 10:24 AM
> >> To: "user"<user@hbase.apache.org>;
> >>
> >> Subject: Re: Re: HBase cluster is suddenly unable to respond
> >>
> >>
> >>
> >>
> >>
> >> The client code i
Manjeet:
Can you share the config you use so that we can have a better idea?
Which HBase release are you using?
The performance degradation was relative to what other approach?
Cheers
On Sat, Aug 6, 2016 at 1:11 AM, Dima Spivak wrote:
> Hey Manjeet,
>
> Let me move dev@ to
ew
> ColumnPrefixFilter(Bytes.toBytes("e:cat"));
>
> Filter valueFilter = new ValueFilter(CompareFilter.CompareOp.EQUAL,
> new BinaryComparator(
> Bytes.toBytes("hello kitty")));
>
>
>
> Ted Yu <yuzhih...@gmail.com> wrote on Thu, Aug 4, 2016 at 11:56 AM:
>
> > For selecting
olumns maybe: *e:cat1, e:cat2, e:cat3,*
> the value :* 'hello kitty' ,*
>
> Any advice is appreciated!
> QiaoYanke
>
>
> Ted Yu <yuzhih...@gmail.com> wrote on Wed, Aug 3, 2016 at 3:15 AM:
>
> > You can use the following method of Scan to specify columns to retrieve:
You can use the following method of Scan to specify columns to retrieve:
public Scan addColumn(byte [] family, byte [] qualifier) {
w.r.t. value comparison with cf:c3 column, consider using
SingleColumnValueFilter.
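A hedged hbase shell equivalent of that combination (the table, family, and qualifier names follow the thread; the filter-string form is the shell's standard filter syntax):

```
scan 'myTable', {COLUMNS => ['e:cat1', 'e:cat2', 'e:cat3'],
  FILTER => "SingleColumnValueFilter('cf', 'c3', =, 'binary:hello kitty')"}
```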
Cheers
On Mon, Aug 1, 2016 at 6:56 PM, 乔彦克 wrote:
> Hi
For #1, please take a look at split.rb :
Split entire table or pass a region to split individual region. With the
second parameter, you can specify an explicit split key for the region.
Examples:
split 'tableName'
split 'namespace:tableName'
split 'regionName' # format:
Have you taken a look at
http://hbase.apache.org/book.html#hadoop2.hbase_0.94 ?
On Mon, Aug 1, 2016 at 1:04 PM, Igor Berman wrote:
> Hi all,
> I have old hbase cluster 0.94x that I need to write some data to. The
> problem is that my setup already contains hadoop2 jars in
You can issue Scan with each of the start keys and setBatch(1).
Close each scan after next() is called.
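The shape of that approach, one lookup per region start key with a single next() each, can be sketched cluster-free with a sorted map standing in for the table (the keys and start keys below are made up):

```java
import java.util.Map;
import java.util.TreeMap;

public class FirstRowPerRegion {
    public static void main(String[] args) {
        TreeMap<String, String> table = new TreeMap<>();
        table.put("a1", "v");
        table.put("b1", "v");
        table.put("c9", "v");

        // Region start keys; "" is the first region's (empty) start key.
        String[] startKeys = { "", "b", "c" };
        for (String start : startKeys) {
            // Equivalent of scan.setStartRow(start) followed by one next():
            // the first row at or after the region's start key.
            Map.Entry<String, String> first = table.ceilingEntry(start);
            System.out.println(first.getKey());
        }
    }
}
```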
On Mon, Aug 1, 2016 at 1:55 AM, jinhong lu wrote:
> Hi, I want to get first row of every region in a table, Any API for that?
> getStartKey() will return the rowkey not
As mentioned in Kevin's first email, if /hbase-unsecure is the znode used
by Ambari, setting zookeeper.znode.parent to hbase (or /hbase) wouldn't
help.
On Mon, Aug 1, 2016 at 3:39 AM, Adam Davidson <
adam.david...@bigdatapartnership.com> wrote:
> Hi Kevin,
>
> when creating the Configuration
How did your Java program obtain hbase-site.xml of the cluster ?
Looks like hbase-site.xml was not on the classpath.
On Mon, Aug 1, 2016 at 3:36 AM, kevin wrote:
> hi,all:
> I install hbase by ambari ,I found it's zookeeper url is /hbase-unsecure .
> when I use java
/23 14:15:58 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> Found 9 items
> ...
> -rwxrwxrwx 1 root supergroup1771201 2016-07-23 14:13 /test.jar
> ...
>
>
> -----Original Message-----
> From
t_, existingValue=-1,
> completeSequenceId=-1
> 2016-07-22 12:03:40,128 TRACE
> [B.defaultRpcServer.handler=12,queue=0,port=39479] master.ServerManager:
> 7e8aa2d93aa716ad2068808d938f0786, family=tddlcf, existingValue=-1,
> completeSequenceId=-1
> 2016-07-22 12:03:40,128 TRACE
w.r.t. the DoNotRetryIOException, can you take a look at region server log
where testTbl region(s) was hosted ?
See if there is some clue why the sanity check failed.
Thanks
On Fri, Jul 22, 2016 at 1:12 AM, Ma, Sheng-Chen (Aven) <
shengchen...@esgyn.cn> wrote:
> Hi all:
> I want to dynamic add
Please take a look at the following methods:
From HBaseAdmin:
public List<HRegionInfo> getTableRegions(final TableName tableName)
From HRegion:
public static HDFSBlocksDistribution computeHDFSBlocksDistribution(final
Configuration conf,
final HTableDescriptor tableDescriptor, final HRegionInfo
What format are the one billion records saved in at the moment ?
The answer would depend on the compression scheme used for the table:
http://hbase.apache.org/book.html#compression
On Tue, Jul 19, 2016 at 8:59 PM, Jone Zhang wrote:
> There is 100G of data of one
This seems related: HBASE-14963
On Tue, Jul 19, 2016 at 3:50 PM, Saurabh Malviya (samalviy) <
samal...@cisco.com> wrote:
> Hi,
>
> I am addressing one issue to make Hbase and ES work together in same spark
> project
>
>
>
How did you start the master ?
Looks like hbase-server jar was not on classpath.
Cheers
On Wed, Jul 13, 2016 at 6:52 AM, Roman Wesołowski <
roman.wesolow...@apollogic.com> wrote:
> Hello,
>
>
> I'm new in Hbase so I need a help.
>
>
> While I'm trying to start Hbase I have an error:
>
> Error:
HFileOutputFormat2],
> > conf)
> > }
> >
> > i just saw that i am using job.setMapOutputValueClass(classOf[Put])
> >
> > where as i am writing KeyValue, does that cause any issue?
> >
> > i will update the code and will run it,
> >
Can you show the code inside saveASHFile ?
Maybe the partitions of the RDD need to be sorted (for 1st issue).
Cheers
On Wed, Jul 13, 2016 at 4:29 PM, yeshwanth kumar
wrote:
> Hi i am doing bulk load into HBase as HFileFormat, by
> using saveAsNewAPIHadoopFile
>
> i am
Which release of hbase are you using ?
Does it include HBASE-15213 ?
Thanks
On Sat, Jul 9, 2016 at 3:14 AM, 陆巍 wrote:
> Hi,
>
> I ran a test of the Increment operation, and found the performance is really
> bad: 94809ms for 1000 increment operations.
> The testing cluster is
tatus(alluxio://master:19998/hbase/data/hbase/meta/.tabledesc)
> >> > 2016-06-20 14:50:48,335 ERROR [master:master:6] master.HMaster:
> >> > Unhandled exception. Starting shutdown.
> >> > java.io.IOException: alluxio.exception.FileDoesNotExistException: Path
Robert:
When using `spark-submit`, the application jar along with any jars included
with the `--jars` option
will be automatically transferred to the cluster. URLs supplied after
`--jars` must be separated by commas. That list is included on the driver
and executor classpaths. Directory
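A hedged example of the comma-separated form (the class, jar names, and paths are hypothetical):

```
spark-submit \
  --class com.example.MyHBaseApp \
  --jars /opt/jars/hbase-client.jar,/opt/jars/hbase-common.jar \
  my-application.jar
```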
The image still didn't come through.
Please use third party site.
Thanks
> On Jun 29, 2016, at 11:41 AM, karthi keyan wrote:
>
> actually am facing
>
> https://issues.apache.org/jira/browse/HBASE-12954
>
> Which displays 2 hostname for the same IP as in
There is no hbase release with full support for SparkSQL yet.
For #1, the classes / directories are (master branch):
./hbase-spark/src/main/java/org/apache/hadoop/hbase/spark/example/hbasecontext
./hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/example/hbasecontext
HBASE-10118 was integrated into 0.98.2
The user was running 0.98.9
Hmm
On Sun, Jun 26, 2016 at 12:05 PM, Dima Spivak wrote:
> Hey M.,
>
> Just to follow up on what JMS said, this was fixed in April 2014 (details
> at https://issues.apache.org/jira/browse/HBASE-10118), so
> private MultiMap<String, Long> hostAndWeight
>
> do not need to store hostname many times
>
>
>
>
> thanks
>
>
> On 2016-06-23 12:50 , Ted Yu <yuzhih...@gmail.com> Wrote:
>
> YQ:
> The HostAndWeight is basically a tuple.
> In getTopHosts(), host
YQ:
The HostAndWeight is basically a tuple.
In getTopHosts(), hosts are retrieved.
In getWeight(String host), weight is retrieved.
Why do you think a single Long is enough ?
Cheers
On Wed, Jun 22, 2016 at 9:28 PM, ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com> wrote:
> Hi WangYQ,
>
>
Jinhong:
Please take a look at 3rd paragraph of:
http://hbase.apache.org/book.html#gcpause
Cheers
On Wed, Jun 22, 2016 at 2:09 AM, Heng Chen wrote:
> 8000/200 = 40, if your table balance enough, each RS will serve 40
> requests per second, that is OK for RS.
Can you find out which region server hosted d654d01588e8a46d7050852978f8eaf9
and examine its log to see if there was some clue?
Thanks
On Sat, Jun 18, 2016 at 8:29 AM, Chathuri Wimalasena
wrote:
> Hi,
>
> I'm using HBase 0.94-23 with Hadoop 2.7.2. In my hadoop namenode
SIMPLE but actually it should be KERBEROS
> authentication.
>
>
>
> And getting a warning message as "responseTooSlow"
>
>
>
> Hope this will help you to figure the issue.
>
> Thanks,
> Kumar
>
>> On Sat, Jun 18, 2016 at 12:45 AM, Ted Yu <yuzhih.
seconds*
> Thanks,
> Kumar
>
> On Wed, Jun 15, 2016 at 7:56 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > Have you looked at http://hbase.apache.org/book.html#security ?
> >
> > I noticed that DEBUG logging was not on in the log you posted earlier.
> >
Since you already have hadoop 2.7.1, why is alluxio 1.1.0 needed ?
Can you illustrate your use case ?
Thanks
On Wed, Jun 15, 2016 at 7:27 PM, kevin wrote:
> hi,all:
>
> I wonder whether running HBase on Alluxio/Tachyon is possible and a good
> idea, and can anybody
Tom:
Can you pastebin the stack trace for the exception ?
It would be nice if you can show snippet of your code too.
Thanks
> On Jun 15, 2016, at 8:24 AM, Ellis, Tom (Financial Markets IT)
> wrote:
>
> So I have a working prototype using just bulk puts
log. Is there any
> documentation available to secure zookeeper and hbase with kerberos
> properly?
>
> The same log occurs in normal cluster also and i have enabled
> authorization. The same authorization command runs in 5 to 6 seconds.
>
> Thanks,
> Kumar
>
> On Tue, Jun 14
Please don't cross post.
This seems to be an advertisement.
> On Jun 15, 2016, at 4:41 AM, Chaturvedi Chola
> wrote:
>
> Good book on interview preparation for big data
>
> https://notionpress.com/read/big-data-interview-faqs
s*
>
> Find my HBase log in below pastebin
>
> http://pastebin.com/MHMjhHuF
>
>
> Thanks,
>
> Kumar
>
>
> On Mon, Jun 13, 2016 at 7:42 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > Can you inspect master log for the corresponding 40 seconds to s
Can you inspect master log for the corresponding 40 seconds to see if there
was some clue ?
Feel free to pastebin the log snippet for this period if you cannot
determine the cause.
Cheers
On Sun, Jun 12, 2016 at 10:19 PM, kumar r wrote:
> Hi,
>
> I have configured secure
if (combinedWithLru) {
  lruCacheSize = (long) ((1 - combinedPercentage) * bucketCacheSize);
  bucketCacheSize = (long) (combinedPercentage * bucketCacheSize);
}
Looks like the above came from the introduction of the bucket cache:
HBASE-7404 Bucket Cache: A solution about
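That branch carves a single combined budget into an on-heap LRU share and a bucket-cache share. A standalone sketch of just the arithmetic (the inputs below are made up for illustration):

```java
public class CacheSplit {
    // Mirrors the quoted arithmetic: combinedPercentage of the budget goes
    // to the bucket cache, and the remainder to the on-heap LRU cache.
    static long[] split(long budget, float combinedPercentage) {
        long lruCacheSize = (long) ((1 - combinedPercentage) * budget);
        long bucketCacheSize = (long) (combinedPercentage * budget);
        return new long[] { lruCacheSize, bucketCacheSize };
    }

    public static void main(String[] args) {
        // 0.75f is exactly representable in binary, so the split is exact.
        long[] shares = split(4096L, 0.75f);
        System.out.println(shares[0] + " lru / " + shares[1] + " bucket");
    }
}
```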
Please take a look at:
HBASE-10201 Port 'Make flush decisions per column family' to trunk
I think the comment you referenced is no longer true for 1.1.0+ releases.
Cheers
On Sat, Jun 11, 2016 at 8:29 PM, WangYQ wrote:
> in hbase 0.98.10 doc, section 6.2 "on the
Which version of hbase / Hadoop are you using ?
(So that line number matching can be more accurate)
It would be good if you can show your code snippet.
Thanks
> On Jun 11, 2016, at 12:57 AM, Jilani Shaik wrote:
>
> Hi,
>
> I am trying to do hbase table bulk load from
Which version of hbase are you using ?
Is it possible to come up with unit test that shows what you observed ?
There is already coverage in existing unit tests, e.g. TestFilterList which
you can use as template.
Thanks
On Thu, Jun 9, 2016 at 3:41 AM, Eko Susilo
Have you seen the doc at the top
of ./hbase-shell/src/main/ruby/shell/commands/alter.rb ?
Alter a table. If the "hbase.online.schema.update.enable" property is set to
false, then the table must be disabled (see help 'disable'). If the
"hbase.online.schema.update.enable" property is set to true,
a bug of hbase? For
> me it looks like a bug; should I file a JIRA?
>
> Thanks,
>
> Shuai
>
> On Fri, May 27, 2016 at 8:02 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
>> There were 7 regions Master tried to close which were opening but not
>> yet served.
>>
1.0.0 is quite old.
Is it possible to upgrade to 1.1 or 1.2 release ?
Thanks
On Fri, Jun 3, 2016 at 8:12 AM, Pankaj kr wrote:
> Hi,
>
> We met a weird scenario in our production environment.
> IndexOutOfBoundsException is thrown while retrieving mid key of the
>
Were you referring to the following lines ?
// See HBASE-5094. Cross check with hbase:meta if still this RS
is owning
// the region.
Pair p = MetaReader.getRegion(
this.catalogTracker, region.getRegionName());
The above is at
e construct a OpendRegionHandler and call the process
> method, not submit this handler to pool(in class AssignmentManager, line
> 1078)
>
>
>
>
>
>
>
>
> At 2016-06-02 21:29:09, "Ted Yu" <yuzhih...@gmail.com> wrote:
> >Have you seen this
Have you seen this line in EventType.java ?
RS_ZK_REGION_OPENED (4, ExecutorType.MASTER_OPEN_REGION),
If you follow RS_ZK_REGION_OPENED, you would see how the executor is used.
On Thu, Jun 2, 2016 at 4:56 AM, WangYQ wrote:
> in hbase 0.98.10, class HMaster,
ait forever.
>
>
> Will it happen in real logical?
>
>
> 2016-05-27 10:44 GMT+08:00 Heng Chen <heng.chen.1...@gmail.com>:
>
> > Thanks guys, yesterday I restarted the related RS and the failed-close
> > region reopened successfully. But today, there is another region f
Please use user@hbase for future correspondence.
Here is related code from ZooKeeperWatcher (NPE seems to have come from the
for loop):
public List<String> getMetaReplicaNodes() throws KeeperException {
    List<String> childrenOfBaseNode = ZKUtil.listChildrenNoWatch(this,
        baseZNode);
List
7:
> http://paste.openstack.org/raw/505826/
>
> The master asked node6 to open several regions. Node6 opened the first 4
> very fast (within 1 seconsd) and got stuck at the 5th one. But there is no
> errors at that time.
>
> On Wed, May 25, 2016 at 10:12 PM, Ted Yu <yuzhih...@gmail.com&
Congratulations, Mikhail !
On Thu, May 26, 2016 at 11:30 AM, Andrew Purtell
wrote:
> On behalf of the Apache HBase PMC I am pleased to announce that Mikhail
> Antonov has accepted our invitation to become a PMC member on the Apache
> HBase project. Mikhail has been an
Heng:
Can you pastebin the complete stack trace for the region server ?
Snippet from region server log may also provide more clue.
Thanks
On Wed, May 25, 2016 at 9:48 PM, Heng Chen wrote:
> On master web UI, i could see region (c371fb20c372b8edbf54735409ab5c4a)
>
re
> are 20+ regions being assigned to node6 almost at the same moment, node6
> gets overloaded and can't finish opening all of them within one minute.
>
> So this looks like a hbase bug to me (regions never get online when the
> region server failed to handle the OpenRegionReques
Have you taken a look at HBASE-9393 ?
On Mon, May 23, 2016 at 9:55 AM, Bryan Beaudreault wrote:
> Hey everyone,
>
> We are noticing a file descriptor leak that is only affecting nodes in our
> cluster running 5.7.0, not those still running 5.3.8. I ran an lsof against
> Hi Ted,
>
> The hbase version is 1.0.0-cdh5.4.8, shipped with cloudera CDH 5.4.8. The
> RS logs on node6 can be found here <http://paste.openstack.org/raw/496174/
> >
> .
>
> Thanks!
>
> Shuai
>
> On Thu, May 5, 2016 at 9:15 AM, Ted Yu <yuzhih...@gmai
access hbase, must have the correct password
>
>
> thanks
>
>
> On 2016-05-17 22:04 , Ted Yu <yuzhih...@gmail.com> Wrote:
>
> Is your goal to protect web page access ?
>
> Take a look at HBASE-5291.
>
> If I didn't understand your use case, please elaborate
Is your goal to protect web page access ?
Take a look at HBASE-5291.
If I didn't understand your use case, please elaborate.
Use user@hbase in the future.
On Tue, May 17, 2016 at 4:02 AM, WangYQ wrote:
> in hbase, if we know zookeeper address, we can write and read
bq. 2016-05-13 11:56:52,763 WARN
org.apache.hadoop.hbase.master.SplitLogManager: error while splitting logs
in
[hdfs://ip-172-31-50-109.ec2.internal:8020/hbase/WALs/ip-
172-31-54-241.ec2.internal,60020,1463123941413-splitting]
installed = 1 but only 0 done
Looks like WAL splitting was slow or
bq. Unable to list children of znode /hbase/region-in-transition
Looks like there might be some problem with zookeeper quorum.
Can you check zookeeper server logs ?
Cheers
On Fri, May 13, 2016 at 12:17 AM, Gunnar Tapper
wrote:
> Hi,
>
> I'm doing some development
TableInputFormatBase is abstract.
Most likely you would use TableInputFormat for the scan.
See javadoc of getSplits():
* Calculates the splits that will serve as input for the map tasks. The
* number of splits matches the number of regions in a table.
FYI
On Wed, May 11, 2016 at 6:05
Looks like there were pictures in the second email which didn't go through.
Please paste text.
Cheers
On Tue, May 10, 2016 at 12:13 AM, horaamit wrote:
> After making few changes to my code
>
>
>
> I am getting exception ,please find below stack trace
>
>
>
>
> --
> View
HMaster is in hbase-server-xx.jar
Was it on the classpath ?
Please consider pastebinning the master log if you need further help.
Cheers
On Mon, May 9, 2016 at 2:25 AM, Raghuveera Ramamoorthi
wrote:
> Dear team,
>
> While starting Hbase master from newly installed HDP 2.4
// Configuration key for split threads
public final static String SPLIT_THREADS =
"hbase.regionserver.thread.split";
public final static int SPLIT_THREADS_DEFAULT = 1;
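To raise the number of split threads, the corresponding hbase-site.xml entry would look like this (a sketch; the value 3 is only an example):

```xml
<property>
  <name>hbase.regionserver.thread.split</name>
  <value>3</value>
</property>
```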
On Thu, May 5, 2016 at 6:55 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> For #3, we already have the followi
Please take a look at:
http://hbase.apache.org/book.html#_server_side_configuration_for_simple_user_access_operation
On Fri, May 6, 2016 at 10:51 AM, Mohit Anchlia
wrote:
> Is there a way to implement a simple user/pass authentication in HBase
> instead of using a
For #3, we already have the following in 1.1 release:
HBASE-10201 Port 'Make flush decisions per column family' to trunk
On Thu, May 5, 2016 at 6:36 PM, Shushant Arora
wrote:
> 1.Why is it better to have single file per region than multiple files for
> read
Lex:
Please also see this thread about s3n versus s3a:
http://search-hadoop.com/m/uOzYtE1Fy22eEWfe1=Re+S3+Hadoop+FileSystems
On Wed, May 4, 2016 at 9:01 PM, Matteo Bertozzi
wrote:
> never seen that problem before, but a couple of suggestions you can try.
>
> Instead of
Can you pastebin related server log w.r.t. d1c7f3f455f2529da82a2f713b5ee067
from rs-node6 ?
Which release of hbase are you using ?
Cheers
On Wed, May 4, 2016 at 6:07 PM, Shuai Lin wrote:
> Hi list,
>
> Last weekend I got a region server crashed, but some regions never
Yes, key order is guaranteed.
On Wed, May 4, 2016 at 3:20 PM, Dave Birdsall
wrote:
> Hi,
>
>
>
> Suppose I have an HBase table with many regions, and possibly many rows in
> the memstore from recent additions.
>
>
>
> Suppose I have a program that opens a Scan on the
nks.
>
> 2016-04-30 1:13 GMT+08:00 Ted Yu <yuzhih...@gmail.com>:
>
> > For #1, can you clarify whether your workload is read heavy, write heavy
> or
> > mixed load of read and write ?
> >
> > For #2, have you run major compaction after the second bulk load ?
For #1, in branch-1, please take a look at DefaultMemStore.java where you
would see:
// MemStore. Use a CellSkipListSet rather than SkipListSet because of the
// better semantics. The Map will overwrite if passed a key it already had
// whereas the Set will not add new Cell if key is
trying to find a way to do this without cluster downtime.
>
> Thanks.
>
>
> Saad
>
>
> On Wed, Apr 20, 2016 at 1:19 PM, Saad Mufti <saad.mu...@gmail.com> wrote:
>
> > Thanks for the pointer. Working like a charm.
> >
> >
> > Saad
For #1, can you clarify whether your workload is read heavy, write heavy or
mixed load of read and write ?
For #2, have you run major compaction after the second bulk load ?
On Thu, Apr 28, 2016 at 9:16 PM, Jone Zhang wrote:
> *1、How can i get hbase table memory used?*
Here is sample scan output from a working cluster:
hbase:namespace,,1460756365636.acc7841bcbacafacf336e48bb14794de.
  column=info:regioninfo, timestamp=1460756360969, value={ENCODED =>
  acc7841bcbacafacf336e48bb14794de, NAME =>
  'hbase:namespace,,1460756365636.acc7841bcbacafacf336e48bb14794de.',
There might be a typo:
bq. After the Evacuation phase, Eden and Survivor To are devoid of live
data and reclaimed.
From the graph below it, it seems Survivor From is reclaimed, not Survivor
To.
FYI
On Wed, Apr 27, 2016 at 7:39 AM, Bryan Beaudreault wrote:
> We
Bryan:
w.r.t. gc_log_visualizer, is there plan to open source it ?
bq. while backend throughput will be better/cheaper with ParallelGC.
Does the above mean that hbase servers are still using ParallelGC ?
Thanks
On Wed, Apr 27, 2016 at 7:39 AM, Bryan Beaudreault