Congratulations, Jingyun!
Original message From: Srinivas Reddy
Date: 11/13/18 12:46 AM (GMT-08:00) To:
d...@hbase.apache.org Cc: Hbase-User Subject: Re:
[ANNOUNCE] New HBase committer Jingyun Tian Congratulations, Jingyun! -Srinivas-
Typed on tiny keys. pls ignore typos.
Congratulations, Balazs.
On Thu, Oct 11, 2018 at 12:50 PM Mike Drob wrote:
> Welcome, Balazs!
>
> On Thu, Oct 11, 2018 at 2:49 PM Sean Busbey wrote:
>
> > On behalf of the HBase PMC, I'm pleased to announce that Balazs
> > Meszaros has accepted our invitation to become an HBase committer.
> >
>
bq. region server crashes after ttl archiver runs
This is the first time I have heard of such a problem. Can you give us more
information (stack trace, server log snippet for that period) ?
Thanks
On Thu, Sep 20, 2018 at 11:14 PM meirav.malka
wrote:
> Hi, Since you probably don't have enough space for
Hi,
To my knowledge, stripe compaction has not seen patches for a few years.
Have you looked at :
http://hbase.apache.org/book.html#ops.date.tiered
If the above doesn't suit your needs, can you tell us more about your use
case ?
Thanks
On Mon, Sep 17, 2018 at 11:39 AM Austin Heyne wrote:
> Th
Srinidhi Muppalla
wrote:
> Hi Ted,
>
> The highest number of filters used is 10, but the average is generally
> close to 1. Is it possible the CPU usage spike has to do with Hbase
> internal maintenance operations? It looks like post-upgrade the spike isn't
> correlated with the
For the second config you mentioned, hbase.master.distributed.log.replay,
see http://hbase.apache.org/book.html#upgrade2.0.distributed.log.replay
FYI
On Mon, Sep 10, 2018 at 8:52 AM sahil aggarwal
wrote:
> Hi,
>
> My cluster has around 50k regions and 130 RS. In case of unclean shutdown,
> the
Srinidhi :
Do you know the average / highest number of ColumnPrefixFilters in the
FilterList ?
Thanks
On Fri, Sep 7, 2018 at 10:00 PM Ted Yu wrote:
> Thanks for detailed background information.
>
> I assume your code has done de-dup for the filters contained in
> FilterListW
int - qualifier length
This variant doesn't allocate (new) Cell / KeyValue.
This way, FilterListWithOR#shouldPassCurrentCellToFilter can use the
returned tuple for comparison.
FYI
On Fri, Sep 7, 2018 at 10:00 PM Ted Yu wrote:
> Thanks for detailed background information.
>
> I a
so that we don't need
> to know the 'distinguisher' part of the record when writing the actual
> query, because the distinguisher is only relevant in certain circumstances.
>
> Let me know if this is the information about our query pattern that you
> were looking for and if there is anythi
From the stack trace, ColumnPrefixFilter is used during the scan.
Can you illustrate how various filters are formed thru FilterListWithOR ?
It would be easier for other people to reproduce the problem given your
query pattern.
Cheers
On Thu, Sep 6, 2018 at 11:43 AM Srinidhi Muppalla
wrote:
> Hi V
sable as it has been running for almost
> 24 hrs now.
>
> Thanks.
>
> Antonio.
>
> On Wed, Aug 29, 2018 at 3:40 PM Antonio Si wrote:
>
> > Thanks Ted.
> > Now that the table is in neither disable or enable state, will the table
> > eventually got disable com
The 'missing table descriptor' error should have been fixed by running hbck
(with selected parameters).
FYI
On Wed, Aug 29, 2018 at 2:46 PM Antonio Si wrote:
> Thanks Ted.
>
> The log says "java.io.IOException: missing table descriptor for
> ba912582f295f7ac0b83e7e
Do you have access to master / region logs for when FAILED_OPEN state was
noticed ?
There should be some hint there as to why some region couldn't open.
The time taken by table DDL is related to the number of regions the table
has, but it should be less related to the amount of data.
Which version of h
This depends on how far down you revise the max versions for table t2.
If your data normally only reaches 15000 versions and you lower max
versions to ~15000, there wouldn't be much saving.
FYI
On Sun, Aug 26, 2018 at 3:52 PM Antonio Si wrote:
> Thanks Anil.
>
> We are using hbase on s3. Yes, I
, Aug 25, 2018 at 2:49 PM Antonio Si wrote:
> Thanks Ted.
>
> I try passing "-Dhbase.client.scanner.timeout.period=180" when I invoke
> CellCounter, but it is still saying timeout after 600 sec.
>
> Thanks.
>
> Antonio.
>
> On Sat, Aug 25, 2018 at 2:09 PM T
It seems CellCounter doesn't have such a (command-line) option.
You can specify, e.g., scan timerange, scan max versions, start row, stop
row, etc. so that each individual run has a shorter runtime.
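As a rough sketch (the table name, output directory and timestamps are made
up, and the exact flags can vary across releases), a run restricted to one
column family and a timerange could look like:

  hbase org.apache.hadoop.hbase.mapreduce.CellCounter \
    -Dhbase.mapreduce.scan.column.family=cf \
    mytable /tmp/cellcounter-out ',' \
    --starttime=1535000000000 --endtime=1535100000000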
Cheers
On Sat, Aug 25, 2018 at 9:35 AM Antonio Si wrote:
> Hi,
>
> When I run org.apache.hadoop.hbase.map
Antonio:
Please take a look at CellCounter under the hbase-mapreduce module, which
may be of use to you:
* 6. Total number of versions of each qualifier.
Please note that the max versions may fluctuate depending on when major
compaction kicks in.
FYI
On Wed, Aug 22, 2018 at 11:53 AM Ankit Singhal
even said "HBase 2.1" in the original email.
>
>
>
>
> On Mon, Aug 20, 2018 at 2:17 PM, Ted Yu wrote:
> > Looking at the dependency tree output, I see the following:
> >
> > [INFO] org.apache.hbase:hbase-server:jar:2.0.0.3.0.0.0-SNAPSHOT
>
Looking at the dependency tree output, I see the following:
[INFO] org.apache.hbase:hbase-server:jar:2.0.0.3.0.0.0-SNAPSHOT
...
[INFO] +- org.apache.htrace:htrace-core:jar:3.2.0-incubating:compile
FYI
On Mon, Aug 20, 2018 at 8:10 AM Sean Busbey wrote:
> neither Hadoop 3.1 nor HBase 2.1 use tha
For the first code snippet, it seems start row / stop row can be specified
to narrow the range of scan.
For your last question:
bq. all the filters in the filterList are RowFilters ...
The filter list currently does not do any optimization even if all the
filters in the list are RowFilters.
Cheers
O
jcl:
I can see that DEBUG log wasn't turned on.
Can you set log4j to DEBUG level and see if there is more information ?
Cheers
On Tue, Aug 14, 2018 at 6:56 AM Allan Yang wrote:
> Those logs are not enough to locate the problem.
> Best Regards
> Allan Yang
>
>
> jcl <515951...@163.com> wrote on 2018-08-14
Since you are using a vendor's distro, can you post on their user group ?
Cheers
Original message From: Lian Jiang
Date: 8/13/18 4:03 PM (GMT-08:00) To: user@hbase.apache.org Subject: HDP3.0:
failed to map segment from shared object: Operation not permitted
Hi,
I installed h
rset make more sense? Do you think creating a
> bug(or a feature request) for this, makes sense?
>
> Thanks & Regards
> Biplob Biswas
>
>
> On Fri, Jul 20, 2018 at 6:31 PM Ted Yu wrote:
>
> > Assuming your filter list uses MUST_PASS_ONE operator, the recurring key
>
Have you checked the output from bulk load and see if there were lines in
the following form (from LoadIncrementalHFiles#splitStoreFile) ?
LOG.info("HFile at " + hfilePath + " no longer fits inside a
single " + "region.
Splitting...");
In the server log, you should see log in the following fo
What is HBase doing in that
> circumstance
>
> Thanks & Regards
> Biplob Biswas
>
>
> On Fri, Jul 20, 2018 at 4:12 PM Ted Yu wrote:
>
> > Did you mean chaining the same row filter 'n' times using FilterList ?
> > Is the row filter from hbase (RowFilter) ?
>
Did you mean chaining the same row filter 'n' times using FilterList ?
Is the row filter from hbase (RowFilter) ?
What operator do you use (MUST_PASS_ALL or MUST_PASS_ONE) ?
For the second question, I wonder how a filter set would handle the
constituent filters differently from how FilterList handles th
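To make the chaining concrete, here is a minimal sketch (hbase 1.x client
API; the row keys are illustrative) that ORs several RowFilters through a
FilterList:

  import java.util.ArrayList;
  import java.util.List;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.filter.BinaryComparator;
  import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
  import org.apache.hadoop.hbase.filter.Filter;
  import org.apache.hadoop.hbase.filter.FilterList;
  import org.apache.hadoop.hbase.filter.RowFilter;
  import org.apache.hadoop.hbase.util.Bytes;

  List<Filter> rowFilters = new ArrayList<>();
  for (String key : new String[] { "row1", "row2", "row3" }) {
    rowFilters.add(new RowFilter(CompareOp.EQUAL,
        new BinaryComparator(Bytes.toBytes(key))));
  }
  Scan scan = new Scan();
  // MUST_PASS_ONE ORs the constituent filters; MUST_PASS_ALL would AND them
  scan.setFilter(new FilterList(FilterList.Operator.MUST_PASS_ONE, rowFilters));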
Putting dev@ to bcc.
Which hbase-spark connector are you using ?
What's the hbase release in your deployment ?
bq. some of the columns in dataframe becomes null
Is it possible to characterize what type of columns become null ? Earlier
you said one column has xml data. Did you mean this column fr
Please see for subscription information:
http://hbase.apache.org/mail-lists.html
On Wed, Jul 11, 2018 at 4:19 AM bill.zhou wrote:
> I am a subscriber please add me thanks
>
>
Please see the following two constants defined in TableInputFormat :

  /** Column Family to Scan */
  public static final String SCAN_COLUMN_FAMILY =
      "hbase.mapreduce.scan.column.family";

  /** Space delimited list of columns and column families to scan. */
  public static final String SCAN_COL
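For illustration (the table and family names are made up), these keys are
set on the job Configuration before using TableInputFormat:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.mapreduce.TableInputFormat;

  Configuration conf = HBaseConfiguration.create();
  conf.set(TableInputFormat.INPUT_TABLE, "mytable");   // table to read
  conf.set(TableInputFormat.SCAN_COLUMN_FAMILY, "cf"); // scan one family
  // or, for specific columns:
  // conf.set(TableInputFormat.SCAN_COLUMNS, "cf:q1 cf:q2");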
> On Sat, 30 Jun 2018 at 23:36, Ted Yu wrote:
>
> > Please read :
> >
> > http://hbase.apache.org/book.html#wal.providers
> >
> > On Sat, Jun 30, 2018
> On Sat, 30 Jun 2018 at 23:25, Ted Yu wrote:
>
> > Do you plan to deploy onto hadoop 3.1.x ?
> >
> > If so, you'd better build against hadoop 3.1.x yourself.
>
The trunk version would correspond to hbase 3.0, which has a lot more
changes compared to hbase 2.
A trunk build wouldn't serve you if your goal is to run hbase on hadoop
3.1 (see HBASE-20244)
FYI
On Sat, Jun 30, 2018 at 3:11 PM, Mich Talebzadeh
wrote:
> Thanks Ted.
>
> I downloa
Which hadoop release was the 2.0.1 built against ?
In order to build hbase 2 against hadoop 3.0.1+ / 3.1.0+, you will need
HBASE-20244.
FYI
On Sat, Jun 30, 2018 at 2:34 PM, Mich Talebzadeh
wrote:
> I am using the following hbase-site.xml
>
>
>
> hbase.rootdir
> hdfs://rhes75:9000/h
ses?
I already explained which jars contain the other two classes.
A better approach is to let 'mvn eclipse:eclipse' generate the dependencies
for you.
bq. I don't have in .m2 directory
Have you looked under ~/.m2/repository ?
Cheers
On Fri, Jun 22, 2018 at 10:27 AM, Andrzej wrote:
>
$ jar tvf
~/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-jobclient/3.0.0/hadoop-mapreduce-client-jobclient-3.0.0-tests.jar
| grep MiniMRCluster
1863 Fri Dec 08 11:31:44 PST 2017
org/apache/hadoop/mapred/ClusterMapReduceTestCase$ConfigurableMiniMRCluster.class
9947 Fri Dec 08 11:31:4
Since S3FileSystem is not taken into account in FSHDFSUtils#isSameHdfs, we
need to add more code to avoid the overhead.
Can you log a JIRA with what you discovered ?
Thanks
On Thu, Jun 21, 2018 at 2:08 PM, Austin Heyne wrote:
> Hi again,
>
> I've been doing more digging into this and I've foun
See this code in HBaseTestingUtility :

  public Connection getConnection() throws IOException {
    if (this.connection == null) {
      this.connection = ConnectionFactory.createConnection(this.conf);

Once you have the connection, you can call:

  this.hbaseAdmin = (HBaseAdmin) getConnection().get
I executed the following commands:
mvn clean
mvn compile
There was no error.
Andrzej:
Can you tell us which mvn / Java versions you use ?
I use the following:
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
MaxPermSize=812M; support was removed in 8.0
Apache Maven 3.5.2 (138edd61fd1
Please use the HBASE-14850 branch, which works with hbase 2.0.
Cheers
On Tue, Jun 19, 2018 at 7:30 AM, Andrzej wrote:
> I previously wrote my library based on the native_client sources in a
> branch. It worked with HBase 1.3 but now does not work with HBase 2.0.
> I again want to compile my library wit
8:48 AM, Kang Minwoo
> wrote:
>
> > 1) I am using just InputFormat. (I do not know it is the right answer to
> > the question.)
> >
> > 2) code snippet
> >
> > ```
> > val rdd = sc.newAPIHadoopFile(...)
> > rdd.count()
> > ```
> >
>
Which connector do you use for Spark 2.1.2 ?
Is there any code snippet which may reproduce what you experienced ?
Which hbase release are you using ?
Thanks
On Fri, Jun 8, 2018 at 1:50 AM, Kang Minwoo wrote:
> Hello, Users
>
> I recently met an unusual situation.
> That is the cell result doe
Congratulations, Guangxu!
Original message From: "ćŒ é(Duo Zhang)"
Date: 6/4/18 12:00 AM (GMT-08:00) To: HBase Dev List ,
hbase-user Subject: [ANNOUNCE] New HBase committer
Guangxu Cheng
On behalf of the Apache HBase PMC, I am pleased to announce that Guangxu
Cheng has accept
suggest me the best way to export/import entire table from
> source cluster to destination cluster (its live system)
>
> Thanks
> Manjeet singh
>
> On Thu, 19 Jan 2017, 06:31 Neelesh, wrote:
>
> > Thanks Ted!
> >
> > On Wed, Jan 18, 2017 at 9:11 AM, Ted
he details between the release you use
and hbase 2.0 (which I used to generate the logs I quoted).
On Sat, May 19, 2018 at 11:29 AM, Nicolas Paris wrote:
> 2018-05-19 20:08 GMT+02:00 Ted Yu :
>
> > Mob store file is renamed from /apps/hbase/data/mobdir to the final
> > locatio
flushing process wouldn't be activated ?
>
>
>
> 2018-05-19 18:38 GMT+02:00 Ted Yu :
>
> > If you have a chance to look at region server log, you would see some
> line
> > such as the following:
> >
> > 2018-05-19 16:31:23,548 INFO [MemStoreFlusher.0]
not
> as traditional binary)
>
> Thanks
>
>
> 2018-05-19 15:59 GMT+02:00 Ted Yu :
>
> > bq. look into hdfs hbase/data/mlob
> >
> > Is 'mlob' name of your table ?
> >
> > bq. nearly empty folder
> >
> > Here is listing under a
bq. look into hdfs hbase/data/mlob
Is 'mlob' the name of your table ?
bq. nearly empty folder
Here is the listing under a one-region table:
drwxr-xr-x - hbase hdfs 0 2018-05-16 23:51
/apps/hbase/data/data/default/atlas_janus/.tabledesc
drwxr-xr-x - hbase hdfs 0 2018-05-16 23:51
/a
bq. store a lot of logs in HBase
Kang:
Can you tell us a bit more about how you store (and access) the log(s) -
size of each log, whether log is encoded in hbase ?
ORC is a columnar format while hbase uses a different format.
Thanks
On Wed, May 16, 2018 at 6:41 AM, Marcell Ortutay
wrote:
> This t
For the open source download, can you tell us which release you downloaded ?
Did you install it on the docker image ?
Please share hbase-site.xml (thru pastebin) if possible.
Thanks
On Sun, May 6, 2018 at 8:04 AM, Mike Thomsen wrote:
> Ted,
>
> As I mentioned, I tried this with
Please use vendor forum for vendor specific question(s).
To my knowledge, this feature works in Apache hbase releases.
Cheers
On Sun, May 6, 2018 at 7:55 AM, Mike Thomsen wrote:
> I've tried this in the HDP docker sandbox and outside that with a basic
> installation of HBase.
>
> su - hbase
>
For #2, the reason was:

  state=ROLLEDBACK exec-time=1mins,59.108sec
  exception=org.apache.hadoop.hbase.TableNotFoundException: mytable

For #3, 179108 corresponded with the '1mins,59.108sec' shown above, which
was the processing time (> 10,000 ms).
I think you only posted part of the master log, right ?
>
From your description, you can combine ColumnPrefixFilter with PageFilter
(thru FilterList).
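A minimal sketch (hbase 1.x client API; the prefix and page size are
illustrative):

  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
  import org.apache.hadoop.hbase.filter.FilterList;
  import org.apache.hadoop.hbase.filter.PageFilter;
  import org.apache.hadoop.hbase.util.Bytes;

  Scan scan = new Scan();
  scan.setFilter(new FilterList(FilterList.Operator.MUST_PASS_ALL,
      new ColumnPrefixFilter(Bytes.toBytes("attr_")), // keep matching qualifiers
      new PageFilter(10)));                           // cap the number of rows

Note that PageFilter is evaluated per region server, so the client may still
see more rows in total than the page size.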
FYI
On Tue, May 1, 2018 at 6:06 AM, mrmiroslav wrote:
> I'd like to perform Get / Scan with java client.
>
> In a given column family I'd like to limit the number of results per given
> column qualifi
Looking at
hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java in
branch-1.4 :

  boolean[] setSplitOrMergeEnabled(final boolean enabled,
      final boolean synchronous,
      final MasterSwitchType... switchTypes) throws IOException;

  boolean isSplitOrMer
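A hedged usage sketch (assuming an existing Admin instance named admin and
that MasterSwitchType is the enum from org.apache.hadoop.hbase.client):

  // disable splits cluster-wide, waiting for the switch to take effect
  boolean[] previous =
      admin.setSplitOrMergeEnabled(false, true, MasterSwitchType.SPLIT);
  boolean splitsEnabled = admin.isSplitOrMergeEnabled(MasterSwitchType.SPLIT);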
ancer
Cheers
On Tue, Mar 20, 2018 at 6:11 PM, Ted Yu wrote:
> Please consider tuning the following parameters of stochastic load
> balancer :
>
> "hbase.master.balancer.stochastic.maxRunningTime"
>
> default value is 30 seconds. It controls the duration of runtime fo
Please consider tuning the following parameters of the stochastic load
balancer :

"hbase.master.balancer.stochastic.maxRunningTime"
  default value is 30 seconds. It controls the duration of runtime for each
  balanceCluster() call.

"hbase.balancer.period"
  default is 300 seconds. It controls the maximu
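For reference, a sketch of how these could be set in the master's
hbase-site.xml (the values are illustrative, not recommendations):

  <property>
    <name>hbase.master.balancer.stochastic.maxRunningTime</name>
    <value>60000</value> <!-- ms per balanceCluster() call -->
  </property>
  <property>
    <name>hbase.balancer.period</name>
    <value>300000</value> <!-- ms between balancer runs -->
  </property>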
Saad:
I encourage you to open an HBase JIRA outlining your use case and the
config knobs you added through a patch.
We can see the details for each config and make recommendations accordingly.
Thanks
On Mon, Mar 12, 2018 at 8:43 AM, Saad Mufti wrote:
> I have created a company-specific branch an
I looked at the git log for MultipleColumnPrefixFilter - there has been no
further fix since 1.2.0.
Can you reproduce what you observed using a unit test ?
It would be easier to understand the scenario through a unit test.
Cheers
On Thu, Mar 1, 2018 at 9:57 PM, Vikash Agarwal
wrote:
> Hi Team,
>
>
> Cu
For #2, BulkDeleteEndpoint still exists - in the hbase-examples module:
./hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/BulkDeleteEndpoint.java
On Thu, Mar 1, 2018 at 10:02 PM, Vikash Agarwal
wrote:
> Hi Team,
>
> I am looking for a bulk delete in Hbase.
>
> I am looking
rsions.
>
> Cheers.
>
>
> Saad
>
>
> On Sun, Feb 25, 2018 at 11:10 AM, Ted Yu wrote:
>
> > Here is related code for disabling bucket cache:
> >
> > if (this.ioErrorStartTime > 0) {
> >
> > if (cacheEnabled && (now - ioErrorSta
bq. timing out trying to obtain write locks on rows in that region.
Can you confirm that the region under contention was the one being major
compacted ?
Can you pastebin thread dump so that we can have better idea of the
scenario ?
For the region being compacted, how long would the compaction ta
You can refer to HFilePerformanceEvaluation where creation of the Writer is
demonstrated:

  writer = HFile.getWriterFactoryNoCache(conf)
      .withPath(fs, mf)
      .withFileContext(hFileContext)
      .withComparator(CellComparator.getInstance())
      .create();
Cheers
On
Here is the related code for disabling the bucket cache:

  if (this.ioErrorStartTime > 0) {
    if (cacheEnabled && (now - ioErrorStartTime) > this.ioErrorsTolerationDuration) {
      LOG.error("IO errors duration time has exceeded " + ioErrorsTolerationDuration +
          "ms, disabling cache,
bq. a warning message in the shell should be displayed if simple auth and
cell visibility are in use together.
Makes sense.
Please log a JIRA.
On Sat, Feb 24, 2018 at 9:06 AM, Mike Thomsen
wrote:
> Ted/Anoop,
>
> I realized what the problem was. When I installed HBase previously
I noted that SIMPLE_AUTHENTICATION was returned.
Here is the related code for getSecurityCapabilities():

  if (User.isHBaseSecurityEnabled(master.getConfiguration())) {
    capabilities.add(SecurityCapabilitiesResponse.Capability.SECURE_AUTHENTICATION);
  } else {
    capabilities.
The labels table is created by VisibilityController#postStartMaster().
You can add the following call in the @BeforeClass method:

  TEST_UTIL.waitTableEnabled(LABELS_TABLE_NAME.getName(), 5);
See TestVisibilityLabelsWithACL for complete example.
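A minimal sketch of the @BeforeClass method (the mini-cluster setup and the
timeout value are illustrative; LABELS_TABLE_NAME comes from
VisibilityConstants):

  @BeforeClass
  public static void setUp() throws Exception {
    TEST_UTIL = new HBaseTestingUtility();
    TEST_UTIL.startMiniCluster();
    // wait for VisibilityController#postStartMaster() to finish creating
    // the labels table before tests touch it
    TEST_UTIL.waitTableEnabled(LABELS_TABLE_NAME.getName(), 50000);
  }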
On Thu, Feb 22, 2018 at 12:07 PM, Mike Thom
It seems there were 3 files on s3 (they're all on the same line).
If possible, can you pastebin parts of master log which were related to the
table ?
That may give us more clue.
On Thu, Feb 22, 2018 at 10:01 AM, Vikas Kanth <
kanth_vi...@yahoo.co.in.invalid> wrote:
> Hi Ted
Can you show more of the region server log ?
Was the cluster started clean (without any data) ?
There have been a lot of changes since 2.0.0-beta-1 was released (both in
terms of correctness and performance).
If possible, please deploy 2.0 SNAPSHOT for further testing.
Cheers
On Thu, Feb 22, 20
For a user table, you should see the following in the table dir:
drwxr-xr-x - hbase hdfs 0 2018-02-16 22:20
/apps/hbase/data/data/default/t1/.tabledesc
drwxr-xr-x - hbase hdfs 0 2018-02-16 22:20
/apps/hbase/data/data/default/t1/.tmp
Is the table descriptor under mytable ?
A
It seems you're using sbt.
Can you run this command and pastebin the output:
sbt "inspect tree clean"
On Wed, Feb 21, 2018 at 8:21 AM, Gauthier Feuillen
wrote:
> Yeah I already tested that (thanks for the help btw)
>
> Here are my dependencies:
>
>
> lazy val hbaseTesting = "org.apache.
HBaseTestingUtility is in the hbase-server module.
You can add the hbase-server module with test scope.
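As a minimal sketch (the class and table names are made up; note that the
test utilities ship in the hbase-server test jar, i.e. the "tests"
classifier):

  import org.apache.hadoop.hbase.HBaseTestingUtility;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Table;
  import org.apache.hadoop.hbase.util.Bytes;
  import org.junit.AfterClass;
  import org.junit.BeforeClass;
  import org.junit.Test;

  public class MyHBaseTest {
    private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

    @BeforeClass
    public static void setUp() throws Exception {
      TEST_UTIL.startMiniCluster();
    }

    @AfterClass
    public static void tearDown() throws Exception {
      TEST_UTIL.shutdownMiniCluster();
    }

    @Test
    public void testRoundTrip() throws Exception {
      Table table = TEST_UTIL.createTable(TableName.valueOf("t"), Bytes.toBytes("cf"));
      // ... exercise your application code against the mini cluster ...
    }
  }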
On Wed, Feb 21, 2018 at 7:07 AM, Gauthier Feuillen
wrote:
> Hi,
>
> I'd like to be able to unit test my HBase application. I see a lot of
> posts using the HBaseTestingUtility. I can't get it into my
If you look at
https://www.cloudera.com/documentation/enterprise/release-notes/topics/cdh_rn_fixed_in_58.html#fixed_issues585
, you would see the following:
HBASE-15378 - Scanner cannot handle heartbeat message with no results
which fixed what you observed in previous release.
FYI
On Tue, Feb 2
Have you looked at FuzzyRowFilter ?
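A minimal sketch for the key layout described below (1-byte "3" prefix,
8-byte customer_id, 8-byte timestamp; someTimestamp is a placeholder long).
In the fuzzy mask, 0 means the byte must match and 1 means "don't care":

  import java.util.Arrays;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
  import org.apache.hadoop.hbase.util.Bytes;
  import org.apache.hadoop.hbase.util.Pair;

  // fixed prefix + wildcard customer_id + fixed timestamp
  byte[] pattern = Bytes.add(Bytes.toBytes("3"), new byte[8],
      Bytes.toBytes(someTimestamp));
  byte[] mask = new byte[1 + 8 + 8];  // all zeros = every byte fixed ...
  Arrays.fill(mask, 1, 9, (byte) 1);  // ... except customer_id: don't care
  Scan scan = new Scan();
  scan.setFilter(new FuzzyRowFilter(Arrays.asList(new Pair<>(pattern, mask))));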
Cheers
On Mon, Feb 19, 2018 at 8:00 AM, kitex101 wrote:
> I have a key design like:
>   byte[] rowKey = Bytes.add(Bytes.toBytes("3"), Bytes.toBytes(customer_id),
>       Bytes.toBytes(timestamp));
> customer_id and timestamp are long type. As opentsdb uses: [...] I would li
PM, Ted Yu wrote:
> It seems there are 3 components in the row key.
> Assuming the 2nd and 3rd are integers, you can take a look at the
> following method of Bytes:
>
> public static byte[] toBytes(int val) {
>
> which returns 4 byte long byte array.
> You can use this
It seems there are 3 components in the row key.
Assuming the 2nd and 3rd are integers, you can take a look at the following
method of Bytes:

  public static byte[] toBytes(int val) {

which returns a 4-byte-long byte array.
You can use this knowledge to decode each component of the row key.
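A minimal decoding sketch under that assumption (rowKey is the raw key; the
prefix length is derived from the key length):

  import org.apache.hadoop.hbase.util.Bytes;

  int prefixLen = rowKey.length - 2 * Bytes.SIZEOF_INT;
  byte[] prefix = Bytes.copy(rowKey, 0, prefixLen);
  int second = Bytes.toInt(rowKey, prefixLen);                   // 2nd component
  int third = Bytes.toInt(rowKey, prefixLen + Bytes.SIZEOF_INT); // 3rd component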
FYI
On
I don't see filter[1-3] being used in the cases.
Was any of them in the FilterList ?
Which release of hbase are you using ?
Cheers
On Fri, Feb 16, 2018 at 12:36 AM, Vikash Agarwal
wrote:
> Hi Team,
>
>
> Currently I am trying to use MultipleColumnPrefixFilter along with
> SingleColumnValueFilt
bq. Apache projects are supposed to encourage collaboration
I totally agree with you.
Cheers
On Sat, Feb 10, 2018 at 10:32 AM, anil gupta wrote:
> Thanks Ted. Will try to do the clean-up. Unfortunately, we ran out of
> support for this cluster since its nearing End-of-life. For o
You can clean up the oldwal directory, beginning with the oldest data.
Please open support case with the vendor.
On Sat, Feb 10, 2018 at 10:02 AM, anil gupta wrote:
> Hi Ted,
>
> We cleaned up all the snaphsots around Feb 7-8th. You were right that i
> dont see the CorruptedSnapshotExceptio
Can you clarify whether /apps/hbase/data/.hbase-snapshot/.tmp/ became empty
after 2018-02-07 09:10:08 ?
Do you see CorruptedSnapshotException for file outside of
/apps/hbase/data/.hbase-snapshot/.tmp/ ?
Cheers
Please see the first few review comments of HBASE-16464.
You can sideline the corrupt snapshots (according to master log).
You can also contact the vendor for a HOTFIX.
Cheers
On Sat, Feb 10, 2018 at 8:13 AM, anil gupta wrote:
> Hi Folks,
>
> We are running HBase1.1.2. It seems like we are hittin
Do you use Phoenix functionality ?
If not, you can try disabling the Phoenix side altogether (removing Phoenix
coprocessors).
2.3.4 is really old - please upgrade to 2.6.3
You should consider asking on the vendor's community forum.
Cheers
On Thu, Feb 8, 2018 at 3:06 PM, anil gupta wrote:
> H
The built-in hbase client doesn't support failing over automatically to the
DR cluster.
Switching "zookeeper.quorum" should be done on the client side for failover.
Cheers
On Wed, Feb 7, 2018 at 3:16 PM, Daniel Połaczański
wrote:
> Hi,
> I want to configure HBase in a DR scenario. I create two separate
w.r.t. region split, do you verify that the new rowkey is in the same
region as the rowkey from incoming Put ?
If not, there is a chance that the new rowkey is in a different region
which is going thru a split.
FYI
On Mon, Jan 29, 2018 at 6:40 AM, Yang Zhang wrote:
> Both are the same question.
>
Have you looked at http://hbase.apache.org/book.html#rsgroup ?
It is in 1.4.x release.
FYI
On Fri, Jan 26, 2018 at 6:06 AM, Oussema BEN LAMINE
wrote:
> Hello,
> i am using hbase 1.1.2, i have 5 region servers already working on my
> cluster.
> i want to create a hbase table that will be on jus
any method support by coprocessor context to do that.
> Just like that you can call context.complete() to skip other coprocessors.
>
> Thanks for your advice
>
> 2018-01-23 13:01 GMT+08:00 Ted Yu :
>
> > Your prePut would write to a different column in the table, right ?
> >
, 2018 at 8:56 PM, Yang Zhang wrote:
> Yes, It is the same table.
>
> 2018-01-23 1:46 GMT+08:00 Ted Yu :
>
> > Can you clarify your use case ?
> >
> > bq. put a data into table
> >
> > Does your coprocessor write to the same table which receives user data ?
&
Can you clarify your use case ?
bq. put a data into table
Does your coprocessor write to the same table which receives user data ?
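If it is the same table, one pattern is to piggy-back extra cells onto the
incoming Put from within prePut instead of issuing a second write. A minimal
sketch against the hbase 1.x coprocessor API (the class, family, qualifier
and value are made up):

  import java.io.IOException;
  import org.apache.hadoop.hbase.client.Durability;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
  import org.apache.hadoop.hbase.coprocessor.ObserverContext;
  import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
  import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
  import org.apache.hadoop.hbase.util.Bytes;

  public class DerivedColumnObserver extends BaseRegionObserver {
    @Override
    public void prePut(ObserverContext<RegionCoprocessorEnvironment> e,
        Put put, WALEdit edit, Durability durability) throws IOException {
      // add a derived cell to the same row, so no re-entrant write is needed
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("derived"),
          Bytes.toBytes("computed-value"));
    }
  }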
Cheers
On Mon, Jan 22, 2018 at 4:24 AM, Yang Zhang wrote:
> Hello Everyone
>
> I am using the coprocessor and want to put another data when
> someone put
his case the HFILE write of block3 would be to any of those 4
> machines and not to machine6. Is that right? Or i misunderstood?
>
> On Jan 22, 2018 22:27, "Ted Yu" wrote:
>
> > For case 1, HFile would be loaded into the region (via staging
> directory).
> >
nd replication would go to that particular region?
>
> On Jan 22, 2018 22:16, "Ted Yu" wrote:
>
> Which connector do you use to perform the write ?
>
> bq. Or spark will wisely launch an executor on that machine
>
> I don't think that is the case. Multiple
Which connector do you use to perform the write ?
bq. Or spark will wisely launch an executor on that machine
I don't think that is the case. Multiple writes may be performed which
would end up on different region servers. Spark won't provide the affinity
described above.
On Mon, Jan 22, 2018 at
From the exception message, it seems some GetRequest was empty.
How often did this happen ?
If you can describe characteristics of the get request, that may give some
clue.
Can you come up with a unit test that reproduces the issue ?
On Thu, Jan 18, 2018 at 11:40 PM, Karthick Ram
wrote:
> ---
on2 first (by limit the start and
> end) to get an suggest range search to client, and do this suggested range
> search at the second time. This will work, but It will cost more time.
>
> Any suggest?
>
> Thanks
>
> 2018-01-15 23:49 GMT+08:00 Ted Yu :
>
> > bq. need
bq. need region2 to search first, because it has my own index.
Looks like only some of the regions have your index. Can you tell us more
about the characteristics of the region(s) where the index would be present ?
bq. My scan will be blocked for my lock on region1,
By 'my' I assume the lock is placed
Peter:
Normally java.lang.System.nanoTime() is used for measuring elapsed time.
See also
https://www.javacodegeeks.com/2012/02/what-is-behind-systemnanotime.html
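A minimal sketch (doWork() is a placeholder for the code being timed):

  long start = System.nanoTime();
  doWork();
  long elapsedMillis = (System.nanoTime() - start) / 1_000_000L;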
bq. the prePut co-processor is executed inside a record lock
The prePut hook is called with a read lock on the underlying region.
Was hbase-site.xml on the classpath when you executed the command ?
Original message From: Ravi Hemnani
Date: 1/9/18 3:11 AM (GMT-08:00) To: user@hbase.apache.org Subject: Re: Hbase
hbck not working properly.
@Ted,
I ran the following command,
'sudo -u hbase hbase
verse
> scan.
> The reverse scan causes a StackOverflowError.
>
> Related issue is https://issues.apache.org/jira/browse/HBASE-14497
>
> I wonder why that patch did not apply 1.2.6.
>
> Best regards,
> Minwoo Kang
>
>
Can you provide a bit more information ?

- data block encoding for the column family where this error occurred
- pastebin of more of the region server log prior to the StackOverflowError
  (after redaction)
- release of hadoop for the hdfs cluster
- non-default config which may be related
Thanks
On Sat, J
Please run hbck as the hbase superuser, normally the user hbase.
Cheers
On Fri, Jan 5, 2018 at 4:35 AM, Ravi Hemnani
wrote:
> Hello all,
>
> I am in a fix right now where hbase hbck command is not running properly.
> It is picking up incorrect rootdir in order to search for regions info in
> hdfs and
Thanks, Clément
On Wed, Jan 3, 2018 at 2:54 PM, Clément Guillaume
wrote:
> Done https://issues.apache.org/jira/browse/HBASE-19700
>
> 2017-12-31 8:16 GMT-08:00 Ted Yu :
>
>> Clément:
>> Since you have reproduced the assertion error, can you log a JIRA ?
>>
>&g
=> 'false',
> CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false',
> COMPRESSION => 'SNAPPY', BLOCKCACHE => 'true', BLOCKSIZE => '65536',
> METADATA => {'ENCODE_ON_DISK'