Please see the following in HConstants:
public static final byte [] EMPTY_START_ROW = EMPTY_BYTE_ARRAY;
To get the first key in a region, you can specify the start key obtained
below in a Scan and utilize the following:
* To limit the maximum number of values returned for each call to
pshot. Is it feasible to
> lock the DB for this time?
>
> > On Feb 15, 2016, at 7:13 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> >
> > Keep in mind that if the writes to this table are not paused, there would
> > be some data coming in between steps #1 and #2 whi
The stack trace is similar to the one shown in HBASE-14812
HBASE-14812 is fixed in the soon-to-be-released 1.2.0.
FYI
On Tue, Feb 16, 2016 at 5:56 AM, Arul wrote:
> Hi,
>
> I am trying to pull data from hbase table and it works for some time and
> gets stuck in
>
> My C# code was compiled against thrift 0.92 and I am trying to connect to
> hbase 0.98.17
>
> Thanks
>
> On Tue, Feb 16, 2016 at 2:02 PM, Rajeshkumar J <
> rajeshkumarit8...@gmail.com>
> wrote:
>
> > Hi,
> >
> > For both I have used hbase 0.9
What version of thrift was your C# code compiled against ?
Which release of hbase do you try to connect to ?
Cheers
On Mon, Feb 15, 2016 at 11:04 PM, Rajeshkumar J wrote:
> Hi,
>
>
>I am connecting to thrift server via C# code and I am getting the
> following
an alias for table names?
> >
> > Didn’t see these in any docs or Googling, any help is appreciated.
> Writing all this data back to the original table would be a huge load on a
> table being written to by external processes and therefore under large load
> to begin with.
> >
Can you pastebin region server log snippet around the time when the split
happened ?
Was the split on data table or index table ?
Thanks
> On Feb 15, 2016, at 10:22 AM, Pedro Gandola wrote:
>
> Hi,
>
> I have a cluster using *HBase 1.1.2* where I have a table and a
There is currently no native support for renaming two tables in one atomic
action.
FYI
On Sun, Feb 14, 2016 at 4:18 PM, Pat Ferrel wrote:
> I use Spark to take an old table, clean it up to create an RDD of cleaned
> data. What I’d like to do is write all of the data to
Please take a look at table 13 under:
http://hbase.apache.org/book.html#_permissions
On Fri, Feb 12, 2016 at 12:07 AM, Harinder Singh wrote:
> Hi,
>
> What is the level of permission required for creating a table in HBase if I
> am making a client request using RPC.
Please take a look at filterKeyValue(Cell) method of
SingleColumnValueFilter :
if (!CellUtil.matchingColumn(c, this.columnFamily, this.columnQualifier)) {
When you write your own Filter supporting multiple columns, rewrite the
above check to fit your needs.
Cheers
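As a rough illustration of the suggestion above, a multi-column variant of the check could loop over several tracked (family, qualifier) pairs instead of one. This is a plain-Java sketch, not the HBase Filter API: ColumnRef is a hypothetical stand-in for the byte[] pairs a real filter would hold, and a real implementation would compare against each Cell via CellUtil.matchingColumn inside filterKeyValue.

```java
import java.util.Arrays;
import java.util.List;

// Sketch only: SingleColumnValueFilter skips cells whose (family, qualifier)
// differ from the single tracked column. A multi-column filter would replace
// that check with a loop over several tracked columns, as below.
class MultiColumnCheck {
    static final class ColumnRef {
        final byte[] family;
        final byte[] qualifier;
        ColumnRef(byte[] family, byte[] qualifier) {
            this.family = family;
            this.qualifier = qualifier;
        }
    }

    // True if the given (family, qualifier) matches any tracked column.
    static boolean matchesAny(List<ColumnRef> columns, byte[] family, byte[] qualifier) {
        for (ColumnRef c : columns) {
            if (Arrays.equals(c.family, family) && Arrays.equals(c.qualifier, qualifier)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<ColumnRef> cols = Arrays.asList(
            new ColumnRef("cf".getBytes(), "a".getBytes()),
            new ColumnRef("cf".getBytes(), "b".getBytes()));
        System.out.println(matchesAny(cols, "cf".getBytes(), "b".getBytes()));  // true
        System.out.println(matchesAny(cols, "cf".getBytes(), "c".getBytes()));  // false
    }
}
```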
On Thu, Feb 11, 2016 at
Can you give us a bit more information ?
Release of hbase
snippet of your code (especially HBaseClient.java) related to the stack
trace
Thanks
On Tue, Feb 9, 2016 at 2:47 AM, Raja.Aravapalli
wrote:
>
> Hi,
>
> HBase table lookup is failing with below exception.
Below was example from another thread involving HBaseStorage :
test = LOAD '$TEST'
USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf_data:name
cf_data:age', '-loadKey true -maxTimestamp $test_date')
as (age);
Can you adjust your statement so that the table name is correctly specified
?
bq. the bulk of the work involves deleting the files from the column family
from HDFS
I think the first step when you delete files from column family is
archiving.
FYI
On Mon, Feb 8, 2016 at 7:53 AM, Cameron, David A
wrote:
> Hi,
>
> I'm working on a project where we
PIG to get the desired output ?
>
>
>
>
>
> From: Ted Yu <yuzhih...@gmail.com>
> To: "user@hbase.apache.org" <user@hbase.apache.org>
> Date: 06-02-2016 00:55
> Subject:Re: HBase --aggregation using MR
>
>
>
> Here is ja
bq. users.each{ mutator.mutate(toPut(it))}
I assume users is a collection of User objects.
Have you tried obtaining / closing mutator for each User instead of sharing
the mutator ?
See TestClientNoCluster for sample usage.
On Mon, Feb 8, 2016 at 3:01 PM, Serega Sheypak
wrote:
m(premise do
> not operate column timestamp)
>
>
>
>
> 2016-02-05 20:13 GMT+08:00 Ted Yu <yuzhih...@gmail.com>:
>
> > bq. when the result line is so much lines
> >
> > By line, did you mean number of rows ?
> >
> > bq. one table wit
Can you describe how you used importtsv ?
Here is one related command line parameter:
"By default importtsv will load data directly into HBase. To instead
generate\n" +
"HFiles of data to prepare for a bulk data load, pass the option:\n" +
" -D" + BULK_OUTPUT_CONF_KEY +
In this case, it seems stop key should be 0xAB.
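The 0xAA -> 0xAB rule generalizes: the exclusive stop key for a prefix scan is the prefix with its last byte incremented, with trailing 0xFF bytes carried and dropped. A plain-Java sketch of that rule (not HBase code):

```java
import java.util.Arrays;

// Computes the exclusive stop key for scanning all rows with a given
// prefix. Trailing 0xFF bytes carry over and are dropped; an all-0xFF
// prefix has no stop key (scan to the end of the table), signalled by null.
class PrefixStopKey {
    static byte[] stopKeyForPrefix(byte[] prefix) {
        byte[] stop = Arrays.copyOf(prefix, prefix.length);
        for (int i = stop.length - 1; i >= 0; i--) {
            if (stop[i] != (byte) 0xFF) {
                stop[i]++;                          // increment, drop carried tail
                return Arrays.copyOf(stop, i + 1);
            }
        }
        return null;                                // all 0xFF: no stop key
    }

    public static void main(String[] args) {
        byte[] stop = stopKeyForPrefix(new byte[]{(byte) 0xAA});
        System.out.println(Integer.toHexString(stop[0] & 0xFF));  // ab
    }
}
```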
Rajeshkumar's email didn't mention prefix.
I assume he was looking for generic fuzzy row filter.
Cheers
On Fri, Feb 5, 2016 at 7:25 AM, Jean-Marc Spaggiari wrote:
> Hi,
>
> Be careful with you stop key here, you might
Here is javadoc for RowFilter :
* This filter is used to filter based on the key. It takes an operator
* (equal, greater, not equal, etc) and a byte [] comparator for the row,
* and column qualifier portions of a key.
I guess you would want flexibility with comparing part(s) of row key.
bq. when the result line is so much lines
By line, did you mean number of rows ?
bq. one table with rowkey as A_B_time, another as B_A_time
In the above case, handling failed write (to the second table) becomes a
bit tricky.
Cheers
On Fri, Feb 5, 2016 at 12:08 AM, Jameson Li
Vishnu:
Please take a look
at hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
for multipart related config parameters (other than the one mentioned by
Matteo):
fs.s3n.multipart.uploads.block.size
fs.s3n.multipart.copy.block.size
Cheers
On Thu, Feb 4, 2016 at 7:00 PM,
version? I have seen a number of
> related bugs opened/fixed, but all involved changing code or setting a
> parameter. So I am not sure if the shell can support it out of box in the
> newer version.
>
> -----Original Message-
> From: Ted Yu [mailto:yuzhih...@gmail.com]
> S
Can you utilize ColumnPrefixFilter or ColumnRangeFilter to narrow the
columns to be returned.
Not sure what you would do with 3MM columns.
On Wed, Feb 3, 2016 at 10:23 AM, Frank Luo wrote:
> I am trying to “get” a very flat row, meaning one row has 3MM columns,
> from
>
>
>
> On Wed, Jan 27, 2016 at 3:38 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > bq. from 0.98.0 to 0.98.4
> >
> > Rolling upgrade should be fine.
> >
> > Cheers
> >
> > On Wed, Jan 27, 2016 at 2:57 PM, Arul Ramachandran <arkup...@g
bq. from 0.98.0 to 0.98.4
Rolling upgrade should be fine.
Cheers
On Wed, Jan 27, 2016 at 2:57 PM, Arul Ramachandran
wrote:
> Hello HBase folks,
>
>
> - Looking to upgrade from Hortonworks HDP 2.1(Hadoop 2.4.0) to HDP 2.2.
> (Hadoop 2.6.0)
> - HBase version changes from
Shortly before BadVersion occurred, I saw:
2016-01-27 05:52:36,048 INFO [main-EventThread]
replication.ReplicationTrackerZKImpl:
/hbase/rs/r12s8.sjc.aristanetworks.com,9104,1453785783387
znode expired, triggering replicatorRemoved event
2016-01-27 05:52:36,051 INFO [main-EventThread]
Can you provide a bit more information ?
Which hbase release are you using ?
In between the two queries, was there any concurrent update to the
underlying table ?
BTW I assume thrift servers 1 and 2 use the same binary - they just resided
on different machines.
Cheers
On Tue, Jan 26, 2016 at
> Thanks
>
> On Wed, Jan 27, 2016 at 10:35 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > Can you provide a bit more information ?
> >
> > Which hbase release are you using ?
> > In between the two queries, was there any concurrent update to the
> >
stance require
> On 1/25/16, 3:02 PM, "Ted Yu" <yuzhih...@gmail.com> wrote:
>
> >bq. what if I want to update two cells (from one row) in one atomic
> >operation
> >
> >Can you clarify the condition on which the update should be performed ?
ons)) the
> condition for the atomic update?
>
> On 1/26/16, 9:52 AM, "Ted Yu" <yuzhih...@gmail.com> wrote:
>
> >As long as C1 and C2 are updated using one checkAndPut() call, it should
> >work.
> >
> >On Tue, Jan 26, 2016 at 7:47 AM, Yakubovich,
able. I can get
> HRegionLocation,HRegionInfo, but … how to get an HRegion object instance
> for the given HTable and row…
>
>
>
>
> On 1/26/16, 1:05 PM, "Ted Yu" <yuzhih...@gmail.com> wrote:
>
> >bq. can you completely customize
> >
> >Plea
>
> On 1/26/16, 2:51 PM, "Ted Yu" <yuzhih...@gmail.com> wrote:
>
> >From Connection, you can call:
> >
> > public RegionLocator getRegionLocator(TableName tableName) throws
> >IOException;
> >
> >RegionLocator has this meth
bq. what if I want to update two cells (from one row) in one atomic
operation
Can you clarify the condition on which the update should be performed ?
Meaning, do you want to compare with one column or compare with two columns
?
If you want to compare with one column and update the row depending
I have very limited knowledge on Parquet, so I can only answer from HBase
point of view.
Please see recent thread on number of columns in a row in HBase:
http://search-hadoop.com/m/YGbb3NN3v1jeL1f
There're a few Spark hbase connectors.
See this thread:
There is also backup / restore (work in progress):
https://issues.apache.org/jira/browse/HBASE-7912
FYI
On Wed, Jan 20, 2016 at 2:12 AM, Samir Ahmic wrote:
> Hi Sumit,
> IMHO snapshots are the easiest way to copy tables; you just need two steps:
>
> 1. create snapshot
> 2.
Having dozens of tables in a multi-tenant cluster is okay.
How many regions on average would each table have ?
The total number of regions should have higher weight in your planning compared
to the total number of tables.
Cheers
> On Jan 18, 2016, at 4:19 AM, Guillermo Ortiz
> wrote:
>
> Hi Ted,
>
> As mentioned in that HBASE-14926 it is fixed in HBase 0.98.17. I have
> googled it for both its source code and binary but I didn't find one. can
> you guide me where I can find Hbase 0.98.17 source code or binary package
>
> Thanks
>
>
17.doCall(DistributedFileSystem.java:1064)
> > at
> >
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> > at
> >
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
> > at
> >
> org.apache.hadoop.fs.FilterFileSyste
ore my previous mail as the log is taken when hbase is working
> fine. Please find the logs below when Hbase didn't return any records
>
> http://pastebin.com/APYjiGSP
>
> Thanks
>
> On Mon, Jan 11, 2016 at 11:19 AM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > When
, Rajeshkumar J <rajeshkumarit8...@gmail.com>
wrote:
> Hi Ted,
>
> I don't know how to take a stack trace using jstack. But I tried using
> some commands and they failed. Can you help me with this?
>
> Thanks
>
> On Sun, Jan 10, 2016 at 11:10 PM, Ted Yu <yuzhih...@gma
Can you take a stack trace of the thrift server and pastebin the trace ?
Thanks
On Sun, Jan 10, 2016 at 8:56 AM, Rajeshkumar J
wrote:
> Hi,
>
>
> I am connecting via Hbase thrift server to access records in Hbase and I am
> doing this in C# and i am using range
Please take a look at:
https://blogs.apache.org/hbase/
You can find cases from Imgur, Bloomberg and Cask.
Cheers
On Sat, Jan 9, 2016 at 4:29 AM, Bhuvan Rawal wrote:
> Hi,
>
> I'd be grateful if someone here could direct me to case studies of HBase.
>
> Regards,
> Bhuvan
>
upper limit and these 42MB are the result of forcing the memstore to be
> flushed? The problem is that all the newly store files added to HDFS are
> starting with this size (42MB). I did not mention that my CF is in-memory.
>
> Best regards,
>
> On Tue, Jan 5, 2016 at 4:04 PM,
For #1, when all store files are selected for compaction, the compaction
becomes major
see 'Determine the Optimal Number of Pre-Split Regions' under:
http://hbase.apache.org/book.html#disable.splitting
See also http://hbase.apache.org/book.html#managed.compactions
Cheers
On Tue, Jan 5, 2016 at
scan multiple row key ranges and I came across this
> jira
>
> https://issues.apache.org/jira/browse/HBASE-11144
>
>
> whether this is implemented if so guide me the command to make use of it
>
> Thanks
>
>
>> On Thu, Dec 31, 2015 at 7:43 PM, Ted Yu <yuz
See the following in hbase-default.xml :
hbase.client.keyvalue.maxsize
10485760
Specifies the combined maximum allowed size of a KeyValue
instance.
FYI
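The default quoted above can be overridden in hbase-site.xml; a sketch, assuming a roughly 50 MB cap is wanted (the 52428800 value is just an example, and if memory serves a value of 0 or less disables the check):

```xml
<!-- hbase-site.xml override for the maximum KeyValue size.
     52428800 bytes (~50 MB) is an example value, not a recommendation. -->
<property>
  <name>hbase.client.keyvalue.maxsize</name>
  <value>52428800</value>
</property>
```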
On Mon, Jan 4, 2016 at 4:34 PM, Keith Lim wrote:
> I am using an c# API to write to HBASE, is there a
Can you pastebin the complete error you encountered ?
What dependencies have you added ?
Thanks
> On Jan 4, 2016, at 3:44 AM, Rajeshkumar J wrote:
>
> Hi,
>
> We can use fuzzy row filter when rowkey has fixed length. I decided to
> design row key as -mm-dd|
Can you log onto the server hosting region 82432aca9ede964943b40753cb64e808
and see what happened ?
See if the namespace table can be found under rootdir.
e.g. assuming /apps/hbase/data is the rootdir, you should see something
similar to the following:
hdfs dfs -ls
Please see http://hbase.apache.org/book.html#upgrade1.0.from.0.94
On Mon, Jan 4, 2016 at 6:10 AM, Parmeet Arora wrote:
> Hii,
> How to migrate data in hbase-0.94.8 to hbase-1.1.0? With hbase-0.94.8 I
> was using Hadoop 1.x, and with hbase-1.1.0 I was using Hadoop 2.x, so how I
> >
> 4819 Emperor Blvd., Ste 400
> Durham, North Carolina 27703
> tel: +1.424.262.KNOW x703
> skype: shaneodonnell
> email: sha...@knownormal.com
>
> :
bq. HBase file layout needs to be upgraded. You have version null and I
want version 8.
Can you check hbase.version under hbase rootdir ?
On a healthy system, you should see something like the following:
hdfs dfs -cat /apps/hbase/data/hbase.version
PBUF
8
Cheers
On Sun, Jan 3, 2016 at 7:21 PM,
Was there any configuration change around the time replication stopped
working ?
Have you inspected server logs to see if there was some clue ?
Consider pastebinning snippet of server log.
Which release of HBase / hadoop you're using ?
Thanks
On Fri, Jan 1, 2016 at 2:25 AM, 耿嵩
rm : bigdata,
> control : bigdata and this will be concatenated as
> 2015-12-12|bigdata|bigdata and passed to hbase) it works and retrieves
> data. But if the user chooses multiple options for either platform or
> control it will fail. So help me in resolving this problem?
>
> Th
MultiRowRangeFilter involves List of RowRange's whose definition depends on:
public RowRange(String startRow, boolean startRowInclusive,
    String stopRow, boolean stopRowInclusive) {
It would be tedious to construct MultiRowRangeFilter in shell.
Can you use Java API ?
Cheers
On
For the example given below, you can specify PrefixFilter for the scan.
Please see also for examples of filter involving regex:
https://issues.apache.org/jira/browse/HBASE-9428
> On Dec 30, 2015, at 9:57 PM, Rajeshkumar J
> wrote:
>
> Hi,
>
> Currently i am
Which hbase release are you using ?
After a brief search, it looks like a Chinese character might be present in
the region name or a config value.
Can you double check ?
On Wed, Dec 23, 2015 at 10:04 PM, yaoxiaohua wrote:
> Hi,
>
> 172.19.206.142 ,this node is running
Kumiko:
You can define your own YCSB workload by specifying the readproportion
and scanproportion you want.
FYI
On Tue, Dec 22, 2015 at 11:39 AM, iain wright wrote:
> You could use YCSB and a custom workload (i don't see a predefined workload
> for 100% puts without reads)
From RegionListTmpl.jamon :
<%if (onlineRegions != null && onlineRegions.size() > 0) %>
...
<%else>
Not serving regions
The message means that there was no region online on the underlying server.
FYI
On Tue, Dec 22, 2015 at 7:18 AM, Brian Jeltema wrote:
> Following
Can you pick a few regions stuck in transition and check related region
server logs to see why they couldn't be assigned ?
Which release were you using previously ?
Thanks
On Mon, Dec 21, 2015 at 3:54 PM, Brian Jeltema wrote:
> I am doing a cluster upgrade to the HDP 2.2
You can narrow the scope of search by issuing scan with start row of region
N and stop row of region N+1 (repeat for regions with #rows > 0).
Suppose you find that scanning region R hangs. You can capture stack trace
on the server which hosts R and pastebin it.
Thanks
On Mon, Dec 21, 2015 at
Have you polled Ranger community with this question ?
http://ranger.apache.org/mail-lists.html
Cheers
On Fri, Dec 18, 2015 at 9:04 AM, Chris Gent <
chris.g...@bigdatapartnership.com> wrote:
> Hi,
>
> We have a webservice that performs reads/writes on HBase tables and have a
> requirement to
en the Table object gets
> created? Or is it right back when the connection is established?
>
> --
> Chris
>
>
>
> On 18 December 2015 at 17:18, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > Have you polled Ranger community with this question ?
> >
> > htt
For question #1, which release(s) are you using / interested in ?
Cheers
On Fri, Dec 18, 2015 at 9:21 AM, Dominic KUMAR wrote:
> Hi HBase,
>
> Is there any HBase End of Product Life Cycle date / release ? What is the
> road-map of HBase ?
>
>
>
> Regards,
>
> Dominic Vivek
Here is related code from AsyncProcess:
if (results.length != actions.size()) {
throw new AssertionError("results.length");
}
It means that the length of results (0 in your case) is not the same as the
number of Actions.
Please create the results array with the proper length.
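The invariant behind the quoted AssertionError can be illustrated with a plain-Java sketch: the results array handed to a batch-style call must have exactly one slot per submitted action. (With the HBase client this would be the Object[] passed alongside the action list; the sizing rule is the point here, not the API.)

```java
import java.util.Arrays;
import java.util.List;

// One result slot per action: sizing the results array from the action
// list avoids the results.length != actions.size() assertion failure.
class BatchResults {
    static Object[] resultsFor(List<?> actions) {
        return new Object[actions.size()];  // one slot per action
    }

    public static void main(String[] args) {
        List<String> actions = Arrays.asList("put-1", "put-2", "put-3");
        System.out.println(resultsFor(actions).length);  // 3
    }
}
```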
> familiar with java)?
>
> Cheers
>
>
> On 17/12/2015 2:53 PM, Ted Yu wrote:
>
>> I noticed Phoenix config parameters. Are Phoenix jars in place ?
>>
>> Can you capture jstack of the master when this happens ?
>>
>> Cheers
>>
>> On Dec 16,
tack.java:140)
> at sun.tools.jstack.JStack.main(JStack.java:106)
> Caused by: sun.jvm.hotspot.debugger.DebuggerException: cannot open binary
> file
> at sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal.attach0(Native
> Method)
> at
> sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal.access$100(LinuxDebuggerLocal.java:62)
> at
> sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal$1AttachTask.doit(LinuxDebuggerLocal.java:269)
> at
> sun.jvm.hotspot.debugger.linux.LinuxDebuggerLocal$LinuxDebuggerLocalWorkerThread.run(LinuxDebuggerLocal.java:138)
>
>
>
> On 17/12/2015 3:01 PM, Ted Yu wrote:
>
>> ps aux | grep aster
>>
>
>
I noticed Phoenix config parameters. Are Phoenix jars in place ?
Can you capture jstack of the master when this happens ?
Cheers
> On Dec 16, 2015, at 7:46 PM, F21 wrote:
>
> Background:
>
> I am prototyping a HBase cluster using docker. Docker is 1.9.1 and is running
w.r.t. option #1, also consider
http://hbase.apache.org/book.html#arch.bulk.load
FYI
On Tue, Dec 15, 2015 at 12:17 PM, Frank Luo wrote:
> I am in a very similar situation.
>
> I guess you can try one of the options.
>
> Option one: avoid online insert by preparing data
Colin:
You may want to take a look at HDFS-8298 where the posted stack trace looks
similar to what you described.
Cheers
On Mon, Dec 14, 2015 at 5:17 PM, Colin Kincaid Williams
wrote:
> We had a namenode go down due to timeout with the hdfs ha qjm journal:
>
>
>
> 2015-12-09
<lin...@qiyi.com> wrote:
> Thanks for your advices.
>
> For option three, I think major compaction on a large region will affect
> performance of the region server. So the downtime would be downtime for
> all the tables on that RS, am I right?
>
>
>
>
> On
as one column family hitting its max
> > version and the entire row, all column families, being wiped. Is that
> > expected?
> >
> > On Sun, Dec 13, 2015 at 6:30 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> >
> > > The Maximum Number of Versions for a C
The put for q4, q5, q6 and q7 wouldn't overwrite existing rows.
When were the columns q1 to q3 written ?
What is the TTL for your table ?
Thanks
On Sun, Dec 13, 2015 at 12:36 PM, Mike Thomsen
wrote:
> I noticed that our test data set is suddenly missing a lot of data,
s if another
> bolt fails), will that overwrite just the affected cells or affect
> everything in the column family or even the entire row?
>
> Thanks,
>
> Mike
>
> On Sun, Dec 13, 2015 at 5:14 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > The put for q4, q5, q6 and q7 wouldn't
Interesting.
Which exact 0.98 release are you using ?
Can you inspect logs to see when the duplicate HFiles were introduced
(during one bulk load run or multiple bulk load runs) ?
bq. Will a compaction eventually take care of this?
I think so.
Thanks
On Wed, Dec 9, 2015 at 7:18 AM, Anthony
bq. Would they eventually be taken care of during a compaction and
converted over?
Yes. Compaction would produce v3 HFiles.
On Mon, Dec 7, 2015 at 9:48 PM, Anthony Nguyen
wrote:
> Hi all,
>
> I believe I have successfully done a rolling upgrade to a small test
>
I think you can.
See the following:
http://hbase.apache.org/book.html#_upgrade_paths
It is advisable to use 1.1.2 client so that you get the full feature set
from 1.1.2
Cheers
On Fri, Dec 4, 2015 at 9:36 PM, Li Li wrote:
> I want to set up a hbase cluster. I found the
Have you looked at HBASE-6721 ?
> On Dec 4, 2015, at 12:08 AM, manohar mc wrote:
>
> Hi All,
> We are using hbase to store data on different customers. As part of
> design one of the key goal is to segregate data of each customers.
> I came across namespace but
Created HBASE-14928 and attached patch there.
FYI
On Thu, Dec 3, 2015 at 9:05 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> Thanks for the response, Jerry.
>
> I created a patch:
>
> http://pastebin.com/xisGVHt8
>
> All REST tests passed.
>
> I know Ben logg
other way to make the check.
>
> Best regards,
>
> On Fri, Dec 4, 2015 at 4:35 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > hasFamily() just checks the in-memory Map:
> >
> > public boolean hasFamily(final byte [] familyName) {
> >
> > ret
Looks like the row key prefix has fixed length (40 characters).
Please take a look at FuzzyRowFilter
Example can be found in:
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowFilter.java
Cheers
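The matching rule behind FuzzyRowFilter can be sketched in plain Java (this is an illustration, not the HBase implementation): a fuzzy pair is a pattern row key plus a mask of the same length, where a 0 mask byte means the row byte must equal the pattern byte and a non-zero mask byte means any byte is accepted. With a fixed-length 40-character prefix, positions 0..39 would be fixed and everything after left fuzzy.

```java
// Plain-Java sketch of fuzzy row key matching: mask byte 0 = position is
// fixed (must match pattern), mask byte 1 = position matches any byte.
class FuzzyMatch {
    static boolean matches(byte[] row, byte[] pattern, byte[] mask) {
        if (row.length < pattern.length) {
            return false;  // row too short to satisfy the pattern
        }
        for (int i = 0; i < pattern.length; i++) {
            if (mask[i] == 0 && row[i] != pattern[i]) {
                return false;  // fixed position differs
            }
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] pattern = "2015-12-12|????".getBytes();
        byte[] mask = new byte[pattern.length];
        for (int i = 11; i < mask.length; i++) {
            mask[i] = 1;  // date prefix (bytes 0..10) fixed, suffix fuzzy
        }
        System.out.println(matches("2015-12-12|abcd".getBytes(), pattern, mask));  // true
        System.out.println(matches("2015-12-13|abcd".getBytes(), pattern, mask));  // false
    }
}
```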
On Fri, Dec 4, 2015 at 1:10 PM, Arun Patel wrote:
Thanks for the response, Jerry.
I created a patch:
http://pastebin.com/xisGVHt8
All REST tests passed.
I know Ben logged a JIRA on this subject already.
Not sure if that should be re-opened or, a new JIRA should be created.
Once we have an open JIRA, I will attach my patch there.
Cheers
On
There is get_splits command but it only shows the splits.
status 'detailed' would show you enough information
e.g.
"t1,30,1449175546660.da5f3853f6e59d1ada0a8554f12885ab."
numberOfStores=1, numberOfStorefiles=0,
storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0,
e on some RS.
>
> I found in my practice, it is always needed.
>
> 2015-12-04 4:48 GMT+08:00 Ted Yu <yuzhih...@gmail.com>:
>
> > There is get_splits command but it only shows the splits.
> >
> > status 'detailed' would show you enough inform
Do you mind pastebin snippet of region server log where the region stuck in
transition was hosted ?
This would give us some clue.
Cheers
On Wed, Dec 2, 2015 at 12:14 PM, Amanda Moran
wrote:
> Hi there All-
>
> I apologize if this issue has been raised before... I have
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:5475)
> ... 10 more
> Caused by: java.lang.ClassNotFoundException: Class
> org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegion not
> found
> at
>
> org.apache.hadoop.conf.Conf
bq. current MR implementation may OOME if there are too many columns
This is related:
HBASE-14696 Support setting allowPartialResults in mapreduce Mappers
but it is not in any hbase release yet.
FYI
On Tue, Dec 1, 2015 at 7:16 AM, Jean-Marc Spaggiari wrote:
> I can
Have you read http://hbase.apache.org/book.html#rowkey.design ?
bq. we can store more than one row for a row-key value.
Can you clarify your intention / use case ? If row key is the same, key
values would be in the same row.
On Mon, Nov 30, 2015 at 8:30 AM, Rajeshkumar J
bq. duplicate data to two different tables, one with (salt-productId-timestamp)
and other with (salt-productId-place) keys
I suggest think twice about the above schema. It may become tricky keeping
data in the two tables in sync.
Meaning, when update to table1 succeeds but update to table2 fails,
an you suggest me a site or any others?
>
> On Thu, Nov 26, 2015 at 8:32 PM, Ted Yu <yuzhih...@gmail.com> wrote:
>
> > Excerpt from hbase-shell//src/main/ruby/shell/commands/major_compact.rb :
> >
> > Examples:
> > Compact all regions in
Excerpt from hbase-shell//src/main/ruby/shell/commands/major_compact.rb :
Examples:
Compact all regions in a table:
hbase> major_compact 't1'
Cheers
On Wed, Nov 25, 2015 at 10:00 PM, Rajeshkumar J <rajeshkumarit8...@gmail.com
> wrote:
> Hi Ted Yu,
After loading the data, have you major compacted the table ?
You can include STARTROW, STOPROW and TIMERANGE for your scan to narrow the
scope.
FYI
On Wed, Nov 25, 2015 at 2:36 AM, Rajeshkumar J
wrote:
> Hi,
>
>
> I am new to Apache Hbase and I am using
Please take a look at:
http://hbase.apache.org/book.html#_endpoint_example
The Endpoint Coprocessor runs server side. So it should be very efficient.
Cheers
On Wed, Nov 25, 2015 at 6:03 AM, Arul wrote:
> Hi,
>
> I am new to Hbase and doing an POC. We have a detail
Can you trace this region through master / region server log to see if there is
some clue ?
Cheers
> On Nov 21, 2015, at 2:56 AM, Pankaj kr wrote:
>
> Hi Folks,
>
> We met a very weird scenario.
> We are running PE tool, during testing we found all regions are in
nservers are aborting.
>>> The regionservers will reject client connection requests if there is an RPC
>>> version mismatch.
>>>
>>> 1.x and 0.98 client and servers have been tested to be rolling upgrade
>>> compatible (meaning that older clients can wor
lot of development is the use of the wrong client, I think about
> how to avoid it. For example, we even upgrade to 1.0 but they may use a 2.0
> version.
>
>
>
>
> ------ Original Message ------
> From: "Ted Yu";<yuzhih...@gmail.com>;
> Sent: November 18, 2015 (Wednesday
See http://hbase.apache.org/book.html#hbase.rolling.upgrade
For example, in Rolling upgrade from 0.98.x to HBase 1.0.0, we state that
it is possible to do a rolling upgrade between hbase-0.98.x and hbase-1.0.0.
Cheers
On Wed, Nov 18, 2015 at 12:22 AM, 聪聪 <175998...@qq.com> wrote:
> We recently
> > > 2015-11-05 13:58 GMT+01:00 Naresh Reddy <
> > naresh.re...@aletheconsulting.com
> > > >:
> > >
> > > > Hi
> > > >
> > > > I have already replaced the hbase version with
> > "*hbase95.version=1.1.2*"
> >
t the Hbase client (which shows up the
> error) and HMaster machines in this particular case are not time-synced. I
> notice a day's gap but I assume that NTP time-sync is only a requirement
> for Hbase master/ region servers and not also for their clients.
>
> Thanks,
> Sumit
>
ler.java:114)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:833)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:810)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:842)
> at
> com.thinkaurelius.titan.diskstorage.hbase.HBaseKeyColumnValueStore.getHelper(HBaseKeyColumn
d to Hbase code?
> And any advise on if I can somehow avoid it in first place?
>
> Thanks,
> Sumit
>
> --
> *From:* Ted Yu <yuzhih...@gmail.com>
> *To:* Sumit Nigam <sumit_o...@yahoo.com>
> *Cc:* "user@hbase.apache.org&qu