Congratulations!
哈晓琳 wrote on Friday, October 15, 2021 at 11:48 AM:
> Congratulations!
>
> Baiqiang Zhao wrote on Friday, October 15, 2021 at 11:21 AM:
>
> > Congratulations!
> >
> > 张铎 (Duo Zhang) wrote on Friday, October 15, 2021 at 9:53 AM:
> >
> > > Congratulations!
> > >
> > > Nick Dimiduk wrote on Friday, October 15, 2021 at 8:56 AM:
> > >
> > > > Thank you for your con
I recommend storing values as binary: fixed-width binary encodings are
generally lexicographically ordered in the same way as the numbers themselves
(which makes sorting easier). That said, it's important to settle on a single
format (even if you store all numbers as strings), rather than storing some in one format and other
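(For illustration, a minimal Java sketch of one such single format: fixed-width
longs with the sign bit flipped so that unsigned lexicographic byte order
matches numeric order. The class and method names are invented.)

import org.apache.hadoop.hbase.util.Bytes;

public final class SortableLong {
    // Flipping the sign bit makes two's-complement longs sort correctly
    // under HBase's unsigned byte-wise comparison (negatives before
    // positives); Bytes.toBytes(long) is fixed-width big-endian.
    public static byte[] encode(long value) {
        return Bytes.toBytes(value ^ Long.MIN_VALUE);
    }

    public static long decode(byte[] encoded) {
        return Bytes.toLong(encoded) ^ Long.MIN_VALUE;
    }
}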
k is?
HBASE-15707 - am I able to read the HFile manually to determine if Tags have
been written properly?
Cheers,
Tom
-Original Message-
From: ramkrishna vasudevan [mailto:ramkrishna.s.vasude...@gmail.com]
Sent: 16 June 2016 06:01
To: user@hbase.apache.org
Subject: Re: Writing visibil
can't see them.
Then I noticed the fix for HBASE-15707. I am using Hortonworks' HBase 1.1.2 -
am I affected by this / does HFileOutputFormat2 support tags before this fix?
Cheers,
Tom Ellis
Consultant Developer – Excelian
Data Lake | Financial Markets IT
LLOYDS BANK COMMERCIAL BANKING
he system label privileges, but only
read access to the 'hbase:labels' table?
Then that user will still be able to scan and read the labels + ordinal, and
create the tags correctly :) I'll give it a go..
Cheers,
Tom Ellis
Consultant Developer – Excelian
Data Lake | Financial Mar
email.
Cheers,
Tom Ellis
Consultant Developer – Excelian
Data Lake | Financial Markets IT
LLOYDS BANK COMMERCIAL BANKING
E: tom.el...@lloydsbanking.com
Website: www.lloydsbankcommercial.com
user has to have admin/super user
privileges so they can use VisibilityExpressionResolver to correctly create the
tags on the Cell with correct ordinals?
Cheers,
Tom Ellis
Consultant Developer – Excelian
Data Lake | Financial Markets IT
LLOYDS BANK COMMERCIAL BANKING
E: tom.el...@lloydsbankin
I've seen that it's possible to do this with MapReduce by setting the map
output to be a Put (and thus calling setCellVisibility on the Puts), but I'm
struggling to do this with Spark, as I keep getting an exception that I can't
cast a Put to a Cell.
Cheers,
Tom Ellis
Co
will check this.
We could, I guess, create multiple Puts for cells in the same row with
different labels and call setCellVisibility on each individual Put/cell, but
will this create additional overhead?
Cheers,
Tom Ellis
Consultant Developer – Excelian
Data Lake | Financial Markets IT
LLOYDS
7835, value=branch
\x00\x00\x00\x05 column=f:hdfs,
timestamp=1465980237060, value=
\x00\x00\x00\x06 column=f:\x00,
timestamp=1465980447307, value=group
\x00\x00\x00\x06 column=f:hdfs,
timestamp=1465980454130, value=
6 row(s) i
e) but it seems that to be able to use that I'd need to know Label ordinality
client side...
Thanks for your help,
Tom
-Original Message-
From: ramkrishna vasudevan [mailto:ramkrishna.s.vasude...@gmail.com]
Sent: 07 June 2016 11:19
To: user@hbase.apache.org
Subject: Re: Writing vi
g the VisibilityController
coprocessor as we need to assert the expression is valid for the labels
configured.
How can we add visibility labels to cells if we have a job that creates an
HFile with HFileOutputFormat2 which is then subsequently loaded using
LoadIncrementalHFiles?
Cheers,
Tom Ellis
Consu
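For later readers of this thread: once the HBASE-15707 fix is present, one way
to do this is to carry the visibility expression on the Put and let
PutSortReducer (which HFileOutputFormat2.configureIncrementalLoad wires up for
Put map output) convert it into cell tags. A hedged Java sketch; the input
parsing, column names, and label expression below are invented:

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.security.visibility.CellVisibility;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Bulk-load mapper: each Put carries its visibility expression, and the
// job's PutSortReducer turns it into cell tags in the written HFiles
// (once the HBASE-15707 fix is present).
public class VisibilityPutMapper
        extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {

    @Override
    protected void map(LongWritable key, Text line, Context context)
            throws IOException, InterruptedException {
        // Hypothetical parsing: first field is the row key, second the value.
        String[] fields = line.toString().split(",");
        byte[] row = Bytes.toBytes(fields[0]);

        Put put = new Put(row);
        put.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"),
                Bytes.toBytes(fields[1]));
        put.setCellVisibility(new CellVisibility("secret|internal"));

        context.write(new ImmutableBytesWritable(row), put);
    }
}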
>
> So to avoid this problem, we need to find out why the peer RS is slow.
> Based on that and the network speed, adjust the hbase.rpc.timeout value
> and restart the source and peer clusters.
>
> Regards,
> Ashish
>
> -Original Message-
> From:
My HBase replication has stopped.
I am on HBase version 1.0.0-cdh5.4.8 (Cloudera build).
I have 2 clusters in 2 different datacenters; one is the master, the other the
slave. I see the following errors in the log:
2016-04-13 22:32:50,217 WARN
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClust
some reason dies. As to why it dies, I am still looking; that is a different
problem. But when the slave returns, I expect the unconfirmed records to be
resent.
Best practices would be helpful as well
All zookeepers in the slave are listed as peers
--
Abraham Tom
Email: w
hbase cli to see if there is a difference
On Fri, Dec 11, 2015 at 9:25 AM, Stack wrote:
> On Wed, Dec 9, 2015 at 8:14 PM, Abraham Tom wrote:
>
>> Hi all
>> does anybody have scripts or examples to do performance tests and
>> benchmarks on scans
>> We are using c
wanted to test native before thrift. Any github repos or ideas would
be helpful
--
Abraham Tom
Email: work2m...@gmail.com
ory: Closed potentially stale remote
> peer
>
> Have you checked hdfs health ?
>
> Cheers
>
> On Wed, Nov 11, 2015 at 10:10 AM, Abraham Tom wrote:
>
>> Thanks
>>
>> I also restarted in debug mode
>> and found that hbase is renewing my lease to other nodes
server log so that
> the message is concise.
>
> Cheers
>
> On Tue, Nov 10, 2015 at 10:17 PM, Abraham Tom wrote:
>
>> my hbase-site.xml snippet
>>
>>
>> hbase.mob.sweep.tool.compaction.memstore.flush.size = 134217728 (hbase-default.xml)
my hbase-site.xml snippet:
hbase.mob.sweep.tool.compaction.memstore.flush.size = 134217728 (hbase-default.xml)
hbase.hregion.memstore.flush.size = 134217728 (hbase-default.xml)
hbase.hregion.memstore.block.multiplier = 4 (hbase-default.xml)
hbase.mob.sweep.tool.compaction.memstore.flush.size = 134217728 (hbase-default.xm
--
Abraham Tom
Email: work2m...@gmail.com
Phone: 415-515-3621
ly way to clear this
up is to restart thrift.
Thrift is invoked as hbase-daemon.sh start thrift -hsha -f
hbase master and region look fine and hbase shell can still be invoked and
queried.
That invocation works on a 0.98.6 instance, and our JavaScript npm code base
has not changed.
--
Ab
Thanks for the hint. I found from the log that regions assigned to a
particular regionserver (out of 3 machines) caused the warning. I
decommissioned the regionserver in question and restarted it, fixed the
inconsistencies with -fixEmptyMetaCells, and now the warning is gone.
Thanks,
Tom
On Sat, May
.9bb21fe3a575503da1bae02a8d22a8c0./info:serverstartcode/1432926325684/Put/vlen=8/mvcc=0}
Tom
>
> Thanks
>
> On Thu, May 28, 2015 at 6:48 PM, Tom Chan wrote:
>
> > Hi,
> >
> > I kept running into this whenever I remove tab
leared the inconsistencies and those warning messages, but the "No
serialized HRegionInfo" and HBase inconsistencies continued to accumulate
so some config is still not quite right. Any help is appreciated.
Tom
this
--
Abraham Tom
Email: work2m...@gmail.com
Phone: 415-515-3621
tions?
> i.e.
>
> Is Java coding (the client API) needed to do something in HBase which is not
> possible with HBase shell commands?
>
> Thank You,
> Sudeep Pandey
> Ph: 5107783972
>
--
Abraham Tom
Email: work2m...@gmail.com
Phone: 415-515-3621
> > > > > > The streaming in data is guaranteed to have a larger KEY than ANY
> > > > > > existing keys in the table.
> > > > > > And the data will be READONLY.
> > > > > >
> > > > > > The data is streaming in at a very high rate. I don't want to issue
> > > > > > a PUT operation for each data entry, because that is obviously poor
> > > > > > for performance.
> > > > > > I'm thinking about pooling the data entries and flushing them to
> > > > > > hbase every five minutes, and AFAIK there are a few options:
> > > > > >
> > > > > > 1. Pool the data entries, and every 5 minutes run a MR job to
> > > > > > convert the data to hfile format. This approach could avoid the
> > > > > > overhead of single PUTs, but I'm afraid the MR job might be too
> > > > > > costly (waiting in the job queue) to keep pace.
> > > > > >
> > > > > > 2. Use HTableInterface.put(List<Put>); the batched version should
> > > > > > be faster, but I'm not quite sure by how much.
> > > > > >
> > > > > > 3.?
> > > > > >
> > > > > > can anyone give me some advice on this?
> > > > > > thanks!
> > > > > >
> > > > > > hongbin
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Andrey.
> > > > >
> > > >
> > >
> >
> >
> >
> > --
> > Andrey.
> >
>
>
>
>
--
Abraham Tom
Email: work2m...@gmail.com
Phone: 415-515-3621
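For archive readers, a rough sketch of option 2 above: buffer entries
client-side and flush them as one batched call. The table handle, family, and
qualifier are placeholders, using the era's HTableInterface API:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Buffer puts client-side and send them in one batched call: one RPC
// per region server involved, instead of one per entry.
public class BatchWriter {
    private final HTableInterface table;
    private final List<Put> buffer = new ArrayList<Put>();

    public BatchWriter(HTableInterface table) {
        this.table = table;
    }

    public void add(byte[] row, byte[] value) {
        Put put = new Put(row);
        put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), value);
        buffer.add(put);
    }

    // Call every few minutes, or when the buffer gets large.
    public void flush() throws IOException {
        table.put(buffer);
        buffer.clear();
    }
}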
particular table. All those N rows are in
1 region. Is there possibly an advantage to instead having that RS service
the same N rows, but in 2 regions of N/2 rows each?
Thanks,
-- Tom
On Fri, Jan 23, 2015 at 10:37 AM, Nick Dimiduk wrote:
> Have a look at the code in HFileOutputFor
tting from
HTableDescriptor.getMaxFileSize?
Thanks,
-- Tom
ged in
> your jar file on HDFS?
>
> On Mon, Oct 27, 2014 at 4:00 PM, Tom Brown wrote:
> > I tried to attach the coprocessor directly to a table, and it is able to
> > load the coprocessor class. Unfortunately, when I try and use the
> > coprocessor I get a ClassNot
d the coprocessor initially is
not in use when the coprocessor is actually invoked.
--Tom
On Mon, Oct 27, 2014 at 3:42 PM, Tom Brown wrote:
> I'm not sure how to tell if it is a region endpoint or a region server
> endpoint.
>
> I have not had to explicitly associate the copr
the
coprocessor code knows to which table the request applies, so it might be a
region endpoint.
If it helps, this is a 0.94.x cluster (and upgrading isn't doable right
now).
Can both types of endpoint be loaded from HDFS, or just the table-based one?
--Tom
On Mon, Oct 27, 2014 at 3:31 PM,
quot;|", when I use "hdfs:///" does that map to the root hdfs path or the hbase
hdfs path, etc).
I have attempted to google this, and have not found any clear answer.
Thanks in advance!
--Tom
ify on the regionserver to
override the hostname that's used for that regionserver on the master?
--Tom
like the cause was related to zookeeper.
>
> Cheers
>
>
> On Mon, Jun 30, 2014 at 9:56 AM, Mejo Tom wrote:
>
> >
> > Hi,
> > I have a 6 node grid that I have upgraded from hadoop 2.0 to hadoop 2.4. I
> > am facing issues with hbase upgrade from 0.9
Hi,
I have a 6 node grid that I have upgraded from hadoop 2.0 to hadoop 2.4. I am
facing issues with hbase upgrade from 0.94.6-cdh4.3.0 to apache
hbase-0.98.2-hadoop2. Appreciate any pointers on how to resolve this issue.
Details:
After upgrade of hadoop 2.0 to hadoop 2.4, performed the below
, one connection for each region served,
multiple connections per region, or some other formula?
--Tom
On Wed, Jun 25, 2014 at 1:48 PM, Ted Yu wrote:
> Can you look at the tail of master log to see which WAL takes long time to
> split ?
>
> Checking Namenode log if needed.
>
> Ch
va:337)
What does that mean? That HDFS is behaving badly, or something else
entirely?
--Tom
On Wed, Jun 25, 2014 at 11:45 AM, Ted Yu wrote:
> Looks like master was stuck in FileSystem.listStatus() call.
> I noticed the following - did this show up if you take jstack one more time
the stack)
The version of hbase is 0.94.10.
Thanks!
--Tom
On Wed, Jun 18, 2014 at 8:55 PM, Qiang Tian wrote:
> Hi Tom,
> Can you collect your master jvm stacktrace when problem happens and put it
> to pastbin?
> what is your hbase version?
>
>
> On Thu, Jun 19, 2014
Could this happen if the master is running too many RPC tasks and can't
keep up? What if there are too many connections to the server?
--Tom
On Wed, Jun 18, 2014 at 11:33 AM, Tom Brown wrote:
> That server is the master and is not a regionserver.
>
> --Tom
>
>
>
That server is the master and is not a regionserver.
--Tom
On Wed, Jun 18, 2014 at 11:29 AM, Ted Yu wrote:
> Have you checked region server log on 10.100.101.221
> <http://hdpmgr001.pse.movenetworks.com/10.100.101.221:6> ?
>
> Cheers
>
>
> On Wed, Jun 18,
01.pse.movenetworks.com/10.100.101.221:6 failed on socket timeout exception:
java.net.SocketTimeoutException: 6 millis timeout while waiting for channel
to be ready for read. ch : java.nio.channels.SocketChannel[connected
local=/10.100.101.221:36674
remote=hdpmgr001.pse.movenetworks.com/10.100.101.221:6]
--Tom
I don't mean to hijack the thread, but this question seems relevant:
Does data block encoding also help performance, or does it just enable more
efficient compression?
--Tom
On Saturday, June 14, 2014, Guillermo Ortiz wrote:
> I would like to see the times they got doing some scan
Otis,
I'm not sure our issue is the same (although they could turn out to be
related). As far as I have been able to determine, we have only had a
single long pause.
However, we don't have much experience micromanaging our JVMs. How did you
generate those graphs?
--Tom
On Tue, Jun 1
anything during this time.
The issue was detected because requests to a particular RS would
consistently time out during the 20 minutes in question.
--Tom
On Tue, Jun 10, 2014 at 12:49 PM, Vladimir Rodionov wrote:
> 1. Do you have GC logging enabled on your cluster? It does not look like
&
We are still using 0.94.10. We are looking at upgrading soon, but have not
done so yet.
--Tom
On Tue, Jun 10, 2014 at 12:10 PM, Ted Yu wrote:
> Which release are you using ?
>
> In 0.98+, there is JvmPauseMonitor.
>
> Cheers
>
>
> On Tue, Jun 10, 2014 at 11:05 AM, Tom
ere were a handful of sporadic LruBlockCache
stats messages but nothing else. After 20 minutes, normal operation resumed.
Is 20 minutes for a GC pause expected given the operational load and
machine specs? Could a GC pause include periodic log messages? If it wasn't
a GC pause, what else could it be?
--Tom
Sorry, accidentally hit send... I meant to suggest this:
http://stackoverflow.com/questions/20257356/hbase-client-scan-could-not-initialize-org-apache-hadoop-hbase-util-classes/
--Tom
On Tue, May 27, 2014 at 11:14 AM, Tom Brown wrote:
> Can you check your server logs for a full stack tr
Can you check your server logs for a full stack trace? This sounds like it
could be similar to this:
On Tue, May 27, 2014 at 10:15 AM, Ted Yu wrote:
> Can you confirm the version of HBase ?
>
> To my knowledge, cdh5 is based on 0.96
>
> Cheers
>
>
> On Tue, May 27, 2014 at 1:36 AM, Vikram Sing
Does enabling compression include prefix compression (HBASE-4218), or is
there a separate switch for that?
--Tom
On Mon, Jan 27, 2014 at 3:48 PM, Ted Yu wrote:
> To make better use of block cache, see:
>
> HBASE-4218 Data Block Encoding of KeyValues (aka delta encoding / prefix
>
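For archive readers: data block encoding is indeed a separate per-family switch
from COMPRESSION. A small sketch with the Java API of that era; the family name
"d" is assumed from elsewhere in this thread:

import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;

public class PrefixEncodingExample {
    public static void main(String[] args) {
        // DATA_BLOCK_ENCODING is set per column family, independently
        // of the COMPRESSION attribute.
        HColumnDescriptor family = new HColumnDescriptor("d");
        family.setDataBlockEncoding(DataBlockEncoding.PREFIX);
        // Apply with HBaseAdmin.modifyColumn(...) plus a table
        // disable/enable, or set it at table-creation time.
        System.out.println(family);
    }
}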
I believe each cell stores its own copy of the entire row key, column
qualifier, and timestamp. Could that account for the increase in size?
--Tom
On Mon, Jan 27, 2014 at 3:12 PM, Nick Xie wrote:
> I'm importing a set of data into HBase. The CSV file contains 82 entries
> per lin
he OS cache and
then HFileReaderV2 benefited from it. Just a guess...
-- Tom
On Mon, Dec 23, 2013 at 12:18 PM, Jerry Lam wrote:
> Hello HBase users,
>
> I just ran a very simple performance test and would like to see if what I
> experienced make sense.
>
> The experiment is
plits the heavily used (hot) regions
and merges the empty ones would allow you to balance your regions more
appropriately across your cluster.
--Tom
On Wed, Nov 20, 2013 at 8:43 AM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:
> We use https://github.com/sematext/HBaseWD and I just
e yet, so YMMV.
--Tom
On Friday, November 15, 2013, Ted Yu wrote:
> bq. you must have your customerId, timestamp in the rowkey since you query
> on it
>
> Have you looked at this API in Scan ?
>
> public Scan setTimeRange(long minStamp, long maxStamp)
>
>
> Cheers
>
w any group to access your cluster. But that
access isn't free. To use a SQL analogy: large organizations always protect
their SQL servers with a DBA. They do this because the potential downsides
of allowing unsupervised and unstructured access are too great.
YMMV
--Tom
On Mon, Oct 14, 2013
merge updates to the same row
together (for example, 3 increments of 1 each becomes 1 increment of 3).
This means fewer overall writes to HBase, and no risk of cross-regionserver
communication deadlocks.
--Tom
On Thu, Oct 10, 2013 at 1:23 PM, Vladimir Rodionov
wrote:
> Nope. It is not so
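A minimal sketch of that merge-before-write idea; the family and qualifier
names are invented:

import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.util.Bytes;

// Collapse many +1s into one Increment per row before talking to HBase,
// so three increments of 1 become a single increment of 3.
public class IncrementMerger {
    private final Map<String, Long> pending = new HashMap<String, Long>();

    public void record(String rowKey) {
        Long current = pending.get(rowKey);
        pending.put(rowKey, current == null ? 1L : current + 1L);
    }

    public Increment drain(String rowKey) {
        Long total = pending.remove(rowKey);
        Increment inc = new Increment(Bytes.toBytes(rowKey));
        inc.addColumn(Bytes.toBytes("d"), Bytes.toBytes("count"),
                total == null ? 0L : total.longValue());
        return inc;
    }
}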
To update this thread, this was caused by a bug: HBASE-9648.
--Tom
On Sat, Sep 21, 2013 at 9:49 AM, Tom Brown wrote:
> I am still receiving thousands of these log messages for the same region
> within a very short time frame. I have read the compaction documentation,
> but have not bee
I tried the workaround, and it is working very well. The number of store
files for all regions is now sane (went from about 8000 total store files
to 1000), and scans are now much more efficient.
Thanks for all your help, Jean-Marc and Sergey!
--Tom
On Tue, Sep 24, 2013 at 2:11 PM, Jean-Marc
you
directly.
--Tom
On Tue, Sep 24, 2013 at 12:42 PM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Can you try with less parameters and see if you are able to get something
> from it? This exception is caused by the "printMeta", so if you remove -m
> it should be ok.
nter.java:234)
at
org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.run(HFilePrettyPrinter.java:189)
at org.apache.hadoop.hbase.io.hfile.HFile.main(HFile.java:756)
Does this mean the problem might have been caused by a corrupted file(s)?
--Tom
On Tue, Sep 24, 2013 at 12:21 PM, Jean-M
Same thing in pastebin: http://pastebin.com/tApr5CDX
On Tue, Sep 24, 2013 at 11:18 AM, Tom Brown wrote:
> -rw--- 1 hadoop supergroup 2194 2013-09-21 14:32
> /hbase/compound3/5ab5fdfcf2aff2633e1d6d5089c96aa2/d/014ead47a9484d67b55205be16802ff1
> -rw--- 1 hadoop s
> TTL seems to be fine.
>
> -1 is the default value for TimeRangeTracker.maximumTimestamp.
>
> Can you run:
> hadoop fs -lsr hdfs://
>
> hdpmgr001.pse.movenetworks.com:8020/hbase/compound3/5ab5fdfcf2aff2633e1d6d5089c96aa2/d/
>
> Thanks,
>
> JM
>
>
> 2013/9/24
SION => 'SNAPPY',
MIN_VERSIONS => '0',
TTL => '864',
KEEP_DELETED_CELLS => 'false',
BLOCKSIZE => '65536',
IN_MEMORY => 'false',
ENCODE_ON_DISK => 'true',
BLOCKCACHE => 'true'
}
The TTL is suppos
one it's happened to).
--Tom
On Tue, Sep 24, 2013 at 10:13 AM, Jean-Marc Spaggiari <
jean-m...@spaggiari.org> wrote:
> Can you past logs a bit before that? To see if anything triggered the
> compaction?
> Before the 1M compactions entries.
>
> Also, what is your
There is one column family, d. Each row has about 10 columns, and each
row's total data size is less than 2K.
Here is a small snippet of logs from the region server:
http://pastebin.com/S2jE4ZAx
--Tom
On Tue, Sep 24, 2013 at 9:59 AM, Bharath Vissapragada wrote:
> It would help if
I am at a total loss as to why this behavior is occurring. Any help is
appreciated.
--Tom
nose further, I have included a recent example, hot off the
server (see below)
Thanks,
Tom Brown
This particular region (c5f15027ae1d4aa1d5b6046aea6f63a4) is about 800MB,
comprised of 25 store files. Given that, I could reasonably expect up to 25
messages for the region. However, there were at leas
found the "hbase.client.retries.number" property, but that doesn't
claim to set the number of retries, rather the amount of time between
retries. Is there a different property I can use to set the maximum number
of retries? Or is this property mis-documented?
Thanks in advance!
--Tom
your workload.
Default: 35
What is the formal way to request a specific documentation change? Do I
need to sign a contributor agreement?
--Tom
On Tue, Sep 17, 2013 at 11:40 AM, Ted Yu wrote:
> Have you looked at
> http://hbase.apache.org/book.html#hbase_default_configura
No, just one column family (called "d", not surprisingly).
--Tom
On Wed, Sep 4, 2013 at 9:54 AM, Jimmy Xiang wrote:
> Here "d" should be the column family being compacted.
> Do you have 3-5 column families of the same region being compacted?
>
>
> On Wed
Is it normal to receive 3-5 distinct "Compaction Complete" statuses for the
same region each second? For any individual region, it continuously
generates "Compacting d in {theregion}... Compaction Complete" statuses for
minutes or hours.
In that status message, what is "d
to rewrite/compact any single region
(even multiple regions at once).
Any ideas why my task status logs might be filling up like that? How can I
verify that it's really compacting like it says? Will it ever finish?
Thanks in advance,
Tom Brown
is
written) but probably more random reads (though it could benefit from
caching, depending on your dataset); on the other hand, the size of the data
to compact will be smaller.
Just my $0.02...
--Tom
On Sunday, March 3, 2013, Anoop John wrote:
> Matt Corgan
> I remember, someone else als
your utility that you used to create valid/empty HFiles?
--Tom
On Sun, Dec 9, 2012 at 6:08 PM, Kevin O'dell wrote:
> Chris,
>
> Thank you for the very descriptive update.
>
> On Sun, Dec 9, 2012 at 6:29 PM, Chris Waterson wrote:
>
>> Well, I upgraded to 0.92.2,
essor.Batch.Call,%20org.apache.hadoop.hbase.client.coprocessor.Batch.Callback%29>
Regards
tom
On 24.11.2012, 18:32, Marcos Ortiz wrote:
Regards, Dalia.
You have to use MapReduce for that.
In the HBase in Practice book, there are lots of great examples for
this.
On 11/24/2012 12:15 PM, Dalia Sobhy wrote:
Dear all,
I wanted to ask a
object and weak references to attempt to detect when
it has been leaked-- but that mechanism does not appear to be working
in this case.
--Tom
On Fri, Sep 21, 2012 at 11:45 PM, Stack wrote:
> On Fri, Sep 21, 2012 at 9:02 AM, Tom Brown wrote:
>> Hi all,
>>
>> I was having som
body tell me the right way to clean up
these items? Is there a right way?
Thanks in advance!
--Tom
for a particular sensor remain in
order lexicographically as well as temporally)
Regards,
--Tom
On Wednesday, September 19, 2012, Rita wrote:
> Yet another time series questions.
>
> I have an issue where my row key will be the same but I will have multiple
> versions of the data. I dont n
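For reference, a sketch of the kind of key construction being described above:
a fixed-width sensor id followed by a big-endian timestamp, so rows group by
sensor and sort by time within each sensor (names are illustrative):

import org.apache.hadoop.hbase.util.Bytes;

public final class SensorRowKey {
    // Fixed-width sensor id + big-endian epoch millis: all rows for a
    // sensor are contiguous, and sorted temporally within that sensor.
    public static byte[] build(int sensorId, long timestampMillis) {
        return Bytes.add(Bytes.toBytes(sensorId),
                Bytes.toBytes(timestampMillis));
    }
}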
b and
at worst it will take more space because the lookup keys will be longer than
the actual value being looked up.
The added complexity of a lookup table would not make that savings worth it
to me, but you know your data best.
Just my $0.02
--Tom
On Sunday, September 16, 2012, Rita wrote:
>
ove is my best guess; Let me know if something about that
explanation doesn't smell right)
--Tom
On Wed, Sep 12, 2012 at 4:08 PM, n keywal wrote:
> For each file, there is a time range. When you scan/search, the file is
> skipped if there is no overlap between the file timerange and t
mpted,
but fail with exceptions (ClosedChannelException).
Eventually the exceptions are being thrown from "openScanner", which
really doesn't sound good to me.
--Tom
On Mon, Sep 10, 2012 at 11:32 AM, Tom Brown wrote:
> Hi,
>
> We have our system setup such that all interact
ains code to perform aggregations.
I'm interested in improving the design, so any suggestions will be appreciated.
Thanks in advance,
--Tom
On Mon, Sep 10, 2012 at 12:45 PM, Michael Segel
wrote:
>
> On Sep 10, 2012, at 12:32 PM, Tom Brown wrote:
>
>> We have our system setup such th
0.92.1, but will be upgrading to 0.94.1 soon.
Thanks in advance!
--Tom
eally does have an equal probability
of being anywhere in the range, you would get no benefit from hashing.
--Tom
On Tue, Sep 4, 2012 at 11:37 PM, Eric Czech wrote:
> Here's what I don't get -- how is this different than if I allocated a
> different table for each separate value
t work for
everyone, but it does allow us to do some limited sorting.
--Tom
On Thursday, August 30, 2012, Stack wrote:
> On Tue, Aug 28, 2012 at 4:11 PM, Pamecha, Abhishek
> >
> wrote:
> > Hi
> >
> > I probably know the usual answer but are there any tricks to do some
>
n 5.8.1 of http://hbase.apache.org/book.html).
Have I misunderstood something? Can I rely on behavior that is
specified in the guide?
Thanks again!
--Tom
On Sun, Aug 26, 2012 at 6:43 AM, Eric Czech wrote:
> Thanks for the info lars!
>
> In the potential use case I have for writing at
I thought that when multiple values with the same key, family, qualifier and
timestamp were written, the one that was written latest (as determined by
position in the store) would be read. Is that not the case?
--Tom
On Saturday, August 25, 2012, lars hofhansl wrote:
> The prefix encoding appl
is strategy? Perhaps you've
encountered the same issue. How did you solve it?
Thanks in advance!
--Tom
ct "reseek" to work, as long as I'm seeking forward? Is
the way I'm using it compatible with how it should work?
--Tom
On Fri, Aug 3, 2012 at 3:05 PM, lars hofhansl wrote:
> We recently added a new API for that:
> RegionScanner.reseek(...). See HBASE-5520. 0.94+ only, u
ords)?
I am using HBase 0.92. Upgrading to 0.94 is possible if it gives this
functionality.
--Tom
eaded environment, HBase is unlikely to
be the bottleneck. If you're sending each scan to multiple processors,
this could be a significant speedup.
--Tom
On Mon, Jul 30, 2012 at 11:34 PM, Bertrand Dechoux wrote:
> Hi,
>
> Are you talking about as coprocessor or MapReduce input? If it is the first
&
retty easy to set the timestamp of a row when you update
it; try it and see if it's what you want.
--Tom
On Thu, Jul 26, 2012 at 3:40 PM, Jerry Lam wrote:
> Hi St.Ack:
>
> Let say there are 5 versions for a column A with timestamp = [0, 1, 3, 6,
> 10].
> I want to execute an ef
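For archive readers, a short sketch of writing a cell at an explicit timestamp
with the 0.92-era client API; the row and column names are invented:

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class ExplicitTimestampPut {
    public static Put buildPut() {
        // Write the cell at timestamp 6 instead of the server's current
        // time; reads with a time range or max-versions setting then
        // see this version accordingly.
        Put put = new Put(Bytes.toBytes("row1"));
        put.add(Bytes.toBytes("A"), Bytes.toBytes("q"), 6L,
                Bytes.toBytes("value-at-6"));
        return put;
    }
}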
hanks!
-Tom
On Thursday, June 21, 2012, Michael Segel wrote:
> Assuming that you have an Apache release (Apache, HW, Cloudera) ...
> (If MapR, replace the drive and you should be able to repair the cluster
> from the console. Node doesn't go down. )
> Node goes down.
> 10 min
and
"f" (U+0066), but the code points to not allow this.
Thanks anyway!
--Tom
On Fri, Jun 8, 2012 at 11:14 AM, Stack wrote:
> On Fri, Jun 8, 2012 at 9:35 AM, Tom Brown wrote:
>> Is there any way to introduce a different ordering scheme from
>> the base comparable
in advance!
--Tom
an as from a get? (What if I
specify "max versions = 1"?)
I am currently using HBase 0.92.1, but nothing is production yet so I
could upgrade to 0.94 with little difficulty.
Thanks in advance!
--Tom
ure the
data size doesn't grow through the roof. Whether or not data expires
after exactly one hour is not an absolute requirement for this use
case. But I want to know why the system is not behaving as I think I
configured it to behave.
Thanks!
--Tom
On Sun, Jun 3, 2012 at 2:57 AM, Lars George
used by HBase? Is there anything
else I can check to verify the functionality of my integration?
I am using HBase 0.92 with Hadoop 1.0.2.
Thanks in advance!
--Tom
's the best I can come up with.
--Tom
On Wednesday, May 23, 2012, Kristoffer Sjögren wrote:
> Ted: Awesome. I can think of several use cases where this is useful, but I'm
> pretty stuck on 0.92 right now.
>
> I tried the null-version trick but must be doing something wrong. How do I