On Tue, Oct 16, 2012 at 7:00 AM, Jean-Marc Spaggiari
wrote:
> Hi St.Ack,
>
> Is the rolling upgrade process documented anywhere? I looked at the
> book but only found the upgrade from 0.90 to 0.92. Can you point me to
> something? If there is no documentation yet, can someone draft the
> steps here so
On Tue, Oct 16, 2012 at 8:18 AM, Amit Sela wrote:
> Has anyone tried extending PutSortReducer in order to add some traditional
> reduce logic (i.e., aggregating counters)?
>
> I want to process data with a Hadoop MapReduce job (aggregate counters per
> key - traditional Hadoop MR) but I want to bul
On Tue, Oct 16, 2012 at 12:29 PM, Wei Tan wrote:
> Hi,
>
> I am monitoring the readRequestsCount shown in the "Requests" column in
> the web GUI of a server/region. I observe that, while a put corresponds to
> ONE write request, a get corresponds to 2 readRequestsCount. Is that true
> and is ther
On Tue, Oct 16, 2012 at 9:06 PM, Ramkrishna.S.Vasudevan
wrote:
> Hi Yun Peng
>
> You want to know the creation time? I could see the getModificationTime()
> api. Internally it is used to get a store file with minimum timestamp.
> I have not tried it out. Let me know if it solves your purpose.
>
I tried out a sample test class and it works properly. Just one doubt: are
you doing the htd.addCoprocessor() step before creating the table? Try it
that way; it should work.
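For reference, a minimal, untested sketch of that order of operations (the
table, family, and observer class names are just examples):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;

    public class CreateWithObserver {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTableDescriptor htd = new HTableDescriptor("mytable");
        htd.addFamily(new HColumnDescriptor("cf"));
        // Attach the observer BEFORE the table is created.
        htd.addCoprocessor("com.example.MyRegionObserver");
        new HBaseAdmin(conf).createTable(htd);
      }
    }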
Regards
Ram
> -Original Message-
> From: anil gupta [mailto:anilgupt...@gmail.com]
> Sent: Wedne
Just adding on to Eugeny's reply:
Do you have the HBase code? There are quite a few test cases that add
coprocessors. Maybe going through them and running them in debug mode in
Eclipse will help you too.
Regards
Ram
> -Original Message-
> From: anil gupta [mailto:anilgupt...@gmail.com]
> Sent: Wed
Hi Yun Peng
You want to know the creation time? I can see the getModificationTime()
API; internally it is used to get the store file with the minimum timestamp.
I have not tried it out myself, so just try it and let me know if it serves
your purpose.
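If it helps, here is a rough, untested sketch of reading the new file's HDFS
modification time inside postCompact() — since an HFile is immutable, that is
effectively its creation time. The hook signature follows the 0.92/0.94-era
RegionObserver, and the class name is made up:

    import java.io.IOException;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
    import org.apache.hadoop.hbase.coprocessor.ObserverContext;
    import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
    import org.apache.hadoop.hbase.regionserver.Store;
    import org.apache.hadoop.hbase.regionserver.StoreFile;

    public class HFileAgeObserver extends BaseRegionObserver {
      @Override
      public void postCompact(ObserverContext<RegionCoprocessorEnvironment> e,
                              Store store, StoreFile resultFile) {
        try {
          FileSystem fs = FileSystem.get(e.getEnvironment().getConfiguration());
          // The immutable HFile's modification time == its creation time.
          long createdMs = fs.getFileStatus(resultFile.getPath()).getModificationTime();
        } catch (IOException ioe) {
          // log and move on
        }
      }
    }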
Regards
Ram
> -Original Message-
> From: yun peng
Thanks for the write-up, Kevin. Many users will benefit from this.
Regards
Ram
> -Original Message-
> From: Kevin O'dell [mailto:kevin.od...@cloudera.com]
> Sent: Tuesday, October 16, 2012 9:42 PM
> To: user@hbase.apache.org
> Subject: Re: hbase can't drop a table
>
> If you get in th
I forgot to mention that I am using HBase 0.92.1
On Tue, Oct 16, 2012 at 5:35 PM, anil gupta wrote:
> Hi All,
>
> I would like to add a RegionObserver to an HBase table through the HBase API. I
> don't want to put this RegionObserver as a user or system coprocessor in
> hbase-site.xml since this is s
Hi,
I'm new to HBase and also to coprocessors... just wondering how to start the
HBase server in debug mode (I'm using single-node mode) so that I can attach
a remote debugger from Eclipse?
Thanks,
YuLing
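(In case it helps: the usual approach is to pass the standard JDWP flags to
the JVM via conf/hbase-env.sh. In standalone mode everything runs in the
master JVM, so the master opts cover it. The port below is arbitrary; a
hedged example, then attach a "Remote Java Application" debug configuration
in Eclipse to that port.)

    # conf/hbase-env.sh -- hedged example; port 8070 is arbitrary.
    # suspend=n lets HBase start without waiting for the debugger to attach.
    export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug \
      -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070"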
I reopened HBASE-6577
- Original Message -
From: lars hofhansl
To: "user@hbase.apache.org" ; lars hofhansl
Cc:
Sent: Tuesday, October 16, 2012 2:39 PM
Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks
Looks like this is exactly the scenario I was trying to optimize with
Hopefully this is a fun question. :)
Assume you could architect an HBase table from scratch and you were
choosing between the following two key structures.
1)
The first structure creates a unique row key for each PUT. The rows are
events related to a user ID. There may be up to several hundre
Looks like this is exactly the scenario I was trying to optimize with
HBASE-6577. Hmm...
From: lars hofhansl
To: "user@hbase.apache.org"
Sent: Tuesday, October 16, 2012 12:21 AM
Subject: Re: Slow scanning for PrefixFilter on EncodedBlocks
PrefixFilter does not
Hi Eugeny,
Thanks for another nice suggestion.
Thanks,
Anil
On Tue, Oct 16, 2012 at 1:17 PM, Eugeny Morozov
wrote:
> Anil,
>
> you could also get some benefit from using HBaseTestingUtility. It is
> able to run an HBase cluster in standalone mode, all in one JVM. Of course it
> requires you to have
On Tue, Oct 16, 2012 at 8:40 AM, Kevin Lyda wrote:
>
> Is it a good idea to upgrade to those? Or is the security layer still in flux?
You should really use them if you want to deploy security; that's the
only reason we offer those jars.
J-D
Hi, All
Given that an ``hfile`` in ``hbase`` is immutable, I want to know the timestamp
of when the ``hfile`` was generated. Does ``hbase`` have an API that allows
user applications to learn this? I need to know it in the postCompact() stage.
As a first attempt, I have tried using
``StoreFile.getBulkLoadTimestamp()``
Anil,
you could also get some benefit from using HBaseTestingUtility. It is
able to run an HBase cluster in standalone mode, all in one JVM. Of course it
requires you to have some code to create tables, assign the coprocessor to the
table, and populate it with data. And then run client code against it.
All of
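A minimal sketch of that flow (0.92/0.94-era API; the table, family, and
observer names are illustrative):

    import org.apache.hadoop.hbase.*;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class MiniClusterDemo {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        util.startMiniCluster();                 // whole cluster in one JVM
        HTableDescriptor htd = new HTableDescriptor("t");
        htd.addFamily(new HColumnDescriptor("cf"));
        htd.addCoprocessor("com.example.MyRegionObserver");
        util.getHBaseAdmin().createTable(htd);
        HTable table = new HTable(util.getConfiguration(), "t");
        table.put(new Put(Bytes.toBytes("r1"))
            .add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v")));
        // ... run client code against the mini cluster here ...
        util.shutdownMiniCluster();
      }
    }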
Hi,
I am monitoring the readRequestsCount shown in the "Requests" column in
the web GUI of a server/region. I observe that, while a put corresponds to
ONE write request, a get corresponds to 2 readRequestsCount. Is that true
and is there a reason for that? I got the same number in a table with
Hi,
You have set MUST_PASS_ALL.
> "FilterList list = new FilterList(FilterList.Operator.MUST_PASS_ALL);"
which means both (filter1 && filter2) must be true on any single row, and that
will never happen (both conditions true at once); that's why it is not fetching
any results when you use them together.
As No
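(For the archive, an untested sketch: if the intent was "filter1 OR filter2",
MUST_PASS_ONE gives OR semantics. The "cf:status" column and the two values
here are hypothetical stand-ins for mutually exclusive conditions.)

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.*;
    import org.apache.hadoop.hbase.util.Bytes;

    SingleColumnValueFilter f1 = new SingleColumnValueFilter(
        Bytes.toBytes("cf"), Bytes.toBytes("status"),
        CompareFilter.CompareOp.EQUAL, Bytes.toBytes("A"));
    SingleColumnValueFilter f2 = new SingleColumnValueFilter(
        Bytes.toBytes("cf"), Bytes.toBytes("status"),
        CompareFilter.CompareOp.EQUAL, Bytes.toBytes("B"));
    // MUST_PASS_ONE == logical OR; MUST_PASS_ALL == logical AND.
    FilterList list = new FilterList(FilterList.Operator.MUST_PASS_ONE);
    list.addFilter(f1);
    list.addFilter(f2);
    Scan scan = new Scan();
    scan.setFilter(list);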
Hi Kevin,
Thanks for answering. What are your thoughts on copyTable vs. export/import
considering my use case? Will one tool have a lesser chance of copying
inconsistent data than the other?
I wish to do an incremental copy of a live cluster to minimize downtime.
On Tue, Oct 16, 2012 at 8:47 AM, Kevin O'd
Hi Ram,
Thanks for your reply. I'll be trying your suggestions soon with my
local standalone installation of HBase and will update this thread.
Thanks,
Anil Gupta
On Mon, Oct 15, 2012 at 12:03 AM, Ramkrishna.S.Vasudevan <
ramkrishna.vasude...@huawei.com> wrote:
> Hi Anil
>
> We also do a lot of
I'd guess on a short-term basis you'd just want to hook up to JMX directly
from within your coprocessor code -- coprocessors can do anything with any
Java API -- but I agree it would be interesting to know more about the kind
of metrics you are looking to expose. Just echoing what Gary said, we hav
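(A rough sketch of the "plain JMX from coprocessor code" idea, using only the
standard javax.management API. Every name below is made up, and the checked
JMX exceptions are elided for brevity.)

    import java.lang.management.ManagementFactory;
    import java.util.concurrent.atomic.AtomicLong;
    import javax.management.ObjectName;

    // Standard MBean naming: implementation class = interface name minus "MBean".
    public interface ObserverStatsMBean {
      long getPutsSeen();
    }

    public class ObserverStats implements ObserverStatsMBean {
      private final AtomicLong putsSeen = new AtomicLong();
      public void increment() { putsSeen.incrementAndGet(); }
      public long getPutsSeen() { return putsSeen.get(); }
    }

    // e.g. from the coprocessor's start() hook:
    ObserverStats stats = new ObserverStats();
    ManagementFactory.getPlatformMBeanServer().registerMBean(
        stats, new ObjectName("coprocessor:type=ObserverStats"));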
If you get in that situation again:
1.) Verify that you don't have any remnants of the tables in HDFS:
hadoop fs -ls /hbase/
2.) If you do have any remnants and you don't care about these tables:
hadoop fs -mv /hbase/ /tmp
3.) Run: ./bin/hbase hbck -fixMeta -fixAssignments
This should clean
Shrijeet,
I think a better approach would be to pre-split the table and then do the
export/import. This will save you from having to script the merges, which
can end badly for META if done wrong.
On Mon, Oct 15, 2012 at 5:31 PM, Shrijeet Paliwal
wrote:
> We moved to 0.92.2 some time ago and with
On Thu, Oct 11, 2012 at 6:03 PM, Jean-Daniel Cryans wrote:
> On Tue, Oct 9, 2012 at 11:51 AM, Kevin Lyda wrote:
>> In reading the docs I learned that hbck in 0.92.2 has some additional
>> -fix* options, and -fixAssignments and -fixMeta seem like they might
>> fix this. I also got the impression t
I think it is time for an upgrade...
Cheers.
On Tue, Oct 16, 2012 at 3:54 PM, Stack wrote:
> On Tue, Oct 16, 2012 at 2:52 AM, Amit Sela wrote:
> > I can't find too much about append support on the web. My full version
> is:
> > hadoop 0.20.3-r1057313
> >
> > I did however check hbase-site.xml
Hi all,
Has anyone tried extending PutSortReducer in order to add some traditional
reduce logic (i.e., aggregating counters)?
I want to process data with a Hadoop MapReduce job (aggregate counters per
key - traditional Hadoop MR) but I want to bulk load the reduce output to
HBase.
As I understand
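(One pattern that should work — an untested sketch, names illustrative: do the
aggregation in your own reducer and emit KeyValues, the same output type the
stock sort reducers feed to HFileOutputFormat. With a single cell per row key,
per-row sort order is trivially satisfied.)

    import java.io.IOException;
    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.mapreduce.Reducer;

    public class AggregatingKVReducer extends
        Reducer<ImmutableBytesWritable, LongWritable, ImmutableBytesWritable, KeyValue> {
      @Override
      protected void reduce(ImmutableBytesWritable key, Iterable<LongWritable> values,
                            Context ctx) throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable v : values) sum += v.get();  // traditional aggregation
        KeyValue kv = new KeyValue(key.get(), Bytes.toBytes("cf"),
            Bytes.toBytes("count"), Bytes.toBytes(sum));
        ctx.write(key, kv);                            // feeds HFileOutputFormat
      }
    }

If I remember right, HFileOutputFormat.configureIncrementalLoad() picks
PutSortReducer or KeyValueSortReducer from the map output value class, so
you would set your own reducer class after calling it.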
Hi St.Ack,
Is the rolling upgrade process documented anywhere? I looked at the
book but only found the upgrade from 0.90 to 0.92. Can you point me to
something? If there is no documentation yet, can someone draft the
steps here so I can propose an update to the online book?
Thanks,
JM
2012/10/15, E
On Tue, Oct 16, 2012 at 2:52 AM, Amit Sela wrote:
> I can't find too much about append support on the web. My full version is:
> hadoop 0.20.3-r1057313
>
> I did however check hbase-site.xml and hdfs-site.xml for
> dfs.support.append property
> and couldn't find it.
>
Can you upgrade?
St.Ack
I can't find too much about append support on the web. My full version is:
hadoop 0.20.3-r1057313
I did however check hbase-site.xml and hdfs-site.xml for
dfs.support.append property
and couldn't find it.
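(For reference, the property looks like this in hdfs-site.xml; it is only
meaningful on an append-capable build such as branch-0.20-append, and its
absence just means the build's default is in effect. A hedged example:)

    <property>
      <name>dfs.support.append</name>
      <value>true</value>
    </property>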
On Mon, Oct 15, 2012 at 8:47 PM, Stack wrote:
> On Mon, Oct 15, 2012 at 9:52 AM, Amit Sel
After checking the .META. table, the ivytest_deu and deu_ivytest entries do
exist.
ROW COLUMN+CELL
ivyte
What does the 'list' command show? Does it say the table exists or not?
What I can infer here is that the HTableDescriptor file got deleted but
META still has the entry. Any chance of the HTD getting accidentally deleted
in your cluster?
The hbck tool with -fixOrphanTables should at least try
Both ivytest_deu and deu_ivytest have the same problem.
On Tue, Oct 16, 2012 at 3:58 PM, Ramkrishna.S.Vasudevan <
ramkrishna.vasude...@huawei.com> wrote:
> Which version of HBase?
>
>
> The logs that you have attached talk about a different table, right? '
> deu_ivytest,,1348826121781.985d6ca
Hope this can help you!
https://issues.apache.org/jira/browse/HBASE-3432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13418790#comment-13418790
Fowler Zhang
-Original Message-
From: 唐 颖 [mailto:ivytang0...@gmail.com]
Sent: October 16, 2012 16:08
To:
version 0.94.0, r8547
And the table is ivytest_deu.
On 2012-10-16, at 3:58 PM, "Ramkrishna.S.Vasudevan"
wrote:
> Which version of HBase?
>
>
> The logs that you have attached talk about a different table, right? '
> deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.'
> And the one you a
Which version of HBase?
The logs that you have attached talk about a different table, right? '
deu_ivytest,,1348826121781.985d6ca9986d7d8cfaf82daf523fcd45.'
And the one you are trying to drop is 'ivytest_deu'
Regards
Ram
> -Original Message-
> From: 唐 颖 [mailto:ivytang0...@gmail.
I disabled this table ivytest_deu and dropped it. This error occurs:
ERROR: java.io.IOException: java.io.IOException: HTableDescriptor missing for
ivytest_deu
at
org.apache.hadoop.hbase.master.handler.TableEventHandler.getTableDescriptor(TableEventHandler.java:174)
at
org.apache.hadoop.hb
PrefixFilter does not do any seeking by itself, so I doubt this is related to
HBASE-6757.
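(A common mitigation, sketched here under the assumption that the prefix is
known up front: pair the filter with a start row so the scan does not begin
at the first region. Untested; table name and prefix are illustrative.)

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.PrefixFilter;
    import org.apache.hadoop.hbase.util.Bytes;

    Configuration conf = HBaseConfiguration.create();
    byte[] prefix = Bytes.toBytes("user123|");
    Scan scan = new Scan();
    scan.setStartRow(prefix);                 // begin at the prefix, not the table start
    scan.setFilter(new PrefixFilter(prefix)); // stop once rows pass the prefix
    ResultScanner rs = new HTable(conf, "events").getScanner(scan);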
Does this only happen with FAST_DIFF compression?
If you can create an isolated test program (that sets up the scenario and then
runs a scan with the filter such that it is very slow), I'm happy to take a