> would be compatible with the ColumnPrefixFilters and BinaryComparators.
>
> However, due to the lexicographical sorting, it's awkward to serialize the
> sequence of values needed to get it to work.
>
> What are the typical solutions to this? Do people just zero-pad integers
> to make sure they sort correctly? Or do I have to implement my own
> QualifierFilter, which seems expensive since I'd be deserializing every
> byte array just to compare.
>
> Thanks
>
> - Nasron
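The zero-padding approach asked about above can be sketched in plain Java (the `pad` helper is illustrative, not an HBase API):

```java
import java.util.Arrays;

public class PaddedKeys {
    // Illustrative helper (not HBase API): encode a non-negative number as a
    // fixed-width, zero-padded decimal string so that lexicographic byte
    // order matches numeric order.
    static String pad(long n) {
        return String.format("%010d", n);
    }

    public static void main(String[] args) {
        String[] raw = {"9", "10", "100"};
        Arrays.sort(raw); // lexicographic: "10" < "100" < "9"
        System.out.println(Arrays.toString(raw));

        String[] padded = {pad(9), pad(10), pad(100)};
        Arrays.sort(padded); // numeric order is preserved
        System.out.println(Arrays.toString(padded));
    }
}
```

For binary qualifiers, a fixed-width big-endian encoding (e.g. HBase's `Bytes.toBytes(int)`) sorts the same way for non-negative values, without deserializing anything at compare time.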
--
Regards,
Premal Shah.
How does the number/size of input blocks local to the job affect performance?
>
> What is actually happening in the reduce phase that requires so much CPU?
> I assume the actual construction of HFiles isn't intensive.
>
> Ultimately, how can I improve performance?
> Thanks
>
lt:timestamp and rows go like this:
> ...
> 1:15
> 1:16
> 1:17
> 1:23
> 2:3
> 2:5
> 2:12
> 2:15
> 2:19
> 2:25
> ...
>
> And I want to find all rows, that has second part (timestamp) in range
> 15-25.
>
> Could you please tell me how you resolve this ?
> thanks in advance.
>
>
> Tony duan
>
>
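With fixed-width (zero-padded) timestamps in the second key component, the range check reduces to a lexicographic comparison on the suffix. A client-side sketch in plain Java (no HBase API; `inRange` is a hypothetical helper):

```java
import java.util.ArrayList;
import java.util.List;

public class SecondComponentRange {
    // Hypothetical helper: row keys are "<id>:<timestamp>" with fixed-width
    // timestamps, so a lexicographic comparison on the suffix is equivalent
    // to the numeric range check a server-side comparator would perform.
    static boolean inRange(String rowKey, String lo, String hi) {
        String ts = rowKey.substring(rowKey.indexOf(':') + 1);
        return ts.compareTo(lo) >= 0 && ts.compareTo(hi) <= 0;
    }

    public static void main(String[] args) {
        String[] rows = {"1:15", "1:16", "1:17", "1:23",
                         "2:03", "2:05", "2:12", "2:15", "2:19", "2:25"};
        List<String> hits = new ArrayList<>();
        for (String r : rows) {
            if (inRange(r, "15", "25")) hits.add(r);
        }
        System.out.println(hits); // the 15..25 rows from both prefixes
    }
}
```

Server-side, one common option is a separate Scan per id prefix with start/stop rows built from the padded bounds; note that FuzzyRowFilter matches fixed byte positions, so it cannot express a range by itself.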
Yes, hbase-site.xml is the right place.
On Fri, Sep 20, 2013 at 1:26 PM, Jason Huang wrote:
> Premal,
>
> So this should be set at /conf/hbase-site.xml?
>
> thanks,
>
> Jason
>
>
> On Fri, Sep 20, 2013 at 4:15 PM, Premal Shah wrote:
ion manually through HBase shell or write a script
> to enter shell and execute the major compaction at a certain time? Also,
> where should I set HConstants.MAJOR_COMPACTION_PERIOD = 0? In
> /conf/hbase-site.xml?
>
> thanks!
>
> Jason
>
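A sketch of the hbase-site.xml entry, assuming the goal is to disable time-based major compactions entirely (`HConstants.MAJOR_COMPACTION_PERIOD` is the constant behind the `hbase.hregion.majorcompaction` property):

```xml
<!-- hbase-site.xml: a major compaction period of 0 disables the periodic
     (time-based) major compactions; they can still be triggered manually. -->
<property>
  <name>hbase.hregion.majorcompaction</name>
  <value>0</value>
</property>
```

With the timer off, a manual trigger from the HBase shell looks like `major_compact 'table_name'`, which can be scripted on whatever schedule suits the cluster.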
--
Regards,
Premal Shah.
(2) there's
> > only one file (3) and there are no delete markers. All of these can be
> > cheaply checked with some HFile metadata (we might have all data needed
> > already).
> >
> >
> > That would take care of both of your scenarios.
> >
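The cheap metadata checks described above can be sketched as a single predicate (the parameter names are illustrative, not actual HBase internals):

```java
public class CompactionSkipCheck {
    // Sketch of the checks described above (names are assumptions, not HBase
    // internals): a major compaction can be skipped when there is a single
    // store file, no delete markers, and no cell is old enough for
    // TTL-based purging.
    static boolean canSkipMajorCompaction(int storeFileCount,
                                          boolean hasDeleteMarkers,
                                          long oldestCellTs, long ttlMs, long now) {
        boolean nothingExpired = (now - oldestCellTs) < ttlMs;
        return storeFileCount == 1 && !hasDeleteMarkers && nothingExpired;
    }

    public static void main(String[] args) {
        long now = 1_000_000L;
        System.out.println(canSkipMajorCompaction(1, false, now - 10, 1000, now)); // true
        System.out.println(canSkipMajorCompaction(2, false, now - 10, 1000, now)); // false
    }
}
```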
> > -- L
updated) to
> purge expired data (TTL).
>
> Best regards,
> Vladimir Rodionov
> Principal Platform Engineer
> Carrier IQ, www.carrieriq.com
> e-mail: vrodio...@carrieriq.com
>
> ________
> From: Premal Shah [premal.j.s...@gmail.com]
>
, all regions get major compacted. Only 1 region has more than 1
store file; every other region has exactly one.
Is there a way to avoid compaction of regions that have not changed?
We are using HBase 0.94.11
To run a single test, you should use the following command:
>
> mvn test -PrunAllTests -DfailIfNoTests=false -Dtest=xxx
>
> I ran TestColumnRangeFilter using tip of 0.94 code base and it passed.
> Did you use tip of 0.94 ?
>
> Cheers
>
> On Mon, Jul 29, 2013 at 10:32 AM, Premal Shah wrote:
No tests were executed! (Set -DfailIfNoTests=false to ignore this error.) -> [Help 1]
What am I doing wrong here?
On Thu, Jul 25, 2013 at 1:54 PM, Premal Shah wrote:
> Hi Ted,
> I'm using 0.94.6.
>
> I'll set up a unit test.
>
>
> On Thu, Jul 25, 2013 at 1:50 AM, Ted Yu wrote:
On Thu, Jul 25, 2013 at 1:50 AM, Ted Yu wrote:
> What HBase release are you using ?
>
> Can you put the scenario below in a unit test ?
>
> Thanks
>
> On Jul 24, 2013, at 11:13 PM, Premal Shah wrote:
>
>
match Fuzzy
If MUST_PASS_ONE is set, then it returns the columns from the rows that
don't pass Fuzzy.
How do you go about using the FilterList with both filters and return the
required rows only?
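To get an AND of the two filters, the operator should be MUST_PASS_ALL rather than MUST_PASS_ONE; with the real API that is roughly `new FilterList(FilterList.Operator.MUST_PASS_ALL, fuzzyFilter, otherFilter)`. A plain-Java sketch of the two operators' semantics (predicates stand in for the HBase filters):

```java
import java.util.List;
import java.util.function.Predicate;

public class FilterListSemantics {
    // MUST_PASS_ALL is a logical AND over the wrapped filters,
    // MUST_PASS_ONE is a logical OR; plain predicates stand in for filters.
    static boolean mustPassAll(List<Predicate<String>> filters, String row) {
        return filters.stream().allMatch(f -> f.test(row));
    }

    static boolean mustPassOne(List<Predicate<String>> filters, String row) {
        return filters.stream().anyMatch(f -> f.test(row));
    }

    public static void main(String[] args) {
        Predicate<String> fuzzy  = r -> r.endsWith(":15");   // stand-in for FuzzyRowFilter
        Predicate<String> prefix = r -> r.startsWith("2:");  // stand-in for a prefix filter
        List<Predicate<String>> fs = List.of(fuzzy, prefix);

        System.out.println(mustPassAll(fs, "2:15")); // true: both match
        System.out.println(mustPassOne(fs, "1:15")); // true even though prefix fails
        System.out.println(mustPassAll(fs, "1:15")); // false: prefix fails
    }
}
```

This is why MUST_PASS_ONE returns rows that fail the fuzzy filter: any single passing filter is enough to let the row through.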