ts in a new ZK connection. There have certainly been bugs
> like that in the past (speaking generally, not specifically).
>
> On 6/1/20 5:59 PM, anil gupta wrote:
> > Hi Folks,
> >
> > We are running into HBase problems due to hitting the limit of ZK
> > connections.
these also created by hbase clients/apps (my guess is NO)? How can I
calculate the optimal value of maxClientCnxns for my cluster/usage?
--
Thanks & Regards,
Anil Gupta
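A rough way to size maxClientCnxns is to count the client processes per host that talk to ZooKeeper and leave headroom for leaks and reconnect storms. A minimal sketch in Python; the function and all the numbers below are purely illustrative assumptions, not values from this thread:

```python
# Hypothetical back-of-envelope estimate for ZooKeeper's maxClientCnxns.
# Each client process typically holds at least one ZK connection; a safety
# factor absorbs connection leaks and reconnect storms.

def estimate_max_client_cnxns(procs_per_host, cnxns_per_proc, headroom=2.0):
    """Estimate a maxClientCnxns value for one client host."""
    return int(procs_per_host * cnxns_per_proc * headroom)

# e.g. 40 client processes on one host, ~1 ZK connection each, 2x headroom
print(estimate_max_client_cnxns(40, 1))  # -> 80
```

The real answer depends on what else (region servers, masters, apps) connects from the same host, so measure with `echo cons | nc zkhost 2181` before settling on a value.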
Dropping the source table should not have any impact on cloned tables and snapshots.
Sent from my iPhone
> On Nov 28, 2018, at 5:23 PM, William Shen wrote:
>
> Hi,
>
> I understand that changes made to the tables cloned using snapshot will not
> affect the snapshot nor the source data table the
a major compaction on t2, will I
>> see the decrease in table size for t2? If I compare the size of t2 and t1,
>> I should see a smaller size for t2?
>>
>> Thanks.
>>
>> Antonio.
>>
>>> On Sun, Aug 26, 2018 at 3:33 PM Anil Gupta wrote:
>>>
You will need to run a major compaction on the table for it to clean up/delete
the extra versions.
Btw, 18000 max versions is an unusually high value.
Are you using hbase on s3 or hbase on hdfs?
Sent from my iPhone
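The cleanup described above can be pictured as a retention rule: on major compaction, each cell keeps only its newest MAX_VERSIONS values and the rest are dropped. A plain-Python sketch of that rule for illustration only, not HBase's actual compaction code:

```python
# Sketch of the version-retention rule a major compaction applies:
# for each cell, keep only the newest max_versions values.

def retain_versions(versions, max_versions):
    """versions: list of (timestamp, value); keep the newest max_versions."""
    newest_first = sorted(versions, key=lambda tv: tv[0], reverse=True)
    return newest_first[:max_versions]

cell = [(100, "a"), (300, "c"), (200, "b")]
print(retain_versions(cell, 2))  # -> [(300, 'c'), (200, 'b')]
```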
> On Aug 26, 2018, at 2:34 PM, Antonio Si wrote:
>
> Hello,
>
> I have a hbase
oupInformation.setLoginUser(userGroupInformation);
>
> I am getting the below errors:
>
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
> attempts=36, exceptions: Mon Jul 09 18:45:57 IST 2018, null,
> java.net.SocketTimeoutException: callTimeout=6, callDuration=64965:
> row
> '' on table 'DEMO_TABLE' at
> region=DEMO_TABLE,,1529819280641.40f0e7dc4159937619da237915be8b11.,
> hostname=dn1-devup.mstorm.com,60020,1531051433899, seqNum=526190
>
> Exception : java.io.IOException: Failed to get result within timeout,
> timeout=6ms
>
>
> --
> Regards,
> Lalit Jadhav
> Network Component Private Limited.
>
--
Thanks & Regards,
Anil Gupta
It seems you might have a write hotspot.
Are your writes evenly distributed across the cluster? Do you have more than
15-20 regions for that table?
Sent from my iPhone
> On May 22, 2018, at 9:52 PM, Kang Minwoo wrote:
>
> I think hbase flush is too slow.
> so
compaction usually takes care of it. If you want very high locality from
beginning then you can run a major compaction on new table after your
initial load.
HTH,
Anil Gupta
On Mon, Feb 19, 2018 at 11:46 PM, Marcell Ortutay <mortu...@23andme.com>
wrote:
> I have a large HBase table (~10 TB)
t 10:11 AM, Ted Yu <yuzhih...@gmail.com> wrote:
> You can clean up the oldwal directory beginning with the oldest data.
>
> Please open support case with the vendor.
>
> On Sat, Feb 10, 2018 at 10:02 AM, anil gupta <anilgupt...@gmail.com>
> wrote:
>
> > Hi Ted,
> &g
shot/.tmp/ became
> empty
> after 2018-02-07 09:10:08 ?
>
> Do you see CorruptedSnapshotException for file outside of
> /apps/hbase/data/.hbase-snapshot/.tmp/ ?
>
> Cheers
>
--
Thanks & Regards,
Anil Gupta
...@gmail.com> wrote:
> Please see the first few review comments of HBASE-16464.
>
> You can sideline the corrupt snapshots (according to master log).
>
> You can also contact the vendor for a HOTFIX.
>
> Cheers
>
> On Sat, Feb 10, 2018 at 8:13 AM, anil gupta <anilgupt..
:767)
at
org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils.readSnapshotInfo(SnapshotDescriptionUtils.java:306)
... 26 more
--
Thanks & Regards,
Anil Gupta
around but can't find anything
concrete to fix this problem. Currently, 15/60 nodes are already down in
the last 2 days.
Can someone please point out what might be causing these RegionServer
failures?
--
Thanks & Regards,
Anil Gupta
; > >> >
> > > >> > Hi All,
> > > >> >
> > > >> > I have query regarding hbase data migration from one cluster to
> > > another
> > > >> > cluster in same N/W, but with a different version of hbase one is
> > > >> 0.94.27
> > > >> > (source cluster hbase) and another is destination cluster hbase
> > > version
> > > >> is
> > > >> > 1.2.1.
> > > >> >
> > > >> > I have used below command to take backup of hbase table on source
> > > >> cluster
> > > >> > is:
> > > >> > ./hbase org.apache.hadoop.hbase.mapreduce.Export SPDBRebuild
> > > >> > /data/backupData/
> > > >> >
> > > >> > below files were generated by using the above command:
> > > >> >
> > > >> >
> > > >> > drwxr-xr-x 3 root root 4096 Dec 9 2016 _logs
> > > >> > -rw-r--r-- 1 root root 788227695 Dec 16 2016 part-m-0
> > > >> > -rw-r--r-- 1 root root 1098757026 Dec 16 2016 part-m-1
> > > >> > -rw-r--r-- 1 root root 906973626 Dec 16 2016 part-m-2
> > > >> > -rw-r--r-- 1 root root 1981769314 Dec 16 2016 part-m-3
> > > >> > -rw-r--r-- 1 root root 2099785782 Dec 16 2016 part-m-4
> > > >> > -rw-r--r-- 1 root root 4118835540 Dec 16 2016 part-m-5
> > > >> > -rw-r--r-- 1 root root 14217981341 Dec 16 2016 part-m-6
> > > >> > -rw-r--r-- 1 root root 0 Dec 16 2016 _SUCCESS
> > > >> >
> > > >> >
> > > >> > in order to restore these files I am assuming I have to move these
> > > >> files in
> > > >> > destination cluster and have to run below command
> > > >> >
> > > >> > hbase org.apache.hadoop.hbase.mapreduce.Import
> > > >> > /data/backupData/
> > > >> >
> > > >> > Please suggest if I am on correct direction, second if anyone have
> > > >> another
> > > >> > option.
> > > >> > I have tried this with test data but the above command took a very
> > > >> > long time and at the end it fails
> > > >> >
> > > >> > 17/10/23 11:54:21 INFO mapred.JobClient: map 0% reduce 0%
> > > >> > 17/10/23 12:04:24 INFO mapred.JobClient: Task Id :
> > > >> > attempt_201710131340_0355_m_02_0, Status : FAILED
> > > >> > Task attempt_201710131340_0355_m_02_0 failed to report status
> > for
> > > >> 600
> > > >> > seconds. Killing!
> > > >> >
> > > >> >
> > > >> > Thanks
> > > >> > Manjeet Singh
> > > >> >
> > > >> >
> > > >> >
> > > >> >
> > > >> >
> > > >> >
> > > >> > --
> > > >> > luv all
> > > >> >
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > > luv all
> > > >
> > >
> > >
> > >
> > > --
> > > luv all
> > >
> >
> --
>
>
> -- Enrico Olivelli
>
--
Thanks & Regards,
Anil Gupta
ig problem
> We deleted hbase data with " hdfs dfs -rmr -skipTrash /hbase",
>
> Is there any way to recover the deleted data?
>
> Thanks a lot!
>
--
Thanks & Regards,
Anil Gupta
mer.java, I see the following around line 106:
>
> checkClosed();
>
> if (off < 0 || len < 0 || off > b.length - len) {
> throw new ArrayIndexOutOfBoundsException();
>
> You didn't get ArrayIndexOutOfBoundsException - maybe b was null ?
>
> On Thu, Jul 6, 2017 a
aiting for channel to be ready for read
>
> Do you see similar line in region server log ?
>
> Cheers
>
> On Thu, Jul 6, 2017 at 1:48 PM, anil gupta <anilgupt...@gmail.com> wrote:
>
> > Hi All,
> >
> > We are running HBase/Phoenix on EMR5.2(HBase1.2.3 an
with wiping out this table and rebuilding the dataset. We
tried to drop the table and recreate it, but that didn't fix it.
Can anyone please let us know how can we get rid of above problem? Are
we running into https://issues.apache.org/jira/browse/HBASE-16960?
--
Thanks & Regards,
Anil Gupta
Cross posting since this seems to be an HBase issue.
I think completeBulkLoad step is failing. Please refer to the mail below.
-- Forwarded message --
From: anil gupta <anilgupt...@gmail.com>
Date: Thu, May 25, 2017 at 4:38 PM
Subject: [IndexTool NOT w
HBase to store PDF files. I'm using Hbase
> 1.2.3
> > > but
> > > > I get this error creating a table with a MOB column: NameError:
> > > > uninitialized constant IS_MOB.
> > > >
> > > > A lot of web sites (including the official Apache web site) talk about
> > > > the patch 11339 or HBase 2.0.0, but I don't find any explanation about
> > > > the way to install this patch and
> > > >
> > > > I can't find the 2.0.0 version anywhere. So I'm completely lost. Could
> > > > you help me please?
> > > >
> > > >
> > >
> >
>
--
Thanks & Regards,
Anil Gupta
t;>> after some minutes.
>>>
>>> In conjunction with other programs running on that machine, this
>>> sometimes
>>> leads to an "overload" situation.
>>>
>>> Is there a way to keep thread pool usage limited - or in some closer
>>> relation with the actual concurrency required?
>>>
>>> Thanks,
>>>
>>> Henning
>>>
>>>
>>>
>>>
>
--
Thanks & Regards,
Anil Gupta
On 24 Oct 2016 06:26, "anil gupta" <anilgupt...@gmail.com> wrote:
>
> > Writes/Updates usually take a few milliseconds in HBase. So, in normal cases
> > a lock won't be held for seconds.
>
> On Sun, Oct 23, 2016 at 12:57 PM, Manjeet Singh <
> manjeet.chand...@gmail
I have a very simple requirement: to update if I find an xyz
> record. I have a few ETL processes which are responsible for aggregating the
> data, which is very common. ... Why does my hbase get stuck if I try to update
> the same rowkey? It means it holds the lock for a few seconds
>
> On 24
t;
> >> > > > <javascript:;>>
> >> > > > wrote:
> >> > > >
> >> > > > > Hi All
> >> > > > >
> >> > > > > Can anyone help me about how and in which version of Hbase
> support
> >> > > Rowkey
> >> > > > > lock ?
> >> > > > > I have seen an article about rowkey locks, but it was about the
> >> > > > > .94 version. It said that if a row key does not exist and an
> >> > > > > update request comes in for that rowkey, then in this case HBase
> >> > > > > holds the lock for 60 sec.
> >> > > > >
> >> > > > > currently I am using Hbase 1.2.2 version
> >> > > > >
> >> > > > > Thanks
> >> > > > > Manjeet
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > > --
> >> > > > > luv all
> >> > > > >
> >> > > >
> >> > > >
> >> > > >
> >> > > > --
> >> > > > luv all
> >> > > >
> >> > >
> >> > >
> >> > > --
> >> > > -Dima
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > luv all
> >> >
> >>
> >>
> >> --
> >> -Dima
> >>
> >
> >
> >
> > --
> > luv all
> >
>
>
>
> --
> luv all
>
--
Thanks & Regards,
Anil Gupta
Hi Frank,
I don't know your exact use case. But, I have successfully run copyTable
across *2 secure* clusters back in 2013-2014 on a CDH distro cluster.
Unfortunately, I don't remember the settings or command that we ran to do
that since it was at my previous job.
Thanks,
Anil Gupta
On Fri, Sep 9
so someone looking to "roll their own" based on an Apache
> release is in for some long nights.
>
> On Sunday, August 21, 2016, anil gupta <anilgupt...@gmail.com> wrote:
>
> > Hi Dima,
> >
> > I was under impression that some CDH5.x GA release shipped MOB.
t;javascript:;>');>>
> > wrote:
> > >
> > > > Hi,
> > > >
> > > > I want to use MOB in Hbase 1.2.2, can anyone advise the step to
> > backport
> > > > MOB to HBase 1.2.2?
> > > >
> > > > Regards
> > > >
> > >
> > >
> > > --
> > > -Dima
> > >
> >
>
>
> --
> -Dima
>
--
Thanks & Regards,
Anil Gupta
on top of
HBase. It is ANSI SQL compliant.
Currently Phoenix is officially supported by HDP and it is also present in
cloudera labs.
HTH,
Anil Gupta
On Fri, Jul 8, 2016 at 5:18 AM, Dima Spivak <dspi...@cloudera.com> wrote:
> Hey Mahesha,
>
> It might be worthwhile to read through t
o gather all columns for a given analysis type, etc. but it
> > would perhaps result in larger column names across billions of rows.
> >
> > e.g. *analysisfoo_4_column1*
> >
> > In practice, is this done and can it perform well? Or is it better to
> pick
> > a fixed width and use some number in its place, that's then translated
> via,
> > say, another table?
> >
> > e.g. *10_1000_10* (or something to that effect -- fixed width
> > numbers that are stand-in ids for potentially longer descriptions).
> >
> > Thanks,
> > - Ken
> >
>
--
Thanks & Regards,
Anil Gupta
Cool, Thanks. Let me send the talk proposal to higher management.
On Wed, Apr 27, 2016 at 8:16 AM, James Taylor <jamestay...@apache.org>
wrote:
> Yes, that sounds great - please let me know when I can add you to the
> agenda.
>
> James
>
> On Tuesday, April 26, 20
Hi James,
I spoke to my manager and he is fine with the idea of giving the talk. Now, he
is going to ask higher management for final approval. I am assuming there is
still a slot for my talk in the use case section. I should go ahead with my
approval process. Correct?
Thanks,
Anil Gupta
Sent from my
--
Thanks & Regards,
Anil Gupta
H 5.5 but I cannot import the
> > library from maven. Has anyone been able to successfully resolve this
> > issue.
> >
> >
> >
> > --
> > Talat UYARER
> > Websitesi: http://talat.uyarer.com
> > Twitter: http://twitter.com/talatuyarer
> > Linkedin: http://tr.linkedin.com/pub/talat-uyarer/10/142/304
> >
>
--
Thanks & Regards,
Anil Gupta
ist can be found here:
>
> http://mail-archives.apache.org/mod_mbox/phoenix-user/
>
> On Tue, Mar 8, 2016 at 4:54 PM, anil gupta <anilgupt...@gmail.com> wrote:
>
> > Hi,
> >
> > One of our ruby apps might be using this ruby gem(
> > https://rubygems.org/gem
Oh my bad. I'm on the wrong mailing list. Didn't notice my mistake. Thanks for
the reminder, Stack.
On Tue, Mar 8, 2016 at 5:10 PM, Stack <st...@duboce.net> wrote:
> On Tue, Mar 8, 2016 at 4:57 PM, anil gupta <anilgupt...@gmail.com> wrote:
>
> > Yeah, i have looked at t
Phoenix out of the box.
On Sat, Mar 5, 2016 at 12:04 PM, Rohit Jain <rohit.j...@esgyn.com> wrote:
> You probably already looked at dbVisualizer
>
> Rohit
>
> On Mar 5, 2016, at 1:25 PM, anil gupta <anilgupt...@gmail.com> wrote:
>
> Hi,
>
> I have been using S
to Phoenix4.4
with a ruby gem of Phoenix4.2? If not, then what would we need to
do? (Upgrade the ruby gem to Phoenix4.4?)
Here is the git: https://github.com/wxianfeng/ruby-phoenix
--
Thanks & Regards,
Anil Gupta
. Has anyone been successful?
I would like to know what other database browser tools people are using to
connect.
--
Thanks & Regards,
Anil Gupta
PS: I would prefer to use Database browser tools to query a database that
itself has Apache License. :)
Also came across this: https://issues.apache.org/jira/browse/HBASE-6790
HBASE-6790 is also unresolved.
On Sun, Feb 28, 2016 at 10:26 PM, anil gupta <anilgupt...@gmail.com> wrote:
> Hi,
>
> A non java app would like to use AggregateImplementation(
> https://hbase.apache.org/dev
, can you also
tell me how to make calls.
I came across this: https://issues.apache.org/jira/browse/HBASE-5600 . But,
its unresolved.
--
Thanks & Regards,
Anil Gupta
If it's possible to make the timestamp a suffix of your rowkey (assuming the
rowkey is composite), then you will not run into read/write hotspots.
Have a look at the OpenTSDB data model, which scales really well.
Sent from my iPhone
> On Feb 21, 2016, at 10:28 AM, Stephen Durfey
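The suffix-timestamp advice above can be sketched as a composite rowkey: a small distributing prefix, then the entity id, then the big-endian timestamp at the end, so writes spread across regions instead of hammering one. The exact layout below (1-byte MD5 salt, 8-byte timestamp) is an illustrative assumption, not a prescribed schema:

```python
# Sketch of a hotspot-avoiding composite rowkey: salt + entity id + timestamp.
import hashlib
import struct

def make_rowkey(entity_id: str, timestamp_ms: int) -> bytes:
    salt = hashlib.md5(entity_id.encode()).digest()[:1]  # 1-byte spread
    # Big-endian timestamp so later writes sort after earlier ones.
    return salt + entity_id.encode() + struct.pack(">q", timestamp_ms)

k1 = make_rowkey("sensor-42", 1_500_000_000_000)
k2 = make_rowkey("sensor-42", 1_500_000_001_000)
# Same entity stays contiguous; timestamps order correctly within it.
print(k1 < k2)  # -> True
```

With this layout a single entity's history is still a contiguous range scan, while different entities land on different parts of the keyspace.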
I don't think there are any atomic operations in HBase to support DDL across 2
tables.
But, maybe you can use hbase snapshots.
But, maybe you can use hbase snapshots.
1.Create a hbase snapshot.
2.Truncate the table.
3.Write data to the table.
4.Create a table from snapshot taken in step #1 as table_old.
Now you have two tables. One
, phoenix.upsert.batch.size is 1000. Hence, the commits were
failing with a huge batch size of 1000.
Thanks,
Anil Gupta
On Sun, Feb 14, 2016 at 8:03 PM, Heng Chen <heng.chen.1...@gmail.com> wrote:
> I am not sure whether "upsert batch size in phoenix" equals HBase Clien
My phoenix upsert batch size is 50. You mean to say that 50 is also a lot?
However, AsyncProcess is complaining about 2000 actions.
I tried with an upsert batch size of 5 also, but it didn't help.
On Sun, Feb 14, 2016 at 7:37 PM, anil gupta <anilgupt...@gmail.com> wrote:
> My phoenix ups
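What an upsert batch size controls can be illustrated with generic client-side chunking: rows are committed every `batch_size` records, which bounds how many actions are outstanding at once. A Python sketch for illustration only, not Phoenix code:

```python
# Generic client-side batching: commit every batch_size rows so that the
# number of in-flight actions per commit stays bounded.

def commit_in_batches(rows, batch_size):
    """Yield successive batches of at most batch_size rows."""
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

batches = list(commit_in_batches(list(range(2000)), 50))
print(len(batches), len(batches[0]))  # -> 40 50
```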
-14 12:34:23,593 INFO [main]
> org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
> actions to finish
>
> It means your writes are too many, please decrease the batch size of your
> puts, and balance your requests on each RS.
>
> 2016-02-15 4:53 GMT+08:00 ani
for 2000
actions to finish
I have never seen anything like this. Can anyone give me pointers about
this problem?
--
Thanks & Regards,
Anil Gupta
would
not timeout in 18ms
On Sun, Feb 14, 2016 at 12:44 PM, anil gupta <anilgupt...@gmail.com> wrote:
> Hi,
>
> We are using phoenix4.4, hbase 1.1(hdp2.3.4).
> I have a MR job that is using PhoenixOutputFormat. My job keeps on failing
> due to following error:
>
&
cess hbase using Java API will it be fast like thrift.
>>
>> Bear in mind that when you use Thrift Gateway/Thrift API you access HBase
>> RegionServer through the single gateway server,
>> when you use Java API - you access Region Server directly.
>> Java API is much mor
Hey Serega,
Have you tried using the Java API of HBase to create the table? IMO, invoking a
shell script from a Java program to create a table might not be the most
elegant way.
Have a look at
https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HBaseAdmin.html
HTH,
Anil Gupta
On Wed, Jan
The Java api should be the same or better in performance compared to the Thrift
api. With the Thrift api there is an extra hop, so most of the time the Java
api will perform better.
Sent from my iPhone
> On Jan 12, 2016, at 4:29 AM, Rajeshkumar J
> wrote:
>
> Hi,
>
I want
> hbase to return the query within one or two seconds. Help me choose which
> type of scan I have to use for this - a range scan or a rowfilter scan.
>
> Thanks
>
--
Thanks & Regards,
Anil Gupta
wrote:
> Hi Anil,
>
> I have about 10 million rows, with each row having more than 10k
> columns. I need to query this table based on row key; which will be the
> apt query process for this?
>
> Thanks
>
> On Fri, Dec 18, 2015 at 5:43 PM, anil gupta <anilgupt...@gm
this hypothesis.
So, I would like to confirm this on the mailing list. Please let me know.
--
Thanks & Regards,
Anil Gupta
Hi Ted,
So, as per the jira, answer to my question is YES.
We are running HDP2.3.0. That jira got fixed in 0.98.1. So, we should be
fine.
Thanks,
Anil Gupta
On Thu, Oct 29, 2015 at 12:27 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> Please take a look at:
> https://issues.apache.org
Update: We tried and it worked.
On Thu, Oct 29, 2015 at 1:24 PM, anil gupta <anilgupt...@gmail.com> wrote:
> Hi Ted,
>
> So, as per the jira, answer to my question is YES.
> We are running HDP2.3.0. That jira got fixed in 0.98.1. So, we should be
> fine.
>
> Thanks,
&
hub.io/hbase/index.html. Note the Github ribbon
> >> and the Google site search. I'm curious to know what you think.
> >>
> >> I also put the 0.94 docs menu as a submenu of the Documentation menu, to
> >> see how it looked.
> >>
> >> Thanks,
> >> Misty
> >>
> >
> >
>
--
Thanks & Regards,
Anil Gupta
le phone, but
>>>> that's a totally different issue to solve, not related to the Maven site
>>>> styling.
>>>>
>>>>> On Thu, Oct 29, 2015 at 4:13 AM, Stack <st...@duboce.net> wrote:
>>>>>
>>>>> It looks lovely on a nexus (smi
> To check the peer-state value you can use zk_dump command in hbase shell or
> from web UI.
>
> Did you find any errors in the RS logs for replication ?
>
> Regards,
> Ashish Singhi
>
> On Wed, Oct 14, 2015 at 5:04 AM, anil gupta <anilgupt...@gmail.com> wrote:
Hi,
As far as i know, export snapshot from 0.98 ->1.0 should work.
Maybe, you can verify this by creating a test table, putting a couple of rows
in it, exporting a snapshot of that table, and cloning the exported snapshot on
the remote cluster.
Thanks,
Anil Gupta
On Sat, Oct 17, 2015 at 12:30
find steps to accomplish my task. Can anyone provide me the steps or point
me to documentation?
--
Thanks & Regards,
Anil Gupta
directories to locate
> snapshots.
>
> Regards
> Samir
>
> On Wed, Oct 14, 2015 at 8:25 PM, anil gupta <anilgupt...@gmail.com> wrote:
>
> > I dont see the snapshot when i run "list_snapshot" on destination
> > cluster.(i checked that initially but f
t; wrote:
> Hi,
> Can you see snapshot on remote cluster? If you can see snapshot you can use
> clone snapshot command from hbase shell to create table.
> Regards
> Samir
> On Oct 14, 2015 6:38 PM, "anil gupta" <anilgupt...@gmail.com> wrote:
>
> > Hi,
ts command. Try to locate
> snapshot directories on destination cluster and move data to correct
> locations.
>
> Regards
> Samir
>
> On Wed, Oct 14, 2015 at 9:10 PM, anil gupta <anilgupt...@gmail.com> wrote:
>
> > I am using 0.98. I used that doc instructions to e
Created this: https://issues.apache.org/jira/browse/HBASE-14612
On Wed, Oct 14, 2015 at 10:18 PM, anil gupta <anilgupt...@gmail.com> wrote:
> Hi Samir,
>
> You are right. But, the HBase documentation didn't mention a strict
> requirement of the correct hbase directory. So, I have to
> start_replication
NameError: undefined local variable or method `start_replication' for
#
Is start_replication not a valid command in HBase0.98? If it's deprecated,
then what is the alternate command?
--
Thanks & Regards,
Anil Gupta
[]
Can anyone tell me what is probably going on?
On Tue, Oct 13, 2015 at 3:56 PM, anil gupta <anilgupt...@gmail.com> wrote:
> Hi All,
>
> I am using HBase 0.98(HDP2.2).
> As per the documentation here:
>
> http://www.cloudera.com/content/cloudera/en/documentation/cdh4/v4-3-1
Hi Liren,
In short, adding new columns will *not* trigger compaction.
Thanks,
Anil Gupta
On Sat, Oct 10, 2015 at 9:20 PM, Liren Ding <sky.gonna.bri...@gmail.com>
wrote:
> Thanks Ted. So far I don't see direct answer yet in any hbase books or
> articles. all resources say
Hi Nicolas,
For a table with 5k regions, it should not take more than 10 min for alter
table operations.
Also, in HBase 1.0+, alter table operations do not require disabling the
table. So, you are encouraged to upgrade.
Sent from my iPhone
> On Oct 9, 2015, at 1:15 AM, Nicolae Marasoiu
Hi Akmal,
It will be better if you use the nameservice value. Then you will not need to
worry about which NN is active. I believe you can find that property in
Hadoop's core-site.xml file.
Sent from my iPhone
On Sep 24, 2015, at 7:23 AM, Akmal Abbasov wrote:
>> My
How many rows are expected?
Can you do sanity checking in your data to make sure there are no duplicate
rowkeys?
Sent from my iPhone
> On Sep 22, 2015, at 8:35 AM, OM PARKASH Nain
> wrote:
>
> I am using two methods for row count:
>
> hbase shell:
>
> count
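The sanity check suggested above can be sketched as a duplicate-rowkey scan over the source records: duplicate rowkeys collapse into one HBase row (as extra cell versions), which makes row counts disagree. Plain Python, illustrative only:

```python
# Sketch of a duplicate-rowkey sanity check over source records.
from collections import Counter

def find_duplicate_rowkeys(rowkeys):
    """Return the rowkeys that appear more than once, sorted."""
    counts = Counter(rowkeys)
    return sorted(k for k, n in counts.items() if n > 1)

print(find_duplicate_rowkeys(["r1", "r2", "r1", "r3"]))  # -> ['r1']
```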
Source)
>at sun.security.jgss.GSSContextImpl.initSecContext(Unknown Source)
>at sun.security.jgss.GSSContextImpl.initSecContext(Unknown Source)
>... 19 more
> 2015-08-31 10:15:27,911 WARN [regionserver60020]
> regionserver.HRegionServer: reportForDuty failed; sl
/REALM@REALM
renew until 08/21/15 09:39:33
Loïc CHANEL
Engineering student at TELECOM Nancy
Trainee at Worldline - Villeurbanne
2015-08-21 6:12 GMT+02:00 anil gupta anilgupt...@gmail.com:
Did you run the kinit command before invoking the hbase shell? What does
klist
Regards,
Anil Gupta
for your help !
Loïc
Loïc CHANEL
Engineering student at TELECOM Nancy
Trainee at Worldline - Villeurbanne
--
Thanks Regards,
Anil Gupta
regionserver.
--
Thanks Regards,
Anil Gupta
tooling with coprocessors, like
ColumnAggregationProtocol, involve just one metric e.g. one sum(column).
We
collect many, and of course it is more efficient to scan the data once.
Please advise,
Nicu
--
Thanks Regards,
Anil Gupta
header, e.g. Accept: text/plain). If you'd like to propose a patch we'd
certainly look at it.
Thanks.
On Wed, Aug 5, 2015 at 12:51 AM, anil gupta anilgupt...@gmail.com
wrote:
Hi Andrew,
Thanks for sharing your thoughts. Sorry for the late reply as I recently came
back from vacation
accept a patch?
Thanks,
Anil Gupta
On Fri, Jul 17, 2015 at 4:57 PM, Andrew Purtell apurt...@apache.org wrote:
The closest you can get to just a string is have your client use an accept
header of Accept: application/octet-stream with making a query. This will
return zero or one value
it is painful to
use -
at least for me - because of its size.
I've started to notice this too. It'd be sweet if it loaded more
promptly.
Thanks for starting the discussion.
St.Ack
--
Thanks Regards,
Anil Gupta
Hi All,
We have a String rowkey and String cell values.
Still, Stargate returns the data with Base64 encoding, due to which a user
can't read the data. Is there a way to disable Base64 encoding so that a REST
request would just return Strings?
--
Thanks Regards,
Anil Gupta
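Since the REST (Stargate) JSON/XML representations carry rowkeys and cell values as Base64, the usual workaround is a thin client-side decode step. A minimal Python sketch; the sample value is an illustrative assumption, not an actual Stargate payload:

```python
# Decode a Base64-encoded cell value as returned by the HBase REST gateway.
import base64

def decode_b64(s: str) -> str:
    """Decode a Base64 string to UTF-8 text."""
    return base64.b64decode(s).decode("utf-8")

# e.g. a String cell value as the REST gateway might encode it
encoded = base64.b64encode(b"hello").decode("ascii")
print(decode_b64(encoded))  # -> hello
```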
detailed discussion on this topic.
How about denormalizing the data and then just doing ONE call? Now, this
becomes more of a data modeling question.
Thanks,
Anil Gupta
On Tue, Jul 14, 2015 at 11:39 PM, Chandrashekhar Kotekar
shekhar.kote...@gmail.com wrote:
Hi,
REST APIs
I think this is a duplicate post. Please avoid posting the same question.
Please use the previous thread where I replied.
Sent from my iPhone
On Jul 14, 2015, at 11:17 PM, Chandrashekhar Kotekar
shekhar.kote...@gmail.com wrote:
Hi,
REST APIs of my project make 2-3 calls to different tables
).
Any help is appreciated.
Regards,
Praneesh
--
Thanks Regards,
Anil Gupta
). It would be best to avoid having differences in hardware
across cluster machines.
Thanks,
Anil Gupta
On Wed, Jun 17, 2015 at 5:12 PM, rahul malviya malviyarahul2...@gmail.com
wrote:
Hi,
Is it possible to configure HBase to have only a fixed number of regions per
node per table in hbase? For example node1 serves 2
Thanks Stack.
On Wed, Jun 10, 2015 at 8:06 AM, Stack st...@duboce.net wrote:
On Mon, Jun 8, 2015 at 10:27 PM, anil gupta anilgupt...@gmail.com wrote:
So, if we have to match against non-string data in the hbase shell, should we
always use double quotes?
Double-quotes means the shell (ruby
at
http://hbase.apache.org/0.94/apidocs/index.html,
but where can I find the URL for newer version?
Thanks
--
Sean
--
Thanks Regards,
Anil Gupta
-string data in hbase shell. We
should always
use double quotes?
I think so.
bq. Even for matching values of cells?
Did you mean through use of some Filter ?
Cheers
On Mon, Jun 8, 2015 at 10:27 PM, anil gupta anilgupt...@gmail.com wrote:
So, if we have to match against non-string data
?
--
Thanks Regards,
Anil Gupta
to Anil's question) is that 'escape
sequence' does not work using single quote.
Cheers
On Mon, Jun 8, 2015 at 9:11 PM, anil gupta anilgupt...@gmail.com wrote:
Hi Jean,
> My bad. I gave a wrong illustration. This is the query I was trying on
> my
composite key:
hbase(main):017:0 scan
. And you seem to scan without building
the composite key. How have you created your table and what is your key
design?
JM
2015-06-08 16:56 GMT-04:00 anil gupta anilgupt...@gmail.com:
Hi All,
I'm having a lot of trouble dealing with the HBase shell. I am running the
following query:
scan
ecosystem?,etc
Please explain your use case and share your thoughts after doing some
preliminary reading.
Thanks,
Anil Gupta
On Fri, May 29, 2015 at 12:20 PM, Lukáš Vlček lukas.vl...@gmail.com wrote:
As for the #4 you might be interested in reading
https://aphyr.com/posts/294-call-me-maybe
and didn’t find anything noteworthy.
--
Benoit tsuna Sigoure
--
Benoit tsuna Sigoure
--
Thanks Regards,
Anil Gupta
route so have little
to
contribute. Can you 'talk out loud' as you try stuff Bryan and if we
can't
help highlevel, perhaps we can help on specifics.
St.Ack
cheers,
esteban.
--
Thanks Regards,
Anil Gupta
On Wed, May 13, 2015 at 10:25 AM, Anil Gupta anilgupt...@gmail.com
wrote:
How many mappers/reducers are running per node for this job?
Also, how many mappers are running as data-local mappers?
Is your load/data equally distributed?
Your disk/cpu ratio looks ok.
Sent from my iPhone
On May 13, 2015, at 10:12 AM, rahul malviya malviyarahul2...@gmail.com
wrote:
*The
.
Thanks,
Nick
--
Thanks Regards,
Anil Gupta
here:
https://wiki.apache.org/hadoop/Hbase/Stargate to hbase.apache.org.
--
Thanks Regards,
Anil Gupta
didn't load at Anil's location for whatever reason.
On Thu, Apr 16, 2015 at 8:36 AM, Stack st...@duboce.net wrote:
Are others running into the issue Anil sees?
Thanks,
St.Ack
On Thu, Apr 16, 2015 at 8:13 AM, anil gupta anilgupt...@gmail.com
wrote:
Chrome: Version 42.0.2311.90
return the Cell value (i.e. the image file).
If it's not there and we were to do it, what kind of effort would it take?
Any pointers to the code that I would need to modify would be appreciated.
--
Thanks Regards,
Anil Gupta