y a d also performance.
>
> --
> Sukumar
>
> On Mon, Jun 8, 2020, 10:15 PM anil gupta wrote:
>
>> You were right from the beginning. It is a problem with phoenix secondary
>> index!
>> I tried 4LW zk commands after enabling them, they didn't really provid
in the Java RS process). The focus would
> need to be on what opens a new connection and what is not properly
> closing that connection (in every case).
>
> On 6/3/20 4:57 AM, anil gupta wrote:
> > Thanks for sharing insights. Moving hbase mailing list to cc.
> > Sorry, forgot
ts in a new ZK connection. There have certainly been bugs
> like that in the past (speaking generally, not specifically).
>
> On 6/1/20 5:59 PM, anil gupta wrote:
> > Hi Folks,
> >
> > We are running into HBase problems due to hitting the limit of ZK
> > connections.
try to add a timestamp with view, I get an error “Declaring
> a column as row_timestamp is not allowed for views"
>
>
>
> So is there a way to take advantage of built-in timestamps on preexisting
> HBase tables? If so, could someone please point me in the right direction?
>
>
>
> Thanks!
>
> --Willie
>
>
>
--
Thanks & Regards,
Anil Gupta
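On the ROW_TIMESTAMP question above: as the quoted error suggests, ROW_TIMESTAMP cannot be declared on a view; in Phoenix it is declared on a date/time primary-key column at CREATE TABLE time, mapping that column onto the native HBase cell timestamp. A minimal sketch, assuming a hypothetical table:

```sql
-- Hypothetical table; not from the thread. ROW_TIMESTAMP maps the
-- created_date PK column onto the HBase cell timestamp, and can only
-- be declared at CREATE TABLE time, not on a view.
CREATE TABLE event_log (
    created_date DATE NOT NULL,
    event_id     BIGINT NOT NULL
    CONSTRAINT pk PRIMARY KEY (created_date ROW_TIMESTAMP, event_id)
);
```

For a preexisting HBase table, this only helps if the table is (re)created through Phoenix with such a column; a Phoenix view over an existing table cannot add it.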
rom the website for now. Maybe we can find something akin
> to a `site:phoenix.apache.org ` Google search that we can embed?
>
> On 12/13/19 4:42 PM, anil gupta wrote:
> > Hi,
> >
> > When i try to use the search feature on https://phoenix.apache.org/
> > <ht
Hi,
When I try to use the search feature on https://phoenix.apache.org/ it
takes me to: http://www1.search-hadoop.com/?subid4=1576273131.0028120806
and there are no results. Is this a temporary error, or is the
search-hadoop website gone?
--
Thanks & Regards,
Anil Gupta
workaround would be to put Phoenix query server behind a
homegrown webservice that authenticates and authorizes the users before
forwarding the request to Queryserver.
HTH,
Anil Gupta
On Mon, Nov 4, 2019 at 12:45 AM Aleksandr Saraseka
wrote:
> Hello community.
> Does Phoenix have some kind of se
-truecarfinal
IMO, Hive integration with HBase is not fully baked, and it has a lot of
rough edges. So it's better to stick with native Phoenix/HBase if you care
about performance and ease of operations.
HTH,
Anil Gupta
On Wed, Sep 25, 2019 at 10:01 AM Gautham Acharya <
gauth...@alleninstitute.org>
table.
> 2) select my_column from my_table limit 1 works fine.
>
> However, select * from my_table limit 1; returns no row.
>
> Do I need to perform some extra operations?
>
> thanks
>
>
>
>
>
>
>
--
Thanks & Regards,
Anil Gupta
and not used for MR/Spark jobs), I think it's going to be OK if
you have a heap size of 24 GB for the RS.
Hope this helps,
Anil Gupta
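For reference, a RegionServer heap like the 24 GB mentioned above would typically be set in hbase-env.sh. The fragment below is only an illustrative sketch of where such a setting lives, not a recommendation from the thread:

```shell
# hbase-env.sh (illustrative fragment): give each RegionServer a 24 GB
# heap. The env var name is the standard HBase one; the value is the
# size discussed above, not a general recommendation.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xms24g -Xmx24g"
```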
On Tue, Nov 20, 2018 at 3:45 AM Azhar Shaikh
wrote:
> Hi All,
>
> Is there any update on Heap Size Recommendation.
>
> Your help is greatly apprecia
=1532651151877, value=x
>>
>> \x80\x00\x00\x03column=0:FNAME,
>> timestamp=1532651164899, value=C
>>
>> \x80\x00\x00\x03column=0:LNAME,
>> timestamp=1532651164899, value=B
>>
>>
>>
>> --
>> Sent from: http://apache-phoenix-user-list.1124778.n5.nabble.com/
>>
>
>
--
Thanks & Regards,
Anil Gupta
7 more
> Caused by: java.lang.IllegalAccessError: tried to access class
> org.apache.hadoop.metrics2.lib.MetricsInfoImpl from class
> org.apache.hadoop.metrics2.lib.DynamicMetricsRegistry
> at org.apache.hadoop.metrics2.lib.DynamicMetricsRegistry.newGauge(
> DynamicMetricsRegistry.java:139)
> at org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSourceImpl.(
> MetricsZooKeeperSourceImpl.java:59)
> at org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSourceImpl.(
> MetricsZooKeeperSourceImpl.java:51)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
> DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(
> ServiceLoader.java:380)
>
>
--
Thanks & Regards,
Anil Gupta
We saw at least a 5x improvement in upsert performance from our streaming app
just by altering the tables and adding UPDATE_CACHE_FREQUENCY=6 to all of
them. Overall, our cluster, SYSTEM.CATALOG table, and apps look much
happier. Thanks again!
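The change described above can be sketched as a single ALTER statement (table name and value are hypothetical; the property takes milliseconds):

```sql
-- Illustrative only. UPDATE_CACHE_FREQUENCY (in ms) controls how long
-- the client caches table metadata before re-checking SYSTEM.CATALOG,
-- instead of hitting SYSTEM.CATALOG on every statement.
ALTER TABLE my_table SET UPDATE_CACHE_FREQUENCY = 60000;
```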
On Thu, Apr 12, 2018 at 11:37 PM, anil gupta <anilg
12, 2018 at 11:30 PM anil gupta <anilgupt...@gmail.com> wrote:
>
>> I have set phoenix.default.update.cache.frequency=6 in
>> hbase-site.xml via ambari(we barely alter schema). Is this a client or
>> server side property?
>>
>> On Thu, Apr 12, 2018
ee here[1] and here[2].
>
> In the future, we'll let the SYSTEM.CATALOG table span multiple regions -
> keep an eye on PHOENIX-3534.
>
> Thanks,
> James
>
> [1] https://phoenix.apache.org/#Altering
> [2] https://phoenix.apache.org/language/index.html#options
>
>
PreparedStatement (contrary to Statement), the SYSTEM.CATALOG table is queried
first. Hence, it is resulting in hotspotting. Is my analysis correct?
(I have already suggested that my colleagues try using Statement instead of a
PreparedStatement if they have to create a new one every time.)
--
Thanks & Regards,
Anil Gupta
(non-random). Example: time-series data with time as the
leading part of the rowkey.
Another way to avoid salting with an incremental rowkey is to reverse the
leading number of your rowkey, e.g. reverse(45668) = 86654.
HTH,
Anil Gupta
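The digit-reversal trick described above can be sketched in a few lines (function name is hypothetical; this is illustration, not Phoenix API code):

```python
def reverse_rowkey_prefix(n: int) -> int:
    """Reverse the digits of a monotonically increasing numeric id so
    consecutive writes land on different regions instead of hotspotting
    the last one. Sketch of the trick described in the email above."""
    return int(str(n)[::-1])

print(reverse_rowkey_prefix(45668))  # -> 86654, as in the example above
```

Note that plain reversal is not bijective for ids with trailing zeros (e.g. 100 reverses to 1); zero-padding ids to a fixed width avoids collisions.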
On Fri, Sep 8, 2017 at 10:23 AM, Pradheep Shanmugam <
pradheep.sha
And forgot to mention that we invoke our pig scripts through oozie.
On Mon, Aug 21, 2017 at 2:20 PM, anil gupta <anilgupt...@gmail.com> wrote:
> Sorry, can't share the pig script.
> Here is what we are registering:
> REGISTER /usr/lib/phoenix/phoenix-4.7.0-HBase-1.2-client.jar;
differently.
>>
>> One thing I did not mention because I thought it should not matter is
>> that to avoid extra costs while testing, I was only running a master node
>> with no slaves (no task or core nodes). Maybe lack of slaves causes
>> problems not normally seen.
(DA
>> GAppMaster.java:2554)
>> at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2359)
>> Caused by: java.lang.ClassNotFoundException:
>> org.apache.phoenix.shaded.org.codehaus.jackson.jaxrs.Jackson
>> JaxbJsonProvider
>> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
>> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>> ... 28 more
>>
>> Has anyone been able to get org.apache.phoenix.pig.PhoenixHBaseStorage()
>> to work on recent EMR versions? Please help if you can.
>>
>> Thank you,
>> Steve
>>
>
>
--
Thanks & Regards,
Anil Gupta
mer.java, I see the following around line 106:
>
> checkClosed();
>
> if (off < 0 || len < 0 || off > b.length - len) {
> throw new ArrayIndexOutOfBoundsException();
>
> You didn't get ArrayIndexOutOfBoundsException - maybe b was null ?
>
> On Thu, Jul 6, 2017 a
aiting for channel to be ready for read
>
> Do you see similar line in region server log ?
>
> Cheers
>
> On Thu, Jul 6, 2017 at 1:48 PM, anil gupta <anilgupt...@gmail.com> wrote:
>
> > Hi All,
> >
> > We are running HBase/Phoenix on EMR5.2(HBase1.2.3 an
with wiping out this table and rebuilding the dataset. We
tried to drop the table and recreate it, but that didn't fix it.
Can anyone please let us know how we can get rid of the above problem? Are
we running into https://issues.apache.org/jira/browse/HBASE-16960?
--
Thanks & Regards,
Anil Gupta
/BI.VIN_IDX/I/d0a6c4b727bb416f840ed254658f3982]
failed. This is recoverable and they will be retried.
2017-05-24 18:00:11,793 INFO [main] mapreduce.LoadIncrementalHFiles: Split
occured while grouping HFiles, retry attempt 1 with 15 files remaining to
group or split
--
Thanks & Regards,
Anil Gupta
serGroupInformation.doAs(
> UserGroupInformation.java:1698)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)/
>
> The hack I am using right now is to set the permissions manually for these
> files when the IndexTool job is running. Is there a better way?
>
&
of sense having quotes in an integer column, does
> it?
>
> Maybe removing this quotes from the source would solve the problem.
>
> On 30 Mar 2017 18:43, "anil gupta" <anilgupt...@gmail.com> wrote:
>
>> Hi Brian,
>>
>> It seems like Phoenix
Hi Brian,
It seems like Phoenix does not like '' (single quotes) in an integer column.
IMO, it would be better if Phoenix could handle that by providing an option in
CsvBulkLoadTool to specify that '' be treated as null. Single quotes work
fine for VARCHAR columns.
Thanks,
Anil Gupta
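Since (per the thread) CsvBulkLoadTool had no such option, one workaround is to pre-process the CSV before loading. A hedged sketch (function name hypothetical; it assumes the bad fields are literally two single-quote characters, as described above):

```python
import csv
import io

def strip_quoted_nulls(csv_text: str) -> str:
    """Replace fields that are literally '' (two single-quote chars)
    with empty fields, so an integer column loads as NULL instead of
    failing to parse. Pre-processing sketch for the issue above."""
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    for row in csv.reader(io.StringIO(csv_text)):
        writer.writerow(["" if field == "''" else field for field in row])
    return out.getvalue()

print(strip_quoted_nulls("1,'',a\n2,7,b\n"))  # -> 1,,a / 2,7,b
```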
On Thu, Mar 30
at documentation but I am unable to
find a solution for this.
--
Thanks & Regards,
Anil Gupta
find it here:
> https://github.com/apache/bigtop/blob/master/bigtop.bom#L323
>
>
>
> *From: *anil gupta <anilgupt...@gmail.com>
> *Reply-To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
> *Date: *Monday, February 6, 2017 at 4:22 PM
> *To: *"use
Phoenix4.7 with HBase1.2.3?
--
Thanks & Regards,
Anil Gupta
>>> >
>>>> > 0: jdbc:phoenix:rhes564:2181> explain select count(1) from
>>>> > "marketDataHbase";
>>>> > +----------------------------------------------------------+
>>>> > | PLAN                                                     |
>>>> > +----------------------------------------------------------+
>>>> > | CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER INDEX_DX1   |
>>>> > | SERVER FILTER BY FIRST KEY ONLY                          |
>>>> > | SERVER AGGREGATE INTO SINGLE ROW                         |
>>>> > +----------------------------------------------------------+
>>>> >
>>>> > Now the issue is that the above does not show new data since build in
>>>> Hbase
>>>> > table unless I do the following:
>>>> >
>>>> > 0: jdbc:phoenix:rhes564:2181> alter index INDEX_DX1 on
>>>> "marketDataHbase"
>>>> > rebuild;
>>>> >
>>>> >
>>>> > Which is not what an index should do (The covered index should be
>>>> > maintained automatically).
>>>> > The simple issue is how to overcome this problem?
>>>> >
>>>> > As I understand it, the index in Phoenix is another file independent of
>>>> the
>>>> > original Phoenix view, so I assume that this index file is not updated
>>>> for
>>>> > one reason or another?
>>>> >
>>>> > Thanks
>>>>
>>>
>>
>
--
Thanks & Regards,
Anil Gupta
t;
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetar
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAcce
>>>> ssorImpl.java:62)
>>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMe
>>>> thodAccessorImpl.java:43)
>>>> at java.lang.reflect.Method.invoke(Method.java:498)
>>>> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>>>> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>>>>
>>>> I tried putting it inside "" etc but no joy I am afraid!
>>>>
>>>> Dr Mich Talebzadeh
>>>>
>>>>
>>>>
>>>> LinkedIn *
>>>> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>> <https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>>>>
>>>>
>>>>
>>>> http://talebzadehmich.wordpress.com
>>>>
>>>>
>>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>>> any loss, damage or destruction of data or any other property which may
>>>> arise from relying on this email's technical content is explicitly
>>>> disclaimed. The author will in no case be liable for any monetary damages
>>>> arising from such loss, damage or destruction.
>>>>
>>>>
>>>>
>>>
>>>
>>
>
--
Thanks & Regards,
Anil Gupta
base using Phoenix
> ? It didn't work when we tried and hence posting it here. Thanks in advance
>
> --
> Thanks,
> Sanooj Padmakumar
>
--
Thanks & Regards,
Anil Gupta
Thanks for the quick response, James. I'll try out some stuff.
On Sun, Oct 2, 2016 at 5:00 PM, James Taylor <jamestay...@apache.org> wrote:
> Option #2 is fine. Connections are cheap in Phoenix.
>
>
> On Sunday, October 2, 2016, anil gupta <anilgupt...@gmail.com&
ing the same Connection between multiple threads
> as it's not thread safe.
> Thanks,
> James
>
>
> On Sunday, October 2, 2016, anil gupta <anilgupt...@gmail.com> wrote:
>
>> Hi,
>>
>> We are running HDP2.3.4(HBase 1.1 and Phoenix 4.4). I have a MapReduce
>
at a much higher throughput and volume of data
but we never ran into this problem. Can anyone provide me more details on
why we are getting ConcurrentModificationException while doing upserts?
--
Thanks & Regards,
Anil Gupta
Hi James,
>
> I found this for Hbase
> https://issues.apache.org/jira/browse/HBASE-3529
>
> It's a patch that can be added to HBase, based on what I am seeing
>
--
Thanks & Regards,
Anil Gupta
/aesop/hbasedatalayer/upsert/HBaseUpsertDataLayer.java>
>
> On Tue, Aug 2, 2016 at 10:31 PM, Anil Gupta <anilgupt...@gmail.com> wrote:
>
>> Are you using a prepared statement for upserts? IMO, query should be
>> compiled only once when prepared statement is
the Phoenix system hbase tables,
Global/Local secondary index table and then Primary Phoenix table.
I haven't done it yet. But the above is the way I would approach it.
Thanks,
Anil Gupta.
On Thu, Jun 9, 2016 at 6:49 AM, Jean-Marc Spaggiari <jean-m...@spaggiari.org
> wrote:
> Hi,
>
&g
the cluster?
Thanks,
Anil Gupta
On Thu, May 26, 2016 at 7:19 AM, Lucie Michaud <
lucie.mich...@businessdecision.com> wrote:
> Hello everybody,
>
>
>
> For a few days I developed a MapReduce code to insert values in HBase with
> Phoenix. But the code runs only in local an
You can simply write a MapReduce job to accomplish your business logic. The output
format of the job will be PhoenixOutputFormat.
Have a look at PhoenixOutputFormat for more details.
Sent from my iPhone
> On May 18, 2016, at 10:53 PM, anupama agarwal wrote:
>
> Hi All,
>
> I have
7.0. You will need to either recompile (or modify the code of) Phoenix to work
with CDH 5.7.0 or ask Cloudera to support Phoenix.
> at
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1156)
> ... 10 more
>
> Any input on this will be extremely helpful.
>
> --
> Thanks,
> Sanooj Padmakumar
>
--
Thanks & Regards,
Anil Gupta
base.
> I know Kundera and DataNucleus, but the idea was to use the power of
> Phoenix.
>
> What is your opinion? Has anyone ever tested, use or design phase?
> Thank you for your help.
>
--
Thanks & Regards,
Anil Gupta
n block cache, right?
>
> Thanks for such a prompt reply.
> Sumit
>
>
>
> From: anil gupta <anilgupt...@gmail.com>
> To: "user@phoenix.apache.org" <user@phoenix.apache.org>; Sumit Nigam
> <sumit_o...@yahoo.com>
> Sent: Tuesday, March 22,
Global indexes are stored in a separate HBase table, so you can estimate the
memory footprint by looking at the current data size of that index.
HTH,
Anil Gupta
On Tue, Mar 22, 2016 at 7:19 AM, Sumit Nigam <sumit_o...@yahoo.com> wrote:
> Hi,
>
> I am trying to estima
p=1458028540810, value=\xE5\xB0\x8F\xE6\x98\x8E
>>
>> I don't know how to decode the value to a normal string. What's the
>> charset?
>>
>
>
>
> --
> Thanks,
> Sanooj Padmakumar
>
--
Thanks & Regards,
Anil Gupta
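The escaped bytes in the scan output quoted above are simply UTF-8; the HBase shell prints non-ASCII bytes in `\xNN` form. Decoding them recovers the original string:

```python
# The escaped value from the HBase shell output above, decoded as UTF-8.
raw = b"\xE5\xB0\x8F\xE6\x98\x8E"
print(raw.decode("utf-8"))  # -> 小明
```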
Hi James,
Due to a typo, we forgot to put the CF name as a prefix to the CQ name in 1 of 1100
columns of that table. That led to the creation of a CF named "0". After
fixing the typo, we only have 2 CFs.
Thanks,
Anil Gupta
On Thu, Feb 18, 2016 at 11:20 AM, James Taylor <jamestay...@apache.org>
w
settings. How are you authenticating with Phoenix/HBase?
Sorry, I don't remember the exact Kerberos setting that we had.
HTH,
Anil Gupta
On Mon, Mar 14, 2016 at 11:00 AM, Sanooj Padmakumar <p.san...@gmail.com>
wrote:
> Hi
>
> We have a rest style micro service application fetching
get in sqlline and you can verify the
> setting took affect through the HBase shell by running the following
> command:
>
> describe 'MY_TABLE_SCHEMA.MY_INDEX_NAME'
>
> HTH,
>
> James
>
>
> On Sat, Mar 12, 2016 at 10:18 AM, anil gupta <anilgupt...@gmail.com>
--
Thanks & Regards,
Anil Gupta
Yes, global indexes are stored in a separate HBase table, and their region
locations are not related to the main table's regions.
Sent from my iPhone
> On Mar 12, 2016, at 4:34 AM, Saurabh Agarwal (BLOOMBERG/ 731 LEX)
> wrote:
>
> Thanks. I will try that.
>
> Having
ssentially, doing some
aggregate queries.
--
Thanks & Regards,
Anil Gupta
to Phoenix4.4
with a ruby gem of Phoenix4.2? If not, then what we would need to
do?(upgrade ruby gem to Phoenix4.4?)
Here is the git: https://github.com/wxianfeng/ruby-phoenix
--
Thanks & Regards,
Anil Gupta
; varchar)
> -- invalid syntax; pseudo code of what I wish I could do.
>
> select "dynamic_field" from MY_VIEW
>
> Should I create a JIRA for a new feature? Or is this fundamentally not
> possible?
>
> Thanks,
> Steve
>
--
Thanks & Regards,
Anil Gupta
--
Thanks & Regards,
Anil Gupta
eMs,
>> but haven't tried playing with phoenix.upsert.batch.size. Its at the
>> default 1000.
>>
>> On Wed, Feb 17, 2016 at 12:48 PM, anil gupta <anilgupt...@gmail.com>
>> wrote:
>>
>>> I think, this has been answered before:
>>> http://
commit(PhoenixConnection.java:456)
> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>
>
--
Thanks & Regards,
Anil Gupta
, phoenix.upsert.batch.size is 1000. Hence, the commits were
failing with a huge batch size of 1000.
Thanks,
Anil Gupta
On Sun, Feb 14, 2016 at 8:03 PM, Heng Chen <heng.chen.1...@gmail.com> wrote:
> I am not sure whether "upsert batch size in phoenix" equals HBase Clien
My Phoenix upsert batch size is 50. You mean to say that 50 is also a lot?
However, AsyncProcess is complaining about 2000 actions.
I tried with an upsert batch size of 5 also, but it didn't help.
On Sun, Feb 14, 2016 at 7:37 PM, anil gupta <anilgupt...@gmail.com> wrote:
> My phoenix ups
for 2000
actions to finish
I have never seen anything like this. Can anyone give me pointers about
this problem?
--
Thanks & Regards,
Anil Gupta
would
not timeout in 18ms
On Sun, Feb 14, 2016 at 12:44 PM, anil gupta <anilgupt...@gmail.com> wrote:
> Hi,
>
> We are using phoenix4.4, hbase 1.1(hdp2.3.4).
> I have a MR job that is using PhoenixOutputFormat. My job keeps on failing
> due to following error:
>
&
we can still use TinyInt and SmallInt in a Phoenix table
while using the Pig-Phoenix loader?
If there is currently no way to do it, can we enhance the Pig-Phoenix loader to
support TinyInt and SmallInt?
--
Thanks & Regards,
Anil Gupta
ig DataType.INTEGER .
>> https://github.com/apache/phoenix/blob/master/phoenix-pig/src/main/java/org/apache/phoenix/pig/util/TypeUtil.java#L94
>> . Can you please share the error you are seeing.
>>
>> HTH
>>
>> Ravi.
>>
>> On Sat, Feb 13, 2016 at 3:16 A
acheTimeToLiveMs in the region
> server hbase-site.xml. See our Tuning page[1] for more info.
>
> FWIW, 500K rows would be much faster to insert via our standard UPSERT
> statement.
>
> Thanks,
> James
> [1] https://phoenix.apache.org/tuning.html
>
> On Sun, Jan 10,
> On Jan 6, 2016, at 11:14 PM, anil gupta <anilgupt...@gmail.com> wrote:
>
> Hi All,
>
> I am using Phoenix4.4; I have created a global secondary index on one table. I am
> running a MapReduce job with 20 reducers to load data into this table (maybe I'm
> doing 50 writ
-on-hadoop/
4.
https://community.cloudera.com/t5/CDH-Manual-Installation/Kerberos-integration-issue-s-with-hadoop-HA/td-p/24794
Thanks,
Anil Gupta
On Tue, Jan 5, 2016 at 8:18 PM, Ns G <nsgns...@gmail.com> wrote:
> Hi Team,
>
> Any idea with this issue? We are struck up with this is
it's a key/value pair. :)
Is there any way I can achieve the above? Would the community like to
have a key/value API?
--
Thanks & Regards,
Anil Gupta
re or not. We have no plans to do a 4.5.3 release.
> FYI, Andrew put together a 4.6 version that works with CDH here too:
> https://github.com/chiastic-security/phoenix-for-cloudera. We also plan
> to do a 4.7 release soon.
>
> Thanks,
> James
>
>
> On Wed, Dec 30, 2015 at 4:30
Hi Gabriel,
Thanks for the info. What is the backward-compatibility policy of Phoenix
releases? Would a 4.5.3 client jar work with a Phoenix 4.4 server jar?
Are 4.4 and 4.5 considered two major releases, or minor releases?
Thanks,
Anil Gupta
On Tue, Dec 29, 2015 at 11:11 PM, Gabriel Reid <gabrie
.java:2405)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1232)
at
org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil.initialize(MRWebAppUtil.java:51)
at
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1444)
Is my bulkloader command incorrect?
Thanks,
ration.get(Configuration.java:1232)
at
org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil.initialize(MRWebAppUtil.java:51)
at
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1444)
--
Thanks & Regards,
Anil Gupta
, Please follow the post and let us know how it goes. It should be
pretty easy.
Thanks,
Anil Gupta
On Tue, Dec 22, 2015 at 10:24 PM, James Taylor <jamestay...@apache.org>
wrote:
> See
> https://phoenix.apache.org/faq.html#How_do_I_connect_to_secure_HBase_cluster
>
> On Tue, Dec 2
Hi Akhilesh,
You can add the hbase/hadoop config directories to the application classpath. You
don't need to copy the conf files into your app's lib folder.
Thanks,
Anil Gupta
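The advice above can be sketched as a launcher fragment (paths, jar, and main class are hypothetical; the point is referencing the existing config directories on the classpath rather than copying files into the app):

```shell
# Illustrative paths: point the app's classpath at the existing HBase
# and Hadoop config directories instead of copying the files into lib/.
HBASE_CONF_DIR=/etc/hbase/conf
HADOOP_CONF_DIR=/etc/hadoop/conf
APP_CLASSPATH="myapp.jar:$HBASE_CONF_DIR:$HADOOP_CONF_DIR"
echo "$APP_CLASSPATH"
# java -cp "$APP_CLASSPATH" com.example.MyApp   # hypothetical launcher
```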
On Wed, Dec 9, 2015 at 2:54 PM, Biju N <bijuatapa...@gmail.com> wrote:
> Thanks Akhilesh/Mujtaba for your suggestions. Ad
apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2395)
> at
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
> at
> org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
> at java.lang.Thread.run(Thread.java:745)
>
>
--
Thanks & Regards,
Anil Gupta
to evaluate the data model of HBase tables and try to
convert queries to small range scans or lookups.
>
> I believe it is a great project and the functionality is really useful.
> What's lacking is 3 sample configs for 3 different strength clusters.
>
Anil: I agree that guidance on configuration of HBase and Phoenix can be
improved so that people can get going quickly.
>
> Thanks
>
--
Thanks & Regards,
Anil Gupta
On a side note: Did you enable short circuit reads? Did you try snappy
compression on your tables?(IMO, 7200rpm disk is on the slower side so try
compressing data on disk). There are some data encoding scheme in HBase.
Have a look at those too.
On Wed, Sep 30, 2015 at 3:45 PM, anil gupta
lete on server localhost/127.0.0.1:2181, sessionid =
>>>> 0x14fde0f7576000e, negotiated timeout = 4
>>>>
>>>> Any idea - what i am doing wrong? I tried this with Apache Hbase
>>>> running in my Ubutnu, Apache HBase running within Cloudera QuickStart
when I am writing into Phoenix tables using Java application it is
> reflecting in the corresponding Hbase table also. So Phoenix and Hbase
> tables are one and the same, right
> On Sep 19, 2015 11:35 AM, "anil gupta" <anilgupt...@gmail.com> wrote:
>
>> Ph
ionQueryServicesImpl.java:1924)
>> at
>> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1896)
>> at
>> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
>> at
>> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1896)
>> at
>> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:180)
>> at
>> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:132)
>> at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:151)
>> at java.sql.DriverManager.getConnection(DriverManager.java:664)
>> at java.sql.DriverManager.getConnection(DriverManager.java:270)
>> at phoenixTest.main(phoenixTest.java:16)
>>
>>
>> Seems more like a JAR file version mismatch issue.
>> Here are the JAR files that I am using:
>> Please refer to the screen shot
>>
>> I have followed all the guidelines for setting up Phoenix at:
>> https://phoenix.apache.org/installation.html
>>
>> My connection from Squirrel is working fine...but from Java Program
>> getting the errors.
>> --
>> With best Regards:
>> Ashutosh Sharma
>>
>
>
>
> --
> With best Regards:
> Ashutosh Sharma
>
--
Thanks & Regards,
Anil Gupta
ode and its document here:
> https://phoenix.apache.org/secondary_indexingha.html
>
> I set the parameter in `hbase-site.xml` and restart the hbase. I also use
> the `hbase-site.xml` in client side, but the threads number in my client do
> not reduce.
>
> How can I control the threads in client?
>
> Thanks !
>
--
Thanks & Regards,
Anil Gupta
ments via ResultSetMetaData?
>
> Regards,
> Matjaž
>
>
>
--
Thanks & Regards,
Anil Gupta
;
>
> Working fine. Only Phoenix client JAR is needed...nothing more than that.
> A few questions: I can see that the table I created using Phoenix is also
> created in HBase. But how are they working internally? Meaning, if any
> update happens on the HBase side... is it reflected on the Phoenix si
he.org>
> > wrote:
> >
> > > Given that we have Phoenix support for HBase 1.1, does anyone feel like
> > we
> > > need to continue with Phoenix releases for HBase 1.0?
> > >
> > > Thanks,
> > > James
> > >
> >
>
--
Thanks & Regards,
Anil Gupta
--
Thanks Regards,
Anil Gupta
mentioned above. What
kind of hack we can apply? This is the first time we are using Pig-Phoenix
integration.
Thanks
Pari
On 14 August 2015 at 20:41, anil gupta anilgupt...@gmail.com wrote:
Hi Pari,
AFAIK, Oozie does not support HBase out of the box. Does the script run
fine without
the dataframe. Spark integration is working fine and am
able to save as table. But I want to use sequences. Any help on this please?
Thanks,
Satya
On 15-Aug-2015 8:44 pm, anil gupta anilgupt...@gmail.com wrote:
That's a basic GROUP BY principle in SQL. You will need to add the column
in group
Hi James,
IMO, for using Phoenix, i would give preference to instructions from
Phoenix website. Those setting look like they are used to maintain local
index.
Thanks,
Anil Gupta
On Fri, Aug 7, 2015 at 4:32 AM, James Heather james.heat...@mendeley.com
wrote:
I'm a bit unclear as to what
, Siva B sivd...@outlook.com wrote:
Hi,
Can anyone share the URL to download Phoenix ODBC installer for Windows. I
have to connect phoenix with legacy Dotnet application.
Thanks.
--
Thanks Regards,
Anil Gupta
?
- Is there a way to do major compaction via phoenix?
Thanks.
--
Thanks Regards,
Anil Gupta
should be made.
Thank you,
Sergey Malov
From: anil gupta anilgupt...@gmail.com
Reply-To: user@phoenix.apache.org user@phoenix.apache.org
Date: Tuesday, July 7, 2015 at 02:49
To: user@phoenix.apache.org user@phoenix.apache.org
Subject: Re: create a view on existing production table
Hi Nishant,
Refer to the HBase wiki for multiple column families. In my experience, don't
try to have more than 2-3 column families. Also, group the columns into column
families on the basis of access pattern.
If you don't have an access pattern where you can avoid reading a column family, then
you would not
Hi Sergey,
Since you have hundreds of thousands of columns, you can query your data
using the dynamic columns feature of Phoenix. That way, you won't need to
predefine hundreds of thousands of columns.
Thanks,
Anil Gupta
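The dynamic-columns suggestion above can be sketched as Phoenix SQL (table and column names are hypothetical; the inline type declaration is the documented dynamic-column syntax):

```sql
-- Hypothetical table/columns; temp_f is never declared in the DDL.
UPSERT INTO sensor_data (id, temp_f INTEGER) VALUES ('row1', 72);

-- Dynamic columns must be re-declared inline at query time:
SELECT id, temp_f FROM sensor_data (temp_f INTEGER);
```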
On Fri, Jun 26, 2015 at 11:34 AM, James Taylor jamestay...@apache.org
wrote
more than 1 version. That is something I would check on the HBase side to
investigate this problem.
Thanks,
Anil Gupta
On Thu, Jun 25, 2015 at 3:11 AM, Riesland, Zack zack.riesl...@sensus.com
wrote:
Earlier this week I was surprised to find that, after dumping tons of
data from a Hive table
in the primary key columns as defined in Phoenix, but
when bringing these records over to Phoenix they will end up as a single
row. Any idea if this could be the situation in your setup?
- Gabriel
On Tue, Jun 23, 2015 at 6:11 AM anil gupta anilgupt...@gmail.com wrote:
For#2: You can use
,
- Alex
--
Thanks Regards,
Anil Gupta
attention with some kind of error?
2) Is there a straightforward way to count the rows in my Phoenix table so
that I can compare the Hive table with the HBase table?
Thanks in advance!
--
Thanks Regards,
Anil Gupta
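A minimal sketch of the row-count comparison asked about above, on the Phoenix side (table name hypothetical):

```sql
-- Full-scan count through Phoenix; slow on large tables but simple.
SELECT COUNT(*) FROM my_table;
```

The result can then be compared against an HBase-side count (e.g. the HBase shell's `count` command or the RowCounter MapReduce job), keeping in mind that both approaches scan the whole table.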
double-checking with the experts because if I screw this up, it will
take 3 days to re-ingest all the data…
--
Thanks Regards,
Anil Gupta