xes
but we are happy that the cluster is not freaking out anymore.
Thanks for all the pointers!
-Anil
On Tue, Jun 9, 2020 at 12:09 AM Sukumar Maddineni
wrote:
> Hi Anil,
>
> I think if you create that missing HBase table (index table) with dummy
> metadata (using the hbase shell) and t
me how i can
*delete this table? (restart of cluster? or doing an upsert in catalog?)*
*Last one: if table_type='u' then it's a user-defined table, and if
table_type='i' then it's an index table?*
*Thanks a lot for your help!*
*~Anil*
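For reference, a minimal JDBC sketch of checking TABLE_TYPE in SYSTEM.CATALOG (the
ZK quorum and table name are placeholders; as far as I know the values are 'u' =
user table, 'i' = index, 'v' = view, 's' = system):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CatalogTypeCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT TABLE_SCHEM, TABLE_NAME, TABLE_TYPE FROM SYSTEM.CATALOG "
                 + "WHERE TABLE_NAME = ? AND TABLE_TYPE IS NOT NULL")) {
            ps.setString(1, "MY_INDEX"); // hypothetical table/index name
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // 'u' = user table, 'i' = index, 'v' = view, 's' = system table
                    System.out.println(rs.getString(2) + " -> " + rs.getString(3));
                }
            }
        }
    }
}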
On Wed, Jun 3, 2020 at 8:14 AM Josh Elser wrote
ts in a new ZK connection. There have certainly been bugs
> like that in the past (speaking generally, not specifically).
>
> On 6/1/20 5:59 PM, anil gupta wrote:
> > Hi Folks,
> >
> > We are running into HBase problems due to hitting the limit of ZK
> > connections.
try to add a timestamp with view, I get an error “Declaring
> a column as row_timestamp is not allowed for views"
>
>
>
> So is there a way to take advantage of built-in timestamps on preexisting
> HBase tables? If so, could someone please point me in the right direction?
>
>
>
> Thanks!
>
> --Willie
>
>
>
--
Thanks & Regards,
Anil Gupta
AFAIK, it was run by folks at https://sematext.com/
On Fri, Dec 13, 2019 at 3:21 PM Josh Elser wrote:
> I'm not sure who actually runs search-hadoop. I don't believe it's
> anyone affiliated with Apache Phoenix. Thanks for letting us know it's
> broken, Anil!
>
> I'll drop it f
Hi,
When I try to use the search feature on https://phoenix.apache.org/ it
takes me to: http://www1.search-hadoop.com/?subid4=1576273131.0028120806
and there are no results. Is this a temporary error, or is the
search-hadoop website gone?
--
Thanks & Regards,
Anil Gupta
workaround would be to put Phoenix query server behind a
homegrown webservice that authenticates and authorizes the users before
forwarding the request to Queryserver.
HTH,
Anil Gupta
On Mon, Nov 4, 2019 at 12:45 AM Aleksandr Saraseka
wrote:
> Hello community.
> Does Phoenix have some kind of se
-truecarfinal
IMO, Hive integration with HBase is not fully baked and it has a lot of
rough edges. So, it's better to stick with native Phoenix/HBase if you care
about performance and ease of operations.
HTH,
Anil Gupta
On Wed, Sep 25, 2019 at 10:01 AM Gautham Acharya <
gauth...@alleninstitute.org>
table.
> 2) select my_column from my_table limit 1 works fine.
>
> However, select * from my_table limit 1; returns no row.
>
> Do I need to perform some extra operations?
>
> thanks
>
>
>
>
>
>
>
--
Thanks & Regards,
Anil Gupta
Thanks, it's working. But it needs a Phoenix connection, and the convert operation
must be done for each column :(. Thanks.
I mean, can I use the following methods?
TWO_BYTE_QUALIFIERS.decode()
TWO_BYTE_QUALIFIERS.encode()
Are there any additional transformations (e.g. storage formats) to be
considered while reading from HBase?
Thanks. Regards
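In case it is useful, a small sketch of those calls (assuming Phoenix 4.10+ on the
classpath; the qualifier number is just an example taken from SYSTEM.CATALOG's
COLUMN_QUALIFIER):

import org.apache.hadoop.hbase.util.Bytes;
import org.apache.phoenix.schema.PTable.QualifierEncodingScheme;

public class QualifierCodec {
    public static void main(String[] args) {
        int qualifier = 11; // example numeric qualifier
        // Encode the way TWO_BYTE_QUALIFIERS stores column qualifiers in HBase.
        byte[] encoded = QualifierEncodingScheme.TWO_BYTE_QUALIFIERS.encode(qualifier);
        System.out.println("encoded: " + Bytes.toStringBinary(encoded));
        // Decode raw HBase cell qualifiers back to the numeric value.
        int decoded = QualifierEncodingScheme.TWO_BYTE_QUALIFIERS.decode(encoded);
        System.out.println("decoded: " + decoded);
    }
}

Beyond the qualifier encoding, the cell values themselves are still serialized with
the Phoenix PDataType of each column, so that conversion is still per column as noted
above.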
On Tue, 8 Jan 2019 at 11:46, Anil wrote:
> Hi Thomas,
>
> I hav
,
Anil
On Tue, 8 Jan 2019 at 11:24, Thomas D'Silva wrote:
> There isn't an existing utility that does that. You would have to look up
> the COLUMN_QUALIFIER for the columns you are interested in from
> SYSTEM.CATALOG
> and then use them to create a Scan.
>
> On Mon, Jan 7, 2019 at 9
.
Thanks,
Anil
On Tue, 11 Dec 2018 at 14:02, Anil wrote:
> Thanks.
>
> On Tue, 11 Dec 2018 at 11:51, Jaanai Zhang wrote:
>
>> The difference is that encoded column names are used, which are supported since the 4.10
>> version (also see PHOENIX-1598
>> <https://issues.apache.org/jira/
he original column
> names in the create table SQL, for example:
>
> create table test (
>   id varchar primary key,
>   col varchar
> ) COLUMN_ENCODED_BYTES = 0;
>
>
>
> ----
> Jaanai Zhang
> Best regards!
>
Hi,
We have upgraded Phoenix to Phoenix-4.11.0-cdh5.11.2 from Phoenix 4.7.
Problem - when a table is created in Phoenix, the underlying HBase column names
and Phoenix column names are different. Tables created in the 4.7 version look
good. Looks
CREATE TABLE TST_TEMP (TID VARCHAR PRIMARY KEY ,PRI
and not used for MR/Spark jobs), I think it's gonna be OK if
you have a heap size of 24 GB for the RS.
Hope this helps,
Anil Gupta
On Tue, Nov 20, 2018 at 3:45 AM Azhar Shaikh
wrote:
> Hi All,
>
> Is there any update on Heap Size Recommendation.
>
> Your help is greatly apprecia
=1532651151877, value=x
>>
>> \x80\x00\x00\x03column=0:FNAME,
>> timestamp=1532651164899, value=C
>>
>> \x80\x00\x00\x03column=0:LNAME,
>> timestamp=1532651164899, value=B
>>
>>
>>
>> --
>> Sent from: http://apache-phoenix-user-list.1124778.n5.nabble.com/
>>
>
>
--
Thanks & Regards,
Anil Gupta
7 more
> Caused by: java.lang.IllegalAccessError: tried to access class
> org.apache.hadoop.metrics2.lib.MetricsInfoImpl from class
> org.apache.hadoop.metrics2.lib.DynamicMetricsRegistry
> at org.apache.hadoop.metrics2.lib.DynamicMetricsRegistry.newGauge(
> DynamicMetricsRegistry.java:139)
> at org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSourceImpl.(
> MetricsZooKeeperSourceImpl.java:59)
> at org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSourceImpl.(
> MetricsZooKeeperSourceImpl.java:51)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(
> NativeConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
> DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(
> ServiceLoader.java:380)
>
>
--
Thanks & Regards,
Anil Gupta
We saw at least a 5x improvement in upsert performance from our streaming app
just by altering the tables and adding UPDATE_CACHE_FREQUENCY=6 to all of our
tables. Overall our cluster, SYSTEM.CATALOG table and apps look much
happier. Thanks again!
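For anyone following along, a minimal JDBC sketch of setting this per table (the URL,
table name and the 60000 ms value are placeholders, not the exact settings used above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SetCacheFrequency {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement()) {
            // Cache table metadata on the client instead of hitting SYSTEM.CATALOG
            // on every statement.
            stmt.execute("ALTER TABLE MY_TABLE SET UPDATE_CACHE_FREQUENCY = 60000");
        }
    }
}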
On Thu, Apr 12, 2018 at 11:37 PM, anil gupta <anilg
12, 2018 at 11:30 PM anil gupta <anilgupt...@gmail.com> wrote:
>
>> I have set phoenix.default.update.cache.frequency=6 in
>> hbase-site.xml via Ambari (we barely alter the schema). Is this a client-side or
>> server-side property?
>>
>> On Thu, Apr 12, 2018
ee here[1] and here[2].
>
> In the future, we'll let the SYSTEM.CATALOG table span multiple regions -
> keep an eye on PHOENIX-3534.
>
> Thanks,
> James
>
> [1] https://phoenix.apache.org/#Altering
> [2] https://phoenix.apache.org/language/index.html#options
>
>
PreparedStatement (contrary to a Statement), the SYSTEM.CATALOG table is queried
first. Hence, it is resulting in hotspotting. Is my analysis correct?
(I have already suggested that my colleagues try using a Statement instead of a
PreparedStatement if they have to create a new one every time.)
--
Thanks & Regards,
Anil Gupta
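As an illustration, a small sketch of preparing the upsert once and reusing it with
periodic commits (URL, table and columns are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ReuseUpsertStatement {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             PreparedStatement ps = conn.prepareStatement(
                 "UPSERT INTO MY_TABLE (ID, NAME) VALUES (?, ?)")) {
            conn.setAutoCommit(false);
            for (int i = 0; i < 10000; i++) {
                ps.setString(1, "id-" + i);
                ps.setString(2, "name-" + i);
                ps.executeUpdate();
                if (i % 1000 == 0) {
                    conn.commit(); // flush a batch of mutations instead of one per row
                }
            }
            conn.commit();
        }
    }
}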
Hi Yun,
You mean we need to increase the connections in the connection pool? We didn't
make any changes to our code before the upgrade. The same code is working on the 4.7
version.
Note : We are using APACHE_PHOENIX-4.11.0-cdh5.11.2.p0.1.
Thanks,
Anil
On 24 March 2018 at 20:57, zhang yun <cloud.
Hi Josh,
I didn't find any other exception; I will check again. Exceptions happened
only on a few nodes. I have seen a "failed to get regions" exception on a few
nodes. I will check again and come back with more details. Thanks.
Regards,
Anil
On 22 March 2018 at 21:56, Josh Elser <els...@apache.org>
Hi Team,
We have upgraded Phoenix from 4.7.0 to 4.11.0 and started noticing the
attached exception.
Can you help me identify the root cause of the exception? Thanks.
Regards,
Anil
2018-03-21 08:13:19,684 ERROR
com.tst.hadoop.flume.writer.inventory.AccountPersistenceImpl: Error querying
written a MapReduce job to create a dataset for my target table. The
data load to the target table is taking a long time, and I want to reduce load time
by avoiding statement execution or frequent commits.
Any help would be appreciated. Thanks.
Thanks,
Anil
(non-random). Example: time-series data with time as the
leading part of the rowkey.
Another way to avoid salting with an incremental rowkey is to reverse the
leading number of your rowkey, for example: reverse(45668) = 86654.
HTH,
Anil Gupta
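A tiny, purely illustrative sketch of the digit-reversal idea (the key layout is made
up):

public class ReversedKeyPrefix {
    public static void main(String[] args) {
        long sequence = 45668L; // monotonically increasing id
        // Reverse the digits so that consecutive writes spread across regions.
        String reversed = new StringBuilder(Long.toString(sequence)).reverse().toString();
        String rowkey = reversed + "|event-payload-id"; // hypothetical composite key
        System.out.println(rowkey); // prints 86654|event-payload-id
    }
}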
On Fri, Sep 8, 2017 at 10:23 AM, Pradheep Shanmugam <
pradheep.sha
And I forgot to mention that we invoke our Pig scripts through Oozie.
On Mon, Aug 21, 2017 at 2:20 PM, anil gupta <anilgupt...@gmail.com> wrote:
> Sorry, I can't share the Pig script.
> Here is what we are registering:
> REGISTER /usr/lib/phoenix/phoenix-4.7.0-HBase-1.2-client.jar;
Pig script some other way
> (EMR steps) or with different parameters? Details where this works would
> really help out a lot.
>
> Thanks,
> Steve
>
> On Mon, Aug 21, 2017 at 10:23 AM, Steve Terrell <sterr...@oculus360.us>
> wrote:
>
>> Anil,
>>
Hey Steve,
We are currently using EMR5.2 and pig-phoenix is working fine for us. We
are gonna try EMR5.8 next week.
HTH,
Anil
On Fri, Aug 18, 2017 at 9:00 AM, Steve Terrell <sterr...@oculus360.us>
wrote:
> More info...
>
> By trial and error, I tested different EMR versions an
mer.java, I see the following around line 106:
>
> checkClosed();
>
> if (off < 0 || len < 0 || off > b.length - len) {
> throw new ArrayIndexOutOfBoundsException();
>
> You didn't get ArrayIndexOutOfBoundsException - maybe b was null ?
>
> On Thu, Jul 6, 2017 a
aiting for channel to be ready for read
>
> Do you see similar line in region server log ?
>
> Cheers
>
> On Thu, Jul 6, 2017 at 1:48 PM, anil gupta <anilgupt...@gmail.com> wrote:
>
> > Hi All,
> >
> > We are running HBase/Phoenix on EMR5.2(HBase1.2.3 an
with wiping out this table and rebuilding the dataset. We
tried to drop the table and recreate it, but that didn't fix it.
Can anyone please let us know how we can get rid of the above problem? Are
we running into https://issues.apache.org/jira/browse/HBASE-16960?
--
Thanks & Regards,
Anil Gupta
/BI.VIN_IDX/I/d0a6c4b727bb416f840ed254658f3982]
failed. This is recoverable and they will be retried.
2017-05-24 18:00:11,793 INFO [main] mapreduce.LoadIncrementalHFiles: Split
occured while grouping HFiles, retry attempt 1 with 15 files remaining to
group or split
--
Thanks & Regards,
Anil Gupta
serGroupInformation.doAs(
> UserGroupInformation.java:1698)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)/
>
> The hack I am using right now is to set the permissions manually for these
> files while the IndexTool job is running. Is there a better way?
>
of sense having quotes in an integer column, does
> it?
>
> Maybe removing this quotes from the source would solve the problem.
>
> On 30 Mar 2017 18:43, "anil gupta" <anilgupt...@gmail.com> wrote:
>
>> Hi Brian,
>>
>> It seems like Phoenix
Hi Brian,
It seems like Phoenix does not like '' (single quotes) in an integer column.
IMO, it would be better if Phoenix could handle that by providing an option in
CsvBulkLoadTool to specify that '' should be treated as null. Single quotes work
fine for varchar columns.
Thanks,
Anil Gupta
On Thu, Mar 30
at the documentation but I am unable to
find a solution for this.
--
Thanks & Regards,
Anil Gupta
Hi,
I have two tables called PERSON and PERSON_DETAIL. I need to populate the
person detail info into the PERSON record.
Does Phoenix MapReduce support multiple mappers from multiple tables
through MultipleInputs?
Currently I am populating the consolidated detail information into a temporary
find it here:
> https://github.com/apache/bigtop/blob/master/bigtop.bom#L323
>
>
>
> *From: *anil gupta <anilgupt...@gmail.com>
> *Reply-To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
> *Date: *Monday, February 6, 2017 at 4:22 PM
> *To: *"use
Phoenix4.7 with HBase1.2.3?
--
Thanks & Regards,
Anil Gupta
Hello,
I have a Phoenix table which has both child and parent records.
Now I have created a Phoenix MapReduce job to populate a few columns of the
parent record into the child record.
The two ways of populating parent columns into the child record are:
1.
a. Get the parent column information by a Phoenix query for
Hi Mich,
I would recommend using the Phoenix APIs/tools to write data to a Phoenix
table so that secondary indexes are handled seamlessly. Your approach of
**rebuilding** the index after every bulk load will run into scalability
problems as your primary table keeps growing.
~Anil
On Sat, Oct 22
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetar
Hi Mich,
It's recommended to use upper case for table and column names so that you
don't have to explicitly quote them.
~Anil
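For illustration, a short sketch of the difference (connection URL is a placeholder):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IdentifierCase {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             Statement stmt = conn.createStatement()) {
            // Unquoted identifiers fold to upper case, so both statements refer to MY_TABLE.
            stmt.execute("CREATE TABLE my_table (id VARCHAR PRIMARY KEY, name VARCHAR)");
            stmt.executeQuery("SELECT name FROM MY_TABLE").close();

            // Quoted identifiers keep their case and must be quoted on every reference.
            stmt.execute("CREATE TABLE \"myTable\" (\"id\" VARCHAR PRIMARY KEY)");
            stmt.executeQuery("SELECT \"id\" FROM \"myTable\"").close();
        }
    }
}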
On Sun, Oct 23, 2016 at 9:07 AM, Ravi Kiran <maghamraviki...@gmail.com>
wrote:
> Sorry, I meant to say table names are case sensitive.
>
>
table)?
Thanks.
On 13 October 2016 at 15:13, Anil <anilk...@gmail.com> wrote:
> Hi Cheyenne,
>
> Thank you very much.
>
> Load cannot be done in parallel with one jdbc connection. To make it
> parallel, each node must read a set of records
>
> Following is my approa
Hi Cheyenne,
Thank you very much.
Load cannot be done in parallel with one jdbc connection. To make it
parallel, each node must read a set of records
Following is my approach.
1. Create Cluster wide singleton distributed custom service
2. Get all region(s) information (for each records has
Apache Ignite.
On 13 October 2016 at 12:05, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:
> May I ask which in memory db are you using?
>
You are correct. Not all the data, but with a specific start row and end row.
Thanks.
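If it helps, a minimal sketch of pulling region boundaries with the plain HBase client
API (the table name is a placeholder); the start/end keys could then drive parallel
range reads per node:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.Pair;

public class RegionBoundaries {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             RegionLocator locator = conn.getRegionLocator(TableName.valueOf("MY_TABLE"))) {
            Pair<byte[][], byte[][]> keys = locator.getStartEndKeys();
            for (int i = 0; i < keys.getFirst().length; i++) {
                System.out.println(Bytes.toStringBinary(keys.getFirst()[i])
                        + " .. " + Bytes.toStringBinary(keys.getSecond()[i]));
            }
        }
    }
}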
On 13 October 2016 at 11:39, Cheyenne Forbes <
cheyenne.osanu.for...@gmail.com> wrote:
> Hi Anil,
>
> Basically what you want to do is copy all the data you had input with
> Phoenix to your in memory db?
>
Do you have any inputs ?
On 12 October 2016 at 15:45, Anil <anilk...@gmail.com> wrote:
> HI,
>
> I am trying to load Phoenix data into In-Memory DB.
>
> Is there any way to get the regions information (start row and end row) of
> a table based on a column of composite pr
base using Phoenix
> ? It didn't work when we tried and hence posting it here. Thanks in advance
>
> --
> Thanks,
> Sanooj Padmakumar
>
--
Thanks & Regards,
Anil Gupta
Thanks for quick response. James. I'll try out some stuff.
On Sun, Oct 2, 2016 at 5:00 PM, James Taylor <jamestay...@apache.org> wrote:
> Option #2 is fine. Connections are cheap in Phoenix.
>
>
> On Sunday, October 2, 2016, anil gupta <anilgupt...@gmail.com&
. But, AFAIK,
creating a connection every time and then the PreparedStatement is expensive.
Right?
Please let me know if there is any better approach that I am missing.
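For what it's worth, a minimal sketch of that option (URL and table are placeholders);
the point is one short-lived connection per unit of work rather than one shared
connection across threads:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class PerTaskUpsert {
    // Called independently from each worker thread/task.
    public static void upsert(String id, String name) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host:2181");
             PreparedStatement ps = conn.prepareStatement(
                 "UPSERT INTO MY_TABLE (ID, NAME) VALUES (?, ?)")) {
            ps.setString(1, id);
            ps.setString(2, name);
            ps.executeUpdate();
            conn.commit();
        }
    }
}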
On Sun, Oct 2, 2016 at 3:50 PM, James Taylor <jamestay...@apache.org> wrote:
> Hi Anil,
> Make sure you're not shar
at a much higher throughput and volume of data,
but we never ran into this problem. Can anyone provide more details on
why we are getting a ConcurrentModificationException while doing upserts?
--
Thanks & Regards,
Anil Gupta
Hi James,
>
> I found this for Hbase
> https://issues.apache.org/jira/browse/HBASE-3529
>
> it's a patch that can be added to HBase, based on what I am seeing
>
--
Thanks & Regards,
Anil Gupta
ks & Regards,
Anil Gupta
Hi,
Please use the attached utils. Let me know if you see any issue. Thanks.
Regards,
Anil
On 16 September 2016 at 22:45, Krishna <research...@gmail.com> wrote:
> Hi,
>
> Does Phoenix have API for converting a rowkey (made up of multiple
> columns) and in ImmutableBytesR
Thanks James.
I am able to extract columns using row schema.
On 31 August 2016 at 02:50, James Taylor <jamestay...@apache.org> wrote:
> Anil,
> Phoenix's API is JDBC, so just be aware that you're embarking on usage of
> unsupported and mostly undocumented APIs. Not to say t
Hi Michael and all,
Did you get a chance to look into this? Thanks.
On 26 August 2016 at 07:38, Anil <anilk...@gmail.com> wrote:
> HI Michael,
>
> Following the table create and upsert query -
>
> CREATE TABLE SAMPLE(TYPE VARCHAR NOT NULL, SOURCE VARCHAR NOT
oy all electronic and printed
> copies of this communication and any attachment.
>
>
>
> *From: *Anil <anilk...@gmail.com>
> *Reply-To: *"user@phoenix.apache.org" <user@phoenix.apache.org>
> *Date: *Thursday, August 25, 2016 at 10:08 AM
> *To: *"user@pho
PVarchar.INSTANCE.toBytes("label"),
QueryConstants.SEPARATOR_BYTE_ARRAY,
PVarchar.INSTANCE.toBytes("direction"),
QueryConstants.SEPARATOR_BYTE_ARRAY,
PUnsignedLong.INSTANCE.toBytes(1235464603853L),
PVarchar.INSTANCE.toBytes("target"));
i am trying to extract TARGE
\u");
System.out.println(columns.length);
The last entry in the array has the special character too, along with the actual value.
Can you point out the bug in the above code, if any? Thanks.
Regards,
Anil
On 25 August 2016 at 02:32, Michael McAllister <mmcallis...@homeaway.com>
wrote:
> Anil
>
Hi,
I have created a primary key with multiple columns in Phoenix.
Is there any way to extract the column values from the HBase row key? Please
help.
Thanks,
Anil
/aesop/hbasedatalayer/upsert/HBaseUpsertDataLayer.java>
>
> On Tue, Aug 2, 2016 at 10:31 PM, Anil Gupta <anilgupt...@gmail.com> wrote:
>
>> Are you using a prepared statement for upserts? IMO, the query should be
>> compiled only once when a prepared statement is
the Phoenix system HBase tables, the global/local secondary index tables, and then the
primary Phoenix table.
I haven't done it yet, but the above is the way I would approach it.
Thanks,
Anil Gupta.
On Thu, Jun 9, 2016 at 6:49 AM, Jean-Marc Spaggiari <jean-m...@spaggiari.org
> wrote:
> Hi,
>
the cluster?
Thanks,
Anil Gupta
On Thu, May 26, 2016 at 7:19 AM, Lucie Michaud <
lucie.mich...@businessdecision.com> wrote:
> Hello everybody,
>
>
>
> For a few days I developed a MapReduce code to insert values in HBase with
> Phoenix. But the code runs only in local an
You can simply write a MapReduce job to accomplish your business logic. The
output format of the job will be PhoenixOutputFormat.
Have a look at PhoenixOutputFormat for more details.
Sent from my iPhone
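A rough, incomplete sketch of wiring a job to PhoenixOutputFormat (table, columns and
the writable class are hypothetical; input format and mapper/reducer setup are
omitted):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;
import org.apache.phoenix.mapreduce.PhoenixOutputFormat;
import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;

public class PhoenixMrSketch {

    // Output value type: one field per column listed in setOutput() below.
    public static class MyRow implements DBWritable, Writable {
        String id;
        String name;
        public void write(PreparedStatement ps) throws SQLException {
            ps.setString(1, id);
            ps.setString(2, name);
        }
        public void readFields(ResultSet rs) throws SQLException { /* not used for output */ }
        public void write(DataOutput out) throws IOException { }
        public void readFields(DataInput in) throws IOException { }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "phoenix-output-sketch");
        job.setJarByClass(PhoenixMrSketch.class);
        // The mapper/reducer emits <NullWritable, MyRow>; each MyRow becomes an UPSERT.
        job.setOutputFormatClass(PhoenixOutputFormat.class);
        PhoenixMapReduceUtil.setOutput(job, "MY_TABLE", "ID,NAME");
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(MyRow.class);
        // ... input format and mapper/reducer classes for the business logic go here ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}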
> On May 18, 2016, at 10:53 PM, anupama agarwal wrote:
>
> Hi All,
>
> I have
.java:108)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoSuchMethodError:
> org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment.getRegion()Lorg/apache/hadoop/hbase/regionserver/HRegion;
>
Anil: The above error seems to hint that Phoenix 4.5.2 is not compatible with
CDH5.
Have a look at https://github.com/eHarmony/pho
It is specifically built for Phoenix/HBase.
HTH,
Anil
On Fri, Apr 29, 2016 at 8:02 AM, Gustavo Oliveira <ptgustav...@gmail.com>
wrote:
> I have investigated ORM tools that let you "write / access" Java
> objects in H
index into memory by setting the IN_MEMORY flag and seeing how much
memory it takes. Then you can estimate the memory usage of other indexes on the basis of
your experiment.
Sent from my iPhone
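One way to run that experiment (a sketch only, assuming HBase 1.x-era admin APIs; the
index table and column family names are guesses and should be checked in the hbase
shell first):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

public class MarkIndexInMemory {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            TableName index = TableName.valueOf("MY_INDEX");
            HTableDescriptor desc = admin.getTableDescriptor(index);
            // Keep the existing column family settings and only flip the in-memory flag.
            HColumnDescriptor cf = desc.getFamily(Bytes.toBytes("0"));
            cf.setInMemory(true);
            admin.modifyColumn(index, cf);
        }
    }
}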
> On Mar 22, 2016, at 9:05 AM, Sumit Nigam <sumit_o...@yahoo.com> wrote:
>
> Thank you Anil.
>
> An
Global indexes are stored in a separate HBase table, so you can estimate the
memory footprint by looking at the current data size of that index.
HTH,
Anil Gupta
On Tue, Mar 22, 2016 at 7:19 AM, Sumit Nigam <sumit_o...@yahoo.com> wrote:
> Hi,
>
> I am trying to estima
Hi,
I see CsvBulkLoadTool accepts the delimiter as a single character only, so I have
to customize it. Do we have documentation of the steps that the bulk load tool
performs? Please share.
Thanks,
Anil
website to
find out how to use Phoenix to query data.
Thanks,
Anil
On Wed, Mar 16, 2016 at 8:13 AM, Sanooj Padmakumar <p.san...@gmail.com>
wrote:
> Hi Kevin,
>
> You can access the data created using Phoenix with the Java HBase API. Use
> the sample code below:
>
> Keep
Hi James,
Due to a typo we forgot to put the CF name as a prefix to the CQ name for 1 of the 1100
columns of that table. That led to the creation of a CF with the name "0". After
fixing the typo, we only have 2 CFs.
Thanks,
Anil Gupta
On Thu, Feb 18, 2016 at 11:20 AM, James Taylor <jamestay...@apache.org>
w
settings. How are you authenticating with Phoenix/HBase?
Sorry, I don't remember the exact Kerberos setting that we had.
HTH,
Anil Gupta
On Mon, Mar 14, 2016 at 11:00 AM, Sanooj Padmakumar <p.san...@gmail.com>
wrote:
> Hi
>
> We have a REST-style microservice application fetching
.
On Sat, Mar 12, 2016 at 12:00 PM, James Taylor <jamestay...@apache.org>
wrote:
> Hi Anil,
> Phoenix estimates the ratio between the data table and index table as
> shown below to attempt to get the same number of splits in your index table
> as your data table.
>
> /
--
Thanks & Regards,
Anil Gupta
Yes, global indexes are stored in a separate HBase table and their region
locations are not related to the main table's regions.
Sent from my iPhone
> On Mar 12, 2016, at 4:34 AM, Saurabh Agarwal (BLOOMBERG/ 731 LEX)
> wrote:
>
> Thanks. I will try that.
>
> Having
ssentially, doing some
aggregate queries.
--
Thanks & Regards,
Anil Gupta
to Phoenix4.4
with a Ruby gem of Phoenix 4.2? If not, then what would we need to
do? (Upgrade the Ruby gem to Phoenix 4.4?)
Here is the git: https://github.com/wxianfeng/ruby-phoenix
--
Thanks & Regards,
Anil Gupta
; varchar)
> -- invalid syntax; pseudo code of what I wish I could do.
>
> select "dynamic_field" from MY_VIEW
>
> Should I create a JIRA for a new feature? Or is this fundamentally not
> possible?
>
> Thanks,
> Steve
>
--
Thanks & Regards,
Anil Gupta
--
Thanks & Regards,
Anil Gupta
ele...@gmail.com> wrote:
> Also, was your change to phoenix.upsert.batch.size on the client or on
> the region server or both?
>
> On Wed, Feb 17, 2016 at 2:57 PM, Neelesh <neele...@gmail.com> wrote:
>
>> Thanks Anil. We've upped phoenix.coprocessor.maxServerCacheTimeToLiv
commit(PhoenixConnection.java:456)
> ~[phoenix-client-4.4.0-HBase-1.1.jar:4.4.0-HBase-1.1]
>
>
--
Thanks & Regards,
Anil Gupta
, phoenix.upsert.batch.size is 1000. Hence, the commits were
failing with a huge batch size of 1000.
Thanks,
Anil Gupta
On Sun, Feb 14, 2016 at 8:03 PM, Heng Chen <heng.chen.1...@gmail.com> wrote:
> I am not sure whether "upsert batch size in phoenix" equals HBase Clien
My Phoenix upsert batch size is 50. Do you mean to say that 50 is also a lot?
However, AsyncProcess is complaining about 2000 actions.
I tried with an upsert batch size of 5 also, but it didn't help.
On Sun, Feb 14, 2016 at 7:37 PM, anil gupta <anilgupt...@gmail.com> wrote:
> My phoenix ups
for 2000
actions to finish
I have never seen anything like this. Can anyone give me pointers about
this problem?
--
Thanks & Regards,
Anil Gupta
would
not timeout in 18ms
On Sun, Feb 14, 2016 at 12:44 PM, anil gupta <anilgupt...@gmail.com> wrote:
> Hi,
>
> We are using phoenix4.4, hbase 1.1(hdp2.3.4).
> I have a MR job that is using PhoenixOutputFormat. My job keeps on failing
> due to following error:
>
we can still use TinyInt and SmallInt in a Phoenix table
while using the Pig-Phoenix loader?
If there is currently no way to do it, can we enhance the Pig-Phoenix loader to
support TinyInt and SmallInt?
--
Thanks & Regards,
Anil Gupta
I think Ravi answered my question. One of my teammates was working on the
Pig-Phoenix loader, so I'll share this with him. We will update once we try
this out.
Thanks, guys.
On Sat, Feb 13, 2016 at 10:01 AM, James Taylor <jamestay...@apache.org>
wrote:
> I think the question Anil is asking
Hi James,
Thanks for your reply. My problem was resolved by setting
phoenix.coprocessor.maxServerCacheTimeToLiveMs to 3 minutes and
phoenix.upsert.batch.size to 10. I think I can increase
phoenix.upsert.batch.size to a higher value but haven't had the opportunity to
try that out yet.
Thanks,
Anil
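If it is the MapReduce path, a small sketch of where the client-side batch size goes
(the value is just an example); phoenix.coprocessor.maxServerCacheTimeToLiveMs is a
server-side setting and belongs in hbase-site.xml on the region servers:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.mapreduce.Job;

public class UpsertJobConfig {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Number of rows PhoenixOutputFormat buffers before committing a batch.
        conf.setLong("phoenix.upsert.batch.size", 10L);
        Job job = Job.getInstance(conf, "phoenix-upsert-job");
        // ... rest of the job setup (output format, table, columns) as usual ...
    }
}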
> On Jan 6, 2016, at 11:14 PM, anil gupta <anilgupt...@gmail.com> wrote:
>
> Hi All,
>
> I am using Phoenix 4.4, and I have created a global secondary index on one table. I am
> running a MapReduce job with 20 reducers to load data into this table (maybe I am
> doing 50 writ
-on-hadoop/
4.
https://community.cloudera.com/t5/CDH-Manual-Installation/Kerberos-integration-issue-s-with-hadoop-HA/td-p/24794
Thanks,
Anil Gupta
On Tue, Jan 5, 2016 at 8:18 PM, Ns G <nsgns...@gmail.com> wrote:
> Hi Team,
>
> Any idea with this issue? We are struck up with this is
it's a key/value pair. :)
Is there any way I can achieve the above? Would the community like to
have a key/value API?
--
Thanks & Regards,
Anil Gupta
re or not. We have no plans to do a 4.5.3 release.
> FYI, Andrew put together a 4.6 version that works with CDH here too:
> https://github.com/chiastic-security/phoenix-for-cloudera. We also plan
> to do a 4.7 release soon.
>
> Thanks,
> James
>
>
> On Wed, Dec 30, 2015 at 4:30
Hi Gabriel,
Thanks for the info. What is the backward compatibility policy of Phoenix
releases? Would a 4.5.3 client jar work with a Phoenix 4.4 server jar?
Are 4.4 and 4.5 considered two major releases or minor releases?
Thanks,
Anil Gupta
On Tue, Dec 29, 2015 at 11:11 PM, Gabriel Reid <gabrie
.java:2405)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1232)
at
org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil.initialize(MRWebAppUtil.java:51)
at
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1444)
Is my bulkloader command incorrect?
Thanks,
ration.get(Configuration.java:1232)
at
org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil.initialize(MRWebAppUtil.java:51)
at
org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1444)
--
Thanks & Regards,
Anil Gupta
, Please follow the post and let us know how it goes. It should be
pretty easy.
Thanks,
Anil Gupta
On Tue, Dec 22, 2015 at 10:24 PM, James Taylor <jamestay...@apache.org>
wrote:
> See
> https://phoenix.apache.org/faq.html#How_do_I_connect_to_secure_HBase_cluster
>
> On Tue, Dec 2
Hi Akhilesh,
You can add the HBase/Hadoop config directories to the application classpath. You
don't need to copy the conf files into your app's lib folder.
Thanks,
Anil Gupta
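A small sketch of the programmatic alternative (paths are examples only); when
/etc/hbase/conf and /etc/hadoop/conf are already on the classpath, HBaseConfiguration
picks the files up automatically:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class LoadClusterConfig {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Explicitly add the cluster config files if the conf dirs are not on the classpath.
        conf.addResource(new Path("/etc/hbase/conf/hbase-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        System.out.println("zookeeper quorum = " + conf.get("hbase.zookeeper.quorum"));
    }
}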
On Wed, Dec 9, 2015 at 2:54 PM, Biju N <bijuatapa...@gmail.com> wrote:
> Thanks Akhilesh/Mujtaba for your suggestions. Ad