Re: Load into Phoenix table via CsvBulkLoadTool cannot find table and fails

2016-10-23 Thread Ravi Kiran
Sorry, I meant to say table names are case sensitive. On Sun, Oct 23, 2016 at 9:06 AM, Ravi Kiran <maghamraviki...@gmail.com> wrote: > Hi Mich, Apparently, the tables are case sensitive. Since you have enclosed the table name in double quotes when creating the table, please pass the sa

Re: Load into Phoenix table via CsvBulkLoadTool cannot find table and fails

2016-10-23 Thread Ravi Kiran
Hi Mich, Apparently, the tables are case sensitive. Since you have enclosed the table name in double quotes when creating the table, please pass the same quoted name when running the bulk load job. HADOOP_CLASSPATH=/home/hduser/jars/hbase-protocol-1.2.3.jar:/usr/lib/hbase/conf hadoop jar phoenix-4.8.1-HBase-1.2-client.jar
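[Editor's illustration, not from the thread: a minimal Java sketch of invoking CsvBulkLoadTool with the quoted table name preserved. The table name, input path, and ZooKeeper quorum are placeholders, and the option names should be checked against your Phoenix version.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.util.ToolRunner;
    import org.apache.phoenix.mapreduce.CsvBulkLoadTool;

    public class BulkLoadDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Pass the double-quoted, mixed-case table name exactly as it was created.
            // "myTable", the input path, and the quorum are hypothetical placeholders.
            int exitCode = ToolRunner.run(conf, new CsvBulkLoadTool(), new String[] {
                    "--table", "\"myTable\"",
                    "--input", "/data/input.csv",
                    "--zookeeper", "localhost:2181"
            });
            System.exit(exitCode);
        }
    }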

Re: TinyInt, SmallInt not supported in Pig Phoenix loader

2016-02-17 Thread Ravi Kiran
uys. On Sat, Feb 13, 2016 at 10:01 AM, James Taylor <jamestay...@apache.org> wrote: I think the question Anil is asking is "Does Pig have support for TinyInt (byte) and SmallInt (short)?" I do

Re: Dynamic column using Pig STORE function

2016-02-17 Thread Ravi Kiran
Hi, Unfortunately, we don't support dynamic columns within the phoenix-pig module. Currently, the only two options for PhoenixHBaseStorage are specifying the table name or a set of table columns. We can definitely support dynamic columns; please feel free to create a ticket. Regards Ravi On

Re: TinyInt, SmallInt not supported in Pig Phoenix loader

2016-02-13 Thread Ravi Kiran
Hi Anil, We do a mapping of PTinyint and PSmallint to Pig DataType.INTEGER: https://github.com/apache/phoenix/blob/master/phoenix-pig/src/main/java/org/apache/phoenix/pig/util/TypeUtil.java#L94. Can you please share the error you are seeing? HTH Ravi. On Sat, Feb 13, 2016 at 3:16 AM,

Re: Spark Phoenix Plugin

2016-02-09 Thread Ravi Kiran
Hi Pierre, Try building the artifacts from https://github.com/chiastic-security/phoenix-for-cloudera. Hopefully it helps. Regards Ravi. On Tue, Feb 9, 2016 at 10:04 AM, Benjamin Kim wrote: > Hi Pierre, > I found this article about how Cloudera’s version

Re: Using Sqoop to load HBase tables , Data not visible via Phoenix

2016-01-28 Thread Ravi Kiran
Hi Manya, We are working with the Sqoop team on our patch [1], which enables data imports directly into Phoenix tables. In the meantime, you can apply the patch to the Sqoop 1.4.6 source and give it a try. Please do let us know how it goes. [1] https://issues.apache.org/jira/browse/SQOOP-2649

Re: Guidance on how many regions to plan for

2016-01-18 Thread Ravi Kiran
Hi Zack, The limit of 32 HFiles comes from the configuration property MAX_FILES_PER_REGION_PER_FAMILY, which defaults to 32 in LoadIncrementalHFiles. You can try updating your configuration with a larger value and see if it works.
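[Editor's illustration, not from the thread: a minimal Java sketch of raising that limit programmatically. The property key shown is believed to back MAX_FILES_PER_REGION_PER_FAMILY, so verify it against your HBase version; the value 128 is an arbitrary example.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class BulkLoadLimitConfig {
        public static Configuration withHigherHFileLimit() {
            Configuration conf = HBaseConfiguration.create();
            // Assumed key behind LoadIncrementalHFiles.MAX_FILES_PER_REGION_PER_FAMILY;
            // raises the per-region, per-family HFile cap from the default of 32.
            conf.setInt("hbase.mapreduce.bulkload.max.hfiles.perRegion.perFamily", 128);
            return conf;
        }
    }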

Re: Question about support for ARRAY data type with Pig Integration

2016-01-08 Thread Ravi Kiran
Hi Rafa, I will be working on this ticket https://issues.apache.org/jira/browse/PHOENIX-2584. You can add yourself as a watcher to the ticket to see the progress. Regards Ravi On Wed, Dec 23, 2015 at 3:21 AM, rafa wrote: > Hi all !! > > Just a quick question. I see in: >

Re: Connection pooling in Phoenix ?

2015-11-05 Thread Ravi Kiran
Hi Dmitry, James has answered this a couple of times in earlier threads. I found these useful; hope they help! https://groups.google.com/forum/#!topic/phoenix-hbase-user/lL-SVFeFpNg https://groups.google.com/forum/#!topic/phoenix-hbase-user/U3hCUhRTZV8 Regards Ravi On Thu, Nov 5, 2015 at 2:21 PM,
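[Editor's illustration, not from the thread: the usual guidance is that Phoenix JDBC connections are inexpensive to create, so pooling is generally unnecessary. A minimal sketch of opening a short-lived connection per unit of work; MY_TABLE is a placeholder table name.]

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PhoenixQuery {
        public static long countRows(String zkQuorum) throws Exception {
            // Open a fresh connection per unit of work instead of pooling.
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:" + zkQuorum);
                 PreparedStatement stmt = conn.prepareStatement("SELECT COUNT(*) FROM MY_TABLE");
                 ResultSet rs = stmt.executeQuery()) {
                rs.next();
                return rs.getLong(1);
            }
        }
    }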

Re: replace CsvToKeyValueMapper with my implementation

2015-10-29 Thread Ravi Kiran
It would be great if we could provide an API and have end users provide an implementation of how to parse each record. This way, we could move away from bulk loading only CSV and have JSON and other input formats bulk loaded onto Phoenix tables. I can take that one up. Would it be something the

Re: Salting and pre-splitting

2015-10-07 Thread Ravi Kiran
Hi Sumit, The PhoenixInputFormat gets the number of splits based on the region boundaries. However, if guideposts are configured (https://phoenix.apache.org/update_statistics.html) you might not see a one-to-one mapping. @James, please correct me if I am wrong here. You are right on the salting

Re: org.apache.hadoop.hbase.TableNotFoundException: SYSTEM.CATALOG

2015-09-21 Thread Ravi Kiran
Hi, Since you have just reset HBase, see if the table name 'SYSTEM.CATALOG' exists as a znode under the ZooKeeper path /hbase/tables/. If so, you can run rmr on it from the ZooKeeper shell. Hope it helps. Regards On Mon, Sep 21, 2015 at 8:48 AM, Konstantinos Kougios <kostas.koug...@googlemail.com>

Re: PhoenixMapReduceUtil multiple tables

2015-09-16 Thread Ravi Kiran
s supported via the MultiTableOutputFormat class. This could be used as inspiration for Phoenix's implementation. --Asher On Wed, Sep 16, 2015 at 1:00 AM, Ns G <nsgns...@gmail.com> wrote: Hi Ravi, Raised the PHOENIX-2266 JIRA for the same. Than

Re: PhoenixMapReduceUtil multiple tables

2015-09-15 Thread Ravi Kiran
ing this? Thanks, Durga Prasad On Mon, Aug 10, 2015 at 6:18 AM, Ravi Kiran <maghamraviki...@gmail.com> wrote: Hi Peeranat, With the current implementation, there isn't an option to wri

Re: PhoenixHBaseStorage issues with Salted table

2015-09-01 Thread Ravi Kiran
Hi Satish, This was reported and fixed as part of https://issues.apache.org/jira/browse/PHOENIX-2181. For a quick turnaround, you can do this: STORE c into 'hbase://checks/enterprise_id,business_date' using org.apache.phoenix.pig.PhoenixHBaseStorage('zkquorum','-batchSize 5000');

Re: REG: Getting the current value of a sequence

2015-08-22 Thread Ravi Kiran
Hi Satya, Unless you have called NEXT VALUE FOR on a sequence at least once, you cannot get its CURRENT VALUE. So, instead of getting the CURRENT VALUE, you can get the NEXT VALUE and then pass it to your upsert query. Hope this helps. On Sat, Aug 22, 2015 at 7:59 AM, Ns G
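[Editor's illustration, not from the thread: a minimal JDBC sketch of that pattern. The sequence MY_SEQ, table MY_TABLE, and quorum are hypothetical placeholders.]

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SequenceUpsert {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                 Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE SEQUENCE IF NOT EXISTS MY_SEQ");
                // Fetch the next value first; CURRENT VALUE FOR only works after this.
                ResultSet rs = stmt.executeQuery("SELECT NEXT VALUE FOR MY_SEQ");
                rs.next();
                long id = rs.getLong(1);
                // Reuse the fetched value in the UPSERT.
                try (PreparedStatement ps =
                         conn.prepareStatement("UPSERT INTO MY_TABLE (ID, NAME) VALUES (?, ?)")) {
                    ps.setLong(1, id);
                    ps.setString(2, "example");
                    ps.executeUpdate();
                }
                conn.commit();
            }
        }
    }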

Re: Phoenix Pig integration

2015-08-10 Thread Ravi Kiran
Hi Pari, I wrote a quick test and there indeed seems to be an issue when SALT_BUCKETS is specified on the table. Can you please raise a JIRA ticket? In the meanwhile, can you try the following to work around the issue: raw = LOAD 'hbase://query/SELECT CLIENTID,EMPID,NAME FROM HIRES' USING

Re: PhoenixMapReduceUtil multiple tables

2015-08-09 Thread Ravi Kiran
Hi Peeranat, With the current implementation, there isn't an option to write to multiple Phoenix tables. Thanks Ravi On Fri, Aug 7, 2015 at 12:39 PM, Peeranat Fupongsiripan peerana...@gmail.com wrote: Hi I'm new to Phoenix. I'm wondering whether it's possible to use one map/reduce
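[Editor's illustration, not from the thread: a minimal sketch of the single-output-table setup that PhoenixMapReduceUtil does support, loosely following http://phoenix.apache.org/phoenix_mr.html. The table and column names are placeholders and exact method signatures may vary by Phoenix version.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.phoenix.mapreduce.PhoenixOutputFormat;
    import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;

    public class SingleTableOutputJob {
        public static Job configure() throws Exception {
            Configuration conf = HBaseConfiguration.create();
            Job job = Job.getInstance(conf, "phoenix-single-table-output");
            // Exactly one target Phoenix table per job; STOCK_STATS and its
            // column list are placeholder names.
            PhoenixMapReduceUtil.setOutput(job, "STOCK_STATS", "STOCK_NAME,MAX_RECORDING");
            job.setOutputFormatClass(PhoenixOutputFormat.class);
            return job;
        }
    }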

Re: How fast is upsert select?

2015-07-22 Thread Ravi Kiran
Hi, Since you are talking about billions of rows, why don't you try the MapReduce route to speed up the process? You can take a look at how IndexTool.java(

Re: Pig - unable to read a Phoenix Table

2015-07-09 Thread Ravi Kiran
Hi Durga, Can you share the errors you get when using the LOAD the way I specified above? Also, can you confirm whether the Phoenix table is ndm_17.table1 and not NDM_17.TABLE1? Regards Ravi On Thu, Jul 9, 2015 at 10:09 AM, Ns G nsgns...@gmail.com wrote: Hi Ravi Kiran, Thanks for your response

Re: Apache Phoenix Flume plugin usage

2015-06-11 Thread Ravi Kiran
libs that need to be in Flume's classpath, along with the zookeeperQuorum pointing to the appropriate cluster? Thanks! On Thu, Jun 11, 2015 at 12:58 PM, Ravi Kiran maghamraviki...@gmail.com wrote: Hi Buntu, Apparently, the necessary classes related to the Flume client are already part

Re: REG: Phoenix MR issue

2015-06-04 Thread Ravi Kiran
MR issue To: Ravi Kiran maghamraviki...@gmail.com Hi Ravi, Thanks for taking the time. Below is my job setup code. I now used the reducer's setup method to read the file. I am giving only part of the code due to access restrictions: final String selectQuery = SELECT * FROM Table1

Re: ScanningResultIterator resiliency

2015-04-13 Thread Ravi Kiran
for reading the data from an existing table, and no other process is using HBase, so I think that's not my case. Why wouldn't you want to recreate a new scan if the old one dies? Best, Flavio On Mon, Apr 13, 2015 at 6:35 PM, Ravi Kiran maghamraviki...@gmail.com wrote: Hi Flavio, One good blog

Re: ScanningResultIterator resiliency

2015-04-13 Thread Ravi Kiran
Hi Flavio, Currently, the default scanner caching value that Phoenix runs with is 1000. You can try reducing that number by updating the property hbase.client.scanner.caching in your hbase-site.xml. If you are doing a lot of processing for each record in your Mapper, you might
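[Editor's illustration, not from the thread: a minimal sketch of setting that property programmatically instead of in hbase-site.xml; the value 100 is an arbitrary example.]

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ScannerCachingConfig {
        public static Configuration withSmallerScannerCache() {
            Configuration conf = HBaseConfiguration.create();
            // Fewer rows fetched per scanner RPC, so slow per-record processing
            // in the mapper is less likely to hit scanner timeouts.
            conf.setInt("hbase.client.scanner.caching", 100);
            return conf;
        }
    }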

Re: Reading Keys written with salting

2015-04-01 Thread Ravi Kiran
Hi Flavio, If you are writing a MapReduce job, I would highly recommend using the custom InputFormat classes that handle these: http://phoenix.apache.org/phoenix_mr.html. Regards Ravi On Wed, Apr 1, 2015 at 12:16 AM, Flavio Pompermaier pomperma...@okkam.it wrote: Any help here? On
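[Editor's illustration, not from the thread: a minimal sketch of reading a Phoenix table through the MapReduce integration so that salt bytes and composite keys are decoded for you. The table, columns, and writable class are placeholders, and the setInput overloads may differ between Phoenix versions.]

    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.db.DBWritable;
    import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;

    public class SaltedTableInputJob {
        // Placeholder record type; only readFields matters on the input side.
        public static class MyRecord implements DBWritable {
            long rowId;
            String val;
            public void readFields(ResultSet rs) throws SQLException {
                rowId = rs.getLong("ROW_ID");
                val = rs.getString("VAL");
            }
            public void write(PreparedStatement ps) throws SQLException {
                ps.setLong(1, rowId);
                ps.setString(2, val);
            }
        }

        public static Job configure() throws Exception {
            Configuration conf = HBaseConfiguration.create();
            Job job = Job.getInstance(conf, "phoenix-input-job");
            // The input format decodes the row key, including the leading salt byte.
            PhoenixMapReduceUtil.setInput(job, MyRecord.class, "MY_TABLE",
                    "SELECT ROW_ID, VAL FROM MY_TABLE");
            return job;
        }
    }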

Re: Hadoop Yarn with Phoenix?

2015-03-23 Thread Ravi Kiran
Hi Matt, Your understanding is right. You don't need YARN services running as long as you don't run any MapReduce jobs (CSV bulk loading, MapReduce, phoenix-pig). Regards Ravi On Mon, Mar 23, 2015 at 7:51 AM, Matthew Johnson matt.john...@algomi.com wrote: Hi all, Currently when I

Re: Using Hive on Phoenix Tables

2015-03-15 Thread Ravi Kiran
Hi William, Phoenix upper-cases unquoted table names when creating them, so I believe the table that was created is PERSON, and hence it's not matching Person. Can you please try giving the table name in upper case? Regards Ravi On Sun, Mar 15, 2015 at 1:03 AM,
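[Editor's illustration, not from the thread: a minimal JDBC sketch of the identifier normalization; the table and column names are placeholders.]

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class IdentifierCase {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                 Statement stmt = conn.createStatement()) {
                // Unquoted identifiers are normalized to upper case, so this is stored as PERSON.
                stmt.execute("CREATE TABLE IF NOT EXISTS Person (id BIGINT NOT NULL PRIMARY KEY, name VARCHAR)");
                // Only a double-quoted identifier keeps its mixed case, and it must be quoted in every query.
                stmt.execute("CREATE TABLE IF NOT EXISTS \"Person\" (id BIGINT NOT NULL PRIMARY KEY, name VARCHAR)");
            }
        }
    }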

Re: PhoenixOutputFormat in MR job

2015-03-01 Thread Ravi Kiran
Hi Krishna, I assume you have already taken a look at the example here: http://phoenix.apache.org/phoenix_mr.html. Is there a need to compute the hash byte in the MR job? Can you please elaborate a bit more on what the hash byte is? Are keys and values stored in BytesWritable before doing a

Re: MapReduce over Multiple Clusters

2015-02-11 Thread Ravi Kiran
Hi Geoffrey, In the current implementation, we wouldn't be able to as we take the zookeeper quorum information from the hbase-site.xml in the classpath. However, we can easily extend the current implementation to support this. I have raised this

Re: Pig vs Bulk Load record count

2015-02-03 Thread Ravi Kiran
user@phoenix.apache.org Subject: RE: Pig vs Bulk Load record count Hello Ralph, Check whether the Pig script produces keys that overlap (that would explain the reduction in the number of rows). Good luck, Constantin From: Ravi Kiran [mailto:maghamraviki

Re: Pig vs Bulk Load record count

2015-02-03 Thread Ravi Kiran
), but not a showstopper for the 4.3 release. Would you mind filing a JIRA for it? Thanks, James On Tue, Feb 3, 2015 at 4:31 PM, Ravi Kiran maghamraviki...@gmail.com wrote: Hi Ralph, Glad it is working!! Regards Ravi On Tue, Feb 3, 2015 at 3:29 PM, Perko, Ralph J ralph.pe...@pnnl.gov wrote

Re: Pig vs Bulk Load record count

2015-02-03 Thread Ravi Kiran
Hi Ralph, Also, can you please attach the schema to the JIRA as well, using DESCRIBE Z, for the case where you don't explicitly specify the data type for the columns? Regards Ravi On Tue, Feb 3, 2015 at 4:47 PM, Ravi Kiran maghamraviki...@gmail.com wrote: Hi Ralph, Also, can you please have

Re: Pig vs Bulk Load record count

2015-02-02 Thread Ravi Kiran
and there are no duplicates Thanks! Ralph __ Ralph Perko Pacific Northwest National Laboratory (509) 375-2272 ralph.pe...@pnnl.gov From: Ravi Kiran maghamraviki...@gmail.com Reply-To: user@phoenix.apache.org Date: Monday, February 2

Re: Pig vs Bulk Load record count

2015-02-02 Thread Ravi Kiran
email. Thanks! Ralph -- From: Ravi Kiran [maghamraviki...@gmail.com] Sent: Monday, February 02, 2015 5:03 PM To: user@phoenix.apache.org Subject: Re: Pig vs Bulk Load record count Hi Ralph, Is it possible to share the CREATE TABLE command as I

Re: Pig vs Bulk Load record count

2015-02-02 Thread Ravi Kiran
count(1) from TEST; __ Ralph Perko Pacific Northwest National Laboratory (509) 375-2272 ralph.pe...@pnnl.gov From: Ravi Kiran maghamraviki...@gmail.com Reply-To: user@phoenix.apache.org Date: Monday, February 2

Re: Pig vs Bulk Load record count

2015-02-02 Thread Ravi Kiran
Hi Ralph, That's definitely a cause for worry. Can you please share the UPSERT query being built by Phoenix? You should see it in the logs in an entry starting with "Phoenix Generic Upsert Statement:". Also, what do the MapReduce counters say for the job? If possible, can you share the Pig script as

Re: Mapreduce job exception when using Apache Spark to query phoenix tables

2015-01-04 Thread Ravi Kiran
Hi, You can read more about the MapReduce integration here: http://phoenix.apache.org/phoenix_mr.html. A quick, simple Spark program can be found at https://gist.github.com/mravi/444afe7f49821819c987. Regarding snapshot support, there is a JIRA

Re: Flume to Phoenix as Sink Issue

2014-12-21 Thread Ravi Kiran
this flume-ng agent -c conf -f /opt/flume/conf/apache.conf -n agent -Dflume.root.looger=DEBUG,console Thanks Divya N On Sat, Dec 20, 2014 at 2:14 AM, Ravi Kiran maghamraviki...@gmail.com wrote: Hi Divya, Also, can you confirm if the regex given in the configuration matches the access

Re: Flume to Phoenix as Sink Issue

2014-12-19 Thread Ravi Kiran
taken to process [0] events was [3] seconds 14/12/19 12 Thanks Divya N On Fri, Dec 19, 2014 at 6:10 AM, Ravi Kiran maghamraviki...@gmail.com wrote: Hi Nagarajan, Apparently, we do batches of 100 by default for each commit. You can decrease that number if you would like to. http

Re: Flume to Phoenix as Sink Issue

2014-12-19 Thread Ravi Kiran
apache logs: https://github.com/apache/phoenix/blob/master/phoenix-flume/src/it/java/org/apache/phoenix/flume/RegexEventSerializerIT.java#testApacheLogRegex, which can help you with the regex. Happy to help!! Regards Ravi On Fri, Dec 19, 2014 at 11:19 AM, Ravi Kiran maghamraviki...@gmail.com wrote: Hi

Fwd: Flume to Phoenix as Sink Issue

2014-12-18 Thread Ravi Kiran
Hi Nagarajan, Apparently, we do batches of 100 by default for each commit. You can decrease that number if you would like to: http://phoenix.apache.org/flume.html Regards Ravi Hi, We are working with Flume to store Apache logs into HBase/Phoenix using the phoenix-flume jars. Using

Re: Re: bulk update data in phoenix

2014-12-12 Thread Ravi Kiran
it. At 2014-12-12 15:00:19, Ravi Kiran maghamraviki...@gmail.com wrote: Hi, Apparently, the support for MR jobs was submitted a few days ago as part of https://issues.apache.org/jira/browse/PHOENIX-1454. Would you be willing to give it a stab by building Phoenix artifacts from Git

Re: bulk update data in phoenix

2014-12-11 Thread Ravi Kiran
Hi, Apparently, the support for MR jobs was submitted a few days ago as part of https://issues.apache.org/jira/browse/PHOENIX-1454. Would you be willing to give it a stab by building the Phoenix artifacts from Git yourself (http://phoenix.apache.org/building.html), as the feature is not yet

Re: pig and phoenix

2014-12-08 Thread Ravi Kiran
__ Ralph Perko Pacific Northwest National Laboratory (509) 375-2272 ralph.pe...@pnnl.gov From: Ravi Kiran maghamraviki...@gmail.com Reply-To: user@phoenix.apache.org

Re: pig and phoenix

2014-12-05 Thread Ravi Kiran
Hi Ralph. Can you please try modifying the STORE command in the script to the following: STORE D into 'hbase://$table_name/period,deployment,file_id,recnum' using org.apache.phoenix.pig.PhoenixHBaseStorage('$zookeeper','-batchSize 1000'); Primarily, Phoenix generates the default UPSERT

Re: error running CSV bull loader on CDH 5.1

2014-08-31 Thread Ravi Kiran
Hi, Can you please try downloading the binaries from https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.1.0-rc1/bin/ and then copying the necessary artifacts, like phoenix-4.1.0-server-hadoop2.jar, onto the region server classpath? Regards Ravi On Sun, Aug 31, 2014

Re: Error when using Pig Storage: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected

2014-08-28 Thread Ravi Kiran
at 11:57 PM, Ravi Kiran maghamraviki...@gmail.com wrote: Hi Russell, Apparently, Phoenix 4.0.0 leverages a few API methods of HBase 0.98.4 which aren't present in 0.98.1, the version that comes with CDH 5.1. That's the primary cause of the build issues. Regards Ravi On Mon, Aug 18

Re: Use of column list for Pig Store command

2014-08-26 Thread Ravi Kiran
...@ds-iq.com wrote: It looks like JIRA issue PHOENIX-898 was originally tracking this, but it looks like this issue has been reverted in 4.1.0 RC 0 and 1. Can anyone from the Phoenix group confirm this? From: Ravi Kiran [mailto:maghamraviki...@gmail.com] Sent: Thursday, August 21

Re: Use of column list for Pig Store command

2014-08-21 Thread Ravi Kiran
Hi Randy, This feature is being delivered as part of 4.1.0RC. Right now we are going through a voting phase for its release. If you wish to try this feature before the official release, please follow the instructions at http://phoenix.apache.org/building.html to build the necessary artifacts.

Re: map reduce across phoenix table

2014-08-21 Thread Ravi Kiran
Hi Jody, Can you please let us know whether the HBase table you would like to read from has a composite row key? If not, I believe the standard TableMapReduceUtil API should do fine. However, it becomes a bit tricky when the row key is a composite one. In this case, I am afraid you
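[Editor's illustration, not from the thread: for the simple, non-composite-key case, a minimal sketch of a plain HBase scan job via TableMapReduceUtil; the table name and mapper output types are placeholders.]

    import java.io.IOException;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;

    public class PlainHBaseScanJob {
        public static class RowCountMapper extends TableMapper<Text, LongWritable> {
            @Override
            protected void map(ImmutableBytesWritable rowKey, Result result, Context context)
                    throws IOException, InterruptedException {
                // Emit the raw row key with a count of 1; real logic would parse the Result.
                context.write(new Text(rowKey.get()), new LongWritable(1));
            }
        }

        public static Job configure(String tableName) throws Exception {
            Job job = Job.getInstance(HBaseConfiguration.create(), "scan-" + tableName);
            Scan scan = new Scan();
            scan.setCaching(500);          // larger batches for full-table scans
            scan.setCacheBlocks(false);    // avoid polluting the block cache from MR scans
            TableMapReduceUtil.initTableMapperJob(tableName, scan, RowCountMapper.class,
                    Text.class, LongWritable.class, job);
            return job;
        }
    }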

Re: Error when using Pig Storage: Found interface org.apache.hadoop.mapreduce.TaskAttemptContext, but class was expected

2014-08-19 Thread Ravi Kiran
Hi Russell, Apparently, Phoenix 4.0.0 leverages a few API methods of HBase 0.98.4 which aren't present in 0.98.1, the version that comes with CDH 5.1. That's the primary cause of the build issues. Regards Ravi On Mon, Aug 18, 2014 at 5:56 PM, Russell Jurney russell.jur...@gmail.com wrote:

Re: Split rowkey in HBASE to composite key in phoenix

2014-07-30 Thread Ravi Kiran
Hi Thanapool, Though it's easy to import a SQL table into HBase tables backed by Phoenix using Sqoop, we do notice issues when the row key of the HBase table is a set of composite columns. This is because the delimiter used by Phoenix differs from what Sqoop uses by default. We

Re: Can't drop table, help!

2014-06-15 Thread Ravi Kiran
Hi Russel, When recreating the table, does it complain of a TABLE_ALREADY_EXIST exception? If possible, can you please confirm whether you see the table 'DEV_HET_MEF' from the ZooKeeper client (zkcli.sh): a) hbase zkcli -server host:port b) ls /hbase/table/ If so, you