Re: REG: Phoenix MR issue

2015-06-05 Thread Ns G
Hi Team, I am trying to connect to the Phoenix database through Pig. I am able to save the data, but when I read the data it fails. I am using the 4.3.1 version of the jar supplied by Cloudera. FileData = load 'hbase://table/cftable' USING org.apache.phoenix.pig.PhoenixHBaseLoader('server name'); Failed

Query on partial Row Key

2015-06-05 Thread Vijay Kukkala
Cluster configuration: Phoenix 4.0.2 with HBase 0.98, HDP 2.1. One of our tables has a primary key (customerId int, timestamp BigInt, transactionId varchar). One of our use cases is to retrieve records by customerId and transactionId. select * from my_table where cid = ? and tid = ? looking at th

Re: Query on partial Row Key

2015-06-05 Thread Hemal Parekh
Vijay, You can try this: select * from my_table where cid like '?%' and tid = ? (argument ? is customerId). This will do a range scan and a filter on the server side. The other option is to change the primary key design to (customerId int, transactionId varchar, timestamp BigInt). This will allow to query on p
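Hemal's two suggestions can be sketched as Phoenix SQL. The table and column names come from the thread; the exact predicate form and the redesigned-key DDL are assumptions, since the original reply is truncated:

```sql
-- Option 1: constrain the leading PK column so Phoenix does a range
-- scan, with transactionId applied as a server-side filter.
SELECT * FROM my_table
WHERE cid = ?        -- leading PK column -> bounds the row-key range
  AND tid = ?;       -- filtered on the server side

-- Option 2: redesign the primary key so transactionId precedes the
-- timestamp, letting (customerId, transactionId) lookups use the key
-- prefix directly.
CREATE TABLE my_table_v2 (
    customerId    INTEGER NOT NULL,
    transactionId VARCHAR NOT NULL,
    ts            BIGINT  NOT NULL,
    CONSTRAINT pk PRIMARY KEY (customerId, transactionId, ts)
);
```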

Re: Query on partial Row Key

2015-06-05 Thread James Taylor
Hi Vijay, You've got a couple of options: 1) Force the query to do a skip scan (the Phoenix equivalent of the FuzzyRowKeyFilter) by adding a hint like this: select /*+ SKIP_SCAN */ * from my_table where cid = ? and tid = ? By default, Phoenix won't do a skip scan when there are gaps in the pk c
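The hinted query James describes, together with an EXPLAIN to confirm the chosen plan, might look like the following (table and column names are taken from the thread; the literal values are placeholders):

```sql
-- Force a skip scan despite the unconstrained timestamp column
-- sitting between cid and tid in the primary key.
SELECT /*+ SKIP_SCAN */ * FROM my_table
WHERE cid = ? AND tid = ?;

-- Inspect the plan; the output should mention a skip scan rather
-- than a full or plain range scan.
EXPLAIN SELECT /*+ SKIP_SCAN */ * FROM my_table
WHERE cid = 42 AND tid = 'txn-001';
```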

Re: REG: Phoenix MR issue

2015-06-05 Thread Ravi Kiran
Hi Durga Prasad, Assuming you have registered phoenix-[version].jar, have used the same command above to LOAD data from a Phoenix table 'cftable', and 'server_name' is your ZooKeeper quorum, things should be working. Can you please confirm? Regards Ravi On Fri, Jun 5, 2015 at 4:53 AM, N

Re: REG: Phoenix MR issue

2015-06-05 Thread Ns G
Hi Ravi, Yes, I have registered the driver before executing the Pig commands; that was the first step I did. I didn't understand your second part. As per my understanding of your email: yes, I first loaded data into a Phoenix table, and then I am reading a different Phoenix table created by an MR process. Yes

Re: Query on partial Row Key

2015-06-05 Thread Vijay Kukkala
Thanks James, I tried both and that helped. On Fri, Jun 5, 2015 at 9:44 AM James Taylor wrote: > Hi Vijay, > You've got a couple of options: > 1) Force the query to do a skip scan (the Phoenix equivalent of the > FuzzyRowKeyFilter) by adding a hint like this: > > select /*+ SKIP_SCAN */ *

Salt bucket count recommendation

2015-06-05 Thread Perko, Ralph J
Hi, We have a 40 node cluster with 8 core tables and around 35 secondary index tables. The tables get very large – billions of records and terabytes of data. What salt bucket count do you recommend? Thanks, Ralph
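For context, salting is set per table at creation time. A common rule of thumb (not stated in this thread) is to start with roughly one bucket per region server; a sketch for a hypothetical table on Ralph's 40-node cluster:

```sql
-- SALT_BUCKETS prepends a one-byte hash to each row key, spreading
-- writes across regions and avoiding hotspotting on sequential keys.
-- The value is capped at 256 and cannot be changed after creation,
-- so it is worth benchmarking before settling on a number.
CREATE TABLE event_log (
    event_id VARCHAR NOT NULL PRIMARY KEY,
    payload  VARCHAR
) SALT_BUCKETS = 40;
```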

Bulk loading through HFiles

2015-06-05 Thread Dawid
Hi, I was trying to code some utilities to bulk load data through HFiles from Spark RDDs, following the pattern of CSVBulkLoadTool. I managed to generate some HFiles and load them into HBase, but I can't see the rows using sqlline. I would be more than grateful for any suggestions.

Re: Bulk loading through HFiles

2015-06-05 Thread Ravi Kiran
Hi Dawid, Do you see the data when you run a simple scan or count of the table in the HBase shell? FYI, the links lead me to a 404: File not found. Regards Ravi On Fri, Jun 5, 2015 at 1:17 PM, Dawid wrote: > Hi, > I was trying to code some utilities to bulk load data through HFiles from > Sp

Re: Bulk loading through HFiles

2015-06-05 Thread Dawid
Yes, I can see it in the HBase shell. Sorry for the bad links; I hadn't used private repositories on GitHub before, so I moved the files to a gist: https://gist.github.com/dawidwys/3aba8ba618140756da7c Hope this time it will work. On 05.06.2015 23:09, Ravi Kiran wrote: Hi Dawid, Do you see the data

Re: Phoenix drop view not working after 4.3.1 upgrade

2015-06-05 Thread Arun Kumaran Sabtharishi
Adding some information on this: SYSTEM.CATALOG has around 1.3 million views and the region count is 1. Is it safe to split the SYSTEM.CATALOG table? Would splitting help the performance of dropping? Thanks, Arun

Re: Phoenix drop view not working after 4.3.1 upgrade

2015-06-05 Thread James Taylor
What's different between the two environments (i.e. the working and not working ones)? Do they have the same amount of data (i.e. number of views)? Do you mean 1.3M views or 1.3M rows? Do you have any indexes on your views? If not, then dropping a view is nothing more than issuing a delete over the
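The operations under discussion can be sketched as follows. The view name is hypothetical, and the catalog query is an assumption about how one might distinguish "1.3M views" from "1.3M catalog rows" (each view contributes a header row plus one row per column):

```sql
-- Dropping a view with no indexes is metadata-only: effectively a
-- delete over the view's rows in SYSTEM.CATALOG.
DROP VIEW IF EXISTS my_view;

-- Count view header rows only, to separate the number of views from
-- the total number of catalog rows.
SELECT COUNT(*) FROM SYSTEM.CATALOG WHERE TABLE_TYPE = 'v';
```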