Hi all,

I just realized that Phoenix doesn't provide "group by" and "distinct" support
when using Phoenix MapReduce. It seems the approach below uses Phoenix
MapReduce, which is not suitable for this type of query.

Now I want to run the query below by any means. My table has more than 70
million records, and I could not run the query using "sqlline.py", "SQuirreL",
or a plain Phoenix JDBC connection from a Java program. In all three cases I
got a connection timeout error. I tried increasing various timeouts in HBase
(hbase.rpc.timeout -> 3660000; I even verified my HBase config path using
"./phoenix_utils.py | grep hbase_conf_path"), but no luck. I am fine with the
query taking more time, as long as it completes successfully.
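For completeness, these timeouts can also be supplied as Phoenix JDBC connection properties instead of (or in addition to) hbase-site.xml. A minimal sketch, assuming the standard Phoenix/HBase client property names; the one-hour values and the ZooKeeper host in the commented-out URL are placeholders, not recommendations:

```java
import java.util.Properties;

public class PhoenixTimeouts {
    // Build JDBC connection properties with generous client-side timeouts.
    // Property names are the standard Phoenix/HBase client settings; the
    // values here are illustrative placeholders.
    public static Properties timeoutProps() {
        Properties props = new Properties();
        props.setProperty("phoenix.query.timeoutMs", "3600000");             // Phoenix query timeout
        props.setProperty("hbase.rpc.timeout", "3600000");                   // per-RPC timeout
        props.setProperty("hbase.client.scanner.timeout.period", "3600000"); // scanner lease timeout
        return props;
    }

    public static void main(String[] args) {
        // Hypothetical ZooKeeper quorum; replace with your own:
        // Connection conn = java.sql.DriverManager.getConnection(
        //         "jdbc:phoenix:zk-host:2181", timeoutProps());
        System.out.println(timeoutProps().getProperty("phoenix.query.timeoutMs"));
    }
}
```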

HBase - 1.1.2.2.4.2.0-258
Phoenix - phoenix-4.4.0.2.4.2.0-258

Can anyone provide any suggestions?

Regards,
Parveen Jain

________________________________
From: Parveen Jain <parveenj...@live.com>
Sent: Sunday, October 23, 2016 10:18 AM
To: user@phoenix.apache.org
Subject: PhoenixIOException: Table 'unionSchemaName.unionTableName' was not 
found


While running this query from the Spark Phoenix connector:


select distinct(CUS_TXN.CUSTMR_ID) from CUS_TXN
  where (CUS_TXN.TXN_TYPE = 'xxxx')
    and (substr(CUS_TXN.ROW_KEY, 0, 8) >= '20160101')
  group by CUS_TXN.CUSTMR_ID
  having sum(CUS_TXN.TXN_AMOUNT) >= 300
union all
select distinct(CUS_TXN.CUSTMR_ID) from CUS_TXN
  where (CUS_TXN.TXN_TYPE = 'yyyy')
    and (substr(CUS_TXN.ROW_KEY, 0, 8) >= '20160101')
  group by CUS_TXN.CUSTMR_ID
  having sum(CUS_TXN.TXN_AMOUNT) > 100



I am getting the exception below:

java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException:
Table 'unionSchemaName.unionTableName' was not found, got: hbase:namespace.
        at com.google.common.base.Throwables.propagate(Throwables.java:160)
        at org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(Phoen



My code for fetching the records is:

// Configure the Phoenix MapReduce input/output for the Spark job.
PhoenixConfigurationUtil.setInputTableName(configuration, TABLE_NAME);
PhoenixConfigurationUtil.setOutputTableName(configuration, TABLE_NAME);
PhoenixConfigurationUtil.setInputQuery(configuration, QueryToRun);
PhoenixConfigurationUtil.setInputClass(configuration, DataRecord.class);

configuration.setClass(JobContext.OUTPUT_FORMAT_CLASS_ATTR,
        PhoenixOutputFormat.class, OutputFormat.class);

// Create an RDD backed by PhoenixInputFormat; keys are unused (NullWritable).
@SuppressWarnings("unchecked")
JavaPairRDD<NullWritable, DataRecord> stocksRDD = jsc.newAPIHadoopRDD(
        configuration,
        PhoenixInputFormat.class,
        NullWritable.class,
        DataRecord.class);
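If the problem is that PhoenixInputFormat cannot plan the UNION ALL (as the 'unionSchemaName.unionTableName' error suggests), one possible workaround is to run each half of the union as its own input query and combine the two resulting RDDs with JavaPairRDD.union on the Spark side. A minimal sketch of building the two half-queries only, using the table and column names from the query above; the helper name and parameters are hypothetical:

```java
public class SplitUnion {
    // Build one half of the original UNION ALL for a given transaction type
    // and amount threshold; strict controls ">" vs ">=" in the HAVING clause.
    static String halfQuery(String txnType, int minAmount, boolean strict) {
        return "SELECT DISTINCT(CUS_TXN.CUSTMR_ID) FROM CUS_TXN"
             + " WHERE (CUS_TXN.TXN_TYPE = '" + txnType + "')"
             + " AND (SUBSTR(CUS_TXN.ROW_KEY, 0, 8) >= '20160101')"
             + " GROUP BY CUS_TXN.CUSTMR_ID"
             + " HAVING SUM(CUS_TXN.TXN_AMOUNT) " + (strict ? "> " : ">= ") + minAmount;
    }

    public static void main(String[] args) {
        // Each half would be passed to PhoenixConfigurationUtil.setInputQuery
        // in its own configuration, producing two RDDs to union in Spark.
        System.out.println(halfQuery("xxxx", 300, false));
        System.out.println(halfQuery("yyyy", 100, true));
    }
}
```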


Any pointers on why this could be happening?


Regards,

Parveen Jain
