[ https://issues.apache.org/jira/browse/PHOENIX-2889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ian Hellstrom updated PHOENIX-2889:
-----------------------------------
    Description: 
When using {{phoenixTableAsDataFrame}} on a table whose column qualifiers have 
been auto-capitalized by Phoenix, and the user has erroneously specified them 
in lowercase, no exception is returned. Ideally, an 
{{org.apache.phoenix.schema.ColumnNotFoundException}} would be thrown; instead, 
lines like the following show up in the log:

{code}
INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=48168 ms ago, cancelled=false, msg=
{code}

A minimal example:

{code}
-- A primary key is required by Phoenix; unquoted identifiers are upper-cased.
CREATE TABLE test (foo INTEGER NOT NULL PRIMARY KEY, bar VARCHAR);
UPSERT INTO test VALUES (1, 'hello');
UPSERT INTO test VALUES (2, 'bye');
{code}
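
Since the identifiers above are not double-quoted, Phoenix stores them 
upper-cased as FOO and BAR. A plain JDBC query that references the lowercase 
name fails fast with the expected exception (a rough sketch; the ZooKeeper 
quorum in the connection URL is a placeholder):

{code}
import java.sql.DriverManager

// Placeholder quorum; point this at the cluster's ZooKeeper ensemble.
val conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")
val stmt = conn.createStatement()

// FOO resolves because unquoted identifiers are upper-cased at parse time.
stmt.executeQuery("SELECT foo FROM test").next()

// The double-quoted "foo" keeps its case, does not match FOO, and Phoenix
// throws a ColumnNotFoundException (a SQLException) on the client side.
try {
  stmt.executeQuery("""SELECT "foo" FROM test""").next()
} catch {
  case e: java.sql.SQLException => println(e.getMessage)
}
{code}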

In Spark (shell):

{code}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.phoenix.spark._

@transient lazy val hadoopConf: Configuration = new Configuration()
@transient lazy val hbConf: Configuration = HBaseConfiguration.create(hadoopConf)

// The unquoted DDL above created the columns as FOO and BAR, so the lowercase
// names requested here do not exist; instead of failing with a
// ColumnNotFoundException, the call hangs in HBase client retries.
val df = sqlContext.phoenixTableAsDataFrame("TEST", Array("foo", "bar"), conf = hbConf)
{code}
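
For comparison, the call succeeds once the auto-capitalized names are passed, 
which at least confirms the table itself is reachable (a sketch, continuing 
the same shell session as above):

{code}
// Workaround: refer to the columns by their stored, upper-cased names.
val dfOk = sqlContext.phoenixTableAsDataFrame("TEST", Array("FOO", "BAR"), conf = hbConf)
dfOk.show()
{code}

The expectation is that the lowercase variant above would fail just as quickly 
with a {{ColumnNotFoundException}} rather than looping through 
{{RpcRetryingCaller}} retries.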

> No exception in Phoenix-Spark when column does not exist
> --------------------------------------------------------
>
>                 Key: PHOENIX-2889
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2889
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.4.0
>         Environment: Phoenix: 4.4
> HBase: 1.1.2.2.3.2.0-2950 (Hortonworks HDP)
> Spark: 1.4.1 (compiled with Scala 2.10)
>            Reporter: Ian Hellstrom
>              Labels: spark
>
> When using {{phoenixTableAsDataFrame}} on a table whose column qualifiers have 
> been auto-capitalized by Phoenix, and the user has erroneously specified them 
> in lowercase, no exception is returned. Ideally, an 
> {{org.apache.phoenix.schema.ColumnNotFoundException}} would be thrown; instead, 
> lines like the following show up in the log:
> {code}
> INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=48168 ms ago, cancelled=false, msg=
> {code}
> A minimal example:
> {code}
> -- A primary key is required by Phoenix; unquoted identifiers are upper-cased.
> CREATE TABLE test (foo INTEGER NOT NULL PRIMARY KEY, bar VARCHAR);
> UPSERT INTO test VALUES (1, 'hello');
> UPSERT INTO test VALUES (2, 'bye');
> {code}
> In Spark (shell):
> {code}
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.hbase.HBaseConfiguration
> import org.apache.phoenix.spark._
> @transient lazy val hadoopConf: Configuration = new Configuration()
> @transient lazy val hbConf: Configuration = HBaseConfiguration.create(hadoopConf)
> val df = sqlContext.phoenixTableAsDataFrame("TEST", Array("foo", "bar"), conf = hbConf)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
