Hi Bhathiya,

Can you try changing the MySQL setting so that identifier case
sensitivity is ignored [1]?

[1] -
http://dev.mysql.com/doc/refman/5.0/en/identifier-case-sensitivity.html
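
Per that page, the relevant variable is lower_case_table_names. A minimal
sketch of what to add to my.cnf (assuming a Linux server; the server needs
a restart, and existing table names should be renamed to lowercase before
changing this):

```ini
[mysqld]
# 1 = store table names in lowercase and compare them case-insensitively
lower_case_table_names=1
```

You can verify it took effect with:
SHOW VARIABLES LIKE 'lower_case_table_names';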

Regards,
Gihan

On Fri, Feb 13, 2015 at 4:47 PM, Bhathiya Jayasekara <bhath...@wso2.com>
wrote:

> Hi Gihan,
>
> It works with lowercase tables. I think it's better to fix this issue
> since it's a common use case. (Or is it already fixed in 2.5.0?)
>
> Thanks,
> Bhathiya
>
> On Fri, Feb 13, 2015 at 4:40 PM, Gihan Anuruddha <gi...@wso2.com> wrote:
>
>> Hi Bhathiya,
>>
>> AFAIR we had an issue with MySQL regarding lowercase/uppercase table
>> names. Can you please try that with a lowercase table name?
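>>
>> If recreating the table isn't convenient, a rename to lowercase should
>> also work (a sketch; apply it to whichever table the script actually
>> reads):
>>
>> ```sql
>> RENAME TABLE RSSStatsSummaryTable TO rssstatssummarytable;
>> ```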
>>
>> Regards,
>> Gihan
>>
>> On Fri, Feb 13, 2015 at 4:31 PM, Bhathiya Jayasekara <bhath...@wso2.com>
>> wrote:
>>
>>> Hi all,
>>>
>>> I'm getting the below error when executing the attached Hive script.
>>> When I run the same script against an H2 database, it works fine; the
>>> issue occurs only with MySQL. Please note the *highlighted* part.
>>>
>>> Here is the MySQL database:
>>>
>>> mysql> use abc;
>>> Reading table information for completion of table and column names
>>> You can turn off this feature to get a quicker startup with -A
>>>
>>> Database changed
>>> mysql>
>>> mysql> show tables;
>>> +----------------------+
>>> | Tables_in_abc        |
>>> +----------------------+
>>> | RSSStatsSummaryTable |
>>> +----------------------+
>>> 1 row in set (0.00 sec)
>>>
>>> [2015-02-13 16:26:27,097] ERROR
>>> {org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation} -  Failed to get
>>> total row count
>>> com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table
>>> *'abc.rssstatsformattedtable'* doesn't exist
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>> at
>>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>>> at
>>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>>> at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
>>> at com.mysql.jdbc.Util.getInstance(Util.java:386)
>>> at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1054)
>>> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4120)
>>> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4052)
>>> at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2503)
>>> at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2664)
>>> at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2815)
>>> at
>>> com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2155)
>>> at
>>> com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2322)
>>> at
>>> org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation.getTotalCount(DBOperation.java:335)
>>> at
>>> org.wso2.carbon.hadoop.hive.jdbc.storage.input.JDBCSplit.getSplits(JDBCSplit.java:113)
>>> at
>>> org.wso2.carbon.hadoop.hive.jdbc.storage.input.JDBCDataInputFormat.getSplits(JDBCDataInputFormat.java:41)
>>> at
>>> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:302)
>>> at
>>> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:292)
>>> at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:933)
>>> at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:925)
>>> at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
>>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:839)
>>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:792)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:415)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1123)
>>> at
>>> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:792)
>>> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:766)
>>> at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:460)
>>> at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:733)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>> java.lang.NullPointerException
>>> at
>>> org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation.getTotalCount(DBOperation.java:344)
>>> at
>>> org.wso2.carbon.hadoop.hive.jdbc.storage.input.JDBCSplit.getSplits(JDBCSplit.java:113)
>>> at
>>> org.wso2.carbon.hadoop.hive.jdbc.storage.input.JDBCDataInputFormat.getSplits(JDBCDataInputFormat.java:41)
>>> at
>>> org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:302)
>>> at
>>> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getSplits(CombineHiveInputFormat.java:292)
>>> at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:933)
>>> at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:925)
>>> at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
>>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:839)
>>> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:792)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.Subject.doAs(Subject.java:415)
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1123)
>>> at
>>> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:792)
>>> at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:766)
>>> at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:460)
>>> at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:733)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>> Job Submission failed with exception
>>> 'java.lang.NullPointerException(null)'[2015-02-13 16:26:27,101] ERROR
>>> {org.apache.hadoop.hive.ql.exec.ExecDriver} -  Job Submission failed with
>>> exception 'java.lang.NullPointerException(null)'
>>>
>>> Any idea why this happens? Should table names used in Hive always be
>>> lowercase?
>>>
>>> Thanks,
>>> --
>>> *Bhathiya Jayasekara*
>>> *Software Engineer,*
>>> *WSO2 inc., http://wso2.com*
>>>
>>> *Phone: +94715478185*
>>> *LinkedIn: http://www.linkedin.com/in/bhathiyaj*
>>> *Twitter: https://twitter.com/bhathiyax*
>>> *Blog: http://movingaheadblog.blogspot.com*
>>>
>>
>>
>>
>> --
>> W.G. Gihan Anuruddha
>> Senior Software Engineer | WSO2, Inc.
>> M: +94772272595
>>
>



-- 
W.G. Gihan Anuruddha
Senior Software Engineer | WSO2, Inc.
M: +94772272595
_______________________________________________
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev
