Hive Alter

2020-04-14 Thread Srinivas Thunga
Hi Team,

I am trying to write my query results to a file.

hive -e "alter table {0}.{1}
partition(year='{2}',month='{3}',day='{4}',hour='{5}') CONCATENATE >
/tmp/srinivas/test.log;"'.format(row[0],row[1],row[2],row[3],row[4],row[5])


Re: tunning guide

2015-12-01 Thread Srinivas Thunga
Hi,

Use some query optimization techniques for faster query execution.

Use partitions, bucketing, or create the table in ORC format.
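
A minimal sketch of all three, with made-up table and column names (adjust to your schema):

-- Partitioned, bucketed ORC table.
CREATE TABLE sales_orc (
  order_id    BIGINT,
  customer_id BIGINT,
  amount      DOUBLE
)
PARTITIONED BY (order_date STRING)          -- partition pruning skips whole directories
CLUSTERED BY (customer_id) INTO 16 BUCKETS  -- bucketing helps joins and sampling
STORED AS ORC;                              -- columnar storage with predicate pushdown

-- A query that filters on the partition column only reads the matching partitions.
SELECT customer_id, SUM(amount)
FROM sales_orc
WHERE order_date = '2015-11-30'
GROUP BY customer_id;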

Regards,

Srinivas T

*Thanks & Regards,*

*Srinivas T*

On Tue, Dec 1, 2015 at 1:46 PM, San Luo  wrote:

> Hi guys,
>
> My query runs slowly in hive, is there a tuning guide or similar document
> that could share some ideas on this?
>
> Thanks.
>


Re: SemanticException Unable to fetch table t. null

2015-11-03 Thread Srinivas Thunga
Can you please check the metastore log?

*Thanks & Regards,*

*Srinivas T*

On Tue, Nov 3, 2015 at 4:40 PM, Rajkumar Singh 
wrote:

> it seems that you are having a problem metastore.
>
> On Tue, Nov 3, 2015 at 1:46 PM, Mich Talebzadeh 
> wrote:
>
>> Hi,
>>
>>
>>
>> Has anyone got a quick fix for dropping such table please?
>>
>>
>>
>> hive> drop table t;
>>
>> FAILED: SemanticException Unable to fetch table t. null
>>
>> hive> desc t;
>>
>> FAILED: SemanticException Unable to fetch table t. null
>>
>>
>>
>> Thanks,
>>
>>
>>
>> Mich Talebzadeh
>>
>>
>>
>> *Sybase ASE 15 Gold Medal Award 2008*
>>
>> A Winning Strategy: Running the most Critical Financial Data on ASE 15
>>
>>
>> http://login.sybase.com/files/Product_Overviews/ASE-Winning-Strategy-091908.pdf
>>
>> Author of the books* "A Practitioner’s Guide to Upgrading to Sybase ASE
>> 15", ISBN 978-0-9563693-0-7*.
>>
>> co-author *"Sybase Transact SQL Guidelines Best Practices", ISBN
>> 978-0-9759693-0-4*
>>
>> *Publications due shortly:*
>>
>> *Complex Event Processing in Heterogeneous Environments*, ISBN:
>> 978-0-9563693-3-8
>>
>> *Oracle and Sybase, Concepts and Contrasts*, ISBN: 978-0-9563693-1-4, volume
>> one out shortly
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>>
>>
>>
>>
>
>


Re: insert timestamp values in Hive

2015-10-26 Thread Srinivas Thunga
Hi,

If you want those properties to take effect, then you need to create the table in
ORC format and change some configuration in hive-site.xml.
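
A rough sketch of what that can look like on Hive 0.14+ (the staging table and its
epoch column are assumptions; the target table follows the thread below):

-- Transactional tables must be bucketed and stored as ORC.
CREATE TABLE tmp (reporttime timestamp)
CLUSTERED BY (reporttime) INTO 2 BUCKETS
STORED AS ORC
TBLPROPERTIES ("transactional"="true");

-- INSERT ... VALUES only takes literal constants, so convert epoch seconds via a SELECT:
-- from_unixtime() turns seconds since the epoch into a 'yyyy-MM-dd HH:mm:ss' string,
-- which can then be cast to timestamp.
INSERT INTO TABLE tmp
SELECT CAST(from_unixtime(reporttime_epoch) AS timestamp)
FROM staging_table;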

Regards,

Srinivas T

*Thanks & Regards,*

*Srinivas T*

On Tue, Oct 27, 2015 at 12:05 PM, Lefty Leverenz 
wrote:

> Oh, right -- I forgot that the table has to be bucketed and use ORC
> format, and several configuration parameters need to be set appropriately.
> Have you read Hive Transactions, particularly the Limitations section?
>
> -- Lefty
>
> On Tue, Oct 27, 2015 at 1:02 AM, AnandaVelMurugan Chandra Mohan <
> ananthu2...@gmail.com> wrote:
>
>> Hi,
>>
>> Thanks for your suggestion. I tried this and I get the following error
>> "Attempt to do update or delete on table tmp that does not use an
>> AcidOutputFormat or is not bucketed". I googled about it and found this
>> link
>> http://unmeshasreeveni.blogspot.in/2014/11/updatedeleteinsert-in-hive-0140.html.
>> This suggests changing hive configurations which are not possible
>> immediately. Is there anything else which you can suggest?
>>
>> Regards,
>> Anand
>>
>>
>> On Tue, Oct 27, 2015 at 9:00 AM, Lefty Leverenz 
>> wrote:
>>
>>> Your table has to support transactions (see Inserting values into
>>> tables from SQL), so create it with the TBLPROPERTIES clause:
>>>
>>> CREATE TABLE tmp (reporttime timestamp) TBLPROPERTIES
>>> ("transactional"="true");
>>>
>>> Unfortunately the examples in Inserting values into tables from SQL don't
>>> show the TBLPROPERTIES clause -- that needs to be fixed in the wiki.
>>>
>>> -- Lefty
>>>
>>>
>>> On Mon, Oct 26, 2015 at 8:49 PM, AnandaVelMurugan Chandra Mohan <
>>> ananthu2...@gmail.com> wrote:
>>>
 Hi,

 I believe my Hive version is Apache 1.2.0. I guessed it from the folder
 and hive-hwi.jar names.

 I am trying to insert values into a column of type timestamp. It does
 not work

 This is how I create the table
 CREATE TABLE tmp (reporttime timestamp);

 I tried following insert queries
 INSERT INTO TABLE tmp VALUES(734196.3552);

 INSERT INTO TABLE tmp VALUES(734196);

 I read that values of timestamp could be integers which are interpreted
 as seconds since unix epoch time, or floats which are interpreted as seconds
 since unix epoch time with nanosecond precision, or strings of format
 'yyyy-mm-dd'. I cannot use the third (string) option as of now. So I am trying
 the other two options, i.e. int and float.

 When I do a select query, I get null

 I tried

 select to_date(reporttime) from tmp
 select reporttime from tmp

 Please help if you have any idea.
 --
 Regards,
 Anand

>>>
>>>
>>
>>
>> --
>> Regards,
>> Anand
>>
>
>


Getting error while performing Insert query

2015-09-09 Thread Srinivas Thunga
Hi,

Have you created the Hive metastore?

Have you changed the hive-site.xml with new config parameters ?
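
The error quoted below usually means the VALUES clause contains something other
than literal constants. A sketch of the two forms that avoid it (table and column
names are made up):

-- Literals only in the VALUES clause:
INSERT INTO TABLE target_table VALUES (1, 'abc');

-- Anything computed (functions, column references) has to go through a SELECT:
INSERT INTO TABLE target_table
SELECT id, upper(name) FROM source_table;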

On Wednesday, September 9, 2015, Sateesh Karuturi <
sateesh.karutu...@gmail.com
> wrote:

> hello...,
> iam using hive 1.1 and tez 0.7...
> Whenever iam trying to INSERT  data into hive table using tez via java iam
> getting following error:
>
> *Exception in thread "main" org.apache.hive.service.cli.HiveSQLException:
> Error while compiling statement: FAILED: SemanticException [Error 10293]:
> Unable to create temp file for insert values Expression of type
> TOK_TABLE_OR_COL not supported in insert/values*
>


-- 
Sent from Gmail Mobile


Re: Stopping HiveServer2

2015-04-29 Thread Srinivas Thunga
sudo service hiveserver2 stop



On Wednesday, April 29, 2015, CHEBARO Abdallah 
wrote:

>  Hello,
>
>
>
> How can I stop hiveserver2? I am not able to find the command.
>
>
>
> Thanks
>
>


-- 
Sent from Gmail Mobile


Re: Concurrency issue, Setting hive.txn.manager to "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager"

2015-04-07 Thread Srinivas Thunga
Hi Mich,

Have you created the Hive metastore? And can you please check the hive
metastore log?

I also faced this issue before.

Let me know.
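
If the transaction tables turn out to be missing from the metastore (as Eugene
suggests further down), a quick check against a MySQL metastore looks roughly like
this (the database name and script path are assumptions; the script ships with Hive):

-- Run in the mysql client against the metastore database.
USE metastore;
SHOW TABLES LIKE 'TXNS';   -- DbTxnManager needs TXNS, TXN_COMPONENTS, HIVE_LOCKS, ...

-- If they are missing, load the transaction schema that ships with Hive, e.g.:
SOURCE /usr/lib/hive/scripts/metastore/upgrade/mysql/hive-txn-schema-0.14.0.mysql.sql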

*Thanks & Regards,*

*Srinivas T*

On Tue, Apr 7, 2015 at 9:13 AM, Eugene Koifman 
wrote:

>  can you check that the schema in your metastore db has transaction
> related tables?
> You can find the list of tables in hive-txn-schema-0.14.0.mysql.sql, for
> example.
>
>   From: Mich Talebzadeh 
> Reply-To: "user@hive.apache.org" 
> Date: Monday, April 6, 2015 at 8:05 AM
> To: "user@hive.apache.org" , "@Sanjiv Singh" <
> sanjiv.is...@gmail.com>
> Subject: RE: Concurrency issue, Setting hive.txn.manager to
> "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager"
>
>   Thanks Sanjiv.
>
>
>
> Unfortunately after resetting
>
>
>
> hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
>
> and doing
>
>
>
> <property>
>   <name>hive.in.test</name>
>   <value>true</value>
> </property>
>
>
>
>
>
> Still getting the same error message
>
>
>
> hive> show databases;
>
> FAILED: LockException [Error 10280]: Error communicating with the metastore
>
>
>
>
>
> Mich Talebzadeh
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> *Publications due shortly:*
>
> *Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and
> Coherence Cache*
>
>
>
>
>
>
> *From:* @Sanjiv Singh [mailto:sanjiv.is...@gmail.com
> ]
> *Sent:* 06 April 2015 15:21
> *To:* user@hive.apache.org
> *Subject:* Re: Concurrency issue, Setting hive.txn.manager to
> "org.apache.hadoop.hive.ql.lockmgr.DbTxnManager"
>
>
>
> Not sure ..It should work
>
> Try adding below configuration and then check..
>
> <property>
>   <name>hive.in.test</name>
>   <value>true</value>
> </property>
>
>
>
>
>   Regards
> Sanjiv Singh
> Mob :  +091 9990-447-339
>
>
>
> On Mon, Apr 6, 2015 at 7:21 PM, Mich Talebzadeh 
> wrote:
>
> Hi,
>
>
>
> I turned on concurrency for hive for DML with settings in hive-site.xml as
> follows:
>
>
>
> hive.support.concurrency=true
>
> hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
>
> hive.compactor.initiator.on=true
>
> hive.compactor.worker.threads=2
>
> hive.support.concurrency=true
>
> hive.enforce.bucketing=true
>
> hive.exec.dynamic.partition.mode=nonstrict
>
>  Recycled connection to metastore and started hive server. Tried to query
> hive as follows:
>
>  hive> use asehadoop;
>
> FAILED: LockException [Error 10280]: Error communicating with the metastore
>
>
>
> Went back and set hive.txn.manager to default
>
> hive.txn.manager= org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager
>
>
>
> and recycled again and all worked!
>
>
>
> Sounds like concurrency does not work, or is there something extra I need to do?
>
>
>
> Thanks
>
>
>
> Mich Talebzadeh
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> Publications due shortly:
>
> Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and
> Coherence Cache
>
>
>
>
>
>
>
>
>
>


Re: hive 1.1.0 Error while Metastore starting

2015-03-10 Thread Srinivas Thunga
I am thinking of running hive-schema-0.14.0.mysql.sql. It will have 54 tables.
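
For the "Specified key was too long; max key length is 767 bytes" error in the quoted
log, one commonly used workaround is to make sure the metastore database uses a
single-byte character set before loading the schema (database name assumed here):

-- Run in the mysql client; a utf8 metastore database often hits the 767-byte index limit.
ALTER DATABASE metastore CHARACTER SET latin1 COLLATE latin1_bin;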

*Thanks & Regards,*

*Srinivas T*

On Tue, Mar 10, 2015 at 12:55 PM, Amith sha  wrote:

> so can u suggest the solution
> Thanks & Regards
> Amithsha
>
>
> On Tue, Mar 10, 2015 at 12:53 PM, Srinivas Thunga
>  wrote:
> > Hi,
> >
> > I guess you have the sql as
> >
> > hive-schema-1.1.0.mysql.sql
> >
> > for this you will get only 45 tables only as Nucleus will not be there.
> >
> > I am also faced the same problem
> >
> > Thanks & Regards,
> >
> > Srinivas T
> >
> > On Tue, Mar 10, 2015 at 12:46 PM, Amith sha 
> wrote:
> >>
> >> Now i am able to create a metastore database after exporting the hive
> >> in .bashrc But the same mysql  error is found
> >> Thanks & Regards
> >> Amithsha
> >>
> >>
> >> On Tue, Mar 10, 2015 at 12:03 PM, Amith sha 
> wrote:
> >> > Hi all,
> >> >
> >> > I have Configured Hive 1.1.0 in Hadoop 2.4.1 successfully.Have started
> >> > the metastore by [ hive --service metastore -p 7099 ]
> >> >
> >> > and got the log as
> >> >
> >> > 2015-03-10 11:32:42,649 ERROR [main]: DataNucleus.Datastore
> >> > (Log4JLogger.java:error(115)) - An exception was thrown while
> >> > adding/validating class(es) : Specified key was too long; max key
> >> > length is 767 bytes
> >> > com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified
> >> > key was too long; max key length is 767 bytes
> >> > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> >> > Method)
> >> > at
> >> >
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> >> > at
> >> >
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >> > at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> >> > at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
> >> > at com.mysql.jdbc.Util.getInstance(Util.java:386)
> >> > at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1054)
> >> > at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4237)
> >> > at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4169)
> >> > at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2617)
> >> > at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2778)
> >> > at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2819)
> >> > at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2768)
> >> > at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:949)
> >> > at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:795)
> >> > at
> >> > com.jolbox.bonecp.StatementHandle.execute(StatementHandle.java:254)
> >> > at
> >> >
> org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:760)
> >> > at
> >> >
> org.datanucleus.store.rdbms.table.TableImpl.createIndices(TableImpl.java:648)
> >> > at
> >> >
> org.datanucleus.store.rdbms.table.TableImpl.validateIndices(TableImpl.java:593)
> >> > at
> >> >
> org.datanucleus.store.rdbms.table.TableImpl.validateConstraints(TableImpl.java:390)
> >> > at
> >> >
> org.datanucleus.store.rdbms.table.ClassTable.validateConstraints(ClassTable.java:3463)
> >> > at
> >> >
> org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3464)
> >> > at
> >> >
> org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:3190)
> >> > at
> >> >
> org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2841)
> >> > at
> >> >
> org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:122)
> >> > at
> >> >
> org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:1605)
> >> > at
> >> >
> org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:954)
> >> > at
> >> >
> org.datanucleus.store.rdbms.RDBMSStoreManager.

Re: hive 1.1.0 Error while Metastore starting

2015-03-10 Thread Srinivas Thunga
Hi,

I guess you have the SQL script

hive-schema-1.1.0.mysql.sql

With this you will get only 45 tables, as the Nucleus tables will not be there.

I also faced the same problem.

*Thanks & Regards,*

*Srinivas T*

On Tue, Mar 10, 2015 at 12:46 PM, Amith sha  wrote:

> Now i am able to create a metastore database after exporting the hive
> in .bashrc But the same mysql  error is found
> Thanks & Regards
> Amithsha
>
>
> On Tue, Mar 10, 2015 at 12:03 PM, Amith sha  wrote:
> > Hi all,
> >
> > I have Configured Hive 1.1.0 in Hadoop 2.4.1 successfully.Have started
> > the metastore by [ hive --service metastore -p 7099 ]
> >
> > and got the log as
> >
> > 2015-03-10 11:32:42,649 ERROR [main]: DataNucleus.Datastore
> > (Log4JLogger.java:error(115)) - An exception was thrown while
> > adding/validating class(es) : Specified key was too long; max key
> > length is 767 bytes
> > com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified
> > key was too long; max key length is 767 bytes
> > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> > at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> > at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> > at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> > at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
> > at com.mysql.jdbc.Util.getInstance(Util.java:386)
> > at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1054)
> > at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4237)
> > at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4169)
> > at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2617)
> > at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2778)
> > at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2819)
> > at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2768)
> > at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:949)
> > at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:795)
> > at
> com.jolbox.bonecp.StatementHandle.execute(StatementHandle.java:254)
> > at
> org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:760)
> > at
> org.datanucleus.store.rdbms.table.TableImpl.createIndices(TableImpl.java:648)
> > at
> org.datanucleus.store.rdbms.table.TableImpl.validateIndices(TableImpl.java:593)
> > at
> org.datanucleus.store.rdbms.table.TableImpl.validateConstraints(TableImpl.java:390)
> > at
> org.datanucleus.store.rdbms.table.ClassTable.validateConstraints(ClassTable.java:3463)
> > at
> org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3464)
> > at
> org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:3190)
> > at
> org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2841)
> > at
> org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:122)
> > at
> org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:1605)
> > at
> org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:954)
> > at
> org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:679)
> > at
> org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getStatementForCandidates(RDBMSQueryUtils.java:408)
> > at
> org.datanucleus.store.rdbms.query.JDOQLQuery.compileQueryFull(JDOQLQuery.java:947)
> > at
> org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:370)
> > at org.datanucleus.store.query.Query.executeQuery(Query.java:1744)
> > at
> org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
> > at org.datanucleus.store.query.Query.execute(Query.java:1654)
> > at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:221)
> > at
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.ensureDbInit(MetaStoreDirectSql.java:172)
> > at
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.(MetaStoreDirectSql.java:130)
> > at
> org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:275)
> > at
> org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:238)
> > at
> org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
> > at
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
> > at
> org.apache.hadoop.hive.metastore.RawStoreProxy.(RawStoreProxy.java:56)
> > at
> org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:65)
> > at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:579)
> > at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.jav

Re: hive cli problem

2015-03-09 Thread Srinivas Thunga
Hi,

I am also getting the same problem in Hive 1.1.0.

As you said, the jline jar is not there in my Hadoop.

How do I find where this jar is located so that it can be removed?

In my setup, inside the hadoop directory I have folders like

client, client-0.20, cloudera and lib

*Thanks & Regards,*

*Srinivas T*

On Mon, Mar 9, 2015 at 11:47 PM, Prasanth Jayachandran <
pjayachand...@hortonworks.com> wrote:

>  Hi Garry
>
>  Try removing jline-0.9.94.jar from hadoop. The exact path is this
> $HADOOP_PREFIX/share/hadoop/yarn/lib/jline-0.9.94.jar
>
>  See here for discussion
> https://issues.apache.org/jira/browse/HIVE-8609?focusedCommentId=14215543&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14215543
>
>  Thanks
> Prasanth
>
>  On Mar 9, 2015, at 11:02 AM, Garry Chen  wrote:
>
>   Hi All,
>  New to hive and ran into a problem when trying to launch the hive
> cli as follows.  Please help.
>  hadoop> hive
>
>  Logging initialized using configuration in jar:file:/u01/app/hive-1.1.0
> /lib/hive-common-1.1.0.jar!/hive-log4j.properties
>  SLF4J: Class path contains multiple SLF4J bindings.
>  SLF4J: Found binding in [jar:file:/u01/app/hadoop-2.6.0
> /share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: Found binding in
> [jar:file:/u01/app/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
>  SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>  [ERROR] Terminal initialization failed; falling back to unsupported
>  java.lang.IncompatibleClassChangeError: Found class jline.Terminal, but
> interface was expected
>  at jline.TerminalFactory.create(TerminalFactory.java:101)
>  at jline.TerminalFactory.get(TerminalFactory.java:158)
>  at jline.console.ConsoleReader.(ConsoleReader.java:229)
>  at jline.console.ConsoleReader.(ConsoleReader.java:221)
>  at jline.console.ConsoleReader.(ConsoleReader.java:209)
>  at
> org.apache.hadoop.hive.cli.CliDriver.getConsoleReader(CliDriver.java:773)
>  at
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:715)
>  at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
>  at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:606)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>
>  Exception in thread "main" java.lang.IncompatibleClassChangeError: Found
> class jline.Terminal, but interface was expected
>  at jline.console.ConsoleReader.(ConsoleReader.java:230)
>  at jline.console.ConsoleReader.(ConsoleReader.java:221)
>  at jline.console.ConsoleReader.(ConsoleReader.java:209)
>  at
> org.apache.hadoop.hive.cli.CliDriver.getConsoleReader(CliDriver.java:773)
>  at
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:715)
>  at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
>  at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:606)
>  at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>
>
>


Re: Hive Transactions fail

2015-03-05 Thread Srinivas Thunga
Hi,

After seeing your mail, I understand that you are trying to run update and
delete operations on Hive 1.0.

To do that:

After extracting the Hive release, you have to create the Hive metastore:

http://www.cloudera.com/content/cloudera/en/documentation/cdh5/v5-1-x/CDH5-Installation-Guide/cdh5ig_hive_metastore_configure.html

Then in hive-site.xml, you need to set the new parameters like

 hive.support.concurrency – true
 hive.enforce.bucketing – true
 hive.exec.dynamic.partition.mode – nonstrict
 hive.txn.manager – org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
 hive.compactor.initiator.on – true
 hive.compactor.worker.threads – 1

Then restart the Hive server and the metastore.

Now create one normal (staging) table and one bucketed table stored as ORC with
the transactional table property,

and load from the normal table into the ORC table.

Now you can update and delete records, as sketched below.
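
A sketch of that flow, with made-up table and column names:

-- Staging table: plain text, loaded from files as usual.
CREATE TABLE emp_stage (eid int, ename string, dept string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

-- ACID table: bucketed, stored as ORC, marked transactional.
CREATE TABLE emp_acid (eid int, ename string, dept string)
CLUSTERED BY (eid) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

-- Copy the data across; UPDATE and DELETE then work on the ACID table.
INSERT INTO TABLE emp_acid SELECT eid, ename, dept FROM emp_stage;
UPDATE emp_acid SET dept = 'HR' WHERE eid = 1;
DELETE FROM emp_acid WHERE eid = 2;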

Let me know any concerns.

*Thanks & Regards,*

*Srinivas T*

On Thu, Mar 5, 2015 at 4:51 PM, David Simoes  wrote:

> Ive had some troubles enabling transactions in Hive 1.0.0 and Ive made a
> post in
> http://stackoverflow.com/questions/28867368/hive-transactions-are-crashing
>
> Could anyone check it out and give me some pointers on why things are
> crashing?
>
> Tyvm, Dave
>
>


Re: FW: Hive error while starting up services using Ambari

2015-03-05 Thread Srinivas Thunga
Hi,

Have you created the Hive Metastore?

http://www.cloudera.com/content/cloudera/en/documentation/cdh5/v5-1-x/CDH5-Installation-Guide/cdh5ig_hive_metastore_configure.html


and then try to start the server
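
If the root cause is the MySQL "Access denied for user 'hive'" error shown further
down, the first thing to check is that the hive user can reach the metastore database
(host pattern, database name and password below are placeholders):

-- Run as the MySQL root user on the metastore host.
-- Skip CREATE USER if the account already exists.
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive_password';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;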

*Thanks & Regards,*

*Srinivas T*

On Thu, Mar 5, 2015 at 2:57 PM, Pratik Gadiya 
wrote:

>   Hi,
>
> I am trying to deploy a hadoop cluster using Ambari Blueprint.
> All the services are up and running except one i.e. Hive Server 2.
>
> I tried to look into the logs(/var/log/hive/hiveserver2.log) and looks
> like Hive is trying to access the MySQL service using username:hive.
>
> However, I think it does not have appropriate privileges or something.
> I am not able to track why it does so. Am I missing any configuration part
> in blueprint ?
>
> Please find below the blueprint and hiveserver2.log for better
> understanding and debugging of the issue.
>
> *blueprint.json:-*
>
> {
> "configurations" : [
> ],
> "host_groups" : [
> {
> "name" : "slave",
> "configurations" : [ ],
> "components" : [
> { "name" : "ZOOKEEPER_CLIENT" },
> { "name" : "PIG" },
> { "name" : "OOZIE_CLIENT" },
> { "name" : "HBASE_CLIENT" },
> { "name" : "HCAT" },
> { "name" : "FALCON_CLIENT" },
> { "name" : "TEZ_CLIENT" },
> { "name" : "SLIDER" },
> { "name" : "SQOOP" },
> { "name" : "HDFS_CLIENT" },
> { "name" : "HIVE_CLIENT" },
> { "name" : "NODEMANAGER" },
> { "name" : "YARN_CLIENT" },
> { "name" : "MAPREDUCE2_CLIENT" },
> { "name" : "DATANODE" },
> { "name" : "GANGLIA_MONITOR" }
> ],
> "cardinality" : "1"
> },
> {
> "name" : "master",
> "configurations" : [ ],
> "components" : [
> { "name" : "PIG" },
> { "name" : "HISTORYSERVER" },
> { "name" : "KAFKA_BROKER" },
> { "name" : "HBASE_REGIONSERVER" },
> { "name" : "OOZIE_CLIENT" },
> { "name" : "HBASE_CLIENT" },
> { "name" : "NAMENODE" },
> { "name" : "SUPERVISOR" },
> { "name" : "FALCON_SERVER" },
> { "name" : "HCAT" },
> { "name" : "KNOX_GATEWAY" },
> { "name" : "SLIDER" },
> { "name" : "AMBARI_SERVER" },
> { "name" : "APP_TIMELINE_SERVER" },
> { "name" : "HDFS_CLIENT" },
> { "name" : "HIVE_CLIENT" },
> { "name" : "FLUME_HANDLER" },
> { "name" : "WEBHCAT_SERVER" },
> { "name" : "RESOURCEMANAGER" },
> { "name" : "ZOOKEEPER_SERVER" },
> { "name" : "ZOOKEEPER_CLIENT" },
> { "name" : "STORM_UI_SERVER" },
> { "name" : "HBASE_MASTER" },
> { "name" : "HIVE_SERVER" },
> { "name" : "OOZIE_SERVER" },
> { "name" : "FALCON_CLIENT" },
> { "name" : "TEZ_CLIENT" },
> { "name" : "HIVE_METASTORE" },
> { "name" : "GANGLIA_SERVER" },
> { "name" : "SQOOP" },
> { "name" : "YARN_CLIENT" },
> { "name" : "MAPREDUCE2_CLIENT" },
> { "name" : "MYSQL_SERVER" },
> { "name" : "GANGLIA_MONITOR" },
> { "name" : "DRPC_SERVER" },
> { "name" : "NIMBUS" },
> { "name" : "SECONDARY_NAMENODE" }
> ],
> "cardinality" : "1"
> }
> ],
> "Blueprints" : {
> "stack_name" : "HDP",
> "stack_version" : "2.2"
> }
> }
>
> *hiveserver2.log:-*
>
>
> *Caused by: java.sql.SQLException: Unable to open a test connection to the
> given database. JDBC url =
> jdbc:mysql://mymaster.ibm.com/hive?createDatabaseIfNotExist=true
> , username =
> hive. Terminating connection pool (set lazyInit to true if you expect to
> start your database after your app). Original Exception: ——
> java.sql.SQLException: Access denied for user ‘hive’@’mymaster.ibm.com
> ’ (using password: YES)*
> at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1073)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3597)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3529)
> at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:935)
> at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:4101)
> at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1300)
> at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2337)
> at
> com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2370)
> at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2154)
> at com.mysql.jdbc.ConnectionImpl.(ConnectionImpl.java:792)
> at com.mysql.jdbc.JDBC4Connection.(JDBC4Connection.java:49)
> at sun.reflect.GeneratedConstructorAccessor11.newInstance(Unknown Source)
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
> at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:381)
> at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:305)
> at java.sql.DriverManager.getConnection(DriverManager.java:571)
> at java.sql.DriverManager.getConnection(DriverManager.java:187)
> at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361)
> at com.jolbox.bonecp.BoneCP.(BoneCP.java:416)
> at
> com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
> at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.ge

Re: column selection for insert

2015-02-26 Thread Srinivas Thunga
Hi,

Then is there any way to achieve this?
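
For 0.14, one workaround consistent with Alan's answer quoted below is to list every
target column in the SELECT, in table order, and supply NULL for the ones you are not
setting (the column types here are assumptions):

-- Assuming emp is defined as (eid int, ename string, sal int), in that order:
INSERT INTO TABLE emp
SELECT d.eid, d.ename, CAST(NULL AS int)
FROM dept d;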


*Thanks & Regards,*

*Srinivas T*

On Fri, Feb 27, 2015 at 9:53 AM, Siddharth Tiwari  wrote:

> Not yet !
>
> Sent from my iPhone
>
> On Feb 26, 2015, at 8:23 PM, Srinivas Thunga 
> wrote:
>
> Hi,
>
> Thanks for the prompt response.
>
> Cann't i insert specific columns which i want?
>
> Like we do in oracle.
>
> *Thanks & Regards,*
>
> *Srinivas T*
>
> On Thu, Feb 26, 2015 at 8:55 PM, Alan Gates  wrote:
>
>> Specifying which columns in the target table to insert into (emp in your
>> case) is not supported in 0.14.  You have to insert into all the columns in
>> the table and they have to match the order of items in the select or values
>> clause.
>>
>> This will be supported in 1.2.
>>
>> Alan.
>>
>>  
>>  Srinivas Thunga 
>>  February 26, 2015 at 4:34
>> Hi,
>>
>> I am using Hive 0.14
>>
>> Can any one help me on to execute the below type query
>>
>> insert into table emp (eid,ename) select * from dept where emp.id =
>> dept.id
>>
>>
>>
>>
>> *Thanks & Regards,*
>>
>> *Srinivas T*
>>
>>
>


Re: column selection for insert

2015-02-26 Thread Srinivas Thunga
Hi,

Thanks for the prompt response.

Can't I insert just the specific columns that I want?

Like we do in Oracle.

*Thanks & Regards,*

*Srinivas T*

On Thu, Feb 26, 2015 at 8:55 PM, Alan Gates  wrote:

> Specifying which columns in the target table to insert into (emp in your
> case) is not supported in 0.14.  You have to insert into all the columns in
> the table and they have to match the order of items in the select or values
> clause.
>
> This will be supported in 1.2.
>
> Alan.
>
>   Srinivas Thunga 
>  February 26, 2015 at 4:34
> Hi,
>
> I am using Hive 0.14
>
> Can any one help me on to execute the below type query
>
> insert into table emp (eid,ename) select * from dept where emp.id =
> dept.id
>
>
>
>
> *Thanks & Regards,*
>
> *Srinivas T*
>
>


column selection for insert

2015-02-26 Thread Srinivas Thunga
Hi,

I am using Hive 0.14

Can any one help me on to execute the below type query

insert into table emp (eid,ename) select * from dept where emp.id = dept.id




*Thanks & Regards,*

*Srinivas T*


Re: ERROR OutOfMemoryError: Java heap space

2015-02-25 Thread Srinivas Thunga
Is your problem solved?

*Thanks & Regards,*

*Srinivas T*

On Wed, Feb 25, 2015 at 4:17 PM, Srinivas Thunga 
wrote:

> Hi,
>
> You can set fetch size based certain datatype like blob, image, etc
>
> statement.setFetchSize(1000);
>
> Try this.
>
>
> *Thanks & Regards,*
>
> *Srinivas T*
>
> On Wed, Feb 25, 2015 at 4:10 PM, Jadhav Shweta 
> wrote:
>
>> Query is working fine if i have small data set in same table.
>> But its throwing error for large data set
>>
>> Thanks
>> Shweta Jadhav
>>
>>
>>
>> -Srinivas Thunga  wrote: -
>> To: "user@hive.apache.org" 
>> From: Srinivas Thunga 
>> Date: 02/25/2015 03:45PM
>>
>> Subject: Re: ERROR OutOfMemoryError: Java heap space
>>
>> not required. Make sure there will not be unnecessary looping in your code
>>
>> *Thanks & Regards,*
>>
>> *Srinivas T*
>>
>> On Wed, Feb 25, 2015 at 2:58 PM, Jadhav Shweta 
>> wrote:
>>
>>> Do I need to increase HADOOP_HEAPSIZE
>>> I haven't set it.
>>> I have cluster with 4 machines each having 8GB ram.
>>> I read somewhere that we should mention auth=noSasl in hive jdbc url. Is
>>> it necessary
>>>
>>> Thanks
>>> Shweta Jadhav
>>>
>>>
>>>
>>> -Jadhav Shweta  wrote: -----
>>> To: user@hive.apache.org
>>> From: Jadhav Shweta 
>>> Date: 02/25/2015 02:54PM
>>>
>>> Subject: Re: ERROR OutOfMemoryError: Java heap space
>>>
>>> I am running the query using spring batch framework.
>>>
>>> Thanks
>>> Shweta Jadhav
>>>
>>> -Srinivas Thunga  wrote: -
>>> To: "user@hive.apache.org" 
>>> From: Srinivas Thunga 
>>> Date: 02/25/2015 02:43PM
>>> Subject: Re: ERROR OutOfMemoryError: Java heap space
>>>
>>> Hi,
>>>
>>> Might be problem in your code.
>>>
>>> Generally it happens when there are unnecessary looping.
>>>
>>> Please check your java code.
>>>
>>> *Thanks & Regards,*
>>>
>>> *Srinivas T*
>>>
>>> On Wed, Feb 25, 2015 at 2:38 PM, Jadhav Shweta 
>>> wrote:
>>>
>>>> Hi
>>>>
>>>> I have installed apache hive 0.13.0.
>>>> I have configured metastore as postgres db.
>>>> Query is working fine in beeline command line interface but giving Heap
>>>> space error while executing using JDBC java client.
>>>>
>>>> Thanks
>>>> Shweta Jadhav
>>>>
>>>> -Srinivas Thunga  wrote: -
>>>> To: "user@hive.apache.org" 
>>>> From: Srinivas Thunga 
>>>> Date: 02/25/2015 02:32PM
>>>> Subject: Re: ERROR OutOfMemoryError: Java heap space
>>>>
>>>>
>>>> Hi,
>>>>
>>>> Let me know how you configured Hive?
>>>>
>>>> R u using cloudera?
>>>>
>>>> If you are installed Hive separately then have you configured Metastore?
>>>>
>>>>
>>>>
>>>> *Thanks & Regards,*
>>>>
>>>> *Srinivas T*
>>>>
>>>> On Wed, Feb 25, 2015 at 2:27 PM, Jadhav Shweta 
>>>> wrote:
>>>>
>>>>> Hi
>>>>>
>>>>> I am running simple select query
>>>>>
>>>>> select * from table;
>>>>>
>>>>> thanks
>>>>> Shweta Jadhav
>>>>>
>>>>>
>>>>>
>>>>> -Srinivas Thunga  wrote: -
>>>>> To: "user@hive.apache.org" 
>>>>> From: Srinivas Thunga 
>>>>> Date: 02/25/2015 01:09PM
>>>>> Subject: Re:
>>>>>
>>>>> Hi,
>>>>>
>>>>> Can  you place the query as well which you are trying?
>>>>>
>>>>>
>>>>>
>>>>> *Thanks & Regards,*
>>>>>
>>>>> *Srinivas T*
>>>>>
>>>>> On Wed, Feb 25, 2015 at 1:02 PM, Jadhav Shweta 
>>>>> wrote:
>>>>>
>>>>>> Hi
>>>>>>
>>>>>> I am trying to run hive query
>>>>>> Its getting executed from beeline interface
>>>>>> but its throwing
>>>>>>
>>>>>>java.lang.OutOfMemoryError: Java heap space
>>>>>>
>>>>>>
>>>>>>error when connecting using jdbc.
>>>>>>
>>>>>>
>>>>>>I am using hive 0.13.0 version and hiveserver2.
>>>>>>
>>>>>>
>>>>>>which parameters i need to configure for the same.
>>>>>>
>>>>>> thanks
>>>>>>
>>>>>> Shweta Jadhav
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>


Re: ERROR OutOfMemoryError: Java heap space

2015-02-25 Thread Srinivas Thunga
Hi,

You can set the fetch size based on certain datatypes like blob, image, etc.:

statement.setFetchSize(1000);

Try this.


*Thanks & Regards,*

*Srinivas T*

On Wed, Feb 25, 2015 at 4:10 PM, Jadhav Shweta 
wrote:

> Query is working fine if i have small data set in same table.
> But its throwing error for large data set
>
> Thanks
> Shweta Jadhav
>
>
>
> -Srinivas Thunga  wrote: -
> To: "user@hive.apache.org" 
> From: Srinivas Thunga 
> Date: 02/25/2015 03:45PM
>
> Subject: Re: ERROR OutOfMemoryError: Java heap space
>
> not required. Make sure there will not be unnecessary looping in your code
>
> *Thanks & Regards,*
>
> *Srinivas T*
>
> On Wed, Feb 25, 2015 at 2:58 PM, Jadhav Shweta 
> wrote:
>
>> Do I need to increase HADOOP_HEAPSIZE
>> I haven't set it.
>> I have cluster with 4 machines each having 8GB ram.
>> I read somewhere that we should mention auth=noSasl in hive jdbc url. Is
>> it necessary
>>
>> Thanks
>> Shweta Jadhav
>>
>>
>>
>> -Jadhav Shweta  wrote: -
>> To: user@hive.apache.org
>> From: Jadhav Shweta 
>> Date: 02/25/2015 02:54PM
>>
>> Subject: Re: ERROR OutOfMemoryError: Java heap space
>>
>> I am running the query using spring batch framework.
>>
>> Thanks
>> Shweta Jadhav
>>
>> -Srinivas Thunga  wrote: -
>> To: "user@hive.apache.org" 
>> From: Srinivas Thunga 
>> Date: 02/25/2015 02:43PM
>> Subject: Re: ERROR OutOfMemoryError: Java heap space
>>
>> Hi,
>>
>> Might be problem in your code.
>>
>> Generally it happens when there are unnecessary looping.
>>
>> Please check your java code.
>>
>> *Thanks & Regards,*
>>
>> *Srinivas T*
>>
>> On Wed, Feb 25, 2015 at 2:38 PM, Jadhav Shweta 
>> wrote:
>>
>>> Hi
>>>
>>> I have installed apache hive 0.13.0.
>>> I have configured metastore as postgres db.
>>> Query is working fine in beeline command line interface but giving Heap
>>> space error while executing using JDBC java client.
>>>
>>> Thanks
>>> Shweta Jadhav
>>>
>>> -Srinivas Thunga  wrote: -
>>> To: "user@hive.apache.org" 
>>> From: Srinivas Thunga 
>>> Date: 02/25/2015 02:32PM
>>> Subject: Re: ERROR OutOfMemoryError: Java heap space
>>>
>>>
>>> Hi,
>>>
>>> Let me know how you configured Hive?
>>>
>>> R u using cloudera?
>>>
>>> If you are installed Hive separately then have you configured Metastore?
>>>
>>>
>>>
>>> *Thanks & Regards,*
>>>
>>> *Srinivas T*
>>>
>>> On Wed, Feb 25, 2015 at 2:27 PM, Jadhav Shweta 
>>> wrote:
>>>
>>>> Hi
>>>>
>>>> I am running simple select query
>>>>
>>>> select * from table;
>>>>
>>>> thanks
>>>> Shweta Jadhav
>>>>
>>>>
>>>>
>>>> -Srinivas Thunga  wrote: -
>>>> To: "user@hive.apache.org" 
>>>> From: Srinivas Thunga 
>>>> Date: 02/25/2015 01:09PM
>>>> Subject: Re:
>>>>
>>>> Hi,
>>>>
>>>> Can  you place the query as well which you are trying?
>>>>
>>>>
>>>>
>>>> *Thanks & Regards,*
>>>>
>>>> *Srinivas T*
>>>>
>>>> On Wed, Feb 25, 2015 at 1:02 PM, Jadhav Shweta 
>>>> wrote:
>>>>
>>>>> Hi
>>>>>
>>>>> I am trying to run hive query
>>>>> Its getting executed from beeline interface
>>>>> but its throwing
>>>>>
>>>>>java.lang.OutOfMemoryError: Java heap space
>>>>>
>>>>>
>>>>>error when connecting using jdbc.
>>>>>
>>>>>
>>>>>I am using hive 0.13.0 version and hiveserver2.
>>>>>
>>>>>
>>>>>which parameters i need to configure for the same.
>>>>>
>>>>> thanks
>>>>>
>>>>> Shweta Jadhav
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>


Re: ERROR OutOfMemoryError: Java heap space

2015-02-25 Thread Srinivas Thunga
not required. Make sure there will not be unnecessary looping in your code

*Thanks & Regards,*

*Srinivas T*

On Wed, Feb 25, 2015 at 2:58 PM, Jadhav Shweta 
wrote:

> Do I need to increase HADOOP_HEAPSIZE
> I haven't set it.
> I have cluster with 4 machines each having 8GB ram.
> I read somewhere that we should mention auth=noSasl in hive jdbc url. Is
> it necessary
>
> Thanks
> Shweta Jadhav
>
>
>
> -Jadhav Shweta  wrote: -
> To: user@hive.apache.org
> From: Jadhav Shweta 
> Date: 02/25/2015 02:54PM
>
> Subject: Re: ERROR OutOfMemoryError: Java heap space
>
> I am running the query using spring batch framework.
>
> Thanks
> Shweta Jadhav
>
> -Srinivas Thunga  wrote: -
> To: "user@hive.apache.org" 
> From: Srinivas Thunga 
> Date: 02/25/2015 02:43PM
> Subject: Re: ERROR OutOfMemoryError: Java heap space
>
> Hi,
>
> Might be problem in your code.
>
> Generally it happens when there are unnecessary looping.
>
> Please check your java code.
>
> *Thanks & Regards,*
>
> *Srinivas T*
>
> On Wed, Feb 25, 2015 at 2:38 PM, Jadhav Shweta 
> wrote:
>
>> Hi
>>
>> I have installed apache hive 0.13.0.
>> I have configured metastore as postgres db.
>> Query is working fine in beeline command line interface but giving Heap
>> space error while executing using JDBC java client.
>>
>> Thanks
>> Shweta Jadhav
>>
>> -Srinivas Thunga  wrote: -
>> To: "user@hive.apache.org" 
>> From: Srinivas Thunga 
>> Date: 02/25/2015 02:32PM
>> Subject: Re: ERROR OutOfMemoryError: Java heap space
>>
>>
>> Hi,
>>
>> Let me know how you configured Hive?
>>
>> R u using cloudera?
>>
>> If you are installed Hive separately then have you configured Metastore?
>>
>>
>>
>> *Thanks & Regards,*
>>
>> *Srinivas T*
>>
>> On Wed, Feb 25, 2015 at 2:27 PM, Jadhav Shweta 
>> wrote:
>>
>>> Hi
>>>
>>> I am running simple select query
>>>
>>> select * from table;
>>>
>>> thanks
>>> Shweta Jadhav
>>>
>>>
>>>
>>> -Srinivas Thunga  wrote: -
>>> To: "user@hive.apache.org" 
>>> From: Srinivas Thunga 
>>> Date: 02/25/2015 01:09PM
>>> Subject: Re:
>>>
>>> Hi,
>>>
>>> Can  you place the query as well which you are trying?
>>>
>>>
>>>
>>> *Thanks & Regards,*
>>>
>>> *Srinivas T*
>>>
>>> On Wed, Feb 25, 2015 at 1:02 PM, Jadhav Shweta 
>>> wrote:
>>>
>>>> Hi
>>>>
>>>> I am trying to run hive query
>>>> Its getting executed from beeline interface
>>>> but its throwing
>>>>
>>>>java.lang.OutOfMemoryError: Java heap space
>>>>
>>>>
>>>>error when connecting using jdbc.
>>>>
>>>>
>>>>I am using hive 0.13.0 version and hiveserver2.
>>>>
>>>>
>>>>which parameters i need to configure for the same.
>>>>
>>>> thanks
>>>>
>>>> Shweta Jadhav
>>>>
>>>>
>>>>
>>>
>>
>


Re: ERROR OutOfMemoryError: Java heap space

2015-02-25 Thread Srinivas Thunga
Hi,

Might be problem in your code.

Generally it happens when there are unnecessary looping.

Please check your java code.

*Thanks & Regards,*

*Srinivas T*

On Wed, Feb 25, 2015 at 2:38 PM, Jadhav Shweta 
wrote:

> Hi
>
> I have installed apache hive 0.13.0.
> I have configured metastore as postgres db.
> Query is working fine in beeline command line interface but giving Heap
> space error while executing using JDBC java client.
>
> Thanks
> Shweta Jadhav
>
> -Srinivas Thunga  wrote: -
> To: "user@hive.apache.org" 
> From: Srinivas Thunga 
> Date: 02/25/2015 02:32PM
> Subject: Re: ERROR OutOfMemoryError: Java heap space
>
>
> Hi,
>
> Let me know how you configured Hive?
>
> R u using cloudera?
>
> If you are installed Hive separately then have you configured Metastore?
>
>
>
> *Thanks & Regards,*
>
> *Srinivas T*
>
> On Wed, Feb 25, 2015 at 2:27 PM, Jadhav Shweta 
> wrote:
>
>> Hi
>>
>> I am running simple select query
>>
>> select * from table;
>>
>> thanks
>> Shweta Jadhav
>>
>>
>>
>> -Srinivas Thunga  wrote: -
>> To: "user@hive.apache.org" 
>> From: Srinivas Thunga 
>> Date: 02/25/2015 01:09PM
>> Subject: Re:
>>
>> Hi,
>>
>> Can  you place the query as well which you are trying?
>>
>>
>>
>> *Thanks & Regards,*
>>
>> *Srinivas T*
>>
>> On Wed, Feb 25, 2015 at 1:02 PM, Jadhav Shweta 
>> wrote:
>>
>>> Hi
>>>
>>> I am trying to run hive query
>>> Its getting executed from beeline interface
>>> but its throwing
>>>
>>>java.lang.OutOfMemoryError: Java heap space
>>>
>>>
>>>error when connecting using jdbc.
>>>
>>>
>>>I am using hive 0.13.0 version and hiveserver2.
>>>
>>>
>>>which parameters i need to configure for the same.
>>>
>>> thanks
>>>
>>> Shweta Jadhav
>>>
>>>
>>>
>>
>


Re: ERROR OutOfMemoryError: Java heap space

2015-02-25 Thread Srinivas Thunga
Hi,

Let me know how you configured Hive?

R u using cloudera?

If you are installed Hive separately then have you configured Metastore?



*Thanks & Regards,*

*Srinivas T*

On Wed, Feb 25, 2015 at 2:27 PM, Jadhav Shweta 
wrote:

> Hi
>
> I am running simple select query
>
> select * from table;
>
> thanks
> Shweta Jadhav
>
>
>
> -Srinivas Thunga  wrote: -
> To: "user@hive.apache.org" 
> From: Srinivas Thunga 
> Date: 02/25/2015 01:09PM
> Subject: Re:
>
> Hi,
>
> Can  you place the query as well which you are trying?
>
>
>
> *Thanks & Regards,*
>
> *Srinivas T*
>
> On Wed, Feb 25, 2015 at 1:02 PM, Jadhav Shweta 
> wrote:
>
>> Hi
>>
>> I am trying to run hive query
>> Its getting executed from beeline interface
>> but its throwing
>>
>>java.lang.OutOfMemoryError: Java heap space
>>
>>
>>error when connecting using jdbc.
>>
>>
>>I am using hive 0.13.0 version and hiveserver2.
>>
>>
>>which parameters i need to configure for the same.
>>
>> thanks
>>
>> Shweta Jadhav
>>
>>
>>
>


Re:

2015-02-24 Thread Srinivas Thunga
Hi,

Can you paste the query which you are trying as well?



*Thanks & Regards,*

*Srinivas T*

On Wed, Feb 25, 2015 at 1:02 PM, Jadhav Shweta 
wrote:

> Hi
>
> I am trying to run hive query
> Its getting executed from beeline interface
> but its throwing
>
>java.lang.OutOfMemoryError: Java heap space
>
>
>error when connecting using jdbc.
>
>
>I am using hive 0.13.0 version and hiveserver2.
>
>
>which parameters i need to configure for the same.
>
> thanks
>
> Shweta Jadhav
>
>
>


Re: fail to drop table after upgrade to hive 0.14.0

2015-02-23 Thread Srinivas Thunga
Hi,

The table should be created as an ORC table, like:

 create table HiveTest (eid int,ename string,desig string,sal int,dept
string) clustered by (dept) into 3 buckets stored as orc TBLPROPERTIES
('transactional'='true') ;

*Thanks & Regards,*

*Srinivas T*

On Tue, Feb 24, 2015 at 9:47 AM, Srinivas Thunga 
wrote:

> Hi,
>
> Below are new properties to be set in Hive-Site.xml
>
>  hive.support.concurrency – true
>  hive.enforce.bucketing – true
>  hive.exec.dynamic.partition.mode – nonstrict
>  hive.txn.manager – org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
>  hive.compactor.initiator.on – true
>  hive.compactor.worker.threads – 1
>
> *Thanks & Regards,*
>
> *Srinivas T*
>
> On Mon, Feb 23, 2015 at 9:31 PM, Jessica Zhang 
> wrote:
>
>> Thanks for the reply! Would you please elaborate which new properties for
>> hive 0.14?
>>
>> Jessica
>>
>> On Feb 23, 2015, at 1:03 AM, Srinivas Thunga 
>> wrote:
>>
>> Apply new configuration properties for hive0.14 in hive-site.xml file
>> then drop or delete table
>>
>> On Monday, February 23, 2015, Jie Zhang  wrote:
>>
>>> Hi,
>>>
>>> I have an application using hive, and just upgraded from 0.13.1 to
>>> 0.14.0. However, a bunch of unit testcases, which are using embedded derby
>>> metastore, failing to drop table. Here are the exception stack trace. Does
>>> anyone have clue what the problem can be and how to resolve it? Feedback is
>>> really appreciated. Thanks!
>>>
>>> ERROR 2015-02-22 22:38:26,757 [main] [StmtCacheTest] [line 308]
>>> SQLException when creating hive data in test table: [stmtcache_test]
>>>
>>> java.sql.SQLException: Error while processing statement: FAILED:
>>> Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
>>> MetaException(message:javax.jdo.JDOException: Exception thrown when
>>> executing query
>>>
>>> at
>>> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:596)
>>>
>>> at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:275)
>>>
>>> at
>>> org.apache.hadoop.hive.metastore.ObjectStore.listMIndexes(ObjectStore.java:3133)
>>>
>>> at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getIndexes(ObjectStore.java:3107)
>>>
>>> at sun.reflect.GeneratedMethodAccessor86.invoke(Unknown Source)
>>>
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>
>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>>
>>> at
>>> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
>>>
>>> at com.sun.proxy.$Proxy8.getIndexes(Unknown Source)
>>>
>>> at
>>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1465)
>>>
>>> at
>>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1657)
>>>
>>> at sun.reflect.GeneratedMethodAccessor85.invoke(Unknown Source)
>>>
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>
>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>>
>>> at
>>> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)
>>>
>>> at com.sun.proxy.$Proxy9.drop_table_with_environment_context(Unknown
>>> Source)
>>>
>>> at
>>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.drop_table_with_environment_context(HiveMetaStoreClient.java:1890)
>>>
>>> at
>>> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.drop_table_with_environment_context(SessionHiveMetaStoreClient.java:117)
>>>
>>> at
>>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:855)
>>>
>>> at
>>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:791)
>>>
>>> at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)
>>>
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>
>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>>
>>> at
>>> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90)
>>>
>>> at com

Re: fail to drop table after upgrade to hive 0.14.0

2015-02-23 Thread Srinivas Thunga
Hi,

Below are new properties to be set in Hive-Site.xml

 hive.support.concurrency – true
 hive.enforce.bucketing – true
 hive.exec.dynamic.partition.mode – nonstrict
 hive.txn.manager – org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
 hive.compactor.initiator.on – true
 hive.compactor.worker.threads – 1

*Thanks & Regards,*

*Srinivas T*

On Mon, Feb 23, 2015 at 9:31 PM, Jessica Zhang  wrote:

> Thanks for the reply! Would you please elaborate which new properties for
> hive 0.14?
>
> Jessica
>
> On Feb 23, 2015, at 1:03 AM, Srinivas Thunga 
> wrote:
>
> Apply new configuration properties for hive0.14 in hive-site.xml file then
> drop or delete table
>
> On Monday, February 23, 2015, Jie Zhang  wrote:
>
>> Hi,
>>
>> I have an application using hive, and just upgraded from 0.13.1 to
>> 0.14.0. However, a bunch of unit testcases, which are using embedded derby
>> metastore, failing to drop table. Here are the exception stack trace. Does
>> anyone have clue what the problem can be and how to resolve it? Feedback is
>> really appreciated. Thanks!
>>
>> ERROR 2015-02-22 22:38:26,757 [main] [StmtCacheTest] [line 308]
>> SQLException when creating hive data in test table: [stmtcache_test]
>>
>> java.sql.SQLException: Error while processing statement: FAILED:
>> Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
>> MetaException(message:javax.jdo.JDOException: Exception thrown when
>> executing query
>>
>> at
>> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:596)
>>
>> at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:275)
>>
>> at
>> org.apache.hadoop.hive.metastore.ObjectStore.listMIndexes(ObjectStore.java:3133)
>>
>> at
>> org.apache.hadoop.hive.metastore.ObjectStore.getIndexes(ObjectStore.java:3107)
>>
>> at sun.reflect.GeneratedMethodAccessor86.invoke(Unknown Source)
>>
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>
>> at java.lang.reflect.Method.invoke(Method.java:606)
>>
>> at
>> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
>>
>> at com.sun.proxy.$Proxy8.getIndexes(Unknown Source)
>>
>> at
>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1465)
>>
>> at
>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1657)
>>
>> at sun.reflect.GeneratedMethodAccessor85.invoke(Unknown Source)
>>
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>
>> at java.lang.reflect.Method.invoke(Method.java:606)
>>
>> at
>> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)
>>
>> at com.sun.proxy.$Proxy9.drop_table_with_environment_context(Unknown
>> Source)
>>
>> at
>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.drop_table_with_environment_context(HiveMetaStoreClient.java:1890)
>>
>> at
>> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.drop_table_with_environment_context(SessionHiveMetaStoreClient.java:117)
>>
>> at
>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:855)
>>
>> at
>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:791)
>>
>> at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)
>>
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>
>> at java.lang.reflect.Method.invoke(Method.java:606)
>>
>> at
>> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90)
>>
>> at com.sun.proxy.$Proxy10.dropTable(Unknown Source)
>>
>> at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:980)
>>
>> at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:917)
>>
>> at org.apache.hadoop.hive.ql.exec.DDLTask.dropTable(DDLTask.java:3673)
>>
>> at
>> org.apache.hadoop.hive.ql.exec.DDLTask.dropTableOrPartitions(DDLTask.java:3608)
>>
>> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:320)
>>
>> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
>>
>> at
>> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>>
>> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.ja

Re: fail to drop table after upgrade to hive 0.14.0

2015-02-23 Thread Srinivas Thunga
Apply the new configuration properties for Hive 0.14 in the hive-site.xml file, then
drop or delete the table.

On Monday, February 23, 2015, Jie Zhang  wrote:

> Hi,
>
> I have an application using hive, and just upgraded from 0.13.1 to 0.14.0.
> However, a bunch of unit testcases, which are using embedded derby
> metastore, failing to drop table. Here are the exception stack trace. Does
> anyone have clue what the problem can be and how to resolve it? Feedback is
> really appreciated. Thanks!
>
> ERROR 2015-02-22 22:38:26,757 [main] [StmtCacheTest] [line 308]
> SQLException when creating hive data in test table: [stmtcache_test]
>
> java.sql.SQLException: Error while processing statement: FAILED: Execution
> Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
> MetaException(message:javax.jdo.JDOException: Exception thrown when
> executing query
>
> at
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:596)
>
> at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:275)
>
> at
> org.apache.hadoop.hive.metastore.ObjectStore.listMIndexes(ObjectStore.java:3133)
>
> at
> org.apache.hadoop.hive.metastore.ObjectStore.getIndexes(ObjectStore.java:3107)
>
> at sun.reflect.GeneratedMethodAccessor86.invoke(Unknown Source)
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:606)
>
> at
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
>
> at com.sun.proxy.$Proxy8.getIndexes(Unknown Source)
>
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1465)
>
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1657)
>
> at sun.reflect.GeneratedMethodAccessor85.invoke(Unknown Source)
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:606)
>
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)
>
> at com.sun.proxy.$Proxy9.drop_table_with_environment_context(Unknown
> Source)
>
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.drop_table_with_environment_context(HiveMetaStoreClient.java:1890)
>
> at
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.drop_table_with_environment_context(SessionHiveMetaStoreClient.java:117)
>
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:855)
>
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.dropTable(HiveMetaStoreClient.java:791)
>
> at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)
>
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:606)
>
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90)
>
> at com.sun.proxy.$Proxy10.dropTable(Unknown Source)
>
> at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:980)
>
> at org.apache.hadoop.hive.ql.metadata.Hive.dropTable(Hive.java:917)
>
> at org.apache.hadoop.hive.ql.exec.DDLTask.dropTable(DDLTask.java:3673)
>
> at
> org.apache.hadoop.hive.ql.exec.DDLTask.dropTableOrPartitions(DDLTask.java:3608)
>
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:320)
>
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
>
> at
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1604)
>
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1364)
>
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1177)
>
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)
>
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:999)
>
> at
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:144)
>
> at
> org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:69)
>
> at
> org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:196)
>
> at java.security.AccessController.doPrivileged(Native Method)
>
> at javax.security.auth.Subject.doAs(Subject.java:415)
>
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>
> at
> org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:536)
>
> at
> org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:208)
>
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
> - Jessica
>


-- 
Sent from Gmail Mobile

Hive source pointing to http rather than https using maven

2015-02-17 Thread Srinivas Thunga
Referred to the link below:


https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ#HiveDeveloperFAQ-HowdoIimportintoEclipse
?

Generating the tarball resulted in the error below.

#mvn clean package -DskipTests -Phadoop-1 -Pdist

ERROR::

Failed to execute goal on project hive-shims-common: Could not resolve
dependencies for project
org.apache.hive.shims:hive-shims-common:jar:1.2.0-SNAPSHOT: Could not
transfer artifact jline:jline:jar:2.12 from/to central (
http://repo.maven.apache.org/maven2): Access denied to:
http://repo.maven.apache.org/maven2/jline/jline/2.12/jline-2.12.jar,
ReasonPhrase:Forbidden. -> [Help 1]
[ERROR]


Please advise.



*Thanks & Regards,*

*Srinivas T*