It sounds like any delete removes only one row from a table in Hive on Spark!

 

DELETE FROM the table (which actually has only 10 rows) removes one row only!

 

delete from tt;

INFO  : Query Hive on Spark job[6] stages:
INFO  : 12
INFO  : 13
INFO  : Status: Running (Hive on Spark job[6])
INFO  : Job Progress Format
CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount [StageCost]
INFO  : 2015-12-23 00:04:50,811 Stage-12_0: 46(+2)/256  Stage-13_0: 0/256
INFO  : 2015-12-23 00:04:51,825 Stage-12_0: 115(+2)/256 Stage-13_0: 0/256
INFO  : 2015-12-23 00:04:52,830 Stage-12_0: 186(+2)/256 Stage-13_0: 0/256
INFO  : 2015-12-23 00:04:53,837 Stage-12_0: 252(+2)/256 Stage-13_0: 0/256
…
INFO  : Status: Finished successfully in 11.06 seconds
INFO  : Loading data to table asehadoop.tt from hdfs://rhes564:9000/user/hive/warehouse/asehadoop.db/tt/.hive-staging_hive_2015-12-23_00-04-49_685_5490632347624948684-10/-ext-10000
INFO  : Table asehadoop.tt stats: [numFiles=2, numRows=0, totalSize=3829, rawDataSize=0]

No rows affected (11.807 seconds)

0: jdbc:hive2://rhes564:10010/default> select count(1) from tt;

INFO  : Query Hive on Spark job[7] stages:
INFO  : 15
INFO  : 14
INFO  : Status: Running (Hive on Spark job[7])
INFO  : Job Progress Format
CurrentTime StageId_StageAttemptId: SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount [StageCost]
INFO  : 2015-12-23 00:05:10,929 Stage-14_0: 49(+2)/256  Stage-15_0: 0/1
INFO  : 2015-12-23 00:05:11,935 Stage-14_0: 114(+2)/256 Stage-15_0: 0/1
INFO  : 2015-12-23 00:05:12,940 Stage-14_0: 180(+2)/256 Stage-15_0: 0/1
INFO  : 2015-12-23 00:05:13,943 Stage-14_0: 245(+2)/256 Stage-15_0: 0/1
INFO  : 2015-12-23 00:05:14,947 Stage-14_0: 256/256 Finished    Stage-15_0: 1/1 Finished
INFO  : Status: Finished successfully in 5.03 seconds

+------+--+
| _c0  |
+------+--+
| 9    |
+------+--+

 

 

Effectively this does not come back with an error message, and "No rows affected" gives the impression that everything was deleted, yet the count shows 9 of the original 10 rows still in place. I assume that DELETE FROM <table> with no predicate should delete all rows?
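
For reference, a sketch of the session settings I understand are needed before ACID DML is accepted at all (taken from the Hive transactions documentation; property names assumed from the docs, not verified on this cluster). The error in the subject line normally points at hive.txn.manager:

set hive.support.concurrency=true;
set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
set hive.enforce.bucketing=true;                 -- Hive 1.x only
set hive.exec.dynamic.partition.mode=nonstrict;
-- plus, on the metastore side, to get compactions running:
set hive.compactor.initiator.on=true;
set hive.compactor.worker.threads=1;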

 

Thanks

 

 

 

Mich Talebzadeh

 

Sybase ASE 15 Gold Medal Award 2008

A Winning Strategy: Running the most Critical Financial Data on ASE 15

http://login.sybase.com/files/Product_Overviews/ASE-Winning-Strategy-091908.pdf

Author of the books "A Practitioner’s Guide to Upgrading to Sybase ASE 15", 
ISBN 978-0-9563693-0-7. 

co-author "Sybase Transact SQL Guidelines Best Practices", ISBN 
978-0-9759693-0-4

Publications due shortly:

Complex Event Processing in Heterogeneous Environments, ISBN: 978-0-9563693-3-8

Oracle and Sybase, Concepts and Contrasts, ISBN: 978-0-9563693-1-4, volume one 
out shortly

 

http://talebzadehmich.wordpress.com

 

NOTE: The information in this email is proprietary and confidential. This 
message is for the designated recipient only, if you are not the intended 
recipient, you should destroy it immediately. Any information in this message 
shall not be understood as given or endorsed by Peridale Technology Ltd, its 
subsidiaries or their employees, unless expressly so stated. It is the 
responsibility of the recipient to ensure that this email is virus free, 
therefore neither Peridale Ltd, its subsidiaries nor their employees accept any 
responsibility.

 

From: Alan Gates [mailto:alanfga...@gmail.com] 
Sent: 23 December 2015 00:27
To: user@hive.apache.org
Subject: Re: Attempt to do update or delete using transaction manager that does 
not support these operations. (state=42000,code=10294)

 

Correct.  What doesn't work on Spark is actually the transactions, because 
there's a piece on the execution side that needs to send heartbeats to the 
metastore saying a transaction is still alive.  That hasn't been implemented 
for Spark.  It's very simple and could be done (see the ql.exec.Heartbeater 
usage in ql.exec.tez.TezJobMonitor for an example of how it would work).  AFAIK 
everything else would work just fine.

Alan.






Mich Talebzadeh

December 22, 2015 at 13:45

Thanks for the feedback Alan

 

It seems that one can do INSERTs with Hive on Spark but not updates or deletes. 
Is this correct?

 

Cheers,

 

Mich Talebzadeh

 


 

From: Alan Gates [mailto:alanfga...@gmail.com] 
Sent: 22 December 2015 20:39
To: user@hive.apache.org
Subject: Re: Attempt to do update or delete using transaction manager that does 
not support these operations. (state=42000,code=10294)

 

Also note that transactions only work with MR or Tez as the backend.  The 
required work to have them work with Spark hasn't been done.

Alan.
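
(A hedged workaround this implies, assuming the execution engine can be switched per session: fall back to MR or Tez just for the ACID statements, e.g.

set hive.execution.engine=mr;   -- or tez
delete from tt;

and set the engine back to spark afterwards.)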










Mich Talebzadeh

December 22, 2015 at 9:14

Thanks Elliot,

 

It sounds like that table was created as CREATE TABLE tt AS SELECT * FROM t. 
Although the original table t was created as transactional, as shown below, the 
table tt is not!

 

0: jdbc:hive2://rhes564:10010/default> show create table t;

+-------------------------------------------------------------+--+
|                       createtab_stmt                        |
+-------------------------------------------------------------+--+
| CREATE TABLE `t`(                                           |
|   `owner` varchar(30),                                      |
|   `object_name` varchar(30),                                |
|   `subobject_name` varchar(30),                             |
|   `object_id` bigint,                                       |
|   `data_object_id` bigint,                                  |
|   `object_type` varchar(19),                                |
|   `created` timestamp,                                      |
|   `last_ddl_time` timestamp,                                |
|   `timestamp2` varchar(19),                                 |
|   `status` varchar(7),                                      |
|   `temporary2` varchar(1),                                  |
|   `generated` varchar(1),                                   |
|   `secondary` varchar(1),                                   |
|   `namespace` bigint,                                       |
|   `edition_name` varchar(30),                               |
|   `padding1` varchar(4000),                                 |
|   `padding2` varchar(3500),                                 |
|   `attribute` varchar(32),                                  |
|   `op_type` int,                                            |
|   `op_time` timestamp,                                      |
|   `new_col` varchar(30))                                    |
| CLUSTERED BY (                                              |
|   object_id)                                                |
| INTO 256 BUCKETS                                            |
| ROW FORMAT SERDE                                            |
|   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'               |
| STORED AS INPUTFORMAT                                       |
|   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'         |
| OUTPUTFORMAT                                                |
|   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'        |
| LOCATION                                                    |
|   'hdfs://rhes564:9000/user/hive/warehouse/asehadoop.db/t'  |
| TBLPROPERTIES (                                             |
|   'COLUMN_STATS_ACCURATE'='false',                          |
|   'last_modified_by'='hduser',                              |
|   'last_modified_time'='1449831076',                        |
|   'numFiles'='17',                                          |
|   'numRows'='-1',                                           |
|   'orc.bloom.filter.columns'='object_id',                   |
|   'orc.bloom.filter.fpp'='0.05',                            |
|   'orc.compress'='SNAPPY',                                  |
|   'orc.create.index'='true',                                |
|   'orc.row.index.stride'='10000',                           |
|   'orc.stripe.size'='268435456',                            |
|   'rawDataSize'='-1',                                       |
|   'totalSize'='64438212',                                   |
|   'transactional'='true',                                   |
|   'transient_lastDdlTime'='1449831076')                     |
+-------------------------------------------------------------+--+
49 rows selected (0.06 seconds)
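
Since CREATE TABLE AS SELECT does not carry TBLPROPERTIES such as 'transactional' over to the new table, a sketch of how tt would need to be created instead (DDL adapted from the output above; not tested here):

CREATE TABLE `tt`(
  `owner` varchar(30),
  `object_name` varchar(30),
  `subobject_name` varchar(30),
  `object_id` bigint,
  `data_object_id` bigint,
  `object_type` varchar(19),
  `created` timestamp,
  `last_ddl_time` timestamp,
  `timestamp2` varchar(19),
  `status` varchar(7),
  `temporary2` varchar(1),
  `generated` varchar(1),
  `secondary` varchar(1),
  `namespace` bigint,
  `edition_name` varchar(30),
  `padding1` varchar(4000),
  `padding2` varchar(3500),
  `attribute` varchar(32),
  `op_type` int,
  `op_time` timestamp,
  `new_col` varchar(30))
CLUSTERED BY (object_id) INTO 256 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

-- then populate it in a separate step:
INSERT INTO TABLE tt SELECT * FROM t;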

 

 

Mich Talebzadeh

 


 

From: Elliot West [mailto:tea...@gmail.com] 
Sent: 22 December 2015 16:57
To: user@hive.apache.org
Subject: Re: Attempt to do update or delete using transaction manager that does 
not support these operations. (state=42000,code=10294)

 

Hi,

 

The input/output formats do not appear to be ORC; have you tried 'stored as 
orc'? Additionally, you'll need to set the property 'transactional=true' on the 
table. Do you have the original create table statement?
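
For instance, a quick way to check whether the property is already set (a sketch, assuming the table in question is tt):

show tblproperties tt("transactional");
-- or: describe formatted tt;   -- look under Table Parameters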

 

Cheers - Elliot.

On Tuesday, 22 December 2015, Mich Talebzadeh <m...@peridale.co.uk> wrote:


