[jira] [Resolved] (HIVE-27979) HMS alter_partitions log adds table name

2024-01-19 Thread Butao Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Butao Zhang resolved HIVE-27979.

Fix Version/s: 4.1.0
 Assignee: dzcxzl
   Resolution: Fixed

[~dzcxzl] Fix has been merged into the master branch. Thanks for your 
contribution!!!

> HMS alter_partitions log adds table name
> 
>
> Key: HIVE-27979
> URL: https://issues.apache.org/jira/browse/HIVE-27979
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Reporter: dzcxzl
>Assignee: dzcxzl
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HIVE-27960) Invalid function error when using custom udaf

2024-01-19 Thread Butao Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Butao Zhang resolved HIVE-27960.

Fix Version/s: 4.1.0
   (was: 2.3.9)
   Resolution: Fixed

[~gaoxiong] Fix has been merged into the master branch. Thanks for your 
contribution!!!

> Invalid function error when using custom udaf
> -
>
> Key: HIVE-27960
> URL: https://issues.apache.org/jira/browse/HIVE-27960
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 2.3.9, 4.0.0-beta-1
> Environment: Aliyun emr hive 2.3.9
>Reporter: gaoxiong
>Assignee: gaoxiong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.1.0
>
>
> When a permanent UDAF is used before the over() clause, Hive throws an invalid 
> function error.
>  
> -In HIVE-12719, this issue was fixed for Hive 3, but the fix does not work in Hive 2.- 
> This issue also reproduces on master.
>  
> In Hive 2, it should get the FunctionInfo from the FunctionRegistry before getting the 
> WindowFunctionInfo, as Hive 3 does, because that lookup registers the window function 
> in the session. Hive can then resolve the WindowFunctionInfo correctly.
>  
>  Error detail:
> register a permanent udaf:
> {code:java}
> create function row_number2 as 
> 'org.apache.hadoop.hive.ql.udf.generic.GenericUDAFRowNumber'; {code}
> Execute the query in a new CLI session:
> {code:java}
> select row_number2() over();{code}
> Below is the error log:
> {code:java}
> FAILED: SemanticException Failed to breakup Windowing invocations into 
> Groups. At least 1 group must only depend on input columns. Also check for 
> circular dependencies.Underlying error: Invalid function 
> row_number22023-12-06T10:17:30,348 ERROR 
> [0b7764ce-cde3-49c5-9d32-f96d61b20773 main] ql.Driver: FAILED: 
> SemanticException Failed to breakup Windowing invocations into Groups. At 
> least 1 group must only depend on input columns. Also check for circular 
> dependencies.Underlying error: Invalid function 
> row_number2org.apache.hadoop.hive.ql.parse.SemanticException: Failed to 
> breakup Windowing invocations into Groups. At least 1 group must only depend 
> on input columns. Also check for circular dependencies.Underlying error: 
> Invalid function row_number2    at 
> org.apache.hadoop.hive.ql.parse.WindowingComponentizer.next(WindowingComponentizer.java:97)
>     at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genWindowingPlan(SemanticAnalyzer.java:13270)
>     at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:9685)
>     at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:9644)
>     at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:10549)
>     at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:10427)
>     at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:11125)
>     at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:481)
>     at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>     at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>     at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)    at 
> org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)    at 
> org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)    at 
> org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)    at 
> org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)    at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)    
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)    at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)    at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)    at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:787)    at 
> org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)    at 
> org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)    at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)    at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
>    at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)    at 
> org.apache.hadoop.util.RunJar.run(RunJar.java:239)    at 
> org.apache.hadoop.util.RunJar.main(RunJar.java:153) {code}
>  
>  
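
The lookup-order fix described in the quoted report can be sketched outside of Hive. The dictionaries and helper functions below are stand-ins for Hive's FunctionRegistry and the per-session window-function registry, not Hive's actual classes:

```python
# Stand-ins for the permanent-function registry and the session-scoped
# window-function registry (illustrative only, not Hive code).
permanent_functions = {"row_number2": "GenericUDAFRowNumber"}
session_window_functions = {}

def get_function_info(name):
    """Resolve the base function; as a side effect, register a permanent
    UDAF as a window function in the session (the step the fix relies on)."""
    impl = permanent_functions.get(name)
    if impl is not None:
        session_window_functions[name] = impl
    return impl

def get_window_function_info(name):
    return session_window_functions.get(name)

# Buggy order: asking for the window function first fails for a permanent
# UDAF that was never used in this session -> "Invalid function".
assert get_window_function_info("row_number2") is None

# Fixed order: resolve the FunctionInfo first, then the window lookup succeeds.
assert get_function_info("row_number2") == "GenericUDAFRowNumber"
assert get_window_function_info("row_number2") == "GenericUDAFRowNumber"
```

The key point is only the ordering: the base-function lookup has the side effect of populating the session registry, so it must run before the window-function lookup.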




[jira] [Commented] (HIVE-27775) DirectSQL and JDO results are different when fetching partitions by timestamp in DST shift

2024-01-19 Thread Zhihua Deng (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-27775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808855#comment-17808855
 ] 

Zhihua Deng commented on HIVE-27775:


Not sure how many users would use timestamp as the type for the partition 
column; it looks like this is a new feature introduced in Hive 4.0.

From the Jira description, the partition predicate in the query

 
{code:java}
SELECT * FROM payments WHERE txn_datetime = '2023-03-26 02:30:00'; {code}
returns an empty result on direct SQL, while JDO returns the partition 
('2023-03-26 02:30:00'). Given that the TIMESTAMP data type in Hive is timezone 
agnostic, I assume the result from JDO is correct.

Besides this difference, when I tested the date/timestamp partition column 
against Postgres and MySQL, there are some problems:

Postgres(direct sql):

operator does not exist: timestamp without time zone = character varying

MySQL(direct sql):

You have an error in your SQL syntax; check the manual that corresponds to your 
MySQL server version for the right syntax to use near 'TIMESTAMP) else null 
end) as TIMESTAMP) = '2023-03-26 03:30:00'))' at line 1 
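
The DST gap that makes this lookup ambiguous can be reproduced directly with Python's zoneinfo module. This only illustrates the timezone behaviour the comment relies on; it is not Hive code:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

paris = ZoneInfo("Europe/Paris")
wall = datetime(2023, 3, 26, 2, 30)  # the naive wall-clock partition value

# PEP 495: for a time inside a DST gap, the two folds resolve to the
# offsets before and after the transition.
early = wall.replace(tzinfo=paris, fold=0)
late = wall.replace(tzinfo=paris, fold=1)
assert early.utcoffset() == timedelta(hours=1)  # CET, before the jump
assert late.utcoffset() == timedelta(hours=2)   # CEST, after the jump

# Round-tripping through UTC lands on 03:30: the 02:30 wall time never
# actually occurs in Europe/Paris on that day.
roundtrip = early.astimezone(timezone.utc).astimezone(paris)
print(roundtrip.strftime("%H:%M"))  # 03:30
```

A timezone-agnostic TIMESTAMP sidesteps this entirely, which is why the JDO result (which matches on the stored wall-clock value) is arguably the correct one.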

 

> DirectSQL and JDO results are different when fetching partitions by timestamp 
> in DST shift
> --
>
> Key: HIVE-27775
> URL: https://issues.apache.org/jira/browse/HIVE-27775
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Affects Versions: 4.0.0-beta-1
>Reporter: Stamatis Zampetakis
>Assignee: Zhihua Deng
>Priority: Critical
>  Labels: pull-request-available
>
> DirectSQL and JDO results are different when fetching partitions by timestamp 
> in DST shift.
> {code:sql}
> --! qt:timezone:Europe/Paris
> CREATE EXTERNAL TABLE payments (card string) PARTITIONED BY(txn_datetime 
> TIMESTAMP) STORED AS ORC;
> INSERT into payments VALUES('---', '2023-03-26 02:30:00');
> SELECT * FROM payments WHERE txn_datetime = '2023-03-26 02:30:00';
> {code}
> The '2023-03-26 02:30:00' is a timestamp that in Europe/Paris timezone falls 
> exactly in the middle of the DST shift. In this particular timezone this date 
> time never really exists since we are jumping directly from 02:00:00 to 
> 03:00:00. However, the TIMESTAMP data type in Hive is timezone agnostic 
> (https://cwiki.apache.org/confluence/display/Hive/Different+TIMESTAMP+types) 
> so it is a perfectly valid timestamp that can be inserted in a table and we 
> must be able to recover it back.
> For the SELECT query above, partition pruning kicks in and calls the 
> ObjectStore#getPartitionsByExpr method in order to fetch the respective 
> partitions matching the timestamp from HMS.
> The tests however reveal that DirectSQL and JDO paths are not returning the 
> same results leading to an exception when VerifyingObjectStore is used. 
> According to the error below DirectSQL is able to recover one partition from 
> HMS (expected) while JDO/ORM returns empty (not expected).
> {noformat}
> 2023-10-06T03:51:19,406 ERROR [80252df4-3fdc-4971-badf-ad67ce8567c7 main] 
> metastore.VerifyingObjectStore: Lists are not the same size: SQL 1, ORM 0
> 2023-10-06T03:51:19,409 ERROR [80252df4-3fdc-4971-badf-ad67ce8567c7 main] 
> metastore.RetryingHMSHandler: MetaException(message:Lists are not the same 
> size: SQL 1, ORM 0)
>   at 
> org.apache.hadoop.hive.metastore.VerifyingObjectStore.verifyLists(VerifyingObjectStore.java:148)
>   at 
> org.apache.hadoop.hive.metastore.VerifyingObjectStore.getPartitionsByExpr(VerifyingObjectStore.java:88)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
>   at com.sun.proxy.$Proxy57.getPartitionsByExpr(Unknown Source)
>   at 
> org.apache.hadoop.hive.metastore.HMSHandler.get_partitions_spec_by_expr(HMSHandler.java:7330)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:98)
>   at 
> org.apache.hadoop.hive.metastore.AbstractHMSHandlerProxy.invoke(AbstractHMSHandlerProxy.java:82)
>   at com.sun.proxy.$Proxy59.get_partitions_spec_by_expr(Unknown Source)
>   at 
> org.

[jira] [Comment Edited] (HIVE-27775) DirectSQL and JDO results are different when fetching partitions by timestamp in DST shift

2024-01-19 Thread Zhihua Deng (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-27775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808855#comment-17808855
 ] 

Zhihua Deng edited comment on HIVE-27775 at 1/20/24 2:27 AM:
-

Not sure how many users would use timestamp as the type for the partition 
column; it looks like this is a new feature introduced in Hive 4.0.

From the Jira description, the partition predicate in the query
{code:java}
SELECT * FROM payments WHERE txn_datetime = '2023-03-26 02:30:00'; {code}
returns an empty result on direct SQL, while JDO returns the partition 
('2023-03-26 02:30:00'). Given that the TIMESTAMP data type in Hive is timezone 
agnostic, I assume the result from JDO is correct.

Besides this difference, when I tested the date/timestamp partition column 
against Postgres and MySQL, there are some problems:

Postgres(direct sql):

operator does not exist: timestamp without time zone = character varying

MySQL(direct sql):

You have an error in your SQL syntax; check the manual that corresponds to your 
MySQL server version for the right syntax to use near 'TIMESTAMP) else null 
end) as TIMESTAMP) = '2023-03-26 03:30:00'))' at line 1 


was (Author: dengzh):
Not sure how many users cloud use timestamp as the type for the partition 
column, looks like this is a new feature introduced in Hive 4.0.

From the Jira description, the partition predicate in the query

 
{code:java}
SELECT * FROM payments WHERE txn_datetime = '2023-03-26 02:30:00'; {code}
returns an empty result on direct sql, while a partition('2023-03-26 02:30:00') 
on the JDO, provided the TIMESTAMP data type in Hive is timezone agnostic, so I 
assume the result from JDO is correct.

Besides this difference, when I tested the date/timestamp partition column 
against Postgres and MySQL, there're some problems:

Postgres(direct sql):

operator does not exist: timestamp without time zone = character varying

MySQL(direct sql):

You have an error in your SQL syntax; check the manual that corresponds to your 
MySQL server version for the right syntax to use near 'TIMESTAMP) else null 
end) as TIMESTAMP) = '2023-03-26 03:30:00'))' at line 1 

 

> DirectSQL and JDO results are different when fetching partitions by timestamp 
> in DST shift
> --
>
> Key: HIVE-27775
> URL: https://issues.apache.org/jira/browse/HIVE-27775
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Affects Versions: 4.0.0-beta-1
>Reporter: Stamatis Zampetakis
>Assignee: Zhihua Deng
>Priority: Critical
>  Labels: pull-request-available
>
> DirectSQL and JDO results are different when fetching partitions by timestamp 
> in DST shift.
> {code:sql}
> --! qt:timezone:Europe/Paris
> CREATE EXTERNAL TABLE payments (card string) PARTITIONED BY(txn_datetime 
> TIMESTAMP) STORED AS ORC;
> INSERT into payments VALUES('---', '2023-03-26 02:30:00');
> SELECT * FROM payments WHERE txn_datetime = '2023-03-26 02:30:00';
> {code}
> The '2023-03-26 02:30:00' is a timestamp that in Europe/Paris timezone falls 
> exactly in the middle of the DST shift. In this particular timezone this date 
> time never really exists since we are jumping directly from 02:00:00 to 
> 03:00:00. However, the TIMESTAMP data type in Hive is timezone agnostic 
> (https://cwiki.apache.org/confluence/display/Hive/Different+TIMESTAMP+types) 
> so it is a perfectly valid timestamp that can be inserted in a table and we 
> must be able to recover it back.
> For the SELECT query above, partition pruning kicks in and calls the 
> ObjectStore#getPartitionsByExpr method in order to fetch the respective 
> partitions matching the timestamp from HMS.
> The tests however reveal that DirectSQL and JDO paths are not returning the 
> same results leading to an exception when VerifyingObjectStore is used. 
> According to the error below DirectSQL is able to recover one partition from 
> HMS (expected) while JDO/ORM returns empty (not expected).
> {noformat}
> 2023-10-06T03:51:19,406 ERROR [80252df4-3fdc-4971-badf-ad67ce8567c7 main] 
> metastore.VerifyingObjectStore: Lists are not the same size: SQL 1, ORM 0
> 2023-10-06T03:51:19,409 ERROR [80252df4-3fdc-4971-badf-ad67ce8567c7 main] 
> metastore.RetryingHMSHandler: MetaException(message:Lists are not the same 
> size: SQL 1, ORM 0)
>   at 
> org.apache.hadoop.hive.metastore.VerifyingObjectStore.verifyLists(VerifyingObjectStore.java:148)
>   at 
> org.apache.hadoop.hive.metastore.VerifyingObjectStore.getPartitionsByExpr(VerifyingObjectStore.java:88)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMeth

[jira] [Resolved] (HIVE-27994) Optimize renaming the partitioned table

2024-01-19 Thread Zhihua Deng (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihua Deng resolved HIVE-27994.

Fix Version/s: 4.0.0
   Resolution: Fixed

Fix has been merged. Thank you [~zhangbutao] and [~hemanth619] for the review!

> Optimize renaming the partitioned table
> ---
>
> Key: HIVE-27994
> URL: https://issues.apache.org/jira/browse/HIVE-27994
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zhihua Deng
>Assignee: Zhihua Deng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>
> In case of a table rename, every row in PART_COL_STATS associated with the 
> table is fetched, stored in memory, deleted, and re-inserted with the new 
> db/table name; this can take hours if the table has thousands of column 
> statistics in PART_COL_STATS.
>  
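
The cost pattern described above, and the direction of the optimization, can be sketched with an in-memory database. The schema here is a deliberately simplified stand-in for the real PART_COL_STATS table in HMS:

```python
import sqlite3

# Simplified stand-in schema (the real PART_COL_STATS has many more columns).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE PART_COL_STATS (DB_NAME TEXT, TABLE_NAME TEXT, COLUMN_NAME TEXT)"
)
conn.executemany(
    "INSERT INTO PART_COL_STATS VALUES (?, ?, ?)",
    [("db1", "old_tbl", f"col{i}") for i in range(1000)],
)

# One in-place UPDATE renames every matching row; nothing is fetched into
# client memory and no delete/re-insert cycle is needed.
conn.execute(
    "UPDATE PART_COL_STATS SET TABLE_NAME = ? WHERE DB_NAME = ? AND TABLE_NAME = ?",
    ("new_tbl", "db1", "old_tbl"),
)
count = conn.execute(
    "SELECT COUNT(*) FROM PART_COL_STATS WHERE TABLE_NAME = 'new_tbl'"
).fetchone()[0]
print(count)  # 1000
```

Replacing a per-row fetch/delete/insert loop with a single set-based UPDATE is the general shape of this kind of optimization; the actual HMS change may differ in detail.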





[jira] [Commented] (HIVE-27827) Improve performance of direct SQL implement for getPartitionsByFilter

2024-01-19 Thread Sai Hemanth Gantasala (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-27827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808758#comment-17808758
 ] 

Sai Hemanth Gantasala commented on HIVE-27827:
--

[~wechar] - The patch has been merged into the master branch. Thanks for your 
contribution.

> Improve performance of direct SQL implement for getPartitionsByFilter
> -
>
> Key: HIVE-27827
> URL: https://issues.apache.org/jira/browse/HIVE-27827
> Project: Hive
>  Issue Type: Improvement
>Reporter: Wechar
>Assignee: Wechar
>Priority: Major
>  Labels: pull-request-available
>






[jira] [Updated] (HIVE-27827) Improve performance of direct SQL implement for getPartitionsByFilter

2024-01-19 Thread Sai Hemanth Gantasala (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sai Hemanth Gantasala updated HIVE-27827:
-
Fix Version/s: 4.1.0

> Improve performance of direct SQL implement for getPartitionsByFilter
> -
>
> Key: HIVE-27827
> URL: https://issues.apache.org/jira/browse/HIVE-27827
> Project: Hive
>  Issue Type: Improvement
>Reporter: Wechar
>Assignee: Wechar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.1.0
>
>






[jira] [Resolved] (HIVE-27827) Improve performance of direct SQL implement for getPartitionsByFilter

2024-01-19 Thread Sai Hemanth Gantasala (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sai Hemanth Gantasala resolved HIVE-27827.
--
Resolution: Fixed

> Improve performance of direct SQL implement for getPartitionsByFilter
> -
>
> Key: HIVE-27827
> URL: https://issues.apache.org/jira/browse/HIVE-27827
> Project: Hive
>  Issue Type: Improvement
>Reporter: Wechar
>Assignee: Wechar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.1.0
>
>






[jira] [Comment Edited] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808668#comment-17808668
 ] 

Stamatis Zampetakis edited comment on HIVE-28013 at 1/19/24 2:05 PM:
-

bq. 15 days is not so much - I would recommend to raise it back ; and look 
around what crap the jobs are storing

I am not sure when exactly the 15 days start counting. I got the impression 
that they apply after the PR branch is deleted/closed, but maybe I am mistaken. 
I am saying this because for master, which is part of the same job, we have 
kept builds since 2021.

{noformat}
I wonder how much these 10 log files cost: 
https://github.com/apache/hive/blob/9c4eb96f816105560e7d4809f1d608e7eca9e523/Jenkinsfile#L366-L371

there was this PR: https://github.com/apache/hive/pull/4732
{noformat}
I checked the sizes of builds for master from 2021 to now and I didn't see any 
huge spikes. It was always around 100M as I noted in [a comment 
above|https://issues.apache.org/jira/browse/HIVE-28013?focusedCommentId=17808581&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17808581].

Thanks for the pointer around the space-check job; didn't know about that :)



was (Author: zabetak):
bq. 
bq. 15 days is not so much - I would recommend to raise it back ; and look 
around what crap the jobs are storing
bq. 

I am not sure when exactly the 15 days start counting. I got the impression 
that they apply after deleting/closing the PR branch but maybe I am mistaken. I 
am saying this cause for master which part of the same job we kept builds since 
2021.

{noformat}
I wonder how much these 10 log files cost: 
https://github.com/apache/hive/blob/9c4eb96f816105560e7d4809f1d608e7eca9e523/Jenkinsfile#L366-L371

there was this PR: https://github.com/apache/hive/pull/4732
{noformat}
I checked the sizes of builds for master from 2021 to now and I didn't see any 
huge spikes. It was always around 100M as I noted in [a comment 
above|https://issues.apache.org/jira/browse/HIVE-28013?focusedCommentId=17808581&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17808581].

Thanks for the pointer around the space-check job; didn't know about that :)


> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Blocker
> Fix For: 4.1.0
>
> Attachments: orphaned_item_strategy.png
>
>
> The Hive precommit tests fail due to lack of disk space. A few of the most 
> recent failures are below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurren

[jira] [Commented] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808668#comment-17808668
 ] 

Stamatis Zampetakis commented on HIVE-28013:


bq. 15 days is not so much - I would recommend to raise it back ; and look 
around what crap the jobs are storing

I am not sure when exactly the 15 days start counting. I got the impression 
that they apply after the PR branch is deleted/closed, but maybe I am mistaken. 
I am saying this because for master, which is part of the same job, we have 
kept builds since 2021.

{noformat}
I wonder how much these 10 log files cost: 
https://github.com/apache/hive/blob/9c4eb96f816105560e7d4809f1d608e7eca9e523/Jenkinsfile#L366-L371

there was this PR: https://github.com/apache/hive/pull/4732
{noformat}
I checked the sizes of builds for master from 2021 to now and I didn't see any 
huge spikes. It was always around 100M as I noted in [a comment 
above|https://issues.apache.org/jira/browse/HIVE-28013?focusedCommentId=17808581&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17808581].

Thanks for the pointer around the space-check job; didn't know about that :)


> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Blocker
> Fix For: 4.1.0
>
> Attachments: orphaned_item_strategy.png
>
>
> The Hive precommit tests fail due to lack of disk space. A few of the most 
> recent failures are below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}





[jira] [Commented] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808666#comment-17808666
 ] 

Zoltan Haindrich commented on HIVE-28013:
-

FYI, the amount of disk used by a build was estimated earlier, and usage was 
in line with those estimates until around Feb 2023; I think there might be 
some ballast in the builds.

| 2021 September | 141G | http://ci.hive.apache.org/job/space-check/100/ |
| 2022 Jul | 134G | http://ci.hive.apache.org/job/space-check/400/ |
| 2023 Feb | 141G  | http://ci.hive.apache.org/job/space-check/600/ |
| 2023 Aug | 170G | http://ci.hive.apache.org/job/space-check/800/| 
| 2023 Nov |  194G | http://ci.hive.apache.org/job/space-check/900/| 
| 2024 Jan19 | 209G | http://ci.hive.apache.org/job/space-check/950/|
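
A space-check job like the one linked above essentially boils down to summing file sizes under each build directory. Here is a minimal sketch of that idea; the Jenkins paths in the commented usage are assumptions, not the actual job's configuration:

```python
import os

def dir_size(path):
    """Total bytes under path, skipping files that vanish mid-walk
    (builds can be deleted concurrently on a busy CI master)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file removed between listing and stat; ignore
    return total

# Hypothetical usage against a Jenkins home (path is an assumption):
# jobs_root = "/var/jenkins_home/jobs"
# for job in os.listdir(jobs_root):
#     print(job, dir_size(os.path.join(jobs_root, job)))
```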





> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Blocker
> Fix For: 4.1.0
>
> Attachments: orphaned_item_strategy.png
>
>
> The Hive precommit tests fail due to lack of space. Few of the most recent 
> failures below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}





[jira] [Commented] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808661#comment-17808661
 ] 

Zoltan Haindrich commented on HIVE-28013:
-

there is by the way a job for checking the disk usage:

[http://ci.hive.apache.org/job/space-check/]

> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Blocker
> Fix For: 4.1.0
>
> Attachments: orphaned_item_strategy.png
>
>
> The Hive precommit tests fail due to lack of space. A few of the most recent 
> failures are below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.<init>(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808659#comment-17808659
 ] 

Zoltan Haindrich commented on HIVE-28013:
-

15 days is not that long - I would recommend raising it back, and looking into 
what the jobs are actually storing

I wonder how much these 10 log files cost: 
[https://github.com/apache/hive/blob/9c4eb96f816105560e7d4809f1d608e7eca9e523/Jenkinsfile#L366-L371]

there was this PR: [https://github.com/apache/hive/pull/4732]

 

From your notes it seems to me that a build which includes those logs has grown 
from the ~100M a master build usually takes to 1.1G:

1.1G var/jenkins_home/jobs/hive-precommit/branches/PR-4566/builds/8
1.2G var/jenkins_home/jobs/hive-precommit/branches/PR-4566/builds/27

there was a discussion about reverting - but that never landed...

> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Blocker
> Fix For: 4.1.0
>
> Attachments: orphaned_item_strategy.png
>
>
> The Hive precommit tests fail due to lack of space. A few of the most recent 
> failures are below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.<init>(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28014) to_unix_timestamp udf produces inconsistent results in different jdk versions

2024-01-19 Thread Wechar (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808601#comment-17808601
 ] 

Wechar commented on HIVE-28014:
---

I have downgraded the jdk8 version in wecharyu/hive-dev-box:executor to 
temporarily avoid the CI failures. We need more clues to determine which side is 
incorrect.

> to_unix_timestamp udf produces inconsistent results in different jdk versions
> -
>
> Key: HIVE-28014
> URL: https://issues.apache.org/jira/browse/HIVE-28014
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0-beta-1
>Reporter: Wechar
>Assignee: Wechar
>Priority: Major
>
> In HIVE-27999 we updated the CI docker image, which upgrades jdk8 from 
> {*}1.8.0_262-b19{*} to *1.8.0_392-b08*. This upgrade caused 3 timestamp-related 
> tests to fail:
> *1. Testing / split-02 / PostProcess / 
> testTimestampToString[zoneId=Europe/Paris, timestamp=2417-03-26T02:08:43] – 
> org.apache.hadoop.hive.metastore.utils.TestMetaStoreUtils*
> {code:bash}
> Error
> expected:<2417-03-26 0[2]:08:43> but was:<2417-03-26 0[3]:08:43>
> Stacktrace
> org.junit.ComparisonFailure: expected:<2417-03-26 0[2]:08:43> but 
> was:<2417-03-26 0[3]:08:43>
>   at 
> org.apache.hadoop.hive.metastore.utils.TestMetaStoreUtils.testTimestampToString(TestMetaStoreUtils.java:85)
> {code}
> *2. Testing / split-01 / PostProcess / testCliDriver[udf5] – 
> org.apache.hadoop.hive.cli.split24.TestMiniLlapLocalCliDriver*
> {code:bash}
> Error
> Client Execution succeeded but contained differences (error code = 1) after 
> executing udf5.q 
> 263c263
> < 1400-11-08 07:35:34
> ---
> > 1400-11-08 07:35:24
> 272c272
> < 1800-11-08 07:35:34
> ---
> > 1800-11-08 07:35:24
> 434c434
> < 1399-12-31 23:35:34
> ---
> > 1399-12-31 23:35:24
> 443c443
> < 1799-12-31 23:35:34
> ---
> > 1799-12-31 23:35:24
> 452c452
> < 1899-12-31 23:35:34
> ---
> > 1899-12-31 23:35:24
> {code}
> *3. Testing / split-19 / PostProcess / testStringArg2 – 
> org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp*
> {code:bash}
> Stacktrace
> org.junit.ComparisonFailure: expected:<-17984790[40]0> but 
> was:<-17984790[39]0>
>   at org.junit.Assert.assertEquals(Assert.java:117)
>   at org.junit.Assert.assertEquals(Assert.java:146)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp.runAndVerify(TestGenericUDFToUnixTimestamp.java:70)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp.testStringArg2(TestGenericUDFToUnixTimestamp.java:167)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> {code}
> It may be a jdk bug fixed in the new release, because we can get the 
> same result from Spark:
> {code:sql}
> spark-sql> select to_unix_timestamp(to_timestamp("1400-02-01 00:00:00 ICT", 
> "yyyy-MM-dd HH:mm:ss z"), "US/Pacific");
> -17984790390
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28014) to_unix_timestamp udf produces inconsistent results in different jdk versions

2024-01-19 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808597#comment-17808597
 ] 

Stamatis Zampetakis commented on HIVE-28014:


If there is a bug in 1.8.0_392-b08 then of course we don't want to use this 
version. If however the new behavior is the correct one then we should probably 
update the tests or the Hive code.

> to_unix_timestamp udf produces inconsistent results in different jdk versions
> -
>
> Key: HIVE-28014
> URL: https://issues.apache.org/jira/browse/HIVE-28014
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0-beta-1
>Reporter: Wechar
>Assignee: Wechar
>Priority: Major
>
> In HIVE-27999 we updated the CI docker image, which upgrades jdk8 from 
> {*}1.8.0_262-b19{*} to *1.8.0_392-b08*. This upgrade caused 3 timestamp-related 
> tests to fail:
> *1. Testing / split-02 / PostProcess / 
> testTimestampToString[zoneId=Europe/Paris, timestamp=2417-03-26T02:08:43] – 
> org.apache.hadoop.hive.metastore.utils.TestMetaStoreUtils*
> {code:bash}
> Error
> expected:<2417-03-26 0[2]:08:43> but was:<2417-03-26 0[3]:08:43>
> Stacktrace
> org.junit.ComparisonFailure: expected:<2417-03-26 0[2]:08:43> but 
> was:<2417-03-26 0[3]:08:43>
>   at 
> org.apache.hadoop.hive.metastore.utils.TestMetaStoreUtils.testTimestampToString(TestMetaStoreUtils.java:85)
> {code}
> *2. Testing / split-01 / PostProcess / testCliDriver[udf5] – 
> org.apache.hadoop.hive.cli.split24.TestMiniLlapLocalCliDriver*
> {code:bash}
> Error
> Client Execution succeeded but contained differences (error code = 1) after 
> executing udf5.q 
> 263c263
> < 1400-11-08 07:35:34
> ---
> > 1400-11-08 07:35:24
> 272c272
> < 1800-11-08 07:35:34
> ---
> > 1800-11-08 07:35:24
> 434c434
> < 1399-12-31 23:35:34
> ---
> > 1399-12-31 23:35:24
> 443c443
> < 1799-12-31 23:35:34
> ---
> > 1799-12-31 23:35:24
> 452c452
> < 1899-12-31 23:35:34
> ---
> > 1899-12-31 23:35:24
> {code}
> *3. Testing / split-19 / PostProcess / testStringArg2 – 
> org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp*
> {code:bash}
> Stacktrace
> org.junit.ComparisonFailure: expected:<-17984790[40]0> but 
> was:<-17984790[39]0>
>   at org.junit.Assert.assertEquals(Assert.java:117)
>   at org.junit.Assert.assertEquals(Assert.java:146)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp.runAndVerify(TestGenericUDFToUnixTimestamp.java:70)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp.testStringArg2(TestGenericUDFToUnixTimestamp.java:167)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> {code}
> It may be a jdk bug fixed in the new release, because we can get the 
> same result from Spark:
> {code:sql}
> spark-sql> select to_unix_timestamp(to_timestamp("1400-02-01 00:00:00 ICT", 
> "yyyy-MM-dd HH:mm:ss z"), "US/Pacific");
> -17984790390
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28014) to_unix_timestamp udf produces inconsistent results in different jdk versions

2024-01-19 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808595#comment-17808595
 ] 

Stamatis Zampetakis commented on HIVE-28014:


Let's clarify where the change comes from to determine the correct course of 
action. Maybe someone already did this investigation in the tickets related to 
the upgrade to JDK 11.

> to_unix_timestamp udf produces inconsistent results in different jdk versions
> -
>
> Key: HIVE-28014
> URL: https://issues.apache.org/jira/browse/HIVE-28014
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0-beta-1
>Reporter: Wechar
>Assignee: Wechar
>Priority: Major
>
> In HIVE-27999 we updated the CI docker image, which upgrades jdk8 from 
> {*}1.8.0_262-b19{*} to *1.8.0_392-b08*. This upgrade caused 3 timestamp-related 
> tests to fail:
> *1. Testing / split-02 / PostProcess / 
> testTimestampToString[zoneId=Europe/Paris, timestamp=2417-03-26T02:08:43] – 
> org.apache.hadoop.hive.metastore.utils.TestMetaStoreUtils*
> {code:bash}
> Error
> expected:<2417-03-26 0[2]:08:43> but was:<2417-03-26 0[3]:08:43>
> Stacktrace
> org.junit.ComparisonFailure: expected:<2417-03-26 0[2]:08:43> but 
> was:<2417-03-26 0[3]:08:43>
>   at 
> org.apache.hadoop.hive.metastore.utils.TestMetaStoreUtils.testTimestampToString(TestMetaStoreUtils.java:85)
> {code}
> *2. Testing / split-01 / PostProcess / testCliDriver[udf5] – 
> org.apache.hadoop.hive.cli.split24.TestMiniLlapLocalCliDriver*
> {code:bash}
> Error
> Client Execution succeeded but contained differences (error code = 1) after 
> executing udf5.q 
> 263c263
> < 1400-11-08 07:35:34
> ---
> > 1400-11-08 07:35:24
> 272c272
> < 1800-11-08 07:35:34
> ---
> > 1800-11-08 07:35:24
> 434c434
> < 1399-12-31 23:35:34
> ---
> > 1399-12-31 23:35:24
> 443c443
> < 1799-12-31 23:35:34
> ---
> > 1799-12-31 23:35:24
> 452c452
> < 1899-12-31 23:35:34
> ---
> > 1899-12-31 23:35:24
> {code}
> *3. Testing / split-19 / PostProcess / testStringArg2 – 
> org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp*
> {code:bash}
> Stacktrace
> org.junit.ComparisonFailure: expected:<-17984790[40]0> but 
> was:<-17984790[39]0>
>   at org.junit.Assert.assertEquals(Assert.java:117)
>   at org.junit.Assert.assertEquals(Assert.java:146)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp.runAndVerify(TestGenericUDFToUnixTimestamp.java:70)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp.testStringArg2(TestGenericUDFToUnixTimestamp.java:167)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> {code}
> It may be a jdk bug fixed in the new release, because we can get the 
> same result from Spark:
> {code:sql}
> spark-sql> select to_unix_timestamp(to_timestamp("1400-02-01 00:00:00 ICT", 
> "yyyy-MM-dd HH:mm:ss z"), "US/Pacific");
> -17984790390
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28014) to_unix_timestamp udf produces inconsistent results in different jdk versions

2024-01-19 Thread Butao Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808594#comment-17808594
 ] 

Butao Zhang commented on HIVE-28014:


If it is a jdk bug, should we make some code changes on the Hive side, or just 
change the jdk version in the docker image?

> to_unix_timestamp udf produces inconsistent results in different jdk versions
> -
>
> Key: HIVE-28014
> URL: https://issues.apache.org/jira/browse/HIVE-28014
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0-beta-1
>Reporter: Wechar
>Assignee: Wechar
>Priority: Major
>
> In HIVE-27999 we updated the CI docker image, which upgrades jdk8 from 
> {*}1.8.0_262-b19{*} to *1.8.0_392-b08*. This upgrade caused 3 timestamp-related 
> tests to fail:
> *1. Testing / split-02 / PostProcess / 
> testTimestampToString[zoneId=Europe/Paris, timestamp=2417-03-26T02:08:43] – 
> org.apache.hadoop.hive.metastore.utils.TestMetaStoreUtils*
> {code:bash}
> Error
> expected:<2417-03-26 0[2]:08:43> but was:<2417-03-26 0[3]:08:43>
> Stacktrace
> org.junit.ComparisonFailure: expected:<2417-03-26 0[2]:08:43> but 
> was:<2417-03-26 0[3]:08:43>
>   at 
> org.apache.hadoop.hive.metastore.utils.TestMetaStoreUtils.testTimestampToString(TestMetaStoreUtils.java:85)
> {code}
> *2. Testing / split-01 / PostProcess / testCliDriver[udf5] – 
> org.apache.hadoop.hive.cli.split24.TestMiniLlapLocalCliDriver*
> {code:bash}
> Error
> Client Execution succeeded but contained differences (error code = 1) after 
> executing udf5.q 
> 263c263
> < 1400-11-08 07:35:34
> ---
> > 1400-11-08 07:35:24
> 272c272
> < 1800-11-08 07:35:34
> ---
> > 1800-11-08 07:35:24
> 434c434
> < 1399-12-31 23:35:34
> ---
> > 1399-12-31 23:35:24
> 443c443
> < 1799-12-31 23:35:34
> ---
> > 1799-12-31 23:35:24
> 452c452
> < 1899-12-31 23:35:34
> ---
> > 1899-12-31 23:35:24
> {code}
> *3. Testing / split-19 / PostProcess / testStringArg2 – 
> org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp*
> {code:bash}
> Stacktrace
> org.junit.ComparisonFailure: expected:<-17984790[40]0> but 
> was:<-17984790[39]0>
>   at org.junit.Assert.assertEquals(Assert.java:117)
>   at org.junit.Assert.assertEquals(Assert.java:146)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp.runAndVerify(TestGenericUDFToUnixTimestamp.java:70)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp.testStringArg2(TestGenericUDFToUnixTimestamp.java:167)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> {code}
> It may be a jdk bug fixed in the new release, because we can get the 
> same result from Spark:
> {code:sql}
> spark-sql> select to_unix_timestamp(to_timestamp("1400-02-01 00:00:00 ICT", 
> "yyyy-MM-dd HH:mm:ss z"), "US/Pacific");
> -17984790390
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stamatis Zampetakis resolved HIVE-28013.

Fix Version/s: 4.1.0
   Resolution: Fixed

> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Blocker
> Fix For: 4.1.0
>
> Attachments: orphaned_item_strategy.png
>
>
> The Hive precommit tests fail due to lack of space. A few of the most recent 
> failures are below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.<init>(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808591#comment-17808591
 ] 

Stamatis Zampetakis commented on HIVE-28013:


I just deleted the following old builds as well:

{noformat}
curl -u zabetak:apitoken -H "Content-Length: 0" -X POST 
http://ci.hive.apache.org/job/hive-nightly/[432-449]/doDelete
curl -u zabetak:apitoken -H "Content-Length: 0" -X POST 
http://ci.hive.apache.org/job/hive-precommit/job/master/[583-1577]/doDelete
{noformat}
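As a side note on the commands above: curl expands the [432-449] and [583-1577] 
parts via URL globbing, issuing one POST per build number. A minimal dry-run 
sketch of the same deletion as an explicit loop (the credentials and the echo 
are placeholders, not from this thread):

```shell
# Print the delete endpoints that curl's [432-449] globbing would hit;
# uncomment the curl line to actually issue the POSTs.
for n in $(seq 432 449); do
  echo "POST http://ci.hive.apache.org/job/hive-nightly/${n}/doDelete"
  # curl -u user:apitoken -H "Content-Length: 0" -X POST \
  #   "http://ci.hive.apache.org/job/hive-nightly/${n}/doDelete"
done
```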

In addition, I changed the orphaned item strategy configuration for the 
hive-precommit job to discard old items after 15 days (previously 60).

With all the changes above the Jenkins disk usage is now at 50% and CI is 
stable.

{noformat}
jenkins@jenkins-6858ddb664-2m79c:/$ df
Filesystem 1K-blocks  Used Available Use% Mounted on
overlay 98831908   4857584  93957940   5% /
tmpfs  65536 0 65536   0% /dev
tmpfs6647132 0   6647132   0% /sys/fs/cgroup
/dev/sdb   308521792 147638092 160867316  48% /var/jenkins_home
/dev/sda1   98831908   4857584  93957940   5% /etc/hosts
shm65536 0 65536   0% /dev/shm
tmpfs   1080492012  10804908   1% 
/run/secrets/kubernetes.io/serviceaccount
tmpfs6647132 0   6647132   0% /proc/acpi
tmpfs6647132 0   6647132   0% /proc/scsi
tmpfs6647132 0   6647132   0% /sys/firmware
{noformat}

I am considering this ticket resolved. Please leave a comment if you have ideas 
or questions regarding the above.

> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Blocker
> Attachments: orphaned_item_strategy.png
>
>
> The Hive precommit tests fail due to lack of space. A few of the most recent 
> failures are below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.<init>(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stamatis Zampetakis updated HIVE-28013:
---
Attachment: orphaned_item_strategy.png

> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Blocker
> Attachments: orphaned_item_strategy.png
>
>
> The Hive precommit tests fail due to lack of space. A few of the most recent 
> failures are below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.<init>(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-28014) to_unix_timestamp udf produces inconsistent results in different jdk versions

2024-01-19 Thread Wechar (Jira)
Wechar created HIVE-28014:
-

 Summary: to_unix_timestamp udf produces inconsistent results in 
different jdk versions
 Key: HIVE-28014
 URL: https://issues.apache.org/jira/browse/HIVE-28014
 Project: Hive
  Issue Type: Bug
  Components: Hive
Affects Versions: 4.0.0-beta-1
Reporter: Wechar


In HIVE-27999 we updated the CI docker image, which upgrades jdk8 from 
{*}1.8.0_262-b19{*} to *1.8.0_392-b08*. This upgrade caused 3 timestamp-related 
tests to fail:

*1. Testing / split-02 / PostProcess / 
testTimestampToString[zoneId=Europe/Paris, timestamp=2417-03-26T02:08:43] – 
org.apache.hadoop.hive.metastore.utils.TestMetaStoreUtils*
{code:bash}
Error
expected:<2417-03-26 0[2]:08:43> but was:<2417-03-26 0[3]:08:43>
Stacktrace
org.junit.ComparisonFailure: expected:<2417-03-26 0[2]:08:43> but 
was:<2417-03-26 0[3]:08:43>
at 
org.apache.hadoop.hive.metastore.utils.TestMetaStoreUtils.testTimestampToString(TestMetaStoreUtils.java:85)
{code}

*2. Testing / split-01 / PostProcess / testCliDriver[udf5] – 
org.apache.hadoop.hive.cli.split24.TestMiniLlapLocalCliDriver*
{code:bash}
Error
Client Execution succeeded but contained differences (error code = 1) after 
executing udf5.q 
263c263
< 1400-11-08 07:35:34
---
> 1400-11-08 07:35:24
272c272
< 1800-11-08 07:35:34
---
> 1800-11-08 07:35:24
434c434
< 1399-12-31 23:35:34
---
> 1399-12-31 23:35:24
443c443
< 1799-12-31 23:35:34
---
> 1799-12-31 23:35:24
452c452
< 1899-12-31 23:35:34
---
> 1899-12-31 23:35:24
{code}

*3. Testing / split-19 / PostProcess / testStringArg2 – 
org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp*
{code:bash}
Stacktrace
org.junit.ComparisonFailure: expected:<-17984790[40]0> but was:<-17984790[39]0>
at org.junit.Assert.assertEquals(Assert.java:117)
at org.junit.Assert.assertEquals(Assert.java:146)
at 
org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp.runAndVerify(TestGenericUDFToUnixTimestamp.java:70)
at 
org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp.testStringArg2(TestGenericUDFToUnixTimestamp.java:167)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
{code}

It may be a jdk bug that was fixed in a later release, since we get the same 
result from Spark:
{code:sql}
spark-sql> select to_unix_timestamp(to_timestamp("1400-02-01 00:00:00 ICT", 
"yyyy-MM-dd HH:mm:ss z"), "US/Pacific");
-17984790390
{code}
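The 10-second and 1-hour discrepancies above are characteristic of historical time zone data (local mean time offsets, pre-standardization rules), which differs between tzdata releases bundled with JDK updates. A minimal Python sketch, not Hive code, illustrating the effect; it uses America/Los_Angeles, the canonical IANA name behind the US/Pacific alias:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/Los_Angeles")  # canonical name for US/Pacific

# Before standard time was adopted (Nov 1883), tzdata uses Local Mean Time,
# an offset that is not a whole number of minutes; such odd offsets are
# exactly where JDK/tzdata versions can disagree for very old dates.
old = datetime(1800, 1, 1, tzinfo=tz)
new = datetime(2020, 1, 1, tzinfo=tz)

print(old.utcoffset())  # LMT offset with a seconds component (e.g. -07:52:58)
print(new.utcoffset())  # standard PST, a round -08:00
```

The seconds-level remainder in the LMT offset is the same kind of artifact that shows up as the 10-second shift in the udf5.q diffs.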




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-28014) to_unix_timestamp udf produces inconsistent results in different jdk versions

2024-01-19 Thread Wechar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wechar reassigned HIVE-28014:
-

Assignee: Wechar

> to_unix_timestamp udf produces inconsistent results in different jdk versions
> -
>
> Key: HIVE-28014
> URL: https://issues.apache.org/jira/browse/HIVE-28014
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0-beta-1
>Reporter: Wechar
>Assignee: Wechar
>Priority: Major
>
> In HIVE-27999 we updated the CI docker image, which upgrades jdk8 from 
> {*}1.8.0_262-b19{*} to *1.8.0_392-b08*. This upgrade caused 3 timestamp-related 
> tests to fail:
> *1. Testing / split-02 / PostProcess / 
> testTimestampToString[zoneId=Europe/Paris, timestamp=2417-03-26T02:08:43] – 
> org.apache.hadoop.hive.metastore.utils.TestMetaStoreUtils*
> {code:bash}
> Error
> expected:<2417-03-26 0[2]:08:43> but was:<2417-03-26 0[3]:08:43>
> Stacktrace
> org.junit.ComparisonFailure: expected:<2417-03-26 0[2]:08:43> but 
> was:<2417-03-26 0[3]:08:43>
>   at 
> org.apache.hadoop.hive.metastore.utils.TestMetaStoreUtils.testTimestampToString(TestMetaStoreUtils.java:85)
> {code}
> *2. Testing / split-01 / PostProcess / testCliDriver[udf5] – 
> org.apache.hadoop.hive.cli.split24.TestMiniLlapLocalCliDriver*
> {code:bash}
> Error
> Client Execution succeeded but contained differences (error code = 1) after 
> executing udf5.q 
> 263c263
> < 1400-11-08 07:35:34
> ---
> > 1400-11-08 07:35:24
> 272c272
> < 1800-11-08 07:35:34
> ---
> > 1800-11-08 07:35:24
> 434c434
> < 1399-12-31 23:35:34
> ---
> > 1399-12-31 23:35:24
> 443c443
> < 1799-12-31 23:35:34
> ---
> > 1799-12-31 23:35:24
> 452c452
> < 1899-12-31 23:35:34
> ---
> > 1899-12-31 23:35:24
> {code}
> *3. Testing / split-19 / PostProcess / testStringArg2 – 
> org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp*
> {code:bash}
> Stacktrace
> org.junit.ComparisonFailure: expected:<-17984790[40]0> but 
> was:<-17984790[39]0>
>   at org.junit.Assert.assertEquals(Assert.java:117)
>   at org.junit.Assert.assertEquals(Assert.java:146)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp.runAndVerify(TestGenericUDFToUnixTimestamp.java:70)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.TestGenericUDFToUnixTimestamp.testStringArg2(TestGenericUDFToUnixTimestamp.java:167)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> {code}
> It may be a jdk bug that was fixed in a later release, since we get the 
> same result from Spark:
> {code:sql}
> spark-sql> select to_unix_timestamp(to_timestamp("1400-02-01 00:00:00 ICT", 
> "yyyy-MM-dd HH:mm:ss z"), "US/Pacific");
> -17984790390
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808581#comment-17808581
 ] 

Stamatis Zampetakis commented on HIVE-28013:


For general information each retained build in master consumes ~100MB of space. 
The sizes vary and range somewhere between 40MB to 120MB.
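A back-of-the-envelope check against the ~100GB totals reported for master elsewhere in this thread; the retained-build count here is a rough guess, not a measured value:

```python
# Rough estimate: retained builds x average size per build.
avg_build_mb = 100       # ~100MB per build, per the comment above
retained_builds = 1030   # hypothetical count of retained master builds
total_gb = avg_build_mb * retained_builds / 1024
print(f"~{total_gb:.0f} GB")  # in the same ballpark as the observed usage
```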

> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Blocker
>
> The Hive precommit tests fail due to lack of space. A few of the most recent 
> failures are below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.<init>(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808572#comment-17808572
 ] 

Stamatis Zampetakis commented on HIVE-28013:


I was trying to get some information about the space consumed by each build 
but I think the use of sort (or maybe writing to /tmp) caused an OOM for the 
container and Jenkins died. (Oops, sorry about that.) 

{noformat}
jenkins@jenkins-6858ddb664-lstvm:/$ du -a -d 1 var/jenkins_home/jobs/hive-precommit/branches/master/builds | sort -n -r > /tmp/master_builds_disk_usage.txt
command terminated with exit code 137
{noformat}

A new container is starting, so hopefully the service will be restored shortly.
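The du | sort pipeline above can OOM inside a memory-constrained container. A hedged Python sketch of the same per-build ranking computed one directory at a time (the builds path and `top` count are illustrative):

```python
import os

def dir_size(path):
    """Sum file sizes under path, skipping files that vanish mid-scan."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file removed or unreadable; ignore it
    return total

def largest_builds(builds_dir, top=20):
    """Return the `top` largest build directories as (bytes, name) pairs."""
    sizes = []
    for entry in os.scandir(builds_dir):
        if entry.is_dir(follow_symlinks=False):
            sizes.append((dir_size(entry.path), entry.name))
    return sorted(sizes, reverse=True)[:top]
```

Only `top` entries plus one (size, name) pair per build are kept in memory, so it stays small even over a very large builds tree.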


> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Blocker
>
> The Hive precommit tests fail due to lack of space. A few of the most recent 
> failures are below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.<init>(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808566#comment-17808566
 ] 

Stamatis Zampetakis commented on HIVE-28013:


It seems that for master we have kept all builds from Mar 31, 2021 to now, 
summing up to a total of 100GB.

{noformat}
ls -ltr var/jenkins_home/jobs/hive-precommit/branches/master/builds  | head
total 5816
-rw-rw-r-- 1 jenkins jenkins0 May 29  2020 legacyIds
drwxrwsr-x 4 jenkins jenkins 4096 Mar 31  2021 583
drwxrwsr-x 4 jenkins jenkins 4096 Mar 31  2021 584
drwxrwsr-x 4 jenkins jenkins 4096 Mar 31  2021 585
drwxrwsr-x 3 jenkins jenkins 4096 Apr  1  2021 586
drwxrwsr-x 4 jenkins jenkins 4096 Apr  1  2021 587
drwxrwsr-x 3 jenkins jenkins 4096 Apr  5  2021 588
drwxrwsr-x 4 jenkins jenkins 4096 Apr  5  2021 589
drwxrwsr-x 4 jenkins jenkins 4096 Apr  6  2021 590

{noformat}


> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Blocker
>
> The Hive precommit tests fail due to lack of space. A few of the most recent 
> failures are below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.<init>(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808561#comment-17808561
 ] 

Stamatis Zampetakis commented on HIVE-28013:


I just deleted builds 1 to 140 from hive-nightly-branch-3 using the following 
command.

{noformat}
curl -u zabetak:apitoken -H "Content-Length: 0" -X POST http://ci.hive.apache.org/job/hive-nightly-branch-3/[1-140]/doDelete
{noformat}

The apitoken should be replaced by an actual token, which can be generated for 
users that are Jenkins administrators.
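The same bulk deletion can be scripted against the Jenkins REST endpoint used by the curl command above (POST to /job/<name>/<build>/doDelete). A sketch that only builds the requests so it can be inspected as a dry run before sending anything; the job name and build range are the ones from the command above, the credentials are placeholders:

```python
import base64
from urllib.request import Request

BASE = "http://ci.hive.apache.org"

def delete_requests(job, first, last, user, token):
    """Build one POST request per build number against Jenkins' doDelete endpoint."""
    auth = base64.b64encode(f"{user}:{token}".encode()).decode()
    reqs = []
    for n in range(first, last + 1):
        reqs.append(Request(
            f"{BASE}/job/{job}/{n}/doDelete",
            method="POST",
            headers={"Authorization": f"Basic {auth}",
                     "Content-Length": "0"}))
    return reqs

# Dry run: inspect the URLs; send each with urllib.request.urlopen(req)
# only once the range is confirmed.
reqs = delete_requests("hive-nightly-branch-3", 1, 140, "zabetak", "apitoken")
print(reqs[0].full_url)
```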

The space went down a bit:

{noformat}
du -h -d 1 var/jenkins_home/jobs/hive-nightly-branch-3/
9.3G    var/jenkins_home/jobs/hive-nightly-branch-3/builds
9.3G    var/jenkins_home/jobs/hive-nightly-branch-3/
{noformat}


> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Blocker
>
> The Hive precommit tests fail due to lack of space. A few of the most recent 
> failures are below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.<init>(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stamatis Zampetakis reassigned HIVE-28013:
--

Assignee: Stamatis Zampetakis

> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Assignee: Stamatis Zampetakis
>Priority: Blocker
>
> The Hive precommit tests fail due to lack of space. A few of the most recent 
> failures are below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.<init>(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808550#comment-17808550
 ] 

Stamatis Zampetakis commented on HIVE-28013:


As far as I can see, the current configuration retains builds for 60 days. We may 
have to change that, but at the moment, to revive the precommit tests, we have to 
delete some builds manually.

> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Priority: Blocker
>
> The Hive precommit tests fail due to lack of space. A few of the most recent 
> failures are below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.<init>(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808545#comment-17808545
 ] 

Stamatis Zampetakis edited comment on HIVE-28013 at 1/19/24 8:48 AM:
-

The disk usage shows that the space is occupied by Jenkins builds.
{noformat}
jenkins@jenkins-6858ddb664-lstvm:/$ du -h var/jenkins_home | grep "G[[:space:]]\+"
2.1G    var/jenkins_home/caches
41G     var/jenkins_home/jobs/hive-nightly-branch-3/builds
41G     var/jenkins_home/jobs/hive-nightly-branch-3
1.8G    var/jenkins_home/jobs/hive-precommit/branches/PR-4380/builds
1.8G    var/jenkins_home/jobs/hive-precommit/branches/PR-4380
1.1G    var/jenkins_home/jobs/hive-precommit/branches/PR-4683/builds
1.1G    var/jenkins_home/jobs/hive-precommit/branches/PR-4683
1.3G    var/jenkins_home/jobs/hive-precommit/branches/PR-4820/builds
1.3G    var/jenkins_home/jobs/hive-precommit/branches/PR-4820
2.5G    var/jenkins_home/jobs/hive-precommit/branches/storage-branch-2-8.a76ggs/builds
2.5G    var/jenkins_home/jobs/hive-precommit/branches/storage-branch-2-8.a76ggs
1.9G    var/jenkins_home/jobs/hive-precommit/branches/branch-3/builds
1.9G    var/jenkins_home/jobs/hive-precommit/branches/branch-3
1.4G    var/jenkins_home/jobs/hive-precommit/branches/PR-4901/builds
1.4G    var/jenkins_home/jobs/hive-precommit/branches/PR-4901
2.2G    var/jenkins_home/jobs/hive-precommit/branches/PR-4740/builds
2.2G    var/jenkins_home/jobs/hive-precommit/branches/PR-4740
1.3G    var/jenkins_home/jobs/hive-precommit/branches/PR-4882/builds
1.3G    var/jenkins_home/jobs/hive-precommit/branches/PR-4882
3.7G    var/jenkins_home/jobs/hive-precommit/branches/PR-4913/builds
3.7G    var/jenkins_home/jobs/hive-precommit/branches/PR-4913
1.3G    var/jenkins_home/jobs/hive-precommit/branches/PR-4616/builds
1.3G    var/jenkins_home/jobs/hive-precommit/branches/PR-4616
4.2G    var/jenkins_home/jobs/hive-precommit/branches/PR-4852/builds
4.2G    var/jenkins_home/jobs/hive-precommit/branches/PR-4852
1.4G    var/jenkins_home/jobs/hive-precommit/branches/PR-4961/builds
1.4G    var/jenkins_home/jobs/hive-precommit/branches/PR-4961
1.1G    var/jenkins_home/jobs/hive-precommit/branches/PR-4973/builds
1.1G    var/jenkins_home/jobs/hive-precommit/branches/PR-4973
4.0G    var/jenkins_home/jobs/hive-precommit/branches/PR-4761/builds
4.0G    var/jenkins_home/jobs/hive-precommit/branches/PR-4761
1.7G    var/jenkins_home/jobs/hive-precommit/branches/dependabo.ssr98q2m10i5.ss-1-24-0/builds
1.7G    var/jenkins_home/jobs/hive-precommit/branches/dependabo.ssr98q2m10i5.ss-1-24-0
103G    var/jenkins_home/jobs/hive-precommit/branches/master/builds
103G    var/jenkins_home/jobs/hive-precommit/branches/master
1.7G    var/jenkins_home/jobs/hive-precommit/branches/PR-4950/builds
1.7G    var/jenkins_home/jobs/hive-precommit/branches/PR-4950
3.6G    var/jenkins_home/jobs/hive-precommit/branches/PR-4855/builds
3.6G    var/jenkins_home/jobs/hive-precommit/branches/PR-4855
3.5G    var/jenkins_home/jobs/hive-precommit/branches/PR-4744/builds
3.5G    var/jenkins_home/jobs/hive-precommit/branches/PR-4744
2.4G    var/jenkins_home/jobs/hive-precommit/branches/PR-4517/builds
2.4G    var/jenkins_home/jobs/hive-precommit/branches/PR-4517
1.3G    var/jenkins_home/jobs/hive-precommit/branches/PR-4755/builds
1.3G    var/jenkins_home/jobs/hive-precommit/branches/PR-4755
1.1G    var/jenkins_home/jobs/hive-precommit/branches/PR-4566/builds/8
1.2G    var/jenkins_home/jobs/hive-precommit/branches/PR-4566/builds/27
5.8G    var/jenkins_home/jobs/hive-precommit/branches/PR-4566/builds
5.8G    var/jenkins_home/jobs/hive-precommit/branches/PR-4566
200G    var/jenkins_home/jobs/hive-precommit/branches
200G    var/jenkins_home/jobs/hive-precommit
1.4G    var/jenkins_home/jobs/hive-flaky-check/builds/63/archive
1.4G    var/jenkins_home/jobs/hive-flaky-check/builds/63
1.8G    var/jenkins_home/jobs/hive-flaky-check/builds/802
6.7G    var/jenkins_home/jobs/hive-flaky-check/builds
6.7G    var/jenkins_home/jobs/hive-flaky-check
42G     var/jenkins_home/jobs/hive-nightly/builds
42G     var/jenkins_home/jobs/hive-nightly
289G    var/jenkins_home/jobs
294G    var/jenkins_home
{noformat}



was (Author: zabetak):
The disk usage is still ongoing but it quickly shows that the space is occupied 
by Jenkins builds.
{noformat}
jenkins@jenkins-6858ddb664-lstvm:/$ du -h var/jenkins_home | grep "G[[:space:]]\+"
2.1G    var/jenkins_home/caches
41G     var/jenkins_home/jobs/hive-nightly-branch-3/builds
41G     var/jenkins_home/jobs/hive-nightly-branch-3
1.8G    var/jenkins_home/jobs/hive-precommit/branches/PR-4380/builds
1.8G    var/jenkins_home/jobs/hive-precommit/branches/PR-4380
1.1G    var/jenkins_home/jobs/hive-precommit/branches/PR-4683/builds
1.1G    var/jenkins_home/jobs/hive-precommit/branches/PR-4683
1.3G    var/jenkins_home/jobs/hive-precommit/branches/PR-4820/builds
1.3G    var/jenkins_home/jobs/h

[jira] [Commented] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808545#comment-17808545
 ] 

Stamatis Zampetakis commented on HIVE-28013:


The disk usage scan is still running, but it already shows that the space is 
occupied by Jenkins builds.
{noformat}
jenkins@jenkins-6858ddb664-lstvm:/$ du -h var/jenkins_home | grep "G[[:space:]]\+"
2.1G    var/jenkins_home/caches
41G     var/jenkins_home/jobs/hive-nightly-branch-3/builds
41G     var/jenkins_home/jobs/hive-nightly-branch-3
1.8G    var/jenkins_home/jobs/hive-precommit/branches/PR-4380/builds
1.8G    var/jenkins_home/jobs/hive-precommit/branches/PR-4380
1.1G    var/jenkins_home/jobs/hive-precommit/branches/PR-4683/builds
1.1G    var/jenkins_home/jobs/hive-precommit/branches/PR-4683
1.3G    var/jenkins_home/jobs/hive-precommit/branches/PR-4820/builds
1.3G    var/jenkins_home/jobs/hive-precommit/branches/PR-4820
2.5G    var/jenkins_home/jobs/hive-precommit/branches/storage-branch-2-8.a76ggs/builds
2.5G    var/jenkins_home/jobs/hive-precommit/branches/storage-branch-2-8.a76ggs
1.9G    var/jenkins_home/jobs/hive-precommit/branches/branch-3/builds
1.9G    var/jenkins_home/jobs/hive-precommit/branches/branch-3
1.4G    var/jenkins_home/jobs/hive-precommit/branches/PR-4901/builds
1.4G    var/jenkins_home/jobs/hive-precommit/branches/PR-4901
2.2G    var/jenkins_home/jobs/hive-precommit/branches/PR-4740/builds
2.2G    var/jenkins_home/jobs/hive-precommit/branches/PR-4740
1.3G    var/jenkins_home/jobs/hive-precommit/branches/PR-4882/builds
1.3G    var/jenkins_home/jobs/hive-precommit/branches/PR-4882
3.7G    var/jenkins_home/jobs/hive-precommit/branches/PR-4913/builds
3.7G    var/jenkins_home/jobs/hive-precommit/branches/PR-4913
1.3G    var/jenkins_home/jobs/hive-precommit/branches/PR-4616/builds
1.3G    var/jenkins_home/jobs/hive-precommit/branches/PR-4616
4.2G    var/jenkins_home/jobs/hive-precommit/branches/PR-4852/builds
4.2G    var/jenkins_home/jobs/hive-precommit/branches/PR-4852
1.4G    var/jenkins_home/jobs/hive-precommit/branches/PR-4961/builds
1.4G    var/jenkins_home/jobs/hive-precommit/branches/PR-4961
1.1G    var/jenkins_home/jobs/hive-precommit/branches/PR-4973/builds
1.1G    var/jenkins_home/jobs/hive-precommit/branches/PR-4973
4.0G    var/jenkins_home/jobs/hive-precommit/branches/PR-4761/builds
4.0G    var/jenkins_home/jobs/hive-precommit/branches/PR-4761
1.7G    var/jenkins_home/jobs/hive-precommit/branches/dependabo.ssr98q2m10i5.ss-1-24-0/builds
1.7G    var/jenkins_home/jobs/hive-precommit/branches/dependabo.ssr98q2m10i5.ss-1-24-0
...
{noformat}


> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Priority: Blocker
>
> The Hive precommit tests fail due to lack of space. A few of the most recent 
> failures are below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.<init>(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
> 

[jira] [Commented] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808542#comment-17808542
 ] 

Stamatis Zampetakis commented on HIVE-28013:


The Jenkins pod uses a persistent volume.

{noformat}
kubectl describe pod/jenkins-6858ddb664-lstvm | grep "Volumes\:" -A 15
Volumes:
  jenkins-home:
Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim 
in the same namespace)
ClaimName:  jenkins-data2
ReadOnly:   false
  kube-api-access-9x4m5:
Type:Projected (a volume that contains injected data 
from multiple sources)
TokenExpirationSeconds:  0xc0009a4720
ConfigMapName:   kube-root-ca.crt
ConfigMapOptional:   
DownwardAPI: true
QoS Class:   Burstable
Node-Selectors:  type=core
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
 node.kubernetes.io/unreachable:NoExecute for 300s
{noformat}

The volume is full. I am trying to check what is taking up all the space.
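One way to rank the largest job directories is sketched below; the pod name is taken from this thread, but the exact `du`/`sort` invocation is an assumption, not the command actually used:

```shell
# Inside the pod (assumed invocation):
#   kubectl exec -it jenkins-6858ddb664-lstvm -- bash
#   du -sh /var/jenkins_home/jobs/hive-precommit/branches/* | sort -rh | head
# The same human-numeric ranking, reproduced locally on sample du-style lines:
printf '1.9G\tbranches/branch-3\n4.2G\tbranches/PR-4852\n1.1G\tbranches/PR-4973\n' \
  | sort -rh | head -n 1
# prints: 4.2G	branches/PR-4852
```

`sort -rh` understands the G/M/K suffixes `du -h` emits, so the directories come out largest first without converting sizes by hand.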

> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Priority: Blocker
>
> The Hive precommit tests fail due to lack of space. A few of the most recent 
> failures are listed below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.<init>(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-28013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17808537#comment-17808537
 ] 

Stamatis Zampetakis commented on HIVE-28013:


I connected to the Jenkins pod and here is the file system status:

{noformat}
kubectl exec -it jenkins-6858ddb664-lstvm -- bash
jenkins@jenkins-6858ddb664-lstvm:/$ df
Filesystem 1K-blocks  Used Available Use% Mounted on
overlay 98831908   4872600  93942924   5% /
tmpfs  65536 0 65536   0% /dev
tmpfs6647132 0   6647132   0% /sys/fs/cgroup
/dev/sdb   308521792 308033772471636 100% /var/jenkins_home
/dev/sda1   98831908   4872600  93942924   5% /etc/hosts
shm65536 0 65536   0% /dev/shm
tmpfs   10804920    12  10804908   1% 
/run/secrets/kubernetes.io/serviceaccount
tmpfs6647132 0   6647132   0% /proc/acpi
tmpfs6647132 0   6647132   0% /proc/scsi
tmpfs6647132 0   6647132   0% /sys/firmware
{noformat}
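The full mount can also be spotted mechanically; a minimal sketch, where the sample rows are copied from the `df` output above and the 90% threshold is an arbitrary choice:

```shell
# Flag filesystems at or above 90% usage from df-style rows
# (columns: filesystem, 1K-blocks, used, available, use%, mount point).
df_sample='/dev/sdb 308521792 308033772 471636 100% /var/jenkins_home
/dev/sda1 98831908 4872600 93942924 5% /etc/hosts'
echo "$df_sample" | awk '$5+0 >= 90 {print $6, $5}'
# prints: /var/jenkins_home 100%
```

The `$5+0` coercion makes awk read "100%" as the number 100, so no separate stripping of the percent sign is needed.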


> No space left on device when running precommit tests
> 
>
> Key: HIVE-28013
> URL: https://issues.apache.org/jira/browse/HIVE-28013
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Stamatis Zampetakis
>Priority: Blocker
>
> The Hive precommit tests fail due to lack of space. A few of the most recent 
> failures are listed below:
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
> * 
> http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console
> {noformat}
> java.io.IOException: No space left on device
>   at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>   at 
> java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
>   at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
>   at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
>   at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
>   at 
> org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.<init>(RiverWriter.java:109)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
>   at 
> org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
>   at 
> jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
>   at 
> jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-28013) No space left on device when running precommit tests

2024-01-19 Thread Stamatis Zampetakis (Jira)
Stamatis Zampetakis created HIVE-28013:
--

 Summary: No space left on device when running precommit tests
 Key: HIVE-28013
 URL: https://issues.apache.org/jira/browse/HIVE-28013
 Project: Hive
  Issue Type: Bug
  Components: Testing Infrastructure
Reporter: Stamatis Zampetakis


The Hive precommit tests fail due to lack of space. A few of the most recent 
failures are listed below:
* 
http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-4744/23/console
* 
http://ci.hive.apache.org/job/hive-precommit/view/change-requests/job/PR-5005/10/console


{noformat}
java.io.IOException: No space left on device
at java.base/sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at 
java.base/sun.nio.ch.FileDispatcherImpl.write(FileDispatcherImpl.java:62)
at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
at java.base/sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:280)
at 
org.jenkinsci.plugins.workflow.support.pickles.serialization.RiverWriter.<init>(RiverWriter.java:109)
at 
org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:560)
at 
org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgram(CpsThreadGroup.java:537)
at 
org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.saveProgramIfPossible(CpsThreadGroup.java:520)
at 
org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.run(CpsThreadGroup.java:444)
at 
org.jenkinsci.plugins.workflow.cps.CpsThreadGroup.access$400(CpsThreadGroup.java:97)
at 
org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:315)
at 
org.jenkinsci.plugins.workflow.cps.CpsThreadGroup$2.call(CpsThreadGroup.java:279)
at 
org.jenkinsci.plugins.workflow.cps.CpsVmExecutorService$2.call(CpsVmExecutorService.java:67)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
hudson.remoting.SingleLaneExecutorService$1.run(SingleLaneExecutorService.java:139)
at 
jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at 
jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
{noformat}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)