[ https://issues.apache.org/jira/browse/IMPALA-11738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17638007#comment-17638007 ]

Joe McDonnell commented on IMPALA-11738:
----------------------------------------

Just to follow up: The fix here helped on CentOS 7 with ASAN/TSAN/UBSAN (Clang 
builds).

On Ubuntu 18, Hive still crashes during dataload, and the crash is still 
related to libfesupport.so. It reproduces consistently by connecting with 
beeline and running a statement with output compression enabled:
{noformat}
beeline -n $USER -u "jdbc:hive2://localhost:11050/default;auth=none"

SET hive.exec.compress.output=true;
select count(*) as mv_count from 
functional_orc_def.mv1_alltypes_jointbl;{noformat}
Since the crash does not occur when compression is disabled, disabling 
compression is a potential workaround:

[http://gerrit.cloudera.org:8080/19275]

Feel free to go ahead with this if it seems worthwhile.
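
For anyone hitting this before a patch lands, a minimal stopgap sketch is the 
inverse of the repro above, assuming the dataload scripts honor session-level 
settings (not verified against every script):
{noformat}
-- keep output compression off for the session so ZlibCompressor.initIDs()
-- is never reached from libfesupport.so's process
SET hive.exec.compress.output=false;
select count(*) as mv_count from functional_orc_def.mv1_alltypes_jointbl;{noformat}
This only sidesteps the native zlib JNI init; it does not address the 
underlying symbol conflict with libfesupport.so.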

> Data loading failed at 
> load-functional-query-exhaustive-hive-generated-orc-def-block.sql
> ----------------------------------------------------------------------------------------
>
>                 Key: IMPALA-11738
>                 URL: https://issues.apache.org/jira/browse/IMPALA-11738
>             Project: IMPALA
>          Issue Type: Bug
>    Affects Versions: Impala 4.1.1
>            Reporter: Yida Wu
>            Assignee: Joe McDonnell
>            Priority: Major
>
> Ran "./bin/bootstrap_development.sh" to build the system from scratch.
> It seems to crash in hive-server2 when it executes a query
> {code:java}
> select count(*) as mv_count from functional_orc_def.mv1_alltypes_jointbl{code}
> during loading 
> load-functional-query-exhaustive-hive-generated-orc-def-block.sql.
> Found errors in 
> load-functional-query-exhaustive-hive-generated-orc-def-block.sql.log:
> {code:java}
> Unknown HS2 problem when communicating with Thrift server.
> Error: org.apache.thrift.transport.TTransportException: 
> java.net.SocketException: Broken pipe (Write failed) (state=08S01,code=0)
> java.sql.SQLException: org.apache.thrift.transport.TTransportException: 
> java.net.SocketException: Broken pipe (Write failed)
>         at 
> org.apache.hive.jdbc.HiveStatement.closeStatementIfNeeded(HiveStatement.java:225)
>         at 
> org.apache.hive.jdbc.HiveStatement.closeClientOperation(HiveStatement.java:266)
>         at org.apache.hive.jdbc.HiveStatement.close(HiveStatement.java:289)
>         at 
> org.apache.hive.beeline.Commands.executeInternal(Commands.java:1067)
>         at org.apache.hive.beeline.Commands.execute(Commands.java:1217)
>         at org.apache.hive.beeline.Commands.sql(Commands.java:1146)
>         at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1504)
>         at org.apache.hive.beeline.BeeLine.execute(BeeLine.java:1362)
>         at org.apache.hive.beeline.BeeLine.executeFile(BeeLine.java:1336)
>         at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1134)
>         at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:1089)
>         at 
> org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:547)
>         at org.apache.hive.beeline.BeeLine.main(BeeLine.java:529)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:232){code}
> Also found a crash jstack:
> {code:java}
> Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
> j org.apache.hadoop.io.compress.zlib.ZlibCompressor.initIDs()V+0
> j org.apache.hadoop.io.compress.zlib.ZlibCompressor.<clinit>()V+18
> v ~StubRoutines::call_stub
> j org.apache.hadoop.io.compress.zlib.ZlibFactory.loadNativeZLib()V+6
> j org.apache.hadoop.io.compress.zlib.ZlibFactory.<clinit>()V+12
> v ~StubRoutines::call_stub
> j 
> org.apache.hadoop.io.compress.DefaultCodec.getDecompressorType()Ljava/lang/Class;+4
> j 
> org.apache.hadoop.io.compress.CodecPool.getDecompressor(Lorg/apache/hadoop/io/compress/CompressionCodec;)Lorg/apache/hadoop/io/compress/Decompressor;+4
> j org.apache.hadoop.io.SequenceFile$Reader.init(Z)V+486
> j 
> org.apache.hadoop.io.SequenceFile$Reader.initialize(Lorg/apache/hadoop/fs/Path;Lorg/apache/hadoop/fs/FSDataInputStream;JJLorg/apache/hadoop/conf/Configuration;Z)V+84
> j 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(Lorg/apache/hadoop/conf/Configuration;[Lorg/apache/hadoop/io/SequenceFile$Reader$Option;)V+407
> j 
> org.apache.hadoop.io.SequenceFile$Reader.<init>(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/fs/Path;Lorg/apache/hadoop/conf/Configuration;)V+17
> j 
> org.apache.hadoop.mapred.SequenceFileRecordReader.<init>(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/mapred/FileSplit;)V+30
> j 
> org.apache.hadoop.mapred.SequenceFileInputFormat.getRecordReader(Lorg/apache/hadoop/mapred/InputSplit;Lorg/apache/hadoop/mapred/JobConf;Lorg/apache/hadoop/mapred/Reporter;)Lorg/apache/hadoop/mapred/RecordReader;+19
> j 
> org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(Lorg/apache/hadoop/mapred/JobConf;)Lorg/apache/hadoop/mapred/RecordReader;+12
> j 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader()Lorg/apache/hadoop/mapred/RecordReader;+266
> j 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow()Lorg/apache/hadoop/hive/serde2/objectinspector/InspectableObject;+25
> j org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow()Z+70
> j org.apache.hadoop.hive.ql.exec.FetchTask.executeInner(Ljava/util/List;)Z+170
> j org.apache.hadoop.hive.ql.exec.FetchTask.execute()I+12
> J 22489 C1 org.apache.hadoop.hive.ql.Driver.runInternal(Ljava/lang/String;Z)V 
> (1199 bytes) @ 0x00007f928563b904 [0x00007f9285638600+0x3304]
> J 22488 C1 
> org.apache.hadoop.hive.ql.Driver.run(Ljava/lang/String;Z)Lorg/apache/hadoop/hive/ql/processors/CommandProcessorResponse;
>  (269 bytes) @ 0x00007f928561fb44 [0x00007f928561faa0+0xa4]
> J 19121 C1 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run()Lorg/apache/hadoop/hive/ql/processors/CommandProcessorResponse;
>  (300 bytes) @ 0x00007f9283c0c034 [0x00007f9283c0b4c0+0xb74]{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
