[ https://issues.apache.org/jira/browse/SPARK-22526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16264095#comment-16264095 ]
Steve Loughran commented on SPARK-22526:
----------------------------------------

# Fix the code you invoke.
# Wrap the code you invoke in something like the following (this is coded in the JIRA, untested, and should really close the stream in something that swallows IOExceptions):

{code}
binaryRdd.map { t =>
  try {
    process(t._2)
  } finally {
    t._2.close()
  }
}
{code}

> Spark hangs while reading binary files from S3
> ----------------------------------------------
>
>                 Key: SPARK-22526
>                 URL: https://issues.apache.org/jira/browse/SPARK-22526
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.2.0
>            Reporter: mohamed imran
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Hi,
> I am using Spark 2.2.0 (a recent version) to read binary files from S3, using sc.binaryFiles.
> It works fine for roughly the first 100 file reads, but after that it hangs indefinitely, anywhere from 5 up to 40 minutes, much like the Avro file read issue (which was fixed in later releases).
> I tried setting fs.s3a.connection.maximum to some large values, but that didn't help.
> Finally I ended up setting the Spark speculation parameter, which again didn't help much.
> One thing I observed is that the connection is not closed after every read of a binary file from S3.
> Example: sc.binaryFiles("s3a://test/test123.zip")
> Please look into this major issue!

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
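A possible sketch of the safer variant the comment alludes to, where the close is wrapped so that an IOException on close() does not fail the task. This is untested, like the original snippet; `binaryRdd` is the RDD returned by sc.binaryFiles, and `process` is a placeholder for the caller's own logic:

{code}
import java.io.IOException

binaryRdd.map { t =>
  try {
    process(t._2)      // caller-supplied work on the PortableDataStream
  } finally {
    try {
      t._2.close()     // release the underlying S3 connection
    } catch {
      case _: IOException => // swallow close() failures so they don't fail the task
    }
  }
}
{code}

The point of closing in a finally block is that each PortableDataStream holds an HTTP connection from the S3A pool; without an explicit close, the pool (sized by fs.s3a.connection.maximum) can be exhausted and subsequent reads block.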