Hi, I think I have hit IGNITE-4862. When running a Spark job that accesses IGFS with a secondary file system configured, I get the following error:
```
17/12/12 23:52:24 ERROR Executor: Exception in task 162.0 in stage 45.0 (TID 8090)
java.lang.NullPointerException
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$FetchBufferPart.flatten(HadoopIgfsInputStream.java:458)
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream$DoubleFetchBuffer.flatten(HadoopIgfsInputStream.java:511)
    at org.apache.ignite.internal.processors.hadoop.impl.igfs.HadoopIgfsInputStream.read(HadoopIgfsInputStream.java:177)
    at java.io.DataInputStream.readFully(DataInputStream.java:195)
    at org.apache.parquet.hadoop.util.H1SeekableInputStream.readFully(H1SeekableInputStream.java:70)
    at org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1065)
```

Following http://apache-ignite-users.70518.x6.nabble.com/NullPointerException-when-using-IGFS-td11328.html I tried setting `<property name="sequentialReadsBeforePrefetch" value="64"/>`, but it kept failing. However, I was able to work around the problem with `<property name="prefetchBlocks" value="0"/>`, as suggested in https://issues.apache.org/jira/browse/IGNITE-4862. Even though there was a preliminary pull request, the issue seems to have been abandoned for some months. Are there any plans to fix this in the short term?

Thanks a lot for your help!

Greetings,
Juan
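For anyone hitting the same NPE, this is roughly how I applied the workaround in the Spring XML. The file system name, the secondary file system bean, and the HDFS URI below are placeholders from my setup, not something prescribed by the ticket; the relevant part is `prefetchBlocks = 0` on the `FileSystemConfiguration`:

```xml
<!-- Sketch of the IGFS configuration with the IGNITE-4862 workaround applied.
     The name, secondary file system, and URI are placeholders for illustration. -->
<property name="fileSystemConfiguration">
    <list>
        <bean class="org.apache.ignite.configuration.FileSystemConfiguration">
            <property name="name" value="igfs"/>
            <!-- Workaround from IGNITE-4862: disable block prefetching entirely. -->
            <property name="prefetchBlocks" value="0"/>
            <!-- Secondary file system backed by HDFS (placeholder URI). -->
            <property name="secondaryFileSystem">
                <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
                    <constructor-arg value="hdfs://namenode:9000/"/>
                </bean>
            </property>
        </bean>
    </list>
</property>
```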