Hi, I understand that Zstd compression can optionally be given a pre-trained dictionary to improve compression of small, similar records. See "training mode" here: https://facebook.github.io/zstd/
Does Spark surface a way to supply such a dictionary when writing/reading data? What about for intermediate shuffle results?

Thanks,
Daniel