Hi Mahendar,
Which version of Spark and Hadoop are you using?
I tried it on Spark 2.3.1 with Hadoop 2.7.3, and it works for a folder containing
multiple .gz files.
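For reference, below is roughly what I ran; the folder path, app name, and the
HDFS scheme are just placeholders for my test setup, not your wasb path. Spark
picks the gzip codec from the .gz extension, so pointing json() at the directory
reads every .json.gz file inside it.

    import org.apache.spark.sql.SparkSession

    object ReadJsonGz {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("read-multiple-json-gz")   // placeholder app name
          .getOrCreate()

        // Point at the directory; each .json.gz inside is decompressed and
        // parsed as JSON Lines automatically.
        val df = spark.read.json("hdfs:///data/json-gz-folder/")  // placeholder path

        df.printSchema()
        println(s"row count: ${df.count()}")

        spark.stop()
      }
    }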


From: Mahender Sarangam <mahender.bigd...@outlook.com>
Sent: Monday, October 1, 2018 2:00 AM
To: user@spark.apache.org
Subject: Unable to read multiple JSON.Gz File.



I’m trying to read multiple .json.gz files from a Blob storage path using the
Scala code below, but I’m unable to read the data from the files or print the
schema. If the files are not compressed as .gz, we are able to read all of them
into a DataFrame. I’ve even tried passing *.gz, but no luck.
 val df =
spark.read.json("wasb://x...@azurestorage.blob.core.windows.net/sourcePath/")
