Hi Basu,
If all the columns are separated by the delimiter "\t", the CSV parser might be a better choice.
For example:
```scala
spark.read
.option("sep", "\t")
.option("header", false)
.option("inferSchema", true)
.csv("/user/root/spark_demo/scala/data/Stations.txt")
```
Hey Chris,
Thanks for your quick help. It turned out the dataset itself had issues; the logic I implemented was actually correct.
I did this -
1) *V.Imp* – Creating a Row by segregating columns after reading the tab-delimited file, before converting it into a DataFrame:
```scala
val stati = stat.map(x =>
```
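The snippet above is cut off in the thread, so the body of the `map` is unknown. A minimal sketch of the column-segregation step it presumably performs (splitting each tab-delimited line into fields) might look like the following; the sample record is hypothetical:

```scala
// Split one tab-delimited line into its columns.
// The -1 limit keeps trailing empty fields instead of dropping them.
def splitCols(line: String): Array[String] = line.split("\t", -1)

// Hypothetical station record, just to illustrate the shape of the data.
val sample = "100\tHoboken\t40.73\t-74.03"
val cols = splitCols(sample)
```

Using `split("\t", -1)` rather than `split("\t")` matters when a line ends with empty columns, since Java's default split discards trailing empty strings.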
Hi Aakash,
You can try this:
```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

val header = Array("col1", "col2", "col3", "col4")
val schema = StructType(header.map(StructField(_, StringType, true)))
val statRow = stat.map(line =>
```
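The last line above is truncated in the thread. Presumably the `map` splits each line and wraps the fields in a `Row` (e.g. `Row.fromSeq`) before `spark.createDataFrame(statRow, schema)`. As a plain-Scala sketch of that shaping step, without Spark on the classpath, one can pair the header names with the split fields; the sample record is hypothetical:

```scala
// Column names, mirroring the header array defined in the thread.
val header = Array("col1", "col2", "col3", "col4")

// Pair each field of a tab-delimited line with its column name.
// In the full Spark job, the Seq of fields would instead go to Row.fromSeq.
def toNamedFields(line: String): Map[String, String] =
  header.zip(line.split("\t", -1)).toMap

// Hypothetical record, just to show the resulting name -> value mapping.
val named = toNamedFields("v1\tv2\tv3\tv4")
```

This keeps the field-to-column alignment explicit, which is the part that usually goes wrong when converting raw text lines into a DataFrame against a hand-built schema.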