Hello Spark world, I am new to Spark.
I noticed this online example: http://spark.apache.org/docs/latest/ml-pipeline.html

I am curious about this syntax:

    // Prepare training data from a list of (label, features) tuples.
    val training = spark.createDataFrame(Seq(
      (1.0, Vectors.dense(0.0, 1.1, 0.1)),
      (0.0, Vectors.dense(2.0, 1.0, -1.0)),
      (0.0, Vectors.dense(2.0, 1.3, 1.0)),
      (1.0, Vectors.dense(0.0, 1.2, -0.5))
    )).toDF("label", "features")

Is it possible to replace the above call with syntax that reads the values from a CSV file instead? I want something comparable to the Python/pandas read_csv() method.
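For context, here is a sketch of the kind of thing I am hoping for, using Spark's CSV reader plus a VectorAssembler to build the "features" column that ML algorithms expect. The file name training.csv and the column names label, f0, f1, f2 are just assumptions for illustration:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.feature.VectorAssembler

val spark = SparkSession.builder.appName("csv-example").getOrCreate()

// Assumed file "training.csv" with a header row, e.g.:
//   label,f0,f1,f2
//   1.0,0.0,1.1,0.1
//   0.0,2.0,1.0,-1.0
val raw = spark.read
  .option("header", "true")       // first line holds column names
  .option("inferSchema", "true")  // infer numeric types, roughly like read_csv
  .csv("training.csv")

// Spark ML estimators take a single vector-valued column, so the
// individual feature columns are assembled into one "features" column.
val assembler = new VectorAssembler()
  .setInputCols(Array("f0", "f1", "f2"))
  .setOutputCol("features")

val training = assembler.transform(raw).select("label", "features")
```

Unlike pandas, where each feature stays its own column, Spark ML pipelines want the features packed into one vector column, which is why the extra VectorAssembler step seems to be needed.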