I'm having trouble loading data from an S3 bucket.
DC/OS is currently running Spark 2, so I'm not sure whether my code
needs modifying for the upgrade.

My code at the moment looks like this:


sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "xxx")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "xxx")

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
val fname = "s3n://somespark/datain.csv"

// val rows = sc.textFile(fname).map { line =>
//   val values = line.split(',').map(_.toDouble)
//   Vectors.dense(values)
// }

val rows = sc.textFile(fname)
rows.count()
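For what it's worth, here is roughly what I understand the Spark 2 idiom to be, in case the problem is the old API/connector — a sketch only, not tested on my cluster. It assumes the hadoop-aws jar (and a matching aws-java-sdk) is on the classpath, and "somespark" is the same bucket as above:

```scala
// Spark 2: SparkSession replaces the SQLContext constructor.
val spark = org.apache.spark.sql.SparkSession.builder()
  .appName("s3-read")
  .getOrCreate()
val sc = spark.sparkContext

// s3a:// is the maintained S3 connector in Hadoop 2.7+; the older
// s3n:// connector is deprecated and its jars may not ship anymore.
sc.hadoopConfiguration.set("fs.s3a.access.key", "xxx")
sc.hadoopConfiguration.set("fs.s3a.secret.key", "xxx")

val rows = sc.textFile("s3a://somespark/datain.csv")
rows.count()
```

No idea if that's the actual cause of the failure, though.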



The Spark service returns a failed message, but gives little information
as to exactly why the job didn't run.


Any suggestions as to what I can try?
-- 
M
