iduanyingjie created PHOENIX-4034:
-------------------------------------
Summary: Can not create a Path from an empty string
Key: PHOENIX-4034
URL: https://issues.apache.org/jira/browse/PHOENIX-4034
Project: Phoenix
Issue Type: Bug
Affects Versions: 4.11.0
Environment: Spark 2.x, HBase 1.3
Reporter: iduanyingjie
Writing a Spark DataFrame to Phoenix with saveToPhoenix on Spark 2.x aborts during job commit with "java.lang.IllegalArgumentException: Can not create a Path from an empty string", and only part of the rows end up in the table (see the Phoenix query at the end).

h2. My code:
{code:scala}
import org.apache.spark.sql.types._
import org.apache.phoenix.spark._  // provides the saveToPhoenix implicit

spark.read
  .format("csv")
  .option("header", "false")
  .schema(
    StructType(
      StructField("userid", IntegerType) ::
      StructField("movieid", IntegerType) ::
      StructField("rating", DoubleType) ::
      StructField("timestamp", LongType) :: Nil
    )
  )
  .load("file:///home/iduanyingjie/Desktop/ratings.csv")
  .createOrReplaceTempView("ratings")

spark.sql("select count(*) from ratings").show()

spark.sql("select row_number() OVER (PARTITION BY userid ORDER BY userid) id, userid, movieid, rating, timestamp from ratings")
  .saveToPhoenix("test.ratings", zkUrl = Some("127.0.0.1:2181"))
{code}
h2. Log:

{noformat}
+--------+
|count(1)|
+--------+
| 387804|
+--------+
...
[Stage 3:=====================================================> (193 + 4) / 200]17/07/18 11:02:37 INFO PhoenixInputFormat: UseUpsertColumns=true, upsertColumnList.size()=5, upsertColumnList=ID,USERID,MOVIEID,RATING,TIMESTAMP
17/07/18 11:02:37 INFO PhoenixInputFormat: UseUpsertColumns=true, upsertColumnList.size()=5, upsertColumnList=ID,USERID,MOVIEID,RATING,TIMESTAMP
17/07/18 11:02:37 INFO PhoenixInputFormat: Phoenix Custom Upsert Statement: UPSERT INTO test.ratings ("ID", "0"."USERID", "0"."MOVIEID", "0"."RATING", "0"."TIMESTAMP") VALUES (?, ?, ?, ?, ?)
17/07/18 11:02:37 INFO PhoenixInputFormat: UseUpsertColumns=true, upsertColumnList.size()=5, upsertColumnList=ID,USERID,MOVIEID,RATING,TIMESTAMP
17/07/18 11:02:37 INFO PhoenixInputFormat: UseUpsertColumns=true, upsertColumnList.size()=5, upsertColumnList=ID,USERID,MOVIEID,RATING,TIMESTAMP
17/07/18 11:02:37 INFO PhoenixInputFormat: Phoenix Custom Upsert Statement: UPSERT INTO test.ratings ("ID", "0"."USERID", "0"."MOVIEID", "0"."RATING", "0"."TIMESTAMP") VALUES (?, ?, ?, ?, ?)
[Stage 3:=====================================================> (196 + 4) / 200]17/07/18 11:02:37 INFO PhoenixInputFormat: UseUpsertColumns=true, upsertColumnList.size()=5, upsertColumnList=ID,USERID,MOVIEID,RATING,TIMESTAMP
17/07/18 11:02:37 INFO PhoenixInputFormat: UseUpsertColumns=true, upsertColumnList.size()=5, upsertColumnList=ID,USERID,MOVIEID,RATING,TIMESTAMP
17/07/18 11:02:37 INFO PhoenixInputFormat: Phoenix Custom Upsert Statement: UPSERT INTO test.ratings ("ID", "0"."USERID", "0"."MOVIEID", "0"."RATING", "0"."TIMESTAMP") VALUES (?, ?, ?, ?, ?)
17/07/18 11:02:37 ERROR SparkHadoopMapReduceWriter: Aborting job job_20170718110213_0015.
java.lang.IllegalArgumentException: Can not create a Path from an empty string
    at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
    at org.apache.hadoop.fs.Path.<init>(Path.java:135)
    at org.apache.hadoop.fs.Path.<init>(Path.java:89)
    at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.absPathStagingDir(HadoopMapReduceCommitProtocol.scala:58)
    at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:132)
    at org.apache.spark.internal.io.SparkHadoopMapReduceWriter$.write(SparkHadoopMapReduceWriter.scala:101)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1085)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1084)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:1003)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:994)
    at org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:59)
    at com.iduanyingjie.spark.testphoenix$.main(testphoenix.scala:35)
    at com.iduanyingjie.spark.testphoenix.main(testphoenix.scala)
Exception in thread "main" java.lang.IllegalArgumentException: Can not create a Path from an empty string
    at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
    at org.apache.hadoop.fs.Path.<init>(Path.java:135)
    at org.apache.hadoop.fs.Path.<init>(Path.java:89)
    at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.absPathStagingDir(HadoopMapReduceCommitProtocol.scala:58)
    at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.abortJob(HadoopMapReduceCommitProtocol.scala:141)
    at org.apache.spark.internal.io.SparkHadoopMapReduceWriter$.write(SparkHadoopMapReduceWriter.scala:106)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1085)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1084)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:1003)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:994)
    at org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:59)
    at com.iduanyingjie.spark.testphoenix$.main(testphoenix.scala:35)
    at com.iduanyingjie.spark.testphoenix.main(testphoenix.scala)
17/07/18 11:02:37 INFO AbstractConnector: Stopped Spark@a739f4c{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
{noformat}
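From the traces, the failure is in Spark's commit path rather than in the Phoenix upserts themselves: DataFrameFunctions.saveToPhoenix (DataFrameFunctions.scala:59) delegates to saveAsNewAPIHadoopFile, and Spark 2.2's HadoopMapReduceCommitProtocol.absPathStagingDir then builds a Hadoop Path from the job's output path. Since phoenix-spark writes to Phoenix and not to a filesystem, the output path it hands Spark appears to be an empty string, which the Path constructor rejects; the same check fires again inside abortJob, producing the second trace. A minimal sketch of just that rejection (an illustration, not the Phoenix code itself):

{code:scala}
import org.apache.hadoop.fs.Path

object EmptyPathRepro {
  def main(args: Array[String]): Unit = {
    // Throws java.lang.IllegalArgumentException:
    //   "Can not create a Path from an empty string"
    // which is the exact message in the job abort above.
    new Path("")
  }
}
{code}

This looks like the Spark 2.2.0 commit-protocol regression for output formats that do not write to a filesystem (apparently SPARK-21549, addressed on the Spark side in later releases), so the behavior may differ on other Spark versions.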
h2. Query from Phoenix:

{noformat}
0: jdbc:phoenix:localhost:2181:/hbase> select count(*) from test.ratings;
+-----------+
| COUNT(1) |
+-----------+
| 3292 |
+-----------+
1 row selected (0.114 seconds)
0: jdbc:phoenix:localhost:2181:/hbase>
{noformat}
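Until the empty output path is handled (on the Phoenix or the Spark side), one way to sidestep Spark's Hadoop commit protocol entirely is to upsert through the Phoenix JDBC driver from foreachPartition. A minimal workaround sketch, assuming the same test.ratings table and the ZooKeeper quorum at 127.0.0.1:2181 from the report; batch size and error handling are simplified:

{code:scala}
import java.sql.DriverManager

// Workaround sketch: write each partition over Phoenix JDBC instead of
// saveAsNewAPIHadoopFile, avoiding the commit protocol that rejects the
// empty output path.
val out = spark.sql(
  "select row_number() OVER (PARTITION BY userid ORDER BY userid) id, " +
  "userid, movieid, rating, timestamp from ratings")

out.rdd.foreachPartition { rows =>
  val conn = DriverManager.getConnection("jdbc:phoenix:127.0.0.1:2181")
  try {
    val stmt = conn.prepareStatement(
      "UPSERT INTO test.ratings (\"ID\", \"USERID\", \"MOVIEID\", \"RATING\", \"TIMESTAMP\") " +
      "VALUES (?, ?, ?, ?, ?)")
    var n = 0
    rows.foreach { row =>
      stmt.setInt(1, row.getInt(0))       // id from row_number()
      stmt.setInt(2, row.getInt(1))       // userid
      stmt.setInt(3, row.getInt(2))       // movieid
      stmt.setDouble(4, row.getDouble(3)) // rating
      stmt.setLong(5, row.getLong(4))     // timestamp
      stmt.executeUpdate()
      n += 1
      if (n % 1000 == 0) conn.commit()    // Phoenix buffers mutations until commit
    }
    conn.commit()
  } finally {
    conn.close()
  }
}
{code}

This trades the MapReduce output path for plain UPSERTs, so nothing in the Hadoop commit protocol runs at all; the quoted column names match the uppercase columns shown in the generated upsert statement in the log.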