Re: Error in spark-xml

2016-05-01 Thread Mail.com
Can you try creating your own schema and using it to read the XML?

I had a similar issue and resolved it with a custom schema in which each
attribute was specified explicitly.
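
A minimal sketch of that approach (the schema, field type, and path below are
assumptions for illustration, not from this thread):

import org.apache.spark.sql.types.{ArrayType, StringType, StructField, StructType}

// Hypothetical hand-written schema: declare each expected field explicitly
// instead of relying on spark-xml's schema inference.
val customSchema = StructType(Seq(
  StructField("bkval", ArrayType(StringType), nullable = true)
))

val df = sqlContext.read
  .format("xml")
  .option("rowTag", "book")
  .schema(customSchema)   // supply the schema explicitly
  .load("path-to-file")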

Pradeep


Re: Error in spark-xml

2016-05-01 Thread Hyukjin Kwon
To be clearer:

If you set the rowTag to "book", it will produce an exception; this is the
issue reported at https://github.com/databricks/spark-xml/issues/92

Currently, parsing a single element that contains only a value as a row is
not supported.
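
For reference, a minimal sketch of the call that hits that issue (the path is
a placeholder):

// With rowTag = "book", this produces the exception tracked in issue #92
// at the time of this thread.
sqlContext.read
  .format("xml")
  .option("rowTag", "book")
  .load("path-to-file")
  .show()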


If you set the rowTag to "bkval", it should work. I tested the case below to
double-check.

If it does not work as shown below, please open an issue with enough
information for me to reproduce it.


I tested the case above with the data below:

<books>
  <book>
    <bkval>bk_113</bkval>
    <bkval>bk_114</bkval>
  </book>
  <book>
    <bkval>bk_114</bkval>
    <bkval>bk_116</bkval>
  </book>
  <book>
    <bkval>bk_115</bkval>
    <bkval>bk_116</bkval>
  </book>
</books>


I tested this with the code below:

val path = "path-to-file"

// Each <bkval> element in the file becomes one row of the DataFrame.
sqlContext.read
  .format("xml")                  // short name for the spark-xml data source
  .option("rowTag", "bkval")
  .load(path)
  .show()


Thanks!


Re: Error in spark-xml

2016-04-30 Thread Hyukjin Kwon
Hi Sourav,

I think it is an issue. spark-xml assumes the element selected by rowTag is
an object.

Could you please open an issue at
https://github.com/databricks/spark-xml/issues?

Thanks!


Error in spark-xml

2016-04-30 Thread Sourav Mazumder
Hi,

It looks like there is a problem in spark-xml when the XML has multiple
value-only elements with no child elements.

For example, say the XML has a nested object as below:

<book>
  <bkval>bk_113</bkval>
  <bkval>bk_114</bkval>
</book>

Now if I create a DataFrame with rowTag "bkval" and then do a select on that
DataFrame, it gives the following error.
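
A sketch of the steps just described (the exact code was not included in the
original report; the path and variable name are placeholders):

// Hypothetical reconstruction of the failing sequence.
val df = sqlContext.read
  .format("xml")
  .option("rowTag", "bkval")
  .load("path-to-file")

// The action below is what actually runs the parser and triggers the
// MatchError shown next.
df.select("*").show()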


scala.MatchError: ENDDOCUMENT (of class com.sun.xml.internal.stream.events.EndDocumentEvent)
  at com.databricks.spark.xml.parsers.StaxXmlParser$.checkEndElement(StaxXmlParser.scala:94)
  at com.databricks.spark.xml.parsers.StaxXmlParser$.com$databricks$spark$xml$parsers$StaxXmlParser$$convertObject(StaxXmlParser.scala:295)
  at com.databricks.spark.xml.parsers.StaxXmlParser$$anonfun$parse$1$$anonfun$apply$4.apply(StaxXmlParser.scala:58)
  at com.databricks.spark.xml.parsers.StaxXmlParser$$anonfun$parse$1$$anonfun$apply$4.apply(StaxXmlParser.scala:46)
  at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
  at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:308)
  at scala.collection.Iterator$class.foreach(Iterator.scala:727)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
  at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
  at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
  at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
  at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
  at scala.collection.AbstractIterator.to(Iterator.scala:1157)
  at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
  at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
  at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
  at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:215)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$5.apply(SparkPlan.scala:215)
  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1850)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
  at org.apache.spark.scheduler.Task.run(Task.scala:88)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)

However, if there is only one row like below, it works fine:

<book>
  <bkval>bk_113</bkval>
</book>

Any workaround?

Regards,
Sourav