[jira] [Updated] (SPARK-21853) Getting an exception while calling the except method on the dataframe
[ https://issues.apache.org/jira/browse/SPARK-21853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shailesh Kini updated SPARK-21853:
----------------------------------
    Issue Type: Bug  (was: Question)

> Getting an exception while calling the except method on the dataframe
> ---------------------------------------------------------------------
>
>                 Key: SPARK-21853
>                 URL: https://issues.apache.org/jira/browse/SPARK-21853
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell
>    Affects Versions: 2.1.1
>            Reporter: Shailesh Kini
>         Attachments: SparkException.txt
>
> I am getting an exception while calling except on the Dataset:
> org.apache.spark.sql.AnalysisException: resolved attribute(s) SVC_BILLING_PERIOD#37723 missing from
> I read two CSV files into Datasets DS1 and DS2, which I join (full outer) to create DS3. DS3 has some rows that are identical except for one column. I need to isolate those rows and remove them. I use groupBy on a few columns of DS3, filtering for count > 1, to find those rows; this is dataset DS4. DS4 has only a few of the columns, not all of them, so I join it back with DS3 on the aggregate columns to get a new dataset DS5, which has the same columns as DS3. To get a clean dataset without any of those near-duplicate rows, I call DS3.except(DS5), which throws the exception. The missing attribute is one of the filtering criteria I use when creating DS1.
> Attaching the exception to this ticket.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
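The deduplication pipeline described in the report (groupBy with count > 1, join back, then except) can be sketched in plain Python, independent of Spark. The column names (account, period, amount) are hypothetical, chosen only to illustrate the shape of the data:

```python
# Plain-Python model of the DS3 -> DS4 -> DS5 -> except pipeline described
# in the report. Column names are hypothetical placeholders.
from collections import Counter

# DS3: the result of the full outer join, as a list of row dicts.
ds3 = [
    {"account": "A1", "period": "2017-01", "amount": 10},
    {"account": "A1", "period": "2017-01", "amount": 20},  # differs only in "amount"
    {"account": "A2", "period": "2017-02", "amount": 30},
]

group_cols = ("account", "period")

# DS4: groupBy(group_cols).count() filtered to count > 1 (keys only).
counts = Counter(tuple(row[c] for c in group_cols) for row in ds3)
ds4_keys = {key for key, n in counts.items() if n > 1}

# DS5: DS3 joined back to DS4 on the grouping columns, i.e. every row
# that belongs to a duplicated group, with all of DS3's columns.
ds5 = [row for row in ds3 if tuple(row[c] for c in group_cols) in ds4_keys]

# DS3.except(DS5): rows whose grouping key is unique.
ds6 = [row for row in ds3 if tuple(row[c] for c in group_cols) not in ds4_keys]

print(ds6)  # only the A2 row survives
```

In Spark the same steps go through the analyzer, and re-joining DS3 against an aggregate derived from DS3 gives both sides a shared lineage; the reported AnalysisException ("resolved attribute(s) ... missing") arises during that analysis, not from the set-difference logic itself.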
[jira] [Updated] (SPARK-21853) Getting an exception while calling the except method on the dataframe
[ https://issues.apache.org/jira/browse/SPARK-21853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shailesh Kini updated SPARK-21853:
----------------------------------
    Description:

I am getting an exception while calling except on the Dataset:
org.apache.spark.sql.AnalysisException: resolved attribute(s) SVC_BILLING_PERIOD#37723 missing from

I read two CSV files into Datasets DS1 and DS2, which I join (full outer) to create DS3. DS3 has some rows that are identical except for one column. I need to isolate those rows and remove them. I use groupBy on a few columns of DS3, filtering for count > 1, to find those rows; this is dataset DS4. DS4 has only a few of the columns, not all of them, so I join it back with DS3 on the aggregate columns to get a new dataset DS5, which has the same columns as DS3. To get a clean dataset without any of those near-duplicate rows, I call DS3.except(DS5), which throws the exception. The missing attribute is one of the filtering criteria I use when creating DS1.

Attaching the exception to this ticket.

  was:

I am getting an exception while calling except on the Dataset:
org.apache.spark.sql.AnalysisException: resolved attribute(s) SVC_BILLING_PERIOD#37723 missing from

I have two CSV files. I create two Datasets, DS1 and DS2, which I join to create DS3. I need to filter out duplicates for further processing. I aggregate DS3 on some columns and filter where count > 1; this is DS4. I then join DS3 with DS4 on those columns to get DS5 (DS5 has the same structure as DS3, as I drop the joined columns). DS5 now has all the rows that are duplicated. I then call except on DS3 to get a dataset DS6 with all the rows not in DS5. I plan to filter out one of each set of duplicates (not all columns are duplicated, so I need to use filter) and union the result with DS6 to get a dataset free of duplicates.

Attaching the exception to this ticket.
> Getting an exception while calling the except method on the dataframe
> ---------------------------------------------------------------------
>
>                 Key: SPARK-21853
>                 URL: https://issues.apache.org/jira/browse/SPARK-21853
>             Project: Spark
>          Issue Type: Question
>          Components: Spark Shell
>    Affects Versions: 2.1.1
>            Reporter: Shailesh Kini
>         Attachments: SparkException.txt
>
> I am getting an exception while calling except on the Dataset:
> org.apache.spark.sql.AnalysisException: resolved attribute(s) SVC_BILLING_PERIOD#37723 missing from
> I read two CSV files into Datasets DS1 and DS2, which I join (full outer) to create DS3. DS3 has some rows that are identical except for one column. I need to isolate those rows and remove them. I use groupBy on a few columns of DS3, filtering for count > 1, to find those rows; this is dataset DS4. DS4 has only a few of the columns, not all of them, so I join it back with DS3 on the aggregate columns to get a new dataset DS5, which has the same columns as DS3. To get a clean dataset without any of those near-duplicate rows, I call DS3.except(DS5), which throws the exception. The missing attribute is one of the filtering criteria I use when creating DS1.
> Attaching the exception to this ticket.
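For reference, Spark's Dataset.except follows SQL EXCEPT DISTINCT semantics: it returns the rows of the left dataset that do not appear in the right, with duplicates removed. A minimal plain-Python model of that operation (not Spark code; rows are represented as hashable tuples):

```python
# Minimal model of EXCEPT DISTINCT semantics, the contract behind
# Dataset.except: left rows absent from the right, deduplicated,
# preserving first-seen order.
def except_distinct(left, right):
    right_set = set(right)
    seen = set()
    out = []
    for row in left:
        if row not in right_set and row not in seen:
            seen.add(row)
            out.append(row)
    return out

ds3 = [("A1", 10), ("A1", 20), ("A2", 30), ("A2", 30)]
ds5 = [("A1", 10), ("A1", 20)]
print(except_distinct(ds3, ds5))  # [('A2', 30)]
```

The operation itself is a straightforward set difference; in the reported case the failure happens earlier, when Spark's analyzer tries to resolve DS5's attributes against DS3's and finds the column ids no longer line up after the join back onto the aggregate.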
[jira] [Updated] (SPARK-21853) Getting an exception while calling the except method on the dataframe
[ https://issues.apache.org/jira/browse/SPARK-21853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shailesh Kini updated SPARK-21853:
----------------------------------
    Attachment: SparkException.txt

> Getting an exception while calling the except method on the dataframe
> ---------------------------------------------------------------------
>
>                 Key: SPARK-21853
>                 URL: https://issues.apache.org/jira/browse/SPARK-21853
>             Project: Spark
>          Issue Type: Question
>          Components: Spark Shell
>    Affects Versions: 2.1.1
>            Reporter: Shailesh Kini
>         Attachments: SparkException.txt
>
> I am getting an exception while calling except on the Dataset:
> org.apache.spark.sql.AnalysisException: resolved attribute(s) SVC_BILLING_PERIOD#37723 missing from
> I have two CSV files. I create two Datasets, DS1 and DS2, which I join to create DS3. I need to filter out duplicates for further processing. I aggregate DS3 on some columns and filter where count > 1; this is DS4. I then join DS3 with DS4 on those columns to get DS5 (DS5 has the same structure as DS3, as I drop the joined columns). DS5 now has all the rows that are duplicated. I then call except on DS3 to get a dataset DS6 with all the rows not in DS5. I plan to filter out one of each set of duplicates (not all columns are duplicated, so I need to use filter) and union the result with DS6 to get a dataset free of duplicates.
> Attaching the exception to this ticket.