[jira] [Updated] (SPARK-32562) Pyspark drop duplicate columns
[ https://issues.apache.org/jira/browse/SPARK-32562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-32562:
---------------------------------
    Target Version/s:   (was: 3.0.0)

> Pyspark drop duplicate columns
> ------------------------------
>
>                 Key: SPARK-32562
>                 URL: https://issues.apache.org/jira/browse/SPARK-32562
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 3.0.0
>            Reporter: abhijeet dada mote
>            Priority: Major
>              Labels: newbie, starter
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Hi All,
> This is a suggestion: can we have a feature in PySpark to remove duplicate columns?
> I have come up with a small piece of code for that:
> {code:python}
> def drop_duplicate_columns(_rdd_df):
>     column_names = _rdd_df.columns
>     duplicate_columns = set([x for x in column_names if column_names.count(x) > 1])
>     _rdd_df = _rdd_df.drop(*duplicate_columns)
>     return _rdd_df
> {code}
> Your suggestions are appreciated. I can work on this PR; it would be my first contribution (PR) to PySpark if you agree with it.
[jira] [Updated] (SPARK-32562) Pyspark drop duplicate columns
[ https://issues.apache.org/jira/browse/SPARK-32562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rohit Mishra updated SPARK-32562:
---------------------------------
    Fix Version/s:   (was: 3.0.0)

> Pyspark drop duplicate columns
> ------------------------------
>
>                 Key: SPARK-32562
>                 URL: https://issues.apache.org/jira/browse/SPARK-32562
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 3.0.0
>            Reporter: abhijeet dada mote
>            Priority: Major
>              Labels: newbie, starter
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Hi All,
> This is a suggestion: can we have a feature in PySpark to remove duplicate columns?
> I have come up with a small piece of code for that:
> {code:python}
> def drop_duplicate_columns(_rdd_df):
>     column_names = _rdd_df.columns
>     duplicate_columns = set([x for x in column_names if column_names.count(x) > 1])
>     _rdd_df = _rdd_df.drop(*duplicate_columns)
>     return _rdd_df
> {code}
> Your suggestions are appreciated. I can work on this PR; it would be my first contribution (PR) to PySpark if you agree with it.
[jira] [Updated] (SPARK-32562) Pyspark drop duplicate columns
[ https://issues.apache.org/jira/browse/SPARK-32562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

abhijeet dada mote updated SPARK-32562:
---------------------------------------
    Description:
Hi All,
This is a suggestion: can we have a feature in PySpark to remove duplicate columns?
I have come up with a small piece of code for that:
{code:python}
def drop_duplicate_columns(_rdd_df):
    column_names = _rdd_df.columns
    duplicate_columns = set([x for x in column_names if column_names.count(x) > 1])
    _rdd_df = _rdd_df.drop(*duplicate_columns)
    return _rdd_df
{code}
Your suggestions are appreciated. I can work on this PR; it would be my first contribution (PR) to PySpark if you agree with it.

  was:
Hi All,
This is a suggestion: can we have a feature in PySpark to remove duplicate columns?
I have come up with a small piece of code for that:
def drop_duplicate_columns(_rdd_df):
    column_names = _rdd_df.columns
    duplicate_columns = set([x for x in column_names if column_names.count(x) > 1])
    _rdd_df = _rdd_df.drop(*duplicate_columns)
    return _rdd_df
Your suggestions are appreciated. I can work on this PR; it would be my first contribution (PR) to PySpark if you agree with it.

> Pyspark drop duplicate columns
> ------------------------------
>
>                 Key: SPARK-32562
>                 URL: https://issues.apache.org/jira/browse/SPARK-32562
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>    Affects Versions: 3.0.0
>            Reporter: abhijeet dada mote
>            Priority: Major
>              Labels: newbie, starter
>             Fix For: 3.0.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Hi All,
> This is a suggestion: can we have a feature in PySpark to remove duplicate columns?
> I have come up with a small piece of code for that:
> {code:python}
> def drop_duplicate_columns(_rdd_df):
>     column_names = _rdd_df.columns
>     duplicate_columns = set([x for x in column_names if column_names.count(x) > 1])
>     _rdd_df = _rdd_df.drop(*duplicate_columns)
>     return _rdd_df
> {code}
> Your suggestions are appreciated. I can work on this PR; it would be my first contribution (PR) to PySpark if you agree with it.
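For anyone picking this up as a starter task, below is a self-contained sketch of the proposed helper together with a small local demo. Only the helper itself mirrors the snippet in the issue description; the SparkSession setup, the join example and the expected outputs are assumptions added here for illustration, not part of the proposal. One behaviour worth weighing in review: DataFrame.drop() works on column names, so a name-based helper like this removes every occurrence of a duplicated name rather than keeping one copy.

{code:python}
# Sketch only: the demo data, session setup and printed results are
# illustrative assumptions, not part of the proposal in this issue.
from pyspark.sql import SparkSession


def drop_duplicate_columns(df):
    """Drop every column whose name appears more than once in df.columns.

    Caveat: DataFrame.drop() takes column names, so all occurrences of a
    duplicated name are removed, not just the redundant copies.
    """
    column_names = df.columns
    duplicate_columns = {name for name in column_names if column_names.count(name) > 1}
    return df.drop(*duplicate_columns)


if __name__ == "__main__":
    spark = SparkSession.builder.master("local[1]").appName("drop-dup-cols").getOrCreate()

    left = spark.createDataFrame([(1, "a")], ["id", "value"])
    right = spark.createDataFrame([(1, "b")], ["id", "other"])
    # Joining on equally named keys leaves two "id" columns in the result.
    joined = left.join(right, left["id"] == right["id"])

    print(joined.columns)                          # ['id', 'value', 'id', 'other']
    print(drop_duplicate_columns(joined).columns)  # ['value', 'other']

    spark.stop()
{code}

If keeping exactly one copy of each duplicated column is the intended behaviour instead, the helper would have to drop by Column reference (e.g. the right-hand DataFrame's column object) rather than by name; that is probably the main design question to settle before a PR.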