Re: pyspark.GroupedData.agg works incorrectly when one column is aggregated twice?

2016-06-09 Thread Davies Liu
This one works as expected:

```
>>> spark.range(10).selectExpr("id", "id as k").groupBy("k") \
...     .agg({"k": "count", "id": "sum"}).show()
+---+--------+-------+
|  k|count(k)|sum(id)|
+---+--------+-------+
|  0|       1|      0|
|  7|       1|      7|
|  6|       1|      6|
|  9|       1|
```

(Output truncated in the archive.)

pyspark.GroupedData.agg works incorrectly when one column is aggregated twice?

2016-05-27 Thread Andrew Vykhodtsev
Dear list, I am trying to calculate sum and count on the same column:

```
user_id_books_clicks = (sqlContext.read.parquet('hdfs:///projects/kaggle-expedia/input/train.parquet')
                        .groupby('user_id')
                        .agg({'is_booking': 'count',
```

(Message truncated in the archive.)
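One likely explanation for the reported behavior, sketched below in plain Python: the dict form of `agg` can never hold two aggregations for the same column, because a Python dict literal silently keeps only the last value for a duplicate key. The `spec` name and the pyspark workaround in the comments are illustrative assumptions, not code from the original thread.

```python
# A dict literal keeps only the LAST value for a duplicate key, so
# {'is_booking': 'count', 'is_booking': 'sum'} collapses to a single
# aggregation before Spark ever sees it.
spec = {'is_booking': 'count', 'is_booking': 'sum'}
print(spec)  # only the 'sum' entry survives

# Hypothetical workaround (not run here): pass Column expressions
# from pyspark.sql.functions instead of a dict, e.g.
#   from pyspark.sql import functions as F
#   df.groupby('user_id').agg(F.count('is_booking'),
#                             F.sum('is_booking'))
```

Because the collapse happens in the Python literal itself, no Spark-side fix to the dict API can express two aggregations of one column; the Column-expression form is the standard route.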