spark git commit: [SPARK-19180] [SQL] the offset of short should be 2 in OffHeapColumn

2017-01-13 Thread davies
Repository: spark Updated Branches: refs/heads/branch-2.0 f56819f9b -> 08385b765 [SPARK-19180] [SQL] the offset of short should be 2 in OffHeapColumn ## What changes were proposed in this pull request? the offset of a short is 4 in OffHeapColumnVector's putShorts, but it should actually be 2.

spark git commit: [SPARK-19180] [SQL] the offset of short should be 2 in OffHeapColumn

2017-01-13 Thread davies
Repository: spark Updated Branches: refs/heads/branch-2.1 ee3642f51 -> 5e9be1e1f [SPARK-19180] [SQL] the offset of short should be 2 in OffHeapColumn ## What changes were proposed in this pull request? the offset of a short is 4 in OffHeapColumnVector's putShorts, but it should actually be 2.

spark git commit: [SPARK-19180] [SQL] the offset of short should be 2 in OffHeapColumn

2017-01-13 Thread davies
Repository: spark Updated Branches: refs/heads/master b0e8eb6d3 -> ad0dadaa2 [SPARK-19180] [SQL] the offset of short should be 2 in OffHeapColumn ## What changes were proposed in this pull request? the offset of a short is 4 in OffHeapColumnVector's putShorts, but it should actually be 2. ##
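The bug is an element-stride mistake: a Java short occupies 2 bytes, so the byte offset of element i in an off-heap buffer must be i * 2; putShorts was using 4, the stride of an int. A minimal Python sketch (stdlib struct module, not Spark code) shows why the wrong stride corrupts reads:

```python
import struct

SHORT_SIZE = 2  # a Java short is 2 bytes


def put_shorts(buf: bytearray, row_id: int, values, stride: int = SHORT_SIZE) -> None:
    """Write 16-bit values into buf, element (row_id + i) at byte offset
    (row_id + i) * stride. The fix uses stride 2; the bug used stride 4,
    which scatters values across int-sized slots."""
    for i, v in enumerate(values):
        struct.pack_into("<h", buf, (row_id + i) * stride, v)


def get_short(buf: bytearray, row_id: int) -> int:
    """Read element row_id back using the correct 2-byte stride."""
    return struct.unpack_from("<h", buf, row_id * SHORT_SIZE)[0]


buf = bytearray(16)
put_shorts(buf, 0, [1, 2, 3, 4])  # correct: contiguous 2-byte slots
assert [get_short(buf, i) for i in range(4)] == [1, 2, 3, 4]

bad = bytearray(16)
put_shorts(bad, 0, [1, 2, 3, 4], stride=4)  # the bug: int-sized stride
assert [get_short(bad, i) for i in range(4)] == [1, 0, 2, 0]  # garbage interleaved
```

Reading back with the correct stride after a buggy write returns zeros where real values should be, which is exactly the kind of silent data corruption the patch fixes.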

spark git commit: [SPARK-18335][SPARKR] createDataFrame to support numPartitions parameter

2017-01-13 Thread shivaram
Repository: spark Updated Branches: refs/heads/branch-2.1 2c2ca8943 -> ee3642f51 [SPARK-18335][SPARKR] createDataFrame to support numPartitions parameter ## What changes were proposed in this pull request? To allow specifying the number of partitions when the DataFrame is created ## How was

spark git commit: [SPARK-18335][SPARKR] createDataFrame to support numPartitions parameter

2017-01-13 Thread shivaram
Repository: spark Updated Branches: refs/heads/master 285a7798e -> b0e8eb6d3 [SPARK-18335][SPARKR] createDataFrame to support numPartitions parameter ## What changes were proposed in this pull request? To allow specifying the number of partitions when the DataFrame is created ## How was this
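For context, a numPartitions argument controls how the local data is sliced before being parallelized across the cluster. The slicing logic itself is simple; a hypothetical Python sketch of splitting rows into roughly equal partitions (names are illustrative, not SparkR source):

```python
def split_into_partitions(rows, num_partitions):
    """Split rows into num_partitions slices whose sizes differ by at most
    one element, similar in spirit to parallelize(data, numSlices).
    Hypothetical sketch, not Spark source."""
    if num_partitions < 1:
        raise ValueError("num_partitions must be >= 1")
    n = len(rows)
    # Integer arithmetic distributes any remainder across the slices.
    return [rows[(n * i) // num_partitions:(n * (i + 1)) // num_partitions]
            for i in range(num_partitions)]


parts = split_into_partitions(list(range(10)), 3)
assert parts == [[0, 1, 2], [3, 4, 5], [6, 7, 8, 9]]
```

Exposing the partition count at creation time lets callers match parallelism to their data size instead of relying on a default.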

spark-website git commit: Fix FAQ typo - Remove unnecessary occurrence of 'are'

2017-01-13 Thread srowen
Repository: spark-website Updated Branches: refs/heads/asf-site e95223137 -> 03485ecc8 Fix FAQ typo - Remove unnecessary occurrence of 'are' Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/03485ecc Tree:

spark git commit: [SPARK-19178][SQL] convert string of large numbers to int should return null

2017-01-13 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.0 449231c65 -> f56819f9b [SPARK-19178][SQL] convert string of large numbers to int should return null ## What changes were proposed in this pull request? When we convert a string to an integral type, we will convert that string to `decimal(20,

spark git commit: [SPARK-19178][SQL] convert string of large numbers to int should return null

2017-01-13 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.1 b2c9a2c8c -> 2c2ca8943 [SPARK-19178][SQL] convert string of large numbers to int should return null ## What changes were proposed in this pull request? When we convert a string to an integral type, we will convert that string to `decimal(20,
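The semantics of the fix: parse the string through a decimal first, then return SQL null rather than a wrapped value when the result does not fit the target integral type. A rough Python sketch of that intent (not the actual Catalyst implementation), using IntegerType's 32-bit signed range:

```python
from decimal import Decimal

INT_MIN, INT_MAX = -(2**31), 2**31 - 1  # Spark SQL IntegerType range


def cast_string_to_int(s):
    """Return the int value of s, or None (SQL null) when s is not numeric
    or overflows a 32-bit signed int. Sketch of the intended semantics,
    not the Catalyst source."""
    try:
        n = int(Decimal(s.strip()))  # parse via decimal, truncate any fraction
    except (ArithmeticError, ValueError, AttributeError):
        return None  # unparseable input -> null
    return n if INT_MIN <= n <= INT_MAX else None  # overflow -> null


assert cast_string_to_int("123") == 123
assert cast_string_to_int("99999999999999999999") is None  # overflow -> null
```

Before the fix, an out-of-range string could silently wrap to an unrelated int; returning null makes the overflow visible instead of corrupting results.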

spark git commit: [SPARK-18687][PYSPARK][SQL] Backward compatibility - creating a Dataframe on a new SQLContext object fails with a Derby error

2017-01-13 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.0 be527ddc0 -> 449231c65 [SPARK-18687][PYSPARK][SQL] Backward compatibility - creating a Dataframe on a new SQLContext object fails with a Derby error The change is for SQLContext to reuse the active SparkSession during construction if the

spark git commit: [SPARK-18687][PYSPARK][SQL] Backward compatibility - creating a Dataframe on a new SQLContext object fails with a Derby error

2017-01-13 Thread wenchen
Repository: spark Updated Branches: refs/heads/branch-2.1 0668e061b -> b2c9a2c8c [SPARK-18687][PYSPARK][SQL] Backward compatibility - creating a Dataframe on a new SQLContext object fails with a Derby error The change is for SQLContext to reuse the active SparkSession during construction if the

spark git commit: [SPARK-18687][PYSPARK][SQL] Backward compatibility - creating a Dataframe on a new SQLContext object fails with a Derby error

2017-01-13 Thread wenchen
Repository: spark Updated Branches: refs/heads/master b040cef2e -> 285a7798e [SPARK-18687][PYSPARK][SQL] Backward compatibility - creating a Dataframe on a new SQLContext object fails with a Derby error The change is for SQLContext to reuse the active SparkSession during construction if the
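The underlying failure mode: each new SQLContext constructed its own session, and a second session attempts a second connection to the embedded Derby metastore, which permits only one. Reusing the active SparkSession is the classic get-or-create pattern; a generic Python sketch (class names are stand-ins, not PySpark source):

```python
class Session:
    """Stand-in for SparkSession; owns an exclusive resource, such as a
    Derby-backed metastore that allows a single connection."""
    _active = None

    @classmethod
    def get_or_create(cls):
        # Reuse the active session if one exists; create it only once.
        if cls._active is None:
            cls._active = cls()
        return cls._active


class Context:
    """Stand-in for SQLContext: reuse the active session during construction
    instead of building a fresh one, so constructing a second context can
    never open a second metastore connection."""
    def __init__(self, session=None):
        self.session = session or Session.get_or_create()


a, b = Context(), Context()
assert a.session is b.session  # both contexts share one underlying session
```

This keeps legacy code that instantiates SQLContext directly working without touching the single-connection constraint of the embedded metastore.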