Re: scalac crash when compiling DataTypeConversions.scala

2014-10-27 Thread guoxu1231
Hi Stephen, I tried it again. To avoid any profile impact, I ran mvn -DskipTests clean package with the default Hadoop 1.0.4, then opened IDEA and imported it as a Maven project without choosing any profile in the import wizard. Then Make Project or Rebuild Project in IDEA, unfortunately

Re: scalac crash when compiling DataTypeConversions.scala

2014-10-26 Thread guoxu1231
Any update? I encountered the same issue in my environment. Here are my steps as usual: git clone https://github.com/apache/spark, then mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -DskipTests clean package builds successfully with Maven. Import into IDEA as a Maven project, click Build > Make

None in RDD

2014-08-14 Thread guoxu1231
Hi Guys, I have a serious problem regarding 'None' in an RDD (PySpark). Take as an example a transformation that produces 'None': leftOuterJoin(self, other, numPartitions=None) performs a left outer join of self and other. For datasets of type (K, V) and (K, W), it returns a dataset of (K, (V, W)) pairs with all
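
A minimal PySpark sketch of the behavior described above: keys that appear only in the left RDD come back with None as the right-side value. The sample data and the None-handling step are illustrative assumptions, not taken from the original message.

# Sketch: leftOuterJoin produces None for unmatched keys.
# Sample data and the substitution step below are hypothetical.
from pyspark import SparkContext

sc = SparkContext("local", "left-outer-join-none")

left = sc.parallelize([("a", 1), ("b", 2)])   # (K, V)
right = sc.parallelize([("a", "x")])          # (K, W)

# Keys present only in `left` get None as the right-side value:
# [('a', (1, 'x')), ('b', (2, None))]
joined = left.leftOuterJoin(right)
print(joined.collect())

# One common way to handle the None values is to substitute a default
# before further processing (the default here is an assumption).
cleaned = joined.mapValues(lambda vw: (vw[0], vw[1] if vw[1] is not None else "missing"))
print(cleaned.collect())

sc.stop()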