Hm, that looks like a Parquet version mismatch then. I think Spark 1.4
uses Parquet 1.6? You might well get away with 1.6 here anyway.
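
If that's what's going on, one thing you could try (untested on my end, and the
exact version string is a guess; check the parquet.version property in the
v1.4.0 tag's top-level pom.xml) is to keep your cdh542 profile but stop pinning
Parquet to the CDH 1.5 artifact, overriding it on the command line instead:

    mvn clean package -DskipTests -Pcdh542 -Dparquet.version=1.6.0rc3

You can also check which Parquet actually ends up on the classpath with
something like the following (the com.twitter groupId is an assumption, based
on the pre-1.7 Parquet coordinates):

    mvn -Pcdh542 dependency:tree -Dincludes=com.twitter:parquet*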

On Thu, Jun 25, 2015 at 3:13 PM, Aaron <aarongm...@gmail.com> wrote:
> Sorry about not supplying the error...that would make things helpful, you'd
> think :)
>
> [INFO]
> ------------------------------------------------------------------------
>
> [INFO] Building Spark Project SQL 1.4.1
>
> [INFO]
> ------------------------------------------------------------------------
>
> [INFO]
>
> [INFO] --- maven-clean-plugin:2.6.1:clean (default-clean) @ spark-sql_2.10
> ---
>
> [INFO] Deleting /Users/aarong/dev/projects/spark/sql/core/target
>
> [INFO]
>
> [INFO] --- maven-enforcer-plugin:1.4:enforce (enforce-versions) @
> spark-sql_2.10 ---
>
> [INFO]
>
> [INFO] --- scala-maven-plugin:3.2.0:add-source (eclipse-add-source) @
> spark-sql_2.10 ---
>
> [INFO] Add Source directory:
> /Users/aarong/dev/projects/spark/sql/core/src/main/scala
>
> [INFO] Add Test Source directory:
> /Users/aarong/dev/projects/spark/sql/core/src/test/scala
>
> [INFO]
>
> [INFO] --- build-helper-maven-plugin:1.9.1:add-source (add-scala-sources) @
> spark-sql_2.10 ---
>
> [INFO] Source directory:
> /Users/aarong/dev/projects/spark/sql/core/src/main/scala added.
>
> [INFO]
>
> [INFO] --- maven-remote-resources-plugin:1.5:process (default) @
> spark-sql_2.10 ---
>
> [INFO]
>
> [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @
> spark-sql_2.10 ---
>
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
>
> [INFO] skip non existing resourceDirectory
> /Users/aarong/dev/projects/spark/sql/core/src/main/resources
>
> [INFO] Copying 3 resources
>
> [INFO]
>
> [INFO] --- scala-maven-plugin:3.2.0:compile (scala-compile-first) @
> spark-sql_2.10 ---
>
> [INFO] Using zinc server for incremental compilation
>
> [INFO] compiler plugin:
> BasicArtifact(org.scalamacros,paradise_2.10.4,2.0.1,null)
>
> [info] Compiling 100 Scala sources and 24 Java sources to /Users/aarong/dev/projects/spark/sql/core/target/scala-2.10/classes...
>
> [error] /Users/aarong/dev/projects/spark/sql/core/src/main/scala/org/apache/spark/sql/parquet/ParquetTableOperations.scala:504: method getReadSupport in class ParquetInputFormat cannot be accessed in org.apache.spark.sql.parquet.FilteringParquetRowInputFormat
>
> [error]     val readContext = getReadSupport(configuration).init(
>
> [error]                       ^
>
> [error] one error found
>
> [error] Compile failed at Jun 25, 2015 8:09:31 AM [5.682s]
>
> [INFO]
> ------------------------------------------------------------------------
>
> [INFO] Reactor Summary:
>
> [INFO]
>
> [INFO] Spark Project Parent POM ........................... SUCCESS [  2.624 s]
>
> [INFO] Spark Launcher Project ............................. SUCCESS [  9.160 s]
>
> [INFO] Spark Project Networking ........................... SUCCESS [  6.738 s]
>
> [INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [  4.232 s]
>
> [INFO] Spark Project Unsafe ............................... SUCCESS [  3.223 s]
>
> [INFO] Spark Project Core ................................. SUCCESS [01:47 min]
>
> [INFO] Spark Project Bagel ................................ SUCCESS [  4.580 s]
>
> [INFO] Spark Project GraphX ............................... SUCCESS [ 11.935 s]
>
> [INFO] Spark Project Streaming ............................ SUCCESS [ 24.562 s]
>
> [INFO] Spark Project Catalyst ............................. SUCCESS [ 28.452 s]
>
> [INFO] Spark Project SQL .................................. FAILURE [  8.252 s]
>
> [INFO] Spark Project ML Library ........................... SKIPPED
>
> [INFO] Spark Project Tools ................................ SKIPPED
>
> [INFO] Spark Project Hive ................................. SKIPPED
>
> [INFO] Spark Project REPL ................................. SKIPPED
>
> [INFO] Spark Project Assembly ............................. SKIPPED
>
> [INFO] Spark Project External Twitter ..................... SKIPPED
>
> [INFO] Spark Project External Flume Sink .................. SKIPPED
>
> [INFO] Spark Project External Flume ....................... SKIPPED
>
> [INFO] Spark Project External MQTT ........................ SKIPPED
>
> [INFO] Spark Project External ZeroMQ ...................... SKIPPED
>
> [INFO] Spark Project External Kafka ....................... SKIPPED
>
> [INFO] Spark Project Examples ............................. SKIPPED
>
> [INFO] Spark Project External Kafka Assembly .............. SKIPPED
>
> [INFO] Spark Project YARN ................................. SKIPPED
>
> [INFO] Spark Project YARN Shuffle Service ................. SKIPPED
>
> [INFO] Spark Project Hive Thrift Server ................... SKIPPED
>
> [INFO]
> ------------------------------------------------------------------------
>
> [INFO] BUILD FAILURE
>
> [INFO]
> ------------------------------------------------------------------------
>
> [INFO] Total time: 03:31 min
>
> [INFO] Finished at: 2015-06-25T08:09:31-04:00
>
> [INFO] Final Memory: 72M/1342M
>
> [INFO]
> ------------------------------------------------------------------------
>
> [ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.0:compile (scala-compile-first) on project spark-sql_2.10: Execution scala-compile-first of goal net.alchim31.maven:scala-maven-plugin:3.2.0:compile failed. CompileFailed -> [Help 1]
>
> [ERROR]
>
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e
> switch.
>
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>
> [ERROR]
>
> [ERROR] For more information about the errors and possible solutions, please
> read the following articles:
>
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
>
> [ERROR]
>
> [ERROR] After correcting the problems, you can resume the build with the
> command
>
> [ERROR]   mvn <goals> -rf :spark-sql_2.10
>
>
>
>
> Ahh...ok, so it's Hive 1.1 and Spark 1.4.  Even using the "standard" Hive 0.13
> version, I still get the above error.  Granted, it's CDH's Hadoop JARs and
> Apache's Hive.
>
> On Wed, Jun 24, 2015 at 9:30 PM, Sean Owen <so...@cloudera.com> wrote:
>>
>> You didn't provide any error?
>>
>> You're compiling vs Hive 1.1 here and that is the problem. It is nothing
>> to do with CDH.
>>
>>
>> On Wed, Jun 24, 2015, 10:15 PM Aaron <aarongm...@gmail.com> wrote:
>>>
>>> I was curious if anyone was able to get CDH 5.4.1 or 5.4.2 compiling
>>> with the v1.4.0 tag out of git?  Spark SQL keeps dying on me and I'm not
>>> 100% sure why.
>>>
>>> I modified the pom.xml to make a simple profile to help:
>>>
>>>     <profile>
>>>       <id>cdh542</id>
>>>       <properties>
>>>         <java.version>1.7</java.version>
>>>         <flume.version>1.5.0-cdh5.4.2</flume.version>
>>>         <hadoop.version>2.6.0-cdh5.4.2</hadoop.version>
>>>         <yarn.version>${hadoop.version}</yarn.version>
>>>         <hive.version>1.1.0-cdh5.4.2</hive.version>
>>>         <hive.version.short>1.1.0-cdh5.4.2</hive.version.short>
>>>         <hbase.version>1.0.0-cdh5.4.2</hbase.version>
>>>         <zookeeper.version>3.4.5-cdh5.4.2</zookeeper.version>
>>>         <avro.version>1.7.6-cdh5.4.2</avro.version>
>>>         <parquet.version>1.5.0-cdh5.4.2</parquet.version>
>>>       </properties>
>>>       <modules>
>>>         <module>yarn</module>
>>>         <module>network/yarn</module>
>>>         <module>sql/hive-thriftserver</module>
>>>       </modules>
>>>     </profile>
>>>
>>> I have tried removing the "hive" properties and letting it use the default
>>> 0.13, but it fails in the same place.
>>>
>>>
>>> mvn clean package -DskipTests -Pcdh542
>>>
>>> Using the standard,
>>>
>>> mvn clean package -Phadoop-2.6 -Pyarn -Phive-thriftserver
>>>
>>> works great...so it's got to be something with CDH's JARs, just not sure
>>> what.  And doing a mvn -X didn't lead me anywhere.  Thoughts?  Help?  URLs
>>> to read?
>>>
>>> Thanks in advance.
>>>
>>> Cheers,
>>> Aaron
>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
