That's probably a good thing to have, so I'll add it, but unfortunately it
did not help this issue.  It looks like the hadoop-2.4 profile only sets
these properties, which don't seem like they would affect anything related
to Netty:

      <properties>
        <hadoop.version>2.4.0</hadoop.version>
        <protobuf.version>2.5.0</protobuf.version>
        <jets3t.version>0.9.0</jets3t.version>
        <commons.math3.version>3.1.1</commons.math3.version>
        <avro.mapred.classifier>hadoop2</avro.mapred.classifier>
      </properties>
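
One way to double-check that is to compare the Netty artifacts the
flume-sink module resolves with and without the profile.  Something like
this should do it (a sketch; the -Dincludes pattern is my best guess at
the relevant artifacts):

      # From the Spark source root: show only Netty entries in the tree
      mvn -pl external/flume-sink -am dependency:tree -Dincludes='*:netty'
      # Same thing with the hadoop-2.4 profile enabled, for comparison
      mvn -Phadoop-2.4 -pl external/flume-sink -am dependency:tree -Dincludes='*:netty'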


Thanks,
Jonathan Kelly




On 3/5/15, 1:09 PM, "Patrick Wendell" <pwend...@gmail.com> wrote:

>You may need to add the -Phadoop-2.4 profile. When building our release
>packages for Hadoop 2.4 we use the following flags:
>
>-Phadoop-2.4 -Phive -Phive-thriftserver -Pyarn
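>
>For reference, a complete invocation along those lines might look like
>this (illustrative; add or drop flags such as -DskipTests to taste):
>
>mvn -Phadoop-2.4 -Phive -Phive-thriftserver -Pyarn -DskipTests clean package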
>
>- Patrick
>
>On Thu, Mar 5, 2015 at 12:47 PM, Kelly, Jonathan <jonat...@amazon.com> wrote:
>> I confirmed that this has nothing to do with BigTop by running the same
>> mvn command directly in a fresh clone of the Spark repository at the
>> v1.2.1 tag.  I got the exact same error.
>>
>>
>> Jonathan Kelly
>>
>> Elastic MapReduce - SDE
>>
>> Port 99 (SEA35) 08.220.C2
>>
>>
>> From: <Kelly>, Jonathan Kelly <jonat...@amazon.com>
>> Date: Thursday, March 5, 2015 at 10:39 AM
>> To: "user@spark.apache.org" <user@spark.apache.org>
>> Subject: Spark v1.2.1 failing under BigTop build in External Flume Sink
>> (due to missing Netty library)
>>
>> I'm running into an issue building Spark v1.2.1 (as well as the latest
>> in branch-1.2, v1.3.0-rc2, and the latest in branch-1.3) with BigTop
>> (v0.9, which is not quite released yet).  The build fails in the External
>> Flume Sink subproject with the following error:
>>
>> [INFO] Compiling 5 Scala sources and 3 Java sources to
>> /workspace/workspace/bigtop.spark-rpm/build/spark/rpm/BUILD/spark-1.3.0/external/flume-sink/target/scala-2.10/classes...
>> [WARNING] Class org.jboss.netty.channel.ChannelFactory not found -
>> continuing with a stub.
>> [ERROR] error while loading NettyServer, class file
>> '/home/ec2-user/.m2/repository/org/apache/avro/avro-ipc/1.7.6/avro-ipc-1.7.6.jar(org/apache/avro/ipc/NettyServer.class)'
>> is broken (class java.lang.NullPointerException/null)
>> [WARNING] one warning found
>> [ERROR] one error found
>>
>> It seems like what is happening is that the Netty library is missing at
>> build time, which happens because it is explicitly excluded in the
>> pom.xml (see
>> https://github.com/apache/spark/blob/v1.2.1/external/flume-sink/pom.xml#L42;
>> the relevant blocks are sketched below).  I tried removing the exclusions
>> and the explicit test-scope re-add on lines 77-88, and that allowed the
>> build to succeed, though I don't know whether that will cause problems at
>> runtime.  I don't have any experience with the Flume Sink, so I don't
>> really know how to test it.  (And, to be clear, I'm not necessarily
>> trying to get the Flume Sink to work -- I just want the project to build
>> successfully, though of course I'd still want the Flume Sink to work for
>> whoever does need it.)
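>>
>> For context, the exclusion looks roughly like this (paraphrased from the
>> linked pom.xml; the version on the test-scope re-add is from memory, so
>> double-check it against the actual file):
>>
>>       <dependency>
>>         <groupId>org.apache.flume</groupId>
>>         <artifactId>flume-ng-sdk</artifactId>
>>         <version>${flume.version}</version>
>>         <exclusions>
>>           <!-- Netty is deliberately kept off the compile classpath -->
>>           <exclusion>
>>             <groupId>io.netty</groupId>
>>             <artifactId>netty</artifactId>
>>           </exclusion>
>>         </exclusions>
>>       </dependency>
>>
>>       <!-- ...and the test-scope re-add on lines 77-88: -->
>>       <dependency>
>>         <groupId>io.netty</groupId>
>>         <artifactId>netty</artifactId>
>>         <version>3.4.0.Final</version>
>>         <scope>test</scope>
>>       </dependency>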
>>
>> Does anybody have any idea what's going on here?  Here is the command
>> BigTop is running to build Spark:
>> mvn -Pbigtop-dist -Pyarn -Phive -Phive-thriftserver -Pkinesis-asl
>> -Divy.home=/home/ec2-user/.ivy2 -Dsbt.ivy.home=/home/ec2-user/.ivy2
>> -Duser.home=/home/ec2-user -Drepo.maven.org=
>> -Dreactor.repo=file:///home/ec2-user/.m2/repository
>> -Dhadoop.version=2.4.0-amzn-3-SNAPSHOT -Dyarn.version=2.4.0-amzn-3-SNAPSHOT
>> -Dprotobuf.version=2.5.0 -Dscala.version=2.10.3 -Dscala.binary.version=2.10
>> -DskipTests -DrecompileMode=all install
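>>
>> (To iterate on just the failing part, rebuilding the flume-sink module
>> and its upstream dependencies should be enough; this is a sketch using
>> Maven's module selection rather than anything BigTop-specific:)
>>
>> mvn -pl external/flume-sink -am -DskipTests install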
>>
>> As I mentioned above, if I switch to the latest in branch-1.2, to
>> v1.3.0-rc2, or to the latest in branch-1.3, I get the exact same error.
>> I was not getting the error with Spark v1.1.0, though there weren't any
>> changes to external/flume-sink/pom.xml between v1.1.0 and v1.2.1.
>>
>>
>> ~ Jonathan Kelly

