Lev,

Mine worked with this:

dev/make-distribution.sh --name "hadoop2-without-hive" --tgz \
  "-Pyarn,hadoop-provided,hadoop-2.6,parquet-provided"

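If I remember right, make-distribution.sh just forwards any extra arguments to mvn, so the same profiles can also be passed as separate -P flags. Something like this (run from the top of the Spark 2.0.0 source tree) should be equivalent:

dev/make-distribution.sh --name "hadoop2-without-hive" --tgz \
  -Pyarn -Phadoop-provided -Phadoop-2.6 -Pparquet-provided
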
[INFO] Reactor Summary:
[INFO]
[INFO] Spark Project Parent POM ........................... SUCCESS [ 11.084 s]
[INFO] Spark Project Tags ................................. SUCCESS [ 13.619 s]
[INFO] Spark Project Sketch ............................... SUCCESS [ 16.602 s]
[INFO] Spark Project Networking ........................... SUCCESS [ 16.002 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 7.782 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [ 22.354 s]
[INFO] Spark Project Launcher ............................. SUCCESS [ 18.410 s]
[INFO] Spark Project Core ................................. SUCCESS [02:38 min]
[INFO] Spark Project GraphX ............................... SUCCESS [ 50.014 s]
[INFO] Spark Project Streaming ............................ SUCCESS [01:24 min]
[INFO] Spark Project Catalyst ............................. SUCCESS [02:57 min]
[INFO] Spark Project SQL .................................. SUCCESS [02:15 min]
[INFO] Spark Project ML Local Library ..................... SUCCESS [ 34.488 s]
[INFO] Spark Project ML Library ........................... SUCCESS [02:12 min]
[INFO] Spark Project Tools ................................ SUCCESS [ 14.540 s]
[INFO] Spark Project Hive ................................. SUCCESS [02:03 min]
[INFO] Spark Project REPL ................................. SUCCESS [ 17.966 s]
[INFO] Spark Project YARN Shuffle Service ................. SUCCESS [ 7.891 s]
[INFO] Spark Project YARN ................................. SUCCESS [ 47.754 s]
[INFO] Spark Project Assembly ............................. SUCCESS [ 3.043 s]
[INFO] Spark Project External Flume Sink .................. SUCCESS [ 26.215 s]
[INFO] Spark Project External Flume ....................... SUCCESS [ 36.552 s]
[INFO] Spark Project External Flume Assembly .............. SUCCESS [ 3.621 s]
[INFO] Spark Integration for Kafka 0.8 .................... SUCCESS [ 52.641 s]
[INFO] Spark Project Examples ............................. SUCCESS [ 35.921 s]
[INFO] Spark Project External Kafka Assembly .............. SUCCESS [ 5.114 s]
[INFO] Spark Integration for Kafka 0.10 ................... SUCCESS [ 49.266 s]
[INFO] Spark Integration for Kafka 0.10 Assembly .......... SUCCESS [ 6.831 s]
[INFO] Spark Project Java 8 Tests ......................... SUCCESS [ 7.682 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS


and this is what I get:

-rw-r--r--  1 hduser hadoop 106723489 Aug  3 10:16 spark-2.0.0-bin-hadoop2-without-hive.tgz

and you can gunzip and untar the tgz file
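
For example, something like this should unpack it (the file and directory names are whatever --name above produced, so adjust if yours differ):

# extract the distribution and move into it
tar -xzf spark-2.0.0-bin-hadoop2-without-hive.tgz
cd spark-2.0.0-bin-hadoop2-without-hive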

HTH


Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 3 August 2016 at 09:57, Lev Katzav <kat...@gmail.com> wrote:

> I'm building spark from the source code
>
> On Wed, Aug 3, 2016 at 3:51 PM, Mich Talebzadeh <mich.talebza...@gmail.com> wrote:
>
>> Just to clarify, are you building Spark 2 from the downloaded source?
>>
>> Or are you referring to building an uber jar from your own code with mvn,
>> to submit with spark-submit etc.?
>>
>> HTH
>>
>>
>> On 3 August 2016 at 09:43, lev <kat...@gmail.com> wrote:
>>
>>> hi,
>>>
>>> In Spark 1.5, to build an uber-jar,
>>> I would just compile the code with:
>>> mvn ... package
>>> and that will create one big jar with all the dependencies.
>>>
>>> When trying to do the same with Spark 2.0, I'm getting a tar.gz file
>>> instead.
>>>
>>> This is the full command I'm using:
>>> mvn -Pyarn -Phive -Phadoop-2.6 -Dhadoop.version=2.6.0-cdh5.4.7 -DskipTests \
>>>   -Phive-thriftserver -Dscala-2.10 -Pbigtop-dist clean package
>>>
>>> How can I create the uber-jar?
>>>
>>> Thanks.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>
