Re: How to pass hdp.version to flink on yarn

2015-11-24 Thread Maximilian Michels
Hi Jagat,

I think the issue here is not the JVM options. You are missing shell
environment variables during the container launch. Adding those to the
user's .bashrc or .profile should fix the problem.
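
For illustration, here is a minimal sketch of what such entries might look
like in the user's ~/.bashrc. Apart from HADOOP_CONF_DIR, the variable names
below are assumptions (the values are taken from your message below) and
would need to match whatever the Pivotal stack scripts actually read:

export HADOOP_CONF_DIR=/etc/hadoop/conf   # standard Hadoop client config directory
export STACK_NAME=phd                     # hypothetical name; value from your -Dstack.name
export STACK_VERSION=3.0.0.0-249          # hypothetical name; value from your -Dstack.version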

Best regards,
Max

On Mon, Nov 23, 2015 at 10:14 PM, Jagat Singh  wrote:
> Hello Robert,
>
> Added the following:
>
> env.java.opts: "-Dstack.name=phd -Dstack.version=3.0.0.0-249"
>
> Same Error
>
> Is there a config option that allows passing extra Java opts to the actual
> YARN containers?
>
> Thanks,
>
> Jagat Singh
>
>
>
>
>
> On Mon, Nov 23, 2015 at 9:21 PM, Robert Metzger  wrote:
>>
>> Hi,
>>
>> In Flink, the configuration parameter for passing custom JVM options is
>> "env.java.opts". I would recommend putting it into conf/flink-conf.yaml
>> like this:
>>
>> env.java.opts: "-Dhdp.version=2.3.0.0-2557"
>>
>> Please let me know if this works.
>> Maybe you are the first user running Flink on Pivotal HDP, and some
>> things may be different from other Hadoop distributions.
>>
>> Regards,
>> Robert
>>
>>
>>
>>
>> On Mon, Nov 23, 2015 at 1:15 AM, Jagat Singh  wrote:
>>>
>>> Hi,
>>>
>>> I am running the example Flink program on Pivotal HDP:
>>>
>>> ./bin/flink run -m yarn-cluster -yn 2 ./examples/WordCount.jar
>>>
>>> I am getting the error below.
>>>
>>> How do I pass stack.name and stack.version to the Flink program?
>>>
>>> This is similar to what we pass to Spark as hdp.version. For example:
>>>
>>> spark.driver.extraJavaOptions    -Dhdp.version=2.3.0.0-2557
>>> spark.yarn.am.extraJavaOptions   -Dhdp.version=2.3.0.0-2557
>>>
>>> Thanks
>>>
>>> Exception message:
>>> /grid/0/hadoop/yarn/local/usercache/d760770/appcache/application_1447977375774_17024/container_e34_1447977375774_17024_01_01/launch_container.sh:
>>> line 26:
>>> $PWD/*:$HADOOP_CONF_DIR:/usr/${stack.name}/current/hadoop-client/*:/usr/${stack.name}/current/hadoop-client/lib/*:/usr/${stack.name}/current/hadoop-hdfs-client/*:/usr/${stack.name}/current/hadoop-hdfs-client/lib/*:/usr/${stack.name}/current/hadoop-yarn-client/*:/usr/${stack.name}/current/hadoop-yarn-client/lib/*:
>>> bad substitution
>>>
>>> Stack trace: ExitCodeException exitCode=1:
>>> /grid/0/hadoop/yarn/local/usercache/d760770/appcache/application_1447977375774_17024/container_e34_1447977375774_17024_01_01/launch_container.sh:
>>> line 26:
>>> $PWD/*:$HADOOP_CONF_DIR:/usr/${stack.name}/current/hadoop-client/*:/usr/${stack.name}/current/hadoop-client/lib/*:/usr/${stack.name}/current/hadoop-hdfs-client/*:/usr/${stack.name}/current/hadoop-hdfs-client/lib/*:/usr/${stack.name}/current/hadoop-yarn-client/*:/usr/${stack.name}/current/hadoop-yarn-client/lib/*:
>>> bad substitution
>>>
>>>
>>
>


Re: How to pass hdp.version to flink on yarn

2015-11-23 Thread Robert Metzger
Hi,

In Flink, the configuration parameter for passing custom JVM options is
"env.java.opts". I would recommend putting it into
conf/flink-conf.yaml like this:

env.java.opts: "-Dhdp.version=2.3.0.0-2557"
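
If it still fails, one way to check whether the options actually reach the
YARN containers is to look at the failing launch_container.sh (its full path
is printed in your error) while it is still on disk, or to pull the
aggregated container logs for the application, e.g.:

yarn logs -applicationId application_1447977375774_17024

(the application id above is taken from your error message).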

Please let me know if this works.
Maybe you are the first user running Flink on Pivotal HDP, and some
things may be different from other Hadoop distributions.

Regards,
Robert




On Mon, Nov 23, 2015 at 1:15 AM, Jagat Singh  wrote:

> Hi,
>
> I am running the example Flink program on Pivotal HDP:
>
> ./bin/flink run -m yarn-cluster -yn 2 ./examples/WordCount.jar
>
> I am getting the error below.
>
> How do I pass stack.name and stack.version to the Flink program?
>
> This is similar to what we pass to Spark as hdp.version. For example:
>
> spark.driver.extraJavaOptions    -Dhdp.version=2.3.0.0-2557
> spark.yarn.am.extraJavaOptions   -Dhdp.version=2.3.0.0-2557
>
> Thanks
>
> Exception message:
> /grid/0/hadoop/yarn/local/usercache/d760770/appcache/application_1447977375774_17024/container_e34_1447977375774_17024_01_01/launch_container.sh:
> line 26:
> $PWD/*:$HADOOP_CONF_DIR:/usr/${stack.name}/current/hadoop-client/*:/usr/${stack.name}/current/hadoop-client/lib/*:/usr/${stack.name}/current/hadoop-hdfs-client/*:/usr/${stack.name}/current/hadoop-hdfs-client/lib/*:/usr/${stack.name}/current/hadoop-yarn-client/*:/usr/${stack.name}/current/hadoop-yarn-client/lib/*:
> bad substitution
>
> Stack trace: ExitCodeException exitCode=1:
> /grid/0/hadoop/yarn/local/usercache/d760770/appcache/application_1447977375774_17024/container_e34_1447977375774_17024_01_01/launch_container.sh:
> line 26:
> $PWD/*:$HADOOP_CONF_DIR:/usr/${stack.name}/current/hadoop-client/*:/usr/${stack.name}/current/hadoop-client/lib/*:/usr/${stack.name}/current/hadoop-hdfs-client/*:/usr/${stack.name}/current/hadoop-hdfs-client/lib/*:/usr/${stack.name}/current/hadoop-yarn-client/*:/usr/${stack.name}/current/hadoop-yarn-client/lib/*:
> bad substitution
>
>
>


How to pass hdp.version to flink on yarn

2015-11-22 Thread Jagat Singh
Hi,

I am running the example Flink program on Pivotal HDP:

./bin/flink run -m yarn-cluster -yn 2 ./examples/WordCount.jar

I am getting the error below.

How do I pass stack.name and stack.version to the Flink program?

This is similar to what we pass to Spark as hdp.version. For example:

spark.driver.extraJavaOptions    -Dhdp.version=2.3.0.0-2557
spark.yarn.am.extraJavaOptions   -Dhdp.version=2.3.0.0-2557
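
(For reference, we typically set these in Spark's conf/spark-defaults.conf.)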

Thanks

Exception message:
/grid/0/hadoop/yarn/local/usercache/d760770/appcache/application_1447977375774_17024/container_e34_1447977375774_17024_01_01/launch_container.sh:
line 26:
$PWD/*:$HADOOP_CONF_DIR:/usr/${stack.name}/current/hadoop-client/*:/usr/${stack.name}/current/hadoop-client/lib/*:/usr/${stack.name}/current/hadoop-hdfs-client/*:/usr/${stack.name}/current/hadoop-hdfs-client/lib/*:/usr/${stack.name}/current/hadoop-yarn-client/*:/usr/${stack.name}/current/hadoop-yarn-client/lib/*:
bad substitution

Stack trace: ExitCodeException exitCode=1:
/grid/0/hadoop/yarn/local/usercache/d760770/appcache/application_1447977375774_17024/container_e34_1447977375774_17024_01_01/launch_container.sh:
line 26:
$PWD/*:$HADOOP_CONF_DIR:/usr/${stack.name}/current/hadoop-client/*:/usr/${stack.name}/current/hadoop-client/lib/*:/usr/${stack.name}/current/hadoop-hdfs-client/*:/usr/${stack.name}/current/hadoop-hdfs-client/lib/*:/usr/${stack.name}/current/hadoop-yarn-client/*:/usr/${stack.name}/current/hadoop-yarn-client/lib/*:
bad substitution