More detailed commands (a quick way to verify them is sketched after the list):

2. yarn rmadmin -replaceLabelsOnNode spark-dev:54321,foo;
    yarn rmadmin -replaceLabelsOnNode sut-1:54321,bar;
    yarn rmadmin -replaceLabelsOnNode sut-2:54321,bye;
    yarn rmadmin -replaceLabelsOnNode sut-3:54321,foo;
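For reference, a sketch of how to double-check that the labels actually landed
on the nodes (assuming a Hadoop release where the "yarn cluster" subcommand is
available; otherwise the ResourceManager web UI shows the same mapping):

    yarn cluster --list-node-labels        # should list foo, bar and bye
    yarn node -status spark-dev:54321      # Node-Labels field should show foo

The node IDs (host:port) are the same ones passed to -replaceLabelsOnNode above.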





At 2015-12-17 10:31:20, "Allen Zhang" <allenzhang...@126.com> wrote:

Hi Ted,


I have 4 VMs (spark-dev, sut-1, sut-2, sut-3):


With these commands:
1. yarn rmadmin -addToClusterNodeLabels foo,bar,bye
2. yarn rmadmin -replaceLabelsOnNode spark-dev:54321,foo; yarn rmadmin
-replaceLabelsOnNode sut-1:54321,foo (and the same for sut-2 and sut-3)
3. yarn rmadmin -refreshQueues (see the note on queue configuration below)
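As a side note, -refreshQueues reloads capacity-scheduler.xml. For a queue to
be able to request labelled nodes, the capacity scheduler normally needs
entries along these lines (shown as name=value pairs for brevity; a sketch
only, and the 100% values are just an assumption for this small setup):

    yarn.scheduler.capacity.root.default.accessible-node-labels = foo,bar,bye
    yarn.scheduler.capacity.root.default.accessible-node-labels.foo.capacity = 100
    yarn.scheduler.capacity.root.default.accessible-node-labels.bar.capacity = 100
    yarn.scheduler.capacity.root.default.accessible-node-labels.bye.capacity = 100

Without such entries a queue normally only has access to nodes carrying no label.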


I am using Spark 1.6.0 snapshot and Apache Hadoop 2.6.0.


The command I submit to the Spark cluster is below, and it works as expected:
spark-submit --conf spark.yarn.executor.nodeLabelExpression=foo --conf 
spark.yarn.am.nodeLabelExpression=bar --class org.apache.spark.examples.SparkPi 
--master yarn-cluster lib/spark-examples-1.6.0-SNAPSHOT-hadoop2.6.0.jar
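For what it's worth, the same two settings can also live in
conf/spark-defaults.conf instead of being passed with --conf (a sketch, using
the same values as above):

    spark.yarn.am.nodeLabelExpression        bar
    spark.yarn.executor.nodeLabelExpression  foo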


NOTE: I compiled the Spark tar.gz myself.


However, with one small change, setting
spark.yarn.executor.nodeLabelExpression="foo|bye", the command does not work
and the application just sits waiting in my console:
 

spark-submit --conf spark.yarn.executor.nodeLabelExpression="foo|bye" --conf 
spark.yarn.am.nodeLabelExpression=bar --class org.apache.spark.examples.SparkPi 
--master yarn-cluster lib/spark-examples-1.6.0-SNAPSHOT-hadoop2.6.0.jar
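While it sits there, the stuck application can be inspected from the command
line (a sketch; the application ID below is just a placeholder):

    yarn application -list -appStates ACCEPTED
    yarn application -status application_1450000000000_0001

If it stays in the ACCEPTED state, the Diagnostics field in the -status output
can hint at which resources (and labels) it is waiting for.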


So my question is: do spark.yarn.executor.nodeLabelExpression and
spark.yarn.am.nodeLabelExpression really support an "expression" with &&, ||,
or even !, and so on?


NOTE:
I didn't change capacity-scheduler.xml at all; I just want to try specifying
labels from the application's point of view.
Any feedback from the community?
Thanks,
Allen



At 2015-12-16 18:24:08, "Ted Yu" <yuzhih...@gmail.com> wrote:

Allen:
Since you mentioned scheduling, I assume you were talking about node label 
support in YARN. 
If that is the case, can you give us some more information:
How node labels are set up in the YARN cluster
How you specified node labels in the application
Hadoop and Spark releases you are using


Cheers

On Dec 16, 2015, at 1:00 AM, Chang Ya-Hsuan <sumti...@gmail.com> wrote:


Are you trying to write a DataFrame boolean expression?
Please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame
boolean expressions.



example:


>>> df = sqlContext.range(10)
>>> df.where( (df.id==1) | ~(df.id==1))
DataFrame[id: bigint]




On Wed, Dec 16, 2015 at 4:32 PM, Allen Zhang <allenzhang...@126.com> wrote:

Hi All,


Does the Spark node label expression really support "&&", "||", or even "!"
for label-based scheduling?
I tried that but it does NOT work.


Best Regards,
Allen







--

-- 張雅軒
