Actually, to simplify the problem, we run our program on a single machine
with 4 slave workers. Since everything is on a single machine, I think all
slave workers run with root privileges.

BTW, if we have a cluster, how can we make sure slaves on remote machines
run the program as root?
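One option we are considering (just a sketch; the install path and master
URL below are assumptions, and the exact arguments of the script vary
between Spark releases) is to start each standalone worker under root, so
the executors it forks inherit root:

```shell
# On each slave machine, start the standalone worker as root so the
# executor JVMs it launches inherit root and libpcap can open the NICs.
# (Path and master URL are assumptions; older Spark releases also take a
# worker number as the first argument.)
sudo /opt/spark/sbin/start-slave.sh spark://master-host:7077
```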

Best regards,

Lin Hao XU
IBM Research China
Email: xulin...@cn.ibm.com
My Flickr: http://www.flickr.com/photos/xulinhao/sets



From:   Dean Wampler <deanwamp...@gmail.com>
To:     Lin Hao Xu/China/IBM@IBMCN
Cc:     Hai Shan Wu/China/IBM@IBMCN, user <user@spark.apache.org>
Date:   2015/04/29 09:40
Subject:        Re: A problem of using spark streaming to capture network packets



Are the tasks on the slaves also running as root? If not, that might
explain the problem.
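A quick way to check is to log the OS user from inside the task itself. The
property lookup below is standard JDK; where you wire it in (e.g. the
receiver's onStart()) is up to you:

```java
public class WhoAmI {
    public static void main(String[] args) {
        // Prints the OS user this JVM runs as. Dropping an equivalent line
        // into the receiver code running on an executor shows whether the
        // task actually has root, which libpcap needs to see the interfaces.
        System.out.println("Running as: " + System.getProperty("user.name"));
    }
}
```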

dean

Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition (O'Reilly)
Typesafe
@deanwampler
http://polyglotprogramming.com

On Tue, Apr 28, 2015 at 8:30 PM, Lin Hao Xu <xulin...@cn.ibm.com> wrote:
  1. The full command line is written in a shell script:

  LIB=/home/spark/.m2/repository

  /opt/spark/bin/spark-submit \
    --class spark.pcap.run.TestPcapSpark \
    --jars $LIB/org/pcap4j/pcap4j-core/1.4.0/pcap4j-core-1.4.0.jar,$LIB/org/pcap4j/pcap4j-packetfactory-static/1.4.0/pcap4j-packetfactory-static-1.4.0.jar,$LIB/org/slf4j/slf4j-api/1.7.6/slf4j-api-1.7.6.jar,$LIB/org/slf4j/slf4j-log4j12/1.7.6/slf4j-log4j12-1.7.6.jar,$LIB/net/java/dev/jna/jna/4.1.0/jna-4.1.0.jar \
    /home/spark/napa/napa.jar

  2. We run this script with sudo; without sudo, you cannot access the
  network interfaces.
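  An alternative to sudo that we have seen suggested (an assumption on our
  part, Linux-only, and the JVM path varies per machine) is to grant capture
  capabilities to the JVM binary itself:

```shell
# Instead of running the whole job as root, grant the JVM binary the
# capabilities libpcap needs to open capture devices (Linux only; the
# java path below is an assumption).
sudo setcap cap_net_raw,cap_net_admin=eip /usr/lib/jvm/java-8-openjdk/bin/java
```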

  3. We also tested List<PcapNetworkInterface> nifs = Pcaps.findAllDevs()
  in a standard Java program, it really worked like a champion.

  Best regards,

  Lin Hao XU
  IBM Research China
  Email: xulin...@cn.ibm.com
  My Flickr: http://www.flickr.com/photos/xulinhao/sets


  From: Dean Wampler <deanwamp...@gmail.com>
  To: Hai Shan Wu/China/IBM@IBMCN
  Cc: user <user@spark.apache.org>, Lin Hao Xu/China/IBM@IBMCN
  Date: 2015/04/28 20:07
  Subject: Re: A problem of using spark streaming to capture network packets




  It's probably not your code.

  What's the full command line you use to submit the job?

  Are you sure the job on the cluster has access to the network interface?
  Can you test the receiver by itself without Spark? For example, does this
  line work as expected:

  List<PcapNetworkInterface> nifs = Pcaps.findAllDevs();
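  As a libpcap-free cross-check (a sketch; this uses only the JDK, so it
  needs no root and only tells you what the plain JVM sees):

```java
import java.net.NetworkInterface;
import java.util.Collections;

public class ListNifs {
    public static void main(String[] args) throws Exception {
        // Enumerate interfaces at the JDK level. Unlike Pcaps.findAllDevs(),
        // this needs no special privileges, so if these interfaces show up
        // but pcap4j finds none, the difference points at capture permissions.
        for (NetworkInterface nif
                : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            System.out.println(nif.getName() + " up=" + nif.isUp());
        }
    }
}
```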

  dean

  Dean Wampler, Ph.D.
  Author: Programming Scala, 2nd Edition (O'Reilly)
  Typesafe
  @deanwampler
  http://polyglotprogramming.com

  On Mon, Apr 27, 2015 at 4:03 AM, Hai Shan Wu <wuh...@cn.ibm.com> wrote:
        Hi Everyone

        We use pcap4j to capture network packets and then use spark
        streaming to analyze captured packets. However, we met a strange
        problem.

        If we run our application on spark locally (for example,
        spark-submit --master local[2]), then the program runs
        successfully.

        If we run our application on a spark standalone cluster, the
        program reports that NO NIFs were found.

        I also attach two test files for clarification.

        Can anyone help with this? Thanks in advance!


        (See attached file: PcapReceiver.java)(See attached file:
        TestPcapSpark.java)

        Best regards,

        - Haishan

        Haishan Wu (吴海珊)

        IBM Research - China
        Tel: 86-10-58748508
        Fax: 86-10-58748330
        Email: wuh...@cn.ibm.com
        Lotus Notes: Hai Shan Wu/China/IBM






