Thanks vkulichenko,
I will use TcpDiscoveryVmIpFinder instead of multicast for Ignite, but I
faced another issue.

My integration mode is embedded-mode Ignite on Spark.

1) The Scala code in my IDEA project is as below:
import java.util

import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spark.IgniteContext
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder

val spi = new TcpDiscoverySpi()
val ipFinder = new TcpDiscoveryVmIpFinder()
// Set initial IP addresses.
// Note that you can optionally specify a port or a port range.
ipFinder.setAddresses(util.Arrays.asList("172.16.186.200",
  "172.16.186.200:47500..47509"))
spi.setIpFinder(ipFinder)

val cfg = new IgniteConfiguration()
// Override the default discovery SPI.
cfg.setDiscoverySpi(spi)

// standalone = false, i.e. embedded mode: Ignite nodes start inside the
// Spark executors.
val igniteContext = new IgniteContext[Integer, Integer](sc, () => cfg, false)


2) I ran spark-submit to submit my jar to the YARN cluster, but got the
following error message:

spark-submit --driver-memory 2G --class com.ignite.testIgniteEmbedRDD
--master yarn --executor-cores 2 --executor-memory 1000m --num-executors 2
--conf spark.rdd.compress=false --conf spark.shuffle.compress=false --conf
spark.broadcast.compress=false
/root/limu/ignite/spark-project-jar-with-dependencies.jar
16/08/21 03:15:13 INFO spark.SparkContext: Running Spark version 1.6.1
16/08/21 03:15:14 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
16/08/21 03:15:14 INFO spark.SecurityManager: Changing view acls to: root
16/08/21 03:15:14 INFO spark.SecurityManager: Changing modify acls to: root
16/08/21 03:15:14 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(root); users with modify permissions: Set(root)
16/08/21 03:15:15 INFO util.Utils: Successfully started service
'sparkDriver' on port 38970.
16/08/21 03:15:15 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/08/21 03:15:15 INFO Remoting: Starting remoting
16/08/21 03:15:16 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkDriverActorSystem@172.16.186.200:34025]
16/08/21 03:15:16 INFO util.Utils: Successfully started service
'sparkDriverActorSystem' on port 34025.
16/08/21 03:15:16 INFO spark.SparkEnv: Registering MapOutputTracker
16/08/21 03:15:16 INFO spark.SparkEnv: Registering BlockManagerMaster
16/08/21 03:15:16 INFO storage.DiskBlockManager: Created local directory at
/tmp/blockmgr-ffc108d3-da0d-4ff4-a910-8ff2a66d0463
16/08/21 03:15:16 INFO storage.MemoryStore: MemoryStore started with
capacity 1259.8 MB
16/08/21 03:15:16 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/08/21 03:15:17 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/08/21 03:15:17 INFO server.AbstractConnector: Started
SelectChannelConnector@0.0.0.0:4040
16/08/21 03:15:17 INFO util.Utils: Successfully started service 'SparkUI' on
port 4040.
16/08/21 03:15:17 INFO ui.SparkUI: Started SparkUI at
http://172.16.186.200:4040
16/08/21 03:15:17 INFO spark.HttpFileServer: HTTP File server directory is
/tmp/spark-6406a8e6-0a53-4925-a17e-158ce3b4aa6e/httpd-292e03c2-1805-4e68-916f-acd4eef0c265
16/08/21 03:15:17 INFO spark.HttpServer: Starting HTTP Server
16/08/21 03:15:17 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/08/21 03:15:17 INFO server.AbstractConnector: Started
SocketConnector@0.0.0.0:36680
16/08/21 03:15:17 INFO util.Utils: Successfully started service 'HTTP file
server' on port 36680.
16/08/21 03:15:18 INFO spark.SparkContext: Added JAR
file:/root/limu/ignite/spark-project-jar-with-dependencies.jar at
http://172.16.186.200:36680/jars/spark-project-jar-with-dependencies.jar
with timestamp 1471774518173
16/08/21 03:15:18 INFO client.RMProxy: Connecting to ResourceManager at
sparkup1/172.16.186.200:8032
16/08/21 03:15:18 INFO yarn.Client: Requesting a new application from
cluster with 3 NodeManagers
16/08/21 03:15:18 INFO yarn.Client: Verifying our application has not
requested more than the maximum memory capability of the cluster (8192 MB
per container)
16/08/21 03:15:18 INFO yarn.Client: Will allocate AM container, with 896 MB
memory including 384 MB overhead
16/08/21 03:15:18 INFO yarn.Client: Setting up container launch context for
our AM
16/08/21 03:15:18 INFO yarn.Client: Setting up the launch environment for
our AM container
16/08/21 03:15:18 INFO yarn.Client: Preparing resources for our AM container
16/08/21 03:15:19 INFO yarn.Client: Uploading resource
file:/usr/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar
->
hdfs://sparkup1:9000/user/root/.sparkStaging/application_1471720446331_0026/spark-assembly-1.6.1-hadoop2.6.0.jar
16/08/21 03:15:25 INFO yarn.Client: Uploading resource
file:/tmp/spark-6406a8e6-0a53-4925-a17e-158ce3b4aa6e/__spark_conf__3046103417669035498.zip
->
hdfs://sparkup1:9000/user/root/.sparkStaging/application_1471720446331_0026/__spark_conf__3046103417669035498.zip
16/08/21 03:15:25 INFO spark.SecurityManager: Changing view acls to: root
16/08/21 03:15:25 INFO spark.SecurityManager: Changing modify acls to: root
16/08/21 03:15:25 INFO spark.SecurityManager: SecurityManager:
authentication disabled; ui acls disabled; users with view permissions:
Set(root); users with modify permissions: Set(root)
16/08/21 03:15:25 INFO yarn.Client: Submitting application 26 to
ResourceManager
16/08/21 03:15:25 INFO impl.YarnClientImpl: Submitted application
application_1471720446331_0026
16/08/21 03:15:26 INFO yarn.Client: Application report for
application_1471720446331_0026 (state: ACCEPTED)
16/08/21 03:15:26 INFO yarn.Client:
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: default
         start time: 1471774525522
         final status: UNDEFINED
         tracking URL:
http://sparkup1:8088/proxy/application_1471720446331_0026/
         user: root
16/08/21 03:15:27 INFO yarn.Client: Application report for
application_1471720446331_0026 (state: ACCEPTED)
16/08/21 03:15:28 INFO yarn.Client: Application report for
application_1471720446331_0026 (state: ACCEPTED)
16/08/21 03:15:29 INFO yarn.Client: Application report for
application_1471720446331_0026 (state: ACCEPTED)
16/08/21 03:15:30 INFO yarn.Client: Application report for
application_1471720446331_0026 (state: ACCEPTED)
16/08/21 03:15:31 INFO yarn.Client: Application report for
application_1471720446331_0026 (state: ACCEPTED)
16/08/21 03:15:32 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint:
ApplicationMaster registered as NettyRpcEndpointRef(null)
16/08/21 03:15:32 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter.
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS
-> sparkup1, PROXY_URI_BASES ->
http://sparkup1:8088/proxy/application_1471720446331_0026),
/proxy/application_1471720446331_0026
16/08/21 03:15:32 INFO ui.JettyUtils: Adding filter:
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/08/21 03:15:32 INFO yarn.Client: Application report for
application_1471720446331_0026 (state: RUNNING)
16/08/21 03:15:32 INFO yarn.Client:
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: 172.16.186.201
         ApplicationMaster RPC port: 0
         queue: default
         start time: 1471774525522
         final status: UNDEFINED
         tracking URL:
http://sparkup1:8088/proxy/application_1471720446331_0026/
         user: root
16/08/21 03:15:32 INFO cluster.YarnClientSchedulerBackend: Application
application_1471720446331_0026 has started running.
16/08/21 03:15:32 INFO util.Utils: Successfully started service
'org.apache.spark.network.netty.NettyBlockTransferService' on port 50934.
16/08/21 03:15:32 INFO netty.NettyBlockTransferService: Server created on
50934
16/08/21 03:15:32 INFO storage.BlockManagerMaster: Trying to register
BlockManager
16/08/21 03:15:32 INFO storage.BlockManagerMasterEndpoint: Registering block
manager 172.16.186.200:50934 with 1259.8 MB RAM, BlockManagerId(driver,
172.16.186.200, 50934)
16/08/21 03:15:32 INFO storage.BlockManagerMaster: Registered BlockManager
16/08/21 03:15:41 INFO cluster.YarnClientSchedulerBackend: Registered
executor NettyRpcEndpointRef(null) (sparkup2:48699) with ID 1
16/08/21 03:15:41 INFO storage.BlockManagerMasterEndpoint: Registering block
manager sparkup2:42462 with 500.0 MB RAM, BlockManagerId(1, sparkup2, 42462)
16/08/21 03:15:42 INFO cluster.YarnClientSchedulerBackend: Registered
executor NettyRpcEndpointRef(null) (sparkup3:34689) with ID 2
16/08/21 03:15:42 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend
is ready for scheduling beginning after reached minRegisteredResourcesRatio:
0.8
16/08/21 03:15:42 INFO storage.BlockManagerMasterEndpoint: Registering block
manager sparkup3:48251 with 500.0 MB RAM, BlockManagerId(2, sparkup3, 48251)
============> (the failure starts here)
16/08/21 03:15:42 INFO spark.IgniteContext: Will start Ignite nodes on 2
workers
Exception in thread "main" org.apache.spark.SparkException: Task not
serializable
        at
org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
        at
org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
        at
org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
        at org.apache.spark.SparkContext.clean(SparkContext.scala:2055)
        at
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:919)
        at
org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:918)
        at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
        at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
        at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:918)
        at
org.apache.ignite.spark.IgniteContext.<init>(IgniteContext.scala:55)
        at com.ignite.testIgniteEmbedRDD$.main(testIgniteEmbedRDD.scala:33)
        at com.ignite.testIgniteEmbedRDD.main(testIgniteEmbedRDD.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at
org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.NotSerializableException:
org.apache.ignite.configuration.IgniteConfiguration
Serialization stack:
        - object not serializable (class:
org.apache.ignite.configuration.IgniteConfiguration, value:
IgniteConfiguration [gridName=null, pubPoolSize=16, callbackPoolSize=16,
sysPoolSize=16, mgmtPoolSize=4, igfsPoolSize=1, utilityCachePoolSize=16,
utilityCacheKeepAliveTime=10000, marshCachePoolSize=16,
marshCacheKeepAliveTime=10000, p2pPoolSize=2, ggHome=null, ggWork=null,
mbeanSrv=null, nodeId=null, marsh=null, marshLocJobs=false, daemon=false,
p2pEnabled=false, netTimeout=5000, sndRetryDelay=1000, sndRetryCnt=3,
clockSyncSamples=8, clockSyncFreq=120000, metricsHistSize=10000,
metricsUpdateFreq=2000, metricsExpTime=9223372036854775807,
discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=0, ackTimeout=0,
reconCnt=10, maxAckTimeout=600000, forceSrvMode=false,
clientReconnectDisabled=false], segPlc=STOP, segResolveAttempts=2,
waitForSegOnStart=true, allResolversPassReq=true, segChkFreq=10000,
commSpi=null, evtSpi=null, colSpi=null, deploySpi=null, swapSpaceSpi=null,
indexingSpi=null, addrRslvr=null, clientMode=null,
rebalanceThreadPoolSize=1,
txCfg=org.apache.ignite.configuration.TransactionConfiguration@5dbef3d2,
cacheSanityCheckEnabled=true, discoStartupDelay=60000, deployMode=SHARED,
p2pMissedCacheSize=100, locHost=null, timeSrvPortBase=31100,
timeSrvPortRange=100, failureDetectionTimeout=10000, metricsLogFreq=60000,
hadoopCfg=null,
connectorCfg=org.apache.ignite.configuration.ConnectorConfiguration@23799013,
odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration
[seqReserveSize=1000, cacheMode=PARTITIONED, backups=0], classLdr=null,
sslCtxFactory=null, platformCfg=null, binaryCfg=null,
lateAffAssignment=true])
        - field (class: com.ignite.testIgniteEmbedRDD$$anonfun$1, name:
cfg$1, type: class org.apache.ignite.configuration.IgniteConfiguration)
        - object (class com.ignite.testIgniteEmbedRDD$$anonfun$1,
<function0>)
        - field (class: org.apache.ignite.spark.Once, name: clo, type:
interface scala.Function0)
        - object (class org.apache.ignite.spark.Once,
org.apache.ignite.spark.Once@60a1d043)
        - field (class: org.apache.ignite.spark.IgniteContext, name: cfgClo,
type: class org.apache.ignite.spark.Once)
        - object (class org.apache.ignite.spark.IgniteContext,
org.apache.ignite.spark.IgniteContext@16977784)
        - field (class: org.apache.ignite.spark.IgniteContext$$anonfun$2,
name: $outer, type: class org.apache.ignite.spark.IgniteContext)
        - object (class org.apache.ignite.spark.IgniteContext$$anonfun$2,
<function1>)
        at
org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
        at
org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
        at
org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101)
        at
org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301)
        ... 21 more
16/08/21 03:15:43 INFO spark.SparkContext: Invoking stop() from shutdown
hook
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/metrics/json,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/api,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/static,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/executors/json,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/executors,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/environment/json,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/environment,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/storage/json,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/storage,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/pool,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/stage,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages/json,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/stages,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/jobs/job,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/jobs/json,null}
16/08/21 03:15:43 INFO handler.ContextHandler: stopped
o.s.j.s.ServletContextHandler{/jobs,null}
16/08/21 03:15:43 INFO ui.SparkUI: Stopped Spark web UI at
http://172.16.186.200:4040
16/08/21 03:15:43 INFO cluster.YarnClientSchedulerBackend: Interrupting
monitor thread
16/08/21 03:15:43 INFO cluster.YarnClientSchedulerBackend: Shutting down all
executors
16/08/21 03:15:43 INFO cluster.YarnClientSchedulerBackend: Asking each
executor to shut down
16/08/21 03:15:43 INFO cluster.YarnClientSchedulerBackend: Stopped
16/08/21 03:15:43 INFO spark.MapOutputTrackerMasterEndpoint:
MapOutputTrackerMasterEndpoint stopped!
16/08/21 03:15:43 INFO storage.MemoryStore: MemoryStore cleared
16/08/21 03:15:43 INFO storage.BlockManager: BlockManager stopped
16/08/21 03:15:43 INFO storage.BlockManagerMaster: BlockManagerMaster
stopped
16/08/21 03:15:43 INFO
scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:
OutputCommitCoordinator stopped!
16/08/21 03:15:43 INFO spark.SparkContext: Successfully stopped SparkContext
16/08/21 03:15:43 INFO util.ShutdownHookManager: Shutdown hook called
16/08/21 03:15:43 INFO util.ShutdownHookManager: Deleting directory
/tmp/spark-6406a8e6-0a53-4925-a17e-158ce3b4aa6e
16/08/21 03:15:43 INFO remote.RemoteActorRefProvider$RemotingTerminator:
Shutting down remote daemon.
16/08/21 03:15:43 INFO remote.RemoteActorRefProvider$RemotingTerminator:
Remote daemon shut down; proceeding with flushing remote transports.
16/08/21 03:15:43 INFO util.ShutdownHookManager: Deleting directory
/tmp/spark-6406a8e6-0a53-4925-a17e-158ce3b4aa6e/httpd-292e03c2-1805-4e68-916f-acd4eef0c265
[root@sparkup1 libs]# 



3) Can you please tell me how to integrate Ignite with Spark in embedded
mode from an IDEA project? Thanks!
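
From the stack trace it looks like the closure () => cfg captures the
IgniteConfiguration that was built on the driver, and that class is not
serializable. Would it help to build the configuration inside the closure
instead, so the closure captures nothing from the driver and each executor
constructs its own configuration? A rough sketch of what I mean (same
addresses as above, untested):

val igniteContext = new IgniteContext[Integer, Integer](sc, () => {
  // Everything is created inside the closure, so no non-serializable
  // driver-side state is captured.
  val ipFinder = new TcpDiscoveryVmIpFinder()
  ipFinder.setAddresses(util.Arrays.asList("172.16.186.200",
    "172.16.186.200:47500..47509"))
  val spi = new TcpDiscoverySpi()
  spi.setIpFinder(ipFinder)
  new IgniteConfiguration().setDiscoverySpi(spi)
}, false)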



