Hi,

I am no expert, but I have a small application working with Spark and
Cassandra.

I faced similar issues when we were deploying our cluster on EC2
instances, with some machines on a public network and some on a private
one.

This looks like a similar issue: you are trying to connect to
"10.34.224.249", which is a private IP, but the address in the error
message is a public IP, "30.247.7.8".
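
If it helps to confirm where that public address comes from, here is a
minimal sketch (assuming the plain DataStax Java driver is on your
classpath; the class name is made up) that connects directly and prints
the addresses the cluster advertises back to clients:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;

// Hypothetical standalone check: connect with the plain Java driver and
// print the addresses the Cassandra nodes advertise. If 30.247.7.8 shows
// up here, the mismatch comes from the cluster's own configuration, not
// from Spark.
public class ListCassandraHosts {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("10.34.224.249") // the private IP from your config
                .build();
        try {
            for (Host host : cluster.getMetadata().getAllHosts()) {
                System.out.println(host.getAddress());
            }
        } finally {
            cluster.close();
        }
    }
}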

If you want to connect via the public IP, make sure your network
settings allow you to reach the Cassandra cluster's public IP on port
9042 (the native protocol port).
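
As a quick way to test that from the machine running your driver, a
plain socket check against port 9042 will tell you whether the port is
reachable at all (a rough sketch using only the JDK, with the public IP
taken from your error message):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Quick reachability check: can this machine open a TCP connection to
// the Cassandra native protocol port (9042) on the public IP from the
// error message?
public class PortCheck {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket();
        try {
            socket.connect(new InetSocketAddress("30.247.7.8", 9042), 5000); // 5 second timeout
            System.out.println("Port 9042 is reachable");
        } catch (IOException e) {
            System.out.println("Cannot reach port 9042: " + e.getMessage());
        } finally {
            socket.close();
        }
    }
}

If that check fails, it points to a security group / firewall issue
rather than anything in your Spark code.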

Hope this helps!!

Thanks
Ankur

On Thu, Jan 29, 2015 at 1:33 PM, oxpeople <vincent.y....@bankofamerica.com>
wrote:

> I have the following code to set up the Cassandra connection:
>
>    SparkConf conf = new SparkConf(true);
>    conf.setAppName("Java cassandra R&D");
>    conf.set("spark.cassandra.connection.host", "10.34.224.249");
>
> but the log shows it trying to connect to a different host.
>
>
> 15/01/29 16:16:42 INFO NettyBlockTransferService: Server created on 62002
> 15/01/29 16:16:42 INFO BlockManagerMaster: Trying to register BlockManager
> 15/01/29 16:16:42 INFO BlockManagerMasterActor: Registering block manager F6C3BE5F7042A.corp.com:62002 with 975.5 MB RAM, BlockManagerId(<driver>, F6C3BE5F7042A.corp.com, 62002)
> 15/01/29 16:16:42 INFO BlockManagerMaster: Registered BlockManager
> 15/01/29 16:16:42 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
> 15/01/29 16:16:44 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkexecu...@f6c3be5f7042a.corp.com:62064/user/Executor#-184690467] with ID 0
> 15/01/29 16:16:44 INFO BlockManagerMasterActor: Registering block manager F6C3BE5F7042A.corp.com:62100 with 265.4 MB RAM, BlockManagerId(0, F6C3BE5F7042A.corp, 62100)
> Exception in thread "main" java.io.IOException: Failed to open native connection to Cassandra at {30.247.7.8}:9042
>         at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:174)
>         at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:160)
>         at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$2.apply(CassandraConnector.scala:160)
>         at com.datastax.spark.connector.cql.RefCountedCache.createNewValueAndKeys(RefCountedCache.scala:36)
>         at com.datastax.spark.connector.cql.RefCountedCache.acquire(RefCountedCache.scala:61)
>         at com.datastax.spark.connector.cql.CassandraConnector.openSession(CassandraConnector.scala:71)
>         at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:97)
>         at com.datastax.spark.connector.cql.CassandraConnector.withClusterDo(CassandraConnector.scala:108)
>         at com.datastax.spark.connector.cql.Schema$.fromCassandra(Schema.scala:134)
>         at com.datastax.spark.connector.rdd.CassandraRDD.tableDef$lzycompute(CassandraRDD.scala:240)
>         at com.datastax.spark.connector.rdd.CassandraRDD.tableDef(CassandraRDD.scala:239)
>         at com.datastax.spark.connector.rdd.CassandraRDD.verify$lzycompute(CassandraRDD.scala:298)
>         at com.datastax.spark.connector.rdd.CassandraRDD.verify(CassandraRDD.scala:295)
>         at com.datastax.spark.connector.rdd.CassandraRDD.getPartitions(CassandraRDD.scala:324)
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
>         at scala.Option.getOrElse(Option.scala:120)
>         at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1328)
>         at org.apache.spark.rdd.RDD.collect(RDD.scala:780)
>         at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:309)
>         at org.apache.spark.api.java.JavaPairRDD.collect(JavaPairRDD.scala:45)
>         at com.bof.spark.cassandra.JavaSparkCassandraTest.run(JavaSparkCassandraTest.java:41)
>         at com.bof.spark.cassandra.JavaSparkCassandraTest.main(JavaSparkCassandraTest.java:70)
> Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /30.247.7.8:9042 (com.datastax.driver.core.TransportException: [/30.247.7.8:9042] Cannot connect))
>         at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:220)
>         at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:78)
>         at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1231)
>         at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:334)
>         at com.datastax.spark.connector.cql.CassandraConnector$.com$datastax$spark$connector$cql$CassandraConnector$$createSession(CassandraConnector.scala:167)
>         ... 23 more
>
> Any help is appreciated.
>
>
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Connecting-Cassandra-by-unknow-host-tp21424.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
>
>
