How to randomise data on spark

2015-03-25 Thread critikaled
How do I randomise data across all partitions and then merge them into one?
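For concreteness, roughly what I have in mind, as a minimal sketch (`data` is a placeholder for whatever RDD this gets applied to, and it assumes a full shuffle is acceptable):

import org.apache.spark.SparkContext._   // brings the RDD implicits into scope in Spark 1.x
import scala.util.Random

// `data` stands for any existing RDD, e.g. an RDD[String]
val shuffled = data
  .map(x => (Random.nextDouble(), x))   // tag every row with a random key
  .sortByKey()                          // full shuffle: rows land in random order across partitions
  .map(_._2)                            // drop the random key again
  .coalesce(1)                          // merge the shuffled rows into a single partition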



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-randomise-data-on-spark-tp2.html



Kafka Version Update 0.8.2 status?

2015-02-10 Thread critikaled
When can we expect support for the latest Kafka release and for Scala 2.11 in Spark Streaming?



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Kafka-Version-Update-0-8-2-status-tp21573.html



Re: Spark 1.1 (slow, working), Spark 1.2 (fast, freezing)

2015-01-21 Thread critikaled
I'm also facing the same issue. Is this a bug?



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-1-slow-working-Spark-1-2-fast-freezing-tp21278p21283.html



Re: Does Spark automatically run different stages concurrently when possible?

2015-01-19 Thread critikaled
Hi John and David,
I tried this to run them concurrently:

List(RDD1, RDD2, ...).par.foreach { rdd =>
  rdd.collect().foreach(println)
}

This registered the jobs successfully, but the parallelism of the stages was limited: sometimes it ran four of them at once and sometimes only one, which was not consistent.
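For comparison, a sketch of a Future-based variant I also want to try (the thread-pool size and the RDD element type below are arbitrary choices, and how many jobs actually run in parallel still depends on free cores and on the scheduler mode, e.g. spark.scheduler.mode=FAIR):

import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration.Duration
import org.apache.spark.rdd.RDD

// one thread per concurrently submitted job; 4 is an arbitrary pool size
implicit val ec: ExecutionContext =
  ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(4))

def runAll(rdds: Seq[RDD[String]]): Unit = {
  val jobs = rdds.map { rdd =>
    Future {
      // each collect() is a separate Spark job submitted from its own thread
      rdd.collect().foreach(println)
    }
  }
  // block until every job has finished
  jobs.foreach(job => Await.result(job, Duration.Inf))
}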
  



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Does-Spark-automatically-run-different-stages-concurrently-when-possible-tp21075p21240.html



Re: Does Spark automatically run different stages concurrently when possible?

2015-01-19 Thread critikaled
+1, I too need to know.



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Does-Spark-automatically-run-different-stages-concurrently-when-possible-tp21075p21233.html



is there documentation on spark sql catalyst?

2015-01-19 Thread critikaled
Where can I find good documentation on Spark SQL's Catalyst optimizer?



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/is-there-documentation-on-spark-sql-catalyst-tp21232.html



Spark 1.2.0 ec2 launch script hadoop native libraries not found warning

2015-01-08 Thread critikaled
Hi,
I'm facing this warning on a Spark EC2 cluster: when a job is submitted it says the native Hadoop libraries cannot be found. I have checked spark-env.sh and all the folders on the path, but I can't locate the problem even though the folders do contain the libraries. Are there any performance drawbacks to falling back on the built-in Java classes? Is anybody else facing this problem? BTW I'm using spark-1.2.0, hadoop major version = 2, scala version = 2.10.4.




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-1-2-0-ec2-launch-script-hadoop-native-libraries-not-found-warning-tp21030.html



What are all the Hadoop Major Versions in spark-ec2 script?

2014-12-29 Thread critikaled
So what should the value of --hadoop-major-version be for the following Hadoop versions?
Hadoop1.x is 1
CDH4
Hadoop2.3
Hadoop2.4
MapR 3.x
MapR 4.x



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/What-are-all-the-Hadoop-Major-Versions-in-spark-ec2-script-tp20882.html



How to set up spark sql on ec2

2014-12-29 Thread critikaled
How do I make the spark-ec2 script install Hive and Spark SQL on EC2? When I run the spark-ec2 script, go to bin, run ./spark-sql and execute a query, I get "connection refused" on master:9000. What else has to be configured for this?



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-set-up-spark-sql-on-ec2-tp20881.html



Re: Serious issues with class not found exceptions of classes in uber jar

2014-12-26 Thread critikaled
Will this output from stderr help?


Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties
14/12/26 10:13:44 INFO CoarseGrainedExecutorBackend: Registered signal
handlers for [TERM, HUP, INT]
14/12/26 10:13:44 WARN NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
14/12/26 10:13:44 INFO SecurityManager: Changing view acls to: root
14/12/26 10:13:44 INFO SecurityManager: Changing modify acls to: root
14/12/26 10:13:44 INFO SecurityManager: SecurityManager: authentication
disabled; ui acls disabled; users with view permissions: Set(root); users
with modify permissions: Set(root)
14/12/26 10:13:45 INFO Slf4jLogger: Slf4jLogger started
14/12/26 10:13:45 INFO Remoting: Starting remoting
14/12/26 10:13:45 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://driverpropsfetc...@ip-172-31-18-138.ap-southeast-1.compute.internal:44461]
14/12/26 10:13:45 INFO Utils: Successfully started service
'driverPropsFetcher' on port 44461.
14/12/26 10:13:45 INFO SecurityManager: Changing view acls to: root
14/12/26 10:13:45 INFO SecurityManager: Changing modify acls to: root
14/12/26 10:13:45 INFO SecurityManager: SecurityManager: authentication
disabled; ui acls disabled; users with view permissions: Set(root); users
with modify permissions: Set(root)
14/12/26 10:13:45 INFO RemoteActorRefProvider$RemotingTerminator: Shutting
down remote daemon.
14/12/26 10:13:45 INFO RemoteActorRefProvider$RemotingTerminator: Remote
daemon shut down; proceeding with flushing remote transports.
14/12/26 10:13:45 INFO Slf4jLogger: Slf4jLogger started
14/12/26 10:13:45 INFO RemoteActorRefProvider$RemotingTerminator: Remoting
shut down.
14/12/26 10:13:45 INFO Remoting: Starting remoting
14/12/26 10:13:45 INFO Remoting: Remoting started; listening on addresses
:[akka.tcp://sparkexecu...@ip-172-31-18-138.ap-southeast-1.compute.internal:60600]
14/12/26 10:13:45 INFO Utils: Successfully started service 'sparkExecutor'
on port 60600.
14/12/26 10:13:45 INFO CoarseGrainedExecutorBackend: Connecting to driver:
akka.tcp://sparkdri...@ip-172-31-18-138.ap-southeast-1.compute.internal:51708/user/CoarseGrainedScheduler
14/12/26 10:13:45 INFO WorkerWatcher: Connecting to worker
akka.tcp://sparkwor...@ip-172-31-18-138.ap-southeast-1.compute.internal:50757/user/Worker
14/12/26 10:13:45 INFO WorkerWatcher: Successfully connected to
akka.tcp://sparkwor...@ip-172-31-18-138.ap-southeast-1.compute.internal:50757/user/Worker
14/12/26 10:13:45 INFO CoarseGrainedExecutorBackend: Successfully registered
with driver
14/12/26 10:13:45 INFO SecurityManager: Changing view acls to: root
14/12/26 10:13:45 INFO SecurityManager: Changing modify acls to: root
14/12/26 10:13:45 INFO SecurityManager: SecurityManager: authentication
disabled; ui acls disabled; users with view permissions: Set(root); users
with modify permissions: Set(root)
14/12/26 10:13:45 INFO AkkaUtils: Connecting to MapOutputTracker:
akka.tcp://sparkdri...@ip-172-31-18-138.ap-southeast-1.compute.internal:51708/user/MapOutputTracker
14/12/26 10:13:45 INFO AkkaUtils: Connecting to BlockManagerMaster:
akka.tcp://sparkdri...@ip-172-31-18-138.ap-southeast-1.compute.internal:51708/user/BlockManagerMaster
14/12/26 10:13:45 INFO DiskBlockManager: Created local directory at
/mnt/spark/spark-local-20141226101345-b3c7
14/12/26 10:13:45 INFO DiskBlockManager: Created local directory at
/mnt2/spark/spark-local-20141226101345-cad1
14/12/26 10:13:45 INFO MemoryStore: MemoryStore started with capacity 265.4
MB
14/12/26 10:13:46 INFO NettyBlockTransferService: Server created on 51895
14/12/26 10:13:46 INFO BlockManagerMaster: Trying to register BlockManager
14/12/26 10:13:46 INFO BlockManagerMaster: Registered BlockManager
14/12/26 10:13:46 INFO AkkaUtils: Connecting to HeartbeatReceiver:
akka.tcp://sparkdri...@ip-172-31-18-138.ap-southeast-1.compute.internal:51708/user/HeartbeatReceiver
14/12/26 10:13:46 INFO CoarseGrainedExecutorBackend: Got assigned task 0
14/12/26 10:13:46 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
14/12/26 10:13:46 INFO Executor: Fetching
file:/root/persistent-hdfs/jars/play-json_2.10-2.4.0-M2.jar with timestamp
1419588821693
14/12/26 10:13:46 INFO Utils: Copying
/root/persistent-hdfs/jars/play-json_2.10-2.4.0-M2.jar to
/mnt/spark/-17002596021419588821693_cache
14/12/26 10:13:46 INFO Executor: Adding
file:/root/spark/work/app-20141226101341-/0/./play-json_2.10-2.4.0-M2.jar
to class loader
14/12/26 10:13:46 INFO Executor: Fetching
file:/root/persistent-hdfs/jars/spark-cassandra-connector_2.10-1.1.0.jar
with timestamp 1419588821694
14/12/26 10:13:46 INFO Utils: Copying
/root/persistent-hdfs/jars/spark-cassandra-connector_2.10-1.1.0.jar to
/mnt/spark/-16568315171419588821694_cache
14/12/26 10:13:46 INFO Executor: Adding
file:/root/spark/work/app-20141226101341-/0/./spark-cassandra-connector_2.10-1.1.0.jar
to class loader
14/12/26 10:13:46 INFO Executor: Fetching
file:/root/persiste

Serious issues with class not found exceptions of classes in uber jar

2014-12-26 Thread critikaled
Hi, I'm facing serious issues with my Spark application not recognizing the classes in the uber jar: sometimes it recognizes them and sometimes it does not. Even adding the external jars using setJars doesn't always help. Is anyone else facing a similar issue? I'm using the latest 1.2.0 version.
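For reference, this is roughly how the jars are being passed, as a sketch (the app name, master URL and jar path below are placeholders, not my real values):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("uber-jar-test")                       // placeholder app name
  .setMaster("spark://master:7077")                  // placeholder master URL
  .setJars(Seq("/path/to/my-app-assembly-1.0.jar"))  // placeholder path to the uber (assembly) jar
val sc = new SparkContext(conf)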



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Serious-issues-with-class-not-found-exceptions-of-classes-in-uber-jar-tp20863.html



Re: How to insert complex types like map<string, map<string, int>> in spark sql

2014-11-25 Thread critikaled
Exactly, that seems to be the problem. I'll have to wait for the next release.



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-complex-types-like-map-string-map-string-int-in-spark-sql-tp19603p19734.html



Re: How to insert complex types like map<string, map<string, int>> in spark sql

2014-11-25 Thread critikaled
https://github.com/apache/spark/blob/84d79ee9ec47465269f7b0a7971176da93c96f3f/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala

It doesn't look like Spark SQL supports nested complex types right now.



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-complex-types-like-map-string-map-string-int-in-spark-sql-tp19603p19730.html



Re: How to insert complex types like map<string, map<string, int>> in spark sql

2014-11-24 Thread critikaled
Thanks for the reply, Michael. Here is the stack trace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in
stage 0.0 failed 1 times, most recent failure: Lost task 3.0 in stage 0.0
(TID 3, localhost): scala.MatchError: MapType(StringType,StringType,true)
(of class org.apache.spark.sql.catalyst.types.MapType)
[info]
org.apache.spark.sql.catalyst.expressions.Cast.cast$lzycompute(Cast.scala:247)
[info]
org.apache.spark.sql.catalyst.expressions.Cast.cast(Cast.scala:247)
[info]
org.apache.spark.sql.catalyst.expressions.Cast.eval(Cast.scala:263)
[info]
org.apache.spark.sql.catalyst.expressions.Alias.eval(namedExpressions.scala:84)
[info]
org.apache.spark.sql.catalyst.expressions.InterpretedMutableProjection.apply(Projection.scala:66)
[info]
org.apache.spark.sql.catalyst.expressions.InterpretedMutableProjection.apply(Projection.scala:50)
[info] scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
[info] scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
[info]
org.apache.spark.sql.hive.execution.InsertIntoHiveTable.org$apache$spark$sql$hive$execution$InsertIntoHiveTable$$writeToFile$1(InsertIntoHiveTable.scala:149)
[info]
org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$1.apply(InsertIntoHiveTable.scala:158)
[info]
org.apache.spark.sql.hive.execution.InsertIntoHiveTable$$anonfun$saveAsHiveFile$1.apply(InsertIntoHiveTable.scala:158)
[info]
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
[info] org.apache.spark.scheduler.Task.run(Task.scala:54)
[info]
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
[info]
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info]
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] java.lang.Thread.run(Thread.java:745)
[info] Driver stacktrace:
[info]   at
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
[info]   at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
[info]   at
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
[info]   at
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
[info]   at
scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
[info]   at
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
[info]   at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
[info]   at
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-complex-types-like-map-string-map-string-int-in-spark-sql-tp19603p19728.html



How to insert complex types like map<string, map<string, int>> in spark sql

2014-11-23 Thread critikaled
Hi,
I am trying to insert a particular set of data from an RDD into a Hive table. I have a Map[String, Map[String, Int]] in Scala which I want to insert into a table of map<string, map<string, int>>. I was able to create the table, but while inserting it says:
scala.MatchError: MapType(StringType,MapType(StringType,IntegerType,true),true)
(of class org.apache.spark.sql.catalyst.types.MapType)
Can anyone help me with this?
Thanks in advance.
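Roughly what I'm attempting, as a minimal sketch (the table name, column names and the sample data below are placeholders, and `context` stands for the existing SparkContext):

import org.apache.spark.sql.hive.HiveContext

// placeholder case class mirroring the nested map shape
case class Entry(id: String, signals: Map[String, Map[String, Int]])

val hiveContext = new HiveContext(context)
import hiveContext._

// target table with the nested map column
hiveContext.sql("CREATE TABLE IF NOT EXISTS nested_kv " +
  "(id STRING, signals MAP<STRING, MAP<STRING, INT>>)")

// sample data of the same shape, registered as a staging table
val data = context.parallelize(Seq(Entry("a", Map("clicks" -> Map("home" -> 3)))))
data.registerTempTable("staging")

// this is the statement that fails with the MapType MatchError
hiveContext.sql("INSERT INTO TABLE nested_kv SELECT id, signals FROM staging")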



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-insert-complex-types-like-map-string-map-string-int-in-spark-sql-tp19603.html



How to retrieve the spark context when hiveContext is used in spark streaming

2014-10-28 Thread critikaled
Hi,

I'm trying to get hold of the SparkContext from a HiveContext or StreamingContext. I have two pieces of code, one in core Spark and one in Spark Streaming: the plain Spark code with Hive gives me the context, while the Spark Streaming code with Hive prints null. Please help me figure out how to make this code work.

thanks in advance

// core spark code, which gives the context
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.{SparkConf, SparkContext}

object Trail extends App {
  val conf = new SparkConf(false)
    .setMaster("local[*]")
    .setAppName("Spark Streamer")
    .set("spark.logConf", "true")
    .set("spark.cassandra.connection.host", "127.0.0.1")
    .set("spark.cleaner.ttl", "300")

  val context = new SparkContext(conf)

  val hiveContext = new HiveContext(context)

  import com.dgm.Trail.hiveContext._

  context textFile "logs/log1.txt" flatMap { data =>
val Array(id, signals) = data split '|'
signals split '&' map { signal =>
  val Array(key, value) = signal split '='
  Signal(id, key, value)
}
  } registerTempTable "signals"

  hiveContext cacheTable "signals"

  val signalRows = hiveContext sql "select id from signals where key='id' and value='123'" map rts cache()

  signalRows.foreach { x =>
println(signalRows.context)
  }

}


// spark streaming code, which prints null
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext._
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{ Seconds, StreamingContext }

object Trail extends App {
  val conf = new SparkConf(false)
    .setMaster("local[*]")
    .setAppName("Spark Streamer")
    .set("spark.logConf", "true")
    .set("spark.cassandra.connection.host", "127.0.0.1")
    .set("spark.cleaner.ttl", "300")

  val streamingContext = new StreamingContext(conf, Seconds(10))

  val context = streamingContext.sparkContext

  val kafkaParams = Map(
"zookeeper.connect" -> "localhost",
"group.id" -> "spark_stream",
"zookeeper.connection.timeout.ms" -> "1",
"auto.offset.reset" -> "smallest"
  )

  val stream = KafkaUtils.createStream[String, String,
    kafka.serializer.StringDecoder, kafka.serializer.StringDecoder](
    streamingContext, kafkaParams, Map("tracker" -> 2),
    StorageLevel.MEMORY_AND_DISK_SER_2).map(_._2)

  val signalsDStream = stream flatMap { data =>
val Array(id, signals) = data split '|'
signals split '&' map { signal =>
  val Array(key, value) = signal split '='
  Signal(id, key, value)
}
  }

  signalsDStream foreachRDD { rdds =>
val hiveContext = new HiveContext(streamingContext.sparkContext)
import hiveContext._
rdds registerTempTable "signals"
hiveContext cacheTable "signals"
val signalRows = hiveContext sql "select id from signals where key='id' and value='123'" map rts cache ()
signalRows.foreach { x =>
  //println(streamingContext.sparkContext) causes serialization error
  println(hiveContext.sparkContext)
}
  }

  streamingContext.start()

}
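One variation I'm thinking of trying, just as a sketch (I haven't confirmed it solves the null): take the SparkContext from the RDD that foreachRDD hands over, and only touch it on the driver side of the closure instead of inside a distributed foreach:

signalsDStream foreachRDD { rdds =>
  // the RDD passed to foreachRDD carries the driver-side SparkContext
  val sc = rdds.sparkContext
  val hiveContext = new HiveContext(sc)
  import hiveContext._

  rdds registerTempTable "signals"
  hiveContext cacheTable "signals"
  // this println runs on the driver, so the context should not be null here
  println(sc)
}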




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/How-to-retrive-spark-context-when-hiveContext-is-used-in-sparkstreaming-tp17609.html



Re: Is Spark in Java a bad idea?

2014-10-28 Thread critikaled
Hi Ron,
Whatever API you have in Scala, you can most likely use from Java: Scala is interoperable with Java and vice versa. Scala, being both object-oriented and functional, will make your job easier on the JVM, and it is more concise than Java. Take it as an opportunity and start learning Scala ;).



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Is-Spark-in-Java-a-bad-idea-tp17534p17538.html



Re: RDD to Multiple Tables SparkSQL

2014-10-28 Thread critikaled
Hi Oliver,
Thanks for the answer. I don't have the information about all the keys beforehand. The reason I want multiple tables is that, based on what I know about a key, I will apply different queries and get the results for that particular key; I don't want to touch the unknown ones, and I'll save those for later. My idea is that if every key has an individual table, querying will be faster. Right now, with a single table, I'd write "select id from kv where key = 'some_key' and value operator 'some_thing'". I want to make it "select id from some_key where value operator 'some_thing'". BTW, what do you mean by "extract"? Could you point me to an API or a code sample?

thanks and regards,
critikaled.  



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/RDD-to-Multiple-Tables-SparkSQL-tp16807p17536.html



Re: Spark Streaming and Storm

2014-10-28 Thread critikaled
http://www.cs.berkeley.edu/~matei/papers/2013/sosp_spark_streaming.pdf



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-and-Storm-tp9118p17530.html



RDD to Multiple Tables SparkSQL

2014-10-20 Thread critikaled
Hi, I have an RDD which I want to register as multiple tables based on the key:


val context = new SparkContext(conf)
val sqlContext = new org.apache.spark.sql.hive.HiveContext(context)
import sqlContext.createSchemaRDD

case class KV(key:String,id:String,value:String)
val logsRDD = context.textFile("logs", 10).map { line =>
  val Array(key, id, value) = line split ' '
  KV(key, id, value)   // use the case class so createSchemaRDD can infer the schema
}
logsRDD.registerTempTable("KVS")

I want to store the above information in multiple tables based on the key, without bringing the entire data set back to the master.

Thanks in advance.
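Roughly the shape I have in mind, as a sketch (it assumes that collecting only the distinct keys to the driver is acceptable and that each key string is a valid table name):

// cache the parsed RDD so it is not re-read once per key
logsRDD.cache()

// only the distinct keys come back to the driver, not the rows themselves
val keys = logsRDD.map(_.key).distinct().collect()

// register one temp table per key; the rows stay distributed
keys.foreach { k =>
  logsRDD.filter(_.key == k).registerTempTable(k)
}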



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/RDD-to-Multiple-Tables-SparkSQL-tp16807.html



any good library to implement multilabel classification on spark?

2014-10-03 Thread critikaled
Hi, going through the Spark MLlib docs I have noticed that it supports multiclass classification. Can anybody help me implement multilabel classification on Spark, along the lines of the "Mulan" and "Meka" libraries?



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/any-good-library-to-implement-multilabel-classification-on-spark-tp15717.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.