Fwd: Define the name of the outputs with Java-Spark.

2014-09-12 Thread Guillermo Ortiz
I would like to define the names of my outputs in Spark. I have a process which writes many files and I would like to name them; is it possible? I guess that it's not possible with the saveAsTextFile method. It would be something similar to MultipleOutputs in Hadoop.
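A minimal sketch of the usual workaround: pair each record with the file name it should go to and write with Hadoop's MultipleTextOutputFormat through saveAsHadoopFile. Here rdd, chooseName and numOutputs are hypothetical placeholders.

    import org.apache.hadoop.io.NullWritable
    import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat
    import org.apache.spark.HashPartitioner

    // Writes each value into a file named after its key, dropping the key from the output.
    class KeyBasedOutput extends MultipleTextOutputFormat[Any, Any] {
      override def generateActualKey(key: Any, value: Any): Any = NullWritable.get()
      override def generateFileNameForKeyValue(key: Any, value: Any, name: String): String =
        key.asInstanceOf[String]
    }

    val numOutputs = 4                                  // hypothetical
    rdd.map(record => (chooseName(record), record))     // chooseName: hypothetical naming function
       .partitionBy(new HashPartitioner(numOutputs))    // keep each output name in a single task
       .saveAsHadoopFile("/tmp/out", classOf[String], classOf[String], classOf[KeyBasedOutput])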

Spark Streaming with Flume or Kafka?

2014-11-19 Thread Guillermo Ortiz
Hi, I'm starting with Spark and I'm just trying to understand: if I want to use Spark Streaming, should I feed it with Flume or Kafka? I think there's no official Flume sink for Spark Streaming, and it seems that Kafka fits better since it gives you reliability. Could someone give a good

Re: Spark Streaming with Flume or Kafka?

2014-11-19 Thread Guillermo Ortiz
or something else) and make it available for a variety of apps via Kafka. Hope this helps! Hari On Wed, Nov 19, 2014 at 8:10 AM, Guillermo Ortiz konstt2...@gmail.com wrote: Hi, I'm starting with Spark and I'm just trying to understand: if I want to use Spark Streaming, should I feed

Re: Spark Streaming with Flume or Kafka?

2014-11-19 Thread Guillermo Ortiz
Streaming (from Flume or Kafka or something else) and make it available for a variety of apps via Kafka. Hope this helps! Hari On Wed, Nov 19, 2014 at 8:10 AM, Guillermo Ortiz konstt2...@gmail.com wrote: Hi, I'm starting with Spark and I'm just trying to understand if I want to use Spark

Re: Spark Streaming with Flume or Kafka?

2014-11-19 Thread Guillermo Ortiz
, Guillermo Ortiz konstt2...@gmail.com wrote: Thank you for your answer, I don't know if I typed the question correctly, but your answer helps me. I'm going to ask the question again to check whether you understood me. I have this topology: DataSource1, ..., DataSourceN -- Kafka -- SparkStreaming

Spark or MR, Scala or Java?

2014-11-22 Thread Guillermo Ortiz
Hello, I'm a newbie with Spark but I've been working with Hadoop for a while. I have two questions. Is there any case where MR is better than Spark? I don't know in which cases I should use Spark instead of MR. When is MR faster than Spark? The other question is: I know Java, is it worth it to learn

Read data from SparkStreaming from Java socket.

2014-12-12 Thread Guillermo Ortiz
Hi, I'm a newbie with Spark. I'm just trying to use Spark Streaming and filter some data sent over a Java socket, but it's not working... it works when I use ncat. Why is it not working? My Spark code is just this: val sparkConf = new SparkConf().setMaster("local[2]").setAppName("Test") val
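A minimal sketch of the streaming side, keeping the setMaster/setAppName values from the snippet and assuming the server is on localhost:12345 and the filter keyword is "error":

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("Test")
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    // socketTextStream is a client: it connects to a server that must already be
    // listening on this host/port and writing newline-terminated lines.
    val lines = ssc.socketTextStream("localhost", 12345)
    val errorLines = lines.filter(_.contains("error"))
    errorLines.print()

    ssc.start()
    ssc.awaitTermination()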

Re: Read data from SparkStreaming from Java socket.

2014-12-12 Thread Guillermo Ortiz
which will be sent to whichever client connects on 12345; I have tested it and it is working with Spark Streaming (socketTextStream). Thanks Best Regards On Fri, Dec 12, 2014 at 6:25 PM, Guillermo Ortiz konstt2...@gmail.com wrote: Hi, I'm a newbie with Spark. I'm just trying to use

Re: Read data from SparkStreaming from Java socket.

2014-12-13 Thread Guillermo Ortiz
ak...@sigmoidanalytics.com wrote: socketTextStream is a socket client which will read from a TCP ServerSocket. Thanks Best Regards On Fri, Dec 12, 2014 at 7:21 PM, Guillermo Ortiz konstt2...@gmail.com wrote: I don't understand what Spark Streaming's socketTextStream is waiting for... is it like

Re: Read data from SparkStreaming from Java socket.

2014-12-14 Thread Guillermo Ortiz
Why doesn't it work? I guess that it's the same with \n. 2014-12-13 12:56 GMT+01:00 Guillermo Ortiz konstt2...@gmail.com: I got it, thanks. A silly question: why, if I do out.write("hello " + System.currentTimeMillis() + "\n");, does it not detect anything, while if I do out.println("hello

Re: Read data from SparkStreaming from Java socket.

2014-12-14 Thread Guillermo Ortiz
Thanks. 2014-12-14 12:20 GMT+01:00 Gerard Maas gerard.m...@gmail.com: Are you using a buffered PrintWriter? That's probably a different flushing behaviour. Try doing out.flush() after out.write(...) and you will get the same result. This is unrelated to Spark, btw. -kr, Gerard.
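A sketch of the server side of the thread's conclusion: an auto-flushing PrintWriter only flushes on println/printf/format, so a plain write needs an explicit flush before socketTextStream sees the line. The port is a hypothetical value.

    import java.io.PrintWriter
    import java.net.ServerSocket

    val server = new ServerSocket(12345)           // hypothetical port
    val socket = server.accept()                   // wait for socketTextStream to connect
    // Second argument enables auto-flush, but only for println/printf/format.
    val out = new PrintWriter(socket.getOutputStream, true)

    out.println("hello " + System.currentTimeMillis())       // flushed automatically
    out.write("hello " + System.currentTimeMillis() + "\n")
    out.flush()                                              // write(...) needs an explicit flush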

Get the value of DStream[(String, Iterable[String])]

2014-12-17 Thread Guillermo Ortiz
I'm a newbie with Spark... a simple question: val errorLines = lines.filter(_.contains(h)) val mapErrorLines = errorLines.map(line => (key, line)) val grouping = errorLinesValue.groupByKeyAndWindow(Seconds(8), Seconds(4)) I get something like: 604: --- 605:

Re: Get the value of DStream[(String, Iterable[String])]

2014-12-17 Thread Guillermo Ortiz
, Guillermo Ortiz konstt2...@gmail.com wrote: I'm a newbie with Spark... a simple question: val errorLines = lines.filter(_.contains(h)) val mapErrorLines = errorLines.map(line => (key, line)) val grouping = errorLinesValue.groupByKeyAndWindow(Seconds(8), Seconds(4)) I get something like: 604

Re: Get the value of DStream[(String, Iterable[String])]

2014-12-17 Thread Guillermo Ortiz
and do something for each element. } I think that it must be pretty basic... argh. 2014-12-17 18:43 GMT+01:00 Guillermo Ortiz konstt2...@gmail.com: What I would like to do is to count the number of elements and, if it's greater than a number, iterate over all of them and store them in MySQL
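A minimal sketch of that idea, assuming grouping is the windowed DStream[(String, Iterable[String])] from the earlier message; the threshold and saveToMysql are hypothetical placeholders:

    grouping.foreachRDD { rdd =>
      rdd.foreachPartition { partition =>
        partition.foreach { case (key, values) =>
          if (values.size > 10) {                      // hypothetical threshold
            values.foreach(v => saveToMysql(key, v))   // hypothetical storage call
          }
        }
      }
    }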

Spark Streaming and Windows, it always counts the logs during all the windows. Why?

2014-12-26 Thread Guillermo Ortiz
I'm trying to do some operations with windows and intervals. I get data every 15 seconds and want to have a window of 60 seconds with batch intervals of 15 seconds. I'm injecting data with ncat. If I inject 3 logs in the same interval, I get into the 'do something' each 15 seconds during one

Re: Spark Streaming and Windows, it always counts the logs during all the windows. Why?

2014-12-26 Thread Guillermo Ortiz
the println(4...)? Shouldn't it execute all the code every 15 seconds, which is what is defined on the context (val ssc = new StreamingContext(sparkConf, Seconds(15)))? 2014-12-26 10:56 GMT+01:00 Guillermo Ortiz konstt2...@gmail.com: I'm trying to do some operations with windows and intervals. I

Re: Spark Streaming and Windows, it always counts the logs during all the windows. Why?

2014-12-26 Thread Guillermo Ortiz
Oh, I didn't understand what I was doing, my fault (too many parties this Christmas). I thought windows worked in another, weird way. Sorry for the questions... 2014-12-26 13:42 GMT+01:00 Guillermo Ortiz konstt2...@gmail.com: I'm trying to understand why it's not working and I added some println

Re: Problems with GC and time to execute with different number of executors.

2015-02-05 Thread Guillermo Ortiz
what is happening. 2015-02-04 18:57 GMT+01:00 Sandy Ryza sandy.r...@cloudera.com: Hi Guillermo, What exactly do you mean by each iteration? Are you caching data in memory? -Sandy On Wed, Feb 4, 2015 at 5:02 AM, Guillermo Ortiz konstt2...@gmail.com wrote: I execute a job in Spark where I'm

Problems with GC and time to execute with different number of executors.

2015-02-04 Thread Guillermo Ortiz
I execute a job in Spark where I'm processing a file of 80 GB in HDFS. I have 5 slaves: (32 cores / 256 GB / 7 physical disks) x 5. I have been trying many different configurations with YARN: yarn.nodemanager.resource.memory-mb 196Gb, yarn.nodemanager.resource.cpu-vcores 24. I have tried to execute the

Re: Problems with GC and time to execute with different number of executors.

2015-02-06 Thread Guillermo Ortiz
. Though that wouldn't explain the high GC. What percent of task time does the web UI report that tasks are spending in GC? On Fri, Feb 6, 2015 at 12:56 AM, Guillermo Ortiz konstt2...@gmail.com wrote: Yes, it's surprising to me as well. I tried to execute it with different configurations

Re: Problems with GC and time to execute with different number of executors.

2015-02-06 Thread Guillermo Ortiz
to me that you would be hitting a lot of GC for this scenario. Are you setting --executor-cores and --executor-memory? What are you setting them to? -Sandy On Thu, Feb 5, 2015 at 10:17 AM, Guillermo Ortiz konstt2...@gmail.com wrote: Any idea why if I use more containers I get a lot

Define size partitions

2015-01-30 Thread Guillermo Ortiz
Hi, I want to process some files; they're kind of big, dozens of gigabytes each. I get them as an array of bytes and there's a structure inside them. I have a header which describes the structure. It could be like: Number(8 bytes) Char(16 bytes) Number(4 bytes) Char(1 byte), ..
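If every record really has the same fixed layout (the header above adds up to 29 bytes), one option could be sc.binaryRecords, which splits a binary file into equally sized byte-array records. The path, record layout and field parsing below are assumptions:

    import java.nio.ByteBuffer

    // One record = Number(8) + Char(16) + Number(4) + Char(1) = 29 bytes.
    val recordLength = 29
    val records = sc.binaryRecords("/data/input.bin", recordLength)  // RDD[Array[Byte]], hypothetical path

    val parsed = records.map { bytes =>
      val buf = ByteBuffer.wrap(bytes)
      val firstNumber  = buf.getLong(0)            // 8-byte number at offset 0
      val text         = new String(bytes, 8, 16)  // 16-byte char field
      val secondNumber = buf.getInt(24)            // 4-byte number at offset 24
      val flag         = bytes(28).toChar          // 1-byte char field
      (firstNumber, text, secondNumber, flag)
    }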

Re: Problems with GC and time to execute with different number of executors.

2015-02-05 Thread Guillermo Ortiz
Any idea why, if I use more containers, I get a lot of stops because of GC? 2015-02-05 8:59 GMT+01:00 Guillermo Ortiz konstt2...@gmail.com: I'm not caching the data. With each iteration I mean each 128 MB that an executor has to process. The code is pretty simple: final Conversor c = new

Trying to execute Spark in Yarn

2015-01-08 Thread Guillermo Ortiz
I'm trying to execute Spark from a Hadoop cluster. I have created this script to try it: #!/bin/bash export HADOOP_CONF_DIR=/etc/hadoop/conf SPARK_CLASSPATH= for lib in `ls /user/local/etc/lib/*.jar` do SPARK_CLASSPATH=$SPARK_CLASSPATH:$lib done

Re: Trying to execute Spark in Yarn

2015-01-08 Thread Guillermo Ortiz
:23 GMT+08:00 Guillermo Ortiz konstt2...@gmail.com: I'm trying to execute Spark from a Hadoop Cluster, I have created this script to try it: #!/bin/bash export HADOOP_CONF_DIR=/etc/hadoop/conf SPARK_CLASSPATH= for lib in `ls /user/local/etc/lib/*.jar` do SPARK_CLASSPATH

Executing Spark, Error creating path from empty String.

2015-01-08 Thread Guillermo Ortiz
When I try to execute my task with Spark, it starts to copy the jars it needs to HDFS and it finally fails; I don't know exactly why. I have checked HDFS and it copies the files, so that part seems to work. I changed the log level to debug but there's nothing else to help. What else does Spark

Re: Executing Spark, Error creating path from empty String.

2015-01-08 Thread Guillermo Ortiz
I was adding some bad jars I guess. I deleted all the jars and copied them again and it works. 2015-01-08 14:15 GMT+01:00 Guillermo Ortiz konstt2...@gmail.com: When I try to execute my task with Spark it starts to copy the jars it needs to HDFS and it finally fails, I don't know exactly why. I

SparkSQL, executing an OR

2015-03-03 Thread Guillermo Ortiz
I'm trying to execute a query with Spark (example from the Spark documentation): val teenagers = people.where('age >= 10).where('age <= 19).select('name) Is it possible to execute an OR with this syntax? val teenagers = people.where('age >= 10 or 'age <= 4).where('age <= 19).select('name) I have

Re: SparkSQL, executing an OR

2015-03-03 Thread Guillermo Ortiz
Thanks, it works. 2015-03-03 13:32 GMT+01:00 Cheng, Hao hao.ch...@intel.com: Use where('age >= 10 || 'age <= 4) instead. -Original Message- From: Guillermo Ortiz [mailto:konstt2...@gmail.com] Sent: Tuesday, March 3, 2015 5:14 PM To: user Subject: SparkSQL, executing an OR I'm trying
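For reference, a sketch of the same filters written with the untyped Column API that replaced the old Catalyst DSL in later Spark versions, assuming people is a DataFrame with the columns from the example above:

    import org.apache.spark.sql.functions.col

    val teenagers = people.filter(col("age") >= 10 && col("age") <= 19).select("name")
    // explicit OR
    val selected  = people.filter(col("age") >= 10 || col("age") <= 4).select("name")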

Combiners in Spark

2015-03-02 Thread Guillermo Ortiz
What is the equivalent of MapReduce's combiners in Spark? I guess that it's combineByKey, but is combineByKey executed locally? I understand that functions such as reduceByKey or foldByKey aren't executed locally. Reading the documentation, it looks like combineByKey is equivalent to
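A small sketch of combineByKey for a word count. The mergeValue step runs inside each partition before the shuffle (map-side combine), which is the closest analogue to an MR combiner; reduceByKey and foldByKey also combine locally in the same way.

    val pairs = sc.parallelize(Seq(("a", 1), ("b", 1), ("a", 1)))

    val counts = pairs.combineByKey(
      (v: Int) => v,                   // createCombiner: first value seen for a key in a partition
      (acc: Int, v: Int) => acc + v,   // mergeValue: combine locally within a partition
      (a: Int, b: Int) => a + b        // mergeCombiners: merge partial results after the shuffle
    )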

Re: How Broadcast variable scale?.

2015-02-23 Thread Guillermo Ortiz
you tried it on your 300-machine cluster? I'm curious to know what happened. -Mosharaf On Mon, Feb 23, 2015 at 8:06 AM, Guillermo Ortiz konstt2...@gmail.com wrote: I'm looking into how broadcast variables scale in Spark and what algorithm it uses. I have found http://www.cs.berkeley.edu

How Broadcast variable scale?.

2015-02-23 Thread Guillermo Ortiz
I'm looking into how broadcast variables scale in Spark and what algorithm it uses. I have found http://www.cs.berkeley.edu/~agearh/cs267.sp10/files/mosharaf-spark-bc-report-spring10.pdf I don't know if they're talking about the current version (1.2.1) because the file was created in 2010. I

CollectAsMap, Broadcasting.

2015-02-26 Thread Guillermo Ortiz
I have a question. If I execute this code: val users = sc.textFile("/tmp/users.log").map(x => x.split(",")).map(v => (v(0), v(1))) val contacts = sc.textFile("/tmp/contacts.log").map(y => y.split(",")).map(v => (v(0), v(1))) val usersMap = contacts.collectAsMap() contacts.map(v => (v._1, (usersMap(v._1),

Re: CollectAsMap, Broadcasting.

2015-02-26 Thread Guillermo Ortiz
are right that this is mostly because joins usually involve shuffles. If not, it's not as clear which way is best. I suppose that if the Map is large-ish, it's safer to not keep pulling it to the driver. On Thu, Feb 26, 2015 at 10:00 AM, Guillermo Ortiz konstt2...@gmail.com wrote: I have

Re: CollectAsMap, Broadcasting.

2015-02-26 Thread Guillermo Ortiz
is a local object. This bit has nothing to do with Spark. Yes, you would have to broadcast it to use it efficiently in functions (not on the driver). On Thu, Feb 26, 2015 at 10:24 AM, Guillermo Ortiz konstt2...@gmail.com wrote: So, in my example, when I execute: val usersMap

Re: CollectAsMap, Broadcasting.

2015-02-26 Thread Guillermo Ortiz
the copy in the driver. On Thu, Feb 26, 2015 at 10:47 AM, Guillermo Ortiz konstt2...@gmail.com wrote: Isn't contacts.map(v => (v._1, (usersMap(v._1), v._2))).collect() executed in the executors? Why is it executed in the driver? contacts is not a local object, right
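A minimal sketch of the pattern this thread converges on: collect the small RDD to the driver and broadcast it so each executor keeps a single read-only copy. It assumes users is the small side (the original snippet collects contacts) and reconstructs the stripped quotes and arrows:

    val users = sc.textFile("/tmp/users.log").map(_.split(",")).map(v => (v(0), v(1)))
    val contacts = sc.textFile("/tmp/contacts.log").map(_.split(",")).map(v => (v(0), v(1)))

    // collectAsMap() pulls the pairs to the driver; broadcast ships one copy per executor
    // instead of serializing the map into every task closure.
    val usersMap = users.collectAsMap()
    val usersMapBC = sc.broadcast(usersMap)

    val joined = contacts.map { case (id, value) =>
      (id, (usersMapBC.value.getOrElse(id, ""), value))
    }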

How to separate messages of different topics.

2015-05-05 Thread Guillermo Ortiz
I want to read from many topics in Kafka and know which topic each message comes from (topic1, topic2, and so on). val kafkaParams = Map[String, String]("metadata.broker.list" -> "myKafka:9092") val topics = Set("EntryLog", "presOpManager") val directKafkaStream = KafkaUtils.createDirectStream[String,
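One way this is often handled with the direct stream: the offsetRanges of each batch line up one-to-one with the RDD's partitions, so the topic name can be attached per partition. A sketch, assuming the directKafkaStream from the snippet above (a DStream of key/value pairs):

    import org.apache.spark.streaming.kafka.HasOffsetRanges

    val withTopic = directKafkaStream.transform { rdd =>
      // offsetRanges(i) describes the Kafka topic/partition behind RDD partition i
      val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
      rdd.mapPartitionsWithIndex { (idx, iter) =>
        val topic = offsetRanges(idx).topic
        iter.map { case (_, value) => (topic, value) }
      }
    }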

Spark + Kakfa with directStream

2015-05-05 Thread Guillermo Ortiz
I'm trying to execute the Hello World example with Spark + Kafka ( https://github.com/apache/spark/blob/master/examples/scala-2.10/src/main/scala/org/apache/spark/examples/streaming/DirectKafkaWordCount.scala) with createDirectStream and I get this error. java.lang.NoSuchMethodError:

Re: Spark + Kakfa with directStream

2015-05-05 Thread Guillermo Ortiz
Sorry, I had a duplicate Kafka dependency with another, older version in another pom.xml. 2015-05-05 14:46 GMT+02:00 Guillermo Ortiz konstt2...@gmail.com: I'm trying to execute the Hello World example with Spark + Kafka ( https://github.com/apache/spark/blob/master/examples/scala-2.10/src/main

Working with slides. How do I know how many times a RDD has been processed?

2015-05-18 Thread Guillermo Ortiz
Hi, I have two streams, RDD1 and RDD2, and want to cogroup them. Data doesn't arrive at the same time and sometimes it could come with some delay. When I get all the data I want to insert it into MongoDB. For example, imagine that I get: RDD1 -- T 0, RDD2 -- T 0.5. I do a cogroup between them but I couldn't

Re: Working with slides. How do I know how many times a RDD has been processed?

2015-05-19 Thread Guillermo Ortiz
(splitRegister.length) = 1 splitRegister.copyToArray(newArray) } (splitRegister(1), newArray) } If I check the length of splitRegister, it is always 2 in each slide; it is never three. 2015-05-18 15:36 GMT+02:00 Guillermo Ortiz konstt2...@gmail.com: Hi, I have two streams, RDD1

Uncaught exception in thread delete Spark local dirs

2015-06-27 Thread Guillermo Ortiz
Hi, I'm executing Spark Streaming code with Kafka. The code was working, but today I tried to execute it again and I got an exception; I don't know what's happening. Right now there are no jobs executing on YARN. How could I fix it? Exception in thread main

Re: Uncaught exception in thread delete Spark local dirs

2015-06-27 Thread Guillermo Ortiz
stateful operations. 2. Could you try not using the SPARK_CLASSPATH environment variable. TD On Sat, Jun 27, 2015 at 1:00 AM, Guillermo Ortiz konstt2...@gmail.com wrote: I don't have any checkpoints in my code. Really, I don't have to save any state; it's just log processing for a PoC. I have

Re: Uncaught exception in thread delete Spark local dirs

2015-06-27 Thread Guillermo Ortiz
: Requested user hdfs is not whitelisted and has id 496, which is below the minimum allowed 1000 Container exited with a non-zero exit code 255 Failing this attempt. Failing the application. 2015-06-27 11:25 GMT+02:00 Guillermo Ortiz konstt2...@gmail.com: Well, SPARK_CLASSPATH is just a random name

Re: Uncaught exception in thread delete Spark local dirs

2015-06-27 Thread Guillermo Ortiz
, or otherwise? Also cc'ed Hari who may have a better idea of YARN related issues. On Sat, Jun 27, 2015 at 12:35 AM, Guillermo Ortiz konstt2...@gmail.com wrote: Hi, I'm executing Spark Streaming code with Kafka. The code was working but today I tried to execute the code again and I got

Re: Uncaught exception in thread delete Spark local dirs

2015-06-27 Thread Guillermo Ortiz
that. Mind renaming that variable and trying it out again? At least it will reduce one possible source of problems. TD On Sat, Jun 27, 2015 at 2:32 AM, Guillermo Ortiz konstt2...@gmail.com wrote: I was checking the logs in YARN and I found this error as well: Application

Trying to connect to many topics with several DirectConnect

2015-05-22 Thread Guillermo Ortiz
Hi, I'm trying to connect to two Kafka topics with Spark with DirectStream, but I get an error. I don't know if there's any limitation to doing it, because when I just access one topic everything is fine. val ssc = new StreamingContext(sparkConf, Seconds(5)) val kafkaParams =

Checkpoints in SparkStreaming

2015-07-28 Thread Guillermo Ortiz
I'm using Spark Streaming and I want to configure checkpointing to manage fault tolerance. I've been reading the documentation. Is it necessary to create and configure the InputDStream in the getOrCreate function? I checked the example in
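A minimal sketch of the getOrCreate pattern, with a hypothetical checkpoint directory and source. The whole DStream graph (input streams, transformations and outputs) has to be built inside the factory function, because on a restart the graph is restored from the checkpoint and the function is not called again:

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val checkpointDir = "/tmp/streaming-checkpoint"   // hypothetical path

    def createContext(): StreamingContext = {
      val ssc = new StreamingContext(sparkConf, Seconds(5))
      ssc.checkpoint(checkpointDir)
      // Define the input DStream and all processing here, inside the factory.
      val lines = ssc.socketTextStream("localhost", 12345)   // hypothetical source
      lines.print()
      ssc
    }

    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()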

Problems with JobScheduler

2015-07-30 Thread Guillermo Ortiz
I have a problem with the JobScheduler. I have executed the same code in two clusters. I read from three topics in Kafka with DirectStream, so I have three tasks. I have checked YARN and there are no other jobs launched. On the cluster where I have trouble I get these logs: 15/07/30 14:32:58 INFO

Re: Problems with JobScheduler

2015-07-30 Thread Guillermo Ortiz
I read about the maxRatePerPartition parameter; I haven't set it. Could that be the problem? Although this wouldn't explain why it doesn't work in only one of the clusters. 2015-07-30 14:47 GMT+02:00 Guillermo Ortiz konstt2...@gmail.com: They just share Kafka; the rest of the resources

Re: Problems with JobScheduler

2015-07-30 Thread Guillermo Ortiz
They just share Kafka; the rest of the resources are independent. I tried to stop one cluster and run only the cluster that isn't working, but the same thing happens. 2015-07-30 14:41 GMT+02:00 Guillermo Ortiz konstt2...@gmail.com: I have a problem with the JobScheduler. I have executed the same

Re: Problems with JobScheduler

2015-07-30 Thread Guillermo Ortiz
at MetricsSpark.scala:67, took 60.391761 s 15/07/30 14:37:35 INFO DAGScheduler: Job 93 finished: foreachRDD at MetricsSpark.scala:67, took 0.531323 s Are those jobs running on the same topicpartition? On Thu, Jul 30, 2015 at 8:03 AM, Guillermo Ortiz konstt2...@gmail.com wrote: I read about

Re: Problems with JobScheduler

2015-07-31 Thread Guillermo Ortiz
, 2015 at 10:46 AM, Guillermo Ortiz konstt2...@gmail.com wrote: The difference is that one receives more data than the other two. I can pass the topics through parameters, so I could execute the code with one topic at a time and figure out which topic it is, although I guess that it's

Re: Problems with JobScheduler

2015-07-30 Thread Guillermo Ortiz
the results. On Thu, Jul 30, 2015 at 9:29 AM, Guillermo Ortiz konstt2...@gmail.com wrote: I have three topics with one partition each, so each job runs over one topic. 2015-07-30 16:20 GMT+02:00 Cody Koeninger c...@koeninger.org: Just so I'm clear, the difference in timing you're talking

Re: Problems with JobScheduler

2015-07-31 Thread Guillermo Ortiz
:15 GMT+02:00 Guillermo Ortiz konstt2...@gmail.com: It doesn't make sense to me, because the other cluster processes all the data in less than a second. Anyway, I'm going to set that parameter. 2015-07-31 0:36 GMT+02:00 Tathagata Das t...@databricks.com: Yes, and that is indeed the problem

Error SparkStreaming after a while executing.

2015-07-30 Thread Guillermo Ortiz
I'm executing a job with Spark Streaming and I get this error every time once the job has been executing for a while (usually hours or days). I have no idea why it's happening. 15/07/30 13:02:14 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception

How to config the log in Spark

2015-12-07 Thread Guillermo Ortiz
I can't manage to activate the logs for my classes. I'm using CDH 5.4 with Spark 1.3.0. I have a class in Scala with some log.debug calls, and I create an object to log: package example.spark import org.apache.log4j.Logger object Holder extends Serializable { @transient lazy val log =
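The snippet is cut off; a completed version of that logging holder could look like the sketch below (the @transient lazy val keeps the Logger out of serialized closures and re-creates it on each executor when first used):

    package example.spark

    import org.apache.log4j.Logger

    object Holder extends Serializable {
      // Not serialized with closures; initialized lazily where it is actually used.
      @transient lazy val log: Logger = Logger.getLogger(getClass.getName)
    }

    // usage inside a transformation:
    // rdd.map { x => Holder.log.debug("processing " + x); x }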

Trying to index document in Solr with Spark and solr-spark library

2015-12-16 Thread Guillermo Ortiz
I'm trying to index documents into Solr from Spark with the solr-spark library. I have created a project with Maven and included all the dependencies when I execute Spark, but I get a ClassNotFoundException. I have checked that the class is in one of the jars that I'm including (solr-solrj-4.10.3.jar). I

Re: Trying to index document in Solr with Spark and solr-spark library

2015-12-16 Thread Guillermo Ortiz
) at org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:334) at org.apache.solr.client.solrj.impl.CloudSolrServer.connect(CloudSolrServer.java:243) 2015-12-16 16:26 GMT+01:00 Guillermo Ortiz <konstt2...@gmail.com>: I'm trying to index documents into Solr from Spark with the solr-spark library

Re: Spark directStream with Kafka and process the lost messages.

2015-11-30 Thread Guillermo Ortiz
streaming-programming-guide.html#checkpointing On Mon, Nov 30, 2015 at 9:38 AM, Guillermo Ortiz <konstt2...@gmail.com> wrote: Hello, I have Spark and Kafka with directStream. I'm trying to make it so that if Spark dies it can process all those messages

Spark directStream with Kafka and process the lost messages.

2015-11-30 Thread Guillermo Ortiz
Hello, I have Spark and Kafka with directStream. I'm trying to make it so that if Spark dies, it can process all those messages when it starts again. The offsets are stored in checkpoints, but I don't know how I could tell Spark to start at that point. I saw that there's another createDirectStream method with a
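The overload referred to takes explicit starting offsets plus a message handler. A sketch, assuming the application saves and reloads its own offsets; loadOffsets below is a hypothetical placeholder, as are ssc and kafkaParams:

    import kafka.common.TopicAndPartition
    import kafka.message.MessageAndMetadata
    import kafka.serializer.StringDecoder
    import org.apache.spark.streaming.kafka.KafkaUtils

    // Offsets previously persisted by the application (ZooKeeper, a database, ...).
    val fromOffsets: Map[TopicAndPartition, Long] = loadOffsets()

    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, (String, String)](
      ssc,
      kafkaParams,
      fromOffsets,
      (mmd: MessageAndMetadata[String, String]) => (mmd.key, mmd.message())
    )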

Number of consumers in Kafka with Spark Streaming

2016-06-21 Thread Guillermo Ortiz
I use Spark Streaming with Kafka and I'd like to know how many consumers are created. I guess as many as there are partitions in Kafka, but I'm not sure. Is there a way to know the name of the groupId that Spark generates for Kafka?

Re: How could I do this algorithm in Spark?

2016-02-25 Thread Guillermo Ortiz
You could easily model your data as an RDD of tuples (or as a dataframe/dataset) and use the sortBy (or orderBy for dataframes/datasets) methods. best, --Jakob On Wed, Feb 24, 2016 at 2:26 PM, Guillermo Ortiz <konstt2...@gmail.c

Number partitions after a join

2016-02-25 Thread Guillermo Ortiz
When you do a join in Spark, how many partitions result? Is it a default number if you don't specify the number of partitions?
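A small sketch of the two forms, with a note on the default behaviour of the RDD API: when no number is given, the join uses the default Partitioner, i.e. an existing partitioner from a parent if there is one, otherwise spark.default.parallelism if it is set, otherwise the largest parent's partition count.

    val left  = sc.parallelize(Seq((1, "a"), (2, "b")), 4)
    val right = sc.parallelize(Seq((1, "x"), (3, "y")), 8)

    val joined = left.join(right)         // default partitioner decides the count
    println(joined.getNumPartitions)

    val joined10 = left.join(right, 10)   // explicit number of partitions
    println(joined10.getNumPartitions)    // 10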

Re: How could I do this algorithm in Spark?

2016-02-25 Thread Guillermo Ortiz
thm implementation. --- Robin East, Spark GraphX in Action, Michael Malak and Robin East, Manning Publications Co. http://www.manning.com/books/spark-graphx-in-action

Re: Number partitions after a join

2016-02-25 Thread Guillermo Ortiz
On Thu, Feb 25, 2016 at 7:42 PM, Guillermo Ortiz <konstt2...@gmail.com> wrote: When you do a join in Spark, how many partitions result? Is it a default number if you don't specify the number of partitions? -- Takeshi Yamamuro

Re: How could I do this algorithm in Spark?

2016-02-25 Thread Guillermo Ortiz
Oh, the letters were just an example; it could be: a,t b,o t,k k,c. So a -> t -> k -> c, and the result is: a,c; t,c; k,c and b,o. I don't know if you were thinking about sortBy because of the other example where the letters were consecutive. 2016-02-25 9:42 GMT+01:00 Guillermo Ortiz

Re: How could I do this algorithm in Spark?

2016-02-25 Thread Guillermo Ortiz
(a-b) -> (a-b-c, a-b-e) (b-c) -> (a-b-c, b-c-d) (c-d) -> (b-c-d) (b-e) -> (b-e-f) (e-f) -> (b-e-f, e-f-c) (f-c) -> (e-f-c) filter out keys with less than 2 values (b-c) -> (a-b-c, b-c-d) (e-f) -> (b-e-f, e-f-c) ma

Re: Number partitions after a join

2016-02-25 Thread Guillermo Ortiz
partition. Cheers, Ximo From: Guillermo Ortiz [mailto:konstt2...@gmail.com] Sent: Thursday, 25 February 2016 15:19 To: Takeshi Yamamuro <linguin@gmail.com> Cc: user <user@spark.apache.org> Subject

Re: How could I do this algorithm in Spark?

2016-02-25 Thread Guillermo Ortiz
m my Verizon Wireless 4G LTE smartphone -------- Original message -------- From: Guillermo Ortiz <konstt2...@gmail.com> Date: 02/24/2016 5:26 PM (GMT-05:00) To: user <user@spark.apache.org> Subject: How could I do this algorithm in Spark? I want to do some

Get all vertexes with outDegree equals to 0 with GraphX

2016-02-26 Thread Guillermo Ortiz
I'm new to GraphX. I need to get the vertices without out edges. I guess that it's pretty easy, but I did it in a pretty complicated and inefficient way: val vertices: RDD[(VertexId, (List[String], List[String]))] = sc.parallelize(Array((1L, (List("a"), List[String]())), (2L, (List("b"),
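A more direct sketch, assuming a Graph has already been built from those vertices and edges: graph.outDegrees only contains vertices with at least one outgoing edge, so subtracting it from graph.vertices leaves exactly the vertices with out-degree 0.

    // graph: org.apache.spark.graphx.Graph built from the vertices/edges above
    val verticesWithNoOutEdges = graph.vertices.subtractByKey(graph.outDegrees)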

Re: Get all vertexes with outDegree equals to 0 with GraphX

2016-02-27 Thread Guillermo Ortiz
es == 0 ) val verticesWithNoOutEdges = graphWithNoOutEdges.vertices Mohammed Author: Big Data Analytics with Spark <http://www.amazon.com/Big-Data-Analytics-Spark-Practitioners/dp/1484209656/> From: Guillermo Ortiz [mailto:kon

Re: Get all vertexes with outDegree equals to 0 with GraphX

2016-02-26 Thread Guillermo Ortiz
East, Spark GraphX in Action, Michael Malak and Robin East, Manning Publications Co. http://www.manning.com/books/spark-graphx-in-action On 26 Feb 2016, at 11:59, Guillermo Ortiz <konstt2...@gmail.com> wrote: I'm new to GraphX. I need to get

How could I do this algorithm in Spark?

2016-02-24 Thread Guillermo Ortiz
I want to implement an algorithm in Spark. I know how to do it on a single machine where all the data is together, but I don't know a good way to do it in Spark. If someone has an idea... I have some data like this: a,b x,y b,c y,y c,d. I want something like: a,d b,d c,d x,y y,y. I

Number of executors in Spark - Kafka

2016-01-21 Thread Guillermo Ortiz
I'm using Spark Streaming and Kafka with the direct approach. I have created a topic with 6 partitions, so when I execute Spark each RDD has six partitions. I understand that ideally there should be six executors so that each one processes one partition. To do that, when I execute spark-submit (I use YARN) I specify the

Re: Spark job stops after a while.

2016-01-21 Thread Guillermo Ortiz
on executor throws ClassNotFoundException on driver FYI On Thu, Jan 21, 2016 at 7:10 AM, Guillermo Ortiz <konstt2...@gmail.com> wrote: I'm using CDH 5.5.1 with Spark 1.5.x (I think that it's 1.5.2). I know that the library is here:

Spark job stops after a while.

2016-01-21 Thread Guillermo Ortiz
I'm running a Spark Streaming process and it stops after a while. It does some processing and inserts the result into Elasticsearch with its library. After a while the process fails. I have been checking the logs and I have seen this error: 2016-01-21 14:57:54,388

Re: Spark job stops after a while.

2016-01-21 Thread Guillermo Ortiz
Which Spark version are you using? Cheers On Thu, Jan 21, 2016 at 6:50 AM, Guillermo Ortiz <konstt2...@gmail.com> wrote: I'm running a Spark Streaming process and it stops after a while. It does some processing and inserts the result into Elastic

Re: Spark job stops after a while.

2016-01-21 Thread Guillermo Ortiz
I think that it's that bug, because the error is the same. Thanks a lot. 2016-01-21 16:46 GMT+01:00 Guillermo Ortiz <konstt2...@gmail.com>: I'm using Spark 1.5.0, confirmed, except for this jar: file:/opt/centralLogs/lib/spark-catalyst_2.10-1.5.1.jar. I'm going to keep l

Problem with union of DirectStream

2016-03-10 Thread Guillermo Ortiz
I have a DirectStream and process data from Kafka, val directKafkaStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams1, topics1.toSet) directKafkaStream.foreachRDD { rdd => val offsets = rdd.asInstanceOf[HasOffsetRanges].offsetRanges When

Checkpoints in Spark

2016-03-30 Thread Guillermo Ortiz
I'm curious about what kind of things are saved in the checkpoints. I just changed the number of executors when I execute Spark and it didn't take effect until I removed the checkpoint. I guess that if I'm using log4j.properties and I want to change it, I have to remove the checkpoint as well. When you

Configuring log4j Spark

2016-03-30 Thread Guillermo Ortiz
I'm trying to configure log4j in Spark. spark-submit --conf spark.metrics.conf=metrics.properties --name "myProject" --master yarn-cluster --class myCompany.spark.MyClass --files /opt/myProject/conf/log4j.properties --jars $SPARK_CLASSPATH --executor-memory 1024m --num-executors 5

Re: Configuring log4j Spark

2016-03-30 Thread Guillermo Ortiz
cutors 5 --executor-cores 1 --driver-memory 1024m --files /opt/myProject/conf/log4j.properties /opt/myProject/myJar.jar I think I didn't make any other changes. 2016-03-30 15:42 GMT+02:00 Guillermo Ortiz <konstt2...@gmail.com>: I'm trying to configure log4j in Sp

Error Kafka/Spark. Ran out of messages before reaching ending offset

2016-05-06 Thread Guillermo Ortiz
I'm trying to read data with Spark and index it to ES with its library (es-hadoop version 2.2.1). It was working fine for a while, but now this error has started to happen. I have deleted the checkpoint and even the Kafka topic, and restarted all the machines with Kafka and ZooKeeper, but it didn't

Re: Error Kafka/Spark. Ran out of messages before reaching ending offset

2016-05-06 Thread Guillermo Ortiz
I think that it's a Kafka error, but I'm starting to wonder if it could be something about Elasticsearch, since I have seen more people with the same error using Elasticsearch. I have no idea. 2016-05-06 11:05 GMT+02:00 Guillermo Ortiz <konstt2...@gmail.com>: I'm trying to read data f

Re: Error Kafka/Spark. Ran out of messages before reaching ending offset

2016-05-06 Thread Guillermo Ortiz
[JobGenerator] INFO org.apache.spark.streaming.scheduler.JobScheduler - Added jobs for time 146252629 ms 2016-05-06 11:18:10,015 [JobGenerator] INFO org.apache.spark.streaming.scheduler.JobGenerator - Checkpointing graph for time 146252629 ms 2016-05-06 11:11 GMT+02:00 Guillermo Ortiz <kons

java.lang.NoClassDefFoundError: kafka/api/TopicMetadataRequest

2016-05-09 Thread Guillermo Ortiz
I'm trying to execute a job with Spark and Kafka and I'm getting this error. I know that it's because the versions are not right, but I have been checking the jars which I import (in the Spark UI, spark.yarn.secondary.jars) and they are right, and the class exists inside kafka_2.10-0.8.2.1.jar.

Re: Error Kafka/Spark. Ran out of messages before reaching ending offset

2016-05-09 Thread Guillermo Ortiz
earch. On Fri, May 6, 2016 at 4:22 AM, Guillermo Ortiz <konstt2...@gmail.com> wrote: This is the complete error. 2016-05-06 11:18:05,424 [task-result-getter-0] INFO org.apache.spark.scheduler.TaskSetManager - Finished task 5.0 in stage

Re: java.lang.NoClassDefFoundError: kafka/api/TopicMetadataRequest

2016-05-09 Thread Guillermo Ortiz
:36 CET 2015 kafka/javaapi/TopicMetadataRequest.class 2135 Thu Feb 26 14:30:38 CET 2015 kafka/server/KafkaApis$$anonfun$handleTopicMetadataRequest$1.class 2016-05-09 12:51 GMT+02:00 Guillermo Ortiz <konstt2...@gmail.com>: I'm trying to execute a job with Spark and Kafka and I'

Re: java.lang.NoClassDefFoundError: kafka/api/TopicMetadataRequest

2016-05-09 Thread Guillermo Ortiz
ersion of kafka is embedded in any of the jars listed below. Cheers On Mon, May 9, 2016 at 4:00 AM, Guillermo Ortiz <konstt2...@gmail.com> wrote: jar tvf kafka_2.10-0.8.2.1.jar | grep TopicMetadataRequest 1757 Thu Feb 26 14:30:34 CET

Re: java.lang.NoClassDefFoundError: kafka/api/TopicMetadataRequest

2016-05-09 Thread Guillermo Ortiz
picMetadataRequest Dr Mich Talebzadeh LinkedIn https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw h

Flume and Spark Streaming

2017-01-16 Thread Guillermo Ortiz
I'm considering using Flume (file channel) -> Spark Streaming. I have some doubts about it: 1. The RDD size is all the data that arrives in the micro-batch you have defined, right? 2. If there are 2 GB of data, how many RDDs are generated? Just one, and I have to do a repartition? 3. When is the ACK

Re: Flume and Spark Streaming

2017-01-16 Thread Guillermo Ortiz
Avro sink --> Spark Streaming 2017-01-16 13:55 GMT+01:00 ayan guha <guha.a...@gmail.com>: With Flume, what would be your sink? On Mon, Jan 16, 2017 at 10:44 PM, Guillermo Ortiz <konstt2...@gmail.com> wrote: I'm considering using Flume (c

Testing Spark-Cassandra

2018-01-17 Thread Guillermo Ortiz
Hello, I'm using Spark 2.0 and Cassandra. Is there any utility to write unit tests easily, or what would be the best way to do it? A library? Cassandra in Docker?

Re: Testing Spark-Cassandra

2018-01-17 Thread Guillermo Ortiz
nation. 2018-01-17 16:48 GMT+01:00 Guillermo Ortiz <konstt2...@gmail.com>: Hello, I'm using Spark 2.0 and Cassandra. Is there any utility to write unit tests easily, or what would be the best way to do it? A library? Cassandra with

Re: Caching small Rdd's take really long time and Spark seems frozen

2018-08-23 Thread Guillermo Ortiz
uld the blockage be in their compute creation instead of their caching? Thanks, Sonal Nube Technologies <http://www.nubetech.co> <http://in.linkedin.com/in/sonalgoyal> On Thu, Aug 23, 2018 at 6:38 PM, Guillermo Ortiz wrote:

Re: Caching small Rdd's take really long time and Spark seems frozen

2018-08-24 Thread Guillermo Ortiz
Another test I just did is to execute with local[X], and this problem doesn't happen. Communication problems? 2018-08-23 22:43 GMT+02:00 Guillermo Ortiz: It's a complex DAG before the point where I cache the RDD; there are some joins, filters and maps before caching the data, but most of the

Caching small Rdd's take really long time and Spark seems frozen

2018-08-23 Thread Guillermo Ortiz
I use Spark caching with the persist method. I have several RDDs that I cache, but some of them are pretty small (about 300 KB). Most of the time it works well and the whole job usually lasts 1 s, but sometimes it takes about 40 s to store 300 KB in the cache. If I go to the Spark UI -> Cache, I can see

Local mode vs client mode with one executor

2018-08-30 Thread Guillermo Ortiz
I have many Spark processes; some of them are pretty simple and barely have to process any messages, but they were developed with the same archetype and they use Spark. Some of them are executed with many executors, but for a few of them it doesn't make sense to process with more than 2-4 cores in only

Connection SparkStreaming with SchemaRegistry

2018-03-09 Thread Guillermo Ortiz
I'm trying to integrate Schema Registry and Spark Streaming. For the moment I want to use GenericRecords. It seems that my producer works and new schemas are published to the _schemas topic. When I try to read with my consumer, I'm not able to deserialize the data. How could I tell Spark that
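A sketch of consumer settings that are commonly paired with the Confluent Avro deserializer and the kafka-0-10 direct stream; the broker/registry addresses, group id and topic name are assumptions, and ssc is the existing StreamingContext:

    import io.confluent.kafka.serializers.KafkaAvroDeserializer
    import org.apache.avro.generic.GenericRecord
    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

    // schema.registry.url lets the deserializer fetch the writer schema;
    // KafkaAvroDeserializer returns GenericRecord unless specific.avro.reader is true.
    val kafkaParams = Map[String, Object](
      "bootstrap.servers"   -> "broker:9092",            // hypothetical
      "schema.registry.url" -> "http://registry:8081",   // hypothetical
      "key.deserializer"    -> classOf[StringDeserializer],
      "value.deserializer"  -> classOf[KafkaAvroDeserializer],
      "group.id"            -> "spark-consumer",         // hypothetical
      "auto.offset.reset"   -> "latest"
    )

    val stream = KafkaUtils.createDirectStream[String, GenericRecord](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, GenericRecord](Seq("myTopic"), kafkaParams)
    )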
