/scala/org/apache/spark/examples/mllib/RecommendationExample.scala#L62
On Fri, Mar 11, 2016 at 8:18 PM, Shishir Anshuman <shishiranshu...@gmail.com
> wrote:
> The model produced after training.
>
> On Fri, Mar 11, 2016 at 10:29 PM, Bryan Cutler <cutl...@gmail.com> wrote:
>
Steve & Adam,
I would be interested in hearing the outcome here as well. I am seeing
some similar issues in my 1.4.1 pipeline, using stateful functions
(reduceByKeyAndWindow and updateStateByKey).
Regards,
Bryan Jeffrey
On Mon, Mar 14, 2016 at 6:45 AM, Steve Loughran <ste...@hortonwo
Are you trying to save predictions on a dataset to a file, or the model
produced after training with ALS?
On Thu, Mar 10, 2016 at 7:57 PM, Shishir Anshuman wrote:
> hello,
>
> I am new to Apache Spark and would like to get the Recommendation output
> of the ALS
Prateek,
I believe that one task is created per Cassandra partition. How is your
data partitioned?
Regards,
Bryan Jeffrey
On Thu, Mar 10, 2016 at 10:36 AM, Prateek . <prat...@aricent.com> wrote:
> Hi,
>
>
>
> I have a Spark Batch job for reading timeseries data from
Hello.
Is there a suggested method and/or some example code to write results from
a Spark streaming job back to Kafka?
I'm using Scala and Spark 1.4.1.
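A sketch of what I have in mind (assuming the kafka-clients producer API;
the broker address, topic name, and results stream are placeholders):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// results: a hypothetical DStream[String]
results.foreachRDD { rdd =>
  rdd.foreachPartition { partition =>
    // one producer per partition per batch; producers are not serializable
    val props = new Properties()
    props.put("bootstrap.servers", "kafkaBroker:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    val producer = new KafkaProducer[String, String](props)
    partition.foreach(msg => producer.send(new ProducerRecord[String, String]("outputTopic", msg)))
    producer.close()
  }
}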
Regards,
Bryan Jeffrey
ek.mis...@xerox.com> wrote:
> Hello Bryan,
>
>
>
> Thank you for the update on Jira. I took your code and tried with mine.
> But I get an error with the vector being created. Please see my code below
> and suggest me.
>
> My input file has some conte
I'm not exactly sure how you would like to set up your LDA model, but I
noticed there was no Python example for LDA in Spark. I created this issue
to add it https://issues.apache.org/jira/browse/SPARK-13500. Keep an eye
on this if it could be of help.
bryan
On Wed, Feb 24, 2016 at 8:34 PM
Using flatMap on a string will treat it as a sequence of characters, which is
why you are getting an RDD of Char. I think you just want to do a map instead,
like this:
val timestamps = stream.map(event => event.getCreatedAt.toString)
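To see the difference side by side (a quick sketch):

val rdd = sc.parallelize(Seq("ab", "cd"))
rdd.flatMap(s => s).collect() // Array(a, b, c, d) -- an RDD[Char]
rdd.map(s => s).collect()     // Array(ab, cd)     -- an RDD[String]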
On Feb 25, 2016 8:27 AM, "Dominik Safaric" wrote:
>>
>> [4, 1, 3083.2778025]
>>
>> [2, 4, 6226.40232139]
>>
>> [1, 2, 785.84266]
>>
>> [5, 1, 6706.05424139]
>>
>>
>>
>> and monitor. Please let me know if I missed something
>>
>> Krishna
>>
>>
Can you share more of your code to reproduce this issue? The model should
be updated with each batch, but can't tell what is happening from what you
posted so far.
On Fri, Feb 19, 2016 at 10:40 AM, krishna ramachandran <ram...@s1776.com>
wrote:
> Hi Bryan
> Agreed. It is a sing
Could you elaborate on where the issue is? You say calling
model.latestModel.clusterCenters.foreach(println) doesn't show an updated
model, but that is just a single statement to print the centers once.
Also, is there any reason you don't predict on the test data like this?
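That is, something like (a sketch; testData is a hypothetical
DStream[LabeledPoint] and model a StreamingKMeans model):

model.predictOnValues(testData.map(lp => (lp.label, lp.features))).print()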
Arko,
Check this out: https://github.com/Microsoft/SparkCLR
This is a Microsoft authored C# language binding for Spark.
Regards,
Bryan Jeffrey
On Tue, Feb 9, 2016 at 3:13 PM, Arko Provo Mukherjee <
arkoprovomukher...@gmail.com> wrote:
> Doesn't seem to be supported, but
From within a Spark job you can use a periodic listener:
ssc.addStreamingListener(PeriodicStatisticsListener(Seconds(60)))
class PeriodicStatisticsListener(timePeriod: Duration) extends StreamingListener {
  private val logger = LoggerFactory.getLogger("Application")
  // assumed hook: onBatchCompleted fires after each completed batch
  override def onBatchCompleted(batch: StreamingListenerBatchCompleted): Unit = {
    // log batch statistics on the configured period
  }
}
I am sure we're doing consistent hashing.
The 'reduceAdd' function is adding to a map. The 'inverseReduceFunction' is
subtracting from the map. The filter function is removing items where the
number of entries in the map is zero. Has anyone seen this error before?
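The shape of what we're doing, sketched (Counts, keyedStream, durations, and
partition counts are simplified placeholders):

import org.apache.spark.streaming.Seconds

type Counts = Map[String, Long]
def reduceAdd(a: Counts, b: Counts): Counts =
  b.foldLeft(a) { case (m, (k, v)) => m.updated(k, m.getOrElse(k, 0L) + v) }
def inverseReduce(a: Counts, b: Counts): Counts =
  b.foldLeft(a) { case (m, (k, v)) => m.updated(k, m.getOrElse(k, 0L) - v) }

// keyedStream: a hypothetical DStream[(String, Counts)]
val windowed = keyedStream.reduceByKeyAndWindow(
  reduceAdd _, inverseReduce _,
  Seconds(300), Seconds(60),
  numPartitions = 6,
  filterFunc = { case (_, counts) => counts.nonEmpty }) // drop keys whose map has no entries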
Regards,
Bryan Jeffrey
Excuse me - I should have mentioned: I am running Spark 1.4.1, Scala 2.11.
I am running in streaming mode receiving data from Kafka.
Regards,
Bryan Jeffrey
On Mon, Feb 1, 2016 at 9:19 PM, Bryan Jeffrey <bryan.jeff...@gmail.com>
wrote:
> Hello.
>
> I have a reduceByKeyAnd
Glad you got it going! It wasn't very obvious what needed to be set; maybe it
is worth explicitly stating this in the docs, since it seems to have come up a
couple of times before.
Bryan
On Fri, Jan 15, 2016 at 12:33 PM, Andrew Weiner <
andrewweiner2...@u.northwestern.edu> wrote:
>
If you are able to just train the RandomForestClassificationModel from ML
directly instead of training the old model and converting, then that would
be the way to go.
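A minimal sketch of training with the ML API directly (column names and the
tree count are assumptions):

import org.apache.spark.ml.classification.RandomForestClassifier

val rf = new RandomForestClassifier()
  .setLabelCol("label")
  .setFeaturesCol("features")
  .setNumTrees(20)
val model = rf.fit(trainingData) // trainingData: a DataFrame of label/features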
On Thu, Jan 14, 2016 at 2:21 PM, <rachana.srivast...@thomsonreuters.com>
wrote:
> Thanks so much Bryan for your
solved as part of this JIRA
https://issues.apache.org/jira/browse/SPARK-12183
Bryan
On Thu, Jan 14, 2016 at 8:12 AM, Rachana Srivastava <
rachana.srivast...@markmonitor.com> wrote:
> Tried using 1.6 version of Spark that takes numberOfFeatures fifth
> argument in the API but s
bmit --master yarn --deploy-mode client
--driver-memory 4g --executor-memory 2g --executor-cores 1
./examples/src/main/python/pi.py 10*
That is a good sign that local jobs and Java examples work, probably just a
small configuration issue :)
Bryan
On Wed, Jan 13, 2016 at
Hi Andrew,
I know that older versions of Spark could not run PySpark on YARN in
cluster mode. I'm not sure if that is fixed in 1.6.0 though. Can you try
setting the deploy-mode option to "client" when calling spark-submit?
Bryan
On Thu, Jan 7, 2016 at 2:39 PM, weineran <
This is a known issue https://issues.apache.org/jira/browse/SPARK-9844. As
Noorul said, it is probably safe to ignore as the executor process is
already destroyed at this point.
On Mon, Dec 21, 2015 at 8:54 PM, Noorul Islam K M wrote:
> carlilek
the message is always reaching Kafka (checked through the console
consumer).
Regards
Vivek
Sent using CloudMagic Email
On Sat, Dec 26, 2015 at 2:42 am, Bryan <bryan.jeff...@gmail.com> wrote:
Agreed. I did not see that they were using the same group name.
Sent from Outlook Mail for Windows 10
Vivek,
https://spark.apache.org/docs/1.5.2/streaming-kafka-integration.html
The map gives, per topic, the number of partitions (consumer threads) to
consume. Is numThreads below equal to the number of partitions in your topic?
Regards,
Bryan Jeffrey
Sent from Outlook Mail for Windows 10 phone
From: vivek.meghanat
bs use same group name –
is that a problem?
val topicMap = topics.split(",").map((_, numThreads.toInt)).toMap // number of threads used here is 1
val searches = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap)
  .map(line => parse(line._2).extract[Search])
Regards,
while missing data from other partitions.
Regards,
Bryan Jeffrey
Sent from Outlook Mail for Windows 10 phone
From: vivek.meghanat...@wipro.com
Sent: Thursday, December 24, 2015 5:22 AM
To: user@spark.apache.org
Subject: Spark Streaming + Kafka + scala job message read issue
Hi All,
We
hops) the throughput decreases
significantly, causing job delays.
Is this typical? Have others encountered similar issues? Is there Kafka
configuration that might mitigate this issue?
Regards,
Bryan Jeffrey
Sent from Outlook Mail for Windows 10 phone
llect().toString());
    total += rdd.count();
  }
}

MyFunc f = new MyFunc();
inputStream.foreachRDD(f);
// f.total will have the count of all RDDs
Hope that helps some!
-bryan
On Wed, Dec 16, 2015 at 8:37 AM, Bryan Cutler <cutl...@gmail.com> wrote:
> Hi Andy,
>
>
I had a bunch of library dependencies that were still using Scala 2.10
versions. I updated them to 2.11 and everything has worked fine since.
On Wed, Dec 16, 2015 at 3:12 AM, Ashwin Sai Shankar <ashan...@netflix.com>
wrote:
> Hi Bryan,
> I see the same issue with 1.5.2, can you pls
Hi Andy,
Regarding the foreachRDD return value, this JIRA (which will be in 1.6) should
take care of that and make things a little simpler:
https://issues.apache.org/jira/browse/SPARK-4557
On Dec 15, 2015 6:55 PM, "Andy Davidson"
wrote:
> I am writing a JUnit test
Hi Roberto,
1. How do they differ in terms of performance?
They both use alternating least squares matrix factorization; the main
difference is that ml.recommendation.ALS uses DataFrames as input, which have
built-in optimizations and should give better performance (sketched below).
2. Am I correct to assume
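To make the first point concrete, a sketch of the DataFrame-based API (column
names and rank are assumptions):

import org.apache.spark.ml.recommendation.ALS

val als = new ALS()
  .setUserCol("userId")
  .setItemCol("itemId")
  .setRatingCol("rating")
  .setRank(10)
val model = als.fit(ratings) // ratings: a DataFrame with the columns above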
rowid from your code is a variable in the driver, so it will be evaluated
once and then only the value is sent to words.map. You probably want to
have rowid be a lambda itself, so that it will get the value at the time it
is evaluated. For example if I have the following:
>>> data =
?
Regards,
Bryan Jeffrey
Sent from Outlook Mail
From: Cheng Lian
Sent: Tuesday, November 24, 2015 6:49 AM
To: Bryan;user
Subject: Re: DateTime Support - Hive Parquet
I see, then this is actually irrelevant to Parquet. I guess we can support
Joda DateTime in Spark SQL's reflective schema inference.
Cheng,
That’s exactly what I was hoping for – native support for writing DateTime
objects. As it stands Spark 1.5.2 seems to leave no option but to do manual
conversion (to nanos, Timestamp, etc) prior to writing records to hive.
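The conversion I mean looks roughly like this (a sketch; the case class,
field names, and input RDD are hypothetical):

import java.sql.Timestamp
import org.joda.time.DateTime

case class EventRow(id: Long, time: Timestamp)
// convert the Joda DateTime to java.sql.Timestamp before writing
val rows = events.map(e => EventRow(e.id, new Timestamp(e.created.getMillis)))
sqlContext.createDataFrame(rows).write.saveAsTable("events")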
Regards,
Bryan Jeffrey
Sent from Outlook Mail
From: Cheng
with
1.5.2 - however, I am still seeing the associated errors.
Is there a bug I can follow to determine when DateTime will be supported
for Parquet?
Regards,
Bryan Jeffrey
Is there an alternative method to initialize state? An InputQueueStream joined
to a window would seem to work, but InputQueueStream does not allow
checkpointing.
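For reference, the updateStateByKey overload that takes an initial RDD,
sketched with hypothetical names (pairStream is a DStream[(String, Long)]):

import org.apache.spark.HashPartitioner

val initialState = ssc.sparkContext.parallelize(Seq(("keyA", 1L), ("keyB", 5L)))
val update = (values: Seq[Long], state: Option[Long]) =>
  Some(values.sum + state.getOrElse(0L))
val counts = pairStream.updateStateByKey(
  update, new HashPartitioner(ssc.sparkContext.defaultParallelism), initialState)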
Sent from Outlook Mail
From: Tathagata Das
Sent: Sunday, November 22, 2015 8:01 PM
To: Bryan
Cc: user
Subject: Re: Initial State
Hello.
I'm seeing an error creating a Hive Context moving from Spark 1.4.1 to
1.5.2. Has anyone seen this issue?
I'm invoking the following:
new HiveContext(sc) // sc is a Spark Context
I am seeing the following error:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding
The 1.5.2 Spark was compiled using the following options: mvn
-Dhadoop.version=2.6.1 -Dscala-2.11 -DskipTests -Pyarn -Phive
-Phive-thriftserver clean package
Regards,
Bryan Jeffrey
On Fri, Nov 20, 2015 at 2:13 PM, Bryan Jeffrey <bryan.jeff...@gmail.com>
wrote:
> Hello.
>
> I'm
Nevermind. I had a library dependency that still had the old Spark version.
On Fri, Nov 20, 2015 at 2:14 PM, Bryan Jeffrey <bryan.jeff...@gmail.com>
wrote:
> The 1.5.2 Spark was compiled using the following options: mvn
> -Dhadoop.version=2.6.1 -Dscala-2.11 -DskipTests -Pyarn -Ph
/scala/org/apache/spark/mllib/clustering/LDAModel.scala#L350
-bryan
On Tue, Nov 17, 2015 at 3:06 AM, frula00 <i...@crossing-technologies.com>
wrote:
> Hi,
> I'm working in Java, with Spark 1.3.1 - I am trying to extract data from
> the
master
spark://10.0.0.4:7077 --packages
com.datastax.spark:spark-cassandra-connector_2.11:1.5.0-M1 --hiveconf
"spark.cores.max=2" --hiveconf "spark.executor.memory=2g"
Do I perhaps need to include an additional library to do the default
conversion?
Regards,
Bryan Jeffrey
On Th
Yes, I do - I found your example of doing that later in your slides. Thank
you for your help!
On Thu, Nov 12, 2015 at 12:20 PM, Mohammed Guller <moham...@glassbeam.com>
wrote:
> Did you mean Hive or Spark SQL JDBC/ODBC server?
>
>
>
> Mohammed
>
>
>
> *From:*
TIONS (
keyspace "c2", table "detectionresult" );
]Error: java.io.IOException: Failed to open native connection to Cassandra
at {10.0.0.4}:9042 (state=,code=0)
This seems to be connecting to localhost regardless of the value I set
spark.cassandra.connection.host to.
Regards,
Mohammed,
That is great. It looks like a perfect scenario. Would I be able to make
the created DF queryable over the Hive JDBC/ODBC server?
Regards,
Bryan Jeffrey
On Wed, Nov 11, 2015 at 9:34 PM, Mohammed Guller <moham...@glassbeam.com>
wrote:
> Short answer: yes.
>
>
>
>
Answer: In beeline run the following: SET
spark.cassandra.connection.host="10.0.0.10"
On Thu, Nov 12, 2015 at 1:13 PM, Bryan Jeffrey <bryan.jeff...@gmail.com>
wrote:
> Mohammed,
>
> While you're willing to answer questions, is there a trick to getting the
> H
Anyone have thoughts or a similar use-case for SparkSQL / Cassandra?
Regards,
Bryan Jeffrey
-Original Message-
From: "Bryan Jeffrey" <bryan.jeff...@gmail.com>
Sent: 11/4/2015 11:16 AM
To: "user" <user@spark.apache.org>
Subject: Cassandra via SparkSQL
the manually calculated fields are correct. However, the
dynamically calculated (string) partition for idAndSource is a random field
from within my case class. I've duplicated this with several other classes
and have seen the same result (I use this example because it's very simple).
Any idea if this is a known bug? Is there a workaround?
Regards,
Bryan Jeffrey
SparkConf().set("spark.driver.allowMultipleContexts", "true")
  .setAppName(appName).setMaster(master)
new StreamingContext(conf, Seconds(seconds))
}
Regards,
Bryan Jeffrey
On Wed, Nov 4, 2015 at 9:49 AM, Ted Yu <yuzhih...@gmail.com> wrote:
> Are you trying to sp
conversion prior to
insertion?
Regards,
Bryan Jeffrey
Deenar,
This worked perfectly - I moved to SQL Server and things are working well.
Regards,
Bryan Jeffrey
On Thu, Oct 29, 2015 at 8:14 AM, Deenar Toraskar <deenar.toras...@gmail.com>
wrote:
> Hi Bryan
>
> For your use case you don't need to have multiple metastores. The defa
of the Spark documentation, but do not see version
specified anywhere - it would be a good addition.
Thank you,
Bryan Jeffrey
essible (and
not partitioned).
Is there a straightforward way to write to partitioned tables using Spark
SQL? I understand that the read performance for partitioned data is far
better - are there other performance improvements that might be better to
use instead of partitioning?
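The write path I have in mind, sketched (format, table, and column names are
placeholders):

df.write
  .format("parquet")
  .partitionBy("eventDate")
  .saveAsTable("events")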
Regards,
Bryan Jeffrey
issues
(3) When partitioning without maps I see frequent out of memory issues
I'll update this email when I've got a more concrete example of problems.
Regards,
Bryan Jeffrey
On Wed, Oct 28, 2015 at 1:33 PM, Susan Zhang <suchenz...@gmail.com> wrote:
> Have you tried partitionBy?
>
every time. Is this a
known issue? Is there a workaround?
Regards,
Bryan Jeffrey
On Wed, Oct 28, 2015 at 3:13 PM, Bryan Jeffrey <bryan.jeff...@gmail.com>
wrote:
> Susan,
>
> I did give that a shot -- I'm seeing a number of oddities:
>
> (1) 'Partition By' appears only accepts
MetadataTypedColumnsetSerDe                                               |
| InputFormat:  org.apache.hadoop.mapred.SequenceFileInputFormat          |
| OutputFormat: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat |
This seems like a pretty big bug associated with persistent tables. Am I
missing a step somewhere?
Thank you,
Bryan Jeffrey
On Wed, Oct 28, 2015 at 4:10
Jerry,
Thank you for the note. It sounds like you were able to get further than I
have - any insight? Is it just a Spark 1.4.1 vs. Spark 1.5 difference?
Regards,
Bryan Jeffrey
-Original Message-
From: "Jerry Lam" <chiling...@gmail.com>
Sent: 10/28/2015 6:29 PM
To: "Bryan
storage location for the data. That seems very hacky
though, and likely to result in maintenance issues.
Regards,
Bryan Jeffrey
-Original Message-
From: "Yana Kadiyska" <yana.kadiy...@gmail.com>
Sent: 10/28/2015 8:32 PM
To: "Bryan Jeffrey" <bryan.jeff..
me to a persistent Hive table accomplished? Has
anyone else run into the same issue?
Regards,
Bryan Jeffrey
(broadcasting the smaller set).
For joining two large datasets, it would seem better to repartition both sets
in the same way and then join each partition. Is there a suggested practice
for this problem?
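The approach I have in mind, sketched (left and right are hypothetical pair
RDDs; the partition count is a placeholder):

import org.apache.spark.HashPartitioner

val partitioner = new HashPartitioner(64)
val leftPartitioned = left.partitionBy(partitioner)
val rightPartitioned = right.partitionBy(partitioner)
// with matching partitioners, the join avoids a second shuffle
val joined = leftPartitioned.join(rightPartitioned)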
Thank you,
Bryan Jeffrey
All,
The error resolved to a bad version of jline pulled in from Maven. The jline
version is defined by 'scala.version' -- the 2.11 version does not exist in
Maven. Instead, the following should be used:
<dependency>
  <groupId>org.scala-lang</groupId>
  <artifactId>jline</artifactId>
  <version>2.11.0-M3</version>
</dependency>
Regards,
Bryan Jeffrey
All,
I'm seeing the following error compiling Spark 1.4.1 w/ Scala 2.11 & Hive
support. Any ideas?
mvn -Dhadoop.version=2.6.1 -Dscala-2.11 -DskipTests -Pyarn -Phive
-Phive-thriftserver package
[INFO] Spark Project Parent POM .. SUCCESS [4.124s]
[INFO] Spark Launcher
t the method calls, the function
that is called appears to be the same. I was hoping an example might
shed some light on the issue.
Regards,
Bryan Jeffrey
On Thu, Oct 8, 2015 at 7:04 AM, Aniket Bhatnagar <aniket.bhatna...@gmail.com
> wrote:
> Here is an example:
>
> val
= initialRDD)
> counts.print()
>
> Thanks,
> Aniket
>
>
> On Thu, Oct 8, 2015 at 5:48 PM Bryan Jeffrey <bryan.jeff...@gmail.com>
> wrote:
>
>> Aniket,
>>
>> Thank you for the example - but that's not quite what I'm looking for.
>
Nukunj,
No, I'm not calling set w/ master at all. This ended up being a foolish
configuration problem with my slaves file.
Regards,
Bryan Jeffrey
On Fri, Sep 25, 2015 at 11:20 PM, N B <nb.nos...@gmail.com> wrote:
> Bryan,
>
> By any chance, are you calling SparkConf.s
parkcheckpoint --broker kafkaBroker:9092 --topic test
--numStreams 9 --threadParallelism 9
Even when I put a long-running job in the queue, none of the other nodes
are anything but idle.
Am I missing something obvious?
Regards,
Bryan Jeffrey
On Fri, Sep 25, 2015 at 8:28 AM, Akhil Das <a
45 INFO SparkContext: Running Spark version 1.4.1
15/09/25 16:45:45 INFO SparkContext: Spark configuration:
spark.app.name=MainClass
spark.default.parallelism=6
spark.driver.supervise=true
spark.jars=file:/tmp/OinkSpark-1.0-SNAPSHOT-jar-with-dependencies.jar
spark.logConf=true
spark.master=local[*]
spark.rpc.askTimeout=10
spark.streaming.receiver.maxRate=500
As you can see, despite -Dmaster=spark://sparkserver:7077, the streaming
context still registers the master as local[*]. Any idea why?
Thank you,
Bryan Jeffrey
Tathagata,
Simple batch jobs do work. The cluster has a good set of resources and a
limited input volume on the given Kafka topic.
The job works on the small 3-node standalone-configured cluster I have set up
for testing.
Regards,
Bryan Jeffrey
-Original Message-
From: "Tathagat
Also - I double checked - we're setting the master to "yarn-cluster"
-Original Message-
From: "Tathagata Das" <t...@databricks.com>
Sent: 9/23/2015 2:38 PM
To: "Bryan" <bryan.jeff...@gmail.com>
Cc: "user" <user@spark.apache.org>
Marcelo,
The error below is from the application logs. The Spark streaming context is
initialized and actively processing data when YARN claims that the context is
not initialized.
There are a number of errors, but they're all associated with the SSC shutting
down.
Regards,
Bryan Jeffrey
need to change to allow Yarn to initialize Spark streaming (vs.
batch) jobs?
Thank you,
Bryan Jeffrey
counts to a database. Is there a built-in mechanism or established
pattern to execute periodic jobs in Spark Streaming?
Regards,
Bryan Jeffrey
Akhil,
This looks like the issue. I'll update my path to include the (soon to be
added) winutils & assoc. DLLs.
Thank you,
Bryan
-Original Message-
From: "Akhil Das" <ak...@sigmoidanalytics.com>
Sent: 9/14/2015 6:46 AM
To: "Bryan Jeffrey" <bryan.je
en
something similar?
Regards,
Bryan Jeffrey
for Spark dev in an enterprise environment?
What was the outcome?
Regards,
Bryan Jeffrey
Thank you for the quick responses. It's useful to have some insight from
folks already extensively using Spark.
Regards,
Bryan Jeffrey
On Tue, Sep 8, 2015 at 10:28 AM, Sean Owen <so...@cloudera.com> wrote:
> Why would Scala vs Java performance be different Ted? Relatively
Hello. We're getting started with Spark Streaming. We're working to build
some unit/acceptance testing around functions that consume DStreams. The
current method for creating DStreams is to populate the data by creating an
InputDStream:
val input = Array(TestDataFactory.CreateEvent(123
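The fuller pattern looks roughly like this (a sketch; TestDataFactory and the
queue-based stream are our hypothetical test scaffolding):

import scala.collection.mutable

val events = Seq(TestDataFactory.CreateEvent(123), TestDataFactory.CreateEvent(456))
val queue = mutable.Queue(ssc.sparkContext.parallelize(events))
val input = ssc.queueStream(queue) // each queued RDD becomes one batch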
Hi Praveen,
In MLLib, the major difference is that RandomForestClassificationModel
makes use of a newer API which utilizes ML pipelines. I can't say for
certain if they will produce the same exact result for a given dataset, but
I believe they should.
Bryan
On Wed, Jul 29, 2015 at 12:14 PM
I'm not sure what the expected performance should be for this amount of
data, but you could try to increase the timeout with the property
spark.akka.timeout to see if that helps.
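Something like this (a sketch; the value is in seconds on 1.x-era Spark):

import org.apache.spark.SparkConf

val conf = new SparkConf().set("spark.akka.timeout", "300")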
Bryan
On Sun, Apr 26, 2015 at 6:57 AM, Deepak Gopalakrishnan dgk...@gmail.com
wrote:
Hello All,
I'm trying