At Spark Summit 2014 they mentioned that there will be no active
development on Shark.
Thanks,
Shrikar
On Wed, Jul 2, 2014 at 3:53 PM, Subacini B wrote:
> Hi,
>
>
> http://mail-archives.apache.org/mod_mbox/spark-user/201403.mbox/%3cb75376b8-7a57-4161-b604-f919886cf...@gmail.com%3E
>
> T[...]ible to amortize the cost of http calls.
>
> TD
>
> On Fri, Jun 20, 2014 at 11:16 AM, Shrikar archak
> wrote:
>
>> Hi All,
>>
>> I was curious to know which of the two approaches is better for doing
>> analytics using Spark Streaming. Let's say we
Hi All,
I was curious to know which of the two approaches is better for doing
analytics using Spark Streaming. Let's say we want to add some metadata to
the stream being processed, like sentiment, tags, etc., and then
perform some analytics using this added metadata.
1) Is it ok to make a htt
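The question is cut off in the archive, but per TD's point above about
amortizing the cost of http calls, the usual pattern is to do the enrichment
inside mapPartitions so that any expensive client setup happens once per
partition rather than once per record. A minimal sketch; the sentiment
service URL, the Enriched case class, and the enrich helper are all
hypothetical:

  import org.apache.spark.streaming.dstream.DStream

  case class Enriched(text: String, sentiment: String)

  // Enrich each record via an external HTTP service. mapPartitions lets
  // expensive setup (an HTTP client, auth, a connection pool) happen once
  // per partition instead of once per record, which amortizes the cost.
  def enrich(lines: DStream[String]): DStream[Enriched] =
    lines.mapPartitions { records =>
      // one-time per-partition setup would go here
      records.map { text =>
        val q = java.net.URLEncoder.encode(text, "UTF-8")
        val sentiment = scala.io.Source
          .fromURL(s"http://localhost:8080/sentiment?q=$q") // hypothetical service
          .mkString
        Enriched(text, sentiment)
      }
    }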
Hi Shivani,
I use sbt-assembly to create a fat jar.
https://github.com/sbt/sbt-assembly
An example sbt file is below.
import AssemblyKeys._ // put this at the top of the file
assemblySettings
mainClass in assembly := Some("FifaSparkStreaming")
name := "FifaSparkStreaming"
version := "1.
Hi All,
Is there a way to store the streamed data as text files per day instead of
per window?
Thanks,
Shrikar
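A sketch of one way to do this, assuming lines is the DStream in question
and an illustrative base path: saveAsTextFiles writes one directory per
batch, but foreachRDD hands you the batch time, so you can build the output
path around the day instead.

  import java.text.SimpleDateFormat
  import java.util.Date

  val fmt = new SimpleDateFormat("yyyy-MM-dd")

  // Each batch still needs its own output directory (saveAsTextFile will
  // not overwrite an existing one), but grouping batches under a per-day
  // prefix lets a downstream job read a whole day, e.g. /data/2014-07-02/*.
  lines.foreachRDD { (rdd, time) =>
    val day = fmt.format(new Date(time.milliseconds))
    rdd.saveAsTextFile(s"/data/$day/batch-${time.milliseconds}")
  }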
Hi All,
I was writing a simple streaming job to get a better understanding of Spark
Streaming.
I am not understanding why the union behaviour in this particular case
*WORKS:*
val lines = ssc.socketTextStream("localhost", [...],
StorageLevel.MEMORY_AND_DISK_SER)
val words = lines.flatMap(_.
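The message is cut off here; a self-contained reconstruction of the working
case, with assumed ports, app name, and batch interval, might look like:

  import org.apache.spark.SparkConf
  import org.apache.spark.storage.StorageLevel
  import org.apache.spark.streaming.{Seconds, StreamingContext}
  import org.apache.spark.streaming.StreamingContext._ // pair-DStream ops on older Spark

  // local[4]: each socket receiver pins a core, and at least one more
  // core is needed to actually process the received data
  val conf = new SparkConf().setMaster("local[4]").setAppName("UnionTest")
  val ssc = new StreamingContext(conf, Seconds(10))

  val lines1 = ssc.socketTextStream("localhost", 9999, StorageLevel.MEMORY_AND_DISK_SER)
  val lines2 = ssc.socketTextStream("localhost", 9998, StorageLevel.MEMORY_AND_DISK_SER)

  // union the two streams before the usual word count
  val words = lines1.union(lines2).flatMap(_.split(" "))
  words.map((_, 1)).reduceByKey(_ + _).print()

  ssc.start()
  ssc.awaitTermination()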
>> [...]s change fixed this issue for a few people using the SBT
>> build, worth committing?
>>
>> On Thu, Jun 5, 2014 at 6:40 AM, Shrikar archak
>> wrote:
>> > Hi All,
>> > Now that Spark version 1.0.0 is released there should not be any
>> problem
at org.apache.spark.broadcast.BroadcastManager.<init>(BroadcastManager.scala:35)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:218)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:202)
Any help would be greatly appreciated.
Thanks,
Shrikar
On Fri, May 23, 2014 at 3:58 PM, Shrikar archak wrote:
> Still the same error no chang
> [...]eded. Your build.sbt should really look as follows:
>
> name := "Simple Project"
>
> version := "1.0"
>
> scalaVersion := "2.10.4"
>
> libraryDependencies += "org.apache.spark" %% "spark-streaming" %
> "1.0.0-SNAPSHOT&q
> On May 23, 2014, at 12:03 AM, Shrikar archak wrote:
>
> Yes, I did an sbt publish-local. OK, I will try with Spark 0.9.1.
>
> Thanks,
> Shrikar
>
>
> On Thu, May 22, 2014 at 8:53 PM, Tathagata Das <
> tathagata.das1...@gmail.com> wrote:
>
>> How are you getting S
> [...]s is weird indeed. SBT should take care of all the dependencies of
> Spark.
>
> In any case, you can try the last released Spark 0.9.1 and see if the
> problem persists.
>
>
> On Thu, May 22, 2014 at 3:59 PM, Shrikar archak wrote:
>
>> I am running it with sbt run. I am run
> [...]s to me that you have Spark classes in your execution
> environment but are missing some of Spark's dependencies.
>
> TD
>
>
>
> On Thu, May 22, 2014 at 2:27 PM, Shrikar archak
> wrote:
> > Hi All,
> >
> > I am trying to run the network count exa
Hi All,
I am trying to run the network count example as a separate standalone job
and am running into some issues.
Environment:
1) Mac OS X Mavericks
2) Latest Spark repo from GitHub.
I have a structure like this
Shrikars-MacBook-Pro:SimpleJob shrikar$ find .
.
./simple.sbt
./src
./src/main
./src/main
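The listing is truncated here. For reference, a minimal standalone version
of the network count job matching this layout (source under
./src/main/scala/) might look like the sketch below; host, port, and names
are the customary example defaults, not taken from the original mail. Feed
it with nc -lk 9999 and start it with sbt run.

  import org.apache.spark.SparkConf
  import org.apache.spark.storage.StorageLevel
  import org.apache.spark.streaming.{Seconds, StreamingContext}
  import org.apache.spark.streaming.StreamingContext._ // pair-DStream ops on older Spark

  object NetworkCount {
    def main(args: Array[String]) {
      // local[2]: one core for the socket receiver, one to process
      val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkCount")
      val ssc = new StreamingContext(conf, Seconds(1))

      val lines = ssc.socketTextStream("localhost", 9999, StorageLevel.MEMORY_AND_DISK_SER)
      val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
      counts.print()

      ssc.start()
      ssc.awaitTermination()
    }
  }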