Re: question on SPARK_WORKER_CORES

2017-02-17 Thread Alex Kozlov
... arguments, depending on the Spark version?

>> *From:* kant kodali [mailto:kanth...@gmail.com]
>> *Sent:* Friday, February 17, 2017 5:03 PM
>> *To:* Alex Kozlov <ale...@gmail.com>
>> *Cc:* user @spark <user@spark.apache.org>

Re: question on SPARK_WORKER_CORES

2017-02-17 Thread Alex Kozlov
> ... increase the number of parallel tasks running from 4 to 16, so I
> exported an env variable called SPARK_WORKER_CORES=16 in
> conf/spark-env.sh. I thought that should do it, but it doesn't. It still
> shows me 4. Any idea?
>
> Thanks much!
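The replies are truncated in this archive, so here is only a hedged sketch of the knobs usually involved, assuming a standalone cluster (the master host name is a placeholder): SPARK_WORKER_CORES only raises what a worker can offer, while the application still has to request the cores.

    # conf/spark-env.sh on each worker (workers need a restart to pick it up)
    export SPARK_WORKER_CORES=16

    # the application must also request cores, e.g. when launching the shell
    bin/spark-shell --master spark://master-host:7077 --total-executor-cores 16

In local mode the worker setting does not apply at all; the parallelism comes from the master URL itself, e.g. --master local[16].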

Re: Processing millions of messages in milliseconds -- Architecture guide required

2016-04-19 Thread Alex Kozlov
>> ... message enhancer and then finally a processor.
>> I thought about using a data cache as well for serving the data.
>> The data cache should have the capability to serve the historical data
>> in milliseconds (may be up to 30 days of data).
>> --
>> Thanks
>> Deepak
>> www.bigdatabig.com
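The quoted message describes the architecture (ingest, enhance, process, cache) rather than code. Purely as a sketch of that pipeline shape, a minimal Spark Streaming skeleton could look like the following; the socket source, the uppercase step, and the println step are stand-ins for the real source, enhancer, and processor, and micro-batching alone does not guarantee the millisecond latencies asked for.

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // sc is the SparkContext provided by spark-shell
    val ssc = new StreamingContext(sc, Seconds(1))           // 1-second micro-batches
    val messages = ssc.socketTextStream("localhost", 9999)   // stand-in for the real message source
    val enhanced = messages.map(_.toUpperCase)               // stand-in for the "message enhancer"
    enhanced.foreachRDD(rdd => rdd.foreach(println))         // stand-in for the downstream processor
    ssc.start()
    ssc.awaitTermination()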

Re: sparkR issues ?

2016-03-15 Thread Alex Kozlov
>> ...ame() in SparkR to avoid such covering.
>>
>> *From:* Alex Kozlov [mailto:ale...@gmail.com]
>> *Sent:* Tuesday, March 15, 2016 2:59 PM
>> *To:* roni <roni.epi...@gmail.com>
>> *Cc:* user@spark.apache.org
>> *Subject:* Re: sparkR issues ?

Re: sparkR issues ?

2016-03-15 Thread Alex Kozlov
> ... I am not using any Spark function, so I would expect it to work as
> simple R code. Why does it not work?
>
> Appreciate the help
> -R

Re: Spark on RAID

2016-03-08 Thread Alex Kozlov
> ... as separate mount points)
>
> My question is: why not RAID? What is the argument/reason for not using
> RAID?
>
> Thanks!
> -Eddie
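The quoted parenthetical echoes Spark's hardware guidance of using plain disks as separate mount points rather than RAID. As a hedged sketch of what that looks like in practice (the paths are hypothetical), Spark is simply given several independent disks and stripes its scratch data across them itself:

    # conf/spark-env.sh: several independent disks for temp/shuffle space
    export SPARK_LOCAL_DIRS=/mnt/disk1/spark,/mnt/disk2/spark,/mnt/disk3/spark

    # equivalently, in spark-defaults.conf:
    # spark.local.dir   /mnt/disk1/spark,/mnt/disk2/spark,/mnt/disk3/spark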

Re: How to query a hive table from inside a map in Spark

2016-02-14 Thread Alex Kozlov

Re: Best practises of share Spark cluster over few applications

2016-02-14 Thread Alex Kozlov
>>> Ideally I'd like Spark cores to just be available in total, and the first
>>> app that needs them takes as much as required from what is available at
>>> the moment. Is it possible? I believe Mesos is able to set resources free
>>> if they're not in use. Is it possible with YARN?
>>>
>>> I'd appreciate it if you could share your thoughts or experience on the
>>> subject.
>>>
>>> Thanks.
>>> --
>>> Be well!
>>> Jean Morozov
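The snippet cuts off before any answer, but the standard mechanism for this on YARN is dynamic allocation, which hands idle executors back to the cluster. A minimal spark-defaults.conf sketch with illustrative values (the external shuffle service also has to be registered with the YARN NodeManagers):

    spark.dynamicAllocation.enabled               true
    spark.shuffle.service.enabled                 true
    spark.dynamicAllocation.minExecutors          0
    spark.dynamicAllocation.maxExecutors          20
    spark.dynamicAllocation.executorIdleTimeout   60s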

Re: How can I disable logging when running local[*]?

2015-10-07 Thread Alex Kozlov
> ... %c{1}: %m%n
>
> # Change this to set Spark log level
> log4j.logger.org.apache.spark=WARN
>
> # Silence akka remoting
> log4j.logger.Remoting=WARN
>
> # Ignore messages below warning level from Jetty, because it's a bit verbose
> log
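For reference, a complete minimal conf/log4j.properties along the lines quoted above (reconstructed here, not the exact file from the thread):

    log4j.rootCategory=WARN, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.target=System.err
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

    # Change this to set Spark log level
    log4j.logger.org.apache.spark=WARN

    # Silence akka remoting
    log4j.logger.Remoting=WARN

From Spark 1.4 onward, calling sc.setLogLevel("WARN") on the driver is a programmatic alternative (not something shown in the quoted snippet).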

Re: How can I disable logging when running local[*]?

2015-10-06 Thread Alex Kozlov
> ... A fatal exception has occurred. Program will exit.
>
> I tried a bunch of different quoting but nothing produced a good result. I
> also tried passing it directly to activator using –jvm but it still
> produces the same results with verbose logging. Is there a way I can tell
> if it’

Re: How can I disable logging when running local[*]?

2015-10-05 Thread Alex Kozlov

Re: Parquet Array Support Broken?

2015-09-07 Thread Alex Kozlov
> ... parquet file in Spark;
> 2. upgrade to Spark 1.5.
>
> --
> Ruslan Dautkhanov
>
> On Mon, Sep 7, 2015 at 3:52 PM, Alex Kozlov <ale...@gmail.com> wrote:
>
>> No, it was created in Hive by CTAS, but any help is appreciated...
>>
>> On Mon, Sep 7, 2015 at 2:51 PM

Re: Parquet Array Support Broken?

2015-09-07 Thread Alex Kozlov
> Spark prior to 1.5 was often incompatible with Hive, for example, if I
> remember correctly.
>
> On Mon, Sep 7, 2015, 2:57 PM Alex Kozlov <ale...@gmail.com> wrote:
>
>> I am trying to read an (array typed) parquet file in spark-shell (Spark
>> 1.4.1 with Hadoop 2.6):

Re: Parquet Array Support Broken?

2015-09-07 Thread Alex Kozlov
The same error if I do:

    val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
    val results = sqlContext.sql("SELECT * FROM stats")

but it does work from the Hive shell directly...

On Mon, Sep 7, 2015 at 1:56 PM, Alex Kozlov <ale...@gmail.com> wrote:

> I am trying to read an (array typed) parquet file in spark-shell (Spark
> 1.4.1 with Hadoop 2.6):

Parquet Array Support Broken?

2015-09-07 Thread Alex Kozlov
I am trying to read an (array typed) parquet file in spark-shell (Spark
1.4.1 with Hadoop 2.6):

{code}
$ bin/spark-shell
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
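The original command is cut off above; as a hedged reconstruction, reading such a file directly in Spark 1.4.x would typically look like this (the path is hypothetical, not taken from the thread):

    // in spark-shell, where sc is already defined
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    val df = sqlContext.read.parquet("/user/hive/warehouse/stats")  // hypothetical path
    df.printSchema()  // array columns written by Hive's Parquet writer may not resolve cleanly before 1.5
    df.show(5)

As the replies above suggest, re-creating the file with Spark's own Parquet writer or moving to Spark 1.5 were the proposed ways around the incompatibility.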