t; arguments, depending on the Spark version?
>>
>> *From:* kant kodali [mailto:kanth...@gmail.com]
>> *Sent:* Friday, February 17, 2017 5:03 PM
>> *To:* Alex Kozlov <ale...@gmail.com>
>> *Cc:* user @spark <user@spark.apach
increase the number of parallel tasks running from
> 4 to 16, so I exported an env variable called SPARK_WORKER_CORES=16 in
> conf/spark-env.sh. I thought that would do it, but it doesn't. It still
> shows me 4. Any idea?
>
>
> Thanks much!
>
>
>
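[For reference, a minimal sketch of the relevant knobs in standalone mode; hostnames and values below are illustrative, not from this thread. SPARK_WORKER_CORES only caps what each worker *offers*; the cores an application actually uses are bounded by what it requests (`spark.cores.max` / `--total-executor-cores`) and by the partition count of the data.]

```shell
# conf/spark-env.sh on each worker: cores this worker offers to apps
export SPARK_WORKER_CORES=16

# Client side: cores the application actually requests, plus the default
# shuffle parallelism; "spark://master:7077" is a placeholder master URL.
spark-shell \
  --master spark://master:7077 \
  --total-executor-cores 16 \
  --conf spark.default.parallelism=16
```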
--
Alex Kozlov
(408) 507-4987
(650) 887-2135 efax
ale...@gmail.com
>> message enhancer and then finally a processor.
>> I thought about using a data cache as well for serving the data.
>> The data cache should be able to serve the historical data
>> in milliseconds (maybe up to 30 days of data).
>> --
>> Thanks
>> Deepak
>> www.bigdatabig.com
>>
>>
--
Alex Kozlov
ale...@gmail.com
ame()
>> in SparkR to avoid such masking.
>>
>>
>>
>> *From:* Alex Kozlov [mailto:ale...@gmail.com]
>> *Sent:* Tuesday, March 15, 2016 2:59 PM
>> *To:* roni <roni.epi...@gmail.com>
>> *Cc:* user@spark.apache.org
>> *Subject:* Re: sparkR is
I am not using any Spark function, so I would expect
> it to work as plain R code.
> Why does it not work?
>
> Appreciate the help
> -R
>
>
--
Alex Kozlov
(408) 507-4987
(650) 887-2135 efax
ale...@gmail.com
; as separate mount points)
>
> My question is: why not RAID? What is the argument/reason for not using
> RAID?
>
> Thanks!
> -Eddie
>
--
Alex Kozlov
>
--
Alex Kozlov
(408) 507-4987
(650) 887-2135 efax
ale...@gmail.com
>>>
>>> Ideally I'd like Spark cores to simply be available in a shared pool, and
>>> the first app that needs them takes as much as required from what is
>>> available at the moment. Is that possible? I believe Mesos can free
>>> resources when they're not in use. Is the same possible with YARN?
>>>
>>> I'd appreciate if you could share your thoughts or experience on the
>>> subject.
>>>
>>> Thanks.
>>> --
>>> Be well!
>>> Jean Morozov
>>>
>>
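[For the YARN case, dynamic allocation comes close to this. A hedged sketch: these are real `spark.dynamicAllocation.*` properties, but the values are illustrative, and the external shuffle service must also be configured on each NodeManager for executors to be released safely.]

```properties
# spark-defaults.conf — illustrative values
spark.dynamicAllocation.enabled       true
spark.shuffle.service.enabled         true
spark.dynamicAllocation.minExecutors  0
spark.dynamicAllocation.maxExecutors  20
```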
--
Alex Kozlov
ale...@gmail.com
%c{1}: %m%n
>
> # Change this to set Spark log level
> log4j.logger.org.apache.spark=WARN
>
> # Silence akka remoting
> log4j.logger.Remoting=WARN
>
> # Ignore messages below warning level from Jetty, because it's a bit verbose
> log
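[For context, the surrounding lines in Spark 1.x's conf/log4j.properties.template look roughly like this; reconstructed from memory, so verify against the template shipped with your Spark version.]

```properties
# Set everything to be logged to the console
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n

# Change this to set Spark log level
log4j.logger.org.apache.spark=WARN

# Silence akka remoting
log4j.logger.Remoting=WARN
```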
atal exception has occurred. Program will exit.
>
>
> I tried a bunch of different quoting, but nothing produced a good result. I
> also tried passing it directly to activator using -jvm, but it still
> produces the same results with verbose logging. Is there a way I can tell
> if it'
--
Alex Kozlov
(408) 507-4987
(408) 830-9982 fax
(650) 887-2135 efax
ale...@gmail.com
parquet file in Spark;
> 2. upgrade to Spark 1.5.
>
> --
> Ruslan Dautkhanov
>
> On Mon, Sep 7, 2015 at 3:52 PM, Alex Kozlov <ale...@gmail.com> wrote:
>
>> No, it was created in Hive by CTAS, but any help is appreciated...
>>
>> On Mon, Sep 7, 2015 at 2:51 PM
Spark prior to 1.5 was often incompatible with Hive, for example, if I
> remember correctly.
> On Mon, Sep 7, 2015, 2:57 PM Alex Kozlov <ale...@gmail.com> wrote:
>
>> I am trying to read an (array typed) parquet file in spark-shell (Spark
>> 1.4.1 with Hadoop 2.6):
The same error if I do:
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
val results = sqlContext.sql("SELECT * FROM stats")
but it does work from Hive shell directly...
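[A sketch of the first suggested workaround: reading the table's files as parquet directly, bypassing the Hive metastore. The warehouse path below is a placeholder and depends on your Hive setup.]

```scala
// Hypothetical location of the table's files; adjust for your warehouse.
val stats = sqlContext.read.parquet("/user/hive/warehouse/stats")
stats.printSchema()

// Re-expose it to SQL under a temp name and query it.
stats.registerTempTable("stats_parquet")
sqlContext.sql("SELECT COUNT(*) FROM stats_parquet").show()
```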
On Mon, Sep 7, 2015 at 1:56 PM, Alex Kozlov <ale...@gmail.com> wrote:
> I am trying to r
I am trying to read an (array typed) parquet file in spark-shell (Spark
1.4.1 with Hadoop 2.6):
{code}
$ bin/spark-shell
log4j:WARN No appenders could be found for logger
(org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See