>>
>>
>>
>>
>>
>> *From:* kant kodali [mailto:kanth...@gmail.com]
>> *Sent:* Friday, February 17, 2017 5:03 PM
>> *To:* Alex Kozlov
>> *Cc:* user @spark
>> *Subject:* Re: question on SPARK_WORKER_CORES
>>
>>
>>
> I want to increase the number of parallel tasks running from
> 4 to 16, so I exported an env variable called SPARK_WORKER_CORES=16 in
> conf/spark-env.sh. I thought that should do it, but it doesn't. It still
> shows me 4. Any idea?
>
>
> Thanks much!
>
>
>
--
Alex Kozlov
(408) 507-4987
(650) 887-2135 efax
ale...@gmail.com
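
A hedged note on the SPARK_WORKER_CORES question above, since the fragment does not say how the cluster was launched (a standalone cluster is assumed here): SPARK_WORKER_CORES is read only when a standalone worker starts, so the workers must be restarted after editing conf/spark-env.sh, and the application must also be allowed to claim the extra cores. A minimal sketch in Scala; the app name and master URL are hypothetical:

{code}
import org.apache.spark.{SparkConf, SparkContext}

// Ask the standalone master for up to 16 cores for this application.
// This only takes effect if the workers were restarted after
// SPARK_WORKER_CORES was raised, so that they advertise 16 cores.
val conf = new SparkConf()
  .setAppName("worker-cores-check")        // hypothetical app name
  .setMaster("spark://master:7077")        // hypothetical master URL
  .set("spark.cores.max", "16")            // cap on cores this app may claim
  .set("spark.default.parallelism", "16")  // match default task count to cores

val sc = new SparkContext(conf)
println(s"defaultParallelism = ${sc.defaultParallelism}")
{code}

The worker web UI (port 8081 by default) also shows how many cores each worker registered with, which is a quick way to confirm the new value took effect.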
>> I thought about using a data cache as well for serving the data.
>> The data cache should have the capability to serve the historical data
>> in milliseconds (maybe up to 30 days of data).
>> --
>> Thanks
>> Deepak
>> www.bigdatabig.com
>>
>>
--
Alex Kozlov
ale...@gmail.com
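
On the data-cache idea above, a minimal sketch, assuming the 30-day window fits in cluster memory plus local disk and that Spark's own block cache is an acceptable stand-in for a dedicated cache; the table and column names are hypothetical:

{code}
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.storage.StorageLevel

val sqlContext = new HiveContext(sc)  // sc: the SparkContext from spark-shell

// Hypothetical table/columns: keep only the most recent 30 days hot.
val recent = sqlContext.sql(
  "SELECT * FROM events " +
  "WHERE event_date >= date_sub(from_unixtime(unix_timestamp()), 30)")

recent.persist(StorageLevel.MEMORY_AND_DISK)  // spill to disk instead of failing
recent.registerTempTable("events_recent")     // serve queries from the cached copy
recent.count()                                // materialize the cache eagerly
{code}

Whether this answers in milliseconds depends on query shape and cluster size; a purpose-built cache in front of Spark may still be needed for strict latency targets.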
> On Tue, Mar 15, 2016 at 12:28 AM, Sun, Rui wrote:
>
>> It seems as.data.frame() defined in SparkR masks the version in the R base
>> package.
>>
>> We can try to see if we can change the implementation of as.data.frame()
>> in SparkR to avoid such masking.
> I get the following error:
>
> > dds <- DESeqDataSetFromMatrix(countData, as.data.frame(condition), ~ condition)
> Error in DataFrame(colData, row.names = rownames(colData)) :
> cannot coerce class "data.frame" to a DataFrame
>
> I am really stumped. I am not using any Spark functions.
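
A plausible workaround for the error above, assuming the masking described earlier is the cause (a suggestion, not a confirmed fix from this thread): call the base version explicitly, i.e. pass base::as.data.frame(condition) to DESeqDataSetFromMatrix, or detach SparkR before running the DESeq2 code, so the call no longer dispatches through SparkR's override.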
> My question is: why not RAID? What is the argument/reason for not using
> RAID?
>
> Thanks!
> -Eddie
>
--
Alex Kozlov
>>> I'd like Spark cores to just be available in total, and the first
>>> app that needs them takes as much as required from what's available at the
>>> moment. Is that possible? I believe Mesos is able to set resources free if
>>> they're not in use. Is it possible with YARN?
>>>
>>> I'd appreciate it if you could share your thoughts or experience on the
>>> subject.
>>>
>>> Thanks.
>>> --
>>> Be well!
>>> Jean Morozov
>>>
>>
--
Alex Kozlov
ale...@gmail.com
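
On the YARN side of the question above: dynamic allocation gets close to this take-what-is-free, give-it-back model. A minimal sketch, assuming YARN with the external shuffle service installed on the NodeManagers; the app name and executor cap are illustrative:

{code}
import org.apache.spark.{SparkConf, SparkContext}

// With dynamic allocation, executors are requested while work is queued
// and released after an idle timeout, so unused cores return to YARN.
val conf = new SparkConf()
  .setAppName("elastic-app")  // hypothetical app name
  .setMaster("yarn-client")
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")  // keeps shuffle data alive when executors are removed
  .set("spark.dynamicAllocation.minExecutors", "1")
  .set("spark.dynamicAllocation.maxExecutors", "20")  // illustrative cap
  .set("spark.dynamicAllocation.executorIdleTimeout", "60s")

val sc = new SparkContext(conf)
{code}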
> # Change this to set Spark log level
>
> log4j.logger.org.apache.spark=WARN
>
>
> # Silence akka remoting
>
> log4j.logger.Remoting=WARN
>
>
> # Ignore messages below warning level from Jetty, because it's a bit
> verbose
>
> log4j.logger.org.eclipse.jetty=WARN
> Error: A fatal exception has occurred. Program will exit.
>
>
> I tried a bunch of different quoting but nothing produced a good result. I
> also tried passing it directly to activator using -jvm but it still
> produces the same results with verbose logging. Is there a way I can tell
> if it’s picking up my file?
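
One way to answer the is-it-picking-up-my-file question from inside the application, a sketch assuming log4j 1.x (which Spark 1.x uses): inspect or force the logger levels programmatically.

{code}
import org.apache.log4j.{Level, Logger}

// If the custom log4j.properties was loaded, this logger should already
// be at WARN; printing its effective level is a quick sanity check.
val sparkLogger = Logger.getLogger("org.apache.spark")
println(s"effective level = ${sparkLogger.getEffectiveLevel}")

// Fallback: force the levels in code, bypassing the properties file.
sparkLogger.setLevel(Level.WARN)
Logger.getLogger("Remoting").setLevel(Level.WARN)
Logger.getLogger("org.eclipse.jetty").setLevel(Level.WARN)
{code}

Alternatively, passing -Dlog4j.debug=true to the JVM makes log4j 1.x print which configuration file it actually read.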
--
Alex Kozlov
(408) 507-4987
(408) 830-9982 fax
(650) 887-2135 efax
ale...@gmail.com
> --
> Ruslan Dautkhanov
>
> On Mon, Sep 7, 2015 at 3:52 PM, Alex Kozlov wrote:
>
>> No, it was created in Hive by CTAS, but any help is appreciated...
>>
>> On Mon, Sep 7, 2015 at 2:51 PM, Ruslan Dautkhanov
>> wrote:
>>
>>> That parquet table was created by Spark?
> Spark's parquet array encoding used to be incompatible with Hive, for example,
> if I remember correctly.
> On Mon, Sep 7, 2015, 2:57 PM Alex Kozlov wrote:
>
>> I am trying to read an (array typed) parquet file in spark-shell (Spark
>> 1.4.1 with Hadoop 2.6):
>>
>> {code}
>> $ bin/spark-shell
I get the same error if I do:
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
val results = sqlContext.sql("SELECT * FROM stats")
but it does work from Hive shell directly...
On Mon, Sep 7, 2015 at 1:56 PM, Alex Kozlov wrote:
> I am trying to read an (array typed) parquet file in spark-shell (Spark
> 1.4.1 with Hadoop 2.6):
I am trying to read an (array typed) parquet file in spark-shell (Spark
1.4.1 with Hadoop 2.6):
{code}
$ bin/spark-shell
log4j:WARN No appenders could be found for logger
(org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
{code}
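
Not a confirmed fix from this thread, but one thing worth trying in Spark 1.4: read the parquet files behind the table directly, bypassing the Hive metastore definition, since Hive and Spark have historically used different parquet encodings for arrays. The warehouse path below is a guess and must be replaced with the table's actual location:

{code}
// sqlContext can be the HiveContext created above (or the one spark-shell provides).
val df = sqlContext.read.parquet("/user/hive/warehouse/stats")  // hypothetical path
df.printSchema()  // check how the array column was actually encoded
df.show(5)
{code}

Another knob that sometimes matters for Hive-written parquet is sqlContext.setConf("spark.sql.hive.convertMetastoreParquet", "false"), which makes Spark fall back to Hive's own serde for metastore parquet tables.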