2018-03-17 14:15 GMT+01:00 Denis Bolshakov <bolshakov.de...@gmail.com>:

Hello Serega,

https://spark.apache.org/docs/latest/sql-programming-guide.html

Please try the SaveMode.Append option. Does it work for you?

On Sat, 17 Mar 2018 at 15:19, Serega Sheypak wrote:
> Hi, I'm using spark-sql to process my data and store the result as
> parquet, partitioned
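For storing results as partitioned parquet without clobbering existing data, the DataFrame writer's append mode is the documented route. A minimal sketch, assuming Spark 2.x; the input path, partition column, and output location are all hypothetical:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().appName("append-sketch").getOrCreate()

// Hypothetical source location.
val df = spark.read.json("hdfs:///incoming/events/")

df.write
  .mode(SaveMode.Append)        // add new files; do not fail or overwrite
  .partitionBy("event_date")    // hypothetical partition column
  .parquet("hdfs:///warehouse/events/")
```

Each append adds new files under the matching partition directories; it does not deduplicate against data already there.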
> override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
>                                                                ^
> :46: error: not found: type Row
> override def evaluate(buffer: Row): Any = {
>                               ^
>
>
>
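The `not found: type Row` error above usually means the shell compiled the pasted class before its imports were in scope, which happens when a multi-line example is pasted line by line. Pasting everything in one `:paste` block with the SQL imports included typically fixes it. A sketch of the needed preamble (Spark 2.x spark-shell assumed):

```scala
// In spark-shell, type :paste first, then paste the whole definition,
// then press Ctrl-D. Make sure these imports are part of the paste:
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}

// With Row and MutableAggregationBuffer in scope, the signatures from the
// error message compile as written:
//   override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit
//   override def evaluate(buffer: Row): Any
```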
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Pasting-into-spark-shell-doesn-t-work-for-Databricks-example-tp28113.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
--
//with Best Regards
--Denis Bolshakov
e-mail: bolshakov.de...@gmail.com
>>> cannot turn off kerberos on `cluster C`.
>>> 4. We can turn kerberos on/off on `cluster B`; currently it is turned off.
>>> 5. The Spark app is built on top of the RDD API and does not depend on spark-sql.
>>>
>>> Does anybody know how to write data using the RDD API to a remote cluster
>>> which is running with Kerberos?
--
//with Best Regards
--Denis Bolshakov
e-mail: bolshakov.de...@gmail.com
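One hedged approach for an RDD-only app: log in from a keytab on the driver before writing, so the Hadoop client authenticates against the Kerberized cluster. This is a sketch, not a tested recipe; the principal, keytab path, and namenode address below are hypothetical, and executors may additionally need delegation tokens (on YARN, `spark-submit --principal --keytab` handles obtaining and renewing them):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.security.UserGroupInformation

// Tell the Hadoop client to use Kerberos, then log in from a keytab.
val hadoopConf = new Configuration()
hadoopConf.set("hadoop.security.authentication", "kerberos")
UserGroupInformation.setConfiguration(hadoopConf)
UserGroupInformation.loginUserFromKeytab("app@EXAMPLE.COM", "/etc/security/keytabs/app.keytab")

// After login, plain RDD output APIs can target the remote HDFS, e.g.:
// rdd.saveAsTextFile("hdfs://remote-nn.example.com:8020/data/out")
```

Cross-realm trust between the clusters may also be required when both sides run Kerberos.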
Look here
http://www.slideshare.net/cloudera/top-5-mistakes-to-avoid-when-writing-apache-spark-applications
Probably it will help a bit.
Best regards,
Denis
On 11 Oct 2016 at 23:49, "Xiaoye Sun" wrote:
> Hi,
>
> Currently, I am running Spark using the
Try to build a flat (uber) jar which includes all dependencies.
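A flat (uber) jar is usually built with the sbt-assembly plugin, marking the Spark artifacts as `provided` so they are not bundled alongside the cluster's own copies. A sketch under those assumptions (plugin and Spark versions are illustrative):

```scala
// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "1.2.0")

// build.sbt — Spark itself comes from the cluster, so mark it provided.
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.8" % "provided"

// Build the flat jar with: sbt assembly
// The result lands under target/scala-*/<name>-assembly-<version>.jar
```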
On 11 Oct 2016 at 22:11, "doruchiulan" wrote:
> Hi,
>
> I have a problem that's been bothering me for a few days, and I'm pretty
> much out of ideas.
>
> I built a Spark Docker container where Spark runs in
of x & y become the same.
>
> Hence many elements with different keys fall into a single partition at
> times.
>
>
>
> Thanks,
> Sujeet
>
--
//with Best Regards
--Denis Bolshakov
e-mail: bolshakov.de...@gmail.com
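What Sujeet is seeing is expected from Spark's default `HashPartitioner`, which assigns each key to `nonNegativeMod(key.hashCode, numPartitions)`; distinct keys routinely collide modulo a small partition count. A self-contained illustration (plain Scala, no Spark needed; the keys are hypothetical):

```scala
// Mirror of HashPartitioner's assignment rule.
def partitionFor(key: Any, numPartitions: Int): Int = {
  val raw = key.hashCode % numPartitions
  if (raw < 0) raw + numPartitions else raw
}

// Distinct Int keys that all collide modulo 4:
val assigned = Seq(1, 5, 9, 13).map(k => partitionFor(k, 4))
// assigned == List(1, 1, 1, 1) — four different keys, one partition
```

If such skew hurts, a custom `Partitioner` over a better-distributed key, or simply more partitions, spreads the load.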
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-tasks-blockes-randomly-on-standalone-cluster-tp27693.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>
--
//with Best Regards
--Denis Bolshakov
e-mail: bolshakov.de...@gmail.com
Hello,

I would also set the Java options for the driver.

Best regards,
Denis

On 4 Sep 2016 at 0:31, "Sourav Mazumder" <sourav.mazumde...@gmail.com> wrote:
> Hi,
>
> I am trying to create an RDD by using swebhdfs to a remote Hadoop cluster
> which is protected by Knox and uses SSL.
>
> The code
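Setting the driver's Java options for SSL typically means pointing the JVM at a truststore containing the Knox gateway's certificate; the executors usually need the same. A hedged spark-submit sketch (truststore path, password, class, and jar names are hypothetical):

```shell
# Import the Knox gateway certificate into a truststore first, e.g.:
#   keytool -importcert -file knox.pem -keystore knox-truststore.jks
spark-submit \
  --conf "spark.driver.extraJavaOptions=-Djavax.net.ssl.trustStore=/etc/pki/knox-truststore.jks -Djavax.net.ssl.trustStorePassword=changeit" \
  --conf "spark.executor.extraJavaOptions=-Djavax.net.ssl.trustStore=/etc/pki/knox-truststore.jks -Djavax.net.ssl.trustStorePassword=changeit" \
  --class com.example.Main \
  app.jar
```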
==
>
> logData RDD takes *2.1 KB*
>
> errors RDD takes *1.3 KB*
>
>
>
> Regards
>
> Rohit Kumar Prusty
>
> +91-9884070075
>
>
>
--
//with Best Regards
--Denis Bolshakov
e-mail: bolshakov.de...@gmail.com
ties/Office Services II, A03031, OED-Employment Dev (031),
> 1979-10-24T00:00:00, 56705.00, 54135.44))
>
> Expected output:
>
> I need the individual elements from the WrappedArray.
>
> The .json file is attached below.
>
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
--
//with Best Regards
--Denis Bolshakov
e-mail: bolshakov.de...@gmail.com
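To pull individual elements out of a WrappedArray column, `explode` on the array column is the usual Spark SQL route. A sketch, assuming Spark 2.x; the file name and the array column name `employees` are hypothetical (the real names depend on the attached .json):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, explode}

val spark = SparkSession.builder().master("local[*]").appName("unwrap").getOrCreate()

val df = spark.read.json("employees.json")  // hypothetical file name

// One output row per array element; then flatten the struct fields.
df.select(explode(col("employees")).as("e"))
  .select("e.*")
  .show(truncate = false)
```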