Re: Spark Scala API still not updated for 2.13 or it's a mistake?

2022-08-02 Thread pengyh
I can use Scala 2.13 for spark-shell, but not spark-submit. Regards.

> Spark 3.3.0 supports 2.13, though you need to build it for 2.13. The default binary distro uses 2.12.
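The "build it for 2.13" step above can be sketched as follows; the helper script and Maven profile are the ones the Spark build itself provides, and the commands assume you are in a Spark 3.3.0 source checkout.

```shell
# Sketch: building Spark 3.3.0 for Scala 2.13 from a source checkout.
# First switch the POMs to the 2.13 cross-build, then build with the
# scala-2.13 profile enabled.
./dev/change-scala-version.sh 2.13
./build/mvn -DskipTests -Pscala-2.13 clean package

# Or produce a full runnable distribution tarball:
# ./dev/make-distribution.sh --name scala2.13 --tgz -Pscala-2.13
```

The resulting distribution runs both spark-shell and spark-submit on Scala 2.13.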

Re: Spark Scala API still not updated for 2.13 or it's a mistake?

2022-08-02 Thread Sean Owen
Oh ha, we do provide a pre-built binary for 2.13, oops. I can just remove that line from the docs; I think it's just out of date.

On Tue, Aug 2, 2022 at 11:01 AM Roman I wrote:
> Ok, though the doc is misleading. The official site provides both 2.12 and 2.13 pre-built binaries.
> Thanks
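The pre-built Scala 2.13 distribution mentioned above is published as a separate artifact with a `-scala2.13` suffix in its name; the mirror URL below is illustrative (any Apache mirror carries the same file).

```shell
# Fetch the pre-built Scala 2.13 distribution of Spark 3.3.0
# (note the "-scala2.13" suffix; mirror URL is an example).
curl -LO https://archive.apache.org/dist/spark/spark-3.3.0/spark-3.3.0-bin-hadoop3-scala2.13.tgz
tar xzf spark-3.3.0-bin-hadoop3-scala2.13.tgz
```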

Re: Spark Scala API still not updated for 2.13 or it's a mistake?

2022-08-02 Thread Roman I
Ok, though the doc is misleading. The official site provides both 2.12 and 2.13 pre-built binaries. Thanks

On 2 Aug 2022, at 18:52, Sean Owen wrote:
> Spark 3.3.0 supports 2.13, though you need to build it for 2.13. The default binary distro uses 2.12.

Re: Spark Scala API still not updated for 2.13 or it's a mistake?

2022-08-02 Thread Sean Owen
Spark 3.3.0 supports 2.13, though you need to build it for 2.13. The default binary distro uses 2.12.

On Tue, Aug 2, 2022, 10:47 AM Roman I wrote:
> For the Scala API, Spark 3.3.0 uses Scala 2.12. You will need to use a compatible Scala version (2.12.x).

Spark Scala API still not updated for 2.13 or it's a mistake?

2022-08-02 Thread Roman I
The docs state:

> For the Scala API, Spark 3.3.0 uses Scala 2.12. You will need to use a compatible Scala version (2.12.x).

https://spark.apache.org/docs/latest/
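Independently of what the docs claim, you can check which Scala version a given Spark distribution was actually built against, since `spark-submit --version` prints it in its version banner (the exact wording of the banner line may vary between releases).

```shell
# Print the Scala version a Spark distribution was built with
# (assumes spark-submit from that distribution is on PATH).
spark-submit --version 2>&1 | grep -i 'scala version'
```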

Re: log transfering into hadoop/spark

2022-08-02 Thread Gourav Sengupta
Hi, I do it with simple bash scripts to transfer to S3. It takes less than 1 minute to write one, and another minute to include it in the bootstrap scripts. I never saw the need for so much hype for such simple tasks. Regards, Gourav Sengupta

On Tue, Aug 2, 2022 at 2:16 PM ayan guha wrote:
> ELK or Splunk agents typically.
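The "simple bash script" approach above can be sketched as below. The log directory, rotation pattern, and bucket name are all hypothetical, and the upload assumes the AWS CLI is installed; adapt to your webserver's rotation scheme.

```shell
#!/usr/bin/env bash
# Sketch of a cron-driven log shipper (hypothetical paths and bucket).
set -euo pipefail

LOG_DIR=/var/log/nginx            # assumption: rotated webserver logs land here
DEST=s3://my-log-bucket/nginx     # hypothetical bucket/prefix

# Ship only rotated (closed) files, at least 5 minutes old, so a file
# is never uploaded while the webserver is still writing it.
find "$LOG_DIR" -name '*.log.*.gz' -mmin +5 | while read -r f; do
  aws s3 cp "$f" "$DEST/$(basename "$f")" && rm -f "$f"
done
```

Spark can then read the S3 prefix directly, much like an HDFS path.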

Re: log transfering into hadoop/spark

2022-08-02 Thread ayan guha
ELK or Splunk agents typically. Or if you are in the cloud, there are cloud-native solutions which can forward logs to an object store, which can then be read like HDFS.

On Tue, 2 Aug 2022 at 4:43 pm, pengyh wrote:
> Since Flume is no longer developed, what's the current open-source tool to transfer webserver logs into HDFS/Spark?

Re: [pyspark delta] [delta][Spark SQL]: Getting an Analysis Exception. The associated location (path) is not empty

2022-08-02 Thread Sean Owen
That isn't the issue - the table does not exist anyway, but the storage path does.

On Tue, Aug 2, 2022 at 6:48 AM Stelios Philippou wrote:
> Hi Kumba. The SQL structure is a bit different for CREATE OR REPLACE TABLE. You can only do the following: CREATE TABLE IF NOT EXISTS

Re: [pyspark delta] [delta][Spark SQL]: Getting an Analysis Exception. The associated location (path) is not empty

2022-08-02 Thread Stelios Philippou
Hi Kumba. The SQL structure is a bit different: instead of CREATE OR REPLACE TABLE, you can only do CREATE TABLE IF NOT EXISTS.

https://spark.apache.org/docs/3.3.0/sql-ref-syntax-ddl-create-table-datasource.html

On Tue, 2 Aug 2022 at 14:38, Sean Owen wrote:
> I don't think "CREATE OR REPLACE TABLE" exists (in SQL?); this isn't a VIEW.
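The supported datasource-table syntax from the linked docs can be sketched with the spark-sql CLI; the table name, columns, format, and path below are all hypothetical (the original thread used Delta, so `USING delta` would apply there instead of `USING parquet`).

```shell
# Hypothetical table/path; shows the CREATE TABLE IF NOT EXISTS form
# that datasource tables support (run via the spark-sql CLI).
spark-sql -e "
CREATE TABLE IF NOT EXISTS events (
  id   BIGINT,
  name STRING
)
USING parquet
LOCATION '/tmp/tables/events';
"
```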

Re: [pyspark delta] [delta][Spark SQL]: Getting an Analysis Exception. The associated location (path) is not empty

2022-08-02 Thread Sean Owen
I don't think "CREATE OR REPLACE TABLE" exists (in SQL?); this isn't a VIEW. Delete the path first; that's simplest.

On Tue, Aug 2, 2022 at 12:55 AM Kumba Janga wrote:
> Thanks Sean! That was a simple fix. I changed it to "Create or Replace Table" but now I am getting the following error.
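"Delete the path first" for the non-empty-location AnalysisException can be done from the shell; the path below is hypothetical, and which variant applies depends on where the table's location actually lives.

```shell
# Remove the stale storage location before re-creating the table
# (hypothetical path; pick the variant matching your filesystem).
hdfs dfs -rm -r -f /tmp/tables/events     # HDFS location
# rm -rf /tmp/tables/events               # local-filesystem location
```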

Re: [pyspark delta] [delta][Spark SQL]: Getting an Analysis Exception. The associated location (path) is not empty

2022-08-02 Thread ayan guha
Hi, I strongly suggest printing the prepared SQL statements and trying them in raw form. The error you posted points to a syntax error.

On Tue, 2 Aug 2022 at 3:56 pm, Kumba Janga wrote:
> Thanks Sean! That was a simple fix. I changed it to "Create or Replace Table" but now I am getting the following error.

log transfering into hadoop/spark

2022-08-02 Thread pengyh
Since Flume is no longer actively developed, what's the current open-source tool to transfer webserver logs into HDFS/Spark? Thank you.