Looking for a generic solution, not one tied to a specific DB or number of tables.
On Fri, Mar 29, 2019 at 5:04 AM Jason Nerothin wrote:
> How many tables? What DB?
>
> On Fri, Mar 29, 2019 at 00:50 Surendra Manchikanti <
> surendra.manchika...@gmail.com> wrote:
>
>> Hi Jason,
>
> partitionColumn, lowerBound, and upperBound:
>
> https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html
> On Wed, Mar 27, 2019 at 23:06 Surendra Manchikanti <
> surendra.manchika...@gmail.com> wrote:
>
>> Hi All,
>>
>> Is there any way to copy all the tables in parallel from an RDBMS using
>> Spark? We are looking for functionality similar to Sqoop.
Hi All,
Is there any way to copy all the tables in parallel from an RDBMS using Spark?
We are looking for functionality similar to Sqoop.
Thanks,
Surendra
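One generic pattern is to combine a range-partitioned JDBC read per table with a thread pool across tables. A minimal Python sketch follows; the stride logic only approximates what Spark's JDBC source does with partitionColumn/lowerBound/upperBound/numPartitions, and the table list and `copy_one` callback are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor


def jdbc_partition_predicates(column, lower, upper, num_partitions):
    """Approximate the WHERE-clause ranges a partitioned JDBC read issues
    for partitionColumn/lowerBound/upperBound/numPartitions."""
    stride = (upper - lower) // num_partitions
    predicates = []
    for i in range(num_partitions):
        lo = lower + i * stride
        if i == 0:
            # First partition also picks up NULLs and values below lowerBound.
            predicates.append(f"{column} < {lo + stride} or {column} is null")
        elif i == num_partitions - 1:
            # Last partition is open-ended to catch values above upperBound.
            predicates.append(f"{column} >= {lo}")
        else:
            predicates.append(f"{column} >= {lo} and {column} < {lo + stride}")
    return predicates


def copy_tables_in_parallel(tables, copy_one, max_workers=4):
    """Run copy_one(table) for every table concurrently, Sqoop-style."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(tables, pool.map(copy_one, tables)))
```

In a real job, `copy_one` would call `spark.read.jdbc(...)` with these options and write the result out; Spark schedules the per-partition queries itself, so the thread pool's only job is to keep several table copies in flight at once.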
Hi Vineeth,
Can you please check the resource (RAM, cores) availability in your local
cluster and adjust accordingly?
Regards,
Surendra M
-- Surendra Manchikanti
On Tue, Mar 29, 2016 at 1:15 PM, Vineet Mishra <clearmido...@gmail.com>
wrote:
> Hi All,
>
> While starting Spark o
Hi Vinoth,
As per the documentation, DirectParquetOutputCommitter is better suited for S3:
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/DirectParquetOutputCommitter.scala
Regards,
Surendra M
-- Surendra Manchikanti
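For reference, in Spark 1.x this committer could be enabled through a SQLContext setting (a sketch, assuming a Spark 1.5/1.6 session; the class was later removed in Spark 2.0):

```python
# Sketch (Spark 1.x only): point the Parquet writer at the direct committer
# so output skips the slow rename/commit step on S3.
sqlContext.setConf(
    "spark.sql.parquet.output.committer.class",
    "org.apache.spark.sql.execution.datasources.parquet.DirectParquetOutputCommitter",
)
```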
On Fri, Mar
Hi Vetal,
You may try MultipleOutputFormat instead of TextOutputFormat in
saveAsNewAPIHadoopFile().
Regards,
Surendra M
-- Surendra Manchikanti
On Tue, Mar 22, 2016 at 10:26 AM, vetal king <greenve...@gmail.com> wrote:
> We are using Spark 1.4 for Spark Streaming. Kafka is da
Hi,
Can you check the Kafka topic replication factor and leader information?
Regards,
Surendra M
-- Surendra Manchikanti
On Thu, Mar 17, 2016 at 7:28 PM, Ascot Moss <ascot.m...@gmail.com> wrote:
> Hi,
>
> I have a Spark Streaming (with Kafka) job; after running several days, it
> fails