Hi,
Could you provide the code snippet showing how you are connecting to and
reading data from Kafka?
Akshay Bhardwaj
+91-97111-33849
On Thu, Oct 17, 2019 at 8:39 PM Amit Sharma wrote:
> Please update me if any one knows about it.
>
>
> Thanks
> Amit
>
> On Thu, Oct 10, 2019 at 3:49 PM Amit
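For context, a typical Structured Streaming read from Kafka looks roughly like the sketch below; the function name, broker address, and topic are placeholders, and the spark-sql-kafka package must be on the classpath.

```python
def read_kafka_stream(spark, bootstrap_servers, topic):
    """Sketch of a Structured Streaming read from Kafka.

    Requires the spark-sql-kafka-0-10 package on the classpath;
    the broker and topic values passed in are placeholders.
    """
    return (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", bootstrap_servers)
        .option("subscribe", topic)
        .option("startingOffsets", "latest")
        .load()
        # Kafka delivers key/value as binary; cast value to a string.
        .selectExpr("CAST(value AS STRING) AS value")
    )
```

Calling read_kafka_stream(spark, "broker:9092", "events") returns a streaming DataFrame that can then be written out with writeStream.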
Hmm, it would be better if we could build customized resource managers
outside of core; otherwise, we have to go through a long discussion in the
community :)
But if we support that, why are the Mesos/YARN/K8s resource managers still in
the tree?
On Fri, Nov 8, 2019 at 10:18 PM Tom Graves wrote:
Hello spark users,
Spark-postgres is designed for reliable and performant ETL in big-data
workloads and offers read/write/SCD capabilities. Version 3 introduces
a datasource API and simplifies the usage. It outperforms Sqoop by a
factor of 8 and the Apache Spark core JDBC by infinity.
Features:
-
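For comparison, the stock Spark core JDBC read that the announcement benchmarks against is sketched below; the URL, table, credentials, and partitioning values are all placeholders.

```python
def read_postgres_jdbc(spark, url, table, user, password,
                       partition_column=None, lower_bound=None,
                       upper_bound=None, num_partitions=8):
    """Sketch of the core Spark JDBC read against Postgres.

    All connection values are placeholders; the partitioning options
    are only applied when a numeric partition column is supplied.
    """
    reader = (
        spark.read.format("jdbc")
        .option("url", url)  # e.g. jdbc:postgresql://host:5432/db
        .option("dbtable", table)
        .option("user", user)
        .option("password", password)
        .option("driver", "org.postgresql.Driver")
    )
    if partition_column is not None:
        # Parallelize the read by splitting the numeric column into ranges.
        reader = (
            reader.option("partitionColumn", partition_column)
            .option("lowerBound", str(lower_bound))
            .option("upperBound", str(upper_bound))
            .option("numPartitions", str(num_partitions))
        )
    return reader.load()
```

Without a partition column, the core JDBC source reads through a single connection, which is one reason a Postgres-specific connector can be so much faster.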
Hi Team,
I could really use your insight here, any help is appreciated!
Thanks,
Rishi
On Wed, Nov 6, 2019 at 8:27 PM Rishi Shah wrote:
> Any suggestions?
>
> On Wed, Nov 6, 2019 at 7:30 AM Rishi Shah
> wrote:
>
>> Hi All,
>>
>> I have two relatively big tables and join on them keeps
Can you swap the write for a count, just so we can isolate whether it's the
write or the count?
Also, what's the output path you're using?
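The suggested isolation step could be sketched as below; the function and parameter names are hypothetical, and the output path is a placeholder.

```python
def run_job(df, output_path, debug=False):
    """Hypothetical sketch: when debug is set, trigger the computation
    with a count instead of the write, so a slow or failing write can be
    told apart from a slow or failing upstream computation."""
    if debug:
        # count() forces the full computation but writes nothing out.
        return df.count()
    df.write.mode("overwrite").parquet(output_path)
    return None
```

If the count is also slow, the problem is upstream of the write (the join, shuffle, or source read) rather than the output stage.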
On Sun, Nov 10, 2019 at 7:31 AM Gal Benshlomo
wrote:
> Hi,
>
> I'm using pandas_udf and not able to run it from cluster mode, even though
> the same code
Hi,
I'm using pandas_udf and am not able to run it in cluster mode, even though the
same code works in standalone mode.
The code is as follows:

from pyspark.sql.types import StructType, StructField, LongType, StringType
from pyspark.sql.functions import pandas_udf

schema_test = StructType([
    StructField("cluster", LongType()),
    StructField("name", StringType())
])

@pandas_udf(schema_test,
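Since the message is truncated at the decorator, here is a sanity check of the kind of pandas logic a grouped UDF with that (cluster, name) schema would wrap; the function body and column values are hypothetical, and no Spark session is needed to test it.

```python
import pandas as pd

def label_cluster(pdf: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical body for a grouped pandas_udf returning the
    (cluster, name) schema: one row per cluster with a derived name."""
    cluster_id = int(pdf["cluster"].iloc[0])
    return pd.DataFrame({"cluster": [cluster_id],
                         "name": ["cluster-%d" % cluster_id]})

# Plain-pandas sanity check, no Spark needed:
sample = pd.DataFrame({"cluster": [7, 7, 7], "x": [1.0, 2.0, 3.0]})
result = label_cluster(sample)
```

Testing the function on a plain pandas DataFrame first helps separate logic errors from cluster-mode issues such as missing Python dependencies on the executors.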
If you look inside the code generation, you'll see that we generate Java code
and compile it with Janino. For interested folks, the conversation has moved
over to the dev@ list.
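To see the generated Java that Janino compiles for a given query, the whole-stage-codegen output can be printed from PySpark; the sketch below assumes Spark 3.0+, where explain accepts a mode argument.

```python
def show_generated_code(df):
    """Print the whole-stage-codegen Java that Spark hands to Janino
    for this DataFrame's physical plan (Spark 3.0+ explain modes)."""
    df.explain(mode="codegen")
```

This prints each codegen stage of the physical plan followed by the generated Java source.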
On Sat, Nov 9, 2019 at 10:37 AM Marcin Tustin
wrote:
> What do you mean by this? Spark is written in a combination of Scala and
> Java, and