You might be affected by this issue:
https://github.com/apache/iceberg/issues/8601
It has already been patched, but the fix is not in a release yet.
On Thu, Oct 5, 2023 at 7:47 PM Prashant Sharma wrote:
> Hi Sanket, more details might help here.
>
> What does your Spark configuration look like?
>
> What exactly
Thanks, Ahmed. I am trying to bring this up with the Spark dev community.
On Thu, Oct 5, 2023 at 12:32 PM Ahmed Albalawi <
ahmed.albal...@capitalone.com> wrote:
> Hello team,
>
> We are in the process of upgrading one of our apps to Spring Boot 3.x
> while using Spark, and we have encountered an issue w
I think this has already been updated in Spark 4. For now, however, you would
also have to include a JAR with the jakarta.* classes instead.
You are welcome to try Spark 4 now by building from master, but it's far
from release.
On Thu, Oct 5, 2023 at 11:53 AM Ahmed Albalawi
wrote:
> Hello team,
>
> We a
Hello team,
We are in the process of upgrading one of our apps to Spring Boot 3.x while
using Spark, and we have encountered an issue with Spark compatibility,
specifically with Jakarta Servlet. Spring Boot 3.x uses Jakarta Servlet
while Spark uses Javax Servlet. Can we get some guidance on how to
The fact that you have 60 partitions or brokers in Kafka is not by itself
directly correlated with the number of Spark Structured Streaming (SSS)
executors. See below.
Spark defaults to 200 shuffle partitions (spark.sql.shuffle.partitions).
However, by default, Spark/PySpark creates input partitions equal to the
number of CPU cores in the node, th
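To illustrate the point above with a back-of-the-envelope calculation (all numbers and the helper function below are made up for illustration): with 60 Kafka topic partitions, each micro-batch gets 60 input tasks, and those tasks are simply spread over however many executor cores exist; they do not require 60 executors.

```python
import math

def waves(kafka_partitions: int, executors: int, cores_per_executor: int) -> int:
    """Illustrative only: number of scheduling 'waves' needed to run one
    input task per Kafka partition on the available executor cores."""
    total_cores = executors * cores_per_executor
    return math.ceil(kafka_partitions / total_cores)

# 60 Kafka partitions on 5 executors x 4 cores = 20 tasks in parallel,
# so the 60 input tasks complete in 3 waves.
print(waves(60, 5, 4))   # -> 3
# With 15 executors x 4 cores, all 60 tasks run in a single wave.
print(waves(60, 15, 4))  # -> 1
```

The takeaway: sizing executors is about how many tasks you want running concurrently, not about matching the Kafka partition count one-to-one.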
You can try the 'optimize' command of Delta Lake. That will help you for
sure: it merges small files. It also depends on the file format; if you are
working with Parquet, small files should still not cause any issues.
P.
On Thu, Oct 5, 2023 at 10:55 AM Shao Yang Hong
wrote:
> Hi Raghavendr
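A minimal sketch of that suggestion, assuming a Delta table (the table name and column below are hypothetical):

```sql
-- Compact small files in a Delta table
OPTIMIZE my_db.events;

-- Optionally co-locate data on a frequently filtered column while compacting
OPTIMIZE my_db.events ZORDER BY (event_date);
```

OPTIMIZE rewrites many small files into fewer, larger ones; it does not change the table's contents, only its file layout.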
Hi Sanket, more details might help here.
What does your Spark configuration look like?
What exactly was done when this happened?
On Thu, 5 Oct, 2023, 2:29 pm Agrawal, Sanket,
wrote:
> Hello Everyone,
>
>
>
> We are trying to stream the changes in our Iceberg tables stored in AWS
> S3. We are ac
Hello Everyone,
We are trying to stream the changes in our Iceberg tables stored in AWS S3. We
are achieving this through the Spark-Iceberg connector and JAR files for
Spark-AWS. Suddenly we have started receiving the error "Connection pool shut down".
Spark Version: 3.4.1
Iceberg: 1.3.1
Any hel