You are getting a DiskChecker$DiskErrorException error when no new records
are published to Kafka for a few days. The error indicates that the Spark
application could not find a valid local directory to create temporary
files for data processing. This might be due to any of these:
- if no records
DiskChecker$DiskErrorException: Could not find any valid local directory
for s3ablock-0001-
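The s3ablock-* files in the error are the S3A committer's local disk buffers, so one thing worth checking is which directories they get written to on the workers. A rough sketch of how those directories can be pointed at a volume with free space (the paths below are placeholders, not your actual setup; the property names are from the Spark and Hadoop S3A configuration docs):

```shell
# Point Spark scratch space and the S3A block buffer at directories that
# exist and have free space on every worker (example paths only).
spark-submit \
  --conf spark.local.dir=/mnt/spark-tmp \
  --conf spark.hadoop.fs.s3a.buffer.dir=/mnt/spark-tmp/s3a \
  --conf spark.hadoop.fs.s3a.fast.upload.buffer=disk \
  your-app.jar
```

Whatever these point at must exist and be writable on each worker; S3A refuses to buffer a block when none of the configured directories pass its disk check.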
Out of space?
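A quick way to check that on a worker is to look at free space on whatever directory Spark uses for scratch files (it defaults to /tmp when spark.local.dir / SPARK_LOCAL_DIRS is not set):

```shell
# Show free space on the Spark scratch directory; falls back to /tmp
# when SPARK_LOCAL_DIRS is not set in this shell.
df -h "${SPARK_LOCAL_DIRS:-/tmp}"
```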
On Tue, Feb 13, 2024 at 21:24, Abhishek Singla <
abhisheksingla...@gmail.com> wrote:
Hi Team,
Could someone provide some insights into this issue?
Regards,
Abhishek Singla
On Wed, Jan 17, 2024 at 11:45 PM Abhishek Singla <
abhisheksingla...@gmail.com> wrote:
Hi Team,
Version: 3.2.2
Java Version: 1.8.0_211
Scala Version: 2.12.15
Cluster: Standalone
I am using Spark Streaming to read from Kafka and write to S3. The job
fails with the below error if no records are published to Kafka for a few
days and then some records are published. Could