fcvr1010 commented on issue #2991:
URL: https://github.com/apache/iceberg/issues/2991#issuecomment-920679781
Thanks @kbendick for your comment.
`bpftrace` was running before the executor started. Here is what I did: I
resized my cluster to a single task node, started `bpftrace`, and then
submitted the Spark application. Some of the jobs executed just fine and I
didn't get any output, so there may have been an issue in my tracing setup
itself. What's strange is that `touch s3fileio-something` did produce output
in `bpftrace`, so the setup was not completely broken either.
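For context, the probe I ran was roughly along these lines (a simplified sketch, not the exact script; it watches `openat()` syscalls system-wide and filters for the `s3fileio` prefix, and it needs root to run):

```
bpftrace -e 'tracepoint:syscalls:sys_enter_openat {
  printf("%s %s\n", comm, str(args->filename));
}' | grep s3fileio
```

Given that `touch s3fileio-something` showed up in the output, the probe itself seems to fire; the question is why the failing jobs' temp-file creation didn't.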
@alex-shchetkov, if I understand your comment correctly: before you had the
`tmp/driver` directory created on every node, your jobs failed consistently,
is that right? In my case most of the jobs succeed; it's only some of them
that fail with the `No such file or directory` exception.
In general, is it correct that Iceberg **always** creates a temporary file
with the `s3fileio` prefix?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]