At a bare minimum you will need to add some error trapping and exception
handling!
scala> import org.apache.hadoop.fs.FileAlreadyExistsException
import org.apache.hadoop.fs.FileAlreadyExistsException
and wrap your write in a try/catch (the output path here is hypothetical):

try {
  df.coalesce(1)
    .write
    .csv("hdfs://path/to/output")  // hypothetical output path
} catch {
  case e: FileAlreadyExistsException =>
    println("Output path already exists: " + e.getMessage)
}
Hello Community,
I have looked into the issue below on various platforms, but I could not find a
satisfactory answer.
I am using Spark with Java.
I have a large data cluster.
My application makes more than 10 API calls.
Each call returns a Java list, and every list item has the same structure
(i.e. the same Java class).
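One common pattern for this scenario is to flatten the per-call lists into a single list before handing it to Spark (e.g. via SparkSession.createDataset). A minimal sketch of the flattening step, assuming a hypothetical Record class and pre-fetched API results; it needs nothing beyond the JDK:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MergeApiResults {
    // Hypothetical item type: every API call returns items of this class.
    static class Record {
        final String id;
        Record(String id) { this.id = id; }
    }

    // Flatten the lists returned by each API call into one combined list.
    static List<Record> merge(List<List<Record>> perCallResults) {
        return perCallResults.stream()
                .flatMap(List::stream)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<Record>> results = Arrays.asList(
                Arrays.asList(new Record("a"), new Record("b")),
                Arrays.asList(new Record("c")));
        List<Record> all = merge(results);
        System.out.println(all.size()); // prints 3
        // From here a single Dataset can be built, e.g.
        // spark.createDataset(all, Encoders.bean(Record.class))
        // once Record follows JavaBean conventions (public, getters/setters).
    }
}
```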
Hi Chao,

This sounds like a cool feature. A couple of questions:
- Compared to standard Spark, what kind of performance gains can be
expected with Comet?
- Can one use Comet on k8s in conjunction with something like a Volcano
addon?
HTH
Mich Talebzadeh,
Dad | Technologist | Solutions Architect | Engineer
London