Hi,
How does Spark handle compressed files? Are they parallelizable, i.e. can Spark
read one file into an RDD with multiple partitions, or does one need to
decompress it beforehand? I'm thinking of bzip2 (.bz2) files in particular.
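For concreteness, here is a minimal stand-alone sketch of the kind of file I mean (plain Python `bz2` module, hypothetical file name and contents); the comments summarize my understanding of how Spark treats such a file:

```python
import bz2
import os
import tempfile

# Write a small bzip2-compressed text file (hypothetical sample data).
lines = [f"record-{i}" for i in range(5)]
path = os.path.join(tempfile.mkdtemp(), "sample.txt.bz2")
with bz2.open(path, "wt") as f:
    f.write("\n".join(lines))

# Reading it back. My understanding is that sc.textFile() performs the
# equivalent decompression transparently via the Hadoop input formats,
# and because bzip2 is block-based (splittable) it can produce multiple
# partitions from a single file -- unlike gzip, which yields one.
with bz2.open(path, "rt") as f:
    print(f.read().splitlines())
```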
Thanks
