comphead commented on PR #3755: URL: https://github.com/apache/datafusion-comet/pull/3755#issuecomment-4111630869
> Is this scope right? The comment says "across spark partitions" but if its lifecycle is tied to a scan object, we don't actually get that benefit. We would need to move its lifecycle up to some sort of context object, but I don't know what the proper key is for that.

Hm, it was partially rolled back because the Spark test `partitioned table is cached when partition pruning is true` failed. That test deletes the file from disk and expects a `FileNotFoundException`, but our cache manager doesn't track the deletion and responds as if the file still exists.

I want to keep the caching factory and use it in another context object, like you mentioned, but with tracking of deleted files.
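To illustrate the idea (a hypothetical sketch, not Comet's actual code): a metadata cache can re-check file existence on each lookup and evict entries for deleted files, so a file removed after being cached surfaces as a not-found error instead of stale metadata. All names here (`ValidatingMetadataCache`, `get_size`) are made up for illustration.

```rust
use std::collections::HashMap;
use std::fs;
use std::io;
use std::path::PathBuf;

/// Hypothetical sketch: a file-size cache that validates existence
/// before serving a cached entry.
struct ValidatingMetadataCache {
    sizes: HashMap<PathBuf, u64>,
}

impl ValidatingMetadataCache {
    fn new() -> Self {
        Self { sizes: HashMap::new() }
    }

    /// Return the (possibly cached) file size, verifying the file still exists.
    fn get_size(&mut self, path: &PathBuf) -> io::Result<u64> {
        // Cheap existence probe: if the file was deleted after caching,
        // evict the stale entry and propagate NotFound -- the behavior the
        // partition-pruning test expects (FileNotFoundException on the JVM side).
        if !path.exists() {
            self.sizes.remove(path);
            return Err(io::Error::new(
                io::ErrorKind::NotFound,
                format!("{path:?} not found"),
            ));
        }
        if let Some(size) = self.sizes.get(path) {
            return Ok(*size);
        }
        // Cache miss: stat the file once and remember the result.
        let size = fs::metadata(path)?.len();
        self.sizes.insert(path.clone(), size);
        Ok(size)
    }
}
```

The existence probe adds a filesystem call per lookup, which trades away some of the caching benefit; an alternative would be invalidating on a TTL or on an explicit signal from the scan, depending on where the cache's lifecycle ends up living.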
