That's an interesting thought. No one has done this that I'm aware of, but
I'd be curious to see the results.

You could approximate something like this using the Cassandra filesystem
implementation -- a few years ago there was a project that allowed one to
use Cassandra as the "file system" for Hadoop, the way people use S3. Not
sure if that's still supported.

On Wednesday, March 18, 2015, Issac Buenrostro <[email protected]>
wrote:

> Hello,
>
> Is there a way to write Parquet records to Cassandra? So far I have only
> found logic for writing to Hadoop compatible filesystems.
>
> I could see each page written to a different cell in Cassandra, with file
> metadata and page headers written to separate cells in a different column
> family. That way, we could leverage large Cassandra clusters, getting
> highly parallel writes and low latency reads.
>
> Thank you.
> Issac
>
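
The cell layout Issac describes could be sketched in plain Python -- no
Cassandra driver, and the key scheme and helper names below are illustrative
assumptions, not an existing Parquet or Cassandra API:

```python
# Sketch of the proposed layout: each Parquet page lands in its own cell,
# keyed by (file_id, page_index), while file metadata lives in a separate
# "column family" (modeled here as a separate dict).

PAGE_SIZE = 1024 * 1024  # hypothetical 1 MiB page size


def chunk_pages(file_id, data, page_size=PAGE_SIZE):
    """Split a file's bytes into per-page cells keyed by (file_id, page_index)."""
    return {
        (file_id, i): data[off:off + page_size]
        for i, off in enumerate(range(0, len(data), page_size))
    }


def file_metadata(file_id, data, page_size=PAGE_SIZE):
    """Metadata cell kept in a separate column family from the page cells."""
    num_pages = (len(data) + page_size - 1) // page_size
    return {"file_id": file_id, "size": len(data), "num_pages": num_pages}


data = b"x" * (2 * PAGE_SIZE + 100)
pages = chunk_pages("part-0001.parquet", data)
meta = file_metadata("part-0001.parquet", data)
```

Because each page cell is independently addressable, writes to different
pages could be issued in parallel across the cluster, which is where the
high write parallelism and low-latency point reads would come from.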
