> similar to your filter/aggregate previously computed Spark results.
>
> Regards,
> Yohann
>
>
> --
> *From:* Rick Moritz <rah...@gmail.com>
> *Sent:* Thursday, March 16, 2017 10:37
> *To:* user
> *Subject:* Re: RE: Fast write datastore...
>
If you have enough RAM/SSDs available, maybe tiered HDFS storage and
Parquet might also be an option. Of course, management-wise it has much
more overhead than using ES, since you need to manually define partitions
and buckets, which is suboptimal. On the other hand, for querying, you can
probably