entries, 5.048B raw, 1.262B comp}
By the way, why is the schema wrong? I included repeated values there; I'm
very confused!
Thanks
Matthes
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Is-it-possible-to-use-Parquet-with-Dremel-encoding-tp15186p15344
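Regarding the repeated-values confusion above: Parquet stores repeated fields using Dremel-style repetition and definition levels rather than physically nesting values, which is a common source of surprise when a schema with repeated groups looks "wrong". A minimal plain-Python sketch of how those levels would be assigned for a field inside a single repeated group (hypothetical data and helper, not Parquet's actual writer code):

```python
# Dremel-style (repetition, definition) levels for value1 inside one
# repeated group -- a simplified sketch, not Parquet's real encoder.

records = [  # hypothetical data shaped like: repeated group level2 { int64 value1; ... }
    {"level2": [{"value1": 10}, {"value1": 20}]},
    {"level2": []},                      # repeated group empty in this record
    {"level2": [{"value1": 30}]},
]

def encode_value1(recs):
    """Return (value, repetition_level, definition_level) triples.

    With exactly one repeated ancestor, the max repetition and definition
    levels are both 1: r=0 starts a new record, r=1 continues the list;
    d=0 means the repeated group was empty, d=1 means value1 is present.
    """
    out = []
    for rec in recs:
        group = rec["level2"]
        if not group:
            out.append((None, 0, 0))     # record contributes no value
        else:
            for i, item in enumerate(group):
                out.append((item["value1"], 0 if i == 0 else 1, 1))
    return out

print(encode_value1(records))
```

The point is that the column stream plus these two small integers per value is enough to reassemble the original nested records, which is exactly what makes repeated groups legal in a columnar file.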
> repeated group level2
> {
>   int64 value1;
>   int32 value2;
> }
> }
> }
> """
>
> Best,
> Matthes
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Is-it-possible-to-use-Parquet-with-Dremel-encoding-tp15186p15239.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>
> So the question now is: can I benefit from nested Parquet files to search
> my data fast with SQL, or do I have to write a special map/reduce job to
> transform and search my data?
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Is-it-possible-to-use-Parquet-with-Dremel-encoding-tp15186p15234.html
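On the SQL-versus-map/reduce question: Spark SQL can generally query nested Parquet columns directly (dot paths into groups, and exploding repeated fields into rows), so a separate transformation job is usually unnecessary. A minimal plain-Python stand-in for that explode-then-filter shape (hypothetical data and helper names, not Spark's API):

```python
# Plain-Python stand-in for "SELECT ... LATERAL VIEW explode(level2)":
# each record holds a repeated group "level2"; exploding yields one flat
# row per nested element, which an SQL-style predicate can then filter.

records = [  # hypothetical rows matching the quoted schema shape
    {"id": 1, "level2": [{"value1": 10, "value2": 100}]},
    {"id": 2, "level2": [{"value1": 20, "value2": 200},
                         {"value1": 21, "value2": 201}]},
]

def explode(rows, field):
    """Yield one flat row per element of the repeated group `field`."""
    for row in rows:
        for item in row[field]:
            flat = {k: v for k, v in row.items() if k != field}
            flat.update(item)
            yield flat

# Equivalent of: WHERE value1 >= 20
hits = [r for r in explode(records, "level2") if r["value1"] >= 20]
print(hits)
```

The same flattening happens inside the query engine, so the data can stay in its nested Parquet layout on disk.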
9 | 1234
>
> It would be awesome if somebody could give me a good hint on how I can do
> that, or maybe a better way.
>
> Best,
> Matthes
>
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Is-it-possible-to-use-Parquet-with-Dremel-encoding-tp15186.html
It would be awesome if somebody could give me a good hint on how I can do
that, or maybe a better way.
Best,
Matthes
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Is-it-possible-to-use-Parquet-with-Dremel-encoding-tp15186.html