Hi people,
I would like to share some of my experience with data processing using
stateful Structured Streaming in Apache Spark, especially in cases where
jobs fail with OutOfMemory errors because the built-in state store
provider tries to keep all of the state data in memory. So, I've
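The message is cut off at this point, so as a toy illustration of the problem it describes: the sketch below (plain Python, not Spark's actual state store API — all names here are hypothetical) models a per-key in-memory state store. Without any eviction, state for every distinct key ever seen is retained forever, which is exactly how a long-running stateful aggregation runs out of memory; a TTL-based eviction policy, similar in spirit to the state timeouts Spark offers for arbitrary stateful operations, keeps the store bounded.

```python
import time

# Toy model of a per-key streaming state store kept entirely in memory.
# NOT Spark's state store implementation -- just an illustration of why
# unbounded state growth leads to OutOfMemory errors, and how a TTL
# (time-to-live) eviction policy bounds the store's size.
class InMemoryStateStore:
    def __init__(self, ttl_seconds=None, clock=time.monotonic):
        self.ttl = ttl_seconds   # None = keep state for every key forever
        self.clock = clock       # injectable clock, handy for testing
        self._state = {}         # key -> (count, last_update_timestamp)

    def update(self, key):
        """Record one event for `key`, then drop any expired entries."""
        count, _ = self._state.get(key, (0, None))
        self._state[key] = (count + 1, self.clock())
        self._evict_expired()

    def _evict_expired(self):
        if self.ttl is None:
            return  # no TTL configured: the store only ever grows
        now = self.clock()
        expired = [k for k, (_, ts) in self._state.items()
                   if now - ts > self.ttl]
        for k in expired:
            del self._state[k]

    def size(self):
        return len(self._state)
```

With `ttl_seconds=None` the store grows linearly with the number of distinct keys, which in a real streaming job eventually exhausts the heap; with a TTL, stale keys are dropped as new events arrive, trading completeness of old state for bounded memory.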
Alexander Chermenin
Web: http://chermenin.ru
Mail: a...@chermenin.ru