You can save the results as a Parquet file or as a text file and then create a Hive 
table based on those files.
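
For example, a minimal sketch against the Spark 1.x Java API (the table names, column names, and paths here are made up for illustration, and `STORED AS PARQUET` assumes Hive 0.13+):

```java
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.api.java.JavaSchemaRDD;
import org.apache.spark.sql.hive.api.java.JavaHiveContext;

public class JsonToHive {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local", "JsonToHive");
        JavaHiveContext hiveContext = new JavaHiveContext(sc);

        // Load JSON data; the schema is inferred from the records
        JavaSchemaRDD people = hiveContext.jsonFile("people.json");

        // Option 1: save as Parquet, then point an external Hive table at it
        people.saveAsParquetFile("/user/hive/warehouse/people_parquet");
        hiveContext.sql(
            "CREATE EXTERNAL TABLE IF NOT EXISTS people (age INT, name STRING) "
            + "STORED AS PARQUET LOCATION '/user/hive/warehouse/people_parquet'");

        // Option 2: register the RDD as a temporary table and insert into Hive
        people.registerTempTable("people_json");
        hiveContext.sql(
            "CREATE TABLE IF NOT EXISTS people_hive (age INT, name STRING)");
        hiveContext.sql(
            "INSERT INTO TABLE people_hive SELECT age, name FROM people_json");
    }
}
```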

Daniel

> On 20 Nov 2014, at 08:01, akshayhazari <akshayhaz...@gmail.com> wrote:
> 
> Sorry about the confusion I created. I just started learning this week.
> Silly me, I was actually writing the schema to a txt file and expecting
> records. This is what I was supposed to do. Also, if you could let me know
> how to add the data from the jsonFile/jsonRDD methods of hiveContext to Hive
> tables, it would be appreciated.
> 
> JavaRDD<String> result = writetxt.map(new Function<Row, String>() {
>     @Override
>     public String call(Row row) {
>         // Concatenate the row's columns into one space-separated line
>         return row.getInt(0) + " " + row.getString(1) + " " + row.getInt(2);
>     }
> });
> result.saveAsTextFile("pqtotxt");
> 
> 
> 
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/How-to-apply-schema-to-queried-data-from-Hive-before-saving-it-as-parquet-file-tp19259p19343.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
> 
