Hi All,

I am trying to load a file of around 40 GB using
"org.apache.phoenix.mapreduce.CsvBulkLoadTool", but it fails with the
error message below.
INFO mapreduce.Job: Task Id : attempt_1469663368297_56967_m_42_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: ...
Caused by: java.lang.OutOfMemoryError: Java heap space
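An OutOfMemoryError in a map task usually means the mapper heap is too
small for the rows being parsed. One thing worth trying, sketched below
with placeholder table name, input path, ZooKeeper quorum, and heap
sizes, is to raise the map task memory when launching the tool:

hadoop jar phoenix-<version>-client.jar \
    org.apache.phoenix.mapreduce.CsvBulkLoadTool \
    -Dmapreduce.map.memory.mb=4096 \
    -Dmapreduce.map.java.opts=-Xmx3686m \
    --table MY_TABLE \
    --input /data/input_40gb.csv \
    --zookeeper zk1,zk2,zk3:2181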
Thanks & Regards
Radha Krishna
On Wed, May 18, 2016 at 12:04 AM, Maryann Xue wrote:
> Hi Radha,
>
> Thanks for reporting this issue! Would you mind trying it with the latest
> Phoenix version?
>
> Thanks,
> Maryann
>
> On Tue, May 17, 2...
Hash Join query:
UPSERT INTO Target_Table
SELECT big.col1, big.col2, ... (102 columns)
FROM BIG_TABLE AS big
JOIN SMALL_TABLE AS small ON big.col1 = small.col1
WHERE big.col2 = small.col2;
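If the smaller table still overflows the server-side hash cache, Phoenix
can be directed to use a sort-merge join instead of a hash join via a
hint. A sketch of the same statement with that hint, column list elided
as above:

UPSERT INTO Target_Table
SELECT /*+ USE_SORT_MERGE_JOIN */ big.col1, big.col2, ... (102 columns)
FROM BIG_TABLE AS big
JOIN SMALL_TABLE AS small ON big.col1 = small.col1
WHERE big.col2 = small.col2;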
Thanks & Regards
Radha krishna
> result of a Spark job. I would like to know: if I convert it to a DataFrame
> and save it, will that do a bulk load, or is it not an efficient way to write
> data to a Phoenix HBase table?
>
> --
> Thanks and Regards
> Mohan
>
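For reference, writing a DataFrame through the phoenix-spark connector
looks roughly like the sketch below, where df stands for the DataFrame
produced by the Spark job and the table name and zkUrl values are
placeholders. As far as I know the connector issues batched UPSERTs
through the Phoenix client rather than generating HFiles, so it is not
the same as the MapReduce bulk load:

import org.apache.spark.sql.SaveMode

// Write the DataFrame to an existing Phoenix table via batched UPSERTs.
// The connector only accepts SaveMode.Overwrite, but rows are upserted,
// not truncated and rewritten.
df.write
  .format("org.apache.phoenix.spark")
  .mode(SaveMode.Overwrite)
  .option("table", "OUTPUT_TABLE")
  .option("zkUrl", "zk-quorum:2181")
  .save()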
--
Thanks & Regards
Radha krishna
// Register the history DataFrame so it can also be queried through SQL.
hist_hist_df.registerTempTable("HIST_TABLE")

// Null-safe equi-join (<=>) on col1 and col2; unlike ===, <=> treats
// NULL = NULL as a match.
val matched_rc = input_incr_rdd_df.join(hist_hist_df,
  input_incr_rdd_df("Col1") <=> hist_hist_df("col1")
    && input_incr_rdd_df("col2") <=> hist_hist_df("col2"))
matched_rc.show()
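Since HIST_TABLE is registered, the same null-safe match can also be
written in SQL. A sketch, assuming a Spark 1.x SQLContext in scope as
sqlContext; it registers the incremental DataFrame under the
hypothetical name INCR_TABLE:

input_incr_rdd_df.registerTempTable("INCR_TABLE")
// <=> is Spark SQL's null-safe equality operator.
val matched_sql = sqlContext.sql(
  """SELECT h.*
    |FROM INCR_TABLE i
    |JOIN HIST_TABLE h ON i.Col1 <=> h.col1 AND i.col2 <=> h.col2""".stripMargin)
matched_sql.show()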
Thanks & Regards
Radha krishna
Please see the attachment for the create and load scripts.
Thanks & Regards
Radha krishna
Phoenix CREATE TABLE with one column family and 19 salt buckets
===
CREATE TABLE IF NOT EXISTS MY_Table_Name (
    "BASE_PROD_ID" VARCHAR,
    "SRL_NR_ID" ...