View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Unable-to-join-table-across-data-sources-using-sparkSQL-tp22761p22816.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
--
t: string (nullable = true)
> |-- i_size: string (nullable = true)
> |-- i_formulation: string (nullable = true)
> |-- i_color: string (nullable = true)
> |-- i_units: string (nullable = true)
> |-- i_container: string
Ishwardeep
From: ankitjindal [via Apache Spark User List]
[mailto:ml-node+s1001560n22766...@n3.nabble.com]
Sent: Tuesday, May 5, 2015 5:00 PM
To: Ishwardeep Singh
Subject: RE: Unable to join table across data sources using sparkSQL
Just check the schema of both tables using frame.printSchema().
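A minimal sketch of that check (the DataFrame names hdfsFrame and jdbcFrame are hypothetical, assuming both sources are already loaded into the same SQLContext):

```scala
// Hypothetical DataFrames, one per data source.
// Print each schema and compare the join-key types: a key that is
// string on one side and int on the other must be cast explicitly
// before the join will resolve.
hdfsFrame.printSchema()
jdbcFrame.printSchema()

// Example fix when the key types differ (column names are illustrative):
val casted = jdbcFrame.selectExpr(
  "cast(i_item_sk as string) as i_item_sk",
  "i_item_id")
```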
User List]
[mailto:ml-node+s1001560n22762...@n3.nabble.com]
Sent: Tuesday, May 5, 2015 1:26 PM
To: Ishwardeep Singh
Subject: Re: Unable to join table across data sources using sparkSQL
Hi,

I was doing the same, but with a file in Hadoop as a temp table and one
table in SQL Server, and I succeeded.
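That setup can be sketched roughly as follows (Spark 1.3-era API; the paths, JDBC URL, and column names here are illustrative, not the original poster's actual configuration):

```scala
import org.apache.spark.sql.SQLContext

// Assumes an existing SparkContext named sc.
val sqlContext = new SQLContext(sc)

// Source 1: a file in Hadoop, registered as a temp table.
val sales = sqlContext.parquetFile("hdfs:///data/store_sales.parquet")
sales.registerTempTable("store_sales")

// Source 2: a table loaded from SQL Server over JDBC.
val item = sqlContext.load("jdbc", Map(
  "url" -> "jdbc:sqlserver://host:1433;databaseName=tpcds;user=u;password=p",
  "dbtable" -> "item"))
item.registerTempTable("item")

// Both tables are registered in the same SQLContext,
// so a single query can join across the two sources.
val joined = sqlContext.sql(
  "SELECT i_item_id, ss_net_profit FROM store_sales " +
  "JOIN item ON store_sales.ss_item_sk = item.i_item_sk")
joined.show()
```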
> store_sales.ss_ext_discount_amt, d_current_day, store_sales.ss_hdemo_sk,
> d_year, i_category, d_month_seq, i_manufact_id, store_sales.ss_net_profit,
> store_sales.ss_addr_sk, store_sales.ss_customer_sk, d_same_day_ly,
> d_quarter_seq, d_holiday, i_size, store_sales.ss_ext_list_price,
> i_formulation, i_manager_id, i_manufact, d_fy_week_seq, i_product_name,
> store_sales.ss_sold_time_sk, i_item_id; line 1 pos 253
Regards,