Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-18 Thread Mich Talebzadeh
Yes, it sounds like it. The broadcast DataFrame size seems to be between 1
and 4 GB, so I suggest that you leave it as it is.

I have not used standalone mode since Spark 2.4.3, so I may be missing a
fair bit of context here. I am sure there are others like you who are still
using it!

HTH

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


View my LinkedIn profile
https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-17 Thread Patrick Tucci
No, the driver memory was not set explicitly. So it was likely the default
value, which appears to be 1GB.

Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-17 Thread Mich Talebzadeh
One question: what was the driver memory before setting it to 4G? Did you
have it set at all before?

HTH

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom






Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-17 Thread Patrick Tucci
Hi Mich,

Here are my config values from spark-defaults.conf:

spark.eventLog.enabled true
spark.eventLog.dir hdfs://10.0.50.1:8020/spark-logs
spark.history.provider org.apache.spark.deploy.history.FsHistoryProvider
spark.history.fs.logDirectory hdfs://10.0.50.1:8020/spark-logs
spark.history.fs.update.interval 10s
spark.history.ui.port 18080
spark.sql.warehouse.dir hdfs://10.0.50.1:8020/user/spark/warehouse
spark.executor.cores 4
spark.executor.memory 16000M
spark.sql.legacy.createHiveTableByDefault false
spark.driver.host 10.0.50.1
spark.scheduler.mode FAIR
spark.driver.memory 4g #added 2023-08-17

The only application that runs on the cluster is the Spark Thrift server,
which I launch like so:

~/spark/sbin/start-thriftserver.sh --master spark://10.0.50.1:7077

The cluster runs in standalone mode and does not use YARN for resource
management. As a result, the Spark Thrift server acquires all available
cluster resources when it starts. This is okay; as of right now, I am the
only user of the cluster. If I add more users, they will also be SQL users,
submitting queries through the Thrift server.
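
(Side note: since everything goes through the Thrift server, a quick way to
confirm which of these settings the running server actually picked up is to
query them from a SQL session. This is only a sketch; it assumes a JDBC client
such as beeline connected to the Thrift server, and it assumes the values
loaded from spark-defaults.conf are visible through SET, which is the usual
behaviour:)

-- SET with a key and no value echoes the current value of that setting
SET spark.driver.memory;
SET spark.executor.memory;
SET spark.sql.autoBroadcastJoinThreshold;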

Let me know if you have any other questions or thoughts.

Thanks,

Patrick

On Thu, Aug 17, 2023 at 3:09 PM Mich Talebzadeh 
wrote:

> Hello Paatrick,
>
> As a matter of interest what parameters and their respective values do
> you use in spark-submit. I assume it is running in YARN mode.
>
> HTH
>
> Mich Talebzadeh,
> Solutions Architect/Engineering Lead
> London
> United Kingdom
>
>
>view my Linkedin profile
> 
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Thu, 17 Aug 2023 at 19:36, Patrick Tucci 
> wrote:
>
>> Hi Mich,
>>
>> Yes, that's the sequence of events. I think the big breakthrough is that
>> (for now at least) Spark is throwing errors instead of the queries hanging.
>> Which is a big step forward. I can at least troubleshoot issues if I know
>> what they are.
>>
>> When I reflect on the issues I faced and the solutions, my issue may have
>> been driver memory all along. I just couldn't determine that was the issue
>> because I never saw any errors. In one case, converting a LEFT JOIN to an
>> inner JOIN caused the query to run. In another case, replacing a text field
>> with an int ID and JOINing on the ID column worked. Per your advice,
>> changing file formats from ORC to Parquet solved one issue. These
>> interventions could have changed the way Spark needed to broadcast data to
>> execute the query, thereby reducing demand on the memory-constrained driver.
>>
>> Fingers crossed this is the solution. I will reply to this thread if the
>> issue comes up again (hopefully it doesn't!).
>>
>> Thanks again,
>>
>> Patrick
>>
>> On Thu, Aug 17, 2023 at 1:54 PM Mich Talebzadeh <
>> mich.talebza...@gmail.com> wrote:
>>
>>> Hi Patrik,
>>>
>>> glad that you have managed to sort this problem out. Hopefully it will
>>> go away for good.
>>>
>>> Still we are in the dark about how this problem is going away and coming
>>> back :( As I recall the chronology of events were as follows:
>>>
>>>
>>>1. The Issue with hanging Spark job reported
>>>2. concurrency on Hive metastore (single threaded Derby DB) was
>>>identified as a possible cause
>>>3. You changed the underlying Hive table formats from ORC to Parquet
>>>and somehow it worked
>>>4. The issue was reported again
>>>5. You upgraded the spark version from 3.4.0 to 3.4.1 (as a possible
>>>underlying issue) and encountered driver memory limitation.
>>>6. you allocated more memory to the driver and it is running ok for
>>>now,
>>>7. It appears that you are doing some join between a large dataset
>>>and a smaller dataset. Spark decides to do broadcast join by taking the
>>>smaller dataset, fit it into the driver memory and broadcasting it to all
>>>executors.  That is where you had this issue with the memory limit on the
>>>driver. In the absence of Broadcast join, spark needs to perform a 
>>> shuffle
>>>which is an expensive process.
>>>   1. you can increase the broadcast join memory setting the conf.
>>>   parameter "spark.sql.autoBroadcastJoinThreshold" in bytes (check the 
>>> manual)
>>>   2. You can also disable the broadcast join by setting
>>>   "spark.sql.autoBroadcastJoinThreshold", -1 to see what is happening.
>>>
>>>
>>> So you still need to find a resolution to this issue. Maybe 3.4.1 has
>>> managed to fix some underlying issues.
>>>
>>> HTH
>>>
>>> Mich Talebzadeh,
>>> Solutions Architect/Engineering Lead
>>> London
>>> United Kingdom
>>>
>>>
>>>vie

Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-17 Thread Mich Talebzadeh
Hello Patrick,

As a matter of interest, what parameters and their respective values do
you use in spark-submit? I assume it is running in YARN mode.

HTH

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom






Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-17 Thread Patrick Tucci
Hi Mich,

Yes, that's the sequence of events. I think the big breakthrough is that
(for now at least) Spark is throwing errors instead of the queries hanging,
which is a big step forward. I can at least troubleshoot issues if I know
what they are.

When I reflect on the issues I faced and the solutions, my issue may have
been driver memory all along. I just couldn't determine that was the issue
because I never saw any errors. In one case, converting a LEFT JOIN to an
inner JOIN caused the query to run. In another case, replacing a text field
with an int ID and JOINing on the ID column worked. Per your advice,
changing file formats from ORC to Parquet solved one issue. These
interventions could have changed the way Spark needed to broadcast data to
execute the query, thereby reducing demand on the memory-constrained driver.
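
(One way to test that theory, sketched here with made-up table and column
names since the real query isn't shown, is to compare the physical plans
before and after such a change. EXPLAIN in Spark SQL prints the join strategy
that was chosen, e.g. BroadcastHashJoin versus SortMergeJoin:)

-- placeholder query; substitute the real tables to compare plans
EXPLAIN
SELECT e.member_id, d.note_text
FROM member_enrollment e
JOIN note_dim d ON e.note_id = d.note_id;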

Fingers crossed this is the solution. I will reply to this thread if the
issue comes up again (hopefully it doesn't!).

Thanks again,

Patrick

Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-17 Thread Mich Talebzadeh
Hi Patrick,

Glad that you have managed to sort this problem out. Hopefully it will go
away for good.

Still, we are in the dark about why this problem keeps going away and coming
back :( As I recall, the chronology of events was as follows:


   1. The issue with the hanging Spark job was reported.
   2. Concurrency on the Hive metastore (single-threaded Derby DB) was
   identified as a possible cause.
   3. You changed the underlying Hive table formats from ORC to Parquet and
   somehow it worked.
   4. The issue was reported again.
   5. You upgraded the Spark version from 3.4.0 to 3.4.1 (as a possible
   underlying issue) and encountered a driver memory limitation.
   6. You allocated more memory to the driver and it is running OK for now.
   7. It appears that you are doing a join between a large dataset and a
   smaller dataset. Spark decides to do a broadcast join by taking the
   smaller dataset, fitting it into the driver memory and broadcasting it to
   all executors. That is where you hit the memory limit on the driver. In
   the absence of a broadcast join, Spark needs to perform a shuffle, which
   is an expensive process. Two things to try (a short sketch of both
   appears below):
      1. You can increase the broadcast join threshold by setting the conf
      parameter "spark.sql.autoBroadcastJoinThreshold" in bytes (check the
      manual).
      2. You can also disable the broadcast join by setting
      "spark.sql.autoBroadcastJoinThreshold" to -1 to see what is happening.


So you still need to find a resolution to this issue. Maybe 3.4.1 has
managed to fix some underlying issues.
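
A minimal sketch of the two knobs from point 7, as they could be applied from
a SQL session against the Thrift server (the values are illustrative only;
the threshold is specified in bytes):

-- raise the broadcast threshold to roughly 100 MB
SET spark.sql.autoBroadcastJoinThreshold = 104857600;

-- or disable broadcast joins altogether, forcing a shuffle-based join instead
SET spark.sql.autoBroadcastJoinThreshold = -1;

The same settings can also be made permanent in spark-defaults.conf.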

HTH

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom






Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-17 Thread Patrick Tucci
Hi Everyone,

I just wanted to follow up on this issue, which has continued since our
last correspondence. Today I had a query hang and couldn't resolve the
issue. I decided to upgrade my Spark install from 3.4.0 to 3.4.1. After
doing so, instead of the query hanging, I got an error message that the
driver didn't have enough memory to broadcast objects. After increasing the
driver memory, the query runs without issue.

I hope this can be helpful to someone else in the future. Thanks again for
the support,

Patrick

Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-13 Thread Mich Talebzadeh
OK, I use Hive 3.1.1.

My suggestion is to put your Hive issues, and the Java version compatibility
question, to u...@hive.apache.org. They will give you better info.

HTH

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom






Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-13 Thread Patrick Tucci
I attempted to install Hive yesterday. The experience was similar to other
attempts at installing Hive: it took a few hours and at the end of the
process, I didn't have a working setup. The latest stable release would not
run. I never discovered the cause, but similar StackOverflow questions
suggest it might be a Java incompatibility issue. Since I didn't want to
downgrade or install an additional Java version, I attempted to use the
latest alpha as well. This appears to have worked, although I couldn't
figure out how to get it to use the metastore_db from Spark.

After turning my attention back to Spark, I determined the issue. After
much troubleshooting, I discovered that if I performed a COUNT(*) using the
same JOINs, the problem query worked. I removed all the columns from the
SELECT statement and added them one by one until I found the culprit. It's
a text field on one of the tables. When the query SELECTs this column, or
attempts to filter on it, the query hangs and never completes. If I remove
all explicit references to this column, the query works fine. Since I need
this column in the results, I went back to the ETL and extracted the values
to a dimension table. I replaced the text column in the source table with
an integer ID column and the query worked without issue.
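
(Roughly, and with entirely made-up table and column names, that workaround
looks like the following in Spark SQL:)

-- extract the distinct text values into a dimension table keyed by an integer
CREATE TABLE note_dim STORED AS PARQUET AS
SELECT ROW_NUMBER() OVER (ORDER BY note_text) AS note_id, note_text
FROM (SELECT DISTINCT note_text FROM member_enrollment) t;

-- rebuild the source table with the integer key in place of the raw text
CREATE TABLE member_enrollment_v2 STORED AS PARQUET AS
SELECT e.member_id, e.enrollment_date, d.note_id
FROM member_enrollment e
JOIN note_dim d ON e.note_text = d.note_text;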

On the topic of Hive, does anyone have any detailed resources for how to set
up Hive from scratch, aside from the official site? Those instructions didn't
work for me. I'm starting to feel uneasy about building
my process around Spark. There really shouldn't be any instances where I
ask Spark to run legal ANSI SQL code and it just does nothing. In the past
4 days I've run into 2 of these instances, and the solution was more voodoo
and magic than examining errors/logs and fixing code. I feel that I should
have a contingency plan in place for when I run into an issue with Spark
that can't be resolved.

Thanks everyone.


Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-12 Thread Mich Talebzadeh
OK, you would not have known unless you went through the process, so to speak.

Let us do something revolutionary here 😁

Install Hive and its metastore. You already have Hadoop anyway:

https://cwiki.apache.org/confluence/display/hive/adminmanual+installation

hive metastore

https://data-flair.training/blogs/apache-hive-metastore/#:~:text=What%20is%20Hive%20Metastore%3F,by%20using%20metastore%20service%20API
.

Choose one of these backing databases for the metastore:

derby  hive  mssql  mysql  oracle  postgres

Mine is on Oracle; Postgres is good as well.

HTH

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom






Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-12 Thread Patrick Tucci
Yes, on premise.

Unfortunately after installing Delta Lake and re-writing all tables as
Delta tables, the issue persists.

On Sat, Aug 12, 2023 at 11:34 AM Mich Talebzadeh 
wrote:

> ok sure.
>
> Is this Delta Lake going to be on-premise?
>
> Mich Talebzadeh,
> Solutions Architect/Engineering Lead
> London
> United Kingdom
>
>
>view my Linkedin profile
> 
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Sat, 12 Aug 2023 at 12:03, Patrick Tucci 
> wrote:
>
>> Hi Mich,
>>
>> Thanks for the feedback. My original intention after reading your
>> response was to stick to Hive for managing tables. Unfortunately, I'm
>> running into another case of SQL scripts hanging. Since all tables are
>> already Parquet, I'm out of troubleshooting options. I'm going to migrate
>> to Delta Lake and see if that solves the issue.
>>
>> Thanks again for your feedback.
>>
>> Patrick
>>
>> On Fri, Aug 11, 2023 at 10:09 AM Mich Talebzadeh <
>> mich.talebza...@gmail.com> wrote:
>>
>>> Hi Patrick,
>>>
>>> There is nothing wrong with Hive on-premise; it is the best data
>>> warehouse there is.
>>>
>>> Hive handles both ORC and Parquet formats well. They are both columnar
>>> implementations of the relational model. What you are seeing is the Spark
>>> API to Hive, which prefers Parquet, as I found out a few years ago.
>>>
>>> From your point of view, I suggest you stick to the Parquet format with
>>> Hive specific to Spark. As far as I know you don't have a fully
>>> independent Hive DB as yet.
>>>
>>> Anyway, stick to Hive for now, as you never know what issues you may be
>>> facing when moving to Delta Lake.
>>>
>>> You can also use compression
>>>
>>> STORED AS PARQUET
>>> TBLPROPERTIES ("parquet.compression"="SNAPPY")
>>>
>>> ALSO
>>>
>>> ANALYZE TABLE  COMPUTE STATISTICS FOR COLUMNS
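
(As an aside, a filled-in version of that snippet, using a hypothetical table
name since the original leaves it blank, could look like this:)

CREATE TABLE member_enrollment_pq (member_id INT, note_id INT)
STORED AS PARQUET
TBLPROPERTIES ("parquet.compression"="SNAPPY");

ANALYZE TABLE member_enrollment_pq COMPUTE STATISTICS FOR COLUMNS member_id, note_id;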
>>>
>>> HTH
>>>
>>> Mich Talebzadeh,
>>> Solutions Architect/Engineering Lead
>>> London
>>> United Kingdom
>>>
>>>
>>>view my Linkedin profile
>>> 
>>>
>>>
>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>>
>>> On Fri, 11 Aug 2023 at 11:26, Patrick Tucci 
>>> wrote:
>>>
 Thanks for the reply Stephen and Mich.

 Stephen, you're right, it feels like Spark is waiting for something,
 but I'm not sure what. I'm the only user on the cluster and there are
 plenty of resources (+60 cores, +250GB RAM). I even tried restarting
 Hadoop, Spark and the host servers to make sure nothing was lingering in
 the background.

 Mich, thank you so much, your suggestion worked. Storing the tables as
 Parquet solves the issue.

 Interestingly, I found that only the MemberEnrollment table needs to be
 Parquet. The ID field in MemberEnrollment is an int calculated during load
 by a ROW_NUMBER() function. Further testing found that if I hard code a 0
 as MemberEnrollment.ID instead of using the ROW_NUMBER() function, the
 query works without issue even if both tables are ORC.

 Should I infer from this issue that the Hive components prefer Parquet
 over ORC? Furthermore, should I consider using a different table storage
 framework, like Delta Lake, instead of the Hive components? Given this
 issue and other issues I've had with Hive, I'm starting to think a
 different solution might be more robust and stable. The main condition is
 that my application operates solely through Thrift server, so I need to be
 able to connect to Spark through Thrift server and have it write tables
 using Delta Lake instead of Hive. From this StackOverflow question, it
 looks like this is possible:
 https://stackoverflow.com/questions/69862388/how-to-run-spark-sql-thrift-server-in-local-mode-and-connect-to-delta-using-jdbc

 Thanks again to everyone who replied for their help.

 Patrick


 On Fri, Aug 11, 2023 at 2:14 AM Mich Talebzadeh <
 mich.talebza...@gmail.com> wrote:

> Steve may have a valid point. You raised an issue with concurrent
> writes before, if I recall correctly; this limitation may be due to the
> Hive metastore. By default Spark uses Apache Derby for its database
> persistence. *Howeve

Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-12 Thread Mich Talebzadeh
ok sure.

Is this Delta Lake going to be on-premise?

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


   view my Linkedin profile



 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Sat, 12 Aug 2023 at 12:03, Patrick Tucci  wrote:

> Hi Mich,
>
> Thanks for the feedback. My original intention after reading your response
> was to stick to Hive for managing tables. Unfortunately, I'm running into
> another case of SQL scripts hanging. Since all tables are already Parquet,
> I'm out of troubleshooting options. I'm going to migrate to Delta Lake and
> see if that solves the issue.
>
> Thanks again for your feedback.
>
> Patrick
>
> On Fri, Aug 11, 2023 at 10:09 AM Mich Talebzadeh <
> mich.talebza...@gmail.com> wrote:
>
>> Hi Patrick,
>>
>> There is nothing wrong with Hive on-premise; it is the best data
>> warehouse there is.
>>
>> Hive handles both the ORC and Parquet formats well. They are both columnar
>> implementations of the relational model. What you are seeing is the Spark API
>> to Hive, which prefers Parquet; I found that out a few years ago.
>>
>> From your point of view I suggest you stick to the Parquet format for the
>> Hive tables you use with Spark. As far as I know you don't have a fully
>> independent Hive DB as yet.
>>
>> Anyway, stick to Hive for now, as you never know what issues you may be
>> facing when moving to Delta Lake.
>>
>> You can also use compression
>>
>> STORED AS PARQUET
>> TBLPROPERTIES ("parquet.compression"="SNAPPY")
>>
>> ALSO
>>
>> ANALYZE TABLE  COMPUTE STATISTICS FOR COLUMNS
>>
>> HTH
>>
>> Mich Talebzadeh,
>> Solutions Architect/Engineering Lead
>> London
>> United Kingdom
>>
>>
>>view my Linkedin profile
>> 
>>
>>
>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Fri, 11 Aug 2023 at 11:26, Patrick Tucci 
>> wrote:
>>
>>> Thanks for the reply Stephen and Mich.
>>>
>>> Stephen, you're right, it feels like Spark is waiting for something, but
>>> I'm not sure what. I'm the only user on the cluster and there are plenty of
>>> resources (+60 cores, +250GB RAM). I even tried restarting Hadoop, Spark
>>> and the host servers to make sure nothing was lingering in the background.
>>>
>>> Mich, thank you so much, your suggestion worked. Storing the tables as
>>> Parquet solves the issue.
>>>
>>> Interestingly, I found that only the MemberEnrollment table needs to be
>>> Parquet. The ID field in MemberEnrollment is an int calculated during load
>>> by a ROW_NUMBER() function. Further testing found that if I hard code a 0
>>> as MemberEnrollment.ID instead of using the ROW_NUMBER() function, the
>>> query works without issue even if both tables are ORC.
>>>
>>> Should I infer from this issue that the Hive components prefer Parquet
>>> over ORC? Furthermore, should I consider using a different table storage
>>> framework, like Delta Lake, instead of the Hive components? Given this
>>> issue and other issues I've had with Hive, I'm starting to think a
>>> different solution might be more robust and stable. The main condition is
>>> that my application operates solely through Thrift server, so I need to be
>>> able to connect to Spark through Thrift server and have it write tables
>>> using Delta Lake instead of Hive. From this StackOverflow question, it
>>> looks like this is possible:
>>> https://stackoverflow.com/questions/69862388/how-to-run-spark-sql-thrift-server-in-local-mode-and-connect-to-delta-using-jdbc
>>>
>>> Thanks again to everyone who replied for their help.
>>>
>>> Patrick
>>>
>>>
>>> On Fri, Aug 11, 2023 at 2:14 AM Mich Talebzadeh <
>>> mich.talebza...@gmail.com> wrote:
>>>
 Steve may have a valid point. You raised an issue with concurrent
 writes before, if I recall correctly; this limitation may be due to the
 Hive metastore. By default Spark uses Apache Derby for its database
 persistence. *However, it is limited to only one Spark session at any
 time for the purposes of metadata storage.*  That may be the cause
 here as well. Does this happen if the underlying tables are created as
 PARQUET as opposed to ORC?

 HTH

 Mich Talebzadeh,
 Solutions Architect/Engineering Lead
 London
 United King

Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-12 Thread Patrick Tucci
Hi Mich,

Thanks for the feedback. My original intention after reading your response
was to stick to Hive for managing tables. Unfortunately, I'm running into
another case of SQL scripts hanging. Since all tables are already Parquet,
I'm out of troubleshooting options. I'm going to migrate to Delta Lake and
see if that solves the issue.

Thanks again for your feedback.

Patrick

On Fri, Aug 11, 2023 at 10:09 AM Mich Talebzadeh 
wrote:

> Hi Patrick,
>
> There is nothing wrong with Hive on-premise; it is the best data
> warehouse there is.
>
> Hive handles both the ORC and Parquet formats well. They are both columnar
> implementations of the relational model. What you are seeing is the Spark API
> to Hive, which prefers Parquet; I found that out a few years ago.
>
> From your point of view I suggest you stick to the Parquet format for the
> Hive tables you use with Spark. As far as I know you don't have a fully
> independent Hive DB as yet.
>
> Anyway, stick to Hive for now, as you never know what issues you may be
> facing when moving to Delta Lake.
>
> You can also use compression
>
> STORED AS PARQUET
> TBLPROPERTIES ("parquet.compression"="SNAPPY")
>
> ALSO
>
> ANALYZE TABLE  COMPUTE STATISTICS FOR COLUMNS
>
> HTH
>
> Mich Talebzadeh,
> Solutions Architect/Engineering Lead
> London
> United Kingdom
>
>
>view my Linkedin profile
> 
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Fri, 11 Aug 2023 at 11:26, Patrick Tucci 
> wrote:
>
>> Thanks for the reply Stephen and Mich.
>>
>> Stephen, you're right, it feels like Spark is waiting for something, but
>> I'm not sure what. I'm the only user on the cluster and there are plenty of
>> resources (+60 cores, +250GB RAM). I even tried restarting Hadoop, Spark
>> and the host servers to make sure nothing was lingering in the background.
>>
>> Mich, thank you so much, your suggestion worked. Storing the tables as
>> Parquet solves the issue.
>>
>> Interestingly, I found that only the MemberEnrollment table needs to be
>> Parquet. The ID field in MemberEnrollment is an int calculated during load
>> by a ROW_NUMBER() function. Further testing found that if I hard code a 0
>> as MemberEnrollment.ID instead of using the ROW_NUMBER() function, the
>> query works without issue even if both tables are ORC.
>>
>> Should I infer from this issue that the Hive components prefer Parquet
>> over ORC? Furthermore, should I consider using a different table storage
>> framework, like Delta Lake, instead of the Hive components? Given this
>> issue and other issues I've had with Hive, I'm starting to think a
>> different solution might be more robust and stable. The main condition is
>> that my application operates solely through Thrift server, so I need to be
>> able to connect to Spark through Thrift server and have it write tables
>> using Delta Lake instead of Hive. From this StackOverflow question, it
>> looks like this is possible:
>> https://stackoverflow.com/questions/69862388/how-to-run-spark-sql-thrift-server-in-local-mode-and-connect-to-delta-using-jdbc
>>
>> Thanks again to everyone who replied for their help.
>>
>> Patrick
>>
>>
>> On Fri, Aug 11, 2023 at 2:14 AM Mich Talebzadeh <
>> mich.talebza...@gmail.com> wrote:
>>
>>> Steve may have a valid point. You raised an issue with concurrent writes
>>> before, if I recall correctly; this limitation may be due to the Hive
>>> metastore. By default Spark uses Apache Derby for its database
>>> persistence. *However, it is limited to only one Spark session at any
>>> time for the purposes of metadata storage.*  That may be the cause here
>>> as well. Does this happen if the underlying tables are created as PARQUET
>>> as opposed to ORC?
>>>
>>> HTH
>>>
>>> Mich Talebzadeh,
>>> Solutions Architect/Engineering Lead
>>> London
>>> United Kingdom
>>>
>>>
>>>view my Linkedin profile
>>> 
>>>
>>>
>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>>
>>> On Fri, 11 Aug 2023 at 01:33, Stephen Coy 
>>> wrote:
>>>
 Hi Patrick,

 When this has happened to me in the past (admittedly via spark-submit)
 it has been because another job was still running and had already claimed
 some of the resources (co

Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-11 Thread Mich Talebzadeh
Hi Patrick,

There is nothing wrong with Hive on-premise; it is the best data
warehouse there is.

Hive handles both the ORC and Parquet formats well. They are both columnar
implementations of the relational model. What you are seeing is the Spark API
to Hive, which prefers Parquet; I found that out a few years ago.

From your point of view I suggest you stick to the Parquet format for the
Hive tables you use with Spark. As far as I know you don't have a fully
independent Hive DB as yet.

Anyway, stick to Hive for now, as you never know what issues you may be
facing when moving to Delta Lake.

You can also use compression

STORED AS PARQUET
TBLPROPERTIES ("parquet.compression"="SNAPPY")

ALSO

ANALYZE TABLE  COMPUTE STATISTICS FOR COLUMNS
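
For illustration, putting those two together on one of the tables from this
thread (a sketch only: the _parquet table name and the column list are made up
for the example; the same statements can be issued through beeline or, as shown
here, from spark-shell):

// uses the session's existing `spark` (Hive support enabled)
spark.sql("""
  CREATE TABLE MemberEnrollment_parquet
  STORED AS PARQUET
  TBLPROPERTIES ("parquet.compression"="SNAPPY")
  AS SELECT * FROM MemberEnrollment
""")

// column-level statistics give the optimizer better estimates for the join
spark.sql("ANALYZE TABLE MemberEnrollment_parquet COMPUTE STATISTICS FOR COLUMNS ID, MemberID")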

HTH

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


   view my Linkedin profile



 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Fri, 11 Aug 2023 at 11:26, Patrick Tucci  wrote:

> Thanks for the reply Stephen and Mich.
>
> Stephen, you're right, it feels like Spark is waiting for something, but
> I'm not sure what. I'm the only user on the cluster and there are plenty of
> resources (+60 cores, +250GB RAM). I even tried restarting Hadoop, Spark
> and the host servers to make sure nothing was lingering in the background.
>
> Mich, thank you so much, your suggestion worked. Storing the tables as
> Parquet solves the issue.
>
> Interestingly, I found that only the MemberEnrollment table needs to be
> Parquet. The ID field in MemberEnrollment is an int calculated during load
> by a ROW_NUMBER() function. Further testing found that if I hard code a 0
> as MemberEnrollment.ID instead of using the ROW_NUMBER() function, the
> query works without issue even if both tables are ORC.
>
> Should I infer from this issue that the Hive components prefer Parquet
> over ORC? Furthermore, should I consider using a different table storage
> framework, like Delta Lake, instead of the Hive components? Given this
> issue and other issues I've had with Hive, I'm starting to think a
> different solution might be more robust and stable. The main condition is
> that my application operates solely through Thrift server, so I need to be
> able to connect to Spark through Thrift server and have it write tables
> using Delta Lake instead of Hive. From this StackOverflow question, it
> looks like this is possible:
> https://stackoverflow.com/questions/69862388/how-to-run-spark-sql-thrift-server-in-local-mode-and-connect-to-delta-using-jdbc
>
> Thanks again to everyone who replied for their help.
>
> Patrick
>
>
> On Fri, Aug 11, 2023 at 2:14 AM Mich Talebzadeh 
> wrote:
>
>> Steve may have a valid point. You raised an issue with concurrent writes
>> before, if I recall correctly; this limitation may be due to the Hive
>> metastore. By default Spark uses Apache Derby for its database
>> persistence. *However, it is limited to only one Spark session at any
>> time for the purposes of metadata storage.*  That may be the cause here
>> as well. Does this happen if the underlying tables are created as PARQUET
>> as opposed to ORC?
>>
>> HTH
>>
>> Mich Talebzadeh,
>> Solutions Architect/Engineering Lead
>> London
>> United Kingdom
>>
>>
>>view my Linkedin profile
>> 
>>
>>
>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Fri, 11 Aug 2023 at 01:33, Stephen Coy 
>> wrote:
>>
>>> Hi Patrick,
>>>
>>> When this has happened to me in the past (admittedly via spark-submit)
>>> it has been because another job was still running and had already claimed
>>> some of the resources (cores and memory).
>>>
>>> I think this can also happen if your configuration tries to claim
>>> resources that will never be available.
>>>
>>> Cheers,
>>>
>>> SteveC
>>>
>>>
>>> On 11 Aug 2023, at 3:36 am, Patrick Tucci 
>>> wrote:
>>>
>>> Hello,
>>>
>>> I'm attempting to run a query on Spark 3.4.0 through the Spark
>>> ThriftServer. The cluster has 64 cores, 250GB RAM, and operates in
>>> standalone mode using HDFS for storage.
>>>
>>> The query is as follows:
>>>
>>> SELECT ME.*, MB.BenefitID
>>> FROM MemberEnrollment ME
>>> JOIN MemberBenefits MB
>>> ON ME.ID  = MB.EnrollmentID
>>> WHERE MB.BenefitID =

Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-11 Thread Patrick Tucci
Thanks for the reply, Stephen and Mich.

Stephen, you're right, it feels like Spark is waiting for something, but
I'm not sure what. I'm the only user on the cluster and there are plenty of
resources (60+ cores, 250+ GB RAM). I even tried restarting Hadoop, Spark
and the host servers to make sure nothing was lingering in the background.

Mich, thank you so much, your suggestion worked. Storing the tables as
Parquet solves the issue.

Interestingly, I found that only the MemberEnrollment table needs to be
Parquet. The ID field in MemberEnrollment is an int calculated during load
by a ROW_NUMBER() function. Further testing found that if I hard code a 0
as MemberEnrollment.ID instead of using the ROW_NUMBER() function, the
query works without issue even if both tables are ORC.
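
For context, that load is roughly the sketch below (the staging table name and
the ORDER BY column are placeholders, and the real column list is longer than
shown here):

spark.sql("""
  INSERT INTO MemberEnrollment
  SELECT CAST(ROW_NUMBER() OVER (ORDER BY MemberID) AS INT) AS ID,
         MemberID, StartDate, EndDate
  FROM member_enrollment_staging
""")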

Should I infer from this issue that the Hive components prefer Parquet over
ORC? Furthermore, should I consider using a different table storage
framework, like Delta Lake, instead of the Hive components? Given this
issue and other issues I've had with Hive, I'm starting to think a
different solution might be more robust and stable. The main condition is
that my application operates solely through Thrift server, so I need to be
able to connect to Spark through Thrift server and have it write tables
using Delta Lake instead of Hive. From this StackOverflow question, it
looks like this is possible:
https://stackoverflow.com/questions/69862388/how-to-run-spark-sql-thrift-server-in-local-mode-and-connect-to-delta-using-jdbc
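
For reference, a sketch of that setup (the two --conf settings are the ones the
Delta Lake documentation gives for enabling Delta SQL support; the package
version is my assumption for Spark 3.4.0, and the _delta table name is only an
example):

~/spark/sbin/start-thriftserver.sh --master spark://10.0.50.1:7077 \
  --packages io.delta:delta-core_2.12:2.4.0 \
  --conf spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension \
  --conf spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog

After that, a statement such as CREATE TABLE MemberBenefits_delta USING DELTA AS
SELECT * FROM MemberBenefits should work over the same beeline/JDBC connection.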

Thanks again to everyone who replied for their help.

Patrick


On Fri, Aug 11, 2023 at 2:14 AM Mich Talebzadeh 
wrote:

> Steve may have a valid point. You raised an issue with concurrent writes
> before, if I recall correctly; this limitation may be due to the Hive
> metastore. By default Spark uses Apache Derby for its database
> persistence. *However, it is limited to only one Spark session at any time
> for the purposes of metadata storage.*  That may be the cause here as
> well. Does this happen if the underlying tables are created as PARQUET as
> opposed to ORC?
>
> HTH
>
> Mich Talebzadeh,
> Solutions Architect/Engineering Lead
> London
> United Kingdom
>
>
>view my Linkedin profile
> 
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Fri, 11 Aug 2023 at 01:33, Stephen Coy 
> wrote:
>
>> Hi Patrick,
>>
>> When this has happened to me in the past (admittedly via spark-submit) it
>> has been because another job was still running and had already claimed some
>> of the resources (cores and memory).
>>
>> I think this can also happen if your configuration tries to claim
>> resources that will never be available.
>>
>> Cheers,
>>
>> SteveC
>>
>>
>> On 11 Aug 2023, at 3:36 am, Patrick Tucci 
>> wrote:
>>
>> Hello,
>>
>> I'm attempting to run a query on Spark 3.4.0 through the Spark
>> ThriftServer. The cluster has 64 cores, 250GB RAM, and operates in
>> standalone mode using HDFS for storage.
>>
>> The query is as follows:
>>
>> SELECT ME.*, MB.BenefitID
>> FROM MemberEnrollment ME
>> JOIN MemberBenefits MB
>> ON ME.ID  = MB.EnrollmentID
>> WHERE MB.BenefitID = 5
>> LIMIT 10
>>
>> The tables are defined as follows:
>>
>> -- Contains about 3M rows
>> CREATE TABLE MemberEnrollment
>> (
>> ID INT
>> , MemberID VARCHAR(50)
>> , StartDate DATE
>> , EndDate DATE
>> -- Other columns, but these are the most important
>> ) STORED AS ORC;
>>
>> -- Contains about 25m rows
>> CREATE TABLE MemberBenefits
>> (
>> EnrollmentID INT
>> , BenefitID INT
>> ) STORED AS ORC;
>>
>> When I execute the query, it runs a single broadcast exchange stage,
>> which completes after a few seconds. Then everything just hangs. The
>> JDBC/ODBC tab in the UI shows the query state as COMPILED, but no stages or
>> tasks are executing or pending:
>>
>> 
>>
>> I've let the query run for as long as 30 minutes with no additional
>> stages, progress, or errors. I'm not sure where to start troubleshooting.
>>
>> Thanks for your help,
>>
>> Patrick
>>
>>
>> This email contains confidential information of and is the copyright of
>> Infomedia. It must not be forwarded, amended or disclosed without consent
>> of the sender. If you received this message by mistake, please advise the
>> sender and delete all copies. Security of transmission on the internet
>> cannot be guaranteed, could be infected, intercepted, or corrupted and you
>> should ensure you have suitable antivirus protection in place. By sending
>> us your or any third party personal details, you consent to (or confirm you
>> have obtained consent

Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-10 Thread Mich Talebzadeh
Steve may have a valid point. You raised an issue with concurrent writes
before, if I recall correctly; this limitation may be due to the Hive
metastore. By default Spark uses Apache Derby for its database
persistence. *However, it is limited to only one Spark session at any time
for the purposes of metadata storage.*  That may be the cause here as well.
Does this happen if the underlying tables are created as PARQUET as opposed
to ORC?
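
A quick way to confirm which metastore the session is actually using (a
diagnostic sketch only; with the embedded Derby metastore you will also find a
metastore_db/ directory and a derby.log file in the directory the Thrift server
was started from):

// from spark-shell, which reads the same spark-defaults.conf as the Thrift server
spark.conf.get("spark.sql.catalogImplementation")   // "hive" or "in-memory"
spark.conf.get("spark.sql.warehouse.dir")           // where managed tables are written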

HTH

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


   view my Linkedin profile



 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Fri, 11 Aug 2023 at 01:33, Stephen Coy 
wrote:

> Hi Patrick,
>
> When this has happened to me in the past (admittedly via spark-submit) it
> has been because another job was still running and had already claimed some
> of the resources (cores and memory).
>
> I think this can also happen if your configuration tries to claim
> resources that will never be available.
>
> Cheers,
>
> SteveC
>
>
> On 11 Aug 2023, at 3:36 am, Patrick Tucci  wrote:
>
> Hello,
>
> I'm attempting to run a query on Spark 3.4.0 through the Spark
> ThriftServer. The cluster has 64 cores, 250GB RAM, and operates in
> standalone mode using HDFS for storage.
>
> The query is as follows:
>
> SELECT ME.*, MB.BenefitID
> FROM MemberEnrollment ME
> JOIN MemberBenefits MB
> ON ME.ID  = MB.EnrollmentID
> WHERE MB.BenefitID = 5
> LIMIT 10
>
> The tables are defined as follows:
>
> -- Contains about 3M rows
> CREATE TABLE MemberEnrollment
> (
> ID INT
> , MemberID VARCHAR(50)
> , StartDate DATE
> , EndDate DATE
> -- Other columns, but these are the most important
> ) STORED AS ORC;
>
> -- Contains about 25m rows
> CREATE TABLE MemberBenefits
> (
> EnrollmentID INT
> , BenefitID INT
> ) STORED AS ORC;
>
> When I execute the query, it runs a single broadcast exchange stage, which
> completes after a few seconds. Then everything just hangs. The JDBC/ODBC
> tab in the UI shows the query state as COMPILED, but no stages or tasks are
> executing or pending:
>
> 
>
> I've let the query run for as long as 30 minutes with no additional
> stages, progress, or errors. I'm not sure where to start troubleshooting.
>
> Thanks for your help,
>
> Patrick
>
>
> This email contains confidential information of and is the copyright of
> Infomedia. It must not be forwarded, amended or disclosed without consent
> of the sender. If you received this message by mistake, please advise the
> sender and delete all copies. Security of transmission on the internet
> cannot be guaranteed, could be infected, intercepted, or corrupted and you
> should ensure you have suitable antivirus protection in place. By sending
> us your or any third party personal details, you consent to (or confirm you
> have obtained consent from such third parties) to Infomedia’s privacy
> policy. http://www.infomedia.com.au/privacy-policy/
>


Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-10 Thread Stephen Coy
Hi Patrick,

When this has happened to me in the past (admittedly via spark-submit) it has 
been because another job was still running and had already claimed some of the 
resources (cores and memory).

I think this can also happen if your configuration tries to claim resources 
that will never be available.

Cheers,

SteveC


On 11 Aug 2023, at 3:36 am, Patrick Tucci  wrote:

Hello,

I'm attempting to run a query on Spark 3.4.0 through the Spark ThriftServer. 
The cluster has 64 cores, 250GB RAM, and operates in standalone mode using HDFS 
for storage.

The query is as follows:

SELECT ME.*, MB.BenefitID
FROM MemberEnrollment ME
JOIN MemberBenefits MB
ON ME.ID = MB.EnrollmentID
WHERE MB.BenefitID = 5
LIMIT 10

The tables are defined as follows:

-- Contains about 3M rows
CREATE TABLE MemberEnrollment
(
ID INT
, MemberID VARCHAR(50)
, StartDate DATE
, EndDate DATE
-- Other columns, but these are the most important
) STORED AS ORC;

-- Contains about 25m rows
CREATE TABLE MemberBenefits
(
EnrollmentID INT
, BenefitID INT
) STORED AS ORC;

When I execute the query, it runs a single broadcast exchange stage, which 
completes after a few seconds. Then everything just hangs. The JDBC/ODBC tab in 
the UI shows the query state as COMPILED, but no stages or tasks are executing 
or pending:



I've let the query run for as long as 30 minutes with no additional stages, 
progress, or errors. I'm not sure where to start troubleshooting.

Thanks for your help,

Patrick

This email contains confidential information of and is the copyright of 
Infomedia. It must not be forwarded, amended or disclosed without consent of 
the sender. If you received this message by mistake, please advise the sender 
and delete all copies. Security of transmission on the internet cannot be 
guaranteed, could be infected, intercepted, or corrupted and you should ensure 
you have suitable antivirus protection in place. By sending us your or any 
third party personal details, you consent to (or confirm you have obtained 
consent from such third parties) to Infomedia's privacy policy. 
http://www.infomedia.com.au/privacy-policy/


Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-10 Thread Patrick Tucci
Hi Mich,

I don't believe Hive is installed. I set up this cluster from scratch. I
installed Hadoop and Spark by downloading them from their project websites.
If Hive isn't bundled with Hadoop or Spark, I don't believe I have it. I'm
running the Thrift server distributed with Spark, like so:

~/spark/sbin/start-thriftserver.sh --master spark://10.0.50.1:7077

I can look into installing Hive, but it might take some time. I tried to
set up Hive when I first started evaluating distributed data processing
solutions, but I encountered many issues. Spark was much simpler, which was
part of the reason why I chose it.

Thanks again for the reply, I truly appreciate your help.

Patrick

On Thu, Aug 10, 2023 at 3:43 PM Mich Talebzadeh 
wrote:

> sorry host is 10.0.50.1
>
> Mich Talebzadeh,
> Solutions Architect/Engineering Lead
> London
> United Kingdom
>
>
>view my Linkedin profile
> 
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Thu, 10 Aug 2023 at 20:41, Mich Talebzadeh 
> wrote:
>
>> Hi Patrick
>>
>> That beeline on port 1 is a Hive thrift server running on your Hive
>> host 10.0.50.1:1.
>>
>> If you can access that host, you should be able to log into Hive by
>> typing hive. The OS user is hadoop in your case, and it sounds like there is no
>> password!
>>
>> Once on that host, the Hive logs are kept (in your case) in
>> /tmp/hadoop/hive.log, or go to /tmp and do
>>
>> /tmp> find ./ -name hive.log. It should be under /tmp/hive.log
>>
>> Try running the SQL inside Hive and see what it says.
>>
>> HTH
>>
>> Mich Talebzadeh,
>> Solutions Architect/Engineering Lead
>> London
>> United Kingdom
>>
>>
>>view my Linkedin profile
>> 
>>
>>
>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Thu, 10 Aug 2023 at 20:02, Patrick Tucci 
>> wrote:
>>
>>> Hi Mich,
>>>
>>> Thanks for the reply. Unfortunately I don't have Hive set up on my
>>> cluster. I can explore this if there are no other ways to troubleshoot.
>>>
>>> I'm using beeline to run commands against the Thrift server. Here's the
>>> command I use:
>>>
>>> ~/spark/bin/beeline -u jdbc:hive2://10.0.50.1:1 -n hadoop -f
>>> command.sql
>>>
>>> Thanks again for your help.
>>>
>>> Patrick
>>>
>>>
>>> On Thu, Aug 10, 2023 at 2:24 PM Mich Talebzadeh <
>>> mich.talebza...@gmail.com> wrote:
>>>
 Can you run this sql query through hive itself?

 Are you using this command or similar for your thrift server?

 beeline -u jdbc:hive2:///1/default
 org.apache.hive.jdbc.HiveDriver -n hadoop -p xxx

 HTH

 Mich Talebzadeh,
 Solutions Architect/Engineering Lead
 London
 United Kingdom


view my Linkedin profile
 


  https://en.everybodywiki.com/Mich_Talebzadeh



 *Disclaimer:* Use it at your own risk. Any and all responsibility for
 any loss, damage or destruction of data or any other property which may
 arise from relying on this email's technical content is explicitly
 disclaimed. The author will in no case be liable for any monetary damages
 arising from such loss, damage or destruction.




 On Thu, 10 Aug 2023 at 18:39, Patrick Tucci 
 wrote:

> Hello,
>
> I'm attempting to run a query on Spark 3.4.0 through the Spark
> ThriftServer. The cluster has 64 cores, 250GB RAM, and operates in
> standalone mode using HDFS for storage.
>
> The query is as follows:
>
> SELECT ME.*, MB.BenefitID
> FROM MemberEnrollment ME
> JOIN MemberBenefits MB
> ON ME.ID = MB.EnrollmentID
> WHERE MB.BenefitID = 5
> LIMIT 10
>
> The tables are defined as follows:
>
> -- Contains about 3M rows
> CREATE TABLE MemberEnrollment
> (
> ID INT
> , MemberID VARCHAR(50)
> , StartDate DATE
> , EndDate DATE
> -- Other columns, but these are the most important
> ) STORED AS ORC;
>
> -- Contains about 25m rows
> CREATE TABLE MemberBenefits
> (
> EnrollmentID INT
> , BenefitID INT
> ) STORED AS ORC;
>
> W

Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-10 Thread Mich Talebzadeh
sorry host is 10.0.50.1

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


   view my Linkedin profile



 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Thu, 10 Aug 2023 at 20:41, Mich Talebzadeh 
wrote:

> Hi Patrick
>
> That beeline on port 1 is a Hive thrift server running on your Hive host
> 10.0.50.1:1.
>
> If you can access that host, you should be able to log into Hive by typing
> hive. The OS user is hadoop in your case, and it sounds like there is no
> password!
>
> Once on that host, the Hive logs are kept (in your case) in
> /tmp/hadoop/hive.log, or go to /tmp and do
>
> /tmp> find ./ -name hive.log. It should be under /tmp/hive.log
>
> Try running the SQL inside Hive and see what it says.
>
> HTH
>
> Mich Talebzadeh,
> Solutions Architect/Engineering Lead
> London
> United Kingdom
>
>
>view my Linkedin profile
> 
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Thu, 10 Aug 2023 at 20:02, Patrick Tucci 
> wrote:
>
>> Hi Mich,
>>
>> Thanks for the reply. Unfortunately I don't have Hive set up on my
>> cluster. I can explore this if there are no other ways to troubleshoot.
>>
>> I'm using beeline to run commands against the Thrift server. Here's the
>> command I use:
>>
>> ~/spark/bin/beeline -u jdbc:hive2://10.0.50.1:1 -n hadoop -f
>> command.sql
>>
>> Thanks again for your help.
>>
>> Patrick
>>
>>
>> On Thu, Aug 10, 2023 at 2:24 PM Mich Talebzadeh <
>> mich.talebza...@gmail.com> wrote:
>>
>>> Can you run this sql query through hive itself?
>>>
>>> Are you using this command or similar for your thrift server?
>>>
>>> beeline -u jdbc:hive2:///1/default
>>> org.apache.hive.jdbc.HiveDriver -n hadoop -p xxx
>>>
>>> HTH
>>>
>>> Mich Talebzadeh,
>>> Solutions Architect/Engineering Lead
>>> London
>>> United Kingdom
>>>
>>>
>>>view my Linkedin profile
>>> 
>>>
>>>
>>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>>
>>>
>>>
>>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>>> any loss, damage or destruction of data or any other property which may
>>> arise from relying on this email's technical content is explicitly
>>> disclaimed. The author will in no case be liable for any monetary damages
>>> arising from such loss, damage or destruction.
>>>
>>>
>>>
>>>
>>> On Thu, 10 Aug 2023 at 18:39, Patrick Tucci 
>>> wrote:
>>>
 Hello,

 I'm attempting to run a query on Spark 3.4.0 through the Spark
 ThriftServer. The cluster has 64 cores, 250GB RAM, and operates in
 standalone mode using HDFS for storage.

 The query is as follows:

 SELECT ME.*, MB.BenefitID
 FROM MemberEnrollment ME
 JOIN MemberBenefits MB
 ON ME.ID = MB.EnrollmentID
 WHERE MB.BenefitID = 5
 LIMIT 10

 The tables are defined as follows:

 -- Contains about 3M rows
 CREATE TABLE MemberEnrollment
 (
 ID INT
 , MemberID VARCHAR(50)
 , StartDate DATE
 , EndDate DATE
 -- Other columns, but these are the most important
 ) STORED AS ORC;

 -- Contains about 25m rows
 CREATE TABLE MemberBenefits
 (
 EnrollmentID INT
 , BenefitID INT
 ) STORED AS ORC;

 When I execute the query, it runs a single broadcast exchange stage,
 which completes after a few seconds. Then everything just hangs. The
 JDBC/ODBC tab in the UI shows the query state as COMPILED, but no stages or
 tasks are executing or pending:

 [image: image.png]

 I've let the query run for as long as 30 minutes with no additional
 stages, progress, or errors. I'm not sure where to start troubleshooting.

 Thanks for your help,

 Patrick

>>>


Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-10 Thread Mich Talebzadeh
Hi Patrick

That beeline on port 1 is a Hive thrift server running on your Hive host
10.0.50.1:1.

If you can access that host, you should be able to log into Hive by typing
hive. The OS user is hadoop in your case, and it sounds like there is no
password!

Once on that host, the Hive logs are kept (in your case) in /tmp/hadoop/hive.log,
or go to /tmp and do

/tmp> find ./ -name hive.log. It should be under /tmp/hive.log

Try running the SQL inside Hive and see what it says.

HTH

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


   view my Linkedin profile



 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Thu, 10 Aug 2023 at 20:02, Patrick Tucci  wrote:

> Hi Mich,
>
> Thanks for the reply. Unfortunately I don't have Hive set up on my
> cluster. I can explore this if there are no other ways to troubleshoot.
>
> I'm using beeline to run commands against the Thrift server. Here's the
> command I use:
>
> ~/spark/bin/beeline -u jdbc:hive2://10.0.50.1:1 -n hadoop -f
> command.sql
>
> Thanks again for your help.
>
> Patrick
>
>
> On Thu, Aug 10, 2023 at 2:24 PM Mich Talebzadeh 
> wrote:
>
>> Can you run this sql query through hive itself?
>>
>> Are you using this command or similar for your thrift server?
>>
>> beeline -u jdbc:hive2:///1/default
>> org.apache.hive.jdbc.HiveDriver -n hadoop -p xxx
>>
>> HTH
>>
>> Mich Talebzadeh,
>> Solutions Architect/Engineering Lead
>> London
>> United Kingdom
>>
>>
>>view my Linkedin profile
>> 
>>
>>
>>  https://en.everybodywiki.com/Mich_Talebzadeh
>>
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any loss, damage or destruction of data or any other property which may
>> arise from relying on this email's technical content is explicitly
>> disclaimed. The author will in no case be liable for any monetary damages
>> arising from such loss, damage or destruction.
>>
>>
>>
>>
>> On Thu, 10 Aug 2023 at 18:39, Patrick Tucci 
>> wrote:
>>
>>> Hello,
>>>
>>> I'm attempting to run a query on Spark 3.4.0 through the Spark
>>> ThriftServer. The cluster has 64 cores, 250GB RAM, and operates in
>>> standalone mode using HDFS for storage.
>>>
>>> The query is as follows:
>>>
>>> SELECT ME.*, MB.BenefitID
>>> FROM MemberEnrollment ME
>>> JOIN MemberBenefits MB
>>> ON ME.ID = MB.EnrollmentID
>>> WHERE MB.BenefitID = 5
>>> LIMIT 10
>>>
>>> The tables are defined as follows:
>>>
>>> -- Contains about 3M rows
>>> CREATE TABLE MemberEnrollment
>>> (
>>> ID INT
>>> , MemberID VARCHAR(50)
>>> , StartDate DATE
>>> , EndDate DATE
>>> -- Other columns, but these are the most important
>>> ) STORED AS ORC;
>>>
>>> -- Contains about 25m rows
>>> CREATE TABLE MemberBenefits
>>> (
>>> EnrollmentID INT
>>> , BenefitID INT
>>> ) STORED AS ORC;
>>>
>>> When I execute the query, it runs a single broadcast exchange stage,
>>> which completes after a few seconds. Then everything just hangs. The
>>> JDBC/ODBC tab in the UI shows the query state as COMPILED, but no stages or
>>> tasks are executing or pending:
>>>
>>> [image: image.png]
>>>
>>> I've let the query run for as long as 30 minutes with no additional
>>> stages, progress, or errors. I'm not sure where to start troubleshooting.
>>>
>>> Thanks for your help,
>>>
>>> Patrick
>>>
>>


Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-10 Thread Patrick Tucci
Hi Mich,

Thanks for the reply. Unfortunately I don't have Hive set up on my cluster.
I can explore this if there are no other ways to troubleshoot.

I'm using beeline to run commands against the Thrift server. Here's the
command I use:

~/spark/bin/beeline -u jdbc:hive2://10.0.50.1:1 -n hadoop -f command.sql

Thanks again for your help.

Patrick


On Thu, Aug 10, 2023 at 2:24 PM Mich Talebzadeh 
wrote:

> Can you run this sql query through hive itself?
>
> Are you using this command or similar for your thrift server?
>
> beeline -u jdbc:hive2:///1/default
> org.apache.hive.jdbc.HiveDriver -n hadoop -p xxx
>
> HTH
>
> Mich Talebzadeh,
> Solutions Architect/Engineering Lead
> London
> United Kingdom
>
>
>view my Linkedin profile
> 
>
>
>  https://en.everybodywiki.com/Mich_Talebzadeh
>
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Thu, 10 Aug 2023 at 18:39, Patrick Tucci 
> wrote:
>
>> Hello,
>>
>> I'm attempting to run a query on Spark 3.4.0 through the Spark
>> ThriftServer. The cluster has 64 cores, 250GB RAM, and operates in
>> standalone mode using HDFS for storage.
>>
>> The query is as follows:
>>
>> SELECT ME.*, MB.BenefitID
>> FROM MemberEnrollment ME
>> JOIN MemberBenefits MB
>> ON ME.ID = MB.EnrollmentID
>> WHERE MB.BenefitID = 5
>> LIMIT 10
>>
>> The tables are defined as follows:
>>
>> -- Contains about 3M rows
>> CREATE TABLE MemberEnrollment
>> (
>> ID INT
>> , MemberID VARCHAR(50)
>> , StartDate DATE
>> , EndDate DATE
>> -- Other columns, but these are the most important
>> ) STORED AS ORC;
>>
>> -- Contains about 25m rows
>> CREATE TABLE MemberBenefits
>> (
>> EnrollmentID INT
>> , BenefitID INT
>> ) STORED AS ORC;
>>
>> When I execute the query, it runs a single broadcast exchange stage,
>> which completes after a few seconds. Then everything just hangs. The
>> JDBC/ODBC tab in the UI shows the query state as COMPILED, but no stages or
>> tasks are executing or pending:
>>
>> [image: image.png]
>>
>> I've let the query run for as long as 30 minutes with no additional
>> stages, progress, or errors. I'm not sure where to start troubleshooting.
>>
>> Thanks for your help,
>>
>> Patrick
>>
>


Re: Spark-SQL - Query Hanging, How To Troubleshoot

2023-08-10 Thread Mich Talebzadeh
Can you run this sql query through hive itself?

Are you using this command or similar for your thrift server?

beeline -u jdbc:hive2:///1/default
org.apache.hive.jdbc.HiveDriver -n hadoop -p xxx

HTH

Mich Talebzadeh,
Solutions Architect/Engineering Lead
London
United Kingdom


   view my Linkedin profile



 https://en.everybodywiki.com/Mich_Talebzadeh



*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Thu, 10 Aug 2023 at 18:39, Patrick Tucci  wrote:

> Hello,
>
> I'm attempting to run a query on Spark 3.4.0 through the Spark
> ThriftServer. The cluster has 64 cores, 250GB RAM, and operates in
> standalone mode using HDFS for storage.
>
> The query is as follows:
>
> SELECT ME.*, MB.BenefitID
> FROM MemberEnrollment ME
> JOIN MemberBenefits MB
> ON ME.ID = MB.EnrollmentID
> WHERE MB.BenefitID = 5
> LIMIT 10
>
> The tables are defined as follows:
>
> -- Contains about 3M rows
> CREATE TABLE MemberEnrollment
> (
> ID INT
> , MemberID VARCHAR(50)
> , StartDate DATE
> , EndDate DATE
> -- Other columns, but these are the most important
> ) STORED AS ORC;
>
> -- Contains about 25m rows
> CREATE TABLE MemberBenefits
> (
> EnrollmentID INT
> , BenefitID INT
> ) STORED AS ORC;
>
> When I execute the query, it runs a single broadcast exchange stage, which
> completes after a few seconds. Then everything just hangs. The JDBC/ODBC
> tab in the UI shows the query state as COMPILED, but no stages or tasks are
> executing or pending:
>
> [image: image.png]
>
> I've let the query run for as long as 30 minutes with no additional
> stages, progress, or errors. I'm not sure where to start troubleshooting.
>
> Thanks for your help,
>
> Patrick
>


Re: Spark SQL query

2021-02-03 Thread Mich Talebzadeh
I suggest you open another thread for this feature request

"Having functionality in Spark to allow queries to be gathered and analyzed"

and see how the forum responds to it.

HTH





LinkedIn * 
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
*





*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Wed, 3 Feb 2021 at 11:17, Arpan Bhandari  wrote:

> Yes Mich,
>
> Mapping the spark sql query that got executed corresponding to an
> application Id on yarn would greatly help in analyzing and debugging the
> query for any potential problems.
>
>
> Thanks,
> Arpan Bhandari
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


Re: Spark SQL query

2021-02-03 Thread Arpan Bhandari
Yes Mich,

Mapping the spark sql query that got executed corresponding to an
application Id on yarn would greatly help in analyzing and debugging the
query for any potential problems.


Thanks,
Arpan Bhandari



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Spark SQL query

2021-02-03 Thread Mich Talebzadeh
I gather what you are after is a code sniffer for Spark that provides a
form of GUI to get at the code that applications run against Spark.

I don't think Spark has this type of plug-in, although it would be
potentially useful. Some RDBMSs provide this, usually backed by some form of
persistent storage or a database. I have not come across it in Spark.

HTH




LinkedIn * 
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
*





*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Wed, 3 Feb 2021 at 05:10, Arpan Bhandari  wrote:

> Mich,
>
> The directory is already there and event logs are getting generated. I have
> checked them; they contain the query plan but not the actual query.
>
>
> Thanks,
> Arpan Bhandari
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


Re: Spark SQL query

2021-02-02 Thread Arpan Bhandari
Mich,

The directory is already there and event logs are getting generated. I have
checked them; they contain the query plan but not the actual query.


Thanks,
Arpan Bhandari



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Spark SQL query

2021-02-02 Thread Mich Talebzadeh
create a directory in hdfs

hdfs dfs -mkdir /spark_event_logs

modify file $SPARK_HOME/conf/spark-defaults.conf and add these two lines

spark.eventLog.enabled=true
# do not use quotes below
spark.eventLog.dir=hdfs://rhes75:9000/spark_event_logs

Then run a job and check it

hdfs dfs -ls /spark_event_logs

-rw-rw   3 hduser supergroup   33795834 2021-02-02 19:48
/spark_event_logs/yarn-1612295234284

That should have all the info you need

Make sure the directory hdfs://:9000/spark_event_logs is
writable by spark


HTH




LinkedIn * 
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
*





*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Tue, 2 Feb 2021 at 15:59, Arpan Bhandari  wrote:

> Yes, I can see the jobs on 8088 and also on the Spark history URL. The Spark
> history server is showing the plan details on the SQL tab but not giving
> the query.
>
>
> Thanks,
> Arpan Bhandari
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


Re: Spark SQL query

2021-02-02 Thread Arpan Bhandari
Yes, I can see the jobs on 8088 and also on the Spark history URL. The Spark
history server is showing the plan details on the SQL tab but not giving
the query.


Thanks,
Arpan Bhandari



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Spark SQL query

2021-02-02 Thread Arpan Bhandari
Hi Mich,

I do see the .scala_history directory, but it contains all the queries
executed up till now; if I have to map a specific query to an
application ID in YARN, that would not correlate, hence this method alone
won't suffice.

Thanks,
Arpan Bhandari
 



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Spark SQL query

2021-02-02 Thread Mich Talebzadeh
Hi Arpan.

I believe all applications, including Spark and Scala, create a hidden
history file.

You can go to the home directory

cd

# see list of all hidden files

ls -a | egrep '^\.'

If you are using Scala, do you see a .scala_history file?

.scala_history

HTH



LinkedIn * 
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
*





*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Tue, 2 Feb 2021 at 10:16, Arpan Bhandari  wrote:

> Hi Mich,
>
> Repeated the steps as suggested, but still there is no such folder created
> in the home directory. Do we need to enable some property so that it
> creates
> one.
>
>
> Thanks,
> Arpan Bhandari
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


Re: Spark SQL query

2021-02-02 Thread Arpan Bhandari
Hi Mich,

Repeated the steps as suggested, but still there is no such folder created
in the home directory. Do we need to enable some property so that it creates
one?


Thanks,
Arpan Bhandari



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Spark SQL query

2021-02-02 Thread Arpan Bhandari
Sanchit,

It seems I have to do some sort of analysis from the plan to get the query.
Appreciate all your help on this.


Thanks,
Arpan Bhandari



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Spark SQL query

2021-02-01 Thread Mich Talebzadeh
Hi Arpan,

Log in as any user that has execution rights for Spark. Type spark-shell, run
some simple commands, then exit. Go to the home directory of that user and look
for the hidden file


${HOME}/.spark_history

It will be there.

HTH,



LinkedIn * 
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
*





*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Mon, 1 Feb 2021 at 15:44, Arpan Bhandari  wrote:

> Hey Mich,
>
> Thanks for the suggestions, but I don't see any such folder created on the
> edge node.
>
>
> Thanks,
> Arpan Bhandari
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


Re: Spark SQL query

2021-02-01 Thread Sachit Murarka
Application-wise it won't show as such.
You can try to correlate it with the explain plan output using some filters or
attributes.

Or else, if you do not have too many queries in history, just take the queries,
find the plans of those queries, and match them with what is shown in the UI.

I know that's a tedious task, but I don't think there is another way.

Thanks
Sachit

On Mon, 1 Feb 2021, 22:32 Arpan Bhandari,  wrote:

> Sachit,
>
> That is showing all the queries that got executed, but how would it get
> mapped to the specific application ID it was associated with?
>
> Thanks,
> Arpan Bhandari
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


Re: Spark SQL query

2021-02-01 Thread Arpan Bhandari
Sachit,

That is showing all the queries that got executed, but how would it get
mapped to the specific application ID it was associated with?

Thanks,
Arpan Bhandari



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Spark SQL query

2021-02-01 Thread Sachit Murarka
Hi Arpan,

In the Spark shell, when you type :history, is it still not showing?

Thanks
Sachit

On Mon, 1 Feb 2021, 21:13 Arpan Bhandari,  wrote:

> Hey Sachit,
>
> It shows the query plan, which is difficult to work backwards from to
> reconstruct the actual query.
>
>
> Thanks,
> Arpan Bhandari
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


Re: Spark SQL query

2021-02-01 Thread Arpan Bhandari
Hey Mich,

Thanks for the suggestions, but I don't see any such folder created on the
edge node.


Thanks,
Arpan Bhandari



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Spark SQL query

2021-02-01 Thread Arpan Bhandari
Hey Sachit,

It shows the query plan, which is difficult to work backwards from to
reconstruct the actual query.


Thanks,
Arpan Bhandari



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Spark SQL query

2021-01-31 Thread Mich Talebzadeh
Hi Arpan,

I presume you are interested in what the client was doing.

If you have access to the edge node (where spark code is submitted), look
for the following file

${HOME}/.spark_history

example

-rw-r--r--. 1 hduser hadoop 111997 Jun  2  2018 .spark_history

just use shell tools (cat, grep etc) to have a look

Or put it in HDFS somewhere

hdfs dfs -put .spark_history /misc/spark_history ## Spark cannot read a
hidden file

# and read it as a text file through a Spark RDD in spark-shell

scala> val historyRDD = spark.sparkContext.textFile("/misc/spark_history")
historyRDD: org.apache.spark.rdd.RDD[String] = /misc/spark_history
MapPartitionsRDD[11] at textFile at :23

#print it out

 historyRDD.collect().foreach(f=>{println(f)})


HTH





LinkedIn * 
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
*





*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Fri, 29 Jan 2021 at 13:49, Arpan Bhandari  wrote:

> Hi ,
>
> Is there a way to track back a Spark SQL query after it has already been run,
> i.e. the query has already been submitted by a person and I have to back-trace
> what query actually got submitted.
>
>
> Appreciate any help on this.
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


Re: Spark SQL query

2021-01-31 Thread Sachit Murarka
Hi Arpan,

Launch the Spark shell and in the shell type ":history"; you will see the
queries that were executed.

In the Spark UI, under the SQL tab, you can see the query plan when you click
on the details button (though it won't show you the complete query). But by
looking at the plan you can work out your query.

Hope this helps!


Kind Regards,
Sachit Murarka


On Fri, Jan 29, 2021 at 9:33 PM Arpan Bhandari  wrote:

> Hi Sachit,
>
> Yes, it was executed using the Spark shell, and history is already enabled. I
> already checked the SQL tab but it is not showing the query. My Spark version is 2.4.5.
>
> Thanks,
> Arpan Bhandari
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


Re: Spark SQL query

2021-01-29 Thread Arpan Bhandari
Hi Sachit,

Yes, it was executed using the Spark shell, and history is already enabled. I
already checked the SQL tab but it is not showing the query. My Spark version is 2.4.5.

Thanks,
Arpan Bhandari



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org



Re: Spark SQL query

2021-01-29 Thread Sachit Murarka
Hi Arpan,

Was it executed using the Spark shell?
If yes, type :history.

Do you have the history server enabled?
If yes, go to the history and open the SQL tab in the History UI.

Thanks
Sachit

On Fri, 29 Jan 2021, 19:19 Arpan Bhandari,  wrote:

> Hi ,
>
> Is there a way to trace back a Spark SQL query after it has already been run, i.e.
> the query was already submitted by someone else and I have to trace back what
> query actually got submitted?
>
>
> Appreciate any help on this.
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


RE: Spark-SQL Query Optimization: overlapping ranges

2017-05-01 Thread Lavelle, Shawn
Jacek,

   Thanks for your help.  I didn't want to file a bug/enhancement request unless it was
warranted.

~ Shawn

From: Jacek Laskowski [mailto:ja...@japila.pl]
Sent: Thursday, April 27, 2017 8:39 AM
To: Lavelle, Shawn 
Cc: user 
Subject: Re: Spark-SQL Query Optimization: overlapping ranges

Hi Shawn,

If you're asking me if Spark SQL should optimize such queries, I don't know.

If you're asking me if it's possible to convince Spark SQL to do so, I'd say, 
sure, it is. Write your optimization rule and attach it to Optimizer (using 
extraOptimizations extension point).


Regards,
Jacek Laskowski

https://medium.com/@jaceklaskowski/
Mastering Apache Spark 2 https://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski

On Thu, Apr 27, 2017 at 3:22 PM, Lavelle, Shawn 
<shawn.lave...@osii.com> wrote:
Hi Jacek,

 I know that it is not currently doing so, but should it be?  The algorithm 
isn’t complicated and could be applied to both OR and AND logical operators 
with comparison operators as children.
 My users write programs to generate queries that aren’t checked for this 
sort of thing. We’re probably going to write our own 
org.apache.spark.sql.catalyst.rules.Rule to handle it.

~ Shawn

From: Jacek Laskowski [mailto:ja...@japila.pl]
Sent: Wednesday, April 26, 2017 2:55 AM
To: Lavelle, Shawn <shawn.lave...@osii.com>
Cc: user <user@spark.apache.org>
Subject: Re: Spark-SQL Query Optimization: overlapping ranges

explain it and you'll know what happens under the covers.

i.e. Use explain on the Dataset.

Jacek

On 25 Apr 2017 12:46 a.m., "Lavelle, Shawn"
<shawn.lave...@osii.com> wrote:
Hello Spark Users!

   Does the Spark Optimization engine reduce overlapping column ranges?  If so, 
should it push this down to a Data Source?

  Example,
This:  Select * from table where col between 3 and 7 OR col between 5 and 9
Reduces to:  Select * from table where col between 3 and 9


  Thanks for your insight!

~ Shawn M Lavelle



Shawn Lavelle
Software Development

4101 Arrowhead Drive
Medina, Minnesota 55340-9457
Phone: 763 551 0559
Fax: 763 551 0750
Email: shawn.lave...@osii.com
Website: www.osii.com





Re: Spark-SQL Query Optimization: overlapping ranges

2017-04-27 Thread Jacek Laskowski
Hi Shawn,

If you're asking me if Spark SQL should optimize such queries, I don't know.

If you're asking me if it's possible to convince Spark SQL to do so, I'd
say, sure, it is. Write your optimization rule and attach it to Optimizer
(using extraOptimizations extension point).
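
As a rough illustration of that extension point (not Spark's built-in behavior; the
rule below is a hypothetical sketch that only handles integer ranges, relying on
BETWEEN being parsed as >= AND <=, and it may not fire once casts are involved):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.catalyst.expressions._
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule
import org.apache.spark.sql.types.IntegerType

// Hypothetical rule: collapse (col BETWEEN lo1 AND hi1) OR (col BETWEEN lo2 AND hi2)
// into a single range when the two integer ranges overlap.
object CollapseOverlappingRanges extends Rule[LogicalPlan] {
  override def apply(plan: LogicalPlan): LogicalPlan = plan transformAllExpressions {
    case Or(And(GreaterThanOrEqual(a1, Literal(lo1: Int, IntegerType)),
                LessThanOrEqual(a2, Literal(hi1: Int, IntegerType))),
            And(GreaterThanOrEqual(b1, Literal(lo2: Int, IntegerType)),
                LessThanOrEqual(b2, Literal(hi2: Int, IntegerType))))
        if a1.semanticEquals(a2) && a1.semanticEquals(b1) && a1.semanticEquals(b2) &&
           lo2 <= hi1 && lo1 <= hi2 =>  // ranges overlap, so their union is one range
      And(GreaterThanOrEqual(a1, Literal(math.min(lo1, lo2))),
          LessThanOrEqual(a1, Literal(math.max(hi1, hi2))))
  }
}

// attach it via the extraOptimizations extension point
val spark = SparkSession.builder().getOrCreate()
spark.experimental.extraOptimizations ++= Seq(CollapseOverlappingRanges)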


Regards,
Jacek Laskowski

https://medium.com/@jaceklaskowski/
Mastering Apache Spark 2 https://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski

On Thu, Apr 27, 2017 at 3:22 PM, Lavelle, Shawn 
wrote:

> Hi Jacek,
>
>
>
>  I know that it is not currently doing so, but should it be?  The
> algorithm isn’t complicated and could be applied to both OR and AND logical
> operators with comparison operators as children.
>  My users write programs to generate queries that aren’t checked for
> this sort of thing. We’re probably going to write our own
> org.apache.spark.sql.catalyst.rules.Rule to handle it.
>
> ~ Shawn
>
>
>
> *From:* Jacek Laskowski [mailto:ja...@japila.pl]
> *Sent:* Wednesday, April 26, 2017 2:55 AM
> *To:* Lavelle, Shawn 
> *Cc:* user 
> *Subject:* Re: Spark-SQL Query Optimization: overlapping ranges
>
>
>
> explain it and you'll know what happens under the covers.
>
>
>
> i.e. Use explain on the Dataset.
>
>
>
> Jacek
>
>
>
> On 25 Apr 2017 12:46 a.m., "Lavelle, Shawn" 
> wrote:
>
> Hello Spark Users!
>
>Does the Spark Optimization engine reduce overlapping column ranges?
> If so, should it push this down to a Data Source?
>
>   Example,
>
> This:  Select * from table where col between 3 and 7 OR col between 5
> and 9
>
> Reduces to:  Select * from table where col between 3 and 9
>
>
>
>
>
>   Thanks for your insight!
>
>
> ~ Shawn M Lavelle
>
>
>
>
>
> *Shawn* *Lavelle*
> Software Development
>
> 4101 Arrowhead Drive
> Medina, Minnesota 55340-9457
> Phone: 763 551 0559 <(763)%20551-0559>
> Fax: 763 551 0750 <(763)%20551-0750>
> *Email:* *shawn.lave...@osii.com *
> *Website:* *www.osii.com* <http://www.osii.com>
>
>
>
>


RE: Spark-SQL Query Optimization: overlapping ranges

2017-04-27 Thread Lavelle, Shawn
Hi Jacek,

 I know that it is not currently doing so, but should it be?  The algorithm 
isn’t complicated and could be applied to both OR and AND logical operators 
with comparison operators as children.
 My users write programs to generate queries that aren’t checked for this 
sort of thing. We’re probably going to write our own 
org.apache.spark.sql.catalyst.rules.Rule to handle it.

~ Shawn

From: Jacek Laskowski [mailto:ja...@japila.pl]
Sent: Wednesday, April 26, 2017 2:55 AM
To: Lavelle, Shawn 
Cc: user 
Subject: Re: Spark-SQL Query Optimization: overlapping ranges

explain it and you'll know what happens under the covers.

i.e. Use explain on the Dataset.

Jacek

On 25 Apr 2017 12:46 a.m., "Lavelle, Shawn"
<shawn.lave...@osii.com> wrote:
Hello Spark Users!

   Does the Spark Optimization engine reduce overlapping column ranges?  If so, 
should it push this down to a Data Source?

  Example,
This:  Select * from table where col between 3 and 7 OR col between 5 and 9
Reduces to:  Select * from table where col between 3 and 9


  Thanks for your insight!

~ Shawn M Lavelle



Shawn Lavelle
Software Development

4101 Arrowhead Drive
Medina, Minnesota 55340-9457
Phone: 763 551 0559
Fax: 763 551 0750
Email: shawn.lave...@osii.com
Website: www.osii.com




Re: Spark-SQL Query Optimization: overlapping ranges

2017-04-26 Thread Jacek Laskowski
explain it and you'll know what happens under the covers.

i.e. Use explain on the Dataset.

Jacek

On 25 Apr 2017 12:46 a.m., "Lavelle, Shawn"  wrote:

> Hello Spark Users!
>
>Does the Spark Optimization engine reduce overlapping column ranges?
> If so, should it push this down to a Data Source?
>
>   Example,
>
> This:  Select * from table where col between 3 and 7 OR col between 5
> and 9
>
> Reduces to:  Select * from table where col between 3 and 9
>
>
>
>
>
>   Thanks for your insight!
>
>
> ~ Shawn M Lavelle
>
>
>
>
>
>
> Shawn Lavelle
> Software Development
>
> 4101 Arrowhead Drive
> Medina, Minnesota 55340-9457
> Phone: 763 551 0559 <(763)%20551-0559>
> Fax: 763 551 0750 <(763)%20551-0750>
> *Email:* shawn.lave...@osii.com
> *Website: **www.osii.com* 
>


Re: Spark sql query plan contains all the partitions from hive table even though filtering of partitions is provided

2017-01-17 Thread Raju Bairishetti
Thanks Yong for the response. Adding my responses inline

On Tue, Jan 17, 2017 at 10:27 PM, Yong Zhang  wrote:

> What DB you are using for your Hive meta store, and what types are your
> partition columns?
>
I am using MySql for Hive metastore. Partition columns are  combination of
INT and STRING types.

>
> You maybe want to read the discussion in SPARK-6910, and especially the
> comments in PR. There are some limitation about partition pruning in
> Hive/Spark, maybe yours is one of them
>
Seems I had already gone through SPARK-6910 and corresponding all PRs.
*spark.sql.hive.convertMetastoreParquet   false*
*spark.sql.hive.metastorePartitionPruning   true*
*I had set the above properties from *SPARK-6910 & PRs.
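
For reference, a minimal spark-shell sketch of how those two settings can be applied
and the resulting plan inspected (Spark 1.6-era API; the table name and partition
filter below are placeholders, and sqlContext is assumed to be the shell's HiveContext):

// in spark-shell, sqlContext is typically already a HiveContext
sqlContext.setConf("spark.sql.hive.convertMetastoreParquet", "false")
sqlContext.setConf("spark.sql.hive.metastorePartitionPruning", "true")

// explain(true) shows whether the HiveTableScan keeps the partition predicates,
// i.e. whether only the matching partitions should be requested from the metastore
sqlContext.sql(
  "SELECT * FROM dbname.tablename WHERE year = 2016 AND month = 12 AND day = 28"
).explain(true)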


>

> Yong
>
>
> --
> *From:* Raju Bairishetti 
> *Sent:* Tuesday, January 17, 2017 3:00 AM
> *To:* user @spark
> *Subject:* Re: Spark sql query plan contains all the partitions from hive
> table even though filtering of partitions is provided
>
> Had a high level look into the code. Seems getHiveQlPartitions  method
> from HiveMetastoreCatalog is getting called irrespective of 
> metastorePartitionPruning
> conf value.
>
>  It should not fetch all partitions if we set metastorePartitionPruning to
> true (Default value for this is false)
>
> def getHiveQlPartitions(predicates: Seq[Expression] = Nil): Seq[Partition] = {
>   val rawPartitions = if (sqlContext.conf.metastorePartitionPruning) {
> table.getPartitions(predicates)
>   } else {
> allPartitions
>   }
>
> ...
>
> def getPartitions(predicates: Seq[Expression]): Seq[HivePartition] =
>   client.getPartitionsByFilter(this, predicates)
>
> lazy val allPartitions = table.getAllPartitions
>
> But somehow getAllPartitions is getting called even though
> metastorePartitionPruning is set to true.
>
> Am I missing something or looking at the wrong place?
>
>
> On Mon, Jan 16, 2017 at 12:53 PM, Raju Bairishetti 
> wrote:
>
>> Waiting for suggestions/help on this...
>>
>> On Wed, Jan 11, 2017 at 12:14 PM, Raju Bairishetti 
>> wrote:
>>
>>> Hello,
>>>
>>>Spark sql is generating query plan with all partitions information
>>> even though if we apply filters on partitions in the query.  Due to this,
>>> spark driver/hive metastore is hitting with OOM as each table is with lots
>>> of partitions.
>>>
>>> We can confirm from hive audit logs that it tries to *fetch all
>>> partitions* from hive metastore.
>>>
>>>  2016-12-28 07:18:33,749 INFO  [pool-4-thread-184]: HiveMetaStore.audit
>>> (HiveMetaStore.java:logAuditEvent(371)) - ugi=rajubip=/x.x.x.x
>>> cmd=get_partitions : db= tbl=x
>>>
>>>
>>> Configured the following parameters in the spark conf to fix the above
>>> issue(source: from spark-jira & github pullreq):
>>>
>>> *spark.sql.hive.convertMetastoreParquet   false *
>>> *spark.sql.hive.metastorePartitionPruning   true*
>>>
>>>
>>> *   plan:  rdf.explain *
>>> *   == Physical Plan ==*
>>>HiveTableScan [rejection_reason#626], MetastoreRelation dbname,
>>> tablename, None,   [(year#314 = 2016),(month#315 = 12),(day#316 =
>>> 28),(hour#317 = 2),(venture#318 = DEFAULT)]
>>>
>>> *get_partitions_by_filter* method is called and fetching only
>>> required partitions.
>>>
>>> But we are seeing parquetDecode errors in our applications
>>> frequently after this. Looks like these decoding errors were because of
>>> changing serde from spark-builtin to hive serde.
>>>
>>> I feel like,* fixing query plan generation in the spark-sql* is the
>>> right approach instead of forcing users to use hive serde.
>>>
>>> Is there any workaround/way to fix this issue? I would like to hear more
>>> thoughts on this :)
>>>
>>> --
>>> Thanks,
>>> Raju Bairishetti,
>>> www.lazada.com
>>>
>>
>>
>>
>> --
>>
>> --
>> Thanks,
>> Raju Bairishetti,
>> www.lazada.com
>>
>
>
>
> --
>
> --
> Thanks,
> Raju Bairishetti,
> www.lazada.com
>



-- 

--
Thanks,
Raju Bairishetti,
www.lazada.com


Re: Spark sql query plan contains all the partitions from hive table even though filtering of partitions is provided

2017-01-17 Thread Yong Zhang
What DB you are using for your Hive meta store, and what types are your 
partition columns?


You maybe want to read the discussion in SPARK-6910, and especially the 
comments in PR. There are some limitation about partition pruning in 
Hive/Spark, maybe yours is one of them.


Yong



From: Raju Bairishetti 
Sent: Tuesday, January 17, 2017 3:00 AM
To: user @spark
Subject: Re: Spark sql query plan contains all the partitions from hive table 
even though filtering of partitions is provided

Had a high level look into the code. Seems getHiveQlPartitions  method from 
HiveMetastoreCatalog is getting called irrespective of 
metastorePartitionPruning conf value.

 It should not fetch all partitions if we set metastorePartitionPruning to true 
(Default value for this is false)

def getHiveQlPartitions(predicates: Seq[Expression] = Nil): Seq[Partition] = {
  val rawPartitions = if (sqlContext.conf.metastorePartitionPruning) {
table.getPartitions(predicates)
  } else {
allPartitions
  }

...

def getPartitions(predicates: Seq[Expression]): Seq[HivePartition] =
  client.getPartitionsByFilter(this, predicates)

lazy val allPartitions = table.getAllPartitions

But somehow getAllPartitions is getting called even though
metastorePartitionPruning is set to true.

Am I missing something or looking at the wrong place?

On Mon, Jan 16, 2017 at 12:53 PM, Raju Bairishetti
<r...@apache.org> wrote:
Waiting for suggestions/help on this...

On Wed, Jan 11, 2017 at 12:14 PM, Raju Bairishetti
<r...@apache.org> wrote:
Hello,

   Spark sql is generating query plan with all partitions information even 
though if we apply filters on partitions in the query.  Due to this, spark 
driver/hive metastore is hitting with OOM as each table is with lots of 
partitions.

We can confirm from hive audit logs that it tries to fetch all partitions from 
hive metastore.

 2016-12-28 07:18:33,749 INFO  [pool-4-thread-184]: HiveMetaStore.audit 
(HiveMetaStore.java:logAuditEvent(371)) - ugi=rajubip=/x.x.x.x   
cmd=get_partitions : db= tbl=x


Configured the following parameters in the spark conf to fix the above 
issue(source: from spark-jira & github pullreq):
spark.sql.hive.convertMetastoreParquet   false
spark.sql.hive.metastorePartitionPruning   true

   plan:  rdf.explain
   == Physical Plan ==
   HiveTableScan [rejection_reason#626], MetastoreRelation dbname, 
tablename, None,   [(year#314 = 2016),(month#315 = 12),(day#316 = 28),(hour#317 
= 2),(venture#318 = DEFAULT)]

get_partitions_by_filter method is called and fetching only required 
partitions.

But we are seeing parquetDecode errors in our applications frequently after 
this. Looks like these decoding errors were because of changing serde from 
spark-builtin to hive serde.

I feel like, fixing query plan generation in the spark-sql is the right 
approach instead of forcing users to use hive serde.

Is there any workaround/way to fix this issue? I would like to hear more 
thoughts on this :)

--
Thanks,
Raju Bairishetti,
www.lazada.com



--

--
Thanks,
Raju Bairishetti,
www.lazada.com



--

--
Thanks,
Raju Bairishetti,
www.lazada.com


Re: Spark sql query plan contains all the partitions from hive table even though filtering of partitions is provided

2017-01-17 Thread Raju Bairishetti
Had a high level look into the code. Seems getHiveQlPartitions  method from
HiveMetastoreCatalog is getting called irrespective of
metastorePartitionPruning
conf value.

 It should not fetch all partitions if we set metastorePartitionPruning to
true (Default value for this is false)

def getHiveQlPartitions(predicates: Seq[Expression] = Nil): Seq[Partition] = {
  val rawPartitions = if (sqlContext.conf.metastorePartitionPruning) {
table.getPartitions(predicates)
  } else {
allPartitions
  }

...

def getPartitions(predicates: Seq[Expression]): Seq[HivePartition] =
  client.getPartitionsByFilter(this, predicates)

lazy val allPartitions = table.getAllPartitions

But somehow getAllPartitions is getting called even though
metastorePartitionPruning is set to true.

Am I missing something or looking at the wrong place?


On Mon, Jan 16, 2017 at 12:53 PM, Raju Bairishetti  wrote:

> Waiting for suggestions/help on this...
>
> On Wed, Jan 11, 2017 at 12:14 PM, Raju Bairishetti 
> wrote:
>
>> Hello,
>>
>>Spark sql is generating query plan with all partitions information
>> even though if we apply filters on partitions in the query.  Due to this,
>> spark driver/hive metastore is hitting with OOM as each table is with lots
>> of partitions.
>>
>> We can confirm from hive audit logs that it tries to *fetch all
>> partitions* from hive metastore.
>>
>>  2016-12-28 07:18:33,749 INFO  [pool-4-thread-184]: HiveMetaStore.audit
>> (HiveMetaStore.java:logAuditEvent(371)) - ugi=rajubip=/x.x.x.x
>> cmd=get_partitions : db= tbl=x
>>
>>
>> Configured the following parameters in the spark conf to fix the above
>> issue(source: from spark-jira & github pullreq):
>>
>> *spark.sql.hive.convertMetastoreParquet   false*
>> *spark.sql.hive.metastorePartitionPruning   true*
>>
>>
>> *   plan:  rdf.explain*
>> *   == Physical Plan ==*
>>HiveTableScan [rejection_reason#626], MetastoreRelation dbname,
>> tablename, None,   [(year#314 = 2016),(month#315 = 12),(day#316 =
>> 28),(hour#317 = 2),(venture#318 = DEFAULT)]
>>
>> *get_partitions_by_filter* method is called and fetching only
>> required partitions.
>>
>> But we are seeing parquetDecode errors in our applications frequently
>> after this. Looks like these decoding errors were because of changing
>> serde from spark-builtin to hive serde.
>>
>> I feel like,* fixing query plan generation in the spark-sql* is the
>> right approach instead of forcing users to use hive serde.
>>
>> Is there any workaround/way to fix this issue? I would like to hear more
>> thoughts on this :)
>>
>> --
>> Thanks,
>> Raju Bairishetti,
>> www.lazada.com
>>
>
>
>
> --
>
> --
> Thanks,
> Raju Bairishetti,
> www.lazada.com
>



-- 

--
Thanks,
Raju Bairishetti,
www.lazada.com


Re: Spark sql query plan contains all the partitions from hive table even though filtering of partitions is provided

2017-01-15 Thread Raju Bairishetti
Waiting for suggestions/help on this...

On Wed, Jan 11, 2017 at 12:14 PM, Raju Bairishetti  wrote:

> Hello,
>
>Spark sql is generating query plan with all partitions information even
> though if we apply filters on partitions in the query.  Due to this, spark
> driver/hive metastore is hitting with OOM as each table is with lots of
> partitions.
>
> We can confirm from hive audit logs that it tries to *fetch all
> partitions* from hive metastore.
>
>  2016-12-28 07:18:33,749 INFO  [pool-4-thread-184]: HiveMetaStore.audit
> (HiveMetaStore.java:logAuditEvent(371)) - ugi=rajubip=/x.x.x.x
> cmd=get_partitions : db= tbl=x
>
>
> Configured the following parameters in the spark conf to fix the above
> issue(source: from spark-jira & github pullreq):
>
> *spark.sql.hive.convertMetastoreParquet   false*
> *spark.sql.hive.metastorePartitionPruning   true*
>
>
> *   plan:  rdf.explain*
> *   == Physical Plan ==*
>HiveTableScan [rejection_reason#626], MetastoreRelation dbname,
> tablename, None,   [(year#314 = 2016),(month#315 = 12),(day#316 =
> 28),(hour#317 = 2),(venture#318 = DEFAULT)]
>
> *get_partitions_by_filter* method is called and fetching only
> required partitions.
>
> But we are seeing parquetDecode errors in our applications frequently
> after this. Looks like these decoding errors were because of changing
> serde from spark-builtin to hive serde.
>
> I feel like,* fixing query plan generation in the spark-sql* is the right
> approach instead of forcing users to use hive serde.
>
> Is there any workaround/way to fix this issue? I would like to hear more
> thoughts on this :)
>
> --
> Thanks,
> Raju Bairishetti,
> www.lazada.com
>



-- 

--
Thanks,
Raju Bairishetti,
www.lazada.com


Re: Spark SQL query for List

2016-04-26 Thread Ramkumar V
I'm getting the following exception if I form a query like this. It does not even
reach the point where get(0) or get(1) is called.

Exception in thread "main" java.lang.RuntimeException: [1.22] failure:
``*'' expected but `cities' found


*Thanks*,



On Tue, Apr 26, 2016 at 4:41 PM, Hyukjin Kwon  wrote:

> Doesn't get(0) give you the Array[String] for CITY (am I missing
> something?)
> On 26 Apr 2016 11:02 p.m., "Ramkumar V"  wrote:
>
> JavaSparkContext ctx = new JavaSparkContext(sparkConf);
>
> SQLContext sqlContext = new SQLContext(ctx);
>
> DataFrame parquetFile = sqlContext.parquetFile(
> "hdfs:/XYZ:8020/user/hdfs/parquet/*.parquet");
>
>parquetFile.registerTempTable("parquetFile");
>
> DataFrame tempDF = sqlContext.sql("SELECT TOUR.CITIES, BUDJET from
> parquetFile");
>
> JavaRDD<Row> jRDD = tempDF.toJavaRDD();
>
>  JavaRDD<String> ones = jRDD.map(new Function<Row, String>() {
>
>   public String call(Row row) throws Exception {
>
> return row.getString(1);
>
>   }
>
> });
>
> *Thanks*,
> 
>
>
> On Tue, Apr 26, 2016 at 3:48 PM, Hyukjin Kwon  wrote:
>
>> Could you maybe share your codes?
>> On 26 Apr 2016 9:51 p.m., "Ramkumar V"  wrote:
>>
>>> Hi,
>>>
>>> I had loaded a JSON file in Parquet format into Spark SQL. I am not able to
>>> read the List which is inside the JSON.
>>>
>>> Sample JSON
>>>
>>> {
>>> "TOUR" : {
>>>  "CITIES" : ["Paris","Berlin","Prague"]
>>> },
>>> "BUDJET" : 100
>>> }
>>>
>>> I want to read value of CITIES.
>>>
>>> *Thanks*,
>>> 
>>>
>>>
>


Re: Spark SQL query for List

2016-04-26 Thread Hyukjin Kwon
Doesn't get(0) give you the Array[String] for CITY (am I missing something?)
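
For reference, a rough Scala sketch of pulling the array column out via the
DataFrame API and reading it back as a Seq (names follow the thread; this is an
untested sketch, Spark 1.4+ style):

// select the nested array column and read it back from each Row
val tempDF = sqlContext.read.parquet("hdfs:/XYZ:8020/user/hdfs/parquet/*.parquet")
  .select("TOUR.CITIES", "BUDJET")
val cities = tempDF.rdd.map(row => row.getSeq[String](0))  // column 0 = CITIES
cities.collect().foreach(println)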
On 26 Apr 2016 11:02 p.m., "Ramkumar V"  wrote:

JavaSparkContext ctx = new JavaSparkContext(sparkConf);

SQLContext sqlContext = new SQLContext(ctx);

DataFrame parquetFile = sqlContext.parquetFile(
"hdfs:/XYZ:8020/user/hdfs/parquet/*.parquet");

   parquetFile.registerTempTable("parquetFile");

DataFrame tempDF = sqlContext.sql("SELECT TOUR.CITIES, BUDJET from
parquetFile");

JavaRDD<Row> jRDD = tempDF.toJavaRDD();

 JavaRDD<String> ones = jRDD.map(new Function<Row, String>() {

  public String call(Row row) throws Exception {

return row.getString(1);

  }

});

*Thanks*,



On Tue, Apr 26, 2016 at 3:48 PM, Hyukjin Kwon  wrote:

> Could you maybe share your codes?
> On 26 Apr 2016 9:51 p.m., "Ramkumar V"  wrote:
>
>> Hi,
>>
>> I had loaded a JSON file in Parquet format into Spark SQL. I am not able to
>> read the List which is inside the JSON.
>>
>> Sample JSON
>>
>> {
>> "TOUR" : {
>>  "CITIES" : ["Paris","Berlin","Prague"]
>> },
>> "BUDJET" : 100
>> }
>>
>> I want to read value of CITIES.
>>
>> *Thanks*,
>> 
>>
>>


Re: Spark SQL query for List

2016-04-26 Thread Ramkumar V
JavaSparkContext ctx = new JavaSparkContext(sparkConf);

SQLContext sqlContext = new SQLContext(ctx);

DataFrame parquetFile = sqlContext.parquetFile(
"hdfs:/XYZ:8020/user/hdfs/parquet/*.parquet");

   parquetFile.registerTempTable("parquetFile");

DataFrame tempDF = sqlContext.sql("SELECT TOUR.CITIES, BUDJET from
parquetFile");

JavaRDD<Row> jRDD = tempDF.toJavaRDD();

 JavaRDD<String> ones = jRDD.map(new Function<Row, String>() {

  public String call(Row row) throws Exception {

return row.getString(1);

  }

});

*Thanks*,



On Tue, Apr 26, 2016 at 3:48 PM, Hyukjin Kwon  wrote:

> Could you maybe share your codes?
> On 26 Apr 2016 9:51 p.m., "Ramkumar V"  wrote:
>
>> Hi,
>>
>> I had loaded a JSON file in Parquet format into Spark SQL. I am not able to
>> read the List which is inside the JSON.
>>
>> Sample JSON
>>
>> {
>> "TOUR" : {
>>  "CITIES" : ["Paris","Berlin","Prague"]
>> },
>> "BUDJET" : 100
>> }
>>
>> I want to read value of CITIES.
>>
>> *Thanks*,
>> 
>>
>>


Re: Spark SQL query for List

2016-04-26 Thread Hyukjin Kwon
Could you maybe share your codes?
On 26 Apr 2016 9:51 p.m., "Ramkumar V"  wrote:

> Hi,
>
> I had loaded a JSON file in Parquet format into Spark SQL. I am not able to
> read the List which is inside the JSON.
>
> Sample JSON
>
> {
> "TOUR" : {
>  "CITIES" : ["Paris","Berlin","Prague"]
> },
> "BUDJET" : 100
> }
>
> I want to read value of CITIES.
>
> *Thanks*,
> 
>
>


Re: Spark sql query taking long time

2016-03-03 Thread Gourav Sengupta
Hi,

using dataframes you can use SQL, and SQL has an option of JOIN, BETWEEN,
IN and LIKE OPERATIONS. Why would someone use a dataframe and then use them
as RDD's? :)

Regards,
Gourav Sengupta

On Thu, Mar 3, 2016 at 4:28 PM, Sumedh Wale  wrote:

> On Thursday 03 March 2016 09:15 PM, Gourav Sengupta wrote:
>
> Hi,
>
> why not read the table into a dataframe directly using SPARK CSV package.
> You are trying to solve the problem the round about way.
>
>
> Yes, that will simplify and avoid the explicit split/map a bit (though the
> code below is simple enough as is). However, the basic problem with
> performance is not due to that. Note that a DataFrame whether using the
> spark-csv package or otherwise is just an access point into the underlying
> database.txt file, so multiple scans of the DataFrame as in the code below
> will lead to multiple tokenization/parse of the database.txt file which is
> quite expensive. The join approach will reduce to a single scan for case
> below which should definitely be done if possible, but if more queries are
> required to be executed on the DataFrame then saving it into parquet/orc
> (or cacheTable if possible) is faster in my experience.
>
>
> Regards,
> Gourav Sengupta
>
>
> thanks
>
> --
> Sumedh Wale
> SnappyData (http://www.snappydata.io)
>
>
> On Thu, Mar 3, 2016 at 12:33 PM, Sumedh Wale  wrote:
>
>> On Thursday 03 March 2016 11:03 AM, Angel Angel wrote:
>>
>> Hello Sir/Madam,
>>
>> I am writing one application using spark sql.
>>
>> i made the vary big table using the following command
>>
>> *val dfCustomers1 =
>> sc.textFile("/root/Desktop/database.txt").map(_.split(",")).map(p =>
>> Customer1(p(0),p(1).trim.toInt, p(2).trim.toInt, p(3)))toDF*
>>
>>
>> Now i want to search the address(many address)  fields in the table and
>> then extends the new table as per the searching.
>>
>> *var k = dfCustomers1.filter(dfCustomers1("Address").equalTo(lines(0)))*
>>
>>
>>
>> *for( a <-1 until 1500) {*
>>
>> * | var temp=
>> dfCustomers1.filter(dfCustomers1("Address").equalTo(lines(a)))*
>>
>> * |  k = temp.unionAll(k)*
>>
>> *}*
>>
>> *k.show*
>>
>>
>> For above case one approach that can help a lot is to covert the lines[0]
>> to a table and then do a join on it instead of individual searches.
>> Something like:
>>
>> val linesRDD = sc.parallelize(lines, 1) // since number of lines is
>> small, so 1 partition should be fine
>> val schema = StructType(Array(StructField("Address", StringType)))
>> val linesDF = sqlContext.createDataFrame(linesRDD.map(Row(_)), schema)
>> val result = dfCustomers1.join(linesDF, "Address")
>>
>>
>> If you do need to scan the DataFrame multiple times, then this will end
>> up scanning the csv file, formatting etc in every loop. I would suggest
>> caching in memory or saving to parquet/orc format for faster access. If
>> there is enough memory then the SQLContext.cacheTable API can be used, else
>> can save to parquet file:
>>
>> dfCustomers1.write.parquet("database.parquet")
>> val dfCustomers2 = sqlContext.read.parquet("database.parquet")
>>
>>
>> Normally parquet file scanning should be much faster than CSV scan+format
>> so use the dfCustomers2 everywhere. You can also try various values of
>> "spark.sql.parquet.compression.codec" (lzo, snappy, uncompressed) instead
>> of default gzip. Try if this reduces the runtime. Fastest will be if there
>> is enough memory for sqlContext.cacheTable but I doubt that will be
>> possible since you say it is a big table.
>>
>>
>> But this is taking so long time. So can you suggest me the any optimized
>> way, so i can reduce the execution time.
>>
>>
>> My cluster has 3 slaves and 1 master.
>>
>>
>> Thanks.
>>
>>
>>
>> thanks
>>
>> --
>> Sumedh Wale
>> SnappyData (http://www.snappydata.io)
>>
>> - To
>> unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional
>> commands, e-mail: user-h...@spark.apache.org
>
>
>
>


Re: Spark sql query taking long time

2016-03-03 Thread Sumedh Wale

  
  
On Thursday 03 March 2016 09:15 PM, Gourav Sengupta wrote:

Hi,

why not read the table into a dataframe directly using SPARK CSV package.
You are trying to solve the problem the round about way.


Yes, that will simplify and avoid the explicit split/map a bit (though the
code below is simple enough as is). However, the basic problem with
performance is not due to that. Note that a DataFrame whether using the
spark-csv package or otherwise is just an access point into the underlying
database.txt file, so multiple scans of the DataFrame as in the code below
will lead to multiple tokenization/parse of the database.txt file which is
quite expensive. The join approach will reduce to a single scan for case
below which should definitely be done if possible, but if more queries are
required to be executed on the DataFrame then saving it into parquet/orc
(or cacheTable if possible) is faster in my experience.


Regards,
Gourav Sengupta


thanks

--
Sumedh Wale
SnappyData (http://www.snappydata.io)


On Thu, Mar 3, 2016 at 12:33 PM, Sumedh Wale wrote:

On Thursday 03 March 2016 11:03 AM, Angel Angel wrote:

Hello Sir/Madam,

I am writing one application using spark sql.

i made the vary big table using the following command

val dfCustomers1 =
sc.textFile("/root/Desktop/database.txt").map(_.split(",")).map(p =>
Customer1(p(0),p(1).trim.toInt, p(2).trim.toInt, p(3)))toDF


Now i want to search the address(many address) fields in the table and
then extends the new table as per the searching.

var k = dfCustomers1.filter(dfCustomers1("Address").equalTo(lines(0)))

for( a <-1 until 1500) {
 | var temp= dfCustomers1.filter(dfCustomers1("Address").equalTo(lines(a)))
 |  k = temp.unionAll(k)
}
k.show


For above case one approach that can help a lot is to covert the lines[0]
to a table and then do a join on it instead of individual searches.
Something like:

val linesRDD = sc.parallelize(lines, 1) // since number of lines is
small, so 1 partition should be fine
val schema = StructType(Array(StructField("Address", StringType)))
val linesDF = sqlContext.createDataFrame(linesRDD.map(Row(_)), schema)
val result = dfCustomers1.join(linesDF, "Address")


If you do need to scan the DataFrame multiple times, then this will end
up scanning the csv file, formatting etc in every loop. I would suggest
caching in memory or saving to parquet/orc format for faster access. If
there is enough memory then the SQLContext.cacheTable API can be used,
else can save to parquet file:

dfCustomers1.write.parquet("database.parquet")
val dfCustomers2 = sqlContext.read.parquet("database.parquet")


Normally parquet file scanning should be much faster than CSV scan+format
so use the dfCustomers2 everywhere. You can also try various values of
"spark.sql.parquet.compression.codec" (lzo, snappy, uncompressed) instead
of default gzip. Try if this reduces the runtime. Fastest will be if there
is enough memory for sqlContext.cacheTable but I doubt that will be
possible since you say it is a big table.

Re: Spark sql query taking long time

2016-03-03 Thread Gourav Sengupta
Hi,

why not read the table into a dataframe directly using SPARK CSV package.
You are trying to solve the problem the round about way.
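
For illustration, a minimal sketch of that approach using the databricks spark-csv
package for Spark 1.x (the path comes from the thread; header/schema options are
assumptions):

// requires the com.databricks:spark-csv package on the classpath
val dfCustomers1 = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "false")
  .option("inferSchema", "true")
  .load("/root/Desktop/database.txt")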


Regards,
Gourav Sengupta

On Thu, Mar 3, 2016 at 12:33 PM, Sumedh Wale  wrote:

> On Thursday 03 March 2016 11:03 AM, Angel Angel wrote:
>
> Hello Sir/Madam,
>
> I am writing one application using spark sql.
>
> i made the vary big table using the following command
>
> *val dfCustomers1 =
> sc.textFile("/root/Desktop/database.txt").map(_.split(",")).map(p =>
> Customer1(p(0),p(1).trim.toInt, p(2).trim.toInt, p(3)))toDF*
>
>
> Now i want to search the address(many address)  fields in the table and
> then extends the new table as per the searching.
>
> *var k = dfCustomers1.filter(dfCustomers1("Address").equalTo(lines(0)))*
>
>
>
> *for( a <-1 until 1500) {*
>
> * | var temp=
> dfCustomers1.filter(dfCustomers1("Address").equalTo(lines(a)))*
>
> * |  k = temp.unionAll(k)*
>
> *}*
>
> *k.show*
>
>
> For above case one approach that can help a lot is to covert the lines[0]
> to a table and then do a join on it instead of individual searches.
> Something like:
>
> val linesRDD = sc.parallelize(lines, 1) // since number of lines is small,
> so 1 partition should be fine
> val schema = StructType(Array(StructField("Address", StringType)))
> val linesDF = sqlContext.createDataFrame(linesRDD.map(Row(_)), schema)
> val result = dfCustomers1.join(linesDF, "Address")
>
>
> If you do need to scan the DataFrame multiple times, then this will end up
> scanning the csv file, formatting etc in every loop. I would suggest
> caching in memory or saving to parquet/orc format for faster access. If
> there is enough memory then the SQLContext.cacheTable API can be used, else
> can save to parquet file:
>
> dfCustomers1.write.parquet("database.parquet")
> val dfCustomers2 = sqlContext.read.parquet("database.parquet")
>
>
> Normally parquet file scanning should be much faster than CSV scan+format
> so use the dfCustomers2 everywhere. You can also try various values of
> "spark.sql.parquet.compression.codec" (lzo, snappy, uncompressed) instead
> of default gzip. Try if this reduces the runtime. Fastest will be if there
> is enough memory for sqlContext.cacheTable but I doubt that will be
> possible since you say it is a big table.
>
>
> But this is taking so long time. So can you suggest me the any optimized
> way, so i can reduce the execution time.
>
>
> My cluster has 3 slaves and 1 master.
>
>
> Thanks.
>
>
>
> thanks
>
> --
> Sumedh Wale
> SnappyData (http://www.snappydata.io)
>
> - To
> unsubscribe, e-mail: user-unsubscr...@spark.apache.org For additional
> commands, e-mail: user-h...@spark.apache.org


Re: Spark sql query taking long time

2016-03-03 Thread Sumedh Wale

  
  
On Thursday 03 March 2016 11:03 AM, Angel Angel wrote:

Hello Sir/Madam,

I am writing one application using spark sql.

i made the vary big table using the following command

val dfCustomers1 =
sc.textFile("/root/Desktop/database.txt").map(_.split(",")).map(p =>
Customer1(p(0),p(1).trim.toInt, p(2).trim.toInt, p(3)))toDF


Now i want to search the address(many address)  fields in the table and
then extends the new table as per the searching.

var k = dfCustomers1.filter(dfCustomers1("Address").equalTo(lines(0)))

for( a <-1 until 1500) {
 | var temp= dfCustomers1.filter(dfCustomers1("Address").equalTo(lines(a)))
 |  k = temp.unionAll(k)
}
k.show


For above case one approach that can help a lot is to covert the lines[0]
to a table and then do a join on it instead of individual searches.
Something like:

val linesRDD = sc.parallelize(lines, 1) // since number of lines is
small, so 1 partition should be fine
val schema = StructType(Array(StructField("Address", StringType)))
val linesDF = sqlContext.createDataFrame(linesRDD.map(Row(_)), schema)
val result = dfCustomers1.join(linesDF, "Address")


If you do need to scan the DataFrame multiple times, then this will end
up scanning the csv file, formatting etc in every loop. I would suggest
caching in memory or saving to parquet/orc format for faster access. If
there is enough memory then the SQLContext.cacheTable API can be used,
else can save to parquet file:

dfCustomers1.write.parquet("database.parquet")
val dfCustomers2 = sqlContext.read.parquet("database.parquet")


Normally parquet file scanning should be much faster than CSV scan+format
so use the dfCustomers2 everywhere. You can also try various values of
"spark.sql.parquet.compression.codec" (lzo, snappy, uncompressed) instead
of default gzip. Try if this reduces the runtime. Fastest will be if there
is enough memory for sqlContext.cacheTable but I doubt that will be
possible since you say it is a big table.


But this is taking so long time. So can you suggest me the any optimized
way, so i can reduce the execution time.


My cluster has 3 slaves and 1 master.


Thanks.


thanks

--
Sumedh Wale
SnappyData (http://www.snappydata.io)

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Spark sql query taking long time

2016-03-02 Thread Ted Yu
Have you seen the thread 'Filter on a column having multiple values' where
Michael gave this example ?

https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/1023043053387187/107522969592/2840265927289860/2388bac36e.html

FYI
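
In the same spirit as that example, the 1500 separate filters in the code below could
likely be collapsed into one multi-value filter; a sketch, assuming `lines` is a
local Seq[String]:

import org.apache.spark.sql.functions.col

// one scan with an IN-list instead of 1500 filter + unionAll passes
val k = dfCustomers1.filter(col("Address").isin(lines: _*))
k.show()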

On Wed, Mar 2, 2016 at 9:33 PM, Angel Angel  wrote:

> Hello Sir/Madam,
>
> I am writing one application using spark sql.
>
> i made the vary big table using the following command
>
> *val dfCustomers1 =
> sc.textFile("/root/Desktop/database.txt").map(_.split(",")).map(p =>
> Customer1(p(0),p(1).trim.toInt, p(2).trim.toInt, p(3)))toDF*
>
>
> Now i want to search the address(many address)  fields in the table and
> then extends the new table as per the searching.
>
> *var k = dfCustomers1.filter(dfCustomers1("Address").equalTo(lines(0)))*
>
>
>
> *for( a <-1 until 1500) {*
>
> * | var temp=
> dfCustomers1.filter(dfCustomers1("Address").equalTo(lines(a)))*
>
> * |  k = temp.unionAll(k)*
>
> *}*
>
> *k.show*
>
>
>
>
> But this is taking so long time. So can you suggest me the any optimized
> way, so i can reduce the execution time.
>
>
> My cluster has 3 slaves and 1 master.
>
>
> Thanks.
>


RE: Spark SQL query AVRO file

2015-08-07 Thread java8964
Good to know that.
Let me research it and give it a try.
Thanks
Yong

From: mich...@databricks.com
Date: Fri, 7 Aug 2015 11:44:48 -0700
Subject: Re: Spark SQL query AVRO file
To: java8...@hotmail.com
CC: user@spark.apache.org

You can register your data as a table using this library and then query it 
using HiveQL
CREATE TEMPORARY TABLE episodes
USING com.databricks.spark.avro
OPTIONS (path "src/test/resources/episodes.avro")
On Fri, Aug 7, 2015 at 11:42 AM, java8964  wrote:



Hi, Michael:
I am not sure how spark-avro can help in this case. 
My understanding is that to use Spark-avro, I have to translate all the logic 
from this big Hive query into Spark code, right?
If I have this big Hive query, how I can use spark-avro to run the query?
Thanks
Yong

From: mich...@databricks.com
Date: Fri, 7 Aug 2015 11:32:21 -0700
Subject: Re: Spark SQL query AVRO file
To: java8...@hotmail.com
CC: user@spark.apache.org

Have you considered trying Spark SQL's native support for avro data?
https://github.com/databricks/spark-avro

On Fri, Aug 7, 2015 at 11:30 AM, java8964  wrote:



Hi, Spark users:
We currently are using Spark 1.2.2 + Hive 0.12 + Hadoop 2.2.0 on our production 
cluster, which has 42 data/task nodes.
There is one dataset stored as Avro files about 3T. Our business has a complex 
query running for the dataset, which is stored in nest structure with Array of 
Struct in Avro and Hive.
We can query it using Hive without any problem, but we like the SparkSQL's 
performance, so we in fact run the same query in the Spark SQL, and found out 
it is in fact much faster than Hive.
But when we run it, we got the following error randomly from Spark executors, 
sometime seriously enough to fail the whole spark job.
Below the stack trace, and I think it is a bug related to Spark due to:
1) The error jumps out inconsistent, as sometimes we won't see it for this job. (We run it daily)
2) Sometime it won't fail our job, as it recover after retry.
3) Sometime it will fail our job, as I listed below.
4) Is this due to the multithreading in Spark? The NullPointException indicates Hive got a Null
ObjectInspector of the children of StructObjectInspector, as I read the Hive
source code, but I know there is no null of ObjectInsepector as children of
StructObjectInspector. Google this error didn't give me any hint. Does any one
know anything like this?
Project 
[HiveGenericUdf#org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcatWS(,,CAST(account_id#23L,
 StringType),CAST(gross_contact_count_a#4L, StringType),CASE WHEN IS NULL 
tag_cnt#21 THEN 0 ELSE CAST(tag_cnt#21, StringType),CAST(list_cnt_a#5L, 
StringType),CAST(active_contact_count_a#16L, 
StringType),CAST(other_api_contact_count_a#6L, 
StringType),CAST(fb_api_contact_count_a#7L, 
StringType),CAST(evm_contact_count_a#8L, 
StringType),CAST(loyalty_contact_count_a#9L, 
StringType),CAST(mobile_jmml_contact_count_a#10L, 
StringType),CAST(savelocal_contact_count_a#11L, 
StringType),CAST(siteowner_contact_count_a#12L, 
StringType),CAST(socialcamp_service_contact_count_a#13L, 
S...org.apache.spark.SparkException: Job aborted due to stage failure: Task 58 
in stage 1.0 failed 4 times, most recent failure: Lost task 58.3 in stage 1.0 
(TID 257, 10.20.95.146): java.lang.NullPointerExceptionat 
org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.supportedCategories(AvroObjectInspectorGenerator.java:139)
at 
org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspectorWorker(AvroObjectInspectorGenerator.java:89)
at 
org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspectorWorker(AvroObjectInspectorGenerator.java:101)
at 
org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspectorWorker(AvroObjectInspectorGenerator.java:117)
at 
org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspector(AvroObjectInspectorGenerator.java:81)
at 
org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.(AvroObjectInspectorGenerator.java:55)
at 
org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:69)  
  at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$2.apply(TableReader.scala:112)
at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$2.apply(TableReader.scala:109)
at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:618)at 
org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:618)at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)at 
org.apache.spark.rdd.RDD.iterator(RDD.scala:247)at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)at 
org.apache.spark.rdd.RD

Re: Spark SQL query AVRO file

2015-08-07 Thread Michael Armbrust
You can register your data as a table using this library and then query it
using HiveQL

CREATE TEMPORARY TABLE episodes
USING com.databricks.spark.avro
OPTIONS (path "src/test/resources/episodes.avro")
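
A rough sketch of how that could look for the 3T dataset from a Spark 1.2-era shell
(the table name and path are placeholders, the spark-avro package must be on the
classpath, and sqlContext is assumed to be a HiveContext so the HiveQL still runs):

sqlContext.sql("""
  CREATE TEMPORARY TABLE avro_dataset
  USING com.databricks.spark.avro
  OPTIONS (path "hdfs:///data/avro_dataset")
""")

// the existing HiveQL can then reference avro_dataset instead of the Hive table
sqlContext.sql("SELECT count(*) FROM avro_dataset").collect().foreach(println)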


On Fri, Aug 7, 2015 at 11:42 AM, java8964  wrote:

> Hi, Michael:
>
> I am not sure how spark-avro can help in this case.
>
> My understanding is that to use Spark-avro, I have to translate all the
> logic from this big Hive query into Spark code, right?
>
> If I have this big Hive query, how I can use spark-avro to run the query?
>
> Thanks
>
> Yong
>
> --
> From: mich...@databricks.com
> Date: Fri, 7 Aug 2015 11:32:21 -0700
> Subject: Re: Spark SQL query AVRO file
> To: java8...@hotmail.com
> CC: user@spark.apache.org
>
>
> Have you considered trying Spark SQL's native support for avro data?
>
> https://github.com/databricks/spark-avro
>
> On Fri, Aug 7, 2015 at 11:30 AM, java8964  wrote:
>
> Hi, Spark users:
>
> We currently are using Spark 1.2.2 + Hive 0.12 + Hadoop 2.2.0 on our
> production cluster, which has 42 data/task nodes.
>
> There is one dataset stored as Avro files about 3T. Our business has a
> complex query running for the dataset, which is stored in nest structure
> with Array of Struct in Avro and Hive.
>
> We can query it using Hive without any problem, but we like the SparkSQL's
> performance, so we in fact run the same query in the Spark SQL, and found
> out it is in fact much faster than Hive.
>
> But when we run it, we got the following error randomly from Spark
> executors, sometime seriously enough to fail the whole spark job.
>
> Below the stack trace, and I think it is a bug related to Spark due to:
>
> 1) The error jumps out inconsistent, as sometimes we won't see it for this
> job. (We run it daily)
> 2) Sometime it won't fail our job, as it recover after retry.
> 3) Sometime it will fail our job, as I listed below.
> 4) Is this due to the multithreading in Spark? The NullPointException
> indicates Hive got a Null ObjectInspector of the children of
> StructObjectInspector, as I read the Hive source code, but I know there is
> no null of ObjectInsepector as children of StructObjectInspector. Google
> this error didn't give me any hint. Does any one know anything like this?
>
> Project
> [HiveGenericUdf#org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcatWS(,,CAST(account_id#23L,
> StringType),CAST(gross_contact_count_a#4L, StringType),CASE WHEN IS NULL
> tag_cnt#21 THEN 0 ELSE CAST(tag_cnt#21, StringType),CAST(list_cnt_a#5L,
> StringType),CAST(active_contact_count_a#16L,
> StringType),CAST(other_api_contact_count_a#6L,
> StringType),CAST(fb_api_contact_count_a#7L,
> StringType),CAST(evm_contact_count_a#8L,
> StringType),CAST(loyalty_contact_count_a#9L,
> StringType),CAST(mobile_jmml_contact_count_a#10L,
> StringType),CAST(savelocal_contact_count_a#11L,
> StringType),CAST(siteowner_contact_count_a#12L,
> StringType),CAST(socialcamp_service_contact_count_a#13L,
> S...org.apache.spark.SparkException: Job aborted due to stage failure: Task
> 58 in stage 1.0 failed 4 times, most recent failure: Lost task 58.3 in
> stage 1.0 (TID 257, 10.20.95.146): java.lang.NullPointerException
> at
> org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.supportedCategories(AvroObjectInspectorGenerator.java:139)
> at
> org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspectorWorker(AvroObjectInspectorGenerator.java:89)
> at
> org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspectorWorker(AvroObjectInspectorGenerator.java:101)
> at
> org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspectorWorker(AvroObjectInspectorGenerator.java:117)
> at
> org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspector(AvroObjectInspectorGenerator.java:81)
> at
> org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.(AvroObjectInspectorGenerator.java:55)
> at
> org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:69)
> at
> org.apache.spark.sql.hive.HadoopTableReader$$anonfun$2.apply(TableReader.scala:112)
> at
> org.apache.spark.sql.hive.HadoopTableReader$$anonfun$2.apply(TableReader.scala:109)
> at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:618)
> at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:618)
> at
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
> at org.apache.spark.rdd.RDD.iterator(RDD.scal

RE: Spark SQL query AVRO file

2015-08-07 Thread java8964
Hi, Michael:
I am not sure how spark-avro can help in this case. 
My understanding is that to use Spark-avro, I have to translate all the logic 
from this big Hive query into Spark code, right?
If I have this big Hive query, how I can use spark-avro to run the query?
Thanks
Yong

From: mich...@databricks.com
Date: Fri, 7 Aug 2015 11:32:21 -0700
Subject: Re: Spark SQL query AVRO file
To: java8...@hotmail.com
CC: user@spark.apache.org

Have you considered trying Spark SQL's native support for avro data?
https://github.com/databricks/spark-avro

On Fri, Aug 7, 2015 at 11:30 AM, java8964  wrote:



Hi, Spark users:
We currently are using Spark 1.2.2 + Hive 0.12 + Hadoop 2.2.0 on our production 
cluster, which has 42 data/task nodes.
There is one dataset stored as Avro files about 3T. Our business has a complex 
query running for the dataset, which is stored in nest structure with Array of 
Struct in Avro and Hive.
We can query it using Hive without any problem, but we like the SparkSQL's 
performance, so we in fact run the same query in the Spark SQL, and found out 
it is in fact much faster than Hive.
But when we run it, we got the following error randomly from Spark executors, 
sometime seriously enough to fail the whole spark job.
Below the stack trace, and I think it is a bug related to Spark due to:
1) The error jumps out inconsistent, as sometimes we won't see it for this job. (We run it daily)
2) Sometime it won't fail our job, as it recover after retry.
3) Sometime it will fail our job, as I listed below.
4) Is this due to the multithreading in Spark? The NullPointException indicates Hive got a Null
ObjectInspector of the children of StructObjectInspector, as I read the Hive
source code, but I know there is no null of ObjectInsepector as children of
StructObjectInspector. Google this error didn't give me any hint. Does any one
know anything like this?
Project 
[HiveGenericUdf#org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcatWS(,,CAST(account_id#23L,
 StringType),CAST(gross_contact_count_a#4L, StringType),CASE WHEN IS NULL 
tag_cnt#21 THEN 0 ELSE CAST(tag_cnt#21, StringType),CAST(list_cnt_a#5L, 
StringType),CAST(active_contact_count_a#16L, 
StringType),CAST(other_api_contact_count_a#6L, 
StringType),CAST(fb_api_contact_count_a#7L, 
StringType),CAST(evm_contact_count_a#8L, 
StringType),CAST(loyalty_contact_count_a#9L, 
StringType),CAST(mobile_jmml_contact_count_a#10L, 
StringType),CAST(savelocal_contact_count_a#11L, 
StringType),CAST(siteowner_contact_count_a#12L, 
StringType),CAST(socialcamp_service_contact_count_a#13L, 
S...org.apache.spark.SparkException: Job aborted due to stage failure: Task 58 
in stage 1.0 failed 4 times, most recent failure: Lost task 58.3 in stage 1.0 
(TID 257, 10.20.95.146): java.lang.NullPointerExceptionat 
org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.supportedCategories(AvroObjectInspectorGenerator.java:139)
at 
org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspectorWorker(AvroObjectInspectorGenerator.java:89)
at 
org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspectorWorker(AvroObjectInspectorGenerator.java:101)
at 
org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspectorWorker(AvroObjectInspectorGenerator.java:117)
at 
org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspector(AvroObjectInspectorGenerator.java:81)
at 
org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.(AvroObjectInspectorGenerator.java:55)
at 
org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:69)  
  at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$2.apply(TableReader.scala:112)
at 
org.apache.spark.sql.hive.HadoopTableReader$$anonfun$2.apply(TableReader.scala:109)
at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:618)at 
org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:618)at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)at 
org.apache.spark.rdd.RDD.iterator(RDD.scala:247)at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)at 
org.apache.spark.rdd.RDD.iterator(RDD.scala:247)at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)at 
org.apache.spark.rdd.RDD.iterator(RDD.scala:247)at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)at 
org.apache.spark.rdd.RDD.iterator(RDD.scala:247)at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartition

Re: Spark SQL query AVRO file

2015-08-07 Thread Michael Armbrust
Have you considered trying Spark SQL's native support for avro data?

https://github.com/databricks/spark-avro

On Fri, Aug 7, 2015 at 11:30 AM, java8964  wrote:

> Hi, Spark users:
>
> We currently are using Spark 1.2.2 + Hive 0.12 + Hadoop 2.2.0 on our
> production cluster, which has 42 data/task nodes.
>
> There is one dataset stored as Avro files about 3T. Our business has a
> complex query running for the dataset, which is stored in nest structure
> with Array of Struct in Avro and Hive.
>
> We can query it using Hive without any problem, but we like the SparkSQL's
> performance, so we in fact run the same query in the Spark SQL, and found
> out it is in fact much faster than Hive.
>
> But when we run it, we got the following error randomly from Spark
> executors, sometime seriously enough to fail the whole spark job.
>
> Below the stack trace, and I think it is a bug related to Spark due to:
>
> 1) The error jumps out inconsistent, as sometimes we won't see it for this
> job. (We run it daily)
> 2) Sometime it won't fail our job, as it recover after retry.
> 3) Sometime it will fail our job, as I listed below.
> 4) Is this due to the multithreading in Spark? The NullPointException
> indicates Hive got a Null ObjectInspector of the children of
> StructObjectInspector, as I read the Hive source code, but I know there is
> no null of ObjectInsepector as children of StructObjectInspector. Google
> this error didn't give me any hint. Does any one know anything like this?
>
> Project
> [HiveGenericUdf#org.apache.hadoop.hive.ql.udf.generic.GenericUDFConcatWS(,,CAST(account_id#23L,
> StringType),CAST(gross_contact_count_a#4L, StringType),CASE WHEN IS NULL
> tag_cnt#21 THEN 0 ELSE CAST(tag_cnt#21, StringType),CAST(list_cnt_a#5L,
> StringType),CAST(active_contact_count_a#16L,
> StringType),CAST(other_api_contact_count_a#6L,
> StringType),CAST(fb_api_contact_count_a#7L,
> StringType),CAST(evm_contact_count_a#8L,
> StringType),CAST(loyalty_contact_count_a#9L,
> StringType),CAST(mobile_jmml_contact_count_a#10L,
> StringType),CAST(savelocal_contact_count_a#11L,
> StringType),CAST(siteowner_contact_count_a#12L,
> StringType),CAST(socialcamp_service_contact_count_a#13L,
> S...org.apache.spark.SparkException: Job aborted due to stage failure: Task
> 58 in stage 1.0 failed 4 times, most recent failure: Lost task 58.3 in
> stage 1.0 (TID 257, 10.20.95.146): java.lang.NullPointerException
> at
> org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.supportedCategories(AvroObjectInspectorGenerator.java:139)
> at
> org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspectorWorker(AvroObjectInspectorGenerator.java:89)
> at
> org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspectorWorker(AvroObjectInspectorGenerator.java:101)
> at
> org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspectorWorker(AvroObjectInspectorGenerator.java:117)
> at
> org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.createObjectInspector(AvroObjectInspectorGenerator.java:81)
> at
> org.apache.hadoop.hive.serde2.avro.AvroObjectInspectorGenerator.(AvroObjectInspectorGenerator.java:55)
> at
> org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:69)
> at
> org.apache.spark.sql.hive.HadoopTableReader$$anonfun$2.apply(TableReader.scala:112)
> at
> org.apache.spark.sql.hive.HadoopTableReader$$anonfun$2.apply(TableReader.scala:109)
> at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:618)
> at org.apache.spark.rdd.RDD$$anonfun$14.apply(RDD.scala:618)
> at
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
> at
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
> at
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
> at
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
> at
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
> at
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
> at
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>   

Re: Spark SQL query key/value in Map

2015-04-16 Thread JC Francisco
Ah yeah, didn't notice that difference.

Thanks! It worked.

On Fri, Apr 17, 2015 at 4:27 AM, Yin Huai  wrote:

> For Map type column, fields['driver'] is the syntax to retrieve the map
> value (in the schema, you can see "fields: map"). The syntax of
> fields.driver is used for struct type.
>
> On Thu, Apr 16, 2015 at 12:37 AM, jc.francisco 
> wrote:
>
>> Hi,
>>
>> I'm new with both Cassandra and Spark and am experimenting with what Spark
>> SQL can do as it will affect my Cassandra data model.
>>
>> What I need is a model that can accept arbitrary fields, similar to
>> Postgres's Hstore. Right now, I'm trying out the map type in Cassandra but
>> I'm getting the exception below when running my Spark SQL:
>>
>> java.lang.RuntimeException: Can't access nested field in type
>> MapType(StringType,StringType,true)
>>
>> The schema I have now is:
>> root
>>  |-- device_id: integer (nullable = true)
>>  |-- event_date: string (nullable = true)
>>  |-- fields: map (nullable = true)
>>  ||-- key: string
>>  ||-- value: string (valueContainsNull = true)
>>
>> And my Spark SQL is:
>> SELECT fields from raw_device_data where fields.driver = 'driver1'
>>
>> From what I gather, this should work for a JSON based RDD
>> (
>> https://databricks.com/blog/2015/02/02/an-introduction-to-json-support-in-spark-sql.html
>> ).
>>
>> Is this not supported for a Cassandra map type?
>>
>>
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-SQL-query-key-value-in-Map-tp22517.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
>>
>>
>


Re: Spark SQL query key/value in Map

2015-04-16 Thread Yin Huai
For a Map type column, fields['driver'] is the syntax to retrieve the map
value (in the schema, you can see "fields: map"). The fields.driver syntax is
used for struct types.
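
For instance, the query from the post below rewritten with the bracket syntax (a sketch only, reusing the same sqlContext and registered raw_device_data table from the original post):

```
// Map columns are accessed with ['key']; the dot syntax is only for struct fields.
val drivers = sqlContext.sql(
  "SELECT fields['driver'] FROM raw_device_data WHERE fields['driver'] = 'driver1'")
drivers.collect().foreach(println)
```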

On Thu, Apr 16, 2015 at 12:37 AM, jc.francisco 
wrote:

> Hi,
>
> I'm new with both Cassandra and Spark and am experimenting with what Spark
> SQL can do as it will affect my Cassandra data model.
>
> What I need is a model that can accept arbitrary fields, similar to
> Postgres's Hstore. Right now, I'm trying out the map type in Cassandra but
> I'm getting the exception below when running my Spark SQL:
>
> java.lang.RuntimeException: Can't access nested field in type
> MapType(StringType,StringType,true)
>
> The schema I have now is:
> root
>  |-- device_id: integer (nullable = true)
>  |-- event_date: string (nullable = true)
>  |-- fields: map (nullable = true)
>  ||-- key: string
>  ||-- value: string (valueContainsNull = true)
>
> And my Spark SQL is:
> SELECT fields from raw_device_data where fields.driver = 'driver1'
>
> From what I gather, this should work for a JSON based RDD
> (
> https://databricks.com/blog/2015/02/02/an-introduction-to-json-support-in-spark-sql.html
> ).
>
> Is this not supported for a Cassandra map type?
>
>
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-SQL-query-key-value-in-Map-tp22517.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>
>


Re: Spark-sql query got exception.Help

2015-03-26 Thread 李铖
Yes, the exception occurred sometimes, but in the end the final result still came out.

2015-03-26 11:08 GMT+08:00 Saisai Shao :

> Would you mind running again to see if this exception can be reproduced
> again, since exception in MapOutputTracker seldom occurs, maybe some other
> exceptions which lead to this error.
>
> Thanks
> Jerry
>
> 2015-03-26 10:55 GMT+08:00 李铖 :
>
>> One more exception.How to fix it .Anybody help me ,please.
>>
>>
>> org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output
>> location for shuffle 0
>> at
>> org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:386)
>> at
>> org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:383)
>> at
>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>> at
>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>> at
>> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
>> at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>> at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
>> at
>> org.apache.spark.MapOutputTracker$.org$apache$spark$MapOutputTracker$$convertMapStatuses(MapOutputTracker.scala:382)
>> at
>> org.apache.spark.MapOutputTracker.getServerStatuses(MapOutputTracker.scala:178)
>> at
>> org.apache.spark.shuffle.hash.BlockStoreShuffleFetcher$.fetch(BlockStoreShuffleFetcher.scala:42)
>> at
>> org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:40)
>> at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)
>> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>> at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>> at
>> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>> at org.apache.spark.sql.SchemaRDD.compute(SchemaRDD.scala:120)
>> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>> at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>> at
>> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
>> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
>> at
>> org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
>> at
>> org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
>> at
>> org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
>> at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1460)
>> at
>> org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)
>>
>>
>> 2015-03-26 10:39 GMT+08:00 李铖 :
>>
>>> Yes, it works after I append the two properties in spark-defaults.conf.
>>>
>>> As I  use python programing on spark platform,the python api does not
>>> have SparkConf api.
>>>
>>> Thanks.
>>>
>>> 2015-03-25 21:07 GMT+08:00 Cheng Lian :
>>>
  Oh, just noticed that you were calling sc.setSystemProperty. Actually
 you need to set this property in SparkConf or in spark-defaults.conf. And
 there are two configurations related to Kryo buffer size,

- spark.kryoserializer.buffer.mb, which is the initial size, and
- spark.kryoserializer.buffer.max.mb, which is the max buffer size.

 Make sure the 2nd one is larger (it seems that Kryo doesn’t check for
 it).

 Cheng

 On 3/25/15 7:31 PM, 李铖 wrote:

   Here is the full track

  15/03/25 17:48:34 WARN TaskSetManager: Lost task 0.0 in stage 1.0
 (TID 1, cloud1): com.esotericsoftware.kryo.KryoException: Buffer overflow.
 Available: 0, required: 39135
  at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
  at
 com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
  at
 com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:18)
  at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:549)
  at
 com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:312)
  at
 com.

Re: Spark-sql query got exception.Help

2015-03-25 Thread Saisai Shao
Would you mind running it again to see whether this exception can be reproduced?
Exceptions in MapOutputTracker seldom occur; some other exception may have led to
this error.

Thanks
Jerry

2015-03-26 10:55 GMT+08:00 李铖 :

> One more exception.How to fix it .Anybody help me ,please.
>
>
> org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output
> location for shuffle 0
> at
> org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:386)
> at
> org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:383)
> at
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
> at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
> at
> org.apache.spark.MapOutputTracker$.org$apache$spark$MapOutputTracker$$convertMapStatuses(MapOutputTracker.scala:382)
> at
> org.apache.spark.MapOutputTracker.getServerStatuses(MapOutputTracker.scala:178)
> at
> org.apache.spark.shuffle.hash.BlockStoreShuffleFetcher$.fetch(BlockStoreShuffleFetcher.scala:42)
> at
> org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:40)
> at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
> at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
> at org.apache.spark.sql.SchemaRDD.compute(SchemaRDD.scala:120)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
> at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
> at
> org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
> at
> org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
> at
> org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
> at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1460)
> at
> org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)
>
>
> 2015-03-26 10:39 GMT+08:00 李铖 :
>
>> Yes, it works after I append the two properties in spark-defaults.conf.
>>
>> As I  use python programing on spark platform,the python api does not
>> have SparkConf api.
>>
>> Thanks.
>>
>> 2015-03-25 21:07 GMT+08:00 Cheng Lian :
>>
>>>  Oh, just noticed that you were calling sc.setSystemProperty. Actually
>>> you need to set this property in SparkConf or in spark-defaults.conf. And
>>> there are two configurations related to Kryo buffer size,
>>>
>>>- spark.kryoserializer.buffer.mb, which is the initial size, and
>>>- spark.kryoserializer.buffer.max.mb, which is the max buffer size.
>>>
>>> Make sure the 2nd one is larger (it seems that Kryo doesn’t check for
>>> it).
>>>
>>> Cheng
>>>
>>> On 3/25/15 7:31 PM, 李铖 wrote:
>>>
>>>   Here is the full track
>>>
>>>  15/03/25 17:48:34 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID
>>> 1, cloud1): com.esotericsoftware.kryo.KryoException: Buffer overflow.
>>> Available: 0, required: 39135
>>>  at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
>>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
>>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
>>>  at
>>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
>>>  at
>>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:18)
>>>  at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:549)
>>>  at
>>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:312)
>>>  at
>>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:293)
>>>  at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
>>>  at
>>> org.apache.spark.serializer.Kr

Re: Spark-sql query got exception.Help

2015-03-25 Thread 李铖
One more exception. How do I fix it? Can anybody help me, please?


org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output
location for shuffle 0
at
org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:386)
at
org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$1.apply(MapOutputTracker.scala:383)
at
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
at
org.apache.spark.MapOutputTracker$.org$apache$spark$MapOutputTracker$$convertMapStatuses(MapOutputTracker.scala:382)
at
org.apache.spark.MapOutputTracker.getServerStatuses(MapOutputTracker.scala:178)
at
org.apache.spark.shuffle.hash.BlockStoreShuffleFetcher$.fetch(BlockStoreShuffleFetcher.scala:42)
at
org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:40)
at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.sql.SchemaRDD.compute(SchemaRDD.scala:120)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:263)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:230)
at
org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply$mcV$sp(PythonRDD.scala:242)
at
org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
at
org.apache.spark.api.python.PythonRDD$WriterThread$$anonfun$run$1.apply(PythonRDD.scala:204)
at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1460)
at
org.apache.spark.api.python.PythonRDD$WriterThread.run(PythonRDD.scala:203)


2015-03-26 10:39 GMT+08:00 李铖 :

> Yes, it works after I append the two properties in spark-defaults.conf.
>
> As I  use python programing on spark platform,the python api does not have
> SparkConf api.
>
> Thanks.
>
> 2015-03-25 21:07 GMT+08:00 Cheng Lian :
>
>>  Oh, just noticed that you were calling sc.setSystemProperty. Actually
>> you need to set this property in SparkConf or in spark-defaults.conf. And
>> there are two configurations related to Kryo buffer size,
>>
>>- spark.kryoserializer.buffer.mb, which is the initial size, and
>>- spark.kryoserializer.buffer.max.mb, which is the max buffer size.
>>
>> Make sure the 2nd one is larger (it seems that Kryo doesn’t check for it).
>>
>> Cheng
>>
>> On 3/25/15 7:31 PM, 李铖 wrote:
>>
>>   Here is the full track
>>
>>  15/03/25 17:48:34 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID
>> 1, cloud1): com.esotericsoftware.kryo.KryoException: Buffer overflow.
>> Available: 0, required: 39135
>>  at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
>>  at
>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
>>  at
>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:18)
>>  at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:549)
>>  at
>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:312)
>>  at
>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:293)
>>  at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
>>  at
>> org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:165)
>>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:206)
>>  at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>  at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>  at java.lang.Thread.run(Thread.java:745)
>>
>> 2015-03-25 19:05 G

Re: Spark-sql query got exception.Help

2015-03-25 Thread 李铖
Yes, it works after I appended the two properties in spark-defaults.conf.

I program against Spark in Python, and the Python API does not have a
SparkConf API.
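
For reference, the corresponding entries in spark-defaults.conf look something like the following (256 matches the value tried earlier in this thread; the max value here is only illustrative and must be the larger of the two):

```
spark.kryoserializer.buffer.mb      256
spark.kryoserializer.buffer.max.mb  512
```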

Thanks.

2015-03-25 21:07 GMT+08:00 Cheng Lian :

>  Oh, just noticed that you were calling sc.setSystemProperty. Actually
> you need to set this property in SparkConf or in spark-defaults.conf. And
> there are two configurations related to Kryo buffer size,
>
>- spark.kryoserializer.buffer.mb, which is the initial size, and
>- spark.kryoserializer.buffer.max.mb, which is the max buffer size.
>
> Make sure the 2nd one is larger (it seems that Kryo doesn’t check for it).
>
> Cheng
>
> On 3/25/15 7:31 PM, 李铖 wrote:
>
>   Here is the full track
>
>  15/03/25 17:48:34 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID
> 1, cloud1): com.esotericsoftware.kryo.KryoException: Buffer overflow.
> Available: 0, required: 39135
>  at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
>  at
> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
>  at
> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:18)
>  at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:549)
>  at
> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:312)
>  at
> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:293)
>  at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
>  at
> org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:165)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:206)
>  at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  at java.lang.Thread.run(Thread.java:745)
>
> 2015-03-25 19:05 GMT+08:00 Cheng Lian :
>
>>  Could you please provide the full stack trace?
>>
>>
>> On 3/25/15 6:26 PM, 李铖 wrote:
>>
>>  It is ok when I do query data from a small hdfs file.
>> But if the hdfs file is 152m,I got this exception.
>> I try this code
>> .'sc.setSystemProperty("spark.kryoserializer.buffer.mb",'256')'.error
>> still.
>>
>>  ```
>> com.esotericsoftware.kryo.KryoException: Buffer overflow. Available: 0,
>> required: 39135
>>  at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
>>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
>>  at
>> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
>>  at
>>
>>
>> ```
>>
>>
>>
>​
>


Re: Spark-sql query got exception.Help

2015-03-25 Thread Cheng Lian
Oh, just noticed that you were calling sc.setSystemProperty. Actually 
you need to set this property in SparkConf or in spark-defaults.conf. 
And there are two configurations related to Kryo buffer size,


 * spark.kryoserializer.buffer.mb, which is the initial size, and
 * spark.kryoserializer.buffer.max.mb, which is the max buffer size.

Make sure the 2nd one is larger (it seems that Kryo doesn’t check for it).
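
As an illustrative sketch only (not from the original thread), the SparkConf route would look something like this in Scala; the sizes are arbitrary, with the max kept larger than the initial buffer, and the same keys can equally go in spark-defaults.conf:

```
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative sizes only; the max buffer must be larger than the initial one.
val conf = new SparkConf()
  .setAppName("kryo-buffer-example")   // hypothetical app name
  .set("spark.kryoserializer.buffer.mb", "256")
  .set("spark.kryoserializer.buffer.max.mb", "512")
val sc = new SparkContext(conf)
```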

Cheng

On 3/25/15 7:31 PM, 李铖 wrote:


Here is the full track

15/03/25 17:48:34 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 
1, cloud1): com.esotericsoftware.kryo.KryoException: Buffer overflow. 
Available: 0, required: 39135

at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
at 
com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
at 
com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:18)

at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:549)
at 
com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:312)
at 
com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:293)

at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
at 
org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:165)

at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:206)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

at java.lang.Thread.run(Thread.java:745)

2015-03-25 19:05 GMT+08:00 Cheng Lian >:


Could you please provide the full stack trace?


On 3/25/15 6:26 PM, 李铖 wrote:

It is ok when I do query data from a small hdfs file.
But if the hdfs file is 152m,I got this exception.
I try this code
.'sc.setSystemProperty("spark.kryoserializer.buffer.mb",'256')'.error
still.

```
com.esotericsoftware.kryo.KryoException: Buffer overflow.
Available: 0, required: 39135
at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
at

com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
at


```




​


Re: Spark-sql query got exception.Help

2015-03-25 Thread 李铖
Here is the full stack trace

15/03/25 17:48:34 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 1,
cloud1): com.esotericsoftware.kryo.KryoException: Buffer overflow.
Available: 0, required: 39135
at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
at
com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
at
com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:18)
at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:549)
at
com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:312)
at
com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:293)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
at
org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:165)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:206)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

2015-03-25 19:05 GMT+08:00 Cheng Lian :

>  Could you please provide the full stack trace?
>
>
> On 3/25/15 6:26 PM, 李铖 wrote:
>
>  It is ok when I do query data from a small hdfs file.
> But if the hdfs file is 152m,I got this exception.
> I try this code
> .'sc.setSystemProperty("spark.kryoserializer.buffer.mb",'256')'.error
> still.
>
>  ```
> com.esotericsoftware.kryo.KryoException: Buffer overflow. Available: 0,
> required: 39135
>  at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
>  at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
>  at
> com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)
>  at
>
>
> ```
>
>
>


Re: Spark-sql query got exception.Help

2015-03-25 Thread Cheng Lian

Could you please provide the full stack trace?

On 3/25/15 6:26 PM, 李铖 wrote:

It is OK when I query data from a small HDFS file,
but if the HDFS file is 152 MB, I get this exception.
I tried this code:
sc.setSystemProperty("spark.kryoserializer.buffer.mb", '256')
but the error persists.


```
com.esotericsoftware.kryo.KryoException: Buffer overflow. Available: 
0, required: 39135

at com.esotericsoftware.kryo.io.Output.require(Output.java:138)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:220)
at com.esotericsoftware.kryo.io.Output.writeBytes(Output.java:206)
at 
com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ByteArraySerializer.write(DefaultArraySerializers.java:29)

at


```




Re: spark sql query optimization , and decision tree building

2014-10-27 Thread Yanbo Liang
If you want to calculate the mean, variance, minimum, maximum and total count
for each column, especially for machine learning features, you can try
MultivariateOnlineSummarizer.
MultivariateOnlineSummarizer implements a numerically stable algorithm to
compute the sample mean and variance by column in an online fashion. It supports
both sparse and dense vectors, which can be constructed from the column
features. The time complexity is O(nnz) instead of O(n) per column, where nnz
is the number of nonzero entries in each column.
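
For example, a minimal sketch of that approach, assuming the 128 columns have already been parsed into an RDD[Array[Double]] (the rows parameter and summarize helper are hypothetical names, not part of the original thread):

```
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.stat.MultivariateOnlineSummarizer
import org.apache.spark.rdd.RDD

// rows: one 128-element feature array per input line (hypothetical input).
def summarize(rows: RDD[Array[Double]]): MultivariateOnlineSummarizer =
  rows.map(arr => Vectors.dense(arr)).aggregate(new MultivariateOnlineSummarizer)(
    (summary, v) => summary.add(v),  // fold each vector into the running summary
    (s1, s2) => s1.merge(s2))        // combine per-partition summaries

// summarize(rows).mean, .variance, .min, .max and .count then give
// column-wise statistics in a single pass over the data.
```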

2014-10-23 1:09 GMT+08:00 sanath kumar :

> Thank you very much ,
>
> two more small questions :
>
> 1) val output = sqlContext.sql("SELECT * From people")
> my output has 128 columns and  single row .
> how to find the which column has the maximum value in a single row using
> scala ?
>
> 2) as each row has 128 columns how to print each row into a text while
> with space delimitation or as json using scala?
>
> please reply
>
> Thanks,
> Sanath
>
>
> On Wed, Oct 22, 2014 at 8:24 AM, Cheng, Hao  wrote:
>
>>  The “output” variable is actually a SchemaRDD, it provides lots of DSL
>> API, see
>> http://spark.apache.org/docs/1.1.0/api/scala/index.html#org.apache.spark.sql.SchemaRDD
>>
>>
>>
>> 1) How to save result values of a query into a list ?
>>
>> [CH:] val list: Array[Row] = output.collect, however get 1M records into an 
>> array seems not a good idea.
>>
>>
>>
>> 2) How to calculate variance of a column .Is there any efficient way?
>>
>> [CH:] Not sure what’s that mean, but you can try 
>> output.select(‘colname).groupby ?
>>
>>
>> 3) i will be running multiple queries on same data .Does spark has any way 
>> to optimize it ?
>>
>> [CH:] val cachedRdd = output.cache(), and do whatever you need to do based 
>> on cachedRDD
>>
>>
>> 4) how to save the output as key value pairs in a text file ?
>>
>> [CH:] cachedRdd.generate(xx,xx,xx).saveAsTextFile(xx)
>>
>>
>>
>>  5) is there any way i can build decision kd tree using machine libraries of 
>> spark ?
>>
>> [CH:] Sorry, I am not sure about how kd tree used in mllib. but keep in
>> mind SchemaRDD is just a normal RDD.
>>
>>
>>
>> Cheng Hao
>>
>>
>>
>> *From:* sanath kumar [mailto:sanath1...@gmail.com]
>> *Sent:* Wednesday, October 22, 2014 12:58 PM
>> *To:* user@spark.apache.org
>> *Subject:* spark sql query optimization , and decision tree building
>>
>>
>>
>> Hi all ,
>>
>>
>>
>> I have a large data in text files (1,000,000 lines) .Each line has 128
>> columns . Here each line is a feature and each column is a dimension.
>>
>> I have converted the txt files in json format and able to run sql queries
>> on json files using spark.
>>
>> Now i am trying to build a k dimenstion decision tree (kd tree) with this
>> large data .
>>
>> My steps :
>> 1) calculate variance of each column pick the column with maximum
>> variance and make it as key of first node , and mean of the column as the
>> value of the node.
>> 2) based on the first node value split the data into 2 parts an repeat
>> the process until you reach a point.
>>
>> My sample code :
>>
>> import sqlContext._
>>
>> val people = sqlContext.jsonFile("siftoutput/")
>>
>> people.printSchema()
>>
>> people.registerTempTable("people")
>>
>> val output = sqlContext.sql("SELECT * From people")
>>
>> My Questions :
>>
>> 1) How to save result values of a query into a list ?
>>
>> 2) How to calculate variance of a column .Is there any efficient way?
>> 3) i will be running multiple queries on same data .Does spark has any way 
>> to optimize it ?
>> 4) how to save the output as key value pairs in a text file ?
>>
>> 5) is there any way i can build decision kd tree using machine libraries of 
>> spark ?
>>
>> please help
>>
>> Thanks,
>>
>> Sanath
>>
>>
>>
>
>


Re: spark sql query optimization , and decision tree building

2014-10-22 Thread sanath kumar
Thank you very much.

Two more small questions:

1) val output = sqlContext.sql("SELECT * From people")
My output has 128 columns and a single row.
How do I find which column has the maximum value in a single row using Scala?

2) As each row has 128 columns, how do I print each row to a text file with
space delimitation, or as JSON, using Scala?

please reply

Thanks,
Sanath


On Wed, Oct 22, 2014 at 8:24 AM, Cheng, Hao  wrote:

>  The “output” variable is actually a SchemaRDD, it provides lots of DSL
> API, see
> http://spark.apache.org/docs/1.1.0/api/scala/index.html#org.apache.spark.sql.SchemaRDD
>
>
>
> 1) How to save result values of a query into a list ?
>
> [CH:] val list: Array[Row] = output.collect, however get 1M records into an 
> array seems not a good idea.
>
>
>
> 2) How to calculate variance of a column .Is there any efficient way?
>
> [CH:] Not sure what’s that mean, but you can try 
> output.select(‘colname).groupby ?
>
>
> 3) i will be running multiple queries on same data .Does spark has any way to 
> optimize it ?
>
> [CH:] val cachedRdd = output.cache(), and do whatever you need to do based on 
> cachedRDD
>
>
> 4) how to save the output as key value pairs in a text file ?
>
> [CH:] cachedRdd.generate(xx,xx,xx).saveAsTextFile(xx)
>
>
>
>  5) is there any way i can build decision kd tree using machine libraries of 
> spark ?
>
> [CH:] Sorry, I am not sure about how kd tree used in mllib. but keep in
> mind SchemaRDD is just a normal RDD.
>
>
>
> Cheng Hao
>
>
>
> *From:* sanath kumar [mailto:sanath1...@gmail.com]
> *Sent:* Wednesday, October 22, 2014 12:58 PM
> *To:* user@spark.apache.org
> *Subject:* spark sql query optimization , and decision tree building
>
>
>
> Hi all ,
>
>
>
> I have a large data in text files (1,000,000 lines) .Each line has 128
> columns . Here each line is a feature and each column is a dimension.
>
> I have converted the txt files in json format and able to run sql queries
> on json files using spark.
>
> Now i am trying to build a k dimenstion decision tree (kd tree) with this
> large data .
>
> My steps :
> 1) calculate variance of each column pick the column with maximum variance
> and make it as key of first node , and mean of the column as the value of
> the node.
> 2) based on the first node value split the data into 2 parts an repeat the
> process until you reach a point.
>
> My sample code :
>
> import sqlContext._
>
> val people = sqlContext.jsonFile("siftoutput/")
>
> people.printSchema()
>
> people.registerTempTable("people")
>
> val output = sqlContext.sql("SELECT * From people")
>
> My Questions :
>
> 1) How to save result values of a query into a list ?
>
> 2) How to calculate variance of a column .Is there any efficient way?
> 3) i will be running multiple queries on same data .Does spark has any way to 
> optimize it ?
> 4) how to save the output as key value pairs in a text file ?
>
> 5) is there any way i can build decision kd tree using machine libraries of 
> spark ?
>
> please help
>
> Thanks,
>
> Sanath
>
>
>


RE: spark sql query optimization , and decision tree building

2014-10-22 Thread Cheng, Hao
The “output” variable is actually a SchemaRDD; it provides a rich DSL API. See 
http://spark.apache.org/docs/1.1.0/api/scala/index.html#org.apache.spark.sql.SchemaRDD


1) How to save result values of a query into a list ?

[CH:] val list: Array[Row] = output.collect; however, collecting 1M records into an 
array is not a good idea.



2) How to calculate variance of a column .Is there any efficient way?

[CH:] Not sure what that means, but you can try 
output.select(‘colname).groupby ?

3) i will be running multiple queries on same data .Does spark has any way to 
optimize it ?

[CH:] val cachedRdd = output.cache(), and do whatever you need to do based on 
cachedRDD

4) how to save the output as key value pairs in a text file ?

[CH:] cachedRdd.generate(xx,xx,xx).saveAsTextFile(xx)



 5) is there any way i can build decision kd tree using machine libraries of 
spark ?
[CH:] Sorry, I am not sure how a kd-tree would be used in MLlib, but keep in mind 
that a SchemaRDD is just a normal RDD.
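
Putting those fragments together, a rough sketch against the sqlContext and registered "people" table from the original post (the output path and tab/space formatting are only illustrative, and Row is treated as a sequence of values as in Spark 1.1):

```
// Sketch only, assuming the sqlContext and "people" table from the original post.
val output = sqlContext.sql("SELECT * FROM people")
val cachedRdd = output.cache()            // reuse the same data across multiple queries

// 1) collect results into a local array (only sensible for small result sets)
val localRows = cachedRdd.collect()

// 4) save as key/value pairs in a text file, keyed on the first column
cachedRdd
  .map(row => s"${row(0)}\t${row.mkString(" ")}")
  .saveAsTextFile("siftoutput-keyvalue")  // hypothetical output path
```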

Cheng Hao

From: sanath kumar [mailto:sanath1...@gmail.com]
Sent: Wednesday, October 22, 2014 12:58 PM
To: user@spark.apache.org
Subject: spark sql query optimization , and decision tree building

Hi all ,


I have large data in text files (1,000,000 lines). Each line has 128 columns; 
here each line is a feature and each column is a dimension.

I have converted the text files to JSON format and am able to run SQL queries on 
the JSON files using Spark.

Now I am trying to build a k-dimensional decision tree (kd-tree) with this large 
data.

My steps:
1) Calculate the variance of each column, pick the column with the maximum variance 
and make it the key of the first node, with the mean of that column as the node's value.
2) Based on the first node's value, split the data into 2 parts and repeat the 
process until a stopping point is reached.

My sample code :

import sqlContext._

val people = sqlContext.jsonFile("siftoutput/")

people.printSchema()

people.registerTempTable("people")

val output = sqlContext.sql("SELECT * From people")

My Questions:

1) How do I save the result values of a query into a list?

2) How do I calculate the variance of a column? Is there an efficient way?
3) I will be running multiple queries on the same data. Does Spark have any way to 
optimize this?
4) How do I save the output as key/value pairs in a text file?

5) Is there any way I can build a kd decision tree using Spark's machine learning 
libraries?

please help

Thanks,

Sanath



Re: Spark SQL Query Plan optimization

2014-08-02 Thread Michael Armbrust
The number of partitions (which determines the number of tasks) is fixed after
any shuffle and can be configured using 'spark.sql.shuffle.partitions' through
SQLConf (i.e. sqlContext.set(...) or
"SET spark.sql.shuffle.partitions=..." in SQL). It is possible we will auto-select
this based on statistics in the future.

I think you might be reading the query plan backwards.  The data starts at
the bottom and moves upwards.  The filter is being performed before the
shuffle (exchange) and join operations.
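
For example (a sketch reusing the join from the code below; 10 is an arbitrary value):

```
// Lower the number of post-shuffle partitions before running the join;
// a small local example does not need 150 shuffle partitions.
sqlContext.sql("SET spark.sql.shuffle.partitions=10")

val salRRDFromSQL = sqlContext.sql(
  "SELECT emp.key,value,salary from emp,sal WHERE emp.key=30 AND emp.key=sal.key")
salRRDFromSQL.collect().foreach(println)
```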


On Fri, Aug 1, 2014 at 10:13 AM, N.Venkata Naga Ravi 
wrote:

>  Hi,
>
> I am trying to understand the query plan and the number of tasks / execution
> time for a join query.
>
> Consider the following example, which creates two tables, emp and sal, with
> 100 records each and a key for joining them.
>
> EmpRDDRelation.scala:
>
> case class EmpRecord(key: Int, value: String)
> case class SalRecord(key: Int, salary: Int)
>
> object EmpRDDRelation {
>   def main(args: Array[String]) {
>     val sparkConf = new SparkConf().setMaster("local[1]").setAppName("RDDRelation")
>     val sc = new SparkContext(sparkConf)
>     val sqlContext = new SQLContext(sc)
>     // Importing the SQL context gives access to all the SQL functions and implicit conversions.
>     import sqlContext._
>
>     var rdd = sc.parallelize((1 to 100).map(i => EmpRecord(i, s"name_$i")))
>     rdd.registerAsTable("emp")
>
>     // Once tables have been registered, you can run SQL queries over them.
>     println("Result of SELECT *:")
>     sql("SELECT * FROM emp").collect().foreach(println)
>
>     var salrdd = sc.parallelize((1 to 100).map(i => SalRecord(i, i * 100)))
>     salrdd.registerAsTable("sal")
>     sql("SELECT * FROM sal").collect().foreach(println)
>
>     var salRRDFromSQL = sql("SELECT emp.key,value,salary from emp,sal WHERE emp.key=30 AND emp.key=sal.key")
>     salRRDFromSQL.collect().foreach(println)
>   }
> }
>
> Here are my observations:
>
> Below is the query plan for the join query above, which creates 150 tasks. I can
> see that a Filter is added to the plan, but I am not sure whether it is applied
> in an optimized way. It is not clear why 150 tasks are required: I see the same
> 150 tasks when I execute the join query without the "emp.key=30" filter, i.e.
> "SELECT emp.key,value,salary from emp,sal WHERE emp.key=sal.key", and both cases
> take the same time. My understanding (from an Oracle RDBMS perspective) is that
> the emp.key=30 filter should be applied first and the join with the sal table
> should run on top of the filtered records from emp. But here the query plan
> appears to join the tables first and apply the filter later. Is there any way to
> improve this on the code side, or does it require an enhancement on the Spark SQL
> side?
>
> Please review my observations and let me know your comments.
>
>
> == Query Plan ==
> Project [key#0:0,value#1:1,salary#3:3]
>  HashJoin [key#0], [key#2], BuildRight
>
>
> *Exchange (HashPartitioning [key#0:0], 150)Filter (key#0:0 = 30)*
> ExistingRdd [key#0,value#1], MapPartitionsRDD[1] at mapPartitions at
> basicOperators.scala:174
>   Exchange (HashPartitioning [key#2:0], 150)
>ExistingRdd [key#2,salary#3], MapPartitionsRDD[5] at mapPartitions at
> basicOperators.scala:174), which is now runnable
> 14/08/01 22:20:02 INFO DAGScheduler: Submitting 150 missing tasks from
> Stage 2 (SchemaRDD[8] at RDD at SchemaRDD.scala:98
> == Query Plan ==
> Project [key#0:0,value#1:1,salary#3:3]
>  HashJoin [key#0], [key#2], BuildRight
>   Exchange (HashPartitioning [key#0:0], 150)
>Filter (key#0:0 = 30)
> ExistingRdd [key#0,value#1], MapPartitionsRDD[1] at mapPartitions at
> basicOperators.scala:174
>   Exchange (HashPartitioning [key#2:0], 150)
>ExistingRdd [key#2,salary#3], MapPartitionsRDD[5] at mapPartitions at
> basicOperators.scala:174)
> 14/08/01 22:20:02 INFO TaskSchedulerImpl: *Adding task set 2.0 with 150
> tasks*
>