Increasing SPARK_EXECUTOR_INSTANCES to 4 worked.
SPARK_EXECUTOR_INSTANCES="4" #Number of workers to start (Default: 2)
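For reference, a minimal sketch of the two usual ways to set the executor count; the submit-time flag applies to YARN mode, and the class and jar names below are placeholders:

```shell
# In conf/spark-env.sh (standalone mode):
SPARK_EXECUTOR_INSTANCES="4"   # number of executors to start (default: 2)

# Or equivalently at submit time on YARN
# (com.example.StreamingJob / streaming-job.jar are hypothetical names):
spark-submit \
  --num-executors 4 \
  --class com.example.StreamingJob \
  streaming-job.jar
```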
Regards,
Vinti
On Wed, Mar 2, 2016 at 4:28 AM, Vinti Maheshwari
wrote:
Thanks much Saisai. Got it.
So I think increasing the executor memory might work. Trying that.
Regards,
~Vinti
On Wed, Mar 2, 2016 at 4:21 AM, Saisai Shao wrote:
You don't have to specify the storage level for the direct Kafka API, since it
doesn't need to store the input data ahead of time. Only the receiver-based
approach can specify a storage level.
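To make this concrete, a sketch against the Spark 1.x spark-streaming-kafka API (the ZooKeeper/broker addresses, group id, and topic name are placeholders): the receiver-based createStream takes a StorageLevel parameter, createDirectStream does not, and with the direct approach you can still persist a derived stream explicitly. This assumes an existing StreamingContext and a Spark cluster, so it is illustrative rather than runnable standalone.

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.kafka.KafkaUtils

// Assuming an already-constructed StreamingContext `ssc`.

// Receiver-based API: the storage level is an explicit parameter.
val receiverStream = KafkaUtils.createStream(
  ssc, "zkhost:2181", "my-group", Map("mytopic" -> 1),
  StorageLevel.MEMORY_AND_DISK_SER)

// Direct API: no storage-level parameter, since input data
// is not stored ahead of time.
val directStream = KafkaUtils.createDirectStream[
    String, String, StringDecoder, StringDecoder](
  ssc, Map("metadata.broker.list" -> "broker:9092"), Set("mytopic"))

// If desired, persist a derived DStream explicitly instead:
directStream.map(_._2).persist(StorageLevel.MEMORY_AND_DISK_SER)
```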
Thanks
Saisai
On Wed, Mar 2, 2016 at 7:08 PM, Vinti Maheshwari
wrote:
Hi All,
I wanted to set StorageLevel.MEMORY_AND_DISK_SER in my Spark Streaming
program, as I am currently getting MetadataFetchFailedException. I am not sure
where I should pass StorageLevel.MEMORY_AND_DISK, as createDirectStream
doesn't seem to accept that parameter.
val