p.com> wrote:
> Have you tried to “set spark.driver.allowMultipleContexts = true”?
>
>
>
> *David Newberger*
>
>
>
> *From:* Lee Ho Yeung [mailto:jobmatt...@gmail.com]
> *Sent:* Tuesday, June 14, 2016 8:34 PM
> *To:* user@spark.apache.org
> *Subject:* streaming ex
I wrote a Python script which uses itertools.combinations(initlist, 2),
but it fails when the number of elements in initlist exceeds 14,000.
Is it possible to use Spark to do this work?
I have seen that yatel can do this; do Spark and yatel use the hard disk as memory?
If so,
what needs to change in
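The pair count is the likely culprit here. A minimal sketch (the 14,000 figure comes from the message above; the small initlist is a stand-in for demonstration) of why materializing every pair fails while iterating lazily does not:

```python
import itertools
from math import comb

# For n = 14,000 elements there are n*(n-1)/2 unordered pairs -- roughly
# 98 million -- so list(itertools.combinations(initlist, 2)) exhausts memory.
print(comb(14000, 2))  # 97993000

# Iterating the combinations lazily never holds them all at once; in Spark
# the analogous move is pairing an RDD with itself (cartesian + filter),
# which spills to disk instead of living in one process's memory.
initlist = range(5)  # small stand-in for the real list
total = sum(1 for _ in itertools.combinations(initlist, 2))
print(total)  # 10
```

If the downstream work per pair is independent, distributing it over an RDD of pairs is the usual Spark shape for this problem.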
Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 15 June 2016 at 03:02, Lee
deep=1"))
org.apache.spark.sql.AnalysisException: cannot resolve 'a0' given input
columns: [a0a1a2a3a4a5a6a7a8 a9 ];
at
org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
On Tue, Jun 14, 20
when I simulate streaming with nc -lk, I get the error below;
then I tried the example:
martin@ubuntu:~/Downloads$
/home/martin/Downloads/spark-1.6.1/bin/run-example
streaming.NetworkWordCount localhost
Using Spark's default log4j profile:
org/apache/spark/log4j-defaults.properties
16/06/14 18:33:06
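For context, `nc -lk <port>` is just a TCP server that replays typed lines to whoever connects, and NetworkWordCount is an ordinary socket client of it; both must name the same host and port. A self-contained Python sketch of that pairing (the line contents and auto-assigned port are made up for the demo):

```python
import socket
import threading

# Stand-in for `nc -lk <port>`: listen on a port and replay queued lines to
# the first client that connects. NetworkWordCount plays the client role.
def serve_lines(lines, ready, ports):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick (demo only)
    srv.listen(1)
    ports.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    for line in lines:
        conn.sendall((line + "\n").encode())
    conn.close()
    srv.close()

ready, ports = threading.Event(), []
t = threading.Thread(target=serve_lines,
                     args=(["hello world", "hello spark"], ready, ports))
t.start()
ready.wait()

# The streaming job would connect the same way and split lines into words:
cli = socket.create_connection(("127.0.0.1", ports[0]))
words = cli.makefile().read().split()
cli.close()
t.join()
print(words)  # ['hello', 'world', 'hello', 'spark']
```

If nothing shows up in the example's output, checking that a plain client like this can read lines from the nc port is a quick way to rule out a host/port mismatch.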
after trying the following commands, it still does not show data:
https://drive.google.com/file/d/0Bxs_ao6uuBDUVkJYVmNaUGx2ZUE/view?usp=sharing
https://drive.google.com/file/d/0Bxs_ao6uuBDUc3ltMVZqNlBUYVk/view?usp=sharing
/home/martin/Downloads/spark-1.6.1/bin/spark-shell --packages