> - Bagel
> - Support for Hadoop 2.1 and earlier
> - The ability to configure closure serializer
> - HTTPBroadcast
> - TTL-based metadata cleaning
> - *Semi-private class org.apache.spark.Logging. We suggest you use
> slf4j directly.*
> - SparkContext.metricsSystem
>
> Thanks,
> *From:* ayan guha [mailto:guha.a...@gmail.com]
> *Sent:* Monday, June 26, 2017 6:26 AM
> *To:* Weiqing Yang
> *Cc:* user
> *Subject:* Re: HDP 2.5 - Python - Spark-On-Hbase
Hi
I am using the following:
--packages com.hortonworks:shc:1.0.0-1.6-s_2.10 --repositories
http://repo.hortonworks.com/content/groups/public/
Is it compatible with Spark 2.X? I would like to use it
Best
Ayan
On Sat, Jun 24, 2017 at 2:09 AM, Weiqing Yang wrote:

Yes.
Which SHC version were you using?
If you hit any issues, you can post them in the SHC GitHub issues. There are
some threads about this.
On Fri, Jun 23, 2017 at 5:46 AM, ayan guha wrote:
Hi
Is it possible to use SHC from Hortonworks with pyspark? If so, any working
code sample available?
Also, I faced an issue while running the samples with Spark 2.0
"Caused by: java.lang.ClassNotFoundException: org.apache.spark.Logging"
Any workaround?
Thanks in advance
--
Best
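(Editor's sketch answering the PySpark question above: SHC is usable from PySpark through the DataFrame reader. The table name, column family, and columns below are placeholders, and a live HBase cluster plus the shc-core jar on the classpath are assumed, so the Spark read call is shown as a comment rather than executed.)

```python
import json

# SHC maps an HBase table onto a DataFrame through a JSON "catalog".
# "table1", column family "cf1", and the columns here are placeholders.
catalog = json.dumps({
    "table": {"namespace": "default", "name": "table1"},
    "rowkey": "key",
    "columns": {
        "col0": {"cf": "rowkey", "col": "key", "type": "string"},
        "col1": {"cf": "cf1", "col": "col1", "type": "string"},
    },
})

# With shc-core on the classpath (via --packages as above), the read is:
#   df = (spark.read
#         .options(catalog=catalog)
#         .format("org.apache.spark.sql.execution.datasources.hbase")
#         .load())
```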
I wanted to confirm whether this is now supported, such as in Spark v1.3.0.
I've read varying info online; just thought I'd verify.
Thanks
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Python-Spark-and-HBase-tp6142p24117.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
    format(self._fqn + name))
Py4JError: org.apache.spark.api.python.PythonRDDnewAPIHadoopFile does not
exist in the JVM

The traceback includes a reference to the class
org.apache.spark.api.python.PythonRDDnewAPIHadoopFile.
Any ideas?
Also, do you have a working example of HBase access with the new code?
Thanks
Tommer
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Python-Spark-and-HBase-tp6142p6502.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
='org.apache.hadoop.hbase.client.Result'
Is it possible that the typo is coming from inside the Spark code?
Tommer
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Python-Spark-and-HBase-tp6142p6506.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
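(Editor's sketch of the API under discussion: PySpark's `SparkContext.newAPIHadoopRDD` with the HBase `TableInputFormat` and the string converters shipped in the Spark examples jar. The table name and app name are placeholders; a running HBase cluster and the examples jar on the classpath are assumed, so the function is defined here but not invoked.)

```python
def read_hbase_table(table_name):
    """Sketch: read an HBase table as an RDD of (rowkey, value) strings.

    Assumes a running HBase cluster and the Spark examples jar (which
    provides the Python converters) on the classpath; table_name is a
    placeholder.
    """
    from pyspark import SparkContext

    sc = SparkContext(appName="hbase-read-sketch")
    # Hadoop configuration passed through to TableInputFormat.
    conf = {"hbase.mapreduce.inputtable": table_name}
    return sc.newAPIHadoopRDD(
        "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
        "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
        "org.apache.hadoop.hbase.client.Result",
        keyConverter="org.apache.spark.examples.pythonconverters."
                     "ImmutableBytesWritableToStringConverter",
        valueConverter="org.apache.spark.examples.pythonconverters."
                       "HBaseResultToStringConverter",
        conf=conf,
    )
```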
Thanks Nick and Matei. I'll take a look at the patch and keep you updated.
Tommer
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Python-Spark-and-HBase-tp6142p6176.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Is there any equivalent in Python?
Thanks
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Python-Spark-and-HBase-tp6142.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.