at java.lang.Thread.run(Thread.java:750)
Could someone help me with how to proceed further?
--
Thanks and Regards
Ranga Reddy
--
Bangalore, Karnataka, India
Mobile: +91-9986183183 | Email: rangareddy.av...@gmail.com
Thanks for the information. Will rebuild with 0.6.0 till the patch is
merged.
On Tue, Mar 17, 2015 at 7:24 PM, Ted Yu yuzhih...@gmail.com wrote:
Ranga:
Take a look at https://github.com/apache/spark/pull/4867
Cheers
On Tue, Mar 17, 2015 at 6:08 PM, fightf...@163.com wrote:
to create tachyon dir in /tmp_spark_tachyon/spark-e3538a20-5e42-48a4-ad67-4b97aded90e4/driver
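For reference, a sketch of the Spark 1.x settings that control where those Tachyon directories are created (a sketch only; the URL is the documented default and the baseDir value mirrors the path above):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

// Spark 1.x off-heap (Tachyon) settings; app name and master are illustrative
val conf = new SparkConf()
  .setAppName("tachyon-demo")
  .setMaster("local[*]")
  .set("spark.tachyonStore.url", "tachyon://localhost:19998")
  .set("spark.tachyonStore.baseDir", "/tmp_spark_tachyon")
val sc = new SparkContext(conf)
sc.parallelize(1 to 100).persist(StorageLevel.OFF_HEAP)  // blocks are stored via Tachyon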
Thanks for any other pointers.
- Ranga
On Wed, Mar 18, 2015 at 9:53 AM, Ranga sra...@gmail.com wrote:
Thanks for the information. Will rebuild with 0.6.0 till the patch is
merged.
Thanks Ted. Will do.
On Wed, Mar 18, 2015 at 2:27 PM, Ted Yu yuzhih...@gmail.com wrote:
Ranga:
Please apply the patch from:
https://github.com/apache/spark/pull/4867
and rebuild Spark; the build will then use Tachyon 0.6.1.
Cheers
On Wed, Mar 18, 2015 at 2:23 PM, Ranga sra...@gmail.com wrote:
Hi Haoyuan
No. I assumed that Spark-1.3.0 was already built with Tachyon-0.6.0. If
not, I can rebuild and try. Could you let me know how to rebuild with 0.6.0?
Thanks for your help.
- Ranga
On Wed, Mar 18, 2015 at 12:59 PM, Haoyuan Li haoyuan...@gmail.com wrote:
Did you recompile Spark with Tachyon 0.6.0?
Is Tachyon being used in a production environment by anybody in this group?
Appreciate your help with this.
- Ranga
You could also increase the spark.storage.memoryFraction if that is an
option.
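For example (a minimal sketch; 0.6 was the default for spark.storage.memoryFraction in Spark 1.x):

import org.apache.spark.SparkConf

// Give cached blocks a larger share of the executor heap (default 0.6)
val conf = new SparkConf().set("spark.storage.memoryFraction", "0.7")

The same setting can also be passed to spark-submit with --conf spark.storage.memoryFraction=0.7.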
- Ranga
On Wed, Dec 10, 2014 at 10:23 PM, Aaron Davidson ilike...@gmail.com wrote:
The ContextCleaner uncaches RDDs that have gone out of scope on the
driver. So it's possible that the given RDD
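A small sketch of the scope-based uncaching being described (setup and names are illustrative):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

val sc = new SparkContext(new SparkConf().setAppName("cleaner-demo").setMaster("local[*]"))
var data: RDD[Int] = sc.parallelize(1 to 1000).cache()
data.count()  // materializes the cached blocks
data = null   // the RDD is now out of scope on the driver
System.gc()   // after a GC, the ContextCleaner may asynchronously uncache it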
On Tue, Oct 14, 2014 at 4:21 AM, Rafal Kwasny m...@entropy.be wrote:
Hi,
keep in mind that you're going to have a bad time if your secret key
contains a "/". This is due to an old and stupid Hadoop bug.
- The temporary credentials are based on the IAMRole, but they generally expire in an hour.
- If Spark is not able to use the IAMRole credentials, I may have to generate a static key-id/secret. This may or may not be possible in the environment I am in (from a policy perspective); see the sketch below.
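If a static key-id/secret is permitted, one common way to supply it is via the Hadoop configuration (a sketch; the values are placeholders and these property names apply to the s3n:// scheme):

// Placeholder credentials; sc is an existing SparkContext
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY_ID")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_ACCESS_KEY")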
- Ranga
One related question. Could I specify the
com.amazonaws.services.s3.AmazonS3Client implementation for the
fs.s3.impl parameter? Let me try that and update this thread with my
findings.
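Worth noting: fs.s3.impl names an org.apache.hadoop.fs.FileSystem implementation rather than an AWS client class, so a typical setting looks like this (illustrative only, not a recommendation from the thread):

// Point the s3:// scheme at the native S3 filesystem (a Hadoop FileSystem class)
sc.hadoopConfiguration.set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")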
On Tue, Oct 14, 2014 at 10:48 AM, Ranga sra...@gmail.com wrote:
Thanks for the input.
Yes, I did use
Could you use the AWS SDK in your application to provide AWS credentials?
https://github.com/seratch/AWScala
On Oct 14, 2014, at 11:10 AM, Ranga sra...@gmail.com wrote:
One related question. Could I specify the
com.amazonaws.services.s3.AmazonS3Client implementation for the
fs.s3.impl parameter? Let me try that and update this thread with my findings.
- Ranga
Is there a way to specify a request header during the
sparkContext.textFile call?
- Ranga
On Mon, Oct 13, 2014 at 11:03 AM, Ranga sra...@gmail.com wrote:
Hi
I am trying to access files/buckets in S3 and encountering a permissions issue. The buckets are configured to authenticate using an IAMRole.
Hi Daniil
Could you provide some more details on how the cluster should be
launched/configured? The EC2 instance that I am dealing with uses the
concept of IAMRoles. I don't have any keyfile to specify to the spark-ec2
script.
Thanks for your help.
- Ranga
On Wed, Oct 8, 2014 at 9:18 PM, Ranga sra...@gmail.com wrote:
This is a bit strange. When I print the schema for the RDD, it reflects the correct data type for each column. But doing any kind of mathematical calculation seems to result in a ClassCastException. Are there other approaches that I should be looking at?
Thanks for your help.
- Ranga
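The original code block did not survive in the archive; a sketch of the kind of query sequence described, with placeholder table/column names:

// Placeholder names; sqlContext is an existing SQLContext
val aggRdd = sqlContext.sql("SELECT dept, SUM(amount) FROM records GROUP BY dept")
// This query throws the exception when I collect the results
val results = aggRdd.collect()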
I tried adding the cast to the aggRdd query above and that didn't help.
- Ranga
On Wed, Oct 8, 2014 at 3:52 PM, Michael Armbrust mich...@databricks.com
wrote:
Using SUM on a string should automatically cast the column.
Michael Armbrust mich...@databricks.com wrote:
Which version of Spark are you running?
On Wed, Oct 8, 2014 at 4:18 PM, Ranga sra...@gmail.com wrote:
Thanks Michael. Should the cast be done in the source RDD or while doing
the SUM?
To give a better picture, here is the code sequence:
as int)
...
from table
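Filling the elided pieces with placeholder names, the shape of that sequence was presumably something like:

// Placeholder names; the cast to int happens inside the aggregate
val agg = sqlContext.sql("SELECT dept, SUM(CAST(amount AS int)) FROM table GROUP BY dept")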
Any other pointers? Thanks for the help.
- Ranga
On Wed, Oct 8, 2014 at 5:20 PM, Ranga sra...@gmail.com wrote:
Sorry. It's 1.1.0.
After digging a bit more into this, it seems like the OpenCSV Deserializer
converts all the columns to a String type. This may be throwing the ClassCastException.
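One workaround, given that, is to parse the column to an Int in the source RDD before registering the table, so the conversion happens before the SQL layer (a sketch against the Spark 1.1 API; names and the input path are placeholders):

case class Record(dept: String, amount: Int)

val rawLines = sc.textFile("data.csv")   // illustrative input
import sqlContext.createSchemaRDD        // Spark 1.x implicit: RDD of case classes -> SchemaRDD
val typed = rawLines.map(_.split(",")).map(p => Record(p(0), p(1).trim.toInt))
typed.registerTempTable("records")       // SUM(amount) now operates on an Int column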