Github user manku-timma commented on a diff in the pull request:
https://github.com/apache/spark/pull/20851#discussion_r175327686
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilters.scala
---
@@ -72,6 +82,15 @@ private[parquet
Github user manku-timma commented on a diff in the pull request:
https://github.com/apache/spark/pull/18174#discussion_r125386319
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/UnsafeShuffleWriter.java ---
@@ -360,12 +368,10 @@ void forceSorterToSpill() throws
Github user manku-timma commented on the issue:
https://github.com/apache/spark/pull/18174
Just to understand what is happening:
1. Shuffle records are written to a serialisation buffer (1M) after
serialisation.
2. The serialised buffer is written to the in-memory sorter's
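The two steps above can be sketched in plain Scala. This is an illustrative model only, assuming the general shape described in the comment (serialise bytes into a buffer, hand the sorter a compact pointer); the names `ShuffleSketch`, `insertRecord`, and `sortedPointers` are invented for the sketch and are not Spark's actual `UnsafeShuffleWriter` internals.

```scala
import java.io.{ByteArrayOutputStream, ObjectOutputStream}
import scala.collection.mutable.ArrayBuffer

// Simplified model of the two steps above: records are first serialised
// into a byte buffer; the in-memory sorter then sees only a compact
// (partitionId, offset) pointer, never the record bytes themselves.
object ShuffleSketch {
  val serBuffer = new ByteArrayOutputStream(1024 * 1024) // step 1: 1M buffer
  val pointers = ArrayBuffer.empty[(Int, Int)]           // step 2: sorter input

  def insertRecord(partitionId: Int, record: AnyRef): Unit = {
    val offset = serBuffer.size()            // where this record's bytes begin
    val oos = new ObjectOutputStream(serBuffer)
    oos.writeObject(record)                  // step 1: serialise into buffer
    oos.flush()
    pointers += ((partitionId, offset))      // step 2: record only a pointer
  }

  // Sorting moves only the small pointers, not the serialised bytes.
  def sortedPointers: Seq[(Int, Int)] = pointers.sortBy(_._1).toSeq
}
```

The point of the design is that sorting shuffles 8-to-16-byte pointers around instead of variable-length serialised records.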
Github user manku-timma commented on the pull request:
https://github.com/apache/spark/pull/8186#issuecomment-132618967
Say the metastore DB admin has created a place to save the data of all
tables by default. This applies to all Spark (and other) jobs that use the
metastore
Github user manku-timma commented on the pull request:
https://github.com/apache/spark/pull/8186#issuecomment-132874783
@yhuai
In Hive, the following sample code works fine:
```
set hive.metastore.warehouse.dir=/test/warehouse
create table test1 as select * from
```
GitHub user manku-timma opened a pull request:
https://github.com/apache/spark/pull/8186
[SPARK-9944] [SQL] [WIP] Allow hive.metastore.warehouse.dir to override
db_location_uri
When you run `dataframe.saveAsTable(tablename)`, the warehouse directory
in the metastore
Github user manku-timma commented on the pull request:
https://github.com/apache/spark/pull/322#issuecomment-39976847
@pwendell: Do you plan to pick this up for 1.0? Is there anything more I
need to do?
---
If your project is set up for it, you can reply to this email and have your
Github user manku-timma commented on the pull request:
https://github.com/apache/spark/pull/322#issuecomment-39720333
So the current fix looks fine?
Github user manku-timma commented on the pull request:
https://github.com/apache/spark/pull/322#issuecomment-39676303
@pwendell: I tested your fix to SparkEnv.scala (after reverting my earlier
change). It does not work. SparkEnv's loader turns out to be
`sun.misc.Launcher
Github user manku-timma commented on the pull request:
https://github.com/apache/spark/pull/322#issuecomment-39689262
@pwendell: You are right. Actually
`sun.misc.Launcher$AppClassLoader@12360be0` is the classloader even in the
earlier code.
Looks like classes directly
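The observation above (that `sun.misc.Launcher$AppClassLoader` is the loader either way) can be checked without Spark at all. A minimal probe, with an invented name `LoaderProbe`; note that on JDK 8 the application loader prints as `sun.misc.Launcher$AppClassLoader@...`, while JDK 9+ reports `jdk.internal.loader.ClassLoaders$AppClassLoader`.

```scala
// Minimal probe for the classloader question discussed above: compare the
// thread's context classloader with the loader that actually defined a class.
object LoaderProbe {
  def describe(): (ClassLoader, ClassLoader) = {
    val contextLoader  = Thread.currentThread().getContextClassLoader
    val definingLoader = getClass.getClassLoader // loader that defined LoaderProbe
    (contextLoader, definingLoader)
  }
}
```

On a plain JVM main thread the two are usually the same application loader; the Spark/Mesos bug discussed in this PR arises when code resolves classes through one of them while the jars were added to the other.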
Github user manku-timma commented on the pull request:
https://github.com/apache/spark/pull/322#issuecomment-39641219
Let me know if there is any other change I need to make. I have tested
after merging from master and things look fine. This is good to merge from
my end
Github user manku-timma commented on the pull request:
https://github.com/apache/spark/pull/322#issuecomment-39655348
I see that PR 334 made the java 6 change. So I reverted mine.
Github user manku-timma commented on the pull request:
https://github.com/apache/spark/pull/322#issuecomment-39541428
I can confirm that the third line is needed. Without that line I see the
same failure as earlier.
Github user manku-timma commented on the pull request:
https://github.com/apache/spark/pull/322#issuecomment-39543590
```
java.lang.ClassNotFoundException: org/apache/spark/serializer/JavaSerializer
    at java.lang.Class.forName0(Native Method
```
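One detail worth noting in the trace above: the class name is slash-separated (`org/apache/spark/...`), but `Class.forName` only accepts dot-separated binary names, so a slash-separated name can never be found. This sketch reproduces that shape of failure with a JDK class; it illustrates the naming rule only, not necessarily this PR's root cause (which the surrounding comments attribute to the classloader).

```scala
// Class.forName takes dot-separated binary names; the slash-separated
// (JVM-internal) form always throws ClassNotFoundException.
object ForNameDemo {
  def canLoad(name: String): Boolean =
    try { Class.forName(name); true }
    catch { case _: ClassNotFoundException => false }
}
```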
Github user manku-timma commented on the pull request:
https://github.com/apache/spark/pull/322#issuecomment-39552826
@ueshin, your one line change works for me.
Github user manku-timma commented on the pull request:
https://github.com/apache/spark/pull/322#issuecomment-39627690
Oops. Added that line.
I am facing this error in the current git tree
```
[error]
/home/vagrant/spark2/sql/core/src/main/scala/org/apache/spark
```
GitHub user manku-timma opened a pull request:
https://github.com/apache/spark/pull/322
[SPARK-1403] Move the class loader creation back to where it was in 0.9.0
[SPARK-1403] I investigated why Spark 0.9.0 loads fine on Mesos while Spark
1.0.0 fails. What I found
Github user manku-timma commented on the pull request:
https://github.com/apache/spark/pull/322#issuecomment-39530349
After looking at the code a bit more, I see that the code to
setContextClassLoader does not use SecurityManager, as far as I can see.
createClassLoader is creating a File object
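The File-then-classloader pattern referred to above can be sketched in a few lines. This is a generic illustration of the shape (directory to `File` to `URL`, then a `URLClassLoader` installed as the context loader), assuming that is what `createClassLoader` does; the name `LoaderSketch` is invented and this is not Spark's actual `Executor.createClassLoader` code.

```scala
import java.io.File
import java.net.URLClassLoader

// Illustrative sketch: turn a directory into a URL, build a URLClassLoader
// that delegates to the current context loader, and install it on the thread.
object LoaderSketch {
  def install(dir: File): URLClassLoader = {
    val urls   = Array(dir.toURI.toURL)  // the File -> URL step
    val parent = Thread.currentThread().getContextClassLoader
    val loader = new URLClassLoader(urls, parent)
    Thread.currentThread().setContextClassLoader(loader)
    loader
  }
}
```

Because the new loader delegates to its parent, JDK and application classes still resolve; only classes under `dir` are loaded from the new location.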