Try http://localhost:4040
On Mon, Feb 22, 2016 at 8:23 AM, Vasanth Bhat wrote:
> Thanks Gourav, Eduardo
>
> I tried http://localhost:8080 and http://OAhtvJ5MCA:8080/ . In both
> cases Firefox just hangs.
>
> Also, I tried with the lynx text-based browser. I get the message "HTTP
> request s…
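A quick way to narrow this down is to check what is actually listening before pointing a browser at it. A minimal sketch, assuming the default ports (8080 for the standalone master UI, 4040 for a running application's UI):

  # Check whether anything is listening on the usual Spark UI ports.
  sudo netstat -tlnp | grep -E ':(4040|8080)'

  # Fetch only the response headers, with a timeout, so a hung server is
  # distinguishable from a refused connection.
  curl -sI --max-time 5 http://localhost:4040
  curl -sI --max-time 5 http://localhost:8080

If curl also hangs on both ports, the problem is on the server side (or a firewall), not the browser.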
Our Hadoop NFS gateway seems to have been malfunctioning.
I restarted it, and the Spark jobs have now resumed successfully.
Problem solved.
Sonal, SparkPi couldn't run either; it stayed stuck on screen with no output:
hadoop-user@yks-hadoop-m01:/usr/local/spark$ ./bin/run-example SparkPi
On Tue, Nov 17, 2015 at 12:22 PM, Steve Loughran
wrote:
> 48 hours is one of those kerberos warning times (as is 24h, 72h and 7
> days)
Has anyone else experienced this issue?
On Mon, Nov 16, 2015 at 8:06 PM, Kayode Odeyemi wrote:
>
> Or are you saying that the Java process never even starts?
>
>
> Exactly.
>
> Here's what I got back from jstack as expected:
>
> hadoop-user@yks-hadoop-m01:/u
> Or are you saying that the Java process never even starts?
Exactly.
Here's what I got back from jstack as expected:
hadoop-user@yks-hadoop-m01:/usr/local/spark/bin$ jstack 31316
31316: Unable to open socket file: target process not responding or HotSpot
VM not loaded
The -F option can be used when the target process is not responding
"$RUNNER" -cp "$LAUNCH_CLASSPATH" org.apache.spark.launcher.Main "$@"
On Mon, Nov 16, 2015 at 5:22 PM, Ted Yu wrote:
> Which release of Spark are you using ?
>
> Can you take stack trace and pastebin it ?
>
> Thanks
>
> On Mon, Nov 16, 201
./spark-submit --class com.migration.UpdateProfiles --executor-memory 8g
~/migration-profiles-0.1-SNAPSHOT.jar
is stuck and outputs nothing to the console.
What could be the cause of this? The current max heap size is 1.75g, and it's
only using 1g.
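If the driver is the constrained process rather than the executors, its heap is set separately. A hedged sketch, reusing the command from this thread with an illustrative driver size:

  # --executor-memory sizes the executors; the driver JVM has its own
  # limit, raised with --driver-memory (4g here is just an example).
  ./spark-submit --class com.migration.UpdateProfiles \
    --driver-memory 4g \
    --executor-memory 8g \
    ~/migration-profiles-0.1-SNAPSHOT.jar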
Thank you. That seems to have resolved it.
On Fri, Nov 6, 2015 at 11:46 PM, Ted Yu wrote:
> You mentioned resourcemanager but not nodemanagers.
>
> I think you need to install Spark on nodes running nodemanagers.
>
> Cheers
>
> On Fri, Nov 6, 2015 at 1:32 PM, Kayode Odeyemi wr
Hi,
I have a YARN Hadoop setup of 8 nodes (7 datanodes, plus 1 node acting as
namenode and resourcemanager). I have Spark set up only on the
namenode/resourcemanager node.
Do I need to have Spark installed on the datanodes?
I ask because I get the error below when I run a Spark job through
spark-submit:
Error: Coul
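For reference, a minimal sketch of how such a job is usually submitted to YARN (class name, jar path, and config directory are placeholders, not taken from this thread):

  # HADOOP_CONF_DIR tells spark-submit where the cluster's configuration
  # lives; in yarn-cluster mode the driver then runs inside the cluster.
  export HADOOP_CONF_DIR=/etc/hadoop/conf
  ./bin/spark-submit \
    --master yarn-cluster \
    --class com.example.MyApp \
    /path/to/my-app.jar

As Ted's reply above suggests, the nodes running nodemanagers also need the Spark runtime available to them.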
Hi,
I'm running a Spark standalone cluster (1 master, 2 workers).
Everything has been failing, including spark-submit, with errors such as "Caused
by: java.lang.ClassNotFoundException: com.migration.App$$anonfun$upsert$1".
For now, I've reverted to submitting jobs through Scala apps.
Any ideas?
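The $$anonfun$ suffix in that class name is a compiled Scala closure from the application jar, so the error usually means the jar carrying it never reached the executors. A hedged sketch of the usual checks (all paths and hosts are placeholders):

  # Submit the application jar itself; spark-submit ships it to the
  # executors, closures included.
  ./bin/spark-submit --class com.migration.App \
    --master spark://master-host:7077 \
    /path/to/app-assembly.jar

  # If the app depends on extra jars, pass them explicitly so the
  # executors can load them too.
  ./bin/spark-submit --class com.migration.App \
    --jars /path/to/dep1.jar,/path/to/dep2.jar \
    /path/to/app.jar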
-slaves.sh)
On Wed, Nov 4, 2015 at 9:28 PM, Ted Yu wrote:
> Something like this:
> conf.setMaster("local[3]")
>
> On Wed, Nov 4, 2015 at 11:08 AM, Kayode Odeyemi wrote:
>
>> Thanks Ted.
>>
>> Where would you suggest I add that? I'm creating a
true")
conf.set("spark.executor.memory", "5g")
On Wed, Nov 4, 2015 at 9:04 PM, Ted Yu wrote:
> Have you tried using -Dspark.master=local ?
>
> Cheers
>
> On Wed, Nov 4, 2015 at 10:47 AM, Kayode Odeyemi wrote:
>
>> Hi,
>>
>> I can'
Hi,
I can't seem to understand why all created executors always fail.
I have a Spark standalone cluster made up of 2 workers and 1 master.
My spark-env looks like this:
SPARK_MASTER_IP=192.168.2.11
SPARK_LOCAL_IP=192.168.2.11
SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=4"
SPARK_WORKER_C
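The file is cut off above; for comparison, a minimal conf/spark-env.sh sketch showing the worker variables that typically follow (the values are illustrative, not recovered from the thread):

  # conf/spark-env.sh (standalone mode), sourced by the daemons at startup.
  SPARK_MASTER_IP=192.168.2.11
  SPARK_LOCAL_IP=192.168.2.11
  SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=4"
  SPARK_WORKER_CORES=4      # cores each worker offers to executors
  SPARK_WORKER_MEMORY=5g    # memory each worker offers to executors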
From
http://spark.apache.org/docs/latest/spark-standalone.html#cluster-launch-scripts
:
If you do not have a password-less setup, you can set the environment
> variable SPARK_SSH_FOREGROUND and serially provide a password for each
> worker.
>
What does "serially provide a password for each worker
hive
>> -Phive-thriftserver -Pyarn
>>
>> spark-1.6.0-SNAPSHOT-bin-custom-spark.tgz was generated (with patch from
>> SPARK-11348)
>>
>> Can you try above command ?
>>
>> Thanks
>>
>> On Tue, Oct 27, 2015 at 7:03 AM, Kayode Odeyemi
The build output and directory structure in dist seem similar to those of the
.tgz file downloaded from the downloads page. Can the dist directory be used
as is?
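In my experience it can; a hedged sketch, treating the freshly built dist directory like an unpacked release:

  # Point SPARK_HOME at dist/ and run from it, exactly as with an
  # unpacked release tarball.
  export SPARK_HOME=/path/to/spark/dist
  "$SPARK_HOME/bin/spark-shell" --master "local[2]"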
On Tue, Oct 27, 2015 at 4:03 PM, Kayode Odeyemi wrote:
> Ted, I switched to this:
>
> ./make-distribution.sh --name spark-lat
On Tue, Oct 27, 2015 at 2:14 PM, Ted Yu wrote:
> Can you try the same command shown in the pull request ?
>
> Thanks
>
> On Oct 27, 2015, at 12:40 AM, Kayode Odeyemi wrote:
>
> Thank you.
>
> But I'm getting the same warnings and it's still preventing the
>
> On Mon, Oct 26, 2015 at 12:06 PM, Kayode Odeyemi
> wrote:
>
>> I used this command which is synonymous to what you have:
>>
>> ./make-distribution.sh --name spark-latest --tgz --mvn mvn
>> -Dhadoop.version=2.6.0 -Phadoop-2.6 -Phive -Phive-thriftserver -Ds
quet_partitioned/year=2015/month=10/day=25/part-r-2.gz.parquet
>
> ./dist/python/test_support/sql/parquet_partitioned/year=2015/month=10/day=26/part-r-5.gz.parquet
>
> On Mon, Oct 26, 2015 at 11:47 AM, Kayode Odeyemi
> wrote:
>
>> I see a lot of stuff like this after
ectory that make_distribution is in)
>
>
>
> On Mon, Oct 26, 2015 at 8:46 AM, Kayode Odeyemi wrote:
>
>> Hi,
>>
>> The ./make_distribution task completed. However, I can't seem to locate
>> the
>> .tar.gz file.
>>
>> Where does Spark save
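As the reply above notes, when --tgz is passed the tarball is written into the directory make-distribution.sh itself lives in (the source tree root), not into dist. A quick way to confirm, using the naming pattern the script produces:

  # The archive is named spark-<version>-bin-<name>.tgz and lands next
  # to make-distribution.sh.
  cd /path/to/spark
  ls -lh spark-*-bin-*.tgz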
Hi,
Is it possible to load binary files from an NFS share like this:
sc.binaryFiles("nfs://host/mountpath")
I understand that it takes a path, but I want to know whether it accepts a
protocol/URI scheme like this.
Appreciate your help.
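As far as I know Hadoop has no built-in nfs:// filesystem scheme, so the usual workaround (an assumption on my part, hence hedged) is to mount the export at the same path on every node and address it through file://:

  # Hypothetical: mount the NFS export identically on the driver and on
  # every worker node (export path and mount point are placeholders).
  sudo mount -t nfs host:/export /mnt/nfs
  # Then, from spark-shell: sc.binaryFiles("file:///mnt/nfs/mountpath")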
Hi,
The ./make_distribution task completed. However, I can't seem to locate the
.tar.gz file.
Where does Spark save this? Or should I just work with the dist directory?
On Fri, Oct 23, 2015 at 4:23 PM, Kayode Odeyemi wrote:
> I saw this when I tested manually (without ./make-dist
Maven. I have a strong
> guess that you haven't set MAVEN_OPTS to increase the memory Maven can
> use.
>
> On Fri, Oct 23, 2015 at 6:14 AM, Kayode Odeyemi wrote:
> > Hi,
> >
> > I can't seem to get a successful maven build. Please see command output
> >
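For reference, the MAVEN_OPTS suggestion above is what the Spark 1.x build documentation recommends; the values below are the documented ones from that era, so treat them as a starting point rather than gospel:

  # Give Maven more heap before building; without this the build
  # commonly dies with OutOfMemoryError.
  export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
  ./make-distribution.sh --name spark-latest --tgz --mvn mvn \
    -Dhadoop.version=2.7.0 -Phadoop-2.7 -Phive -Phive-thriftserver \
    -DskipTests clean package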
Hi,
I can't seem to get a successful maven build. Please see command output
below:
bash-3.2$ ./make-distribution.sh --name spark-latest --tgz --mvn mvn
-Dhadoop.version=2.7.0 -Phadoop-2.7 -Phive -Phive-thriftserver -DskipTests
clean package
+++ dirname ./make-distribution.sh
++ cd .
++ pwd
+ SPAR
When I use that I get a "Caused by: org.postgresql.util.PSQLException:
ERROR: column "none" does not exist"
On Thu, Oct 22, 2015 at 9:31 PM, Kayode Odeyemi wrote:
> Hi,
>
> I've trying to load a postgres table using the following expressio
Hi,
I've been trying to load a postgres table using the following expression:
val cachedIndex = cache.get("latest_legacy_group_index")
val mappingsDF = sqlContext.load("jdbc", Map(
"url" -> Config.dataSourceUrl(mode, Some("mappings")),
"dbtable" -> s"(select userid, yid, username from legacyusers