[ https://issues.apache.org/jira/browse/SPARK-15039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268675#comment-15268675 ]
Tsai Li Ming edited comment on SPARK-15039 at 5/3/16 1:16 PM:
--
[~zsxwing
[ https://issues.apache.org/jira/browse/SPARK-15039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15268675#comment-15268675 ]
Tsai Li Ming commented on SPARK-15039:
--
[~zsxwing] Nothing suspicious in the logs. The streaming
[ https://issues.apache.org/jira/browse/SPARK-15039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Tsai Li Ming updated SPARK-15039:
-
Description:
Hi,
Using the pyspark kinesis example, it does not receive any messages from
Tsai Li Ming created SPARK-15039:
Summary: Kinesis receiver does not work in YARN
Key: SPARK-15039
URL: https://issues.apache.org/jira/browse/SPARK-15039
Project: Spark
Issue Type: Bug
[ https://issues.apache.org/jira/browse/SPARK-3220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140623#comment-15140623 ]
Tsai Li Ming commented on SPARK-3220:
-
I built Derrick's kmeans against Spark 1.6.0 and ran
{code
[ https://issues.apache.org/jira/browse/SPARK-3220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140623#comment-15140623 ]
Tsai Li Ming edited comment on SPARK-3220 at 2/10/16 11:01 AM:
---
I built
[ https://issues.apache.org/jira/browse/SPARK-3220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15140434#comment-15140434 ]
Tsai Li Ming commented on SPARK-3220:
-
[~derrickburns], Is your private fork at
https://github.com
Hi,
I found out that the instructions for OpenBLAS have been changed by the author of netlib-java in https://github.com/apache/spark/pull/4448 since Spark 1.3.0.
In that PR, I asked whether there’s still a need to compile OpenBLAS with
USE_THREAD=0, and also about Intel MKL.
Is it still
Hi,
I downloaded the source from Downloads page and ran the make-distribution.sh
script.
# ./make-distribution.sh --tgz -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests clean package
The script has “-x” set in the beginning.
++ /tmp/a/spark-1.4.0/build/mvn help:evaluate
Hi,
I can’t seem to find any documentation on this feature in 1.4.0?
Regards,
Liming
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
Forgot to mention this is on standalone mode.
Is my configuration wrong?
Thanks,
Liming
On 15 Jun, 2015, at 11:26 pm, Tsai Li Ming mailingl...@ltsai.com wrote:
Hi,
I have this in my spark-defaults.conf (same for hdfs):
spark.eventLog.enabled true
spark.eventLog.dir
Hi,
I have this in my spark-defaults.conf (same for hdfs):
spark.eventLog.enabled true
spark.eventLog.dir file:/tmp/spark-events
spark.history.fs.logDirectory file:/tmp/spark-events
While the app is running, there is a “.inprogress” directory. However when the
job
I have been using a logstash alternative, fluentd, to ingest the data into HDFS.
I had to configure fluentd not to append the data, so that Spark Streaming will
be able to pick up the new logs.
-Liming
On 2 Feb, 2015, at 6:05 am, NORD SC jan.algermis...@nordsc.com wrote:
Hi,
I plan to
I’m getting the same issue on Spark 1.2.0. Despite having set
“spark.core.connection.ack.wait.timeout” in spark-defaults.conf and verified in
the job UI (port 4040) environment tab, I still get the “no heartbeat in 60
seconds” error.
spark.core.connection.ack.wait.timeout=3600
15/01/22
Hi,
I have the classic word count example:
file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _).collect()
From the Job UI, I can only see 2 stages: 0-collect and 1-map.
What happened to ShuffledRDD in reduceByKey? And both flatMap and map
operations are collapsed into a
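On the stage question: narrow transformations like flatMap and map are pipelined into a single stage, and reduceByKey introduces the shuffle boundary, so the ShuffledRDD is consumed inside the next stage rather than appearing as a stage of its own. The record-level logic of the example can be sketched in plain Python (illustration only; this mirrors the transformations, not Spark's distributed execution):

```python
from collections import Counter

def word_count(lines):
    # flatMap: split each line into individual words
    words = [w for line in lines for w in line.split(" ")]
    # map: pair each word with a count of 1
    pairs = [(w, 1) for w in words]
    # reduceByKey(_ + _): sum the counts per word
    counts = Counter()
    for w, n in pairs:
        counts[w] += n
    return dict(counts)

print(word_count(["a b a", "b c"]))  # → {'a': 2, 'b': 2, 'c': 1}
```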
Hi,
This is on version 1.1.0.
I did a simple test on the MEMORY_AND_DISK storage level.
var file = sc.textFile("file:///path/to/file.txt").persist(StorageLevel.MEMORY_AND_DISK)
file.count()
The file is 1.5GB and there is only 1 worker. I have requested 1GB of
worker memory per node:
Another observation I had was reading over the local filesystem with “file://”. It
was stated as PROCESS_LOCAL, which was confusing.
Regards,
Liming
On 13 Sep, 2014, at 3:12 am, Nicholas Chammas nicholas.cham...@gmail.com
wrote:
Andrew,
This email was pretty helpful. I feel like this stuff
Hi,
I’m running 2 slurmds on a single host (built with --enable-multiple-slurmd).
The total cpus are divided equally among the 2 nodes.
I’m trying to test the distribution modes=block/cyclic but the tasks are always
allocated on the first node unless I use --ntasks-per-node=1
$ srun -n2
Hi,
I am using the following partition name DEFAULT/default but slurmctld is not
able to start.
NodeName=compute State=UNKNOWN
PartitionName=default Nodes=compute Default=YES MaxTime=INFINITE State=UP
slurmctld: debug: Reading slurm.conf file: /opt/slurm-14.03.0/etc/slurm.conf
slurmctld:
Hi,
I’m testing slurm on my vm.
My compute node is defined in slurm.conf without any CPU/Socket/Core/Thread
information:
NodeName=compute State=UNKNOWN
# ./slurmd -C
ClusterName=(null) NodeName=compute CPUs=2 Boards=1 SocketsPerBoard=1
CoresPerSocket=2 ThreadsPerCore=1 RealMemory=1463
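For reference, the topology that `slurmd -C` detects can be copied into the node definition so slurmctld knows the real layout. A sketch using only the values printed above:

```
NodeName=compute CPUs=2 Boards=1 SocketsPerBoard=1 CoresPerSocket=2 ThreadsPerCore=1 RealMemory=1463 State=UNKNOWN
```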
On Mon, Mar 31, 2014 at 11:38 PM, Tsai Li Ming mailingl...@ltsai.com wrote:
Hi,
Is the code available for Hadoop to calculate the Logistic Regression
hyperplane?
I’m looking at the Examples:
http://spark.apache.org/examples.html,
where there is the 110s vs 0.9s
tell, spark.local.dir should *not*
be set there, so workers should get it from their spark-env.sh. It’s true
that if you set spark.local.dir in the driver it would pass that on to the
workers for that job.
Matei
On Mar 27, 2014, at 9:57 PM, Tsai Li Ming mailingl...@ltsai.com wrote:
Anyone can help?
How can I configure a different spark.local.dir for each executor?
On 23 Mar, 2014, at 12:11 am, Tsai Li Ming mailingl...@ltsai.com wrote:
Hi,
Each of my worker node has its own unique spark.local.dir.
However, when I run spark-shell, the shuffle writes are always
Hi,
My worker nodes have more memory than the host that I’m submitting my driver
program, but it seems that SPARK_MEM is also setting the Xmx of the spark shell?
$ SPARK_MEM=100g MASTER=spark://XXX:7077 bin/spark-shell
Java HotSpot(TM) 64-Bit Server VM warning: INFO:
On Sun, Mar 23, 2014 at 3:15 AM, Tsai Li Ming mailingl...@ltsai.com wrote:
Hi,
At the reduceByKey stage, it takes a few minutes before the tasks start
working.
I have set -Dspark.default.parallelism=127, which is the total cores minus 1.
CPU/Network/IO is idling across all nodes when this is happening
:53 PM, Tsai Li Ming mailingl...@ltsai.com wrote:
Hi,
This is on a 4 nodes cluster each with 32 cores/256GB Ram.
(0.9.0) is deployed in a stand alone mode.
Each worker is configured with 192GB. Spark executor memory is also 192GB.
This is on the first iteration. K=50. Here's
Hi,
I have R 3.0.3 and OpenMPI 1.6.5.
Here’s my test script:
library(snow)
nbNodes <- 4
cl <- makeCluster(nbNodes, type = "MPI")
clusterCall(cl, function() Sys.info()[c("nodename", "machine")])
mpi.quit()
And the mpirun command:
/opt/openmpi-1.6.5-intel/bin/mpirun -np 1 -H host1,host2,host3,host4 R --no-save
Hi,
Each of my worker node has its own unique spark.local.dir.
However, when I run spark-shell, the shuffle writes are always written to /tmp
despite being set when the worker node is started.
By specifying the spark.local.dir for the driver program, it seems to override
the executor? Is
Hi,
I'm confused about the -Dspark.local.dir and SPARK_WORKER_DIR(--work-dir).
What's the difference?
I have set -Dspark.local.dir for all my worker nodes but I'm still seeing
directories being created in /tmp when the job is running.
I have also tried setting -Dspark.local.dir when I run the
spark.local.dir can and should be set both on the executors and on the
driver (if the driver broadcasts variables, the files will be stored in this
directory).
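As a concrete sketch of the advice above (the path is a placeholder, and passing the property through SPARK_JAVA_OPTS is an assumption for a 0.9-era standalone deployment):

```
# conf/spark-env.sh on each worker node and on the driver machine
# /data/spark-local is a hypothetical scratch path; use a real local disk
SPARK_JAVA_OPTS="-Dspark.local.dir=/data/spark-local"
```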
Do you mean the worker nodes?
Don’t think they are jetty connectors and the directories are empty:
Hi,
In older posts on Google Groups, there was mention of checking the logs on
“preferred/non-preferred” for data locality.
But I can’t seem to find this on 0.9.0 anymore? Has this been changed to
“PROCESS_LOCAL” , like this:
14/02/06 13:51:45 INFO TaskSetManager: Starting task 9.0:50 as TID
Hi,
While running the Bagel’s Wikipedia Page Rank example
(org.apache.spark.examples.bagel.WikipediaPageRank), it is having this error at
the end:
org.apache.spark.SparkException: Job aborted: Task 3.0:4 failed 4 times (most
recent failure: Exception failure: java.lang.ClassNotFoundException:
On 4 Feb, 2014, at 10:08 am, Tsai Li Ming mailingl...@ltsai.com wrote:
Hi,
While running the Bagel’s Wikipedia Page Rank example
(org.apache.spark.examples.bagel.WikipediaPageRank), it is having this error
at the end:
org.apache.spark.SparkException: Job aborted: Task 3.0:4 failed 4
Bogdan Nicolescu wrote:
- Original Message
From: Tsai Li Ming lt...@osgdc.org
To: CentOS mailing list centos@centos.org
Sent: Monday, July 20, 2009 12:18:26 AM
Subject: Re: [CentOS] Cloud Computing
Hi,
Bogdan Nicolescu wrote:
- Original Message
From
Karanbir Singh wrote:
Tsai Li Ming wrote:
Also, we are going to start preparing ours to work with RHEL 5.4 when it
is out in the coming months. Can the community wait till our 5.4
compatible version is ready. This may coincide with the Centos 5.4 release.
The last time we had
Hi,
Bogdan Nicolescu wrote:
- Original Message
From: Ryan J M sync@gmail.com
To: CentOS mailing list centos@centos.org
Sent: Saturday, July 18, 2009 8:59:02 AM
Subject: Re: [CentOS] Cloud Computing
On Sat, Jul 18, 2009 at 4:36 AM, Matt wrote:
Is anyone creating a
Niki Kovacs wrote:
Niki Kovacs a écrit :
If I take a look at /var/lib/dhclient/dhclient-eth0.leases (on the
client), here's a summary of the lease:
lease {
interface eth0;
fixed-address 192.168.1.2;
option subnet-mask 255.255.255.0;
option routers 192.168.1.254;
option
Scott Silva wrote:
on 4-2-2009 2:00 PM Anne Wilson spake the following:
On Thursday 02 April 2009 21:40:59 R P Herrold wrote:
On Wed, 1 Apr 2009, Paul Heinlein wrote:
I don't know if it's a bug or a feature, but the
filesystem-2.4.0-2.el5.centos rpm won't upgrade cleanly if /home is an
NFS
R P Herrold wrote:
Thank you for the confirmation, I have not had a chance to
file in the centos tracker yet, and hope to get it filed
tomorrow's business hours. Similarly I have not checked
upstream's tracker yet. If needee, I'll file there as well,
but I cannot imagine it will be
Hi J,
Which kernel updates are you interested in? There are entries in
the database that are related to the updated kernels.
fyi, kernel-xen is not used right now.
-Liming
Jay wrote:
How do I select only some of the kernel updates found when running repopatch?
Or do I have to
Hi,
Is it possible for the list owner to get a notification when the confirmation
email does not get sent out to a subscriber, usually because of a bounce or a 550?
I have the following logs in my postfix but the owner is not getting any
bounces:
Nov 11 13:43:46 mail postfix/local[19345]: 313C22FED3:
Dear all,
I have the following directives in my conf file.
<IfModule mod_proxy.c>
ProxyRequests Off
RewriteEngine On
ProxyPass /Server/ http://localhost:8081
ProxyPassReverse /Server/ http://localhost:8081
RewriteRule ^/Server$
://localhost:8081/
ProxyPassReverse /Server/ http://localhost:8081/
</IfModule>
You shouldn't need the other stuff to make it work.
On 22/09/2004, at 1:07 PM, Tsai Li Ming wrote:
Dear all,
I have the following directives in my conf file.
<IfModule mod_proxy.c>
ProxyRequests Off
RewriteEngine
Hi
I have been getting a random input/output error when trying to cp an ISO
(100 MB) to a Samba mount point. I get the same random error when I try
to cp a txt file over too.
cp: writing `/public/cd.iso': Input/output error
my fstab:
//fserv/public /public smbfs fmask=666,username=,password=