https://bugs.kde.org/show_bug.cgi?id=490792
Bug ID: 490792
Summary: repeated plasma-browser-integration-host crash
Classification: Plasma
Product: plasma-browser-integration
Version: unspecified
Platform: Arch Linux
OS: Linux
I have a ~30-minute talk which covers my GHC proposal
(NoToplevelFieldSelectors), as well as parts of the renamer. I could give
it at any point, if there are still time slots available.
On Tue, 28 May 2019 at 23:37, Ben Gamari wrote:
> Andreas Herrmann writes:
>
> > Dear GHC devs,
> >
> I've
2016-04-24 13:38 GMT+02:00 Stefan Falk :
> sc.parallelize(cfile.toString()
> .split("\n"), 1)
Try `sc.textFile(pathToFile)` instead.
>java.io.IOException: Broken pipe
>at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
>at sun.nio.ch.SocketDispatcher.write(SocketD
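The suggested replacement, as a minimal sketch (assuming an existing `SparkContext` named `sc`; the path is illustrative):

```scala
// Instead of reading the whole file on the driver and splitting it:
//   sc.parallelize(contents.split("\n"), 1)
// let the executors read and partition it (path is illustrative):
val lines = sc.textFile("hdfs:///data/input.txt")

// Same logical result -- an RDD[String] of lines -- but the data never
// has to fit in driver memory, and nothing pushes one huge serialized
// array through the driver's socket (a plausible source of the broken pipe).
println(lines.count())
```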
2016-03-29 11:25 GMT+02:00 Robert Schmidtke :
> Is there a meaningful way for me to find out what exactly is going wrong
> here? Any help and hints are greatly appreciated!
Maybe a version mismatch between the jars on the cluster?
---
2016-03-24 11:09 GMT+01:00 Shishir Anshuman :
> I am using two slaves to run the ALS algorithm. I am saving the predictions
> in a text file using:
> saveAsTextFile(path)
>
> The predictions are getting stored on the slaves, but I want the predictions
> to be saved on the master.
Yes, that is e
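The reply above is cut off; one common way to land the output on the master is to collect to the driver and write locally. A sketch with illustrative names, only viable when the predictions fit in driver memory (whether this matches the truncated reply is unknown):

```scala
import java.nio.file.{Files, Paths}

// `predictions` stands in for the RDD that was passed to saveAsTextFile.
// collect() moves all elements to the driver (the process on the master),
// so a plain local write works from there.
val local: Array[String] = predictions.map(_.toString).collect()
Files.write(
  Paths.get("/home/user/predictions.txt"),
  local.mkString("\n").getBytes("UTF-8"))
```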
2016-03-24 9:54 GMT+01:00 Max Schmidt :
> we're using a ScheduledExecutor with the Java API (1.6.0) that
> continuously submits a Spark job to a standalone cluster.
I'd recommend Scala.
> After each job we close the JavaSparkContext and create a new one.
Why do that? You can happily reuse it. Pret
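A sketch of the reuse being suggested (master URL and job body are illustrative): create the context once and run every scheduled job against it, instead of tearing it down after each run.

```scala
import java.util.concurrent.{Executors, TimeUnit}
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setMaster("spark://master:7077").setAppName("scheduled"))
val scheduler = Executors.newSingleThreadScheduledExecutor()

// One long-lived context; each tick just runs another job on it.
scheduler.scheduleAtFixedRate(new Runnable {
  def run(): Unit = {
    val n = sc.parallelize(1 to 1000).map(_ * 2).count()
    println(s"job finished, $n records")
  }
}, 0, 5, TimeUnit.MINUTES)
```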
I'd try `brew install spark` or `apache-spark` and see where that gets
you. https://github.com/Homebrew/homebrew
2016-03-04 21:18 GMT+01:00 Aida :
> Hi all,
>
> I am a complete novice and was wondering whether anyone would be willing to
> provide me with a step by step guide on how to install Spar
2016-02-15 14:02 GMT+01:00 Sun, Rui :
> On computation, RRDD launches one R process for each partition, so there
> won't be thread-safe issue
>
> Could you give more details on your new environment?
Running on EC2, I start the executors via
/usr/bin/R CMD javareconf -e "/usr/lib/spark/sbin/
2016-02-15 4:35 GMT+01:00 Sun, Rui :
> Yes, JRI loads an R dynamic library into the executor JVM, which faces
> thread-safe issue when there are multiple task threads within the executor.
>
> I am thinking if the demand like yours (calling R code in RDD
> transformations) is much desired, we may
Hello
I'm currently running R code in an executor via JRI. Because R is
single-threaded, any call to R needs to be wrapped in a
`synchronized`. Now I can only use a bit more than one core per executor,
which is undesirable. Is there a way to tell Spark that this specific
application (or even specific U
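The wrapping described above, as a minimal sketch (the engine call is a stand-in comment, since real JRI needs an R installation loaded into the JVM):

```scala
// Funnel every call into the single-threaded R engine through one lock.
object RGate {
  private val lock = new Object
  def eval[A](call: => A): A = lock.synchronized(call)
}

// Inside a task, instead of calling the engine directly:
//   RGate.eval { engine.parseAndEval("sum(x)") }
// Only one task thread at a time enters, matching R's constraint --
// which is exactly why the executor's other cores sit idle.
```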
The occasional type error if the casting goes wrong for whatever reason.
2016-01-19 1:22 GMT+08:00 Michael Armbrust :
> What error?
>
> On Mon, Jan 18, 2016 at 9:01 AM, Simon Hafner wrote:
>>
>> And for deserializing,
>> `sqlContext.read.parquet("path/to/parquet
combining the classes
> in Spark 2.0 to remove this awkwardness.
>
> On Tue, Jan 12, 2016 at 11:20 PM, Simon Hafner
> wrote:
>>
>> What's the proper way to write DataSets to disk? Convert them to a
>> DataFrame and use the writers there?
>>
>> ---
[
https://issues.apache.org/jira/browse/SPARK-12677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15096594#comment-15096594
]
Simon Hafner commented on SPARK-12677:
--
What would be the gain? The applica
What's the proper way to write DataSets to disk? Convert them to a
DataFrame and use the writers there?
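Per the (truncated) reply, that conversion is indeed the Spark 1.6 way, and the classes were merged in 2.0. A sketch assuming a `SQLContext` named `sqlContext`:

```scala
import sqlContext.implicits._

case class Record(id: Long, name: String)

val ds = Seq(Record(1, "a"), Record(2, "b")).toDS()

// Spark 1.6: Dataset has no writer of its own, so go through DataFrame.
ds.toDF().write.parquet("/tmp/records.parquet")

// Reading back, with the cast mentioned earlier in the thread:
val restored = sqlContext.read.parquet("/tmp/records.parquet").as[Record]
```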
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
lved the problem.
>
> On Fri, Oct 16, 2015 at 9:54 AM, Simon Hafner wrote:
>>
>> Fresh clone of spark 1.5.1, java version "1.7.0_85"
>>
>> build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean
>> package
>>
>> [error] bad symbol
[
https://issues.apache.org/jira/browse/SPARK-11539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Simon Hafner updated SPARK-11539:
-
Comment: was deleted
(was: sbt-native-packager makes it slightly easier.)
> Debian packag
[
https://issues.apache.org/jira/browse/SPARK-11539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14992587#comment-14992587
]
Simon Hafner commented on SPARK-11539:
--
sbt-native-packager makes it slig
Simon Hafner created SPARK-11539:
Summary: Debian packaging
Key: SPARK-11539
URL: https://issues.apache.org/jira/browse/SPARK-11539
Project: Spark
Issue Type: New Feature
2015-11-03 23:20 GMT+01:00 Ionized :
> TypeUtils.getInterpretedOrdering currently only supports AtomicType and
> StructType. Is it possible to add support for UserDefinedType as well?
Yes, open a PR against Spark.
https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/
2015-11-03 20:26 GMT+01:00 xenocyon :
> I want to save an mllib model to disk, and am trying the model.save
> operation as described in
> http://spark.apache.org/docs/latest/mllib-collaborative-filtering.html#examples:
>
> model.save(sc, "myModelPath")
>
> But after running it, I am unable to find
2015-11-03 20:07 GMT+01:00 Sebastian Kuepers
:
> Hey,
>
> with collect(), RDD elements are sent as a list back to the driver.
>
> If have a 4 node cluster (based on Mesos) in a datacenter and I have my
> local dev machine.
>
> I work with a small 200MB dataset just for testing during development ri
Simon Hafner created SPARK-11268:
Summary: Non-daemon startup scripts
Key: SPARK-11268
URL: https://issues.apache.org/jira/browse/SPARK-11268
Project: Spark
Issue Type: Improvement
Fresh clone of spark 1.5.1, java version "1.7.0_85"
build/mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
[error] bad symbolic reference. A signature in WebUI.class refers to
term eclipse
[error] in package org which is not available.
[error] It may be completely missing
Hi everyone
is it possible to return multiple values with a UDAF defined in Spark
1.5.0? The documentation [1] mentions
abstract def dataType: DataType
The DataType of the returned value of this UserDefinedAggregateFunction.
so it's only possible to return a single value. Should I use ArrayType
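One way to get several values out, as a sketch against the Spark 1.5 UDAF API (field names are illustrative): declare a `StructType` as the `dataType`, so the single return value is a struct carrying multiple fields.

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

// Returns both the sum and the count of a double column as one struct.
class SumAndCount extends UserDefinedAggregateFunction {
  def inputSchema: StructType = StructType(Seq(StructField("x", DoubleType)))
  def bufferSchema: StructType = StructType(Seq(
    StructField("sum", DoubleType), StructField("count", LongType)))
  // The "multiple values" trick: the single return type is a struct.
  def dataType: DataType = StructType(Seq(
    StructField("sum", DoubleType), StructField("count", LongType)))
  def deterministic: Boolean = true
  def initialize(buffer: MutableAggregationBuffer): Unit = {
    buffer(0) = 0.0
    buffer(1) = 0L
  }
  def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    if (!input.isNullAt(0)) {
      buffer(0) = buffer.getDouble(0) + input.getDouble(0)
      buffer(1) = buffer.getLong(1) + 1L
    }
  }
  def merge(b1: MutableAggregationBuffer, b2: Row): Unit = {
    b1(0) = b1.getDouble(0) + b2.getDouble(0)
    b1(1) = b1.getLong(1) + b2.getLong(1)
  }
  def evaluate(buffer: Row): Any = Row(buffer.getDouble(0), buffer.getLong(1))
}
```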
Simon Hafner created SPARK-10053:
Summary: SparkR isn't exporting lapply
Key: SPARK-10053
URL: https://issues.apache.org/jira/browse/SPARK-10053
Project: Spark
Issue Type
Simon Hafner created SPARK-8821:
---
Summary: The ec2 script doesn't run on python 3 with an utf8 env
Key: SPARK-8821
URL: https://issues.apache.org/jira/browse/SPARK-8821
Project: Spark
2014-11-30 7:25 GMT-06:00 Eddie Epstein :
> On Sat, Nov 29, 2014 at 4:46 PM, Simon Hafner wrote:
>
>> I've thrown some numbers at it (doubling each) and it's running at
>> a comfortable 125 procs. However, at about 6.1k of 6.5k items, the procs
>> drop down to 30.
>
> DUCC would have to be restarted for the JD size parameters to take effect.
>
> One of the current DUCC development items is to significantly reduce the
> memory needed per work item, and raise the default limit for concurrent
> work items by two or three orders of magnitude.
&
lable for all preemptable
>> jobs.
>>
>> To see more JPs, increase the number and/or size of the input text files,
>> or decrease the number of pipeline threads per JP.
>>
>> Note that it can be counter productive to run "too many" pipeline
>> thr
2014-11-28 14:18 GMT-06:00 Eddie Epstein :
> To debug, please add the following option to the job submission:
> --all_in_one local
>
> This will run all the code in a single process on the machine doing the
> submit. Hopefully the log file and/or console will be more informative.
Yes, that helped.
2014-11-28 10:45 GMT-06:00 Eddie Epstein :
> DuccCasCC component has presumably created
> /home/ducc/analysis/txt.processed/5911.txt_0_processed.zip_temp and written
> to it?
I don't know, the _temp file doesn't exist anymore.
> Did you run this sample job in something other than cluster mode?
I g
When running DUCC in cluster mode, I get "Rename failed". The file
mentioned in the error message exists in the txt.processed/ directory.
The mount is via nfs (rw,sync,insecure).
org.apache.uima.resource.ResourceProcessException: Received Exception
In Message From Service on Queue:ducc.jd.queue.75
2014-11-27 11:44 GMT-06:00 Eddie Epstein :
> Those are the only two log files? Should be a ducc.log (probably with no
> more info than on the console), and either one or both of the job driver
> logfiles: jd.out.log and jobid-JD-jdnode-jdpid.log. If for some reason the
> job driver failed to start,
When launching the Raw Text example application, it doesn't load with
the following error:
[ducc@ip-10-0-0-164 analysis]$ MyAppDir=$PWD MyInputDir=$PWD/txt
MyOutputDir=$PWD/txt.processed ~/ducc_install/bin/ducc_submit -f
DuccRawTextSpec.job
Job 50 submitted
id:50 location:5991@ip-10-0-0-164
id:50
I have 20 nodes via EC2 and an application that reads the data via
wholeTextFiles. I've tried to copy the data into hadoop via
copyFromLocal, and I get
14/11/24 02:00:07 INFO hdfs.DFSClient: Exception in
createBlockOutputStream 172.31.2.209:50010 java.io.IOException: Bad
connect ack with firstBadL
How many shares does your agent have available?
2014-11-18 14:37 GMT-06:00 Dan Heinze :
> I've read the "DUCC stuck Waiting for Resources on Amazon..." thread.
> I have a similar problem. I did my first install of DUCC yesterday on a
> CentOS 6.5 VM with 9GB RAM. No problems with the install. ./
I fired the DuccRawTextSpec.job on a cluster consisting of three
machines, with 100 documents. The scheduler only runs the processes on
two machines instead of all three. Can I mess with a few config
variables to make it use all three?
id:22 state:Running total:100 done:0 error:0 retry:0 procs:1
i
2014-11-17 0:00 GMT-06:00 reshu.agarwal :
> I want to run two DUCC versions, i.e. 1.0.0 and 1.1.0, on the same machines
> with different users. Is this possible?
Yes, that should be possible. You'll have to make sure there are no
port conflicts, I'd guess the ActiveMQ port is hardcoded, the rest
might
So to run effectively, I would need more memory, because the job wants
two shares? ... Yes. With a larger node it works. What would be a
reasonable memory size for a DUCC node?
2014-11-14 9:38 GMT-06:00 Lou DeGenaro :
> Simon,
>
> Congratulations! You found a bug in DUCC's Web Server. It was inc
re for reasons
> the resources are not being allocated?
>
> Eddie
>
> On Wed, Nov 12, 2014 at 4:07 PM, Simon Hafner wrote:
>
>> 4 shares total, 2 in use.
>>
>> 2014-11-12 5:06 GMT-06:00 Lou DeGenaro :
>> > Try looking at your DUCC's web server
4 shares total, 2 in use.
2014-11-12 5:06 GMT-06:00 Lou DeGenaro :
> Try looking at your DUCC's web server. On the System -> Machines page
> do you see any shares not inuse?
>
> Lou.
>
> On Wed, Nov 12, 2014 at 5:51 AM, Simon Hafner wrote:
>> I'
I've set up DUCC according to
https://cwiki.apache.org/confluence/display/UIMA/DUCC
ducc_install/bin/ducc_submit -f ducc_install/examples/simple/1.job
the job is stuck at WaitingForResources.
12 Nov 2014 10:37:30,175 INFO Agent.LinuxNodeMetricsProcessor -
process N/A ... Agent Collectin
I've tried to set the log4j logger to WARN only, via a log4j properties file:
cat src/test/resources/log4j.properties
log4j.logger.org.apache.spark=WARN
or in sbt via
javaOptions += "-Dlog4j.logger.org.apache.spark=WARN"
But the logger still gives me INFO messages to stdout when I run my tests v
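For reference, a sketch of a fuller `src/test/resources/log4j.properties` (an assumption about the setup, not a confirmed fix): the file must end up on the test classpath for log4j 1.2 to pick it up, since `-Dlog4j.logger.…` system properties are not read by log4j itself. Note also that sbt `javaOptions` only reach the tests when `fork := true`.

```properties
# src/test/resources/log4j.properties -- sketch
log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %p %c: %m%n
log4j.logger.org.apache.spark=WARN
```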
I tried using shapeless HLists as data storage for data inside spark.
Unsurprisingly, it failed. The deserialization isn't well-defined because of
all the implicits used by shapeless. How could I make it work?
Sample Code:
/* SimpleApp.scala */
import org.apache.spark.SparkContext
import org.apac
Hello
how do I add root certificates to Kopete? I added the CA cert to both Kleopatra
and the system ca-certificates, but Kopete still complains about an invalid SSL
certificate from the server, due to an invalid root certificate.
Cheers,
Simon
Hey y'all
I have an ePass2003, and I'd like to use it for pam_p11 and ssh. The
pam_p11 key should be usable without a pin, or can I provide the pin
by using the password field? I'd like to know which paths are
possible. The other object stored is an ssh key secured by a pin.
My problem is now tha
2012/8/16 Jim Cromie :
> On Tue, Aug 14, 2012 at 11:43 PM, Simon Hafner wrote:
>> 2012/8/14 Jim Cromie :
>>> On Mon, Jul 23, 2012 at 2:54 AM, liquid wrote:
>>> root@voyage:~# dmesg | grep -i -E 'tsc|clocksource'
>>> [0.00] Fast TSC calibra
2012/8/15 Andreas Liljeqvist :
> I am not really familiar with elisp though, what are the equivalence of
> multimethods as found in clojure for example?
According to #emacs, there is eieio, if you associate multimethods
with dynamic dispatch.
> "cW" stopping in the case of structure(parenthesis
s not switch desktop. `wmctrl -s 1` gives the same message
ClientMessage event, serial 26, synthetic YES, window 0xab,
message_type 0x13f (_NET_CURRENT_DESKTOP), format 32
but it switches desktops. How can I debug that further?
Greetings
Simon Hafner
liquid writes:
> I am trying to install voyage linux 0.8.5 on an Alix 3d2 board via pxe
> boot. I can see that it loads the image properly, but the boot process
> hangs on "Switching to clocksource tsc". [...]
I've got the same problem with voyage 0.8.5, I'd appreciate help.
those not bound by StumpWM)?
Greetings
Simon Hafner
___
Stumpwm-devel mailing list
Stumpwm-devel@nongnu.org
https://lists.nongnu.org/mailman/listinfo/stumpwm-devel
Hi all
is there a nice way to get the top 100 translations?
I'm trying to compare two languages on character ngram level, to find
common edit paths. The idea is to train moses for that pair and then
extract the most common ngram pairs. Is this even possible or are they
normalized based on their o
Hello
is there an IRC channel for the moses decoder? I haven't found one on
freenode, and there isn't any in the wiki either.
As a small project for one of my lectures, I consider implementing an
English <-> Klingon translator. Has anyone ever done such a thing with
moses?
Cheers
Simon
On 02.12.2011, at 04:54, KARTHIK SHIVAKUMAR wrote:
> Hi
>
> Spec:
> OS: Windows 7
> JDK: 1.6.0_29
> Lucene: lucene-core-3.3.0
>
> Finally, after indexing successfully, why does this code not optimize (
> sample code )
>
>INDEX_WRITER.optimize(100);
>INDEX_WRITER.commit();
On Monday 10 January 2011 22.37:46 RichardOnRails wrote:
> Hi,
>
> I'm running WinXP-Pro/SP3 & Ruby 1.8.6
>
> K:/_Utilities/ruby186-26_rc2/ruby/lib/ruby/gems/1.8/gems/rspec-
> core-2.4.0/lib/rspec/core/configuration_options.rb:9:couldn't find
> HOME environment -- expanding `~' (ArgumentError)
>