I’ve just published a clj template for an Electron application
built with deps.edn, Figwheel Main, Reagent, and test integration via
cljs-test-display:
https://github.com/paulbutcher/electron-app
On 12 August 2014 at 13:49:42, Linus Ericsson (oscarlinuserics...@gmail.com)
wrote:
The conclusion of this is, I think, that the easiest way to make this work is to
run the algorithm in both versions and watch the object allocation statistics
closely in VisualVM or similar.
Yeah, that's exactly
Is there any way to benchmark the degree of structural sharing achieved by a
Clojure algorithm? I'm evaluating two different implementations of an
algorithm, one which uses zippers and one which uses rrb-vector. It would be
great if there were some way to quantify the degree to which they both achieve it.
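One way to put a number on this (a rough sketch; it assumes the clj-memory-meter library is on the classpath and that the JVM permits self-attach): measure the retained size of the two results separately and together. If they share structure, the combined measurement is much smaller than the sum of the individual ones, and the difference approximates the shared bytes. Plain vectors are used here for illustration; the same measurement applies to the zipper- and rrb-vector-based results.

(require '[clj-memory-meter.core :as mm])

(let [original (vec (range 100000))
      updated  (assoc original 0 :changed)   ; shares almost all nodes with original
      separate (+ (mm/measure original :bytes true)
                  (mm/measure updated :bytes true))
      together (mm/measure [original updated] :bytes true)]
  {:separate-bytes separate
   :together-bytes together
   :shared-bytes   (- separate together)})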
Out of interest, which aspect(s) of the EPL are your lawyers objecting to?
--
paul.butcher->msgCount++
Silverstone, Brands Hatch, Donington Park...
Who says I have a one track mind?
http://www.paulbutcher.com/
LinkedIn: http://www.linkedin.com/in/paulbutcher
Skype: paulrabutcher
Author of Seven Concurrency Models in Seven Weeks
I recently hit exactly this question in a ClojureScript app I’m writing. It
just so happens that Javascript provides a .indexOf method which is, as near as
dammit, the same as the one provided by Java. So in this instance, portability
isn’t an issue.
But having said that, I would still prefer t
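For what it's worth, a portable alternative to interop .indexOf, assuming what's wanted is the index of the first matching element (this helper is illustrative, not from the thread): it runs unchanged in both Clojure and ClojureScript.

(defn index-of
  "Index of the first element of coll equal to x, or nil if absent."
  [coll x]
  (first (keep-indexed (fn [i v] (when (= v x) i)) coll)))

(index-of [:a :b :c] :b)   ;=> 1
(index-of "clojure" \j)    ;=> 3
(index-of [:a :b :c] :z)   ;=> nil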
ent), it's recommended to include %s in your custom
;; :target-path, which will splice in names of the currently active profiles.
:target-path "target/%s/"
--Curtis
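For context, a minimal project.clj sketch (hypothetical project) showing the %s splice Curtis quotes above: each distinct set of active profiles then gets its own subdirectory under target/, so dev and uberjar builds can't contaminate each other with stale class files.

(defproject example-app "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.6.0"]]
  ;; %s is replaced with the names of the currently active profiles,
  ;; so compiled output from different profile sets lands in
  ;; different directories under target/.
  :target-path "target/%s/"
  :profiles {:dev     {:source-paths ["dev"]}
             :uberjar {:aot :all}})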
On Monday, July 28, 2014 12:40:45 PM UTC-5, Paul Butcher wrote:
Ah! I thought that Leiningen put class files i
igen profiles
and resource paths so only include the one you want.
HTH,
/thomas
On Monday, July 28, 2014 3:45:51 PM UTC+2, Paul Butcher wrote:
Oops - I originally sent this to the ClojureScript group, which probably wasn’t
the best place. Apologies to those who subscribe to both lists for the spam:
I’m clearly misunderstanding something fundamental about how Leiningen profiles
work. I’d appreciate help fixing my understanding.
I’m try
I wasn’t aware of yesql - thanks for the pointer.
My concern with “write your queries in pure SQL” is increased vulnerability to
SQL injection. From a quick glance at yesql, it seems likely that it does
provide protection against SQL injection, but there’s nothing in the
documentation (that I can see) that says so explicitly.
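As I understand it, the protection comes from parameterisation: the named parameters in the SQL file are passed through as JDBC prepared-statement parameters rather than spliced into the query string. A sketch of that mechanism with plain clojure.java.jdbc (hypothetical table and db-spec; not yesql's own API):

(require '[clojure.java.jdbc :as jdbc])

(defn user-by-email [db email]
  ;; The ? placeholder becomes a PreparedStatement parameter, so the
  ;; value is treated as data and can never change the query's structure.
  (jdbc/query db ["SELECT * FROM users WHERE email = ?" email]))

;; Even a hostile-looking input stays inert:
(comment
  (user-by-email db-spec "x' OR '1'='1"))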
On 24 July 2014 at 11:32:53, Bruce Durling (b...@otfrom.com) wrote:
Paul,
You might also want to post this on London Clojurians Jobs
https://groups.google.com/forum/#!forum/london-clojurian-jobs
Ah! I’ve just posted it to London Clojurians - I wasn’t aware of the separate
jobs list (blush).
I’m looking to recruit a senior software engineer in London to work with
Clojure, ClojureScript, React/Reagent, Docker, and a bunch of other equally
interesting technologies.
We’re not necessarily looking for Clojure/ClojureScript experience, so if
you’ve been champing at the bit to get into t
t (but you should check) that rrb-vector will perform better and do more
structural sharing.
On Wed, Jun 4, 2014 at 8:29 AM, Paul Butcher wrote:
I am working with “sequence like” trees - by which I mean that they’re very
broad (typically the root node will have several thousand children) and shallow
(no more than 2 levels deep). I’m often dealing with a degenerate tree that’s
really just a sequence (all the nodes in the tree are children of the root).
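For the degenerate just-a-sequence case, the attraction of rrb-vector is that splitting and splicing are logarithmic and share structure with their inputs. A small sketch (sizes invented):

(require '[clojure.core.rrb-vector :as fv])

(let [children (fv/vec (range 10000))
      ;; splice a new child in at position 5000 without copying either half
      updated  (fv/catvec (fv/subvec children 0 5000)
                          (fv/vector :new-child)
                          (fv/subvec children 5000))]
  [(count updated) (nth updated 5000)])
;;=> [10001 :new-child]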
> Timothy
>
>
> On Tue, Dec 31, 2013 at 7:14 PM, Cedric Greevey wrote:
> It should work if it's inlined or a macro. It won't shrink foo's generated
> code size any if bar is a macro, but it will split up the source code into
> smaller pieces if that's
I recently discovered that parking calls only work if they're directly
contained within a go block. So this works fine:
(defn foo [ch]
  (go
    (<! ch)))   ;; go body completed here; the archived snippet is cut off at this point
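For contrast, a minimal sketch of the case that does not work: when the parking take sits inside an ordinary fn, the go macro's code walk never reaches it, and it fails at runtime with an assertion like "<! used not in (go block)".

(require '[clojure.core.async :refer [go <!]])

(defn bar [ch]
  ;; The <! is inside a nested fn, invisible to the go transformation.
  (go ((fn [] (<! ch)))))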
On 30 Dec 2013, at 16:34, Cedric Greevey wrote:
> To do it with case, you'd need to wrap case in a macro that expanded to
> `(case ~thingy ~(eval case1) ...) or something along those lines
Thanks, but I suspect that that might be another way of saying "use condp" :-)
Cheers,
--
paul.butcher-
On 30 Dec 2013, at 13:57, Nicola Mometto wrote:
> The test clauses of case expressions are not evaluated, so that case is
> trying to "match" the symbol 'FetcherEvent/EVENT_TYPE_FEED_POLLED, not
> the value of FetcherEvent/EVENT_TYPE_FEED_POLLED.
Ah - I was aware of the requirement for the const
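A small self-contained illustration of Nicola's point, using a JDK static final field (java.io.File/separator) in place of FetcherEvent's constants:

(let [s java.io.File/separator]
  ;; case does not evaluate its test constants, so the clause below is
  ;; the literal symbol java.io.File/separator, which never equals the
  ;; string held in s:
  [(case s
     java.io.File/separator :matched
     :no-match)
   ;; condp = evaluates its test expressions, so the field's value is used:
   (condp = s
     java.io.File/separator :matched
     :no-match)])
;;=> [:no-match :matched]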
I'm sure I'm missing something very simple here, but I'm damned if I can see it.
I'm trying to use the Java Rome RSS/Atom utility library from Clojure. The Rome
Fetcher library defines a FeedEvent class, which among other things defines a
number of static final string members:
https://github.co
On 8 Dec 2013, at 14:21, Alex Miller wrote:
> If you're starting with lein repl then I would expect errors to be printed at
> the console - that's the actual process running the code. It's also possible
> that the thread's uncaught exception handler doesn't print. I know that the
> expectation
On 8 Dec 2013, at 05:04, Alex Miller wrote:
> Errors are most likely occurring on a thread different than main. Assuming
> you're using nrepl, have you looked at the nrepl-error buffer?
Thanks. I wasn't aware of nrepl-error. Some quick googling has turned up a few
articles about how to see nre
Consider the following function that adds two numbers exceptionally
inefficiently by creating lots of threads:
user=> (require '[clojure.core.async :refer [thread <!!]])
user=> (defn thread-add [x y]
  #_=>   (thread
  #_=>     (if (zero? y)
  #_=>       x
  #_=>       (let [t (thread-add (inc x) (dec y))]
  #_=>         (<!! t)))))   ;; tail completed here; the archived snippet is cut off
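A quick check of the reconstructed function (blocking take on the channel that thread returns):

user=> (<!! (thread-add 3 4))
7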
On 30 Nov 2013, at 01:33, Mark Engelberg wrote:
> (take-while seq (iterate rest [1 2 3 4]))
D'oh! Told you I would kick myself. Thanks Mark.
I fear that I'm missing something obvious here, so I'm getting ready to kick
myself. I'm looking for an equivalent of Scala's "tails" method:
scala> List(1, 2, 3, 4).tails.toList
res0: List[List[Int]] = List(List(1, 2, 3, 4), List(2, 3, 4), List(3, 4),
List(4), List())
But I'm damned if I can f
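Mark's take-while/iterate answer, quoted earlier in this thread, wrapped up for comparison with the Scala output (note it omits the final empty tail that Scala includes):

(defn tails [coll]
  (take-while seq (iterate rest coll)))

(tails [1 2 3 4])
;;=> ([1 2 3 4] (2 3 4) (3 4) (4))

;; append the empty tail to match Scala exactly:
(concat (tails [1 2 3 4]) [()])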
I've been playing with core.async, and have come across a couple of things that
it seemed would probably be common use cases, but can't find anything in the
library that addresses them.
I'd be grateful for pointers if any of these do exist and I'm just missing
them, or suggestions for reasons w
On 23 Oct 2013, at 18:37, David Nolen wrote:
> The problem here is that you're not following your Scala solution closely
> enough. I suspect if you used defrecords to represent the pieces the way that
> you used a class in Scala you can avoid the number of collisions for larger
> problems.
>
On 24 Oct 2013, at 11:34, Phillip Lord wrote:
> What does Scala do? I mean, given that it doesn't have the same problem,
> perhaps it has a solution?
By default, Scala uses a tree-based set, not a hash-based set. So the fact that
it doesn't run into hashing issues isn't surprising (and is yet a
On 24 Oct 2013, at 01:46, Stuart Halloway wrote:
> Is the Scala for lazy or eager? If the latter, you are not comparing apples
> to apples (separate from the other differences David already pointed out.)
Oh, it's eager.
Bear in mind that my purpose wasn't really to directly compare Scala and Clojure.
On 23 Oct 2013, at 18:15, Andy Fingerhut wrote:
> If we had a 'universal comparator', i.e. a comparison function that provided
> a total order on any pair of values that anyone would ever want to put into a
> set or use as a map key, then instead of having linked lists for values that
> collid
On 23 Oct 2013, at 17:43, Andy Fingerhut wrote:
> Paul, your function solve returns a set of solutions, but there is nothing on
> the program that seems to rely upon being able to quickly test whether a
> particular solution is in such a set. Returning a sequence from solve is
> much faster,
On 23 Oct 2013, at 17:06, Andy Fingerhut wrote:
> I have instrumented a copy of Paul's Clojure program to print the hash code
> of all of the solutions in the set returned by solve, and there are *many*
> pairs of solutions that have identical hash values
Aha! The smoking gun :-)
Very many thanks.
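A quick way to put numbers on the collision problem Andy describes (solve, board, and pieces stand in for the names in Paul's program and are assumed here): group the solutions by hash and inspect the bucket sizes.

(defn hash-collision-stats [solutions]
  (let [buckets (vals (group-by hash solutions))]
    {:solutions       (count solutions)
     :distinct-hashes (count buckets)
     :largest-buckets (take 5 (sort > (map count buckets)))}))

;; e.g. (hash-collision-stats (solve board pieces))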
On 23 Oct 2013, at 15:32, David Powell wrote:
> When you say it is spending 99% of its time in PersistentHashSet.cons, is
> that the time spent in just that method, or the time spent in that method and
> the methods that it calls?
The latter.
> Given that (set ...) is one of the first things
On 23 Oct 2013, at 14:19, David Nolen wrote:
> The Scala code is using immutable sets.
Correct - the Scala is immutable throughout.
On 23 Oct 2013, at 14:18, David Nolen wrote:
> If set construction was 1000X worse why don't the smaller problem sizes
> exhibit exactly the same issue? If the Scala version requires 12G why is the
> Clojure version steady at 300M?
Both excellent questions.
> Aren't Scala for comprehensions o
On 23 Oct 2013, at 13:18, Timothy Baldridge wrote:
> Great! you have a profiler, use that. Find the hotspots, use YourKit to find
> where the .cons is being called from, find things to optimize, and go from
> there. This is exactly the same process I would use for any optimizations I
> attempted.
On 23 Oct 2013, at 12:44, Timothy Baldridge wrote:
> That being said, the #1 rule of benchmarking with lein is don't benchmark
> with lein. The JVM lein spins up has a different set of goals (namely startup
> time). The best way to benchmark is to run the test several thousand times,
> from a
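The usual way to follow this advice is criterium, run from a plain java process or an AOT'd jar rather than under lein run (solve, board, and pieces are again assumed names):

(require '[criterium.core :as crit])

;; quick-bench handles JIT warm-up and statistical sampling for you.
(crit/quick-bench (solve board pieces))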
On 23 Oct 2013, at 12:21, David Nolen wrote:
> Those numbers make the larger problem runtime suspect. How are you running
> the Clojure version? With Lein?
Yes - lein run.
On 22 Oct 2013, at 20:20, David Nolen wrote:
> On Tue, Oct 22, 2013 at 3:11 PM, Paul Butcher wrote:
> Yeah - I have tried giving it more RAM without any effect on the timing
> whatsoever. And I couldn't see the point of stopping people with less RAM
> than that from bei
On 22 Oct 2013, at 19:55, David Nolen wrote:
> I note that the Clojure version isn't being given 12gigs of RAM, is this
> something you're giving to the JVM after when you run a AOTed version of the
> Clojure code.
Yeah - I have tried giving it more RAM without any effect on the timing
whatsoever.
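For reference, a sketch of how the heap and -server settings under discussion would be supplied via Leiningen (hypothetical project; for an AOT'd build they'd be passed directly, e.g. java -server -Xmx12g -cp ...):

(defproject chess-search "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.5.1"]]
  :main chess-search.core
  ;; 12 GiB heap plus the server JIT, matching the Scala run.
  :jvm-opts ["-Xmx12g" "-server"])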
On 22 Oct 2013, at 18:45, Mark Engelberg wrote:
> I looked briefly at the code and can confirm that to my eye, the two
> implementations appear to be implementing the same algorithm.
Thanks - always good to have a second pair of eyes :-)
> My first guess would be that the performance differenc
I've been playing around with a generalised version of the N-Queens problem
that handles other chess pieces. I have a Scala implementation that uses an
eager depth-first search. Given enough RAM (around 12G - it's very memory
hungry!) it can solve a 6x9 board with 6 pieces in around 2.5 minutes
On 6 Oct 2013, at 04:35, zcaudate wrote:
> I'm a little bit miffed over this current craze of `types` and `correctness`
> of programs. It smells to me of the whole `object` craze of the last two
> decades.
This debate is as old as the hills (it certainly predates object-oriented
programming).
Alan,
Apologies for the delayed reply - I remember Iota well (there was some
cross-fertilisation between it and foldable-seq a few months back IIRC :-)
Having said that, I don't think that Iota will help in my particular situation
(although I'd be delighted to be proven wrong)? Given that the f
On 29 Sep 2013, at 22:58, Paul Mooser wrote:
> Paul, is there any easy way to get the (small) dataset you're working with,
> so we can run your actual code against the same data?
The dataset I'm using is a Wikipedia dump, which hardly counts as "small" :-)
Having said that, the first couple of
gner wrote:
> I would go a bit more further and suggest that you do not use sequences at
> all and work only with reducible/foldable collections. Make an input reader
> which returns a foldable collection and you will have the most performant
> solution. The thing about holding int
Thanks Alex - I've made both of these changes. The shutdown-agents did get rid
of the pause at the end of the pmap solution, and the -server argument made a
very slight across-the-board performance improvement. But neither of them
fundamentally changes the basic result (that the implementation th
On 28 Sep 2013, at 19:51, Jozef Wagner wrote:
> Anyway, I think the bottleneck in your code is at
> https://github.com/paulbutcher/parallel-word-count/blob/master/src/wordcount/core.clj#L9
> Instead of creating new persistent map for each word, you should use a
> transient here.
I would love
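A sketch of the transient version of that word-count step (names assumed, not the code from the linked core.clj): build the frequency map with assoc! and call persistent! once at the end.

(defn count-words [words]
  (persistent!
   (reduce (fn [counts w]
             (assoc! counts w (inc (get counts w 0))))
           (transient {})
           words)))

(count-words ["the" "quick" "the" "fox"])
;;=> {"the" 2, "quick" 1, "fox" 1}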
On 28 Sep 2013, at 17:42, Jozef Wagner wrote:
> I mean that you should forgot about lazy sequences and sequences in general,
> if you want to have a cutting edge performance with reducers. Example of
> reducible slurp, https://gist.github.com/wagjo/6743885 , does not hold into
> the head.
OK
On 28 Sep 2013, at 17:14, Jozef Wagner wrote:
> I would go a bit more further and suggest that you do not use sequences at
> all and work only with reducible/foldable collections. Make an input reader
> which returns a foldable collection and you will have the most performant
> solution. The t
elow. If you call (shutdown-agents) the 60-second wait to exit
> should go away.
>
> http://clojuredocs.org/clojure_core/clojure.core/future
>
> Andy
>
> Sent from my iPhone
>
> On Sep 28, 2013, at 1:41 AM, Paul Butcher wrote:
>
>> On 28 Sep 2013, at 0
On 28 Sep 2013, at 01:22, Rich Morin wrote:
>> On Sat, May 25, 2013 at 12:34 PM, Paul Butcher wrote:
>> I'm currently working on a book on concurrent/parallel development for The
>> Pragmatic Programmers. ...
>
> Ordered; PDF just arrived (:-).
Cool - very int
b/master/examples/exploring/reducing_apple_pie.clj.
>
> I would be curious to know how this approach performs with your data. With
> the generated data I used, the partition+fold and partition+pmap approaches
> both used most of my cores and had similar perf.
>
> Enjoying y
I'm currently working on a book on concurrent/parallel development for The
Pragmatic Programmers. One of the subjects I'm covering is parallel programming
in Clojure, but I've hit a roadblock with one of the examples. I'm hoping that
I can get some help to work through it here.
The example coun
On 15 Mar 2013, at 09:23, Marko Topolnik wrote:
> This is not about bureaucracy --- it's about API contract,
Quite.
> you must preserve laziness whenever applicable, but you may not take
> advantage of it by assuming any guarantees.
Erm - I certainly hope that this isn't true. Otherwise, I wo
On 15 Mar 2013, at 08:28, Marko Topolnik wrote:
> To the best of my knowledge the only guarantee you get is the existence of an
> upper bound on the size of the eagerly fetched chunk, so a potentially
> infinite lazy sequence will not result in an endless loop/OOME. The whole
> mechanism is ba
On 15 Mar 2013, at 07:04, Meikel Brandmeyer (kotarak) wrote:
> this highly depends on the sequence function at hand. Usually they are
> guaranteed to be as lazy as possible. But there are two aspects: a) sometimes
> you need to look ahead to actually perform the action (eg. take-while or
> drop-
Clojure's sequences are lazy - but is there anything that guarantees *how* lazy
they are?
To give a concrete example - given an infinite lazy sequence of promises:
(def promises (repeatedly promise))
If, in one thread I do:
(doseq [p (map deref promises)] (println p))
And in another thread, I
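A small experiment in the same spirit (illustrative only): attach a side effect to a mapped lazy sequence and see how far realisation runs ahead of consumption.

(def realised (atom 0))

(def xs (map (fn [i] (swap! realised inc) i) (range 1000)))

(first xs)   ; consume a single element...
@realised    ;=> 32 here: map over a chunked source realises a whole
             ;   chunk, and the exact figure is an implementation detail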
On 14 Mar 2013, at 13:13, David Powell wrote:
> As a temporary hack, perhaps you could implement a deftype ReduceToTransient
> wrapper that implements CollReduce by calling reduce on the parameter, and
> then calling persistent! on the return value of reduce. You'd also need to
> implement Co
On 14 Mar 2013, at 11:49, Meikel Brandmeyer (kotarak) wrote:
> that's not really possible at the moment. cf.
> https://groups.google.com/d/topic/clojure-dev/UbJlMO9XYjo/discussion and
> https://github.com/cgrand/clojure/commit/65e1acef03362a76f7043ebf3fe2fa277c581912
Dang. At least other peopl
I've been experimenting with reducers using a small example that counts the
words in Wikipedia pages by parsing the Wikipedia XML dump. The basic structure
of the code is:
(frequencies (flatten (map get-words (get-pages
where get-pages returns a lazy sequence of pages from the XML dump and
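For reference, a sketch of the same shape written against the reducers API (get-pages and get-words as described above; dump is an assumed argument). As discussed in this thread, r/fold only runs in parallel when the underlying collection is foldable, so over a lazy XML sequence this degrades to a serial reduce:

(require '[clojure.core.reducers :as r])

(defn word-frequencies [dump]
  (r/fold (r/monoid (partial merge-with +) (constantly {}))
          (fn [counts word] (assoc counts word (inc (get counts word 0))))
          (r/mapcat get-words (get-pages dump))))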
On 13 Mar 2013, at 14:13, Jim foo.bar wrote:
> there was a memory leak hence the 1.5.1 release the next day...
>
> Jim
>
> On 13/03/13 14:12, Paul Butcher wrote:
>> On 13 Mar 2013, at 14:05, "Jim foo.bar" wrote:
On 13 Mar 2013, at 14:05, "Jim foo.bar" wrote:
> how come your project depends on the problematic version 1.5.0?
1.5.0 is problematic?
>
> On Tuesday, March 12, 2013, Paul Butcher wrote:
> On 12 Mar 2013, at 18:26, Stuart Sierra wrote:
>
>> This might be an interesting contribution to clojure.core.reducers. I
>> haven't looked at your code in detail, so I can't say for sure, but being
On 12 Mar 2013, at 18:26, Stuart Sierra wrote:
> This might be an interesting contribution to clojure.core.reducers. I haven't
> looked at your code in detail, so I can't say for sure, but being able to do
> parallel fold over semi-lazy sequences would be very useful.
I'd be delighted if this
On 12 Mar 2013, at 15:55, Alan Busby wrote:
> If Paul wouldn't mind I'd like to add a similar "seq" function to Iota that
> would allow for index-less processing like he did in foldable-seq.
Paul would be delighted :-)
On 12 Mar 2013, at 13:49, Adam Clements wrote:
> How would feeding a line-seq into this compare to iota? And how would that
> compare to a version of iota tweaked to work in a slightly less eager fashion?
It'll not suffer from the problem of having to drag the whole file into memory,
but will
On 12 Mar 2013, at 13:52, Marko Topolnik wrote:
> That's what I meant, succeed by relying on the way f/j is used by the
> reducers public API, without copy-pasting the internals and using them
> directly. So I guess the answer is "no".
I don't believe that I could - the CollFold implementation
On 12 Mar 2013, at 13:45, Marko Topolnik wrote:
> Nice going :) Is it really impossible to somehow do this from the outside,
> through the public API?
I think that it *does* do it from the outside through the public API :-) I'm
just reifying the (public) CollFold protocol.
I do copy a bunch
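Roughly what that looks like (a simplified sketch, not Paul's foldable-seq code, and without the fork/join dispatch the real thing uses): reify the public CollFold protocol over a sequence of eager chunks so that r/fold has something to call.

(require '[clojure.core.reducers :as r])

(defn foldable-chunks [chunks]
  (reify clojure.core.reducers/CollFold
    (coll-fold [_ n combinef reducef]
      ;; Fold each chunk, then combine; the real implementation would
      ;; hand the chunks to fork/join tasks here.
      (reduce combinef (combinef)
              (map #(reduce reducef (combinef) %) chunks)))))

;; (r/fold + + (foldable-chunks (partition-all 512 (range 1000000))))
;;=> 499999500000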
On 11 Mar 2013, at 13:38, Paul Butcher wrote:
> On 11 Mar 2013, at 11:00, Marko
On 11 Mar 2013, at 11:00, Marko Topolnik wrote:
> The idea is to transform into a lazy sequence of eager chunks. That approach
> should work.
Exactly. Right - I guess I should put my money where my mouth is and see if I
can get it working...
On 11 Mar 2013, at 10:40, Jim foo.bar wrote:
> why can't you 'vec' the result of xml/parse and then use fold on that? Is it
> a massive seq?
In my case, it's the Wikipedia XML dump, so around 40GiB (so no, that wouldn't
work :-)
As things currently stand, fold can be used on a sequence-based reducible
collection, but won't be parallel.
I'm currently working on code that processes XML generated by
clojure.data.xml/parse, and would love to do so in parallel. I can't
immediately see any reason why it wouldn't be possible
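A quick illustration of that current behaviour (timings will vary by machine): the same fold is parallel over a vector but degrades to a single-threaded reduce over its seq.

(require '[clojure.core.reducers :as r])

(def v (vec (range 1000000)))

(time (r/fold + v))         ; vector: folded in parallel via fork/join
(time (r/fold + (seq v)))   ; seq: falls back to an ordinary serial reduce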
On 14 Dec 2012, at 13:52, Rich Hickey wrote:
> On Dec 14, 2012, at 12:55 AM, Paul Butcher wrote:
>> Rich - what is the "soundbite description" of Clojure's concurrency model
>> you're happiest with?
>
> Ah, soundbites, the foundation of modern programm
On 14 Dec 2012, at 00:30, kovas boguta wrote:
> My recommendation is either "Persistent Datastructures" or "Database as a
> Value"
Interesting. I'd be interested to hear others thoughts on this. In particular
Rich's
Rich - what is the "soundbite description" of Clojure's concurrency model
you're happiest with?
On 14 Dec 2012, at 00:22, Patrick Logan wrote:
> Another concurrency model I've used a great deal is the tuplespace model,
> specifically javaspaces. This is an often forgotten model that has a lot to
> offer with a high expressiveness to complexity ratio.
Ah! That brings back memories :-) I
literally no more complex than a trivial
> hackneyed book example. :-)
>
> Cheers,
> Stu
>
>
>
> On Sun, Dec 2, 2012 at 11:03 AM, Paul Butcher wrote:
> All,
>
> I have a request which I hope the members of this group are uniquely
> positioned to help
On 10 Dec 2012, at 13:37, Marko Topolnik wrote:
> But concurrency is all about performance and throughput. So where is the
> benefit of using correct, slow concurrent mutation? I guess in a
> write-seldom, read-often scenario.
I'm not at all sure that that's true. There are plenty of occasions
On 10 Dec 2012, at 12:56, Chas Emerick wrote:
> I'd be surprised if Paul doesn't hear from people directly
I wish that that were true, but no, I've not had anyone get in touch off-list.
Many thanks, Marko, for resurrecting the thread - I'm still definitely keen to
hear of first-hand experience
All,
I have a request which I hope the members of this group are uniquely positioned
to help with. I have recently started working on a new book for The Pragmatic
Programmers with the working title "Seven Concurrency Models in Seven Weeks"
(it follows on from their existing "Seven Languages" an