I have a hard time understanding why there is a need to retry when doing
"swap!" on an atom. Why does not Clojure just lock the atom up-front and do
the update? I have this question because I don't see any benefit of the
current "just try and then re-try if needed" (STM?) approach for atom
(may
if you do it as a lock, then readers must block writers (think it
through). Clojure's reference types + immutable datastructures and the
views on perception that underlie them are strongly opposed to readers
interfering with writers.
http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey
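Not part of the thread, but a minimal Java sketch of the "readers never block writers" point (Clojure atoms sit on `java.util.concurrent.atomic.AtomicReference`): the reader takes the current immutable snapshot with a plain read, and a concurrent writer publishing a new value never invalidates or blocks it.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class SnapshotRead {
    public static void main(String[] args) {
        // The atom's state: an immutable list, replaced wholesale on update.
        AtomicReference<List<Integer>> state = new AtomicReference<>(List.of(1, 2));

        // Reader: a single volatile read; never blocks, never sees a half-built value.
        List<Integer> snapshot = state.get();

        // Writer: publishes a brand-new immutable list; the reader's snapshot is untouched.
        state.set(List.of(1, 2, 3));

        System.out.println(snapshot);    // the old snapshot: [1, 2]
        System.out.println(state.get()); // the current value: [1, 2, 3]
    }
}
```

With a lock-based design, the reader would instead have to hold the lock for the duration of its read, which is exactly the reader/writer interference the post objects to.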
On Tuesday, July 17, 2012 7:50:10 PM UTC-4, red...@gmail.com wrote:
>
> if you do it as a lock, then readers must block writers (think it
> through). Clojure's reference types + immutable datastructures and the
> views on perception that underlie them are strongly opposed to readers
> interfer
> Why is it so? Doesn't the reader just get a snapshot copy of the atom state
> and not care who writes to the original atom? If a lock is needed, it
> is only needed for a very short commit time (you cannot read while a writer is
> committing), but not during the whole "swap!" function. That stil
On Tue, Jul 17, 2012 at 4:58 PM, Warren Lynn wrote:
>
>
> On Tuesday, July 17, 2012 7:50:10 PM UTC-4, red...@gmail.com wrote:
>>
>> if you do it as a lock, then readers must block writers (think it
>> through). Clojure's reference types + immutable datastructures and the
>> views on perception tha
> (def a (atom {}))
>
> (defn add-kv [k v]
>   (swap! a assoc k v))
>
> If I call add-kv from multiple threads, how can I assume that the map
> won't be modified in the middle of the assoc? Sure, I could lock, but
> read this article first:
> http://en.wikipedia.org/wiki/Non-blocking_algorit
> Finish the thought: what happens when there is "contention", when a thread
> reads then writes before you acquire the lock to commit? You can try
> to make locking work, but you'll just end up with CAS based on a lock
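To make the retry concrete, here is a sketch (in Java, since Clojure's `swap!` is implemented on top of `AtomicReference`) of the compare-and-swap loop: read the current value, compute the new one from it, and commit only if nobody else got there first; otherwise retry with the fresh value.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.UnaryOperator;

public class SwapLoop {
    // Roughly what swap! does: spin until compareAndSet succeeds.
    static <T> T swap(AtomicReference<T> ref, UnaryOperator<T> f) {
        while (true) {
            T oldVal = ref.get();
            T newVal = f.apply(oldVal);
            // Commits only if ref still holds oldVal; otherwise another
            // thread won the race and we loop to retry against the new state.
            if (ref.compareAndSet(oldVal, newVal)) {
                return newVal;
            }
        }
    }

    public static void main(String[] args) {
        AtomicReference<Integer> counter = new AtomicReference<>(0);
        swap(counter, n -> n + 1);
        swap(counter, n -> n + 1);
        System.out.println(counter.get()); // 2
    }
}
```

No thread ever waits on another: a loser of the race immediately recomputes and tries again, which is the "lock-free, somebody always makes progress" property.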
I am not saying to throw away the "swap!" syntax. "swap!" syntax guarantee
For people who are interested, here is my own version of atom updating
functions:
;; A wrapped up atom that can be used in those lock-* functions
(deftype LockAtom [atom]
  clojure.lang.IDeref
  (deref [this]
    @(.atom this)))
;; (lock-atom (+ 4 5)) => #
(defmacro lock-atom
"Like ATOM, but
Please excuse my ignorance and my late comment, but can you make your
1hr operation shorter?
The general advice I've always been given has been "whenever you need
to use a resource that might cause contention do it quickly, in and
out in a blink".
I know that the argument then would be "why do I
On Wednesday, July 18, 2012 1:20:03 AM UTC+1, Warren Lynn wrote:
>
> The "making progress" seems an illusion here to me. Sure, you can make
> progress in one thread while another thread is taking one hour to finish
> its part. But the cost is the "long" thread finally found out "oops, I have
>
>> Being lockless seems useful for certain cases (like real-time system as
>> mentioned in the Wikipedia article). But I still could not grasp the idea
>> how it can increase *real* work throughput, as the problem itself mandates a
>> part of the work can only be done in serial.
Well first of all
Accesses to atoms are just wrappers around atomic compare and swap
instructions at the hardware level. Locking an object also uses an atomic
compare and swap, but piles other stuff on top of it, making it more
expensive. So atoms are useful in situations where there is likely not
going to be much
Warren Lynn writes:
> I have a hard time understanding why there is a need to retry when
> doing "swap!" on an atom. Why does not Clojure just lock the atom
> up-front and do the update?
This is just my two cents, but I think the/one big reason is that
Clojure atoms just *are* non-locking STM-ba
Hi,
Am Mittwoch, 18. Juli 2012 00:57:13 UTC+2 schrieb Warren Lynn:
>
> I have a hard time understanding why there is a need to retry when doing
> "swap!" on an atom. Why does not Clojure just lock the atom up-front and do
> the update? I have this question because I don't see any benefit of the
Sorta off-topic from the main discussion, but in reference to the error you
pointed out, one clever fix for this is to add a delay around the future:
(defn cache-image [icache url]
  (let [task (delay (future (download-image url)))]
    (doto (swap! icache update-in [url] #(or % task))
      (->
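The same run-at-most-once idea translates to Java via `ConcurrentHashMap.computeIfAbsent`, which, like the `delay` trick, guarantees the expensive work is started by exactly one caller per key. (`downloadImage` here is a hypothetical stand-in for the real network fetch.)

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ImageCache {
    static final AtomicInteger downloads = new AtomicInteger();

    // Hypothetical stand-in for the real download.
    static String downloadImage(String url) {
        downloads.incrementAndGet();
        return "bytes-of-" + url;
    }

    static final ConcurrentHashMap<String, CompletableFuture<String>> cache =
            new ConcurrentHashMap<>();

    // computeIfAbsent runs its function at most once per key, so the download
    // is kicked off by exactly one caller; everyone shares the same future.
    static CompletableFuture<String> cacheImage(String url) {
        return cache.computeIfAbsent(url,
                u -> CompletableFuture.supplyAsync(() -> downloadImage(u)));
    }

    public static void main(String[] args) {
        String a = cacheImage("http://example.com/cat.png").join();
        String b = cacheImage("http://example.com/cat.png").join();
        System.out.println(a.equals(b));     // true
        System.out.println(downloads.get()); // 1, even with repeated calls
    }
}
```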
Thanks for the discussion. This is not a reply to any particular post; here
is my thinking on the various points raised.
1. The length of the critical section is irrelevant in this discussion.
Even with locks, people agree that the critical section should be as short
as possible. So the limiting
>> But consider swap! is already doing some kind of internal locking at commit
>> time as I mentioned before
>> I assume even with STM style atom, some kind of lock is happening
>> internally, for the final commit, because when committing,
>> you still need to coordinate the access to some state
> It's not. Locks are created by using CAS, not the other way around.
> On a x86 machine the swap basically compiles down to a single assembly
> code instruction:
>
Eh, let me clarify that: locks do exist on x86, it's just that they
only lock a single assembly instruction. The complete list of
i
It's not. Locks are created by using CAS, not the other way around.
> On a x86 machine the swap basically compiles down to a single assembly
> code instruction:
>
> http://jsimlo.sk/docs/cpu/index.php/cmpxchg.html
>
> On a normal x86 machine, every lock in the system will boil down to
> usin
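That direction, building locks out of CAS rather than the other way around, can be sketched in a few lines of Java: a spinlock is nothing but a CAS on a boolean, retried until it succeeds.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    // Acquire: CAS false -> true; spin (retry) while someone else holds it.
    public void lock() {
        while (!held.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU that we are busy-waiting
        }
    }

    public void unlock() {
        held.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] counter = {0};
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();
                try { counter[0]++; } finally { lock.unlock(); }
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter[0]); // 200000: no increments lost
    }
}
```

The point of the sketch: the lock's primitive operation is itself a CAS, so CAS is the more fundamental building block the hardware has to provide.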
> Now I have a broader question: why is CAS hardware-supported, but locking is not
> (i.e., why is it not the other way around)? I used to work on some firmware,
> and we had a hardware mutex. Why is this not generally the case for
> general-purpose CPUs?
There's several issues at work here, I'll try t
With multiple CPUs, for costs and design complexity reasons, coordination uses
test and set instructions in shared memory.
When the bit is set, other contenders can either try later or enter a spin loop
(spin lock) retrying the operation until it succeeds.
Implementing complex hardware atomic i
I found this presentation pretty enlightening in understanding why non
blocking concurrent algorithms can be more efficient than locking ones:
http://www.infoq.com/presentations/LMAX-Disruptor-100K-TPS-at-Less-than-1ms-Latency
I'd watch it from the start to get the background, but section 6 from
> The cost of
> retrying a CAS operation a few times is relatively trivial.
Not to mention that most of the time, locks, thread sleeping, etc. all
involve a context switch into the kernel, whereas a CAS is done entirely
in userspace.
Timothy
> In addition, most systems only support loading memory in cache lines.
> IIRC, today most cache lines are 16KB. So when you read a single byte,
> the 16KB around that memory location is loaded as well.
The cache line size on x86 is 32 bytes on 32 bit systems, not 16KB. On
64 bit systems, it's 64
On Thu, Jul 26, 2012 at 1:28 AM, Stefan Ring wrote:
> > In addition, most systems only support loading memory in cache lines.
> > IIRC, today most cache lines are 16KB. So when you read a single byte,
> > the 16KB around that memory location is loaded as well.
>
> The cache line size on x86 is 32