On Fri, Jan 01, 2021 at 03:20:09PM -0500, Vadim Belman wrote:
> As it seems that Audio::PortMIDI lacks a non-blocking interface, I think a
> solution would be to read events in a dedicated thread and re-submit them
> into a Supplier. Something like:
>
> my Supplier $midi-events;
>
> start {
>
That's kind, but I'm not sure how I would go about doing that. If I
understand correctly, once it is a supply, I could add it to the
react block as another whenever event.
I have found examples of how to create a Supply using a loop and
a Promise that is kept after a specific amount of time
(https://stackoverflow.com/questions/57486372/concurrency-react-ing-to-more
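A fuller sketch of the pattern Vadim describes might look like the following. This is illustrative only: the MIDI stream is faked with a timer loop, since the actual blocking read call from Audio::PortMIDI is not shown in the thread.

```raku
# Sketch only: a dedicated thread emits events into a Supplier, and a
# react block consumes them as just another whenever source. The fake
# loop stands in for a blocking $stream.read(1) from Audio::PortMIDI
# (assumed API, not verified here).
my Supplier::Preserving $midi-events .= new;   # buffers events until first tap

start {
    for ^5 {
        sleep 0.1;
        $midi-events.emit("note-on $_");       # real code: emit $stream.read(1)
    }
    $midi-events.done;
}

react {
    whenever $midi-events.Supply -> $event {
        say "MIDI event: $event";
    }
}
```

Because the Supplier is tapped inside react, other whenever clauses (timers, signals, sockets) compose alongside the MIDI events in the same block.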
https://jnthn.net/papers/2018-conc-par-8-ways.pdf
Here you go! :)
Taken from this tweet:
https://twitter.com/jnthnwrthngtn/status/983817396401180672
HTH
- Timo
There was a talk by Jonathan Worthington at the German Perl Workshop 2018,
http://act.yapc.eu/gpw2018/talk/7326
"8 ways to do concurrency and parallelism in Perl 6"
I'd be very interested in seeing the slides and/or video of this talk.
Anyone know if it is available or will be?
Attempting to parallelize an empty loop makes the execution 1 second slower:
[...]
But running actual real-life code makes it almost 4 times faster, as
would be expected on a 4-core box:
(Disclaimer: I have no idea of the internals, but I know a bit about
concurrency.)
This might be four cores competing to get update access to the loop counter.
Core-to-core synchronization of a memory cell with high-frequency
updates is an extremely expensive operation, with dozens or hundreds of
wait states to request exclusive cache-line access [...]
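The contention being described can be made visible in Raku itself. This is a hypothetical micro-benchmark, not code from the thread: four workers bumping one shared counter (constant cache-line ping-pong between cores) versus each worker keeping a private counter.

```raku
# Shared counter: every increment contends for the same cache line.
my atomicint $shared = 0;
my $t0 = now;
await (^4).map: { start { atomic-fetch-inc($shared) for ^500_000 } };
say "shared counter:   {now - $t0}s";

# Private counters: each worker counts locally, results summed at the end.
$t0 = now;
my @totals = await (^4).map: { start { my int $n = 0; $n++ for ^500_000; $n } };
say "private counters: {now - $t0}s (total {[+] @totals})";
```

The shared version is typically much slower, which matches the explanation above: the cost is not the increment itself but moving exclusive ownership of the memory cell between cores.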
this: it's the fibonacci function.
The first section I call it 4 times, without using any concurrency.
The second section I call it 4 times in a single worker.
The third section I start 4 workers, each calling it once.
I would expect the first and second sections to have about the same run
time
# Serial:   7.69045687
# Parallel: 2.087147
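The benchmark itself is not shown in full here; a minimal reconstruction of the shape being described (naive fib, four serial calls versus four workers) would be:

```raku
# Naive recursive fib: pure CPU work, no I/O, ideal for this comparison.
sub fib(Int $n) { $n < 2 ?? $n !! fib($n - 1) + fib($n - 2) }

# Section one: four calls, no concurrency.
my $t0 = now;
fib(25) for ^4;
say "# Serial:   {now - $t0}";

# Section three: four workers, one call each.
$t0 = now;
await (^4).map: { start fib(25) };
say "# Parallel: {now - $t0}";
```

On a 4-core box the parallel section should run in roughly a quarter of the serial time, as the quoted numbers suggest; exact timings will of course vary.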
On Sat Oct 01 06:15:18 2016, steve.pi...@gmail.com wrote:
> This could be a stupid user problem, in which case I apologise for wasting
> your time.
>
Simple concurrency demonstrations seem to work; the following completes in
just over a second:
perl6 -e 'await Promise.allof(start {sleep 1}, start {sleep 1}, start
{sleep 1});say now - INIT now'
However if the started tasks are doing any CPU-intensive work, they seem
to take much longer.
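To reproduce the comparison, one can keep the sleep-based demo and swap the sleeps for busy work. The sleep version should stay near one second regardless; how the CPU-bound version behaves depends on how the implementation schedules the workers, which is exactly what the report is about.

```raku
# Three sleeping tasks: these overlap trivially, ~1 second total.
my $t0 = now;
await (^3).map: { start { sleep 1 } };
say "sleeping:  {now - $t0}s";

# Three CPU-bound tasks: same shape, but real work instead of sleep.
$t0 = now;
await (^3).map: { start { my $s = 0; $s += $_ for ^2_000_000; $s } };
say "busy work: {now - $t0}s";
```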
-passing concurrency. This
balance will be implemented by modules.
comments are appreciated.
daniel
attachment: Perl6ThreadingModel.svg
Date: Tue, 18 May 2010 00:38:43 +0100
On Mon, 17 May 2010 23:25:07 +0100, Dave Whipp - dave_wh...@yahoo.com wrote:
--- Forwarded message ---
From: nigelsande...@btconnect.com
To: Dave Whipp - d...@whipp.name
+nntp+browseruk+e66dbbe0cf.dave#whipp.n...@spamgourmet.com
Cc:
Subject: Re: Parallelism and Concurrency was Re: Ideas for an
Object-Belongs-to-Thread threading model
Em Dom, 2010-05-16 às 19:34 +0100, nigelsande...@btconnect.com escreveu:
3) The tough-y: Closed-over variables.
These are tough because it exposes lexicals to sharing, but they are so
natural to use, it is hard to suggest banning their use in concurrent
routines.
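The sharing problem with closed-over variables can be seen in a short sketch (modern Raku syntax; illustrative only, not from the thread): both workers close over the same $count, so it is implicitly shared across OS threads, and only the Lock keeps increments from being lost.

```raku
# $count is a plain lexical, yet both start blocks see the same cell:
# closing over it silently makes it shared state between threads.
my $count = 0;
my $lock  = Lock.new;

await (^2).map: {
    start {
        for ^100_000 {
            $lock.protect: { $count++ }   # without the Lock, updates race
        }
    }
}
say $count;  # 200000
```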
Em Dom, 2010-05-16 às 19:34 +0100, nigelsande...@btconnect.com escreveu:
Interoperability with Perl 5 and its reference counting should not be a
high priority in the decision-making process for defining the Perl 6
concurrency model.
If we drop that requirement then we can simply go to the
we-can-spawn-as-many-os-threads-as-we-want model.
On Tue, 18 May 2010 11:39:04 +0100, Daniel Ruoso dan...@ruoso.com wrote:
This is the point I was trying to address, actually. Having *only*
explicitly shared variables makes it very cumbersome to write threaded
code, specially because explicitly shared variables have a lot of
restrictions on
I do not see that as a requirement. But, I am painfully aware that I am
playing catchup with all the various versions, flavours and colors of
Perl6 interpreter. And more
Em Ter, 2010-05-18 às 15:15 +0100, nigelsande...@btconnect.com escreveu:
1) the interpreter doesn't need to detect the closed over variables, so
even string eval'ed access to such variables would work (which is, imho,
a good thing)
You'd have to explain further for me to understand why it
On Tue, May 18, 2010 at 3:19 AM, nigelsande...@btconnect.com wrote:
The guts of the discussion has been kernel threading (and mutable shared
state) is necessary. The perception being that by using user-threading (on
a single core at a time), you avoid the need for and complexities of
locking
Em Ter, 2010-05-18 às 12:58 -0700, Alex Elsayed escreveu:
You are imposing a false dichotomy here. Neither 'green' threads nor kernel
threads preclude each other. In fact, it can be convincingly argued that they
work _best_ when combined. Please look at the GSoC proposal for hybrid
threading
nigelsande...@btconnect.com wrote:
There are very few algorithms that actually benefit from using even low
hundreds of threads, let alone thousands. The ability of Erlang (and Go
and Io and many others) to spawn 100,000 threads makes an impressive demo
for the uninitiated, but finding
concurrency model.
Even if there was only one parallel algorithm, if that algorithm was
needed for the majority of parallel workloads then it would be
significant.
In fact, though utilizing thousands of threads may be hard, once you get
to millions of threads then things become interesting
The important thing is not the number of algorithms: it's the number of
programs and workloads.
From that statement, you do not appear to understand the subject matter of
this thread: Perl 6 concurrency model.
That seems a tad more confrontational than was required. It's also arguably
incorrect
nigelsande...@btconnect.com wrote:
From that statement, you do not appear to understand the subject matter
of this thread: Perl 6 concurrency model.
If I misunderstood then I apologize: I had thought that the subject was
the underlying abstractions of parallelism and concurrency that perl6
[...] multi-CPU commodity processors.
Also, basing the Perl 6 concurrency model upon what is convenient for the
implementation of SMOP:
http://www.perlfoundation.org/perl6/index.cgi?smop
as clever as it is, and as important as that has been to the evolution of
the Perl 6 development effort
On Fri, 14 May 2010 17:35:20 +0100, B. Estrade - estr...@gmail.com
+nntp+browseruk+c4c81fb0fa.estrabd#gmail@spamgourmet.com wrote:
The future is indeed multicore - or, rather, *many-core. What this
means is that however the hardware jockeys have to strap them together
on a single node,
After reading this thread and S17, I have lots of questions and some
remarks.
Parallelism and Concurrency could be considered to be two different things.
The hyperoperators and junctions imply, but do not require, parallelism.
It is left for the implementors to resolve whether a single
On Fri, May 14, 2010 at 03:48:10PM +0400, Richard Hainsworth wrote:
: After reading this thread and S17, I have lots of questions and some
: remarks.
:
: Parallelism and Concurrency could be considered to be two different things.
:
: The hyperoperators and junctions imply, but do not require, parallelism.
I think the important thing to realize here is that the Perl 6 language
keeps its definitions mostly abstract. Junctions, Hyper Operators, even
the async block, specify almost no requirements on the concurrency
model.
The discussion is more about *one* specific threading model designed
to support all the Perl 6 features in a scalable way.
daniel
Em Sex, 2010-05-14 às 18:13 +0100, nigelsande...@btconnect.com escreveu:
The point I(we)'ve been trying to make is that once you have a reentrant
interpreter, and the ability to spawn one in an OS thread,
all the other bits can be built on top. But unless you have that ability,
whilst the
On Fri, May 14, 2010 at 03:48:10PM +0400, Richard Hainsworth wrote:
After reading this thread and S17, I have lots of questions and some
remarks.
Parallelism and Concurrency could be considered to be two different things.
The hyperoperators and junctions imply, but do not require
On Fri, May 14, 2010 at 09:50:21AM -0700, Larry Wall wrote:
On Fri, May 14, 2010 at 03:48:10PM +0400, Richard Hainsworth wrote:
...snip
But as you say, this is not a simple problem to solve; our response
should not be to punt this to future generations, but to solve it
as best as we can,
These seem interesting and relevant here:
http://www.ddj.com/go-parallel/blog/archives/2009/04/java_7_will_evo.html
http://developers.sun.com/learning/javaoneonline/2008/pdf/TS-5515.pdf
http://www.infoq.com/news/2007/07/concurrency-java-se-7
Tim.
roll things back like they were before the
connection was opened. Is this possible with the concurrency model you
specified?
The things you're describing are from the time when we were assuming STM
(Software Transactional Memory) would be used by every implementation of
Perl 6. Things have changed a bit in the last months, as STM
Hi. I have a question for the S17-concurrency people.
Say I wanted to write a POP3 server. I want to receive a username and
password from the client. I want things to be interruptable during this, but
it's also impossible to sensibly roll things back like they were before the
connection was opened.
I've just committed a change (r29239) to allow Rakudo to build and run the
basic tests on the pdd25cx branch. Most of it works.
Test Summary Report
---
t/01-sanity/05-sub (Wstat: 256 Tests: 2 Failed: 0)
Non-zero exit status: 1
Parse errors: Bad plan. You planned
On Thu, Jul 10, 2008 at 1:25 PM, chromatic [EMAIL PROTECTED] wrote:
I've just committed a change (r29239) to allow Rakudo to build and run the
basic tests on the pdd25cx branch. Most of it works.
New Revision: 29239
Modified:
branches/gsoc_pdd09/languages/perl6/src/classes/Scalar.pir
On Thursday 10 July 2008 10:29:16 Will Coleda wrote:
On Thu, Jul 10, 2008 at 1:25 PM, chromatic [EMAIL PROTECTED] wrote:
I've just committed a change (r29239) to allow Rakudo to build and run
the basic tests on the pdd25cx branch. Most of it works.
New Revision: 29239
Modified:
Allison Randal wrote:
Presumably the handled opcode will remove the exception Task from the
scheduler and resume execution at the appropriate point. Presumably
also the declining to handle an exception (the replacement for
rethrow) will cause the scheduler to move to the next exception handler in
its list?
as a checklist when I'd finished the task.
Essentially, we're ripping out the entire underpinning of the old
exception system, and replacing it with the concurrency scheduler, while
still preserving the same interface. The deprecation of rethrow will
have to come towards the end of the transition
also the
declining to handle an exception (the replacement for rethrow) will cause the
scheduler to move to the next exception handler in its list? If so, how do
we model this control flow?
* Change 'real_exception' to use concurrency scheduler.
Does this mean to change the opcode and its
Bob Rogers wrote:
From: Klaas-Jan Stol [EMAIL PROTECTED]
about the removal of internal_exception: the specified ticket (in the
list on the wiki) does not have a conclusion: no final decision seems
to be have made on that issue. What's more, a quick check on a few
calls to
On Mon, Apr 28, 2008 at 1:53 PM, Allison Randal [EMAIL PROTECTED] wrote:
Bob Rogers wrote:
From: Klaas-Jan Stol [EMAIL PROTECTED]
about the removal of internal_exception: the specified ticket (in the
list on the wiki) does not have a conclusion: no final decision seems
to
I'm working on the concurrency branch, and specifically the Exception PMC.
Here's a patch that does what I think it needs I'll run more tests and
see if Parrot still works.
-- c
Index: src/pmc/exception.pmc
===================================================================
--- src/pmc/exception.pmc
By popular demand, I've put my ongoing list of tasks for the concurrency
implementation branch on the wiki. Mark a task you start to work on with
your initials, so we know you're working on it:
http://www.perlfoundation.org/parrot/index.cgi?concurrency_tasks
Allison
On Sun, Apr 27, 2008 at 2:00 PM, Allison Randal [EMAIL PROTECTED] wrote:
By popular demand, I've put my ongoing list of tasks for the concurrency
implementation branch on the wiki. Mark a task you start to work on with
your initials, so we know you're working on it:
http://www.perlfoundation.org/parrot/index.cgi?concurrency_tasks
From: Klaas-Jan Stol [EMAIL PROTECTED]
Date: Sun, 27 Apr 2008 19:37:18 +0100
On Sun, Apr 27, 2008 at 2:00 PM, Allison Randal [EMAIL PROTECTED] wrote:
By popular demand, I've put my ongoing list of tasks for the concurrency
implementation branch on the wiki. Mark a task you start
Andy Armstrong wrote:
Again, let me know if you need more.
Could you give it another go with the latest revision? (You'll need to
uncomment the 'Parrot_cx...' lines in src/inter_create.c again, as I
commented them out in trunk while working on the hangs.)
I've eliminated the hangs on my
On 12 Dec 2007, at 19:53, Allison Randal wrote:
So, I'm interested to see what your 10.5 box does.
I uncommented the Parrot_cx calls and ran the tests twice - once with
CX_DEBUG = 1, once with CX_DEBUG = 0. It got through them all both
times :)
--
Andy Armstrong, Hexten
Andy Armstrong wrote:
I uncommented the Parrot_cx calls and ran the tests twice - once with
CX_DEBUG = 1, once with CX_DEBUG = 0. It got through them all both times :)
Excellent!
Thanks,
Allison
Andy Armstrong wrote:
Again, let me know if you need more.
I pushed it far enough that I was able to repeat the deadlock hang on OS
10.4.11, that's good. The interesting thing was the order of operations.
The usual order is:
call to Parrot_cx_init_scheduler
initializing scheduler runloop
On 07/12/2007, Allison Randal [EMAIL PROTECTED] wrote:
Andy Dougherty wrote:
Whether this is a defect in the vtables_4 test sourcefile for failing to
initialize the vtables, or whether pmc_new ought to be more defensive, I
can't say.
Looks like a bug in the test, as there are other
Andy Armstrong wrote:
And Instruments is telling me this:
http://hexten.net/junk/parrot1.png
Nice level of detail in this tool. Almost worth the cost of 10.5 all on
its own.
It seems to hang much more readily with CX_DEBUG enabled - including
once during make rather than make test.
On 9 Dec 2007, at 21:02, Allison Randal wrote:
Andy Armstrong wrote:
And Instruments is telling me this:
http://hexten.net/junk/parrot1.png
Nice level of detail in this tool. Almost worth the cost of 10.5 all
on its own.
It seems rather lovely. Bear in mind that I didn't even launch it
On 7 Dec 2007, at 16:32, chromatic wrote:
On Friday 07 December 2007 05:23:39 Allison Randal wrote:
I'm about to turn on the concurrency scheduler runloop in Parrot
trunk.
Before I do, I'd like test results on as many platforms as possible
(especially Windows, since it doesn't use POSIX threads).
Andy Armstrong wrote:
But on Mac OS 10.5 I get random hangs. First at
t/op/01-parse_ops..287/335
for about ten minutes until I interrupted it and then
t/op/string_cs.1/50
for another ten or so minutes.
Are you sure you've got the very latest version of all files on this
box ('make realclean', etc)?
On 8 Dec 2007, at 13:42, Allison Randal wrote:
Are you sure you've got the very latest version of all files on this
box ('make realclean', etc)?
Yup. I've just done make realclean make make test again and this
time it hung at
t/pmc/parrotlibrary1/1
(time
chromatic wrote:
On Ubuntu 7.10:
t/src/vtables..1/4
# Failed test (t/src/vtables.t at line 142)
# Exited with error code: [SIGNAL 139]
[...]
This didn't go away on a realclean. I even moved ending the runloop to before
the full DOD when destroying an interpreter, but to no effect.
On 8 Dec 2007, at 14:18, Andy Armstrong wrote:
Please let me know if there's anything you'd like me to investigate.
I'm afraid I don't know my way around parrot, er, at all - but I'm
willing to learn.
Ah, Mac OS 10.5 has dtrace - which I hadn't tried before but turns out
to be rather
interpreter, the
I/O thread, the events thread, and the new concurrency scheduler
thread). The particular hanging test could have its own threads added.
What does the Mac OS Activity Monitor utility report for that process?
(Or, in general. Watch it through a test run and see how many threads it
reports
On 8 Dec 2007, at 20:22, Allison Randal wrote:
Could you edit src/scheduler.c and change the value of CX_DEBUG to
1, recompile, and run the test suite? Most of the tests will fail
because of the additional output on stderr, but if you can catch a
hang, we'll have a little more detail on
I'm about to turn on the concurrency scheduler runloop in Parrot trunk.
Before I do, I'd like test results on as many platforms as possible
(especially Windows, since it doesn't use POSIX threads).
To test it, edit src/inter_create.c and uncomment the two lines that
start with 'Parrot_cx'.
On Dec 7, 2007 5:23 AM, Allison Randal [EMAIL PROTECTED] wrote:
I'm about to turn on the concurrency scheduler runloop in Parrot trunk.
Before I do, I'd like test results on as many platforms as possible
(especially Windows, since it doesn't use POSIX threads).
To test it, edit src
jerry gay wrote:
looks good to me. commit away!
nice work.
I've got a clean report on our core platform targets, so committed in
r23574. As usual, please report any issues.
Thanks!
Allison
On Friday 07 December 2007 05:23:39 Allison Randal wrote:
I'm about to turn on the concurrency scheduler runloop in Parrot trunk.
Before I do, I'd like test results on as many platforms as possible
(especially Windows, since it doesn't use POSIX threads).
To test it, edit src/inter_create.c
Andy Dougherty wrote:
Whether this is a defect in the vtables_4 test sourcefile for failing to
initialize the vtables, or whether pmc_new ought to be more defensive, I
can't say.
Looks like a bug in the test, as there are other things in Parrot_exit
that won't behave appropriately without an
On Fri, Dec 07, 2007 at 08:45:03PM +0200, Allison Randal wrote:
jerry gay wrote:
looks good to me. commit away!
nice work.
I've got a clean report on our core platform targets, so committed in
r23574. As usual, please report any issues.
r23574 gives me failures in t/src/vtables.t and
On Fri, 7 Dec 2007, Allison Randal wrote:
I'm about to turn on the concurrency scheduler runloop in Parrot trunk. Before
I do, I'd like test results on as many platforms as possible (especially
Windows, since it doesn't use POSIX threads).
To test it, edit src/inter_create.c and uncomment
This last SOTO re-reminded me of what an inveterate fan I am of Perl 6. Wow.
My question today is about concurrency. I can imagine how things like IPC
Mailboxes (e.g. RFC 86) happen in modules. I can also imagine why Threads
(e.g. RFC 1) should be in modules- given the obvious dependence
I'll try to reply as well as possible, but I'm sure others will do better.
David Brunton wrote:
This last SOTO re-reminded me of what an inveterate fan I am of Perl 6. Wow.
My question today is about concurrency. I can imagine how things like IPC
Mailboxes
(e.g. RFC 86) happen
[...] where does Perl 5's C<open-with-|> live?
What do you mean by where? The namespace? Or the implementation?
I think I'm starting to get the distinction about what goes
into STD.pm. Coming most recently from a project in Erlang, concurrency
still feels like a kind of flow control to me right now
An older article, but an interesting peek into a system with multiple
interacting concurrency models:
http://developer.apple.com/technotes/tn/tn2028.html
Allison
... that chromatic AIM'd me:
http://debasishg.blogspot.com/2006/11/threadless-concurrency-on-jvm-aka-scala.html
Worth a read.
Allison
On Mon, Oct 30, 2006 at 10:28:51PM -0800, Allison Randal wrote:
: Oh, the Io language, which I've been interested in lately, also makes
: use of the concept of futures for concurrency. It's got a degree of
: appeal to it.
Perl 6's feeds (lazy lists) can also be viewed as a form of futures
I've finished a first pass through PDD 25 on threading/concurrency. It's
largely a collection of prior thinking on the subject. Before I start
kicking it into a more structured form, I'd like to do an initial round
of discussion. This is your chance to mention anything you hoped or
expected
On 10/30/06, Allison Randal [EMAIL PROTECTED] wrote:
Before I start
kicking it into a more structured form, I'd like to do an initial round
of discussion. This is your chance to mention anything you hoped or
expected from Parrot's concurrency models. How do you plan to use
concurrency?
[...] ideas mixed in there.
Hmmm... high on my wishlist: sandboxing for non-concurrent code running
inside concurrent code.
Oh, the Io language, which I've been interested in lately, also makes
use of the concept of futures for concurrency. It's got a degree of
appeal to it.
Allison
Jonathan Scott Duff [EMAIL PROTECTED] wrote:
On Thu, Jun 01, 2006 at 02:22:12PM -0700, Jonathan Lang wrote:
Forgive this ignorant soul; but what is STM?
Software Transaction Memory
Well, Software Transactional Memory if I'm being picky. :-) Some info and
an interesting paper here:-
At 1:50 PM -0700 6/1/06, Larry Wall wrote:
As for side-effecty ops, many of them can just be a promise to perform
the op later when the transaction is committed, I suspect.
Yes, but it would be important to specify that by the time control is
returned to whatever invoked the op, that any side
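The "promise to perform the op later" idea can be sketched as a transaction object that queues side effects as thunks and runs them only at commit. This is an illustration of the concept only, not any actual STM implementation; the class and method names are invented.

```raku
# Hypothetical sketch: side-effecty ops are deferred until commit time,
# so nothing observable happens if the transaction never commits.
my class Txn {
    has @!deferred;
    method defer(&op) { @!deferred.push: &op }       # queue the side effect
    method commit()   { .() for @!deferred; @!deferred = () }
}

my $txn = Txn.new;
my @log;
$txn.defer: { @log.push: 'write A' };
$txn.defer: { @log.push: 'write B' };
say @log.elems;  # 0 -- nothing visible before commit
$txn.commit;
say @log;        # [write A write B]
```

Dan's point above is the missing piece: the committing op must not return control to its caller until these deferred effects have actually completed.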
Darren Duncan schreef:
Each time a context (a code block, either a routine or a syntactic
construct like 'try' is) is entered that is marked 'is atomic', a new
transaction begins, which as a whole can later be committed or rolled
back; it implicitly commits if that context is exited normally,
On Thu, Jun 01, 2006 at 11:52:59AM +1200, Sam Vilain wrote:
: The lock on entry approach will only be for non-threaded interpreters
: that don't know how to do real STM.
The way I see it, the fundamental difference is that with ordinary
locking, you're locking in real time, whereas with STM you
Larry Wall wrote:
The way I see it, the fundamental difference is that with ordinary
locking, you're locking in real time, whereas with STM you potentially
have the ability to virtualize time to see if there's a way to order
the locks in virtual time such that they still make sense. Then you
On Thu, Jun 01, 2006 at 02:22:12PM -0700, Jonathan Lang wrote:
Larry Wall wrote:
The way I see it, the fundamental difference is that with ordinary
locking, you're locking in real time, whereas with STM you potentially
have the ability to virtualize time to see if there's a way to order
the
How does an atomic block differ from one in which all variables are
implicitly hypotheticalized? I'm thinking that a retry exit
statement may be redundant; instead, why not just go with the existing
mechanisms for successful vs. failed block termination, with the minor
modification that when an
How does an atomic block differ from one in which all variables are
implicitly hypotheticalized?
I assume that the atomicness being controlled by some kind of lock on
entry, it also applies to I/O and other side-effecty things that you
can't undo.
--
Hats are no worse for being made by ancient
Jonathan Lang wrote:
How does an atomic block differ from one in which all variables are
implicitly hypotheticalized? I'm thinking that a retry exit
statement may be redundant; instead, why not just go with the existing
mechanisms for successful vs. failed block termination, with the minor
Daniel Hulme wrote:
How does an atomic block differ from one in which all variables are
implicitly hypotheticalized?
I assume that the atomicness being controlled by some kind of lock on
entry, it also applies to I/O and other side-effecty things that you
can't undo.
The lock on entry approach will only be for non-threaded interpreters
that don't know how to do real STM.
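The lock-on-entry fallback can be pictured as one global lock wrapped around every atomic block: with no real STM, the block degrades to mutual exclusion, so side-effecty code inside it is at least serialized rather than undone. A hypothetical sketch in modern Raku (the `atomically` helper is invented for illustration):

```raku
# One global lock stands in for "lock on entry": no rollback, just
# mutual exclusion for the duration of the block.
my $atomic-lock = Lock.new;
sub atomically(&code) { $atomic-lock.protect(&code) }

my $balance = 100;
await (^4).map: {
    start {
        atomically {
            my $b = $balance;
            $balance = $b - 10;   # read-modify-write stays consistent
        }
    }
}
say $balance;  # 60
```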
At 11:51 AM +1200 6/1/06, Sam Vilain wrote:
I think the answer lies in the checkpointing references in that
document. I don't know whether that's akin to a SQL savepoint (ie, a
point mid-transaction that can be rolled back to, without committing the
entire transaction) or more like a
http://www.cminusminus.org/abstracts/c--con.html