Re: [Haskell-cafe] What happens if you get hit by a bus?

2011-12-16 Thread Brian Hurt
On Fri, Dec 16, 2011 at 3:51 PM, Andrew Coppin wrote:

> On 16/12/2011 07:05 PM, Bardur Arantsson wrote:
>
>> Michael Litchard wrote:
>>
>> [--snip--]
>>
>> If getting hit by a bus is a significant factor in the overall outcome of
>> the project then I think those are pretty good odds, aren't they?
>>
>> (I do realize that traffic accidents are a lot more frequent than we like
>> to
>> think, but still...)
>>
>
> The /actual/ probability of being hit by a bus is irrelevant. The only
> thing of consequence is the /perceived/ probability. This latter quantity
> is not related to the former in any meaningful way. In fact, due to an
> effect known as availability bias, the perceived probability of any
> potential threat varies depending on how long you spend thinking about it.
>
> (In other words, if you've never even considered what would happen if your
> sole developer got hit by a bus, your estimate of the probability of this
> will be very low. If you sit down and think about how much trouble you'd be
> in if this actually happened, suddenly your estimate of the probability
> starts increasing. This is completely illogical - which is why it's called
> a cognitive bias.)


There are a lot of ways that, from the perspective of the business, a
person could get "hit by a bus": they could get into an accident (including
getting hit by a bus), die for some other reason, get sick, retire, get
another job, or even quit to join the Peace Corps and get shipped off to
Uganda (I actually had that happen to me once).  Sooner or later, everyone
will be metaphorically "hit by a bus", in that they will not be here
anymore.  Next time this discussion comes up, ask the room how many people
have been at their company for 20 years or more.  Then ask them to guess
how many people in the room will still be at the company in 20 years' time.
In all probability, very few, if any, will still be at the company in 20
years' time.

It doesn't matter why Bob is no longer here with his specialized knowledge
of Bob's code; the end result is the same.  And the problem is the
same- someone else has to be brought in to deal with Bob's code.  And that
someone, even if they know the language the code is written in as well as,
or even better than, Bob, doesn't know the *code*.  And it's just a question
of when, not if, Bob will no longer be there to maintain the code.

If anything, using Haskell *reduces* the truck-factor problem compared to
other languages.  Someone new coming in needs to be able to make small
changes fairly quickly (major reorganizations and refactorings can generally
be delayed while the new person comes up to speed, but small bug fixes and
small feature additions are a constant need).  What Nancy the new hire can
do, if the code is in Haskell, is simply change the function and let the
compiler figure out where it's being used.  Also, types are a wonderful bit
of documentation- see the paper "Theorems for Free!" by Philip Wadler.  This
makes it easier for the new person to come up to speed on what the code
does, if not necessarily how or why.
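
A toy illustration of that workflow (hypothetical names, not from the
original message): if Nancy tightens a function's type, every stale call
site becomes a compile error, handing her a checklist of places to update.

newtype Cents = Cents Int

-- Suppose price used to be  price :: Int -> Int  and Nancy tightens it:
price :: Cents -> Cents
price (Cents n) = Cents (n * 2)

-- A caller still written as  total = price 100  no longer compiles;
-- the type checker points at every such call site.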


>
> Ever heard the phrase "fear, uncertainty and doubt"? It's a killer in a
> business context.
>
> It seems clear [to me] that there are actually quite a few Haskell
> programmers around, and it's not especially hard to find them. The question
> is how many "good" ones there are. "Good" is all vague and subjective and
> hard to measure, unfortunately.
>
> On the other hand, if you just start the project with more than one
> developer on board in the first place, then the possibility of just one of
> them being killed prematurely becomes drastically less serious. (For the
> business. I'm sure it's still fairly serious for the person who just
> DIED...)
>
>
Again, it depends.  If there is a large body of code that only one
developer has ever worked on, then you have truck-factor problems.



> PS. Kudos to Ketil Malde for the best link I've seen today.
>

Brian


Re: [Haskell-cafe] What happens if you get hit by a bus?

2011-12-16 Thread Brian Hurt
I think the "truck-factor" implications of the programming language are
dwarfed by the implications of everything else in the project.  Any project
of any significant size is going to have a huge amount of project-specific
information tucked away inside the programmers' heads.  It doesn't matter if
there are a million other programmers who know the language you used, or
only a dozen- if you're the only one who knows how things were done, and
more importantly, why they were done that way, and you get hit by a truck,
then your boss has a big problem.  Whether there are millions of candidate
replacement programmers, or only dozens, none of them has the
project-specific knowledge you had.  Finding a replacement who knows the
language is the least of his problems.

On Fri, Dec 16, 2011 at 10:40 AM, Colin Adams wrote:

> I would think there were plenty of Haskell programmers ready to jump in as
> replacements.
>
> On 16 December 2011 15:37, Michael Litchard wrote:
>
>> I'm learning what it means to be a professional Haskell programmer,
>> and contemplating taking on side jobs. The path of least resistance
>> seems to be web applications, as that is what I do at work. I've been
>> investigating what some web developers have to say about their trade.
>> One article addresses the question above. His answer was that he uses
>> RoR which has a large community and he is therefore easily
>> replaceable. My question, for freelancers in general, and web
>> developers in particular is this: How do you address this question?  I
>> imagine potential clients would need to be assuaged of their fears
>> that hiring me would lead to a lock-in situation at best, and no one
>> to maintain a code base at worst. Lock-in won't be part of my business
>> model, also sooner or later we part ways with the client. When the
>> client wonders, "What happens then?", what is a good answer?
>>


Re: [Haskell-cafe] Stupid question, re: overloaded type classes

2009-01-18 Thread Brian Hurt



On Sun, 18 Jan 2009, Ryan Ingram wrote:


2) A third choice is to do what Show does, which is kind of a hack but
solves this specific problem:
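
The quoted code was snipped by the archive; the Show-style trick being
referred to is presumably along these lines (a sketch, using the Sexpable
class from the original post, quoted below in this archive, and modeled
on how Show's showList lets the Char instance take over for String):

data Sexp = List [ Sexp ] | Atom String

class Sexpable a where
    toSexp       :: a -> Sexp
    fromSexp     :: Sexp -> Maybe a
    -- List conversion gets its own methods with defaults, just as Show
    -- has showList:
    toSexpList   :: [a] -> Sexp
    toSexpList   = List . map toSexp
    fromSexpList :: Sexp -> Maybe [a]
    fromSexpList (List l) = mapM fromSexp l
    fromSexpList _        = Nothing

instance Sexpable a => Sexpable [ a ] where
    toSexp   = toSexpList
    fromSexp = fromSexpList

instance Sexpable Char where
    toSexp c              = Atom [c]
    fromSexp (Atom [c])   = Just c
    fromSexp _            = Nothing
    -- The overrides that make String = [Char] come out as an Atom:
    toSexpList            = Atom
    fromSexpList (Atom s) = Just s
    fromSexpList _        = Nothing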


Thanks.  I like this solution much better than the two I proposed.

Brian



[Haskell-cafe] Stupid question, re: overloaded type classes

2009-01-18 Thread Brian Hurt


So, I'm working with this simplistic S-expression library of my own design 
(yes, I know, reinventing the wheel).  Basically, I have the type:


data Sexp =
      List [ Sexp ]
    | Atom String

with the associated parsers and printers, which really aren't relevant to 
the question at hand.  Then I want to define the type class of types I 
can convert to and from s-expressions, like:


class Sexpable a where
    toSexp :: a -> Sexp
    fromSexp :: Sexp -> Maybe a

here, fromSexp can return Nothing if the s-expression isn't of the right 
form to be parsed into a whatever.


Now, here's the problem.  I want to define a bunch of default instances, 
and two in particular I want to define are:


instance Sexpable String where
    toSexp s = Atom s
    fromSexp (Atom s) = Just s
    fromSexp _ = Nothing

instance Sexpable a => Sexpable [ a ] where
    toSexp lst = List $ map toSexp lst
    fromSexp (List lst) = mapM fromSexp lst
    fromSexp _ = Nothing

Note that I am not implementing Sexpable Char anywhere, so the only valid 
transform for [Char] should be the String one.  But this still causes a 
compiler error due to the overlapping instances on [Char].


There are two solutions to this that I already know of.  One is to play 
games with newtype, which I don't like because it simply adds complexity 
in my case and doesn't help anything else.  The second possibility is to 
compile with -fallow-incoherent-instances, which I'm slightly afraid of 
because I'm not sure what (if any) possible errors adding this option 
might allow.


So my question is twofold: 1) what errors might be allowed if I add 
-fallow-incoherent-instances, and 2) is there some third choice that 
avoids both solutions I already know about?


Thanks.

Brian



[Haskell-cafe] Stupid question #852: Strict monad

2009-01-01 Thread Brian Hurt


First off, let me apologize for this ongoing series of stupid newbie 
questions.  Haskell just recently went from "that theoretically interesting 
language I really need to learn some day" to a language I actually kinda 
understand and can write real code in (thanks to Real World Haskell).  Of 
course, this gives rise to a whole bunch of "wait- why is it this way?" 
sort of questions.


So today's question is: why isn't there a Strict monad?  Something like:

data Strict a = X a

instance Monad Strict where
    ( >>= ) (X m) f = let x = f m in x `seq` (X x)
    return a = a `seq` (X a)

(although this code doesn't compile for reasons I'm not clear on- I keep 
getting:

temp.hs:4:0:
Occurs check: cannot construct the infinite type: b = Strict b
When trying to generalise the type inferred for `>>='
  Signature type: forall a b1.
  Strict a -> (a -> Strict b1) -> Strict b1
  Type to generalise: forall a b1.
  Strict a -> (a -> Strict b1) -> Strict b1
In the instance declaration for `Monad Strict'

as a type error.  Feel free to jump in and tell me what I'm doing wrong.)
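
For the record, the infinite type arises because f m already has type
Strict b, so wrapping it in X again would give Strict (Strict b).  A
sketch that compiles (with a GHC of that era, where a Monad instance
needed no Applicative), forcing values on the way into the monad instead:

data Strict a = X a

instance Monad Strict where
    (X m) >>= f = m `seq` f m      -- force the value before handing it to f
    return a    = a `seq` X a      -- force the value as it is wrapped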

Obviously, there would also be a StrictT monad transformer.  The point 
here is that there are some monads (State being the obvious one) that come 
in both strict and lazy variants- the two variants could be eliminated, 
and the strict variant would just become a StrictT (State x) a.  Or 
maybe a StateT x Strict a.
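
A sketch of what such a transformer might look like (hypothetical, using
the same force-on-the-way-in convention as above):

newtype StrictT m a = StrictT { runStrictT :: m a }

instance Monad m => Monad (StrictT m) where
    return a        = StrictT (a `seq` return a)
    StrictT m >>= f = StrictT (m >>= \x -> x `seq` runStrictT (f x))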


Hmm.  Which raises the interesting question of what, if any, semantic 
difference there is between a StrictT (State x) a and a StateT x Strict 
a.


In any case, the idea came when I was thinking about monads being "code 
regions"- that we can talk about such and such a function being "in" the 
IO monad or STM monad.  The idea was that one could define a "strict" code 
region you could put code in.


Brian



Re: [Haskell-cafe] Type classes vr.s functions

2008-12-24 Thread Brian Hurt



On Tue, 23 Dec 2008, wren ng thornton wrote:

In particular, imagine that you have two different and valid ways to convert 
the same type into Foo; which do you choose? With the 
continuation/combinator/argument approach this is a non-issue since you can 
just pass in the one you need. With type-classes it's tricky since they're 
the same type, which leads to hacks with newtype wrappers or phantom types.


If there is guaranteed to be only one valid transformation from any given 
type into Foo, then type-classes make your intentions clear and they never 
run into this issue. If more than one valid transformation could exist for 
some type, then the extra argument is cleaner.


In this case, there is guaranteed to be only one valid transformation. 
Basically, I have a number of similar data structures (which enforce 
different constraints, which is why they're not all the same data 
structure), and the function in question converts the specific 
(constraint-enforcing) data structures into a general 
(non-constraint-enforcing) data structure on which I can perform generic 
algorithms.


To be more specific, I'm writing a compiler in Haskell (what a shock), and 
the source data structures are the parse trees after various 
transformations- for example, after the lambda lifting phase, the parse 
tree should not have lambda expressions in them at all (they should have 
all been lifted to top-level functions).  So, while the 
before-lambda-lifting data structure has a Lambda constructor, the 
after-lambda-lifting data structure doesn't, thus enforcing the constraint 
that lambda lifting removes all (non-top-level) lambda expressions.  But I 
want to be able to get all free variables of a given expression, both 
before and after lambda lifting, so I just define a function to convert 
both types into a common generic representation that I can write a "get 
free variables" function to work on.


At this point, I realize that I'm being stupid and way overthinking 
things.  Haskell is a *lazy* language- I'm still wrapping my head around 
the implications of that statement.  My original thinking was that the 
conversion function would be a one-level conversion, i.e. the data 
structure would be like:

data Generic a =
      UnaryOp UOp a
    | BinaryOp a BOp a
    | If a a a
    ...

i.e. I'd only convert the root node, and the child nodes would still be 
the original data structure.  So I'd need to pass around a function of the 
form:

a -> Generic a

which is my conversion function.  But what I'm doing here is 
hand-reimplementing a lazy conversion of the data structure- which I get 
for free anyways.  So what I should do is define the data structure like:

data Generic =
      UnaryOp UOp Generic
    | BinaryOp Generic BOp Generic
    | If Generic Generic Generic
    ...

Then all I need to do is to write the pure conversion function a -> 
Generic, and then run the generic algorithm over the data structure.  That 
gives me the exact behavior I'm looking for, without either (explicitly) 
passing a conversion function around or playing with fancy typeclasses.
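
A cut-down sketch of that final design (toy constructors, not the real
compiler ASTs): the conversion is an ordinary pure function, and laziness
means toGeneric only materializes the nodes that freeVars actually walks.

data UOp = Neg | Not

-- After lambda lifting: no Lambda constructor, so the "no nested
-- lambdas" invariant is enforced by the type itself.
data After
    = AVar String
    | AUnary UOp After

data Generic
    = GVar String
    | GUnary UOp Generic

toGeneric :: After -> Generic
toGeneric (AVar v)     = GVar v
toGeneric (AUnary o e) = GUnary o (toGeneric e)

freeVars :: Generic -> [String]
freeVars (GVar v)     = [v]
freeVars (GUnary _ e) = freeVars e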


Pardon me while I go beat my head against the wall.

Brian



[Haskell-cafe] Stupid question #374: why is MaybeT not in the standard library?

2008-12-22 Thread Brian Hurt


I wrote my own implementation of MaybeT (which was a useful exercise), 
but a quick google showed:


http://www.haskell.org/haskellwiki/New_monads/MaybeT

But I'm wondering why it's not in the standard library.  Has the standards 
committee just not gotten around to it yet?  Or was there some 
discussion of this in the past on some (public) mailing list, which my 
admittedly shallow googling failed to uncover, that someone could point me 
to?
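
For readers following the link, the implementation in question is tiny; a
minimal sketch along the lines of the wiki version (a Monad instance alone
sufficed on a GHC of that era):

newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

instance Monad m => Monad (MaybeT m) where
    return  = MaybeT . return . Just
    x >>= f = MaybeT $ do
        mb <- runMaybeT x
        case mb of
            Nothing -> return Nothing
            Just v  -> runMaybeT (f v)

(It did eventually land in the standard distribution, via the transformers
package's Control.Monad.Trans.Maybe.)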


Thanks.

Brian



Re: [Haskell-cafe] Haskell as a religion

2008-12-20 Thread Brian Hurt


(Sorry for the late reply)

On Tue, 16 Dec 2008, Andrew Coppin wrote:


Don Stewart wrote:

I think of Haskell more as a revolutionary movement


LOL! Longest revolution EVER, eh? I mean, how long ago was its dogma first 
codified? ;-)


Remember: the eternal Union of Soviet Socialist States lasted about 7 
times as long as Hitler's thousand-year Reich (meanwhile, Jefferson's 
temporary experiment still seems to be lurching along about as well as 
ever).  Nothing is as permanent as that which is declared temporary, and 
nothing is as temporary as that which is declared permanent.  Also, 
constants aren't and variables don't.




The thing that saddens me is this:

http://prog21.dadgum.com/31.html

Basically, Haskell will never be popular, but its coolest ideas will be 
stolen by everybody else and passed off as their own. :-(


We should be so lucky.  My deepest fear is that Haskell doesn't become 
popular *and* its ideas aren't picked up by other languages.


Brian



[Haskell-cafe] Type classes vr.s functions

2008-12-20 Thread Brian Hurt


So, a style question for people, if I can.  I have a certain problem: 
basically, I have a bunch of functions which need a special function, 
of type a -> Foo, say.  And a bunch of other functions which can define 
that function on some type of interest, and which then want to call the 
first batch of functions.  I can do this either by defining a type class, 
something like:

class Fooable a where
    toFoo :: a -> Foo

or I can simply have all the functions which need a toFoo take an extra 
argument.  Performance really isn't that important here, so it's really 
a matter of style- which approach would people prefer in this case?
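
Concretely, the two styles side by side (Foo and the other names here are
placeholders):

data Foo = Foo String

-- Style 1: a type class carries the conversion implicitly.
class Fooable a where
    toFoo :: a -> Foo

consume :: Fooable a => [a] -> [Foo]
consume = map toFoo

-- Style 2: the conversion is just another argument.
consume' :: (a -> Foo) -> [a] -> [Foo]
consume' conv = map conv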

Brian



Re: [Haskell-cafe] Please tell me this function exists

2008-12-17 Thread Brian Hurt



On Wed, 17 Dec 2008, Max Rabkin wrote:


Hoogle is your friend:

Searching for: String -> [String] -> String
Data.List  intercalate :: [a] -> [[a]] -> [a]    (base)
    intercalate xs xss is equivalent to (concat (intersperse xs xss)). It
    inserts the...
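
At String this specializes to exactly the function asked for; a quick
sanity check (hypothetical values):

import Data.List (intercalate)

-- intercalate ", " ["foo", "bar", "baz"]  ==  "foo, bar, baz"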



Thanks.

Brian



[Haskell-cafe] Please tell me this function exists

2008-12-17 Thread Brian Hurt


I know it's not hard to write, but still:

concat :: String -> [String] -> String
concat _ [] = ""
concat _ [x] = x
concat sep (x:xs) = x ++ sep ++ (concat sep xs)

I've got to be stupid and missing it in the standard libraries.  Please 
tell me where.  Thanks.


Brian



Re: [Haskell-cafe] Line noise

2008-09-22 Thread Brian Hurt



On Sun, 21 Sep 2008, wren ng thornton wrote:

Even with functionalists ---of the OCaml and SML ilk--- 
this use of spaces can be confusing if no one explains that function 
application binds tighter than all operators.


Bwuh?  Ocaml programmers certainly know that application binds tighter 
than operators.  And as:


let f x y = ... in
f a b

is more efficient (in Ocaml) than:

let f (x, y) = ... in
f (a, b)

(Ocaml doesn't optimize away the tuple allocation), the former 
(Haskell-like) is generally preferred by Ocaml programmers.


SML programmers do use the second form, I'll grant you.

Brian



[Haskell-cafe] the trivial monad- thoughts and a question

2008-01-12 Thread Brian Hurt


So, I've been playing around with what I call the trivial monad:

module TrivialMonad where

data TrivialMonad a = M a

recover :: TrivialMonad a -> a
recover (M x) = x

instance Monad TrivialMonad where
    (M x) >>= f = f x
    (M x) >> f  = f
    return x    = M x
    fail s      = undefined

This is actually a surprisingly useful little sucker- it allows you to 
"demonadify" code.  Which is useful when you have some places where you 
want to use the code within a monad, and some places where you don't.  You 
write the base bit of code monadically, and then demonadify it as needed.


The first question I have is: is it possible to implement this guy without 
wrapping the value in a constructor?  What I'd like to do is replace the:

data TrivialMonad a = M a

with something like:

type TrivialMonad a = a

and then be able to junk the recover function.

The second question I have is: is there any hope of getting something like 
this into the standard library?


Thanks,
Brian



[Haskell-cafe] Newbie question: can laziness lead to space compression?

2007-12-29 Thread Brian Hurt


My apologies if this has been beaten to death before; I'm still new to 
Haskell.  But I was wondering: is it possible that lazy evaluation could 
lead to space compression, especially under heavily persistent usage 
patterns?


Here's the argument I'm making.  Say we have a tree-based Set with, say, 
1024 values in it.  For ease of math we'll assume that it's perfectly 
balanced.  Say each node in the tree takes 5 words of memory.  So in an 
eager language (for example, Ocaml), adding a new node to this tree 
requires the allocation of 10 new nodes (one per level, since log2 1024 = 
10 and the whole path from root to leaf has to be copied), or 50 words of 
memory.  In Haskell, what would happen (as I understand it) is that just a 
new lazy thunk would be allocated- say, 10 words of memory.  Conceptually, 
we could think of the returned value as the original tree plus a small 
delta.  The lazy implementation is using 40 fewer words of memory than the 
eager implementation.


Note that the benefit isn't *big*- we're talking about 40 words of memory 
when the main data structure is taking up 5K-plus words of memory- so it's 
less than 1% different.  But there is a (small) upside in memory usage at 
least occasionally, right?


Brian



Re: [Haskell-cafe] On the verge of ... giving up!

2007-10-14 Thread Brian Hurt



On Sun, 14 Oct 2007, Andrew Coppin wrote:


Vimal wrote:

I like learning by
comparison with other similar languages. This approach worked for me
when I tried learning Python+Perl together. The nicer syntax and
easier object-orientedness made me leave Perl behind and pursue
Python.
I also tried it for Haskell (Lisp+OCaml+Haskell together).



This probably works quite well for mainstream programming languages (since 
they're all so similar), but is unlikely to work at all for Haskell (since, 
as far as I know, no other programming language on Earth is remotely like it 
- Miranda excluded). Even Lisp and Erlang are nothing like Haskell, and 
they're supposedly based on the same ideas.



I'm going to offer an opinion here that's likely to be controversial (in 
this forum): people new to functional programming shouldn't learn Haskell 
first.  They should start with either Ocaml or SML.  If it makes it 
easier to accept this argument, you can consider Ocaml and SML as "Haskell 
with training wheels".  And the original poster, rather than giving 
up on Haskell, should instead put Haskell on the backburner and 
learn Ocaml (which is likely to be hard enough).  Then, in a year or two, 
start taking another swing at Haskell.


The problem is that Haskell has a lot of big ideas that all come at you 
pretty much immediately.  Learning a new language in a new paradigm is a 
lot harder than learning a new language in a paradigm you already know. 
If you know Java or C#, it's not that hard to learn Python or Ruby, 
because you already know how to think in objects.  The problem with going 
from a C/Java/Ruby paradigm to Haskell is that Haskell isn't just one 
paradigm shift, it's several all rolled into one:

- Purely functional
- Lazy
- Monadic
- Strongly typed

Each one of these has a huge impact on how you think about problems and 
design programs- I'd argue about as large an impact as objects have. 
These are all good things (well, I'd admit to not being 100% sure about 
pure laziness), and knowing how to think in these paradigms is definitely 
a good thing.


And the situation is worse with pure functional languages.  When you move 
from, say, C/Pascal/Fortran to Java/Ruby/Python, you don't have to learn 
new data structures and new algorithms.  A doubly linked list is still 
just a doubly linked list, still has the same properties, and still has 
more or less the same implementation.  In addition to learning Haskell, 
you also need to relearn basic computer science.  You need to learn what a 
real-time lazy catenable deque is, how to implement it, and why you need 
one, all the while struggling with the syntax, type errors, and all the 
other problems of trying to learn a new language.


I mean, contemplate this trivial exercise for a moment: write a program 
that reads from stdin a series of numbers (one number per line), and 
writes out the sum of the last n numbers.  This is a trivial problem, and 
I have no doubt that someone who knows Haskell better than I will reply to 
this email with a single line of code that does it.  But just think for a 
moment the number of different paradigm shifts that you need to make to 
write this program in Haskell.
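
To spare the suspense, a sketch of the sort of reply he anticipates, with
n fixed at 3 for concreteness:

main :: IO ()
main = interact $ show . sum . lastN 3 . map read . lines
  where lastN n = reverse . take n . reverse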


I'm not saying that it's impossible to go directly to Haskell, I'm saying 
that it's just very very hard.


Ocaml, meanwhile, is in many ways the C++ of the functional world.  C++ 
was needed for object orientation to take off, because it allowed you to 
continue to program in plain old procedural C when you needed to get 
something done, and to play with OO concepts and ideas as you have the 
time/inclination/need.  Then, once people became comfortable with objects 
and OO thinking, they were ready for the real OO languages.


In a similar fashion, Ocaml allows you to write old-fashioned imperative 
code when you need to just get something working.  And as you have the 
time/inclination/need, you can start playing with more advanced 
functional concepts, like purely applicative data structures, lazy 
evaluation, and even monads.


Brian



Re: [Haskell-cafe] Elevator pitch for Haskell.

2007-09-10 Thread Brian Hurt



On Mon, 10 Sep 2007, Devin Mullins wrote:


Brian Hurt wrote:

Any links to these presentations?  I'm interested.

Videos:
 http://rubyhoedown2007.confreaks.com/session04.html


Actually, this video has an interesting bit relevant to this discussion. 
He doesn't phrase it as an "elevator pitch", but that's what it is.  When 
he's talking about his first Ruby job, he was approached by a company 
thinking of doing a web app in Java.  His response: "Give me your best bid 
for doing it in Java- take 30% off the cost and cut the delivery time in 
half, and I'll do it in Ruby for that."  Managers don't understand (or 
care about, generally) technology- but they understand time, and they 
understand money.


So there's your elevator pitch- "You're looking at doing project X in 
language Y- I can do it in Haskell for 30% less money and in half the 
time."  All you need to do is fill in the correct values for X and Y.


Brian



Re: [Haskell-cafe] Elevator pitch for Haskell.

2007-09-09 Thread Brian Hurt



On Sun, 9 Sep 2007, Devin Mullins wrote:


As for the latter, the reason I hear most often is "I want to be able to
use the language at my job."* Yet, I have heard two presentations from
people who studied the history of Smalltalk/Java/etc. and came to the
(informal) conclusion that the very things that enable the language to
cross the chasm are the very things that make people like us want to
start looking for a new language.


Any links to these presentations?  I'm interested.

Thanks.

Brian


Re: [Haskell-cafe] Dynamic thread management?

2007-08-11 Thread Brian Hurt



On Sat, 11 Aug 2007, Sebastian Sylvan wrote:


How is this any better than using "par" in Haskell?


Mainly how the threads are actually scheduled.  Mind you, I'm an 
*incredible* Haskell newbie, so take all of my comments with a 5-pound 
salt block, but as I understand how the current implementation of parallel 
Haskell works, all threads spawned go into a communal pool.  When a thread 
blocks, it's put back into the communal pool, and a new thread is 
selected to run by the worker thread.


In Cilk, the task structure is more tree-like.  When thread A spawns 
thread B, the worker thread stops executing thread A and starts executing 
thread B.  When thread B exits, the worker thread then resumes thread A. 
So at any given point in time, the worker thread has a stack of processes 
waiting for subthreads to complete so they can be resumed- not unlike 
function calls in other languages, or nested lazy blocks in Haskell.


When a worker thread runs out of work to do, when it has no more threads 
to execute, it picks another worker thread at random and asks it 
for work to do.  The other worker thread (assuming it has 
work to do) then picks the microthread at the bottom of its resume stack, 
the one farthest away from being resumed, and passes that microthread's 
state to the original worker thread.


From the user program's perspective, this is no different from the current 
implementation.  Threads get spawned, get executed in some unspecified 
order, and then complete.


What's interesting (to me, at least) are the optimization opportunities 
this model provides that the shared-queue model doesn't.  Consider the 
following structural model: we have a two-tiered heap.  Each worker thread 
has its own local heap, which only the microthreads it is managing can see. 
Plus there is a global heap with objects that all microthreads in all 
worker threads can see.  Objects are originally allocated in the local 
heap.  When a microthread starts being executed on another worker 
thread, all objects it can see (reach, in the conservative-GC sense) are 
promoted to the global heap.


The advantage of all of this is that now we can easily distinguish between 
objects that can be seen from other real threads and those that can't. 
All of the mutable data structures- TVars, lazy thunks, everything- can be 
much more cheaply implemented if you know that no other thread can access 
them.


Take the example of a parallel map, where I'm using a TVar in my map 
function.  The likelihood is that all of the (short-lived) microthreads 
I'm spawning will execute on the same worker thread- and thus that the 
TVar will never leave the local heap, and can thus be optimized down to 
something close to a single-threaded MVar.  I have, via optimization, 
turned a parallel map and a synchronized TVar back into something 
approaching a serial map and an MVar.
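
For reference, this is roughly how such a parallel map is written with the
existing primitives (Control.Parallel's par and pseq); the two-tiered
local/global heap described above is the author's hypothetical design, not
something the current runtime does:

import Control.Parallel (par, pseq)

parMap' :: (a -> b) -> [a] -> [b]
parMap' _ []     = []
parMap' f (x:xs) = y `par` (ys `pseq` (y : ys))
  where
    y  = f x             -- sparked for possible parallel evaluation
    ys = parMap' f xs    -- evaluated by the current thread first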


Brian



Re: [Haskell-cafe] Dynamic thread management?

2007-08-11 Thread Brian Hurt


You guys might also want to take a look at the Cilk programming language, 
and how it manages threads.  If you know C, learning Cilk is about 2 hours 
of work, as it's C with half a dozen extra keywords and a few new 
concepts.  I'd love to see Cilk - C + Haskell as a programming language.


The key idea of Cilk is that it's easier to deparallelize than it is to 
parallelize, especially automatically.  So the idea is that the program is 
written incredibly parallel, with huge numbers of microthreads, which are 
(on average) very cheap to spawn.  The runtime then deparallelizes the 
microthreads, running multiple microthreads sequentially within a single 
real thread (a worker thread).  Microthreads that live their entire life 
within a single real thread are cheap to spawn (as in "not much more 
expensive than a normal function call" cheap).


The problem that Cilk runs into is that it's, well, C.  It doesn't deal 
with contention well at all- a single microthread blocking blocks the 
whole worker thread- meaning, among other things, that you can have "false 
deadlocks", where one microthread blocks on another microthread in the 
same real thread, and thus acts like it's deadlocked even though it really 
isn't.  You have a greatly increased likelihood of race conditions as 
well (mutable data and multithreading just don't mix).  Plus you have all 
the normal fun you have with C bugs- wild pointers, buffer overruns, etc.


All of which you avoid if you replace the C aspects of Cilk with Haskell. 
With STM you avoid the deadlock problem entirely, and with immutable data 
(except for carefully coded monads) you avoid the whole race condition 
problem.  Plus all the normal advantages Haskell has over C.


Just my $0.02.

Brian

On Sat, 11 Aug 2007, Neil Bartlett wrote:


Hugh,

I certainly think it would be wrong to declare that NDP is doomed to
failure... not because you would be making an enemy of SPJ (I'm pretty
sure you wouldn't!) but because it actually aims to solve a less
ambitious problem: the problem of parallelising the SAME task applied
to different data, rather than a collection of arbitrary tasks.
Because the task does not change, we know that e.g. taking half the
data cuts the amount of work in half. Therefore an up-front scheduling
approach can work.

If you fast forward to about 42 minutes into the London HUG video, you
see that Simon talks about the problem of parallelizing (f x) + (g y),
and says that he spent "quite a lot of time in the eighties trying to
make this game go fast [..] and the result was pretty much abject
failure".

You're absolutely right that a dynamic/adaptive approach is the only
one that will work when the tasks are of unknown size. Whether this
approach is as easy as you think is open for you to prove. I look
forward to testing your VM implementation, or at the very least
reading your paper on the subject ;-)

Regards,
Neil


On 8/11/07, Hugh Perkins <[EMAIL PROTECTED]> wrote:

On 8/11/07, Thomas Conway <[EMAIL PROTECTED]> wrote:

There are many papers about this in the Parallel Logic Programming
area. It is commonly called "Embarrassing Parallelism".


Ah, I wasn't very precise ;-)  I didn't mean I don't understand the
problem; I meant I don't understand why people think it is difficult to
solve ;-)  (And then I tried to explain by examples why it is easy,
but it is true that my communication sucks ;-) )


you'll only get a benefit if you can break xs into chunks of e.g. 10^3
elements or more, and more like 10^5 or more for more usual 'f's.

Actually, the number of elements is irrelevant.  The only measure that
is important is how long the function is taking to execute, and
whether it is parallelizable.

Example: the following only has 3 elements but will take a while to run:

print $ sum $ map getNumberPrimes [1024, 2048, 4096]

The following has 10 million elements but runs quickly:

print $ sum $ map (+1) [1 .. 10000000]

In the second, we start the function running, in a single thread, and
after a second the function has already finished, so great!  We're
done!

In the first, we start the function running, in a single thread.
After a second the function is still running, so we look at what is
taking the time and whether it is parallelizable.

Turns out the vm has been chugging away on the map for the last
second, and that maps are parallelizable, so we split the map
into Math.Min( numberOfCores, numberOfElements ) pieces, which on a
64-core machine, given we have 3 elements, is Math.Min( 64, 3 ),
which is 3.

So, we assign each of the 3 elements of the map to a thread.  So, now
we're using 3 of our 64 cores.

A second later, each thread is still chugging away at its element, so
we think, ok, maybe we can parallelize more, because we still have 61
threads/cores available, so now we look at the getNumberPrimes
function itself, and continue the above type of analysis.

This continues until all 64 cores have been assigned some work to do.

Whenever a thread finishes, 

[Haskell-cafe] Announce: New York Functional Programmers Meetup

2007-06-18 Thread Brian Hurt


I'd like to announce the New York Functional Programmers Meetup, which is 
Wednesday, June 27th, 7 PM, at the offices of Jane Street Capital.


The NYFP meetup is for people using or interested in strongly typed functional 
languages, such as Haskell, Ocaml, SML, etc.


Meeting details are here:
http://lisp.meetup.com/59/

(sorry about the Lisp category- it's the closest thing Meetup has to 
"Functional Programmers").


Brian



[Haskell-cafe] Do monads imply laziness?

2007-04-14 Thread Brian Hurt


This is probably an off-topic question, but I can't think of a better 
forum to ask it: does the existence of monads imply laziness in a 
language, at least at the monadic level?


Consider the following: a purely functional, eagerly evaluated programming 
language that uses monads to encapsulate the awkward squad.  In this 
programming language, a program generates an IO monad whose encapsulated 
computation performs side-effecting actions- it writes to stdout, say. 
But this generated monad never has its value evaluated- the monad is 
tossed away uninspected.  Does the side effect happen?  If the answer is 
at least potentially "no", then monads are lazily evaluated, and thus 
monads imply laziness (at least at the monadic level).  On the other hand, 
if the answer is "yes", then monads do not imply laziness.


Thanks,
Brian



Re: [Haskell-cafe] Stupid newbie question

2007-01-05 Thread Brian Hurt



On Fri, 5 Jan 2007, Jeremy Shaw wrote:


The easiest solution is to make things a little more strict. For
example, if you change:

nth i (x:xs) = if i < 0 then Empty else nth (i-1) xs


Even better, if I define:

nth 0 (x:_) = Just x
nth i (_:xs) = if i < 0 then Nothing else nth (i-1) xs
nth i [] = Nothing

makelist i = i `seq` i : (makelist (i+1))

nth 1000 (makelist 1)

This works like a charm.  Thanks.

Brian



Re: [Haskell-cafe] Stupid newbie question

2007-01-05 Thread Brian Hurt



On Fri, 5 Jan 2007, Jason Creighton wrote:


On Fri, Jan 05, 2007 at 08:17:33PM -0500, Brian Hurt wrote:


My apologies for wasting bandwidth on what I'm sure is a stupid newbie
question.

Given:

-- Reimplementing the wheel here, I know

data Option a = Some a | Empty
    deriving (Eq,Ord,Show,Read)


My apologies if you knew this already (I don't know whether your
"Reimplementing the wheel" comment refers to this type or the "nth"
function), but this is the Maybe data type in Prelude:

data Maybe a = Nothing | Just a


Both, actually.  I did the Option a definition to get some experience with 
defining symbolic data types, and probably should have removed it from my 
example.




...which is the same as yours, except it's spelled differently, is an
instance of several handy classes (eg, Monad), and has a handful of
helper functions predefined: 
http://haskell.org/ghc/docs/latest/html/libraries/base/Data-Maybe.html


Thanks for the link.

Brian



Re: [Haskell-cafe] Stupid newbie question

2007-01-05 Thread Brian Hurt



On Fri, 5 Jan 2007, Jeremy Shaw wrote:


Hi,

In this case, the stack overflow you are seeing is due to laziness, not
tail recursion.


Aha.  I knew it was something stupid.



Because you never demand the value of any element in the list, Haskell
never bothers to calculate it. So you have a list that looks like:

[ i,  i - 1, (i - 1) - 1, ((i - 1) - 1 - 1), .. ]

So, by the time you get up to some big numbers, you have built up a
very large thunk. For some reason this is causing a stack overflow.



Actually, this makes sense to me.  Recursively forcing lazy thunks is not 
tail recursive; it needs to allocate stack frames.  So if you have a 
million-deep recursive thunk, forcing it is a problem.
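
The classic form of the same build-up, for comparison: foldl piles up a
chain of suspended (+) applications, while Data.List's strict foldl'
forces as it goes and runs in constant stack.

import Data.List (foldl')

main :: IO ()
main = do
    -- foldl  (+) 0 [1 .. 10000000 :: Integer]  -- builds a ten-million-deep
    --                                          -- thunk chain; liable to blow
    --                                          -- the stack when forced
    print (foldl' (+) 0 [1 .. 10000000 :: Integer])  -- forces eagerly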



The easiest solution is to make things a little more strict. For
example, if you change:

nth i (x:xs) = if i < 0 then Empty else nth (i-1) xs

to:

nth i (x:xs) = x `seq` (if i < 0 then Empty else nth (i-1) xs)

This will force x enough that things do not overflow.

j.

ps. just a warning, seq is not entirely straightforward to use, so
while it works in this case, it may not always work for you. I think
there might be a wiki page somewhere that explains how to avoid space
leaks in greater detail, but I can't seem to find it.

Another solution that does not involve using seq would be to replace
the above line with these two lines:

nth i (0:xs) = if i < 0 then Empty else nth (i-1) xs

This looks to be a typo, not sure if it's mine or yours.  The definition I 
was playing with was (or should be):


nth i (x:xs) = if i < 0 then Empty else nth (i-1) xs

Brian



[Haskell-cafe] Stupid newbie question

2007-01-05 Thread Brian Hurt


My apologies for wasting bandwidth on what I'm sure is a stupid newbie 
question.


Given:

-- Reimplementing the wheel here, I know

data Option a = Some a | Empty
    deriving (Eq,Ord,Show,Read)

nth 0 (x:xs) = Some x
nth i (x:xs) = if i < 0 then Empty else nth (i-1) xs
nth i [] = Empty

That:

nth 100 [1..1000]

returns the right answer, while:

nth 100 [1..]

blows stack.  Furthermore, given:

makelist i = i : (makelist (i-1))

Why is it that:
nth 100 (makelist 1)

blows stack?  What are the rules for being tail recursive in Haskell?  I'm 
an Ocaml programmer, so I'm familiar with this problem, I just don't see 
the solution.  Some help would be appreciated.


Thanks in advance,
Brian


[Haskell-cafe] Good Haskell introduction for an Ocaml programmer?

2006-12-12 Thread Brian Hurt


Greetings, all.  I'm an experienced Ocaml programmer, looking to broaden 
my horizons yet further and pick up Haskell, and I'm wondering if there's 
a good introduction to Haskell for me.  I have Simon Thompson's "Haskell: 
The Craft of Functional Programming", which isn't a bad book, but I'm 
something of a special case.  I'm already familiar and comfortable 
with a lot of concepts which are new to your average C++/Java programmer- 
things like symbolic computation, recursion as looping, and applicative 
data structures.  So churning through introductions to these concepts 
looking for the rare nugget of new information is, well, kinda boring.  On 
the other hand, there are a lot of Haskell concepts I'm not comfortable 
with, like monads.


So I was wondering if there was a better introduction for me out there? 
I'm willing to pay for a book or read something online, whichever.


Thanks,
Brian
