Problem: "invalid argument" emitted by sendTo

2001-09-20 Thread mgross


I am using SocketPrim to send a UDP packet to each of 254 addresses on a
network. The message packets have been preconstructed and placed in a
list, whose contents have already been checked for correctness and appear
to meet the specifications in the appropriate RFC. When I go
through the list (using mapM_), sending a packet to each address with
sendTo, I get a valid return code from sendTo (I used trace to check)
for the addresses from 10.129.129.1 through 10.129.129.239. Address
10.129.129.240, however, fails with the messages

Fail: invalid argument
Action: sendTo
Reason: invalid argument


The message "invalid argument" does not appear within the source of
SocketPrim, but it does appear in several of the binary files in the
library. I've begun to dig around in the source code, but I've taken to
hoping that someone out there in Haskell land may be able to point me
directly to where (and for what) I should be looking to determine just
what triggered those messages. 
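
For reference, a rough sketch of the pattern described above. This is not
the original code: it is written against the modern network package
(SocketPrim's descendant) rather than the GHC 5 API, and the port number
and payload are invented for illustration.

import Network.Socket (Family (AF_INET), SockAddr (SockAddrInet),
                       SocketType (Datagram), defaultProtocol, socket,
                       tupleToHostAddress)
import Network.Socket.ByteString (sendTo)
import qualified Data.ByteString.Char8 as B

main :: IO ()
main = do
  sock <- socket AF_INET Datagram defaultProtocol
  -- Hypothetical port and payload; the real packets came from a
  -- preconstructed, RFC-checked list.
  let mkAddr h = SockAddrInet 5000 (tupleToHostAddress (10, 129, 129, h))
      targets  = [ (B.pack "packet", mkAddr h) | h <- [1 .. 254] ]
  -- Print each return code, much as the trace calls in the original did.
  mapM_ (\(pkt, addr) -> sendTo sock pkt addr >>= print) targets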

Oh, yes, version info: ghc-5.00.2, Linux (Debian testing).

Thanks in advance for your attention.

Murray Gross
[EMAIL PROTECTED]







Problem: "invalid argument" emitted by sendTo

2001-09-20 Thread mgross


Found the error. No need to follow up. 

Apologies to anyone who objects to mail that turns out not to need
answering. 

Murray Gross
[EMAIL PROTECTED]







Compiler problem or dumb mistake?

2002-02-02 Thread mgross


Here's the problematic snippet, using GHC 5.00.2: 

mapWidList :: [(Int,Int,Bool)] -> [WidNode] -> Int -> [WidNode]
mapWidList showList widNodes seq
-- What I would really like instead of the explicit recursion:
--   map (\x -> checkWid showList x (widNodes !! x))
--       [0 .. length widNodes - 1]
  | length widNodes < seq + 1 = []
  | otherwise = checkWid showList seq (widNodes !! seq)
                  : mapWidList showList widNodes (seq + 1)


The commented lines are what I would really like, but trace indicated
that the anonymous function is never invoked, so I wrote out explicitly
recursive code. However, while trace indicates that both alternatives
are repeatedly tested, as expected, checkWid is never invoked. I have
modified checkWid to ensure that laziness is not the problem (I stuffed in
code that prevents prediction of the result returned by checkWid), so
something else is going on here. Anyone care to take a shot at this one?
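
For context, a tiny self-contained example (not the real program) of how
laziness can make trace appear never to fire: nothing prints until the
traced values are actually demanded.

import Debug.Trace (trace)

main :: IO ()
main = do
  let xs = map (\x -> trace "invoked" (x * 2)) [1 .. 3 :: Int]
  -- Forcing only the spine of the list demands none of its elements,
  -- so no "invoked" message is printed by this line:
  print (length xs)
  -- sum demands every element, so "invoked" prints three times here:
  print (sum xs)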

Thanks in advance. 

Murray Gross
[EMAIL PROTECTED]





Re: Compiler problem or dumb mistake?

2002-02-03 Thread mgross


Further to my earlier post on the same subject: 

The problem seems to be not in my code, but in some sort of nasty 
interaction between thread dispatching, the trace function, and 
possibly, Gtk: It appears that trace dumps what is expected (indicating
execution of functions that I thought weren't being executed) when I click
on windows displayed by GTK+hs in another thread (many thanks to Manuel
Chakravarty for solving a nasty problem there). Please note that the
thread containing the code in which trace seems to be failing does in fact
appear to be executing to completion, although I am not yet able to
determine whether execution is correct (the code is only partially
complete). 

Anyone want to stick their fingers into this one? (I have lots of burn
ointment if you're afraid of the heat :) ). 

Murray Gross
[EMAIL PROTECTED]






Re: Compiler problem or dumb mistake?

2002-02-03 Thread mgross



On Sun, 3 Feb 2002, Jay Cox wrote:

> If checkWid is never invoked, then possibly it is never forced to begin
> with, which I think means the bug is elsewhere in your code (or
> elsewhere).
> 
Which turns out to be the case, although I haven't pinned it down
completely--please see my follow-up post on the same subject.

> 
> btw, your original map function could possibly be better written as a
> zipWith as in..
> 
> 
> zipWith (\s w -> checkWid showList s w) [0..] widNodes
> 
> 
> zipWith is mentioned in the Prelude and is a fairly widely used
> higher-order function.
> 

Neater than what I have, but not quite as self-explanatory to me
(since I have not used zipWith much). I will probably replace my
code with your suggestion when I remove the scaffolding (read:
crutches) from my half-built program. Thanks for the hint.
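
For completeness, a sketch of the replacement I have in mind, assuming
checkWid :: [(Int,Int,Bool)] -> Int -> WidNode -> WidNode (the signature
implied by the snippet in my earlier post):

-- Pair each index with its node; no explicit recursion or (!!) needed.
mapWidList' :: [(Int, Int, Bool)] -> [WidNode] -> [WidNode]
mapWidList' showList widNodes = zipWith (checkWid showList) [0 ..] widNodes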

Best, 

Murray Gross
[EMAIL PROTECTED] 




Re: ANNOUNCE: hmake-3.06

2002-08-09 Thread mgross





On Fri, 9 Aug 2002, Malcolm Wallace wrote:

>   hmake-3.06
>   --
> We are pleased to announce a fresh, bugfix, release of hmake, the
> Haskell compilation manager.
> 

www.cs.york.ac.uk seems to be down. Does anyone know of a mirror that
might have the new release, or when the home site will be back up? 

Thanks in advance, 

Murray Gross





Re: GPH: RE: Concurrency and Haskell

2002-08-20 Thread mgross


To reduce duplication, I'm going to assume that everyone copied on your
original note to me reads the list, and reply to the list alone. If you
know someone on the copy list who does not read the list, please tell me,
and I will copy them individually on appropriate future e-mail.


On Tue, 20 Aug 2002, Phil Trinder wrote:

> Murray,
> 
> There are several parallel Haskell implementations: a survey of them has just
> appeared in the Journal of Functional Programming, issues 4 & 5 (July & Sept
> 2002). Implementations
> are available for
> o Eden http://www.mathematik.uni-marburg.de/inf/eden/
> o GpH  http://www.cee.hw.ac.uk/~dsg/gph/
> 
Yes, I have that survey, and am following up on it. 

> My group works on Glasgow parallel Haskell (GpH), which extends Haskell 98 with
> a parallel composition combinator. As Simon said, the main implementation,
> GUM, buys portability by using the relatively slow PVM communications library.
> This doesn't matter so much on distributed-memory machines, and we've recently
> achieved some quite respectable results on SunServer shared-memory machines.
> 
Although we have a multiprocessor machine available to us, we would prefer
to concentrate our efforts on the cluster, because in these tight budget
times, that seems to be a more likely avenue to increasing computational
capacity. 


> Simon Marlow developed an alternative SMP implementation of GpH a couple of
> years ago that may be more suitable for a Mosix platform, but I'm not sure of
> the status of that implementation now.
> 

To the extent that we can use existing software, we will. All information
on anything potentially useful is greatly appreciated. 

Thanks for your note. 

Best, 

Murray Gross





Re: GPH: RE: Concurrency and Haskell

2002-08-20 Thread mgross





On Tue, 20 Aug 2002 [EMAIL PROTECTED] wrote:

> Speaking of GpH, I wonder how GdH is coming along? It seems
> that the installation instructions on http://www.cee.hw.ac.uk/~dsg/gdh/
> are still incomplete...
> 
> As far as I know, Mosix has nothing to do with PVM, so am I
> right to say that GpH still needs PVM even if you use it on a
> Mosix cluster? Then what does Mosix provide, when a PVM cluster
> without Mosix can run the same thing?
> 

GpH needs PVM. The difference between PVM and Mosix is that PVM requires
the user to specify the processor on which tasks execute, while Mosix
provides automatic load balancing, which is one of our major research
interests. I am beginning to investigate whether or not Mosix will
migrate PVM tasks, and if not, how this can be arranged. You are correct
that PVM alone can be used to construct a cluster; it is just that PVM
clustering is not the kind of clustering we wish to achieve. And we may
not even be able to get GpH to run on a non-shared-memory cluster. We may
or may not be able to achieve our goals, but finding out is what research
is all about, isn't it? :)

Best, 

Murray Gross





PVM-free GpH

2003-08-27 Thread mgross

I am pleased to announce that the Metis Project at Brooklyn College has
modified the GUM RTS for ghc-5.02.3 so that it runs without using PVM. We
have verified operation under the Mosix clustering system and are
currently working on a version that we will be able to use for parallel
execution on our Solaris machines. We expect our code to run without
modification on any Linux SMP.
 
While our code is still alpha (among other things, it generates numerous
annoying but harmless informational messages that we hope will soon be
silenced) and we lack the resources to provide technical support, I will
be pleased to provide copies of our CVS repository to those who ask
(sorry, no anonymous CVS access at present). We would greatly appreciate
assistance in improving the code, and particularly in locating and fixing
the bug whose symptoms are either a failed assertion in the garbage
collector or a message about improper packet packing.
 
Anyone who would like a copy of the CVS archive should send e-mail to
[EMAIL PROTECTED]. I will reply as soon as possible to arrange
the transfer.

Murray Gross
Brooklyn College, City University of New York




Re: About Haskell Thread Model

2003-10-13 Thread mgross




On Mon, 13 Oct 2003, Wolfgang Thaller wrote:

> > Do you have some experience or knowledge about Parallel Haskell? And

Parallel Haskell runs, but there are problems. Unless someone has slipped
something past me, there is no parallel implementation for Release 6 yet,
so if you want to tinker with it, you'll need to go to the CVS repository
for Release 5 of GHC and the parallel run-time system (look for the
GUM branch). Please note that the Release 5 version of parallel Haskell
should be considered experimental and is best left aside if you are
unwilling to fix code.

Murray Gross
Metis Project,
Brooklyn College 






Re: [Haskell] Implicit parallel functional programming

2005-01-18 Thread mgross




On Tue, 18 Jan 2005, Satnam Singh wrote:

> I'm trying to find out about existing work on implicit parallel functional 
> programming. I see that the Glasgow Haskell compiler has a parallel mode 
> which can be used with PVM and there is interesting work with pH at MIT. Does 
> anyone know of any other work on implicitly parallelizing functional programs 
> for fine grain parallel execution?
> 
> The emergence of multi-core processors makes me think that we should look at 
> implicit parallel functional programming in a new light.
> 
At Brooklyn College, we are working on a version of Parallel Haskell that
does not require PVM. Instead, we use Internet protocols and the Mosix
patches to Linux. 

Murray Gross
Brooklyn College, CUNY
Metis Project 




Re: [Haskell] Implicit parallel functional programming

2005-01-19 Thread mgross

Having donned my flame-resistant suit, I will venture that the
differences of opinion in the posts copied below are in large part the
result of people talking about different things at the same time.

First, there is the claim that functional languages facilitate parallel
execution, which is undeniably true if the implementation is something
like that of Haskell (no assignment statements mean no contention over
shared mutable state, and so on).
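
To make that first point concrete, here is a minimal sketch (mine, not
from any of the posts below), using the par and pseq combinators from
today's parallel package, in the style of the strategies work cited in
Ben's message:

import Control.Parallel (par, pseq)

-- Two independent pure computations: purity means the branches cannot
-- interfere through shared mutable state, so sparking one is always safe.
parSum :: [Int] -> [Int] -> Int
parSum xs ys = a `par` (b `pseq` (a + b))
  where
    a = sum xs
    b = sum ys

main :: IO ()
main = print (parSum [1 .. 1000000] [1 .. 1000000])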

Then there is the claim that it is difficult (if not impossible) to
analyze a piece of code and automatically parallelize it optimally,
a point that was conceded long ago.

Third, there is the issue of whether "reasonable" parallelization can be
obtained. The answer is that it has been done before, and it can be done
again; what remains open is whether the parallelization thus achieved is
sufficiently "good" in some sense. This, I submit, appears to be an open
question at present, and it is one I'd very much like to be proven wrong
about.

Then there is the issue of the optimality of algorithms, which is always
a bugbear, because different quality criteria change things wildly
(y'know, n-squared sorts are frequently used in real-world programs . . . ).
And when it comes to parallel execution on a multiprocessor machine (as
opposed to a cluster), significant questions arise about performance
constraints imposed by hardware design and implementation. Any answer to
the problems here depends on the hardware being used and cannot be
general.

I suggest, then, that if this thread is to be continued, contributors
should be careful to specify the particular hardware and software
environments they are talking about, and be specific about such things as
quality criteria. 

Murray Gross





On Thu, 20 Jan 2005, Ben Lippmeier wrote:

> 
> I thought the "lazy functional languages are great for implicit 
> parallelism" thing died out some time ago - at least as far as running 
> the programs on conventional hardware is concerned.
> 
> Designing an algorithm that breaks apart a "sequential" lazy program 
> into parallel chunks of the appropriate size is **HARD** (with double 
> asterisks). The time and space behavior of a lazy program is complex 
> enough for the _programmer_ to reason about, let alone an automated 
> analysis - which has no knowledge of what the program is actually trying 
> to do.
> 
> I think a more likely approach lies in the direction of the so called 
> "parallel strategies". If you haven't already, I would strongly suggest 
> reading: Algorithm + Strategy = Parallelism, 1998, PW Trinder, et al.
> You can get this paper from Simon Peyton Jones's homepage.
> 
> Also, at the end of Hans-Wolfgang Loidl's thesis he develops a 
> granularity analysis for a Haskell subset - one of the first steps in 
> any kind of implicit parallelism. It's a pretty good effort, but at the 
> end of it all it still relies on a pre-existing table of information 
> about recursive functions. I think that these kinds of analyses tend to
> suffer from uncomputability problems more than most.
> 
> If you've still got your heart set on implicit parallelism, then there's 
> a (very slow) simulator you might want to poke around with. I wrote it 
> based around Clem Baker-Finch's "Abstract machine for parallel lazy 
> evaluation", which supports fully speculative implicit parallelism.
> 
> There's a link to it on my homepage at 
> http://cs.anu.edu.au/people/Ben.Lippmeier
> 
> 
> Keean Schupke wrote:
> > I have to say I disagree... I feel Haskell is highly suited to implicit 
> > parallel execution... The key to "implicit" parallelisation is that it 
> > is implicit - not explicit - so the programmer should feel like they are 
> > programming a sequential language. If we can assume low memory-access 
> > penalties for threads running on other CPUs (shared cache model), it 
> > seems to be a matter of putting locks on the right structures, and 
> > allowing any worker thread to take the next function ready to run from 
> > the scheduler.
> > 
> >Keean.
> > 
> 
