[Haskell] CFP: Lambda Days, Krakow, 22-23 February 2018

2017-10-02 Thread John Hughes
Lambda Days is a joint industry/academic conference on functional
programming, held annually at the Jagiellonian University in Krakow. Lambda
Days is calling both for proposed presentations and for research papers,
the latter to be published after the event in Concurrency and Computation:
Practice and Experience or in Science of Computer Programming. The deadline
for both kinds of submission is October 31st.

 

See the full call for papers here:
http://www.lambdadays.org/lambdadays2018#call-for-papers

 

___
Haskell mailing list
Haskell@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell


[Haskell] CFP: Functional Paradigm for High Performance Computing

2016-10-19 Thread John Hughes




Future Generation Computer Systems (IF=2.430) Special Issue on
Functional Paradigm for High Performance Computing

in conjunction with the International Conference Lambda Days 2017

http://www.lambdadays.org

February 09-10, Kraków, Poland




Dear Sir or Madam,



We hereby cordially invite you to submit a paper presenting the results of 
original research or innovative practical applications in the area of Functional 
Programming for High Performance Computing.



The special issue aims to bring together functional programming designers, HPC 
experts, researchers and application developers from academia and industry to 
present the latest progress on the functional programming paradigm for High 
Performance Computing, and to identify and discuss which solutions provided by 
functional programming are useful for HPC problems.



The deadline for submitting your paper to the special issue is November 30th, 2016.



For more information and details on the call, please check the link:



http://www.journals.elsevier.com/future-generation-computer-systems/call-for-papers/special-issue-on-functional-paradigm-for-high-performance-co



or the main conference site: http://www.lambdadays.org



or the PDF CFP online:

http://www.lambdadays.org/static/upload/media/1475770130836999pdf_lambda2.pdf



You can also contact the organizers directly by sending an email to 
ol...@agh.edu.pl





With best regards,



On behalf of the special section chairs:



Aleksander Byrski, AGH University of Science and Technology, Krakow, Poland
Katarzyna Rycerz, AGH University of Science and Technology, Krakow, Poland
John Hughes, Chalmers University of Technology, Gothenburg, Sweden
Kevin Hammond, University of St. Andrews, St. Andrews, Scotland, UK





[Haskell] Lambda Days--Call for abstracts

2014-12-11 Thread John Hughes
Lambda Days is a 2-day developer conference to be held in Krakow next year, Feb 
26-27,
devoted to all things functional. Abstract submission is open until the 5th of 
January.

http://www.lambdadays.org/

Last year’s program is available here:

http://www.lambdadays.org/lambdadays2014/#schedule

Looking forward to some exciting Haskell submissions!

John Hughes


[Haskell] Erlang workshop call for papers

2012-04-20 Thread John Hughes
Why not adapt some cool Haskell ideas to Erlang too? Six weeks to go...

John Hughes


CALL FOR PAPERS
Eleventh ACM SIGPLAN Erlang Workshop
Copenhagen, Denmark
Friday, September 14, 2012



A satellite event of the 17th ACM SIGPLAN International Conference on 
Functional Programming (ICFP 2012, http://icfpconference.org/icfp2012/).

Erlang is a concurrent, distributed functional programming language aimed at 
systems with requirements on massive concurrency, soft real time response, 
fault tolerance, and high availability. It has been available as open source 
for over a decade, creating a community that actively contributes to its 
already existing rich set of libraries and applications. Originally created for 
telecom applications, its usage has spread to other domains including 
e-commerce, banking, databases, and computer telephony and messaging.

Erlang programs are today among the largest applications written in any 
functional programming language. These applications offer new opportunities to 
evaluate functional programming and functional programming methods on a very 
large scale and suggest new problems for the research community to solve.

This workshop will bring together the open source, academic, and industrial 
programming communities of Erlang. It will enable participants to familiarize 
themselves with recent developments on new techniques and tools tailored to 
Erlang, novel applications, draw lessons from users' experiences and identify 
research problems and common areas relevant to the practice of Erlang and 
functional programming.

We invite three sorts of submissions.
*  Technical papers describing language extensions, critical discussions of the 
status quo, formal semantics of language constructs, program analysis and 
transformation, virtual machine extensions and compilation techniques, 
implementations and interfaces of Erlang in/with other languages, and new tools 
(profilers, tracers, debuggers, testing frameworks, etc.). The maximum length 
for technical papers is 12 pages.
*  Practice and application papers describing uses of Erlang in the 
real world, Erlang libraries for specific tasks, experiences from using 
Erlang in specific application domains, reusable programming idioms, and elegant 
new ways of using Erlang to approach or solve a particular problem. The maximum 
length for practice and application papers is 12 pages. Note that this is a 
maximum: we also welcome shorter papers; the program committee will evaluate 
all papers on an equal basis, independent of length.
*  Poster presentations describing topics related to the workshop goals. Each 
poster submission includes an abstract and summary of at most 2 pages. 
Presentations in this category will be given an hour of shared simultaneous 
demonstration time.
Workshop Chair

  *   Torben Hoffmann, Issuu, Denmark

Program Chair

  *   John Hughes, Chalmers University of Technology/Quviq AB, Gothenburg, 
Sweden

Program Committee
(Note: the Workshop and Program Chairs are also committee members)

  *   Clara Benac Earle, Technical University of Madrid, Madrid, Spain
  *   Scott Lystig Fritchie, Basho Technologies, USA
  *   Simon Peyton Jones, Microsoft Research, Cambridge, England
  *   Tamas Kozsik, Eötvös Loránd University, Budapest, Hungary
  *   Kenneth Lundin, Ericsson AB, Stockholm, Sweden
  *   Kostis Sagonas, Uppsala University, Uppsala, Sweden
  *   Erik Stenman, Klarna AB, Stockholm, Sweden
  *   Kresten Krab Thorup, Trifork A/S, Aarhus, Denmark
  *   Steve Vinoski, Basho Technologies, USA

Important Dates

  *   Submission deadline: Sunday, June 3, 2012
  *   Author notification: Wednesday, June 27, 2012
  *   Final submission for the publisher: Tuesday, July 10, 2012
  *   Workshop date: Friday, September 14, 2012

Instructions to authors
Papers must be submitted online via EasyChair (via the Erlang2012 event). The 
submission page is https://www.easychair.org/conferences/?conf=erlang2012.

Submitted papers should be in portable document format (PDF), formatted using 
the ACM SIGPLAN style guidelines.

Each submission must adhere to SIGPLAN's republication policy. Violation risks 
summary rejection of the offending submission. Accepted papers will be 
published by the ACM and will appear in the ACM Digital Library.

Paper submissions will be considered for poster submission in the case that 
they are not accepted as full papers.

Venue & Registration Details
For registration, please see the ICFP 2012 web site.
Related Links

  *   ICFP 2012 web site: http://www.icfpconference.org/icfp2012/
  *   Past ACM SIGPLAN Erlang workshops: http://www.erlang.org/workshop/
  *   Open Source Erlang: http://www.erlang.org/
  *   EasyChair submission site: 
https://www.easychair.org/account/signin.cgi?conf=erlang2012
  *   Author Information for SIGPLAN Conferences: 
http

[Haskell] CFP: Automation of Software Test 2012

2011-12-19 Thread John Hughes
(It would be nice to see some papers on Haskell automated testing here)

John

AST 2012
7th IEEE/ACM International Workshop on Automation of Software Test
http://ast2012.org
At ICSE 2012 (http://www.ifi.uzh.ch/icse2012/)
Zurich, Switzerland, 2-3 June 2012

IMPORTANT DATES:

Submission deadline:
   February 17, 2012
Notification of acceptance:
   March 19, 2012
Camera-Ready version:
   March 29, 2012
Workshop dates:
   June 2 - 3 2012

In software development practice, testing accounts for as much as 50%
of the total development effort. It is therefore imperative to reduce
the cost and improve the effectiveness of software testing by
automating the testing process. In the past decades, a great amount of
research effort has been spent on automatic test case generation,
automatic test oracles, etc. Test automation is also becoming an
important part of software testing practice.

The workshop is the 7th edition of the successful AST workshops held
at ICSE 2006-2011. It will provide researchers and practitioners with
a forum for exchanging ideas, experiences, understanding of the
problems, visions for the future, and promising solutions to the
problems. The workshop will also provide a platform for researchers
and developers of testing tools to work together to identify the
problems in the theory and practice of software test automation and to
set an agenda and lay the foundation for future development.

TOPICS OF INTEREST

The workshop focuses on bridging the gap between the theory and
practice of software test automation. The topics of interest include,
but are not limited to, the following:

1) Methodology: Software test automation in the context of various
software development methodologies, such as model-driven,
component-based, agile, test-driven, product lines, service-oriented,
agent-oriented, and aspect-oriented methodologies, and in evolutionary
development.

2) Technology: Automation of test techniques and methods used in
test-related activities, support for software testing methods, and
applications to specific types of software in different application
domains.

3) Software testing tools and environments: Issues in the development,
operation, maintenance and evolution of software testing tools and
environments.

4) Experiments, empirical studies, experience reports, and visions of
the future of software test automation.

The AST 2012 workshop focuses on a special topic: the Automation of
Security Testing. Submissions on this topic are especially encouraged,
but papers on other topics are also welcome.

CHARETTE DISCUSSIONS

The workshop continues the successful charette discussions of previous
years. This year, the charette discussion sessions will focus on the
topic of automation of security testing.

INDUSTRIAL CASE STUDIES

There will also be an Industrial Case Studies session at the workshop,
with presentations reporting the real state of the practice in
automation of software testing. Test practitioners are especially
encouraged to contribute their experiences and insights on test
automation. Topics for this session include, but are not limited to,
the following: new tools, new best practices, process innovations
related to test automation, gaps in existing vendor toolsets, ideas
and visions for near-term research work.

SUBMISSIONS

Two types of submissions are invited: regular papers and industrial
case study papers. The former presents research in the area of
software test automation, while the latter presents practical
applications of test automation. Both types of papers are limited to 7
pages in IEEE Conference Proceedings Format. All papers submitted to
the workshop must be unpublished original work and should not be
simultaneously under review or submitted elsewhere. All submissions
must be in English and in either PDF or postscript format through
online upload to the workshop submission website at the following URL:

http://www.easychair.org/conferences/?conf=AST2012

All submissions will be reviewed by three PC members. Regular papers
will be judged on the basis of their clarity, relevance, originality,
and contribution. Case study papers will be judged on the basis of
their clarity, relevance, and interest to the workshop participants.

PUBLICATION OF PAPERS

The accepted workshop papers including both regular and case study
papers will be published in the AST 2012 Proceedings and included in
the IEEE Digital Libraries. Workshop attendees will receive a memory
stick with both ICSE 2012 and AST 2012 proceedings on it. The
presentations slides will be published on the AST 2012
website. Authors of accepted papers are required to register for the
workshop and present the paper at the workshop in order for the paper
to be included in the proceedings and Digital Libraries.

A journal special issue to publish revised and extended versions of
selected best papers is pending.


[Haskell] Chalmers FP is hiring 2 Assistant Professors in Functional Programming: deadline 2011-10-18

2011-10-12 Thread John Hughes
We're recruiting Assistant Professors to the FP group for our new Strategic 
Research project RAW FP. Come and work with us! Two-body problem? We've got 
two jobs!
Deadline coming up on the 18th.
John Hughes

http://wiki.portal.chalmers.se/cse/pmwiki.php/FP/FP

https://site1.reachmee.com/I003/chalmers/ENG/vacdetail.aspx?commadseqno=142&postback=vacancies.aspx
2 Assistant Professors in Functional Programming

The Functional Programming group at Chalmers University is seeking two 
outstanding young researchers for four year Assistant Professor positions Ref 
2011/231.

Group description
Functional programming is enjoying an unprecedented surge of interest, with 
languages such as Erlang, Scala and Haskell seeing high-profile applications in 
companies large and small. A major reason is the much improved productivity 
that software developers can enjoy, by adopting functional programming. With 
intense competition from emerging economies, productivity improvements are 
vital if the West is to retain its software industry. We have secured a large, 
five-year strategic research grant to bring these benefits to two new areas: 
signal processing and low-level control in the system layer of products such as 
radio base stations for mobile broadband, and real-time automotive software 
built around the AUTOSAR standard.

Our approach is to combine domain-specific languages embedded in Haskell or 
Erlang with rapid verification based on property-based testing (QuickCheck) and 
automated proof tools. Our work is carried out in close cooperation with 
industry, and has the potential to make a real impact in key industrial sectors.

Our group is a leading centre of functional programming research, the source of 
the QuickCheck testing tool (awarded Most Influential Paper of ICFP 2000 by ACM 
SIGPLAN), the Equinox theorem prover and Paradox model finder, and of domain 
specific languages like Lava and the Feldspar signal-processing language 
(embedded in Haskell, funded by Ericsson, the world's largest supplier of radio 
base stations). We combine strong links to industry with high academic 
standards, and thus receive funding from both basic and strategic research 
agencies. We are currently four professors (Hughes, Sheeran, Claessen, 
Jansson), five postdocs, and eight doctoral students; our new strategic grant 
will fund a significant expansion and a stream of visiting international 
researchers. The proposal that attracted the funding for these positions is 
available at 
http://www.cse.chalmers.se/~ms/SSF10Final.pdf

Although some changes have been made to the plan, the proposal gives a good 
indication of the research areas to which we expect successful candidates to 
contribute.

Job description
These are attractive four year full-time positions in one of the strongest 
Functional Programming groups anywhere. You will work at the cutting edge of 
research in Functional Programming, and will strengthen your qualifications, 
both for advancement in academia and for industrial research positions. After 
two years, progress will be assessed. After four years, an Assistant Professor 
who has been appointed in open competition may, under certain circumstances, be 
considered for appointment to a permanent position as Associate Professor.

You will be expected to take part in teaching and masters project supervision 
(to a maximum of 25%) and also in doctoral student supervision. The working 
language in the research group is English. The undergraduates are normally 
taught in Swedish, but the graduate level teaching is in English. You will gain 
experience of grant proposal writing, as we strive to find funding for our long 
term research. You will most likely be involved in industrial collaboration.

The positions are available from October 2011, although the start date is 
negotiable.

Required qualifications
We are looking for researchers with a strong record in functional programming 
research and practice, preferably related to domain specific languages, testing 
and/or automated reasoning. You should have a doctorate in computer science 
(normally completed at most five years before deadline for application), and 
ideally also post-doctoral experience. We have promised much in the above 
proposal, and you will be expected to help us to deliver! You should have a 
strong publication record for your academic age. We place particular emphasis 
on the communication of research results and the quality of your writing will 
strongly influence our assessment of you. Please make sure to include your 
three best papers in your application.

Application procedure
The application shall be marked with Ref 2011/231 and written in English. The 
application shall be sent electronically and be attached as three pdf-files, as 
below:

 1.  Application:
- a first page containing name, reference number 2011/231 and a list of all 
documents that have been enclosed,
- description

[Haskell] Looking for a PhD position in functional programming?

2011-08-24 Thread John Hughes
We're advertising a position for a PhD student in the FP group at Chalmers, 
with closing date the 1st of September. Interested? Please apply!

 
http://www.chalmers.se/cse/EN/news/vacancies/positions/phd-student-position-in8107

John Hughes


[Haskell] PhD position at Chalmers University

2011-07-07 Thread John Hughes
Looking for a funded PhD position in functional programming? The FP group at 
Chalmers is recruiting a new PhD student to join us in bringing the 
productivity of functional programming to new application domains, via domain 
specific languages embedded in Haskell. More details, and application via this 
link:

http://www.chalmers.se/cse/EN/news/vacancies/positions/phd-student-position-in8107

Deadline for applications: 1st September.

John Hughes


[Haskell] Postdoc in Functional Programming at Chalmers

2010-10-19 Thread John Hughes

The Functional Programming group at Chalmers is recruiting a postdoctoral 
researcher for a two year position, starting as soon as possible. The position 
is funded by a project in property-based testing with QuickCheck, with an 
emphasis specifically on language terms as test data. We're looking for someone 
with a PhD with a strong Functional Programming component.  Experience with 
automated test case generation, whether with QuickCheck or other tools, and 
with formal specifications, is a merit.
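For readers unfamiliar with this style of testing, here is a minimal sketch of a QuickCheck-style property over language terms. Everything here (the toy expression language, the constant-folding pass, the names) is hypothetical and invented purely for illustration; real QuickCheck generates random terms, whereas this self-contained version enumerates small ones by hand.

```haskell
-- Hypothetical sketch: property-based testing with language terms as
-- test data. A tiny expression language and a constant-folding pass;
-- the property says folding preserves the evaluated value.
data Expr = Lit Int | Add Expr Expr
  deriving Show

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b

-- A simple "optimiser": collapse additions of literals.
fold :: Expr -> Expr
fold (Lit n) = Lit n
fold (Add a b) =
  case (fold a, fold b) of
    (Lit x, Lit y) -> Lit (x + y)
    (a', b')       -> Add a' b'

-- The QuickCheck property would be \e -> eval (fold e) == eval e,
-- checked against randomly generated Exprs. Here we enumerate all
-- expressions up to a small depth instead, to stay self-contained.
exprs :: Int -> [Expr]
exprs 0 = [Lit n | n <- [0, 1, 2]]
exprs d = exprs 0 ++ [Add a b | a <- exprs (d - 1), b <- exprs (d - 1)]

main :: IO ()
main = print (all (\e -> eval (fold e) == eval e) (exprs 2))
```

With QuickCheck itself, one would instead write an `Arbitrary` instance for `Expr` and let the library search for counterexamples.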

Full details (including how to apply) are on the web here:

http://www.chalmers.se/cse/EN/news/vacancies/positions/post-doc-position-in3564

John Hughes




[Haskell] Call for fast abstracts: TAIC-PART (testing conference)

2010-05-21 Thread John Hughes
TAIC-PART is an interesting conference on testing that takes place in wonderful 
surroundings in Windsor Park. I recommend it: I much enjoyed it last year. It's 
calling for fast abstracts (short papers on new results) by June 11th. It would 
be fun to see work on testing in the FP community represented. Why not submit 
something?

John

CALL FOR FAST ABSTRACTS: Late breaking, PhD and Tools

Testing: Academic and Industrial Conference -
Practice and Research Techniques (TAIC PART 2010)

Windsor, United Kingdom
September 3 - 5, 2010

http://www2010.taicpart.org/

Theme and Goals:

TAIC PART is a conference that aims to forge collaboration between
industry and academia on the challenging and exciting problem of
software testing. It is sponsored by representatives of both industry
and academia, bringing together commercial and industrial software
developers and users with academic researchers working on the theory
and practice of software testing. The goals of TAIC PART range from
the articulation of fundamental research questions in the field of
software testing and analysis to practical challenges that are often
faced by software developers in industry.  TAIC PART is a unique event
that strives to combine the important aspects of a software testing
conference, workshop, and retreat.

Keynote Speakers:
-
Dr Wolfgang Grieskamp, Microsoft, USA
Prof Sir Tony Hoare, Microsoft Research Cambridge, UK
Prof Bertrand Meyer, ETH Zuerich, Switzerland

Call for Fast Abstracts:

We invite submission of fast abstracts with a limit of 4 pages:

- Late breaking results or work in progress will be evaluated
according to their ability to generate discussion and suggest
interesting areas for future research.

- PhD papers are for PhD students who are interested in receiving
feedback about dissertation research that is at an early stage. There
will be a dedicated PhD session at the conference.

- Tool papers must focus on the design, implementation, and evaluation
of software testing and analysis tools and will be judged by the
technical merit, novelty, and evaluation of the tool. There will be a
tools session with the opportunity to demonstrate the tools.

TAIC PART 2010 solicits papers on, but is not limited to, the
following areas:

- Test Adequacy Criteria
- Test Suite Execution
- Test Coverage Monitoring
- Automated Test Data Generation
- Regression Testing
- Automated Debugging and Fault Localization
- Performance Evaluation
- Static and Dynamic Analysis
- Verification and Validation
- Software Reliability Engineering
- Model-Based Testing
- Testing and Formal Methods
- Testing and Model Checking
- Software Testing Process
- Technology Transfer

Submissions:

Authors should submit a PDF version of their paper through the TAIC
PART 2010 paper submission site. Papers must be written in English,
and prepared according to Springer's LNCS style (guidelines:
http://www.springer.de/comp/lncs/authors.html).  All papers will be
reviewed, and accepted papers will be published in a volume of the
Springer Lecture Notes in Computer Science series (LNCS).

Important Dates:

* Fast abstract submission: June 11, 2010
* Fast abstract notification: June 21, 2010


[Haskell] AST 2010 reminder--call for papers and presentations

2010-01-12 Thread John Hughes
Just a reminder: the submission deadline for AST 2010 (Automation of Software 
Test 2010, associated with ICSE 2010 in Cape Town) is only a week away. If 
you're planning to submit a paper or a presentation for the workshop, then 
now's the time to be putting it together!

http://www.cs.allegheny.edu/ast2010/

John Hughes


[Haskell] Interesting experiences of test automation in Haskell? Automation of Software Test 2010

2009-11-27 Thread John Hughes
This is a heads up about a workshop on test automation that I just joined the 
programme committee of. Automation of Software Test will be co-located with 
ICSE in Cape Town in May--the workshop home page is here: 

http://www.cs.allegheny.edu/ast2010/

Tools like QuickCheck, SmallCheck and HUnit fit the call for papers perfectly. 
So if you're doing some interesting automated testing in Haskell, why not 
submit a paper about it, and show a new audience what the Haskell community is 
up to? Both research papers and case studies are welcome, and the latter can 
even be in the form of a presentation of up to 15 slides--so there's no excuse 
for not putting something together!

So how about it? It would be great to see some Haskell papers at the workshop! 
Deadline 20 January.

John Hughes

PS Check out the ICSE web site for information on the location: 
http://www.sbs.co.za/ICSE2010/


[Haskell] 1-year postdoc position in Chalmers Functional Programming group

2008-11-05 Thread John Hughes
The Chalmers Functional Programming Group would like to recruit a
post-doctoral researcher for a one-year tax-free stipend funded by
Intel. The funded project will develop a Domain Specific Language
(DSL) for high level modelling, design and analysis of hardware and
microarchitectures. We will concentrate on providing language support
for the abstractions involved. Intel funded the following research
proposal by Mary Sheeran and Koen Claessen from the FP group and Per
Larsson-Edefors from the VLSI group:
http://www.cs.chalmers.se/~ms/IntelProp.pdf

Our contact person at Intel is Carl Seger, who is visiting faculty at
our Department. We have been working with Intel for some years, and
find the contact to be very stimulating. Our most recent project was
on Wired, a DSL for low level hardware design, documented in Emil
Axelsson's recent PhD thesis:
http://www.cs.chalmers.se/~emax/documents/PhD_thesis.pdf

We are looking for a researcher with a very strong background and
research record in functional programming, and preferably also in
domain specific languages. We will work in Haskell, given the
expertise available in the group.  We feel that knowledge of
functional programming is most important, but of course an interest in
hardware or architectural modelling would also be an advantage.

We would like to get the project underway on Jan. 1 2009 or as soon as
possible thereafter. For further information about the project, our
group and Chalmers, please contact Mary Sheeran (ms at
chalmers.se). An application, which should be sent by email to both
Mary and Koen Claessen (koen at chalmers.se), should consist of your
CV, a list of publications with links to relevant papers, and a brief
description of your research to date. The deadline for applications is
December 1 2008. To be eligible, you must have a doctorate from a
non-Swedish University. We will plan to interview suitable candidates.

The Chalmers Functional Programming group has as its senior members
John Hughes, Mary Sheeran, Koen Claessen, Patrik Jansson and Björn von
Sydow, as well as around 8 post-docs and doctoral students. Much of
our work has centred on DSLs (QuickCheck, Lava, Wired) and we are
embarking on both this project and a project with Ericsson to develop
a DSL for Digital Signal Processing algorithm design. One of our
doctoral students, Joel Svensson, is developing a DSL for GPU
programming. We provide an excellent research environment, and can
also strongly recommend Göteborg as a great place to live. The project
is initially funded for a year, but we are hopeful of obtaining
continued funding for this work.

John Hughes


[Haskell] Last chance to recruit functional programmers via Jobs in Functional Programming at Chalmers

2007-12-09 Thread John Hughes
 

I’m happy to report that the “Jobs in Functional Programming” event to be
held at Chalmers this Friday looks like being a big success. We have seven
companies involved, and five speakers—and almost 70 people registered BEFORE
we put up posters around the campus, including some people who are
travelling from other countries just for this event. 

 

If you’re recruiting functional programmers, and you HAVEN’T got in touch
with me yet, then there is still time—although I can’t accept any more
speakers now. But if you would like to send recruitment materials, then I
will be only too happy to ensure that participants receive a copy. Just
don’t ask me to print your materials for you: there won’t be time for that.
If you would like to send such materials, then courier them to me at 

 

Department of Computer Science and Engineering

Chalmers University of Technology

S-41296 GÖTEBORG

Sweden

 

They need to arrive by Friday morning at the latest. Let me know by email to
expect them.

 

It’s delightful to find that there are both job-seekers and employers enough
to make this kind of event a success!

 

John Hughes



[Haskell] Jobs in Functional Programming event at Chalmers

2007-12-03 Thread John Hughes
I'm proud to announce that the first Jobs in Functional Programming
recruitment evening will be held at Chalmers University on the 14th of
December. As far as I am aware, this is the first time such an event has
been held anywhere in the world. While we're holding the event at Chalmers
because we have many skilled functional programmers among our students, it
is of course open to anyone who would like to take part. All the details can
be found at www.jobs-in-fp.org. 

 

Welcome to what promises to be a very exciting event!

 

John Hughes



[Haskell] Recruiting functional programmers

2007-11-20 Thread John Hughes
Interested in recruiting Haskell programmers from Chalmers/Gothenburg
university? As an experiment, I am planning a recruitment event here in
December-see www.jobs-in-fp.org for how to take part.

 

John Hughes



[Haskell] PhD position at Chalmers

2007-10-26 Thread John Hughes
The Functional Programming group at Chalmers is seeking to recruit a PhD
student to work on domain-specific languages embedded in Haskell for
hardware design, and for programming graphics processors. PhD positions in
Sweden are “real jobs”, paying a respectable salary for up to five years.
The official announcement follows.

 

John Hughes

 

PhD Position in Functional Programming at the department of Computer Science
and Engineering

The Department provides strongly international and dynamic research
environments with 76 faculty and 55 PhD students from about 30 countries.
For more information, see http://www.chalmers.se/cse/EN/ 

Knowledge of Swedish is not a prerequisite for application. English is our
working language for research. Both Swedish and English are used in
undergraduate courses. Half of our researchers and PhD students are native
Swedes; the rest come from more than 20 different countries. 

Aim of work
The PhD student will pursue research on applying techniques from
functional programming to develop novel methods of hardware design and
verification. The PhD student will join the research activities at our
department in applications of functional programming. In particular, we have
concentrated on the design and application of Domain Specific Embedded
Languages in Haskell. Two examples of our work are QuickCheck for
specification-guided random testing and Lava for hardware design and
verification. We have also developed Wired, for designing high performance
hardware at the layout level. This project continues the above theme, by
considering higher levels of abstraction, but still aiming to take account
of physical properties, to give good performance and power consumption. We
will even extend our ideas about circuit design to data parallel
programming, including GPU programming. 

Applicants must have a very good undergraduate degree in Computing Science
or in a related subject with a strong Computing Science component. They must
also have a strong, documented interest in doing research. The ideal student
for the project will be an accomplished functional programmer with an
interest in applied functional programming. 

You may even apply if you have not yet completed your degree, but expect to
do so by 1 January 2008. 

In order to improve gender balance, Chalmers welcomes in particular
applications from female candidates. 

How to apply  

The full application should contain 

1. A letter of application, listing specific research interests 

2. A curriculum vitae 

3. Attested copies of degrees and other certificates 

4. Copies of relevant work, for example dissertations or
articles, that you have authored or co-authored 

5. Letters of recommendation from your teachers or employers 

*** You MUST include or e-mail Letters of Recommendation: we typically get
over 100 apps, and it is simply not feasible for us to request individual
letters *** 

The job reference number is: 2007/161. The last date for your full
application to arrive is November 27, 2007 

Send your application electronically in PDF files or by paper-mail to
Registrator, Chalmers University of Technology, Se-412 96 Göteborg, Sweden.
Phone: +4631 772 1000, Fax: +4631 772 4922, E-mail:
[EMAIL PROTECTED]

Details about employment

The appointee must also have been admitted to Chalmers Doctoral Program. PhD
student positions are limited to five years and will then normally include 20%
departmental work, mostly teaching duties. Admission and employment are
regulated in Chalmers' Rules of Procedure for Doctoral Programs, §4 and §5.
Salary follows the agreement for PhD student positions.

Further information

Research level: Professor Mary Sheeran, e-mail: ms'at'chalmers.se 
Division level: Graduate studies director Philippas Tsigas, e-mail: tsigas'at'chalmers.se 

Union representatives: SACO Jan Lindér, ST-ATF Marie Wenander, SEKO Ralf
Berndtsson All reachable via Chalmers exchange: +46 31 772 10 00 

 

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Software Engineering and Functional Programming (with Haskell)

2007-04-04 Thread John Hughes
Take a look at World Class Product Certification using Erlang by Ulf Wiger et 
al. It's about a real project, not a scientific experiment, but even so it aims 
to demonstrate some of the claims made for FP. It's Erlang, not Haskell, but 
that doesn't really matter. The product is certainly a significant 
system--it's 1.5 million lines of Erlang, and is in use in Ericsson-supplied 
telephone networks around the world. But it's not built exclusively with 
Erlang, and actually I think that's an unreasonable demand--virtually any large 
system is built with a mixture of languages. In this case there are 2 million 
lines of C doing low-level data transport.

I'm sure you'll find the Commercial Users of Functional Programming workshops 
interesting too, although there are only slides available in most cases, not 
papers. There are many success stories there.

John
  - Original Message - 
  From: Sukit Tretriluxana 
  To: haskell@haskell.org 
  Sent: Tuesday, April 03, 2007 10:06 PM
  Subject: [Haskell] Software Engineering and Functional Programming 
(with Haskell)


  Hi all,

  I'm a Software Engineering (SE) Master's degree student at CMU. As part of 
the program, each of us needs to present a topic that's related to SE. I am 
picking Functional Programming with Haskell as the topic as I believe it has a 
lot of direct impact on SE due to its nature that requires a whole new 
world of thinking process, design, analysis, development, testing, and 
deployment. 

  Unfortunately my instructor disagrees that the topic is relevant. In his 
response, he mentioned that he will accept the topic only if I can prove the 
following.


  Haskell has been around for quite a while.  To convince me,
  you'll have to give me references that I can read about
  nontrivial examples of significant software systems already
  built exclusively with Haskell which includes the software 
  engineering principles applied in this environment and the
  software measures that demonstrate the claims. I
  welcome the opportunity for you to provide me with such
  in-depth research references to support your viewpoint. 


  Straight off the bat, I have very limited visibility in terms of finding him 
the resources to prove this. I am wondering if any of you all here could shed 
some light on where I can find a couple of compelling pieces of evidence to convince him. In 
fact, my presentation topic is not specifically tied to Haskell but more to FP. 
So any resources that provide such information on FP in general would do as 
well.

  Thanks,
  Ed






--


  ___
  Haskell mailing list
  Haskell@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] How to write elegant Haskell programms?

2007-01-30 Thread John Hughes




I think that whole program flow thing is something you get used to. In
true, pure functional programming (i.e. Haskell) program flow is a
meaningless term, basically. Haskell is a declarative language, not an
imperative one. You have to learn to give up that control and trust the
runtime to Do The Right Thing. (In this regard it's similar to logic
programming languages.)



I think it's important/useful to point out that program flow in a pure
functional language is really a matter of data dependency. The compiler is
only free to arbitrarily order computations if there are no data
dependencies. Furthermore, monads are not special in any way (they are after
all just a useful set of combinators; e.g.
http://sigfpe.blogspot.com/2006/08/you-could-have-invented-monads-and.html);
they only wind up sequencing computations because they set up a data
dependency between the two arguments of the bind operator.

-Jeff

And actually they don't even do that, always. A useful example in 
practice: the Gen monad in QuickCheck does *not* necessarily set up any 
data dependencies, so do in the Gen monad does not force sequencing. The 
fact that it's non-strict is what enables us to generate infinite random 
data-structures with it.
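
The non-strict bind being described can be sketched with a dependency-free toy Gen. Everything here (the seed-splitting, the names, the toy generator) is my illustrative assumption, not QuickCheck's actual code, but the mechanism is the same: bind threads seeds, not values, so nothing forces the tail.

```haskell
-- A self-contained sketch (no libraries): a toy Gen whose bind splits
-- the seed and builds its result lazily.
type Seed = Int

-- toy linear congruential step, good enough for illustration
step :: Seed -> Seed
step s = (1103515245 * s + 12345) `mod` 2147483648

splitSeed :: Seed -> (Seed, Seed)
splitSeed s = (step s, step (s + 1))

newtype Gen a = Gen (Seed -> a)

runGen :: Gen a -> Seed -> a
runGen (Gen f) = f

bindGen :: Gen a -> (a -> Gen b) -> Gen b
bindGen (Gen m) k = Gen (\s ->
  let (s1, s2) = splitSeed s
  in runGen (k (m s1)) s2)   -- (m s1) is passed along unevaluated

chooseDigit :: Gen Int
chooseDigit = Gen (\s -> step s `mod` 10)

-- An infinite random stream: this terminates under laziness precisely
-- because bindGen sets up no data dependency on the generated tail.
stream :: Gen [Int]
stream =
  chooseDigit `bindGen` \x ->
  stream      `bindGen` \xs ->
  Gen (const (x : xs))

main :: IO ()
main = print (take 5 (runGen stream 42))
```

Forcing five elements forces only five seed steps; the rest of the infinite structure stays a thunk.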


John Hughes
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Debugging partial functions by the rules

2006-11-15 Thread John Hughes



From: Robert Dockins [EMAIL PROTECTED]

It seems to me that every possible use of a partial function has some
(possibly imagined) program invariant that prevents it from failing.
Otherwise it is downright wrong. 'head', 'fromJust' and friends don't do
anything to put that invariant in the program text.
Well, not really. For example, I often write programs with command line 
arguments, that contain code of the form

 do ...
    [a,b] <- getArgs
    ...

Of course the pattern match is partial, but if it fails, then the 
standard error message is good enough.

This applies to "throw away" code, of course, and if I decide to keep the 
code then I sooner or later extend it to fix the partiality and give a more 
sensible error message. But it's still an advantage to be ABLE to 
write the more concise, but cruder version initially.

This isn't a trivial point. We know that error handling code is a 
major part of software cost--it can even dominate the cost of the 
"correct case" code (by a large factor). Erlang's "program for the correct 
case" strategy, coupled with good fault tolerance mechanisms, is one reason for 
its commercial success--the cost of including error handling code *everywhere* 
is avoided. But this means accepting that code *may* very well 
fail--the failure is just going to be handled somewhere else.

Haskell (or at least GHC) has good exception handling mechanisms too. 
We should be prepared to use them, and "let it fail" when things go wrong. 
The savings of doing so are too large to ignore.
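
The "let it fail" style can be sketched in miniature (my example, not from the thread; the program shape and usage message are assumptions): keep the crude partial match, and recover from its failure in one place.

```haskell
import Control.Exception (SomeException, handle)
import System.Environment (getArgs)

-- The partial pattern match stays concise; a failed match in an IO
-- do-block raises an exception, which the handler catches elsewhere.
main :: IO ()
main = handle fallback $ do
  [a, b] <- getArgs              -- partial: fails unless exactly two args
  putStrLn (a ++ " and " ++ b)
  where
    fallback :: SomeException -> IO ()
    fallback _ = putStrLn "usage: prog A B"
```

Run with no arguments, the pattern-match failure propagates to `handle`, which prints the usage line instead of a crash.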

John


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Standard syntax for preconditions, postconditions, and invariants

2006-10-22 Thread John Hughes



I propose that haskell' include a standard syntax for invariants that
the programmer wants to express.



The intent is not to have standardized checks on the invariants, it's
just to supply a common way to specify invariants so that the various
strategies for checking them can all work from the same data. For
example, one tool might use the invariants to generate QuickCheck
properties




I think it's too early for this. As others have pointed out, it's not even 
clear

what the semantics of these things should be. What's more, the way you
write such properties depends partly on what you intend to do with them.
QuickCheck properties aren't just logical properties, they're restricted
to make them testable, and carry extra information to control test case
generation. Other tools, like partly automated provers, may need other
additional information, such as induction variables. We spent a lot of time
looking for a unified property notation in the Cover project, and we didn't
find one we were really happy with. The P-logic that Programmatica uses,
and the preconditions that Dana Xu used in her Haskell workshop paper,
are quite different from each other. We need more research on the tools that
would use these annotations, before we try to fix their syntax.

John 



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Proposal for stand-alone deriving declarations?

2006-10-06 Thread John Hughes

   What I implemented in GHC is an extension of the proposal below. The
   proposal just mentions:

   deriving Class for Type

   In GHC I also added a form for newtype deriving of multi-parameter
   type classes:

   deriving (Class t1 ... tn) for Type

   I think that it's close to what we ended up with when talking about
   it at the Hackathon. My intuition about this syntax is that except
   for the for Type part, it looks the same as a normal deriving
   clause. The for part is just there to connect it to a data/newtype
   declaration. This lets it pretty much use the normal code for
   deriving declarations.

   Stand-alone deriving declarations are currently a little bit weaker
   than normal deriving clauses, since the current implementation does
   not let you reference the type arguments of a newtype in the
   arguments of an MPTC. See my response to Bulat on [EMAIL PROTECTED]
   for more details.

   /Björn



 


A suggestion re syntax:

With the newtype-deriving extension, the instances named in a deriving clause are not 
just class names, but partial applications of class names to all but the last argument. 
Why all but the last one? Because the last one is the type being defined. Once deriving 
clauses are separated from type definitions, then there is no type being defined any 
more--hence the need for "for Type" in your syntax, and the introduction of 
another keyword. But why single out one class parameter, once deriving clauses are 
separated from type definitions? Why not simply write the FULL application of the class 
in the deriving list? Thus:

deriving (Eq Foo, Ord Foo)

instead of

deriving (Eq, Ord) for Foo

I find the former syntax clearer and more readable, actually.
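
For reference, the full-application form advocated here can be written down concretely. This sketch uses the `StandaloneDeriving` syntax that GHC later shipped, which follows exactly this shape; the type `Foo` is, of course, a made-up example:

```haskell
{-# LANGUAGE StandaloneDeriving #-}

data Foo = A | B | C

-- the class is fully applied in the deriving declaration itself,
-- so no "for Type" clause or extra keyword is needed
deriving instance Eq  Foo
deriving instance Ord Foo

main :: IO ()
main = print (A == A, compare A B)   -- (True,LT)
```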

John



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell] Re: Haskell Weekly News: September 27, 2006

2006-09-29 Thread John Hughes
The intention is to put the speaker's slides online. But in some cases, that 
will require additional permission from the company concerned--putting 
slides on the web is more public than talking at a workshop. So some 
sanitation may perhaps be needed. All this is going to take a little while, 
so while the slides WILL appear in the medium term, don't hold your breath.


John

- Original Message - 
From: Cyril Schmidt [EMAIL PROTECTED]

To: haskell@haskell.org
Sent: Friday, September 29, 2006 3:47 PM
Subject: [Haskell] Re: Haskell Weekly News: September 27, 2006



Donald Bruce Stewart wrote:

Conference roundup



  The [41]Commercial Users of Functional Programming workshop (CUFP),
  held in association with [42]ICFP, attracted a record turn-out this
  year.

[...]

* Haskell programming at Credit Suisse. Howard Mansell talked about
  Haskell programming at [44]Credit Suisse in New York.


Sadly, I was unable to attend. Is the presentation, by any chance,
available online? Pointers to other related info will be much
appreciated.

(Yes, I know that a report of the conference will be published, but
that won't happen before December :(

Regards,

Cyril

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell




___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] does the compiler optimize repeated calls?

2006-09-06 Thread John Hughes



On 9/6/06, Tamas K Papp [EMAIL PROTECTED] wrote:
 


or does the compiler perform this optimization?  More generally, if a
function is invoked with the same parameters again (and it doesn't
involve anything like monads), does it make sense
(performancewise) to store the result somewhere?
   



I was wondering something like this too, and then I found this email:
http://www.haskell.org/pipermail/glasgow-haskell-bugs/2004-December/004530.html

So I guess it is as Stephane said: theoretically possible but not actually done?

--eric the perpetual newbie

 

The trouble is that this isn't always an optimisation. Try these two 
programs:


powerset [] = [[]]
powerset (x:xs) = powerset xs ++ map (x:) (powerset xs)

and

powerset [] = [[]]
powerset (x:xs) = pxs ++ map (x:) pxs
 where pxs = powerset xs

Try computing length (powerset [1..n]) with each definition. For small 
n, the second is faster. As n gets larger, the second gets slower and 
slower, but the first keeps chugging along. The problem is that the 
second has exponentially higher peak memory requirements than the first. 
Round about n=25, on my machine, all other programs stop responding 
while the second one runs. You don't really want a compiler to make that 
kind of pessimisation to your program, which is why it's a deliberate 
decision to leave most CSE to the programmer. You can still write the 
second version, and suffer the consequences, but at least you know it's 
your own fault!
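
To experiment with this yourself, both definitions can live in one file (the names `powerset1`/`powerset2` are mine, for side-by-side comparison):

```haskell
-- powerset1 recomputes the recursive call; powerset2 shares it via CSE.
-- Both produce the same 2^n subsets, but their peak memory use differs.
powerset1, powerset2 :: [a] -> [[a]]
powerset1 []     = [[]]
powerset1 (x:xs) = powerset1 xs ++ map (x:) (powerset1 xs)

powerset2 []     = [[]]
powerset2 (x:xs) = pxs ++ map (x:) pxs
  where pxs = powerset2 xs

main :: IO ()
main = do
  print (length (powerset1 [1 .. 15 :: Int]))  -- 32768
  print (length (powerset2 [1 .. 15 :: Int]))  -- 32768
```

Compiling with `ghc -O` and running with `+RTS -s` reports maximum residency; pushing n toward 25 makes the shared version's exponential peak visible while the recomputing version stays flat.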


John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] does the compiler optimize repeated calls?

2006-09-06 Thread John Hughes

John Hughes wrote:

The trouble is that this isn't always an optimisation. Try these two
programs:

powerset [] = [[]]
powerset (x:xs) = powerset xs++map (x:) (powerset xs)

and

powerset [] = [[]]
powerset (x:xs) = pxs++map (x:) pxs
 where pxs = powerset xs

Try computing length (powerset [1..n]) with each definition. For small
n, the second is faster. As n gets larger, the second gets slower and
slower, but the first keeps chugging along. The problem is that the
second has exponentially higher peak memory requirements than the
first. Round about n=25, on my machine, all other programs stop 
responding

while the second one runs. You don't really want a compiler to make
that kind of pessimisation to your program, which is why it's a
deliberate decision to leave most CSE to the programmer. You can
still write the second version, and suffer the consequences, but at least 
you know

it's your own fault!


Thanks for the above example. I found it quite difficult to understand why 
the second is worse than the first for large n, but I think the reason is 
that you're using the second def in conjunction with (length). Therefore 
it is the *combination* of the cse'd (powerset) with (length) that is less
efficient, because (length) just reads its input as a stream, so there is
no need for the whole of (powerset xs) to exist in memory; thus the
non-cse'd version gives a faster (length . powerset).


Yes... not just length, of course, but any function that consumes its input 
lazily, or perhaps I should say in one pass. For example, if you print 
out the result of powerset, then the print function makes only one pass over 
it, and the first version will run in linear space in n, while the second 
takes exponential. But then you'll be doing so much I/O that you won't be 
able to run the code for such large n in reasonable time--that's the reason 
I chose length in my example, it's a list consumer that isn't I/O-bound.


Just to be explicit, the reason the second is worse is that the pointer to 
pxs from the expression map (x:) pxs prevents the garbage collector from 
recovering the space pxs occupies, while the pxs++ is being computed and 
consumed. So you end up computing all of pxs while the pxs++ is running, AND 
STORING THE RESULT, and then making a second pass over it with map (x:) pxs, 
during which pxs can be garbage collected as it is processed. In the first 
version, we compute powerset xs twice, but each time, every cell is 
constructed, then immediately processed and discarded, so every garbage 
collection reclaims almost all the allocated memory.


Ideally it would be great if the compiler could make use of the context in 
which a function is being applied to produce optimized code across 
function boundaries. In the above example of (length . powerset), (length) 
has no interest in the contents of the powerset itself so could the 
compiler not fuse (length . powerset) into the following function:


   lengthPowerset [] = 1
   lengthPowerset (x:xs) = 2 * lengthPowerset xs

The compiler would need to analyse the definition of (++) and (map) to 
discover that


   length (x ++ y) === length x + length y

   length (map f y) === length y

and with that knowledge I imagine the steps could be something like:

   lengthPowerset [] = length (powerset []) = length ([[]]) = 1

   lengthPowerset (x:xs)
   = length (powerset xs ++ map (x:) (powerset xs))
   = length (powerset xs) + length (map (x:) (powerset xs))
   = length (powerset xs) + length (powerset xs)
   = lengthPowerset xs + lengthPowerset xs
   = 2 * lengthPowerset xs

After getting the function (lengthPowerset) as above, I'd also expect the 
compiler to apply a transformation into a tail recursive function:


   lengthPowerset y = lengthPowerset' y 1
   where
   lengthPowerset' [] i = i
   lengthPowerset' (_:xs) i = lengthPowerset' xs $! 2*i

resulting in a tightly coded machine code loop to rival, or greatly 
exceed(!), the best efforts of C.


In the meantime I tend to code in Haskell just expecting these kind of 
optimizations to be done (unless I'm writing a really time-critical piece 
of code that can't wait), knowing of course that they might not be done 
just at the moment but at least some time in the (hopefully not too 
distant) future... ;-)


Regards, Brian.


You know, I suspect you could get a lot of this to happen by programming 
GHC's optimiser using rewrite rules. But I'm going to leave it as an 
exercise for the reader (he he:-). For the compiler to do all this without 
guidance, would, I suspect, require much more theorem proving than it will 
be reasonable for compilers to do for a long, long time.
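
The two length identities from the discussion can indeed be written as GHC rewrite rules. The sketch below shows the mechanism only; whether these rules actually fire depends on optimisation level, inlining, and rule phases, so treat it as an illustration, not a guaranteed optimisation.

```haskell
-- GHC RULES encoding: length distributes over (++), and map preserves
-- length. GHC applies matching rules during optimisation (with -O).
{-# RULES
"length/++"  forall xs ys. length (xs ++ ys) = length xs + length ys
"length/map" forall f xs.  length (map f xs) = length xs
  #-}

powerset :: [a] -> [[a]]
powerset []     = [[]]
powerset (x:xs) = powerset xs ++ map (x:) (powerset xs)

main :: IO ()
main = print (length (powerset [1 .. 12 :: Int]))  -- 4096
```

Even when both rules fire, deriving the doubling recurrence and the tail-recursive loop would still require the inductive reasoning the text describes, which is beyond simple rewriting.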


John 




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Exercise in point free-style

2006-09-01 Thread John Hughes

From: Julien Oster [EMAIL PROTECTED]
Subject: [Haskell-cafe] Exercise in point free-style

I was just doing Exercise 7.1 of Hal Daumé's very good Yet Another
Haskell Tutorial. It consists of 5 short functions which are to be
converted into point-free style (if possible).

It's insightful and after some thinking I've been able to come up with
solutions that make me understand things better.

But I'm having problems with one of the functions:

func3 f l = l ++ map f l

Looks pretty clear and simple. However, I can't come up with a solution.
Is it even possible to remove one of the variables, f or l? If so, how?

Thanks,
Julien


Oh, YES!!

Two ways to remove l:

func3a f = uncurry ((.map f).(++)) . pair
func3b f = uncurry (flip (++).map f) . pair

And just to make sure they're right:

propab new f l =
 func3 f l == new f l
 where types = f :: Int -> Int

quickCheck (propab func3a)
quickCheck (propab func3b)

If you don't mind swapping the arguments, then

propc f l =
 func3 f l == func3c l f
 where types = f :: Int -> Int

func3c l = (l++) . (`map` l)

With the arguments swapped, you can even remove both!

propd f l =
 func3 f l == func3d l f
 where types = f :: Int -> Int

func3d = uncurry ((.(flip map)) . (.) . (++)) . pair

MUCH clearer!

The trick is to observe that l is duplicated, so you need to use a 
combinator that duplicates something. The only one available here is pair, 
which you then have to combine with uncurry.
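
For readers trying these definitions: `pair` is not in the Prelude. The definition assumed throughout is presumably plain value duplication, shown here together with one of the solutions above:

```haskell
-- assumed definition: duplicate a value into a tuple
pair :: a -> (a, a)
pair x = (x, x)

func3a :: (b -> b) -> [b] -> [b]
func3a f = uncurry ((. map f) . (++)) . pair

main :: IO ()
main = print (func3a (+1) [1, 2, 3 :: Int])  -- [1,2,3,2,3,4]
```

Tracing it: `pair` duplicates the list, and `uncurry` feeds one copy to `(++)` and the other, via `map f`, to its second argument.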


It would be nicer to have

(f &&& g) x = (f x, g x)

available. ((&&&) is one of the arrow combinators.) Then you could remove l by

func3e f = uncurry (++) . (id &&& map f)

which is sort of readable, and remove both by

func3f = (uncurry (++).) . (id &&&) . map

John 




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: map and fmap

2006-08-29 Thread John Hughes




On 8/28/06, John Hughes [EMAIL PROTECTED] wrote:

No, map was never overloaded--it was list comprehensions that were
overloaded as monad comprehensions in Haskell 1.4. That certainly did 
lead

to problems of exactly the sort John M is describing.


I just checked the reports for Haskell 1.3 and 1.4 on the Haskell
website and they both state that the method of 'Functor' was 'map'.  I
only started using Haskell towards the end of 1.4, so I don't have
much experience with those versions of the language, but it seems that
having an overloaded 'map' was not much of a problem if only a few
people noticed.

-Iavor



Good Lord, I'd forgotten that! So I'm afraid I've also forgotten the details 
of the arguments that led to fmap being introduced--maybe others can fill 
them in. But I wouldn't conclude from that that only a few people noticed 
and so it would be OK to overload map again.


On the contrary, it seems we had plenty of experience with an overloaded 
map--it was in the language for two and a half years, and two language 
versions. In the light of that experience, the Haskell 98 committee 
evidently decided that overloading map was a mistake, and introduced fmap 
for the overloaded version. Now, this was an incompatible change, and the 
Haskell committee was always very wary of making such changes--so there must 
have been a weight of experience suggesting that overloading map really was 
a mistake. It wouldn't have been changed on the basis of abstract 
discussions of small examples. My own bad experiences with list overloading 
were with monad comprehensions, but others must have had bad experiences 
with overloaded map also. Given that it's been tried--and tried so 
thoroughly--and then abandoned, I would be very wary of reintroducing it.


We didn't simplify things in Haskell 98 for the sake of it--we simplified 
things because users were complaining that actually using the language had 
become too complex, that there were too many corners to stumble on. I think 
we did a good job--certainly, the Haskell community began growing 
considerably faster once Haskell 98 came out. So I'd be very nervous about 
undoing some of the simplifications we made at that time.


John



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: map and fmap

2006-08-20 Thread John Hughes


From: Jon Fairbairn [EMAIL PROTECTED]


To
reinforce what Aaron said, if a programme works now, it'll
still work if map suddenly means fmap.


Well, this isn't quite true, is it? Here's an example:

class Foldable f where
 fold :: (a -> a -> a) -> a -> f a -> a

instance Foldable [] where
 fold = foldr

example = fold (+) 0 (map (+1) (return 2))

example has the value 3 (of course), but if you replace map by fmap then the 
code no longer compiles.


In any case, I'm dubious about this as a criterion. I would guess that the 
majority if compiler runs for beginners (and perhaps for the rest of us 
too!) end in a type error, not a successful compilation, so arguably the 
quality of error messages when a type-check fails is more important than 
which programs compile.


John



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: Class System current status

2006-05-15 Thread John Hughes

Stephanie wrote:

 Simon,

 Why is an Appendix is better than just a footnote in the Standard that
 says we aren't sure, one way or the other, whether FDs will stay in
 the language for ever.  Why do we need this extra structure?

 I'm worried that this extra structure could be confusing. In
 particular, if someone says this program is pure Haskell'  what will
 that mean? In practice, will it be clear whether pure Haskell'
 includes the Appendix?

I don't like the appendix idea either--or a footnote for that matter. I don't 
think a language definition should be cluttered by remarks about whether or not 
the designers are sure they got each bit right. It should just define a 
language.

It seems to me that Haskell' is gaining a status in our minds that it doesn't 
deserve--that the main language report should be a thing of permanence that 
will stand the test of time. Previous versions of Haskell haven't. There was no footnote 
in the Haskell 1.2 report indicating that the IO system might be replaced... yet that's 
exactly what happened in 1.3, a much more dramatic change than replacing FDs by ATs will 
be. Future versions of Haskell may change things--that's a given. The language will 
continue to evolve. We've even been discussing changing the semantics of pattern matching 
this time around--which shows that even basic parts of the language may be called into 
question in the future.

Haskell' should define a standard language for use TODAY--and it should be 100% clear 
what that language is, with no pussy-footing around difficult choices. In my view it 
should include FDs. Then in the future they may be replaced--but it should then be clear 
that this IS a replacement, with no arguments of the sort well it's not really an 
incompatible change because FDs were only in an appendix! Let's face it, people ARE 
going to use FDs whatever the standard says, because they're just so godamn useful, and 
rewriting those programs if FDs are replaced by ATs is not going to be any easier because 
it's an appendix that's changing, rather than the main body of the report.

John

___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org//mailman/listinfo/haskell-prime


Re: [Haskell] Haskell as a disruptive technology?

2006-03-27 Thread John Hughes

Dusan Kolar wrote:



Malcolm Wallace wrote:


Paul Johnson [EMAIL PROTECTED] wrote:

 


Is there a market that is poorly served by the incumbent languages for
which Haskell would be an absolute godsend?




Yes.  Safety critical systems, encompassing everything from avionics to
railway signalling equipment, to medical devices.  These markets are
relatively small / low-volume, with needs for high assurance, and better
development times.
  


Well, the market is growing and not that small. ;-) Think of mobile 
phones and cars, for instance, they are full of embedded computers.


Mobile phones are a big market, but they are not safety critical. In 
fact, you can make a phone which crashes quite often and still sell it 
successfully (as I know to my cost!). Low power (==long battery life) is 
probably more important for phone software. Here execution time is 
strongly correlated with energy use, but things like compacting garbage 
collection can actually help--you can turn off part of your memory after 
a GC and save the leakage current. I think low power functional 
programming could be very interesting, but we certainly have no good 
story on this point at the moment.


Car components are commodities and sold under extreme price pressure. 
Because volumes are high, the price per unit is all that matters, and 
there is typically no way to charge for software at all--it's assumed 
that the subcontractor develops it as part of the development cost of 
the component. Maybe that could give Haskell an advantage--lower 
software development costs--as long as the hardware doesn't need to cost 
a penny more. But again, with hard real time constraints, this doesn't 
feel like a natural market for Haskell in its current state.


I think Paul asked just the right question--but I wonder if we'll see 
the answer on this list? After all, anyone with a really *good* answer 
stands to make a lot of money...


John
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Haskell Weekly News: March 13, 2006

2006-03-16 Thread John Hughes




With a view to this I started collecting just the announcements on a
`feed' here:
   http://www.cse.unsw.edu.au/~dons/code/hwn/announce.html

These should serve as a basis for the content, I think.

 



Can you add an actual date? Seeing things dated a few days ago does 
contribute to a feeling of great activity, I think.


John

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] library sort

2006-03-08 Thread John Hughes



On Saturday, 4 March 2006 at 21:30, Neil Mitchell wrote:
 


And a related question is: Which packages are searchable by Hoogle?
 


The best answer to that is some. I intentionally excluded OpenGL and
other graphics ones because they have a large interface and yet are
not used by most people using Haskell. [...]
   



Well, this is a bold assumption IMHO, and I'm not particularly happy with that, 
as you can probably imagine. For my part, I would assume that Joe Programmer 
is much more likely to use some multimedia packages than TH or Data.Graph.* 
etc., but this is a bold assumption on *my* side...


...


Well, this is a bold assumption IMHO, and I'm not particularly
happy with that, as you can probably imagine.
   



I would also imagine that Joe Programmer is more likely to use
wxHaskell or Gtk2Hs than those - however because those are outside the
standard tree they don't make it in. I don't think much of TH made it
in either (not because of deliberate exclusions, but because of
technical limitations in the tool).

 

When I surveyed Haskell users, I asked respondents to name the most 
important tools and libraries they use. (Caveat: respondents saw the 
list of tools and libraries already named, and could include these just 
by selecting them, so tools mentioned early in the survey were more 
likely to be named by subsequent respondents). Here are a few relevant 
entries, where the percentage is the proportion of respondents who named 
the tool:


29% Parsec
19% wxHaskell
16% QuickCheck
16% haddock
12% Monadic Parser Combinators
11% Gtk2Hs
9% hs-plugins
8% HaXml
7% Data.*
7% Monad foundation classes
6% Arrows
6% HOpenGL

The list includes all libraries named by more than 5% of respondents. 
Sure enough, wxHaskell and Gtk2Hs are more popular, but 6% naming 
HOpenGL as among the most important libraries is quite respectable.



John
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Library survey results

2006-03-08 Thread John Hughes

 29% Parsec
 19% wxHaskell
 16% QuickCheck
 16% haddock
 12% Monadic Parser Combinators
 11% Gtk2Hs
 9% hs-plugins
 8% HaXml
 7% Data.*
 7% Monad foundation classes
 6% Arrows
 6% HOpenGL

 The list includes all libraries named by more than 5% of respondents.
 Sure enough, wxHaskell and Gtk2Hs are more popular, but 6% naming
 HOpenGL as among the most important libraries is quite respectable.

 Well, I've never said that it is among the most important libraries, but
 OTOH I really much doubt that the way the survey was done delivers
 anything near reliable results. It heavily biases early entries, and I
 dare to speculate that the people taking part in the survey were probably
 not even near to a representative group, but a bunch of highly motivated,
 experienced non-Joe-Programmer kind of people who are actively
 participating on the mailing lists etc.

It wasn't as bad as you think--over 580 replies, with less than a quarter
of those from academics (there were actually more replies from people
working in industry than from academics!). So I'd dispute that the survey
tells us mostly about what the research community in particular wants. I'd
guess the survey was pretty representative of KEEN Haskell users. It's a
bit unclear who we mean by Joe Haskell-programmer anyway--I bet that, apart
from students of course, the number of Haskell programmers who are using
the language just because their boss tells them to can be counted on the
fingers of one hand! I'd claim Joe Haskell programmer probably IS keen,
highly motivated, experienced and active. In fact, the one group that is
obviously underrepresented badly in my survey is just students--I got
replies from just under 300, while another survey of teachers shows that at
least 5-10,000 were taught Haskell last year. But maybe tools like hoogle
SHOULD be aimed at the most active users?

You're right that libraries mentioned earlier in the survey received more
votes as a result, but since I have a record of all responses *in time
order* I can see the difference, for each library, between pre- and
post-first-mention behaviour. HOpenGL was mentioned in the 7th response,
wxHaskell in the first, and Gtk2Hs in the 17th, so they were in the
"previously mentioned" category for almost the entire survey, and the
effect you're talking about was not significant for a comparison between
those three. Data.*, on the other hand, was first mentioned about half way
through the survey, which indicates that around 12% of respondents selected
it as among the most important libraries, when prompted to do so by seeing
its name. However, the proportion who name Data.* spontaneously is below
2%--and that conclusion is statistically significant at the 99% level. 580
replies is enough to say statistically significant things (about the
population the survey sampled, anyway). I feel quite inspired--when I have
a spare moment, I'll analyse the results more carefully and see what one
actually CAN say with a degree of certainty, taking into account when each
library was first mentioned.

 Furthermore, some of the percentages above are extremely
 strange, e.g. how can people use huge GUI toolkits with 30% while staying
 largely away from something as fundamental as Data.*?

I don't find it so strange, really. Data.* implements lots of useful
standard datatypes, but you can import some of them in other ways (import
Maybe for example), and in many cases it's not too hard to roll your own.
OK, maybe with a worse result, but you're not *forced* to use Data.* -- so
if you're used to using something else, there's no 100% compelling reason
to change. Rolling your own wxHaskell or Gtk2Hs is prohibitively hard. So
I'm not at all surprised that GUI toolkits rated much higher--people's
natural conservatism gives them a big advantage, and even Haskell users can
be conservative sometimes!

Perhaps the most surprising thing here is that only 30% of keen Haskellers
think a GUI is important!

John




Re: Haskell-prime Digest, Vol 2, Issue 58

2006-02-24 Thread John Hughes

   From: Claus Reinke [EMAIL PROTECTED]

   let's go through 5.2 Export Lists to see what would be missing
   if we tried to replace the export list with a separation of a module
   into a public (exported) and a private (local) part:
   ...
   any other issues I missed here?

I feel unkeen.

One of the nice things about Haskell is that definitions can appear in any 
order. That makes it possible to gather a group of logically related 
definitions together, within a module. With your proposal, exported definitions 
and non-exported ones would have to be separated.

What would that mean in practice? Suppose I have a few exported functions and a 
collection of auxiliary functions that are used to implement them. I have two 
choices: either I put the exported definitions in the public section, and the 
remaining ones elsewhere in the private section, or I put everything in the 
private section with appropriate redeclarations in the public part -- 
exportedName = localExportedName or whatever. The first alternative has the 
disadvantages that logically related code is separated, and that the public 
section of the module may itself become quite large (since it contains full 
function definitions), making it hard to see at a glance what the exported 
names are. The second alternative has the disadvantage of introducing an 
indirection---finding the actual definition of an exported function becomes 
more difficult, because one must both scan the module for it, and first look up 
the public section to see what the private version is called. Neither 
alternative feels really attractive to me.

Today's export lists at least have the advantage that it is easy to see what is 
exported, even if changing the status of a definition from private to public is 
a little awkward (with edits in two places). With a tool like Haddock installed 
too, one gets a pretty good view of the interface---arguably better than one 
can get from a public module section. Perhaps, given that Haddock is available, 
a public modifier makes more sense than an explicit export list---although code 
browsing would then require running Haddock during development much more often 
than today, and potentially on syntactically incorrect or ill-typed modules.
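For concreteness, here is a hedged sketch of the trade-off (the module and function names are invented for illustration, not taken from any real library):

```haskell
-- Today's Haskell: the export list states the interface up front,
-- while related public and private definitions stay together.
module Pretty (render, layout) where

render :: [String] -> String
render = unlines . map indent

indent :: String -> String   -- private helper, right next to its callers
indent = ("  " ++)

layout :: [String] -> String
layout = concatMap indent

-- Under a public/private split, either render, indent and layout get
-- pulled apart into separate sections, or the public part holds only
-- redeclarations such as:
--
--   render = localRender
--
-- and the reader must chase the indirection to find the real definition.
```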

Incidentally, the Erlang equivalent of Haddock, edoc, is distributed with the 
compiler, so that all Erlang installations include the tool, no matter what 
platform they're running on. Surely that's a prerequisite for adapting the 
language design on the assumption that tools of various kinds are available?

John



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Bang patterns, ~ patterns, and lazy let

2006-02-08 Thread John Hughes

Simon Peyton-Jones wrote:


I've updated the Wiki to add your strict proposal, but rather briefly.
If you want to add stuff, send it to me and I'll add it.

Meanwhile:

| And as a consequence, it is no longer possible to transform a pair of
| bindings into a binding of a pair. In Haskell 98,
| 
| p1 = e1

| p2 = e2
| 
| is always equivalent to
| 
| (~p1, ~p2) = (e1,e2)


In your strict proposal, I'm sure you hope that the above pair would be
equivalent to
(p1,p2) = (e1,e2)
which would be even nicer.

But sadly I don't think it is, because that'd change the strongly
connected component structure.  Somehow that smells wrong.

Simon
 

What have you got in mind? ANY tupling of bindings may change the SCC 
structure, and hence the results of type inference--I'm taking that as 
read. But that still leaves the question of whether the dynamic 
semantics of the program is changed. Let's assume for the time being 
that all bindings carry a type signature--then the SCC structure is 
irrelevant, isn't it? Or am I missing something here? I'm under the 
impression that the *dynamic* semantics of


   p1 = e1
   p2 = e2

*would* be the same as (p1,p2) = (e1,e2) under my strict matching 
proposal. I don't see how the SCC structure can affect that.
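The Haskell 98 equivalence quoted at the start of this message (and what the implicit ~s buy) can be checked in plain Haskell 98; a small sketch using refutable patterns:

```haskell
main :: IO ()
main = do
  -- Two separate pattern bindings: each is implicitly lazy, so the
  -- second pattern is never matched when only a is demanded.
  let Just a = Just (1 :: Int)
      Just b = undefined :: Maybe Int
  print a   -- prints 1

  -- The tupled form with explicit ~s behaves the same way.
  let (~(Just c), ~(Just d)) = (Just (2 :: Int), undefined :: Maybe Int)
  print c   -- prints 2

  -- Without the ~s, demanding c would also force the match of Just d
  -- against undefined, and the program would diverge.
```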


John



Re: Restricted data types

2006-02-07 Thread John Hughes

On 2/5/06, Jim Apple [EMAIL PROTECTED] wrote:


Have we considered Restricted Data Types?

http://www.cs.chalmers.se/~rjmh/Papers/restricted-datatypes.ps
 



Nice to see my old paper hasn't sunk without trace!

As Taral pointed out, though, Restricted Data Types have not been implemented, 
and they can't be seriously considered for Haskell' until they have been. They 
are a fairly major extension, requiring changes to the way context reduction 
works, and introducing many new constraints to contexts. That said, the problem 
they address comes up again and again and again, so I think it would be worth 
making a serious attempt to solve it--but not necessarily for Haskell'.

A few points.

Perhaps the biggest disadvantage of Restricted Data Types from an 
implementation point of view is the need to pass a potentially large number of 
dictionaries to overloaded monadic functions, which in most cases will never be 
used. For jhc, this would not be a problem, of course--so perhaps jhc is the 
best setting to try RDT's out in. For dictionary passing implementations, I 
suggested compiling two versions of each affected function, one fast version 
without the RDT dictionaries for the common case, and the other with them for 
the general case. It's not clear how many functions would be affected, or how 
much code size would grow as a result.


From a language point of view, introducing well-formed-type constraints into 
contexts can make definitions overloaded that were formerly polymorphic, thus 
triggering the M-R and potentially causing type errors. But if the M-R were 
reformed, for example as Simon M suggested, so as not to distinguish between 
polymorphic and overloaded definitions, then this problem would go away. (Or 
rather, it would strike regardless of RDTs, so someone else would carry the 
blame!).


Finally, I wrote my paper before fundeps came on the scene. Some of the 
contortions I went through in my simulation of RDTs could be avoided with the 
help of fundeps. The alternative solution I discuss at the end--parameterising 
classes and functions over contexts--would also benefit from fundeps. A 
Collection class could be defined something like this:

class Collection c cxt | c -> cxt where
  empty  :: cxt a => c a
  member :: cxt a => a -> c a -> Bool
  ...
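The paper predates ConstraintKinds, but in today's GHC this parameterisation over contexts can be written directly. A speculative sketch (the instance types here are invented for illustration, not from the paper):

```haskell
{-# LANGUAGE ConstraintKinds, FunctionalDependencies,
             FlexibleInstances, KindSignatures #-}
import Data.Kind (Constraint, Type)

-- cxt is the constraint a collection places on its elements,
-- determined by the collection type via the fundep.
class Collection (c :: Type -> Type) (cxt :: Type -> Constraint)
    | c -> cxt where
  empty  :: cxt a => c a
  member :: cxt a => a -> c a -> Bool

-- Plain lists need only Eq, for the membership test.
instance Collection [] Eq where
  empty  = []
  member = elem

-- A set kept as an ordered list: the element constraint is Ord,
-- stated once here rather than in every class the type touches.
newtype OrdSet a = OrdSet [a]

instance Collection OrdSet Ord where
  empty = OrdSet []
  member x (OrdSet xs) = x `elem` takeWhile (<= x) xs
```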

I still think it's nicer to associate cxt with c at the type definition of c, 
rather than in instance definitions of potentially many classes, but this 
alternative should be considered. How it would relate to associated types I 
can't imagine--associated contexts?

Summary: it would be great to add RDTs to Haskell, but there's a lot of work to 
be done before they can be considered for the standard.

John


	  




Re: Bang patterns, ~ patterns, and lazy let

2006-02-07 Thread John Hughes


   From: Ross Paterson [EMAIL PROTECTED]
   John Hughes wrote:

I would urge that either we stick with the present design, or, if bang 
patterns are added (which a lot speaks for), that the language be 
simplified at the same time so that patterns are matched in the same way 
everywhere, and BOTH warts above are excised. Some existing code would 
break, but in return the language would become simpler and more expressive.
   


   Would top-level bindings of constructor patterns and !x be evaluated
   when the module was loaded (or at compile time)?


Yes. Nothing else makes sense, does it? If that's problematic (although 
I can't see why it would be), just forbid strict patterns at the top 
level of modules.


Load time rather than compile-time, I think--otherwise the compiled code 
for a module could depend on the *code* of modules it imports, not just 
on their interfaces, which would be harmful for separate compilation.


John



Re: Restricted Data Types

2006-02-07 Thread John Hughes



From: John Meacham [EMAIL PROTECTED]
Subject: Re: Restricted Data Types

however, what prevents the following from being inferred

return Foo :: Monad m => m Foo
so, we think we can specialize it to
return Foo :: Set Foo

however, we no longer have the constraint that Foo must be in Eq!

Maybe I wasn't up-front enough about this in the paper: RDTs add a new 
form of constraint in contexts, wft t, meaning that type t is 
well-formed. While base types and type variables can be assumed to be 
well-formed, other types may only be used at all in the context of a 
suitable well-formedness constraint. That means that the Monad class is 
not allowed to declare


   return :: a -> m a

because there's no guarantee that the type m a would be well-formed. The 
type declared for return has to become


   return :: wft (m a) => a -> m a

i.e. when return is called, it has to be passed a dictionary proving 
that m a is well-formed. In the case of Set, that's an Eq a dictionary.


In your example, the most general type of return Foo is (Monad m, wft (m 
Foo)) => m Foo, and when you instantiate m to Set, you're left with the 
constraint wft (Set Foo), which reduces to Eq Foo.


The changes to the type system are that all typing rules have to add wft 
t constraints to the context, for every type t that appears in them. 
Fortunately most can be immediately discarded, since base types are 
always well-formed, and data type definitions imply constraint reduction 
rules that eliminate wft constraints unless the datatype is restricted. 
For example, wft [a] reduces to wft a, because lists place no 
restriction on their element type. Wft constraints on type *variables* 
can be discarded where the variables are generalised, *provided* the 
instantiation rule only permits variables to be instantiated at 
well-formed types. There's no need to write the type of reverse as wft a 
=> [a] -> [a], if polymorphic type variables can only ever be 
instantiated to well-formed types. The only wft constraints that cannot 
be discarded are thus those where the type that must be well-formed is 
an application of a type constructor variable, such as wft (m a) in the 
type of return. That suggests that wft constraints ought not to be too 
common in real code, but they do crop up in monadic library code--


 class Monad m where
   return :: wft (m a) => a -> m a
   (>>=)  :: (wft (m a), wft (m b)) => m a -> (a -> m b) -> m b

That's why, without some clever optimisation, RDTs will lead to massive 
extra dictionary passing (in implementations that use dictionaries). 
"It'll be just fine as long as you don't use monads" isn't really a good 
enough argument, is it?!


John


Re: Bang patterns, ~ patterns, and lazy let

2006-02-07 Thread John Hughes






  
From: Ben Rudiak-Gould [EMAIL PROTECTED]
Subject: Re: Bang patterns, ~ patterns, and lazy let


It's also not the case that !x has the same meaning in both proposals, e.g.

 let { !x = y ; !y = const 'a' x } in x

means 'a' in the current proposal but _|_ in yours.

Aargh, you're right, it does mean _|_ in mine! That's not very nice.

But wait, I'm not sure about that. The example

 let { !x = const undefined y ; !y = const 'a' x } in y

desugars in the current proposal to

 let { x = const undefined y ; y = const 'a' x } in x `seq` y `seq` y

which is _|_, but absent implicit ~,

 let { x = const undefined y ; y = const 'a' x } in y

had better (and does) mean 'a'.
Applying the rules on the wiki, the first step is to translate the
first expression into a tuple binding, omitting the implicit ~:

 let (x,y) = (const undefined y, const 'a' x) in y

This desugars to

 let (x,y) = fix (\ ~(x,y) -> (const undefined y, const 'a' x)) in y

which evaluates to 'a'. In other words, despite the ! on x, the current
proposal is not strict in x.

Maybe the intention was that !s on the lhs of let bindings should be
transferred to the corresponding patterns when a tuple pattern is
introduced? Let's try it: then the example desugars by pattern tupling
to

 let (!x, !y) = (const undefined y, const 'a' x) in y

Now we can introduce fix:

 let (!x, !y) = fix (\ ~(!x, !y) -> (const undefined y, const 'a' x)) in y

and finally case:

 case fix (\ ~(!x,!y) -> (const undefined y, const 'a' x)) of
   ~(!x, !y) -> y

and this is consistent with what you said above.

But if I return to your first example, and do the same thing, I get

 let !x = y; !y = const 'a' x in x

desugars by tupling to

 let (!x, !y) = (y, const 'a' x) in x

which desugars by fix and case introduction to

 case fix (\ ~(!x, !y) -> (y, const 'a' x)) of ~(!x, !y) -> x

The first approximation to the fixed point is _|_, so the second is
(_|_, 'a'). Now, when ~(!x,!y) is matched against (_|_,'a') then *both*
variables are bound to _|_ --- the effect of the ~ is just to delay
matching (!x,!y) until one of the variables is used, but as soon as y,
say, *is* used, then the match is performed and, of course, it loops.
Thus (_|_, 'a') is the fixed point. For the same reason, x and y are
both bound to _|_ in the body of the case, and so the entire expression
evaluates to _|_, not 'a' as you claimed.

Bottom line: I can't find a way to interpret the translation rules in
the Haskell report, modified as the Wiki page suggests, to produce the
results you expect in both cases.

But maybe the fault is in the translation rules in the Haskell report.
It was always rather tricky to explain a group of recursive bindings in
Haskell in terms of a single tuple binding, because Haskell tuples are
lifted. I see that you have a more direct understanding of what ! is
supposed to mean. Is it possible, I wonder, to give a direct
denotational semantics to a declaration group--say mapping environments
to environments--in which there is only one case for ! (its natural
semantics in patterns)? Such a semantics should have the property that

 let x1 = e1; x2 = e2 in e0 === let x1 = e1 in let x2 = e2 in e0

provided x1 does not occur in e2. Finding a simple and compositional
denotational semantics with these properties, and proving the law
above, would be a good way to show that ! patterns do NOT introduce
semantic warts---and would probably also suggest that the
semantics-by-translation used in the report is fundamentally flawed. We
did construct denotational semantics of fragments of Haskell as part of
the original design, and it had quite an impact on the result--I
recommend it as a way of debugging ideas!

John




[Haskell] Survey of Haskell in higher education

2006-02-06 Thread John Hughes
Before Christmas I invited people on this mailing list to complete a web 
survey about the use of Haskell in higher education. Many have done 
so--thank you very much! I received 126 responses from 89 universities, 
accounting for 5-10,000 students taught using Haskell this academic 
year. The survey is now closed, and the results are available on the web 
at http://www.cs.chalmers.se/~rjmh/Wash/Survey/teaching.htm. I've put up 
the raw data, together with various simple analyses.


Enjoy!

John Hughes


Re: [Haskell-cafe] Why is $ right associative instead ofleftassociative?

2006-02-05 Thread John Hughes

 Quoting Paul Hudak [EMAIL PROTECTED]:

 Actually, one of the main reasons that we chose (:) is that that's what
 Miranda used.  So, at the time at least, it was not entirely clear what
 the de facto universal inter-language standard was.


Phil Wadler argued for the ML convention at the time, and wrote a document
containing a fair amount of sample code to illustrate what it would look
like. We noticed something surprising: instead of (x:xs) and the like, Phil
had consistently written (x :: xs) -- note the extra spaces. Somehow, using
the longer operator name led him to feel spaces were needed around it. That
in turn made his lines longer, encouraged him to split definitions across
lines, and so on. When I read the thing, I realised after a while that I
was skipping all the code fragments -- because they just looked too big and
complicated to take in during a quick reading. It was at least partly that
experience that convinced us that using :: for cons would impose a small
cost, but a real one, on readability. It may seem trivial, but the sum of
many such decisions is significant. The story does illustrate the
importance of actually trying out syntactic ideas and seeing how they
play--one can be surprised by the result.

 I don't think Haskell Prime should be about changing the look and
 feel of the language.

It's about consolidating the most important extensions into the standard,
isn't it? Changes that break existing code should be very, very well
motivated--if porting code to Haskell Prime is too difficult, people just
won't do it.

John



Re: [Haskell-cafe] Why is $ right associative instead ofleftassociative?

2006-02-05 Thread John Hughes

Lennart Augustsson wrote:

 I now think :: for type signatures was a bad mistake.
 I don't use lists very much.  They are not the right data structure
 for many things.  So : is not as common as :: in my code.
 I checked a small sample of code, about 20,000 lines of Haskell.
 It has about 1000 uses of ':' and 2000 of '::'.

Just for interest, I analysed some of my code. Obviously my style is
quite different to yours--my type specialiser of 3,500 lines has 240
conses, and only 22 occurrences of '::'. I seem to be using '::'  a bit more
lately, though, which I suspect is due to using classes much more.
I also checked the Agda source code, about 14,000 lines, with
about 500 occurrences of cons and 640 of '::'. I think the only conclusion
one can draw is that style varies.

 In my opinion all the special syntactic sugar for lists should go
 away.  I don't think lists are special enough to motivate it.

What, no list comprehensions??

I'd disagree--sequencing is special, and lists represent it directly.
Don't forget, also, that lists are also much more prevalent in beginners'
code--and nice notation for beginners helps get people started on
Haskell.

 But this is not what Haskell' is about.  It's supposed to be some
 modest extensions to Haskell.  Not designing a new perfect language.

Right!

John


Re: [Haskell-cafe] Why is $ right associative instead ofleftassociative?

2006-02-05 Thread John Hughes

Cale Gibbard wrote:


That said, I'd *really* like to see monad comprehensions come back,
since they align better with the view that monads are container types,
dual to the view that monads are computations, which is supported by
the do-syntax. This view is actually much easier to teach (in my
experience). Giving lists a little extra syntax is nice, but giving
things unnecessarily restrictive types seems to be the point at which
I'd consider it going too far.

 



The trouble with monad comprehensions was that it became far too easy to 
write ambiguous programs, even when you thought you were just working 
with lists. Haskell overloading works really nicely *as long as there's 
a judicious mixture of overloaded and non-overloaded functions*, so that 
the overloading actually gets resolved somewhere. Overload too many 
things, and you end up having to insert type annotations in the middle 
of expressions instead, which really isn't nice.


Lists are special, not least because they come very early in a Haskell 
course--or, in my case, in the first ever programming course my students 
have ever taken. Getting error messages about ambiguous overloading when 
they are still trying to understand what comprehension notation means 
(without even the concept of a for-loop to relate it to) is very 
destructive. And this is in the case where the code is otherwise 
type-correct--the kind of error message you would get by trying to 
append a number to a monad comprehension doesn't bear thinking about!


The class system is already something of an obstacle in teaching, 
because you have to mention it in the context of arithmetic (even if you 
tell students always to write monomorphic type signatures, they still 
see classes mentioned in error messages). After all, that is surely why 
Helium doesn't have it. I find classes manageable for arithmetic, even 
if students do take some time to learn to distinguish between a class 
and a type (or a type and a value, for that matter!). But it's a relief 
that list programs, at least, have simple non-overloaded types. List 
functions provide an opportunity to introduce polymorphism in a simple 
context--it's much easier to understand why (++) should have the type 
[a] -> [a] -> [a], than to start talking about MonadPlus m => m a -> m a 
-> m a.


There is a lot to learn in Haskell, especially in the type and class 
system. It's an advantage if you don't have to learn it all at once--if 
you can master lists and list comprehensions without exposure to monads 
(which are a much harder concept). We should never forget that beginners 
have somewhat different needs from expert programmers--and those needs 
are also important. If we want Haskell to be used for first programming 
courses (and it's a big advantage to catch 'em early), then there needs 
to be a learning path into the language that goes quite gently. 
Monomorphic lists help with that.


We did consider more aggressive defaulting to address the ambiguity 
problems with monad comprehensions--defaulting Monad to lists, for 
example, or user-declared defaulting rules--but this introduces yet more 
complexity without really addressing the problem of keeping types simple 
for beginners, so the idea was abandoned.


John



Parallel list comprehensions

2006-02-04 Thread John Hughes

I noticed ticket #55--add parallel list comprehensions--which according to
the ticket, will probably be adopted. I would argue against.

Firstly: because in its more general forms the notation is confusing. Try
this example:

[(i,j,k) | i <- [1..3], j <- [1..3] | k <- [1..9]]

In general it's hard to predict how the elements from each set of
generators and filters will match up.

Secondly: because the notation lacks expressive power. When I use zip in a
comprehension, it's often in a definition like this one:

positions x xs = [i | (i,x') <- zip [0..] xs, x==x']

I'm zipping the two lists together *so that I can relate the two* in a
subsequent filter. The parallel comprehension notation cannot express this.
(You can certainly write

wrong_positions x xs = [i | i <- [0..] | x' <- xs, x==x']

but it does the wrong thing).
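For what it's worth, GHC implements the proposed notation as the ParallelListComp extension, so the examples above can be run directly; a small sketch:

```haskell
{-# LANGUAGE ParallelListComp #-}

-- The zip version: the filter can relate the index to the element.
positions :: Eq a => a -> [a] -> [Int]
positions x xs = [i | (i, x') <- zip [0..] xs, x == x']

-- The parallel version: the guard lives in its own branch, so xs is
-- filtered *before* being zipped with the indices.
wrong_positions :: Eq a => a -> [a] -> [Int]
wrong_positions x xs = [i | i <- [0..] | x' <- xs, x == x']

main :: IO ()
main = do
  print (positions 'a' "banana")        -- [1,3,5]
  print (wrong_positions 'a' "banana")  -- [0,1,2]
  -- The confusing example: nine (i,j) pairs zipped positionally with k,
  -- giving [(1,1,1),(1,2,2),(1,3,3),(2,1,4),...,(3,3,9)].
  print [(i, j, k) | i <- [1 .. 3 :: Int], j <- [1..3] | k <- [1 .. 9 :: Int]]
```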

Thirdly: because even in situations where it can be applied, the gain is
small. Using zip explicitly is almost equally concise, and (thanks to being
explicit) more readable and understandable.

My opinion may be coloured by the fact that I never use the things.
However, I think it's a mistake to add rarely used features with a small
payoff to the language. It just makes the language larger, harder to learn,
and harder to read for all but the expert, without much corresponding
benefit.

John



Priorities

2006-02-02 Thread John Hughes
For the last few days, my mail-box has been full of mail about the M-R,
lazy pattern matching, n+k patterns, comment syntax--it's just like the
good old days! And that worries me.

Each of these topics is a snake pit--a tricky area of language design,
with many alternative possibilities and no clear winner. As such, they
make great discussion topics--if you want to start a heated argument,
just propose a change to one of these (mea culpa)! We've also been
discussing most of them for sixteen years.


Of course, it's worth raising them from time to time, in case new insight
means we can find a solution that is clearly much better than we found
before. But the risk is that we go over the same old arguments, perhaps the
consensus is slightly different this time, we make a change to the
language, and who knows, next time the language is revised, maybe it'll
change back again! That's destructive. We shouldn't change these things
unless we can very, very clearly make an improvement.

The fact is, we've lived with the present design for over a decade in each
case, and we can live with it for another decade if we have to. The problem
with Haskell 98 is not its warts--it offers a very pleasant programming
experience despite them. The problem with Haskell 98 is that it *lacks*
features which have become absolutely essential to Haskell programmers
today. Those features are what really *need* discussion and energy spent on
them.

What are the top priorities for inclusion in a new standard? How can we
identify them? I have some personal favourites, of course:

   ST state threads--absolutely essential.
      - but how to distinguish strict and lazy state? The present
        solution--importing a different module--sucks.

   Monad transformers
      - dramatically simplify constructing new monads
      - not a language feature in themselves, but demand extensions to
        work well.

   Multi-parameter classes with functional dependencies
      - used everywhere... for example in monad transformers... so *must*
        be included this time
      - omitted from Haskell 98 because the right design wasn't clear
      - it's still unclear! Functional dependencies *in some form* are
        essential, but associated types and datatypes look nicer in many
        ways!
      - is it too late, in practice, to replace fundeps by something else?
        How will we know? If we are to standardize on associated types
        instead, we need a major effort to *make sure* all important
        applications of fundeps can be represented. How will we organize
        that?

   Type system extensions--at least existential and rank-2 types.
      - should we go further? How will we know?

My favourites may not be your favourites. How will we know what the most
important features to discuss are? Here's a thought:

Wouldn't it be nice if the most important Haskell tools and libraries were
actually written in standard-compliant Haskell'?

One such tool is wxHaskell--named by 19% of Haskell users in my survey,
it's the de facto standard GUI toolkit. wxHaskell makes essential use of
existential types in its interface, a strong reason for including them in
Haskell'. It also uses multi-parameter classes and functional dependencies,
although much less heavily.
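As a rough illustration of the existential-type style such interfaces rely
on (hypothetical names--this is not the real wxHaskell API):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A heterogeneous list of widgets, each carrying its own dictionary:
-- the classic use of existentials in GUI-toolkit interfaces.
class Widget w where
  render :: w -> String

data AnyWidget = forall w. Widget w => AnyWidget w

data Button = Button String
data Label  = Label String

instance Widget Button where render (Button s) = "[" ++ s ++ "]"
instance Widget Label  where render (Label s)  = s

-- We can process the mixed list uniformly via the class method.
renderAll :: [AnyWidget] -> [String]
renderAll ws = [ render w | AnyWidget w <- ws ]
```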

What other tools must be supported by Haskell'? What other extensions
must be present to support them? What issues remain before those
extensions are standardized, and thus frozen in stone for many years to
come?


I'd like to see clearer priorities and a more focussed discussion--maybe the
Wiki can be used to help?

John


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


~ patterns

2006-01-27 Thread John Hughes



Personally I think ~ patterns are great, and we are now talking about !
patterns, a kind of dual to ~ patterns.  So at least I think we should
un-couple the two discussions.
   



I think so too. Removing ~ patterns seems like a fairly poor idea to
me. Sure, they're not very much explicitly used (though everyone uses
them implicitly in pattern bindings), but when you want them, they can
be fairly important. I think perhaps we just need better coverage of ~
in the tutorials. Now that I think about it, I rather like the idea of
! patterns as well. They make ~ patterns seem more natural by
contrast. Strictness can on occasion be just as important as laziness,
and this notation makes it more convenient to obtain in simple cases.
How to get a similarly pretty notation for more structured strictness
annotations is a bit of a concern. I wonder whether some of the
Control.Parallel.Strategies library should be more strategically
located? :)

- Cale

I'd be fairly horrified at the removal of ~ patterns. I've used them to 
fix very serious space-leaks that would have been awkward to fix in any 
other way.  I suppose if lazy pattern matching were provided by some 
other mechanism, say via let bindings, then I should always in principle 
be able to translate away a ~ pattern. But the translation will often be 
much more awkward.
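A small made-up example of the kind of fix being described (my
illustration, not the original space leak): without the ~, this one-pass
split diverges on infinite input, because the case match would force the
whole recursive call before producing a constructor.

```haskell
-- ~ makes the pair match irrefutable, so a cons cell can be produced
-- before the recursive result is demanded.
splitEvenOdd :: [Int] -> ([Int], [Int])
splitEvenOdd []     = ([], [])
splitEvenOdd (x:xs) =
  case splitEvenOdd xs of
    ~(es, os) -> if even x then (x : es, os) else (es, x : os)
```

With the ~ pattern, `take 1 (fst (splitEvenOdd [2..]))` terminates; remove
it and the same expression loops forever.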


Actually, I think it's a much bigger wart that pattern matching in let 
and where is lazy, and everywhere else is strict, than that we have an 
operator ~ on patterns with a clean compositional semantics. Way back in 
the Haskell 98 process I tried to get that changed--so that patterns 
would be matched strictly EVERYWHERE unless you wrote ~ --but I lost 
that argument.


John


Re: [Haskell] Discrete event simulation

2006-01-26 Thread John Hughes


- Original Message - 
From: Jake Luck [EMAIL PROTECTED]

To: haskell@haskell.org
Sent: Friday, January 27, 2006 7:41 AM
Subject: Re: [Haskell] Discrete event simulation


Part of this will be some kind of synchronisation primitive.  I don't 
much care what it is, but somewhere I need a way to make a process wait 
until something happens rather than just a constant time.


Paul, what you have described sounds like a reactive system. Have you 
looked into AFRP/Yampa? Although it is based on a continuous time model 
simulation, it might just work for your scenario nicely. jake




There's a Yampa-like arrow for DES described in my notes at the AFP Summer
School 2004. It's a small example, not optimised at all, but might still be
useful. Unlike Yampa, time is discrete, and arrows compute only when there
is something for them to do.

John 



___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: Haskell-prime Digest, Vol 1, Issue 4

2006-01-26 Thread John Hughes

Ross Paterson wrote:


I suggest = for bind-by-name, and
:= for bind-by-need. 
...
 



You're proposing that the =/:= distinction both decides whether
constrained type variables are monomorphic and whether the binding
should be implemented using sharing.  If it only did the former (and the
expectation was that all pattern bindings with unconstrained types used
sharing), then existing legal programs would still be legal, and the
examples that currently trip over the MR would be legal but inefficient.
(Also, what does the shared/unshared distinction mean for functions?)

 

Not just constrained type variables. All type variables. Because changes in
the program elsewhere can easily change the status of a type variable from
unconstrained to constrained, thus triggering monomorphism
unexpectedly--that was the point of my comment about introducing an
equality test in a function called from the definition. Changing the status
of a type variable should not change the way it is treated by any
replacement for the M-R.
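The classic shape of the surprise under discussion can be sketched like
this (my example): without a signature, the pattern binding falls under the
M-R, is monomorphised, and after defaulting is usable at one numeric type
only.

```haskell
-- Without this signature, "squares = map (^ 2)" is a pattern binding
-- with a constrained type, so the monomorphism restriction fixes its
-- type (defaulting it), and using it at both Int and Double below
-- would be rejected. The signature--or eta-expanding to
-- "squares xs = map (^ 2) xs"--restores polymorphism.
squares :: Num a => [a] -> [a]
squares = map (^ 2)
```

For example, `squares [1, 2, 3 :: Int]` and `squares [1.5 :: Double]` are
both accepted once the signature is present.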


I don't quite follow what you're suggesting above. The main point of a =/:=
distinction is to distinguish between sharing and non-sharing, isn't it?
And sharing means you have to be monomorphic, at least for constrained type
variables, and (by the argument above) thus for unconstrained ones too. How
can sharing/unsharing and monomorphic/overloaded be separated?

I'd really like to avoid ANOTHER rule that guesses what method to use,
based on the form of the definition (your reference to pattern bindings
above). That leads to surprises for the programmer, at least the
less-than-expert one, when a definition is replaced by something that LOOKS
equivalent, but type-checking or sharing suddenly behaves differently. Much
preferable is a simple and obvious rule: = means unshared and polymorphic,
:= means shared and monomorphic.

Shared/unshared doesn't matter for function definitions, but
monomorphic/polymorphic can still be important. There is an interaction
with implicit parameters here--which I suppose might make it into Haskell'.
A := definition says: resolve all overloading here. Thus, if there is an
implicit parameter in the RHS of such a definition, then it refers to the
instance of that parameter in scope at the point of definition. With a =
definition, it refers to the instance in scope at the point of use. This is
an important distinction whether you're defining a function or anything
else. This is discussed in my paper on Global Variables in Haskell, which
suggested using implicit parameters to refer to global variables, rather
than an unsafe unsafePerformIO applied to a newIORef.


What if one has mutually recursive bindings, some using = and some := ?
Does monomorphism kick in if some of the variables in a binding group
use :=, or would we just require that all bindings in the same group
use the same binder?  (At first I couldn't see why one would ever use :=
with function bindings, but perhaps that's the reason.)

 

I don't think there's really a problem in allowing a mixture of = and := in
the same mutually recursive group, even if it could be quite confusing to
do so! = just means that type variables and dictionaries should be
abstracted, and that the binding should be by-name... let's assume that
we're translating to System F, and we always insert at least a \() -> on
such bindings in the translation. := means, on the other hand, that type
variables and dictionaries are not abstracted, and so must be inherited
from an enclosing scope. So in a group of the form

   f = ...g...
   g := ...f...

then any type variables in the definition of g must refer to the enclosing
scope, which means that they cannot be generalised in the definition of f
either (since they are free in the context). But if there are type
variables in the definition of f which do NOT occur in the type of g, then
they can be generalised as usual. Meanwhile f can be bound by name, and g
by need--there's no difficulty with that. This would be an odd thing to do,
but I think it makes perfect sense. (I wonder what happens today, if you
write mutually recursive definitions where the M-R applies to some, but not
others?)

Of course, polymorphic recursion would REQUIRE an = binding, but that
shouldn't surprise anybody.

John



Re: [Haskell-cafe] Re: Tutorial uploaded

2005-12-21 Thread John Hughes




--

Message: 1
Date: Wed, 21 Dec 2005 10:48:08 +
From: Robin Green [EMAIL PROTECTED]
Subject: Re: [Haskell-cafe] Re: Tutorial uploaded

 



 


Beginners should start with non-monadic functions in order to later avoid
IO in their functions wherever possible.
   



Whilst localising IO to a small part of the program is generally a good 
idea, beginners should not be scared off by the thought that IO in 
Haskell is so hard it has to be covered on page 94. This is not the 
case. It should be introduced on page 1.


If people want Haskell to be treated as a practical language, not just 
something for doing academic teaching and research with, it should be 
taught as a practical language - which means that things like IO and 
space/time usage come to the forefront.


 

Couldn't agree more. I show my students IO in the first lecture of their
first programming course in the first term of their first year. A few
lectures later I discuss it in more detail, and tell them to think of an
IO a as instructions on a piece of paper to produce an a. My students have
no difficulty understanding the difference between being given 500 crowns
(a value), and being given my ATM card and code with instructions how to
work an ATM! Haskell IO is mystified far too often in my opinion--it
needn't be hard at all.

John
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] Survey of Haskell in Higher Education

2005-11-28 Thread John Hughes
I'm carrying out another survey of the Haskell community, this time to find
out how Haskell is being used in university teaching. If you're teaching a
course and using Haskell in any way, then this survey is aimed at you! You
can find the survey form at

   http://www.cs.chalmers.se/~rjmh/Wash/Survey/TeachingSurvey.cgi

Once again, as a reward for completing the form, you will see the results
so far!


I'm only surveying courses being taught this academic year, but I would
like to gather as complete information as possible. I know that not all
teachers using Haskell read this list, so I would appreciate it if you
could help me by informing colleagues who are using Haskell about the
existence of the survey.

The information gathered will be used in the History of Haskell paper that
I, Simon PJ, Phil Wadler and Paul Hudak are working on.

Thanks very much for your help!

John Hughes


PS I now have over 500 responses to my earlier survey on use of
Haskell--thank you very much! The results are interesting, and can be
obtained from http://www.cs.chalmers.se/~rjmh/Wash/Survey/Survey.cgi.



Re: [Haskell] Survey of Haskell in Higher Education

2005-11-28 Thread John Hughes

Wolfgang Jeltsch wrote:


Am Montag, 28. November 2005 16:39 schrieb John Hughes:
 


I'm carrying out another survey of the Haskell community, this time to
find out how Haskell is being used in university teaching.
   



Roughly how many students took the course last time it was taught?

What if this course was never taught so far?

Since when, roughly, has this course been taught using Haskell?

What if this course is first taught in spring 2006?
 


Tricky.

If you can make a reasonable estimate of student numbers, say via students
already registered for the course, then go ahead and report it. Otherwise,
it's maybe better just to leave the course out of the survey--too much of
an unknown quantity.

2006 is not an option for "since when...", so just put 2005. I'll
understand it to include the entire academic year starting in 2005.

By the way, do proseminars (simpler form of seminars where 1st or 2nd year 
undergraduate students have to give speeches) also count?
 

If, for example, one student out of a large class gives a talk about
Haskell, then I would say no--the Haskell content in that case is trivial.
If all the students give seminars about Haskell programs they have written,
then the answer is clearly yes. You have to make a judgement here: is
Haskell used as a medium of instruction to a non-trivial extent? If so,
include the course. You can always add an explanatory comment.


John



 


[...]
   



Best wishes,
Wolfgang


Re: [Haskell] Survey of Haskell in Higher Education

2005-11-28 Thread John Hughes

Wolfgang Jeltsch wrote:

What is the first programming language students on your degree programme 
learn?
What is the second programming language students on your degree programme 
learn?


This is too restrictive.  What if the lecture Computer Science I is held in 
different years by different lecturers which teach different programming 
languages?


Best wishes,
Wolfgang
 

In this case enter "Varies from year to year" in both fields. Mind you, I'm
amazed the teachers for the rest of the curriculum can cope without knowing
what programming language their students have learned!

John


Re: [Haskell] Haskell users survey--please respond!

2005-11-10 Thread John Hughes



Subject: Re: [Haskell] Haskell users survey--please respond!



Hello,

why doesn't the section about Haskell tools and libraries mention 
HToolkit?


Best wishes,
Wolfgang



Because you haven't added it!

The survey is designed so that each respondent can add NEW favourite tools
to the list, but also easily select previously mentioned tools. That way
there's no predetermined list of tools one can select, while at the same
time there's less risk that the same tool will appear named in many
slightly different ways.

But I guess now that the list has become longer, people may overlook the
bit at the bottom that tells you how to add new entries, so it maybe looks
as though I made a predetermined selection.

John




Re: [Haskell] Haskell users survey--please respond!

2005-11-10 Thread John Hughes


I want to stress that I'm interested in responses from ALL users--if you're
a complete beginner writing your first Haskell programs on a course, I'd
like to know that, just as if you're one of the designers using it for your
next POPL article.


Do you also want responses from people who once used Haskell as part of a
university course and maybe never used it again since then? If yes, I could
forward the mail to students of our university.



I think it's too hard to try to reach former users reliably, and they would
have little incentive to answer, so I'm really just interested in those who
use Haskell today. It's fine if students studying Haskell right now
answer--all to the good, in fact--and I can easily recognise their
responses if I want to analyse just the other users.


John




Re: [Haskell] Haskell users survey--please respond!

2005-11-10 Thread John Hughes



Am Mittwoch, 9. November 2005 13:09 schrieben Sie:

On Wed, Nov 09, 2005 at 01:02:19PM +0100, Wolfgang Jeltsch wrote:
 Only 2% find fglasgow-exts useful???

Only 2% consider it a tool or library.

I think that if John cares about getting reliable results, he should
take the results from this survey and prepare the next one already
having all the most important options. I won't mind filling the new
survey again, because I myself find the results interesting. I
wonder what other people think?

Best regards
Tomasz


I wasn't aware of the auto-extension feature when I wrote my question about
fglasgow-exts. I think that the results of the survey will not be reliable.
The fglasgow-exts case might be an extreme example showing how wrong they
can be. I think the survey should be repeated with a rather complete list
of tools/libraries.


I take your point, and I'll think about doing this--although I don't want
to spam people with surveys! I'll also be very careful how I interpret the
results.



By the way, did the list of libraries/tools start with the empty list?



Yes.

John


Best wishes,
Wolfgang


[Haskell] Haskell users survey--please respond!

2005-11-09 Thread John Hughes
I'm conducting a web-based survey of Haskell users to gather material 
for an article on the history of Haskell, and I need your help. I want 
to stress that I'm interested in responses from ALL users--if you're a 
complete beginner writing your first Haskell programs on a course, I'd 
like to know that, just as if you're one of the designers using it for 
your next POPL article. If I'm to get an accurate picture of how Haskell 
is used today, I need responses from as many users as possible.


The survey consists of eight multiple-choice questions, and should only 
take a minute or so to complete. Please help by doing so! As a reward, 
you'll get to see a summary of the responses so far.


The survey is at this URL: 
http://www.cs.chalmers.se/~rjmh/Wash/Survey/Survey.cgi


Thanks for your help! I'll post a summary of the results to the list.

John Hughes


Re: [Haskell] Re: Haskell users survey--please respond!

2005-11-09 Thread John Hughes

Cale Gibbard wrote:


It seems that if you keep trying to fill out the form, you will
eventually succeed. If someone finishes filling out the form between
when you start filling it out and when you finish, then the
checkboxes/dropdowns will have been updated, and you'll have to reload
the page and try again. Certainly a bug.

- Cale
 

Thank you, that explains it. A WASH subtlety, caused by using unsafe IO to
read the tools and country lists. Should now be fixed!

(Oh the shame of it. In my defense, it only appears when many people 
start using it simultaneously. You were all so quick off the mark!)


John


On 09/11/05, Sebastian Sylvan [EMAIL PROTECTED] wrote:
 


On 11/9/05, Fraser Wilson [EMAIL PROTECTED] wrote:
   


On 11/9/05, Aaron Denney [EMAIL PROTECTED] wrote:
 


On 2005-11-09, John Hughes [EMAIL PROTECTED] wrote:
   


The survey is at this URL:
http://www.cs.chalmers.se/~rjmh/Wash/Survey/Survey.cgi
 


I hit Submit response and view previous responses and it says
Unspecified action and a Try again... hyperlink.
The only thing I can think of on my end is that I haven't checked any of
the tools or libraries, as I haven't used any of them
   


Me too, though I checked three of the tools ... Firefox 1.0.7 ... my
country isn't on the list though.

Fraser.

 


Yep, I got it too...

/S

--
Sebastian Sylvan
+46(0)736-818655
UIN: 44640862
 





Re: [Haskell] Re: Haskell users survey--please respond!

2005-11-09 Thread John Hughes

Tomasz Zielonka wrote:


On Wed, Nov 09, 2005 at 04:55:46AM -0500, Cale Gibbard wrote:
 


It seems that if you keep trying to fill out the form, you will
eventually succeed. If someone finishes filling out the form between
when you start filling it out and when you finish, then the
checkboxes/dropdowns will have been updated, and you'll have to reload
the page and try again. Certainly a bug.
   



Yes, John used a nice trick to automatically extend the option lists,
but it breaks the assumptions of WASH. He should ensure that each user
uses the same version of the script.

 



It should be enough to read the list of options using safe IO, so that it
is the same when the form is processed as when it was generated (which I've
now done). That leaves open the risk that two simultaneous users may add
the SAME new tool or country to the list, so it gets added twice, but that
I can fix up when I analyse the results.


John


Re: [Haskell] WASH-Problem was Re: Haskell users survey--please respond!

2005-11-09 Thread John Hughes

Tomasz Zielonka wrote:

The tools list is extended automatically, after a response? There is an 
odd entry Parsec, HOpenGL
   



Someone hasn't read the instructions :-)

BTW, is there a way to update my entry? I forgot to mention one of the
best tools I use - Parsec :-(

 



I'm afraid not, because there's no record of which is YOUR entry. I 
suppose I could have asked for an email address, but not everybody likes 
to supply that, so I thought it better to keep the whole thing 
anonymous. (Not to worry, Parsec is getting a lot of votes anyway!)


John



[Haskell] Call for contributions: Industrial Session at the Applied Semantics (APPSEM) Workshop

2005-06-15 Thread John Hughes

 APPSEM05: CALL FOR INDUSTRIAL SUBMISSIONS


At the forthcoming APPSEM workshop (on the beautiful island of
Frauenchiemsee http://de.wikipedia.org/wiki/Frauenchiemsee, in
September), we are keen to hear about industrial work which takes a
principled approach to programming languages. This could mean
development of a novel language, a novel compiler, program analysis
tools, or indeed, just a semantic model for a new kind of application.
However, we don't want to be restrictive: we welcome talks on any
kind of industrial work which is informed by the science of
programming languages.

The theme of the workshop as a whole is Applied Semantics; we are
planning a special session of industrial contributions. We appreciate
that people working in industry do not always have the time to write
full papers, so there is a mechanism for short submissions that only
require an abstract of a maximum two pages to be submitted. Abstracts
should be submitted to the APPSEM05 submission page
http://lionel.tcs.ifi.lmu.de/appsem05-submit/ by July 8th. Please
label submissions Industrial Application.

We also plan a discussion session on industrial problems that can be
addressed by semantic methods, which we hope our industrial
participants will contribute to actively.

More information is available from the workshop home page:
http://lionel.tcs.ifi.lmu.de/APPSEM05/

If you are engaged in this kind of work in industry, please send us
an abstract! We look forward to a fruitful session at the workshop itself.

John Hughes and Peter Dybjer (session organisers)


Re: [Haskell-cafe] Speed comparison?

2005-05-04 Thread John Hughes

Furthermore, the Haskell page says that ghc produces fast programs.
So I would guess that Haskell is faster than Python, but not as fast
as C.
Would that be correct?
   

Usually yes.
 

In a sense. I think it's important to remember that what matters is 
performance of a whole application, not of highly tuned benchmark code, 
and in writing applications with good performance, then the quality of 
profiling tools, and (especially) the amount of time the programmer has 
available for tuning, can be far more important than the compiler. There 
have certainly been occasions where Haskell entries to the ICFP 
programming contest have beat OCaml or even C/C++ entries on 
performance, by a wide margin. Likewise Ericsson's ATM switch, which is 
controlled by a million lines plus of Erlang (running on a byte code 
interpreter!), performs better than all its competitors.

Benchmarks give a very one-sided view, ignoring the large effect that 
ease of programming can have on the final system's performance.

John Hughes


[Haskell-cafe] Re: Instances of constrained datatypes

2005-04-07 Thread John Hughes

This is a contrived example, but contains the essence of what I'd like to
do. Suppose I have this datatype:

 data (Eq v) => EqList v = EqList [v]

I'd like to make it an instance of Functor. However, fmap takes an
arbitrary function of type a -> b. I need an Eq constraint on a and b. Is
there any way to do this without creating my own `EqFunctor' class with
explicitly-kinded quantification:

 class (Eq a) => EqFunctor (f :: * -> *) a where
   eqfmap :: (Eq b) => (a -> b) -> f a -> f b

Thanks.
-Arjun
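For concreteness, here is one variant of the workaround sketched in the
question that does compile (my illustration, with the Eq constraints moved
onto the method--not necessarily the design the question or the paper
below has in mind):

```haskell
{-# LANGUAGE KindSignatures #-}

import Data.List (nub)

newtype EqList v = EqList [v] deriving (Eq, Show)

-- A class-constrained analogue of Functor: the method, not the class,
-- demands Eq on both element types.
class EqFunctor (f :: * -> *) where
  eqfmap :: (Eq a, Eq b) => (a -> b) -> f a -> f b

instance EqFunctor EqList where
  eqfmap g (EqList xs) = EqList (nub (map g xs))  -- nub is why Eq b is needed
```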

 

I wrote a paper proposing an extension to allow this, published at the
Haskell Workshop in 1999. Here's the link:

http://www.cs.chalmers.se/~rjmh/Papers/restricted-datatypes.ps

Getting the right dictionaries to the right place involves adding a
concept of well-formed types, which perhaps is why it hasn't been taken up
by the Simons...

John Hughes


[Haskell-cafe] Re: Top 20 ``things'' to know in Haskell

2005-02-09 Thread John Hughes

Message: 9
Date: Mon, 7 Feb 2005 10:31:30 -0500
From: Jacques Carette [EMAIL PROTECTED]
Subject: [Haskell-cafe] Top 20 ``things'' to know in Haskell
To: haskell-cafe@haskell.org
Message-ID: [EMAIL PROTECTED]
Content-Type: text/plain;   charset=us-ascii
The recent post of Graham Klyne (below) reminds me that I have meant to ask:
is there a ``top 20'' things a serious programmer should know when writing
code in Haskell?  Of course there is a lot of programming language theory
that would be great to know, but I mean really down-to-earth things like the
2 items below (module Maybe, the 'maybe' function).
The Haskell libraries are quite large, and it is unrealistic to try to get
familiar with all of them right away.  But getting a ``small'' list would be
very useful - I think of this as step 2 after one learns to get comfortable
with a language.  I had done this (for Maple) for training new hires at
Maplesoft, and I definitely noticed that they became more idiomatic
programmers faster this way.
Jacques
PS: of course, this could already exist on haskell.org and/or the Wiki, but
not in an 'obvious' enough place as I missed it...
 

Beginners might find this useful:

   http://www.cs.chalmers.se/Cs/Grundutb/Kurser/d1pt/d1pta/ListDoc/

It's a guide to finding the right list function, which leads you to it by
asking what you are trying to achieve.

john
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Graham Klyne
Sent: February 7, 2005 10:09 AM
To: Yuri D'Elia; haskell-cafe@haskell.org
Subject: [Haskell-cafe] Re: [Haskell] [newbye] 'Just a'
You might also be interested in the library function 'maybe':
  http://www.haskell.org/onlinereport/standard-prelude.html#$vmaybe
or maybe (sic) Maybe.fromMaybe in:
  http://www.haskell.org/onlinereport/maybe.html
#g
--
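The two library functions pointed to above can be seen side by side in a
tiny illustration (my example):

```haskell
import Data.Maybe (fromMaybe)

-- 'maybe' takes a default result and a continuation for the Just case...
describe :: Maybe Int -> String
describe = maybe "nothing" (\n -> "got " ++ show n)

-- ...while 'fromMaybe' just supplies a default value.
withDefault :: Maybe Int -> Int
withDefault = fromMaybe 0
```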
 

 



Re: [Haskell] Implicit parallel functional programming

2005-01-20 Thread John Hughes
Lennart Augustsson and Thomas Johnsson got some encouraging results fifteen
years ago with their nu-G-machine. They compiled Lazy ML for a
shared-memory multiprocessor, and benchmarked against the sequential LML
compiler, the precursor of hbc and at that time the best compiler for a
lazy functional language. They observed speed-ups of at least 20% even with
only two processors, and with four the speed-ups were in the range 2-3.3x.
The idea was to use "sparks" to start parallel computations, which were
ignored, and thus cheap, unless a processor was actually available. OK, the
benchmarks were tiny (n-queens), and the LML compiler had higher overheads
for ordinary computation than GHC, but still, it's an encouraging result
for exploiting multi-core machines.

Here's the reference:
@inproceedings{99386,
author = {Lennart Augustsson and Thomas Johnsson},
title = {Parallel graph reduction with the (v , G)-machine},
booktitle = {FPCA '89: Proceedings of the fourth international conference on 
Functional programming languages and computer architecture},
year = {1989},
isbn = {0-89791-328-0},
pages = {202--213},
location = {Imperial College, London, United Kingdom},
doi = {http://doi.acm.org/10.1145/99370.99386},
publisher = {ACM Press},
}
John
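The spark idea survives in today's GHC libraries; a minimal sketch in that
spirit (my example, requiring the separate "parallel" package, and not the
nu-G-machine's actual mechanism) looks like this:

```haskell
import Control.Parallel (par, pseq)  -- from the "parallel" package

-- `par` merely advertises x as a spark: cheap to ignore, picked up
-- only if a processor is actually free--the property described above.
parFib :: Int -> Int
parFib n
  | n < 15    = fib n                      -- sequential cut-off for tiny work
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = parFib (n - 1)
    y = parFib (n - 2)

fib :: Int -> Int
fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)
```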

I thought the lazy functional languages are great for implicit 
parallelism thing died out some time ago - at least as far as 
running the programs on conventional hardware is concerned.

Designing an algorithm that breaks apart a sequential lazy program into
parallel chunks of the appropriate size is **HARD** (with double
asterisks). The time and space behavior of a lazy program is complex
enough for the _programmer_ to reason about, let alone an automated
analysis--which has no knowledge of what the program is actually trying to
do.

I think a more likely approach lies in the direction of the so-called
"parallel strategies". If you haven't already, I would strongly suggest
reading: "Algorithm + Strategy = Parallelism", 1998, PW Trinder, et al.
You can get this paper from Simon Peyton Jones's homepage.

Also, at the end of Hans-Wolfgang Loidl's thesis he develops a granularity
analysis for a Haskell subset--one of the first steps in any kind of
implicit parallelism. It's a pretty good effort, but at the end of it all
it still relies on a pre-existing table of information about recursive
functions. I think that these kinds of analyses tend to suffer from
uncomputability problems more than most.

If you've still got your heart set on implicit parallelism, then 
there's a (very slow) simulator you might want to poke around with. I 
wrote it based around Clem Baker-Finch's Abstract machine for 
parallel lazy evaluation, which supports fully speculative implicit 
parallelism.

There's a link to it on my homepage at 
http://cs.anu.edu.au/people/Ben.Lippmeier

Keean Schupke wrote:
I have to say I disagree... I feel Haskell is highly suited to 
implicit parallel execution... The key to implicit parallelisation 
is that it is implicit - not explicit, so the programmer should feel 
like they are programming a sequential language. If we can assume 
little memory access penalties for threads running on other CPUs 
(shared cache model), it seems to be a matter of putting locks on 
the right structures, and allowing any worker-thread to take the 
next function ready to run from the scheduler.
   Keean.



[Haskell-cafe] Re: Some random newbie questions

2005-01-10 Thread John Hughes
I seriously considered switching from Hugs to GHC for my introductory
programming class this year, but in the end stayed with Hugs because of a
single feature.

I'm teaching beginning programmers, and for them at least, there is an 
overwhelming volume of names to learn -- "what's that function?" is a 
question they ask themselves often, as is "what's that type?". I teach 
them that, whenever they see a name they don't recognise, they can find 
out more about it using the :i command. This is a simple trick to learn, 
one that helps them understand points they've missed and catches 
misapprehensions.

My students also see type classes very early. I'll bet yours will too. 
Even if one is very careful to restrict the examples in lectures so as 
to avoid them (which is a bind), as soon as students try out Hugs for 
themselves, they will make mistakes that generate error messages 
referring to type classes. No problem: the question "what's that class?" 
can ALSO be answered by :i.

Now, at the beginning students have only a very rudimentary 
understanding of classes. A class is a collection of types to them, 
nothing more. In particular, the class definition itself is of little 
use to them, since it often contains a very subtly chosen collection of 
methods (just type :i Show, for example, which students do very early). 
What IS useful, right from the beginning, is the list of instances. What 
are Num types? Oh, integers and reals.  What are Show types? Oh, pretty 
much everything. Particularly when debugging missing instance errors, 
this is just the information you need.

Unfortunately, while Hugs prints the list of instances of a class in 
response to :i, GHCi does not. It only prints the class definition -- 
which, for my students, contains no useful information. For that reason 
alone, I stuck with Hugs last year.

Of course, later in the course there is no problem in introducing GHC as 
well. Students coping well are happy to learn there is a compiler 
available too, while those who are struggling can stay with Hugs 
throughout the course. I demonstrated GHC in order to show them 
wxHaskell (which was very popular with the students), but I didn't 
REQUIRE them to use it.

How about changing the behaviour of :i, Simon, so I can use GHCi 
throughout next year?

John
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Reading a directory tree

2004-06-22 Thread John Hughes
 Hi,

 I'm looking for a way to iteratively read all the files in a directory and
 its subdirectories, given the filepath of the top-level dir.
 For example, I want to find a file, corresponding to a given filename, in
 a directory and its subdirectories.


getDirectoryContents is your friend. It's a function in the standard
library Directory, documented here:
http://haskell.org/onlinereport/directory.html

getDirectoryContents :: FilePath -> IO [FilePath]

A FilePath is just a String.
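For the recursive-search part of the question, here is a minimal sketch
(not part of the original reply; findIn is a made-up name, and
doesDirectoryExist also lives in the Directory library):

```haskell
import System.Directory (getDirectoryContents, doesDirectoryExist)

-- Search a directory and all its subdirectories for files with the
-- given name, returning the full paths of the matches.
findIn :: String -> FilePath -> IO [FilePath]
findIn name dir = do
  entries <- getDirectoryContents dir
  let paths = [ dir ++ "/" ++ e | e <- entries, e /= ".", e /= ".." ]
  fmap concat $ mapM (\p -> do
    isDir <- doesDirectoryExist p
    if isDir
      then findIn name p                           -- recurse into subdirectory
      else return [ p | baseName p == name ]) paths -- keep matching files
  where baseName = reverse . takeWhile (/= '/') . reverse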

John Hughes
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Reading a directory tree

2004-06-22 Thread John Hughes
 Hi,

 I'm looking for a way to iteratively read all the files in a directory and
 its subdirectories, given the filepath of the top-level dir.
 For example, I want to find a file, corresponding to a given filename, in
 a directory and its subdirectories.

 Is there a way to implement this in Haskell?


Just a supplement to my previous message: you can find better
documentation of the Directory library here:

http://www.haskell.org/ghc/docs/latest/html/libraries/base/System.Directory.html

John Hughes
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Monadic Loops

2004-06-18 Thread John Hughes
Vivian McPhail wrote:
Hi, 

I've implemented a Neural Net simulator which needs to repeat a training loop many times. For this I used a while function: 

while test body = do
  (cond, res) <- body
  if test cond
    then do rs <- while test body
            return (res : rs)
    else return [res]

However, when I run the program, the interpreter falls over after a few thousand iterations, running out of space. I think that this is because the monadic implementation of the while loop actually nests functions of type (a -> M b) and there is a maximum stack size.

 

As others have pointed out, the problem is that while is not tail 
recursive, so the stack frame for return (res:rs) remains on the stack 
during the recursive call: result, you run out of stack.

However, this only happens because the monad you are using is strict -- 
or rather, the implementation of (>>=) is strict. If you use a monad 
with a lazy bind operator instead, then the recursive call will not be 
evaluated initially, just bound to rs, and evaluated when rs is used by 
the caller, AFTER the return (res:rs). Result: no deep stack.

The advantage of doing this is that the elements of the list are 
available lazily as they are constructed, and so can be consumed as they 
are generated. Your solution (using a strict monad), and the tail 
recursive solution you've been offered, both build the entire list 
before returning it to the caller. If you are performing many thousands 
of recursions, and the list points at large structures, then this may 
give you a problematic space leak.

The ST monad comes in strict and lazy versions. Provided that's the 
monad you're using, or a monad built on top of it, you can switch to the 
lazy version just by importing Control.Monad.ST.Lazy instead of 
Control.Monad.ST. Documentation is here:

   
http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control.Monad.ST.Lazy.html

If you are using the IO monad, then you achieve the same effect by 
wrapping the recursive call in unsafeInterleaveIO from System.IO.Unsafe. 
That will delay its evaluation until rs is evaluated, once again AFTER 
the enclosing call has returned. But that is -- well -- unsafe.

John Hughes
PS You can read about lazy state here:
http://portal.acm.org/citation.cfm?id=178246&coll=portal&dl=ACM&CFID=22769016&CFTOKEN=85305111
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


[Haskell-cafe] Re: Adding Ord constraint to instance Monad Set?

2004-04-01 Thread John Hughes


| Constraints on datatype declarations are a misfeature of Haskell, and
| have no useful effect.
| 
| Is this the final conclusion?

Yes, it is, I believe.  Constraints on data type declarations are a
mis-feature.  I tried to get them removed, but there was some argument
that they could be made useful (see John Hughes's paper Restricted data
types in Haskell, Haskell workshop 1999), and they stayed. 

Simon

 

Made useful in a way that would solve the problem in this thread, I 
might add. The paper is at

	http://www.cs.chalmers.se/~rjmh/Papers/restricted-datatypes.ps

for the curious.

John

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Programming language shootout (completing the Haskell entry)

2004-03-30 Thread John Hughes




Adrian Hey wrote:

  On Monday 29 Mar 2004 3:49 pm, John Hughes wrote:
  
  
Actually the cache behaviour of code generated by GHC isn't at all bad.
I know because I ran a student project a couple of years ago to
implement cache-friendly optimisations. The first thing they did was
cache profiling of some benchmarks, and to our surprise we discovered
the cache behaviour of lazy code is already pretty good. You get a lot
of structure in the evaluation of lazy code -- it just isn't evident in
the source code!

  
  
That's interesting. So why do you think "OCaml is so darned fast
compared to Haskell" :-)

Seriously though, is this finding consistent with the paper Nicholas
Nethercote mentioned? I had a quick read of the paper but didn't
take the time to digest the full significance of all tests done
and graphs presented. But if I understood the overall conclusions
correctly there were several reasons for relatively poor performance
on modern processors (one of which was a high rate of cache misses).
I suppose we should distinguish code from heap accesses here though.

Let me ask Tobias Gedell to comment, since he was directly involved in
the project and probably has the actual results to hand. I'm not sure
there's any inconsistency though: that paper finds that 60% of the
execution time is due to data cache misses, and so a speed up of at
most 2.5x is possible by improving the cache behaviour. That's
certainly very respectable, but our initial guess was that the
potential speed-ups might be considerably greater. Given the high
penalty of L2 cache misses, this still corresponds to a hit rate of 98%
or so (couldn't find the exact figure in the paper). We were initially
expecting that Haskell programs would miss more often than that.

The paper reports a 22% speed-up from prefetching to avoid write
misses. If I remember rightly, we also got some decent speed-ups from
optimisations aimed at improving the cache behaviour, although this
wasn't using the GHC code generator, so the results aren't directly
comparable.

Tobias?

John



___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Programming language shootout (completing the Haskell entry)

2004-03-29 Thread John Hughes
Adrian Hey wrote:

On Friday 26 Mar 2004 10:39 pm, Sean E. Russell wrote:
 

Why is Ocaml so darned fast compared to Haskell?
   

...

Also, I have a hunch that not only is eager evaluation inherently
more efficient (in terms of the raw number of operations that need
to be performed), it's probably more cache friendly too (you probably
end up with code that looks far more like a traditional imperative
loops and whatnot that you'll get by performing itsy bitsy on
demand graph reduction).
 

Actually the cache behaviour of code generated by GHC isn't at all bad. 
I know because I ran a student project a couple of years ago to 
implement cache-friendly optimisations. The first thing they did was 
cache profiling of some benchmarks, and to our surprise we discovered 
the cache behaviour of lazy code is already pretty good. You get a lot 
of structure in the evaluation of lazy code -- it just isn't evident in 
the source code!

John

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Haddock, QuickCheck, and Functional Design by Contract

2004-02-17 Thread John Hughes




Robert Will wrote:
4.
A notation for preconditions. For simple functions a Precondition
 can be calculated automatically from the Patterns:

  
  
 head (x:xs) = x

  
  
   Gives the Precondition @xs /= []@ for @head xs@.  This only needs
   some simple knowledge about @data@ and @newtype@ declarations.  (We
   could also choose to leave it out, if it's too much work, for too
   few cases where it works.  Then programmers would have to state all
   preconditions explicitly as shown next.)

   Presently I use the following coding style:

  
  
 take n xs | not (n >= 0) = error "take.Precondition"

  
  
   The precondition can simply be extracted as the bracketed
   expression behind @not@ in a line containing the words @error@ and
   "Precondition".

   However it has the disadvantage that I can't simply disable
   run-time checking of the Precondition (as I could when using
   @assert@).  Furthermore it is also rather ugly.  Does anyone have a
   better idea?

The most lightweight way to specify preconditions would be via another
function definition with the same parameters and a systematically
chosen name, for example

 take_pre n xs = n >= 0

thus making it easy to generate the QuickCheck property

 prop_take n xs = take_pre n xs ==> take_spec n xs == take n xs

There are some problems in generating such properties automatically,
though:

 - QuickCheck properties have to be tested at a particular
(mono-)type. It isn't obvious how to specify at which type such a
polymorphic property should be tested.

 - Simple preconditions (such as n >= 0) work well in tested
properties. More complex ones (such as "ordered xs") need to be replaced
by a custom test data generator for testing to be both efficient and
effective. Again, it's hard to see how to insert these automatically.

Perhaps it's simpler just to write QuickCheck properties explicitly?
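Written out by hand, the generated property above might read as follows
(a sketch; take_spec here is an illustrative executable specification,
and the property is stated at the monomorphic type [Int], addressing the
first caveat):

```haskell
import Test.QuickCheck

take_pre :: Int -> [Int] -> Bool
take_pre n _ = n >= 0

-- an executable specification of take, for comparison with the real one
take_spec :: Int -> [Int] -> [Int]
take_spec n xs = [ x | (i, x) <- zip [0 ..] xs, i < n ]

-- the implication ==> discards test cases violating the precondition
prop_take :: Int -> [Int] -> Property
prop_take n xs = take_pre n xs ==> take_spec n xs == take n xs
```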
6.
Invariants are an important part of Design by Contract according to
 [3].

   In Functional Programming it has been proven practical to check
   (and at the same time: document) data structure invariants in a
   smart constructor; the invariant resides only in this place (is
   not redundantly mentioned in the code) and is run-time checked on
   each creation of a data cell.  This works with @assert@ and doesn't
   need QuickCheck properties.  Also, such internal invariants are not
   relevant for the external documentation.  (Otherwise we could
   documentation- export the definition of the smart constructor,
   using -- ||.)

   External Invariants are nothing else than algebraic properties of
   data structures.  They can be documented using normal QuickCheck-
   Properties, and these can be documentation- exported via -- ||.

   As a consequence, invariants need no special notation.  I only
   mention them here for methodological reasons.

  

In our experience it's important to state properties that invariants
are preserved by functions -- for example,

 ordered xs ==> ordered (insert x xs)

or rather

 forAll orderedList $ \xs -> ordered (insert x xs)

But I agree, no special notation is needed.
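Spelled out with a custom generator, the second form might look like
this (a sketch; orderedList is built by construction rather than by
filtering, and Data.List.insert stands in as the function under test):

```haskell
import Data.List (insert)
import Test.QuickCheck

ordered :: [Int] -> Bool
ordered xs = and (zipWith (<=) xs (drop 1 xs))

-- generate ordered lists directly: partial sums of non-negative numbers
-- are nondecreasing, so no test cases are wasted on filtering
orderedList :: Gen [Int]
orderedList = fmap (scanl1 (+) . map abs) arbitrary

prop_insertOrdered :: Int -> Property
prop_insertOrdered x = forAll orderedList $ \xs -> ordered (insert x xs)
```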

John


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: learning to love laziness

2003-09-26 Thread John Hughes
On Fri, 26 Sep 2003, Derek Elkins wrote:

 On Thu, 25 Sep 2003 12:59:37 -0700
 Mark Tullsen [EMAIL PROTECTED] wrote:

  Haskell has lazy/lifted products and not true products.

 Aren't lazy products true products?  What makes something a product is:
 fst (x,y) = x
 snd (x,y) = y
 for all x and y.  This holds with lazy products but not eager ones
 fst (3,_|_) = _|_ /= 3


Right, strict languages do not have true products either. A strict
language without side-effects, and in which no expression can loop
infinitely or cause an error (because these are side-effects of a sort),
COULD have true products.

The reason Haskell doesn't is that true categorical products have to
satisfy an additional law,

(fst x, snd x) = x

and Haskell's products don't, since

(_|_, _|_) /= _|_

The key question is: can we write a program which distinguishes these two
values? And we certainly can, for example

seq (_|_,_|_) 0  =  0
seq _|_       0  =  _|_

or

case (_|_,_|_) of (_,_) -> 0  =  0
case _|_       of (_,_) -> 0  =  _|_

But of course, *fst and snd* can't distinguish the two! So if we only
operated on products using fst and snd, then we could pretend that these
two values *were* equal, even though they're not in the implementation,
and thus say that Haskell had true products.
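The distinction can be observed directly in GHC (a sketch; the name
bottom is illustrative, and the catching machinery is just there to keep
the program from aborting):

```haskell
import Control.Exception (SomeException, evaluate, try)

bottom :: a
bottom = error "_|_"

main :: IO ()
main = do
  -- the pair constructor is already in WHNF, so seq succeeds
  r1 <- try (evaluate (seq (bottom, bottom) 0)) :: IO (Either SomeException Int)
  -- here seq must evaluate bottom itself, which raises the error
  r2 <- try (evaluate (seq (bottom :: (Int, Int)) 0)) :: IO (Either SomeException Int)
  print r1
  print r2
```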

This leads some people to suggest that the behaviour of seq and pattern
matching be changed, so as not to distinguish these two values either.
A possible change would be to make both seq and case lazy for product
types, so that if x were a pair, then seq x y would NOT force the
evaluation of x, and case x of (a,b) -> e would behave as case x of
~(a,b) -> e, i.e. x would be demanded only when one of a or b was needed.

This would be a good thing, in that Haskell would have a nicer algebra,
but a bad thing, in that it would conceal a difference which is really
present in the implementation, thus denying the programmer a degree of
control over what the implementation does. Namely, a programmer would no
longer be able to move the evaluation of a pair earlier than its
components are needed, by using a seq or a case (or in any other way).

Unfortunately, that control may sometimes be vitally important. Anyone who
has removed space leaks from Haskell programs knows the importance of
controlling evaluation order, so as, for example to drop pointers to
garbage earlier. Consider an expression such as

if ...condition containing the last pointer
  to a very large datastructure...
then (a,b)
else (c,d)

where a, b, c and d are perhaps expressions which *build* a very large
data-structure. Evaluating this expression early enables a lot of garbage
to be collected, but evaluating a component early allocates a lot of
memory before it is needed. Thus it's important to be able to force just
the pair structure, without forcing components, and this distinguishes _|_
from (_|_,_|_), however it is done.
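Schematically (every name in this sketch is hypothetical), the fix is to
seq the pair itself:

```haskell
-- The condition's closure holds the last pointer to bigStructure.
-- Forcing just the pair constructor decides the branch early, letting
-- bigStructure be collected, while the components remain unevaluated.
earlyPair :: [Int] -> ([Int], [Int])
earlyPair bigStructure =
  let pair | sum bigStructure > 0 = (buildLarge 1, buildLarge 2)
           | otherwise            = (buildLarge 3, buildLarge 4)
  in pair `seq` pair
  where buildLarge n = replicate 1000000 n
```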

Space debugging does, to quite a large extent, involve inserting seq in
judiciously chosen places. These places are hard to predict in advance
(before seeing the heap profile). If Haskell had true products, and it
turned out that the place needing a seq happened to have a pair type, then
that simple fix would not be applicable. The programmer would need to
refactor the program, replacing the pair by another type representing a
lifted product, before the space leak could be fixed. That might require
changing an arbitrarily large part of the program, which is clearly not
desirable.

So there are good arguments both for and against having true products.
The debate has raged many times during the Haskell design process. In the
end, the argument that Haskell's designers should not make finding and
fixing space leaks any harder than it already is won, and I think that was
the right decision -- but it is a trade-off, and one must recognise that.

Incidentally, exactly the same problem arises for functions: Haskell does
not have true functions either, because _|_ and \x -> _|_ are not equal.

John Hughes
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: pretty newby

2003-09-24 Thread John Hughes
On Wed, 24 Sep 2003, Luc Taesch wrote:

 alos, Im surprised that this bland issue has not already been solved (PP
 lib paper looks dated 96). no offence intended, but aas im newbie, i
 wonder what am i missing ? what is different in the process of creation
 so that this need did not arised already? i should admint i'm puzzled,
 given the facilities at hand...

As the author of the 96 paper (wasn't it 95?), let me defend it a bit, or
at least explain what it offers, and what it does not.

The problem I was solving in that paper was how to pretty-print
*data-structures* from inside a program. That includes abstract syntax
trees in compilers -- in fact, that's maybe the most important
application. The library is used in GHC, for example, to pretty-print
programs at various stages, after varying degrees of transformation. You
the user can see what the compiler has produced after each phase, and you
see it laid out in a readable format.
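For concreteness, here is a tiny use of a Hughes-style combinator
library (this sketch assumes GHC's Text.PrettyPrint, a descendant of the
library the paper describes; the Expr type is invented):

```haskell
import Text.PrettyPrint

-- pretty-printing a data structure, in the paper's sense:
-- a toy abstract syntax tree for applications and variables
data Expr = Var String | App Expr Expr

ppr :: Expr -> Doc
ppr (Var x)   = text x
ppr (App f a) = sep [ppr f, nest 2 (parens (ppr a))]

main :: IO ()
main = putStrLn (render (ppr (App (App (Var "map") (Var "f")) (Var "xs"))))
```

The point of the combinators is that sep chooses between horizontal and
vertical layout depending on the available width, so the same ppr
definition produces sensible output for both small and large trees.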

Pretty-printing data-structures is quite different from pretty-printing
*programs* -- we should have two different words for it, really! When you
pretty-print a program, you just want to change the layout in an existing
text, and comments etc. ought to be preserved and appear in sensible
places afterwards. When we pretty-print programs, there is an input (the
program as you wrote it) to take into account. When you pretty-print a
data-structure, you're just trying to output something produced by a
program... there is no input it was produced from which the
pretty-printed output ought to resemble.

So, if you want to see what GHC has produced after phase X, then the
pretty-printing library is appropriate. Of course, if there were comments
in the program you wrote, then you ought to realise the compiler discarded
them early, and you should not expect them to appear in the result of
various transformations/optimisations.

But if you want to standardise the layout of a program you wrote, then
this is no good. The pretty-printing library is not sufficient to do this
by itself... it solves a different problem. It is possible to BUILD a
program pretty-printer using the library (although this is not the only
possible approach), but it requires combining the library with a parser
which loses no information, in particular including comments at the
appropriate places in abstract syntax trees, so that they are a part of
the data structure being pretty-printed.

The problem is that parsers in compilers don't keep track of comments...
instead they discard them at the first opportunity (during lexing). Thus
such a parser, combined with a pretty-printer, cannot solve the *program
pretty-printing* problem.  What's needed is a parser that can parse
comments, and tie them to the *right place* in the abstract syntax tree.
Figuring out what a comment is really commenting is probably extremely
hard... But without an AST with attached comments in the right places, my
pretty-printing library, and Simon PJ's extension, are useless.

That said, maybe it is surprising that no good Haskell pretty-printer has
appeared yet, especially given the importance of layout in the language.
Why not write one? I dare say there would be many users, and no doubt you
could publish a paper at the Haskell Workshop...

John Hughes

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


RE: Haskell for non-Haskell's sake

2003-09-08 Thread John Hughes
On Fri, 5 Sep 2003, Simon Marlow wrote:

 ... Claiming a lock on a file is
  easy in C (well,
  it takes 18 lines...), but there's nothing in the standard Haskell
  libraries that can do it. So I borrowed a little C code from
  the net, and
  called it via the FFI.

 Locking support is available in the unix package distributed with GHC.
 See:

 http://www.haskell.org/ghc/docs/latest/html/unix/System.Posix.IO.html#7


 Cheers,
   Simon


That'll teach me not just to look in the standard Haskell 98 libraries!
Thanks.

john

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Haskell for non-Haskell's sake

2003-09-05 Thread John Hughes
On Thu, 4 Sep 2003, Sebastian Sylvan wrote:

 On Thu, 4 Sep 2003, John Hughes wrote:
  I wrote the system for my (Haskell!) programming course, with 170 students
  last year, and it is now also being used (at least) for our Java course
  and a cryptography course. It consists of about 600 lines of Haskell and
  18 lines of C.


 Just curious. What was C used for?


 /One of the students in the mentioned haskell programming course...


I implemented a trivial database, stored in ordinary files, and had to
ensure mutual exclusion of database access between simultaneously running
CGI scripts. Since each CGI run is short, I simply locked the entire
database for the entire run. Claiming a lock on a file is easy in C (well,
it takes 18 lines...), but there's nothing in the standard Haskell
libraries that can do it. So I borrowed a little C code from the net, and
called it via the FFI.

John

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Haskell for non-Haskell's sake

2003-09-05 Thread John Hughes
On Fri, 5 Sep 2003, Johannes Waldmann wrote:

 On Thu, 4 Sep 2003, John Hughes wrote:

  I use Haskell and Wash/CGI for administering students lab work.

 same here (in addition to Haskell programs for actually grading the homework).

 just curious: what kind of data base do you use?
 we take Krasimir Angelov's MySql binding (from HToolkit).

 this has the advantage that the administrator
 can directly view/edit the data base
 via the sql shell resp. via webmin.


I wrote a very simple database implementation in Haskell, which stores and
fetches Showable Haskell values with an associated key. The interface is
very simple: just

getRecord :: Key k r => k -> IO (Maybe r)
putRecord :: Key k r => k -> r -> IO ()

where the instance of the Key class determines where the records are
stored.
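A minimal sketch of one way such an interface might be realised (this is
hypothetical, not the original implementation: the Key class is
specialised away, keys are taken to be plain file paths, and records are
any Show/Read-able value):

```haskell
import System.Directory (doesFileExist)

-- one file per record, containing the value's Show representation
getRecord :: Read r => FilePath -> IO (Maybe r)
getRecord k = do
  exists <- doesFileExist k
  if exists
    then do s <- readFile k
            length s `seq` return (Just (read s))  -- force the read so the file handle closes
    else return Nothing

putRecord :: Show r => FilePath -> r -> IO ()
putRecord k r = writeFile k (show r)
```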

Advantages:
* Trivial interface
* No need to worry about configuring any other software
* Very quick to implement
* Can store most kinds of Haskell data
* I can directly view/edit the database with emacs!

Disadvantages:
* Must handle locking myself
* No transactions, so no rollback on failure
* Leads to a network style, where records explicitly contain the keys of
  related records.
* Must maintain invariants such as "if A points to B then B points to A"
  myself.
* Performance is poor, but...

With 170 students the amount of data involved is small, and with each
accessing the system a few dozen times over a period of weeks, this isn't
a performance critical application.

I know this isn't the right way to do it, but it was quick, easy, and it
works pretty well.

John

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Haskell for non-Haskell's sake

2003-09-04 Thread John Hughes

I use Haskell and Wash/CGI for administering students lab work. Students
solve programming exercises in pairs, register their pair, and upload
their solution over the web. The pair gets a home page on which they can
see their grade and comments from their tutor, and also submit new
solutions if their tutor is unhappy with the first. Tutors have a home
page on which they can see which assignments they still need to mark,
download the student's code, set grades and enter comments, and also view
a summary of results for all students they are responsible for. As the
administrator, I can see results for each student, which submissions are
waiting to be marked, what the success rate is, and so on. (The
administrator's interface is a bit cruder than the other two, since I can
always hack the code when I need some more information...). The system
also packages up all submitted solutions ready for submission to an
automated plagiarism detector.

The benefits of the system are that students, tutors, and the
administrator can work from any machine on the Internet -- for example, at
home; submission and returns are quicker and easier for both students and
tutors, so feedback is quicker; tutors and the administrator have a much
better overview of the state of students' work; solutions are kept in a
uniform form which makes automated cheat detection easy.

I wrote the system for my (Haskell!) programming course, with 170 students
last year, and it is now also being used (at least) for our Java course
and a cryptography course. It consists of about 600 lines of Haskell and
18 lines of C.

John Hughes
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


ADV: Haskell-related postdoc positions at Chalmers University

2003-06-13 Thread John Hughes

Post-doctoral Fellowships at Chalmers University


We are seeking one or two postdoctoral research fellows to work at the
Department of Computing Science on the CoVer project (COmbining
VERification methods), a collaboration between the functional
programming, formal methods, and programming logic groups. The goal of
the project is to integrate testing and formal verification methods in
a program development environment for Haskell programs. We have been
awarded 8MSEK (about 870,000 euros) by the Swedish Foundation for
Strategic Research to fund this work. The project leaders are John
Hughes, Mary Sheeran, Peter Dybjer, and Thierry Coquand.

We are looking for well qualified candidates with a recent doctorate
in a related area, and with proven system building skills, to spend up
to two years with us as Research Fellows. We are looking for
candidates familiar with some or all of these areas:

* functional programming, especially using Haskell
* dependent types and logic
* proof editors
* automated theorem provers
* program analysis and transformation

Responsibilities will include system development in Haskell.

Further information on the CoVer project is available at

http://dilbert.cs.chalmers.se/Cover/

Successful applicants will receive a tax-free fellowship of around
16,000 SEK per month (about 1,750 euros), which is sufficient to live
comfortably in Göteborg. Swedish tax law restricts these fellowships
to people coming from abroad: those presently working in Sweden are
not eligible. Start date is negotiable from 1 September to 1 January
2004.

Gothenburg is an attractive city on the Swedish west coast, offering
an excellent quality of life. It has all the cultural amenities you
would expect of Sweden's second city, and the easy access to unspoiled
nature of a country with less than 10% the population density of the
UK! A good starting point for information on the city is

http://www.goteborg.com/default.asp?id=4293

Candidates are welcome to contact any of the Principal Investigators
with questions. Please email an application letter, together with your
CV and a copy of *one* relevant publication to [EMAIL PROTECTED],
by Sunday June 29th at the latest. Your application letter should
specifically address your system building skills as well as your
research experience. Attached documents should be Postscript or PDF.

John Hughes [EMAIL PROTECTED]
Mary Sheeran[EMAIL PROTECTED]
Peter Dybjer[EMAIL PROTECTED]
Thierry Coquand [EMAIL PROTECTED]
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: ANN: H98 FFI Addendum 1.0, Release Candidate 10

2003-06-10 Thread John Hughes


   - Great care should be exercised in the use of this function.  Not only
   - because of the danger of introducing side effects, but also because
   - \code{unsafePerformIO} may compromise typing, for example, when it is used
   - in conjunction with polymorphic references.
 
  Or maybe it would be better to provide some useful guidance? How about,
 
  To preserve the soundness of the type system, the result of
  unsafePerformIO should always have a monomorphic type. For
  example,
 
  listRef = unsafePerformIO (newIORef [])
 
  is unsafe, while
 
  listRef = unsafePerformIO (newIORef ([] :: [Int]))
 
  is type safe. In the first case listRef is assigned type IORef
  [a], which makes it possible to store a list of one type and fetch
  it with a different type.

 Unfortunately, this example is not directly applicable.  As
 Ross pointed out, it is already ruled out by the determinism
 requirement.  Moreover, `IORef's are neither part of H98 nor
 of the FFI.  The construction of a corresponding example
 with `Ptr' that uses `unsafePerformIO' deterministically is
 possible, but IMHO a bit to verbose for inclusion at this
 point.  However, I have changed the above cited warning to
 read

   Great care should be exercised in the use of this function.  Not only
   because of the danger of introducing side effects, but also because
   \code{unsafePerformIO} may compromise typing; in particular, the result of
   \code{unsafePerformIO} should always have a monomorphic type.

 This at least describes the typing problem more precisely.

 Cheers,
 Manuel


Manuel,

"should always have" is unfortunately ambiguous: does it mean "you should
ensure that...", or "we believe that..., but we're not completely sure"? I
suggest changing the last phrase to

...; to avoid this, the programmer should ensure that the result of
unsafePerformIO has a monomorphic type.

John

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


RE: flock and sockets

2003-03-20 Thread John Hughes
On Wed, 19 Mar 2003, Simon Marlow wrote:

  I am planning a Haskell project and I need to access files. Since the
  program will be automatically started whenever a mail comes
  in I will need
  to be able to lock the access to files. Is there any support
  for this in some library?

 Yes, the Posix library distributed with GHC has support for file
 locking.  See

   http://www.haskell.org/ghc/docs/latest/html/hslibs/sec-posix.html

 This library is currently undergoing a rewrite - the functionality will
 be available under System.Posix in future releases of GHC (and possibly
 Hugs & NHC).


I didn't find this when I needed to lock files, so my solution (under
Unix) was to write a little C code and call it via the FFI. I used a
single lock file, since my application was a CGI script which runs fairly
rarely -- there's no need for a finer grain of locking.

Here's the C code (taken from the web and slightly adapted):

#include stdio.h
#include fcntl.h

void claimLock()
{
  struct flock fl;
  int fd;

  fl.l_type   = F_WRLCK;  /* F_RDLCK, F_WRLCK, F_UNLCK*/
  fl.l_whence = SEEK_SET; /* SEEK_SET, SEEK_CUR, SEEK_END */
  fl.l_start  = 0;/* Offset from l_whence */
  fl.l_len= 0;/* length, 0 = to EOF   */
  fl.l_pid= getpid(); /* our PID  */

  fd = open("lock", O_WRONLY);

  fcntl(fd, F_SETLKW, &fl);  /* F_GETLK, F_SETLK, F_SETLKW */
}

Here's the corresponding Haskell declaration:

foreign import ccall claimLock :: IO ()

John Hughes

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: [OT] Teaching Haskell in High School (fwd)

2003-02-04 Thread John Hughes
On Mon, 3 Feb 2003, Rex Page wrote:

 This matches my experience, too. When I've taught Haskell to first
 year college students, there have always been some hard core hackers
 who've been at it in C or VB or Perl or something like that for
 years, and they rarely take kindly to Haskell. The ones without any
 programming background do better.

 I think Haskell would be great for a high school math class. They
 could learn some logic and induction along with it, and get a few
 proofs back into the high school math curriculum.

 Rex Page

Actually, this doesn't match my experience. I teach first years too, and I
always have some hard core hackers. In my experience, they have the
advantage that they already understand notions such as a formal syntax and
an algorithm, as opposed to expecting that the computer will somehow
know what they mean. They also have something concrete to compare
Haskell against ... which often leads them to become real Haskell
enthusiasts! But then again, my course emphasises real programming and
real-world problem solving, at the expense of logic and induction.

John Hughes


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: avoiding cost of (++)

2003-01-17 Thread John Hughes
On Thu, 16 Jan 2003, Hal Daume III wrote:

 I have a function which behaves like map, except instead of applying the
 given function to, say, the element at position 5, it applies it to the
 entire list *without* the element at position 5.  An implementation looks
 like:

  mapWithout :: ([a] -> b) -> [a] -> [b]
  mapWithout f = mapWith' []
    where mapWith' pre [] = []
          mapWith' pre (x:xs) = f (pre ++ xs) : mapWith' (x:pre) xs

 Unfortunately, this is very slow, due to the overhead of (++).


I don't think this can be sped up appreciably without avoiding the
construction of the lists pre++xs. To accomplish that, I would change the
type of mapWithout, as follows:

mapPrePost :: ([a] -> [a] -> b) -> [a] -> [b]
mapPrePost f = mapWith' []
  where mapWith' pre [] = []
        mapWith' pre (x:xs) = f pre xs : mapWith' (x:pre) xs

OK, you have to change the functions passed as f, and if f REALLY needs to
compute pre++xs then there is no gain, but I'm betting that in many cases
f has some property that enables

f' pre xs = f (pre++xs)

to be computed by a more efficient method.
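For instance (my example, not Hal's): if f is a sum, the split costs nothing, because order doesn't matter:

```haskell
-- The accumulator version from above, self-contained.  Note that
-- pre is the *reversed* prefix.
mapPrePost :: ([a] -> [a] -> b) -> [a] -> [b]
mapPrePost f = go []
  where go _   []     = []
        go pre (x:xs) = f pre xs : go (x:pre) xs

-- Sum of the list without each element: no (++) needed, since
-- sum pre + sum xs == sum (pre ++ xs).
sumWithout :: [Int] -> [Int]
sumWithout = mapPrePost (\pre xs -> sum pre + sum xs)
-- sumWithout [1,2,3] == [5,4,3]
```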

John


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Interpret haskell within haskell.

2002-12-20 Thread John Hughes
On Fri, 20 Dec 2002, Matt Hellige wrote:

 [Christopher Milton [EMAIL PROTECTED]]
  --- David Sankel [EMAIL PROTECTED] wrote:
   I was wondering if there is any project that aims to
   interpret haskell within haskell.

 [... snipped intro to ghci ...]

  If you have defined functions in myprog.hs:
  :load myprog.hs
  then the functions defined in the file are available,
  or else you'll get error message(s) about problems
  found parsing myprog.hs.
 

 I'm not sure this is quite what he's asking for. For many projects it
 is useful to be able to dynamically (and programmatically) load and
 evaluate arbitrary chunks of code for one reason or another.

Maybe this isn't quite what he was asking for either, but...

When I want to do this kind of thing, I use hugs as a back-end. I write
the expressions I want to evaluate, or whatever, to a file of hugs
commands, and then run

system "hugs <hugsinput >hugsoutput"

then read and process the output (choose your favourite filenames,
/tmp/... or whatever).
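In outline, the pattern looks like this (a sketch under my own naming: the file names and the `evalVia` helper are illustrative, and any line-oriented filter can stand in for hugs):

```haskell
import System.Process (system)

-- Use an external interpreter as a back-end: write its input to a
-- file, shell out with redirections, then read the output back.
evalVia :: String -> String -> IO String
evalVia interp input = do
  writeFile "backend-in.txt" input
  _ <- system (interp ++ " <backend-in.txt >backend-out.txt")
  readFile "backend-out.txt"

main :: IO ()
main = do
  -- Stand-in for  system "hugs <hugsinput >hugsoutput" :
  out <- evalVia "tr 'a-z' 'A-Z'" "hello\n"
  putStr out
```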

It's not the world's fastest solution, but on the other hand hugs provides
excellent reflection -- you can do much more than just evaluate
expressions, you can ask hugs about types, class instances, etc etc. I've
used this, for example, in an old prototype JavaDoc like tool which used
Hugs to determine the types and other properties of documented names, and
in a little script for use with QuickCheck, which finds all the property
names defined in a set of modules, loads the modules into hugs, and tests
the properties.

You may think this is a very simple-minded approach, but I would defend it
on several grounds:

* it is VERY simple, and provides programmatic eval without any semantic
  problems whatsoever.

* knowledge of the Haskell language is isolated where it belongs, in the
  hugs interpreter -- my tools only need to know how to work the hugs
  interface. As the language evolves, I can keep up just by installing a
  new version of hugs -- I have no parser and interpreter of my own to
  maintain.

Easy and effective -- if a bit slow.

John Hughes


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: AW: slide: useful function?

2002-12-02 Thread John Hughes
 On Mon, 2 Dec 2002, Andrew J Bromage wrote:

  ... If you mention a term like design patterns,

 well I love design patterns, it's just that in Haskell-land
 they are called higher-order functions, or polymorphic functions, etc.

 -- Johannes Waldmann  http://www.informatik.uni-leipzig.de/~joe/ --

Or maybe more general notions, such as

   define a recursive datatype together with a catamorphism combinator

or even

   embed a domain-specific language via combinators over an ADT

There are patterns of that sort in our programs, which we would probably
rather call design techniques, which aren't so easily captured by a
higher-order function definition.

John


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: AW: slide: useful function?

2002-12-02 Thread John Hughes
On Tue, 3 Dec 2002, Andrew J Bromage wrote:

 On Mon, Dec 02, 2002 at 10:27:21AM +0100, John Hughes wrote:

  There are patterns of that sort in our programs, which we would probably
  rather call design techniques, which aren't so easily captured by a
  higher-order function definition.

 As a matter of interest, _why_ would we rather call them design
 techniques than design patterns?  Is it just an aversion to
 buzzwords, or are they fundamentally different?


No particular reason, maybe, except that the buzzword isn't established in
our community. And no wonder, since we have no catalogue of Haskell
design patterns -- we're not even completely sure what they are! If
there were such a catalogue, and it became popular among Haskell users,
and it called the entries design patterns, then that would establish the
term.

One difference which I see between a technique and a pattern is that
there is a uniform and somewhat formal way of describing patterns: thought
has been put into what you should document when you describe a pattern.
Maybe we should stick to vaguer terms such as technique until we've put
in the corresponding thought!

John


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Best recursion choice for penultimax

2002-11-25 Thread John Hughes

 What does nub stand for?  (This is the first I've heard of it.)
 From the definition in List.hs it seems to remove repeats, keeping
 only the first.

Yes, that's what it does. It doesn't stand for anything, it's a word:

nub: small knob or lump, esp. of coal; small residue, stub; point or gist
(of matter or story).
Concise OED.

It's the second or third meaning which is implied here.

Hmm, maybe that's not such a great explanation. I wonder who can come up
with the best acronym? My contribution is

Note Unique Bits

John


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Writing a counter function

2002-06-30 Thread John Hughes

On Sun, 30 Jun 2002, Jon Cast wrote:


 Mark Carroll [EMAIL PROTECTED] wrote:
  On Sat, 29 Jun 2002, Samuel E. Moelius III wrote:
  (snip)
   Here's another not-exactly-what-you-wanted solution.  :)
  (snip)

  Do any of the experimental extensions to Haskell allow a
  what-he-wanted solution? I couldn't arrange one in H98 without
  something having an infinitely-recursive type signature.

 That's because the problem requires an infinitely-recursive type.

 Consider Haskell:

 self-apply f = f f

 The *typing algorithm* (the thing that didn't complain in Lisp)
 proceeds roughly as follows:

 f is applied to at least one argument, so f must have type a -> b.
 Therefore, f's argument (f) must have type a.  So, we conclude:

 f :: a -> b
 f :: a

 But f can only have one type, so we set its types equal:

 a = a -> b

 This is clearly recursive, right?

  so I'm wondering if it could be at all sane for Haskell to allow
  such stuff

 Sure.  All it has to do is:

 1. Create its own newtype in response to such things as
 `self-apply' above.

 2. Ensure that

 self-apply f = f f

 and

 self-apply' g = g g

 have the same type.  I would *love* to hear ideas on how to do that,
 but it's difficult.


It isn't particularly hard to implement this. Haskell typecheckers use
unification to match types up; the only difference is that a graph
unification algorithm would be needed instead. Such algorithms exist ---
the only real difference is you have to remember what you're unifying to
avoid falling into a loop when unifying graphs with cycles.

The idea of adding this to Haskell has been proposed several times, but
never implemented. And the reason for THAT is that it would make type
errors significantly harder to understand.

Consider

f x y = if x==0 then y else f (x-1)

(where I forgot the second parameter in the recursive call). We expect
this to be a type error, and indeed it is --- BECAUSE recursive types are
not allowed. If they were, this definition would be well-typed, with the
type

f :: Num a => a -> b    where b = b -> b

You wouldn't get an error message unless you tried to CALL f, and perhaps
not even then. For example,

f 1 2 :: Num b => b where b = b -> b

is well-typed! So you would probably eventually, somewhere else in the
program, see an error message of the form

ERROR - Illegal Haskell 98 class constraint in inferred type
*** Type   : Num b => b where b = b -> b

This is enough to scare most people off the idea.

Think of all the times you've seen the error message

unification would give infinite type

It's not the most common kind of type error, but it does happen regularly,
at least to me. In every such case, the type error would be deferred in
the way I describe.

If I remember rightly, OCaml allows type recursion of this sort, but
restricts it to object types precisely to avoid these problems.
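For reference, the newtype workaround Jon alludes to -- making the recursion explicit so that unification never sees a cycle -- looks like this (an editorial sketch, not part of the original thread):

```haskell
-- The recursive type  a = a -> b,  named explicitly.
newtype Rec a = Rec (Rec a -> a)

-- Now self-application typechecks: the Rec constructor marks each
-- place where the cycle is unrolled.
selfApply :: Rec a -> a
selfApply f@(Rec g) = g f

-- Enough to build a fixed-point combinator without built-in recursion:
fix' :: (a -> a) -> a
fix' f = selfApply (Rec (\x -> f (selfApply x)))
-- fix' (\fac n -> if n == 0 then 1 else n * fac (n - 1)) 5 == 120
```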

John Hughes

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: problems figuring out what the type system is telling me

2002-06-08 Thread John Hughes

On Fri, 7 Jun 2002, Chris Moline wrote:
 ...
 two. i have also read what the hell are monads and monads for the working
 haskell programmer. i still do not get what i am doing wrong.

 getDepends :: String -> [String]
 getDepends p = do
     handle <- openFile (portsDir ++ p ++ "/+CONTENTS") ReadMode
     fetchDepends handle

 to my brain this takes a string, concatenates it with portsDir and
 +/CONTENTS, and passes it to openfile. openFile then returns a handle and
 this handle and passes it to fetchDepends. getDepends will return whatever
 fetchDepends returns, assuming openFile succeeds. however ghc says

 Phoebe.hs:19:
 Couldn't match `[]' against `IO'
 Expected type: [t]
 Inferred type: IO Handle
 In the application `openFile (portsDir ++ (p ++ "/+CONTENTS"))
  ReadMode'
 In a 'do' expression pattern binding:
 handle <- openFile (portsDir ++ (p ++ "/+CONTENTS")) ReadMode

 i do not know what this [t] is and i do not know why it is expected. my theory
 is it is being caused by something in fetchDepends.

[t] is a list type: GHC is expecting the call of openFile to return a list
(of t, but t is just a type variable so tells us nothing). Why is it
expecting a list? BECAUSE OF YOUR TYPE SIGNATURE!

You wrote a type signature specifying that getDepends (and fetchDepends,
for that matter), returns a list. You should have given the result an IO
type, since these functions do IO. I haven't checked, but probably the
type of getDepends should be

getDepends :: String -> IO [String]

rather than the type you wrote. So your mistake is just forgetting to
include the monad in your type signature.

Now, the error message you got maybe isn't the very clearest, but it is
logical. Remember that the do syntax that you used in getDepends is
OVERLOADED -- it can be used with any monad, not just with IO. In
particular, it can be used with the list type, which is itself a monad.
For example, we could, if we wished, define the ordinary list map function
like this:

map :: (a -> b) -> [a] -> [b]
map f xs = do x <- xs
              return (f x)

That's an example of using do with the list monad. Of course, since the do
is working over lists, then when we write x <- xs, the xs must also have a
list type. We have to be consistent, and use the same monad throughout the
do. This is just like when we use do to write an IO computation: in that
case, when we write x <- f y or whatever, the f y has to have an IO type.

Now, in your code for getDepends, GHC sees your type signature and says
Aha! This function returns a list. So the do in the body must be working
over the list monad. In that case, when we see

handle <- openFile...

then the openFile must have a list type. Oh dear, its type isn't a list,
it's IO! Better complain that [] (the name of the list type, not
the empty list) doesn't match IO!

Hence the error message you got.

And now to a thorny and controversial question: should one write type
signatures, or not? In particular, what advice should one give less
experienced Haskell programmers?

In this case, your CODE is probably quite correct. If you hadn't written
the type signatures, then GHC would just have inferred the correct types
and everything would have worked. Good advice for you might be to leave
type signatures out, compile your code, and then look at the types that
functions actually get (using ghci). You can always paste the types back
into your code, and this way, they will be correct.

On the other hand, if you omit type signatures, then when you DO get a
type error it will be in terms of type variables and classes, rather than
types such as Int or String which you would probably be expecting. There
is a trade off here.

One way to approach it is to write type signatures, but when a definition
doesn't type check, remove the signature on that one and see whether it
then typechecks. If so, the definition is right, but your type signature
is wrong. Use ghci to find out the correct type signature, and insert it.

You also mentioned layout problems when you used if in a do. I'm assuming
you tried writing something like

do 
   if ... then ...
   else ...
   ...

If you write this, then because the else appears in the same column as the
if, it's taken to be the start of a new element in the do -- with a syntax
error as the result. You just have to indent the else further than the if.
I use an indentation like this:

do ...
   if ...
 then ...
 else ...
   ...

which lets GHC see that the entire if-then-else makes up just one element
in the enclosing do. This is a classic gotcha of the layout rule: the
first layout is quite natural, but you just can't write it.

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: FFI and ODBC connectivity

2002-06-05 Thread John Hughes

On Wed, 5 Jun 2002, Jamie Love wrote:

 The second set of libraries I am interested in is HTML/CGI libraries. I
 have yet to investigate those available, so any direction on which
 library package is currently live and well would also be useful.

I recommend Peter Thiemann's Wash/CGI highly -- that's what I'm using
nowadays.  Peter has put a lot of work into making it a very complete
system, and there is also a paper and tutorial material. The best thing
about it, though, is the very simple way in which it supports sessions:
you just define a form as a number of elements combined using monadic do,
you name the values entered into input fields at the time you create the
fields, and you supply a call-back function for the submit button. That's
it.

John Hughes

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: arrows

2002-05-25 Thread John Hughes

On Sat, 25 May 2002, Koen Claessen wrote:


 There are many types which would fit nicely in an arrow
 framework, but do not because of the demand of these
 operators, here are two examples:

   * Isomorphisms, are nice arrows:

   type Iso a b = (a -> b, b -> a)

 but of course not all functions have an appropriate
 inverse, so arr cannot be defined.

   * Stream processors (from Fudgets) are nice arrows:

   data SP a b = Get (a -> SP a b) | Put a (SP a b) | Nil

 But the first operator assumes that the product type
 associated with this arrow must be Haskell's product
 (,), but in fact a sum type would make a much nicer
 product.

 The reason why John chose to lay out the arrow library as it
 is (I think) is because of:

   * Simplicity; if you are too general then you get lots of
 painful classes all over the place.

   * Sufficiency; all examples he considered in his paper fit
 into the current framework.

 It is not clear if the design of the arrow library should be
 redone just because some particular examples do not fit in.
 After all, there are many examples of monads (Sets for
 example) which can not be made instance of the current monad
 class in Haskell.

 Regards,
 /Koen.

Exactly. The other reason is that I was dubious that one can do very much
WITH an arrow that doesn't have first. It's all very well to be able to
make various types into instances, but if the combinators aren't then
useful, then you've suffered extra complexity in the class structure for
nothing.

This is a compromise, of course, and I could be persuaded that it would be
better to split the Arrow class -- but only if the new instances can then
be USED in a useful way.

John
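(As a concrete rendering of Koen's Iso example -- my sketch, not from the thread: composition and identity work fine; it is only arr that would demand an inverse for an arbitrary function.)

```haskell
type Iso a b = (a -> b, b -> a)

idIso :: Iso a a
idIso = (id, id)

composeIso :: Iso a b -> Iso b c -> Iso a c
composeIso (f, f') (g, g') = (g . f, f' . g')

-- Example: Celsius <-> Fahrenheit, both directions from one pair.
c2f :: Iso Double Double
c2f = ((+ 32) . (* 1.8), (/ 1.8) . subtract 32)
-- fst c2f 100 == 212,  snd c2f 212 == 100
```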


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: name of List.nub function

2002-05-24 Thread John Hughes

On Fri, 24 May 2002, Tom Schrijvers wrote:


 The first result for nub in dictionary.com gives:
 nub, n.

 1. A protuberance or knob.
 2. A small lump.
 3. The essence; the core: the nub of a story

 I think essence is the right meaning, removing all duplicates.
 Cheers,
 --
 Tom

Yes, that was the reason for the name.

John


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: How to get functional software engineering experience?

2002-05-14 Thread John Hughes


 However, it appears that the only place (short of Ericsson) I can actually
 work on a complex functional system is in academia. Unfortunately, this is
 not an option, as I have no Ph.D., and going back to school is probably not
 realistic.

There are other companies using Erlang, if not on as large a scale. The
Erlang User Conference (proceedings online at erlang.org) is a good
starting point to find out which. Some of the best stories, though, are
from very small Erlang groups in companies mainly using something else...

I would guess the biggest Haskell company is Galois Connections Inc
(galois.com), although I know there are others. Funny there's no Haskell
in Industry section on haskell.org -- it might be small, but it wouldn't
be empty, if people using Haskell were willing to stand up and be counted.

John Hughes

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: free variables in lambda terms ?

2002-04-26 Thread John Hughes

konsu [EMAIL PROTECTED] wrote in message
news:ATOx8.42722$[EMAIL PROTECTED]...
 hello,
 
 as an exercise, i am trying to implement a pure lambda calculus
interpreter.
 i am using normal order reduction with textual substitution of terms.
 
 a program that my interpreter accepts is a list of name/term
declarations:
 
 <name 1> = <lambda term 1>
 ...
 <name N> = <lambda term N>
 
 each declaration defines a global name and binds a term to it, and the
 interpreter starts evaluation with the term named 'main'.
 
 
 i implemented all the functions necessary to evaluate a closed lambda
term,
 and now i got to the point where i need to handle free variables in a
term.
 my assumption is that free variables denote terms declared as above.
 
 my question is when should i expand free variables? i also would
appreciate
 any references to literature, etc. where this is discussed.
 
 so far i believe the way to do it is to reduce a term to its normal
form
 without handling free variables, and then, if it has free variables,
look up
 their definitions in the environment and replace them with their
bodies,
 evaluate the resulting term to its normal form again, and keep
repeating
 this process until no free variables are found.
 

This is certainly one reasonable possibility. Alternatively, you could replace
a free variable by its value as soon as you encounter it in a reduction
context (i.e. at the point where your reducer would make the next reduction
step). Since you're aiming for normal forms rather than weak head normal
forms, you presumably will substitute for free variables under lambdas.

I assume you're being careful not to capture variable names during
substitution -- beware of cases like

 f = g
 g = 1
 main = (\g.f) 2

where substituting for f in main must rename the local variable g to avoid a
name clash with the global g.

Given the Church-Rosser property, it doesn't really matter when you substitute
for free variables -- you should get the same result in any case. The only
thing to be careful of is that you don't substitute too eagerly, and thus
fall into an infinite loop. Global definitions are presumably the only way to
achieve recursion in your system (apart from using Y); substituting too
eagerly in recursive definitions will of course lead to loops. Just consider

fac = \n. if (== n 0) 1 (* n (fac (- n 1)))

where if you substitute for fac before applying the lambda-expression, you
clearly fall into a loop.
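A minimal term type makes the free-variable side condition concrete (my sketch; the constructor names are illustrative, not the poster's actual code):

```haskell
import Data.List (nub, (\\))

data Term = Var String | Lam String Term | App Term Term

-- Free variables: everything not bound by an enclosing lambda.
freeVars :: Term -> [String]
freeVars (Var x)   = [x]
freeVars (Lam x t) = freeVars t \\ [x]
freeVars (App s t) = nub (freeVars s ++ freeVars t)

-- In the example above, main = (\g.f) 2: f is free under the \g,
-- so substituting g for f there would capture the bound g.
-- freeVars (Lam "g" (Var "f")) == ["f"]
```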

John Hughes
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Coding Problems!! Please help

2002-04-17 Thread John Hughes

I am currently working on an assignment which requires us to write in
Haskell. I am having trouble with pipelines. The required textbook is
The Craft of Functional Programming, 2nd Edition.  There is a
pipeline example which I find useful in my assignment, but for some
reason there may be a typo in the pipeline syntax, which is >.>, because it
doesn't compile.

Your problem is that that book defines and uses a non-standard composition
operator.

The standard Haskell function composition is written just ., defined by

(f . g) x = f (g x)

Notice that in f . g, it is *g* which is applied first to the argument, and f
which is applied to the result of that. That is, composition is backwards in
some sense. Here's an example:

 sqrt . abs

takes the absolute value of its argument first, and then the square root of
that, not the absolute value of the square root (which would fail for negative
arguments).

Many people (including mathematicians) prefer to write the arguments of
compose in the other order, so that the first function to be applied is the
first one you write. To make that possible, Simon Thompson defines

(f >.> g) x = g (f x)

So now you would write the example above as

abs >.> sqrt

You take the absolute value first, so you write it first.

Regardless of which you prefer, the important thing to understand is that >.>
IS NOT PART OF THE HASKELL STANDARD. So if you want to use it, you have to
include the definition above in your own program.
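Packaged up for comparison (a sketch; the fixity declaration is my own choice -- Thompson's book introduces >.> for this forward composition):

```haskell
infixl 9 >.>

-- Forward composition: the first function written is applied first.
(>.>) :: (a -> b) -> (b -> c) -> a -> c
(f >.> g) x = g (f x)

-- Both of these take the absolute value first:
backward, forward :: Double -> Double
backward = sqrt . abs    -- standard Haskell (.)
forward  = abs >.> sqrt  -- Thompson's (>.>)
-- backward (-4) == 2.0 && forward (-4) == 2.0
```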

John Hughes

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: explicitly quantified classes in functions

2002-04-04 Thread John Hughes


Why can I not define the following (in ghc):

 class Foo p where
 instance Foo Double where
 foo :: Double -> (forall q . Foo q => q)
 foo p = p

From my humble (lack of) knowledge, there seems to be nothing wrong
here, but ghc (5.03) complains about unifying q with Double. 

Well, of course! The result has to have type (forall q . Foo q => q), but p
actually has type Double. Of course GHC complains.

The question is, why YOU think it is OK. And the answer is that you reason:
there is only one instance of Foo, so when I say Foo q, that means q must be
Double. But you're making the closed world assumption, that you know all of
the instances of Foo. The compiler doesn't, and for good reason.

Suppose you were to export these definitions from the module where they
appear, into another module which defines

instance Foo Integer

Now, of course, it should be OK to use foo at the type Double -> Integer
(since Foo Integer holds in this context). Yet clearly, the *definition* you
gave cannot be used with this type. Hence the type is wrong.

Of course, you could argue that if the definition of foo is *not* exported,
then the compiler should use the closed world assumption and the program
should be accepted. But it would be really weird if whether a definition is
well-typed or not depended on whether or not it was exported...

John

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users






Re: >> and do notation

2002-04-03 Thread John Hughes


 I think the point that's being missed in this discussion
 is that a monad is an *abstract* type, and sometimes the
 natural equality on the abstract type is not the same as
 equality on representations. ... If we can give  a more efficient
 implementation, which constructs a better representation
 than = does, that's fine, as long as the two
 representations mean the same thing.

Point taken, but I remain unconvinced. What comes out of the
monad /isn't/ abstract; there's nothing to ensure that
subsequent use respects the abstraction.

That's true, of course. But *should* we insist that the programming language
guarantee that abstractions are always respected? i.e. equivalent
representations are always treated equivalently? We don't always in other
contexts. To take an example, consider a bag datatype with a bagFold operator
and an Eq instance that implements bag equality. Then

xBag == yBag

doesn't necessarily imply

bagFold op z xBag == bagFold op z yBag

We rely on the programmer to use bagFold properly, i.e. supply an AC op with
unit z. If he doesn't, the abstraction is broken.
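Concretely (my own sketch of the bag example, not code from the thread):

```haskell
import Data.List (sort)

newtype Bag a = Bag [a]

-- Bag equality ignores order...
instance Ord a => Eq (Bag a) where
  Bag xs == Bag ys = sort xs == sort ys

-- ...but bagFold exposes the underlying list, so a non-AC operator
-- can distinguish equal bags.
bagFold :: (a -> b -> b) -> b -> Bag a -> b
bagFold op z (Bag xs) = foldr op z xs

xBag, yBag :: Bag Int
xBag = Bag [1, 2]
yBag = Bag [2, 1]
-- xBag == yBag, yet bagFold (:) [] xBag == [1,2]
--               while bagFold (:) [] yBag == [2,1]
```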

 This is all perfectly respectable, and has to do with the
 fact that Haskell is a programming language, not
 mathematics -- so we represent equivalence classes by
 values chosen from them. For the *language* to rule out
 constructing different representations for equivalent
 constructions, such as >> and >>=, would be unreasonable.

Here's the problem. Your argument sounds very similar to the
one against type checking. That /you/ can get it right
doesn't encourage me to believe that J Random Hacker isn't
going to abuse the facility. It's not as if you couldn't
define != and ! for something that's nearly a monad, it
would just stop you using the do notation, which is, I think
fair, since Haskell provides no way of selecting the correct
form of equality for do {_ - A; B} == do {A; B}.

  Jón

This is a much bigger cost than you make out. Making something into a monad
allows not only the use of the do syntax, it allows the use of a large number
of generic monad operations, the application of monad transformers, etc
etc. It's a big deal.

I agree there's an analogy here with type-checking here. Indeed, there's a
tradeoff where typechecking is concerned too: we give up some flexibility for
greater safety. We accept that nowadays, because the flexibility we give up
with Hindley-Milner typing is not so great, and the gain in safety is
considerable. But would you argue that we should prohibit programs which
potentially break an abstraction?  I.e. prohibit bagFold, or alternatively
allow it only for operators which the compiler can *decide* are AC? Would you
want to prohibit monad instances which the compiler cannot decide satisfy the
monad laws? I know you would want to prohibit monad instances where the
compiler cannot decide that >> and >>= are properly related.

The trouble is that prohibiting bagFold, separate definitions of >>,
etc. altogether is quite a big cost: such things are very useful! Yet we lack
the analogue of the Hindley-Milner type system -- a way for the compiler to
*decide* that a monad instance, an ADT definition, or whatever, is OK, which
is *proven* sufficiently flexible to handle almost all practical cases. In its
absence, the analogy with type-checking is not compelling. If all we had was a
monomorphic type system, then I would be against imposing type-checking too.

Perhaps there's an interesting research direction here for somebody...

John

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: >> and do notation

2002-04-01 Thread John Hughes


If (as a human reader of a programme) I see

do a <- thing1
   expression

and I notice (perhaps after some modifications) that a is
not present in expression, then I /really/ don't want a
change to

do thing1
   expression

to change the meaning of the programme.

I think the point that's being missed in this discussion is that a monad is an
*abstract* type, and sometimes the natural equality on the abstract type is
not the same as equality on representations. Of course, we want the left and
right hand sides of the monad laws to have the same meaning, and we want >>
to mean >>= \_ ->, but this meaning is really up to the abstract equality,
not the concrete one. If we can give >> a more efficient implementation, which
constructs a better representation than >>= does, that's fine, as long as the
two representations mean the same thing.

To be specific, the application that kicked off this discussion was program
generation. In this example, it's not important that >> and >>= generate the
same *program text*, the important thing is that they generate equivalent
programs. If >> can more easily generate a more efficient program, that's
fine.

There's another example in QuickCheck, where we use a monad Gen for random
value generation -- Gen a is a generator for random values of type a. Gen does
not satisfy the monad laws: for example, g and g >>= return will usually
generate different values. But viewed as *probability distributions* (which is
how we think of them), they are the same. Morally, Gen is a monad.

This is all perfectly respectable, and has to do with the fact that Haskell is
a programming language, not mathematics -- so we represent equivalence classes
by values chosen from them. For the *language* to rule out constructing
different representations for equivalent constructions, such as >> and >>=,
would be unreasonable.

John Hughes
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


