Re: ANNOUNCE: hmake-3.06

2002-08-09 Thread mgross





On Fri, 9 Aug 2002, Malcolm Wallace wrote:

>   hmake-3.06
>   --
> We are pleased to announce a fresh, bugfix, release of hmake, the
> Haskell compilation manager.
> 

www.cs.york.ac.uk seems to be down. Does anyone know of a mirror that
might have the new release, or when the home site will be back up? 

Thanks in advance, 

Murray Gross





Re: Yet more text pedantry

2002-08-09 Thread Alastair Reid


Can we stop the pedantry and have some people go off in a corner and
produce a design which:

1) Solves some of the internationalization issues, notably those
   involving Unicode and locales.

2) Will work on a decent range of existing and plausible future
   Windows and Unix boxes.  (Embedded systems, mainframes, PDAs,
   etc. are also worthwhile but since we would not run the full
   Haskell libraries on them they are of secondary importance.)

   That is, follow a standard spec if you can but when the spec
   becomes impossible to use because of some wild generalization which
   covers situations that will never come up, make a few assumptions
   based on what real systems do.

3) Can support nearly all of the current Haskell '98 libraries without
   change and as much as possible of the Hugs-GHC/hslibs/hierarchical
   libraries with slight changes.  This is partly because, for all its
   faults, the current interface has the virtue of being simple.

   I envisage a veneer which implements the old interface on top of
   the new design.  That is, the new design might expose all kinds of
   information about the encoding in the typesystem or through conversion
   functions or whatever but this complexity could be hidden behind
   an interface which reads and writes characters and does something
   plausible when it encounters UTF-32 and friends.

4) Relies on (and plays well with) Haskell'98 and approved addenda.
   
   (It's possible to meet this goal by lobbying for other common
   extensions to become approved addenda.)

5) Someone is going to produce a decent quality implementation for.
   (Talk is cheap and all that...)

   This is much easier now that both Hugs and GHC are working from the
   same source tree for libraries (with suggestions that NHC will
   follow suit).

--
Alastair Reid [EMAIL PROTECTED]  
Reid Consulting (UK) Limited  http://www.reid-consulting-uk.ltd.uk/alastair/



Re: GHC bug,or Hugs feature?

2002-08-09 Thread Mark Tullsen

I believe the incompatibilities are explained thus:

  Section 4.5.1 of the Haskell Report states only that
  "A dependency analysis transformation is first performed to increase
  polymorphism".

  But Hugs appears to be using a more refined version of the dependency
  analysis, as explained in section 11.6.3 of Mark Jones' paper "Typing
  Haskell in Haskell".  Read that section.
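
For what it's worth, the restriction GHC states (that the signature
contexts in a mutually recursive group be identical) can be satisfied by
strengthening f's context to match g's.  A sketch, which I'd expect both
compilers to accept:

  f :: Ord a => a -> Bool
  f x = x == x || g True

  g :: Ord a => a -> Bool
  g y = y <= y || f True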

- Mark


Arthur Baars wrote:
> In Mark Jones' paper Typing Haskell in Haskell, I found the following
> example(in the section on binding-groups):
> 
> f   :: Eq a => a -> Bool
> f x = x==x || g True
> g y = y<=y || f True
> 
> According to the paper the inferred type of g should be:
>  g::Ord a => a -> Bool
> 
> Hugs infers this type but GHC infers the following *ambiguous* type:
> *Main> :i g
> -- g is a variable, defined at Test.hs:25
> g :: forall a. (Eq a) => Bool -> Bool
> 
> When adding an explicit type signature for g, Hugs happily accepts the code,
> but GHC gives the following error:
> 
> f   :: Eq a => a -> Bool
> f x = x==x || g True
> g   :: Ord a => a -> Bool
> g y = y<=y || f True
> 
> Test.hs:24:
> Couldn't match `{Ord a}' against `{Eq a1}'
> When matching the contexts of the signatures for
>   g :: forall a. (Ord a) => a -> Bool
>   f :: forall a. (Eq a) => a -> Bool
> The signature contexts in a mutually recursive group should all be
> identical
> When generalising the type(s) for g, f
> Failed, modules loaded: none.
> 
> I think the problems are caused by differences in the binding group analysis
> in Hugs and GHC. 
> 
> Malcolm, could you check what NHC says about the examples above?
> 
> Cheers, 
>  Arthur
> 





Re: Yet more text pedantry

2002-08-09 Thread George Russell

Ketil Z Malde wrote:
> 
> George Russell <[EMAIL PROTECTED]> writes:
> 
> > "Ketil Z. Malde" wrote:
> > [snip]
> 
> >>> and on Solaris the default representation of a characters is as a
> >>> signed quantity.
> 
> >> Why should we care?
> 
> > If you want to talk to any C libraries or C programs which use
> > characters, which some  of us do.  GNU readline and regex come to
> > mind.
> 
> Yes, which is why we all agree on CChar for FFI purposes.
> But we were discussing IO, weren't we?
Well, for example, I mentioned regex.  Using a different sort of char will
potentially break regex, since the meaning of a character range
[A..B] changes if A and B have different signs.  So either RegexString will
have to do complicated transformations of the regular-expression string to
fix this (you will need to buy Simon Marlow several drinks), or else the
manual will have to admit that the ordering used by RegexString differs from
that used anywhere else.

This is just one example that comes to mind; there are probably lots of
other cases where C libraries we might want to interface to provide things
that depend on the ordering of char.
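
Just to make the range point concrete, here is a toy example (nothing to
do with RegexString itself; it only shows how a range collapses once the
upper bound wraps negative in a signed 8-bit type):

  import Data.Int  (Int8)
  import Data.Word (Word8)

  lo, hi :: Int
  lo = 0x61   -- 'a'
  hi = 0xE9   -- Latin-1 e-acute

  asUnsigned :: [Word8]
  asUnsigned = [fromIntegral lo .. fromIntegral hi]   -- 137 octets in the range

  asSigned :: [Int8]
  asSigned = [fromIntegral lo .. fromIntegral hi]     -- 0xE9 wraps to -23: empty range

  main :: IO ()
  main = print (length asUnsigned, length asSigned)   -- prints (137,0)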



Re: Yet more text pedantry

2002-08-09 Thread Ketil Z Malde

George Russell <[EMAIL PROTECTED]> writes:

> "Ketil Z. Malde" wrote:
> [snip]

>>> and on Solaris the default representation of a characters is as a
>>> signed quantity.

>> Why should we care?

> If you want to talk to any C libraries or C programs which use
> characters, which some  of us do.  GNU readline and regex come to
> mind. 

Yes, which is why we all agree on CChar for FFI purposes.
But we were discussing IO, weren't we?

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants



Re: AlphaBeta (chess) in Haskell

2002-08-09 Thread Malcolm Wallace

Mario Lang <[EMAIL PROTECTED]> writes:

> Also, I've found a reference to a working mate-problem
> solver which was used for HAT testing, but
> also, no code.

The chess end-game solver we used with Hat is one written by Colin
Runciman.  The code is attached (the main program is in Mate.hs), in
a tar file together with several example board positions and solutions.

Regards,
Malcolm

P.S. We plan to release a whole bunch of small Haskell programs of
 this nature at some point in the near future.



Mate.tar
Description: Binary data


ANNOUNCE: hmake-3.06

2002-08-09 Thread Malcolm Wallace

hmake-3.06
--
We are pleased to announce a fresh, bugfix, release of hmake, the
Haskell compilation manager.

The usual hmake highlights
--
* hmake knows about interface (.hi) files.
* hmake is compiler-independent, and allows multiple compiler versions.
* hmake is aware of many pre-processors.
* hmake can generate object files in a separate directory from your sources.
* hmake understands the library package system.
* hmake understands hierarchical module namespaces.
* hmake understands the Hat tracer.

What's new in 3.06
--
* Better handling of package libraries. Previously, the package
  import directories were detected at installation time, so the
  addition of a new package required hmake-config to be invoked
  to update the config database. Also, because all of the package
  dirs were searched on every invocation, hmake  could not warn of a
  missing -package flag. Now, package dirs are detected at runtime,
  and only for the requested packages - this fixes both problems.

* Added the cmdline option 'list' to hmake-config, to display the
  set of Haskell compilers known to hmake.

* Bugfix for the -hat option. Ensure that if a file goes through
  cpp before hat-trans, the resulting .hx file is moved from the
  temporary dir back to the build dir.

* Bugfix, to ensure that hmake isn't confused by the escaped
  character \\ in a literal string.

More info, and downloads

http://www.cs.york.ac.uk/fp/hmake/


Regards,
Malcolm Wallace



Re: Yet more text pedantry

2002-08-09 Thread George Russell

"Ketil Z. Malde" wrote:
[snip]
> 
> > and on Solaris the default representation of a characters is as a
> > signed quantity.
> 
> Why should we care?
[snip]
If you want to talk to any C libraries or C programs which use characters, which some 
of us do.  GNU readline and regex come to mind.



Re: Yet more text pedantry

2002-08-09 Thread Ketil Z. Malde

George Russell <[EMAIL PROTECTED]> writes:

>> How does the file system know the difference?  I think you mean that
>> C chars on Solaris are signed, not that files and sockets don't
>> contain octets.

> Well, you can define the files to contain only directed graphs if it makes
> you feel any happier,  but the fact is that the standard access functions 
> return characters*, 

What "standard access functions"? The functions found in C libraries?
From the Solaris man pages, the "read" system call reads bytes into a
void * buffer.

I would propose that the standard access functions in *Haskell* return
Word8, *regardless* of operating system or C libraries.  As long as
you have primitives to do octet IO, this should be straightforward,
regardless of whether the OS (or other programming languages or
libraries) thinks the octet is signed or not. 
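
As a strawman, octet primitives could even be sketched today on top of the
existing Char-based Handle operations (hGetOctet and hPutOctet are names I
have just made up; a real implementation would read the buffer directly
rather than go through Char):

  import System.IO  (Handle, hGetChar, hPutChar)
  import Data.Word  (Word8)
  import Data.Char  (ord, chr)

  -- Hypothetical octet primitives, faked on top of the current Char-based
  -- API, which today passes values 0..255 through unchanged.
  hGetOctet :: Handle -> IO Word8
  hGetOctet h = fmap (fromIntegral . ord) (hGetChar h)

  hPutOctet :: Handle -> Word8 -> IO ()
  hPutOctet h = hPutChar h . chr . fromIntegral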

> and on Solaris the default representation of a characters is as a
> signed quantity. 

Why should we care?

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants



Re: Yet more text pedantry

2002-08-09 Thread George Russell

"Ketil Z. Malde" wrote:
> 
> George Russell <[EMAIL PROTECTED]> writes:
> 
> > Ketil wrote (quoting Ken)
> 
> >>> On most machines, Char will be a wrapper around Word8.  (This
> >>> contradicts the present language standard.)
> 
> >> Can you point out any machine where this is not the case?  One with a
> >> Haskell implementation, or likely to have one in the future
> 
> > That's easy enough.  On Sun/Solaris (which I use and which came out as
> > being very popular on the Haskell survey) characters are SIGNED, so the
> > values run from -128 to 127 and the wrapper would be not Word8 but Int8.
> 
> How does the file system know the difference?  I think you mean that
> C chars on Solaris are signed, not that files and sockets don't
> contain octets.
Well, you can define the files to contain only directed graphs if it makes
you feel any happier, but the fact is that the standard access functions
return characters*, and on Solaris the default representation of a character
is as a signed quantity.


*.  Though in fact it must be admitted that some, such as fgetc, actually return
an integer usually containing an _unsigned_ char, so that negative values can
be reserved for other information (such as EOF).  Life can be very complicated
sometimes.



Re: Yet more text pedantry

2002-08-09 Thread Ketil Z. Malde

George Russell <[EMAIL PROTECTED]> writes:

> Ketil wrote (quoting Ken)

>>> On most machines, Char will be a wrapper around Word8.  (This
>>> contradicts the present language standard.)

>> Can you point out any machine where this is not the case?  One with a
>> Haskell implementation, or likely to have one in the future

> That's easy enough.  On Sun/Solaris (which I use and which came out as
> being very popular on the Haskell survey) characters are SIGNED, so the
> values run from -128 to 127 and the wrapper would be not Word8 but Int8.

How does the file system know the difference?  I think you mean that
C chars on Solaris are signed, not that files and sockets don't
contain octets. 

> I think this demonstrates the perils of saying "It's safe to assume
> everything is 8 bit because everything is now".

I don't think it does so at all.  There may be a peril in assuming
octet IO, but frankly I think trying to anticipate different futures
will only make things messy, with a great likelihood of turning
out useless anyway.

Remember, worse is better.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants



Yet more text pedantry

2002-08-09 Thread George Russell

Ketil wrote (quoting Ken)
[snip]
> > On most machines, Char will be a wrapper around Word8.  (This
> > contradicts the present language standard.)
> 
> Can you point out any machine where this is not the case?  One with a
> Haskell implementation, or likely to have one in the future
[snip]
That's easy enough.  On Sun/Solaris (which I use and which came out as
being very popular on the Haskell survey) characters are SIGNED, so the
values run from -128 to 127 and the wrapper would be not Word8 but Int8.

I think this demonstrates the perils of saying "It's safe to assume everything
is 8-bit because everything is now".  Although it is the case now that the
overwhelming majority of computers use data sizes based on powers of 2, it is
nothing other than speculation to say that this will always be the case.



Text in Haskell pedantry

2002-08-09 Thread George Russell

Ashley wrote
[quote]No, a file is always a list of octets. Nothing else (ignoring metadata, 
forks etc.).
[/quote]
On MVS at least, a file is a list of lists of octets, because record boundaries are
not marked by a "record boundary character" but are handled by other means.  There are
still more horrible details of MVS access methods that no-one here will want to
know about.

I think it would be more correct to say that a file is always a
list of C characters: while this may not be strictly true, every system
in the foreseeable future is going to make it possible to pretend it is, at least
for the sort of files people are likely to want to process using Haskell.



Re: AlphaBeta (chess) in Haskell

2002-08-09 Thread Johannes Waldmann


> Does anyone know of a AlphaBeta/Minimax module for haskell

http://www.informatik.uni-leipzig.de/~joe/projekte/phutball/clients/alpha-beta/

This is rather generic.

It's applied in the Flankengott client for the Philosopher's Football game:
http://theopc.informatik.uni-leipzig.de/~joe/phutball/

For a brief description, search for "Modules for Boardgames" in
http://haskell.cs.yale.edu/communities/05-2002/html/report.html

-- 
-- >>>  roughly the third Leipzig unicycle picnic, on 18 August  <<<
-- >>> http://www.informatik.uni-leipzig.de/~joe/juggling/picknick/ <<<
-- Johannes Waldmann  http://www.informatik.uni-leipzig.de/~joe/ --
-- [EMAIL PROTECTED] -- phone/fax (+49) 341 9732 204/207 --



AlphaBeta (chess) in Haskell

2002-08-09 Thread Mario Lang

Hello.

Does anyone know of an AlphaBeta/Minimax module
for Haskell, and/or a chess module?

I've found several references on the web, but no
code.  There are small examples in the whyfp paper
and in another Haskell-related paper,
but both are far from complete.

Also, I've found a reference to a working mate-problem
solver which was used for HAT testing, but
again, no code.

Maybe someone has something already working
which could be used as a basis.

My plan is to provide a generic AlphaBeta module
which could be used for different kinds of two-player
games.
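
To make that concrete, here is a rough sketch of what the core of such a
module might look like: a plain negamax formulation of alpha-beta, with all
names made up for illustration.  It assumes the static evaluation is scored
from the point of view of the player to move, and a real module would also
want to return the chosen move, not just the score.

  -- A game is given by its legal moves and a static evaluation,
  -- scored from the point of view of the player to move.
  data GameSpec pos = GameSpec
    { moves    :: pos -> [pos]
    , evaluate :: pos -> Int
    }

  -- Negamax with alpha-beta pruning, to a fixed depth.
  alphaBeta :: GameSpec pos -> Int -> pos -> Int
  alphaBeta g depth = go depth (minBound + 1) maxBound
    where
      go d alpha beta p
        | d == 0 || null (moves g p) = evaluate g p
        | otherwise                  = loop alpha (moves g p)
        where
          loop a []     = a
          loop a (m:ms)
            | score >= beta = score                 -- beta cut-off
            | otherwise     = loop (max a score) ms
            where score = negate (go (d - 1) (negate beta) (negate a) m)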


-- 
Thanks,
  Mario




RE: Text in Haskell: a second proposal

2002-08-09 Thread Simon Marlow

Here's my take on the Unicode issue.  Summary: unless there's a very
good reason, I don't think we should decouple encoding/decoding from
I/O, at least for the standard I/O library.

Firstly, types.  We already have all the necessary types:

  - Char, a Unicode code point
  - Word8, an octet
  - CChar, a type representing the C 'char' type

The latter two are defined by the FFI addendum.

Taking hGetChar as an example:

hGetChar :: Handle -> IO Char

This combines, IMO, two operations: reading some data from the file, and
decoding enough of it to yield a Char.  Underneath the hood, the Handle
has a particular encoding associated with it.  In GHC, currently we have
two encodings, ISO8859 (aka binary, but we shouldn't use that term
because the I/O library works in terms of Char) and MS-DOS text.  We
could easily extend the set of encodings to include UTF-8 and others.  

Seeking only works on Handles with a 1-1 correspondence between handle
positions and characters (i.e. in the ISO encoding).

Why combine I/O and {en,de}coding?  Firstly, efficiency.  Secondly,
because it's convenient: if we were to express encodings as stream
transformers, eg:

decodeUTF8 :: [Word8] -> [Char]

Then we would have to do all our I/O using lazy streams.  You can't
write hGetChar in terms of hGetWord8 using this: you need the non-stream
version which in general looks something like

decode :: Word8 -> DecodingState
       -> (Maybe [Char], DecodingState)

for UTF-8 you can get away with something simpler, but AFAIK that's not
true in general.  You might want to use compression as an encoding, for
example.  So in general you need to store not only the DecodingState but
also some cached characters between invocations of hGetChar.  It's
highly unlikely that automatic optimisations will be able to do anything
useful with code written using the above interface, but we can write
efficient code if the encoder/decoder can work on the I/O buffer
directly.
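
To make the shape of that interface concrete, here is a rough sketch of the
incremental decoder instantiated for UTF-8.  The names are illustrative, it
ignores malformed and overlong sequences, and as argued above a real
implementation would work over the I/O buffer rather than octet by octet:

  import Data.Word (Word8)
  import Data.Char (chr)
  import Data.Bits ((.&.), (.|.), shiftL)

  -- Octets still needed for the current code point, plus the bits seen so far.
  data DecodingState = Await Int Int    -- Await 0 0 means "between characters"

  initState :: DecodingState
  initState = Await 0 0

  utf8Decode :: Word8 -> DecodingState -> (Maybe [Char], DecodingState)
  utf8Decode w (Await need acc)
    | w < 0x80              = (Just [chr (fromIntegral w)], Await 0 0)       -- ASCII
    | w >= 0xC0 && w < 0xE0 = (Nothing, Await 1 (fromIntegral w .&. 0x1F))   -- 2-octet lead
    | w >= 0xE0 && w < 0xF0 = (Nothing, Await 2 (fromIntegral w .&. 0x0F))   -- 3-octet lead
    | w >= 0xF0             = (Nothing, Await 3 (fromIntegral w .&. 0x07))   -- 4-octet lead
    | need > 1              = (Nothing, Await (need - 1) acc')               -- continuation
    | otherwise             = (Just [chr acc'], Await 0 0)                   -- final octet
    where acc' = (acc `shiftL` 6) .|. (fromIntegral w .&. 0x3F)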

There's no reason why we shouldn't provide encoders/decoders as a
separate library *as well*, and we should definitely also provide
low-level I/O that works with Word8.

Cheers,
Simon



Re: Text in Haskell: a second proposal

2002-08-09 Thread Ashley Yakeley

At 2002-08-09 01:19, Sven Moritz Hallberg wrote:

>> Whether or not the old Char-based ones should be deprecated, or whatever, 
>> I don't know.
>
>I think any notion of treating the _raw_ contents of a file as Chars
>must go, because it is simply incorrect. 

Right.

Certainly we need to come up with _correct_ Word8-based file functions, 
and separately, text-encoding functions. After that we need to consider 
what is to be done with the existing "expedient" (and conceptually ugly) 
Char-based file functions. Should they be deprecated, or should we fix 
them with a particular encoding scheme such as UTF-8 or ISO 8859-1, or 
what? What about newline handling? etc.

-- 
Ashley Yakeley, Seattle WA




Re: Text in Haskell: a second proposal

2002-08-09 Thread Sven Moritz Hallberg

On Fri, 2002-08-09 at 08:40, Ashley Yakeley wrote:
> At 2002-08-08 23:10, Ken Shan wrote:
> 
> > 1. Octets.
> > 2. C "char".
> > 3. Unicode code points.
> > 4. Unicode code values, useful only for UTF-16, which is seldom used.
> > 5. "What handles handle".
> ...
> >I suggest that the following Haskell types be used for the five items
> >above:
> >
> > 1. Word8
> > 2. CChar
> > 3. CodePoint
> > 4. Word16
> > 5. Char
> 
> I disagree, they should be:
> 
> 1. Word8
> 2. CChar
> 3. Char
> 4. Word16
> 5. Word8

Yes.


> >Let me elaborate.  Files are funny because the information units they
> >contain can be treated as both numbers and characters.
> 
> No, a file is always a list of octets. Nothing else (ignoring metadata, 
> forks etc.). Of course, you can interpret those octets as text using 
> "ASCII" or "UTF-8" or whatever, equally, you can interpret those octets 
> as an image using "PNG", "JPEG" etc. But those are secondary 
> transformations, separate from the business of reading from and writing 
> to a file.

Ack!


> We should have Word8-based interfaces to file and network handles. 
> Whether or not the old Char-based ones should be deprecated, or whatever, 
> I don't know.

I think any notion of treating the _raw_ contents of a file as Chars
must go, because it is simply incorrect. It's like a typo someone made,
because for a moment, he got Haskell Char and C char mixed up.


> As for Unicode codepoints, if there's to be an internationalisation 
> effort for Haskell, the type of character literals, Char, should be fixed 
> as the type for Unicode codepoints, much as it already is in GHC.

Ack.


Sven Moritz



Re: UTF-8 library

2002-08-09 Thread Sven Moritz Hallberg

On Thu, 2002-08-08 at 18:26, anatoli wrote:
> Having a locale associated with each individual stream is much more
> convenient.

I argue _strongly_ against associating some sort of locale state with
handles.

1) In agreement with Ashley's statements, file IO should use octets,
because that's what's in a file.

2) If you need to decode those octets to characters, or vice versa,
compose a (de)serialization function with the octet IO.

3) A "best shot" character-reading (or writing, for that matter)
function will be convenient. This should probably use your current
locale, because when writing a character, you'll probably want to be
able to write your own language's characters correctly.

4) For decoding, we'll need some parsing functionality, as someone
already mentioned. With that we can have functions like parseUTF8.
"Associating a locale with a stream", as you put it, then amounts to:
if f is the raw Word8 stream, let g = parseUTF8 f, where g is the Char
stream obtained by parsing f as UTF-8-encoded characters (a sketch
follows below).
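
A sketch of such a parseUTF8, written as a lazy stream transformer
(parseUTF8 is of course not an existing library function, and this version
does no error handling for malformed or truncated input):

  import Data.Word (Word8)
  import Data.Char (chr)
  import Data.Bits ((.&.), (.|.), shiftL)

  parseUTF8 :: [Word8] -> [Char]
  parseUTF8 []     = []
  parseUTF8 (w:ws)
    | w < 0x80  = chr (fromIntegral w) : parseUTF8 ws
    | w < 0xE0  = more 1 (fromIntegral w .&. 0x1F) ws
    | w < 0xF0  = more 2 (fromIntegral w .&. 0x0F) ws
    | otherwise = more 3 (fromIntegral w .&. 0x07) ws
    where
      more 0 acc rest     = chr acc : parseUTF8 rest
      more n acc (r:rest) = more (n - 1) ((acc `shiftL` 6) .|. (fromIntegral r .&. 0x3F)) rest
      more _ _   []       = []        -- input ends mid-character: stop

With that, g = parseUTF8 f recovers the Char stream from the raw octet
stream f, exactly as in point 4.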


Sven Moritz




Re: Text in Haskell: a second proposal

2002-08-09 Thread Ketil Z. Malde

Ken Shan <[EMAIL PROTECTED]> writes:

> I suggest that the following Haskell types be used for the five items
> above:
> 
>  1. Word8
>  2. CChar
>  3. CodePoint
>  4. Word16
>  5. Char
> 
> On most machines, Char will be a wrapper around Word8.  (This
> contradicts the present language standard.)

Can you point out any machine where this is not the case?  One with a
Haskell implementation, or likely to have one in the future?

If not, I don't see much point, and agree with Ashley to restrict
"real" IO to [Word8].  

I like the Encoding data structure, though. 

>data Encoding text code
>   = Encoding { encode :: [text] -> Maybe [code]
>              , decode :: [code] -> Maybe [text] }
>
>utf8     :: Encoding CodePoint Word8
>iso88591 :: Encoding CodePoint Word8

Perhaps changing it to 

data Encoding text code
    = Encoding { encode :: text -> Maybe code, ... }

so that

utf8 :: Encoding String [Word8]

but more importantly

jpeg :: Encoding Image [Word8]

Perhaps [Word8], if it is the basis for IO, should be the target for
*all* Encodings?  And encoding, can it really fail?  How about:

data Encoding text     -- or rather, 'data_item' or something?
    = Encoding { encode :: text -> [Word8]
               , decode :: [Word8] -> Maybe text }

?
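
For comparison, here is a sketch of one inhabitant of Ken's original record
(ISO 8859-1, and using Char where Ken wrote CodePoint).  It also bears on
the "can encoding fail" question: decoding an octet never fails here, but
encoding a Char can, whenever the character falls outside Latin-1.

  import Data.Word (Word8)
  import Data.Char (ord, chr)

  data Encoding text code
    = Encoding { encode :: [text] -> Maybe [code]
               , decode :: [code] -> Maybe [text] }

  iso88591 :: Encoding Char Word8
  iso88591 = Encoding
    { encode = mapM (\c -> if ord c < 0x100
                             then Just (fromIntegral (ord c))
                             else Nothing)          -- not representable in Latin-1
    , decode = Just . map (chr . fromIntegral)      -- every octet is a code point
    }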

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants



Re: UTF-8 library

2002-08-09 Thread Fergus Henderson

On 06-Aug-2002, George Russell <[EMAIL PROTECTED]> wrote:
> 
> Converting CStrings to [Word8] is probably a bad idea anyway, since there is
> absolutely no reason to assume a C character will be only 8 bits long, and
> under some implementations it isn't. 

That's true in general; the C standard only guarantees that a C character
will be at least 8 bits long.

But Posix now guarantees that C's `char' is exactly 8 bits.

Posix hasn't taken over the world yet, and doesn't look like doing so
in the near future.  So Haskell should not limit itself to being only
implementable on Posix systems.  However, systems which don't have 8-bit
bytes are getting very very rare nowadays -- it might well be reasonable
for Haskell, like Posix, to limit itself to only being implementable
on systems where C's `char' is exactly 8 bits.

-- 
Fergus Henderson <[EMAIL PROTECTED]>  |  "I have always known that the pursuit
The University of Melbourne |  of excellence is a lethal habit"
WWW:   | -- the last words of T. S. Garp.



Re: Storable tuples and what is 'alignment'?

2002-08-09 Thread Fergus Henderson

On 06-Aug-2002, Alastair Reid <[EMAIL PROTECTED]> wrote:
> 
> Andrew J Bromage <[EMAIL PROTECTED]> writes:
> > This number is called the "alignment", and a good rule of thumb for
> > computing it is:
> 
> >  instance Storable a where alignment a = sizeOf a `min` machine_word_size
> 
> The way we calculate it in GHC and Hugs is:
> 
>   #define offsetof(ty,field) ((size_t)((char *)&((ty *)0)->field - (char *)(ty *)0))

You shouldn't define offsetof() yourself.  The C standard provides
offsetof() in <stddef.h> -- you should use that rather than defining
it yourself.  Defining offsetof() yourself is an error if <stddef.h>
is included, because you are stepping on the implementation's namespace.
Furthermore, the definition quoted above is not standard-conforming C code,
since it dereferences a null pointer.

-- 
Fergus Henderson <[EMAIL PROTECTED]>  |  "I have always known that the pursuit
The University of Melbourne |  of excellence is a lethal habit"
WWW:   | -- the last words of T. S. Garp.



Re: UTF-8 library

2002-08-09 Thread Ketil Z. Malde

anatoli <[EMAIL PROTECTED]> writes:

> Dependence on the current locale is EXTREMELY inconvenient.
> Imagine that you're writing a Web browser.

Web browsers get input with MIME declarations, and shouldn't rely on
*any* default setting.   Instead, they should read [Word8] and decode
the contents according to Content-Type/Content-Transfer-Encoding.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants