Simon Cozens wrote:
> On Mon, Sep 10, 2001 at 08:38:43PM -0400, Ken Fox wrote:
> > Have you guys seen Topaz?
> 
> I may have heard of it, yes.

That's it? You're rejecting all of that work without
learning anything from it? Building strings on buffers
looked like a really good idea.
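
By "strings on buffers" I mean roughly this (a rough sketch
of the idea, not Topaz's actual layout):

    #include <stddef.h>

    struct Encoding;   /* how the bytes should be interpreted */

    /* A buffer is just managed memory -- no text semantics. */
    typedef struct Buffer {
        void   *data;     /* raw bytes owned by the allocator/GC */
        size_t  buflen;   /* bytes allocated                     */
    } Buffer;

    /* A string is a view of a buffer plus text semantics. */
    typedef struct String {
        Buffer *buffer;                    /* where the bytes live */
        size_t  bufused;                   /* bytes actually used  */
        size_t  strlen;                    /* length in characters */
        const struct Encoding *encoding;
    } String;

Memory management lives in one place and string semantics in
another, instead of the two being tangled together.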

In general I think Parrot is missing critical abstractions.
The string/buffer relation is one. Others are the use of
stacks in the interpreter and the dispatch of opcodes in
runops(). This code is going to be very difficult to work
with because assumptions about the data structures are made
in lots of different places.

IMHO this is the main reason why Perl 5 is difficult to
understand -- and Parrot is repeating it.

> > The other major suggestion I have is to avoid "void *"
> > interfaces.
> 
> I'm using void * to avoid char *. :)
> ...
> Look at the code for string_make. If you're already
> passing *in* a UTF32 buffer, why do we need any special
> processing for UTF32?

My point is that a "void *" API turns off all compiler
type checking -- it's unsafe as a public API. I have looked
at string_make() and it doesn't do any special processing
based on encoding. It *requires* that the raw bits are
compatible with the encoding.

The only reason I'm bringing this up is that the whole
intent behind handling platform encodings is to integrate
better with user environments. A really nice interface
would type check the arguments when making strings so
that a user sees a type mismatch error when compiling.
For example, on Win32, Parrot might provide a
string_make_utf16 that takes only LPWSTR.
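
Sketched signatures only (these aren't Parrot's current
declarations), but they show the difference:

    #include <stddef.h>
    #include <windows.h>    /* LPCWSTR, Win32 only */

    struct Parrot_String;   /* whatever the string type ends up as */

    /* void * interface: the compiler accepts any pointer at all. */
    struct Parrot_String *string_make(const void *buf, size_t len,
                                      const char *encoding_name);

    /* Encoding-specific interface: passing a char * or a UTF-8
     * buffer here is a compile-time error, not a runtime surprise. */
    struct Parrot_String *string_make_utf16(LPCWSTR buf, size_t chars);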

I don't know much about encodings, but I do know that
type casts make perl really difficult to understand. Casts
suck. They're the goto for data structures.

> No, that's a really bad way to do it. We can special-case
> things like UTF16 to UTF32.

Ok. Are you planning on compiling in all the encodings
or can a module dynamically load one? You might want a
slow-but-standard encoding conversion algorithm anyway.
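
Something like an encoding vtable would support either --
compiled in or loaded from a module -- with generic
conversion as the fallback. A rough sketch (my invention,
not Parrot's design):

    #include <stddef.h>

    typedef unsigned long utf32_t;   /* one code point */

    /* One of these per encoding; a module could register its
     * own at runtime.                                          */
    typedef struct Encoding {
        const char *name;   /* "utf16", "latin1", ...           */
        size_t (*decode)(const void *buf, size_t bytes,
                         utf32_t *out, size_t max);  /* -> UTF-32 */
        size_t (*encode)(const utf32_t *in, size_t chars,
                         void *buf, size_t max);     /* <- UTF-32 */
    } Encoding;

    /* Slow-but-standard fallback: round-trip through UTF-32.
     * Pairs like UTF-16 -> UTF-32 can be special-cased later
     * without callers changing.                                 */
    size_t encoding_convert(const Encoding *from, const Encoding *to,
                            const void *src, size_t src_bytes,
                            void *dst, size_t dst_max)
    {
        utf32_t tmp[256];   /* toy buffer; real code would loop */
        size_t chars = from->decode(src, src_bytes, tmp, 256);
        return to->encode(tmp, chars, dst, dst_max);
    }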

- Ken
