In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] 
(James Mastros) wrote:

> On Sun, Nov 04, 2001 at 01:38:58PM -0500, Dan Sugalski wrote:
> > Currently, I don't want to promise back before Win98, though if Win95 
> > is no different from a programming standpoint (I have no idea if it 
> > is) then that's fine too. Win 3.1 and DOS are *not* target platforms, 
> > though if someone gets it going I'm fine with it.

There is relatively little difference amongst Win95 through ME. There are 
some extras, but in practice I don't think we're going to want them (not 
in the core, in any case).

> I'd tend to say that we should support back to Win95 (original, not SP2).
> AFAIK, there's nothing that changed that should affect core perl/parrot.
> The one big exception is Unicode support: NT-based systems have much better
> Unicode. Specifically, you can output Unicode to the console. However, only
> targeting NT machines is absolutely not-an-option, for obvious reasons.

No and yes. No, in that the UNICODE[1] support in NT[2] is all-pervasive 
(i.e. the ANSI APIs are translated into UNICODE before being passed into 
the kernel).
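To illustrate that conversion cost, here's a rough sketch of what every 
ANSI ("A") entry point has to do before the call reaches the UNICODE 
kernel (the `widen` helper is hypothetical; `MultiByteToWideChar` is the 
real API doing the work):

```c
#include <windows.h>

/* Roughly the translation step hidden inside each "A" API:
 * widen the caller's ANSI string into a UNICODE (wide) buffer
 * using the current ANSI code page, before calling the "W" path. */
static int widen(const char *ansi, wchar_t *wide, int wide_len)
{
    /* -1 => treat 'ansi' as NUL-terminated; returns chars written,
     * including the terminating NUL, or 0 on failure. */
    return MultiByteToWideChar(CP_ACP, 0, ansi, -1, wide, wide_len);
}
```

An NT-only UNICODE build would skip this step entirely by calling the "W" 
APIs directly.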

> It might be that we end up with an NT binary with support for printing
> Unicode to the console, and a generic binary without.  (Come to think of
> it, the only thing that should care is the opcode library that implements
> print(s|sc).)  There's a lot of other differences, of course, but for
> everything the Win95 versions should be sufficient.  (For example, if we
> want to set security properties on open, we need to use APIs that won't
> work on 95, 98, or ME.  But so long as we don't care, the security
> descriptor parameter can be NULL, and it will work fine on both.)
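The NULL-security-descriptor point can be sketched like this (a 
hypothetical fragment; `open_log` and the filename are made up, but the 
`CreateFileA` parameters are the standard ones):

```c
#include <windows.h>

/* Passing NULL for lpSecurityAttributes requests the default
 * security descriptor on NT; on Win9x/ME the parameter is simply
 * ignored, so the identical call works on both families. */
HANDLE open_log(void)
{
    return CreateFileA("parrot.log",          /* hypothetical filename  */
                       GENERIC_WRITE,
                       0,                     /* no sharing             */
                       NULL,                  /* default security: OK
                                                 on both 9x and NT     */
                       CREATE_ALWAYS,
                       FILE_ATTRIBUTE_NORMAL,
                       NULL);                 /* no template file       */
}
```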

I would think (given Perl's roots) that this is exactly where Perl can 
gain an advantage: the ability to programmatically manipulate ACLs without 
having to take on the security APIs in full is going to be a big win 
(oops:).

The one big benefit of an NT-only build is that it could use UNICODE (but 
see [1]) as its native character set and avoid all the ASCII <-> UNICODE 
conversions in the APIs; however, this may not be a really big gain in 
practice (though I can only speak as someone who rarely uses the upper 
codes of Latin-1, let alone all the other sets that UNICODE provides -- 
e.g. to create filenames[3]).

The answer, I think, is to move as much UNICODE-enabled functionality as 
possible into modules; selecting one of those would switch in the native 
UNICODE support, and they would only be supported on NT. The other 
alternative might be the "Microsoft Layer for Unicode", which emulates 
much of the NT Unicode support on Win9x/ME; however, I need to finish 
reading the info on this (and since it's rather new...)
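If we go the module route, the module would need to know at runtime 
whether the native UNICODE path is even available. One way is to check 
the platform family (a sketch; `is_nt_platform` is a hypothetical helper 
name):

```c
#include <windows.h>

/* Returns TRUE on the NT family (NT4, 2000, XP), FALSE on
 * Win9x/ME -- i.e. whether native UNICODE APIs are usable. */
static BOOL is_nt_platform(void)
{
    OSVERSIONINFO vi;
    vi.dwOSVersionInfoSize = sizeof(vi);   /* required before the call */
    if (!GetVersionEx(&vi))
        return FALSE;
    return vi.dwPlatformId == VER_PLATFORM_WIN32_NT;
}
```

A UNICODE-enabled module could then fall back to the ANSI path (or refuse 
to load) when this returns FALSE.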

Once I'm caught up on these messages and a few others, I'll put together a 
patch to set up the defines before including windows.h so as to limit us 
to Win95 compatibility in the core.
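For concreteness, such a patch would amount to something like this at the 
top of the relevant header (a sketch; the exact macro values are my 
assumption of what "Win95 level" would mean):

```c
/* Before any #include <windows.h>: restrict the declared API
 * surface to what the original Win95 provides, so NT-only calls
 * fail at compile time rather than at runtime on 9x. */
#define WINVER          0x0400
#define _WIN32_WINDOWS  0x0400
#include <windows.h>
```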

> I should note, BTW, that I don't write windows programs when I can
> manage not to, and I don't run NT.

[OT] If you're going to run Windows, then 2k is a far easier environment 
(once it's working) than 9x (except for most games, that is).

>         -=- James Mastros

[1] Strictly speaking, UNICODE here assumes UCS-2, i.e. pre-V3.0, without 
the extension beyond 64k code points.
[2] That is NT, 2000 and XP (at the time of writing).
[3] For example (this is C++, or IIRC C99):

#include <windows.h>
#include <stdio.h>

int wmain() {   // UNICODE entry point, like main for ANSI builds
        wchar_t fn[2];
        fn[0] = 0x4f09; // a CJK Unified Ideograph
        fn[1] = 0;      // terminating NUL
        HANDLE h = CreateFileW(fn, GENERIC_WRITE, 0, 0, CREATE_ALWAYS,
                                FILE_ATTRIBUTE_NORMAL, 0);
        if (INVALID_HANDLE_VALUE == h) {
                wprintf(L"Couldn't create file: error %lu\n", GetLastError());
        } else {
                CloseHandle(h);
        }
        return 0;
}

works fine... and displays quite nicely (once I had selected a typeface 
with the symbol in it.)

-- 
[EMAIL PROTECTED]
