On 2011-10-27 20:42, Adam Wilson wrote:
On Thu, 27 Oct 2011 00:14:35 -0700, Jacob Carlborg <d...@me.com> wrote:

Are you saying that you consider using D for this Horizon project? I
can recommend you take a look at DWT: www.dsource.org/projects/dwt

Somewhere down the road I've planned to create an interface/window
builder for DWT using XML or something similar. I'm thinking something
like how it works on Mac OS X using Interface Builder.

I am, and I have looked at DWT. My problem with it is one that is
endemic to open-source UI frameworks. Microsoft recognized a decade ago
that UI widgets whose look-and-feel is defined and controlled by the
Operating System are going the way of the dodo. Out of that realization
WPF was born. UI designers today want the ability to control every pixel
of the UI's presentation. And the reasons for this are two-fold. The
first is that it turns out that most OS designers are fantastically bad
at UI and design in general. It takes epic piles of cash to pull off a
decent one and even then there is still a "programmers were here" look
to it (with the notable exception of iOS/OSX where designers rule the
roost).

I can agree to that.

The second is product differentiation. Nobody wants an app that
looks like every other app because it actually becomes impossible for
the user to distinguish which app works best for them.

I want that. Because I know how the GUI works, it will be easy for me to learn an application if it follows the guidelines of the platform. Also, I don't need to figure out whether I can use the scroll wheel of the mouse on this thing that looks like a scroll bar. If the application has a native look and feel, I know how to use the widgets.

Users ONLY look
at the UI, and if the app doesn't look good, they won't "buy" it, even if
it's free. This is non-negotiable. Users, when given two apps that do
the same thing, even for different prices, will pick the prettier one
every time, because the prettier one is perceived as being "better".
It's called the Attractiveness Bias and it is a well-known principle in
the design world. Who would you rather look at all day, Alessandra
Ambrosio or Rosie O'Donnell? I rest my case.

That's why I use Mac OS X where the native applications look good :)

I maintain that this is the prime reason that Linux on the desktop has
failed miserably, and I think Android proves my point. Android's key win
is that it put a usable UI on top of Linux. People never had a problem
with the price of Linux, they just couldn't stand to look at it. The
Linux Desktop LOOKS industrial and its apps for the most part look the
same (I know of a few outliers that did a good job, but it isn't the
norm).

I can agree to that.

My point is that the day of cookie cutter apps is over. Anyone
designing for that paradigm is history. Microsoft's latest UI paradigm,
"Metro", is just a different cut-down version of WPF similar to
Silverlight. Microsoft has no plans to go back to OS controlled UI
styles; Metro and WPF are the plan for the next 15 years. (I attended
the MS BUILD conference, they made this plan very ... nay, EXTREMELY
clear).

I don't know how Metro or WPF is implemented, but who says they can't be the native look and feel of the OS?

Open-source is chronically behind the big boys with money, precisely
because FOSS doesn't have the money to sling around for Testing and
Usability Studies, and most FOSS guys don't want to mess around with
that stuff anyways. You see FOSS guys tend to be engineers; they can put
up with, and even like, industrial looking interfaces. But programmers
also have a giant blind-spot when it comes to users. Most programmers
view users as a lower species and assume that they will be delighted by
whatever the programmer deigns to bequeath to them. But if you look at
the successful people in the tech industry (*ahem* Steve Jobs) you'll
find an attitude that is the exact opposite. Jobs was so focused on
delivering what the user wanted that he would publicly berate any
programmer who thought they knew better than the designer. While I don't
necessarily agree with Jobs' management style, there is a reason why
Apple is the second largest company in the world right now, and it has
nothing to do with how good Apple's engineering is (which I hear is
average at best). Despite programmers' best efforts, the world of
technology is no longer controlled by programmers. For better or worse,
users determine our course now. The open-source community would do well
to embrace the user.

I have no trouble letting a designer decide how the GUI should look; in fact, as you say, it would be better if they did. But not everyone can have that luxury. Since I'm no designer, I do the best I can.

But without a first-class UI framework, that will never happen. In terms
of capability and usability, both Apple and Microsoft have beaten the best
FOSS has to offer by a decade at least. I looked, searched, and scoured,
but the fact of the matter is, even the usable FOSS UI offerings are
pitiful when compared to the commercial offerings. The Horizon Project
got its start because there has been a trend in recent commercial UI
offerings towards increasing reliance on the operating system
itself. Metro XAML just flat won't work on anything other than Windows
8, Silverlight is a second class citizen at MS now, Cocoa only works on
Mac, etc. My goal with The Horizon Project is to create a first-class UI
framework for multiple platforms so that programmers don't have to
rewrite the UI from scratch for each new platform they want to support,
and then open-source it so that the commercial OS vendors can't pervert
it for their own purposes. I want to put [some of] the power back into
the programmers' hands.

Apologies for the length, but this is a topic that is of some interest
to me. :-)

It's a topic of interest to me as well. But I, on the other hand, prefer a native look and feel for applications, instead of having yet another application with yet another GUI that doesn't work properly. If I see something that looks like a scroll bar, I assume I can use it like a scroll bar. But that's not true for many applications; instead they implement the bare minimum for making the scroll bar "work", i.e. I may not be able to scroll using the wheel on the mouse. A great example of this is scroll bars in games.

In my experience, non-native GUIs always perform worse than native GUIs.

Anyway, you can still use DWT to draw your own widgets. Even if you don't use any native widgets, you still need to be able to draw something on a surface, and DWT has a platform-independent way of doing that.
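
A minimal sketch of what that custom drawing could look like, assuming DWT keeps SWT's module layout (org.eclipse.swt.*) and API; the exact module paths and signatures may differ between DWT versions:

import org.eclipse.swt.SWT;
import org.eclipse.swt.events.PaintEvent;
import org.eclipse.swt.events.PaintListener;
import org.eclipse.swt.widgets.Canvas;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;

void main ()
{
    auto display = new Display;
    auto shell = new Shell(display);
    auto canvas = new Canvas(shell, SWT.NONE);
    canvas.setSize(200, 80);

    // Paint the "widget" ourselves instead of relying on a native control.
    canvas.addPaintListener(new class PaintListener {
        void paintControl (PaintEvent e)
        {
            e.gc.drawRoundRectangle(10, 10, 150, 40, 8, 8);
            e.gc.drawString("my custom button", 25, 22);
        }
    });

    shell.pack();
    shell.open();

    // Standard SWT-style event loop.
    while (!shell.isDisposed())
        if (!display.readAndDispatch())
            display.sleep();

    display.dispose();
}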

IBM Lotus Software and IBM Rational Software are built on SWT (which DWT is a port of) and use a lot of non-native widgets. There's an application called "Azureus" that had its own GUI that didn't look native at all; apparently it's called Vuze these days and it looks native.

I don't understand this one. Should the compiler disable reflection as
soon as it sees malloc/free? On what level should it be disabled? I
mean, the runtime needs to be able to use these functions to implement
the memory manager, i.e. the garbage collector and other things as well.

The idea is to automatically prevent reflection access to sections of
the program where using direct memory manipulation could potentially
result in security holes but provide a way out for the programmer if
they really wanted it. C/C++ is famous for programmer bugs around memory
handling. If the compiler automatically disabled reflection, by sticking
@noreflect in front of a function that used malloc or new, it could
potentially prevent those types of memory manipulation flaws and help
keep the Reflection attacks to a minimum. It was an idea that I was
throwing out there. But I don't know enough about D yet to know if it's
the right way to handle it. And I have to admit I am a little confused,
though, as I would hope that reflection would be disabled on the GC,
because I have never personally had a reason to reflect into the GC...
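
Just to make the idea concrete, here is a purely hypothetical sketch; @noreflect is not a real D attribute, and its name and semantics are only the proposal being discussed. It is declared here as an ordinary user-defined attribute so the example compiles, but in the proposal the compiler itself would attach it:

import core.stdc.stdlib : free, malloc;

// Hypothetical marker: in the proposal the compiler would add this to any
// function that uses malloc/free or new, blocking reflection access to it.
enum noreflect;

@noreflect void growBuffer (ref ubyte* buf, size_t oldLen, size_t newLen)
{
    // Raw memory handling; a bug here is exactly the kind of flaw the
    // proposal wants to keep out of reach of reflection-based attacks.
    auto p = cast(ubyte*) malloc(newLen);
    p[0 .. oldLen] = buf[0 .. oldLen];
    free(buf);
    buf = p;
}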

What happens when you use class A from class B and the compiler has added @noreflect to class A? Will it add it to B as well? If not, how does the compiler know that?

There has been a similar discussion about having the compiler insert "pure" automatically on functions. But what might happen is that a function called by your function changes its implementation, making it no longer pure. Which means your function will no longer be pure and you have no idea about it.
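
A small D sketch of that concern (the function names are made up, just to illustrate):

// Version 1 of a library function: no side effects, so an inferring compiler
// would mark it pure, and therefore also mark compute() below as pure.
int scale (int x)
{
    return x * 2;
}

// I never wrote "pure" here, but inference would give it to me for free.
int compute (int x)
{
    return scale(x) + 1;
}

// Version 2 of the library function adds some module-level state:
//
//     int callCount;
//     int scale (int x) { ++callCount; return x * 2; }
//
// Now scale() is impure, so compute() silently loses its inferred purity.
// Nothing in compute()'s own code changed, and there is no error or warning.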

--
/Jacob Carlborg
