Hmm, looks like the rest of my response got lost on the way to the newsgroup somewhere, reposting the rest below:

On Sunday, 21 June 2015 at 10:07:05 UTC, Ola Fosheim Grøstad wrote:
On Sunday, 21 June 2015 at 09:07:52 UTC, Joakim wrote:
recent years and that's about it. If this webasm effort ever got into most browsers, I guarantee that almost everybody would chuck javascript and compile Java, C#, or D to the browser instead.

Java has been available for years; almost nobody used it. Flash was available for years and was only used in limited domains. ActiveX was available. PNaCl is available. asm.js is available, and webasm doesn't offer much more than asm.js in the near future.

You can wish, but certainly not guarantee.

You seem to have missed the discussion above. I guessed that they were allowing webasm to directly manipulate the DOM, rather than having to call out to javascript to do it. Reading a bit more now, I don't think they're doing that. In any case, none of the latter three technologies that allow using different programming languages were ever ubiquitous, the importance of which Wyatt and I discussed above. Just having webasm implemented in all major browsers would certainly lead to a _lot_ less javascript getting written, once devs actually have a choice of other languages, even if they'd still have to wrap javascript calls for DOM manipulation.
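
To be concrete about what I mean by wrapping: the compiled module would import a handful of javascript functions that do the actual DOM work on its behalf. Here's a rough, hypothetical sketch of that glue; the file name, import name, and export name (app.wasm, set_text, run) are all invented for illustration, and the instantiation call assumes a JS-side API along the lines of what's been proposed:

// Hypothetical glue for a compiled webasm module; app.wasm, set_text,
// and run are made-up names, and the instantiation API is assumed.
async function loadApp() {
  const memory = new WebAssembly.Memory({ initial: 1 });
  const decoder = new TextDecoder();

  const imports = {
    env: {
      memory,
      // The module can't touch the DOM itself, so it calls this wrapper
      // with a pointer/length into its linear memory and JS does the work.
      set_text: (ptr: number, len: number) => {
        const bytes = new Uint8Array(memory.buffer, ptr, len);
        document.title = decoder.decode(bytes);
      },
    },
  };

  const wasmBytes = await fetch("app.wasm").then(r => r.arrayBuffer());
  const { instance } = await WebAssembly.instantiate(wasmBytes, imports);
  (instance.exports.run as () => void)();
}

Every DOM touch still round-trips through javascript like that, which is why direct DOM access would have been the more interesting design.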

As for Java and Flash, they were very widely used, despite being slow and in their own little world inside the browser. It was Flash that finally brought video widely to the browser, not the few HTML tags, codecs, and players that were there before. And neither is as integrated into the web stack as webasm will be.

Actually, that's one of the big problems with the more dynamic model: it breaks search engine indexing. How does the crawler have any idea how to navigate an app UI and generate URLs that are meaningful, if they're even made available by the app?

Google provides ways to index dynamic apps, but it's more work, so it costs more in developer time.

Right, that is the problem. The old static page model was naturally geared towards search engines, but the new dynamic model isn't. That's a big problem for Google, whether they realize it or not.

enough. But as I noted earlier, the canvas tag doesn't even support hyperlinks natively, which is a pretty big omission for a web technology.

Not sure what you mean by that? You trigger on the click and load the target page? Or if you wish, you can overlay hyperlink rectangles on top of the canvas.

I meant that it'd be nice if linking to parts of the canvas had a bit nicer support than this:

http://stackoverflow.com/questions/6215841/create-links-in-html-canvas
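
For reference, the approach in that answer boils down to keeping your own list of clickable rectangles and hit-testing them in a click handler. A rough sketch of that; the canvas id, rectangle coordinates, and target URL are made up for illustration:

// Manual hit-testing, roughly as described in that stackoverflow answer.
// The canvas id, link rectangle, and URL here are invented for illustration.
const canvas = document.getElementById("chart") as HTMLCanvasElement;

// The "link" is just a rectangle we remember ourselves, plus a destination.
const link = { x: 20, y: 20, width: 120, height: 30, href: "/details.html" };

canvas.addEventListener("click", (e: MouseEvent) => {
  const rect = canvas.getBoundingClientRect();
  const x = e.clientX - rect.left;
  const y = e.clientY - rect.top;
  if (x >= link.x && x <= link.x + link.width &&
      y >= link.y && y <= link.y + link.height) {
    // The canvas knows nothing about links, so we navigate by hand.
    window.location.href = link.href;
  }
});

That gets you a clickable region, but none of what a real anchor tag gives you for free: hover feedback, middle-click or open-in-new-tab, or anything a crawler could follow.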

OK, that alone isn't going to magically make something as low-level as the canvas that much better integrated into the web, but it might help. I haven't messed with the canvas much, but it's interesting how little it's been used, despite all the hype it got when it was first released.

The current model is quite flexible, you can mix technologies. Perhaps too flexible.

That's what you do when you mash a bunch of disparate technologies together: make them mixable and flexible and let the devs deal with all the complexity and bugs. :)

actually work. As I already noted, SVG doesn't have to be text to be "embedded."

It has to be part of the DOM. Parsing is not the main issue.

If speed of parsing and analyzing weren't one of the main issues, why are they even taking this webasm binary approach? A binary SVG can be made part of the DOM too once it's parsed.

Very responsive because they're made up of trivially simple line art, perhaps.

Trivial is relative. You can't have a full-on photon-based simulation. You can have an advanced WebGL shader if you want. As long as the renderer is the bottleneck, you have to design for the renderer, no matter what kind of renderer you have. And you have many:

1. HTML5/CSS
2. HTML5/CSS GPU transforms
3. SVG
4. Canvas2D
5. WebGL.

That's five different rendering strategies with different performance characteristics and you have to design your graphics for each one of them.

We were talking about the original web stack and SVG, i.e. 1-3 in your list. WebGL is a whole different beast.

attach event-listeners to parts of the SVG. Not having HTML and SVG in the same source would be very confusing.

It wouldn't be confusing at all. You'd simply do all of that on your side, in your text SVG authoring format alongside your HTML, then compile the SVG to a binary encoding on the server and send that to the browser.

That would just be a different encoding of HTML5; if parsing were a major bottleneck, that might be a point. But it would have to coexist with the textual version, and developers would only upgrade if it solved a problem.

It's only a different encoding of SVG, which the browser would then integrate into the DOM. At this point, you'd still have to have the text fallback, as you say, because they already put it in, but an option for binary encoding would significantly increase its use. Of course, as Wyatt said, SVG has a lot of other problems too.

In the scripting API using text as values might be an issue, but that's a different topic.

Nothing that couldn't be made to work with the appropriate binary encoding.

On Sunday, 21 June 2015 at 10:13:22 UTC, Kagamin wrote:
Do you think it's wise to ignore 2 billion users? The size of the mobile market doesn't mean you can target it entirely. The article suggests we're currently in an era of services, and services are clustered by culture, which means you can't target users outside of your cultural cluster, while desktop applications usually target the entire desktop market without exception.

Apparently most new apps nowadays are ignoring that legacy desktop market. :) My point was that as mobile devices become usable with larger monitors, that desktop market is going to collapse. As for cultural clusters, that's changing as they're now starting to bleed into each other: look at Office on Android/iOS and the multi-window stuff coming to mobile devices.

All the major mobile vendors are working on multi-window implementations that will soon allow you to plug your mobile device into a dock connected to a monitor/keyboard/trackpad on your desk and run your mobile apps much like you would on the desktop: Apple's just-announced multi-window feature to go along with their coming iPad Pro, Google's in-development multi-window implementation found in the Android M build, and Microsoft's recently announced Continuum for mobile devices, which lets you plug your Windows Phone into a monitor and use Office with a desktop UI.

Are you going to support windows phone?

No, of course not. :) But I'd been saying for a year or two that MS was dumb not to put desktop UIs and apps on their mobile devices, so you could use them with a monitor, and they're finally fixing that. I can't imagine anyone actually wants to use Excel or Word on a touchscreen; I have no idea why they made such a big deal out of that. Will it save Windows Mobile? I doubt it, but given the strength of their office suite, it might sustain it a bit longer.

What this means is that people will soon be using their mobile devices for almost everything and desktop computers are effectively dead. :) Now, workstations were killed off by PCs and they still sell a couple million worldwide. Similarly, there will always be a niche for PCs and mainframes. It's just a small niche.

It will be the desktop for all practical purposes, just more constrained in resources. The mobile platform will embrace two unrelated ecosystems, and you will still have to choose which ecosystem you target, and since the desktop is a minority, why would you care about the mobile desktop? It will be a minority for all the same reasons that make the desktop a minority.

That's like saying current PCs are "mainframes for all practical purposes, just more constrained in resources." Do you honestly believe that too? ;)

I disagree that the ecosystems are unrelated, though I agree that they're different, but yes, the desktop UI on mobile devices will definitely be a minority. Most people using computers just want to read a little, hit a couple buttons, and listen to music or watch a video once in a while, anywhere they are. That's what mobile devices are for. The formerly dominant use case for computers, creating content or getting work done, is a small part of what computers are bought and used for nowadays.

So yes, the desktop UI is a niche, but a moderately large niche that is about to move to mobile devices also. There will always be a tiny niche of users that sticks with desktops/laptops, workstations, and mainframes.

On Sunday, 21 June 2015 at 10:29:26 UTC, Kagamin wrote:
On Saturday, 20 June 2015 at 19:00:08 UTC, Joakim wrote:
On Saturday, 20 June 2015 at 15:21:29 UTC, Kagamin wrote:
High DPI settings screw up native UI too if it's not pixel-precise, and ignoring user preferences is an infraction, I'm afraid. And this is where the web actually shines: it's designed to adapt gracefully to any user settings. Well, of course, when a site's design strays from how the web was designed to work, it runs into problems; that should be obvious.

The highest-DPI devices I use nowadays are mobile devices and, in my experience, websites are the ones that most often get it wrong.

I mean only the design possibility, which unfortunately is not taken advantage of on the modern web.

That's usually related to tiny text, but that affects the overall layout too.

Designers like their 5-pixel fonts and believe everybody will appreciate them. But I think pixel-oriented design is a flawed choice for the web; the web wasn't designed to work that way.

It certainly can be hard to get this stuff right on any app platform, whether web or native mobile, with the proliferation of screen sizes, viewing distances, and DPIs these days, as Nick pointed out with the smart TV example above. But I have to wonder if most of those small-font sites/apps render right on anything other than a _single_ device, which means their devs are certainly not dealing with that complexity at all.
