Re: Mixing 2D and 3D

2013-08-05 Thread Chien Yang

Hi Jim,

We worked closely with Pavel to ensure that 3D picking extends nicely. I 
believe the specification at this link is still up to date:


https://wiki.openjdk.java.net/display/OpenJFX/Picking3dAPI

- Chien

On 8/5/2013 3:44 AM, Pavel Safrata wrote:

On 1.8.2013 22:33, Richard Bair wrote:
How does that fit in with the 2D-ish picking events we deliver now?  
If a cylinder is picked, how do we describe which part of the 
cylinder was picked?
Pavel or Chien will have to pipe in on this part of the question, I 
don't know.




Similar to 2D, the delivered event has coordinates in the cylinder's 
local coordinate space, so getX(), getY(), getZ() tell you the 
intersection point of the pick ray (cast into the scene from the cursor 
position) with the cylinder. Moreover, the events have getPickResult(), 
from which you can also obtain the distance from the camera, the picked 
face (not used for a cylinder, but it identifies the picked triangle for 
a mesh), and the texture coordinates.


Pavel




Re: Mixing 2D and 3D

2013-08-05 Thread Pavel Safrata

On 1.8.2013 22:33, Richard Bair wrote:

How does that fit in with the 2D-ish picking events we deliver now?  If a 
cylinder is picked, how do we describe which part of the cylinder was picked?

Pavel or Chien will have to pipe in on this part of the question, I don't know.



Similar to 2D, the delivered event has coordinates in the cylinder's 
local coordinate space, so getX(), getY(), getZ() tell you the 
intersection point of the pick ray (cast into the scene from the cursor 
position) with the cylinder. Moreover, the events have getPickResult(), 
from which you can also obtain the distance from the camera, the picked 
face (not used for a cylinder, but it identifies the picked triangle for 
a mesh), and the texture coordinates.
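
For readers less familiar with the picking API, here is a minimal, self-contained
sketch (not part of the thread; the class name and layout values are invented)
showing where those values surface when a Cylinder is clicked:

import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.PerspectiveCamera;
import javafx.scene.Scene;
import javafx.scene.input.PickResult;
import javafx.scene.paint.Color;
import javafx.scene.paint.PhongMaterial;
import javafx.scene.shape.Cylinder;
import javafx.stage.Stage;

public class PickCylinder extends Application {
    @Override public void start(Stage stage) {
        Cylinder cylinder = new Cylinder(50, 150);
        cylinder.setMaterial(new PhongMaterial(Color.DODGERBLUE));
        cylinder.setTranslateX(200);
        cylinder.setTranslateY(150);
        cylinder.setTranslateZ(200);

        cylinder.setOnMouseClicked(e -> {
            // Intersection point in the cylinder's local coordinate space.
            System.out.println("local: " + e.getX() + ", " + e.getY() + ", " + e.getZ());
            PickResult pr = e.getPickResult();
            System.out.println("distance from camera: " + pr.getIntersectedDistance());
            System.out.println("face (meshes only): " + pr.getIntersectedFace());
            System.out.println("texture coordinate: " + pr.getIntersectedTexCoord());
        });

        // Depth buffer plus a perspective camera, as used for 3D content.
        Scene scene = new Scene(new Group(cylinder), 400, 300, true);
        scene.setCamera(new PerspectiveCamera());
        stage.setScene(scene);
        stage.show();
    }
    public static void main(String[] args) { launch(args); }
}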


Pavel


Re: Mixing 2D and 3D

2013-08-01 Thread Jim Graham
There is one Mesh per MeshView and MeshView is a Node, so we require a 
node per mesh.


But, a mesh can have millions of triangles.  As long as they all have a 
common material...


...jim

On 8/1/13 1:38 PM, Richard Bair wrote:

I tend to agree with you that Node may be "too big" to use as the basis
for a 3D model's triangles and quads given that there can easily be millions
of them and that manipulating or interacting with them on an individual
basis is mostly unlikely.  As you say, storing those models outside the
scenegraph seems to make sense...


I agree. We don't use Nodes as the basis for a 3D model's mesh; we have a 
separate Mesh class which does this very efficiently (Mesh==Image, 
MeshView==ImageView).

Richard



Re: Mixing 2D and 3D

2013-08-01 Thread Richard Bair
> I tend to agree with you that Node may be "too big" to use as the basis
> for a 3D model's triangles and quads given that there can easily be millions
> of them and that manipulating or interacting with them on an individual
> basis is mostly unlikely.  As you say, storing those models outside the
> scenegraph seems to make sense...

I agree. We don't use Nodes as the basis for a 3D model's mesh; we have a 
separate Mesh class which does this very efficiently (Mesh==Image, 
MeshView==ImageView).

Richard

Re: Mixing 2D and 3D

2013-08-01 Thread Richard Bair

> - What is the minimum space taken for "new Node()" these days?  Is that too 
> heavyweight for a 3D scene with hundreds or thousands of "things", whatever 
> granularity we have, or come to have, for our Nodes?

I think each Node is going to come in somewhere between 1.5-2K in size (I 
haven't measured recently, though, so it could be worse or better). In any case, 
it is fairly significant. The "StretchyGrid" toy in apps/toys had 300K nodes 
and the performance limitation appeared to be on the rendering side rather than 
the scene graph side (other than sync overhead, which was 16ms, but I was just 
looking at pulse logger and didn't do a full analysis with profiling tools). So 
although a node isn't cheap, I think it is cheap enough for a hundred thousand 
nodes or so on reasonably beefy hardware. On a small device with a crappy CPU, 
we're looking at a thousand or so nodes max.

> - How often do those attributes get used on a 3D object?  If one is modeling 
> an engine, does one really need every mesh to be pickable, or are they likely 
> to be multi-mesh groups that are pickable?  In other words, you might want to 
> pick on the piston, but is that a single mesh?  And is the chain that 
> connects it to the alternator a single mesh or a dozen meshes per link in the 
> chain with 100 links?  (Yes, I know that alternators use belts, but I'm 
> trying to come up with meaningful examples.)

I think this is a good question, also applicable to 2D. For example, I have a 
bunch of nodes that are in a TableView, but many of them don't need 
to be individually pickable. We do optimize for this in memory, though, by 
"bucketing" the properties. So if you don't use any of the mouse listeners, you 
only pay for a single null field reference on the Node. As soon as you use a 
single mouse event property (add a listener, set a value), we inflate it 
to an object that holds the implementation for those listeners. So there are 
games we can play here so that just having this API on a node is actually very 
inexpensive.
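
As a purely illustrative sketch of that "bucketing" idea (this is not the actual
javafx.scene.Node code; Widget and MouseHandlerBucket are invented names), the
pattern looks roughly like this:

public class Widget {
    // One field covers the whole group of rarely-used properties;
    // it stays null for nodes that never use mouse handlers.
    private MouseHandlerBucket mouseHandlers;

    // Hypothetical holder object, inflated lazily on first use.
    private static final class MouseHandlerBucket {
        Runnable onClicked;
        Runnable onEntered;
        Runnable onExited;
    }

    private MouseHandlerBucket mouseHandlers() {
        if (mouseHandlers == null) {
            mouseHandlers = new MouseHandlerBucket();  // pay the cost only when used
        }
        return mouseHandlers;
    }

    public void setOnClicked(Runnable handler) {
        mouseHandlers().onClicked = handler;
    }

    public Runnable getOnClicked() {
        return mouseHandlers == null ? null : mouseHandlers.onClicked;
    }
}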

Most of the cost (last time I measured) of a Node is actually in the bounds & 
transforms, which we duplicated on Node and NGNode. And that state we really 
can't get away without.

One thing we talked about in the past was having a way to "compile" a portion 
of a scene graph. So you could describe some nodes and then pass it to a method 
which would "compile" it down to an opaque Node type that you could put state 
on. Maybe there is still a big NG node graph behind the scenes, but once you've 
compiled it, it is static content (so no listeners etc). Anyway, I'm not sure 
if it is useful or not.

> - How does picking work for 3D apps?  Is the ability to add listeners to 
> individual objects good or bad?

I assume it is basically the same as 2D. There will be some elements in the 
screen you want to be pickable / get listeners (key listeners, mouse listeners, 
etc) but most things won't be.

> - How does picking interact with meshes that are tessellated on the fly?  
> Does one ever want to know which tessellated triangle was picked?  

I could imagine a design application (like Cheetah 3D or something) being 
interested in this, but with on-card geometry shaders doing LOD-based dynamic 
tessellation, I think it is something we would not expose in API. Somebody 
would have to actually create a mesh that complicated (and I guess we'd have to 
allow picking a particular part of the mesh).

So in real 3D we'd not be able to guarantee picking of any geometry, but we 
could potentially allow picking of any geometry known about on the FX level (in 
a mesh). Right now they are one and the same, but when we start supporting 
on-card tessellation, the FX-known geometry will be a subset of all the real 
geometry on the card.

I imagine it is similar to 2D. In 2D we say "here is a rounded rectangle" but 
in reality we might represent this with any kind of geometry on the card (right 
now it is just a quad, but we've talked about having a quad for the interior 
and others for the exterior such that the interior is just a solid opaque quad 
and the edges transparent quads for AA, such that on-card occlusion culling of 
fragments could be used).

> How does that fit in with the 2D-ish picking events we deliver now?  If a 
> cylinder is picked, how do we describe which part of the cylinder was picked?

Pavel or Chien will have to pipe in on this part of the question, I don't know.

> - Are umpteen isolated 2D-ish transform attributes a convenient or useful way 
> to manipulate 3D objects?  Or do we really want occasional transforms 
> inserted in only a few places that are basically a full Affine3D because when 
> you want a transform, you are most likely going to want to change several 
> attributes at once?  2D loves isolated just-translations like candy.  It also 
> tends to like simple 2D scales and rotates around Z here and there.  But 
> doesn't 3D love quaternion-based 3-axis rotations and scales that

RE: Mixing 2D and 3D

2013-07-31 Thread John C. Turnbull
Jim,

I tend to agree with you that Node may be "too big" to use as the basis
for a 3D model's triangles and quads given that there can easily be millions
of them and that manipulating or interacting with them on an individual
basis is mostly unlikely.  As you say, storing those models outside the
scenegraph seems to make sense...

-jct

-Original Message-
From: openjfx-dev-boun...@openjdk.java.net
[mailto:openjfx-dev-boun...@openjdk.java.net] On Behalf Of Jim Graham
Sent: Thursday, 1 August 2013 05:58
To: Richard Bair
Cc: openjfx-dev@openjdk.java.net Mailing
Subject: Re: Mixing 2D and 3D

I'm a little behind on getting into this discussion.

I don't have a lot of background in 3D application design, but I do have
some low-level rendering algorithm familiarity, and Richard's summary is an
excellent outline of the issues I was having with rampant mixing of 2D and
3D.  I see them as fundamentally different approaches to presentation, but
ones that easily combine in larger chunks such as "2D UI HUD over 3D scene" and
"3D-ish effects on otherwise 2D-ish objects".  I think the discussion
covered all of those in better detail than I could outline.

My main concerns with having the full-3D parts of the Scene Graph be mix-in
nodes are that there are attributes on Node that sometimes don't make sense,
but other times aren't necessarily the best approach to providing the
functionality in a 3D scene.  I was actually a little surprised at how few
attributes fell into the category of not making sense when Richard went
through the list in an earlier message.  A lot of the other attributes seem
to be non-optimal though.

- What is the minimum space taken for "new Node()" these days?  Is that too
heavyweight for a 3D scene with hundreds or thousands of "things", whatever
granularity we have, or come to have, for our Nodes?

- How often do those attributes get used on a 3D object?  If one is modeling
an engine, does one really need every mesh to be pickable, or are they
likely to be multi-mesh groups that are pickable?  In other words, you might
want to pick on the piston, but is that a single mesh?  And is the chain
that connects it to the alternator a single mesh or a dozen meshes per link
in the chain with 100 links?  (Yes, I know that alternators use belts, but
I'm trying to come up with meaningful examples.)

- How does picking work for 3D apps?  Is the ability to add listeners to
individual objects good or bad?

- How does picking interact with meshes that are tessellated on the fly?
Does one ever want to know which tessellated triangle was picked?  How does
that fit in with the 2D-ish picking events we deliver now?  If a cylinder is
picked, how do we describe which part of the cylinder was picked?

- Are umpteen isolated 2D-ish transform attributes a convenient or useful
way to manipulate 3D objects?  Or do we really want occasional transforms
inserted in only a few places that are basically a full Affine3D because
when you want a transform, you are most likely going to want to change
several attributes at once?  2D loves isolated just-translations like candy.
It also tends to like simple 2D scales and rotates around Z here and there.
But doesn't 3D love quaternion-based 3-axis rotations and scales that very
quickly fill most of the slots of a 3x3 or 4x3 matrix?

- Right now our Blend modes are pretty sparse, representing some of the more
common equations that we were aware of, but I'm not sure how that may hold
up in the future.  I can implement any blending equation that someone feeds
me, and optimize the heck out of the math of it - but I'm pretty unfamiliar
with which equations are useful to content creators in the 2D or 3D world or
how they may differ now or in our evolution.

- How will the nodes, and the granularity they impose, enable or prevent
getting to the point of an optimized bundle of vector and texture
information stored on the card that we tweak and re-trigger?  I should
probably start reading up on 3D hw utilization techniques.  8(

- Looking at the multi-camera issues I keep coming back to my original
pre-mis-conceptions of what 3D would add wherein I was under the novice
impression that we should have 3D models that live outside the SG, but then
have a 3DView that lives in the SG.  Multiple views would simply be multiple
3DView objects with shared models similar to multiple ImageViews vs. a small
number of Image objects.  I'm not a 3D person so that was simply my amateur
pre-conception of how 3D would be integrated, but I trust the expertise that
went into what we have now.  In this pre-concept, though, there were fewer
interactions of "3Disms" and "2Disms" - and much lighter weight players in
the 3D models.

- In looking briefly at some 3D-lite demos it looks like there are attempts
to do higher quality AA combined with depth sorting, possibly with breaking
prim

Re: Mixing 2D and 3D

2013-07-31 Thread Jim Graham
…a demo that has 4 images on rotating fan blades with alpha - very pretty, but 
probably not done in a way that would facilitate a model of a factory with 10K parts to 
be tracked (in particular, you can't do that with just a Z-buffer alone due to the 
constantly re-sorted alpha):

http://www.the-art-of-web.com/css/3d-transforms/#section_3

I want to apologize for not having any concrete answers, but hopefully I ask 
some enlightening questions?

        ...jim

On 7/18/2013 1:58 PM, Richard Bair wrote:

While working on RT-5534, we found a large number of odd cases when mixing 2D 
and 3D. Some of these we talked about previously; some either we hadn't discussed or, at 
least, they hadn't occurred to me. With 8 we are defining a lot of new API for 
3D, and we need to make sure that we've very clearly defined how 2D and 3D 
nodes interact with each other, or developers will run into problems frequently 
and fire off angry emails about it :-)

Fundamentally, 2D and 3D rendering are completely different. There are 
differences in how opacity is understood and applied. 2D graphics frequently 
use clips, whereas 3D does not (other than clipping the view frustum or other 
such environmental clipping). 2D uses things like filter effects (drop shadow, 
etc) that are based on pixel bashing, whereas 3D uses light sources, shaders, or 
other such techniques to cast shadows, implement fog, dynamic lighting, etc. In 
short, 2D is fundamentally about drawing pixels and blending using the Painter's 
Algorithm, whereas 3D is about geometry and shaders and (usually) a depth 
buffer. Of course 2D is almost always defined with 0,0 in the top left, positive 
x to the right and positive y down, whereas 3D almost always has 0,0 in the 
center, positive x to the right and positive y up. But that's just a transform 
away, so I don't consider that a *fundamental* difference.

There are many ways in which these differences manifest themselves when mixing 
content between the two graphics.

http://fxexperience.com/?attachment_id=2853

This picture shows 4 circles and a rectangle. They are set up such that all 5 shapes are 
in the same group [c1, c2, r, c3, c4]. However, depthBuffer is turned on (as well as the 
perspective camera) so that I can use Z to position the shapes instead of using the 
painter's algorithm. You will notice that the first two circles (green and magenta) have 
a "dirty edge", whereas the last two circles (blue and orange) look beautiful. 
Note that even though there is a depth buffer involved, we're still issuing these shapes 
to the card in a specific order.

For those not familiar with the depth buffer, the way it works is very simple. 
When you draw something, in addition to recording the RGBA values for each 
pixel, you also write to an array (one element per pixel) with a value for 
every non-transparent pixel that was touched. In this way, if you draw 
something on top, and then draw something beneath it, the graphics card can 
check the depth buffer to determine whether it should skip a pixel. So in the 
image, we draw green for the green circle, and then later draw the black for 
the rectangle, and because some pixels were already drawn to by the green 
circle, the card knows not to overwrite those with the black pixels of the 
background rectangle.

The depth buffer is just a technique used to ensure that content rendered 
respects Z for the order in which things appear composited in the final frame. 
(You can individually cause nodes to ignore this requirement by setting 
depthTest to false for a specific node or branch of the scene graph, in which 
case they won't check with the depth buffer prior to drawing their pixels, 
they'll just overwrite anything that was drawn previously, even if it has a Z 
value that would put it behind the thing it is drawing over!).
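
For reference, a minimal sketch (not from the thread; sizes, colors and Z values
are arbitrary) of the two knobs being discussed: requesting the depth buffer when
the Scene is constructed, and opting an individual node out of the depth test:

import javafx.application.Application;
import javafx.scene.DepthTest;
import javafx.scene.Group;
import javafx.scene.PerspectiveCamera;
import javafx.scene.Scene;
import javafx.scene.paint.Color;
import javafx.scene.shape.Rectangle;
import javafx.stage.Stage;

public class DepthTestDemo extends Application {
    @Override public void start(Stage stage) {
        Rectangle back = new Rectangle(300, 200, Color.BLACK);
        back.setTranslateZ(100);                 // farther from the camera

        Rectangle front = new Rectangle(150, 100, Color.RED);
        front.setTranslateZ(-100);               // nearer to the camera

        // Opting a node out of the depth test: it overwrites whatever is already
        // in the frame buffer, regardless of Z, as described above.
        Rectangle overlay = new Rectangle(80, 80, Color.GREEN);
        overlay.setDepthTest(DepthTest.DISABLE);

        Group root = new Group(back, front, overlay);
        // The fourth constructor argument requests the depth buffer.
        Scene scene = new Scene(root, 400, 300, true);
        scene.setCamera(new PerspectiveCamera());
        stage.setScene(scene);
        stage.show();
    }
    public static void main(String[] args) { launch(args); }
}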

For the sake of this discussion, "3D world" means "depth buffer enabled" with a perspective 
camera, and "2D" means "2.5D capable", by which I mean a perspective camera but 
no depth buffer.

So:

1) Draw the first green circle. This is done by rendering the circle 
into an image with nice anti-aliasing, then rotating that image
  and blending it with anything already in the frame buffer.
2) Draw the magenta circle. Same as with green -- draw into an image 
with nice AA, rotate, and blend.
3) Draw the rectangle. Because the depth buffer is turned on, for each 
pixel of the green & magenta circles, we *don't* render
 any black. Because the AA edge has been touched with some 
transparency, it was written to the depth buffer, and we will not
 draw any black there. Hence the dirty fringe! No blending!
4) Draw the blue circle into an image with nice AA, rotate, and blend. 
AA edges are blended nicely with the black background!
5) Draw the orange circle into an image with nice AA, rotate, and blend.
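
A rough reconstruction of that five-shape setup (the sizes, rotations, and Z
values are guesses; only the child order [c1, c2, r, c3, c4], the depth buffer,
and the perspective camera come from the description above):

import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.PerspectiveCamera;
import javafx.scene.Scene;
import javafx.scene.paint.Color;
import javafx.scene.shape.Circle;
import javafx.scene.shape.Rectangle;
import javafx.scene.transform.Rotate;
import javafx.stage.Stage;

public class DirtyEdges extends Application {
    private static Circle circle(Color fill, double x, double z) {
        Circle c = new Circle(60, fill);
        c.setTranslateX(x);
        c.setTranslateY(150);
        c.setTranslateZ(z);
        c.setRotationAxis(Rotate.Y_AXIS);
        c.setRotate(30);                     // rotated, so it is rendered to an image first
        return c;
    }

    @Override public void start(Stage stage) {
        Circle c1 = circle(Color.GREEN, 100, -50);    // drawn before the rectangle: dirty AA edge
        Circle c2 = circle(Color.MAGENTA, 220, -50);  // drawn before the rectangle: dirty AA edge
        Rectangle r = new Rectangle(520, 300, Color.BLACK);
        r.setTranslateZ(0);                            // behind the circles in Z
        Circle c3 = circle(Color.BLUE, 340, -50);     // drawn after the rectangle: clean edge
        Circle c4 = circle(Color.ORANGE, 460, -50);   // drawn after the rectangle: clean edge

        Group root = new Group(c1, c2, r, c3, c4);
        Scene scene = new Scene(root, 560, 320, true); // depthBuffer = true
        scene.setCamera(new PerspectiveCamera());
        stage.setScene(scene);
        stage.show();
    }
    public static void main(String[] args) { launch(args); }
}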

Re: Mixing 2D and 3D

2013-07-29 Thread Chien Yang

Hi August,

 It is good to bring back old memories occasionally. I've almost 
forgotten ViewSpecificGroup in Java 3D. :-) Thanks for sharing your 
proof-of-concept work using Node.snapshot. We will take a closer 
look at your work. It is possible that you have uncovered a new 
area where we are missing 3D support for an existing feature.


Thanks,
- Chien

On 7/28/2013 10:14 AM, August Lammersdorf, InteractiveMesh wrote:

"Simultaneous viewing based on Node.snapshot - proof of concept"

Chien,

certainly you remember Java 3D's multiple view concept based on 
ViewSpecificGroup (not easy to apply, but powerful). It allows assigning 
the entire graph, sub-graphs, or even single nodes to one or 
several cameras/canvases simultaneously. Animations (Behavior) are 
executed only once. The engine then renders the assigned/extracted 
sub-scene individually for each camera/canvas.


The current JavaFX Scene/SubScene design leads to an exclusive 
one-to-one relationship between a 2D/3D scene graph and a camera.


Simultaneous views require at least individual lighting (headlight) 
per camera to avoid 'overexposure' or unwanted shading effects.


Thanks for your 'Node.snapshot' implementation hint and code example.

So, I tried to apply this approach to FXTuxCube and added a second 
camera. It works to some extent:


1st camera
 - some flicker for higher cube sizes during mouse navigation

2nd camera
 - one frame delayed
 - only default lighting/headlight, no individual lighting (?)
 - AmbientLight doesn't seem to be applied
 - no individual extraction of sub-graphs
 - currently permanent running AnimationTimer for repainting

The first result - FXTuxCubeSV - can be launched and downloaded here:
www.interactivemesh.org/models/jfx3dtuxcube.html#simview

August

Am Freitag, den 26.07.2013, 17:43 +0200 schrieb Chien Yang 
:

Hi August,

   John Yoon, Richard and I had a private discussion on the
possibility of avoiding "cloning" for your use case. We wonder whether you
ever need to interact with the 3D scene via the various sub views, or
whether these sub views are purely for viewing the 3D scene with different
cameras. If this is your use case scenario, have you thought of using
Node.snapshot()?

public WritableImage snapshot(SnapshotParameters params,
WritableImage image) {

You can call snapshot on the node with a specified camera (in
the snapshot params).  It will then render the node from that camera's
viewpoint and put the result in a WritableImage. You can then add it
into the scenegraph using an ImageView.  I have attached a very simple
example of how snapshot can be used with an ancillary camera. Please
let us know of your progress in this work. We would hope to learn from
this work so that we can evaluate it to see if there are any
performance / semantic problems. You will likely end up with rendering
that is one frame behind in those sub views, but do let us know your
findings.

Thanks,
- Chien







Mixing 2D and 3D

2013-07-28 Thread August Lammersdorf, InteractiveMesh

"Simultaneous viewing based on Node.snapshot - proof of concept"

Chien,

certainly you remember Java 3D's multiple view concept based on 
ViewSpecificGroup (not easy to apply, but powerful). It allows assigning 
the entire graph, sub-graphs, or even single nodes to one or several 
cameras/canvases simultaneously. Animations (Behavior) are executed only 
once. The engine then renders the assigned/extracted sub-scene 
individually for each camera/canvas.


The current JavaFX Scene/SubScene design leads to an exclusive 
one-to-one relationship between a 2D/3D scene graph and a camera.


Simultaneous views require at least individual lighting (headlight) per 
camera to avoid 'overexposure' or unwanted shading effects.


Thanks for your 'Node.snapshot' implementation hint and code example.

So, I tried to apply this approach to FXTuxCube and added a second 
camera. It works to some extent:


1st camera
 - some flicker for higher cube sizes during mouse navigation

2nd camera
 - one frame delayed
 - only default lighting/headlight, no individual lighting (?)
 - AmbientLight doesn't seem to be applied
 - no individual extraction of sub-graphs
 - currently permanent running AnimationTimer for repainting

The first result - FXTuxCubeSV - can be launched and downloaded here:
www.interactivemesh.org/models/jfx3dtuxcube.html#simview
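
For readers following along, a minimal sketch of the snapshot-per-pulse idea
described above (this is not August's actual code; SecondCameraView and all
parameter values are invented):

import javafx.animation.AnimationTimer;
import javafx.geometry.Rectangle2D;
import javafx.scene.Node;
import javafx.scene.PerspectiveCamera;
import javafx.scene.SnapshotParameters;
import javafx.scene.image.ImageView;
import javafx.scene.image.WritableImage;
import javafx.scene.paint.Color;

public final class SecondCameraView {

    /** Returns an ImageView that keeps showing 'content' from 'secondCamera'.
     *  Call on the FX application thread. The result is one frame behind. */
    public static ImageView attach(Node content, PerspectiveCamera secondCamera,
                                   int width, int height) {
        SnapshotParameters params = new SnapshotParameters();
        params.setCamera(secondCamera);
        params.setFill(Color.DARKGRAY);
        params.setViewport(new Rectangle2D(0, 0, width, height));

        WritableImage buffer = new WritableImage(width, height);
        ImageView view = new ImageView(buffer);

        AnimationTimer timer = new AnimationTimer() {
            @Override public void handle(long now) {
                // Re-render the content from the ancillary camera on every pulse.
                content.snapshot(params, buffer);
            }
        };
        timer.start();
        return view;
    }
}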

August

Am Freitag, den 26.07.2013, 17:43 +0200 schrieb Chien Yang 
:

Hi August,

   John Yoon, Richard and I had a private discussion on the
possibility of avoiding "cloning" for your use case. We wonder whether you
ever need to interact with the 3D scene via the various sub views, or
whether these sub views are purely for viewing the 3D scene with different
cameras. If this is your use case scenario, have you thought of using
Node.snapshot()?

public WritableImage snapshot(SnapshotParameters params,
WritableImage image) {

You can call snapshot on the node with a specified camera (in
the snapshot params).  It will then render the node from that camera's
viewpoint and put the result in a WritableImage. You can then add it
into the scenegraph using an ImageView.  I have attached a very simple
example of how snapshot can be used with an ancillary camera. Please
let us know of your progress in this work. We would hope to learn from
this work so that we can evaluate it to see if there are any
performance / semantic problems. You will likely end up with rendering
that is one frame behind in those sub views, but do let us know your
findings.

Thanks,
- Chien





Mixing 2D and 3D

2013-07-28 Thread August Lammersdorf, InteractiveMesh
"... wasn't successful ... assigning a cursor to a Shape3D or receiving 
response from any 'setOnMouseXXX' method."


Richard,

picking doesn't work correctly for a SubScene with PerspectiveCamera. 
This issue is known, RT-31255, and should be fixed in an FX 8 build > 
b99. :-)


August



Re: Mixing 2D and 3D

2013-07-26 Thread Chien Yang

Hi August,

   John Yoon, Richard and I had a private discussion on the 
possibility of avoiding "cloning" for your use case. We wonder whether you 
ever need to interact with the 3D scene via the various sub views, or 
whether these sub views are purely for viewing the 3D scene with different 
cameras. If this is your use case scenario, have you thought of using 
Node.snapshot()?


public WritableImage snapshot(SnapshotParameters params, 
WritableImage image) {


You can call snapshot on the node with a specified camera (in the 
snapshot params).  It will then render the node from that camera's 
viewpoint and put the result in a WritableImage. You can then add it 
into the scenegraph using an ImageView.  I have attached a very simple 
example of how snapshot can be used with an ancillary camera. Please let 
us know of your progress in this work. We would hope to learn from this 
work so that we can evaluate it to see if there are any performance / 
semantic problems. You will likely end up with rendering that is one 
frame behind in those sub views, but do let us know your findings.


Thanks,
- Chien



On 7/26/2013 10:53 AM, Chien Yang wrote:
Yes, that is still the approach. The challenge isn't strictly about 
rendering. Let's take picking a "shared" node as an example. Imagine 
this node is in a scenegraph viewed by more than one camera. The 
question is where we would hang the picked-state information. There is some 
level of "cloning" needed within JavaFX if we want to free the application 
developer from doing it.


- Chien

On 7/25/2013 7:31 PM, Richard Bair wrote:
I thought the approach was not to have multiple parents, but to 
render into an image.


On Jul 25, 2013, at 5:26 PM, Chien Yang  wrote:

We don't support sharable Nodes. Someone will have to do the cloning 
of the scenegraph, either the application or JavaFX. Now I may have 
opened a can of worms. ;-)


- Chien

On 7/25/2013 5:20 PM, Richard Bair wrote:
Having to clone the nodes hardly seems like simultaneous viewing 
from different points of view?


On Jul 25, 2013, at 5:17 PM, Chien Yang  wrote:


Hi August,

  We did talk through some use cases such as PIP and rear view 
mirror. You can do simultaneous viewing from different points of 
view into a single 3D scene via the use of SubScenes. The key 
point, as you have clearly stated, is the need to clone the scene 
graph nodes per SubScene.


- Chien

On 7/25/2013 10:37 AM, Richard Bair wrote:

Hi August,


"I think we already do multiple active cameras?"

More precisely: simultaneous viewing from different points of 
view into a single 3D scene graph was meant, i.e. several 
cameras are attached to one scene graph.
A SubScene has exactly one camera attached which renders the 
associated scene graph into the corresponding SubScene's 
rectangle. Implementing simultaneous viewing requires a cloned 
3D scene graph for the second, third, and so on 
SubScene/Camera. Material, Mesh, and Image objects can be 
re-used because they are shareable. Animations of Nodes' 
Transforms seem to be shareable as well. But Transitions 
(Rotate, Scale, Translate) have to be cloned because they 
operate on a Node's methods directly. So, simultaneous viewing 
seems practicable.
Jasper or Kevin will have to comment, but I know this scenario 
was talked about extensively in the design for the renderToImage 
and cameras, and I thought this was possible today.






/*
 * Copyright (c) 2013, Oracle and/or its affiliates. All rights reserved.
 */
package helloworld;

import javafx.application.Application;
import javafx.geometry.Rectangle2D;
import javafx.scene.Group;
import javafx.scene.PerspectiveCamera;
import javafx.scene.Scene;
import javafx.scene.SnapshotParameters;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
import javafx.scene.paint.Color;
import javafx.scene.shape.Rectangle;
import javafx.scene.transform.Rotate;
import javafx.stage.Stage;

public class HelloSnapshotPerspective extends Application {

    @Override
    public void start(Stage stage) {
        stage.setTitle("HelloSnapshotPerspective");

        Group g = new Group();
        Rectangle rect = new Rectangle(300, 200);
        rect.setTranslateX(25);
        rect.setTranslateY(25);
        rect.setRotate(-40);
        rect.setRotationAxis(Rotate.Y_AXIS);
        rect.setFill(Color.PALEGREEN);
        g.getChildren().add(rect);
        final Group root = new Group(g);

        Scene scene = new Scene(root, 800, 300);
        scene.setCamera(new PerspectiveCamera());
        scene.setFill(Color.BROWN);

        SnapshotParameters params = new SnapshotParameters();
        params.setCamera(new PerspectiveCamera());
        params.setFill(Color.DARKBLUE);
        params.setViewport(new Rectangle2D(0, 0, 800, 300));

        final Image image = rect.snapshot(params, null);
        ImageView iv = new ImageView(image);
        iv.setLayoutX(400);
        root.getChildren().add(iv);

        stage.setScene(scene);
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}

Re: Mixing 2D and 3D

2013-07-26 Thread Chien Yang
Yes, that is still the approach. The challenge isn't strictly about 
rendering. Let's take picking a "shared" node as an example. Imagine 
this node is in a scenegraph viewed by more than one camera. The question 
is where we would hang the picked-state information. There is some level of 
"cloning" needed within JavaFX if we want to free the application developer 
from doing it.


- Chien

On 7/25/2013 7:31 PM, Richard Bair wrote:

I thought the approach was not to have multiple parents, but to render into an 
image.

On Jul 25, 2013, at 5:26 PM, Chien Yang  wrote:


We don't support sharable Nodes. Someone will have to do the cloning of the 
scenegraph, either the application or JavaFX. Now I may have opened a can of 
worms. ;-)

- Chien

On 7/25/2013 5:20 PM, Richard Bair wrote:

Having to clone the nodes hardly seems like simultaneous viewing from different 
points of view?

On Jul 25, 2013, at 5:17 PM, Chien Yang  wrote:


Hi August,

  We did talk through some use cases such as PIP and rear view mirror. You 
can do simultaneous viewing from different points of view into a single 3D 
scene via the use of SubScenes. The key point, as you have clearly stated, is 
the need to clone the scene graph nodes per SubScene.

- Chien

On 7/25/2013 10:37 AM, Richard Bair wrote:

Hi August,


"I think we already do multiple active cameras?"

More precisely: simultaneous viewing from different points of view into a 
single 3D scene graph was meant, i.e. several cameras are attached to one scene 
graph.
A SubScene has exactly one camera attached which renders the associated scene 
graph into the corresponding SubScene's rectangle. Implementing simultaneous 
viewing requires a cloned 3D scene graph for the second, third, and so on 
SubScene/Camera. Material, Mesh, and Image objects can be re-used because they 
are shareable. Animations of Nodes' Transforms seem to be shareable as well. 
But Transitions (Rotate, Scale, Translate) have to be cloned because they 
operate on a Node's methods directly. So, simultaneous viewing seems 
practicable.

Jasper or Kevin will have to comment, but I know this scenario was talked about 
extensively in the design for the renderToImage and cameras, and I thought this 
was possible today.





Re: Mixing 2D and 3D

2013-07-25 Thread Richard Bair
I thought the approach was not to have multiple parents, but to render into an 
image.

On Jul 25, 2013, at 5:26 PM, Chien Yang  wrote:

> We don't support sharable Nodes. Someone will have to do the cloning of the 
> scenegraph, either the application or JavaFX. Now I may have opened a can of 
> worms. ;-)
> 
> - Chien
> 
> On 7/25/2013 5:20 PM, Richard Bair wrote:
>> Having to clone the nodes hardly seems like simultaneous viewing from 
>> different points of view?
>> 
>> On Jul 25, 2013, at 5:17 PM, Chien Yang  wrote:
>> 
>>> Hi August,
>>> 
>>>  We did talk through some use cases such as PIP and rear view mirror. 
>>> You can do simultaneous viewing from different points of view into a single 
>>> 3D scene via the use of SubScenes. The key point, as you have clearly 
>>> stated, is the need to clone the scene graph nodes per SubScene.
>>> 
>>> - Chien
>>> 
>>> On 7/25/2013 10:37 AM, Richard Bair wrote:
 Hi August,
 
>> "I think we already do multiple active cameras?"
>> 
>> More precisely: simultaneous viewing from different points of view into 
>> a single 3D scene graph was meant, i.e. several cameras are attached to 
>> one scene graph.
>> A SubScene has exactly one camera attached which renders the associated 
>> scene graph into the corresponding SubScene's rectangle. Implementing 
>> simultaneous viewing requires a cloned 3D scene graph for the second, 
>> third, and so on SubScene/Camera. Material, Mesh, and Image objects can 
>> be re-used because they are shareable. Animations of Nodes' Transforms 
>> seem to be shareable as well. But Transitions (Rotate, Scale, Translate) 
>> have to be cloned because they operate on a Node's methods directly. So, 
>> simultaneous viewing seems practicable.
 Jasper or Kevin will have to comment, but I know this scenario was talked 
 about extensively in the design for the renderToImage and cameras, and I 
 thought this was possible today.
 
> 



Re: Mixing 2D and 3D

2013-07-25 Thread Chien Yang
We don't support sharable Nodes. Someone will have to do the cloning of 
the scenegraph, either the application or JavaFX. Now I may have opened 
a can of worms. ;-)


- Chien

On 7/25/2013 5:20 PM, Richard Bair wrote:

Having to clone the nodes hardly seems like simultaneous viewing from different 
points of view?

On Jul 25, 2013, at 5:17 PM, Chien Yang  wrote:


Hi August,

  We did talk through some use cases such as PIP and rear view mirror. You 
can do simultaneous viewing from different points of view into a single 3D 
scene via the use of SubScenes. The key point, as you have clearly stated, is 
the need to clone the scene graph nodes per SubScene.

- Chien

On 7/25/2013 10:37 AM, Richard Bair wrote:

Hi August,


"I think we already do multiple active cameras?"

More precisely: simultaneous viewing from different points of view into a 
single 3D scene graph was meant, i.e. several cameras are attached to one scene 
graph.
A SubScene has exactly one camera attached which renders the associated scene 
graph into the corresponding SubScene's rectangle. Implementing simultaneous 
viewing requires a cloned 3D scene graph for the second, third, and so on 
SubScene/Camera. Material, Mesh, and Image objects can be re-used because they 
are shareable. Animations of Nodes' Transforms seem to be shareable as well. 
But Transitions (Rotate, Scale, Translate) have to be cloned because they 
operate on a Node's methods directly. So, simultaneous viewing seems 
practicable.

Jasper or Kevin will have to comment, but I know this scenario was talked about 
extensively in the design for the renderToImage and cameras, and I thought this 
was possible today.





Re: Mixing 2D and 3D

2013-07-25 Thread Richard Bair
Having to clone the nodes hardly seems like simultaneous viewing from different 
points of view?

On Jul 25, 2013, at 5:17 PM, Chien Yang  wrote:

> Hi August,
> 
>  We did talk through some use cases such as PIP and rear view mirror. You 
> can do simultaneous viewing from different points of view into a single 3D 
> scene via the use of SubScenes. The key point, as you have clearly stated, is 
> the need to clone the scene graph nodes per SubScene.
> 
> - Chien
> 
> On 7/25/2013 10:37 AM, Richard Bair wrote:
>> Hi August,
>> 
>>> >"I think we already do multiple active cameras?"
>>> >
>>> >More precisely: simultaneous viewing from different points of view into a 
>>> >single 3D scene graph was meant, i.e. several cameras are attached to one 
>>> >scene graph.
>>> >A SubScene has exactly one camera attached which renders the associated 
>>> >scene graph into the corresponding SubScene's rectangle. Implementing 
>>> >simultaneous viewing requires a cloned 3D scene graph for the second, 
>>> >third, and so on SubScene/Camera. Material, Mesh, and Image objects can be 
>>> >re-used because they are shareable. Animations of Nodes' Transforms seem 
>>> >to be shareable as well. But Transitions (Rotate, Scale, Translate) have 
>>> >to be cloned because they operate on a Node's methods directly. So, 
>>> >simultaneous viewing seems practicable.
>> Jasper or Kevin will have to comment, but I know this scenario was talked 
>> about extensively in the design for the renderToImage and cameras, and I 
>> thought this was possible today.
>> 
> 



Re: Mixing 2D and 3D

2013-07-25 Thread Chien Yang

Hi August,

  We did talk through some use cases such as PIP and rear view 
mirror. You can do simultaneous viewing from different points of view 
into a single 3D scene via the use of SubScenes. The key point, as you 
have clearly stated, is the need to clone the scene graph nodes per 
SubScene.
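
A minimal sketch of the SubScene-per-camera approach described above (not from
the thread; the "cloning" is approximated by simply rebuilding the content once
per view, and all names and values are invented):

import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Parent;
import javafx.scene.PerspectiveCamera;
import javafx.scene.Scene;
import javafx.scene.SceneAntialiasing;
import javafx.scene.SubScene;
import javafx.scene.layout.HBox;
import javafx.scene.paint.Color;
import javafx.scene.paint.PhongMaterial;
import javafx.scene.shape.Box;
import javafx.stage.Stage;

public class TwoSubSceneViews extends Application {

    // Materials (like Meshes and Images) are shareable across the cloned graphs.
    private final PhongMaterial sharedMaterial = new PhongMaterial(Color.CORNFLOWERBLUE);

    // Stand-in for "cloning" the scene graph: rebuild the same content per view.
    private Parent buildContent() {
        Box box = new Box(100, 100, 100);
        box.setMaterial(sharedMaterial);
        box.setTranslateX(200);
        box.setTranslateY(150);
        box.setTranslateZ(300);
        return new Group(box);
    }

    private SubScene view(double cameraOffsetX) {
        SubScene sub = new SubScene(buildContent(), 400, 300, true, SceneAntialiasing.BALANCED);
        PerspectiveCamera camera = new PerspectiveCamera();
        camera.setTranslateX(cameraOffsetX);   // each SubScene gets its own camera
        sub.setCamera(camera);
        sub.setFill(Color.DARKSLATEGRAY);
        return sub;
    }

    @Override public void start(Stage stage) {
        stage.setScene(new Scene(new HBox(view(0), view(-100)), 800, 300));
        stage.show();
    }
    public static void main(String[] args) { launch(args); }
}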


- Chien

On 7/25/2013 10:37 AM, Richard Bair wrote:

Hi August,


>"I think we already do multiple active cameras?"
>
>More precisely: simultaneous viewing from different points of view into a 
single 3D scene graph was meant, i.e. several cameras are attached to one scene 
graph.
>A SubScene has exactly one camera attached which renders the associated scene 
graph into the corresponding SubScene's rectangle. Implementing simultaneous 
viewing requires a cloned 3D scene graph for the second, third, and so on 
SubScene/Camera. Material, Mesh, and Image objects can be re-used because they are 
shareable. Animations of Nodes' Transforms seem to be shareable as well. But 
Transitions (Rotate, Scale, Translate) have to be cloned because they operate on a 
Node's methods directly. So, simultaneous viewing seems practicable.

Jasper or Kevin will have to comment, but I know this scenario was talked about 
extensively in the design for the renderToImage and cameras, and I thought this 
was possible today.





Re: Mixing 2D and 3D

2013-07-25 Thread Joseph Andresen

err... two identical groups of nodes**

On 7/25/2013 11:04 AM, Joseph Andresen wrote:


On 7/25/2013 10:37 AM, Richard Bair wrote:

Hi August,


"I think we already do multiple active cameras?"

More precisely: simultaneous viewing from different points of view 
into a single 3D scene graph was meant, i.e. several cameras are 
attached to one scene graph.
A SubScene has exactly one camera attached which renders the 
associated scene graph into the corresponding SubScene's rectangle. 
Implementing simultaneous viewing requires a cloned 3D scene graph 
for the second, third, and so on SubScene/Camera. Material, Mesh, 
and Image objects can be re-used because they are shareable. 
Animations of Nodes' Transforms seem to be shareable as well. But 
Transitions (Rotate, Scale, Translate) have to be cloned because 
they operate on a Node's methods directly. So, simultaneous viewing 
seems practicable.
Jasper or Kevin will have to comment, but I know this scenario was 
talked about extensively in the design for the renderToImage and 
cameras, and I thought this was possible today.
I know that one way to do this is by rendering the same group of nodes 
twice, using two different cameras each time, and using render-to-image 
or whatever to get your "RTT". I haven't tried it, but I suspect it goes 
something like calling render-to-image on a group with one camera and 
then render-to-image on the same group with a different camera.






Re: Mixing 2D and 3D

2013-07-25 Thread Joseph Andresen


On 7/25/2013 10:37 AM, Richard Bair wrote:

Hi August,


"I think we already do multiple active cameras?"

More precisely: simultaneous viewing from different points of view into a 
single 3D scene graph was meant, i.e. several cameras are attached to one scene 
graph.
A SubScene has exactly one camera attached which renders the associated scene 
graph into the corresponding SubScene's rectangle. Implementing simultaneous 
viewing requires a cloned 3D scene graph for the second, third, and so on 
SubScene/Camera. Material, Mesh, and Image objects can be re-used because they 
are shareable. Animations of Nodes' Transforms seem to be shareable as well. 
But Transitions (Rotate, Scale, Translate) have to be cloned because they 
operate on a Node's methods directly. So, simultaneous viewing seems 
practicable.

Jasper or Kevin will have to comment, but I know this scenario was talked about 
extensively in the design for the renderToImage and cameras, and I thought this 
was possible today.
I know that one way to do this is by rendering the same group of nodes 
twice, using two different cameras each time, and using render-to-image 
or whatever to get your "RTT". I haven't tried it, but I suspect it goes 
something like calling render-to-image on a group with one camera and 
then render-to-image on the same group with a different camera.




Re: Mixing 2D and 3D

2013-07-25 Thread Richard Bair
Hi August,

> "I think we already do multiple active cameras?"
> 
> More precisely: simultaneous viewing from different points of view into a 
> single 3D scene graph was meant, i.e. several cameras are attached to one 
> scene graph.
> A SubScene has exactly one camera attached which renders the associated scene 
> graph into the corresponding SubScene's rectangle. Implementing simultaneous 
> viewing requires a cloned 3D scene graph for the second, third, and so on 
> SubScene/Camera. Material, Mesh, and Image objects can be re-used because 
> they are shareable. Animations of Nodes' Transforms seem to be shareable as 
> well. But Transitions (Rotate, Scale, Translate) have to be cloned because 
> they operate on a Node's methods directly. So, simultaneous viewing seems 
> practicable.

Jasper or Kevin will have to comment, but I know this scenario was talked about 
extensively in the design for the renderToImage and cameras, and I thought this 
was possible today.

> "Key/ mouse / touch / etc should be there already?"
> Scene navigation and model interaction are often the first stumbling blocks 
> for developers with less 3D experience. Ready-to-go rotate, zoom, and drag 
> support, including setting arbitrary pivot points and adjusting the 
> camera's clipping planes, would help overcome those inhibitions.

I see

> "Node's properties and methods"
> 
> Before I agree with all of your appraisals I would like to gain more 
> experience with their 3D-related implementations. For instance, I haven't been 
> successful so far in assigning a cursor to a Shape3D or receiving a response 
> from any 'setOnMouseXXX' method.

Yes, please do try it out; it should be working, and if not, please do file a 
bug. We use a pick ray to determine which nodes to pick, and then deliver 
mouse events to 3D and 2D nodes the same way.

Thanks!
Richard

Mixing 2D and 3D

2013-07-24 Thread August Lammersdorf, InteractiveMesh

Hi Richard,

thanks a lot for your detailed reply and the insights into your 
intentions and ideas. The mailing list members will appreciate being 
preferentially informed.


"I think we already do multiple active cameras?"

More precisely: simultaneous viewing from different points of view into 
a single 3D scene graph was meant, i.e. several cameras are attached to 
one scene graph.


A SubScene has exactly one camera attached which renders the associated 
scene graph into the corresponding SubScene's rectangle. Implementing 
simultaneous viewing requires a cloned 3D scene graph for the second, 
third, and so on SubScene/Camera. Material, Mesh, and Image objects can 
be re-used because they are shareable. Animations of Nodes' Transforms 
seem to be shareable as well. But Transitions (Rotate, Scale, Translate) 
have to be cloned because they operate on a Node's methods directly. So, 
simultaneous viewing seems practicable.


"Key/ mouse / touch / etc should be there already?"

Scene navigation and model interaction are often the first stumbling 
blocks for developers with less 3D experience. Ready-to-go rotate, 
zoom, and drag support, including setting arbitrary pivot points and 
adjusting the camera's clipping planes, would help overcome those 
inhibitions.


"Node's properties and methods"

Before I agree with all of your appraisals I would like to gain more 
experience with their 3D-related implementations. For instance, I haven't 
been successful so far in assigning a cursor to a Shape3D or receiving a 
response from any 'setOnMouseXXX' method.


August



Mixing 2D and 3D

2013-07-23 Thread August Lammersdorf, InteractiveMesh

David,

JavaFX 3D requires JDK 8. Please see 'System requirements' on the 
webpage. Webstart of FXTuxCube runs on my OS X 10.7.5/JDK 8 b99.


August

Am Dienstag, den 23.07.2013, 10:19 +0200 schrieb David Ray 
:

And Voila!

Webstart fails again!

"Cannot launch the application…"


[JNLP descriptor omitted -- the markup was stripped by the mail archive.
Recoverable details: codebase http://www.interactivemesh.org/models/webstartjfx/,
href fxTuxCube-0.6_FX3D.jnlp, title "FXTuxCube 0.6", vendor InteractiveMesh e.K.]

Re:  FXTuxCube is a simple JavaFX 3D example with 2D elements drawn
on top of a 3D SubScene:
http://www.interactivemesh.org/models/jfx3dtuxcube.html


Just letting you know (as if you didn't already).  I'm running an iMac
3.4 GHz Intel Core i7, with Java SE build 1.7.0_21-b12

David






Re: Prism Arch. WAS. Re: Mixing 2D and 3D

2013-07-23 Thread Joseph Andresen
 in hardware. Ideally the custom shader support (that 
doesn't exist) mentioned above would give a way to do this. There is a balance, 
I think, between what we want to provide built-in and what 3rd parties such as 
yourself could layer above the basic support with additional libraries.

For example, physics. I don't think we're going to add a physics engine of our own (probably ever), but it 
should be relatively easy for a framework to be built on top of JavaFX that did so. I was reading over the 
Unity docs for example to get a taste for how they setup their system, and was thinking that a GameObject is 
basically a type of Parent which has the ability to take multiple "Components", including a 
MeshView. So a "Unity-like" API could be built on top of FX Nodes, including "unity-like" 
physics engine integration. The core thing for our 3D support, I think, is to expose enough of the drawing 
system so that additional libraries can be built on top. Obviously that will include needing a way to let 
developers set shaders and manipulate various state.

Right now we just expose "depthTest", but we don't differentiate testing from 
depth-writing. The more GL / D3D specific functionality we expose in API the more 
treacherous the waters become. I think this is the main design problem to be navigated.


Will JavaOne 2013 highlight the JavaFX 3D strategy extensively?

I don't expect so. We'll show 3D but you're getting the cutting edge 
information now :-).


II. Mixing 2D and 3D
- "to tease apart the scene graph into Node, Node3D, and NodeBase ... doesn't 
work"
- "we keep the integrated scene graph as we have it",

So, all current and future 3D leaf and branch nodes are/will be derived from 
Node, which is primarily designed for use in a 2D/2.5D scene graph.

Yes.


Alea iacta est!

I guess so, Google translate failed me though so I'm not sure :-D


Various Node properties and methods seem not to be relevant or applicable for 
3D nodes (e.g. Shape3D). Are you going to communicate which of these several 
hundred (see below) should be used preferably, have different semantics, 
should not be used, or aren't supported for 3D nodes?

Sure, I think actually it is not so dire. Most of the APIs are applicable to 
both, there are only a few that are *strictly* 2D in nature (clip, effect, 
blend mode, cache), and cache at least we can ignore in a 3D world and blend 
mode might be applicable even in 3D (though with different semantics).


- "So the idea is that we can have different pipelines optimized for 2D or 3D 
rendering"
Would be great!

The more I think of it, the more I think this is necessary. If done right, we 
can make the pipeline abstraction simple enough and straightforward enough that 
we could also have DX11 pipeline, ES 3 pipeline, NVidia DX 11 pipeline, etc. 
I'm no doubt wildly optimistic, but if possible, it would mean we could 
optimize for specific configurations. Testing those configurations will be a 
nightmare, so even if we could write them, supporting them all is a monumental 
effort without nailing down a sensible testing strategy.


- Scene - Scene2D/SubScene2D - Scene3D/SubScene3D
- "if you put a 2D rectangle in a Scene3D/SubScene3D"

What is the use case for rendering 2D pixel-coordinates-based shapes (or even 
controls) within a 3D scene consisting of meshes constructed on 3D coordinates 
of arbitrary unit?

I think if we had this split, I would treat 2D shapes as if they were 3D shapes 
but without depth. So a Rectangle is just a box with depth=0 (assuming a box 
with depth=0 would even be drawn, I don't know what we do now). So I would 
expect 2D shapes going through the 3D pipeline to be tessellated (if it is a 
path or rounded rect) and interact just like any other geometry. Maybe it 
wouldn't be used very much but unless we completely split the scene graphs 
(such that Node3D existed and did not extend from Node at all) we have to make 
sense of them somehow.

ImageView and MediaView are both good examples of "2D" nodes which are also 3D 
nodes.

Controls *could* make sense in 3D if we had normals and bump-mapping. The 3D 
pipeline could use that additional information to give some 3D aspect to the 
controls. I could imagine a 3D radio button or button or checkbox being wanted 
for a game's settings page.

I suspect most of the time, people will use a SubScene, put their 2D controls 
in this, and then put the SubScene into the 3D world.


Couldn't the pipelines be even better optimized if neither 2D nodes are 
rendered in SubScene3D nor 3D nodes are rendered in SubScene2D?

I don't think so, if you treat a 2D node as just a really thin 3D node in terms 
of how the pipeline sees things. If we try to AA stroke rectangles etc as we do 
today, then yes, I think it would be quite a pain for a 3D pipeline to deal 

Re: Mixing 2D and 3D

2013-07-23 Thread David Ray
And Voila! 

Webstart fails again!

"Cannot launch the application…"


[JNLP descriptor omitted -- the markup was stripped by the mail archive.
Recoverable details: codebase http://www.interactivemesh.org/models/webstartjfx/,
href fxTuxCube-0.6_FX3D.jnlp, title "FXTuxCube 0.6", vendor InteractiveMesh e.K.]

Re:  FXTuxCube is a simple JavaFX 3D example with 2D elements drawn on top of a 
3D SubScene: http://www.interactivemesh.org/models/jfx3dtuxcube.html


Just letting you know (as if you didn't already).  I'm running an iMac 3.4 GHz 
Intel Core i7, with Java SE build 1.7.0_21-b12

David



On Jul 23, 2013, at 6:45 AM, "August Lammersdorf, InteractiveMesh" 
 wrote:

> Video games usually draw the player's 2D UI elements in a second step on top 
> of the perspective 3D scene in the parallel/orthographic projection mode 
> (overlay or head-up display - HUD).
> 
> This approach is already widely supported by JavaFX, even with perspective 3D 
> transforms. Using a StackPane or a self-written layered pane, the user 
> interface can be rendered on top of the 3D SubScene and can be designed, updated 
> and animated with:
> 
> - layout panes : javafx.scene.layout.*
> - UI controls : subclasses of javafx.scene.control.*
> - two-dimensional shapes : javafx.scene.shape.*
> - graphical effects : javafx.scene.effect.*
> - images : javafx.scene.image.*
> - shapes and background filler : javafx.scene.paint.*
> - transformations : javafx.scene.transform
> - animations : javafx.animation
> 
> Is there any other 3D API or game engine based on C++ or Java which can beat 
> this scenario?
> 
> FXTuxCube is a simple JavaFX 3D example with 2D elements drawn on top of a 3D 
> SubScene: http://www.interactivemesh.org/models/jfx3dtuxcube.html
> 
> Worth reading: "User interface design in games" 
> http://www.thewanderlust.net/blog/2010/03/29/user-interface-design-in-video-games/
> 
> August
> 
> Am Sonntag, den 21.07.2013, 16:07 +0200 schrieb Herve Girod 
> :
>> "What is the use case for rendering 2D pixel-coordinates-based shapes
>> (or even controls) within a 3D scene consisting of meshes constructed
>> on 3D coordinates of arbitrary unit?"
>> 
>> In my field of work (avionics displays), we might have a lot of use
>> cases for that. However, you also have that in video games, where they
>> like to render 2D interactive panels in a 3D perspective in the game
>> world.
>> 
>> Hervé
> 
> 



Mixing 2D and 3D

2013-07-23 Thread August Lammersdorf, InteractiveMesh
Video games usually draw the player's 2D UI elements in a second step 
on top of the perspective 3D scene in the parallel/orthographic 
projection mode (overlay or head-up display - HUD).


This approach is already widely supported by JavaFX, even with 
perspective 3D transforms. Using a StackPane or a self-written layered 
pane, the user interface can be rendered on top of the 3D SubScene and 
can be designed, updated and animated with:


 - layout panes : javafx.scene.layout.*
 - UI controls : subclasses of javafx.scene.control.*
 - two-dimensional shapes : javafx.scene.shape.*
 - graphical effects : javafx.scene.effect.*
 - images : javafx.scene.image.*
 - shapes and background filler : javafx.scene.paint.*
 - transformations : javafx.scene.transform
 - animations : javafx.animation

Is there any other 3D API or game engine based on C++ or Java which can 
beat this scenario?
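
For illustration, a minimal sketch of the HUD layering described above (not taken
from FXTuxCube; the class name, control, and sizes are invented):

import javafx.application.Application;
import javafx.geometry.Pos;
import javafx.scene.Group;
import javafx.scene.PerspectiveCamera;
import javafx.scene.Scene;
import javafx.scene.SceneAntialiasing;
import javafx.scene.SubScene;
import javafx.scene.control.Button;
import javafx.scene.layout.StackPane;
import javafx.scene.paint.Color;
import javafx.scene.shape.Box;
import javafx.stage.Stage;

public class HudOverScene extends Application {
    @Override public void start(Stage stage) {
        // 3D layer: a SubScene with depth buffer and perspective camera.
        Box box = new Box(150, 150, 150);
        box.setTranslateX(300);
        box.setTranslateY(200);
        box.setTranslateZ(400);
        SubScene scene3d = new SubScene(new Group(box), 600, 400,
                                        true, SceneAntialiasing.BALANCED);
        scene3d.setFill(Color.MIDNIGHTBLUE);
        scene3d.setCamera(new PerspectiveCamera());

        // 2D HUD layer drawn on top, in ordinary pixel coordinates.
        Button reset = new Button("Reset view");
        StackPane.setAlignment(reset, Pos.TOP_LEFT);

        StackPane root = new StackPane(scene3d, reset);
        stage.setScene(new Scene(root, 600, 400));
        stage.show();
    }
    public static void main(String[] args) { launch(args); }
}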


FXTuxCube is a simple JavaFX 3D example with 2D elements drawn on top 
of a 3D SubScene: 
http://www.interactivemesh.org/models/jfx3dtuxcube.html


Worth reading: "User interface design in games" 
http://www.thewanderlust.net/blog/2010/03/29/user-interface-design-in-video-games/


August

Am Sonntag, den 21.07.2013, 16:07 +0200 schrieb Herve Girod 
:

"What is the use case for rendering 2D pixel-coordinates-based shapes
(or even controls) within a 3D scene consisting of meshes constructed
on 3D coordinates of arbitrary unit?"

In my field of work (avionics displays), we might have a lot of use
cases for that. However, you also have that in video games, where they
like to render 2D interactive panels in a 3D perspective in the game
world.

Hervé





Re: Mixing 2D and 3D

2013-07-22 Thread Dr. Michael Paus

Am 23.07.13 02:19, schrieb Richard Bair:

Alea iacta est!

I guess so, Google translate failed me though so I'm not sure :-D


http://en.wikipedia.org/wiki/Alea_iacta_est

" The phrase is still used today to mean that events have passed a point 
of no return, that something inevitably will happen."


Just in order to avoid misinterpretations :-)

Michael


Re: Mixing 2D and 3D

2013-07-22 Thread Richard Bair
Java 8 (not JavaFX 8 specifically, although we're part of Java 8 so it also 
applies to us) is not supported on XP. It may or may not work, but we're not 
testing that configuration.

Richard

On Jul 22, 2013, at 5:41 PM, Pedro Duque Vieira  
wrote:

> Hi,
>  
> Java 8 doesn't support Windows XP, so in theory we can start taking advantage 
> of DirectX 10+. At this time we are limited to OpenGL ES 2 for...
>  
> Richard did you mean Java8 won't run on Windows XP, or that 3D features won't 
> be supported in Windows XP?
> 
> Thanks, best regards,
> 
> -- 
> Pedro Duque Vieira



Re: Mixing 2D and 3D

2013-07-22 Thread Pedro Duque Vieira
Hi,


> Java 8 doesn't support Windows XP, so in theory we can start taking
> advantage of DirectX 10+. At this time we are limited to OpenGL ES 2 for...


Richard did you mean Java8 won't run on Windows XP, or that 3D features
won't be supported in Windows XP?

Thanks, best regards,

-- 
Pedro Duque Vieira


Re: Mixing 2D and 3D

2013-07-22 Thread Richard Bair
Hi August,

I will attempt some kind of answer, although what we end up doing has a lot to 
do with where the community wants to take things; some things we've talked 
about, some we haven't, and the conversation is needed and good.

> Whatever the use case will be JavaFX 3D will be measured against the business 
> needs and certainly against the capabilities of the latest releases of the 
> native 3D APIs Direct3D (11.1) and OpenGL (4.3). This might be unfair or not 
> helpful, but it is legitimate because they are the upper limits.
> 
> So, even an object-oriented, scene-graph-based 3D implementation should in 
> principle be able to benefit from almost all native 3D features and should 
> provide access to them. If the JavaFX architecture enables this approach, then 
> a currently limited implementation state will be more readily accepted.
> Potential features are currently determined and limited by JavaFX's underlying 
> native APIs, Direct3D 9 and OpenGL ES 2. Correct?

Java 8 doesn't support Windows XP, so in theory we can start taking advantage 
of DirectX 10+. At this time we are limited to OpenGL ES 2 for the sake of 
mobile and embedded. Also the cost of supporting multiple pipelines is 
significant. I think the first thing we would need to work through, before we 
could support DX 10, 11+, is to make it as easy as we can to write & maintain 
multiple pipelines.

So we have been looking at moving off of DX 9, but not off of ES 2 (until ES 3 
is widespread).

> - core, e.g.: primitives (points, lines, line-strip, triangle-strip, patches, 
> corresponding adjacency types), vertex attributes, double-precision, shaders 
> (vertex, tessellation, geometry, fragment, compute), 3D-/cubemap-texture, 
> multitextures, compressed texture, texture properties, multipass rendering, 
> occlusion queries, stencil test, offscreen rendering into image, multiple 
> active cameras, stereo?

I think some of these are relatively easy things -- a line strip or triangle 
strip for example could be Mesh subclasses?

We need to talk about how to handle shaders. I agree this is something that is 
required (people must have some way of supplying a custom shader). This will 
require work in Prism to support, but conceptually seems rather fine to me.

I think different texture types could be supported by extending API in Image. 
The worst case would be to introduce a Texture class and setup some kind of 
relationship if possible between an Image and a Texture (since images are just 
textures, after all).
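The Image-as-texture relationship is already visible in the material API; a 
diffuse map, for example, is just an Image. A small sketch, assuming the usual 
javafx.scene.* imports (the resource name and the 'mesh' variable are 
hypothetical):

    // An Image used directly as a texture through the existing material API.
    PhongMaterial material = new PhongMaterial();
    material.setDiffuseMap(new Image("diffuse.png"));   // hypothetical resource
    material.setSpecularColor(Color.WHITE);

    MeshView view = new MeshView(mesh);                 // 'mesh' from any Mesh source
    view.setMaterial(material);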

The harder things are things like multi-pass rendering. Basically, features 
that we can add where prism is in control and picks a rendering strategy for 
you are relatively straightforward to think about. But giving a hook that 
allows the developer to pick the rendering strategy is quite a bit more 
complicated. I was reading up on order independent transparency algorithms and 
thinking "how would this be exposed in API?". I haven't had any good 
brain-waves on that yet.

I think we already do multiple active cameras?

> - APIs, e.g.: user input for scene navigation and model interaction 
> (keyboard, mouse, touchpad/screen), geometry utilities, skinning, physics 
> engine interface (kinematics), shadows

Key / mouse / touch / etc. should be there already?

Skinning and physics are both interesting. Skinning and boning is interesting 
because of different ways to go about this. Last JavaOne we did this with 
special shaders all in hardware. Ideally the custom shader support (that 
doesn't exist) mentioned above would give a way to do this. There is a balance, 
I think, between what we want to provide built-in and what 3rd parties such as 
yourself could layer above the basic support with additional libraries.

For example, physics. I don't think we're going to add a physics engine of our 
own (probably ever), but it should be relatively easy for a framework to be 
built on top of JavaFX that did so. I was reading over the Unity docs for 
example to get a taste for how they set up their system, and was thinking that a 
GameObject is basically a type of Parent which has the ability to take multiple 
"Components", including a MeshView. So a "Unity-like" API could be built on top 
of FX Nodes, including "unity-like" physics engine integration. The core thing 
for our 3D support, I think, is to expose enough of the drawing system so that 
additional libraries can be built on top. Obviously that will include needing a 
way to let developers set shaders and manipulate various state.
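Purely as an illustration of that layering -- none of these classes exist in 
JavaFX -- a third-party GameObject could be an ordinary Group that owns its 
components and keeps its visual as a regular scene-graph child (sketch only, 
assuming java.util and javafx.scene imports):

    // Hypothetical third-party layer, not JavaFX API: a Unity-style GameObject
    // built on Group, with pluggable components ticked from an AnimationTimer.
    interface Component {
        void update(GameObject owner, double seconds);
    }

    class GameObject extends Group {
        private final List<Component> components = new ArrayList<>();

        GameObject(MeshView view) {
            getChildren().add(view);          // the visual is just a child node
        }

        void addComponent(Component component) {
            components.add(component);
        }

        void update(double seconds) {         // called once per frame by the game loop
            for (Component component : components) {
                component.update(this, seconds);
            }
        }
    }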

Right now we just expose "depthTest", but we don't differentiate testing from 
depth-writing. The more GL / D3D specific functionality we expose in API the 
more treacherous the waters become. I think this is the main design problem to 
be navigated.
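For reference, the per-node hook that does exist today looks like this; a 
separate depth-write switch would presumably be a sibling property (sketch 
only, 'hudGroup' and 'modelView' are assumed nodes):

    // Depth *testing* can already be toggled per node or per branch.
    hudGroup.setDepthTest(DepthTest.DISABLE);   // overlay drawn last, ignores the depth buffer
    modelView.setDepthTest(DepthTest.INHERIT);  // default: follow the parent's setting
    // There is currently no setDepthWrite(...) counterpart -- that separation
    // is exactly the gap described above.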

> Will JavaOne 2013 highlight the JavaFX 3D

Re: Mixing 2D and 3D

2013-07-22 Thread Richard Bair
Feature freeze doesn't mean that there won't be changes -- just that every 
major feature has an implementation and is ready for testing. Inevitably, as we 
use new features, we'll find that there needs to be modifications or 
clarifications. Once we ship, the possibility to refine is greatly reduced.

Richard

On Jul 22, 2013, at 9:10 AM, Pedro Duque Vieira  
wrote:

> I'm a little confused, are these changes still going to be present in
> JavaFX8?
> 
> I thought JavaFX8 reached feature freeze (1 month ago), but I see that some
> major API discussions are still taking place.
> 
> Regards,
> 
> -- 
> Pedro Duque Vieira



Re: Mixing 2D and 3D

2013-07-22 Thread Pedro Duque Vieira
I'm a little confused, are these changes still going to be present in
JavaFX8?

I thought JavaFX8 reached feature freeze (1 month ago), but I see that some
major API discussions are still taking place.

Regards,

-- 
Pedro Duque Vieira


Re: Mixing 2D and 3D

2013-07-21 Thread Herve Girod
"What is the use case for rendering 2D pixel-coordinates-based shapes (or
even controls) within a 3D scene consisting of meshes constructed on 3D
coordinates of arbitrary unit?"

In my field of work (avionics displays), we might have a lot of use cases
for that. However, you also have that in video games, where they like to
render 2D interactive panels in a 3D perspective in the game world.

Hervé


2013/7/21 August Lammersdorf, InteractiveMesh 

> I. JavaFX 3D goals
>
> Whatever the use case will be, JavaFX 3D will be measured against the
> business needs and certainly against the capabilities of the latest
> releases of the native 3D APIs Direct3D (11.1) and OpenGL (4.3). This might
> be unfair or not helpful, but it is legitimate because they are the upper
> limits.
>
> So, even an object-oriented scene-graph-based 3D implementation should in
> principle be able to benefit from almost all native 3D features and should
> provide access to them. If the JavaFX architecture enables this approach,
> then even a currently limited implementation state will be more readily accepted.
>
> Potential features are currently determined and limited by JavaFX'
> underlying native APIs Direct3D 9 and OpenGL ES 2. Correct?
>
> Are there any perspectives available about future features?
>
>  - core, e.g.: primitives (points, lines, line-strip, triangle-strip,
> patches, corresponding adjacency types), vertex attributes,
> double-precision, shaders (vertex, tessellation, geometry, fragment,
> compute), 3D-/cubemap-texture, multitextures, compressed texture, texture
> properties, multipass rendering, occlusion queries, stencil test, offscreen
> rendering into image, multiple active cameras, stereo?
>
>  - APIs, e.g.: user input for scene navigation and model interaction
> (keyboard, mouse, touchpad/screen), geometry utilities, skinning, physics
> engine interface (kinematics), shadows
>
> Will JavaOne 2013 highlight the JavaFX 3D strategy extensively?
>
> Another source: Exploring JavaFX 3D, Jim Weaver
> https://oraclein.activeevents.com/connect/fileDownload/session/4D7869532A53BCAC5A18FB5687C1103C/CON1140_Weaver-exploring-javafx-3d.pdf
>
> II. Mixing 2D and 3D
>
>  - "to tease apart the scene graph into Node, Node3D, and NodeBase ...
> doesn't work"
>
>  - "we keep the integrated scene graph as we have it",
>
> So, all current and future 3D leaf and branch nodes are/will be derived
> from Node, which is primarily designed for use in a 2D/2.5D scene graph.
>
> Alea iacta est!
>
> Many of Node's properties and methods seem not to be relevant or
> applicable to 3D nodes (e.g. Shape3D). Are you going to communicate which
> of these several hundred (see below) should be used preferably, have
> different semantics, should not be used, or aren't supported for 3D nodes?
>
>  - "So the idea is that we can have different pipelines optimized for 2D
> or 3D rendering"
>
> Would be great!
>
>  - Scene - Scene2D/SubScene2D - Scene3D/SubScene3D
>
>  - "if you put a 2D rectangle in a Scene3D/SubScene3D"
>
> What is the use case for rendering 2D pixel-coordinates-based shapes (or
> even controls) within a 3D scene consisting of meshes constructed on 3D
> coordinates of arbitrary unit?
>
> Couldn't the pipelines be even better optimized if neither 2D nodes are
> rendered in SubScene3D nor 3D nodes are rendered in SubScene2D?
>
> August
>
> --
>
> javafx.scene.shape.Shape3D
>
> Properties inherited from class javafx.scene.Node
>
> blendMode, boundsInLocal, boundsInParent, cacheHint, cache, clip, cursor,
> depthTest, disabled, disable, effectiveNodeOrientation, effect,
> eventDispatcher, focused, focusTraversable, hover, id, inputMethodRequests,
> layoutBounds, layoutX, layoutY, localToParentTransform,
> localToSceneTransform, managed, mouseTransparent, nodeOrientation,
> onContextMenuRequested, onDragDetected, onDragDone, onDragDropped,
> onDragEntered, onDragExited, onDragOver, onInputMethodTextChanged,
> onKeyPressed, onKeyReleased, onKeyTyped, onMouseClicked,
> onMouseDragEntered, onMouseDragExited, onMouseDragged, onMouseDragOver,
> onMouseDragReleased, onMouseEntered, onMouseExited, onMouseMoved,
> onMousePressed, onMouseReleased, onRotate, onRotationFinished,
> onRotationStarted, onScrollFinished, onScroll, onScrollStarted,
> onSwipeDown, onSwipeLeft, onSwipeRight, onSwipeUp, onTouchMoved,
> onTouchPressed, onTouchReleased, onTouchStationary, onZoomFinished, onZoom,
> onZoomStarted, opacity, parent, pickOnBounds, pressed, rotate,
> rotationAx

Mixing 2D and 3D

2013-07-21 Thread August Lammersdorf, InteractiveMesh

I. JavaFX 3D goals

Whatever the use case will be, JavaFX 3D will be measured against the 
business needs and certainly against the capabilities of the latest 
releases of the native 3D APIs Direct3D (11.1) and OpenGL (4.3). This 
might be unfair or not helpful, but it is legitimate because they are 
the upper limits.


So, even an object-oriented scene-graph-based 3D implementation should 
in principle be able to benefit from almost all native 3D features and 
should provide access to them. If the JavaFX architecture enables this 
approach, then even a currently limited implementation state will be more 
readily accepted.


Potential features are currently determined and limited by JavaFX' 
underlying native APIs Direct3D 9 and OpenGL ES 2. Correct?


Are there any perspectives available on future features?

 - core, e.g.: primitives (points, lines, line-strip, triangle-strip, 
patches, corresponding adjacency types), vertex attributes, 
double-precision, shaders (vertex, tessellation, geometry, fragment, 
compute), 3D-/cubemap-texture, multitextures, compressed texture, 
texture properties, multipass rendering, occlusion queries, stencil 
test, offscreen rendering into image, multiple active cameras, stereo?


 - APIs, e.g.: user input for scene navigation and model interaction 
(keyboard, mouse, touchpad/screen), geometry utilities, skinning, physics 
engine interface (kinematics), shadows


Will JavaOne 2013 highlight the JavaFX 3D strategy extensively?

Another source: Exploring JavaFX 3D, Jim Weaver
https://oraclein.activeevents.com/connect/fileDownload/session/4D7869532A53BCAC5A18FB5687C1103C/CON1140_Weaver-exploring-javafx-3d.pdf

II. Mixing 2D and 3D

 - "to tease apart the scene graph into Node, Node3D, and NodeBase ... 
doesn't work"

 - "we keep the integrated scene graph as we have it",

So, all current and future 3D leaf and branch nodes are/will be derived 
from Node, which is primarily designed for use in a 2D/2.5D scene graph.


Alea iacta est!

Many of Node's properties and methods seem not to be relevant or 
applicable to 3D nodes (e.g. Shape3D). Are you going to communicate 
which of these several hundred (see below) should be used preferably, 
have different semantics, should not be used, or aren't supported for 3D 
nodes?


 - "So the idea is that we can have different pipelines optimized for 
2D or 3D rendering"


Would be great!

 - Scene - Scene2D/SubScene2D - Scene3D/SubScene3D
 - "if you put a 2D rectangle in a Scene3D/SubScene3D"

What is the use case for rendering 2D pixel-coordinates-based shapes 
(or even controls) within a 3D scene consisting of meshes constructed on 
3D coordinates of arbitrary unit?


Couldn't the pipelines be even better optimized if neither 2D nodes are 
rendered in SubScene3D nor 3D nodes are rendered in SubScene2D?


August

--

javafx.scene.shape.Shape3D

Properties inherited from class javafx.scene.Node

blendMode, boundsInLocal, boundsInParent, cacheHint, cache, clip, 
cursor, depthTest, disabled, disable, effectiveNodeOrientation, effect, 
eventDispatcher, focused, focusTraversable, hover, id, 
inputMethodRequests, layoutBounds, layoutX, layoutY, 
localToParentTransform, localToSceneTransform, managed, 
mouseTransparent, nodeOrientation, onContextMenuRequested, 
onDragDetected, onDragDone, onDragDropped, onDragEntered, onDragExited, 
onDragOver, onInputMethodTextChanged, onKeyPressed, onKeyReleased, 
onKeyTyped, onMouseClicked, onMouseDragEntered, onMouseDragExited, 
onMouseDragged, onMouseDragOver, onMouseDragReleased, onMouseEntered, 
onMouseExited, onMouseMoved, onMousePressed, onMouseReleased, onRotate, 
onRotationFinished, onRotationStarted, onScrollFinished, onScroll, 
onScrollStarted, onSwipeDown, onSwipeLeft, onSwipeRight, onSwipeUp, 
onTouchMoved, onTouchPressed, onTouchReleased, onTouchStationary, 
onZoomFinished, onZoom, onZoomStarted, opacity, parent, pickOnBounds, 
pressed, rotate, rotationAxis, scaleX, scaleY, scaleZ, scene, style, 
translateX, translateY, translateZ, visible


Methods inherited from class javafx.scene.Node

addEventFilter, addEventHandler, autosize, blendModeProperty, 
boundsInLocalProperty, boundsInParentProperty, buildEventDispatchChain, 
cacheHintProperty, cacheProperty, clipProperty, computeAreaInScreen, 
contains, contains, cursorProperty, depthTestProperty, disabledProperty, 
disableProperty, effectiveNodeOrientationProperty, effectProperty, 
eventDispatcherProperty, fireEvent, focusedProperty, 
focusTraversableProperty, getBaselineOffset, getBlendMode, 
getBoundsInLocal, getBoundsInParent, getCacheHint, getClassCssMetaData, 
getClip, getContentBias, getCssMetaData, getCursor, getDepthTest, 
getEffect, getEffectiveNodeOrientation, getEventDispatcher, getId, 
getInputMethodRequests, getLayoutBounds, getLayoutX, getLayoutY, 
getLocalToParentTransform, getLocalToSceneTransform, getNodeOrientation, 
getOnContextMenuRequested, getOnDragDetected, ge

Re: Mixing 2D and 3D

2013-07-19 Thread Richard Bair
Hi August, 

I thought I had gotten to that in the summary of the email, but maybe you can 
help me learn what I don't yet understand about the space so that we can define 
what we're doing better. 

> Instead I propose that we keep the integrated scene graph as we have it, but 
> that we introduce two new classes, Scene3D and SubScene3D. These would be 
> configured specially in two ways. First, they would default to depthTest 
> enabled, scene antialiasing enabled, and perspective camera. Meanwhile, Scene 
> and SubScene would be configured for 2.5D by default, such that depthTest is 
> disabled, scene AA is disabled, and perspective camera is set. In this way, 
> if you rotate a 2.5D shape, you get perspective as you would expect, but none 
> of the other 3D behaviors. Scene3D and SubScene3D could also have y-up and 
> 0,0 in the center.
> 
> Second, we will interpret the meaning of opacity differently depending on 
> whether you are in a Scene / SubScene, or a Scene3D / SubScene3D. Over time 
> we will also implement different semantics for rendering in both worlds. For 
> example, if you put a 2D rectangle in a Scene3D / SubScene3D, we would use a 
> quad to represent the rectangle and would not AA it at all, allowing the 
> scene3D's anti-aliasing property to define how to handle this. Likewise, a 
> complex path could either be tessellated or we could still use the mask + 
> shader approach to filling it, but that we would do so with no AA (so the 
> mask is black or white, not grayscale).
> 
> If you use effects, clips, or blendModes we're going to flatten in the 3D 
> world as well. But since these are not common things to do in 3D, I find that 
> quite acceptable. Meanwhile in 3D we'll simply ignore the cache property 
> (since it is just a hint).
> 
> So the idea is that we can have different pipelines optimized for 2D or 3D 
> rendering, and we will key-off which kind to use based on Scene / Scene3D, or 
> SubScene / SubScene3D. Shapes will look different depending on which world 
> they're rendered in, but that follows. All shapes (2D and 3D) will render by 
> the same rules in the 3D realm.

The idea is to give a very well defined way in the API to separate "real 3D" 
from 2.5D scenes. Because the needs of the two different worlds are quite 
different, it makes sense to give them well defined boundaries. This would 
allow our rendering code to render the same primitive (a Rectangle, for 
instance) differently depending on whether the developer was assembling a 
2D/2.5D application or a 3D application. We could decide that depthBuffer 
enabled is that flag, although some "real 3D" apps might want to disable the 
depth test for some portion of their scene, or maybe disable the depthBuffer 
entirely, and if they do I'm not sure that that alone would mean that we should 
switch rendering modes?
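For context, the defaults in the proposal map onto switches that already 
exist, so a "real 3D" scene today is spelled out by hand roughly like this (a 
sketch with the current API and the usual javafx.scene imports; Scene3D and 
SubScene3D are still only proposed, and the y-up/centered-origin part has no 
current equivalent):

    // Manual equivalent of the proposed Scene3D defaults:
    // depth buffer on, scene antialiasing on, perspective camera set.
    Group root3D = new Group(new Box(200, 200, 200));
    Scene scene = new Scene(root3D, 800, 600, true, SceneAntialiasing.BALANCED);
    scene.setFill(Color.BLACK);
    scene.setCamera(new PerspectiveCamera(true));   // fixed-eye perspective projection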

> You are talking a lot about implementation details. But, what I really miss 
> is a commitment to precisely specified MISSION and OBJECTIVES concerning 3D 
> support in the JavaFX API.

Our initial goal with 3D is to support enterprise 3D use cases. So for example, 
3D representations of human anatomy for medical devices, or viewing mechanical 
devices or drawings in 3D (like a CAD viewer). From before 1.0 shipped we've 
had conversations with customers who were building software that needed these 
capabilities. We want to be able to import and represent COLLADA files.

Our initial goal is not game development. That being said, we don't want to 
absolutely preclude game development from being possible, either in the future 
as part of the platform if it turned out that we got used for that, or as a 3rd 
party add-on to JavaFX. We're just not likely to devote engineering time to 
adding features specifically designed for gaming. (Although it seems to me that 
many of the things games want to do are also the things enterprise use cases 
want to do -- animations, quality rendering, performance, etc).

Richard

Re: Mixing 2D and 3D

2013-07-19 Thread August Lammersdorf, InteractiveMesh

Richard,

with all due respect, this has been well known for a long time:

"Fundamentally, 2D and 3D rendering are completely different."

So, it isn't really surprising that issues concerning anti-aliasing and 
transparency occur with the current approach providing 3D rendering 
capabilities in JavaFX.


You are talking a lot about implementation details. But, what I really 
miss is a commitment to precisely specified MISSION and OBJECTIVES 
concerning 3D support in the JavaFX API.


I have no idea which goals and requirements I should consider when 
contributing concepts or code. So, I can't reply to any of your 
proposals.


All published planned 3D features 
(https://wiki.openjdk.java.net/display/OpenJFX/3D+Features) and all 
released 3D classes so far are more or less isolated items but not parts 
of the big picture.


Please spend the same amount of time communicating the principal 
goals of 3D rendering in JavaFX.


Sincerely,

August







Re: Mixing 2D and 3D

2013-07-18 Thread Richard Bair
Basically the "embed and OpenGL rendering" would be treated as "rendering into 
a texture using OpenGL and composite it into the scene", so it doesn't really 
impact on either approach. Unless instead of giving you a surface to scribble 
into using OpenGL, we gave you a callback where you issued OpenGL into the 
stream of commands such that your code was an integral part of the scene. In 
that case, there would be all kinds of issues depending on whether it was a 2D 
or 3D setup.

Richard

On Jul 18, 2013, at 2:29 PM, David Ray  wrote:

> I'm not a 3D expert but my "gut" tells me that the two pipelines should 
> remain distinct as you say. I can't imagine the evolution of such different 
> functions converging in such a way where the semantic treatment of the two 
> will coincide in a clean, simple and unconfusing manner. That only seems like 
> it would lead to compromise and the inability to develop both concepts to 
> their full maturity - and what about what you mentioned regarding possible 
> OpenGL exposure from the 3D API ? Would this be possible while still merging 
> 2D and 3D semantics?
> 
> David 
> 
> 
> 
> On Jul 18, 2013, at 3:58 PM, Richard Bair  wrote:
> 
>> While working on RT-5534, we found a large number of odd cases when mixing 
>> 2D and 3D. Some of these we talked about previously, some either we hadn't 
>> or, at least, they hadn't occurred to me. With 8 we are defining a lot of 
>> new API for 3D, and we need to make sure that we've very clearly defined how 
>> 2D and 3D nodes interact with each other, or developers will run into 
>> problems frequently and fire off angry emails about it :-)
>> 
>> Fundamentally, 2D and 3D rendering are completely different. There are 
>> differences in how opacity is understood and applied. 2D graphics frequently 
>> use clips, whereas 3D does not (other than clipping the view frustum or 
>> other such environmental clipping). 2D uses things like filter effects (drop 
>> shadow, etc) that is based on pixel bashing, whereas 3D uses light sources, 
>> shaders, or other such techniques to cast shadows, implement fog, dynamic 
>> lighting, etc. In short, 2D is fundamentally about drawing pixels and 
>> blending using the Painters Algorithm, whereas 3D is about geometry and 
>> shaders and (usually) a depth buffer. Of course 2D is almost always defined 
>> as 0,0 in the top left, positive x to the right and positive y down, whereas 
>> 3D is almost always 0,0 in the center, positive x to the right and positive 
>> y up. But that's just a transform away, so I don't consider that a 
>> *fundamental* difference.
>> 
>> There are many ways in which these differences manifest themselves when 
>> mixing content between the two graphics.
>> 
>> http://fxexperience.com/?attachment_id=2853
>> 
>> This picture shows 4 circles and a rectangle. They are setup such that all 5 
>> shapes are in the same group [c1, c2, r, c3, c4]. However depthBuffer is 
>> turned on (as well as perspective camera) so that I can use Z to position 
>> the shapes instead of using the painter's algorithm. You will notice that 
>> the first two circles (green and magenta) have a "dirty edge", whereas the 
>> last two circles (blue and orange) look beautiful. Note that even though 
>> there is a depth buffer involved, we're still issuing these shapes to the 
>> card in a specific order.
>> 
>> For those not familiar with the depth buffer, the way it works is very 
>> simple. When you draw something, in addition to recording the RGBA values 
>> for each pixel, you also write to an array (one element per pixel) with a 
>> value for every non-transparent pixel that was touched. In this way, if you 
>> draw something on top, and then draw something beneath it, the graphics card 
>> can check the depth buffer to determine whether it should skip a pixel. So 
>> in the image, we draw green for the green circle, and then later draw the 
>> black for the rectangle, and because some pixels were already drawn to by 
>> the green circle, the card knows not to overwrite those with the black pixel 
>> in the background rectangle.
>> 
>> The depth buffer is just a technique used to ensure that content rendered 
>> respects Z for the order in which things appear composited in the final 
>> frame. (You can individually cause nodes to ignore this requirement by 
>> setting depthTest to false for a specific node or branch of the scene graph, 
>> in which case they won't check with the depth buffer prior to drawing their 
>> pixels, they'll just overwrite any

Re: Mixing 2D and 3D

2013-07-18 Thread Richard Bair
You just embed a SubScene within a 3D scene, or a SubScene3D within a 2D scene, 
etc. So you can easily nest one rendering mode within the other -- where 
"easily" means, that each SubScene / SubScene3D has the semantics of "draw into 
a texture and composite into the parent scene".
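As a concrete sketch with the present API (SubScene exists today; SubScene3D 
is the proposed variant), a depth-buffered 3D island composited into an 
ordinary 2D scene looks roughly like this, assuming the usual javafx.scene, 
javafx.scene.control, javafx.scene.layout and javafx.scene.shape imports:

    // A 3D SubScene rendered into a texture and composited into a 2D scene.
    SubScene threeD = new SubScene(new Group(new Box(100, 100, 100)),
                                   400, 300, true, SceneAntialiasing.BALANCED);
    threeD.setCamera(new PerspectiveCamera(true));

    BorderPane root = new BorderPane(threeD);             // 2D chrome around the 3D island
    root.setTop(new ToolBar(new Button("Reset view")));
    Scene scene = new Scene(root, 800, 600);               // plain 2.5D scene, no depth buffer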

Richard

On Jul 18, 2013, at 2:20 PM, Daniel Zwolenski  wrote:

> Does it need to be a separate class, can it not just be a setting on scene 
> like setRenderMode(3d)? Just thinking you may want a base 3d view for example 
> but then show 2d screens at times for settings, etc, so you could switch it 
> on and off. 
> 
> I assume there's no way to do it pane by pane, so the docked components of a 
> BorderPane are 2d optimized but the center is 3d? Or is that what SubScene3d 
> is for (not real clear how this would be used)?
> 
> 
> 
> On 19/07/2013, at 6:58 AM, Richard Bair  wrote:
> 
>> While working on RT-5534, we found a large number of odd cases when mixing 
>> 2D and 3D. Some of these we talked about previously, some either we hadn't 
>> or, at least, they hadn't occurred to me. With 8 we are defining a lot of 
>> new API for 3D, and we need to make sure that we've very clearly defined how 
>> 2D and 3D nodes interact with each other, or developers will run into 
>> problems frequently and fire off angry emails about it :-)
>> 
>> Fundamentally, 2D and 3D rendering are completely different. There are 
>> differences in how opacity is understood and applied. 2D graphics frequently 
>> use clips, whereas 3D does not (other than clipping the view frustum or 
>> other such environmental clipping). 2D uses things like filter effects (drop 
>> shadow, etc) that is based on pixel bashing, whereas 3D uses light sources, 
>> shaders, or other such techniques to cast shadows, implement fog, dynamic 
>> lighting, etc. In short, 2D is fundamentally about drawing pixels and 
>> blending using the Painters Algorithm, whereas 3D is about geometry and 
>> shaders and (usually) a depth buffer. Of course 2D is almost always defined 
>> as 0,0 in the top left, positive x to the right and positive y down, whereas 
>> 3D is almost always 0,0 in the center, positive x to the right and positive 
>> y up. But that's just a transform away, so I don't consider that a 
>> *fundamental* difference.
>> 
>> There are many ways in which these differences manifest themselves when 
>> mixing content between the two graphics.
>> 
>> http://fxexperience.com/?attachment_id=2853
>> 
>> This picture shows 4 circles and a rectangle. They are setup such that all 5 
>> shapes are in the same group [c1, c2, r, c3, c4]. However depthBuffer is 
>> turned on (as well as perspective camera) so that I can use Z to position 
>> the shapes instead of using the painter's algorithm. You will notice that 
>> the first two circles (green and magenta) have a "dirty edge", whereas the 
>> last two circles (blue and orange) look beautiful. Note that even though 
>> there is a depth buffer involved, we're still issuing these shapes to the 
>> card in a specific order.
>> 
>> For those not familiar with the depth buffer, the way it works is very 
>> simple. When you draw something, in addition to recording the RGBA values 
>> for each pixel, you also write to an array (one element per pixel) with a 
>> value for every non-transparent pixel that was touched. In this way, if you 
>> draw something on top, and then draw something beneath it, the graphics card 
>> can check the depth buffer to determine whether it should skip a pixel. So 
>> in the image, we draw green for the green circle, and then later draw the 
>> black for the rectangle, and because some pixels were already drawn to by 
>> the green circle, the card knows not to overwrite those with the black pixel 
>> in the background rectangle.
>> 
>> The depth buffer is just a technique used to ensure that content rendered 
>> respects Z for the order in which things appear composited in the final 
>> frame. (You can individually cause nodes to ignore this requirement by 
>> setting depthTest to false for a specific node or branch of the scene graph, 
>> in which case they won't check with the depth buffer prior to drawing their 
>> pixels, they'll just overwrite anything that was drawn previously, even if 
>> it has a Z value that would put it behind the thing it is drawing over!).
>> 
>> For the sake of this discussion "3D World" means "depth buffer enabled" and 
>> assumes perspective camera is enabled, and 2D means "2.5D capable" by which 
&g

Re: Mixing 2D and 3D

2013-07-18 Thread David Ray
I'm not a 3D expert but my "gut" tells me that the two pipelines should remain 
distinct as you say. I can't imagine the evolution of such different functions 
converging in such a way that the semantic treatment of the two will coincide 
in a clean, simple and unconfusing manner. That only seems like it would lead 
to compromise and the inability to develop both concepts to their full maturity 
- and what about what you mentioned regarding possible OpenGL exposure from the 
3D API ? Would this be possible while still merging 2D and 3D semantics?

David 



On Jul 18, 2013, at 3:58 PM, Richard Bair  wrote:

> While working on RT-5534, we found a large number of odd cases when mixing 2D 
> and 3D. Some of these we talked about previously, some either we hadn't or, 
> at least, they hadn't occurred to me. With 8 we are defining a lot of new API 
> for 3D, and we need to make sure that we've very clearly defined how 2D and 
> 3D nodes interact with each other, or developers will run into problems 
> frequently and fire off angry emails about it :-)
> 
> Fundamentally, 2D and 3D rendering are completely different. There are 
> differences in how opacity is understood and applied. 2D graphics frequently 
> use clips, whereas 3D does not (other than clipping the view frustum or other 
> such environmental clipping). 2D uses things like filter effects (drop 
> shadow, etc) that is based on pixel bashing, whereas 3D uses light sources, 
> shaders, or other such techniques to cast shadows, implement fog, dynamic 
> lighting, etc. In short, 2D is fundamentally about drawing pixels and 
> blending using the Painters Algorithm, whereas 3D is about geometry and 
> shaders and (usually) a depth buffer. Of course 2D is almost always defined 
> as 0,0 in the top left, positive x to the right and positive y down, whereas 
> 3D is almost always 0,0 in the center, positive x to the right and positive y 
> up. But that's just a transform away, so I don't consider that a 
> *fundamental* difference.
> 
> There are many ways in which these differences manifest themselves when 
> mixing content between the two graphics.
> 
> http://fxexperience.com/?attachment_id=2853
> 
> This picture shows 4 circles and a rectangle. They are setup such that all 5 
> shapes are in the same group [c1, c2, r, c3, c4]. However depthBuffer is 
> turned on (as well as perspective camera) so that I can use Z to position the 
> shapes instead of using the painter's algorithm. You will notice that the 
> first two circles (green and magenta) have a "dirty edge", whereas the last 
> two circles (blue and orange) look beautiful. Note that even though there is 
> a depth buffer involved, we're still issuing these shapes to the card in a 
> specific order.
> 
> For those not familiar with the depth buffer, the way it works is very 
> simple. When you draw something, in addition to recording the RGBA values for 
> each pixel, you also write to an array (one element per pixel) with a value 
> for every non-transparent pixel that was touched. In this way, if you draw 
> something on top, and then draw something beneath it, the graphics card can 
> check the depth buffer to determine whether it should skip a pixel. So in the 
> image, we draw green for the green circle, and then later draw the black for 
> the rectangle, and because some pixels were already drawn to by the green 
> circle, the card knows not to overwrite those with the black pixel in the 
> background rectangle.
> 
> The depth buffer is just a technique used to ensure that content rendered 
> respects Z for the order in which things appear composited in the final 
> frame. (You can individually cause nodes to ignore this requirement by 
> setting depthTest to false for a specific node or branch of the scene graph, 
> in which case they won't check with the depth buffer prior to drawing their 
> pixels, they'll just overwrite anything that was drawn previously, even if it 
> has a Z value that would put it behind the thing it is drawing over!).
> 
> For the sake of this discussion "3D World" means "depth buffer enabled" and 
> assumes perspective camera is enabled, and 2D means "2.5D capable" by which I 
> mean perspective camera but no depth buffer.
> 
> So:
> 
>   1) Draw the first green circle. This is done by rendering the circle 
> into an image with nice anti-aliasing, and then rotating that image
> and blend with anything already in the frame buffer
>   2) Draw the magenta circle. Same as with green -- draw into an image 
> with nice AA and rotate and blend
>   3) Draw the rectangle. Because the depth buffer is turned on, for each 
> pixel of the green & mag

Re: Mixing 2D and 3D

2013-07-18 Thread Daniel Zwolenski
Does it need to be a separate class, can it not just be a setting on scene like 
setRenderMode(3d)? Just thinking you may want a base 3d view for example but 
then show 2d screens at times for settings, etc, so you could switch it on and 
off. 

I assume there's no way to do it pane by pane, so the docked components of a 
BorderPane are 2d optimized but the center is 3d? Or is that what SubScene3d is 
for (not really clear how this would be used)?



On 19/07/2013, at 6:58 AM, Richard Bair  wrote:

> While working on RT-5534, we found a large number of odd cases when mixing 2D 
> and 3D. Some of these we talked about previously, some either we hadn't or, 
> at least, they hadn't occurred to me. With 8 we are defining a lot of new API 
> for 3D, and we need to make sure that we've very clearly defined how 2D and 
> 3D nodes interact with each other, or developers will run into problems 
> frequently and fire off angry emails about it :-)
> 
> Fundamentally, 2D and 3D rendering are completely different. There are 
> differences in how opacity is understood and applied. 2D graphics frequently 
> use clips, whereas 3D does not (other than clipping the view frustum or other 
> such environmental clipping). 2D uses things like filter effects (drop 
> shadow, etc) that is based on pixel bashing, whereas 3D uses light sources, 
> shaders, or other such techniques to cast shadows, implement fog, dynamic 
> lighting, etc. In short, 2D is fundamentally about drawing pixels and 
> blending using the Painters Algorithm, whereas 3D is about geometry and 
> shaders and (usually) a depth buffer. Of course 2D is almost always defined 
> as 0,0 in the top left, positive x to the right and positive y down, whereas 
> 3D is almost always 0,0 in the center, positive x to the right and positive y 
> up. But that's just a transform away, so I don't consider that a 
> *fundamental* difference.
> 
> There are many ways in which these differences manifest themselves when 
> mixing content between the two graphics.
> 
> http://fxexperience.com/?attachment_id=2853
> 
> This picture shows 4 circles and a rectangle. They are setup such that all 5 
> shapes are in the same group [c1, c2, r, c3, c4]. However depthBuffer is 
> turned on (as well as perspective camera) so that I can use Z to position the 
> shapes instead of using the painter's algorithm. You will notice that the 
> first two circles (green and magenta) have a "dirty edge", whereas the last 
> two circles (blue and orange) look beautiful. Note that even though there is 
> a depth buffer involved, we're still issuing these shapes to the card in a 
> specific order.
> 
> For those not familiar with the depth buffer, the way it works is very 
> simple. When you draw something, in addition to recording the RGBA values for 
> each pixel, you also write to an array (one element per pixel) with a value 
> for every non-transparent pixel that was touched. In this way, if you draw 
> something on top, and then draw something beneath it, the graphics card can 
> check the depth buffer to determine whether it should skip a pixel. So in the 
> image, we draw green for the green circle, and then later draw the black for 
> the rectangle, and because some pixels were already drawn to by the green 
> circle, the card knows not to overwrite those with the black pixel in the 
> background rectangle.
> 
> The depth buffer is just a technique used to ensure that content rendered 
> respects Z for the order in which things appear composited in the final 
> frame. (You can individually cause nodes to ignore this requirement by 
> setting depthTest to false for a specific node or branch of the scene graph, 
> in which case they won't check with the depth buffer prior to drawing their 
> pixels, they'll just overwrite anything that was drawn previously, even if it 
> has a Z value that would put it behind the thing it is drawing over!).
> 
> For the sake of this discussion "3D World" means "depth buffer enabled" and 
> assumes perspective camera is enabled, and 2D means "2.5D capable" by which I 
> mean perspective camera but no depth buffer.
> 
> So:
> 
>1) Draw the first green circle. This is done by rendering the circle into 
> an image with nice anti-aliasing, and then rotating that image
> and blend with anything already in the frame buffer
>2) Draw the magenta circle. Same as with green -- draw into an image with 
> nice AA and rotate and blend
>3) Draw the rectangle. Because the depth buffer is turned on, for each 
> pixel of the green & magenta circles, we *don't* render
> any black. Because the AA edge has been touched with some 
> transparency, it was wri

Mixing 2D and 3D

2013-07-18 Thread Richard Bair
While working on RT-5534, we found a large number of odd cases when mixing 2D 
and 3D. Some of these we talked about previously, some either we hadn't or, at 
least, they hadn't occurred to me. With 8 we are defining a lot of new API for 
3D, and we need to make sure that we've very clearly defined how 2D and 3D 
nodes interact with each other, or developers will run into problems frequently 
and fire off angry emails about it :-)

Fundamentally, 2D and 3D rendering are completely different. There are 
differences in how opacity is understood and applied. 2D graphics frequently 
use clips, whereas 3D does not (other than clipping the view frustum or other 
such environmental clipping). 2D uses things like filter effects (drop shadow, 
etc.) that are based on pixel bashing, whereas 3D uses light sources, shaders, or 
other such techniques to cast shadows, implement fog, dynamic lighting, etc. In 
short, 2D is fundamentally about drawing pixels and blending using the Painter's 
Algorithm, whereas 3D is about geometry and shaders and (usually) a depth 
buffer. Of course 2D is almost always defined as 0,0 in the top left, positive 
x to the right and positive y down, whereas 3D is almost always 0,0 in the 
center, positive x to the right and positive y up. But that's just a transform 
away, so I don't consider that a *fundamental* difference.

There are many ways in which these differences manifest themselves when mixing 
content between the two graphics.

http://fxexperience.com/?attachment_id=2853

This picture shows 4 circles and a rectangle. They are set up such that all 5 
shapes are in the same group [c1, c2, r, c3, c4]. However, depthBuffer is turned 
on (as well as perspective camera) so that I can use Z to position the shapes 
instead of using the painter's algorithm. You will notice that the first two 
circles (green and magenta) have a "dirty edge", whereas the last two circles 
(blue and orange) look beautiful. Note that even though there is a depth buffer 
involved, we're still issuing these shapes to the card in a specific order.

For those not familiar with the depth buffer, the way it works is very simple. 
When you draw something, in addition to recording the RGBA values for each 
pixel, you also write to an array (one element per pixel) with a value for 
every non-transparent pixel that was touched. In this way, if you draw 
something on top, and then draw something beneath it, the graphics card can 
check the depth buffer to determine whether it should skip a pixel. So in the 
image, we draw green for the green circle, and then later draw the black for 
the rectangle, and because some pixels were already drawn to by the green 
circle, the card knows not to overwrite those with the black pixel in the 
background rectangle.
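In rough pseudocode, the per-pixel rule that produces the dirty fringe is the 
following (a conceptual sketch only, not JavaFX or GPU API; depth entries are 
assumed to start at the far value):

    // Any non-fully-transparent fragment -- including a partially transparent
    // AA edge pixel -- both passes the test and claims the depth entry.
    static void writeFragment(float[][] depth, int[][] color,
                              int x, int y, float z, int argb, float alpha) {
        if (alpha > 0f && z <= depth[x][y]) {
            color[x][y] = argb;                 // blending elided for brevity
            depth[x][y] = z;
        }
        // otherwise the fragment is behind something already drawn and is discarded
    }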

The depth buffer is just a technique used to ensure that content rendered 
respects Z for the order in which things appear composited in the final frame. 
(You can individually cause nodes to ignore this requirement by setting 
depthTest to false for a specific node or branch of the scene graph, in which 
case they won't check with the depth buffer prior to drawing their pixels, 
they'll just overwrite anything that was drawn previously, even if it has a Z 
value that would put it behind the thing it is drawing over!).

For the sake of this discussion "3D World" means "depth buffer enabled" and 
assumes perspective camera is enabled, and 2D means "2.5D capable" by which I 
mean perspective camera but no depth buffer.

So:

1) Draw the first green circle. This is done by rendering the circle 
   into an image with nice anti-aliasing, and then rotating that image 
   and blending it with anything already in the frame buffer.
2) Draw the magenta circle. Same as with green -- draw into an image 
   with nice AA, rotate and blend.
3) Draw the rectangle. Because the depth buffer is turned on, for each 
   pixel of the green & magenta circles, we *don't* render any black. 
   Because the AA edge has been touched with some transparency, it was 
   written to the depth buffer, and we will not draw any black there. 
   Hence the dirty fringe! No blending!
4) Draw the blue circle into an image with nice AA, rotate, and blend. 
   AA edges are blended nicely with the black background!
5) Draw the orange circle into an image with nice AA, rotate, and 
   blend. AA edges are blended nicely with the black background!

Transparency in 3D is a problem, and on ES2 it is particularly difficult to 
solve. As such, it is usually up to the application to sort its scene graph 
nodes in such a way as to end up with something sensible. The difficulty in 
this case is that when you use any 2D node and mix it in with 3D nodes (or even 
other 2D nodes but with the depth buffer turned on) then you end up in a 
situation where the nice AA ends up being a liability rather