Re: ANGLE - Translating OpenGL ES 2 code to DirectX?

2014-07-21 Thread Joseph Andresen

That's a good point Robert,

If the GLContext work that Steve and Felipe did becomes an actual thing, 
this would help that cause become cross-platform.
ANGLE is also strictly ES2, and I haven't looked at the Prism ES2 
pipeline in a while, but I think we use GL2 calls for desktop in some 
cases. We would have to address those cases (if even possible) before 
any work started.


-Joe

On 7/21/2014 10:40 AM, Robert Krüger wrote:

On Mon, Jul 21, 2014 at 7:09 PM, Joseph Andresen
 wrote:

I also forgot,

The argument could be made that if we did indeed use ANGLE, we could ditch
our DirectX 9 pipeline altogether and just use "one" hardware pipeline. We
would really have to evaluate this, though, and I am not sure the work would
be worth the benefit (if there even is any).

Well, at least the presence of the DirectX pipeline was used as an
argument against exposing a GL context via a low-level native API,
which quite a number of people with particular graphics/performance
requirements need, IIRC, so this would be a potential benefit.




Re: ANGLE - Translating OpenGL ES 2 code to DirectX?

2014-07-21 Thread Joseph Andresen

I also forgot,

The argument could be made that if we did indeed use ANGLE, we could 
ditch our DirectX 9 pipeline altogether and just use "one" hardware 
pipeline. We would really have to evaluate this, though, and I am not 
sure the work would be worth the benefit (if there even is any).



On 7/21/2014 10:04 AM, Joseph Andresen wrote:

Hi Tobias,

I took an extensive look into exactly what ANGLE provides in terms of 
a feature set, and at the time, found that it wouldn't really get us 
anything. Technical challenges aside, being able to run the GL pipe on 
Windows is not limited by Prism; in fact, in the past other engineers 
and I have used Windows ES2 to vet platform-specific bugs. I think we 
just don't ship with that support.


I do think one interesting thing to set up would be using it to 
validate our shaders (if all the legal stuff worked out and we were 
actually able to use it).


-Joe




On 7/21/2014 4:17 AM, Tobias Bley wrote:

Hi,

does anybody know the ANGLE project? 
(https://code.google.com/p/angleproject/)


It’s used by Chrome and Firefox for WebGL to translate OpenGL ES2 
code to DirectX on Windows.


Maybe it could be used to run the JavaFX OpenGL ES2 pipeline on Windows 
too?


Best regards,
Tobi









hg: openjfx/8u-dev/rt: RT-23916 [Accessibility] Add accessibility support for Charts

2014-07-16 Thread joseph.andresen
Changeset: c0a468c56309
Author: Joseph Andresen
Date: 2014-07-16 14:37 -0700
URL: http://hg.openjdk.java.net/openjfx/8u-dev/rt/rev/c0a468c56309

RT-23916 [Accessibility] Add accessibility support for Charts

! modules/controls/src/main/java/javafx/scene/chart/AreaChart.java
! modules/controls/src/main/java/javafx/scene/chart/BarChart.java
! modules/controls/src/main/java/javafx/scene/chart/Chart.java
! modules/controls/src/main/java/javafx/scene/chart/LineChart.java
! modules/controls/src/main/java/javafx/scene/chart/PieChart.java
! modules/controls/src/main/java/javafx/scene/chart/ScatterChart.java
! modules/controls/src/main/java/javafx/scene/chart/StackedAreaChart.java
! modules/controls/src/main/java/javafx/scene/chart/StackedBarChart.java
! modules/controls/src/main/java/javafx/scene/chart/XYChart.java



Review Request RT-36692 HighContrast Support for Windows 7

2014-05-07 Thread Joseph Andresen

Hey Jonathan and Anthony,

Please review the change for high-contrast support on Windows.

Jira: https://javafx-jira.kenai.com/browse/RT-36692
Webrev: http://cr.openjdk.java.net/~jandrese/RT36692/

Thanks,
Joe


Re: Bounds constructor validation

2013-10-01 Thread Joseph Andresen
Kevin and I had this exact conversation years ago. I believe our answer was 
that anything less than 0 meant "uninitialized"?

Maybe he remembers it better than I do.

-Joe

On Oct 1, 2013, at 4:32 PM, Richard Bair  wrote:

> I see this is not going to work, since isEmpty() defines itself as where one 
> component's max (maxX, maxY, maxZ) is less than the corresponding min. So we 
> make sense, at least, out of -1 (although as far as the implementation is 
> concerned, any negative value works just as well).
> 
> On Oct 1, 2013, at 3:13 PM, Richard Bair  wrote:
> 
>> Hi,
>> 
>> I'm looking at https://javafx-jira.kenai.com/browse/RT-23446, where the 
>> argument is made that the width / height of a node (specifically, a Region's 
>> prefWidth, minWidth, maxWidth, prefHeight, minHeight, maxHeight) should 
>> never be negative. While looking at this, I noticed that in Node, the 
>> prefWidth method relies on the layoutBounds.getWidth(). However, the Bounds 
>> class itself does not appear to do any validation of the parameters passed 
>> to the Bounds. There are no checks for NaN, and no checks for negative 
>> width, height, depth.
>> 
>> Is there any reason why we should allow NaN, or negative width / height / 
>> depth for Bounds?
>> 
>> Richard
> 
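As a hypothetical sketch of the validation being discussed (this is not the real javafx.geometry.Bounds or BoundingBox code, and all names are illustrative), rejecting NaN while treating negative sizes as the "empty" sentinel could look like:

```java
// Hypothetical sketch of the constructor validation discussed above; this is
// NOT the real javafx.geometry.Bounds implementation. NaN is rejected, and
// negative sizes are normalized to the -1 "empty/uninitialized" sentinel so
// that an isEmpty()-style check stays consistent.
class BoundsSketch {
    final double minX, minY, width, height;

    BoundsSketch(double minX, double minY, double width, double height) {
        if (Double.isNaN(minX) || Double.isNaN(minY)
                || Double.isNaN(width) || Double.isNaN(height)) {
            throw new IllegalArgumentException("Bounds component is NaN");
        }
        if (width < 0 || height < 0) {
            // any negative value means "empty"; normalize to -1
            width = -1;
            height = -1;
        }
        this.minX = minX;
        this.minY = minY;
        this.width = width;
        this.height = height;
    }

    boolean isEmpty() {
        // mirrors the definition cited above: a max less than its min
        return width < 0 || height < 0;
    }
}
```

Normalizing every negative size to -1 would keep the "any negative value works just as well" observation above from leaking into equality or serialization behavior.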


Re: problem with javaFX canvas

2013-09-30 Thread Joseph Andresen

Hello Cinta,

Sorry for the delay. JavaOne madness.

I just ran your code on Windows 7 with an NVIDIA card and saw no issues.

Can you file a JIRA issue and describe your build and which version of 
JavaFX you are using?

Also, paste the code from your email.

https://javafx-jira.kenai.com/secure/Dashboard.jspa

Thanks,
Joe

On 9/26/2013 9:20 PM, Cinta Damayanti wrote:

import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.canvas.Canvas;
import javafx.scene.canvas.GraphicsContext;
import javafx.scene.control.Button;
import javafx.scene.paint.Color;
import javafx.stage.Stage;

public class TestCanvasTest extends Application {

    static int width = 400, height = 400;

    @Override
    public void start(Stage stage) throws Exception {
        Group root = new Group();
        final Canvas canvas = new Canvas(width, height);
        final GraphicsContext gc = canvas.getGraphicsContext2D();

        Button button1 = new Button();
        button1.setText("START");
        button1.setMinWidth(100);
        button1.setTranslateX(10);
        button1.setTranslateY(10);

        Button button2 = new Button();
        button2.setText("PAUSE");
        button2.setMinWidth(100);
        button2.setTranslateX(10);
        button2.setTranslateY(40);

        root.getChildren().add(canvas);
        root.getChildren().addAll(button1, button2);

        Scene scene = new Scene(root, width, height);
        stage.setScene(scene);

        gc.setFill(Color.RED);
        gc.fillRect(0, 0, width, height);
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}




Re: JavaFX 3D Issues reported in April

2013-09-26 Thread Joseph Andresen
The maximum number of lights on a node is 3; that is a limitation. Not 
having normals is also a limitation, not a bug.

Chien can comment on the rest.

On Sep 26, 2013, at 12:54 PM, Richard Bair  wrote:

> Hi guys,
> 
> I was just going through old email after JavaOne, and noticed one about bugs 
> found with JavaFX 3D:
> 
> http://www.spanglefish.com/dmsconsulting/index.asp?pageid=469276
> 
> Chien, can you go through this list of issues and make sure everything is 
> accounted for (either fixed or entered into JIRA)?
> 
> Thanks
> Richard


Re: Canvas blowing up (was Re: JavaFX Media issues)

2013-08-28 Thread Joseph Andresen



Canvas as of right now (pretty much any 8.0 build) is actually not 
persistent, due to the MT changes that were implemented to make the 
buffer system "work" with multi-threading. I had a patch which 
implements clearRect (without considering Jim's warning about stale 
state), but it is useless until the buffers are properly handled across 
multiple threads.


Here is the jira for anyone curious:
https://javafx-jira.kenai.com/browse/RT-27558

-Joe


On 8/9/2013 8:23 AM, Richard Bair wrote:

I mean, it looks like it is working for a few seconds,
but then as the memory fills with the Canvas backlog it can lead to the GC
using a lot more CPU, thus reducing the ability of Canvas to process its
command queue even further, until it just collapses in on itself and dies.

Forking the thread.

The problem with Canvas is that if you have a canvas and you scribble on it, 
and then scribble on it some more, and then scribble on it some more, then in 
order for us to get the right result in the end, we need to replay all those 
scribbles in order. If pulses are not happening, we still need to remember 
these scribbles so we can draw the right result.

BUT, if you issue a command to the canvas which will cause it to "clear" all 
its contents, then we could throw away any previously buffered data. Right now the only 
way to do that would be a fillRect with a solid fill where the fillRect encompasses the 
entire canvas area, or a clearRect where the clearRect encompasses the entire canvas area.

This seems like a very simple fix. GraphicsContext.clearRect and GraphicsContext.fillRect should 
both (under the right conditions) throw away the previously buffered commands. Then all you have to 
do is be sure to make one of these calls (likely just a clearRect) before each frame, and we'll 
never buffer more than a single frame's worth of data. We could also add a "clear" method 
which is "clearRect(0, 0, w, h)" to make this more foolproof, and then document it as a 
best practice to clear the canvas before each rendering if you intend to redraw the entire thing on 
each frame.

If you're making use of manually operated "dirty rects" so that you only clear the 
damaged area to repaint, then we couldn't employ this technique and we'd have to buffer 'till 
kingdom come. So we still need a mechanism exposed in the scene graph of "liveness" and 
associated events so that when the scene is no longer live (for example, when minimized) you could 
stop your animation timer, but for your specific media use case this isn't as important.

Richard
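The full-canvas-clear strategy Richard describes can be modeled in plain Java. This is a hypothetical sketch of the buffering idea only, not Prism's actual command queue (class and method names here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the buffering behavior described above; NOT Prism's
// actual command queue. Draw commands accumulate so they can be replayed in
// order, but a clearRect covering the whole canvas lets us drop the backlog.
class CanvasBufferSketch {
    final int width, height;
    private final List<String> buffer = new ArrayList<>();

    CanvasBufferSketch(int width, int height) {
        this.width = width;
        this.height = height;
    }

    void fillOval(int x, int y, int w, int h) {
        buffer.add("fillOval " + x + "," + y + "," + w + "," + h);
    }

    void clearRect(int x, int y, int w, int h) {
        if (x <= 0 && y <= 0 && x + w >= width && y + h >= height) {
            // full-canvas clear: earlier commands can never be visible again
            buffer.clear();
        }
        buffer.add("clearRect " + x + "," + y + "," + w + "," + h);
    }

    int bufferedCommands() {
        return buffer.size();
    }
}
```

With this model, calling clearRect over the whole canvas once per frame keeps the buffer bounded to a single frame's worth of commands, which is exactly the best practice suggested above; a partial (dirty-rect) clear cannot truncate the backlog.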




Re: Mixing 2D and 3D

2013-07-25 Thread Joseph Andresen

err... two identical groups of nodes**

On 7/25/2013 11:04 AM, Joseph Andresen wrote:


On 7/25/2013 10:37 AM, Richard Bair wrote:

Hi August,


"I think we already do multiple active cameras?"

More precisely: simultaneous viewing from different points of view 
into a single 3D scene graph was meant, i.e. several cameras are 
attached to one scene graph.
A SubScene has exactly one camera attached which renders the 
associated scene graph into the corresponding SubScene's rectangle. 
Implementing simultaneous viewing requires a cloned 3D scene graph 
for the second, third, and so on SubScene/Camera. Material, Mesh, 
and Image objects can be re-used because they are shareable. 
Animations of Nodes' Transforms seem to be shareable as well. But 
Transitions (Rotate, Scale, Translate) have to be cloned because 
they operate on a Node's methods directly. So, simultaneous viewing 
seems practicable.
Jasper or Kevin will have to comment, but I know this scenario was 
talked about extensively in the design for the renderToImage and 
cameras, and I thought this was possible today.
I know that one way to do this is by rendering the same group of nodes 
twice, using a different camera each time, and using render-to-image 
or whatever to get your "RTT". I haven't tried it, but I suspect 
it goes something like calling render-to-image on a group with one 
camera and then render-to-image on the same group with a different 
camera.








Re: Can JavaFX do CAD?

2013-07-23 Thread Joseph Andresen
I believe JavaFX could do CAD; the first step would be to provide a simple 
data set and boil it down to the best render paths in JavaFX.

As far as I know, it shouldn't be any worse than Swing with the slowest 
render paths.

-Joe

On Jul 23, 2013, at 8:47 AM, Chris Gay  wrote:

> Hello all.
> 
> Please could someone advise if it is even feasible for me to consider 
> re-factoring the following Swing application, so that it becomes a JavaFX 
> one. From trying to read about JavaFX, I get the feeling that Oracle never 
> intended Java FX for the purpose I need.
> 
> I have a large Java Swing desktop CAD application which makes heavy use of 
> the Java 2D API, and concurrency. It is a Menu Bar driven application, with a 
> Toolbar for Tools, and a few buttons, but 99% of the user activity concerns 
> selecting and manipulating vector graphical objects in a traditional manner 
> using one Tool at a time (think Inkscape or LibreOffice Drawing apps). The 
> application has multiple drawings open at the same time (but only one is 
> visible), and each Drawing contains its own Drawing and Processing threads 
> (in addition to sharing the Main and Event Threads), which keeps the Event 
> Thread lively. Each Drawing contains an ArrayList, acting as a Display List 
> of all graphical objects, and each graphical object can be a tree structure. 
> In many cases I use simple iteration instead of Iterators, simply for speed, 
> and to avoid garbage. The graphical objects are lightweight, in that they do 
> not carry references to events and handlers. All they carry is their basic 
> geometric data and properties, a bounding box which they can lend as Read 
> Only, and a boolean flag to indicate selection, which means there can be 
> millions of the objects, with a minimum memory footprint. To support them, 
> there are many hundreds of methods, which the tools interact with. There can 
> be multiple Drawing Windows active on a single drawing, where each Window is 
> backed up by an offscreen image, which handles the zoom and sliding buffer 
> behaviour for fast scrolling, to allow rapid bit-blt's to the actual window. 
> Lastly, the user manipulates the Drawing (Display List), using one of many 
> Tools, where the active Tool receives events from the event queue, and then 
> manipulates selected and/or unselected graphical objects, by using XOR Mode 
> graphics onto the offscreen buffer, generally using callbacks.
> 
> The system is fast and very responsive on large data sets, but what I do not 
> know is whether JavaFX will help me make a better implementation, or whether 
> it will fight me all the way. With JavaFX claiming hardware acceleration, I 
> do not understand whether it depends on transferring the very large data sets 
> into graphics hardware to render, and what happens if the hardware cannot 
> cope. So far, I have found little in the way of discussions that help me get 
> a mental picture of how JavaFX is intended to work. Should I stick with Swing?
> 
> Regards,
> 
> Chris
> 


Re: Prism Arch. WAS. Re: Mixing 2D and 3D

2013-07-23 Thread Joseph Andresen

I hate to point out a small part of such an important reply, but I must.

"before we could support DX 10, 11+, is to make it as easy as we can to write & 
maintain multiple pipelines."

I cannot stress enough how important this is. I have been playing around 
with extending Prism to DX 11, and before I could even do anything useful 
I needed a useful abstraction of the concepts, simply because they are 
different in DX 11.


Prism's VertexBuffer class is a prime example of this. It was designed 
with 2D in mind. Not to a fault, but ideally it would be nice to be able 
to define Vertex Formats IN JAVA, and just have all 2D render commands 
use the same vertex format (which it does already, hardcoded, in 
d3dcontext.c, I think). This would allow us to keep the same 
VertexBuffer for 2D, while being able to create VertexBuffers for 3D 
and, heck, even GPU programming. As far as I'm concerned this is a near 
prerequisite for integrating new pipelines. It also affects our 
ability to easily provide custom shader support outside of Prism. I had 
a changeset somewhere that got as far as rewriting 2D to use such a 
version of VertexBuffer, but I believe I got stuck with dynamically 
instantiating the native buffers contained within a VertexBuffer based 
on a given VertexFormat.


Also, another challenge: the above abstraction is an old way of 
thinking. Newer native APIs abstract this stuff even more because of 
compute shaders.

-Joe


On 7/22/2013 5:19 PM, Richard Bair wrote:

Hi August,

I will attempt some kind of answer although what we end up doing has a lot to 
do with where the community wants to take things, so some things we've talked 
about, some things we haven't, and the conversation is needed and good.


Whatever the use case will be JavaFX 3D will be measured against the business 
needs and certainly against the capabilities of the latest releases of the 
native 3D APIs Direct3D (11.1) and OpenGL (4.3). This might be unfair or not 
helpful, but it is legitimate because they are the upper limits.

So, even an object-oriented scene-graph-based 3D implementation should in 
principle be able to benefit from almost all native 3D features and should 
provide access to them. If the JavaFX architecture enables this approach then a 
currently minor implementation state will be rather accepted.
Potential features are currently determined and limited by JavaFX's underlying 
native APIs, Direct3D 9 and OpenGL ES 2. Correct?

Java 8 doesn't support Windows XP, so in theory we can start taking advantage of 
DirectX 10+. At this time we are limited to OpenGL ES 2 for the sake of mobile and 
embedded. Also the cost of supporting multiple pipelines is significant. I think 
the first thing we would need to work through, before we could support DX 10, 11+, 
is to make it as easy as we can to write & maintain multiple pipelines.

So we have been looking at moving off of DX 9, but not off of ES 2 (until ES 3 
is widespread).


- core, e.g.: primitives (points, lines, line-strip, triangle-strip, patches, 
corresponding adjacency types), vertex attributes, double-precision, shaders 
(vertex, tessalation, geometry, fragment, compute), 3D-/cubemap-texture, 
multitextures, compressed texture, texture properties, multipass rendering, 
occlusion queries, stencil test, offscreen rendering into image, multiple 
active cameras, stereo?

I think some of these are relatively easy things -- a line strip or triangle 
strip for example could be Mesh subclasses?

We need to talk about how to handle shaders. I agree this is something that is 
required (people must have some way of supplying a custom shader). This will 
require work in Prism to support, but conceptually seems rather fine to me.

I think different texture types could be supported by extending API in Image. 
The worst case would be to introduce a Texture class and setup some kind of 
relationship if possible between an Image and a Texture (since images are just 
textures, after all).

The harder things are things like multi-pass rendering. Basically, features that we can 
add where prism is in control and picks a rendering strategy for you are relatively 
straightforward to think about. But giving a hook that allows the developer to pick the 
rendering strategy is quite a bit more complicated. I was reading up on order independent 
transparency algorithms and thinking "how would this be exposed in API?". I 
haven't had any good brain-waves on that yet.

I think we already do multiple active cameras?


- APIs, e.g.: user input for scene navigation and model interaction (keyboard, 
mouse, touchpad/screen), geometry utilies, skinning, physics engine interface 
(kinematics), shadows

Key/ mouse / touch / etc should be there already?

Skinning and physics are both interesting. Skinning and boning is interesting 
because of different ways to go about this. Last JavaOne we did this with 
special shaders all in hardware. Ideally the custom shader support (that 
doesn't exist)

Re: MSAA and Scene anti aliasing

2013-07-14 Thread Joseph Andresen
Yeah, but there are multiple ways to implement anti-aliasing besides MSAA, and a 
read-only attribute implies that all we will ever do is MSAA.

Perhaps having the number of samples in the enum still isn't right. Or it could 
return a sensible value if we ever implement AA techniques that don't have this 
setting.

-J
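Purely to illustrate the trade-off being debated (this is not the API that was eventually chosen, and the names are hypothetical), one could keep sample counts out of the enum and report the actual number of samples as a read-only value, roughly:

```java
// Hypothetical sketch combining the ideas in this thread: the enum names the
// requested mode without baking in a sample count, and the pipeline reports
// the sample count it actually obtained as a read-only value. Illustrative
// only; not the real JavaFX SceneAntialiasing API.
enum AntiAliasingRequest { DISABLED, DEFAULT, MSAA }

class SceneSketch {
    private final int actualSamples; // what the hardware actually granted

    SceneSketch(AntiAliasingRequest requested, int supportedMaxSamples) {
        int samples;
        switch (requested) {
            case DISABLED:
                samples = 0;
                break;
            case DEFAULT:
                // platform picks a sensible default, clamped to hardware
                samples = Math.min(4, supportedMaxSamples);
                break;
            default: // MSAA: take the best the card offers
                samples = supportedMaxSamples;
                break;
        }
        this.actualSamples = samples;
    }

    int getActualSamples() {
        return actualSamples;
    }
}
```

This addresses Kevin's point that chipsets give limited control over sample counts: the request stays abstract, and the granted value is exposed read-only.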

On Jul 13, 2013, at 12:00 PM, Kevin Rushforth  
wrote:

> I don't really like the single enum approach. I would prefer to keep the 
> existing MSAA boolean, and then, if needed, add a separate attribute for 
> requesting the number of samples; if desired there could be a read-only 
> attribute that returns the actual number of samples used. Most chipsets give 
> limited (or no) control over the number of samples anyway so an enum doesn't 
> seem like a good fit.
> 
> -- Kevin
> 
> 
> Gerrit Grunwald wrote:
>> +1 for the enum approach...will make it easier to enhance for future 
>> options...
>> 
>> Gerrit 
>> Am 12.07.2013 um 19:55 schrieb Richard Bair :
>> 
>>  
>>> Thor recently pushed an implementation for MSAA for those cases when the 
>>> feature is supported by the card and where a Scene (or SubScene) is created 
>>> with the antiAliasing flag set to true. MSAA is "Multi-sampled Anti 
>>> Aliasing", which means that the graphics card, when configured in this 
>>> mode, will sample each fragment multiple times. The upshot is that 3D 
>>> doesn't look as jaggy.
>>> 
>>> However this has an impact on performance (usually an extra buffer copy or 
>>> at the very least you will be sampling each pixel multiple times so if you 
>>> are doing something graphically intense then that might push you over the 
>>> edge where you start to see performance degradation). Now multi-sampling 
>>> can be 2x, 4x, etc. The higher the multi-sampling value, the better the 
>>> quality, and the lower the performance.
>>> 
>>> I'm also bothered by the name "antiAliasing" because there are many forms 
>>> of anti-aliasing in the world and it isn't clear which this is. I think 
>>> perhaps we should instead have an enum. The idea is that we can add to the 
>>> enum over time with greater options for how to perform the scene 
>>> antialiasing.
>>> 
>>> public enum SceneAntiAliasing {
>>>   DISABLED,
>>>   DEFAULT,
>>>   MSAA_2X,
>>>   MSAA_4X
>>> }
>>> 
>>> And then grow it over time to include potentially other techniques. My 
>>> thought here is that the implementation is going to matter to folks. 
>>> They're going to want to be able to make the performance / quality 
>>> tradeoff, and perhaps even the implementation tradeoff (since different 
>>> implementations may provide somewhat different results). DISABLED turns it 
>>> off, obviously. DEFAULT allows us to pick what we think is the best (might 
>>> be different on different platforms. Desktop might go with MSAA_16x or 
>>> equivalent while iOS might be MSAA_2X). Then some standard options.
>>> 
>>> Thoughts?
>>> Richard
>>>


Re: MSAA and Scene anti aliasing

2013-07-12 Thread Joseph Andresen

Totally agree on this; if I could clone myself, we would have FXAA shaders :P

-J


On 7/12/2013 10:55 AM, Richard Bair wrote:

public enum SceneAntiAliasing {
 DISABLED,
 DEFAULT,
 MSAA_2X,
 MSAA_4X
}