Re: PageFlowScope and framesets bug?
I get the point, thanks Adam. It would probably be fine to have af:frame do this automatically, at least as an option.

Cosma

2006/6/15, Adam Winer [EMAIL PROTECTED]:

Cosma,

What you'll need to do is change:

    source="/faces/people/edit/top.jspx"

to

    source="#{...anELExpression..}"

... where the EL expression uses ViewHandler.getActionURL(). If you're using Facelets, I'd register a custom EL function that takes a viewId and calls ViewHandler.getActionURL(), so this would look like:

    source="#{my:getViewUrl('/people/edit/top.jspx')}"

(If you're not using Facelets, you can create a custom Map implementation that has the same effect.)

-- Adam

On 6/15/06, Cosma Colanicchia [EMAIL PROTECTED] wrote:

Hello,

I have a problem using PageFlowScope when the dialog contains a frameset. Here is what I'm doing: I have an af:table that launches a dialog in a child window when the user clicks on a row.

    <af:commandLink text="Edit" useWindow="true" action="#{peopleActions.editPerson}">
      <af:setActionListener from="#{person.personId}" to="#{pageFlowScope.personId}"/>
    </af:commandLink>

In the action method, I put an object into the PageFlowScope and return a dialog:* outcome.
    public String editPerson()
    {
      Integer personId = (Integer) getPageFlowAttribute("personId");
      PersonDTO person = PeopleFacadeUtil.getFacade().getPersonDetails(personId);
      setPageFlowAttribute("person", person);
      return "dialog:editPerson";
    }

My navigation rule points to a frameset page:

    <navigation-rule>
      <navigation-case>
        <from-action>#{peopleActions.editPerson}</from-action>
        <from-outcome>dialog:editPerson</from-outcome>
        <to-view-id>/people/edit/frameset.jspx</to-view-id>
      </navigation-case>
    </navigation-rule>

And my frameset is defined like this:

    <afh:frameBorderLayout>
      <f:facet name="top">
        <afh:frame source="/faces/people/edit/top.jspx" name="editPersonTop" height="350px"/>
      </f:facet>
      <f:facet name="bottom">
        <afh:frame source="/faces/people/edit/edit.jspx" name="editPersonContent" height="*"/>
      </f:facet>
    </afh:frameBorderLayout>

The problem is that inside top.jspx and edit.jspx, the pageFlowScope "person" attribute is lost. If I skip the frameset and navigate directly to one of the two pages, the object is there.

My guess is that the af:frame component doesn't pass through the _afPfm parameter when loading the view defined in source, so the correct pageFlowScope can't be determined. Can this be considered a bug?

Thanks
Cosma
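For what it's worth, Adam's suggested EL function could be sketched roughly like this. This is only a sketch: the ViewUrlFunctions class name, the "/myapp" context path, and the "/faces" prefix mapping are assumptions for illustration; a real implementation would delegate to ViewHandler.getActionURL() obtained through FacesContext rather than the stand-in below.

```java
// Hypothetical helper behind #{my:getViewUrl('/people/edit/top.jspx')}.
// In a real JSF app the public method would instead do something like:
//   FacesContext ctx = FacesContext.getCurrentInstance();
//   return ctx.getApplication().getViewHandler().getActionURL(ctx, viewId);
// The lookup is stubbed here so the sketch is self-contained.
public class ViewUrlFunctions {

    // Stand-in for ViewHandler.getActionURL(): typically it prepends the
    // web-app context path and the FacesServlet prefix mapping
    // (assumed here to be "/faces").
    static String getActionURL(String contextPath, String viewId) {
        return contextPath + "/faces" + viewId;
    }

    // The EL function registered in the Facelets taglib would delegate here.
    public static String getViewUrl(String viewId) {
        return getActionURL("/myapp", viewId); // context path assumed
    }

    public static void main(String[] args) {
        System.out.println(getViewUrl("/people/edit/top.jspx"));
    }
}
```

The static method would then be registered as an EL function under the my: namespace in a Facelets taglib, so that source="#{my:getViewUrl('/people/edit/top.jspx')}" resolves to it and the frame URL carries the parameters the frameset page itself received.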
Re: [Proposal] skinning platform, agent, and language features
Catalin Kormos wrote:

> Thanks for bringing the use of CSS3 syntax into the discussion; this was one of the things that really got me thinking, actually :). From what I can tell, all the features you mentioned can be achieved with CSS2 syntax too. Here is how I imagine making this work:
>
> - Selector inclusion with :alias. This marks any selector that is only a placeholder for common style properties. This is fine under CSS2 syntax too (a standard parser will return it), and all selectors that end with :alias are removed from the final CSS, their content being included in other selectors. The inclusion by another selector would look like this: content:url(.bgColor:alias). This is the current solution, but I'm not 100% sure about it yet.
>
> - Right-to-left support. This is achieved by using the :rtl pseudo selector, right? This could be supported from the beginning with another naming strategy, like a component selector name ending with -rtl.

How is this better than :rtl?

> - Skinning icons. All selector names that define an icon end with Icon or -icon, right? So these can be easily identified when parsing; they will be removed from the final CSS, and the image URL used to get the right icon instance.

This is what we are doing now.

> - Abstracting out the html/styleClass implementation. Maybe I didn't get this right, but isn't the component deciding which selector to use depending on its state? It could end up looking like af-inputText-disabled and af-inputText.

This can get difficult when there are multiple states a component can be in, like disabled, readOnly, active, etc. The user can specify this with pseudo-classes like this: af|inputText:disabled:read-only:active. We render <input class="af_inputText p_AFDisabled p_AFReadOnly p_Active">.

> I hope this makes sense to you too :)

I do not understand how using CSS2 syntax would help. You still need to parse the skin CSS file and generate a new CSS file for the browser.
I understand that you want to use a 3rd party parser, but 3rd party parsers don't understand our :alias and :rtl pseudo-classes, is that right? We have a stand-alone parser that uses the batik jars. We considered using that (and donating it to Trinidad) instead of the skin parser that is there now, but it has a few issues, so we couldn't. When those are fixed, we can revisit whether it would be useful to have.

Regards,
Catalin

On 6/15/06, Jeanne Waldman [EMAIL PROTECTED] wrote:

I'll start developing the @agent/@platform features very soon, like this week or the beginning of next week. I'll do the :lang/@locale after that, so I can revisit the API later.

I'd like to give you a brief background as to why we use the CSS3 syntax instead of CSS2 that all browsers can interpret. The main reason is to add features that are not in CSS2, like:

* selector inclusion with :alias,
* right-to-left support,
* skinning icons in the same skin file,
* the @agent/@platform features that I've mentioned, and
* the ability to abstract out the html/styleclass implementation.

It's arguable that abstracting out the html implementation may make the skinning 'keys' more confusing to users. For example, af|inputText:disabled skins the af|inputText when it is in the disabled state. Another way we could have defined this would be: af|inputText.AFDisabledState.

- Jeanne

Catalin Kormos wrote:

@Jeanne: I just want to say that I'll make sure your proposal makes it into the merged skinning approach, but my work won't be available sooner than the next two months, and more, considering that it will require integration work too after that; and Trinidad's users would probably want to benefit from such a cool new feature as soon as possible. In my opinion you could go ahead with it with no problem, what do you think?

Regards,
Catalin

On 6/15/06, Catalin Kormos [EMAIL PROTECTED] wrote:

@Adam: Thanks, I'm glad to hear that!
The current functionality will be kept, for sure :)

@Martin: Sure, I'll keep in sync with them. I think I'll be able to reuse Jeanne's work; as I said, the parsing approach will be changed, but her proposal needs custom parsing anyway. I'll get in touch with her and try to figure this out.

Regards,
Catalin

On 6/15/06, Martin Marinschek [EMAIL PROTECTED] wrote:

Catalin,

I'm +1 on any changes you want to make to the existing framework, but keep in sync with Trinidad's skinning developers (like Jeanne) so that all of this is used for Trinidad as well. Keep in mind that our ultimate goal here is merging together the different approaches, laying the base for making the component sets compatible with each other. New features are great, but only if they end up on both sides of the great divide.

@Tobago developers: if anyone is interested - Catalin has looked through the skinning approaches, and Trinidad's seemed to him the best for implementing the skinning portion
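The :alias inclusion pass discussed in this thread can be sketched in a few lines. This is a toy model, not Trinidad's actual parser: the resolve() name, the Map-of-strings representation of the skin file, and the exact handling of the content:url(...) marker are all assumptions for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch of ":alias" selector inclusion: selectors ending in ":alias"
// are placeholders for shared properties, other selectors pull them in via
// a content:url(<selector>:alias) marker, and the alias selectors are
// dropped from the final generated CSS.
public class AliasResolver {

    static Map<String, String> resolve(Map<String, String> skin) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : skin.entrySet()) {
            if (e.getKey().endsWith(":alias"))
                continue; // alias placeholders are removed from the final CSS
            String props = e.getValue();
            // Expand every content:url(<selector>:alias); marker in place.
            for (Map.Entry<String, String> alias : skin.entrySet()) {
                if (!alias.getKey().endsWith(":alias"))
                    continue;
                String marker = "content:url(" + alias.getKey() + ");";
                props = props.replace(marker, alias.getValue());
            }
            out.put(e.getKey(), props.trim());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> skin = new LinkedHashMap<>();
        skin.put(".bgColor:alias", "background-color: #eeeeff;");
        skin.put("af|inputText", "content:url(.bgColor:alias); color: black;");
        System.out.println(resolve(skin));
    }
}
```

The same shape works whether the file was parsed with a custom parser or a standard CSS2 parser, which is Catalin's point: the CSS2 grammar accepts these selectors, and the post-processing step above is where the skin-specific meaning lives.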
Re: Thoughts about unit testing and mock objects
Gentlemen,

since there is this nice little guy ([1]), I'd like to come back to this discussion. Adam and I spoke about starting to solve this problem. For JSF infrastructure things we'd like to give Shale a try. For stuff like UIComponent / Renderer, jMock might be useful. Btw, I checked the legal-discuss list, and the jMock license (BSD style) fits fine into the ASF world.

-Matthias

[1] http://issues.apache.org/jira/browse/ADFFACES-4

On 5/14/06, John Fallows [EMAIL PROTECTED] wrote:

On 5/14/06, Adam Winer [EMAIL PROTECTED] wrote:

On 5/14/06, John Fallows [EMAIL PROTECTED] wrote:

On 5/13/06, Craig McClanahan [EMAIL PROTECTED] wrote:

>>> In an example that ought to be relevant for this codebase :-), consider what happens when you want to test the encodeBegin(), encodeChildren(), and encodeEnd() methods of a Renderer implementation. The methods return void, so the return value isn't particularly interesting :-). What is interesting, however, is that the methods themselves are going to assume that FacesContext.getResponseWriter() returns something usable, even in a unit test scenario. Shale's MockFacesContext, for example, gives you back a mock writer that can write out to a buffer, which you can later examine to determine whether the component created correct markup or not (based on the properties/attributes you set on the component). And, since this functionality is needed in all renderer tests, it makes sense to encapsulate it in a separate library that can be debugged once. How do you approach this sort of scenario with dynamic mocks?

>> Nice example, thanks Craig! With dynamic mock objects, you would set up each method call made on the ResponseWriter, which is functionally equivalent, although much more verbose.

> Far more so, to the point that I can hardly imagine anyone writing a test like that.
> Also, the order of invocations becomes partially important and partially irrelevant: which attributes are written on which elements is important; the order in which the attributes are written is irrelevant. A dynamic mock is never going to have enough semantics to capture this requirement.

Surprisingly, jMock does have sufficient semantics, but the ResponseWriter buffer approach makes it easier to specify the expected output.

> That's why the ADF Faces codebase has a handwritten mock for a response writer that very intentionally *is not* the same response writer we use at runtime: http://tinyurl.com/m4zth ... and there's no way that the default mock implementation would capture any of the semantics that this test captures.

Since you mention this example, the TestResponseWriter seems to include some implementation-specific behavior, such as optimized removal of empty span elements in the final markup. For isolated unit testing, we probably ought to unit test the Renderers to verify that they interact as designed with the mock ResponseWriter, and then separately unit test our ResponseWriter implementation to make sure it optimizes correctly. Your point is to simplify the effort involved in verifying the result, right? That's pretty useful, although this example doesn't appear to address my question about the behavior that might become incorrect.

> Simplifying the effort to write a test is crucial, not just pretty useful. Developers are habitually lazy at writing tests, and putting obstacles in their path ensures they won't.

Sure. Like I said above, buffering the Renderer output is a good idea that makes it easier to verify the output.

> For your question about behavior that might be incorrect, say I was mocking UIComponent behaviors for a piece of code that needs to invoke findComponent(). How do I mock findComponent()? Odds are, if I'm writing the test, I mock it to produce exactly the result I want, not necessarily implement the correct algorithm. Do I know that my test is right?

Thanks for the example, this is getting closer to what I'm trying to figure out. :-)

Suppose there are two codepaths in such a unit test: one that expects a non-null component from findComponent, and another that handles the null case. As a unit test writer, I don't need to be concerned about how findComponent is actually implemented, just that there are only two possible results affecting the codepath: null and non-null. If the unit tests are isolated, then there would be a separate unit test for UIComponentBase to verify the implementation of findComponent.

In general, I *think* the only behavior that is relevant to each unit test is the implementation of the method under test. All other participants in that implementation are mocked to control the codepath during execution, giving an opportunity to provide 100% codepath coverage over several unit tests.

tc,
-john.

--
http://apress.com/book/bookDisplay.html?bID=10044
Author: Pro JSF and Ajax: Building Rich Internet Components, Apress
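To make the buffered-writer idea from this thread concrete, here is a tiny self-contained stand-in. This is neither Shale's MockFacesContext nor the ADF TestResponseWriter linked above; the class and method names are invented for illustration, and endElement is simplified to assume the element has no nested content.

```java
// Minimal illustration of the "mock ResponseWriter that buffers markup"
// testing approach: a renderer's encode method returns void, so the test
// captures everything written to the writer and asserts on the markup.
public class BufferedWriterDemo {

    // Tiny subset of the ResponseWriter contract, recording into a buffer.
    static class RecordingResponseWriter {
        private final StringBuilder buf = new StringBuilder();

        void startElement(String name) { buf.append('<').append(name); }

        void writeAttribute(String name, String value) {
            buf.append(' ').append(name).append("=\"").append(value).append('"');
        }

        // Simplified: closes the open start tag and emits the end tag,
        // so it only works for elements with no children or body text.
        void endElement(String name) { buf.append("></").append(name).append('>'); }

        String getMarkup() { return buf.toString(); }
    }

    // A trivial stand-in for a Renderer's encodeBegin/encodeEnd: it returns
    // void, and only the writer observes its effect.
    static void encodeSpan(RecordingResponseWriter out, String styleClass) {
        out.startElement("span");
        out.writeAttribute("class", styleClass);
        out.endElement("span");
    }

    public static void main(String[] args) {
        RecordingResponseWriter out = new RecordingResponseWriter();
        encodeSpan(out, "af_inputText");
        System.out.println(out.getMarkup());
    }
}
```

A test then calls the encode method and compares getMarkup() against the expected string, which is exactly the "examine the buffer afterwards" pattern Craig describes; no per-call expectations need to be set up in advance, and attribute-ordering details can be normalized in one place if needed.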