To clarify, Kristian talks about the jump(f,n) version:

jump(scalar,n) is vector-valued
jump(vector,n) is scalar

while I mixed it up with the jump(f) version:

jump(scalar) is scalar
jump(vector) is vector-valued

The jump(f) version gives the difference of the full value, while
the jump(f,n) version gives the difference in normal component.
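
To make the two conventions concrete, here is a small NumPy sketch (not UFL itself) evaluating both jump definitions at a single interior facet in 2D; the trace values and the normal are made-up illustrative numbers:

```python
# Sketch of the two jump conventions from this thread, using NumPy.
import numpy as np

n_plus = np.array([1.0, 0.0])   # facet normal as seen from the '+' side
n_minus = -n_plus               # the '-' side sees the opposite normal

def jump(f_plus, f_minus):
    """jump(f): difference of the full value; result has the same shape as f."""
    return f_plus - f_minus

def jump_n(f_plus, f_minus):
    """jump(f, n): difference in normal component."""
    if np.isscalar(f_plus):
        # scalar argument -> vector-valued result
        return f_plus * n_plus + f_minus * n_minus
    # vector argument -> scalar result (dot with the normal on each side)
    return f_plus @ n_plus + f_minus @ n_minus

s_p, s_m = 3.0, 1.0                                     # scalar traces
v_p, v_m = np.array([1.0, 2.0]), np.array([0.5, 2.0])   # vector traces

print(np.shape(jump(s_p, s_m)))    # () : scalar
print(np.shape(jump(v_p, v_m)))    # (2,) : vector
print(np.shape(jump_n(s_p, s_m)))  # (2,) : vector
print(np.shape(jump_n(v_p, v_m)))  # () : scalar
```

The shapes printed at the end match the table above: jump(f) preserves the shape of f, while jump(f, n) swaps scalar and vector.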

However, if I didn't mess up the signs, I think you can write your term as

0.5*dot(jump(Dn(u)), jump(v))

which seems much more intuitive to me (although I'm not that into DG scheme
terminology).

Martin



On 10 June 2014 09:05, Kristian Ølgaard <[email protected]> wrote:

>
> On 9 June 2014 20:58, Martin Sandve Alnæs <[email protected]> wrote:
>
>> I object to changing definitions just because it would work out nicely
>> for one particular equation. The current definition yields a scalar jump
>> for both scalar and vector-valued quantities, and the definition was
>> chosen for a reason. I'm pretty sure it's in use. Adding a tensor_jump,
>> on the other hand, wouldn't break any older programs.
>>
>> Maybe Kristian has an opinion here, cc to get his attention.
>>
>
> I follow the list, but thanks anyway.
>
> The current implementation of the jump() operator follows the definition
> often used in papers (e.g. "Unified Analysis of Discontinuous Galerkin
> Methods for Elliptic Problems", Arnold et al.,
> http://epubs.siam.org/doi/abs/10.1137/S0036142901384162),
>
> where the jump of a scalar-valued function results in a vector, and the
> jump of a vector-valued function results in a scalar.
>
> Adding the tensor_jump() function seems like a good solution in this case
> as I don't see a simple way of overloading the current jump() function to
> return the tensor jump.
>
> Kristian
>
> Martin
>> On 9 June 2014 20:16, "Anders Logg" <[email protected]> wrote:
>>
>> On Mon, Jun 09, 2014 at 11:30:09AM +0200, Jan Blechta wrote:
>>> > On Mon, 9 Jun 2014 11:10:12 +0200
>>> > Anders Logg <[email protected]> wrote:
>>> >
>>> > > For vector elements, the jump() operator in UFL is defined as
>>> follows:
>>> > >
>>> > >   dot(v('+'), n('+')) + dot(v('-'), n('-'))
>>> > >
>>> > > I'd like to argue that it should instead be implemented like so:
>>> > >
>>> > >   outer(v('+'), n('+')) + outer(v('-'), n('-'))
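
The two candidate definitions quoted above can be contrasted with a small NumPy sketch (not UFL itself); the vector traces and the normal are made-up illustrative numbers:

```python
# Sketch: the current dot-based jump vs. the proposed outer-product
# (tensor-valued) jump for a vector field at one interior facet in 2D.
import numpy as np

n_plus = np.array([1.0, 0.0])
n_minus = -n_plus
v_plus = np.array([1.0, 2.0])
v_minus = np.array([0.5, 2.0])

# Current definition: dot(v('+'), n('+')) + dot(v('-'), n('-')) -> scalar
scalar_jump = v_plus @ n_plus + v_minus @ n_minus

# Proposed definition: outer(v('+'), n('+')) + outer(v('-'), n('-')) -> 2x2 tensor
tensor_jump = np.outer(v_plus, n_plus) + np.outer(v_minus, n_minus)

print(scalar_jump)            # scalar normal jump
print(tensor_jump.shape)      # (2, 2)
print(np.trace(tensor_jump))  # equals the scalar jump
```

Note that no information is lost by the tensor version: since tr(v ⊗ n) = v · n, the current scalar jump is recovered as the trace of the proposed tensor jump.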
>>> >
>>> > This inconsistency has been already encountered by users
>>> > http://fenicsproject.org/qa/359/discontinuous-galerkin-jump-operators
>>>
>>> Interesting! I hadn't noticed.
>>>
>>> Are there any objections to changing this definition in UFL?
>>>
>>> --
>>> Anders
>>> _______________________________________________
>>> fenics mailing list
>>> [email protected]
>>> http://fenicsproject.org/mailman/listinfo/fenics
>>>
>>
>