On 10 June 2014 14:17, Anders Logg <[email protected]> wrote:

> On Tue, Jun 10, 2014 at 11:01:49AM +0200, Kristian Ølgaard wrote:
> >
> >
> >
> > On 10 June 2014 10:44, Anders Logg <[email protected]> wrote:
> >
> >     On Tue, Jun 10, 2014 at 10:09:44AM +0200, Martin Sandve Alnæs wrote:
> >     > To clarify, Kristian talks about the jump(f,n) version:
> >     >
> >     > jump(scalar,n) is vector-valued
> >     > jump(vector,n) is scalar
> >     >
> >     > while I mixed up with the jump(f) version:
> >     >
> >     > jump(scalar) is scalar
> >     > jump(vector) is vector-valued
> >     >
> >     > The jump(f) version gives the difference of the full value, while
> >     > the jump(f,n) version gives the difference in normal component.
> >
> >     I looked through the Unified DG paper that Kristian pointed to but
> >     couldn't see any examples of vector-valued equations.
> >
> > Section 3.1, the text below Eqn. (3.2): definitions of avg and jump for
> > the functions 'q' and '\phi'.
>
> Yes I noticed the definition of the jump, but it is not applied to the
> case where u is a vector (so that grad(u) is a matrix).
>

True.

In Section 3.2.7 of the UFL paper, the jump that involves the facet normal
is referred to as a convenience function which follows the commonly used
definition, where jump(f, n) of a vector- or tensor-valued function is
given as dot(f('+'), n('+')) + dot(f('-'), n('-')).
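
For illustration, here is a minimal sketch of that convenience definition
written out with explicit restrictions. The helper name my_jump and the
element choice are only for this example, and I'm assuming the current UFL
API where FacetNormal takes a cell:

    from ufl import VectorElement, Coefficient, FacetNormal, dot, triangle

    # Hypothetical stand-alone version of jump(f, n) for a vector-valued f:
    # restrict f and n to each side of the facet and sum the normal
    # components, giving a scalar-valued expression.
    def my_jump(f, n):
        return dot(f('+'), n('+')) + dot(f('-'), n('-'))

    element = VectorElement("Lagrange", triangle, 1)
    f = Coefficient(element)
    n = FacetNormal(triangle)
    jump_fn = my_jump(f, n)  # scalar-valued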

The same section also states that custom definitions/operators can be
implemented by the user via the restriction operators ('+') and ('-').
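
As a concrete (unofficial) example, a tensor-valued jump along the lines
Anders suggested below can be written directly in a form file using only
restrictions; tensor_jump here is just a user-level helper for
illustration, not an existing UFL operator:

    from ufl import outer

    # User-defined tensor-valued jump of a vector field v: the outer
    # product of each restriction with the corresponding facet normal,
    # summed over the two sides.
    def tensor_jump(v, n):
        return outer(v('+'), n('+')) + outer(v('-'), n('-'))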

If we just look at the jump() and avg() operators as convenience operators,
there is nothing wrong with also introducing tensor_jump() as a new
convenience operator. This is also much simpler and less error-prone than
hacking all the definitions together in one jump() operator (in accordance
with Jed's post).
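
To show how such an operator would pair naturally with avg() in a typical
interior-penalty term, here is a rough sketch (the form is only an example
of the pairing, not a proposed change to UFL; same assumed API as above):

    from ufl import (VectorElement, TrialFunction, TestFunction,
                     FacetNormal, avg, grad, inner, outer, dS, triangle)

    def tensor_jump(v, n):
        # Same user-level helper as sketched above.
        return outer(v('+'), n('+')) + outer(v('-'), n('-'))

    element = VectorElement("Lagrange", triangle, 1)
    u = TrialFunction(element)
    v = TestFunction(element)
    n = FacetNormal(triangle)

    # Consistency-type term of an interior penalty form for vector-valued
    # u: avg(grad(u)) is matrix-valued and so is tensor_jump(v, n), so the
    # two can be contracted with inner().
    a = -inner(avg(grad(u)), tensor_jump(v, n))*dS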

Kristian


>
> --
> Anders
>
>
> >     I also don't see the point in defining jump(v, n) for vector-valued u
> >     as in the paper. If the result of jump(v, n) is a scalar quantity,
> >     there is no way to combine the normal n with the thing it should
> >     naturally be paired with, namely the flux (or grad(u)). It only works
> >     out in the special case of scalar elements.
> >
> >     But perhaps adding tensor_jump() is the best solution since that
> >     paper is the standard reference for DG methods.
> >
> >     (I'll ask Douglas Arnold about this, if he has not seen this
> >     already. Maybe I am missing something obvious.)
> >
> >     > However, if I didn't mess up some signs I think you can write your
> >     > term like
> >     >
> >     > 0.5*dot(jump(Dn(u)), jump(v))
> >     >
> >     > which seems much more intuitive to me (although I'm not that into
> >     > DG scheme terminology).
> >
> >     Yes, that looks correct but I don't think it is intuitive since it
> >     does not involve the avg() operator which is naturally paired with
> >     the jump() operator in most DG formulations.
> >
> >
> >
> >     > On 10 June 2014 09:05, Kristian Ølgaard <[email protected]> wrote:
> >     >
> >     >
> >     >     On 9 June 2014 20:58, Martin Sandve Alnæs <[email protected]> wrote:
> >     >
> >     >
> >     >         I object to changing definitions on the grounds that it
> >     >         would work out nicely for one particular equation. The
> >     >         current definition yields a scalar jump for both scalar and
> >     >         vector-valued quantities, and the definition was chosen for
> >     >         a reason. I'm pretty sure it's in use. Adding a tensor_jump,
> >     >         on the other hand, wouldn't break any older programs.
> >     >
> >     >         Maybe Kristian has an opinion here, cc to get his attention.
> >     >
> >     >
> >     >     I follow the list, but thanks anyway.
> >     >
> >     >     The current implementation of the jump() operator follows the
> >     >     definition often used in papers (e.g. "Unified Analysis of
> >     >     Discontinuous Galerkin Methods for Elliptic Problems", Arnold
> >     >     et al., http://epubs.siam.org/doi/abs/10.1137/S0036142901384162),
> >     >
> >     >     where the jump of a scalar-valued function results in a vector,
> >     >     and the jump of a vector-valued function results in a scalar.
> >     >
> >     >     Adding the tensor_jump() function seems like a good solution in
> >     >     this case as I don't see a simple way of overloading the current
> >     >     jump() function to return the tensor jump.
> >     >
> >     >     Kristian
> >     >
> >     >
> >     >
> >     >         Martin
> >     >
> >     >         On 9 June 2014 20:16, "Anders Logg" <[email protected]> wrote:
> >     >
> >     >
> >     >             On Mon, Jun 09, 2014 at 11:30:09AM +0200, Jan Blechta wrote:
> >     >             > On Mon, 9 Jun 2014 11:10:12 +0200
> >     >             > Anders Logg <[email protected]> wrote:
> >     >             >
> >     >             > > For vector elements, the jump() operator in UFL is
> >     >             > > defined as follows:
> >     >             > >
> >     >             > >   dot(v('+'), n('+')) + dot(v('-'), n('-'))
> >     >             > >
> >     >             > > I'd like to argue that it should instead be
> >     >             > > implemented like so:
> >     >             > >
> >     >             > >   outer(v('+'), n('+')) + outer(v('-'), n('-'))
> >     >             >
> >     >             > This inconsistency has already been encountered by
> >     >             > users:
> >     >             > http://fenicsproject.org/qa/359/discontinuous-galerkin-jump-operators
> >     >
> >     >             Interesting! I hadn't noticed.
> >     >
> >     >             Are there any objections to changing this definition in UFL?
> >     >
> >     >
> >     >
> >     >
> >     >
> >
> >
>
