On Tuesday, March 4, 2014 7:05:02 AM UTC-5, Toivo Henningsson wrote:

> Because of multiple dispatch, we go to exceptional lengths in Julia to
> make sure to only overload the same operation on different types, not to
> create functions that do different conceptual operations based on the type.
Right, we want the operation to depend only on the type, not on the value. So the operation should be the same for all AbstractMatrix types, even when one of the dimensions is 1: the AbstractMatrix norm should not switch to the AbstractVector norm based on the value of the matrix object (e.g. its dimensions).

> So I think that the heart of the matter is to settle whether the vector
> norm and matrix norm are the same operation or not. (As long as we are
> talking about row vectors, I still think that they are, right?)

No, the induced norm of a 1xN matrix is not always the same as the corresponding norm of an N-component vector, in particular in the case of the L1 induced norm. (The induced norm has a very specific meaning in linear algebra: the L1 induced matrix norm of A is the supremum of |Ax|/|x| over all nonzero x, where |...| is the L1 vector norm. If A is a row vector y', this yields the infinity norm of y, not the L1 norm of y.)

> Perhaps it would be enough to leave norm as it is and introduce
> vecnorm(x::AbstractVector, args...) = norm(x, args...)

I think you would want to do the opposite. Define

    norm(x::AbstractVector, p=2) = vecnorm(x, p)

where you define vecnorm(itr, p) for any iterable type.
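The induced-norm claim above is easy to check numerically. Here is a small sketch (in Python for illustration, since the verifiable math is language-independent; the names `vec_norm` and `induced_l1_norm` are just for this example): the induced L1 norm of a matrix equals its maximum absolute column sum, and applying that formula to a 1xN row vector gives the infinity norm of the underlying vector, not its L1 norm.

```python
def vec_norm(v, p):
    """p-norm of a plain list of numbers; p = float('inf') gives the max norm."""
    if p == float('inf'):
        return max(abs(x) for x in v)
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def induced_l1_norm(A):
    """Induced (operator) L1 norm of a matrix given as a list of rows:
    the maximum absolute column sum, which equals sup |Ax|_1 / |x|_1
    over nonzero x."""
    ncols = len(A[0])
    return max(sum(abs(row[j]) for row in A) for j in range(ncols))

y = [3.0, -7.0, 2.0]
row_vector = [y]  # y' as a 1x3 matrix (one row)

# The induced L1 norm of the row vector is the infinity norm of y (7.0),
# not the L1 vector norm of y (12.0).
print(induced_l1_norm(row_vector))   # 7.0
print(vec_norm(y, float('inf')))     # 7.0
print(vec_norm(y, 1))                # 12.0
```

So treating a 1xN AbstractMatrix as "the same operation" as an N-vector would silently change which norm you get.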