I think the preference for row ordering has more to do with the way people
view tabular data.  For many applications and datasets there are many more
rows than columns (think of database tables or CSV files), and it's slightly
unnatural to read those into a transposed data structure for analysis.
Putting a time series into a TxN matrix is natural (that's likely how the
data is stored), but inefficient if the data is accessed sequentially through
time.  Without a "TransposeView" or similar, we're forced to choose between
poor performance and an unintuitive representation of the data.
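For example, one way to get both in recent Julia is a lazy permuted view over
a series-major array (a minimal sketch, not from the original discussion; the
layout and names are only illustrative):

    T, N = 10_000, 50
    raw = rand(N, T)                    # N x T: column t holds every series at time t
    X = PermutedDimsArray(raw, (2, 1))  # lazy T x N view: X[t, n] == raw[n, t]

    # Indexing reads like the natural T x N table, but each time step still
    # touches one contiguous column of `raw`, so stepping through time stays
    # cache-friendly in Julia's column-major layout.
    function step_means(X)
        T, N = size(X)
        return [sum(@view X[t, :]) / N for t in 1:T]
    end

    step_means(X)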

I can appreciate that matrix operations could be more natural with column
ordering, but in my experience practical applications favor row ordering.

On Fri, Oct 30, 2015 at 11:44 AM, John Gibson <johnfgib...@gmail.com> wrote:

> Agreed with Glen H here. "Math being column major" comes from the fact that
> the range of a matrix is the span of its columns, and consequently most
> linear algebra algorithms are naturally expressed as operations on the
> columns of the matrix, for example QR decomposition via Gram-Schmidt or
> Householder, LU without pivoting, and all Krylov subspace methods. An
> exception would be LU with full or partial row pivoting.
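> For instance, classical Gram-Schmidt QR is written column by column (a
> minimal illustrative Julia sketch, not code from the original message; it
> assumes a floating-point matrix):
>
>     using LinearAlgebra
>
>     function gram_schmidt_qr(A)
>         m, n = size(A)
>         Q = zeros(eltype(A), m, n)
>         R = zeros(eltype(A), n, n)
>         for j in 1:n
>             v = A[:, j]                     # take the j-th column of A
>             for k in 1:j-1
>                 R[k, j] = dot(Q[:, k], A[:, j])
>                 v -= R[k, j] * Q[:, k]      # subtract projections onto earlier q's
>             end
>             R[j, j] = norm(v)
>             Q[:, j] = v / R[j, j]           # every operation touches whole columns
>         end
>         return Q, R
>     end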
>
> I think the preference for row-ordering comes from the fact that the textbook
> presentation of matrix-vector multiplication is given as computing y(i) =
> sum_j A(i,j) x(j) for each value of i. Ordering the operations that way would
> make row-ordering of A cache-friendly. But if instead you understand
> mat-vec mult as forming a linear combination of the columns of A, and you
> do the computation via y = sum_j A(:,j) x(j), column-ordering is
> cache-friendly. And it's the latter version that generalizes into all the
> important linear algebra algorithms.
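> In code the two readings differ only in loop order (a small illustrative
> Julia sketch, not from the original message):
>
>     # Row-oriented: y[i] = sum_j A[i,j]*x[j]; the inner loop walks across a
>     # row of A, which is cache-friendly only for row-major storage.
>     function matvec_rows(A, x)
>         m, n = size(A)
>         y = zeros(m)
>         for i in 1:m, j in 1:n
>             y[i] += A[i, j] * x[j]
>         end
>         return y
>     end
>
>     # Column-oriented: y = sum_j A[:,j]*x[j]; the inner loop walks down a
>     # column of A, which is cache-friendly for column-major storage as in
>     # Julia or Fortran.
>     function matvec_cols(A, x)
>         m, n = size(A)
>         y = zeros(m)
>         for j in 1:n, i in 1:m
>             y[i] += A[i, j] * x[j]
>         end
>         return y
>     end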
>
> John
>
>
> On Friday, October 30, 2015 at 10:46:36 AM UTC-4, Glen H wrote:
>>
>>
>> On Thursday, October 29, 2015 at 1:24:23 PM UTC-4, Stefan Karpinski wrote:
>>>
>>> Yes, this is an unfortunate consequence of mathematics being
>>> column-major – oh how I wish it weren't so. The storage order is actually
>>> largely irrelevant; the whole issue stems from the fact that the element in
>>> the ith row and the jth column of a matrix is indexed as A[i,j]. If it were
>>> A[j,i] then the two would agree (and many things would be simpler). I like
>>> your explanation of "an index closer to the expression to be evaluated
>>> runs faster" – that's a really good way to remember/explain it.
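>>> Concretely (a small illustrative Julia sketch, not from the original
>>> message):
>>>
>>>     # In `for j in axes(A, 2), i in axes(A, 1)`, the index written closer to
>>>     # the loop body (`i`) is the inner, fastest-running one; since `i` is the
>>>     # first index, this order walks down columns and is cache-friendly for
>>>     # Julia's column-major arrays.
>>>     function colmajor_sum(A)
>>>         s = zero(eltype(A))
>>>         for j in axes(A, 2), i in axes(A, 1)
>>>             s += A[i, j]
>>>         end
>>>         return s
>>>     end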
>>>
>>>
>> To help understand: is "math being column major" referring to the fact that
>> matrix operations in math textbooks are done by columns?  For example:
>>
>>
>> http://eli.thegreenplace.net/2015/visualizing-matrix-multiplication-as-a-linear-combination/
>>
>> While the order is a convention (i.e. it doesn't have to be that way), this
>> is how people are taught.
>>
>> Glen
>>
>
