Hi,
As Mario mentioned, we have to look at how matrix multiplication
with vectors works. The existing ` maVecMul ` takes a matrix and a
vector represented as a function of a single index, so it would be
easy to use words as-is if matrices were indexed starting with 0,
since words are themselves 0-indexed functions.
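As a sketch of that shape in Lean 3 (illustrative names, not the actual set.mm or mathlib definitions; import paths may vary with the mathlib version):

```lean
-- A matrix and a vector are both just functions on the 0-based
-- index type `fin n`; the product is a sum over the shared index.
import data.matrix.basic

def mul_vec_sketch {m n : ℕ} (A : fin m → fin n → ℤ)
  (v : fin n → ℤ) : fin m → ℤ :=
λ i, finset.univ.sum (λ j, A i j * v j)
```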
Regarding concatenation, there are interesting properties of block
matrices: the blocks behave very much like matrix entries
themselves. The big matrix can be seen as a concatenation of its
blocks, and Dijkstra's argument about index manipulation applies. In
that case the corresponding vectors also work by block concatenation.
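Dijkstra's half-open-interval point can be made concrete here: with 0-based indexing, an index into a concatenation of blocks of lengths m and n is either an index below m, or m plus an index below n, with no off-by-one corrections. A Lean sketch (mathlib provides an equivalence along these lines; the name below is illustrative):

```lean
-- An index into the concatenation [0, m + n) is either an index
-- into the first block [0, m) or an offset index into the second
-- block [m, m + n).
def append_index {m n : ℕ} : fin m ⊕ fin n → fin (m + n)
| (sum.inl ⟨i, hi⟩) := ⟨i, lt_of_lt_of_le hi (nat.le_add_right m n)⟩
| (sum.inr ⟨j, hj⟩) := ⟨m + j, nat.add_lt_add_left hj m⟩
```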
As for elements of a Euclidean vector space, Scott Fenton has defined
them with an index in ` ( 1 ... N ) ` (see ~ elee), and I have reused
that definition in ` EEhil ` (see ~ df-ehl).
BR,
_
Thierry
On 03/09/2020 08:59, Norman Megill wrote:
Is the issue here the ability to translate to Lean more easily?
On the flip side, there has been some interest in translation from
set.mm to Mathematica, which would be more difficult with 0-based
matrices.
Why is there a need for words in the development of matrices? Is any
of this already being done in set.mm? Is there literature that
develops matrices using words?
Norm
On Wednesday, September 2, 2020 at 6:08:26 PM UTC-4 di.... gmail.com
wrote:
In Coq and Lean, the structure of inductive types, specifically
the Peano natural numbers starting at 0, *strongly* prefers
0-based indexing. The difference is stark enough that it's not
really ever a discussion. You can do induction on nat, but you
can't do induction on positive nat (at least, the version that is
baked into the language doesn't give you this; you have to use a
theorem that amounts to an index conversion under the hood). ZFC
is freer in this regard, because any definition is as good as any
other from the point of view of the automation, but there is
still friction associated with conversions.
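For instance (a sketch; `nat.le_induction` is the mathlib conversion lemma I have in mind):

```lean
-- Induction on nat is primitive: the 0 case and the successor
-- case come directly from the recursor.
example (P : ℕ → Prop) (h0 : P 0)
  (hs : ∀ n, P n → P (n + 1)) : ∀ n, P n :=
λ n, nat.rec_on n h0 hs

-- There is no primitive recursor for "nat starting at 1"; to
-- induct from 1, one reaches for a conversion lemma such as
-- mathlib's nat.le_induction.
```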
My argument for 0-based matrices is at this point entirely one of
uniformity. Changing 300 theorems with a silly index change is not
an automatic job so it will be a lot of work, and the gain is
small (and since I'm on Dijkstra's side in this argument, the gain
is negative from my POV). Unlike Norm I don't think that rigid
adherence to the textbook formulations of things is necessarily a
good idea, as long as I think the author would agree that the
reformulation is essentially equivalent (the "mathematician's
equal"). It is easier to get this confirmation when you have
actual practicing mathematicians on staff, as it were; there are a
lot of mathematicians getting into lean these days and the 0-based
thing hasn't caused one bit of dissent that I have heard.
Here's the definition of a matrix:
https://leanprover-community.github.io/mathlib_docs/find/matrix/src
. It uses arbitrary finite sets, but further down the page you
will see some theorems like
https://leanprover-community.github.io/mathlib_docs/find/matrix.sub_left/src
that specialize to the type "fin n", which is lean's version of
our (0..^n). It is defined as {i : nat | i < n}, where nat is the
inductive type of nonnegative integers. The type (1...n) would be
the more cumbersome {i : nat | 0 < i /\ i <= n}, which has an
extra conjunct that would have to be dealt with whenever you
destructure the type (and working directly from the stated
definition is much more common in lean than in metamath). This is the
sort of extra friction I am talking about - it's not just changing
numbers in a few places.
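Concretely (a sketch; `fin0` and `fin1` are hypothetical names, not mathlib definitions):

```lean
-- lean's fin n is essentially {i : ℕ // i < n}: destructuring
-- it yields a single hypothesis i < n.
def fin0 (n : ℕ) := {i : ℕ // i < n}

-- A 1-based analogue carries the extra conjunct, which must be
-- split apart every time the index is destructured.
def fin1 (n : ℕ) := {i : ℕ // 0 < i ∧ i ≤ n}

-- With fin0 the bound is one hypothesis; with fin1 you project
-- out the piece you need each time.
example {n : ℕ} (x : fin1 n) : x.val ≤ n := x.property.2
```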
Mario
To view this discussion on the web visit
https://groups.google.com/d/msgid/metamath/8adbe563-5fc4-a238-072d-ed20bb42354f%40gmx.net.