I should point out that for vectors in the space spanned by the columns
of U_k (the column space of the rank-k approximation A_k), U_k U_k' is an
identity mapping. For any vector x orthogonal to that space, U_k U_k' x = 0.

You can view every vector u as the sum of a component u_A in that span
and an orthogonal component u_/A:

     u = u_A + u_/A

If you shove u through U_k U_k', you get this:

     U_k U_k' u = U_k U_k' (u_A + u_/A) = U_k U_k' (u_A) + 0 = u_A

This is another way of showing that U_k U_k' projects a vector onto that span.
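
Here is a minimal NumPy sketch of that decomposition (the shape of A, the
rank k, and the test vector are all made up, purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 4))        # n x m, made-up data
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    U_k = U[:, :k]                         # n x k, top-k left singular vectors

    u = rng.standard_normal(6)             # arbitrary test vector
    u_A = U_k @ (U_k.T @ u)                # component in the span
    u_perp = u - u_A                       # orthogonal component

    # Identity on the span component, zero on the orthogonal one:
    assert np.allclose(U_k @ U_k.T @ u_A, u_A)
    assert np.allclose(U_k @ U_k.T @ u_perp, 0)
    # So shoving u through U_k U_k' returns exactly u_A:
    assert np.allclose(U_k @ U_k.T @ u, u_A)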

On Sun, Sep 16, 2012 at 12:55 PM, Ted Dunning <ted.dunn...@gmail.com> wrote:

> U_k' U_k = I
>
> U_k U_k' != I
>
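In shapes: with U_k of size n x k, U_k' U_k is the k x k identity, while
U_k U_k' is an n x n rank-k projection, which for k < n cannot equal the
identity. A quick NumPy check with made-up sizes:

    import numpy as np

    # Illustrative sizes only: n = 6, m = 4, k = 2.
    U, _, _ = np.linalg.svd(np.random.default_rng(1).standard_normal((6, 4)),
                            full_matrices=False)
    U_k = U[:, :2]
    print(np.allclose(U_k.T @ U_k, np.eye(2)))   # True:  k x k identity
    print(np.allclose(U_k @ U_k.T, np.eye(6)))   # False: rank-2 projection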
>
> On Sun, Sep 16, 2012 at 12:54 PM, Sean Owen <sro...@gmail.com> wrote:
>
>> I have a feeling this is a dumb question. But in...
>>
>>  u_k = U_k U_k' u
>>
>> U is orthonormal, so U_k is orthonormal. So U_k U_k' is the identity.
>> So what does this actually say?
>>
>> It's surely on the right track of something, since I find the same
>> expression in that original SVD paper, "Incremental Singular Value
>> Decomposition Algorithms for Highly Scalable Recommender Systems". But
>> it has some serious typos in its notation. (Try to figure out Figure
>> 1.) And it proceeds in its analysis by basically saying that the
>> projection is U_k' times the new vector, so, I never understood this
>> expression.
>>
>> On Sun, Sep 16, 2012 at 7:13 PM, Ted Dunning <ted.dunn...@gmail.com>
>> wrote:
>> > A is in there implicitly.
>> >
>> > U_k provides a basis of the column space and V_k provides a basis of
>> > the row space of A.  A itself has a representation in either of these
>> > depending on whether you think of it as columns or rows.
>> >
>> > The original question (possibly misphrased and not in so many words)
>> > asked for a way to project any vector into the space spanned by A.
>> > What space this is depends on whether we have a new column vector
>> > (with n rows x 1 column) or a new row vector (with 1 row x m columns).
>> > In either case, the way to compute this is to project the vector onto
>> > the basis of interest (U_k for column vectors, V_k for row vectors)
>> > and then reconstruct that projection in the original space.
>> >
>> > This is not usually a practical operation when we are working with
>> > sparse vectors because the projected vector is not usually sparse.
>> > Thus it is usually better to stop before projecting back into the
>> > original space.
>> >
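To make the sparsity point concrete: the k coordinates U_k' u are all you
need to work in the reduced space, while the back-projection U_k U_k' u
is a dense n-vector. A minimal NumPy sketch (sizes and nonzero pattern
are made up):

    import numpy as np

    rng = np.random.default_rng(2)
    n, m, k = 1000, 50, 5                  # made-up sizes
    U, _, _ = np.linalg.svd(rng.standard_normal((n, m)), full_matrices=False)
    U_k = U[:, :k]

    u = np.zeros(n)
    u[[3, 17, 42]] = 1.0                   # sparse input: 3 nonzeros

    coords = U_k.T @ u                     # k numbers: reduced representation
    back = U_k @ coords                    # U_k U_k' u: dense n-vector
    print(np.count_nonzero(u), coords.size, np.count_nonzero(back))
    # -> 3 nonzeros in, 5 reduced coordinates, ~1000 nonzeros out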
>> > On Sun, Sep 16, 2012 at 10:26 AM, Sean Owen <sro...@gmail.com> wrote:
>> >
>> >> I don't quite get these formulations -- shouldn't A_k be in there
>> >> somewhere? You have a new row of that (well, some piece of some new
>> >> row of A_k), and need a new row of U_k. Or: surely the expression
>> >> depends on V?
>> >>
>> >> On Sun, Sep 16, 2012 at 5:33 PM, Ted Dunning <ted.dunn...@gmail.com>
>> >> wrote:
>> >> > And if you want the reduced rank representation of A, you have it
>> >> > already with
>> >> >
>> >> >     A_k = U_k S_k V_k'
>> >> >
>> >> > Assume that A is n x m in size.  This means that U_k is n x k and
>> >> > V_k is m x k.
>> >> >
>> >> > The rank reduced projection of an n x 1 column vector is
>> >> >
>> >> >     u_k = U_k U_k' u
>> >> >
>> >> > Beware that u_k is probably not sparse even if u is sparse.
>> >> >
>> >> > Similarly, the rank reduced projection of a 1 x m row vector is
>> >> >
>> >> >     v_k = v V_k V_k'
>> >> >
>> >> > A similar sparsity warning applies to v_k.  This is why it is usually
>> >> > preferable to just work in the reduced space directly.
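The row-vector case is the mirror image. A minimal NumPy check of
v_k = v V_k V_k' (made-up sizes; note the reduced coordinates v V_k are
unchanged by projecting back):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((6, 4))        # n x m, made-up data
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    V_k = Vt[:2].T                         # m x k, top-k right singular vectors

    v = rng.standard_normal((1, 4))        # 1 x m row vector
    v_k = v @ V_k @ V_k.T                  # 1 x m back-projection
    assert np.allclose(v_k @ V_k, v @ V_k) # same 1 x k reduced coordinates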
>> >>
>>
>
>
