This range-finding function (the same as that found in auto-focus) is an amazing 
property, but it points towards a key problem of contemporary screens (and most 
associated hardware): they cannot show vector graphics except by translating them to 
bitmap. As I understand it, the Kinect is a bitmap device, so in that sense 
tailored to the dominant form of display (even more dominant now that, as has been 
pointed out, the cathode ray tube is on its way out). The demise and 
all-but-vanishing of the vector screen is a thread to pick up on later.
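
To make the vector-to-bitmap point concrete, here is a minimal sketch (plain Python, 
with a grid size and endpoints chosen purely for illustration) of what that 
translation involves: a line held as two endpoints is pure vector data, and a raster 
device can only show it once it has been sampled into a finite grid of pixels, at 
which point the endpoints themselves are gone.

    # Minimal illustration: rasterising a vector line segment onto a bitmap grid.
    # The line exists as two endpoints (vector data); the "screen" only ever sees
    # the sampled pixel grid. Grid size and endpoints are arbitrary for the example.

    WIDTH, HEIGHT = 20, 10

    def rasterise_line(x0, y0, x1, y1, width=WIDTH, height=HEIGHT):
        """Sample the segment (x0, y0)-(x1, y1) into a width x height bitmap."""
        bitmap = [[' '] * width for _ in range(height)]
        steps = max(abs(x1 - x0), abs(y1 - y0), 1) * 2  # oversample a little
        for i in range(steps + 1):
            t = i / steps
            x = round(x0 + t * (x1 - x0))
            y = round(y0 + t * (y1 - y0))
            if 0 <= x < width and 0 <= y < height:
                bitmap[y][x] = '#'
        return bitmap

    if __name__ == '__main__':
        for row in rasterise_line(1, 1, 18, 8):
            print(''.join(row))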

S


From: Simon Biggs <si...@littlepig.org.uk>
Reply-To: soft_skinned_space <empyre@lists.cofa.unsw.edu.au>
Date: Friday, 6 July 2012 08:59
To: soft_skinned_space <empyre@lists.cofa.unsw.edu.au>
Subject: [-empyre-] screens

The x-ray isn't a screen per se - although it is often displayed upon one, 
especially as it is now (like so many imaging systems) entirely digital and no 
longer employs film.

I've been working recently with the Kinect to develop some new interactive 
systems. It is a curious device, especially as to how it visualises the world. 
A key part of its functionality is a high-resolution infra-red projection 
system. It projects high-powered infra-red light onto the scene in front of it 
and then scans the returning data as a variable-resolution dot matrix, in the 
process creating a 3D point cloud model of the scene. By varying the 
resolution dynamically you can produce some quite amazing visual effects that 
are never visible in normal operation of the device. It is only when you 
visualise the data cloud that you see the visual field. Anyway, when doing this the 
bodies of people, objects in the visual field, walls and surfaces can all be
seen to function as screens. The dynamically variable projected point cloud 
appears to crawl over everything and has the visual effect of many hundreds of 
thousands of small particles interacting with things, and each other, as they 
collect vital data. Of course this isn't what is happening - the particles are 
simply the visual representation of the sampling granularity of the system, but 
the effect is no less compelling for knowing that. In a sense the system is a 
particle-driven vision system, a bit like the pre-Socratic idea that our eyes
shoot rays out into the world and it is the echo of these rays, back onto our 
retinas, that allows us to see things (in some ways a far more interesting 
model of vision than what we conventionally apprehend). The Kinect literally 
works like that, turning everything into a screen - not a screen for display 
but a screen of forensic analysis.
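
For anyone who wants to see what that sampling looks like in code, below is a rough 
sketch (Python with numpy; the depth frame, camera intrinsics and function name are 
illustrative stand-ins, not actual Kinect SDK calls). It back-projects a depth image 
into a 3D point cloud, with a single stride parameter standing in for the dynamically 
varied sampling resolution: a large stride gives the sparse, crawling particle field, 
a small one a dense model of the scene.

    import numpy as np

    # Nominal values standing in for a Kinect-style depth camera; real intrinsics
    # would come from the device/SDK and differ per unit (these are assumptions).
    FX, FY = 575.8, 575.8   # focal lengths in pixels (assumed)
    CX, CY = 320.0, 240.0   # principal point for a 640x480 depth frame (assumed)

    def depth_to_point_cloud(depth_mm, stride=4):
        """Back-project a depth image (in millimetres) into an (N, 3) point cloud.

        `stride` subsamples the pixel grid: a small stride gives a dense cloud,
        a large one a sparse 'particle' field - the variable sampling granularity.
        """
        h, w = depth_mm.shape
        vs, us = np.mgrid[0:h:stride, 0:w:stride]          # sampled pixel coords
        z = depth_mm[vs, us].astype(np.float64) / 1000.0   # depth in metres
        valid = z > 0                                      # zero means no return
        x = (us - CX) * z / FX
        y = (vs - CY) * z / FY
        return np.stack([x[valid], y[valid], z[valid]], axis=-1)

    # Example with a synthetic frame (a flat wall 2 m away), varying the stride:
    if __name__ == '__main__':
        frame = np.full((480, 640), 2000, dtype=np.uint16)
        for stride in (16, 8, 2):
            cloud = depth_to_point_cloud(frame, stride)
            print(stride, cloud.shape)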

best

Simon


On 5 Jul 2012, at 23:06, Johannes Birringer wrote:

dear all

interesting that a historical look back [Christian's fascinating reference to 
O.Winter's "Ain't It Lifelike!", and the comments on the cinematograph and 
X-ray machines, and  the "borderlands of cinema and other screens"] can allow 
us to reflect, in several ways, on the phenomena of screens and screening and 
reception (the "acts of viewership"), and whether or not the "spectacle" 
succeeds (in doing what?)... I immediately wanted to ask Christian what 
exactly would be the advantage or success of the X-ray -->

The X-ray offered the far more humane element,
the opportunity to break down topics and people into component parts, and 
presumably that made it a potential heir to the higher arts. X-ray as 
entertainment, cinema as medical marvel....[Christian]>>

and how would we understand the function of the x-ray [or medical 
visualization, or other kinds of "visualization" or simulation or 
experimentation, if you think for a moment of the Higgs boson discovery at 
CERN's Large Hadron Collider yesterday...] in the context of the debate that 
Martin proposes: I feel that Martin wants us to look at the new "screens" and 
a paradigm shift, and yet i believe this paradigm shift can only be addressed 
if we sometimes go back to screens and the art of projection (of light) in the 
not-so-long history of photography and cinema; and furthermore, when I read 
Charlie's fine posting on screens as partitions (and the Bartleby story), I 
couldn't help thinking of "sound" and all the lovely stories i have read from 
sound artists/theorists on Pythagoras and acousmatics, the way in which 
Pythagoras hid from view when he was teaching so that his voice would reach the 
listeners (not the "viewership", that is) unencumbered.

now, what paradigm then?

Martin writes:


Given the growth of mobile and pervasive media forms, all dependent to some 
degree on screens, this changed condition really forms a new paradigm, 
variously described by researchers who now tend to regard the screen as a 
window into an extended "Hertzian" space, 'hybrid space', 'augmented 
reality', 'mixed reality', 'pervasive space'; or from the user behaviour end as 
forming 'trajectories' (Benford), and even as 'sculpture' (Calderwood).

The primary role of the screen, as Simon points out, is now one that mediates 
or remediates the world in a growing number of ways (although the internet of 
things and NFC promise to make direct - and screenless - interaction more 
prevalent), not as another space like cinema, where fantasy is experienced 
through a locked and dreamlike suspension, but as a dynamic and changing 
condition of experience, where the user is interactive or pro-active in 
creating their own personalised experience.

I am interested in the next week in examining this changing condition of 
reception as the key to the phenomenon [...]



This raises some questions. Why is mobile communication dependent on "screens" 
(what screens, one must ask, once again, like some of you already did)? Do we 
really bother to think of our cell phones or iPads or whatever as "screen 
media" or "remediation instruments"? i don't think so. In my mind they carry 
over, or transport, information, such as messages, and sometimes pictures, or 
sound, and i listen to them. The sender is hidden, like Pythagoras, or if i am 
on Skype, i look at whoever is talking and communicating, and i generally 
actually don't view as much as i am writing (on the chat window at the same time as 
talking, to add/expand references and allow me to make notes, as i would with 
my notebook on a train ride or when I walk through an art exhibit). I receive 
while making notes, and later exchange them or think about the experience; i re-play 
the experiences (in my mind, my memory screens or whatever they are, the films 
running inside me, not x-rays), like I did after watching the Euro championship 
matches. I replayed these matches. In those instances I am not quite the kind 
of interactive, proactive person Martin envisions, but in other instances, 
given a certain amount of interactional interfaces that I encounter in my 
daily life, i might "personalize" my experience or i am actually enslaved by 
the programming of the interface operation, as i was the other day trying to 
book a flight on Ryanair, which took me hours and was as frustrating as it can 
be. The airline online browser thing was doing some "screening" thing on me 
that I detested rather thoroughly.

Remediating the world: can you give examples please, Martin?


with regards

Johannes Birringer
_______________________________________________
empyre forum
empyre@lists.cofa.unsw.edu.au
http://www.subtle.net/empyre



Simon Biggs
si...@littlepig.org.uk 
http://www.littlepig.org.uk/ @SimonBiggsUK skype: simonbiggsuk

s.bi...@ed.ac.uk Edinburgh College of Art, University 
of Edinburgh
http://www.eca.ac.uk/circle/ http://www.elmcip.net/ 
http://www.movingtargets.co.uk/


_______________________________________________
empyre forum
empyre@lists.cofa.unsw.edu.au
http://www.subtle.net/empyre
