Yeah, there are a few holes in the data (and my code ;) ) - the main problem is 
copyrighted artworks which have thumbnail image references in the data but 
don't actually exist on the Tate server.

Andy Warhol, for example - you're not allowed to see his work online, it seems...
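
If you want to see exactly which works are affected, something along these lines 
will flag them - a rough sketch, assuming the 'thumbnailUrl' field and the 
artworks/ directory layout from the GitHub dump (adjust for your copy):

import glob
import json
import urllib.request

def thumbnail_exists(url):
    # HEAD request; anything other than a 200 (or a network error) counts as missing
    try:
        req = urllib.request.Request(url, method='HEAD')
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except Exception:
        return False

# 'thumbnailUrl' and the path layout are assumptions from my copy of the dump
for path in glob.glob('collection/artworks/**/*.json', recursive=True):
    with open(path) as f:
        work = json.load(f)
    url = work.get('thumbnailUrl')
    if url and not thumbnail_exists(url):
        print('missing thumbnail:', work.get('title'), url)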


On 11 Nov 2013, at 09:53, James Morris wrote:

> I got a blank one last night, no artwork, no categories, just the text 
> asking me if I see _ and _. 
> On Nov 10, 2013 12:29 PM, "shardcore" <shardc...@shardcore.org> wrote:
> >
> > Thanks for the kind words, everyone.
> >
> > autoserota is really just the baseline model of what I'd like to do with 
> > this dataset.
> >
> > As I mentioned in the blogpost, the most interesting categories, for me, 
> > are the more subjective ones, the categories which feel like they're 
> > furthest along the 'I need a human to make this judgement' axis. This 
> > dataset goes beyond simple 'fact-based' descriptions, which means it 
> > contains a whole lot more humanity than most 'big data'.
> >
> > We can imagine machines which spot the items within a representational work 
> > (look at Google Goggles, for example), but algorithms which spot the 
> > 'emotions and human qualities' of a work are more difficult to comprehend. 
> > These categories capture complex, uniquely human judgements which occupy a 
> > space we hold outside of simple visual perception. In fact I think 
> > I'd find a machine which could accurately classify an artwork in this way a 
> > little sinister...
> >
> > The relationships between these categories and the works are metaphorical 
> > in nature, allusions to whole classes of human experience that cannot be 
> > derived from simply 'looking at' the artwork. The exciting part of the Tate 
> > data is really the 'humanity' it contains, something absolutely essential 
> > when we're talking about art - after all, culture cannot exist without 
> > culturally informed entities experiencing it.
> >
> > This data is represented in JSON; it's been expressed in a machine-readable 
> > form explicitly for algorithmic manipulation. It gives us a fascinating 
> > opportunity to investigate how machines can navigate a cultural space, 
> > precisely because it's been imbued with 'cultural knowledge' by the 
> > hard-working taggers of The Tate.
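> >
> > If anyone wants to poke at the categories themselves, here's a minimal sketch 
> > for counting how often each leaf category gets applied - it assumes each 
> > artwork JSON carries its categories as a nested 'subjects' tree of 'children' 
> > nodes with 'name' fields, so check that against your copy of the dump:
> >
> > import glob
> > import json
> > from collections import Counter
> >
> > def leaf_names(node):
> >     # walk the nested category tree and yield the names at the leaves
> >     children = node.get('children', [])
> >     if not children:
> >         if node.get('name'):
> >             yield node['name']
> >         return
> >     for child in children:
> >         yield from leaf_names(child)
> >
> > counts = Counter()
> > # 'subjects' / 'children' / 'name' structure is an assumption - verify it first
> > for path in glob.glob('collection/artworks/**/*.json', recursive=True):
> >     with open(path) as f:
> >         work = json.load(f)
> >     subjects = work.get('subjects')
> >     if subjects:
> >         counts.update(leaf_names(subjects))
> >
> > for name, n in counts.most_common(20):
> >     print(n, name)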
> >
> >
> >
> > On 9 Nov 2013, at 23:30, Rob Myers wrote:
> >
> > > On 09/11/13 03:07 PM, Bjørn Magnhildøen wrote:
> > >> "can you see the $x and the $y?"
> > >
> > > Yes, it's very simple, but the effect of framing it as a question makes it
> > > very effective IMO. :-)
> > >
> > >> i'd like to do something with the categories themselves, interesting
> > >> how the concepts surround and define the works
> > >> thinking of a descriptive art from it
> > >> or instructive art
> > >> then mutated art maybe
> > >> algorithmic selection
> > >> "as stated", dictated, mutated
> > >
> > > Definitely. Once we know how existing objects are described, we can
> > > describe objects that don't yet exist. :-)
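> > >
> > > e.g. the crudest possible sketch (purely illustrative - it assumes you've
> > > already flattened each work down to a list of category names, and it borrows
> > > autoserota's "can you see the x and the y?" framing):
> > >
> > > import random
> > >
> > > def describe_unmade_work(works_to_categories, n=2):
> > >     # works_to_categories is assumed to be {title: [category, ...]} built from the dump
> > >     all_categories = sorted({c for cats in works_to_categories.values() for c in cats})
> > >     chosen = random.sample(all_categories, min(n, len(all_categories)))
> > >     return "can you see the " + " and the ".join(chosen) + "?"
> > >
> > > # made-up example data, just to show the shape
> > > print(describe_unmade_work({'Work A': ['grief', 'horses'], 'Work B': ['birds', 'longing']}))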
> > >

_______________________________________________
NetBehaviour mailing list
NetBehaviour@netbehaviour.org
http://www.netbehaviour.org/mailman/listinfo/netbehaviour
