This didn't seem to go through the first time. Apologies if you get it
twice.

From: "Hugo Sweet" <[EMAIL PROTECTED]>
To: "313 (E-mail)" <313@hyperreal.org>
Sent: Tuesday, November 13, 2001 10:31 PM
Subject: [313] Art and Technology again


> In my opinion, and based purely on the above, software such as EMI could
> not produce great techno because of the priority given to the
> characteristics of the sounds used (especially where effects and "found"
> samples are concerned), rather than the rules of musical theory behind
> their arrangement. It could be argued that techno is the distillation of
> music theory, a music that goes beyond the limitations of acoustic musical
> technology. Once music theory has been distilled to variations upon the
> theme of the heartbeat, the sound palette can be set free. Electronic
> music is therefore the inverse of classical music, and other music based
> on a restricted palette of acoustic sounds. Where in classical music it
> was the arrangement that defined the style of a composer, with techno it
> is the sound palette itself. Only in techno does it make sense for a
> composer to say that using a preset synth sound is either succumbing to
> cliche or is creative cowardice.

If a computer can learn how to write music, I fail to see why it couldn't
also learn how to produce music. The characteristics of sound could be
analysed as easily as the notes played. Samplers and software that we use
today already do this very well. To be fair, you could also feed it the
arrangement and sequencing of the song, and "teach" the computer to look for
correlations between the originally sequenced track and the final audio (if
all the original knob/fader movements were recorded into MIDI and the
effects were DSP). You could teach it about the evolution of synthesis and
production, and all the latest gear. Given that you'd be working with audio
samples rather than MIDI, the time it would take to learn would be that much
greater, and the variables more numerous, but given time, I'm sure it could
be done to the same degree that "the classics" can be learned. And to the
same degree, one computer might limit itself to familiar tools, while
another might tend to explore new options. One might generate chaotic noise,
another might generate a symphony. If we put 1000 computers in a room and
played them one techno song and 50 trance songs out of every hundred, most
of them would probably make trance. Alright, maybe that was silly, but you
catch my drift.

Musical taste is learned by humans, starting with what we're spoonfed. Who
knows why we chose what we chose beyond that? Arguments for the "objective
superiority" of style X hold no water. So if it's not objective, what is it?
Where does that root of choice come from? What in humans distinguishes our
faculty of taste from a computer's? What is taste? What is it that
makes "new" music new? What is random? What is creativity? What is
spontaneity? These things have been debated ad nauseam to no conclusion.

Give the computer some Hendrix, King Tubby and Kraftwerk and see where it
winds up 20 years down the road. Moreover, give a few hundred thousand
computers 20 years in their own vacuum to share musical ideas from
different musical influences, merge that with programmers feeding them new
instruments, and see where they wind up. I think Mike's comments the other
day were great. Take this scenario and *combine* it with the shared
experience of music humans own today, and together there could be great
developments.

I'm reading Bergson right now, and this all fits far too well. With AI
innovations like these, I see programmers leaping from assigning narrow
tasks (like an amoeba) to giving computers the faculties to *link* distinct
work and generate a response, given a broad palette. Humans have a very
developed network of learned associations that form complex perceptions of
the world, compared to microorganisms that sense little and respond within
a narrow framework. As the task of organizing a complex array of sensation
is combined with a larger memory, the number of options for any given
action expands. A child may form a concrete dislike from one bad hot dog,
and as they grow older, they may experience more hot dogs in different ways
and, based on those experiences, find new arrangements of condiments that
will become identifiable with their taste. With both computers like EMI and
adults eating hot dogs, we are unable to determine what choice they may
make, and with both complex thought systems, we are unable to glean the
root of that spontaneity. It wouldn't be spontaneous if it had a root,
right? I'd say we're already as unable to determine the roots of choice in
complex computer programs as we are with humans. Why did EMI choose the
notes it chose? If we keep broadening its palette of choice, I can see vast
potential for creativity.

Of course, this is all a huge oversimplification, because computers are not
presently predisposed to one action or another through a history of
experience; they aren't fed experiences outside of the realm of a few
musical passages from famous composers, and they don't play them top-40
radio either. But since this is just the beginning, I think some
argumentative slack can be granted. Also, this all invites a discussion
about spirit and soul and what-not, but you can tell where I stand on that
fence.

Tristan
----------
http://ampcast.com/phonopsia <- Music
http://phonopsia.tripod.com <- Mixes, pics, thought, travelogue & info
http://www.metatrackstudios.com
[EMAIL PROTECTED] <- email
<FrogboyMCI> <- AOL Instant Messenger
