Posted on: February 4, 2008 10:34 AM, by Jonah Lehrer
This video is shamefully manipulative. It's just a bunch of celebrities, from Scarlett to John Legend, harmonizing over a particularly eloquent Obama speech. The rhetoric is beautiful, poetic and vapid. The camera work is a little too artful. The crescendo at the end is a little too obvious.
And yet, it works. The short video manipulates you even though you know you're being manipulated. I'm not a big fan of celebrities mixing with progressive politics, but I still got shivers at the end of the song, right when the "Yes we can!" chorus picks up speed.
Those shivers are the sole message of the video. To paraphrase Walter Pater, every political speech aspires to the condition of music, and this short video fulfills that aspiration. No ideas or policy proposals interfere with its emotions: it works on our feelings directly. But what interests me most about the video is that it reveals the strong element of music existing just below the surface of the best speeches. It makes the implicit melody of the words explicit. I hope this is the start of a new mash-up genre.
He's right, although it's simpler than that.
74. Musical Scales Mimic the Sound of Language
The harmonics of human vocalization may generate the frequencies used in music.
by Susan Kruglinski
Throughout history, humans of many cultures have found approximately the same small set of sound frequencies musically appealing, as in the 12-note chromatic scale played on the black and white keys of a piano. The frequency of every note occurs in a simple ratio to those of other notes, such as 3:2 or 2:1.
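As a quick illustration of those ratios (my own sketch, not part of the Discover piece): the piano's 12-note scale is tuned in equal temperament, where each semitone multiplies frequency by the twelfth root of 2, and the equal-tempered intervals land strikingly close to the simple ratios like 3:2 and 2:1 mentioned above.

```python
# Sketch: compare 12-tone equal-temperament intervals with the nearby
# simple just-intonation ratios (e.g. 3:2 for a fifth, 2:1 for an octave).
# The semitone counts and ratios below are standard music-theory values.
just_intervals = [
    (0, 1, 1),    # unison, 1:1
    (5, 4, 3),    # perfect fourth, 4:3
    (7, 3, 2),    # perfect fifth, 3:2
    (12, 2, 1),   # octave, 2:1
]

for semitones, num, den in just_intervals:
    equal = 2 ** (semitones / 12)   # equal-temperament frequency ratio
    simple = num / den              # simple (just) frequency ratio
    print(f"{semitones:2d} semitones: equal-tempered {equal:.4f}, "
          f"simple ratio {num}:{den} = {simple:.4f}")
```

The equal-tempered fifth, for instance, comes out to about 1.4983, within a fraction of a percent of the exact 3:2 = 1.5.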
Dale Purves, a neuroscientist at Duke University, set out to understand if there was a biological origin to this tonal preference, and struck a chord in April when he reported that the tones of the chromatic scale are dominated by the harmonic ratios found in the sound of the human voice.
“Tonality in nature seems to come only from vocalization,” Purves says, but previous researchers had found no evidence of music-like intervals in the rise and fall of speech. So he looked at the harmonics of vowel sounds, which are created when air passes through vocal folds that can be controlled with a precision similar to the range of a musical instrument. He discovered that when the tonal intervals, or harmonics, of a single vowel sound were broken down, the frequency ratios of our familiar musical scales usually emerged.
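The connection can be sketched concretely (this is an illustration of the general idea, not Purves's actual analysis): a voiced sound at fundamental frequency f0 carries harmonics at integer multiples f0, 2·f0, 3·f0, and so on, and the ratios between adjacent harmonics are exactly the simple intervals of the scale.

```python
# Illustrative sketch (not Purves's method): adjacent harmonics of a voiced
# sound stand in ratios (n+1):n, which are the familiar simple intervals.
f0 = 110.0  # hypothetical fundamental in Hz, roughly a low speaking voice

# Standard names for the first few (n+1):n ratios.
interval_names = [(2 / 1, "octave 2:1"), (3 / 2, "fifth 3:2"),
                  (4 / 3, "fourth 4:3"), (5 / 4, "major third 5:4"),
                  (6 / 5, "minor third 6:5")]

for n in range(1, 6):
    lower, upper = n * f0, (n + 1) * f0   # adjacent harmonics
    ratio = (n + 1) / n
    name = next(lbl for r, lbl in interval_names if abs(r - ratio) < 1e-9)
    print(f"harmonic {n} -> {n + 1}: {lower:6.1f} Hz -> {upper:6.1f} Hz, {name}")
```

So the raw harmonic structure of a single sung or spoken vowel already contains the octave, fifth, fourth, and thirds, which is the sense in which the scale's ratios are "found" in the voice.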
“If this really holds water, it’s an entry into the whole question—and it’s a very divisive question—of what human aesthetics is all about,” says Purves, who usually studies the neuroscience of vision. “The implicit conclusion in this work is that aesthetics is reduced to biological information, and that is not what musicians and philosophers want to hear.”
In a way, when we speak, we speak in music.
I've heard theories that music came first, then language.