Innovation from University of Chicago

Fifty years ago, most linguistics research required little advanced technology: a tape recorder, a notebook, and a No. 2 pencil to jot down findings.

But lately the field has grown to encompass ideas from computer science and cognitive psychology. Linguists increasingly make use of sophisticated tools such as high-speed cameras, eye trackers, visual recognition software, and EEG machines that track brain activity.

Linguistics researchers at the University of Chicago are finding new ways to incorporate the field’s latest approaches and tools. Computational linguists like John Goldsmith, the Edward Carson Waller Distinguished Service Professor in linguistics and computer science, and Greg Kobele, a Neubauer Family assistant professor in linguistics, use computer programs to model different components of human language. Phonologist Alan Yu, associate professor in linguistics, runs experiments that investigate how sounds change over time. Professor Anastasia Giannakidou, a semanticist, collaborates with faculty in psychology to study sentence building in sign languages.

“Traditional methods are still the core of the department, but we’re adding on these other elements in a complementary way,” says Chris Kennedy, chair of linguistics. “I do see it as enhancement of what we’ve already got. You want to build on traditional strengths.”

Finding the Fundamental Similarities

A key question in linguistics is how finite linguistic rules can generate infinite complexity, says Jason Riggle, assistant professor in linguistics. The so-called “generative revolution,” which Noam Chomsky spearheaded in the 1950s, lives on in different forms today. Linguists continue to look across many languages for insights into how the mind handles the challenge of language.

“That means the system that we ultimately want to explain is not the one that is English or Cantonese,” Kennedy adds, “it’s the one that underlies English or Cantonese.”

Technology is offering novel ways to explore such linguistic puzzles. Computational linguists like Goldsmith, for example, have used computer programs to simulate the ways the brain can learn language. Goldsmith developed a program called Linguistica that can take a sample of any language and learn its morphological structure.
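Goldsmith's actual system is considerably more sophisticated (it selects analyses using minimum-description-length principles), but the core idea of discovering morphological structure from raw word lists can be sketched in a few lines. The following toy example, with an illustrative corpus and function names of my own invention rather than anything from Linguistica itself, proposes suffixes by looking for endings that recur across attested stems:

```python
from collections import Counter

def candidate_suffixes(words, max_suffix_len=3, min_count=2):
    """Toy morphology learner: propose suffixes that recur across
    attested stems. A simplified illustration of unsupervised
    morphology induction, NOT Goldsmith's Linguistica algorithm."""
    suffix_counts = Counter()
    stems = set(words)
    for word in words:
        for i in range(1, max_suffix_len + 1):
            # Treat word as stem + suffix only if the bare stem
            # also appears in the corpus on its own.
            if len(word) > i and word[:-i] in stems:
                suffix_counts[word[-i:]] += 1
    return {s: c for s, c in suffix_counts.items() if c >= min_count}

corpus = ["walk", "walks", "walked", "jump", "jumps", "jumped", "talk", "talks"]
print(candidate_suffixes(corpus))  # → {'s': 3, 'ed': 2}
```

Even this crude heuristic recovers the English plural/third-person "-s" and past-tense "-ed" from eight words; a full system must additionally decide which of many competing segmentations best compresses the whole vocabulary.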

Experimental linguists like Ming Xiang, assistant professor in linguistics, favor other strategies such as monitoring the brain’s electrical activity with an EEG machine. Xiang’s interest in how the brain processes sentences has led her to explore commonalities among languages that on the surface seem quite different.

Some of her recent research focuses on wh-questions—questions that use interrogative words like who, what, when, where, and which. In English, wh-words tend to appear at the start of questions, which means they are generally far away from their verb (“What did he buy?”). In Chinese, the word order is the same as in the corresponding non-question, keeping wh-words close to their verb (“He bought what?”). The difference in word order might suggest that English and Chinese speakers use different strategies to process these sentences, but Xiang’s work indicated otherwise.

Using an EEG machine, which measures the brain’s electrical signals, she studied how Chinese speakers responded to wh-words. She focused on P-600, a burst of electrical activity in the brain. In English speakers, P-600 correlates with the association of wh-words and their verbs.

Xiang presented her Chinese-speaking subjects with two sentences, one with a wh-word and one without. In both sentences, because of the Chinese word order, the verb and its object were right next to each other (for example, “John wondered Mary went to see which pop star” vs. “John thought Mary went to see that pop star”). Yet the sentences with wh-words caused a much larger P-600 effect, similar to the effect observed in English sentences with wh-words.

This suggests that the brain processes the Chinese and English sentences containing wh-words in a similar way, despite the difference in word order. It further suggests a similar mental strategy for comprehension that extends across languages.

The discovery of such cross-linguistic mental processes is crucial, Xiang says. “The fundamental similarities are what linguistics is going after. What are the basic, primitive units that can combine to produce all the languages?”

The “Forbidden Experiment”

In addition to better tools, linguists are getting access to more kinds of data. Two graduate students in Riggle’s lab, Max Bane and Morgan Sonderegger, have extracted intriguing data from the unlikeliest of sources—reality television.

“It’s the forbidden experiment,” Bane says. Linguists would love to know what happens when you isolate people in an enclosed space for weeks at a time and record every utterance, but it’s hardly something a human-subjects board would allow. Fortunately, the contestants on the television show Big Brother did it for them.

Working with Peter Graff at MIT, the linguists examined the changes in speech among Big Brother participants. They focused on Voice Onset Time, or VOT, a measure of how long it takes for the vocal cords to make a sound after a person articulates certain consonants. It’s variation in VOT that makes a “p” sound different from “b,” and a “k” sound different from “g.” Longer VOT makes “p”s sound especially “bursty” and clear.

The team charted each contestant’s average VOT over the course of the season, using statistical methods to control for things like word frequency that might skew the average.
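The paper's actual statistical models are not reproduced here, but one standard way to control for a confound like word frequency is to regress it out and then average the residuals per speaker. A minimal sketch of that generic technique, with made-up data and variable names rather than the study's real measurements, might look like this:

```python
import numpy as np

def mean_vot_controlled(vot_ms, log_word_freq, speaker_ids):
    """Per-speaker mean VOT after regressing out log word frequency.
    A generic residualization sketch, not the study's actual model."""
    vot = np.asarray(vot_ms, dtype=float)
    freq = np.asarray(log_word_freq, dtype=float)
    # Fit vot ~ a + b * freq by ordinary least squares.
    b, a = np.polyfit(freq, vot, 1)
    # Residuals are the VOT values with the frequency effect removed.
    residuals = vot - (a + b * freq)
    # Average the residual VOT for each speaker.
    out = {}
    for sid in set(speaker_ids):
        mask = np.array([s == sid for s in speaker_ids])
        out[sid] = residuals[mask].mean()
    return out
```

With measurements like `mean_vot_controlled([60, 50, 40, 30], [0, 1, 0, 1], ["A", "A", "B", "B"])`, the frequency trend is fitted across all tokens and each speaker's average is computed on what remains, so that speakers who happen to say more high-frequency (typically shorter-VOT) words are not unfairly penalized.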

Their preliminary results, published in the Proceedings of the 46th Annual Meeting of the Chicago Linguistic Society, suggest that the contestants’ phonetics changed in response to social pressures. During periods of tension in the house, the contestants’ average VOT began to shift wildly—some people’s average VOT increased in response to the tension, while others’ decreased. By the season’s end, once those pressures had eased, the average VOTs began to converge: the contestants started to sound more like one another.

It’s the kind of experiment you never would have seen 50 years ago, notes Riggle, who also directs the Chicago Language Modeling Lab. He isn’t fazed by all the changes to his field. “If the field is not building new tools that suggest new questions that make you build more new tools, then it’s not going anywhere,” he says. “If I’m not obsolete in 30 years, I’m not doing a good job now.”
