
Music tuition can help children improve reading skills

Los Angeles, London, New Delhi, Singapore and Washington DC (16 March 2009) -- Children exposed to a multi-year programme of music tuition involving training in increasingly complex rhythmic, tonal, and practical skills display superior cognitive performance in reading skills compared with their non-musically trained peers, according to a study published today in the journal Psychology of Music, published by SAGE.

According to authors Joseph M Piro and Camilo Ortiz from Long Island University, USA, data from this study will help to clarify the role of music study on cognition and shed light on the question of the potential of music to enhance school performance in language and literacy.

Studying children at two US elementary schools, one of which routinely trained children in music and one that did not, Piro and Ortiz aimed to investigate the hypothesis that children who have received keyboard instruction as part of a music curriculum increasing in difficulty over successive years would demonstrate significantly better performance on measures of vocabulary and verbal sequencing than students who did not receive keyboard instruction.

Several studies have reported positive associations between music education and increased abilities in non-musical (eg, linguistic, mathematical, and spatial) domains in children. The authors say there are similarities in the way that individuals interpret music and language and "because neural response to music is a widely distributed system within the brain…. it would not be unreasonable to expect that some processing networks for music and language behaviors, namely reading, located in both hemispheres of the brain would overlap."

The aim of this study was to look at two specific reading subskills – vocabulary and verbal sequencing – which, according to the authors, "are cornerstone components in the continuum of literacy development and a window into the subsequent successful acquisition of proficient reading and language skills such as decoding and reading comprehension."

Using a quasi-experimental design, the investigators selected second-grade children from two school sites located in the same geographic vicinity and with similar demographic characteristics, to ensure the two groups of children were as similar as possible apart from their music experience.

Children in the intervention school (n=46) studied piano formally for a period of three consecutive years as part of a comprehensive instructional intervention program. Children attending the control school (n=57) received no formal musical training on any musical instrument and had never taken music lessons as part of their general school curriculum or in private study. Both schools followed comprehensive balanced literacy programmes that integrate skills of reading, writing, speaking and listening.

All participants were individually tested to assess their reading skills at the start and close of a standard 10-month school year using the Structure of Intellect (SOI) measure.

Results analysed at the end of the year showed that the music-learning group had significantly better vocabulary and verbal sequencing scores than did the non-music-learning control group. This finding, conclude the authors, provides evidence to support the increasingly common practice of "educators incorporating a variety of approaches, including music, in their teaching practice in continuing efforts to improve reading achievement in children".

However, further interpretation of the results revealed some complexity within the overall outcomes. An interesting observation was that when the study began, the music-learning group had already experienced two years of piano lessons, yet their reading scores were nearly identical to those of the control group at the start of the experiment.

So, ask the authors, "If the children receiving piano instruction already had two years of music involvement, why did they not significantly outscore the musically naïve students on both measures at the outset?" Addressing previous findings showing that music instruction has been demonstrated to exert cortical changes in certain cognitive areas such as spatial-temporal performance fairly quickly, Piro and Ortiz propose three factors to explain the lack of evidence of early benefit for music in the present study.

First, children were tested for their baseline reading skills at the beginning of the school year, after an extended holiday period. Perhaps the absence of any music instruction during a lengthy summer recess may have reversed any earlier temporary cortical reorganization experienced by students in the music group, a finding reported in other related research. Another explanation could be that the duration of music study required to improve reading and associated skills is fairly long, so the initial two years were not sufficient.

A third explanation involves the specific developmental time period during which children were receiving the tuition. During the course of their third year of music lessons, the music-learning group was in second grade and approaching the age of seven. There is evidence that there are significant spurts of brain growth and gray matter distribution around this developmental period and, coupled with the increased complexity of the study matter in this year, brain changes that promote reading skills may have been more likely to accrue at this time than in the earlier two years.

"All of this adds a compelling layer of meaning to the experimental outcomes, perhaps signalling that decisions on 'when' to teach are at least as important as 'what' to teach when probing differential neural pathways and investigating their associative cognitive substrates," note the authors.

"Study of how music may also assist cognitive development will help education practitioners go beyond the sometimes hazy and ill-defined 'music makes you smarter' claims and provide careful and credible instructional approaches that use the rich and complex conceptual structure of music and its transfer to other cognitive areas," they conclude.

The Handwriting of Liars


(PhysOrg.com) -- Forget about unreliable polygraph lie detectors for identifying liars. A new study claims the best way to find out if someone is a liar is to look at their handwriting, rather than analyzing their word choice, eye movements and body language.

The study by Gil Luria and Sara Rosenblum from the University of Haifa in Israel, tested 34 volunteers, who were each asked to write two stories using a system called ComPET (Computerized Penmanship Evaluation Tool), which comprises a piece of paper positioned on a computer tablet and a wireless electronic pen with a pressure-sensitive tip. Using the system, the subjects wrote one paragraph about a true memory, and one that was made up.

The researchers analyzed the writing and discovered that in the untrue paragraphs the subjects on average pressed down harder on the paper and made significantly longer strokes and taller letters than in the true paragraphs. The differences were not visible to the eye, but were detectable by the computerized system. There were no differences in writing speed.
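The comparison described above is a paired, within-subject analysis: each person supplies both a true and a fabricated paragraph, and the per-condition averages are compared. A minimal sketch follows; the values and field layout are invented for illustration and are not taken from ComPET's actual output.

```python
from statistics import mean, stdev

# Hypothetical per-subject averages (invented values for illustration):
# each tuple is (true-paragraph value, fabricated-paragraph value).
pressure = [(310, 342), (295, 330), (305, 318), (288, 320)]    # arbitrary units
stroke_len = [(4.1, 4.6), (3.8, 4.4), (4.0, 4.3), (3.9, 4.5)]  # millimetres

def paired_summary(pairs):
    """Mean within-subject difference (fabricated minus true) and its spread."""
    diffs = [fake - true for true, fake in pairs]
    return mean(diffs), stdev(diffs)

dp, sp = paired_summary(pressure)
dl, sl = paired_summary(stroke_len)
print(f"pressure: mean diff {dp:.1f} (sd {sp:.1f})")
print(f"stroke length: mean diff {dl:.2f} mm (sd {sl:.2f})")
```

A positive mean difference corresponds to the reported pattern: harder pressure and longer strokes when writing the invented paragraph. A real analysis would follow this with a paired significance test.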

The scientists suggest that handwriting changes because the brain is forced to work harder since it is inventing information, and this interferes with normal writing.

People hesitate when they lie, Dr Richard Wiseman, a psychology professor at the University of Hertfordshire, told the Daily Mail, and some companies use this knowledge to check how long people take to tick boxes in online surveys. The new research is promising, he said, but needs larger-scale testing.

The study was published in the Applied Cognitive Psychology journal. Research is in its early stages but ComPET could one day find practical application in testing the truthfulness of handwritten insurance claims or loan applications, or in handwriting tests during job interviews. Handwriting analyses could also be combined with lie detectors to identify whether or not people were lying.

Multilingualism brings communities closer together

by: Danielle Moore

Learning their community language outside the home enhances minority ethnic children's development, according to research led from the University of Birmingham. The research, which was funded by the Economic and Social Research Council, found that attending language classes at complementary schools has a positive impact on students.

Complementary schools provide out-of-school-hours community language learning for children and young people from minority groups. They aim to develop students' multilingualism, strengthen the link between home and the community, and connect them with wider social networks. The study found that the parents believed that bilingualism had economic benefits for their children as it improved their chances of success in the global jobs market.

According to Angela Creese, Professor of Educational Linguistics, who led the research, there is a growing interest in complementary schools because they are unique, offering students the opportunity to develop their verbal and written language skills across a variety of languages. 'It is rare to find an environment where two or more languages are used in teaching and learning,' she explains. 'Teachers and young people move between languages, and our findings show that the children are proud of their flexible language skills. One Turkish boy told us he was learning four languages and loved being able to show off to his friends.'

The research builds on an earlier study of complementary schools in Leicester that found significant evidence of the value of these schools. Consisting of linked case studies of schools serving four of Britain's linguistic minority communities, the study focused on Bengali schools in Birmingham, Chinese schools in Manchester, Gujarati schools in Leicester, and Turkish schools in London. It explored the social, cultural and linguistic significance of these schools in their communities and in wider society.

The findings highlight the general view among minority communities that children need to study language, heritage and culture at school rather than in isolation at home. A Chinese parent told the researchers that children who were taught by private tutors had a limited experience: 'They need to learn with other kids, to see how other children learn, their attitudes and so on. Then they can decide for themselves what kind of person they should be.'

The research team found that, for students in complementary schools, being bilingual is associated with contemporary, cosmopolitan identities. Students often see themselves as 'successful learners' as well as 'multicultural' and 'bilingual', the report says. 'Teachers and students alike see the complementary schools as places where they can develop multicultural, multilingual identities', says Professor Creese.

Babies who gesture have big vocabularies


Babies who use gestures to communicate when they are 14 months old have much larger vocabularies when they start school than those who don't, say US researchers.

They say babies with wealthier, better-educated parents tend to gesture more and this may help explain why some children from low-income families fare less well in school.

"When children enter school, there is a large socioeconomic gap in their vocabularies," says the University of Chicago's Dr Meredith Rowe, whose study appears in the journal Science.

Gestures could help explain the difference, Rowe told the American Association for the Advancement of Science annual meeting in Chicago.

Vocabulary is a key predictor of school success. Earlier research shows that well-off, educated parents tend to talk to their children more than their poorer, less-educated peers.

"What we are doing here is going one step earlier and asking, does this socioeconomic status relate to gesture, and can that explain some of the gap we see at school entry," says Rowe.

Early foundations

The researchers filmed 50 Chicago-area children and parents from diverse economic backgrounds and counted the number of gestures, such as pointing at a picture.

The team found that 14-month-olds from high-income, well-educated families used gesture to convey an average of 24 different meanings during each 90-minute session, compared with 13 meanings conveyed by children from lower-income families.
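Counting "different meanings" per session amounts to counting the distinct meaning labels coded for a child's gestures over the 90 minutes. A minimal sketch, with invented session data (the coding scheme here is hypothetical, not the study's):

```python
# Each session is coded as a list of gesture tokens, each labelled with the
# meaning it conveys; the measure of interest is the number of *distinct*
# meanings per session. The data below are invented for illustration.
sessions = {
    "child_A": ["dog", "ball", "up", "dog", "more", "ball"],
    "child_B": ["ball", "ball", "up"],
}

def distinct_meanings(gesture_labels):
    """Number of different meanings conveyed in one 90-minute session."""
    return len(set(gesture_labels))

for child, labels in sessions.items():
    print(child, distinct_meanings(labels))
```

In the study, this per-session count averaged 24 for children from high-income families and 13 for children from lower-income families.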

When the same children entered school at age four and a half years, those from higher-income families had better vocabulary scores on standardised tests.

"At 14 months, an age when there aren't even socioeconomic differences in their talk yet, we see there are differences in their gestures," says Rowe.

The videos revealed that parents from wealthier families gestured more with their children than the other parents.

Rowe says the findings suggest that gestures can at least partly explain vocabulary differences between the groups, and may prove useful as the basis for interventions.

"Can we manipulate how much parents and children gesture, and if so, will it increase their vocabulary?" he says.

Humor Shown To Be Fundamental To Our Success As A Species

ScienceDaily (June 16, 2008) — First universal theory of humour answers how and why we find things funny. Published June 12, The Pattern Recognition Theory of Humour by Alastair Clarke answers the centuries-old question of what humour is. Clarke explains how and why we find things funny and identifies the reason humour is common to all human societies, its fundamental role in the evolution of Homo sapiens and its continuing importance in the cognitive development of infants.

Clarke explains: “For some time now it’s been assumed that a global theory of humour is impossible. This theory changes thousands of years of incorrect analyses and mini-theories that have applied to only a small proportion of instances of humour. It offers a vital answer as to why humour exists in every human society.”

Previous theories from philosophers, literary critics and psychologists have focused on what we laugh at, on ‘getting the joke’. “Humour cannot be explained in terms of content or subject matter. A group of individuals can respond completely differently to the same content, and so to understand humour we have to examine the structures underlying it and analyse the process by which each individual responds to them. Pattern Recognition Theory is an evolutionary and cognitive explanation of how and why an individual finds something funny. Effectively it explains that humour occurs when the brain recognizes a pattern that surprises it, and that this recognition is rewarded with the experience of the humorous response,” says Clarke.

Humour is not about comedy; it is about a fundamental cognitive function. Clarke explains: “An ability to recognize patterns instantly and unconsciously has proved a fundamental weapon in the cognitive arsenal of human beings.” Recognising patterns enables us to quickly understand our environment and function effectively within it: language, which is unique to humans, is based on patterns.

Clarke’s theory has wider implications: “It sheds light on infantile cognitive development, will lead to a revision of tests on ‘humour’ to diagnose psychological or neurological conditions and will have implications regarding the development of language. It will lead to a clarification of whether other animals have a sense of humour, and has an important role to play in the production of artificial intelligences that will feel a bit less robotic thanks to their sense of humour.”

Alastair Clarke explains: “The development of pattern recognition as displayed in humour could form the basis of humankind’s instinctive linguistic ability. Syntax and grammar function in fundamental patterns for which a child has an innate facility. All that differs from one individual to the next is the content of those patterns in terms of vocabulary.”

Pattern Recognition Theory identifies further correlation between the development of humour and the development of cognitive ability in infants. Previous research has shown that children respond to humour long before they can comprehend language or develop long-term memory. Humour is present as one of the early fundamental cognitive processes. Alastair Clarke explains: “Amusing childish games such as peek-a-boo and clap hands all exhibit the precise mechanism of humour as it appears in any adult form. Peek-a-boo can elicit a humorous response in infants as young as four months, and is, effectively, a simple process of surprise repetition, forming a clear, basic pattern. As the infant develops, the patterns in childish humour become more complex and compounded and attain spatial as well as temporal elements until, finally, the child begins to grapple with the patterns involved in linguistic humour.”

Alastair Clarke explains that the Pattern Recognition Theory “cannot say categorically what is funny. The individual is of paramount importance in determining what they find amusing, bringing memories, associations, meta-meaning, disposition, their ability to recognize patterns and their comprehension of similarity to the equation. But the following two examples illustrate its basic structure. A common form of humour is the juxtaposition of two pictures, normally of people, in whom we recognize a similarity. What we are witnessing here is spatial repetition, a simple two-term pattern featuring the outline or the features of the first repeated in those of the second. If the pattern is sufficiently convincing (as in the degree to which we perceive repetition), and we are surprised by recognizing it, we will find the stimulus amusing.”

“As a second example, related to the first but in a different medium, stand-up comedy regularly features what we might call the It’s so true form of humour. As with the first example, the brain recognizes a two-term pattern of repetition between the comedian’s depiction and its retained mental image, and if the recognition is surprising, it will be found amusing. The individual may be surprised to hear such things being talked about in public, perhaps because they are taboo, or because the individual has never heard them being articulated before. The only difference between the two examples is that in the first the pattern is recognized between one photograph and the next, and in the second it occurs between the comedian’s words and the mental image retained by the individual of the matter being portrayed.”

“Both of these examples use simple patterns of exact repetition, even if the fidelity of that repetition is poor (for example if the photographs are only vaguely similar). But pattern types can be surprisingly varied, including reflection, reversal, minification and magnification and so on. Sarcasm, for example, functions around a basic pattern of reversal, otherwise known as repetition in opposites. Patterns can also contain many stages, whereas the ones depicted here feature only two terms.”

What makes an accent in a foreign language lighter?


by: University of Haifa
The more empathy one has for another, the lighter the accent will be when speaking in a second language. This is the conclusion of a new study carried out at the University of Haifa by Dr. Raphiq Ibrahim and Dr. Mark Leikin of the Department of Learning Disabilities and Prof. Zohar Eviatar of the Department of Psychology at the University of Haifa. The study has been published in the International Journal of Bilingualism. “In addition to personal-affective factors, it has been found that the ‘language ego’ is also influenced by the sociopolitical position of the speaker towards the majority group,” the researchers stated.

We all know how to identify the average Hebrew speaker trying to speak English: the Israeli accent is an easy give-away. But why is there an accent at all, and what factors make one speaker's accent heavier than another's? One possible explanation comes from the cognitive field, which suggests that our native language system constrains the pronunciations we can produce in a non-native language. Another explanation is derived from the socio-lingual field, which claims that socio-affective factors influence accent and that the second language serves as an image label for the speaker in the presence of a majority group.

“Israel is a perfect lab location for testing the topic of second languages, because of the complex composition of its population. This population is made up of immigrants who learn Hebrew at an advanced age; an ethnic minority of Arabs, some of whom learn Hebrew from an early age, and others who learn the language as mature adults; and a majority group of native Hebrew speakers,” the researchers explained.

The first stage of the study divided participants - students from the University of Haifa - into three groups: 20 native Hebrew speakers, 20 Arabic speakers who learned Hebrew at the age of 7-8, and 20 Russian immigrants who learned Hebrew after age 13. The participants’ socioeconomic characteristics were identical. All were asked to read out a section from a report in Hebrew, and then to describe - in Hebrew - an image that was shown to them. The readings were recorded and divided into two-minute sections. Additionally, the participants filled out a 29-statement questionnaire measuring empathic ability.

The second stage of the study took 20 different native Hebrew speaking participants. They listened to the pieces that had been recorded in the first stage, and rated each piece according to accent “heaviness”. Subsequently, each participant from the first stage was given a score on the weight of his or her accent and another score for level of empathy.

The study has shown that the accent level of Russian immigrants and of native Arabic speakers is similar. It also revealed that for the Russian immigrants, there is a direct link between the two measures: the higher the ability to exhibit empathy for the other, the weaker the accent. Amongst the Arabic speakers, however, no such link - either positive or negative - between level of empathy and heaviness of accent could be seen.
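The link reported here is a correlation between two per-participant scores: empathy and rated accent heaviness. A plain Pearson correlation, computed on invented scores, illustrates the measure; a negative coefficient corresponds to "higher empathy, lighter accent". (The article does not specify which statistic the researchers used, so this is an assumption for illustration.)

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented scores: empathy questionnaire totals vs. rated accent heaviness.
empathy   = [78, 85, 62, 90, 70]
heaviness = [3.1, 2.4, 4.0, 2.0, 3.5]
print(round(pearson(empathy, heaviness), 2))  # negative: more empathy, lighter accent
```

In the study, this relationship held for the Russian-immigrant group but was absent for the Arabic-speaking group, which motivated the sociopolitical interpretation that follows.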

The researchers’ hypothesis is that in the group of Arabic speakers, a new factor enters the ‘language ego’ equation: sociopolitical position. “We believe that the pattern among Arabic speakers demonstrates their sentiment toward the Hebrew-speaking majority group, and the former consider their accent as something that distinguishes them from the majority.

Our research shows that both personal and sociopolitical aspects have an influence on accent in speaking a second language, and teachers giving instruction in languages as second languages, especially among minority groups, must relate to the social and political connection when teaching,” the researchers explain.

Language driven by culture, not biology

by: EurekAlert

Language in humans has evolved culturally rather than genetically, according to a study by UCL (University College London) and US researchers. By modelling the ways in which genes for language might have evolved alongside language itself, the study showed that genetic adaptation to language would be highly unlikely, as cultural conventions change much more rapidly than genes. Thus, the biological machinery upon which human language is built appears to predate the emergence of language.

According to a phenomenon known as the Baldwin effect, characteristics that are learned or developed over a lifespan may become gradually encoded in the genome over many generations, because organisms with a stronger predisposition to acquire a trait have a selective advantage. Over generations, the amount of environmental exposure required to develop the trait decreases, and eventually no environmental exposure may be needed - the trait is genetically encoded. An example of the Baldwin effect is the development of calluses on the keels and sterna of ostriches. The calluses may initially have developed in response to abrasion where the keel and sterna touch the ground during sitting. Natural selection then favored individuals that could develop calluses more rapidly, until callus development became triggered within the embryo and could occur without environmental stimulation. The PNAS paper explored circumstances under which a similar evolutionary mechanism could genetically assimilate properties of language – a theory that has been widely favoured by those arguing for the existence of 'language genes'.

The study modelled ways in which genes encoding language-specific properties could have coevolved with language itself. The key finding was that genes for language could have coevolved only in a highly stable linguistic environment; a rapidly changing linguistic environment would not provide a stable target for natural selection. Thus, a biological endowment could not coevolve with properties of language that began as learned cultural conventions, because cultural conventions change much more rapidly than genes.
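The paper's models are not described in detail here, but the key finding — genes cannot track a fast-moving cultural target — can be illustrated with a toy simulation. Everything below (the selection scheme, the parameter values) is an invented sketch, not the authors' model: each agent carries a numeric "genetic bias", fitness is closeness to the current linguistic convention, and the convention itself drifts each generation.

```python
import random

random.seed(0)

def simulate(drift, generations=200, pop=200, mut=0.02):
    """Toy gene-culture chase. The linguistic convention moves by `drift`
    each generation; truncation selection keeps the half of the population
    whose genetic bias is closest to it. Returns the final gap between the
    mean genetic bias and the convention."""
    convention = 0.0
    genes = [random.gauss(0.0, 1.0) for _ in range(pop)]
    for _ in range(generations):
        convention += drift                              # culture changes
        genes.sort(key=lambda g: abs(g - convention))    # select the closest half
        survivors = genes[: pop // 2]
        # each survivor leaves two offspring with small mutational noise
        genes = [g + random.gauss(0.0, mut) for g in survivors for _ in (0, 1)]
    return abs(sum(genes) / pop - convention)

slow_gap = simulate(drift=0.001)  # near-static convention: genes can track it
fast_gap = simulate(drift=0.5)    # fast cultural change: genes lag far behind
print(slow_gap, fast_gap)
```

With a near-static convention the population converges on it; with rapid drift the convention outruns selection and the gap grows, mirroring the paper's "moving target" argument.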

The authors conclude that it is unlikely that humans possess a genetic 'language module' which has evolved by natural selection. The genetic basis of human language appears to primarily predate the emergence of language.

The conclusion is reinforced by the observation that had such adaptation occurred in the human lineage, these processes would have operated independently on modern human populations as they spread throughout Africa and the rest of the world over the last 100,000 years. If this were so, genetic populations should have coevolved with their own language groups, leading to divergent and mutually incompatible language modules. Linguists have found no evidence of this, however; for example, native Australasian populations have been largely isolated for 50,000 years but learn European languages readily.

Professor Nick Chater, UCL Cognitive, Perceptual and Brain Sciences, says: "Language is uniquely human. But does this uniqueness stem from biology or culture? This question is central to our understanding of what it is to be human, and has fundamental implications for the relationship between genes and culture. Our paper uncovers a paradox at the heart of theories about the evolutionary origin and genetic basis of human language – although we appear to have a genetic predisposition towards language, human language has evolved far more quickly than our genes could keep up with, suggesting that language is shaped and driven by culture rather than biology.

"The linguistic environment is continually changing; indeed, linguistic change is vastly more rapid than genetic change. For example, the entire Indo-European language group has diverged in less than 10,000 years. Our simulations show the evolutionary impact of such rapid linguistic change: genes cannot evolve fast enough to keep up with this 'moving target'.

"Of course, co-evolution between genes and culture can occur. For example, lactose tolerance appears to have co-evolved with dairying. But dairying involves a stable change to the nutritional environment, positively selecting the gene for lactose tolerance, unlike the fast-changing linguistic environment. Our simulations show that this kind of co-evolution can only occur when language change is offset by very strong genetic pressure. Under these conditions of extreme pressure, language rapidly evolves to reflect pre-existing biases, whether the genes are subject to natural selection or not. Thus, co-evolution only occurs when the language is already almost entirely genetically encoded. We conclude that slow-changing genes can drive the structure of a fast-changing language, but not the reverse.

"But if universal grammar did not evolve by natural selection, how could it have arisen? Our findings suggest that language must be a culturally evolved system, not a product of biological adaption. This is consistent with current theories that language arose from the unique human capacity for social intelligence."


Pacific People Spread From Taiwan, Language Evolution Study Shows


ScienceDaily (Jan. 27, 2009) — New research into language evolution suggests most Pacific populations originated in Taiwan around 5,200 years ago. Scientists at The University of Auckland have used sophisticated computer analyses on vocabulary from 400 Austronesian languages to uncover how the Pacific was settled.

"The Austronesian language family is one of the largest in the world, with 1200 languages spread across the Pacific," says Professor Russell Gray of the Department of Psychology. "The settlement of the Pacific is one of the most remarkable prehistoric human population expansions. By studying the basic vocabulary from these languages, such as words for animals, simple verbs, colours and numbers, we can trace how these languages evolved. The relationships between these languages give us a detailed history of Pacific settlement."

"Our results use cutting-edge computational methods derived from evolutionary biology on a large database of language data," says Dr Alexei Drummond of the Department of Computer Science. "By combining biological methods and linguistic data we are able to investigate big-picture questions about human origins".

The results, published in the latest issue of the journal Science, show how the settlement of the Pacific proceeded in a series of expansion pulses and settlement pauses. The Austronesians arose in Taiwan around 5,200 years ago. Before entering the Philippines, they paused for around a thousand years, and then spread rapidly across the 7,000km from the Philippines to Polynesia in less than one thousand years. After settling Fiji, Samoa and Tonga, the Austronesians paused again for another thousand years, before finally spreading further into Polynesia eventually reaching as far as New Zealand, Hawaii and Easter Island.
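The study's Bayesian phylogenetic methods are far more sophisticated, but the raw material — whether two languages use words from the same cognate class for a basic meaning — can be illustrated simply. The languages and cognate-class labels below are invented; real analyses cover hundreds of meanings across 400 languages.

```python
# Toy cognate-class data: for each language, the cognate class (an integer
# label) of a few basic-vocabulary items. All labels here are invented.
vocab = {
    "lang_A": {"eye": 1, "fire": 1, "two": 1, "bird": 1},
    "lang_B": {"eye": 1, "fire": 1, "two": 2, "bird": 1},
    "lang_C": {"eye": 2, "fire": 3, "two": 2, "bird": 2},
}

def shared_fraction(a, b):
    """Fraction of shared meanings whose words fall in the same cognate class."""
    common = set(a) & set(b)
    return sum(a[m] == b[m] for m in common) / len(common)

print(shared_fraction(vocab["lang_A"], vocab["lang_B"]))  # closely related
print(shared_fraction(vocab["lang_A"], vocab["lang_C"]))  # more distant
```

Matrices of this kind of similarity are the starting point from which phylogenetic methods infer tree topologies and, with calibration dates, the timing of expansion pulses.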

"We can link these expansion pulses to the development of new technology, such as better canoes and social techniques to deal with the great distances between islands in Polynesia," says Research Fellow Simon Greenhill. "Using these new technologies the Austronesians and Polynesians were able to rapidly spread through the Pacific in one of the greatest human migrations ever. This suggests that technological advances have played a major role in the spread of people throughout the world."

The research was funded by the New Zealand Royal Society Marsden fund. The database of Austronesian basic vocabulary can be accessed at: http://language.psy.auckland.ac.nz/austronesian/

Language change can be traced using gigantic text archives

By: PhysOrg
(PhysOrg.com) -- Historical collections that include everything ever written in a dozen American and British newspapers since they started are now available electronically. Donald MacQueen from Uppsala University, Sweden, has carried out the first comprehensive study that makes use of this resource in order to track changes in language usage, a method that makes it possible to attain an entirely new degree of precision in dating.

The gigantic archives contain news and feature articles as well as editorials and commercial and classified advertisements. Together they comprise tens of billions of words. In his dissertation in English linguistics, Donald MacQueen has examined the word million in English, especially how usage shifted from the previously nearly totally dominant "five millions of inhabitants" to today's "five million inhabitants." With the help of these electronic collections of texts that only recently became available, he has succeeded in pinning down when and where the modern expression began to take over.

"When you study the occurrence of uncommon words in smaller corpora (text archives) of one or a few million words, you only get a few examples to analyze. These collections are much larger, and they have enabled me to obtain extremely reliable historical data for one year at a time. In this way I have been able to trace the shift with a precision that was not previously possible in linguistic studies," he explains.

It turns out that the modern construction took over in the American newspapers in the mid-1880s, but in the British newspaper The Times not until the mid-1910s. What is more, the transitional period was shorter in The Times. These circumstances indicate that usage in the American newspapers influenced and accelerated the shift in the British one.

This took place at the height of the British Empire, roughly when the US economy overtook the British for the first time. Apart from the fact that both expressions suddenly began to be used more frequently, MacQueen tentatively attributes the change to people's greater propensity to embrace innovations during periods of severe social crisis, in this case the American Civil War and World War I, respectively. These wars may also have entailed major population movements that influenced usage.

"Another discovery I made, thanks to the huge amount of data, is that when the use of the two constructions began to be roughly equal in frequency, the newspapers chose quite simply to avoid using such constructions, writing numeral expressions instead. After World War II, when there was no longer any doubt which construction was the ‘right' one, the newspapers reverted to writing number-word expressions again," he says.

The dissertation also includes a comparison with languages like French and German, where the corresponding grammatical shift of the word million from noun to ordinary number word has not yet taken place.

"But in the long perspective we can expect this change to occur in those languages as well. The shift is a universal phenomenon when it comes to number words," says Donald MacQueen.

He defended his dissertation at Uppsala University on June 8.

Differences in language-related brain activity affected by sex?

By: Alpha Galileo

Men show greater activation than women in the brain regions connected to language (1), according to researchers from CNRS, Université de Montpellier I and Montpellier III. This work is published in the February 2009 issue of the journal Cortex.

The researchers compared the strength of brain activation in women and men of high and low verbal fluency. They formed two groups of female subjects and two of male subjects, selected for high or low performance on a particular linguistic task (word generation). Each subject in the four groups was then asked to mentally generate as many words as possible beginning with a given letter while being observed by functional magnetic resonance imaging (fMRI). The scans showed that brain regions are activated differently according to sex and to verbal fluency level (the number of words generated).

Independent of the number of words generated, men showed greater activation than women in the classical language regions of the brain. Furthermore, regardless of sex, participants with low verbal skills showed greater activation in one brain zone (the anterior cingulate), whereas highly fluent subjects activated the cerebellum to a greater extent.

The researchers also found combined effects of sex and verbal competence on the strength of activation in particular brain regions:

- The group of men with high verbal fluency, when compared with the three other groups, showed greater activation of two brain regions (the right precuneus and the left dorsolateral prefrontal cortex) and lesser activation of another (the right inferior frontal gyrus);
- In low fluency women, the researchers noticed a greater activation of the left anterior cingulate than in women with high fluency.

By separating out the effects of sex and performance on the strength of brain activation for the first time, this study shows that some brain regions exhibit an effect linked exclusively to the sex of the subject, others an effect linked exclusively to performance, and still others an effect linked to both factors. The authors conclude that studies exploring the neural correlates of verbal fluency with the aim of understanding sex differences must take performance levels into account to obtain conclusive results.