Music and language are two of the most characteristic human attributes, and there has been a surge of recent research interest in investigating the relationship between their cognitive and neural processing (e.g., Carrus et al., 2011; Koelsch and Jentschke, 2010; Maess et al., 2001; Patel et al., 1998a, 1998b). Music and language use different elements (i.e. tones and words, respectively) to form complex hierarchical structures (i.e. harmony and sentences, respectively), governed by a set of rules which determines their syntax (Patel, 1998, 2003, 2012; Slevc et al., 2009). However, analogies between the two domains should be drawn carefully, as grammatical categories (nouns, verbs) and functions (subject, object) have no parallels in music (Jackendoff, 2009; Patel, 2003). Further, musical elements can be played concurrently to form harmony, but this is not the case for language. Current research on music processing and syntax or semantics in language suggests that music and language share partially overlapping neural resources. Pitch also constitutes a common denominator, forming melody in music and prosody in language. Further, pitch perception is modulated by musical training.

The present study investigated how music and language interact on the pitch dimension and whether musical training plays a role in this interaction. For this purpose, we used melodies ending on an expected or unexpected note (melodic expectancy being estimated by a computational model), paired with prosodic utterances which were either expected (statements with falling pitch) or relatively unexpected (questions with rising pitch). Participants' (22 musicians, 20 nonmusicians) ERPs and behavioural responses in a statement/question discrimination task were recorded.

Participants were faster for simultaneous expectancy violations in the melodic and linguistic stimuli. Further, musicians performed better than nonmusicians, which may be related to their increased pitch tracking ability. At the neural level, prosodic violations elicited a front-central positive ERP around 150 ms after the onset of the last word/note, while musicians presented a reduced P600 in response to strong incongruities (questions on low-probability notes). Critically, musicians' P800 amplitudes were proportional to their level of musical training, suggesting that expertise might shape the pitch processing of language. The beneficial aspect of expertise could be attributed to its strengthening effect on general executive functions. These findings offer novel contributions to our understanding of shared higher-order mechanisms between music and language processing on the pitch dimension, and further demonstrate a potential modulation by musical expertise.