In this poster I discuss the orchestrator’s need for, and possible uses of, a homogeneity measure. I also introduce a way to give homogeneity a numerical value with a method borrowed from data science. I point out the benefits of and caveats to using this data science method on timbre data, and explain how it is implemented in the Score-Tool app, a computer program for the psychoacoustic analysis of musical scores. The Score-Tool app is part of my doctoral project at Uniarts.
For a composer or orchestrator, the need to know the homogeneity of an orchestration can have technical or artistic reasons. From a technical point of view, there may be a need to obtain the maximum possible blend, and because like timbres tend to blend better than contrasting timbres, a homogeneous orchestration is an advantage. Another technical need arises when determining the audibility of a target with the Score-Tool method: the homogeneity parameter can be used to predict whether the target has a timbre distinct from the rest of the orchestration. If the orchestration is highly heterogeneous, the target timbre, despite high audibility in terms of masking, could be hard to pick out from the wide variety of timbres.
The need for a homogeneity parameter can also be artistic. For example, when orchestrating a passage I have often felt the need to compose a chord with a highly homogeneous sound. A homogeneous sound may be needed, for example, to move the focus of the music from orchestration to harmony. The need can also be the opposite: to use as heterogeneous an orchestration as possible to attract the listener’s attention when there is not much happening in the harmony.
The concept of timbre homogeneity appears every now and then in casual discussions with colleagues, though not necessarily under that exact term. I often hear arguments like “I like composing for string quartet because of its uniform timbre”, “It’s easy to compose for a monochromatic ensemble”, or “Composing for wind band is hard because of the heterogeneity of its timbre”. I agree with these arguments, and the problem becomes even more complex with a full orchestra.
The homogeneity of timbre is rarely addressed as a parameter in orchestration handbooks, orchestration literature, or even orchestration teaching. The timbral variety within an ensemble could be thought of as part of the composer community’s tacit knowledge, and the ability to orchestrate homogeneously or heterogeneously is gained little by little through experience and through discussions with more experienced colleagues.
In the Score-Tool app I present the possibility of checking the homogeneity of an orchestration with an algorithm borrowed from data science. In data science, especially business-related data science, researchers try to find formulas that could predict future development. One subclass of this practice is the demographic approach, the study of relationships between kinship structures. The most commonly used measure of demographic heterogeneity is the coefficient of variation (CV).
The reason I became interested in using the CV method in Score-Tool is that the CV measures the variability of a series of numbers independently of the unit of measurement used for those numbers.
In the Score-Tool app I use an MFCC (mel-frequency cepstral coefficient) vector as a measurement of timbre. The MFCC values are unitless; applying the CV formula to them is thus a natural choice.
In the Score-Tool method, MFCC timbre data is obtained both from the timbre of the whole orchestration and from the timbres of the individual instruments participating in it. This makes it possible to explore the components of the orchestration timbre in relation to the overall timbre. From the MFCC of the overall timbre alone there is no way to tell its homogeneity; the only option is to compare the timbre components, i.e. the MFCC components.
Comparing the MFCC components of an orchestration can be done in several ways, because the data itself is consistent: every MFCC vector consists of 12 values, and there is no missing data, because the MFCC algorithm always outputs all values, even when a correlation is zero.
The usual methods for comparing consistent data are the mean, the variance, and the standard deviation. MFCC values, having no unit, also have no actual scale, because they result from a correlation. A characteristic of the variance and the standard deviation is that they are sensitive to the scale on which the variables are measured: if all values are multiplied by a constant c, the standard deviation increases by a factor of c as well (and the variance by c²). One solution to this problem is to use the CV. In machine learning applications, the CV indicates the constancy of the system (data).
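The scale sensitivity described above, and the CV’s immunity to it, can be illustrated with a short sketch using Python’s standard library (generic example data, not Score-Tool’s actual code):

```python
import statistics

def cv(values):
    """Coefficient of variation: population standard deviation / mean."""
    return statistics.pstdev(values) / statistics.mean(values)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
scaled = [10.0 * x for x in data]  # multiply every value by c = 10

# The standard deviation follows the scale of the data ...
print(statistics.pstdev(data))       # 2.0
print(statistics.pstdev(scaled))     # 20.0
# ... the variance grows by the square of the factor ...
print(statistics.pvariance(data))    # 4.0
print(statistics.pvariance(scaled))  # 400.0
# ... but the CV is unchanged by scaling.
print(cv(data))                      # 0.4
print(cv(scaled))                    # 0.4
```

Because the scale factor cancels in the ratio σ/μ, the CV depends only on the relative spread of the values.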
Formula for the calculation of the coefficient of variation: CV = σ / μ (the standard deviation divided by the mean).
The mathematical definition of the CV is simple, the standard deviation divided by the mean, which is also a preference for me, because the computational part of the Score-Tool app is in danger of becoming heavy. Luckily, the CV supports the design of computationally efficient single-pass algorithms thanks to its elegant algebraic properties. In the MFCC’s case there is, however, a minor concern about the shape of the data, because the values used to compute the CV are assumed to always be positive or zero. Moreover, the coefficient of variation is undefined when the mean is zero, and it is unbounded as the mean approaches zero. There is, in other words, a danger of getting bad values if the mean is low.
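As a sketch of such a computationally light implementation (one way to do it, not necessarily Score-Tool’s), the mean and variance can be accumulated in a single pass with Welford’s online update, so the data never needs to be stored or traversed twice:

```python
def cv_single_pass(values):
    """One-pass coefficient of variation via Welford's online update."""
    n = 0
    mean = 0.0
    m2 = 0.0  # running sum of squared deviations from the current mean
    for x in values:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    if n == 0 or mean == 0.0:
        # The CV is undefined for empty data or a zero mean.
        raise ValueError("CV undefined: empty data or zero mean")
    return (m2 / n) ** 0.5 / mean  # population std. deviation / mean

print(cv_single_pass([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # ≈ 0.4
```

The zero-mean guard makes the undefined case explicit instead of silently dividing by zero.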
The MFCC values cause trouble because they can be negative or positive, depending on whether the cepstrum correlates with the cosine wave or not. There are similar cases in data science where, for example, only data with positive values are used, and other methods are applied to data that is not suitable for the CV formula. In the demographic approach, for example, Sørensen advises that a better course of action for organizational demographers would be to enter the components of the coefficient of variation into their models separately; in other words, to use the CV formula only when the data allows it.
Another problem is that the CV is normally computed from values on a ratio scale. In the field of machine learning, however, there are newer ways to calculate the CV for non-positive data as well. One possibility is translating the data to a scale with only positive numbers. Bindu et al. showed in their article that while the scale of the data has no effect on the CV, a translation of the data influences the CV exponentially. This can be avoided if the data is normalized, because translation and scale have no effect on normalized data. Bindu et al. also advise that, to avoid the non-existence of the CV, the data should be encoded in the strictly positive zone, and further, that the range of the normalization should be brought to [1, 2]. I like this approach more than Sørensen’s, which involves the need for an additional formula. My approach is therefore to translate the MFCC values to the positive zone and apply normalization.
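A minimal sketch of this normalization step, assuming min–max scaling into [1, 2] and a made-up 12-value MFCC-like vector (the vector and the rounding are illustrative, not Score-Tool’s actual data or code):

```python
import statistics

def normalize_to_1_2(values):
    """Min-max normalize into [1, 2], a strictly positive range,
    so that the CV always exists and the mean stays well above zero."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.5] * len(values)  # flat data: any constant in [1, 2]
    return [1.0 + (x - lo) / (hi - lo) for x in values]

# Hypothetical MFCC-like vector with mixed signs.
mfcc = [-12.3, 4.1, -0.7, 6.2, 1.5, -3.8, 0.0, 2.9, -1.1, 5.0, -6.4, 3.3]
pos = normalize_to_1_2(mfcc)

cv = statistics.pstdev(pos) / statistics.mean(pos)
cv_percent = min(100.0 * cv, 100.0)  # cap so the result never exceeds 100 %
print(round(cv_percent, 1))
```

Because every normalized value lies in [1, 2], the mean is at least 1, so the division is always well defined.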
The values obtained with the CV formula are usually fractions. In research, the CV is often presented as a percentage, obtained by multiplying the CV value by 100. I also use the percentage presentation, because it is more readable than a fraction. A notable thing to remember is that the CV can exceed 1, making the “percentage” exceed 100, which might confuse the Score-Tool user. In the app code, the value is therefore capped: every value over 100 percent is reported as the maximum of 100 percent.
Scale of homogeneity used in Score-Tool orchestration analysis program
| CV value (%) | Attribute in statistics | Attribute in Score-Tool |
| --- | --- | --- |
| 0–5 | highly consistent | highly homogeneous |
| 6–15 | moderately consistent | moderately homogeneous |
| 16–33 | weakly consistent | weakly homogeneous |
| 34–66 | weakly inconsistent | weakly heterogeneous |
| 67–100 | moderately inconsistent | moderately heterogeneous |
| over 100 | highly inconsistent | highly heterogeneous |
The final step in determining the homogeneity of an orchestration by comparing MFCC vectors is setting the limits for homogeneity and heterogeneity. As a reference, Bindu et al. provide in their book a set of CV values that fall into certain categories of data consistency. In the case of MFCCs, consistency is interpreted as homogeneity. Note that the scale then reads in reverse: a lower CV value means more homogeneity.
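The banding above can be sketched as a simple lookup that maps a CV percentage to the Score-Tool attribute, using the limits from the table in this poster (a sketch, not the app’s actual code):

```python
def homogeneity_label(cv_percent):
    """Map a CV percentage to the Score-Tool homogeneity attribute,
    following the band limits shown in the poster's table."""
    bands = [
        (5.0,   "highly homogeneous"),
        (15.0,  "moderately homogeneous"),
        (33.0,  "weakly homogeneous"),
        (66.0,  "weakly heterogeneous"),
        (100.0, "moderately heterogeneous"),
    ]
    for upper, label in bands:
        if cv_percent <= upper:
            return label
    return "highly heterogeneous"

print(homogeneity_label(3.2))   # highly homogeneous
print(homogeneity_label(48.0))  # weakly heterogeneous
```

Because the bands are checked from the lowest upper limit upward, each CV value falls into exactly one category.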
As a final word on the subject, the CV is by no means the only, or even the best, measure of homogeneity. Even data scientists state that in some cases using the coefficient of variation may lead to incorrect conclusions about empirical phenomena. In business or healthcare this may lead to fatal errors, but luckily, in orchestration the worst that can happen is that the orchestrator learns more and perhaps starts to develop a more accurate homogeneity model.
Pulkkis is a Finnish contemporary composer whose specialty is combining mathematics and computer science with orchestration practice. His latest large-scale compositions include the opera “I väntan på en jordbävning” (2019), the orchestral work “Lagrangian point” (2018), and the hour-long symphony “Maamme” (2018). In addition, Pulkkis’s composition catalogue contains over 40 orchestral works and 5 operas, and his works have received several international prizes, such as the Gustav Mahler, Paris Rostrum, and Queen Elizabeth prizes at the turn of the century. Currently Pulkkis is finishing his doctoral project at Uniarts Helsinki and composing two operas, “All the truths we cannot see” and “Raatteen tie”, which are scheduled to be performed after the pandemic.
Contact author: uljas pulkkis (at) uniarts fi
Uljas Pulkkis’s orchestration analysis tool: Score-Tool
Siba Research days 2021 website