Yiğit Kolat : (mis)translating data into musical information

Yiğit Kolat’s music explores the liminal frontiers of musical activity and potentialities in (mis)translating data into musical information. The complicated political and social environment of his native Turkey is a recurring theme in his diverse output, which includes acoustic, electro-acoustic, and electronic works written for orchestra, chamber ensembles, voice, and solo instruments.

His works have been recognized by a prestigious array of organizations worldwide, including the Millay Colony for the Arts, the Bogliasco Foundation (2016 Edward T. Cone Bogliasco Fellow in Music), the Toru Takemitsu Composition Award (1st Prize, 2015), the Queen Elisabeth Competition in Belgium (Finalist, 2013), and the Concours International de Composition Henri Dutilleux (2nd Prize, 2012).

His music has been featured throughout the United States, Europe, and Asia by leading ensembles and soloists, among them the Tokyo Philharmonic (Japan); Solistes de L’Orchestre de Tours, Donatienne Michel-Dansac, and Pascal Gallois (France); The Nieuw Ensemble, The Black Pencil Ensemble, and the Duo Mares (The Netherlands); Jonathan Shames, the Talea Ensemble, and the Argento New Music Project (USA); the Athelas Ensemble (Denmark); the Presidential Symphony Orchestra of Turkey; Peter Sheppard-Skaerved and Aaron Shorr (Great Britain). His music has been broadcast by the Japan Broadcasting Corporation (NHK) and Turkish Radio Television (TRT).

Kolat earned his Doctor of Musical Arts degree at the University of Washington, studying with Joël-François Durand. He currently resides in Seattle.


What is your earliest musical memory that, in looking back, has proved to be significant regarding your career as a composer?

One would expect such a memory to be vague and difficult to put a finger on, but in my case it is very clear. My parents got me a Commodore 64 on my fourth birthday, and the first game on the cassette that came with it was RoboCop. Its 8-bit title theme, coming from the mighty SID chip, constitutes the earliest musical experience that made me want to create similar sounds. I don’t know why, out of all the sonic experiences a child might have, this particular one triggered a desire to make music. I vividly remember wandering around, humming whatever I could remember, and trying to add new things to it.

Are there composers who have been influential or relevant regarding your own work? Has this changed over time?

The list of influential composers and musicians is always changing. Influence from certain figures has lost intensity over time; for others, my interest was rekindled after revisiting their works. However, there are permanent members of the list, comprising musicians such as Dutilleux, Grisey, and Sciarrino, whose works always inspire my working process. I can easily say the same for figures “outside the concert hall” such as Ryoji Ikeda, Uwe Schmidt, or Rob Brown and Sean Booth of Autechre. Influence also comes from other fields, especially the culinary arts. I constantly make analogies to cuisine when I talk or think about music, and I’m afraid it will pop up somewhere in these answers as well.

How do you approach the question of “form” especially for longer works?

My recent works begin with a focus on maintaining contrasts between all musical parameters within shorter timeframes and at lower intensities. Particular attention is paid to not allowing any micro- or macro-structure to leave a trace in short-term memory. However, as the piece unfolds, things tend to change: large contrasts enter to stir the attention, and structures ossify into patterns and become more memorable.

Memorability can be thought of in relation to expressive directness; however, for me directness is not an ideal, it is simply a formal device. The level of directness can be controlled to carry longer musical narratives effectively. This is by no means a new compositional device, although there is always room to explore both indirect and direct modes of expressivity further.

Nevertheless, there is no objective measure for the memorability of a musical material, and one needs to ask: memorable for whom? Our sensitivity to the intensity of information governs how much of it is retained in our memory, and this differs greatly from person to person. Some people have the palate of a sommelier, others are happy with any wine that goes with the meal, and then there are many in between. What I am interested in is creating environments that can calibrate the listener’s sensitivity to a certain level, no matter what her normal level may be. If the listener’s working memory cannot cling to anything, due to the lack of repetition and of local structural hierarchy, anything more direct will create an amplified, visceral response. An example would be a simple foot-stomping gesture I used in Shōbute (2017), which creates a considerable dramatic effect after the listener’s mind has been calibrated to a flowing yet expressively indirect musical environment.

Messenger of Sorrows (2016) exemplifies a recent approach to form in a long piece. The piece features two independent structures unfolding in tandem. One of these structures is a folk song that underwent a process analogous to that of Alvin Lucier’s I am sitting in a room (1969). According to several audience members who were present at the première, unveiling the folk song throughout the piece introduced an element of curiosity, and the need to figure out what would come out of the process dramatically supported the overall flow of the large formal structure, which typically lasts 35-40 minutes.

Would you mind speaking a little concerning your working process, i.e., do you have a regular schedule for writing; do you use a computer for composing (either for creating pre-composition materials or notation), if so, do you find that it inhibits your process? What other technology, if any, do you use?

I’ve finally managed to set a regular schedule for composing, taking advantage of getting up very early. I use a computer for all phases of the compositional process, though I always keep a notebook around to jot down ideas when I’m away from the computer.

The backbone of my recent work is a notation program I wrote in the Python programming language. Applying an object-oriented programming paradigm to musical notation, and being able to define notational parameters as variables or data structures, opened up uncharted compositional territories for me. For example, I use it to export and import data to and from other programming environments, such as SuperCollider or the Arduino IDE, in order to create structural connections between the acoustic and electronic materials. Data might also stem from an extra-musical source and be mapped onto musical or notational parameters, allowing the composer to work with extra-musical material more directly than by constructing metaphorical associations or following a narrative-based approach.
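The idea of treating notational parameters as data structures can be sketched roughly as follows. This is a minimal illustration, not Kolat’s actual program; the class names, fields, and JSON export format are all hypothetical.

```python
# Hypothetical sketch of notational parameters as data structures.
# All names and the export format are illustrative, not Kolat's code.
from dataclasses import dataclass, asdict
import json

@dataclass
class NoteEvent:
    pitch: float       # MIDI note number; fractions allow microtones
    duration: float    # in quarter notes
    dynamic: str       # e.g. "pp", "mf"
    articulation: str  # e.g. "staccato", "tenuto"

@dataclass
class Measure:
    time_signature: tuple
    events: list

def export_measure(measure):
    """Serialize a measure as JSON so another environment
    (e.g. SuperCollider, via a file or network message) can read it."""
    return json.dumps({
        "time_signature": list(measure.time_signature),
        "events": [asdict(e) for e in measure.events],
    })

m = Measure((3, 4), [NoteEvent(60.5, 1.0, "pp", "staccato"),
                     NoteEvent(67.0, 2.0, "mf", "tenuto")])
serialized = export_measure(m)
```

Because every parameter lives in an ordinary data structure, the same object can be rendered as notation, exported to an audio environment, or overwritten by incoming data.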

It’s important to emphasize that my motivation behind the use of data is not making easily marketable interconnections between the source and the music (e.g. the sort of thing often featured in articles titled along the lines of “Composer makes music with climate change data”). The idea is to challenge what my culturally conditioned mind deems to be the right move in the composition. The program outputs a measure of music in full-fledged notation each time the code is run, and these pieces of notation function as suggestions for the next step in the piece. Taking inspiration from this raw output rather than listening to my (again, conditioned) intuition is exciting, and often takes me to territories I wouldn’t be able to explore myself. It’s also fun to go deeper into the realm of intuition until the next stream of data steers the music away from the comfort zone. Ultimately, the compositional process becomes a battle between intuition and the dataset: neither comes off victorious.
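The “one suggested measure per run” workflow can be sketched as a generator that consumes the next slice of an extra-musical data stream on each call. The mapping rules and names here are invented for illustration only.

```python
# Hypothetical sketch: each call consumes the next chunk of a data stream
# and proposes one measure. The mapping is illustrative, not Kolat's.
import itertools

def measures_from_data(stream, chunk=4):
    """Yield one 'suggested measure' at a time: a list of
    (pitch, duration) pairs derived from `chunk` data values."""
    it = iter(stream)
    while True:
        values = list(itertools.islice(it, chunk))
        if not values:
            return
        # Arbitrary mapping: value -> pitch class band and duration class.
        yield [(48 + v % 24, 0.25 * (1 + v % 3)) for v in values]

data = [17, 3, 42, 8, 5, 29, 11, 0]   # stand-in for any extra-musical data
suggestions = measures_from_data(data)
first = next(suggestions)  # the composer treats this output as a suggestion
```

Each run of the generator hands the composer raw material to accept, repurpose, or reject, which is what keeps the process a negotiation between the dataset and intuition rather than a one-way sonification.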

Please describe a recent work.


In March 2016, the computer program AlphaGo defeated Lee Se-dol, a South Korean Go player of 9-dan rank, in the first game of the Google DeepMind Challenge Match. AlphaGo’s victory marked a monumental achievement in the field of artificial intelligence, and immediately revived age-old questions about the nature of artificial and “natural” intelligence. The main inspiration for the piece comes from these unanswered questions.

The piece is based on two layers of processing, a machine layer and a human layer. The machine layer consists of mapping data from the game log of the Se-dol vs. AlphaGo game onto musical parameters and eventually into musical notation, using custom-written notation software. The Noh voice part is based on the black moves (Se-dol), while the piccolo part is based on the white moves (AlphaGo). The human layer consists of spontaneous repurposing, alteration, and amplification of the machine layer. Additionally, commentaries on the game by various Go masters both provided the text and shaped the dramatic contour of the music. The machine-human dichotomy, in terms of certainty and uncertainty, or calculation and intuition, also exists in the two modes of performance: each musician uses two types of notation that differ in how specifically they communicate the intended musical result. One of these notational systems is based on the yowagin scale often employed in Noh theater.
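The machine layer described above, splitting a game log into black and white move streams and mapping each move onto musical parameters, might look something like the following sketch. The SGF-style coordinates are real Go notation, but the mapping rules, function names, and pitch set are hypothetical, not those used in Shōbute.

```python
# Illustrative sketch of mapping Go moves to musical parameters.
# The mapping rules and pitch set are hypothetical, not Shobute's.

def split_moves(sgf_moves):
    """Split a move list like [('B', 'pd'), ('W', 'dp')] into
    black (Se-dol) and white (AlphaGo) streams."""
    black = [coord for color, coord in sgf_moves if color == 'B']
    white = [coord for color, coord in sgf_moves if color == 'W']
    return black, white

def coord_to_parameters(coord, scale):
    """Map a two-letter SGF coordinate to a (pitch, duration) pair:
    the column selects a scale degree, the row a duration class."""
    col = ord(coord[0]) - ord('a')   # 0..18 on a 19x19 board
    row = ord(coord[1]) - ord('a')
    pitch = scale[col % len(scale)]
    duration = 0.25 * (1 + row % 4)
    return pitch, duration

# Black moves feed the Noh voice part, white moves the piccolo part.
pitch_set = [62, 65, 67, 69, 72]  # hypothetical five-note pitch set
moves = [('B', 'pd'), ('W', 'dp'), ('B', 'cd'), ('W', 'qp')]
black, white = split_moves(moves)
noh_line = [coord_to_parameters(c, pitch_set) for c in black]
piccolo_line = [coord_to_parameters(c, pitch_set) for c in white]
```

The point of such a mapping is that the board geometry, not the composer’s habits, decides the raw pitch and rhythm material, which the human layer then repurposes by ear.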

Shōbute is a Go term signifying a risky play employed in an attempt to restore balance when one is at a disadvantage. As a tactic involving considerable risk, shōbute is an emotionally charged choice, symbolizing an inherently human aspect of the game.
