This interaction design research challenges the lack of non-linear sound design in music games and installations by proposing a range of possible implementations of improvisation-focused interactivity and emergent (invention-focused) gameplay.
Chapter 1 explains my motivation as a composer and designer. Chapters 2, 3 and 4 give an overview of non-linearity in video games. Chapter 5 explores the evolution of music games. Chapter 6 lays out the essence of musical improvisation. Chapter 7 proposes concrete ideas.
1.0 Game Design for Musical Improvisation?
1.1 Can improvisational gameplay make music games more expressive?
1.2 Can emergent (physics-based) gameplay make improvisation more accessible and social?
2.0 Non-Linear Game Mechanics
2.1 Basics (e.g. Path Finding)
2.2 Emergent Gameplay (e.g. Physics, Chemistry)
2.3 Sandbox (e.g. Theatrical Behaviour)
2.4 Open Worlds
3.0 Non-Linear Level Design
3.1 Fractal Art
3.2 Procedural Generation
4.0 Non-Linear Sound Design
4.1 Adaptive Music
4.2 Musical Improvisation as a Byproduct
4.3 Procedurally Generated Music
4.4 AI-Generated Music
5.0 Analysis of Music Games
5.1 Pre-History (e.g. Mozart’s dice game)
5.2 Rhythm Games: Call-and-Response
5.3 Rhythm Games: Dance Games
5.4 Rhythm Games: Other
5.5 Sound Toys
5.6 Playful Sequencers
5.7 Augmented Music
5.8 Interactive Music Installations
6.0 The Essence of Musical Improvisation
7.0 Design Challenges
1.1 Can improvisational gameplay make music games more expressive?
Expressivity in Music Games
Few music games or sound-focused interactive installations aim to truly cultivate their players' artistic expressivity: players usually can't develop their own musical voice (style) within the game, nor can they transform the game's core interactivity (its mechanics), for example by creating new instruments or environments. Could improvisation and/or emergent (transformative) gameplay offer solutions? And could (emergent) gameplay be the key to making expressive, musician-like improvisation accessible to anyone?
One of the core philosophies of playfulness (or play) is the so-called 'risk-reward ratio'. This dynamic isn't always noticeable or quantifiable; it can operate on a primal psychological level, which is why not all games have concretely defined risks (punishments), rewards (scores, prizes) and goals. See for example play-fighting, which is common among humans and other animals. This means we can boil 'risk vs reward' down to simpler terms, 'enjoyment vs challenge': having fun while challenging one's mind and/or body. Seen this way, the idea of play applies to Lego building, jamming, snorkelling, and you name it. Through challenge, play allows for a comfortable confrontation with struggle, chaos and uncertainty. It's a window to the unknown, and thus a path to exploration, discovery, wonder and inspiration, eventually coming full circle by feeding back into the player's motivation. Play is all about feedback loops, just like life.
Though it's not experienced as such, improvisation is rule-based. Bach was a genius improviser because he had internalised a complex 5-voice counterpoint algorithm (among many others, obviously). Freestyle rap is trained methodically as well. Rules can be unwritten and implied: when kids play-wrestle, they instinctively understand non-violence and power balance. Even within rule-based, competitive sports, like chess and football, players improvise, constantly using their (trained) intuition to deal with unexpected situations. Good games are "random" enough to never be fully predictable, like life. It's therefore no surprise that the (evolutionary) essence of human playfulness can be traced back to how a mother 'freestyle babbles' with her baby, through the simple rules of call-and-response communication. This is how she teaches the baby to communicate and to deal with (unexpected) emotions [source: The Origins and Future of Playfulness by Gwen Gordon]. It's a way of making some sense of the randomness (and patterns) of life, and of dealing with them spontaneously.
Like improvisation, emergent gameplay embraces unpredictability. Gameplay 'emerges' whenever a player manages to bend or break the rules of the game without tipping the risk-reward balance over to either side (which would make the game too difficult or too boring). Emergence is like hacking: it aims for ingenuity and questions the original designer's authority. It explores freedom and gives players a bit of design power.
For gameplay to be artistically expressive, musically or otherwise, its experience needs to revolve around control (flow) and exploration (wonder). Control arises from a clear, intuitive understanding of one's actions, whether they yield successful or unsuccessful results. Exploration arises when the game transforms the player's creation, or when the creation affects the game. It is, once again, a factor of unpredictability.
Dance Dance Revolution (DDR), an arcade game played by dancing, is one of my all-round favourite games. But as a composer and designer, I wonder what we can learn from its limitations in order to create new experiences. DDR's core gameplay is linear. On a macro level, that means players can't create or discover different branches of the same composition. On a micro level, it means that all actions are aligned to a rigid rhythmic grid, ignoring the essence of expressivity: micro-improvisation. For a musician, character and individuality stem from how one defies the grid: pushing, dragging and bending notes slightly out of their "intended", "logical" or "calculated" positions.
But aren't precise movements and high scores the appeal of music games? Are players even interested in malleable compositions and intricate expression? Competitiveness surely raises the excitement, but it's not the only way to achieve immersion. "Art games" like Flower have set the stage for mindful exploration without (competitive) goals. Where is Flower's musical counterpart? Improvisation, or even (generative) exploration, could bring a new level of social mindfulness to music games. And social creativity could open up new artistic potential. Funnily enough, a game as simple as karaoke allows players to be expressive and social without the need for an explicitly stated score (though many newer karaoke machines do score singing too).
Sound/music-focused interactive installations face similar social and expressive challenges. How can visitors/players interact artistically and leave their unique marks? How can a social artwork emerge through group improvisation? How can players remix each other's works? Is game design the key to accessible, inclusive forms of improvisation? Many of these questions are still (at least partially) open, and I think that makes them interesting.
Zen philosopher Alan Watts said: "The physical universe is basically playful. There's no necessity for it whatsoever. It isn't going anywhere; that is to say, it doesn't have a destination that it ought to arrive at. But it is best understood by analogy to music, because music as an art form is essentially playful. (…) When Bach writes a line of melody, he doesn't mean anything; he doesn't try to imitate the thundering of horses' hooves, or the sound of streams, or factory whistles and the uprising of the workers. It has no social message; it's pure playing with sound. And for that reason, among others, it's sublime."
– 'Four Ways to the Center' by Alan Watts, via 'HD Philosophy Lectures' on Youtube
Some music aims to tell a story, a chronological narrative that moves through a certain emotional spectrum. Other music appears to have no beginning, no ending, and no well-defined emotions: just trance-inducing patterns that repeatedly evolve and break apart. While the first category (dramatic music) helps us reflect on emotions (and culture?), the latter category (abstract music) perhaps explains our attraction to the quasi-randomness of nature, reconnecting us with our (pre-cultural) origins. But all music possesses some of both perspectives, I believe. Genius composers like Bach were able to weave narrative and non-narrative patterns together into complex yet seamless fusions of drama and spirituality (or abstraction, if you will). In his time, Bach was mainly known as an improviser; his approach to music was both linear (dramatic) and less/non-linear (improvised).
Now of course, we don't only need Bachs; we also need music that exploits emotional bombast (I love the cathartic power of ballads and arias). Equally, we need music that dives deep into the patterns and silence of nature (like calm Korean sanjo music or visceral sub-Saharan African percussion). Awareness of the cultural interactions between narrative (emotional) music and abstract (spiritual) music enables us to inspire each other in unexpected and original ways, to preserve, revive and reinterpret underrepresented forms of music, and to celebrate cultural diversity and interplay. What can interactive, playful design learn from this? Or vice versa: how can we use playful design to explore these cultural mechanics?
Non-linearity, playfulness, improvisation, ingenuity and, finally, innovation. Two of my biggest passions revolve around this philosophy. 1) Since around 2008, I have been improvising electro-acoustic music with classical and traditional instrumentalists. 2) In 2008 I also started studying Composition & Sound Design for Adaptive Systems, which allowed me to explore non-linear interactivity (such as game design used for open-ended, spiritual experiences) and adaptive sound design (which we'll explore further on). It slowly dawned on me how my fascinations for improvised music and improvised gameplay were rooted in the same spiritual origin. With this article, I hope to travel from that origin into an exciting, social, expressive future.
1.2 – Can emergent (physics-based) gameplay make improvisation more accessible and social?
My earliest inspiration for this research dates back ten years. In 2010 I was part of an interdisciplinary team that designed Thresholder, a 10 x 10 meter, 10-player installation. We came up with the idea of "binding" players together in a web of floor-projected quasi-elastic lines, provoking them to exert force on each other's sounds. When a part of the web was stretched too far (beyond its threshold), it would snap, with social consequences. But no musical skills were needed: every sound, including the snapping, was coherent. Exploration and social interaction were the motivators.
The project turned out to be fruitful: people remained engaged and mindful, taking minutes to explore the thresholds of the "web". This video gives an (unfortunately slightly vague) impression of the prototype we launched at Born Digital Festival. By the way, the installation had no physical thresholds, making it truly inclusive and bringing all kinds of people together.
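The core mechanic can be sketched in a few lines of code. This is a minimal, hypothetical reconstruction, not Thresholder's actual implementation: the rest length, threshold ratio and cue names are my own assumptions. A line between two players stretches as they move apart, and beyond a threshold it snaps, which could trigger a sound event.

```python
# Hypothetical sketch of a Thresholder-style elastic line.
# A line between two players stretches as they move apart; beyond a
# threshold it "snaps", which could trigger a (social) sound event.

from dataclasses import dataclass

@dataclass
class ElasticLine:
    rest_length: float      # length (in meters) at which the line is relaxed
    snap_threshold: float   # stretch ratio beyond which the line breaks
    snapped: bool = False

    def update(self, current_length: float) -> str:
        """Return a sound cue based on the current stretch."""
        if self.snapped:
            return "silent"
        stretch = current_length / self.rest_length
        if stretch >= self.snap_threshold:
            self.snapped = True
            return "snap"        # the web breaks, with social consequences
        if stretch > 1.0:
            return "tension"     # pitch/volume could scale with the stretch
        return "relaxed"

line = ElasticLine(rest_length=2.0, snap_threshold=1.8)
print(line.update(2.5))  # stretched but intact -> "tension"
print(line.update(4.0))  # beyond the threshold -> "snap"
```

In a real installation the `current_length` would come from the tracked positions of two players, and each cue would map to a coherent sound, including the snap itself.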
Around 2015, it started to dawn on me how the physics-based game mechanics of Thresholder (its web of lines inspired by elasticity, tension and fragility) showed similarities with a physics-related concept that was gaining traction in the world of popular games: emergent gameplay.
Gameplay “emerges” every time a player discovers a new way to play the game. This can be the result of the player…
– designing an ingenious tactic and/or tool (eureka!)
– achieving a moment of extraordinary agility (damn!)
– making an accidental discovery (wtf?)
So to embrace emergent gameplay, game design should allow these situations to happen as often as possible.
Why do physics play such a big role in this? Physics are both complex and intuitive, predictable and unpredictable. Playful interpretations of physics shape highly dynamic environments in which countless variables affect each other; yet we have an intuitive understanding of the behaviour of tools, objects and characters within that complex environment. The same goes for (quasi-)chemistry and simulated social interactions (which will undoubtedly improve with advancements in AI).
So I started to wonder A) what musical improvisation could learn from developments in emergent gameplay, and B) whether emergent gameplay could be the key in allowing more musical creativity and improvisation in music games (and game music).
2.1 – Non-Linearity & Path-Finding – Insert (Another) Coin
Two important elements of emergent gameplay are tactics (thinking, re-interpreting, designing) and agility (exploring, expressing, improvising), and both have non-linear components. We can explore tactics in a game as simple as Pac-Man (1980): players are free to roam through the maze in directions of their choice, thereby triggering the enemy ghosts to traverse diverse paths as well. Pac-Man's game mechanics don't revolve only around hand-eye coordination; they also incentivise players to come up with ever more clever tactics.
Poetry is a form of non-linear storytelling. Even when a poem expresses a chronological narrative, it aims to be abstract enough for subjectivity: readers have to zig-zag through their minds to come up with an interpretation of their own. Non-linearity is about giving people freedom of interpretation. I think Pac-Man is quite poetic. And philosophical. Do the ghosts have free will? Free will is a matter of context; in this case it's a game AI technique called path finding. The ghosts Inky, Pinky, Blinky and Clyde all have their own personalities that define their path-finding behaviour towards you: Clyde is direct but gets distracted, Inky is smart but gets confused, Blinky is angry and fast, and Pinky is tactical.
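To make 'path finding' concrete, here is a toy sketch, not the actual arcade algorithm: two hypothetical ghost 'personalities' share one breadth-first-search path finder and still behave differently, simply by choosing different target tiles (a direct chaser aims at the player, an ambusher aims a few cells ahead of the player).

```python
# Toy Pac-Man-style path finding (illustrative, not the arcade code):
# each ghost personality picks a different target tile, then follows a
# breadth-first-search (BFS) shortest path towards it.

from collections import deque

GRID = [  # 0 = open corridor, 1 = wall
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 0],
]

def bfs_path(start, goal):
    """Shortest path between two cells, avoiding walls."""
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk the chain back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return []

# Two hypothetical personalities, both starting at the top-right corner:
player = (2, 0)                                        # player's tile
chase_target = player                                  # direct chaser
ambush_target = (player[0], min(player[1] + 2, 4))     # aims 2 cells ahead

print(bfs_path((0, 4), chase_target))
print(bfs_path((0, 4), ambush_target))
```

The two ghosts take visibly different routes through the same maze even though they run identical code; personality lives entirely in the choice of target.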
This next one made me think: what if we left the path-finding 'paths' visible in the game? Could they become musical instruments?
Here’s a flowchart I sketched, illustrating non-linearity in creative gameplay:
– Pac-Man ghost AI explained in the video "Nuclear Fruit: How the Cold War Shaped Video Games" by Ahoy on Youtube
– “Why Pac-Man was light years ahead of its time” by GamesRadar+
– “Pacman Killscreen” by koolkid9997 on Youtube
Emergent Gameplay – Defying the Laws of Nature
Of course, various kinds of games combine agility with tactics, e.g. sports games and first person shooters. Yet, few games revolve specifically around emergent gameplay: the concept of giving players the ability to “design” or improvise their own unique tactics, unforeseen by the game’s designers.
The essence of emergent gameplay is psychological, it’s the player’s feeling of “doing something uniquely creative or ingenious”.
Emergent gameplay can flourish when objects and environment both have highly dynamic and complex properties (variables), such as (playful) interpretations of physics and chemistry. Think of gravity, viscosity, fragility, flexibility, reactivity to water, fire, wind, and so forth. Due to advancements in AI, verbal and physical communication with virtual characters will also become (more) emergent.
“Plans must be devised by the player; it’s not about how clever you are [as a game designer]. (…) Games that don’t offer quote-unquote ‘real choices’ might be just fine. Some people may want a game that’s all about squeezing a virtual trigger, or moving forward like a shark, or solving a puzzle that shows more about how clever the designer is, rather than how clever they are. I’m just more interested in emergence than in scripted adventures… and I believe once players get a taste of that kind of game, it’s very hard for them to go back”
– Warren Spector interview, Gamasutra, 2013 (13 years after designing Deus Ex)
Examples of games specifically designed to push emergent gameplay forward:
– The Incredible Machine (puzzle game, 1993)
– Deus Ex (role-playing action game, 2000)
– Banjo-Kazooie: Nuts & Bolts (action-adventure, 2008)
– Scribblenauts (puzzle game, 2009)
– Infinifactory (puzzle game, 2015)
– Zelda: Breath of the Wild (action-adventure, 2017)
Gameplay can also be emergent in games not primarily aimed at emergent gameplay. See for example 'rocket jumping' (Wikipedia). I remember how exhilarated my friends and I were to discover this in Unreal (1998): blasting a rocket at one's own feet while jumping would propel the player to unforeseen (and undesigned) locations. This required agility: one had to figure out the optimal angle for the explosion, while somehow retaining enough health to survive.
– “Systemic Games – A Design Philosophy” (by alexbolano82 for The Artifice)
– “Scribblenauts Director Explaining Game” (Gametrailers.com via Youtube)
– “The Rise of the Systemic Game” (Game Maker’s Toolkit on Youtube)
– “Breaking Conventions with Zelda: Breath of the Wild” (GDC on Youtube)
– “Spector: Go Emergent, Game Design Is Not All About You” (Gamasutra interview)
Sandbox Creativity (and Theatre)
Games like Garry's Mod (2004) and Minecraft (2011) don't even contain any specific problems that need to be solved. Instead, their sandbox design stimulates creativity and artistry by letting players express themselves (socially) through inventive, improvised play with objects, physics, camera perspectives and NPCs (non-playable characters), as a sort of on-the-fly level-editing experience.
– “Garry’s Mod: Black Hole VS Destructible Town (phys_bibridgeton)” (PieNinja on Youtube)
– “101 Minecraft Build Hacks” (Grian on Youtube)
Open Worlds
About the original The Legend of Zelda (1986), Gamespot said: “Never had a game so open-ended, nonlinear, and liberating been released for the mainstream market, and Nintendo of America was downright concerned that it would go right over the public’s head. Thus, it included a toll-free number that stumped players could call to have a genuine Nintendo employee talk them through any of the game’s many enigmas. Soon after the game’s release, Nintendo’s phone lines were deluged with calls, forcing it to establish gaming’s first major pay hint-service. The Nintendo Game Counselors were then born, and the rest is history.”
Yet there isn't a clear consensus on the definition of an open world. I believe this is because it's a spectrum: non-linearity and linearity can be combined in various ways and to various degrees…
Re-Cap Thus Far
Non-linear storytelling: the variation a game offers in terms of completing challenges in different orders, and how much effect this has on the story: in how many directions can the story branch out?
Bonus play: additional variation a game offers through optional gameplay, for example by rewarding players with extra points for finding hidden objects or executing difficult-to-perform moves.
Sandbox elements: ways in which the game allows the players to leave their artistic (incl. social, theatrical) marks.
I will address the following forms of non-linearity in the next chapters:
Adaptive audio: how the audio transforms along with the player’s behaviour and other variables in the game.
Procedural generation: how landscapes and levels are generated algorithmically, giving them an organic feel.
NPC AI (incl. enemy AI): how do the non-playable characters interact with the players?
These techniques differ from machine learning and deep learning; typical algorithms used are path finding and decision trees.
Here’s a diagram that roughly recaps the distinctions in non-linearity made so far:
I wasn't planning to write about math. It's not my expertise, and that's an understatement. But during this personal research, I was once again reminded how everything, at one point, leads back to math. So without getting too technical, let's have a peek at a geometric art concept that helps us philosophise about the difference between linearity and non-linearity: fractal art.
“Fractal geometry reveals that some of the most austerely formal chapters of mathematics had a hidden face: a world of beauty unsuspected until now.” – Benoit Mandelbrot (The Fractal Geometry of Nature, 1982)
The concept of fractal art is to execute a repeating set of instructions (AKA an iterative algorithm) that feeds parts of its output (drawing) back into its input (recursion).
There is no exact definition of a fractal, but there are enough characteristics (Wikipedia) for us to suggest various genres of fractal art. The vegetable in the image above is, I'm sorry, not a very interesting fractal, as it feels linear and predictable. What is clear about it, though, is recursion: its parts (fractions) look similar to the whole, and this repetition / feedback loop occurred a number of times (iterations).
- step 0: draw a shape on a transparent piece of paper
- step 1: scan the drawing
- step 2: resize the scan, shrinking it to 90%
- step 3: print the scan on a fresh sheet of transparent paper
- step 4: measure the widest part of the scanned drawing in centimeters
- step 5: rotate the scan by an angle of 0.5x the measured centimeters
(examples: 20 x 0.5 = 10 degrees | 18 x 0.5 = 9 degrees | 16.2 x 0.5 = 8.1 degrees)
- step 6: place the rotated scan on top of the last paper
- step 7: return to step 1 (each transforming loop is called an iteration)
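The paper-and-scanner recipe above can also be simulated numerically. A small sketch: each iteration shrinks the drawing to 90% and rotates it by half its current width, so the rotation angle itself evolves every iteration (matching the example numbers in the steps).

```python
# Numerical simulation of the paper fractal above: each iteration
# shrinks the drawing to 90% of its width (step 2) and rotates it by
# 0.5 x its current width in centimeters (step 5). Because the angle
# depends on the shrinking width, the rotation evolves every loop.

def iterate(width_cm: float, steps: int):
    """Yield (width, rotation_angle, cumulative_rotation) per iteration."""
    total_rotation = 0.0
    for _ in range(steps):
        angle = 0.5 * width_cm          # step 5: angle = 0.5 x width
        total_rotation += angle
        yield width_cm, angle, total_rotation
        width_cm *= 0.9                 # step 2: shrink the scan to 90%

for width, angle, total in iterate(20.0, 4):
    print(f"width {width:5.2f} cm -> rotate {angle:4.2f} deg (total {total:5.2f})")
```

Starting from a 20 cm drawing, the first three angles come out as 10, 9 and 8.1 degrees, exactly the example values given in step 5; the cumulative rotation keeps growing, but by ever smaller amounts.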
Both "scaling" and "evolving" fractals are based on that same principle; the only difference is the kind of change they output. Fractals that "scale" find their strength in predictability, rhythm, linearity. Fractals that "evolve" also express a level of predictability, but they captivate us with their unpredictability, replacing rhythm with a sense of organic shapeshifting.
Algorithms (sets of instructions for the computer to follow) are called procedural when they are employed to generate environments with endless, semi-random variation, evolving endlessly like non-linear fractals. Procedural level generation was first used in the Apple II game Beneath Apple Manor (1978), which also happened to be the first commercially available role-playing game (RPG). Randomisation can add an element of surprise to a game, giving it a high 'replay value'. [under construction]
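To make 'procedural' tangible, here is a minimal sketch of one classic technique, a seeded "drunkard's walk" that carves open floor tiles out of a solid grid. This is an illustration of the general idea, not the algorithm Beneath Apple Manor actually used.

```python
# Minimal procedural level generation: a seeded random walk carves
# floor tiles (".") out of a solid wall grid ("#"). A different seed
# yields a different level, which is where the replay value comes from.

import random

def generate_level(width: int, height: int, steps: int, seed: int):
    rng = random.Random(seed)            # seeded: reproducible per level
    grid = [["#"] * width for _ in range(height)]
    r, c = height // 2, width // 2       # start carving from the centre
    for _ in range(steps):
        grid[r][c] = "."                 # carve an open floor tile
        dr, dc = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        r = min(max(r + dr, 0), height - 1)   # stay inside the grid
        c = min(max(c + dc, 0), width - 1)
    return ["".join(row) for row in grid]

for row in generate_level(12, 6, 40, seed=1):
    print(row)
```

Because the walk is seeded, the same seed always reproduces the same level; changing the seed (or using the clock) gives endless variation from the same few lines of code.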
Playstation 4 megahit Horizon Zero Dawn used a procedural placement system to generate rich natural environments. Level designers were able to adjust the generative system and save its output, after which they could make manual changes to the landscape, or adjust the system once again.
– “GPU-Based Run-Time Procedural Placement in Horizon Zero Dawn” (Lecture by Jaap van Muijden at Digital Dragons 2017 on Youtube)
– “Fractals are typically not self-similar” (3Blue1Brown on Youtube)
In many contemporary high-end games, the music seamlessly adapts itself to the player’s current situation, in terms of: health, location, activities within that location (relaxed vs threatening), room acoustics (small toilet vs big church), etc. This kind of adaptive music can work in (at least) three ways:
A. Horizontally: by chopping a composition into loops that can be re-sequenced.
B. Vertically: by adding or removing certain instruments from the aforementioned loops, also known as re-orchestration.
C. Through modulation: by adding direct flexibility to musical parameters (timbre, intensity, speed, etc.) of certain instruments.
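The three techniques can be combined in a single mapping from game state to music. A hedged sketch: the loop names, layer names and danger thresholds below are invented for illustration, not taken from any particular game's audio engine.

```python
# Sketch of a combined adaptive-music mapping (illustrative values):
# A. horizontal re-sequencing, B. vertical re-orchestration,
# C. modulation of continuous parameters.

def adapt_music(danger: float, indoors: bool):
    """Map game state to (loop, instrument layers, modulation params)."""
    # A. Horizontal: pick which loop of the composition to sequence next.
    loop = "combat_loop" if danger > 0.6 else "explore_loop"

    # B. Vertical: add or remove instrument layers (re-orchestration).
    layers = ["pads"]
    if danger > 0.3:
        layers.append("percussion")
    if danger > 0.6:
        layers.append("brass")

    # C. Modulation: bend continuous parameters of the playing layers.
    params = {
        "tempo_scale": 1.0 + 0.25 * danger,   # music speeds up with danger
        "reverb": 0.8 if indoors else 0.2,    # small toilet vs big church
    }
    return loop, layers, params

print(adapt_music(danger=0.7, indoors=True))
```

In a real engine the same function would run continuously, crossfading loops and layers instead of switching them instantly; the point here is only how the three dimensions coexist.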
The first game to use a combined horizontal-vertical adaptive music system, with smooth transitions, was the point-and-click adventure Monkey Island 2 (1991).
The sound design of Tetris Effect (2018, with VR support) pairs every single action (movement, rotation, etc.) with a corresponding sound effect, thereby blurring the line between sound effects and music. GlitchHiker (2011), a game I co-created at the Global Game Jam, also used interactive music in such a way (instant glitching) that additional sound effects weren't necessary.
Musical Improvisation As A Byproduct
Otocky (1987) was a very musical game for the Nintendo Famicom (NES) game console. In this side-scrolling shoot-'em-up, the player could shoot in 8 directions, each of which represented one of the 7 notes of a (Western) minor/major scale, plus the next octave's root note (e.g. do-re-mi-fa-sol-la-si-DO).
This meant that you would simultaneously shoot goals and improvise melodies on top of the background music. Your melodies would automatically be quantised: the timing of each note would be slightly nudged to align it with the rhythmic grid of the background music.
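Both mechanics, mapping shooting directions to notes and quantising their timing, are easy to sketch in code. The scale and grid values below are my own illustrative choices, not taken from Otocky.

```python
# Sketch of Otocky's two mechanics as described above (values invented):
# a shooting direction selects a scale degree, and the note's onset is
# nudged (quantised) to the background music's rhythmic grid.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI: do..si plus next do

def direction_to_note(direction: int) -> int:
    """Map one of 8 shooting directions to a note of the scale."""
    return C_MAJOR[direction % 8]

def quantise(time_sec: float, grid_sec: float = 0.25) -> float:
    """Nudge a note onset to the nearest point of the rhythmic grid."""
    return round(time_sec / grid_sec) * grid_sec

print(direction_to_note(2))   # third scale degree (mi)
print(quantise(1.13))         # slightly early note, snapped to the grid
```

Because every direction lands on a scale note and every onset lands on the grid, whatever the player shoots stays in key and in time, which is exactly what makes the improvisation feel effortless.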
Procedural Music and A.I.
Portal 2 (2011) took it a step further by using real-time procedurally generated music: every action (such as shooting) instantaneously generated a unique bit of music.
Recently the first tech demos of an AI-driven music system for the real-time generation of adaptive music were created by Melodrive (2018).
Music Games – Pre-History
What about music games? A beautiful example is Mozart's Dice Game from 1787 (playable online), for which he wrote 272 measures that can be arranged in any random order (non-linearly). In the video game world, most music games are neither non-linear nor improvisatory; yet there are exceptions and innovations, and we'll get to them soon.
First, I have to mention an invention that might have kickstarted the whole music video game genre.
Karaoke is a Japanese invention from 1971. Initially it didn't include a television screen (see Daisuke Inoue's Juke 8), but as karaoke spread across Asia in the 1980s, it transformed into the karaoke box, also known as KTV (karaoke television): private, sound-proofed karaoke rooms with on-screen lyrics.
Although karaoke binds its players to a relatively linear ruleset (singing the songs as accurately as possible), it does give players room to express themselves creatively; after all, a little improvisation or weirdness can add to the fun and social interaction.
In relatively recent Japanese karaoke systems, players can score skill points based on various elements of singing. A spin-off of this scoring system appeared in Karaoke Joysound (2008) for the Nintendo Wii. The PlayStation game SingStar (2004) visualised the melody as it approached and gave players a live indication of their accuracy.
Music Games: Call-and-Response Rhythm Games
After karaoke, Japan once again gave birth to an interactive music idea that would become hugely popular: rhythm games. Most commonly, the objective of these 1990s games was to follow the rhythmic cues that scrolled across the screen.
Peculiarly, the first rhythm game, PaRappa the Rapper (1996) for the PlayStation, didn't exactly work like that. It did show a stream (sequence) of notes, but it mainly challenged players to repeat the teacher's rhythms, copying him in call-and-response fashion.
Another thing that made PaRappa's game mechanics quite unique was their emphasis on improvisation: by playing extra notes within the rhythmic grid, the player could "beat" the teacher and earn a chance to perform on the "cool mode" freestyle stage.
In 2011, fifteen years after he created PaRappa, designer Masaya Matsuura explained to gaming website Kotaku how much room there still was (and is) for innovation in music game design:
“Strictly speaking I do not believe that ‘Music Games’ as a genre really exists yet. We just have ‘Rhythm Games’. We can’t really grow the genre until we have some games that explore areas of music other than just rhythm. I really want us to help overcome this deficiency. I’d like to for example do something that features extremely accurate musical performance animation. As an example, if we could zoom in and see lifelike fingerwork in an animated pianist, the opportunities afforded to uncover and develop new methods of playing beyond the capabilities of humans would be a crucial step in musical evolution. The ability of ‘games’ to allow us to take part in such advances is crucial.“
Jazz musician Bobby McFerrin plays a melodic game with his audience in the video below. He appeals to our intuitive understanding of the pentatonic (ancient five-note) musical scale. Could we translate this to a video game? Why not, right?
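One way such a translation could work: map the player's position on screen to notes of a pentatonic scale, so that (like McFerrin's jumping audience game) every move lands on a pitch that fits. The scale, screen width and mapping below are my own assumptions, not an existing game's design.

```python
# Hypothetical game mechanic inspired by McFerrin's audience game:
# horizontal position maps to a note of the C major pentatonic scale,
# so any "jump" produces a right-sounding pitch.

PENTATONIC = [60, 62, 64, 67, 69]   # C D E G A as MIDI note numbers

def position_to_pitch(x: float, screen_width: float = 800.0) -> int:
    """Map a horizontal screen position to a pentatonic MIDI note."""
    index = int(x / screen_width * len(PENTATONIC))
    index = min(index, len(PENTATONIC) - 1)    # clamp the right edge
    return PENTATONIC[index]

print(position_to_pitch(0))      # leftmost position: C
print(position_to_pitch(799))    # rightmost position: A
```

Because the pentatonic scale has no dissonant intervals against itself, several players could trigger positions simultaneously and still sound harmonious, which is exactly what makes McFerrin's trick work on an untrained audience.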
In 1997, just one year after PaRappa came out, the rhythm virus spread from the PlayStation to the arcade halls, thanks to Konami's DJ simulator Beatmania. Players now had to bash buttons in sync with scrolling notes. 1998 marked another revolution: Dance Dance Revolution (DDR), the first rhythm game with a dance controller.
The game didn't award players any extra points for improvising, yet that didn't stop a 'DDR freestyle' scene from emerging. Freestylers don't focus on the game's expert levels or extreme scores; they prefer to come up with new choreographies and/or improvisations. Thus DDR reached cult status.
Later games, like Dance Central (2010) and Dance Evolution / Dance Masters (2010), challenged players to dance with their full bodies. Hand and foot gestures appear on the screen right before they have to be executed. These games use the Kinect 3D sensor for full-body tracking. DANCERUSH (2018) continued in that tradition.
Other Rhythm Games
Many other rhythm games would follow: rail-shooter Rez (2001), "traditional" Japanese drum game Taiko no Tatsujin (2001), puzzle game Lumines (2004), Guitar Hero (2005), countless mobile games, motion-controlled games like Wii Music (2008), which included a Custom Jam mode for improv, "piano" game Chunithm (2015) and VR games like Beat Saber (2018).
Frequency (2001), for the PlayStation 2, featured a remix mode that allowed up to four players to improvise collaborative compositions online. Vib-Ribbon (1999) was the first rhythm game to feature an audio-based level generator: it could create new levels from any audio CD you inserted.
Music Games controlled by Real Instruments
In recent years we've seen innovative games that can be controlled by improvising on real (acoustic or electronic) instruments, like Cello Fortress (2012), an action game in which a cellist defends a fortress from four gamepad players who try to destroy it, and SoundSelf (2012), a psychedelic, therapeutic VR experience controlled by chanting (singing).
Music Education Games
Still focused on hitting the right note on the right cue, but somewhat less rhythm-oriented, are music-education games, which let you practise on real instruments. Examples are Rocksmith (2011) for electric guitar and Synthesia (2006) for MIDI keyboard. Robert Nichols hacked Rocksmith to make it work with violin as well.
Sound Toys
Sound toys are arguably similar to sandbox games, or arguably not games at all. Examples are SimTunes (1996), Electroplankton (2005), Björk's Biophilia (2011), the physics-based Musyc (2013), as well as the remix apps ninjaJAMM (2013) and Playground (2017). The table-top inspired ReacTable (2005) can be scaled large enough for multi-player improvisation.
Playful Sequencers
Sentris (2014) is possibly the first true hybrid of a puzzle game and a sequencer (software for writing and producing music). Sequencers outside of gaming have also become more playful and non-linear; see for example the stochastic sequencer SECTOR (2014), the multi-playhead sequencer Fugue Machine (2015) and the complex but playful grid sequencers JR Hexatone (2009) and New Path (2017). The just-released iPad app Gestrument (2018) promises to become a highly customisable interactive music engine/instrument that can be controlled by eye tracking, movement, audio tracking and more.
Augmented Music Apps
RJDJ (2008) is a fascinating augmented-reality music app / sound toy: it can make music out of any sound it picks up around the user, and it can use smartphone sensors to change parameters of the music. RJDJ's interactive compositions (scenes) are programmed in the visual programming language Pure Data.
Interactive Music Installations
When it comes to large (public) interactive installations, there are many examples of interactive music or sound, like the simple but funny Piano Stairs (2009), or the body tracking avant-garde noises of Device Unknown (2018), or the immersive, brainwave (EEG) controlled lasers and sounds of On Your Wavelength (2015). Visual art can be created by groups of people on floor-projected installations like Healing Pool (2008). But it’s rare to find one that combines collaborative interaction (social game design) + music/sound improvisation. Something that comes close is The Music Box Village (2018), a collection of architectural sound toys that can be used by artists and audience to jam with otherworldly (texture based) sounds.
In musical improvisation it's crucial for all players to be aware of each other; only then can one choose how to respond in one of the following ways:
▫️ Silence: giving space to other musicians to place the focus on their output.
▫️ Synchronisation: complementing one or more of the currently playing elements, moving the music forward in its current direction through variation or dynamic changes.
▫️ Recontextualization: transforming one or more of the currently playing elements by playing something around them that gives them a completely different meaning.
▫️ Disruption: playing something that completely disturbs the current atmosphere, thereby giving space for a drastic change of collective course.
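The four response modes above could form the core of a game’s interaction model. Below is a minimal Python sketch of such a model; the enum names come from the list above, but the selection heuristic (`choose_response`, with its `others_active` and `boredom` inputs) is my own illustration, not taken from any existing game or engine:

```python
import random
from enum import Enum, auto

class Response(Enum):
    """The four ways an improviser can respond to the ensemble."""
    SILENCE = auto()            # give space to the other players
    SYNCHRONISATION = auto()    # complement and move the music forward
    RECONTEXTUALIZATION = auto()  # reframe the current material
    DISRUPTION = auto()         # break the atmosphere, force a new course

def choose_response(others_active: int, boredom: float) -> Response:
    """Toy heuristic: give space when the ensemble is busy,
    disrupt when the music has grown too predictable."""
    if others_active >= 3:
        return Response.SILENCE
    if boredom > 0.8:
        return Response.DISRUPTION
    # otherwise either support or reframe the current material
    return random.choice([Response.SYNCHRONISATION,
                          Response.RECONTEXTUALIZATION])
```

In an actual game, `boredom` might be derived from how long the current groove has repeated, and `others_active` from how many players are currently producing sound.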
In many traditions, group improvisation takes place on a micro level: a direct (physical) change in dynamics, melody, harmony, rhythms or choice of instrumentation (timbre). Improvisation occurs when an instrumentalist or a collective spontaneously alters any of those five elements.
But that does not mean that none of those elements may be pre-set. To give an example, jazz improv is traditionally based on the harmonies and rhythms of its well-known repertoire, the jazz standards. And in the case of a drum circle, there is of course no melody or harmony present whatsoever.
Thanks to music technologies – such as recording media, sequencers, samplers, loopers and algorithmic music generators – improvisation can also be done on a more macro level: by manipulating larger chunks of music ahead of their playback. In any case, all music, improvised or composed, can move along the two core axes of music: “relaxed VS stressed” and “stable VS chaotic”.
The momentary state of the individual instruments can be measured in the same way. Some examples:
▫️ Relaxed + Stable = a church choir singing a lengthy harmonic chord
▫️ Relaxed + Chaotic = a hectic jazz drum solo played with very soft brushes
▫️ Stressed + Stable = a demonic sounding organ chord droning on
▫️ Stressed + Chaotic = a fast death metal guitar solo at its climax
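A game could treat these two axes as continuous control inputs and map them onto concrete musical parameters. The sketch below does this in Python; the specific parameter names and mapping formulas are my own illustration, chosen only to show the idea:

```python
def musical_state(tension: float, chaos: float) -> dict:
    """Map the two core axes (each 0.0 to 1.0) onto concrete,
    playable musical parameters. The mapping is illustrative."""
    return {
        "tempo_bpm": round(60 + 120 * tension),        # stressed = faster
        "dissonance": tension,                          # stressed = more dissonant
        "note_randomness": chaos,                       # chaotic = less predictable
        "dynamic_range": round(0.2 + 0.8 * chaos, 2),   # chaotic = wider dynamics
    }

# The four examples above, roughly placed on the plane:
church_choir = musical_state(tension=0.1, chaos=0.1)
brushed_solo = musical_state(tension=0.2, chaos=0.9)
organ_drone  = musical_state(tension=0.9, chaos=0.1)
metal_climax = musical_state(tension=0.95, chaos=0.95)
```

With such a mapping, a player (or an adaptive music system) only needs to steer two intuitive dials instead of dozens of low-level synthesis parameters.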
Apple’s GarageBand uses a similar concept for intuitively composing drum parts: its Drummer feature places a puck on a two-dimensional pad with axes running from soft to loud and from simple to complex.
From the knowledge gathered so far, I have defined a set of challenges for designing music games in 2019 (a work in progress):
▫️ Melody/harmony games inspired by rhythm games: moving away from on-screen rhythmic cues towards actually listening (to randomly generated sequences), with the challenge of responding with suitable notes and timings.
▫️ Rhythm games with a stronger focus on groove and swing, achieving neural rhythmic synchronisation through repetitive patterns; research has shown that none of the current rhythm games achieve this.
▫️ Music games that are equally suitable for beginners, hardcore (high-score) players and a third group, freestylers (DDR is a successful example of this).
▫️ Entertainment-educational crossover games that teach players how musical elements like rhythm, harmony and melody actually work, so they can use that knowledge to become more musical and learn how to improvise.
▫️ Masaya Matsuura’s idea: exploring post-human virtuosity, for example by giving a player supernatural influence over a virtual pianist. Perhaps this also relates to inventing ways to conduct expressively, instead of performing every single note.
▫️ Music games that stimulate players to improvise more, for example through an AI system that can evaluate how well a player is improvising.
▫️ Music games with endless variation of music by implementing procedural (generative/algorithmic) music systems.
▫️ Music games revolving around emergent gameplay, for example letting players build their own instruments in physics-based playgrounds, or through playful sequencers (combinations of games and composition software).
▫️ Music games to improve social contact (including games for the elderly, like the Tovertafel).
▫️ Music games crossed over with sports games, and rhythm games connected with fitness, for example by using weights as controllers.
▫️ Music game elements in serious electronic instruments.
▫️ Music games integrated in architecture / public space.
▫️ Music games suitable for large groups of people.
▫️ Open source music games allowing for co-design.
(reminder: add tarik barri’s exploration)
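One of the challenges above, endless musical variation through procedural (generative/algorithmic) music systems, can be illustrated with a minimal sketch. The scale choice and the random-walk heuristic below are my own illustration, not taken from any specific game:

```python
import random

# Minimal procedural melody sketch: endless variation generated
# by a seedable random walk over the degrees of a fixed scale.
C_MINOR_PENT = [60, 63, 65, 67, 70]  # C minor pentatonic, as MIDI note numbers

def generate_phrase(length=8, seed=None):
    """Generate a melodic phrase as a list of MIDI note numbers.
    The same seed always yields the same phrase (reproducible),
    while a fresh seed yields a new variation every time."""
    rng = random.Random(seed)
    degree = rng.randrange(len(C_MINOR_PENT))
    phrase = []
    for _ in range(length):
        # small steps feel melodic; occasional leaps add surprise
        step = rng.choice([-2, -1, -1, 0, 1, 1, 2])
        degree = max(0, min(len(C_MINOR_PENT) - 1, degree + step))
        phrase.append(C_MINOR_PENT[degree])
    return phrase
```

In a game, the seed could be tied to the player’s position or actions, so that exploring the world literally means exploring an endless space of melodies.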