Musical Notation for Computers
Current Musical Notation is designed to be used by Performers playing musical instruments. An alternative notation is needed when one wants to use computers to generate a sound file based on the set of instructions provided by a Composer.
When the composition requires the use of several speakers (for example, personal stereo headphones), information about the location of the speakers should be provided to the computer, and it should generate a separate sound file for each speaker.
The sound file should be created using a computer program specially developed for each particular composition. To write such computer programs, we need computer languages and compilers, which transform the Composer's instructions into a computer program.
The combination of a chosen computer language and a chosen programming style we will call Musical Notation. Thus we should expect a few variants of such Musical Notation intended to be used in conjunction with computers.
This approach would require Composers to learn computer programming, which is a considerable effort. However, such effort yields the benefit of proficiency in computer programming, and this opens up additional opportunities (demand for computer programming is growing).
It is in the interest of Composers to select only a small number of computer languages and a small number of programming styles to be used as Musical Notation.
Computer languages chosen to be a part of the new Musical Notation should be selected from among languages whose knowledge is in great demand by industry. This opens more opportunities, and it assures their longevity, the steady improvement of their compilers, and a healthy community of programmers using them. The most promising languages for such a role are the so-called "object oriented" languages, which allow efficient description of various patterns and of "objects" conforming to these patterns.
Currently, Composers create the general structure of the Music, and Performers interpret it according to the audience and circumstances. This cooperation between Composers and Performers has worked well for a long time, and it makes sense to preserve it in this new way of Music creation and performance.
To open up the possibility for
Performers to interpret and adjust the composition, we need to do two things.
First, we need to give Performers not a sound file but a program which generates it. The program should be produced using a compiler as described above.
Second, we need to develop this type of program with an interface which a Performer could use to adjust the output of the program to the conditions of a particular performance. Note that in many cases, a simple configuration file would be sufficient as an interface. Hence, what could be modified for a performance is defined by the Composer, who develops the program and its interface. The program itself is not designed to be modified.
Modern computer languages (especially object oriented languages) allow writing the program in terms similar to the way a person thinks: in terms of patterns, generic and specific. They should provide Composers with a quicker way of writing the Music.
Note that this way of Music development fits naturally into the way software is developed. Developers utilize the ability of computers to transform a conceptual description into an execution unfolding in time, and a configuration file is a natural feature of programs. Composers just need to transfer this experience into the area of Music composing.
The foundation of the musical composition is a set of basic sound patterns and basic ways of manipulating them.
A Composer constructs a composition using Sound Patterns, some of which are well known and some of which are created for that particular composition. For each particular place in the composition, he selects an appropriate pattern and specifies its parameters. These are sound patterns which are either well known or, while not used broadly, used often by this particular Composer. These Sound Patterns are the basic tool of the Composer, and they have to be collected in a special reusable "library" of Basic Sound Patterns. These "libraries" could be shared and could accumulate quickly.
For example, a broadly used Sound Pattern consists of three elements: an initial ramp-up of sound intensity, a period of stable sound intensity (possibly with some degree of oscillation), and a period of decreasing sound intensity. This pattern should be combined with the pattern of the characteristics of the sound. These characteristics could be given as a digital recording of the sound, or as a set of frequencies of air vibration with their relative intensities, etc.
In object oriented computer languages, all this information is included in one "class". In the same class, we need to add functions which could be used to compute samples of the digital representation of the sound corresponding to our settings of its parameters (how long, how loud, etc.).
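As a minimal sketch of such a class (written here in Python; all names are hypothetical and chosen for illustration, not a fixed notation), it could hold the envelope parameters and the set of frequencies together and expose a method that computes the digital samples:

```python
import math

class BasicSound:
    """A sound pattern: intensity ramp-up, a stable period, a fade-out,
    applied to a set of frequencies with relative intensities."""

    def __init__(self, partials, ramp_up, stable, fade_out):
        self.partials = partials  # list of (frequency_hz, relative_intensity)
        self.ramp_up = ramp_up    # seconds of rising intensity
        self.stable = stable      # seconds of stable intensity
        self.fade_out = fade_out  # seconds of falling intensity

    def duration(self):
        return self.ramp_up + self.stable + self.fade_out

    def envelope(self, t):
        # piecewise-linear intensity: up, hold, down
        if t < self.ramp_up:
            return t / self.ramp_up
        if t < self.ramp_up + self.stable:
            return 1.0
        if t < self.duration():
            return (self.duration() - t) / self.fade_out
        return 0.0

    def sample(self, t, loudness=1.0):
        # one sample of the digital representation at time t (seconds)
        wave = sum(a * math.sin(2.0 * math.pi * f * t)
                   for f, a in self.partials)
        return loudness * self.envelope(t) * wave
```

For example, `BasicSound([(440.0, 1.0), (880.0, 0.3)], 0.05, 0.8, 0.2)` would describe a sound of just over one second, built of two partials.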
The collection of such patterns, the Library of Basic Sounds, is an important part of our new Musical Notation.
We should define a few patterns of simultaneous execution and sequential execution of Basic Sounds. These patterns we will provide as "classes" to which we pass parameters. Among these parameters we will have a list of classes (patterns of sound) which we want to be executed in the prescribed manner.
For example, we might want a smooth transition from one sound to another in the generation of a chain of sounds, using special rules of transition from one sound to the next. Alternatively, we might want a few sound patterns to be executed in synchrony: starting together and ending together, with similar ramp-up and fading patterns.
These classes should be a part of
the Library of Basic Sounds.
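Two such combinator classes could look as follows (a sketch building on the hypothetical BasicSound above; the transition rule here is a plain overlap, which is only one of many possible rules):

```python
class Chain:
    """Sequential execution with a smooth transition: each sound starts
    a little before the previous one ends, so adjacent sounds overlap."""

    def __init__(self, sounds, overlap):
        self.sounds = sounds
        self.overlap = overlap  # seconds of overlap between adjacent sounds

    def duration(self):
        total = sum(s.duration() for s in self.sounds)
        return total - self.overlap * (len(self.sounds) - 1)

    def sample(self, t):
        start, value = 0.0, 0.0
        for s in self.sounds:
            local = t - start
            if 0.0 <= local < s.duration():
                value += s.sample(local)
            start += s.duration() - self.overlap
        return value


class Together:
    """Simultaneous execution: sounds start together; giving them similar
    durations and envelopes makes them rise and fade together as well."""

    def __init__(self, sounds):
        self.sounds = sounds

    def duration(self):
        return max(s.duration() for s in self.sounds)

    def sample(self, t):
        return sum(s.sample(t) for s in self.sounds if t < s.duration())
```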
In many musical performances, there are parallel, coordinated streams of sounds. Usually, the sounds of one stream have special characteristics which allow them to be recognized as related. In an orchestra, these are the sounds of one type of instrument. We will create imaginary instruments: groups of sounds with similar properties, i.e. sounds easily recognized as related.
Coordination between different streams of sounds is done by dividing all sounds into blocks confined to strictly defined time intervals. These time intervals are reinforced with accents (execution of the majority of sounds at an even intensity, and execution of selected sounds at a higher intensity).
We need special tools to facilitate this type of organization of parallel streams of sounds. Since composing such coordinated streams is difficult, we also need special tools which allow quick detection of violations of this organization while we are writing the music.
These two tasks could be accomplished with special classes: the Execution Matrix and Time Markers.
We will define an object called the Sound Stream. The Sound Stream is a kind of collection, into which we add objects one by one. We will put Basic Sounds and Pauses into the Sound Stream. As usual, each of these objects should have a defined duration of execution. In addition, we will put into the Sound Stream special objects called Time Markers, each one having a name. Time Markers will not be executed; they will be used to check the coordination of the Sound Streams.
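A sketch of these objects (hypothetical names again; a Pause is silence with a duration, a Time Marker takes no time at all):

```python
class Pause:
    """Silence with a defined duration of execution."""
    def __init__(self, duration):
        self._duration = duration
    def duration(self):
        return self._duration
    def sample(self, t):
        return 0.0


class TimeMarker:
    """Named marker; never executed, used only for coordination checks."""
    def __init__(self, name):
        self.name = name
    def duration(self):
        return 0.0


class SoundStream:
    """An ordered collection of Basic Sounds, Pauses and Time Markers."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)
        return self  # allows chained add(...).add(...) calls

    def time_of(self, marker_name):
        # time needed to reach the named marker, walking the stream
        elapsed = 0.0
        for item in self.items:
            if isinstance(item, TimeMarker) and item.name == marker_name:
                return elapsed
            elapsed += item.duration()
        return None  # this stream has no such marker

    def sample(self, t):
        # intensity at time t; markers take no time and are skipped
        elapsed = 0.0
        for item in self.items:
            d = item.duration()
            if elapsed <= t < elapsed + d:
                return item.sample(t - elapsed)
            elapsed += d
        return 0.0
```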
Sound Streams we will put into another collection: the Execution Matrix.
We will put Time Markers with the same names into different Sound Streams, and the computer will compute how much time it takes to reach them walking along each Sound Stream (including pauses). During composing, we will ask the computer to report any mismatch of this sort. This is very similar to the way composing is done now, and it is easy to learn. In practice, it is somewhat more complicated, because we need to compile the program and to specify in the Configuration File that we want to run this particular check rather than play the music; the program will then produce a file showing all places of mismatch it could find.
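A sketch of this check (with the Execution Matrix assumed to be a named collection of the SoundStream objects sketched above):

```python
class ExecutionMatrix:
    """A collection of named Sound Streams executed in parallel."""
    def __init__(self):
        self.streams = {}

    def add(self, name, stream):
        self.streams[name] = stream

    def check_markers(self, marker_names):
        # report markers reached at different times in different streams
        mismatches = []
        for marker in marker_names:
            times = {}
            for name, stream in self.streams.items():
                t = stream.time_of(marker)
                if t is not None:
                    times[name] = t
            if len(set(times.values())) > 1:
                mismatches.append((marker, times))
        return mismatches
```

Whether to run this check or to play the music would be the switch read from the Configuration File; that plumbing is left out of the sketch.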
As one can see, the Configuration File is a highly useful feature, not only for Performers but also for Composers.
During generation of the sound
files for performance, all Time Markers will be simply ignored.
We will generate the digital sound file from the Execution Matrix in a standard way: the computer will walk along the time-line with a small time-step (defined by the chosen format of the sound file). For each moment, it will compute the value of sound intensity in each Sound Stream and add these values up; that is all.
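A sketch of this walk (the writing of the samples into a concrete file format is left out):

```python
def render(matrix, duration, sample_rate=44100):
    """Walk the time-line with a small step and mix all Sound Streams."""
    step = 1.0 / sample_rate
    samples = []
    for i in range(int(duration * sample_rate)):
        t = i * step
        # the value at this moment is the sum over all Sound Streams
        samples.append(sum(stream.sample(t)
                           for stream in matrix.streams.values()))
    return samples
```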
Using Color-images the way we use sounds in Music is a new undertaking, and we would rather rely on the success of Music/sound and try to replicate it with Music/color. When spectators get accustomed to this new art form, it will expand naturally.
One promising class of Color-images is presented in "Beauty of Moving Color" (on this site). Each image occupies the entire screen; it has an "anchor" point, a place where the intensity of the color is highest, and from that point the intensity drops quickly as the gaze moves away from the "anchor" in any direction. Locations of anchor points are further limited to the nodes of a few imaginary grids which we put on the screen.
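A sketch of such a Color-image (hypothetical names; the quick drop of intensity is modeled here with an exponential falloff, which is only one possible choice):

```python
import math

class ColorImage:
    """Full-screen image whose color intensity peaks at an anchor point
    and drops quickly with distance from it."""

    def __init__(self, anchor, color, falloff):
        self.anchor = anchor    # (x, y), a node of one of the imaginary grids
        self.color = color      # (r, g, b), each component in [0, 1]
        self.falloff = falloff  # larger value -> faster drop of intensity

    def color_at(self, x, y):
        ax, ay = self.anchor
        dist2 = (x - ax) ** 2 + (y - ay) ** 2
        intensity = math.exp(-self.falloff * dist2)
        return tuple(intensity * c for c in self.color)
```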
As we put musical sounds (Sound-images) into a Library of patterns, we will put our Color-images into a similar Library of patterns. A basic pattern in this library should be a set of such Color-images displayed simultaneously, with natural mixing of colors: rising in intensity quickly and in a coordinated manner, staying for a while on the screen, and quickly fading in intensity, also in a coordinated manner. To echo the harmonious accord of a musical sound, the colors of the Color-images in the group should be harmonious as well.
As with Sound-images, we will make a few classes to help us generate new Color-images by grouping Color-images, with either simultaneous execution or smooth overlapping in time.
The Color Stream, the chain of Color-images, we will organize like a Sound Stream, using rhythmic division of its time intervals and accents (a higher intensity of color for some Color-images in the Color Stream).
We will use the Execution Matrix and Time Markers as we use them for sounds; we only add Color Streams to it instead of Sound Streams.
We will generate the digital video file from the Execution Matrix in a way similar to the way we generate the digital sound file: the computer will walk with the specified time-step, and over a specified grid on the screen, and add up the colors of the corresponding Color Streams. We can do that because colors reproduced by a color monitor can be described with a set of three numbers, which should be added as vectors when colors are mixed.
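A sketch of one frame of this walk (it assumes each Color Stream can report its color at a given screen point and time, by analogy with sample(t) for sounds; the components are added as vectors and clipped to the displayable range):

```python
def render_frame(color_streams, t, width, height):
    """Compute one video frame at time t by adding colors as vectors."""
    frame = []
    for y in range(height):
        row = []
        for x in range(width):
            r = g = b = 0.0
            for stream in color_streams:
                cr, cg, cb = stream.color_at(x, y, t)  # assumed interface
                r, g, b = r + cr, g + cg, b + cb
            # clip each component to the displayable range [0, 1]
            row.append((min(r, 1.0), min(g, 1.0), min(b, 1.0)))
        frame.append(row)
    return frame
```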
It is possible that, initially, combining sound and color could be easier to accept than moving color alone: the familiar structuring of the Music/sound could support recognition of a similar structuring of the Music/color.
It is fairly obvious how this could be done from what is described above. However, we could use a programming trick which is often used in object oriented programming. It is possible (a sketch follows below):
- to define one type of Stream, into which we could collect our sound objects and color objects,
- to define one type of Execution Matrix, into which we could collect Sound Streams and Color Streams.
In any case, technically, we have everything in place: we will place Sound Streams and Color Streams in the same Execution Matrix.
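A sketch of the trick (hypothetical names): both kinds of objects share one small common interface, so a single Stream, and in turn a single Execution Matrix, serves sound and color alike:

```python
class Playable:
    """Common interface for both sound objects and color objects."""
    def duration(self):
        raise NotImplementedError
    # a sound object additionally implements sample(t),
    # a color object additionally implements color_at(x, y, t)


class Stream:
    """One type of Stream, collecting any Playable objects."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)
        return self
```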
This new way of creating and performing Music will find its place among the performing arts. The compositions could be performed in movie theaters with large electronic screens and sophisticated sound systems. They will offer never before seen color performances and sound effects, like the illusion of sound coming from the basement or from above the roof. Alternatively, such a performance could be recorded as a movie and repeated many times in many places. On the other hand, one could enjoy these performances in the intimate setting of one's own home.
However, these performances will not provide the immediate connectedness of the Performer and the audience, small variations of performance in reaction to the audience, or even a strong reaction of the Performer to the mood of the audience as a group.
This Music should become a new art form, enriching, not replacing, existing ones.
Alexander Liss
11/30/2019