Computer music
Computer music is the application of computing technology in music composition, to help human composers create new music or to have computers independently create music, such as with algorithmic composition programs. It includes the theory and application of new and existing computer software technologies and basic aspects of music, such as sound synthesis, digital signal processing, sound design, sonic diffusion, acoustics, and psychoacoustics. The field of computer music can trace its roots back to the origins of electronic music, and the very first experiments and innovations with electronic instruments at the turn of the 20th century.
In the 2000s, with the widespread availability of relatively affordable home computers with fast processors, and the growth of home recording using digital audio recording systems ranging from GarageBand to Pro Tools, the term is sometimes used to describe any music created using digital technology.
History
Much of the work on computer music has drawn on the relationship between music theory and mathematics, a relationship which has been noted since the Ancient Greeks described the "harmony of the spheres". The world's first computer to play music was CSIRAC, which was designed and built by Trevor Pearcey and Maston Beard in the 1950s. From the very early 1950s, mathematician Geoff Hill programmed the CSIRAC to play popular musical melodies. In 1951 it publicly played the "Colonel Bogey March",[1] of which no known recordings exist. However, CSIRAC played standard repertoire and was not used to extend musical thinking or composition, as is current computer-music practice.
The oldest known recordings of computer-generated music were made in the autumn of 1951 on the Ferranti Mark 1 computer, a commercial version of the Baby machine from the University of Manchester. The music program was written by Christopher Strachey and Alan Turing, with an analog interface designed by Turing. During a session recorded by the BBC, the machine managed to work its way through "Baa Baa Black Sheep", "God Save the Queen" and part of "In the Mood".[2][3]
Two further major 1950s developments were the origins of digital sound synthesis by computer, and of algorithmic composition programs beyond rote playback. Max Mathews at Bell Laboratories developed the influential MUSIC I program and its descendants, further popularising computer music through a 1963 article in Science.[4] Amongst other pioneers, the musical chemists Lejaren Hiller and Leonard Isaacson worked on a series of algorithmic composition experiments from 1956 to 1959, manifested in the 1957 premiere of the Illiac Suite for string quartet.[5]
In Japan, experiments in computer music date back to 1962, when Keio University professor Sekine and Toshiba engineer Hayashi experimented with the TOSBAC computer. This resulted in a piece entitled TOSBAC Suite, influenced by the Illiac Suite. Later Japanese computer music compositions include a piece by Kenjiro Ezaki presented during Osaka Expo '70 and "Panoramic Sonore" (1974) by music critic Akimichi Takeda. Ezaki also published an article called "Contemporary Music and Computers" in 1970. Since then, Japanese research in computer music has largely been carried out for commercial purposes in popular music, though some of the more serious Japanese musicians used large computer systems such as the Fairlight in the 1970s.[6]
Early computer-music programs typically did not run in real time. Programs would run for hours or days, on multimillion-dollar computers, to generate a few minutes of music.[7][8] One way around this was to use a 'hybrid system', in which a microprocessor-based sequencer controls an analog synthesizer; the most notable example was the Roland MC-8 Microcomposer, released in 1978.[6] John Chowning's work on FM synthesis from the 1960s to the 1970s allowed much more efficient digital synthesis,[9] eventually leading to the development of the affordable FM synthesis-based Yamaha DX7 digital synthesizer, released in 1983.[10] In addition to the Yamaha DX7, the advent of inexpensive digital chips and microcomputers opened the door to real-time generation of computer music.[10] In the 1980s, Japanese personal computers such as the NEC PC-88 came installed with FM synthesis sound chips and featured audio programming languages such as Music Macro Language (MML) and MIDI interfaces, which were most often used to produce video game music, or chiptunes.[6] By the early 1990s, the performance of microprocessor-based computers reached the point that real-time generation of computer music using more general programs and algorithms became possible.[11]
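The efficiency Chowning exploited is easy to see in code: modulating the phase of one sine oscillator with another produces a rich spectrum of sidebands from only two sine evaluations per sample. The following is a toy two-operator FM sketch in plain Python, not any historical implementation; the sample rate and parameter values are illustrative assumptions:

```python
import math

SAMPLE_RATE = 44100  # samples per second (an assumed CD-quality rate)

def fm_tone(carrier_hz, modulator_hz, mod_index, seconds):
    """Two-operator FM: the modulator deviates the carrier's phase.

    mod_index controls timbral brightness: larger values spread energy
    into more sideband pairs around the carrier frequency.
    """
    n = int(SAMPLE_RATE * seconds)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        phase = (2 * math.pi * carrier_hz * t
                 + mod_index * math.sin(2 * math.pi * modulator_hz * t))
        samples.append(math.sin(phase))
    return samples

# A bell-like tone: a non-integer carrier/modulator ratio yields
# inharmonic partials, a classic FM timbre.
tone = fm_tone(carrier_hz=200.0, modulator_hz=280.0, mod_index=5.0, seconds=0.5)
```

Because each sample needs only two sine evaluations, regardless of how many audible partials result, FM was far cheaper than additive synthesis, which needs one oscillator per partial.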
Interesting sounds must have a fluidity and changeability that allows them to remain fresh to the ear. In computer music this subtle ingredient is bought at a high computational cost, both in terms of the number of items requiring detail in a score and in the amount of interpretive work the instruments must produce to realize this detail in sound.[12]
Research
Although computer music is now ubiquitous in contemporary culture, there is still considerable activity in the field, as researchers continue to pursue new computer-based synthesis, composition, and performance approaches. Throughout the world there are many organizations and institutions dedicated to the study and research of computer and electronic music, including the ICMA (International Computer Music Association), IRCAM, GRAME, SEAMUS (Society for Electro-Acoustic Music in the United States), CEC (Canadian Electroacoustic Community), and a great number of institutions of higher learning around the world.
Music composed and performed by computers
Computer-generated scores for performance by human players
Melomics, a research project from the University of Málaga, Spain, developed a computer composition cluster named Iamus, which composes complex, multi-instrument pieces for editing and performance. In 2012 Iamus composed a full album, appropriately named Iamus, which New Scientist described as "the first major work composed by a computer and performed by a full orchestra".[13] The group has also developed an API for developers to utilize the technology, and makes its music available on its website.
Machine improvisation
Machine improvisation uses computer algorithms to improvise on existing music material, usually through sophisticated recombination of musical phrases extracted from existing music, either live or pre-recorded. To achieve credible improvisation in a particular style, machine improvisation uses machine learning and pattern-matching algorithms to analyze existing musical examples. The resulting patterns are then used to create new variations "in the style" of the original music, developing a notion of stylistic reinjection.
Statistical style modeling
Style modeling implies building a computational representation of the musical surface that captures important stylistic features from data. Statistical approaches are used to capture the redundancies in terms of pattern dictionaries or repetitions, which are later recombined to generate new musical data. Style mixing can be realized by analysis of a database containing multiple musical examples in different styles. Machine improvisation builds upon a long musical tradition of statistical modeling that began with Hiller and Isaacson's Illiac Suite for String Quartet (1957) and Xenakis's use of Markov chains and stochastic processes. Modern methods include the use of lossless data compression for incremental parsing, prediction suffix trees, and string searching by the factor oracle algorithm (a factor oracle is a finite-state automaton constructed incrementally in linear time and space).[14]
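The Markov-chain idea can be illustrated with a toy first-order model over pitch names: count which note follows which in a corpus, then random-walk the resulting transition table. This is a deliberately simplified sketch; practical systems model much richer musical surfaces than bare pitch names:

```python
import random
from collections import defaultdict

def train_markov(notes):
    """First-order Markov model: record which note follows which."""
    transitions = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        transitions[a].append(b)
    return transitions

def improvise(transitions, start, length, seed=0):
    """Random walk over the transition table, 'in the style' of the input."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:                 # dead end: restart from any seen context
            choices = list(transitions)
        out.append(rng.choice(choices))
    return out

corpus = ["C", "E", "G", "E", "C", "E", "G", "C"]
model = train_markov(corpus)
melody = improvise(model, start="C", length=8)
```

Because transitions are sampled in proportion to their frequency in the corpus, common note pairs recur in the output while never reproducing the source verbatim, which is the essence of the "stylistic reinjection" idea described above.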
Implementations
The first system implementing interactive machine improvisation by means of Markov models and style modeling techniques was the Continuator, developed by François Pachet at Sony CSL Paris in 2002[15][16] based on earlier work on non-real-time style modeling.[17][18] A MATLAB implementation of factor-oracle machine improvisation is available as part of the Computer Audition toolbox.
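The incremental factor-oracle construction underlying such implementations is compact enough to sketch. The following is a minimal Python rendering of the standard online algorithm (a sketch for illustration, not the Computer Audition toolbox code): every factor (substring) of the input is readable along some path from state 0, and the suffix links let an improviser jump between contexts sharing a common suffix:

```python
def factor_oracle(word):
    """Online factor-oracle construction in linear time and space.

    Builds an automaton with one state per input position.  Returns the
    per-state transition tables and the suffix-link array (sfx[0] = -1).
    """
    n = len(word)
    trans = [dict() for _ in range(n + 1)]  # trans[i][symbol] -> state
    sfx = [-1] * (n + 1)
    for i, c in enumerate(word):
        trans[i][c] = i + 1                 # the 'spine' transition
        k = sfx[i]
        # Walk suffix links, adding forward transitions from contexts
        # that cannot yet read symbol c.
        while k > -1 and c not in trans[k]:
            trans[k][c] = i + 1
            k = sfx[k]
        sfx[i + 1] = 0 if k == -1 else trans[k][c]
    return trans, sfx

trans, sfx = factor_oracle("abbab")
```

During improvisation, the system alternates between following spine transitions (replaying the original) and jumping along suffix links (recombining material that shares a context), which yields the phrase recombination described earlier.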
Musicians working with machine improvisation
Gerard Assayag (IRCAM, France), Jeremy Baguyos (University of Nebraska at Omaha, USA), Tim Blackwell (Goldsmiths College, Great Britain), George Bloch (Composer, France), Marc Chemillier (IRCAM/CNRS, France), Nick Collins (University of Sussex, UK), Shlomo Dubnov (Composer, Israel / USA), Mari Kimura (Juilliard, New York City), George Lewis (Columbia University, New York City), Bernard Lubat (Pianist, France), François Pachet (Sony CSL, France), Joel Ryan (Institute of Sonology, Netherlands), Michel Waisvisz (STEIM, Netherlands), David Wessel (CNMAT, California), Michael Young (Goldsmiths College, Great Britain), Pietro Grossi (CNUCE, Institute of the National Research Council, Pisa, Italy), Toby Gifford and Andrew Brown (Griffith University, Brisbane, Australia), Davis Salks (jazz composer, Hamburg, PA, USA), Doug Van Nort (electroacoustic improviser, Montreal/New York), Jorge Variego (University of Tennessee, USA)
Live coding
Live coding[19] (sometimes known as 'interactive programming', 'on-the-fly programming',[20] or 'just-in-time programming') is the name given to the process of writing software in real time as part of a performance. Recently it has been explored as a more rigorous alternative to laptop musicians who, live coders often feel, lack the charisma and pizzazz of musicians performing live.[21] TOPLAP, an ad hoc conglomerate of artists interested in live coding, was formed in 2004 and promotes the use, proliferation and exploration of a range of software, languages and techniques to implement live coding.
See also
References
- ↑ Doornbusch, Paul. "The Music of CSIRAC". Melbourne School of Engineering, Department of Computer Science and Software Engineering.
- ↑ Fildes, Jonathan (June 17, 2008). "'Oldest' computer music unveiled". BBC News. Retrieved 4 December 2013.
- ↑ "Listening to the music of Turing's computer". BBC News. 2016-10-01. Retrieved 2016-10-04.
- ↑ Bogdanov, Vladimir (2001). All Music Guide to Electronica: The Definitive Guide to Electronic Music. Backbeat Books. p. 320. Retrieved 4 December 2013.
- ↑ Lejaren Hiller and Leonard Isaacson, Experimental Music: Composition with an Electronic Computer (New York: McGraw-Hill, 1959; reprinted Westport, Conn.: Greenwood Press, 1979). ISBN 0-313-22158-8.
- ↑ Shimazu, Takehito (1994). "The History of Electronic and Computer Music in Japan: Significant Composers and Their Works". Leonardo Music Journal. MIT Press. 4: 102–106 [104]. doi:10.2307/1513190. Retrieved 9 July 2012.
- ↑ Cattermole, Tannith (May 9, 2011). "Farseeing inventor pioneered computer music". Gizmag. Retrieved 28 October 2011. "In 1957 the MUSIC program allowed an IBM 704 mainframe computer to play a 17-second composition by Mathews. Back then computers were ponderous, so synthesis would take an hour."
- ↑ Mathews, Max (1 November 1963). "The Digital Computer as a Musical Instrument". Science. 142 (3592): 553–557. doi:10.1126/science.142.3592.553. Retrieved 28 October 2011. "The generation of sound signals requires very high sampling rates.... A high speed machine such as the I.B.M. 7090 ... can compute only about 5000 numbers per second ... when generating a reasonably complex sound."
- ↑ Dean, R. T. (2009). The Oxford handbook of computer music. Oxford University Press. p. 20. ISBN 0-19-533161-3.
- ↑ Dean, R. T. (2009). The Oxford handbook of computer music. Oxford University Press. p. 1. ISBN 0-19-533161-3.
- ↑ Dean, R. T. (2009). The Oxford handbook of computer music. Oxford University Press. pp. 4–5. ISBN 0-19-533161-3. "... by the 90s ... digital sound manipulation (using MSP or many other platforms) became widespread, fluent and stable."
- ↑ Loy, D. Gareth (1992). Roads, Curtis, ed. The Music Machine: Selected Readings from Computer Music Journal. MIT Press. p. 344. ISBN 0-262-68078-5.
- ↑ "Computer composer honours Turing's centenary". New Scientist. 5 July 2012.
- ↑ Jan Pavelka; Gerard Tel; Miroslav Bartosek, eds. (1999). "Factor oracle: a new structure for pattern matching". Proceedings of SOFSEM'99: Theory and Practice of Informatics. Lecture Notes in Computer Science 1725. Springer-Verlag, Berlin. pp. 291–306. ISBN 3-540-66694-X. Retrieved 4 December 2013.
- ↑ Pachet, F., The Continuator: Musical Interaction with Style. In ICMA, editor,Proceedings of ICMC, pages 211-218, Göteborg, Sweden, September 2002. ICMA. Best paper award.
- ↑ Pachet, F. Playing with Virtual Musicians: the Continuator in practice. IEEE Multimedia,9(3):77-82 2002.
- ↑ G. Assayag, S. Dubnov, O. Delerue, "Guessing the Composer's Mind : Applying Universal Prediction to Musical Style", In Proceedings of International Computer Music Conference, Beijing, 1999.
- ↑ S. Dubnov, G. Assayag, O. Lartillot, G. Bejerano, "Using Machine-Learning Methods for Musical Style Modeling", IEEE Computers, 36 (10), pp. 73-80, Oct. 2003.
- ↑ Collins, N.; McLean, A.; Rohrhuber, J.; Ward, A. (2004). "Live coding in laptop performance". Organised Sound. 8 (03). doi:10.1017/S135577180300030X.
- ↑ Wang G. & Cook P. (2004) "On-the-fly Programming: Using Code as an Expressive Musical Instrument", In Proceedings of the 2004 International Conference on New Interfaces for Musical Expression (NIME) (New York: NIME, 2004).
- ↑ Collins, N. (2003). "Generative Music and Laptop Performance". Contemporary Music Review. 22 (4): 67–79. doi:10.1080/0749446032000156919.
Further reading
- Ariza, C. 2005. "Navigating the Landscape of Computer-Aided Algorithmic Composition Systems: A Definition, Seven Descriptors, and a Lexicon of Systems and Research." In Proceedings of the International Computer Music Conference. San Francisco: International Computer Music Association. 765-772. Internet: http://www.flexatone.net/docs/nlcaacs.pdf
- Ariza, C. 2005. An Open Design for Computer-Aided Algorithmic Music Composition: athenaCL. Ph.D. Dissertation, New York University. Internet: http://www.flexatone.net/docs/odcaamca.pdf
- Berg, P. 1996. "Abstracting the future: The Search for Musical Constructs" Computer Music Journal 20(3): 24-27.
- Boulanger, Richard, ed. (March 6, 2000). The Csound Book: Perspectives in Software Synthesis, Sound Design, Signal Processing, and Programming. The MIT Press. p. 740. ISBN 0-262-52261-6. Retrieved 3 October 2009.
- Chadabe, Joel. 1997. Electric Sound: The Past and Promise of Electronic Music. Upper Saddle River, New Jersey: Prentice Hall.
- Chowning, John. 1973. "The Synthesis of Complex Audio Spectra by Means of Frequency Modulation". Journal of the Audio Engineering Society 21, no. 7:526–34.
- Collins, Nick (2009). Introduction to Computer Music. Chichester: Wiley. ISBN 978-0-470-71455-3.
- Dodge, Charles; Jerse, Thomas A. (1997). Computer Music: Synthesis, Composition and Performance (2nd ed.). New York: Schirmer Books. p. 453. ISBN 0-02-864682-7.
- Doornbusch, P. 2015. "A Chronology / History of Electronic and Computer Music and Related Events 1906 - 2015" http://www.doornbusch.net/chronology/
- Heifetz, Robin (1989). On the Wires of Our Nerves. Lewisburg Pa.: Bucknell University Press. ISBN 0-8387-5155-5.
- Manning, Peter (2004). Electronic and Computer Music (revised and expanded ed.). Oxford Oxfordshire: Oxford University Press. ISBN 0-19-517085-7.
- Perry, Mark, and Thomas Margoni. 2010. "From Music Tracks to Google Maps: Who Owns Computer-Generated Works?". Computer Law and Security Review 26: 621–29.
- Roads, Curtis (1994). The Computer Music Tutorial. Cambridge: MIT Press. ISBN 0-262-68082-3.
- Supper, M. 2001. "A Few Remarks on Algorithmic Composition." Computer Music Journal 25(1): 48-53.
- Xenakis, Iannis (2001). Formalized Music: Thought and Mathematics in Composition. Harmonologia Series No. 6. Hillsdale, NY: Pendragon Pr. ISBN 1-57647-079-2.