cSounds.com - Barry Vercoe

The Father of Csound

Biography
Awarded a Guggenheim Fellowship in 1983, Professor Barry Vercoe worked with Pierre Boulez in Paris at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM), where he developed the world's first automatic accompanist, the Synthetic Performer. A later version of this work was featured on Nova's Discover the World of Science, and the innovation won the Computerworld Smithsonian Award for Media Arts in 1992.

Most recently, Vercoe's Csound (and its descendant NetSound) has provided the underlying technology for the Structured Audio component of the new MPEG-4 standard for digital audio transmission and production. This technology can be more than ten times as compact as MP3.

Professor Vercoe hosted the first International Conference on Computer Music in 1976 and is the author of numerous technical papers in the field. He holds a PhD in Music Composition, and his MIT Summer Workshops in Computer Music have launched many scientists/musicians into the field.

As a consultant for Analog Devices Inc, Vercoe has recently developed Extended Csound, putting his Csound technology onto high-speed real-time DSP chips, now in use in high-profile music production in the US and Japan, by clients such as Denon Corporation.

1999 Tech Talk Quote
"I think what artists often do as a function in society is show engineers new ways of doing things-creative things. Engineers like to feel they're creative too, but they must realize that artists are creative in a different way. Perhaps only artists know how to push the limits of devices and thereby engender a rethinking of how they can be used." (Tech Talk 5/19/1999)

On Composing
"When you have a technology that enables composers and performers to do new and different things, that will always excite the imagination," said Professor Vercoe, likening modern advances in signal processing to the development of musical instruments with responsive control such as valved trumpets.

Professor Vercoe notes that while his compositions use the latest technologies, they are human driven. "I've always been in love with the live aspects of music and see them as an extension of natural body motion," he said. "There's a contribution that humans do make, either through vocal cords or tactile control or whatever, that is an essential human communication."

About The Experimental Music Studio - EMS
Founded by Professor Vercoe at MIT in 1973, the EMS was the first facility to have digital computers dedicated to full-time research and composition of computer music. Committed to moving technology forward in artistic ways, the EMS hosted the first International Conference on Computer Music in 1976. During its first 12 years, the EMS was responsible for developing or advancing computer-based music technology such as real-time digital synthesis, live keyboard input, graphical score editing, synchronization between natural and synthetic sound in composition, and advanced computer languages for music composition. The prevailing musical aesthetic at the EMS encouraged explorations into the interaction between live performers and computer accompanists. In 1985, Professor Vercoe became one of the founding faculty members of MIT's Media Laboratory.

Vercoe's History of Csound... from The Csound Book
This field has always benefited most from the spirit of sharing. It was Max Mathews' willingness to give copies of Music 4 to both Princeton and Stanford in the early 60's that got me started. At Princeton it had fallen into the fertile hands of Hubert Howe and the late Godfrey Winham, who as composers imbued it with controllable envelope onsets (envlp) while they also worked to have it consume less IBM 7094 time by rewriting large parts in BEFAP assembler (Music4B). Looking on was Ken Steiglitz, an engineer who had recently discovered that analog feedback filters could be represented with digital samples. By the time I first saw Music4B code (1966-67) it had a reson filter, and the age of subtractive digital sound design was already underway.
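
To make the subtractive idea concrete, here is a minimal sketch in present-day Csound syntax (which of course postdates Music4B) of broadband noise shaped by a reson bandpass filter; the instrument number, amplitudes and frequencies are illustrative assumptions, not historical code.

          instr 1                            ; subtractive sketch: filtered noise
anoise    rand    10000                      ; broadband noise source
kcf       line    300, p3, 2000              ; centre frequency glides from 300 to 2000 Hz
afilt     reson   anoise, kcf, kcf*0.1, 1    ; bandpass; bandwidth is 10% of centre frequency
          out     afilt
          endin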

During 1967-68 I wrote a large work (for double chorus, band, string orchestra, soloists and computer-generated sounds), whose Seattle Opera House performance convinced me that this was a medium with a future. But on my arrival back at Princeton I encountered a major problem: the 7094 was to be replaced by a new machine called a 360 and the BEFAP code would no longer run. Although Godfrey responded by writing a Fortran version (Music4BF, slower but eternally portable), I took a gamble that IBM would not change its assembler language again soon, and wrote Music 360. Like Max Mathews, I then gave this away as fast as I could, and its super efficiency enabled a new generation of composers with limited budgets to see computer music as an affordable medium.

But we were still at arm's length from our instrument. Punched cards and batch processing at a central campus facility were no way to interact with any device, and on my move to the Massachusetts Institute of Technology (M.I.T.) in 1971 I set about designing the first comprehensive real-time digital sound synthesizer, to bring the best of Music 360's audio processing into the realm of live interactive performance. After two years, with the design complete, its imminent construction was sidetracked by a gift from Digital Equipment Corporation of their latest creation, a PDP-11. Now, with a whole computer devoted exclusively to music, we could have both real-time processing and software flexibility, and Music 11 was the result.

There were many innovations in this rewrite. First, since my earlier hardware design had introduced the concept of control-rate signals for things like vibrato pitch motion, filter motion, amplitude motion and certain envelopes, this idea was carried into the first 1973 version of Music 11 as k-rate signals (familiar now to Csound users). Second, envelopes became more natural with multi-controllable exponential decays. Indeed, in 1976, while writing my Synapse for viola and computer, I found I could not match the articulation of my soloist unless I made the steady-state decay rate of each note in a phrase a functional inverse of the note length. (In this regard string and wind players are different from pianists, who can articulate only by early release. Up to this time we had all been thinking like pianists, i.e. no better than MIDI.) My envlpx opcode fixed that.
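
In present-day Csound the same ideas read roughly as below: a k-rate envlpx envelope whose steady-state attenuation is derived from the note length p3, driving an audio-rate oscillator. The rise and decay times, function tables and the particular mapping from p3 are illustrative assumptions, not the settings used in Synapse.

; score tables and notes assumed, e.g.:
;   f1 0 513  5  .001 512 1                ; exponential rise shape for envlpx (GEN05)
;   f2 0 8192 10 1                         ; sine for oscil (GEN10)
;   i2 0 0.5 70 8.09                       ; short note: steeper steady-state decay
;   i2 1 3.0 70 8.09                       ; long note: gentler steady-state decay
          instr 2
iamp      =       ampdb(p4)
iatss     =       0.01 ^ (1/p3)            ; steady-state attenuation tied to note length
kenv      envlpx  iamp, 0.05, p3, 0.2, 1, iatss, 0.01
asig      oscil   kenv, cpspch(p5), 2
          out     asig
          endin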

This had been my second gamble that a particular machine would be sufficiently common and long-lived to warrant assembler coding, and Music 11's efficiency and availability sustained a decade of even more affordable and widespread computer music. Moreover, although the exported code was not real-time, our in-house experiments were: Stephen Haflich connected an old organ keyboard so that we could play the computer in real-time; if you played something reasonably metric, the computer would print out the score when you finished; if you entered your score via our graphical score editor, the machine would play it back in real-time (I made extensive use of this while writing Synapse); if you created your orchestra graphically using Rich Steiger's OEDIT, Music 11 would use those instruments. Later, in 1980, student Miller Puckette connected a light-sensing diode to one end of the PDP-11, and an array-processing accelerator to the other, enabling one-dimensional conducting of a real-time performance. Haflich responded with a two-dimensional conducting sensor, using two sonar cells from a Polaroid camera. This was an exciting time for real-time experiments, and the attendees at our annual MIT Summer Workshops got to try many of these.

Meanwhile, my interest had shifted to tracking live instruments. At IRCAM in Paris in 1982, flutist Larry Beauregard had connected his flute to DiGiugno's 4X audio processor, enabling real-time pitch-following. On a Guggenheim at the time, I extended this concept to real-time score-following with automatic synchronized accompaniment, and over the next two years Larry and I gave numerous demonstrations of the computer as a chamber musician, playing Handel flute sonatas, Boulez's Sonatine for flute and piano, and by 1984 my own Synapse II for flute and computer, the first piece ever composed expressly for such a setup. A major challenge was finding the right software constructs to support highly sensitive and responsive accompaniment. All of this was pre-MIDI, but the results were impressive even though heavy doses of tempo rubato would continually surprise my Synthetic Performer. In 1985 we solved the tempo rubato problem by incorporating learning from rehearsals (each time you played this way the machine would get better). We were also now tracking violin, since our brilliant young flutist had contracted a fatal cancer. Moreover, this version used a new standard called MIDI, and here I was ably assisted by former student Miller Puckette, whose initial concepts for this task he later expanded into a program called MAX.

On returning to MIT in 1985 it was clear that microprocessors would eventually become the affordable machine power, that unportable assembler code would lose its usefulness, and that ANSI C would become the lingua franca. Since many parts of Music 11 and all of my Synthetic Performer were already in C, I was able to expand the existing constructs into a working Csound during the Fall of that year. Once it was operating, I received additional help from students like Kevin Peterson and Alan Delespinase and later from Bill Gardner, Dan Ellis and Paris Smaragdis. Moreover, thanks to the internet and ftp/public, my continuing wish to share the system even as it gained further maturity would take even less of my time.

The step to Real-time Csound was a simple one. With the right constructs already in place due to my long-time interest in interactive performance, and computers now fast enough to do floating-point processing on a set schedule, I only had to use the DAC output pointer to implement blocking I/O on a fine time-grid to achieve tight interactive control. I took that step in 1990, and demonstrated it in the ICMC paper Real-time Csound: Software Synthesis with Sensing and Control (Vercoe & Ellis, 1990). For me, the only reason for real-time is controllable performance, and Dan Ellis illustrated this by controlling a Bach synthesis by tapping arbitrary drum patterns on the table that held the microphone. The sensing also introduced Csound's new Spectral Data Types (see my chapter in this volume). With a sufficiently powerful machine (at the time a DECstation), both sensing and controlled high-fidelity synthesis had finally become possible.
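
The mechanics are easiest to see in today's Csound, where real-time audio is requested with small hardware and software buffers and any live input can become a k-rate control. The flags, buffer sizes and the RMS-to-amplitude mapping below are illustrative assumptions, not the 1990 DECstation code.

;   csound -odac -iadc -b64 -B256 live.orc live.sco    ; real-time out/in, small buffers
          instr 3                            ; assumes nchnls = 1 in the orchestra header
ain       in                                 ; live microphone signal
krms      rms     ain                        ; k-rate envelope of the room sound
kamp      port    krms*4, 0.05               ; smooth it into a usable control
asig      oscili  kamp, 220, 2               ; sensed control steers the synthesis (f2 = sine)
          out     asig
          endin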

But not all of us can command such a powerful central processor and today's interest in deft graphical control and graphical audio monitoring can often soak up the new cycles faster than technology creates them. At the 1996 ICMC in Hong Kong, I demonstrated an alternative architecture for both software and hardware with Extended Csound. This is the first stage of an orderly progression towards multi-processor fully-interactive performance. In the current version, Csound is divided between two processors, a host PC and a DSP-based soundcard. The host does all compiling and translation, disk I/O, and graphical-user-interface (GUI) processing, such as Patchwork (editing) and Cakewalk (sequencing). The DSP does all the signal processing, with sole access to the audio I/O ports; it also traps all MIDI input with an on-chip MIDI manager, such that each MIDI note-on results in an activated instrument instance in less than one control period.
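
The Extended Csound MIDI manager itself is DSP-side code, but the behaviour described here, a note-on spawning an instrument instance at once, is what a standard Csound MIDI instrument does. The sketch below is that standard form, with illustrative channel, instrument and amplitude values rather than the Extended Csound implementation.

;   csound -odac -M0 midi.orc midi.sco       ; -M0: read MIDI from device 0
          massign 1, 10                      ; route MIDI channel 1 to instr 10
          instr 10                           ; one instance per incoming note-on
icps      cpsmidi                            ; pitch of the triggering note
iamp      ampmidi 12000                      ; velocity-scaled amplitude
kenv      linenr  iamp, 0.01, 0.2, 0.01      ; envelope that honours note-off and release
asig      oscili  kenv, icps, 2              ; f2 = sine, as before
          out     asig
          endin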

The tightly-coupled multi-processor performance of Extended Csound has induced a flurry of new opcodes, many of them tailored to the internal code of the DSP I am using (a floating-point SHARC 21060 from Analog Devices). The new opcodes extend the power of Csound in areas such as real-time pitch-shifting, compressor-limiting, effects processing, mixing, sampling synthesis and MIDI processing and control. The curious can look at my paper in the 1996 ICMC Proceedings. I expect to be very active in this area for some time.

More Vercoe Photos
Vercoe Talk
Vercoe Synapse

Contact Information
Barry Vercoe
MIT Media Lab
20 Ames Street, Rm E15-401A
Cambridge, MA 02139
<bv@media.mit.edu>
http://sound.media.mit.edu/~bv/