During the heyday of postmodern debate, at least one critic in America noticed the connection between postmodern pastiche and computerization. In his book After the Great Divide (1986), Andreas Huyssen writes:
All modern and avant-gardist techniques, forms and images are now stored for instant recall in the computerized memory banks of our culture. But the same memory also stores all of pre-modernist art as well as the genres, codes, and image worlds of popular cultures and modern mass culture.[1] Andreas Huyssen, “Mapping the Postmodern,” in After the Great Divide (Bloomington and Indianapolis: Indiana University Press, 1986), 196.
His analysis is accurate—except that these “computerized memory banks” did not really become commonplace for another fifteen years. Only when the Web absorbed enough media archives did it become this universal cultural memory bank accessible to all producers of culture. But even for professionals, the ability to easily integrate multiple media sources within the same project—multiple layers of video, scanned still images, animation, graphics, and typography—only came towards the end of the 1990s.
In 1985, when Huyssen’s book was being prepared for publication, I was working for one of the few computer animation companies in the world, called Digital Effects.[2] See: Wayne Carlson, A Critical History of Computer Graphics and Animation, Section 2: The Emergence of Computer Graphics Technology. Each computer animator had his own interactive graphics terminal that could show 3D models, but only in wireframe and monochrome. To see them fully rendered in color, we had to take turns, as the company had only one color raster display, which we all shared. The data was stored on bulky magnetic tapes about a foot in diameter. Finding data from an old job was a cumbersome process that involved locating the right tape in the tape library, putting it on a tape drive, and then searching for the right part of the tape. We did not have a color scanner, so getting all modern and avant-gardist techniques, forms and images into the computer was a far from trivial matter. Even if we had had one, there was no way to store, recall or modify these images. The machine that could do that, the Quantel Paintbox, cost over USD 160,000, which we could not afford. In 1986, when Quantel introduced Harry, the first commercial non-linear editing system that allowed for digital compositing of multiple layers of video and special effects, its cost likewise made it prohibitive for everybody except network television stations and a few production houses. Harry could record only eighty seconds of broadcast-quality video at a time. In the realm of still images, things were not much better: the digital still store Picturebox, released by Quantel in 1990, could hold only 500 broadcast-quality images, and its cost was similarly high.
In short, in the middle of the 1980s neither we nor other production companies had anything approaching the “computerized memory banks” imagined by Huyssen. Of course, the same was true for the visual artists then associated with postmodernism and the concepts of pastiche, collage and appropriation. In 1986, the BBC produced the documentary Painting with Light,[3] Painting with Light is a 1986 BBC documentary that presents David Hockney drawing with a Paintbox, http://b-o.fr/hockney for which half a dozen well-known painters, including Richard Hamilton and David Hockney, were invited to work with a Quantel Paintbox. The resulting images were not so different from the regular paintings these artists were producing without a computer and, while some artists made references to “modern and avant-gardist techniques, forms and images,” these references were painted rather than being directly uploaded from “computerized memory banks.” Only in the middle of the 1990s, when relatively inexpensive graphics workstations and personal computers running image editing, animation, compositing and illustration software became commonplace and affordable for freelance graphic designers, illustrators, and small post-production and animation studios, did the situation imagined by Huyssen start to become a reality. The results were dramatic. Within about five years, modern visual culture was fundamentally transformed. Previously separate media—live action cinematography, graphics, still photography, animation, 3D computer animation, and typography—began to be combined in numerous ways. By the end of the decade, “pure” moving image media became an exception and hybrid media became the norm.
Today (2006), the new hybrid visual language of moving images that emerged during the period of 1993–1998 dominates our visual culture. Narrative features mostly stick to live cinematography, and the video footage shot by ordinary people with consumer video cameras and cell phones is usually left as is. But everything else—commercials, music videos, motion graphics, TV graphics, and the other types of short non-narrative films and moving image sequences being produced around the world by media professionals, from companies and individual designers and artists to students—is hybrid.
The ubiquity of the new hybrid visual language today is such that it takes an effort to recall how different things looked before. Similarly, the changes in production processes and equipment that made this language possible have also quickly faded from both public and professional memory. […]
If you wanted to be more creative [in the 1980s], you couldn’t just add more software to your system. You had to spend hundreds of thousands of dollars and buy a Paintbox. If you wanted to do something graphic—an opening for a TV show with a lot of layers—you had to go to an editing house and spend over a thousand dollars an hour to do the exact same thing you can do now with an inexpensive computer and several software programs. Now with Adobe After Effects and Photoshop, you can do everything in one fell swoop. You can edit, design, animate. You can do 3D or 2D, all on your desktop computer at home or in a small office.[4] Mindi Lipschultz, interviewed by The Compulsive Creative, May 2004.
In 1989, the former Soviet satellites of Central and Eastern Europe peacefully liberated themselves from the Soviet Union. In the case of Czechoslovakia, this event came to be referred to as the Velvet Revolution, in contrast to the revolutions in modern history that were typically accompanied by bloodshed. I am going to appropriate the term Velvet Revolution to emphasize the gradual, almost invisible pace of the transformations that occurred in moving image esthetics from around 1993 to 1998. Accordingly, this series of articles is subtitled The Velvet Revolution in Moving Image Culture. Although it may seem presumptuous to compare political and esthetic transformations simply because they share the same nonviolent quality, as we will see in a later article, the two revolutions are actually related.
Although the Velvet Revolution I will be discussing involved many technological and social developments—hardware, software, production practices, new job titles and new professional fields—it is appropriate to highlight one software package as being central to events: After Effects.
Introduced in 1993, After Effects was the first software designed to do animation, compositing, and special effects on a personal computer.[5] Actually, the NewTek Video Toaster, released in 1990, was the first PC-based video production system that included a video switcher, character generation, image manipulation, and animation. Because of their low cost, Video Toaster systems were extremely popular in the 1990s. However, in the context of my article, After Effects is more important because, as I will explain below, it introduced a new paradigm for moving image design that was different from the familiar video editing paradigm supported by systems such as Toaster. Its broad effects on moving image production can be compared to the effects of Photoshop and Illustrator on photography, illustration, and graphic design. Although today media design and post-production companies continue to rely on more expensive “high-end” software such as Flame, Inferno or Paintbox […], After Effects remains the most popular and well-known application in this area because of its affordability and length of time on the market. Consequently, After Effects will be given a privileged role in this text as both the symbol and the key material foundation that made the Velvet Revolution in moving image culture possible—even though today other programs in a similar price category, such as Apple’s Motion, Autodesk’s Combustion, and Adobe’s Flash, have challenged After Effects’ dominance.
The use of After Effects is closely identified with a particular type of moving image, motion graphics, which became commonplace due in large part to this software. Concisely defined by Matt Frantz in his Master’s thesis as “designed non-narrative, non-figurative based visuals that change over time,”[6] Matt Frantz, Changing Over Time: The Future of Motion Graphics, 2003, http://b-o.fr/frantz motion graphics today include film and television titles, TV graphics, dynamic menus, graphics for mobile media content, and other animated sequences. Typically, motion graphics appear as parts of longer pieces: commercials, music videos, training videos, narrative and documentary films, and interactive projects. While motion graphics definitely exemplify the changes that took place during the Velvet Revolution, these changes are of a broader nature. Simply put, the result of the Velvet Revolution is a new hybrid visual language of moving images in general. This language is not confined to particular media forms. And while today it manifests itself most clearly in non-narrative forms, it is also often present in narrative and figurative sequences and films.
For example, a music video may use live action while also employing typography and a variety of transitions executed with computer graphics, or it may embed the singer within an animated painterly space. A short film may mix typography, stylized 3D graphics, moving design elements, and video. In some cases, the juxtaposition of different media is clearly visible. In other cases, a sequence may move between different media so quickly that the shifts are barely noticeable. In yet other cases, a commercial or a movie title may feature continuous action shot on video or film, with the image periodically changing from a more natural to a highly stylized look. While particular esthetic solutions vary from one piece to the next and from one designer to another, they all share the same logic: the appearance of multiple media simultaneously in the same frame. Whether these media are openly juxtaposed or almost seamlessly blended together is less important than the fact of this co-presence itself.
Today such hybrid visual language is also common in a large proportion of short “experimental” (i.e. non-commercial) films being produced for media festivals, the Web, mobile media devices, and other distribution platforms. Most of the visuals created by VJs and Live Cinema artists are also hybrid, combining video, layers of 2D imagery, animation, and abstract imagery, all generated in real time. […] In the case of feature narrative films and TV programs, while they still rarely mix different graphical styles within the same frame, many now feature highly stylized esthetics that would previously have been identified with illustration rather than filmmaking, such as the TV series CSI, George Lucas’s latest Star Wars films, or Robert Rodriguez’s Sin City.
What is the logic of this new hybrid visual language? It is one of remixability: not only of the content of different media or simply their esthetics, but their fundamental techniques, working methods, and assumptions. United within the common software environment, cinematography, animation, computer animation, special effects, graphic design, and typography have come to form a new metamedium. Work produced in this new metamedium can use all techniques that were previously unique to these different media, or any subset of these techniques.
If we use the concept of “remediation” to describe this new situation, we misrepresent this logic—and the logic of media computing in general.[7] Jay David Bolter and Richard Grusin, Remediation: Understanding New Media (Cambridge, MA: MIT Press, 1999). The computer does not “remediate” particular media. Instead, it simulates all media, and what it simulates are not the surface appearances of different media but rather all the techniques used for their production and all the methods of viewing and interacting with works in these media. By the middle of the 1990s, once all types of media were connected within the same digital environment, they started interacting in ways that could never have been predicted or even imagined. For instance, while particular media techniques continue to be used in relation to their original media, they can also be applied to other media. (This is possible because the techniques are converted into algorithms, all media is turned into digital data stored in compatible file formats, and software is designed to read and write files produced by other programs.) Here are a few examples: motion blur is applied to 3D computer graphics, computer-generated fields of particles are blended with live action footage to give it an enhanced look, a virtual camera is made to move around a virtual space filled with 2D drawings, flat typography is animated to look like it is emerging from a liquid-like material (with the liquid simulation coming from the field of computer graphics), and so on. While this “cross-over” use constitutes a fundamental shift in media history in and of itself, today a typical short film or sequence may combine many such pairings within the same frame. The result is a hybrid, intricate, complex, and rich visual language—or rather, numerous languages that share the basic logic of remixability.
I believe that this “media remixability,” which began around the mid-1990s, constitutes a new fundamental stage in the history of media. It manifests itself in different areas of culture, not only in moving images—although the latter offer a particularly striking example of the new logic at work. Here software such as After Effects became a petri dish where computer animation, live cinematography, graphic design, 2D animation and typography started to interact, creating new hybrids. As the examples mentioned above demonstrate, this process of remixability resulted in new esthetics and new media species that are more than the sum of the media that constitute them. Put differently, the interactions of different media within the same software environment produce new cultural species.
Media remixability does not necessarily lead to a collage-like esthetic that foregrounds the juxtapositions of different media and different media techniques. As a very different example of the results of media remixability, consider a more subtle esthetic, well captured by the name of the software under discussion—After Effects. While in the 1990s computers were used to create highly spectacular special effects or “invisible effects,” by the end of the decade we see something else emerging: a new visual esthetic that transcends effects. According to this esthetic, the whole project—music video, commercial, short film, or a large part of a feature film—displays a hyper-real look in which the enhancement of live action material is not completely invisible but at the same time does not call attention to itself the way special effects usually did. This new hyper-real esthetic is yet another example of how, in the hands of designers, the petri dish of software containing all the media creation and manipulation techniques created over the course of human history now produces new hybrids. In fact, it produces only hybrids.
Layers, Transparency, Compositing
Probably the most dramatic among the changes that took place from 1993 to 1998 was the new ability to combine multiple levels of imagery with varying degrees of transparency via digital compositing. If you compare a typical music video or TV advertising spot circa 1986 with their counterparts circa 1996, the differences are striking. (The same also holds true for still images.) As I already noted, in 1986 computerized memory banks were very limited in their storage capacity and prohibitively expensive, so designers could not quickly and easily cut and paste multiple image sources. But even when they did assemble multiple visual references, a designer could only place them next to, or on top of, each other. They could not modulate these juxtapositions by precisely adjusting the transparency levels of different images. Instead, they had to resort to the same photocollage techniques popularized in the 1920s. […] In other words, the lack of transparency restricted the number of different image sources that could be integrated within a single composition without it starting to look like a photomontage by John Heartfield, Hannah Höch, or Robert Rauschenberg—a mosaic of fragments without any one being strongly dominant.[8] In the case of video, one of the main reasons that made combining multiple visuals difficult was the rapid degradation of the video signal when an analog video tape was copied more than a couple of times. Such a copy would no longer meet broadcasting standards.
Compositing also made commonplace another operation that had previously been very cumbersome. Until the 1990s, different media types such as hand-drawn animation, lens-based recordings (i.e. film and video), and typography practically never appeared within the same frame. Instead, animated commercials, publicity shorts, industrial films, and some feature and experimental films that did include multiple media usually placed them in separate shots. A few directors managed to build entire esthetic systems out of such temporal juxtapositions—most notably Jean-Luc Godard. In 1960s films such as Week End (1967), Godard cut bold typographic compositions in between live action, creating what can be called “media montages.” With the same pioneering 1960s élan, graphic designer Pablo Ferro, who appropriately called his company Frame Imagery, created promotional shorts and TV graphics that played upon juxtapositions of different media replacing each other in rapid succession.[9] Jeff Bellantoni and Matt Woolman, Type in Motion (Rizzoli, 1999), 22–29. In a number of Ferro’s spots, static images of different letterforms, line drawings, original hand-painted artwork, photographs, very short clips from newsreels, and other visuals would follow one upon the other with machine-gun speed.
Within cinema, the superimposition of different media within the same frame was usually limited to two media placed on top of each other in a standardized manner, i.e. static letters appearing on top of still or moving lens-based images in feature film titles. Both Ferro and Saul Bass, another motion graphics pioneer, created a few title sequences where visual elements of different origin were systematically overlaid together—such as the opening credits for Hitchcock’s Vertigo (1958), designed by Bass. However, I think it is fair to say that such complex juxtapositions of media within the same frame (rather than in an edited sequence) were rare exceptions in an otherwise “uni-media” universe where filmed images appeared in feature films and hand-drawn images appeared in animated films. The only 20th-century feature film director I know of who built his unique esthetic by systematically combining different media within the same shot is the Czech Karel Zeman. A typical shot by Zeman may contain filmed human figures, an old engraving used as a background, and a miniature model.
To put this in general terms: before the computerization of the 1990s, a designer’s capacity to access, manipulate, remix, and filter visual information, whether still or moving, was quite restricted. In fact, it had remained practically unchanged for the previous hundred years—regardless of whether filmmakers and designers used in-camera effects, optical printing, or video keying. In retrospect, we can see that these capabilities were at odds with the flexibility, speed, and precision of data manipulation already available in most other professional fields that had by then been computerized: the sciences, engineering, accounting, management, etc. Therefore it was only a matter of time before all image media were turned into digital data and illustrators, graphic designers, animators, film editors, video editors, and motion graphics designers started manipulating them with software instead of their traditional tools. But this is only obvious today—after the Velvet Revolution has taken place.
In 1985, Jeff Stein directed a music video for the New Wave band The Cars. This video represented a major landmark in the design world, and MTV awarded it first prize at its first annual music awards. Stein managed to create a surreal world in which a video cutout of the singing head of one band member was animated over different video backgrounds. Stein took the esthetics of animated cartoons—2D animated characters superimposed over a 2D background—and recreated them using video imagery. […] Suddenly, modernist photomontage came alive. A decade later, such moving video collages had not only become commonplace but had also become more complex, more layered, and more subtle. Instead of two or three layers, a composition could now feature hundreds or even thousands, and each layer could have its own level of transparency.
In short, digital compositing now allowed designers to easily mix any number of visual elements, regardless of the media in which they originated, and to control each element in the process. We can draw an analogy between multitrack audio recording and digital compositing of moving images. In multitrack recording, each soundtrack can be manipulated individually to produce the desired result. Similarly, in digital compositing each visual element can be independently modulated in a variety of ways: resized, recolored, animated, etc. Just as multitrack recording redefined the sound of popular music from the 1960s onward, once digital compositing became widely available during the 1990s, it changed the visual esthetics of moving images in popular culture.
By the end of the 1990s, digital compositing had become the basic operation used in creating all forms of moving images, not just big-budget features. So while compositing was originally developed within the context of special effects production in the 1970s and early 1980s,[10] Thomas Porter and Tom Duff, “Compositing Digital Images,” ACM Computer Graphics 18:3 (1984), 253–259. it had a much broader effect on contemporary visual and media cultures. Compositing played a key part in turning the digital computer into an experimental lab where different media could be brought together and their esthetics and techniques combined to create new species. […]
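At its core, the operation Porter and Duff formalized is simple per-pixel arithmetic weighted by transparency: each layer is blended onto whatever lies beneath it according to its alpha value. The following is a minimal sketch in Python (the function names and data layout are invented for illustration and do not correspond to any particular application's internals):

```python
def over(fg, bg):
    """Porter-Duff 'over': composite a foreground RGBA pixel onto a background pixel.
    Pixels are (r, g, b, a) tuples with channels in [0.0, 1.0], non-premultiplied."""
    fr, fg_g, fb, fa = fg
    br, bg_g, bb, ba = bg
    out_a = fa + ba * (1 - fa)  # combined coverage of the two layers
    if out_a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    # Each color channel is the alpha-weighted mix of foreground and background.
    blend = lambda f, b: (f * fa + b * ba * (1 - fa)) / out_a
    return (blend(fr, br), blend(fg_g, bg_g), blend(fb, bb), out_a)

def composite(layers, background):
    """Flatten a stack of layers (listed bottom to top) onto a background pixel,
    applying 'over' repeatedly -- the virtual stack of a compositing program."""
    result = background
    for layer in layers:
        result = over(layer, result)
    return result

# A 50%-transparent red layer over an opaque white background:
pixel = composite([(1.0, 0.0, 0.0, 0.5)], (1.0, 1.0, 1.0, 1.0))
# pixel -> (1.0, 0.5, 0.5, 1.0): the red is blended halfway into the white
```

Because "over" is applied one layer at a time, adding a hundredth or thousandth layer costs no more conceptually than adding a second, which is what made arbitrarily deep layer stacks practical once the operation moved into software.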
Thus, compositing was at first a particular digital technique designed to integrate two specific media, live action film and computer graphics, but went on to become a “universal media integrator.” Although compositing was originally created to support the esthetics of cinematic realism, over time it actually had the opposite effect. Rather than forcing different media to fuse seamlessly, compositing led to the flourishing of numerous media hybrids where the juxtapositions between live and algorithmically generated, two-dimensional and three-dimensional, raster and vector are made deliberately visible rather than being hidden.
From “Time-Based” to “Composition-Based”
My thesis about media remixability applies both to cultural forms and to the software used to create them. Just as the moving image media made by designers today combine formats, assumptions, and techniques from different media, the toolboxes and interfaces of the software used are also remixes. […] When moving image designers started to use compositing/animation software such as After Effects, its interface encouraged them to think about moving images in a fundamentally new way. Film and video editing systems, and the computer simulations of them that came to be known as non-linear editors (as exemplified today by Avid and Final Cut), conceptualized a media project as a sequence of shots organized in time.[11] I should note that compositing functionality was gradually added over time to most NLEs, so today the distinction between the original After Effects or Flame interfaces and the Avid and Final Cut interfaces is less pronounced. Consequently, while non-linear editing software (NLE) provided an editor with many tools for adjusting edits, it took for granted a constant of film language that originated in its industrial organization: that all frames have the same size and aspect ratio. This is an example of a larger phenomenon: as physical media were simulated in a computer, many of their fundamental properties, interface conventions and constraints were methodically recreated in software, even though software as a medium has no such limitations. In contrast, from its inception, After Effects’ interface put forward a new concept of a moving image: as a composition organized both in time and 2D space.
The center of this interface is a Composition Window, conceptualized as a large canvas that can contain visual elements of arbitrary sizes and proportions. When I first began using After Effects, soon after it came out, I remember feeling shocked that the software did not automatically resize the graphics I dragged into the Composition Window to make them fit the overall frame. The fundamental assumption of cinema that accompanied it throughout its whole history—that film consists of many frames, all with the same size and aspect ratio—was gone.
In film and video editing paradigms of the 20th century, the minimal unit upon which the editor works is a frame. You can change the length of an edit, adjusting where one film or video segment ends and another begins, but you cannot interfere with the contents of a frame. […] This was a job for special effects departments. But in the After Effects interface, the basic unit is not a frame but a visual element placed in the Composition Window. Each element can be individually accessed, manipulated and animated, […] and conceptualized as an independent object. Consequently, a media composition is understood as a set of independent objects that can change over time. The very word “composition” is important in this context as it references 2D media (drawing, painting, photography, design) rather than filmmaking—i.e. space as opposed to time.
Where does the After Effects interface come from? Given that this software is commonly used to create animated graphics (i.e. “motion graphics”) and visual effects, it is not surprising that we can find interface elements that can be traced back to three separate fields: animation, graphic design, and special effects. In traditional cel animation practice, an animator places a number of transparent cels on top of each other. Each cel contains a different drawing, for instance, the body of a character on one cel, the head on another, the eyes on a third. Because the cels are transparent, the drawings are automatically “composited” into a single composition. While the After Effects interface does not use the metaphor of a stack of transparent cels directly, it is based on the same principle. Each element in the Composition Window is assigned a “virtual depth” relative to all other elements. Together all the elements form a virtual stack. At any time, the designer can change the relative position of an element within the stack, delete it, or add new elements. […] An element can be a digital video frame, a line of type, an arbitrary geometric shape, etc. The finished work is the result of a particular arrangement of these elements in space and time. In this paradigm, we can compare the designer to a choreographer who creates a dance by “animating” the bodies of dancers, specifying their entry and exit points, trajectories through the space of the stage, and the movements of their bodies. In this respect, it is relevant that, while the After Effects interface did not make this reference explicit, Macromedia Director, the key multimedia authoring software of the 1990s, did directly use the metaphor of a theater stage.
While we can link the After Effects interface to the traditional animation methods used by commercial animation studios, the working method of the software is closer to graphic design. […] A graphic designer works in real time. As the designer introduces new elements, adjusts their locations, colors and other properties, experiments with different images, changes the size of the type, and so on, they can immediately see the result of their work. […] When physical media are simulated in a computer, we do not simply end up with the same media as before. By adding new properties and working methods, computer simulation fundamentally changes the identity of a given medium. For example, in the case of “electronic paper” such as a Word document or a PDF file, we can do many things that were not possible with ordinary paper: zoom in and out of the document, search for a particular phrase, change fonts and line spacing, etc. Similarly, the current online interactive map services provided by MapQuest, Yahoo, and Google augment traditional paper maps in multiple and amazing ways—just take a look at Google Earth.
A significant proportion of contemporary software for creating, editing, and interacting with media developed by simulating physical media and augmenting it with new properties. But if we consider media design software such as Maya (used for 3D modeling and computer animation) or After Effects (motion graphics, compositing and visual effects), we encounter a different logic. These software applications do not simulate any single physical media that existed previously. Rather, they borrow from a number of different media, combining and mixing their working methods and specific techniques. Of course, they also add new capabilities specific to computers: for instance, the ability to automatically calculate the intermediate values between a number of keyframes. […]
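One such computer-specific capability, the automatic calculation of intermediate values between keyframes, reduces to a simple interpolation: the animator fixes a property at a few points in time and the software fills in every frame in between. A minimal sketch in Python, assuming the simplest linear case (the function and data layout are invented for illustration; real animation software also offers eased and spline interpolation):

```python
def interpolate(keyframes, t):
    """Return the value of an animated property at time t, given a list of
    (time, value) keyframes sorted by time. Values before the first keyframe
    or after the last one are held constant, as animation software does."""
    times = [kt for kt, _ in keyframes]
    if t <= times[0]:
        return keyframes[0][1]
    if t >= times[-1]:
        return keyframes[-1][1]
    # Find the surrounding pair of keyframes and blend linearly between them.
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return v0 + f * (v1 - v0)

# Animate a layer's opacity: 0% at frame 0, 100% at frame 30, held until frame 60
opacity = [(0, 0.0), (30, 1.0), (60, 1.0)]
interpolate(opacity, 15)  # halfway through the fade-in -> 0.5
```

The same mechanism applies to any animatable property (position, scale, rotation, color), which is why a single keyframe interface can drive every element in the composition.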
The result is not simply a mechanical sum of all the elements that came from earlier media. Rather, as software incorporates the techniques and working methods of the various media it simulates, the result is new interfaces, tools and workflows with their own distinct logic. In the case of After Effects, the working method it puts forward is neither animation, nor graphic design, nor cinematography, even though it draws from all of these fields. […]
The Velvet Revolution unleashed by After Effects and other software did not simply make animated graphics more commonplace. Artists and designers such as John and James Whitney, Norman McLaren, Saul Bass, Robert Abel, Harry Marks, and R/Greenberg had already been creating such graphics using stop-motion animation, optical printing, the video effects hardware of the 1980s, and other custom techniques and technologies. Instead, the Velvet Revolution led to the emergence of numerous new visual esthetics that had not existed before, and to a new way of making moving image media.
This text is excerpted from the first part of “After Effects, or Velvet Revolution,” an essay initially published in 2006 in Millennium Film Journal (no. 45–46).