Has Hollywood Murdered the Movies?
How the richness of technology led to the poverty of imagination.
- David Denby
- September 14, 2012
I.
SIX HUNDRED OR SO movies open in the United States every year, including films from every country, documentaries, first features spilling out of festivals, experiments, oddities, zero-budget movies made in someone’s apartment. Even in the digit-dazed summer season, small movies never stop opening—there is always something to see, something to write about. Just recently I have been excited by two independent films—the visionary Louisiana bayou mini-epic, Beasts of the Southern Wild, and a terse, morally alert fable of authority and obedience called Compliance. Yet despite such pleasures, movies—mainstream American movies—are in serious trouble. And this is hardly a problem that worries movie critics more than anyone else: many moviegoers feel the same puzzlement and dismay.
When I speak of moviegoers, I mean people who get out of the house and into a theater as often as they can; or people with kids, who back up rare trips to the movies with lots of recent DVDs and films ordered on demand. I do not mean the cinephiles, the solitary and obsessed, who have given up on movie houses and on movies as our national theater (as Pauline Kael called it) and plant themselves at home in front of flat screens and computers, where they look at old films or small new films from the four corners of the globe, blogging and exchanging discs with their friends. They are extraordinary, some of them, and their blogs and websites generate an exfoliating mass of knowledge and opinion, a thickening density of inquiries and claims, outraged and dulcet tweets. Yet it is unlikely that they can do much to build a theatrical audience for the movies they love. And directors still need a sizable audience if they are to make their next picture about something more than a few people talking on the street.
I have in mind the great national audience for movies, or what’s left of it. In the 1930s, roughly eighty million people went to the movies every week, with weekly attendance peaking at ninety million in 1930 and again in the mid-1940s. Now about thirty million people go, in a population two and a half times as large as that of the 1930s. By degrees, as everyone knows, television, the Internet, and computer games dethroned the movies as regular entertainment. By the 1980s, the economics of the business had become largely event-driven, with a never-ending production of spectacle and animation that draws young audiences away from their home screens on opening weekend. For years, the tastes of young audiences have wielded an influence on what gets made far out of proportion to their numbers in the population. We now have a movie culture so bizarrely pulled out of shape that it makes one wonder what kind of future movies will have.
Nostalgia is history altered through sentiment. What’s necessary for survival is not nostalgia, but defiance. I’m made crazy by the way the business structure of movies is now constricting the art of movies. I don’t understand why more people are not made crazy by the same thing. Perhaps their best hopes have been defeated; perhaps, if they are journalists, they do not want to argue themselves out of a job; perhaps they are too frightened of sounding like cranks to point out what is obvious and have merely, with a suppressed sigh, accommodated themselves to the strange thing that American movies have become. A successful marketplace has a vast bullying force to enforce acquiescence.
EARLIER THIS YEAR, The Avengers, which pulled together into one movie all the familiar Marvel Comics characters from earlier pictures—Captain America, Thor, Iron Man, and so on—achieved, within a couple of months, a worldwide box-office gross of about $1.5 billion. That extraordinary figure represented a triumph of craft and cynical marketing: the movie, which cost $220 million to make, was mildly entertaining for a while (self-mockery was built into it), but then it degenerated into a digital slam, an endless battle of exacerbated pixels, most of the fighting set in the airless digital spaces of a digital city. Only a few critics saw anything bizarre or inane about so vast a display of technology devoted to so little. American commercial movies are now dominated by the instantaneous monumental, the senseless repetition of movies washing in on a mighty roar of publicity and washing out in a waste of semi-indifference a few weeks later. The Green Hornet? The Green Lantern? Did I actually see both of them? The Avengers will quickly be effaced by an even bigger movie of the same type.
This franchise-capping Avengers was a carefully built phenomenon. Let’s go back a couple of years and pick up a single strand that led to it. Consider one of its predecessors, Iron Man 2, which began its run in the United States on May 7, 2010, at 4,380 theaters. That’s only the number of theaters: multiplexes often put new movies on two or three, or even five or six, screens within the complex, so the actual number of screens was much higher—well over 6,000. The gross receipts for the opening weekend were $128 million. Yet those were not the movie’s first revenues. As a way of discouraging piracy and cheap street sale of the movie overseas, the movie’s distributor, Paramount Pictures, had opened Iron Man 2 a week earlier in many countries around the world. By May 9, at the end of the weekend in which the picture opened in America, the cumulative worldwide theatrical gross was $324 million. By the end of its run, the cumulative total had advanced to $622 million. Let’s face it: big numbers are impressive, no matter what produced them.
The worldwide theatrical gross of Iron Man 2 served as a branding operation for what followed—sale of the movie to broadcast and cable TV, and licensing to retail outlets for DVD rentals and purchase. Iron Man 2 was itself part of a well-developed franchise (the first Iron Man came out in 2008). The hero, Tony Stark, a billionaire industrialist-playboy, first appeared in a Marvel comic book in 1963 and still appears in new Marvel comics. By 2010, rattling around stores and malls all over the world, there were also Iron Man video games, soundtrack albums, toys, bobblehead dolls, construction sets, dishware, pillows, pajamas, helmets, t-shirts, and lounge pants. There was a hamburger available at Burger King named after Mickey Rourke, a supporting player in Iron Man 2. Companies such as Audi, LG Mobile, 7-Eleven, Dr. Pepper, Oracle, Royal Purple motor oil, and Symantec’s Norton software signed on as “promotional partners,” issuing products with the Iron Man logo imprinted somewhere on the product or in its advertising. In effect, all of American commerce was selling the franchise. All of American commerce sells every franchise.
Iron Man movies have a lighter touch than many comparable blockbusters—for instance, the clangorous Transformers movies, which are themselves based on plastic toys, in which dark whirling digital masses barge into each other or thresh their way through buildings, cities, and people, and at which the moviegoer, sitting in the theater, feels as if his head were repeatedly being smashed against a wall. The Iron Man movies have been shaped around the temperament of their self-deprecating star, Robert Downey, Jr., an actor who manages to convey, in the midst of a $200-million super-production, a private sense of amusement. By slightly distancing himself from the material, this charming rake offers the grown-up audience a sense of complicity, which saves it from self-contempt. Like so many big digital movies, the Iron Man films engage in a daringly flirtatious give-and-take with their own inconsequence: the disproportion between the size of the productions, with their huge sets and digital battles, and the puniness of any meaning that can possibly be extracted from them, may, for the audience, be part of the frivolous pleasure of seeing them.
Many big films (not just the ones based on Marvel Comics) are now soaked in what can only be called corporate irony, a mad discrepancy between size and significance—for instance, Christopher Nolan’s widely admired Inception, which generates an extraordinarily complicated structure devoted to little but its own workings. Despite its dream layers, the movie is not really about dreams—the action you see on screen feels nothing like dreams. An industrialist hires experts to invade the dreaming mind of another industrialist in order to plant emotions that would cause the second man to change corporate plans. Or something like that; the plot is a little vague. Anyway, why should we care? What is at stake?
You could say, I suppose, that the movie is about different levels of representation, and then refine that observation and note that the differences between fiction and reality, between subjective and objective, no longer exist—that what Nolan created is somehow analogous to our life in a postmodernist society in which the image and the real, the simulacrum and the original, have assumed, for many people, equal weight. (The literary and media theorist Fredric Jameson has made such a case for the movie.) You can say all of that, but you still haven’t established why such an academic-spectacular exercise is worth looking at as a work of narrative art, or why any of it matters emotionally.
Nolan’s movie was a whimsical, over-articulate nullity—a huge fancy clock that displays wheels and gears but somehow fails to tell the time. Yet Inception is nothing more than the logical product of a recent trend in which big movies have been progressively drained of sense. As much as two-thirds of the box office for these big films now comes from overseas, and the studios appear to have concluded that if a movie were actually about something, it might risk offending some part of the worldwide audience. Aimed at Bangkok and Bangalore as much as at Bangor, our big movies have been defoliated of character, wit, psychology, local color.
I DO NOT HATE all over-scaled digital work. “God works too slowly,” said Ian McKellen in X-Men, playing Magneto, who can produce mutations on the spot. So can digital film-makers, who play God at will. Digital movie-making is the art of transformation, and in the hands of a few imaginative people it has produced sequences of great loveliness and shivery terror—the literally mercurial reconstituted beings in Terminator 2, the floating high-chic battles in The Matrix. I loved the luscious purple beauty of Avatar, but Avatar is off the scale in visual allure, and so is Alfonso Cuarón’s Harry Potter and the Prisoner of Azkaban, the best of the Potter series.
Apart from these movies and a few others, however, many of us have logged deadly hours watching superheroes bashing people off walls, cars leapfrogging one another in tunnels, giant toys and mock-dragons smashing through Chicago, and charming teens whooshing around castles. What we see in bad digital action movies has the anti-Newtonian physics of a cartoon, but drawn with real figures. Rushed, jammed, broken, and overloaded, action now produces temporary sensation rather than emotion and engagement. Afterward these sequences fade into blurs, the different blurs themselves melding into one another—a vague memory of having been briefly excited rather than the enduring contentment of scenes playing again and again in one’s head.
The oversized weightlessness leaves one numbed, defeated. Surely rage would seem an excessive response to movies so enormously trivial. Yet the overall trend is enraging. Fantasy is moving into all kinds of adventure and romantic movies; time travel has become a commonplace. At this point the fantastic is chasing human temperament and destiny—what we used to call drama—from the movies. The merely human has been transcended. And if the illusion of physical reality is unstable, the emotional framework of movies has changed, too, and for the worse. In time—a very short time—the fantastic, not the illusion of reality, may become the default mode of cinema.
Yes, of course, the studios, with greater or lesser degrees of enthusiasm, make other things besides spectacles—thrillers and horror movies; chick flicks and teen romances; comedies with Adam Sandler, Will Ferrell, Jennifer Aniston, Katherine Heigl, and Cameron Diaz; burlesque-hangover debauches and their female equivalents; animated pictures for families. All these movies have an assured audience (or one at least mostly assured), and a few of them, especially the Pixar animated movies, may be very good. The studios will also distribute an interesting movie if their financing partners pay for most of it. And at the end of the year, as the Oscars loom, they distribute unadventurous but shrewdly written and played movies, such as The Fighter, which are made entirely by someone else. Again and again these serioso films win honors, but for the most part, the studios, except as distributors, don’t want to get involved in them. Why not? Because they are “execution dependent”—that is, in order to succeed, they have to be good. It has come to this: a movie studio can no longer risk making good movies. Its business model depends on the assured audience and the blockbuster. It has done so for years and will continue to do so for years more. Nothing is going to stop the success of The Avengers from laying waste to the movies as an art form. The big revenues from such pictures rarely get siphoned into more adventurous projects; they get poured into the next sequel or a new franchise. Pretending otherwise is sheer denial.
II.
ON APRIL 30, 2010, a week before Iron Man 2 made its American debut, an independent film called Please Give, written and directed by Nicole Holofcener and starring Catherine Keener, Rebecca Hall, and Oliver Platt, opened in five theaters in the United States. The theatrical gross for the first weekend was $118,000. Holofcener’s movie is a modest and formally conservative but sharply perceptive comedy devoted to a group of neighbors in Manhattan—a “relationship” film (in Hollywood jargon) arrayed around such matters as the ambiguous moral quality of benevolence and the vexing but inescapable necessity of family loyalty. Like a good short-story writer, Holofcener has a precise and gentle touch; moments from the picture have lingered in the affections of people who saw it. I’m not saying that Please Give is anything great—but look at how hard it had to struggle to make even the slightest impression in the marketplace. Please Give cost $3 million, and its worldwide theatrical gross was $4.3 million. Once the ancillary markets are added in, the movie, on a small scale, will also be a financial success. But, so far, no more than about 500,000 people have seen it in theaters. Around 83 million have seen Iron Man 2 in theaters. Maybe 175 million have seen Transformers 3. At least 250 million people have seen The Avengers.
Most of the great directors of the past—Griffith, Chaplin, Murnau, Renoir, Lang, Ford, Hawks, Hitchcock, Welles, Rossellini, De Sica, Mizoguchi, Kurosawa, Bergman, the young Coppola, Scorsese, and Altman, and many others—did not imagine that they were making films for a tiny audience, and they did not imagine they were making “art” movies, even though they worked with a high degree of conscious artistry. (The truculent John Ford would have glared at you with his unpatched eye if you used the word “art” in his presence.) They thought that they were making films for everyone, or at least everyone with spirit, which is a lot of people. But over the past twenty-five years, if you step back and look at the American movie scene, you see the mass-culture juggernauts, increasingly triumphs of heavy-duty digital craft, tempered by self-mockery and filling up every available corner of public space; and the tiny, morally inquiring “relationship” movies, making their modest way to a limited audience. The ironic cinema, and the earnest cinema; the mall cinema, and the art house cinema.
I can hear the retorts. If such inexpensive movies as Please Give (or Winter’s Bone, an even better movie, which also came out in 2010, or Beasts of the Southern Wild) still get made, and they have an appreciative audience, however small; if directors such as Martin Scorsese and Steven Soderbergh and David O. Russell and Kathryn Bigelow and Noah Baumbach and David Fincher and Wes Anderson are doing interesting things within the system; if Terrence Malick can make a lyrical masterpiece such as The Tree of Life; if the edges of the industry are soulfully alive even as the center is mostly an algorithm for making money—if all of this is so, then why get steamed over the Iron Man or Transformers franchises?
The reason is this: not everything a film artist wants to say can be said with $3 million. Artists who want to work with, say, $30 million (still a moderate amount of money by Hollywood standards) often have an impossible time getting their movies made. At this writing, Paul Thomas Anderson (There Will Be Blood), one of the most talented men in Hollywood, has finished his Scientology movie, The Master, but it took years of pleading to get the money to do it. (An heiress came to his rescue.) After making Capote, Bennett Miller was idle for six years before making Moneyball. Alexander Payne had to wait seven years (after Sideways) before making The Descendants. Alfonso Cuarón hasn’t brought out a movie since the brilliant Children of Men in 2006. Guillermo del Toro, the gifted man who made Pan’s Labyrinth, is also having trouble getting money for his projects. By studio standards, there isn’t a big enough audience for their movies: they can work if they want to, but only on very small budgets. You cannot mourn an unmade project, but you can feel its absence through the long stretches of an inane season.
And why isn’t there a big enough audience for art? Consider that in recent years the major studios have literally gamed the system. American children—boys, at least—play video games and read comic books and graphic novels. Latching onto those tastes, Disney purchased Marvel Comics for $4 billion, which gives it the right to make Marvel’s superhero comic book characters into movies. Paramount has its own deal with Marvel for the Captain America character and others. Time Warner now owns DC Comics, and Warner Bros. will make an endless stream of movies based on DC Comics characters (the Superman, Batman, and Green Lantern pictures are just the beginning). For years, all the studios have tried to adapt video games into movies, often with disastrous results. So Warner Bros. took the logical next step: it bought a video game company, which is developing new games that the studio will later make into films.
“Give me the children until they are seven, and anyone may have them afterwards,” Francis Xavier, one of the early Jesuits, is supposed to have said. The studios grab boys when they are seven, eight, or nine, command a corner of their hearts, and hold them with franchise sequels and product tie-ins for fifteen years. The Twilight series of teen vampire movies, which deliciously sell sex without sex—romantic danger without fornication—caught girls in the same way at a slightly older age. The Hunger Games franchise will be with us for years. In brief, the studios are not merely servicing the tastes of the young audience; they are also continuously creating the audience to whom they want to sell. (They have tied their fortunes to the birth rate.) Which raises an inevitable question: will these constantly created new audiences, arising from infancy with all their faculties intact but their expectations already defined—these potential moviegoers—will they ever develop a taste for narrative, for character, for suspense, for acting, for irony, for wit, for drama? Isn’t it possible that they will be so hooked on sensation that anything without extreme action and fantasy will just seem lifeless to them?
APART FROM THAT dolorous autumn-leaves season (the Holocaust, troubled marriages, raging families, self-annihilating artists), American movies during the rest of the year largely abandon older audiences, leaving them to wander about like downsized workers. Many gratefully retreat into television, where producer-writers such as David Chase, Aaron Sorkin, David Simon, and David Milch now enjoy the same freedom and status, at HBO, as the Coppola-Scorsese generation of movie directors forty years ago. Cable television has certainly opened a space for somber realism, such as The Wire, and satirical realism, such as Mad Men and Lena Dunham’s mock-depressive, urban-dejection series Girls. But television cannot be the answer to what ails movies. I have been ravished in recent years by things possible only in movies—by Paul Thomas Anderson’s There Will Be Blood, Julian Schnabel’s The Diving Bell and the Butterfly, and Malick’s The Tree of Life, which refurbished the tattered language of film. Such films as Sideways, The Squid and the Whale, and Capote have a fineness, a nuanced subtlety, that would come off awkwardly on television. Would that there were more of them.
The intentional shift in large-scale movie production away from adults is a sad betrayal and a minor catastrophe. Among other things, it has killed a lot of the culture of the movies. By culture, I do not mean film festivals, film magazines, and cinephile Internet sites and bloggers, all of which are flourishing. I mean that blessedly saturated mental state of moviegoing, both solitary and social, half dreamy, half critical, maybe amused, but also sometimes awed, that fuels a living art form. Moviegoing is both a private and a sociable affair—a strangers-at-barbecues, cocktail-party affair, the common coin of everyday discourse. In the fall season there may be a number of good things to see, and so, for adult audiences, the habit may flicker to life again. If you have seen one of the five interesting movies currently playing, then you need to see the other four so you can join the dinner-party conversation. If there is only one, as there is most of the year, you may skip it without feeling you are missing much.
THESE OBSERVATIONS annoy many people, including some of the smartest people I know, particularly men in their late forties and younger, who have grown up with pop culture dominated by the conglomerates and don’t know anything else. They don’t disagree, exactly, but they find all of this tiresome and beside the point. They accept the movies as a kind of environment, a constant stream. There are just movies, you see, movies always and forever, and of course many of them will be uninspiring, and always have been. Critics, chalking the score on the blackboard, think of large-scale American movie-making as a system in which a few talented people, in order to make something good, struggle against discouragement or seduction; but for my young media-hip friends, this view is pure melodrama. They see the movies not as a moral and aesthetic battleground, but as a media game that can be played either shrewdly or stupidly. There is no serious difference for them between making a piece of clanging, overwrought, mock-nihilistic digital roughhouse for $200 million and a personal independent film for $2 million. They are not looking for art, and they do not want to be associated with commercial failure; it irritates them in some way; it makes them feel like losers. If I say that the huge budgets and profits are mucking up movie aesthetics, changing the audience, burning away other movies, they look at me with a slight smile and say something like this: “There’s a market for this stuff. People are going. Their needs are being satisfied. If they didn’t like these movies, they wouldn’t go.”
But who knows if needs are being satisfied? The audience goes because the movies are there, not because anyone necessarily loves them. My friends’ attitudes are defined so completely by the current movie market that they do not wish to hear that movies, for the first eighty years of their existence, were essentially made for adults. Sure, there were always films for families and children, but, for the most part, ten-year-olds and teens were dragged by their parents to what the parents wanted to see, and this was true well after television reduced the size of the adult audience. The kids saw, and half understood, a satire such as Dr. Strangelove, an earnest social drama such as To Kill a Mockingbird, a cheesy disaster movie such as Airport, and that process of half understanding, half not, may have been part of growing up; it also laid the soil for their own enjoyment of grown-up movies years later. They were not expected to remain in a state of goofy euphoria until they were thirty-five. My friends think that our current situation is normal. They believe that critics are naïve blowhards, but it is they who are naïve.
III.
THE LANGUAGE OF big-budget, market-driven movies—the elements of shooting, editing, storytelling, and characterization—began disintegrating as far back as the 1980s, but all of this crystallized for me a decade ago, in the summer of 2001, when the slovenliness of what I was seeing that year, even in the Oscar-winning Gladiator, hit me hard. The action scenes in Gladiator were mostly a blur of whirling movement shot right up close—a limb hacked off and flying, a spurt of blood, a flash of chariot wheels. Who could actually see anything? Yet almost no one seemed to object. The old ideal of action as something staged cleanly and realistically in open space had been destroyed by sheer fakery and digital “magic”—a constant chopping of movement into tiny pieces that are then assembled by computer editing into exploding little packages. What we were seeing in Gladiator and other movies was not just a series of individual artistic failures and crass commercial strategies, but a new and awful idea of how to put a picture together.
Seventy years ago, the look of a given studio’s films reflected the ambitions and the fantasies of the men who ran it, as well as the film genres they cultivated and the writers, directors, and craftsmen they hired. But by the 1980s, as the studios became just one part—and not always a very profitable part—of enormous conglomerates, the head of the motion-picture division was mainly responsible for a revenue stream that would please board members, shareholders, and stock analysts. Looking around him, he saw divisions of his conglomerate that had a greater profit ratio than his own—video games, for instance. Imitating these commercially successful forms would not hurt him among the people he needed to please. Under such pressure, style quickly fades away. Apart from some of the animated work, it is impossible now to tell the films of one studio from another. All the studios are ruled by what I would call conglomerate aesthetics.
The phrase falls uneasily on the ear, so I had better say that I don’t mean to pile into the tumbrels every large movie recently made by conglomerates. I realize as well that “conglomerate aesthetics” has a cranky sub-Marxist ring to it, the sound of an assistant professor warming the prejudices of an academic conference. Naïveté is a poor excuse for false moralism, both for me and for the professor, and so I should immediately add that we both know that Hollywood has almost always been a big-money game, that money is the lifeblood of large-scale picture-making. (David Thomson’s book The Whole Equation, which appeared in 2004, provides a fascinating account of the relations of art and money at different times in Hollywood history.) Yet the desire to be profitable does not dictate, in itself, one style or another. The dreadfulness of many big movies now cannot be waved away on the grounds that the studios have to make them that way. They don’t have to make them that way; they just think they do. They choose this style.
That summer of 2001, the shape of conglomerate aesthetics could be seen in the narrative gibberish of too many creatures and too many villains in the overstuffed, put-on adventure movie The Mummy Returns; and it could be seen in the frantic pastiche construction of Baz Luhrmann’s musical Moulin Rouge, with its characters openly borrowed from other movies, its songs composed of many other songs—music that alludes to the history of pop rather than risking the painful beauty of a ravishing new melody. The conglomerate aesthetic seizes on the recycled and the clichéd; it disdains originality and shies away from anything too individual, too clearly defined—even a strong personality. (Angelina Jolie wasn’t required to be a person in the Lara Croft movies—she got by on pure attitude. Ewan McGregor in The Phantom Menace didn’t even have attitude.) The only genuine protagonist in big movies in that period was Russell Crowe’s Jeffrey Wigand in Michael Mann’s The Insider, from 1999, and that movie failed commercially. In Hollywood, the lesson has been learned: no complex protagonist unless he is a historical figure such as Howard Hughes, John Nash (of A Beautiful Mind), J. Edgar Hoover, or the like. As the visual schemes grow more complicated, the human material becomes undernourished, wan, apologetic, absent—or so stylized that you can enjoy it only ironically (Angelina Jolie as a svelte, voguing super-killer).
Constant and incoherent movement; rushed editing strategies; feeble characterization; pastiche and hapless collage—these are the elements of conglomerate aesthetics. There is something more than lousy film-making in such a collection of attention-getting swindles. Again and again I have the sense that film-makers are purposely trying to distance the audience from the material—to prevent moviegoers from feeling anything but sensory excitement, to thwart any kind of significance in the movie.
Consider a single scene from one of the most prominent artistic fiascos of recent years, Michael Bay’s Pearl Harbor. Forget Ben Affleck’s refusal to sleep with Kate Beckinsale the night before going off to battle; forget the rest of the frightfully noble love story. Look at the action sequences in the movie, the scenes that many critics unaccountably praised. Here’s the moment: the Japanese have arrived, dropped their load, and gone back to their carriers. Admiral Kimmel (Colm Feore), the commander of the Pacific fleet, then rides through the harbor in an open boat, surveying the disaster. We have seen Kimmel earlier: not as a major character, but as a definite presence. Before December 7, he had intimations that an attack might be coming but not enough information to form a coherent picture. He did not act, and now he feels the deepest chagrin. Dressed in Navy whites, and surrounded by junior officers also dressed in white, he passes slowly through ships torn apart and still burning, ships whose crews, in some cases, remain trapped below the waterline.
Now, the admiral’s boat trip could have yielded a passage of bitterly eloquent movie poetry. Imagine what John Ford or David Lean would have done with it! We have just seen bodies blackened by fire, the men’s skin burned off. Intentionally or not, the spotless dress whites worn by the officers become an excruciating symbol of the Navy’s complacency before the attack. The whole meaning of Bay’s movie could have been captured in that one shot if it had been built into a sustained sequence. Yet this shot, to our amazement, lasts no more than a few seconds. After cutting away, Bay and his editors return to the scene, but this time from a different angle, and that shot doesn’t last, either. Bay and his team of editors abandon their own creation, just as, earlier in the movie, they jump away from an extraordinary shot of nurses being strafed as they run across an open plaza in front of the base hospital.
People who know how these movies are made told me that the film-makers could not have held those shots any longer, because audiences would have noticed that they were digital fakes. That point (if true) should tell you that something is seriously wrong. If you cannot sustain shots at the dramatic crux of your movie, why make violent spectacle at all? It turns out that fake-looking digital film-making can actually disable spectacle when it is supposed to be set in the real world. Increasingly, the solution has been to create more and more digitized cities, houses, castles, planets. Big films have lost touch with the photographed physical reality that provided so much greater enchantment than fantasy.
Of course, no one who has ever enjoyed a mindless hyper-charged movie could say that a little meaningless movement destroyed his day or ruined him for art. Everyone loves being worked over now and then, and speed in itself is not the enemy. Like many moviegoers, I was dazzled by the craft of Paul Greengrass (and cinematographer Oliver Wood) in The Bourne Supremacy (2004) and The Bourne Ultimatum (2007). Greengrass’s method in those thrillers, in which Matt Damon is pursued through damp, hostile Berlin or Moscow, is to shoot action in tiny fragments—a face glimpsed across the street, a car whizzing by—and then pull the fragments together into a coherent impression of rapid movement. Even the bruising, endless car chase in The Bourne Supremacy was so well done that one didn’t feel cheated. We can accept that car chases are going to be over-cut for sheer excitement.
The problem is that too many ordinary scenes in many big movies are cut like car chases. One of the tendencies of conglomerate aesthetics is to replace action and drama as much as possible with mere movement. Conglomerate aesthetics requires a dozen trash epiphanies (explosions, transformations, rebirths) rather than the arc of a single pure visual climax; mass slaughter rather than a single death. The job of luring the big audience to the Friday opening—the linchpin of the commercial system—has destroyed action on the screen by making it carry the entire burden of the movie’s pleasure. In Christopher Nolan’s Batman movies, The Dark Knight and The Dark Knight Rises, sensation has been carried to the point of a brazenly beautiful nihilism, in which a modishly “dark” atmosphere of dread and disaster overwhelms any kind of plot logic or sequence or character interest. You leave the theater vibrating, but a day later you don’t feel a thing. The audience has been conditioned to find the absence of emotion pleasurable.
IV.
TO UNDERSTAND what is so strange about big movies now, you have to remember a little of what movies once were and what audiences once wanted from them—how stories were told in different periods, how movies were put together. We are now trapped by an exasperating irony: employing all the devastating means at their command, movies have in some ways gone back to their crudest beginnings and are determined, perhaps, to stay there forever, or at least as long as the box-office and ancillary-market mother lode holds out.
A long time ago, at a university far away, I taught film, and I did what many teachers have no doubt done before and since: I tried to develop film aesthetics for the students as a historical progression toward narrative. After all, many of the first movies in the 1890s were not stories at all, but just views of things—a train coming into a station, a wave breaking toward the camera. These visual astonishments caused the audience to stare open-mouthed or duck under the seats for cover (or so the legend says, preserved recently in Scorsese’s Hugo). I wanted my students to be astonished, too—to enjoy the development of film technique as a triumph of artistic and technical consciousness. I worked in straight chronological order, moving from those early “views” through Edwin S. Porter’s 1903 experiments in linear sequencing in films such as Life of an American Fireman and The Great Train Robbery and then on through D.W. Griffith’s consolidation a few years later of an actual syntax—long and medium shots, close-ups, flashbacks, parallel editing, and the like.
But I now think there was something merely convenient in teaching that way. The implication of my lesson plan was that the medium had by degrees come to a realization of itself, discovering in those early years—say, 1895 to 1915—its own true nature embedded within its technology: the leafy oak of narrative lodged within the acorn of celluloid. It is a teleological view, and it is probably false. In truth, there is nothing inherent in the process of exposing strips of film to light sixteen or eighteen times a second (later twenty-four times) that demanded the telling of a story.
At the beginning, after the views of trains and oceans, movies offered burlesque skits or excerpts from theatrical events, but still no stories. A completed movie was often just a single, fixed, long-lasting shot. It is likely, as David Bordwell, Janet Staiger, and Kristin Thompson explained in The Classical Hollywood Cinema, that narrative emerged less from the inherent nature of film than from the influence of older forms—novels and short stories and plays. And also from pressure to create work of greater power to attract more and more customers.
If creating fictions is not encoded in the DNA of film, then what is happening now has a kind of grisly logic to it. As the narrative and dramatic powers of movies fall into abeyance, and many big movies turn into sheer spectacle, with only a notional pass at plot or characterization, we are returning with much greater power to capers and larks that were originally performed in innocence. The kind of primitive chase, for instance, that in 1905 depended on some sort of accident or mischief rather than on character or plot has been succeeded by the endless up-in-the-air digital fight. The 1905 scene has a harum-scarum looseness and wit; the destructive action scenes in movies now are brought off with a kind of grim, faceless glee, an exultation in power and mass: We can do it, therefore we will do it, and our ability to do it is the meaning of it, and even if you’re not impressed, it is still going to roll over you.
Neo-primitivism is one of the great strategies of modern art—Picasso and his African masks; Bartók re-fashioning folk materials in his advanced music; Chuck Berry drawing on “hillbilly” rhythms for his own super-charged songs. Neo-primitivism cleared away the mush of Viennese or Edwardian sentiment, the perfume and pallor of Paris salon art, the Brill Building softness of early-1960s rock. In movies, a great deal of mush has also been cleared away (the tyranny of niceness that ruined so many family movies in the 1940s; the physical, verbal, and sexual coyness of the 1950s). But the continuous motion of crass conglomerate product aimed at the young has removed much else as well, such as the mysteries of personality, sophisticated dialogue, any kind of elegant or smart life, and, frequently, a woman’s emotions (think of Bette Davis, Katharine Hepburn, Joan Crawford) as the center of a movie. The studios have been devoted to the systematic de-culturation of movies, and the casting away of all manner of dramatic cunning laboriously built up over decades.
SPARKED, PERHAPS, by the absence of sound, film-makers developed the visual possibilities of film very quickly, and by the end of World War I the vocabulary of editing and the overall strategy of Hollywood movie-making were set. Celluloid may not have carried storytelling in its genes, but, as David Bordwell puts it, “telling a story is the basic formal concern which makes the film studio resemble the monastery’s scriptorium, the site of the transcription and transmission of countless narratives.” In the scriptorium, an unspoken vow was repeated daily: audiences need to get emotionally involved in a story in order to enjoy themselves. The idea is so obvious that it seems absurd to spell it out. Yet in recent years this assumption and everything that follows from it have begun to weaken and even to disappear.
It is shocking to be reminded of some of the things that are now slipping away: that whatever is introduced in a tale has to mean something, and that one thing should inevitably lead to another; that events are foreshadowed and then echoed, and that tension rises steadily through a series of minor climaxes to a final, grand climax; that music should be created not just as an enforcer of mood but as the outward sign of an emotional or narrative logic; that characterization should be consistent; that a character’s destiny is supposed to have some moral and spiritual meaning—the wicked punished, the virtuous rewarded or at least sanctified. It was a fictional world of total accountability.
For decades, the rules of the scriptorium signified “Hollywood” in both the contemptuous sense and the honorific sense—a system dismissed by the humorless and pleasureless as “bourgeois cinema,” but also enjoyed around the world by millions and praised in majestic terms by critics, most notably André Bazin. Looking back to the late 1930s from a dozen or so years later, and passing judgment on precisely such conventions, Bazin announced that “in seeing again today such films as Jezebel by William Wyler, Stagecoach by John Ford, or Le jour se lève by Marcel Carné, one has the feeling that in them an art has found its perfect balance, its ideal form of expression, and reciprocally one admires them for dramatic and moral themes to which the cinema, while it may not have created them, has given a grandeur, an artistic effectiveness, that they would not otherwise have had. In short, here are all the characteristics of the ripeness of a classical art.”
The ripeness of a classical art. The words are stirring, but at this point almost embarrassing. What on Earth did Bazin mean? Never mind Le jour se lève, which is impossible to understand without evoking French literary and philosophical traditions. What about the American films Bazin mentions? What’s in these movies? What satisfactions did they offer? And is it mere weak-souled nostalgia that makes one long for their equivalent now?
In Jezebel, from 1938, everything revolves around a central character of extraordinary perversity. Julie Marsden, played by Bette Davis, is a rich belle in pre-Civil War New Orleans. Her situation is paradoxical: she exercises the largest possible freedom within the confines of a social tradition that allows women no other career than that of a coquette. Taut and over-defined emotionally—she demands categorical approval or rejection—Julie values the predominance of her own will more than love, more than society, more than anything. In other words, she is admirable, dislikable, and crazy. She torments her high-minded beau (Henry Fonda) and wears a scarlet dress to a society ball at which unmarried women have traditionally worn white, knowing full well that the act must compromise her fiancé and destroy her own social position.
Around this central incident—which at first seems trivial, and then more and more momentous—a portrait of antebellum Southern society as both noble and savagely inadequate unfolds with surprising power. Yet Davis’s will-driven Julie dominates the movie. All the other characters lean toward her or shy away from her; they fuss before her arrivals and flutter after her departures. William Wyler is often said to have used an “invisible” technique, which means, in this case, that the camera glides, dollies, cranes, goes wherever it has to go, leading the eye from one shot to the next in an unbroken continuity that illuminates a story that is essentially psychological and social. The style is dedicated to the defining moment: the upturned face, the instant of self-definition, the rapid concentrics of astonishment and scandal spreading through a room as a young woman enters and gazes around herself in defiance. All the physical details are tightly arrayed around the outrageous, mesmerizing central figure.
In Jezebel, a rigidly structured society is falling into decay. In Bazin’s other choice, Stagecoach, a new society is taking shape—the West as caravan of American democracy. The movie is perhaps the most popular and best-known Western ever made, and yet seeing it again one is struck by how fresh it is—how very funny and sharply edged, how bracingly decisive and swift. An entire fluid American world is moving West: a high-type Virginia lady, a fallen Southern gentleman, an alcoholic doctor, a hypocritical banker, a good-hearted prostitute—all these people plus an ineffably relaxed young male beauty, John Wayne. They all go to Lordsburg, but they also move into their future. In Stagecoach, it is not so much a matter of a dominating individual as of an evolving group. All through the trip, the writer, Dudley Nichols, and the director, John Ford, make it clear what each of these highly wrought people thinks of the others. By degrees, they all come to understand that the alcoholic doctor is something more than an irresponsible lush, that the Southern gambler and murderer is suffused with self-disgust and has some genuine tenderness in him, and so on. Apart from all the fun and excitement it generates, Stagecoach is a drama of perception.
It is also a drama of space. What audiences feel about characters on the screen is probably affected more than most of us realize by the way the space surrounding the people is carved up and re-combined. In John Ford, the geographical sense is very strong—the poetic awareness of sky and landscape and moving horses, but also the attention to such things as how people are arrayed at a long table as an indication of social caste (the prostitute at one end, the fine lady at the other). The best use of space is not just an effective disposition of activity on the screen; it is the emotional meaning of that activity.
Directors used to take great care with such things: spatial integrity was another part of the unspoken contract with audiences, a codicil to the narrative doctrine of the scriptorium. It allowed viewers to understand, say, how much danger a man was facing when he stuck his head above a rock in a gunfight, or where two secret lovers at a dinner party were sitting in relation to their jealous enemies. Space could be analyzed and broken into close-ups and reaction shots and the like, but then it had to be re-unified in a way that brought the experience together in a viewer’s head—so that, in Jezebel, one felt physically what Bette Davis suffered as scandalized couples backed away from her in the ballroom. If the audience didn’t experience that emotion, the movie wouldn’t have cast its spell.
This seems like plain common sense. Who could possibly argue with it? Yet spatial integrity is just about gone from big movies. What Wyler and his editors did—matching body movement from one shot to the next—is rarely attempted now. Hardly anyone thinks it important. The most common method of editing in big movies now is to lay one furiously active shot on top of another, and often with only a general relation in space or body movement between the two. The continuous whirl of movement distracts us from noticing the uncertain or slovenly fit between shots. The camera moves, the actors move: in Moulin Rouge, the camera swings wildly over masses of men in the nightclub, Nicole Kidman flings herself around her boudoir like a rag doll. The digital fight at the end of The Avengers takes place in a completely artificial environment, a vacuum in which gravity has been abandoned; continuity is not even an issue. If the constant buffoonishness of action in all sorts of big movies leaves one both over-stimulated and unsatisfied—cheated without knowing why—then part of the reason is that the terrain hasn’t been sewn together. You have been deprived of that loving inner possession of the movie that causes you to play it over and over in your head.
A dominating individual, a dynamically evolving group—the classical American cinema was always centered in character one way or another. It was an ideal, but hardly the only ideal, and I am certainly not suggesting a return to 1939. Most movies in those years, of course, were nowhere near as good as Jezebel and Stagecoach, and at their worst Hollywood movies in the classical period were draped in the molasses of sentiment and reassurance.
After the war, it was time to pull off the drapes. Bazin and also James Agee loved the Italian Neo-Realists of the 1940s and 1950s, who produced a plainer image and a harsher moral tone than Hollywood ever did; and if Bazin had lived past 1958, when Truffaut, Godard, Chabrol, Rivette, and the others were just getting started, he would have loved the flowing, open rhythms and off-hand literary flavor of the New Wave. In America, television and other media entered the arena, luring viewers away, and the old tropes got stretched or broken into new shapes. To name just a few of the famous ones: the point-of-view camera and shock cutting of Hitchcock; the expressionist lighting and radio-studio echo chambers of Orson Welles; the dynamic architecture of widescreen composition in David Lean; the breathtaking and deeply moving tracking shots of Max Ophüls; Stanley Kubrick’s cold, discordant tableaux; the savagery, both humane and inhumane, of Akira Kurosawa and Sam Peckinpah; the crowded operatic realism of Coppola in the first two Godfather movies; the layered, richly allusive dialogue and sour-mash melancholy of Robert Altman; Steven Spielberg’s visually eccentric manipulation of pop archetypes; Quentin Tarantino’s discontinuous time scheme in Pulp Fiction; and many, many others. Sometimes directors subtracted conventional elements from the old syntax; sometimes they overloaded the medium, refusing an obvious emotional pay-off while reaching a purer, more intense emotion through the exaggeration of a single element in filmmaking—say, the sustained, lens-scarring monologues of Ingmar Bergman, which reveal the soul of a performer so powerfully that they expose the soul of the viewer (to himself) as well.
Audiences were no longer enveloped by movies in the same comforting way; sometimes even mainstream commercial movies affronted or even assaulted us. On the whole, this felt good. To be exposed to ugliness and horror, to be disturbed rather than cosseted, overburdened rather than babied, never hurt a moviegoer yet, and it made many of us happy not to have everything prepared and cushioned for us. Abruptness in the form of, say, the jump cuts in Breathless or the breaks in continuity in Annie Hall and dozens of other movies injects little spurts of energy into a scene. In such instances, we were not bothered by discontinuities from shot to shot—not when the sequences worked well within an exciting overall conception whose continuity may have been intellectual and emotional rather than physical. After the war, modernist film-makers also found it impossible to believe in a coherent moral world, and their narratives no longer meted out punishments and rewards in the old Burbank-bookkeeper’s manner. Moral realism felt closer to the way we viewed our own lives, in which we are rarely heroes and few confident or outraged expectations ever meet their longed-for fulfillment in justice.
The glory of modernism was that it yoked together candor and spiritual yearning with radical experiments in form. But in making such changes, filmmakers were hardly abandoning the audience. Reassurance may have ended, but emotion did not. The many alterations in the old stable syntax still honored the contract with us. The ignorant, suffering, morally vacant Jake LaMotta in Raging Bull was as great a protagonist as Julie Marsden. The morose Nashville was as trenchant a group portrait and national snapshot as the hopeful Stagecoach. However elliptical or harsh or astringent, emotion in modernist movies was a strong presence, not an absence.
THE STRUCTURE OF the movie business—the shaping of production decisions by marketing—has kicked bloody hell out of the language of film. But the business framework is not operating alone. Film, a photographic and digital medium, is perhaps more vulnerable than any of the other arts to the postmodernist habits of recycling and quotation. Imitation, pastiche, and collage have become dominant strategies, and there is an excruciating paradox in this development: two of the sprightly media forms derived from movies—commercials and music videos—began to dominate movies. The art experienced a case of blowback.
As everyone knows, we can read an image much more quickly than anyone thought possible forty years ago, and in recent years many commercials have been cut faster and faster. The film-makers know that we are not so much receiving information as getting a visual impression, a mood, a desire. A truly hip commercial has no obvious connection to the product being sold, though selling is still its job. What, then, is being sold at a big movie that is cut the same way? The experience of going to the movie itself, the sensation of being rushed, dizzied, overwhelmed by the images. Michael Bay wasn’t interested in what happened at Pearl Harbor. He was interested in his whizzing fantasia of the event. Nothing important happens in The Avengers. As in half of these big movies, the world is about to end because of some invading force; but the world is always about to end in digital spectacles, and when everything is at stake, nothing is at stake. The larger the movie, the more “content” becomes incidental, even disposable.
In recent years, some of the young movie directors have come out of commercials and MTV. If a director is just starting out in feature films, he doesn’t have to be paid much, and the studios can throw a script at him with the assumption that the movie, if nothing else, will have a great “look.” He has already produced that look in his commercials or videos, which he shoots on film and then finishes digitally—adding or subtracting color, changing the sky, putting in flame or mist, retarding or speeding up movement. In a commercial for a new car, the blue-tinted streets rumble and crack, trees give up their roots, and the silver SUV, cool as a titanium cucumber, rides over the steaming fissures. Wow! What a film-maker! Studio executives or production executives who get financing from studios do not have to instruct such a young director to cut a feature very fast and put in a lot of thrills, because for their big movies they hire only the kind of people who will cut it fast and put in thrills. That the young director has never worked with a serious dramatic structure, or even with actors, may not be considered a liability.
The results are there to see. At the risk of obviousness: techniques that hold your eye in a commercial or video are not suited to telling stories or building dramatic tension. In a full-length movie, images conceived that way begin to cancel each other out or just slip off the screen; the characters are just types or blurred spots of movement. The links with fiction and theater and classical film technique have been broken. The center no longer holds; mere anarchy is loosed upon the screen; the movie winds up a mess.
So are American movies finished, a cultural irrelevance? Despite almost everything, I don’t think the game is up, not by any means. There are talented directors who manage to keep working either within the system or just on the edges of it. Some of the independent films that have succeeded, against the odds, in gaining funding and at least minimal traction in the theaters are obvious signs of hope. Terrence Malick is alive and working hard. Digital is still in its infancy, and if it moves into the hands of people who have a more imaginative and delicate sense of spectacle, it could bloom in any one of a dozen ways. The micro-budget movies now made on the streets or in living rooms might also take off if they give up on sub-Cassavetes ideas of improvisation and accept the necessity of a script. There is enough talent sloshing around in the troubled vessel of American movies to keep the art form alive. But the trouble is real, and it has been growing for more than twenty-five years. By now there is a wearying, numbing, infuriating sameness to the cycle of American releases year after year. Much of the time, adults cannot find anything to see. And that reason alone is enough to make us realize that American movies are in a terrible crisis, which is not going to end soon.
David Denby is a film critic for The New Yorker and the author of Do the Movies Have a Future? (Simon & Schuster). This article appeared in the October 4, 2012 issue of the magazine.
This franchise-capping Avengers was a carefully built phenomenon. Let’s go back a couple of years and pick up a single strand that led to it. Consider one of its predecessors, Iron Man 2, which began its run in the United States, on May 7, 2010, at 4,380 theaters. That’s only the number of theaters: multiplexes often put new movies on two or three, or even five or six, screens within the complex, so the actual number of screens was much higher—well over 6,000. The gross receipts for the opening weekend were $128 million. Yet those were not the movie’s first revenues. As a way of discouraging piracy and cheap street sale of the movie overseas, the movie’s distributor, Paramount Pictures, had opened Iron Man 2 a week earlier in many countries around the world. By May 9, at the end of the weekend in which the picture opened in America, cumulative worldwide theatrical gross was $324 million. By the end of its run, the cumulative total had advanced to $622 million. Let’s face it: big numbers are impressive, no matter what produced them.
The worldwide theatrical gross of Iron Man 2 served as a branding operation for what followed—sale of the movie to broadcast and cable TV, and licensing to retail outlets for DVD rentals and purchase. Iron Man 2 was itself part of a well-developed franchise (the first Iron Man came out in 2008). The hero, Tony Stark, a billionaire industrialist-playboy, first appeared in a Marvel comic book in 1963 and still appears in new Marvel comics. By 2010, rattling around stores and malls all over the world, there were also Iron Man video games, soundtrack albums, toys, bobblehead dolls, construction sets, dishware, pillows, pajamas, helmets, T-shirts, and lounge pants. There was a hamburger available at Burger King named after Mickey Rourke, a supporting player in Iron Man 2. Companies such as Audi, LG Mobile, 7-Eleven, Dr. Pepper, Oracle, Royal Purple motor oil, and Symantec’s Norton software signed on as “promotional partners,” issuing products with the Iron Man logo imprinted somewhere on the product or in its advertising. In effect, all of American commerce was selling the franchise. All of American commerce sells every franchise.
Iron Man movies have a lighter touch than many comparable blockbusters—for instance, the clangorous Transformers movies, which are themselves based on plastic toys, in which dark whirling digital masses barge into each other or thresh their way through buildings, cities, and people, and at which the moviegoer, sitting in the theater, feels as if his head were repeatedly being smashed against a wall. The Iron Man movies have been shaped around the temperament of their self-deprecating star, Robert Downey, Jr., an actor who manages to convey, in the midst of a $200-million super-production, a private sense of amusement. By slightly distancing himself from the material, this charming rake offers the grown-up audience a sense of complicity, which saves it from self-contempt. Like so many big digital movies, the Iron Man films engage in a daringly flirtatious give-and-take with their own inconsequence: the disproportion between the size of the productions, with their huge sets and digital battles, and the puniness of any meaning that can possibly be extracted from them, may, for the audience, be part of the frivolous pleasure of seeing them.
Many big films (not just the ones based on Marvel Comics) are now soaked in what can only be called corporate irony, a mad discrepancy between size and significance—for instance, Christopher Nolan’s widely admired Inception, which generates an extraordinarily complicated structure devoted to little but its own workings. Despite its dream layers, the movie is not really about dreams—the action you see on screen feels nothing like dreams. An industrialist hires experts to invade the dreaming mind of another industrialist in order to plant emotions that would cause the second man to change corporate plans. Or something like that; the plot is a little vague. Anyway, why should we care? What is at stake?
You could say, I suppose, that the movie is about different levels of representation, and then refine that observation and observe that the differences between fiction and reality, between subjective and objective, no longer exist—that what Nolan created is somehow analogous to our life in a postmodernist society in which the image and the real, the simulacrum and the original, have assumed, for many people, equal weight. (The literary and media theorist Fredric Jameson has made such a case for the movie.) You can say all of that, but you still haven’t established why such an academic-spectacular exercise is worth looking at as a work of narrative art, or why any of it matters emotionally.
Nolan’s movie was a whimsical, over-articulate nullity—a huge fancy clock that displays wheels and gears but somehow fails to tell the time. Yet Inception is nothing more than the logical product of a recent trend in which big movies have been progressively drained of sense. As much as two-thirds of the box office for these big films now comes from overseas, and the studios appear to have concluded that if a movie were actually about something, it might risk offending some part of the worldwide audience. Aimed at Bangkok and Bangalore as much as at Bangor, our big movies have been defoliated of character, wit, psychology, local color.
I DO NOT HATE all over-scaled digital work. “God works too slowly,” said Ian McKellen in X-Men, playing Magneto, who can produce mutations on the spot. So can digital film-makers, who play God at will. Digital movie-making is the art of transformation, and in the hands of a few imaginative people it has produced sequences of great loveliness and shivery terror—the literally mercurial reconstituted beings in Terminator 2, the floating high-chic battles in The Matrix. I loved the luscious purple beauty of Avatar, but Avatar is off the scale in visual allure, and so is Alfonso Cuarón’s Harry Potter and the Prisoner of Azkaban, the best of the Potter series.
Apart from these movies and a few others, however, many of us have logged deadly hours watching superheroes bashing people off walls, cars leapfrogging one another in tunnels, giant toys and mock-dragons smashing through Chicago, and charming teens whooshing around castles. What we see in bad digital action movies has the anti-Newtonian physics of a cartoon, but drawn with real figures. Rushed, jammed, broken, and overloaded, action now produces temporary sensation rather than emotion and engagement. Afterward these sequences fade into blurs, the different blurs themselves melding into one another—a vague memory of having been briefly excited rather than the enduring contentment of scenes playing again and again in one’s head.
The oversized weightlessness leaves one numbed, defeated. Surely rage would seem an excessive response to movies so enormously trivial. Yet the overall trend is enraging. Fantasy is moving into all kinds of adventure and romantic movies; time travel has become a commonplace. At this point the fantastic is chasing human temperament and destiny—what we used to call drama—from the movies. The merely human has been transcended. And if the illusion of physical reality is unstable, the emotional framework of movies has changed, too, and for the worse. In time—a very short time—the fantastic, not the illusion of reality, may become the default mode of cinema.
Yes, of course, the studios, with greater or lesser degrees of enthusiasm, make other things besides spectacles—thrillers and horror movies; chick flicks and teen romances; comedies with Adam Sandler, Will Ferrell, Jennifer Aniston, Katherine Heigl, and Cameron Diaz; burlesque-hangover debauches and their female equivalents; animated pictures for families. All these movies have an assured audience (or one at least mostly assured), and a few of them, especially the Pixar animated movies, may be very good. The studios will also distribute an interesting movie if their financing partners pay for most of it. And at the end of the year, as the Oscars loom, they distribute unadventurous but shrewdly written and played movies, such as The Fighter, which are made entirely by someone else. Again and again these serioso films win honors, but for the most part, the studios, except as distributors, don’t want to get involved in them. Why not? Because they are “execution dependent”—that is, in order to succeed, they have to be good. It has come to this: a movie studio can no longer risk making good movies. Its business model depends on the assured audience and the blockbuster, as it has for years and will for years more. Nothing is going to stop the success of The Avengers from laying waste to the movies as an art form. The big revenues from such pictures rarely get siphoned into more adventurous projects; they get poured into the next sequel or a new franchise. Pretending otherwise is sheer denial.
II.
ON APRIL 30, 2010, a week before Iron Man 2 made its American debut, an independent film called Please Give, written and directed by Nicole Holofcener and starring Catherine Keener, Rebecca Hall, and Oliver Platt, opened in five theaters in the United States. The theatrical gross for the first weekend was $118,000. Holofcener’s movie is a modest and formally conservative but sharply perceptive comedy devoted to a group of neighbors in Manhattan—a “relationship” film (in Hollywood jargon) arrayed around such matters as the ambiguous moral quality of benevolence and the vexing but inescapable necessity of family loyalty. Like a good short-story writer, Holofcener has a precise and gentle touch; moments from the picture have lingered in the affections of people who saw it. I’m not saying that Please Give is anything great—but look at how hard it had to struggle to make even the slightest impression in the marketplace. Please Give cost $3 million, and its worldwide theatrical gross was $4.3 million. Once the ancillary markets are added in, the movie, on a small scale, will also be a financial success. But, so far, no more than about 500,000 people have seen it in theaters. Around 83 million have seen Iron Man 2 in theaters. Maybe 175 million have seen Transformers 3. At least 250 million people have seen The Avengers.
Most of the great directors of the past—Griffith, Chaplin, Murnau, Renoir, Lang, Ford, Hawks, Hitchcock, Welles, Rossellini, De Sica, Mizoguchi, Kurosawa, Bergman, the young Coppola, Scorsese, and Altman, and many others—did not imagine that they were making films for a tiny audience, and they did not imagine they were making “art” movies, even though they worked with a high degree of conscious artistry. (The truculent John Ford would have glared at you with his unpatched eye if you used the word “art” in his presence.) They thought that they were making films for everyone, or at least everyone with spirit, which is a lot of people. But over the past twenty-five years, if you step back and look at the American movie scene, you see the mass-culture juggernauts, increasingly triumphs of heavy-duty digital craft, tempered by self-mockery and filling up every available corner of public space; and the tiny, morally inquiring “relationship” movies, making their modest way to a limited audience. The ironic cinema, and the earnest cinema; the mall cinema, and the art house cinema.
I can hear the retorts. If such inexpensive movies as Please Give (or Winter’s Bone, an even better movie, which also came out in 2010, or Beasts of the Southern Wild) still get made, and they have an appreciative audience, however small; if directors such as Martin Scorsese and Steven Soderbergh and David O. Russell and Kathryn Bigelow and Noah Baumbach and David Fincher and Wes Anderson are doing interesting things within the system; if Terrence Malick can make a lyrical masterpiece such as The Tree of Life; if the edges of the industry are soulfully alive even as the center is mostly an algorithm for making money—if all of this is so, then why get steamed over the Iron Man or Transformers franchises?
The reason is this: not everything a film artist wants to say can be said with $3 million. Artists who want to work with, say, $30 million (still a moderate amount of money by Hollywood standards) often have an impossible time getting their movies made. At this writing, Paul Thomas Anderson (There Will Be Blood), one of the most talented men in Hollywood, has finished his Scientology movie, The Master, but it took years of pleading to get the money to do it. (An heiress came to his rescue.) After making Capote, Bennett Miller was idle for six years before making Moneyball. Alexander Payne had to wait seven years (after Sideways) before making The Descendants. Alfonso Cuarón hasn’t brought out a movie since the brilliant Children of Men in 2006. Guillermo del Toro, the gifted man who made Pan’s Labyrinth, is also having trouble getting money for his projects. By studio standards, there isn’t a big enough audience for their movies: they can work if they want to, but only on very small budgets. You cannot mourn an unmade project, but you can feel its absence through the long stretches of an inane season.
And why isn’t there a big enough audience for art? Consider that in recent years the major studios have literally gamed the system. American children—boys, at least—play video games, and read comic books and graphic novels. Latching onto those tastes, Disney purchased Marvel Comics for $4 billion, which gives it the right to make Marvel’s superhero comic book characters into movies. Paramount has its own deal with Marvel for the Captain America character and others. Time Warner now owns DC Comics, and Warner Bros. will make an endless stream of movies based on DC Comics characters (the Superman, Batman, and Green Lantern pictures are just the beginning). For years, all the studios have tried to adapt video games into movies, often with disastrous results. So Warner Bros. took the logical next step: it bought a video game company, which is developing new games that the studio will later make into films.
“Give me the children until they are seven, and anyone may have them afterwards,” Francis Xavier, one of the early Jesuits, is supposed to have said. The studios grab boys when they are seven, eight, or nine, command a corner of their hearts, and hold them with franchise sequels and product tie-ins for fifteen years. The Twilight series of teen vampire movies, which deliciously sell sex without sex—romantic danger without fornication—caught girls in the same way at a slightly older age. The Hunger Games franchise will be with us for years. In brief, the studios are not merely servicing the tastes of the young audience; they are also continuously creating the audience to whom they want to sell. (They have tied their fortunes to the birth rate.) Which raises an inevitable question: will these constantly created new audiences, arising from infancy with all their faculties intact but their expectations already defined—these potential moviegoers—will they ever develop a taste for narrative, for character, for suspense, for acting, for irony, for wit, for drama? Isn’t it possible that they will be so hooked on sensation that anything without extreme action and fantasy will just seem lifeless and dead to them?
APART FROM THAT dolorous autumn-leaves season (the Holocaust, troubled marriages, raging families, self-annihilating artists), American movies during the rest of the year largely abandon older audiences, leaving them to wander about like downsized workers. Many gratefully retreat into television, where producer-writers such as David Chase, Aaron Sorkin, David Simon, and David Milch now enjoy the same freedom and status, at HBO, as the Coppola-Scorsese generation of movie directors forty years ago. Cable television has certainly opened a space for somber realism, such as The Wire, and satirical realism, such as Mad Men and Lena Dunham’s mock-depressive, urban-dejection series Girls. But television cannot be the answer to what ails movies. I have been ravished in recent years by things possible only in movies—by Paul Thomas Anderson’s There Will Be Blood, Julian Schnabel’s The Diving Bell and the Butterfly, and Malick’s The Tree of Life, which refurbished the tattered language of film. Such films as Sideways, The Squid and the Whale, and Capote have a fineness, a nuanced subtlety, that would come off awkwardly on television. Would that there were more of them.
The intentional shift in large-scale movie production away from adults is a sad betrayal and a minor catastrophe. Among other things, it has killed a lot of the culture of the movies. By culture, I do not mean film festivals, film magazines, and cinephile Internet sites and bloggers, all of which are flourishing. I mean that blessedly saturated mental state of moviegoing, both solitary and social, half dreamy, half critical, maybe amused, but also sometimes awed, that fuels a living art form. Moviegoing is both a private and a sociable affair—a strangers-at-barbecues, cocktail-party affair, the common coin of everyday discourse. In the fall season there may be a number of good things to see, and so, for adult audiences, the habit may flicker to life again. If you have seen one of the five interesting movies currently playing, then you need to see the other four so you can join the dinner-party conversation. If there is only one, as there is most of the year, you may skip it without feeling you are missing much.
THESE OBSERVATIONS annoy many people, including some of the smartest people I know, particularly men in their late forties and younger, who have grown up with pop culture dominated by the conglomerates and don’t know anything else. They don’t disagree, exactly, but they find all of this tiresome and beside the point. They accept the movies as a kind of environment, a constant stream. There are just movies, you see, movies always and forever, and of course many of them will be uninspiring, and always have been. Critics, chalking the score on the blackboard, think of large-scale American movie-making as a system in which a few talented people, in order to make something good, struggle against discouragement or seduction; but for my young media-hip friends, this view is pure melodrama. They see the movies not as a moral and aesthetic battleground, but as a media game that can be played either shrewdly or stupidly. There is no serious difference for them between making a piece of clanging, overwrought, mock-nihilistic digital roughhouse for $200 million and a personal independent film for $2 million. They are not looking for art, and they do not want to be associated with commercial failure; it irritates them in some way; it makes them feel like losers. If I say that the huge budgets and profits are mucking up movie aesthetics, changing the audience, burning away other movies, they look at me with a slight smile and say something like this: “There’s a market for this stuff. People are going. Their needs are being satisfied. If they didn’t like these movies, they wouldn’t go.”
But who knows if needs are being satisfied? The audience goes because the movies are there, not because anyone necessarily loves them. My friends’ attitudes are defined so completely by the current movie market that they do not wish to hear that movies, for the first eighty years of their existence, were essentially made for adults. Sure, there were always films for families and children, but, for the most part, ten-year-olds and teens were dragged by their parents to what the parents wanted to see, and this was true well after television reduced the size of the adult audience. The kids saw, and half understood, a satire such as Dr. Strangelove, an earnest social drama such as To Kill a Mockingbird, a cheesy disaster movie such as Airport, and that process of half understanding, half not, may have been part of growing up; it also prepared the soil for their own enjoyment of grown-up movies years later. They were not expected to remain in a state of goofy euphoria until they were thirty-five. My friends think that our current situation is normal. They believe that critics are naïve blowhards, but it is they who are naïve.
III.
THE LANGUAGE OF big-budget, market-driven movies—the elements of shooting, editing, storytelling, and characterization—began disintegrating as far back as the 1980s, but all of this crystallized for me a decade ago, in the summer of 2001, when the slovenliness of what I was seeing that year, even in the Oscar-winning Gladiator, hit me hard. The action scenes in Gladiator were mostly a blur of whirling movement shot right up close—a limb hacked off and flying, a spurt of blood, a flash of chariot wheels. Who could actually see anything? Yet almost no one seemed to object. The old ideal of action as something staged cleanly and realistically in open space had been destroyed by sheer fakery and digital “magic”—a constant chopping of movement into tiny pieces that are then assembled by computer editing into exploding little packages. What we were seeing in Gladiator and other movies was not just individual artistic failures and crass commercial strategies, but a new and awful idea of how to put a picture together.
Seventy years ago, the look of a given studio’s films reflected the ambitions and the fantasies of the men who ran it, as well as the film genres they cultivated and the writers, directors, and craftsmen they hired. But by the 1980s, as the studios became just one part—and not always a very profitable part—of enormous conglomerates, the head of the motion-picture division was mainly responsible for a revenue stream that would please board members, shareholders, and stock analysts. Looking around him, he saw divisions of his conglomerate that had a greater profit ratio than his own—video games, for instance. Imitating these commercially successful forms would not hurt him among the people he needed to please. Under such pressure, style quickly fades away. Apart from some of the animated work, it is impossible now to tell the films of one studio from another. All the studios are ruled by what I would call conglomerate aesthetics.
The phrase falls uneasily on the ear, so I had better say that I don’t mean to pile into the tumbrels every large movie recently made by conglomerates. I realize as well that “conglomerate aesthetics” has a cranky sub-Marxist ring to it, the sound of an assistant professor warming the prejudices of an academic conference. Naïveté is a poor excuse for false moralism, both for me and for the professor, and so I should immediately add that we both know that Hollywood has almost always been a big-money game, that money is the lifeblood of large-scale picture-making. (David Thomson’s book The Whole Equation, which appeared in 2004, provides a fascinating account of the relations of art and money at different times in Hollywood history.) Yet the desire to be profitable does not dictate, in itself, one style or another. The dreadfulness of many big movies now cannot be waved away on the grounds that the studios have to make them that way. They don’t have to make them that way; they just think they do. They choose this style.
That summer of 2001 the shape of conglomerate aesthetics could be seen in the narrative gibberish of too many creatures and too many villains in the overstuffed, put-on adventure movie, The Mummy Returns; and it could be seen in the frantic pastiche construction of Baz Luhrmann’s musical Moulin Rouge, with its characters openly borrowed from other movies, its songs composed of many other songs—music that alludes to the history of pop rather than risking the painful beauty of a ravishing new melody. The conglomerate aesthetic seizes on the recycled and the clichéd; it disdains originality and shies away from anything too individual, too clearly defined—even a strong personality. (Angelina Jolie wasn’t required to be a person in the Lara Croft movies—she got by on pure attitude. Ewan McGregor in The Phantom Menace didn’t even have attitude.) The only genuine protagonist in big movies in that period was Russell Crowe’s Jeffrey Wigand in Michael Mann’s The Insider, from 1999, and that movie failed commercially. In Hollywood, the lesson has been learned: no complex protagonist unless he is a historical figure such as Howard Hughes, John Nash (of A Beautiful Mind), J. Edgar Hoover, or the like. As the visual schemes grow more complicated, the human material becomes undernourished, wan, apologetic, absent—or so stylized that you can enjoy it only ironically (Angelina Jolie as a svelte, voguing super-killer).
Constant and incoherent movement; rushed editing strategies; feeble characterization; pastiche and hapless collage—these are the elements of conglomerate aesthetics. There is something more than lousy film-making in such a collection of attention-getting swindles. Again and again I have the sense that film-makers are purposely trying to distance the audience from the material—to prevent moviegoers from feeling anything but sensory excitement, to thwart any kind of significance in the movie.
Consider a single scene from one of the most prominent artistic fiascos of recent years, Michael Bay’s Pearl Harbor. Forget Ben Affleck’s refusal to sleep with Kate Beckinsale the night before going off to battle; forget the rest of the frightfully noble love story. Look at the action sequences in the movie, the scenes that many critics unaccountably praised. Here’s the moment: the Japanese have arrived, dropped their load, and gone back to their carriers. Admiral Kimmel (Colm Feore), the commander of the Pacific fleet, then rides through the harbor in an open boat, surveying the disaster. We have seen Kimmel earlier: not as a major character, but as a definite presence. Before December 7, he had intimations that an attack might be coming but not enough information to form a coherent picture. He did not act, and now he feels the deepest chagrin. Dressed in Navy whites, and surrounded by junior officers also dressed in white, he passes slowly through ships torn apart and still burning, ships whose crews, in some cases, remain trapped below the waterline.
Now, the admiral’s boat trip could have yielded a passage of bitterly eloquent movie poetry. Imagine what John Ford or David Lean would have done with it! We have just seen bodies blackened by fire, the men’s skin burned off. Intentionally or not, the spotless dress whites worn by the officers become an excruciating symbol of the Navy’s complacency before the attack. The whole meaning of Bay’s movie could have been captured in that one shot if it had been built into a sustained sequence. Yet this shot, to our amazement, lasts no more than a few seconds. After cutting away, Bay and his editors return to the scene, but this time from a different angle, and that shot doesn’t last, either. Bay and his team of editors abandon their own creation, just as, earlier in the movie, they jump away from an extraordinary shot of nurses being strafed as they run across an open plaza in front of the base hospital.
People who know how these movies are made told me that the film-makers could not have held those shots any longer, because audiences would have noticed that they were digital fakes. That point (if true) should tell you that something is seriously wrong. If you cannot sustain shots at the dramatic crux of your movie, why make violent spectacle at all? It turns out that fake-looking digital film-making can actually disable spectacle when it is supposed to be set in the real world. Increasingly, the solution has been to create more and more digitized cities, houses, castles, planets. Big films have lost touch with the photographed physical reality that provided so much greater enchantment than fantasy.
Of course, no one who has ever enjoyed a mindless hyper-charged movie could say that a little meaningless movement destroyed his day or ruined him for art. Everyone loves being worked over now and then, and speed in itself is not the enemy. Like many moviegoers, I was dazzled by the craft of Paul Greengrass (and cinematographer Oliver Wood) in The Bourne Supremacy (2004) and The Bourne Ultimatum (2007). Greengrass’s method in those thrillers, in which Matt Damon is pursued through damp, hostile Berlin or Moscow, is to shoot action in tiny fragments—a face glimpsed across the street, a car whizzing by—and then pull the fragments together into a coherent impression of rapid movement. Even the bruising, endless car-chase in The Bourne Supremacy was so well done that one didn’t feel cheated. We can accept that car chases are going to be over-cut for sheer excitement.
The problem is that too many ordinary scenes in many big movies are cut like car chases. One of the tendencies of conglomerate aesthetics is to replace action and drama as much as possible with mere movement. Conglomerate aesthetics requires a dozen trash epiphanies (explosions, transformations, rebirths) rather than the arc of a single pure visual climax; mass slaughter rather than a single death. The job of luring the big audience to the Friday opening—the linchpin of the commercial system—has destroyed action on the screen by making it carry the entire burden of the movie’s pleasure. In Christopher Nolan’s Batman movies, The Dark Knight and The Dark Knight Rises, sensation has been carried to the point of a brazenly beautiful nihilism, in which a modishly “dark” atmosphere of dread and disaster overwhelms any kind of plot logic or sequence or character interest. You leave the theater vibrating, but a day later you don’t feel a thing. The audience has been conditioned to find the absence of emotion pleasurable.
IV.
TO UNDERSTAND what is so strange about big movies now, you have to remember a little of what movies once were and what audiences once wanted from them—how stories were told in different periods, how movies were put together. We are now trapped by an exasperating irony: employing all the devastating means at their command, movies have in some ways gone back to their crudest beginnings and are determined, perhaps, to stay there forever, or at least as long as the box-office and ancillary-market mother lode holds out.
A long time ago, at a university far away, I taught film, and I did what many teachers have no doubt done before and since: I tried to develop film aesthetics for the students as a historical progression toward narrative. After all, many of the first movies in the 1890s were not stories at all, but just views of things—a train coming into a station, a wave breaking toward the camera. These visual astonishments caused the audience to stare open-mouthed or duck under the seats for cover (or so the legend says, preserved recently in Scorsese’s Hugo). I wanted my students to be astonished, too—to enjoy the development of film technique as a triumph of artistic and technical consciousness. I worked in straight chronological order, moving from those early “views” through Edwin S. Porter’s 1903 experiments in linear sequencing in films such as Life of an American Fireman and The Great Train Robbery and then on through D.W. Griffith’s consolidation a few years later of an actual syntax—long and medium shots, close-ups, flashbacks, parallel editing, and the like.
But I now think there was something merely convenient in teaching that way. The implication of my lesson plan was that the medium had by degrees come to a realization of itself, discovering in those early years—say, 1895 to 1915—its own true nature embedded within its technology: the leafy oak of narrative lodged within the acorn of celluloid. It is a teleological view, and it is probably false. In truth, there is nothing inherent in the process of exposing strips of film to light sixteen or eighteen times a second (later twenty-four times) that demanded the telling of a story.
At the beginning, after the views of trains and oceans, movies offered burlesque skits or excerpts from theatrical events, but still no stories. A completed movie was often just a single, fixed, long-lasting shot. It is likely, as David Bordwell, Janet Staiger, and Kristin Thompson explained in The Classical Hollywood Cinema, that narrative emerged less from the inherent nature of film than from the influence of older forms—novels and short stories and plays. And also from pressure to create work of greater power to attract more and more customers.
If creating fictions is not encoded in the DNA of film, then what is happening now has a kind of grisly logic to it. As the narrative and dramatic powers of movies fall into abeyance, and many big movies turn into sheer spectacle, with only a notional pass at plot or characterization, we are returning with much greater power to capers and larks that were originally performed in innocence. The kind of primitive chase, for instance, that in 1905 depended on some sort of accident or mischief rather than on character or plot has been succeeded by the endless up-in-the-air digital fight. The 1905 scene has a harum-scarum looseness and wit; the destructive action scenes in movies now are brought off with a kind of grim, faceless glee, an exultation in power and mass: We can do it, therefore we will do it, and our ability to do it is the meaning of it, and even if you’re not impressed, it is still going to roll over you.
Neo-primitivism is one of the great strategies of modern art—Picasso and his African masks; Bartók re-fashioning folk materials in his advanced music; Chuck Berry drawing on “hillbilly” rhythms for his own super-charged songs. Neo-primitivism cleared away the mush of Viennese or Edwardian sentiment, the perfume and pallor of Paris salon art, the Brill Building softness of early-1960s rock. In movies, a great deal of mush has also been cleared away (the tyranny of niceness that ruined so many family movies in the 1940s; the physical, verbal, and sexual coyness of the 1950s). But the continuous motion of crass conglomerate product aimed at the young has removed much else as well, such as the mysteries of personality, sophisticated dialogue, any kind of elegant or smart life, and, frequently, a woman’s emotions (think of Bette Davis, Katharine Hepburn, Joan Crawford) as the center of a movie. The studios have been devoted to the systematic de-culturation of movies, and the casting away of all manner of dramatic cunning laboriously built up over decades.
SPARKED, PERHAPS, by the absence of sound, film-makers developed the visual possibilities of film very quickly, and by the end of World War I the vocabulary of editing and the overall strategy of Hollywood movie-making were set. Celluloid may not have carried storytelling in its genes, but, as David Bordwell puts it, “telling a story is the basic formal concern which makes the film studio resemble the monastery’s scriptorium, the site of the transcription and transmission of countless narratives.” In the scriptorium, an unspoken vow was repeated daily: audiences need to get emotionally involved in a story in order to enjoy themselves. The idea is so obvious that it seems absurd to spell it out. Yet in recent years this assumption, and everything that follows from it, has begun to weaken and even to disappear.
It is shocking to be reminded of some of the things that are now slipping away: that whatever is introduced in a tale has to mean something, and that one thing should inevitably lead to another; that events are foreshadowed and then echoed, and that tension rises steadily through a series of minor climaxes to a final, grand climax; that music should be created not just as an enforcer of mood but as the outward sign of an emotional or narrative logic; that characterization should be consistent; that a character’s destiny is supposed to have some moral and spiritual meaning—the wicked punished, the virtuous rewarded or at least sanctified. It was a fictional world of total accountability.
For decades, the rules of the scriptorium signified “Hollywood” in both the contemptuous sense and the honorific sense—a system dismissed by the humorless and pleasureless as “bourgeois cinema,” but also enjoyed around the world by millions and praised in majestic terms by critics, most notably André Bazin. Looking back to the late 1930s from a dozen or so years later, and passing judgment on precisely such conventions, Bazin announced that “in seeing again today such films as Jezebel by William Wyler, Stagecoach by John Ford, or Le jour se lève by Marcel Carné, one has the feeling that in them an art has found its perfect balance, its ideal form of expression, and reciprocally one admires them for dramatic and moral themes to which the cinema, while it may not have created them, has given a grandeur, an artistic effectiveness, that they would not otherwise have had. In short, here are all the characteristics of the ripeness of a classical art.”
The ripeness of a classical art. The words are stirring, but at this point almost embarrassing. What on Earth did Bazin mean? Never mind Le jour se lève, which is impossible to understand without evoking French literary and philosophical traditions. What about the American films Bazin mentions? What’s in these movies? What satisfactions did they offer? And is it mere weak-souled nostalgia that makes one long for their equivalent now?
In Jezebel, from 1938, everything revolves around a central character of extraordinary perversity. Julie Marsden, played by Bette Davis, is a rich belle in pre-Civil War New Orleans. Her situation is paradoxical: she exercises the largest possible freedom within the confines of a social tradition that allows women no other career than that of a coquette. Taut and over-defined emotionally—she demands categorical approval or rejection—Julie values the predominance of her own will more than love, more than society, more than anything. In other words, she is admirable, dislikable, and crazy. She torments her high-minded beau (Henry Fonda) and wears a scarlet dress to a society ball at which unmarried women have traditionally worn white, knowing full well that the act must compromise her fiancé and destroy her own social position.
Around this central incident—which at first seems trivial, and then more and more momentous—a portrait of antebellum Southern society as both noble and savagely inadequate unfolds with surprising power. Yet Davis’s will-driven Julie dominates the movie. All the other characters lean toward her or shy away from her; they fuss before her arrivals and flutter after her departures. William Wyler is often said to have used an “invisible” technique, which means, in this case, that the camera glides, dollies, cranes, goes wherever it has to go, leading the eye from one shot to the next in an unbroken continuity that illuminates a story that is essentially psychological and social. The style is dedicated to the defining moment: the upturned face, the instant of self-definition, the rapid concentrics of astonishment and scandal spreading through a room as a young woman enters and gazes around herself in defiance. All the physical details are tightly arrayed around the outrageous, mesmerizing central figure.
In Jezebel, a rigidly structured society is falling into decay. In Bazin’s other choice, Stagecoach, a new society is taking shape—the West as caravan of American democracy. The movie is perhaps the most popular and well-known Western ever made, and yet seeing it again one is struck by how fresh it is—how very funny and sharply edged, how bracingly decisive and swift. An entire fluid American world is moving West: a high-type Virginia lady, a fallen Southern gentleman, an alcoholic doctor, a hypocritical banker, a good-hearted prostitute—all these people plus an ineffably relaxed young male beauty, John Wayne. They all go to Lordsburg, but they also move into their future. In Stagecoach, it is not so much a matter of a dominating individual as of an evolving group. All through the trip, the writer, Dudley Nichols, and the director, John Ford, make it clear what each of these highly wrought people thinks of the others. By degrees, they all come to understand that the alcoholic doctor is something more than an irresponsible lush, that the Southern gambler and murderer is suffused with self-disgust and has some genuine tenderness in him, and so on. Apart from all the fun and excitement it generates, Stagecoach is a drama of perception.
It is also a drama of space. What audiences feel about characters on the screen is probably affected more than most of us realize by the way the space surrounding the people is carved up and re-combined. In John Ford, the geographical sense is very strong—the poetic awareness of sky and landscape and moving horses, but also the attention to such things as how people are arrayed at a long table as an indication of social caste (the prostitute at one end, the fine lady at the other). The best use of space is not just an effective disposition of activity on the screen; it is the emotional meaning of activity on the screen.
Directors used to take great care with such things: spatial integrity was another part of the unspoken contract with audiences, a codicil to the narrative doctrine of the scriptorium. It allowed viewers to understand, say, how much danger a man was facing when he stuck his head above a rock in a gunfight, or where two secret lovers at a dinner party were sitting in relation to their jealous enemies. Space could be analyzed and broken into close-ups and reaction shots and the like, but then it had to be re-unified in a way that brought the experience together in a viewer’s head—so that, in Jezebel, one felt physically what Bette Davis suffered as scandalized couples backed away from her in the ballroom. If the audience didn’t experience that emotion, the movie wouldn’t have cast its spell.
This seems like plain common sense. Who could possibly argue with it? Yet spatial integrity is just about gone from big movies. What Wyler and his editors did—matching body movement from one shot to the next—is rarely attempted now. Hardly anyone thinks it important. The most common method of editing in big movies now is to lay one furiously active shot on top of another, and often with only a general relation in space or body movement between the two. The continuous whirl of movement distracts us from noticing the uncertain or slovenly fit between shots. The camera moves, the actors move: in Moulin Rouge, the camera swings wildly over masses of men in the nightclub, Nicole Kidman flings herself around her boudoir like a rag doll. The digital fight at the end of The Avengers takes place in a completely artificial environment, a vacuum in which gravity has been abandoned; continuity is not even an issue. If the constant buffoonishness of action in all sorts of big movies leaves one both over-stimulated and unsatisfied—cheated without knowing why—then part of the reason is that the terrain hasn’t been sewn together. You have been deprived of that loving inner possession of the movie that causes you to play it over and over in your head.
A dominating individual, a dynamically evolving group—the classical American cinema was always centered in character one way or another. It was an ideal, but hardly the only ideal, and I am certainly not suggesting a return to 1939. Most movies in those years, of course, were nowhere near as good as Jezebel and Stagecoach, and at their worst Hollywood movies in the classical period were draped in the molasses of sentiment and reassurance.
After the war, it was time to pull off the drapes. Bazin and also James Agee loved the Italian Neo-Realists of the 1940s and 1950s, who produced a plainer image and a harsher moral tone than Hollywood ever did; and if Bazin had lived past 1958, when Truffaut, Godard, Chabrol, Rivette, and the others were just getting started, he would have loved the flowing, open rhythms and off-hand literary flavor of the New Wave. In America, television and other media entered the arena, luring viewers away, and the old tropes got stretched or broken into new shapes. To name just a few of the famous ones: the point-of-view camera and shock cutting of Hitchcock; the expressionist lighting and radio-studio echo chambers of Orson Welles; the dynamic architecture of widescreen composition in David Lean; the breathtaking and deeply moving tracking shots of Max Ophüls; Stanley Kubrick’s cold, discordant tableaux; the savagery, both humane and inhumane, of Akira Kurosawa and Sam Peckinpah; the crowded operatic realism of Coppola in the first two Godfather movies; the layered, richly allusive dialogue and sour-mash melancholy of Robert Altman; Steven Spielberg’s visually eccentric manipulation of pop archetypes; Quentin Tarantino’s discontinuous time scheme in Pulp Fiction; and many, many others. Sometimes directors subtracted conventional elements from the old syntax; sometimes they overloaded the medium, refusing an obvious emotional pay-off while reaching a purer, more intense emotion through the exaggeration of a single element in filmmaking—say, the sustained, lens-scarring monologues of Ingmar Bergman, which reveal the soul of a performer so powerfully that they expose the soul of the viewer (to himself) as well.
Audiences were no longer enveloped by movies in the same comforting way; sometimes mainstream commercial movies affronted or even assaulted us. On the whole, this felt good. To be exposed to ugliness and horror, to be disturbed rather than cosseted, overburdened rather than babied, never hurt a moviegoer yet, and it made many of us happy not to have everything prepared and cushioned for us. Abruptness in the form of, say, the jump cuts in Breathless or the breaks in continuity in Annie Hall and dozens of other movies injects little spurts of energy into a scene. In such instances, we were not bothered by discontinuities from shot to shot—not when the sequences worked well within an exciting overall conception the continuity of which may have been intellectual and emotional rather than physical. After the war, modernist film-makers also found it impossible to believe in a coherent moral world, and their narratives no longer meted out punishments and rewards in the old Burbank-bookkeeper’s manner. Moral realism felt closer to the way we viewed our own lives, in which we are rarely heroes and few confident or outraged expectations ever meet their longed-for fulfillment in justice.
The glory of modernism was that it yoked together candor and spiritual yearning with radical experiments in form. But in making such changes, filmmakers were hardly abandoning the audience. Reassurance may have ended, but emotion did not. The many alterations in the old stable syntax still honored the contract with us. The ignorant, suffering, morally vacant Jake LaMotta in Raging Bull was as great a protagonist as Julie Marsden. The morose Nashville was as trenchant a group portrait and national snapshot as the hopeful Stagecoach. However elliptical or harsh or astringent, emotion in modernist movies was a strong presence, not an absence.
THE STRUCTURE OF the movie business—the shaping of production decisions by marketing—has kicked bloody hell out of the language of film. But the business framework is not operating alone. Film, a photographic and digital medium, is perhaps more vulnerable than any of the other arts to the post-modernist habits of recycling and quotation. Imitation, pastiche, and collage have become dominant strategies, and there is an excruciating paradox in this development: two of the sprightly media forms derived from movies—commercials and music videos—began to dominate movies. The art experienced a case of blowback.
As everyone knows, we can read an image much more quickly than anyone thought possible forty years ago, and in recent years many commercials have been cut faster and faster. The film-makers know that we are not so much receiving information as getting a visual impression, a mood, a desire. A truly hip commercial has no obvious connection to the product being sold, though selling is still its job. What, then, is being sold at a big movie that is cut the same way? The experience of going to the movie itself, the sensation of being rushed, dizzied, overwhelmed by the images. Michael Bay wasn’t interested in what happened at Pearl Harbor. He was interested in his whizzing fantasia of the event. Nothing important happens in The Avengers. As in half of these big movies, the world is about to end because of some invading force; but the world is always about to end in digital spectacles, and when everything is at stake, nothing is at stake. The larger the movie, the more “content” becomes incidental, even disposable.
In recent years, some of the young movie directors have come out of commercials and MTV. If a director is just starting out in feature films, he doesn’t have to be paid much, and the studios can throw a script at him with the assumption that the movie, if nothing else, will have a great “look.” He has already produced that look in his commercials or videos, which he shoots on film and then finishes digitally—adding or subtracting color, changing the sky, putting in flame or mist, retarding or speeding up movement. In a commercial for a new car, the blue-tinted streets rumble and crack, trees give up their roots, and the silver SUV, cool as a titanium cucumber, rides over the steaming fissures. Wow! What a film-maker! Studio executives or production executives who get financing from studios do not have to instruct such a young director to cut a feature very fast and put in a lot of thrills, because for their big movies they hire only the kind of people who will cut it fast and put in thrills. That the young director has never worked with a serious dramatic structure, or even with actors, may not be considered a liability.
The results are there to see. At the risk of obviousness: techniques that hold your eye in a commercial or video are not suited to telling stories or building dramatic tension. In a full-length movie, images conceived that way begin to cancel each other out or just slip off the screen; the characters are just types or blurred spots of movement. The links with fiction and theater and classical film technique have been broken. The center no longer holds; mere anarchy is loosed upon the screen; the movie winds up a mess.
So are American movies finished, a cultural irrelevance? Despite almost everything, I don’t think the game is up, not by any means. There are talented directors who manage to keep working either within the system or just on the edges of it. Some of the independent films that have succeeded, against the odds, in gaining funding and at least minimal traction in the theaters are obvious signs of hope. Terrence Malick is alive and working hard. Digital is still in its infancy, and if it moves into the hands of people who have a more imaginative and delicate sense of spectacle, it could bloom in any one of a dozen ways. The micro-budget movies now made on the streets or in living rooms might also take off if they give up on sub-Cassavetes ideas of improvisation, and accept the necessity of a script. There is enough talent sloshing around in the troubled vessel of American movies to keep the art form alive. But the trouble is real, and it has been growing for more than twenty-five years. By now there is a wearying, numbing, infuriating sameness to the cycle of American releases year after year. Much of the time, adults cannot find anything to see. And that reason alone is enough to make us realize that American movies are in a terrible crisis, which is not going to end soon.
David Denby is a film critic for The New Yorker and the author of Do the Movies Have a Future? (Simon & Schuster). This article appeared in the October 4, 2012 issue of the magazine.