Match Frame

Thoughts from an American editor and filmmaker in New Zealand about film and video production and post-production. Plus whatever else I feel like talking about.


Tuesday, April 25, 2006

the post-production cameraperson - some initial thoughts.

A recent confluence of events made me want to get back into blogging, so it makes sense to start with this.

A series of links brought me to this post by the writer of SCARY MOVIE 4, which was apparently the first film to be shot on Panavision's Genesis camera. It's a high-definition camera that takes 35mm lenses, and it's also the one the folks making SUPERMAN RETURNS are using, so Hollywood is taking it quite seriously.

As a 2K-resolution camera, it's supposed to be pretty darn good. (For the non-technology inclined, that's several times the resolution of a standard-definition television picture.) So good, in fact, that one shot in SCARY MOVIE 4 was apparently blown up by 250% without any resolution loss visible to the average viewer.
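For the curious, the back-of-the-envelope numbers look something like this. The frame sizes below are my assumptions - PAL standard definition and one common 2K size - so treat it as a sketch rather than gospel:

```python
# Back-of-the-envelope pixel counts: standard-definition TV vs. a 2K frame.
# Frame sizes are assumptions for illustration (PAL SD and a common 2K size).

sd_width, sd_height = 720, 576            # PAL standard definition
two_k_width, two_k_height = 2048, 1080    # a typical 2K acquisition frame

sd_pixels = sd_width * sd_height
two_k_pixels = two_k_width * two_k_height

print(f"SD frame: {sd_pixels:,} pixels")
print(f"2K frame: {two_k_pixels:,} pixels")
print(f"2K is roughly {two_k_pixels / sd_pixels:.1f}x the pixel count of SD")
```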

This caught my eye for a number of reasons. I've been working on a couple of different reality-TV-style projects, and one common technique for making things more dynamic is to resize a shot. A small resize - say, to 110% - can often be gotten away with without anybody noticing. At the other extreme, going to anything like 200% on something sourced on miniDV (most of what I'm working with these days) turns the image into pixel soup. Sometimes that effect is okay, especially if it's quick (as in, say, an E! special, where you see it grossly abused all the time).
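To put rough numbers on why the big resizes fall apart, here's a minimal sketch - the PAL miniDV frame size is my assumption, and the principle is the same for any format - of how few real source pixels are left filling the screen after a punch-in:

```python
# Why a big resize turns into pixel soup: at an N% blow-up, only 1/N of the
# frame's width and height is left to fill the whole screen.

def effective_source_region(src_w, src_h, scale_percent):
    """Size (in source pixels) of the region that fills the frame after a resize."""
    factor = scale_percent / 100.0
    return src_w / factor, src_h / factor

for scale in (110, 150, 200, 250):
    w, h = effective_source_region(720, 576, scale)   # assumed PAL DV frame
    print(f"{scale}% resize: about {w:.0f} x {h:.0f} source pixels stretched to full frame")
```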

But for serious drama and documentary work, I personally think you want a consistent visual feel. (Unless you have a compelling reason not to, of course. See SLACKER for a good example.) And often there's a need to resize a shot to fix something - a boom mic swiping the edge of the frame being a classic example. But resizing the shot in standard video formats creates a slightly different feel, one that takes you out of it. And it's not just video - you see this every once in a while in film, where a shot has been blown up, and those blown-up shots tend to look inconsistent with the rest of the film. (I assume this is because the film grain is enlarged.)

Now let's say you can resize to 250% with impunity. In a test on my Avid the other day, that took a generous wide shot (a woman standing, full frame, with a bit of headroom) into a reasonably sized mid-shot (waist up). On my Avid, of course, it looked like crap (especially because the source footage, while XDCAM, was compressed 15:1). But what if it didn't?

The short answer: you could either shoot a hell of a lot less or get a lot more shots out of what you have, and avoid some tricky camera moves completely if they're easier to execute in post. Imagine you're a drama director and you want a zoom. You want it to end just as a character says a final word or finishes a gesture. That can be surprisingly tricky to coordinate on set, especially if the actor is inconsistent in the speed of what they're doing. So: don't do it on set. Get the performance you want, and do the zoom in post.
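Just to make the idea concrete, here's a toy sketch of how a post zoom could be keyed so it settles exactly on a chosen frame. This isn't any editing system's API, and the frame numbers and scale values are invented:

```python
# Toy post zoom: generate per-frame scale values that ease from start_scale to
# end_scale and land exactly on the frame where the line (or gesture) ends.

def post_zoom(start_frame, end_frame, start_scale=1.0, end_scale=1.5):
    """Yield (frame, scale) pairs with a smoothstep ease, settling at end_frame."""
    span = end_frame - start_frame
    for frame in range(start_frame, end_frame + 1):
        t = (frame - start_frame) / span
        eased = t * t * (3 - 2 * t)            # smoothstep: gentle start and finish
        yield frame, start_scale + (end_scale - start_scale) * eased

# Say the final word lands on frame 412: end the zoom right there.
keyframes = list(post_zoom(300, 412))
print(keyframes[0], keyframes[len(keyframes) // 2], keyframes[-1])
```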

Or let's say you forgot to get a reaction shot of one of your actors as a single, but you have it in a two-shot. No problem: grab it from the two-shot and turn it into the single.

This strikes me as a huge potential revolution. Obviously, there are limitations. Shots are often re-lit for close-ups. (But if you light things neutrally, you can add lighting in post ...) And lenses are often used to throw backgrounds out of focus. (But hey - we're starting to be able to do that in post as well! The other day I added a great focus pull to a show I'm working on in the Avid.) And you can't really perform three-dimensional camera moves without a lot of digital cheating.
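That focus pull was done with the Avid's own effects, but just to illustrate the general idea of faking a rack focus in post, here's a rough sketch using a simple animated blur. It uses a placeholder image and generic Python (NumPy/SciPy) rather than anything from a real editing system, and a real defocus isn't just a Gaussian blur:

```python
# Rough illustration of a faked focus pull: ramp a blur radius on a background
# plate over a run of frames. Placeholder image; the animation idea is the point.

import numpy as np
from scipy.ndimage import gaussian_filter

background = np.random.rand(576, 720)          # stand-in for a background plate

def fake_focus_pull(plate, num_frames, max_blur=8.0):
    """Yield progressively defocused copies of the plate, one per frame."""
    for i in range(num_frames):
        t = i / (num_frames - 1)
        yield gaussian_filter(plate, sigma=t * max_blur)

frames = list(fake_focus_pull(background, 25))  # a one-second pull at 25 fps
```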

But, coming from an indie filmmaker's perspective, your source material is ultimately what you have to work with in the editing room, and by increasing the usability of that source material, you increase the number of possible solutions for making a scene work.

The real virtue, though, comes with documentary filmmaking. The perennial problems of documentary work are covering the action, knowing where to put the camera, and getting cutaways to hide the cuts. If you could grab those cutaways from the source material itself, you could keep the camera reasonably wide and do your best to get all the action on screen. Obviously, if people are sitting across from each other you'll still have to favor one side or the other at various times, so you still might not have a nice shot of whoever's talking. But so much of the documentary editing I've done so far would have been immeasurably easier if I could have blown up shots this much - even 150% could take the person who's talking out of frame and give me a good reaction shot from the other side of the table, for instance.
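To put a number on that 150% idea, here's a quick check of which source formats would let you punch in that far without actually upscaling. It assumes square pixels and SD (720x576) delivery, both simplifications:

```python
# Does a punch-in still beat the delivery resolution? If so, you're
# downscaling the crop rather than upscaling it.

def survives_punch_in(src_w, src_h, scale_percent, out_w=720, out_h=576):
    """True if the punched-in region still has at least the delivery frame's pixels."""
    factor = scale_percent / 100.0
    return (src_w / factor) >= out_w and (src_h / factor) >= out_h

for name, w, h in [("miniDV", 720, 576), ("HD", 1920, 1080), ("4K", 4096, 2160)]:
    verdict = "fine" if survives_punch_in(w, h, 150) else "upscaling"
    print(f"{name} punched in 150%: {verdict}")
```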

None of this is happening tomorrow. The Genesis is hyper-expensive, and the ability to blow the image up cleanly comes at least as much from the lens as from the sensor.

But at NAB right now, there's a company called Red that has announced its new camera. For less than $20K, you'll be able to own a camera body that can capture at better than 4K resolution, and considering it's coming from the folks who make Oakley sunglasses, you can bet they'll have some damn fine optics available for it as well. It's not going to replace the $2000 DV camera market ... possibly ever. But then again, while I think there's a price point that glass won't dip below, it's possible that 4K sensors could hit prosumer cameras sooner than anyone imagined.

Anyway, the real question is: will directors use this to change the way that they shoot? Or will it just be another trick up the editor's sleeve to save a scene that isn't working?
