How DPI Does 2.35:1 Sans the Extra Lens
9/8/2010 3:26 AM
Being a big fan of anamorphic 2.35:1 aspect ratio projection in my own home theater, I was intrigued when I picked up a recent press release from Digital Projection Inc. USA describing the company’s new “external lens-free” anamorphic-capable projector.
According to the company, the 2,560 x 1,600-pixel dVision 35-WQXGA (almost twice the resolution of a 1080p display) is the first anamorphic lens-free, multiple-aspect-ratio-capable projector. Essentially, when 1.78 content is being viewed, that content is displayed by the projector at 1080p resolution. But when wider aspect ratio content is to be presented, the image scaling within the projector goes to work.
|Digital Projection's VP of home cinema, George Walter |
Intrigued by this concept, I first chatted with my buddy Mike Bridwell in DPI’s marketing department, and he suggested that I contact VP of home cinema George Walter to discuss the intricacies of the new product. Here’s some of what we discussed:
In a nutshell, how does this projector work?
This is similar to something we did in a previous projector. Essentially it’s programmable lens control. Although it sounds kind of easy, it’s not that easy to do accurately. The lens mount has four different movements: vertical shift, horizontal shift, zoom, and focus. Basically, if you program all of those so the lens knows where it is, with some software you can store presets.
Can you give a comparison between using an anamorphic lens and using a programmable zoom?
OK, you’ve got two scenarios. One is that you’ve got a 1.78 image, and one is 2.35:1, which is a letterbox image with black bars on top and bottom. In an anamorphic solution, the first thing I have to do with my 2.35 content is video processing. A vertical stretch injects roughly 30 percent additional pixels based on an algorithm that says if you have two pixels that are both blue, then we’ll put a blue pixel in between. If you’ve got maybe one green and one red, then it’ll stick a yellow pixel in, so it’s an interpolated value, but it expands the image to fill the 1.78 chipset, and effectively everyone becomes tall and thin, or a circle becomes an oval. In reality what I’m adding, for the sake of argument, is 30 percent distortion. But it’s not real video. Then I put it through an anamorphic lens, which stretches the horizontal but doesn’t affect the vertical. So at the end of the day I have a native 2.35 aspect ratio.
But critics, of course, complain that this approach negatively affects the video quality.
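Walter’s description of the vertical stretch amounts to simple row interpolation. A minimal sketch in Python (the function name and the tiny two-row example are mine for illustration, not DPI’s actual processing pipeline):

```python
def vertical_stretch(rows, target_height):
    """Stretch an image vertically by interpolating new rows.

    Each pixel is an (R, G, B) tuple. Interpolated rows blend their two
    nearest source rows: two blue rows yield blue, while a blue row next
    to a red one yields a mix, just as Walter describes.
    """
    src_height = len(rows)
    out = []
    for y in range(target_height):
        # Map this output row back into source coordinates.
        pos = y * (src_height - 1) / (target_height - 1)
        lo = int(pos)
        frac = pos - lo
        hi = min(lo + 1, src_height - 1)
        out.append([
            tuple(round(a + (b - a) * frac) for a, b in zip(p1, p2))
            for p1, p2 in zip(rows[lo], rows[hi])
        ])
    return out

# A 2.35:1 image occupies roughly 817 of 1080 lines; stretching it to the
# full 1080 fabricates about 30 percent more (interpolated) vertical pixels.
stretched = vertical_stretch([[(0, 0, 255)], [(255, 0, 0)]], 3)
```

The middle row that comes out here is a blend of its two neighbors, which is exactly the “not real video” Walter is referring to.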
The downsides of this approach are, first of all, that you now have this additional video processing, and with the additional glass elements I reduce my contrast ratio a little bit. I end up with shaped pixels instead of square pixels, and I create a little bit of geometric distortion in the corners as a function of the anamorphic lens. Still, if you took an exit poll of 100 people, they would say that they didn’t see the geometric distortion in the corners. Of course that depends on how it’s done, and if you use a CineCurve screen that helps it even more.
So you’ve got a different option now?
|Digital Projection's 2,560 x 1,600-pixel dVision 35-WQXGA |
Now with the zoom scenario, everything starts out the same, but when I have 2.35 content what I literally do is select a preset where I zoom out and make the pixels roughly 30 percent larger. As it turns out, they become the same width as my anamorphic pixels, but they’re taller because they’re still square, and I’m overshooting the screen by roughly 15 percent on the top and 15 percent on the bottom. With original first-generation DLP projectors this was a little bit tricky, because the contrast ratio in those projectors was maybe 750:1, which means the black level was pretty high. If you’re shining black light at eye level, anything that catches it will show up in the overscan. Now what we’re looking at is black levels, even native, at 5000:1. Those black levels are now maybe a third of what they were. For the most part, the area where you’re going to be overscanning is going to be duvetyne or black velvet; screen borders, especially on a masking screen, have to be beefier because there are motors with cables in there. It tends not to be an objection. And because I’m not passing through any additional glass elements, my contrast ratio for the active image is absolutely better. I have no geometric distortion because I’m still passing through only the primary lens, and I’m not making rectangular pixels.
Sounds great. Are there any challenges to this approach?
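The figures Walter quotes fall out of the aspect-ratio arithmetic. A back-of-envelope check (the function and normalization are my own illustration, not DPI’s numbers):

```python
def zoom_geometry(frame_ar=16 / 9, content_ar=2.35):
    """Geometry of zooming a letterboxed 2.35:1 image onto a 2.35:1 screen.

    Normalize the screen width to 1 and zoom the full 16:9 frame until
    the active image fills the screen; the black bars then overshoot the
    top and bottom edges.
    """
    zoom = content_ar / frame_ar                   # linear zoom vs. the 1.78 setup
    image_h = 1 / content_ar                       # active image height
    frame_h = 1 / frame_ar                         # projected (zoomed) frame height
    per_side = (frame_h - image_h) / 2 / image_h   # overshoot per side, vs. image
    return zoom, per_side

zoom, per_side = zoom_geometry()
# zoom comes out near 1.32 (pixels ~30 percent larger) and per_side near
# 0.16 (~15 percent overshoot each on top and bottom) -- consistent with
# the rough figures Walter cites.
```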
The downside is that you have to make sure your projector is located so that within the zoom range you can hit both the 1.78 and the 2.35 image sizes, and that is not always so easy. As you get to longer throw ratios, it gets easier. Guys like Tony Grimani with PMI actually mount their projector at a 90-degree angle to the screen and bounce it 45 degrees off a mirror. That increases the throw ratio and puts it in a better position for 1.78 and 2.35. The other thing that allows you to do… with an anamorphic lens you pretty much have two options: 1.78 or 2.35. But with a programmable zoom, I can have multiple stops. If you want to go with a four-way masking screen, I could have a taller 4:3 image, so it doesn’t end up looking like a complete postage stamp. Also, if I had 2.35 content that had subtitles on the bottom, I could have a preprogrammed zoom and shift with four-way masking. So it just gives you some added flexibility. My personal opinion is that it’s about the individual application and customer preference. The anamorphic lens preserves some light output because you’re literally using the whole chip. The programmable zoom retains some contrast ratio. If I do the pros and cons, I don’t think there’s a terribly clear winner; they’re just options.
What if you want to add 3D to the equation?
It becomes a little bit more of an issue when you talk about 3D, because 3D, at least in its first generation, basically requires that whatever internal video processor you have work twice as hard: I’m manipulating two images in the same interval as one. I could extrapolate that down to side by side, frame packed, or sequential, but I still have left-eye and right-eye information that I have to do video processing for at the same time and then combine back into a single image. That is a lot more demand on the buffer side and memory allocation of a video processor. Therefore, in Gen 1, don’t expect anybody to have that anamorphic processing. Right now what we do is that vertical stretch: you put the image in a buffer, create those additional 30 percent vertical pixels, and jam them in on output. On top of all the 3D work, which is not terribly easy, you’d have to add this process. And it would be very different whether your content was side by side, frame packed, or sequential, because all three are very different, and the size of the content in the buffer is different.
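To make the format differences concrete, here is a minimal sketch of the first step for side-by-side content: splitting one incoming frame into left- and right-eye halves, each of which then needs its own scaling pass. The function is my own illustration; frame-packed and sequential content arrive structured differently and need different buffer handling, which is Walter’s point.

```python
def split_side_by_side(frame):
    """Split a side-by-side 3D frame (a list of pixel rows) into two eyes.

    Each eye comes out at half width, so the processor must scale both
    halves back up within a single frame interval, roughly doubling the
    work per frame compared with 2D.
    """
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    return left, right

# A toy 1-row, 4-pixel frame: two left-eye pixels, then two right-eye pixels.
left, right = split_side_by_side([["L1", "L2", "R1", "R2"]])
```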
|George Walter and DPI's marketing master Michael Bridwell soak in the adulation during the CEDIA Awards Banquet last year. |
We can do 2.35 with side by side and we can do it with sequential, but we can’t do it with frame packed, because it’s just too many steps and too much in and out of the buffer. Short term we’re not going to do that, though we think we have a fix; but if we want to go with an anamorphic lens, we have to use side by side or sequential. If you have this ability with the zoom, you don’t have to worry about that. You could have a rig with an anamorphic lens and also have zoom; one doesn’t preclude the other. For instance, if I’m doing any Blu-ray content and I want to use the anamorphic lens because I love the output, that’s no problem. And when I’m looking at 3D, if it’s side by side I can do that. But if it’s frame packed, maybe I have to push a button to go to the programmable zoom. I can still achieve better results and not end up with black bars. I think as things evolve someone will build a bigger, fatter buffer and run dual processing. The only thing you’ll have to deal with is audio and video sync. For most scaler companies it will be next-generation electronics.
Is this a technology that will require the skills of a highly trained integrator?
When you talk about things like programmable zoom, which is different than 3D, it’s not that difficult. The way we’ve structured it, we have five different presets. You set up the projector exactly the way you want it (the zoom, focus, shift) and then you just save it as preset 1; then you set it up another way and save that. Then you set up your control system to retrieve those presets, making it extremely transparent to the end user.
OK, so everyone has an opinion about 3D in general. How do you honestly feel about it?
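The workflow Walter describes maps naturally onto a small lens-memory model. A hypothetical sketch (the class, method names, and axis values are mine and do not reflect DPI’s actual control protocol):

```python
class LensMemory:
    """Five-slot lens memory: each preset stores zoom, focus, and shift."""

    def __init__(self, slots=5):
        self.presets = [None] * slots
        self.state = {"zoom": 1.0, "focus": 0.0, "h_shift": 0.0, "v_shift": 0.0}

    def adjust(self, **axes):
        # Move one or more lens axes (zoom, focus, h_shift, v_shift).
        self.state.update(axes)

    def save(self, slot):
        # Store the current lens position in a 1-based preset slot.
        self.presets[slot - 1] = dict(self.state)

    def recall(self, slot):
        # Drive the lens back to a stored preset -- the call a control
        # system triggers, transparently to the end user.
        if self.presets[slot - 1] is None:
            raise ValueError(f"preset {slot} is empty")
        self.state = dict(self.presets[slot - 1])

lens = LensMemory()
lens.save(1)                           # preset 1: the 1.78 setup
lens.adjust(zoom=1.32, v_shift=0.05)   # re-aim and zoom out for 2.35 content
lens.save(2)                           # preset 2: the 2.35 setup
lens.recall(1)                         # one button back to the 1.78 setup
```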
I’m really candid when I’m teaching our classes. I say that we are where we are with 3D out of luck. One of our commercial customers approached us several years back to build a 3D version of our Titan, so we started looking into it, and we were well down the path before 3D started to become a consumer kind of product. We had already overcome several challenges. I hear people say that they absolutely hate 3D and hate the idea of sitting down to watch TV wearing 3D glasses, and I can’t disagree with them. But there are certain events. For me, if I go to the movies, there are certain movies I’d want to see in 3D and others I wouldn’t even consider in 3D. I thought Avatar, because it was made for 3D, really added to that experience. It made me feel like I was more a part of this new planet and everything that was associated with it. I saw footage of the Masters in 3D, and though it wasn’t perfect, it was actually shot with a separate crew and it showed the contours on the green. For someone who is really a golf enthusiast and has never been to Augusta, you really appreciate the course. That’s something 3D brings.