Dynamic Range Primer should be read as a typical 'write to think' essay that will grow over time. We will try to cover some ground regarding dynamic range, both technically and artistically.
Methods
To explain dynamic range one can go in depth from an electronics point of view, or study it empirically by doing latitude tests and shooting charts, processing the results through software like Imatest to express dynamic range in numbers instead of perceptual observations.
Another way is to study dynamic range scene-referred, based on what is actually in front of the camera, and also display-referred, looking at how many stops of light are still preserved in the final master of a film. By measuring the actual scene the camera has to resolve with a spot light meter, and by studying which contrast ratios were used to light famous films, one can understand how much dynamic range a camera actually needs to capture a given scene with enough data for proper post production, and to present an image with maximal intention.
In this ever-growing article, which we will revisit now and then, we will cover dynamic range both artistically and technically; obviously there is a huge crossover between the two.
Dynamic Range explained technically
Before stating simple numbers we first need to agree on a definition of dynamic range. Dynamic range is the number of stops of light (doublings of light intensity) a camera is able to capture. Obviously this is far too simple a definition, because we have to study the quality of that dynamic range as well, and history has shown that simply stating numbers can lead to huge misconceptions and even to misuse by camera marketing teams.
Three simple examples of how a camera can fake a dynamic range test
- A camera has a reflective sensor and a reflective sensor cabinet, which bounce light around and lower contrast, and this can lead to higher dynamic range numbers in tests that output numeric values. The same can happen with lenses that do not retain contrast well. The flare simply lowers the brightest level and lifts the shadows, so more patches of a Xyla chart remain unclipped, but that does not mean meaningful detail is revealed, because a Xyla chart only works on a macro level. To put it in simple words: both a lens and a sensor cabinet can lower the contrast being fed to the photosites of the sensor. At first glance this looks like higher dynamic range, but the lowest patches will register at about the same IRE (brightness) levels, so there is no separation, only contrast loss. The light bouncing inside a sensor cabinet includes reflections off the sensor itself, the OLPF and the UV/IR cut filter. On cameras with higher dynamic range, like the Alexa 35, everything was done to minimize scattered, bouncing light in the sensor cabinet and so increase contrast, simply because ARRI has the luxury of a sensor whose dynamic range figures are close to the maximum a lens can resolve. Because a camera registers light linearly, a Xyla chart with its linear increments of one stop should produce a flat response, and whenever we see a curve towards the shadows or highlights we can quickly conclude that something is going on outside the normal behavior of a photosite.
- A camera uses highlight reconstruction that cannot be bypassed, not even in its raw UI (the program that debayers raw footage into RGB). In such a case the unclipped data of individual pixels, before debayering, is used to extract extra highlight detail at the expense of color accuracy. If one extracts highlight data from one or two color channels (of the RGGB pattern) while the other(s) are clipped, the colors cannot be properly reconstructed during the debayer. Even though these algorithms are getting smarter and one can enlist the help of AI, we see very disappointing results in cameras that use such baked-in highlight retention (RED cameras), and one should not use these stops of light unless shooting in black and white, and even then some luminance levels will be incorrect. The reason a company like RED keeps highlight retention enabled with no way to disable it is to game lab tests, like the CineD lab test and others. But luckily more and more people are starting to demystify dynamic range and share that information with the community, a good thing!
- A camera uses in-camera noise reduction to fake a dynamic range test. Cameras like Sony's prosumer line (FX6, FX3, A7, etc.) have no way to completely disable the internal noise reduction. A dynamic range test sets a certain threshold for noise values; based on perceptual tests, a signal to noise ratio of 2 in the Imatest software gives a good idea of usable dynamic range. Low quality in-camera noise reduction destroys fine detail, making the camera unable to register it. One might still see dynamic range differences on a macro level, like a black door, but on a smaller scale there is no distinction. Because Xyla charts feature large flat patches, this internal noise reduction cheats the noise threshold, and stops of light that are barely visible get included in the result. The sketch after this list simulates the first and the third effect.
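Here is a minimal simulation in Python of the flare and noise reduction tricks; all constants are made-up illustrations, not measured camera data, and the SNR >= 2 criterion is the Imatest-style threshold mentioned above.

```python
# A minimal sketch (not IMAtest itself) of how veiling flare and in-camera
# noise reduction can inflate a Xyla-style measurement. All numbers made up.
import numpy as np

STOPS = 20                                 # chart patches, one stop apart
patch_means = 2.0 ** -np.arange(STOPS)     # ideal linear values, patch 0 = clip point

def measured_stops(flare=0.0, nr_gain=1.0, read_noise=2e-5, snr_limit=2.0):
    """Count patches whose SNR clears the usual threshold of 2."""
    count = 0
    for mean in patch_means:
        signal = mean + flare                              # flare lifts every patch equally
        noise = (np.sqrt(signal * 1e-5) + read_noise) / nr_gain  # shot + read noise, then crude NR
        if signal / noise >= snr_limit:
            count += 1
    return count

print("clean camera:      ", measured_stops())              # honest stop count
print("reflective cabinet:", measured_stops(flare=0.002))   # every patch 'passes'...
print("in-camera NR:      ", measured_stops(nr_gain=4.0))   # ...and so does denoised mush

# With flare, the deepest patches all sit within ~0.002 of each other: they pass
# the SNR test yet are separated by far less than one another's noise, so no
# real stops were gained, only contrast was lost.
```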
ARRI Alexa
Here at Gafpa Gear we state that by far the highest scoring camera in the latitude/dynamic range game is the ARRI Alexa 35 (released in 2022), and second place goes to the ARRI Alexa classic (same photosites as in the Alexa LF and the Alexa 65). The classic scores 13.5 stops of dynamic range pixel to pixel, and around 14 stops when oversampled to 2K. The Alexa automatically downsamples by a factor of 0.7x when shooting ProRes to get past Nyquist limits, arriving at the perceptual resolution it can actually resolve, which is 2K. Basically any Bayer sensor should be oversampled this way to get past its inherent inability to separate contrast and detail (MTF). Oversampling does not really change the dynamic range: the shadows just get a bit cleaner and there is a bit less color pollution from chroma artefacts. So even though the camera measures 14 stops oversampled and 13.5 pixel to pixel, both yield the same dynamic range; the oversampled image is simply less noisy.
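A minimal sketch of that last point, using a 2x2 box average as a crude stand-in for a proper 0.7x resample filter (a 0.7x resample averages roughly 1/0.7^2 ≈ 2 source pixels per output pixel); the noise figures are invented:

```python
# Why oversampling cleans up the image without adding dynamic range: averaging
# neighbouring photosites lowers the noise floor, but clipping already happened
# at the photosite, so the ceiling cannot move.
import numpy as np

rng = np.random.default_rng(1)
native = np.clip(0.5 + rng.normal(0.0, 0.02, (1024, 1024)), 0.0, 1.0)  # flat gray + sensor noise

downsampled = native.reshape(512, 2, 512, 2).mean(axis=(1, 3))         # 2x2 box average

print("native noise std:     ", round(float(native.std()), 4))         # ~0.02
print("downsampled noise std:", round(float(downsampled.std()), 4))    # ~0.01, shadows cleaner
# Any photosite that hit full well was clipped before this average ran,
# so the highlight clip point is identical in both images.
```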
Latitude
One could introduce a new term, LATITUDE, which expresses the perceptual quality of an image. The shadow qualities are checked by underexposing an image and restoring it to a normal image in post production, and the shadow quality can be described in three ways:
- Linear performance, thus good contrast separation
- Amount of noise, and the quality of the noise; for instance, mixing normal sensor noise with ADC noise can lead to awful patterns like fixed pattern noise, and codecs can also mess with noise.
- General color shifts, or loss or distortion of color information.

Dynamic range is often calculated with the very well thought-out software Imatest in conjunction with an expensive Xyla chart. You simply point the camera, equipped with a lens with good contrast reproduction, at the chart. On the Xyla chart every patch represents one stop of light. The Xyla chart is a back-illuminated chart whose patches carry neutral density filtration, giving you scientifically exact stops of light without color pollution; think of the patches as ND filters.
In Imatest the footage is properly linearized by means of a smart data fit, or by manually applying the camera's gamma data. The final curve, the noise pattern and the RGB data jointly describe the problematic nature of dynamic range in a few numbers. If a camera is well designed, does not use internal noise reduction, has a high MTF (contrast and detail separation) and an anti-reflective sensor cabinet, and last but not least uses no internal highlight reconstruction, we can more or less express the quality with which a camera registers light dynamically in a proper way. CineD has started to publish tests of many cameras and is building an archive; even though we worship their efforts, we find their approach problematic for numerous reasons.
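To make the linearization step concrete, here is a minimal sketch using the LogC3 (EI 800) constants as published in ARRI's LogC white paper; any other camera log curve works the same way. Encoding exact one-stop steps and decoding them back shows the flat, linear staircase a healthy sensor should produce on a Xyla chart:

```python
# Linearizing log footage: decode the log curve back to scene-linear, then
# check that consecutive Xyla patches differ by exactly one stop.
import numpy as np

# LogC3 EI 800 constants from ARRI's LogC white paper
A, B, C, D = 5.555556, 0.052272, 0.247190, 0.385537
E, F, CUT = 5.367655, 0.092809, 0.010591

def logc3_encode(x):
    """Scene-linear exposure -> LogC3 code value (0..1)."""
    x = np.asarray(x, dtype=float)
    return np.where(x > CUT, C * np.log10(A * x + B) + D, E * x + F)

def logc3_decode(t):
    """LogC3 code value -> scene-linear exposure (the 'linearization')."""
    t = np.asarray(t, dtype=float)
    return np.where(t > E * CUT + F, (10.0 ** ((t - D) / C) - B) / A, (t - F) / E)

stops = 0.18 * 2.0 ** np.arange(4, -14, -1)   # +4 .. -13 stops around middle gray
codes = logc3_encode(stops)                    # what the log waveform would show
linear = logc3_decode(codes)                   # what the analysis should run on
print(np.round(np.log2(linear[:-1] / linear[1:]), 3))   # ~1.0 everywhere: a flat staircase
```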

Most of the cameras they test have a 12 bit readout and can technically not score higher than 12 stops of light. Any value higher than 12 means either that the camera has issues with reflections, so shadows that should be 'clipped' receive some fill light, or that the camera uses fake stops regenerated by ugly highlight reconstruction. Such results don't mean the camera has a high dynamic range, but simply that the light the camera is fed is not properly registered. These cameras show a steep decrease in linearity towards the shadows, and Xyla patches that should be separated by a stop of light end up at almost the same IRE levels; technically those stops are not being registered, and perceptually they look as if one used a Tiffen low contrast filter. A nice artistic effect, but it has nothing to do with a sensor being able to register a certain number of stops. Our suggestion to CineD is to make four categories of cameras:
cameras that use dual gain readout, cameras that use 14 bit analogue to digital converters, cameras that use 12 bit analogue to digital converters, and cameras that use other methods like quad Bayer.
These four methods are the main methods in cameras, and by the nature of their hardware they predict the dynamic range outcome. We previously concluded that a 12 bit ADC (analogue to digital converter) camera cannot register more than 12 stops; likewise, a 14 bit ADC camera cannot resolve more than 14 stops of light. Dual gain readout is a completely different method: one pixel can either feed two analogue to digital converters (ADCs) with different gains and a crossover, or one pixel can be read out twice in a row through different gain circuitry. With DGO sensors the dynamic range can potentially be much higher, depending on the quality of the photosites. That brings us to the most important point: the amount of noise a pixel produces before it is fed into an ADC.
If the noise produced by a pixel on its way to the analogue to digital converter (ADC) is too severe, the ADC can only register more noise, not more stops of light. Bigger pixels tend to have less noise because they gather more light, but this is highly technical: it depends on the sensor layout, on all the hardware stages where noise occurs, and on the quality of the noise. For instance, is it solely noise produced by one pixel, or does it also exhibit crosstalk noise from other pixels, or dark current? And so on. Obviously there is a lot more to this, but to put it simply: if a pixel outputs random noise (no visual information), an ADC will only capture that noise, and it does not lead to higher dynamic range figures. HOWEVER, it is known that if the quality of the noise is nice, and it rolls off gradually into random noise without digital artefacts like FPN (fixed pattern noise) or chroma noise, it is very much worth capturing that noise for a nicer image instead of cutting it off and mixing it with ADC noise. Therefore a higher ADC bit depth is always nice, though a higher bit depth readout will often lead to slower readout times, resulting in more rolling shutter artefacts.
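A minimal sketch of two points above, with made-up sensor numbers: the stop count a photosite can deliver is bounded by full well versus the noise ahead of the ADC, and a dual-gain readout blends a low-gain path (protects highlights) with a high-gain path (clean shadows) around a crossover. Real DGO designs weight on the measured values rather than the true signal as done here, so treat this purely as an illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
FULL_WELL = 40000.0   # e-, hypothetical saturation charge
READ_NOISE = 3.0      # e-, hypothetical noise before the ADC

# Whatever the ADC bit depth, the photosite itself caps the usable range:
print("photosite ceiling: %.1f stops" % np.log2(FULL_WELL / READ_NOISE))   # ~13.7

def readout(signal_e, gain, path_noise_e):
    """One gain path: add that path's noise, clip where its range runs out."""
    noisy = signal_e + rng.normal(0.0, path_noise_e, np.shape(signal_e))
    return np.clip(noisy, 0.0, FULL_WELL / gain)   # higher gain saturates earlier

def dgo_combine(signal_e, crossover_e=2000.0):
    low = readout(signal_e, gain=1.0, path_noise_e=12.0)   # full range, noisier
    high = readout(signal_e, gain=4.0, path_noise_e=3.0)   # clean, clips at 1/4 range
    w = np.clip(signal_e / crossover_e, 0.0, 1.0)          # fade towards the low-gain path
    return w * low + (1.0 - w) * high

print(dgo_combine(np.array([50.0, 2000.0, 35000.0])))      # shadow, crossover, highlight
```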
A camera like the Fujifilm X-H2S, with its stacked sensor, has shown that rolling shutter artefacts can still be controlled well with such a fast stacked sensor design; the future is bright! It also has to be noted that almost no raw-shooting camera we know of does noise reduction at the raw level. Tests have shown that it is highly beneficial to denoise at the pixel level, before the debayer, XYZ and CCM passes, while all cameras except Achtel (which works together with Wavelet Beam) denoise after debayering. By denoising at the native pixel level one gets better color information after the debayer pass, and the noise that remains will be more 'organic'. We hope other camera brands will start to offer pre-debayer noise reduction.
12 bit ADC
The number of cameras equipped with a sensor that is crippled in video mode to a 12 bit ADC is still huge, accounting for about 80% of the pro cameras above 1500 euros. It's even worse: a large number of cameras still in production only read out at 11 bit (11 stops). Some cameras might use some internal noise reduction, others might use a sensor with better photosites, but the differences are marginal. A Pocket 6K registering more than 12 stops is simply light bouncing around in the cheaply made sensor cabinet. That leads to the same look as putting a low contrast lens or filter on a high dynamic range camera such as an ARRI Alexa; it does not produce the actual dynamic range response we see numerically expressed in an Imatest result. The better the sensor and its ADC, the more stops you can capture in the shadows. Imagine a good sensor with good signal to noise specs and a 12 bit ADC mode: you can now potentially capture 12 usable stops of light. Switch the readout mode to 14 bit and you theoretically gain two extra stops in the shadows, but the image will clip at the same point (full well capacity).
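A minimal sketch of why a linear 12 bit ADC tops out around 12 stops: each stop down halves the number of code values left to describe it, so the bottom stop is a single code and a 13th stop has no codes left at all.

```python
# Code values available per stop on a linear 12 bit ADC.
CODES = 2 ** 12                          # 4096 linear code values
for stop in range(1, 14):
    hi = CODES / 2 ** (stop - 1)         # top of this stop, in code values
    lo = CODES / 2 ** stop               # bottom of this stop
    print(f"stop {stop:2d}: {hi - lo:7.1f} code values")
# stop 12 gets exactly one code value; stop 13 would need half a code,
# i.e. it cannot be represented, regardless of how good the photosite is.
```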

So stop quoting raw numbers: just count the stops above the noise floor, and also look at the linear response (after linearization). If it is not linear, then light is bouncing around the sensor cabinet or the lens used in the test was not suitable. We could make a list of cameras with 12 bit ADCs, and they would all perform pretty similarly. Some have a better CFA, so they yield better spectral sensitivity (the ability to separate colors), some have better color science (a CCM, or even a HueSatDim map on top of that), some have better codecs, better sensor cabinets, OLPFs and so on. But MTF (the ability to register fine detail and contrast) and spectral sensitivity are not part of the dynamic range discussion, so we will cover those in separate topics. Just about every camera in the industry does a 12 bit readout, excluding the Canon C70 (DGO) and C300 III (DGO), the Alexa, the Alexa 35, the Varicam, the X-H2S, the Sony Venice and a few others. All the rest, including Z CAM, Kinefinity and Sony (except the Venice), do 12/11 bit ADC. Yes, the RED Komodo does 12 bit as well, and looking at some tests 11 bit might even be the case, yet their marketing stands firm. If you still believe your Komodo or your Sony FX6 does more than 11.8 stops of real dynamic range, then this article might not be for you!

Latitude test to determine Dynamic range
Having discussed the problems with dynamic range expressed as a single number, we want to share our humble opinion of how something like dynamic range should be measured.

If a lens is capable of resolving dynamic range properly, at least linearly over an 18 stop range, one can set up a simple test. Take a gray or white chart and sit next to it in the frame for some real-life reference. Start with the chart clipped, then decrease exposure by one stop at a time; using shutter speed is best. Now load this footage into Resolve. Use the first shot where nothing is clipped; to be safe, go one stop below that, to ensure no individual color channel is clipped. Linearize on the first node using the correct log curve of the camera you are using. Use the gain slider on the second node to normalize this shot to a 'normal' looking image, then delinearize into Rec709 gamma 2.4 on the third node. Now, for every subsequent shot with decreasing exposure, simply use the gain slider to bring the white/gray chart to the same level as in your reference image. Obviously the lower you go in exposure, the more noise, but we can dismiss this for now.
Once you see a difference in contrast on your scopes, or color shifts, the test should be cut off and the stops counted from there. This is all very much simplified, but the test is clear, and CineD luckily does such tests. We disagree with their results, though: we believe that once a camera starts to behave differently on the scopes (taking a standard deviation into consideration), one should cut off the test and count the stops. What I miss in their test is a proper linearization before applying the exposure shift; this could easily be done and would make these tests much more representative. But they are looking in the right direction. Unlike analogue film, a digital sensor is a linear capture device; any roll-off or similar behavior is caused either by malfunctioning or by a manufacturer applying a HueSatDim map.
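To make the node math concrete, here is a minimal sketch of the three Resolve nodes, assuming LogC3 footage (the decode constants repeat the earlier linearization sketch) and simplifying the display encode to a bare gamma 2.4 power function. The key point is that gain applied on linear data is an exact exposure shift: multiplying by 2^n pushes an n-stop underexposed shot back to normal.

```python
import numpy as np

# LogC3 EI 800 constants, as in the linearization sketch above
A, B, C, D, E, F, CUT = 5.555556, 0.052272, 0.247190, 0.385537, 5.367655, 0.092809, 0.010591

def logc3_decode(t):
    t = np.asarray(t, dtype=float)
    return np.where(t > E * CUT + F, (10.0 ** ((t - D) / C) - B) / A, (t - F) / E)

def normalize_shot(logc_pixels, stops_under):
    linear = logc3_decode(logc_pixels)                 # node 1: linearize
    pushed = linear * 2.0 ** stops_under               # node 2: gain = exposure shift
    return np.clip(pushed, 0.0, 1.0) ** (1.0 / 2.4)    # node 3: gamma 2.4 display encode

# A chart that meters at middle gray but was shot 4 stops under comes back up:
under = 0.18 / 2 ** 4                        # linear value of the underexposed chart
code = C * np.log10(A * under + B) + D       # its LogC3 code value
print(normalize_shot(code, stops_under=4))   # ~0.49, the normal display level for mid gray
```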

Above you can see a Xyla chart waveform of the Alexa 35 in LogC4. From a good source we know that when linearizing these stops you can draw a straight line, and obviously a log curve has nothing to do with digital sensitometry; hence we think that CineD should properly linearize the footage in their lab tests. Yes, no one ever wants to look at a linear image, because it looks odd, but for lab tests a linear display is paramount, because one can easily judge the latitude without having to do painstaking latitude tests. In a test of the Alexa 35 you would see that even the 16th stop does not hit the noise floor. Due to the steep log curve (LogC4 is very steep) it looks like the 16th, 15th and 14th stops are unusable, while once linearized there is even a 17th and an 18th stop showing up (stuck in the noise floor, admittedly, but still able to serve as a nice roll-off).
HUESATDIM map on ARRI Alexa
A HueSatDim map like the one on an Alexa overcomplicates everything, because something like that is hard to reverse in post. It is known that the Alexa uses a hybrid subtractive color science, basically comparable to a non-linear 3D LUT on top of its CCM (linear color correction matrix). Some people choose the Venice over the Alexa because it has more 'digital' colors, and we have shown that it is much easier to match a Venice to an Alexa than an Alexa to a Venice, because the Alexa rolls off and desaturates color information above certain levels. It would be great if ARRI offered a way to disable the HueSatDim map in post for more advanced grading, because now the 'Alexa look' is always the starting point for every colorist, and it is next to impossible to reverse. Such cameras show desaturation and contrast shifts in latitude tests not because they are bad, but because they apply a non-linear calculation. Twelve years ago this was super cool, because grading a project was hard, but now everyone can grade, and it is not necessary to have a kind of 3D LUT baked into a camera.
Even if you shoot ARRIRAW you have to go through their UI, and this HueSatDim map will be added. Yes, it is aesthetic and nice, but the fact that you can't reverse it, while it could just as well be offered as a simple add-on, makes it a poor choice. But let's face it: ARRI cameras, especially the Alexa 35, will remain some of the best cameras for the coming 10 years, and they will be very hard to beat on all levels (MTF, spectral sensitivity, dynamic range, reliability, connectivity, etc.). We simply encourage ARRI to be more open about their color science. They let you think it is something organic, but in technical terms it is a very well designed CFA, a very well designed DGO architecture, and a lot more. All they need to do is debayer at a high level, apply an XYZ white balance transform on the spectral locus, and add a CCM targeting the ALEXA color space. CCM calculation isn't super hard: one can use Delta E 2000, put a bit more weighting on skin tones and play around with it, though the quality of the CCM will depend on the CFA and the debayer. The HueSatDim map is just complete nonsense and could be incorporated in their Rec709 display LUT, while keeping LogC3 and LogC4 clean, so one can linearize them properly and turn them into magic. ARRI already dropped their 'film matrix' from earlier models; now they should stop with their funny REVEAL color science and just focus on a good CCM and sensor design. We are not stupid (well, most of the industry is stupid :).
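A minimal sketch of the 'just give us a clean CCM' idea: fit a 3x3 linear matrix from camera RGB to target RGB by weighted least squares, with extra weight on skin-tone patches. A production fit would minimize Delta E 2000 in Lab space rather than RGB error, and the patch data below is entirely hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
camera_rgb = rng.uniform(0.05, 0.9, (24, 3))          # stand-in 24-patch chart capture
true_ccm = np.array([[ 1.6, -0.4, -0.2],
                     [-0.3,  1.5, -0.2],
                     [-0.1, -0.5,  1.6]])
target_rgb = camera_rgb @ true_ccm.T                   # pretend ground-truth references

weights = np.ones(24)
weights[6:9] = 3.0        # pretend patches 6-8 are skin tones: weight them up

W = np.sqrt(weights)[:, None]                          # weighted least squares
ccm, *_ = np.linalg.lstsq(camera_rgb * W, target_rgb * W, rcond=None)
print(np.round(ccm.T, 3))                              # recovers the matrix we pretended was truth
```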
DYNAMIC RANGE ARTISTICALLY
This part is the most fun, because it is absurd that so many people talk about dynamic range while having
- no idea about the technicalities behind dynamic range, and
- no idea what dynamic range means on the level of artistry.
Once we skip the technical part and acknowledge that most cameras do not register more than 12 stops of light (except the few usual suspects), we can focus on artistry: both how to make great looking images with cameras with relatively low dynamic range, and how much dynamic range we actually need for the level of artistry we are striving for.
To understand dynamic range artistically instead of technically, we need to tackle one more technical concept, and that is mastering.
Cinema and TV, in the ballpark (HDR excepted), can only display about 6 stops of light. Our eyes are more sensitive to mid tones, so with a curve we can cram more stops of light inside this 6 stop container, but that means we need a roll-off in both highlights and shadows to favor the most articulated parts of our compositions. As an example, the ARRI Alexa Alev3 sensor has a latitude (usable dynamic range) of 14 stops.

When we shoot in CINE EI on the ARRI Alexa, the dynamic range and the full well sensitivity (the moment the highlights clip) remain the same: no analogue gain is involved, only digital gain. When you shoot at 800 ISO it will place 7.4 stops of light above middle gray and 6.6 stops below it. If we shoot a scene that presents 14 stops of dynamic range (measurable with a spot meter) and we want to display this scene on a normal Rec709 screen or cinema screen, we have to apply quite a dramatic curve. Since film had a curve by its analogue nature we are used to watching such content; it is engrained in our shared memories. The goal of grading (correcting) a 14 stop range into a 6-8 stop display format is to keep the precious information linear and clamp the remaining stops of light with a roll-off, so no hard clipping occurs in shadows or highlights.
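A minimal sketch of how CINE EI redistributes the same total range: because the gain is purely digital, the clip point is fixed, and re-rating the sensor just slides middle gray up or down, trading shadow stops for highlight stops. The 7.4/6.6 split at EI 800 is the figure quoted above; the one-stop shift per doubling of EI is our assumption of the usual behavior:

```python
import numpy as np

def ei_stop_split(ei, base_ei=800, above_at_base=7.4, below_at_base=6.6):
    """Stops above/below middle gray for a given EI, clip point unchanged."""
    shift = np.log2(ei / base_ei)      # each doubling of EI = one more highlight stop
    return above_at_base + shift, below_at_base - shift

for ei in (200, 400, 800, 1600, 3200):
    above, below = ei_stop_split(ei)
    print(f"EI {ei:4d}: {above:.1f} stops above middle gray, {below:.1f} below")
```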

If we look at the sensitometry curve of Kodak Vision3 200T, we can clearly see that it resolves about 10 stops linearly. The toe of the film loses contrast quickly and introduces severe grain, and towards the highlights it loses contrast and color information as well: an analogue roll-off. Once we print this film to a print film in order to project it, we are only able to see about 6-8 stops of light, given the low contrast nature of projection (shadows are filled in by ambient light and light leaking across the screen), but by means of the added curvature of the print film and the slopes of the stock it was shot on, we can cram about 10 stops of light into this 6-8 stop projection.
By means of printer lights, exposure could be adjusted when printing to the print film, so the mid point and overall brightness of the scene would look good. One of film's good features compared to digital is that no hard clipping occurs, whereas digital clips hard in the highlights, which we call the 'brick wall' effect. Hard clipping occurs when full well capacity is reached and a pixel cannot accept more light. The often referred-to magical roll-off in digital is a myth: it does not exist by nature of how a digital sensor works, and can only be achieved with a roll-off curve. Effectively, a roll-off curve always reduces dynamic range, so it is best to shape the roll-off yourself during post production rather than use a camera with a roll-off curve baked in. As long as no hard clipping occurs, one can build a roll-off curve without losing any highlight detail. A roll-off curve is nothing more than clamping the stops of light that were fed to the camera into a smaller number of stops in post.
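A minimal sketch of such a post-production roll-off: everything below the knee stays untouched (linear), and the stops between the knee and the sensor's clip point are compressed smoothly into the remaining headroom. The exponential shoulder here is one common choice, not 'the' film curve:

```python
import numpy as np

def highlight_rolloff(x, knee=0.6, ceiling=1.0):
    """Map linear values in [0, inf) to [0, ceiling) with a soft shoulder."""
    x = np.asarray(x, dtype=float)
    head = ceiling - knee
    return np.where(x <= knee,
                    x,                                              # pass-through below the knee
                    knee + head * (1.0 - np.exp(-(x - knee) / head)))  # smooth shoulder above it

# A value at or above the old clip point eases in just below the ceiling
# instead of slamming into the brick wall:
print(highlight_rolloff(np.array([0.3, 0.6, 0.8, 1.0, 2.0, 8.0])))
```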

Film has a very large dynamic range: around 8-9 stops linearly, with a roll-off of 2-3 stops on the shadow side and 3-4 stops on the highlight side (one could subjectively even say 5, because the gradation towards full detail loss is super smooth). The goal was to stay out of those regions, or use them only for non-precious information. Many cinematographers rated their stocks at a lower ASA speed, deliberately overexposing to get better contrast in the shadows and less grain. They basically spot metered their shadows and made sure to stay out of the toe of the film, creating a cleaner, better density (a thick negative). 500T film was often rated at 320 ASA instead of 500 ASA.

Given the typical brick wall effect of digital acquisition, and our eyes' tendency to seek out the brighter parts of an image, white clipping is an unwanted effect for cinematographers, and they try to avoid it at all costs. One way to make clipped whites look less harsh is to add a halation/blooming filter, which draws a halo around bright clipped areas; as long as the halos are not clipped, there is a rough roll-off towards the whites that makes them look more organic. Other ways of making clipping look less harsh are using vintage lenses and adding grain in post production: when there is texture inside the clipped parts they look less clipped, because there is still content.
Another way of handling clipped highlights is not to clip them in the first place, by means of underexposure or by using cameras with a very high dynamic range (the Alexa Alev3 cameras, or the Alexa 35 to max out). But watch out: as stated, film stocks have an extremely long roll-off towards full white, and full white in analogue barely exists; it is one of those things film is very good at. Overexposing film had its limits, though: when printing back to normal exposure, highlights lost density (contrast) and color information, so it only looked good when non-precious information sat on that part of the film curve, like skies or incandescent lights, and not skin tones or other precious things. The non-linear behavior and grain in the shadows of film could also be painful. If we compare the filmic response to an Alexa 35, we can conclude that we can now mimic film 100% with a curve, whereas against the older Alexa, film stocks still had an advantage in some areas. We could add an extreme 5 stop roll-off curve on the Alexa 35 and still have enough linear stops to create a nice image; even incandescent practicals can still yield some detail, which gives an otherworldly effect, without having to dim them, so they can serve as real lights and interact with the scene.
Spotmeter your way through life
Maybe we should have started this article with this paragraph, because it is paramount for understanding dynamic range from the artistic point of view as well as the technician's. As photographers/filmmakers we work partly with reality as an input, but we transform that reality, and in this process there is no right or wrong as long as you have a vision. As a cinematographer or filmmaker, we challenge you to buy a spot meter. It does not have to be an expensive one, and one with an incident meter (the cone) is not even necessary, maybe even misleading. Just start with a spot meter; you will be surprised what wealth of information it gives you.

A spot meter often has a small zoom lens. All you do is aim the meter at an object or surface, and it tells you at what f/T stop, shutter speed, ND value or ISO speed (which jointly determine exposure) you have to set your camera in order to render that object or surface as 18% middle gray. Anyone who has shot with an analogue stills camera, which often has a primitive built-in spot meter, will understand the problematic nature of a spot meter.
Example: I am on a winter sports holiday (hard to imagine with climate change) and I want to photograph a friend of mine on his skis. I aim the spot meter at the freshly fallen snow, and according to the meter I have to shoot at f/22. I shoot at f/22, and after I come home and develop the film, the snow is rendered as middle gray, while it was super white in reality, and the person I am portraying is rendered as a silhouette. Solution: I need to decide how bright I want to render the snow, in this case 4 stops above middle gray, so I have to open my iris to f/5.6 (4 stops brighter than f/22). Now I come home, develop the film, and even though the snow is nicely rendered, the face of the person I am portraying is still somewhat dark, maybe even a bit too grainy, and I don't want to scan the image and digitally push the face in Photoshop. Solution: when I point the spot meter at the person's face it tells me to shoot at f/4.0, but since I know that the pale white skin tones of this person should be rendered at least one stop above middle gray, I set the lens to f/2.8. Now when I come home and develop the film, I find that whatever I try, there is never a good balance: the face looks great, but part of the sky and the right-hand corner of the snow slope are overexposed, and it looks ugly. Yes, I could have gone with f/5.6 and relit in post, degraining the face, but that was never the aim, and it would never look as good as shooting it the right way. Solution: acknowledge that the thing I want to shoot is not pleasing in the first place; even if I shoot it on a high dynamic range film stock, the colors and density will look bad once printed (a print holds only a very small amount of dynamic range). The spot meter, and the whole procedure of looking at a composition more deeply, taught me a lot. First of all, the idea of the picture was odd anyhow: a man on skis, very generic; I could have searched for stock photos and pasted my friend's head onto some random image. Second, the composition was off balance; I didn't really put enough time into finding the right location and the right place to put the camera (camera height and focal length were a mess). The whole scenery was odd; the day before, the sun was out and we saw some beautiful trees, but I didn't have my camera with me. But at least we learned something, didn't we? This is an imaginary example (luckily); I don't go on winter sports holidays, and I have a vision when I bring a camera.
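For reference, the stop arithmetic in this example is just powers of two; a minimal sketch (each stop is a factor of 2 in light, and f-numbers move by a factor of sqrt(2) per stop):

```python
import math

def open_up(f_number, stops):
    """Open the iris by n stops: each stop divides the f-number by sqrt(2)."""
    return f_number / math.sqrt(2) ** stops

print(round(open_up(22, 4), 1))   # 5.5 ~ f/5.6: snow placed 4 stops over middle gray
print(round(open_up(4, 1), 1))    # 2.8: pale skin placed one stop over middle gray
```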

What we just learned is that even when our spot meter is used properly and our film stock or digital camera has sufficient dynamic range, a composition can still be worthless and go straight into the garbage. By deeply studying the compositions we want to make, seeing how they turn out after capture, and going back and forth, we become better artists. When we spot meter the world around us, we soon understand that a lot of potentially nice compositions only hold a small number of real-life stops of light, and if we understand how to capture that technically, they will turn out nice. We have to remember that cinema is about flattening 3D space and light: we don't want the sun to be as bright as it was, we don't want to wear sunglasses in the cinema, and as we learned, there is a technical limit to how many stops of contrast a projection can hold. Even though we can hide 16 stops of light in a projected image of 8 stops, each stop of light then becomes half a stop, and the composition starts to lack contrast. Yes, that can be an artistic choice, and I am all in, but before we get artistic we have to have some idea of the medium we work in. Yes, there is HDR, but it is not really a standard, and I am somewhat conservative, which brings me to paintings. The best known art form is painting, so I brought my spot meter to the Rijksmuseum to measure a Rembrandt, a painter known for his contrasty images, who could make it seem as if a bright light was shining inside his paintings.

So what did I find? 6.5 stops of dynamic range!! Rembrandt had access to very deep blacks and the painting had recently been restored; it must be one of the most contrasty images I have ever seen. There may also have been some reflections from the harsh lights facing the painting, so I added a circular polarizer in front of my spot meter and got to 5.8 stops. Basically this image fits more than twice into an Alexa Alev3: I can overexpose it 4 stops or underexpose it 4 stops on an Alexa and it will hold about all the information, though it will be a bit noisier when underexposed. Which brings me to the following paragraph: shoot in-camera.
Shoot in-camera
We just learned that a painting by Rembrandt, considered a very contrasty image, only holds 6.5-7 stops of light. Any camera in the world, even an old MiniDV camera with maybe 8 stops at most, can capture this painting without clipping while still holding the shadows. In our next episode we will look at cinema history and estimate the dynamic range used in motion picture films, but not before watching a superb video featuring the amazing Geoff Boyle talking about paintings. The next episode will be available here mid-February 2023.