“Misbehaving pixels” is an umbrella term for a pixel/photosite deviating from its expected response, encompassing but not limited to stuck, hot, dead, lit, warm, defective, and flashing pixels. These individual pixels or clusters of pixels may appear static across multiple clips or come and go on a per-frame basis, seemingly at random.
All CMOS sensors are subject to thousands of misbehaving pixels, regardless of the manufacturer. These defects are inherent to the manufacturing process. For context, a single 4K UHD CMOS sensor has ~8.3 million photosites on it (3840 × 2160 = 8,294,400). Can you imagine producing anything 8.3 million times over without error? Challenging, to say the least.
So if CMOS sensors have potentially thousands of defective pixels all over the place, why don’t we see them in every frame?
Great question. Manufacturers test every sensor coming off the production line, using complex algorithms to carefully map the locations of photosites that deviate from their expected response (see image 1).
Image 1. Misbehaving pixel detected by algorithm
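The exact detection algorithms are proprietary, but a minimal sketch of one common approach is shown below. Everything here (the function name, the 3×3 neighborhood, the 6-sigma threshold) is an illustrative assumption, not any manufacturer’s actual method: capture a frame with the sensor covered, then flag photosites that stray too far from their local neighborhood.

```python
import numpy as np
from scipy.ndimage import median_filter

def find_misbehaving_pixels(dark_frame: np.ndarray, n_sigmas: float = 6.0) -> np.ndarray:
    """Flag photosites whose dark-frame response deviates sharply from
    their neighbors. Returns (row, col) coordinates of suspect pixels."""
    # Estimate each photosite's expected value from its 3x3 neighborhood.
    expected = median_filter(dark_frame.astype(np.float64), size=3)
    residual = dark_frame - expected
    # Anything far outside the sensor's normal noise floor is suspect.
    return np.argwhere(np.abs(residual) > n_sigmas * residual.std())
```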
As the location of each flagged photosite is logged, a proverbial map is generated that is unique to every sensor, similar to a fingerprint (see image 2).
Image 2. Dead pixel mapped and marked as useless
This pixel map is then used to correct these misbehaving pixels during readout (see images 3 and 4) by replacing the suspected bad values with theoretically more accurate values obtained from the surrounding photosites. It’s sort of like when the unprepared kid in class cheats off his neighbors during a test, except in this case he’s encouraged.
Image 3. Reading surrounding pixels to obtain replacement value of misbehaving pixel
Image 4. Mapped misbehaving pixel, corrected using information from surrounding pixels
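In code, that neighborly cheating might look like the sketch below. This is a simplified monochrome version with made-up names; real cameras do this in the readout pipeline using the precomputed map, and on raw Bayer data the relevant neighbors would be the nearest photosites of the same color:

```python
import numpy as np

def correct_pixels(frame: np.ndarray, pixel_map: np.ndarray) -> np.ndarray:
    """Replace each mapped pixel with the median of its good neighbors."""
    fixed = frame.copy()
    h, w = frame.shape
    bad = {tuple(rc) for rc in pixel_map}  # fast lookup of mapped pixels
    for r, c in pixel_map:
        # Gather the surrounding photosites, skipping other mapped pixels.
        neighbors = [frame[rr, cc]
                     for rr in range(max(r - 1, 0), min(r + 2, h))
                     for cc in range(max(c - 1, 0), min(c + 2, w))
                     if (rr, cc) != (r, c) and (rr, cc) not in bad]
        if neighbors:
            fixed[r, c] = np.median(neighbors)  # the encouraged cheating
    return fixed
```

Precomputing the map at the factory is what makes this cheap enough to run on every frame at full frame rate.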
If all these misbehaving pixels are mapped out at the factory, why are we still seeing them pop up occasionally?
The algorithms used to identify problematic pixels aren’t always perfect, and pixel response can be a fickle thing. Pixel behavior is heavily influenced by the sensor’s operating temperature and integration time (or shutter speed), along with a few other things we won’t dive into, such as cosmic radiation (not kidding). As the temperature of the sensor fluctuates up and down, certain pixels may fall in and out of line with their expected response.
The same is true of adjustments in exposure. Furthermore, sensors may degrade over time and develop new defects after factory calibration has taken place. This is why many manufacturers give users the ability to perform their own sensor calibrations based on their current shooting environment, exposure settings, and unique sensor response.
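To get a feel for why temperature and shutter speed matter so much, here’s a back-of-the-envelope model. The rule of thumb that dark current roughly doubles every ~6 °C is well established for silicon sensors, but the function name and the numbers below are illustrative assumptions, not any real camera’s figures:

```python
def dark_signal(rate_e_per_s: float, temp_c: float, calib_temp_c: float,
                exposure_s: float, doubling_c: float = 6.0) -> float:
    """Accumulated dark signal (in electrons) for a single photosite.
    Dark current roughly doubles every ~6 degrees C and accumulates
    linearly with integration time."""
    rate = rate_e_per_s * 2 ** ((temp_c - calib_temp_c) / doubling_c)
    return rate * exposure_s

# A slightly leaky photosite calibrated at 25 C with a 1/50 s shutter:
print(dark_signal(40, 25, 25, 1 / 50))  # 0.8 e-, hidden in the noise floor
# The same photosite 12 C hotter with the shutter slowed to 1/25 s:
print(dark_signal(40, 37, 25, 1 / 25))  # 6.4 e-, now a visible warm pixel
```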
These user-performed sensor calibrations go by many names and may include additional functions beyond pixel correction. Examples include Black Shading, Black Balance, dark frame subtraction, and Automatic Pixel Restoration. In most scenarios, properly calibrating your camera’s sensor in accordance with the manufacturer’s guidelines and operating the camera within its calibrated range (temperature and integration time/shutter speed) will correct all misbehaving pixels. Phew.
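Of the calibrations named above, dark frame subtraction is the easiest to sketch in code. A minimal illustration, assuming numpy arrays of matching shape (the function name is ours, not any camera SDK’s): average several frames captured with the sensor covered at the same temperature and shutter speed as the shot, then subtract that fixed-pattern estimate from the footage.

```python
import numpy as np

def dark_frame_subtract(frames: np.ndarray, dark_frames: np.ndarray) -> np.ndarray:
    """Subtract the sensor's fixed-pattern dark signal from each frame.
    `dark_frames` are captures taken with the lens capped, at the same
    temperature and shutter speed as `frames`."""
    master_dark = dark_frames.mean(axis=0)  # averaging suppresses random noise
    # Clip at zero so random noise can't push corrected values negative.
    return np.clip(frames - master_dark, 0, None)
```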
Not so fast, I still see a few bad apples. What gives?
As discussed earlier, the complex algorithms used to identify misbehaving pixels in-camera are neither infallible nor all created equal. Camera manufacturers are in a tough spot: on one hand, customers want smaller and smaller camera systems; on the other, they want breakneck performance with little downtime.
Manufacturers take different approaches to this dilemma depending on where their priorities lie. Company X might be okay with a larger camera body capable of housing the processing hardware needed for more sophisticated sensor calibrations in-camera. Company Y might be more focused on maintaining a small form factor with ultra-fast in-camera calibration speeds, relying instead on more powerful post-processing solutions to resolve any missed pixels.
Both approaches have their pros and cons. To make this balancing act even more difficult, imagine analyzing ~35 million photosites on an 8K sensor vs. the ~8.3 million cited earlier for a 4K UHD sensor; that’s over 4x the information to process. As camera systems continue to push capture resolutions higher, you’re more likely to encounter misbehaving pixels in your footage. Thankfully, most post-processing software offers a variety of easy-to-use pixel masking tools to address this.
HOW TO AVOID/CORRECT MISBEHAVING PIXELS
- Allow the camera/sensor to warm up to its intended operating temperature and perform a sensor calibration in accordance with the manufacturer’s guidelines.
- Operate the camera within its calibrated temperature and exposure ranges.
- Large deviations in sensor temperature or exposure may introduce misbehaving pixels. Follow the manufacturer’s best practices when operating the camera in harsh environments.
- Most post-processing software offers pixel masking functions to correct misbehaving pixels. Additionally, certain camera manufacturers such as ARRI and RED provide pixel masking tools via their SDKs for integration into popular third-party programs. Ensure your post-processing software is up to date in order to take full advantage of these tools.
- Don’t use spatial noise reduction globally across the image to correct misbehaving pixels.
- This will degrade the image and cause a loss of detail and texture. Noise reduction algorithms try to preserve anything that looks like a genuine feature of the frame; because misbehaving pixels can remain static across multiple frames, the algorithm may mistake them for real detail and fail to correct them.
If operating the camera within its calibrated range and performing a fresh sensor calibration don’t resolve a misbehaving pixel issue, please contact the manufacturer directly.