I’m a big fan of quantification, but in the field of human factors it can be difficult. Enter a two-paper series authored by my friend Jeff Suway and his neuroscientist brother. The papers outline a quick, practical method for measuring the luminance of objects in a scene using a consumer-grade digital camera. First, what is luminance?
Luminance: the amount of light emitted by, passing through, or reflected from a surface
Luminance has traditionally been acquired via numerous individual spot measurements; the Suways built on prior literature to develop a method for quantifying it from digital video. This allows the analyst to drive through the collision scene with the camera at the driver’s eye position and capture all the data needed to establish luminance from multiple perspectives.
Under the hood, the method simply relates pixel intensity to luminance by way of a calibration routine. In their 2017 paper, the authors calibrated a Sony Alpha 7SII and compared the results to those acquired with a high-end meter, achieving a mean difference of 0.000024 cd/sqm (candela per square meter) for the calibration set and 0.0096 cd/sqm for four images that were not part of the calibration. The authors updated the process slightly in their 2022 paper, improving noise resilience and results at luminance extremes.
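If you’re curious what that kind of calibration looks like in practice, here’s a minimal sketch in Python of the general idea: fit a curve mapping pixel intensity to meter-measured luminance, then use the fitted curve to estimate luminance anywhere in a frame. The power-law model and the sample values below are my own illustrative assumptions, not the Suways’ published model or coefficients.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration pairs: mean pixel intensity (0-255) of a gray
# target in the frame vs. luminance of that same target measured with a
# spot meter (cd/sqm). These values are made up for illustration.
pixel_intensity = np.array([12.0, 34.0, 70.0, 120.0, 180.0, 230.0])
meter_luminance = np.array([0.05, 0.40, 2.10, 8.50, 25.0, 60.0])

def power_law(intensity, a, gamma):
    """Assumed calibration model: luminance = a * intensity**gamma."""
    return a * intensity**gamma

# Fit the assumed model to the calibration pairs.
params, _ = curve_fit(power_law, pixel_intensity, meter_luminance, p0=(1e-4, 2.2))

# Once calibrated, the same curve converts any pixel (or a whole frame)
# from intensity to estimated luminance.
frame = np.array([[40.0, 95.0], [150.0, 210.0]])  # stand-in grayscale frame
luminance_map = power_law(frame, *params)
print(luminance_map)  # estimated cd/sqm per pixel
```

In practice you’d also lock the camera’s exposure, ISO, and white balance so the intensity-to-luminance mapping holds across every frame of the drive-through.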
The image above is a screenshot from Nitere (a commercially available package developed by the Suways) showing how pixel intensity is converted to luminance. The papers provide enough detail to perform the calibrations and calculations yourself, but if you’d prefer a more automated (and prettier) solution, Nitere is $499/yr and camera calibrations are $299.
Whether or not you plan to dive into conspicuity analyses, I think it’s beneficial to know what’s out there so you can direct clients appropriately.
Thanks for reading, keep exploring!
Lou Peck
Lightpoint | Axiom
P.S. I'm finally firing the Data Driven Podcast back up! The next guest will be none other than Jeff Suway himself. Stay tuned for its release.