Within the context of video surveillance, an operator's ability to detect objects and incidents is reduced in over- and under-exposed regions of an image. Multiple exposures of the same scene can be fused to create a well-balanced and detailed image that is meaningful to a human operator or surveillance system. We present an implementation of real-time video enhancement that fuses video streams captured at different exposures. The algorithm recovers detail in over- and under-exposed areas that would be lost in a single frame of fixed exposure. Using the processing power of a mobile computer and its graphics processing unit (GPU), the implementation fuses three greyscale videos of resolution 1600×1200 at 20 frames per second.
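The paper's GPU implementation is not reproduced here, but the core idea of exposure fusion can be illustrated with a minimal CPU sketch in Python. The snippet below is an assumption-laden illustration, not the author's method: it uses a Mertens-style "well-exposedness" weight (a Gaussian centred on mid-grey), normalises the weights across the exposure stack, and takes a per-pixel weighted sum. The function name, the `sigma` parameter, and the synthetic test frames are all hypothetical.

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Fuse greyscale frames (values in [0, 1]) of the same scene,
    captured at different exposures, into one well-balanced frame.

    Hypothetical sketch: each pixel is weighted by its
    'well-exposedness' (a Gaussian centred on mid-grey 0.5), so
    over- and under-exposed pixels contribute little. Weights are
    normalised across the stack before the weighted sum.
    """
    stack = np.stack(frames, axis=0).astype(np.float64)
    # Well-exposedness weight: high near 0.5, low near 0 and 1.
    weights = np.exp(-((stack - 0.5) ** 2) / (2.0 * sigma ** 2))
    # Normalise per pixel so the weights across frames sum to one.
    weights /= weights.sum(axis=0) + 1e-12
    return (weights * stack).sum(axis=0)

# Three synthetic exposures of the same gradient "scene"
# at the paper's 1600x1200 resolution.
scene = np.linspace(0.0, 1.0, 1600 * 1200).reshape(1200, 1600)
under = np.clip(scene * 0.5, 0.0, 1.0)   # under-exposed
over = np.clip(scene * 1.5, 0.0, 1.0)    # over-exposed
fused = fuse_exposures([under, scene, over])
```

A real-time version would perform the same per-pixel weighting and sum on the GPU, where the operation parallelises trivially because each output pixel depends only on the corresponding pixels of the input frames.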
Reference:
Bachoo, A.K. 2009. Real-time exposure fusion on a mobile computer. 20th Annual Symposium of the Pattern Recognition Association of South Africa (PRASA), 30 November – 1 December 2009, Stellenbosch, South Africa, pp. 111–115. http://hdl.handle.net/10204/3857