Spatially adaptive multi-resolution multispectral image fusion
Due to the performance limits of remote sensing systems, multispectral images have limited spatial resolution. Their spatial resolution can be improved by merging them with higher-resolution image data, but a fundamental problem in existing fusion processes is the distortion of spectral information. This paper presents a spatially adaptive image fusion algorithm that produces visually natural images while preserving the quality of local spectral information. The high-frequency information of the high-resolution image that is injected into the resampled multispectral images is controlled by adaptive gains, which incorporate the difference in local spectral characteristics between the high- and low-resolution images into the fusion. Each gain is estimated to minimize the l2-norm of the error between the original and estimated pixel values over a spatially adaptive window whose weights are proportional to the spectral correlation measurements of the corresponding regions. The method is applied to a set of co-registered Landsat 7 Enhanced Thematic Mapper Plus (ETM+) panchromatic and multispectral images. Experimental results show that the proposed method synthesizes high-resolution images that successfully preserve the spectral content of the multispectral images.
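To make the gain-controlled detail-injection idea concrete, the following is a minimal sketch of this style of pansharpening, not the paper's exact algorithm: the high-frequency component of the panchromatic image is injected into an upsampled multispectral band, scaled by a per-pixel gain obtained from a local least-squares fit. For simplicity the sketch uses uniform window weights, whereas the paper weights the window by local spectral correlation; all function and parameter names (`box_mean`, `adaptive_gain_fusion`, `win`) are illustrative assumptions.

```python
import numpy as np

def box_mean(img, win):
    """Local mean over a win x win window (simple box filter, edge-padded)."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + h, dx:dx + w]
    return out / (win * win)

def adaptive_gain_fusion(ms_up, pan, win=7, eps=1e-12):
    """Inject pan high-frequency detail into an upsampled MS band.

    ms_up : upsampled multispectral band, same size as pan (2-D array)
    pan   : high-resolution panchromatic image (2-D array)
    win   : side length of the local gain-estimation window
    """
    pan = pan.astype(float)
    ms = ms_up.astype(float)
    # High-frequency detail of each image: pixel minus its local mean.
    hp_pan = pan - box_mean(pan, win)
    hp_ms = ms - box_mean(ms, win)
    # Per-pixel gain: least-squares fit of the MS detail onto the pan
    # detail within the window (uniform weights in this sketch; the
    # paper uses spectral-correlation-proportional weights instead).
    num = box_mean(hp_ms * hp_pan, win)
    den = box_mean(hp_pan * hp_pan, win) + eps
    gain = num / den
    # Fused band: MS band plus gain-scaled pan detail.
    return ms + gain * hp_pan
```

Because the gain is re-estimated in every local window, regions where the multispectral band tracks the panchromatic intensity closely receive more detail, while poorly correlated regions receive less, which is what limits spectral distortion in this family of methods.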