
This article is based on the paper "Adaptive and integrated neighborhood-dependent approach for nonlinear enhancement of color images".


The algorithm consists of three main steps:

  1. Build a global mapping from the image's luminance distribution;
  2. Adaptive contrast enhancement;
  3. Color restoration.

Global curve adjustment:

  1. First compute the luminance of the image with the following formula:

    I(x,y)=\frac{76.245\cdot I_R(x,y)+149.685\cdot I_G(x,y)+29.07\cdot I_B(x,y)}{255}
  2. Normalize the luminance:

    I_N(x,y)=\frac{I(x,y)}{255}
  3. Enhance the normalized luminance through the following transfer function (a code sketch of these three steps follows the list):

    I_n(x,y)=\frac{I_N(x,y)^a+\left(1-I_N(x,y)\right)\cdot b+I_N(x,y)^2}{2},\qquad a=0.24,\ b=0.5
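A minimal NumPy sketch of these three steps, assuming the input is a BGR uint8 array such as the one returned by cv2.imread; the function name and return convention here are illustrative, not taken from the paper:

    import numpy as np

    def global_curve_adjustment(image, a=0.24, b=0.5):
        """Steps 1-3: luminance, normalization and the transfer function (illustrative)."""
        image = image.astype(np.float64)
        # Step 1: luminance from the R, G, B channels (cv2.imread gives BGR order)
        I = (76.245*image[:, :, 2] + 149.685*image[:, :, 1] + 29.07*image[:, :, 0]) / 255
        # Step 2: normalize to [0, 1]
        I_N = I / 255
        # Step 3: transfer function with a = 0.24, b = 0.5
        I_n = (np.power(I_N, a) + (1 - I_N)*b + np.power(I_N, 2)) / 2
        return I_N, I_n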

Contrast enhancement:

  1. Next, convolve I(x,y) with Gaussian kernels at several scales; the result captures the luminance of the neighboring pixels:

    G(x,y)=K\,e^{\frac{-(x^2+y^2)}{c^2}}

    where K is a normalization constant and c is the scale, i.e. the Gaussian surround space constant. The convolution is

    I_g(x,y)=I(x,y)\ast G(x,y)
  2. Adaptive contrast enhancement:

    R_i(x,y)=I_n(x,y)^{r_i(x,y)},\qquad r_i(x,y)=\frac{I_{g,i}(x,y)}{I(x,y)}
  3. Linearly combine the results from the three scales:

    R(x,y)=\sum_{i=1}^{3}\omega_i R_i(x,y)

where each \omega_i is usually taken as 1/3; a code sketch of this contrast-enhancement step follows.
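A sketch of the contrast-enhancement step under the same assumptions, with cv2.GaussianBlur standing in for the Gaussian surround; the scales (5, 20, 240) follow the MATLAB code further below, while the epsilon guard and the function name are illustrative:

    import cv2
    import numpy as np

    def contrast_enhancement(I_N, I_n, scales=(5, 20, 240), weights=(1/3, 1/3, 1/3)):
        """Gaussian surround at several scales, exponent r = I_g / I, weighted sum (illustrative)."""
        I_N = np.maximum(I_N, 1e-6)              # guard against division by zero
        R = np.zeros_like(I_N)
        for c, w in zip(scales, weights):
            window = 2*int(3*c) + 1              # odd kernel size covering about +/- 3c
            I_g = cv2.GaussianBlur(I_N, (window, window), c)   # I_g = I * G at scale c
            r = I_g / I_N                        # neighborhood luminance ratio
            R += w * np.power(I_n, r)            # omega_i * R_i with R_i = I_n ** r_i
        return R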

Color restoration:

Linear color restoration is used (sketched in code below the formula):

R_j(x,y)=R(x,y)\,\frac{I_j(x,y)}{I(x,y)},\qquad j\in\{r,g,b\}
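A sketch of the linear color restoration under the same assumptions (channel order is cv2's BGR; the epsilon guard is again illustrative):

    import numpy as np

    def color_restoration(image, I_N, R):
        """R_j = R * I_j / I for each channel j (illustrative)."""
        channels = image.astype(np.float64) / 255        # normalized channels I_j
        I = np.maximum(I_N, 1e-6)[:, :, np.newaxis]      # original luminance, zero-guarded
        return R[:, :, np.newaxis] * channels / I

Chaining global_curve_adjustment, contrast_enhancement and color_restoration roughly reproduces the pipeline of the MATLAB script in the next section.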

Code:

  1. MATLAB code:

    I=im2double(imread('hockey.png'));
    I1=rgb2gray(I);
    I1=max(I1,eps);              %guard against zeros before the divisions below
    In=(I1.^(0.24)+(1-I1).*0.5+I1.^2)/2;
    %Convolve the luminance with Gaussian kernels at three scales
    sigma=5;
    window = double(uint8(3*sigma)*2 + 1);
    G1=fspecial('gaussian',window,sigma);
    Guass1=imfilter(I1,G1,'conv','replicate','same');
    r1=Guass1./I1;
    R1=In.^r1;
    sigma=20;
    window = double(uint8(3*sigma)*2 + 1);
    G2=fspecial('gaussian',window,sigma);
    Guass2=imfilter(I1,G2,'conv','replicate','same');
    r2=Guass2./I1;
    R2=In.^r2;
    sigma=240;
    window = double(uint8(3*sigma)*2 + 1);
    G3=fspecial('gaussian',window,sigma);
    Guass3=imfilter(I1,G3,'conv','replicate','same');
    r3=Guass3./I1;
    R3=In.^r3;
    R=(R1+R2+R3)/3;
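    %Linear color restoration: scale each color channel by R./I1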
    Rr=R.*(I(:,:,1)./I1);
    Rg=R.*(I(:,:,2)./I1);
    Rb=R.*(I(:,:,3)./I1);
    rgb=cat(3,Rr,Rg,Rb);
    imshow([I rgb]);
  2. Python code:

    import cv2
    import numpy as np

    def replaceZeroes(img):
        # Replace zero pixels with the smallest nonzero value to avoid division by zero
        min_nonzero = img[np.nonzero(img)].min()
        img[img == 0] = min_nonzero
        return img

    def Image_enhancing(image):
        img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)/255
        # img = 0.2989*image[:, :, 2]+0.5870*image[:, :, 1]+0.1140*image[:, :, 0]
        # Both give the same luminance; with the latter the extra division by 255 is not needed
        img = replaceZeroes(img)
        img_n = (np.power(img, 0.24)+(1-img)*0.5+np.power(img, 2))/2
        # Gaussian surround at three scales
        sigma = 3
        window = 3*sigma*2+1
        guass1 = cv2.GaussianBlur(img, (window, window), sigma)
        r1 = guass1/img
        R1 = np.power(img_n, r1)
        sigma = 20
        window = 3*sigma*2+1
        guass2 = cv2.GaussianBlur(img, (window, window), sigma)
        r2 = guass2/img
        R2 = np.power(img_n, r2)
        sigma = 24           # the MATLAB version above uses sigma = 240
        window = 255
        guass3 = cv2.GaussianBlur(img, (window, window), sigma)
        r3 = guass3/img
        R3 = np.power(img_n, r3)
        R = (R1+R2+R3)/3
        # Color restoration (the MATLAB version divides by the original luminance img instead of img_n)
        Rr = R*(image[:, :, 2]/img_n)
        Rg = R*(image[:, :, 1]/img_n)
        Rb = R*(image[:, :, 0]/img_n)
        rgb = np.concatenate(
            (Rb[:, :, np.newaxis], Rg[:, :, np.newaxis], Rr[:, :, np.newaxis]), axis=2)
        return rgb + image

    A few points to note:

    1. The image needs to be normalized when it is read; I do the normalization after converting to grayscale;
    2. When displaying the enhanced image, divide by 255 (see the usage sketch below);
    3. cv2.imread() reads the image channels in BGR order;
    4. Some articles say that the parts of the enhanced image exceeding 1 should be set to 1 (for details see ), but in practice I did not run into problems.
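A minimal usage sketch for the Python function above, assuming the 'hockey.png' file from the MATLAB example and following points 2 to 4 of the notes:

    import cv2
    import numpy as np

    image = cv2.imread('hockey.png')          # BGR, uint8 (point 3)
    result = Image_enhancing(image)
    result = np.clip(result/255, 0, 1)        # points 2 and 4: rescale and clip for display
    cv2.imshow('enhanced', result)            # cv2.imshow treats float images as [0, 1]
    cv2.waitKey(0)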