Physically-based chromatic aberration model

This post is a translation and adaptation of my original article.

What is chromatic aberration?

Chromatic aberration [1] is a failure of a lens to focus all colors to the same point. It is caused by dispersion: the refractive index of the lens elements varies with the wavelength of light.
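
As a rule of thumb, the dispersion of optical glass is often approximated by Cauchy's equation, n(λ) ≈ A + B/λ² (with material-specific constants A and B), so shorter blue wavelengths see a higher refractive index and are bent more strongly than longer red ones.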

This is how a perfect lens works:

As you can see, all the colors are focused at the same point.

There are two types of chromatic aberration: longitudinal (axial) and lateral (transverse).

Longitudinal chromatic aberration occurs when different colors come into focus at different distances along the optical axis, i.e. on different focal planes. With lateral chromatic aberration, all the colors lie on the same focal plane, but slightly shifted relative to each other.

In all the algorithms described below, I assume that we are dealing only with lateral chromatic aberration.

Implementation

Common algorithm

You can find the most popular algorithm implemented on my Shadertoy account.

// Chromatic aberration parameters
#define CA_STRENGTH 15.0

vec3 ChromaticAberration(vec2 uv)
{
    // Green is taken in place; red and blue are sampled at slightly
    // scaled positions around the screen center (0.5, 0.5)
    vec3 color = texture(iChannel0, uv).rgb;
    color.r = texture(iChannel0, (uv - 0.5) * (1.0 + CA_STRENGTH / iResolution.xy) + 0.5).r;
    color.b = texture(iChannel0, (uv - 0.5) * (1.0 - CA_STRENGTH / iResolution.xy) + 0.5).b;

    return color;
}

We sample the texture at different positions and take the corresponding channel from each sample. As a result, the color channels appear shifted in different directions. This algorithm is easy to implement, but it does not account for any other wavelengths, since we work only with pure red, green, and blue.
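
To get a feel for the scale of the shift, here is a quick back-of-the-envelope check in Python; the 1920-pixel buffer width is just an assumed example:

# Horizontal shift of the red tap for a pixel at the right edge (uv.x = 1.0)
strength, width = 15.0, 1920.0
uv_x = 1.0
red_tap = (uv_x - 0.5) * (1.0 + strength / width) + 0.5
print((red_tap - uv_x) * width)  # ~7.5 pixels outward from the center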

Physically-based approach

Theory

Since this algorithm was developed while I was working at My.Games, the main purpose of the effect was to show highly convex lenses. So, besides the round shape of the chromatic aberration, I was interested in getting a more accurate result with a wide range of colors, e.g. purple or yellow, produced by light dispersion. This means that every time light passes through the lens, different wavelengths refract at different angles, and every wavelength corresponds to a specific color in the RGB palette.

The main idea is to sample the texture with coefficients that reflect how much each wavelength contributes to every RGB channel. Let's start with the decomposition of light into an RGB palette. The function that returns the RGB color for a given wavelength looks like this:

def get_color(waveLength):
    if waveLength >= 380 and waveLength < 440:
        red = -(waveLength - 440.0) / (440.0 - 380.0)
        green = 0.0
        blue  = 1.0
    elif waveLength >= 440 and waveLength < 490:
        red   = 0.0
        green = (waveLength - 440.0) / (490.0 - 440.0)
        blue  = 1.0
    elif waveLength >= 490 and waveLength < 510:
        red   = 0.0
        green = 1.0
        blue  = -(waveLength - 510.0) / (510.0 - 490.0)
    elif waveLength >= 510 and waveLength < 580:
        red   = (waveLength - 510.0) / (580.0 - 510.0)
        green = 1.0
        blue  = 0.0
    elif waveLength >= 580 and waveLength < 645:
        red   = 1.0
        green = -(waveLength - 645.0) / (645.0 - 580.0)
        blue  = 0.0
    elif waveLength >= 645 and waveLength < 781:
        red   = 1.0
        green = 0.0
        blue  = 0.0
    else:
        red   = 0.0
        green = 0.0
        blue  = 0.0
    
    factor = 0.0
    if waveLength >= 380 and waveLength < 420:
        factor = 0.3 + 0.7*(waveLength - 380.0) / (420.0 - 380.0)
    elif waveLength >= 420 and waveLength < 701:
        factor = 1.0
    elif waveLength >= 701 and waveLength < 781:
        factor = 0.3 + 0.7*(780.0 - waveLength) / (780.0 - 700.0)
 
    gamma = 0.80
    R = (red   * factor)**gamma if red > 0 else 0
    G = (green * factor)**gamma if green > 0 else 0
    B = (blue  * factor)**gamma if blue > 0 else 0
    
    return R, G, B
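
A quick sanity check that the function behaves as expected at a few well-known wavelengths:

# 450 nm should be blue, 550 nm green, 580 nm yellow, 650 nm red
for wl in (450, 550, 580, 650):
    print(wl, get_color(wl))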

The exact dependence of the RGB values on wavelength within the visible range (380-780 nm) looks like this:
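
It is easy to regenerate this plot from get_color; here is a minimal sketch, assuming matplotlib is available (any plotting library will do):

import numpy
import matplotlib.pyplot as plt

waves = numpy.arange(380, 781)
rgb = numpy.array([get_color(w) for w in waves])
for i, (label, style) in enumerate((("red", "r"), ("green", "g"), ("blue", "b"))):
    plt.plot(waves, rgb[:, i], style, label=label)
plt.xlabel("wavelength, nm")
plt.ylabel("channel value")
plt.legend()
plt.show()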


But this function is poorly suited for GPU execution because of all the if-statements. If we approximate the channels with polynomials instead, the calculations become much cheaper.

I fit polynomials over the ranges where each color channel actually changes, except for the short-wavelength bump of the red channel: it is too curved to approximate with a simple polynomial.

Calculations for the red channel:

import numpy

# saturate() clamps a value to [0, 1], like the GLSL built-in of the same name
def saturate(x):
    return min(max(x, 0.0), 1.0)

wave_arange = numpy.arange(510, 580, 0.001)
red = list()
for wave in wave_arange:
    red.append(get_color(wave)[0])
red_func = numpy.polynomial.polynomial.Polynomial.fit(wave_arange, red, 2)

red_predict = list()
for wave in wave_arange:
    red_predict.append(saturate(red_func(wave)))

Calculations for the green channel:

wave_arange = numpy.arange(440, 650, 0.001)
green = list()
for wave in wave_arange:
    green.append(get_color(wave)[1])
green_func = numpy.polynomial.polynomial.Polynomial.fit(wave_arange, green, 2)

green_predict = list()
for wave in wave_arange:
    green_predict.append(saturate(green_func(wave)))

Calculations for the blue channel:

wave_arange = numpy.arange(380, 520, 0.001)
blue = list()
for wave in wave_arange:
    blue.append(get_color(wave)[2])
blue_func = numpy.polynomial.polynomial.Polynomial.fit(wave_arange, blue, 2)

blue_predict = list()
for wave in wave_arange:
    blue_predict.append(saturate(blue_func(wave)))
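
To move these fits into the shader we need the raw coefficients. One subtlety: Polynomial.fit works in a scaled internal domain, so each fit has to be converted back to the plain power basis before copying the numbers out:

# Convert each fit to the plain a0 + a1*w + a2*w^2 form and print the
# coefficients, ready to be hard-coded into the shader
for name, func in (("red", red_func), ("green", green_func), ("blue", blue_func)):
    a0, a1, a2 = func.convert().coef
    print("%s: %e + %e * w + %e * w^2" % (name, a0, a1, a2))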

As a result, the interpolated dependence of the RGB values on wavelength within the visible range (380-780 nm) looks like this:

Since the approximation gives too much red at the long-wavelength end, we will work in the 380-700 nm range.

Implementation

You can find the fully implemented algorithm on my Shadertoy account [link].

The implementation consists of two stages:

  1. Build a velocity vector field that imitates the lens curvature;
  2. Calculate the resulting color, using our knowledge of which colors we get from every pixel we sample.

In my example, we assume the lens is simply thicker in the center, so all the visual artifacts show up closer to the lens edge. This is how we do it:

// The aberration grows with distance from the screen center,
// imitating a lens that is thicker in the middle
float distanceStrength = pow(length(uv - 0.5), FALLOFF);
vec2 direction = normalize(uv - 0.5);
vec2 velocity = direction * BLUR * distanceStrength;

// Total sampling offset and the per-sample step
vec2 totalOffset = velocity * STEP_MULTIPLIER;
vec2 offsetDecrement = totalOffset / float(SAMPLE_COUNT);

The total offset and the per-sample step are also calculated at this stage. If the total offset is less than one pixel, we won't sample anything other than the current pixel, so we can do a little optimization here:

// Optimization: skip pixels that won't receive any other pixel's information,
// i.e. if we would only sample the pixel itself, it keeps its original color
// and shows no aberration artifact
bool isNotAberrated = abs(totalOffset.x * iResolution.x) < 1.0 && abs(totalOffset.y * iResolution.y) < 1.0;
if (isNotAberrated || SAMPLE_COUNT < 2)
{
    return texture(iChannel0, uv).rgb;
}

In the second stage, each sample corresponds to a different wavelength: the further a sample lies from the current pixel, the shorter the wavelength we assign to it, modelling dispersion. Knowing the wavelength, we also know what color it contributes. So we walk through all the pixels that affect the current pixel, accumulate their weighted colors, and then normalize the result:

vec3 accumulator = vec3(0);
vec2 offset = vec2(0);
vec3 WeightSum = vec3(0);
vec3 Weight = vec3(0);
vec3 color;
float waveLength;

for (int i = 0; i < SAMPLE_COUNT; i++)
{
    // The further the sample, the shorter the wavelength we assign to it
    waveLength = mix(700.0, 380.0, float(i) / (float(SAMPLE_COUNT) - 1.0));
    Weight.r = GetRedWeight(waveLength);
    Weight.g = GetGreenWeight(waveLength);
    Weight.b = GetBlueWeight(waveLength);

    offset -= offsetDecrement;

    // Accumulate the sampled color, weighted by the RGB decomposition
    // of the current wavelength
    color = texture(iChannel0, uv + offset).rgb;
    accumulator.rgb += color.rgb * Weight.rgb;

    WeightSum.rgb += Weight.rgb;
}

// Normalize, so that an image of uniform color stays unchanged
return accumulator.rgb / WeightSum.rgb;
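
One detail the listing above omits is the GetRedWeight, GetGreenWeight and GetBlueWeight helpers (they live in the full Shadertoy source). Assuming they simply evaluate the fitted quadratics and saturate the result, a Python equivalent built from the fits above would look like this, and it ports to GLSL almost line-for-line:

# Hypothetical reconstruction: evaluate a fitted quadratic and clamp to [0, 1],
# mirroring saturate(func(wave)) from the fitting script
def make_weight(a0, a1, a2):
    return lambda w: saturate(a0 + a1 * w + a2 * w * w)

GetRedWeight   = make_weight(*red_func.convert().coef)
GetGreenWeight = make_weight(*green_func.convert().coef)
GetBlueWeight  = make_weight(*blue_func.convert().coef)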

As a result, we get a more physically accurate chromatic aberration that can use as many texture samples as you want. The common algorithm mentioned earlier, by contrast, can only use a multiple of three samples.

References:

