Black point compensation algorithm proposal

By Marti Maria

This paper is intended only as a quick and dirty procedural summary of the black point compensation algorithm. The method used is a plain scaling in XYZ.

In order to favour more efficient implementations, I have separated the black point compensation process into two steps. The first step takes place at transform generation time and needs to be computed only once. The second step is applied to each PCS value.

First step

The input parameters are

  • Input media white
  • Input black point BPin
  • Output media white
  • Output black point BPout

The output parameters are:

  • Coefficients Ax, Ay, Az
  • Offsets Bx, By, Bz

The referred black points are intent-specific, and can effectively be different from the real (media-related) black point. The goal of this step is to compute three coefficients Ax, Ay and Az plus three offsets Bx, By and Bz. These parameters will be applied later in step two.

  1. Adapt the input black point using the input media white, and the output black point using the output media white. This can be done by converting to relative encoding or by the use of a chromatic adaptation matrix. Differences between both methods are minimal.
  2. Compute the coefficients Ax, Ay and Az, one for each XYZ tristimulus, as

A = (BPout - D50) / (BPin - D50)

  3. Compute the offsets Bx, By and Bz, one for each tristimulus, as

B = - D50 * (BPout - BPin) / (BPin - D50)

Keep these coefficients for later use at transform time.
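For reference, here is a minimal C sketch of this first step. All names in it (XYZ, BPCParams, AdaptToD50, ComputeBPC) are illustrative only, not part of any existing API, and the black points passed in are assumed to be the intent-specific ones discussed above.

    /* A minimal sketch of step 1. Names are illustrative only. */

    typedef struct { double X, Y, Z; } XYZ;

    /* D50, the ICC PCS illuminant */
    static const XYZ D50 = { 0.9642, 1.0000, 0.8249 };

    typedef struct {
        double Ax, Ay, Az;    /* scaling coefficients */
        double Bx, By, Bz;    /* offsets */
    } BPCParams;

    /* Item 1 above: adapt a black point using its media white, here by the
       simple per-channel scaling to relative encoding (a chromatic
       adaptation matrix could be used instead; the difference is minimal). */
    static void AdaptToD50(const XYZ* bp, const XYZ* mediaWhite, XYZ* adapted)
    {
        adapted->X = bp->X * (D50.X / mediaWhite->X);
        adapted->Y = bp->Y * (D50.Y / mediaWhite->Y);
        adapted->Z = bp->Z * (D50.Z / mediaWhite->Z);
    }

    /* Items 2 and 3: A = (BPout - D50) / (BPin - D50) and
       B = -D50 * (BPout - BPin) / (BPin - D50), computed per channel.
       BPin and BPout are the already-adapted black points. */
    static void ComputeBPC(const XYZ* BPin, const XYZ* BPout, BPCParams* p)
    {
        p->Ax = (BPout->X - D50.X) / (BPin->X - D50.X);
        p->Ay = (BPout->Y - D50.Y) / (BPin->Y - D50.Y);
        p->Az = (BPout->Z - D50.Z) / (BPin->Z - D50.Z);

        p->Bx = -D50.X * (BPout->X - BPin->X) / (BPin->X - D50.X);
        p->By = -D50.Y * (BPout->Y - BPin->Y) / (BPin->Y - D50.Y);
        p->Bz = -D50.Z * (BPout->Z - BPin->Z) / (BPin->Z - D50.Z);
    }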

Second step

When evaluating the transform, force the PCS to be XYZ. This implies some extra conversion in the case of Lab-to-Lab transforms, but the CMM must do such a conversion anyway whenever a profile using XYZ as its PCS is involved.
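For the Lab-to-XYZ part of that conversion, the standard CIE formulas with the D50 white of the PCS apply. The sketch below is only a reminder of those formulas, assuming the XYZ type from the step-one sketch; the names Lab, finv and Lab2XYZ are illustrative, not part of any API.

    typedef struct { double L, a, b; } Lab;

    /* Inverse of the CIE f() function used by Lab */
    static double finv(double t)
    {
        const double delta = 6.0 / 29.0;
        return (t > delta) ? t * t * t
                           : 3.0 * delta * delta * (t - 4.0 / 29.0);
    }

    /* Lab (D50) -> XYZ (D50), as a CMM would do before applying BPC */
    static void Lab2XYZ(const Lab* lab, XYZ* xyz)
    {
        const double Xn = 0.9642, Yn = 1.0000, Zn = 0.8249;   /* D50 white */

        double fy = (lab->L + 16.0) / 116.0;
        double fx = fy + lab->a / 500.0;
        double fz = fy - lab->b / 200.0;

        xyz->X = Xn * finv(fx);
        xyz->Y = Yn * finv(fy);
        xyz->Z = Zn * finv(fz);
    }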

The input parameters are

  • Xi, Yi, Zi, the tristimulus values of the PCS before black point compensation

The output parameters are

  • Xo, Yo, Zo, the tristimulus values of the PCS after black point compensation

Now we can apply the coefficients and offsets obtained in step 1:

Xo = Ax * Xi + Bx

Yo = Ay * Yi + By

Zo = Az * Zi + Bz
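A sketch of this second step in C, reusing the XYZ and BPCParams types from the step-one sketch above, could look like this (ApplyBPC is again an illustrative name):

    /* Step 2: apply the precomputed per-channel linear map to each
       XYZ PCS value. */
    static void ApplyBPC(const BPCParams* p, const XYZ* in, XYZ* out)
    {
        out->X = p->Ax * in->X + p->Bx;
        out->Y = p->Ay * in->Y + p->By;
        out->Z = p->Az * in->Z + p->Bz;
    }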

Comments: The big trouble in implementing black point compensation has turned out, in practice, to be not the algorithm itself, but the fragile state of most profiles regarding the black point tag. In most existing profiles the black point tag is either missing or, worse, just wrong.

Also, the effective black point can vary depending on the intent being used. As an example, the tweaks introduced in perceptual intents may lower the effective black point much less than what the real hardware is actually honouring. So the most reliable way seems to be a heuristic that guesses the black point by examining the current LUT. Obviously this requires a level of hacking far bigger than the compensation itself. It remains unclear how to solve this problem.