Dear lazyweb,

I’ve got a few hundred measurements like this from the prototype hardware:

input value: 0.123456,0.234567,0.345678

*gives:*

output value: 0.876543,0.765432,0.654321

Does anyone know how to estimate a 3×3 matrix to convert the output value to the input value? I need to do this to be able to calibrate the open-source calibration hardware that I’ve created. Thanks.

I suggest you have a look at:

http://en.wikipedia.org/wiki/Least_squares

and I’m pretty sure R (probably through sagemath) is able to do such things, though I don’t exactly know how to do that.

I looked at least squares, but I thought (maybe incorrectly) that it would be hard to get a precise matrix unless some kind of culling was done. Otherwise you’re scanning a huge sample space…

If you convert the input to a 4D vector with 0 as the last value, and the output is also a 4D vector with 0 as the last value, then it’s pretty straightforward to solve with a 4×4 matrix.

I bow to your mathematical prowess. Can you define “straightforward”, please? Maths was never a strong subject for me. Thanks.

[ -1  0  0  1 ]   [ 0.123456 ]   [ 0.876544 ]
[  0 -1  0  1 ] x [ 0.234567 ] = [ 0.765433 ]
[  0  0 -1  1 ]   [ 0.345678 ]   [ 0.654322 ]
[  0  0  0  1 ]   [ 1        ]   [ 1        ]

ok this is ugly

let me write it somewhere where it will be easier to read

http://fpaste.org/tCpq/

(tip: mathurl.com for math equations )

Correction: use a 1 instead of a 0 as the last value.
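As a quick numerical check of the 4×4 trick (using the example numbers from the thread, with the corrected 1 in the last slot), a small numpy sketch:

```python
import numpy as np

# Seif's 4x4 matrix: negate each channel and add 1, using the
# homogeneous 1 in the last slot of the vector.
M = np.array([[-1.0, 0.0, 0.0, 1.0],
              [0.0, -1.0, 0.0, 1.0],
              [0.0, 0.0, -1.0, 1.0],
              [0.0, 0.0, 0.0, 1.0]])

inp = np.array([0.123456, 0.234567, 0.345678, 1.0])
out = M @ inp
print(out)  # ~ [0.876544, 0.765433, 0.654322, 1.0]
```

Note that M is its own inverse (M @ M is the identity), so the same matrix maps outputs back to inputs.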

Least squares seems like the way to go. Do you have any constraints on the matrix (e.g. is it a rotation matrix)?

Hm, I would try (1.0, 1.0, 1.0) − input vector…

input plus output simply seems to sum to ~1

I do not know how you can be sure that this is a 3×3 matrix linear transformation…

Anyway, a 3×3 matrix has 9 unknowns, but you only provide 3 pairs of input/output values, so I guess that many, many 3×3 matrices satisfy your values…

Well, too little information. However, assuming the values scale linearly:

inXYZ = [inX, inY, inZ]

outXYZ = [outX, outY, outZ]

[ inX/outX  0         0        ]
[ 0         inY/outY  0        ] * outXYZ = inXYZ
[ 0         0         inZ/outZ ]

Solve it with http://fpaste.org/tCpq/

Classical Ax = b problem -> x = pinv(A)*b

A and b are full matrices (input, 3 x N). pinv is the pseudoinverse, but you can use the classical (A’A)^-1 A’ (A’ is A transposed) to invert the matrix too.

A and b are full matrices (input and output respectively, 3 x N)…

I’d recommend Matlab or Octave.

Isn’t it:

[ R ]   [ 1 2 3 ]   [ X ]
[ G ] . [ 4 5 6 ] = [ Y ]
[ B ]   [ 7 8 9 ]   [ Z ]

?

Well, as you’ve described it, you have an infinite number of solutions. If you can provide 3 different input values for 3 different output values, you just put them as columns in matrices, take the inverse of one of them and multiply. Let O denote the output matrix, I the input matrix and A the final matrix. And suppose you want to do I = AO (strange, but you say you want to calculate input from output…). Then the following holds (\cdot is matrix multiplication, ^{-1} is matrix inverse):

A = I \cdot O^{-1}

If you provide more than 3 i/o combinations, then the matrices O and I won’t be square, so you’d effectively be looking for a least squares approximation formula. For the same problem the following would hold (^T means matrix transpose):

A = I \cdot O^T \cdot (O \cdot O^T)^{-1}

Of course there might be numerically better solutions, but I’m not that familiar with numerical algebra.
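For illustration, here is that closed form in numpy (the matrices are made-up stand-ins, since the real measurements aren’t shown here):

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = rng.random((3, 3))       # hypothetical "true" matrix, demo only
O = rng.random((3, 10))           # outputs as columns (more than 3 of them)
I = A_true @ O                    # corresponding inputs, I = A O

# Least-squares formula from above: A = I O^T (O O^T)^{-1}
A_est = I @ O.T @ np.linalg.inv(O @ O.T)
print(np.allclose(A_est, A_true))  # True
```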

A little background for some people that asked, the matrix will actually be converting DeviceRGB values into XYZ color values.

before converting, add a 1 to the end of the output vector

then multiply it with the matrix

| -1 0 0 1 |

| 0 -1 0 1 |

| 0 0 -1 1 |

| 0 0 0 1 |

1. You grab http://people.gnome.org/~mortenw/fit.gnumeric

2. You plug in your data in A8:F9999

3. You extend columns H-L to match your data.

4. You make sure L5 sums the extended L column.

5. Tools->Solver, Solve.

This fits the least square norm of the XYZ errors.

(This machinery is gross overkill for the task, but easy to understand. No grand math required.)

If you need anyone to test said hardware, I’ve a few screens that need calibrating and my Spyder2 is buried under cables…

Always the opportunist I see. Anyway, I’ve got dibs.

Don’t suppose this magical device could head Cambridge-wards afterwards?

(I’m only half joking.)

If, with input x and output y, you want the matrix A such that Ax = y, then yes, as people have said, least squares is the most direct way to do it. To spell it out…

– Take all of your inputs, x, as column vectors and put them into a matrix X = [ x_1, ... , x_N ]. Take all the outputs and do the same to get Y = [ y_1, ... y_N ].

– Then the least squares estimate of A is A = Y*pinv(X). Use octave or scilab or anything else that knows how to compute a pseudo inverse to do it.

If you need a translational component estimated (i.e., you want the equation to be something like Ax + b = y) then add an extra input dimension which is always set to 1 (i.e., add an entry of 1 to the end of each input x). Then do exactly the same thing above, but this time the A you get back will be 3×4 instead of 3×3. The last column of A is then b.
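A sketch of this recipe in numpy, with synthetic data standing in for the real measurements (A_true and b_true are invented for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)
A_true = rng.random((3, 3))       # hypothetical matrix, demo only
b_true = rng.random(3)            # hypothetical translation, demo only

X = rng.random((3, 50))           # inputs x_k as columns
Y = A_true @ X + b_true[:, None]  # outputs, with a translational component

# Append a row of ones to the inputs; the estimate comes back 3x4 = [A | b].
X1 = np.vstack([X, np.ones(X.shape[1])])
A_aug = Y @ np.linalg.pinv(X1)
A_est, b_est = A_aug[:, :3], A_aug[:, 3]
print(np.allclose(A_est, A_true), np.allclose(b_est, b_true))  # True True
```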

Ehm… for most of the other posters: he has hundreds of measurements, not only that one.

That said, if you want a matrix, you want the best linear approximation of a function R³ -> R³; as other people said, the simplest approach is via least squares (or preferably least median of squares, which is more robust). Google for “least median of squares regression” or similar.

(Reposting with working (Unicode) equivalent signs and with a pre tag for the matrix which needs to be formatted properly.)

So, some answers so far were good (in particular, the terms “least squares” and “pseudoinverse” make sense), but they are incomplete, and in fact some are just complete enough to be dangerous!

So let’s start with the modeling:

Let K be the number of measurements.

We want to have A xk = bk for k = 1, …, K.

This leads to an overdetermined system of linear equations:

xk1 A11 + xk2 A12 + xk3 A13 = bk1

xk1 A21 + xk2 A22 + xk3 A23 = bk2

xk1 A31 + xk2 A32 + xk3 A33 = bk3

(for all k, and where the unknowns are the Aij, NOT the xkj!)

which we obviously cannot satisfy exactly (because it is overdetermined: there are 3K equations for only 9 unknowns, which means it’s overdetermined as soon as K>3).

Let’s rewrite this system in matrix form: X a = b, where the known matrix X is of the form:

x11 x12 x13 0 0 0 0 0 0

0 0 0 x11 x12 x13 0 0 0

0 0 0 0 0 0 x11 x12 x13

x21 x22 x23 0 0 0 0 0 0

0 0 0 x21 x22 x23 0 0 0

0 0 0 0 0 0 x21 x22 x23

…

xK1 xK2 xK3 0 0 0 0 0 0

0 0 0 xK1 xK2 xK3 0 0 0

0 0 0 0 0 0 xK1 xK2 xK3

the unknown vector a is of the form:

A11

A12

A13

A21

A22

A23

A31

A32

A33

and the known vector b is of the form:

b11

b12

b13

b21

b22

b23

…

bK1

bK2

bK3

It shall be noted that this can be split into 3 separate systems of equations for each row of A: X’ a(i) = b(i), i=1, 2, 3 where the known matrix X’ is of the form:

x11 x12 x13

x21 x22 x23

…

xK1 xK2 xK3

the unknown vector a(i) is of the form:

Ai1

Ai2

Ai3

and the known vector b(i) is of the form:

b1i

b2i

…

bKi

It is probably a good idea to do this for efficiency, though the technique is the same for solving X a = b and X’ a(i) = b(i). For simplicity, I will refer to the problem as X a = b in the remainder of this post, but I do recommend doing the split.

Now to the solution:

As explained, X a = b cannot be solved exactly. So we want to compute a least squares solution, which minimizes the norm of (X a – b). This is also where the pseudoinverse comes into play, since a = pinv(X) b. But you do NOT want to compute the pseudoinverse explicitly, both for efficiency and for numerical stability reasons. Instead, there are a few other ways to solve this system:

1. normal equations (NOT RECOMMENDED!)

The least squares solution (for a) of X a = b can be computed as X* X a = X* b where * is the transpose operation (because these are real matrices). The problem is that the matrix X* X we compute here tends to be ill-conditioned (its condition number is the square of the condition number of X, and the larger it is, the worse the condition of the matrix), which means that the algorithm is numerically unstable, which means that the results you get will have very poor accuracy.

2. QR decomposition (faster than but not as numerically stable as 3, still much more stable/accurate than 1)

Let X = Q R be a reduced QR decomposition of X. Then we have:

X* X a = X* b

⇔ (Q R)* (Q R) a = (Q R)* b

⇔ R* Q* Q R a = R* Q* b

Assuming that R* is invertible (which it’d better be anyway, or the system will also be underdetermined in addition to being overdetermined; yes, a system can be both if the equations are not linearly independent), we get:

⇔ Q* Q R a = Q* b

⇔ R a = Q* b because Q* Q is the identity (because Q can be completed to a unitary matrix)

So we compute u = Q* b and we solve the easy triangular system R a = u.
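For the record, here is what the QR route looks like in numpy (X’ and b are synthetic stand-ins for the real measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
Xp = rng.random((100, 3))        # the K x 3 matrix X' from the split
a_true = rng.random(3)           # hypothetical coefficients, demo only
b = Xp @ a_true                  # noise-free right-hand side for the demo

Q, R = np.linalg.qr(Xp)          # reduced QR: Q is 100x3, R is 3x3
u = Q.T @ b                      # u = Q* b
a = np.linalg.solve(R, u)        # solve the small triangular system R a = u
print(np.allclose(a, a_true))    # True
```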

3. SVD (singular value decomposition) (numerically stablest, but not as fast as 2)

Let X = U S V* be a reduced SVD of X. Then we have:

X* X a = X* b

⇔ (U S V*)* (U S V*) a = (U S V*)* b

⇔ V S* U* U S V* a = V S U* b

⇔ V S² V* a = V S U* b (because U is unitary and S is diagonal)

⇔ S² V* a = S U* b (because the unitary matrix V is invertible)

Assuming that S is invertible (which it’d better be anyway, or the system will also be underdetermined in addition to being overdetermined; yes, a system can be both if the equations are not linearly independent), we get:

S V* a = U* b

So we compute c = U* b, we solve the trivial diagonal system S y = c and we compute a = V y. (The inverse of V* is V because V is unitary.)

Note: I think the matrix R resp. S will turn out invertible in practice, but if it is not, then the only method which really works is the SVD, where you can special-case the zeros in the diagonal matrix S. (When computing S y = c, where you would divide by 0, you just use ci=0 instead of ci=yi/0, and it can be proven that you end up with the least squares solution with minimal norm. The harder part is to decide when the diagonal entry is so small that it should be treated as 0.)
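A minimal numpy sketch of the SVD method, including the zero-guard described in the note (the 1e-12 cutoff is an arbitrary choice; deciding the right threshold is the hard part, as said above):

```python
import numpy as np

def lstsq_svd(X, b, cutoff=1e-12):
    """Minimal-norm least squares solution of X a = b via the SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)  # reduced SVD, X = U S V*
    c = U.T @ b                                       # c = U* b
    small = s <= cutoff
    # Solve the diagonal system S y = c, using y_i = 0 where s_i ~ 0.
    y = np.where(small, 0.0, c / np.where(small, 1.0, s))
    return Vt.T @ y                                   # a = V y

rng = np.random.default_rng(3)
X = rng.random((100, 3))
a_true = rng.random(3)
print(np.allclose(lstsq_svd(X, X @ a_true), a_true))  # True
```

The zero-guard also keeps the function well-behaved on rank-deficient input, where plain back-substitution would divide by zero.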

So pre tags have no effect. I hope you’ll still see what the matrix X looks like. And you probably want to work with the smaller X’ anyway.

Why don’t you also list the iterative (gradient) algorithms?

Because typically, iterative algorithms are what you use when you cannot compute a factorization. The matrices here probably aren’t so large that factoring would be a problem.

Oops, the last parenthesis should read:

(When solving S y = c, where you would divide by 0, you just use yi=0 instead of yi=ci/0, and it can be proven that you end up with the least squares solution with minimal norm. The harder part is to decide when the diagonal entry is so small that it should be treated as 0.)

Yes, you’re correct. In my answer I hadn’t considered numerical stability (but I at least pointed out that, numerically speaking, my method wasn’t the best). Looking at your suggestions, I think SVD would be best for this problem (stability is probably more important here than speed).

Well, QR tends to be stable enough (assuming you do the QR factorization properly, i.e. never use Gram-Schmidt for that! Use unitary (Householder) transformations!) if the condition of the matrix isn’t too bad.

The main drawback is that the QR decomposition isn’t as good at revealing the rank of the matrix as the SVD, so you can run into trouble if the matrix does not have full rank, or is close to not having full rank (i.e. is ill-conditioned). (Handling that case also requires extra work in the QR method, because you have to use a pivoting QR factorization, whereas in the SVD method, you just have to zero the entries where you would divide by zero.)

For our problem, this means: If the coefficients ai1, ai2 and ai3 such that yi=ai1 x1+ai2 x2+ai3 x3 (or with a constant term, the coefficients ai1, ai2, ai3 and ci such that yi=ai1 x1+ai2 x2+ai3 x3+ci) are expected to be unique, the QR method should be good enough. If you expect that there might be multiple (infinitely many) solutions, you will have to use the SVD method and check for zeros in S.

No idea if least squares is appropriate for your problem but if you want to use it with python:

install numpy and scipy (in official repos)

import numpy as np
import scipy.linalg

def generate_sample_data(m):
    '''
    X is the 3x3 matrix used to generate the data.
    AX + e = B where e is zero-mean normal.
    '''
    X = np.array([[0.1, 0.3, 0.5],
                  [0.7, 0.6, 0.2],
                  [0.9, 0.0, 0.1]])
    A = np.random.rand(m, 3)
    e = np.random.normal(scale=0.01, size=(m, 3))
    B = np.dot(A, X) + e
    return A, B

def estimate(A, B):
    return scipy.linalg.lstsq(A, B)[0]

A, B = generate_sample_data(10)
#print(A)
#print(B)
Xhat = estimate(A, B)  # this is the estimate of the X matrix
print(Xhat)

There are various ways to get your data into a numpy array, e.g.:

input_data = [[1, 2, 3], [4, 5, 6]]
input_array = np.array(input_data)
print(input_array)

Here is a commented python script that explains how I’d do it.

http://dpaste.com/645515/

And here’s a link that doesn’t expire in six days, but has crappy ads:

– http://pastebin.com/zLL7Q8L1

I’m not sure why people are making this so difficult with least squares, etc., but if the mapping of an output value Y to an input value X is as simple as doing

X = -Y + 1

then you just use the 4×4 matrix given by Seif (let’s call this matrix A) and multiply it by a 4×N matrix, where N is the number of measurements and each column contains the 3 output values with a 1 on the 4th row (let’s call this matrix Y). The result is another 4×N matrix containing the input values on the first 3 rows and a 1 on the 4th row (let’s call this matrix X).

Y is:

[ 0.876543 ... ]

[ 0.765432 ... ]

[ 0.654321 ... ]

[ 1 ... ]

X is then:

[ 0.123457 ... ]

[ 0.234568 ... ]

[ 0.345679 ... ]

[ 1 ... ]

To get X you do: A * Y

Because you and everyone else who gave “easy” solutions didn’t understand the problem at all!

The numbers he gave are only example numbers! What we have in reality is a list of vectors x1, x2, …, xN, another list of vectors y1, y2, …, yN, and we’re looking for a matrix A which should make A xk as close as possible to yk for all k from 1 to N. The xk and yk vectors are the result of measurements, so they are not known in advance, which means you cannot hardcode a matrix.

Ah, ok, the easy solution is probably not the right one then. I’m still interested to see the final matrix though

By the way, one thing that SHOULD be retained from Seif’s and your answers is that there might be a need for a constant term, i.e. y = A x might not be the right formula; y = A x + c might make more sense. To support this, in my X’ a(i) = b(i) formulation, just use this X':

x11 x12 x13 1

x21 x22 x23 1

…

xK1 xK2 xK3 1

and this a(i):

Ai1

Ai2

Ai3

ci

and this unchanged b(i):

b1i

b2i

…

bKi

(i still goes from 1 to 3).
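In code, the constant term is literally just an extra column of ones; a numpy sketch with invented data:

```python
import numpy as np
import scipy.linalg

rng = np.random.default_rng(4)
X = rng.random((200, 3))                       # one measurement per row
a_true = rng.random(3)                         # hypothetical row of A
c_true = 0.05                                  # hypothetical constant term
b = X @ a_true + c_true

X1 = np.hstack([X, np.ones((200, 1))])         # the extra column of ones
coeffs = scipy.linalg.lstsq(X1, b)[0]          # [Ai1, Ai2, Ai3, ci]
print(np.allclose(coeffs, [*a_true, c_true]))  # True
```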

Maybe the following script can do the job for you; no special software needed:

#!/bin/sh

#

# Find the 3×3 matrix for the linear relation between two sets

# of (linear) RGB triplets. Method is described there:

# http://kybele.psych.cornell.edu/~edelman/Course/lightness/node23.html

#

# Program is optimized in order to handle several million pixels

# with absolutely no memory usage.

#

# Data are read from the standard input as a six-column set

#

# Columns 1,2 and 3 are the TARGET datas

# Columns 4,5 and 6 are the INITIAL datas

#

# The final matrix is:

#

# [ R_target ] [ a b c ] [ R_initial ]

# [ G_target ] = [ d e f ] [ G_initial ]

# [ B_target ] [ g h i ] [ B_initial ]

#

awk '
BEGIN {
  print "scale=48;";
  print "am1=0;am2=0;am3=0;am4=0;am5=0;am6=0;am7=0;am8=0;am9=0;";
  print "bm1=0;bm2=0;bm3=0;bm4=0;bm5=0;bm6=0;bm7=0;bm8=0;bm9=0;";
}
{
  print "am1+=" $1 "*" $4 ";am2+=" $1 "*" $5 ";am3+=" $1 "*" $6 ";";
  print "am4+=" $2 "*" $4 ";am5+=" $2 "*" $5 ";am6+=" $2 "*" $6 ";";
  print "am7+=" $3 "*" $4 ";am8+=" $3 "*" $5 ";am9+=" $3 "*" $6 ";";
  print "bm1+=" $4 "*" $4 ";bm2+=" $4 "*" $5 ";bm3+=" $4 "*" $6 ";";
  print "bm5+=" $5 "*" $5 ";bm6+=" $5 "*" $6 ";bm9+=" $6 "*" $6 ";";
}
END {
  print "bm4=bm2;bm7=bm3;bm8=bm6;";
  print "print ((bm9*bm5-bm8*bm6)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am1+(((-bm9*bm4+bm7*bm6)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am2+((bm8*bm4-bm7*bm5)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am3),\"\\n\";";
  print "print ((-bm9*bm2+bm8*bm3)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am1+(((bm9*bm1-bm7*bm3)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am2+((-bm8*bm1+bm7*bm2)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am3),\"\\n\";";
  print "print ((bm6*bm2-bm5*bm3)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am1+(((-bm6*bm1+bm4*bm3)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am2+((bm5*bm1-bm4*bm2)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am3),\"\\n\";";
  print "print ((bm9*bm5-bm8*bm6)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am4+(((-bm9*bm4+bm7*bm6)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am5+((bm8*bm4-bm7*bm5)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am6),\"\\n\";";
  print "print ((-bm9*bm2+bm8*bm3)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am4+(((bm9*bm1-bm7*bm3)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am5+((-bm8*bm1+bm7*bm2)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am6),\"\\n\";";
  print "print ((bm6*bm2-bm5*bm3)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am4+(((-bm6*bm1+bm4*bm3)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am5+((bm5*bm1-bm4*bm2)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am6),\"\\n\";";
  print "print ((bm9*bm5-bm8*bm6)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am7+(((-bm9*bm4+bm7*bm6)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am8+((bm8*bm4-bm7*bm5)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am9),\"\\n\";";
  print "print ((-bm9*bm2+bm8*bm3)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am7+(((bm9*bm1-bm7*bm3)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am8+((-bm8*bm1+bm7*bm2)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am9),\"\\n\";";
  print "print ((bm6*bm2-bm5*bm3)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am7+(((-bm6*bm1+bm4*bm3)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am8+((bm5*bm1-bm4*bm2)/((bm9*bm5-bm8*bm6)*bm1+((-bm9*bm4+bm7*bm6)*bm2+(bm8*bm4-bm7*bm5)*bm3)))*am9),\"\\n\";";
}' | bc

Leaving aside the question of whether awk is a suitable language for this kind of task, this script uses the extremely numerically unstable normal equations method, so I cannot recommend using it.

Your second remark is relevant, but please note that awk is not used at all for any computation here; it merely parses the data. The whole computation is done by bc, which is an arbitrary-precision calculator; the precision is set to 48 (see the beginning of the script), so you get a precision that some other tools won’t reach. For parsing millions of data points (which was the purpose of the script when I wrote it), it happens to be very accurate (and very efficient, since it can be inserted in a pipe to parse data without keeping it in memory).

I’m not sure if you’ve solved this problem already with the help you’ve received, but I got to thinking about it after seeing this post, and thought I’d share a solution I’ve implemented in Octave.

http://emphaticallystatic.org/media/transform.tar.bz2

Given two files containing your sets of points (input.csv and output.csv), the linked script computes the transformation parameters (R: rotation matrix, T: translation vector, c: scaling factor) that relates the points via:

y_i = c*R*x_i + T

The archive I’ve linked to also contains the paper that describes the algorithm I’ve implemented, and can be run from the command line as:

octave transformation.m

One can also generate fake data via octave generate_data.m to test the script.

Is there an offset or just y = A x? If there’s an offset, subtract it from both the xs and ys to get zero-mean points.

In numpy, assuming x and y are 3xn and n > 3:

A = scipy.linalg.lstsq(x.T, y.T)[0].T

This is the least-squares solution to y = A x.
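Spelled out with the shapes mentioned (synthetic x and y for the demo; note the final transpose, since lstsq works row-wise):

```python
import numpy as np
import scipy.linalg

rng = np.random.default_rng(5)
A_true = rng.random((3, 3))      # hypothetical matrix, demo only
x = rng.random((3, 20))          # 3 x n inputs
y = A_true @ x                   # 3 x n outputs

# lstsq solves x.T @ M ~ y.T, i.e. M = A.T, so transpose the result.
A = scipy.linalg.lstsq(x.T, y.T)[0].T
print(np.allclose(A, A_true))    # True
```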

If you have additional constraints on A, like that it is orthogonal, then look into the Procrustes problem. See also: http://en.wikipedia.org/wiki/User:BenFrantzDale/Rotation_estimation

Everyone above is assuming that the relationship between the three inputs and the three outputs is a linear function, and you want the 3×3 matrix that minimizes the error. If so, you don’t need to write code; use Octave (the GPL Matlab clone). If there is a physical reason to expect some other relationship, this might not be valid though.

I don’t think Octave is all that great a solution here, since AIUI this calibration should be done in C code. (It’s nice for experimenting though, and thus also potentially for deciding what algorithm to implement in the C code.) But there are other ready-made libraries which might fit better, e.g. GSL or LAPACK. (I chatted a bit with hughsie on IRC about the options, GSL looks nice, they have a ready-made API which implements the SVD-based method.)

Now if the function is far from being linear, indeed, the linear least squares solution is not going to be satisfactory, and a nonlinear least squares will have to be considered.

(I’d start by trying the linear fit with a constant term. Richard said the hardware is supposed to be manually calibrated such that the constant term will be 0, so if we need a significant constant term, this is a good sign that the linear fit is a poor fit. The R² error coefficient can of course be computed as well, GSL also does that for you.)

I don’t think you need to fit all the vectors. If you have measurements for black and three colours, that should be all you need. Adding more colours will probably just add more errors. Here’s an example where we have subtracted the black, so Xr,Yr, Zr are the XYZ measurements of red, and so on…

/* Display RGB primaries */

Xr = 21.2832;

Yr = 10.4607;

Zr = 0.347948;

Xg = 14.0206;

Yg = 29.792;

Zg = 4.69381;

Xb = 5.6572;

Yb = 2.91228;

Zb = 31.8093;

/* inverse matrix */

det = Xr*(Yg*Zb-Yb*Zg) + Yr*(Zg*Xb-Zb*Xg) + Zr*(Xg*Yb-Xb*Yg);

Rx=(Yg*Zb-Yb*Zg)/det; Ry=(Zg*Xb-Zb*Xg)/det; Rz=(Xg*Yb-Xb*Yg)/det;

Gx=(Yb*Zr-Yr*Zb)/det; Gy=(Zb*Xr-Zr*Xb)/det; Gz=(Xb*Yr-Xr*Yb)/det;

Bx=(Yr*Zg-Yg*Zr)/det; By=(Zr*Xg-Zg*Xr)/det; Bz=(Xr*Yg-Xg*Yr)/det;

/* Convert XYZ to monitor RGB primaries */

R = Rx*X + Ry*Y + Rz*Z;

G = Gx*X + Gy*Y + Gz*Z;

B = Bx*X + By*Y + Bz*Z;

Hope this helps.

Cheers.

Richard Kirk
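The hand-expanded cofactor inverse in the snippet above is equivalent to a plain matrix inversion; for example, in numpy, using the same primary measurements:

```python
import numpy as np

# Columns hold the XYZ measurements of the red, green and blue primaries
# (numbers taken from the example above).
M = np.array([[21.2832, 14.0206, 5.6572],
              [10.4607, 29.792, 2.91228],
              [0.347948, 4.69381, 31.8093]])

# Rows of the inverse are (Rx,Ry,Rz), (Gx,Gy,Gz), (Bx,By,Bz).
Minv = np.linalg.inv(M)

# Sanity check: the XYZ of the red primary should map to pure red.
print(np.allclose(Minv @ M[:, 0], [1.0, 0.0, 0.0]))  # True
```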