I'm writing a paper that details the strategies scientists use to choose the functions that map the indications of measurement devices into the state space, i.e. how scientists select a backward calibration function. Such a function takes a representation of the device indication as input and outputs a system state. The strategy is to find the function h that, when y (the device indication, represented numerically) is taken as h's argument, minimizes the strictly positive discrepancy F between h(y) and the true state x of the system. What notation best represents this?
One thought was simply:
Backwards calibration function = min F[h(y)-x]
but this just instructs one to minimize F, and not to find the h() that minimizes F.
I've found this notation:
BCF = arg min_h F[h(y) - x]
I suppose this is a functional. But why is the min applied to h and not to F? The other problem is that none of my readers recognize this notation. Is there a better one?
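To make my intended reading concrete, here is a minimal sketch of what I mean by "arg min over h": search a family of candidate calibration functions and keep the one whose total penalty F is smallest. The candidate family, the calibration pairs, and the squared-error choice of F are all made up for illustration; nothing here is a real calibration procedure.

```python
# Hypothetical candidate family of backward calibration functions h.
candidates = {
    "linear":    lambda y: 2.0 * y + 0.1,
    "quadratic": lambda y: 0.05 * y**2 + 1.9 * y,
    "identity":  lambda y: y,
}

# Made-up calibration data: device indications y_i with known true states x_i.
pairs = [(0.0, 0.1), (1.0, 2.1), (2.0, 4.1), (3.0, 6.1)]

def F(residual):
    """Penalty on the mismatch h(y) - x; squared error, so strictly positive
    whenever h(y) != x."""
    return residual ** 2

def total_loss(h):
    return sum(F(h(y) - x) for y, x in pairs)

# min over the losses would return the best VALUE of F;
# arg min_h returns the h that achieves that value.
best_name = min(candidates, key=lambda name: total_loss(candidates[name]))
best_h = candidates[best_name]
```

The distinction I am after is exactly the one in the last two lines: `min` alone names the smallest penalty, while `arg min_h` names the function h at which that smallest penalty occurs.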