y=mx+b versus y=ax+b in linear regression

Harry_the_cat

In coordinate geometry, we tend to use y = mx + b to represent a line in gradient-intercept form.

However, when looking at regression lines we tend to use y = ax + b (or some prefer y = a + bx).

Is there any reason, at a high school maths level, that we can't use y = mx + b for regression (other than the fact that calculators use a and b)?

Trying to achieve some consistency in the language.
 
I almost always use y = mx + b when doing regression.

-Dan
 
My understanding is that in many countries y = ax + b (or one of various other options) is used for lines; the choice of y = mx + b, used in America and clearly some other places, is entirely arbitrary (though in this context it makes more sense than it first appears).

But of course it makes no difference at all what letters you use; my guess would be that in regression you are moving in the direction of general polynomials (at least if you are headed toward polynomial regression), and a's and b's, or even a1's and a2's, seem reasonable -- precisely for the sake of consistency.

At the high school level, with only linear regression in view, you may as well use the parameters you're used to there. Who needs consistency with the rest of the world, when you can be consistent within the world you know?
 
In the context of regression analysis, 'a' and 'b' are not the same kind of object as 'm' and 'b' in coordinate geometry. They are regression coefficients estimated from data. The regression line aims to capture the relationship in the data, while 'm' and 'b' describe an exact straight line in the geometric sense.

One is the exact equation of a straight line; the other consists of estimates based on data. I would say that's a good reason for the different choice of variables.

PS: I use the model [imath]y = \beta_0 + \beta_1x + \epsilon[/imath], with fitted values [imath]\hat{y} = \hat{\beta}_0 + \hat{\beta}_1x[/imath]
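To illustrate the point that the regression coefficients are computed from data (rather than given, as with a geometric line), here is a minimal sketch of the least-squares formulas in plain Python. The function name `fit_line` is just for illustration:

```python
def fit_line(xs, ys):
    """Least-squares estimates for the regression line y = a + b*x.

    Unlike the fixed m and b of a line in coordinate geometry,
    a (intercept) and b (slope) here are estimated from the data.
    """
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    # Slope: sum((x - x_bar)(y - y_bar)) / sum((x - x_bar)^2)
    b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
    # Intercept: the line passes through the mean point (x_bar, y_bar)
    a = y_bar - b * x_bar
    return a, b

# Whatever letters you choose, the calculation is the same:
a, b = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
```

This is exactly what a calculator's linear-regression mode computes; only the labels on the answers differ.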
 
[imath]y = fx+g[/imath] is my absolute favorite :)
 