1) Why wouldn't the constant be 0.4 ms? That's how long it takes for the compiler to run an algorithm that has an input size of 150. Wouldn't 0.4 ms be multiplied by the input size? (i.e. 0.4 ms * x, 0.4 ms * x^2, etc.) I used i as a variable to stand in for any function of x, whether that function is linear, quadratic, O(n log n), cubic, etc. So i could be x, x^2, x^3, etc.
Why WOULD the constant be 0.4 ms? Did you try carrying out your plan, as I recommended, to see if it works?
If you were right, then the formula for part (a) would be f(x) = 0.4x, right? But then f(150) = 0.4*150 = 60 ms, when you are told it is 0.4 ms!
The constant (in the linear case) is the "rate" in ms/input: 0.4/150 = 0.002667. You're acting as if you were told the rate is 0.4 ms per input. No, that's the time for an input size of 150, not 1.
The right way to find the constant, as HallsofIvy showed, is to write the equation using the unknown constant, plug in the given numbers, and solve for the constant:
f(x) = cx
f(150) = 150c
150c = 0.4
c = 0.4/150 = 0.002667
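If it helps to see that step carried out, here is a small Python sketch (my own illustration, not part of your problem): it solves 0.4 = c * g(150) for c under several assumed growth rates, and then uses each c to predict the time for an input of 1000. The particular growth rates and the prediction size are my choices.

import math

def constant_for(growth, n_known=150, t_known=0.4):
    """Solve t_known = c * growth(n_known) for c."""
    return t_known / growth(n_known)

growth_rates = {
    "linear    f(x) = c*x":          lambda x: x,
    "n log n   f(x) = c*x*log2(x)":  lambda x: x * math.log2(x),
    "quadratic f(x) = c*x^2":        lambda x: x ** 2,
    "cubic     f(x) = c*x^3":        lambda x: x ** 3,
}

for name, g in growth_rates.items():
    c = constant_for(g)
    print(f"{name}:  c = {c:.6g},  predicted f(1000) = {c * g(1000):.4g} ms")

The linear case prints c = 0.00266667, which is exactly the 0.4/150 computed above; the other cases just repeat the same solve-and-predict step with a different assumed form.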
As for what i means, you said explicitly "i represents the input size". You can't define a variable and then ignore what you defined it as! And then you didn't define what you meant by x, which is, presumably, "x represents the input size".
2) What do you mean Big-O deals with asymptotic behavior? I know that big-Oh notation is meant to describe how a function grows by comparing it to another function. In terms of Big-O measuring asymptotic behavior, are you referring to the fact that for a big-Oh relationship to be true you just have to find one pair of constants C and k that make the relationship always true? That you just need to find one point beyond which the relationship will continue to be true forever (whether that be the minimum y-value or a higher y-value)?
Asymptotic behavior is what Big-O is all about, so I hope you were taught that: it is used for comparing how functions behave for very large values of the input. It does not say that the function has exactly a given form; it says that for large inputs you can ignore all terms but the one that dominates. The other terms are "negligible" (that is, they can be neglected, or ignored) only for those large inputs. So, for example, an asymptotically quadratic function, O(n^2), may actually be, say, f(x) = 4x^2 - 56x + 2000 (and may even have other non-polynomial terms tossed in). This function is like cx^2 only for very large values of x. (Try graphing f(x) and 4x^2 to see how they compare.) When x is, say, a million, x^2 will be a million times any of the other terms, so you can ignore them -- we say they are negligible.
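If you'd rather see numbers than a graph, here is a quick Python check of that claim, using the example f(x) above (the choice of sample values of x is mine):

def f(x):
    return 4 * x**2 - 56 * x + 2000

def leading_term(x):
    return 4 * x**2

for x in (10, 100, 1_000, 1_000_000):
    ratio = f(x) / leading_term(x)
    print(f"x = {x:>9}: f(x) / 4x^2 = {ratio:.6f}")
# As x grows, the ratio approaches 1: the lower-order terms stop mattering.

For x = 10 the ratio is 4.6 (the lower-order terms dominate!), but by x = 1,000,000 it is 0.999986.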
Now, in your problem, they are telling you not to include any lower-order terms; apart from (b), it is not talking about Big-O at all, which is good. But a function that is O(n log n) might really be something like 5x log x - 5900x + sqrt(x+4) or whatever. In telling you to assume lower-order terms are negligible, they are saying that the function has the form c x log x. That's good for your purposes, because they want to keep things simple; I just wanted to remind you that they are simplifying the topic for you.
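As a sketch of how that simplification plays out, suppose the form really is c x log x exactly, reuse the f(150) = 0.4 ms data point, and take the logarithm base 2 (the base is my assumption; a different base only rescales c). This is my own illustration of the method, not necessarily the exact setup of part (b):

import math

# Solve 0.4 = c * 150 * log2(150) for c.
c = 0.4 / (150 * math.log2(150))

def predict(x):
    """Predicted runtime in ms for input size x, assuming f(x) = c*x*log2(x)."""
    return c * x * math.log2(x)

print(f"c = {c:.6g}")
print(f"predicted f(300)  = {predict(300):.4g} ms")
print(f"predicted f(1500) = {predict(1500):.4g} ms")

The same solve-for-c-then-predict pattern works for every part once you know which form to assume.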
3) Yeah, the question is pretty vague about what it means by low-order terms. But isn't it, in actuality, possible to know exactly how long the program containing the algorithm runs? In Unix, I can use the time command, which measures how long any given program runs. It reports three metrics: user, sys, and real. real measures the total time it takes for the program to run, but that also includes time spent on other processes running on the operating system. user and sys specifically measure how long the program itself runs, so if I add user and sys, I get a good estimate of how long it took the program to finish. In this context, I am given a concrete amount of time that the algorithm takes to run for a specific input. That means I could figure out a pretty definitive answer by running it in the terminal, right?
Actually, the question is not vague, unless you don't know what "low-order terms" means. It's explicitly telling you that they don't mean merely O(x^2), but actually cx^2 with no other terms.

But, no, it isn't possible in reality to know the exact formula for runtime. You can measure particular values, but you can't get a formula from such data. You can analyze simple algorithms theoretically and find an exact formula in principle, but any real algorithm would be far too complex. That's why we use Big-O!
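To see the difference between measuring and having a formula, here is a rough Python sketch (my own illustration; the sorting workload and the input sizes are arbitrary stand-ins for "some algorithm"):

import random
import time

def measure(n):
    """Time one run of the stand-in algorithm on an input of size n."""
    data = [random.random() for _ in range(n)]
    start = time.perf_counter()
    sorted(data)                      # the "algorithm" being timed
    return time.perf_counter() - start

for n in (10_000, 20_000, 40_000, 80_000):
    t = measure(n)
    print(f"n = {n:>6}: {t * 1000:.2f} ms")
# All this gives you is a handful of (input size, time) data points.
# Turning them into a formula requires assuming a form (linear, n log n, ...)
# first -- which is exactly what the theoretical analysis supplies.

Measurement is also noisy (other processes, caching, and so on), which is another reason a few timed runs can never pin down an exact formula the way the problem's stated assumptions do.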