Little-Oh, Big-Oh

roody__poo

Okay, so my book has succeeded in confusing the **** out of me with this whole "f=o(g) and f=O(g)" thing. Can someone, anyone out there help me and explain this big and little stuff in plain English?? I fear I may start shredding my book out of spite if I don't figure this out. Many thanks to any and all who can help me!

-G
 
A function f is said to be O(g) if f is bounded above by a constant multiple of g for all sufficiently large inputs. In symbols: f = O(g) if there are constants C > 0 and \(\displaystyle x_0\) such that \(\displaystyle |f(x)| \le C|g(x)|\) whenever \(\displaystyle x \ge x_0\).

For instance, given f(x)=x^2 and g(x)=x^3, we can say that f = O(g). To prove that, all we need to do is produce a constant multiple of g that is eventually greater than f. Let's use two. Since x^2 \(\displaystyle \le\) 2x^3 for all x \(\displaystyle \ge\) 1, f(x) = O(g(x)).
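If it helps to see numbers, here is a quick Python sketch (purely illustrative; the constant C = 2 and the cutoff x = 1 are just the witnesses from the example above) checking the inequality at a few points:

```python
# Illustrative check of the witness pair C = 2, cutoff x = 1 for x^2 = O(x^3):
# the inequality x^2 <= 2*x^3 holds at every sample point x >= 1.
for x in [1, 2, 5, 10, 100]:
    f = x ** 2   # f(x) = x^2
    g = x ** 3   # g(x) = x^3
    assert f <= 2 * g
    print(f"x = {x:>3}: x^2 = {f}, 2*x^3 = {2 * g}")
```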

Also, if f = x^2 and g=x^2, f = O(g), since x^2 \(\displaystyle \le\) 3x^2.

Little-o is a strictly stronger condition. Instead of f staying below some one constant multiple of g, f has to eventually drop below every positive constant multiple of g, no matter how small you make the constant. In my second example, f is not o(g), but is O(g) as shown: x^2 stays below 3x^2, but it will never drop below, say, \(\displaystyle \frac{1}{2}x^2\).


Mathematically, if the limit of f/g (as x grows) is 0, then f = o(g); if the ratio f/g merely stays bounded, say the limit is some finite number, then f = O(g).
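A rough way to see that limit behaviour is to tabulate the ratio for the two examples above. This is only a numeric sketch, not a proof:

```python
# x^2 / x^3 -> 0 as x grows, so x^2 = o(x^3) (and therefore also O(x^3)).
# x^2 / x^2 stays at 1: bounded, so x^2 = O(x^2), but not -> 0, so not o(x^2).
for x in [10, 100, 1000, 10000]:
    print(f"x = {x:>5}: x^2/x^3 = {x**2 / x**3:.6f}   x^2/x^2 = {x**2 / x**2:.1f}")
```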

Also, it is not necessary for that constant multiple of g to be greater than f everywhere. As long as there is a "cutoff" in the domain of f and g such that beyond this value the inequality holds, f can still be said to be o/O(g). You may also be required to find this cutoff, as it gives an idea of what inputs the comparison actually applies to. For instance, if an algorithm with running time f only beats one with running time g once the input size passes 100,000, then the g-algorithm could be more efficient for smaller inputs.
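As a concrete illustration of such a cutoff (a made-up example, not one from the book), take f(x) = 100x and g(x) = x^2: here f is O(g) with constant 1, but f only drops to or below g once x reaches 100. A naive scan like this Python sketch finds that crossover point:

```python
def crossover(f, g, limit=10**6):
    """Return the first integer x with f(x) <= g(x), or None if none is found below limit."""
    for x in range(1, limit):
        if f(x) <= g(x):
            return x
    return None

# Made-up example: f(x) = 100*x is O(x^2), but only dips below x^2 once x >= 100.
print(crossover(lambda x: 100 * x, lambda x: x ** 2))  # -> 100
```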

Get it?
 