I'm having trouble with the following question:
The analysis of algorithms for large integers is not affected by the choice of a measure for the size of operands: the number of computer words needed, or the length of their representation in decimal or binary will do equally well. Show that this remark would in fact be false were we considering exponential time.
Could someone tell me if I'm on the right track with the following answer:
Let’s say a problem’s answer is 5^n. Every time you add 1 to n, the answer grows by a factor of 5. As an example, if n is 8, then 5^8 = 390,625. Represented in binary, the answer is 1011111010111100001. The binary representation needs roughly 3 to 4 times as many digits as the decimal one (in general, about log2(10) ≈ 3.32 times; here it is 19 bits versus 6 decimal digits).
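Here is a quick snippet to sanity-check the arithmetic above (just a sketch; Python is only for illustration, and the base 5 and n = 8 are the values from my example):

```python
from math import log2

# Compare the length of the decimal and binary representations of 5**8.
n = 8
value = 5 ** n                        # 390625
decimal = str(value)                  # '390625' (6 digits)
binary = bin(value)[2:]               # '1011111010111100001' (19 bits)

print(value, binary)
print(len(binary), "bits vs", len(decimal), "decimal digits")
print(len(binary) / len(decimal))     # ~3.17, close to log2(10)
print(log2(10))                       # ~3.3219
```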
I understand that the binary representation is longer than the decimal representation, but only by a constant factor of about log2(10). How do I explain why this constant factor matters once the running time is exponential?
Any help would be appreciated.