Computer scientists often use binary (base 2) notation to represent numbers. The following is an example.
. . .binary: 10110
. . .2^4 place: 1
. . .2^3 place: 0
. . .2^2 place: 1
. . .2^1 place: 1
. . .2^0 place: 0
Then:
. . .10110 = (1 x 2^4) + (0 x 2^3) + (1 x 2^2) + (1 x 2^1) + (0 x 2^0)
. . .= 16 + 0 + 4 + 2 + 0
. . .= 22
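The same place-value expansion can be written as a short program. Here is a minimal Python sketch (binary_to_decimal is a name made up for illustration, not from the problem) that multiplies each digit by its power of 2 and sums the results:

def binary_to_decimal(bits: str) -> int:
    # Walk the digits right to left, so the digit at index i
    # falls in the 2^i place.
    total = 0
    for i, bit in enumerate(reversed(bits)):
        total += int(bit) * 2**i  # digit times its place value
    return total

print(binary_to_decimal("10110"))  # 22, matching the example above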
If you were a systems engineer at Hewlett-Packard, how would you write 1101101 in decimal form?
The answer is 109, but I don't know how to get that answer and need some help.
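Following the same method as the 10110 example above, expand 1101101 digit by digit; it has seven digits, so the leftmost 1 sits in the 2^6 place:
. . .1101101 = (1 x 2^6) + (1 x 2^5) + (0 x 2^4) + (1 x 2^3) + (1 x 2^2) + (0 x 2^1) + (1 x 2^0)
. . .= 64 + 32 + 0 + 8 + 4 + 0 + 1
. . .= 109
The Python sketch above agrees: binary_to_decimal("1101101") returns 109.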