The difference between the two terms is subtle, and explaining it in words alone could easily get confusing, so I would use a visual demonstration, accompanied by a brief oral explanation of each step as I do it.
When truncating, you simply remove all digits after the desired level of accuracy and leave the remaining digits alone. Truncating to two decimal places, for instance, could be demonstrated by writing the number on the board and erasing every digit after the hundredths place.
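If you want to check the blackboard step in code, here is a minimal Python sketch of truncation; the function name `truncate` and the choice to work on the number's decimal-string form are my own illustration, not part of the demonstration.

```python
# Minimal sketch: truncation as "erase everything after the kept places".
# Assumes the number is given as a decimal string containing a point.
def truncate(number: str, places: int) -> str:
    whole, _, frac = number.partition(".")
    return f"{whole}.{frac[:places]}" if places else whole

print(truncate("20.99752", 2))  # -> 20.99 (remaining digits untouched)
```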
When rounding, you need to pay attention to the digit in the decimal place directly following the desired level of accuracy. If this digit is 5 or more, increase the preceding digit by 1, carrying into the digit before that whenever the incremented digit "turns over" to a 0; otherwise, leave the preceding digit alone. As before, I would demonstrate by erasing all the digits after the thousandths place. Then I would circle the new last digit (i.e. the one in the thousandths place), explaining that because 7 is 5 or more, we round up. I would then erase the 8 in the hundredths place and replace it with a 9, finishing up by erasing the 7 in the thousandths place.
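The rounding rule can also be checked programmatically. Below is a sketch using Python's `decimal` module; `ROUND_HALF_UP` matches the "5 or more" rule taught here, whereas the built-in `round()` uses round-half-to-even and can disagree on ties. The helper name `round_half_up` is my own.

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(number: str, places: int) -> Decimal:
    quantum = Decimal(1).scaleb(-places)  # e.g. Decimal("0.01") for places=2
    return Decimal(number).quantize(quantum, rounding=ROUND_HALF_UP)

print(round_half_up("20.99752", 3))  # -> 20.998 (the 5 after the 7 rounds it up)
```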
In a more complicated example, say rounding 20.99752 to 2 decimal places, I would do exactly the same as before, except that after replacing the 9 in the hundredths place with a 0, I would circle this digit too, explaining that because it "rolled over" to a 0, we now need to increment the previous digit as well. I would then do so, circling the new 0 in the tenths place and again pointing to the roll-over rule, to show that we must also increment the last digit before the decimal point. I would finish by erasing the 7 in the thousandths place, explaining that we keep the .00 at the end to show that we did indeed round to two decimal places, even though both of those digits are now 0.
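For completeness, here is a rough digit-by-digit sketch that mirrors the board procedure, carry and all; it assumes a plain decimal string and that the carry never runs off the front of the number (the name `round_on_board` is hypothetical).

```python
def round_on_board(number: str, places: int) -> str:
    digits = list(number)
    point = digits.index(".")
    check = point + places + 1            # the digit we circle on the board
    if digits[check] >= "5":              # "5 or more" -> round up
        i = check - 1
        while True:
            if digits[i] == ".":          # step over the decimal point
                i -= 1
            elif digits[i] == "9":        # rolls over to 0: carry continues left
                digits[i] = "0"
                i -= 1
            else:
                digits[i] = str(int(digits[i]) + 1)
                break
    return "".join(digits[:check])        # erase everything past the kept places

print(round_on_board("20.99752", 2))  # -> 21.00, with the .00 kept
```

The `decimal` version above gives the same answer, `Decimal('21.00')`, and likewise preserves the trailing zeros that show we rounded to two decimal places.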