
This question is quite simple: what are the differences between an int, a uint8_t, and a uint16_t? I know it has to do with bytes and memory, but can someone clarify this for me a bit?

Things I want to know:

1- How much memory does each one take?

2- When should I use each one?

3- At the end of the day, are they really that different?

Dat Ha

1 Answer


You can decipher most of them yourself.

  • A u prefix means unsigned.
  • The number is the number of bits used. There's 8 bits to the byte.
  • The _t means it's a typedef.

So a uint8_t is an unsigned 8-bit value, which takes 1 byte. A uint16_t is an unsigned 16-bit value, which takes 2 bytes (16 / 8 = 2).

The only fuzzy one is int. That is "a signed integer value at the native size for the compiler". On an 8-bit system like the ATMega chips, that is 16 bits, so 2 bytes. On 32-bit systems, like the ARM-based Due, it's 32 bits, so 4 bytes. Of the three, it is the only one whose size changes.
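If you want to see this for yourself, a minimal sketch along these lines (board and baud rate are just assumptions) prints the size of each type on whatever board you compile for:

```cpp
// Minimal sketch: print the size in bytes of each type on the current board.
void setup() {
  Serial.begin(9600);
  Serial.print("int:      "); Serial.println((unsigned)sizeof(int));      // 2 on ATMega, 4 on Due
  Serial.print("uint8_t:  "); Serial.println((unsigned)sizeof(uint8_t));  // always 1
  Serial.print("uint16_t: "); Serial.println((unsigned)sizeof(uint16_t)); // always 2
}

void loop() {}
```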

Personally, I rarely use int and prefer uint8_t and friends, since a fixed-width type is the same size no matter what architecture you compile for. With int, a program that works fine on a 32-bit ARM can misbehave on an 8-bit ATMega, because int on the 8-bit system can only store a fraction of the range it has on the 32-bit system.
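As a hypothetical example of that trap (the value 40000 is just an assumption chosen to show the wrap-around):

```cpp
// Hypothetical example: 40000 fits in a 32-bit int (Due) but exceeds the
// 32767 maximum of a 16-bit int (ATMega), where it silently wraps around.
int     delayMs         = 40000;  // fine on a Due, becomes -25536 on an Uno
int32_t delayMsPortable = 40000;  // fixed-width type: 40000 on both boards

void setup() {
  Serial.begin(9600);
  Serial.println(delayMs);
  Serial.println(delayMsPortable);
}

void loop() {}
```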

Majenko