The port bit symbolic names are defined as bit numbers rather than masks (bit values). As a result, code is littered with bit shifts or various macros:
TIMSK |= 1<<TOIE1;
TIMSK &= ~(1<<TOIE1);
TIMSK |= _BV(TOIE1);
TIMSK &= ~_BV(TOIE1);
if ( UCSRA & (1<<RXC) ) ...
if ( bit_is_set( UCSRA, RXC ) ) ...
rather than the trivial
TIMSK |= TOIE1;
TIMSK &= ~TOIE1;
if ( UCSRA & RXC ) ...
In the rare cases where you need the bit number itself, a helper macro in the libraries could extract it from the mask:
#define BITN(m) (m&1?0 : m&2?1 : m&4?2 : m&8?3 : \
                 m&0x10?4 : m&0x20?5 : m&0x40?6 : m&0x80?7 : -1)

int bits_per_char = UCSRC >> BITN(UCSZ0) & 3;
This puts things the right way around: the mask, which is almost always what you want, is immediately available and concise to write, while the bit number, which is rarely needed, requires a macro.
(moderators, please move to the appropriate section as it wasn't clear where this would go)