Anyone know how much baud-rate error is tolerable for a UART that uses the typical 16x oversampling scheme?
Microprocessor #1 has a 16 MHz clock (can't change it).
Microprocessor #2 has a 14.7456 MHz clock (yea!).
Neither is new enough to have a fractional baud rate divider.
At 115200 baud, the closest the 16 MHz part can get with an integer divisor is about 3.5% off (the 14.7456 MHz clock divides evenly, so the difference between the two is that same 3.5%).
For scale, 1/16 of a bit - one sample period at 16x - is about 6.25%.
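For reference, here's how I got those numbers - a quick Python sanity check, assuming the usual integer divisor N = round(f_clk / (16 * baud)) (register names and rounding behavior vary by part):

```python
# Baud error from an integer 16x divisor, for each clock at 115200 baud.
def baud_error(f_clk, baud, oversample=16):
    n = round(f_clk / (oversample * baud))    # integer divisor the UART can use
    actual = f_clk / (oversample * n)         # baud rate actually generated
    return n, actual, (actual - baud) / baud  # signed fractional error

for f in (16_000_000, 14_745_600):
    n, actual, err = baud_error(f, 115200)
    print(f"{f:>10} Hz: divisor {n}, actual {actual:8.0f} baud, error {err:+.2%}")
```

The 16 MHz part lands on divisor 9 (about 111111 baud, roughly -3.5%); 14.7456 MHz divides exactly (divisor 8, 0% error).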
If common UARTs oversample at 16x, it would seem the tolerable error is then driven by... what, exactly? I recall something about 1.5 bit-times around the start bit - perhaps data sampling starts after the nth sample within the start bit. Then, for a stream of bytes, does the error accumulate, or does the UART re-sync to the byte frame at the stop bit?
From here I get lost.
I'm fishing for why microprocessor #2 seems to lose bytes - about one in 500 data-frame bursts, where a burst is about 40 bytes. Bursts have a lot of idle (marking) time between them.
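In case it helps, here's my back-of-envelope attempt at the error budget, assuming the receiver re-syncs on each start-bit edge and samples mid-bit (which I believe is how the 16x scheme works, but that's exactly what I'm asking about):

```python
# If the receiver re-syncs at the start-bit edge and samples mid-bit,
# the last sample of a 10-bit frame (start + 8 data + 1 stop) lands
# 9.5 bit-times after the edge. For that sample to stay inside the
# correct bit, the total tx+rx timing mismatch must be under half a bit:
#   |error| * 9.5 < 0.5
frame_bits = 10
last_sample = frame_bits - 0.5          # 9.5 bit-times after the edge
max_error = 0.5 / last_sample           # combined error budget
print(f"max combined error ~ {max_error:.2%}")
```

That comes out to roughly 5% combined, which the 3.5% divisor error would eat most of - if the re-sync-per-frame assumption is right.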