I am building some new input boards for a pipe organ keyer.
In testing the existing resistor buffer to the '165, I am finding I do not understand why the analog engineers who designed these interfaces back in the 1970s applied 12 volts (actually 13.5 V from the unloaded supply) to the input pin, rather than using the resistors as a divider.
I looked at several designs and this seems to be the way it is always done.
+12 Volts when key depressed, otherwise open
  |
  |      100K
  *----/\/\/\-----> to 'HC165 input
  |
  >
  <  10K
  >
  <
  |
  - com or negative return
When I measure the voltage at the input to the shift register, it reads close to 12 V. So does the 100K just limit the current? How is it possible to safely put more than 5 volts on the input pin?
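To make my confusion concrete, here is the back-of-envelope arithmetic I ran while testing (my own numbers, using the 13.5 V unloaded supply and the resistor values from the schematic above, not anything from the original boards):

```python
# Comparing the two topologies for the key input circuit.
V = 13.5      # unloaded supply, volts
R1 = 100e3    # 100K series resistor into the 'HC165 input
R2 = 10e3     # 10K resistor to common

# If the resistors were wired as a divider (10K from the input pin to
# common, 100K on top), the input would only see about 1.2 V:
v_divider = V * R2 / (R1 + R2)
print(f"divider output: {v_divider:.2f} V")

# As actually wired, the 10K hangs on the key side of the 100K, so if
# the input itself draws very little current, the drop across the 100K
# is small and the pin sits near the full supply -- which matches what
# I measure. For example, at 1 uA of input current:
i_in = 1e-6
print(f"drop across 100K at 1 uA: {i_in * R1 * 1e3:.0f} mV")
```

So a true divider with these values would land far below a valid logic high on a 5 V part, which may be why they did not wire it that way; but that still leaves my question about the 12 V on the pin.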
I think in the 1970s they were using plain 74C*** chips, since some boards seem to use +5 and -5 volt supplies. But the more recent boards that use this method definitely use the HC variants.
What is the advantage of doing it this way rather than using the resistors as a divider like in some of the examples posted in the archives here?