Animatronic ATMega16 project goes live (in a big way!)


Hi,

This may well be of more interest to folks in the UK than elsewhere, but if you can receive BBC1 television it might be interesting to watch or record The Jonathan Ross Show at 10:35pm (BST) on Friday night, as it'll be the first public airing of the ATMega16 project I've been working on since last summer (always hoping the sequence isn't left on the editing room floor!).

Now, I know that Americans will probably be familiar with a TV show called "The Apprentice" that features Donald Trump interviewing a bunch of executive hopefuls for a job in his organisation. Well, here in the UK we have an equivalent where Sir Alan Sugar does the same. It just so happens that I've worked for him for the last 22 years, and so I was asked to design the "brains" behind a piece of animatronics related to "The Apprentice" TV show.

I won't give too much more detail right now as I don't want to spoil the surprise (it is kind of FUN though!), but once the show has aired and you've had a chance to see it I'll post back here with more details of exactly how the Mega16 is being used.

Cliff


Sounds great. Congratulations. I'm not in the UK, so looking forward to your follow up.

Harry


Could you also upload some pictures for the non-UK residents? Congratulations!


Sounds like fun to me 8)

Some pictures for us non-UK people would be great!

Congrats

not a rookie anymore, still learning tho


OK, the program just aired and my mad design was shown so all can now be revealed:

www.amstrad.com/amsface
(click the buttons to play the two video samples)

I'm going to post some details (quite a lot, in fact!) about what's in the box tomorrow. (It's 23:30 here in the UK so I'm off up the wooden hill to Bedfordshire now.)

Cliff


Brilliant!


ROFL - I think I need a job at Amstrad :D

Neil


That's very funny, Cliff, I wish I'd seen it. I guess that explains why you were asking about visemes and coarticulation a while ago?

Four legs good, two legs bad, three legs stable.

Last Edited: Sat. Apr 1, 2006 - 01:58 PM

Thanks guys - in fact the REAL clever bit is the mechanical engineering rather than the electronics/software. Inside that head is what looks not unlike a real human skull, with 16 movement motors (well, OK: twelve in the head and four to operate the door/finger).

More details to come soon when my laptop wakes up.

Cliff


(long post warning!!)

DESIGN RATIONALE

The design goal was to make a "lifelike" representation of our Chairman's head, for the minimum possible cost, that would have moving eyes and mouth parts so it could "speak" and make gestures. I set myself an electronic design budget of $5 for everything. As the most famous catchphrase of the show is where Alan points a finger at someone and says "You're fired!", there was also a requirement for a "pointing finger" mechanism.

Now, because the goal was to make an "animatronic" head, I figured we'd need several minutes' worth of reasonable quality sound storage, so I initially looked at masked sound chips as found in a lot of children's toys (anyone familiar with the "Billy the Bass" fish will know what I mean). Chips are available from Winbond and OKI.

But one of the design goals was that new voice clips could be added later. That ruled out masked ROM devices, and "analogue flash" devices couldn't be used either, as there's no "digital" way to add sounds later - particularly important for the factory programming procedure. I didn't want it to take 10 minutes to record 10 minutes' worth of phrases into each unit!

So I figured I'd need an 8-bit controller micro and some flash to store sound samples and I’d handle the encoding/decoding.

I then started looking for suitable 8-bit micros and flash devices to do the job, starting with the "old" chips I'd used many moons ago like the 68xx and Zilog Zxx offerings, but before very long it became obvious that the only real contenders in the "modern" 8-bit world would be PIC, MSP430 or AVR8.

I was concerned about battery power, so at first the MSP430 looked like a great choice, but after visits from the TI and Atmel sales guys it became clear that the AVR was the way to go, if for no other reason than cost.

Also, one of the nice things about Atmel was that they offered a "one stop shop" for both the CPU and the DataFlash chip, and buying the two devices together meant a further small discount.

So I decided on the AVR and an AT45 series flash chip.

At first I wasn't sure how much recorded speech would be required (twenty 10-30 second clips was suggested) so I banked on it being quite a lot and, in order to keep costs down, I figured it would be best to apply some form of compression to the sound. I researched what might be possible and while an AVR that ran at 20MHz might be able to run some more advanced codecs I was concerned about power consumption and I found that an AVR clocked at 8MHz would have more than enough "headroom" for doing ADPCM compression and decompression in real-time.

When the product was initially discussed there was only mention of a few moving parts, so I looked at various AVR devices that might fit the bill and decided that the 168 was a good choice for development, as I could then trade down to the cheaper 88 or possibly even the 48 depending on how small I could get the code. But as our mechanical engineering team got to work they ended up putting 18 motors into the product, and with the other IO uses of the CPU the 28-pin 168/88/48 just didn't look like it would have enough IO. So I finally decided on the 40-pin Mega16, even though it lacks some of the features of the more modern 48/88/168 design. One nice thing was that we already used tens of thousands of the Mega16 in the front panel of a satellite receiver, so supply wouldn't be a problem.

IMPLEMENTATION

Atmel gave me an STK500 + STK501 fitted with a Mega128 and a JTAGICE mkII so my early experiments used these until they could supply some 168 samples (this was before I made the final switch to the 16). In the end I ported code from the 128 to the 168 and then to the 16 but this process was fairly painless.

I looked at various development tools, but as I've spent years programming with arm-gcc, the fact that an avr-gcc existed made it very attractive - I know how much more reliable GCC compilers tend to be compared to a lot of commercial offerings.

The JTAGICE mkII and Atmel's AVR Studio, including the excellent simulator, seemed like really solid tools compared to some systems I've used in the 22 years I've been working at Amstrad.

The first thing I always do with any new micro is get the UART talking as soon as possible – while JTAG is great for debugging I like to get a command console running in the target too, so I can present a menu of internal routines and test each part in isolation from command keys.

Once I got the basic UART debug print routines working, I figured I was going to need a way to get ~100K sound sample files into and out of the AVR (stored in the DataFlash). For data validity and ease of use I chose to transfer data using Xmodem-CRC: it's a low-overhead protocol that offers the right level of data protection, and it's available in any PC terminal program, so anyone using the device later should have the tools to get "files" into the system.
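As an aside for anyone implementing the receiver end: the CRC that Xmodem-CRC appends to each 128-byte packet is the standard CRC-16 with polynomial 0x1021 (initial value 0, no bit reflection, no final XOR). A minimal bit-by-bit version - my own sketch, not code from the project - looks like this:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* CRC-16/XMODEM: poly 0x1021, init 0x0000, no reflection, no final XOR.
   This is the checksum Xmodem-CRC appends to each 128-byte packet. */
uint16_t crc16_xmodem(const uint8_t *data, size_t len)
{
    uint16_t crc = 0;
    while (len--) {
        crc ^= (uint16_t)(*data++) << 8;   /* feed next byte, MSB first */
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}
```

A table-driven version trades 512 bytes of flash for speed, but even the bit-by-bit loop keeps up with 115200 baud comfortably on an 8MHz AVR.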

The AT45 samples Atmel sent me initially were in the TSOP package, which wasn't much good for interfacing to my STK500 (which had empty pads where the SPI DataFlash would go), so I got hold of a ZIF TSOP-DIP converter and soldered connections between that and the DataFlash area on the STK500.

I then worked on routines to save/load data in the AT45 DataFlash. I'd decided at this stage that I'd use the 16Mbit device, giving me 2MB of storage. I'd worked out that an average sound file would be about 100K, so this would allow for about 32 sound clips, which was just about right. Because I wanted to stream data into the flash both from a microphone/ADC circuit and from Xmodem reception, I used the double buffering method offered by the AT45, where data streams into one RAM buffer while the other is being committed to the flash array. This seemed to work well. For playback I again needed "streaming" access, so I just used the AT45's "continuous array read" to pull out the ADPCM samples, which would be decoded and then sent to the PWM and out to a filter/amplifier/speaker circuit.
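The ping-pong idea can be sketched like this (flash array and SPI commit are simulated here, and the function names are mine, not the real driver's - on the real part the commit would be the AT45's "Buffer to Main Memory Page Program" opcode):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of the AT45 double-buffered streaming write:
   bytes stream into one of the two on-chip RAM buffers while the
   other is committed to a flash page. Page size 528 bytes, as on
   the AT45DB161-class parts. */
#define PAGE_SIZE 528
#define NUM_PAGES 8                 /* tiny simulated array, for illustration */

uint8_t  flash_sim[NUM_PAGES][PAGE_SIZE];
uint8_t  buffers[2][PAGE_SIZE];
uint16_t buf_pos = 0;
uint8_t  active  = 0;               /* buffer currently being filled */
uint16_t page    = 0;

static void commit_buffer(uint8_t which, uint16_t p)
{
    memcpy(flash_sim[p], buffers[which], PAGE_SIZE);  /* stands in for the SPI commit */
}

void stream_byte(uint8_t b)
{
    buffers[active][buf_pos++] = b;
    if (buf_pos == PAGE_SIZE) {     /* buffer full: commit it, keep streaming into the other */
        commit_buffer(active, page++);
        active ^= 1;
        buf_pos = 0;
    }
}

/* Drive exactly two pages' worth of bytes through and return the
   first byte that landed in the second flash page. */
int demo_stream(void)
{
    for (uint16_t i = 0; i < 2 * PAGE_SIZE; i++)
        stream_byte((uint8_t)i);
    return flash_sim[1][0];
}
```

The point of the swap is that the (slow) page-program time is hidden behind the filling of the other buffer, so the incoming ADC or Xmodem stream never has to stall.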

Once I was able to Xmodem a "WAV" file to the AVR, store it in the DataFlash and later read it back out, I looked at implementing the PWM that would actually play it. Before our electronic designer had a chance to get a suitable circuit mocked up, I ended up just soldering an 8 ohm speaker directly onto the OC1A pin. Surprisingly, this actually worked! I didn't have ADPCM working at this stage, so I was just using plain PCM samples. I recorded these on a PC at CD frequency (44,100Hz, 16-bit, mono) and then got the sound editor to resample them down to a lower frequency. I planned to run the PWM at 31,025Hz on the AVR, so I down-sampled to a quarter of this - 7,700Hz - and saved them out as headerless 8-bit PCM files. These were Xmodem-ed to the DataFlash and used for playback, and it worked very nicely. With PWM at 31KHz you couldn't hear the underlying PWM frequency, and output was pretty good considering it was just a speaker hanging off an AVR pin.
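For anyone trying the same trick: with the timer in 8-bit fast PWM mode, the compare register just wants an unsigned 8-bit version of each sample, with silence at mid-scale. The mapping from a signed 16-bit sample is one offset and one shift (a sketch under those assumptions - not the project's actual code):

```c
#include <stdint.h>

/* Map a signed 16-bit PCM sample onto an 8-bit PWM compare value
   (0..255, with silence sitting at mid-scale 128). On the real
   hardware the result would be written to OCR1A once per sample
   period from the timer interrupt. */
uint8_t pcm16_to_pwm(int16_t s)
{
    return (uint8_t)(((uint16_t)(s + 32768)) >> 8);
}
```

Followed by an RC low-pass filter (or, as here, just the speaker's own inertia), the PWM duty cycle averages out to the analogue level the sample represents.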

Next I wrote routines to do ADC input on the AVR and store it in the DataFlash, so I could also create sound samples locally on the AVR without needing the PC sound stuff. Again I just wired up a biased electret (in fact an old telephone handset) direct to an AVR pin with no pre-amp or filtering. And again it worked OK-ish - you gotta love AVRs, everything just seems to work straight off!

At this stage I was given one motor to play with, with a single transistor switching its 9V supply. It consists of a motor with a moving arm that is resisted by a spring - there's no servo feedback positioning here - the motor is driven simply by switching it on until it gets to the right place, off, wait a while, on again and so on - in fact what I'd call "poor man's PWM". All I did initially was take the PCM sound samples that were going to the PWM and switch the motor on whenever they were above or below a threshold. Surprisingly, this made for movement in time with the sound.
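That first experiment boils down to a one-line rule: if the sample magnitude exceeds a threshold, energise the jaw motor. A sketch (the threshold value is my guess, not Cliff's):

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical version of the first jaw experiment: drive the motor
   whenever the instantaneous sample magnitude exceeds a threshold,
   so loud passages open the jaw and silence lets the spring close it. */
#define JAW_THRESHOLD 4000    /* assumed value for illustration */

int jaw_motor_state(int16_t sample)
{
    return (abs(sample) > JAW_THRESHOLD) ? 1 : 0;
}
```

A smoothed envelope (a running average of |sample|) would give less twitchy motion, but as described above even the raw per-sample threshold moved convincingly in time with speech.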

Our electronic designer then took the "prototype" design I'd put together on an STK500 and made it into a real circuit while I was away at Christmas. When I came back we had a first prototype PCB. The other thing he did was add another 15 motor drive circuits, so I now had 16 motors I could control. These were for:

#define EYES_UP MOT00
#define EYES_DOWN MOT01
#define EYES_LEFT MOT02
#define EYES_RIGHT MOT03
#define TLIP_OUT MOT04
#define TLIP_IN MOT05
#define BLIP_OUT MOT06
#define BLIP_IN MOT07
#define DOOR_OPEN MOT08
#define DOOR_CLOSE MOT09
#define HEAD_TILT MOT10
#define FOREHEAD MOT11
#define HEAD_LEFT MOT12
#define HEAD_RIGHT MOT13
#define JAW MOT14
#define EYELIDS MOT15

(TLIP = Top Lip, BLIP = Bottom Lip)
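Those 16 lines map naturally onto one bit each of a 16-bit state word (bit 0 = MOT00 up to bit 15 = MOT15), so a frame embedded in the sound stream can carry the whole motor state in just two bytes, split across two 8-bit ports at playback time. A sketch of that packing - the port assignment is illustrative, not taken from the real schematic:

```c
#include <stdint.h>

/* One bit per motor, mirroring the #define list above
   (bit 0 = MOT00 ... bit 15 = MOT15). */
enum {
    EYES_UP = 0, EYES_DOWN, EYES_LEFT, EYES_RIGHT,
    TLIP_OUT, TLIP_IN, BLIP_OUT, BLIP_IN,
    DOOR_OPEN, DOOR_CLOSE, HEAD_TILT, FOREHEAD,
    HEAD_LEFT, HEAD_RIGHT, JAW, EYELIDS
};

uint16_t motor_bit(int motor)       { return (uint16_t)1 << motor; }
uint8_t  motors_low(uint16_t state) { return (uint8_t)(state & 0xFF); } /* -> e.g. PORTA */
uint8_t  motors_high(uint16_t state){ return (uint8_t)(state >> 8); }   /* -> e.g. PORTC */
```

At each motor-frame boundary in the stream, the playback code would just write the two bytes straight to the motor-drive ports.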

That’s quite a lot of possibilities! I realised that the sound samples needed to contain embedded instructions giving the state of all the motors at regular intervals. I also realised that I’d usually want to prepare the sounds on a PC and convert them to ADPCM there, while the ADPCM decoder would run in the PWM section of the AVR playback, during the PWM timer interrupt. The idea being that every so often I’d lift the state of the 16 motors out of the sound stream and just send it off to the driven PORTs to set the motor states.

So I found a suitable ADPCM algorithm and coded it both in a PC program and in the AVR. ADPCM actually works on 16-bit audio samples, so I decided to keep the PC-generated sounds at 16-bit, ADPCM them (which converts each 16-bit sample into a four-bit nibble) and then pack two of these into a byte. Later, on the AVR, I’d recover a byte from the DataFlash, split it into the two nibbles, ADPCM-decode each to get back a 16-bit sample, but then just use the upper 8 bits for the PWM (so I was throwing away some stored info and the ADPCM buys only 2:1 rather than 4:1 compression, but this was fine anyway).
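Cliff doesn't name the exact variant he used; the common one is IMA ADPCM, where each 16-bit sample becomes a 4-bit code against an adaptive step size, and two codes pack into a byte. A sketch of that codec is below. Note a useful property for testing: the encoder reconstructs its predictor exactly the way the decoder does, so the decoded output matches the encoder's internal prediction sample for sample.

```c
#include <stdint.h>
#include <stdlib.h>

/* Standard IMA ADPCM tables. */
static const int8_t idx_tab[16] = {
    -1,-1,-1,-1, 2, 4, 6, 8, -1,-1,-1,-1, 2, 4, 6, 8 };
static const int16_t step_tab[89] = {
        7,    8,    9,   10,   11,   12,   13,   14,   16,   17,
       19,   21,   23,   25,   28,   31,   34,   37,   41,   45,
       50,   55,   60,   66,   73,   80,   88,   97,  107,  118,
      130,  143,  157,  173,  190,  209,  230,  253,  279,  307,
      337,  371,  408,  449,  494,  544,  598,  658,  724,  796,
      876,  963, 1060, 1166, 1282, 1411, 1552, 1707, 1878, 2066,
     2272, 2499, 2749, 3024, 3327, 3660, 4026, 4428, 4871, 5358,
     5894, 6484, 7132, 7845, 8630, 9493,10442,11487,12635,13899,
    15289,16818,18500,20350,22385,24623,27086,29794,32767 };

typedef struct { int predictor; int index; } adpcm_state;

static int clamp16(int v)  { return v > 32767 ? 32767 : (v < -32768 ? -32768 : v); }
static int clamp_idx(int i){ return i > 88 ? 88 : (i < 0 ? 0 : i); }

/* Quantised difference represented by a 4-bit code at the current step. */
static int dequant(int code, int step)
{
    int d = step >> 3;
    if (code & 4) d += step;
    if (code & 2) d += step >> 1;
    if (code & 1) d += step >> 2;
    return (code & 8) ? -d : d;
}

uint8_t adpcm_encode(adpcm_state *s, int16_t sample)
{
    int step = step_tab[s->index];
    int diff = sample - s->predictor;
    int code = 0;
    if (diff < 0) { code = 8; diff = -diff; }
    if (diff >= step)      { code |= 4; diff -= step; }
    if (diff >= step >> 1) { code |= 2; diff -= step >> 1; }
    if (diff >= step >> 2) { code |= 1; }
    s->predictor = clamp16(s->predictor + dequant(code, step)); /* same update as decoder */
    s->index = clamp_idx(s->index + idx_tab[code]);
    return (uint8_t)code;
}

int16_t adpcm_decode(adpcm_state *s, uint8_t code)
{
    int step = step_tab[s->index];
    s->predictor = clamp16(s->predictor + dequant(code, step));
    s->index = clamp_idx(s->index + idx_tab[code]);
    return (int16_t)s->predictor;
}

/* Round-trip a gentle ramp and report the worst tracking error. */
int adpcm_roundtrip_max_err(void)
{
    adpcm_state enc = {0, 0}, dec = {0, 0};
    int max_err = 0;
    for (int i = 0; i < 200; i++) {
        int16_t in  = (int16_t)(i * 16);
        int16_t out = adpcm_decode(&dec, adpcm_encode(&enc, in));
        int err = abs(out - in);
        if (err > max_err) max_err = err;
    }
    return max_err;
}
```

Packing is then trivial: `(code1 << 4) | code2` on the way in, a shift and a mask on the way out, exactly the two-nibbles-per-byte scheme described above.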

I got my (command line based) PC program to the point where it would read a .RAW sound file - just 7700Hz, 16-bit, mono PCM samples - ADPCM-encode it and write out a file that I could send to the AVR for playback once it was Xmodem’d across into the DataFlash.

Unfortunately I found that there wasn’t time, during 1/7700th of a second, to do the entire ADPCM decode of a byte into two 16-bit samples, so I had to split the ADPCM decoder so it would only do half the job (one nibble to one 16-bit sample) each time it was called (two separate routines). As I had four timer interrupts per PWM sample, I recovered the next byte from the SPI DataFlash in the first time period, did ADPCM decode stage 1 in the next, ADPCM decode stage 2 in the third, and finally loaded value 1 into OCR1A. On the next four ticks I could ignore the first three and then load value 2 into OCR1A on the fourth.

Once I had the PC-based ADPCM encode and AVR-based ADPCM decode working, I started to look at how I could edit and embed the motor movements.

Simplistically I could just use the energy in the sound samples to determine when to move the JAW motor, but that left another 15 motors I had no control over. So I decided what was needed was a Windows GUI application that would load a sound file, allow me to dictate the state of all the motors at various points, and finally write out an ADPCM-encoded file with motor position info spread through it at regular intervals.

I determined that setting the motor positions every 1/14th of a second would give me fine enough control over motor movement, without flooding the data with motor moves and getting in the way of the ADPCM/PWM operation.

The Windows program I wrote was possibly the biggest part of the entire job in the end. I wrote it in plain ANSI C even though Windows programs are supposed to be written in this newfangled C++ stuff - but I’m a Luddite and know from the early days of Windows programming (Windows 3.0) how to do it in plain C - so that’s the way I still do it!

First I wrote some stuff to show a graphic representation of the PCM wave on screen, so I had an idea where in the sound sample I was. But just looking at pictures of sound waves isn’t that useful, so I then found out how to play back WAV audio from within a Windows program and added that too. I could then tell where I was in a sound file by playing clips within it. I also added text edit boxes under every 1/14th second of the sample so I could annotate the actual word being spoken at that point.

I then added radio buttons to set the position of the motors, but this was a bit “dry”, as the only way to see the effect was to generate the output file (ADPCM + motor moves), Xmodem it to the AVR and play it to see how it looked. Far too long-winded for serious editing. So I added a graphic display of what the head would look like, using photos taken in every conceivable head, eye and mouth position. I then added code to first blit an image of the entire head and then overlay it with the current eye, head and lip/jaw positions, updated each time you click an editing radio button.

For the mouth there are actually 5 motors involved (top/bottom lip in/out and jaw), so in theory there are 32 possible combinations, but some don’t make sense (top lip both in and out at the same time), which reduces the number of combinations. So I added extra radio buttons to select all the possible “mouth combinations” - “JawTOBI” is Jaw open, Top lip Out, Bottom lip In, for example.
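By way of illustration (my own enumeration, not necessarily how the editor prunes its radio buttons): applying just the "a lip can't be both in and out" rule cuts the 32 raw combinations down to 18, since each lip then has three usable states (in, out, relaxed) and the jaw has two.

```c
/* The 5 mouth motor bits: jaw plus top/bottom lip in/out.
   Bit assignments are arbitrary, for illustration only. */
enum { M_JAW = 1, M_TLIP_OUT = 2, M_TLIP_IN = 4, M_BLIP_OUT = 8, M_BLIP_IN = 16 };

int count_valid_mouth_shapes(void)
{
    int count = 0;
    for (int c = 0; c < 32; c++) {
        if ((c & M_TLIP_OUT) && (c & M_TLIP_IN)) continue; /* top lip conflict */
        if ((c & M_BLIP_OUT) && (c & M_BLIP_IN)) continue; /* bottom lip conflict */
        count++;
    }
    return count;  /* 2 jaw states x 3 top-lip states x 3 bottom-lip states */
}
```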

Within the program I can get it to write out a composite file of ADPCM-compressed audio with the interspersed motor moves. It will also save the motor states and text labels at each 1/14th second into a separate text file, so I can keep the input in two separate files - a .RAW file containing just the PCM sound samples, and a .MOV file containing the associated movements and labels.

What I now do is use a PC sound editor to record the samples with a decent quality microphone attached. I then edit them to remove any noise, amplify quiet sections, FFT-filter to remove high frequency components, and normalise them to get as much sound energy as possible without clipping. I then save a copy in 7700Hz, 8bit, mono format as a .RAW file and load this into my editor.

The first thing I do is work through the sample, playing small parts of it and annotating the text labels onto the sound wave. Next I have buttons to throw in some random eyelid blinks, eye movements and forehead moves (“frowns”).

Then the tricky bit starts - trying to put the right mouth shapes on the words spoken. I’ve looked at ways to do this automatically, where the text labels are used in a text-to-phoneme conversion and then the phonemes (word noises) go through a phoneme-to-viseme (mouth picture) conversion, but this is an extremely complicated science (I’ve read a lot of scientific papers about it though!). One thing that is a problem for automating mouth shapes onto sound/text samples is a thing known as co-articulation. If you say the words “Bed” and “Boat” they both start with a “Buh” sound (phoneme), but if you watch your mouth shape you’ll see that the shape for “Buh” is different depending on which phoneme comes after it.

For the time being my mouth editing therefore consists of me reading the words and playing just a small part of the sound sample over and over and trying to watch how my mouth forms the shapes and select the closest mouth shapes available that the head can show.

To edit the moves for about 20 seconds of sound sample takes about 4 to 6 hours (and I actually ended up doing this through the night earlier this week trying to meet a deadline!!).

At the moment I’ve only got the two phrases shown on the TV show and the website working pretty well, and even then I’ve had two of these heads to work on. I did most of my development on the first prototype, then switched everything over to the second prototype towards the end of last week, when I found that its lip motors and head left/right motors don’t travel anywhere near as far as the ones in my original head. So the current movements are possibly a bit too “subtle”?

So there you have it - the really clever bit of this whole thing is the extraordinary piece of mechanical engineering inside the head itself. When the silicone rubber face mask is removed it looks not unlike a human skull, but with tons of cogs, gears and motors spread all over it. My bit was easy - just designing the initial circuit around an AVR and throwing together a bit of software. In total the AVR source is just 92K of C that compiles to about 11K of AVR code. The Windows program is about 100K of C that compiles to 260K of x86 code (but hey, that’s Windows for you). The Windows program also has another 3.2MB of .BMP files holding pictures of the head in various positions.


Thanks for all the info. You must be tickled; a great project! My satisfaction also comes from making an AVR "do tricks".

I've never done animatronics. One thing surprised me - separate motors for "OUT" and "IN". My thought was that it would be easiest to have one motor per function, and then drive it in one direction or the other.

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.

Last Edited: Sat. Apr 1, 2006 - 07:30 PM

That's not A. Sugar. That's Smiley, so the truth is out!

Keep it simple it will not bite as hard


Nope, I only have 14 movement motors in my head.

Oh, and Cliff, this is the single most brilliant thing I've seen here at AVRFreaks to date. Really great work.

And do you really think this can sell for 39 pounds?

Smiley


smileymicros wrote:
And do you really think this can sell for 39 pounds?

Smiley


Joe,

You'd be surprised how many people in the UK are addicted to "The Apprentice" TV show (I'm one of them - but I guess I have a vested interest ;) ). At the moment a lot of TV reviewers are calling the second series, which is airing right now, "the best thing currently on British TV", and they're probably right.

Apart from this technical board, I read and post on a few more "media" based boards where people actually discuss the programme, and the "fans" there are tickled by the product and seem more than willing to pay for it. You may notice the line about "all profit to Great Ormond Street Hospital". This is the favourite charity of Sir Alan Sugar (personally worth £800m - about $1.4bn). It is one of the largest hospitals in London (and indeed the entire UK) and caters only for sick children (in fact it is Great Ormond Street Hospital for Sick Children). So hopefully this will encourage people to go for it once they realise we're not doing it just to make a profit (well, only for sick kids anyway).

I think we started out hoping it could retail for £29 or even £19, and I kept to my end of the bargain and brought the electronics in under $5, but the thing setting the retail price is actually all the mechanics (including the motors). The silicone rubber face mask, for example, costs something like $3-$4 alone. When you add up the entire BOM plus all the additional costs for packaging, amortising plastics tooling and so on, the fact is there's no option but to sell at £39, and even that doesn't make THAT much money for the charity.

It is a shame we couldn't have done it cheaper though - at £19.99 or £24.99 I think they might fly out the door like hot cakes!

Cliff


theusch wrote:
One thing surprised me - separate motors for "OUT" and "IN". My thought was that it would be easiest to have one motor per function, and then drive it in one direction or the other.

Lee


Lee,

Actually, that was my electronics designer trying to save IO (even on the Mega16, with more IO than the 168 - I'd already put the support for 5 buttons onto a single ADC pin rather than 5 separate PIN signals!). The way the door-open/hand-out and door-close/hand-in sequences work is that I just take the DOOR_OPEN signal high and the door open motor starts to activate. There's then a sensor switch that closes once the door is open, and at that point drive is switched to the hand-out motor until another microswitch activates to show completion of that motion, after which further drive pulses are ignored. The close sequence, on another IO line, is the opposite of this with another two sensor switches and two motors. So four motions are controlled by two IO lines, but as this was done in diode logic, I guess David, who designed the final electronics, figured it would be TOO complicated to do it all with a reversible drive (which would still take two IO lines anyway).
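Restated in software terms (purely for illustration - on the real board this behaviour lives in diode logic, not code), the open sequence is a tiny state machine: request, door motor until the door limit switch closes, hand motor until the second switch closes, then ignore further drive.

```c
/* Hypothetical software model of the diode-logic open sequence. */
typedef enum { SEQ_IDLE, SEQ_DOOR, SEQ_HAND, SEQ_DONE } seq_state;

typedef struct {
    seq_state state;
    int door_motor;   /* 1 = motor driven */
    int hand_motor;
} open_seq;

void seq_tick(open_seq *s, int open_request, int door_switch, int hand_switch)
{
    switch (s->state) {
    case SEQ_IDLE:
        if (open_request) s->state = SEQ_DOOR;   /* DOOR_OPEN line goes high */
        break;
    case SEQ_DOOR:
        if (door_switch) s->state = SEQ_HAND;    /* door fully open: hand's turn */
        break;
    case SEQ_HAND:
        if (hand_switch) s->state = SEQ_DONE;    /* motion complete */
        break;
    case SEQ_DONE:
        break;                                   /* further drive pulses ignored */
    }
    s->door_motor = (s->state == SEQ_DOOR);
    s->hand_motor = (s->state == SEQ_HAND);
}
```

The close sequence is the mirror image with its own two switches and two motors, which is how four motions end up on just two IO lines.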

The amazing thing is that while I've had the first prototype head for about a month and a half, it had no door/hand mechanism, and our mechanical design engineer has been scurrying about trying to get it working. That involved having some gears hand-made, as our tooling in Hong Kong wasn't available just yet, and the handmade gears were only delivered on Monday. So it was late Monday night when we saw the door/hand thing work for the very first time - given its complexity we were totally astonished when it all just worked first time!!!

In fact it's the door/hand thing that really brings a smile to folks' faces, so the effort turned out to be worthwhile in the end!

Cliff


Impressive and funny work. Congratulations.

Guillem.

"Common sense is the least common of the senses" Anonymous.


Oh, one thing I forgot to post was some pictures of the circuit board. This first picture is the top of the single-sided board - we went single-sided to try and keep board costs to a minimum:

The main components you see on top, apart from the labelled Mega16, LM386 and 3V3 regulator, are the main motor drive transistors and all the connectors that run to each motor and sensor switch. For ease of development the Mega16 is in a 40-pin ZIF socket, so I can easily switch out parts and remove them to mount in an STK500.

The other picture is the underside of the board, which is where all the SMT parts live. This includes the mic/speaker preamp and the AT45 DataFlash chip, together with all the switching transistors and Rs and Cs.

Cliff


Great set of posts Cliff - thanks for the time spent writing them. Very informative!

Sounds like a bit of a seat-of-the-pants design and build rather than a formal specify-design-build-test lifecycle - what would AMS have said if the s-o-t-p approach had started showing up problems right near the end... "You're fired!"?

Thought the PCB looked a bit sparsely populated on the top side, until I saw the SMD side. Neat trick, that! What was used to design the PCB? Did you do that bit yourself as well?


MartinM57 wrote:
Sounds like a bit of a seat-of-the-pants design and build rather than a formal specify-design-build-test lifecycle - what would AMS have said if the s-o-t-p approach had started showing up problems right near the end... "You're fired!"?

To a certain extent you are right about the SOTP approach, as there wasn't enough time to do all the background research I would have liked, but I was about 99% certain the thing was going to be "do-able" right from the moment I picked the AVR, as I always had the backstop that I could wind things up to as much as 20MHz if I found I needed more "horsepower" - which was the only risk I thought I had.

MartinM57 wrote:
Thought the PCB was a bit sparsely populated looking at the top side, until I saw the SMD side. Neat trick that! What was used to design the PCB? Did you do that bit yourself as well?

It was my electronic design colleague, David, who did the final board design - I just got it as far as a prototype with "one of everything" on an STK500 before handing it over to him to take it from "Christmas tree" to a production design. I'm not sure which CAD system he used - we have about 20 electrical design engineers and they each have their own favourite systems for layout, so several different ones are used. I'm not sure which one he favours and, sadly, about 3 weeks ago someone drove into the back of his motorbike and broke his leg in about four places, so he's not around to ask right now. Actually, in the "panic" in the last couple of days before the TV show was recorded, myself and our chief mechanical engineer went mob-handed round to his house to try and iron out a last-minute electronic problem we had - the UART output, left floating with no PC connected, was leading to the Mega16 sporadically resetting. I hadn't seen this previously as I ALWAYS ran with a UART lead in place to access my command console!

Cliff


clawson wrote:
the UART output, left just floating with no PC connected, was leading to the Mega16 sporadically resetting - I hadn't seen this previously as I ALWAYS ran with a UART lead in place to access my command console!

Cliff

Hmmm..just like I do in my current Mega16 project!

Was your command console TTL-compatible, or did you have a MAX232 level shifter? If the latter, then I'm very confused. What was the design solution?


MartinM57 wrote:
Was your command console TTL-compatible or did you have a MAX232 level shifter? If the latter, then I'm very confused? What was the design solution?

I couldn't afford a MAX232 on every board we make, so the UART signals, as they leave the connector on this PCB, are just the TxD/RxD lines from the ATMega16 (oh, and with RxD into the Mega also connecting to INT0, as I auto-baud detect and auto-calibrate the OSCCAL of the internal 8MHz oscillator for accurate baud rates from 2400 to 115200 - the user just connects their terminal and hits [enter] a few times, and my code measures the width of the start bit and adjusts UBRR and OSCCAL).
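The arithmetic behind that trick is neat: the start bit of a CR (0x0D) is exactly one bit time wide, which is F_CPU/baud CPU cycles, and the UART's normal-mode divider is UBRR = F_CPU/(16 x baud) - 1, so UBRR drops straight out of the measured width. A sketch of just the calculation (the INT0/timer capture itself is omitted, and this is my reconstruction, not Cliff's code):

```c
#include <stdint.h>

/* Given the width of the received start bit in CPU clock cycles
   (measured by a timer between the falling and rising edge), derive
   the UART's normal-mode UBRR value:
       width  = F_CPU / baud
       UBRR   = F_CPU / (16 * baud) - 1  =  width / 16 - 1        */
uint16_t ubrr_from_start_bit(uint32_t width_cycles)
{
    return (uint16_t)(((width_cycles + 8) / 16) - 1);  /* +8 rounds to nearest */
}
```

At 8MHz this reproduces the familiar datasheet values (UBRR 207 for 2400, 51 for 9600, 3 for 115200); the same measurement can also be compared against the expected width to nudge OSCCAL, which is presumably the calibration half of the trick.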

I then found some £2.80 Nokia USB mobile cables, which are really just an FTDI USB-RS232 converter inline in the cable. On the "Nokia" side these are 3V3 TTL too, so there's no need for level conversion at all - I just run the TxD/RxD into the "RS232" side of this converter and stick it into a USB port.

With the converter cable attached (but NOT necessarily even plugged into a PC) I had no noise problems on the AVR, but with no cable attached to the AVR at all I was getting glitches. This probably just needs a bit of filtering on the AVR board, which my electronics guy will look at when he returns. The short-term fix was to make a loopback plug for the PCB socket, just connecting TxD back into RxD, which solved the problem for the purposes of filming the TV show.

Cliff


I see the term "Medusa" on the main board. Does the chairman know his likeness with a bunch of cables coming out like snake hair is being referred to as Medusa? :shock:


dksmall wrote:
I see the term "Medusa" on the main board. Does the chairman know his likeness with a bunch of cables coming out like snake hair is being referred to as Medusa? :shock:

Yup, "Medusa" was the code name I coined at the beginning of the project - as you may know, Medusa is the character from mythology whose stare turns anyone who sees it to stone, which is a very apt description of Alan!! (Of course, Medusa is actually a female character, but I'm hoping no one noticed that!)

Cliff