megaAVR <> IBM System/360 ?


Hi,
I have been coming across some early-1970s literature on digital filtering and the FFT on the IBM System/360 and would like to know your perspective...

Will you compare and contrast the megaAVR with the IBM System/360? Some suggested categories for discussion:

    computation speed
    register access speed
    general purpose register architecture
    megaAVR hardware multiplier vs. IBM floating point operations
    does size matter?
    would you rather go to the moon w/ the megaAVR or the IBM System/360?

IBM System/360 Wiki:
http://en.wikipedia.org/wiki/Sys...
http://tinyurl.com/pqjof

Thank you,
Patrick


Sonos wrote:

Quote:
would you rather go to the moon w/ the megaAVR or the IBM System/360?

Why would I want to go to the moon with a 600-pound IBM360 and all of the operational problems associated with it when I could go to the moon with a couple of hundred AVRs which, when programmed correctly, would be problem-free by comparison? In addition, you could not have a practical form of redundancy with the IBM360, but every process using an AVR could be redundant.

Was this a grade school question? Did I pass the test?

I really don't think any of the questions you asked above are relevant...

Can the IBM360 support 64K of program memory in one square millimeter? Can the IBM360 support 4K of RAM in one square millimeter? Can the IBM360 support USARTs, timers, a WDT, SPI, and TWI? Can the IBM360 support eight 10-bit analog inputs, all in a few square millimeters?

I mean, does size matter (don't be asking your wife that question, you might not like the answer) when you could put 100 AVRs in the same space as an IBM360 box?
Integer multiplication compared to IEEE floating point? What will you be calculating?

The only question that may be relevant is computational speed. Well, whatever the AVR lacks in computational speed could be overcome using multiple AVRs. I mean, hell, look how big an IBM360 is and just consider the same volume filled with AVRs. It doesn't take much thought to realize that the IBM360 is 30 years obsolete for a very good reason.

But if you needed real power, why not an IBM360 box full of ARMs, Intel P4s, 68030s, or similar in an embedded system?

Edit:
And what about the electrical power consumption of the IBM360? Can the IBM360 possibly compete with a box full of AVRs, power-consumption-wise?

You can avoid reality, for a while.  But you can't avoid the consequences of reality! - C.W. Livingston


microcarl wrote:
Sonos wrote:
Quote:
would you rather go to the moon w/ the megaAVR or the IBM System/360?

Why would I want to go to the moon with a 600-pound IBM360 and all of the operational problems associated with it when I could go to the moon with a couple of hundred AVRs which, when programmed correctly, would be problem-free by comparison? In addition, you could not have a practical form of redundancy with the IBM360, but every process using an AVR could be redundant.

NASA used the IBM360 in the mission control rooms to guide the Apollo spacecraft. It was not onboard. I am always impressed by what was accomplished with so little compared to today's technology.

microcarl wrote:
Was this a grade school question? Did I pass the test?

1. IMO, any question is relevant, no matter how silly, stupid, or off-topic it may appear. 2. Yes, if there was any test, you passed it simply by providing your comments, no matter how silly, stupid, or off-topic they may appear. In fact, your comment about redundancy helped me realize the potential benefit of multiple megaAVRs for low-cost digital signal processing, similar to how the early CMOS DSP circuits were assembled. -- thanks

microcarl wrote:

Integer multiplication compared to IEEE floating point? What will you be calculating?

I am seeking to squeeze every ounce of computational ability from the megaAVR series for an open-source digital signal processing library. It appears to me that in order to learn how to do that, I must understand how DSP was performed when the industry made the mainstream switch from analog to digital - there were growing pains. The IBM/360 was the platform where most DSP theory was formalized at the time. Most books on DSP today assume that a 32- or 64-bit µC is the development platform, since that is the de facto standard. So when we talk about optimizing methods for the 8-bit AVR using 32-bit software as a guide, it's like trying to keep up with a cheetah on your bicycle with flat tires, uphill (or at least it is for me).

microcarl wrote:

The only question that may be relevant is computational speed. Well, whatever the AVR lacks in computational speed could be overcome using multiple AVRs. I mean, hell, look how big an IBM360 is and just consider the same volume filled with AVRs. It doesn't take much thought to realize that the IBM360 is 30 years obsolete for a very good reason.

As I see it, speed is not the most important issue when considering an FFT or a digital filter on the megaAVR. Most projects on the AVR platform are not blazingly fast and could wait for offline FFT analysis or offline digital filtering before some task is performed. Realtime, online DSP is best left to dedicated DSP chips for the fancy stuff. However, the errors and constraints of finite register length are unavoidable on any digital system and are probably the most difficult hurdle for digital signal processing on the megaAVR. Handling the errors arising from A/D conversion and arithmetic, constraining signal levels to avoid overflow, and quantizing coefficients seem more important than speed.
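To make the overflow point concrete, here is a minimal sketch of saturating Q7 fixed-point helpers in C. The names are mine, not from any published library; the 8x8 product is the operation the megaAVR's hardware multiplier does in a couple of cycles, and an arithmetic right shift (what avr-gcc generates) is assumed:

    #include <stdint.h>

    /* Multiply two Q7 (1.7 fixed-point) values. The 8x8 -> 16-bit product
       is what the megaAVR's MUL/MULS hardware gives in two cycles. */
    static inline int8_t q7_mul(int8_t a, int8_t b)
    {
        int16_t p = (int16_t)a * (int16_t)b;   /* Q14 intermediate */
        p >>= 7;                               /* back to Q7 */
        if (p >  127) p =  127;                /* saturate: clip like an */
        if (p < -128) p = -128;                /* analog stage, don't wrap */
        return (int8_t)p;
    }

    /* Add two Q7 values with the same saturating behavior. */
    static inline int8_t q7_add(int8_t a, int8_t b)
    {
        int16_t s = (int16_t)a + (int16_t)b;
        if (s >  127) s =  127;
        if (s < -128) s = -128;
        return (int8_t)s;
    }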


Can't say that I disagree with anything you have said.

Thanks for the detailed response. While I was trying to be a bit facetious, I do appreciate how far we have come in the last 30 years. I now try to imagine what we will see in another 30 years, if I still breathe.

You can avoid reality, for a while.  But you can't avoid the consequences of reality! - C.W. Livingston


I "grew up" with mainframes. My motto, now outdated, was "Never trust a computer you can kick over." The updated version might be "Never trust a computer that you can lose on your desk". ;)

Anyway, we all know that modern microcontrollers will have good integer MIPS numbers compared to classic mainframes, be it the /360 or the original VAX or the CDC 6xxx or others. And a modern microprocessor chip set (e.g., a PDA's) will run rings around the old mainframes in almost every aspect.

One area that a microcontroller like the AVR will not approach is the I/O subsystem--the size of those old beasties had a lot to do with the peripheral drivers. The separate channels for (one or more of) disk, tape, and commo are what allowed them to do what they did.

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


theusch wrote:

One area that a microcontroller like the AVR will not approach is the I/O subsystem--the size of those old beasties had a lot to do with the peripheral drivers. The separate channels for (one or more of) disk, tape, and commo are what allowed them to do what they did.

Lee

Am I following you correctly that, regarding I/O and speed, reads/writes are the bottleneck for the AVR? I do know sine/cosine LUTs have been used to decrease computation on the megaAVR, but do they come at the cost of more reads/writes? Maybe proper use of the registers for small LUTs would help improve I/O on the AVR?
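On the LUT cost question: a read from flash is a 3-cycle LPM on the megaAVR, far cheaper than computing a sine. Here is a minimal sketch, assuming avr-gcc and its PROGMEM macros; the 16-entry table and the scaling are my own illustrative choices:

    #include <stdint.h>
    #include <avr/pgmspace.h>

    /* 16-entry quarter wave of sin(), scaled to 0..255 and kept in flash
       (PROGMEM), so it costs zero SRAM and one LPM per lookup. */
    static const uint8_t quarter_sine[16] PROGMEM = {
          0,  25,  50,  74,  98, 120, 142, 162,
        180, 197, 212, 225, 236, 244, 250, 254
    };

    /* Full cycle by symmetry: phase 0..63 spans 0..2*pi. The true peak
       falls between table entries, a usual quirk of quarter-wave tables. */
    int8_t sine_lut(uint8_t phase)
    {
        uint8_t idx = phase & 0x0F;
        if (phase & 0x10) idx = 15 - idx;        /* mirror quadrants 2 and 4 */
        uint8_t v = pgm_read_byte(&quarter_sine[idx]);
        int8_t half = (int8_t)(v >> 1);          /* scale 0..254 -> 0..127 */
        return (phase & 0x20) ? (int8_t)-half : half;
    }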

Here is a similar line of thought on comp.dsp usenet a few weeks ago.

Quote:
steve wrote:
> sonos wrote:

> Can you think of dsp applications suitable for the megaAVR? I am thinking in
> the realm of low-frequency analysis and not audio or video digital signal
> processing.

It depends on whether you need real-time or non-real-time processing, but a lot of
sensors are only good to 8 bits, so I can imagine a number of practical
applications with just 8-bit math (and perhaps 16/24-bit
accumulators). I think the 10-bit A/D in the megaAVR is only accurate
to 8 bits anyway, so I think Atmel correctly matched the processor
power of the AVR with its I/O capability.

As soon as you need 9 bits, the processing power of the megaAVR drops
by an order of magnitude (from 2-cycle to 20-cycle multiplies), storage
requirements double, etc., so try to avoid that if possible.
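To put the quoted advice in code: a minimal sketch (my own function names, plain C) of the "8-bit samples, wider accumulator" pattern, e.g. for an FIR tap sum:

    #include <stdint.h>

    /* Dot product of 8-bit samples and 8-bit coefficients. Each product
       is a cheap 8x8 -> 16-bit multiply; the 32-bit accumulator absorbs
       the growth of the sum, so nothing overflows mid-loop. */
    int32_t dot_q7(const int8_t *x, const int8_t *h, uint8_t n)
    {
        int32_t acc = 0;
        while (n--)
            acc += (int16_t)*x++ * (int16_t)*h++;  /* 8x8 -> 16-bit product */
        return acc;   /* caller rescales/truncates once, at the end */
    }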


What I'm referring to is high-speed buffered I/O, often with multiple separate channels.

So on a Burroughs 3500 there could be in progress, all at the same time, a buffered sector read from disk 1, a buffered sector write to disk 2, (...repeat for the number of disks...), a buffered read from tape 1, (...repeat for the number of tapes...), a number of buffered terminal operations, and maybe a "streaming" lookahead sequential read on a disk pack or two. Each of those I/O controllers could be thought of as a computer in its own right, even if implemented as a state machine in discrete logic (and that is the problem the original PIC was built to solve).

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


OK, thanks.

With all the buffered I/O, then, real-time digital signal processing on the IBM/360 was not the big concern in the 1970s, but rather just finding ways to deal with finite register length? I.e., whoever heard of voice recognition software running on the old mainframes? Was it just a matter of getting the DSP to meet or exceed continuous analog signal processing specifications?

If so, then it makes sense that we can look at the finite-register-length problem in the AVR as equivalent to using inaccurate analog circuit elements when designing analog filters and processing circuits. -- now the 1960s-70s papers are making sense. Priorities were different then.
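The analogy can be put in numbers. A minimal sketch (hypothetical names) of a one-pole lowpass whose ideal coefficient a = 0.95 has to be stored as a Q7 constant:

    #include <stdint.h>

    /* Ideal coefficient a = 0.95 becomes round(0.95 * 128) = 122 in Q7,
       i.e. 122/128 = 0.953125. The realized pole moves, exactly like an
       RC corner built from a loose-tolerance resistor. */
    #define A_Q7 122

    int16_t onepole_q7(int16_t x)
    {
        static int16_t y = 0;
        /* y = a*y + (1 - a)*x, evaluated in Q7 fixed point */
        y = (int16_t)(((int32_t)A_Q7 * y + (int32_t)(128 - A_Q7) * x) >> 7);
        return y;
    }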


I heard long ago that the computers on the Space Shuttle were just highly
integrated versions of the 360, making generous use of thick-film module
construction methods for a more compact size. Any truth at all to the story?

Tom Pappano
Tulsa, Oklahoma


Quote:

then, real-time digital signal processing on the IBM/360 was not the big concern in the 1970s

It just wasn't done at all on that type of machine. It was uncommon to find A/D on any devices, much less at high conversion rates.

Mainframes wouldn't be the place to make comparisons. More direct comparisons would be early PLCs or CNC.

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Programmable Logic Controller. Computer Numerical Control.

Both might have A/D (or at least analog comparators) to measure a real-world signal against a setpoint and make decisions based on the comparison.

Guessing from your previous discussions, the type of DSP you are talking about is doing manipulations on signals like the speech waveform, or maybe other body signals like EEG or ECG? If so, there just wasn't much, if any, of that in 1970, at least in the biomedical engineering department at my uni. It was just at that time that the first microprocessors were becoming readily available, and people got a few in hand and thought, "Gee, I can replace xyz device with this micro." There just weren't the commonplace devices to do the repetitive A/D at several tens of ksps, store that much stuff, and analyze it on the fly.

I reviewed your original post. I feel you are trying to compare a dump truck to a pickup truck. Yes, they both have engines of some HP, so you can compare on that criterion. Both are "vehicles". But if you are doing road construction and hauling gravel, it is virtually useless to put the pickup truck into the comparison matrix. Similarly, if your app is delivering parcels around town, you wouldn't put the dump truck into the matrix.

Back when I was your age, in 1970, there was distinction even among mainframes. Although you probably >>could<< with enough effort, you wouldn't do your data processing for your company--financials and such--on a CDC 6xxx mainframe, but would rather do it on the IBM. And conversely, you'd go away from your IBM to do CPU-intensive numerical analysis, engineering calcs, and similar.

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


Thanks a lot for your time on this thread. I'll look into the PLC/CNC stuff when I can get back to the local university library - they have IEEE proceedings online for free. That's where I found the IBM/360 papers on dsp.


The early '70s is the period I'm drawing on for recollection, and while there certainly were A/D functions then, I don't remember any A/D "chips", or even any A/D cards in a PDP-8/e. There were some functions in our PDP-8/i that I used, but I never tore one apart.

Wikipedia has no info on A/D history. I did find this reference in a paper:

Quote:
In 1976 no monolithic silicon A/D converters existed, and the cost of equivalent industrial modules was outside the range of most video art budgets. As this component was basic to digital video processing, a 6-bit A/D converter was attempted. An A/D converter of 4 bits or less was commonly constructed using strings of high-speed comparators, but resolutions greater than 4 bits were difficult to perfect.

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


An important distinction of the old mainframes and big minicomputers like VAXen was that there was no huge code bloat from abstracting the basics to an excessive extent. Today it's so much so, with OO, that the 99.999% overhead to input/calculate/output anything eats MIPS faster than they can be delivered by Moore's law.


Quote:
Back when I was your age, in 1970, there was distinction even among mainframes. Although you probably >>could<< with enough effort, you wouldn't do your data processing for your company--financials and such--on a CDC 6xxx mainframe, but would rather do it on the IBM. And conversely, you'd go away from your IBM to do CPU-intensive numerical analysis, engineering calcs, and similar.

Well, a lot of that had to do with marketing. The CDC COBOL compiler was good enough that crazy Lionel B. of CDC's internal CAD group did a PCB autorouter and a logic simulator (both in the 1970s) in COBOL. Why, when the FORTRAN compiler would have been better for the task, I don't know. Maybe the database access was better. But IBM's wobbly-precision floating point certainly made numerical applications a nightmare. OTOH, CDC floating point did not have gradual underflow (in those days most folks didn't understand why that was needed), but on balance the FP was more predictable and way, way faster. (I don't think IBM FP has gradual underflow either, but I may be mistaken.)

Having spent a number of years as a CPU designer working on walk-in, refrigerated computers at CDC, Univac, and Amdahl, I can say that a major difference between the mainframes of yore and AVRs is that you don't need a plumber to install the heat sink.

Seriously, mainframes were and still are famous for having lots of I/O bandwidth. And FP on AVRs is strictly roll-your-own. Look at how much logic it takes to do a serious, pipelined FP unit and you'll see why it is not included in hardware.

But consider this: when my part-time student job was as an operator at the campus computation center, the IBM 360/65 had 2 MB of "fast" core and 2 MB of "slow" core - 1 µs and 2 µs cycle times, respectively. Compare that to your current video card. Heck, compare it to your Palm Pilot.

-dave

BTW -- I thought NASA had a Univac 1100/66 in mission control? The largest machine that Univac made at the time was the 1100/64 - four CPUs - but NASA RPQ'd a 6-way. Not sure -- I worked there in the 1100/9x era.

Now.... where did I leave my cane?


Quote:

The CDC COBOL compiler was good enough ...

You can't just go by the language. Since the Burroughs mid-range mainframes were BCD machines (yes, kiddies, you only got binary in a few special modes), the most work went into the COBOL compilers, including the equivalent of today's device drivers. Within the scope of fixed-point PIC 999V99... COMP, numerical operations in COBOL would run rings around the same task in FORTRAN.

Conversely, VAX/VMS had all the work done on the FORTRAN compiler, which had extensions to do data processing operations efficiently.

On other architectures I suspect the reasoning may be the same, but substitute a different language for the "best".

Lee

You can put lipstick on a pig, but it is still a pig.

I've never met a pig I didn't like, as long as you have some salt and pepper.


stevech wrote:
An important distinction of the old mainframes and big minicomputers like VAXen was that there was no huge code bloat from abstracting the basics to an excessive extent. Today it's so much so, with OO, that the 99.999% overhead to input/calculate/output anything eats MIPS faster than they can be delivered by Moore's law.

After some thought about what DSP means on an AVR, it became evident that humble grunt work on the basics might deliver a big payoff; a lot of the applications that have been ported to dedicated DSP chips could probably have been realized on the megaAVR.

Early papers on DSP seem to relish a half-bit change in noise-to-signal ratio. Try to find a paper like that in today's literature!
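For reference, the figure of merit behind that half bit: for a full-scale sine driving an ideal N-bit quantizer, the best-case signal-to-quantization-noise ratio is

    SQNR ≈ 6.02 N + 1.76 dB

so half a bit is roughly 3 dB, i.e. a halving of the quantization noise power. Chasing 3 dB was worth a paper then.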


I know that the first hybrid A/D and D/A converters were starting to show up for industrial use around 1975 or 1976. Burr-Brown was one manufacturer. We were using them when I was working for Fairchild in the analog sections of their "high-speed" memory testers. Back then we called the whole analog process "parametric" testing. I still have a couple of the current-output-style D/As in my parts collection.

I can dig them out when I get home from work and tell you the exact part number.

You can avoid reality, for a while.  But you can't avoid the consequences of reality! - C.W. Livingston


tpappano wrote:
I heard long ago that the computers on the Space Shuttle were just highly integrated versions of the 360, making generous use of thick-film module construction methods for a more compact size. Any truth at all to the story?

No. I worked on the development of the 1st-generation Space Shuttle computers, I/O module, and display system. The Space Shuttle computer's instruction set looked somewhat like the IBM 360's, but not exactly. The I/O architecture was completely different. The computer was built out of 7400-series TTL DIPs. Core memory was used (in the 1st-generation computers). After the Challenger explosion, NASA returned two recovered computers to us in 55-gallon drums for analysis of memory dumps - remember, core memory is non-volatile.

The main shuttle computers are built by the IBM Federal Systems Division in Owego NY. IBM sold the Federal Systems Division to Loral in 1993 and Loral was acquired by Lockheed Martin in 1994.

[edit]
Here's a link to an article on the AP-101, the 1st-generation shuttle computer. The article refers to the 4 Pi computer line. The name was a play on the IBM 360: there are 360 degrees in a circle - our embedded computers were twice as good - there are 4 pi radians in two circles.
[/edit]

Don


Thanks Don!

Tom Pappano
Tulsa, Oklahoma


donblake wrote:
After the Challenger explosion, NASA returned two recovered computers to us in 55-gallon drums for analysis of memory dumps - remember, core memory is non-volatile....

Sigh. I just missed out on that. I would have liked to have owned a machine where power down and power up were just a couple more interrupts (with an extended "service" routine after the first). I don't know if IBM went to that trouble on the shuttle computers.

- John


One of my first programming experiences was on my employer's AN UYK-1 computer, made for the Navy. Size of a man. 8K words of core. 17-bit Navy word size, as I recall.
FORTRAN compiler. Here's how I developed a program:
Punch cards for source code.
Boot the computer using the paper tape reader, a 6-ft.-long tape loop, and switches.
Run a program to copy the cards to a 9-track tape drive.
Boot a different paper tape loop.
Read the FORTRAN compiler from 9-track drive #1. The compiler read source from drive #2.
Put a blank tape on drive #1 to accept the compiled and linked program.
Boot paper tape to run a program that booted the tape drive and ran my program.
Get a printout on a big old chain printer.

Study the printout.

Modify the punched cards, start anew.

Interactive debugger? Not a concept then!
Disk drive? We didn't have no stinkin' disk drives yet.


stevech wrote:
One of my first programming experiences was on my employer's AN UYK-1 computer, made for the Navy. Size of a man. 8K words of core. 17-bit Navy word size, as I recall.
FORTRAN compiler. Here's how I developed a program:
Punch cards for source code.
Boot the computer using the paper tape reader, a 6-ft.-long tape loop, and switches.
Run a program to copy the cards to a 9-track tape drive.
Boot a different paper tape loop.
Read the FORTRAN compiler from 9-track drive #1. The compiler read source from drive #2.
Put a blank tape on drive #1 to accept the compiled and linked program.
Boot paper tape to run a program that booted the tape drive and ran my program.
Get a printout on a big old chain printer.

Study the printout.

Modify the punched cards, start anew.

Interactive debugger? Not a concept then!
Disk drive? We didn't have no stinkin' disk drives yet.

Steve, just think what the noobs would think if they had to do it the way we used to have to do it. We'd have the embedded arena all to ourselves.

You can avoid reality, for a while.  But you can't avoid the consequences of reality! - C.W. Livingston


jfiresto wrote:
Sigh. I just missed out on that. I would have liked to have owned a machine where power down and power up were just a couple more interrupts (with an extended "service" routine after the first). I don't know if IBM went to that trouble on the shuttle computers.

Actually, at the time, core memory was the only choice. The later generation computers all have volatile monolithic memory. I was only involved with the 1st generation computers.

Don


Steve,

My initial programming in school wasn't as bad:

Punch program on cards
Take Cards to computer center and submit program (get a job card)
Wait 1-3 days for job to get processed
Get printout and program (if not lost)

Repeat the above.

This was on an IBM 360/75.

Later on, we got to use ASM on an old PDP-11/20. It had a card reader which sounded like a vacuum cleaner when it read the cards, and a 1st-generation inkjet printer. The sequence was:

Punch program.
Get in waiting line (up to 1 hour)
Load cards in reader
start reader and pray
Hope the machine doesn't crash - if so, enter the boot sequence on the toggle switches, load the DECtape, and press start. Repeat from the beginning.
Wait for error and Core Dump - pray printer doesn't spray you with ink.
Go through core dump and make changes
Get back in line!

I'll take an AVR any day!

Randy


stevech wrote:
One of my first programming experiences was on my employer's AN UYK-1 computer, made for the Navy. Size of a man. 8K words of core. 17-bit Navy word size, as I recall.

Never used an AN UYK-1, but I did program an AN UYK-7 on several occasions. The programming steps were not much different. Assembler, not FORTRAN. Punched cards. 9-track tapes (I think we used 5 simultaneously). No punched paper tape, though.

Worked on development of the AN UYK-7's replacement, the AN UYK-43. This was a competitive contract. After 2 years of development and tons of overtime, we lost. Read Sole of a New Machine just after that program. Boy, was the story similar.

Don


Interesting stuff. Someone asked about 360 vs. AVR comparisons... the 360 had 16 32-bit registers... in general, mainframes seemed to be 32-bit machines ('50s, '60s), 16-bit minicomputers showed up in the '70s, and 8-bit microprocessors arrived in the '70s as well. Interestingly, the 68000 had 16 32-bit registers, and there was a version with different microcode that would run IBM 360 instructions. RISC machines like MIPS, SPARC, and PowerPC all seem to have 32 32-bit registers.

Since the mainframes cost big bucks, the idea was to keep the CPU busy. There wasn't any real-time anything. OS/360 was a big batch system, where the cards were read in and then spooled to tape, the compiles got done later, and the poor students got a printout with the syntax error messages from the compiler. It took a day to punch a FORTRAN program and submit the deck, and you came back the next day to get output. Your account got billed by CPU time used. I guess the 360 CPU was like an airliner... it wasn't earning its money unless it was working all the time.

Imagecraft compiler user


donblake wrote:
Read Sole of a New Machine just after that program. Boy, was the story similar.

That was a great book. Highly recommended to those in the computer field.


Ah! Soul of a New Machine, by Tracy Kidder.


I was glad to see that 8-bit devices made it into this article on globalspec.com: http://tinyurl.com/pvrcu

Quote:
Most digital signal processors are fixed-point devices because in real-world signal processing, extra precision is not required and there is a large speed benefit. Floating-point DSPs are common in scientific and other applications that require precision. Digital signal processors feature specialized instructions, modulo addressing for ring buffers, and a bit-reversed addressing mode for fast Fourier transform (FFT) cross-referencing. Generally, DSPs are dedicated integrated circuits (ICs); however, DSP functionality can also be realized using field programmable gate array (FPGA) chips. DSPs are used in several classes of computer hardware, including sound cards, modems, telephony boards that handle sound and modem functions, and hardware that handles audio and video compression in real time.

Selecting digital signal processors requires an analysis of performance specifications. DSPs operate with a variety of supply voltages and include data buses that range from 8-bit to 256-bit devices. DSPs also vary in terms of clock speed, which is typically expressed in megahertz (MHz) and gigahertz (GHz). Often, integrated on-chip phase-locked loops (PLLs) with clock frequency synthesis capabilities are used to design high-speed internal clocks for data sampling in DSP applications. Measurements of DSP processing power include million instructions per second (MIPS) and million multiply/accumulates per second (MMACS). For floating-point devices, an additional measurement is million floating-point operations per second (MFLOPS). For all DSPs, the operating current, operating temperature, and power dissipation are also important specifications.

Digital signal processors are available with multiple DMA channels and a variety of I/O ports and interfaces. Some devices also feature an external memory interface that determines the amount of memory a chip can handle. Parallel interfaces include universal asynchronous receiver transmitter (UART) and universal synchronous asynchronous receiver transmitter (USART) technology. Serial interfaces include peripheral component interconnect (PCI), universal serial bus (USB), enhanced synchronous serial interface (ESSI), and serial communications interface (SCI). The Joint Test Action Group (JTAG), a standards organization, has developed a test access port (TAP) that allows access to the inner workings of ICs. Inter-IC (I²C) is used to control and monitor applications in communications, computer, and industrial settings.


As to the hardware multiplier on the megaAVR, here is an article addressing the issue of improving your application with DSP on the AVR you already have, without having to redesign on a new platform...

http://www.elecdesign.com/Articl...

Quote:
Even the design decision to use 16-bit precision for 8-bit data values may not be sound. One major characteristic of most DSP algorithms is a loop, along with multiple memory accesses and repeated multiplications and additions. It doesn't take a very large loop before operations on 8-bit values might overflow the 16-bit number representation. The two key DSP-related problems are in the words might overflow and the not very obvious fact that "C doesn't care."
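The trap the article describes is easy to reproduce. A minimal sketch (my own names, an AVR-flavored example with 10-bit ADC readings rather than the article's 8-bit values) of an averaging loop that wraps silently, and the wider-accumulator fix:

    #include <stdint.h>

    /* Summing 10-bit ADC readings into a 16-bit accumulator: with
       full-scale samples (1023) this wraps after only 64 of them,
       and C doesn't care. */
    uint16_t mean_bad(const uint16_t *s, uint16_t n)
    {
        uint16_t acc = 0;
        for (uint16_t i = 0; i < n; i++)
            acc += s[i];              /* silent modular wrap past 65535 */
        return acc / n;
    }

    /* The fix: widen the accumulator, truncate once at the end. */
    uint16_t mean_ok(const uint16_t *s, uint16_t n)
    {
        uint32_t acc = 0;
        for (uint16_t i = 0; i < n; i++)
            acc += s[i];
        return (uint16_t)(acc / n);
    }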


I stumbled across this old news photo, taken in the early '70s. It shows the development model of the Space Shuttle I/O processor (different from the main computer, the GPC). The development model was a wire-wrapped prototype which occupied most of a 6-foot-high 19" rack.

Don


Just think, you could go to the city dump and drag out old PCs for free. Each one would have a cpu many times more powerful than those old giants had.

Those guys would have had a hard time imagining a day when landfills were piled full of computers many times as capable as their million dollar mainframes.

When I'm old, I bet landfills will be full of stuff like cereal boxes with full-motion color video and sound running on the back, flashing advertisements, all connected to the net with their own IP addresses.


A group of students in the Department of Electrical Engineering designed an "ENIAC(TM)-on-a-Chip" under the supervision of Professor J. Van der Spiegel, in collaboration with Dr. F. Ketterer. This was done as part of ENIAC's 50th-anniversary celebration. They integrated the whole ENIAC onto a 7.44 mm by 5.29 mm chip using a 0.5-micrometer CMOS technology.

http://www.ese.upenn.edu/~jan/en...

News flash:
A group of students at the Department of Electrical Engineering have disappeared while working under the supervision of Professor J. Van der Spiegel. It is rumoured that the students were miniaturized by Dr. Spiegel and are now running around inside the ENIAC chip, going mad as they continually reprogram it. 8)


The PLCs I have experience with do have A/D conversion. Also, D/A is used, although the ranges were confined to the 'standard' voltages required in industrial applications.
No one has mentioned the Data General computers yet. I have worked on DG systems that could have many printers, terminals, disk drives, and other communication schemes all running in near real time. I think there were some special boards that would allow A/D and D/A applications. Prior to the PCs, and even prior to the S-100-type computers, the minicomputers were used in engineering-type systems.
After all, they had a bus that products could be designed for.
UPDATE: just read kartman's post on The Soul of a New Machine, which covered the development of a supermini designed by Data General. I concur: if you get a chance, read it. Could probably make a movie (for the real nerds) :-)

I'll believe corporations
are people when Texas executes one.


No one has mentioned MIPS in the comparison of '60s, '70s, and later mainframes, minicomputers, and micros. The VAX 11/780 was a '1 MIPS' machine; it did 1677 dhrystones a second. What did an IBM 360 do? I managed to shoehorn the Dhrystone benchmark into a 14.7456 MHz AVR with an external 32K RAM, and it did 5000 dhrystones per second... about 3 Dhrystone/VAX MIPS...

Imagecraft compiler user


Uhm... are you really trying to compare an ancient supercomputer with a modern processor?

If the point is to prove that modern technology has surpassed the old, we know that already.

But look at it this way: what technologies did they have in those times, and what did they achieve with them? It's like the pyramids... simple tools, cool outcome.

But if ya wanna good computer, look at the Cray-1...

IMHO: for all those who remember/built these systems: you did a fine job...

David "youngster" Gustafik

EDIT:
But what about stylishness? Compare this [photo] to this [photo]: I might sound a bit dumb, but I like the 360 system better (in looks, that is).

There are pointy haired bald people.
Time flies when you have a bad prescaler selected.


Is anybody doing any work?

I'll believe corporations
are people when Texas executes one.


bobgardner wrote:
...What did an IBM 360 do? ...

I can't find any specs! Maybe because it was a 'system'.

Quote:
By 1970, after IBM had announced an upgrade to the 360 line, they were offering compatible computers with a 200:1 range.

The First Computers: History and Architectures
by Rojas and Hashagen
MIT Press


bobgardner wrote:
No-one has mentioned MIPs in the comparison of 60s 70s and up mainframes, minicomputers, and micros. The VAX11780 was a '1 MIP' machine, did 1677 dhrystones a sec. What did an IBM 360 do?
I recall our school's 360/40 was about 0.04 MIPS.

- John


Yikes. I have a 9 MHz HC11 that is 0.25 MIPS.

Imagecraft compiler user


But can the HC11 input EBCDIC?