I haven't seen this point made anywhere, but in evaluating the new V3.X IAR Embedded Workbench C compiler it seems they let you mix memory models for accessing RAM. This, in turn, leads to much smaller code than I get from Codevision.
You can, for example on the Mega8, set IAR to default to the Tiny memory model but still use RAM above the first 256 bytes with declarations like:
char __near VariableName;
This generates vastly smaller code: the compiler can use 8-bit addressing for most memory operations and falls back to 16-bit addressing only where it's required. It simply creates two data segments, one for Tiny access and one for Small (near) access. On top of that, IAR seems more efficient in other ways as well.
With Codevision, using Tiny, you only have access to 256 bytes of RAM. With my application, which has several multidimensional arrays, IAR generates code that's half the size of Codevision's, and that's with NO IAR optimization enabled and WITH Codevision's size optimization! With the highest level of optimization (which I haven't tested to verify it still runs as intended), IAR produces code roughly ONE QUARTER the size of Codevision's! These are really big differences!
I really love Codevision. It's vastly easier to use, and vastly less expensive, than IAR. But I'm out of flash space in a production design, and that prompted me to try the IAR demo. Short of coding the entire complex routine in assembler (which would make the price of IAR seem cheap), is there any way to force Codevision to use 8-bit indexing with the Small memory model?