Been developing heavily with the Cortex-M4 for a few years now and have jumped into the M7 over the last year. I've never had to pick an SDRAM for an application, so I have a couple of statements that may need correcting, and some questions I'm hoping someone can shed light on.
- Do I need to add a memory region to the linker script (along with setting attributes for global variables I want to sit in it) for it to work properly? Or do I somehow dynamically allocate the memory in the application during run-time?
- If the SDRAM controller only operates at 1/2 the master clock (i.e. V71: 300 MHz / 2 = 150 MHz), is there any advantage to picking an SDRAM with a higher maximum clock speed? The main reason I ask stems from recent work with memory cards. While the MMC controller couldn't come remotely close to the max read/write speeds of the cards I tested, faster-grade cards still finished reads and writes far sooner than their slower-class counterparts. I'm wondering if I might see a similar phenomenon with SDRAM.
- Forcing aligned memory accesses will improve performance - T/F?
- Despite the fact that DDR is a form of SDRAM, I'm assuming I can't use DDR because that would require sampling data on both edges of the clock, and the SDRAMC doesn't support that...correct?
- Any recommendations for a high-capacity (2 Gbit) SDRAM that doesn't operate at 1.8V typical? I have audio converters in the application that don't support 1.8V, and I really don't want to do level translation if possible... I'd have to use 1.8V for VDDIO to work with most of the SDRAMs I've seen, but that won't fly with my audio ADCs/DACs... Suggestions welcome, but it looks as though I may be dreaming...