You pose a great question. Here is my understanding, which may contain errors or be incomplete, and is an opportunity for correction of my own thinking.
The ESP32 is based on a Harvard architecture, which means that there are two buses: one for instructions and one for data. Loosely, the address space below 0x40000000 is accessed over the data bus, while (if I remember correctly) the range 0x40000000 to 0x4FFFFFFF is accessed over the instruction bus.
Now imagine a 64K page of RAM. Unlike other environments, where that page of RAM "just exists" at a fixed location in the address space, the ESP32 has an MMU (Memory Management Unit) that can map a 64K page of physical RAM to distinct address locations. This means we can have RAM that is read over the data bus, or RAM that is read over the instruction bus.
That raises the question: what would you put in RAM that is read over the instruction bus? The answer (if I understand correctly) is instructions, i.e. executable code.
When we compile a C source file we end up with an object file, which is then linked to produce an executable. During compilation, the different parts of the compiled C are placed in different "sections" of the object file. For example, code goes into the ".text" section and initialized data goes into the ".data" section. By flagging a function with IRAM_ATTR we are declaring that its compiled code should be placed in a separate section destined for instruction RAM (in ESP-IDF that section is named ".iram1", if I have the reference right). What this means is that instead of an executable having just ".text" and ".data" sections, there are additional sections. The ESP32 bootloader, upon startup, copies those IRAM sections into real RAM before giving control to your application. That RAM is mapped into the instruction address space (above 0x40000000). This means that control can be passed to this code (as normal) from within your running app, and it will "work" because the code lives in the instruction bus address space.
What remains is the "why": why would you want to do this? Consider the alternative. If the code you want to run is NOT in RAM, then where else could it be? The answer is flash. If it is in flash, then when a request to execute that code is received, the code has to be fetched from there. Flash on the ESP32 is much slower than RAM, so there is an instruction cache that hides some of that cost; however, we can't be assured that when we branch to a piece of code it will be present in the cache, and so we may incur a slow load from flash.
And now we come to the kicker: if the code we want to run is an interrupt service routine (ISR), we invariably want to get in and out of it as quickly as possible. If we had to "wait" within an ISR for a load from flash, things would go horribly wrong (indeed, while the flash cache is disabled, for example during a flash write, code resident in flash cannot execute at all). By flagging a function as existing in RAM we are effectively sacrificing valuable RAM for the knowledge that access to that code will be optimal and constant-time.
Last edited by kolban on Tue Mar 13, 2018 2:52 am, edited 1 time in total.