The Caveats of System Management Mode

By Robert R. Collins


In previous columns, I've discussed the history and secrets of System Management Mode (SMM). In January 1997, for instance, I presented an overview of SMM and its similarities to, and differences from, the In-Circuit-Emulator (ICE) mode of previous Intel x86 processors. I followed this in March with a discussion of the secrets of the state save map, and the mechanics of the AutoHalt and I/O Restart features. This month, I will discuss the caveats of SMM and the minutiae that you would never glean from reading Intel's documentation.

It is important to understand that some of the details discussed here are highly implementation-specific and were discovered on a Pentium processor. The same caveats may not be present in future Intel processors, or even in future versions of the Pentium processor. They are discussed in this column to give you a better understanding of how the Pentium processor works, and as warning signs for your own SMM development. Some of these techniques should never be used in production code.

Triggering your own SMI#

Chipsets give you the option of triggering an SMI#, but I prefer not to use any chipset-specific code for my testing purposes. Often, I need to assert an SMI# simultaneously with other microprocessor signals (like NMI). Therefore, I need greater flexibility than the chipset provides in asserting these signals.

The method I've adopted involves a stack of CPU sockets, a logic analyzer, and a multi-channel pulse generator. I remove the SMI# pin (and any other necessary pins, like NMI) from the bottom microprocessor socket - a blank socket that plugs directly into the CPU socket of the motherboard. Removing these pins from the blank socket is necessary to ensure that the chipset isn't driving any signals with opposite polarity to my input signals; this eliminates the possibility of damaging my chipset and motherboard. Between the next two sockets, I sandwich a piece of micro-wire that runs into the hole of the SMI# pin (and any other necessary pins). The other end of the micro-wire is attached to the output(s) of the pulse generator. The next highest socket is connected to the logic analyzer. The Pentium processor sits in the logic analyzer pod socket.

The logic analyzer is able to recognize a bus event, such as an I/O output, and send an output trigger to another device - in this case, the pulse generator. I program the logic analyzer to recognize an output to port DEA0, then to pulse its output signal. The pulse generator receives this input, and immediately pulses its output(s). I use a pulse generator (instead of the logic analyzer output) to trigger my signals because it gives me much greater flexibility. For example, the SMI# signal is active low, and the NMI signal is active high. When I want to assert these signals simultaneously, the outputs of the pulse generator can be programmed to either polarity. A secondary reason for using a pulse generator is to control the duration of the output pulse. Sending very short pulses is beneficial in verifying the input latches of these microprocessor signals. In any event, this technique gives me precise control over the timing and duration of my own microprocessor input signals.
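
For completeness, here is a minimal sketch of the software side of this setup. The port number comes from the description above; any OUT cycle to it will do, because only the bus event matters, not the data value.

mov     dx,0DEA0h                    ; I/O port the logic analyzer is programmed to watch
mov     al,0                         ; data value is irrelevant; only the bus cycle matters
out     dx,al                        ; this output cycle fires the trigger chain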

Documentation Errors

The Pentium manuals did a reasonably good job of discussing SMM. Even so, there are many ambiguities and minor errors. Chapter 20 of the Pentium Processor Family Developer's Manual, Volume 3: Architecture and Programming Manual (Intel part number 241430) discusses SMM. Table 20-1 presents the state save map and shows which fields are considered "writeable." Many of the general-purpose registers (such as EAX, ECX, etc.) are listed as writeable, while the system registers (such as CR0, DS, IDT Base, etc.) are not. We are told that "modifying these registers will result in unpredictable results." This statement is far from true. In fact, if you know what you're doing when writing to these locations, the results are 100 percent predictable. Even so, you must take care when modifying some of these locations, especially the segment registers. But any competent programmer who understands the SMM architecture, knows all of the fields of the state save map (even the undocumented fields I discussed in my last column), and knows the ramifications of changing these locations may modify them without fear of unpredictable results.

Volume 3 also mentions that CR1 and CR2 are stored in the reserved locations of the state save map. None of the testing I performed ever demonstrated that these registers are stored anywhere in the state save map. CR1 doesn't even exist in the x86 architecture, though it might exist in the microprocessor's internal register file of control registers. CR2 is defined as holding the most recent page-fault address. Even though I wrote a program that generated a page fault prior to entering SMM, I was never able to observe the contents of CR2 in the state save map.

Table 20-3 shows the initial CPU core register settings upon entrance to SMM. The table mentions that the initial values of some registers are unpredictable. My experiments showed that these registers all have the same values as the interrupted program. To rely on this behavior would be foolhardy. Instead, if you want to rely on the values of any registers, including segment registers, you should read these values out of the state save map upon entrance to SMM. Table 20-3 omits mentioning the initial values of the interrupt descriptor table base register (IDTR), the GDTR, or any of the other system registers. These registers retain their original contents from the interrupted program. Normally, this wouldn't be a problem because the SMM handler doesn't use these architectural features. However, it could be a very big problem if the SMM handler needed to issue or service interrupts.
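
As a hedged illustration, an SMM handler that needs the interrupted program's values should pull them from the state save map rather than trust whatever happens to be in the registers on entry. The SMRAM field names below follow the overlay convention used in the listings at the end of this column; they are assumptions for illustration, not Intel-defined symbols.

mov     eax,SMRAM.SMM_CR0            ; interrupted program's CR0
mov     ebx,SMRAM.SMM_EFLAGS         ; interrupted program's EFLAGS
mov     ecx,SMRAM.SMM_EAX            ; interrupted program's EAX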

Section 20.1.4.4 mentions that NMI interrupts are blocked upon entrance to SMM, but "may be enabled through software by invoking a dummy interrupt and vectoring to an Interrupt Service Routine." As I mentioned in March, this statement is not correct. The easiest way to unmask NMI recognition is to set up a hardware interrupt handler (like a timer tick) and wait for the hardware interrupt to occur. After it occurs, NMI will be recognized within SMM. (The January 1997 edition of the Pentium Processor Specification Update, Intel part number 242480, corrects the Pentium documentation in Specification Clarification #49.)
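
A hedged sketch of this technique follows. It assumes the timer tick (IRQ0, INT 8) is the hardware interrupt used, that the interrupt vector table the processor will use sits at linear address 0 (either because the interrupted program was in real mode or because the handler reloaded IDTR accordingly), and that CS:DummyIsr forms a valid pointer into the SMM code segment; DummyIsr is a hypothetical label.

xor     ax,ax
mov     ds,ax                        ; DS -> interrupt vector table at linear 0
cli
mov     word ptr ds:[8*4],offset DummyIsr ; hook the INT 8 (timer tick) vector
mov     word ptr ds:[8*4+2],cs
sti
hlt                                  ; wait for the timer tick to be serviced
cli                                  ; from this point on, NMI is recognized within SMM
; ... remainder of the SMM handler; DummyIsr lives elsewhere in the SMM code segment ...
DummyIsr:
push    ax
mov     al,20h
out     20h,al                       ; send EOI to the master 8259 PIC
pop     ax
iret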

Modifying Segment Registers

As I've mentioned, Table 20-1 and Table 20-2 show a list of registers that are considered writeable by the SMM handler. The segment registers are not considered writeable, and thus are omitted from this list. These registers are considered unwriteable because changing their contents would not change the locations to which they point; thus, you might experience some unpredictable behavior. Segment attributes are stored in the hidden Segment Descriptor Cache registers, not in the segment registers. The segment descriptor cache registers contain the base address, limit, and access attributes of all segment (CS, DS, and so on) and system (GDTR and IDTR) registers. To correctly modify these registers, you should modify their respective segment descriptor cache slots in the state save map. As a side effect, the segment descriptor cache can point to one location in memory, while the segment register may reference a bogus place in memory, or a bogus descriptor-table entry. The segment descriptor cache always contains the true pointer to segment memory. The actual segment register has no effect on the behavior of the microprocessor, except when it is loaded. (For a description of the Segment Descriptor Cache registers, their contents, and how they are used by the microprocessor, see http://www.rcollins.org/Productivity/DescriptorCache.html.)
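
As a hedged example, to make the resumed program's DS point somewhere new, modify the DS descriptor cache slot in the state save map rather than the DS selector slot. The field names mirror the convention of the listings (._Addr is the cached base address); ._Limit is an assumed name for the cached limit field.

mov     SMRAM.DS_DescCache._Addr,0B8000h   ; new DS base (color text video memory, for example)
mov     SMRAM.DS_DescCache._Limit,0FFFFh   ; 64-KB limit
; the DS selector slot can be left alone; after RSM the processor honors
; the descriptor cache contents, not the selector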

SMBASE Relocation

SMBASE relocation is easy, but can end up biting you in your SMM handler. The initial address of the SMM handler can be changed by modifying the SMBASE slot of the state save map. The new value takes effect upon the next entrance to SMM. (See the appropriate Intel documentation for more details.) However, Table 20-3 can be taken quite literally when it says that the initial value for CS is 3000h. Even though the actual CS base address may be 0A0000h (or some other address), the initial value of CS always equals 3000h. (The Pentium Pro doesn't share the same behavior.) It would be tempting to execute code that moves CS into DS to allow the data segment to have the same frame of reference as the code segment. Doing this would be a mistake if the SMBASE has been changed. After moving CS into DS, the new DS base address is calculated according to the real-mode segmentation rules; thus the DS base address equals 30000h, not the base address of the relocated SMBASE. To correctly convert CS into DS, the SMM handler should read the base address from the state save map and make the appropriate address conversion before moving it into the DS register; see Listing One.
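
Stepping back, the relocation itself is a single write to the SMBASE slot of the state save map. A hedged sketch follows; SMRAM.SMBASE and NEW_SMBASE are illustrative names, not Intel-defined symbols.

NEW_SMBASE equ 000A0000h             ; example relocated base address
mov     SMRAM.SMBASE,NEW_SMBASE      ; takes effect on the next entrance to SMM
rsm                                  ; resume; the relocation is now armed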

SMBASE relocation can also bite you if you plan to service interrupts within your SMM handler. Suppose that you enter SMM, and make all of the necessary arrangements to service interrupts. Upon invocation of the interrupt, CS:IP and EFLAGS are pushed onto the stack. Remember that the initial value of CS is 3000h within your SMM handler; that means the value of CS that is pushed on the stack is 3000h. All is still well until you perform an IRET. Upon execution of IRET, CS will be restored to 3000h, and your code will continue executing somewhere in the 30000h area of memory, and not from your relocated SMM handler. If you service interrupts within a relocated SMM handler, you must make sure that you adjust your CS register beforehand. Listing Two presents sample code to adjust your CS register.

Section 20.1.6.4 mentions the caveats of relocating the SMM handler above 1 MB. Many problems can occur when the SMBASE has been set above 1 MB; the hard-coded initial value of CS=3000h will be just the beginning of your problems. It will be impossible to have any segment register (other than an unmodified code segment register) point to any data within the SMM handler. If interrupts are needed, it will be impossible to return (IRET) to your interrupted code. For these reasons alone, I would strongly discourage anybody from attempting to do such a thing.

Using Protected Mode Within SMM

It is tempting to describe SMM as a completely separate x86 CPU state within a CPU. In that regard, it is orthogonal to all other operating modes. Even though this analogy is not accurate in the literal sense, thinking of SMM in this way will help you to better understand it. Upon entrance to SMM, all of the registers have some initial values, and you're operating in a version of real mode in which all segments have 4-GB limits (also known as big real mode or flat real mode). The processor is capable of entering protected mode, v86 mode, or even loading and executing an operating system, if the SMM handler was programmed to do such a thing (though it probably wouldn't be). Yet, when the RSM instruction (resume to the user program) is executed, the entire state of the original interrupted program is restored, and execution continues exactly where it left off. The entire state of the processor is restored, including any protected mode or v86 mode tasks. If the SMM handler enters protected mode itself, it doesn't need to restore any of the system registers (like the GDT and IDT) before resuming to the interrupted program. All of these values are restored from the state save map.
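
Here is a minimal sketch of what that looks like in practice. GdtPtr and CODE_SEL are hypothetical names for a small private GDT pseudo-descriptor and a 16-bit code-segment selector that the handler would have to set up itself.

cli
lgdt    fword ptr GdtPtr             ; load the handler's private GDT
mov     eax,cr0
or      eax,1                        ; set CR0.PE
mov     cr0,eax
db      0EAh                         ; far JMP opcode: flushes the prefetch queue and
dw      offset PmEntry               ;   loads CS with a protected-mode selector
dw      CODE_SEL
PmEntry:
; ... protected-mode work inside the SMM handler ...
rsm                                  ; RSM restores CR0, GDTR, IDTR, and the rest of the
                                     ; interrupted program's state from the state save map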

SMRAM as Reserved Memory

Intel documentation is adamant about describing SMM RAM (SMRAM) as a separate memory space. This description is probably rooted in the behavioral description of the ICE support. When the ICE was active, its kernel code was literally executing from a physically separate memory space. The ICE hardware implemented memory that was separate from user memory, but occupied the same address space. When the ICE needed to access user memory, the ICE kernel code executed a series of UMOV instructions to transfer data from user memory into ICE memory. (UMOV is an undocumented instruction on the 80386 and 80486 microprocessors; see http://www.rcollins.org/secrets/opcodes/UMOV.html for more details.)

Even though the historical description of SMRAM lies in ICE mode, chipsets don't implement SMRAM as physically separate sets of memory chips. Instead, they borrow a portion of user memory as SMRAM, and prohibit access to it by user programs. Since the PC architecture has a convenient 384-KB hole of memory between 640 KB and 1 MB, the chipset vendors allow the system BIOS to relocate an SMM handler within this area (usually overlapping with video memory). The chipset is programmed to disable video memory when the SMIACT# signal is asserted, thereby allowing the SMRAM to appear as though it is a separate set of memory. Some chipsets can be programmed to prohibit data references to this memory (when SMIACT# is active), while allowing code fetches to occur. At first glance, you might think this is a severe limitation. On the contrary: when data references are disabled, the SMM handler can still access video memory and update the video display. Using this facility, I was able to spawn a copy of DOS within my SMM handler. I even booted Windows while in SMM. The downside of this feature is the inability to access your SMRAM as data. In other words, don't attempt to read or modify any values in the SMM state save map if you have this feature enabled.
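
A hedged sketch of such a display update from within the handler, assuming the handler overlaps the video area and the chipset blocks data references to SMRAM while still decoding video memory:

mov     ax,0B800h                    ; color text-mode frame buffer segment
mov     es,ax
mov     word ptr es:[0],0741h        ; write 'A' with attribute 07h to the top-left cell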

It is important to understand why chipsets have the ability to remap the SMRAM. Without such a feature, the SMM handler would occupy 32 KB of precious user memory. The OS could easily map this area out of its own usage, but it would need explicit support for this feature. When the chipset remaps this memory, no OS support is needed, the memory isn't taking up precious user resources, and the memory is transparent to the entire system. Thus, it's a cleaner solution for the entire system.

Using SMM to Create a CPL-0 v86 Task

Two of the strangest things I've done with SMM involved changing the descriptor access rights. In one case, I changed the access rights of the GDT descriptor cache to Not Present. The subsequent descriptor-table lookup caused a triple fault. This proved that the GDT access rights are used by the Pentium processor, even though they don't exist as part of the LGDT instruction data structure (see my March 1997 column for more details). But the other "strangest" thing I've done (and the most illegal, too) allowed me to enter a v86 task at CPL-0 and execute privileged instructions.

I was writing a program that was designed to test the compatibility of Enhanced v86 mode (Ev86 mode) when various EFLAGS bits (such as EFLAGS.VIF and EFLAGS.VIP) were modified within SMM before returning to the Ev86 task. (Ev86 mode is described at http://www.rcollins.org/articles/vme1/.) If I found an incompatibility or a bug in the microprocessor, I wanted to modify the state save map to allow me to return directly to DOS and report the error. The challenge was to modify the state save map to return to real mode. I wanted to avoid the laborious tasks of returning back to the v86 task, interrupting to a CPL-0 protected-mode task, exiting protected mode, then returning to real mode and DOS. In order to return to a proper form of real mode, I needed to modify many values in the state save map. I knew I needed to perform the following tasks:

  1. Clear CR0.PE to exit protected mode.
  2. Clear CR4.VME to turn off Ev86 mode.
  3. Change the CS descriptor cache base address to point to my real-mode code segment.
  4. Change EIP to point to my real-mode return address.
  5. Change EFLAGS.VM to disable v86 mode.
  6. Modify the CPL of the processor back to CPL-0.

Modifying everything but the CPL was straightforward. Unfortunately, there wasn't a handbook for modifying the CPL of the processor by changing values in the state save map. I knew from experience, however, that the CPL must somehow be stored there. If it wasn't, it wouldn't be possible to resume from SMM back to all modes of processor operation. The most obvious place to look was the DPL of the CS descriptor cache. I tried changing it from 3 to 0, but my program crashed. Deep in the back of my mind, I recalled (from reading Intel's 80286 LOADALL document) that the CPL of the processor was determined by the DPL field of the SS descriptor cache. So I changed the value of the SS descriptor cache DPL field from 3 to 0. Voilà. I was able to return from Ev86 mode back to real mode.
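
Putting the six steps and the SS DPL trick together, a hedged sketch of the state-save-map edits looks like the following. All SMRAM field names follow the overlay convention of the listings and are assumptions, as are the bit positions within the cached attribute field (assumed to mirror the descriptor access byte); REALMODE_BASE and RealModeTarget are hypothetical.

REALMODE_BASE equ 00010000h                     ; real-mode code segment base (1000h:0000)
and     SMRAM.SMM_CR0,not 1                     ; 1. clear CR0.PE to exit protected mode
and     SMRAM.SMM_CR4,not 1                     ; 2. clear CR4.VME to turn off Ev86 mode
mov     SMRAM.CS_DescCache._Addr,REALMODE_BASE  ; 3. CS base -> real-mode code segment
mov     SMRAM.SMM_EIP,offset RealModeTarget     ; 4. EIP -> real-mode return address
and     SMRAM.SMM_EFLAGS,not 20000h             ; 5. clear EFLAGS.VM to disable v86 mode
and     SMRAM.SS_DescCache._Attrib,not 60h      ; 6. SS DPL 3 -> 0 restores CPL-0
rsm                                             ; resume directly into real mode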

This discovery raised an interesting question: Could I use the same technique to change the privilege level of the v86 task itself to the more privileged CPL-0? To answer this, I created a simple v86 task and generated an SMI# interrupt. My SMM handler simply changed the SS descriptor cache DPL field from 3 to 0, then resumed to the v86 task. Upon resumption of the v86 task, I attempted to execute a series of privileged instructions - instructions that execute only at CPL-0. I tried executing RDMSR and WRMSR. Both instructions worked without a problem, thus proving that I could execute privileged instructions from within my CPL-0 v86 task.
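
As a hedged sketch of the test itself: once the SMM handler has lowered the SS descriptor cache DPL and executed RSM, the resumed v86 task can run something like the following. Reading the time-stamp counter MSR (index 10h) would raise a general-protection fault at CPL-3, but succeeds at CPL-0.

mov     ecx,10h                      ; MSR index of the time-stamp counter
rdmsr                                ; privileged; succeeds only because the v86 task is now CPL-0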

Conclusion

The purpose of this series of columns has been to educate and inform you regarding the history, secrets, and caveats of System Management Mode. The secrets and caveats are important tools for writing your own SMM code. Understanding SMM - specifically the state save map - can directly enhance your debugging capabilities and reduce your development time. I've learned much from the school of hard knocks. Hopefully, knowing the secrets of SMM will help prevent you from making your own mistakes. Either way, these columns should have filled in many holes left by Intel's documentation.

Listing One - Converting DS from a relocated SMBASE value

mov     eax,SMRAM.CS_DescCache._Addr ; Get base address of CS
shr     eax,4                        ; convert physical address to segment address
mov     ds,ax                        ; now have new DS base pointing to CS

Listing Two - Code sample to adjust CS to relocated SMBASE value

mov     eax,SMRAM.SS_DescCache._Addr ; make sure we have a stack
shr     eax,4                        ; convert physical address to segment address
mov     ss,ax                        ; create a stack that points to the same
mov     esp,SMRAM.SMM_ESP            ; as our original stack - so no harm done
mov     eax,SMRAM.CS_DescCache._Addr ; Get base address of CS
shr     eax,4                        ; convert physical address to segment address
push    ax                           ; save new CS value
push    offset @Continue             ; save new IP value
retf                                 ; RETF will read new CS:IP from stack
@Continue:
