Monday, June 8, 2009

Ember rolls out Cortex M3 Zigbee MCU: is there any differentiation left?

In a remarkably short arc, Zigbee radios for embedded applications have gone from "how are you gonna do that in CMOS?" to commodity items. "In theory, these days the Zigbee radio is just another peripheral in an integrated embedded system like a power meter," observed Ember senior vice president of engineering Skip Ashton. "The radios have all been converging lately onto pretty similar sets of specifications."

Still, that's not quite the whole picture. When you integrate a Zigbee radio onto a microcontroller, Ashton points out, this one peripheral takes up about a third of the die. And it requires special design skills way beyond those necessary for counters, timers, or A/D converters. The size, power consumption, and proprietary skills that go into a Zigbee radio continue to make it a special interface, somehow not quite commodity.

And when something is special, it can be a source of differentiation. One area to look out for, according to Ashton, is the amount of energy consumed in interactions between the radio hardware and the MCU software. If the MCU is inefficient, if the radio interface is poorly thought out, or if all the opportunities to exploit power-reducing modes have not been explored, you can end up with quite a difference in energy consumption, both on a per-message basis and as a function of operating hours. These are differences, Ashton says, that you can entirely miss if you rely on data sheets. But in applications that must be battery-powered or must scavenge power, they are vital design issues. Unfortunately, there may be no alternative but for the evaluation team to get samples, code up stubs for an application, and make actual power and execution-time measurements.
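
Ember doesn't publish a reference harness for that kind of evaluation, so the following is only a sketch of what such a measurement stub might look like; the HAL names (radio_send, gpio_set, sleep_until_next_interval), the pin number, and the message count are placeholder assumptions, not any vendor's actual API. The idea is simply to bracket each transmit with a GPIO marker so a power analyzer can integrate supply current over exactly the radio-active window.

```c
#include <stdint.h>

/* Hypothetical HAL hooks: the names, the pin, and the message count are
 * placeholders for whatever the evaluated chip's SDK actually provides. */
void gpio_set(int pin);
void gpio_clear(int pin);
int  radio_send(const uint8_t *payload, uint16_t len);
void sleep_until_next_interval(void);

#define TRIGGER_PIN  3      /* wired next to the supply-current shunt/probe */
#define N_MESSAGES   1000   /* average over many packets to smooth out noise */

/* Send a burst of test packets, marking each radio-active window on a GPIO
 * so a power analyzer can integrate supply current over exactly that window.
 * Per-message energy is then roughly Vsupply * average(I) * t_marked. */
void energy_probe_loop(void)
{
    static const uint8_t payload[32] = {0};   /* representative frame size */

    for (uint32_t i = 0; i < N_MESSAGES; i++) {
        gpio_set(TRIGGER_PIN);                /* start of measured window */
        radio_send(payload, sizeof payload);
        gpio_clear(TRIGGER_PIN);              /* end of measured window */

        sleep_until_next_interval();          /* exercise the sleep modes too */
    }
}
```

Per-message energy then falls out as supply voltage times average current times the marked interval, and running the same loop with the chip's sleep modes enabled shows how much of the data-sheet promise survives contact with real software.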

The importance of understanding hardware-software interactions led Ember to a rather retro development approach, Ashton said. Instead of relying on system-level simulations, the company very early in the project breadboarded the analog and RF sections of their new chip with discrete components, loaded the RTL and the Cortex M3 into an FPGA, and threw real protocol stack and application software at this hardware emulation. As development progressed, the analog/RF teams dropped a test chip into the emulation in place of the breadboard, but the FPGA stayed. This gave the Ember designers the ability to stress-test the design with real software, look at real timing issues, and understand the hardware-software interactions.

"We found the really subtle bugs in the emulation," Ashton said. "When you simulate, you control time. Doing that, you can make implicit assumptions because you think you understand what's going on. In an emulation, you can execute a huge number of instructions, and everything is going to happen in real time. That can make a big difference. We would find the subtle things on the emulator—things that never would have showed up in simulation—and then pass them back to the simulation guys to explore."

Along with emulation, Ashton is very positive about the choice of the ARM Cortex M3 core. It's a bit unusual in a market that has been dominated by 8051 or proprietary 8-bit cores. But the increased computing power of the M3, even at 12 or 24 MHz, coupled with up to 192 Kbytes of Flash and 12 Kbytes of RAM, gives the customer a lot of headroom for applications development. And the higher efficiency of the M3 compared to older architectures means that the Cortex actually spends far less time executing the protocol stack, keeping energy consumption down.
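
A back-of-envelope duty-cycle calculation makes the point; the currents and execution times below are assumptions picked purely for illustration, not Ember's numbers.

```c
#include <stdio.h>

/* Back-of-envelope model of why finishing the per-message stack work faster
 * lowers average current. Every number here is an illustrative assumption,
 * not a measured or vendor-quoted figure. */
int main(void)
{
    const double i_active_ua = 30000.0;  /* assumed radio + CPU active current, uA */
    const double i_sleep_ua  = 1.0;      /* assumed deep-sleep current, uA */
    const double period_s    = 1.0;      /* one message per second */

    /* Suppose the protocol-stack work for one message takes 5 ms on an
     * efficient 32-bit core and 20 ms on a slower 8-bit core. */
    const double active_fast_s = 0.005;
    const double active_slow_s = 0.020;

    const double avg_fast_ua = (i_active_ua * active_fast_s +
                                i_sleep_ua * (period_s - active_fast_s)) / period_s;
    const double avg_slow_ua = (i_active_ua * active_slow_s +
                                i_sleep_ua * (period_s - active_slow_s)) / period_s;

    printf("average current: %.0f uA (fast core) vs %.0f uA (slow core)\n",
           avg_fast_ua, avg_slow_ua);
    return 0;
}
```

With the radio and CPU drawing the same active current in both cases, the active window dominates the average, so quartering the execution time cuts average current by nearly a factor of four.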

Ashton also likes the efficiency of the M3 in terms of latency. He said the interrupt blocking time, the period after entering an interrupt routine during which further interrupts stay masked, is less than 40 microseconds on the M3, compared with over 200 microseconds on alternative cores. That can be the difference between catching and dropping the next packet. And the code efficiency reduces the size of the protocol stack, leaving more room for application code. "We saw our protocol stack shrink by about 15-20 percent just by recompiling it for the M3," Ashton said.
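
To put those numbers in context, it helps to set them against the frame timing; the article doesn't spell this out, so the figures below assume the standard 2.4 GHz IEEE 802.15.4 PHY, where the radio signals at 62.5 ksymbols per second and back-to-back short frames can be separated by as little as the 12-symbol short interframe space, about 192 microseconds.

```c
/* Timing sanity check. These figures assume the standard 2.4 GHz IEEE
 * 802.15.4 PHY (62.5 ksymbols/s, 16 us per symbol); the article itself
 * does not spell out the frame timing. */
#define US_PER_SYMBOL   16
#define SIFS_US         (12 * US_PER_SYMBOL)  /* short interframe space: 192 us */

#define BLOCKING_US_M3     40    /* blocking time quoted for the Cortex M3 */
#define BLOCKING_US_OTHER  200   /* blocking time quoted for alternative cores */

/* 192 - 40 leaves roughly 150 us to get back into the receive path before a
 * back-to-back frame can start; 192 - 200 is already negative, which is where
 * the "dropped the next packet" scenario comes from. */
_Static_assert(BLOCKING_US_M3 < SIFS_US,
               "40 us of masked interrupts fits inside the interframe space");
```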

Finally, Ashton points with some relief to the debug facilities on the M3. Because the ARM embedded debug logic is a de-facto standard, customers can use the chip's hardware debug facility with their mainstream software debug tools. And Ember has integrated the software debug stuff with their own Packet Trace tool, allowing the customer to watch what's going on in the Zigbee packet stream and the Cortex code stream simultaneously.

That is a lot more effective than trying to reason out what's going on based on data from a sniffer, Ashton observed. "You can't trust sniffers. All they can tell you is what they saw in the air. They don't know what your radio actually received, or what the protocol stack did with it." Pulling together embedded debug hardware, software debug tools, and Packet Trace gives an end-to-end picture of an over-the-air transaction. Particularly important to Ashton, the ARM debug facility works in any of the processor's power modes, including letting you see inside a sleeping M3.

So is embedded Zigbee silicon getting commoditized? Maybe from a datasheet perspective the answer is yes. But at least one vendor is arguing that from the point of view of the development team that has to use the chip, and in real-life energy consumption running an application, the answer is no.

