Plastic Memory Could Boost Flexible Electronics

For stick-on displays, smart bandages, and cheap flexible plastic sensors to really take off, they’ll need some way of storing data for the long-term that can be built on plastic. “In the ecosystem of flexible electronics, having memory options is very important,” says Stanford electrical engineering professor Eric Pop.

But versions of today’s non-volatile memories, such as Flash, aren’t a great fit. So when Pop and his team of engineers decided to try adapting a type of phase-change memory to plastic, they figured it would be a long shot. What they came up with was a surprise—a memory that actually works better because it’s built on plastic. The energy needed to reset the memory, a critical feature for this type of device, is an order of magnitude lower than previous flexible versions. They reported their findings this week in Science.

Phase-change memory (PCM) is not an obvious win for plastic electronics. It stores its bit as a resistive state. In its crystalline phase, it has a low resistance. But running enough current through the device melts the crystal, allowing it to then freeze in an amorphous phase that is more resistive. The process is reversible. Importantly, especially for experimental neuromorphic systems, PCM can store intermediate levels of resistance. So a single device can store more than one bit of data.

Unfortunately, the usual set of materials involved doesn’t work well on flexible substrates like plastic. The problem is “programming current density”: basically, how much current do you need to pump through a given area in order to heat it up to the temperature at which the phase change takes place? The uneven surface of bendy plastic means PCM cells using the usual materials can’t be made as small as they are on silicon, requiring more current to achieve the same switching temperature.

Think of it as trying to bake a pie in an oven with the door slightly ajar. It will work, but it takes a lot more time and energy. Pop and his colleagues were looking for a way to close the oven door.

They decided to try a material called a superlattice: crystals made from repeating, nanometers-thick layers of different materials.
Junji Tominaga and researchers at the National Institute of Advanced Industrial Science and Technology in Tsukuba, Japan, had reported promising results back in 2011 using a superlattice composed of germanium, antimony, and tellurium.

Studying these superlattices, Pop and his colleagues concluded that they should be very thermally insulating, because in the crystalline form there are atomic-scale gaps between the layers. These “van der Waals–like gaps” restrict both the flow of current and, crucially, heat. So when current is forced through, the heat doesn’t quickly drain away from the superlattice, and that means it takes less energy to switch from one phase to another.

[Figure: Current is confined to the superlattice by a pore-like structure of aluminum oxide. This makes heating more efficient, so the memory can switch states using less energy. Credit: A.I. Khan and A. Daus]

But the superlattice work was hardly a slam dunk. “We started working on it several years ago, but we really struggled and almost gave up,” says Pop. The superlattice works if the van der Waals gaps are oriented parallel to each other and without major mixing between layers, Pop explains. But the peculiarities of the material-deposition equipment involved mean that “just because they published their parameters in Japan, doesn’t mean you can use them in a tool in Palo Alto.”

Asir Intisar Khan, a doctoral candidate working with Pop, had to push through a trial-and-error process that involved more than 100 attempts to produce superlattices with the right van der Waals gaps.

[Figure: A superlattice structure formed by alternating layers of antimony telluride and germanium telluride. Van der Waals–like gaps form between the layers, restricting the flow of current and heat. Credit: K. Yamamoto and A.I. Khan]

The researchers kept the heat in the memory device by confining the flow of current to a 600-nanometer-wide pore-like structure surrounded by insulating aluminum oxide.
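A quick back-of-the-envelope check ties these figures together: the device’s reported programming current density of about 0.1 megaampere per square centimeter, applied over the 600-nanometer pore, implies a reset current of only a few hundred microamperes. The sketch below is my arithmetic, not a figure from the paper:

```python
import math

# Reset current = programming current density x pore cross-section.
# The 0.1 MA/cm^2 density and 600-nm pore diameter come from the article;
# the microamp result is a back-of-envelope estimate, not a measured value.
J = 0.1e6                         # 0.1 MA/cm^2, expressed in A/cm^2
d_cm = 600e-7                     # 600-nm pore diameter, in centimeters
area = math.pi * (d_cm / 2) ** 2  # pore cross-section, ~2.8e-9 cm^2

I_reset = J * area                # amperes
print(f"reset current ≈ {I_reset * 1e6:.0f} µA")  # → reset current ≈ 283 µA
```

Shrinking the pore diameter further, as the team plans, shrinks this area and hence the required current along with it.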
The final layer of insulation was the plastic itself, which resists the flow of heat considerably better than the silicon PCM is usually built on. The completed device had a current density of about 0.1 megaampere per square centimeter, about two orders of magnitude lower than conventional PCM on silicon and an order of magnitude better than previous flexible devices. Furthermore, it showed four stable resistance states, so it can store multiple bits of data in a single device.

That building the device on plastic would actually improve things wasn’t something the team had planned. Alwin Daus, a postdoctoral researcher in the lab with flexible-electronics expertise, says the team assumed that the titanium nitride electrode between the superlattice and the substrate would limit heat loss, and thus that the substrate would not influence the memory operation. But later simulations confirmed that heat penetrates into the plastic substrate, which has a low thermal conductivity compared to silicon substrates.

The work reported this week is a proof of concept for low-power storage on flexible surfaces, Khan says. But the importance of thermal insulation applies to silicon devices as well. The team hopes to improve the devices by further shrinking the pore diameter and by making the sides of the device more insulating. Simulations already show that making the aluminum oxide walls thicker reduces the current needed to reach the switching temperature.

The researchers will also look into other superlattice structures that might have even better properties.

Samuel K. Moore is the senior editor at IEEE Spectrum in charge of semiconductors coverage. An IEEE member, he has a bachelor’s degree in biomedical engineering from Brown University and a master’s degree in journalism from New York University.
A Circuit to Boost Battery Life

Digital low-dropout voltage regulators will save time, money, and power

Keith A. Bowman
29 Jul 2021

[Illustration: Edmon de Haro]

YOU’VE PROBABLY PLAYED hundreds, maybe thousands, of videos on your smartphone. But have you ever thought about what happens when you press “play”? The instant you touch that little triangle, many things happen at once. In microseconds, idle compute cores on your phone’s processor spring to life. As they do so, their voltages and clock frequencies shoot up to ensure that the video decompresses and displays without delay. Meanwhile, other cores, running tasks in the background, throttle down. Charge surges into the active cores’ millions of transistors and slows to a trickle in the newly idled ones.

This dance, called dynamic voltage and frequency scaling (DVFS), happens continually in the processor, called a system-on-chip (SoC), that runs your phone and your laptop as well as in the servers that back them. It’s all done in an effort to balance computational performance with power consumption, something that’s particularly challenging for smartphones. The circuits that orchestrate DVFS strive to ensure a steady clock and a rock-solid voltage level despite the surges in current, but they are also among the most backbreaking to design. That’s mainly because the clock-generation and voltage-regulation circuits are analog, unlike almost everything else on your smartphone SoC.

We’ve grown accustomed to a near-yearly introduction of new processors with substantially more computational power, thanks to advances in semiconductor manufacturing.
“Porting” a digital design from an old semiconductor process to a new one is no picnic, but it’s nothing compared to trying to move analog circuits to a new process. The analog components that enable DVFS, especially a circuit called a low-dropout voltage regulator (LDO), don’t scale down like digital circuits do and must basically be redesigned from scratch with every new generation. If we could instead build LDOs—and perhaps other analog circuits—from digital components, they would be much less difficult to port than any other part of the processor, saving significant design cost and freeing up engineers for other problems that cutting-edge chip design has in store. What’s more, the resulting digital LDOs could be much smaller than their analog counterparts and perform better in certain ways. Research groups in industry and academia have tested at least a dozen designs over the past few years, and despite some shortcomings, a commercially useful digital LDO may soon be in reach.

[Figure: Low-dropout voltage regulators (LDOs) allow multiple processor cores on the same input voltage rail (VIN) to operate at different voltages according to their workloads. In this case, Core 1 has the highest performance requirement. Its head switch, really a group of transistors connected in parallel, is closed, bypassing the LDO and directly connecting Core 1 to VIN, which is supplied by an external power-management IC. Cores 2 through 4, however, have less demanding workloads. Their LDOs are engaged to supply the cores with voltages that will save power.]

[Figure: The basic analog low-dropout voltage regulator [left] controls voltage through a feedback loop. It tries to make the output voltage (VDD) equal to the reference voltage by controlling the current through the power PFET. In the basic digital design [right], an independent clock triggers a comparator [triangle] that compares the reference voltage to VDD. The result tells control logic how many power PFETs to activate.]
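The basic digital LDO lends itself to a quick simulation. The sketch below is a toy discrete-time model: each tick, a comparator decision turns one parallel power PFET on or off. It assumes the core behaves as a simple resistive load and each PFET as a fixed on-resistance; all component values are invented for illustration.

```python
# Toy model of a basic digital LDO: every clock tick, a comparator checks
# VDD against the reference and control logic adds or removes one power
# PFET. Component values are invented, not from any real design.
VIN = 0.95      # input rail voltage (V)
VREF = 0.84     # target core supply voltage (V)
R_PFET = 8.0    # on-resistance of one power PFET (ohms)
R_LOAD = 2.0    # core modeled as a resistive load (ohms)
N_FETS = 64     # parallel power PFETs available

def vdd(n_active):
    """Output voltage with n_active PFETs on: a resistive divider
    between the parallel PFET bank and the load."""
    if n_active == 0:
        return 0.0
    r_on = R_PFET / n_active          # parallel combination of active PFETs
    return VIN * R_LOAD / (R_LOAD + r_on)

n = 0
for tick in range(100):               # LDO clock ticks
    if vdd(n) < VREF and n < N_FETS:  # comparator: output too low
        n += 1                        # activate one more PFET
    elif vdd(n) > VREF and n > 0:     # comparator: output too high
        n -= 1                        # deactivate one PFET
print(n, round(vdd(n), 3))
```

In this model the loop settles near the target but dithers between 30 and 31 active PFETs, always one PFET’s worth of current high or low. That quantized hunting is the source of the small output ripple that counts against digital LDOs.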
A TYPICAL SYSTEM-ON-CHIP for a smartphone is a marvel of integration. On a single sliver of silicon it integrates multiple CPU cores, a graphics processing unit, a digital signal processor, a neural processing unit, an image signal processor, as well as a modem and other specialized blocks of logic. Naturally, boosting the clock frequency that drives these logic blocks increases the rate at which they get their work done. But to operate at a higher frequency, they also need a higher voltage. Without that, transistors can’t switch on or off before the next tick of the processor clock. Of course, a higher frequency and voltage comes at the cost of power consumption. So these cores and logic units dynamically change their clock frequencies and supply voltages—often ranging from 0.95 to 0.45 volts—based on the balance of energy efficiency and performance they need to achieve for whatever workload they are assigned: shooting video, playing back a music file, conveying speech during a call, and so on.

Typically, an external power-management IC generates multiple input voltage (VIN) values for the phone’s SoC. These voltages are delivered to areas of the SoC chip along wide interconnects called rails. But the number of connections between the power-management chip and the SoC is limited. So multiple cores on the SoC must share the same VIN rail. But they don’t have to all get the same voltage, thanks to the low-dropout voltage regulators. LDOs, along with dedicated clock generators, allow each core on a shared rail to operate at a unique supply voltage and clock frequency.

The core requiring the highest supply voltage determines the shared VIN value. The power-management chip sets VIN to this value, and this core bypasses the LDO altogether through transistors called head switches. To keep power consumption to a minimum, other cores can operate at a lower supply voltage. Software determines what this voltage should be, and analog LDOs do a pretty good job of supplying it.
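That rail-sharing policy is easy to express in code. The sketch below is a toy model: the 0.15-volt dropout matches the example figure used in this article, while the function name and per-core voltages are invented.

```python
# Toy model of per-core supply planning on a shared rail: the most
# demanding core sets VIN and bypasses its LDO via the head switch; the
# others engage their LDOs only if the target fits under the dropout.
DROPOUT = 0.15   # example dropout voltage (V); illustrative value

def plan_rail(requested):
    """requested: dict mapping core name -> desired supply voltage (V)."""
    vin = max(requested.values())      # rail is set by the hungriest core
    plan = {}
    for core, vtarget in requested.items():
        if vtarget >= vin - DROPOUT:
            # Target sits inside the dropout margin: the LDO can't reach
            # it, so bypass and run at the full rail voltage.
            plan[core] = ("bypass", vin)
        else:
            plan[core] = ("ldo", vtarget)
    return vin, plan

vin, plan = plan_rail({"core1": 0.95, "core2": 0.70, "core3": 0.85})
print(vin)
print(plan)
```

Note that core3, asking for 0.85 V on a 0.95-V rail, lands inside the 0.15-V dropout margin and must run at the full rail voltage, wasting some power.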
Analog LDOs are compact, low cost to build, and relatively simple to integrate on a chip, as they do not require large inductors or capacitors. But these LDOs can operate only in a particular window of voltage. On the high end, the target voltage must be lower than the difference between VIN and the voltage drop across the LDO itself (the eponymous “dropout” voltage). For example, if the supply voltage that would be most efficient for the core is 0.85 V, but VIN is 0.95 V and the LDO’s dropout voltage is 0.15 V, that core can’t use the LDO to reach 0.85 V and must work at 0.95 V instead, wasting some power. Similarly, if VIN has already been set below a certain voltage limit, the LDO’s analog components won’t work properly and the circuit can’t be engaged to reduce the core supply voltage further. If, however, the desired voltage falls inside the LDO’s window, software enables the circuit and activates a reference voltage equal to the target supply voltage.

HOW DOES THE LDO supply the right voltage? In the basic analog LDO design, it’s by means of an operational amplifier, feedback, and a specialized power p-channel field-effect transistor (PFET). The latter is a transistor that reduces its current with increasing voltage to its gate. The gate voltage to this power PFET is an analog signal coming from the op amp, ranging from 0 volts to VIN. The op amp continuously compares the circuit’s output voltage—the core’s supply voltage, or VDD—to the target reference voltage. If the LDO’s output voltage falls below the reference voltage—as it would when newly active logic suddenly demands more current—the op amp reduces the power PFET’s gate voltage, increasing current and lifting VDD toward the reference voltage value.
Conversely, if the output voltage rises above the reference voltage—as it would when a core’s logic is less active—then the op amp increases the transistor’s gate voltage to reduce current and lower VDD.

A basic digital LDO, on the other hand, is made up of a voltage comparator, control logic, and a number of parallel power PFETs. (The LDO also has its own clock circuit, separate from those used by the processor core.) In the digital LDO, the gate voltages to the power PFETs are binary values instead of analog: either 0 V or VIN. With each tick of the clock, the comparator measures whether the output voltage is below or above the target voltage provided by the reference source. The comparator output guides the control logic in determining how many of the power PFETs to activate. If the LDO’s output is below target, the control logic will activate more power PFETs. Their combined current props up the core’s supply voltage, and that value feeds back to the comparator to keep it on target. If it overshoots, the comparator signals to the control logic to switch some of the PFETs off.

NEITHER THE ANALOG nor the digital LDO is ideal, of course. The key advantage of an analog design is that it can respond rapidly to transient droops and overshoots in the supply voltage, which is especially important when those events involve steep changes. These transients occur because a core’s demand for current can go up or down greatly in a matter of nanoseconds. In addition to the fast response, analog LDOs are very good at suppressing variations in VIN that might come in from the other cores on the rail. And, finally, when current demands are not changing much, an analog LDO controls the output tightly, without constantly overshooting and undershooting the target in a way that introduces ripples in VDD.

[Figure: When a core’s current requirement changes suddenly, it can cause the LDO’s output voltage to overshoot or droop [top]. Basic digital LDO designs do not handle this well [bottom left]. However, a scheme called adaptive sampling with reduced dynamic stability [bottom right] can reduce the extent of the voltage excursion. It does this by ramping up the LDO’s sample frequency when the droop gets too large, allowing the circuit to respond faster.]

These attributes have made analog LDOs attractive not just for supplying processor cores, but for almost any circuit demanding a quiet, steady supply voltage. However, there are some critical challenges that limit the effectiveness of these designs. First, analog components are much more complex than digital logic, requiring lengthy design times to implement them in advanced technology nodes. Second, they don’t operate properly when VIN is low, limiting how low a VDD they can deliver to a core. And finally, the dropout voltage of analog LDOs isn’t as small as designers would like.

Taking those last points together, analog LDOs offer a limited voltage window at which they can operate. That means there are missed opportunities to enable LDOs for power saving—ones big enough to make a noticeable difference in a smartphone’s battery life.

Digital LDOs undo many of these weaknesses: With no complex analog components, they allow designers to tap into a wealth of tools and other resources for digital design. So scaling down the circuit for a new process technology will need much less effort. Digital LDOs will also operate over a wider voltage range. At the low-voltage end, the digital components can operate at VIN values that are off-limits to analog components. And at the higher end, the digital LDO’s dropout voltage will be smaller, resulting in meaningful core-power savings.

But nothing’s free, and the digital LDO has some serious drawbacks. Most of these arise because the circuit measures and alters its output only at discrete times, instead of continuously. That means the circuit has a comparatively slow response to supply voltage droops and overshoots.
It’s also more sensitive to variations in VIN, and it tends to produce small ripples in the output voltage, both of which could degrade a core’s performance.

How Much Power Do LDOs Save?

It might seem straightforward that low-dropout voltage regulators (LDOs) could minimize processor power consumption by allowing cores to run at a variety of power levels, but exactly how do they do that? The total power consumed by a core is simply the product of the supply voltage and the current through that core. But voltage and current each have both a static component and a dynamic one, dependent on how frequently transistors are switching.

The core current’s static component is made up of the current that leaks across devices even when the transistors are not switching and is dependent on supply voltage. Its dynamic component, on the other hand, is a product of capacitance, clock frequency, and supply voltage.

For a core connected directly to a voltage rail supplied by the external power-management IC, lowering VIN results in a quadratic reduction in dynamic power at a given clock frequency, plus a static-power reduction that depends on the sensitivity of leakage current to VIN. So lowering the rail voltage saves quite a lot.

For cores using the LDO to deliver a supply voltage that is lower than VIN, you have to take into account the power consumed by the LDO itself. At a minimum, that’s the product of the voltage across the LDO (the eponymous dropout voltage in the circuit’s name) and the core current. When you factor that in, the dynamic power saving from lowering the voltage scales linearly with supply voltage rather than quadratically, as it does without the LDO. Even so, using an LDO to scale supply voltage is worthwhile: LDOs significantly lower SoC processor power by allowing multiple cores on a shared VIN to operate at lower voltage values.

Of the digital LDO’s drawbacks, the main obstacle that has limited its use so far is its slow transient response.
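The sidebar’s argument can be made concrete with rough numbers. In the sketch below, dynamic power is modeled as C·f·V²; in the LDO case the core current C·f·V is drawn from the full VIN rail, which is why the saving becomes linear in V. All component values are invented for the example.

```python
# Illustrative dynamic-power comparison (numbers invented, not from the
# article): P = C * f * V^2 for a directly supplied core; with an LDO,
# the same core current is drawn from the VIN rail, so P = VIN * C * f * V.
C_F = 1e-9     # switched capacitance (farads), invented
FREQ = 1e9     # clock frequency (Hz), invented
VIN = 0.95     # shared rail voltage (V)
V = 0.85       # per-core target voltage (V)

p_rail = C_F * FREQ * VIN**2     # no LDO: core stuck at the full rail voltage
p_ldo = VIN * (C_F * FREQ * V)   # core at 0.85 V via an LDO on the 0.95-V rail
p_direct = C_F * FREQ * V**2     # ideal: a dedicated 0.85-V rail, no LDO loss

print(f"at full rail (0.95 V): {p_rail * 1e3:.1f} mW")
print(f"via LDO (0.85 V core): {p_ldo * 1e3:.1f} mW")
print(f"ideal 0.85-V rail:     {p_direct * 1e3:.1f} mW")
```

The gap between the LDO case and the ideal case is exactly the dropout voltage times the core current, i.e., the LDO’s own consumption; even so, the LDO case beats running at the full rail voltage.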
Cores experience droops and overshoots when the current they draw abruptly changes in response to a change in their workload. The LDO’s response time to droop events is critical to limiting how far the voltage falls and how long that condition lasts. Conventional cores add a safety margin to the supply voltage to ensure correct operation during droops. A greater expected droop means the margin must be larger, degrading the LDO’s energy-efficiency benefits. So speeding up the digital LDO’s response to droops and overshoots is the primary focus of cutting-edge research in this field.

SOME RECENT ADVANCES have helped speed the circuit’s response to droops and overshoots. One approach uses the digital LDO’s clock frequency as a control knob to trade stability and power efficiency for response time. A lower frequency improves LDO stability, simply because the output will not be changing as often. It also lowers the LDO’s power consumption, because the transistors that make up the LDO are switching less frequently. But this comes at the cost of a slower response to transient current demands from the processor core. You can see why that would be if you consider that much of a transient event might occur within a single clock cycle if the frequency is too low. Conversely, a high LDO clock frequency reduces the transient response time, because the comparator is sampling the output often enough to change the LDO’s output current earlier in the transient event. However, this constant sampling degrades the stability of the output and consumes more power.

The gist of this approach is to introduce a clock whose frequency adapts to the situation, a scheme called adaptive sampling frequency with reduced dynamic stability. When voltage droops or overshoots exceed a certain level, the clock frequency increases to more rapidly reduce the transient effect. It then slows down to consume less power and keep the output voltage stable.
This trick is achieved by adding a pair of additional comparators to sense the overshoot and droop conditions and trigger the clock. In measurements from a test chip using this technique, the VDD droop was reduced from 210 to 90 millivolts—a 57 percent reduction versus a standard digital LDO design. And the time it took for the voltage to settle to a steady state shrank to 1.1 microseconds from 5.8 µs, an 81 percent improvement.

An alternative approach for improving the transient response time is to make the digital LDO a little bit analog. The design integrates a separate analog-assisted loop that responds instantly to load-current transients. The analog-assisted loop couples the LDO’s output voltage to the gates of the LDO’s parallel PFETs through a capacitor, creating a feedback loop that engages only when there is a steep change in output voltage. So when the output voltage droops, it reduces the voltage at the activated PFET gates and instantaneously increases current to the core, reducing the magnitude of the droop. Such an analog-assisted loop has been shown to reduce the droop from 300 to 106 mV, a 65 percent improvement, and the overshoot from 80 to 70 mV (13 percent).

[Figure: An alternative way to make digital LDOs respond more quickly to voltage droops is to add an analog feedback loop to the power PFET part of the circuit [top]. When output voltage droops or overshoots, the analog loop engages to prop it up [bottom], reducing the extent of the excursion.]

Of course, both of these techniques have their drawbacks. For one, neither can really match the response time of today’s analog LDOs. In addition, the adaptive-sampling-frequency technique requires two additional comparators and the generation and calibration of reference voltages for droop and overshoot, so the circuit knows when to engage the higher frequency. The analog-assisted loop includes some analog components, reducing the design-time benefit of an all-digital system.
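The adaptive-sampling decision itself is simple enough to sketch. Below, two extra threshold checks (standing in for the added comparators) pick a fast or slow comparator clock; the thresholds and periods are invented, not the test chip’s values.

```python
# Toy illustration of adaptive sampling frequency: the comparator clock
# period shrinks while the output error exceeds the droop/overshoot
# thresholds, then relaxes again. All constants are invented.
VREF = 0.85              # target output voltage (V)
DROOP_THRESH = 0.05      # engage fast clock below VREF - 0.05 V
OVERSHOOT_THRESH = 0.05  # ...or above VREF + 0.05 V
SLOW_PERIOD = 100e-9     # steady-state sample period (s): saves power
FAST_PERIOD = 10e-9      # transient sample period (s): reacts sooner

def next_sample_period(vdd):
    """Pick the comparator clock period for the next sample,
    mimicking the two extra droop/overshoot comparators."""
    if vdd < VREF - DROOP_THRESH or vdd > VREF + OVERSHOOT_THRESH:
        return FAST_PERIOD   # large excursion: sample 10x faster
    return SLOW_PERIOD       # quiet: slow down, stay stable

print(next_sample_period(0.85))  # steady state → slow period
print(next_sample_period(0.70))  # deep droop → fast period
```

Once the excursion decays back inside the thresholds, the same check returns the loop to the slow clock, recovering the power and stability benefits of infrequent sampling.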
Developments in commercial SoC processors may help make digital LDOs more successful, even if they can’t quite match analog performance. Today, commercial SoC processors integrate all-digital adaptive circuits designed to mitigate performance problems when droops occur. These circuits, for example, temporarily stretch the core’s clock period to prevent timing errors. Such mitigation techniques could relax the transient response-time limits, allowing the use of digital LDOs and boosting processor efficiency. If that happens, we can expect more efficient smartphones and other computers, while making the process of designing them a whole lot easier.