MIT Researcher Addresses the Power Efficiency Problem in Wireless Communications

The success of wireless communications around the world has led to massive infrastructure expansion, with new cell towers and base stations. As these networks, and the volume of information they handle, grow, they will consume an increasingly hefty amount of power, some of which goes to operating the system as intended, but much of which is lost as heat due to energy inefficiency. This is a problem both for companies such as mobile network operators, which have to pay large utility bills to cover their operational costs, and for society at large, as the sector's greenhouse gas emissions rise.

These concerns have drawn MIT researchers to the problem. Omer Tanovic, a PhD candidate in the Department of Electrical Engineering and Computer Science, is working on this real-world question of power efficiency in wireless operations. Most of the projects Omer has worked on at MIT seek to design signal processing systems, optimized against different measures, that increase power efficiency while ensuring that the output signal (what you hear when talking to someone on the phone, for instance) stays true to the original input (what was said by the person on the other end of the call).

His latest project seeks to address the power efficiency problem by decreasing the peak-to-average power ratio (PAPR) of wireless communication signals. In the broadest sense, PAPR is an indirect indicator of how much power is required to send and receive a clear signal across a network; the lower this ratio, the more energy-efficient the transmission. Much of the power consumed in cellular networks goes to power amplifiers, which take a low-power electronic input and convert it to a higher-power output, for example picking up a weak radio signal generated inside a cell phone and amplifying it so that, when emitted by an antenna, it is strong enough to reach a cell tower.
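
For readers unfamiliar with the quantity, the following short Python sketch (not drawn from the research described here) shows how PAPR can be computed for a sampled signal, and why a multicarrier signal of the kind cellular networks carry scores far higher than a constant-envelope tone:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a sampled signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# A multicarrier (OFDM-like) signal built from random QPSK symbols has a
# much higher PAPR than a constant-envelope tone.
rng = np.random.default_rng(0)
n, subcarriers = 4096, 256
spectrum = np.zeros(n, dtype=complex)
spectrum[:subcarriers] = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], subcarriers)
ofdm = np.fft.ifft(spectrum)
tone = np.exp(2j * np.pi * 0.1 * np.arange(n))

print(f"OFDM-like signal PAPR:  {papr_db(ofdm):.1f} dB")  # typically around 10 dB
print(f"Constant-envelope tone: {papr_db(tone):.1f} dB")  # ~0 dB
```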

This ensures that the signal is robust enough to maintain an adequate signal-to-noise ratio over the communication link. Power amplifiers are at their most efficient when operating near their saturation level, at maximum output power.

However, because cellular network technology has evolved in a way that accommodates a huge volume and variety of information across the network — resulting in far less uniform signals than in the past — modern communication standards require signals with big peak-to-average power ratios. This means that a radio frequency transmitter must be designed such that the underlying power amplifier can handle peaks much higher than the average power being transmitted, and therefore, most of the time, the power amplifier is working inefficiently — far from its saturation level.

According to Omer, every cell tower has to have some kind of PAPR reduction algorithm in place in order to operate, but the algorithms in use are developed with little or no guarantee of improving system performance. A common perception is that optimal algorithms, which would certainly improve system performance, are either too expensive to implement, in terms of power or computational capacity, or cannot be implemented at all.

Omer, who is supervised by LIDS (Laboratory for Information and Decision Systems) Professor Alexandre Megretski, designed an algorithm that decreases the PAPR of a modern communication signal, allowing the power amplifier to operate closer to its maximum efficiency and thus reducing the amount of energy lost in the process. To create this system, he first posed it as an optimization problem whose conditions meant that any exact solution would not be implementable, as it would require infinite latency, meaning an infinite delay before transmitting the signal. However, Omer showed that the underlying optimal system, even though it has infinite latency, possesses a desirable fading-memory property, so he could construct an approximation with finite latency, an acceptable lag time.
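
To make the latency idea concrete, here is a minimal, hypothetical sketch of a finite look-ahead peak limiter in Python. It is not Omer's optimization-based algorithm, only an illustration of why looking further ahead into the signal (accepting more delay) gives more room to control peaks:

```python
import numpy as np

def lookahead_limiter(x, threshold, lookahead):
    """A generic finite look-ahead peak limiter, included only to make the
    notion of processing latency concrete; it is NOT the optimization-based
    algorithm described in the article.

    Each output sample is scaled by the smallest gain needed to keep every
    sample up to `lookahead` steps ahead at or below `threshold`, so the
    attenuation begins before a peak arrives. A longer look-ahead window
    anticipates peaks earlier (a practical design would also smooth the
    gain), but it delays transmission by `lookahead` samples.
    """
    mag = np.abs(x)
    # Gain each individual sample would need to stay under the threshold.
    per_sample_gain = np.minimum(1.0, threshold / np.maximum(mag, 1e-12))
    # Apply, at time n, the most restrictive gain seen in the window
    # [n, n + lookahead]; buffering that window is the source of latency.
    gain = np.array([per_sample_gain[i:i + lookahead + 1].min()
                     for i in range(len(x))])
    return x * gain
```

In a real-time realization, the look-ahead window is exactly the buffering delay the transmitter has to tolerate before each sample can be sent, which is why latency and approximation quality trade off against each other.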

From this, he developed a way to best approximate the optimal system. The approximation, which is implementable, allows tradeoffs between precision and latency, so that real-time realizations of the algorithm can improve power efficiency without adding too much transmission delay or too much distortion to the signal. Omer applied this system to standardized test signals for 4G communication and found that, on average, he could achieve around a 50 percent reduction in the peak-to-average power ratio while satisfying standard quality measures for digital communication signals.
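
As a rough illustration of how such a result can be checked, the snippet below applies a crude envelope clipper (a stand-in for the actual algorithm) to a generic multicarrier signal (a stand-in for the standardized 4G test signals) and reports the PAPR before and after, alongside an error-vector-style distortion figure:

```python
import numpy as np

# Illustrative sanity check of the two quantities the article mentions:
# PAPR reduction and signal fidelity. The test signal, the clipper, and
# the error-vector measure below are simple stand-ins only.
rng = np.random.default_rng(1)
n, subcarriers = 4096, 256
spectrum = np.zeros(n, dtype=complex)
spectrum[:subcarriers] = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], subcarriers)
x = np.fft.ifft(spectrum)

# Crude peak reduction: clip the envelope at 1.5 times the RMS level.
rms = np.sqrt(np.mean(np.abs(x) ** 2))
mag = np.maximum(np.abs(x), 1e-12)
y = np.where(mag > 1.5 * rms, x / mag * 1.5 * rms, x)

def papr_db(s):
    p = np.abs(s) ** 2
    return 10 * np.log10(p.max() / p.mean())

evm = np.linalg.norm(y - x) / np.linalg.norm(x)  # rough error-vector magnitude
print(f"PAPR before: {papr_db(x):.1f} dB, after: {papr_db(y):.1f} dB")
print(f"Distortion (EVM-like): {100 * evm:.1f} %")
```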

Omer's algorithm, along with improving power efficiency, is also computationally efficient. According to him, this is important to ensure that the algorithm is not just theoretically implementable but also practically implementable, once again stressing that abstract mathematical solutions are only valuable if they conform to real-world constraints. Microchip real estate in communications is a limited commodity, so the algorithm cannot take up much space, and its mathematical operations have to execute quickly, as latency is a critical factor in wireless communications. Omer believes that the algorithm could be adapted to solve other engineering problems with similar structure, including envelope tracking and model predictive control.

At MIT, Omer has taught both undergraduate and graduate-level courses, including as an instructor-G, an appointment only given to advanced students who have demonstrated teaching expertise. He has won two teaching awards, the MIT School of Engineering Graduate Student Extraordinary Teaching and Mentoring Award in 2018 and the MIT EECS Carlton E. Tucker Teaching Award in 2017.

Publisher: everything RF