Researchers at MIT are working on multiple fronts to develop technologies that could improve the performance of tomorrow’s products, as well as how they are manufactured, in significant ways.

One example is a new kind of airplane wing, developed by a team of engineers from MIT and NASA. The deformable wing, lighter and more energy-efficient than conventionally designed wings, is said to automatically respond to changes in aerodynamic loads by changing shape to control the plane’s flight.

Two other projects could also have far-reaching impacts. An energy-efficient chip capable of running the toughest quantum encryption schemes might protect the data communicated among low-power IoT devices. And an ultra-sensitive on-chip sensor that detects signals at sub-terahertz wavelengths could be just what driverless cars need to navigate through low-visibility areas of fog and dust.

Following are reports from MIT News writers David Chandler and Rob Matheson that explain the work of MIT researchers in these areas.


MIT and NASA Engineers Demonstrate a New Kind of Airplane Wing

Assembled from tiny identical pieces, the wing could enable lighter, more energy-efficient aircraft designs.

By David L. Chandler | MIT News

March 31, 2019

A team of engineers has built and tested a radically new kind of airplane wing, assembled from hundreds of tiny identical pieces. The wing can change shape to control the plane’s flight, and could provide a significant boost in aircraft production, flight, and maintenance efficiency, the researchers say.

The new approach to wing construction could afford greater flexibility in the design and manufacturing of future aircraft. The new wing design was tested in a NASA wind tunnel and is described today [March 31] in a paper in the journal Smart Materials and Structures, co-authored by research engineer Nicholas Cramer at NASA Ames in California; MIT alumnus Kenneth Cheung SM ’07 PhD ’12, now at NASA Ames; Benjamin Jenett, a graduate student in MIT’s Center for Bits and Atoms; and eight others.

Instead of requiring separate movable surfaces, such as ailerons, to control the roll and pitch of the plane, as conventional wings do, the new assembly system makes it possible to deform the whole wing, or parts of it, by incorporating a mix of stiff and flexible components in its structure. The tiny subassemblies, which are bolted together to form an open, lightweight lattice framework, are then covered with a thin layer of the same polymer material as the framework.

The result is a wing that is much lighter, and thus much more energy efficient, than those with conventional designs, whether made from metal or composites, the researchers say. Because the structure, comprising thousands of tiny triangles of matchstick-like struts, is composed mostly of empty space, it forms a mechanical “metamaterial” that combines the structural stiffness of a rubber-like polymer and the extreme lightness and low density of an aerogel.

Jenett explained that each phase of a flight — takeoff and landing, cruising, maneuvering, and so on — has its own set of optimal wing parameters, so a conventional wing is necessarily a compromise that is not optimized for any of these, and therefore sacrifices efficiency. A wing that is constantly deformable could provide a much better approximation of the best configuration for each stage.

While it would be possible to include motors and cables to produce the forces needed to deform the wings, the team has taken this a step further and designed a system that automatically responds to changes in its aerodynamic loading conditions by shifting its shape — a sort of self-adjusting, passive wing-reconfiguration process.

“We’re able to gain efficiency by matching the shape to the loads at different angles of attack,” said Cramer, the paper’s lead author. “We’re able to produce the exact same behavior you would do actively, but we did it passively.”

This is all accomplished by the careful design of the relative positions of struts with different amounts of flexibility or stiffness, designed so that the wing, or sections of it, bend in specific ways in response to particular kinds of stresses.

Cheung and others demonstrated the basic underlying principle a few years ago, producing a wing about a meter long, comparable to the size of typical remote-controlled model aircraft. The new version, about five times as long, is comparable in size to the wing of a real single-seater plane and could be easy to manufacture.

While this version was hand-assembled by a team of graduate students, the repetitive process is designed to be easily accomplished by a swarm of small, simple autonomous assembly robots. The design and testing of the robotic assembly system is the subject of an upcoming paper, Jenett said.

The individual parts for the previous wing were cut using a waterjet system, and it took several minutes to make each part, Jenett said. The new system uses injection molding with polyethylene resin in a complex 3-D mold and produces each part — essentially a hollow cube made up of matchstick-size struts along each edge — in just 17 seconds, he said, which brings it a long way closer to scalable production levels.
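The quoted cycle times imply a large per-part speedup. A back-of-the-envelope comparison, assuming "several minutes" means about five minutes per waterjet-cut part (a figure the article does not give):

```python
# Rough throughput comparison implied by the figures above. The waterjet
# cycle time is an assumption (5 minutes); the 17-second injection-molding
# cycle is the figure quoted in the article.
waterjet_s = 5 * 60   # assumed seconds per waterjet-cut part
molding_s = 17        # seconds per injection-molded part

parts_per_hour_waterjet = 3600 // waterjet_s   # 12 parts per hour
parts_per_hour_molding = 3600 // molding_s     # 211 parts per hour
speedup = waterjet_s / molding_s               # roughly 17.6x faster per part
```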

“Now we have a manufacturing method,” he said. While there’s an upfront investment in tooling, once that’s done, “the parts are cheap,” he said. “We have boxes and boxes of them, all the same.”

The resulting lattice, he said, has a density of 5.6 kilograms per cubic meter. By way of comparison, rubber has a density of about 1,500 kilograms per cubic meter. “They have the same stiffness, but ours has roughly one-thousandth of the density,” Jenett said.

Because the overall configuration of the wing or other structure is built up from tiny subunits, it really doesn’t matter what the shape is. “You can make any geometry you want,” he said. “The fact that most aircraft are the same shape” — essentially a tube with wings — “is because of expense. It’s not always the most efficient shape.” But massive investments in design, tooling, and production processes make it easier to stay with long-established configurations.

Studies have shown that an integrated body and wing structure could be far more efficient for many applications, he said, and with this system, those could be easily built, tested, modified, and retested.

“The research shows promise for reducing cost and increasing the performance for large, lightweight, stiff structures,” said Daniel Campbell, a structures researcher at Aurora Flight Sciences, a Boeing company, who was not involved in this research. “Most promising near-term applications are structural applications for airships and space-based structures, such as antennas.”

The new wing was designed to be as large as could be accommodated in NASA’s high-speed wind tunnel at Langley Research Center, where it performed even a bit better than predicted, Jenett said.

The same system could be used to make other structures as well, Jenett said, including the wing-like blades of wind turbines, where the ability to do on-site assembly could avoid the problems of transporting ever-longer blades. Similar assemblies are being developed to build space structures and could eventually be useful for bridges and other high-performance structures.

The team included researchers at Cornell University, the University of California at Berkeley and at Santa Cruz, NASA Langley Research Center, Kaunas University of Technology in Lithuania, and Qualified Technical Services, Inc., in Moffett Field, California. The work was supported by the NASA ARMD Convergent Aeronautics Solutions Program (MADCAT Project) and the MIT Center for Bits and Atoms.

Reprinted with permission of MIT News.



Securing the ‘Internet of Things’ in the Quantum Age

Efficient chip enables low-power devices to run today’s toughest quantum encryption schemes.

By Rob Matheson | MIT News

March 1, 2019

MIT researchers have developed a novel cryptography circuit that can be used to protect low-power “internet of things” (IoT) devices in the coming age of quantum computing.

Quantum computers can, in principle, execute calculations that today are practically impossible for classical computers. Bringing quantum computers online and to market could one day enable advances in medical research, drug discovery, and other applications.

But there’s a catch: If hackers also have access to quantum computers, they could potentially break through the powerful encryption schemes that currently protect data exchanged between devices.

Today’s most promising quantum-resistant encryption scheme is called “lattice-based cryptography,” which hides information in extremely complicated mathematical structures. To date, no known quantum algorithm can break through its defenses. But these schemes are too computationally intensive for IoT devices, which can spare only enough energy for simple data processing.

In a paper presented at the recent International Solid-State Circuits Conference, MIT researchers describe a novel circuit architecture and statistical optimization tricks that can be used to efficiently compute lattice-based cryptography. The 2-millimeter-squared chips the team developed are efficient enough for integration into any current IoT device.

The architecture is customizable to accommodate the multiple lattice-based schemes currently being studied in preparation for the day that quantum computers come online. “That might be a few decades from now, but figuring out if these techniques are really secure takes a long time,” said first author Utsav Banerjee, a graduate student in electrical engineering and computer science. “It may seem early, but earlier is always better.”

Moreover, the researchers say, the circuit is the first of its kind to meet standards for lattice-based cryptography set by the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce that finds and writes regulations for today’s encryption schemes.

Joining Banerjee on the paper are Anantha Chandrakasan, dean of MIT’s School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, and Abhishek Pathak of the Indian Institute of Technology.

Efficient Sampling

In the mid-1990s, MIT Professor Peter Shor developed a quantum algorithm that can break the public-key encryption schemes in wide use today. Since then, NIST has been trying to find the most secure postquantum encryption schemes. This happens in phases; each phase winnows down a list of the most secure and practical schemes. Two weeks ago [in mid-February], the agency entered its second phase for postquantum cryptography, with lattice-based schemes making up half of its list.

In the new study, the researchers first implemented on commercial microprocessors several NIST lattice-based cryptography schemes from the agency’s first phase. This revealed two bottlenecks for efficiency and performance: generating random numbers and data storage.

Generating random numbers is the most important part of all cryptography schemes, because those numbers are used to generate secure encryption keys that can’t be predicted. The numbers are produced through a two-part process called “sampling.”

Sampling first generates pseudorandom numbers from a known, finite set of values that have an equal probability of being selected. Then, a “postprocessing” step converts those pseudorandom numbers into a different probability distribution with a specified standard deviation — a measure of how widely the values spread — that randomizes the numbers further. Basically, the random numbers must satisfy carefully chosen statistical parameters. This difficult mathematical problem consumes about 80 percent of all computation energy needed for lattice-based cryptography.

After analyzing all available methods for sampling, the researchers found that one method, called SHA-3, can generate many pseudorandom numbers two or three times more efficiently than all others. They tweaked SHA-3 to handle lattice-based cryptography sampling and applied some mathematical tricks to make pseudorandom sampling, and the postprocessing conversion to new distributions, faster and more efficient.
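The two-step sampling idea can be illustrated with Python’s built-in SHAKE-128, an extendable-output function from the SHA-3 family, which expands a short seed into uniform pseudorandom bytes that are then post-processed into a small centered distribution. The centered binomial distribution and the `eta` parameter below are stand-ins chosen for simplicity; they are not the specific distributions or optimizations from the MIT paper.

```python
import hashlib

def expand(seed: bytes, n_bytes: int) -> bytes:
    # Step 1: stretch a short seed into a uniform pseudorandom byte
    # stream with SHAKE-128 (SHA-3 family, extendable output).
    return hashlib.shake_128(seed).digest(n_bytes)

def sample_centered(seed: bytes, n: int, eta: int = 2) -> list[int]:
    # Step 2 ("postprocessing"): convert uniform bytes into a small
    # centered distribution. Each output is (sum of eta bits) minus
    # (sum of eta bits), giving values in [-eta, eta] clustered at zero.
    stream = expand(seed, 2 * n)  # two bytes per coefficient (simple, wasteful)
    out = []
    for i in range(n):
        a_byte, b_byte = stream[2 * i], stream[2 * i + 1]
        a = sum((a_byte >> j) & 1 for j in range(eta))
        b = sum((b_byte >> j) & 1 for j in range(eta))
        out.append(a - b)
    return out

coeffs = sample_centered(b"example-seed", 8)
```

Because the stream is seeded, the same seed always yields the same coefficients, which is what lets a hardware implementation reproduce a sampling run deterministically.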

They ran this technique using energy-efficient custom hardware that takes up only 9 percent of the surface area of their chip. In the end, this makes the process of sampling two orders of magnitude more efficient than traditional methods.

Splitting the Data

On the hardware side, the researchers made innovations in data flow. Lattice-based cryptography processes data in vectors, which are tables of a few hundred or thousand numbers. Storing and moving those data requires physical memory components that take up around 80 percent of the hardware area of a circuit.

Traditionally, the data are stored on a single two- or four-port random access memory (RAM) device. Multiport devices enable the high data throughput required for encryption schemes, but they take up a lot of space.

For their circuit design, the researchers modified a technique called “number theoretic transform” (NTT), which functions similarly to the Fourier transform mathematical technique that decomposes a signal into the multiple frequencies that make it up. The modified NTT splits vector data and allocates portions across four single-port RAM devices. Each vector can still be accessed in its entirety for sampling as if it were stored on a single multiport device. The benefit is that the four single-port RAM devices occupy about a third less total area than one multiport device.
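A toy software model can show the banking idea. The interleaved index mapping below is chosen for illustration only; the paper’s actual NTT-aware mapping is more involved.

```python
NUM_BANKS = 4  # four single-port RAMs standing in for one multiport RAM

def to_banks(vec):
    # Interleave: coefficient i lives in bank i % 4, at address i // 4.
    banks = [[] for _ in range(NUM_BANKS)]
    for i, v in enumerate(vec):
        banks[i % NUM_BANKS].append(v)
    return banks

def read(banks, i):
    # The full vector is still addressable as if it sat in one flat memory.
    return banks[i % NUM_BANKS][i // NUM_BANKS]

vec = list(range(16))
banks = to_banks(vec)
assert [read(banks, i) for i in range(16)] == vec  # nothing is lost
```

The point of any such mapping is that accesses which must happen in the same cycle land in different banks, so four cheap single-port devices can match the throughput of one large multiport device.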

“We basically modified how the vector is physically mapped in the memory and modified the data flow, so this new mapping can be incorporated into the sampling process. Using these architecture tricks, we reduced the energy consumption and occupied area, while maintaining the desired throughput,” Banerjee said.

The circuit also incorporates a small instruction memory component that can be programmed with custom instructions to handle different sampling techniques — such as specific probability distributions and standard deviations — and different vector sizes and operations. This is especially helpful, as lattice-based cryptography schemes will most likely change slightly in the coming years and decades.

Adjustable parameters can also be used to optimize efficiency and security. The more complex the computation, the lower the efficiency, and vice versa. In their paper, the researchers detail how to navigate these tradeoffs with their adjustable parameters. Next, the researchers plan to tweak the chip to run all the lattice-based cryptography schemes listed in NIST’s second phase.

The work was supported by Texas Instruments and the TSMC University Shuttle Program.

Reprinted with permission of MIT News.



Giving Keener ‘Electric Eyesight’ to Autonomous Vehicles

On-chip system that detects signals at sub-terahertz wavelengths could help steer driverless cars through fog and dust.

By Rob Matheson | MIT News

February 14, 2019

Autonomous vehicles relying on light-based image sensors often struggle to see through blinding conditions, such as fog. But MIT researchers have developed a sub-terahertz-radiation receiving system that could help steer driverless cars when traditional methods fail.

Sub-terahertz wavelengths, which are between microwave and infrared radiation on the electromagnetic spectrum, can be detected through fog and dust clouds with ease, whereas the infrared-based LiDAR imaging systems used in autonomous vehicles struggle. To detect objects, a sub-terahertz imaging system sends an initial signal through a transmitter; a receiver then measures the absorption and reflection of the rebounding sub-terahertz wavelengths. That sends a signal to a processor that recreates an image of the object.

But implementing sub-terahertz sensors into driverless cars is challenging. Sensitive, accurate object-recognition requires a strong output baseband signal from receiver to processor. Traditional systems, made of discrete components that produce such signals, are large and expensive. Smaller, on-chip sensor arrays exist, but they produce weak signals.

In a paper published online on February 8 by the IEEE Journal of Solid-State Circuits, the researchers describe a two-dimensional, sub-terahertz receiving array on a chip that’s orders of magnitude more sensitive, meaning it can better capture and interpret sub-terahertz wavelengths in the presence of a lot of signal noise.

To achieve this, they implemented a scheme of independent signal-mixing pixels — called “heterodyne detectors” — that are usually very difficult to densely integrate into chips. The researchers drastically shrank the size of the heterodyne detectors so that many of them can fit into a chip. The trick was to create a compact, multipurpose component that can simultaneously down-mix input signals, synchronize the pixel array, and produce strong output baseband signals.

The researchers built a prototype, which has a 32-pixel array integrated on a 1.2-square-millimeter device. The pixels are approximately 4,300 times more sensitive than the pixels in today’s best on-chip sub-terahertz array sensors. With a little more development, the chip could potentially be used in driverless cars and autonomous robots.

“A big motivation for this work is having better ‘electric eyes’ for autonomous vehicles and drones,” said co-author Ruonan Han, an associate professor of electrical engineering and computer science, and director of the Terahertz Integrated Electronics Group in the MIT Microsystems Technology Laboratories (MTL). “Our low-cost, on-chip sub-terahertz sensors will play a complementary role to LiDAR for when the environment is rough.”

Joining Han on the paper are first author Zhi Hu and co-author Cheng Wang, both Ph.D. students in the Department of Electrical Engineering and Computer Science working in Han’s research group.

Decentralized Design

The key to the design is what the researchers call “decentralization.” In this design, a single pixel — a “heterodyne” pixel — generates both the frequency beat (the frequency difference between two incoming sub-terahertz signals) and the “local oscillation,” an electrical signal used to shift the frequency of the incoming signal. This “down-mixing” process produces a signal in the megahertz range that can be easily interpreted by a baseband processor.
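With illustrative numbers (the article does not give the chip’s exact operating frequencies), the arithmetic of down-mixing looks like this:

```python
# Assumed example frequencies, not figures from the paper: mixing the
# incoming sub-terahertz signal with the pixel's local oscillation signal
# yields a beat at the difference frequency, down in the megahertz range.
f_input = 240.00e9   # incoming sub-terahertz signal, Hz
f_lo    = 239.95e9   # pixel's local oscillation signal, Hz

f_beat = abs(f_input - f_lo)       # frequency beat after down-mixing
print(f"{f_beat / 1e6:.0f} MHz")   # prints "50 MHz"
```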

The output signal can be used to calculate the distance of objects, similar to how LiDAR calculates the time it takes a laser to hit an object and rebound. In addition, combining the output signals of an array of pixels, and steering the pixels in a certain direction, can enable high-resolution images of a scene. This allows for not only the detection but also the recognition of objects, which is critical in autonomous vehicles and robots.
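The distance calculation itself is ordinary time-of-flight arithmetic, the same relation LiDAR uses; the 200-nanosecond round trip below is an assumed example value.

```python
C = 299_792_458.0  # speed of light, m/s

def distance_m(round_trip_s: float) -> float:
    # Halve the round trip: the signal travels out to the object and back.
    return C * round_trip_s / 2

# An assumed 200-nanosecond round trip corresponds to about 30 meters.
d = distance_m(200e-9)
```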

Heterodyne pixel arrays work only when the local oscillation signals from all pixels are synchronized, meaning that a signal-synchronizing technique is needed. Centralized designs include a single hub that shares local oscillation signals to all pixels.

These designs are usually used by receivers of lower frequencies, and can cause issues at sub-terahertz frequency bands, where generating a high-power signal from a single hub is notoriously difficult. As the array scales up, the power shared by each pixel decreases, reducing the output baseband signal strength, which is highly dependent on the power of local oscillation signal. As a result, a signal generated by each pixel can be very weak, leading to low sensitivity. Some on-chip sensors have started using this design but are limited to eight pixels.

The researchers’ decentralized design tackles this scale-sensitivity trade-off. Each pixel generates its own local oscillation signal, used for receiving and down-mixing the incoming signal. In addition, an integrated coupler synchronizes its local oscillation signal with that of its neighbor. This gives each pixel more output power, since the local oscillation signal does not flow from a global hub.

A good analogy for the new decentralized design is an irrigation system, Han said. A traditional irrigation system has one pump that directs a powerful stream of water through a pipeline network that distributes water to many sprinkler sites. Each sprinkler spits out water that has a much weaker flow than the initial flow from the pump. If you want the sprinklers to pulse at the exact same rate, that would require another control system.

The researchers’ design, on the other hand, gives each site its own water pump, eliminating the need for connecting pipelines, and gives each sprinkler its own powerful water output. Each sprinkler also communicates with its neighbor to synchronize their pulse rates. “With our design, there’s essentially no boundary for scalability,” Han says. “You can have as many sites as you want, and each site still pumps out the same amount of water … and all pumps pulse together.”

The new architecture, however, potentially makes the footprint of each pixel much larger, which poses a great challenge to the large-scale, high-density integration in an array fashion. In their design, the researchers combined various functions of four traditionally separate components — antenna, down mixer, oscillator, and coupler — into a single “multitasking” component given to each pixel. This allows for a decentralized design of 32 pixels.

“We designed a multifunctional component for a [decentralized] design on a chip and combined a few discrete structures to shrink the size of each pixel,” Hu said. “Even though each pixel performs complicated operations, it keeps its compactness, so we can still have a large-scale dense array.”

Guided by Frequencies

In order for the system to gauge an object’s distance, the frequency of the local oscillation signal must be stable.

To that end, the researchers incorporated into their chip a component called a phase-locked loop, which locks the sub-terahertz frequency of all 32 local oscillation signals to a stable, low-frequency reference. Because the pixels are coupled, their local oscillation signals all share identical, high-stability phase and frequency. This ensures that meaningful information can be extracted from the output baseband signals. This entire architecture minimizes signal loss and maximizes control.

“In summary, we achieve a coherent array, at the same time with very high local oscillation power for each pixel, so each pixel achieves high sensitivity,” Hu said.

Reprinted with permission of MIT News.
