In a sign of the times, researchers at Georgia Tech are focusing their work in two key areas—low-power integrated circuits and machine learning—that could help spur innovations across a wide swath of industries.
Demand for small robots that can function autonomously and make decisions on their own is booming. In that context, one group of researchers is working to develop an ultra-low power chip that enables palm-sized robots to collaborate, learn, and conserve power in innovative ways.
In another project, researchers are using machine learning to analyze capacitor materials more quickly, in the hopes that information gained from the analysis can be used to design electronic materials superior to those currently available. The new machine learning method is said to produce results eight orders of magnitude faster than the conventional technique based on quantum mechanics.
Following are two reports from Georgia Tech Research News that shed light on what the researchers are trying to accomplish and the possible implications of their work.
Ultra-Low Power Chips Help Make Small Robots More Capable
By John Toon
ATLANTA—(March 5, 2019)—An ultra-low power hybrid chip inspired by the brain could help give palm-sized robots the ability to collaborate and learn from their experiences. Combined with new generations of low-power motors and sensors, the new application-specific integrated circuit (ASIC), which operates on just milliwatts of power, could help intelligent swarm robots operate for hours instead of minutes.
To conserve power, the chips use a hybrid digital-analog time-domain processor in which the pulse width of signals encodes information. The neural network IC accommodates both model-based programming and collaborative reinforcement learning, potentially giving the small robots greater capabilities for reconnaissance, search-and-rescue, and other missions.
Researchers from the Georgia Institute of Technology demonstrated robotic cars driven by the unique ASICs at the 2019 IEEE International Solid-State Circuits Conference (ISSCC). The research was sponsored by the Defense Advanced Research Projects Agency (DARPA) and the Semiconductor Research Corporation (SRC) through the Center for Brain-inspired Computing Enabling Autonomous Intelligence (CBRIC).
“We are trying to bring intelligence to these very small robots so they can learn about their environment and move around autonomously, without infrastructure,” said Arijit Raychowdhury, associate professor in Georgia Tech’s School of Electrical and Computer Engineering. “To accomplish that, we want to bring low-power circuit concepts to these very small devices so they can make decisions on their own. There is a huge demand for very small, but capable robots that do not require infrastructure.”
The cars demonstrated by Raychowdhury and graduate students Ningyuan Cao, Muya Chang, and Anupam Golder navigate through an arena floored by rubber pads and surrounded by cardboard block walls. As they search for a target, the robots must avoid traffic cones and each other, learning from the environment as they go and continuously communicating with each other.
The cars use inertial and ultrasound sensors to determine their location and detect objects around them. Information from the sensors goes to the hybrid ASIC, which serves as the “brain” of the vehicles. Instructions then go to a Raspberry Pi controller, which sends instructions to the electric motors.
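The pipeline described above can be sketched in a few lines of code. This is a hypothetical illustration of the data flow only; all function names, thresholds, and motor commands are placeholders, not the team's actual software.

```python
# Hypothetical sketch of the control pipeline: sensor readings feed the
# ASIC "brain," whose decision is relayed through a controller (a
# Raspberry Pi in the demo) to the electric motors.

def read_sensors():
    # Inertial + ultrasound readings (stubbed with fixed values here).
    return {"heading_deg": 90.0, "obstacle_cm": 25.0}

def asic_decide(sensors):
    # The hybrid ASIC maps sensor state to a high-level command.
    if sensors["obstacle_cm"] < 30.0:
        return "turn_left"
    return "forward"

def controller_dispatch(command):
    # The controller translates the command into motor instructions,
    # here as (left, right) wheel speeds.
    motor_cmds = {"forward": (1.0, 1.0), "turn_left": (0.2, 1.0)}
    return motor_cmds[command]

left, right = controller_dispatch(asic_decide(read_sensors()))
```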
In palm-sized robots, three major systems consume power: the motors and controllers used to drive and steer the wheels; the processor; and the sensing system. In the cars built by Raychowdhury’s team, the low-power ASIC means that the motors consume the bulk of the power. “We have been able to push the compute power down to a level where the budget is dominated by the needs of the motors,” he said.
The team is working with collaborators on motors based on micro-electromechanical systems (MEMS) technology that can operate with much less power than conventional motors.
“We would want to build a system in which sensing power, communications and computer power, and actuation are at about the same level, on the order of hundreds of milliwatts,” said Raychowdhury, who is the ON Semiconductor Associate Professor in the School of Electrical and Computer Engineering. “If we can build these palm-sized robots with efficient motors and controllers, we should be able to provide runtimes of several hours on a couple of AA batteries. We now have a good idea what kind of computing platforms we need to deliver this, but we still need the other components to catch up.”
In time-domain computing, information is encoded in the width of pulses that switch between two voltage levels. That gives the circuits the energy-efficiency advantages of analog circuits along with the robustness of digital devices.
“The size of the chip is reduced by half, and the power consumption is one-third what a traditional digital chip would need,” said Raychowdhury. “We used several techniques in both logic and memory designs for reducing power consumption to the milliwatt range while meeting target performance.”
With each pulse width representing a different value, the system is slower than purely digital or analog devices, but Raychowdhury said the speed is sufficient for the small robots. (A milliwatt is one-thousandth of a watt.)
“For these control systems, we don’t need circuits that operate at multiple gigahertz because the devices aren’t moving that quickly,” he said. “We are sacrificing a little performance to get extreme power efficiencies. Even if the compute operates at 10 or 100 megahertz, that will be enough for our target applications.”
The 65-nanometer CMOS chips accommodate both kinds of learning appropriate for a robot. The system can be programmed to follow model-based algorithms, and it can learn from its environment using a reinforcement system that encourages better and better performance over time, much like a child who learns to walk by bumping into things.
“You start the system out with a predetermined set of weights in the neural network so the robot can start from a good place and not crash immediately or give erroneous information,” Raychowdhury said. “When you deploy it in a new location, the environment will have some structures that it will recognize and some that the system will have to learn. The system will then make decisions on its own, and it will gauge the effectiveness of each decision to optimize its motion.”
Communication between the robots allows them to collaborate to seek a target.
“In a collaborative environment, the robot not only needs to understand what it is doing, but also what others in the same group are doing,” he said. “They will be working to maximize the total reward of the group as opposed to the reward of the individual.”
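The two ingredients above (starting from pretrained weights, then learning from a shared group reward) can be sketched with tabular Q-learning. This is a minimal illustration of the learning scheme, not the chip's actual neural-network learner; the actions, constants, and reward values are assumptions for the example.

```python
import random

ACTIONS = ["forward", "left", "right"]
ALPHA, GAMMA = 0.1, 0.9  # learning rate, discount factor (assumed)

def make_robot(pretrained):
    # Start from a predetermined set of weights so the robot begins
    # "from a good place" rather than acting randomly.
    return {"q": dict(pretrained)}

def choose(robot, epsilon=0.1):
    if random.random() < epsilon:                    # occasionally explore
        return random.choice(ACTIONS)
    return max(robot["q"], key=robot["q"].get)       # otherwise exploit

def update(robots, actions, team_reward):
    # Every robot learns from the *group* reward, not its own,
    # so the swarm maximizes the total reward of the team.
    for robot, action in zip(robots, actions):
        best_next = max(robot["q"].values())
        robot["q"][action] += ALPHA * (
            team_reward + GAMMA * best_next - robot["q"][action])

robots = [make_robot({"forward": 1.0, "left": 0.0, "right": 0.0})
          for _ in range(3)]
acts = [choose(r) for r in robots]
update(robots, acts, team_reward=1.0)  # all three share one reward
```

Feeding the same `team_reward` to every robot's update is what makes the objective collaborative: an action helps a robot's value table only insofar as it helped the group.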
With their ISSCC demonstration providing a proof-of-concept, the team is continuing to optimize designs and is working on a system-on-chip to integrate the computation and control circuitry.
“We want to enable more and more functionality in these small robots,” Raychowdhury added. “We have shown what is possible, and what we have done will now need to be augmented by other innovations.”
This project was supported by the Semiconductor Research Corporation under grant JUMP CBRIC task ID 2777.006.
CITATION: Ningyuan Cao, Muya Chang, Arijit Raychowdhury, “A 65 nm 1.1-to-9.1 TOPS/W Hybrid-Digital-Mixed-Signal Computing Platform for Accelerating Model-Based and Model-Free Swarm Robotics.” (2019 IEEE International Solid-State Circuits Conference).
Reprinted with permission of Georgia Tech Research News.
Researchers Use Machine Learning to More Quickly Analyze Key Capacitor Materials
By Josh Brown
ATLANTA—(March 4, 2019)—Capacitors, given their high energy output and recharging speed, could play a major role in powering the machines of the future, from electric cars to cell phones.
But the biggest hurdle for these energy storage devices is that they store much less energy than a battery of similar size.
Researchers at Georgia Institute of Technology are tackling that problem in a novel way, using machine learning to ultimately find ways to build more capable capacitors.
The method, which was described in the journal npj Computational Materials (February 18) and sponsored by the U.S. Office of Naval Research, involves teaching a computer to analyze, at an atomic level, two materials that make up some capacitors: aluminum and polyethylene.
The researchers focused on finding a way to more quickly analyze the electronic structure of those materials, looking for features that could affect performance.
“The electronics industry wants to know the electronic properties and structure of all of the materials they use to produce devices, including capacitors,” said Rampi Ramprasad, a professor in the School of Materials Science and Engineering.
Take a material like polyethylene: it is a very good insulator with a large band gap—an energy range forbidden to electrical charge carriers. But a defect can introduce states within the band gap that admit unwanted charge carriers, reducing efficiency, he said.
“In order to understand where the defects are and what role they play, we need to compute the entire atomic structure, something that so far has been extremely difficult,” said Ramprasad, who holds the Michael E. Tennenbaum Family Chair and is the Georgia Research Alliance Eminent Scholar in Energy Sustainability. “The current method of analyzing those materials using quantum mechanics is so slow that it limits how much analysis can be performed at any given time.”
Ramprasad and his colleagues, who specialize in using machine learning to help develop new materials, used a sample of data created from a quantum mechanics analysis of aluminum and polyethylene as an input to teach a powerful computer how to simulate that analysis.
Analyzing the electronic structure of a material with quantum mechanics involves solving the Kohn-Sham equation of density functional theory, which generates data on wave functions and energy levels. That data is then used to compute the total potential energy of the system and atomic forces.
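The surrogate strategy can be illustrated with a toy regression. This sketch is not the paper's actual model: the "expensive" function below merely stands in for a Kohn-Sham calculation, the features are made up, and a linear fit replaces the neural network used in the real work, just to show the train-once, predict-cheaply pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_reference(x):
    # Stand-in for a quantum-mechanics calculation: maps a feature
    # vector describing an atomic environment to an energy-like value.
    return 2.0 * x[0] - 0.5 * x[1] + 0.3 * x[2]

# Label a training set with the expensive method (done once, up front).
X_train = rng.normal(size=(200, 3))
y_train = np.array([expensive_reference(x) for x in X_train])

# Fit a cheap surrogate by least squares (the real work used a neural
# network; a linear fit keeps the sketch self-contained).
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# The surrogate now predicts without invoking the expensive method.
x_new = np.array([1.0, 2.0, 3.0])
prediction = x_new @ w  # matches expensive_reference(x_new)
```

Once trained, every prediction costs a dot product instead of a full quantum-mechanical solve, which is where the orders-of-magnitude speedup comes from.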
Using the new machine learning method produces similar results eight orders of magnitude faster than using the conventional technique based on quantum mechanics.
“This unprecedented speedup in computational capability will allow us to design electronic materials that are superior to what is currently out there,” Ramprasad said. “Basically, we can say, ‘Here are defects with this material that will really diminish the efficiency of its electronic structure.’ And once we can address such aspects efficiently, we can better design electronic devices.”
While the study focused on aluminum and polyethylene, machine learning could be used to analyze the electronic structure of a wide range of materials. Beyond analyzing electronic structure, other aspects of material structure now analyzed by quantum mechanics could also be hastened by the machine learning approach, Ramprasad said.
“In part we selected aluminum and polyethylene because they are components of a capacitor, but it also allowed us to demonstrate that you can use this method for vastly different materials, such as metals that are conductors and polymers that are insulators,” Ramprasad said.
The faster processing allowed by the machine learning method would also enable researchers to more quickly simulate how modifications to a material will impact its electronic structure, potentially revealing new ways to improve its efficiency.
This research was supported by the Office of Naval Research under grant No. N0014-17-1-2656. The content is the responsibility of the authors and does not necessarily represent the official views of the sponsoring agency.
CITATION: Anand Chandrasekaran, Deepak Kamal, Rohit Batra, Chiho Kim, Lihua Chen, and Rampi Ramprasad, “Solving the electronic structure problem with machine learning,” (npj Computational Materials, 2019). http://dx.doi.org/10.1038/s41524-019-0162-7
Reprinted with permission of Georgia Tech Research News.