
AI-empowered neural processing for personal and biomedical assistance

  • Neural signal processing and classification are used in personal and biomedical assistance technologies.
  • Artificial intelligence offers new capability for neural processing.
  • AI computing is power-hungry and time-consuming, and existing hardware supports it poorly in real-time applications.
  • Professor Jie Gu at the VLSI Research Lab, Northwestern University, USA, takes a new approach ‒ embedded AI technology. His AI-empowered neural processing devices are designed for wearable biomedical devices and use 1,000 times less power.
  • The applications of this technology extend into robotics, virtual reality, intelligent human‒machine interfaces, and much more.

Neural signal processing is important for rehabilitation technologies as well as personal assistance tools such as activity tracking. Physiological signals are detected and classified to determine user activity and health information. Traditionally, neural signals are sent to a remote PC for analysis, but this process is slow and requires vast amounts of power. In recent years, the use of artificial intelligence (AI) for neural processing has revolutionised the fields of personal assistance and biomedical applications. However, the high computational requirements of AI algorithms prohibit their real-time use on devices, especially for biomedical applications.

Addressing this problem is Professor Jie Gu and his team at the VLSI Research Lab, Northwestern University, USA. The researchers are specialists in energy-efficient computing hardware. Gu explains, ‘The recent surge in AI algorithms and associated AI hardware allows AI or deep learning tasks to be performed online within the same device. However, these techniques have not been made available for biomedical devices yet due to the extreme constraints on device size and power.’

For the first time, the VLSI Research Lab have developed a number of devices for personal and biomedical assistance, each built around a single, tiny customised silicon chip. Their innovative chips incorporate all the functionalities required for neural sensing and processing using a specially designed AI core. Their solution offers real-time classification of biomedical information faster and with lower energy than ever before, through miniature, low-power devices. Beyond biomedical applications such as rehabilitation and human personal assistance, the applications of their research extend much further ‒ into robotics, virtual reality, intelligent human‒machine interfaces, and more.


AI-empowered neural processing

The sustainable use of wearable devices requires low-power digital computation. For this, an application-specific integrated circuit (ASIC) is used. Neural activity is detected using analog sensing; these weak signals are then amplified by a low-noise amplifier. The data is finally converted from analog to digital in preparation for AI classification and analysis.
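To make the flow concrete, the sketch below walks a simulated signal through the same four stages ‒ sense, amplify, digitise, classify. This is a minimal Python illustration, not the team's firmware: the gain, ADC resolution, energy threshold, and all function names are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sense(n_samples=200):
    """Simulated weak analog neural signal (tens of microvolts) plus noise."""
    t = np.linspace(0, 1, n_samples)
    return 50e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(n_samples)

def amplify(signal, gain=1000):
    """Low-noise amplifier stage: boost the microvolt signal into the ADC's range."""
    return gain * signal

def adc(signal, bits=10, v_ref=1.0):
    """Analog-to-digital conversion: quantise the amplified signal to 10-bit codes."""
    levels = 2 ** bits
    clipped = np.clip(signal, -v_ref, v_ref)
    return np.round((clipped + v_ref) / (2 * v_ref) * (levels - 1)).astype(int)

def classify(codes, threshold=100):
    """Toy digital back-end: label the window 'active' if its energy is high."""
    energy = np.mean((codes - codes.mean()) ** 2)
    return "active" if energy > threshold else "rest"

print(classify(adc(amplify(sense()))))  # expected: 'active'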

Dramatic reductions in resources

The VLSI Research Lab team demonstrate how their solution shrinks a system of between 5 and 10 chips down to a single chip, with huge reductions in operation time and power. Where current solutions (eg, OMAP processors) use 100–500mW of power, their new system requires only 0.03–0.15mW and fits on a 5mm by 20mm printed circuit board. Conventional solutions incur a minimum latency of 30 milliseconds (ms) ‒ the time taken from signal collection to classification result. In contrast, the single-chip system of Gu and his team requires only 3ms, a critical feature for real-time physiological signal processing.
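As a rough back-of-envelope check (our calculation, not the team's published analysis), the figures above can be combined into energy per classification, taking the conservative end of each range:

$$
E_{\text{conventional}} \approx 100\,\mathrm{mW} \times 30\,\mathrm{ms} = 3\,\mathrm{mJ},
\qquad
E_{\text{chip}} \approx 0.15\,\mathrm{mW} \times 3\,\mathrm{ms} = 0.45\,\mathrm{\mu J}
$$

a saving of roughly 6,000-fold per classification, comfortably above the 1,000-fold energy-efficiency figure quoted in the next section.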

Enabling AI core

So, how does the VLSI Research Lab’s chip work? The key enabling element of their technology is the AI core. The AI core is specially designed with a reconfigurable neural processor and supports the most commonly used AI algorithms, including neural networks, long short-term memory (LSTM), and data compression. Their innovative chip achieves an impressive 1,000-fold improvement in energy efficiency and much shorter latency compared with a traditional central processing unit (CPU) or graphics processing unit (GPU). The researchers also use a tiny form factor (a hardware design feature defining the size and shape of components) of only 5–20mm. They have neatly integrated the analog front-end circuitry with the digital back-end classifier ‒ the AI computing core. Importantly, neural processing and classification are embedded in the same chip. The integration of a power management circuit, a low-noise amplifier, analog-to-digital conversion, and the AI core means that this customised system-on-chip can support complete real-time, end-to-end AI inference (evaluation and analysis of new information) for neural signal processing in a single device. These customised silicon chips are the next generation of wearable and biomedical devices.
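The notion of a reconfigurable neural processor can be illustrated with a toy model: one shared multiply-accumulate (MAC) datapath that is ‘configured’ either as a feedforward layer or as an LSTM step. This is a speculative numpy sketch of the concept only ‒ the actual core is custom silicon, and every dimension, weight, and function name here is invented for illustration.

```python
import numpy as np

def mac_array(w, x, b):
    """Shared multiply-accumulate primitive: one datapath that every
    supported algorithm reuses ‒ the essence of a reconfigurable core."""
    return w @ x + b

def dense_relu(params, x):
    """Core configured as a feedforward neural-network layer."""
    return np.maximum(0, mac_array(params["w"], x, params["b"]))

def lstm_step(params, x, h, c):
    """Core reconfigured as one long short-term memory (LSTM) time step."""
    sig = lambda v: 1 / (1 + np.exp(-v))
    z = mac_array(params["w"], np.concatenate([x, h]), params["b"])
    i, f, o, g = np.split(z, 4)                # input/forget/output/candidate gates
    c_new = sig(f) * c + sig(i) * np.tanh(g)   # update the cell state
    return sig(o) * np.tanh(c_new), c_new      # new hidden and cell states

rng = np.random.default_rng(1)
x = rng.standard_normal(8)

# 'Configure' the core as a dense layer...
dense = {"w": rng.standard_normal((4, 8)), "b": np.zeros(4)}
print(dense_relu(dense, x))

# ...then reconfigure the same datapath as an LSTM cell.
lstm = {"w": rng.standard_normal((16, 8 + 4)), "b": np.zeros(16)}
h, c = lstm_step(lstm, x, np.zeros(4), np.zeros(4))
print(h)
```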

The tiny customised silicon chip developed by the VLSI Research Lab incorporates all the functionalities required for real-time neural sensing and AI-based classification. Measuring 5mm by 20mm, it is the smallest device of its kind in the world.

Pushing boundaries

To overcome limitations in existing computing design, the VLSI Research Lab have set their sights on designing new technologies. As Gu highlights, ‘With AI, 5G, and the Internet of Things (IoT) just around the corner, we are at an exciting time ‒ there are many new opportunities for computing hardware. This is a critical point where revolutionary circuitry, architecture, and system solutions are urgently needed to meet the ever-increasing demands on computing power.’

With this in mind, the VLSI Research Lab have developed a device that senses human EMG signals (biomedical signals measuring electrical currents generated in contracting muscles that represent neuromuscular activities) and classifies gestures for rehabilitation use in the control of prosthetic limbs. The human‒machine interface device is also capable of transferring classification results via an app to display movement activity on the user’s smartphone.
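As an illustration of the kind of processing involved, the sketch below extracts two textbook EMG features (mean absolute value and zero-crossing rate) from a window of samples and matches them against stored gesture prototypes. The features are standard in the EMG literature, but this particular feature set, the nearest-prototype matching, and all parameters are assumptions ‒ a stand-in for the chip's on-board neural network classifier.

```python
import numpy as np

def emg_features(window):
    """Two classic EMG features per analysis window: mean absolute value
    (contraction intensity) and normalised zero-crossing rate (a crude
    frequency measure)."""
    mav = np.mean(np.abs(window))
    zc_rate = np.count_nonzero(np.diff(np.sign(window))) / len(window)
    return np.array([mav, zc_rate])

def classify_gesture(features, prototypes):
    """Nearest-prototype classifier: pick the gesture whose stored feature
    vector is closest to the observed one."""
    return min(prototypes, key=lambda g: np.linalg.norm(features - prototypes[g]))

rng = np.random.default_rng(2)
rest = 0.05 * rng.standard_normal(256)   # low-amplitude baseline EMG
grip = 0.80 * rng.standard_normal(256)   # strong muscle contraction

prototypes = {"rest": emg_features(rest), "grip": emg_features(grip)}
test = 0.70 * rng.standard_normal(256)   # unseen contraction window
print(classify_gesture(emg_features(test), prototypes))  # expected: 'grip'
```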

The embedded AI core brings AI computing power to the tiny silicon chip for human activity tracking. Multiple chips can collaborate for whole-body gait tracking.

Wireless-powered implantable device

Building on their biosignal processing technology, the VLSI Research Lab have developed an implantable device. Compared with skin-mounted devices, a device implanted in the muscle causes less skin irritation, requires fewer wires, and offers far better signal quality. The researchers’ customised silicon chip has a coil on its surface that enables wireless power delivery. The chip is only 2mm by 2mm and the entire device (including the PCB) measures just 5mm by 20mm, making it the smallest device of its kind in the world. This single tiny chip can be implanted into an amputee’s muscle to interpret their motion intention.


Multi-chip collaboration for activity recognition

The VLSI Research Lab team have created a special infrared daisy-chained communication technique that connects multiple devices in sequence. This allows multiple devices to collaborate on both classification and communication across the body.

The team deploy this multi-chip approach, built on their customised chip, in a wearable device for augmented reality (AR) and virtual reality (VR) applications. Unlike existing camera-based AR/VR devices, their system does not need a camera. This groundbreaking device measures the user’s EMG signals together with the relative positions of multiple limbs. The integrated AI computing core ‒ a specialised deep learning accelerator that significantly improves the speed and efficiency of AI inference workloads ‒ classifies the user’s movements and whole-body gait in real time from the EMG signals and the distances between the devices worn on the body. Daisy-chained infrared communication lets the individual devices share their information and collaboratively compute the whole body’s movement.
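A minimal software analogue of the daisy chain is sketched below: each node appends its locally computed feature to a message relayed hop by hop, and the collected message is fused into a whole-body decision. The node names, message format, and fusion rule are all hypothetical ‒ the real devices exchange data over infrared links and classify with the on-chip AI core.

```python
import numpy as np

class ChipNode:
    """One wearable sensing node in the infrared daisy chain. Each node
    appends its locally computed EMG feature to the message it receives
    and forwards the result to the next node in the chain."""
    def __init__(self, name, emg_window):
        self.name = name
        self.emg_window = emg_window

    def relay(self, message):
        local_feature = float(np.mean(np.abs(self.emg_window)))  # local MAV
        return message + [(self.name, local_feature)]

def fuse(message, threshold=0.3):
    """Collaborative whole-body decision from the chain's collected features.
    A toy rule: 'walking' if mean activity across the limbs is high."""
    mean_activity = np.mean([feature for _, feature in message])
    return "walking" if mean_activity > threshold else "standing"

rng = np.random.default_rng(3)
chain = [ChipNode(limb, 0.6 * rng.standard_normal(128))
         for limb in ["left_arm", "right_arm", "left_leg", "right_leg"]]

message = []
for node in chain:                 # daisy-chained, hop-by-hop relay
    message = node.relay(message)
print(fuse(message))               # expected: 'walking'
```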

The novel approach by the VLSI Research Lab team embeds in-situ AI functions for intelligent operations, eliminating the need for additional high-power digital processors and remote data communication. In this way, they are truly empowering the next generation of wearable devices.

What inspired you to develop the customised silicon chips?

Many modern computing tasks have been significantly improved by deep learning-based AI algorithms. But this creates tremendous challenges for hardware. Commonly used embedded processors are orders of magnitude away from supporting AI computing in real-time applications due to inadequate computing capability, memory, and power. As a result, despite the strong power of AI, we have rarely seen AI deployed in real-time signal processing for biomedical applications. The only way to use AI in real-time biomedical operations or human‒machine interfaces (such as AR or VR) is through specialised silicon chips with bespoke computing architecture for AI algorithms, which is what we deliver.

How has the integration of AI algorithms with neural sensing techniques revolutionised the fields of personal assistance and biomedical applications, according to your research and experience?

In the same way that AI shapes our modern experience ‒ think facial recognition and natural language processing ‒ AI algorithms are revolutionising the fields of personal assistance and biomedical applications. However, AI algorithms require tremendous computing power, which hinders their use in wearable and biomedical applications that are extremely sensitive to device size, power consumption, and computing latency. It is impossible ‒ from both a size and a power point of view ‒ to use graphics processing unit (GPU) computing systems in wearable devices. In contrast, a fully integrated ASIC chip with all the required functionalities provides the support needed to enable AI computation in a wearable environment. For biomedical applications in particular, a fully integrated device enables in-situ computation with a long operating time, lasting weeks, even months. It also has the potential for implantation into the human body.

What is the biggest challenge you faced in developing wearable and implantable devices?

From a technology point of view, the design of the AI core is the most critical enabling technique here, and it differentiates our device from others on the market. The AI core needs to be general enough to support all the potential needs of AI algorithms, yet specialised enough to be low power and fast. Given the high computing and memory requirements of AI, the AI core needs to be co-designed with the targeted AI algorithms to achieve optimisation at the system level. From a practical point of view, getting this system to work on a real person in a non-stationary setting is not trivial, owing to motion artifacts affecting the electronic devices. This requires adjustments and fine-tuning in both the sensing circuitry and the AI computing models.

What advice would you give a young researcher who’s interested in a career in energy efficient computing in the wearable and biomedical space?

While many of these applications are not entirely new, the recent development of AI brings tremendous new opportunities for researchers and engineers. This is an exciting time to work in multidisciplinary fields such as biomedical devices and human‒machine interfaces, which require deep knowledge of both signal processing and modern computing. Young researchers should think outside their domain boxes and leverage cross-disciplinary knowledge to spark creative ideas and market opportunities.


Further reading

Wei, Y, et al, (2023) Human activity recognition SoC for AR/VR with integrated neural sensing, AI classification and chained infrared communication for multi-chip collaboration, VLSI Symposium on Circuits and Technology (VLSI).

Wei, Y, et al, (2022) A 65nm implantable gesture classification SoC for rehabilitation with enhanced data compression and encoding for robust neural network operation under wireless power condition, IEEE Custom Integrated Circuits Conference (CICC).

Wei, Y, et al, (2021) A gesture classification SoC for rehabilitation with ADC-less mixed-signal feature extraction and training capable neural network classifier, IEEE Journal of Solid-State Circuits, 56, 3, 876–86.

Wei, Y, et al, (2020) A wearable bio-signal processing system with ultra-low-power SoC and collaborative neural network classifier for low dimensional data communication, Annual Int Conf IEEE Eng Med Biol Soc, 4002–7.

Jie Gu

Jie Gu is an associate professor at Northwestern University, USA. From 2008 to 2014, he worked at Texas Instruments and Maxlinear on the design of low-power mobile processors. His research focuses on deep learning accelerators and intelligent wearable devices empowered by AI techniques. Gu is a recipient of the NSF CAREER award.

Contact Details

e:
t: +1 847 467 5854
w: nu-vlsi.eecs.northwestern.edu


Funding

NSF: CNS-1816870 and CCF-2208573

Collaborators

  • Professor Levi Hargrove, Shirley Ryan AbilityLab
  • Professor Jose Pons, Shirley Ryan AbilityLab
  • Yijie Wei
  • Zhiwei Zhong
  • Xi Chen
  • Kofi Otseidu (currently at Intel)

Cite this Article

Gu, J, (2023) AI-empowered neural processing for personal and biomedical assistance. Research Features, 148. Available at: 10.26904/RF-148-4809923428

Creative Commons Licence

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).
