Academics
Projects developed by undergraduate students using neuromorphic sensing, algorithms, and processing hardware.
A spiking neural network built from LuI silicon neurons converts the sound-processing output of an ANN into spike trains that drive a servo motor. A second ANN classifies the spike trains to determine the motor direction.
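As a rough illustration of the encoding step, here is a hypothetical rate-coding scheme in Python (the function name, rates, and the left/right spike-count comparison are assumptions for illustration; the actual project performs this step in LuI analog hardware):

```python
import numpy as np

def rate_encode(value, duration=0.1, max_rate=200.0, dt=1e-3, rng=None):
    """Poisson rate coding: a larger ANN output yields a denser spike train."""
    if rng is None:
        rng = np.random.default_rng()
    rate = np.clip(value, 0.0, 1.0) * max_rate   # spikes per second
    n_steps = int(duration / dt)
    return rng.random(n_steps) < rate * dt       # boolean spike train

# Encode two ANN outputs and pick a motor direction from the spike counts
left, right = rate_encode(0.8), rate_encode(0.2)
print("left" if left.sum() > right.sum() else "right")
```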
Contact:
Selin Schmitt (schmittse79503@th-nuernberg.de)
Timon Löwl (loewlti80780@th-nuernberg.de)
Sven Remy (remysv80967@th-nuernberg.de)
Neuromorphic data encoding and decoding with LuI silicon neurons, implementing an efficient spike-train-based Text2Morse converter.
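A minimal sketch of the Morse timing scheme such a converter has to realize (the table excerpt and function are illustrative; the project encodes and decodes with LuI silicon neurons rather than in software):

```python
import numpy as np

MORSE = {"s": "...", "o": "---"}  # excerpt of the full Morse table

def text_to_spikes(text, dt=1e-3, dot=0.05):
    """Standard Morse timing as a binary train: dot = 1 unit high,
    dash = 3 units high, 1 unit gap between symbols, 3 between letters."""
    unit = int(dot / dt)
    train = []
    for ch in text.lower():
        for sym in MORSE.get(ch, ""):
            train += [1] * (unit if sym == "." else 3 * unit)
            train += [0] * unit        # gap between symbols
        train += [0] * 2 * unit        # extends the letter gap to 3 units
    return np.array(train)

print(text_to_spikes("sos").sum())  # number of "on" samples for S-O-S
```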
Contact:
Tan Phat Nguyen (phatnguyen@gmx.de)
Paul Schmachtl (paul.schmachtl@outlook.com)
Design and development of a spatial and temporal clustering algorithm for a neuromorphic event-based camera, enabling frequency detection for machine-state estimation and predictive maintenance.
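The core frequency estimate can be sketched in a few lines, assuming events have already been clustered spatially and temporally (the function and the median-interval heuristic are simplifications, not the project's algorithm):

```python
import numpy as np

def estimate_frequency(timestamps_us):
    """Estimate the flicker/vibration frequency of one event cluster
    from the median inter-event interval of its events."""
    ts = np.sort(np.asarray(timestamps_us, dtype=np.float64))
    intervals = np.diff(ts) * 1e-6          # microseconds -> seconds
    if len(intervals) == 0:
        return 0.0
    return 1.0 / np.median(intervals)

# Example: events from a machine part vibrating at ~100 Hz, with jitter
events = np.arange(0, 1_000_000, 10_000) + np.random.normal(0, 200, 100)
print(round(estimate_frequency(events), 1))  # ~100.0
```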
Contact:
Benedikt Fischer (fischerbe98484@th-nuernberg.de)
Felix Sixdorf (sixdorffe80095@th-nuernberg.de)
David Stiegler (stieglerda78912@th-nuernberg.de)
A neuromorphic PID controller implemented in Nengo to improve the trajectory-tracking performance of a differential-drive mobile robot under uncertainty.
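A minimal sketch of such a controller in Nengo, with illustrative gains, time constants, and a toy step error in place of the robot plant (none of these values come from the project): P is a direct weighted connection, I a recurrent neural integrator, and D the difference of a fast and a slow synaptic filter.

```python
import nengo

Kp, Ki, Kd = 2.0, 0.5, 0.1  # assumed gains
tau = 0.1                   # synaptic time constant of the integrator

model = nengo.Network(label="spiking PID sketch")
with model:
    # Tracking error e(t); here a toy step instead of reference - odometry
    error = nengo.Node(lambda t: 1.0 if t > 0.1 else 0.0)

    # Error represented by a spiking ensemble (proportional pathway)
    e_ens = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(error, e_ens)

    # Integral pathway: recurrent ensemble implements a neural integrator
    i_ens = nengo.Ensemble(n_neurons=200, dimensions=1)
    nengo.Connection(e_ens, i_ens, transform=tau, synapse=tau)
    nengo.Connection(i_ens, i_ens, synapse=tau)

    # Sum the three pathways into the control signal u
    u = nengo.Node(size_in=1)
    nengo.Connection(e_ens, u, transform=Kp, synapse=0.01)
    nengo.Connection(i_ens, u, transform=Ki, synapse=0.01)
    # Derivative pathway: fast minus slow low-pass approximates de/dt
    nengo.Connection(e_ens, u, transform=Kd / (0.05 - 0.005), synapse=0.005)
    nengo.Connection(e_ens, u, transform=-Kd / (0.05 - 0.005), synapse=0.05)

    u_probe = nengo.Probe(u, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)
print(sim.data[u_probe][-5:])  # last samples of the control signal
```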
Contact:
Adrian Stangl (stanglad98626@th-nuernberg.de)
Bastian Wunderlich (wunderlichba98628@th-nuernberg.de)
Neuromorphic-vision-based frequency tracking and closed-loop control, with embedded processing for collaborative mini-robots.
Fusion of neuromorphic visual sensing with LiDAR for scene understanding and motion planning in mobile robot control.
Users can accompany their favourite songs instrumentally by imitating selected instruments with gestures sensed using a neuromorphic vision sensor.
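A rough sketch of how gestures might be detected from the event stream (the regions, threshold, and function are illustrative assumptions, not the project's pipeline): count events per instrument region and trigger a sound when a region is sufficiently active.

```python
import numpy as np

# Hypothetical screen regions, one per imitated instrument
REGIONS = {"drums": (0, 0, 160, 120), "guitar": (160, 0, 320, 120)}
THRESHOLD = 500  # events per frame window; assumed tuning value

def detect_gesture(events_xy):
    """Count DVS events per region; a region exceeding the threshold
    triggers the corresponding instrument sound."""
    triggered = []
    for name, (x0, y0, x1, y1) in REGIONS.items():
        x, y = events_xy[:, 0], events_xy[:, 1]
        n = np.count_nonzero((x >= x0) & (x < x1) & (y >= y0) & (y < y1))
        if n > THRESHOLD:
            triggered.append(name)
    return triggered

events = np.random.randint(0, 320, size=(4000, 2))  # fake (x, y) events
print(detect_gesture(events))
```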
Contact:
Selin Schmitt (schmittse79503@th-nuernberg.de)
Kay Hartmann (hartmannka80488@th-nuernberg.de)
Privacy-preserving club visuals generation based on synchronized neuromorphic video sensing and sound generation.
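The privacy-preserving property can be illustrated with a minimal event-frame renderer (a hypothetical sketch, not the project's rendering or sound pipeline): a DVS emits only brightness-change events, so rendered frames show motion silhouettes rather than identifiable static detail.

```python
import numpy as np

def event_frame(events, width=320, height=240):
    """Render DVS events as a binary frame: only moving edges appear,
    so faces and other static detail are never captured."""
    frame = np.zeros((height, width), dtype=np.uint8)
    x = np.clip(events[:, 0], 0, width - 1)
    y = np.clip(events[:, 1], 0, height - 1)
    frame[y, x] = 255
    return frame

events = np.random.randint(0, 240, size=(1000, 2))  # fake (x, y) events
print(event_frame(events).sum() // 255, "active pixels")
```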
Contact:
Selin Schmitt (schmittse79503@th-nuernberg.de)
Kay Hartmann (hartmannka80488@th-nuernberg.de)
This project addresses the challenge of estimating depth using a single DVS (Dynamic Vision Sensor) camera and a 2D LiDAR. DVS cameras capture changes in brightness rather than full images and provide no native depth information.
Most existing solutions rely on two DVS cameras for stereo depth estimation. This project instead introduces a more cost-effective approach that pairs a single camera with a 2D LiDAR, eliminating the need for a second camera or an expensive 3D LiDAR system.
Built using ROS2 for communication and data processing, the system can run on various platforms, including wheeled robots with Jetson Nano or similar hardware.
Since 2D LiDAR measures depth in a single plane, a custom clustering algorithm was developed to estimate the depth of objects outside this plane, such as a person's arms.
The fusion algorithm aligns the LiDAR's measurements with the dynamic events from the DVS, creating a unified point cloud that can be visualized in real time.
This approach enhances depth perception for robotics and other applications while maintaining affordability.
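A simplified sketch of the fusion idea, assuming a calibrated horizontal alignment between the LiDAR scan plane and the camera columns (the names and the column-to-beam mapping are illustrative, not the project's calibration or clustering):

```python
import numpy as np

def fuse_depth(events_xy, lidar_ranges, width=320):
    """Assign each DVS event a depth from the 2D LiDAR scan.

    In-plane: an event's pixel column maps to a LiDAR beam angle.
    Off-plane (e.g. a raised arm): events inherit the in-plane depth of
    their column, a stand-in for the project's custom clustering step."""
    n_beams = len(lidar_ranges)
    # Map pixel column -> beam index across the shared field of view
    cols = np.clip(events_xy[:, 0], 0, width - 1)
    beams = (cols / width * n_beams).astype(int)
    depths = np.asarray(lidar_ranges)[beams]
    # (x, y, z=depth) point cloud in a crude camera-aligned frame
    return np.column_stack([events_xy[:, 0], events_xy[:, 1], depths])

events = np.random.randint(0, 320, size=(500, 2))      # fake (x, y) events
scan = 2.0 + 0.5 * np.sin(np.linspace(0, np.pi, 180))  # fake 180-beam scan
cloud = fuse_depth(events, scan)
print(cloud.shape)  # (500, 3) fused points
```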
Contact:
Annika Igl: aiglgg@web.de
Timo Kapellner: timo.kapellner@gmail.com