Andrew Liang

Mechatronics Engineering


Autonomous Gem Collector

Designed and led development of an autonomous gem collector robot in a 3-person team, achieving top-5 performance out of 40 teams.

Situation:
Tasked with building an autonomous ESP32-based robot to collect and sort green-colored gems in an unstructured 25 m² arena under a 2-minute time limit.
Task:
Led all technical aspects of design and execution, including CAD modeling, software architecture, and manufacturing, while also conceiving the object collection and sorting mechanism.
Action:
• Designed and modeled the entire robot in SolidWorks, overcoming 3D-printer build-volume limitations by splitting the assembly into modules that fit smaller build plates.

• Developed robust embedded code for driving, collection, and sorting subsystems, and replaced unreliable ultrasonic detection with dynamic color-based detection calibrated at startup.

• Solved key edge cases such as detection interference and misalignment in the elastic band collection grid by engineering a front gate verification and flow control system.

• Normalized RGB readings and switched to brightness-based object detection to mitigate variable lighting conditions (a sketch of this logic follows this entry).

• Redesigned the battery mount after the battery's off-center mass caused the robot to veer right.

Result:
Only team to collect gems accurately in both final runs without picking up any incorrect gems. Robot demonstrated consistent and reliable autonomous performance.
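For illustration, a minimal Python sketch of the startup calibration and normalization logic described above (the actual firmware runs on the ESP32; the function names, thresholds, and sensor interface here are assumptions):

    # Illustrative sketch, not the robot's firmware. Normalizing RGB readings
    # makes detection depend on color proportions rather than absolute light.
    def calibrate(read_rgb, samples=50):
        """Average ambient readings at startup to set a brightness baseline."""
        readings = [read_rgb() for _ in range(samples)]
        return sum(sum(rgb) for rgb in readings) / samples

    def is_green_gem(read_rgb, baseline, green_share_min=0.45, gain=1.3):
        r, g, b = read_rgb()
        brightness = r + g + b
        if brightness < gain * baseline:          # nothing brighter than ambient
            return False
        return g / brightness > green_share_min   # lighting-invariant green share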


M&M Color Counter

Built and trained a YOLOv8-based object detection system to classify and count M&Ms, achieving ~99% mAP and high generalization under variable lighting and overlap conditions.

Situation:
Assigned to develop an image-based counting and classification system for M&Ms to simulate real-world inventory automation tasks, with full flexibility in CV/ML approach.
Task:
Responsible for pipeline architecture, dataset management, and model development using both computer vision and ML-based approaches.
Action:
• Prototyped a traditional OpenCV solution with HSV masking, CLAHE filtering, and watershed segmentation to separate overlapping M&Ms; found it unreliable under lighting variation (a sketch of this pipeline follows this entry).

• Transitioned to YOLOv8 for greater robustness; manually labeled core images, supplemented them with public datasets, then expanded the set to 2,800 images via multithreaded data augmentation (blur, saturation, brightness, etc.).

• Discovered and resolved critical label-mismatch bugs across annotation batches using custom visualization and relabeling scripts.

• Tuned training settings (image size, batch size, worker count) for a memory-constrained GPU setup; the model converged strongly within 100 epochs (the training call is sketched after this entry).

• Built a fully interactive Tkinter GUI that loads images, runs the detection pipeline, and displays real-time results, including bounding-box overlays, per-color M&M counts, size distributions, average object sizes, and standard deviations.

Result:
Final model achieved ~0.99 mAP@50, ~0.82 mAP@50–95, and >96% precision, with validation losses closely tracking training losses, indicating excellent generalization and minimal overfitting.
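As illustration, a condensed Python sketch of the classical pipeline above (CLAHE, HSV masking, watershed); the parameter values are assumptions, not the project's tuned settings:

    import cv2
    import numpy as np

    def count_one_color(bgr, hsv_lo=(35, 80, 80), hsv_hi=(85, 255, 255)):
        """Count touching M&Ms of one color band (example HSV range)."""
        # CLAHE on the lightness channel evens out uneven illumination.
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
        bgr_eq = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

        # HSV mask isolates the target color.
        mask = cv2.inRange(cv2.cvtColor(bgr_eq, cv2.COLOR_BGR2HSV), hsv_lo, hsv_hi)

        # Distance-transform peaks seed the watershed so touching blobs split.
        dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
        _, sure_fg = cv2.threshold(dist, 0.6 * dist.max(), 255, cv2.THRESH_BINARY)
        sure_fg = sure_fg.astype(np.uint8)
        sure_bg = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=3)
        unknown = cv2.subtract(sure_bg, sure_fg)

        n, markers = cv2.connectedComponents(sure_fg)
        markers += 1                  # reserve 0 for the unknown region
        markers[unknown == 255] = 0
        markers = cv2.watershed(bgr_eq, markers)
        return markers.max() - 1      # labels above 1 are individual objects

The YOLOv8 path uses the standard Ultralytics training entry point; the values below are assumed examples of memory-constrained settings, not the project's exact configuration:

    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # pretrained checkpoint (variant assumed)
    model.train(
        data="mnm.yaml",        # hypothetical dataset config for the 2,800 images
        imgsz=640,              # smaller image size to fit GPU memory
        batch=8,                # reduced batch size for the same reason
        workers=2,              # fewer dataloader workers on a constrained setup
        epochs=100,             # converged strongly within 100 epochs
    )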


AI Embedded Systems Toolkit (WIP)

Developed a modular AI architecture toolkit for real-time decision making, designed for transferability across projects and time-sensitive applications.

Situation:
Identified the need for a flexible, general-purpose AI framework capable of interpreting temporal data in real time and producing actionable outputs in domains like robotics and automation.
Task:
Designed and built a composite AI toolkit centered around State-Space Model encoders, with a focus on plug-and-play modularity, low latency, and adaptability.
Action:
• Implemented a custom framework supporting different time-series input types (images, video frames, sensor outputs) with custom embeddings.

• Planned actor-critic decision heads that support both independent and shared intermediate representations, allowing flexible output branching for embedded systems (a sketch of this head structure follows this entry).

• Designed the system to allow pretraining of individual encoder modules, module swapping, and fine-tuning; most core logic is designed for O(n) or O(1) time complexity for performance scalability.

Result:
Created an AI decision-making toolkit used for initial robotics mimicry experiments, with future applications planned in fields like autonomous cooking systems and real-time sports forecasting.
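Since the toolkit is still in progress, here is only a hypothetical PyTorch sketch of the planned head structure (a shared intermediate representation feeding separate actor and critic heads); all module names and sizes are assumptions:

    import torch
    import torch.nn as nn

    class ActorCriticHeads(nn.Module):
        """Hypothetical heads over features from any swapped-in encoder."""
        def __init__(self, feat_dim, hidden, n_actions):
            super().__init__()
            # Shared intermediate representation (could be bypassed for
            # fully independent heads).
            self.shared = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
            self.actor = nn.Linear(hidden, n_actions)  # action logits
            self.critic = nn.Linear(hidden, 1)         # state-value estimate

        def forward(self, features):
            z = self.shared(features)
            return self.actor(z), self.critic(z)

    # Example: features from a pretrained state-space encoder -> decision outputs
    heads = ActorCriticHeads(feat_dim=128, hidden=64, n_actions=4)
    logits, value = heads(torch.randn(1, 128))

Once the encoder has produced its features, heads like these run in constant time per step, consistent with the toolkit's stated O(n)/O(1) performance goals.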


Mechanical Hand Project (WIP)

Designed and built a 5-finger robotic hand with realistic motion using servo actuation and motion linkages; it mimics human gestures through a fully 3D-printed PETG assembly.

Situation:
As a personal project toward developing a full humanoid robot, set out to replicate the mechanical and functional behavior of a human hand using accessible materials and embedded control.
Task:
Designed a 5-finger robotic hand capable of gestures such as grasping and thumbs-up, with a strong focus on form factor, realism, and cost.
Action:
• Designed all components in Fusion360 and 3D printed them in PETG; used Chicago screws for joint pivots and elastic bands for muscle contraction.

• Implemented a servo-driven tendon system for individual finger control, plus a linear actuator and linkage mechanism for lateral finger spread.

• Programmed hand motions on an Arduino Uno, hardcoding gesture sequences to showcase the full range of motion (an illustrative sketch follows this entry).

• Addressed critical design flaws, including elastic creep, string abrasion at the joints, and excessive current draw when multiple fingers actuate at once.

Result:
Successfully demonstrated realistic hand motions in a compact assembly, establishing a mechanical and control foundation for future limb development.
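The firmware itself runs on the Arduino Uno; purely as an illustration of the hardcoded gesture-sequence idea, here is a Python sketch with assumed angle tables and an assumed servo hook (staggered movement is one plausible way to limit peak current draw, not necessarily the project's actual fix):

    import time

    # Assumed per-finger tendon-servo angles (degrees) for each gesture.
    GESTURES = {
        "open":      [0, 0, 0, 0, 0],
        "fist":      [150, 160, 160, 160, 150],
        "thumbs_up": [0, 160, 160, 160, 150],
    }

    def play(gesture, write_servo, stagger_s=0.05):
        """Curl fingers one at a time; staggering limits peak current draw."""
        for finger, angle in enumerate(GESTURES[gesture]):
            write_servo(finger, angle)   # write_servo is an assumed HAL hook
            time.sleep(stagger_s)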