I’m a Computer Science and Engineering student at UCLA, with a passion for software engineering, computer vision, and machine learning. I’m currently on the lookout for a Summer 2025 internship to build on what I’ve learned and dive deeper into new challenges. At the moment, I’m doing research at the UCLA Robot Intelligence Lab, where I’m helping develop cutting-edge AI-driven robotic systems, and at the Interconnected & Integrated Bioelectronics Lab, where I apply machine learning to metabolic panel analysis for healthcare. Before that, I worked as a research assistant at the Vision and Image Processing Lab, where I created machine learning algorithms to push the boundaries of hockey analytics. When I’m not immersed in tech, you can find me playing for UCLA’s Division II hockey team!
I am currently in my first year of Computer Science and Engineering at the University of California, Los Angeles.
I completed high school with a French Immersion Diploma. I was in the Honours Society all four years and received Chemistry and Physical Education Merit Awards. I was an executive of the Programming Club and the Physics Club.
Extracted 3D gaze coordinates from Meta’s Aria glasses to track eye movements from a mobile, egocentric perspective. Engineered a homography-based solution to align gaze data with a robot-mounted camera. Improved robot policy learning by incorporating human visual attention to adapt behavior during tasks. Designed and launched the lab’s official website with a user-friendly interface to showcase research projects.
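The homography alignment above boils down to mapping 2D gaze points from one camera frame into another through a 3×3 projective transform. As a minimal sketch (the function name and the identity/translation matrices are illustrative, not the lab’s actual pipeline):

```python
import numpy as np

def apply_homography(H, points):
    """Map 2D points through a 3x3 homography H.

    points: (N, 2) array of pixel coordinates in the source frame
    returns: (N, 2) array of coordinates in the target frame
    """
    pts = np.asarray(points, dtype=float)
    ones = np.ones((pts.shape[0], 1))
    homogeneous = np.hstack([pts, ones])   # lift to homogeneous coords (N, 3)
    mapped = homogeneous @ H.T             # project through the homography
    return mapped[:, :2] / mapped[:, 2:3]  # dehomogenize back to (N, 2)

# An identity homography leaves gaze points unchanged (sanity check)
H = np.eye(3)
gaze = np.array([[320.0, 240.0]])
print(apply_homography(H, gaze))
```

In practice H would be estimated from corresponding points between the two views (e.g. with OpenCV’s `cv2.findHomography`); the sketch only shows how estimated gaze coordinates are then warped into the robot camera’s frame.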
Developed the SwipeSmart iOS app to help users maximize cashback rewards by tracking credit card offers. Redesigned the data structure for credit card reward categories to support unique colors and icons, and propagated those updates through the app by passing data between views. Streamlined development with continuous integration to resolve bugs and improve app performance. Integrated designer-created views into the application for a seamless, visually cohesive user experience.
Annotated footage from 100+ hockey games to build a robust dataset for training machine learning models. Designed a YOLO-based object detection and tracking system that tracks player movements with 97% accuracy. Developed an extreme gradient boosting (XGBoost) algorithm trained on 150+ videos to evaluate player performance. Applied homography techniques to map player positions and warp visualized data back onto the original footage. Integrated SAM2 to automate player mask creation, producing precise mask overlays on the visualized data.
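The detection-and-tracking step above follows the usual tracking-by-detection pattern: each frame’s detections are associated with existing player tracks. A common building block is greedy IoU matching; here is a minimal sketch (the `associate` helper and its threshold are illustrative, not the project’s actual tracker):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, threshold=0.3):
    """Greedily match each track to its best-overlapping new detection.

    tracks: {track_id: box}, detections: list of boxes
    returns: {track_id: detection_index}
    """
    matches, used = {}, set()
    for t_id, t_box in tracks.items():
        scores = [(iou(t_box, d), i)
                  for i, d in enumerate(detections) if i not in used]
        if scores:
            best, i = max(scores)
            if best >= threshold:   # unmatched tracks/detections are dropped
                matches[t_id] = i
                used.add(i)
    return matches

# Two tracked players, two new detections slightly shifted by motion
tracks = {0: [0, 0, 10, 10], 1: [50, 50, 60, 60]}
detections = [[1, 1, 11, 11], [50, 50, 60, 60]]
print(associate(tracks, detections))  # {0: 0, 1: 1}
```

Production trackers (e.g. ByteTrack-style methods shipped with YOLO toolkits) add motion prediction and re-identification on top of this association step, but the IoU matching itself is the core idea.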