Conor Mc Gartoll

I'm a Software Engineer for the Powertrain Data and Automation Team at Lucid Motors in the San Francisco Bay Area.

I previously completed a B.S. in Mechanical Engineering with a concentration in Computer Science at UCLA. I worked on surgical robotics research in the Mechatronics and Controls Lab advised by Dr. Matthew Gerber and Prof. Tsu-Chin Tsao.

Email  /  X (Twitter)  /  Patent  /  Github  /  LinkedIn


Interests

I'm passionate about intelligent robotics (humanoids and not) 🤖, reading 📘, climbing 🧗, and surfing 🏄‍♂️!

Email me or reach out on LinkedIn if you want to connect!

["]
/[*]\
] [

Projects


CrowdBot - Crowdsourcing Quality Robot Foundational Model Training Data

Currently in progress, will be released here! My friend Philip Fung and I are working on a project to standardize training data for hackers, engineers, and researchers and make it as low-cost as possible. I designed standardized hardware (camera locations, lighting, etc.) for solo and bimanual manipulation, which has been merged into the official SO-100 arm repo.

Rev 1 Bi-Manual CrowdBot Setup!

Tri-camera view inspired by ALOHA (Image courtesy of Philip)

Community datasets + models:

Primary (Color) Care Bot

As a color-blind person I often struggle to differentiate small, brightly colored cubes, so this is pretty much what I need most in my life. It uses ACT and was trained on 150 teleoperation examples. All videos at 2x speed.

Demoing at the Hugging Face Booth at 2024 Humanoids Summit!

Sorting at the Summit!

Great time with the Hugging Face Booth Team!

Sorting Success!

Successful sorting!

Very robust to failed attempts!

Check out my model:

CURRENT GOAL: Allow the robot to clean up several cubes at the same time (instead of one by one)!

Banana-Grama-Bot

Applied custom letter-tile image processing (overlay shown in the videos below) to a transformer robot-learning model (ACT) in a fork of cmcgartoll/LeRobot.
Given a letter sequence as input, the robot learns which letter to grasp and where to place it! All videos at 2x speed.

Spelling the word NAP!

Letter N -> Location 1

Letter A -> Location 2

Letter P -> Location 3

Even robust to regrabbing if first grab fails!

Top down videos are shown with the image overlay used for training and fed through the model during inference.
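The overlay idea can be pictured with a minimal sketch: blend a solid color over each detected tile's bounding box before the frame is passed to the policy. This is an illustrative NumPy-only toy (the function name, box coordinates, and blend settings are my assumptions, not the actual pipeline):

```python
import numpy as np

def overlay_tiles(frame, tile_boxes, alpha=0.5, color=(0, 255, 0)):
    """Blend a solid color over each letter-tile bounding box.

    frame: HxWx3 uint8 image; tile_boxes: list of (x0, y0, x1, y1).
    Returns a new image; the input frame is left untouched.
    """
    out = frame.astype(np.float32)
    for x0, y0, x1, y1 in tile_boxes:
        region = out[y0:y1, x0:x1]
        out[y0:y1, x0:x1] = (1 - alpha) * region + alpha * np.array(color, np.float32)
    return out.astype(np.uint8)

# Toy usage: a black frame with one hypothetical "tile" highlighted.
frame = np.zeros((120, 160, 3), dtype=np.uint8)
marked = overlay_tiles(frame, [(40, 30, 80, 70)])
```

The same overlay is applied both at training time and at inference time, so the policy always sees frames in the distribution it was trained on.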

Check out my model:

CURRENT GOAL: Allow the robot to pick any letter out of a sea of letters. Current work is on expanding this to handle multiple letters and different letter orientations!

Low-Cost Robot Learning

End-to-end trained robotics models built from HuggingFace open-source project LeRobot. All videos at 2x speed.

Three examples of successful inference from just 50 training demonstrations! Check out my model:

Trained with imitation learning using Action-Chunking Transformer
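The core idea of action chunking can be shown with a toy sketch: instead of predicting one action per observation, the policy regresses a whole chunk of upcoming actions at once. This is plain least-squares behavior cloning standing in for ACT's transformer, with made-up dimensions and synthetic demos, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim, chunk, n_demos = 8, 4, 5, 50

# Fake teleoperation demos: each observation is paired with the
# next `chunk` actions, flattened into one target vector.
obs = rng.normal(size=(n_demos, obs_dim))
true_w = rng.normal(size=(obs_dim, act_dim * chunk))
actions = obs @ true_w + 0.01 * rng.normal(size=(n_demos, act_dim * chunk))

# Behavior cloning as least squares: fit a linear "policy" to the demos.
w_hat, *_ = np.linalg.lstsq(obs, actions, rcond=None)

# At inference time, a single observation yields a whole chunk of actions,
# which the robot then executes before re-querying the policy.
pred = (obs[0] @ w_hat).reshape(chunk, act_dim)
```

Executing several actions per query is what makes chunking attractive for manipulation: it reduces compounding errors compared with stepping one action at a time.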

More Coming Soon!

Music Madness

A March Madness bracket challenge, but for your personal Spotify music taste! Check it out on mobile.

Device for Mobilizing Lens Material and Polishing the Capsular Bag During Cataract Surgery

Mechatronics and Controls Lab at UCLA with Dr. Matthew Gerber
Check out details in the Patent!

Forked this website from this template