My research is motivated by the desire to bring robots into society to help and interact with people at large. My work focuses on building effective algorithms and intuitive interfaces for robot learning, planning, and human-robot interaction. My PhD dissertation introduced methods for mobile manipulators to learn motor skills for manipulating objects and symbols to support task planning, as well as mixed reality interfaces that enable non-expert users to teach robots (you can watch my dissertation defense here).
I enjoy playing (video and board) games, watching movies, and going to farmers markets.
I also love juggling (balls, pins and rings) and slacklining, especially at the same time.
I am also interested in STEM education and enjoy making blogs, videos, and interactive codebases about STEM-related concepts.
I also like to eat and drink cold sweets (e.g., ice cream, ICEEs, frozen lemonade), and I own a personal shaved-ice machine.
We created a tool for organizing key characteristics of VAM-HRI systems.
Joint work with Thomas R. Groechel, Michael E. Walker, Christine T. Chang, and Jessica Zosa Forde
We propose a method for learning in challenging dynamic object manipulation tasks.
Equally led with Ben Abbatematteo, joint work with Stefanie Tellex and George Konidaris
We propose a multimodal algorithm for bidirectional human-robot interaction with mixed reality.
Equally led with David Whitney and Michael Fishman, joint work with Daniel Ullman and Stefanie Tellex
We propose a mixed-reality visualization that overlays a robot's intended motion on the wearer's real-world view of the robot and its environment.
Equally led with David Whitney, joint work with Elizabeth Phillips, Gary Chien, James Tompkin, George Konidaris, and Stefanie Tellex