Ayano Hiranaka
I am an incoming CS PhD student at the University of Southern California (USC), co-advised by Professor
Daniel Seita
and Professor
Erdem Biyik.
Currently, I am working as a research intern at Sony AI, Tokyo.
Prior to coming to USC, I completed my Master's degree at Stanford University, where I was a research assistant at
Stanford Vision and Learning Lab (SVL)
working with
Prof. Fei-Fei Li,
Prof. Jiajun Wu,
and
Dr. Ruohan Zhang.
I received my undergraduate degree in mechanical engineering from the University of Illinois at Urbana-Champaign (UIUC).
My background is a unique blend of computer science and mechanical engineering,
ranging from AI and robotics to mechanical design.
Email /
CV /
Google Scholar /
Github
Publications
I am passionate about designing robots that can be incorporated into everyday life as human companions.
My research interest lies in developing methods that enable robots to
communicate effectively and collaborate seamlessly with humans
in households or public spaces,
improving the quality of human lives
while evolving alongside humans by learning from them.
Topics of interest include human-in-the-loop learning, interactive human-robot collaboration, reinforcement learning, and imitation learning,
especially for robotics applications.
NOIR: Neural Signal Operated Intelligent Robot for Daily Activities
Ruohan Zhang*, Sharon Lee*, Minjune Hwang*,
Ayano Hiranaka*,
Chen Wang, Wensi Ai, Jin Jie Ryan Tan, Shreya Gupta,
Yilun Hao, Gabrael Levine, Ruohan Gao, Anthony Norcia,
Li Fei-Fei, Jiajun Wu
CoRL, 2023  
project page
/
paper
A brain-robot interface system for everyday activities, combining EEG signal decoding,
primitive skills, and robot intelligence aided by foundation models.
Primitive Skill-based Robot Learning from Human Evaluative Feedback
Ayano Hiranaka*,
Minjune Hwang*, Sharon Lee, Chen Wang, Li Fei-Fei, Jiajun Wu, Ruohan Zhang
(*equal contribution, alphabetically ordered)
IROS, 2023  
project page
/
paper
Combining an intuitive skill-based action space with human evaluative feedback for
safer and more sample-efficient long-horizon task learning in the real world.
A Dual Representation Framework for Robot Learning with Human Guidance
Ruohan Zhang*, Dhruva Bansal*, Yilun Hao*,
Ayano Hiranaka,
Roberto Martín-Martín, Chen Wang, Li Fei-Fei, Jiajun Wu
Best Paper Award at the Aligning Robot Representations with Humans workshop
CoRL, 2022  
project page
/
paper
Leveraging high-level state representations, active queries, and human guidance for
more sample-efficient learning of low-level robot control policies.