Research Projects

Our research primarily focuses on Multimodal Large Language Models (MLLMs), Embodied AI, and AI Agents, with an emphasis on perception, reasoning, and decision-making in interactive environments:

  • Multimodal Large Language Models (MLLMs) are models that understand, reason about, and generate content across multiple modalities, including text, images, audio, and video. Our research in this area explores enhancing multitask learning, advancing high-resolution perception, and designing unified architectures for multimodal understanding and generation. Please refer to our GitHub organization for the JiuTian MLLM ("九天" multimodal large model) for more details. [bilibili]
  • Embodied AI studies agents capable of perceiving, reasoning, and acting within physical environments. We aim to build MLLM-based systems that integrate multimodal perception, instruction comprehension, and continuous action planning to perform complex 3D tasks such as navigation, manipulation, and interactive behaviors (a minimal sketch of this pipeline appears after this list). [bilibili]

    Task instruction: Open the drawer, put the toy inside, and then close it.

    To further enhance our embodied AI research, our lab has recently acquired the R1 Lite robot from GaLaXea AI.

    [Photos: R1 Lite front view, R1 Lite side view, R1 Lite in the lab]
  • AI Agent research focuses on sequential decision-making in complex environments such as Minecraft and mobile devices. We develop MLLM-based systems spanning a wide range of directions, including framework-based agents, native agents, and RL-enhanced reasoning (see the agent-loop sketch after this list). [bilibili]

    Task instruction: Search today's weather in Shenzhen on Chrome, then write the temperature into "today.md" using Markor.
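
The embodied direction described in the second bullet is, at its core, instruction-conditioned planning over visual observations. The snippet below is a minimal sketch of that idea, assuming a hypothetical mllm.generate(image, prompt) interface and a small vocabulary of primitive robot skills; it illustrates the pattern only and is not our released implementation.

    # Minimal sketch of MLLM-based task planning for manipulation (hypothetical API).
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class Action:
        """A primitive skill the low-level controller can execute."""
        skill: str   # e.g. "open", "pick", "place", "close"
        target: str  # object or container the skill acts on


    def plan_with_mllm(mllm, instruction: str, rgb_observation) -> List[Action]:
        """Decompose a language instruction into primitive actions.

        `mllm.generate(image, prompt)` is an assumed interface; the real call
        depends on the underlying model serving stack.
        """
        prompt = (
            "You control a robot arm. Break the task into primitive actions, "
            "one per line, formatted as 'skill target'.\n"
            f"Task: {instruction}"
        )
        response = mllm.generate(image=rgb_observation, prompt=prompt)
        plan = []
        for line in response.strip().splitlines():
            skill, _, target = line.partition(" ")
            plan.append(Action(skill=skill, target=target.strip()))
        return plan


    # For "Open the drawer, put the toy inside, and then close it.", a plausible
    # plan is: open drawer -> pick toy -> place drawer -> close drawer.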
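
The mobile-agent setting in the third bullet follows the same pattern as a closed perceive-reason-act loop over screenshots. The sketch below reuses the hypothetical mllm.generate interface and assumes device.screenshot/tap/type_text helpers; a real agent would add action grounding, error recovery, and a richer action space.

    # Minimal sketch of a closed-loop MLLM mobile-GUI agent (hypothetical interfaces).
    def run_gui_agent(mllm, device, instruction: str, max_steps: int = 20) -> None:
        """Observe the screen, ask the MLLM for the next action, execute it, repeat.

        `mllm.generate(image, prompt)` and the `device` helpers (screenshot, tap,
        type_text) are assumptions standing in for a concrete automation stack.
        """
        history = []  # executed actions, fed back to the model as context
        for _ in range(max_steps):
            screenshot = device.screenshot()
            prompt = (
                f"Task: {instruction}\n"
                f"Previous actions: {history}\n"
                "Reply with exactly one action: TAP(x, y), TYPE(text), or DONE."
            )
            action = mllm.generate(image=screenshot, prompt=prompt).strip()
            if action == "DONE":
                break
            if action.startswith("TAP"):
                coords = action[action.index("(") + 1:action.rindex(")")]
                x, y = (int(v) for v in coords.split(","))
                device.tap(x, y)
            elif action.startswith("TYPE"):
                text = action[action.index("(") + 1:action.rindex(")")]
                device.type_text(text)
            history.append(action)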

Terms for Released Implementations:

Software provided here is for personal research purposes only. Redistribution and commercial use are not permitted. Feedback, applications, and further development are welcome. Contact shaorui[AT]hit.edu.cn for bug reports and collaboration. All rights to the implementations are reserved by the authors.


© 2025 OrionLab