My primary research interests lie in the general areas of Machine Learning, Artificial Intelligence, and Natural Language Processing, as well as ML systems, computer vision, healthcare, and other application domains.
In particular, I'm interested in the principles and methodologies of Panoramic Learning (HDSR): building AI agents that learn from ALL types of experience, ranging from data instances (NeurIPS), embodied experiences (preprint),
structured knowledge (ACL, NeurIPS), and constraints,
to rewards (EMNLP), adversaries (NeurIPS), lifelong interplay, etc.
To this end, we've been studying a standardized ML formalism (a "Standard Model" of ML) for systematically understanding, unifying, and generalizing a wide range of ML paradigms (e.g., supervised, unsupervised, active, reinforcement, adversarial, meta, and lifelong learning).
We're also building World Models (preprint-1, 2) to enable next-generation machine reasoning beyond large language models.
On this basis, I develop methods and tools for Composable ML that enable easy composition of ML solutions (LLM Reasoners, as well as Texar and ASYML, part of the open-source consortium CASL), along with rich applications such as controllable text generation (ICML), among others.