Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

Published:

This post is dated in the future and will show up by default. To keep future-dated posts from being published, edit _config.yml and set future: false.
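For reference, and assuming this site is built with Jekyll (which reads its settings from _config.yml), the relevant excerpt would look roughly like the sketch below; everything besides the future key depends on the theme and is only illustrative:

    # _config.yml (excerpt)
    # When false, posts dated in the future are skipped until their date passes.
    future: false

After this change, a future-dated post only appears once the site is rebuilt on or after its publication date.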

Blog Post number 4

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Portfolio

Publications

Orchestrate Latent Expertise: Advancing Online Continual Learning with Multi-Level Supervision and Reverse Self-Distillation

Published in the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024

This paper presents MOSE, our proposed online continual learning algorithm. Inspired by the multi-level feature extraction and cross-layer communication inherent to animal neural circuits, MOSE aims to enhance the model's adaptability to dynamic distributions and its resistance to forgetting.
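Only as a hedged illustration of the two ingredients named above (a sketch, not the loss actually used in the paper; see the linked preprint for the real objective), multi-level supervision combined with reverse self-distillation can be written as

    \mathcal{L}(x, y) = \sum_{l=1}^{L} \ell\big(g_l(f_l(x)), y\big) + \lambda \sum_{l=1}^{L-1} D\big(\mathrm{sg}(f_L(x)),\, f_l(x)\big)

where f_l is the feature at network level l, g_l is an auxiliary classifier attached to that level, sg denotes stop-gradient, D is a distillation distance, and λ balances the two terms; "reverse" refers to knowledge flowing from the deepest level back to the shallower ones.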

Recommended citation: Hongwei Yan, Liyuan Wang, Kaisheng Ma, and Yi Zhong. "Orchestrate Latent Expertise: Advancing Online Continual Learning with Multi-Level Supervision and Reverse Self-Distillation." IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. https://arxiv.org/pdf/2404.00417

MindPainter: Efficient Brain-Conditioned Painting of Natural Images via Cross-Modal Self-Supervised Learning

Published in the 39th Annual AAAI Conference on Artificial Intelligence (AAAI), 2025

This paper proposes MindPainter, a method for efficient brain-conditioned image editing using brain signals of visual perception as prompts. By employing cross-modal self-supervised learning, it directly reconstructs masked images with pseudo-brain signals generated by the Pseudo Brain Generator, enabling seamless cross-modal integration. The Brain Adapter ensures accurate interpretation of brain signals, while the Multi-Mask Generation Policy enhances generalization for high-quality editing in various scenarios, such as inpainting and outpainting. MindPainter is the first to achieve efficient brain-conditioned image painting, advancing direct brain control in creative AI.

Recommended citation: Muzhou Yu, Shuyun Lin, Hongwei Yan, and Kaisheng Ma. "MindPainter: Efficient Brain-Conditioned Painting of Natural Images via Cross-Modal Self-Supervised Learning." The 39th Annual AAAI Conference on Artificial Intelligence (AAAI), 2025. https://ojs.aaai.org/index.php/AAAI/article/view/33585

Right Time to Learn: Promoting Generalization via Bio-inspired Spacing Effect in Knowledge Distillation

Published in the 42nd International Conference on Machine Learning (ICML), 2025

Knowledge distillation (KD) is a powerful strategy for training deep neural networks (DNNs). Although it was originally proposed to train a more compact student model from a large teacher model, many recent efforts have focused on adapting it to promote generalization of the model itself, such as online KD and self KD. Here, we propose an accessible and compatible strategy named Spaced KD to improve the effectiveness of both online KD and self KD, in which the student model distills knowledge from a teacher model trained with a space interval ahead. This strategy is inspired by a prominent theory named the spacing effect in biological learning and memory, positing that appropriate intervals between learning trials can significantly enhance learning performance. With both theoretical and empirical analyses, we demonstrate that the benefits of the proposed Spaced KD stem from convergence to a flatter loss landscape during stochastic gradient descent (SGD). We perform extensive experiments to validate the effectiveness of Spaced KD in improving the learning performance of DNNs (e.g., the performance gain is up to 2.31% and 3.34% on Tiny-ImageNet over online KD and self KD, respectively).
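Read only as a hedged sketch of the spacing idea in the self-KD setting (standard KD notation, not necessarily the exact objective from the paper), the student at parameters θ_t distills from the same network trained s steps further ahead, θ_{t+s}:

    \mathcal{L}(\theta_t) = \mathcal{L}_{\mathrm{CE}}\big(y, f_{\theta_t}(x)\big) + \lambda T^2\, \mathrm{KL}\Big(\sigma\big(f_{\theta_{t+s}}(x)/T\big) \,\Big\|\, \sigma\big(f_{\theta_t}(x)/T\big)\Big)

with softmax σ, temperature T, and weight λ as in standard knowledge distillation; the spacing effect enters only through the interval s separating the teacher snapshot from the student.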

Recommended citation: Guanglong Sun*, Hongwei Yan*, Liyuan Wang, Qian Li, Bo Lei, and Yi Zhong. "Right Time to Learn: Promoting Generalization via Bio-inspired Spacing Effect in Knowledge Distillation." The 42nd International Conference on Machine Learning (ICML), 2025. https://arxiv.org/abs/2502.06192

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.