Welcome to AAI Lab

Applied Artificial Intelligence

AAI Lab researches how to understand, design, and manage complex socio-economic systems. Diverse complex socio-economic systems exist in our societies, and some of them are listed below.


Research areas

  • Keywords: Diffusion Model, Inference-Time Scaling, Rejection Sampling

    Recent advances in powerful pre-trained diffusion models encourage the development of methods to improve the sampling performance under well-trained diffusion models. This paper introduces D…

  • Keywords: Diffusion Model, Inference-Time Scaling, Preference Optimization

    Text-to-image diffusion models rely on text embeddings from a pre-trained text encoder, but these embeddings remain fixed across all diffusion timesteps, limiting their adaptability to the g…

  • Keywords: Large Language Model, RLHF, Wasserstein Distance

    Large language models (LLMs) are commonly aligned with human preferences using reinforcement learning from human feedback (RLHF). In this method, LLM policies are generally optimized through…

  • Keywords: Dataset Distillation, Neural Field

    Utilizing a large-scale dataset is essential for training high-performance deep learning models, but it also comes with substantial computation and storage costs. To overcome these challenge…

  • Keywords: Knowledge Distillation, α-mixture

    Autoregressive large language models (LLMs) have achieved remarkable improvement across many tasks but incur high computational and memory costs. Knowledge distillation (KD) mitigates this i…

  • Keywords: Diffusion Model, Inference-Time Scaling, Metropolis-Hastings Algorithm

    Diffusion-based generative models have recently achieved state-of-the-art performance in high-fidelity image synthesis. These models learn a sequence of denoising transition kernels that gra…

  • Keywords: Diffusion Model, Inference-Time Scaling, Safe Image Generation

    Text-to-image models have recently made significant advances in generating realistic and semantically coherent images, driven by advanced diffusion models and large-scale web-crawled datase…

  • Keywords: Diffusion Model, Density Ratio Estimation, Dataset Bias

    With significant advancements in diffusion models, addressing the potential risks of dataset bias becomes increasingly important. Since generated outputs directly suffer from dataset bias, m…

  • Keywords: Diffusion Model, Noisy Label, Distribution Shift

    Conditional diffusion models have shown remarkable performance in various generative tasks, but training them requires large-scale datasets that often contain noise in conditional inputs, a.…

  • Keywords: Noisy Label, Sampling Importance Resampling, Dirichlet Distribution

    For learning with noisy labels, the transition matrix, which explicitly models the relation between noisy label distribution and clean label distribution, has been utilized to achieve the st…

  • Keywords: Domain Generalization

    The objective of domain generalization (DG) is to enhance the transferability of the model learned from a source domain to unobserved domains. To prevent overfitting to a specific domain, Sh…

  • Keywords: Multi-Agent Reinforcement Learning

    In cooperative multi-agent reinforcement learning (MARL), agents aim to achieve a common goal, such as defeating enemies or scoring a goal. Existing MARL algorithms are effective but still r…

  • Keywords: Dataset Distillation, Frequency Domain

    This paper presents FreD, a novel parameterization method for dataset distillation, which utilizes the frequency domain to distill a small-sized synthetic dataset from a large-sized original…
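As background for the rejection-sampling entry above, here is a minimal sketch of classical rejection sampling, the primitive that inference-time scaling methods of this kind build on. The standard-normal target, uniform proposal, and envelope constant `M` are illustrative choices for this sketch, not the paper's setup: a sample from the proposal is accepted with probability proportional to the ratio of target to scaled proposal density.

```python
import math
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, m, n):
    """Draw n samples from target_pdf by rejection sampling.

    m must satisfy target_pdf(x) <= m * proposal_pdf(x) for all x.
    """
    samples = []
    while len(samples) < n:
        x = proposal_sample()
        u = random.random()
        # Accept x with probability target_pdf(x) / (m * proposal_pdf(x)).
        if u * m * proposal_pdf(x) <= target_pdf(x):
            samples.append(x)
    return samples

# Toy example: standard normal target, Uniform(-5, 5) proposal.
target = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
proposal_pdf = lambda x: 0.1                  # Uniform(-5, 5) density
proposal_sample = lambda: random.uniform(-5.0, 5.0)
M = target(0.0) / 0.1                         # tightest valid envelope constant

random.seed(0)
draws = rejection_sample(target, proposal_sample, proposal_pdf, M, 2000)
```

The accepted draws have approximately zero mean and unit variance, as the target prescribes; a tighter envelope `M` raises the acceptance rate without changing the sampled distribution.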
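The RLHF entry above involves the Wasserstein distance. As a standalone reference, in one dimension the empirical 1-Wasserstein distance between two equal-size samples has a closed form: match sorted order statistics and average the gaps. The sample values below are illustrative, not taken from the paper.

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-Wasserstein distance between equal-size 1-D samples.

    In 1-D the optimal transport plan pairs the i-th smallest of xs
    with the i-th smallest of ys, so the distance is the mean absolute
    gap between sorted order statistics.
    """
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

d_same = wasserstein_1d([0.0, 1.0, 2.0], [0.0, 1.0, 2.0])   # identical samples
d_shift = wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])  # shifted by 1
```

Identical samples give distance 0, and shifting every point by a constant c gives distance exactly c, which is the translation sensitivity that makes this metric attractive for comparing policy distributions.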
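The Metropolis-Hastings entry above builds on the classical MCMC algorithm of the same name. The sketch below shows plain random-walk Metropolis-Hastings on a toy one-dimensional target (an unnormalized Gaussian log-density), not the paper's transition-kernel construction: propose a perturbation, then accept it with probability min(1, p(proposal)/p(current)).

```python
import math
import random

def metropolis_hastings(log_target, x0, step, n_steps, rng):
    """Random-walk Metropolis-Hastings on an unnormalized log-density."""
    x, chain = x0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        # Symmetric proposal, so the acceptance ratio is just the
        # target ratio: min(1, p(proposal) / p(x)), in log space.
        log_alpha = log_target(proposal) - log_target(x)
        if math.log(rng.random()) < log_alpha:
            x = proposal
        chain.append(x)
    return chain

# Toy target: unnormalized N(2, 1) log-density.
log_p = lambda x: -0.5 * (x - 2.0) ** 2

rng = random.Random(0)
chain = metropolis_hastings(log_p, x0=0.0, step=1.0, n_steps=5000, rng=rng)
burned = chain[1000:]  # discard burn-in before summarizing
```

After burn-in, the chain's sample mean is close to the target mean of 2; the acceptance rule only ever evaluates density ratios, so the target's normalizing constant is never needed.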
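The noisy-label entry above rests on the label transition matrix. As a minimal illustration of the relation it models (with an illustrative 3-class symmetric-noise matrix, not the paper's estimator), the noisy label distribution is the clean distribution pushed through the matrix: p(noisy = j) = sum_i T[i][j] * p(clean = i).

```python
def apply_transition(t, p_clean):
    """Noisy label distribution implied by clean distribution p_clean,
    where t[i][j] = P(noisy label = j | clean label = i)."""
    k = len(p_clean)
    return [sum(t[i][j] * p_clean[i] for i in range(k)) for j in range(k)]

# Toy 3-class example: each label is kept with probability 0.8 and
# flipped to each other class with probability 0.1.
T = [[0.8, 0.1, 0.1],
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8]]
p_clean = [0.5, 0.3, 0.2]
p_noisy = apply_transition(T, p_clean)  # [0.45, 0.31, 0.24]
```

Because each row of T sums to 1, the noisy distribution is still a valid distribution; inverting this mapping is what lets transition-matrix methods recover statistics of the clean labels from noisy data.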
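The FreD entry above parameterizes distilled data in the frequency domain. The sketch below shows only the generic idea under simplifying assumptions (keep a small centered block of 2-D FFT coefficients and invert back to pixels); the block size and selection rule here are illustrative, not FreD's actual coefficient-selection scheme.

```python
import numpy as np

def to_low_freq(image, k):
    """Keep only a centered k x k block of FFT coefficients: a compact,
    lossy frequency-domain parameterization of the image."""
    f = np.fft.fftshift(np.fft.fft2(image))  # low frequencies at center
    h, w = f.shape
    mask = np.zeros_like(f)
    ch, cw = h // 2, w // 2
    mask[ch - k // 2: ch + (k + 1) // 2, cw - k // 2: cw + (k + 1) // 2] = 1.0
    return f * mask

def reconstruct(coeffs):
    """Map frequency-domain coefficients back to pixel space."""
    return np.fft.ifft2(np.fft.ifftshift(coeffs)).real

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))
low = to_low_freq(img, k=8)
recon = reconstruct(low)

stored = 8 * 8    # coefficients actually parameterized
full = 32 * 32    # pixels in the naive parameterization
```

Storing 8 x 8 coefficients instead of 32 x 32 pixels is a 16x reduction per image, which is the kind of budget saving a frequency-domain parameterization buys for dataset distillation.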

News