Density tempering (also called density annealing) is a sequential Monte Carlo (SMC) approach to Bayesian inference for general state space models and is an alternative to Markov chain Monte Carlo. When applied to state space models, it moves a collection of parameters and latent states (called particles) through a number of stages, each stage having its own target distribution. The particles are initially generated from a distribution that is easy to sample from, e.g. the prior; the target at the final stage is the posterior distribution. Tempering is usually carried out either in batch mode, using all the data at every stage, or sequentially, with observations added stage by stage, which is called data tempering. Efficient Markov moves for generating the parameters and states at each stage of particle-based density tempering are proposed. This allows the proposed SMC methods to scale up the number of parameters and states that can be handled. Most current methods use
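As a rough illustration of the batch (density) tempering recipe sketched above, the following is a minimal sketch, not the paper's method: it assumes a toy scalar model with an illustrative standard-normal prior and Gaussian likelihood, a geometric bridge p(theta) p(y|theta)^gamma with gamma increasing from 0 to 1, and a single random-walk Metropolis step as the Markov move at each stage. All function names and tuning constants are hypothetical.

```python
import numpy as np

def log_prior(theta):
    # Illustrative standard-normal prior on a scalar parameter.
    return -0.5 * theta**2

def log_lik(theta, y):
    # Illustrative Gaussian likelihood: unknown mean, unit variance.
    return -0.5 * np.sum((y - theta) ** 2)

def tempered_smc(y, n_particles=500, temps=np.linspace(0.0, 1.0, 21), rng=None):
    rng = rng or np.random.default_rng(0)
    theta = rng.standard_normal(n_particles)            # stage 0: draw from the prior
    loglik = np.array([log_lik(t, y) for t in theta])
    for g_prev, g_next in zip(temps[:-1], temps[1:]):
        # Reweight: incremental weight is p(y | theta)^(g_next - g_prev).
        logw = (g_next - g_prev) * loglik
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Resample (multinomial) to equalize the weights.
        idx = rng.choice(n_particles, size=n_particles, p=w)
        theta, loglik = theta[idx], loglik[idx]
        # Markov move: one random-walk Metropolis step per particle,
        # targeting the current tempered density p(theta) p(y|theta)^g_next.
        prop = theta + 0.3 * rng.standard_normal(n_particles)
        loglik_prop = np.array([log_lik(t, y) for t in prop])
        log_alpha = (log_prior(prop) + g_next * loglik_prop
                     - log_prior(theta) - g_next * loglik)
        accept = np.log(rng.uniform(size=n_particles)) < log_alpha
        theta[accept], loglik[accept] = prop[accept], loglik_prop[accept]
    return theta  # approximate posterior draws at the final stage (gamma = 1)

y = np.random.default_rng(1).normal(2.0, 1.0, size=50)
print(tempered_smc(y).mean())  # close to the conjugate posterior mean, ~1.96
```

Data tempering follows the same reweight-resample-move pattern, except that the sequence of targets is defined by adding observations to the likelihood at each stage rather than by raising the full likelihood to an increasing power.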
Constructing and analyzing functional brain networks (FBNs) has become a promising approach to brain disorder classification. However, the conventional pipeline that first constructs the network and then analyzes it limits performance, because the subtasks in the pipeline cannot interact with or adapt to one another. Recently, the Transformer has demonstrated remarkable performance on a variety of tasks, owing to its effective attention mechanism for modeling complex feature relationships. In this paper, for the first time, we develop a Transformer for integrated FBN modeling, analysis, and brain disorder classification with rs-fMRI data by proposing a Diffusion Kernel Attention Network to address the specific challenges involved. Specifically, directly applying a Transformer does not necessarily yield optimal performance on this task, because the large number of parameters in its attention module is at odds with the limited training samples usually available. Looking into this issue, we propose to use kernel attention to replace the original attention.
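To make the parameter-count argument concrete, here is a minimal sketch contrasting standard dot-product attention (whose learned query/key projections contribute O(d^2) parameters) with a kernel-based attention map that has essentially no learnable parameters. It assumes a Gaussian (RBF) kernel purely for illustration; the function names, the kernel choice, and the bandwidth sigma are hypothetical and this is not the paper's DKAN architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(X, Wq, Wk):
    # Standard attention: learned projections Wq, Wk each add d*d parameters.
    Q, K = X @ Wq, X @ Wk
    return softmax(Q @ K.T / np.sqrt(X.shape[1]))

def gaussian_kernel_attention(X, sigma=1.0):
    # Kernel attention: similarity comes from a fixed RBF kernel on the raw
    # features, so the attention map is computed without the d*d projection
    # matrices -- attractive when training samples are scarce.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return softmax(-sq_dists / (2.0 * sigma**2))

# Toy comparison: N node (ROI) features of dimension d.
rng = np.random.default_rng(0)
N, d = 8, 16
X = rng.standard_normal((N, d))
A_dot = dot_product_attention(X, rng.standard_normal((d, d)), rng.standard_normal((d, d)))
A_ker = gaussian_kernel_attention(X)
print(A_dot.shape, A_ker.shape)  # both (N, N) row-stochastic attention maps
```

In this toy setting the dot-product variant carries 2 * d * d = 512 attention parameters while the kernel variant carries only the scalar bandwidth, which is the kind of trade-off the abstract appeals to when training data are limited.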