
    2024-06-19

    The proposed formulation is, however, challenging to solve since the structured sparsity-inducing norms are non-smooth. To solve the new objective function, we consider two different approaches: proximal averaging, which averages the solutions from the proximal operators of the individual regularizers and has provable convergence guarantees (Bauschke et al., 2008, Yu, 2013a); and proximal composition, in which the proximal operator of the composite regularizer is the composition of the proximal operators of the individual regularizers (Yu, 2013b). We consider accelerated versions of these methods based on a suitable FISTA-style (Beck and Teboulle, 2009) application of accelerated gradient descent. Compared with the optimization algorithm in G-SMuRFS, the accelerated gradient method (AGM) leads to a fast and correct optimization algorithm. Through empirical evaluation and comparison with five different baseline methods on data from ADNI, we illustrate that MT-SGL outperforms the baselines, including ridge regression, lasso, group lasso (Yuan and Lin, 2006) applied independently to each task, and multi-task group lasso (MT-GL) based on ℓ2,1-norm regularization (Liu et al., 2009). The improvements are statistically significant for most scores (tasks). MT-SGL showed results similar to G-SMuRFS, while offering a more efficient optimization method and a more general formulation that allows it to tackle a wider spectrum of problems. We also present a discussion of the top ROIs identified by MT-SGL, that is, the ROIs that best explain the scores. We found that the selected ROIs corroborate studies in neuroscience (Devanand et al., 2007, de Toledo-Morrell et al., 2004) identifying the areas most affected by Alzheimer's disease. This indicates that MT-SGL can be a useful tool to guide further investigation of the ROIs highlighted by the algorithm. The rest of the paper is organized as follows.
Section 2 discusses the MT-SGL formulation, and optimization strategies are presented in Section 3. Experimental analysis is performed in Section 4, where results are compared with baseline methods. We conclude in Section 5.
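The two proximal strategies discussed above can be sketched for a composite penalty combining a lasso (ℓ1) and a row-wise group-lasso (ℓ2,1) regularizer, both of which have closed-form proximal operators. This is an illustrative sketch, not the paper's implementation: the function names and the equal-weight averaging are assumptions made here for concreteness.

```python
import numpy as np

def prox_l1(W, t):
    # Soft-thresholding: proximal operator of t * ||W||_1 (entry-wise).
    return np.sign(W) * np.maximum(np.abs(W) - t, 0.0)

def prox_group(W, t):
    # Row-wise group soft-thresholding: prox of t * sum_i ||W[i, :]||_2.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * W

def prox_average(W, t):
    # Proximal averaging: average the individual proximal maps
    # (here with equal weights, an illustrative assumption).
    return 0.5 * (prox_l1(W, t) + prox_group(W, t))

def prox_composition(W, t):
    # Proximal composition: apply one proximal map to the output of the other.
    return prox_group(prox_l1(W, t), t)
```

Either map can then replace the exact (generally intractable) proximal operator of the composite regularizer inside a proximal gradient loop.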
    Multi-task sparse group lasso

    To identify correlations between cognitive performance scores and MRI features, linear (least-squares) regression is a standard approach in medical image analysis research. One of the biggest challenges in inferring cognitive outcomes from MRI is the high dimensionality, which affects computational performance and leads to incorrect estimation and identification of the relevant predictors. Sparse methods have attracted a great amount of research effort in the neuroimaging field, as their sparsity-inducing properties help reduce the high dimensionality and identify the relevant biomarkers. Moreover, multi-task learning (MTL) methods with sparsity-inducing norms based on MRI features have been widely studied to investigate the predictive power of neuroimaging measures: by incorporating the inherent correlations among multiple clinical cognitive variables, they commonly obtain better generalization performance than learning each task individually. It is known that there exist inherent correlations among the multiple clinical cognitive variables of a subject. However, many works do not model the dependence among multiple tasks, neglecting potentially useful correlations between clinical tasks. When the tasks are believed to be related, learning multiple related tasks jointly can improve performance relative to learning each task separately. The proposed work on multi-task sparse group lasso (MT-SGL) builds on the existing literature on linear regression models with sparsity structures over the regression coefficients. More specifically, our work builds on the literature on sparse multi-task learning (Argyriou et al., 2007, Evgeniou and Pontil, 2004), which encourages related tasks to have similar sparsity structures.
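As a concrete illustration of joint multi-task learning with a sparsity-inducing norm, the multi-task group lasso baseline (ℓ2,1-norm regularization, Liu et al., 2009) can be solved with a FISTA-style accelerated proximal gradient loop. This is a minimal sketch under standard assumptions, not the paper's MT-SGL solver; the function name, step-size choice, and iteration count are illustrative.

```python
import numpy as np

def fista_mtgl(X, Y, lam, n_iter=200):
    """FISTA for multi-task group lasso:
    min_W 0.5 * ||X W - Y||_F^2 + lam * sum_i ||W[i, :]||_2
    so each row of W (one feature across all tasks) is selected or zeroed jointly."""
    d, k = X.shape[1], Y.shape[1]
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the smooth gradient
    W = np.zeros((d, k))
    Z = W.copy()                           # extrapolated point
    theta = 1.0
    for _ in range(n_iter):
        grad = X.T @ (X @ Z - Y)           # gradient of the least-squares term at Z
        V = Z - grad / L                   # gradient step
        # Row-wise group soft-thresholding (prox of the l2,1 norm).
        norms = np.linalg.norm(V, axis=1, keepdims=True)
        W_new = np.maximum(1.0 - (lam / L) / np.maximum(norms, 1e-12), 0.0) * V
        # Nesterov momentum update.
        theta_new = (1.0 + np.sqrt(1.0 + 4.0 * theta ** 2)) / 2.0
        Z = W_new + ((theta - 1.0) / theta_new) * (W_new - W)
        W, theta = W_new, theta_new
    return W
```

The row-wise penalty couples the tasks: a feature's coefficients shrink to zero across all tasks simultaneously, which is the shared-sparsity structure this section describes.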