ToolKit (Index of Posts):


JointNets R package for Joint Network Estimation, Visualization, Simulation and Evaluation from Heterogeneous Samples

JointNets R package: a Suite of Fast and Scalable Tools for Learning Multiple Sparse Gaussian Graphical Models from Heterogeneous Data with Additional Knowledge

JointNets R package on CRAN: URL

Github Site: URL

Talk slides by Zhaoyang about the JointNets implementation:

  • URL

  • YouTube talk by Zhaoyang about the JointNets implementation: URL

Demo GUI Run:


Demo Visualization of a few learned networks:

  • DIFFEE on a breast-cancer gene expression dataset

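The DIFFEE demo above estimates a sparse difference between two precision matrices (case vs. control networks). A minimal NumPy sketch of the elementwise soft-thresholding idea behind such closed-form differential estimators (function names, the regularization constant `lam`, and the ridge term `eps` are illustrative, not the package's actual API):

```python
import numpy as np

def soft_threshold(A, lam):
    # Elementwise soft-thresholding: shrink each entry toward zero by lam.
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

def sparse_precision_difference(X_case, X_ctrl, lam=0.1, eps=1e-2):
    """Crude sketch: invert ridge-regularized sample covariances for the two
    conditions, then soft-threshold their difference to obtain a sparse
    differential network estimate."""
    p = X_case.shape[1]
    cov_case = np.cov(X_case, rowvar=False) + eps * np.eye(p)
    cov_ctrl = np.cov(X_ctrl, rowvar=False) + eps * np.eye(p)
    delta = np.linalg.inv(cov_case) - np.linalg.inv(cov_ctrl)
    return soft_threshold(delta, lam)

rng = np.random.default_rng(0)
X1 = rng.normal(size=(200, 5))  # toy "case" samples, 5 variables
X2 = rng.normal(size=(200, 5))  # toy "control" samples
delta = sparse_precision_difference(X1, X2, lam=0.5)
print(np.count_nonzero(delta))  # most entries are shrunk to exactly zero
```

With no true difference between the two toy distributions, the thresholding drives essentially all entries of the estimate to zero, which is the sparsity the real estimator exploits at scale.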

  • JEEK on simulated data with samples from multiple contexts and extra spatial information about nodes


  • SIMULE on a word-based text dataset spanning multiple categories


  • SIMULE on a multi-context brain fMRI dataset


  • Demo of a downstream task that uses the learned graphs for classification, e.g., on a two-class text dataset


  • With zoom in/out functionality


  • With a multiple-window design, legends, titles, and coloring schemes


Flow charts of the code design (functional and module levels) in the JointNets package


Citations

@conference{wang2018jeek,
  Author = {Wang, Beilun and Sekhon, Arshdeep and Qi, Yanjun},
  Booktitle = {Proceedings of The 35th International Conference on Machine Learning (ICML)},
  Title = {A Fast and Scalable Joint Estimator for Integrating Additional Knowledge in Learning Multiple Related Sparse Gaussian Graphical Models},
  Year = {2018}
}

Support or Contact

Having trouble with our tools? Please contact Arsh and we’ll help you sort it out.


Graph Neural Networks for Multi-Label Classification

Title: Neural Message Passing for Multi-Label Classification

Paper: arXiv version

GitHub: https://github.com/QData/LaMP

Abstract

Multi-label classification (MLC) is the task of assigning a set of target labels to a given sample. Modeling the combinatorial label interactions in MLC has been a long-standing challenge. Recurrent neural network (RNN) based encoder-decoder models have shown state-of-the-art performance for solving MLC. However, the sequential nature of modeling label dependencies through an RNN limits its ability to parallelize computation, predict dense labels, and provide interpretable results. In this paper, we propose Message Passing Encoder-Decoder (MPED) Networks, aiming to provide fast, accurate, and interpretable MLC. MPED networks model the joint prediction of labels by replacing all RNNs in the encoder-decoder architecture with message passing mechanisms and dispense with autoregressive inference entirely. The proposed models are simple, fast, accurate, interpretable, and structure-agnostic (they can be used on data with known or unknown structure). Experiments on seven real-world MLC datasets show that the proposed models outperform autoregressive RNN models across five different metrics with a significant speedup during training and testing time.
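The core idea of the abstract can be sketched without any RNN: label nodes exchange messages through attention over each other, and every label is then predicted simultaneously rather than one at a time. A toy NumPy sketch (all weights, dimensions, and the update rule are illustrative assumptions, not the paper's actual architecture):

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def label_message_passing(labels, feats, W_msg, w_out, steps=2):
    """One forward pass: each label node attends to all other label nodes,
    aggregates their transformed states, and all labels are predicted at
    once (no autoregressive decoding)."""
    h = labels.copy()                       # (L, d) label-node states
    for _ in range(steps):
        att = softmax(h @ h.T)              # (L, L) label-to-label attention
        msgs = att @ (h @ W_msg)            # aggregate transformed neighbor states
        h = np.tanh(h + msgs + feats)       # residual update with input features
    return 1.0 / (1.0 + np.exp(-(h @ w_out)))  # independent per-label sigmoids

rng = np.random.default_rng(0)
L, d = 4, 8                                 # 4 labels, 8-dim states (toy sizes)
labels = rng.normal(size=(L, d))            # learned label embeddings
feats = rng.normal(size=(1, d))             # encoder summary, broadcast to labels
W_msg = rng.normal(size=(d, d)) * 0.1
w_out = rng.normal(size=(d,))
probs = label_message_passing(labels, feats, W_msg, w_out)
print(probs.shape)  # (4,) — one probability per label, computed in parallel
```

Because every label's probability comes out of a single parallel pass, inference cost does not grow with label-sequence length the way an RNN decoder's does, which is the speedup the abstract refers to.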

Citations

@article{lanchantin2018neural,
  title={Neural Message Passing for Multi-Label Classification},
  author={Lanchantin, Jack and Sekhon, Arshdeep and Qi, Yanjun},
  year={2018}
}

Support or Contact

Having trouble with our tools? Please contact Jack Lanchantin and we’ll help you sort it out.