Learning To-do List
This is an ever-changing to-do list of topics I would like to learn and notes I would like to create.
🚧 is where my focus currently is.
Current
Dimensionality reduction 🚧
Classic linear methods
- PCA 🚧
- probabilistic PCA
- Bayesian PCA
- Factor Analysis (FA) 🚧
- Some examples from Neuroscience?
- Gaussian Process Factor Analysis (GPFA) 🚧
- Canonical correlation analysis (CCA)
- jPCA
- seqPCA
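Since PCA is the current focus, here is a minimal numpy-only sketch to fold into the PCA note once written (SVD-based; the function name, toy data, and return values are placeholders of my own, not from any reference):

```python
import numpy as np

def pca(X, n_components):
    """Minimal PCA via SVD: center the data, keep the top right singular vectors."""
    Xc = X - X.mean(axis=0)                            # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # S is sorted descending
    components = Vt[:n_components]                     # principal directions (rows)
    scores = Xc @ components.T                         # data projected onto them
    explained_var = S[:n_components] ** 2 / (X.shape[0] - 1)
    return scores, components, explained_var

# toy data: a 2-D latent signal embedded in 5-D with additive noise
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 2))
W = rng.normal(size=(2, 5))
X = Z @ W + 0.1 * rng.normal(size=(200, 5))
scores, components, var = pca(X, 2)
```

The SVD route avoids forming the covariance matrix explicitly and gives the components already sorted by explained variance.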
![[for_exceptional_broski.png]]
Regression?
- Partial least squares (PLS)
- Fisher’s linear discriminant (FLD)
- Linear discriminant analysis (LDA)
- Principal component regression (PCR)
- Independent component analysis (ICA)
- GLM methods for dimensionality reduction?
Targeted approaches
Methods that seek subspaces related to behavior or task variables (i.e., measured experimental variables)
- Demixed PCA
- Targeted dimensionality reduction (TDR)
- Model-based targeted dimensionality reduction (mTDR)
- Preferential subspace identification (PSID)
- Subspace identification for linear systems (SID)
Latent dynamics
- LFADS
- VAEs
Nonlinear methods
- Kernel PCA
- Isomap
- t-SNE
- UMAP
- Multi-dimensional scaling
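For the kernel PCA note, a minimal RBF-kernel sketch (the `gamma` value and the concentric-ring toy data are assumptions of mine, just to show the nonlinearity that plain PCA misses):

```python
import numpy as np

def rbf_kernel_pca(X, n_components, gamma=1.0):
    """Minimal kernel PCA: RBF Gram matrix, centered in feature space, eigendecomposed."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF kernel
    n = X.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one        # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                   # eigenvalues ascending
    idx = np.argsort(vals)[::-1][:n_components]       # take the largest ones
    vals, vecs = vals[idx], vecs[:, idx]
    # projection of training points = sqrt(eigenvalue) * eigenvector
    return vecs * np.sqrt(np.clip(vals, 0, None))

# toy data: two concentric rings, linearly inseparable in the input space
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 200)
r = np.repeat([1.0, 3.0], 100)
X = np.c_[r * np.cos(theta), r * np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))
Y = rbf_kernel_pca(X, 2, gamma=2.0)
```

The feature-space centering step (subtracting row and column means of the Gram matrix) is the part that is easy to forget; without it the first component mostly encodes the mean.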
Future
- Fix, expand, and post predictive coding notes
- Topological data analysis
- persistent homology
- mapper algorithm
- Causal inference
- Make notes on different bursting neuron models
- Include some simulations in the notes
- Include a note on my own paper
- Notes on point-process models