Author Archives: tianyizhou
List of Submodular Optimization on Streaming Data (In Update)
- Coresets for k-Segmentation of Streaming Data, NIPS 2014
- Streaming Submodular Optimization: Massive Data Summarization on the Fly, KDD 2014
Posted in Uncategorized
Divide-and-Conquer Learning by Anchoring a Conical Hull
Many well-known machine learning methods aim to draw a line between two classes. However, in our recently accepted NIPS 2014 paper “Divide-and-Conquer Learning by Anchoring a Conical Hull”, we reduce many fundamental machine learning problems (a broad class of … Continue reading
Multi-task Copula – A semiparametric joint prediction model for multiple outputs with sparse graph structure
Our paper “Multi-task Copula by Sparse Graph Regression” has been accepted by KDD 2014 this year. We will give a talk at the conference, held in NYC on August 24-27. Before that, let me introduce this new method. In summary, we tackle … Continue reading
Posted in Tianyi's work
Tagged fast algorithm, Hamming Compressed Sensing, structured learning
NeSVM (Nesterov’s method for SVM) code for our ICDM 2010 paper
You can now download the MATLAB code for NeSVM from here. In the code, options.mu is a key parameter that adjusts the trade-off between a consistent decrease of the primal objective function and speed. So you need to roughly tune it to … Continue reading
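To make that trade-off concrete, here is a minimal Python sketch (not the released MATLAB code) of Nesterov's accelerated gradient applied to a smoothed hinge-loss SVM. The smoothing parameter mu plays the role of options.mu: a smaller mu tracks the primal hinge objective more faithfully but enlarges the Lipschitz constant and slows each step. Function names, the Huber-style smoothing, and defaults are illustrative assumptions, not NeSVM's exact formulation.

```python
import numpy as np

def nesterov_svm(X, y, lam=0.1, mu=0.1, iters=200):
    """Minimize sum_i h_mu(1 - y_i <x_i, w>) + (lam/2)||w||^2 with
    Nesterov's accelerated gradient, where h_mu is a mu-smoothed hinge.
    Smaller mu: closer to the true hinge loss, but larger Lipschitz
    constant L and hence smaller steps (the speed/accuracy trade-off)."""
    n, d = X.shape
    w = np.zeros(d)
    z = w.copy()          # extrapolation point
    t = 1.0               # momentum scalar
    # Lipschitz constant of the smoothed-loss gradient
    L = np.linalg.norm(X, 2) ** 2 / mu + lam
    for _ in range(iters):
        m = 1.0 - y * (X @ z)
        g = np.clip(m / mu, 0.0, 1.0)          # smoothed hinge derivative
        grad = -(X.T @ (g * y)) + lam * z
        w_new = z - grad / L                   # gradient step from z
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = w_new + (t - 1.0) / t_new * (w_new - w)   # momentum update
        w, t = w_new, t_new
    return w
```

On well-separated data this converges to a usable linear classifier in a few hundred iterations; the momentum sequence is the standard O(1/k^2) schedule.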
AISTATS 2013 GreBsmo code is released
Here is the GreBsmo code for our AISTATS 2013 paper. You can use it as a greedy version of the GoDec solver for the X = L + S problem. It is much faster and more robust. There are three video subsequences you can play in … Continue reading
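For readers new to the X = L + S model, a naive alternating sketch in Python may help: L is updated as a truncated SVD of X - S, and S keeps the largest-magnitude entries of X - L. This is only an illustration of the decomposition being solved; the exact SVD and the fixed cardinality are my simplifying assumptions, and GreBsmo's speed comes precisely from replacing the SVD with cheap greedy bilateral factor updates.

```python
import numpy as np

def godec(X, rank, card, iters=20):
    """Alternating solver for X ~ L + S:
    L <- best rank-`rank` approximation of X - S (truncated SVD);
    S <- the `card` largest-magnitude entries of X - L (hard threshold)."""
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = X - L
        # keep only the card entries of R with largest magnitude
        thresh = np.partition(np.abs(R).ravel(), -card)[-card]
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S
```

Each alternation can only decrease ||X - L - S||_F, so on data that truly is low-rank plus sparse spikes the residual shrinks quickly.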
[Best student paper award] Welcome to my “Divide-and-Conquer Anchoring (DCA)” talk at ICDM Dallas Dec 8
Is it possible to finish a 60000×10000 matrix decomposition (NMF, PCA, etc.) or completion in 6 seconds in MATLAB on your laptop? Can we make it even faster with a simple distributable scheme? How do we summarize a huge-scale dataset (ratings, movie, … Continue reading
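The anchoring step can be illustrated with the classical successive projection algorithm (SPA), which greedily picks the extreme rays of a conical hull; note this is only a simplified stand-in for the paper's divide-and-conquer scheme, which solves many such subproblems on random low-dimensional projections and combines the results. The function name and the separable test setup below are mine.

```python
import numpy as np

def spa_anchors(X, k):
    """Successive Projection Algorithm: greedily pick k columns of X
    whose conical hull (approximately) contains all other columns.
    Each round takes the residual column of largest norm (an extreme
    ray) and projects that direction out of all columns."""
    R = X.astype(float).copy()
    anchors = []
    for _ in range(k):
        j = int(np.argmax((R * R).sum(axis=0)))   # extreme column
        anchors.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)                # deflate that direction
    return anchors
```

On separable data (every column a nonnegative combination of k anchor columns) SPA provably recovers the anchors, which is why anchoring reduces NMF-style problems to a cheap column search.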
Our DMKD paper is selected as Top 5 Editor’s Choice Article for Free Reading
Prof. Geoff Webb, the Editor-in-Chief of Data Mining and Knowledge Discovery (Springer), announced on his KDnuggets page that our paper “Manifold Elastic Net: A Unified Framework for Sparse Dimension Reduction”, which was published in the DMKD journal in 2011 and cited … Continue reading
Posted in Tianyi's work
Tagged dimension reduction, elastic net, fast algorithm, feature selection, manifold learning, sparse learning
Greedy Bilateral (GreB) Paradigm for Large-scale Matrix Completion, Robust PCA and Low-rank Approximation
Our paper “Greedy Bilateral Sketch, Completion and Smoothing” has been accepted by AISTATS 2013. The abstract reads below, the PDF is here, and code will be coming soon. Abstract: Recovering a large low-rank matrix from highly corrupted, incomplete or sparse-outlier-overwhelmed … Continue reading
Compressed Labeling: An important extension of Hamming Compressed Sensing; at NIPS now
We were just informed that our submission “Compressed Labeling (CL) on Distilled Labelsets (DL) for Multi-label Learning” has been accepted by the Machine Learning Journal (Springer). The online-first PDF can be downloaded here. CL is an important application and extension of Hamming … Continue reading
Semi-Soft GoDec: >4 times faster, auto-determined k
Here is good news about GoDec (pertaining to our ICML 2011 paper): Semi-Soft GoDec has been released. Unlike the ordinary GoDec, which imposes hard thresholding on both the singular values of the low-rank part L and the entries of the … Continue reading
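A minimal sketch of the semi-soft idea, under my own simplifying assumptions (a full SVD for the low-rank step; the released MATLAB solver is considerably faster and more refined): the entries of S are soft-thresholded rather than hard-thresholded, so the support of S, and hence its cardinality k, is determined automatically by the shrinkage level tau instead of being fixed in advance.

```python
import numpy as np

def soft(A, tau):
    """Entrywise soft-thresholding: shrink each entry toward zero by tau."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def semi_soft_godec(X, rank, tau, iters=20):
    """Semi-soft variant of the X ~ L + S alternation:
    L keeps a *hard* rank constraint (truncated SVD of X - S), while
    S = soft(X - L, tau), so S's sparsity adapts to the data."""
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        S = soft(X - L, tau)
    return L, S
```

Entries of the residual smaller than tau are absorbed into the noise, and only genuinely large entries survive in S, which is what makes the cardinality auto-determined.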