Group invariance and computational sufficiency
41 mins 33 secs, 76.03 MB, MP3, 44100 Hz, 249.82 kbits/sec
About this item
Description: Vu, V. Friday 29th June 2018, 09:45 to 10:30
Created: 2018-06-29 15:13
Collection: Statistical scalability
Publisher: Isaac Newton Institute
Copyright: Vu, V
Language: eng (English)
Distribution: World (downloadable)
Explicit content: No
Aspect Ratio: 16:9
Screencast: No
Bumper: UCS Default
Trailer: UCS Default

Abstract: Statistical sufficiency formalizes the notion of data reduction. In the decision-theoretic interpretation, once a model is chosen, all inferences should be based on a sufficient statistic. However, suppose we start with a set of methods that share a sufficient statistic rather than a specific model. Is it possible to reduce the data beyond the statistic and yet still be able to compute all of the methods? In this talk, I'll present some progress towards a theory of "computational sufficiency" and show that strong reductions _can_ be made for large classes of penalized M-estimators by exploiting hidden symmetries in the underlying optimization problems. These reductions can (1) enable efficient computation and (2) reveal hidden connections between seemingly disparate methods. As a main example, I'll show how the theory provides a surprising answer to the following question: "What do the Graphical Lasso, sparse PCA, single-linkage clustering, and L1 penalized Ising model selection all have in common?"
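As a minimal illustration of the classical sufficiency idea the abstract starts from (not part of the talk's own development): for the lasso, an L1-penalized least-squares M-estimator, the objective depends on the data (X, y) only through X'X and X'y, up to an additive constant. So two datasets that share these statistics must yield the same estimate, even when the raw data differ. The coordinate-descent solver and the dataset construction below are an illustrative sketch with made-up data, assuming NumPy only.

```python
import numpy as np

def soft_threshold(z, t):
    # Scalar soft-thresholding operator: argmin_b 0.5*(b - z)^2 + t*|b|
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=500):
    """Coordinate descent for min_b 0.5*||y - X b||^2 + lam*||b||_1,
    working directly on the raw data (X, y)."""
    n, p = X.shape
    b = np.zeros(p)
    r = y - X @ b                       # current residual
    col_sq = np.sum(X**2, axis=0)       # diagonal of X'X
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual correlation; depends on data only via X'X, X'y
            zj = b[j] * col_sq[j] + X[:, j] @ r
            bj_new = soft_threshold(zj, lam) / col_sq[j]
            r += X[:, j] * (b[j] - bj_new)
            b[j] = bj_new
    return b

rng = np.random.default_rng(0)
n, p = 60, 5
X1 = rng.standard_normal((n, p))
beta = np.array([2.0, -1.0, 0.0, 0.0, 0.5])
y1 = X1 @ beta + 0.1 * rng.standard_normal(n)

# Build a second, visibly different dataset with the same sufficient
# statistics: X2'X2 = X1'X1 and X2'y2 = X1'y1.
Q1, R = np.linalg.qr(X1)                            # thin QR: X1 = Q1 R
Q2, _ = np.linalg.qr(rng.standard_normal((n, p)))   # a different orthonormal basis
X2 = Q2 @ R                     # X2'X2 = R'R = X1'X1
y2 = Q2 @ (Q1.T @ y1)           # X2'y2 = R'Q1'y1 = X1'y1; the discarded part of
                                # y1 only shifts the objective by a constant

b1 = lasso_cd(X1, y1, lam=1.0)
b2 = lasso_cd(X2, y2, lam=1.0)
assert np.allclose(b1, b2, atol=1e-6)   # same estimate from different raw data
```

The construction mirrors the orthogonal-group invariance of the least-squares loss: rotating the design and response by any orthogonal map leaves (X'X, X'y) fixed, so every method computable from those statistics is unchanged.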
Available Formats
| Format | Quality | Bitrate | Size | Options |
|---|---|---|---|---|
| MPEG-4 Video | 640x360 | 1.94 Mbits/sec | 604.17 MB | View / Download |
| WebM | 640x360 | 489.56 kbits/sec | 148.93 MB | View / Download |
| iPod Video | 480x270 | 522.18 kbits/sec | 158.79 MB | View / Download |
| MP3 * | 44100 Hz | 249.82 kbits/sec | 76.03 MB | Listen / Download |
| Auto | (Allows browser to choose a format it supports) | | | |