An efficient kernel product for automatic differentiation libraries, with applications to measure transport
WebM, 41 mins 42 secs, 195.75 MB, 640x360, 29.97 fps, 44100 Hz, 640.9 kbits/sec
About this item
Description: Feydy, J. Monday 13th November 2017, 11:00 to 11:30
Created: 2017-11-14 09:01
Collection: Growth form and self-organisation
Publisher: Isaac Newton Institute
Copyright: Feydy, J
Language: eng (English)
Distribution: World (downloadable)
Explicit content: No
Aspect Ratio: 16:9
Screencast: No
Bumper: UCS Default
Trailer: UCS Default
Abstract: Authors: Benjamin Charlier, Jean Feydy, Joan Alexis Glaunès and Alain Trouvé. This paper presents a memory-efficient implementation of the kernel matrix-vector product, which is suitable for use with automatic differentiation libraries -- in our case, PyTorch. This piece of software alleviates the major bottleneck of autodiff libraries as far as diffeomorphic image registration is concerned: symbolic Python code can now scale up to large point clouds and shapes (100,000+ vertices). To showcase the value of automatic differentiation to the LDDMM community, we introduce the "normalized Hamiltonian" setting and show that it corresponds to a spatially regularized optimal transport of mass distributions: made tractable by autodiff libraries, the kernel normalization trick turns an extrinsic image deformation routine into an intrinsic measure transportation program.
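The kernel matrix-vector product discussed in the abstract can be sketched in plain PyTorch. The snippet below is an illustrative naive version (function name, Gaussian kernel choice, and shapes are assumptions, not the authors' actual implementation): it materializes the full N x M kernel matrix in memory, which is precisely the bottleneck the paper's memory-efficient implementation avoids for large point clouds, while still letting autograd differentiate through the product.

```python
import torch

def gaussian_kernel_matvec(x, y, b, sigma=1.0):
    """Naive kernel product (K(x, y) @ b) with a Gaussian kernel.

    x: (N, D) points, y: (M, D) points, b: (M, E) vectors.
    Builds the full (N, M) kernel matrix, so memory grows as O(N * M).
    """
    # Pairwise squared distances via broadcasting: (N, 1, D) - (1, M, D).
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = torch.exp(-d2 / (2 * sigma ** 2))  # (N, M) Gaussian kernel matrix
    return K @ b                           # (N, E) kernel-vector product

# Small example: gradients flow through the kernel product automatically.
x = torch.randn(100, 3, requires_grad=True)
y = torch.randn(80, 3)
b = torch.randn(80, 2)
out = gaussian_kernel_matvec(x, y, b)      # shape (100, 2)
out.sum().backward()                       # x.grad now holds d(sum)/dx
```

With 100,000+ vertices the (N, M) matrix above no longer fits in memory; a tiled or symbolic evaluation of the same reduction, as in the implementation the talk presents, keeps the memory footprint linear while remaining differentiable.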
Available Formats
| Format | Quality | Bitrate | Size |
|---|---|---|---|
| MPEG-4 Video | 640x360 | 1.94 Mbits/sec | 607.15 MB |
| WebM * | 640x360 | 640.9 kbits/sec | 195.75 MB |
| iPod Video | 480x270 | 522.28 kbits/sec | 159.52 MB |
| MP3 | 44100 Hz | 249.78 kbits/sec | 76.35 MB |
| Auto | (Allows browser to choose a format it supports) | | |