Large Scale Image Retrieval and Mining
Published on Jan 31, 2017 · 1248 views
Chapter list
Large Scale Image Retrieval and Mining (00:00)
Introduction (00:34)
Outline (02:18)
Image retrieval (03:07)
Video Google (05:35)
Local Features - 1 (06:40)
Local Features - 2 (08:05)
Retrieval Quality (09:07)
Bag-of-Words (BoW): Off-line Stage (10:57)
Bag-of-Words: On-line Stage (12:14)
Feature Distance Approximation - 1 (13:23)
Feature Distance Approximation - 2 (14:31)
Vector Quantization via k-Means (15:15)
Bags of Words Image Representation (15:45)
Efficient Scoring (16:50)
BoW and Inverted File - 1 (18:19)
BoW and Inverted File - 2 (18:57)
BoW and Inverted File - 3 (19:30)
Geometric Re-ranking (20:03)
Visual Words and Vector Quantization (20:15)
Vector Quantization (20:19)
Hierarchical k-means (20:59)
Approximate k-means (21:57)
Hamming Embedding (22:20)
Soft Assignment (23:58)
Learning Fine Vocabularies (24:51)
min-Hash - 1 (26:23)
min-Hash - 2 (26:57)
min-Hash - 3 (28:25)
min-Hash - 4 (30:00)
Set Overlap and min-Hash (30:25)
min-Hash - 5 (32:23)
min-Hash - 6 (32:44)
Probability of Retrieving an Image Pair (33:32)
Weighted min-Hash (34:04)
Image Clustering via min-Hash (34:43)
Image Clusters as Connected Components - 1 (34:51)
Image Clusters as Connected Components - 2 (35:29)
Probability of Retrieving an Image Pair (36:03)
Spatially Related Images (36:17)
Seed Generation - 1 (36:47)
Seed Generation - 2 (37:16)
At Least One Seed in Cluster (37:29)
Summary of the Method (38:21)
UKY Dataset (38:42)
Application (39:08)
Learning Fine Vocabularies (39:10)
Appearance Variance of a Single Feature - 1 (39:15)
Appearance Variance of a Single Feature - 2 (40:16)
Geometric min-Hash (40:26)
Geometric min-Hash algorithm (40:31)
Object Discovery - 1 (41:19)
Object Discovery - 2 (41:48)
Unsupervised Discovery of Co-occurrence in Sparse High Dimensional Data (42:01)
Over-counting (42:11)
Independence Assumption Violation (43:03)
Examples of Co-occurring Features (44:14)
More Examples - 1 (44:47)
More Examples - 2 (44:49)
Visual Word Frequency (45:02)
Geometry in image retrieval (45:51)
Robust Estimation: Hough vs. RANSAC (46:07)
RANSAC - 1 (48:44)
Fitting a Line (48:46)
RANSAC - 2 (49:40)
RANSAC - 3 (49:48)
RANSAC - 4 (49:51)
RANSAC - 5 (49:55)
RANSAC - 6 (50:01)
RANSAC - 7 (50:02)
RANSAC - 8 (50:03)
RANSAC - 9 (50:23)
How Many Samples (50:51)
RANSAC [Fischler, Bolles ’81] (51:27)
Advanced RANSAC (52:40)
*SAC (52:45)
Beyond visual nearest neighbor search (53:51)
Retrieval for Browsing - 1 (54:00)
Retrieval for Browsing - 2 (54:42)
New Problem Formulation - 1 (55:41)
New Problem Formulation - 2 (56:19)
“Where is this” example (57:11)
Query Image (57:44)
All Details on the Landmark (58:06)
Highest Resolution Transform (58:24)
Highest Details (58:49)
Level of Interest Transform (58:54)
From single image query to detailed 3D reconstruction (59:38)
Retrieval and SfM (01:00:06)
Tight Coupling of Retrieval and SfM (01:01:06)
Beyond Nearest Neighbour (01:02:13)
Some Results … (01:02:43)
From dusk till dawn: modelling in the dark (01:05:39)
Separate Day & Night Dense Reconstructions - 1 (01:06:06)
Separate Day & Night Dense Reconstructions - 2 (01:07:40)
Separate Day & Night Dense Reconstructions - 3 (01:08:01)
Separate Day & Night Dense Reconstructions - 4 (01:08:20)
Separate Day & Night Dense Reconstructions - 5 (01:08:24)
Separate Day & Night Dense Reconstructions - 6 (01:08:25)
Day & Night Dense Models (01:08:27)
Geometric Fusion of Day & Night Models (01:09:04)
Recoloring of Day & Night Models (01:09:46)
Clustering into Day & Night - 1 (01:10:00)
Clustering into Day & Night - 2 (01:10:30)
Clustering into Day & Night - 3 (01:10:43)
Clustering into Day & Night - 4 (01:11:05)
Separate Dense Reconstruction of Day & Night (01:11:13)
Geometric Fusion of Structure & Recoloring - 1 (01:11:36)
Geometric Fusion of Structure & Recoloring - 2 (01:11:38)
Geometric Fusion of Structure & Recoloring - 3 (01:11:39)
Summary - 1 (01:12:15)
Some Results (01:13:34)
CNN image retrieval learns from BoW (01:13:59)
Retrieval Challenges - 1 (01:14:27)
Retrieval Challenges - 2 (01:14:35)
Retrieval Challenges - 3 (01:14:40)
Retrieval Challenges - 4 (01:14:45)
CNN Image Retrieval - 1 (01:15:14)
CNN Image Retrieval - 2 (01:15:45)
CNN Image Retrieval - 3 (01:16:16)
CNN Image Retrieval - 4 (01:16:54)
CNN Image Retrieval - 5 (01:17:40)
CNN Image Retrieval - 6 (01:18:05)
CNN learns from BoW – Training Data - 1 (01:18:28)
CNN learns from BoW – Training Data - 2 (01:18:50)
CNN learns from BoW – Positives (01:19:11)
CNN learns from BoW – Negatives (01:20:37)
CNN Siamese Learning - 1 (01:21:58)
CNN Siamese Learning - 2 (01:22:27)
Contrastive Loss - 1 (01:22:30)
Contrastive Loss - 2 (01:22:41)
Whitening and dimensionality reduction (01:22:52)
Experiments – datasets (01:24:13)
Experiments – Learning (AlexNet) (01:24:38)
Experiments – Dataset variability (AlexNet) (01:25:47)
Experiments – Dimensionality reduction (VGG) (01:26:10)
Experiments – Overfitting / Generalization (01:26:14)
State-of-the-art - 1 (01:26:53)
State-of-the-art - 2 (01:27:28)
Summary - 2 (01:27:31)
Vision and Sports Summer School Prague August 2017 (01:28:48)
Lecturers 2016 - 1 (01:29:22)
Lecturers 2016 - 2 (01:29:33)
Lecturers 2016 - 3 (01:29:35)