- Number of slides: 32
Learning to Combine Bottom-Up and Top-Down Segmentation Anat Levin and Yair Weiss School of CS&Eng, The Hebrew University of Jerusalem, Israel
Bottom-up segmentation
Bottom-up approaches use low-level cues to group similar pixels:
• Malik et al, 2000
• Sharon et al, 2001
• Comaniciu and Meer, 2002
• …
Bottom-up segmentation is ill posed
Many possible segmentations are equally good based on low-level cues alone.
Example segmentations: images from Borenstein and Ullman 02.
Top-down segmentation
Class-specific, top-down segmentation:
• Borenstein and Ullman, ECCV 02
• Winn and Jojic 05
• Leibe et al 04
• Yuille and Hallinan 02
• Liu and Sclaroff 01
• Yu and Shi 03
Combining top-down and bottom-up segmentation
Find a segmentation that:
1. Is similar to the top-down model
2. Aligns with image edges
Previous approaches
• Borenstein et al 04: Combining top-down and bottom-up segmentation
• Tu et al ICCV 03: Image parsing: segmentation, detection, and recognition
• Kumar et al CVPR 05: OBJ CUT
• Shotton et al ECCV 06: TextonBoost
Previous approaches train the top-down and bottom-up models independently.
Why learn top-down and bottom-up models simultaneously?
• The large number of degrees of freedom in the tentacles' configuration requires a complex, deformable top-down model.
• On the other hand, the colors are rather uniform, so low-level segmentation is easy.
Our approach
• Learn top-down and bottom-up models simultaneously
• At run time, reduces to energy minimization with binary labels (graph min-cut)
Energy model
• Segmentation alignment with image edges
• Consistency with fragment segmentations
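A hedged sketch of such an energy over binary labels x (1 = figure), consistent with the two terms above; the notation here (fragments F_i with local masks x̂^i, weights w_i and ν) is assumed for illustration, not the paper's exact definition:

\[
E(x; w) \;=\; \nu \sum_{(p,q)\in\mathcal{N}} |x_p - x_q|\, e^{-\|I_p - I_q\|^2 / 2\sigma^2}
\;+\; \sum_i w_i \sum_{k \in F_i} \bigl| x_k - \hat{x}^{\,i}_k \bigr|
\]

The first term penalizes label changes away from image edges (alignment); the second penalizes disagreement with each fragment's stored figure/ground mask (consistency).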
Minimizing this energy with graph min-cut gives the resulting segmentation.
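A minimal run-time sketch using graph min-cut, assuming the PyMaxflow library (pip install PyMaxflow); the toy unary and pairwise weights below stand in for the learned fragment and edge terms and are not the paper's trained model.

import numpy as np
import maxflow  # PyMaxflow

H, W = 8, 8
cost_bg = np.full((H, W), 0.5)      # illustrative cost of labeling each pixel "background"
cost_fg = np.full((H, W), 0.5)      # illustrative cost of labeling each pixel "figure"
cost_bg[2:6, 2:6] = 2.0             # pretend fragments vote "figure" in this region

g = maxflow.Graph[float]()
nodes = g.add_grid_nodes((H, W))
g.add_grid_edges(nodes, weights=0.3)        # pairwise smoothness (edge-alignment stand-in)
g.add_grid_tedges(nodes, cost_bg, cost_fg)  # terminal capacities encode the two unary costs

g.maxflow()                                  # global optimum of this binary energy
segmentation = g.get_grid_segments(nodes)    # boolean labeling (True = one side of the cut)
print(segmentation.astype(int))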
Learning from segmented class images
Training data: class images with ground-truth figure/ground segmentations.
Goal: learn fragments for an energy function.
Learning energy functions using conditional random fields
Theory of CRFs:
• Lafferty et al 2001
CRFs for vision:
• Kumar and Hebert 2003
• LeCun and Huang 2005
• Ren et al 2006
• He et al 2004, 2006
• Quattoni et al 2005
• Torralba et al 04
Learning energy functions using conditional random fields
Minimize the energy E(x) of the true segmentation, i.e., maximize its probability P(x); maximize the energy of all other configurations.
“It's not enough to succeed. Others must fail.” –Gore Vidal
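In standard CRF form, the two goals are a single objective: maximizing the conditional likelihood of the true segmentation x* lowers its energy while raising the energy of every competing labeling through the partition function:

\[
P(x \mid I; w) = \frac{e^{-E(x; I, w)}}{Z(I; w)},
\qquad
Z(I; w) = \sum_{x'} e^{-E(x'; I, w)},
\]
\[
\log P(x^* \mid I; w) = -E(x^*; I, w) - \log Z(I; w).
\]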
Differentiating the CRF log-likelihood
The log-likelihood is convex with respect to the weights w.
Log-likelihood gradients with respect to w: expected feature response minus observed feature response.
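For an energy linear in the weights, E(x; I, w) = Σ_i w_i f_i(x; I), this is the standard CRF gradient (and log Z is a log-sum-exp of linear functions of w, hence convex):

\[
\frac{\partial \log P(x^* \mid I; w)}{\partial w_i}
= \mathbb{E}_{P(x \mid I; w)}\bigl[f_i(x; I)\bigr] - f_i(x^*; I).
\]

At the optimum, the expected feature responses match the observed ones.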
Conditional random fields: computational challenges
• CRF cost: evaluating the partition function
• Derivatives: evaluating marginal probabilities
Use approximate estimations:
• Sampling
• Belief propagation and the Bethe free energy
• Used in this work: tree-reweighted belief propagation and the tree-reweighted upper bound (Wainwright et al 03)
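The tree-reweighted upper bound follows from the convexity of log Z: write the parameters θ as a convex combination of tree-structured parameters, θ = Σ_T ρ(T) θ(T) over spanning trees T, and apply Jensen's inequality:

\[
\log Z(\theta) = \log Z\Bigl(\textstyle\sum_T \rho(T)\,\theta(T)\Bigr)
\;\le\; \sum_T \rho(T)\, \log Z\bigl(\theta(T)\bigr),
\]

where each tree term is exactly computable; TRBP provides the matching pseudo-marginals for the gradient.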
Fragment selection
• Candidate fragment pool
• Greedy energy design
Fragment selection challenges
Straightforward computation of the likelihood improvement is impractical:
2000 fragments × 50 training images × 10 fragment-selection iterations = 1,000,000 inference operations!
Fragment selection
The first-order approximation to the log-likelihood gain favors:
• A fragment with low error on the training set
• A fragment not accounted for by the existing model
A similar idea appears in different contexts:
• Zhu et al 1997
• Lafferty et al 2004
• McCallum 2003
Fragment selection
First-order approximation to the log-likelihood gain:
• Requires a single inference pass on the previous iteration's energy to evaluate the approximation for all fragments
• Evaluating the first-order approximation is linear in the fragment size
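A sketch of the approximation in assumed notation (not the slide's own): adding a candidate fragment term with feature response f and small weight λ to the current energy changes the log-likelihood, to first order, by

\[
\Delta\ell(\lambda) \;\approx\; \lambda \Bigl( \mathbb{E}_{P_{\mathrm{old}}}\bigl[f(x)\bigr] - f(x^*) \Bigr),
\]

which is large exactly when the fragment has low error on the true segmentations (f(x*) small) and is not accounted for by the existing model (expected response high); the expectation needs only the marginals already computed under the previous energy.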
Fragment selection: summary
Initialization: low-level term only.
For k = 1, …, K:
• Run TRBP inference using the previous iteration's energy
• Approximate the likelihood gain of each candidate fragment
• Add the fragment with maximal gain to the energy
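A compact Python sketch of this loop; trbp_marginals, expected_response, and observed_response are hypothetical stand-ins for the paper's TRBP inference and first-order gain, not a real API.

# Greedy fragment selection sketch. trbp_marginals(), expected_response(),
# and observed_response() are hypothetical placeholders for TRBP inference
# and the first-order likelihood-gain approximation.
def select_fragments(candidates, images, true_segs, K,
                     trbp_marginals, expected_response, observed_response):
    energy = ["low_level_term"]                 # initialization: low-level term only
    for _ in range(K):
        # One inference pass per image under the previous iteration's energy.
        marginals = [trbp_marginals(energy, img) for img in images]

        def gain(frag):
            # First-order gain: expected minus observed feature response,
            # summed over the training set.
            return sum(expected_response(frag, m) - observed_response(frag, s)
                       for m, s in zip(marginals, true_segs))

        best = max(candidates, key=gain)        # fragment with maximal gain
        candidates = [f for f in candidates if f is not best]
        energy.append(best)                     # add it to the energy
    return energy[1:]                           # the K selected fragments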
Training the horses model
Training the horses model: one fragment
Training the horses model: two fragments
Training the horses model: three fragments
Results: horses dataset
Results: horses dataset
[Plot: percent of mislabeled pixels vs. number of fragments.]
Comparable to previous results (Kumar et al, Borenstein et al) but with far fewer fragments.
Results: artificial octopi
Results: cows dataset (from the TU Darmstadt database)
Results: cows dataset
[Plot: percent of mislabeled pixels vs. number of fragments.]
Conclusions
• Simultaneously learning top-down and bottom-up segmentation cues
• Learning formulated as estimation in conditional random fields
• A novel, efficient fragment-selection algorithm
• The algorithm achieves state-of-the-art performance with a significantly smaller number of fragments