

Number of slides: 42

Use of 3D Imaging for Information Product Development
David W. Messinger, Ph.D.
Digital Imaging and Remote Sensing Laboratory
Chester F. Carlson Center for Imaging Science
Rochester Institute of Technology
Feb. 7, 2008

RIT LADAR Research Areas
Diagram labels: LADAR+HSI Target Detection · LADAR 3D Data Sets · LADAR & MSI/HSI Fusion · MSI Data Sets · HSI Data Sets · LADAR Feature Extraction · LADAR Data Exploitation · Assisted Scene Construction · NURI: Semi-Automated DIRSIG Scene Construction · System Performance Trade Studies · System Tasking Trade Studies · DIRSIG Laser Radar System Simulation · Scene Model · Algorithm/Exploitation Testing

IMINT versus MASINT
• Traditional Data Viewers
  – "Fusion" of 2D imagery and 3D point data
  – 3D "fly around" and basic geometric measurements
• Feature-Based Visualization
  – Visualize rich data descriptions extracted from 2D imagery and 3D data sets
  – Potential to render under different modalities, at different times of day
  – Ability to perform signature analysis techniques because of the availability of spectral information
2005 Ford Explorer, red paint (spectral reflectance available)
Image courtesy Merrick & Company, Copyright 2004

Semi-Automated Process for Scene Generation
Flowchart stages: 3D Data Sets, Terrain Extraction, Background Feature Maps, Initial Tree/Building Segmentation, Spectra Retrieval, Coarse Registration, MSI Imagery, HSI Imagery, Refined Tree/Building Segmentation, Coarse Building Analysis, Spectral Assignment, Tree Reconstruction, Refined Registration, Refined Building Reconstruction, DIRSIG Scene Description

Color Visualization of a Small Scene
• Visualization of a 3D scene model that was automatically generated from 3D and 2D data sources
  – scene model can be visualized in other wavelengths, from other angles, at different times of day, with different atmospheres, etc.
  – situational awareness, operational planning, etc.
[Figure panels: Real Imagery · Quick-Look Color Simulation]

Information Products Available
• Terrain extraction and object characterization techniques
• Techniques for automated plane extraction and cultural 3D object reconstruction
  – building recognition, segmentation, extraction, and reconstruction
• Approaches to 3D object matching and filtering
  – spin images, generalized ellipsoids, etc.
  – techniques for automated tree finding and size estimation
• Approaches to 3D-data-to-2D-image registration
• Approaches to 3D-model-to-2D-image registration

3D Model and 2D Image Registration
Project the 3D model onto the 2D image.
[Figure labels: Passive Imagery · 3D model overlaid on 2D imagery · 3D model derived from LADAR data]
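The overlay above amounts to projecting 3D model points through a camera model onto the image plane. A minimal NumPy sketch of a pinhole projection; the intrinsic matrix, pose, and point values below are purely illustrative (a real system would derive them from the sensor model and platform geometry):

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole camera.

    K: 3x3 intrinsic matrix, R: 3x3 rotation, t: 3-vector translation
    (hypothetical values below, not from the presentation).
    """
    cam = (R @ points_3d.T).T + t     # world frame -> camera frame
    uvw = (K @ cam.T).T               # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide

# Toy setup: nadir-looking camera 100 m above the origin.
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 100.0])
pts = np.array([[0.0, 0.0, 0.0],     # ground point under the camera
                [10.0, 0.0, 0.0]])   # point 10 m east
pix = project_points(pts, K, R, t)
```

The first point lands at the principal point (320, 240); the second is displaced along the image x-axis by the focal length times the angular offset.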

Potential for 3D Object Change Detection
• There are objects in the image that are not in the model
• Were they missed by the sensor that created the model, or "added"?
• Potential exists for object change detection based on shape detection methods
  – Spin Images, described later

Potential Applications (Not Yet Developed)
• Trafficability and Lines of Communication (LOC)
  – Potential to semi-automatically detect roads (with occlusions), paths, pipes, etc.
  – Estimate density of wooded areas and trafficability by vehicles
• 3D-based change detection and component dissection
• Line-of-sight analyses
• Path forward to tie 3D models to process models?
  – Both natural processes and man-made processes
• Improved MSI and HSI atmospheric compensation
  – 3D feature extraction can improve relative solar angle estimation

Fusion of LADAR and HSI for Improved Target Detection
Michael Foster, Ph.D. (USAF)
John Schott, Ph.D.
David Messinger, Ph.D.

Physics-Based Target Detection Algorithms for HSI
• Approach leverages knowledge of the physics of the observable quantities to improve target detection under difficult observation / target-state conditions
  – targets under varying illumination
  – targets with variable "contrast"
  – targets with modified surface properties
• General methodology
  – develop a physics-based model to predict the manifestations of the target's observable signature
  – include known sources of variability
  – detect for a family of signatures, called a "target space"
• Applied to detection in reflective and emissive spectral regimes

"Traditional" Target Detection
[Flow diagram labels: target property · target space (target domain) · Scene Image · atmospheric compensation / TES · radiance space (image domain) · target detection · probability map]

Physics-Based Signatures Detection
[Flow diagram labels: target properties · physics-based model · target manifestations (target domain) · Scene Image · radiance space (image domain) · target detection · probability map]

Physics-Based Detection of Surface Targets in Reflective HSI
• Physics-Based Structured Infeasibility Target-detector (PB-SIFT)
  – Work of Emmett Ientilucci under an IC Postdoctoral fellowship
  – Physics-Based Orthogonal Subspace Projection (PB-OSP)
  – Structured Infeasibility Projector (SIP)
• Overview
  – Variability in the target signature is due to atmospheric contributions and target illumination
  – Captures variability in the target space using endmembers
  – Can isolate pixels that have a significant projection but are not target

Addition of LADAR Information
• Physics-based forward modeling techniques for target detection typically use radiometric variability to describe the possible target manifestations
• Generally ignore or over-model geometric terms in the forward model
• IF WE HAD
  – co-temporal, co-registered LADAR & HSI
  – oversampled LADAR
• CAN WE
  – use these data to constrain the geometric terms in the forward model and improve target detection?

Sub-pixel Target Radiometric Model
• Predicts spectral radiance at the sensor based on a mixture of target and background spectra for a specific atmosphere and geometry
• Inherent geometric terms:
  – Shadowing term K
  – Incident illumination angle θ
  – Downwelled shape factor F
  – Pixel purity M
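The slide does not write the model out, but a common form of such a sub-pixel mixing model combines K, θ, F, and M roughly as below. This is a simplified stand-in, not the presentation's actual equation; real forward models carry additional per-band path-radiance and transmission terms:

```python
import numpy as np

def at_sensor_radiance(rho_t, L_bg, E_sun, E_down, K, theta, F, M,
                       tau=1.0, L_path=0.0):
    """Simplified sub-pixel at-sensor spectral radiance (arrays over bands).

    rho_t  : target reflectance spectrum
    L_bg   : background radiance spectrum
    E_sun  : direct solar irradiance; E_down : downwelled irradiance
    K      : shadowing term (0 = fully shadowed, 1 = directly lit)
    theta  : incident illumination angle (radians)
    F      : downwelled shape factor (fraction of sky visible)
    M      : pixel purity (target fill fraction)
    tau, L_path : ground-to-sensor transmission and path radiance.
    """
    L_target = (rho_t / np.pi) * (K * E_sun * np.cos(theta) + F * E_down) * tau
    return M * L_target + (1.0 - M) * L_bg + L_path

# Toy two-band example: half-target pixel, fully lit, no sky term.
L = at_sensor_radiance(rho_t=np.array([0.5, 0.5]),
                       L_bg=np.ones(2),
                       E_sun=np.full(2, np.pi),
                       E_down=np.zeros(2),
                       K=1.0, theta=0.0, F=0.0, M=0.5)
```

With these numbers the target term contributes 0.5 per band, so the mixed pixel radiance is 0.5 × 0.5 + 0.5 × 1.0 = 0.75.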

Physics-Based Signatures Detection (revisited)
[Flow diagram labels: target properties · physics-based model · target manifestations (target domain) · Scene Image · radiance space (image domain) · target detection · probability map]
The physics-based model now includes geometric information to constrain the model parameter space.

LADAR 3D Point Cloud Processing
• Shadow estimate K
  – Shadow feeler
• Incident illumination angle θ
  – Extract points associated with the LADAR ground plane
  – Estimate point normals using eigenvector analysis
  – Calculate the angle between point normals and the solar direction
• Downwelling shape factor F
  – Assume clear sky and use the LADAR skydome feeler technique
• Pixel purity M
  – Spin-image techniques to identify probable LADAR target points
  – Subject to high false alarms
• Project point data into the HSI FPA to create pixel maps
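The "eigenvector analysis" step for point normals is the standard local-covariance (PCA) method: the eigenvector belonging to the smallest eigenvalue of a point's neighborhood covariance approximates the surface normal. A small NumPy sketch (brute-force neighbor search, adequate for toy data):

```python
import numpy as np

def estimate_normals(points, k=8):
    """Per-point surface normals from local covariance eigenvectors.

    For each point, take its k nearest neighbours, form their 3x3
    covariance, and use the eigenvector of the smallest eigenvalue
    as the normal, oriented to point upward (+z).
    """
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]          # k nearest (incl. self)
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)                # eigenvalues ascending
        n = v[:, 0]                               # smallest-eigenvalue vector
        normals[i] = n if n[2] >= 0 else -n       # orient upward
    return normals

# Flat 5x5 grid at z = 0: every normal should be (0, 0, 1).
pts = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], float)
normals = estimate_normals(pts)
```

Production code would use a k-d tree for the neighbor search rather than the O(N²) distance computation shown here.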

Microscene Spectral Data - DIRSIG Simulation
1. Gray Humvee
2. Calibration panel
3. Gray SUV
4. Gray shed
5. Red sedan
6. Red SUV
7. Gray SUV under tree
8. Inclined gray SUV
9. Inclined gray Humvee
(high spatial resolution for visualization only)

Microscene Spectral Data - DIRSIG Simulation
• Spectral cube has 1.0 m GSD
• Spatially oversampled, producing mixed pixels
• 0.4–1.2 µm; RGB of the cube used in processing

Microscene Spatial Data - DIRSIG Simulation
[Figure panels: 3D LADAR point cloud, nadir view · 3D LADAR point cloud, oblique view]
Post spacing of approximately 40 cm, with and without quantization and pointing error.

Feature Maps - Estimate of K
Note that shadows "line up" with trees in the RGB image.
[SHADOW MAP]
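The "shadow feeler" itself is not spelled out on the slides; the idea is to march from each surface point toward the sun and test whether any terrain rises above the ray. A 1D terrain-profile illustration of that idea (the actual technique operates on the 3D point cloud):

```python
import numpy as np

def shadow_profile(heights, sun_elev_deg, dx=1.0):
    """Binary shadow factor K per sample of a 1D terrain profile.

    The sun shines from the +x direction at elevation sun_elev_deg.
    A sample is shadowed (K = 0) if, marching toward the sun, any
    later sample rises above the solar ray through that sample.
    """
    tan_e = np.tan(np.radians(sun_elev_deg))
    K = np.ones(len(heights))
    for i, h in enumerate(heights):
        for j in range(i + 1, len(heights)):      # march toward the sun
            ray_h = h + (j - i) * dx * tan_e      # ray height at sample j
            if heights[j] > ray_h:                # terrain blocks the ray
                K[i] = 0.0
                break
    return K

# A 10 m spike at sample 3 shadows everything sun-ward of it at 45 deg.
K = shadow_profile(np.array([0.0, 0.0, 0.0, 10.0, 0.0]), 45.0)
```

For the flat samples behind the spike the solar ray passes below 10 m, so they are shadowed; the spike itself and the sample beyond it remain lit.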


Feature Maps - Estimate of θ
Illumination angle map for terrain (after tree removal).
[ILLUMINATION ANGLE MAP]
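Given the point normals, the incident illumination angle θ is simply the angle between each normal and the unit solar direction:

```python
import numpy as np

def illumination_angle(normals, sun_dir):
    """Incident illumination angle (degrees) per point.

    normals : Nx3 unit surface normals
    sun_dir : 3-vector toward the sun (normalized internally)
    """
    s = np.asarray(sun_dir, float)
    s = s / np.linalg.norm(s)
    cos_t = np.clip(normals @ s, -1.0, 1.0)   # clip guards rounding error
    return np.degrees(np.arccos(cos_t))

# Flat ground with the sun 45 degrees above the horizon in the +x plane.
theta = illumination_angle(np.array([[0.0, 0.0, 1.0]]), [1.0, 0.0, 1.0])
```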

Feature Maps - Estimate of F
Note the full sky view on the tops of trees and near-zero sky visibility for ground surrounded by trees.
[SHAPE FACTOR MAP]

Target Detection in 3D Point Cloud: Spin-Images
• 2D parametric-space image
• Captures 3D shape information about a single point in the 3D point cloud
• Pose invariant
  – Based on local geometry relative to a single point normal
  – i.e., invariant to tip, tilt, pan
• Scale variant
  – Estimate scale from the ground plane / sensor position
• Graceful detection degradation in the presence of occlusion

Spin-Image Formation: Surface Point Coordinate Transformation
• 2D parameter-space coordinates
  – Radial distance to the local point normal
  – Signed vertical distance along the basis normal
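Those two coordinates are the classic spin-image (α, β) transform of Johnson and Hebert: β is the signed distance along the basis point's normal, and α is the radial distance from the axis that normal defines:

```python
import numpy as np

def spin_coords(points, p, n):
    """Map Nx3 points into the 2D spin-image parameter space.

    p : 3-vector basis (oriented) point
    n : unit normal at p
    Returns (alpha, beta): radial distance from the normal's axis,
    and signed distance along the normal.
    """
    d = points - p
    beta = d @ n
    # max(..., 0) guards against tiny negative values from rounding
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta**2, 0.0))
    return alpha, beta

# Point (3, 4, 5) relative to the origin with an upward normal:
# beta = 5 (height along n), alpha = 5 (in-plane distance, 3-4-5 triangle).
alpha, beta = spin_coords(np.array([[3.0, 4.0, 5.0]]),
                          np.zeros(3), np.array([0.0, 0.0, 1.0]))
```

The spin image itself is then a 2D histogram of (α, β) over all points in the support region, at a chosen bin size.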

Spin-Image Examples
• 3 spin-image pairs corresponding to 3 different points on the model
• Left image is the high-resolution spin image (small bin size)
• Right image is the low-resolution spin image (larger bin size) after bilinear interpolation

Spin-Image Geometric Target Detection
• 3D target model: for all surface points, construct a spin image
• 3D image data: for all data points, construct a spin image
• Identify points in the image data that have high correspondence to a library model

Library Matching Issues
• Spin-image library generated from the 3D model
  – Points on all sides of the model
  – High sampling density
  – No occlusion
• Spin images generated from the scene
  – Target and background present
  – Points only from the LADAR illumination direction
  – Self-occlusion and background occlusion
  – Not necessarily at the same spatial sampling as the model library

Spin-Image Library Matching
• Intelligent model library generation
• Typically the model has many more points than the scene data
  – Normalize scene and model spin images
• Scene has points from only one direction
  – Spin angle limits which model points can contribute to a spin image when building the model library
  – Compute a normal for every point in the model
  – Pick a spin-image basis point p
  – Allow only normals within a 90° angle relative to the spin-image basis normal to contribute to the model spin image
  – This builds self-occlusion effects into the model library
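Matching a scene spin image against a library entry is typically scored with a mean-removed normalized correlation; the presentation's exact normalization may differ in detail (Johnson and Hebert restrict it to overlapping non-empty bins), so treat this as a sketch:

```python
import numpy as np

def spin_correlation(A, B):
    """Normalized (mean-removed) correlation between two spin images.

    Returns a score in [-1, 1]; 1.0 means the images differ only by a
    positive affine change of intensity, which makes the score robust
    to the differing point densities of model and scene.
    """
    a = A.ravel() - A.mean()
    b = B.ravel() - B.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

# A spin image correlates perfectly with a rescaled/offset copy of itself.
A = np.arange(9.0).reshape(3, 3)
score = spin_correlation(A, 2.0 * A + 3.0)
```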

Feature Maps - Estimate of M
Results from spin-image detection of the geometric target model.
[PIXEL PURITY MAP]


Creating the Target Space

Multi-Modal Target Detection Methodology & Advantages
• Only those pixels on the focal plane that are likely to contain a target, as determined by the 3D geometric target detection algorithm, are interrogated on the HSI focal plane
  – potential dramatic reduction in false alarms based on the geometry information
• Spectral "background" information is derived from the pixels most likely not to contain the target (again, based on the LADAR data)
• Per pixel, the physics-based target space is "customized" for the specific geometric conditions in that pixel
• "Fusion" occurs in the following sense:
  – the geometric information, derived from the LADAR data, influences how we exploit the HSI
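The first bullet is a gating step: the spectral detector is evaluated only where the LADAR-derived pixel purity M clears a threshold, which is where the false-alarm reduction comes from. A sketch of that gate, with illustrative array names and the M > 0.3 threshold quoted on the results slide:

```python
import numpy as np

def gated_detection(purity_M, spectral_scores, m_threshold=0.3):
    """Apply the HSI detector only where LADAR purity exceeds a threshold.

    purity_M        : per-pixel M map from the 3D geometric detector
    spectral_scores : per-pixel HSI detection statistic
    Pixels below the threshold are never interrogated and score 0.
    """
    mask = purity_M > m_threshold
    out = np.zeros_like(spectral_scores)
    out[mask] = spectral_scores[mask]
    return out

# Three pixels: background, partial target, near-pure target.
M = np.array([0.0, 0.5, 0.9])
scores = np.array([10.0, 10.0, 10.0])
out = gated_detection(M, scores)
```

Only the two pixels passing the purity gate retain their spectral scores; the background pixel is zeroed without ever being interrogated.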

Target Detection Results
(Note the missing calibration panel with the actual target reflectance.)
• All features with gray paint have high scores
  – even the hidden SUV
• Detection statistic calculated only for those pixels with M > 0.3
• A threshold of 0.2 eliminates all false alarms


(Partial) Application to Real LADAR Data
• Leica LADAR collection of the RIT campus
• No co-temporal hyperspectral imagery
• Coverage of the simulated area
• Point cloud processing schemes applied to real data
• Truck parked on the "berm" in the scene

Real Point Cloud Processing Results
[Figure panels: shadow map · illumination angle map · shape factor map · pixel purity map]

Summary
• Demonstration of the feasibility of improving HSI target detection through the use of LADAR information products
• LADAR was used to derive / estimate:
  – shadowing effects
  – downwelling illumination factor
  – target likelihood based on a geometric target model
  – sub-pixel mixing fraction
  – direct illumination angle on a per-pixel basis in the HSI focal plane
• Estimation of other information products is possible with existing tools designed to enhance scene-building capabilities

Questions?
David W. Messinger, Ph.D.
messinger@cis.rit.edu
(585) 475-4538

Back-Up Charts