
A Special-Purpose Processor System with Software-Defined Connectivity
Benjamin Miller, Sara Siegal, James Haupt, Huy Nguyen and Michael Vai
MIT Lincoln Laboratory
22 September 2009

This work is sponsored by the Navy under Air Force Contract FA8721-05-0002. Opinions, interpretations, conclusions and recommendations are those of the authors and are not necessarily endorsed by the United States Government.
Outline
• Introduction
• System Architecture
• Software Architecture
• Initial Results and Demonstration
• Ongoing Work/Summary
Why Software-Defined Connectivity?
• Modern ISR, COMM, and EW systems need to be flexible
  – Change hardware and software in theatre as conditions change
  – Technological upgrade
  – Various form factors
• Example: reactive electronic warfare (EW) system
  – Re-task components as environmental conditions change
  – Easily add and replace components as needed before and during a mission
• Want the system to be open
  – Underlying architecture specific enough to reduce redundant software development
  – General enough to be applied to a wide range of system components, e.g., from different vendors
Special Purpose Processor (SPP) System
(Diagram: an antenna interface with RF distribution, HPA, and LNA feeds receivers Rx 1…R and transmitters Tx 1…T, which connect together with FPGAs 1…M and Processors 1…N through a switch fabric.)
• System representative of advanced EW architectures
  – RF hardware, programmable hardware, and processors all connected through a switch fabric
• Enabling technology: bare-bones, low-latency pub/sub middleware
(Diagram: the target software stack — the Configuration Manager configures backplane connections, the Resource Manager manages inter-process connections, and data-processing algorithms run as Processes 1…P on the Thin Communications Layer (TCL) middleware, which sits on each processor's OS above the switch fabric, FPGAs, receivers, and transmitters.)
Mode 1: Hardwired
(Diagram: the same software stack, but the connections from the switch fabric to the FPGAs, receivers, and transmitters are fixed.)
• Hardware components physically connected
• Connections through backplane are fixed (no configuration management)
• No added latency but inflexible
Mode 2: Pub-Sub
(Diagram: FPGAs, receivers, and transmitters each sit behind an on-board processor running a proxy process, so every component attaches to the TCL middleware over the switch fabric.)
• Everything communicates through the middleware
  – Hardware components have on-board processors running proxy processes for data transfer
• Most flexible, but there will be overhead due to the middleware
Mode 3: Circuit Switching
(Diagram: the software stack with the Configuration Manager programming connections across the switch fabric to FPGA 1…M, Rx 1…R, and Tx 1…T.)
• Configuration manager sets up all connections across the switch fabric
• May still be some co-located hardware, or some hardware that communicates via a processor through the middleware
• Overhead only incurred during configuration
Today’s Presentation
(Diagram: the SPP software stack, with the Resource Manager and the TCL middleware as the focus of this talk.)
• TCL middleware developed to support the SPP system
  – Essential foundation
• Resource Manager sets up (virtual) connections between processes
Outline
• Introduction
• System Architecture
• Software Architecture
• Initial Results and Demonstration
• Ongoing Work/Summary
System Configuration
• 3 COTS boards connected through a VPX backplane
  – 1 single-board computer, dual-core PowerPC 8641
  – 1 board with 2 Xilinx Virtex-5 FPGAs and a dual-core 8641
  – 1 board with 4 dual-core 8641s
  – Processors run VxWorks
• Boards come from same vendor, but have different board support packages (BSPs)
• Data transfer technology of choice: Serial RapidIO (sRIO)
  – Low latency important for our application
• Implement middleware in C++
System Model
(Layer diagram, top to bottom:)
• Application components, system control components, signal processing library
• Vendor-specific BSPs (BSP 1, BSP 2) and hardware (Rx/Tx, ADC, etc.)
• Operating system: VxWorks (real-time: VxWorks; standard: Linux)
• Physical interface: VPX + Serial RapidIO
System Model
(Same layer diagram, now with the TCL middleware inserted between the application/system-control components and the vendor-specific BSPs.)
System Model
(Same layer diagram: new SW components plug in above the TCL middleware, and new HW components plug in below it.)
New components can easily be added by complying with the middleware API.
Outline
• Introduction
• System Architecture
• Software Architecture
• Initial Results and Demonstration
• Ongoing Work/Summary
Publish/Subscribe Middleware
(Diagram: Process k publishes on Topic T; the TCL middleware holds the subscriber list for Topic T — l1, l2 — sends the data to Processes l1 and l2 over the switch fabric, and notifies them so each delivers the data to its application.)
• The publishing application doesn’t need to know where the data is going, and the subscribers are unconcerned about where their data comes from
• Middleware acts as the interface to both the application and the hardware/OS
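A minimal sketch of the bookkeeping this slide implies, with assumed names and containers rather than the actual TCL implementation: the middleware keeps a per-topic subscriber list, so a publisher names only the topic and the middleware resolves, delivers to, and notifies every registered subscriber. In the real system the "deliver" step is a transfer across the switch fabric, as the later interface slides spell out.

  #include <cstddef>
  #include <cstdio>
  #include <map>
  #include <string>
  #include <vector>

  // Stand-in for a subscriber endpoint: a process on some processor.
  struct Subscriber { int processorId; int processId; };

  class TopicTable {
  public:
      void subscribe(const std::string& topic, const Subscriber& s) {
          subscribers_[topic].push_back(s);
      }
      // The publisher names only the topic; the table resolves the recipients.
      void publish(const std::string& topic, const void* data, std::size_t len) {
          const std::vector<Subscriber>& subs = subscribers_[topic];
          for (std::size_t i = 0; i < subs.size(); ++i) {
              // Real system: data moves over the switch fabric, then the
              // subscriber is notified and hands the data to its application.
              std::printf("deliver %zu bytes on '%s' to proc %d / process %d\n",
                          len, topic.c_str(), subs[i].processorId, subs[i].processId);
          }
          (void)data;
      }
  private:
      std::map<std::string, std::vector<Subscriber> > subscribers_;
  };

  int main() {
      TopicTable table;
      Subscriber l1 = { 2, 1 };
      Subscriber l2 = { 3, 2 };
      table.subscribe("T", l1);
      table.subscribe("T", l2);
      char payload[64] = { 0 };
      table.publish("T", payload, sizeof payload);  // publisher unaware of l1, l2
      return 0;
  }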
Abstract Interfaces to Middleware
(Diagram: the TCL middleware has two faces. Toward the application it accepts data from publishers, sends data to subscribers, notifies subscribers of data arrival, and manages publishers/subscribers. Toward the hardware/OS it must answer “What data transfer technology am I using?” and “How (exactly) do I execute a data transfer?”)
• Middleware must be abstract to be effective
  – Middleware developers are unaware of hardware-specific libraries
  – Users have to implement functions that are specific to BSPs
XML Parser
(Diagram: setup.xml feeds a Parser, which takes the Resource Manager’s place above the TCL middleware in the SPP software stack.)
• Resource manager is currently in the form of an XML parser
  – XML file defines topics, publishers, and subscribers
  – Parser sets up the middleware and defines the virtual network topology
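A hedged sketch of what the parser's output and setup pass might look like; the struct, field, and function names are assumptions (only the XML tags shown later on the latency-experiment slide are confirmed by the deck). Each parsed topic entry yields writers on its source nodes and readers on its destination nodes.

  #include <cstddef>
  #include <cstdio>
  #include <vector>

  // One parsed topic entry: a topic id plus source and destination node ids.
  struct TopicConfig {
      int id;
      std::vector<int> sourceIds;
      std::vector<int> destinationIds;
  };

  // Stand-ins for the middleware hooks the parser would drive.
  static void createWriter(int nodeId, int topicId) {
      std::printf("node %d: create writer for topic %d\n", nodeId, topicId);
  }
  static void createReader(int nodeId, int topicId) {
      std::printf("node %d: create reader/listener for topic %d\n", nodeId, topicId);
  }

  // Walk the parsed configuration and build the virtual network topology.
  void setupFromConfig(const std::vector<TopicConfig>& topics) {
      for (std::size_t t = 0; t < topics.size(); ++t) {
          const TopicConfig& cfg = topics[t];
          for (std::size_t s = 0; s < cfg.sourceIds.size(); ++s)
              createWriter(cfg.sourceIds[s], cfg.id);
          for (std::size_t d = 0; d < cfg.destinationIds.size(); ++d)
              createReader(cfg.destinationIds[d], cfg.id);
      }
  }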
Middleware Interfaces
(Class diagram. Interface to the application: DataWriter, DataReader, and DataReaderListener are base classes; SrioDataWriter, SrioDataReader, and SrioDataReaderListener are derived from them, change with the communication technology, and each “has a” Builder. Interface to the BSP: Builder is the base class; CustomBSPBuilder is derived from it and changes with the hardware.)
• Base classes
  – DataReader, DataReaderListener and DataWriter interface with the application
  – Builder interfaces with BSPs
• Derive board- and communication-specific classes
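Read as C++, the diagram suggests roughly the shape below. This is a reconstruction under assumptions — the method names and signatures are illustrative, not the actual TCL headers — with DataReader omitted for brevity.

  #include <cstddef>

  typedef int STATUS;                               // VxWorks-style status code (stand-in)
  struct Message { const void* data; std::size_t length; };

  // Interface to the BSP: knows *how* a transfer is executed on this board.
  class Builder {
  public:
      virtual ~Builder() {}
      // Deliberately a no-op in the base class; a board-specific subclass
      // (e.g. CustomBSPBuilder) overrides it using vendor DMA calls.
      virtual STATUS performDmaTransfer(const Message& /*msg*/) { return 0; }
  };

  // Interface to the application, publishing side: directs a Builder.
  class DataWriter {
  public:
      explicit DataWriter(Builder* builder) : myBuilder(builder) {}
      virtual ~DataWriter() {}
      virtual STATUS write(const Message& msg) = 0; // comm-specific subclasses implement
  protected:
      Builder* myBuilder;                           // "has a" Builder
  };

  // Interface to the application, subscribing side: callback on data arrival.
  class DataReaderListener {
  public:
      virtual ~DataReaderListener() {}
      virtual void onDataAvailable(const Message& msg) = 0;
  };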
Builder
(Code excerpt — Builder base class, the interface to the BSP:)

  #include <math.h>
  ...
  // member functions
  STATUS Builder::performDmaTransfer(...) {}
  ...

(Code excerpt — CustomBSPBuilder, derived from Builder:)

  #include <math.h>
  #include "vendorPath/bspDma.h"
  ...
  // member functions
  STATUS CustomBSPBuilder::performDmaTransfer(...) {
      return bspDmaTransfer(...);
  }
  ...

• Follows the Builder pattern in Design Patterns*
• Provides interface for sRIO-specific tasks
  – e.g., establish sRIO connections, execute data transfer
• Certain functions are empty (not necessarily virtual) in the base class, then implemented in the derived class with BSP-specific libraries

*E. Gamma et al. Design Patterns: Elements of Reusable Object-Oriented Software. Reading, Mass.: Addison-Wesley, 1995.
Publishers and Subscribers
(Class diagram — interface to the application: DataWriter, DataReader, and DataReaderListener with derived SrioDataWriter, SrioDataReader, and SrioDataReaderListener; the derived Builder type is determined dynamically.)

(Code excerpt:)

  // member functions
  virtual STATUS DataWriter::write(message) = 0;
  virtual STATUS SrioDataWriter::write(message) {
      ...
      myBuilder->performDmaXfer(...);
      ...
  }

• DataReaders, DataWriters and DataReaderListeners act as “Directors” of the Builder
  – Tell the Builder what to do; the Builder determines how to do it
• DataWriter used for publishing; DataReader and DataReaderListener used by subscribers
• Derived classes implement communication (sRIO)-specific, but not BSP-specific, functionality
  – e.g., ring a recipient’s doorbell after transferring data
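The excerpt above shows the publishing path; for symmetry, here is a hedged sketch of the subscriber path (the class shapes and the onDoorbell hook name are assumptions, not the actual TCL code): when the sRIO doorbell signals new data, the reader wraps the received buffer and hands it to the application’s listener, so the subscriber never touches sRIO details.

  #include <cstddef>

  // Compact stand-ins for types sketched on the earlier interface slide.
  struct Message { const void* data; std::size_t length; };

  class DataReaderListener {
  public:
      virtual ~DataReaderListener() {}
      virtual void onDataAvailable(const Message& msg) = 0;
  };

  // sRIO-specific, BSP-agnostic subscriber path.
  class SrioDataReader {
  public:
      explicit SrioDataReader(DataReaderListener* listener) : listener_(listener) {}
      // Assumed hook, invoked when the recipient's doorbell is rung.
      void onDoorbell(const void* buf, std::size_t len) {
          Message msg = { buf, len };
          listener_->onDataAvailable(msg);  // deliver to the application
      }
  private:
      DataReaderListener* listener_;
  };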
Outline
• Introduction
• System Architecture
• Software Architecture
• Initial Results and Demonstration
• Ongoing Work/Summary
Software-Defined Connectivity: Initial Implementation
• Experiment: process-to-process data transfer latency
  – Set up two topics
  – Processes use the TCL to send data back and forth
  – Measure round-trip time with and without the middleware in place
(Diagram: Process 1 and Process 2 running on the TCL middleware over the SPP hardware.)

(Configuration excerpt:)

  <Topic>
    <Name>Send</Name>
    <ID>0</ID>
    <Sources>
      <SourceID>8</SourceID>
    </Sources>
    <Destinations>
      <DSTID>0</DSTID>
    </Destinations>
  </Topic>
  <Topic>
    <Name>SendBack</Name>
    <ID>1</ID>
    <Sources>
      <SourceID>0</SourceID>
    </Sources>
    <Destinations>
      <DSTID>8</DSTID>
    </Destinations>
  </Topic>
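A hedged sketch of how the round-trip measurement could be driven; the function names are placeholders for the TCL publish/receive calls, and std::chrono stands in for whatever timer the real VxWorks test used. Each iteration publishes on Send and waits for the echo on SendBack; one-way latency is taken as half the averaged round trip.

  #include <chrono>
  #include <cstddef>
  #include <vector>

  // Assumed wrappers around the TCL middleware calls used in the test.
  void publishSend(const std::vector<char>& payload);   // publish on topic "Send"
  void waitForSendBack(std::vector<char>& payload);     // block until "SendBack" arrives

  // Average one-way latency (microseconds) for a given packet size.
  double measureOneWayLatencyUs(std::size_t packetBytes, int iterations) {
      std::vector<char> payload(packetBytes, 0);
      typedef std::chrono::steady_clock Clock;
      const Clock::time_point start = Clock::now();
      for (int i = 0; i < iterations; ++i) {
          publishSend(payload);      // process 1 -> process 2 over the fabric
          waitForSendBack(payload);  // process 2 echoes the data back
      }
      const double totalUs =
          std::chrono::duration<double, std::micro>(Clock::now() - start).count();
      return totalUs / (2.0 * iterations);
  }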
Software-Defined Connectivity: Communication Latency
(Plots: latency and efficiency vs. packet size for transfers between processes P1 and P2.)
• One-way latency ~23 µs for small packet sizes
• Latency grows proportionally to packet size for large packets
• Reach 95% efficiency at 64 KB
• Overhead is negligible for large packets, despite increasing size
Demo 1: System Reconfiguration
(Diagram: Processors #1–#3 on the TCL middleware run Detect, Process, and Transmit processes; Configuration 1 (XML 1) and Configuration 2 (XML 2) assign the processes to the processors differently.)
Objective: demonstrate connectivity reconfiguration by simply replacing the configuration XML file.
Demo 2: Resource Management
(Diagram: Control, Receive Proc #1, Proc #2, and Transmit components on the TCL middleware. Receive Proc #1: “I’ve detected signals!” Proc #2: “I will process this new information!”)
Low-latency predefined connections allow quick response.
Demo 2: Resource Management
(Diagram: the same components. Proc #2: “Working…” and “I need more help to analyze the signals!” Receive Proc #1: “I will determine what to transmit in response!”)
Resource manager sets up new connections on demand to efficiently utilize available computing power.
Demo 2: Resource Management
(Diagram: the same components. Transmit: “I will transmit the response!” Proc #2: “Working…” Receive Proc #1: “I have determined an appropriate response!”)
Receive Proc #1 and Transmit are publisher and subscriber on topic TransmitWaveform.
Demo 2: Resource Management
(Diagram: the same components, now with a “Done!” callout; the earlier “I will transmit the response!” and “I have determined an appropriate response!” callouts carry over.)
After finishing, components may be re-assigned.
Outline
• Introduction
• System Architecture
• Software Architecture
• Initial Results and Demonstration
• Ongoing Work/Summary
Ongoing Work
(Diagram: the SPP software stack, including the Configuration Manager.)
• Develop the middleware (configuration manager) to set up fixed connections
  – Mode 3: objective system
• Automate resource management
  – Dynamically reconfigure the system as needs change
  – Enable more efficient use of resources (load balancing)
Summary
• Developing software-defined connectivity of hardware and software components
• Enabling technology: low-latency pub/sub middleware
  – Abstract base classes manage connections between nodes
  – Application developer implements only system-specific send and receive code
• Encouraging initial results
  – At full sRIO data rate, overhead is negligible
• Working toward automated resource management for efficient allocation of processing capability, as well as automated setup of low-latency hardware connections