

• Number of slides: 47

CHAPTER 7 CONCURRENT SOFTWARE
Copyright © 2000, Daniel W. Lewis. All Rights Reserved.

Program Organization of a Foreground/Background System
[Flowchart: at start, the main program initializes the ISRs and then waits for interrupts; each hardware interrupt invokes the ISR for Task #1, Task #2, or Task #3, which returns with IRET.]

Foreground/Background System
• Most of the actual work is performed in the "foreground" ISRs, with each ISR processing a particular hardware event.
• Main program performs initialization and then enters a "background" loop that waits for interrupts to occur.
• Allows the system to respond to external events with a predictable amount of latency.
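
A minimal foreground/background sketch in the Turbo/Borland C style used by these slides; the interrupt vector, port number, and ISR body are illustrative assumptions, not part of the original slide.

    #include <dos.h>                          /* setvect(), enable(), inportb(), outportb() */

    static void interrupt Task1_ISR(void)     /* foreground: one ISR per hardware event */
    {
        unsigned char data = inportb(0x3F8);  /* read the device (example port) */
        (void)data;                           /* ... process the data here ... */
        outportb(0x20, 0x20);                 /* send EOI command to the PIC */
    }

    void main(void)
    {
        setvect(0x0C, Task1_ISR);             /* install the ISR (vector 0x0C = IRQ4, for example) */
        enable();                             /* STI: allow interrupts */
        for (;;)
            ;                                 /* background loop: just wait for interrupts */
    }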

Task State and Serialization

    unsigned int byte_counter ;

    void Send_Request_For_Data(void)
    {
        outportb(CMD_PORT, RQST_DATA_CMD) ;
        byte_counter = 0 ;
    }

    void interrupt Process_One_Data_Byte(void)
    {
        BYTE8 data = inportb(DATA_PORT) ;
        switch (++byte_counter)
        {
            case 1: Process_Temperature(data) ; break ;
            case 2: Process_Altitude(data) ;    break ;
            case 3: Process_Humidity(data) ;    break ;
            ……
        }
    }

ISR with Long Execution Time
[Flowchart: Input Ready interrupt → STI → Input Data → Process Data → loop until Output Device Ready → Output Data → Send EOI Command to PIC → IRET.]

Removing the Waiting Loop from the ISR
[Flowchart: The ISR (Input Ready → STI → Input Data → Process Data → Enqueue Data into a FIFO Queue → Send EOI Command to PIC → IRET) no longer waits for the output device. The background loop (Enter Background → Initialize → repeat: Data Enqueued? Yes → Output Device Ready? Yes → Dequeue Data → Output Data) drains the queue.]
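
A sketch of the FIFO queue shared by the ISR (producer) and the background loop (consumer); the queue size and the Enqueue/Dequeue names are illustrative, not taken from the slides.

    #define QSIZE 64

    static volatile unsigned char queue[QSIZE];
    static volatile int head = 0, tail = 0;    /* head: next insert, tail: next remove */

    int Enqueue(unsigned char data)            /* called from the ISR */
    {
        int next = (head + 1) % QSIZE;
        if (next == tail) return 0;            /* queue full: data lost */
        queue[head] = data;
        head = next;
        return 1;
    }

    int Dequeue(unsigned char *data)           /* called from the background loop */
    {
        if (tail == head) return 0;            /* queue empty */
        *data = queue[tail];
        tail = (tail + 1) % QSIZE;
        return 1;
    }

With a single producer (the ISR) and a single consumer (the background loop), each index is written by only one side, so this sketch needs no additional locking.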

Interrupt-Driven Output
[Flowchart: Input ISR (Input Ready → STI → Input Data → Process Data → Enqueue Data into the FIFO Queue → Send EOI Command to PIC → IRET) and output ISR (Output Ready → Data Enqueued? Yes → Dequeue Data → Output Data → Send EOI Command to PIC → IRET).]

Kick Starting Output
[Flowchart: Input ISR (Input Ready → STI → Input Data → Process Data → Enqueue Data into the FIFO Queue → Output Device Busy? No: CALL SendData as a "kick start" → Send EOI Command to PIC → IRET). SendData subroutine (Data Enqueued? Yes: Dequeue Data → Output Data → Set Busy Flag → RET; No: Clear Busy Flag → RET). Output ISR (Output Ready → STI → CALL SendData → Send EOI Command to PIC → IRET).]
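
A sketch of the kick-start pattern, building on the Dequeue() sketch shown earlier; OUT_PORT, the busy flag, and the helper names are illustrative assumptions.

    #include <dos.h>                           /* enable(), outportb() */

    #define OUT_PORT 0x3F8                     /* example output data port */

    extern int Dequeue(unsigned char *data);   /* from the FIFO sketch above */

    static volatile int output_busy = 0;       /* set while the output device is working */

    void SendData(void)                        /* called by the input ISR (kick start) and the output ISR */
    {
        unsigned char data;
        if (Dequeue(&data))
        {
            outportb(OUT_PORT, data);          /* start the next transfer; a "ready" interrupt will follow */
            output_busy = 1;
        }
        else
            output_busy = 0;                   /* queue empty: the device goes idle */
    }

    void interrupt Output_ISR(void)            /* output-device-ready interrupt */
    {
        enable();                              /* STI */
        SendData();                            /* send the next byte, if any */
        outportb(0x20, 0x20);                  /* EOI command to the PIC */
    }

    /* The input ISR enqueues its result and, only if output_busy is still 0,
       calls SendData() once to "kick start" the interrupt-driven output. */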

Preventing Interrupt Overrun
[Flowchart: Input Ready → Input Data → Send EOI Command to PIC (removes the interrupt request that invoked this ISR; when interrupts get re-enabled by the STI below, interrupts from lower priority devices, and from this device too, are allowed) → ISR Busy Flag Set? Yes: ignore this interrupt (interrupts are re-enabled by the IRET) → IRET. No: Set ISR Busy Flag → STI (allow interrupts from any device) → process data, write result to output queue, and kick start → Clear ISR Busy Flag → IRET.]
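
A sketch of this busy-flag overrun protection; DATA_PORT, the vector setup, and Process_Data() are illustrative stand-ins for the real device and the real work.

    #include <dos.h>                               /* enable(), inportb(), outportb() */

    #define DATA_PORT 0x3F8

    static volatile int isr_busy = 0;

    static void Process_Data(unsigned char data)
    {
        (void)data;  /* ... process the data, write the result to the output queue, kick start ... */
    }

    void interrupt Input_ISR(void)
    {
        unsigned char data = inportb(DATA_PORT);   /* input the data */
        outportb(0x20, 0x20);                      /* EOI: remove this request from the PIC */
        if (isr_busy)
            return;                                /* overrun: ignore this interrupt (the IRET re-enables interrupts) */
        isr_busy = 1;
        enable();                                  /* STI: allow interrupts from any device */
        Process_Data(data);
        isr_busy = 0;
    }                                              /* the compiler emits the IRET here */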

Preventing Interrupt Overrun
[Flowchart: Input Ready → STI (allow interrupts from higher priority devices) → Input Data → Set the mask bit for this device in the 8259 PIC (disable future interrupts from this device) → Send EOI Command to PIC (removes the interrupt request that invoked this ISR; allows interrupts from lower priority devices) → process data, write result to output queue, and kick start → Clear the mask bit for this device in the 8259 PIC (enable future interrupts from this device) → IRET.]

Moving Work into Background
• Move non-time-critical work (such as updating a display) into background task.
• Foreground ISR writes data to queue, then background removes and processes it.
• An alternative to ignoring one or more interrupts as the result of input overrun.

Limitations
• Best possible performance requires moving as much as possible into the background.
• Background becomes collection of queues and associated routines to process the data.
• Optimizes latency of the individual ISRs, but background begs for a managed allocation of processor time.

Multi-Threaded Architecture
[Diagram: ISRs place data into queues; background threads remove and process it; a multi-threaded run-time function library (the real-time kernel) manages the threads.]

Thread Design
• Threads usually perform some initialization and then enter an infinite processing loop.
• At the top of the loop, the thread relinquishes the processor while it waits for data to become available, an external event to occur, or a condition to become true.
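
A typical thread skeleton, written here with the µC/OS-II queue service that appears later in the chapter (OSQPend); the queue pointer and the display work are illustrative placeholders, and BYTE8 is the 8-bit typedef used on these slides.

    extern OS_EVENT *input_q;                  /* created elsewhere with OSQCreate() */

    void Display_Thread(void *pdata)
    {
        BYTE8 err;
        void *msg;
        (void)pdata;
        /* ... one-time initialization of the display ... */
        for (;;)                               /* infinite processing loop */
        {
            msg = OSQPend(input_q, 0, &err);   /* relinquish the CPU until data arrives */
            (void)msg;                         /* ... process msg and update the display ... */
        }
    }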

Concurrent Execution of Independent Threads
• Each thread runs as if it had its own CPU separate from those of the other threads.
• Threads are designed, programmed, and behave as if they are the only thread running.
• Partitioning the background into a set of independent threads simplifies each thread, and thus reduces total program complexity.

Each Thread Maintains Its Own Stack and Register Contents
[Diagram: the context of each thread (Thread 1 through Thread N) consists of its own stack plus a copy of the CPU registers: CS:EIP, SS:ESP, EAX, EBX, ..., EFlags.]

Concurrency
• Only one thread runs at a time while others are suspended.
• Processor switches from one thread to another so quickly that it appears all threads are running simultaneously. Threads run concurrently.
• Programmer assigns a priority to each thread, and the scheduler uses this to determine which thread to run next.

Real-Time Kernel
• Threads call a library of run-time routines (known as the real-time kernel) that manages resources.
• Kernel provides mechanisms to switch between threads, and for coordination, synchronization, communications, and priority.

Context Switching
• Each thread has its own stack and a special region of memory referred to as its context.
• A context switch from thread "A" to thread "B" first saves all CPU registers in context A, and then reloads all CPU registers from context B.
• Since the CPU registers include SS:ESP and CS:EIP, reloading context B reactivates thread B's stack and returns to where it left off when it was last suspended.
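
A conceptual picture of per-thread contexts, as a sketch only; the structure layout, array sizes, and names are illustrative, not any particular kernel's actual definitions.

    #define MAX_THREADS 8
    #define STACK_SIZE  1024

    typedef struct {                        /* one saved context per thread */
        unsigned long eip, cs;              /* where the thread resumes: CS:EIP */
        unsigned long esp, ss;              /* the thread's private stack: SS:ESP */
        unsigned long eax, ebx, ecx, edx;   /* general registers */
        unsigned long esi, edi, ebp;
        unsigned long eflags;
    } CONTEXT;

    static CONTEXT context[MAX_THREADS];              /* saved register contents */
    static char    stack[MAX_THREADS][STACK_SIZE];    /* one private stack per thread */

    /* A context switch (done in assembly inside the kernel) would:
         1. store the CPU registers of the running thread into context[current];
         2. load the CPU registers from context[next];
       step 2 reactivates the new thread's SS:ESP and resumes at its CS:EIP. */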

Context Switching
[Diagram: Thread A executing, Thread B suspended → save context A, restore context B → Thread A suspended, Thread B executing → save context B, restore context A → Thread A executing, Thread B suspended again.]

Non-Preemptive Multi-Tasking
• Threads call a kernel routine to perform the context switch.
• Thread relinquishes control of the processor, thus allowing another thread to run.
• The context switch call is often referred to as a yield, and this form of multi-tasking is often referred to as cooperative multitasking.

Non-Preemptive Multi-Tasking
• When an external event occurs, the processor may be executing a thread other than the one designed to process the event.
• The first opportunity to execute the needed thread will not occur until the current thread reaches its next yield.
• When the yield does occur, other threads may be scheduled to run first.
• In most cases, this makes it impossible or extremely difficult to predict the maximum response time of non-preemptive multi-tasking systems.

Non-Preemptive Multi-Tasking
• Programmer must call the yield routine frequently, or else system response time may suffer.
• Yields must be inserted in any loop where a thread is waiting for some external condition.
• Yield may also be needed inside other loops that take a long time to complete (such as reading or writing a file), or distributed periodically throughout a lengthy computation.
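
A sketch of a waiting loop with an explicit yield, using the Multi-C call MtCYield() listed later in the chapter; Data_Available() and Process_Data() are hypothetical application routines.

    extern int  Data_Available(void);     /* placeholder: external condition test */
    extern void Process_Data(void);       /* placeholder: the thread's real work */

    void Polling_Thread(void)
    {
        for (;;)
        {
            while (!Data_Available())     /* external condition not yet true */
                MtCYield();               /* let other threads run while we wait */
            Process_Data();               /* if this takes long, add periodic yields inside it too */
        }
    }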

Context Switching in a Non-Preemptive System
[Flowchart: Start → Thread Initialization → loop: if the thread must wait, Yield to other threads; otherwise do Data Processing. The yield invokes the scheduler, which selects the highest priority thread that is ready to run; if that is not the current thread, the current thread is suspended and the new thread resumed.]

Preemptive Multi-Tasking
• Hardware interrupts trigger the context switch.
• When an external event occurs, a hardware ISR is invoked.
• After servicing the interrupt request, the ISR raises the priority of the thread that processes the associated data, then switches context to the highest priority thread that is ready to run and returns to it.
• Significantly improves system response time.

Preemptive Multi-Tasking
• Eliminates the programmer's obligation to include explicit calls to the kernel to perform context switches within the various background threads.
• Programmer no longer needs to worry about how frequently the context switch routine is called; it's called only when needed, i.e., in response to external events.

Preemptive Context Switching
[Diagram: Thread A executing → hardware interrupt → ISR processes the interrupt request → context switch: the scheduler selects the highest priority thread that is ready to run; if that is not the current thread, the current thread is suspended and the new thread resumed → IRET → Thread A suspended, Thread B executing.]

Critical Sections
• Critical section: A code sequence whose proper execution is based on the assumption that it has exclusive access to the shared resources that it is using during the execution of the sequence.
• Critical sections must be protected against preemption, or else integrity of the computation may be compromised.

Atomic Operations
• Atomic operations are those that execute to completion without preemption.
• Critical sections must be made atomic:
  – Disable interrupts for their duration, or
  – Acquire exclusive access to the shared resource through arbitration before entering the critical section and release it on exit.

Threads, ISRs, and Sharing
1. Between a thread and an ISR: Data corruption may occur if the thread's critical section is interrupted to execute the ISR.
2. Between 2 ISRs: Data corruption may occur if the critical section of one ISR can be interrupted to execute the other ISR.
3. Between 2 threads: Data corruption may occur unless execution of their critical sections is coordinated.

Shared Resources
• A similar situation applies to other kinds of shared resources, not just shared data.
• Consider two or more threads that want to simultaneously send data to the same (shared) disk, printer, network card, or serial port. If access is not arbitrated so that only one thread uses the resource at a time, the data streams might get mixed together, producing nonsense at the destination.

Uncontrolled Access to a Shared Resource (the Printer)
[Diagram: Thread A sends "HELLO\n" and Thread B sends "goodbye" to the shared printer at the same time; the printer produces the interleaved garbage "HgoELodLO bye".]

Protecting Critical Sections
• Non-preemptive system: Programmer has explicit control over where and when context switch occurs.
  – Except for ISRs!
• Preemptive system: Programmer has no control over the time and place of a context switch.
• Protection options:
  – Disabling interrupts
  – Spin lock
  – Mutex
  – Semaphore

Disabling Interrupts
• The overhead required to disable (and later re-enable) interrupts is negligible.
  – Good for short critical sections.
• Disabling interrupts during the execution of a long critical section can significantly degrade system response time.
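
A short sketch of protecting a critical section by disabling interrupts; disable() and enable() are the Turbo/Borland C wrappers for CLI and STI, and the time-of-day variables are an illustrative example of data shared with a timer ISR.

    #include <dos.h>                               /* disable(), enable() */

    volatile unsigned int seconds, milliseconds;   /* updated together by a timer ISR */

    void Get_Time(unsigned int *s, unsigned int *ms)
    {
        disable();            /* CLI: the ISR cannot run between the two reads */
        *s  = seconds;
        *ms = milliseconds;
        enable();             /* STI: keep the critical section as short as possible */
    }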

Spin Locks
If the flag is set, another thread is currently using the shared memory and will clear the flag when done.
[Flowchart: Flag set? No → Set Flag → Critical Section → Clear Flag.]

Spin-lock in C:

    do {
        disable() ;
        ok = !flag ;
        flag = TRUE ;
        enable() ;
    } while (!ok) ;
    /* ... critical section ... */
    flag = FALSE ;

Spin-lock in assembly:

    L1:     MOV     AL, 1
            XCHG    [_flag], AL
            OR      AL, AL
            JNZ     L1
            ; ... critical section ...
            MOV     BYTE [_flag], 0

Spin Locks vs. Semaphores
• Non-preemptive system requires a kernel call inside the spin lock loop to let other threads run.
• Context-switching during a spin lock can be a significant overhead (saving and restoring threads’ registers and stack).
• Semaphores eliminate the context-switch until the flag is released.

Semaphores
[Flowchart: Semaphore “Pend” → Critical Section → Semaphore “Post”. The kernel suspends this thread if another thread has possession of the semaphore; this thread does not get to run again until the other thread releases the semaphore with a “post” operation.]

Kernel Services
• Initialization
• Threads
• Scheduling
• Priorities
• Interrupt Routines
• Semaphores
• Mailboxes
• Queues
• Time

Initialization Services
Multi-C: n/a
µC/OS-II:
    OSInit() ;
    OSStart() ;

Thread Services
Multi-C:
    ECODE MtCCoroutine(void (*fn)(…)) ;
    ECODE MtCSplit(THREAD **new, MTCBOOL *old) ;
    ECODE MtCStop(THREAD *) ;
µC/OS-II:
    BYTE8 OSTaskCreate(void (*fn)(void *), void *data, void *stk, BYTE8 prio) ;
    BYTE8 OSTaskDel(BYTE8 prio) ;
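
A hypothetical example of creating a µC/OS-II task with the call listed above; the stack size, priority, and the task's work are illustrative assumptions, OS_STK is the kernel's stack element type, and OSTimeDly() is listed with the scheduling/time services below.

    #define MY_TASK_PRIO  10
    #define MY_STK_SIZE   512

    static OS_STK My_Task_Stk[MY_STK_SIZE];

    void My_Task(void *pdata)
    {
        (void)pdata;
        for (;;)
        {
            /* ... do this task's periodic work ... */
            OSTimeDly(1);                            /* sleep for one clock tick */
        }
    }

    void Create_My_Task(void)
    {
        OSTaskCreate(My_Task, (void *)0,
                     &My_Task_Stk[MY_STK_SIZE - 1],  /* top of stack (x86 stacks grow down) */
                     MY_TASK_PRIO);                  /* unique priority: lower number = higher priority */
    }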

Scheduling Services
Multi-C:
    ECODE MtCYield(void) ;
µC/OS-II:
    void OSSchedLock(void) ;
    void OSSchedUnlock(void) ;
    void OSTimeTick(void) ;
    void OSTimeDly(WORD16) ;

Priority Services
Multi-C:
    ECODE MtCGetPri(THREAD *, MTCPRI *) ;
    ECODE MtCSetPri(THREAD *, MTCPRI) ;
µC/OS-II:
    BYTE8 OSTaskChangePrio(BYTE8 old, BYTE8 new) ;

ISR Services
Multi-C: n/a
µC/OS-II:
    OS_ENTER_CRITICAL() ;
    OS_EXIT_CRITICAL() ;
    void OSIntEnter(void) ;
    void OSIntExit(void) ;

Semaphore Services
Multi-C:
    ECODE MtCSemaCreate(SEMA_INFO **) ;
    ECODE MtCSemaWait(SEMA_INFO *, MTCBOOL *) ;
    ECODE MtCSemaReset(SEMA_INFO *) ;
    ECODE MtCSemaSet(SEMA_INFO *) ;
µC/OS-II:
    OS_EVENT *OSSemCreate(WORD16) ;
    void OSSemPend(OS_EVENT *, WORD16, BYTE8 *) ;
    BYTE8 OSSemPost(OS_EVENT *) ;
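
A sketch of guarding a shared resource (the printer from the earlier slide) with the µC/OS-II semaphore calls above; the routine names are illustrative, and BYTE8 is the 8-bit typedef used on these slides.

    OS_EVENT *printer_sem;

    void Init_Printer_Sem(void)
    {
        printer_sem = OSSemCreate(1);          /* 1 = resource initially available */
    }

    void Print_Report(const char *msg)
    {
        BYTE8 err;
        (void)msg;
        OSSemPend(printer_sem, 0, &err);       /* wait (forever) for exclusive access */
        /* ... critical section: send msg to the shared printer here ... */
        OSSemPost(printer_sem);                /* release the printer */
    }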

Mailbox Services
Multi-C: n/a
µC/OS-II:
    OS_EVENT *OSMboxCreate(void *msg) ;
    void *OSMboxPend(OS_EVENT *, WORD16, BYTE8 *) ;
    BYTE8 OSMboxPost(OS_EVENT *, void *) ;

Queue Services
Multi-C:
    ECODE MtCReceive(void *msgbfr, int *msgsize) ;
    ECODE MtCSend(THREAD *, void *msg, int size, int pri) ;
    ECODE MtCASend(THREAD *, void *msg, int size, int pri) ;
µC/OS-II:
    OS_EVENT *OSQCreate(void **start, BYTE8 size) ;
    void *OSQPend(OS_EVENT *, WORD16, BYTE8 *) ;
    BYTE8 OSQPost(OS_EVENT *, void *) ;

Time Services
Multi-C: n/a
µC/OS-II:
    DWORD32 OSTimeGet(void) ;
    void OSTimeSet(DWORD32) ;