
3.0 Architecture (36-53203 A)

3.1 Purpose

This section describes the overall architecture of the ACIS Science Instrument Software, and identifies the key interfaces between the hardware and the software, and between the major software components.

3.2 Overall Approach

The bulk of the ACIS Science Instrument Software resides in the Back End Processor (BEP). This software is responsible for managing commands and telemetry, and for filtering and packing science data. The design of the BEP software employs object-oriented techniques, and uses a commercial real-time multi-tasking executive.

The Front End Processor (FEP) software is responsible for processing raw pixel data as quickly as possible. In general, this software has to deal with only one task at a time. As a result, its design employs a simple interrupt handler and single main thread of control, and takes a structured design approach.

3.3 BEP Class Categories and Source Code Directories

Class categories are a way of grouping related classes. Early in the design, categories were used to logically separate the layers of the design. As the design evolved, the source code directory structure superseded this layering to some extent, although some aspects of the layered architecture remain. The following illustrates the current BEP software class category directories (NOTE: ipclgen resides at the same level as the other class categories):

FIGURE 3. Class Category Directories

Early in the ACIS design, there were five main categories. As the design evolved, it was discovered that more fine-grained categories were needed to group more tightly related classes. The original category set was as follows:

Finally, all of the Front End Processor software is contained in the fep directory.

3.4 BEP Class Category Contents and Relationships

This section illustrates classes contained in each of the main category directories, and the clients and servers of those classes.

3.4.1 Device Classes

The device classes are responsible for directly interacting with the BEP hardware. The detailed design of the BEP device classes is contained in Section 5.0 through Section 12.0.

FIGURE 4. Device Class List

3.4.2 Executive Classes

The Executive classes are responsible for providing an interface layer between the main BEP software and the Nucleus RTX executive. The detailed design of the executive classes is described in Section 15.0.

FIGURE 5. Executive Class List

3.4.3 Protocol Classes

The Protocol classes are responsible for managing a variety of interface protocols.

FIGURE 6. Protocol Class List

FIGURE 7. Protocols Command Handler Class List

The detailed design is segmented into sections as follows:

Command Management - Section 16.0

Command Handlers - Section 17.0 covers all of the command handler classes (see Figure 7).

Telemetry Management - Section 18.0

FEP Management - Section 25.0

DEA Management - Section 26.0

NOTE: Classes provided in the files dea and deacheck have yet to be described. Section is TBD.

Parameter Block Management - Section 20.0

NOTE: PblTimedExp, PblContClock, PblDeaHouse, Pbl2dWindow and Pbl1dWindow have yet to be described. Section is TBD.

Fatal Error Reporting - Section 29.0

IP&CL Code-Generation - Section 21.0, Section 22.0, and Section 23.0 cover:

Code-generation scripts
Standard for Command Format Classes (CmdPkt_*)
Standard for Telemetry Format Classes (Tf_*)

3.4.4 DEA Housekeeping Classes

The DEA Housekeeper class is responsible for periodically acquiring and telemetering information from the Detector Electronics Assembly. Its detailed design is provided in Section 31.0.

FIGURE 8. DEA Housekeeping Class List

3.4.5 Software Housekeeping Classes

The Software Housekeeper class is responsible for accumulating software housekeeping statistics reported by other software units within the BEP, and periodically telemetering the accumulated data. Its detailed design is described in Section 28.0.

FIGURE 9. Software Housekeeping Class List

3.4.6 Memory Server Classes

The Memory Server class is responsible for servicing command requests to read (dump) or write portions of the BEP's, FEP's, or DEA's memory, and for servicing command requests to execute code on the BEP or FEP. It uses the FEP Manager and DEA Manager classes to forward requests to the FEP and DEA, respectively, when needed. Its detailed design is described in Section 27.0.

FIGURE 10. Memory Server Class List

3.4.7 System Configuration Classes

The System Configuration classes are responsible for maintaining the system configuration table, and the Bad Pixel and Column maps.

FIGURE 11. System Configuration Class List

The detailed design of these classes is located as follows:

System Configuration Table Management - Section 30.0

Bad Pixel and Column Map Management - Section 32.0


3.4.8 Science Classes

The Science Management classes are responsible for a variety of activities involved in performing a science run, including run setup and execution and bias-map transmission.

FIGURE 12. Science Class List

The detailed design for the science classes is broken into several sections, as described below:

Science Management - Section 33.0

Science Data Processing - Section 37.0

NOTE: Pixel5x5 and PmTeFaint5x5 have yet to be described. Section is TBD.

Bias Map Telemetry Management - Section 38.0

Huffman Data Compression - Section 24.0

NOTE: HuffmanMap is not yet described. Section is TBD.

SRAM/PRAM Setup - Section 36.0, Section 34.0 and Section 35.0


3.5 BEP Tasks

The Back End Processor runs a preemptive, multi-tasking executive. During BEP start-up (see Section 14.0), all of the system's tasks are started. Once started, these tasks never exit. Tasks of a given priority are allowed to preempt tasks of a lower priority, and tasks of the same priority run until they are either preempted, or until they sleep for some period of time or relinquish control, at which point another task of the same priority is permitted to run. Once all tasks of a given priority are blocked, waiting for an event to occur, tasks of a lower priority are allowed to run.
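The scheduling rule described above can be sketched in a few lines of C++. This is a toy model, not ACIS code: the Task fields, the pickNext() function, and the deque-based ready list are illustrative assumptions.

```cpp
#include <cassert>
#include <deque>

// Toy model of the BEP scheduling rule: the unblocked task with the
// lowest priority number runs first, and tasks of equal priority take
// turns whenever the running one blocks or yields.
struct Task {
    int priority;   // e.g. 51..55; a lower number means a higher run-time priority
    int id;
    bool blocked;   // true while the task waits for an event
};

// Pick the next runnable task: the first unblocked task with the
// smallest priority number.  The chosen task is rotated to the back
// of the ready list so that peers of the same priority run round-robin.
int pickNext(std::deque<Task>& ready) {
    int best = -1;
    for (int i = 0; i < (int)ready.size(); ++i) {
        if (ready[i].blocked) continue;
        if (best < 0 || ready[i].priority < ready[best].priority)
            best = i;
    }
    if (best < 0) return -1;            // every task is blocked
    Task chosen = ready[best];
    ready.erase(ready.begin() + best);
    ready.push_back(chosen);            // round-robin within its priority
    return chosen.id;
}
```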

The architecture of the BEP relies on a set of concurrently running tasks. Each task is represented by an object of a specific sub-class of the Task class. The following lists the BEP's tasks, grouped by priority (the highest-priority task, at priority number 51, is listed first, and the lowest-priority tasks, at 55, are last; tasks with a lower priority number have a higher run-time priority):

TABLE 3. BEP Tasks

Class Name           Pri.  Object Name          Role
TaskMonitor          51    taskMonitor          Perform aliveness tests of the
                                                other tasks. Allows the watchdog
                                                timer to reset the BEP if a task
                                                fails to respond to a query
                                                within 8 minutes.
CmdManager           52    cmdManager           Execute uplinked commands
SystemConfiguration  53    systemConfiguration  Respond to changes in the
                                                configuration table and monitor
                                                the radiation flag
SwHousekeeper        53    swHousekeeper        Collect and periodically send
                                                software statistics. Update LED
                                                bi-levels to reflect the
                                                instrument's operating state.
DeaHousekeeper       53    deaHousekeeper       Periodically collect and send
                                                DEA housekeeping values.
MemoryServer         54    memoryServer         Handle read (including dump),
                                                write, and execute memory
                                                requests
BiasThief            55    biasThief            Trickle the contents of the
                                                computed CCD bias maps into
                                                telemetry
ScienceManager       55    scienceManager       Perform science runs, including
                                                hardware setup, parameter dumps,
                                                bias computation, and data
                                                processing

Each BEP task class provides two sets of member functions. One set is visible to clients of the class and may be called directly by any thread of control. In this document, these are known as "binding" functions. The second set of functions is internal to the task class, and must be called only by the task object's thread of control.
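The two-layer convention can be sketched as follows. This is a hedged illustration: MemoryTaskSketch, requestDump(), and serviceNext() are hypothetical names, and a real binding function would also notify the task through the executive rather than rely on polling.

```cpp
#include <cassert>
#include <queue>

// Sketch of a task class with a "binding" layer and an internal layer.
class MemoryTaskSketch {
public:
    // Binding function: safe to call from any thread of control.
    // It only records the request; in the real system it would also
    // signal the task so its thread wakes up.
    void requestDump(unsigned address) { pending.push(address); }

    // Internal function: called only from the task object's own
    // thread, which dequeues and performs the requested work.
    bool serviceNext(unsigned& address) {
        if (pending.empty()) return false;
        address = pending.front();
        pending.pop();
        return true;
    }

private:
    std::queue<unsigned> pending;   // requests registered by binding calls
};
```

The split lets any client register work without blocking, while all state-changing work happens in one thread.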

3.6 BEP Global System Objects

This section identifies some of the objects within ACIS which are globally visible to the rest of the system. Figure 13, "BEP Global System Objects," on page 70 illustrates the key higher-level global objects within the ACIS software. In that figure, the solid-lined "clouds" represent objects, and the connecting lines show who is talking to whom. The filled boxes indicate that the object is exclusively used by the party at the other end of the line. The empty boxes indicate that the adjacent object is shared by several other objects.

FIGURE 13. BEP Global System Objects


cmdDevice - This object is responsible for physically reading command packets from the BEP's command hardware.

tlmDevice - This object is responsible for setting up the telemetry hardware to transfer the contents of a region of memory to the RCTU serial telemetry port.

deaDevice - This object is responsible for writing commands to the DEA command port, and for reading status words from its reply.

fepDevice[6] - Each object corresponds to a single FEP. These objects are responsible for accessing the FEP control hardware and for accessing memory-mapped hardware and software mailbox locations within the corresponding FEP.


cmdManager - This object is responsible for acquiring commands from the cmdDevice and executing the commands. It uses the tlmManager, via a cmdLog object (not shown), to acknowledge the reception of commands and indicate their disposition.

tlmManager - This object is responsible for queueing telemetry transfer requests from the many telemetry sources within the system. The tlmManager uses the tlmDevice to instruct the hardware to physically transfer the telemetry items.

deaManager - This object is responsible for formatting and sending commands to the Detector Electronics Assembly, and for processing any acquired status information and data. This object uses the deaDevice to issue commands and retrieve responses from the physical DEA hardware.

fepManager - This object is responsible for commanding all of the FEPs and for managing data produced by the FEPs. This object uses all of the fepDevice and fepIoManager objects to send and receive information to and from the individual FEPs.

fepIoManager[6] (not shown) - These 6 objects are responsible for managing the I/O protocol to and from each of the Front End Processors. Each manager corresponds to a single FEP.


memoryServer - This object is responsible for performing memory dumps, run-time memory loads, and commanded function calls. It uses the fepManager to forward such requests to the FEPs and the deaManager to perform DEA memory loads and dumps. The memoryServer uses the tlmManager to transfer memory dumps and send return values from function calls into telemetry.

scienceManager - This object is responsible for managing science data production, acquisition, and processing. It uses the deaManager to load and command the DEAs to clock the CCDs. It uses the fepManager to acquire the resulting science data. This object uses the tlmManager to place the produced science data into the telemetry stream.

deaHousekeeper - This object is responsible for acquiring and sending DEA engineering data to telemetry. It uses the deaManager to request and acquire specific housekeeping values from the DEA, and it uses the tlmManager to place the acquired housekeeping data into the telemetry stream.

swHousekeeper - This object is responsible for accumulating and reporting various software housekeeping statistics. It uses the tlmManager to place the acquired data into the telemetry stream. Most objects in the system will occasionally report information to this object.

biasThief - This object acts under the direction of the scienceManager, and is responsible for acquiring and sending bias map data from the Front End Processors as telemetry and processing resources permit.

General Purpose

taskManager - This object is responsible for coordinating the activities of all of the tasks within the BEP. This object has indirect access to every task within the BEP (not shown).

intrController - This object is responsible for managing interrupts within the BEP. It has access to every interruptible device within the BEP, and provides interrupt enable/disable services to the rest of the BEP software.

systemClock - This object is responsible for providing the current time, in units of BEP timer-ticks, to the other objects within the BEP.
In addition to the objects described above, the Back End Processor also uses a variety of global low-level hardware interface objects to manage access to the Back End's CPU and the attached hardware. These include the following:

mongoose - This object is responsible for coordinating access to the R3000 System Coprocessor register and to the Mongoose Command/Status Interface (CSI) registers.

bepReg - This object is responsible for coordinating access to the Back End Processor's Control, Status and Pulse hardware registers, and for providing access to the Command FIFO, Downlink Transfer Control, and the Detector Electronics Assembly Command, Status and Microsecond Timestamp registers.

dmaDevice - This object is responsible for managing transfers using the Mongoose's Direct Memory Access (DMA) device.

timerDevice - This object is responsible for managing the Mongoose's General-Purpose Timer device.

watchdogDevice - This object is responsible for managing the Mongoose's Watchdog Timer device.

3.7 FEP Software Architecture

Within ACIS, there are six Front End Processors (FEP), each acting under the direction of the Back End Processor (BEP). During a given run, each FEP is responsible for processing images from one CCD.

This section summarizes the architecture of the software running on each of the Front End Processors. This software consists of two main types: the I/O library software, which manages the interfaces with the Back End and the FEP hardware, and the science processing software, which computes bias maps and processes incoming CCD images.

Figure 14 illustrates a simplified context diagram of the Front End Processor software.

FIGURE 14. Front End Processor Context Diagram

The interface between the Back End Processor and each of the Front End Processors is managed using a shared-memory region residing on each FEP. The I/O library software running on a given FEP establishes and manages three interfaces with the Back End:

BEP-FEP Command Mailbox - This mailbox is used by the BEP to issue commands to the software running on a FEP, and to obtain the FEP's response to the command. This mailbox is primarily used by the BEP's FepIoManager class to load parameters, to start and stop bias and science activities on a FEP, and to query FEP status.

FEP-BEP Ring-Buffer - This ring-buffer is used by a FEP to send large amounts of science data to the BEP. The data is organized into tagged data records, which are read by the BEP's FepIoManager class and subsequently parsed and processed by the BEP's Science Processing classes. A FEP primarily uses its ring-buffer to send exposure information records and science data records, such as event records or histogram data records.
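The tagged-record ring-buffer can be sketched as follows. This is a toy single-producer, single-consumer model: the 64-word size, the record layout (one tag word, one length word, then the payload), and the function names are assumptions, since the real layout is defined by the FEP I/O library.

```cpp
#include <cassert>
#include <cstdint>

// Toy sketch of a tagged-record ring buffer shared between a FEP
// (producer) and the BEP (consumer).
const unsigned RING_SIZE = 64;   // words; the real size is an assumption here

struct Ring {
    uint32_t buf[RING_SIZE];
    unsigned head;   // producer index (FEP side)
    unsigned tail;   // consumer index (BEP side)
};

// Producer (FEP) side: append a tag word, a length word, and the payload.
bool putRecord(Ring& r, uint32_t tag, const uint32_t* data, unsigned n) {
    unsigned used = (r.head + RING_SIZE - r.tail) % RING_SIZE;
    if (used + n + 2 >= RING_SIZE) return false;   // keep one slot free
    r.buf[r.head] = tag;  r.head = (r.head + 1) % RING_SIZE;
    r.buf[r.head] = n;    r.head = (r.head + 1) % RING_SIZE;
    for (unsigned i = 0; i < n; ++i) {
        r.buf[r.head] = data[i];
        r.head = (r.head + 1) % RING_SIZE;
    }
    return true;
}

// Consumer (BEP) side: parse the next record, if any.
bool getRecord(Ring& r, uint32_t& tag, uint32_t* out, unsigned& n) {
    if (r.tail == r.head) return false;            // buffer is empty
    tag = r.buf[r.tail];  r.tail = (r.tail + 1) % RING_SIZE;
    n   = r.buf[r.tail];  r.tail = (r.tail + 1) % RING_SIZE;
    for (unsigned i = 0; i < n; ++i) {
        out[i] = r.buf[r.tail];
        r.tail = (r.tail + 1) % RING_SIZE;
    }
    return true;
}
```

Because the FEP only advances head and the BEP only advances tail, neither side needs to lock the shared region.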

The I/O library software also provides low-level access functions to the Front End Processor hardware:

FEP Hardware Registers - Each FEP contains a set of hardware registers. These registers control the behavior of the FEP's image acquisition and threshold hardware. The I/O library provides functions to read and write these registers. The BEP's FepDevice class accesses some of these registers across the shared-memory interface to reset the FEPs, and to determine the current reset state of the FEPs.

FEP Interrupts - Each FEP can be interrupted from a few sources. Its I/O library provides a common interrupt handler, which deals with all interrupt causes.

FEP Image Buffer - Each FEP contains a hardware-maintained image buffer, which is used to acquire CCD images for processing by the FEP software.

FEP Bias Map and Parity Plane (not shown) - Each FEP contains a hardware-maintained bias map buffer and parity plane, which are used by the FEP software to store CCD pixel bias values and to verify the integrity of the bias map values. The BEP's FepIoManager class writes into this memory, via the shared memory interface, to mark bad pixels and columns. The BEP's BiasThief class reads from this memory when packing and telemetering the FEP's bias maps.

Figure 15 illustrates the overall data flow within the Front End Processor software. Shaded circles illustrate some of the services provided by the FEP hardware and I/O library, and the unshaded circles illustrate the functions handled by the science processing functions.

The Science Processing functions perform three types of actions:

Load Parameters - This action is initiated by the Back End Processor, which passes parameters to each Front End using its Command Mailbox. The science software handles these commands between runs, storing the loaded parameters for use in subsequent bias computations and data processing.

Compute Bias - The science software on a given Front End Processor is capable of computing the bias level for each CCD pixel represented in an image. This action is initiated via a command from the BEP. The type of bias computation to perform, and the parameters to use, are provided by a previous Load Parameters action. The resulting map of pixel-by-pixel bias values is retained for use by subsequent data processing. NOTE: Although it is not shown in the diagrams, the bias maps are located in shared memory, and are visible to the Back End Processor. This enables the BEP to telemeter the contents of the maps. Unfortunately, due to unforeseen timing issues in the hardware, access to this area during data processing interferes with the hardware event processing. As a result, the BEP software only accesses this memory prior to starting event processing on the FEPs.

Process Images - The science software provides an action which processes incoming images from a CCD. The BEP initiates and terminates this action via the Command Mailboxes, specifying which mode to use when processing the images. The parameters to use for data processing are provided by a previous Load Parameters action, and the pixel bias values used are those computed by the most recent Compute Bias action.

FIGURE 15. Front End Processor Data Flow Diagram

3.8 Functional Overview

This section provides an overview of the behavior of key features of the system.

3.8.1 Command Reception and Execution

This section provides a simplified picture of how commands are received, executed, and responded to by the instrument software. For a detailed description of command reception and execution, see Section 16.0 and Section 17.0.

In order to simplify the command system, the ACIS software is designed to handle one command at a time. As a result, commands must be received and executed as fast as they arrive at the instrument. ACIS is designed to handle no more than 4 commands per second, so each command must be processed in under 250ms.

The following object diagram shows the participants in handling a command, and a simplified sequence of actions which occur when the software processes a command. The numbered actions describe the main steps involved in processing commands sent to the instrument software.

FIGURE 16. Simplified Command Processing Object Diagram

  1. A command packet is received by the Back End's RCTU interface hardware, which stores the packet words into a hardware FIFO. Once the entire packet has been received, the hardware generates an interrupt. The interrupt controller dispatches control to the cmdDevice object, which notifies the cmdManager task object that a command is ready.

  2. The cmdManager establishes a transient cmdPkt object, used to hold the contents of the command packet, and obtains the address of the packet's buffer using getBufferAddress().

  3. The cmdManager then copies the command packet data from the FIFO into cmdPkt's data buffer, using the cmdDevice's function, readFifo().

  4. The cmdManager prepares for a command echo using cmdEcho.openEntry().

  5. The cmdManager obtains the command opcode from the packet using cmdPkt.getOpcode().

  6. The cmdManager uses the opcode to select the appropriate command handler object, in this case cmdHandler, and tells the selected handler to process the command using processCmd().

  7. The handler then performs the required action, usually forwarding action requests onto one or more client objects. Once the action has been performed or forwarded, the cmdHandler returns a result code (usually provided by the client) back to the cmdManager.

  8. The cmdManager passes the result to the cmdEcho.closeEntry() to indicate the disposition of the command.

  9. The cmdEcho then passes the cmdPkt and the result code to a command echo telemetry formatter object, tf_Command_Echo, which stores a copy of the command and the result code in its telemetry buffer.

  10. The cmdEcho then tells the formatter to post its telemetry buffer to be sent out of the instrument.

  11. The tf_Command_Echo object then passes its buffer to the tlmManager object, which queues the buffer for transfer out of the instrument.

  12. Eventually, the tlmManager instructs the tlmDevice object to transfer the buffer's contents to the RCTU telemetry interface hardware.
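The opcode-driven dispatch in steps 5 through 7 can be sketched as follows. The handler names, result codes, and the std::map registry are illustrative assumptions, not the ACIS classes.

```cpp
#include <cassert>
#include <map>

// Sketch of a cmdManager-style dispatch: the opcode selects a handler
// object, whose processCmd() performs the action and returns a result
// code for the command echo.
enum Result { CMD_OK = 0, CMD_BAD_OPCODE = 1 };

struct CmdHandler {
    virtual ~CmdHandler() {}
    virtual Result processCmd(const unsigned* packet) = 0;
};

// A trivial handler that accepts any packet.
struct NoopHandler : CmdHandler {
    Result processCmd(const unsigned*) { return CMD_OK; }
};

// Look up the handler registered for the opcode; an unknown opcode
// is reported in the command's disposition.
Result dispatch(const std::map<unsigned, CmdHandler*>& handlers,
                unsigned opcode, const unsigned* packet) {
    std::map<unsigned, CmdHandler*>::const_iterator it = handlers.find(opcode);
    if (it == handlers.end()) return CMD_BAD_OPCODE;
    return it->second->processCmd(packet);
}
```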

3.8.2 Telemetry Production

This section provides a simplified picture of how telemetry buffers are allocated, formatted, and sent out of the instrument. For a detailed description of telemetry management, buffer allocation and formatting, see Section 18.0 and Section 19.0.

The ACIS software is required to produce many different types of telemetry items, at different rates, and to merge these items into a single telemetry stream at either 24Kbps or 500bps. It is a goal of the ACIS software to avoid gaps in the telemetry stream whenever there is something to send. Since the RCTU transfers words at a rate of 128Kbps, and the telemetry hardware has 64 bits of buffering between the transfer hardware and the RCTU, the software must be capable of starting a new telemetry transfer within 0.5ms in order to avoid padding between transferred packets.
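Taking the quoted figures at face value (64 bits of buffering, and 128Kbps read as 128,000 bits per second), the 0.5ms deadline is simply the time the hardware takes to drain its buffer:

```cpp
#include <cassert>

// Time, in milliseconds, before the downstream interface runs dry:
// the buffered bits divided by the drain rate.
double responseDeadlineMs(double bufferBits, double rateBps) {
    return bufferBits * 1000.0 / rateBps;
}
```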

The following object diagram shows an example of telemetry production and a simplified sequence of actions which occur when the software processes a single telemetry item. The numbered actions describe the main steps involved in producing a telemetry item. This particular scenario uses the cmdLog object, described above, as the starting point for illustrating the behavior of the telemetry system.

FIGURE 17. Simplified Telemetry Production Object Diagram

  1. During system initialization, each telemetry buffer allocator object, including cmdLogAllocator, initializes its respective pools of telemetry packet buffers and free queues. Each then proceeds to allocate every packet in its pool, instancePool. Buffer pools are used instead of fixed size arrays of telemetry packet buffers in order to allow easier patching of the number and size of different types of telemetry packets. NOTE: In order to keep different parts of the system reasonably de-coupled, they use different allocators. This way, if one part of the system consumes all of its telemetry buffers, another part may still be able to send data. If a single allocator were used, then any one part of the system could repeatedly consume all available telemetry buffers, denying the rest of the system the ability to send information.

  2. Continuing their initialization, the allocator objects add pointers to the allocated buffers into their respective freeQueues. Once all buffers have been allocated from their pools and placed into the appropriate freeQueue, the buffer initialization is complete.

  3. Later, in response to an incoming command, the cmdEcho is told to open a log entry by the cmdManager (see Figure 16). As part of its processing, the cmdLog declares a tfCmdEcho object, and tells the object to obtain a telemetry packet buffer.

  4. The tfCmdEcho object asks cmdLogAllocator, the telemetry buffer allocator object used to provide command log and echo telemetry buffers, for a telemetry packet buffer.

  5. cmdLogAllocator goes to its freeQueue and removes a packet buffer from the queue.

  6. Once tfCmdEcho gets a buffer, tlmPkt, from the allocator, the cmdLog tells tfCmdEcho to copy the command packet contents into the telemetry buffer and to fill in the command result field (not shown).

  7. cmdLog then tells tfCmdEcho to post its buffer to the telemetry manager.

  8. tfCmdEcho, in turn, passes tlmPkt to the tlmManager.

  9. tlmManager appends a pointer to the packet into its sendQueue.

  10. Later, once all packets ahead of tlmPkt have been transferred out of the instrument, tlmManager removes tlmPkt from the front of the sendQueue.

  11. tlmManager then prepares tlmPkt for transfer out of the instrument, obtaining its raw buffer address and the number of words to transfer from tlmPkt.

  12. tlmManager tells the tlmDevice to transfer the buffer out of the instrument. tlmDevice then programs the Back End's telemetry interface hardware to supply data from the specified buffer to the RCTU serial telemetry port.

  13. Once the requested number of words have been transferred from the BEP to the RCTU interface, the hardware generates a telemetry interrupt. At this point, the software has a maximum of 0.5ms to program the telemetry hardware to handle a new transfer before a fill-pattern byte is written to the RCTU by the hardware. The tlmDevice's interrupt handler calls the tlmManager directly to service the device.

  14. tlmManager tells the tlmPkt that it is free to be reused. It then attempts to get the next packet from its sendQueue and, if another packet buffer is ready to be sent, starts the next transfer (not shown).

  15. tlmPkt then tells its allocator, cmdLogAllocator, to release the buffer.

  16. cmdLogAllocator then places tlmPkt's address back into its freeQueue. tlmPkt is now ready to be re-used.

3.8.3 Memory Dumps

This section provides a simplified picture of how requests to dump portions of memory are handled. For a detailed description of memory dump services, and memory load and subroutine calling services, see Section 27.0.

All memory functions are handled by a memoryServer object. In order to allow for large memory dumps, this object is implemented as a task. This object provides a set of binding functions, which may be safely called from other tasks to request services from the memoryServer, and a set of implementation functions, which are used by the memoryServer's task to implement the requested actions.

The following object diagram shows the participants in handling a request to dump the contents of a region of Front End Processor Memory.

FIGURE 18. Simplified Memory Dump Object Diagram

  1. The cmdManager receives a command packet instructing the system to dump a portion of the memory of one of the Front End Processors. It looks up the corresponding handler object, chReadFep, and instructs it to process the command.

  2. The chReadFep object interprets the contents of the command packet, checking and extracting the index of the FEP from which to read, from which address, and how many words to send. chReadFep then tells the memoryServer object to perform the read using a binding function provided by the memoryServer.

  3. The memoryServer's binding function then notifies the task that a request has been registered. Once the request has been registered, the binding function returns to chReadFep, which then returns the result of the request to the cmdManager object. The cmdManager then logs the result with the cmdEcho, as shown in Figure 16. Note that the time it takes to execute steps 1 - 3 must be less than 250ms.

  4. Later, the memoryServer's task wakes up due to the notification, and proceeds to perform the requested memory dump. It starts by declaring a tfReadFep object, which is used to manage and format the telemetry packet buffer containing the dumped data, and telling the object to wait for a telemetry packet buffer to become available.

  5. The tfReadFep object uses the allocator dedicated to the memoryServer, memServerAllocator, to attempt to allocate a buffer, or to suspend the task until the tlmManager releases one of its packet buffers.

  6. Once tfReadFep obtains a telemetry packet buffer, the memoryServer asks tfReadFep to provide the address and maximum length to write the FEP data into.

  7. The memoryServer task then tells the fepManager to read the data from the indicated FEP directly into the buffer address supplied by tfReadFep.

  8. The fepManager forms and issues a request to a given FEP using a corresponding fepIoManager object.

  9. The fepIoManager object writes the read command into the FEP's command mailbox and polls the mailbox for a reply.

  10. The software running on the FEP periodically polls its command mailbox. Once it detects that a command is present, it executes the command and writes its reply. In this case, the FEP I/O Library software detects the read request, and copies the requested data into the command mailbox.

  11. Once the data request has been satisfied, the memoryServer tells the tfReadFep object to post its buffer to the telemetry manager.

  12. The tfReadFep object then passes its telemetry packet buffer to the tlmManager object for transfer.

  13. Later, once the packet's contents have been transferred out of the instrument, the tlmManager object releases the packet's buffer back to the originating allocator, memServerAllocator. At this point the buffer is ready to be re-used by the memoryServer.
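The mailbox handshake of steps 9 and 10 can be sketched as follows. This is a toy model: the field layout, command codes, and function names are assumptions, since the real mailbox format is defined by the FEP I/O library.

```cpp
#include <cassert>

// Sketch of a shared-memory command mailbox between the BEP and a FEP.
struct Mailbox {
    unsigned command;   // written by the BEP, cleared by the FEP when handled
    unsigned reply;     // written by the FEP when the command completes
    unsigned address;   // request argument: which word to read
    unsigned data;      // stands in for the copied-out reply data
};

const unsigned CMD_NONE = 0, CMD_READ = 1, REPLY_DONE = 1;

// FEP side: one iteration of the polling loop.  If a read command is
// pending, copy the requested word out and post the reply for the
// BEP's fepIoManager to pick up.
void fepPollOnce(Mailbox& mb, const unsigned* fepMemory) {
    if (mb.command == CMD_READ) {
        mb.data = fepMemory[mb.address];
        mb.command = CMD_NONE;
        mb.reply = REPLY_DONE;
    }
}
```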

3.8.4 Software Housekeeping

This section provides a simplified picture of how run-time statistics and conditions are acquired and posted to telemetry by the Back End software. For a detailed description of software housekeeping services, see Section 28.0.

All statistics and warning conditions produced by the Back End Processor software are reported using a swHousekeeper object. In order to periodically telemeter the accumulated information, this object is implemented as a task. This object provides a binding function, which other tasks use to report information.

The following object diagram shows how various software conditions are reported to housekeeping, and how this information is posted to telemetry.

FIGURE 19. Software Housekeeping Object Diagram

  1. During start-up, the swHousekeeper object sets up two accumulation buffers, stat_buffer[0] and stat_buffer[1], and sets an internal pointer to one of these objects. Later, once multitasking has started, the main loop of the swHousekeeper task waits for a period of time, and then telemeters the set of statistic counters accumulated during that period.

  2. A client object, client 1, running under any thread of control, including interrupts, reports a statistic to the swHousekeeper object.

  3. The swHousekeeper object logs the occurrence into the current accumulation buffer, e.g., stat_buffer[0].

  4. Periodically, the swHousekeeper thread's main loop returns from waiting for the accumulation interval, and proceeds to telemeter the accumulated statistics.

  5. The swHousekeeper creates a temporary software housekeeping format object, form, and tells it to obtain a telemetry packet buffer.

  6. The format object, form, uses the swHouseAllocator object to obtain its telemetry packet buffer. If one is not available at the time of the request, the swHousekeeper object reports the failure to stat_buffer[0] and waits for the next cycle. This example assumes that a packet was obtained.

  7. The swHousekeeper task then sets its internal pointer to stat_buffer[1] so that new statistics are recorded into the second buffer while the housekeeper is preparing and telemetering the contents of the first. At this point another client, client 2, may report a statistic.

  8. swHousekeeper directs its new current accumulation buffer, stat_buffer[1], to accumulate the statistic.

  9. swHousekeeper copies the contents of the old statistics buffer, stat_buffer[0], into the acquired telemetry packet buffer using the format, form.

  10. Once the information has been copied, the swHousekeeper tells the form to post its telemetry packet buffer for transfer out of the instrument.

  11. The form object tells the tlmManager object to post the packet buffer for transfer.

  12. Later, once the packet has been transferred, the tlmManager releases the packet buffer back to the originating allocator, swHouseAllocator.
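The double-buffer swap at the heart of steps 1 through 9 can be sketched as below. The class and member names (`SwHousekeeper`, `report`, `swapAndCollect`) are illustrative only, and a `std::map` of counters stands in for the flight code's fixed statistic buffers.

```cpp
#include <cassert>
#include <map>
#include <string>

// Sketch of the swHousekeeper double-buffer scheme: clients report
// into the current buffer while the previous buffer is telemetered.
class SwHousekeeper {
public:
    // Clients report a statistic; it lands in the *current* buffer.
    void report(const std::string& stat) { (*current)[stat]++; }

    // Telemetry cycle: swap buffers so new reports go to the other
    // buffer, then return the buffer just closed out for packing.
    std::map<std::string, int> swapAndCollect() {
        std::map<std::string, int>* old = current;
        current = (current == &statBuffer[0]) ? &statBuffer[1]
                                              : &statBuffer[0];
        std::map<std::string, int> out = *old;
        old->clear();                     // ready for the next cycle
        return out;
    }

private:
    std::map<std::string, int> statBuffer[2];
    std::map<std::string, int>* current = &statBuffer[0];
};
```

The swap means that a reporting client never writes into the buffer being telemetered, which is what lets clients report from any thread of control while the housekeeper's main loop is packing the previous interval's counters.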

3.8.5 DEA Housekeeping

This section provides a simplified picture of how the instrument acquires housekeeping values from the Detector Electronics Assembly (DEA) and sends them to telemetry. For a detailed description of DEA housekeeping services, see Section 31.0.

The ACIS software uses the deaHousekeeper task to acquire and telemeter housekeeping values from the Detector Electronics Assembly. Like other tasks in the system, the deaHousekeeper provides binding functions, which are used by other tasks to command the housekeeper to perform certain services.
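The essential property of a binding function is that it runs in the caller's thread of control and merely enqueues a request, which the task's own main loop services later. A minimal sketch of this hand-off is shown below; the names (`DeaHousekeeperTask`, `startRun`, `serviceOne`) are hypothetical, and a `std::deque` stands in for an RTOS message queue.

```cpp
#include <cassert>
#include <deque>

// Illustrative binding-function sketch: callers enqueue requests,
// and the task services them from its own main loop.
enum class Request { StartRun, StopRun };

class DeaHousekeeperTask {
public:
    // Binding functions, called from other tasks (e.g. command handlers).
    void startRun() { queue.push_back(Request::StartRun); }
    void stopRun()  { queue.push_back(Request::StopRun); }

    // One iteration of the task's main loop, run in the task's thread.
    void serviceOne() {
        if (queue.empty()) return;
        Request r = queue.front();
        queue.pop_front();
        if (r == Request::StartRun) running = true;
        if (r == Request::StopRun)  running = false;
    }

    bool isRunning() const { return running; }

private:
    std::deque<Request> queue;  // stands in for an RTOS message queue
    bool running = false;
};
```

Note that the caller returns immediately after the binding function enqueues the request; the state change only becomes visible once the task is scheduled and services its queue.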

The following object diagram shows how various DEA housekeeping classes are configured, and how housekeeping values are acquired and posted to telemetry.

FIGURE 20. DEA Housekeeping Runs Object Diagram

  1. The cmdManager receives a command packet which contains a DEA housekeeping parameter block to load into the instrument. Using the opcode of the command packet, the cmdManager selects the chLoadDeaHouse object to process the command.

  2. The chLoadDeaHouse object checks the contents of the parameter block, extracts the slot identifier from the block, and then tells the DEA housekeeping parameter block list object, pblDeaHouse, to replace the existing parameter block corresponding to the slot with the new parameter block.

  3. Later, the cmdManager object receives a command packet instructing the instrument to start DEA housekeeping. The cmdManager forwards the command to the chStartDeaRun object for execution.

  4. The chStartDeaRun object checks the command, and tells the deaHousekeeper object's binding function to instruct the task to start acquiring and sending values.

  5. The deaHousekeeper object's binding function notifies the task portion of the object to start its operations.

  6. Later, as a result of the notification, the deaHousekeeper task wakes up, and fetches the parameter block to use from the DEA housekeeping parameter block list, pblDeaHouse, and starts its housekeeping operations.

  7. Once the parameter block has been fetched, the deaHousekeeper object establishes a telemetry object, tfDeaHouse, to manage telemetry buffers and formatting. It tells the telemetry object to get a telemetry packet buffer.

  8. The tfDeaHouse object goes to the housekeeper's buffer allocator, deaHouseAllocator, to allocate a telemetry buffer. If one is not immediately available, it waits until a buffer is released by the telemetry manager, tlmManager.

  9. Once the telemetry object has a buffer, the deaHousekeeper tells the deaManager to obtain a housekeeping value from the DEA.

  10. The deaManager object sends a query command to the DEA using the deaDevice object, and then waits for and reads the reply to the query, returning the replied value back to the housekeeper.

  11. The deaHousekeeper then adds the housekeeping value to its telemetry buffer. The deaHousekeeper repeats these steps (from step 9) for each housekeeping value specified in the parameter block until all entries have been acquired and stored.

  12. Once all of the values have been stored, the deaHousekeeper tells tfDeaHouse to post its buffer to be sent to telemetry.

  13. The tfDeaHouse object passes its telemetry buffer to the tlmManager object for transmission. The deaHousekeeper object then tells the telemetry object to wait for another buffer, and repeats its housekeeping acquisition cycle (from step 8).

  14. Later, once the posted telemetry packet buffer has been sent, the tlmManager object releases it back to the DEA Housekeeping packet allocator, deaHouseAllocator. At this point, the buffer is ready to be re-used by the housekeeper.

  15. Finally, the cmdManager object receives a command to stop DEA Housekeeping, and forwards the packet to the chStopDeaRun object.

  16. The chStopDeaRun object invokes a binding function of the deaHousekeeper object to stop the run. The deaHousekeeper then completes its current cycle and stops housekeeping.
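The inner acquisition loop of steps 9 through 12 amounts to walking the parameter block, querying the DEA once per entry, and appending each reply to a telemetry buffer. A sketch under assumed types is below: the `queryDea` callback stands in for the deaManager/deaDevice query path, and the function and parameter names are illustrative, not the flight interface.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <vector>

// Sketch of one DEA housekeeping acquisition cycle (steps 9-12):
// query the DEA for each entry in the parameter block and append
// the replies to a telemetry buffer, which is then ready to post.
std::vector<uint16_t> acquireCycle(
        const std::vector<uint16_t>& paramBlock,
        const std::function<uint16_t(uint16_t)>& queryDea) {
    std::vector<uint16_t> tlmBuffer;
    tlmBuffer.reserve(paramBlock.size());
    for (uint16_t queryId : paramBlock)   // one query per block entry
        tlmBuffer.push_back(queryDea(queryId));
    return tlmBuffer;                     // buffer ready to post (step 12)
}
```

In the real design the buffer comes from deaHouseAllocator and is posted through tlmManager; the sketch only shows the fill loop itself.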

3.8.6 Science Runs

This section provides a simplified picture of how the instrument performs science data acquisition and processing runs. For a detailed description of science operations, see Section 33.0, Section 37.0, Section 42.0, Section 43.0, Section 44.0 and Section 45.0.

Science processing is managed using a collection of related objects. The scienceManager object is a task which is responsible for coordinating the operation of a run. It provides binding functions which are used by other tasks to start and stop science runs.

The scienceManager object implements a particular science mode using a particular science mode object. The following example describes a Timed Exposure run, and uses an smTimedExposure object to implement the details of the run.

Any given science mode also has various processing modes. These are handled using a processing mode object. For a given run, one processing mode object is used to process the science data from one corresponding Front End Processor being used for the run. The example only illustrates one such object, pmEvent.
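The composition just described, one manager driving one mode object, which in turn holds one processing-mode object per configured FEP, can be sketched as follows. The class names echo the document's objects, but the members and interfaces are hypothetical simplifications of the flight classes.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Illustrative run-time composition for a science run.
struct ProcessingMode {                 // plays the role of pmEvent
    int recordsProcessed = 0;
    void processRecord() { ++recordsProcessed; }
};

struct ScienceMode {                    // plays the role of smTimedExposure
    std::vector<ProcessingMode> perFep; // one processing object per FEP
    explicit ScienceMode(std::size_t fepCount) : perFep(fepCount) {}
    void processFrom(std::size_t fep) { perFep.at(fep).processRecord(); }
};

struct ScienceManager {                 // plays the role of scienceManager
    std::unique_ptr<ScienceMode> mode;  // selected by the start command
    void startRun(std::size_t fepCount) {
        mode = std::make_unique<ScienceMode>(fepCount);
    }
};
```

Keeping one processing object per FEP means each ring-buffer's records are interpreted with state private to that FEP, so data from different CCDs never interleaves within a processing object.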

The following object diagram shows a simplified picture of the overall sequence of events which take place during a science run.

FIGURE 21. Science Run Object Diagram

  1. The cmdManager object receives a packet instructing the system to start a science run. The cmdManager forwards the request to the chStartTimedExp object for execution.

  2. The chStartTimedExp object interprets the command packet, and tells the scienceManager object's binding function to start a new science run.

  3. The scienceManager's binding function notifies its task to start a new run.

  4. Later, as a result of the notification, the scienceManager's task is scheduled to run. It retrieves the parameter block to use (not shown), and tells the science mode object, smTimedExposure, to set up for a run.

  5. The smTimedExposure object uses the deaManager object to issue commands to the DEA to load the CCD Controller Program and Sequencer RAMs (PRAM, SRAM).

  6. The smTimedExposure object then tells the fepManager object to configure the software on each Front End Processor.

  7. Once the setup is complete, the scienceManager tells smTimedExposure to dump its parameter blocks (not shown). If a bias computation is needed, the manager tells the mode object to compute the pixel-by-pixel bias maps on each of the configured FEPs. If not, the run skips to step 13.

  8. The smTimedExposure object tells the fepManager object to start the bias computation routines on each of the configured FEPs.

  9. For each configured FEP, the fepManager object issues a command to the FEP command mailbox to start the bias computation.

  10. The FEP I/O Library functions, running on each Front End Processor, read the command.

  11. The "start bias" command is then interpreted by the Science functions running on each FEP. At this point, the FEP Science functions proceed to wait for data to arrive from their respective CCDs, and compute the pixel bias map values from the acquired images.

  12. Once all of the FEPs are ready to compute their bias levels, and are waiting for images, the mode object, smTimedExposure, instructs the deaManager object to start the CCD Controller sequencers on the DEA. Once the sequencers start clocking out images, the FEP Science routines proceed to acquire the images, and use the images to build their respective pixel bias maps. Meanwhile, the smTimedExposure object periodically polls the FEPs to determine when all of the bias maps are complete. Once the maps are complete, the smTimedExposure object then uses the deaManager to stop the sequencers.

  13. Once the bias maps have been computed, the scienceManager object, if configured to do so, notifies the biasThief (not shown) to start packing and posting the computed bias maps to telemetry. Once the bias maps have been sent, it instructs the science processing mode object to start acquiring and processing science event data.

  14. The smTimedExposure object tells the fepManager object to start data processing on all of the configured FEPs. Once they are ready, smTimedExposure re-starts the DEA sequencers.

  15. As the FEP Science functions acquire images and detect events, they use the FEP I/O Library routines to send the data to the Back End Processor.

  16. The FEP I/O Library routines append the data and exposure records to the shared-memory ring-buffer.

  17. As data is placed into the ring buffer, the smTimedExposure object tells the fepManager to read the data.

  18. The fepManager object then reads the exposure and data records from each FEP's ring-buffer.

  19. The smTimedExposure object uses a collection of processing objects, pmEvent[], to process data. Each pmEvent object is associated with one Front End Processor. As data is consumed from a ring-buffer, the smTimedExposure object tells the corresponding processing object to process the exposure and data records just read.

  20. As the pmEvent object interprets the records and filters the events, it uses a science data telemetry object, tfEventData, to pack the filtered events into a telemetry buffer. If the telemetry object does not have a buffer, pmEvent instructs it to wait for a telemetry packet buffer. The tfEventData object uses the allocator object dedicated to science processing, scienceAllocator, to wait for and allocate the buffer. Once it has a buffer, tfEventData packs the event data into it. Once tfEventData's buffer is full, the pmEvent object tells the telemetry object to post its buffer for transfer out of the instrument.

  21. The tfEventData object passes its telemetry packet buffer onto the tlmManager object to be posted for transfer out of the instrument.

  22. Later, once the telemetry packet buffer contents have been transferred, the tlmManager releases the packet buffer back to the allocator, scienceAllocator. At this point, the telemetry packet buffer can be re-used by science.

  23. Eventually, the cmdManager receives a command instructing the system to stop performing science. The cmdManager forwards the packet to the chStopTimedExp object for execution.

  24. The chStopTimedExp object then instructs the scienceManager's binding function to stop the run. The scienceManager's binding function tells smTimedExposure to stop, which then sets an internal flag and notifies the task to stop. When the task detects the stop request, it instructs the FEPs to finish processing. Once the last exposure being processed by the Front End Processors is complete, smTimedExposure stops the DEA CCD Controller sequencers, and posts a summary of the run to telemetry (not shown).
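The shared-memory ring buffer between each FEP and the BEP (steps 15 through 18) is a single-producer, single-consumer structure, so head and tail indices suffice without locking. The following is a minimal sketch under assumed record and size types; the class name and interface are illustrative, not the FEP I/O Library's.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Single-producer/single-consumer ring buffer sketch: the FEP appends
// records (put) and the BEP consumes them (get). One slot is left
// unused so that a full buffer can be distinguished from an empty one.
template <std::size_t N>
class RingBuffer {
public:
    bool put(uint32_t record) {          // FEP side: append a record
        std::size_t next = (head + 1) % N;
        if (next == tail) return false;  // full: producer must wait
        data[head] = record;
        head = next;
        return true;
    }
    bool get(uint32_t& record) {         // BEP side: consume a record
        if (tail == head) return false;  // empty: nothing to read
        record = data[tail];
        tail = (tail + 1) % N;
        return true;
    }
private:
    uint32_t data[N] = {};
    std::size_t head = 0, tail = 0;
};
```

Because only the producer writes `head` and only the consumer writes `tail`, the two processors can run the loop concurrently against the same shared memory without a mutual-exclusion primitive.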
