EDPS PASS Products Data Receipt (PDR) Pipeline Reference Manual
Contents
PDR Pipeline Overview
The need for a PDR Pipeline
The PDR Pipeline is the Pipeline within the EDPS that is responsible for receiving and processing products from the HST
Planning and Scheduling System (PASS). The products received include Mission Schedule files, Mission Timeline
Report files, Science Mission Schedule files, Orbital Ephemeris files, and various PASS Auxiliary Data files.
The type of processing performed on each product received is as follows:
Mission Schedule files - parsed to extract various obset and observation related information which is
inserted into database relations that are used to determine and control
the specific processing that is performed by the EDPS FOF, FGS, and Astrometry
pipelines; renamed and submitted for archiving to the HST archive system; and moved to a holding
directory (PDR_HLD_DIR) to be held there until deemed
to be no longer needed by operations personnel
Mission Timeline Report files - renamed and submitted for archiving to the HST archive system
Science Mission Schedule files - renamed and submitted for archiving to the HST archive system
Orbital Ephemeris files - used as input to produce FITS and ASCII files containing the ephemeris data
and also renamed and submitted for archiving to the HST archive system
PASS Auxiliary Data Files - copied to a holding directory (PDR_HLD_DIR) for use by operations personnel
Intended Audience and Scope
This Reference Manual is intended primarily to give OPUS Operations Personnel
information needed to understand the operation of the EDPS PDR Pipeline.
This includes information on the PDR Pipeline Input and Output products, the
directories and files used, and the processes involved. This manual is intended
to be a living document that is updated whenever PDR Pipeline changes are made.
PDR Pipeline Dependencies
The PDR pipeline is one of only two EDPS pipelines that do not depend upon other EDPS pipelines to function
correctly, the EDPS Engineering Data Receipt (EDR) Pipeline being the other. The PDR and EDR pipelines,
however, are dependent upon systems outside of the EDPS. The PDR pipeline depends upon the HST Planning
and Scheduling System (PASS) to provide notification that PASS products are available for processing and to
provide the products themselves.
Also, the PDR Pipeline, and the OPUS EDPS in general for that matter, is highly dependent upon
database interaction; see PDR Pipeline Database Usage. In order for the
PDR pipeline database interaction to occur correctly, the processes in the pipeline should be
executed in pipeline mode as opposed to interactive mode. Typically, as specified by the
OK_TO_UPDATE_DATABASE mnemonic in process resource files, processes that run in interactive mode
are prohibited from performing database updates. Therefore, to ensure correct operation of the pipeline,
!!! ALL PDR PIPELINE PROCESSES SHOULD BE RUN ONLY IN THE PIPELINE MODE !!!.
PDR Pipeline Nominal Operational Description
The PDR pipeline remains idle until it receives Transfer Notification files from the PASS System. These files
indicate to the PDR pipeline which PASS products are available for transfer and processing. Once a transfer
notification file is received, processes in the PDR pipeline will verify that all of the products listed in the
notification file are present on disk; if any are missing, the notification file will be renamed to indicate an error.
If all specified products are present, the PDR pipeline will determine whether an OSF already exists for any of the
specified products. If OSFs already exist, the OSFs will be marked as duplicates and the offending transfer notification
file will be renamed as a duplicate. If OSFs do not already exist for the products, one will be created for each product
to trigger subsequent processing for the product, and a record will be inserted into the database file_times
relation to keep a historical record of the products received.
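The receipt-and-verification flow just described can be sketched in a few lines of Python. This is an illustration only; osf_exists, create_osf, and record_file_times are hypothetical stand-ins for the OSF blackboard and file_times interfaces, not actual OPUS routines.

```python
import os

# Hypothetical stand-ins for the OSF blackboard and the file_times relation;
# the real pipeline uses OPUS OSFs and database inserts here.
def osf_exists(product_name):
    return False

def create_osf(product_name):
    pass

def record_file_times(product_name):
    pass

def process_notification(trans_path):
    """Apply the receipt checks described above to one *_trans file."""
    base = trans_path[:-len("_trans")]
    with open(trans_path) as f:
        products = [line.strip() for line in f if line.strip()]
    directory = os.path.dirname(trans_path)
    # 1. Verify that every product listed in the notification is on disk.
    if any(not os.path.exists(os.path.join(directory, p)) for p in products):
        os.rename(trans_path, base + "_bad")        # flag notification as in error
        return "bad"
    # 2. A pre-existing OSF means this is a duplicate delivery.
    if any(osf_exists(p) for p in products):
        os.rename(trans_path, base + "_duplicate")
        return "duplicate"
    # 3. Create an OSF per product and record receipt in file_times.
    for p in products:
        create_osf(p)
        record_file_times(p)
    os.rename(trans_path, base + "_done")
    return "done"
```

The _bad/_duplicate/_done renames correspond to the notification-file suffixes listed under PDR_PASS_DIR below.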
All of the various products received from PASS are initially received by the PDR pipeline in a transfer directory,
i.e., the PDR_PASS_DIR directory. Prior to processing the products, the products
are copied to individual directories specific to each product type. During the copies, the products are
renamed to make them recognizable to downstream processes. The product-specific directories used and the type
of processing performed (as specified in the Overview section) on each product type are as follows:
PDR_MSC_DIR directory
Mission Schedule files - parsed to extract various obset and observation related information which is
inserted into database relations that are used to determine and control
the specific processing that is performed by the EDPS FOF, FGS, and Astrometry
pipelines; submitted for archiving to the HST archive system; and moved to a
holding directory (PDR_HLD_DIR) to be held there until deemed
to be no longer needed by operations personnel
PDR_MTL_DIR directory
Mission Timeline Report files - submitted for archiving to the HST archive system
PDR_SMS_DIR directory
Science Mission Schedule files - submitted for archiving to the HST archive system
PDR_ORB_DIR directory
Orbital Ephemeris files - used as input to produce FITS and ASCII files containing the ephemeris data
and then submitted along with the FITS and ASCII files for archiving to the HST
archive system
PDR_HLD_DIR directory
PASS Auxiliary Data Files - held in this holding directory for use by operations personnel
After the PASS products have been processed, the PDR pipeline starts its clean-up phase, in which it deletes no
longer needed files and OSFs. The deletion of files is automatic; the deletion of OSFs, however, requires manual
intervention. The PDR pipeline process that deletes OSFs, i.e., the PDRDEL process
residing in the (DL) stage of the pipeline, is triggered by the manual insertion of a 'd' in the (DL) stage of the
pipeline for each class of OSF that is to be deleted.
PDR Pipeline Directories and Files
- OPUS_HOME_DIR
contains PDR Process Status Files (PSFs) and Process Log Files
- OPUS_HOME_DIR_LOCK
contains transient Lock Files used to avoid PSF collisions
- OPUS_OBSERVATIONS_DIR
contains PDR Observation Status Files (OSFs)
- OPUS_OBSERVATIONS_DIR_LOCK
contains transient Lock Files used to avoid OSF collisions
- OPUS_DEFINITIONS_DIR
contains Path files, Stage Files, Process Resource Files, OPUS Manager Data Files, and other configuration
related files:
pdr.path_template - defines directory mnemonics used by the PDR Pipeline.
Also defines mnemonics such as OPUS_DB, DSQUERY, and OK_TO_UPDATE_DATABASE for database access
null.path - defines directory mnemonics used by PDR processes when they are
being executed in an interactive mode.
PDR_pipeline.stage - defines the 11 stages in the PDR Pipeline; the processes
that execute within each stage; and the possible status values for each stage
- DR - Data Receipt Stage
- EP - Generate Ephemeris Products Stage
- RP - Check for Replan Stage
- UP - Update Science Database Tables Stage
- MS - Create Mission Schedule Database Tables Stage
- CT - Create Control Database Tables Stage
- NS - Create NICMOS SAA Database Tables Stage
- RQ - Archive Request Stage
- RS - Archive Response Stage
- CL - File Deletion/Moving Stage
- DL - OSF Deletion Stage
MSCCPY.RESOURCE - defines characteristics of the MSCCPY Process
MTLCPY.RESOURCE - defines characteristics of the MTLCPY Process
SMSCPY.RESOURCE - defines characteristics of the SMSCPY Process
ORBCPY.RESOURCE - defines characteristics of the ORBCPY Process
PASCPY.RESOURCE - defines characteristics of the PASCPY Process
PDRORB.RESOURCE - defines characteristics of the PDRORB Process
REPLAN.RESOURCE - defines characteristics of the REPLAN Process
UPDATR.RESOURCE - defines characteristics of the UPDATR Process
MSCXTR.RESOURCE - defines characteristics of the MSCXTR Process
CONTRL.RESOURCE - defines characteristics of the CONTRL Process
NICSAA.RESOURCE - defines characteristics of the NICSAA Process
PDRREQ.RESOURCE - defines characteristics of the PDRREQ Process
PDRRSP.RESOURCE - defines characteristics of the PDRRSP Process
MSCMOV.RESOURCE - defines characteristics of the MSCMOV Process
PDRCLN.RESOURCE - defines characteristics of the PDRCLN Process
PDRDEL.RESOURCE - defines characteristics of the PDRDEL Process
pmg_restrictions.dat - used to restrict the number of copies of certain processes that
the PMG is allowed to start up
opus.env - contains user-configurable parameters that establish the format of PSTAT and
OSF blackboard entries, which blackboard implementation to use, and the directory in which PSTAT
entries are stored for FILE blackboard implementations
opus_corba_objs - contains information needed when a CORBA Blackboard implementation is
being used for interprocess communication between the OPUS Managers and the processes in a pipeline
- PDR_PASS_DIR
contains the various products and their corresponding notification files received from the PASS system
- <calender id>_<sms id>_f[r].msc - PASS Mission Schedule files
- <calender id>_<sms id>_f[r].msc_trans - PASS Mission Schedule transfer
notification file before processing
- <calender id>_<sms id>_f[r].msc_done - PASS
Mission Schedule transfer notification file after processing
- <calender id>_<sms id>_f[r].msc_bad - PASS
Mission Schedule transfer notification file if error found during processing
- <calender id>_<sms id>_f[r].msc_duplicate - PASS
Mission Schedule transfer notification file if determined to be a duplicate
- <calender id>_<sms id>_f<n>.mtl - PASS Mission Timeline Report files
- <calender id>_<sms id>_f<n>.mtl_trans - PASS
Mission Timeline Report notification file before processing
- <calender id>_<sms id>_f<n>.mtl_done - PASS
Mission Timeline Report notification file after processing
- <calender id>_<sms id>_f<n>.mtl_bad - PASS
Mission Timeline Report notification file if error found during processing
- <calender id>_<sms id>_f<n>.mtl_duplicate - PASS
Mission Timeline Report notification file if determined to be a duplicate
- <calender id>_<sms id>_f.sms - Science Mission Schedule files
- <calender id>_<sms id>_f.sms_trans - Science Mission
Schedule transfer notification file before processing
- <calender id>_<sms id>_f.sms_done - Science Mission
Schedule transfer notification file after processing
- <calender id>_<sms id>_f.sms_bad - Science Mission
Schedule transfer notification file if error found during processing
- <calender id>_<sms id>_f.sms_duplicate - Science Mission
Schedule transfer notification file if determined to be a duplicate
- stdef_<date>.dat - Orbital Ephemeris file
- stdef_<date>.orb_trans - Orbital Ephemeris transfer notification file
before processing
- stdef_<date>.orb_done - Orbital Ephemeris transfer notification file
after processing
- stdef_<date>.orb_bad - Orbital Ephemeris transfer notification file
if error found during processing
- stdef_<date>.orb_duplicate - Orbital Ephemeris transfer notification file
if determined to be a duplicate
- <calender id>_<sms id>_f.<ext> - PASS Auxiliary Data files
- <calender id>_<sms id>_f.pas_trans - PASS Auxiliary Data
transfer notification file before processing
- <calender id>_<sms id>_f.pas_done - PASS Auxiliary Data
transfer notification file after processing
- <calender id>_<sms id>_f.pas_bad - PASS Auxiliary Data
transfer notification file if error found during processing
- <calender id>_<sms id>_f.pas_duplicate - PASS Auxiliary Data
transfer notification file if determined to be a duplicate
WHERE:
calender id = yydddlsv[q]
yy = 2 digit year
ddd = day of year
l = length in days
s = series identifier (a-z)
v = version number (0-9, a-z)
q = optional identifier for special calendars (e.g., r for replan, h for health and safety)
sms id = sadddypr
s = stands for SMS
a = ?
ddd = day of year
y = 1 digit year (j=1999, k=2000, l=2001, m=2002 ...)
p = ?
r = ?
[r] = optional retransmission sequence letter (a,b,c ...)
n = identifier to indicate Mission Timeline sequential time periods (1,2,3 ...)
ext = PASS Auxiliary Data File extension (otr, rul, rdo)
date = yyddd
yy = 2 digit year
ddd = day of year
EXAMPLES:
for year 2000 day 129, 7 day, series b, version 3:
001297b3_sa129k03_f.msc - PASS Mission Schedule file
001297b3_sa129k03_f.msc_trans - PASS Mission Schedule transfer notification file
001297b3_sa129k03_f.msc_done - processed PASS Mission Schedule transfer notification file
001297b3_sa129k03_f.msc_bad - in error PASS Mission Schedule transfer notification file
001297b3_sa129k03_f.msc_duplicate - duplicate PASS Mission Schedule transfer notification file
001297b3_sa129k03_fa.msc - PASS Mission Schedule file retransmission 'a'
001297b3_sa129k03_fa.msc_trans - PASS Mission Schedule transfer notification file retransmission 'a'
001297b3_sa129k03_fa.msc_done - processed PASS Mission Schedule transfer notification file retransmission 'a'
001297b3_sa129k03_f1.mtl - Mission Timeline Report file for time period 1
001297b3_sa129k03_f1.mtl_trans - Mission Timeline Report transfer notification file for time period 1
001297b3_sa129k03_f1.mtl_done - processed Mission Timeline Report transfer notification file for time period 1
001297b3_sa129k03_f2.mtl - Mission Timeline Report file for time period 2
001297b3_sa129k03_f2.mtl_trans - Mission Timeline Report transfer notification file for time period 2
001297b3_sa129k03_f2.mtl_done - processed Mission Timeline Report transfer notification file for time period 2
001297b3_sa129k03_f.sms - Science Mission Schedule file
001297b3_sa129k03_f.sms_trans - Science Mission Schedule transfer notification file
001297b3_sa129k03_f.sms_done - processed Science Mission Schedule transfer notification file
001297b3_sa129k03_f.otr - PASS Auxiliary Data file
001297b3_sa129k03_f.pas_trans - PASS Auxiliary Data transfer notification file
001297b3_sa129k03_f.pas_done - processed PASS Auxiliary Data transfer notification file
001297b3_sa129k03_f.rul - PASS Auxiliary Data file
001297b3_sa129k03_f.pas_trans - PASS Auxiliary Data transfer notification file
001297b3_sa129k03_f.rdo - PASS Auxiliary Data file
001297b3_sa129k03_f.pas_trans - PASS Auxiliary Data transfer notification file
stdef_00129.dat - Orbital Ephemeris file
stdef_00129.orb_trans - Orbital Ephemeris transfer notification file
stdef_00129.orb_done - processed Orbital Ephemeris transfer notification file
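The naming convention above can be captured with a regular expression. The sketch below is illustrative only, derived from the WHERE table rather than from actual PASS or OPUS code; it covers the calendar-id-based names (the stdef_<date> ephemeris names follow the simpler yyddd pattern).

```python
import re

# Fields follow the WHERE table above: calender id = yydddlsv[q],
# sms id = sadddypr, then _f plus an optional sequence character.
PASS_NAME = re.compile(
    r"^(?P<yy>\d{2})(?P<ddd>\d{3})(?P<length>\d)(?P<series>[a-z])(?P<version>[0-9a-z])"
    r"(?P<special>[a-z]?)"                       # e.g. r = replan, h = health and safety
    r"_(?P<sms_id>s[a-z]\d{3}[a-z][a-z0-9]{2})"
    r"_f(?P<seq>[0-9a-z]?)"                      # retransmission letter or MTL time period
    r"\.(?P<ext>[a-z_]+)$"                       # msc, mtl, sms, msc_trans, ...
)

parts = PASS_NAME.match("001297b3_sa129k03_f1.mtl").groupdict()
```

A name like 001297b3_sa129k03_f1.mtl thus decomposes into year 00, day 129, length 7, series b, version 3, SMS id sa129k03, and time period 1.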
- PDR_MSC_DIR
contains PASS Mission Schedule files received from PASS, moved here from the
PDR_PASS_DIR directory, and renamed. In the File Deletion/Moving
stage (CL) of the PDR pipeline, these files also get moved to the
PDR_HLD_DIR directory.
- u<Ymdhhmmx>.pod - PASS Mission Schedule POD files
WHERE:
Y = 1 digit year (j=1999, k=2000, l=2001, m=2002 ...)
m = 1 digit month (1 thru c, where 1=January and a=October)
d = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
hh = 2 digit hour (00 thru 23)
mm = 2 digit minute (00 thru 59)
x = file sequence id (0-9,a-z)
EXAMPLE for year 2002, month May, day 9, hour 0800, minute 25: um5908250.pod
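The renaming scheme can be sketched as a small encoder. This is an illustration based on the letter tables in this manual (assuming lowercase letters and j=1999 as the year base); pod_name is a hypothetical helper, not part of OPUS.

```python
# 0-9 then a-z: the digit/letter sequence implied by the WHERE tables above.
CHARS = "0123456789abcdefghijklmnopqrstuvwxyz"

def pod_name(prefix, year, month, day, hour, minute, seq):
    """Build a <prefix><Ymdhhmmx>.pod name; e.g. prefix 'u' for Mission Schedules."""
    y = CHARS[CHARS.index("j") + (year - 1999)]  # j=1999, k=2000, l=2001, m=2002 ...
    m = CHARS[month]                             # 1-9, a=October, b=November, c=December
    d = CHARS[day]                               # 1-9, a=10 ... v=31
    return f"{prefix}{y}{m}{d}{hour:02d}{minute:02d}{seq}.pod"
```

For example, pod_name("u", 2002, 5, 9, 8, 25, "0") yields um5908250.pod, matching the example above; the same encoding with prefixes v, y, and p applies to the MTL, SMS, and ORB directories below.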
- PDR_MTL_DIR
contains Mission Timeline Report files received from PASS, moved here from the
PDR_PASS_DIR directory, and renamed.
- v<Ymdhhmmx>.pod - Mission Timeline Report POD files
WHERE:
Y = 1 digit year (j=1999, k=2000, l=2001, m=2002 ...)
m = 1 digit month (1 thru c, where 1=January and a=October)
d = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
hh = 2 digit hour (00 thru 23)
mm = 2 digit minute (00 thru 59)
x = file sequence id (0-9,a-z)
EXAMPLE for year 2002, month May, day 9, hour 0800, minute 25: vm5908251.pod
- PDR_SMS_DIR
contains Science Mission Schedule files received from PASS, moved here from the
PDR_PASS_DIR directory, and renamed.
- y<Ymdhhmmx>.pod - Science Mission Schedule POD files
WHERE:
Y = 1 digit year (j=1999, k=2000, l=2001, m=2002 ...)
m = 1 digit month (1 thru c, where 1=January and a=October)
d = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
hh = 2 digit hour (00 thru 23)
mm = 2 digit minute (00 thru 59)
x = file sequence id (0-9,a-z)
EXAMPLE for year 2002, month May, day 9, hour 0800, minute 25: ym590825a.pod
- PDR_ORB_DIR
contains Orbital Ephemeris files received from PASS and moved here from the
PDR_PASS_DIR directory and renamed. Also contains products
produced from the Ephemeris file.
- p<Ymdhhmmr>.pod - Orbital Ephemeris POD file
- p<Ymdhhmmr>.fit - FITS format Ephemeris table file
- p<Ymdhhmmr>.asc - ASCII format Ephemeris table file
WHERE:
Y = 1 digit year (j=1999, k=2000, l=2001, m=2002 ...)
m = 1 digit month (1 thru c, where 1=January and a=October)
d = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
hh = 2 digit hour (00 thru 23)
mm = 2 digit minute (00 thru 59)
r = literal file sequence id; always 'r' for Orbital Ephemeris files
EXAMPLES for year 2002, month May, day 9, hour 0800, minute 25:
pm590825r.pod - POD file
pm590825r.fit - FITS product
pm590825r.asc - ASCII product
- PDR_HLD_DIR
contains PASS Auxiliary Data files moved here from the PDR_PASS_DIR
directory and PASS Mission Schedule POD files moved here from the
PDR_MSC_DIR directory.
- 001297b3_sa129k03_f.otr - example PASS Auxiliary Data file
- 001297b3_sa129k03_f.rul - example PASS Auxiliary Data file
- 001297b3_sa129k03_f.rdo - example PASS Auxiliary Data file
- um590825x.pod - example PASS Mission Schedule files
- PDR_AREQ_DIR
contains requests to archive PASS related POD files and product files
- YYYYMMDD_HHMMSS_iymdhhmmx_zzz.areq - request to archive PASS related files
WHERE:
YYYY = year of archive request file generation
MM = month of archive request file generation
DD = day of archive request file generation
HH = hour of archive request file generation
MM = minute of archive request file generation
SS = seconds of archive request file generation
i = file type identifier (u=PASS Mission Schedule file, v=Mission Timeline Report file,
y=Science Mission Schedule file, p=Orbital Ephemeris file)
y = 1 digit year (j=1999, k=2000, l=2001, m=2002 ...)
m = 1 digit month (1 thru c, where 1=January and a=October)
d = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
hh = 2 digit hour (00 thru 23)
mm = 2 digit minute (00 thru 59)
x = file sequence id (0-9, a-z); for Orbital Ephemeris files the value is always 'r'
zzz = file type identifier (msc=PASS Mission Schedule file, mtl=Mission Timeline Report file,
sms=Science Mission Schedule file, orb=Orbital Ephemeris file)
EXAMPLES: 20021212_082558_pm590825r_orb.areq - archive request file for ephemeris products
20021212_082558_um590825a_msc.areq - archive request file for PASS Mission Schedule
20021212_082558_ym590825b_sms.areq - archive request file for Science Mission Schedule
20021212_082558_vm590825c_mtl.areq - archive request file for Mission Timeline Report
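The archive-request names can be decomposed mechanically. The following is an illustrative sketch built from the WHERE table above, not the actual PDRREQ code.

```python
import re

# YYYYMMDD_HHMMSS generation stamp, then the product's iymdhhmmx name,
# then the three-letter type code, per the WHERE table above.
AREQ_NAME = re.compile(
    r"^(?P<stamp>\d{8}_\d{6})"
    r"_(?P<product>[uvyp][0-9a-z]{3}\d{4}[0-9a-z])"
    r"_(?P<type>msc|mtl|sms|orb)\.areq$"
)

# Leading character of the product field identifies the product type.
PREFIX_TYPES = {
    "u": "PASS Mission Schedule",
    "v": "Mission Timeline Report",
    "y": "Science Mission Schedule",
    "p": "Orbital Ephemeris",
}
```

The same pattern with the .arsp extension applies to the archive-response names in the PDR_ARSP_DIR directory below.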
- PDR_ARSP_DIR
contains responses to the request to archive PASS related POD files and product files
- YYYYMMDD_HHMMSS_iymdhhmmx_zzz.arsp - responses to requests to archive PASS related files
WHERE:
YYYY = year of archive request file generation
MM = month of archive request file generation
DD = day of archive request file generation
HH = hour of archive request file generation
MM = minute of archive request file generation
SS = seconds of archive request file generation
i = file type identifier (u=PASS Mission Schedule file, v=Mission Timeline Report file,
y=Science Mission Schedule file, p=Orbital Ephemeris file)
y = 1 digit year (j=1999, k=2000, l=2001, m=2002 ...)
m = 1 digit month (1 thru c, where 1=January and a=October)
d = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
hh = 2 digit hour (00 thru 23)
mm = 2 digit minute (00 thru 59)
x = file sequence id (0-9, a-z); for Orbital Ephemeris files the value is always 'r'
zzz = file type identifier (msc=PASS Mission Schedule file, mtl=Mission Timeline Report file,
sms=Science Mission Schedule file, orb=Orbital Ephemeris file)
EXAMPLES: 20021212_082558_pm590825r_orb.arsp - archive response file for ephemeris products
20021212_082558_um5908255_msc.arsp - archive response file for PASS Mission Schedule
20021212_082558_ym5908256_sms.arsp - archive response file for Science Mission Schedule
20021212_082558_vm5908257_mtl.arsp - archive response file for Mission Timeline Report
- PDR_LOG_DIR
contains a log file written to by the PDRREQ and PDRRSP processes to indicate the disposition of an archive
request and its corresponding response
- iymdhhmmx_zzz.log - disposition of PASS related files archive request/response
WHERE:
i = file type identifier (u=PASS Mission Schedule file, v=Mission Timeline Report file,
y=Science Mission Schedule file, p=Orbital Ephemeris file)
y = 1 digit year (j=1999, k=2000, l=2001, m=2002 ...)
m = 1 digit month (1 thru c, where 1=January and a=October)
d = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
hh = 2 digit hour (00 thru 23)
mm = 2 digit minute (00 thru 59)
x = file sequence id (0-9, a-z); for Orbital Ephemeris files the value is always 'r'
zzz = file type identifier (msc=PASS Mission Schedule file, mtl=Mission Timeline Report file,
sms=Science Mission Schedule file, orb=Orbital Ephemeris file)
EXAMPLES: pm590825r_orb.log - archive log file for ephemeris products
um5908250_msc.log - archive log for PASS mission schedule
ym5908251_sms.log - archive log file for Science mission schedule
vm5908252_mtl.log - archive log file for mission timeline
PDR Pipeline Processes
PDR Pipeline Processes Overview
The PDR Pipeline is composed of the following 16 processes:
- MSCCPY - polls the PDR_PASS_DIR directory
for PASS Mission Schedule Product Transfer Notification Message files from the PASS system; creates 'msc'
class OSFs for each file listed in the notification message files; and moves the files listed in the
notification message files to the PDR_MSC_DIR directory. When the files are
moved they are renamed.
- MTLCPY - polls the PDR_PASS_DIR directory
for PASS Mission Timeline Report Transfer Notification Message files from the PASS system; creates 'mtl'
class OSFs for each file listed in the notification message files; and moves the files listed in the
notification message files to the PDR_MTL_DIR directory. When the files are
moved they are renamed.
- SMSCPY - polls the PDR_PASS_DIR directory
for PASS Science Mission Schedule Product Transfer Notification Message files from the PASS system; creates
'sms' class OSFs for each file listed in the notification message files; and moves the files listed in the
notification message files to the PDR_SMS_DIR directory. When the files are
moved they are renamed.
- ORBCPY - polls the PDR_PASS_DIR directory
for PASS Orbital Ephemeris Product Transfer Notification Message files from the PASS system; creates 'orb'
class OSFs for each file listed in the notification message files; and moves the files listed in the
notification message files to the PDR_ORB_DIR directory. When the files are
moved they are renamed.
- PASCPY - polls the PDR_PASS_DIR directory
for PASS Auxiliary Data Transfer Notification Message files from the PASS system; creates 'pas'
class OSFs for each file listed in the notification message files; and moves the files listed in the
notification message files to the PDR_HLD_DIR directory.
- PDRORB - creates FITS and ASCII format Definitive Ephemeris
Files using the data provided in the Orbital Ephemeris file received from the PASS system in the
PDR_PASS_DIR directory.
- REPLAN - examines Mission Schedule pod files in the PDR_MSC_DIR
directory to determine when Mission Schedule Re-plans occur and removes the information in the database that is
superseded by the Re-plan.
- UPDATR - uses information provided in Mission Schedule pod files in the
PDR_MSC_DIR directory to update Support Schedule information in various database
relations.
- MSCXTR - parses Mission Schedule pod files in the PDR_MSC_DIR
directory to extract desired observation related information and populates various database tables with the information.
The information is used by other EDPS pipelines; i.e., the FOF, FGS, and AST pipelines in the
generation of their respective output products.
- CONTRL - uses Mission Schedule information contained in previously
populated database relations to populate other relations that are used to dictate/control the specific
processing performed by the EDPS FOF, FGS, and AST pipelines.
- NICSAA - uses Mission Schedule information contained in previously
populated database relations to populate other relations containing information about NICMOS exposures
and SAA dark associations, and NICMOS associations and SAA dark exposures.
- PDRREQ - generates requests to archive the various products received from the PASS
System and the products produced by the PDR pipeline itself.
- PDRRSP - processes responses to archive requests.
- MSCMOV - moves Mission Schedule pod files from the PDR_MSC_DIR directory
to the PDR_HLD_DIR directory to keep them around until operations personnel deem them no
longer needed.
- PDRCLN - deletes mtl and sms class pod files from the PDR_MTL_DIR and
PDR_SMS_DIR directories respectively, and deletes orb pod files and orb
FITS and ASCII products from the PDR_ORB_DIR directory.
- PDRDEL - deletes 'msc', 'mtl', 'sms', 'orb', and 'pas' class OSFs from
the pipeline after their associated processing has completed.
PDR Pipeline Processes Details
MSCCPY Process details
- MSCCPY Process Description
- This process is one of five "copy" processes that reside in
the (DR) stage of the PDR Pipeline. It is a file-triggered process whose purpose is to poll the
PDR_PASS_DIR directory for PASS Mission Schedule Product Transfer Notification
Message files from the PASS system. Once a transfer notification message file is received, MSCCPY will
verify that all of the PASS Mission Schedule files listed in the notification file are present on disk; if
any are missing, the notification file will be renamed to indicate an error. If all specified files are
present, MSCCPY will check whether an 'msc' class OSF already exists for any of the specified files. If OSFs
already exist, they will be marked as duplicates and the transfer notification file will be renamed as a
duplicate. If they don't already exist, 'msc' class OSFs will be created for each file specified in the
transfer notification file and each of the specified files will be moved to the PDR_MSC_DIR
directory and renamed. Lastly, a record will be inserted into the database file_times
relation for each of the specified files, and their corresponding OSFs will be set to trigger the REPLAN
process.
- MSCCPY Process Triggers
Input Trigger:
MSCCPY is a file poller process that is triggered by the appearance of PASS Mission Schedule Product
Transfer Notification Message files in the PDR_PASS_DIR directory
Output Triggers:
MSCCPY triggers the REPLAN process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ w _ _ _ _ _ _ _ _ msc
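The trigger diagram reads column-by-column against the 11 pipeline stages; a small illustrative helper (not part of OPUS) makes the mapping explicit:

```python
# The 11 PDR stages, in the column order used by the trigger diagrams in this manual.
STAGES = ["DR", "EP", "RP", "UP", "MS", "CT", "NS", "RQ", "RS", "CL", "DL"]

def read_osf_row(row):
    """Map a diagram row like 'c _ w _ _ _ _ _ _ _ _' onto stage names,
    dropping the '_' placeholders for untouched stages."""
    return {stage: mark for stage, mark in zip(STAGES, row.split()) if mark != "_"}
```

The row above thus says: DR stage complete ('c'), RP stage waiting to be triggered ('w'), all other stages untouched.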
MSCCPY Process I/O - Input to and Output from the MSCCPY process is as follows:
INPUT:
PDR_PASS_DIR:
<calender id>_<sms id>_f[r].msc - PASS Mission
Schedule files from PASS
<calender id>_<sms id>_f[r].msc_trans - PASS
Mission Schedule transfer notification files from PASS
OUTPUT:
OPUS_OBSERVATIONS_DIR:
'msc' class OSFs for PASS Mission Schedule files
PDR_PASS_DIR:
<calender id>_<sms id>_f[r].msc_done - processed PASS Mission
Schedule transfer notification files from PASS
<calender id>_<sms id>_f[r].msc_bad - PASS Mission
Schedule transfer notification files found to be in error
<calender id>_<sms id>_f[r].msc_duplicate - PASS
Mission Schedule transfer notification files found to be duplicates
PDR_MSC_DIR:
u<Ymdhhmmx>.pod - PASS Mission Schedule files received from PASS and moved
here from the PDR_PASS_DIR directory and renamed
MSCCPY Process Modes - The MSCCPY process needs to make database inserts and thus should be run
only in pipeline mode, as interactive mode typically prevents database inserts.
Pipeline Mode:
pdrcpy -p opus_definitions_dir:your.path -r msccpy (in task line of resource file)
where:
-p = denotes path file specification follows
-r = denotes resource file for the MSCCPY Process
opus_definitions_dir:your.path = path file to use
MSCCPY Process Resource File
!--------------------------------------------------------------------
!
! MSCCPY RESOURCE FILE
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! MOD PR
! LEVEL DATE NUMBER User Description
! ----- -------- ------ ------ -------------------------------------
! 000 11/02/99 39733 Ken S. Created
! 001 10/24/01 44684 Goldst Standardized OPUS PDR pipeline version
! 002 01/30/02 45016 Goldst Corrected OK_TO_UPDATE_DATABASE
! 003 07/10/02 46101 J.Baum inserted .DANGLE for file status
! 004 08/09/02 46352 Goldst Add OSF_PROCESSING keyword for cleandata
! 005 10/04/02 46352 Goldst Corrected PASS_COMPLETE stage and comment
!--------------------------------------------------------------------
PROCESS_NAME = msccpy
TASK = <pdrcpy -p $PATH_FILE -r msccpy>
DESCRIPTION = 'MSC product notification message poller'
SYSTEM = PDR
CLASS = msc
DISPLAY_ORDER = 1
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
INTERNAL_POLLING_PROCESS = TRUE
FILE_RANK = 1 ! First Trigger
FILE_OBJECT1 = *.msc_trans ! File specification for searches
FILE_DIRECTORY1 = PDR_PASS_DIR ! Polling directory
!File status
FILE_PROCESSING.DANGLE = _proc ! Extension addition during processing
FILE_SUCCESS.DANGLE = _done ! Extension addition if normal processing
FILE_ERROR.DANGLE = _bad ! Extension addition if error
FILE_DUPLICATE.DANGLE = _duplicate ! Extension addition if duplicate
!OSF status
PAS_CREATE.DR = p !PAS file procesing.
PAS_COMPLETE.DR = c !PAS file completed
PAS_COMPLETE.RP = w !Trigger for replan processing
PAS_DUPLICATE.DR = d !PAS file duplicate detected.
PAS_FAIL.DR = e !PAS file processing failed.
OSF_PROCESSING.DR = p !Needed for cleandata processing
POLLING_TIME = 10 ! Wait (seconds) before polling for next
INPATH = PDR_PASS_DIR ! location of pass data files.
OUTPATH = PDR_MSC_DIR ! destination for SMS files
MINBLOCKS = 50000 ! blocks required on output disk
EXTENSION = .msc_trans ! MSC input file extension
PREFIX = u ! Pass file prefix
RENAME = Y ! Rename PASS file.
TIMES = Y ! Update files_times relation.
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
MSCCPY Process accessed database relations
MSCCPY accessed Relations
MSCCPY Process database queries
MSCCPY Process Queries
MTLCPY Process details
- MTLCPY Process Description
- This process is one of five "copy" processes that reside in
the (DR) stage of the PDR Pipeline. It is a file-triggered process whose purpose is to poll the
PDR_PASS_DIR directory for PASS Mission Timeline Report Transfer Notification
Message files from the PASS system. Once a transfer notification message file is received, MTLCPY will
verify that all of the PASS Mission Timeline Report files listed in the notification file are present on disk;
if any are missing, the notification file will be renamed to indicate an error. If all specified files are
present, MTLCPY will check whether an 'mtl' class OSF already exists for any of the specified files. If OSFs
already exist, they will be marked as duplicates and the transfer notification file will be renamed as a
duplicate. If they don't already exist, 'mtl' class OSFs will be created for each file specified in the
transfer notification file and each of the specified files will be moved to the PDR_MTL_DIR
directory and renamed in preparation for archiving. Lastly, a record will be inserted into the database
file_times relation for each of the specified files, and their corresponding
OSFs will be set to trigger the PDRREQ process.
- MTLCPY Process Triggers
Input Trigger:
MTLCPY is a file poller process that is triggered by the appearance of PASS Mission Timeline Report
Transfer Notification Message files in the PDR_PASS_DIR directory
Output Triggers:
MTLCPY triggers the PDRREQ process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ _ _ _ _ _ w _ _ _ mtl
MTLCPY Process I/O - Input to and Output from the MTLCPY process is as follows:
INPUT:
PDR_PASS_DIR:
<calendar id>_<sms id>_f<n>.mtl - Mission
Timeline Report files from PASS
<calendar id>_<sms id>_f<n>.mtl_trans - Mission
Timeline Report transfer notification files from PASS
OUTPUT:
OPUS_OBSERVATIONS_DIR:
'mtl' class OSFs for Mission Timeline Report files
PDR_PASS_DIR:
<calendar id>_<sms id>_f<n>.mtl_done - processed
Mission Timeline Report transfer notification files from PASS
<calendar id>_<sms id>_f<n>.mtl_bad - Mission
Timeline Report transfer notification files found to be in error
<calendar id>_<sms id>_f<n>.mtl_duplicate - Mission
Timeline Report transfer notification files found to be duplicates
PDR_MTL_DIR:
v<Ymdhhmmx>.pod - Mission Timeline Report files received from PASS and moved
here from the PDR_PASS_DIR directory and renamed
MTLCPY Process Modes - The MTLCPY process needs to make database inserts and thus should be run
only in the pipeline mode, since interactive mode typically prevents database inserts.
Pipeline Mode:
pdrcpy -p opus_definitions_dir:your.path -r mtlcpy (in task line of resource file)
where:
-p = denotes path file specification follows
-r = denotes resource file for the MTLCPY Process
opus_definitions_dir:your.path = path file to use
MTLCPY Process Resource File
!--------------------------------------------------------------------
!
! MTLCPY RESOURCE FILE
!
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!
!---------------------------------------------------------------------
! REVISION HISTORY
!---------------------------------------------------------------------
! MOD PR
! LEVEL DATE NUMBER User Description
! ----- -------- ------ ------ --------------------------------------
! 000 11/02/99 39733 Ken S. Created
! 001 10/24/01 44684 Goldst Standardized OPUS PDR pipeline version
! 002 01/30/02 45016 Goldst Corrected OK_TO_UPDATE_DATABASE
! 003 07/10/02 46101 J.Baum inserted .DANGLE for file status
! 004 08/09/02 46352 Goldst Add OSF_PROCESSING keyword for cleandata
!---------------------------------------------------------------------
PROCESS_NAME = mtlcpy
TASK = <pdrcpy -p $PATH_FILE -r mtlcpy>
DESCRIPTION = 'MTL product notification message poller'
SYSTEM = PDR
CLASS = mtl
DISPLAY_ORDER = 1
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
INTERNAL_POLLING_PROCESS = TRUE
FILE_RANK = 1 ! First Trigger
FILE_OBJECT1 = *.mtl_trans ! File specification for searches
FILE_DIRECTORY1 = PDR_PASS_DIR ! Polling directory
!File status
FILE_PROCESSING.DANGLE = _proc ! Extension addition during processing
FILE_SUCCESS.DANGLE = _done ! Extension addition if normal processing
FILE_ERROR.DANGLE = _bad ! Extension addition if error
FILE_DUPLICATE.DANGLE = _duplicate ! Extension addition if duplicate
!OSF status
PAS_CREATE.DR = p !PAS file processing.
PAS_COMPLETE.DR = c !PAS file completed
PAS_COMPLETE.RQ = w !Trigger for request process
PAS_DUPLICATE.DR = d !PAS file duplicate detected.
PAS_FAIL.DR = e !PAS file processing failed.
OSF_PROCESSING.DR = p !Needed for cleandata processing
POLLING_TIME = 10 ! Wait (seconds) before polling for next
INPATH = PDR_PASS_DIR ! location of pass data files.
OUTPATH = PDR_MTL_DIR ! destination for MTL files
MINBLOCKS = 50000 ! blocks required on output disk
EXTENSION = .mtl_trans ! MTL input file extension
PREFIX = v ! pass file prefix
RENAME = Y ! Rename PASS file.
TIMES = Y ! Update files_times relation.
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
MTLCPY Process accessed database relations
MTLCPY accessed Relations
MTLCPY Process database queries
MTLCPY Process Queries
SMSCPY Process details
- SMSCPY Process Description
- This process is one of five "copy" processes that reside in
the (DR) stage of the PDR Pipeline. It is a file triggered process and its purpose is to poll the
PDR_PASS_DIR directory for Science Mission Schedule Product Transfer Notification
Message files from the PASS system. Once a transfer notification message file is received, SMSCPY will
verify that all of the Science Mission Schedule files listed in the notification file are present on disk; if
any are missing, the notification file will be renamed to indicate an error. If all specified files are
present, SMSCPY will check whether an 'sms' class OSF already exists for any of the specified files. If OSFs
already exist, they will be marked as duplicates and the transfer notification file will be renamed as a
duplicate. If they do not already exist, 'sms' class OSFs will be created for each file specified in the
transfer notification file and each of the specified files will be moved to the PDR_SMS_DIR
directory and renamed in preparation for archiving. Lastly, a record will be inserted into the database
file_times relation for each of the specified files, and their corresponding OSFs
will be set to trigger the PDRREQ process.
- SMSCPY Process Triggers
Input Trigger:
SMSCPY is a file poller process that is triggered by the appearance of Science Mission Schedule Product
Transfer Notification Message files in the PDR_PASS_DIR directory
Output Triggers:
SMSCPY triggers the PDRREQ process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ _ _ _ _ _ w _ _ _ sms
SMSCPY Process I/O - Input to and Output from the SMSCPY process is as follows:
INPUT:
PDR_PASS_DIR:
<calendar id>_<sms id>_f.sms - Science Mission Schedule files from PASS
<calendar id>_<sms id>_f.sms_trans - Science Mission Schedule transfer
notification files from PASS
OUTPUT:
OPUS_OBSERVATIONS_DIR:
'sms' class OSFs for Science Mission Schedule files
PDR_PASS_DIR:
<calendar id>_<sms id>_f.sms_done - processed Science Mission Schedule
transfer notification files from PASS
<calendar id>_<sms id>_f.sms_bad - Science Mission Schedule
transfer notification files found to be in error
<calendar id>_<sms id>_f.sms_duplicate - Science Mission Schedule
transfer notification files found to be duplicates
PDR_SMS_DIR:
y<Ymdhhmmx>.pod - Science Mission Schedule files received from PASS and moved
here from the PDR_PASS_DIR directory and renamed
SMSCPY Process Modes - The SMSCPY process needs to make database inserts and thus should be run
only in the pipeline mode, since interactive mode typically prevents database inserts.
Pipeline Mode:
pdrcpy -p opus_definitions_dir:your.path -r smscpy (in task line of resource file)
where:
-p = denotes path file specification follows
-r = denotes resource file for the SMSCPY Process
opus_definitions_dir:your.path = path file to use
SMSCPY Process Resource File
!--------------------------------------------------------------------
!
! SMSCPY RESOURCE FILE
!
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!
!---------------------------------------------------------------------
! REVISION HISTORY
!---------------------------------------------------------------------
! MOD PR
! LEVEL DATE NUMBER User Description
! ----- -------- ------ ------ --------------------------------------
! 000 11/02/99 39733 Ken S. Created
! 001 10/24/01 44684 Goldst Standardized OPUS PDR pipeline version
! 002 01/30/02 45016 Goldst Corrected OK_TO_UPDATE_DATABASE
! 003 07/10/02 46101 J.Baum inserted .DANGLE for file status
! 004 08/09/02 46352 Goldst Add OSF_PROCESSING keyword for cleandata
!---------------------------------------------------------------------
PROCESS_NAME = smscpy
TASK = <pdrcpy -p $PATH_FILE -r smscpy>
DESCRIPTION = 'SMS product notification message poller'
SYSTEM = PDR
CLASS = sms
DISPLAY_ORDER = 1
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
INTERNAL_POLLING_PROCESS = TRUE
FILE_RANK = 1 ! First Trigger
FILE_OBJECT1 = *.sms_trans ! File specification for searches
FILE_DIRECTORY1 = PDR_PASS_DIR ! Polling directory
!File status
FILE_PROCESSING.DANGLE = _proc ! Extension addition during processing
FILE_SUCCESS.DANGLE = _done ! Extension addition if normal processing
FILE_ERROR.DANGLE = _bad ! Extension addition if error
FILE_DUPLICATE.DANGLE = _duplicate ! Extension addition if duplicate
!OSF status
PAS_CREATE.DR = p !PAS file processing.
PAS_COMPLETE.DR = c !PAS file completed
PAS_COMPLETE.RQ = w !Trigger for request process
PAS_DUPLICATE.DR = d !PAS file duplicate detected.
PAS_FAIL.DR = e !PAS file processing failed.
OSF_PROCESSING.DR = p !Needed for cleandata processing
POLLING_TIME = 10 ! Wait (seconds) before polling for next
INPATH = PDR_PASS_DIR ! location of pass data files.
OUTPATH = PDR_SMS_DIR ! destination for SMS files
MINBLOCKS = 50000 ! blocks required on output disk
EXTENSION = .sms_trans ! SMS input file extension
PREFIX = y ! PASS file prefix
RENAME = Y ! Rename PASS file.
TIMES = Y ! Update files_times relation.
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
SMSCPY Process accessed database relations
SMSCPY accessed Relations
SMSCPY Process database queries
SMSCPY Process Queries
ORBCPY Process details
- ORBCPY Process Description
- This process is one of five "copy" processes that reside in
the (DR) stage of the PDR Pipeline. It is a file triggered process and its purpose is to poll the
PDR_PASS_DIR directory for Orbital Ephemeris File Product Transfer Notification
Message files from the PASS system. Once a transfer notification message file is received, ORBCPY will
verify that all of the Orbital Ephemeris files listed in the notification file are present on disk; if
any are missing, the notification file will be renamed to indicate an error. If all specified files are
present, ORBCPY will check whether an 'orb' class OSF already exists for any of the specified files. If OSFs
already exist, they will be marked as duplicates and the transfer notification file will be renamed as a
duplicate. If they do not already exist, 'orb' class OSFs will be created for each file specified in the
transfer notification file and each of the specified files will be moved to the PDR_ORB_DIR
directory and renamed. Lastly, a record will be inserted into the database file_times
relation for each of the specified files, and their corresponding OSFs will be set to trigger the PDRORB process.
- ORBCPY Process Triggers
Input Trigger:
ORBCPY is a file poller process that is triggered by the appearance of Orbital Ephemeris File Product
Transfer Notification Message files in the PDR_PASS_DIR directory
Output Triggers:
ORBCPY triggers the PDRORB process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c w _ _ _ _ _ _ _ _ _ orb
ORBCPY Process I/O - Input to and Output from the ORBCPY process is as follows:
INPUT:
PDR_PASS_DIR:
stdef_<date>.dat - Orbital Ephemeris files
stdef_<date>.orb_trans - Orbital Ephemeris transfer notification file
before processing
OUTPUT:
OPUS_OBSERVATIONS_DIR:
'orb' class OSFs for Orbital Ephemeris files
PDR_PASS_DIR:
stdef_<date>.orb_done - processed Orbital Ephemeris transfer notification files
stdef_<date>.orb_bad - Orbital Ephemeris transfer notification files found
to be in error
stdef_<date>.orb_duplicate - Orbital Ephemeris transfer notification files
found to be duplicates
PDR_ORB_DIR:
p<Ymdhhmmr>.pod - Orbital Ephemeris files received from PASS and moved
here from the PDR_PASS_DIR directory and renamed
ORBCPY Process Modes - The ORBCPY process needs to make database inserts and thus should be run
only in the pipeline mode, since interactive mode typically prevents database inserts.
Pipeline Mode:
pdrcpy -p opus_definitions_dir:your.path -r orbcpy (in task line of resource file)
where:
-p = denotes path file specification follows
-r = denotes resource file for the ORBCPY Process
opus_definitions_dir:your.path = path file to use
ORBCPY Process Resource File
!--------------------------------------------------------------------
!
! ORBCPY RESOURCE FILE
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!---------------------------------------------------------------------
! REVISION HISTORY
!---------------------------------------------------------------------
! MOD PR
! LEVEL DATE NUMBER User Description
! ----- -------- ------ ------ --------------------------------------
! 000 11/02/99 39733 Ken S. Created
! 001 10/24/01 44684 Goldst Standardized OPUS PDR pipeline version
! 002 01/30/02 45016 Goldst Corrected OK_TO_UPDATE_DATABASE
! 003 07/10/02 46101 J.Baum inserted .DANGLE for file status
! 004 08/09/02 46352 Goldst Add OSF_PROCESSING keyword for cleandata
! 005 08/26/02 46468 J.Baum Fix PAS_COMPLETE stage
!---------------------------------------------------------------------
PROCESS_NAME = orbcpy
TASK = <pdrcpy -p $PATH_FILE -r orbcpy>
DESCRIPTION = 'ORB product notification message poller'
SYSTEM = PDR
CLASS = orb
DISPLAY_ORDER = 1
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
INTERNAL_POLLING_PROCESS = TRUE
FILE_RANK = 1 ! First Trigger
FILE_OBJECT1 = *.orb_trans ! File specification for searches
FILE_DIRECTORY1 = PDR_PASS_DIR ! Polling directory
!File status
FILE_PROCESSING.DANGLE = _proc ! Extension addition during processing
FILE_SUCCESS.DANGLE = _done ! Extension addition if normal processing
FILE_ERROR.DANGLE = _bad ! Extension addition if error
FILE_DUPLICATE.DANGLE = _duplicate ! Extension addition if duplicate
!OSF status
PAS_CREATE.DR = p !PAS file processing.
PAS_COMPLETE.DR = c !PAS file completed
PAS_COMPLETE.EP = w !Trigger for ephemeris processing
PAS_DUPLICATE.DR = d !PAS file duplicate detected.
PAS_FAIL.DR = e !PAS file processing failed.
OSF_PROCESSING.DR = p !Needed for cleandata processing
POLLING_TIME = 10 ! Wait (seconds) before polling for next
INPATH = PDR_PASS_DIR ! location of pass data files.
OUTPATH = PDR_ORB_DIR ! destination for ORB files
MINBLOCKS = 50000 ! blocks required on output disk
EXTENSION = .orb_trans ! ORB input file extension
PREFIX = p ! Prefix
RENAME = Y ! Rename PASS file.
TIMES = Y ! Update files_times relation.
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
ORBCPY Process accessed database relations
ORBCPY accessed Relations
ORBCPY Process database queries
ORBCPY Process Queries
PASCPY Process details
- PASCPY Process Description
- This process is one of five "copy" processes that reside in
the (DR) stage of the PDR Pipeline. It is a file triggered process and its purpose is to poll the
PDR_PASS_DIR directory for Auxiliary Data Product Transfer Notification
Message files from the PASS system. Once a transfer notification message file is received, PASCPY will
verify that all of the auxiliary data files listed in the notification file are present on disk; if
any are missing, the notification file will be renamed to indicate an error. If all specified files are
present, PASCPY will check whether a 'pas' class OSF already exists for any of the specified files. If OSFs
already exist, they will be marked as duplicates and the transfer notification file will be renamed as a
duplicate. If they do not already exist, 'pas' class OSFs will be created for each file specified in the
transfer notification file and each of the specified files will be moved to the
PDR_HLD_DIR directory.
- PASCPY Process Triggers
Input Trigger:
PASCPY is a file poller process that is triggered by the appearance of PASS Auxiliary Data Product
Transfer Notification Message files in the PDR_PASS_DIR directory
Output Triggers:
PASCPY puts a 'c' in the PDRDEL (DL) stage. For other EDPS pipelines, a 'c' in the
(DL) stage triggers the process residing in that stage to delete OSFs that are no longer needed.
In the PDR pipeline, however, the process (PDRDEL) residing in the (DL) stage is actually
triggered by a 'd' in the (DL) stage. The insertion of a 'd' in the (DL) stage is not
done automatically; a manual insertion by operations personnel is required.
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ _ _ _ _ _ _ _ _ c pas
PASCPY Process I/O - Input to and Output from the PASCPY process is as follows:
INPUT:
PDR_PASS_DIR:
<calendar id>_<sms id>_f.<ext> - PASS Auxiliary Data files
<calendar id>_<sms id>_f.pas_trans - PASS Auxiliary Data transfer notification files
OUTPUT:
OPUS_OBSERVATIONS_DIR:
'pas' class OSFs for PASS Auxiliary Data files
PDR_PASS_DIR:
<calendar id>_<sms id>_f.pas_done - processed PASS Auxiliary Data transfer notification files
<calendar id>_<sms id>_f.pas_bad - PASS Auxiliary Data transfer
notification files found to be in error
<calendar id>_<sms id>_f.pas_duplicate - PASS Auxiliary Data
transfer notification files found to be duplicates
PDR_HLD_DIR:
<calendar id>_<sms id>_f.<ext> - PASS Auxiliary Data files
moved here from the PDR_PASS_DIR directory.
PASCPY Process Modes - As with the other PDR pipeline "copy" processes, the PASCPY process should be run
only in the pipeline mode.
Pipeline Mode:
pdrcpy -p opus_definitions_dir:your.path -r pascpy (in task line of resource file)
where:
-p = denotes path file specification follows
-r = denotes resource file for the PASCPY Process
opus_definitions_dir:your.path = path file to use
PASCPY Process Resource File
!--------------------------------------------------------------------
!
! PASCPY RESOURCE FILE
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! MOD PR
! LEVEL DATE NUMBER User Description
! ----- -------- ------ ------ -------------------------------------
! 000 11/02/99 39733 Ken S. Created
! 001 10/24/01 44684 Standardized OPUS PDR pipeline version
! 002 01/30/02 45016 Goldst Corrected OK_TO_UPDATE_DATABASE
! 003 07/10/02 46101 J.Baum inserted .DANGLE for file status
!--------------------------------------------------------------------
PROCESS_NAME = pascpy
TASK = <pdrcpy -p $PATH_FILE -r pascpy>
DESCRIPTION = 'PAS product notification message poller'
SYSTEM = PDR
CLASS = pas
DISPLAY_ORDER = 1
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
INTERNAL_POLLING_PROCESS = TRUE
FILE_RANK = 1 ! First Trigger
FILE_OBJECT1 = *.pas_trans ! File specification for searches
FILE_DIRECTORY1 = PDR_PASS_DIR ! Polling directory
!File status
FILE_PROCESSING.DANGLE = _proc ! Extension addition during processing
FILE_SUCCESS.DANGLE = _done ! Extension addition if normal processing
FILE_ERROR.DANGLE = _bad ! Extension addition if error
FILE_DUPLICATE.DANGLE = _duplicate ! Extension addition if duplicate
!OSF status
PAS_CREATE.DR = p !PAS file processing.
PAS_COMPLETE.DR = c !PAS file completed
PAS_COMPLETE.DL = c !Trigger for osf_delete process
PAS_DUPLICATE.DR = d !PAS file duplicate detected.
PAS_FAIL.DR = e !PAS file processing failed.
POLLING_TIME = 10 ! Wait (seconds) before polling for next
INPATH = PDR_PASS_DIR ! location of pass data files.
OUTPATH = PDR_HLD_DIR ! Holding Tank directory.
MINBLOCKS = 50000 ! blocks required on output disk
EXTENSION = .pas_trans ! PAS input file extension
PREFIX = N ! PASS file prefix
RENAME = N ! Rename PASS file.
TIMES = N ! Update files_times relation.
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
PDRORB Process details
- PDRORB Process Description
- This process resides in the (EP) stage of the PDR Pipeline.
It is an OSF triggered process and its purpose is to create FITS and ASCII format Definitive Ephemeris
Files for archiving by using data provided in the Orbital Ephemeris file received from the PASS system
(and moved by ORBCPY to the PDR_ORB_DIR directory) together with data loaded in from the keyword database.
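As the I/O summary below indicates, PDRORB derives its output names from the input POD file name and writes one FITS and one ASCII table. The sketch below is illustrative only: the name derivation follows the documented p<Ymdhhmmr>.pod to .fit/.asc convention, while the ASCII column layout shown is hypothetical (the real file formats are defined by the omsorb task):

```python
from pathlib import Path

def ephemeris_output_names(pod_file):
    """Given an Orbital Ephemeris POD file name such as 'p9903112359a.pod',
    return the FITS and ASCII table file names PDRORB would produce."""
    stem = Path(pod_file).stem          # drop the '.pod' extension
    return stem + ".fit", stem + ".asc"

def write_ascii_ephemeris(rows, path):
    """Write (time, x, y, z) state vectors as a fixed-width ASCII table.
    The columns and widths here are hypothetical, for illustration only."""
    with open(path, "w") as f:
        f.write(f"{'TIME':>12} {'X_KM':>14} {'Y_KM':>14} {'Z_KM':>14}\n")
        for t, x, y, z in rows:
            f.write(f"{t:12.3f} {x:14.3f} {y:14.3f} {z:14.3f}\n")
```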
- PDRORB Process Triggers
Input Trigger:
PDRORB is triggered by the ORBCPY process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c w _ _ _ _ _ _ _ _ _ orb
Output Triggers:
PDRORB triggers the PDRREQ process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c c _ _ _ _ _ w _ _ _ orb
PDRORB Process I/O - Input to and Output from the PDRORB process is as follows:
INPUT:
PDR_ORB_DIR:
p<Ymdhhmmr>.pod - Orbital Ephemeris POD file
info from keyword database
OUTPUT:
PDR_ORB_DIR:
p<Ymdhhmmr>.fit - FITS format Ephemeris table file
p<Ymdhhmmr>.asc - ASCII format Ephemeris table file
PDRORB Process Modes - The PDRORB process makes database updates and therefore should be run
only in the pipeline mode.
Pipeline Mode:
omsorb -p opus_definitions_dir:your.path -r pdrorb (in task line of resource file)
where:
-p = denotes path file specification follows
-r = denotes resource file for the PDRORB Process
opus_definitions_dir:your.path = path file to use
PDRORB Process Resource File
!--------------------------------------------------------------------
!
! PDRORB RESOURCE FILE
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! MOD PR
! LEVEL DATE NUMBER User Description
! ----- -------- ------ ------ -------------------------------------
! 000 09/11/96 31820 Ken S. Created
! 001 05/21/97 34189 Ken S. Add to_queue and from_node to command
! 002 10/28/97 35166 Ken S. AXP/VMS port
! 003 12/31/97 35166.3 Ken S. Add archive request trigger
! 004 03/11/98 36318 Ken S. Add holding tank mnemonic.
! 005 07/26/99 38816_03 Goldst Changed SYSTEM from OMS to NSC
! 006 01/04/00 39733 Ken S. Change to OSF poller.
! 007 10/24/01 44684 Goldst Created pdrorb version
! 008 01/30/02 45016 Goldst Corrected OK_TO_UPDATE_DATABASE
! 009 03/11/02 45016 Goldst Added OSF_TRIGGER1.DATA_ID
!--------------------------------------------------------------------
PROCESS_NAME = pdrorb
TASK = <omsorb -p $PATH_FILE -r pdrorb>
DESCRIPTION = 'Process ephemeris data'
SYSTEM = PDR
CLASS = orb
DISPLAY_ORDER = 1
INTERNAL_POLLING_PROCESS = TRUE
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE
! Set to true to prevent requesting the data
! again.
OSF_RANK = 1 ! Time event ordering.
OSF_TRIGGER1.EP = w ! Need a 'Wait' flag in Data Validation
OSF_TRIGGER1.DATA_ID = orb ! Trigger class ID
OSF_PROCESSING.EP = p ! Processing : ORB Receipt
OSF_COMPLETE.EP = c ! Completed : ORB Receipt
OSF_COMPLETE.RQ = w ! Waiting : Archive request generation
OSF_FAILED.EP = f ! Failed : ORB Receipt
POLLING_TIME = 10 ! Wait (seconds)
INPATH = PDR_ORB_DIR ! Directory of input files
OUTPATH = PDR_ORB_DIR ! Directory for output files
HLDPATH = PDR_HLD_DIR ! Holding Tank directory.
MINBLOCKS = 50000 ! blocks required on output disk
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
PDRORB Process accessed database relations
PDRORB accessed Relations
PDRORB Process database queries
PDRORB Process Queries
REPLAN Process details
- REPLAN Process Description
- This process resides in the (RP) stage of the PDR Pipeline.
It is an OSF triggered process and its purpose is to examine the Mission Schedule pod files in the
PDR_MSC_DIR directory to determine which of the files constitute valid Mission
Schedules and Mission Schedule Re-plans. When valid Re-plans are detected, the REPLAN process will modify
numerous database relations to delete any information being superseded by the Re-plan.
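Conceptually, a Re-plan is a later Mission Schedule covering a calendar period that has already been processed, and its arrival makes the earlier schedule's information obsolete. The sketch below illustrates only that supersession idea, using hypothetical per-calendar version lists; the real check is performed by check_replan_msc.pl against the database:

```python
def find_superseded(pod_versions):
    """Given a mapping of calendar id -> list of schedule version numbers
    seen so far, return, for each calendar with more than one version, the
    versions superseded by the newest one (the Re-plan).

    The calendar/version bookkeeping here is hypothetical; it stands in
    for the database relations REPLAN actually consults and modifies.
    """
    return {cal: sorted(v for v in versions if v != max(versions))
            for cal, versions in pod_versions.items() if len(versions) > 1}
```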
- REPLAN Process Triggers
Input Trigger:
REPLAN is triggered by the MSCCPY process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ w _ _ _ _ _ _ _ _ msc
Output Triggers:
REPLAN triggers the UPDATR process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c w _ _ _ _ _ _ _ msc
REPLAN Process I/O - Input to and Output from the REPLAN process is as follows:
INPUT:
PDR_MSC_DIR:
u<Ymdhhmmx>.pod - Mission Schedule pod files
information from database
OUTPUT:
superseded Mission Schedule information removed from database
REPLAN Process Modes - The REPLAN process makes database updates and therefore should be run
only in the pipeline mode.
Pipeline Mode:
xpoll -p opus_definitions_dir:your.path -r replan (in task line of resource file)
where:
xpoll = External Poller Process used to invoke a script
-p = denotes path file specification follows
-r = denotes resource file for the REPLAN Process
opus_definitions_dir:your.path = path file to use
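The XPOLL_STATE.nn keywords in the resource file map the exit status of the invoked script onto named states, which in turn set OSF flags; any exit status without a mapping raises the XPOLL_ERROR flag ('x'). A minimal sketch of that dispatch (the dictionaries mirror the REPLAN resource keywords; the function name and flag table are hypothetical simplifications):

```python
import subprocess

# Mirrors XPOLL_STATE.00 / XPOLL_STATE.01 from the resource file.
XPOLL_STATES = {0: "OSF_SUCCESS", 1: "OSF_FAIL"}
# Simplified flag table: the real resource file sets per-stage flags
# (e.g. OSF_SUCCESS.RP = c plus OSF_SUCCESS.UP = w).
OSF_FLAGS = {"OSF_SUCCESS": "c", "OSF_FAIL": "f"}

def run_command(argv):
    """Run the external COMMAND and return the OSF flag to post.
    An exit status with no XPOLL_STATE mapping yields 'x' (XPOLL_ERROR)."""
    status = subprocess.run(argv).returncode
    state = XPOLL_STATES.get(status)
    return OSF_FLAGS[state] if state else "x"
```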
REPLAN Process Resource File
!--------------------------------------------------------------------
! REPLAN RESOURCE FILE
!
! This file is used to define various values for the Update
! Support Schedule Tables process.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! MOD PR
! LEVEL DATE NUMBER User Description
! ----- -------- ------ ------ --------------------------------
! 000 10/24/01 44684 Goldst Created initial version
! 001 03/11/02 45016 Goldst Added OSF_TRIGGER1.DATA_ID
!--------------------------------------------------------------------
PROCESS_NAME = replan
TASK = <xpoll -p $PATH_FILE -r replan>
DESCRIPTION = 'Process a replan SMS'
COMMAND = check_replan_msc.pl
SYSTEM = PDR
CLASS = msc
DISPLAY_ORDER = 1
OSF_RANK = 1 ! First Trigger
OSF_TRIGGER1.RP = w ! Need a 'wait' flag for replan
OSF_TRIGGER1.DATA_ID = msc ! Trigger class ID
OSF_PROCESSING.RP = p ! Set the processing flag to 'Processing'
OSF_SUCCESS.RP = c ! Complete: Completed replan processing
OSF_SUCCESS.UP = w ! Complete: Set trigger for UPDATR
OSF_FAIL.RP = f ! Error: Set the trouble flag
XPOLL_ERROR.RP = x ! Undefined exit status
XPOLL_STATE.01 = OSF_FAIL ! exit status 1 == OSF_FAIL state
XPOLL_STATE.00 = OSF_SUCCESS ! exit status 0 == OSF_SUCCESS state
POLLING_TIME = 10 ! Amount of time to wait before polling for next
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.SPSS_DB = SPSS_DB
ENV.DSQUERY = DSQUERY
ENV.MSC_DIR = PDR_MSC_DIR
REPLAN Process accessed database relations
REPLAN accessed Relations
REPLAN Process database queries
REPLAN Process Queries
UPDATR Process details
- UPDATR Process Description
- This process resides in the (UP) stage of the PDR Pipeline.
It is an OSF triggered process and its purpose is to update Support Schedule information in various
database relations.
- UPDATR Process Triggers
Input Trigger:
UPDATR is triggered by the REPLAN process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c w _ _ _ _ _ _ _ msc
Output Triggers:
UPDATR triggers the MSCXTR process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c c w _ _ _ _ _ _ msc
UPDATR Process I/O - Input to and Output from the UPDATR process is as follows:
INPUT:
PDR_MSC_DIR:
u<Ymdhhmmx>.pod - Mission Schedule pod files
information from database
OUTPUT:
Support Schedule information updated in database
UPDATR Process Modes - The UPDATR process makes database updates and therefore should be run
only in the pipeline mode.
Pipeline Mode:
xpoll -p opus_definitions_dir:your.path -r updatr (in task line of resource file)
where:
xpoll = External Poller Process used to invoke a script
-p = denotes path file specification follows
-r = denotes resource file for the UPDATR Process
opus_definitions_dir:your.path = path file to use
UPDATR Process Resource File
!--------------------------------------------------------------------
!
! UPDATR RESOURCE FILE
!
! This file is used to define various values for the Update
! Support Schedule Tables process.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! MOD PR
! LEVEL DATE NUMBER User Description
! ----- -------- ------ ------ --------------------------------
! 000 10/24/01 44684 Goldst Created initial version
! 001 01/30/02 45016 Goldst Added OK_TO_UPDATE_DATABASE
! 002 03/11/02 45016 Goldst Added OSF_TRIGGER1.DATA_ID
!--------------------------------------------------------------------
PROCESS_NAME = updatr
TASK = <xpoll -p $PATH_FILE -r updatr>
DESCRIPTION = 'Update support schedule tables'
COMMAND = insert_support_records.pl
SYSTEM = PDR
CLASS = msc
DISPLAY_ORDER = 1
OSF_RANK = 1 ! First trigger
OSF_TRIGGER1.UP = w ! Need a 'wait' flag in updatr
OSF_TRIGGER1.DATA_ID = msc ! Trigger class ID
OSF_PROCESSING.UP = p ! Set the processing flag to 'Processing'
OSF_SUCCESS.UP = c ! Completion: Completed updatr processing
OSF_SUCCESS.MS = w ! Completion: set wait flag for MSCXTR
OSF_FAIL.UP = f ! Error: Set the trouble flag
XPOLL_ERROR.UP = x ! Undefined exit status
XPOLL_STATE.01 = OSF_FAIL ! exit status 1 == OSF_FAIL state
XPOLL_STATE.00 = OSF_SUCCESS ! exit status 0 == OSF_SUCCESS state
POLLING_TIME = 10 ! Amount of time to wait before polling for next
ENV.MSC_DIR = PDR_MSC_DIR !
ENV.LOG_DELETED_OBSETS = Y ! Y or N, to list obsets deleted from qolink_sms
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.SPSS_DB = SPSS_DB
ENV.DSQUERY = DSQUERY
ENV.OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE
UPDATR Process accessed database relations
UPDATR accessed Relations
UPDATR Process database queries
UPDATR Process Queries
MSCXTR Process details
- MSCXTR Process Description
- This process resides in the (MS) stage of the PDR Pipeline.
It is an OSF triggered process and its purpose is to parse Mission Schedule pod files in the
PDR_MSC_DIR directory to extract desired observation information and populate
various database tables with the information. The information is subsequently used in the
control/generation of the products produced by the EDPS FOF, FGS, and AST pipelines.
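The extraction is a scan-and-collect pass over the pod file: records of interest are matched and their fields gathered for database insertion. The record layout in this sketch is purely hypothetical (the real Mission Schedule format is parsed by mscxtr.csh and its helpers); only the pattern is illustrative:

```python
import re

# Hypothetical record layout: 'OBSET <obset_id> START <time>' lines.
RECORD = re.compile(r"^OBSET\s+(\w+)\s+START\s+(\S+)")

def extract_obsets(pod_text):
    """Collect (obset_id, start_time) pairs from matching records,
    for later insertion into the observation database tables."""
    return [m.groups() for line in pod_text.splitlines()
            if (m := RECORD.match(line))]
```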
- MSCXTR Process Triggers
Input Trigger:
MSCXTR is triggered by the UPDATR process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c c w _ _ _ _ _ _ msc
Output Triggers:
MSCXTR triggers the CONTRL process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c c c w _ _ _ _ _ msc
MSCXTR Process I/O - Input to and Output from the MSCXTR process is as follows:
INPUT:
PDR_MSC_DIR:
u<Ymdhhmmx>.pod - Mission Schedule pod files
information from database
OUTPUT:
various database tables populated with information extracted from Mission Schedule files
MSCXTR Process Modes - The MSCXTR process makes database updates and therefore should be run
only in the pipeline mode.
Pipeline Mode:
xpoll -p opus_definitions_dir:your.path -r mscxtr (in task line of resource file)
where:
xpoll = External Poller Process used to invoke a script
-p = denotes path file specification follows
-r = denotes resource file for the MSCXTR Process
opus_definitions_dir:your.path = path file to use
MSCXTR Process Resource File
!--------------------------------------------------------------------
!
! mscxtr.resource
!
!
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! PR
! DATE NUMBER User Description
! -------- ------ ------ -------------------------------------
! 06/10/01 43987 Heller First version
! 10/24/01 44684 Goldst Standardized OPUS PDR pipeline version
! 01/30/02 45016 Goldst Added OK_TO_UPDATE_DATABASE
! 03/11/02 45016 Goldst Added OSF_TRIGGER1.DATA_ID
! 06/10/02 45958 Heller Added definition for exit status 3
!--------------------------------------------------------------------
PROCESS_NAME = mscxtr
TASK = <xpoll -p $PATH_FILE -r mscxtr>
COMMAND = mscxtr.csh
DESCRIPTION = 'Process MSC pod files'
SYSTEM = PDR
CLASS = msc
OSF_RANK = 1
OSF_TRIGGER1.MS = w ! Trigger
OSF_TRIGGER1.DATA_ID = msc ! Trigger class ID
OSF_PROCESSING.MS = p ! Processing
OSF_SUCCESS.MS = c ! Completion
OSF_SUCCESS.CT = w ! Completion
OSF_DB_ERROR.MS = f ! DB Error
OSF_PARSE_ERROR.MS = e ! Parsing Error
POLLING_TIME = 10 ! Required
XPOLL_ERROR.MS = x ! Undefined exit status
XPOLL_ERROR_COUNT = 10 ! This many XPOLL errors will cause the
! process to go ABSENT
! Valid exit codes for COMMAND that allow XPOLL to continue.
! All other XPOLL states will cause process to go ABSENT.
! (The labels are not used for TIME events)
XPOLL_STATE.00 = OSF_SUCCESS
XPOLL_STATE.01 = OSF_PARSE_ERROR
XPOLL_STATE.02 = OSF_DB_ERROR
XPOLL_STATE.03 = OSF_PARSE_ERROR
! Script needs following information to run
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
ENV.OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE
ENV.INPATH = PDR_MSC_DIR
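The XPOLL_STATE entries above determine what happens for each exit status of mscxtr.csh. Conceptually the mapping works like this (an illustrative Python sketch, not the actual xpoll source; names here are hypothetical):

```python
# Sketch of how xpoll translates the COMMAND exit status into an OSF state,
# per the XPOLL_STATE.nn entries in the mscxtr resource file above.
XPOLL_STATES = {
    0: "OSF_SUCCESS",      # XPOLL_STATE.00 - parse and DB update succeeded
    1: "OSF_PARSE_ERROR",  # XPOLL_STATE.01 - 'e' in the MS column
    2: "OSF_DB_ERROR",     # XPOLL_STATE.02 - 'f' in the MS column
    3: "OSF_PARSE_ERROR",  # XPOLL_STATE.03 - also treated as a parse error
}

def osf_state_for(exit_status):
    """Any exit status without an XPOLL_STATE entry falls through to
    XPOLL_ERROR ('x'); XPOLL_ERROR_COUNT such errors make the process ABSENT."""
    return XPOLL_STATES.get(exit_status, "XPOLL_ERROR")
```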
MSCXTR Process accessed database relations
MSCXTR accessed Relations
MSCXTR Process database queries
MSCXTR Process Queries
CONTRL Process details
- CONTRL Process Description
- This process resides in the (CT) stage of the PDR Pipeline.
It is an OSF triggered process and its purpose is to use Mission Schedule information contained in previously
populated database relations to populate other relations that are used to dictate/control the specific
processing performed by the EDPS FOF, FGS, and AST pipelines.
- CONTRL Process Triggers
Input Trigger:
CONTRL is triggered by the MSCXTR process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c c c w _ _ _ _ _ msc
Output Triggers:
CONTRL triggers the NICSAA process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c c c c w _ _ _ _ msc
CONTRL Process I/O - Input to and Output from the CONTRL process are as follows:
INPUT:
Mission Schedule information from database
OUTPUT:
population of database relations containing information which dictates the actual processing
performed by the EDPS FOF, FGS, and AST pipelines
CONTRL Process Modes - The CONTRL process makes database inserts and therefore should be run
only in the pipeline mode.
Pipeline Mode:
xpoll -p opus_definitions_dir:your.path -r contrl (in task line of resource file)
where:
xpoll = External Poller Process used to invoke a script
-p = denotes path file specification follows
-r = denotes resource file for the CONTRL Process
opus_definitions_dir:your.path = path file to use
CONTRL Process Resource File
!--------------------------------------------------------------------
!
! CONTRL RESOURCE FILE
!
! This file is used to define various values for the Update
! Support Schedule Tables process.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! MOD PR
! LEVEL DATE NUMBER User Description
! ----- -------- ------ ------ ------------------------------------
! 000 10/24/01 44684 Goldst Created initial version
! 001 01/25/02 45016 Goldst ENV.OK_TO_UPDATE_DATABASE from path
! 002 03/11/02 45016 Goldst Added OSF_TRIGGER1.DATA_ID
!--------------------------------------------------------------------
!
PROCESS_NAME = contrl
TASK = <xpoll -p $PATH_FILE -r contrl>
DESCRIPTION = 'Update control database tables'
COMMAND = update_control_tables.pl
SYSTEM = PDR
CLASS = msc
DISPLAY_ORDER = 1
OSF_RANK = 1 ! First trigger
OSF_TRIGGER1.MS = c ! Completed MSCXTR processing
OSF_TRIGGER1.CT = w ! Need a 'wait' flag in control processing stage
OSF_TRIGGER1.DATA_ID = msc ! Trigger class ID
OSF_PROCESSING.CT = p ! Set the processing flag to 'Processing'
OSF_SUCCESS.CT = c ! Completion: Completed control table update
OSF_SUCCESS.NS = w ! Completion: Trigger for NICSAA processing
OSF_FAIL.CT = f ! Error: Set the trouble flag
XPOLL_ERROR.CT = x ! Undefined exit status
XPOLL_STATE.01 = OSF_FAIL ! exit status 1 == OSF_FAIL state
XPOLL_STATE.00 = OSF_SUCCESS ! exit status 0 == OSF_SUCCESS state
POLLING_TIME = 10 ! Amount of time to wait before polling for next
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.SPSS_DB = SPSS_DB
ENV.DSQUERY = DSQUERY
ENV.OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
CONTRL Process accessed database relations
CONTRL accessed Relations
CONTRL Process database queries
CONTRL Process Queries
NICSAA Process details
- NICSAA Process Description
- This process resides in the (NS) stage of the PDR Pipeline.
It is an OSF triggered process and its purpose is to use Mission Schedule information contained in previously
populated database relations to populate other relations containing information about NICMOS exposures
and SAA dark associations, and NICMOS associations and SAA dark exposures.
- NICSAA Process Triggers
Input Trigger:
NICSAA is triggered by the CONTRL process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c c c c w _ _ _ _ msc
Output Triggers:
NICSAA triggers the PDRREQ process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c c c c c w _ _ _ msc
NICSAA Process I/O - Input to and Output from the NICSAA process are as follows:
INPUT:
NICMOS related Mission Schedule information from database
OUTPUT:
population of database relations containing information about NICMOS exposures
and SAA dark associations, and NICMOS associations and SAA dark exposures.
NICSAA Process Modes - The NICSAA process makes database inserts and therefore should be run
only in the pipeline mode.
Pipeline Mode:
xpoll -p opus_definitions_dir:your.path -r nicsaa (in task line of resource file)
where:
xpoll = External Poller Process used to invoke a script
-p = denotes path file specification follows
-r = denotes resource file for the NICSAA Process
opus_definitions_dir:your.path = path file to use
NICSAA Process Resource File
!--------------------------------------------------------------------
!
! NICSAA RESOURCE FILE
!
! This file is used to define various values for the NICMOS SAA
! Table Update process.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! MOD PR
! LEVEL DATE NUMBER User Description
! ----- -------- ------ ------ --------------------------------
! 000 10/24/01 44684 Goldst Created initial version
! 001 03/11/02 45016 Goldst Added OSF_TRIGGER1.DATA_ID
!--------------------------------------------------------------------
PROCESS_NAME = nicsaa
TASK = <xpoll -p $PATH_FILE -r nicsaa>
DESCRIPTION = 'NICMOS SAA table update'
COMMAND = insert_nic_saa_records.pl
SYSTEM = PDR
CLASS = msc
DISPLAY_ORDER = 1
OSF_RANK = 1 ! First Trigger
OSF_TRIGGER1.NS = w ! Need a 'wait' flag in nicsaa
OSF_TRIGGER1.DATA_ID = msc !
OSF_PROCESSING.NS = p ! Set the processing flag to 'Processing'
OSF_COMPLETE.NS = c ! Complete: Completed RMS post processing
OSF_COMPLETE.RQ = w ! Complete: Set archiving stage
OSF_FAIL.NS = f ! Error: Set the trouble flag
XPOLL_ERROR.NS = x ! Undefined exit status
POLLING_TIME = 10 ! Amount of time to wait before polling for next
XPOLL_STATE.01 = OSF_FAIL ! exit status 1 == OSF_FAIL state
XPOLL_STATE.00 = OSF_COMPLETE ! exit status 0 == OSF_COMPLETE state
ENV.NIC_SAA_MAX_DELTA = 3000 ! max seconds from SAA exit for dark utility
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.SPSS_DB = SPSS_DB
ENV.DSQUERY = DSQUERY
NICSAA Process accessed database relations
NICSAA accessed Relations
NICSAA Process database queries
NICSAA Process Queries
PDRREQ Process details
- PDRREQ Process Description
- This process resides in the (RQ) stage of the PDR Pipeline.
It is an OSF triggered process and its purpose is to generate requests to the HST archive to archive
the various products received from the PASS System and the products generated by the PDR pipeline itself.
The requests that are generated provide information such as the date and time of the request, the names of
the products to be archived, the archive class for the products, the location of the files to be archived,
and the number of files to be archived.
- PDRREQ Process Triggers
Input Trigger:
PDRREQ is triggered by the MTLCPY, SMSCPY, PDRORB, and NICSAA processes
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ _ _ _ _ _ w _ _ _ mtl
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ _ _ _ _ _ w _ _ _ sms
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c c _ _ _ _ _ w _ _ _ orb
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c c c c c w _ _ _ msc
Output Triggers:
PDRREQ triggers the PDRRSP process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ _ _ _ _ _ c w _ _ mtl, sms
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c c _ _ _ _ _ c w _ _ orb
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c c c c c c w _ _ msc
PDRREQ Process I/O - Input to and Output from the PDRREQ process are as follows:
INPUT: - Identifies the files to be archived
PDR_MSC_DIR
u<Ymdhhmmx>.pod - PASS Mission Schedule files received from PASS and
moved here from the PDR_PASS_DIR directory and renamed.
PDR_MTL_DIR
v<Ymdhhmmx>.pod - Mission Timeline Report files received from PASS
and moved here from the PDR_PASS_DIR directory and renamed
PDR_SMS_DIR
y<Ymdhhmmx>.pod - Science Mission Schedule files received from PASS and
moved here from the PDR_PASS_DIR directory and renamed.
WHERE:
Y = 1 digit year (J=1999, k=2000, l=2001, m=2002 ...)
m = 1 digit month (1 thru c, where 1=January and a=October)
d = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
hh = 2 digit hour (00 thru 23)
mm = 2 digit minute (00 thru 59)
x = file sequence id (0-9,a-z)
EXAMPLE for year 2002, month 5, day 9, hour 0800, minute 25: um5908250.pod
PDR_ORB_DIR
p<Ymdhhmmr>.pod - Orbital Ephemeris POD file.
p<Ymdhhmmr>.fit - FITS format Ephemeris table file.
p<Ymdhhmmr>.asc - ASCII format Ephemeris table file.
WHERE:
Y = 1 digit year (J=1999, k=2000, l=2001, m=2002 ...)
m = 1 digit month (1 thru c, where 1=January and a=October)
d = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
hh = 2 digit hour (00 thru 23)
mm = 2 digit minute (00 thru 59)
EXAMPLES for year 2002, month May, day 9, hour 0800, minute 25:
pm590825r.pod - POD file
pm590825r.fit - FITS product
pm590825r.asc - ASCII product
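The letter-coded file names described above can be decoded programmatically. The sketch below follows the year/month/day codes listed in the WHERE blocks (J=1999 means year = 1980 + the letter's base-36 value); the function names are illustrative, not OPUS code:

```python
def b36(ch):
    """Interpret one character (0-9, a-z) as a base-36 digit."""
    return int(ch, 36)

def decode_pod_name(name):
    """Split a PASS POD file name such as 'um5908250.pod' into its fields.
    Year letters follow J=1999, k=2000, l=2001, ... i.e. 1980 + base-36 value;
    month (1-c) and day (1-v) are single base-36 digits."""
    stem = name.split(".")[0]
    return {
        "type": stem[0],               # u=MSC, v=MTL, y=SMS, p=ORB
        "year": 1980 + b36(stem[1]),
        "month": b36(stem[2]),
        "day": b36(stem[3]),
        "hour": int(stem[4:6]),
        "minute": int(stem[6:8]),
        "seq": stem[8],                # sequence id; always 'r' for ORB files
    }
```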
OUTPUT:
PDR_AREQ_DIR:
contains requests to archive PASS related POD files and product files
YYYYMMDD_HHMMSS_iymdhhmmx_zzz.areq - request to archive PASS related files
WHERE:
YYYY = year of archive request file generation
MM = month of archive request file generation
DD = day of archive request file generation
HH = hour of archive request file generation
MM = minute of archive request file generation
SS = seconds of archive request file generation
i = file type identifier (u=PASS Mission Schedule file, v=Mission Timeline Report file,
y=Science Mission Schedule file, p=Orbital Ephemeris file)
y = 1 digit year (J=1999, k=2000, l=2001, m=2002 ...)
m = 1 digit month (1 thru c, where 1=January and a=October)
d = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
hh = 2 digit hour (00 thru 23)
mm = 2 digit minute (00 thru 59)
x = file sequence id (0-9,a-z); for Orbital Ephemeris files the value is always 'r'
zzz = data class identifier (msc=PASS Mission Schedule file, mtl=Mission Timeline Report file,
sms=Science Mission Schedule file, orb=Orbital Ephemeris file)
EXAMPLES: 20021212_082578_pm590825r_orb.areq - archive request file for ephemeris products
20021212_082578_um590825a_msc.areq - archive request for PASS mission schedule
20021212_082578_ym590825b_sms.areq - archive request file for Science mission schedule
20021212_082578_vm590825c_mtl.areq - archive request file for mission timeline
PDR_LOG_DIR:
contains a log file written to by the PDRREQ and PDRRSP processes to indicate the disposition of an
archive request and its corresponding response
iymdhhmmx_zzz.log - disposition of PASS related files archive request/response
WHERE:
i = file type identifier (u=PASS Mission Schedule file, v=Mission Timeline Report file,
y=Science Mission Schedule file, p=Orbital Ephemeris file)
y = 1 digit year (J=1999, k=2000, l=2001, m=2002 ...)
m = 1 digit month (1 thru c, where 1=January and a=October)
d = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
hh = 2 digit hour (00 thru 23)
mm = 2 digit minute (00 thru 59)
x = file sequence id (0-9,a-z); for Orbital Ephemeris files the value is always 'r'
zzz = data class identifier (msc=PASS Mission Schedule file, mtl=Mission Timeline Report file,
sms=Science Mission Schedule file, orb=Orbital Ephemeris file)
EXAMPLES: pm590825r_orb.log - archive log file for ephemeris products
um5908250_msc.log - archive log for PASS mission schedule
ym5908251_sms.log - archive log file for Science mission schedule
vm5908252_mtl.log - archive log file for mission timeline
updates to database relations
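The request-file naming convention above amounts to prefixing the POD file stem with a generation timestamp and appending the data class. A minimal sketch (the helper name is hypothetical, not part of the genreq executable):

```python
import datetime

def areq_name(pod_name, data_class, when=None):
    """Build an archive request file name of the form
    YYYYMMDD_HHMMSS_<pod stem>_<class>.areq, as described above."""
    when = when or datetime.datetime.now()
    stem = pod_name.split(".")[0]              # e.g. 'um590825a'
    return f"{when:%Y%m%d_%H%M%S}_{stem}_{data_class}.areq"
```

For example, a request for um590825a.pod generated at 08:25:38 on 2002-12-12 would be named 20021212_082538_um590825a_msc.areq.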
PDRREQ Process Modes - The PDRREQ process makes database inserts/updates and therefore should be
run only in the pipeline mode.
Pipeline Mode
genreq -p opus_definitions_dir:your.path -r pdrreq (in task line of resource file)
where:
genreq = process used to generate archive request files
-p = denotes path file specification follows
-r = denotes resource file for the PDRREQ Process
opus_definitions_dir:your.path = path file to use
PDRREQ Process Resource File
!--------------------------------------------------------------------
!
! PDRREQ RESOURCE FILE
!
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! DATE PR User Description
! -------- ------ ------ -------------------------------------
! 07/01/99 39404 Heller UNIX version of resource file
! 01/13/00 39307 Heller Add POD class
! 03/16/00 40911 Heller Fix qoarchives.cal_archdate update
! 10/24/01 44684 Goldst Created OPUS PDR pipeline version
! 01/30/02 45016 Goldst Corrected OK_TO_UPDATE_DATABASE
! 03/13/02 45016 Goldst Added TRIGGERn and TRIGGERn.DATA_ID
!---------------------------------------------------------------------------
PROCESS_NAME = pdrreq
TASK = <genreq -p $PATH_FILE -r pdrreq>
DESCRIPTION = 'Generate nonscience archive request'
SYSTEM = PDR
CLASS = all
DISPLAY_ORDER = 1
!---------------------------------------------------------------------------
! EVNT resource.
!---------------------------------------------------------------------------
POLLING_TIME = 5 ! Response time of the application
OSF_RANK = 1 ! OSF event ordering.
OSF_TRIGGER1.RQ = w ! ARCREQ is triggered by AR = W
OSF_TRIGGER1.DATA_ID = msc ! Trigger1 class ID
OSF_TRIGGER2.RQ = w ! ARCREQ is triggered by AR = W
OSF_TRIGGER2.DATA_ID = mtl ! Trigger2 class ID
OSF_TRIGGER3.RQ = w ! ARCREQ is triggered by AR = W
OSF_TRIGGER3.DATA_ID = sms ! Trigger3 class ID
OSF_TRIGGER4.RQ = w ! ARCREQ is triggered by AR = W
OSF_TRIGGER4.DATA_ID = orb ! Trigger4 class ID
!---------------------------------------------------------------------------
! Application Specific resource
!---------------------------------------------------------------------------
POLLING_TIME = 1
OSF_PROCESSING.RQ = p ! letter to be used when an OSF is processed.
OSF_ERROR.RQ = e ! letter to be used when there is an error.
OSF_SUCCESS1.RQ = c ! Letters to be used when it is successful
OSF_SUCCESS1.RS = w ! completion.
OSF_SUCCESS2.RQ = c
OSF_SUCCESS2.RS = w
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
MAX_ERROR = 10 ! Maximun number unexpected errors before
! ARCREQ quits
! Archive class and OSF map to be used.
ARCHIVE_MAP = OPUS_DEFINITIONS_DIR:archclass_osf.map
!---------------------------------------------------------------------------
! Archive groups that are recognized. The archive group name (the name
! before the ".", must be one of the archive group name specified in
! ARCHIVE_MAP.
!
! For each archive group, the following resource must be present.
! .AREQ_DIR - The pointer to the areq directory
! .LOG_DIR = The pointer to the directory where the log file is kept.
! .DATASET_DIR = The data set direcotry.
! .TRACK_EXT = (Y/N) When it is Y, every extension is saved in
! archive_files relation.
! .DATA_TYPE = DATA_TYPE value for the archive request.
! .DATASET_FILTER = <DATASET_NAME>.* The filter to be used to find
! all the dataset. <DATASET_NAME> will be replaced
! by the actual OSF dataset name at run time.
!
! .OSF_STATE = This resource allows different archive group to have
! different successfull completion status.
!
! For Generic data class,
! DATASET_FILTER must be set to <DATASET_NAME>.*, since archive is not
! capable
!
! For SM 97 orphan and unassociated fits files, the following
! additional keywords must be specified.
! .TRL_DIR = The directory where the .TRA files will be saved until
! they are deleted by ARCCLEAN.
! .DATASET_FILTER = <DATASET_NAME>.* The filter used to find
! the dataset. <DATASET_NAME> will be replaced
! by the actual OSF dataset name at run time.
!
! FOR SM 97 ASN data, the following addition keywords must be specified.
! .ASN_INGEST_DIR = The directory where archive picks up the ASN table
!
!---------------------------------------------------------------------------
MSC.AREQ_DIR = PDR_AREQ_DIR
MSC.LOG_DIR = PDR_LOG_DIR
MSC.DATASET_DIR = PDR_MSC_DIR
MSC.DATA_TYPE = MSC
MSC.TRACK_EXT = N
MSC.DATASET_FILTER = <DATASET_NAME>.pod
MSC.OSF_STATE = OSF_SUCCESS2
MTL.AREQ_DIR = PDR_AREQ_DIR
MTL.LOG_DIR = PDR_LOG_DIR
MTL.DATASET_DIR = PDR_MTL_DIR
MTL.DATA_TYPE = MTL
MTL.TRACK_EXT = N
MTL.DATASET_FILTER = <DATASET_NAME>.*
MTL.OSF_STATE = OSF_SUCCESS2
ORB.AREQ_DIR = PDR_AREQ_DIR
ORB.LOG_DIR = PDR_LOG_DIR
ORB.DATASET_DIR = PDR_ORB_DIR
ORB.DATA_TYPE = ORB
ORB.TRACK_EXT = N
ORB.DATASET_FILTER = <DATASET_NAME>.*
ORB.OSF_STATE = OSF_SUCCESS2
SMS.AREQ_DIR = PDR_AREQ_DIR
SMS.LOG_DIR = PDR_LOG_DIR
SMS.DATASET_DIR = PDR_SMS_DIR
SMS.DATA_TYPE = SMS
SMS.TRACK_EXT = N
SMS.DATASET_FILTER = <DATASET_NAME>.*
SMS.OSF_STATE = OSF_SUCCESS2
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
PDRREQ Process accessed database relations
PDRREQ accessed Relations
PDRREQ Process database queries
PDRREQ Process Queries
PDRRSP Process details
- PDRRSP Process Description
- This process resides in the (RS) stage of the PDR Pipeline.
It is an OSF triggered process and its purpose is to generate responses to requests made to the
HST archive to archive PASS System products and the products generated by the PDR pipeline itself.
The responses that are generated provide the same information as their corresponding archive request files
with the following additional information: the status of the archive request, the date and time the products
were archived, and any error messages that result if an archive request cannot be satisfied.
- PDRRSP Process Triggers
Input Triggers:
PDRRSP is triggered by the PDRREQ process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ _ _ _ _ _ c w _ _ mtl, sms
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c c _ _ _ _ _ c w _ _ orb
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c c c c c c w _ _ msc
Output Triggers:
PDRRSP triggers the MSCMOV process for 'msc' class data and the
PDRCLN process for 'mtl', 'sms', and 'orb' class data. The
MSCMOV and PDRCLN processes share the (CL) stage of the pipeline.
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ _ _ _ _ _ c c w _ mtl, sms
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c c _ _ _ _ _ c c w _ orb
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c c c c c c c w _ msc
PDRRSP Process I/O - Input to and Output from the PDRRSP process are as follows:
INPUT:
N/A
OUTPUT:
PDR_ARSP_DIR:
contains responses to the request to archive PASS related POD files and product files
YYYYMMDD_HHMMSS_iymdhhmmx_zzz.arsp - responses to requests to archive PASS related files
WHERE:
YYYY = year of archive request file generation
MM = month of archive request file generation
DD = day of archive request file generation
HH = hour of archive request file generation
MM = minute of archive request file generation
SS = seconds of archive request file generation
i = file type identifier (u=PASS Mission Schedule file, v=Mission Timeline Report file,
y=Science Mission Schedule file, p=Orbital Ephemeris file)
y = 1 digit year (J=1999, k=2000, l=2001, m=2002 ...)
m = 1 digit month (1 thru c, where 1=January and a=October)
d = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
hh = 2 digit hour (00 thru 23)
mm = 2 digit minute (00 thru 59)
x = file sequence id (0-9,a-z); for Orbital Ephemeris files the value is always 'r'
zzz = data class identifier (msc=PASS Mission Schedule file, mtl=Mission Timeline Report file,
sms=Science Mission Schedule file, orb=Orbital Ephemeris file)
EXAMPLES: 20021212_082578_pm590825r_orb.arsp - archive response file for ephemeris products
20021212_082578_um5908255_msc.arsp - archive response for PASS mission schedule
20021212_082578_ym5908256_sms.arsp - archive response file for Science mission schedule
20021212_082578_vm5908257_mtl.arsp - archive response file for mission timeline
PDR_LOG_DIR:
contains a log file written to by the PDRREQ and PDRRSP processes to indicate the disposition of an
archive request and its corresponding response
iymdhhmmx_zzz.log - disposition of PASS related files archive request/response
WHERE:
i = file type identifier (u=PASS Mission Schedule file, v=Mission Timeline Report file,
y=Science Mission Schedule file, p=Orbital Ephemeris file)
y = 1 digit year (J=1999, k=2000, l=2001, m=2002 ...)
m = 1 digit month (1 thru c, where 1=January and a=October)
d = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
hh = 2 digit hour (00 thru 23)
mm = 2 digit minute (00 thru 59)
x = file sequence id (0-9,a-z); for Orbital Ephemeris files the value is always 'r'
zzz = data class identifier (msc=PASS Mission Schedule file, mtl=Mission Timeline Report file,
sms=Science Mission Schedule file, orb=Orbital Ephemeris file)
EXAMPLES: pm590825r_orb.log - archive log file for ephemeris products
um5908250_msc.log - archive log for PASS mission schedule
ym5908251_sms.log - archive log file for Science mission schedule
vm5908252_mtl.log - archive log file for mission timeline
updates to database relations
PDRRSP Process Modes - The PDRRSP process should be executed only in the pipeline mode
Pipeline Mode
ingrsp -p opus_definitions_dir:your.path -r pdrrsp (in task line of resource file)
where:
ingrsp = process used to ingest and process archive response files
-p = denotes path file specification follows
-r = denotes resource file for the PDRRSP Process
opus_definitions_dir:your.path = path file to use
PDRRSP Process Resource File
!--------------------------------------------------------------------
!
! pdrrsp RESOURCE FILE
!
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! MOD PR
! LEVEL DATE NUMBER User Description
! ----- -------- ------ ------ -------------------------------------
! 000 10/24/01 44684 Goldst Created initial version
! 001 01/30/02 45016 Goldst Corrected OK_TO_UPDATE_DATABASE
!---------------------------------------------------------------------------
PROCESS_NAME = pdrrsp
TASK = <ingrsp -p $PATH_FILE -r pdrrsp>
DESCRIPTION = 'Process archive response'
SYSTEM = PDR
CLASS = all
DISPLAY_ORDER = 1
!---------------------------------------------------------------------------
! EVNT resource.
!---------------------------------------------------------------------------
POLLING_TIME = 30 ! Response time of the application.
TIME_RANK = 1 ! Time event ordering.
START_TIME = 1970.001:00:00:00 ! The base reference time
DELTA_TIME = 000:00:00:30 ! The time interval to check for presence of
! archive response.
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
!---------------------------------------------------------------------------
! Application Specific resource
!---------------------------------------------------------------------------
MAX_ERROR = 10 ! Maximun number unexpected errors before
! INGRSP quits
OSF_WAITING.RS = w ! OSF is waiting to be processed.
OSF_PROCESSING.RS = p ! INGRSP is processing the response file.
OSF_SUCCESS.RS = c ! ARCRSP process the OSF successfully.
OSF_SUCCESS.CL = w ! ARCRSP successfull clean off data.
OSF_FAIL.RS = f ! INGRSP fails to process the response file.
OSF_ERROR.RS = e ! archive response status not equal to "OK"
OSF_CORRUPT.RS = z ! archive response is corrupt
! The following keywords specify the extension to be added to the response
! file extension.
RSP_FAIL = _FAIL ! When ARCRSP fails to process the response
RSP_ERROR = _ERROR ! When the response's STATUS value is not OK.
RSP_CORRUPT = _CORRUPT ! When the response is corrupted.
! Missing keywords etc.
RSP_FDUPLICATE = _FDUP ! When the duplicate response has been
! processed with fail status.
RSP_CDUPLICATE = _CDUP ! When the duplicated response has been
! successfully processed previously.
! Archive class and OSF map to be used.
ARCHIVE_MAP = OPUS_DEFINITIONS_DIR:archclass_osf.map
!----------------------------------------------------------------------------
! To specify an archive group add two resources, the .ARSP_DIR directory and
! .LOG_DIR directory. Entries are only necessary if you want to use values
! other than the defaults listed here:
!
! FOR SM 97 ASN data, the following addition keywords must be specified.
! .OMS_DIR = The directory where OPUS will be the table for OMS.
!
DEFAULT.ARSP_DIR = PDR_ARSP_DIR
DEFAULT.LOG_DIR = PDR_LOG_DIR
MSC.ARSP_DIR = PDR_ARSP_DIR
MSC.LOG_DIR = PDR_LOG_DIR
MTL.ARSP_DIR = PDR_ARSP_DIR
MTL.LOG_DIR = PDR_LOG_DIR
SMS.ARSP_DIR = PDR_ARSP_DIR
SMS.LOG_DIR = PDR_LOG_DIR
ORB.ARSP_DIR = PDR_ARSP_DIR
ORB.LOG_DIR = PDR_LOG_DIR
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
PDRRSP Process accessed database relations
PDRRSP accessed Relations
PDRRSP Process database queries
PDRRSP Process Queries
MSCMOV Process details
- MSCMOV Process Description
- This process is the first of two processes residing in the
(CL) stage of the PDR Pipeline. It is an 'msc' class OSF triggered process and its purpose is to move
the Mission Schedule pod files in the PDR_MSC_DIR directory to the
PDR_HLD_DIR directory, where they are kept until deemed to be
no longer needed by operations personnel.
- MSCMOV Process Triggers
Input Trigger:
MSCMOV is triggered by the PDRRSP process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c c c c c c c w _ msc
Output Triggers:
MSCMOV puts a 'c' in the PDRDEL (DL) stage. For other EDPS pipelines, a 'c' in the
(DL) stage triggers the process residing in the stage to delete no longer needed OSFs.
In the PDR pipeline however, the process (PDRDEL) residing in the (DL) stage is actually
triggered by a 'd' in the (DL) stage. The insertion of a 'd' in the (DL) stage is not
done automatically; a manual insertion by operations personnel is required.
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c c c c c c c c c msc
MSCMOV Process I/O - Input to and Output from the MSCMOV process are as follows:
INPUT:
PDR_MSC_DIR:
u<Ymdhhmmx>.pod - PASS Mission Schedule pod files to be moved
OUTPUT:
PDR_HLD_DIR:
u<Ymdhhmmx>.pod - PASS Mission Schedule pod files moved here from the
PDR_MSC_DIR directory
WHERE:
Y = 1 digit year (J=1999, k=2000, l=2001, m=2002 ...)
m = 1 digit month (1 thru c, where 1=January and a=October)
d = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
hh = 2 digit hour (00 thru 23)
mm = 2 digit minute (00 thru 59)
x = file sequence id (0-9,a-z)
EXAMPLE for year 2002, month 5, day 9, hour 0800, minute 25: um5908250.pod
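What MSCMOV does amounts to the following (an illustrative Python sketch of the move performed by move_file.csh; the directory arguments stand in for the PDR_MSC_DIR and PDR_HLD_DIR logical names, and the function name is hypothetical):

```python
import glob
import os
import shutil

def move_msc_pods(msc_dir, hld_dir):
    """Move every Mission Schedule POD file (u*.pod) from the MSC directory
    to the holding directory, returning the file names moved."""
    moved = []
    for path in sorted(glob.glob(os.path.join(msc_dir, "u*.pod"))):
        shutil.move(path, os.path.join(hld_dir, os.path.basename(path)))
        moved.append(os.path.basename(path))
    return moved
```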
MSCMOV Process Modes - The MSCMOV process executes only in the pipeline mode. The unix
'mv' (move) command can be used to manually move files if desired.
Pipeline Mode
xpoll -p opus_definitions_dir:your.path -r mscmov (in task line of resource file)
where:
-p = denotes path file specification follows
-r = denotes resource file for the MSCMOV Process
opus_definitions_dir:your.path = path file to use
MSCMOV Process Resource File
!--------------------------------------------------------------------
!
! mscmov.resource
!
! External poller using xpoll
!
! This file is used to construct the trigger, error, and success
! status fields in the observation status file.
!
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! MOD PR
! LEVEL DATE NUMBER User Description
! ----- -------- ------ ------ -------------------------------------
! 000 10/11/02 46773 J.Baum Created initial version
!--------------------------------------------------------------------
PROCESS_NAME = mscmov
TASK = <xpoll -p $PATH_FILE -r mscmov>
DESCRIPTION = 'Moves MSC file'
COMMAND = move_file.csh
SYSTEM = PDR
CLASS = msc
OSF_RANK = 1 ! First Trigger
OSF_TRIGGER1.CL = w ! Trigger
OSF_TRIGGER1.DATA_ID = msc ! Trigger class ID
OSF_PROCESSING.CL = p ! Processing
OSF_SUCCESS.CL = c ! Completion
OSF_SUCCESS.DL = c ! Completion
OSF_FAILURE.CL = f ! Failure setting
XPOLL_ERROR.CL = x ! Undefined exit status
ENV.INPATH = PDR_MSC_DIR
ENV.EXTENSION = .pod
ENV.MOVE_PATH = PDR_HLD_DIR
POLLING_TIME = 10 ! Wait (seconds) before polling for next
XPOLL_STATE.00 = OSF_SUCCESS
XPOLL_STATE.01 = OSF_FAILURE
PDRCLN Process details
- PDRCLN Process Description
- This process is the second of two processes residing in
the (CL) stage of the PDR Pipeline. It is an OSF triggered process and its purpose is to delete the mtl,
sms, and orb class pod files from the PDR_MTL_DIR, PDR_SMS_DIR,
and PDR_ORB_DIR directories respectively, and to delete the FITS and ASCII orb products
from the PDR_ORB_DIR directory.
- PDRCLN Process Triggers
Input Trigger:
PDRCLN is triggered by the PDRRSP Process
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ _ _ _ _ _ c c w _ mtl, sms
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c c _ _ _ _ _ c c w _ orb
Output Trigger:
PDRCLN puts a 'c' in the PDRDEL (DL) stage. For other EDPS pipelines, a 'c' in the
(DL) stage triggers the process residing in the stage to delete no longer needed OSFs.
In the PDR pipeline however, the process (PDRDEL) residing in the (DL) stage is actually
triggered by a 'd' in the (DL) stage. The insertion of a 'd' in the (DL) stage is not
done automatically; a manual insertion by operations personnel is required.
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ _ _ _ _ _ c c c c mtl, sms
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c c _ _ _ _ _ c c c c orb
PDRCLN Process I/O - Input to and Output from the PDRCLN process are as follows:
INPUT:
PDR_MTL_DIR
v<Ymdhhmmx>.pod - Mission Timeline Report pod file to be deleted
PDR_SMS_DIR
y<Ymdhhmmx>.pod - Science Mission Schedule pod file to be deleted
PDR_ORB_DIR
p<Ymdhhmmx>.pod - ORB related pod file to be deleted
p<Ymdhhmmx>.fit - ORB FITS product file to be deleted
p<Ymdhhmmx>.asc - ORB ASCII product file to be deleted
OUTPUT:
N/A
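The deletions above can be sketched as follows (illustrative Python, not the cleandata source; the class-to-pattern mapping simply mirrors the INPUT list, and the names are hypothetical):

```python
import glob
import os

# File patterns PDRCLN deletes for each data class, per the INPUT list above.
CLEAN_PATTERNS = {
    "mtl": ["v*.pod"],
    "sms": ["y*.pod"],
    "orb": ["p*.pod", "p*.fit", "p*.asc"],
}

def clean_class(data_class, directory):
    """Delete one class's files from its directory; returns the names removed."""
    removed = []
    for pattern in CLEAN_PATTERNS[data_class]:
        for path in sorted(glob.glob(os.path.join(directory, pattern))):
            os.remove(path)
            removed.append(os.path.basename(path))
    return removed
```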
PDRCLN Process Modes - The PDRCLN process invokes an OPUS pipeline generic executable, 'cleandata', to perform
its work. This executable can be executed in either interactive or pipeline mode; interactive mode, however,
is typically used only for testing. The PDRCLN process itself should
be executed only in pipeline mode.
Interactive Mode:
Again, the interactive mode is probably only useful for testing. It does not
require the existence of an OSF but the OSF status values must be supplied
by the user on the command line:
cleandata -p path.path -r process -d rootname -i dataid -o status
where:
process is a resource file that contains the optional keywords
CLASS_GROUPING.nn, where nn starts at 01. If CLASS_GROUPING is absent, then
the process CLASS keyword is used. To use all classes, set CLASS_GROUPING.01
to '*'. The status should have a 'c' in every stage that has an OUTPATH that
is to be tested.
Pipeline Mode:
cleandata -p opus_definitions_dir:your.path -r pdrcln (in task line of resource file)
where:
-p = denotes path file specification follows
-r = denotes resource file for the PDRCLN Process
opus_definitions_dir:your.path = path file to use
PDRCLN Process Resource File
!--------------------------------------------------------------------
!
! PDRCLN RESOURCE FILE
!
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! DATE PR User Description
! -------- ------ ------ -------------------------------------
! 07/01/99 39404 Heller UNIX version of resource file
! 01/14/00 39307 Heller Archive POD files into POD class
! 05/10/00 36852 MARose Add SPSS_DB
! 09/06/00 42386 Heller Delete FITS files for wf2
! 10/24/01 44684 Goldst Create OPUS PDR pipeline version
! 01/30/02 45016 Goldst Removed OK_TO_UPDATE_DATABASE
! Added SYSTEM and CLASS grouping keywords
! 03/13/02 45016 Goldst Removed SYSTEM_GROUPING, added TRIGGERn.DATA_ID
! 03/15/02 45016 Goldst Corrected DL column setting
! 10/11/02 46773 J.Baum Remove msc data_id - keep the msc pod file.
!---------------------------------------------------------------------------
PROCESS_NAME = pdrcln
TASK = <cleandata -p $PATH_FILE -r pdrcln>
DESCRIPTION = 'Clean pipeline directories'
SYSTEM = PDR
CLASS = all
DISPLAY_ORDER = 1
INTERNAL_POLLING_PROCESS = TRUE
OSF_RANK = 1 ! First Trigger
OSF_TRIGGER1.CL = w ! Trigger
OSF_TRIGGER1.DATA_ID = mtl ! Trigger1 class ID
OSF_TRIGGER2.CL = w ! Trigger
OSF_TRIGGER2.DATA_ID = sms ! Trigger2 class ID
OSF_TRIGGER3.CL = w ! Trigger
OSF_TRIGGER3.DATA_ID = orb ! Trigger3 class ID
OSF_PROCESSING.CL = p ! Processing
OSF_SUCCESS.CL = c ! OSF completion
OSF_SUCCESS.DL = c ! Sets DL column to c for OSF deletion
OSF_ERROR.CL = f ! Failure setting
POLLING_TIME = 5 ! Wait (seconds) before polling for next
CLASS_GROUPING.01 = mtl
CLASS_GROUPING.02 = sms
CLASS_GROUPING.03 = orb
PDRDEL Process details
- PDRDEL Process Description
- This process resides in the (DL) stage of the PDR Pipeline.
It is an OSF-triggered process and its purpose is to delete 'pas', 'msc', 'mtl', 'sms', and 'orb' class OSFs
from the pipeline after their associated processing has completed. When PDRDEL is triggered, it looks in its
resource file to determine which OSFs to delete. The following resource file mnemonics are used in making the
determination:
OSF_TRIGGER1.DL = d !value of (DL) stage that triggers PDRDEL
OSF_TRIGGER1.DATA_ID = msc !class of OSF to delete
OSF_TRIGGER2.DL = d !value of (DL) stage that triggers PDRDEL
OSF_TRIGGER2.DATA_ID = mtl !class of OSF to delete
OSF_TRIGGER3.DL = d !value of (DL) stage that triggers PDRDEL
OSF_TRIGGER3.DATA_ID = sms !class of OSF to delete
OSF_TRIGGER4.DL = d !value of (DL) stage that triggers PDRDEL
OSF_TRIGGER4.DATA_ID = orb !class of OSF to delete
OSF_TRIGGER5.DL = d !value of (DL) stage that triggers PDRDEL
OSF_TRIGGER5.DATA_ID = pas !class of OSF to delete
PDRDEL Process Triggers
Input Triggers:
PDRDEL is triggered by the manual insertion by operations personnel of a 'd' in the (DL) stage of the
pipeline for each class of OSFs.
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ _ _ _ _ _ _ _ _ d pas
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ c c c c c c c c d msc
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c _ _ _ _ _ _ c c c d mtl, sms
DR EP RP UP MS CT NS RQ RS CL DL Class
-- -- -- -- -- -- -- -- -- -- -- -----
c c _ _ _ _ _ c c c d orb
Output Trigger:
N/A
PDRDEL Process I/O - Input to and Output from the PDRDEL process is as follows:
INPUT:
N/A
OUTPUT:
N/A
PDRDEL Process Modes - The PDRDEL process invokes an OPUS pipeline generic executable to perform
its work; i.e., the 'osfdelete' executable. This executable can be executed only in pipeline mode.
Pipeline Mode:
osfdelete -p opus_definitions_dir:your.path -r pdrdel (in task line of resource file)
where:
-p = denotes path file specification follows
-r = denotes resource file for the PDRDEL Process
opus_definitions_dir:your.path = path file to use
PDRDEL Process Resource File
!--------------------------------------------------------------------
!
! pdrdel.resource
!
! Purpose: This file is used to construct the trigger, error, and
! success status fields in the observation status file.
!
! This resource file uses an OSF trigger.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
!
! MOD PR
! LEVEL DATE NUMBER User Description
! ----- -------- ------ ------ -------------------------------------
! 000 02/10/01 42443 Heller first version
! 001 10/24/01 44684 Goldst Created OPUS PDR pipeline version
! 002 03/12/02 45016 Goldst Added TRIGGERn and TRIGGERn.DATA_ID
! 003 05/20/02 45016 Goldst Added pas class TRIGGER and DATA_ID
! 004 07/11/02 46101 J.Baum Changed trigger to DL status 'd'.
!--------------------------------------------------------------------
PROCESS_NAME = pdrdel
TASK = <osfdelete -p $PATH_FILE -r pdrdel>
DESCRIPTION = 'Delete OSFs from the BB'
SYSTEM = PDR
CLASS = all
DISPLAY_ORDER = 1
!---------------------------------------------------------------------------
! EVNT resource.
!---------------------------------------------------------------------------
OSF_RANK = 1 ! OSF event ordering.
OSF_TRIGGER1.DL = d ! Manually set to trigger OSF deletion
OSF_TRIGGER1.DATA_ID = msc ! Trigger1 class ID
OSF_TRIGGER2.DL = d ! Manually set to trigger OSF deletion
OSF_TRIGGER2.DATA_ID = mtl ! Trigger2 class ID
OSF_TRIGGER3.DL = d ! Manually set to trigger OSF deletion
OSF_TRIGGER3.DATA_ID = sms ! Trigger3 class ID
OSF_TRIGGER4.DL = d ! Manually set to trigger OSF deletion
OSF_TRIGGER4.DATA_ID = orb ! Trigger4 class ID
OSF_TRIGGER5.DL = d ! Manually set to trigger OSF deletion
OSF_TRIGGER5.DATA_ID = pas ! Trigger5 class ID
POLLING_TIME = 5 ! Response time of the application
!---------------------------------------------------------------------------
! Application Specific resource
!---------------------------------------------------------------------------
OSF_PROCESSING.DL = p ! letter to be used when an OSF is processed.
OSF_ERROR.DL = e ! letter to be used when there is an error.
PDR Pipeline Database usage
Database Relations Accessed
MSCCPY Process accessed relations
MTLCPY Process accessed relations
SMSCPY Process accessed relations
ORBCPY Process accessed relations
PDRORB Process accessed relations
REPLAN Process accessed relations
UPDATR Process accessed relations
MSCXTR Process accessed relations
CONTRL Process accessed relations
NICSAA Process accessed relations
PDRREQ Process accessed relations
PDRRSP Process accessed relations
Database Queries Performed
Database Relations accessed
file_times Relation
This relation is used to track the receipt and processing of the various product files received from
the PASS system. The PDR Pipeline MSCCPY, MTLCPY,
SMSCPY, ORBCPY, PDRORB,
REPLAN, UPDATR, MSCXTR,
CONTRL, and NICSAA processes use this relation.
Field name type/size description
---------- --------- -----------
dataset_name C23 file name of product file
archclass C3 classification used to archive the data
archdate C20 latest file date associated with the dataset
window_start C13 Corrected spacecraft time for the first minor frame in the file.
(UTC rounded to the nearest second, in the format YYYYDDDHHMMSS)
window_stop C13 Corrected spacecraft time for the last minor frame in the file.
(UTC rounded to the nearest second, in the format YYYYDDDHHMMSS)
tm_generated C13 time file was generated at PASS
pdb_version C8 PDB tape ID number
environment C8 AEDP environment tape name
replan_time C13 Replan start time (UTC rounded to the nearest second, in the
format YYYYDDDHHMMSS)
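The day-of-year timestamp format used by window_start, window_stop, and replan_time (YYYYDDDHHMMSS) can be parsed with standard library tools. A minimal sketch, assuming Python; the function name parse_pass_time is illustrative and not part of the OPUS software:

```python
from datetime import datetime

def parse_pass_time(stamp: str) -> datetime:
    """Parse a 13-character YYYYDDDHHMMSS day-of-year timestamp, as used in
    the window_start, window_stop, and replan_time fields of file_times."""
    if len(stamp) != 13:
        raise ValueError("expected 13 characters: YYYYDDDHHMMSS")
    # %j is the zero-padded day of the year (001-366)
    return datetime.strptime(stamp, "%Y%j%H%M%S")
```

For example, "2002135120000" decodes to 12:00:00 UTC on day 135 of 2002.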
keyword_source Relation
This relation specifies the source of keywords to be written to the FITS file produced by the
PDRORB process. The source may be specified in three ways, from the PDB
(Project Database), from the PMDB (Proposal Management Database), or as a difference calculation.
In the first case (PDB) the source of the keyword will be the name of a mnemonic as specified in the
EUDL.DAT file of the PDB. The OPUS software will look up the location of that mnemonic in the yurintab
relation. If it is desired to convert the value of the mnemonic to a string (discrete conversion) or to
engineering units (linear or polynomial conversion), then the second field (subsource) specifies the 8
character mnemonic of a conversion. See the description of conv_discrete, conv_linear, and conv_polynomial.
In the second case (PMDB) the source field is the name of a database relation in the PMDB. The subsource
specifies the name of the field in that relation. Only relations which can be joined with the relation
qolink on the basis of program_id, obset_id and ob_number can be used. There is a special case of deep
relations like qesiparm. In this case there is only a field for si_par_name and si_par_value,
both of which are string fields. To obtain a value from such a deep relation, specify the name of the
parameter in a form (usually uppercase) that will match the value in the database.
Finally, for keywords which are simply differences between two already dredged keywords, only the names of
the subtrahend and minuend mnemonics are required for the source and subsource fields.
Field name type/size description
---------- --------- -----------
instrument C3 instrument to which the keyword is associated:
hsp, wfc, foc, fos, hrs, wfII, fgs, acs, nic, sti
keyword C8 name of keyword
sourcetype C8 which kind of source applies (PDB, PMDB, DELTA)
source C30 source name: mnemonic, relation, subtrahend
subsource C30 specify: conversion mnemonic, fieldname or parameter name,
minuend mnemonic
cgg1_keyword Relation
This relation provides keywords to be written to the FITS file produced by the PDRORB
process.
Field name type/size description
---------- --------- -----------
instrument C3 instrument to which the keyword is associated:
hsp, wfc, foc, fos, hrs, wfII, fgs, acs, nic, sti
file_type C3 generic file type. values are:
shp (standard header packet),
udl (unique data log);
dsk (digital sky), dst (digital star),
ask (analog sky), ast (analog star),
asd (area scan/digital), asa (area scan/analog);
ext (extracted data), sci (science data);
shl (science header line), stl (science trailer line);
img (image)
order_index I4 an index defining the keyword order in the relation;
it is used to order the appearance of keywords in the header
file to which it will be written
fixed_index I4 this item should be thought of as a map of a fixed index
for a keyword into a value in the sequence 1,2,...,n. The
fixed index is used to indicate a particular keyword
keyword_str C8 keyword character string
keyword_typ C3 data type of the keyword value
keyword_val C20 keyword value
comment_str C72 comment character string
cnv_flag C1 flag stating whether to automatically convert keyword
optional C1 indicates whether this keyword is an optional one. If an
optional keyword has a blank value (restricted to strings),
then that keyword will be omitted from the header
cgg4_order Relation
This relation provides information on the groupings and ordering of keywords to be written to the FITS file
produced by the PDRORB process.
Field name type/size description
---------- --------- -----------
instrument C3 instrument to which the keyword is associated:
hsp, wfc, foc, fos, hrs, wfII, fgs, acs, nic, sti
header_type C36 The name of the FITS header. eg: NIC_SPT_PRIMARY
ftype_order I2 The order in which to put the following section
cgg1_instr C3 The cgg1_keyword relation instrument name. This
can be the same as 'instrument' above, or another
name such as SSY or GEN.
cgg1_ftype C3 The cgg1_keyword relation 'file_type' specification
for this section of keywords
sms_catalog Relation
This relation, used by the REPLAN process, provides information about the
generation of the Science Mission Schedule (SMS). It includes information on first generating a
logical SMS (LSMS), precedence checking the LSMS, formatting the LSMS into an SMS, transferring
the SMS to the PASS System, and sending the PASS products to the OPUS EDPS System.
Descriptions of each field in the sms_catalog relation are available to provide detailed information.
qolink Relation
This relation, used by the REPLAN, UPDATR,
CONTRL, and NICSAA processes provides information about the linking of exposures to observations.
Descriptions of each field in the qolink relation are available to provide detailed information.
qolink_sms Relation
This relation is used to provide DADS and OPUS with the SMS_ID for any observation in
the Mission Schedule. It can be used to list all observations and association
products for a particular SMS_ID. It contains a status field to identify observation
or product archive success or unavailability.
The REPLAN, UPDATR, CONTRL, and
PDRRSP processes use this relation.
Field name type/size description
---------- --------- -----------
program_id C3 When a proposal is accepted into the PMDB by SPSS it must
be assigned a unique 3 character base 36 program identifier.
It is used for identification of proposals by spacecraft
and OPUS software. It is also used in the OPUS and DADS
rootname for all archived science data files.
obset_id C2 An observation set is a collection of one or more alignments
that are grouped together based on FGS pointing requirements.
That is, if multiple alignments can all be executed using the
same guide star pair, they are grouped into the same observation set.
ob_number C3 Observations are numbered sequentially throughout an observation set.
An ob_number is _NOT_ the same as an obset_ID. The third character is
only used for association products
sms_id C9 The C&C list is a major data structure within SPSS that
contains information on 'candidates' and the 'calendar'.
The candidate data, often referred to as the candidate pool, are
scheduling units and their associated observation set, alignment, and
target data. The calendar is a timeline of activities that are laid
down by the SPSS scheduling utilities. When a C&C List is saved in
SPSS it receives an identifier (sms_id)
status C1 The condition or availability for archiving is indicated
by a status code with the following definition:
U - Unexecuted - the initial condition for exposures or
products that will be archived independently of
associations
M - Member - the initial condition for exposures that
are only archived within an association product
N - Not available - set by operations to indicate
exposures or products that cannot be generated
E - Executed and archived successfully
inst C1 One character identifier for the instrument used for the science
observation. The relation between this value and the names in
qobservation.si_id are:
1 = 1 2 = 2 3 = 3 J = ACS N = NIC O = STIS
U = WFII V = HSP W = WFPC X = FOC Y = FOS Z = HRS
tag C1 Flags observation as a target acquisition.
Y - The qobservation.target_acqmode field is 01 or 02.
This observation is a target acquisition
ocx_expected C1 Flags this observation as real time (mode 1) target acq.
Y - The qobservation.target_acqmode field is 01.
This observation is a mode 1 (real time) target acquisition
image
pdq_created C1 indicates if the OPUS pipeline has created a PDQ file for this
observation. The value of this column is an alphanumeric code that
indicates the status, real or inferred, of the PDQ file for the
observation
oms_archived C1 indicates if the OMS pipeline has successfully archived an
observation log for this observation.
X - The data assessment software has validated existence of the
OMS observation log for this observation in DADS
rti_checked C1 indicates if OPUS staff has checked for existence of real time
information for this observation.
X - OPUS staff has run RTI_CHECK software for this observation
ocx_appended C1 indicates if an OCX file has been appended to the PDQ file.
The value of this field is an alphanumeric code that indicates
that the data assessment software has located an OCX file (and
the type of file) for this observation and has appended it to
the PDQ file
assessed C1 indicates if this observation has been assessed for procedural
quality. The value of this column is an alphanumeric code that
summarizes the essentials of the assessment process.
dq_archived C1 indicates the archive status of the procedural data quality
file(s). The value of this column is an alphanumeric code that
indicates the archive status of the PDQ (and optionally
the OCX) file of this observation
start_time C17 This field (format: yyyy.ddd:hh:mm:ss) contains either the
planned or actual start time of the observation. The time_type
field indicates the source of the time.
end_time C17 This field (format: yyyy.ddd:hh:mm:ss) contains either the
planned or actual end time of the observation. The time_type
field indicates the source of the time.
time_type C1 (P/A) When this field is P for planned, start_time and end_time
are generated from SPSS planning data and the accuracy is
questionable. When this field is A for actual, start_time and
end_time have been updated by the OPUS pipeline using science
data.
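The SOGS time format carried in start_time and end_time (yyyy.ddd:hh:mm:ss) maps directly onto a strptime pattern. A minimal sketch, assuming Python; observation_seconds is a hypothetical helper name, not an OPUS routine:

```python
from datetime import datetime

# yyyy.ddd:hh:mm:ss -- the standard SOGS, day-of-year based time format
SOGS_FORMAT = "%Y.%j:%H:%M:%S"

def observation_seconds(start_time: str, end_time: str) -> float:
    """Elapsed seconds between the start_time and end_time fields of a
    qolink_sms record."""
    start = datetime.strptime(start_time, SOGS_FORMAT)
    end = datetime.strptime(end_time, SOGS_FORMAT)
    return (end - start).total_seconds()
```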
executed Relation
This relation is used to replace the executed_flg in the qobservation relation as qobservation
is replicated from SPSS and cannot be updated by the EDPS. Some additional data is also supplied
but only the executed_flg is updated by EDPS software. The additional fields can be used to
distinguish dumps from science observations.
The REPLAN and UPDATR processes use this relation.
Field name type/size description
---------- --------- -----------
program_id C3 When a proposal is accepted into the PMDB by SPSS it must
be assigned a unique 3 character base 36 program identifier.
It is used for identification of proposals by spacecraft
and OPUS software. It is also used in the OPUS and DADS
rootname for all archived science data files.
obset_id C2 An observation set is a collection of one or more alignments
that are grouped together based on FGS pointing requirements.
That is, if multiple alignments can all be executed using the
same guide star pair, they are grouped into the same observation set.
ob_number C3 Observations are numbered sequentially throughout an observation set.
An ob_number is _NOT_ the same as an obset_ID. The third character is
only used for association products
proposal_id C5 A proposal consists of many individual observations
submitted as a package by a proposer. When a proposal is
processed by the proposal entry system (RPS2),
it is assigned a proposal identifier. That identifier is
an integer that is converted into a 5 character base 36
string
si_id C4 This is the identifier type for a Science Instrument (SI).
The SI list includes the following:
FOC - Faint Object Camera
FOS - Faint Object Spectrograph
WFPC - Wide Field Planetary Camera 1
WFII - Wide Field Planetary Camera 2
WF3 - Wide Field Camera 3
HRS - High Resolution Spectrograph
CSTR - COSTAR
1 - Fine Guidance Sensor 1
2 - Fine Guidance Sensor 2
3 - Fine Guidance Sensor 3
NIC - NICMOS
STIS - Space Telescope Imaging Spectrograph
ACS - Advanced Camera for Surveys
COS - Cosmic Origins Spectrograph
control_id C5 This is information on how OPUS is supposed to
process the data. The data is stored in five bytes:
H/Y/N : Calibration data flag
F/P : Output product type (film/plot) - unused
2 bytes: Output format spec - unused
Y/N : Output product holding tank flag
coord_id C10 This is the SI aperture and coordinate system
identifier; it specifies the aperture and
coordinate system of the instrument to be
used for the observation of the target.
It is the aperture identifier concatenated with
the aperture coordinate system identifier.
They specify the default location of the target within
the aperture
executed_flg C1 When the record is created this value is blank. When
the observation has been executed on board HST, OPUS
receives a science POD file or EDPS generates an
astrometry file from engineering data, and this field is
updated to the Type (ninth) character of the dataset
rootname
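Both program_id (3 characters) and proposal_id (5 characters) are fixed-width base-36 encodings of integers. The following sketch shows one way such strings can be produced; to_base36 is an illustrative helper under that assumption, not an OPUS routine:

```python
BASE36_DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base36(value: int, width: int) -> str:
    """Render a non-negative integer as a fixed-width base-36 string, the
    encoding used for identifiers such as proposal_id (5 characters) and
    program_id (3 characters)."""
    if value < 0:
        raise ValueError("value must be non-negative")
    chars = []
    while value:
        value, rem = divmod(value, 36)
        chars.append(BASE36_DIGITS[rem])
    # Zero-pad on the left to the fixed field width
    return "".join(reversed(chars)).rjust(width, "0")
```

For example, 35 encodes to "00Z" in a 3-character field and 36 rolls over to "010".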
qobservation Relation
This relation, used by the REPLAN, UPDATR, and NICSAA processes,
provides information about the observations contained in Science Mission Schedules.
Descriptions of each field in the qobservation relation are available to provide detailed information.
dataset_link Relation
This relation is used to link dataset rootnames to programmatic ids with keys to facilitate
joins to other relations.
The REPLAN and CONTRL processes use this relation.
Field name type/size description
---------- --------- -----------
dataset_rootname C9 IPPPSSOOT for jitter or astrometry datasets
dataset_type C3 Either FGS or AST, for FGS obslogs or astrometry,
respectively
program_id C3 When a proposal is accepted into the PMDB by SPSS it must
be assigned a unique 3 character base 36 program identifier.
It is used for identification of proposals by spacecraft
and OPUS software. It is also used in the OPUS and DADS
rootname for all archived science data files.
obset_id C2 An observation set is a collection of one or more alignments
that are grouped together based on FGS pointing requirements.
That is, if multiple alignments can all be executed using the
same guide star pair, they are grouped into the same observation set.
ob_number C3 Observations are numbered sequentially throughout an observation set.
An ob_number is _NOT_ the same as an obset_ID. The third character is
only used for association products
product_eng_map Relation
This relation is used to identify the FOF engineering telemetry files that need to be converted to provide
intermediary telemetry files containing telemetry parameters needed for FGS, GSA or AST product generation.
It is used to simplify creation and collection of telemetry for processing. When a product has no
eng_ready = N flags, then the OSF for the product can be created using the product rootname.
The REPLAN and CONTRL processes use this relation.
Field name type/size description
---------- --------- -----------
product_rootname C14 IPPPSSOOT for jitter or astrometry products;
GYYYYDDDHHMMSS for GS acquisition data
product_type C3 FGS for jitter, AST for astrometry, or GSA for GS
acquisition data
eng_rootname C12 TYYYYDDDHHMM, rootname of the ENG telemetry file
eng_ready C1 (Y/N), 'N' indicates the telemetry file eng_rootname is not
yet recognized as ready for this product_type. 'Y' indicates
that the processing for this product type has recognized
the presence of eng_rootname. Having separate control for
each product type simplifies initiation of processing.
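The eng_ready rule above (an OSF for a product can be created once no eng_ready = 'N' rows remain for it) can be sketched as a predicate over product_eng_map rows. Here plain dictionaries stand in for the result of a real database query, and product_ready is a hypothetical name:

```python
def product_ready(rows, product_rootname, product_type):
    """True once product_eng_map holds no eng_ready = 'N' rows for this
    product, i.e. all of its telemetry has been recognized and its OSF
    can be created. `rows` is an iterable of dicts mirroring the
    relation's fields (a stand-in for a database query)."""
    return not any(
        row["product_rootname"] == product_rootname
        and row["product_type"] == product_type
        and row["eng_ready"] == "N"
        for row in rows
    )
```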
asn_association Relation
This relation is used to provide attributes that apply to entire associations. An association is
a set of exposures that will be merged into products by the OPUS pipeline. The full association
consists of a list of exposures and products. This is one of the relations used to define
associations for OPUS. The asn_members and
asn_product_link are the others.
The REPLAN and UPDATR processes use this relation.
Field name type/size description
---------- --------- -----------
association_id C9 This field identifies an OPUS association. An
association is a set of exposures that will be
merged into products by OPUS pipeline calibration
processing. The full association consists of a
list of exposures and products.
This field completely identifies an association. This is
the OPUS value used for the keyword ASN_ID. It has the
following format:
IPPPSSAAa
where:
I = instrument code (e.g., N for NICMOS, O for STIS),
PPP = program_id
SS = obset_id of the first associated exposure,
AAa = the two-character sequence (AA = 01,02,...)
is unique within the obset SS; plus the product id
(a) that is always 0 for associations and primary
products.
si_name C4 This is the identifier type for a Science Instrument (SI).
The SI list includes the:
FOC - Faint Object Camera
FOS - Faint Object Spectrograph
WFII - Wide Field Planetary Camera 2
HRS - High Resolution Spectrograph
CSTR - COSTAR
FGS - Fine Guidance Sensor
NIC - NICMOS
STIS - Space Telescope Imaging Spectrograph
ACS - Advanced Camera for Surveys
COS - Cosmic Origins Spectrograph
WF3 - Wide Field Camera 3
last_exp_date C17 This field contains the latest predicted time of the
exposure members of an association. It uses the standard
SOGS time format - yyyy.ddd:hh:mm:ss
where:
yyyy = year
ddd = day of year
hh = hours
mm = minutes
ss = seconds
collect_date C17 This field contains the date of the association file
created by OPUS. It uses the standard SOGS time
format - yyyy.ddd:hh:mm:ss
where:
yyyy = year
ddd = day of year
hh = hours
mm = minutes
ss = seconds
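The fixed IPPPSSAAa layout of association_id lends itself to positional slicing. A minimal sketch, assuming Python; parse_association_id and AssociationId are illustrative names:

```python
from typing import NamedTuple

class AssociationId(NamedTuple):
    instrument: str   # I   - instrument code (e.g. N for NICMOS, O for STIS)
    program_id: str   # PPP - base-36 program identifier
    obset_id: str     # SS  - obset_id of the first associated exposure
    asn_number: str   # AA  - sequence unique within the obset
    product_id: str   # a   - '0' for associations and primary products

def parse_association_id(asn_id: str) -> AssociationId:
    """Split a 9-character IPPPSSAAa association_id into its parts."""
    if len(asn_id) != 9:
        raise ValueError("association_id must be 9 characters (IPPPSSAAa)")
    return AssociationId(asn_id[0], asn_id[1:4], asn_id[4:6],
                         asn_id[6:8], asn_id[8])
```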
asn_members Relation
This relation is used for all members (exposure and product) that form an OPUS association. This
is one of the relations used to describe OPUS associations. The
asn_association and
asn_product_link are the others.
An association is a set of exposures that will be merged into products by the OPUS pipeline. The full
association consists of a list of exposures and products.
An association product is a dataset, distinct from any exposure dataset, that is generated by the
pipeline. Exposures are associated in order to generate products. For an exposure, the
mem_number is two characters and is the same as ob_number. For a product, the mem_number is the
combination of the association number within the obset and the product_id.
Prior to collection, the member_status for exposures is 'U'. Afterwards, it is either 'C' or
'O'. The value for products is always 'P'.
Prior to collection, the product_status for products is 'U'. Afterwards, it is either 'C' or
'N'. The value for exposures is always 'E'.
The REPLAN and UPDATR processes use this relation.
Field name type/size description
---------- --------- -----------
association_id C9 This field identifies an OPUS association. An association
is a set of exposures that will be merged into products by
OPUS pipeline calibration processing. The full association
consists of a list of exposures and products. STIS can only
have one product per association. NICMOS can have as few as
one and as many as nine products. ACS can have
a single product having a product_id of either '0'
(for a dither product) or '1' for a cr-split or repeat-obs
product at a single pointing. If there is more than one
product, then there is always a dither product having the
product_id '0' and the other products use product ids that
range from '1' to 'I'.
This field completely identifies an association. It
is set by TRANS. It is used by DADS as the dataset
name to archive the association file. This is the
OPUS value used for the keyword ASN_ID. It has the
following format:
IPPPSSAAa
where:
I = instrument code (e.g., N for NICMOS, O for STIS),
PPP = program_id
SS = obset_id of the first associated exposure,
AAa = the two-character sequence (AA = 01,02,...)
is unique within the obset SS; plus the product id
(a) that is always 0 for associations and primary
products
program_id C3 When a proposal is accepted into the PMDB by SPSS it must
be assigned a unique 3 character base 36 program identifier.
This is done by the PMDB/ACCEPT_PROP command. This program
identifier is tagged as 'program_id' in most PMDB relations.
It is used for identification of proposals by spacecraft
and OPUS software. It is also used in the OPUS and DADS
rootname for all archived science data files.
Because of flight design software, program_id must be
three characters
obset_id C2 An observation set is a collection of one or more
alignments that are grouped together based on FGS pointing
requirements. That is, if multiple alignments can all be
executed using the same guide star pair, they are grouped
into the same observation set.
An observation set is identified by a 2 character base 36
string. This field, typically called 'obset_id', will
often contribute to the index on relation together with a
proposal_id, version_num, and possibly other fields.
OBSET is an abbreviation for observation set
member_num C3 For exposures, observations are numbered sequentially
throughout an observation set and are assigned by SMS/Gen.
For exposures the name is two characters and it is the
same as the observation ob_number. For products it is the
association number (two characters) plus the product_id
member_type C12 This field describes the role of the member in the
association. If there are multiple products, then the
format of the exposure names correlate to the
product names by rules that depend on the SI.
For exposures this name must be the same as exp_type
in qeassociation
member_status C1 This field describes the status of a member of the
association. Valid values are:
U -- uncollected exposure
C -- collected exposure
O -- orphan exposure (not collected)
P -- product dataset
product_status C1 This field describes the status of a product of the
association. Valid values are:
U -- uncollected product
C -- collected product
N -- not collected - missing product after collection
E -- exposure (not a product)
X -- unknown (only valid for old records)
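The reservation rules above (member_status 'P' is used only by products, product_status 'E' only by exposures) can be expressed as a simple consistency check over a row. A sketch with hypothetical names, not part of the OPUS software:

```python
# Status codes from the asn_members field descriptions.
MEMBER_STATUS = {"U", "C", "O", "P"}        # 'P' is reserved for products
PRODUCT_STATUS = {"U", "C", "N", "E", "X"}  # 'E' is reserved for exposures

def check_member_row(is_product, member_status, product_status):
    """Check one asn_members row against the reservation rules: a product
    row must carry member_status 'P', and an exposure row must carry
    product_status 'E'."""
    if member_status not in MEMBER_STATUS or product_status not in PRODUCT_STATUS:
        return False
    if is_product:
        return member_status == "P" and product_status != "E"
    return product_status == "E" and member_status != "P"
```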
asn_product_link Relation
This relation is used for all products for an OPUS association. It identifies the exposures
contained in each product. It can also be accessed to find the products associated with any
exposure. The other relations used to describe OPUS associations are
asn_association and asn_members.
The REPLAN, UPDATR, and NICSAA processes
use this relation.
Field name type/size description
---------- --------- -----------
program_id C3 When a proposal is accepted into the PMDB by SPSS it must
be assigned a unique 3 character base 36 program identifier.
This is done by the PMDB/ACCEPT_PROP command. This program
identifier is tagged as 'program_id' in most PMDB relations.
It is used for identification of proposals by spacecraft
and OPUS software. It is also used in the OPUS and DADS
rootname for all archived science data files.
Because of flight design software, program_id must be
three characters
asn_obset_id C2 Association observation set identifier. An observation set
is identified by a 2 character base 36 string. This field,
typically called 'obset_id', will often contribute to the
index on relation together with a proposal_id, version_num,
and possibly other fields.
member_num C3 Member number (in asn_members) of the product: it is the
association number (two characters) plus the product_id
obset_id C2 Exposure observation set identifier. An observation set is
identified by a 2 character base 36 string. This field,
typically called 'obset_id', will often contribute to the
index on relation together with a proposal_id, version_num,
and possibly other fields. OBSET is an abbreviation for
observation set
ob_number C2 Observations are numbered sequentially throughout
an observation set and are assigned by SMS/Gen.
An ob_number is _NOT_ the same as an obset_ID.
This field can be joined to member_num for exposure records
in asn_members
jitter_evt_map Relation
This relation is used to identify all the jitter datasets that follow either the SMS start
event or the GS acquisition event, identified by event time. The event_type specifies whether
the event_start_time is for an SMS or a GS acquisition. If the event is an acquisition, then the
evt_start_time will exactly match the acq_start_time in the gsa_data table. This table is also
used to identify internal exposures that do not need engineering telemetry for their defaulted
jitter files. The table was designed to be easily joined to qolink_sms to get exposure status so
that jitter files are not generated for exposures that have a status of N.
The REPLAN and CONTRL processes use this relation.
Field name type/size description
---------- --------- -----------
event_start_time C17 YYYY.DDD:HH:MM:SS, Start time of event. If event_type is
G for GSACQ then this time must match the acq_start_time
in the gsa_data table and it can be used to join to that
table
event_type C3 (SMS or GSA) SMS start or GS acquisition start
program_id C3 When a proposal is accepted into the PMDB by SPSS it must
be assigned a unique 3 character base 36 program identifier.
This is done by the PMDB/ACCEPT_PROP command.
It is used for identification of proposals by spacecraft
and OPUS software. It is also used in the OPUS and DADS
rootname for all archived science data files.
Because of flight design software, program_id must be
three characters
obset_id C2 An observation set is a collection of one or more
alignments that are grouped together based on FGS pointing
requirements. That is, if multiple alignments can all be
executed using the same guide star pair, they are grouped
into the same observation set
ob_number C2 Observations are numbered sequentially throughout
an observation set and are assigned by SMS/Gen.
An ob_number is _NOT_ the same as an obset_ID
internal_flag C1 (Y or N) Y indicates that this is an internal observation
product_status Relation
This relation is used to identify the completion status of all FGS, AST, and GSA products in order to
control telemetry file cleanup. The records are created with N for complete_flag. This relation is
designed to be joined to product_eng_map to determine when all the products, identified in
product_eng_map by product_type and product_rootname, have been marked completed.
For reprocessing, all the products to be reprocessed should have the complete_flag interactively reset
to N. For replans, any records for datasets no longer present in qolink must be deleted. Only one record
for each product and type is allowed.
The REPLAN and CONTRL processes use this relation.
Field name type/size description
---------- --------- -----------
product_rootname C14 IPPPSSOOT for jitter or astrometry products;
GYYYYDDDHHMMSS for GS acquisition data
product_type C3 FGS for jitter, AST for astrometry, or GSA for GS
acquisition data
complete_flag C1 (Y/N) set to Y by cleanup software indicating this dataset has been
processed and no longer prevents cleanup of telemetry files
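The completion check that product_status supports can be sketched as follows. This is an illustrative Python sketch, not EDPS code; the in-memory record layouts are assumptions based on the field descriptions above.

```python
# Hypothetical sketch (not EDPS code): decide whether a telemetry file can be
# cleaned up, i.e. every product mapped to it in product_eng_map has been
# marked complete (complete_flag = 'Y') in product_status.

def telemetry_cleanup_ok(eng_rootname, product_eng_map, product_status):
    """product_eng_map: list of dicts with keys eng_rootname, product_type,
    product_rootname. product_status: dict keyed by
    (product_type, product_rootname) -> complete_flag ('Y' or 'N')."""
    products = [(r["product_type"], r["product_rootname"])
                for r in product_eng_map if r["eng_rootname"] == eng_rootname]
    # Cleanup is allowed only when every mapped product is flagged complete.
    return all(product_status.get(p, "N") == "Y" for p in products)
```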
qelogsheet Relation
This relation, used by the UPDATR and NICSAA processes, is closely related to the fields
(columns) on the proposal Exposure Logsheet. The Logsheet is used to define the proposed
exposures for the HST Scientific instruments.
Descriptions of each field in the qelogsheet relation are available to provide detailed information.
qeassociation Relation
This relation, used by the UPDATR process, defines the set of
exposures that form an OPUS association.
Descriptions of each field in the qeassociation relation are available to provide detailed information.
product_code Relation
This relation is used to provide the tool, UPDATE_QODATA, with a mechanism to assign a product_id
and member_name (of a product) for NICMOS or any instrument that has multiple products. The product_id is
correlated with the unique value of EXP_TYPE in the qeassociation table. This is a one
character product_id that supersedes the last character of the association_id to form the
member_name of a product.
The UPDATR process uses this relation.
Field name type/size description
---------- --------- -----------
si_name C4 This is the identifier type for a Science Instrument (SI).
The current SI list for qeassociation candidates is:
NIC - NICMOS - Near IR Camera - Multi-Object Spectr.
STIS - Space Telescope Imaging Spectrograph
STIS only has one product_id but many exp_type values and
will not appear in this table
exp_type C12 This field describes the role of the exposure in the
association. Valid values for NICMOS exposures are
(EXP-TARG, EXP-BCK1, EXP-BCK2, ..., EXP-BCK8)
product_id C1 This is the last character of a member_id of a product. The
first eight characters of the id are the same as the association_id.
The first product_id is 0 which forms a member_name that is the
same as the association_id
product_code PRODCODE_TYPE Assign product_id by exp_type
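The member_name construction described above can be sketched as follows. This is an illustrative Python sketch; the sample association_id is hypothetical.

```python
# Illustrative only: form a product's member_name by replacing the last
# character of the 9-character association_id with the product_id, as
# described for the product_code relation.

def member_name(association_id, product_id):
    assert len(association_id) == 9 and len(product_id) == 1
    return association_id[:8] + product_id

# Since the first product_id is '0', the first member_name equals the
# association_id itself (association_ids end in '0').
```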
msc_events Relation
This relation is used to hold events extracted from Mission Schedule files that are of interest
to the EDPS FGS pipeline.
The MSCXTR process populates this relation and the relation is used by
the CONTRL process.
Field name type/size description
---------- --------- -----------
event_time C20 YYYY.DDD.HH.MM.SS.CC the time within a hundredth of
a second for each event
event_type C3 OPS for operational info; FGS for jitter; RTI for PCS
events; RTO for offset slot data; TDR for TDRSS COMCON events
event_class C5
for OPS type classes
--------------------
BOM begin mission schedule
EOM end mission schedule
for FGS type classes
--------------------
BOSn begin slew of type n (n=1 to 4)
PCP Pointing Control Processor
ORBIT related to HST orbit
BOA begin GS acquisition or reacquisition
EOA end GS acquisition or reacquisition
for RTI type classes
--------------------
EOS2 end slew of type 2
FHST3 fhst - start of 3-axis update
for RTO type classes
--------------------
GEN generate offset slot
REUSE reuse offset slot
SET set offset slot
CLEAR clear offset slot
for TDR type classes
--------------------
BOC begin COMCON
EOC end COMCON
BRC begin rejected COMCON
ERC end rejected COMCON
BTC begin trimmed COMCON
ETC end trimmed COMCON
event_name C10 Format Description
---------- --------------------------------------
mmmmmmmmm 9-char MSC rootname for BOM and EOM
aaaaaaaaaa 10-char aperture name for BOSn or EOS2
TERMINATE PCP state
GYRO PCP state
FGS_OCCULT PCP state
SAA PCP state
ENTR_DAY entering ORBIT day
ENTR_NIGHT entering ORBIT night
GSACQ1 first acquisition for BOA and EOA
GSACQ2 second acquisition for BOA and EOA
REACQ reacquisition for BOA and EOA
S_ss_pppoo RTO: ss is slot num, pppoo is obset
pppoo_ssss TDR: pppoo is obset, ssss is service
msc_gs_acq Relation
This relation is used to hold the events extracted from Mission Schedule files that entail detailed
parameters of Guide Star (GS) Acquisitions and Re-acquisitions. This information is of interest to
the EDPS FGS pipeline.
The MSCXTR process populates this relation.
Field name type/size description
---------- --------- -----------
event_time C20 YYYY.DDD.HH.MM.SS.CC the time within a hundredth of
a second for each event
event_name C10 Format Description
---------- --------------------------------------
GSACQ1 first acquisition for BOA and EOA
GSACQ2 second acquisition for BOA and EOA
REACQ reacquisition for BOA and EOA
dom_fgs C1 FGS number for dominant GS
prim_fgs C1 FGS number for primary GS, that is acquired first
rol_fgs C1 FGS number for roll GS
dom_gs_ra R8 Right ascension (degrees) for dominant GS
dom_gs_dec R8 Declination (degrees) for dominant GS
rol_gs_ra R8 Right ascension (degrees) for roll GS or zero
rol_gs_dec R8 Declination (degrees) for roll GS or zero
dom_gs_mag R4 (Vmag) brightness of dominant GS
rol_gs_mag R4 (Vmag) brightness of roll GS or zero
dom_gs_id C10 GSC ID of dominant GS
rol_gs_id C10 GSC ID of roll GS or blank if no GS
tracking C2 FL for finelock, CT for coarse track, FG for
finelock/gyro, and CG for coarse track/gyro.
CT and CG tracking modes are no longer used.
msc_ast_obset Relation
This relation is used to hold the events extracted from Mission Schedule files that entail obset
level data for astrometry. The event time is taken from the time found in the MSC file but there
is no msc_events record for this data. This information is of interest to the EDPS AST pipeline;
i.e., the fields are used to set astrometry keywords in astrometry output products.
The MSCXTR process populates this relation.
Field name type/size description
---------- --------- -----------
event_time C20 YYYY.DDD.HH.MM.SS.CC the time within a hundredth of
a second for each event
program_id C3 When a proposal is accepted into the PMDB by SPSS it must
be assigned a unique 3 character base 36 program identifier.
This is done by the PMDB/ACCEPT_PROP command.
It is used for identification of proposals by spacecraft
and OPUS software. It is also used in the OPUS and DADS
rootname for all archived science data files.
Because of flight design software, program_id must be
three characters.
obset_id C2 An observation set is a collection of one or more
alignments that are grouped together based on FGS pointing
requirements. That is, if multiple alignments can all be
executed using the same guide star pair, they are grouped
into the same observation set.
fgs C1 FGS number, 1 2 or 3, or 0 for all
param_name C8 Keyword name or keyword related parameter name
param_value C20 Formatted value of parameter
msc_ast_observe Relation
This relation is used to hold the events extracted from Mission Schedule files that entail
observation level data for astrometry. The event time is taken from the time found in the MSC file
but there is no msc_events record for this data. This information is of interest to the EDPS AST
pipeline; i.e., the fields are used to set astrometry keywords in astrometry output products.
The MSCXTR process populates this relation.
Field name type/size description
---------- --------- -----------
event_time C20 YYYY.DDD.HH.MM.SS.CC the time within a hundredth of a second
for each obset
program_id C3 When a proposal is accepted into the PMDB by SPSS it must
be assigned a unique 3 character base 36 program identifier.
It is used for identification of proposals by spacecraft
and OPUS software. It is also used in the OPUS and DADS
rootname for all archived science data files.
obset_id C2 An observation set is a collection of one or more alignments
that are grouped together based on FGS pointing requirements.
That is, if multiple alignments can all be executed using the
same guide star pair, they are grouped into the same observation set.
ob_number C3 Observations are numbered sequentially throughout an observation set.
An ob_number is _NOT_ the same as an obset_ID. The third character is
only used for association products
fgs C1 FGS number, 1 2 or 3, or 0 for all
param_name C8 Keyword name or keyword related parameter name
param_value C20 Formatted value of parameter
msc_slew_slot Relation
This relation is used to hold the events extracted from Mission Schedule files that entail the details
of msc_events of type RTO and class GEN taken from the Mission Schedule file GEN-SLEW block. This
information is of interest to the EDPS FGS pipeline.
The MSCXTR process populates this relation.
Field name type/size description
---------- --------- -----------
event_time C20 YYYY.DDD.HH.MM.SS.CC the time within a hundredth of a second
for each obset
program_id C3 When a proposal is accepted into the PMDB by SPSS it must
be assigned a unique 3 character base 36 program identifier.
This is done by the PMDB/ACCEPT_PROP command.
It is used for identification of proposals by spacecraft
and OPUS software. It is also used in the OPUS and DADS
rootname for all archived science data files.
Because of flight design software, program_id must be
three characters.
obset_id C2 An observation set is a collection of one or more
alignments that are grouped together based on FGS pointing
requirements. That is, if multiple alignments can all be
executed using the same guide star pair, they are grouped
into the same observation set.
slot C3 Slot number
load_by C17 YYYY.DDD:HH:MM:SS - load by date
max_slew C3 formatted as integer arcsec - maximum slew angle
offset_id C13 An identifier used in the SPSS database
qexposure Relation
This relation, used by the CONTRL process, defines exposures. It
provides information on how to control the Science Instruments for the exposures defined in its
records.
Descriptions of each field in
the qexposure relation are available to provide detailed information.
gsa_data Relation
This relation is used to identify the full time range of planned Guide Star acquisitions, and
to report results of the actual acquisition. The first three fields are populated when the
record is inserted into the table and the remaining fields are updated after GS acquisition
processing.
The CONTRL process updates this relation.
Field name type/size description
---------- --------- -----------
gsa_rootname C14 GYYYYDDDHHMMSS where the time values match the gsa_start_time
gsa_start_time C17 YYYY.DDD:HH:MM:SS, Commanded start of the GSACQ1 or
REACQ event
gsa_end_time C17 YYYY.DDD:HH:MM:SS, Allocated end of the REACQ event, or
the end of the GSACQ1 event if num_pair is 1, or end of
the GSACQ2 event if num_pair is 2.
acq_success_time C17 YYYY.DDD:HH:MM:SS, time of the start of FGS guiding
when takedata flag is first raised during the
acquisition or blank if no telemetry or acq failure
guiding_mode C2 Guiding mode at end of acquisition.
GY - gyro,
FL - fine lock on both stars, FG - fine lock/gyro,
CT - coarse track on both stars, CG - coarse track/
gyro. CT and CG are no longer allowed but old
schedules allowed these modes
acq_status C12 GSFAIL keyword value or blank. The non-blank values
are:
TLMGAP - unknown due to missing telemetry
VEHSAFE - not attempted due to safing
SSLEXP - scan step limit exceeded on primary GS
SSLEXS - scan step limit exceeded on secondary GS
SREXCPn - search radius exceeded on primary GS of pair n
SREXCSn - search radius exceeded on secondary GS of pair n
NOLOCK - failed to obtain finelock on either GS
acq_tlm_gap R4 (Seconds) time of telemetry gap that overlaps
the acquisition window between gsa_start_time and
gsa_end_time
actual_pair_num I4 -1 = undetermined, 0 = failure, 1 = first pair is
acquired, or 2 = second pair is acquired.
acq_dom_fgs I4 Dominant FGS number or 0, after acquisition. A zero
value indicates a total failure
acq_rol_fgs I4 Roll FGS number or 0, after acquisition. A zero
value is either a planned FGS/GYRO mode or a
failure to acquire both GSs
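The gsa_rootname is derived from the gsa_start_time by dropping the delimiters and prefixing a G. A sketch of that derivation (illustrative only; the sample time is hypothetical):

```python
# Illustrative sketch: build the 14-character GYYYYDDDHHMMSS rootname from a
# YYYY.DDD:HH:MM:SS start time, per the gsa_data field descriptions.

def gsa_rootname(gsa_start_time):
    digits = "".join(ch for ch in gsa_start_time if ch.isdigit())
    assert len(digits) == 13  # YYYY + DDD + HH + MM + SS
    return "G" + digits
```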
nic_saa_exit Relation
This relation, used by the NICSAA process, contains data needed to support the
NICMOS post-SAA (South Atlantic Anomaly) dark exposures. SAA exit times are significant to the NIC
because beginning in cycle 10, NICMOS will execute a series of dark calibration observations immediately
after each SAA passage in order to eliminate persistence due to cosmic rays.
Descriptions of each field in
the nic_saa_exit relation are available to provide detailed information.
nic_saa_dark Relation
This relation is used to identify NICMOS associations containing Post-SAA dark exposures. The records
are created for all such associations whether or not there are any exposures that use it. There is a
separate record for each NICMOS configuration and saa_exit time. The relation
is used to allow the linkage of NICMOS exposures to SAA darks that may occur in a previous (replan) SMS.
Records having saa_exit_hour values that are a few weeks old have no operational value and may be deleted
at any time. There is no reason to archive any of these records.
The NICSAA process populates this relation.
Field name type/size description
---------- --------- -----------
saa_exit_hour C11 YYYY.DDD:HH -- This is the hour at which the saa
exit occurs. If the exit time is near an hour
boundary, a second record with the adjacent hour
is also present. This field is used to match against
the first 11 characters of the SPSS nic_saa_exit.saa_exit
field to create the nic_saa_link records.
config C15 This is the value from qelogsheet.config that must be
the same for both exposures of the association. The same
value must match any qelogsheet.config for linked
exposures in nic_saa_link. There should be three records
with different config values for each value of saa_exit_hour
program_id C3 The program_id of the dark association. See asn_product_link.program_id
obset_id C2 Association observation set identifier of the dark association. See
asn_product_link.asn_obset_id
member_num C3 Member number of the dark association primary product.
See asn_product_link.member_num.
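The one- or two-record rule for saa_exit_hour can be sketched as follows. This is an illustrative Python sketch, not NICSAA code; the five-minute boundary margin is an assumption, since the manual only says "near an hour boundary".

```python
from datetime import datetime, timedelta

# Hypothetical sketch: compute the saa_exit_hour key(s) for an SAA exit time
# given in YYYY.DDD:HH:MM:SS form. Near an hour boundary a second key for the
# adjacent hour is produced; the margin width is an assumption.

def saa_exit_hours(saa_exit, margin_minutes=5):
    t = datetime.strptime(saa_exit, "%Y.%j:%H:%M:%S")
    keys = [t.strftime("%Y.%j:%H")]
    if t.minute < margin_minutes:            # just past an hour boundary
        keys.append((t - timedelta(hours=1)).strftime("%Y.%j:%H"))
    elif t.minute >= 60 - margin_minutes:    # just before the next hour
        keys.append((t + timedelta(hours=1)).strftime("%Y.%j:%H"))
    return keys
```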
nic_saa_link Relation
This relation is used to link NICMOS exposures to the SAA dark association that is relevant to the
exposure. If no record exists, then either the exposure is an SAA dark or the exposure occurs too long
after the SAA exit time. The individual exposures of the associations can be accessed through the
asn_product_link table. All the timing info for both the exposures and the darks is found in the
nic_saa_exit table.
For records in this table, the dark association must have nearly the same saa_exit time (from NIC_SAA_EXIT table)
as the exposure. That is, the most recent SAA exit for both the darks and the exposures must be in the
same orbit. The exposure from the dark association must have the same qelogsheet.config value. The
qelogsheet.targname value for the dark exposures will be POST-SAA-DARK.
The records in this table are inserted during the processing of the mission schedule, after updating
the asn tables.
The NICSAA process populates this relation.
Field name type/size description
---------- --------- -----------
program_id C3 When a proposal is accepted into the PMDB by SPSS it must
be assigned a unique 3 character base 36 program identifier.
This is done by the PMDB/ACCEPT_PROP command. This program
identifier is tagged as 'program_id' in most PMDB relations.
It is used for identification of proposals by spacecraft
and OPUS software. It is also used in the OPUS and DADS
rootname for all archived science data files.
Because of flight design software, program_id must be
three characters.
obset_id C2 An observation set is a collection of one or more
alignments that are grouped together based on FGS pointing
requirements. That is, if multiple alignments can all be
executed using the same guide star pair, they are grouped
into the same observation set.
An observation set is identified by a 2 character base 36
string. This field, typically called 'obset_id', will
often contribute to the index on the relation together with a
proposal_id, version_num, and possibly other fields.
OBSET is an abbreviation for observation set.
ob_number C2 Observations are numbered sequentially throughout
an observation set and are assigned by SMS/Gen.
An ob_number is _NOT_ the same as an obset_ID.
This field can be joined to member_num for
exposure records in asn_members.
dark_program_id C3 The program_id of the dark association. See asn_product_link.program_id.
dark_obset_id C2 Association observation set identifier of the dark association.
See asn_product_link.asn_obset_id.
dark_member_num C3 Member number of the dark association primary product.
See asn_product_link.member_num.
dark_association C9 This is a convenience field. It can be reconstructed
from the letter N, the dark_program_id, the dark_obset_id, and the
dark_member_num. See asn_members for complete definition of
association_id.
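The reconstruction of the dark_association convenience field from its components can be sketched as follows (illustrative only; the sample values are hypothetical):

```python
# Illustrative sketch: rebuild the 9-character dark_association field from
# the letter N plus the dark_program_id, dark_obset_id, and dark_member_num,
# per the nic_saa_link field descriptions.

def dark_association(dark_program_id, dark_obset_id, dark_member_num):
    s = "N" + dark_program_id + dark_obset_id + dark_member_num
    assert len(s) == 9  # N + 3 + 2 + 3 characters
    return s
```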
dads_archive Relation
This relation tracks requests made by applications to archive files and responses made by the on-line
dads archive system. Entries are retained in this table after processing is completed for historical
purposes.
The PDR Pipeline PDRREQ and PDRRSP processes use
this relation.
Field name type/size description
---------- --------- -----------
dataset_name C23 The name given to describe a group of files
archclass C3 The classification used to archive the data
archdate C20 The latest file date associated with the dataset
reqdate C23 The date when the archive insertion request was generated
reqtype C4 either a TAPE or DISK archive insertion request
response C10 Response status returned by DADS
disk_date C23 The optical disk date assigned by DADS
file_cnt I2 The file count as determined by DADS
path C10 The path from which the request is made
tape_date C20 The date when a tape is made
saveset C17 The saveset name
archv_tape C6 Tape label
qoarchives Relation
This relation holds records that have been inserted when the OPUS data partitioning processes
determine the name of the ipppssoots from the science POD files. Any pipeline process that
subsequently has trouble processing science data will update the trouble_flag and trbl_process
fields. In addition, OPUS/EDPS archive processes update a number of fields in this relation.
The PDR Pipeline PDRREQ process uses this relation.
Field name type/size description
---------- --------- -----------
program_id C3 Unique 3 character base 36 program identifier for the proposal for
which this observation is a part. Used with the obset_id,
ob_number, and data_class fields to form the observation rootname,
which uniquely identifies an observation
obset_id C2 A collection of one or more alignments that are grouped together
based on FGS pointing requirements. That is, if multiple alignments
can all be executed using the same guide star pair, they are grouped
into the same observation set. Part of the OPUS rootname.
ob_number C2 Observations are numbered sequentially throughout an observation set
and are assigned by sms (base 36 number max. 1295 observations per
obset). Part of the OPUS rootname.
data_class C1 Type of data (R real-time, T tape-recorded, etc.)
obs_root C9 The ipppssoot, where
i = instrument code (N - NICMOS, O - STIS, U - WFPC2
V - HSP, W - WFPC1, X - FOC, Y - FOS, Z - HRS)
ppp = program_id
ss = obset_id
oo = ob_number
t = data_class
proc_strt_tm C16 Time pipeline processing started for this dataset. Format is
yyyydddHHMMSSsss
proc_stop_tm C16 Time pipeline processing completed for this dataset. Format is
yyyydddHHMMSSsss
data_eval I4 obsolete field??
flg_mismatch C1 archive flag for file count mismatches
geis_only C1 obsolete field??
calib_indic I2 obsolete field??
trouble_flag C1 Set to 'T' if observation sent to 'trouble'
trbl_process C6 Name of process that sent observation to 'trouble'
edsci_file C9 obsolete field??
edt_archdate C20 DADS ARCHDATE for EDT archive class. Format is yyyydddHHMMSSsss
edt_fcnt I2 File count for EDT archive class
calib_file C9 obsolete field??
cal_archdate C20 DADS ARCHDATE for CAL archive class. Format is yyyydddHHMMSSsss
cal_fcnt I2 File count for CAL archive class
cdbs_data C15 obsolete field??
repro_flg C10 obsolete field??
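The obs_root (ipppssoot) field above is a simple concatenation of rootname components; a sketch of the assembly (illustrative only; the sample values are hypothetical):

```python
# Illustrative sketch: assemble the 9-character ipppssoot rootname from the
# qoarchives fields, using the instrument-code table given for obs_root.

def obs_root(instrument_code, program_id, obset_id, ob_number, data_class):
    root = instrument_code + program_id + obset_id + ob_number + data_class
    assert len(root) == 9  # i + ppp + ss + oo + t
    return root
```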
archive_files Relation
This relation is used to track a dataset's file extensions. The file extensions are written to
this relation when ARCHIVE_CLASS.TRACK_EXT is set to "Y" in a process's resource file.
The PDR Pipeline PDRREQ process uses this relation.
Field name type/size description
---------- --------- -----------
dataset_name C23 The name given to describe a group of files
archclass C3 The classification used to archive the data
archdate C20 The latest file date associated with a dataset
file_ext C3 The files' extension
Database Queries Performed
MSCCPY Process Queries
- The MSCCPY process performs the following query on the file_times
relation to determine how many records are already in the database for a given PASS Mission Schedule
product file (up to 35 are allowed). It uses this information to determine how to rename the product file
when it is moved from the PDR_PASS_DIR directory to the
PDR_MSC_DIR directory. Each time a given product file is
received, moved, and renamed, the last character in its file rootname, i.e., its file sequence id (0-9,a-z),
is incremented.
SELECT window_start
FROM file_times
WHERE dataset_name like @dataset_name
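The renaming rule above maps the existing record count to the next file sequence id character. A sketch of that mapping (illustrative, not MSCCPY code):

```python
import string

# Hypothetical sketch of the renaming rule: the file sequence id runs 0-9
# then a-z, so the record count returned by the query above selects the
# next character to use.

SEQ_CHARS = string.digits + string.ascii_lowercase  # '0'..'9', 'a'..'z'

def next_sequence_id(existing_record_count):
    if existing_record_count >= len(SEQ_CHARS):
        raise ValueError("sequence ids exhausted for this dataset")
    return SEQ_CHARS[existing_record_count]
```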
The MSCCPY process performs the following query on the file_times
relation to insert a new record containing information for a PASS Mission Schedule product file.
INSERT INTO file_times
VALUES (dataset_name,archclass,archdate,window_start,window_stop,tm_generated,pdb_version,environment,
replan_time)
MTLCPY Process Queries
- The MTLCPY process performs the following query on the file_times
relation to determine how many records are already in the database for a given Mission Timeline Report
product file (up to 35 are allowed). It uses this information to determine how to rename the product file
when it is moved from the PDR_PASS_DIR directory to the
PDR_MTL_DIR directory. Each time a given product file is
received, moved, and renamed, the last character in its file rootname, i.e., its file sequence id (0-9,a-z),
is incremented.
SELECT window_start
FROM file_times
WHERE dataset_name like @dataset_name
The MTLCPY process performs the following query on the file_times
relation to insert a new record containing information for a Mission Timeline Report product file.
INSERT INTO file_times
VALUES (dataset_name,archclass,archdate,window_start,window_stop,tm_generated,pdb_version,environment,
replan_time)
SMSCPY Process Queries
- The SMSCPY process performs the following query on the file_times
relation to determine how many records are already in the database for a given Science Mission Schedule
product file (up to 35 are allowed). It uses this information to determine how to rename the product file
when it is moved from the PDR_PASS_DIR directory to the
PDR_SMS_DIR directory. Each time a given product file is
received, moved, and renamed, the last character in its file rootname, i.e., its file sequence id (0-9,a-z),
is incremented.
SELECT window_start
FROM file_times
WHERE dataset_name like @dataset_name
The SMSCPY process performs the following query on the file_times
relation to insert a new record containing information for a Science Mission Schedule product file.
INSERT INTO file_times
VALUES (dataset_name,archclass,archdate,window_start,window_stop,tm_generated,pdb_version,environment,
replan_time)
ORBCPY Process Queries
- The ORBCPY process performs the following query on the file_times
relation to determine how many records are already in the database for a given Orbital Ephemeris
product file (up to 35 are allowed). It uses this information to determine how to rename the product file
when it is moved from the PDR_PASS_DIR directory to the
PDR_ORB_DIR directory.
SELECT window_start
FROM file_times
WHERE dataset_name like @dataset_name
The ORBCPY process performs the following query on the file_times
relation to insert a new record containing information for an Orbital Ephemeris product file.
INSERT INTO file_times
VALUES (dataset_name,archclass,archdate,window_start,window_stop,tm_generated,pdb_version,environment,
replan_time)
PDRORB Process Queries
- The PDRORB process performs the following query on the keyword_source
relation to get information on the source of the keywords to be output to its produced FITS file.
SELECT s.keyword, s.sourcetype, s.source, s.subsource
FROM keyword_source s
WHERE s.instrument = instrument
The PDRORB process performs the following query on the cgg1_keyword
relation to get the keywords it is to output to its produced FITS file.
SELECT DISTINCT g1.keyword_str, g1.keyword_val, g1.keyword_typ, g1.comment_str, g1.optional
FROM cgg1_keyword g1
WHERE g1.instrument = instrument
AND g1.file_type !='AST' AND g1.file_type !='VFI'
AND g1.file_type !='WF2' AND g1.file_type !='WFI'
AND g1.file_type !='YFI' AND g1.file_type !='ZFI'
ORDER by g1.keyword_str
The PDRORB process performs the following query on the cgg1_keyword and
cgg4_order relations to get the groupings and ordering of the keywords it is to
output to its produced FITS file.
SELECT g1.keyword_str, g1.keyword_typ, g1.keyword_val, g1.comment_str, g1.optional, g4.header_type,
g4.ftype_order, g1.order_index
FROM cgg1_keyword g1, cgg4_order g4
WHERE g4.instrument = instrument
AND g1.instrument = g4.cgg1_instr
AND g1.file_type = g4.cgg1_ftype
ORDER by g4.header_type, g4.ftype_order, g1.order_index
The PDRORB process performs the following query on the file_times
relation to update the archdate field for a given dataset.
UPDATE file_times
SET archdate = archdate
WHERE dataset_name = dataset_name
AND archclass = archclass
REPLAN Process Queries
- The REPLAN process performs the following query on the sms_catalog
relation to see if the SMS ID for the current Mission Schedule file is in the relation
SELECT sms_send_stt
FROM SPSS_DB..sms_catalog
WHERE sms_id=SMS_ID
The REPLAN process performs the following query on the file_times
relation to get the true time range of the current Mission Schedule file
SELECT window_start, window_stop, replan_time
FROM file_times
WHERE dataset_name = pod_name
AND archclass = 'MSC'
The REPLAN process performs the following query on the qolink
relation to get the start time of the last planned observation in the current Mission Schedule file
SELECT MAX(ob_start_tim)
FROM SPSS_DB..qolink
The REPLAN process performs the following query on the qolink_sms and
executed relations to get information about the last observation that was
executed
SELECT MAX(start_time)
FROM qolink_sms l, executed ex
WHERE ex.program_id = l.program_id
AND ex.obset_id = l.obset_id
AND ex.ob_number = l.ob_number
AND ex.executed_flg != " "
The REPLAN process performs the following query on the qobservation
relation to see if a Health and Safety Mission Schedule has no observations
SELECT COUNT(*)
FROM SPSS_DB..qobservation
where pred_strt_tm > sms_start and pred_strt_tm < sms_stop
The REPLAN process performs the following query on the qolink_sms and
executed relations to see if any records in the executed relation have been
updated for the given time range
SELECT COUNT(*)
FROM qolink_sms l, executed ex
where l.start_time > sms_start
AND l.start_time < sms_stop
AND ex.program_id = l.program_id
AND ex.obset_id = l.obset_id
AND ex.ob_number = l.ob_number
AND ex.executed_flg != " "
The REPLAN process performs the following query on the file_times
relation to determine if there are overlapping times in the relation due to Mission Schedule Re-plans
or re-deliveries
SELECT COUNT(*)
FROM file_times
where archclass="MSC" and window_start <=replan_time
AND window_stop >replan_time
The REPLAN process performs the following query on the dataset_link
and product_eng_map relations to create a temporary relation to
hold information about old obsets corresponding to a given FOF file
SELECT DISTINCT l.program_id,l.obset_id
INTO #obsets
FROM dataset_link l, product_eng_map m
WHERE m.eng_rootname >= adjusted_start
AND m.eng_rootname <=adjusted_stop
AND l.dataset_rootname = m.product_rootname
The REPLAN process performs the following query on the temporary obsets relation and the
qolink relation to create a temporary relation to hold obsets that are in
a Mission Schedule Re-plan but were not in the original Mission Schedule for the same time period
SELECT program_id,obset_id
INTO #missing
FROM #obsets
WHERE NOT EXISTS (SELECT * FROM SPSS_DB..qolink
WHERE program_id = #obsets.program_id
AND obset_id = #obsets.obset_id)
The REPLAN process performs the following query on the temporary relation that contains information
about obsets that were in a Mission Schedule Re-plan but were not in the original Mission Schedule for
the same time period to enable reporting on the obsets that were missing
SELECT * FROM #missing
The REPLAN process performs the following query on the qolink_sms
relation to update the relation to indicate that exposures/products for missing obsets cannot be
generated
UPDATE qolink_sms
SET status="N"
FROM qolink_sms l, #missing m
WHERE l.program_id = m.program_id
AND l.obset_id = m.obset_id
The REPLAN process performs the following query on the temporary relation that contains information
about obsets that were in a Mission Schedule Re-plan but were not in the original Mission Schedule for
the same time period and the asn_members relation to create a temporary
table to hold information on associations that were in a Mission Schedule Re-plan but were not in the
original Mission Schedule for the same time period
SELECT DISTINCT a.association_id
INTO #asn
FROM #missing m, asn_members a
WHERE a.program_id = m.program_id
AND a.obset_id = m.obset_id
The REPLAN process performs the following query on the asn_association
relation and the temporary relation containing information on missing associations to remove the missing
associations from the database
DELETE asn_association
FROM asn_association a, #asn m
WHERE a.association_id = m.association_id
The REPLAN process performs the following query on the asn_members
relation and the temporary relation containing information on missing associations to remove the missing
association members from the database
DELETE asn_members
FROM asn_members a, #asn m
WHERE a.association_id = m.association_id
The REPLAN process performs the following query on the
asn_product_link relation and the temporary missing association relation
to drop the temporary missing association relation and to remove obsolete asn_product_link records from
the database
DROP TABLE #asn
DELETE asn_product_link
FROM asn_product_link a, #missing m
WHERE a.program_id = m.program_id
AND a.obset_id = m.obset_id
The REPLAN process performs the following query on the jitter_evt_map
relation and the temporary missing obset relation to remove obsolete jitter_evt_map records from the
database
DELETE jitter_evt_map
FROM jitter_evt_map j, #missing m
WHERE j.program_id = m.program_id
AND j.obset_id = m.obset_id
The REPLAN process performs the following query on the product_status,
dataset_link, and temporary missing obsets relations to delete obsolete
product_status records from the database
DELETE product_status
FROM product_status p, dataset_link l, #missing m
WHERE l.program_id = m.program_id
AND l.obset_id = m.obset_id
AND p.product_rootname = l.dataset_rootname
The REPLAN process performs the following query on the product_eng_map,
dataset_link, and temporary missing obsets relations to delete obsolete
product_eng_map records from the database
DELETE product_eng_map
FROM product_eng_map p, dataset_link l, #missing m
WHERE l.program_id = m.program_id
AND l.obset_id = m.obset_id
AND p.product_rootname = l.dataset_rootname
The REPLAN process performs the following query on the temporary relation containing information on
missing obsets to remove the relation from the database
DROP TABLE #missing
The REPLAN process performs the following query on the temporary relation containing information on
obsets to remove the relation from the database
DROP TABLE #obsets
UPDATR Process Queries
- The UPDATR process performs the following query on the file_times
relation to get the true time range of the current Mission Schedule file
SELECT window_start, window_stop, replan_time
FROM file_times
WHERE dataset_name = pod_name
AND archclass = 'MSC'
The UPDATR process performs the following query on the qolink_sms
relation to update the sms_id for any existing qolink_sms records for the given time range
UPDATE qolink_sms SET sms_id = SMS_ID
WHERE start_time >sms_start
AND start_time<sms_stop
The UPDATR process performs the following query on the qolink_sms
and qolink relations to create a temporary relation to hold information
about obsets that are missing from the database
SELECT DISTINCT program_id,obset_id
INTO #missing
FROM qolink_sms s
WHERE s.start_time > sms_start
AND s.start_time < sms_stop
AND NOT EXISTS (SELECT * FROM SPSS_DB..qolink
WHERE program_id = s.program_id
AND obset_id = s.obset_id)
The UPDATR process performs the following query on the temporary relation holding information
about obsets that are missing from the database so that it can generate messages reporting on the
missing obsets
SELECT * FROM #missing
The UPDATR process performs the following query on the qolink_sms
and temporary missing obsets relations to indicate in the qolink_sms relation that exposures/products
from the missing obsets cannot be generated
UPDATE qolink_sms
SET status="N"
FROM qolink_sms l, #missing m
WHERE l.program_id = m.program_id
AND l.obset_id = m.obset_id
The UPDATR process performs the following query on the executed and
temporary missing obsets relations to remove the records for the missing obsets from the executed
relation
DELETE executed
FROM executed e, #missing m
WHERE e.program_id = m.program_id
AND e.obset_id = m.obset_id
The UPDATR process performs the following query to create a temporary relation (#inst_id) to
provide a lookup table to convert si_id to a single character code for an instrument
CREATE TABLE #inst_id (inst varchar(1), si_id varchar(4))
CREATE UNIQUE CLUSTERED INDEX #inst_id_1 on #inst_id (si_id)
INSERT INTO #inst_id VALUES ('1','1')
INSERT INTO #inst_id VALUES ('2','2')
INSERT INTO #inst_id VALUES ('3','3')
INSERT INTO #inst_id VALUES ('I','WFC3')
INSERT INTO #inst_id VALUES ('J','ACS')
INSERT INTO #inst_id VALUES ('L','COS')
INSERT INTO #inst_id VALUES ('N','NIC')
INSERT INTO #inst_id VALUES ('O','STIS')
INSERT INTO #inst_id VALUES ('U','WFII')
INSERT INTO #inst_id VALUES ('V','HSP')
INSERT INTO #inst_id VALUES ('W','WFPC')
INSERT INTO #inst_id VALUES ('X','FOC')
INSERT INTO #inst_id VALUES ('Y','FOS')
INSERT INTO #inst_id VALUES ('Z','HRS')
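The same lookup can be sketched as a plain mapping. This is a hypothetical Python helper mirroring the #inst_id rows above, not part of the pipeline; note that elsewhere in this manual the CONTRL process maps codes '1', '2', and '3' to 'F' for astrometry.

```python
# Hypothetical mirror of the #inst_id lookup table built above:
# si_id -> single-character instrument code. The numeric si_ids
# ('1', '2', '3') map to themselves.
SI_ID_TO_INST = {
    "1": "1", "2": "2", "3": "3",
    "WFC3": "I", "ACS": "J", "COS": "L", "NIC": "N",
    "STIS": "O", "WFII": "U", "HSP": "V", "WFPC": "W",
    "FOC": "X", "FOS": "Y", "HRS": "Z",
}

def inst_code(si_id: str) -> str:
    """Return the one-character instrument code for an si_id."""
    return SI_ID_TO_INST[si_id.strip()]
```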
The UPDATR process performs the following query on the qobservation and
executed relations to create a temporary relation to hold information
about observations to be executed for the given time period and to set the present field to "/"
if there are no executed records for the observation
SELECT o.proposal_id,o.program_id,o.obset_id,o.ob_number,
o.si_id,o.pred_strt_tm, o.target_acqmd, o.coord_id, o.control_id,
isnull( e.ob_number, "/") present
INTO #observ
FROM SPSS_DB..qobservation o, executed e
WHERE o.pred_strt_tm>sms_start
AND o.pred_strt_tm<sms_stop
AND o.program_id *= e.program_id
AND o.obset_id *= e.obset_id
AND o.ob_number *= e.ob_number
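The `*=` operator in this query is legacy Sybase syntax for a LEFT OUTER JOIN. Its effect, including the `isnull(e.ob_number, "/")` present flag, can be sketched with in-memory SQLite and hypothetical sample rows (SQLite spells the null-default as COALESCE):

```python
import sqlite3

# Sketch of the Sybase "*=" left outer join above: observations with no
# executed counterpart get present = '/'.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE qobservation (program_id TEXT, obset_id TEXT, ob_number TEXT);
CREATE TABLE executed     (program_id TEXT, obset_id TEXT, ob_number TEXT);
INSERT INTO qobservation VALUES ('8FX','01','01'), ('8FX','01','02');
INSERT INTO executed     VALUES ('8FX','01','01');
""")
rows = con.execute("""
    SELECT o.ob_number, COALESCE(e.ob_number, '/') AS present
    FROM qobservation o
    LEFT JOIN executed e
      ON  o.program_id = e.program_id
      AND o.obset_id   = e.obset_id
      AND o.ob_number  = e.ob_number
""").fetchall()
# Observation '02' has no executed record, so its present flag is '/'.
```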
The UPDATR process performs the following query on the temporary relation holding information
about observations for a given time range to determine if the Mission Schedule covering the time
range constitutes a Re-plan.
SELECT count(*)
FROM #observ
WHERE present !="/"
The UPDATR process performs the following query on the executed and
temporary observation relations to insert all the new exposure and dump records within the current
Mission Schedule into the executed table
INSERT INTO executed
SELECT program_id,obset_id,ob_number,proposal_id,si_id,control_id, coord_id, ' '
FROM #observ
WHERE #observ.present = "/"
The UPDATR process performs the following query on the qolink and
temporary observation and instrument id relations to create a temporary relation to hold information
about observations per instrument. When the temporary relation is created, dumps having a blank
control_id and an si_id other than "1", "2", or "3" (e.g., STIS, NICMOS, and ACS dumps) are excluded.
SELECT o.proposal_id,o.program_id,o.obset_id,o.ob_number,
l.alignment_id,l.exposure_id,l.version_num,o.si_id,o.pred_strt_tm,
o.target_acqmd,i.inst,o.present
INTO #obslist
FROM #observ o, SPSS_DB..qolink l, #inst_id i
WHERE NOT(o.si_id!="1" and o.si_id!="2" AND o.si_id!="3" and o.control_id=' ')
AND l.proposal_id=o.proposal_id
AND l.program_id=o.program_id
AND l.obset_id=o.obset_id
AND l.ob_number=o.ob_number
AND i.si_id=o.si_id
ORDER BY o.program_id,o.obset_id,o.ob_number
The UPDATR process performs the following query on the qolink_sms
relation and various temporary relations to remove the temporary relations from the database and to
delete any existing qolink_sms exposures that do not have executed records
DROP TABLE #observ
DROP TABLE #missing
DELETE qolink_sms
FROM qolink_sms,#obslist
WHERE qolink_sms.program_id=#obslist.program_id
AND qolink_sms.obset_id=#obslist.obset_id
AND qolink_sms.ob_number=#obslist.ob_number
AND #obslist.present="/"
The UPDATR process performs the following queries on the qelogsheet
and qolink_sms relations and the temporary relation holding information
about observations per instrument to insert/update new qolink_sms records using the new sms_id, after
calculating new start and end times for all observations, and to set their status to U (unexecuted).
Records having time_type=A (actual, as opposed to planned, execution time) are not updated, however.
SELECT l.program_id,l.obset_id,l.ob_number,l.inst,l.pred_strt_tm,
q.opmode,convert(int,q.exptime),l.present
FROM #obslist l, SPSS_DB..qelogsheet q
WHERE l.proposal_id=q.proposal_id
AND l.obset_id=q.obset_id
AND l.alignment_id=q.alignment_id
AND l.exposure_id=q.exposure_id
AND l.version_num=q.version_num
INSERT INTO qolink_sms
VALUES (program_id,obset_id,ob_number,SMS_ID,'U',inst,
' ',' ',' ',' ',' ',' ',' ',' ',start_time,end_time,'P')
UPDATE qolink_sms SET start_time=start_time, end_time=end_time
WHERE program_id=program_id and obset_id=obset_id
AND ob_number=ob_number
AND time_type != "A"
The UPDATR process performs the following query on the qolink_sms
relation and the temporary relation holding information about observations per instrument to update
the new records that were added from a new Mission Schedule to indicate which observations are
mode 1 or mode 2 target acquisitions.
UPDATE qolink_sms SET taq="Y",ocx_expected="Y"
FROM qolink_sms, #obslist
WHERE #obslist.target_acqmd="01"
AND #obslist.present = "/"
AND qolink_sms.program_id=#obslist.program_id
AND qolink_sms.obset_id=#obslist.obset_id
AND qolink_sms.ob_number=#obslist.ob_number
UPDATE qolink_sms SET taq="Y"
FROM qolink_sms, #obslist
WHERE #obslist.target_acqmd="02"
AND #obslist.present = "/"
AND qolink_sms.program_id=#obslist.program_id
AND qolink_sms.obset_id=#obslist.obset_id
AND qolink_sms.ob_number=#obslist.ob_number
The UPDATR process performs the following query on the qeassociation
relation and the temporary relation holding information about observations per instrument to create
another temporary relation holding information about datasets.
SELECT l.program_id,l.obset_id,l.ob_number,a.association_id,
a.si_name,a.collect,a.exp_type,start_time=l.pred_strt_tm,l.inst,l.present
INTO #datasets
FROM #obslist l, SPSS_DB..qeassociation a
WHERE l.proposal_id*=a.proposal_id
AND l.obset_id*=a.obset_id
AND l.alignment_id*=a.alignment_id
AND l.exposure_id*=a.exposure_id
AND l.version_num*=a.version_num
ORDER BY l.program_id,l.obset_id,l.ob_number
The UPDATR process performs the following query on the following temporary relations previously
created to remove them from the database.
DROP TABLE #obslist
DROP TABLE #inst_id
The UPDATR process performs the following query on the temporary datasets relation to create
another temporary relation to contain records for every unique association having a new collected
member in the new Mission Schedule.
SELECT program_id, association_id, si_name, last_time=MAX(start_time)
INTO #new_asn_list
FROM #datasets
WHERE collect="Y" AND present="/"
GROUP BY program_id, association_id, si_name
The UPDATR process performs the following query on the temporary datasets and new_asn_list
relations to create another temporary relation to hold information about associations that were
contained in a Mission Schedule that is being superseded by a Re-plan Mission Schedule.
SELECT program_id, association_id, si_name, last_time=MAX(start_time)
INTO #old_asn_list
FROM #datasets d
WHERE collect="Y" and present!="/"
AND NOT EXISTS (SELECT * FROM #new_asn_list n
WHERE n.program_id = d.program_id
AND n.association_id = d.association_id)
GROUP BY program_id, association_id, si_name
The UPDATR process performs the following query on the asn_association
relation and the temporary new_asn_list relation to remove association records that were in both an
original Mission Schedule and a Re-plan Mission Schedule that is superseding the original one.
DELETE asn_association
FROM asn_association a, #new_asn_list l
WHERE a.association_id = l.association_id
The UPDATR process performs the following query on the asn_association relation
and the temporary new_asn_list relation to add new association records to the database
INSERT INTO asn_association (association_id, si_name, last_exp_date,
collect_date)
SELECT association_id, si_name, last_time, ' '
FROM #new_asn_list
The UPDATR process performs the following query on the asn_members
relation and the temporary new_asn_list relation to remove association member records that were in both an
original Mission Schedule and a Re-plan Mission Schedule that is superseding the original one.
DELETE asn_members
FROM asn_members m, #new_asn_list l
WHERE m.association_id = l.association_id
The UPDATR process performs the following query on the asn_product_link
relation and the temporary new_asn_list relation to remove association member product records that were in both an
original Mission Schedule and a Re-plan Mission Schedule that is superseding the original one.
DELETE asn_product_link
FROM asn_product_link p, #new_asn_list l
WHERE p.program_id = SUBSTRING(l.association_id,2,3)
AND p.asn_obset_id = SUBSTRING(l.association_id,5,2)
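The SUBSTRING offsets used here (and in several later queries) reflect a fixed association_id layout. The following hypothetical helper translates Sybase's 1-based SUBSTRING offsets into 0-based Python slices; the example id is invented for illustration.

```python
# Hypothetical sketch of the association_id layout implied by the
# SUBSTRING calls above: character 1 is an instrument letter, characters
# 2-4 the program_id, 5-6 the obset_id, and 7 onward the member number.
def parse_association_id(association_id: str) -> dict:
    return {
        "program_id": association_id[1:4],  # SUBSTRING(association_id, 2, 3)
        "obset_id":   association_id[4:6],  # SUBSTRING(association_id, 5, 2)
        "member":     association_id[6:],   # SUBSTRING(association_id, 7, ...)
    }
```

For a hypothetical id 'N8FX01010' this yields program_id '8FX' and obset_id '01'.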
The UPDATR process performs the following query on the asn_members
and temporary datasets relation to populate the asn_members relation with records indicating the new
exposure members to be collected.
INSERT INTO asn_members( association_id, program_id, obset_id, member_num,
member_type, member_status, product_status)
SELECT association_id, program_id, obset_id, ob_number, exp_type, "U", "E"
FROM #datasets
WHERE collect="Y" and present="/"
The UPDATR process performs the following query on the product_code
relation and the temporary datasets relation to create another temporary relation to hold records
containing information on NICMOS products and ACS subproducts.
SELECT d.program_id, obset_id=substring(d.association_id,5,2),
ob_number=substring(d.association_id,7,2)+p.product_id,
d.inst, member_type=("PROD-"+SUBSTRING(d.exp_type,5,8)),
d.exp_type, d.association_id
INTO #nic_acs_prod
FROM #datasets d, product_code p
WHERE d.inst IN ('N','J') and d.collect='Y'
AND d.present="/"
AND p.si_name=d.si_name and p.exp_type=d.exp_type
GROUP BY d.program_id,d.association_id,d.inst,p.product_id,d.exp_type
The UPDATR process performs the following query on the temporary nic_acs_prod relation to create
another temporary relation containing information about ACS dither products.
SELECT n.program_id, n.obset_id, ob_number=substring(n.ob_number,1,2)+'0',
n.inst, n.association_id
INTO #acs_dither
FROM #nic_acs_prod n
WHERE n.inst = "J"
GROUP BY n.program_id,n.obset_id,substring(n.ob_number,1,2),n.inst, n.association_id
HAVING COUNT(*)>1
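The dither rule above (one dither product per group of ACS subproducts sharing a two-character ob_number prefix, only when the group has more than one member) can be sketched in plain Python. The rows are hypothetical, and the association_id grouping column is omitted for brevity.

```python
from collections import Counter

# Hypothetical sample subproduct rows: (program_id, obset_id, ob_number, inst).
subproducts = [
    ("8FX", "01", "011", "J"),  # shares prefix '01' with the next row
    ("8FX", "01", "012", "J"),  # -> together they form a dither group
    ("8FX", "02", "021", "J"),  # lone subproduct -> no dither product
]

# GROUP BY program_id, obset_id, substring(ob_number,1,2), inst ...
groups = Counter((p, o, n[:2], i) for (p, o, n, i) in subproducts)

# ... HAVING COUNT(*) > 1: the dither product's ob_number is the shared
# two-character prefix with '0' appended.
dither = [(p, o, prefix + "0", i)
          for (p, o, prefix, i), count in groups.items() if count > 1]
```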
The UPDATR process performs the following query on the temporary acs_dither relation to add
ACS dither product records to the temporary nic_acs_prod relation
INSERT INTO #nic_acs_prod( program_id, obset_id, ob_number, inst,
member_type, exp_type, association_id)
SELECT program_id, obset_id, ob_number, inst, "PROD-DTH", "*",association_id
FROM #acs_dither
The UPDATR process performs the following query on the qolink_sms
relation and the temporary nic_acs_prod relation to remove obsolete NICMOS and ACS product records from
the qolink_sms relation.
DELETE qolink_sms
FROM qolink_sms,#nic_acs_prod
WHERE qolink_sms.program_id=#nic_acs_prod.program_id
AND qolink_sms.obset_id=#nic_acs_prod.obset_id
AND qolink_sms.ob_number=#nic_acs_prod.ob_number
The UPDATR process performs the following query on the qolink_sms
relation and the temporary nic_acs_prod relation to add new NICMOS and ACS product records to the
qolink_sms relation.
INSERT INTO qolink_sms
SELECT program_id,obset_id,ob_number,"$SMS_ID",'U',inst,
' ',' ',' ',' ',' ',' ',' ',' ',' ',' ',' '
FROM #nic_acs_prod
The UPDATR process performs the following query on the asn_members
relation and the temporary nic_acs_prod relation to add new NICMOS and ACS product records to the
asn_members relation.
INSERT INTO asn_members( association_id, program_id, obset_id, member_num,
member_type, member_status, product_status)
SELECT association_id, program_id, obset_id, ob_number, member_type, "P", "U"
FROM #nic_acs_prod
The UPDATR process performs the following query on the asn_product_link and
asn_members relations and the temporary nic_acs_prod relation to
insert all NICMOS product and ACS subproduct records into the asn_product_link relation.
INSERT INTO asn_product_link( program_id, asn_obset_id, member_num,
obset_id, ob_number)
SELECT n.program_id, n.obset_id, n.ob_number, m.obset_id, m.member_num
FROM #nic_acs_prod n, asn_members m
WHERE m.association_id=n.association_id and m.member_type = n.exp_type
The UPDATR process performs the following query on the asn_product_link and
asn_members relations and the temporary acs_dither relation to
insert ACS dither product records into the asn_product_link relation.
INSERT INTO asn_product_link( program_id, asn_obset_id, member_num,
obset_id, ob_number)
SELECT a.program_id, a.obset_id, a.ob_number, m.obset_id, m.member_num
FROM #acs_dither a, asn_members m
WHERE m.association_id=a.association_id and m.member_status!="P"
The UPDATR process performs the following query on the following temporary relations previously
created to remove them from the database.
DROP TABLE #acs_dither
DROP TABLE #nic_acs_prod
The UPDATR process performs the following query on the qolink_sms
relation and the temporary new_asn_list relation to remove obsolete STIS product records from the
qolink_sms relation.
DELETE qolink_sms
FROM qolink_sms,#new_asn_list
WHERE #new_asn_list.si_name="STIS"
AND qolink_sms.program_id=#new_asn_list.program_id
AND qolink_sms.obset_id=SUBSTRING(#new_asn_list.association_id, 5,2)
AND qolink_sms.ob_number=SUBSTRING(#new_asn_list.association_id, 7,3)
The UPDATR process performs the following query on the qolink_sms
relation and the temporary new_asn_list relation to insert new STIS product records into the
qolink_sms relation.
INSERT INTO qolink_sms
SELECT program_id,SUBSTRING(#new_asn_list.association_id, 5,2),
SUBSTRING(#new_asn_list.association_id, 7,3),"$SMS_ID",'U','O',
' ',' ',' ',' ',' ',' ',' ',' ',' ',' ',' '
FROM #new_asn_list
WHERE si_name = "STIS"
The UPDATR process performs the following query on the asn_members
relation and the temporary new_asn_list relation to insert new STIS product records into the
asn_members relation.
INSERT INTO asn_members( association_id, program_id, obset_id, member_num,
member_type, member_status, product_status)
SELECT association_id, program_id,SUBSTRING(#new_asn_list.association_id, 5,2),
SUBSTRING(#new_asn_list.association_id, 7,3), "PRODUCT", "P", "U"
FROM #new_asn_list
WHERE si_name = "STIS"
The UPDATR process performs the following query on the asn_product_link
relation and the temporary new_asn_list relation to insert new STIS records into the asn_product_link
relation.
INSERT INTO asn_product_link(program_id, asn_obset_id, member_num,
obset_id, ob_number)
SELECT #new_asn_list.program_id,SUBSTRING(asn_members.association_id, 5,2),
SUBSTRING(asn_members.association_id, 7,3), asn_members.obset_id, asn_members.member_num
FROM #new_asn_list, asn_members
WHERE #new_asn_list.si_name = "STIS"
AND asn_members.association_id = #new_asn_list.association_id
AND asn_members.member_status="U"
The UPDATR process performs the following query on the asn_product_link
and qolink_sms relations and the temporary new_asn_list and old_asn_list
relations to create another temporary relation containing minimum start and maximum end times over all
exposures for each product for all associations in both new_asn_list and old_asn_list temporary relations in
a Mission Schedule Re-plan scenario.
SELECT p.program_id,p.asn_obset_id,p.member_num,prod_start=MIN(le.start_time),
prod_end=MAX(le.end_time)
INTO #prod_times
FROM #new_asn_list, asn_product_link p, qolink_sms le
WHERE p.program_id = SUBSTRING(#new_asn_list.association_id, 2,3)
AND p.asn_obset_id = SUBSTRING(#new_asn_list.association_id, 5,2)
AND le.program_id = p.program_id
AND le.obset_id = p.obset_id
AND le.ob_number = p.ob_number
AND le.start_time!=" "
GROUP BY p.program_id,p.asn_obset_id,p.member_num
UNION
SELECT p.program_id,p.asn_obset_id,p.member_num,prod_start=MIN(le.start_time), prod_end=MAX(le.end_time)
FROM #old_asn_list, asn_product_link p, qolink_sms le
WHERE p.program_id = SUBSTRING(#old_asn_list.association_id, 2,3)
AND p.asn_obset_id = SUBSTRING(#old_asn_list.association_id, 5,2)
AND le.program_id = p.program_id
AND le.obset_id = p.obset_id
AND le.ob_number = p.ob_number
AND le.start_time!=" "
GROUP BY p.program_id,p.asn_obset_id,p.member_num
The UPDATR process performs the following query on the asn_product_link
and qolink_sms relations and the temporary new_asn_list relation to create
another temporary relation containing minimum start and maximum end times over all exposures for each product
for all associations in the new_asn_list temporary relation in a Mission Schedule non Re-plan scenario.
SELECT p.program_id,p.asn_obset_id,p.member_num,prod_start=MIN(le.start_time),
prod_end=MAX(le.end_time)
INTO #prod_times
FROM #new_asn_list, asn_product_link p, qolink_sms le
WHERE p.program_id = SUBSTRING(#new_asn_list.association_id, 2,3)
AND p.asn_obset_id = SUBSTRING(#new_asn_list.association_id, 5,2)
AND le.program_id = p.program_id
AND le.obset_id = p.obset_id
AND le.ob_number = p.ob_number
AND le.start_time!=" "
GROUP BY p.program_id,p.asn_obset_id,p.member_num
The UPDATR process performs the following query on the following temporary relations previously
created to remove them from the database.
DROP TABLE #new_asn_list
DROP TABLE #old_asn_list
The UPDATR process performs the following query on the qolink_sms relation
and the temporary prod_times relation to update the product records in qolink_sms whose time_type does
not equal 'A' (actual start and stop times as opposed to planned times)
UPDATE qolink_sms
SET start_time=#prod_times.prod_start, end_time=#prod_times.prod_end,
time_type = 'P'
FROM qolink_sms, #prod_times
WHERE qolink_sms.program_id=#prod_times.program_id
AND qolink_sms.obset_id=#prod_times.asn_obset_id
AND qolink_sms.ob_number=#prod_times.member_num
AND qolink_sms.time_type != "A"
The UPDATR process performs the following query on the following temporary relation previously
created to remove it from the database.
DROP TABLE #prod_times
The UPDATR process performs the following query on the temporary datasets relation to create a temporary
table to hold information about STIS members in associations.
SELECT program_id,obset_id,ob_number,max_collect=MAX(collect)
INTO #stis_obs
FROM #datasets
WHERE inst = 'O' and present="/"
GROUP BY program_id,obset_id,ob_number
The UPDATR process performs the following query on the following temporary relation previously
created to remove it from the database.
DROP TABLE #datasets
The UPDATR process performs the following query on the qolink_sms relation
and the temporary stis_obs relation to update qolink_sms records for STIS observations that are collected in
the qeassociation relation.
UPDATE qolink_sms SET status="M"
FROM #stis_obs, qolink_sms
WHERE #stis_obs.max_collect="Y"
AND qolink_sms.program_id=#stis_obs.program_id
AND qolink_sms.obset_id=#stis_obs.obset_id
AND qolink_sms.ob_number=#stis_obs.ob_number
The UPDATR process performs the following query on the following temporary relation previously
created to remove it from the database.
DROP TABLE #stis_obs
MSCXTR Process Queries
- The MSCXTR process performs the following query on the file_times
relation to get the replan time range of the current Mission Schedule file
SELECT replan_time, window_stop
FROM file_times
WHERE dataset_name = POD_FILE
AND archclass = 'MSC'
The MSCXTR process performs the following queries on the msc_events,
msc_gs_acq, msc_ast_obset,
msc_ast_observe, and msc_slew_slot
relations to delete obsolete records and prepare the relations to be populated with data from
a superseding Mission Schedule file.
DELETE FROM msc_events
WHERE event_time>=msc_start
AND event_time<=msc_end
DELETE FROM msc_gs_acq
WHERE event_time>=msc_start
AND event_time<=msc_end
DELETE FROM msc_ast_obset
WHERE event_time>=msc_start
AND event_time<=msc_end
DELETE FROM msc_ast_observe
WHERE event_time>=msc_start
AND event_time<=msc_end
DELETE FROM msc_slew_slot
WHERE event_time>=msc_start
AND event_time<=msc_end
The purpose of the majority of the queries that the MSCXTR process performs is to populate the
msc_events, msc_gs_acq,
msc_ast_obset, msc_ast_observe,
and msc_slew_slot relations with observation-related information that
is extracted from Mission Schedule files. Not every query the process performs is noted
here; however, representative queries that populate each of the relations are provided.
INSERT INTO msc_events (event_time,event_type,event_class,event_name)
VALUES ('1997.092:18:00:00.00','OPS','BOM','UH4100000')
INSERT INTO msc_gs_acq (event_time,event_name,dom_fgs,prim_fgs,rol_fgs,dom_gs_ra,dom_gs_dec,
rol_gs_ra,rol_gs_dec,dom_gs_mag,rol_gs_mag,dom_gs_id,rol_gs_id,tracking)
VALUES ('1997.092:21:01:56.00','GSACQ1','3','3','0',167.49661,35.19144,0.00000,0.00000,10.708,
0.000,'0252201866','0252201866','FG')
INSERT INTO msc_ast_obset (event_time,program_id,obset_id,fgs,param_name,param_value)
VALUES ('2002.353:02:22:42.00','8FX','01','0','PCCS','3194123878.')
INSERT INTO msc_ast_observe (event_time,program_id,obset_id,ob_number,fgs,param_name,param_value)
VALUES ('2002.353:02:23:52.00','8FX','01','01','1','K10','24.0')
INSERT INTO msc_slew_slot (event_time,program_id,obset_id,slot,load_by,max_slew,offset_id)
VALUES ('1997.095:00:59:04.00','3WJ','04','5','1997.092:00:59','30','071500OFFC3E')
CONTRL Process Queries
- The CONTRL process performs the following query on the file_times
relation to get the true time range of the current Mission Schedule file
SELECT window_start, window_stop, replan_time
FROM file_times
WHERE dataset_name = pod_name
AND archclass = 'MSC'
The CONTRL process performs the following query on the qolink_sms,
qolink, and qexposure relations to create a
temporary relation containing information about observations and association products for a given
Mission Schedule file time range.
SELECT o.program_id program_id,o.obset_id obset_id,o.ob_number ob_number,
o.inst inst,o.start_time start_time,o.end_time end_time,e.type type
INTO #observ
FROM qolink_sms o, qolink l, qexposure e
WHERE o.start_time>sms_start
AND o.start_time < sms_stop
AND o.program_id = l.program_id
AND o.obset_id = l.obset_id
AND o.ob_number = l.ob_number
AND l.proposal_id = e.proposal_id
AND l.obset_id = e.obset_id
AND l.alignment_id = e.alignment_id
AND l.exposure_id = e.exposure_id
AND l.version_num = e.version_num
The CONTRL process performs the following query on the temporary 'observ' relation to change
the instrument code for astrometry observations.
UPDATE #observ SET inst='F' WHERE inst IN ('1','2','3')
The CONTRL process performs the following query on the temporary 'observ' relation and the
msc_events relation to get the acq_start_time for the start of
searches for GS acquisitions when the given Mission Schedule file constitutes a Re-plan.
SELECT MIN(start_time)
FROM #observ
SELECT MAX(event_time)
FROM msc_events
WHERE event_type = "FGS" and event_class="BOA" and event_name != "GSACQ2"
AND event_time > old_start
AND event_time < first_obs_time
The CONTRL process performs the following queries on the temporary 'observ' relation to create
another temporary relation to contain dataset related information
SELECT (o.inst+o.program_id+o.obset_id+o.ob_number+'J') dataset_rootname,
"FGS" dataset_type, o.program_id, o.obset_id, o.ob_number, o.inst
INTO #datasets
FROM #observ o
INSERT INTO #datasets
SELECT (o.inst+o.program_id+o.obset_id+o.ob_number+'M') dataset_rootname,
"AST" dataset_type, o.program_id, o.obset_id, o.ob_number, o.inst
FROM #observ o
WHERE o.inst="F"
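The rootname construction in these two queries can be sketched as a hypothetical helper; per the queries above, suffix 'J' marks jitter (FGS-type) datasets and 'M' marks astrometry datasets.

```python
# Hedged sketch of the dataset_rootname construction above: the rootname
# concatenates the instrument code, program_id, obset_id, and ob_number,
# with 'J' for jitter (FGS) datasets and 'M' for astrometry datasets.
def dataset_rootname(inst, program_id, obset_id, ob_number, dataset_type):
    suffix = "J" if dataset_type == "FGS" else "M"
    return inst + program_id + obset_id + ob_number + suffix
```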
The CONTRL process performs the following queries on the msc_events
relation to create a temporary relation containing GS acquisition start and end times information
and to get the first and last acquisition times from the relation.
SELECT event_time, event_class, event_name,
("G"+SUBSTRING(event_time,1,4)+SUBSTRING(event_time,6,3)
+SUBSTRING(event_time,10,2)+SUBSTRING(event_time,13,2)
+SUBSTRING(event_time,16,2)) gsa_rootname
INTO #acq_times
FROM msc_events
WHERE event_type = "FGS" and event_class in ("BOA","EOA")
AND event_time >= acq_start_time
AND event_time < sms_stop
SELECT MIN(event_time), MAX(event_time)
FROM #acq_times
WHERE event_class="BOA"
AND event_name != "GSACQ2"
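The gsa_rootname constructed in the first query above is the character 'G' followed by the year, day-of-year, hours, minutes, and seconds pulled out of the event_time string. A sketch, with Sybase's 1-based SUBSTRING offsets translated to 0-based Python slices (the example time is taken from the MSCXTR INSERT examples in this manual):

```python
# Sketch of the gsa_rootname construction in the query above.
def gsa_rootname(event_time: str) -> str:
    # event_time format: 'YYYY.DDD:HH:MM:SS.ss', e.g. '1997.092:21:01:56.00'
    return ("G" + event_time[0:4]     # SUBSTRING(event_time, 1, 4)  year
                + event_time[5:8]     # SUBSTRING(event_time, 6, 3)  day-of-year
                + event_time[9:11]    # SUBSTRING(event_time, 10, 2) hours
                + event_time[12:14]   # SUBSTRING(event_time, 13, 2) minutes
                + event_time[15:17])  # SUBSTRING(event_time, 16, 2) seconds
```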
The CONTRL process performs the following queries on the gsa_data,
product_status, product_eng_map,
dataset_link, and jitter_evt_map
relations to remove old records that will be superseded with new information.
DELETE gsa_data
WHERE gsa_rootname > replan_time AND gsa_rootname < window_stop
DELETE product_status
WHERE product_type = "GSA"
AND product_rootname > replan_time
AND product_rootname < window_stop
DELETE product_eng_map
WHERE product_type = "GSA"
AND product_rootname > replan_time
AND product_rootname < window_stop
DELETE dataset_link
FROM dataset_link,#datasets
WHERE dataset_link.dataset_rootname = #datasets.dataset_rootname
DELETE jitter_evt_map
FROM jitter_evt_map,#datasets
WHERE jitter_evt_map.program_id = #datasets.program_id
AND jitter_evt_map.obset_id = #datasets.obset_id
AND jitter_evt_map.ob_number = #datasets.ob_number
AND #datasets.dataset_type = "FGS"
DELETE product_eng_map
FROM product_eng_map,#datasets
WHERE product_eng_map.product_rootname = #datasets.dataset_rootname
DELETE product_status
FROM product_status,#datasets
WHERE product_status.product_rootname = #datasets.dataset_rootname
The CONTRL process performs the following query on the jitter_evt_map
relation and the temporary 'observ' relation to map datasets to 'SMS' events in Mission Schedule files
that do not constitute Re-plans.
INSERT INTO jitter_evt_map
SELECT sms_start,"SMS",o.program_id,o.obset_id,o.ob_number,"Y"
FROM #observ o
WHERE o.start_time > sms_start and o.start_time < first_acq_time
AND o.type = "CAL"
UNION
SELECT sms_start,"SMS",o.program_id,o.obset_id,o.ob_number,"N"
FROM #observ o
WHERE o.start_time > sms_start and o.start_time < first_acq_time
AND o.type != "CAL"
The CONTRL process performs the following query on the jitter_evt_map
relation and the temporary 'observ' relation to map datasets to 'GSA' events in Mission Schedule files.
INSERT INTO jitter_evt_map
SELECT next_evt_time,"GSA",o.program_id,o.obset_id,o.ob_number,"Y"
FROM #observ o
WHERE o.start_time > next_evt_time and o.start_time < next_acq_time
AND o.type = "CAL"
UNION
SELECT next_evt_time,"GSA",o.program_id,o.obset_id,o.ob_number,"N"
FROM #observ o
WHERE o.start_time > next_evt_time and o.start_time < next_acq_time
AND o.type != "CAL"
The CONTRL process performs the following query on the gsa_data relation
to insert gsa_data records with a default end time of 16 minutes after the start, skipping the
pre-replan acquisition that occurs before sms_start.
INSERT INTO gsa_data
VALUES ("gsa_rootname","next_evt_time","search_end"," "," "," ",0.0,0,0,0)
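The default search_end described above (16 minutes after the acquisition start) can be sketched as follows. The 'YYYY.DDD:HH:MM:SS.ss' time format is assumed from the MSCXTR event-time examples earlier in this manual.

```python
from datetime import datetime, timedelta

# Sketch of the default search_end computation: 16 minutes after the
# acquisition start, in the same year.day-of-year time format.
FMT = "%Y.%j:%H:%M:%S"

def default_search_end(start_time: str) -> str:
    t = datetime.strptime(start_time[:17], FMT) + timedelta(minutes=16)
    return t.strftime(FMT) + ".00"
```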
The CONTRL process performs the following query on the dataset_link
relation and the temporary 'datasets' relation to populate the dataset_link records.
INSERT INTO dataset_link
SELECT dataset_rootname, dataset_type, program_id, obset_id, ob_number
FROM #datasets
The CONTRL process performs the following query on the product_status
relation and the temporary 'datasets' relation to populate the product_status records for jitter and ast
products.
INSERT INTO product_status
SELECT dataset_rootname, dataset_type, "N"
FROM #datasets
The CONTRL process performs the following query on the product_status
and eng_dataset_pads relations and the temporary 'datasets' relation to populate
the product_status records for the Engineering Products for Calibration (EPC) products.
INSERT INTO product_status
SELECT dataset_rootname, "EPC", "N"
FROM #datasets d, eng_dataset_pads p
WHERE d.dataset_type = "FGS"
AND d.inst = p.inst
AND p.exp_type="EPC"
The CONTRL process performs the following query on the product_status
relation and the temporary 'acq_times' relation to populate the product_status records for GSA
products.
INSERT INTO product_status
SELECT gsa_rootname, "GSA", "N"
FROM #acq_times
WHERE event_class="BOA" and event_name!="GSACQ2" and event_time > sms_start
The CONTRL process performs the following queries on the product_eng_map
and the eng_dataset_pads relations and the temporary 'acq_times', 'datasets',
'observ' relations to populate product_eng_map records that indicate which if any of the various types of products
(GSA, FGS, AST, EPC) and internal datasets require which engineering telemetry files for processing.
SELECT event_time, gsa_rootname
FROM #acq_times
WHERE event_class="BOA"
AND event_name != "GSACQ2" AND event_time > "$sms_start"
INSERT INTO product_eng_map
VALUES ( "rootname","type","eng_name","N")
----------------------------------------------------------------------
SELECT d.dataset_rootname, d.dataset_type, o.start_time,o.end_time, p.start_pad, p.end_pad
FROM #datasets d, #observ o, eng_dataset_pads p
WHERE o.program_id = d.program_id
AND o.obset_id = d.obset_id
AND o.ob_number = d.ob_number
AND p.inst = o.inst
AND p.exp_type = o.type
UNION
SELECT d.dataset_rootname, "EPC", o.start_time,o.end_time,p.start_pad, p.end_pad
FROM #datasets d, #observ o, eng_dataset_pads p
WHERE d.dataset_type = "FGS"
AND o.program_id = d.program_id
AND o.obset_id = d.obset_id
AND o.ob_number = d.ob_number
AND p.inst = d.inst
AND p.exp_type = "EPC"
INSERT INTO product_eng_map
VALUES ( "rootname","type","eng_name","N")
----------------------------------------------------------------------
SELECT d.dataset_rootname, d.dataset_type, o.start_time
FROM #datasets d, #observ o
WHERE o.program_id = d.program_id
AND o.obset_id = d.obset_id
AND o.ob_number = d.ob_number
AND o.type = "CAL"
INSERT INTO product_eng_map
VALUES ( "rootname","type","eng_name","N")
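The start_pad and end_pad columns of eng_dataset_pads selected in the queries above presumably widen each product's observation time range when deciding which engineering telemetry files cover it. A minimal sketch of such a padded window follows; second-valued pads and the calendar-day time format are assumptions, not confirmed by this manual:

```python
from datetime import datetime, timedelta

# Sketch only: pad an observation's time range to obtain the engineering
# telemetry window it requires. Pads in seconds and the time format are
# assumptions, not taken from the CONTRL source.
FMT = "%Y.%j:%H:%M:%S"

def eng_window(start_time, end_time, start_pad, end_pad):
    start = datetime.strptime(start_time, FMT) - timedelta(seconds=start_pad)
    end = datetime.strptime(end_time, FMT) + timedelta(seconds=end_pad)
    return start.strftime(FMT), end.strftime(FMT)

# Made-up sample times and pads for illustration.
win = eng_window("2003.123:10:00:00", "2003.123:10:30:00", 60, 120)
```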
The CONTRL process performs the following queries to remove temporary relations from the database.
DROP TABLE #observ
DROP TABLE #datasets
DROP TABLE #acq_times
NICSAA Process Queries
- The NICSAA process performs the following query on the file_times
relation to get the true time range of the current Mission Schedule file
SELECT window_start, window_stop, replan_time
FROM file_times
WHERE dataset_name = pod_name
AND archclass = 'MSC'
The NICSAA process performs the following query on the qobservation and
qolink relations to create a temporary relation to contain information about
NICMOS observations in the time range of the given Mission Schedule file. NICMOS dumps having a blank
control_id are excluded.
SELECT o.program_id,o.obset_id,o.ob_number,l.proposal_id,l.alignment_id,
l.exposure_id,l.version_num
INTO #obslist
FROM $SPSS_DB..qobservation o, $SPSS_DB..qolink l
WHERE o.pred_strt_tm>"sms_start"
AND o.pred_strt_tm<"sms_stop"
AND o.si_id = "NIC"
AND o.control_id!=" "
AND l.program_id = o.program_id
AND l.obset_id = o.obset_id
AND l.ob_number = o.ob_number
The NICSAA process performs the following query on the nic_saa_exit and
qelogsheet relations and the temporary 'obslist' relation to create another
temporary relation to contain NIC SAA exit information.
SELECT l.program_id,l.obset_id,l.ob_number,n.saa_exit,e.config,e.targname
INTO #nicsaa
FROM #obslist l, nic_saa_exit n, qelogsheet e
WHERE l.program_id=n.program_id
AND l.obset_id=n.obset_id
AND l.ob_number=n.ob_number
AND n.delta_time< NIC_SAA_MAX_DELTA
AND l.proposal_id=e.proposal_id
AND l.obset_id=e.obset_id
AND l.alignment_id=e.alignment_id
AND l.exposure_id=e.exposure_id
AND l.version_num=e.version_num
ORDER BY l.program_id,l.obset_id,l.ob_number
The NICSAA process performs the following query on the asn_product_link
relation and the temporary 'nicsaa' relation to create another temporary relation to hold information about
NIC SAA dark associations.
SELECT l.program_id,obset_id=l.asn_obset_id,l.member_num,n.config,
saa_exit=min(n.saa_exit)
INTO #nicsaadark
FROM #nicsaa n, asn_product_link l
WHERE n.targname="POST-SAA-DARK"
AND l.program_id=n.program_id
AND l.obset_id=n.obset_id
AND l.ob_number=n.ob_number
GROUP BY l.program_id,l.asn_obset_id,l.member_num,n.config
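The GROUP BY with min(n.saa_exit) in the query above keeps, for each dark association member, the earliest SAA exit time among its POST-SAA-DARK exposures. The same grouping can be sketched in plain terms (the sample values below are made up for illustration):

```python
# Sketch of the min(saa_exit) grouping: for each dark association member
# (program_id, asn_obset_id, member_num, config), keep the earliest SAA
# exit among the matching POST-SAA-DARK exposures. Fixed-width time
# strings in this format compare chronologically, so string min() works.
rows = [  # (program_id, asn_obset_id, member_num, config, saa_exit) -- made-up values
    ("8abc", "02", "010", "NIC1", "2003.123:10:05:00"),
    ("8abc", "02", "010", "NIC1", "2003.123:10:01:00"),
]

earliest = {}
for program_id, asn_obset_id, member_num, config, saa_exit in rows:
    key = (program_id, asn_obset_id, member_num, config)
    if key not in earliest or saa_exit < earliest[key]:
        earliest[key] = saa_exit
```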
The NICSAA process performs the following query on the nic_saa_dark
relation and the temporary 'nicsaadark' relation to remove old records from nic_saa_dark by association id.
DELETE nic_saa_dark
FROM nic_saa_dark,#nicsaadark
WHERE nic_saa_dark.program_id=#nicsaadark.program_id
AND nic_saa_dark.obset_id=#nicsaadark.obset_id
AND nic_saa_dark.member_num=#nicsaadark.member_num
The NICSAA process performs the following queries on the nic_saa_dark
relation and the temporary 'nicsaadark' relation to finish removing old records from nic_saa_dark and to
populate it with new SAA dark association information from the Mission Schedule.
SELECT * FROM #nicsaadark
DELETE FROM nic_saa_dark
WHERE saa_exit_hour = "saa_hour"
AND config = "config"
INSERT INTO nic_saa_dark VALUES ("saa_hour","config","program_id","obset_id","member_num")
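The delete-then-insert refresh above can be sketched with an in-memory SQLite table. SQLite and the sample values are assumptions for demonstration only; the operational database is not SQLite, and only the column names follow the manual:

```python
import sqlite3

# Illustrative refresh of nic_saa_dark: for each new dark association,
# delete any stale row with the same SAA-exit hour and config, then
# insert the replacement row.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE nic_saa_dark
               (saa_exit_hour TEXT, config TEXT,
                program_id TEXT, obset_id TEXT, member_num TEXT)""")
# Stale row from a previous Mission Schedule (made-up values).
con.execute("INSERT INTO nic_saa_dark VALUES ('2003.123:10','NIC1','old','01','010')")

new_rows = [("2003.123:10", "NIC1", "0abc", "02", "020")]
for saa_hour, config, program_id, obset_id, member_num in new_rows:
    con.execute("DELETE FROM nic_saa_dark WHERE saa_exit_hour=? AND config=?",
                (saa_hour, config))
    con.execute("INSERT INTO nic_saa_dark VALUES (?,?,?,?,?)",
                (saa_hour, config, program_id, obset_id, member_num))

rows = con.execute("SELECT * FROM nic_saa_dark").fetchall()
```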
The NICSAA process performs the following query on the nic_saa_link
relation and the temporary 'nicsaa' relation to remove old data from nic_saa_link.
DELETE nic_saa_link
FROM nic_saa_link,#nicsaa
WHERE nic_saa_link.program_id=#nicsaa.program_id
AND nic_saa_link.obset_id=#nicsaa.obset_id
AND nic_saa_link.ob_number=#nicsaa.ob_number
The NICSAA process performs the following query on the nic_saa_link
and nic_saa_dark relations and the temporary 'nicsaa' relation to populate
nic_saa_link with new information associating NICMOS exposures to their corresponding dark associations.
INSERT INTO nic_saa_link
SELECT s.program_id, s.obset_id, s.ob_number, d.program_id, d.obset_id,
d.member_num, ("N"+d.program_id+d.obset_id+d.member_num)
FROM #nicsaa s, nic_saa_dark d
WHERE s.targname != "POST-SAA-DARK"
AND d.saa_exit_hour = SUBSTRING(s.saa_exit,1,11)
AND d.config = s.config
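Two details of the INSERT above lend themselves to a short sketch: the dark association rootname is formed by concatenating "N" with the dark's program id, obset id, and member number, and exposures are matched to darks by comparing the first 11 characters of saa_exit (Sybase SUBSTRING is 1-based) to saa_exit_hour. The sample id values are made up:

```python
# Build the NIC SAA dark association rootname exactly as the INSERT above
# concatenates it: "N" + program_id + obset_id + member_num.
def dark_asn_rootname(program_id: str, obset_id: str, member_num: str) -> str:
    return "N" + program_id + obset_id + member_num

# SUBSTRING(s.saa_exit, 1, 11) in Sybase corresponds to the Python
# slice saa_exit[:11], i.e. matching on the hour portion of the time.
def matches_dark(saa_exit: str, saa_exit_hour: str) -> bool:
    return saa_exit[:11] == saa_exit_hour

print(dark_asn_rootname("8abc", "01", "010"))  # N8abc01010
```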
The NICSAA process performs the following queries to remove temporary relations from the database.
DROP TABLE #obslist
DROP TABLE #nicsaa
DROP TABLE #nicsaadark
PDRREQ Process Queries
- The PDRREQ process uses generic code (GENREQ) common to many OPUS/EDPS pipelines. It performs
numerous queries on a number of relations; those queries are not documented here. The primary function of
the queries performed by PDRREQ is to track and maintain a history of the requests made to the DADS
Archive System to archive files.
PDRRSP Process Queries
- The PDRRSP process uses generic code (INGRSP) common to many OPUS/EDPS pipelines. It performs
numerous queries on a number of relations; those queries are not documented here. The primary function of
the queries performed by PDRRSP is to track and maintain a history of the responses generated by the
DADS Archive System in response to requests to archive files.