Mission Schedule files - parsed to extract various obset- and
    observation-related information, which is inserted into database relations
    that are used to determine and control the specific processing performed
    by the EDPS FOF, FGS, and Astrometry pipelines; renamed and submitted for
    archiving to the HST archive system; and moved to a holding directory
    (PDR_HLD_DIR), where they are held until operations personnel deem them
    no longer needed

Mission Timeline Report files - renamed and submitted for archiving to the
    HST archive system

Science Mission Schedule files - renamed and submitted for archiving to the
    HST archive system

Orbital Ephemeris files - used as input to produce FITS and ASCII files
    containing the ephemeris data, and also renamed and submitted for
    archiving to the HST archive system

PASS Auxiliary Data Files - copied to a holding directory (PDR_HLD_DIR) for
    use by operations personnel
PDR_MSC_DIR directory
    Mission Schedule files - parsed to extract various obset- and
    observation-related information, which is inserted into database relations
    that are used to determine and control the specific processing performed
    by the EDPS FOF, FGS, and Astrometry pipelines; submitted for archiving to
    the HST archive system; and moved to a holding directory (PDR_HLD_DIR),
    where they are held until operations personnel deem them no longer needed

PDR_MTL_DIR directory
    Mission Timeline Report files - submitted for archiving to the HST archive
    system

PDR_SMS_DIR directory
    Science Mission Schedule files - submitted for archiving to the HST
    archive system

PDR_ORB_DIR directory
    Orbital Ephemeris files - used as input to produce FITS and ASCII files
    containing the ephemeris data, and then submitted along with the FITS and
    ASCII files for archiving to the HST archive system

PDR_HLD_DIR directory
    PASS Auxiliary Data Files - held in this holding directory for use by
    operations personnel

After the PASS products have been processed, the PDR pipeline starts its
clean-up phase, in which it deletes files and OSFs that are no longer needed.
The deletion of files is automatic; the deletion of OSFs, however, requires
manual intervention. The PDR pipeline process that deletes OSFs, i.e., the
PDRDEL process residing in the (DL) stage of the pipeline, is triggered by the
manual insertion of a 'd' in the (DL) stage of the pipeline for each class of
OSF that is to be deleted.
WHERE:
    calendar id = yydddlsv[q]
        yy  = 2 digit year
        ddd = day of year
        l   = length in days
        s   = series identifier (a-z)
        v   = version number (0-9, a-z)
        q   = optional identifier for special calendars (e.g., r for replan,
              h for health and safety)
    sms id = sadddypr
        s   = stands for SMS
        ddd = day of year
        y   = 1 digit year (j=1999, k=2000, l=2001, m=2002, ...)
        p   = ?
        r   = ?
    [r]  = optional retransmission sequence letter (a, b, c, ...)
    n    = identifier to indicate Mission Timeline sequential time periods
           (1, 2, 3, ...)
    ext  = PASS Auxiliary Data File extension (otr, rul, rdo)
    date = yyddd
        yy  = 2 digit year
        ddd = day of year

EXAMPLES: for year 2000, day 129, 7 day, series b, version 3:
    001297b3_sa129k03_f.msc           - PASS Mission Schedule file
    001297b3_sa129k03_f.msc_trans     - PASS Mission Schedule transfer notification file
    001297b3_sa129k03_f.msc_done      - processed PASS Mission Schedule transfer notification file
    001297b3_sa129k03_f.msc_bad       - in-error PASS Mission Schedule transfer notification file
    001297b3_sa129k03_f.msc_duplicate - duplicate PASS Mission Schedule transfer notification file
    001297b3_sa129k03_fa.msc          - PASS Mission Schedule file, retransmission 'a'
    001297b3_sa129k03_fa.msc_trans    - PASS Mission Schedule transfer notification file, retransmission 'a'
    001297b3_sa129k03_fa.msc_done     - processed PASS Mission Schedule transfer notification file, retransmission 'a'
    001297b3_sa129k03_f1.mtl          - Mission Timeline Report file for time period 1
    001297b3_sa129k03_f1.mtl_trans    - Mission Timeline Report transfer notification file for time period 1
    001297b3_sa129k03_f1.mtl_done     - processed Mission Timeline Report transfer notification file for time period 1
    001297b3_sa129k03_f2.mtl          - Mission Timeline Report file for time period 2
    001297b3_sa129k03_f2.mtl_trans    - Mission Timeline Report transfer notification file for time period 2
    001297b3_sa129k03_f2.mtl_done     - processed Mission Timeline Report transfer notification file for time period 2
    001297b3_sa129k03_f.sms           - Science Mission Schedule file
    001297b3_sa129k03_f.sms_trans     - Science Mission Schedule transfer notification file
    001297b3_sa129k03_f.sms_done      - processed Science Mission Schedule transfer notification file
    001297b3_sa129k03_f.otr           - PASS Auxiliary Data file
    001297b3_sa129k03_f.pas_trans     - PASS Auxiliary Data transfer notification file
    001297b3_sa129k03_f.pas_done      - processed PASS Auxiliary Data transfer notification file
    001297b3_sa129k03_f.rul           - PASS Auxiliary Data file
    001297b3_sa129k03_f.pas_trans     - PASS Auxiliary Data transfer notification file
    001297b3_sa129k03_f.rdo           - PASS Auxiliary Data file
    001297b3_sa129k03_f.pas_trans     - PASS Auxiliary Data transfer notification file
    stdef_00129.dat                   - Orbital Ephemeris file
    stdef_00129.orb_trans             - Orbital Ephemeris transfer notification file
    stdef_00129.orb_done              - processed Orbital Ephemeris transfer notification file
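Scripts that handle these products usually need to split such names back into
their fields. A minimal sketch of that decoding (the regular expression and
the field names below are illustrative, not part of PASS or OPUS):

```python
import re

# Hypothetical pattern for <calendar id>_<sms id>_f[r].<ext> names.
# Note that \w also matches '_', so compound extensions such as
# msc_trans are captured whole.
PASS_NAME = re.compile(
    r"^(?P<cal_id>\d{6}[a-z][0-9a-z][a-z]?)"  # yydddlsv[q]
    r"_(?P<sms_id>s[a-z]\d{3}[a-z]\w{2})"     # sadddypr
    r"_f(?P<seq>\w?)"                         # retransmission letter or MTL period
    r"\.(?P<ext>\w+)$"
)

def parse_pass_name(name):
    """Return the name's fields as a dict, or raise ValueError."""
    m = PASS_NAME.match(name)
    if m is None:
        raise ValueError("not a PASS product file name: %s" % name)
    return m.groupdict()
```

For example, parse_pass_name("001297b3_sa129k03_f1.mtl") yields cal_id
"001297b3", sms_id "sa129k03", seq "1", and ext "mtl".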
WHERE:
    Y  = 1 digit year (j=1999, k=2000, l=2001, m=2002, ...)
    m  = 1 digit month (1 thru c, where 1=January and a=October)
    d  = 1 digit day (1 thru v, where 1=1, a=10, and v=31)
    hh = 2 digit hour (00 thru 23)
    mm = 2 digit minute (00 thru 59)
    x  = file sequence id (0-9, a-z)

EXAMPLE for year 2002, month May, day 9, hour 0800, minute 25:
    um5908250.pod
WHERE:
    Y  = 1 digit year (j=1999, k=2000, l=2001, m=2002, ...)
    m  = 1 digit month (1 thru c, where 1=January and a=October)
    d  = 1 digit day (1 thru v, where 1=1, a=10, and v=31)
    hh = 2 digit hour (00 thru 23)
    mm = 2 digit minute (00 thru 59)
    x  = file sequence id (0-9, a-z)

EXAMPLE for year 2002, month May, day 9, hour 0800, minute 25:
    vm5908251.pod
WHERE:
    Y  = 1 digit year (j=1999, k=2000, l=2001, m=2002, ...)
    m  = 1 digit month (1 thru c, where 1=January and a=October)
    d  = 1 digit day (1 thru v, where 1=1, a=10, and v=31)
    hh = 2 digit hour (00 thru 23)
    mm = 2 digit minute (00 thru 59)
    x  = file sequence id (0-9, a-z)

EXAMPLE for year 2002, month May, day 9, hour 0800, minute 25:
    ym590825a.pod
WHERE:
    Y  = 1 digit year (j=1999, k=2000, l=2001, m=2002, ...)
    m  = 1 digit month (1 thru c, where 1=January and a=October)
    d  = 1 digit day (1 thru v, where 1=1, a=10, and v=31)
    hh = 2 digit hour (00 thru 23)
    mm = 2 digit minute (00 thru 59)

EXAMPLES for year 2002, month May, day 9, hour 0800, minute 25:
    pm590825r.pod - POD file
    pm590825r.fit - FITS product
    pm590825r.asc - ASCII product
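The <Ymdhhmmx> encoding above is mechanical, so the renamed POD file names can
be generated by a small helper. A sketch, assuming the letter mappings given
in the WHERE lists (pod_name itself is an illustrative helper, not an OPUS
routine):

```python
from datetime import datetime

# Single-character encoding used for month (1-c) and day (1-v):
# 1-9 stay digits, 10 -> a, 11 -> b, ... 31 -> v.
_CHARS = "0123456789abcdefghijklmnopqrstuv"

def pod_name(prefix, when, seq="0"):
    """Build a <prefix><Ymdhhmmx>.pod name from a datetime.

    The year letter runs j=1999, k=2000, l=2001, m=2002, ...
    """
    year = chr(ord("j") + when.year - 1999)
    return "%s%s%s%s%02d%02d%s.pod" % (
        prefix, year, _CHARS[when.month], _CHARS[when.day],
        when.hour, when.minute, seq)
```

pod_name("u", datetime(2002, 5, 9, 8, 25)) gives "um5908250.pod"; an Orbital
Ephemeris name would use prefix "p" and seq "r".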
WHERE:
    YYYY = year of archive request file generation
    MM   = month of archive request file generation
    DD   = day of archive request file generation
    HH   = hour of archive request file generation
    MM   = minute of archive request file generation
    SS   = seconds of archive request file generation
    i    = file type identifier (u=PASS Mission Schedule file, v=Mission
           Timeline Report file, y=Science Mission Schedule file, p=Orbital
           Ephemeris file)
    y    = 1 digit year (j=1999, k=2000, l=2001, m=2002, ...)
    m    = 1 digit month (1 thru c, where 1=January and a=October)
    d    = 1 digit day (1 thru v, where 1=1, a=10, and v=31)
    hh   = 2 digit hour (00 thru 23)
    mm   = 2 digit minute (00 thru 59)
    x    = file sequence id (0-9, a-z); for Orbital Ephemeris files the value
           is always 'r'
    zzz  = file type identifier (msc=PASS Mission Schedule file, mtl=Mission
           Timeline Report file, sms=Science Mission Schedule file,
           orb=Orbital Ephemeris file)

EXAMPLES:
    20021212_082578_pm590825r_orb.areq - archive request file for ephemeris products
    20021212_082578_um590825a_msc.areq - archive request file for PASS mission schedule
    20021212_082578_ym590825b_sms.areq - archive request file for Science mission schedule
    20021212_082578_vm590825c_mtl.areq - archive request file for mission timeline
WHERE:
    YYYY = year of archive request file generation
    MM   = month of archive request file generation
    DD   = day of archive request file generation
    HH   = hour of archive request file generation
    MM   = minute of archive request file generation
    SS   = seconds of archive request file generation
    i    = file type identifier (u=PASS Mission Schedule file, v=Mission
           Timeline Report file, y=Science Mission Schedule file, p=Orbital
           Ephemeris file)
    y    = 1 digit year (j=1999, k=2000, l=2001, m=2002, ...)
    m    = 1 digit month (1 thru c, where 1=January and a=October)
    d    = 1 digit day (1 thru v, where 1=1, a=10, and v=31)
    hh   = 2 digit hour (00 thru 23)
    mm   = 2 digit minute (00 thru 59)
    x    = file sequence id (0-9, a-z); for Orbital Ephemeris files the value
           is always 'r'
    zzz  = file type identifier (msc=PASS Mission Schedule file, mtl=Mission
           Timeline Report file, sms=Science Mission Schedule file,
           orb=Orbital Ephemeris file)

EXAMPLES:
    20021212_082578_pm590825r_orb.arsp - archive response file for ephemeris products
    20021212_082578_um5908255_msc.arsp - archive response file for PASS mission schedule
    20021212_082578_ym5908256_sms.arsp - archive response file for Science mission schedule
    20021212_082578_vm5908257_mtl.arsp - archive response file for mission timeline
WHERE:
    i    = file type identifier (u=PASS Mission Schedule file, v=Mission
           Timeline Report file, y=Science Mission Schedule file, p=Orbital
           Ephemeris file)
    y    = 1 digit year (j=1999, k=2000, l=2001, m=2002, ...)
    m    = 1 digit month (1 thru c, where 1=January and a=October)
    d    = 1 digit day (1 thru v, where 1=1, a=10, and v=31)
    hh   = 2 digit hour (00 thru 23)
    mm   = 2 digit minute (00 thru 59)
    x    = file sequence id (0-9, a-z); for Orbital Ephemeris files the value
           is always 'r'
    zzz  = file type identifier (msc=PASS Mission Schedule file, mtl=Mission
           Timeline Report file, sms=Science Mission Schedule file,
           orb=Orbital Ephemeris file)

EXAMPLES:
    pm590825r_orb.log - archive log file for ephemeris products
    um5908250_msc.log - archive log file for PASS mission schedule
    ym5908251_sms.log - archive log file for Science mission schedule
    vm5908252_mtl.log - archive log file for mission timeline
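Putting the pieces together, an archive request or response name is just the
generation timestamp plus the renamed product stem and file type. A sketch
(the helper name is illustrative, not an OPUS routine):

```python
from datetime import datetime

def archive_name(generated, pod_stem, ztype, ext="areq"):
    """Build a YYYYMMDD_HHMMSS_<i><ymdhhmmx>_<zzz>.<ext> name.

    generated - datetime the archive request/response file was generated
    pod_stem  - the <i><ymdhhmmx> stem of the product, e.g. "um590825a"
    ztype     - one of msc, mtl, sms, orb
    ext       - areq, arsp, or log-style extension
    """
    return "{:%Y%m%d_%H%M%S}_{}_{}.{}".format(generated, pod_stem, ztype, ext)
```

For example, archive_name(datetime(2002, 12, 12, 8, 25, 58), "um590825a",
"msc") produces "20021212_082558_um590825a_msc.areq".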
Input Trigger:
    MSCCPY is a file poller process that is triggered by the appearance of
    PASS Mission Schedule Product Transfer Notification Message files in the
    PDR_PASS_DIR directory

Output Triggers:
    MSCCPY triggers the REPLAN process

    DR EP RP UP MS CT NS RQ RS CL DL  Class
    -- -- -- -- -- -- -- -- -- -- --  -----
    c  _  w  _  _  _  _  _  _  _  _   msc
INPUT:
    PDR_PASS_DIR:
        <calendar id>_<sms id>_f[r].msc       - PASS Mission Schedule files from PASS
        <calendar id>_<sms id>_f[r].msc_trans - PASS Mission Schedule transfer notification files from PASS

OUTPUT:
    OPUS_OBSERVATIONS_DIR:
        'msc' class OSFs for PASS Mission Schedule files
    PDR_PASS_DIR:
        <calendar id>_<sms id>_f[r].msc_done      - processed PASS Mission Schedule transfer notification files from PASS
        <calendar id>_<sms id>_f[r].msc_bad       - PASS Mission Schedule transfer notification files found to be in error
        <calendar id>_<sms id>_f[r].msc_duplicate - PASS Mission Schedule transfer notification files found to be duplicates
    PDR_MSC_DIR:
        u<Ymdhhmmx>.pod - PASS Mission Schedule files received from PASS, moved here from the PDR_PASS_DIR directory and renamed
Pipeline Mode:
    pdrcpy -p opus_definitions_dir:your.path -r msccpy   (in task line of resource file)

    where:
        -p = denotes path file specification follows
        -r = denotes resource file for the MSCCPY process
        opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
!
! MSCCPY RESOURCE FILE
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
!  MOD              PR
! LEVEL    DATE    NUMBER  User   Description
! -----  --------  ------  ------ -------------------------------------
!  000   11/02/99  39733   Ken S. Created
!  001   10/24/01  44684   Goldst Standardized OPUS PDR pipeline version
!  002   01/30/02  45016   Goldst Corrected OK_TO_UPDATE_DATABASE
!  003   07/10/02  46101   J.Baum inserted .DANGLE for file status
!  004   08/09/02  46352   Goldst Add OSF_PROCESSING keyword for cleandata
!  005   10/04/02  46352   Goldst Corrected PASS_COMPLETE stage and comment
!--------------------------------------------------------------------
PROCESS_NAME = msccpy
TASK = <pdrcpy -p $PATH_FILE -r msccpy>
DESCRIPTION = 'MSC product notification message poller'
SYSTEM = PDR
CLASS = msc
DISPLAY_ORDER = 1
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
INTERNAL_POLLING_PROCESS = TRUE
FILE_RANK = 1                      ! First Trigger
FILE_OBJECT1 = *.msc_trans         ! File specification for searches
FILE_DIRECTORY1 = PDR_PASS_DIR     ! Polling directory
!File status
FILE_PROCESSING.DANGLE = _proc     ! Extension addition during processing
FILE_SUCCESS.DANGLE = _done        ! Extension addition if normal processing
FILE_ERROR.DANGLE = _bad           ! Extension addition if error
FILE_DUPLICATE.DANGLE = _duplicate ! Extension addition if duplicate
!OSF status
PAS_CREATE.DR = p      !PAS file processing.
PAS_COMPLETE.DR = c    !PAS file completed
PAS_COMPLETE.RP = w    !Trigger for replan processing
PAS_DUPLICATE.DR = d   !PAS file duplicate detected.
PAS_FAIL.DR = e        !PAS file processing failed.
OSF_PROCESSING.DR = p  !Needed for cleandata processing
POLLING_TIME = 10      ! Wait (seconds) before polling for next
INPATH = PDR_PASS_DIR  ! location of pass data files.
OUTPATH = PDR_MSC_DIR  ! destination for MSC files
MINBLOCKS = 50000      ! blocks required on output disk
EXTENSION = .msc_trans ! MSC input file extension
PREFIX = u             ! PASS file prefix
RENAME = Y             ! Rename PASS file.
TIMES = Y              ! Update files_times relation.
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
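The FILE_* and RENAME settings above amount to a simple poll-rename-move
cycle: find a notification file, verify the product arrived, move the product
to OUTPATH under its new POD name, and mark the notification _done (or _bad).
A minimal sketch of one polling pass, assuming a caller-supplied pod_name
function for the u<Ymdhhmmx>.pod target name; the real pdrcpy task also
creates the OSF, detects duplicates, and updates the files_times relation:

```python
import os
import shutil

def poll_once(inpath, outpath, pod_name, extension=".msc_trans"):
    """Process every *.msc_trans notification currently in inpath."""
    for trans in sorted(os.listdir(inpath)):
        if not trans.endswith(extension):
            continue
        product = trans[: -len("_trans")]   # notification name minus _trans
        done = product + "_done"            # success marker (.msc_done)
        bad = product + "_bad"              # error marker (.msc_bad)
        src = os.path.join(inpath, product)
        if not os.path.exists(src):         # product never arrived: error
            os.rename(os.path.join(inpath, trans), os.path.join(inpath, bad))
            continue
        # Move the product to the output directory under its new POD name.
        shutil.move(src, os.path.join(outpath, pod_name(product) + ".pod"))
        os.rename(os.path.join(inpath, trans), os.path.join(inpath, done))
```

The _done/_bad renaming follows the file-name examples earlier (.msc_trans
becomes .msc_done), which is a simplification of the DANGLE mechanics.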
Input Trigger:
    MTLCPY is a file poller process that is triggered by the appearance of
    PASS Mission Timeline Report Transfer Notification Message files in the
    PDR_PASS_DIR directory

Output Triggers:
    MTLCPY triggers the PDRREQ process

    DR EP RP UP MS CT NS RQ RS CL DL  Class
    -- -- -- -- -- -- -- -- -- -- --  -----
    c  _  _  _  _  _  _  w  _  _  _   mtl
INPUT:
    PDR_PASS_DIR:
        <calendar id>_<sms id>_f<n>.mtl       - Mission Timeline Report files from PASS
        <calendar id>_<sms id>_f<n>.mtl_trans - Mission Timeline Report transfer notification files from PASS

OUTPUT:
    OPUS_OBSERVATIONS_DIR:
        'mtl' class OSFs for Mission Timeline Report files
    PDR_PASS_DIR:
        <calendar id>_<sms id>_f<n>.mtl_done      - processed Mission Timeline Report transfer notification files from PASS
        <calendar id>_<sms id>_f<n>.mtl_bad       - Mission Timeline Report transfer notification files found to be in error
        <calendar id>_<sms id>_f<n>.mtl_duplicate - Mission Timeline Report transfer notification files found to be duplicates
    PDR_MTL_DIR:
        v<Ymdhhmmx>.pod - Mission Timeline Report files received from PASS, moved here from the PDR_PASS_DIR directory and renamed
Pipeline Mode:
    pdrcpy -p opus_definitions_dir:your.path -r mtlcpy   (in task line of resource file)

    where:
        -p = denotes path file specification follows
        -r = denotes resource file for the MTLCPY process
        opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
!
! MTLCPY RESOURCE FILE
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!---------------------------------------------------------------------
! REVISION HISTORY
!---------------------------------------------------------------------
!  MOD              PR
! LEVEL    DATE    NUMBER  User   Description
! -----  --------  ------  ------ --------------------------------------
!  000   11/02/99  39733   Ken S. Created
!  001   10/24/01  44684   Goldst Standardized OPUS PDR pipeline version
!  002   01/30/02  45016   Goldst Corrected OK_TO_UPDATE_DATABASE
!  003   07/10/02  46101   J.Baum inserted .DANGLE for file status
!  004   08/09/02  46352   Goldst Add OSF_PROCESSING keyword for cleandata
!---------------------------------------------------------------------
PROCESS_NAME = mtlcpy
TASK = <pdrcpy -p $PATH_FILE -r mtlcpy>
DESCRIPTION = 'MTL product notification message poller'
SYSTEM = PDR
CLASS = mtl
DISPLAY_ORDER = 1
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
INTERNAL_POLLING_PROCESS = TRUE
FILE_RANK = 1                      ! First Trigger
FILE_OBJECT1 = *.mtl_trans         ! File specification for searches
FILE_DIRECTORY1 = PDR_PASS_DIR     ! Polling directory
!File status
FILE_PROCESSING.DANGLE = _proc     ! Extension addition during processing
FILE_SUCCESS.DANGLE = _done        ! Extension addition if normal processing
FILE_ERROR.DANGLE = _bad           ! Extension addition if error
FILE_DUPLICATE.DANGLE = _duplicate ! Extension addition if duplicate
!OSF status
PAS_CREATE.DR = p      !PAS file processing.
PAS_COMPLETE.DR = c    !PAS file completed
PAS_COMPLETE.RQ = w    !Trigger for request process
PAS_DUPLICATE.DR = d   !PAS file duplicate detected.
PAS_FAIL.DR = e        !PAS file processing failed.
OSF_PROCESSING.DR = p  !Needed for cleandata processing
POLLING_TIME = 10      ! Wait (seconds) before polling for next
INPATH = PDR_PASS_DIR  ! location of pass data files.
OUTPATH = PDR_MTL_DIR  ! destination for MTL files
MINBLOCKS = 50000      ! blocks required on output disk
EXTENSION = .mtl_trans ! MTL input file extension
PREFIX = v             ! PASS file prefix
RENAME = Y             ! Rename PASS file.
TIMES = Y              ! Update files_times relation.
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
Input Trigger:
    SMSCPY is a file poller process that is triggered by the appearance of
    Science Mission Schedule Product Transfer Notification Message files in
    the PDR_PASS_DIR directory

Output Triggers:
    SMSCPY triggers the PDRREQ process

    DR EP RP UP MS CT NS RQ RS CL DL  Class
    -- -- -- -- -- -- -- -- -- -- --  -----
    c  _  _  _  _  _  _  w  _  _  _   sms
INPUT:
    PDR_PASS_DIR:
        <calendar id>_<sms id>_f.sms       - Science Mission Schedule files from PASS
        <calendar id>_<sms id>_f.sms_trans - Science Mission Schedule transfer notification files from PASS

OUTPUT:
    OPUS_OBSERVATIONS_DIR:
        'sms' class OSFs for Science Mission Schedule files
    PDR_PASS_DIR:
        <calendar id>_<sms id>_f.sms_done      - processed Science Mission Schedule transfer notification files from PASS
        <calendar id>_<sms id>_f.sms_bad       - Science Mission Schedule transfer notification files found to be in error
        <calendar id>_<sms id>_f.sms_duplicate - Science Mission Schedule transfer notification files found to be duplicates
    PDR_SMS_DIR:
        y<Ymdhhmmx>.pod - Science Mission Schedule files received from PASS, moved here from the PDR_PASS_DIR directory and renamed
Pipeline Mode:
    pdrcpy -p opus_definitions_dir:your.path -r smscpy   (in task line of resource file)

    where:
        -p = denotes path file specification follows
        -r = denotes resource file for the SMSCPY process
        opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
!
! SMSCPY RESOURCE FILE
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!---------------------------------------------------------------------
! REVISION HISTORY
!---------------------------------------------------------------------
!  MOD              PR
! LEVEL    DATE    NUMBER  User   Description
! -----  --------  ------  ------ --------------------------------------
!  000   11/02/99  39733   Ken S. Created
!  001   10/24/01  44684   Goldst Standardized OPUS PDR pipeline version
!  002   01/30/02  45016   Goldst Corrected OK_TO_UPDATE_DATABASE
!  003   07/10/02  46101   J.Baum inserted .DANGLE for file status
!  004   08/09/02  46352   Goldst Add OSF_PROCESSING keyword for cleandata
!---------------------------------------------------------------------
PROCESS_NAME = smscpy
TASK = <pdrcpy -p $PATH_FILE -r smscpy>
DESCRIPTION = 'SMS product notification message poller'
SYSTEM = PDR
CLASS = sms
DISPLAY_ORDER = 1
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
INTERNAL_POLLING_PROCESS = TRUE
FILE_RANK = 1                      ! First Trigger
FILE_OBJECT1 = *.sms_trans         ! File specification for searches
FILE_DIRECTORY1 = PDR_PASS_DIR     ! Polling directory
!File status
FILE_PROCESSING.DANGLE = _proc     ! Extension addition during processing
FILE_SUCCESS.DANGLE = _done        ! Extension addition if normal processing
FILE_ERROR.DANGLE = _bad           ! Extension addition if error
FILE_DUPLICATE.DANGLE = _duplicate ! Extension addition if duplicate
!OSF status
PAS_CREATE.DR = p      !PAS file processing.
PAS_COMPLETE.DR = c    !PAS file completed
PAS_COMPLETE.RQ = w    !Trigger for request process
PAS_DUPLICATE.DR = d   !PAS file duplicate detected.
PAS_FAIL.DR = e        !PAS file processing failed.
OSF_PROCESSING.DR = p  !Needed for cleandata processing
POLLING_TIME = 10      ! Wait (seconds) before polling for next
INPATH = PDR_PASS_DIR  ! location of pass data files.
OUTPATH = PDR_SMS_DIR  ! destination for SMS files
MINBLOCKS = 50000      ! blocks required on output disk
EXTENSION = .sms_trans ! SMS input file extension
PREFIX = y             ! PASS file prefix
RENAME = Y             ! Rename PASS file.
TIMES = Y              ! Update files_times relation.
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
Input Trigger:
    ORBCPY is a file poller process that is triggered by the appearance of
    Orbital Ephemeris File Product Transfer Notification Message files in the
    PDR_PASS_DIR directory

Output Triggers:
    ORBCPY triggers the PDRORB process

    DR EP RP UP MS CT NS RQ RS CL DL  Class
    -- -- -- -- -- -- -- -- -- -- --  -----
    c  w  _  _  _  _  _  _  _  _  _   orb
INPUT:
    PDR_PASS_DIR:
        stdef_<date>.dat       - Orbital Ephemeris files
        stdef_<date>.orb_trans - Orbital Ephemeris transfer notification files before processing

OUTPUT:
    OPUS_OBSERVATIONS_DIR:
        'orb' class OSFs for Orbital Ephemeris files
    PDR_PASS_DIR:
        stdef_<date>.orb_done      - processed Orbital Ephemeris transfer notification files
        stdef_<date>.orb_bad       - Orbital Ephemeris transfer notification files found to be in error
        stdef_<date>.orb_duplicate - Orbital Ephemeris transfer notification files found to be duplicates
    PDR_ORB_DIR:
        p<Ymdhhmmr>.pod - Orbital Ephemeris files received from PASS, moved here from the PDR_PASS_DIR directory and renamed
Pipeline Mode:
    pdrcpy -p opus_definitions_dir:your.path -r orbcpy   (in task line of resource file)

    where:
        -p = denotes path file specification follows
        -r = denotes resource file for the ORBCPY process
        opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
!
! ORBCPY RESOURCE FILE
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!---------------------------------------------------------------------
! REVISION HISTORY
!---------------------------------------------------------------------
!  MOD              PR
! LEVEL    DATE    NUMBER  User   Description
! -----  --------  ------  ------ --------------------------------------
!  000   11/02/99  39733   Ken S. Created
!  001   10/24/01  44684   Goldst Standardized OPUS PDR pipeline version
!  002   01/30/02  45016   Goldst Corrected OK_TO_UPDATE_DATABASE
!  003   07/10/02  46101   J.Baum inserted .DANGLE for file status
!  004   08/09/02  46352   Goldst Add OSF_PROCESSING keyword for cleandata
!  005   08/26/02  46468   J.Baum Fix PAS_COMPLETE stage
!---------------------------------------------------------------------
PROCESS_NAME = orbcpy
TASK = <pdrcpy -p $PATH_FILE -r orbcpy>
DESCRIPTION = 'ORB product notification message poller'
SYSTEM = PDR
CLASS = orb
DISPLAY_ORDER = 1
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
INTERNAL_POLLING_PROCESS = TRUE
FILE_RANK = 1                      ! First Trigger
FILE_OBJECT1 = *.orb_trans         ! File specification for searches
FILE_DIRECTORY1 = PDR_PASS_DIR     ! Polling directory
!File status
FILE_PROCESSING.DANGLE = _proc     ! Extension addition during processing
FILE_SUCCESS.DANGLE = _done        ! Extension addition if normal processing
FILE_ERROR.DANGLE = _bad           ! Extension addition if error
FILE_DUPLICATE.DANGLE = _duplicate ! Extension addition if duplicate
!OSF status
PAS_CREATE.DR = p      !PAS file processing.
PAS_COMPLETE.DR = c    !PAS file completed
PAS_COMPLETE.EP = w    !Trigger for ephemeris processing
PAS_DUPLICATE.DR = d   !PAS file duplicate detected.
PAS_FAIL.DR = e        !PAS file processing failed.
OSF_PROCESSING.DR = p  !Needed for cleandata processing
POLLING_TIME = 10      ! Wait (seconds) before polling for next
INPATH = PDR_PASS_DIR  ! location of pass data files.
OUTPATH = PDR_ORB_DIR  ! destination for ORB files
MINBLOCKS = 50000      ! blocks required on output disk
EXTENSION = .orb_trans ! ORB input file extension
PREFIX = p             ! Prefix
RENAME = Y             ! Rename PASS file.
TIMES = Y              ! Update files_times relation.
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
Input Trigger:
    PASCPY is a file poller process that is triggered by the appearance of
    PASS Auxiliary Data Product Transfer Notification Message files in the
    PDR_PASS_DIR directory

Output Triggers:
    PASCPY puts a 'c' in the PDRDEL (DL) stage. For other EDPS pipelines, a
    'c' in the (DL) stage triggers the process residing in that stage to
    delete OSFs that are no longer needed. In the PDR pipeline, however, the
    process (PDRDEL) residing in the (DL) stage is actually triggered by a
    'd' in the (DL) stage. The insertion of a 'd' in the (DL) stage is not
    done automatically; a manual insertion by operations personnel is
    required.

    DR EP RP UP MS CT NS RQ RS CL DL  Class
    -- -- -- -- -- -- -- -- -- -- --  -----
    c  _  _  _  _  _  _  _  _  _  c   pas
INPUT:
    PDR_PASS_DIR:
        <calendar id>_<sms id>_f.<ext>     - PASS Auxiliary Data files
        <calendar id>_<sms id>_f.pas_trans - PASS Auxiliary Data transfer notification files

OUTPUT:
    OPUS_OBSERVATIONS_DIR:
        'pas' class OSFs for PASS Auxiliary Data files
    PDR_PASS_DIR:
        <calendar id>_<sms id>_f.pas_done      - processed PASS Auxiliary Data transfer notification files
        <calendar id>_<sms id>_f.pas_bad       - PASS Auxiliary Data transfer notification files found to be in error
        <calendar id>_<sms id>_f.pas_duplicate - PASS Auxiliary Data transfer notification files found to be duplicates
    PDR_HLD_DIR:
        <calendar id>_<sms id>_f.<ext> - PASS Auxiliary Data files moved here from the PDR_PASS_DIR directory
Pipeline Mode:
    pdrcpy -p opus_definitions_dir:your.path -r pascpy   (in task line of resource file)

    where:
        -p = denotes path file specification follows
        -r = denotes resource file for the PASCPY process
        opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
!
! PASCPY RESOURCE FILE
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
!  MOD              PR
! LEVEL    DATE    NUMBER  User   Description
! -----  --------  ------  ------ -------------------------------------
!  000   11/02/99  39733   Ken S. Created
!  001   10/24/01  44684          Standardized OPUS PDR pipeline version
!  002   01/30/02  45016   Goldst Corrected OK_TO_UPDATE_DATABASE
!  003   07/10/02  46101   J.Baum inserted .DANGLE for file status
!--------------------------------------------------------------------
PROCESS_NAME = pascpy
TASK = <pdrcpy -p $PATH_FILE -r pascpy>
DESCRIPTION = 'PAS product notification message poller'
SYSTEM = PDR
CLASS = pas
DISPLAY_ORDER = 1
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
INTERNAL_POLLING_PROCESS = TRUE
FILE_RANK = 1                      ! First Trigger
FILE_OBJECT1 = *.pas_trans         ! File specification for searches
FILE_DIRECTORY1 = PDR_PASS_DIR     ! Polling directory
!File status
FILE_PROCESSING.DANGLE = _proc     ! Extension addition during processing
FILE_SUCCESS.DANGLE = _done        ! Extension addition if normal processing
FILE_ERROR.DANGLE = _bad           ! Extension addition if error
FILE_DUPLICATE.DANGLE = _duplicate ! Extension addition if duplicate
!OSF status
PAS_CREATE.DR = p      !PAS file processing.
PAS_COMPLETE.DR = c    !PAS file completed
PAS_COMPLETE.DL = c    !Trigger for osf_delete process
PAS_DUPLICATE.DR = d   !PAS file duplicate detected.
PAS_FAIL.DR = e        !PAS file processing failed.
POLLING_TIME = 10      ! Wait (seconds) before polling for next
INPATH = PDR_PASS_DIR  ! location of pass data files.
OUTPATH = PDR_HLD_DIR  ! Holding Tank directory.
MINBLOCKS = 50000      ! blocks required on output disk
EXTENSION = .pas_trans ! PAS input file extension
PREFIX = N             ! PASS file prefix
RENAME = N             ! Rename PASS file.
TIMES = N              ! Update files_times relation.
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
Input Trigger:
    PDRORB is triggered by the ORBCPY process

    DR EP RP UP MS CT NS RQ RS CL DL  Class
    -- -- -- -- -- -- -- -- -- -- --  -----
    c  w  _  _  _  _  _  _  _  _  _   orb

Output Triggers:
    PDRORB triggers the PDRREQ process

    DR EP RP UP MS CT NS RQ RS CL DL  Class
    -- -- -- -- -- -- -- -- -- -- --  -----
    c  c  _  _  _  _  _  w  _  _  _   orb
INPUT:
    PDR_ORB_DIR:
        p<Ymdhhmmr>.pod - Orbital Ephemeris POD file
    information from the keyword database

OUTPUT:
    PDR_ORB_DIR:
        p<Ymdhhmmr>.fit - FITS format Ephemeris table file
        p<Ymdhhmmr>.asc - ASCII format Ephemeris table file
Pipeline Mode:
    omsorb -p opus_definitions_dir:your.path -r pdrorb   (in task line of resource file)

    where:
        -p = denotes path file specification follows
        -r = denotes resource file for the PDRORB process
        opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
!
! PDRORB RESOURCE FILE
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
!  MOD              PR
! LEVEL    DATE    NUMBER    User   Description
! -----  --------  --------  ------ -------------------------------------
!  000   09/11/96  31820     Ken S. Created
!  001   05/21/97  34189     Ken S. Add to_queue and from_node to command
!  002   10/28/97  35166     Ken S. AXP/VMS port
!  003   12/31/97  35166.3   Ken S. Add archive request trigger
!  004   03/11/98  36318     Ken S. Add holding tank mnemonic.
!  005   07/26/99  38816_03  Goldst Changed SYSTEM from OMS to NSC
!  006   01/04/00  39733     Ken S. Change to OSF poller.
!  007   10/24/01  44684     Goldst Created pdrorb version
!  008   01/30/02  45016     Goldst Corrected OK_TO_UPDATE_DATABASE
!  009   03/11/02  45016     Goldst Added OSF_TRIGGER1.DATA_ID
!--------------------------------------------------------------------
PROCESS_NAME = pdrorb
TASK = <omsorb -p $PATH_FILE -r pdrorb>
DESCRIPTION = 'Process ephemeris data'
SYSTEM = PDR
CLASS = orb
DISPLAY_ORDER = 1
INTERNAL_POLLING_PROCESS = TRUE
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Set to true to prevent
                                              ! requesting the data again.
OSF_RANK = 1               ! Time event ordering.
OSF_TRIGGER1.EP = w        ! Need a 'Wait' flag in Data Validation
OSF_TRIGGER1.DATA_ID = orb ! Trigger class ID
OSF_PROCESSING.EP = p      ! Processing : ORB Receipt
OSF_COMPLETE.EP = c        ! Completed  : ORB Receipt
OSF_COMPLETE.RQ = w        ! Waiting    : Archive request generation
OSF_FAILED.EP = f          ! Failed     : ORB Receipt
POLLING_TIME = 10          ! Wait (seconds)
INPATH = PDR_ORB_DIR       ! Directory of input files
OUTPATH = PDR_ORB_DIR      ! Directory for output files
HLDPATH = PDR_HLD_DIR      ! Holding Tank directory.
MINBLOCKS = 50000          ! blocks required on output disk
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
Input Trigger:
    REPLAN is triggered by the MSCCPY process

    DR EP RP UP MS CT NS RQ RS CL DL  Class
    -- -- -- -- -- -- -- -- -- -- --  -----
    c  _  w  _  _  _  _  _  _  _  _   msc

Output Triggers:
    REPLAN triggers the UPDATR process

    DR EP RP UP MS CT NS RQ RS CL DL  Class
    -- -- -- -- -- -- -- -- -- -- --  -----
    c  _  c  w  _  _  _  _  _  _  _   msc
INPUT:
    PDR_MSC_DIR:
        u<Ymdhhmmx>.pod - Mission Schedule POD files
    information from the database

OUTPUT:
    superseded Mission Schedule information removed from the database
Pipeline Mode:
    xpoll -p opus_definitions_dir:your.path -r replan   (in task line of resource file)

    where:
        xpoll = External Poller Process used to invoke a script
        -p = denotes path file specification follows
        -r = denotes resource file for the REPLAN process
        opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
! REPLAN RESOURCE FILE
!
! This file is used to define various values for the Update
! Support Schedule Tables process.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
!  MOD              PR
! LEVEL    DATE    NUMBER  User   Description
! -----  --------  ------  ------ --------------------------------
!  000   10/24/01  44684   Goldst Created initial version
!  001   03/11/02  45016   Goldst Added OSF_TRIGGER1.DATA_ID
!--------------------------------------------------------------------
PROCESS_NAME = replan
TASK = <xpoll -p $PATH_FILE -r replan>
DESCRIPTION = 'Process a replan SMS'
COMMAND = check_replan_msc.pl
SYSTEM = PDR
CLASS = msc
DISPLAY_ORDER = 1
OSF_RANK = 1                 ! First Trigger
OSF_TRIGGER1.RP = w          ! Need a 'wait' flag for replan
OSF_TRIGGER1.DATA_ID = msc   ! Trigger class ID
OSF_PROCESSING.RP = p        ! Set the processing flag to 'Processing'
OSF_SUCCESS.RP = c           ! Complete: Completed replan processing
OSF_SUCCESS.UP = w           ! Complete: Set trigger for UPDATR
OSF_FAIL.RP = f              ! Error: Set the trouble flag
XPOLL_ERROR.RP = x           ! Undefined exit status
XPOLL_STATE.01 = OSF_FAIL    ! exit status 1 == OSF_FAIL state
XPOLL_STATE.00 = OSF_SUCCESS ! exit status 0 == OSF_SUCCESS state
POLLING_TIME = 10            ! Amount of time to wait before polling for next
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.SPSS_DB = SPSS_DB
ENV.DSQUERY = DSQUERY
ENV.MSC_DIR = PDR_MSC_DIR
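The XPOLL_STATE entries map the COMMAND script's exit status to an OSF
disposition. That behaviour can be sketched as follows (a simplification of
what xpoll does; the function name is illustrative):

```python
import subprocess

# Mirrors XPOLL_STATE.00 / XPOLL_STATE.01 above; any other exit status
# falls through to the XPOLL_ERROR status ('x' in the RP stage).
XPOLL_STATE = {0: "OSF_SUCCESS", 1: "OSF_FAIL"}

def dispose(argv):
    """Run the external command and report the resulting OSF state."""
    status = subprocess.call(argv)
    return XPOLL_STATE.get(status, "XPOLL_ERROR")
```

An exit status of 0 from check_replan_msc.pl thus marks the (RP) stage 'c'
and sets the UPDATR trigger; any status other than 0 or 1 is treated as
undefined.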
Input Trigger: UPDATR is triggered by the REPLAN process

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  c  w  _  _  _  _  _  _  _    msc

Output Triggers: UPDATR triggers the MSCXTR process

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  c  c  w  _  _  _  _  _  _    msc
INPUT:
    PDR_MSC_DIR: u<Ymdhhmmx>.pod - Mission Schedule pod files
    information from database
OUTPUT: Support Schedule information updated in database
Pipeline Mode:
    xpoll -p opus_definitions_dir:your.path -r updatr (in task line of resource file)
where:
    xpoll = External Poller Process used to invoke a script
    -p = denotes path file specification follows
    -r = denotes resource file for the UPDATR Process
    opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
!
! UPDATR RESOURCE FILE
!
! This file is used to define various values for the Update
! Support Schedule Tables process.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! MOD             PR
! LEVEL DATE     NUMBER User   Description
! ----- -------- ------ ------ --------------------------------
! 000   10/24/01 44684  Goldst Created initial version
! 001   01/30/02 45016  Goldst Added OK_TO_UPDATE_DATABASE
! 002   03/11/02 45016  Goldst Added OSF_TRIGGER1.DATA_ID
!--------------------------------------------------------------------
PROCESS_NAME = updatr
TASK = <xpoll -p $PATH_FILE -r updatr>
DESCRIPTION = 'Update support schedule tables'
COMMAND = insert_support_records.pl
SYSTEM = PDR
CLASS = msc
DISPLAY_ORDER = 1
OSF_RANK = 1                 ! First trigger
OSF_TRIGGER1.UP = w          ! Need a 'wait' flag in updatr
OSF_TRIGGER1.DATA_ID = msc   ! Trigger class ID
OSF_PROCESSING.UP = p        ! Set the processing flag to 'Processing'
OSF_SUCCESS.UP = c           ! Completion: Completed updatr processing
OSF_SUCCESS.MS = w           ! Completion: set wait flag for MSCXTR
OSF_FAIL.UP = f              ! Error: Set the trouble flag
XPOLL_ERROR.UP = x           ! Undefined exit status
XPOLL_STATE.01 = OSF_FAIL    ! exit status 1 == OSF_FAIL state
XPOLL_STATE.00 = OSF_SUCCESS ! exit status 0 == OSF_SUCCESS state
POLLING_TIME = 10            ! Amount of time to wait before polling for next
ENV.MSC_DIR = PDR_MSC_DIR
! ENV.LOG_DELETED_OBSETS = Y ! Y or N, to list obsets deleted from qolink_sms
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.SPSS_DB = SPSS_DB
ENV.DSQUERY = DSQUERY
ENV.OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE
Input Trigger: MSCXTR is triggered by the UPDATR process

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  c  c  w  _  _  _  _  _  _    msc

Output Triggers: MSCXTR triggers the CONTRL process

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  c  c  c  w  _  _  _  _  _    msc
INPUT:
    PDR_MSC_DIR: u<Ymdhhmmx>.pod - Mission Schedule pod files
    information from database
OUTPUT: various database tables populated with information extracted from Mission Schedule files
Pipeline Mode:
    xpoll -p opus_definitions_dir:your.path -r mscxtr (in task line of resource file)
where:
    xpoll = External Poller Process used to invoke a script
    -p = denotes path file specification follows
    -r = denotes resource file for the MSCXTR Process
    opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
!
! mscxtr.resource
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
!          PR
! DATE     NUMBER User   Description
! -------- ------ ------ -------------------------------------
! 06/10/01 43987  Heller First version
! 10/24/01 44684  Goldst Standardized OPUS PDR pipeline version
! 01/30/02 45016  Goldst Added OK_TO_UPDATE_DATABASE
! 03/11/02 45016  Goldst Added OSF_TRIGGER1.DATA_ID
! 06/10/02 45958  Heller Added definition for exit status 3
!--------------------------------------------------------------------
PROCESS_NAME = mscxtr
TASK = <xpoll -p $PATH_FILE -r mscxtr>
COMMAND = mscxtr.csh
DESCRIPTION = 'Process MSC pod files'
SYSTEM = PDR
CLASS = msc
OSF_RANK = 1
OSF_TRIGGER1.MS = w        ! Trigger
OSF_TRIGGER1.DATA_ID = msc ! Trigger class ID
OSF_PROCESSING.MS = p      ! Processing
OSF_SUCCESS.MS = c         ! Completion
OSF_SUCCESS.CT = w         ! Completion
OSF_DB_ERROR.MS = f        ! DB Error
OSF_PARSE_ERROR.MS = e     ! Parsing Error
POLLING_TIME = 10          ! Required
XPOLL_ERROR.MS = x         ! Undefined exit status
XPOLL_ERROR_COUNT = 10     ! This many XPOLL errors will cause the
                           ! process to go ABSENT
! Valid exit codes for COMMAND that allow XPOLL to continue.
! All other XPOLL states will cause process to go ABSENT.
! (The labels are not used for TIME events)
XPOLL_STATE.00 = OSF_SUCCESS
XPOLL_STATE.01 = OSF_PARSE_ERROR
XPOLL_STATE.02 = OSF_DB_ERROR
XPOLL_STATE.03 = OSF_PARSE_ERROR
! Script needs following information to run
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
ENV.OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE
ENV.INPATH = PDR_MSC_DIR
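The XPOLL_STATE.nn entries in mscxtr.resource map the COMMAND's exit status to an OSF state, and XPOLL_ERROR covers any status that is not listed. The dispatch can be sketched as follows (the names and dictionaries are illustrative only, not actual OPUS internals):

```python
# Sketch of how xpoll turns a COMMAND exit status into OSF column updates,
# mirroring the XPOLL_STATE.nn entries of mscxtr.resource. Illustrative only.
XPOLL_STATES = {
    0: "OSF_SUCCESS",      # XPOLL_STATE.00
    1: "OSF_PARSE_ERROR",  # XPOLL_STATE.01
    2: "OSF_DB_ERROR",     # XPOLL_STATE.02
    3: "OSF_PARSE_ERROR",  # XPOLL_STATE.03
}

# Stage-column letters set for each state (from the same resource file).
STATE_MARKS = {
    "OSF_SUCCESS":     {"MS": "c", "CT": "w"},  # complete MS, trigger CONTRL
    "OSF_PARSE_ERROR": {"MS": "e"},
    "OSF_DB_ERROR":    {"MS": "f"},
}

def dispatch(exit_status):
    """Return the OSF column updates for a given COMMAND exit status."""
    state = XPOLL_STATES.get(exit_status)
    if state is None:
        return {"MS": "x"}  # XPOLL_ERROR.MS: undefined exit status
    return STATE_MARKS[state]
```

Note that only exit status 0 advances the pipeline by setting the CT column to 'w'; every error state leaves CT untouched, so CONTRL never fires for a failed MSCXTR run.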
Input Trigger: CONTRL is triggered by the MSCXTR process

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  c  c  c  w  _  _  _  _  _    msc

Output Triggers: CONTRL triggers the NICSAA process

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  c  c  c  c  w  _  _  _  _    msc
INPUT: Mission Schedule information from database
OUTPUT: population of database relations containing information which dictates the actual processing performed by the EDPS FOF, FGS, and AST pipelines
Pipeline Mode:
    xpoll -p opus_definitions_dir:your.path -r contrl (in task line of resource file)
where:
    xpoll = External Poller Process used to invoke a script
    -p = denotes path file specification follows
    -r = denotes resource file for the CONTRL Process
    opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
!
! CONTRL RESOURCE FILE
!
! This file is used to define various values for the Update
! Support Schedule Tables process.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! MOD             PR
! LEVEL DATE     NUMBER User   Description
! ----- -------- ------ ------ ------------------------------------
! 000   10/24/01 44684  Goldst Created initial version
! 001   01/25/02 45016  Goldst ENV.OK_TO_UPDATE_DATABASE from path
! 002   03/11/02 45016  Goldst Added OSF_TRIGGER1.DATA_ID
!--------------------------------------------------------------------
!
PROCESS_NAME = contrl
TASK = <xpoll -p $PATH_FILE -r contrl>
DESCRIPTION = 'Update control database tables'
COMMAND = update_control_tables.pl
SYSTEM = PDR
CLASS = msc
DISPLAY_ORDER = 1
OSF_RANK = 1                 ! First trigger
OSF_TRIGGER1.MS = c          ! Completed MSCXTR processing
OSF_TRIGGER1.CT = w          ! Need a 'wait' flag in control processing stage
OSF_TRIGGER1.DATA_ID = msc   ! Trigger class ID
OSF_PROCESSING.CT = p        ! Set the processing flag to 'Processing'
OSF_SUCCESS.CT = c           ! Completion: Completed control table update
OSF_SUCCESS.NS = w           ! Completion: Trigger for NICSAA processing
OSF_FAIL.CT = f              ! Error: Set the trouble flag
XPOLL_ERROR.CT = x           ! Undefined exit status
XPOLL_STATE.01 = OSF_FAIL    ! exit status 1 == OSF_FAIL state
XPOLL_STATE.00 = OSF_SUCCESS ! exit status 0 == OSF_SUCCESS state
POLLING_TIME = 10            ! Amount of time to wait before polling for next
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.SPSS_DB = SPSS_DB
ENV.DSQUERY = DSQUERY
ENV.OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
Input Trigger: NICSAA is triggered by the CONTRL process

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  c  c  c  c  w  _  _  _  _    msc

Output Triggers: NICSAA triggers the PDRREQ process

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  c  c  c  c  c  w  _  _  _    msc
INPUT: NICMOS related Mission Schedule information from database OUTPUT: population of database relations containing information about NICMOS exposures and SAA dark associations, and NICMOS associations and SAA dark exposures.
Pipeline Mode:
    xpoll -p opus_definitions_dir:your.path -r nicsaa (in task line of resource file)
where:
    xpoll = External Poller Process used to invoke a script
    -p = denotes path file specification follows
    -r = denotes resource file for the NICSAA Process
    opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
!
! NICSAA RESOURCE FILE
!
! This file is used to define various values for the NICMOS SAA
! Table Update process.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! MOD             PR
! LEVEL DATE     NUMBER User   Description
! ----- -------- ------ ------ --------------------------------
! 000   10/24/01 44684  Goldst Created initial version
! 001   03/11/02 45016  Goldst Added OSF_TRIGGER1.DATA_ID
!--------------------------------------------------------------------
PROCESS_NAME = nicsaa
TASK = <xpoll -p $PATH_FILE -r nicsaa>
DESCRIPTION = 'NICMOS SAA table update'
COMMAND = insert_nic_saa_records.pl
SYSTEM = PDR
CLASS = msc
DISPLAY_ORDER = 1
OSF_RANK = 1                  ! First Trigger
OSF_TRIGGER1.NS = w           ! Need a 'wait' flag in nicsaa
OSF_TRIGGER1.DATA_ID = msc    !
OSF_PROCESSING.NS = p         ! Set the processing flag to 'Processing'
OSF_COMPLETE.NS = c           ! Complete: Completed RMS post processing
OSF_COMPLETE.RQ = w           ! Complete: Set archiving stage
OSF_FAIL.NS = f               ! Error: Set the trouble flag
XPOLL_ERROR.NS = x            ! Undefined exit status
POLLING_TIME = 10             ! Amount of time to wait before polling for next
XPOLL_STATE.01 = OSF_FAIL     ! exit status 1 == OSF_FAIL state
XPOLL_STATE.00 = OSF_COMPLETE ! exit status 0 == OSF_COMPLETE state
ENV.NIC_SAA_MAX_DELTA = 3000  ! max seconds from SAA exit for dark utility
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.SPSS_DB = SPSS_DB
ENV.DSQUERY = DSQUERY
Input Trigger: PDRREQ is triggered by the MTLCPY, SMSCPY, PDRORB, and NICSAA processes

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  _  _  _  _  _  w  _  _  _    mtl

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  _  _  _  _  _  w  _  _  _    sms

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  c  _  _  _  _  _  w  _  _  _    orb

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  c  c  c  c  c  w  _  _  _    msc

Output Triggers: PDRREQ triggers the PDRRSP process

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  _  _  _  _  _  c  w  _  _    mtl, sms

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  c  _  _  _  _  _  c  w  _  _    orb

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  c  c  c  c  c  c  w  _  _    msc
INPUT: (constitutes which files to archive)

    PDR_MSC_DIR
        u<Ymdhhmmx>.pod - PASS Mission Schedule files received from PASS, moved here from the PDR_PASS_DIR directory, and renamed.
    PDR_MTL_DIR
        v<Ymdhhmmx>.pod - Mission Timeline Report files received from PASS, moved here from the PDR_PASS_DIR directory, and renamed.
    PDR_SMS_DIR
        y<Ymdhhmmx>.pod - Science Mission Schedule files received from PASS, moved here from the PDR_PASS_DIR directory, and renamed.

    WHERE:
        Y  = 1 digit year (J=1999, k=2000, l=2001, m=2002 ...)
        m  = 1 digit month (1 thru c, where 1=January and a=October)
        d  = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
        hh = 2 digit hour (00 thru 23)
        mm = 2 digit minute (00 thru 59)
        x  = file sequence id (0-9,a-z)

    EXAMPLE for year 2002, month 5, day 9, hour 0800, minute 25: um5908250.pod

    PDR_ORB_DIR
        p<Ymdhhmmr>.pod - Orbital Ephemeris POD file.
        p<Ymdhhmmr>.fit - FITS format Ephemeris table file.
        p<Ymdhhmmr>.asc - ASCII format Ephemeris table file.

    WHERE:
        Y  = 1 digit year (J=1999, k=2000, l=2001, m=2002 ...)
        m  = 1 digit month (1 thru c, where 1=January and a=October)
        d  = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
        hh = 2 digit hour (00 thru 23)
        mm = 2 digit minute (00 thru 59)

    EXAMPLES for year 2002, month May, day 9, hour 0800, minute 25:
        pm590825r.pod - pod file
        pm590825r.fit - fits product
        pm590825r.asc - ascii product

OUTPUT:

    PDR_AREQ_DIR: contains requests to archive PASS related POD files and product files
        YYYYMMDD_HHMMSS_iymdhhmmx_zzz.areq - request to archive PASS related files

    WHERE:
        YYYY = year of archive request file generation
        MM   = month of archive request file generation
        DD   = day of archive request file generation
        HH   = hour of archive request file generation
        MM   = minute of archive request file generation
        SS   = seconds of archive request file generation
        i    = file type identifier (u=PASS Mission Schedule file, v=Mission Timeline Report file, y=Science Mission Schedule file, p=Orbital Ephemeris file)
        y    = 1 digit year (J=1999, k=2000, l=2001, m=2002 ...)
        m    = 1 digit month (1 thru c, where 1=January and a=October)
        d    = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
        hh   = 2 digit hour (00 thru 23)
        mm   = 2 digit minute (00 thru 59)
        x    = file sequence id (0-9,a-z); for Orbital Ephemeris files the value is always 'r'
        zzz  = file type identifier (msc=PASS Mission Schedule file, mtl=Mission Timeline Report file, sms=Science Mission Schedule file, orb=Orbital Ephemeris file)

    EXAMPLES:
        20021212_082578_pm590825r_orb.areq - archive request file for ephemeris products
        20021212_082578_um590825a_msc.areq - archive request for PASS mission schedule
        20021212_082578_ym590825b_sms.areq - archive request file for Science mission schedule
        20021212_082578_vm590825c_mtl.areq - archive request file for mission timeline

    PDR_LOG_DIR: contains a log file written to by the PDRREQ and PDRRSP processes to indicate the disposition of an archive request and its corresponding response
        iymdhhmmx_zzz.log - disposition of PASS related files archive request/response

    WHERE:
        i   = file type identifier (u=PASS Mission Schedule file, v=Mission Timeline Report file, y=Science Mission Schedule file, p=Orbital Ephemeris file)
        y   = 1 digit year (J=1999, k=2000, l=2001, m=2002 ...)
        m   = 1 digit month (1 thru c, where 1=January and a=October)
        d   = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
        hh  = 2 digit hour (00 thru 23)
        mm  = 2 digit minute (00 thru 59)
        x   = file sequence id (0-9,a-z); for Orbital Ephemeris files the value is always 'r'
        zzz = file type identifier (msc=PASS Mission Schedule file, mtl=Mission Timeline Report file, sms=Science Mission Schedule file, orb=Orbital Ephemeris file)

    EXAMPLES:
        pm590825r_orb.log - archive log file for ephemeris products
        um5908250_msc.log - archive log for PASS mission schedule
        ym5908251_sms.log - archive log file for Science mission schedule
        vm5908252_mtl.log - archive log file for mission timeline

    updates to database relations
Pipeline Mode:
    genreq -p opus_definitions_dir:your.path -r pdrreq (in task line of resource file)
where:
    -p = denotes path file specification follows
    -r = denotes resource file for the PDRREQ Process
    opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
!
! PDRREQ RESOURCE FILE
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
!          PR
! DATE     NUMBER User   Description
! -------- ------ ------ -------------------------------------
! 07/01/99 39404  Heller UNIX version of resource file
! 01/13/00 39307  Heller Add POD class
! 03/16/00 40911  Heller Fix qoarchives.cal_archdate update
! 10/24/01 44684  Goldst Created OPUS PDR pipeline version
! 01/30/02 45016  Goldst Corrected OK_TO_UPDATE_DATABASE
! 03/13/02 45016  Goldst Added TRIGGERn and TRIGGERn.DATA_ID
!---------------------------------------------------------------------------
PROCESS_NAME = pdrreq
TASK = <genreq -p $PATH_FILE -r pdrreq>
DESCRIPTION = 'Generate nonscience archive request'
SYSTEM = PDR
CLASS = all
DISPLAY_ORDER = 1
!---------------------------------------------------------------------------
! EVNT resource.
!---------------------------------------------------------------------------
POLLING_TIME = 5           ! Response time of the application
OSF_RANK = 1               ! OSF event ordering.
OSF_TRIGGER1.RQ = w        ! ARCREQ is triggered by AR = W
OSF_TRIGGER1.DATA_ID = msc ! Trigger1 class ID
OSF_TRIGGER2.RQ = w        ! ARCREQ is triggered by AR = W
OSF_TRIGGER2.DATA_ID = mtl ! Trigger2 class ID
OSF_TRIGGER3.RQ = w        ! ARCREQ is triggered by AR = W
OSF_TRIGGER3.DATA_ID = sms ! Trigger3 class ID
OSF_TRIGGER4.RQ = w        ! ARCREQ is triggered by AR = W
OSF_TRIGGER4.DATA_ID = orb ! Trigger4 class ID
!---------------------------------------------------------------------------
! Application Specific resource
!---------------------------------------------------------------------------
POLLING_TIME = 1
OSF_PROCESSING.RQ = p      ! letter to be used when an OSF is processed.
OSF_ERROR.RQ = e           ! letter to be used when there is an error.
OSF_SUCCESS1.RQ = c        ! Letters to be used when it is successful
OSF_SUCCESS1.RS = w        ! completion.
OSF_SUCCESS2.RQ = c
OSF_SUCCESS2.RS = w
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
MAX_ERROR = 10             ! Maximum number of unexpected errors before
                           ! ARCREQ quits
! Archive class and OSF map to be used.
ARCHIVE_MAP = OPUS_DEFINITIONS_DIR:archclass_osf.map
!---------------------------------------------------------------------------
! Archive groups that are recognized. The archive group name (the name
! before the ".") must be one of the archive group names specified in
! ARCHIVE_MAP.
!
! For each archive group, the following resources must be present.
!   .AREQ_DIR       - The pointer to the areq directory
!   .LOG_DIR        - The pointer to the directory where the log file is kept.
!   .DATASET_DIR    - The data set directory.
!   .TRACK_EXT      - (Y/N) When it is Y, every extension is saved in the
!                     archive_files relation.
!   .DATA_TYPE      - DATA_TYPE value for the archive request.
!   .DATASET_FILTER - <DATASET_NAME>.* The filter to be used to find
!                     all the datasets. <DATASET_NAME> will be replaced
!                     by the actual OSF dataset name at run time.
!   .OSF_STATE      - This resource allows different archive groups to have
!                     different successful completion statuses.
!
! For the Generic data class,
! DATASET_FILTER must be set to <DATASET_NAME>.*, since archive is not
! capable
!
! For SM 97 orphan and unassociated fits files, the following
! additional keywords must be specified.
!   .TRL_DIR        - The directory where the .TRA files will be saved until
!                     they are deleted by ARCCLEAN.
!   .DATASET_FILTER - <DATASET_NAME>.* The filter used to find
!                     the dataset. <DATASET_NAME> will be replaced
!                     by the actual OSF dataset name at run time.
!
! For SM 97 ASN data, the following additional keywords must be specified.
!   .ASN_INGEST_DIR - The directory where archive picks up the ASN table
!
!---------------------------------------------------------------------------
MSC.AREQ_DIR = PDR_AREQ_DIR
MSC.LOG_DIR = PDR_LOG_DIR
MSC.DATASET_DIR = PDR_MSC_DIR
MSC.DATA_TYPE = MSC
MSC.TRACK_EXT = N
MSC.DATASET_FILTER = <DATASET_NAME>.pod
MSC.OSF_STATE = OSF_SUCCESS2
MTL.AREQ_DIR = PDR_AREQ_DIR
MTL.LOG_DIR = PDR_LOG_DIR
MTL.DATASET_DIR = PDR_MTL_DIR
MTL.DATA_TYPE = MTL
MTL.TRACK_EXT = N
MTL.DATASET_FILTER = <DATASET_NAME>.*
MTL.OSF_STATE = OSF_SUCCESS2
ORB.AREQ_DIR = PDR_AREQ_DIR
ORB.LOG_DIR = PDR_LOG_DIR
ORB.DATASET_DIR = PDR_ORB_DIR
ORB.DATA_TYPE = ORB
ORB.TRACK_EXT = N
ORB.DATASET_FILTER = <DATASET_NAME>.*
ORB.OSF_STATE = OSF_SUCCESS2
SMS.AREQ_DIR = PDR_AREQ_DIR
SMS.LOG_DIR = PDR_LOG_DIR
SMS.DATASET_DIR = PDR_SMS_DIR
SMS.DATA_TYPE = SMS
SMS.TRACK_EXT = N
SMS.DATASET_FILTER = <DATASET_NAME>.*
SMS.OSF_STATE = OSF_SUCCESS2
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
Input Triggers: PDRRSP is triggered by the PDRREQ process

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  _  _  _  _  _  c  w  _  _    mtl, sms

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  c  _  _  _  _  _  c  w  _  _    orb

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  c  c  c  c  c  c  w  _  _    msc

Output Triggers: PDRRSP triggers the MSCMOV process for 'msc' class data and the PDRCLN process for 'mtl', 'sms', and 'orb' class data. MSCMOV and PDRCLN share the (CL) stage of the pipeline.

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  _  _  _  _  _  c  c  w  _    mtl, sms

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  c  _  _  _  _  _  c  c  w  _    orb

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  c  c  c  c  c  c  c  w  _    msc
INPUT: N/A

OUTPUT:

    PDR_ARSP_DIR: contains responses to the requests to archive PASS related POD files and product files
        YYYYMMDD_HHMMSS_iymdhhmmx_zzz.arsp - responses to requests to archive PASS related files

    WHERE:
        YYYY = year of archive request file generation
        MM   = month of archive request file generation
        DD   = day of archive request file generation
        HH   = hour of archive request file generation
        MM   = minute of archive request file generation
        SS   = seconds of archive request file generation
        i    = file type identifier (u=PASS Mission Schedule file, v=Mission Timeline Report file, y=Science Mission Schedule file, p=Orbital Ephemeris file)
        y    = 1 digit year (J=1999, k=2000, l=2001, m=2002 ...)
        m    = 1 digit month (1 thru c, where 1=January and a=October)
        d    = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
        hh   = 2 digit hour (00 thru 23)
        mm   = 2 digit minute (00 thru 59)
        x    = file sequence id (0-9,a-z); for Orbital Ephemeris files the value is always 'r'
        zzz  = file type identifier (msc=PASS Mission Schedule file, mtl=Mission Timeline Report file, sms=Science Mission Schedule file, orb=Orbital Ephemeris file)

    EXAMPLES:
        20021212_082578_pm590825r_orb.arsp - archive response file for ephemeris products
        20021212_082578_um5908255_msc.arsp - archive response for PASS mission schedule
        20021212_082578_ym5908256_sms.arsp - archive response file for Science mission schedule
        20021212_082578_vm5908257_mtl.arsp - archive response file for mission timeline

    PDR_LOG_DIR: contains a log file written to by the PDRREQ and PDRRSP processes to indicate the disposition of an archive request and its corresponding response
        iymdhhmmx_zzz.log - disposition of PASS related files archive request/response

    WHERE:
        i   = file type identifier (u=PASS Mission Schedule file, v=Mission Timeline Report file, y=Science Mission Schedule file, p=Orbital Ephemeris file)
        y   = 1 digit year (J=1999, k=2000, l=2001, m=2002 ...)
        m   = 1 digit month (1 thru c, where 1=January and a=October)
        d   = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
        hh  = 2 digit hour (00 thru 23)
        mm  = 2 digit minute (00 thru 59)
        x   = file sequence id (0-9,a-z); for Orbital Ephemeris files the value is always 'r'
        zzz = file type identifier (msc=PASS Mission Schedule file, mtl=Mission Timeline Report file, sms=Science Mission Schedule file, orb=Orbital Ephemeris file)

    EXAMPLES:
        pm590825r_orb.log - archive log file for ephemeris products
        um5908250_msc.log - archive log for PASS mission schedule
        ym5908251_sms.log - archive log file for Science mission schedule
        vm5908252_mtl.log - archive log file for mission timeline

    updates to database relations
Pipeline Mode:
    ingrsp -p opus_definitions_dir:your.path -r pdrrsp (in task line of resource file)
where:
    -p = denotes path file specification follows
    -r = denotes resource file for the PDRRSP Process
    opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
!
! pdrrsp RESOURCE FILE
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! MOD             PR
! LEVEL DATE     NUMBER User   Description
! ----- -------- ------ ------ -------------------------------------
! 000   10/24/01 44684  Goldst Created initial version
! 001   01/30/02 45016  Goldst Corrected OK_TO_UPDATE_DATABASE
!---------------------------------------------------------------------------
PROCESS_NAME = pdrrsp
TASK = <ingrsp -p $PATH_FILE -r pdrrsp>
DESCRIPTION = 'Process archive response'
SYSTEM = PDR
CLASS = all
DISPLAY_ORDER = 1
!---------------------------------------------------------------------------
! EVNT resource.
!---------------------------------------------------------------------------
POLLING_TIME = 30              ! Response time of the application.
TIME_RANK = 1                  ! Time event ordering.
START_TIME = 1970.001:00:00:00 ! The base reference time
DELTA_TIME = 000:00:00:30      ! The time interval to check for presence of
                               ! archive response.
OK_TO_UPDATE_DATABASE = OK_TO_UPDATE_DATABASE ! Determined by path file
!---------------------------------------------------------------------------
! Application Specific resource
!---------------------------------------------------------------------------
MAX_ERROR = 10         ! Maximum number of unexpected errors before
                       ! INGRSP quits
OSF_WAITING.RS = w     ! OSF is waiting to be processed.
OSF_PROCESSING.RS = p  ! INGRSP is processing the response file.
OSF_SUCCESS.RS = c     ! ARCRSP processed the OSF successfully.
OSF_SUCCESS.CL = w     ! ARCRSP successfully cleaned off data.
OSF_FAIL.RS = f        ! INGRSP fails to process the response file.
OSF_ERROR.RS = e       ! archive response status not equal to "OK"
OSF_CORRUPT.RS = z     ! archive response is corrupt
! The following keywords specify the extension to be added to the response
! file extension.
RSP_FAIL = _FAIL       ! When ARCRSP fails to process the response
RSP_ERROR = _ERROR     ! When the response's STATUS value is not OK.
RSP_CORRUPT = _CORRUPT ! When the response is corrupted.
                       ! Missing keywords etc.
RSP_FDUPLICATE = _FDUP ! When the duplicate response has been
                       ! processed with fail status.
RSP_CDUPLICATE = _CDUP ! When the duplicated response has been
                       ! successfully processed previously.
! Archive class and OSF map to be used.
ARCHIVE_MAP = OPUS_DEFINITIONS_DIR:archclass_osf.map
!----------------------------------------------------------------------------
! To specify an archive group add two resources, the .ARSP_DIR directory and
! .LOG_DIR directory. Entries are only necessary if you want to use values
! other than the defaults listed here:
!
! For SM 97 ASN data, the following additional keyword must be specified.
!   .OMS_DIR = The directory where OPUS will be the table for OMS.
!
DEFAULT.ARSP_DIR = PDR_ARSP_DIR
DEFAULT.LOG_DIR = PDR_LOG_DIR
MSC.ARSP_DIR = PDR_ARSP_DIR
MSC.LOG_DIR = PDR_LOG_DIR
MTL.ARSP_DIR = PDR_ARSP_DIR
MTL.LOG_DIR = PDR_LOG_DIR
SMS.ARSP_DIR = PDR_ARSP_DIR
SMS.LOG_DIR = PDR_LOG_DIR
ORB.ARSP_DIR = PDR_ARSP_DIR
ORB.LOG_DIR = PDR_LOG_DIR
! forces values from path to be used
ENV.OPUS_DB = OPUS_DB
ENV.DSQUERY = DSQUERY
Input Trigger: MSCMOV is triggered by the PDRRSP process

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  c  c  c  c  c  c  c  w  _    msc

Output Triggers: MSCMOV puts a 'c' in the PDRDEL (DL) stage. For other EDPS pipelines, a 'c' in the (DL) stage triggers the process residing in that stage to delete no longer needed OSFs. In the PDR pipeline, however, the process (PDRDEL) residing in the (DL) stage is actually triggered by a 'd' in the (DL) stage. The insertion of a 'd' in the (DL) stage is not done automatically; a manual insertion by operations personnel is required.

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  c  c  c  c  c  c  c  c  c    msc
INPUT:
    PDR_MSC_DIR: u<Ymdhhmmx>.pod - PASS Mission Schedule pod files to be moved
OUTPUT:
    PDR_HLD_DIR: u<Ymdhhmmx>.pod - PASS Mission Schedule pod files moved here from the PDR_MSC_DIR directory

    WHERE:
        Y  = 1 digit year (J=1999, k=2000, l=2001, m=2002 ...)
        m  = 1 digit month (1 thru c, where 1=January and a=October)
        d  = 1 digit day (1 thru v, where 1=1, a=10 and v=31)
        hh = 2 digit hour (00 thru 23)
        mm = 2 digit minute (00 thru 59)
        x  = file sequence id (0-9,a-z)

    EXAMPLE for year 2002, month 5, day 9, hour 0800, minute 25: um5908250.pod
Pipeline Mode:
    xpoll -p opus_definitions_dir:your.path -r mscmov (in task line of resource file)
where:
    -p = denotes path file specification follows
    -r = denotes resource file for the MSCMOV Process
    opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
!
! mscmov.resource
!
! External poller using xpoll
!
! This file is used to construct the trigger, error, and success
! status fields in the observation status file.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
! MOD             PR
! LEVEL DATE     NUMBER User   Description
! ----- -------- ------ ------ -------------------------------------
! 000   10/11/02 46773  J.Baum Created initial version
!--------------------------------------------------------------------
PROCESS_NAME = mscmov
TASK = <xpoll -p $PATH_FILE -r mscmov>
DESCRIPTION = 'Moves MSC file'
COMMAND = move_file.csh
SYSTEM = PDR
CLASS = msc
OSF_RANK = 1               ! First Trigger
OSF_TRIGGER1.CL = w        ! Trigger
OSF_TRIGGER1.DATA_ID = msc ! Trigger class ID
OSF_PROCESSING.CL = p      ! Processing
OSF_SUCCESS.CL = c         ! Completion
OSF_SUCCESS.DL = c         ! Completion
OSF_FAILURE.CL = f         ! Failure setting
XPOLL_ERROR.CL = x         ! Undefined exit status
ENV.INPATH = PDR_MSC_DIR
ENV.EXTENSION = .pod
ENV.MOVE_PATH = PDR_HLD_DIR
POLLING_TIME = 10          ! Wait (seconds) before polling for next
XPOLL_STATE.00 = OSF_SUCCESS
XPOLL_STATE.01 = OSF_FAILURE
Input Trigger: PDRCLN is triggered by the PDRRSP Process

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  _  _  _  _  _  c  c  w  _    mtl, sms

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  c  _  _  _  _  _  c  c  w  _    orb

Output Trigger: PDRCLN puts a 'c' in the PDRDEL (DL) stage. For other EDPS pipelines, a 'c' in the (DL) stage triggers the process residing in that stage to delete no longer needed OSFs. In the PDR pipeline, however, the process (PDRDEL) residing in the (DL) stage is actually triggered by a 'd' in the (DL) stage. The insertion of a 'd' in the (DL) stage is not done automatically; a manual insertion by operations personnel is required.

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  _  _  _  _  _  _  c  c  c  c    mtl, sms

 DR EP RP UP MS CT NS RQ RS CL DL   Class
 -- -- -- -- -- -- -- -- -- -- --   -----
 c  c  _  _  _  _  _  c  c  c  c    orb
INPUT:
    PDR_MTL_DIR
        v<Ymdhhmmx>.pod - Mission Timeline Report pod file to be deleted
    PDR_SMS_DIR
        y<Ymdhhmmx>.pod - Science Mission Schedule pod file to be deleted
    PDR_ORB_DIR
        p<Ymdhhmmx>.pod - ORB related pod file to be deleted
        p<Ymdhhmmx>.fit - ORB FITS product file to be deleted
        p<Ymdhhmmx>.asc - ORB ASCII product file to be deleted
OUTPUT: N/A
Interactive Mode: Again, the interactive mode is probably only useful for testing. It does not require the existence of an OSF, but the OSF status values must be supplied by the user on the command line:

    cleandata -p path.path -r process -d rootname -i dataid -o status

where: process is a resource file that contains the optional keywords CLASS_GROUPING.nn, where nn starts at 01. If CLASS_GROUPING is absent, then the process CLASS keyword is used. To use all classes, set CLASS_GROUPING.01 to '*'. The status should have a 'c' in every stage that has an OUTPATH that is to be tested.
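The CLASS_GROUPING lookup described above can be sketched as follows (resolve_classes is an illustrative helper, not part of the cleandata tool; the resource file is modeled as a plain dictionary):

```python
# Illustrative sketch of the CLASS_GROUPING.nn resolution rule described
# above: numbered groupings win, otherwise fall back to the CLASS keyword,
# and '*' means all classes. Not actual cleandata code.
def resolve_classes(resource, default_class):
    """Return the list of data classes a cleandata run should consider."""
    groups = [v for k, v in sorted(resource.items())
              if k.startswith("CLASS_GROUPING.")]
    if not groups:
        return [default_class]  # no CLASS_GROUPING: use the CLASS keyword
    return groups               # may be ['*'], meaning all classes
```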
Pipeline Mode:
    cleandata -p opus_definitions_dir:your.path -r pdrcln (in task line of resource file)
where:
    -p = denotes path file specification follows
    -r = denotes resource file for the PDRCLN Process
    opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
!
! PDRCLN RESOURCE FILE
!
! This file is used to construct the trigger, error, and success status
! fields in the observation status file.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
!          PR
! DATE     NUMBER User   Description
! -------- ------ ------ -------------------------------------
! 07/01/99 39404  Heller UNIX version of resource file
! 01/14/00 39307  Heller Archive POD files into POD class
! 05/10/00 36852  MARose Add SPSS_DB
! 09/06/00 42386  Heller Delete FITS files for wf2
! 10/24/01 44684  Goldst Create OPUS PDR pipeline version
! 01/30/02 45016  Goldst Removed OK_TO_UPDATE_DATABASE
!                        Added SYSTEM and CLASS grouping keywords
! 03/13/02 45016  Goldst Removed SYSTEM_GROUPING, added TRIGGERn.DATA_ID
! 03/15/02 45016  Goldst Corrected DL column setting
! 10/11/02 46773  J.Baum Remove msc data_id - keep the msc pod file.
!---------------------------------------------------------------------------
PROCESS_NAME = pdrcln
TASK = <cleandata -p $PATH_FILE -r pdrcln>
DESCRIPTION = 'Clean pipeline directories'
SYSTEM = PDR
CLASS = all
DISPLAY_ORDER = 1
INTERNAL_POLLING_PROCESS = TRUE
OSF_RANK = 1               ! First Trigger
OSF_TRIGGER1.CL = w        ! Trigger
OSF_TRIGGER1.DATA_ID = mtl ! Trigger1 class ID
OSF_TRIGGER2.CL = w        ! Trigger
OSF_TRIGGER2.DATA_ID = sms ! Trigger2 class ID
OSF_TRIGGER3.CL = w        ! Trigger
OSF_TRIGGER3.DATA_ID = orb ! Trigger3 class ID
OSF_PROCESSING.CL = p      ! Processing
OSF_SUCCESS.CL = c         ! OSF completion
OSF_SUCCESS.DL = c         ! Sets DL column to c for OSF deletion
OSF_ERROR.CL = f           ! Failure setting
POLLING_TIME = 5           ! Wait (seconds) before polling for next
CLASS_GROUPING.01 = mtl
CLASS_GROUPING.02 = sms
CLASS_GROUPING.03 = orb
OSF_TRIGGER1.DL = d          ! value of (DL) stage that triggers PDRDEL
OSF_TRIGGER1.DATA_ID = msc   ! class of OSF to delete
OSF_TRIGGER2.DL = d          ! value of (DL) stage that triggers PDRDEL
OSF_TRIGGER2.DATA_ID = mtl   ! class of OSF to delete
OSF_TRIGGER3.DL = d          ! value of (DL) stage that triggers PDRDEL
OSF_TRIGGER3.DATA_ID = sms   ! class of OSF to delete
OSF_TRIGGER4.DL = d          ! value of (DL) stage that triggers PDRDEL
OSF_TRIGGER4.DATA_ID = orb   ! class of OSF to delete
OSF_TRIGGER5.DL = d          ! value of (DL) stage that triggers PDRDEL
OSF_TRIGGER5.DATA_ID = pas   ! class of OSF to delete
Input Triggers:

PDRDEL is triggered by the manual insertion by operations personnel of a 'd'
in the (DL) stage of the pipeline for each class of OSFs.

DR EP RP UP MS CT NS RQ RS CL DL  Class
-- -- -- -- -- -- -- -- -- -- --  -----
c  _  _  _  _  _  _  _  _  _  d   pas

DR EP RP UP MS CT NS RQ RS CL DL  Class
-- -- -- -- -- -- -- -- -- -- --  -----
c  _  c  c  c  c  c  c  c  c  d   msc

DR EP RP UP MS CT NS RQ RS CL DL  Class
-- -- -- -- -- -- -- -- -- -- --  -----
c  _  _  _  _  _  _  c  c  c  d   mtl, sms

DR EP RP UP MS CT NS RQ RS CL DL  Class
-- -- -- -- -- -- -- -- -- -- --  -----
c  c  _  _  _  _  _  c  c  c  d   orb

Output Trigger: N/A
INPUT:  N/A
OUTPUT: N/A
Pipeline Mode:

    osfdelete -p opus_definitions_dir:your.path -r pdrdel   (in task line of resource file)

    where:
        -p = denotes path file specification follows
        -r = denotes resource file for the PDRDEL Process
        opus_definitions_dir:your.path = path file to use
!--------------------------------------------------------------------
!
! pdrdel.resource
!
! Purpose: This file is used to construct the trigger, error, and
!          success status fields in the observation status file.
!
! This resource file uses an OSF trigger.
!
!--------------------------------------------------------------------
! REVISION HISTORY
!--------------------------------------------------------------------
!
! MOD             PR
! LEVEL DATE      NUMBER User   Description
! ----- --------  ------ ------ -------------------------------------
! 000   02/10/01  42443  Heller first version
! 001   10/24/01  44684  Goldst Created OPUS PDR pipeline version
! 002   03/12/02  45016  Goldst Added TRIGGERn and TRIGGERn.DATA_ID
! 003   05/20/02  45016  Goldst Added pas class TRIGGER and DATA_ID
! 004   07/11/02  46101  J.Baum Changed trigger to DL status 'd'.
!--------------------------------------------------------------------
PROCESS_NAME = pdrdel
TASK = <osfdelete -p $PATH_FILE -r pdrdel>
DESCRIPTION = 'Delete OSFs from the BB'
SYSTEM = PDR
CLASS = all
DISPLAY_ORDER = 1
!---------------------------------------------------------------------------
! EVNT resource.
!---------------------------------------------------------------------------
OSF_RANK = 1                 ! OSF event ordering.
OSF_TRIGGER1.DL = d          ! Manually set to trigger OSF deletion
OSF_TRIGGER1.DATA_ID = msc   ! Trigger1 class ID
OSF_TRIGGER2.DL = d          ! Manually set to trigger OSF deletion
OSF_TRIGGER2.DATA_ID = mtl   ! Trigger2 class ID
OSF_TRIGGER3.DL = d          ! Manually set to trigger OSF deletion
OSF_TRIGGER3.DATA_ID = sms   ! Trigger3 class ID
OSF_TRIGGER4.DL = d          ! Manually set to trigger OSF deletion
OSF_TRIGGER4.DATA_ID = orb   ! Trigger4 class ID
OSF_TRIGGER5.DL = d          ! Manually set to trigger OSF deletion
OSF_TRIGGER5.DATA_ID = pas   ! Trigger5 class ID
POLLING_TIME = 5             ! Response time of the application
!---------------------------------------------------------------------------
! Application Specific resource
!---------------------------------------------------------------------------
OSF_PROCESSING.DL = p        ! letter to be used when an OSF is processed.
OSF_ERROR.DL = e             ! letter to be used when there is an error.
This relation is used to track the receipt and processing of the various
product files received from the PASS system. The PDR pipeline MSCCPY, MTLCPY,
SMSCPY, ORBCPY, PDRORB, REPLAN, UPDATR, MSCXTR, CONTRL, and NICSAA processes
use this relation.

Field name    type/size  description
----------    ---------  -----------
dataset_name  C23        file name of product file
archclass     C3         classification used to archive the data
archdate      C20        latest file date associated with the dataset
window_start  C13        Corrected spacecraft time for the first minor frame
                         in the file (UTC rounded to the nearest second, in
                         the format YYYYDDDHHMMSS)
window_stop   C13        Corrected spacecraft time for the last minor frame
                         in the file (UTC rounded to the nearest second, in
                         the format YYYYDDDHHMMSS)
tm_generated  C13        time file was generated at PASS
pdb_version   C8         PDB tape ID number
environment   C8         AEDP environment tape name
replan_time   C13        Replan start time (UTC rounded to the nearest
                         second, in the format YYYYDDDHHMMSS)
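The window_start, window_stop, and replan_time fields use a compact UTC day-of-year format, YYYYDDDHHMMSS. A minimal sketch of decoding such a value with Python's standard datetime module (the parse_pass_time helper and sample value are illustrative, not OPUS code):

```python
from datetime import datetime

def parse_pass_time(value):
    """Decode a 13-character YYYYDDDHHMMSS timestamp (UTC, day-of-year),
    as used by fields such as window_start and window_stop."""
    return datetime.strptime(value, "%Y%j%H%M%S")

# 2002, day-of-year 286 (13 October), 14:30:00 UTC
t = parse_pass_time("2002286143000")
```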
This relation specifies the source of keywords to be written to the FITS file
produced by the PDRORB process. The source may be specified in three ways:
from the PDB (Project Database), from the PMDB (Proposal Management
Database), or as a difference calculation.

In the first case (PDB) the source of the keyword will be the name of a
mnemonic as specified in the EUDL.DAT file of the PDB. The OPUS software will
look up the location of that mnemonic in the yurintab relation. If it is
desired to convert the value of the mnemonic to a string (discrete
conversion) or to engineering units (linear or polynomial conversion), then
the second field (subsource) specifies the 8 character mnemonic of a
conversion. See the descriptions of conv_discrete, conv_linear, and
conv_polynomial.

In the second case (PMDB) the source field is the name of a database relation
in the PMDB. The subsource specifies the name of the field in that relation.
Only relations which can be joined with the relation qolink on the basis of
program_id, obset_id, and ob_number can be used. There is a special case of
deep relations like qesiparm. In this case there is only a field for
si_par_name and si_par_value, both of which are string fields. To obtain a
value from such a deep relation, specify the name of the parameter in a form
(usually uppercase) that will match the value in the database.

Finally, for keywords which are simply differences between two already
dredged keywords, only the names of the subtrahend and minuend mnemonics are
required for the source and subsource fields.

Field name  type/size  description
----------  ---------  -----------
instrument  C3         instrument to which the keyword is associated: hsp,
                       wfc, foc, fos, hrs, wfII, fgs, acs, nic, sti
keyword     C8         name of keyword
sourcetype  C8         which kind of source applies (PDB, PMDB, DELTA)
source      C30        source name: mnemonic, relation, or subtrahend
subsource   C30        conversion mnemonic, fieldname or parameter name, or
                       minuend mnemonic
This relation provides keywords to be written to the FITS file produced by
the PDRORB process.

Field name   type/size  description
----------   ---------  -----------
instrument   C3         instrument to which the keyword is associated: hsp,
                        wfc, foc, fos, hrs, wfII, fgs, acs, nic, sti
file_type    C3         generic file type. Values are: shp (standard header
                        packet), udl (unique data log); dsk (digital sky),
                        dst (digital star), ask (analog sky), ast (analog
                        star), asd (area scan/digital), asa (area
                        scan/analog); ext (extracted data), sci (science
                        data); shl (science header line), stl (science
                        trailer line); img (image)
order_index  I4         an index defining the keyword order in the relation;
                        it is used to order the appearance of keywords in the
                        header file to which it will be written
fixed_index  I4         this item should be thought of as a map of a fixed
                        index for a keyword into a value in the sequence
                        1,2,...,n. The fixed index is used to indicate a
                        particular keyword
keyword_str  C8         keyword character string
keyword_typ  C3         data type of the keyword value
keyword_val  C20        keyword value
comment_str  C72        comment character string
cnv_flag     C1         flag stating whether to automatically convert the
                        keyword
optional     C1         indicates whether this keyword is an optional one. If
                        an optional keyword has a blank value (restricted to
                        strings), then that keyword will be omitted from the
                        header
This relation provides information on the groupings and ordering of keywords
to be written to the FITS file produced by the PDRORB process.

Field name   type/size  description
----------   ---------  -----------
instrument   C3         instrument to which the keyword is associated: hsp,
                        wfc, foc, fos, hrs, wfII, fgs, acs, nic, sti
header_type  C36        The name of the FITS header, e.g. NIC_SPT_PRIMARY
ftype_order  I2         The order in which to put the following section
cgg1_instr   C3         The cgg1_keyword relation instrument name. This can
                        be the same as 'instrument' above, or another name
                        such as SSY or GEN.
cgg1_ftype   C3         The cgg1_keyword relation 'file_type' specification
                        for this section of keywords
This relation, used by the REPLAN process, provides information about the generation of the Science Mission Schedule (SMS). It includes information on first generating a logical SMS (LSMS), precedence checking the LSMS, formatting the LSMS into an SMS, transferring the SMS to the PASS System, and sending the PASS products to the OPUS EDPS System. Descriptions of each field in the sms_catalog relation are available to provide detailed information.
This relation, used by the REPLAN, UPDATR, CONTRL, and NICSAA processes,
provides information about the linking of exposures to observations.
Descriptions of each field in the qolink relation are available to provide
detailed information.
This relation is used to provide DADS and OPUS with the SMS_ID for any
observation in the Mission Schedule. It can be used to list all observations
and association products for a particular SMS_ID. It contains a status field
to identify observation or product archive success or unavailability. The
REPLAN, UPDATR, CONTRL, and PDRRSP processes use this relation.

Field name    type/size  description
----------    ---------  -----------
program_id    C3         When a proposal is accepted into the PMDB by SPSS it
                         must be assigned a unique 3 character base 36
                         program identifier. It is used for identification of
                         proposals by spacecraft and OPUS software. It is
                         also used in the OPUS and DADS rootname for all
                         archived science data files.
obset_id      C2         An observation set is a collection of one or more
                         alignments that are grouped together based on FGS
                         pointing requirements. That is, if multiple
                         alignments can all be executed using the same guide
                         star pair, they are grouped into the same
                         observation set.
ob_number     C3         Observations are numbered sequentially throughout an
                         observation set. An ob_number is _NOT_ the same as
                         an obset_id. The third character is only used for
                         association products.
sms_id        C9         The C&C list is a major data structure within SPSS
                         that contains information on 'candidates' and the
                         'calendar'. The candidate data, often referred to as
                         the candidate pool, are scheduling units and their
                         associated observation set, alignment, and target
                         data. The calendar is a timeline of activities that
                         are laid down by the SPSS scheduling utilities. When
                         a C&C list is saved in SPSS it receives an
                         identifier (sms_id).
status        C1         The condition or availability for archiving is
                         indicated by a status code with the following
                         definitions:
                         U - Unexecuted - the initial condition for exposures
                             or products that will be archived independently
                             of associations
                         M - Member - the initial condition for exposures
                             that are only archived within an association
                             product
                         N - Not available - set by operations to indicate
                             exposures or products that cannot be generated
                         E - Executed and archived successfully
inst          C1         One character identifier for the instrument used for
                         the science observation. The relation between this
                         value and the names in qobservation.si_id is:
                         1 = 1     2 = 2     3 = 3
                         J = ACS   N = NIC   O = STIS
                         U = WFII  V = HSP   W = WFPC
                         X = FOC   Y = FOS   Z = HRS
tag           C1         Flags observation as a target acquisition.
                         Y - The qobservation.target_acqmode field is 01 or
                             02. This observation is a target acquisition.
ocx_expected  C1         Flags this observation as a real time (mode 1)
                         target acquisition.
                         Y - The qobservation.target_acqmode field is 01.
                             This observation is a mode 1 (real time) target
                             acquisition image.
pdq_created   C1         Indicates if the OPUS pipeline has created a PDQ
                         file for this observation. The value of this column
                         is an alphanumeric code that indicates the status,
                         real or inferred, of the PDQ file for the
                         observation.
oms_archived  C1         Indicates if the OMS pipeline has successfully
                         archived an observation log for this observation.
                         X - The data assessment software has validated the
                             existence of the OMS observation log for this
                             observation in DADS.
rti_checked   C1         Indicates if OPUS staff has checked for the
                         existence of real time information for this
                         observation.
                         X - OPUS staff has run the RTI_CHECK software for
                             this observation.
ocx_appended  C1         Indicates if an OCX file has been appended to the
                         PDQ file. The value of this field is an alphanumeric
                         code that indicates that the data assessment
                         software has located an OCX file (and the type of
                         file) for this observation and has appended it to
                         the PDQ file.
assessed      C1         Indicates if this observation has been assessed for
                         procedural quality. The value of this column is an
                         alphanumeric code that summarizes the essentials of
                         the assessment process.
dq_archived   C1         Indicates the archive status of the procedural data
                         quality file(s). The value of this column is an
                         alphanumeric code that indicates the archive status
                         of the PDQ (and optionally the OCX) file of this
                         observation.
start_time    C17        This field (format: yyyy.ddd:hh:mm:ss) contains
                         either the planned or actual start time of the
                         observation. The time_type field indicates the
                         source of the time.
end_time      C17        This field (format: yyyy.ddd:hh:mm:ss) contains
                         either the planned or actual end time of the
                         observation. The time_type field indicates the
                         source of the time.
time_type     C1         (P/A) When this field is P for planned, start_time
                         and end_time are generated from SPSS planning data
                         and the accuracy is questionable. When this field is
                         A for actual, start_time and end_time have been
                         updated by the OPUS pipeline using science data.
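The one-character inst codes above can be mapped back to the qobservation.si_id names with a simple lookup table. The following sketch merely restates the mapping from the field description; the instrument_name helper is illustrative, not OPUS code:

```python
# qolink_sms.inst character -> qobservation.si_id name, per the table above
INST_CODES = {
    "1": "1", "2": "2", "3": "3",        # Fine Guidance Sensors
    "J": "ACS", "N": "NIC", "O": "STIS",
    "U": "WFII", "V": "HSP", "W": "WFPC",
    "X": "FOC", "Y": "FOS", "Z": "HRS",
}

def instrument_name(inst_char):
    """Return the si_id name for an inst code, or 'unknown'."""
    return INST_CODES.get(inst_char.upper(), "unknown")
```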
This relation is used to replace the executed_flg in the qobservation
relation, since qobservation is replicated from SPSS and cannot be updated by
the EDPS. Some additional data is also supplied, but only the executed_flg is
updated by EDPS software. The additional fields can be used to distinguish
dumps from science observations. The REPLAN and UPDATR processes use this
relation.

Field name    type/size  description
----------    ---------  -----------
program_id    C3         When a proposal is accepted into the PMDB by SPSS it
                         must be assigned a unique 3 character base 36
                         program identifier. It is used for identification of
                         proposals by spacecraft and OPUS software. It is
                         also used in the OPUS and DADS rootname for all
                         archived science data files.
obset_id      C2         An observation set is a collection of one or more
                         alignments that are grouped together based on FGS
                         pointing requirements. That is, if multiple
                         alignments can all be executed using the same guide
                         star pair, they are grouped into the same
                         observation set.
ob_number     C3         Observations are numbered sequentially throughout an
                         observation set. An ob_number is _NOT_ the same as
                         an obset_id. The third character is only used for
                         association products.
proposal_id   C5         A proposal consists of many individual observations
                         submitted as a package by a proposer. When a
                         proposal is processed by the proposal entry system
                         (RPS2), it is assigned a proposal identifier. That
                         identifier is an integer that is converted into a 5
                         character base 36 string.
si_id         C4         This is the identifier type for a Science Instrument
                         (SI). The SI list includes the following:
                         FOC  - Faint Object Camera
                         FOS  - Faint Object Spectrograph
                         WFPC - Wide Field Planetary Camera 1
                         WFII - Wide Field Planetary Camera 2
                         WF3  - Wide Field Camera 3
                         HRS  - High Resolution Spectrograph
                         CSTR - COSTAR
                         1    - Fine Guidance Sensor 1
                         2    - Fine Guidance Sensor 2
                         3    - Fine Guidance Sensor 3
                         NIC  - NICMOS
                         STIS - Space Telescope Imaging Spectrograph
                         ACS  - Advanced Camera for Surveys
                         COS  - Cosmic Origins Spectrograph
control_id    C5         This is information on how OPUS is supposed to
                         process the data. The data is stored in five bytes:
                         H/Y/N   : Calibration data flag
                         F/P     : Output product type (film/plot) - unused
                         2 bytes : Output format spec - unused
                         Y/N     : Output product holding tank flag
coord_id      C10        This is the SI aperture and coordinate system
                         identifier; it specifies the aperture and coordinate
                         system of the instrument to be used for the
                         observation of the target. It is the aperture
                         identifier concatenated with the aperture coordinate
                         system identifier. They specify the default location
                         of the target within the aperture.
executed_flg  C1         When the record is created this value is blank. When
                         the observation has been executed on board HST, OPUS
                         receives a science POD file or EDPS generates an
                         astrometry file from engineering data, and this
                         field is updated to the Type (ninth) character of
                         the dataset rootname.
This relation, used by the REPLAN, UPDATR, and NICSAA processes, provides information about the observations contained in Science Mission Schedules. Descriptions of each field in the qobservation relation are available to provide detailed information.
This relation is used to link dataset rootnames to programmatic IDs with keys
to facilitate joins to other relations. The REPLAN and CONTRL processes use
this relation.

Field name        type/size  description
----------        ---------  -----------
dataset_rootname  C9         IPPPSSOOT for jitter or astrometry datasets
dataset_type      C3         Either FGS or AST, for FGS obslogs or
                             astrometry, respectively
program_id        C3         When a proposal is accepted into the PMDB by
                             SPSS it must be assigned a unique 3 character
                             base 36 program identifier. It is used for
                             identification of proposals by spacecraft and
                             OPUS software. It is also used in the OPUS and
                             DADS rootname for all archived science data
                             files.
obset_id          C2         An observation set is a collection of one or
                             more alignments that are grouped together based
                             on FGS pointing requirements. That is, if
                             multiple alignments can all be executed using
                             the same guide star pair, they are grouped into
                             the same observation set.
ob_number         C3         Observations are numbered sequentially
                             throughout an observation set. An ob_number is
                             _NOT_ the same as an obset_id. The third
                             character is only used for association products.
This relation is used to identify the FOF engineering telemetry files that
need to be converted to provide intermediary telemetry files containing
telemetry parameters needed for FGS, GSA, or AST product generation. It is
used to simplify creation and collection of telemetry for processing. When a
product has no eng_ready = N flags, the OSF for the product can be created
using the product rootname. The REPLAN and CONTRL processes use this
relation.

Field name        type/size  description
----------        ---------  -----------
product_rootname  C14        IPPPSSOOT for jitter or astrometry products;
                             GYYYYDDDHHMMSS for GS acquisition data
product_type      C3         FGS for jitter, AST for astrometry, or GSA for
                             GS acquisition data
eng_rootname      C12        TYYYYDDDHHMM, rootname of the ENG telemetry file
eng_ready         C1         (Y/N) 'N' indicates the telemetry file
                             eng_rootname is not yet recognized as ready for
                             this product_type. 'Y' indicates that the
                             processing for this product type has recognized
                             the presence of eng_rootname. Having separate
                             control for each product type simplifies
                             initiation of processing.
This relation is used to provide attributes that apply to entire
associations. An association is a set of exposures that will be merged into
products by the OPUS pipeline. The full association consists of a list of
exposures and products. This is one of the relations used to define
associations for OPUS; asn_members and asn_product_link are the others. The
REPLAN and UPDATR processes use this relation.

Field name      type/size  description
----------      ---------  -----------
association_id  C9         This field identifies an OPUS association. An
                           association is a set of exposures that will be
                           merged into products by OPUS pipeline calibration
                           processing. The full association consists of a
                           list of exposures and products. This field
                           completely identifies an association. This is the
                           OPUS value used for the keyword ASN_ID. It has the
                           following format: IPPPSSAAa where:
                           I   = instrument code (e.g., N for NICMOS, O for
                                 STIS)
                           PPP = program_id
                           SS  = obset_id of the first associated exposure
                           AAa = the two-character sequence (AA = 01,02,...)
                                 that is unique within the obset SS, plus the
                                 product id (a) that is always 0 for
                                 associations and primary products
si_name         C4         This is the identifier type for a Science
                           Instrument (SI). The SI list includes:
                           FOC  - Faint Object Camera
                           FOS  - Faint Object Spectrograph
                           WFII - Wide Field Planetary Camera 2
                           HRS  - High Resolution Spectrograph
                           CSTR - COSTAR
                           FGS  - Fine Guidance Sensor
                           NIC  - NICMOS
                           STIS - Space Telescope Imaging Spectrograph
                           ACS  - Advanced Camera for Surveys
                           COS  - Cosmic Origins Spectrograph
                           WF3  - Wide Field Camera 3
last_exp_date   C17        This field contains the latest predicted time of
                           the exposure members of an association. It uses
                           the standard SOGS time format yyyy.ddd:hh:mm:ss
                           where: yyyy = year, ddd = day of year, hh = hours,
                           mm = minutes, ss = seconds.
collect_date    C17        This field contains the date of the association
                           file created by OPUS. It uses the standard SOGS
                           time format yyyy.ddd:hh:mm:ss where: yyyy = year,
                           ddd = day of year, hh = hours, mm = minutes,
                           ss = seconds.
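The IPPPSSAAa layout of an association_id can be decomposed positionally. A minimal sketch under that format description (the split_association_id helper and the sample id are hypothetical, for illustration only):

```python
def split_association_id(asn_id):
    """Split a 9-character association_id (IPPPSSAAa) into its parts,
    following the format described above. Illustrative helper only."""
    if len(asn_id) != 9:
        raise ValueError("association_id must be 9 characters")
    return {
        "instrument": asn_id[0],    # e.g. N for NICMOS, O for STIS
        "program_id": asn_id[1:4],
        "obset_id":   asn_id[4:6],
        "asn_number": asn_id[6:8],  # unique within the obset
        "product_id": asn_id[8],    # '0' for the association itself
    }

# hypothetical NICMOS association id, used only as an example
parts = split_association_id("N4QH01010")
```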
This relation is used for all members (exposure and product) that form an
OPUS association. This is one of the relations used to describe OPUS
associations; asn_association and asn_product_link are the others. An
association is a set of exposures that will be merged into products by the
OPUS pipeline. The full association consists of a list of exposures and
products. An association product is a dataset, distinct from any exposure
dataset, that is generated by the pipeline. Exposures are associated in order
to generate products. For an exposure, the member_num is two characters and
is the same as ob_number. For a product, the member_num is the combination of
the association number within the obset and the product_id. Prior to the
collection the member_status for exposures is 'U'; afterwards it is either
'C' or 'O'. The value for products is always 'P'. Prior to the collection the
product_status for products is 'U'; afterwards it is either 'C' or 'N'. The
value for exposures is always 'E'. The REPLAN and UPDATR processes use this
relation.

Field name      type/size  description
----------      ---------  -----------
association_id  C9         This field identifies an OPUS association. An
                           association is a set of exposures that will be
                           merged into products by OPUS pipeline calibration
                           processing. The full association consists of a
                           list of exposures and products. STIS can only have
                           one product per association. NICMOS can have as
                           few as one and as many as nine products. ACS can
                           have a single product having a product_id of
                           either '0' (for a dither product) or '1' (for a
                           cr-split or repeat-obs product at a single
                           pointing). If there is more than one product then
                           there is always a dither product having the
                           product_id '0' and the other products use product
                           ids that range from '1' to 'I'. This field
                           completely identifies an association. It is set by
                           TRANS. It is used by DADS as the dataset name to
                           archive the association file. This is the OPUS
                           value used for the keyword ASN_ID. It has the
                           following format: IPPPSSAAa where:
                           I   = instrument code (e.g., N for NICMOS, O for
                                 STIS)
                           PPP = program_id
                           SS  = obset_id of the first associated exposure
                           AAa = the two-character sequence (AA = 01,02,...)
                                 that is unique within the obset SS, plus the
                                 product id (a) that is always 0 for
                                 associations and primary products
program_id      C3         When a proposal is accepted into the PMDB by SPSS
                           it must be assigned a unique 3 character base 36
                           program identifier. This is done by the
                           PMDB/ACCEPT_PROP command. This program identifier
                           is tagged as 'program_id' in most PMDB relations.
                           It is used for identification of proposals by
                           spacecraft and OPUS software. It is also used in
                           the OPUS and DADS rootname for all archived
                           science data files. Because of flight design
                           software, program_id must be three characters.
obset_id        C2         An observation set is a collection of one or more
                           alignments that are grouped together based on FGS
                           pointing requirements. That is, if multiple
                           alignments can all be executed using the same
                           guide star pair, they are grouped into the same
                           observation set. An observation set is identified
                           by a 2 character base 36 string. This field,
                           typically called 'obset_id', will often contribute
                           to the index on a relation together with a
                           proposal_id, version_num, and possibly other
                           fields. OBSET is an abbreviation for observation
                           set.
member_num      C3         For exposures, observations are numbered
                           sequentially throughout an observation set and are
                           assigned by SMS/Gen. For exposures the name is two
                           characters and it is the same as the observation
                           ob_number. For products it is the association
                           number (two characters) plus the product_id.
member_type     C12        This field describes the role of the member in the
                           association. If there are multiple products, then
                           the format of the exposure names correlates to the
                           product names by rules that depend on the SI. For
                           exposures this name must be the same as exp_type
                           in qeassociation.
member_status   C1         This field describes the status of a member of the
                           association. Valid values are:
                           U -- uncollected exposure
                           C -- collected exposure
                           O -- orphan exposure (not collected)
                           P -- product dataset
product_status  C1         This field describes the status of a product of
                           the association. Valid values are:
                           U -- uncollected product
                           C -- collected product
                           N -- not collected - missing product after
                                collection
                           E -- exposure (not a product)
                           X -- unknown (only valid for old records)
This relation is used for all products of an OPUS association. It identifies
the exposures contained in each product. It can also be accessed to find the
products associated with any exposure. The other relations used to describe
OPUS associations are asn_association and asn_members. The REPLAN, UPDATR,
and NICSAA processes use this relation.

Field name    type/size  description
----------    ---------  -----------
program_id    C3         When a proposal is accepted into the PMDB by SPSS it
                         must be assigned a unique 3 character base 36
                         program identifier. This is done by the
                         PMDB/ACCEPT_PROP command. This program identifier is
                         tagged as 'program_id' in most PMDB relations. It is
                         used for identification of proposals by spacecraft
                         and OPUS software. It is also used in the OPUS and
                         DADS rootname for all archived science data files.
                         Because of flight design software, program_id must
                         be three characters.
asn_obset_id  C2         Association observation set identifier. An
                         observation set is identified by a 2 character base
                         36 string. This field, typically called 'obset_id',
                         will often contribute to the index on a relation
                         together with a proposal_id, version_num, and
                         possibly other fields.
member_num    C3         Member number (in asn_members) of the product: it is
                         the association number (two characters) plus the
                         product_id.
obset_id      C2         Exposure observation set identifier. An observation
                         set is identified by a 2 character base 36 string.
                         This field, typically called 'obset_id', will often
                         contribute to the index on a relation together with
                         a proposal_id, version_num, and possibly other
                         fields. OBSET is an abbreviation for observation
                         set.
ob_number     C2         Observations are numbered sequentially throughout an
                         observation set and are assigned by SMS/Gen. An
                         ob_number is _NOT_ the same as an obset_id. This
                         field can be joined to member_num for exposure
                         records in asn_members.
This relation is used to identify all the jitter datasets that follow either
the SMS start event or the GS acquisition event, identified by event time.
The event_type specifies whether the event_start_time is for an SMS or a GS
acquisition. If the event is an acquisition, then the event_start_time will
exactly match the acq_start_time in the gsa_data table. This table is also
used to identify internal exposures that do not need engineering telemetry
for their defaulted jitter files. The table was designed to be easily joined
to qolink_sms to get exposure status so that jitter files are not generated
for exposures that have a status of N. The REPLAN and CONTRL processes use
this relation.

Field name        type/size  description
----------        ---------  -----------
event_start_time  C17        YYYY.DDD:HH:MM:SS, start time of the event. If
                             event_type is GSA then this time must match the
                             acq_start_time in the gsa_data table and it can
                             be used to join to that table.
event_type        C3         (SMS or GSA) SMS start or GS acquisition start
program_id        C3         When a proposal is accepted into the PMDB by
                             SPSS it must be assigned a unique 3 character
                             base 36 program identifier. This is done by the
                             PMDB/ACCEPT_PROP command. It is used for
                             identification of proposals by spacecraft and
                             OPUS software. It is also used in the OPUS and
                             DADS rootname for all archived science data
                             files. Because of flight design software,
                             program_id must be three characters.
obset_id          C2         An observation set is a collection of one or
                             more alignments that are grouped together based
                             on FGS pointing requirements. That is, if
                             multiple alignments can all be executed using
                             the same guide star pair, they are grouped into
                             the same observation set.
ob_number         C2         Observations are numbered sequentially
                             throughout an observation set and are assigned
                             by SMS/Gen. An ob_number is _NOT_ the same as an
                             obset_id.
internal_flag     C1         (Y or N) Y indicates that this is an internal
                             observation
This relation is used to identify the completion status of all FGS, AST, and
GSA products in order to control telemetry file cleanup. The records are
created with N for complete_flag. This relation is designed to be joined to
product_eng_map to determine when all the products, identified in
product_eng_map by product_type and product_rootname, have been marked
complete. For reprocessing, all the products to be reprocessed should have
the complete_flag interactively reset to N. For replans, any records for
datasets no longer present in qolink must be deleted. Only one record for
each product and type is allowed. The REPLAN and CONTRL processes use this
relation.

Field name        type/size  description
----------        ---------  -----------
product_rootname  C14        IPPPSSOOT for jitter or astrometry products;
                             GYYYYDDDHHMMSS for GS acquisition data
product_type      C3         FGS for jitter, AST for astrometry, or GSA for
                             GS acquisition data
complete_flag     C1         (Y/N) set to Y by cleanup software indicating
                             this dataset has been processed and no longer
                             prevents cleanup of telemetry files
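The join described above — a telemetry file may be cleaned only when every product that references it in product_eng_map is marked complete — can be sketched with in-memory rows. All names and sample rootnames below are illustrative, not OPUS code:

```python
def telemetry_cleanup_ok(eng_rootname, product_eng_map, product_complete):
    """Return True when every (product_type, product_rootname) that uses
    the given telemetry file is marked complete ('Y'), i.e. the file no
    longer blocks cleanup. Rows are modeled as plain dicts; a sketch of
    the product_eng_map/product_complete join, not OPUS code."""
    products = {
        (row["product_type"], row["product_rootname"])
        for row in product_eng_map
        if row["eng_rootname"] == eng_rootname
    }
    return all(product_complete.get(key) == "Y" for key in products)

# hypothetical sample rows
pem = [
    {"product_rootname": "N4QH01010", "product_type": "FGS",
     "eng_rootname": "T20022861430"},
    {"product_rootname": "G2002286143000", "product_type": "GSA",
     "eng_rootname": "T20022861430"},
]
done = {("FGS", "N4QH01010"): "Y", ("GSA", "G2002286143000"): "N"}
```

With the sample data, the GSA product is still incomplete, so the telemetry file T20022861430 would not yet be eligible for cleanup.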
This relation, used by the UPDATR and NICSAA processes, is closely related to
the fields (columns) on the proposal Exposure Logsheet. The Logsheet is used
to define the proposed exposures for the HST scientific instruments.
Descriptions of each field in the qelogsheet relation are available to
provide detailed information.
This relation, used by the UPDATR process, defines the set of exposures that
form an OPUS association. Descriptions of each field in the qeassociation
relation are available to provide detailed information.
This relation is used to provide the tool UPDATE_QODATA with a mechanism to
assign a product_id and member_name (of a product) for NICMOS or any
instrument that has multiple products. The product_id is correlated with the
unique value of EXP_TYPE in the qeassociation table. This one character
product_id supersedes the last character of the association_id to form the
member_name of a product. The UPDATR process uses this relation.

Field name    type/size      description
----------    ---------      -----------
si_name       C4             This is the identifier type for a Science
                             Instrument (SI). The current SI candidates for
                             qeassociation are:
                             NIC  - NICMOS - Near IR Camera - Multi-Object
                                    Spectrometer
                             STIS - Space Telescope Imaging Spectrograph
                             STIS only has one product_id but many exp_type
                             values and will not appear in this table.
exp_type      C12            This field describes the role of the exposure in
                             the association. Valid values for NICMOS
                             exposures are EXP-TARG, EXP-BCK1, EXP-BCK2, ...,
                             EXP-BCK8.
product_id    C1             This is the last character of a member_id of a
                             product. The first eight characters of the id
                             are the same as the association_id. The first
                             product_id is 0, which forms a member_name that
                             is the same as the association_id.
product_code  PRODCODE_TYPE  Assign product_id by exp_type
This relation is used to hold events extracted from Mission Schedule files that are of interest to the EDPS FGS pipeline. The MSCXTR process populates this relation and the CONTRL process uses it.

Field name   type/size  description
----------   ---------  -----------
event_time   C20        YYYY.DDD.HH.MM.SS.CC - the time, to a hundredth of
                        a second, for each event
event_type   C3         OPS for operational info; FGS for jitter; RTI for
                        PCS events; RTO for offset slot data; TDR for
                        TDRSS COMCON events
event_class  C5         for OPS type classes
                        --------------------
                        BOM    begin mission schedule
                        EOM    end mission schedule
                        for FGS type classes
                        --------------------
                        BOSn   begin slew of type n (n=1 to 4)
                        PCP    Pointing Control Processor
                        ORBIT  related to HST orbit
                        BOA    begin GS acquisition or reacquisition
                        EOA    end GS acquisition or reacquisition
                        for RTI type classes
                        --------------------
                        EOS2   end slew of type 2
                        FHST3  fhst - start of 3-axis update
                        for RTO type classes
                        --------------------
                        GEN    generate offset slot
                        REUSE  reuse offset slot
                        SET    set offset slot
                        CLEAR  clear offset slot
                        for TDR type classes
                        --------------------
                        BOC    begin COMCON
                        EOC    end COMCON
                        BRC    begin rejected COMCON
                        ERC    end rejected COMCON
                        BTC    begin trimmed COMCON
                        ETC    end trimmed COMCON
event_name   C10        Format      Description
                        ----------  --------------------------------------
                        mmmmmmmmm   9-char MSC rootname for BOM and EOM
                        aaaaaaaaaa  10-char aperture name for BOSn or EOS2
                        TERMINATE   PCP state
                        GYRO        PCP state
                        FGS_OCCULT  PCP state
                        SAA         PCP state
                        ENTR_DAY    entering ORBIT day
                        ENTR_NIGHT  entering ORBIT night
                        GSACQ1      first acquisition for BOA and EOA
                        GSACQ2      second acquisition for BOA and EOA
                        REACQ       reacquisition for BOA and EOA
                        S_ss_pppoo  RTO: ss is slot num, pppoo is obset
                        pppoo_ssss  TDR: pppoo is obset, ssss is service
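The event_time field can be parsed as sketched below. Note that this relation describes the format with dot separators (YYYY.DDD.HH.MM.SS.CC) while the sample INSERT statements at the end of this section use colons (YYYY.DDD:HH:MM:SS.CC), so the sketch, which is illustrative only and not OPUS code, accepts both:

```python
from datetime import datetime, timedelta

def parse_event_time(event_time: str) -> datetime:
    """Parse a C20 MSC event time to a datetime.  CC is hundredths of a
    second.  Separators are normalized so both the dotted form and the
    colon form seen in the sample INSERTs are accepted."""
    year, doy, hh, mm, ss, cc = event_time.replace(":", ".").split(".")
    base = datetime.strptime(f"{year}.{doy}.{hh}.{mm}.{ss}", "%Y.%j.%H.%M.%S")
    return base + timedelta(milliseconds=int(cc) * 10)

t = parse_event_time("1997.092:18:00:00.50")
assert t.year == 1997 and t.microsecond == 500000
```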
This relation is used to hold the events extracted from Mission Schedule files that entail detailed parameters of Guide Star (GS) Acquisitions and Re-acquisitions. This information is of interest to the EDPS FGS pipeline. The MSCXTR process populates this relation.

Field name  type/size  description
----------  ---------  -----------
event_time  C20        YYYY.DDD.HH.MM.SS.CC - the time, to a hundredth of
                       a second, for each event
event_name  C10        Format  Description
                       ------  ----------------------------------
                       GSACQ1  first acquisition for BOA and EOA
                       GSACQ2  second acquisition for BOA and EOA
                       REACQ   reacquisition for BOA and EOA
dom_fgs     C1         FGS number for dominant GS
prim_fgs    C1         FGS number for primary GS, that is, the GS that is
                       acquired first
rol_fgs     C1         FGS number for roll GS
dom_gs_ra   R8         Right ascension (degrees) for dominant GS
dom_gs_dec  R8         Declination (degrees) for dominant GS
rol_gs_ra   R8         Right ascension (degrees) for roll GS or zero
rol_gs_dec  R8         Declination (degrees) for roll GS or zero
dom_gs_mag  R4         (Vmag) brightness of dominant GS
rol_gs_mag  R4         (Vmag) brightness of roll GS or zero
dom_gs_id   C10        GSC ID of dominant GS
rol_gs_id   C10        GSC ID of roll GS or blank if no GS
tracking    C2         FL for finelock, CT for coarse track, FG for
                       finelock/gyro, and CG for coarse track/gyro. CT and
                       CG tracking modes are no longer used.
This relation is used to hold the events extracted from Mission Schedule files that entail obset-level data for astrometry. The event time is taken from the time found in the MSC file, but there is no msc_events record for this data. This information is of interest to the EDPS AST pipeline; i.e., the fields are used to set astrometry keywords in astrometry output products. The MSCXTR process populates this relation.

Field name   type/size  description
----------   ---------  -----------
event_time   C20        YYYY.DDD.HH.MM.SS.CC - the time, to a hundredth of
                        a second, for each event
program_id   C3         When a proposal is accepted into the PMDB by SPSS,
                        it must be assigned a unique 3-character base 36
                        program identifier. This is done by the
                        PMDB/ACCEPT_PROP command. It is used for
                        identification of proposals by spacecraft and OPUS
                        software. It is also used in the OPUS and DADS
                        rootname for all archived science data files.
                        Because of flight design software, program_id must
                        be three characters.
obset_id     C2         An observation set is a collection of one or more
                        alignments that are grouped together based on FGS
                        pointing requirements. That is, if multiple
                        alignments can all be executed using the same guide
                        star pair, they are grouped into the same
                        observation set.
fgs          C1         FGS number (1, 2, or 3), or 0 for all
param_name   C8         Keyword name or keyword-related parameter name
param_value  C20        Formatted value of parameter
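The program_id is a fixed-width 3-character base 36 string. As an illustrative sketch only (the actual PMDB/ACCEPT_PROP assignment logic is not shown here), encoding an integer as such a string looks like:

```python
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base36(n: int, width: int = 3) -> str:
    """Encode a non-negative integer as a fixed-width base 36 string,
    most significant digit first."""
    if not 0 <= n < 36 ** width:
        raise ValueError("value out of range for width")
    out = []
    for _ in range(width):
        n, r = divmod(n, 36)
        out.append(DIGITS[r])
    return "".join(reversed(out))

assert to_base36(0) == "000"
assert to_base36(35) == "00Z"
assert to_base36(36) == "010"
```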
This relation is used to hold the events extracted from Mission Schedule files that entail observation-level data for astrometry. The event time is taken from the time found in the MSC file, but there is no msc_events record for this data. This information is of interest to the EDPS AST pipeline; i.e., the fields are used to set astrometry keywords in astrometry output products. The MSCXTR process populates this relation.

Field name   type/size  description
----------   ---------  -----------
event_time   C20        YYYY.DDD.HH.MM.SS.CC - the time, to a hundredth of
                        a second, for each obset
program_id   C3         When a proposal is accepted into the PMDB by SPSS,
                        it must be assigned a unique 3-character base 36
                        program identifier. It is used for identification
                        of proposals by spacecraft and OPUS software. It is
                        also used in the OPUS and DADS rootname for all
                        archived science data files.
obset_id     C2         An observation set is a collection of one or more
                        alignments that are grouped together based on FGS
                        pointing requirements. That is, if multiple
                        alignments can all be executed using the same guide
                        star pair, they are grouped into the same
                        observation set.
ob_number    C3         Observations are numbered sequentially throughout
                        an observation set. An ob_number is _NOT_ the same
                        as an obset_id. The third character is only used
                        for association products
fgs          C1         FGS number (1, 2, or 3), or 0 for all
param_name   C8         Keyword name or keyword-related parameter name
param_value  C20        Formatted value of parameter
This relation is used to hold the events extracted from Mission Schedule files that entail the details of msc_events of type RTO and class GEN, taken from the Mission Schedule file GEN-SLEW block. This information is of interest to the EDPS FGS pipeline. The MSCXTR process populates this relation.

Field name  type/size  description
----------  ---------  -----------
event_time  C20        YYYY.DDD.HH.MM.SS.CC - the time, to a hundredth of
                       a second, for each obset
program_id  C3         When a proposal is accepted into the PMDB by SPSS,
                       it must be assigned a unique 3-character base 36
                       program identifier. This is done by the
                       PMDB/ACCEPT_PROP command. It is used for
                       identification of proposals by spacecraft and OPUS
                       software. It is also used in the OPUS and DADS
                       rootname for all archived science data files.
                       Because of flight design software, program_id must
                       be three characters.
obset_id    C2         An observation set is a collection of one or more
                       alignments that are grouped together based on FGS
                       pointing requirements. That is, if multiple
                       alignments can all be executed using the same guide
                       star pair, they are grouped into the same
                       observation set.
slot        C3         Slot number
load_by     C17        YYYY.DDD:HH:MM:SS - load-by date
max_slew    C3         formatted as integer arcsec - maximum slew angle
offset_id   C13        An identifier used in the SPSS database
This relation, used by the CONTRL process, defines exposures. It provides information on how to control the Science Instruments for the exposures defined in its records. Descriptions of each field in the qexposure relation are available to provide detailed information.
This relation is used to identify the full time range of planned Guide Star acquisitions, and to report results of the actual acquisition. The first three fields are populated when the record is inserted into the table and the remaining fields are updated after GS acquisition processing. The CONTRL process updates this relation.

Field name        type/size  description
----------        ---------  -----------
gsa_rootname      C14        GYYYYDDDHHMMSS, where the time values match
                             the gsa_start_time
gsa_start_time    C17        YYYY.DDD:HH:MM:SS, commanded start of the
                             GSACQ1 or REACQ event
gsa_end_time      C17        YYYY.DDD:HH:MM:SS, allocated end of the REACQ
                             event, or the end of the GSACQ1 event if
                             num_pair is 1, or the end of the GSACQ2 event
                             if num_pair is 2
acq_success_time  C17        YYYY.DDD:HH:MM:SS, time of the start of FGS
                             guiding when the takedata flag is first raised
                             during the acquisition, or blank if no
                             telemetry or acquisition failure
guiding_mode      C2         Guiding mode at end of acquisition. GY - gyro,
                             FL - fine lock on both stars, FG - fine
                             lock/gyro, CT - coarse track on both stars,
                             CG - coarse track/gyro. CT and CG are no
                             longer allowed, but old schedules allowed
                             these modes
acq_status        C12        GSFAIL keyword value or blank. The non-blank
                             values are:
                             TLMGAP  - unknown due to missing telemetry
                             VEHSAFE - not attempted due to safing
                             SSLEXP  - scan step limit exceeded on primary
                                       GS
                             SSLEXS  - scan step limit exceeded on
                                       secondary GS
                             SREXCPn - search radius exceeded on primary GS
                                       of pair n
                             SREXCSn - search radius exceeded on secondary
                                       GS of pair n
                             NOLOCK  - failed to obtain finelock on either
                                       GS
acq_tlm_gap       R4         (Seconds) time of telemetry gap that overlaps
                             the acquisition window between gsa_start_time
                             and gsa_end_time
actual_pair_num   I4         -1 = undetermined, 0 = failure, 1 = first pair
                             is acquired, or 2 = second pair is acquired
acq_dom_fgs       I4         Dominant FGS number or 0, after acquisition. A
                             zero value indicates a total failure
acq_rol_fgs       I4         Roll FGS number or 0, after acquisition. A
                             zero value is either a planned FGS/GYRO mode
                             or a failure to acquire both GSs
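The gsa_rootname is simply the gsa_start_time digits prefixed with G. A minimal sketch, for illustration only:

```python
def gsa_rootname(gsa_start_time: str) -> str:
    """Build the C14 GYYYYDDDHHMMSS rootname from a
    YYYY.DDD:HH:MM:SS start time by keeping only the digits."""
    digits = "".join(c for c in gsa_start_time if c.isdigit())
    if len(digits) != 13:
        raise ValueError("expected YYYY.DDD:HH:MM:SS")
    return "G" + digits

# time value taken from the sample GSACQ1 event shown later in this section
assert gsa_rootname("1997.092:21:01:56") == "G1997092210156"
```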
This relation, used by the NICSAA process, contains data needed to support the NICMOS post-SAA (South Atlantic Anomaly) dark exposures. SAA exit times are significant to NICMOS because, beginning in Cycle 10, NICMOS will execute a series of dark calibration observations immediately after each SAA passage in order to eliminate persistence due to cosmic rays. Descriptions of each field in the nic_saa_exit relation are available to provide detailed information.
This relation is used to identify NICMOS associations containing Post-SAA dark exposures. The records are created for all such associations whether or not there are any exposures that use them. There is a separate record for each NICMOS configuration and saa_exit time. The relation is used to allow the linkage of NICMOS exposures to SAA darks that may occur in a previous (replan) SMS. Records having saa_exit_hour values that are a few weeks old have no operational value and may be deleted at any time. There is no reason to archive any of these records. The NICSAA process populates this relation.

Field name     type/size  description
----------     ---------  -----------
saa_exit_hour  C11        YYYY.DDD:HH - the hour at which the SAA exit
                          occurs. If the exit time is near an hour
                          boundary, a second record with the adjacent hour
                          is also present. This field is used to match with
                          the first 11 characters of the SPSS
                          nic_saa_exit.saa_exit field to create the
                          nic_saa_link records.
config         C15        The value from qelogsheet.config, which must be
                          the same for both exposures of the association.
                          The same value must match any qelogsheet.config
                          for linked exposures in nic_saa_link. There
                          should be three records with different config
                          values for each value of saa_exit_hour
program_id     C3         The program_id of the dark association. See
                          asn_product_link.program_id
obset_id       C2         Association observation set identifier of the
                          dark association. See
                          asn_product_link.asn_obset_id
member_num     C3         Member number of the dark association primary
                          product. See asn_product_link.member_num
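The saa_exit_hour keying, including the adjacent-hour record near an hour boundary, can be sketched as follows. This is an illustration only; the 5-minute boundary margin is an assumed value, as the operational threshold is not documented here:

```python
from datetime import datetime, timedelta

def saa_exit_hours(saa_exit: datetime, margin_minutes: int = 5) -> list:
    """Return the saa_exit_hour key(s) (YYYY.DDD:HH) for an SAA exit time.
    When the exit falls within margin_minutes of an hour boundary, the
    adjacent hour is included as well.  The margin is a hypothetical
    parameter chosen for illustration."""
    keys = [saa_exit.strftime("%Y.%j:%H")]
    if saa_exit.minute < margin_minutes:
        keys.append((saa_exit - timedelta(hours=1)).strftime("%Y.%j:%H"))
    elif saa_exit.minute >= 60 - margin_minutes:
        keys.append((saa_exit + timedelta(hours=1)).strftime("%Y.%j:%H"))
    return keys

assert saa_exit_hours(datetime(2002, 12, 19, 2, 30)) == ["2002.353:02"]
assert saa_exit_hours(datetime(2002, 12, 19, 2, 58)) == ["2002.353:02", "2002.353:03"]
```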
This relation is used to link NICMOS exposures to the SAA dark association that is relevant to the exposure. If no record exists, then either the exposure is an SAA dark or the exposure occurs too long after the SAA exit time. The individual exposures of the associations can be accessed through the asn_product_link table. All the timing info for both the exposures and the darks is found in the nic_saa_exit table. For records in this table, the dark association must have nearly the same saa_exit time (from the nic_saa_exit table) as the exposure. That is, the most recent SAA exit for both the darks and the exposures must be in the same orbit. The exposure from the dark association must have the same qelogsheet.config value. The qelogsheet.targname value for the dark exposures will be POST-SAA-DARK. The records in this table are inserted during the processing of the mission schedule, after updating the asn tables. The NICSAA process populates this relation.

Field name        type/size  description
----------        ---------  -----------
program_id        C3         When a proposal is accepted into the PMDB by
                             SPSS, it must be assigned a unique 3-character
                             base 36 program identifier. This is done by
                             the PMDB/ACCEPT_PROP command. This program
                             identifier is tagged as 'program_id' in most
                             PMDB relations. It is used for identification
                             of proposals by spacecraft and OPUS software.
                             It is also used in the OPUS and DADS rootname
                             for all archived science data files. Because
                             of flight design software, program_id must be
                             three characters.
obset_id          C2         An observation set is a collection of one or
                             more alignments that are grouped together
                             based on FGS pointing requirements. That is,
                             if multiple alignments can all be executed
                             using the same guide star pair, they are
                             grouped into the same observation set. An
                             observation set is identified by a 2-character
                             base 36 string. This field, typically called
                             'obset_id', will often contribute to the index
                             on a relation together with a proposal_id,
                             version_num, and possibly other fields. OBSET
                             is an abbreviation for observation set.
ob_number         C2         Observations are numbered sequentially
                             throughout an observation set and are assigned
                             by SMS/Gen. An ob_number is _NOT_ the same as
                             an obset_id. This field can be joined to
                             member_num for exposure records in
                             asn_members.
dark_program_id   C3         The program_id of the dark association. See
                             asn_product_link.program_id.
dark_obset_id     C2         Association observation set identifier of the
                             dark association. See
                             asn_product_link.asn_obset_id.
dark_member_num   C3         Member number of the dark association primary
                             product. See asn_product_link.member_num.
dark_association  C9         This is a convenience field. It can be
                             reconstructed from the letter N, the
                             dark_program_id, the dark_obset_id, and the
                             dark_member_num. See asn_members for the
                             complete definition of association_id.
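The relation notes that dark_association can be reconstructed from its component fields. A minimal sketch, with a constructed example value:

```python
def dark_association(dark_program_id: str, dark_obset_id: str,
                     dark_member_num: str) -> str:
    """Reconstruct the C9 convenience field: the letter N followed by
    the dark program_id (3 chars), obset_id (2 chars), and member_num
    (3 chars)."""
    value = "N" + dark_program_id + dark_obset_id + dark_member_num
    if len(value) != 9:
        raise ValueError("dark_association must be 9 characters")
    return value

assert dark_association("8FX", "01", "010") == "N8FX01010"
```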
This relation tracks requests made by applications to archive files and responses made by the on-line DADS archive system. Entries are retained in this table after processing is completed for historical purposes. The PDR pipeline PDRREQ and PDRRSP processes use this relation.

Field name    type/size  description
----------    ---------  -----------
dataset_name  C23        The name given to describe a group of files
archclass     C3         The classification used to archive the data
archdate      C20        The latest file date associated with the dataset
reqdate       C23        The date when the archive insertion request was
                         generated
reqtype       C4         Either a TAPE or DISK archive insertion request
response      C10        Response status returned by DADS
disk_date     C23        The optical disk date assigned by DADS
file_cnt      I2         The file count as determined by DADS
path          C10        The path from which the request is made
tape_date     C20        The date when a tape is made
saveset       C17        The saveset name
archv_tape    C6         Tape label
This relation holds records that have been inserted when the OPUS data partitioning processes determine the names of the ipppssoots from the science POD files. Any pipeline process that subsequently has trouble processing science data will update the trouble_flag and trbl_process fields. In addition, OPUS/EDPS archive processes update a number of fields in this relation. The PDR pipeline PDRREQ process uses this relation.

Field name    type/size  description
----------    ---------  -----------
program_id    C3         Unique 3-character base 36 program identifier for
                         the proposal of which this observation is a part.
                         Used with the obset_id, ob_number, and data_class
                         fields to form the observation rootname, which
                         uniquely identifies an observation
obset_id      C2         A collection of one or more alignments that are
                         grouped together based on FGS pointing
                         requirements. That is, if multiple alignments can
                         all be executed using the same guide star pair,
                         they are grouped into the same observation set.
                         Part of the OPUS rootname.
ob_number     C2         Observations are numbered sequentially throughout
                         an observation set and are assigned by SMS (base
                         36 number; max. 1295 observations per obset).
                         Part of the OPUS rootname.
data_class    C1         Type of data (R real-time, T tape-recorded, etc.)
obs_root      C9         The ipppssoot, where
                         i   = instrument code (N - NICMOS, O - STIS,
                               U - WFPC2, V - HSP, W - WFPC1, X - FOC,
                               Y - FOS, Z - HRS)
                         ppp = program_id
                         ss  = obset_id
                         oo  = ob_number
                         t   = data_class
proc_strt_tm  C16        Time pipeline processing started for this
                         dataset. Format is yyyydddHHMMSSsss
proc_stop_tm  C16        Time pipeline processing completed for this
                         dataset. Format is yyyydddHHMMSSsss
data_eval     I4         obsolete field??
flg_mismatch  C1         archive flag for file count mismatches
geis_only     C1         obsolete field??
calib_indic   I2         obsolete field??
trouble_flag  C1         Set to 'T' if observation sent to 'trouble'
trbl_process  C6         Name of process that sent observation to
                         'trouble'
edsci_file    C9         obsolete field??
edt_archdate  C20        DADS ARCHDATE for EDT archive class. Format is
                         yyyydddHHMMSSsss
edt_fcnt      I2         File count for EDT archive class
calib_file    C9         obsolete field??
cal_archdate  C20        DADS ARCHDATE for CAL archive class. Format is
                         yyyydddHHMMSSsss
cal_fcnt      I2         File count for CAL archive class
cdbs_data     C15        obsolete field??
repro_flg     C10        obsolete field??
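The obs_root (ipppssoot) assembly described above can be sketched as follows; this is an illustration with constructed example values, not OPUS code:

```python
def obs_root(instrument: str, program_id: str, obset_id: str,
             ob_number: str, data_class: str) -> str:
    """Assemble the 9-character ipppssoot observation rootname:
    i = instrument code, ppp = program_id, ss = obset_id,
    oo = ob_number, t = data_class."""
    root = instrument + program_id + obset_id + ob_number + data_class
    if len(root) != 9:
        raise ValueError("ipppssoot must be 9 characters")
    return root

# e.g. a NICMOS tape-recorded observation
assert obs_root("N", "8FX", "01", "01", "T") == "N8FX0101T"
```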
This relation is used to track a dataset's file extensions. The file extensions are written to this relation when ARCHIVE_CLASS.TRACK_EXT is set to "Y" in a process's resource file. The PDR pipeline PDRREQ process uses this relation.

Field name    type/size  description
----------    ---------  -----------
dataset_name  C23        The name given to describe a group of files
archclass     C3         The classification used to archive the data
archdate      C20        The latest file date associated with a dataset
file_ext      C3         The files' extension
SELECT window_start FROM file_times WHERE dataset_name like @dataset_name
INSERT INTO file_times VALUES (dataset_name,archclass,archdate,window_start,window_stop,tm_generated,pdb_version,environment,replan_time)
SELECT s.keyword, s.sourcetype, s.source, s.subsource FROM keyword_source s WHERE s.instrument = instrument
SELECT DISTINCT g1.keyword_str, g1.keyword_val, g1.keyword_typ, g1.comment_str, g1.optional FROM cgg1_keyword g1 WHERE g1.instrument = instrument AND g1.file_type !='AST' AND g1.file_type !='VFI' AND g1.file_type !='WF2' AND g1.file_type !='WFI' AND g1.file_type !='YFI' AND g1.file_type !='ZFI' ORDER by g1.keyword_str
SELECT g1.keyword_str, g1.keyword_type, g1.keyword_val, g1.comment_str, g1.optional, g4.header_type, g4.ftype_order, g1.order_index FROM cgg1_keyword g1, cgg4_order g4 WHERE g4.instrument = instrument AND g1.instrument = g4.cgg1_instr AND g1.file_type = g4.cgg1_ftype ORDER BY g4.header_type, g4.ftype_order, g1.order_index
UPDATE file_times SET archdate = archdate WHERE dataset_name = dataset_name AND archclass = archclass
SELECT sms_send_stt FROM SPSS_DB..sms_catalog WHERE sms_id=SMS_ID
SELECT window_start, window_stop, replan_time FROM file_times WHERE dataset_name = pod_name AND archclass = 'MSC'
SELECT MAX(ob_start_tim) FROM SPSS_DB..qolink
SELECT MAX(start_time) FROM qolink_sms l, executed ex WHERE ex.program_id = l.program_id AND ex.obset_id = l.obset_id AND ex.ob_number = l.ob_number AND ex.executed_flg !=" "
SELECT COUNT(*) FROM SPSS_DB..qobservation where pred_strt_tm > sms_start and pred_strt_tm < sms_stop
SELECT COUNT(*) FROM qolink_sms l, executed ex where l.start_time > sms_start AND l.start_time < sms_stop AND ex.program_id = l.program_id AND ex.obset_id = l.obset_id AND ex.ob_number = l.ob_number AND ex.executed_flg != " "
SELECT COUNT(*) FROM file_times where archclass="MSC" and window_start <=replan_time AND window_stop >replan_time
SELECT DISTINCT l.program_id,l.obset_id INTO #obsets FROM dataset_link l, product_eng_map m WHERE m.eng_rootname >= adjusted_start AND m.eng_rootname <=adjusted_stop AND l.dataset_rootname = m.product_rootname
SELECT program_id,obset_id INTO #missing FROM #obsets WHERE NOT EXISTS (SELECT * FROM SPSS_DB..qolink WHERE program_id = #obsets.program_id AND obset_id = #obsets.obset_id)
SELECT * FROM #missing
UPDATE qolink_sms SET status="N" FROM qolink_sms l, #missing m WHERE l.program_id = m.program_id AND l.obset_id = m.obset_id
SELECT DISTINCT a.association_id INTO #asn FROM #missing m, asn_members a WHERE a.program_id = m.program_id AND a.obset_id = m.obset_id
DELETE asn_association FROM asn_association a, #asn m WHERE a.association_id = m.association_id
DELETE asn_members FROM asn_members a, #asn m WHERE a.association_id = m.association_id
DROP TABLE #asn
DELETE asn_product_link FROM asn_product_link a, #missing m WHERE a.program_id = m.program_id AND a.obset_id = m.obset_id
DELETE jitter_evt_map FROM jitter_evt_map j, #missing m WHERE j.program_id = m.program_id AND j.obset_id = m.obset_id
DELETE product_status FROM product_status p, dataset_link l, #missing m WHERE l.program_id = m.program_id AND l.obset_id = m.obset_id AND p.product_rootname = l.dataset_rootname
DELETE product_eng_map FROM product_eng_map p, dataset_link l, #missing m WHERE l.program_id = m.program_id AND l.obset_id = m.obset_id AND p.product_rootname = l.dataset_rootname
DROP TABLE #missing
DROP TABLE #obsets
SELECT window_start, window_stop, replan_time FROM file_times WHERE dataset_name = pod_name AND archclass = 'MSC'
UPDATE qolink_sms SET sms_id = SMS_ID WHERE start_time >sms_start AND start_time<sms_stop
SELECT DISTINCT program_id,obset_id INTO #missing FROM qolink_sms s WHERE s.start_time > sms_start AND s.start_time < sms_stop AND NOT EXISTS (SELECT * FROM SPSS_DB..qolink WHERE program_id = s.program_id AND obset_id = s.obset_id)
SELECT * FROM #missing
UPDATE qolink_sms SET status="N" FROM qolink_sms l, #missing m WHERE l.program_id = m.program_id AND l.obset_id = m.obset_id
DELETE executed FROM executed e, #missing m WHERE e.program_id = m.program_id AND e.obset_id = m.obset_id
CREATE TABLE #inst_id (inst varchar(1), si_id varchar(4))
CREATE UNIQUE CLUSTERED INDEX #inst_id_1 on #inst_id (si_id)
INSERT INTO #inst_id VALUES ('1','1')
INSERT INTO #inst_id VALUES ('2','2')
INSERT INTO #inst_id VALUES ('3','3')
INSERT INTO #inst_id VALUES ('I','WFC3')
INSERT INTO #inst_id VALUES ('J','ACS')
INSERT INTO #inst_id VALUES ('L','COS')
INSERT INTO #inst_id VALUES ('N','NIC')
INSERT INTO #inst_id VALUES ('O','STIS')
INSERT INTO #inst_id VALUES ('U','WFII')
INSERT INTO #inst_id VALUES ('V','HSP')
INSERT INTO #inst_id VALUES ('W','WFPC')
INSERT INTO #inst_id VALUES ('X','FOC')
INSERT INTO #inst_id VALUES ('Y','FOS')
INSERT INTO #inst_id VALUES ('Z','HRS')
SELECT o.proposal_id,o.program_id,o.obset_id,o.ob_number, o.si_id,o.pred_strt_tm, o.target_acqmd, o.coord_id, o.control_id, isnull( e.ob_number, "/") present INTO #observ FROM SPSS_DB..qobservation o, executed e WHERE o.pred_strt_tm>sms_start AND o.pred_strt_tm<sms_stop AND o.program_id *= e.program_id AND o.obset_id *= e.obset_id AND o.ob_number *= e.ob_number
SELECT count(*) FROM #observ WHERE present !="/"
INSERT INTO executed SELECT program_id,obset_id,ob_number,proposal_id,si_id,control_id, coord_id, ' ' FROM #observ WHERE #observ.present = "/"
SELECT o.proposal_id,o.program_id,o.obset_id,o.ob_number, l.alignment_id,l.exposure_id,l.version_num,o.si_id,o.pred_strt_tm, o.target_acqmd,i.inst,o.present INTO #obslist FROM #observ o, SPSS_DB..qolink l, #inst_id i WHERE NOT(o.si_id!="1" and o.si_id!="2" AND o.si_id!="3" and o.control_id=' ') AND l.proposal_id=o.proposal_id AND l.program_id=o.program_id AND l.obset_id=o.obset_id AND l.ob_number=o.ob_number AND i.si_id=o.si_id ORDER BY o.program_id,o.obset_id,o.ob_number
DROP TABLE #observ
DROP TABLE #missing
DELETE qolink_sms FROM qolink_sms,#obslist WHERE qolink_sms.program_id=#obslist.program_id AND qolink_sms.obset_id=#obslist.obset_id AND qolink_sms.ob_number=#obslist.ob_number AND #obslist.present="/"
SELECT l.program_id,l.obset_id,l.ob_number,l.inst,l.pred_strt_tm, q.opmode,convert(int,q.exptime),l.present FROM #obslist l, SPSS_DB..qelogsheet q WHERE l.proposal_id=q.proposal_id AND l.obset_id=q.obset_id AND l.alignment_id=q.alignment_id AND l.exposure_id=q.exposure_id AND l.version_num=q.version_num
INSERT INTO qolink_sms VALUES (program_id,obset_id,ob_number,SMS_ID,'U',inst, ' ',' ',' ',' ',' ',' ',' ',' ',start_time,end_time,'P')
UPDATE qolink_sms SET start_time=start_time, end_time=end_time WHERE program_id=program_id AND obset_id=obset_id AND ob_number=ob_number AND time_type != "A"
UPDATE qolink_sms SET taq="Y",ocx_expected="Y" FROM qolink_sms, #obslist WHERE #obslist.target_acqmd="01" AND #obslist.present = "/" AND qolink_sms.program_id=#obslist.program_id AND qolink_sms.obset_id=#obslist.obset_id AND qolink_sms.ob_number=#obslist.ob_number
UPDATE qolink_sms SET taq="Y" FROM qolink_sms, #obslist WHERE #obslist.target_acqmd="02" AND #obslist.present = "/" AND qolink_sms.program_id=#obslist.program_id AND qolink_sms.obset_id=#obslist.obset_id AND qolink_sms.ob_number=#obslist.ob_number
SELECT l.program_id,l.obset_id,l.ob_number,a.association_id, a.si_name,a.collect,a.exp_type,start_time=l.pred_strt_tm,l.inst,l.present INTO #datasets FROM #obslist l, SPSS_DB..qeassociation a WHERE l.proposal_id*=a.proposal_id AND l.obset_id*=a.obset_id AND l.alignment_id*=a.alignment_id AND l.exposure_id*=a.exposure_id AND l.version_num*=a.version_num ORDER BY l.program_id,l.obset_id,l.ob_number
DROP TABLE #obslist
DROP TABLE #inst_id
SELECT program_id, association_id, si_name, last_time=MAX(start_time) INTO #new_asn_list FROM #datasets WHERE collect="Y" AND present="/" GROUP BY program_id, association_id, si_name
SELECT program_id, association_id, si_name, last_time=MAX(start_time) INTO #old_asn_list FROM #datasets d WHERE collect="Y" and present!="/" AND NOT EXISTS (SELECT * FROM #new_asn_list n WHERE n.program_id = d.program_id AND n.association_id = d.association_id) GROUP BY program_id, association_id, si_name
DELETE asn_association FROM asn_association a, #new_asn_list l WHERE a.association_id = l.association_id
INSERT INTO asn_association (association_id, si_name, last_exp_date, collect_date) SELECT association_id, si_name, last_time, ' ' FROM #new_asn_list
DELETE asn_members FROM asn_members m, #new_asn_list l WHERE m.association_id = l.association_id
DELETE asn_product_link FROM asn_product_link p, #new_asn_list l WHERE p.program_id = SUBSTRING(l.association_id,2,3) AND p.asn_obset_id = SUBSTRING(l.association_id,5,2)
INSERT INTO asn_members( association_id, program_id, obset_id, member_num, member_type, member_status, product_status) SELECT association_id, program_id, obset_id, ob_number, exp_type, "U", "E" FROM #datasets WHERE collect="Y" and present="/"
SELECT d.program_id, obset_id=substring(d.association_id,5,2), ob_number=substring(d.association_id,7,2)+p.product_id, d.inst, member_type=("PROD-"+SUBSTRING(d.exp_type,5,8)), d.exp_type, d.association_id INTO #nic_acs_prod FROM #datasets d, product_code p WHERE d.inst IN ('N','J') and d.collect='Y' AND d.present="/" AND p.si_name=d.si_name and p.exp_type=d.exp_type GROUP BY d.program_id,d.association_id,d.inst,p.product_id,d.exp_type
SELECT n.program_id, n.obset_id, ob_number=substring(n.ob_number,1,2)+'0', n.inst, n.association_id INTO #acs_dither FROM #nic_acs_prod n WHERE n.inst = "J" GROUP BY n.program_id,n.obset_id,substring(n.ob_number,1,2),n.inst, n.association_id HAVING COUNT(*)>1
INSERT INTO #nic_acs_prod( program_id, obset_id, ob_number, inst, member_type, exp_type, association_id) SELECT program_id, obset_id, ob_number, inst, "PROD-DTH", "*",association_id FROM #acs_dither
DELETE qolink_sms FROM qolink_sms,#nic_acs_prod WHERE qolink_sms.program_id=#nic_acs_prod.program_id AND qolink_sms.obset_id=#nic_acs_prod.obset_id AND qolink_sms.ob_number=#nic_acs_prod.ob_number
INSERT INTO qolink_sms SELECT program_id,obset_id,ob_number,"$SMS_ID",'U',inst, ' ',' ',' ',' ',' ',' ',' ',' ',' ',' ',' ' FROM #nic_acs_prod
INSERT INTO asn_members( association_id, program_id, obset_id, member_num, member_type, member_status, product_status) SELECT association_id, program_id, obset_id, ob_number, member_type, "P", "U" FROM #nic_acs_prod
INSERT INTO asn_product_link( program_id, asn_obset_id, member_num, obset_id, ob_number) SELECT n.program_id, n.obset_id, n.ob_number, m.obset_id, m.member_num FROM #nic_acs_prod n, asn_members m WHERE m.association_id=n.association_id and m.member_type = n.exp_type
INSERT INTO asn_product_link( program_id, asn_obset_id, member_num, obset_id, ob_number) SELECT a.program_id, a.obset_id, a.ob_number, m.obset_id, m.member_num FROM #acs_dither a, asn_members m WHERE m.association_id=a.association_id and m.member_status!="P"
DROP TABLE #acs_dither
DROP TABLE #nic_acs_prod
DELETE qolink_sms FROM qolink_sms,#new_asn_list WHERE #new_asn_list.si_name="STIS" AND qolink_sms.program_id=#new_asn_list.program_id AND qolink_sms.obset_id=SUBSTRING(#new_asn_list.association_id, 5,2) AND qolink_sms.ob_number=SUBSTRING(#new_asn_list.association_id, 7,3)
INSERT INTO qolink_sms SELECT program_id,SUBSTRING(#new_asn_list.association_id, 5,2), SUBSTRING(#new_asn_list.association_id, 7,3),"$SMS_ID",'U','O', ' ',' ',' ',' ',' ',' ',' ',' ',' ',' ',' ' FROM #new_asn_list WHERE si_name = "STIS"
INSERT INTO asn_members( association_id, program_id, obset_id, member_num, member_type, member_status, product_status) SELECT association_id, program_id,SUBSTRING(#new_asn_list.association_id, 5,2), SUBSTRING(#new_asn_list.association_id, 7,3), "PRODUCT", "P", "U" FROM #new_asn_list WHERE si_name = "STIS"
INSERT INTO asn_product_link(program_id, asn_obset_id, member_num, obset_id, ob_number) SELECT #new_asn_list.program_id,SUBSTRING(asn_members.association_id, 5,2), SUBSTRING(asn_members.association_id, 7,3), asn_members.obset_id, asn_members.member_num FROM #new_asn_list, asn_members WHERE #new_asn_list.si_name = "STIS" AND asn_members.association_id = #new_asn_list.association_id AND asn_members.member_status="U"
SELECT p.program_id,p.asn_obset_id,p.member_num,prod_start=MIN(le.start_time), prod_end=MAX(le.end_time) INTO #prod_times FROM #new_asn_list, asn_product_link p, qolink_sms le WHERE p.program_id = SUBSTRING(#new_asn_list.association_id, 2,3) AND p.asn_obset_id = SUBSTRING(#new_asn_list.association_id, 5,2) AND le.program_id = p.program_id AND le.obset_id = p.obset_id AND le.ob_number = p.ob_number AND le.start_time!=" " GROUP BY p.program_id,p.asn_obset_id,p.member_num UNION SELECT p.program_id,p.asn_obset_id,p.member_num,prod_start=MIN(le.start_time), prod_end=MAX(le.end_time) FROM #old_asn_list, asn_product_link p, qolink_sms le WHERE p.program_id = SUBSTRING(#old_asn_list.association_id, 2,3) AND p.asn_obset_id = SUBSTRING(#old_asn_list.association_id, 5,2) AND le.program_id = p.program_id AND le.obset_id = p.obset_id AND le.ob_number = p.ob_number AND le.start_time!=" " GROUP BY p.program_id,p.asn_obset_id,p.member_num
SELECT p.program_id,p.asn_obset_id,p.member_num,prod_start=MIN(le.start_time), prod_end=MAX(le.end_time) INTO #prod_times FROM #new_asn_list, asn_product_link p, qolink_sms le WHERE p.program_id = SUBSTRING(#new_asn_list.association_id, 2,3) AND p.asn_obset_id = SUBSTRING(#new_asn_list.association_id, 5,2) AND le.program_id = p.program_id AND le.obset_id = p.obset_id AND le.ob_number = p.ob_number AND le.start_time!=" " GROUP BY p.program_id,p.asn_obset_id,p.member_num
DROP TABLE #new_asn_list
DROP TABLE #old_asn_list
UPDATE qolink_sms SET start_time=#prod_times.prod_start, end_time=#prod_times.prod_end, time_type = 'P' FROM qolink_sms, #prod_times WHERE qolink_sms.program_id=#prod_times.program_id AND qolink_sms.obset_id=#prod_times.asn_obset_id AND qolink_sms.ob_number=#prod_times.member_num AND qolink_sms.time_type != "A"
DROP TABLE #prod_times
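The SUBSTRING-based field extraction used throughout the statements above can be sketched in Python. This is an illustrative sketch, not pipeline code: it assumes the association_id layout implied by the SQL (an instrument letter, then SUBSTRING(...,2,3) = program_id, SUBSTRING(...,5,2) = obset_id, SUBSTRING(...,7,3) = member_num; Sybase SUBSTRING is 1-indexed), and the example ID is hypothetical. The product_times helper mirrors the MIN/MAX rollup into #prod_times.

```python
# Illustrative sketch only; the association_id example is hypothetical.
# Sybase SUBSTRING(s, start, len) is 1-indexed, so Python slices shift by one.

def split_association_id(association_id):
    """Split an association_id such as 'O8FX01010' into its fields."""
    return {
        "program_id": association_id[1:4],   # SUBSTRING(..., 2, 3)
        "obset_id":   association_id[4:6],   # SUBSTRING(..., 5, 2)
        "member_num": association_id[6:9],   # SUBSTRING(..., 7, 3)
    }

def product_times(member_times):
    """Roll member (start, end) pairs up to the product's start/end,
    as the MIN/MAX aggregation into #prod_times does.  The fixed-width
    'YYYY.DDD:HH:MM' strings sort chronologically, so string MIN/MAX
    is time MIN/MAX."""
    starts = [s for s, _ in member_times]
    ends = [e for _, e in member_times]
    return min(starts), max(ends)
```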
SELECT program_id,obset_id,ob_number,max_collect=MAX(collect) INTO #stis_obs FROM #datasets WHERE inst = 'O' and present="/" GROUP BY program_id,obset_id,ob_number
DROP TABLE #datasets
UPDATE qolink_sms SET status="M" FROM #stis_obs, qolink_sms WHERE #stis_obs.max_collect="Y" AND qolink_sms.program_id=#stis_obs.program_id AND qolink_sms.obset_id=#stis_obs.obset_id AND qolink_sms.ob_number=#stis_obs.ob_number
DROP TABLE #stis_obs
SELECT replan_time, window_stop FROM file_times WHERE dataset_name = POD_FILE AND archclass = 'MSC'
DELETE FROM msc_events WHERE event_time>=msc_start AND event_time<=msc_end
DELETE FROM msc_gs_acq WHERE event_time>=msc_start AND event_time<=msc_end
DELETE FROM msc_ast_obset WHERE event_time>=msc_start AND event_time<=msc_end
DELETE FROM msc_ast_observe WHERE event_time>=msc_start AND event_time<=msc_end
DELETE FROM msc_slew_slot WHERE event_time>=msc_start AND event_time<=msc_end
INSERT INTO msc_events (event_time,event_type,event_class,event_name) VALUES ('1997.092:18:00:00.00','OPS','BOM','UH4100000')
INSERT INTO msc_gs_acq (event_time,event_name,dom_fgs,prim_fgs,rol_fgs,dom_gs_ra,dom_gs_dec, rol_gs_ra,rol_gs_dec,dom_gs_mag,rol_gs_mag,dom_gs_id,rol_gs_id,tracking) VALUES ('1997.092:21:01:56.00','GSACQ1','3','3','0',167.49661,35.19144,0.00000,0.00000,10.708, 0.000,'0252201866','0252201866','FG')
INSERT INTO msc_ast_obset (event_time,program_id,obset_id,fgs,param_name,param_value) VALUES ('2002.353:02:22:42.00','8FX','01','0','PCCS','3194123878.')
INSERT INTO msc_ast_observe (event_time,program_id,obset_id,ob_number,fgs,param_name,param_value) VALUES ('2002.353:02:23:52.00','8FX','01','01','1','K10','24.0')
INSERT INTO msc_slew_slot (event_time,program_id,obset_id,slot,load_by,max_slew,offset_id) VALUES ('1997.095:00:59:04.00','3WJ','04','5','1997.092:00:59','30','071500OFFC3E')
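The event_time values inserted above use a 'YYYY.DDD:HH:MM:SS.ss' year/day-of-year layout. A minimal sketch (assumed, not part of the pipeline code) for turning such a string into a datetime:

```python
# Illustrative sketch; the pipeline's actual time handling is not shown here.
from datetime import datetime

def parse_event_time(event_time):
    """Parse e.g. '1997.092:18:00:00.00' (year.day-of-year:HH:MM:SS.ss)
    into a datetime; %j is the day-of-year directive."""
    return datetime.strptime(event_time, "%Y.%j:%H:%M:%S.%f")
```

Because the format is fixed-width with most-significant fields first, the raw strings also compare in chronological order, which is why the SQL can range-filter on event_time directly.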
SELECT window_start, window_stop, replan_time FROM file_times WHERE dataset_name = pod_name AND archclass = 'MSC'
SELECT o.program_id program_id,o.obset_id obset_id,o.ob_number ob_number, o.inst inst,o.start_time start_time,o.end_time end_time,e.type type INTO #observ FROM qolink_sms o, qolink l, qexposure e WHERE o.start_time>sms_start AND o.start_time<"$sms_stop" AND o.program_id = l.program_id AND o.obset_id = l.obset_id AND o.ob_number = l.ob_number AND l.proposal_id = e.proposal_id AND l.obset_id = e.obset_id AND l.alignment_id = e.alignment_id AND l.exposure_id = e.exposure_id AND l.version_num = e.version_num
UPDATE #observ SET inst='F' WHERE inst IN ('1','2','3')
SELECT MIN(start_time) FROM #observ
SELECT MAX(event_time) FROM msc_events WHERE event_type = "FGS" AND event_class="BOA" AND event_name != "GSACQ2" AND event_time > old_start AND event_time < first_obs_time
SELECT (o.inst+o.program_id+o.obset_id+o.ob_number+'J') dataset_rootname, "FGS" dataset_type, o.program_id, o.obset_id, o.ob_number, o.inst INTO #datasets FROM #observ o
INSERT INTO #datasets SELECT (o.inst+o.program_id+o.obset_id+o.ob_number+'M') dataset_rootname, "AST" dataset_type, o.program_id, o.obset_id, o.ob_number, o.inst FROM #observ o WHERE o.inst="F"
SELECT event_time, event_class, event_name, ("G"+SUBSTRING(event_time,1,4)+SUBSTRING(event_time,6,3) +SUBSTRING(event_time,10,2)+SUBSTRING(event_time,13,2) +SUBSTRING(event_time,16,2)) gsa_rootname INTO #acq_times FROM msc_events WHERE event_type = "FGS" AND event_class IN ("BOA","EOA") AND event_time >= acq_start_time AND event_time < sms_stop
SELECT MIN(event_time), MAX(event_time) FROM #acq_times WHERE event_class="BOA" AND event_name != "GSACQ2"
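The gsa_rootname built by the SUBSTRING chain above is just the acquisition's event_time with the punctuation dropped and a 'G' prefix. A Python sketch of the same construction (slices mirror the 1-indexed Sybase SUBSTRING calls):

```python
# Sketch of the gsa_rootname construction in the #acq_times SELECT.

def gsa_rootname(event_time):
    """'1997.092:21:01:56.00' -> 'G1997092210156'."""
    return ("G"
            + event_time[0:4]     # SUBSTRING(event_time, 1, 4)  year
            + event_time[5:8]     # SUBSTRING(event_time, 6, 3)  day of year
            + event_time[9:11]    # SUBSTRING(event_time, 10, 2) hours
            + event_time[12:14]   # SUBSTRING(event_time, 13, 2) minutes
            + event_time[15:17])  # SUBSTRING(event_time, 16, 2) seconds
```

Since the rootname is the timestamp's digits in most-significant-first order, string comparisons of gsa_rootname against similarly formatted bounds preserve chronological order, which the range-based DELETEs on gsa_data and product_status rely on.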
DELETE gsa_data WHERE gsa_rootname > replan_time AND gsa_rootname < window_stop
DELETE product_status WHERE product_type = "GSA" AND product_rootname > replan_time AND product_rootname < window_stop
DELETE product_eng_map WHERE product_type = "GSA" AND product_rootname > replan_time AND product_rootname < window_stop
DELETE dataset_link FROM dataset_link,#datasets WHERE dataset_link.dataset_rootname = #datasets.dataset_rootname
DELETE jitter_evt_map FROM jitter_evt_map,#datasets WHERE jitter_evt_map.program_id = #datasets.program_id AND jitter_evt_map.obset_id = #datasets.obset_id AND jitter_evt_map.ob_number = #datasets.ob_number AND #datasets.dataset_type = "FGS"
DELETE product_eng_map FROM product_eng_map,#datasets WHERE product_eng_map.product_rootname = #datasets.dataset_rootname
DELETE product_status FROM product_status,#datasets WHERE product_status.product_rootname = #datasets.dataset_rootname
INSERT INTO jitter_evt_map SELECT sms_start,"SMS",o.program_id,o.obset_id,o.ob_number,"Y" FROM #observ o WHERE o.start_time > sms_start and o.start_time < first_acq_time AND o.type = "CAL" UNION SELECT sms_start,"SMS",o.program_id,o.obset_id,o.ob_number,"N" FROM #observ o WHERE o.start_time > sms_start and o.start_time < first_acq_time AND o.type != "CAL"
INSERT INTO jitter_evt_map SELECT next_evt_time,"GSA",o.program_id,o.obset_id,o.ob_number,"Y" FROM #observ o WHERE o.start_time > next_evt_time and o.start_time < next_acq_time AND o.type = "CAL" UNION SELECT next_evt_time,"GSA",o.program_id,o.obset_id,o.ob_number,"N" FROM #observ o WHERE o.start_time > next_evt_time and o.start_time < next_acq_time AND o.type != "CAL"
INSERT INTO gsa_data VALUES ("gsa_rootname","next_evt_time","search_end"," "," "," ",0.0,0,0,0)
INSERT INTO dataset_link SELECT dataset_rootname, dataset_type, program_id, obset_id, ob_number FROM #datasets
INSERT INTO product_status SELECT dataset_rootname, dataset_type, "N" FROM #datasets
INSERT INTO product_status SELECT dataset_rootname, "EPC", "N" FROM #datasets d, eng_dataset_pads p WHERE d.dataset_type = "FGS" AND d.inst = p.inst AND p.exp_type="EPC"
INSERT INTO product_status SELECT gsa_rootname, "GSA", "N" FROM #acq_times WHERE event_class="BOA" and event_name!="GSACQ2" and event_time>"$sms_start"
SELECT event_time, gsa_rootname FROM #acq_times WHERE event_class="BOA" AND event_name != "GSACQ2" AND event_time > sms_start
INSERT INTO product_eng_map VALUES ( "rootname","type","eng_name","N")
----------------------------------------------------------------------
SELECT d.dataset_rootname, d.dataset_type, o.start_time,o.end_time, p.start_pad, p.end_pad FROM #datasets d, #observ o, eng_dataset_pads p WHERE o.program_id = d.program_id AND o.obset_id = d.obset_id AND o.ob_number = d.ob_number AND p.inst = o.inst AND p.exp_type = o.type UNION SELECT d.dataset_rootname, "EPC", o.start_time,o.end_time,p.start_pad, p.end_pad FROM #datasets d, #observ o, eng_dataset_pads p WHERE d.dataset_type = "FGS" AND o.program_id = d.program_id AND o.obset_id = d.obset_id AND o.ob_number = d.ob_number AND p.inst = d.inst AND p.exp_type = "EPC"
INSERT INTO product_eng_map VALUES ( "rootname","type","eng_name","N")
----------------------------------------------------------------------
SELECT d.dataset_rootname, d.dataset_type, o.start_time FROM #datasets d, #observ o WHERE o.program_id = d.program_id AND o.obset_id = d.obset_id AND o.ob_number = d.ob_number AND o.type = "CAL"
INSERT INTO product_eng_map VALUES ( "rootname","type","eng_name","N")
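The pairing of each driving SELECT with a templated INSERT above suggests the surrounding script fetches the SELECT's rows and issues one product_eng_map INSERT per row, substituting the row's values for "rootname", "type", and "eng_name". A minimal sqlite3 sketch of that select-then-insert loop; the table name is kept from the listing, but the row values and the assumed column layout are illustrative only:

```python
# Illustrative select-then-insert loop; not the pipeline's actual host script.
import sqlite3

def map_products_to_eng(conn, rows):
    """For each (rootname, type, eng_name) tuple produced by a driving
    SELECT, record an unprocessed ('N') entry in product_eng_map."""
    for rootname, dtype, eng_name in rows:
        conn.execute(
            "INSERT INTO product_eng_map VALUES (?, ?, ?, ?)",
            (rootname, dtype, eng_name, "N"),
        )
```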
DROP TABLE #observ
DROP TABLE #datasets
DROP TABLE #acq_times
SELECT window_start, window_stop, replan_time FROM file_times WHERE dataset_name = pod_name AND archclass = 'MSC'
SELECT o.program_id,o.obset_id,o.ob_number,l.proposal_id,l.alignment_id, l.exposure_id,l.version_num INTO #obslist FROM $SPSS_DB..qobservation o, $SPSS_DB..qolink l WHERE o.pred_strt_tm>"sms_start" AND o.pred_strt_tm<"sms_stop" AND o.si_id = "NIC" AND o.control_id!=" " AND l.program_id = o.program_id AND l.obset_id = o.obset_id AND l.ob_number = o.ob_number
SELECT l.program_id,l.obset_id,l.ob_number,n.saa_exit,e.config,e.targname INTO #nicsaa FROM #obslist l, nic_saa_exit n, qelogsheet e WHERE l.program_id=n.program_id AND l.obset_id=n.obset_id AND l.ob_number=n.ob_number AND n.delta_time< NIC_SAA_MAX_DELTA AND l.proposal_id=e.proposal_id AND l.obset_id=e.obset_id AND l.alignment_id=e.alignment_id AND l.exposure_id=e.exposure_id AND l.version_num=e.version_num ORDER BY l.program_id,l.obset_id,l.ob_number
SELECT l.program_id,obset_id=l.asn_obset_id,l.member_num,n.config, saa_exit=min(n.saa_exit) INTO #nicsaadark FROM #nicsaa n, asn_product_link l WHERE n.targname="POST-SAA-DARK" AND l.program_id=n.program_id AND l.obset_id=n.obset_id AND l.ob_number=n.ob_number GROUP BY l.program_id,l.asn_obset_id,l.member_num,n.config
DELETE nic_saa_dark FROM nic_saa_dark,#nicsaadark WHERE nic_saa_dark.program_id=#nicsaadark.program_id AND nic_saa_dark.obset_id=#nicsaadark.obset_id AND nic_saa_dark.member_num=#nicsaadark.member_num
SELECT * FROM #nicsaadark
DELETE FROM nic_saa_dark WHERE saa_exit_hour = "saa_hour" AND config = "config"
INSERT INTO nic_saa_dark VALUES ("saa_hour","config","program_id","obset_id","member_num")
DELETE nic_saa_link FROM nic_saa_link,#nicsaa WHERE nic_saa_link.program_id=#nicsaa.program_id AND nic_saa_link.obset_id=#nicsaa.obset_id AND nic_saa_link.ob_number=#nicsaa.ob_number
INSERT INTO nic_saa_link SELECT s.program_id, s.obset_id, s.ob_number, d.program_id, d.obset_id, d.member_num, ("N"+d.program_id+d.obset_id+d.member_num) FROM #nicsaa s, nic_saa_dark d WHERE s.targname != "POST-SAA-DARK" AND d.saa_exit_hour = SUBSTRING(s.saa_exit,1,11) AND d.config = s.config
DROP TABLE #obslist
DROP TABLE #nicsaa
DROP TABLE #nicsaadark
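The NIC post-SAA dark matching above keys darks by the SAA exit time truncated to the hour (SUBSTRING(saa_exit,1,11)) plus camera config, and builds the linked dark-association rootname as 'N' + program_id + obset_id + member_num. A sketch of the two string constructions (the example values are hypothetical):

```python
# Illustrative helpers mirroring the nic_saa_dark / nic_saa_link string logic.

def saa_exit_hour(saa_exit):
    """'2002.353:02:22:42.00' -> '2002.353:02'
    (SUBSTRING(saa_exit, 1, 11): the exit time truncated to the hour)."""
    return saa_exit[:11]

def dark_rootname(program_id, obset_id, member_num):
    """Build the dark-association rootname inserted into nic_saa_link:
    'N' + program_id + obset_id + member_num."""
    return "N" + program_id + obset_id + member_num
```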