ADASS XII Conference
Observatory operations - backend
P7.1 Data Calibration Pipeline for the Far Ultraviolet Spectroscopic Explorer
- William V. Dixon, David J. Sahnow, The FUSE SDP Group (JHU)
CalFUSE is the calibration software pipeline used at the Johns Hopkins University to process data from FUSE, the Far Ultraviolet Spectroscopic Explorer. The pipeline corrects for a variety of instrumental effects, extracts target spectra, and applies the appropriate wavelength and flux calibrations. The software is written in C and runs under the Solaris, DEC Alpha, and Linux operating systems. In this poster, we present recent improvements in the pipeline, including a new module to correct for the effects of spacecraft motion during an observation, and announce the availability of calibrated spectral files, created using CalFUSE v2.1.0, from MAST, the Multimission Archive at STScI.
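As a rough illustration of what a spacecraft-motion correction involves (a sketch under our own assumptions, not the CalFUSE module itself), the Python fragment below removes the first-order Doppler shift introduced by the orbital velocity along the line of sight; the function name, argument conventions, and units are hypothetical.

```python
import numpy as np

C_KM_S = 2.99792458e5  # speed of light in km/s

def correct_orbital_doppler(wavelength, v_los_km_s):
    """Shift observed wavelengths back to the target rest frame.

    wavelength  : observed photon wavelengths (Angstroms)
    v_los_km_s  : spacecraft line-of-sight velocity toward the target
                  at each photon arrival time (km/s, positive = approaching)
    """
    # First-order Doppler: lambda_emitted = lambda_observed / (1 - v/c)
    return np.asarray(wavelength) / (1.0 - np.asarray(v_los_km_s) / C_KM_S)

# Example: a 5 km/s approach velocity shifts a 1030 Angstrom line by about 0.017 Angstroms.
print(correct_orbital_doppler([1030.0], 5.0))
```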
P7.2 The Next Step for the FUSE Calibration Pipeline
- David J. Sahnow, William V. Dixon, The FUSE SDP Group (JHU)
The calibration pipeline for the Far Ultraviolet Spectroscopic Explorer (FUSE) was designed years before the satellite was launched in June of 1999. After launch, a number of unexpected instrumental features were discovered; as the FUSE team dealt with each of them, the pipeline was modified appropriately. Eventually, these changes made the design so cumbersome that the pipeline became difficult to maintain. In 2002, we began to develop a new pipeline concept that takes into account the actual instrument characteristics. We will present our plans for this improved calibration pipeline and describe the progress we have made toward that goal. In addition, we will discuss the lessons learned while modifying the original design.
P7.3 Building a Middle Tier for the CXC Data Archive
- Alexandra Patz, Peter Harbo, John Moran, David Van Stone, Panagoula Zografou
The Chandra Data Archive at the Chandra X-ray Center is developing a middle tier that can be used by both the current J2EE web application (WebChaSeR) and the Java Swing application (ChaSeR) to provide a uniform interface to the archive. This middle tier consists of a collection of independent services, from authenticating users to returning data such as an observation image or a proposal abstract. The services are accessible through an HTTP interface, allowing ChaSeR, WebChaSeR, or any other HTTP client to access them. The services run on an application server and are implemented in Java using Struts, Apache's open-source web application framework. Having a central interface to the archive, shared by all client applications, will allow for code reuse and easier maintenance. This poster discusses the design of the middle tier. This project is supported by the Chandra X-ray Center under NASA contract NAS8-39073.
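To make the uniform HTTP interface concrete, here is a minimal sketch of how any HTTP client might call one of these services. The host name, service path, and parameter names are hypothetical, not the actual CXC interface.

```python
import urllib.parse
import urllib.request

def fetch_proposal_abstract(obsid, base_url="http://archive.example.edu/services"):
    """Ask a (hypothetical) middle-tier service for the proposal abstract of an observation."""
    query = urllib.parse.urlencode({"obsid": obsid, "format": "text"})
    with urllib.request.urlopen(f"{base_url}/proposalAbstract?{query}") as response:
        return response.read().decode("utf-8")

# ChaSeR, WebChaSeR, or a simple script can all issue the same request,
# which is the benefit of putting the services behind a plain HTTP interface.
```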
P7.4 ClassX: A VOTABLE-Enabled X-ray Correlation and Classification Pipeline
- Eric Winter (NASA/GSFC, Science Systems and Applications, Inc.), Thomas McGlynn (NASA/GSFC, USRA), Anatoly Suchkov, William Pence (NASA/GSFC), Marc Postman (STScI), Nicholas White (NASA/GSFC), Richard White (STScI)
The ClassX project aims to provide a Web service to classify unclassified astronomical X-ray sources. This objective requires collecting and assimilating data from a wide variety of sources. These data sources differ in both syntax and semantics, and therefore must be translated to a common format to be useful in the classification process. The ClassX pipeline addresses this problem by using the VOTABLE XML DTD (http://us-vo.org/xml/VOTable.dtd) to store and manipulate data from multiple remote sources.
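As a minimal illustration of reading such a VOTABLE document (shown here in Python with the standard library rather than the project's Perl API), the sketch below extracts the column names and data rows; it assumes a simple, un-namespaced document conforming to the DTD cited above.

```python
import xml.etree.ElementTree as ET

def read_votable(path):
    """Return (field_names, rows) from a simple VOTABLE file."""
    table = ET.parse(path).getroot().find(".//TABLE")
    field_names = [field.get("name") for field in table.findall("FIELD")]
    rows = [[td.text for td in tr.findall("TD")]
            for tr in table.findall("DATA/TABLEDATA/TR")]
    return field_names, rows
```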
An extensive Perl API for the VOTABLE format was developed during the project, and has been released for use by the NVO community. The lessons learned during the development of the ClassX pipeline provide significant experience in identifying and addressing similar problems that will be encountered during the development of the National Virtual Observatory.
P7.5 The Automated Data Processing Pipelines for SIRTF IRS
- Fan Fang, Clare Waterson, Jing Li, Bob Narron, Iffat Khan, Wen P. Lee, John Fowler, Russ Laher, Mehrdad Moshir
We present the design, structure, and implementation of the automated data processing pipelines for the Infrared Spectrograph (IRS) onboard the Space Infrared Telescope Facility (SIRTF). These include the science data reduction pipelines that generate Basic Calibrated Data (BCD) and enhanced science (Post-BCD) products, and the calibration pipelines that generate the calibration data needed to reduce the science data.
P7.6 Self-calibration for the SIRTF GOODS Legacy Project
- David Grumm, Stefano Casertano (STScI)
Data analysis for the SIRTF GOODS Legacy Project must be able to achieve a level of calibration noise well below a part in 10,000. To achieve such a high level of fidelity, a form of self-calibration is required in which the sky intensity and the instrumental effects are derived simultaneously. Two methods being investigated are a least squares approach based on the work of Fixsen and Arendt at GSFC, and an iterative method. Both methods have been applied to derive the sky, flat field, and offset from simulated data for instruments to be flown on SIRTF; the results will be discussed.
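The sketch below is a toy version of the iterative approach (our own simplification, not the GOODS code): dithered exposures d[i, p] are modeled as flat[p] * sky[index[i, p]] + offset[p], and the three unknowns are refined by alternating updates. The array names and the fixed iteration count are assumptions.

```python
import numpy as np

def self_calibrate(d, index, n_sky, n_iter=20):
    """Jointly estimate sky, flat field, and offset from dithered exposures.

    d     : (n_exp, n_pix) observed counts
    index : (n_exp, n_pix) integer sky-cell index seen by each pixel in each exposure
    """
    n_exp, n_pix = d.shape
    flat, offset, sky = np.ones(n_pix), np.zeros(n_pix), np.zeros(n_sky)
    for _ in range(n_iter):
        # Sky update: average the flat-corrected data landing in each sky cell.
        corrected = (d - offset) / flat
        num, den = np.zeros(n_sky), np.zeros(n_sky)
        np.add.at(num, index, corrected)
        np.add.at(den, index, np.ones_like(corrected))
        sky = num / np.maximum(den, 1.0)
        # Flat/offset update: per-pixel least-squares fit of d against the sky it saw.
        s = sky[index]
        s_mean, d_mean = s.mean(axis=0), d.mean(axis=0)
        var = ((s - s_mean) ** 2).mean(axis=0)
        cov = ((s - s_mean) * (d - d_mean)).mean(axis=0)
        flat = np.where(var > 0, cov / np.maximum(var, 1e-12), flat)
        offset = d_mean - flat * s_mean
    # Note: the overall flat/sky normalization is degenerate; a real solver
    # pins it down, e.g. by forcing the mean flat field to unity.
    return sky, flat, offset
```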
P7.7 Calibration of COS Data at STScI
- Philip Hodge (STScI)
This paper describes the program for pipeline calibration of Cosmic Origins Spectrograph (COS) data at the Space Telescope Science Institute. CALCOS is written in Python. Image and table data are read from and written to FITS files using PyFITS, and the data arrays are manipulated using the numarray module, with C extension code for special cases not included in numarray.
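The fragment below is a minimal sketch of the I/O pattern described above, not CALCOS itself: a FITS events table is opened with PyFITS and one of its columns is manipulated as an array. The file name, extension name, column name, and dispersion coefficients are purely illustrative.

```python
import pyfits  # FITS layer named above; the same interface is now maintained as astropy.io.fits

hdulist = pyfits.open("cos_rawtag.fits")      # hypothetical input file
events = hdulist["EVENTS"].data               # photon-event binary table

# Array-level manipulation of a table column: convert a detector coordinate
# to wavelength with an illustrative linear dispersion relation.
pixel = events.field("XCORR")                 # column name assumed for this example
wavelength = 1150.0 + 0.01 * pixel

hdulist.close()
```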
P7.8 An Automatic Image Registration and Coaddition Pipeline for the Advanced Camera for Surveys
- John P. Blakeslee, Kenneth Anderson, Daniel Magee, Gerhardt R. Meurer
We have written an automatic image processing pipeline for the Advanced Camera for Surveys (ACS) Guaranteed Time Observation program. The pipeline supports the different cameras available on the ACS instrument and is written in the Python programming language using a flexible object-oriented design that simplifies the incorporation of new pipeline modules. It also makes use of the PyFITS and PyRAF packages distributed by STScI, as well as other external software. The processing steps include empirical determination of image offsets and rotation, cosmic ray rejection, and image combination using the drizzle software, as well as the production of object catalogs and XML markup for ingestion into the ACS Team archive.
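To illustrate one of the steps listed above, the sketch below determines the integer pixel shift between two overlapping exposures by FFT cross-correlation. It demonstrates the general technique rather than the pipeline's actual module, which also determines rotation.

```python
import numpy as np

def measure_shift(reference, image):
    """Return the integer (dy, dx) shift that best aligns `image` with `reference`."""
    # Circular cross-correlation via the FFT; the peak marks the relative shift.
    xcorr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(image)))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # Map wrap-around peak positions to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape))
```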
P7.9 OPUS: A CORBA Pipeline for Java, Python, and Perl Applications
- Walter Warren Miller III, James F. Rose, Michael S. Swam, Christine Heller-Boyer, John Schultz (STScI)
With the introduction of the OPUS CORBA mode, a limited subset of the OPUS Applications Programming Interface (OAPI) functionality was cast into CORBA IDL so that both OPUS applications and the Java-based OPUS pipeline managers could use the same CORBA infrastructure to access information on blackboards. The primary motivation for doing so was to improve scalability, but moving to a distributed object architecture also freed the managers from running strictly on a supported platform with access to a common file system. It also reduced the amount of duplicate code that would otherwise be required in a multi-programming-language environment.
Exposing even more of the OAPI through CORBA interfaces would benefit OPUS applications in similar ways. Those applications not developed in C++ could use CORBA to interact with OPUS facilities directly, provided that a CORBA binding exists for the programming language of choice. Other applications might benefit from running `outside' of the traditional file system-based OPUS environment like the Java managers and, in particular, on platforms not supported by OPUS. The enhancements to OPUS discussed in this paper will illustrate how this generality was achieved and present two examples of how to construct OPUS internal pollers in Java and Python.
P7.10 The COBRA/CARMA Correlator Data Processing System
- Steve Scott, Rick Hobbs, Andy Beard, Paul Daniel (Caltech/OVRO) Colby Kraybill, Mel Wright (University of California Berkeley) Erik Leitch (University of Chicago) David Mehringer, Ray Plante, (University of Illinois) N. S. Amarnath, Marc Pound, Kevin Rauch, Peter Teuben (University of Maryland)
The COBRA (Caltech Owens Valley Broadband Reprogrammable Array) correlator is an FPGA-based spectrometer with 16 MHz resolution and 4 GHz total bandwidth that will be commissioned on the Caltech Millimeter Array in September 2002. The processing system described here collects correlation-function and total-power data from the underlying software systems and then synchronizes and processes them to produce calibrated visibilities. The processing steps include passband gain correction, system temperature and flux scaling, blanking and flagging, atmospheric delay correction, and apodization and decimation. CORBA is used to move data from the five hardware-based computers to the pipeline computer. Within a computer, computational steps are implemented as separate processes using shared memory for communication. At any step along the pipeline the data may be graphically inspected remotely using a CORBA-based tool, the CARMA Data Viewer. This same architecture will be applied to the CARMA wideband correlator (8 stations, 8 GHz bandwidth) and the CARMA spectral correlator (15 stations, 4 GHz bandwidth), scheduled for 2003 and 2004.
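As a small illustration of the last two steps named above, the sketch below apodizes a lag-domain correlation function and decimates the resulting spectrum. The Hanning window, the transform length, and the decimation factor are assumptions for the example, not the COBRA values.

```python
import numpy as np

def lags_to_spectrum(corr_lags, decimate=2):
    """Apodize a real-valued correlation function and return a decimated spectrum."""
    window = np.hanning(corr_lags.size)          # apodization suppresses spectral ringing
    spectrum = np.fft.rfft(corr_lags * window)   # transform from lag to frequency domain
    # Keep every `decimate`-th channel; a real pipeline may average channels instead.
    return spectrum[::decimate]

channels = lags_to_spectrum(np.random.randn(32))  # 32 lags -> 17 raw channels -> 9 kept
```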
See accompanying posters by Plante et al. and Pound et al.
P7.11 CARMA Data Storage, Archiving, Pipeline Processing, and the Quest for a Data Format
- Ray Plante (NCSA/University of Illinois) Marc Pound (University of Maryland) David Mehringer (NCSA/University of Illinois) Steve Scott, Andy Beard, Paul Daniel, Rick Hobbs (Caltech/OVRO) Colby Kraybill, Mel Wright (University of California Berkeley) Erik Leitch (University of Chicago) N. S. Amarnath, Kevin Rauch, Peter Teuben (University of Maryland)
In 2005, the BIMA and OVRO mm-wave interferometers will be merged into a new array, the Combined Array for Research in Millimeter-wave Astronomy (CARMA). Each existing array has its own visibility data format, storage facility, and tradition of data analysis software. The choice for CARMA was to use one of a number of existing formats or to devise a format that combined the best of each. Furthermore, the chosen format had to address three important considerations. First, the CARMA data format must satisfy the sometimes orthogonal needs of both astronomers and engineers. Second, forcing all users to adopt a single off-line reduction package is not practical; thus, multiple end-user formats are necessary. Finally, CARMA is on a strict schedule to first light; thus, any solution must meet the restrictions of an accelerated software development cycle and take advantage of code reuse as much as possible. We describe our solution, in which the pipelined data pass through two forms: a low-level database-based format oriented toward engineers and a high-level dataset-based form oriented toward scientists.
The BIMA Data Archive at NCSA has been operating in production mode for a decade and will be reused for CARMA with enhanced search capabilities. The integrated BIMA Image Pipeline developed at NCSA will be used to produce calibrated visibility data and images for end-users. We describe the data flow from the CARMA telescope correlator to delivery to astronomers over the web and show current examples of pipeline-processed images of BIMA observations.
P7.12 CARMA Software Development
- Marc Pound, N. S. Amarnath, Kevin Rauch, Peter Teuben (University of Maryland) Colby Kraybill, Mel Wright (University of California Berkeley) Andy Beard, Paul Daniel, Rick Hobbs, Steve Scott (Caltech/OVRO) Erik Leitch (University of Chicago) David Mehringer, Ray Plante (University of Illinois)
Combining the existing BIMA and OVRO mm interferometers, together with a new third sub-array, into the combined CARMA mm interferometer will bring new challenges not only in hardware but also in software. Both arrays have their own mature operations software, developed over the last decade. For CARMA, the situation is not as simple as choosing one over the other. It is further complicated by the fact that the software developers are dispersed among five institutions and four time zones. Such multi-institution development requires frequent communication, local oversight, and reliable code management tools.
The schedule has forced us to balance carefully between using existing software, with wrappers binding it to a new, more object-oriented approach, and rewriting from scratch. New hardware, such as the correlator, has already resulted in new software, but we also anticipate re-using a fair fraction of the existing telescope software.
This poster will summarize our ideas on how we plan to do this, as well as outline what we call the CARMA Software Toolkit and the associated Software Engineering aspects.
See also accompanying posters by Scott et al. and Plante et al.
P7.13 Refactoring DIRT
- N.S. Amarnath, M.W. Pound, M.G. Wolfire (University of Maryland)
The Dust InfraRed ToolBox (DIRT - a part of the Web Infrared ToolShed, or WITS, located at http://dustem.astro.umd.edu) is a Java applet for modeling astrophysical processes in circumstellar shells around young and evolved stars.
DIRT has been used by the astrophysical community for the past four years. DIRT uses results from a number of numerical models of astrophysical processes and has an AWT-based user interface. DIRT has been refactored to decouple data representation from plotting and curve fitting. This makes it easier to (a) add new kinds of astrophysical models, (b) use the plotter in other applications, (c) migrate the user interface to Swing components, and (d) modify the user interface to add functionality (for example, SIRTF tools).
DIRT is now an extension of two generic libraries, one of which manages data representation and caching, and the other of which manages plotting and curve fitting. This project is an example of refactoring with no impact on the user interface, so the existing user community was not affected.
P7.14 COS Calibration Process
- Stephane Beland, Steven Penton, Erik Wilkinson
COS has two distinct ultraviolet channels covering the spectral range from 1150Å to 3200Å. The NUV channel covers the range from 1700Å to 3200Å and uses the Hubble Space Telescope's STIS spare MAMA. The FUV channel uses a microchannel-plate detector with a cross-delay-line readout system to cover the range from 1150Å to 1900Å. Because of the analog nature of the FUV detector's readout electronics, this system is sensitive to temperature variations and has a non-uniform pixel size across its sensitive area. We present a step-by-step description of the calibration process required to transform raw COS data into fully corrected and calibrated spectra ready for scientific analysis. Initial simulated raw COS data are used to demonstrate the calibration process.
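As one hedged example of the geometric corrections implied above, the sketch below rescales raw FUV event coordinates for a temperature-dependent stretch of the analog readout. The linear model, reference temperature, and coefficient are hypothetical placeholders, not the COS calibration values.

```python
import numpy as np

def thermal_correct(x_raw, detector_temp_c, t_ref_c=20.0, stretch_per_degc=1.0e-4):
    """Map raw X event positions to temperature-corrected positions."""
    # Assume the readout scale changes linearly with detector temperature.
    scale = 1.0 + stretch_per_degc * (detector_temp_c - t_ref_c)
    return np.asarray(x_raw) / scale
```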
P7.15 Systems Integration Testing of OPUS and the New DADS
- Lisa E. Sherbert, Lauretta Nagel (STScI)
The Data Archive and Distribution System (DADS) will soon enter the IDR (Ingest Distribution Redesign) era, and more major functions will shift from VMS platforms to various Unix platforms. As the first phase, Distribution, is delivered to testing, the interfaces with OPUS and OTFR (On The Fly Reprocessing) will change. We will give an overview of the OPUS/DADS/OTFR supersystem, circa Fall 2002, and identify interface changes that will impact the operators and archive users.
P7.16 Supporting The Observatory Mission-Critical Data Flow
- Benoit Pirenne (ESO)
ESO's model for operating the VLT (and several other telescopes) was developed following the space-mission model: a well-defined, regularly spaced, and repeating set of cycles comprising phase-I and phase-II community proposal submission steps, scheduling (long- and mid-term), observing, archiving, quality control, and finally distribution of the observations to the PIs. Almost all of these steps take place at the headquarters of the Observatory for cost and logistics reasons. This modus operandi is the most logical one for a "service mode"-oriented observatory.
In this contribution, we report on three years of experience with VLT operations, with particular emphasis on how the headquarters operations management support structure developed and stabilized. A number of metrics for assessing the performance of the support operation are provided.