ADASS XII Conference


Algorithms


P4.1 Pointing Refinement of SIRTF Images

Frank Masci, David Makovoz, David Shupe, Mehrdad Moshir, John Fowler

The soon-to-be-launched Space Infrared Telescope Facility (SIRTF) will produce image data with an a-posteriori pointing knowledge of 1.4" (1-sigma radial), with a goal of 1.2", in the ICRS coordinate frame. In order to perform robust image coaddition, mosaic generation, and the extraction and position determination of sources to faint levels, the pointing will need to be refined to better than a few tenths of an arcsecond. The input to the position-refinement software is a list of point sources extracted from a mosaic of overlapping images. The software uses this information to perform a global minimization of all relative offsets among the overlapping images, a novel method that utilizes a generic linear sparse matrix solver. The pointings and orientations of SIRTF images can be refined either in a "relative" sense, where pointings become fixed relative to a single image of a mosaic, or in an "absolute" sense (in the celestial frame) if absolute point-source information is known. Our goal is to produce science products with sub-arcsecond pointing accuracy.
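
As a rough illustration of the global-minimization approach, the sketch below sets up relative one-dimensional pointing corrections as a sparse least-squares problem and solves it with a generic sparse solver. The pairing data, image count, and the choice of SciPy's LSQR are assumptions for the example, not the SIRTF pipeline.

import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

n_images = 4
# (i, j, measured offset of image i relative to image j), in arcsec -- toy data
pairs = [(1, 0, 0.32), (2, 1, -0.11), (3, 2, 0.07), (3, 0, 0.29)]

A = lil_matrix((len(pairs) + 1, n_images))
b = np.zeros(len(pairs) + 1)
for row, (i, j, d) in enumerate(pairs):
    A[row, i] = 1.0
    A[row, j] = -1.0
    b[row] = d
A[len(pairs), 0] = 1.0      # constrain the reference image to zero correction
b[len(pairs)] = 0.0

x = lsqr(A.tocsr(), b)[0]   # global least-squares solution for all offsets
print("refined offsets (arcsec):", x)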

P4.2 A Theoretical Photometric and Astrometric Performance Model for Point Spread Function CCD Stellar Photometry

Kenneth J. Mighell (NOAO)

Using a simple two-dimensional Gaussian Point Spread Function (PSF) on a constant (flat) sky background, I derive a theoretical photometric and astrometric performance model for analytical and digital PSF-fitting stellar photometry. The model makes excellent predictions for the photometric and astrometric performance of over-sampled and under-sampled CCD stellar observations, even with cameras whose pixels have large intra-pixel quantum-efficiency variations. The model is demonstrated to accurately predict the photometric and astrometric performance of realistic space-based observations from segmented-mirror telescope concepts like the Next Generation Space Telescope, using the MATPHOT algorithm for digital PSF CCD stellar photometry that I presented last year at ADASS XI. The key PSF-based parameter of the theoretical performance model is the effective background area, defined as the reciprocal of the volume integral of the square of the (normalized) PSF; a critically-sampled PSF has an effective background area of 4π (~12.57) pixels. A bright star with a million photons can, in theory, simultaneously achieve a signal-to-noise ratio of 1000 and a (relative) astrometric error of a millipixel. The photometric performance is maximized when either the effective background area or the effective-background-level measurement error is minimized. Real-world considerations, like the use of poor CCD flat fields to calibrate the observations, can and do cause many existing space-based and ground-based CCD imagers to fail to live up to their theoretical performance limits. Future optical and infrared imaging instruments can be designed and operated to avoid the limitations of some existing space-based and ground-based cameras. This work is supported by grants from the Office of Space Science of the National Aeronautics and Space Administration (NASA).
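
The effective background area can be checked numerically for a Gaussian PSF; the short sketch below (an illustration, not the MATPHOT code) integrates the squared, normalized PSF on a fine grid and recovers the analytic value 4*pi*sigma^2, i.e. ~12.57 pixels for a one-pixel Gaussian width.

import numpy as np

sigma = 1.0                                    # Gaussian PSF width in pixels
x = np.linspace(-10, 10, 2001)
xx, yy = np.meshgrid(x, x)
dx = x[1] - x[0]

psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
psf /= psf.sum() * dx * dx                     # normalize the volume integral to 1

beta = 1.0 / np.sum(psf**2 * dx * dx)          # effective background area, in pixels
print(beta, 4 * np.pi * sigma**2)              # both ~12.57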

P4.3 Adaptive Optics Software on the CfAO Web Page

Andreas Quirrenbach, Vesa Junkkarinen, Rainer Koehler (UCSD)

Several software packages are publicly available on the web site of the Center for Adaptive Optics (CfAO). These packages support the development of adaptive optics systems, and the analysis of data obtained with adaptive optics.

We will discuss the structure and support of the web site, and give an overview of the capabilities of the individual available packages.

P4.4 Image Reduction Pipeline for the Detection of Variable Sources in Highly Crowded Fields

Claus A. Goessl, Arno Riffeser

We present a reduction pipeline for CCD (charge-coupled device) images which was built to search for variable sources in highly crowded fields like the M 31 bulge and to handle the extensive databases produced by large time series. We describe all steps of the standard reduction in detail, with emphasis on the realization of per-pixel error propagation: bias correction, treatment of bad pixels, flatfielding, and filtering of cosmic rays. The problems of PSF (point spread function) conservation and error propagation in our image alignment procedure, as well as the detection algorithm for variable sources, are discussed: we build difference images via image convolution with a technique called OIS (optimal image subtraction, Alard & Lupton 1998), proceed with an automatic detection of variable sources in noise-dominated images, and finally apply PSF-fitting, relative photometry to the detected sources. For the WeCAPP project (Riffeser et al. 2001) we achieve 3-sigma detections for variable sources with an apparent brightness of, e.g., m = 24.9 mag at their minimum and a variation of dm = 2.4 mag (or m = 21.9 mag brightness minimum and a variation of dm = 0.6 mag) on a background signal of 18.1 mag/arcsec² based on a 500 s exposure with 1.5 arcsec seeing at a 1.2 m telescope. The complete per-pixel error propagation allows us to give accurate errors for each measurement.
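
As a pointer to how per-pixel error propagation of this kind works, here is a minimal, generic sketch of Gaussian error propagation through bias subtraction and flatfielding; the function and array names are illustrative and not taken from the authors' pipeline.

import numpy as np

def calibrate(raw, var_raw, bias, var_bias, flat, var_flat):
    """Bias-subtract and flatfield a frame, propagating per-pixel variances."""
    signal = (raw - bias) / flat
    # first-order propagation: var_raw and var_bias scaled by the flat,
    # plus the contribution of the flat-field uncertainty itself
    var = (var_raw + var_bias) / flat**2 + (signal / flat)**2 * var_flat
    return signal, var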

P4.5 Representations of Spectral Coordinates in FITS

Eric W. Greisen (NRAO) Francisco G. Valdes (NOAO) Mark R. Calabretta (Australia Telescope National Facility) Steven L. Allen (UCO/Lick Observatory)

In Paper I, Greisen & Calabretta (2002) describe a generalized method for specifying the coordinates of FITS data samples. Following that general method, Calabretta & Greisen (2002) in Paper II describe detailed conventions for defining celestial coordinates as they are projected onto a two-dimensional plane. The present paper extends the discussion to the spectral coordinates of wavelength, frequency, and velocity. World coordinate functions are defined for spectral axes sampled evenly in wavelength, frequency, or velocity, evenly in the logarithm of wavelength or frequency, as projected by ideal dispersing elements, and as specified by a lookup table. Papers I and II have been accepted into the FITS standard by at least the North American FITS Committee; we expect the present work to be accepted as well.
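
For readers unfamiliar with the keywords involved, the following small example (not drawn from the paper) builds a one-dimensional spectral axis sampled evenly in frequency using the CTYPEi/CRVALi/CDELTi conventions and evaluates it with astropy's WCS machinery.

import numpy as np
from astropy.wcs import WCS

w = WCS(naxis=1)
w.wcs.ctype = ['FREQ']          # spectral axis sampled linearly in frequency
w.wcs.cunit = ['Hz']
w.wcs.crpix = [1.0]             # reference pixel
w.wcs.crval = [1.420e9]         # frequency at the reference pixel
w.wcs.cdelt = [1.0e5]           # channel width

channels = np.arange(5)
print(w.wcs_pix2world(channels[:, np.newaxis], 0).ravel())   # frequencies in Hz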

P4.6 Image Compression using CFITSIO

William Pence (NASA/GSFC)

The CFITSIO subroutine library now transparently supports reading and writing of FITS images in a new tile-compressed image format. The image is divided into a grid of rectangular tiles, and each tile of pixels is individually compressed (using a choice of different algorithms) and stored in a variable-length array column in a FITS binary table. The advantages of this format are (a) the header keywords remain uncompressed for fast access, and (b) sub-images can be extracted without uncompressing the entire original image, because only the tiles that contain pixels in the sub-image have to be uncompressed. The format also supports a lossy compression technique that is very effective for floating-point images: the noise bits are discarded without sacrificing any scientifically useful information. This paper will demonstrate the effectiveness of this image compression technique on a number of different FITS images, all extracted from existing public data archives.
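
A short example of writing and reading a tile-compressed image from Python is given below; it uses astropy.io.fits, which implements the same tile-compression convention, and the compression type and quantization level shown are illustrative choices, not recommendations from the paper.

import numpy as np
from astropy.io import fits

data = np.random.normal(1000.0, 5.0, size=(512, 512)).astype('float32')
hdu = fits.CompImageHDU(data=data, compression_type='RICE_1',
                        quantize_level=16.0)     # lossy quantization of the float pixels
hdu.writeto('tile_compressed.fits', overwrite=True)

restored = fits.getdata('tile_compressed.fits')  # tiles are decompressed transparently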

P4.7 Restoration of Digitized Astronomical Plates with the Pixon Method

P.R. Hiltner, R. Nestler, P. Kroll

Applications of the Pixon restoration method to digitized plates of the Sonneberg Plate Archive - the world's 2nd largest - are reported. Results so far obtained show that the severe astigmatism/coma distortion present in the outer parts of these wide field images can almost completely be removed. Also, object definition (FWHM) of point sources and S/N improve by factors of 2 to 7, depending on the object's strength and location (background etc.). We discuss consequences for the automated astronomical processing of the restored plates, which are of crucial importance for the inclusion of digitized archives in the virtual observatory context.

P4.8 sso_freeze: De-smearing Solar System Objects in Chandra Observations

Roger Hain, Jonathan McDowell, Arnold Rots, K. J. Glotfelty (Harvard-Smithsonian Center for Astrophysics)

Observations from the Chandra X-Ray Observatory are made in a fixed inertial coordinate frame. Most objects observed with Chandra, such as supernova remnants, quasars, or pulsars, are at infinity for all practical purposes and the observations produce sharp, focused images. However, the motion of objects observed within the solar system, such as planets or comets, will cause the object's image to appear blurred when viewed in a fixed inertial frame. This effect is similar to the blur which would be seen if a fixed camera were to take a photograph of a fast moving car.

To reconstruct the image, the CXC CIAO tool sso_freeze corrects for this effect. An origin is chosen at the center of the object and moves along with the object as it moves with respect to inertial space. The positions of the source photons are then recalculated with respect to this moving origin. The image formed from the recalculated photons now shows a clear object, such as a disk for a planet. As a side effect of this processing, fixed X-ray sources become smeared in the image. The effect is similar to moving the camera to follow the fast-moving car in the earlier example: the car becomes clearly focused, and the scene around the car is blurred. Images demonstrating the effect of sso_freeze are shown for Jupiter and Comet C/1999 S4 (LINEAR).
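
The following simplified sketch conveys the idea of recalculating event positions in a frame moving with the object; it is not the sso_freeze source, and it uses a small-field approximation that ignores the cos(Dec) factor. The function and argument names are invented for the example.

import numpy as np

def freeze(ev_time, ev_ra, ev_dec, eph_time, eph_ra, eph_dec):
    """Shift each event by the object's apparent motion since a reference epoch."""
    obj_ra = np.interp(ev_time, eph_time, eph_ra)    # object position at each event time
    obj_dec = np.interp(ev_time, eph_time, eph_dec)
    ra0, dec0 = eph_ra[0], eph_dec[0]                # reference-epoch position
    return ev_ra - (obj_ra - ra0), ev_dec - (obj_dec - dec0)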

This project is supported by the Chandra X-ray Center under NASA contract NAS8-39073.

P4.9 Merging of Spectral Orders from Fiber Echelle Spectrographs

Petr Skoda (Astronomical Institute of the Academy of Sciences of the Czech Republic) Herman Hensberge (Royal Observatory of Belgium)

We have reviewed the data reduction of two fiber-fed echelle spectrographs (HEROS and FEROS), with emphasis on the similarity of the inconsistencies seen in the overlap regions of adjacent spectral orders before merging. The literature on echelle data reduction shows that such inconsistencies are commonly observed (and usually handled by rather heuristic, mostly interactive procedures). For both instruments, it appears to be the calibration unit that, through the flat fielding, introduces the major part of the problems. We discuss strategies to treat the problems and to remove the inconsistencies before merging the spectral orders with a minimum of interactive, subjective algorithms.
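
For context, merging two overlapping orders is often done with some weighting scheme in the overlap; the generic sketch below uses inverse-variance weights on the first order's wavelength grid and is only an illustration, not the procedure advocated by the authors.

import numpy as np

def merge_orders(wave_a, flux_a, var_a, wave_b, flux_b, var_b):
    """Combine order B onto order A's wavelength grid where the orders overlap."""
    fb = np.interp(wave_a, wave_b, flux_b, left=np.nan, right=np.nan)
    vb = np.interp(wave_a, wave_b, var_b, left=np.nan, right=np.nan)
    overlap = ~np.isnan(fb)
    w_a, w_b = 1.0 / var_a[overlap], 1.0 / vb[overlap]
    merged = flux_a.copy()
    merged[overlap] = (flux_a[overlap] * w_a + fb[overlap] * w_b) / (w_a + w_b)
    return merged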

P4.10 Automated Object Classification with ClassX

Anatoly Suchkov (STScI) Tom McGlynn, Eric Winter, Lorella Angelini, Michael Corcoran, Sebastien Derriere (NASA/GSFC) Megan Donahue (STScI) Stephen Drake (NASA/GSFC) Pierre Fernique, Francoise Genova (CDS) R.J. Hanisch (STScI) Francois Ochsenbein (CDS) W.D. Pence (NASA/GSFC) Marc Postman (STScI) Nicolas White (NASA/GSFC) Richard White (STScI)

We report preliminary results from the ClassX project. ClassX is aimed at creating an automated system to classify unclassified X-ray sources and is envisaged as a prototype of the Virtual Observatory. The ClassX team has used machine-learning methods to generate, or `train', classifiers from a variety of `training' data sets, each representing a particular sample of known objects that have measured X-ray fluxes complemented, wherever possible, with data from other wavelength bands. Specifically, in this paper a classifier is represented by a set of oblique decision trees (DT) induced by the DT generation system OC1. We integrate different classifiers into a network, in which each classifier can make its own class assignment for an unclassified X-ray source from a classifier-specific list of class names (object types). An X-ray source is input into a classifier as a set of X-ray fluxes and possibly other parameters, including data in the optical, infrared, radio, etc. In the network, each classifier is optimized for handling different tasks and/or different object types. Therefore, given a set of unclassified X-ray sources, a user would generally select one classifier to make, for instance, the most complete list of candidate QSOs, but a different classifier to make the most reliable list of candidate QSOs. Still other classifiers would be selected to make similar lists for other object types. Along with the straightforward class-name assignment, a network classifier outputs the probability for a source to belong to the assigned class as well as the probabilities that the source belongs, in fact, to other classes in the given class-name list. We illustrate the current capabilities of ClassX and the emerging concept of a classifier network with results obtained with classifiers trained on data from ROSAT (the WGA catalog), complemented with data from the Guide Star Catalog (GSC2) and the 2-Micron All-Sky Survey.
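
The sketch below mimics a single classifier node of such a network with a standard axis-parallel decision tree from scikit-learn; OC1's oblique trees split on linear combinations of features, which this stand-in does not do, and the feature columns and class labels are invented for the example.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 3))            # e.g. [X-ray hardness ratio 1, ratio 2, B magnitude]
y_train = rng.integers(0, 3, size=300)         # toy labels: 0=star, 1=QSO, 2=galaxy

clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)

X_new = rng.normal(size=(5, 3))                # "unclassified" X-ray sources
print(clf.predict(X_new))                      # class assignments
print(clf.predict_proba(X_new))                # probabilities over the class list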

P4.11 Genetic Programming and Other Fitting Techniques in Galactic Dynamics

Peter Teuben (University of Maryland)

Fitting is the bread and butter of astronomy. Non-linear fitting problems, especially large-scale ones, present many difficulties, and genetic programming is one possible solution. In this poster I present some galaxy-dynamics fitting techniques, in particular for velocity fields, and apply new techniques such as genetic programming.
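
To give a flavour of the evolutionary approach, the toy sketch below uses a genetic-algorithm-style search to fit a simple rotation-curve model to synthetic velocities; genetic programming proper evolves whole expressions rather than parameter vectors, and the model and operators here are invented for the example.

import numpy as np

rng = np.random.default_rng(1)
r = np.linspace(0.5, 20.0, 40)
v_obs = 200.0 * r / (r + 3.0) + rng.normal(0.0, 5.0, r.size)        # synthetic rotation curve

def fitness(p):                       # negative chi-square of the model
    v_max, r_c = p
    return -np.sum((v_obs - v_max * r / (r + r_c))**2)

pop = rng.uniform([50.0, 0.5], [400.0, 10.0], size=(60, 2))         # initial population
for generation in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]                         # keep the fittest
    children = parents[rng.integers(0, 20, size=(40, 2)), [0, 1]]   # mix parameters (crossover)
    children += rng.normal(0.0, [5.0, 0.2], size=children.shape)    # mutation
    pop = np.vstack([parents, np.abs(children)])                    # keep parameters positive
print(pop[np.argmax([fitness(p) for p in pop])])                    # best-fit (v_max, r_c)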

P4.12 FLY: A Tree Code towards the Adaptive Mesh Refinement

U. Becciani, V. Antonuccio-Delogu (INAF - Astrophysical Observatory of Catania)

We have developed a powerful N-body code to evolve three-dimensional self-gravitating collisionless systems with a large number of particles (N > 10^7). FLY (Fast Level-based N-bodY code) is a fully parallel code based on a tree algorithm. It adopts periodic boundary conditions implemented by means of the Ewald summation technique. FLY is based on the one-sided communication paradigm to share data among processors, which access remote private data while avoiding any kind of synchronism. The code was originally developed on a CRAY T3E system using the logically SHared MEMory access routines (SHMEM) and was then ported to SGI ORIGIN systems and to the IBM SP, on the latter making use of the Low-Level Application Programming Interface routines (LAPI). FLY rests on four main characteristics: it adopts a simple domain decomposition, a grouping strategy, a dynamic load-balancing mechanism without significant overhead, and data buffering that minimizes data communication. It is a free, open-source code, and more details are available at http://www.ct.astro.it/fly/. This paper shows an example of the integration of a tree code with an adaptive grid scheme.

PARAMESH is a package of Fortran 90 subroutines using the SHMEM and MPI libraries, designed to provide an application developer with an easy route to extend an existing serial code that uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement (AMR). The computational domain is hierarchically subdivided into sub-blocks following a 3D tree data structure.

The use of SHMEM and the tree data structure allows easy integration with FLY, which adopts the same data structure and the same parallel communication library. This implementation of FLY with PARAMESH makes the FLY output available on an adaptive grid with the same data structure as PARAMESH. The adaptive grid can be read by FLY or generated by it, and contains the potential field of each data block of the grid, following the PARAMESH scheme.

Moreover, the same FLY procedure will also be available as an external routine that creates a PARAMESH grid from any point distribution in a cubic region, e.g. a cosmological dark-matter distribution. This new implementation will allow the FLY output, and more generally any binary output, to be used with any hydrodynamics code that adopts the PARAMESH data structure to study compressible-flow problems.
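
As a toy illustration of the kind of hierarchical block structure described above, the sketch below recursively splits a cubic domain into eight sub-blocks wherever the particle count exceeds a threshold; it is a generic oct-tree refinement, not FLY or PARAMESH code, and the thresholds are arbitrary.

import numpy as np

def refine(points, center, half, max_per_block=32, depth=0, max_depth=8):
    """Return a list of (center, half-size, n_particles) leaf blocks."""
    if len(points) <= max_per_block or depth == max_depth:
        return [(center, half, len(points))]
    blocks = []
    for dx in (-0.5, 0.5):
        for dy in (-0.5, 0.5):
            for dz in (-0.5, 0.5):
                sub_c = center + half * np.array([dx, dy, dz])
                inside = np.all(np.abs(points - sub_c) <= half / 2, axis=1)
                blocks += refine(points[inside], sub_c, half / 2,
                                 max_per_block, depth + 1, max_depth)
    return blocks

pts = np.random.default_rng(2).random((20000, 3))      # particles in the unit cube
leaves = refine(pts, center=np.array([0.5, 0.5, 0.5]), half=0.5)
print(len(leaves), "leaf blocks")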

P4.13 Classification using Labeled and Unlabeled Data

David Bazell (Eureka Scientific, Inc.) David Miller (Penn State University) Kirk Borne (Raytheon)

We will discuss several novel approaches to the exploration, understanding, and classification of astronomical data. We are exploring the use of unlabeled data for supervised classification and for semi-supervised clustering. Current automated classification methods rely heavily on supervised learning algorithms that require training data sets containing large amounts of previously classified, or labeled, data. While unlabeled data is often cheap and plentiful, using a human to classify the data is tedious, time consuming, and expensive. We are examining methods whereby supervised classification techniques can use cheaply available, large volumes of unlabeled data to substantially improve their ability to classify objects. We are also exploring a unified framework where learned models provide clustering or classification solutions, or both, depending on the needs of the user.
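
A small, generic illustration of learning from labeled and unlabeled data together is given below, using scikit-learn's label propagation with unlabeled points marked by -1; it stands in for, and is not, the framework described above, and the synthetic data set is invented for the example.

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.semi_supervised import LabelPropagation

X, y = make_blobs(n_samples=500, centers=3, random_state=0)
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(500) < 0.95] = -1         # hide ~95% of the labels

model = LabelPropagation().fit(X, y_partial)
print((model.transduction_ == y).mean())       # fraction of points labeled correctly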

P4.14 Predictive Mining of Time Series Data in Astronomy

Eric Perlman, Akshay Java (Joint Center for Astrophysics, UMBC)

We discuss the development of a Java toolbox for astronomical time series data. Rather than using methods conventional in astronomy (e.g., power spectrum and cross-correlation analysis), we employ rule-discovery techniques commonly used in analyzing stock-market data. By clustering patterns found within the data, rule discovery allows one to build predictive models that forecast when a given event might occur or whether the occurrence of one event will trigger a second. We have tested the toolbox and accompanying display tool on datasets representing several classes of objects from the RXTE All Sky Monitor. We use these datasets to illustrate the methods and functionality of the toolbox. We have found predictive patterns in several ASM datasets. We discuss possible applications, for example in maximizing the return for scheduling either survey or target-of-opportunity observations. We also discuss problems faced in the development process, particularly the difficulties of dealing with discretized and irregularly sampled data.
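
To make the rule-discovery idea concrete, the toy sketch below discretizes a light curve into symbols and estimates how often a rise is followed by a flare-like jump within a fixed horizon; the thresholds, symbols, and rule are invented for the example (and the toolbox itself is written in Java, not Python).

import numpy as np

rng = np.random.default_rng(3)
flux = np.cumsum(rng.normal(0.0, 1.0, 2000))           # toy light curve (random walk)
step = np.diff(flux)

symbols = np.where(step > 1.5, 'F',                    # 'F' = flare-like jump
          np.where(step > 0.0, 'r', 'd'))              # 'r' = rise, 'd' = decline

horizon = 5
rises = np.where(symbols == 'r')[0]
hits = sum('F' in symbols[i + 1:i + 1 + horizon] for i in rises)
print("P(flare within %d steps of a rise) = %.2f" % (horizon, hits / len(rises)))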

P4.15 NIRCAM Image Simulations for NGST Wavefront Sensing

Russell B. Makidon, Anand Sivaramakrishnan, Donald F. Figer, Robert I. Jedrzejewski, Howard A. Bushouse, John E. Krist, H. S. Stockman, Philip E. Hodge, Nadezhda M. Dencheva, Bernard J. Rauscher, Victoria G. Laidler, Catherine Ohara, David C. Redding, Myungshin Im, Joel D. Offenberg

The Next Generation Space Telescope (NGST) will be a segmented, deployable, infrared-optimized 6.5 m space telescope. Its active primary segments will be aligned, co-phased, and then fine-tuned in order to deliver image quality sufficient for the telescope's intended scientific goals. Wavefront sensing used to drive this tuning will come from the analysis of focussed and defocussed images taken with its near-IR science camera, NIRCAM. There is a pressing need to verify that this will be possible with the near-IR detectors that are still under development for NGST. We create simulated NIRCAM images to test the maintenance phase of this plan. Our simulations incorporate Poisson and electronics read noise, and are designed to be able to include various detector and electronics non-linearities. We present our first such simulation, using known properties of HAWAII HgCdTe focal-plane array detectors. Detector effects characterized by the Independent Detector Testing Laboratory are included as they become available. Simulating InSb detectors can also be done within this framework in the future. We generate Point Spread Functions (PSFs) for a segmented aperture geometry with various wavefront aberrations, and convolve these with typical galaxy backgrounds and stellar foregrounds. We then simulate up-the-ramp (MULTIACCUM in HST parlance) exposures with cosmic-ray hits. We pass these images through the HST NICMOS `CALNICA' calibration task to filter out cosmic-ray hits. The final images are to be fed to wavefront sensing software, in order to find the ranges of exposure times, filter bandpass, defocus, and calibration-star magnitude required to keep the NGST image within its specifications.
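
The heavily simplified sketch below shows the core of such a PSF simulation: a pupil mask with a defocus phase term is Fourier transformed to give a PSF image. A single circular pupil stands in for the real segmented aperture, and the grid size and defocus amplitude are arbitrary.

import numpy as np

n = 512
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r = np.hypot(x, y)
pupil = (r < n // 8).astype(float)                     # circular aperture mask

defocus = 0.5 * (r / (n // 8))**2 * pupil              # quadratic phase term, in waves
field = pupil * np.exp(2j * np.pi * defocus)

psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))))**2
psf /= psf.sum()                                       # normalized (defocused) PSF image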

P4.16 Integrating Statistical Tools with Databases

Adrian Pope, Tamas Budavari, Alex S. Szalay (JHU) Istvan Szapudi (Univ of Hawaii) Andrew J. Connolly (Univ of Pittsburgh)

With the advent of large astronomical surveys, the way scientific calculations are done is changing. Dedicated telescopes are collecting incredible amounts of information about the Universe, which is stored in databases. We describe a method to do angular clustering analysis of 50 million galaxies in the Science Archive of the Sloan Digital Sky Survey. Using a Web Service attached to the database, we stream only the relevant coordinates into eSpICE, a code that computes the angular correlation function directly. The pipeline is set up to take a query that selects galaxies by ranges of absolute luminosity and spectral type and returns the angular clustering results. The processing time is remarkable: it is several orders of magnitude faster than traditional implementations of the optimal two-point estimator.
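
For reference, the quantity being computed can be illustrated with a brute-force Landy-Szalay estimate on a small patch, as in the sketch below; eSpICE itself uses a much faster algorithm, and the toy catalogs, bins, and flat-sky approximation here are assumptions for the example.

import numpy as np

rng = np.random.default_rng(4)
n = 1000
ra_d, dec_d = rng.uniform(0, 5, n), rng.uniform(0, 5, n)   # toy "galaxies" (deg)
ra_r, dec_r = rng.uniform(0, 5, n), rng.uniform(0, 5, n)   # random comparison catalog

def pair_counts(ra1, dec1, ra2, dec2, bins):
    """Histogram of angular separations (flat-sky approximation)."""
    d_ra = (ra1[:, None] - ra2[None, :]) * np.cos(np.radians(dec1))[:, None]
    d_dec = dec1[:, None] - dec2[None, :]
    return np.histogram(np.hypot(d_ra, d_dec), bins=bins)[0].astype(float)

bins = np.linspace(0.01, 1.0, 11)                          # angular bins in degrees
dd = pair_counts(ra_d, dec_d, ra_d, dec_d, bins)
rr = pair_counts(ra_r, dec_r, ra_r, dec_r, bins)
dr = pair_counts(ra_d, dec_d, ra_r, dec_r, bins)
w = (dd - 2 * dr + rr) / rr                                # Landy-Szalay estimator
print(w)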

