How the real-time pipeline works.

The main controlling Perl script is:


    (1) <Data directory> (e.g. /mnt/dw2-07)
    (2) <Process Nodes> (10)
    (3) <Catalog Nodes> (7)
    (4) <RunTime> (optional)
    (5) <Date> (optional)

    Example 1 (real-time): /mnt/dw2-07 10 7
    Example 2 (reanalysis): /mnt/dw2-07 10 7 8.0 20060701

Arguments: root location for analysis products, number of PBS nodes to use for cleaning and photometering the images, and number of nodes to use for matching the current photometry with the past data. If reanalyzing data, include the runtime for the PBS jobs and the date to re-reduce.

This program creates the run parameters for real-time processing and checks the status of running jobs and programs. It first checks the location for data products: it tests the response status of the dw2 node where the data products are going (using a ping) and checks that it has sufficient space to run the analysis. It creates the necessary data product directories for clean images, flats, darks and photometry. It also creates a link to the location where images arriving from Palomar are to be placed, so that the fileserver can see them. It then retrieves the current observation scheduling plan from the PQ computer at Palomar (Q16) and compares the parameters in this file against the observation schedule. If the observation schedule file does not have an entry for the current UT date, the pipeline will not start unless a date argument is specified; the date and runtime arguments exist specifically to allow past nights to be reanalyzed.
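
The startup checks described above can be pictured with a minimal Python sketch. The real controller is a Perl script whose internals are not shown in this document, so the helper names and thresholds here are invented:

```python
import datetime
import shutil
import subprocess

def node_reachable(host):
    """Ping the dw2 node once to confirm the data-product area responds."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

def enough_space(path, needed_gb):
    """Check that the analysis area has room for a night of data products."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= needed_gb

def ut_date():
    """Current UT date (YYYYMMDD), used when no date argument is given."""
    return datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%d")
```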

If date is not defined, the current UT date is used. The runtime argument is the period for which a PBS node will be used for the real-time analysis. If it is not defined, it is derived automatically from an estimate of the sunset and sunrise times at Palomar.

Two types of jobs are started on the tg-nodes by this script: data processing jobs and catalog processing jobs. Data processing jobs involve the flatfielding, cleaning, photometering and WCS determination. Catalog jobs involve comparing photometry from past images with the current photometry to find transients and variable stars. These jobs are currently submitted to a PBS queue via (processing) and (catalogs).

Each submitted PBS job is monitored by this script every three minutes (using "qstat -a"). The running job ids are compared against the submitted ids, and jobs that are not running or queued during the expected run period are resubmitted.
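
The core decision of that monitoring loop can be sketched as follows, assuming a typical `qstat -a` layout (one job per line, a numeric id in the first column and a single-letter state column); the exact column positions in the real output may differ:

```python
import re

def running_job_ids(qstat_output):
    """Collect ids of jobs that `qstat -a` reports as running (R) or queued (Q)."""
    ids = set()
    for line in qstat_output.splitlines():
        m = re.match(r"\s*(\d+)\S*\s", line)
        if m and re.search(r"\s[RQ]\s", line):
            ids.add(m.group(1))
    return ids

def jobs_to_resubmit(submitted_ids, qstat_output):
    """Submitted jobs that are neither running nor queued should be resubmitted."""
    return submitted_ids - running_job_ids(qstat_output)
```

In the real script this comparison is repeated every three minutes for as long as the run period lasts.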

In addition to these jobs, the script can start and monitor the fileserver. The fileserver passes the process jobs running on the tg-nodes the names of raw data files to clean and photometer. The fileserver script is also checked every few minutes and restarted if it dies. Both the fileserver and the pipeline controller should run continuously.

  • USAGE: <date> <col> <filterset> <basedir> <runtime> <que>
  • Example: 20040424 1 G /mnt/dw2-07/rawdata 8.0 reserved

    Arguments: date (UT date from runpipe), col (job (or node) number), filter set (obtained from the obschedule.txt and/or obsplan file), working directory location, runtime, and queue.

    This is the script that launches the PBS data processing jobs onto the tg-nodes and exits. From within each PBS job the node process manager script is then launched.


  • USAGE: <date> <col> <filterset> <basedir>
  • Example: 20040424 1 G scratch

    Arguments: UT date, job, filter set, tg-node working location

    This program runs on the tg-nodes and places data in the /scratch directory. When dark files arrive from Palomar it copies them from their location to scratch and also finds the most recent flatfield for the specified filter. It creates darks and flats for all of the PQ camera's columns. It then waits for the existence of an "Activate" file. When it finds this file it knows that data is about to arrive, and it launches two cleaning processes (one per CPU). These child processes connect to the node-controlling script and tell it about their status. If an error is received from a child process, that process is killed and a new one is started.
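
The wait-and-launch step can be illustrated with this Python sketch (the polling interval and command layout are hypothetical; the real script is Perl):

```python
import os
import subprocess
import time

def wait_for_activate(basedir, poll_s=5.0, timeout_s=None):
    """Block until the fileserver creates the 'Activate' flag file."""
    flag = os.path.join(basedir, "Activate")
    waited = 0.0
    while not os.path.exists(flag):
        if timeout_s is not None and waited >= timeout_s:
            return False
        time.sleep(poll_s)
        waited += poll_s
    return True

def launch_cleaners(cmd):
    """Start two cleaning processes, one per CPU, with pipes back to the manager."""
    return [subprocess.Popen(cmd + ["p%d" % i],
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            for i in (1, 2)]
```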


  • USAGE: <date> <process> <basedir> <whereto> <filterset> <pistonfile>
  • Example: 20040424 p1 /mnt/dw2-07/rawdata /scratch J /home/donalek/newpipe/aux/scale02142006.txt

    Arguments: UT date, processes (p1 or p2), root data products directory, working directory, filter set, piston file.

    This program runs the cleaning, photometry and astrometry for a tg-node (one instance is run per CPU). Errors are communicated to the controlling script via stderr and cause termination of a process; pipes for stdin and stdout are also connected. This process script receives the names of files to reduce via a TCP connection to the fileserver: it asks for files, and the fileserver gives it the name and location of a file or tells it to come back later. The raw data files are untarred and unzipped, cleaned, photometered, and WCS values are inserted. The photometry, clean images and masks are copied back to the data area on the dw2 nodes.
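
The file-request exchange might look like the following Python sketch; the actual wire protocol is not documented here, so the `NEXT` and `WAIT` messages are purely illustrative:

```python
import socket

def request_file(host, port):
    """Ask the fileserver for one raw file name; None means 'come back later'."""
    with socket.create_connection((host, port), timeout=10) as conn:
        conn.sendall(b"NEXT\n")
        reply = conn.makefile().readline().strip()
    return None if reply == "WAIT" else reply
```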


  • USAGE: <basedir> (/mnt/dw2-07/rawdata) <date> (optional) <runtime> (optional)
  • Example 1 (real-time): /mnt/dw2-07/rawdata
    Example 2 (reanalysis): /mnt/dw2-07/rawdata 20060701
    Example 3 (reanalysis): /mnt/dw2-07/rawdata 20060701 6.0

    Arguments: base data directory, date, runtime.

    This program monitors the arrival of data in an incoming directory and gives the filenames to clients that connect to it. Flats and darks arriving from the telescope are automatically moved to their directories. The arrival of "sf" or "do" type files causes it to create an "Activate" file, which will start the cleaning jobs on the tg-nodes. All tg-node jobs rely on this program to receive files to work with. It is insulated from various error signals to ensure it keeps running; nevertheless, if it dies it will be automatically restarted by the pipeline controller. As with the controller, a date argument is required if existing data is to be re-reduced.
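
A guess at the fileserver's dispatch rule, in Python; the exact filename conventions for darks, flats and the "sf"/"do" science files are assumptions, not taken from the pipeline source:

```python
def route_arrival(fname):
    """Decide what to do with a newly arrived file (assumed name prefixes)."""
    if fname.startswith("dark"):
        return "move-to-darks"
    if fname.startswith("flat"):
        return "move-to-flats"
    if fname.startswith(("sf", "do")):
        return "create-activate"   # triggers the cleaning jobs on the tg-nodes
    return "serve-to-client"       # ordinary raw frame: hand out on request
```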

  • USAGE: <date> <firstcol> <ncols> <filterset> <basedir> <runtime> <jobnumber> <queue>
  • Example: 20040424 1 4 G /mnt/dw2-07 8.0 1 reserved

    Arguments: the UT date, first column, number of columns to match on this node, filter set, base directory, runtime, job number, and queue.

    The script is similar to the data-processing launcher in that it launches a catalog matching job script on PBS and exits (a "catalog process"). Unlike the data-processing case, the range of columns that it is in charge of must be specified, since only data from those specific columns are matched on the node (there is too much photometry data to match photometry for all columns on all nodes).


  • USAGE: <date> <firstcol> <totcols (one of 2, 4, 6, 8)> <filterset> <basedir> <jobnumber (optional)>
  • Example: 20040424 11 8 G /mnt/dw2-08/rawdata 1

    Arguments: UT date, first column, number of columns to match on this node, filter set, base directory, job number (optional).

    This script manages the two scripts that run on each tg-node (1 per cpu). The number of columns of photometry catalogues to process on that node are split between these two processing scripts. The scripts communicate with this controlling script and are killed and restarted if errors are received from the individual child processes.


  • USAGE: <date> <process> <basedir> <whereto> <filterset (J,G,A)> <firstcol> <ncols> <job (optional)>

  • Example: 20040424 c1 /mnt/dw2-08/rawdata /scratch/ajd J 11 4 1

    Arguments: UT date, process id, data location, tg-node working space, filter-set, first column, number of columns to process with this job, job id number.

    This script runs the catalog matching and cutout processes. It first reads the list of data that has been pre-staged for a given night. As photometry files appear in a temporary directory (/mnt/dw2-08/rawdata/temphot) they are globbed. Files within the column range that is processed on a particular node are copied to the scratch area of the node and moved to their permanent location, /mnt/dw2-08/rawdata/phot/(date)/(column). The photometry files are opened and the first line is read; the data from this determines the photometry catalog necessary to match with that file. If the matching program has not been given this pre-staged catalog file before, it is given to the match program along with the name of the file to compare against it; otherwise only the new photometry file is given.

    The matching program is run via three pipes: files are input through stdin, transients and other information are read through stdout, and errors that cause termination are read from stderr. Transient detections that come from the matching program are sent to SIAP to access PQ coverage of the transient location, and FITS files (masks and clean images) are copied. If there is sufficient coverage, the FITS filenames, WCS info and IDs are fed to the cutout program, which makes the cutouts and appends new transients to a list.
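
The three-pipe arrangement can be sketched like this generic Python wrapper (not the pipeline's actual Perl code, and the matcher's real output format is not shown):

```python
import subprocess

def run_matcher(cmd, photometry_files):
    """Feed file names to the matcher on stdin and read transients from stdout.
    A non-zero exit (with a message on stderr) terminates the process."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, text=True)
    out, err = proc.communicate("\n".join(photometry_files) + "\n")
    if proc.returncode != 0:
        raise RuntimeError("matcher died: " + err.strip())
    return out.splitlines()
```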


    Arguments: none. A Perl module containing many of the processing functions required by the pipeline scripts.

    Various important error conditions are emailed to a specified user if they occur and are likely to affect the running of the programs. Most of the programs have log files, which are auto-flushed.



      (1) <RA> (degrees 185.1)
      (2) <Dec> (degrees 13.1)
      (3) <deltaRA> (degrees 0.53)
      (4) <deltaDec> (degrees 0.13)
      (5) <working directory> (/mnt/dw2-08/rawdata)
      (6) <filterset> (one of z2, z1, gi, gr, JR, JU, JI, JB, J (Johnson), G (Gunn), A (All))
      (7) <max runtime> (10.4 = 10 hours 24 min)
      (8) <queue> (optional, default dque, alternative: reserved)
      (9) <jobs> (optional, number of nodes, default 1, max 10)

    Example: 185.1 13.1 0.53 0.13 /mnt/dw2-08/rawdata A 10.0 dque 5

    Arguments: RA, Dec, how many degrees long in RA, how wide in Dec, the location of a working directory, the filter set, the maximum runtime per job, the PBS queue to use, and the number of nodes to use at one time for processing.

    This program photometers a set of clean images in a given area using Sextractor. The photometry data is loaded into two MSSQL tables on Winvoy. One of these contains the information for each detection (magnitudes, flags, etc.) while the other contains information about the frame (its RA and Dec limits, the average seeing and average background). Each input region is divided into jobs, each containing a 3 square degree chunk. These jobs are submitted to PBS, where the script is run. The status of the PBS queue is monitored every few minutes; when one job finishes, a new one is submitted.
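
The job division can be illustrated as follows; the real tiling is not specified beyond "3 square degree chunks", so this sketch simply cuts the region into RA strips of roughly that area:

```python
import math

def split_region(ra, dec, delta_ra, delta_dec, chunk_sq_deg=3.0):
    """Split a (delta_ra x delta_dec) degree region into one strip per PBS job.
    Each tuple is (strip RA start, Dec, strip width in RA, height in Dec)."""
    area = delta_ra * delta_dec
    njobs = max(1, math.ceil(area / chunk_sq_deg))
    step = delta_ra / njobs
    ra_start = ra - delta_ra / 2.0
    return [(ra_start + i * step, dec, step, delta_dec) for i in range(njobs)]
```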


  • USAGE: <RA> <Dec> <deltaRA> <deltaDec> <filter> (optional <local location> <node location> <piston file>)
  • Example: 185.1 13.1 0.533 0.145 J /mnt/dw2-08/rawdata /scratch /home/ajd/Backup/scale02142006.txt

    Arguments: RA, Dec, width in RA to process with this job, width in Dec, filter or filter-set, working location, staging location on the tg-node, piston file.

    This script calls the SIAP service to return all the clean images in the specified area. These images, as well as the associated masks, are copied to the data processing location. Sextractor is run on the files to produce a catalog for each frame. Each catalog is reprocessed to add a unique detection ID, the detection RA and Dec, a frame ID, the average seeing and background of the frame, the HTM IDs and Cartesian coordinates of the objects, the frame limits, exposure time, the UT date and observation time, filter id, and the number of detections in the file. This data is then bulk inserted into the frame and Sextractor tables on Winvoy.


  • USAGE: <date> <ra> <dec> <delt_ra> <basedir> <firstcol> <ncols> <nodes>
  • Example: 20040424 315.0 0.0 60 /mnt/dw2-07/rawdata 1 28 7

    Arguments: UT date of observations, RA start, central Dec, length in RA to stage, root working directory for staging, first column to stage, number of columns, number of nodes to be used in matching catalogs to new photometry.

    This program is used to pre-stage the photometry from the database into binary catalog files that the matching program reads. Since a single node cannot handle all the photometry data for a scan, the columns are split across a number of nodes. The size of each region extracted from the database is matched to the number of columns that will be processed on a node. For 7 tg-nodes there will be 4 columns processed on each node, and 2 will go to each child process.
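
The worked example above (28 columns split across 7 nodes, with 2 child processes per node) corresponds to the following arithmetic, sketched in Python:

```python
def split_columns(firstcol, ncols, nnodes, procs_per_node=2):
    """Assign contiguous column ranges first to nodes, then to each node's
    child processes; returns a list per node of (start_column, count) pairs."""
    per_node, remainder = divmod(ncols, nnodes)
    if remainder:
        raise ValueError("columns are expected to divide evenly across nodes")
    per_proc = per_node // procs_per_node
    plan = []
    for n in range(nnodes):
        start = firstcol + n * per_node
        plan.append([(start + p * per_proc, per_proc)
                     for p in range(procs_per_node)])
    return plan
```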



      <File> (List of images and locations)
      <Nodes> (Number of nodes to use)
      <Time> (Maximum time per job)
    Example: filename 10 8.0

    To determine and insert the WCS for a list of frames using PBS jobs on the tg-nodes. This program splits the list of images to process between the number of jobs/nodes to run and submits the PBS jobs. The job status is monitored and new jobs are submitted as others finish.
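
The list-splitting step amounts to something like the following (a round-robin deal is one simple choice; the real script's exact partitioning is not documented here):

```python
def split_list(frames, njobs):
    """Deal the frame list round-robin across the PBS jobs."""
    return [frames[i::njobs] for i in range(njobs)]
```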


    USAGE: <file> <working directory> <start dir> <job number (optional)>
    Example: filename /scratch /mnt/dw2-04 1

    To manage a sequence of WCS determination jobs on a PBS node. This script makes an individual call for each frame.



      (1) Fits image filename
      (2) Data base directory (e.g. /mnt/dw2-07/rawdata, default ./)
      (3) Work directory (e.g. /scratch/p1, default /scratch/ajd)
      (4) Output catalogue name (default
      (5) Threshold (Sextractor detection level, default 2.5)
      (6) Reference stars in match (default 60)
    Example: filename

    To determine and insert the WCS for an individual clean data frame.