DEFINITIONS

I'll refer to a "block" of observations as a set of consecutive 2-D observation files for one particular object (an asteroid, star, or calibration/flats). A block may contain an AB "pair" or one or more ABBA "quads". A "set" of observations is the group of blocks needed to directly process one asteroid spectrum at one point in time (i.e. blocks for the asteroid, a nearby A or G star, and blocks for low and high airmass standard G stars). The "reduction" process converts each individual observation file (or, rarely, a merger of several too-faint observation files) into a 1-D vector of spectrum intensities, where each element of the vector corresponds to a wavelength. The "extinction" process converts each set of observations into a merged 1-D vector of spectrum intensities from which atmospheric absorption effects (water, CO2) have been removed. The extinction process also includes sunlight reflection corrections.

REDUCTION PROCESS (old)

Prior to March, a set of observations consisted of blocks (one or more ABBA quads) for an asteroid, a calibration block (flats, arc) possibly shared by several asteroids, and two blocks (or more) for low and high airmass G-stars taken at some other time during the evening. We used a slit size of 0.8x60 ("LowRes60") for the observations, in Prism mode. For each asteroid and star block, the old reduction process was:

1) Read the files (process each ABBA quad separately).
2) Apply flat-fielding via a block of flat files (a "superflat" file). (Note: Bobby Bus suggests using a new block of flat/arc files for each telescope reorientation?)
3) Remove GCRs.
4) Rectify the files via an arc file - this expands the wavelength pixel range from 512 to 1024 pixels and removes curvature.
5) Do background subtraction by subtracting B's from A's and then using robomean to clean up residuals - ignoring known artifacts like nearby intruder stars.
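As an aside, the A-minus-B subtraction with robust residual cleanup in step 5 amounts to something like the following sketch (illustrative Python only; the array orientation, 3-sigma threshold, and iteration count are my assumptions, not the actual IDL robomean parameters):

```python
import numpy as np

def subtract_pair(a_frame, b_frame, sigma_clip=3.0, iterations=3):
    """Illustrative sketch of old step 5: subtract B from A, then estimate
    any residual background with a sigma-clipped ("robust") mean of each
    wavelength column, treating the traces themselves as outliers."""
    diff = a_frame - b_frame          # A-minus-B removes most sky background
    residual = diff.copy()
    for _ in range(iterations):       # iterate the clipping a few times
        center = np.nanmean(residual, axis=0)
        spread = np.nanstd(residual, axis=0)
        clipped = np.abs(residual - center) > sigma_clip * spread
        residual[clipped] = np.nan    # drop traces / intruder-star pixels
    background = np.nanmean(residual, axis=0)
    return diff - background          # cleaned 2-D difference image
```

The real script also honors the bad-pixel checks mentioned below; this sketch omits them.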
Gaussian trace plots are generated and used later to re-run quads where the background subtraction or star intrusion looks faulty.
6) Find the center of each trace and use an "optimal" extraction process to reduce the trace to a 1-D vector.
7) Store each vector separately in an "sred.fits" file.

Note - throughout each step, we perform checks to ignore bad pixels. We are also currently incapable of detecting saturated pixels until the end of the reduction process - at least not directly (saturation is usually obvious when inspecting the final sred result files).

Note - all these actions are controlled by one IDL script, and I call that script for each quad via a batch file, so no manual input is needed. Once I figure out which flats go with each quad, I can run the batch file and process (or re-process) a whole night's worth of data in about 30 minutes. Reprocessing is frequently needed, however, as issues are uncovered with intruding stars (for example).

REDUCTION PROCESS (new)

Beginning with the March 2010 observations, a set of observations consists of blocks (one or more ABBA quads) for an object/asteroid, a calibration block (flats, arc) unique to the asteroid, a block for a G-star near the asteroid, and a block for an A-star near the asteroid. We also still have two blocks (or more) for low and high airmass G-stars taken at some other time during the evening. We used a slit size of 0.8x15 ("LowRes15") for the observations, single-order, still in Prism mode? The new reduction process uses Spextool (run "xspextool" inside IDL) as follows:

1) Create and store a "superflat" file for each block of flat files.
2) Create and store a "wavecal" file for each block of arc files (we only use one at a time, so no merging is needed).
3) For each block for an asteroid or A/G star (click the "Point Source" button):
3a) Read in a single AB (or BA) pair.
3b) Indicate the proper superflat and wavecal file to use.
Spextool applies the flat-fielding and rectification immediately and subtracts the B from the A. Results are displayed visually. (This corresponds to steps #1 through half of #5 of the old reduction procedure.)
3c) Check for saturation. The "ximgtool" window displays saturated pixels in red. I've never seen this yet, however, because Spextool uses information in the FITS file headers to somehow avoid false saturation effects in all known cases (so far)... nice!
3d) Back in Spextool, click the "Make Spatial Profiles" button. This generates a sideways profile of the A-minus-B pair, i.e. two Gaussians (one positive, one negative). Results are displayed in a new "Plot Profiles" window, where any residual background noise is also visible. (All this is part of step #5 in the old procedure.) Note - the purpose of this step is merely to allow the user to manually override any default A-minus-B settings that don't look right - if one of the traces is near the edge, for example. I never had to do this. [ NOTE FROM PETER: There's probably also an opportunity here to tweak the left and right edge of each aperture. This is likely to be important for our faintest targets. ]
3e) Click the "Find/Store Ap Positions" button. This marks the peak of the A and B traces with blue lines, allowing me to see whether the tool found the real peaks or something bogus due to poor signal or background noise. (Inside steps #5 and #6 of the old procedure.)
3f) Click the "Trace Objects" button, then "Define Apertures". This visually displays the width of each trace that's going to be extracted (next step) as well as the location of the background region that will be used for final background subtraction. Back in the ximgtool window, the display is marked up with lines so you can see the 2-D location of each trace and the spots to be used for background subtraction - very nice features that make it obvious when we don't have a good enough signal-to-noise ratio to detect a real trace.
We don't seem to be able to tweak the choice of background spots, however - which means we can't remove intruding stars (OUCH). [ NOTE FROM PETER: We might be able to do better here with substantial effort by stepping outside the Spextool framework. I'm strongly hoping that won't be necessary. But, for example, we could help it stay on the trace in faint regions by studying the trace with larger X-axis binning. ]
3g) Click the "Extract" button to do an optimal extraction (the same procedure as step #6 in the old procedures).
3h) Store the results in two separate files (A in one, B in the other; step #7 of the old procedures).
3i) Note - steps #3d to #3h above can be collapsed into one step by selecting the "Do All Steps" button within Spextool. The various output windows flash by quickly, so one has to be alert to detect issues with the Gaussian width, background, peaks, etc., but it's not too hard.
4) If any AB pairing fails step 3 above because the trace is too faint to be identified, an alternate procedure allows us to combine the 2-D input files together before they are processed. I won't go through all those details, but it is another capability that could help us process very weak traces. Alas, if a star intrudes onto any of the frames, we must remove the affected frames.

Notes:
1) The steps above result in the same output files, but the format and resolution of those files are different. Each sred file contains wavelength, flux, and error data, and the pixel range corresponding to wavelengths 0.8 to 2.5 microns is about 430 pixels (instead of 1024).
2) GCR removal happens at some point, but I'm not sure exactly where. We do have more residual GCRs, but later tools are better at removing them, so the overall impact is apparently less severe than with the old process.
3) The new reduction process also internally handles bad pixels - nice.
Basically, Spextool appears to do all the steps of our old reduction process, though it might do them in a slightly different way or sequence.
4) Since typing, clicking on buttons, and inspecting visual output are all integral parts of Spextool, most of the process cannot be automated. This makes it hard to re-run Spextool, and it makes the initial process take a lot longer (about 4 hours for a night's worth of data?). But the results look great... and in most cases we won't have to re-run anything, because the immediate visual feedback prevents later surprises (like those from intruding stars).

EXTINCTION PROCESS (old)

Note: The detailed process below sounds complicated - and it is - because it has evolved as we have added various improvements and tweaks. Here's a simplistic view of the process:
1a) Produce extinction curves based on low/high airmass star observations.
1b) Apply an extinction curve to each observation.
1c) Divide out the low airmass star.
1d) Normalize, weight, and merge the results per object.

Now for the more complex, detail-laden version:
1) Each individual 1-D spectrum file (sred) is manually given a weight (in a database) and assigned a low and high airmass star block based on its airmass. Other comments from the logs, or unresolved issues in the Gaussian trace plots, will affect the weights. Some sreds are weighted 0, effectively removing them from later processing.
2) Each individual 1-D spectrum file is smoothed via a 3-pixel-wide boxcar smoothing algorithm and stored in a "smoo3" directory (i.e. the originals aren't touched). Prior to the smoothing, we also blot out groups of pixels where we see obvious GCR spikes/dips. The result should be a fairly smooth 1-D spectrum that still retains subtle dips/spikes.
3) For each block of star observations, we create a single weighted, merged 1-D spectrum file for the whole block.
4) For some very dim observations, we merge the 1-D spectrum files together.
This merged "group" file is processed by later steps as a single observation file. Note - we didn't use this capability much, and the results were only marginally better (arguably) than processing all the files individually below.
5) For each combination of low and high airmass stars (from step 1), we create a new sred file to be used for airmass corrections (an "extinction curve"). We can generate more than one per night. Note - sometimes, when we can't get a good extinction curve from a night's worth of data, we can "force" the use of an extinction curve from a different night. This forced extinction curve override was used on 5 of the 13 observation evenings.
6) We use a stored ATRAN spectrum to generate and save a unique waterband-only extinction curve per evening.
7) We choose a high quality (but otherwise unremarkable) star observation file to serve as an "anchor". All other 1-D spectra are wavelength-adjusted, in 1/10th pixel increments, to minimize their RMS difference against this anchor file. The goal of this step is to add some wavelength consistency across all observations on a given night.
8) We correct each individual 1-D spectrum (asteroids and stars) by applying the appropriate extinction curve multiplied by the observation's airmass. We also jiggle the airmass adjustment using the ATRAN-generated waterband-only extinction curve. This step generates an "e_*" file for each non-zero-weighted observation.
9) For each "e_*" file, we divide out the extinction-corrected low airmass star block (from step #3). Results are stored in an "en_*" file.
10) We normalize each "en_*" file and apply weighting to produce a final 1-D spectrum file for each unique object (asteroid, star).
11) We jiggle the results... by running steps 3 through 10 in an automated loop that adjusts the wavelengths (in 0.1 pixel increments) and airmasses (in 0.01 airmass increments) of each individual observation file.
The effects of these many adjustments are evaluated based on the smoothness of each "en_*" file prior to the final merging step (#10) - this allows per-file adjustments that hopefully converge the overall merged result toward an optimal solution.

EXTINCTION PROCESS (new)

1) Inside IDL, run the "xcombspec" tool to combine the individual 1-D spectrum (sred) files for each block of observations of an object (asteroid, A/G star). A new widget window is displayed. Note - this step is similar to step #4 in the old process, where we immediately merge everything together into "group" files.
1a) Select the files to merge and press the "Load Spectra" button.
1b) Each individual spectrum is color-coded and plotted. Amplitudes may vary slightly, but any drastic differences should be noted (perhaps causing us to remove a spectrum input file).
1c) In the "Modify Spectra" panel, click on the "Order" button. This pulls up a new widget called "Xscalespec".
1d) Click the "Scale to" drop-down menu and choose "Median". This normalizes each spectrum line to a median value. You'll still see minor differences, but the lines should now all be practically atop each other. This step should also remove, very cleanly, most GCRs that have survived this far.
1e) Click "Accept".
1f) Correct the "shape" of the result by clicking on the "Correct Shape" menu button. This is tricky and perhaps carries some penalties, but the goal is to remove variances in slope (warning - since slopes are so important to our results, we should be VERY careful here). Here's how the Help file describes this step:
---> Ideally, by scaling all spectra in every order to a common value (e.g. the median), all the spectra should now lie perfectly on top of one another. However, we have found that this is not the case. After scaling, often the spectra in orders farthest from that used to determine the scale factor exhibit a large dispersion about the reference spectrum chosen in step 5 (e.g. the median).
This implies that the spectral slope is varying from one spectrum to another over the various orders. One can see this change in spectral slope even across a single order if the spectra are examined closely. We believe this slope change is the result of guiding errors, atmospheric dispersion, and time-variable, wavelength-dependent changes in seeing. Based on our tests, we do not believe it is due to any procedures carried out by Spextool during the data reduction.
---> The user may choose to reduce the dispersion in the spectra about the reference spectrum (and therefore increase the S/N of the final combined spectrum) by correcting for these slope variations. This can be done by clicking on the "Correct Spectral Shape" button. Xcombspec will then remove the low order variations, relative to the reference spectrum, from each spectrum, order by order. After this has been done, all the spectra in a given order should lie on top of one another, which will then increase the S/N in the resulting combined spectrum.
1g) After correcting the shape, all the spectra should finally lie pretty much atop each other.
1h) In the "Combine Spectra" panel, we can choose which algorithm to use for the final combining. We have many choices, but I've been using the default value of "Robust Weighted Mean". Here is how the Help file describes the various choices:
---> Robust Weighted Mean: A sigma clipping algorithm is used to identify outliers. The value at each pixel is then the weighted average of the good pixels, and the error is given by the error on the mean (rms deviation divided by root N).
---> Robust Mean (RMS): A sigma clipping algorithm is used to identify outliers. The value at each pixel is the mean of the good pixels, and the error is given by the rms deviation of the pixels.
---> Robust Mean (Mean Error): A sigma clipping algorithm is used to identify outliers. The value at each pixel is the mean of the good pixels, and the error is given by the error on the mean (rms deviation of the pixels divided by root N).
---> Weighted Mean: A weighted mean and error (on the mean) are given at each pixel.
---> Mean (RMS): The value at each pixel is given by the mean and the rms deviation of the pixels.
---> Mean (Mean Error): The value at each pixel is the mean of the pixel values, and the error is given by the error on the mean (rms deviation of the pixels divided by root N).
---> Median (MAD): The value at each wavelength is given by the median of the pixels. The error is given by the Median Absolute Deviation (MAD), where MAD = 1.4826 * median( abs(x_i - med) ). In the limit of a Gaussian distribution, MAD = sigma.
---> Median (Median Error): The value at each wavelength is given by the median of the pixels. The error is given by MAD / root(N).
1i) Write the merged output file (I call these sred.fits files).
2) For each asteroid observation block, identify blocks for a G star and an A star. These should be stars observed at approximately the same airmass/location. We want to use the A-stars to correct both the asteroid observations and the G-stars.
3) For each unique A-star block to be used (perhaps for multiple asteroid and G-star blocks), run the "xtellcor" tool inside IDL. This pulls up a new visual widget.
3a) Select the appropriate sred.fits filename in the "Std Spectra" box (in the "Load Spectra" panel).
3b) In a web browser, go to SIMBAD (http://simbak.cfa.harvard.edu/simbad/sim-fid) and enter the name of the A-star. SIMBAD will tell you its "B" and "V" values. Enter these values into the appropriate boxes, also in the "Load Spectra" panel. Note - skipping this step is apparently OK (enter "1" and "1") but will affect the final Y-axis magnitude of the results.
3c) In the "Obj Spectra" window (same panel), select the sred.fits filename for the G-star or asteroid.
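As an aside, the "Robust Weighted Mean" and "Median (MAD)" combine options described in step 1h amount to something like the following (my own Python sketch of the Help-file descriptions, not Spextool's actual code; the median-centered, single-pass clipping is an assumption):

```python
import numpy as np

def robust_weighted_mean(values, errors, sigma_clip=3.0):
    """Sketch of "Robust Weighted Mean": sigma-clip outliers, then take the
    weighted average of the good pixels; the error is the rms deviation of
    the good pixels divided by root N, per the Help-file description."""
    good = np.abs(values - np.median(values)) <= sigma_clip * np.std(values)
    weights = 1.0 / errors[good] ** 2            # inverse-variance weights
    mean = np.sum(weights * values[good]) / np.sum(weights)
    error = np.std(values[good]) / np.sqrt(good.sum())
    return mean, error

def median_mad(values):
    """Sketch of "Median (MAD)": MAD = 1.4826 * median(|x_i - med|),
    which equals sigma in the limit of a Gaussian distribution."""
    med = np.median(values)
    mad = 1.4826 * np.median(np.abs(values - med))
    return med, mad
```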
3d) In the "Construct a Convolution Kernel" window, choose "IP". We apparently have to do this because we have single-order observations. Click the "Construct Kernel" button.
3e) In the "Construct Telluric Spectra" panel, click the "Scale Lines" button. This pulls up an Xscalelines widget window that allows you to adjust various hydrogen band magnitudes. This is apparently needed because the band strengths of our A star might differ from those of Vega (which is used as an internal standard). I haven't quite figured this step out yet... more experimentation is needed. For now, I just hit the "Accept" button and don't do any manual hydrogen band corrections. Note - the Help file also contains some warnings about tweaks to avoid if we're in LowRes15 mode (which we are). [ NOTE FROM PETER: The big slow shape you see is from a black body. I'll probably have to read the code and look at their stored Vega spectrum to figure out why we're seeing it here. They probably wrote this Help file principally with higher resolution spectra in mind. In that case, the smooth slope/shape from the blackbody is relatively unimportant. Consider, for example, if you were looking _only_ at the region 1.6-1.7 um. Many SpeX observations are probably just like that. Although there's then a noticeable slope difference, it isn't as dominating as it is for us, where we're looking across almost a factor of 3 in wavelength and the slow, gradual blackbody shape dominates. Let's ignore this for now. ] [ NOTE FROM PETER: For others, or for later improvement of this one if we see H lines in the final result, I'd probably focus on Brackett Gamma at about 2.17 um and Paschen Delta at about 1.01 um. The others look too close to strong telluric features &/or are expected to be weak. ]
3f) Check for (and fix) "residual" wavelength shifts between the A-star and the object to be corrected by clicking on the "Get Shift" button (in the "Determine Shift" panel).
This pulls up an Xfindshift visual widget window that allows the user to shift the wavelength of the A-star relative to the object. One shift is applied to the whole spectrum, and the results are displayed automatically. You can also let IDL determine the amount to shift automatically - it will do an RMS check that sounds similar to the ones we worked into our old process. I've played around with several shifts and have never been able to improve the end result, but it may be a nice option for removing residual waterband effects (if any). Click "Accept" when done.
3g) Choose a filename and click the "Write Output" button to create a final output file. This will be a corrected 1-D spectrum that looks like a huge curve... this is probably the result of that "big slow shape", as Peter called it in step 3e above... there's a slope issue while applying the A star correction. But it will resolve itself in the next step. My naming convention for these files is "out.fits".
4) Repeat steps 3a-3g for the asteroid block AND the G-star block.
5) Inside IDL, read the "out.fits" files for both the asteroid and the G-star. Divide the G-star into the asteroid directly and save the result as "_over_G.fits". All done!

Notes:
1) Similar to the reduction process, this new process is all interactive and cannot be automated very easily. But the results look good, and you can check for problems visually at each step of the way.
2) I don't see any easy way to apply weighting. In our old process, if we didn't like an observation file but didn't want to get rid of it completely, we could down-weight it. With the Spextool family of tools, it's all or nothing.
3) Airmass and wavelength shifts/corrections are done at various places, but we don't have as much control over either. On the other hand, both are simpler and will work well in most cases.
In processing the spectra for March 15th, however, two of the final merged asteroid spectra (1245 and 462) contained airmass/waterband artifacts, and I'm not sure yet how best to remove them.
4) Unsure how best to scale the Y axis of the final result.
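For reference, the final division in step 5 of the new process reduces to something like this (a Python illustration of the per-wavelength division; the optional normalize_at scaling is my own suggestion for the Y-axis question in note 4, not a Spextool feature):

```python
import numpy as np

def divide_out_gstar(asteroid_flux, gstar_flux, normalize_at=None):
    """Sketch of new-process step 5: divide the G-star "out.fits" spectrum
    into the asteroid's, wavelength bin by wavelength bin, to cancel the
    solar-type continuum and any residual slope common to both."""
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = asteroid_flux / gstar_flux   # relative reflectance spectrum
    if normalize_at is not None:
        ratio = ratio / ratio[normalize_at]  # scale to 1 at a chosen pixel
    return ratio
```

This assumes both spectra are already on the same wavelength grid, which is the case after the xtellcor shift step above.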